Error-associated behaviors and error rates for robotic geology
NASA Technical Reports Server (NTRS)
Anderson, Robert C.; Thomas, Geb; Wagner, Jacob; Glasgow, Justin
2004-01-01
This study explores human error as a function of the decision-making process. One of many models for human decision-making is Rasmussen's decision ladder [9]. The decision ladder identifies the multiple tasks and states of knowledge involved in decision-making. The tasks and states of knowledge can be classified by the level of cognitive effort required to make the decision, leading to the skill, rule, and knowledge taxonomy (Rasmussen, 1987). Skill-based decisions require the least cognitive effort and knowledge-based decisions require the greatest cognitive effort. Errors can occur at any of the cognitive levels.
Errors in Aviation Decision Making: Bad Decisions or Bad Luck?
NASA Technical Reports Server (NTRS)
Orasanu, Judith; Martin, Lynne; Davison, Jeannie; Null, Cynthia H. (Technical Monitor)
1998-01-01
Despite efforts to design systems and procedures to support 'correct' and safe operations in aviation, errors in human judgment still occur and contribute to accidents. In this paper we examine how an NDM (naturalistic decision making) approach might help us to understand the role of decision processes in negative outcomes. Our strategy was to examine a collection of identified decision errors through the lens of an aviation decision process model and to search for common patterns. The second, and more difficult, task was to determine what might account for those patterns. The corpus we analyzed consisted of tactical decision errors identified by the NTSB (National Transportation Safety Board) from a set of accidents in which crew behavior contributed to the accident. A common pattern emerged: about three quarters of the errors represented plan-continuation errors, that is, a decision to continue with the original plan despite cues that suggested changing the course of action. Features in the context that might contribute to these errors were identified: (a) ambiguous dynamic conditions and (b) organizational and socially-induced goal conflicts. We hypothesize that 'errors' are mediated by underestimation of risk and failure to analyze the potential consequences of continuing with the initial plan. Stressors may further contribute to these effects. Suggestions for improving performance in these error-inducing contexts are discussed.
Lobb, M L; Stern, J A
1986-08-01
Sequential patterns of eye and eyelid motion were identified in seven subjects performing a modified serial probe recognition task under drowsy conditions. Using simultaneous EOG and video recordings, eyelid motion was divided into components above, within, and below the pupil, and the durations in sequence were recorded. A serial probe recognition task was modified to allow decision errors to be distinguished from attention errors. Decision errors were found to be more frequent following a downward shift in gaze angle in which the eyelid closing sequence was reduced from a five-element to a three-element sequence. The velocity of the eyelid moving over the pupil during decision errors was slow in the closing and fast in the reopening phase, while on decision-correct trials it was fast in closing and slower in reopening. Due to the high variability of eyelid motion under drowsy conditions, these findings were only marginally significant. When a five-element blink occurred, the velocity of the lid-over-pupil motion component of these endogenous eye blinks was significantly faster on decision-correct than on decision-error trials. Furthermore, the highly variable, long-duration closings associated with the decision response produced slow eye movements in the horizontal plane (SEM), which were more frequent and significantly longer in duration on decision-error versus decision-correct responses.
A Theoretical Foundation for the Study of Inferential Error in Decision-Making Groups.
ERIC Educational Resources Information Center
Gouran, Dennis S.
To provide a theoretical base for investigating the influence of inferential error on group decision making, current literature on both inferential error and decision making is reviewed and applied to the Watergate incident. Although groups tend to make fewer inferential errors because members' inferences are generally not biased in the same…
Diagnostic decision-making and strategies to improve diagnosis.
Thammasitboon, Satid; Cutrer, William B
2013-10-01
A significant portion of diagnostic errors arises through cognitive errors resulting from inadequate knowledge, faulty data gathering, and/or faulty verification. Experts estimate that 75% of diagnostic failures can be attributed to failures in clinicians' diagnostic thinking. The cognitive processes that underlie clinicians' diagnostic thinking are complex and intriguing, and it is imperative that clinicians acquire an explicit appreciation and application of different cognitive approaches to make better decisions. A dual-process model that unifies many theories of decision-making has emerged as a promising template for understanding how clinicians think and judge efficiently in the diagnostic reasoning process. The identification and implementation of strategies for decreasing or preventing such diagnostic errors has become a growing area of interest and research. Suggested strategies to decrease the incidence of diagnostic error include increasing clinicians' clinical expertise and avoiding inherent cognitive errors. Implementing interventions focused solely on avoiding errors may work effectively for patient safety issues such as medication errors. Addressing cognitive errors, however, requires equal effort on expanding the individual clinician's expertise. Providing cognitive support to clinicians for robust diagnostic decision-making serves as the final strategic target for decreasing diagnostic errors. Clinical guidelines and algorithms offer another method for streamlining decision-making and decreasing the likelihood of cognitive diagnostic errors. Addressing cognitive processing errors is undeniably the most challenging task in reducing diagnostic errors. While many suggested approaches exist, they are mostly based on theories and sciences in cognitive psychology, decision-making, and education. The proposed interventions are primarily suggestions, and very few of them have been tested in actual practice settings. A collaborative research effort is required to effectively address cognitive processing errors. Researchers in various areas, including patient safety/quality improvement, decision-making, and problem solving, must work together to make medical diagnosis more reliable. © 2013 Mosby, Inc. All rights reserved.
The Sustained Influence of an Error on Future Decision-Making.
Schiffler, Björn C; Bengtsson, Sara L; Lundqvist, Daniel
2017-01-01
Post-error slowing (PES) is consistently observed in decision-making tasks after negative feedback. Yet, findings are inconclusive as to whether PES supports performance accuracy. We addressed the role of PES by employing drift diffusion modeling which enabled us to investigate latent processes of reaction times and accuracy on a large-scale dataset (>5,800 participants) of a visual search experiment with emotional face stimuli. In our experiment, post-error trials were characterized by both adaptive and non-adaptive decision processes. An adaptive increase in participants' response threshold was sustained over several trials post-error. Contrarily, an initial decrease in evidence accumulation rate, followed by an increase on the subsequent trials, indicates a momentary distraction of task-relevant attention and resulted in an initial accuracy drop. Higher values of decision threshold and evidence accumulation on the post-error trial were associated with higher accuracy on subsequent trials which further gives credence to these parameters' role in post-error adaptation. Finally, the evidence accumulation rate post-error decreased when the error trial presented angry faces, a finding suggesting that the post-error decision can be influenced by the error context. In conclusion, we demonstrate that error-related response adaptations are multi-component processes that change dynamically over several trials post-error.
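A minimal drift-diffusion sketch can make the reported pattern concrete. The parameter values below are illustrative assumptions, not values fitted to the paper's dataset; the point is only that raising the decision threshold post-error buys accuracy at the cost of slower responses, while a momentary drop in the evidence accumulation (drift) rate produces an initial accuracy dip despite the higher threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ddm(drift, threshold, n_trials=1000, dt=0.001, noise=1.0):
    """Simulate a basic drift-diffusion process; returns (accuracy, mean RT in s)."""
    rts, correct = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < threshold:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t)
        correct.append(x >= threshold)   # upper boundary = correct response
    return float(np.mean(correct)), float(np.mean(rts))

# Illustrative parameter values (assumptions, not the paper's estimates):
baseline          = simulate_ddm(drift=1.0, threshold=1.0)
raised_threshold  = simulate_ddm(drift=1.0, threshold=1.3)   # adaptive post-error adjustment
plus_lower_drift  = simulate_ddm(drift=0.6, threshold=1.3)   # plus momentary drop in drift rate
print("baseline              : accuracy=%.3f, mean RT=%.3f s" % baseline)
print("higher threshold      : accuracy=%.3f, mean RT=%.3f s" % raised_threshold)
print("higher thr., low drift: accuracy=%.3f, mean RT=%.3f s" % plus_lower_drift)
```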
Cognitive processes in anesthesiology decision making.
Stiegler, Marjorie Podraza; Tung, Avery
2014-01-01
The quality and safety of health care are under increasing scrutiny. Recent studies suggest that medical errors, practice variability, and guideline noncompliance are common, and that cognitive error contributes significantly to delayed or incorrect diagnoses. These observations have increased interest in understanding decision-making psychology. Many nonrational (i.e., not purely based in statistics) cognitive factors influence medical decisions and may lead to error. The most well-studied include heuristics, preferences for certainty, overconfidence, affective (emotional) influences, memory distortions, bias, and social forces such as fairness or blame. Although the extent to which such cognitive processes play a role in anesthesia practice is unknown, anesthesia care frequently requires rapid, complex decisions that are most susceptible to decision errors. This review will examine current theories of human decision behavior, identify effects of nonrational cognitive processes on decision making, describe characteristic anesthesia decisions in this context, and suggest strategies to improve decision making.
He, Xin; Frey, Eric C
2006-08-01
Previously, we have developed a decision model for three-class receiver operating characteristic (ROC) analysis based on decision theory. The proposed decision model maximizes the expected decision utility under the assumption that incorrect decisions have equal utilities under the same hypothesis (equal error utility assumption). This assumption reduced the dimensionality of the "general" three-class ROC analysis and provided a practical figure-of-merit to evaluate the three-class task performance. However, it also limits the generality of the resulting model because the equal error utility assumption will not apply for all clinical three-class decision tasks. The goal of this study was to investigate the optimality of the proposed three-class decision model with respect to several other decision criteria. In particular, besides the maximum expected utility (MEU) criterion used in the previous study, we investigated the maximum-correctness (MC) (or minimum-error), maximum likelihood (ML), and Neyman-Pearson (N-P) criteria. We found that by making assumptions for both MEU and N-P criteria, all decision criteria lead to the previously proposed three-class decision model. As a result, this model maximizes the expected utility under the equal error utility assumption, maximizes the probability of making correct decisions, satisfies the N-P criterion in the sense that it maximizes the sensitivity of one class given the sensitivities of the other two classes, and the resulting ROC surface contains the maximum likelihood decision operating point. While the proposed three-class ROC analysis model is not optimal in the general sense due to the use of the equal error utility assumption, the range of criteria for which it is optimal increases its applicability for evaluating and comparing a range of diagnostic systems.
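As a minimal sketch of the maximum-expected-utility rule described above, using a hypothetical utility matrix chosen to satisfy the equal error utility assumption (both wrong decisions under a given hypothesis share one utility value); this is an illustration, not the authors' formulation:

```python
import numpy as np

# Hypothetical utility matrix U[d, h]: utility of deciding class d when hypothesis h is true.
# Each column has one "correct" value and a single repeated "error" value, so the
# equal error utility assumption holds.
U = np.array([
    [1.0, 0.2, 0.2],   # decide class 0
    [0.2, 1.0, 0.2],   # decide class 1
    [0.2, 0.2, 1.0],   # decide class 2
])

def meu_decision(posteriors, utility=U):
    """Maximum-expected-utility decision over three classes."""
    expected_utility = utility @ posteriors   # E[U | decide d] = sum_h U[d, h] * P(h | x)
    return int(np.argmax(expected_utility))

print(meu_decision(np.array([0.5, 0.3, 0.2])))   # -> 0
print(meu_decision(np.array([0.2, 0.3, 0.5])))   # -> 2
```

Note that with equal error utilities and equal correct-decision utilities across classes, the expected utility reduces to a fixed offset plus a term proportional to the posterior of the decided class, so the rule coincides with picking the most probable class; unequal utilities would shift the decision boundaries.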
Preisig, James C
2005-07-01
Equations are derived for analyzing the performance of channel-estimate-based equalizers. The performance is characterized in terms of the mean squared soft decision error (σ²_s) of each equalizer. This error is decomposed into two components: the minimum achievable error (σ²_0) and the excess error (σ²_e). The former is the soft decision error that would be realized by the equalizer if the filter coefficient calculation were based upon perfect knowledge of the channel impulse response and the statistics of the interfering noise field. The latter is the additional soft decision error that is realized due to errors in the estimates of these channel parameters. These expressions accurately predict the equalizer errors observed in the processing of experimental data by a channel-estimate-based decision feedback equalizer (DFE) and a passive time-reversal equalizer. Further expressions are presented that allow equalizer performance to be predicted given the scattering function of the acoustic channel. The analysis using these expressions yields insights into the features of surface scattering that most significantly impact equalizer performance in shallow water environments and motivates the implementation of a DFE that is robust with respect to channel estimation errors.
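In this notation, the decomposition described above is additive (the excess error is the "additional" soft decision error):

σ²_s = σ²_0 + σ²_e,

where σ²_0 is the soft decision error achievable with perfect knowledge of the channel impulse response and noise statistics, and σ²_e is the excess error caused by errors in the channel-parameter estimates.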
Error affect inoculation for a complex decision-making task.
Tabernero, Carmen; Wood, Robert E
2009-05-01
Individuals bring knowledge, implicit theories, and goal orientations to group meetings. Group decisions arise out of the exchange of these orientations. This research explores how a trainee's exploratory and deliberate process (an incremental theory and learning goal orientation) impacts the effectiveness of individual and group decision-making processes. The effectiveness of this training program is compared with another program that included error affect inoculation (EAI). Subjects were 40 Spanish policemen in a training course. They were distributed into two training conditions for an individual and group decision-making task. In one condition, individuals received Self-Guided Exploration plus Deliberation Process instructions, which emphasised exploring the options and testing hypotheses. In the other condition, individuals also received instructions based on error affect inoculation, which emphasised positive affective reactions to errors and mistakes when making decisions. Results show that the quality of decisions increases when the groups share their reasoning. The EAI intervention promotes sharing information, flexible initial viewpoints, and improving the quality of group decisions. Implications and future directions are discussed.
The Relationship Between Technical Errors and Decision Making Skills in the Junior Resident
Nathwani, J. N.; Fiers, R.M.; Ray, R.D.; Witt, A.K.; Law, K. E.; DiMarco, S.M.; Pugh, C.M.
2017-01-01
Objective: The purpose of this study is to co-evaluate resident technical errors and decision-making capabilities during placement of a subclavian central venous catheter (CVC). We hypothesize that there will be significant correlations between scenario-based decision-making skills and technical proficiency in central line insertion. We also predict residents will have problems in anticipating common difficulties and generating solutions associated with line placement. Design: Participants were asked to insert a subclavian central line on a simulator. After completion, residents were presented with a real-life patient photograph depicting CVC placement and asked to anticipate difficulties and generate solutions. Error rates were analyzed using chi-square tests against a 5% expected error rate. Correlations were sought by comparing technical errors and scenario-based decision making. Setting: This study was carried out at seven tertiary care centers. Participants: Study participants (N=46) consisted largely of first-year research residents who could be followed longitudinally; second-year research and clinical residents were not excluded. Results: Six checklist errors were committed more often than anticipated. Residents made an average of 1.9 errors, significantly more than the expected maximum of 1 error per person (t(44)=3.82, p<.001). The most common error was performing the procedure steps in the wrong order (28.5%, p<.001). Some residents (24%) had no errors, 30% committed one error, and 46% committed more than one error. The number of technical errors committed correlated negatively with the total number of commonly identified difficulties and with the number of generated solutions (r(33)= −.429, p=.021 and r(33)= −.383, p=.044, respectively). Conclusions: Almost half of the surgical residents committed multiple errors while performing subclavian CVC placement. The correlation between technical errors and decision-making skills suggests a critical need to train residents in both technique and error management. ACGME Competencies: Medical Knowledge, Practice-Based Learning and Improvement, Systems-Based Practice. PMID:27671618
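A hedged sketch of the kinds of tests reported above (the per-resident error counts and the per-item counts below are illustrative placeholders, not the study's data; scipy is assumed available):

```python
from scipy import stats

n_residents = 46

# One-sample t-test: observed mean errors per resident vs. the expected maximum of 1.
# Illustrative error-count distribution (mean roughly 1.8), not the study's raw data.
errors_per_resident = [0]*11 + [1]*14 + [2]*7 + [3]*6 + [4]*4 + [5]*4
t, p = stats.ttest_1samp(errors_per_resident, popmean=1.0)
print(f"t({len(errors_per_resident) - 1}) = {t:.2f}, p = {p:.4f}")

# Chi-square goodness-of-fit for one checklist item against a 5% expected error rate.
observed = [13, 33]                                   # residents with / without the error (illustrative)
expected = [0.05 * n_residents, 0.95 * n_residents]   # 5% expected error rate
chi2, p = stats.chisquare(observed, expected)
print(f"chi2(1) = {chi2:.2f}, p = {p:.4g}")
```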
Decision aids for multiple-decision disease management as affected by weather input errors.
Pfender, W F; Gent, D H; Mahaffee, W F; Coop, L B; Fox, A D
2011-06-01
Many disease management decision support systems (DSSs) rely, exclusively or in part, on weather inputs to calculate an indicator for disease hazard. Error in the weather inputs, typically due to forecasting, interpolation, or estimation from off-site sources, may affect model calculations and management decision recommendations. The extent to which errors in weather inputs affect the quality of the final management outcome depends on a number of aspects of the disease management context, including whether management consists of a single dichotomous decision, or of a multi-decision process extending over the cropping season(s). Decision aids for multi-decision disease management typically are based on simple or complex algorithms of weather data which may be accumulated over several days or weeks. It is difficult to quantify accuracy of multi-decision DSSs due to temporally overlapping disease events, existence of more than one solution to optimizing the outcome, opportunities to take later recourse to modify earlier decisions, and the ongoing, complex decision process in which the DSS is only one component. One approach to assessing importance of weather input errors is to conduct an error analysis in which the DSS outcome from high-quality weather data is compared with that from weather data with various levels of bias and/or variance from the original data. We illustrate this analytical approach for two types of DSS, an infection risk index for hop powdery mildew and a simulation model for grass stem rust. Further exploration of analysis methods is needed to address problems associated with assessing uncertainty in multi-decision DSSs.
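A minimal sketch of the error-analysis approach described, using a toy cumulative risk indicator and synthetic weather as stand-ins (this is not the hop powdery mildew index or the stem rust simulation model): bias and variance are added to the temperature input, and the resulting management decision is compared with the decision from the original data.

```python
import numpy as np

rng = np.random.default_rng(1)

def risk_index(temp_c, wetness_h):
    """Toy infection-risk indicator: accumulates on mild, leaf-wet days (a stand-in model)."""
    favorable = (15 <= temp_c) & (temp_c <= 27) & (wetness_h >= 6)
    return np.cumsum(favorable)

days = 60
temp = 20 + 6 * np.sin(np.linspace(0, 3, days)) + rng.normal(0, 1, days)
wet  = rng.integers(0, 14, days)          # hours of leaf wetness per day

baseline = risk_index(temp, wet)
threshold = 20                            # hypothetical action (e.g., spray) threshold

# Error analysis: perturb the temperature input with bias and/or variance and check
# how often the threshold-crossing decision differs from the baseline decision.
for bias, sd in [(0, 0), (2, 0), (0, 2), (2, 2)]:
    noisy = risk_index(temp + bias + rng.normal(0, sd, days), wet)
    flipped = np.mean((baseline >= threshold) != (noisy >= threshold))
    print(f"bias={bias} C, sd={sd} C: decision changed on {flipped:.0%} of days")
```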
Decision Making In A High-Tech World: Automation Bias and Countermeasures
NASA Technical Reports Server (NTRS)
Mosier, Kathleen L.; Skitka, Linda J.; Burdick, Mark R.; Heers, Susan T.; Rosekind, Mark R. (Technical Monitor)
1996-01-01
Automated decision aids and decision support systems have become essential tools in many high-tech environments. In aviation, for example, flight management system computers not only fly the aircraft, but also calculate fuel-efficient paths, detect and diagnose system malfunctions and abnormalities, and recommend or carry out decisions. Air traffic controllers will soon be utilizing decision support tools to help them predict and detect potential conflicts and to generate clearances. Other fields as disparate as nuclear power plants and medical diagnostics are similarly becoming more and more automated. Ideally, the combination of human decision maker and automated decision aid should result in a high-performing team, maximizing the advantages of additional cognitive and observational power in the decision-making process. In reality, however, the presence of these aids often short-circuits the way that even very experienced decision makers have traditionally handled tasks and made decisions, and introduces opportunities for new decision heuristics and biases. Results of recent research investigating the use of automated aids have indicated the presence of automation bias, that is, errors made when decision makers rely on automated cues as a heuristic replacement for vigilant information seeking and processing. Automation commission errors, i.e., errors made when decision makers inappropriately follow an automated directive, or automation omission errors, i.e., errors made when humans fail to take action or notice a problem because an automated aid fails to inform them, can result from this tendency. Evidence of the tendency to make automation-related omission and commission errors has been found in pilot self-reports, in studies using pilots in flight simulations, and in non-flight decision-making contexts with student samples. Considerable research has found that increasing social accountability can successfully ameliorate a broad array of cognitive biases and resultant errors. To what extent these effects generalize to performance situations is not yet empirically established. The two studies to be presented represent concurrent efforts, with student and professional pilot samples, to determine the effects of accountability pressures on automation bias and on the verification of the accurate functioning of automated aids. Students (Experiment 1) and commercial pilots (Experiment 2) performed simulated flight tasks using automated aids. In both studies, participants who perceived themselves as accountable for their strategies of interaction with the automation were significantly more likely to verify its correctness, and committed significantly fewer automation-related errors than those who did not report this perception.
Automation: Decision Aid or Decision Maker?
NASA Technical Reports Server (NTRS)
Skitka, Linda J.
1998-01-01
This study clarified that automation bias is something unique to automated decision-making contexts, and is not the result of a general tendency toward complacency. By comparing performance on exactly the same events on the same tasks with and without an automated decision aid, we were able to determine that at least the omission-error part of automation bias is due to the unique context created by having an automated decision aid, and is not a phenomenon that would occur even if people were not in an automated context. However, this study also revealed that having an automated decision aid did lead to modestly improved performance across all non-error events. Participants in the non-automated condition responded with 83.68% accuracy, whereas participants in the automated condition responded with 88.67% accuracy, across all events. Automated decision aids clearly led to better overall performance when they were accurate. People performed almost exactly at the same level of reliability as the automation (which across events was 88% reliable). However, it is also clear that the presence of less than 100% accurate automated decision aids creates a context in which new kinds of errors in decision making can occur. Participants in the non-automated condition responded with 97% accuracy on the six "error" events, whereas participants in the automated condition had only a 65% accuracy rate when confronted with those same six events. In short, the presence of an AMA can lead to vigilance decrements that can lead to errors in decision making.
The thinking doctor: clinical decision making in contemporary medicine.
Trimble, Michael; Hamilton, Paul
2016-08-01
Diagnostic errors are responsible for a significant number of adverse events. Logical reasoning and good decision-making skills are key factors in reducing such errors, but little emphasis has traditionally been placed on how these thought processes occur, and how errors could be minimised. In this article, we explore key cognitive ideas that underpin clinical decision making and suggest that by employing some simple strategies, physicians might be better able to understand how they make decisions and how the process might be optimised. © 2016 Royal College of Physicians.
Multiple symbol partially coherent detection of MPSK
NASA Technical Reports Server (NTRS)
Simon, M. K.; Divsalar, D.
1992-01-01
It is shown that by using the known (or estimated) value of carrier tracking loop signal to noise ratio (SNR) in the decision metric, it is possible to improve the error probability performance of a partially coherent multiple phase-shift-keying (MPSK) system relative to that corresponding to the commonly used ideal coherent decision rule. Using a maximum-likelihood approach, an optimum decision metric is derived and shown to take the form of a weighted sum of the ideal coherent decision metric (i.e., correlation) and the noncoherent decision metric which is optimum for differential detection of MPSK. The performance of a receiver based on this optimum decision rule is derived and shown to provide continued improvement with increasing length of observation interval (data symbol sequence length). Unfortunately, increasing the observation length does not eliminate the error floor associated with the finite loop SNR. Nevertheless, in the limit of infinite observation length, the average error probability performance approaches the algebraic sum of the error floor and the performance of ideal coherent detection, i.e., at any error probability above the error floor, there is no degradation due to the partial coherence. It is shown that this limiting behavior is virtually achievable with practical size observation lengths. Furthermore, the performance is quite insensitive to mismatch between the estimate of loop SNR (e.g., obtained from measurement) fed to the decision metric and its true value. These results may be of use in low-cost Earth-orbiting or deep-space missions employing coded modulations.
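Schematically, and consistent with the weighted-sum structure the abstract describes (this is a sketch of that structure, not the exact expression derived in the paper), the metric for a candidate MPSK symbol sequence {ŝ_k} given the received samples {r_k} can be written as

Λ(ŝ) = α(ρ) · Re{ Σ_{k=1..N} r_k ŝ_k* } + β(ρ) · | Σ_{k=1..N} r_k ŝ_k* |,

where the weights α and β depend on the carrier tracking loop SNR ρ: for large ρ the metric is dominated by the ideal coherent correlation term, and for small ρ by the noncoherent term used in multiple-symbol differential detection.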
USDA-ARS?s Scientific Manuscript database
Agronomic and environmental research experiments result in data that are analyzed using statistical methods. These data are unavoidably accompanied by uncertainty. Decisions about hypotheses, based on statistical analyses of these data, are therefore subject to error. This error is of three types,...
Clinical errors that can occur in the treatment decision-making process in psychotherapy.
Park, Jake; Goode, Jonathan; Tompkins, Kelley A; Swift, Joshua K
2016-09-01
Clinical errors occur in the psychotherapy decision-making process whenever a less-than-optimal treatment or approach is chosen when working with clients. A less-than-optimal approach may be one that a client is unwilling to try or fully invest in based on his/her expectations and preferences, or one that may have little chance of success based on contraindications and/or limited research support. The doctor knows best and the independent choice models are two decision-making models that are frequently used within psychology, but both are associated with an increased likelihood of errors in the treatment decision-making process. In particular, these models fail to integrate all three components of the definition of evidence-based practice in psychology (American Psychological Association, 2006). In this article we describe both models and provide examples of clinical errors that can occur in each. We then introduce the shared decision-making model as an alternative that is less prone to clinical errors. PsycINFO Database Record (c) 2016 APA, all rights reserved
Evaluate the ability of clinical decision support systems (CDSSs) to improve clinical practice.
Ajami, Sima; Amini, Fatemeh
2013-01-01
Prevalence of new diseases, advances in medical science, and increasing referrals to health care centers provide fertile ground for the growth of medical errors. Errors can involve medicines, surgery, diagnosis, equipment, or lab reports. Medical errors can occur anywhere in the health care system: in hospitals, clinics, surgery centers, doctors' offices, nursing homes, pharmacies, and patients' homes. According to the Institute of Medicine (IOM), 98,000 people die every year from preventable medical errors. In 2010, of all medical error records referred to the Iran Legal Medicine Organization, 46.5% of physicians and medical team members were found to be at fault. One of the new technologies that can reduce medical errors is clinical decision support systems (CDSSs). This was an unsystematic review study. The literature on the ability of clinical decision support systems to improve clinical practice was searched with the help of libraries, books, conference proceedings, databases, and search engines such as Google and Google Scholar. For our searches, we employed the following keywords and their combinations: medical error, clinical decision support systems, computer-based clinical decision support systems, information technology, information system, health care quality, and computer systems, in the searching areas of title, keywords, abstract, and full text. In this study, more than 100 articles and reports were collected and 38 of them were selected based on their relevancy. CDSSs are computer programs designed to assist health care providers. As knowledge-based tools, these systems can help health care managers analyze, evaluate, improve, and select effective solutions in clinical decisions. Therefore, they have a major role in reducing medical errors. The aim of this study was to express the ability of CDSSs to improve
Decision Aids for Multiple-Decision Disease Management as Affected by Weather Input Errors
USDA-ARS?s Scientific Manuscript database
Many disease management decision support systems (DSS) rely, exclusively or in part, on weather inputs to calculate an indicator for disease hazard. Error in the weather inputs, typically due to forecasting, interpolation or estimation from off-site sources, may affect model calculations and manage...
42 CFR 412.278 - Administrator's review.
Code of Federal Regulations, 2014 CFR
2014-10-01
... or computational errors, or to correct the decision if the evidence that was considered in making the... discretion, may amend the decision to correct mathematical or computational errors, or to correct the...
42 CFR 412.278 - Administrator's review.
Code of Federal Regulations, 2011 CFR
2011-10-01
... or computational errors, or to correct the decision if the evidence that was considered in making the... discretion, may amend the decision to correct mathematical or computational errors, or to correct the...
42 CFR 412.278 - Administrator's review.
Code of Federal Regulations, 2012 CFR
2012-10-01
... or computational errors, or to correct the decision if the evidence that was considered in making the... discretion, may amend the decision to correct mathematical or computational errors, or to correct the...
42 CFR 412.278 - Administrator's review.
Code of Federal Regulations, 2013 CFR
2013-10-01
... or computational errors, or to correct the decision if the evidence that was considered in making the... discretion, may amend the decision to correct mathematical or computational errors, or to correct the...
Outbreak Column 16: Cognitive errors in outbreak decision making.
Curran, Evonne T
2015-01-01
During outbreaks, decisions must be made without all the required information. People, including infection prevention and control teams (IPCTs), who have to make decisions under uncertainty use heuristics to fill the missing data gaps. Heuristics are mental-model shortcuts that by and large enable us to make good decisions quickly. However, these heuristics contain biases and effects that at times lead to cognitive (thinking) errors. These cognitive errors are not made to deliberately misrepresent any given situation; we are subject to heuristic biases even when we are trying to perform optimally. The science of decision making is large; there are over 100 different biases recognised and described. Outbreak Column 16 discusses and relates these heuristics and biases to decision making during outbreak prevention, preparedness and management. Insights as to how we might recognise and avoid them are offered.
Automatic Recognition of Phonemes Using a Syntactic Processor for Error Correction.
1980-12-01
Thesis, AFIT/GE/EE/80D-45, Robert B. Taylor, 2Lt USAF. Approved for public release; distribution unlimited. The table of contents lists hypothesis testing, the Bayes decision rule for minimum error, the Bayes decision rule for minimum risk, and the minimax test.
Complacency and Automation Bias in the Use of Imperfect Automation.
Wickens, Christopher D; Clegg, Benjamin A; Vieane, Alex Z; Sebok, Angelia L
2015-08-01
We examine the effects of two different kinds of decision-aiding automation errors on human-automation interaction (HAI), occurring at the first failure following repeated exposure to correctly functioning automation. The two errors are incorrect advice, triggering the automation bias, and missing advice, reflecting complacency. Contrasts between analogous automation errors in alerting systems, rather than decision aiding, have revealed that alerting false alarms are more problematic to HAI than alerting misses are. Prior research in decision aiding, although contrasting the two aiding errors (incorrect vs. missing), has confounded error expectancy. Participants performed an environmental process control simulation with and without decision aiding. For those with the aid, automation dependence was created through several trials of perfect aiding performance, and an unexpected automation error was then imposed in which automation was either gone (one group) or wrong (a second group). A control group received no automation support. The correct aid supported faster and more accurate diagnosis and lower workload. The aid failure degraded all three variables, but "automation wrong" had a much greater effect on accuracy, reflecting the automation bias, than did "automation gone," reflecting the impact of complacency. Some complacency was manifested for automation gone, by a longer latency and more modest reduction in accuracy. Automation wrong, creating the automation bias, appears to be a more problematic form of automation error than automation gone, reflecting complacency. Decision-aiding automation should indicate its lower degree of confidence in uncertain environments to avoid the automation bias. © 2015, Human Factors and Ergonomics Society.
Chilcott, J; Tappenden, P; Rawdin, A; Johnson, M; Kaltenthaler, E; Paisley, S; Papaioannou, D; Shippam, A
2010-05-01
Health policy decisions must be relevant, evidence-based and transparent. Decision-analytic modelling supports this process but its role is reliant on its credibility. Errors in mathematical decision models or simulation exercises are unavoidable but little attention has been paid to processes in model development. Numerous error avoidance/identification strategies could be adopted but it is difficult to evaluate the merits of strategies for improving the credibility of models without first developing an understanding of error types and causes. The study aims to describe the current comprehension of errors in the HTA modelling community and generate a taxonomy of model errors. Four primary objectives are to: (1) describe the current understanding of errors in HTA modelling; (2) understand current processes applied by the technology assessment community for avoiding errors in development, debugging and critically appraising models for errors; (3) use HTA modellers' perceptions of model errors with the wider non-HTA literature to develop a taxonomy of model errors; and (4) explore potential methods and procedures to reduce the occurrence of errors in models. It also describes the model development process as perceived by practitioners working within the HTA community. A methodological review was undertaken using an iterative search methodology. Exploratory searches informed the scope of interviews; later searches focused on issues arising from the interviews. Searches were undertaken in February 2008 and January 2009. In-depth qualitative interviews were performed with 12 HTA modellers from academic and commercial modelling sectors. All qualitative data were analysed using the Framework approach. Descriptive and explanatory accounts were used to interrogate the data within and across themes and subthemes: organisation, roles and communication; the model development process; definition of error; types of model error; strategies for avoiding errors; strategies for identifying errors; and barriers and facilitators. There was no common language in the discussion of modelling errors and there was inconsistency in the perceived boundaries of what constitutes an error. Asked about the definition of model error, there was a tendency for interviewees to exclude matters of judgement from being errors and focus on 'slips' and 'lapses', but discussion of slips and lapses comprised less than 20% of the discussion on types of errors. Interviewees devoted 70% of the discussion to softer elements of the process of defining the decision question and conceptual modelling, mostly the realms of judgement, skills, experience and training. The original focus concerned model errors, but it may be more useful to refer to modelling risks. Several interviewees discussed concepts of validation and verification, with notable consistency in interpretation: verification meaning the process of ensuring that the computer model correctly implemented the intended model, whereas validation means the process of ensuring that a model is fit for purpose. Methodological literature on verification and validation of models makes reference to the Hermeneutic philosophical position, highlighting that the concept of model validation should not be externalized from the decision-makers and the decision-making process. 
Interviewees demonstrated examples of all major error types identified in the literature: errors in the description of the decision problem, in model structure, in use of evidence, in implementation of the model, in operation of the model, and in presentation and understanding of results. The HTA error classifications were compared against existing classifications of model errors in the literature. A range of techniques and processes are currently used to avoid errors in HTA models: engaging with clinical experts, clients and decision-makers to ensure mutual understanding, producing written documentation of the proposed model, explicit conceptual modelling, stepping through skeleton models with experts, ensuring transparency in reporting, adopting standard housekeeping techniques, and ensuring that those parties involved in the model development process have sufficient and relevant training. Clarity and mutual understanding were identified as key issues. However, their current implementation is not framed within an overall strategy for structuring complex problems. Some of the questioning may have biased interviewees responses but as all interviewees were represented in the analysis no rebalancing of the report was deemed necessary. A potential weakness of the literature review was its focus on spreadsheet and program development rather than specifically on model development. It should also be noted that the identified literature concerning programming errors was very narrow despite broad searches being undertaken. Published definitions of overall model validity comprising conceptual model validation, verification of the computer model, and operational validity of the use of the model in addressing the real-world problem are consistent with the views expressed by the HTA community and are therefore recommended as the basis for further discussions of model credibility. Such discussions should focus on risks, including errors of implementation, errors in matters of judgement and violations. Discussions of modelling risks should reflect the potentially complex network of cognitive breakdowns that lead to errors in models and existing research on the cognitive basis of human error should be included in an examination of modelling errors. There is a need to develop a better understanding of the skills requirements for the development, operation and use of HTA models. Interaction between modeller and client in developing mutual understanding of a model establishes that model's significance and its warranty. This highlights that model credibility is the central concern of decision-makers using models so it is crucial that the concept of model validation should not be externalized from the decision-makers and the decision-making process. Recommendations for future research would be studies of verification and validation; the model development process; and identification of modifications to the modelling process with the aim of preventing the occurrence of errors and improving the identification of errors in models.
Feys, Marjolein; Anseel, Frederik
2015-03-01
People's affective forecasts are often inaccurate because they tend to overestimate how they will feel after an event. As life decisions are often based on affective forecasts, it is crucial to find ways to manage forecasting errors. We examined the impact of a fair treatment on forecasting errors in candidates in a Belgian reality TV talent show. We found that perceptions of fair treatment increased the forecasting error for losers (a negative audition decision) but decreased it for winners (a positive audition decision). For winners, this effect was even more pronounced when candidates were highly invested in their self-view as a future pop idol whereas for losers, the effect was more pronounced when importance was low. The results in this study point to a potential paradox between maximizing happiness and decreasing forecasting errors. A fair treatment increased the forecasting error for losers, but actually made them happier. © 2014 The British Psychological Society.
Does the cost function matter in Bayes decision rule?
Schlüter, Ralf; Nussbaum-Thom, Markus; Ney, Hermann
2012-02-01
In many tasks in pattern recognition, such as automatic speech recognition (ASR), optical character recognition (OCR), part-of-speech (POS) tagging, and other string recognition tasks, we are faced with a well-known inconsistency: The Bayes decision rule is usually used to minimize string (symbol sequence) error, whereas, in practice, we want to minimize symbol (word, character, tag, etc.) error. When comparing different recognition systems, we do indeed use symbol error rate as an evaluation measure. The topic of this work is to analyze the relation between string (i.e., 0-1) and symbol error (i.e., metric, integer valued) cost functions in the Bayes decision rule, for which fundamental analytic results are derived. Simple conditions are derived for which the Bayes decision rule with integer-valued metric cost function and with 0-1 cost gives the same decisions or leads to classes with limited cost. The corresponding conditions can be tested with complexity linear in the number of classes. The results obtained do not make any assumption w.r.t. the structure of the underlying distributions or the classification problem. Nevertheless, the general analytic results are analyzed via simulations of string recognition problems with Levenshtein (edit) distance cost function. The results support earlier findings that considerable improvements are to be expected when initial error rates are high.
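The string-versus-symbol cost distinction can be illustrated with a small minimum-Bayes-risk sketch (the posterior over candidate word strings is hypothetical): under 0-1 cost the Bayes rule picks the maximum a posteriori (MAP) string, whereas under an edit-distance cost it picks the string with minimum expected Levenshtein cost, and the two decisions can differ.

```python
def levenshtein(a, b):
    """Edit distance between two symbol sequences (single-row dynamic programming)."""
    d = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, d[0] = d[0], i
        for j, cb in enumerate(b, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (ca != cb))
    return d[-1]

# Hypothetical posterior over candidate word strings given the observation.
posterior = {
    ("the", "cat", "sat"): 0.40,
    ("the", "cap", "sat"): 0.35,
    ("the", "cap", "mat"): 0.25,
}

# 0-1 cost: the Bayes rule reduces to the MAP string (minimizes string error).
map_string = max(posterior, key=posterior.get)

# Edit-distance cost: minimize expected Levenshtein cost (approximates minimizing word error).
def expected_edit_cost(candidate):
    return sum(p * levenshtein(candidate, w) for w, p in posterior.items())

mbr_string = min(posterior, key=expected_edit_cost)

print("MAP (0-1 cost):     ", map_string)
print("MBR (edit distance):", mbr_string)
```

Here the MAP string is ("the", "cat", "sat"), but the minimum-expected-edit-cost decision is ("the", "cap", "sat"), because the latter lies close to all high-probability candidates; this is exactly the kind of disagreement between the two cost functions that the analysis above characterizes.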
STARS Proceedings (3-4 December 1991)
1991-12-04
Project process objectives and associated metrics: prioritizing ECPs using complexity and error-history measures; make-vs-buy decisions using effort and quality measures. Error-history measures, error-proneness, and past histories of trouble with particular modules are very useful measures. For make-vs-buy decisions, the question is whether the effort offsets the gain in quality relative to buying; effort and quality (or defect rate) histories give helpful indications of how to make this decision.
[Cognitive errors in diagnostic decision making].
Gäbler, Martin
2017-10-01
Approximately 10-15% of our diagnostic decisions are faulty and may lead to unfavorable and dangerous outcomes that could be avoided. These diagnostic errors are mainly caused by cognitive biases in the diagnostic reasoning process. Our medical diagnostic decision-making is based on intuitive "System 1" and analytical "System 2" decision-making and can be skewed by unconscious cognitive biases. These deviations can be positively influenced on a systemic and an individual level. For the individual, metacognition (internal withdrawal from the decision-making process) and debiasing strategies, such as verification, falsification, and ruling out worst-case scenarios, can lead to improved diagnostic decision making.
How infants' reaches reveal principles of sensorimotor decision making
NASA Astrophysics Data System (ADS)
Dineva, Evelina; Schöner, Gregor
2018-01-01
In Piaget's classical A-not-B task, infants repeatedly make a sensorimotor decision to reach to one of two cued targets. Perseverative errors are induced by switching the cue from A to B, while spontaneous errors are unsolicited reaches to B when only A is cued. We argue that theoretical accounts of sensorimotor decision-making fail to address how motor decisions leave a memory trace that may impact future sensorimotor decisions. Instead, in extant neural models, perseveration is caused solely by the history of stimulation. We present a neural dynamic model of sensorimotor decision-making within the framework of Dynamic Field Theory, in which a dynamic instability amplifies fluctuations in neural activation into macroscopic, stable neural activation states that leave memory traces. The model predicts perseveration, but also a tendency to repeat spontaneous errors. To test the account, we pool data from several A-not-B experiments. A conditional-probabilities analysis accounts quantitatively for how motor decisions depend on the history of reaching. The results provide evidence for the interdependence among subsequent reaching decisions that is explained by the model, showing that by amplifying small differences in activation and affecting learning, decisions have consequences beyond the individual behavioural act.
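A two-site sketch in the spirit of the Dynamic Field Theory account (the parameter values, sigmoid, and memory-trace update below are illustrative assumptions, not the paper's model): each reach decision deposits a memory trace that feeds back as input on later trials, so a B-trial after several A-reaches can be pulled back toward A.

```python
import numpy as np

rng = np.random.default_rng(2)

tau, tau_mem = 0.05, 5.0                 # fast decision field, slow memory trace
h_rest = -2.0                            # resting activation level
w_self, w_inhib, w_mem = 3.0, 2.5, 2.5   # self-excitation, mutual inhibition, memory coupling

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-4.0 * u))

def one_trial(stimulus, memory, dt=0.01, steps=400):
    """Evolve activations u = (u_A, u_B) under input, competition, noise and the memory trace."""
    u = np.full(2, h_rest)
    for _ in range(steps):
        f = sigmoid(u)
        du = (-u + h_rest + stimulus + w_self * f - w_inhib * f[::-1]
              + w_mem * memory + rng.normal(0, 0.5, 2) * np.sqrt(dt))
        u += dt / tau * du
    choice = int(np.argmax(u))                                 # 0 = reach to A, 1 = reach to B
    memory += dt * steps / tau_mem * (sigmoid(u) - memory)     # trace of the motor decision
    return choice, memory

memory = np.zeros(2)
cue_A = np.array([2.5, 0.0])
cue_B = np.array([0.0, 2.5])

for _ in range(6):                        # repeated A-trials build a memory trace at A
    _, memory = one_trial(cue_A, memory)
print("memory after A-trials:", np.round(memory, 2))
choice, memory = one_trial(cue_B, memory) # B-trial: the trace at A competes with the B cue
print("choice on B-trial (0 = perseveration to A, 1 = B):", choice)
```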
ERIC Educational Resources Information Center
Beatty, Michael J.
1988-01-01
Examines the choice-making processes of students engaged in the selection of speech introduction strategies. Finds that the frequency of students making decision-making errors was a positive function of public speaking apprehension. (MS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cappelli, M.; Gadomski, A. M.; Sepiellis, M.
In the field of nuclear power plant (NPP) safety modeling, the perceived role of socio-cognitive engineering (SCE) is continuously increasing. Today, the focus is especially on the identification of human and organizational decisional errors caused by operators and managers under high-risk conditions, as is evident from analyses of reports on past nuclear incidents. At present, engineering and social safety requirements need to enlarge their domain of interest to include all possible loss-generating events that could be the consequences of an abnormal state of an NPP. Socio-cognitive modeling of Integrated Nuclear Safety Management (INSM) using the TOGA meta-theory was discussed during the ICCAP 2011 Conference. In this paper, more detailed aspects of cognitive decision-making and its possible human errors and organizational vulnerabilities are presented. The formal TOGA-based network model for cognitive decision-making makes it possible to indicate and analyze the nodes and arcs in which plant operator and manager errors may appear. TOGA's multi-level IPK (Information, Preferences, Knowledge) model of abstract intelligent agents (AIAs) is applied. In the NPP context, a super-safety approach is also discussed, taking into consideration unexpected events and managing them from a systemic perspective. As the nature of human errors depends on the specific properties of the decision-maker and the decisional context of operation, a classification of decision-making using IPK is suggested. Several types of initial decision-making situations useful for diagnosing NPP operator and manager errors are considered. The developed models can be used as a basis for applications to NPP educational or engineering simulators for training NPP executive staff. (authors)
C-fuzzy variable-branch decision tree with storage and classification error rate constraints
NASA Astrophysics Data System (ADS)
Yang, Shiueng-Bien
2009-10-01
The C-fuzzy decision tree (CFDT), which is based on the fuzzy C-means algorithm, has recently been proposed. The CFDT is grown by selecting the nodes to be split according to its classification error rate. However, the CFDT design does not consider the classification time taken to classify the input vector. Thus, the CFDT can be improved. We propose a new C-fuzzy variable-branch decision tree (CFVBDT) with storage and classification error rate constraints. The design of the CFVBDT consists of two phases: growing and pruning. The CFVBDT is grown by selecting the nodes to be split according to the classification error rate and the classification time in the decision tree. Additionally, the pruning method selects the nodes to prune based on the storage requirement and the classification time of the CFVBDT. Furthermore, the number of branches of each internal node is variable in the CFVBDT. Experimental results indicate that the proposed CFVBDT outperforms the CFDT and other methods.
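A hedged sketch of the growing criterion described above (the names and the simple classification-time budget are assumptions, not the paper's algorithm): among the current leaves, split the one contributing the most classification error, provided the extra tree level stays within the classification-time budget.

```python
from dataclasses import dataclass

@dataclass
class Leaf:
    n_samples: int     # training samples reaching this leaf
    error_rate: float  # fraction of those samples misclassified at the leaf
    depth: int         # depth of the leaf in the tree

def pick_leaf_to_split(leaves, time_per_level=1.0, time_budget=5.0):
    """Return the leaf to split next, or None if no split fits the time budget."""
    candidates = [l for l in leaves if (l.depth + 1) * time_per_level <= time_budget]
    if not candidates:
        return None
    # weighted error contribution = samples reaching the leaf x its error rate
    return max(candidates, key=lambda l: l.n_samples * l.error_rate)

leaves = [Leaf(120, 0.30, 1), Leaf(300, 0.10, 1), Leaf(50, 0.45, 2)]
print(pick_leaf_to_split(leaves))
```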
Cognitive level and health decision-making in children: A preliminary study.
Okwumabua, J O; Okwumabua, T M; Hayes, A; Stovall, K
1994-06-01
The study examines children's stage of cognitive development in relation to their patterns of health decision-making, including their cognitive capabilities in integrating the sequential stages of the decision-making process. A sample of 81 students, male (n=33) and female (n=48), was drawn from two urban public schools in West Tennessee. All participants in the study were of African-American descent. The Centers for Disease Control Decision-Making Instrument was used to assess students' decision-making as well as their understanding of the decision-making process. The children's cognitive level was determined by their performance on three Piagetian conservation tasks. Findings revealed that both the preoperational and concrete operational children performed significantly below the formal operational children in terms of total correct responses to the decision-making scenarios. Error-type analyses indicated that the preoperational children made more errors involving "skipped step" than did either the concrete or formal operational children. There were no significant relationships between children's level of cognitive development and any other error type. Implications for health promotion and disease prevention programs among prevention practitioners who work regularly with children are discussed.
Crew decision making under stress
NASA Technical Reports Server (NTRS)
Orasanu, J.
1992-01-01
Flight crews must make decisions and take action when systems fail or emergencies arise during flight. These situations may involve high stress. Full-mission flight simulation studies have shown that crews differ in how effectively they cope in these circumstances, judged by operational errors and crew coordination. The present study analyzed the problem solving and decision making strategies used by crews led by captains fitting three different personality profiles. Our goal was to identify more and less effective strategies that could serve as the basis for crew selection or training. Methods: Twelve 3-member B-727 crews flew a simulated 5-leg mission over 1 1/2 days. Two legs included 4 abnormal events that required decisions during high workload periods. Transcripts of videotapes were analyzed to describe decision making strategies. Crew performance (errors and coordination) was judged on-line and from videotapes by check airmen. Results: Based on a median split of crew performance errors, analyses to date indicate a difference in general strategy between crews who make more or fewer errors. Higher-performing crews showed greater situational awareness - they responded quickly to cues and interpreted them appropriately. They requested more decision-relevant information and took into account more constraints. Lower-performing crews showed poorer situational awareness, planning, constraint sensitivity, and coordination. The major difference between higher- and lower-performing crews was that poorer crews made quick decisions and then collected information to confirm their decision. Conclusion: Differences in overall crew performance were associated with differences in situational awareness, information management, and decision strategy. Captain personality profiles were associated with these differences, a finding with implications for crew selection and training.
Fargen, Kyle M; Friedman, William A
2014-01-01
During the last 2 decades, there has been a shift in the U.S. health care system towards improving the quality of health care provided by enhancing patient safety and reducing medical errors. Unfortunately, surgical complications, patient harm events, and malpractice claims remain common in the field of neurosurgery. Many of these events are potentially avoidable. There are an increasing number of publications in the medical literature in which authors address cognitive errors in diagnosis and treatment and strategies for reducing such errors, but these are for the most part absent in the neurosurgical literature. The purpose of this article is to highlight the complexities of medical decision making to a neurosurgical audience, with the hope of providing insight into the biases that lead us towards error and strategies to overcome our innate cognitive deficiencies. To accomplish this goal, we review the current literature on medical errors and just culture, explain the dual process theory of cognition, identify common cognitive errors affecting neurosurgeons in practice, review cognitive debiasing strategies, and finally provide simple methods that can be easily assimilated into neurosurgical practice to improve clinical decision making. Copyright © 2014 Elsevier Inc. All rights reserved.
A priori discretization error metrics for distributed hydrologic modeling applications
NASA Astrophysics Data System (ADS)
Liu, Hongli; Tolson, Bryan A.; Craig, James R.; Shafii, Mahyar
2016-12-01
Watershed spatial discretization is an important step in developing a distributed hydrologic model. A key difficulty in the spatial discretization process is maintaining a balance between the aggregation-induced information loss and the increase in computational burden caused by the inclusion of additional computational units. Objective identification of an appropriate discretization scheme still remains a challenge, in part because of the lack of quantitative measures for assessing discretization quality, particularly prior to simulation. This study proposes a priori discretization error metrics to quantify the information loss of any candidate discretization scheme without having to run and calibrate a hydrologic model. These error metrics are applicable to multi-variable and multi-site discretization evaluation and provide directly interpretable information to the hydrologic modeler about discretization quality. The first metric, a subbasin error metric, quantifies the routing information loss from discretization, and the second, a hydrological response unit (HRU) error metric, improves upon existing a priori metrics by quantifying the information loss due to changes in land cover or soil type property aggregation. The metrics are straightforward to understand and easy to recode. Informed by the error metrics, a two-step discretization decision-making approach is proposed with the advantage of reducing extreme errors and meeting the user-specified discretization error targets. The metrics and decision-making approach are applied to the discretization of the Grand River watershed in Ontario, Canada. Results show that information loss increases as discretization gets coarser. Moreover, results help to explain the modeling difficulties associated with smaller upstream subbasins since the worst discretization errors and highest error variability appear in smaller upstream areas instead of larger downstream drainage areas. Hydrologic modeling experiments under candidate discretization schemes validate the strong correlation between the proposed discretization error metrics and hydrologic simulation responses. Discretization decision-making results show that the common and convenient approach of making uniform discretization decisions across the watershed performs worse than the proposed non-uniform discretization approach in terms of preserving spatial heterogeneity under the same computational cost.
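A rough illustration of one way an HRU aggregation-error idea could be computed on gridded land-cover data; this simplified area-fraction definition is an assumption for illustration only, not the paper's formulation of the metric.

```python
import numpy as np

def hru_aggregation_error(cell_area, cell_class, cell_hru):
    """Fraction of total area whose class changes when each HRU keeps only its dominant class."""
    cell_area = np.asarray(cell_area, dtype=float)
    cell_class = np.asarray(cell_class)
    cell_hru = np.asarray(cell_hru)
    lost = 0.0
    for hru in np.unique(cell_hru):
        mask = cell_hru == hru
        area_by_class = {c: cell_area[mask & (cell_class == c)].sum()
                         for c in np.unique(cell_class[mask])}
        lost += cell_area[mask].sum() - max(area_by_class.values())   # non-dominant area re-classed
    return lost / cell_area.sum()

# 6 grid cells grouped into two candidate HRUs (illustrative data)
area = [1, 1, 1, 1, 1, 1]
lc   = ["forest", "forest", "crop", "crop", "crop", "urban"]
hru  = [0, 0, 0, 1, 1, 1]
print(hru_aggregation_error(area, lc, hru))   # 2/6: one crop cell and one urban cell are re-classed
```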
Chew, Keng Sheng; Kueh, Yee Cheng; Abdul Aziz, Adlihafizi
2017-03-21
Despite their importance to diagnostic accuracy, there is a paucity of literature on questionnaire tools to assess clinicians' awareness of cognitive errors. A validation study was conducted to develop a questionnaire tool to evaluate the Clinician's Awareness Towards Cognitive Errors (CATChES) in clinical decision making. This questionnaire is divided into two parts. Part A evaluates clinicians' awareness of cognitive errors in clinical decision making, while Part B evaluates their perception of specific cognitive errors. Content validation for both parts was determined first, followed by construct validation for Part A. Construct validation for Part B was not determined because the responses were set in a dichotomous format. For content validation, all items in both Part A and Part B were rated as "excellent" in terms of their relevance in clinical settings. For construct validation using exploratory factor analysis (EFA) for Part A, a two-factor model with a total variance extraction of 60% was determined. Two items were deleted. The EFA was then repeated, showing that all factor loadings were above the cut-off value of 0.5. The Cronbach's alpha values for both factors were above 0.6. The CATChES questionnaire is a valid tool for evaluating clinicians' awareness of cognitive errors in clinical decision making.
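The abstract reports Cronbach's alpha values above 0.6 for both factors. As a brief aside, the sketch below shows one common way such an internal-consistency coefficient might be computed; the response matrix and item grouping are hypothetical and are not data from the CATChES study.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the factor
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical Likert-type responses (rows = clinicians, columns = items)
responses = np.array([
    [4, 5, 4, 3],
    [3, 4, 4, 4],
    [5, 5, 4, 4],
    [2, 3, 3, 2],
    [4, 4, 5, 4],
])
print(round(cronbach_alpha(responses), 2))
```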
Error-Related Negativities During Spelling Judgments Expose Orthographic Knowledge
Harris, Lindsay N.; Perfetti, Charles A.; Rickles, Benjamin
2014-01-01
In two experiments, we demonstrate that error-related negativities (ERNs) recorded during spelling decisions can expose individual differences in lexical knowledge. The first experiment found that the ERN was elicited during spelling decisions and that its magnitude was correlated with independent measures of subjects’ spelling knowledge. In the second experiment, we manipulated the phonology of misspelled stimuli and observed that ERN magnitudes were larger when misspelled words altered the phonology of their correctly spelled counterparts than when they preserved it. Thus, when an error is made in a decision about spelling, the brain processes indexed by the ERN reflect both phonological and orthographic input to the decision process. In both experiments, ERN effect sizes were correlated with assessments of lexical knowledge and reading, including offline spelling ability and spelling-mediated vocabulary knowledge. These results affirm the interdependent nature of orthographic, semantic, and phonological knowledge components while showing that spelling knowledge uniquely influences the ERN during spelling decisions. Finally, the study demonstrates the value of ERNs in exposing individual differences in lexical knowledge. PMID:24389506
29 CFR 18.103 - Rulings on evidence.
Code of Federal Regulations, 2010 CFR
2010-07-01
... is more probably true than not true that the error did not materially contribute to the decision or... if explicitly not relied upon by the judge in support of the decision or order. (b) Record of offer... making of an offer in question and answer form. (c) Plain error. Nothing in this rule precludes taking...
Administration and Organizational Influences on AFDC Case Decision Errors: An Empirical Analysis.
ERIC Educational Resources Information Center
Piliavin, Irving; And Others
The quality of effort among public assistance personnel has been criticized virtually since the inception of welfare programs for the poor. However, until recently, empirical information on the performance of these workers has been nonexistent. The present study, concerned with Aid to Families with Dependent Children (AFDC) case decision errors,…
Neural basis of decision making guided by emotional outcomes
Matsuda, Yoshi-Taka; Fujimura, Tomomi; Ueno, Kenichi; Asamizuya, Takeshi; Suzuki, Chisato; Cheng, Kang; Okanoya, Kazuo; Okada, Masato
2015-01-01
Emotional events resulting from a choice influence an individual's subsequent decision making. Although the relationship between emotion and decision making has been widely discussed, previous studies have mainly investigated decision outcomes that can easily be mapped to reward and punishment, including monetary gain/loss, gustatory stimuli, and pain. These studies regard emotion as a modulator of decision making that can be made rationally in the absence of emotions. In our daily lives, however, we often encounter various emotional events that affect decisions by themselves, and mapping the events to a reward or punishment is often not straightforward. In this study, we investigated the neural substrates of how such emotional decision outcomes affect subsequent decision making. By using functional magnetic resonance imaging (fMRI), we measured brain activities of humans during a stochastic decision-making task in which various emotional pictures were presented as decision outcomes. We found that pleasant pictures differentially activated the midbrain, fusiform gyrus, and parahippocampal gyrus, whereas unpleasant pictures differentially activated the ventral striatum, compared with neutral pictures. We assumed that the emotional decision outcomes affect the subsequent decision by updating the value of the options, a process modeled by reinforcement learning models, and that the brain regions representing the prediction error that drives the reinforcement learning are involved in guiding subsequent decisions. We found that some regions of the striatum and the insula were separately correlated with the prediction error for either pleasant pictures or unpleasant pictures, whereas the precuneus was correlated with prediction errors for both pleasant and unpleasant pictures. PMID:25695644
Neural basis of decision making guided by emotional outcomes.
Katahira, Kentaro; Matsuda, Yoshi-Taka; Fujimura, Tomomi; Ueno, Kenichi; Asamizuya, Takeshi; Suzuki, Chisato; Cheng, Kang; Okanoya, Kazuo; Okada, Masato
2015-05-01
Emotional events resulting from a choice influence an individual's subsequent decision making. Although the relationship between emotion and decision making has been widely discussed, previous studies have mainly investigated decision outcomes that can easily be mapped to reward and punishment, including monetary gain/loss, gustatory stimuli, and pain. These studies regard emotion as a modulator of decision making that can be made rationally in the absence of emotions. In our daily lives, however, we often encounter various emotional events that affect decisions by themselves, and mapping the events to a reward or punishment is often not straightforward. In this study, we investigated the neural substrates of how such emotional decision outcomes affect subsequent decision making. By using functional magnetic resonance imaging (fMRI), we measured brain activities of humans during a stochastic decision-making task in which various emotional pictures were presented as decision outcomes. We found that pleasant pictures differentially activated the midbrain, fusiform gyrus, and parahippocampal gyrus, whereas unpleasant pictures differentially activated the ventral striatum, compared with neutral pictures. We assumed that the emotional decision outcomes affect the subsequent decision by updating the value of the options, a process modeled by reinforcement learning models, and that the brain regions representing the prediction error that drives the reinforcement learning are involved in guiding subsequent decisions. We found that some regions of the striatum and the insula were separately correlated with the prediction error for either pleasant pictures or unpleasant pictures, whereas the precuneus was correlated with prediction errors for both pleasant and unpleasant pictures. Copyright © 2015 the American Physiological Society.
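The study models the effect of emotional outcomes as a reinforcement-learning value update driven by a prediction error. The sketch below is a schematic Rescorla-Wagner style update illustrating that idea; the learning rate and the mapping of pleasant or unpleasant pictures to scalar outcomes are assumptions for illustration, not parameters estimated in the paper.

```python
# Schematic Rescorla-Wagner style value update driven by a prediction error.
# The learning rate and the mapping of pictures to scalar outcomes are
# illustrative assumptions, not parameters reported in the study.
def update_value(value: float, outcome: float, alpha: float = 0.2):
    prediction_error = outcome - value      # the signal correlated with striatal/insular activity
    return value + alpha * prediction_error, prediction_error

value = 0.0
for outcome in [1.0, 1.0, -1.0, 1.0]:       # e.g., pleasant = +1, unpleasant = -1
    value, pe = update_value(value, outcome)
    print(f"value={value:.2f}  prediction_error={pe:.2f}")
```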
Automation Bias: Decision Making and Performance in High-Tech Cockpits
NASA Technical Reports Server (NTRS)
Mosier, Kathleen L.; Skitka, Linda J.; Heers, Susan; Burdick, Mark; Rosekind, Mark R. (Technical Monitor)
1997-01-01
Automated aids and decision support tools are rapidly becoming indispensable tools in high-technology cockpits, and are assuming increasing control of "cognitive" flight tasks, such as calculating fuel-efficient routes, navigating, or detecting and diagnosing system malfunctions and abnormalities. This study was designed to investigate "automation bias," a recently documented factor in the use of automated aids and decision support systems. The term refers to omission and commission errors resulting from the use of automated cues as a heuristic replacement for vigilant information seeking and processing. Glass-cockpit pilots flew flight scenarios involving automation "events," or opportunities for automation-related omission and commission errors. Pilots who perceived themselves as "accountable" for their performance and strategies of interaction with the automation were more likely to double-check automated functioning against other cues, and less likely to commit errors. Pilots were also likely to erroneously "remember" the presence of expected cues when describing their decision-making processes.
Pilot interaction with automated airborne decision making systems
NASA Technical Reports Server (NTRS)
Hammer, John M.; Wan, C. Yoon; Vasandani, Vijay
1987-01-01
The current research is focused on detection of human error and protection from its consequences. A program for monitoring pilot error by comparing pilot actions to a script was described. It dealt primarily with routine errors (slips) that occurred during checklist activity. The model to which operator actions were compared was a script. Current research is an extension along these two dimensions. The ORS fault detection aid uses a sophisticated device model rather than a script. The newer initiative, the model-based and constraint-based warning system, uses an even more sophisticated device model and is intended to prevent all types of error, not just slips or bad decisions.
Trial-to-trial adjustments of speed-accuracy trade-offs in premotor and primary motor cortex
Guberman, Guido; Cisek, Paul
2016-01-01
Recent studies have shown that activity in sensorimotor structures varies depending on the speed-accuracy trade-off (SAT) context in which a decision is made. Here we tested the hypothesis that the same areas also reflect a more local adjustment of SAT established between individual trials, based on the outcome of the previous decision. Two monkeys performed a reaching decision task in which sensory evidence continuously evolves during the time course of a trial. In two SAT contexts, we compared neural activity in trials following a correct choice vs. those following an error. In dorsal premotor cortex (PMd), we found that 23% of cells exhibited significantly weaker baseline activity after error trials, and for ∼30% of these this effect persisted into the deliberation epoch. These cells also contributed to the process of combining sensory evidence with the growing urgency to commit to a choice. We also found that the activity of 22% of PMd cells was increased after error trials. These neurons appeared to carry less information about sensory evidence and time-dependent urgency. For most of these modulated cells, the effect was independent of whether the previous error was expected or unexpected. We found similar phenomena in primary motor cortex (M1), with 25% of cells decreasing and 34% increasing activity after error trials, but unlike PMd, these neurons showed less clear differences in their response properties. These findings suggest that PMd and M1 belong to a network of brain areas involved in SAT adjustments established using the recent history of reinforcement. NEW & NOTEWORTHY Setting the speed-accuracy trade-off (SAT) is crucial for efficient decision making. Previous studies have reported that subjects adjust their SAT after individual decisions, usually choosing more conservatively after errors, but the neural correlates of this phenomenon are only partially known. Here, we show that neurons in PMd and M1 of monkeys performing a reach decision task support this mechanism by adequately modulating their firing rate as a function of the outcome of the previous decision. PMID:27852735
Park, Hame; Lueckmann, Jan-Matthis; von Kriegstein, Katharina; Bitzer, Sebastian; Kiebel, Stefan J.
2016-01-01
Decisions in everyday life are prone to error. Standard models typically assume that errors during perceptual decisions are due to noise. However, it is unclear how noise in the sensory input affects the decision. Here we show that there are experimental tasks for which one can analyse the exact spatio-temporal details of a dynamic sensory noise and better understand variability in human perceptual decisions. Using a new experimental visual tracking task and a novel Bayesian decision making model, we found that the spatio-temporal noise fluctuations in the input of single trials explain a significant part of the observed responses. Our results show that modelling the precise internal representations of human participants helps predict when perceptual decisions go wrong. Furthermore, by modelling precisely the stimuli at the single-trial level, we were able to identify the underlying mechanism of perceptual decision making in more detail than standard models. PMID:26752272
Aziz, Muhammad Tahir; Ur-Rehman, Tofeeq; Qureshi, Sadia; Bukhari, Nadeem Irfan
Medication errors in chemotherapy are frequent and lead to patient morbidity and mortality, increased rates of re-admission and length of stay, and considerable extra costs. Objective: This study investigated the proposition that computerised chemotherapy ordering reduces the incidence and severity of chemotherapy protocol errors. A computerised physician order entry system for chemotherapy orders (C-CO) with a clinical decision support system was developed in-house, including standardised chemotherapy protocol definitions, automation of pharmacy distribution, clinical checks, labeling and invoicing. A prospective study was then conducted comparing the C-CO with the paper-based chemotherapy order (P-CO) process in a 30-bed chemotherapy bay of a tertiary hospital. Both C-CO and P-CO orders, including pharmacoeconomic analysis and the severity of medication errors, were checked and validated by a clinical pharmacist. A group analysis and field trial were also conducted to assess clarity, feasibility and decision making. The C-CO was very usable in terms of its clarity and feasibility. The incidence of medication errors was significantly lower in the C-CO than in the P-CO (10/3765 [0.26%] versus 134/5514 [2.4%]). There was also a reduction in the dispensing time of chemotherapy protocols in the C-CO. Chemotherapy computerisation with a clinical decision support system resulted in a significant decrease in the occurrence and severity of medication errors, improvements in chemotherapy dispensing and administration times, and a reduction in chemotherapy cost.
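The abstract reports raw error counts of 10/3765 (C-CO) versus 134/5514 (P-CO). A simple way to check that such a difference is statistically significant is a chi-square test on the 2x2 contingency table, as sketched below; the choice of test is an assumption, since the abstract does not state which analysis was used.

```python
from scipy.stats import chi2_contingency

# Error counts reported in the abstract: 10/3765 (C-CO) vs 134/5514 (P-CO)
table = [[10, 3765 - 10],
         [134, 5514 - 134]]
chi2, p, dof, _ = chi2_contingency(table)
print(f"C-CO error rate: {10/3765:.2%}, P-CO error rate: {134/5514:.2%}, p = {p:.2g}")
```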
Context Effects in Multi-Alternative Decision Making: Empirical Data and a Bayesian Model
ERIC Educational Resources Information Center
Hawkins, Guy; Brown, Scott D.; Steyvers, Mark; Wagenmakers, Eric-Jan
2012-01-01
For decisions between many alternatives, the benchmark result is Hick's Law: that response time increases log-linearly with the number of choice alternatives. Even when Hick's Law is observed for response times, divergent results have been observed for error rates--sometimes error rates increase with the number of choice alternatives, and…
ERIC Educational Resources Information Center
Byars, Alvin Gregg
The objectives of this investigation are to develop, describe, assess, and demonstrate procedures for constructing mastery tests to minimize errors of classification and to maximize decision reliability. The guidelines are based on conditions where item exchangeability is a reasonable assumption and the test constructor can control the number of…
Regan, Tracey J; Taylor, Barbara L; Thompson, Grant G; Cochrane, Jean Fitts; Ralls, Katherine; Runge, Michael C; Merrick, Richard
2013-08-01
Lack of guidance for interpreting the definitions of endangered and threatened in the U.S. Endangered Species Act (ESA) has resulted in case-by-case decision making leaving the process vulnerable to being considered arbitrary or capricious. Adopting quantitative decision rules would remedy this but requires the agency to specify the relative urgency concerning extinction events over time, cutoff risk values corresponding to different levels of protection, and the importance given to different types of listing errors. We tested the performance of 3 sets of decision rules that use alternative functions for weighting the relative urgency of future extinction events: a threshold rule set, which uses a decision rule of x% probability of extinction over y years; a concave rule set, where the relative importance of future extinction events declines exponentially over time; and a shoulder rule set that uses a sigmoid shape function, where relative importance declines slowly at first and then more rapidly. We obtained decision cutoffs by interviewing several biologists and then emulated the listing process with simulations that covered a range of extinction risks typical of ESA listing decisions. We evaluated performance of the decision rules under different data quantities and qualities on the basis of the relative importance of misclassification errors. Although there was little difference between the performance of alternative decision rules for correct listings, the distribution of misclassifications differed depending on the function used. Misclassifications for the threshold and concave listing criteria resulted in more overprotection errors, particularly as uncertainty increased, whereas errors for the shoulder listing criteria were more symmetrical. We developed and tested the framework for quantitative decision rules for listing species under the U.S. ESA. If policy values can be agreed on, use of this framework would improve the implementation of the ESA by increasing transparency and consistency. Conservation Biology © 2013 Society for Conservation Biology No claim to original US government works.
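The three rule sets differ only in how they weight the relative urgency of extinction events occurring at different times in the future. The sketch below illustrates plausible functional forms for the threshold, concave, and shoulder weighting schemes; the specific functions, horizons, and rate parameters are illustrative assumptions rather than the values elicited in the study.

```python
import numpy as np

def threshold_weight(t, horizon=100):
    # All extinction events within the horizon count equally; later ones not at all.
    return np.where(t <= horizon, 1.0, 0.0)

def concave_weight(t, rate=0.03):
    # Importance of future extinction events declines exponentially with time.
    return np.exp(-rate * t)

def shoulder_weight(t, midpoint=100, steepness=0.05):
    # Sigmoid: importance declines slowly at first, then more rapidly.
    return 1.0 / (1.0 + np.exp(steepness * (t - midpoint)))

years = np.arange(0, 250, 25)
for name, w in [("threshold", threshold_weight(years)),
                ("concave", concave_weight(years)),
                ("shoulder", shoulder_weight(years))]:
    print(name, np.round(w, 2))
```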
Study on Network Error Analysis and Locating based on Integrated Information Decision System
NASA Astrophysics Data System (ADS)
Yang, F.; Dong, Z. H.
2017-10-01
An integrated information decision system (IIDS) integrates multiple sub-systems developed by many facilities, comprising nearly a hundred kinds of software that provide various services such as email, short messages, drawing, and sharing. Because the underlying protocols differ and user standards are not unified, many errors occur during setup, configuration, and operation, which seriously affect usage. Because these errors are varied and may occur in different operation phases, stages, TCP/IP communication protocol layers, and sub-system software, it is necessary to design a network error analysis and locating tool for IIDS to solve these problems. This paper studies network error analysis and locating based on IIDS, providing strong theoretical and technical support for the running and communication of IIDS.
Fleming, Kevin K; Bandy, Carole L; Kimble, Matthew O
2010-01-01
The decision to shoot a gun engages executive control processes that can be biased by cultural stereotypes and perceived threat. The neural locus of the decision to shoot is likely to be found in the anterior cingulate cortex (ACC), where cognition and affect converge. Male military cadets at Norwich University (N=37) performed a weapon identification task in which they made rapid decisions to shoot when images of guns appeared briefly on a computer screen. Reaction times, error rates, and electroencephalogram (EEG) activity were recorded. Cadets reacted more quickly and accurately when guns were primed by images of Middle-Eastern males wearing traditional clothing. However, cadets also made more false positive errors when tools were primed by these images. Error-related negativity (ERN) was measured for each response. Deeper ERNs were found in the medial-frontal cortex following false positive responses. Cadets who made fewer errors also produced deeper ERNs, indicating stronger executive control. Pupil size was used to measure autonomic arousal related to perceived threat. Images of Middle-Eastern males in traditional clothing produced larger pupil sizes. An image of Osama bin Laden induced the largest pupil size, as would be predicted for the exemplar of Middle East terrorism. Cadets who showed greater increases in pupil size also made more false positive errors. Regression analyses were performed to evaluate predictions based on current models of perceived threat, stereotype activation, and cognitive control. Measures of pupil size (perceived threat) and ERN (cognitive control) explained significant proportions of the variance in false positive errors to Middle-Eastern males in traditional clothing, while measures of reaction time, signal detection response bias, and stimulus discriminability explained most of the remaining variance.
Fleming, Kevin K.; Bandy, Carole L.; Kimble, Matthew O.
2014-01-01
The decision to shoot engages executive control processes that can be biased by cultural stereotypes and perceived threat. The neural locus of the decision to shoot is likely to be found in the anterior cingulate cortex (ACC) where cognition and affect converge. Male military cadets at Norwich University (N=37) performed a weapon identification task in which they made rapid decisions to shoot when images of guns appeared briefly on a computer screen. Reaction times, error rates, and EEG activity were recorded. Cadets reacted more quickly and accurately when guns were primed by images of Middle-Eastern males wearing traditional clothing. However, cadets also made more false positive errors when tools were primed by these images. Error-related negativity (ERN) was measured for each response. Deeper ERNs were found in the medial-frontal cortex following false positive responses. Cadets who made fewer errors also produced deeper ERNs, indicating stronger executive control. Pupil size was used to measure autonomic arousal related to perceived threat. Images of Middle-Eastern males in traditional clothing produced larger pupil sizes. An image of Osama bin Laden induced the largest pupil size, as would be predicted for the exemplar of Middle East terrorism. Cadets who showed greater increases in pupil size also made more false positive errors. Regression analyses were performed to evaluate predictions based on current models of perceived threat, stereotype activation, and cognitive control. Measures of pupil size (perceived threat) and ERN (cognitive control) explained significant proportions of the variance in false positive errors to Middle-Eastern males in traditional clothing, while measures of reaction time, signal detection response bias, and stimulus discriminability explained most of the remaining variance. PMID:19813139
Error-related negativities during spelling judgments expose orthographic knowledge.
Harris, Lindsay N; Perfetti, Charles A; Rickles, Benjamin
2014-02-01
In two experiments, we demonstrate that error-related negativities (ERNs) recorded during spelling decisions can expose individual differences in lexical knowledge. The first experiment found that the ERN was elicited during spelling decisions and that its magnitude was correlated with independent measures of subjects' spelling knowledge. In the second experiment, we manipulated the phonology of misspelled stimuli and observed that ERN magnitudes were larger when misspelled words altered the phonology of their correctly spelled counterparts than when they preserved it. Thus, when an error is made in a decision about spelling, the brain processes indexed by the ERN reflect both phonological and orthographic input to the decision process. In both experiments, ERN effect sizes were correlated with assessments of lexical knowledge and reading, including offline spelling ability and spelling-mediated vocabulary knowledge. These results affirm the interdependent nature of orthographic, semantic, and phonological knowledge components while showing that spelling knowledge uniquely influences the ERN during spelling decisions. Finally, the study demonstrates the value of ERNs in exposing individual differences in lexical knowledge. Copyright © 2013 Elsevier Ltd. All rights reserved.
Huang, Yu-Ting; Georgiev, Dejan; Foltynie, Tom; Limousin, Patricia; Speekenbrink, Maarten; Jahanshahi, Marjan
2015-08-01
When choosing between two options, sufficient accumulation of information is required to favor one of the options over the other, before a decision is finally reached. To establish the effect of dopaminergic medication on the rate of accumulation of information, decision thresholds and speed-accuracy trade-offs, we tested 14 patients with Parkinson's disease (PD) on and off dopaminergic medication and 14 age-matched healthy controls on two versions of the moving-dots task. One version manipulated the level of task difficulty and hence effort required for decision-making and the other the urgency, requiring decision-making under speed vs. accuracy instructions. The drift diffusion model was fitted to the behavioral data. As expected, the reaction time data revealed an effect of task difficulty, such that the easier the perceptual decision-making task was, the faster the participants responded. PD patients not only made significantly more errors compared to healthy controls, but interestingly they also made significantly more errors ON than OFF medication. The drift diffusion model indicated that PD patients had lower drift rates when tested ON compared to OFF medication, indicating that dopamine levels influenced the quality of information derived from sensory information. On the speed-accuracy task, dopaminergic medication did not directly influence reaction times or error rates. PD patients OFF medication had slower RTs and made more errors with speed than accuracy instructions compared to the controls, whereas such differences were not observed ON medication. PD patients had lower drift rates and higher response thresholds than the healthy controls both with speed and accuracy instructions and ON and OFF medication. For the patients, only non-decision time was higher OFF than ON medication and higher with accuracy than speed instructions. The present results demonstrate that when task difficulty is manipulated, dopaminergic medication impairs perceptual decision-making and renders it more errorful in PD relative to when patients are tested OFF medication. In contrast, for the speed/accuracy task, being ON medication improved performance by eliminating the significantly higher errors and slower RTs observed for patients OFF medication compared to the HC group. There was no evidence of dopaminergic medication inducing impulsive decisions when patients were acting under speed pressure. For the speed-accuracy instructions, the sole effect of dopaminergic medication was on non-decision time, which suggests that medication primarily affected processes tightly coupled with the motor symptoms of PD. Interestingly, the current results suggest opposite effects of dopaminergic medication on the levels of difficulty and speed-accuracy versions of the moving dots task, possibly reflecting the differential effect of dopamine on modulating drift rate (levels of difficulty task) and non-decision time (speed-accuracy task) in the process of perceptual decision making. Copyright © 2015 Elsevier Ltd. All rights reserved.
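The drift diffusion model referred to above treats a decision as noisy evidence accumulating toward one of two response thresholds, with the drift rate indexing the quality of sensory information and non-decision time capturing encoding and motor processes. The sketch below simulates such a model to show how a lower drift rate yields slower and less accurate decisions; all parameter values are illustrative and are not the fitted values from the patient data.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ddm(drift, threshold, non_decision=0.3, dt=0.001, noise=1.0, n_trials=500):
    """Simulate choices and RTs from a basic drift diffusion model (symmetric bounds)."""
    rts, correct = [], []
    for _ in range(n_trials):
        evidence, t = 0.0, 0.0
        while abs(evidence) < threshold:
            evidence += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t + non_decision)
        correct.append(evidence > 0)          # upper bound = correct response
    return np.mean(rts), np.mean(correct)

# A lower drift rate (as reported ON medication) yields slower, less accurate decisions.
for label, drift in [("higher drift", 1.5), ("lower drift", 0.8)]:
    rt, acc = simulate_ddm(drift, threshold=1.0)
    print(f"{label}: mean RT = {rt:.2f} s, accuracy = {acc:.2f}")
```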
NASA Astrophysics Data System (ADS)
Situmorang, B. H.; Setiawan, M. P.; Tosida, E. T.
2017-01-01
Refractive errors are abnormalities of the refraction of light such that images do not focus precisely on the retina, resulting in blurred vision [1]. Refractive errors require the patient to wear glasses or contact lenses so that eyesight returns to normal. The glasses or contact lenses used will differ from person to person, influenced by patient age, the amount of tear production, the vision prescription, and astigmatism. Because the eye is one of the most important organs of the human body for seeing, accuracy in determining which glasses or contact lenses to use is required. This research aims to develop a decision support system that can produce the right contact lens recommendation for refractive error patients with 100% accuracy. The Iterative Dichotomiser 3 (ID3) classification method generates gain and entropy values for attributes that include the sample data code, patient age, astigmatism, tear production rate, vision prescription, and class, which determine the resulting decision tree. The eye specialist test on the training data gave an accuracy rate of 96.7% and an error rate of 3.3%; the test using a confusion matrix gave an accuracy rate of 96.1% and an error rate of 3.1%; for the test data, the accuracy rate was 100% with an error rate of 0.
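ID3 selects, at each node of the decision tree, the attribute with the largest information gain, that is, the reduction in entropy of the class label. The sketch below computes entropy and information gain for a few hypothetical contact-lens records; the rows and attribute values are invented for illustration and are not the study's dataset.

```python
import math
from collections import Counter

def entropy(labels):
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def information_gain(rows, attr, target):
    base = entropy([r[target] for r in rows])
    remainder = 0.0
    for value in {r[attr] for r in rows}:
        subset = [r[target] for r in rows if r[attr] == value]
        remainder += len(subset) / len(rows) * entropy(subset)
    return base - remainder

# Hypothetical training rows patterned on classic contact-lens attributes.
rows = [
    {"age": "young", "astigmatic": "no",  "tears": "normal",  "lens": "soft"},
    {"age": "young", "astigmatic": "yes", "tears": "normal",  "lens": "hard"},
    {"age": "old",   "astigmatic": "no",  "tears": "reduced", "lens": "none"},
    {"age": "old",   "astigmatic": "yes", "tears": "reduced", "lens": "none"},
    {"age": "young", "astigmatic": "no",  "tears": "reduced", "lens": "none"},
]
for attr in ("age", "astigmatic", "tears"):
    print(attr, round(information_gain(rows, attr, "lens"), 3))
```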
Reexamining our bias against heuristics.
McLaughlin, Kevin; Eva, Kevin W; Norman, Geoff R
2014-08-01
Using heuristics offers several cognitive advantages, such as increased speed and reduced effort when making decisions, in addition to allowing us to make decisions in situations where missing data do not allow for formal reasoning. But the traditional view of heuristics is that they trade accuracy for efficiency. Here the authors discuss sources of bias in the literature implicating the use of heuristics in diagnostic error and highlight the fact that there are also data suggesting that under certain circumstances using heuristics may lead to better decisions than formal analysis. They suggest that diagnostic error is frequently misattributed to the use of heuristics and propose an alternative view whereby content knowledge is the root cause of diagnostic performance and heuristics lie on the causal pathway between knowledge and diagnostic error or success.
A simulator study of the interaction of pilot workload with errors, vigilance, and decisions
NASA Technical Reports Server (NTRS)
Smith, H. P. R.
1979-01-01
A full mission simulation of a civil air transport scenario that had two levels of workload was used to observe the actions of the crews and the basic aircraft parameters and to record heart rates. The results showed that the number of errors was very variable among crews but the mean increased in the higher workload case. The increase in errors was not related to rise in heart rate but was associated with vigilance times as well as the days since the last flight. The recorded data also made it possible to investigate decision time and decision order. These also varied among crews and seemed related to the ability of captains to manage the resources available to them on the flight deck.
Servant, Mathieu; White, Corey; Montagnini, Anna; Burle, Borís
2015-07-15
Most decisions that we make build upon multiple streams of sensory evidence and control mechanisms are needed to filter out irrelevant information. Sequential sampling models of perceptual decision making have recently been enriched by attentional mechanisms that weight sensory evidence in a dynamic and goal-directed way. However, the framework retains the longstanding hypothesis that motor activity is engaged only once a decision threshold is reached. To probe latent assumptions of these models, neurophysiological indices are needed. Therefore, we collected behavioral and EMG data in the flanker task, a standard paradigm to investigate decisions about relevance. Although the models captured response time distributions and accuracy data, EMG analyses of response agonist muscles challenged the assumption of independence between decision and motor processes. Those analyses revealed covert incorrect EMG activity ("partial error") in a fraction of trials in which the correct response was finally given, providing intermediate states of evidence accumulation and response activation at the single-trial level. We extended the models by allowing motor activity to occur before a commitment to a choice and demonstrated that the proposed framework captured the rate, latency, and EMG surface of partial errors, along with the speed of the correction process. In return, EMG data provided strong constraints to discriminate between competing models that made similar behavioral predictions. Our study opens new theoretical and methodological avenues for understanding the links among decision making, cognitive control, and motor execution in humans. Sequential sampling models of perceptual decision making assume that sensory information is accumulated until a criterion quantity of evidence is obtained, from where the decision terminates in a choice and motor activity is engaged. The very existence of covert incorrect EMG activity ("partial error") during the evidence accumulation process challenges this longstanding assumption. In the present work, we use partial errors to better constrain sequential sampling models at the single-trial level. Copyright © 2015 the authors 0270-6474/15/3510371-15$15.00/0.
Investigating the Link Between Radiologists Gaze, Diagnostic Decision, and Image Content
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tourassi, Georgia; Voisin, Sophie; Paquit, Vincent C
2013-01-01
Objective: To investigate machine learning for linking image content, human perception, cognition, and error in the diagnostic interpretation of mammograms. Methods: Gaze data and diagnostic decisions were collected from six radiologists who reviewed 20 screening mammograms while wearing a head-mounted eye-tracker. Texture analysis was performed in mammographic regions that attracted radiologists' attention and in all abnormal regions. Machine learning algorithms were investigated to develop predictive models that link: (i) image content with gaze, (ii) image content and gaze with cognition, and (iii) image content, gaze, and cognition with diagnostic error. Both group-based and individualized models were explored. Results: By pooling the data from all radiologists, machine learning produced highly accurate predictive models linking image content, gaze, cognition, and error. Merging radiologists' gaze metrics and cognitive opinions with computer-extracted image features identified 59% of the radiologists' diagnostic errors while confirming 96.2% of their correct diagnoses. The radiologists' individual errors could be adequately predicted by modeling the behavior of their peers. However, personalized tuning appears to be beneficial in many cases to capture individual behavior more accurately. Conclusions: Machine learning algorithms combining image features with radiologists' gaze data and diagnostic decisions can be effectively developed to recognize cognitive and perceptual errors associated with the diagnostic interpretation of mammograms.
Complex contexts and relationships affect clinical decisions in group therapy.
Tasca, Giorgio A; Mcquaid, Nancy; Balfour, Louise
2016-09-01
Clinical errors tend to be underreported even though examining them can provide important training and professional development opportunities. The group therapy context may be prone to clinician errors because of the added complexity within which therapists work and patients receive treatment. We discuss clinical errors that occurred within a group therapy in which a patient for whom group was not appropriate was admitted to the treatment and then was not removed by the clinicians. This was countertherapeutic for both patient and group. Two clinicians were involved: a clinical supervisor who initially assessed and admitted the patient to the group, and a group therapist. To complicate matters, the group therapy occurred within the context of a clinical research trial. The errors, possible solutions, and recommendations are discussed within Reason's Organizational Accident Model (Reason, 2000). In particular, we discuss clinician errors in the context of countertransference and clinician heuristics, group therapy as a local work condition that complicates clinical decision-making, and the impact of the research context as a latent organizational factor. We also present clinical vignettes from the pregroup preparation, group therapy, and supervision. Group therapists are more likely to avoid errors in clinical decisions if they engage in reflective practice about their internal experiences and about the impact of the context in which they work. Therapists must keep in mind the various levels of group functioning, especially related to the group-as-a-whole (i.e., group composition, cohesion, group climate, and safety) when making complex clinical decisions in order to optimize patient outcomes. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Pilot error in air carrier mishaps: longitudinal trends among 558 reports, 1983-2002.
Baker, Susan P; Qiang, Yandong; Rebok, George W; Li, Guohua
2008-01-01
Many interventions have been implemented in recent decades to reduce pilot error in flight operations. This study aims to identify longitudinal trends in the prevalence and patterns of pilot error and other factors in U.S. air carrier mishaps. National Transportation Safety Board investigation reports were examined for 558 air carrier mishaps during 1983-2002. Pilot errors and circumstances of mishaps were described and categorized. Rates were calculated per 10 million flights. The overall mishap rate remained fairly stable, but the proportion of mishaps involving pilot error decreased from 42% in 1983-87 to 25% in 1998-2002, a 40% reduction. The rate of mishaps related to poor decisions declined from 6.2 to 1.8 per 10 million flights, a 71% reduction; much of this decrease was due to a 76% reduction in poor decisions related to weather. Mishandling wind or runway conditions declined by 78%. The rate of mishaps involving poor crew interaction declined by 68%. Mishaps during takeoff declined by 70%, from 5.3 to 1.6 per 10 million flights. The latter reduction was offset by an increase in mishaps while the aircraft was standing, from 2.5 to 6.0 per 10 million flights, and during pushback, which increased from 0 to 3.1 per 10 million flights. Reductions in pilot errors involving decision making and crew coordination are important trends that may reflect improvements in training and technological advances that facilitate good decisions. Mishaps while aircraft are standing and during pushback have increased and deserve special attention.
Pilot Error in Air Carrier Mishaps: Longitudinal Trends Among 558 Reports, 1983–2002
Baker, Susan P.; Qiang, Yandong; Rebok, George W.; Li, Guohua
2009-01-01
Background Many interventions have been implemented in recent decades to reduce pilot error in flight operations. This study aims to identify longitudinal trends in the prevalence and patterns of pilot error and other factors in U.S. air carrier mishaps. Method National Transportation Safety Board investigation reports were examined for 558 air carrier mishaps during 1983–2002. Pilot errors and circumstances of mishaps were described and categorized. Rates were calculated per 10 million flights. Results The overall mishap rate remained fairly stable, but the proportion of mishaps involving pilot error decreased from 42% in 1983–87 to 25% in 1998–2002, a 40% reduction. The rate of mishaps related to poor decisions declined from 6.2 to 1.8 per 10 million flights, a 71% reduction; much of this decrease was due to a 76% reduction in poor decisions related to weather. Mishandling wind or runway conditions declined by 78%. The rate of mishaps involving poor crew interaction declined by 68%. Mishaps during takeoff declined by 70%, from 5.3 to 1.6 per 10 million flights. The latter reduction was offset by an increase in mishaps while the aircraft was standing, from 2.5 to 6.0 per 10 million flights, and during pushback, which increased from 0 to 3.1 per 10 million flights. Conclusions Reductions in pilot errors involving decision making and crew coordination are important trends that may reflect improvements in training and technological advances that facilitate good decisions. Mishaps while aircraft are standing and during push-back have increased and deserve special attention. PMID:18225771
Errors Affect Hypothetical Intertemporal Food Choice in Women
Sellitto, Manuela; di Pellegrino, Giuseppe
2014-01-01
Growing evidence suggests that the ability to control behavior is enhanced in contexts in which errors are more frequent. Here we investigated whether pairing desirable food with errors could decrease impulsive choice during hypothetical temporal decisions about food. To this end, healthy women performed a Stop-signal task in which one food cue predicted high-error rate, and another food cue predicted low-error rate. Afterwards, we measured participants’ intertemporal preferences during decisions between smaller-immediate and larger-delayed amounts of food. We expected reduced sensitivity to smaller-immediate amounts of food associated with high-error rate. Moreover, taking into account that deprivational states affect sensitivity for food, we controlled for participants’ hunger. Results showed that pairing food with high-error likelihood decreased temporal discounting. This effect was modulated by hunger, indicating that, the lower the hunger level, the more participants showed reduced impulsive preference for the food previously associated with a high number of errors as compared with the other food. These findings reveal that errors, which are motivationally salient events that recruit cognitive control and drive avoidance learning against error-prone behavior, are effective in reducing impulsive choice for edible outcomes. PMID:25244534
Error Tendencies in Processing Student Feedback for Instructional Decision Making.
ERIC Educational Resources Information Center
Schermerhorn, John R., Jr.; And Others
1985-01-01
Seeks to assist instructors in recognizing two basic errors that can occur in processing student evaluation data on instructional development efforts; offers a research framework for future investigations of the error tendencies and related issues; and suggests ways in which instructors can confront and manage error tendencies in practice. (MBR)
48 CFR 6101.29 - Clerical mistakes; harmless error [Rule 29].
Code of Federal Regulations, 2010 CFR
2010-10-01
...; harmless error [Rule 29]. 6101.29 Section 6101.29 Federal Acquisition Regulations System CIVILIAN BOARD OF...; harmless error [Rule 29]. (a) Clerical mistakes. Clerical mistakes in decisions, orders, or other parts of... error in anything done or not done by the Board will be a ground for granting a new hearing or for...
Using Clinical Decision Support Software in Health Insurance Company
NASA Astrophysics Data System (ADS)
Konovalov, R.; Kumlander, Deniss
This paper proposes the idea of using Clinical Decision Support software in a health insurance company as a tool to reduce the expenses related to medication errors. To demonstrate that this class of software will help insurance companies reduce these expenses, research was conducted in eight hospitals in the United Arab Emirates to analyze the number of preventable common medication errors in drug prescription.
ERIC Educational Resources Information Center
Brockmann, Frank
2011-01-01
State testing programs today are more extensive than ever, and their results are required to serve more purposes and high-stakes decisions than one might have imagined. Assessment results are used to hold schools, districts, and states accountable for student performance and to help guide a multitude of important decisions. This report describes…
NASA Astrophysics Data System (ADS)
Lohrmann, Carol A.
1990-03-01
Interoperability of commercial Land Mobile Radios (LMR) and the military's tactical LMR is highly desirable if the U.S. government is to respond effectively in a national emergency or in a joint military operation. This ability to talk securely and immediately across agency and military service boundaries is often overlooked. One way to ensure interoperability is to develop and promote Federal communication standards (FS). This thesis surveys one area of the proposed FS 1024 for LMRs; namely, the error detection and correction (EDAC) of the message indicator (MI) bits used for cryptographic synchronization. Several EDAC codes are examined (Hamming, Quadratic Residue, hard decision Golay and soft decision Golay), tested on three FORTRAN programmed channel simulations (INMARSAT, Gaussian and constant burst width), compared and analyzed (based on bit error rates and percent of error-free super-frame runs) so that a best code can be recommended. Out of the four codes under study, the soft decision Golay code (24,12) is evaluated to be the best. This finding is based on the code's ability to detect and correct errors as well as the relative ease of implementation of the algorithm.
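The distinction the thesis evaluates, hard decision versus soft decision decoding, can be illustrated with a much simpler code than the Golay code. The sketch below compares the two on a repetition code over an additive white Gaussian noise channel: hard decision thresholds each received sample before a majority vote, while soft decision combines the analogue samples first. The code, SNR definition, and parameters are illustrative assumptions, and this is not the Golay decoder studied in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(n_bits=20000, snr_db=0.0, repeat=3):
    """Compare hard- vs soft-decision decoding of a repetition code on an AWGN channel."""
    bits = rng.integers(0, 2, n_bits)
    symbols = np.repeat(1 - 2 * bits, repeat).astype(float)   # BPSK: 0 -> +1, 1 -> -1
    noise_std = np.sqrt(1.0 / (2 * 10 ** (snr_db / 10)))
    received = symbols + noise_std * rng.standard_normal(symbols.size)

    # Hard decision: threshold each sample, then majority vote per code word.
    hard = (received < 0).astype(int).reshape(n_bits, repeat)
    hard_bits = (hard.sum(axis=1) > repeat / 2).astype(int)

    # Soft decision: sum the analogue samples per code word before deciding.
    soft_bits = (received.reshape(n_bits, repeat).sum(axis=1) < 0).astype(int)

    return np.mean(hard_bits != bits), np.mean(soft_bits != bits)

hard_ber, soft_ber = simulate()
print(f"hard-decision BER = {hard_ber:.4f}, soft-decision BER = {soft_ber:.4f}")
```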
[Clinical economics: a concept to optimize healthcare services].
Porzsolt, F; Bauer, K; Henne-Bruns, D
2012-03-01
Clinical economics strives to support healthcare decisions with economic considerations. Making economic decisions does not mean cutting costs but rather comparing the added value gained with the burden that has to be accepted. The necessary rules are offered by various disciplines, such as economics, epidemiology and ethics. Medical doctors have recognized these rules but are not applying them in daily clinical practice. This lack of orientation leads to preventable errors. Examples of these errors are shown for diagnosis, screening, prognosis and therapy. As these errors can be prevented by applying clinical economic principles, the possible consequences for the optimization of healthcare are discussed.
Humans Optimize Decision-Making by Delaying Decision Onset
Teichert, Tobias; Ferrera, Vincent P.; Grinband, Jack
2014-01-01
Why do humans make errors on seemingly trivial perceptual decisions? It has been shown that such errors occur in part because the decision process (evidence accumulation) is initiated before selective attention has isolated the relevant sensory information from salient distractors. Nevertheless, it is typically assumed that subjects increase accuracy by prolonging the decision process rather than delaying decision onset. To date it has not been tested whether humans can strategically delay decision onset to increase response accuracy. To address this question we measured the time course of selective attention in a motion interference task using a novel variant of the response signal paradigm. Based on these measurements we estimated time-dependent drift rate and showed that subjects should in principle be able to trade speed for accuracy very effectively by delaying decision onset. Using the time-dependent estimate of drift rate we show that subjects indeed delay decision onset in addition to raising response threshold when asked to stress accuracy over speed in a free reaction version of the same motion-interference task. These findings show that decision onset is a critical aspect of the decision process that can be adjusted to effectively improve decision accuracy. PMID:24599295
Computation and measurement of cell decision making errors using single cell data
Habibi, Iman; Cheong, Raymond; Levchenko, Andre; Emamian, Effat S.; Abdi, Ali
2017-01-01
In this study a new computational method is developed to quantify decision making errors in cells, caused by noise and signaling failures. Analysis of tumor necrosis factor (TNF) signaling pathway which regulates the transcription factor Nuclear Factor κB (NF-κB) using this method identifies two types of incorrect cell decisions called false alarm and miss. These two events represent, respectively, declaring a signal which is not present and missing a signal that does exist. Using single cell experimental data and the developed method, we compute false alarm and miss error probabilities in wild-type cells and provide a formulation which shows how these metrics depend on the signal transduction noise level. We also show that in the presence of abnormalities in a cell, decision making processes can be significantly affected, compared to a wild-type cell, and the method is able to model and measure such effects. In the TNF—NF-κB pathway, the method computes and reveals changes in false alarm and miss probabilities in A20-deficient cells, caused by cell’s inability to inhibit TNF-induced NF-κB response. In biological terms, a higher false alarm metric in this abnormal TNF signaling system indicates perceiving more cytokine signals which in fact do not exist at the system input, whereas a higher miss metric indicates that it is highly likely to miss signals that actually exist. Overall, this study demonstrates the ability of the developed method for modeling cell decision making errors under normal and abnormal conditions, and in the presence of transduction noise uncertainty. Compared to the previously reported pathway capacity metric, our results suggest that the introduced decision error metrics characterize signaling failures more accurately. This is mainly because while capacity is a useful metric to study information transmission in signaling pathways, it does not capture the overlap between TNF-induced noisy response curves. PMID:28379950
Computation and measurement of cell decision making errors using single cell data.
Habibi, Iman; Cheong, Raymond; Lipniacki, Tomasz; Levchenko, Andre; Emamian, Effat S; Abdi, Ali
2017-04-01
In this study a new computational method is developed to quantify decision making errors in cells, caused by noise and signaling failures. Analysis of tumor necrosis factor (TNF) signaling pathway which regulates the transcription factor Nuclear Factor κB (NF-κB) using this method identifies two types of incorrect cell decisions called false alarm and miss. These two events represent, respectively, declaring a signal which is not present and missing a signal that does exist. Using single cell experimental data and the developed method, we compute false alarm and miss error probabilities in wild-type cells and provide a formulation which shows how these metrics depend on the signal transduction noise level. We also show that in the presence of abnormalities in a cell, decision making processes can be significantly affected, compared to a wild-type cell, and the method is able to model and measure such effects. In the TNF-NF-κB pathway, the method computes and reveals changes in false alarm and miss probabilities in A20-deficient cells, caused by cell's inability to inhibit TNF-induced NF-κB response. In biological terms, a higher false alarm metric in this abnormal TNF signaling system indicates perceiving more cytokine signals which in fact do not exist at the system input, whereas a higher miss metric indicates that it is highly likely to miss signals that actually exist. Overall, this study demonstrates the ability of the developed method for modeling cell decision making errors under normal and abnormal conditions, and in the presence of transduction noise uncertainty. Compared to the previously reported pathway capacity metric, our results suggest that the introduced decision error metrics characterize signaling failures more accurately. This is mainly because while capacity is a useful metric to study information transmission in signaling pathways, it does not capture the overlap between TNF-induced noisy response curves.
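The false alarm and miss metrics defined above are the two classical signal-detection error types applied to a cell's decision about whether a ligand is present. The sketch below illustrates how such probabilities might be estimated from single-cell response distributions under a fixed decision threshold; the distributions and the threshold are hypothetical, not the TNF-NF-κB data analysed in the study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical single-cell response distributions (arbitrary units):
# responses when the ligand (e.g., TNF) is absent vs present.
off_responses = rng.normal(loc=1.0, scale=0.6, size=5000)
on_responses = rng.normal(loc=2.5, scale=0.9, size=5000)

threshold = 1.8   # decision boundary assumed for illustration

false_alarm = np.mean(off_responses > threshold)   # declaring a signal that is not there
miss = np.mean(on_responses <= threshold)          # missing a signal that is there

print(f"P(false alarm) = {false_alarm:.3f}, P(miss) = {miss:.3f}")
```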
A cascaded coding scheme for error control
NASA Technical Reports Server (NTRS)
Shu, L.; Kasami, T.
1985-01-01
A cascaded coding scheme for error control is investigated. The scheme employs a combination of hard and soft decisions in decoding. Error performance is analyzed. If the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit-error-rate. Some example schemes are evaluated. They seem to be quite suitable for satellite down-link error control.
A cascaded coding scheme for error control
NASA Technical Reports Server (NTRS)
Kasami, T.; Lin, S.
1985-01-01
A cascaded coding scheme for error control was investigated. The scheme employs a combination of hard and soft decisions in decoding. Error performance is analyzed. If the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit-error-rate. Some example schemes are studied which seem to be quite suitable for satellite down-link error control.
Accuracy and reliability of forensic latent fingerprint decisions
Ulery, Bradford T.; Hicklin, R. Austin; Buscaglia, JoAnn; Roberts, Maria Antonia
2011-01-01
The interpretation of forensic fingerprint evidence relies on the expertise of latent print examiners. The National Research Council of the National Academies and the legal and forensic sciences communities have called for research to measure the accuracy and reliability of latent print examiners’ decisions, a challenging and complex problem in need of systematic analysis. Our research is focused on the development of empirical approaches to studying this problem. Here, we report on the first large-scale study of the accuracy and reliability of latent print examiners’ decisions, in which 169 latent print examiners each compared approximately 100 pairs of latent and exemplar fingerprints from a pool of 744 pairs. The fingerprints were selected to include a range of attributes and quality encountered in forensic casework, and to be comparable to searches of an automated fingerprint identification system containing more than 58 million subjects. This study evaluated examiners on key decision points in the fingerprint examination process; procedures used operationally include additional safeguards designed to minimize errors. Five examiners made false positive errors for an overall false positive rate of 0.1%. Eighty-five percent of examiners made at least one false negative error for an overall false negative rate of 7.5%. Independent examination of the same comparisons by different participants (analogous to blind verification) was found to detect all false positive errors and the majority of false negative errors in this study. Examiners frequently differed on whether fingerprints were suitable for reaching a conclusion. PMID:21518906
Accuracy and reliability of forensic latent fingerprint decisions.
Ulery, Bradford T; Hicklin, R Austin; Buscaglia, Joann; Roberts, Maria Antonia
2011-05-10
The interpretation of forensic fingerprint evidence relies on the expertise of latent print examiners. The National Research Council of the National Academies and the legal and forensic sciences communities have called for research to measure the accuracy and reliability of latent print examiners' decisions, a challenging and complex problem in need of systematic analysis. Our research is focused on the development of empirical approaches to studying this problem. Here, we report on the first large-scale study of the accuracy and reliability of latent print examiners' decisions, in which 169 latent print examiners each compared approximately 100 pairs of latent and exemplar fingerprints from a pool of 744 pairs. The fingerprints were selected to include a range of attributes and quality encountered in forensic casework, and to be comparable to searches of an automated fingerprint identification system containing more than 58 million subjects. This study evaluated examiners on key decision points in the fingerprint examination process; procedures used operationally include additional safeguards designed to minimize errors. Five examiners made false positive errors for an overall false positive rate of 0.1%. Eighty-five percent of examiners made at least one false negative error for an overall false negative rate of 7.5%. Independent examination of the same comparisons by different participants (analogous to blind verification) was found to detect all false positive errors and the majority of false negative errors in this study. Examiners frequently differed on whether fingerprints were suitable for reaching a conclusion.
NASA Astrophysics Data System (ADS)
Wang, Hongcui; Kawahara, Tatsuya
CALL (Computer Assisted Language Learning) systems using ASR (Automatic Speech Recognition) for second language learning have received increasing interest recently. However, it still remains a challenge to achieve high speech recognition performance, including accurate detection of erroneous utterances by non-native speakers. Conventionally, possible error patterns, based on linguistic knowledge, are added to the lexicon and language model, or to the ASR grammar network. However, this approach quickly runs into a trade-off between error coverage and increased perplexity. To solve the problem, we propose a method based on a decision tree to learn effective prediction of errors made by non-native speakers. An experimental evaluation with a number of foreign students learning Japanese shows that the proposed method can effectively generate an ASR grammar network, given a target sentence, that achieves both better coverage of errors and smaller perplexity, resulting in significant improvement in ASR accuracy.
Dual processing and diagnostic errors.
Norman, Geoff
2009-09-01
In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical, conscious, and conceptual process called System 2. Exemplar theories of categorization propose that many category decisions in everyday life are made by unconscious matching to a particular example in memory, and that these examples remain available and retrievable individually. I then review studies of clinical reasoning based on these theories, and show that the two processes are equally effective; System 1, despite its reliance on idiosyncratic, individual experience, is no more prone to cognitive bias or diagnostic error than System 2. Further, I review evidence that instructions directed at encouraging the clinician to explicitly use both strategies can lead to consistent reductions in error rates.
Closed-Loop Analysis of Soft Decisions for Serial Links
NASA Technical Reports Server (NTRS)
Lansdowne, Chatwin A.; Steele, Glen F.; Zucha, Joan P.; Schlesinger, Adam M.
2013-01-01
We describe the benefit of using closed-loop measurements for a radio receiver paired with a counterpart transmitter. We show that real-time analysis of the soft decision output of a receiver can provide rich and relevant insight far beyond the traditional hard-decision bit error rate (BER) test statistic. We describe a Soft Decision Analyzer (SDA) implementation for closed-loop measurements on single- or dual- (orthogonal) channel serial data communication links. The analyzer has been used to identify, quantify, and prioritize contributors to implementation loss in live-time during the development of software defined radios. This test technique gains importance as modern receivers are providing soft decision symbol synchronization as radio links are challenged to push more data and more protocol overhead through noisier channels, and software-defined radios (SDRs) use error-correction codes that approach Shannon's theoretical limit of performance.
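To give a feel for why soft-decision statistics are richer than a single hard-decision BER figure, the following sketch simulates a noisy BPSK link with a known test pattern (as a paired transmitter/receiver setup would provide) and reports both the BER and simple soft-decision cloud statistics. It is only an illustration of the idea, not the Soft Decision Analyzer itself, and the noise level is arbitrary.

    import numpy as np

    rng = np.random.default_rng(0)
    bits = rng.integers(0, 2, 100_000)
    soft = (2 * bits - 1) + rng.normal(0, 0.5, bits.size)   # soft decisions from a noisy BPSK link

    ber = np.mean((soft > 0).astype(int) != bits)            # the traditional hard-decision statistic
    # Soft-decision statistics expose link quality directly, not just the error count
    sep = np.mean(soft[bits == 1]) - np.mean(soft[bits == 0])    # separation between symbol clouds
    spread = np.std(np.concatenate([soft[bits == 1] - np.mean(soft[bits == 1]),
                                    soft[bits == 0] - np.mean(soft[bits == 0])]))
    print(f"hard-decision BER: {ber:.4f}")
    print(f"cloud separation: {sep:.2f}, cloud spread: {spread:.2f} (SNR-like ratio {sep/(2*spread):.2f})")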
Chai, Chen; Wong, Yiik Diew; Wang, Xuesong
2017-07-01
This paper proposes a simulation-based approach to estimate the safety impact of driver cognitive failures and driving errors. Fuzzy Logic, which involves linguistic terms and uncertainty, is incorporated with a Cellular Automata model to simulate the decision-making process of right-turn filtering movement at signalized intersections. Simulation experiments are conducted to estimate the relationships of cognitive failures and driving errors with safety performance. Simulation results show that different types of cognitive failures have varied relationships with driving errors and safety performance. For right-turn filtering movement, cognitive failures are more likely to result in driving errors with a denser conflicting traffic stream. Moreover, different driving errors are found to have different safety impacts. The study serves to provide a novel approach to linguistically assess cognitions and replicate the decision-making procedures of the individual driver. Compared to crash analysis, the proposed FCA model allows quantitative estimation of particular cognitive failures, and of the impact of cognitions on driving errors and safety performance. Copyright © 2017 Elsevier Ltd. All rights reserved.
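As a rough sketch of the fuzzy-logic ingredient (not the paper's FCA model), the snippet below encodes two hypothetical linguistic variables, gap size and conflicting traffic density, with triangular membership functions and evaluates two made-up rules for a single right-turn filtering decision. All membership breakpoints and rules are assumptions for illustration.

    import numpy as np

    def tri(x, a, b, c):
        """Triangular membership function with support [a, c] and peak at b."""
        return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

    gap_s, traffic_veh = 4.0, 12.0             # hypothetical crisp inputs for one decision
    gap_large    = tri(gap_s, 2.0, 6.0, 10.0)
    traffic_low  = tri(traffic_veh, 0.0, 5.0, 15.0)
    traffic_high = tri(traffic_veh, 10.0, 20.0, 30.0)

    accept = min(gap_large, traffic_low)        # rule 1: large gap AND low traffic -> filter through
    wait   = max(1 - gap_large, traffic_high)   # rule 2: small gap OR high traffic -> wait
    print("decision:", "filter through" if accept > wait else "wait",
          f"(accept={accept:.2f}, wait={wait:.2f})")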
A regret-induced status-quo bias
Nicolle, A.; Fleming, S.M.; Bach, D.R.; Driver, J.; Dolan, R. J.
2011-01-01
A suboptimal bias towards accepting the ‘status-quo’ option in decision-making is well established behaviorally, but the underlying neural mechanisms are less clear. Behavioral evidence suggests the emotion of regret is higher when errors arise from rejection rather than acceptance of a status-quo option. Such asymmetry in the genesis of regret might drive the status-quo bias on subsequent decisions, if indeed erroneous status-quo rejections have a greater neuronal impact than erroneous status-quo acceptances. To test this, we acquired human fMRI data during a difficult perceptual decision task that incorporated a trial-to-trial intrinsic status-quo option, with explicit signaling of outcomes (error or correct). Behaviorally, experienced regret was higher after an erroneous status-quo rejection compared to acceptance. Anterior insula and medial prefrontal cortex showed increased BOLD signal after such status-quo rejection errors. In line with our hypothesis, a similar pattern of signal change predicted acceptance of the status-quo on a subsequent trial. Thus, our data link a regret-induced status-quo bias to error-related activity on the preceding trial. PMID:21368043
Soft-decision decoding techniques for linear block codes and their error performance analysis
NASA Technical Reports Server (NTRS)
Lin, Shu
1996-01-01
The first paper presents a new minimum-weight trellis-based soft-decision iterative decoding algorithm for binary linear block codes. The second paper derives an upper bound on the probability of block error for multilevel concatenated codes (MLCC); the bound evaluates the difference in performance for different decompositions of some codes. The third paper investigates the bit error probability of maximum likelihood decoding of binary linear codes. The fourth and final paper included in this report concerns the construction of multilevel concatenated block modulation codes using a multilevel concatenation scheme for the frequency non-selective Rayleigh fading channel.
1983-03-01
Decision Tree; PACKAGE unitrep Action/Area Selection Flow Chart; PACKAGE unitrep Control Flow Chart ... the originator would manually draft simple, readable, formatted messages using predefined forms and decision logic trees. This alternative was ... Study Analysis of data content errors (percent of errors): Character Type 2.1; Calculations/Associations 14.3; Message Identification 4.?; Value Mismatch 22.E
A selective-update affine projection algorithm with selective input vectors
NASA Astrophysics Data System (ADS)
Kong, NamWoong; Shin, JaeWook; Park, PooGyeon
2011-10-01
This paper proposes an affine projection algorithm (APA) with selective input vectors, which is based on the concept of selective update in order to reduce estimation errors and computations. The algorithm consists of two procedures: input-vector selection and state decision. The input-vector-selection procedure determines the number of input vectors by checking against the mean square error (MSE) whether the input vectors carry enough information for an update. The state-decision procedure determines the current state of the adaptive filter using the state-decision criterion. While the adaptive filter is in the transient state, the algorithm updates the filter coefficients with the selected input vectors. On the other hand, as soon as the adaptive filter reaches the steady state, the update procedure is not performed. Through these two procedures, the proposed algorithm achieves small steady-state estimation errors, low computational complexity and low update complexity for colored input signals.
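For readers unfamiliar with the affine projection family, the NumPy sketch below shows the generic APA step that a selective-update variant would gate, with a crude MSE-based skip standing in for the state-decision procedure. It is not the authors' algorithm; the step size, regularization, and threshold are hypothetical.

    import numpy as np

    def selective_apa_update(w, X, d, mse_threshold=1e-3, mu=0.5, delta=1e-4):
        """Generic affine projection update, skipped when the recent MSE is already small.
        w : (N,) filter coefficients; X : (K, N) rows are the K most recent input vectors;
        d : (K,) desired samples paired with the rows of X."""
        e = d - X @ w                                   # a priori errors over the K regressors
        if np.mean(e ** 2) < mse_threshold:             # crude stand-in for the state-decision step
            return w, e                                 # steady state reached: no update
        G = X @ X.T + delta * np.eye(X.shape[0])        # regularized Gram matrix of the input vectors
        return w + mu * X.T @ np.linalg.solve(G, e), e  # project the error back onto the inputs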
Using Computational Cognitive Modeling to Diagnose Possible Sources of Aviation Error
NASA Technical Reports Server (NTRS)
Byrne, M. D.; Kirlik, Alex
2003-01-01
We present a computational model of a closed-loop, pilot-aircraft-visual scene-taxiway system created to shed light on possible sources of taxi error. Creating the cognitive aspects of the model using ACT-R required us to conduct studies with subject matter experts to identify experiential adaptations pilots bring to taxiing. Five decision strategies were found, ranging from cognitively-intensive but precise, to fast, frugal but robust. We provide evidence for the model by comparing its behavior to a NASA Ames Research Center simulation of Chicago O'Hare surface operations. Decision horizons were highly variable; the model selected the most accurate strategy given time available. We found a signature in the simulation data of the use of globally robust heuristics to cope with short decision horizons as revealed by errors occurring most frequently at atypical taxiway geometries or clearance routes. These data provided empirical support for the model.
Prediction of the compression ratio for municipal solid waste using decision tree.
Heshmati R, Ali Akbar; Mokhtari, Maryam; Shakiba Rad, Saeed
2014-01-01
The compression ratio of municipal solid waste (MSW) is an essential parameter for evaluation of waste settlement and landfill design. However, no appropriate model has been proposed to estimate the waste compression ratio so far. In this study, a decision tree method was utilized to predict the waste compression ratio (C'c). The tree was constructed using Quinlan's M5 algorithm. A reliable database retrieved from the literature was used to develop a practical model that relates C'c to waste composition and properties, including dry density, dry weight water content, and percentage of biodegradable organic waste using the decision tree method. The performance of the developed model was examined in terms of different statistical criteria, including correlation coefficient, root mean squared error, mean absolute error and mean bias error, recommended by researchers. The obtained results demonstrate that the suggested model is able to evaluate the compression ratio of MSW effectively.
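As a rough illustration of the modeling workflow described above: Quinlan's M5 builds piecewise-linear model trees and is not available in scikit-learn, so a plain CART regressor is used as a stand-in, and the records below are invented rather than taken from the study's database.

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor
    from sklearn.metrics import mean_absolute_error, mean_squared_error

    # Hypothetical records: [dry density (kg/m3), dry-weight water content (%), biodegradable organics (%)]
    X = np.array([[450, 55, 60], [600, 40, 45], [520, 48, 52], [700, 30, 35],
                  [480, 60, 65], [650, 35, 40], [550, 45, 50], [620, 38, 42]])
    y = np.array([0.32, 0.22, 0.27, 0.17, 0.34, 0.19, 0.25, 0.21])   # compression ratio C'c

    model = DecisionTreeRegressor(max_depth=3).fit(X, y)
    pred = model.predict(X)
    print("MAE :", mean_absolute_error(y, pred))
    print("RMSE:", mean_squared_error(y, pred) ** 0.5)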
Automation bias: decision making and performance in high-tech cockpits.
Mosier, K L; Skitka, L J; Heers, S; Burdick, M
1997-01-01
Automated aids and decision support tools are rapidly becoming indispensable tools in high-technology cockpits and are assuming increasing control of "cognitive" flight tasks, such as calculating fuel-efficient routes, navigating, or detecting and diagnosing system malfunctions and abnormalities. This study was designed to investigate automation bias, a recently documented factor in the use of automated aids and decision support systems. The term refers to omission and commission errors resulting from the use of automated cues as a heuristic replacement for vigilant information seeking and processing. Glass-cockpit pilots flew flight scenarios involving automation events or opportunities for automation-related omission and commission errors. Although experimentally manipulated accountability demands did not significantly impact performance, post hoc analyses revealed that those pilots who reported an internalized perception of "accountability" for their performance and strategies of interaction with the automation were significantly more likely to double-check automated functioning against other cues and less likely to commit errors than those who did not share this perception. Pilots were also likely to erroneously "remember" the presence of expected cues when describing their decision-making processes.
A forward error correction technique using a high-speed, high-rate single chip codec
NASA Astrophysics Data System (ADS)
Boyd, R. W.; Hartman, W. F.; Jones, Robert E.
The authors describe an error-correction coding approach that allows operation in either burst or continuous modes at data rates of multiple hundreds of megabits per second. Bandspreading is low since the code rate is 7/8 or greater, which is consistent with high-rate link operation. The encoder, along with a hard-decision decoder, fits on a single application-specific integrated circuit (ASIC) chip. Soft-decision decoding is possible utilizing applique hardware in conjunction with the hard-decision decoder. Expected coding gain is a function of the application and is approximately 2.5 dB for hard-decision decoding at 10-5 bit-error rate with phase-shift-keying modulation and additive Gaussian white noise interference. The principal use envisioned for this technique is to achieve a modest amount of coding gain on high-data-rate, bandwidth-constrained channels. Data rates of up to 300 Mb/s can be accommodated by the codec chip. The major objective is burst-mode communications, where code words are composed of 32 n data bits followed by 32 overhead bits.
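A back-of-envelope reading of the quoted 2.5 dB hard-decision coding gain, assuming uncoded BPSK over AWGN: the numbers below come from the textbook BER relationship, not from the codec's measured performance.

    import numpy as np
    from scipy.special import erfc

    def bpsk_ber(ebn0_db):
        ebn0 = 10 ** (ebn0_db / 10)
        return 0.5 * erfc(np.sqrt(ebn0))     # uncoded BPSK bit error rate over AWGN

    uncoded_db = 9.6                          # roughly where uncoded BPSK reaches 1e-5
    print("uncoded BER near 9.6 dB Eb/N0:", bpsk_ber(uncoded_db))
    print("with ~2.5 dB coding gain the same 1e-5 BER needs only about",
          uncoded_db - 2.5, "dB Eb/N0")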
Dual Processing and Diagnostic Errors
ERIC Educational Resources Information Center
Norman, Geoff
2009-01-01
In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical,…
Feasibility of neuro-morphic computing to emulate error-conflict based decision making.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Branch, Darren W.
2009-09-01
A key aspect of decision making is determining when errors or conflicts exist in information and knowing whether to continue or terminate an action. Understanding the error-conflict processing is crucial in order to emulate higher brain functions in hardware and software systems. Specific brain regions, most notably the anterior cingulate cortex (ACC), are known to respond to the presence of conflicts in information by assigning a value to an action. Essentially, this conflict signal triggers strategic adjustments in cognitive control, which serve to prevent further conflict. The most probable mechanism is that the ACC reports and discriminates different types of feedback, both positive and negative, that relate to different adaptations. Unique cells called spindle neurons that are primarily found in the ACC (layer Vb) are known to be responsible for cognitive dissonance (disambiguation between alternatives). Thus, the ACC through a specific set of cells likely plays a central role in the ability of humans to make difficult decisions and solve challenging problems in the midst of conflicting information. In addition to dealing with cognitive dissonance, decision making in high consequence scenarios also relies on the integration of multiple sets of information (sensory, reward, emotion, etc.). Thus, a second area of interest for this proposal lies in the corticostriatal networks that serve as an integration region for multiple cognitive inputs. In order to engineer neurological decision making processes in silicon devices, we will determine the key cells, inputs, and outputs of conflict/error detection in the ACC region. The second goal is to understand in vitro models of corticostriatal networks and the impact of physical deficits on decision making, specifically in stressful scenarios with conflicting streams of data from multiple inputs. We will elucidate the mechanisms of cognitive data integration in order to implement a future corticostriatal-like network in silicon devices for improved decision processing.
The Dialectical Utility of Heuristic Processing in Outdoor Adventure Education
ERIC Educational Resources Information Center
Zajchowski, Chris A. B.; Brownlee, Matthew T. J.; Furman, Nate N.
2016-01-01
Heuristics--cognitive shortcuts used in decision-making events--have been paradoxically praised for their contribution to decision-making efficiency and prosecuted for their contribution to decision-making error (Gigerenzer & Gaissmaier, 2011; Gigerenzer, Todd, & ABC Research Group, 1999; Kahneman, 2011; Kahneman, Slovic, & Tversky,…
Learning to Make More Effective Decisions: Changing Beliefs as a Prelude to Action
ERIC Educational Resources Information Center
Friedman, Sheldon
2004-01-01
Decision-makers in organizations often make what appear to be intuitively obvious and reasonable decisions, which often turn out to yield unintended outcomes. The cause of such ineffective decisions can be a combination of cognitive biases, poor mental models of complex systems, and errors in thinking provoked by anxiety, all of which tend to…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-22
... collection. Abstract: Veterans who disagree with the initial decision denying their healthcare benefits in... allows decision making to be more responsive to Veterans using the VA healthcare system. An agency may... date of the initial decision. The request must state why the decision is in error and include any new...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-07
... collection. Abstract: Veterans who disagree with the initial decision denying their healthcare benefits in... appeals and allows decision making to be more responsive to veterans using the VA healthcare system. An... year of the date of the initial decision. The request must state why the decision is in error and...
Lerch, Rachel A; Sims, Chris R
2016-06-01
Limitations in visual working memory (VWM) have been extensively studied in psychophysical tasks, but not well understood in terms of how these memory limits translate to performance in more natural domains. For example, in reaching to grasp an object based on a spatial memory representation, overshooting the intended target may be more costly than undershooting, such as when reaching for a cup of hot coffee. The current body of literature lacks a detailed account of how the costs or consequences of memory error influence what we encode in visual memory and how we act on the basis of remembered information. Here, we study how externally imposed monetary costs influence behavior in a motor decision task that involves reach planning based on recalled information from VWM. We approach this from a decision theoretic perspective, viewing decisions of where to aim in relation to the utility of their outcomes given the uncertainty of memory representations. Our results indicate that subjects accounted for the uncertainty in their visual memory, showing a significant difference in their reach planning when monetary costs were imposed for memory errors. However, our findings indicate that subjects' memory representations per se were not biased by the imposed costs, but rather subjects adopted a near-optimal post-mnemonic decision strategy in their motor planning.
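The decision-theoretic framing can be made concrete with a small simulation: under Gaussian memory-plus-motor noise and an asymmetric cost in which overshoot is penalized more than undershoot (as in the hot-coffee example), the expected-cost-minimizing aim point shifts toward undershooting. All numbers below are hypothetical and are not the study's task parameters.

    import numpy as np

    def cost(error_mm):
        # Overshooting (positive error) is 5x as costly per millimetre as undershooting
        return np.where(error_mm > 0, 5.0 * error_mm, -1.0 * error_mm)

    sigma = 8.0                                          # assumed memory + motor noise (mm)
    aims = np.linspace(-20, 5, 201)                      # candidate aim offsets from the remembered target
    noise = np.random.default_rng(3).normal(0, sigma, 20000)

    expected_cost = [np.mean(cost(a + noise)) for a in aims]
    best_aim = aims[int(np.argmin(expected_cost))]
    print(f"expected-cost-minimizing aim offset: {best_aim:.1f} mm (negative = undershoot)")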
Drichoutis, Andreas C.; Lusk, Jayson L.
2014-01-01
Despite the fact that conceptual models of individual decision making under risk are deterministic, attempts to econometrically estimate risk preferences require some assumption about the stochastic nature of choice. Unfortunately, the consequences of making different assumptions are, at present, unclear. In this paper, we compare three popular error specifications (Fechner, contextual utility, and Luce error) for three different preference functionals (expected utility, rank-dependent utility, and a mixture of those two) using in- and out-of-sample selection criteria. We find drastically different inferences about structural risk preferences across the competing functionals and error specifications. Expected utility theory is least affected by the selection of the error specification. A mixture model combining the two conceptual models assuming contextual utility provides the best fit of the data both in- and out-of-sample. PMID:25029467
Hozo, Iztok; Schell, Michael J; Djulbegovic, Benjamin
2008-07-01
The absolute truth in research is unobtainable, as no evidence or research hypothesis is ever 100% conclusive. Therefore, all data and inferences can in principle be considered "inconclusive." Scientific inference and decision-making need to take into account errors, which are unavoidable in the research enterprise. Errors can occur at the level of conclusions, which aim to discern the truthfulness of a research hypothesis based on the accuracy of the research evidence and hypothesis, and at the level of decisions, the goal of which is to enable optimal decision-making under present and specific circumstances. To optimize the chance of both correct conclusions and correct decisions, a synthesis of all major statistical approaches to clinical research is needed. The integration of these approaches (frequentist, Bayesian, and decision-analytic) can be accomplished through formal risk:benefit (R:B) analysis. This chapter illustrates the rational choice of a research hypothesis using R:B analysis based on a decision-theoretic expected utility framework and the concept of "acceptable regret" to calculate the threshold probability of the "truth" above which the benefit of accepting a research hypothesis outweighs its risks.
Regret and the rationality of choices.
Bourgeois-Gironde, Sacha
2010-01-27
Regret helps to optimize decision behaviour. It can be defined as a rational emotion. Several recent neurobiological studies have confirmed the interface between emotion and cognition at which regret is located and documented its role in decision behaviour. These data give credibility to the incorporation of regret in decision theory that had been proposed by economists in the 1980s. However, finer distinctions are required in order to get a better grasp of how regret and behaviour influence each other. Regret can be defined as a predictive error signal but this signal does not necessarily transpose into a decision-weight influencing behaviour. Clinical studies on several types of patients show that the processing of an error signal and its influence on subsequent behaviour can be dissociated. We propose a general understanding of how regret and decision-making are connected in terms of regret being modulated by rational antecedents of choice. Regret and the modification of behaviour on its basis will depend on the criteria of rationality involved in decision-making. We indicate current and prospective lines of research in order to refine our views on how regret contributes to optimal decision-making.
Compact disk error measurements
NASA Technical Reports Server (NTRS)
Howe, D.; Harriman, K.; Tehranchi, B.
1993-01-01
The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard decision (i.e., 1-bit error flags) and soft decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross-Interleaved Reed-Solomon Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.
Clinical decision-making: heuristics and cognitive biases for the ophthalmologist.
Hussain, Ahsen; Oestreicher, James
Diagnostic errors have a significant impact on health care outcomes and patient care. The underlying causes and development of diagnostic error are complex with flaws in health care systems, as well as human error, playing a role. Cognitive biases and a failure of decision-making shortcuts (heuristics) are human factors that can compromise the diagnostic process. We describe these mechanisms, their role with the clinician, and provide clinical scenarios to highlight the various points at which biases may emerge. We discuss strategies to modify the development and influence of these processes and the vulnerability of heuristics to provide insight and improve clinical outcomes. Copyright © 2017 Elsevier Inc. All rights reserved.
Jolley, Suzanne; Thompson, Claire; Hurley, James; Medin, Evelina; Butler, Lucy; Bebbington, Paul; Dunn, Graham; Freeman, Daniel; Fowler, David; Kuipers, Elizabeth; Garety, Philippa
2014-01-01
Understanding how people with delusions arrive at false conclusions is central to the refinement of cognitive behavioural interventions. Making hasty decisions based on limited data (‘jumping to conclusions’, JTC) is one potential causal mechanism, but reasoning errors may also result from other processes. In this study, we investigated the correlates of reasoning errors under differing task conditions in 204 participants with schizophrenia spectrum psychosis who completed three probabilistic reasoning tasks. Psychotic symptoms, affect, and IQ were also evaluated. We found that hasty decision makers were more likely to draw false conclusions, but only 37% of their reasoning errors were consistent with the limited data they had gathered. The remainder directly contradicted all the presented evidence. Reasoning errors showed task-dependent associations with IQ, affect, and psychotic symptoms. We conclude that limited data-gathering contributes to false conclusions but is not the only mechanism involved. Delusions may also be maintained by a tendency to disregard evidence. Low IQ and emotional biases may contribute to reasoning errors in more complex situations. Cognitive strategies to reduce reasoning errors should therefore extend beyond encouragement to gather more data, and incorporate interventions focused directly on these difficulties. PMID:24958065
Krigolson, Olav E; Hassall, Cameron D; Handy, Todd C
2014-03-01
Our ability to make decisions is predicated upon our knowledge of the outcomes of the actions available to us. Reinforcement learning theory posits that actions followed by a reward or punishment acquire value through the computation of prediction errors-discrepancies between the predicted and the actual reward. A multitude of neuroimaging studies have demonstrated that rewards and punishments evoke neural responses that appear to reflect reinforcement learning prediction errors [e.g., Krigolson, O. E., Pierce, L. J., Holroyd, C. B., & Tanaka, J. W. Learning to become an expert: Reinforcement learning and the acquisition of perceptual expertise. Journal of Cognitive Neuroscience, 21, 1833-1840, 2009; Bayer, H. M., & Glimcher, P. W. Midbrain dopamine neurons encode a quantitative reward prediction error signal. Neuron, 47, 129-141, 2005; O'Doherty, J. P. Reward representations and reward-related learning in the human brain: Insights from neuroimaging. Current Opinion in Neurobiology, 14, 769-776, 2004; Holroyd, C. B., & Coles, M. G. H. The neural basis of human error processing: Reinforcement learning, dopamine, and the error-related negativity. Psychological Review, 109, 679-709, 2002]. Here, we used the brain ERP technique to demonstrate that not only do rewards elicit a neural response akin to a prediction error but also that this signal rapidly diminished and propagated to the time of choice presentation with learning. Specifically, in a simple, learnable gambling task, we show that novel rewards elicited a feedback error-related negativity that rapidly decreased in amplitude with learning. Furthermore, we demonstrate the existence of a reward positivity at choice presentation, a previously unreported ERP component that has a similar timing and topography as the feedback error-related negativity that increased in amplitude with learning. The pattern of results we observed mirrored the output of a computational model that we implemented to compute reward prediction errors and the changes in amplitude of these prediction errors at the time of choice presentation and reward delivery. Our results provide further support that the computations that underlie human learning and decision-making follow reinforcement learning principles.
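The learning dynamic described above maps onto a one-state reinforcement learning model: the prediction error at feedback shrinks as the learned value, which is already available at choice presentation, grows. A minimal sketch follows, with the learning rate and reward probability chosen arbitrarily; it is an illustration of the prediction-error computation, not the authors' computational model.

    import numpy as np

    rng = np.random.default_rng(0)
    alpha, n_trials = 0.2, 60
    V = 0.0                        # learned value of the chosen option, available at choice onset
    deltas, values = [], []
    for _ in range(n_trials):
        r = float(rng.random() < 0.8)   # the option pays off on 80% of trials
        delta = r - V                   # reward prediction error at feedback
        V += alpha * delta              # Rescorla-Wagner / one-state TD update
        deltas.append(delta)
        values.append(V)

    print("mean |prediction error|, first vs last 10 trials:",
          round(np.mean(np.abs(deltas[:10])), 3), round(np.mean(np.abs(deltas[-10:])), 3))
    print("value signal at choice, first vs last 10 trials:",
          round(np.mean(values[:10]), 3), round(np.mean(values[-10:]), 3))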
ERIC Educational Resources Information Center
Gilis, Bart; Helsen, Werner; Catteeuw, Peter; Wagemans, Johan
2008-01-01
This study investigated the offside decision-making process in association football. The first aim was to capture the specific offside decision-making skills in complex dynamic events. Second, we analyzed the type of errors to investigate the factors leading to incorrect decisions. Federation Internationale de Football Association (FIFA; n = 29)…
McLaughlin, Douglas B
2012-01-01
The utility of numeric nutrient criteria established for certain surface waters is likely to be affected by the uncertainty that exists in the presence of a causal link between nutrient stressor variables and designated use-related biological responses in those waters. This uncertainty can be difficult to characterize, interpret, and communicate to a broad audience of environmental stakeholders. The US Environmental Protection Agency (USEPA) has developed a systematic planning process to support a variety of environmental decisions, but this process is not generally applied to the development of national or state-level numeric nutrient criteria. This article describes a method for implementing such an approach and uses it to evaluate the numeric total P criteria recently proposed by USEPA for colored lakes in Florida, USA. An empirical, log-linear relationship between geometric mean concentrations of total P (a potential stressor variable) and chlorophyll a (a nutrient-related response variable) in these lakes, which is assumed to be causal in nature, forms the basis for the analysis. The use of the geometric mean total P concentration of a lake to correctly indicate designated use status, defined in terms of a 20 µg/L geometric mean chlorophyll a threshold, is evaluated. Rates of decision errors analogous to the Type I and Type II error rates familiar in hypothesis testing, and a third error rate, E(ni), referred to as the nutrient criterion-based impairment error rate, are estimated. The results show that USEPA's proposed "baseline" and "modified" nutrient criteria approach, in which data on both total P and chlorophyll a may be considered in establishing numeric nutrient criteria for a given lake within a specified range, provides a means for balancing and minimizing designated use attainment decision errors. Copyright © 2011 SETAC.
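This kind of error-rate calculation can be sketched by simulating lakes from an assumed log-linear total P to chlorophyll a relationship and counting disagreements between the TP-based decision and the chlorophyll-based designated-use status. The coefficients, noise level, and candidate criterion below are illustrative assumptions, not USEPA's values.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 10000
    log_tp = rng.normal(np.log10(30), 0.35, n)                 # hypothetical geometric-mean TP (ug/L) across lakes
    log_chl = -0.15 + 1.0 * log_tp + rng.normal(0, 0.2, n)     # assumed log-linear TP-chlorophyll relation

    impaired = 10 ** log_chl > 20        # "true" status from the 20 ug/L chlorophyll a threshold
    tp_criterion = 30                    # candidate numeric TP criterion (ug/L)
    flagged = 10 ** log_tp > tp_criterion

    type_i  = np.mean(flagged & ~impaired)    # flagged impaired although chlorophyll is below threshold
    type_ii = np.mean(~flagged & impaired)    # passed although chlorophyll exceeds the threshold
    print(f"Type I-like rate: {type_i:.3f}, Type II-like rate: {type_ii:.3f}")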
Souvestre, P A; Landrock, C K; Blaber, A P
2008-08-01
Human factors-centered aviation accident analyses report that skill-based errors are known to be the cause of 80% of all accidents, decision-making-related errors 30%, and perceptual errors 6%. In-flight decision-making error has long been recognized as a major avenue leading to incidents and accidents. Over the past three decades, tremendous and costly efforts have been made to clarify causation, roles and responsibility, as well as to elaborate various preventative and curative countermeasures blending state-of-the-art biomedical and technological advances and psychophysiological training strategies. In-flight related statistics have not changed significantly, and a significant number of issues remain unresolved. The Fine Postural System and its corollary, Postural Deficiency Syndrome (PDS), both defined in the 1980's, are respectively neurophysiological and medical diagnostic models that reflect central neural sensory-motor and cognitive controls regulatory status. They have been used successfully in complex neurotraumatology and related rehabilitation for over two decades. Analysis of clinical data taken over a ten-year period from acute and chronic post-traumatic PDS patients shows a strong correlation between symptoms commonly exhibited before, alongside, or even after error, and sensory-motor or PDS-related symptoms. Examples are given of how PDS-related central sensory-motor control dysfunction can be correctly identified and monitored via a neurophysiological ocular-vestibular-postural monitoring system. The data presented provide strong evidence that a specific biomedical assessment methodology can lead to a better understanding of in-flight adaptive neurophysiological, cognitive and perceptual dysfunctional status that could induce in-flight errors. How relevant human factors can be identified and leveraged to maintain optimal performance will be addressed.
Shah, Priya; Wyatt, Jeremy C; Makubate, Boikanyo; Cross, Frank W
2011-01-01
Objective: Expert authorities recommend clinical decision support systems to reduce prescribing error rates, yet large numbers of insignificant on-screen alerts presented in modal dialog boxes persistently interrupt clinicians, limiting the effectiveness of these systems. This study compared the impact of modal and non-modal electronic (e-) prescribing alerts on prescribing error rates, to help inform the design of clinical decision support systems. Design: A randomized study of 24 junior doctors each performing 30 simulated prescribing tasks in random order with a prototype e-prescribing system. Using a within-participant design, doctors were randomized to be shown one of three types of e-prescribing alert (modal, non-modal, no alert) during each prescribing task. Measurements: The main outcome measure was prescribing error rate. Structured interviews were performed to elicit participants' preferences for the prescribing alerts and their views on clinical decision support systems. Results: Participants exposed to modal alerts were 11.6 times less likely to make a prescribing error than those not shown an alert (OR 11.56, 95% CI 6.00 to 22.26). Those shown a non-modal alert were 3.2 times less likely to make a prescribing error (OR 3.18, 95% CI 1.91 to 5.30) than those not shown an alert. The error rate with non-modal alerts was 3.6 times higher than with modal alerts (95% CI 1.88 to 7.04). Conclusions: Both kinds of e-prescribing alerts significantly reduced prescribing error rates, but modal alerts were over three times more effective than non-modal alerts. This study provides new evidence about the relative effects of modal and non-modal alerts on prescribing outcomes. PMID:21836158
Decision Fusion with Channel Errors in Distributed Decode-Then-Fuse Sensor Networks
Yan, Yongsheng; Wang, Haiyan; Shen, Xiaohong; Zhong, Xionghu
2015-01-01
Decision fusion for distributed detection in sensor networks under non-ideal channels is investigated in this paper. Usually, the local decisions are transmitted to the fusion center (FC) and decoded, and a fusion rule is then applied to achieve a global decision. We propose an optimal likelihood ratio test (LRT)-based fusion rule to take the uncertainty of the decoded binary data due to modulation, reception mode and communication channel into account. The average bit error rate (BER) is employed to characterize such an uncertainty. Further, the detection performance is analyzed under both non-identical and identical local detection performance indices. In addition, the performance of the proposed method is compared with the existing optimal and suboptimal LRT fusion rules. The results show that the proposed fusion rule is more robust compared to these existing ones. PMID:26251908
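A minimal sketch of the decode-then-fuse idea in the standard Chair-Varshney log-likelihood-ratio form, with the channel bit error rate folded into the received-bit likelihoods. The local operating points and BERs below are made up, and the paper's exact fusion rule may differ from this generic form.

    import numpy as np

    def fuse_decisions(u, pd, pf, ber):
        """Log-likelihood-ratio fusion of decoded binary local decisions.
        u   : received local decisions (0/1) after decoding
        pd  : local detection probabilities under H1
        pf  : local false-alarm probabilities under H0
        ber : average bit error rate of each reporting channel"""
        u, pd, pf, ber = map(np.asarray, (u, pd, pf, ber))
        p1_h1 = pd * (1 - ber) + (1 - pd) * ber   # P(receive 1 | H1), channel flips folded in
        p1_h0 = pf * (1 - ber) + (1 - pf) * ber   # P(receive 1 | H0)
        llr = np.where(u == 1,
                       np.log(p1_h1 / p1_h0),
                       np.log((1 - p1_h1) / (1 - p1_h0)))
        return llr.sum()                          # compare with a threshold for the global decision

    stat = fuse_decisions(u=[1, 0, 1, 1], pd=[0.9, 0.8, 0.85, 0.9],
                          pf=[0.1, 0.05, 0.1, 0.08], ber=[0.02, 0.05, 0.02, 0.1])
    print("fusion statistic:", stat, "-> decide H1" if stat > 0 else "-> decide H0")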
A Grounded Theory Study of Aircraft Maintenance Technician Decision-Making
NASA Astrophysics Data System (ADS)
Norcross, Robert
Aircraft maintenance technician decision-making and actions have resulted in aircraft system errors causing aircraft incidents and accidents. Aircraft accident investigators and researchers have examined the factors that influence aircraft maintenance technician errors and categorized the types of errors in an attempt to prevent similar occurrences. New aircraft technology introduced to improve aviation safety and efficiency incurs failures for which no information is contained in the aircraft maintenance manuals. According to the Federal Aviation Administration, aircraft maintenance technicians must use only approved aircraft maintenance documents to repair, modify, and service aircraft. This qualitative research used a grounded theory approach to explore the decision-making processes and actions taken by aircraft maintenance technicians when confronted with an aircraft problem not contained in the aircraft maintenance manuals. The target population for the research was Federal Aviation Administration licensed aircraft and power plant mechanics from across the United States. Nonprobability purposeful sampling was used to obtain aircraft maintenance technicians with the experience sought in the study problem. The sample population recruitment yielded 19 participants for eight focus group sessions to obtain opinions, perceptions, and experiences related to the study problem. All data collected were entered into the ATLAS.ti qualitative analysis software. Aircraft Maintenance Technician decision-making themes emerged regarding Aircraft Maintenance Manual content, Aircraft Maintenance Technician experience, and the legal implications of not following Aircraft Maintenance Manuals. Conclusions from this study suggest Aircraft Maintenance Technician decision-making was influenced by experience, gaps in the Aircraft Maintenance Manuals, reliance on others, realizing the impact of decisions concerning aircraft airworthiness, management pressures, and legal concerns related to decision-making. Recommendations included an in-depth systematic review of the Aircraft Maintenance Manuals, development of a Federal Aviation Administration approved standardized Aircraft Maintenance Technician decision-making flow diagram, and implementation of risk-based decision-making training. The benefit of this study is to save the airline industry revenue by preventing poor decision-making practices that result in inefficient maintenance actions and aircraft incidents and accidents.
ERIC Educational Resources Information Center
Ramos, Erica; Alfonso, Vincent C.; Schermerhorn, Susan M.
2009-01-01
The interpretation of cognitive test scores often leads to decisions concerning the diagnosis, educational placement, and types of interventions used for children. Therefore, it is important that practitioners administer and score cognitive tests without error. This study assesses the frequency and types of examiner errors that occur during the…
Effects of Shame and Guilt on Error Reporting Among Obstetric Clinicians.
Zabari, Mara Lynne; Southern, Nancy L
2018-04-17
To understand how the experiences of shame and guilt, coupled with organizational factors, affect error reporting by obstetric clinicians. Descriptive cross-sectional. A sample of 84 obstetric clinicians from three maternity units in Washington State. In this quantitative inquiry, a variant of the Test of Self-Conscious Affect was used to measure proneness to guilt and shame. In addition, we developed questions to assess attitudes regarding concerns about damaging one's reputation if an error was reported and the choice to keep an error to oneself. Both assessments were analyzed separately and then correlated to identify relationships between constructs. Interviews were used to identify organizational factors that affect error reporting. As a group, mean scores indicated that obstetric clinicians would not choose to keep errors to themselves. However, bivariate correlations showed that proneness to shame was positively correlated to concerns about one's reputation if an error was reported, and proneness to guilt was negatively correlated with keeping errors to oneself. Interview data analysis showed that Past Experience with Responses to Errors, Management and Leadership Styles, Professional Hierarchy, and Relationships With Colleagues were influential factors in error reporting. Although obstetric clinicians want to report errors, their decisions to report are influenced by their proneness to guilt and shame and perceptions of the degree to which organizational factors facilitate or create barriers to restore their self-images. Findings underscore the influence of the organizational context on clinicians' decisions to report errors. Copyright © 2018 AWHONN, the Association of Women’s Health, Obstetric and Neonatal Nurses. Published by Elsevier Inc. All rights reserved.
An Overview of Judgment and Decision Making Research Through the Lens of Fuzzy Trace Theory.
Setton, Roni; Wilhelms, Evan; Weldon, Becky; Chick, Christina; Reyna, Valerie
2014-12-01
We present the basic tenets of fuzzy trace theory, a comprehensive theory of memory, judgment, and decision making that is grounded in research on how information is stored as knowledge, mentally represented, retrieved from storage, and processed. In doing so, we highlight how it is distinguished from traditional models of decision making in that gist reasoning plays a central role. The theory also distinguishes advanced intuition from primitive impulsivity. It predicts that different sorts of errors occur with respect to each component of judgment and decision making: background knowledge, representation, retrieval, and processing. Classic errors in the judgment and decision making literature, such as risky-choice framing and the conjunction fallacy, are accounted for by fuzzy trace theory and new results generated by the theory contradict traditional approaches. We also describe how developmental changes in brain and behavior offer crucial insight into adult cognitive processing. Research investigating brain and behavior in developing and special populations supports fuzzy trace theory's predictions about reliance on gist processing.
Unsworth, Nash; Brewer, Gene A; Spillers, Gregory J
2011-09-01
In three experiments search termination decisions were examined as a function of response type (correct vs. incorrect) and confidence. It was found that the time between the last retrieved item and the decision to terminate search (exit latency) was related to the type of response and confidence in the last item retrieved. Participants were willing to search longer when the last retrieved item was a correct item vs. an incorrect item and when the confidence was high in the last retrieved item. It was also found that the number of errors retrieved during the recall period was related to search termination decisions such that the more errors retrieved, the more likely participants were to terminate the search. Finally, it was found that knowledge of overall search set size influenced the time needed to search for items, but did not influence search termination decisions. Copyright © 2011 Elsevier B.V. All rights reserved.
A risk-based approach to flood management decisions in a nonstationary world
NASA Astrophysics Data System (ADS)
Rosner, Ana; Vogel, Richard M.; Kirshen, Paul H.
2014-03-01
Traditional approaches to flood management in a nonstationary world begin with a null hypothesis test of "no trend" and its likelihood, with little or no attention given to the likelihood that we might ignore a trend if it really existed. Concluding a trend exists when it does not, or rejecting a trend when it exists, are known as type I and type II errors, respectively. Decision-makers are poorly served by statistical and/or decision methods that do not carefully consider both over-preparation and under-preparation errors. Similarly, little attention is given to how to integrate uncertainty in our ability to detect trends into a flood management decision context. We show how trend hypothesis test results can be combined with an adaptation's infrastructure costs and damages avoided to provide a rational decision approach in a nonstationary world. The criterion of expected regret is shown to be a useful metric that integrates the statistical, economic, and hydrological aspects of the flood management problem in a nonstationary world.
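A toy expected-regret calculation for the adapt/do-not-adapt choice, assuming the adaptation fully prevents flood damages; the costs and trend probability are invented for illustration and are not the paper's case study values.

    # Hypothetical costs (same monetary units) and assessed probability that the flood trend is real
    adapt_cost   = 3.0     # cost of building the adaptation now
    flood_damage = 10.0    # damages if the trend is real and nothing is built
    p_trend      = 0.4

    # Regret = cost actually incurred minus cost of the best action for that state of the world
    regret_adapt    = {"trend": 0.0, "no_trend": adapt_cost}                    # over-preparation error
    regret_no_adapt = {"trend": flood_damage - adapt_cost, "no_trend": 0.0}     # under-preparation error

    exp_regret_adapt    = p_trend * regret_adapt["trend"]    + (1 - p_trend) * regret_adapt["no_trend"]
    exp_regret_no_adapt = p_trend * regret_no_adapt["trend"] + (1 - p_trend) * regret_no_adapt["no_trend"]

    print("expected regret if we adapt:   ", exp_regret_adapt)
    print("expected regret if we do not:  ", exp_regret_no_adapt)
    print("choose:", "adapt" if exp_regret_adapt < exp_regret_no_adapt else "do not adapt")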
An Overview of Judgment and Decision Making Research Through the Lens of Fuzzy Trace Theory
Setton, Roni; Wilhelms, Evan; Weldon, Becky; Chick, Christina; Reyna, Valerie
2017-01-01
We present the basic tenets of fuzzy trace theory, a comprehensive theory of memory, judgment, and decision making that is grounded in research on how information is stored as knowledge, mentally represented, retrieved from storage, and processed. In doing so, we highlight how it is distinguished from traditional models of decision making in that gist reasoning plays a central role. The theory also distinguishes advanced intuition from primitive impulsivity. It predicts that different sorts of errors occur with respect to each component of judgment and decision making: background knowledge, representation, retrieval, and processing. Classic errors in the judgment and decision making literature, such as risky-choice framing and the conjunction fallacy, are accounted for by fuzzy trace theory and new results generated by the theory contradict traditional approaches. We also describe how developmental changes in brain and behavior offer crucial insight into adult cognitive processing. Research investigating brain and behavior in developing and special populations supports fuzzy trace theory’s predictions about reliance on gist processing. PMID:28725239
NASA Technical Reports Server (NTRS)
Simon, M.; Tkacenko, A.
2006-01-01
In a previous publication [1], an iterative closed-loop carrier synchronization scheme for binary phase-shift keyed (BPSK) modulation was proposed that was based on feeding back data decisions to the input of the loop, the purpose being to remove the modulation prior to carrier synchronization as opposed to the more conventional decision-feedback schemes that incorporate such feedback inside the loop. The idea there was that, with sufficient independence between the received data and the decisions on it that are fed back (as would occur in an error-correction coding environment with sufficient decoding delay), a pure tone in the presence of noise would ultimately be produced (after sufficient iteration and low enough error probability) and thus could be tracked without any squaring loss. This article demonstrates that, with some modification, the same idea of iterative information reduction through decision feedback can be applied to quadrature phase-shift keyed (QPSK) modulation, something that was mentioned in the previous publication but never pursued.
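The modulation-removal idea can be illustrated open-loop: multiplying received QPSK symbols by the conjugate of (mostly correct) fed-back decisions leaves approximately a pure carrier tone whose phase can then be estimated. This is a simplified stand-in for the article's iterative closed-loop scheme; the noise level and residual decision-error rate are arbitrary.

    import numpy as np

    rng = np.random.default_rng(2)
    n, phi = 2000, 0.3                                    # unknown carrier phase (rad)
    bits = rng.integers(0, 4, n)
    qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))    # unit-energy QPSK symbols
    r = qpsk * np.exp(1j * phi) + 0.1 * (rng.normal(size=n) + 1j * rng.normal(size=n))

    # Decisions fed back from outside the loop (here assumed mostly correct, as after decoding)
    decisions = qpsk.copy()
    errors = rng.random(n) < 0.01                         # a few residual decision errors
    decisions[errors] *= np.exp(1j * np.pi / 2)

    tone = r * np.conj(decisions)                         # modulation wipe-off leaves ~exp(j*phi) + noise
    print("estimated carrier phase:", np.angle(tone.mean()), "true:", phi)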
Affective forecasting: an unrecognized challenge in making serious health decisions.
Halpern, Jodi; Arnold, Robert M
2008-10-01
Patients facing medical decisions that will impact quality of life make assumptions about how they will adjust emotionally to living with health declines and disability. Despite abundant research on decision-making, we have no direct research on how accurately patients envision their future well-being and how this influences their decisions. Outside medicine, psychological research on "affective forecasting" consistently shows that people poorly predict their future ability to adapt to adversity. This finding is important for medicine, since many serious health decisions hinge on quality-of-life judgments. We describe three specific mechanisms for affective forecasting errors that may influence health decisions: focalism, in which people focus more on what will change than on what will stay the same; immune neglect, in which they fail to envision how their own coping skills will lessen their unhappiness; and failure to predict adaptation, in which people fail to envision shifts in what they value. We discuss emotional and social factors that interact with these cognitive biases. We describe how caregivers can recognize these biases in the clinical setting and suggest interventions to help patients recognize and address affective forecasting errors.
Bailey, Stephanie L.; Bono, Rose S.; Nash, Denis; Kimmel, April D.
2018-01-01
Background: Spreadsheet software is increasingly used to implement systems science models informing health policy decisions, both in academia and in practice where technical capacity may be limited. However, spreadsheet models are prone to unintentional errors that may not always be identified using standard error-checking techniques. Our objective was to illustrate, through a methodologic case study analysis, the impact of unintentional errors on model projections by implementing parallel model versions. Methods: We leveraged a real-world need to revise an existing spreadsheet model designed to inform HIV policy. We developed three parallel versions of a previously validated spreadsheet-based model; versions differed by the spreadsheet cell-referencing approach (named single cells; column/row references; named matrices). For each version, we implemented three model revisions (re-entry into care; guideline-concordant treatment initiation; immediate treatment initiation). After standard error-checking, we identified unintentional errors by comparing model output across the three versions. Concordant model output across all versions was considered error-free. We calculated the impact of unintentional errors as the percentage difference in model projections between model versions with and without unintentional errors, using +/-5% difference to define a material error. Results: We identified 58 original and 4,331 propagated unintentional errors across all model versions and revisions. Over 40% (24/58) of original unintentional errors occurred in the column/row reference model version; most (23/24) were due to incorrect cell references. Overall, >20% of model spreadsheet cells had material unintentional errors. When examining error impact along the HIV care continuum, the percentage difference between versions with and without unintentional errors ranged from +3% to +16% (named single cells), +26% to +76% (column/row reference), and 0% (named matrices). Conclusions: Standard error-checking techniques may not identify all errors in spreadsheet-based models. Comparing parallel model versions can aid in identifying unintentional errors and promoting reliable model projections, particularly when resources are limited. PMID:29570737
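The core of the comparison step can be expressed as a small check that flags outputs differing by more than the +/-5% materiality threshold across parallel versions; the function and example projections below are illustrative and are not the study's spreadsheet or its actual outputs.

    def compare_versions(reference, alternates, tolerance=0.05):
        """Flag outputs whose projections differ by more than +/-5% across parallel model versions.
        reference : dict mapping output name -> value from one model version
        alternates: list of dicts with the same keys from the other versions"""
        flags = {}
        for key, ref in reference.items():
            diffs = [(alt[key] - ref) / ref for alt in alternates]
            if any(abs(d) > tolerance for d in diffs):
                flags[key] = [f"{d:+.1%}" for d in diffs]
        return flags

    # Hypothetical projections for one step of the HIV care continuum from three versions
    v_named_cells  = {"in_care": 1200, "on_treatment": 950}
    v_col_row      = {"in_care": 1510, "on_treatment": 960}
    v_named_matrix = {"in_care": 1200, "on_treatment": 950}

    print(compare_versions(v_named_cells, [v_col_row, v_named_matrix]))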
Decision making in urological surgery.
Abboudi, Hamid; Ahmed, Kamran; Normahani, Pasha; Abboudi, May; Kirby, Roger; Challacombe, Ben; Khan, Mohammed Shamim; Dasgupta, Prokar
2012-06-01
Non-technical skills are important behavioural aspects in which a urologist must be fully competent to minimise harm to patients. The majority of surgical errors are now known to be due to errors in judgment and decision making as opposed to the technical aspects of the craft. The authors reviewed the published literature regarding decision-making theory and practice related to urology, as well as the current tools available to assess decision-making skills. Limitations include the limited number of studies and the low quality of the available studies. Decision making is the psychological process of choosing between alternative courses of action. In the surgical environment, this can often be a complex balance of benefit and risk within a variable time frame and dynamic setting. In recent years, the emphasis of new surgical curriculums has shifted towards non-technical surgical skills; however, the assessment tools in place are far from objective, reliable and valid. Surgical simulators and video-assisted questionnaires are useful methods for appraisal of trainees. Well-designed, robust and validated tools need to be implemented in training and assessment of decision-making skills in urology. Patient safety can only be ensured when safe and effective decisions are made.
Regret and the rationality of choices
Bourgeois-Gironde, Sacha
2010-01-01
Regret helps to optimize decision behaviour. It can be defined as a rational emotion. Several recent neurobiological studies have confirmed the interface between emotion and cognition at which regret is located and documented its role in decision behaviour. These data give credibility to the incorporation of regret in decision theory that had been proposed by economists in the 1980s. However, finer distinctions are required in order to get a better grasp of how regret and behaviour influence each other. Regret can be defined as a predictive error signal but this signal does not necessarily transpose into a decision-weight influencing behaviour. Clinical studies on several types of patients show that the processing of an error signal and its influence on subsequent behaviour can be dissociated. We propose a general understanding of how regret and decision-making are connected in terms of regret being modulated by rational antecedents of choice. Regret and the modification of behaviour on its basis will depend on the criteria of rationality involved in decision-making. We indicate current and prospective lines of research in order to refine our views on how regret contributes to optimal decision-making. PMID:20026463
Jolley, Suzanne; Thompson, Claire; Hurley, James; Medin, Evelina; Butler, Lucy; Bebbington, Paul; Dunn, Graham; Freeman, Daniel; Fowler, David; Kuipers, Elizabeth; Garety, Philippa
2014-10-30
Understanding how people with delusions arrive at false conclusions is central to the refinement of cognitive behavioural interventions. Making hasty decisions based on limited data ('jumping to conclusions', JTC) is one potential causal mechanism, but reasoning errors may also result from other processes. In this study, we investigated the correlates of reasoning errors under differing task conditions in 204 participants with schizophrenia spectrum psychosis who completed three probabilistic reasoning tasks. Psychotic symptoms, affect, and IQ were also evaluated. We found that hasty decision makers were more likely to draw false conclusions, but only 37% of their reasoning errors were consistent with the limited data they had gathered. The remainder directly contradicted all the presented evidence. Reasoning errors showed task-dependent associations with IQ, affect, and psychotic symptoms. We conclude that limited data-gathering contributes to false conclusions but is not the only mechanism involved. Delusions may also be maintained by a tendency to disregard evidence. Low IQ and emotional biases may contribute to reasoning errors in more complex situations. Cognitive strategies to reduce reasoning errors should therefore extend beyond encouragement to gather more data, and incorporate interventions focused directly on these difficulties. Copyright © 2014 The Authors. Published by Elsevier Ireland Ltd.. All rights reserved.
ERIC Educational Resources Information Center
Wang, Tianyou; And Others
M. J. Kolen, B. A. Hanson, and R. L. Brennan (1992) presented a procedure for assessing the conditional standard error of measurement (CSEM) of scale scores using a strong true-score model. They also investigated the ways of using nonlinear transformation from number-correct raw score to scale score to equalize the conditional standard error along…
Lindahl, Jonas; Danell, Rickard
The aim of this study was to provide a framework for evaluating bibliometric indicators as decision support tools from a decision-making perspective and to examine the information value of early career publication rate as a predictor of future productivity. We used ROC analysis to evaluate a bibliometric indicator as a tool for binary decision making. The dataset consisted of 451 early career researchers in the mathematical sub-field of number theory. We investigated the effect of three different definitions of top performance groups (top 10, top 25, and top 50 %); the consequences of using different thresholds in the prediction models; and the added prediction value of information on early career research collaboration and publications in prestige journals. We conclude that early career productivity has information value in all tested decision scenarios, but future performance is more predictable if the definition of a high performance group is more exclusive. Estimated optimal decision thresholds using the Youden index indicated that the top 10 % decision scenario should use 7 articles, the top 25 % scenario should use 7 articles, and the top 50 % scenario should use 5 articles to minimize prediction errors. A comparative analysis between the decision thresholds provided by the Youden index, which takes consequences into consideration, and a method commonly used in evaluative bibliometrics, which does not take consequences into consideration when determining decision thresholds, indicated that the differences are trivial for the top 25 and top 50 % groups. However, a statistically significant difference between the methods was found for the top 10 % group. Information on early career collaboration and publication strategies did not add any prediction value to the bibliometric indicator publication rate in any of the models. The key contributions of this research are the focus on consequences in terms of prediction errors and the notion of transforming uncertainty into risk when choosing decision thresholds in bibliometrically informed decision making. The significance of our results is discussed from the point of view of science policy and management.
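The threshold-selection step described above can be illustrated with a small sketch that scans candidate publication-count cutoffs and picks the one maximizing the Youden index (J = sensitivity + specificity - 1); the data below are synthetic, not the study's.

```python
# Illustrative sketch of threshold selection with the Youden index
# (J = sensitivity + specificity - 1); the data below are synthetic, not the study's.
import numpy as np

def youden_threshold(scores, labels):
    """Scan integer thresholds and return the one maximizing the Youden index."""
    best_t, best_j = None, -1.0
    for t in range(int(scores.min()), int(scores.max()) + 1):
        pred = scores >= t
        tp = np.sum(pred & (labels == 1))
        fn = np.sum(~pred & (labels == 1))
        tn = np.sum(~pred & (labels == 0))
        fp = np.sum(pred & (labels == 0))
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

rng = np.random.default_rng(0)
early_pubs = rng.poisson(4, 451)                                  # early-career publication counts (synthetic)
top_future = (early_pubs + rng.poisson(2, 451) > 8).astype(int)   # synthetic "top performer" label
print(youden_threshold(early_pubs, top_future))
```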
Bansback, Nick; Li, Linda C; Lynd, Larry; Bryan, Stirling
2014-08-01
Patient decision aids (PtDA) are developed to facilitate informed, value-based decisions about health. Research suggests that even when patients are informed with the necessary evidence and information, cognitive errors can prevent them from choosing the option that is most congruent with their own values. We sought to utilize principles of behavioural economics to develop a computer application that presents information from conventional decision aids in a way that reduces these errors, subsequently promoting higher quality decisions. The Dynamic Computer Interactive Decision Application (DCIDA) was developed to target four common errors that can impede quality decision making with PtDAs: unstable values, order effects, overweighting of rare events, and information overload. Healthy volunteers were recruited to an interview to use three PtDAs converted to the DCIDA on a computer equipped with an eye tracker. Participants first used a conventional PtDA and subsequently used the DCIDA version. User testing assessed whether respondents found the software both usable, evaluated using a) eye-tracking, b) the system usability scale, and c) user verbal responses from a 'think aloud' protocol; and useful, evaluated using a) eye-tracking, b) whether preferences for options were changed, and c) the decisional conflict scale. Of the 20 participants recruited to the study, 11 were male (55%), the mean age was 35, 18 had at least a high school education (90%), and 8 (40%) had a college or university degree. Eye-tracking results, alongside a mean system usability scale score of 73 (range 68-85), indicated a reasonable degree of usability for the DCIDA. The think aloud study suggested areas for further improvement. The DCIDA also appeared to be useful to participants, in that subjects focused more on the features of the decision that were most important to them (21% increase in time spent focusing on the most important feature). Seven subjects (25%) changed their preferred option when using DCIDA. Preliminary results suggest that DCIDA has potential to improve the quality of patient decision-making. Next steps include larger studies to test individual components of DCIDA and feasibility testing with patients making real decisions.
Decision feedback loop for tracking a polyphase modulated carrier
NASA Technical Reports Server (NTRS)
Simon, M. K. (Inventor)
1974-01-01
A multiple phase modulated carrier tracking loop for use in a frequency shift keying system is described in which carrier tracking efficiency is improved by making use of the decision signals made on the data phase transmitted in each T-second interval. The decision signal is used to produce a pair of decision-feedback quadrature signals for enhancing the loop's performance in developing a loop phase error signal.
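A minimal simulation sketch of the decision-feedback idea, assuming a QPSK-like signal rather than the patented circuit: the symbol decision made in each interval is fed back to form the loop phase error, so the data modulation is removed before the loop filter. All parameters are illustrative.

```python
# Illustrative decision-directed carrier phase tracking loop for QPSK
# (not the patented circuit): each symbol decision feeds back into the
# phase error, e = Q*sign(I_hat) - I*sign(Q_hat), which drives a first-order loop.
import numpy as np

rng = np.random.default_rng(1)
n_symbols = 2000
true_phase = 0.3                      # constant carrier phase offset (radians)
loop_gain = 0.05                      # illustrative first-order loop gain
noise_std = 0.1

bits = rng.integers(0, 2, (n_symbols, 2)) * 2 - 1            # QPSK symbols (+/-1, +/-1)
tx = (bits[:, 0] + 1j * bits[:, 1]) / np.sqrt(2)

phase_est = 0.0
for k in range(n_symbols):
    r = tx[k] * np.exp(1j * true_phase) + noise_std * (rng.normal() + 1j * rng.normal())
    d = r * np.exp(-1j * phase_est)                           # derotate by current estimate
    i_hat, q_hat = np.sign(d.real), np.sign(d.imag)           # symbol decisions (feedback)
    error = d.imag * i_hat - d.real * q_hat                   # decision-directed phase error
    phase_est += loop_gain * error                            # loop filter / NCO update

print(f"estimated phase offset: {phase_est:.3f} rad (true {true_phase} rad)")
```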
Rational Choice and the Framing of Decisions.
1986-05-29
decision under risk is the derivation of the expected utility rule from simple principles of rational choice that make no reference to long-run... corrective power of incentives depends on the nature of the particular error and cannot be taken for granted. The assumption of rationality of decision making... easily eliminated by experience must be demonstrated. Finally, it is sometimes argued that failures of rationality in individual decision making are
Principal Candidates Create Decision-Making Simulations to Prepare for the JOB
ERIC Educational Resources Information Center
Staub, Nancy A.; Bravender, Marlena
2014-01-01
Online simulations offer opportunities for trial-and-error decision-making. What better tool for a principal than making decisions whose consequences will not have real-world ramifications? In this study, two groups of graduate students in a principal preparation program, taking the same course in the same semester, use online simulations…
ERIC Educational Resources Information Center
VanDerHeyden, Amanda M.; Burns, Matthew K.; Bonifay, Wesley
2018-01-01
Screening is necessary to detect risk and prevent reading failure. Yet the amount of screening that commonly occurs in U.S. schools may undermine its value, creating more error in decision making and lost instructional opportunity. This 2-year longitudinal study examined the decision accuracy associated with collecting concurrent reading screening…
USDA-ARS?s Scientific Manuscript database
Plant disease management decision aids typically require inputs of weather elements such as air temperature. Whereas many disease models are created based on weather elements at the crop canopy, and with relatively fine time resolution, the decision aids commonly are implemented with hourly weather...
Agency and Error in Young Adults' Stories of Sexual Decision Making
ERIC Educational Resources Information Center
Allen, Katherine R.; Husser, Erica K.; Stone, Dana J.; Jordal, Christian E.
2008-01-01
We conducted a qualitative analysis of 148 college students' written comments about themselves as sexual decision makers. Most participants described experiences in which they were actively engaged in decision-making processes of "waiting it out" to "working it out." The four patterns were (a) I am in control, (b) I am experimenting and learning,…
Gaming against medical errors: methods and results from a design game on CPOE.
Kanstrup, Anne Marie; Nøhr, Christian
2009-01-01
The paper presents the design game as a technique for the participatory design of a Computerized Decision Support System (CDSS) for minimizing medical errors. The design game is used to work with the skills of users, the complexity of the use practice, and the negotiation of design within the challenging domain of medication. The paper presents a design game developed with game inspiration from a computer game, theoretical inspiration from work on electronic decision support, and empirical grounding in scenarios of medical errors. The game was played in a two-hour workshop with six clinicians. The result is presented as a list of central themes for the design of CDSSs and design principles derived from these themes. These principles are currently being explored further in follow-up, prototype-based activities.
Types of errors by referees and perception of injustice by soccer players: a preliminary study.
Canovas, Sophie; Reynes, Eric; Ferrand, Claude; Pantaléon, Nathalie; Long, Thierry
2008-02-01
This study investigated the effect of referees' errors on players' perceived injustice in soccer. The conditions investigated were Referee Decision, with three types: Correctly Called a foul vs Wrongly Called a foul vs Did not Call a foul and Repetition of the Situation, with two types: Isolated vs Repeated. Male soccer players at regional and departmental levels of practice (N = 95, M(age) = 23.2, SD = 5.1) were asked to rank six hypothetical situations according to the perceived injustice. Analysis indicated significant effects of Referee Decisions and Repetition of the Situation on the perception of injustice, but showed no differences between the types of error. However, age and years of soccer experience were associated with perception of injustice when the referee correctly called a foul.
Santos, Adriano A; Moura, J Antão B; de Araújo, Joseana Macêdo Fechine Régis
2015-01-01
Mitigating the uncertainty and risks that specialist physicians face in analysing rare clinical cases is something anyone who needs health services would want. Clinical cases never before seen by these experts, and for which little documentation exists, may introduce errors into decision-making. Such errors harm patients' well-being, increase procedure costs, rework and health insurance premiums, and damage the reputation of the specialists and medical systems involved. In this context, IT and Clinical Decision Support Systems (CDSS) play a fundamental role: they support the decision-making process, making it more efficient and effective, reducing the number of avoidable medical errors and enhancing the quality of treatment given to patients. An investigation has been initiated to examine the characteristics and solution requirements of this problem, model it, propose a general solution in the form of a conceptual risk-based, automated framework to support rare-case medical diagnostics, and validate it by means of case studies. A preliminary validation study of the proposed framework was carried out through interviews with experts who are practicing professionals, academics, and researchers in health care. This paper summarizes the investigation and its positive results. These results motivate continued research towards development of the conceptual framework and of a software tool that implements the proposed model.
Smart algorithms and adaptive methods in computational fluid dynamics
NASA Astrophysics Data System (ADS)
Tinsley Oden, J.
1989-05-01
A review is presented of the use of smart algorithms which employ adaptive methods in processing large amounts of data in computational fluid dynamics (CFD). Smart algorithms use a rationally based set of criteria for automatic decision making in an attempt to produce optimal simulations of complex fluid dynamics problems. The information needed to make these decisions is not known beforehand and evolves in structure and form during the numerical solution of flow problems. Once the code makes a decision based on the available data, the structure of the data may change, and the criteria may be reapplied in order to direct the analysis toward an acceptable end. Intelligent decisions are thus made by processing vast amounts of data that evolve unpredictably during the calculation. The basic components of adaptive methods, and their application to complex problems of fluid dynamics, are reviewed: (1) data structures, that is, what approaches are available for modifying the data structures of an approximation so as to reduce errors; (2) error estimation, that is, what techniques exist for estimating error evolution in a CFD calculation; and (3) solvers, that is, what algorithms are available that can function on changing meshes. Numerical examples which demonstrate the viability of these approaches are presented.
Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection.
Meng, Jiahui; Zhao, Danfeng; Tian, Hai; Zhang, Liang
2018-01-15
In order to improve the performance of non-binary low-density parity check codes (LDPC) hard decision decoding algorithm and to reduce the complexity of decoding, a sum of the magnitude for hard decision decoding algorithm based on loop update detection is proposed. This will also ensure the reliability, stability and high transmission rate of 5G mobile communication. The algorithm is based on the hard decision decoding algorithm (HDA) and uses the soft information from the channel to calculate the reliability, while the sum of the variable nodes' (VN) magnitude is excluded for computing the reliability of the parity checks. At the same time, the reliability information of the variable node is considered and the loop update detection algorithm is introduced. The bit corresponding to the error code word is flipped multiple times, before this is searched in the order of most likely error probability to finally find the correct code word. Simulation results show that the performance of one of the improved schemes is better than the weighted symbol flipping (WSF) algorithm under different hexadecimal numbers by about 2.2 dB and 2.35 dB at the bit error rate (BER) of 10^-5 over an additive white Gaussian noise (AWGN) channel, respectively. Furthermore, the average number of decoding iterations is significantly reduced.
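For illustration of the underlying hard-decision flip-and-recheck loop (not the proposed non-binary magnitude-sum algorithm), a minimal binary bit-flipping decoder is sketched below; the tiny parity-check matrix is invented for the example.

```python
# Minimal binary bit-flipping decoder sketch (illustrating the generic hard-decision
# flip-and-recheck loop, not the paper's non-binary magnitude-sum algorithm).
# The tiny parity-check matrix H below is illustrative only.
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])

def bit_flip_decode(r, H, max_iters=20):
    """Flip the bits involved in the most unsatisfied checks until the syndrome is zero."""
    r = r.copy()
    for _ in range(max_iters):
        syndrome = H @ r % 2
        if not syndrome.any():
            return r, True                        # valid codeword found
        unsatisfied = H.T @ syndrome              # per-bit count of failed checks
        r[unsatisfied == unsatisfied.max()] ^= 1  # flip the worst bits
    return r, False

codeword = np.zeros(6, dtype=int)                 # the all-zero codeword is always valid
received = codeword.copy()
received[2] ^= 1                                  # inject a single hard-decision error
decoded, ok = bit_flip_decode(received, H)
print(decoded, "converged" if ok else "failed")
```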
Automatic system testing of a decision support system for insulin dosing using Google Android.
Spat, Stephan; Höll, Bernhard; Petritsch, Georg; Schaupp, Lukas; Beck, Peter; Pieber, Thomas R
2013-01-01
Hyperglycaemia in hospitalized patients is a common and costly health care problem. The GlucoTab system is a mobile workflow and decision support system, aiming to facilitate efficient and safe glycemic control of non-critically ill patients. Being a medical device, the GlucoTab requires extensive and reproducible testing. A framework for high-volume, reproducible and automated system testing of the GlucoTab system was set up applying several Open Source tools for test automation and system time handling. The REACTION insulin titration protocol was investigated in a paper-based clinical trial (PBCT). In order to validate the GlucoTab system, data from this trial was used for simulation and system tests. In total, 1190 decision support action points were identified and simulated. Four data points (0.3%) resulted in a GlucoTab system error caused by a defective implementation. In 144 data points (12.1%), calculation errors of physicians and nurses in the PBCT were detected. The test framework was able to verify manual calculation of insulin doses and detect relatively many user errors and workflow anomalies in the PBCT data. This shows the high potential of the electronic decision support application to improve safety of implementation of an insulin titration protocol and workflow management system in clinical wards.
Decision Support Alerts for Medication Ordering in a Computerized Provider Order Entry (CPOE) System
Beccaro, M. A. Del; Villanueva, R.; Knudson, K. M.; Harvey, E. M.; Langle, J. M.; Paul, W.
2010-01-01
Objective: We sought to determine the frequency and type of decision support alerts by location and ordering provider role during Computerized Provider Order Entry (CPOE) medication ordering. Using these data we adjusted the decision support tools to reduce the number of alerts. Design: Retrospective analyses were performed of dose range checks (DRC), drug-drug interaction and drug-allergy alerts from our electronic medical record. During seven sampling periods (each two weeks long) between April 2006 and October 2008 all alerts in these categories were analyzed. Another audit was performed of all DRC alerts by ordering provider role from November 2008 through January 2009. Medication ordering error counts were obtained from a voluntary error reporting system. Measurement/Results: Between April 2006 and October 2008 the percent of medication orders that triggered a dose range alert decreased from 23.9% to 7.4%. The relative risk (RR) for getting an alert was higher at the start of the interventions versus later (RR = 2.40, 95% CI 2.28-2.52; p < 0.0001). The percentage of medication orders that triggered alerts for drug-drug interactions also decreased from 13.5% to 4.8%. The RR for getting a drug interaction alert at the start was 1.63, 95% CI 1.60-1.66; p < 0.0001. Alerts decreased in all clinical areas without an increase in reported medication errors. Conclusion: We reduced the quantity of decision support alerts in CPOE using a systematic approach without an increase in reported medication errors. PMID:23616845
Error-related negativities elicited by monetary loss and cues that predict loss.
Dunning, Jonathan P; Hajcak, Greg
2007-11-19
Event-related potential studies have reported error-related negativity following both error commission and feedback indicating errors or monetary loss. The present study examined whether error-related negativities could be elicited by a predictive cue presented prior to both the decision and subsequent feedback in a gambling task. Participants were presented with a cue that indicated the probability of reward on the upcoming trial (0, 50, and 100%). Results showed a negative deflection in the event-related potential in response to loss cues compared with win cues; this waveform shared a similar latency and morphology with the traditional feedback error-related negativity.
Meta-analysis in evidence-based healthcare: a paradigm shift away from random effects is overdue.
Doi, Suhail A R; Furuya-Kanamori, Luis; Thalib, Lukman; Barendregt, Jan J
2017-12-01
Each year up to 20 000 systematic reviews and meta-analyses are published whose results influence healthcare decisions, thus making the robustness and reliability of meta-analytic methods one of the world's top clinical and public health priorities. The evidence synthesis makes use of either fixed-effect or random-effects statistical methods. The fixed-effect method has largely been replaced by the random-effects method as heterogeneity of study effects led to poor error estimation. However, despite the widespread use and acceptance of the random-effects method to correct this, it too remains unsatisfactory and continues to suffer from defective error estimation, posing a serious threat to decision-making in evidence-based clinical and public health practice. We discuss here the problem with the random-effects approach and demonstrate that there exist better estimators under the fixed-effect model framework that can achieve optimal error estimation. We argue for an urgent return to the earlier framework with updates that address these problems and conclude that doing so can markedly improve the reliability of meta-analytical findings and thus decision-making in healthcare.
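For reference, the two standard pooling estimators contrasted in the abstract can be sketched as follows; the alternative fixed-effect-framework estimators the authors advocate are not implemented here, and the study effects are synthetic.

```python
# Sketch of the two standard pooling approaches the abstract contrasts:
# inverse-variance fixed-effect and DerSimonian-Laird random-effects.
# (The alternative estimators the authors advocate are not implemented here.)
# Effect sizes and variances below are synthetic.
import numpy as np

def fixed_effect(y, v):
    w = 1.0 / v
    est = np.sum(w * y) / np.sum(w)
    return est, np.sqrt(1.0 / np.sum(w))

def dersimonian_laird(y, v):
    w = 1.0 / v
    fe, _ = fixed_effect(y, v)
    q = np.sum(w * (y - fe) ** 2)                       # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)             # between-study variance estimate
    w_star = 1.0 / (v + tau2)
    est = np.sum(w_star * y) / np.sum(w_star)
    return est, np.sqrt(1.0 / np.sum(w_star))

y = np.array([0.10, 0.35, -0.05, 0.50, 0.20])           # synthetic study effects
v = np.array([0.02, 0.05, 0.03, 0.08, 0.04])            # synthetic within-study variances
print("fixed-effect:", fixed_effect(y, v))
print("random-effects (DL):", dersimonian_laird(y, v))
```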
Uncertainty in sample estimates and the implicit loss function for soil information.
NASA Astrophysics Data System (ADS)
Lark, Murray
2015-04-01
One significant challenge in the communication of uncertain information is how to enable the sponsors of sampling exercises to make a rational choice of sample size. One way to do this is to compute the value of additional information given the loss function for errors. The loss function expresses the costs that result from decisions made using erroneous information. In certain circumstances, such as remediation of contaminated land prior to development, loss functions can be computed and used to guide rational decision making on the amount of resource to spend on sampling to collect soil information. In many circumstances the loss function cannot be obtained prior to decision making. This may be the case when multiple decisions may be based on the soil information and the costs of errors are hard to predict. The implicit loss function is proposed as a tool to aid decision making in these circumstances. Conditional on a logistical model which expresses costs of soil sampling as a function of effort, and statistical information from which the error of estimates can be modelled as a function of effort, the implicit loss function is the loss function which makes a particular decision on effort rational. In this presentation the loss function is defined and computed for a number of arbitrary decisions on sampling effort for a hypothetical soil monitoring problem. This is based on a logistical model of sampling cost parameterized from a recent geochemical survey of soil in Donegal, Ireland and on statistical parameters estimated with the aid of a process model for change in soil organic carbon. It is shown how the implicit loss function might provide a basis for reflection on a particular choice of sample size by comparing it with the values attributed to soil properties and functions. Scope for further research to develop and apply the implicit loss function to help decision making by policy makers and regulators is then discussed.
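A simplified numeric sketch of the implicit-loss idea (not Lark's exact formulation): given a linear sampling-cost model and an estimation error that shrinks as one over the square root of sample size, only one loss constant makes a chosen sample size optimal, and that constant can be read off for reflection. All numbers are hypothetical.

```python
# Illustrative numeric sketch of the implicit-loss idea (a simplification, not
# Lark's exact formulation): with sampling cost C(n) = c0 + c1*n and estimation
# error s(n) = sigma / sqrt(n), a linear loss k*s(n) makes a chosen sample size n
# optimal only for one value of k -- the implicit loss constant for that choice.
# All numbers below are hypothetical.
import numpy as np

c0, c1 = 500.0, 40.0        # fixed and per-sample cost (hypothetical currency units)
sigma = 12.0                # field-scale standard deviation of the soil property

def implied_loss_constant(n):
    """k such that total cost C(n) + k*sigma/sqrt(n) is minimized exactly at n."""
    # d/dn [c1*n + k*sigma*n**-0.5] = 0  =>  k = 2*c1*n**1.5 / sigma
    return 2.0 * c1 * n ** 1.5 / sigma

for n in (25, 50, 100, 200):
    k = implied_loss_constant(n)
    total = c0 + c1 * n + k * sigma / np.sqrt(n)
    print(f"n={n:4d}  implied loss per unit error k={k:10.1f}  total cost {total:10.1f}")
```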
GY SAMPLING THEORY IN ENVIRONMENTAL STUDIES 2: SUBSAMPLING ERROR MEASUREMENTS
Sampling can be a significant source of error in the measurement process. The characterization and cleanup of hazardous waste sites require data that meet site-specific levels of acceptable quality if scientifically supportable decisions are to be made. In support of this effort,...
Gilis, Bart; Helsen, Werner; Catteeuw, Peter; Wagemans, Johan
2008-03-01
This study investigated the offside decision-making process in association football. The first aim was to capture the specific offside decision-making skills in complex dynamic events. Second, we analyzed the type of errors to investigate the factors leading to incorrect decisions. Fédération Internationale de Football Association (FIFA; n = 29) and Belgian elite (n = 28) assistant referees (ARs) assessed 64 computer-based offside situations. First, an expertise effect was found. The FIFA ARs assessed the trials more accurately than the Belgian ARs (76.4% vs. 67.5%). Second, regarding the type of error, all ARs clearly tended to raise their flag in doubtful situations. This observation could be explained by a perceptual bias associated with the flash-lag effect. Specifically, attackers were perceived ahead of their actual positions, and this tendency was stronger for the Belgian than for the FIFA ARs (11.0 vs. 8.4 pixels), in particular when the difficulty of the trials increased. Further experimentation is needed to examine whether video- and computer-based decision-making training is effective in improving the decision-making skills of ARs during the game. PsycINFO Database Record (c) 2008 APA, all rights reserved
[Medical expert systems and clinical needs].
Buscher, H P
1991-10-18
The rapid expansion of computer-based systems for problem solving or decision making in medicine, the so-called medical expert systems, emphasizes the need for a reappraisal of their indications and value. Where specialist knowledge is required, and in particular where medical decisions are susceptible to error, these systems will probably serve as valuable support. In the near future, computer-based systems should be able to aid the interpretation of findings from technical investigations and the control of treatment, especially where rapid reactions are necessary despite the need for complex analysis of the investigated parameters. In the more distant future, complete support of diagnostic procedures, from the history to the final diagnosis, is possible. It promises to be particularly attractive for the diagnosis of rare diseases, for difficult differential diagnoses, and for decision making involving expensive, risky or new diagnostic or therapeutic methods. The physician needs to be aware of certain dangers, ranging from misleading information to outright abuse. Patient information often depends on subjective reports and error-prone observations. Although they rest on such problematic knowledge, computer-generated decisions may have an imperative effect on medical decision making. It must also be borne in mind that medical decisions should always combine the rational with a consideration of human motives.
Form and Objective of the Decision Rule in Absolute Identification
NASA Technical Reports Server (NTRS)
Balakrishnan, J. D.
1997-01-01
In several conditions of a line length identification experiment, the subjects' decision making strategies were systematically biased against the responses on the edges of the stimulus range. When the range and number of the stimuli were small, the bias caused the percentage of correct responses to be highest in the center and lowest on the extremes of the range. Two general classes of decision rules that would explain these results are considered. The first class assumes that subjects intend to adopt an optimal decision rule, but systematically misrepresent one or more parameters of the decision making context. The second class assumes that subjects use a different measure of performance than the one assumed by the experimenter: instead of maximizing the chances of a correct response, the subject attempts to minimize the expected size of the response error (a "fidelity criterion"). In a second experiment, extended experience and feedback did not diminish the bias effect, but explicitly penalizing all response errors equally, regardless of their size, did reduce or eliminate it in some subjects. Both results favor the fidelity criterion over the optimal rule.
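The two classes of decision rule can be contrasted in a toy simulation, assuming a Gaussian confusion model that is not the paper's: the optimal rule reports the posterior mode (maximizing the chance of a correct response), while the fidelity criterion reports the posterior median (minimizing the expected size of the response error) and therefore shies away from edge responses.

```python
# Toy simulation contrasting the two decision rules: the "optimal" rule reports the
# posterior mode (maximizing P(correct)), while the "fidelity criterion" reports the
# posterior median (minimizing the expected absolute response error) and so avoids
# edge responses. The Gaussian confusion model and noise level are illustrative.
import numpy as np

stimuli = np.arange(1, 8)      # 7 line lengths, coded 1..7
sigma = 2.0                    # perceptual noise, chosen large so the edge bias is visible
rng = np.random.default_rng(2)

def posterior(obs):
    like = np.exp(-0.5 * ((obs - stimuli) / sigma) ** 2)
    return like / like.sum()

def optimal_response(obs):     # maximize probability of a correct response
    return stimuli[np.argmax(posterior(obs))]

def fidelity_response(obs):    # minimize expected |response error|: posterior median
    return stimuli[np.searchsorted(np.cumsum(posterior(obs)), 0.5)]

for true in stimuli:
    obs = true + sigma * rng.normal(size=2000)
    opt = np.array([optimal_response(o) for o in obs])
    fid = np.array([fidelity_response(o) for o in obs])
    print(f"stimulus {true}: P(correct) optimal rule = {np.mean(opt == true):.2f}, "
          f"fidelity criterion = {np.mean(fid == true):.2f}")
```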
Chana, Narinder; Porat, Talya; Whittlesea, Cate; Delaney, Brendan
2017-03-01
Electronic prescribing has benefited from computerised clinical decision support systems (CDSSs); however, no published studies have evaluated the potential for a CDSS to support GPs in prescribing specialist drugs. To identify potential weaknesses and errors in the existing process of prescribing specialist drugs that could be addressed in the development of a CDSS. Semi-structured interviews with key informants followed by an observational study involving GPs in the UK. Twelve key informants were interviewed to investigate the use of CDSSs in the UK. Nine GPs were observed while performing case scenarios depicting requests from hospitals or patients to prescribe a specialist drug. Activity diagrams, hierarchical task analysis, and systematic human error reduction and prediction approach analyses were performed. The current process of prescribing specialist drugs by GPs is prone to error. Errors of omission due to lack of information were the most common errors, which could potentially result in a GP prescribing a specialist drug that should only be prescribed in hospitals, or prescribing a specialist drug without reference to a shared care protocol. Half of all possible errors in the prescribing process had a high probability of occurrence. A CDSS supporting GPs during the process of prescribing specialist drugs is needed. This could, first, support the decision making of whether or not to undertake prescribing, and, second, provide drug-specific parameters linked to shared care protocols, which could reduce the errors identified and increase patient safety. © British Journal of General Practice 2017.
Target Uncertainty Mediates Sensorimotor Error Correction.
Acerbi, Luigi; Vijayakumar, Sethu; Wolpert, Daniel M
2017-01-01
Human movements are prone to errors that arise from inaccuracies in both our perceptual processing and execution of motor commands. We can reduce such errors by both improving our estimates of the state of the world and through online error correction of the ongoing action. Two prominent frameworks that explain how humans solve these problems are Bayesian estimation and stochastic optimal feedback control. Here we examine the interaction between estimation and control by asking if uncertainty in estimates affects how subjects correct for errors that may arise during the movement. Unbeknownst to participants, we randomly shifted the visual feedback of their finger position as they reached to indicate the center of mass of an object. Even though participants were given ample time to compensate for this perturbation, they only fully corrected for the induced error on trials with low uncertainty about center of mass, with correction only partial in trials involving more uncertainty. The analysis of subjects' scores revealed that participants corrected for errors just enough to avoid significant decrease in their overall scores, in agreement with the minimal intervention principle of optimal feedback control. We explain this behavior with a term in the loss function that accounts for the additional effort of adjusting one's response. By suggesting that subjects' decision uncertainty, as reflected in their posterior distribution, is a major factor in determining how their sensorimotor system responds to error, our findings support theoretical models in which the decision making and control processes are fully integrated.
Coffey, Maitreya; Thomson, Kelly; Tallett, Susan; Matlow, Anne
2010-10-01
Although experts advise disclosing medical errors to patients, individual physicians' different levels of knowledge and comfort suggest a gap between recommendations and practice. This study explored pediatric residents' knowledge and attitudes about disclosure. In 2006, the authors of this single-center, mixed-methods study surveyed 64 pediatric residents at the University of Toronto and then held three focus groups with a total of 24 of those residents. Thirty-seven (58%) residents completed questionnaires. Most agreed that medical errors are one of the most serious problems in health care, that errors should be disclosed, and that disclosure would be difficult. When shown a scenario involving a medical error, over 90% correctly identified the error, but only 40% would definitely disclose it. Most would apologize, but far fewer would acknowledge harm if it occurred or use the word "mistake." Most had witnessed or performed a disclosure, but only 40% reported receiving teaching on disclosure. Most reported experiencing negative effects of errors, including anxiety and reduced confidence. Data from the focus groups emphasized the extent to which residents consider contextual information when making decisions around disclosure. Themes included their or their team's degree of responsibility for the error versus others, quality of team relationships, training level, existence of social boundaries, and their position within a hierarchy. These findings add to the understanding of facilitators and inhibitors of error disclosure and reporting. The influence of social context warrants further study and should be considered in medical curriculum design and hospital guideline implementation.
ERIC Educational Resources Information Center
Heinicke, Susanne
2014-01-01
Every measurement in science, every experimental decision, result and information drawn from it has to cope with something that has long been named by the term "error". In fact, errors describe our limitations when it comes to experimental science and science looks back on a long tradition to cope with them. The widely known way to cope…
Miller, Chad S
2013-01-01
Nearly half of medical errors can be attributed to an error of clinical reasoning or decision making. It is estimated that the correct diagnosis is missed or delayed in between 5% and 14% of acute hospital admissions. By understanding why and how physicians make these errors, it is hoped that strategies can be developed to decrease their number. In the present case, a patient presented with dyspnea, gastrointestinal symptoms and weight loss; the diagnosis was initially missed when the treating physicians took mental shortcuts and relied on heuristics. Heuristics carry an inherent bias that can lead to faulty reasoning or conclusions, especially in complex or difficult cases. Affective bias, the overinvolvement of emotion in clinical decision making, limited the information available for diagnosis because of a hesitancy to acquire a full history and perform a complete physical examination in this patient. Zebra retreat, another type of bias, occurs when a rare diagnosis figures prominently on the differential diagnosis but the physician retreats from it for various reasons; zebra retreat also factored into the delayed diagnosis. Through the description of these clinical reasoning errors in an actual case, it is hoped that future errors can be prevented and that additional research in this area will be inspired.
Artificial Experience: Situation Awareness Training in Nursing
ERIC Educational Resources Information Center
Hinton, Janine E.
2011-01-01
The quasi-experimental research study developed and tested an education process to reduce and trap medication errors. The study was framed by Endsley's (1995a) model of situation awareness in dynamic decision making. Situation awareness improvement strategies were practiced during high-fidelity clinical simulations. Harmful medication errors occur…
File Assignment in a Central Server Computer Network.
1979-01-01
somewhat artificial for many applications. Sometimes important variables must be known in advance when they are more appropriately decision variables... intelligently, we must have some notion of the errors that may be introduced. We must account for two types of errors. The first is the error
Eisele, Thomas P; Rhoda, Dale A; Cutts, Felicity T; Keating, Joseph; Ren, Ruilin; Barros, Aluisio J D; Arnold, Fred
2013-01-01
Nationally representative household surveys are increasingly relied upon to measure maternal, newborn, and child health (MNCH) intervention coverage at the population level in low- and middle-income countries. Surveys are the best tool we have for this purpose and are central to national and global decision making. However, all survey point estimates have a certain level of error (total survey error) comprising sampling and non-sampling error, both of which must be considered when interpreting survey results for decision making. In this review, we discuss the importance of considering these errors when interpreting MNCH intervention coverage estimates derived from household surveys, using relevant examples from national surveys to provide context. Sampling error is usually thought of as the precision of a point estimate and is represented by 95% confidence intervals, which are measurable. Confidence intervals can inform judgments about whether estimated parameters are likely to be different from the real value of a parameter. We recommend, therefore, that confidence intervals for key coverage indicators should always be provided in survey reports. By contrast, the direction and magnitude of non-sampling error is almost always unmeasurable, and therefore unknown. Information error and bias are the most common sources of non-sampling error in household survey estimates and we recommend that they should always be carefully considered when interpreting MNCH intervention coverage based on survey data. Overall, we recommend that future research on measuring MNCH intervention coverage should focus on refining and improving survey-based coverage estimates to develop a better understanding of how results should be interpreted and used.
Design Study of an Incinerator Ash Conveyor Counting System - 13323
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jaederstroem, Henrik; Bronson, Frazier
A design study has been performed for a system intended to measure the Cs-137 activity in ash from an incinerator. Radioactive ash, expected to contain both Cs-134 and Cs-137, will be transported on a conveyor belt at 0.1 m/s. The objective of the counting system is to determine the Cs-137 activity and direct the ash to the correct stream after a diverter. The decision levels range from 8000 to 400000 Bq/kg and the decision error should be as low as possible. The decision error depends on the total measurement uncertainty, which in turn depends on the counting statistics and the uncertainty in the efficiency of the geometry. For the low activity decision it is necessary to know the efficiency in order to determine whether the signal from the Cs-137 is above the minimum detectable activity and whether it generates enough counts to reach the desired precision. For the higher activity decision the uncertainty of the efficiency needs to be understood to minimize decision errors. The total efficiency of the detector is needed to determine whether the detector will be able to operate at the count rate of the highest expected activity. The design study presented in this paper describes how the objectives of the monitoring system were met, how the choice of detector was made, and how ISOCS (In Situ Object Counting System) mathematical modeling was used to calculate the efficiency. The ISOCS uncertainty estimator (IUE) was used to determine which parameters of the ash were important to know accurately in order to minimize the uncertainty of the efficiency. The examined parameters include the height of the ash on the conveyor belt, the matrix composition and density, and the relative efficiency of the detector. (authors)
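As a rough illustration of how counting statistics set the lowest usable decision level, a standard Currie-style calculation is sketched below; this is not the ISOCS/IUE modelling used in the study, and all numeric inputs are hypothetical.

```python
# Standard Currie-style counting-statistics sketch (not the ISOCS/IUE modelling used
# in the study): decision threshold and minimum detectable activity for a Cs-137 peak,
# given background counts, counting time, detection efficiency and ash mass.
# All numeric inputs are hypothetical.
import math

background_counts = 400.0      # counts in the Cs-137 region during one measurement interval
count_time_s = 60.0            # seconds of conveyor passage viewed by the detector
efficiency = 0.002             # counts per emitted gamma (peak efficiency, from modelling)
gamma_yield = 0.851            # gammas per decay for the 662 keV line of Cs-137
mass_kg = 5.0                  # ash mass viewed during the interval

critical_level = 2.33 * math.sqrt(background_counts)          # counts: decide "activity present"
detection_limit = 2.71 + 4.65 * math.sqrt(background_counts)  # counts: a priori detectable signal

def counts_to_bq_per_kg(counts):
    return counts / (efficiency * gamma_yield * count_time_s * mass_kg)

print(f"decision threshold: {counts_to_bq_per_kg(critical_level):.0f} Bq/kg")
print(f"minimum detectable activity: {counts_to_bq_per_kg(detection_limit):.0f} Bq/kg")
```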
Canadian drivers' attitudes regarding preventative responses to driving while impaired by alcohol.
Vanlaar, Ward; Nadeau, Louise; McKiernan, Anna; Hing, Marisela M; Ouimet, Marie Claude; Brown, Thomas G
2017-09-01
In many jurisdictions, a risk assessment following a first driving while impaired (DWI) offence is used to guide administrative decision making regarding driver relicensing. Decision error in this process has important consequences for public security on one hand, and the social and economic well-being of drivers on the other. Decision theory posits that consideration of the costs and benefits of decision error is needed, and in the public health context, this should include community attitudes. The objective of the present study was to clarify whether Canadians prefer decision error that: i) better protects the public (i.e., false positives); or ii) better protects the offender (i.e., false negatives). A random sample of male and female adult drivers (N=1213) from the five most populated regions of Canada was surveyed on drivers' preference for a protection of the public approach versus a protection of DWI drivers approach in resolving assessment decision error, and the relative value (i.e., value ratio) they imparted to both approaches. The roles of region, sex and age in drivers' value ratio were also appraised. Seventy percent of Canadian drivers preferred a protection of the public from DWI approach, with the overall relative ratio given to this preference, compared to the alternative protection of the driver approach, being 3:1. Females expressed a significantly higher value ratio (M=3.4, SD=3.5) than males (M=3.0, SD=3.4), p<0.05. Regression analysis showed that both days of alcohol use in the past 30 days (CI for B: -0.07, -0.02) and frequency of driving over legal BAC limits in the past year (CI for B=-0.19, -0.01) were significantly but modestly related to lower value ratios, R^2 (adj.) = 0.014, p<0.001. Regional differences were also detected. Canadian drivers strongly favour a protection of the public approach to dealing with uncertainty in assessment, even at the risk of false positives. Accounting for community attitudes concerning DWI prevention and the individual differences that influence them could contribute to more informed, coherent and effective regional policies and prevention program development. Copyright © 2017 Elsevier Ltd. All rights reserved.
The Communication Link and Error ANalysis (CLEAN) simulator
NASA Technical Reports Server (NTRS)
Ebel, William J.; Ingels, Frank M.; Crowe, Shane
1993-01-01
During the period July 1, 1993 through December 30, 1993, significant developments to the Communication Link and Error ANalysis (CLEAN) simulator were completed and include: (1) Soft decision Viterbi decoding; (2) node synchronization for the Soft decision Viterbi decoder; (3) insertion/deletion error programs; (4) convolutional encoder; (5) programs to investigate new convolutional codes; (6) pseudo-noise sequence generator; (7) soft decision data generator; (8) RICE compression/decompression (integration of RICE code generated by Pen-Shu Yeh at Goddard Space Flight Center); (9) Markov Chain channel modeling; (10) percent complete indicator when a program is executed; (11) header documentation; and (12) help utility. The CLEAN simulation tool is now capable of simulating a very wide variety of satellite communication links including the TDRSS downlink with RFI. The RICE compression/decompression schemes allow studies to be performed on error effects on RICE decompressed data. The Markov Chain modeling programs allow channels with memory to be simulated. Memory results from filtering, forward error correction encoding/decoding, differential encoding/decoding, channel RFI, nonlinear transponders and from many other satellite system processes. Besides the development of the simulation, a study was performed to determine whether the PCI provides a performance improvement for the TDRSS downlink. There exist RFI with several duty cycles for the TDRSS downlink. We conclude that the PCI does not improve performance for any of these interferers except possibly one which occurs for the TDRS East. Therefore, the usefulness of the PCI is a function of the time spent transmitting data to the WSGT through the TDRS East transponder.
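The "channels with memory" mentioned above are classically simulated with a two-state Markov (Gilbert-Elliott) error model; a minimal sketch, with illustrative rather than TDRSS-derived parameters, is shown below.

```python
# Sketch of a two-state Markov (Gilbert-Elliott) burst-error channel, the classic
# way to simulate channels with memory of the kind CLEAN models; the transition
# and error probabilities below are illustrative, not TDRSS parameters.
import numpy as np

p_good_to_bad = 0.01     # probability of entering the bursty (RFI-like) state
p_bad_to_good = 0.10     # probability of leaving it
p_err_good = 1e-4        # bit error probability in the good state
p_err_bad = 0.05         # bit error probability in the bad state

def markov_channel(bits, rng):
    """Flip bits according to a two-state Markov error process; returns corrupted bits."""
    out = bits.copy()
    bad = False
    for i in range(len(bits)):
        bad = rng.random() < (1 - p_bad_to_good if bad else p_good_to_bad)
        if rng.random() < (p_err_bad if bad else p_err_good):
            out[i] ^= 1
    return out

rng = np.random.default_rng(3)
tx = rng.integers(0, 2, 100_000)
rx = markov_channel(tx, rng)
print("overall BER:", np.mean(tx != rx))
```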
Medication-related clinical decision support in computerized provider order entry systems: a review.
Kuperman, Gilad J; Bobb, Anne; Payne, Thomas H; Avery, Anthony J; Gandhi, Tejal K; Burns, Gerard; Classen, David C; Bates, David W
2007-01-01
While medications can improve patients' health, the process of prescribing them is complex and error prone, and medication errors cause many preventable injuries. Computer provider order entry (CPOE) with clinical decision support (CDS), can improve patient safety and lower medication-related costs. To realize the medication-related benefits of CDS within CPOE, one must overcome significant challenges. Healthcare organizations implementing CPOE must understand what classes of CDS their CPOE systems can support, assure that clinical knowledge underlying their CDS systems is reasonable, and appropriately represent electronic patient data. These issues often influence to what extent an institution will succeed with its CPOE implementation and achieve its desired goals. Medication-related decision support is probably best introduced into healthcare organizations in two stages, basic and advanced. Basic decision support includes drug-allergy checking, basic dosing guidance, formulary decision support, duplicate therapy checking, and drug-drug interaction checking. Advanced decision support includes dosing support for renal insufficiency and geriatric patients, guidance for medication-related laboratory testing, drug-pregnancy checking, and drug-disease contraindication checking. In this paper, the authors outline some of the challenges associated with both basic and advanced decision support and discuss how those challenges might be addressed. The authors conclude with summary recommendations for delivering effective medication-related clinical decision support addressed to healthcare organizations, application and knowledge base vendors, policy makers, and researchers.
Building a Lego wall: Sequential action selection.
Arnold, Amy; Wing, Alan M; Rotshtein, Pia
2017-05-01
The present study draws together two distinct lines of enquiry into the selection and control of sequential action: motor sequence production and action selection in everyday tasks. Participants were asked to build 2 different Lego walls. The walls were designed to have hierarchical structures with shared and dissociated colors and spatial components. Participants built 1 wall at a time, under low and high load cognitive states. Selection times for correctly completed trials were measured using 3-dimensional motion tracking. The paradigm enabled precise measurement of the timing of actions, while using real objects to create an end product. The experiment demonstrated that action selection was slowed at decision boundary points, relative to boundaries where no between-wall decision was required. Decision points also affected selection time prior to the actual selection window. Dual-task conditions increased selection errors. Errors mostly occurred at boundaries between chunks and especially when these required decisions. The data support hierarchical control of sequenced behavior. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
A taxonomy of decision problems on the flight deck
NASA Technical Reports Server (NTRS)
Orasanu, Judith M.; Fischer, Ute; Tarrel, Richard J.
1993-01-01
Examining cases of real crews making decisions in full-mission simulators or through Aviation Safety Reporting System (ASRS) reports shows that there are many different types of decisions that crews must make. Features of the situation determine the type of decision that must be made. The paper identifies six types of decisions that require different types of cognitive work and are also subject to different types of error or failure. These different requirements, along with descriptions of effective crew strategies, can serve as a basis for developing training practices and for evaluating crews.
ERIC Educational Resources Information Center
Westerberg, Carmen E.; Hawkins, Christopher A.; Rendon, Lauren
2018-01-01
Reality-monitoring errors occur when internally generated thoughts are remembered as external occurrences. We hypothesized that sleep-dependent memory consolidation could reduce them by strengthening connections between items and their contexts during an afternoon nap. Participants viewed words and imagined their referents. Pictures of the…
Forecasting Error Calculation with Mean Absolute Deviation and Mean Absolute Percentage Error
NASA Astrophysics Data System (ADS)
Khair, Ummul; Fahmi, Hasanul; Hakim, Sarudin Al; Rahim, Robbi
2017-12-01
Prediction using a forecasting method is one of the most important activities for an organization. The selection of an appropriate forecasting method is also important, but the percentage error of a method matters even more if decision makers are to act on its results. Using the Mean Absolute Deviation and the Mean Absolute Percentage Error to calculate the error of the least-squares method gave a figure of 9.77%, and it was decided that the least-squares method would be used for the time series and trend data.
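A small sketch of the calculation the abstract describes, using a synthetic series rather than the study's data: fit a least-squares trend, forecast, and score the forecast with MAD and MAPE.

```python
# Sketch of the calculation the abstract describes: fit a least-squares trend line,
# forecast the series, and score the forecast with MAD and MAPE.
# The time series below is synthetic, not the study's data.
import numpy as np

y = np.array([112., 118., 125., 131., 140., 146., 155., 162.])   # synthetic demand series
t = np.arange(len(y))

slope, intercept = np.polyfit(t, y, 1)        # least-squares trend fit
forecast = intercept + slope * t

mad = np.mean(np.abs(y - forecast))                    # Mean Absolute Deviation
mape = np.mean(np.abs((y - forecast) / y)) * 100.0     # Mean Absolute Percentage Error

print(f"MAD  = {mad:.2f}")
print(f"MAPE = {mape:.2f}%")
print(f"next-period forecast = {intercept + slope * len(y):.1f}")
```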
Madani, Amin; Watanabe, Yusuke; Feldman, Liane S; Vassiliou, Melina C; Barkun, Jeffrey S; Fried, Gerald M; Aggarwal, Rajesh
2015-11-01
Bile duct injuries from laparoscopic cholecystectomy remain a significant source of morbidity and are often the result of intraoperative errors in perception, judgment, and decision-making. This qualitative study aimed to define and characterize higher-order cognitive competencies required to safely perform a laparoscopic cholecystectomy. Hierarchical and cognitive task analyses for establishing a critical view of safety during laparoscopic cholecystectomy were performed using qualitative methods to map the thoughts and practices that characterize expert performance. Experts with more than 5 years of experience, and who have performed at least 100 laparoscopic cholecystectomies, participated in semi-structured interviews and field observations. Verbal data were transcribed verbatim, supplemented with content from published literature, coded, thematically analyzed using grounded-theory by 2 independent reviewers, and synthesized into a list of items. A conceptual framework was created based on 10 interviews with experts, 9 procedures, and 18 literary sources. Experts included 6 minimally invasive surgeons, 2 hepato-pancreatico-biliary surgeons, and 2 acute care general surgeons (median years in practice, 11 [range 8 to 14]). One hundred eight cognitive elements (35 [32%] related to situation awareness, 47 [44%] involving decision-making, and 26 [24%] action-oriented subtasks) and 75 potential errors were identified and categorized into 6 general themes and 14 procedural tasks. Of the 75 potential errors, root causes were mapped to errors in situation awareness (24 [32%]), decision-making (49 [65%]), or either one (61 [81%]). This study defines the competencies that are essential to establishing a critical view of safety and avoiding bile duct injuries during laparoscopic cholecystectomy. This framework may serve as the basis for instructional design, assessment tools, and quality-control metrics to prevent injuries and promote a culture of patient safety. Copyright © 2015 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berman, D.W.; Allen, B.C.; Van Landingham, C.B.
1998-12-31
The decision rules commonly employed to determine the need for cleanup are evaluated both to identify conditions under which they lead to erroneous conclusions and to quantify the rate at which such errors occur. Their performance is also compared with that of other applicable decision rules. The authors based the evaluation of decision rules on simulations. Results are presented as power curves. These curves demonstrate that the degree of statistical control achieved is independent of the form of the null hypothesis. The loss of statistical control that occurs when a decision rule is applied to a data set that does not satisfy the rule's validity criteria is also clearly demonstrated. Some of the rules evaluated do not offer the formal statistical control that is an inherent design feature of other rules. Nevertheless, results indicate that such informal decision rules may provide superior overall control of error rates when their application is restricted to data exhibiting particular characteristics. The results reported here are limited to decision rules applied to uncensored and lognormally distributed data. To optimize decision rules, it is necessary to evaluate their behavior when applied to data exhibiting a range of characteristics that bracket those common to field data. The performance of decision rules applied to data sets exhibiting a broader range of characteristics is reported in the second paper of this study.
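The power-curve evaluation can be illustrated with a generic Monte-Carlo sketch (not the specific decision rules the authors evaluated): lognormal site data are simulated over a range of true medians and a simple exceedance test is applied; SciPy is assumed for the t-test.

```python
# Generic Monte-Carlo sketch of a power curve for a cleanup decision rule (not the
# specific rules the authors evaluated): lognormal site data are simulated over a
# range of true medians, and the rule "declare exceedance if a one-sided t-test on
# the log data rejects H0: median <= standard" is applied. Parameters are illustrative.
import numpy as np
from scipy import stats

standard = 100.0          # cleanup standard (concentration units)
n_samples = 20            # samples collected per site
log_sd = 1.0              # lognormal shape (sd of log concentrations)
n_sims = 2000

rng = np.random.default_rng(4)
for true_median in (25, 50, 100, 200, 400):
    exceed = 0
    for _ in range(n_sims):
        data = rng.lognormal(mean=np.log(true_median), sigma=log_sd, size=n_samples)
        t_stat, p = stats.ttest_1samp(np.log(data), np.log(standard), alternative="greater")
        exceed += p < 0.05
    print(f"true median {true_median:4.0f}: P(decide cleanup) = {exceed / n_sims:.2f}")
```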
Increasing reliability of Gauss-Kronrod quadrature by Eratosthenes' sieve method
NASA Astrophysics Data System (ADS)
Adam, Gh.; Adam, S.
2001-04-01
The reliability of the local error estimates returned by the Gauss-Kronrod quadrature rules can be raised up to the theoretical 100% rate of success, under error estimate sharpening, provided a number of natural validating conditions are required. The self-validating scheme of the local error estimates, which is easy to implement and adds little supplementary computing effort, strengthens considerably the correctness of the decisions within the automatic adaptive quadrature.
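A minimal adaptive-quadrature sketch of the accept-or-subdivide decision driven by a local error estimate is given below; a (7, 15)-point Gauss-Legendre pair stands in for the true Gauss-Kronrod pair, and no error-estimate sharpening or validation conditions are implemented.

```python
# Minimal adaptive-quadrature sketch showing the accept/subdivide decision driven by a
# local error estimate. A (7, 15)-point Gauss-Legendre pair stands in for the actual
# Gauss-Kronrod pair, and no error-estimate sharpening or validation is implemented.
import numpy as np

def gauss(f, a, b, n):
    """n-point Gauss-Legendre estimate of the integral of f over [a, b]."""
    x, w = np.polynomial.legendre.leggauss(n)
    mid, half = 0.5 * (a + b), 0.5 * (b - a)
    return half * np.sum(w * f(mid + half * x))

def adaptive(f, a, b, tol=1e-10):
    coarse, fine = gauss(f, a, b, 7), gauss(f, a, b, 15)
    err = abs(fine - coarse)                       # local error estimate
    if err < tol:                                  # accept the panel...
        return fine
    mid = 0.5 * (a + b)                            # ...or subdivide and recurse
    return adaptive(f, a, mid, tol / 2) + adaptive(f, mid, b, tol / 2)

print(adaptive(np.sin, 0.0, np.pi))                # exact value is 2
```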
Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection
Meng, Jiahui; Zhao, Danfeng; Tian, Hai; Zhang, Liang
2018-01-01
In order to improve the performance of the hard-decision decoding algorithm for non-binary low-density parity-check (LDPC) codes and to reduce the complexity of decoding, a sum of the magnitude for hard decision decoding algorithm based on loop update detection is proposed. This also helps ensure the reliability, stability and high transmission rate needed for 5G mobile communication. The algorithm is based on the hard decision decoding algorithm (HDA) and uses the soft information from the channel to calculate reliability, while the sum of the variable nodes' (VN) magnitude is excluded when computing the reliability of the parity checks. At the same time, the reliability information of the variable nodes is considered and a loop update detection algorithm is introduced. The bits of the erroneous code word are flipped repeatedly, searched in order of the most likely error probability, until the correct code word is found. Simulation results show that the performance of one of the improved schemes is better than the weighted symbol flipping (WSF) algorithm under different hexadecimal numbers, by about 2.2 dB and 2.35 dB respectively, at a bit error rate (BER) of 10−5 over an additive white Gaussian noise (AWGN) channel. Furthermore, the average number of decoding iterations is significantly reduced. PMID:29342963
Autoimmunity: a decision theory model.
Morris, J A
1987-01-01
Concepts from statistical decision theory were used to analyse the detection problem faced by the body's immune system in mounting immune responses to bacteria of the normal body flora. Given that these bacteria are potentially harmful, that there can be extensive cross reaction between bacterial antigens and host tissues, and that the decisions are made in uncertainty, there is a finite chance of error in immune response leading to autoimmune disease. A model of ageing in the immune system is proposed that is based on random decay in components of the decision process, leading to a steep age dependent increase in the probability of error. The age incidence of those autoimmune diseases which peak in early and middle life can be explained as the resultant of two processes: an exponentially falling curve of incidence of first contact with common bacteria, and a rapidly rising error function. Epidemiological data on the variation of incidence with social class, sibship order, climate and culture can be used to predict the likely site of carriage and mode of spread of the causative bacteria. Furthermore, those autoimmune diseases precipitated by common viral respiratory tract infections might represent reactions to nasopharyngeal bacterial overgrowth, and this theory can be tested using monoclonal antibodies to search the bacterial isolates for cross reacting antigens. If this model is correct then prevention of autoimmune disease by early exposure to low doses of bacteria might be possible. PMID:3818985
2014-01-01
Background Patient decision aids (PtDA) are developed to facilitate informed, value-based decisions about health. Research suggests that even when informed with necessary evidence and information, cognitive errors can prevent patients from choosing the option that is most congruent with their own values. We sought to utilize principles of behavioural economics to develop a computer application that presents information from conventional decision aids in a way that reduces these errors, subsequently promoting higher quality decisions. Method The Dynamic Computer Interactive Decision Application (DCIDA) was developed to target four common errors that can impede quality decision making with PtDAs: unstable values, order effects, overweighting of rare events, and information overload. Healthy volunteers were recruited to an interview to use three PtDAs converted to the DCIDA on a computer equipped with an eye tracker. Participants first used a conventional PtDA, and subsequently used the DCIDA version. User testing was assessed based on whether respondents found the software both usable: evaluated using a) eye-tracking, b) the system usability scale, and c) user verbal responses from a ‘think aloud’ protocol; and useful: evaluated using a) eye-tracking, b) whether preferences for options were changed, and c) the decisional conflict scale. Results Of the 20 participants recruited to the study, 11 were male (55%), the mean age was 35, 18 had at least a high school education (90%), and 8 (40%) had a college or university degree. Eye-tracking results, alongside a mean system usability scale score of 73 (range 68–85), indicated a reasonable degree of usability for the DCIDA. The think aloud study suggested areas for further improvement. The DCIDA also appeared to be useful to participants wherein subjects focused more on the features of the decision that were most important to them (21% increase in time spent focusing on the most important feature). Seven subjects (25%) changed their preferred option when using DCIDA. Conclusion Preliminary results suggest that DCIDA has potential to improve the quality of patient decision-making. Next steps include larger studies to test individual components of DCIDA and feasibility testing with patients making real decisions. PMID:25084808
Schönberg, Tom; Daw, Nathaniel D; Joel, Daphna; O'Doherty, John P
2007-11-21
The computational framework of reinforcement learning has been used to advance our understanding of the neural mechanisms underlying reward learning and decision-making behavior. It is known that humans vary widely in their performance in decision-making tasks. Here, we used a simple four-armed bandit task in which subjects are almost evenly split into two groups on the basis of their performance: those who do learn to favor choice of the optimal action and those who do not. Using models of reinforcement learning we sought to determine the neural basis of these intrinsic differences in performance by scanning both groups with functional magnetic resonance imaging. We scanned 29 subjects while they performed the reward-based decision-making task. Our results suggest that these two groups differ markedly in the degree to which reinforcement learning signals in the striatum are engaged during task performance. While the learners showed robust prediction error signals in both the ventral and dorsal striatum during learning, the nonlearner group showed a marked absence of such signals. Moreover, the magnitude of prediction error signals in a region of dorsal striatum correlated significantly with a measure of behavioral performance across all subjects. These findings support a crucial role of prediction error signals, likely originating from dopaminergic midbrain neurons, in enabling learning of action selection preferences on the basis of obtained rewards. Thus, spontaneously observed individual differences in decision-making performance demonstrate the suggested dependence of this type of learning on the functional integrity of the dopaminergic striatal system in humans.
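As a concrete illustration of the reward prediction error signal discussed above, the sketch below implements a standard temporal-difference learner on a four-armed bandit; the reward probabilities, learning rate, and softmax temperature are invented for illustration and do not reproduce the authors' task or fMRI analysis.

```python
# Minimal sketch: prediction-error-driven learning on a four-armed bandit.
# Reward probabilities, learning rate, and temperature are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
p_reward = np.array([0.2, 0.4, 0.6, 0.8])   # hypothetical payoff probabilities
q = np.zeros(4)                              # learned action values
alpha, beta = 0.1, 3.0                       # learning rate, softmax inverse temperature

for trial in range(500):
    # Softmax action selection over current value estimates.
    probs = np.exp(beta * q) / np.exp(beta * q).sum()
    action = rng.choice(4, p=probs)
    reward = float(rng.random() < p_reward[action])
    # Reward prediction error: obtained minus expected outcome.
    rpe = reward - q[action]
    q[action] += alpha * rpe

print("learned values:", np.round(q, 2))     # should roughly track p_reward
```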
Future of electronic health records: implications for decision support.
Rothman, Brian; Leonard, Joan C; Vigoda, Michael M
2012-01-01
The potential benefits of the electronic health record over traditional paper are many, including cost containment, reductions in errors, and improved compliance by utilizing real-time data. The highest functional level of the electronic health record (EHR) is clinical decision support (CDS) and process automation, which are expected to enhance patient health and healthcare. The authors provide an overview of the progress in using patient data more efficiently and effectively through clinical decision support to improve health care delivery, how decision support impacts anesthesia practice, and how some are leading the way using these systems to solve need-specific issues. Clinical decision support uses passive or active decision support to modify clinician behavior through recommendations of specific actions. Recommendations may reduce medication errors, which would result in considerable savings by avoiding adverse drug events. In selected studies, clinical decision support has been shown to decrease the time to follow-up actions, and prediction has proved useful in forecasting patient outcomes, avoiding costs, and correctly prompting treatment plan modifications by clinicians before engaging in decision-making. Clinical documentation accuracy and completeness is improved by an electronic health record and greater relevance of care data is delivered. Clinical decision support may increase clinician adherence to clinical guidelines, but educational workshops may be equally effective. Unintentional consequences of clinical decision support, such as alert desensitization, can decrease the effectiveness of a system. Current anesthesia clinical decision support use includes antibiotic administration timing, improved documentation, more timely billing, and postoperative nausea and vomiting prophylaxis. Electronic health record implementation offers data-mining opportunities to improve operational, financial, and clinical processes. Using electronic health record data in real-time for decision support and process automation has the potential to both reduce costs and improve the quality of patient care. © 2012 Mount Sinai School of Medicine.
An MEG signature corresponding to an axiomatic model of reward prediction error.
Talmi, Deborah; Fuentemilla, Lluis; Litvak, Vladimir; Duzel, Emrah; Dolan, Raymond J
2012-01-02
Optimal decision-making is guided by evaluating the outcomes of previous decisions. Prediction errors are theoretical teaching signals which integrate two features of an outcome: its inherent value and prior expectation of its occurrence. To uncover the magnetic signature of prediction errors in the human brain we acquired magnetoencephalographic (MEG) data while participants performed a gambling task. Our primary objective was to use formal criteria, based upon an axiomatic model (Caplin and Dean, 2008a), to determine the presence and timing profile of MEG signals that express prediction errors. We report analyses at the sensor level, implemented in SPM8, time locked to outcome onset. We identified, for the first time, a MEG signature of prediction error, which emerged approximately 320 ms after an outcome and expressed as an interaction between outcome valence and probability. This signal followed earlier, separate signals for outcome valence and probability, which emerged approximately 200 ms after an outcome. Strikingly, the time course of the prediction error signal, as well as the early valence signal, resembled the Feedback-Related Negativity (FRN). In simultaneously acquired EEG data we obtained a robust FRN, but the win and loss signals that comprised this difference wave did not comply with the axiomatic model. Our findings motivate an explicit examination of the critical issue of timing embodied in computational models of prediction errors as seen in human electrophysiological data. Copyright © 2011 Elsevier Inc. All rights reserved.
Medication errors: definitions and classification
Aronson, Jeffrey K
2009-01-01
To understand medication errors and to identify preventive strategies, we need to classify them and define the terms that describe them. The four main approaches to defining technical terms consider etymology, usage, previous definitions, and the Ramsey–Lewis method (based on an understanding of theory and practice). A medication error is ‘a failure in the treatment process that leads to, or has the potential to lead to, harm to the patient’. Prescribing faults, a subset of medication errors, should be distinguished from prescription errors. A prescribing fault is ‘a failure in the prescribing [decision-making] process that leads to, or has the potential to lead to, harm to the patient’. The converse of this, ‘balanced prescribing’ is ‘the use of a medicine that is appropriate to the patient's condition and, within the limits created by the uncertainty that attends therapeutic decisions, in a dosage regimen that optimizes the balance of benefit to harm’. This excludes all forms of prescribing faults, such as irrational, inappropriate, and ineffective prescribing, underprescribing and overprescribing. A prescription error is ‘a failure in the prescription writing process that results in a wrong instruction about one or more of the normal features of a prescription’. The ‘normal features’ include the identity of the recipient, the identity of the drug, the formulation, dose, route, timing, frequency, and duration of administration. Medication errors can be classified, invoking psychological theory, as knowledge-based mistakes, rule-based mistakes, action-based slips, and memory-based lapses. This classification informs preventive strategies. PMID:19594526
Diagnostic Error in Stroke-Reasons and Proposed Solutions.
Bakradze, Ekaterina; Liberman, Ava L
2018-02-13
We discuss the frequency of stroke misdiagnosis and identify subgroups of stroke at high risk for specific diagnostic errors. In addition, we review common reasons for misdiagnosis and propose solutions to decrease error. According to a recent report by the National Academy of Medicine, most people in the USA are likely to experience a diagnostic error during their lifetimes. Nearly half of such errors result in serious disability and death. Stroke misdiagnosis is a major health care concern, with initial misdiagnosis estimated to occur in 9% of all stroke patients in the emergency setting. Under- or missed diagnosis (false negative) of stroke can result in adverse patient outcomes due to the preclusion of acute treatments and failure to initiate secondary prevention strategies. On the other hand, the overdiagnosis of stroke can result in inappropriate treatment, delayed identification of actual underlying disease, and increased health care costs. Young patients, women, minorities, and patients presenting with non-specific, transient, or posterior circulation stroke symptoms are at increased risk of misdiagnosis. Strategies to decrease diagnostic error in stroke have largely focused on early stroke detection via bedside examination strategies and clinical decision rules. Targeted interventions to improve the diagnostic accuracy of stroke diagnosis among high-risk groups as well as symptom-specific clinical decision supports are needed. There are a number of open questions in the study of stroke misdiagnosis. To improve patient outcomes, existing strategies to improve stroke diagnostic accuracy should be more broadly adopted and novel interventions devised and tested to reduce diagnostic errors.
Effects of noise on the performance of a memory decision response task
NASA Technical Reports Server (NTRS)
Lawton, B. W.
1972-01-01
An investigation has been made to determine the effects of noise on human performance. Fourteen subjects performed a memory-decision-response task in relative quiet and while listening to tape recorded noises. Analysis of the data obtained indicates that performance was degraded in the presence of noise. Significant increases in problem solution times were found for impulsive noise conditions as compared with times found for the no-noise condition. Performance accuracy was also degraded. Significantly more error responses occurred at higher noise levels; a direct or positive relation was found between error responses and noise level experienced by the subjects.
Design of a digital voice data compression technique for orbiter voice channels
NASA Technical Reports Server (NTRS)
1975-01-01
Candidate techniques were investigated for digital voice compression to a transmission rate of 8 kbps. Good voice quality, speaker recognition, and robustness in the presence of error bursts were considered. The technique of delayed-decision adaptive predictive coding is described and compared with conventional adaptive predictive coding. Results include a set of experimental simulations recorded on analog tape. The two FM broadcast segments produced show the delayed-decision technique to be virtually undegraded or minimally degraded at .001 and .01 Viterbi decoder bit error rates. Preliminary estimates of the hardware complexity of this technique indicate potential for implementation in space shuttle orbiters.
Decision feedback equalizer for holographic data storage.
Kim, Kyuhwan; Kim, Seung Hun; Koo, Gyogwon; Seo, Min Seok; Kim, Sang Woo
2018-05-20
Holographic data storage (HDS) has attracted much attention as a next-generation storage medium. Because HDS suffers from two-dimensional (2D) inter-symbol interference (ISI), the partial-response maximum-likelihood (PRML) method has been studied to reduce 2D ISI. However, the PRML method has various drawbacks. To solve the problems, we propose a modified decision feedback equalizer (DFE) for HDS. To prevent the error propagation problem, which is a typical problem in DFEs, we also propose a reliability factor for HDS. Various simulations were executed to analyze the performance of the proposed methods. The proposed methods showed fast processing speed after training, superior bit error rate performance, and consistency.
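For readers unfamiliar with the decision feedback equalizer structure, the following sketch shows a generic LMS-trained DFE on a simple one-dimensional ISI channel; it does not implement the paper's two-dimensional holographic channel or its reliability factor, and the channel taps, filter lengths, and step size are assumptions.

```python
# Minimal sketch: LMS-trained decision feedback equalizer for a 1-D ISI channel.
# Channel taps, filter lengths, and step size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, 5000)
symbols = 2.0 * bits - 1.0                         # BPSK symbols
channel = np.array([1.0, 0.45, 0.2])               # hypothetical ISI channel
rx = np.convolve(symbols, channel)[:len(symbols)]
rx += 0.05 * rng.standard_normal(len(rx))          # additive noise

n_ff, n_fb, mu = 5, 3, 0.01                        # feedforward/feedback taps, LMS step
ff = np.zeros(n_ff)
fb = np.zeros(n_fb)
past_decisions = np.zeros(n_fb)
errors = 0

for k in range(n_ff, len(symbols)):
    x = rx[k - n_ff + 1:k + 1][::-1]               # most recent received samples first
    y = ff @ x - fb @ past_decisions               # feedback cancels ISI from decided symbols
    decision = 1.0 if y >= 0 else -1.0
    e = symbols[k] - y                             # training mode: known symbols drive adaptation
    ff += mu * e * x
    fb -= mu * e * past_decisions
    past_decisions = np.roll(past_decisions, 1)
    past_decisions[0] = decision
    if decision != symbols[k]:
        errors += 1

print(f"symbol errors during training-mode equalization: {errors}")
```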
Effects of different feedback types on information integration in repeated monetary gambles
Haffke, Peter; Hübner, Ronald
2015-01-01
Most models of risky decision making assume that all relevant information is taken into account (e.g., von Neumann and Morgenstern, 1944; Kahneman and Tversky, 1979). However, there are also some models supposing that only part of the information is considered (e.g., Brandstätter et al., 2006; Gigerenzer and Gaissmaier, 2011). To further investigate the amount of information that is usually used for decision making, and how the use depends on feedback, we conducted a series of three experiments in which participants choose between two lotteries and where no feedback, outcome feedback, and error feedback was provided, respectively. The results show that without feedback participants mostly chose the lottery with the higher winning probability, and largely ignored the potential gains. The same results occurred when the outcome of each decision was fed back. Only after presenting error feedback (i.e., signaling whether a choice was optimal or not), participants considered probabilities as well as gains, resulting in more optimal choices. We propose that outcome feedback was ineffective, because of its probabilistic and ambiguous nature. Participants improve information integration only if provided with a consistent and deterministic signal such as error feedback. PMID:25667576
Model-based influences on humans’ choices and striatal prediction errors
Daw, Nathaniel D.; Gershman, Samuel J.; Seymour, Ben; Dayan, Peter; Dolan, Raymond J.
2011-01-01
Summary The mesostriatal dopamine system is prominently implicated in model-free reinforcement learning, with fMRI BOLD signals in ventral striatum notably covarying with model-free prediction errors. However, latent learning and devaluation studies show that behavior also shows hallmarks of model-based planning, and the interaction between model-based and model-free values, prediction errors and preferences is underexplored. We designed a multistep decision task in which model-based and model-free influences on human choice behavior could be distinguished. By showing that choices reflected both influences we could then test the purity of the ventral striatal BOLD signal as a model-free report. Contrary to expectations, the signal reflected both model-free and model-based predictions in proportions matching those that best explained choice behavior. These results challenge the notion of a separate model-free learner and suggest a more integrated computational architecture for high-level human decision-making. PMID:21435563
NASA Astrophysics Data System (ADS)
Han, Xifeng; Zhou, Wen
2018-03-01
Optical vector radio-frequency (RF) signal generation based on optical carrier suppression (OCS) in one Mach-Zehnder modulator (MZM) can realize frequency doubling. In order to match the phase or amplitude of the recovered quadrature amplitude modulation (QAM) signal, phase or amplitude pre-coding is necessary at the transmitter side. The detected QAM signals usually have a non-uniform phase distribution after square-law detection at the photodiode because of the imperfect characteristics of the optical and electrical devices. We propose to use an optimal error-decision threshold for the non-uniform phase distribution to reduce the bit error rate (BER). By employing this scheme, the BER of a 16 Gbaud (32 Gbit/s) quadrature-phase-shift-keying (QPSK) millimeter-wave signal at 36 GHz is improved from 1 × 10^-3 to 1 × 10^-4 at -4.6 dBm input power into the photodiode.
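The idea of tuning a decision threshold to minimize the bit error rate can be illustrated generically; the sketch below uses arbitrary binary levels with unequal Gaussian noise (loosely mimicking conditions after square-law detection) and is not the paper's QPSK/OCS system.

```python
# Minimal sketch: effect of the decision threshold on BER for binary levels in
# Gaussian noise (a generic illustration, not the paper's QPSK/OCS system).
import numpy as np
from math import erfc, sqrt

mu0, mu1 = 0.0, 1.0          # hypothetical received levels for bits 0 and 1
sigma0, sigma1 = 0.15, 0.25  # unequal noise variances, as after square-law detection

def ber(threshold):
    # P(error) = 0.5 * P(x > T | bit 0) + 0.5 * P(x < T | bit 1), Gaussian tails.
    p01 = 0.5 * erfc((threshold - mu0) / (sigma0 * sqrt(2)))
    p10 = 0.5 * erfc((mu1 - threshold) / (sigma1 * sqrt(2)))
    return 0.5 * (p01 + p10)

thresholds = np.linspace(0.2, 0.8, 601)
bers = np.array([ber(t) for t in thresholds])
best = thresholds[np.argmin(bers)]
print(f"midpoint threshold 0.5 -> BER {ber(0.5):.2e}")
print(f"optimal threshold {best:.3f} -> BER {bers.min():.2e}")
```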
Error rate information in attention allocation pilot models
NASA Technical Reports Server (NTRS)
Faulkner, W. H.; Onstott, E. D.
1977-01-01
The Northrop urgency decision pilot model was used in a command tracking task to compare the optimized performance of multiaxis attention allocation pilot models whose urgency functions were (1) based on tracking error alone, and (2) based on both tracking error and error rate. A matrix of system dynamics and command inputs was employed, to create both symmetric and asymmetric two axis compensatory tracking tasks. All tasks were single loop on each axis. Analysis showed that a model that allocates control attention through nonlinear urgency functions using only error information could not achieve performance of the full model whose attention shifting algorithm included both error and error rate terms. Subsequent to this analysis, tracking performance predictions for the full model were verified by piloted flight simulation. Complete model and simulation data are presented.
ERIC Educational Resources Information Center
Schmidt, Brandy; Papale, Andrew; Redish, A. David; Markus, Etan J.
2013-01-01
Navigation can be accomplished through multiple decision-making strategies, using different information-processing computations. A well-studied dichotomy in these decision-making strategies compares hippocampal-dependent "place" and dorsal-lateral striatal dependent "response" strategies. A place strategy depends on the ability to flexibly respond…
Reducing Diagnostic Error with Computer-Based Clinical Decision Support
ERIC Educational Resources Information Center
Greenes, Robert A.
2009-01-01
Information technology approaches to delivering diagnostic clinical decision support (CDS) are the subject of the papers to follow in the proceedings. These will address the history of CDS and present day approaches (Miller), evaluation of diagnostic CDS methods (Friedman), and the role of clinical documentation in supporting diagnostic decision…
Decision-Making Accuracy of CBM Progress-Monitoring Data
ERIC Educational Resources Information Center
Hintze, John M.; Wells, Craig S.; Marcotte, Amanda M.; Solomon, Benjamin G.
2018-01-01
This study examined the diagnostic accuracy associated with decision making as is typically conducted with curriculum-based measurement (CBM) approaches to progress monitoring. Using previously published estimates of the standard errors of estimate associated with CBM, 20,000 progress-monitoring data sets were simulated to model student reading…
20 CFR 404.942 - Prehearing proceedings and decisions by attorney advisors.
Code of Federal Regulations, 2010 CFR
2010-04-01
...-AGE, SURVIVORS AND DISABILITY INSURANCE (1950- ) Determinations, Administrative Review Process, and...) There is an error in the file or some other indication that a fully favorable decision may be issued. (c... additional evidence that may be relevant to the claim, including medical evidence; and (2) If necessary to...
20 CFR 404.942 - Prehearing proceedings and decisions by attorney advisors.
Code of Federal Regulations, 2011 CFR
2011-04-01
...-AGE, SURVIVORS AND DISABILITY INSURANCE (1950- ) Determinations, Administrative Review Process, and...) There is an error in the file or some other indication that a fully favorable decision may be issued. (c... additional evidence that may be relevant to the claim, including medical evidence; and (2) If necessary to...
Collegiate Aviation Review. September 1994.
ERIC Educational Resources Information Center
Barker, Ballard M., Ed.
This document contains four papers on aviation education. The first paper, "Why Aren't We Teaching Aeronautical Decision Making?" (Richard J. Adams), reviews 15 years of aviation research into the causes of human performance errors in aviation and provides guidelines for designing the next generation of aeronautical decision-making materials.…
Selection Practices of Group Leaders: A National Survey.
ERIC Educational Resources Information Center
Riva, Maria T.; Lippert, Laurel; Tackett, M. Jan
2000-01-01
Study surveys the selection practices of group leaders. Explores methods of selection, variables used to make selection decisions, and the types of selection errors that leaders have experienced. Results suggest that group leaders use clinical judgment to make selection decisions and endorse using some specific variables in selection. (Contains 22…
Error Ratio Analysis: Alternate Mathematics Assessment for General and Special Educators.
ERIC Educational Resources Information Center
Miller, James H.; Carr, Sonya C.
1997-01-01
Eighty-seven elementary students in grades four, five, and six, were administered a 30-item multiplication instrument to assess performance in computation across grade levels. An interpretation of student performance using error ratio analysis is provided and the use of this method with groups of students for instructional decision making is…
Effects of Crew Resource Management Training on Medical Errors in a Simulated Prehospital Setting
ERIC Educational Resources Information Center
Carhart, Elliot D.
2012-01-01
This applied dissertation investigated the effect of crew resource management (CRM) training on medical errors in a simulated prehospital setting. Specific areas addressed by this program included situational awareness, decision making, task management, teamwork, and communication. This study is believed to be the first investigation of CRM…
76 FR 20438 - Proposed Model Performance Measures for State Traffic Records Systems
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-12
... what data elements are critical. States should take advantage of these decision-making opportunities to... single database. Error means the recorded value for some data element of interest is incorrect. Error... into the database) and the number of missing (blank) data elements in the records that are in a...
A framework for simulating map error in ecosystem models
Sean P. Healey; Shawn P. Urbanski; Paul L. Patterson; Chris Garrard
2014-01-01
The temporal depth and spatial breadth of observations from platforms such as Landsat provide unique perspective on ecosystem dynamics, but the integration of these observations into formal decision support will rely upon improved uncertainty accounting. Monte Carlo (MC) simulations offer a practical, empirical method of accounting for potential map errors in broader...
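A minimal sketch of the Monte Carlo idea, with invented class areas and confusion rates rather than any Landsat product, is shown below; it propagates per-class map error into an uncertainty interval on an aggregate estimate.

```python
# Minimal sketch: Monte Carlo propagation of classification error in a land-cover
# map into an aggregate (e.g., forest-area) estimate. All counts and rates are hypothetical.
import numpy as np

rng = np.random.default_rng(42)
mapped_pixels = {"forest": 60000, "non_forest": 40000}
# Hypothetical probability that a pixel with each mapped label is truly forest.
p_truly_forest = {"forest": 0.92, "non_forest": 0.08}

draws = []
for _ in range(10000):
    true_forest = 0
    for cls, n in mapped_pixels.items():
        # Resample how many mapped pixels of each class are truly forest.
        true_forest += rng.binomial(n, p_truly_forest[cls])
    draws.append(true_forest)

draws = np.array(draws)
print(f"mean forest pixels: {draws.mean():.0f}")
print(f"95% interval: {np.percentile(draws, [2.5, 97.5])}")
```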
Mitigating Errors of Representation: A Practical Case Study of the University Experience Survey
ERIC Educational Resources Information Center
Whiteley, Sonia
2014-01-01
The Total Survey Error (TSE) paradigm provides a framework that supports the effective planning of research, guides decision making about data collection and contextualises the interpretation and dissemination of findings. TSE also allows researchers to systematically evaluate and improve the design and execution of ongoing survey programs and…
Diagnosis of Cognitive Errors by Statistical Pattern Recognition Methods.
ERIC Educational Resources Information Center
Tatsuoka, Kikumi K.; Tatsuoka, Maurice M.
The rule space model permits measurement of cognitive skill acquisition, diagnosis of cognitive errors, and detection of the strengths and weaknesses of knowledge possessed by individuals. Two ways to classify an individual into his or her most plausible latent state of knowledge include: (1) hypothesis testing--Bayes' decision rules for minimum…
Confidence Intervals for Weighted Composite Scores under the Compound Binomial Error Model
ERIC Educational Resources Information Center
Kim, Kyung Yong; Lee, Won-Chan
2018-01-01
Reporting confidence intervals with test scores helps test users make important decisions about examinees by providing information about the precision of test scores. Although a variety of estimation procedures based on the binomial error model are available for computing intervals for test scores, these procedures assume that items are randomly…
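As a simple illustration, the sketch below computes a normal-approximation interval for a single raw score under Lord's binomial error model; the article's weighted-composite, compound binomial procedure is more general, and the item count and score here are invented.

```python
# Minimal sketch: a simple binomial-error-model interval for a raw test score.
# (The article concerns weighted composites under a compound binomial model;
# this only illustrates the simpler single-form case with made-up numbers.)
from math import sqrt

n_items = 40
raw_score = 30
# Lord's binomial-error-model standard error of measurement for a raw score.
sem = sqrt(raw_score * (n_items - raw_score) / (n_items - 1))
z = 1.96                                        # ~95% normal approximation
lower, upper = raw_score - z * sem, raw_score + z * sem
print(f"score {raw_score}/{n_items}, SEM = {sem:.2f}, 95% CI = ({lower:.1f}, {upper:.1f})")
```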
Decision-making in schizophrenia: A predictive-coding perspective.
Sterzer, Philipp; Voss, Martin; Schlagenhauf, Florian; Heinz, Andreas
2018-05-31
Dysfunctional decision-making has been implicated in the positive and negative symptoms of schizophrenia. Decision-making can be conceptualized within the framework of hierarchical predictive coding as the result of a Bayesian inference process that uses prior beliefs to infer states of the world. According to this idea, prior beliefs encoded at higher levels in the brain are fed back as predictive signals to lower levels. Whenever these predictions are violated by the incoming sensory data, a prediction error is generated and fed forward to update beliefs encoded at higher levels. Well-documented impairments in cognitive decision-making support the view that these neural inference mechanisms are altered in schizophrenia. There is also extensive evidence relating the symptoms of schizophrenia to aberrant signaling of prediction errors, especially in the domain of reward and value-based decision-making. Moreover, the idea of altered predictive coding is supported by evidence for impaired low-level sensory mechanisms and motor processes. We review behavioral and neural findings from these research areas and provide an integrated view suggesting that schizophrenia may be related to a pervasive alteration in predictive coding at multiple hierarchical levels, including cognitive and value-based decision-making processes as well as sensory and motor systems. We relate these findings to decision-making processes and propose that varying degrees of impairment in the implicated brain areas contribute to the variety of psychotic experiences. Copyright © 2018 Elsevier Inc. All rights reserved.
Sankari, E Siva; Manimegalai, D
2017-12-21
Predicting membrane protein types is an important and challenging research area in bioinformatics and proteomics. Traditional biophysical methods are used to classify membrane protein types. Because of the large number of uncharacterized protein sequences in databases, traditional methods are very time consuming, expensive and susceptible to errors. Hence, it is highly desirable to develop a robust, reliable, and efficient method to predict membrane protein types. Imbalanced and large datasets are often handled well by decision tree classifiers. Since the datasets used are imbalanced, the performance of various decision tree classifiers such as Decision Tree (DT), Classification And Regression Tree (CART), C4.5, Random tree and REP (Reduced Error Pruning) tree, and of ensemble methods such as Adaboost, RUS (Random Under Sampling) boost, Rotation forest and Random forest, is analysed. Among the various decision tree classifiers, Random forest performs well in less time, with a good accuracy of 96.35%. Another finding is that the RUS boost decision tree classifier is able to classify one or two samples in classes with very few samples, while the other classifiers such as DT, Adaboost, Rotation forest and Random forest are not sensitive to the classes with fewer samples. The performance of the decision tree classifiers is also compared with SVM (Support Vector Machine) and Naive Bayes classifiers. Copyright © 2017 Elsevier Ltd. All rights reserved.
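The comparison of a single decision tree against a random forest on imbalanced data can be sketched with scikit-learn on synthetic data; the class proportions and feature counts below are assumptions and do not reproduce the membrane-protein datasets.

```python
# Minimal sketch: decision tree vs. random forest on an imbalanced synthetic
# dataset (a stand-in for the membrane-protein data, which is not reproduced here).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score

X, y = make_classification(n_samples=3000, n_features=20, n_informative=8,
                           n_classes=3, weights=[0.8, 0.15, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for name, clf in [("decision tree", DecisionTreeClassifier(random_state=0)),
                  ("random forest", RandomForestClassifier(n_estimators=200, random_state=0))]:
    clf.fit(X_tr, y_tr)
    score = balanced_accuracy_score(y_te, clf.predict(X_te))
    print(f"{name}: balanced accuracy = {score:.3f}")
```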
Linear and Order Statistics Combiners for Pattern Classification
NASA Technical Reports Server (NTRS)
Tumer, Kagan; Ghosh, Joydeep; Lau, Sonie (Technical Monitor)
2001-01-01
Several researchers have experimentally shown that substantial improvements can be obtained in difficult pattern recognition problems by combining or integrating the outputs of multiple classifiers. This chapter provides an analytical framework to quantify the improvements in classification results due to combining. The results apply to both linear combiners and order statistics combiners. We first show that, to a first order approximation, the error rate obtained over and above the Bayes error rate is directly proportional to the variance of the actual decision boundaries around the Bayes optimum boundary. Combining classifiers in output space reduces this variance, and hence reduces the 'added' error. If N unbiased classifiers are combined by simple averaging, the added error rate can be reduced by a factor of N if the individual errors in approximating the decision boundaries are uncorrelated. Expressions are then derived for linear combiners which are biased or correlated, and the effect of output correlations on ensemble performance is quantified. For order statistics based non-linear combiners, we derive expressions that indicate how much the median, the maximum and in general the i-th order statistic can improve classifier performance. The analysis presented here facilitates the understanding of the relationships among error rates, classifier boundary distributions, and combining in output space. Experimental results on several public domain data sets are provided to illustrate the benefits of combining and to support the analytical results.
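The 1/N variance-reduction claim for uncorrelated, unbiased classifiers can be checked numerically; the sketch below simulates Gaussian estimation errors rather than actual classifier boundary errors, so it only illustrates the scaling rather than the chapter's derivation.

```python
# Minimal sketch: averaging N unbiased, uncorrelated noisy estimates reduces the
# error variance by roughly a factor of N (numeric illustration of the claim above).
import numpy as np

rng = np.random.default_rng(7)
true_value = 0.0
n_classifiers = 8
trials = 200000

single = rng.normal(true_value, 1.0, trials)
ensemble = rng.normal(true_value, 1.0, (trials, n_classifiers)).mean(axis=1)

print(f"single-classifier error variance:      {single.var():.3f}")
print(f"{n_classifiers}-classifier average error variance: {ensemble.var():.3f}  (about 1/{n_classifiers})")
```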
Balkanyi, Laszlo; Heja, Gergely; Nagy, Attlia
2014-01-01
Extracting scientifically accurate terminology from an EU public health regulation is part of the knowledge engineering work at the European Centre for Disease Prevention and Control (ECDC). ECDC operates information systems at the crossroads of many areas - posing a challenge for transparency and consistency. Semantic interoperability is based on the Terminology Server (TS). TS value sets (structured vocabularies) describe shared domains as "diseases", "organisms", "public health terms", "geo-entities" "organizations" and "administrative terms" and others. We extracted information from the relevant EC Implementing Decision on case definitions for reporting communicable diseases, listing 53 notifiable infectious diseases, containing clinical, diagnostic, laboratory and epidemiological criteria. We performed a consistency check; a simplification - abstraction; we represented lab criteria in triplets: as 'y' procedural result /of 'x' organism-substance/on 'z' specimen and identified negations. The resulting new case definition value set represents the various formalized criteria, meanwhile the existing disease value set has been extended, new signs and symptoms were added. New organisms enriched the organism value set. Other new categories have been added to the public health value set, as transmission modes; substances; specimens and procedures. We identified problem areas, as (a) some classification error(s); (b) inconsistent granularity of conditions; (c) seemingly nonsense criteria, medical trivialities; (d) possible logical errors, (e) seemingly factual errors that might be phrasing errors. We think our hypothesis regarding room for possible improvements is valid: there are some open issues and a further improved legal text might lead to more precise epidemiologic data collection. It has to be noted that formal representation for automatic classification of cases was out of scope, such a task would require other formalism, as e.g. those used by rule-based decision support systems.
Decision-Making under Risk of Loss in Children
Steelandt, Sophie; Broihanne, Marie-Hélène; Romain, Amélie; Thierry, Bernard; Dufour, Valérie
2013-01-01
In human adults, judgment errors are known to often lead to irrational decision-making in risky contexts. While these errors can affect the accuracy of profit evaluation, they may have once enhanced survival in dangerous contexts following a “better be safe than sorry” rule of thumb. Such a rule can be critical for children, and it could develop early on. Here, we investigated the rationality of choices and the possible occurrence of judgment errors in children aged 3 to 9 years when exposed to a risky trade. Children were allocated with a piece of cookie that they could either keep or risk in exchange of the content of one cup among 6, visible in front of them. In the cups, cookies could be of larger, equal or smaller sizes than the initial allocation. Chances of losing or winning were manipulated by presenting different combinations of cookie sizes in the cups (for example 3 large, 2 equal and 1 small cookie). We investigated the rationality of children's response using the theoretical models of Expected Utility Theory (EUT) and Cumulative Prospect Theory. Children aged 3 to 4 years old were unable to discriminate the profitability of exchanging in the different combinations. From 5 years, children were better at maximizing their benefit in each combination, their decisions were negatively induced by the probability of losing, and they exhibited a framing effect, a judgment error found in adults. Confronting data to the EUT indicated that children aged over 5 were risk-seekers but also revealed inconsistencies in their choices. According to a complementary model, the Cumulative Prospect Theory (CPT), they exhibited loss aversion, a pattern also found in adults. These findings confirm that adult-like judgment errors occur in children, which suggests that they possess a survival value. PMID:23349682
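To make the contrast between an expected-value evaluation and a loss-averse, prospect-theory-style evaluation concrete, the sketch below scores a keep-or-exchange gamble like the cookie trade; the outcomes and the alpha/lambda parameters are illustrative assumptions, not estimates from the study.

```python
# Minimal sketch: expected value vs. a prospect-theory-style evaluation of a
# keep-or-exchange gamble. Outcomes and parameters are illustrative, not the study's.
import numpy as np

endowment = 1.0                                  # size of the allocated cookie
outcomes = np.array([2.0, 1.0, 0.5])             # possible cookie sizes in the cups
counts = np.array([3, 2, 1])                     # e.g. 3 large, 2 equal, 1 small
probs = counts / counts.sum()

# Risk-neutral expected value of exchanging, relative to keeping.
ev_gain = (probs * outcomes).sum() - endowment

# Prospect-theory-style value: gains and losses coded relative to the endowment,
# with loss aversion (lambda > 1) and diminishing sensitivity (alpha < 1).
alpha, lam = 0.88, 2.25
deltas = outcomes - endowment
v = np.where(deltas >= 0, np.abs(deltas) ** alpha, -lam * np.abs(deltas) ** alpha)
pt_value = (probs * v).sum()

print(f"expected-value advantage of exchanging: {ev_gain:+.2f}")
print(f"prospect-theory value of exchanging:    {pt_value:+.2f}")
```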
45 Gb/s low complexity optical front-end for soft-decision LDPC decoders.
Sakib, Meer Nazmus; Moayedi, Monireh; Gross, Warren J; Liboiron-Ladouceur, Odile
2012-07-30
In this paper a low-complexity and energy-efficient 45 Gb/s soft-decision optical front-end to be used with soft-decision low-density parity-check (LDPC) decoders is demonstrated. The results show that the optical front-end exhibits a net coding gain of 7.06 and 9.62 dB at post-forward-error-correction bit error rates of 10^-7 and 10^-12 for the long-block-length LDPC(32768,26803) code. The improvement over a hard-decision front-end is 1.9 dB for this code. It is shown that the soft-decision circuit can also be used as a 2-bit flash-type analog-to-digital converter (ADC), in conjunction with equalization schemes. At a bit rate of 15 Gb/s, using RS(255,239), LDPC(672,336), (672, 504), (672, 588), and (1440, 1344) codes with a 6-tap finite impulse response (FIR) equalizer results in optical power savings of 3, 5, 7, 9.5 and 10.5 dB, respectively. The 2-bit flash ADC consumes only 2.71 W at 32 GSamples/s. At 45 GSamples/s the power consumption is estimated to be 4.95 W.
Relationship between impulsivity and decision-making in cocaine dependence
Kjome, Kimberly L.; Lane, Scott D.; Schmitz, Joy M.; Green, Charles; Ma, Liangsuo; Prasla, Irshad; Swann, Alan C.; Moeller, F. Gerard
2010-01-01
Impulsivity and decision-making are associated on a theoretical level in that impaired planning is a component of both. However, few studies have examined the relationship between measures of decision-making and impulsivity in clinical populations. The purpose of this study was to compare cocaine-dependent subjects to controls on a measure of decision-making (the Iowa Gambling Task or IGT), a questionnaire measure of impulsivity (the Barratt Impulsiveness Scale or BIS-11), and a measure of behavioral inhibition (the immediate memory task or IMT), and to examine the interrelationship among these measures. Results of the study showed that cocaine-dependent subjects made more disadvantageous choices on the IGT, had higher scores on the BIS, and more commission errors on the IMT. Cognitive model analysis showed that choice consistency factors on the IGT differed between cocaine-dependent subjects and controls. However, there was no significant correlation between IGT performance and the BIS total score or subscales or IMT commission errors. These results suggest that in cocaine dependent subjects there is little overlap between decision-making as measured by the IGT and impulsivity/behavioral inhibition as measured by the BIS and IMT. PMID:20478631
NASA Astrophysics Data System (ADS)
Kostyukov, V. N.; Naumenko, A. P.
2017-08-01
The paper addresses the problem of evaluating how the actions of operators of complex technological systems affect safe operation when condition monitoring systems are applied to elements and sub-systems of petrochemical production facilities. The main task of the research is to identify factors and criteria for describing monitoring system properties that allow the impact of personnel errors on the operation of real-time condition monitoring and diagnostic systems for petrochemical machinery to be evaluated, and to find objective criteria for classifying monitoring systems with the human factor taken into account. Using the real-time condition monitoring concepts of sudden-failure skipping risk and of static and dynamic error, one can evaluate the impact of personnel qualification on monitoring system operation in terms of errors in operators' actions while they receive information from the monitoring systems and operate the technological system. The operator is considered part of the technological system, and personnel behavior is treated as a combination of the following stages: input signal (information perception), reaction (decision-making), and response (decision implementation). Based on several studies of the behavior of nuclear power station operators in the USA, Italy and other countries, as well as on studies by Russian scientists, data on operator reliability were selected for analyzing operator behavior with diagnostic and monitoring systems at technological facilities. The calculations showed that, for the monitoring system selected as an example, the failure skipping risk for the set values of static (less than 0.01) and dynamic (less than 0.001) errors, taking into account the data on reliability of information perception, decision-making, and response, is 0.037; when all the facilities and the error probability are under control, it is not more than 0.027. When only pump and compressor units are under control, the failure skipping risk is not more than 0.022, with the probability of error in the operator's actions not more than 0.011. The results show that operator reliability can be assessed for almost any kind of production, but only with respect to technological capabilities, since operators' psychological and general training vary considerably across production industries. Using current techniques of engineering psychology and the design of data support, situation assessment, and decision-making and response systems, together with advances in condition monitoring in various production industries, one can evaluate the probability of skipping a hazardous condition while accounting for static and dynamic errors and the human factor.
Erroneous knowledge of results affects decision and memory processes on timing tasks.
Ryan, Lawrence J; Fritz, Matthew S
2007-12-01
On mental timing tasks, erroneous knowledge of results (KR) leads to incorrect performance accompanied by the subjective judgment of accurate performance. Using the start-stop technique (an analogue of the peak interval procedure) with both reproduction and production timing tasks, the authors analyze what processes erroneous KR alters. KR provides guidance (performance error information) that lowers decision thresholds. Erroneous KR also provides targeting information that alters response durations proportionately to the magnitude of the feedback error. On the production task, this shift results from changes in the reference memory, whereas on the reproduction task this shift results from changes in the decision threshold for responding. The idea that erroneous KR can alter different cognitive processes on related tasks is supported by the authors' demonstration that the learned strategies can transfer from the reproduction task to the production task but not vice versa. Thus the effects of KR are both task and context dependent.
Probabilistic confidence for decisions based on uncertain reliability estimates
NASA Astrophysics Data System (ADS)
Reid, Stuart G.
2013-05-01
Reliability assessments are commonly carried out to provide a rational basis for risk-informed decisions concerning the design or maintenance of engineering systems and structures. However, calculated reliabilities and associated probabilities of failure often have significant uncertainties associated with the possible estimation errors relative to the 'true' failure probabilities. For uncertain probabilities of failure, a measure of 'probabilistic confidence' has been proposed to reflect the concern that uncertainty about the true probability of failure could result in a system or structure that is unsafe and could subsequently fail. The paper describes how the concept of probabilistic confidence can be applied to evaluate and appropriately limit the probabilities of failure attributable to particular uncertainties such as design errors that may critically affect the dependability of risk-acceptance decisions. This approach is illustrated with regard to the dependability of structural design processes based on prototype testing with uncertainties attributable to sampling variability.
Double dissociation of value computations in orbitofrontal and anterior cingulate neurons
Kennerley, Steven W.; Behrens, Timothy E. J.; Wallis, Jonathan D.
2011-01-01
Damage to prefrontal cortex (PFC) impairs decision-making, but the underlying value computations that might cause such impairments remain unclear. Here we report that value computations are doubly dissociable within PFC neurons. While many PFC neurons encoded chosen value, they used opponent encoding schemes such that averaging the neuronal population eliminated value coding. However, a special population of neurons in anterior cingulate cortex (ACC) - but not orbitofrontal cortex (OFC) - multiplex chosen value across decision parameters using a unified encoding scheme, and encoded reward prediction errors. In contrast, neurons in OFC - but not ACC - encoded chosen value relative to the recent history of choice values. Together, these results suggest complementary valuation processes across PFC areas: OFC neurons dynamically evaluate current choices relative to recent choice values, while ACC neurons encode choice predictions and prediction errors using a common valuation currency reflecting the integration of multiple decision parameters. PMID:22037498
Human Factors Effecting Forensic Decision Making: Workplace Stress and Well-being.
Jeanguenat, Amy M; Dror, Itiel E
2018-01-01
Over the past decade, there has been a growing openness about the importance of human factors in forensic work. However, most of it focused on cognitive bias, and neglected issues of workplace wellness and stress. Forensic scientists work in a dynamic environment that includes common workplace pressures such as workload volume, tight deadlines, lack of advancement, number of working hours, low salary, technology distractions, and fluctuating priorities. However, in addition, forensic scientists also encounter a number of industry-specific pressures, such as technique criticism, repeated exposure to crime scenes or horrific case details, access to funding, working in an adversarial legal system, and zero tolerance for "errors". Thus, stress is an important human factor to mitigate for overall error management, productivity and decision quality (not to mention the well-being of the examiners themselves). Techniques such as mindfulness can become powerful tools to enhance work and decision quality. © 2017 American Academy of Forensic Sciences.
Kishida, Kenneth T.; Saez, Ignacio; Lohrenz, Terry; Witcher, Mark R.; Laxton, Adrian W.; Tatter, Stephen B.; White, Jason P.; Ellis, Thomas L.; Phillips, Paul E. M.; Montague, P. Read
2016-01-01
In the mammalian brain, dopamine is a critical neuromodulator whose actions underlie learning, decision-making, and behavioral control. Degeneration of dopamine neurons causes Parkinson’s disease, whereas dysregulation of dopamine signaling is believed to contribute to psychiatric conditions such as schizophrenia, addiction, and depression. Experiments in animal models suggest the hypothesis that dopamine release in human striatum encodes reward prediction errors (RPEs) (the difference between actual and expected outcomes) during ongoing decision-making. Blood oxygen level-dependent (BOLD) imaging experiments in humans support the idea that RPEs are tracked in the striatum; however, BOLD measurements cannot be used to infer the action of any one specific neurotransmitter. We monitored dopamine levels with subsecond temporal resolution in humans (n = 17) with Parkinson’s disease while they executed a sequential decision-making task. Participants placed bets and experienced monetary gains or losses. Dopamine fluctuations in the striatum fail to encode RPEs, as anticipated by a large body of work in model organisms. Instead, subsecond dopamine fluctuations encode an integration of RPEs with counterfactual prediction errors, the latter defined by how much better or worse the experienced outcome could have been. How dopamine fluctuations combine the actual and counterfactual is unknown. One possibility is that this process is the normal behavior of reward processing dopamine neurons, which previously had not been tested by experiments in animal models. Alternatively, this superposition of error terms may result from an additional yet-to-be-identified subclass of dopamine neurons. PMID:26598677
Repeatability and Reproducibility of Decisions by Latent Fingerprint Examiners
Ulery, Bradford T.; Hicklin, R. Austin; Buscaglia, JoAnn; Roberts, Maria Antonia
2012-01-01
The interpretation of forensic fingerprint evidence relies on the expertise of latent print examiners. We tested latent print examiners on the extent to which they reached consistent decisions. This study assessed intra-examiner repeatability by retesting 72 examiners on comparisons of latent and exemplar fingerprints, after an interval of approximately seven months; each examiner was reassigned 25 image pairs for comparison, out of total pool of 744 image pairs. We compare these repeatability results with reproducibility (inter-examiner) results derived from our previous study. Examiners repeated 89.1% of their individualization decisions, and 90.1% of their exclusion decisions; most of the changed decisions resulted in inconclusive decisions. Repeatability of comparison decisions (individualization, exclusion, inconclusive) was 90.0% for mated pairs, and 85.9% for nonmated pairs. Repeatability and reproducibility were notably lower for comparisons assessed by the examiners as “difficult” than for “easy” or “moderate” comparisons, indicating that examiners' assessments of difficulty may be useful for quality assurance. No false positive errors were repeated (n = 4); 30% of false negative errors were repeated. One percent of latent value decisions were completely reversed (no value even for exclusion vs. of value for individualization). Most of the inter- and intra-examiner variability concerned whether the examiners considered the information available to be sufficient to reach a conclusion; this variability was concentrated on specific image pairs such that repeatability and reproducibility were very high on some comparisons and very low on others. Much of the variability appears to be due to making categorical decisions in borderline cases. PMID:22427888
Equalization for a page-oriented optical memory system
NASA Astrophysics Data System (ADS)
Trelewicz, Jennifer Q.; Capone, Jeffrey
1999-11-01
In this work, a method of decision-feedback equalization is developed for a digital holographic channel that experiences moderate-to-severe imaging errors. Decision feedback is utilized, not only where the channel is well-behaved, but also near the edges of the camera grid that are subject to a high degree of imaging error. In addition to these effects, the channel is worsened by typical problems of holographic channels, including non-uniform illumination, dropouts, and stuck bits. The approach described in this paper builds on established methods for performing trained and blind equalization on time-varying channels. The approach is tested on experimental data sets. On most of these data sets, the method of equalization described in this work delivers at least an order of magnitude improvement in bit-error rate (BER) before error-correction coding (ECC). When ECC is introduced, the approach is able to recover stored data with no errors for many of the tested data sets. Furthermore, a low BER was maintained even over a range of small alignment perturbations in the system. It is believed that this equalization method can allow cost reductions to be made in page-memory systems, by allowing for a larger image area per page or less complex imaging components, without sacrificing the low BER required by data storage applications.
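As a concrete illustration of the general principle of decision feedback (though not the trained and blind equalization method developed for the holographic channel in this work), the sketch below adapts the feedforward and feedback taps of a decision-feedback equalizer with an LMS update on a one-dimensional ±1 symbol stream. The channel model, tap counts, and step size are assumptions chosen only for demonstration.

```python
import numpy as np

def dfe_equalize(received, ff_taps=4, fb_taps=2, mu=0.01, training=None):
    """Minimal LMS-adapted decision-feedback equalizer for a +/-1 symbol stream."""
    f = np.zeros(ff_taps)                 # feedforward filter weights
    b = np.zeros(fb_taps)                 # feedback filter weights
    past = np.zeros(fb_taps)              # previously decided symbols
    decisions = np.zeros(len(received))
    for n in range(len(received)):
        x = received[max(0, n - ff_taps + 1):n + 1][::-1]   # newest sample first
        x = np.pad(x, (0, ff_taps - len(x)))                # zero-pad at the start of the stream
        y = f @ x - b @ past                                # cancel ISI using past decisions
        d = 1.0 if y >= 0 else -1.0                         # threshold detector
        ref = training[n] if training is not None and n < len(training) else d
        e = ref - y                                         # error driving the LMS update
        f += mu * e * x
        b -= mu * e * past
        past = np.roll(past, 1)
        past[0] = ref
        decisions[n] = d
    return decisions

# usage on an invented inter-symbol-interference channel
rng = np.random.default_rng(0)
s = rng.choice([-1.0, 1.0], size=2000)
y = s + 0.4 * np.concatenate(([0.0], s[:-1])) + 0.05 * rng.normal(size=2000)
d = dfe_equalize(y, training=s[:200])                       # trained, then decision-directed
print("BER after training:", np.mean(d[200:] != s[200:]))
```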
Preventing Errors in Clinical Practice: A Call for Self-Awareness
Borrell-Carrió, Francesc; Epstein, Ronald M.
2004-01-01
While ascribing medical errors primarily to systems factors can free clinicians from individual blame, there are elements of medical errors that can and should be attributed to individual factors. These factors are related less commonly to lack of knowledge and skill than to the inability to apply the clinician’s abilities to situations under certain circumstances. In concert with efforts to improve health care systems, refining physicians’ emotional and cognitive capacities might also prevent many errors. In general, physicians have the sensation of making a mistake because of the interference of emotional elements. We propose a so-called rational-emotive model that emphasizes 2 factors in error causation: (1) difficulty in reframing the first hypothesis that goes to the physician’s mind in an automatic way, and (2) premature closure of the clinical act to avoid confronting inconsistencies, low-level decision rules, and emotions. We propose a teaching strategy based on developing the physician’s insight and self-awareness to detect the inappropriate use of low-level decision rules, as well as detecting the factors that limit a physician’s capacity to tolerate the tension of uncertainty and ambiguity. Emotional self-awareness and self-regulation of attention can be consciously cultivated as habits to help physicians function better in clinical situations. PMID:15335129
A Conceptual Framework for Predicting Error in Complex Human-Machine Environments
NASA Technical Reports Server (NTRS)
Freed, Michael; Remington, Roger; Null, Cynthia H. (Technical Monitor)
1998-01-01
We present a Goals, Operators, Methods, and Selection Rules-Model Human Processor (GOMS-MHP) style model-based approach to the problem of predicting human habit capture errors. Habit captures occur when the model fails to allocate limited cognitive resources to retrieve task-relevant information from memory. Lacking the unretrieved information, decision mechanisms act in accordance with implicit default assumptions, resulting in error when the relied-upon assumptions prove incorrect. The model helps interface designers identify situations in which such failures are especially likely.
NASA Technical Reports Server (NTRS)
Holms, A. G.
1974-01-01
Monte Carlo studies using population models intended to represent response surface applications are reported. Simulated experiments were generated by adding pseudorandom, normally distributed errors to population values to produce observations. Model equations were fitted to the observations, and the decision procedure was used to delete terms. Comparison of values predicted by the reduced models with the true population values enabled the identification of deletion strategies that are approximately optimal for minimizing prediction errors.
NASA Astrophysics Data System (ADS)
Książek, Judyta
2015-10-01
There is currently great interest in the development of texture-based image classification methods in many different areas. This study presents the results of research carried out to assess the usefulness of selected textural features for detection of asbestos-cement roofs in orthophotomap classification. Two different orthophotomaps of southern Poland (with ground resolutions of 5 cm and 25 cm) were used. On both orthoimages, representative samples were selected for two classes: asbestos-cement roofing sheets and other roofing materials. Estimation of texture analysis usefulness was conducted using machine learning methods based on decision trees (C5.0 algorithm). For this purpose, various sets of texture parameters were calculated in MaZda software. During the calculation of decision trees, different numbers of texture parameter groups were considered. Cross-validation was performed to obtain the best settings for the decision tree models. Decision tree models with the lowest mean classification error were selected. The accuracy of the classification was assessed on validation data sets that were not used for training. For 5 cm ground resolution samples, the lowest mean classification error was 15.6%. The lowest mean classification error in the case of 25 cm ground resolution was 20.0%. The results confirm the potential usefulness of texture-based image processing for detection of asbestos-cement roofing sheets. To improve accuracy, an extended study should be considered in which additional textural features as well as spectral characteristics are analyzed.
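A minimal sketch of the workflow just described (texture features, a decision-tree classifier, cross-validation to select settings, and evaluation on a held-out validation set) is shown below. It substitutes scikit-learn's CART implementation for the C5.0 algorithm and uses a placeholder feature matrix instead of MaZda texture parameters, so it illustrates the procedure rather than reproducing the study.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.tree import DecisionTreeClassifier

# placeholder texture-feature matrix (rows = roof samples, columns = texture parameters)
# and binary labels (1 = asbestos-cement sheet, 0 = other roofing material)
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 20))
y = rng.integers(0, 2, size=400)

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

# cross-validation over tree settings, keeping the model with the lowest mean error
search = GridSearchCV(
    DecisionTreeClassifier(random_state=0),
    param_grid={"max_depth": [3, 5, 10, None], "min_samples_leaf": [1, 5, 10]},
    cv=5,
    scoring="accuracy",
)
search.fit(X_train, y_train)

# accuracy is reported on validation data not used for learning, as in the study
val_error = 1.0 - search.score(X_val, y_val)
print(f"best settings: {search.best_params_}, validation error: {val_error:.3f}")
```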
2016-09-01
Reports an error in "Decision sidestepping: How the motivation for closure prompts individuals to bypass decision making" by Ashley S. Otto, Joshua J. Clarkson and Frank R. Kardes ( Journal of Personality and Social Psychology , 2016[Jul], Vol 111[1], 1-16). In the article, the main heading for Experiment 3 was missing due to a production error, and the first sentence of the first paragraph of Experiment 3 should begin as follows: Experiment 2 offered support for the hypothesis that those seeking closure engage in decision sidestepping to reduce the bothersome nature of decision making. (The following abstract of the original article appeared in record 2016-30159-001.) We all too often have to make decisions—from the mundane (e.g., what to eat for breakfast) to the complex (e.g., what to buy a loved one)—and yet there exists a multitude of strategies that allows us to make a decision. This work focuses on a subset of decision strategies that allows individuals to make decisions by bypassing the decision-making process—a phenomenon we term decision sidestepping. Critical to the present manuscript, however, we contend that decision sidestepping stems from the motivation to achieve closure. We link this proposition back to the fundamental nature of closure and how those seeking closure are highly bothered by decision making. As such, we argue that the motivation to achieve closure prompts a reliance on sidestepping strategies (e.g., default bias, choice delegation, status quo bias, inaction inertia, option fixation) to reduce the bothersome nature of decision making. In support of this framework, five experiments demonstrate that (a) those seeking closure are more likely to engage in decision sidestepping, (b) the effect of closure on sidestepping stems from the bothersome nature of decision making, and (c) the reliance on sidestepping results in downstream consequences for subsequent choice. Taken together, these findings offer unique insight into the cognitive motivations stimulating a reliance on decision sidestepping and thus a novel framework by which to understand how individuals make decisions while bypassing the decision-making process. PsycINFO Database Record (c) 2016 APA, all rights reserved
A systematic framework for Monte Carlo simulation of remote sensing errors map in carbon assessments
S. Healey; P. Patterson; S. Urbanski
2014-01-01
Remotely sensed observations can provide unique perspective on how management and natural disturbance affect carbon stocks in forests. However, integration of these observations into formal decision support will rely upon improved uncertainty accounting. Monte Carlo (MC) simulations offer a practical, empirical method of accounting for potential remote sensing errors...
Alan K. Swanson; Solomon Z. Dobrowski; Andrew O. Finley; James H. Thorne; Michael K. Schwartz
2013-01-01
The uncertainty associated with species distribution model (SDM) projections is poorly characterized, despite its potential value to decision makers. Error estimates from most modelling techniques have been shown to be biased due to their failure to account for spatial autocorrelation (SAC) of residual error. Generalized linear mixed models (GLMM) have the ability to...
38 CFR 20.1401 - Rule 1401. Definitions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 2 2012-07-01 2012-07-01 false Rule 1401. Definitions... Unmistakable Error § 20.1401 Rule 1401. Definitions. (a) Issue. Unless otherwise specified, the term “issue” in this subpart means a matter upon which the Board made a final decision (other than a decision under...
38 CFR 20.1401 - Rule 1401. Definitions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 2 2011-07-01 2011-07-01 false Rule 1401. Definitions... Unmistakable Error § 20.1401 Rule 1401. Definitions. (a) Issue. Unless otherwise specified, the term “issue” in this subpart means a matter upon which the Board made a final decision (other than a decision under...
38 CFR 20.1401 - Rule 1401. Definitions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 2 2014-07-01 2014-07-01 false Rule 1401. Definitions... Unmistakable Error § 20.1401 Rule 1401. Definitions. (a) Issue. Unless otherwise specified, the term “issue” in this subpart means a matter upon which the Board made a final decision (other than a decision under...
Effect of thematic map misclassification on landscape multi-metric assessment.
Kleindl, William J; Powell, Scott L; Hauer, F Richard
2015-06-01
Advancements in remote sensing and computational tools have increased our awareness of large-scale environmental problems, thereby creating a need for monitoring, assessment, and management at these scales. Over the last decade, several watershed and regional multi-metric indices have been developed to assist decision-makers with planning actions of these scales. However, these tools use remote-sensing products that are subject to land-cover misclassification, and these errors are rarely incorporated in the assessment results. Here, we examined the sensitivity of a landscape-scale multi-metric index (MMI) to error from thematic land-cover misclassification and the implications of this uncertainty for resource management decisions. Through a case study, we used a simplified floodplain MMI assessment tool, whose metrics were derived from Landsat thematic maps, to initially provide results that were naive to thematic misclassification error. Using a Monte Carlo simulation model, we then incorporated map misclassification error into our MMI, resulting in four important conclusions: (1) each metric had a different sensitivity to error; (2) within each metric, the bias between the error-naive metric scores and simulated scores that incorporate potential error varied in magnitude and direction depending on the underlying land cover at each assessment site; (3) collectively, when the metrics were combined into a multi-metric index, the effects were attenuated; and (4) the index bias indicated that our naive assessment model may overestimate floodplain condition of sites with limited human impacts and, to a lesser extent, either over- or underestimated floodplain condition of sites with mixed land use.
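One way to make the described Monte Carlo treatment concrete is sketched below: each mapped pixel's class is redrawn from an assumed misclassification (confusion) matrix and a toy condition metric is recomputed, turning the single error-naive score into a distribution whose bias and spread can be reported. The confusion matrix, land-cover classes, and metric are invented for illustration and are not the study's values.

```python
import numpy as np

rng = np.random.default_rng(2)

# hypothetical per-class misclassification probabilities
# rows = mapped class, columns = true class; classes: 0 = natural, 1 = developed
confusion = np.array([[0.90, 0.10],
                      [0.15, 0.85]])

def metric(land_cover):
    """Toy floodplain-condition metric: percentage of natural cover."""
    return 100.0 * np.mean(land_cover == 0)

def simulate(mapped, n_draws=200):
    """Redraw each pixel's true class from the confusion matrix and recompute
    the metric, yielding a score distribution instead of a single value."""
    scores = np.empty(n_draws)
    for i in range(n_draws):
        true = np.array([rng.choice(2, p=confusion[c]) for c in mapped])
        scores[i] = metric(true)
    return scores

mapped = rng.choice(2, size=500, p=[0.7, 0.3])   # one mapped assessment site
naive = metric(mapped)                           # error-naive score
dist = simulate(mapped)
print(f"naive score {naive:.1f}, simulated mean {dist.mean():.1f} "
      f"(bias {dist.mean() - naive:+.1f}), "
      f"90% interval ({np.percentile(dist, 5):.1f}, {np.percentile(dist, 95):.1f})")
```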
Graff, L; Russell, J; Seashore, J; Tate, J; Elwell, A; Prete, M; Werdmann, M; Maag, R; Krivenko, C; Radford, M
2000-11-01
To test the hypothesis that physician errors (failure to diagnose appendicitis at initial evaluation) correlate with adverse outcome. The authors also postulated that physician errors would correlate with delays in surgery, delays in surgery would correlate with adverse outcomes, and physician errors would occur on patients with atypical presentations. This was a retrospective two-arm observational cohort study at 12 acute care hospitals: 1) consecutive patients who had an appendectomy for appendicitis and 2) consecutive emergency department abdominal pain patients. Outcome measures were adverse events (perforation, abscess) and physician diagnostic performance (false-positive decisions, false-negative decisions). The appendectomy arm of the study included 1,026 patients with 110 (10.5%) false-positive decisions (range by hospital 4.7% to 19.5%). Of the 916 patients with appendicitis, 170 (18.6%) false-negative decisions were made (range by hospital 10.6% to 27.8%). Patients who had false-negative decisions had increased risks of perforation (r = 0.59, p = 0.058) and of abscess formation (r = 0.81, p = 0.002). For admitted patients, when the in-hospital delay before surgery was >20 hours, the risk of perforation was increased [2.9 odds ratio (OR) 95% CI = 1.8 to 4.8]. The amount of delay from initial physician evaluation until surgery varied with physician diagnostic performance: 7.0 hours (95% CI = 6.7 to 7.4) if the initial physician made the diagnosis, 72.4 hours (95% CI = 51.2 to 93.7) if the initial office physician missed the diagnosis, and 63.1 hours (95% CI = 47.9 to 78.4) if the initial emergency physician missed the diagnosis. Patients whose diagnosis was initially missed by the physician had fewer signs and symptoms of appendicitis than patients whose diagnosis was made initially [appendicitis score 2.0 (95% CI = 1.6 to 2.3) vs 6.5 (95% CI = 6.4 to 6.7)]. Older patients (>41 years old) had more false-negative decisions and a higher risk of perforation or abscess (3.5 OR 95% CI = 2.4 to 5.1). False-positive decisions were made for patients who had signs and symptoms similar to those of appendicitis patients [appendicitis score 5.7 (95% CI = 5.2 to 6.1) vs 6.5 (95% CI = 6.4 to 6.7)]. Female patients had an increased risk of false-positive surgery (2.3 OR 95% CI = 1.5 to 3.4). The abdominal pain arm of the study included 1,118 consecutive patients submitted by eight hospitals, with 44 patients having appendicitis. Hospitals with observation units compared with hospitals without observation units had a higher "rule out appendicitis" evaluation rate [33.7% (95% CI = 27 to 38) vs 24.7% (95% CI = 23 to 27)] and a similar hospital admission rate (27.6% vs 24.7%, p = NS). There was a lower missed-diagnosis rate (15.1% vs 19.4%, p = NS, power 0.02), lower perforation rate (19.0% vs 20.6%, p = NS, power 0.05), and lower abscess rate (5.6% vs 6.9%, p = NS, power 0.06), but these did not reach statistical significance. Errors in physician diagnostic decisions correlated with patient clinical findings, i.e., the missed diagnoses were on appendicitis patients with few clinical findings and unnecessary surgeries were on non-appendicitis patients with clinical findings similar to those of patients with appendicitis. Adverse events (perforation, abscess formation) correlated with physician false-negative decisions.
An automated approach to the design of decision tree classifiers
NASA Technical Reports Server (NTRS)
Argentiero, P.; Chin, R.; Beaudet, P.
1982-01-01
An automated technique is presented for designing effective decision tree classifiers predicated only on a priori class statistics. The procedure relies on linear feature extractions and Bayes table look-up decision rules. Associated error matrices are computed and utilized to provide an optimal design of the decision tree at each so-called 'node'. A by-product of this procedure is a simple algorithm for computing the global probability of correct classification assuming the statistical independence of the decision rules. Attention is given to a more precise definition of decision tree classification, the mathematical details on the technique for automated decision tree design, and an example of a simple application of the procedure using class statistics acquired from an actual Landsat scene.
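The "simple algorithm" mentioned in the abstract, computing a global probability of correct classification from per-node error matrices under the independence assumption, amounts to multiplying the diagonal entries along a decision path. The sketch below uses invented 2x2 error matrices to show the calculation; it is not the Landsat application itself.

```python
import numpy as np

# hypothetical 2x2 error matrices for each node along a decision path;
# entry [i][j] = P(decide class j | true class i)
node_error_matrices = [
    np.array([[0.95, 0.05], [0.08, 0.92]]),
    np.array([[0.90, 0.10], [0.12, 0.88]]),
    np.array([[0.97, 0.03], [0.05, 0.95]]),
]

def path_correct_probability(matrices, true_class=0):
    """Probability that every decision rule along the path is correct for a sample
    of `true_class`, assuming the rules err independently of one another."""
    p = 1.0
    for m in matrices:
        p *= m[true_class, true_class]
    return p

print(f"P(correct path | class 0) = {path_correct_probability(node_error_matrices, 0):.3f}")
```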
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kertzscher, Gustavo, E-mail: guke@dtu.dk; Andersen, Claus E., E-mail: clan@dtu.dk; Tanderup, Kari, E-mail: karitand@rm.dk
Purpose: This study presents an adaptive error detection algorithm (AEDA) for real-time in vivo point dosimetry during high dose rate (HDR) or pulsed dose rate (PDR) brachytherapy (BT) where the error identification, in contrast to existing approaches, does not depend on an a priori reconstruction of the dosimeter position. Instead, the treatment is judged based on dose rate comparisons between measurements and calculations of the most viable dosimeter position provided by the AEDA in a data driven approach. As a result, the AEDA compensates for false error cases related to systematic effects of the dosimeter position reconstruction. Given its nearly exclusive dependence on stable dosimeter positioning, the AEDA allows for a substantially simplified and time efficient real-time in vivo BT dosimetry implementation. Methods: In the event of a measured potential treatment error, the AEDA proposes the most viable dosimeter position out of alternatives to the original reconstruction by means of a data driven matching procedure between dose rate distributions. If measured dose rates do not differ significantly from the most viable alternative, the initial error indication may be attributed to a mispositioned or misreconstructed dosimeter (false error). However, if the error declaration persists, no viable dosimeter position can be found to explain the error, hence the discrepancy is more likely to originate from a misplaced or misreconstructed source applicator or from erroneously connected source guide tubes (true error). Results: The AEDA applied on two in vivo dosimetry implementations for pulsed dose rate BT demonstrated that the AEDA correctly described effects responsible for initial error indications. The AEDA was able to correctly identify the major part of all permutations of simulated guide tube swap errors and simulated shifts of individual needles from the original reconstruction. Unidentified errors corresponded to scenarios where the dosimeter position was sufficiently symmetric with respect to error and no-error source position constellations. The AEDA was able to correctly identify all false errors represented by mispositioned dosimeters contrary to an error detection algorithm relying on the original reconstruction. Conclusions: The study demonstrates that the AEDA error identification during HDR/PDR BT relies on a stable dosimeter position rather than on an accurate dosimeter reconstruction, and the AEDA’s capacity to distinguish between true and false error scenarios. The study further shows that the AEDA can offer guidance in decision making in the event of potential errors detected with real-time in vivo point dosimetry.
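The data-driven matching idea behind the AEDA can be paraphrased in a few lines of code: compare the measured dose-rate trace against calculated traces for a set of candidate dosimeter positions, and treat the initial alarm as a likely false error only if some candidate explains the measurement within tolerance. The discrepancy measure, tolerance, and candidate positions below are assumptions for illustration, not the published algorithm.

```python
import numpy as np

def classify_alarm(measured, calculated_by_position, tolerance=0.10):
    """measured : dose rates over dwell positions for one treatment pulse
    calculated_by_position : dict mapping candidate dosimeter positions to
                             calculated dose-rate arrays of the same length
    Returns the most viable position and whether the alarm looks true or false."""
    best_pos, best_dev = None, np.inf
    for pos, calc in calculated_by_position.items():
        # maximum relative deviation between measured and calculated dose rates
        dev = np.max(np.abs(measured - calc) / np.maximum(calc, 1e-9))
        if dev < best_dev:
            best_pos, best_dev = pos, dev
    if best_dev <= tolerance:
        return best_pos, "false error (consistent with a shifted or misreconstructed dosimeter)"
    return best_pos, "true error (no viable dosimeter position explains the measurement)"

# usage with invented numbers
measured = np.array([1.02, 0.95, 1.10])
candidates = {"original": np.array([1.50, 1.40, 1.60]),
              "shifted_5mm": np.array([1.00, 0.97, 1.08])}
print(classify_alarm(measured, candidates))
```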
NASA Astrophysics Data System (ADS)
Coyne, Kevin Anthony
The safe operation of complex systems such as nuclear power plants requires close coordination between the human operators and plant systems. In order to maintain an adequate level of safety following an accident or other off-normal event, the operators often are called upon to perform complex tasks during dynamic situations with incomplete information. The safety of such complex systems can be greatly improved if the conditions that could lead operators to make poor decisions and commit erroneous actions during these situations can be predicted and mitigated. The primary goal of this research project was the development and validation of a cognitive model capable of simulating nuclear plant operator decision-making during accident conditions. Dynamic probabilistic risk assessment methods can improve the prediction of human error events by providing rich contextual information and an explicit consideration of feedback arising from man-machine interactions. The Accident Dynamics Simulator paired with the Information, Decision, and Action in a Crew context cognitive model (ADS-IDAC) shows promise for predicting situational contexts that might lead to human error events, particularly knowledge driven errors of commission. ADS-IDAC generates a discrete dynamic event tree (DDET) by applying simple branching rules that reflect variations in crew responses to plant events and system status changes. Branches can be generated to simulate slow or fast procedure execution speed, skipping of procedure steps, reliance on memorized information, activation of mental beliefs, variations in control inputs, and equipment failures. Complex operator mental models of plant behavior that guide crew actions can be represented within the ADS-IDAC mental belief framework and used to identify situational contexts that may lead to human error events. This research increased the capabilities of ADS-IDAC in several key areas. The ADS-IDAC computer code was improved to support additional branching events and provide a better representation of the IDAC cognitive model. An operator decision-making engine capable of responding to dynamic changes in situational context was implemented. The IDAC human performance model was fully integrated with a detailed nuclear plant model in order to realistically simulate plant accident scenarios. Finally, the improved ADS-IDAC model was calibrated, validated, and updated using actual nuclear plant crew performance data. This research led to the following general conclusions: (1) A relatively small number of branching rules are capable of efficiently capturing a wide spectrum of crew-to-crew variabilities. (2) Compared to traditional static risk assessment methods, ADS-IDAC can provide a more realistic and integrated assessment of human error events by directly determining the effect of operator behaviors on plant thermal hydraulic parameters. (3) The ADS-IDAC approach provides an efficient framework for capturing actual operator performance data such as timing of operator actions, mental models, and decision-making activities.
Hargrave, Catriona; Deegan, Timothy; Bednarz, Tomasz; Poulsen, Michael; Harden, Fiona; Mengersen, Kerrie
2018-05-17
To describe a Bayesian network (BN) and complementary visualization tool that aim to support decision-making during online cone-beam computed tomography (CBCT)-based image-guided radiotherapy (IGRT) for prostate cancer patients. The BN was created to represent relationships between observed prostate, proximal seminal vesicle (PSV), bladder and rectum volume variations, an image feature alignment score (FAS_TV_OAR), delivered dose, and treatment plan compliance (TPC). Variables influencing tumor volume (TV) targeting accuracy such as intrafraction motion, and contouring and couch shift errors were also represented. A score of overall TPC (FAS_global) and factors such as image quality were used to inform the BN output node providing advice about proceeding with treatment. The BN was quantified using conditional probabilities generated from published studies, FAS_TV_OAR/global modeling, and a survey of IGRT decision-making practices. A new IGRT visualization tool (IGRT_REV), in the form of Mollweide projection plots, was developed to provide a global summary of residual errors after online CBCT-planning CT registration. Sensitivity and scenario analyses were undertaken to evaluate the performance of the BN and the relative influence of the network variables on TPC and the decision to proceed with treatment. The IGRT_REV plots were evaluated in conjunction with the BN scenario testing, using additional test data generated from retrospective CBCT-planning CT soft-tissue registrations for 13/36 patients whose data were used in the FAS_TV_OAR/global modeling. Modeling of the TV targeting errors resulted in a very low probability of corrected distances between the CBCT and planning CT prostate or PSV volumes being within their thresholds. Strength of influence evaluation with and without the BN TV targeting error nodes indicated that rectum- and bladder-related network variables had the highest relative importance. When the TV targeting error nodes were excluded from the BN, TPC was sensitive to observed PSV and rectum variations while the decision to treat was sensitive to observed prostate and PSV variations. When root nodes were set so the PSV and rectum variations exceeded thresholds, the probability of low TPC increased to 40%. Prostate and PSV variations exceeding thresholds increased the likelihood of repositioning or repeating patient preparation to 43%. Scenario testing using the test data from 13 patients demonstrated two cases where the BN provided increased high TPC probabilities, despite some of the prostate and PSV volume variation metrics not being within tolerance. The IGRT_REV tool was effective in highlighting and quantifying where TV and OAR variations occurred, supporting the BN recommendation to reposition the patient or repeat their bladder and bowel preparation. In another case, the IGRT_REV tool was also effective in highlighting where PSV volume variation significantly exceeded tolerance when the BN had indicated to proceed with treatment. This study has demonstrated that both the BN and IGRT_REV plots are effective tools for inclusion in a decision support system for online CBCT-based IGRT for prostate cancer patients. Alternate approaches to modeling TV targeting errors need to be explored as well as extension of the BN to support offline IGRT decisions related to adaptive radiotherapy. © 2018 American Association of Physicists in Medicine.
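To make the BN idea tangible, the toy network below combines two observed variables into a probability of high treatment plan compliance by brute-force enumeration. Its structure and all conditional probabilities are invented for illustration and bear no relation to the study's quantified network.

```python
from itertools import product

# Hypothetical three-node network: RectumVariation (R) and ProstateAlignment (A)
# are parents of TreatmentPlanCompliance (T). All probabilities are invented.
P_R = {"within": 0.8, "exceeds": 0.2}
P_A = {"good": 0.7, "poor": 0.3}
P_T_given = {  # P(T = "high" | R, A)
    ("within", "good"): 0.95,
    ("within", "poor"): 0.60,
    ("exceeds", "good"): 0.55,
    ("exceeds", "poor"): 0.15,
}

def p_high_compliance(evidence):
    """P(T = high | evidence) by enumeration over any unobserved parents."""
    num = den = 0.0
    for r, a in product(P_R, P_A):
        if evidence.get("R", r) != r or evidence.get("A", a) != a:
            continue  # skip states inconsistent with the evidence
        prior = P_R[r] * P_A[a]
        num += prior * P_T_given[(r, a)]
        den += prior
    return num / den

print(p_high_compliance({"R": "exceeds"}))            # rectum variation out of tolerance
print(p_high_compliance({"R": "within", "A": "good"}))
```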
Zendehrouh, Sareh
2015-11-01
Recent work on decision-making field offers an account of dual-system theory for decision-making process. This theory holds that this process is conducted by two main controllers: a goal-directed system and a habitual system. In the reinforcement learning (RL) domain, the habitual behaviors are connected with model-free methods, in which appropriate actions are learned through trial-and-error experiences. However, goal-directed behaviors are associated with model-based methods of RL, in which actions are selected using a model of the environment. Studies on cognitive control also suggest that during processes like decision-making, some cortical and subcortical structures work in concert to monitor the consequences of decisions and to adjust control according to current task demands. Here a computational model is presented based on dual system theory and cognitive control perspective of decision-making. The proposed model is used to simulate human performance on a variant of probabilistic learning task. The basic proposal is that the brain implements a dual controller, while an accompanying monitoring system detects some kinds of conflict including a hypothetical cost-conflict one. The simulation results address existing theories about two event-related potentials, namely error related negativity (ERN) and feedback related negativity (FRN), and explore the best account of them. Based on the results, some testable predictions are also presented. Copyright © 2015 Elsevier Ltd. All rights reserved.
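A schematic of the dual-controller arrangement described above (a model-free, habitual learner; a model-based, goal-directed estimate; and a monitor that resolves conflicts between them) is sketched below. The arbitration rule, learning rates, and payoff probabilities are assumptions; the published model is considerably richer, including the hypothesized cost-conflict signal.

```python
import numpy as np

rng = np.random.default_rng(3)
n_actions = 2
reward_prob = np.array([0.8, 0.2])        # true payoff probabilities, unknown to the agent

q_mf = np.zeros(n_actions)                # habitual, model-free action values
model_estimate = np.full(n_actions, 0.5)  # goal-directed system's learned payoff model
alpha = 0.1

for trial in range(500):
    a_habit = int(np.argmax(q_mf))
    a_goal = int(np.argmax(model_estimate))
    # conflict monitor: when the two controllers disagree, defer to the
    # (slower but more flexible) goal-directed choice
    action = a_goal if a_habit != a_goal else a_habit
    reward = float(rng.random() < reward_prob[action])
    q_mf[action] += alpha * (reward - q_mf[action])                         # model-free update
    model_estimate[action] += alpha * (reward - model_estimate[action])     # update the model

print("habitual values:", q_mf, "model estimates:", model_estimate)
```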
White, Stuart F.; Geraci, Marilla; Lewis, Elizabeth; Leshin, Joseph; Teng, Cindy; Averbeck, Bruno; Meffert, Harma; Ernst, Monique; Blair, James R.; Grillon, Christian; Blair, Karina S.
2017-01-01
Objective: Deficits in reinforcement-based decision-making have been reported in Generalized Anxiety Disorder. However, the pathophysiology of these deficits is largely unknown; extant studies have mainly examined youth, and the integrity of core functional processes underpinning decision-making remains undetermined. In particular, it is unclear whether the representation of reinforcement prediction error (PE: the difference between received and expected reinforcement) is disrupted in Generalized Anxiety Disorder. The current study addresses these issues in adults with the disorder. Methods: Forty-six unmedicated individuals with Generalized Anxiety Disorder and 32 healthy controls, group-matched on IQ, gender, and age, completed a passive avoidance task while undergoing functional MRI. Results: Behaviorally, individuals with Generalized Anxiety Disorder showed impaired reinforcement-based decision-making. Imaging results revealed that during feedback, individuals with Generalized Anxiety Disorder relative to healthy controls showed a reduced correlation between PE and activity within ventromedial prefrontal cortex, ventral striatum, and other structures implicated in decision-making. In addition, individuals with Generalized Anxiety Disorder relative to healthy participants showed a reduced correlation between punishment, but not reward, PEs and activity within bilateral lentiform nucleus/putamen. Conclusions: This is the first study to identify computational impairments during decision-making in Generalized Anxiety Disorder. PE signaling is significantly disrupted in individuals with the disorder and may underpin the decision-making deficits observed in patients with GAD. PMID:27631963
Family matters: dyadic agreement in end-of-life medical decision making.
Schmid, Bettina; Allen, Rebecca S; Haley, Philip P; Decoster, Jamie
2010-04-01
We examined race/ethnicity and cultural context within hypothetical end-of-life medical decision scenarios and its influence on patient-proxy agreement. Family dyads consisting of an older adult and 1 family member, typically an adult child, responded to questions regarding the older adult's preferences for cardiopulmonary resuscitation, artificial feeding and fluids, and palliative care in hypothetical illness scenarios. The responses of 34 Caucasian dyads and 30 African American dyads were compared to determine the extent to which family members could accurately predict the treatment preferences of their older relative. We found higher treatment preference agreement among African American dyads compared with Caucasian dyads when considering overall raw difference scores (i.e., overtreatment errors can compensate for undertreatment errors). Prior advance care planning moderated the effect such that lower levels of advance care planning predicted undertreatment errors among African American proxies and overtreatment errors among Caucasian proxies. In contrast, no racial/ethnic differences in treatment preference agreement were found within absolute difference scores (i.e., total error, regardless of the direction of error). This project is one of the first to examine the mediators and moderators of dyadic racial/cultural differences in treatment preference agreement for end-of-life care in hypothetical illness scenarios. Future studies should use mixed method approaches to explore underlying factors for racial differences in patient-proxy agreement as a basis for developing culturally sensitive interventions to reduce racial disparities in end-of-life care options.
Human factors in surgery: from Three Mile Island to the operating room.
D'Addessi, Alessandro; Bongiovanni, Luca; Volpe, Andrea; Pinto, Francesco; Bassi, PierFrancesco
2009-01-01
Human factors is a discipline that encompasses the science of understanding the properties of human capability, the application of this understanding to the design and development of systems and services, and the art of ensuring their successful application to a program. The field of human factors traces its origins to the Second World War, but Three Mile Island has been the best example of how groups of people react and make decisions under stress: this nuclear accident was exacerbated by wrong decisions made because the operators were overwhelmed with irrelevant, misleading or incorrect information. Errors and their nature are the same in all human activities. The predisposition for error is so intrinsic to human nature that scientifically it is best considered as inherently biologic. The causes of error in medical care may not be easily generalized. Surgery differs in important ways: most errors occur in the operating room and are technical in nature. Commonly, surgical error has been thought of as the consequence of a lack of skill or ability, or as the result of thoughtless actions. Moreover, the 'operating theatre' has a unique set of team dynamics: professionals from multiple disciplines are required to work in a closely coordinated fashion. This complex environment provides multiple opportunities for unclear communication, clashing motivations, and errors arising not from technical incompetence but from poor interpersonal skills. Surgeons have to work closely with human factors specialists in future studies. By improving processes already in place in many operating rooms, safety will be enhanced and quality increased.
Balasubramani, Pragathi P.; Chakravarthy, V. Srinivasa; Ravindran, Balaraman; Moustafa, Ahmed A.
2014-01-01
Although empirical and neural studies show that serotonin (5HT) plays many functional roles in the brain, prior computational models mostly focus on its role in behavioral inhibition. In this study, we present a model of risk based decision making in a modified Reinforcement Learning (RL)-framework. The model depicts the roles of dopamine (DA) and serotonin (5HT) in Basal Ganglia (BG). In this model, the DA signal is represented by the temporal difference error (δ), while the 5HT signal is represented by a parameter (α) that controls risk prediction error. This formulation that accommodates both 5HT and DA reconciles some of the diverse roles of 5HT particularly in connection with the BG system. We apply the model to different experimental paradigms used to study the role of 5HT: (1) Risk-sensitive decision making, where 5HT controls risk assessment, (2) Temporal reward prediction, where 5HT controls time-scale of reward prediction, and (3) Reward/Punishment sensitivity, in which the punishment prediction error depends on 5HT levels. Thus the proposed integrated RL model reconciles several existing theories of 5HT and DA in the BG. PMID:24795614
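The division of labor described above, with a DA-like temporal-difference error δ driving value learning and a 5HT-like parameter α weighting a risk prediction error, can be sketched as a small risk-sensitive bandit learner. The update rules and constants below are illustrative simplifications, not the published model equations.

```python
import numpy as np

rng = np.random.default_rng(4)

def run_risk_sensitive_agent(alpha_risk=0.5, lr=0.1, n_trials=300):
    """Two-action bandit with value and risk (reward-variance) learning.

    alpha_risk : 5HT-like weight on the risk term in the utility
    lr         : learning rate for both value and risk updates
    """
    value = np.zeros(2)            # expected reward per action
    risk = np.zeros(2)             # running estimate of reward variance per action
    means, sds = np.array([1.0, 1.0]), np.array([0.1, 2.0])   # same mean, different risk
    choices = []
    for _ in range(n_trials):
        utility = value - alpha_risk * np.sqrt(risk)    # risk-averse utility
        a = int(np.argmax(utility))
        r = rng.normal(means[a], sds[a])
        delta = r - value[a]                 # DA-like temporal-difference error
        xi = delta ** 2 - risk[a]            # risk prediction error
        value[a] += lr * delta
        risk[a] += lr * xi
        choices.append(a)
    return np.mean(choices)                  # fraction of choices of the risky action

# a higher 5HT-like weight makes the agent avoid the high-variance option more often
print(run_risk_sensitive_agent(alpha_risk=0.1), run_risk_sensitive_agent(alpha_risk=1.0))
```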
Byun, Tara McAllister; Hitchcock, Elaine R; Ferron, John
2017-06-10
Single-case experimental designs are widely used to study interventions for communication disorders. Traditionally, single-case experiments follow a response-guided approach, where design decisions during the study are based on participants' observed patterns of behavior. However, this approach has been criticized for its high rate of Type I error. In masked visual analysis (MVA), response-guided decisions are made by a researcher who is blinded to participants' identities and treatment assignments. MVA also makes it possible to conduct a hypothesis test assessing the significance of treatment effects. This tutorial describes the principles of MVA, including both how experiments can be set up and how results can be used for hypothesis testing. We then report a case study showing how MVA was deployed in a multiple-baseline across-subjects study investigating treatment for residual errors affecting rhotics. Strengths and weaknesses of MVA are discussed. Given their important role in the evidence base that informs clinical decision making, it is critical for single-case experimental studies to be conducted in a way that allows researchers to draw valid inferences. As a method that can increase the rigor of single-case studies while preserving the benefits of a response-guided approach, MVA warrants expanded attention from researchers in communication disorders.
Nematode Damage Functions: The Problems of Experimental and Sampling Error
Ferris, H.
1984-01-01
The development and use of pest damage functions involves measurement and experimental errors associated with cultural, environmental, and distributional factors. Damage predictions are more valuable if considered with associated probability. Collapsing population densities into a geometric series of population classes allows a pseudo-replication removal of experimental and sampling error in damage function development. Recognition of the nature of sampling error for aggregated populations allows assessment of probability associated with the population estimate. The product of the probabilities incorporated in the damage function and in the population estimate provides a basis for risk analysis of the yield loss prediction and the ensuing management decision. PMID:19295865
Pressing the Approach: A NASA Study of 19 Recent Accidents Yields a New Perspective on Pilot Error
NASA Technical Reports Server (NTRS)
Berman, Benjamin A.; Dismukes, R. Key
2007-01-01
This article begins with a review of two sample airplane accidents that were caused by pilot error. The analysis of these and 17 other accidents suggested that almost any experienced pilot, operating in the same environment in which the accident crews were operating and knowing only what the accident crews knew at each moment of the flight, would be vulnerable to making similar decisions and similar errors. Whether a particular crew in a given situation makes errors depends on a somewhat random interaction of factors. Two themes that seem to be prevalent in these cases are plan continuation bias and snowballing workload.
Sources of variability and systematic error in mouse timing behavior.
Gallistel, C R; King, Adam; McDonald, Robert
2004-01-01
In the peak procedure, starts and stops in responding bracket the target time at which food is expected. The variability in start and stop times is proportional to the target time (scalar variability), as is the systematic error in the mean center (scalar error). The authors investigated the source of the error and the variability, using head poking in the mouse, with target intervals of 5 s, 15 s, and 45 s, in the standard procedure, and in a variant with 3 different target intervals at 3 different locations in a single trial. The authors conclude that the systematic error is due to the asymmetric location of start and stop decision criteria, and the scalar variability derives primarily from sources other than memory.
The Cut-Score Operating Function: A New Tool to Aid in Standard Setting
ERIC Educational Resources Information Center
Grabovsky, Irina; Wainer, Howard
2017-01-01
In this essay, we describe the construction and use of the Cut-Score Operating Function in aiding standard setting decisions. The Cut-Score Operating Function shows the relation between the cut-score chosen and the consequent error rate. It allows error rates to be defined by multiple loss functions and will show the behavior of each loss…
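Although the abstract is truncated, the core construction is straightforward to sketch: given assumed score distributions for masters and non-masters, compute the false-negative and false-positive rates at each candidate cut-score and combine them under a chosen loss function. The distributions, prevalence, and loss weights below are hypothetical.

```python
import numpy as np
from scipy.stats import norm

# hypothetical score distributions for true masters and true non-masters
masters = norm(loc=75, scale=8)
non_masters = norm(loc=60, scale=8)
prevalence = 0.7                     # assumed proportion of masters in the population

def operating_function(cut, fn_loss=1.0, fp_loss=1.0):
    """Expected loss at a given cut-score: failing a master (false negative)
    and passing a non-master (false positive), weighted by the chosen losses."""
    false_negative = masters.cdf(cut)      # masters scoring below the cut
    false_positive = non_masters.sf(cut)   # non-masters scoring at or above the cut
    return (prevalence * fn_loss * false_negative
            + (1 - prevalence) * fp_loss * false_positive)

cuts = np.arange(55, 81)
losses = [operating_function(c, fn_loss=1.0, fp_loss=3.0) for c in cuts]
print(f"minimum-loss cut-score: {cuts[int(np.argmin(losses))]}")
```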
NASA Technical Reports Server (NTRS)
Amling, G. E.; Holms, A. G.
1973-01-01
A computer program is described that performs a statistical multiple-decision procedure called chain pooling. It uses a number of mean squares assigned to error variance that is conditioned on the relative magnitudes of the mean squares. The model selection is done according to user-specified levels of type 1 or type 2 error probabilities.
The role of the insula in intuitive expert bug detection in computer code: an fMRI study.
Castelhano, Joao; Duarte, Isabel C; Ferreira, Carlos; Duraes, Joao; Madeira, Henrique; Castelo-Branco, Miguel
2018-05-09
Software programming is a complex and relatively recent human activity, involving the integration of mathematical, recursive thinking and language processing. The neural correlates of this recent human activity are still poorly understood. Error monitoring during this type of task, requiring the integration of language, logical symbol manipulation and other mathematical skills, is particularly challenging. We therefore aimed to investigate the neural correlates of decision-making during source code understanding and mental manipulation in professional participants with high expertise. The present fMRI study directly addressed error monitoring during source code comprehension, expert bug detection and decision-making. We used C code, which triggers the same sort of processing irrespective of the native language of the programmer. We discovered a distinct role for the insula in bug monitoring and detection and a novel connectivity pattern that goes beyond the expected activation pattern evoked by source code understanding in semantic language and mathematical processing regions. Importantly, insula activity levels were critically related to the quality of error detection, involving intuition, as signalled by reported initial bug suspicion, prior to final decision and bug detection. Activity in this salience network (SN) region evoked by bug suspicion was predictive of bug detection precision, suggesting that it encodes the quality of the behavioral evidence. Connectivity analysis provided evidence for top-down circuit "reutilization" stemming from anterior cingulate cortex (BA32), a core region in the SN that evolved for complex error monitoring such as required for this type of recent human activity. Cingulate (BA32) and anterolateral (BA10) frontal regions causally modulated decision processes in the insula, which in turn was related to activity of math processing regions in early parietal cortex. In other words, earlier brain regions used during evolution for other functions seem to be reutilized in a top-down manner for a new complex function, in an analogous manner as described for other cultural creations such as reading and literacy.
Prefrontal neural correlates of memory for sequences.
Averbeck, Bruno B; Lee, Daeyeol
2007-02-28
The sequence of actions appropriate to solve a problem often needs to be discovered by trial and error and recalled in the future when faced with the same problem. Here, we show that when monkeys had to discover and then remember a sequence of decisions across trials, ensembles of prefrontal cortex neurons reflected the sequence of decisions the animal would make throughout the interval between trials. This signal could reflect either an explicit memory process or a sequence-planning process that begins far in advance of the actual sequence execution. This finding extended to error trials such that, when the neural activity during the intertrial interval specified the wrong sequence, the animal also attempted to execute an incorrect sequence. More specifically, we used a decoding analysis to predict the sequence the monkey was planning to execute at the end of the fore-period, just before sequence execution. When this analysis was applied to error trials, we were able to predict where in the sequence the error would occur, up to three movements into the future. This suggests that prefrontal neural activity can retain information about sequences between trials, and that regardless of whether information is remembered correctly or incorrectly, the prefrontal activity veridically reflects the animal's action plan.
Research implications of science-informed, value-based decision making.
Dowie, Jack
2004-01-01
In 'Hard' science, scientists correctly operate as the 'guardians of certainty', using hypothesis testing formulations and value judgements about error rates and time discounting that make classical inferential methods appropriate. But these methods can neither generate most of the inputs needed by decision makers in their time frame, nor generate them in a form that allows them to be integrated into the decision in an analytically coherent and transparent way. The need for transparent accountability in public decision making under uncertainty and value conflict means the analytical coherence provided by the stochastic Bayesian decision analytic approach, drawing on the outputs of Bayesian science, is needed. If scientific researchers are to play the role they should be playing in informing value-based decision making, they need to see themselves also as 'guardians of uncertainty', ensuring that the best possible current posterior distributions on relevant parameters are made available for decision making, irrespective of the state of the certainty-seeking research. The paper distinguishes the actors employing different technologies in terms of the focus of the technology (knowledge, values, choice); the 'home base' mode of their activity on the cognitive continuum of varying analysis-to-intuition ratios; and the underlying value judgements of the activity (especially error loss functions and time discount rates). Those who propose any principle of decision making other than the banal 'Best Principle', including the 'Precautionary Principle', are properly interpreted as advocates seeking to have their own value judgements and preferences regarding mode location apply. The task for accountable decision makers, and their supporting technologists, is to determine the best course of action under the universal conditions of uncertainty and value difference/conflict.
Evaluating team decision-making as an emergent phenomenon.
Kinnear, John; Wilson, Nick; O'Dwyer, Anthony
2018-04-01
The complexity of modern clinical practice has highlighted the fallibility of individual clinicians' decision-making, with effective teamwork emerging as a key to patient safety. Dual process theory is widely accepted as a framework for individual decision-making, with type 1 processes responsible for fast, intuitive and automatic decisions and type 2 processes for slow, analytical decisions. However, dual process theory does not explain cognition at the group level, when individuals act in teams. Team cognition resulting from dynamic interaction of individuals is said to be more resilient to decision-making error and greater than simply aggregated cognition. Clinicians were paired as teams and asked to solve a cognitive puzzle constructed as a drug calculation. The frequency at which the teams made incorrect decisions was compared with that of individual clinicians answering the same question. When clinicians acted in pairs, 63% answered the cognitive puzzle correctly, compared with 33% of clinicians as individuals, showing a statistically significant difference in performance (χ²(1, n=116) = 24.329, P < 0.001). Based on the predicted performance of teams made up of the random pairing of individuals who had the same propensity to answer as previously, there was no statistical difference between the actual and predicted team performance. Teams are less prone to making errors of decision-making than individuals. However, the improved performance is likely to be owing to the effect of aggregated cognition rather than any improved decision-making as a result of the interaction. There is no evidence of team cognition as an emergent and distinct entity. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
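The aggregation benchmark used in this comparison can be illustrated with a one-line calculation under a simple "truth wins" assumption (a pair answers correctly whenever at least one member would); this assumption is introduced only to show the order of magnitude and is not the authors' exact pairing procedure.

```python
p_individual = 0.33                                # reported individual accuracy
p_team_predicted = 1 - (1 - p_individual) ** 2     # at least one member correct
print(f"predicted team accuracy under simple aggregation: {p_team_predicted:.2f}")
# roughly 0.55, i.e. of the same order as the observed 63%, which is consistent with
# the authors' conclusion that aggregation rather than emergent team cognition
# explains most of the improvement
```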
Hauser, Tobias U; Iannaccone, Reto; Ball, Juliane; Mathys, Christoph; Brandeis, Daniel; Walitza, Susanne; Brem, Silvia
2014-10-01
Attention-deficit/hyperactivity disorder (ADHD) has been associated with deficient decision making and learning. Models of ADHD have suggested that these deficits could be caused by impaired reward prediction errors (RPEs). Reward prediction errors are signals that indicate violations of expectations and are known to be encoded by the dopaminergic system. However, the precise learning and decision-making deficits and their neurobiological correlates in ADHD are not well known. To determine the impaired decision-making and learning mechanisms in juvenile ADHD using advanced computational models, as well as the related neural RPE processes using multimodal neuroimaging. Twenty adolescents with ADHD and 20 healthy adolescents serving as controls (aged 12-16 years) were examined using a probabilistic reversal learning task while simultaneous functional magnetic resonance imaging and electroencephalogram were recorded. Learning and decision making were investigated by contrasting a hierarchical Bayesian model with an advanced reinforcement learning model and by comparing the model parameters. The neural correlates of RPEs were studied in functional magnetic resonance imaging and electroencephalogram. Adolescents with ADHD showed more simplistic learning as reflected by the reinforcement learning model (exceedance probability, Px = .92) and had increased exploratory behavior compared with healthy controls (mean [SD] decision steepness parameter β: ADHD, 4.83 [2.97]; controls, 6.04 [2.53]; P = .02). The functional magnetic resonance imaging analysis revealed impaired RPE processing in the medial prefrontal cortex during cue as well as during outcome presentation (P < .05, family-wise error correction). The outcome-related impairment in the medial prefrontal cortex could be attributed to deficient processing at 200 to 400 milliseconds after feedback presentation as reflected by reduced feedback-related negativity (ADHD, 0.61 [3.90] μV; controls, -1.68 [2.52] μV; P = .04). The combination of computational modeling of behavior and multimodal neuroimaging revealed that impaired decision making and learning mechanisms in adolescents with ADHD are driven by impaired RPE processing in the medial prefrontal cortex. This novel, combined approach furthers the understanding of the pathomechanisms in ADHD and may advance treatment strategies.
Errors in imaging patients in the emergency setting
Reginelli, Alfonso; Lo Re, Giuseppe; Midiri, Federico; Muzj, Carlo; Romano, Luigia; Brunese, Luca
2016-01-01
Emergency and trauma care produces a “perfect storm” for radiological errors: uncooperative patients, inadequate histories, time-critical decisions, concurrent tasks and often junior personnel working after hours in busy emergency departments. The main cause of diagnostic errors in the emergency department is the failure to correctly interpret radiographs, and the majority of diagnoses missed on radiographs are fractures. Missed diagnoses potentially have important consequences for patients, clinicians and radiologists. Radiologists play a pivotal role in the diagnostic assessment of polytrauma patients and of patients with non-traumatic craniothoracoabdominal emergencies, and key elements to reduce errors in the emergency setting are knowledge, experience and the correct application of imaging protocols. This article aims to highlight the definition and classification of errors in radiology, the causes of errors in emergency radiology and the spectrum of diagnostic errors in radiography, ultrasonography and CT in the emergency setting. PMID:26838955
Set the Wrong Tuition and You'll Pay a Price
ERIC Educational Resources Information Center
Strauss, David W.
2006-01-01
For all of the attention rising college costs continue to receive, it is striking how poorly informed many decision makers are when it comes to setting tuition and fees. And it's equally astounding that so many institutions are learning the consequences of pricing decisions undertaken solely by trial and error when a wrong judgment can affect…
Schmidt, Brandy; Papale, Andrew; Redish, A David; Markus, Etan J
2013-02-15
Navigation can be accomplished through multiple decision-making strategies, using different information-processing computations. A well-studied dichotomy in these decision-making strategies compares hippocampal-dependent "place" and dorsal-lateral striatal-dependent "response" strategies. A place strategy depends on the ability to flexibly respond to environmental cues, while a response strategy depends on the ability to quickly recognize and react to situations with well-learned action-outcome relationships. When rats reach decision points, they sometimes pause and orient toward the potential routes of travel, a process termed vicarious trial and error (VTE). VTE co-occurs with neurophysiological information processing, including sweeps of representation ahead of the animal in the hippocampus and transient representations of reward in the ventral striatum and orbitofrontal cortex. To examine the relationship between VTE and the place/response strategy dichotomy, we analyzed data in which rats were cued to switch between place and response strategies on a plus maze. The configuration of the maze allowed for place and response strategies to work competitively or cooperatively. Animals showed increased VTE on trials entailing competition between navigational systems, linking VTE with deliberative decision-making. Even in a well-learned task, VTE was preferentially exhibited when a spatial selection was required, further linking VTE behavior with decision-making associated with hippocampal processing.
Rational decision-making in inhibitory control.
Shenoy, Pradeep; Yu, Angela J
2011-01-01
An important aspect of cognitive flexibility is inhibitory control, the ability to dynamically modify or cancel planned actions in response to changes in the sensory environment or task demands. We formulate a probabilistic, rational decision-making framework for inhibitory control in the stop signal paradigm. Our model posits that subjects maintain a Bayes-optimal, continually updated representation of sensory inputs, and repeatedly assess the relative value of stopping and going on a fine temporal scale, in order to make an optimal decision on when and whether to go on each trial. We further posit that they implement this continual evaluation with respect to a global objective function capturing the various reward and penalties associated with different behavioral outcomes, such as speed and accuracy, or the relative costs of stop errors and go errors. We demonstrate that our rational decision-making model naturally gives rise to basic behavioral characteristics consistently observed for this paradigm, as well as more subtle effects due to contextual factors such as reward contingencies or motivational factors. Furthermore, we show that the classical race model can be seen as a computationally simpler, perhaps neurally plausible, approximation to optimal decision-making. This conceptual link allows us to predict how the parameters of the race model, such as the stopping latency, should change with task parameters and individual experiences/ability.
Rational Decision-Making in Inhibitory Control
Shenoy, Pradeep; Yu, Angela J.
2011-01-01
An important aspect of cognitive flexibility is inhibitory control, the ability to dynamically modify or cancel planned actions in response to changes in the sensory environment or task demands. We formulate a probabilistic, rational decision-making framework for inhibitory control in the stop signal paradigm. Our model posits that subjects maintain a Bayes-optimal, continually updated representation of sensory inputs, and repeatedly assess the relative value of stopping and going on a fine temporal scale, in order to make an optimal decision on when and whether to go on each trial. We further posit that they implement this continual evaluation with respect to a global objective function capturing the various reward and penalties associated with different behavioral outcomes, such as speed and accuracy, or the relative costs of stop errors and go errors. We demonstrate that our rational decision-making model naturally gives rise to basic behavioral characteristics consistently observed for this paradigm, as well as more subtle effects due to contextual factors such as reward contingencies or motivational factors. Furthermore, we show that the classical race model can be seen as a computationally simpler, perhaps neurally plausible, approximation to optimal decision-making. This conceptual link allows us to predict how the parameters of the race model, such as the stopping latency, should change with task parameters and individual experiences/ability. PMID:21647306
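The classical race model mentioned above can be illustrated with a small Monte Carlo sketch. This is illustrative only; the go/stop timing parameters below are assumptions, not values from the paper. A stop error occurs whenever the go process finishes before the stop-signal delay (SSD) plus the stop-signal reaction time (SSRT).

    import random

    def simulate_stop_trials(n_trials=100_000, ssd=0.20,
                             go_mu=0.45, go_sd=0.10,
                             ssrt_mu=0.22, ssrt_sd=0.03, seed=1):
        """Independent race model: a response is (erroneously) emitted on a
        stop trial whenever the go process finishes before SSD + SSRT."""
        rng = random.Random(seed)
        stop_errors = 0
        for _ in range(n_trials):
            go_rt = rng.gauss(go_mu, go_sd)      # go-process finishing time (s)
            ssrt = rng.gauss(ssrt_mu, ssrt_sd)   # stop-process latency (s)
            if go_rt < ssd + ssrt:               # go wins the race -> stop error
                stop_errors += 1
        return stop_errors / n_trials

    # Probability of failing to stop rises with longer stop-signal delays.
    for ssd in (0.10, 0.20, 0.30):
        print(ssd, round(simulate_stop_trials(ssd=ssd), 3))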
Shalom, Erez; Shahar, Yuval; Parmet, Yisrael; Lunenfeld, Eitan
2015-04-01
To quantify the effect of a new continuous-care guideline (GL)-application engine, the Picard decision support system (DSS) engine, on the correctness and completeness of clinicians' decisions relative to an established clinical GL, and to assess the clinicians' attitudes towards a specific DSS. Thirty-six clinicians, including residents at different training levels and board-certified specialists at an academic OB/GYN department that handles around 15,000 deliveries annually, agreed to evaluate our continuous-care guideline-based DSS and to perform a cross-over assessment of the effects of using our guideline-based DSS. We generated electronic patient records that realistically simulated the longitudinal course of six different clinical scenarios of the preeclampsia/eclampsia/toxemia (PET) GL, encompassing 60 different decision points in total. Each clinician managed three scenarios manually without the Picard DSS engine (non-DSS mode) and three scenarios when assisted by the Picard DSS engine (DSS mode). The main measures in both modes were correctness and completeness of actions relative to the PET GL. Correctness was further decomposed into necessary and redundant actions, relative to the guideline and the actual patient data. At the end of the assessment, a questionnaire was administered to the clinicians to assess their perceptions regarding use of the DSS. With respect to completeness, the clinicians applied approximately 41% of the GL's recommended actions in the non-DSS mode. Completeness increased to approximately 93% of the guideline's recommended actions when using the DSS mode. With respect to correctness, approximately 94.5% of the clinicians' decisions in the non-DSS mode were correct. However, these included 68% of the actions that were correct but redundant, given the patient's data (e.g., repeating tests that had been performed), and 27% of the actions that were necessary in the context of the GL and of the given scenario. Only 5.5% of the decisions were definite errors. In the DSS mode, 94% of the clinicians' decisions were correct, which included 3% that were correct but redundant, and 91% of the actions that were correct and necessary in the context of the GL and of the given scenario. Only 6% of the DSS-mode decisions were erroneous. The DSS was assessed by the clinicians as potentially useful. Support from the GL-based DSS led to uniformity in the quality of the decisions, regardless of the particular clinician, any particular clinical scenario, any particular decision point, or any decision type within the scenarios. Using the DSS dramatically enhances completeness (i.e., performance of guideline-based recommendations) and seems to prevent the performance of most of the redundant actions, but does not seem to affect the rate of performance of incorrect actions. The finding on redundancy rates is reinforced by similar findings in recent studies. Clinicians mostly find this support to be potentially useful for their daily practice. A continuous-care GL-based DSS, such as the Picard DSS engine, has the potential to prevent most errors of omission by ensuring uniformly high quality of clinical decision making (relative to a GL-based norm), due to the increased adherence (i.e., completeness) to the GL, and most of the errors of commission that increase therapy costs, by reducing the rate of redundant actions. However, to prevent clinical errors of commission, the DSS needs to be accompanied by additional modules, such as automated control of the quality of the physician's actual actions. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Alós-Ferrer, Carlos; Hügelschäfer, Sabine; Li, Jiahui
2016-01-01
Decision inertia is the tendency to repeat previous choices independently of the outcome, which can give rise to perseveration in suboptimal choices. We investigate this tendency in probability-updating tasks. Study 1 shows that, whenever decision inertia conflicts with normatively optimal behavior (Bayesian updating), error rates are larger and decisions are slower. This is consistent with a dual-process view of decision inertia as an automatic process conflicting with a more rational, controlled one. We find evidence of decision inertia in both required and autonomous decisions, but the effect of inertia is more clear in the latter. Study 2 considers more complex decision situations where further conflict arises due to reinforcement processes. We find the same effects of decision inertia when reinforcement is aligned with Bayesian updating, but if the two latter processes conflict, the effects are limited to autonomous choices. Additionally, both studies show that the tendency to rely on decision inertia is positively associated with preference for consistency.
Alós-Ferrer, Carlos; Hügelschäfer, Sabine; Li, Jiahui
2016-01-01
Decision inertia is the tendency to repeat previous choices independently of the outcome, which can give rise to perseveration in suboptimal choices. We investigate this tendency in probability-updating tasks. Study 1 shows that, whenever decision inertia conflicts with normatively optimal behavior (Bayesian updating), error rates are larger and decisions are slower. This is consistent with a dual-process view of decision inertia as an automatic process conflicting with a more rational, controlled one. We find evidence of decision inertia in both required and autonomous decisions, but the effect of inertia is more clear in the latter. Study 2 considers more complex decision situations where further conflict arises due to reinforcement processes. We find the same effects of decision inertia when reinforcement is aligned with Bayesian updating, but if the two latter processes conflict, the effects are limited to autonomous choices. Additionally, both studies show that the tendency to rely on decision inertia is positively associated with preference for consistency. PMID:26909061
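The normative benchmark in the probability-updating tasks described above is Bayes' rule. The sketch below is a minimal illustration, assuming a hypothetical two-urn task (signal probabilities 0.7 versus 0.3 are made up, not the authors' stimuli); it contrasts the Bayesian choice with an inertia choice that simply repeats the previous decision.

    def posterior_prob_A(prior_A, p_signal_given_A, p_signal_given_B, signal_observed):
        """Bayesian update for two hypotheses A and B after one binary signal."""
        like_A = p_signal_given_A if signal_observed else 1 - p_signal_given_A
        like_B = p_signal_given_B if signal_observed else 1 - p_signal_given_B
        joint_A = prior_A * like_A
        joint_B = (1 - prior_A) * like_B
        return joint_A / (joint_A + joint_B)

    # Hypothetical task: urn A emits the signal with prob 0.7, urn B with prob 0.3.
    p = 0.5
    previous_choice = "A"
    for signal in (False, False, True):   # two draws favouring B, then one favouring A
        p = posterior_prob_A(p, 0.7, 0.3, signal)
        bayes_choice = "A" if p >= 0.5 else "B"
        inertia_choice = previous_choice  # decision inertia: repeat the last choice
        print(f"P(A)={p:.2f}  Bayes: {bayes_choice}  Inertia: {inertia_choice}")
        previous_choice = bayes_choice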
Lockhart, Joseph J; Satya-Murti, Saty
2017-11-01
Cognitive effort is an essential part of both forensic and clinical decision-making. Errors occur in both fields because the cognitive process is complex and prone to bias. We performed a selective review of full-text English language literature on cognitive bias leading to diagnostic and forensic errors. Earlier work (1970-2000) concentrated on classifying and raising bias awareness. Recently (2000-2016), the emphasis has shifted toward strategies for "debiasing." While the forensic sciences have focused on the control of misleading contextual cues, clinical debiasing efforts have relied on checklists and hypothetical scenarios. No single generally applicable and effective bias reduction strategy has emerged so far. Generalized attempts at bias elimination have not been particularly successful. It is time to shift focus to the study of errors within specific domains, and how to best communicate uncertainty in order to improve decision making on the part of both the expert and the trier-of-fact. © 2017 American Academy of Forensic Sciences.
Model-based influences on humans' choices and striatal prediction errors.
Daw, Nathaniel D; Gershman, Samuel J; Seymour, Ben; Dayan, Peter; Dolan, Raymond J
2011-03-24
The mesostriatal dopamine system is prominently implicated in model-free reinforcement learning, with fMRI BOLD signals in ventral striatum notably covarying with model-free prediction errors. However, latent learning and devaluation studies show that behavior also shows hallmarks of model-based planning, and the interaction between model-based and model-free values, prediction errors, and preferences is underexplored. We designed a multistep decision task in which model-based and model-free influences on human choice behavior could be distinguished. By showing that choices reflected both influences we could then test the purity of the ventral striatal BOLD signal as a model-free report. Contrary to expectations, the signal reflected both model-free and model-based predictions in proportions matching those that best explained choice behavior. These results challenge the notion of a separate model-free learner and suggest a more integrated computational architecture for high-level human decision-making. Copyright © 2011 Elsevier Inc. All rights reserved.
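A common way to formalize such a mixture is to combine model-free and model-based action values with a weight w and to compute a reward prediction error on the model-free side. The snippet below is a schematic sketch with made-up transition probabilities, values and learning rate, not the authors' fitted model.

    # First-stage action values in a two-step task (all numbers hypothetical).
    w = 0.6                                  # degree of model-based control
    q_mf = {"left": 0.40, "right": 0.55}     # model-free (cached) values
    # Model-based values: transition model times second-stage state values.
    p_trans = {"left": {"s2a": 0.7, "s2b": 0.3},
               "right": {"s2a": 0.3, "s2b": 0.7}}
    v_stage2 = {"s2a": 0.8, "s2b": 0.2}
    q_mb = {a: sum(p * v_stage2[s] for s, p in p_trans[a].items()) for a in q_mf}

    q_net = {a: w * q_mb[a] + (1 - w) * q_mf[a] for a in q_mf}
    print("Q_MB:", q_mb, "Q_net:", q_net)

    # Model-free reward prediction error after choosing "left" and receiving r = 1.
    alpha, r = 0.1, 1.0
    delta = r - q_mf["left"]                 # prediction error
    q_mf["left"] += alpha * delta            # TD-style update of the cached value
    print("delta =", delta, " updated Q_MF(left) =", round(q_mf["left"], 3))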
Sinclair, R C F; Danjoux, G R; Goodridge, V; Batterham, A M
2009-11-01
The variability between observers in the interpretation of cardiopulmonary exercise tests may impact upon clinical decision making and affect the risk stratification and peri-operative management of a patient. The purpose of this study was to quantify the inter-reader variability in the determination of the anaerobic threshold (V-slope method). A series of 21 cardiopulmonary exercise tests from patients attending a surgical pre-operative assessment clinic were read independently by nine experienced clinicians regularly involved in clinical decision making. The grand mean for the anaerobic threshold was 10.5 ml O2·kg body mass^(-1)·min^(-1). The technical error of measurement was 8.1% (circa 0.9 ml·kg^(-1)·min^(-1); 90% confidence interval, 7.4-8.9%). The mean absolute difference between readers was 4.5% with a typical random error of 6.5% (6.0-7.2%). We conclude that the inter-observer variability for experienced clinicians determining the anaerobic threshold from cardiopulmonary exercise tests is acceptable.
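The technical error of measurement quoted above can be computed from a tests-by-readers matrix. A minimal sketch, assuming the standard multi-observer (Dahlberg-style) formula; the readings below are invented, not the study's data.

    import math

    def technical_error_of_measurement(readings):
        """readings: list of per-test lists, each holding the k observer values.
        TEM = sqrt( sum_i [ sum_j x_ij^2 - (sum_j x_ij)^2 / k ] / (n * (k - 1)) )."""
        n = len(readings)
        k = len(readings[0])
        num = sum(sum(x * x for x in row) - (sum(row) ** 2) / k for row in readings)
        return math.sqrt(num / (n * (k - 1)))

    # Hypothetical anaerobic thresholds (ml O2 per kg per min): 3 readers, 4 tests.
    data = [[10.2, 10.8, 10.5],
            [ 9.6,  9.9, 10.1],
            [11.4, 11.0, 11.3],
            [10.0, 10.4,  9.8]]
    tem = technical_error_of_measurement(data)
    grand_mean = sum(sum(row) for row in data) / (len(data) * len(data[0]))
    print(f"TEM = {tem:.2f}  relative TEM = {100 * tem / grand_mean:.1f}%")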
Grinband, Jack; Savitskaya, Judith; Wager, Tor D; Teichert, Tobias; Ferrera, Vincent P; Hirsch, Joy
2011-07-15
The dorsal medial frontal cortex (dMFC) is highly active during choice behavior. Though many models have been proposed to explain dMFC function, the conflict monitoring model is the most influential. It posits that dMFC is primarily involved in detecting interference between competing responses thus signaling the need for control. It accurately predicts increased neural activity and response time (RT) for incompatible (high-interference) vs. compatible (low-interference) decisions. However, it has been shown that neural activity can increase with time on task, even when no decisions are made. Thus, the greater dMFC activity on incompatible trials may stem from longer RTs rather than response conflict. This study shows that (1) the conflict monitoring model fails to predict the relationship between error likelihood and RT, and (2) the dMFC activity is not sensitive to congruency, error likelihood, or response conflict, but is monotonically related to time on task. Copyright © 2010 Elsevier Inc. All rights reserved.
The PoET (Prevention of Error-Based Transfers) Project.
Oliver, Jill; Chidwick, Paula
2017-01-01
The PoET (Prevention of Error-based Transfers) Project is one of the Ethics Quality Improvement Projects (EQIPs) taking place at William Osler Health System. This specific project is designed to reduce transfers from long-term care to hospital that are caused by legal and ethical errors related to consent, capacity and substitute decision-making. The project is currently operating in eight long-term care homes in the Central West Local Health Integration Network and has seen a 56% reduction in multiple transfers before death in hospital.
Using a Context-aware Medical Application to Address Information Needs for Extubation Decisions
Zhu, Xinxin; Lord, William
2005-01-01
Information overload has been one of the causes of preventable medical errors [1] and escalating costs [2]. A context-aware application with embedded clinical knowledge is proposed to provide practitioners with the appropriate amount of information and content. We developed a prototype of a context-aware medical application to address clinicians’ information needs that arise in a data-intensive unit, the Cardio-Thoracic Intensive Care Unit (CTICU). A major clinical decision supported by the prototype, the extubation decision, is illustrated. PMID:16779455
NASA Astrophysics Data System (ADS)
Amphawan, Angela; Ghazi, Alaan; Al-dawoodi, Aras
2017-11-01
A free-space optics mode-wavelength division multiplexing (MWDM) system using Laguerre-Gaussian (LG) modes is designed using decision feedback equalization for controlling mode coupling and combating intersymbol interference so as to increase channel diversity. In this paper, a data rate of 24 Gbps is achieved for a FSO MWDM channel of 2.6 km in length using feedback equalization. Simulation results comparing performance before and after decision feedback equalization show significant improvement in eye diagrams and bit error rates.
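As a rough illustration of the decision-feedback idea, the sketch below equalizes a toy channel with a single postcursor intersymbol-interference tap for BPSK symbols; the channel model, tap value and noise level are assumptions and have nothing to do with the FSO MWDM link studied in the paper.

    import random

    def dfe_bpsk(received, feedback_tap):
        """One-tap decision feedback equalizer for BPSK (+1/-1) symbols:
        subtract the ISI contributed by the previously decided symbol."""
        decisions = []
        prev = 0.0
        for r in received:
            z = r - feedback_tap * prev          # cancel postcursor ISI
            d = 1.0 if z >= 0 else -1.0          # hard decision
            decisions.append(d)
            prev = d
        return decisions

    rng = random.Random(0)
    h1 = 0.5                                     # postcursor ISI coefficient (assumed)
    symbols = [rng.choice((-1.0, 1.0)) for _ in range(20000)]
    rx = [s + h1 * (symbols[i - 1] if i else 0.0) + rng.gauss(0, 0.4)
          for i, s in enumerate(symbols)]

    raw = [1.0 if r >= 0 else -1.0 for r in rx]
    eq = dfe_bpsk(rx, h1)
    ber = lambda est: sum(e != s for e, s in zip(est, symbols)) / len(symbols)
    print("BER without DFE:", ber(raw), " with DFE:", ber(eq))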
Systematic Review of Medical Informatics-Supported Medication Decision Making.
Melton, Brittany L
2017-01-01
This systematic review sought to assess the applications and implications of current medical informatics-based decision support systems related to medication prescribing and use. Studies published between January 2006 and July 2016 which were indexed in PubMed and written in English were reviewed, and 39 studies were ultimately included. Most of the studies looked at computerized provider order entry or clinical decision support systems. Most studies examined decision support systems as a means of reducing errors or risk, particularly associated with medication prescribing, whereas a few studies evaluated the impact medical informatics-based decision support systems have on workflow or operations efficiency. Most studies identified benefits associated with decision support systems, but some indicate there is room for improvement.
NASA Astrophysics Data System (ADS)
Cui, Chenxuan
When cognitive radio (CR) operates, it starts by sensing spectrum and looking for idle bandwidth. There are several methods for CR to decide whether the channel is occupied or idle, for example, the energy detection scheme, the cyclostationary detection scheme and the matched filtering detection scheme [1]. Among them, the most common method is the energy detection scheme because of its algorithmic and implementation simplicity [2]. There are two major methods for sensing: the first is to sense a single channel slot with varying bandwidth, whereas the second is to sense multiple channels, each with the same bandwidth. After sensing periods, samples are compared with a preset detection threshold and a decision is made on whether the primary user (PU) is transmitting or not. Sometimes the sensing and decision results can be erroneous; for example, false alarm errors and misdetection errors may occur. In order to better control error probabilities and improve CR network performance (i.e. energy efficiency), we introduce cooperative sensing, in which several CRs within a certain range detect and make decisions on channel availability together. The decisions are transmitted to and analyzed by a data fusion center (DFC) to make a final decision on channel availability. After the final decision has been made, the DFC sends back the decision to the CRs in order to tell them to stay idle or start to transmit data to the secondary receiver (SR) within a preset transmission time. After the transmission, a new cycle starts again with sensing. This thesis report is organized as follows: Chapter II reviews some of the papers on optimizing CR energy efficiency. In Chapter III, we study how to achieve maximal energy efficiency when the CR senses a single channel with changing bandwidth and with a constraint on the misdetection threshold in order to protect the PU; furthermore, a case study is given and we calculate the energy efficiency. In Chapter IV, we study how to achieve maximal energy efficiency when the CR senses multiple channels, each with the same bandwidth; here, too, we preset a misdetection threshold and calculate the energy efficiency. A comparison between the two sensing methods is shown at the end of the chapter. Finally, Chapter V concludes this thesis.
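The cooperative sensing chain described above can be sketched in a few lines: each CR runs an energy detector against a threshold and reports a one-bit decision, and the data fusion center applies a k-out-of-N rule. The thresholds, signal level and counts below are illustrative assumptions, not values from the thesis.

    import random

    def energy_detect(samples, threshold):
        """Energy detector: decide 'occupied' if the average power exceeds the threshold."""
        energy = sum(x * x for x in samples) / len(samples)
        return energy > threshold

    def fusion_center(local_decisions, k):
        """k-out-of-N fusion rule: declare the primary user present if at
        least k of the N cooperating CRs report 'occupied'."""
        return sum(local_decisions) >= k

    rng = random.Random(42)
    pu_present = True
    n_crs, n_samples, threshold, k = 5, 200, 1.3, 3
    local = []
    for _ in range(n_crs):
        # Each CR sees unit-variance noise plus, if present, a weak PU signal.
        samples = [rng.gauss(0, 1) + (0.6 if pu_present else 0.0) for _ in range(n_samples)]
        local.append(energy_detect(samples, threshold))
    print("local decisions:", local, "-> DFC decision:", fusion_center(local, k))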
Application of Exactly Linearized Error Transport Equations to AIAA CFD Prediction Workshops
NASA Technical Reports Server (NTRS)
Derlaga, Joseph M.; Park, Michael A.; Rallabhandi, Sriram
2017-01-01
The computational fluid dynamics (CFD) prediction workshops sponsored by the AIAA have created invaluable opportunities in which to discuss the predictive capabilities of CFD in areas in which it has struggled, e.g., cruise drag, high-lift, and sonic boom prediction. While there are many factors that contribute to disagreement between simulated and experimental results, such as modeling or discretization error, quantifying the errors contained in a simulation is important for those who make decisions based on the computational results. The linearized error transport equations (ETE) combined with a truncation error estimate is a method to quantify one source of errors. The ETE are implemented with a complex-step method to provide an exact linearization with minimal source code modifications to CFD and multidisciplinary analysis methods. The equivalency of adjoint and linearized ETE functional error correction is demonstrated. Uniformly refined grids from a series of AIAA prediction workshops demonstrate the utility of ETE for multidisciplinary analysis with a connection between estimated discretization error and (resolved or under-resolved) flow features.
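The complex-step method referred to above yields derivatives that are exact to machine precision for analytic functions, because Im(f(x + ih))/h involves no subtractive cancellation. A self-contained illustration follows; the example function is arbitrary and stands in for a CFD residual.

    import cmath

    def complex_step_derivative(f, x, h=1e-30):
        """Approximate f'(x) as Im(f(x + i*h)) / h; no subtractive cancellation,
        so h can be taken extremely small."""
        return f(complex(x, h)).imag / h

    f = lambda x: cmath.exp(x) * cmath.sin(x)       # example analytic function
    x0 = 1.3
    exact = (cmath.exp(x0) * (cmath.sin(x0) + cmath.cos(x0))).real
    approx = complex_step_derivative(f, x0)
    print(exact, approx, abs(exact - approx))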
Dror, Itiel E; Wertheim, Kasey; Fraser-Mackenzie, Peter; Walajtys, Jeff
2012-03-01
Experts play a critical role in forensic decision making, even when cognition is offloaded and distributed between human and machine. In this paper, we investigated the impact of using Automated Fingerprint Identification Systems (AFIS) on human decision makers. We provided 3680 AFIS lists (a total of 55,200 comparisons) to 23 latent fingerprint examiners as part of their normal casework. We manipulated the position of the matching print in the AFIS list. The data showed that latent fingerprint examiners were affected by the position of the matching print in terms of false exclusions and false inconclusives. Furthermore, the data showed that false identification errors were more likely at the top of the list and that such errors occurred even when the correct match was present further down the list. These effects need to be studied and considered carefully, so as to optimize human decision making when using technologies such as AFIS. © 2011 American Academy of Forensic Sciences.
Improved detection of radioactive material using a series of measurements
NASA Astrophysics Data System (ADS)
Mann, Jenelle
The goal of this project is to develop improved algorithms for detection of radioactive sources that have low signal compared to background. The detection of low-signal sources is of interest in national security applications where the source may have weak ionizing radiation emissions, is heavily shielded, or the counting time is short (such as portal monitoring). Traditionally, to distinguish signal from background the decision threshold (y*) is calculated by taking a long background count and limiting the false positive (alpha) error to 5%. Some problems with this method include: the background constantly changes due to natural environmental fluctuations, and the large amounts of data taken as the detector continuously scans are not utilized. Rather than looking at a single measurement, this work investigates a series of N measurements and develops an appropriate decision threshold for exceeding the per-measurement threshold n times in a series of N. This methodology is investigated for rectangular, triangular, sinusoidal, Poisson, and Gaussian distributions.
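The n-of-N idea can be quantified with the binomial distribution: if a single background-only measurement exceeds the per-measurement threshold with probability alpha_1, the chance that at least n of N independent background measurements exceed it follows directly. The sketch below inverts that relationship to hold the overall false-positive rate at 5%; the specific n, N and target rate are assumptions for illustration.

    from math import comb

    def prob_at_least_n_of_N(alpha_1, n, N):
        """P(at least n of N independent exceedances) for per-trial probability alpha_1."""
        return sum(comb(N, k) * alpha_1 ** k * (1 - alpha_1) ** (N - k)
                   for k in range(n, N + 1))

    def per_trial_alpha_for_overall(alpha_total, n, N, tol=1e-10):
        """Find the per-measurement alpha_1 giving the overall rate alpha_total (bisection)."""
        lo, hi = 0.0, 1.0
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if prob_at_least_n_of_N(mid, n, N) > alpha_total:
                hi = mid
            else:
                lo = mid
        return (lo + hi) / 2

    # Require 2 exceedances out of 5 measurements with an overall 5% false-positive rate.
    a1 = per_trial_alpha_for_overall(0.05, 2, 5)
    print(f"per-measurement alpha: {a1:.4f}")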
ERIC Educational Resources Information Center
Burns, Matthew K.; Taylor, Crystal N.; Warmbold-Brann, Kristy L.; Preast, June L.; Hosp, John L.; Ford, Jeremy W.
2017-01-01
Intervention researchers often use curriculum-based measurement of reading fluency (CBM-R) with a brief experimental analysis (BEA) to identify an effective intervention for individual students. The current study synthesized data from 22 studies that used CBM-R data within a BEA by computing the standard error of measure (SEM) for the median data…
ERIC Educational Resources Information Center
Lynch, William W.
Prompting of reading errors is a common pattern of teaching behavior occurring in reading groups. Teachers' tactics in responding to pupil errors during oral reading in public school classrooms were analyzed with the assistance of the technology of the Computer Assisted Teacher Training System (CATTS) to formulate hypotheses about teacher decision…
Dopaminergic Modulation of Decision Making and Subjective Well-Being.
Rutledge, Robb B; Skandali, Nikolina; Dayan, Peter; Dolan, Raymond J
2015-07-08
The neuromodulator dopamine has a well established role in reporting appetitive prediction errors that are widely considered in terms of learning. However, across a wide variety of contexts, both phasic and tonic aspects of dopamine are likely to exert more immediate effects that have been less well characterized. Of particular interest is dopamine's influence on economic risk taking and on subjective well-being, a quantity known to be substantially affected by prediction errors resulting from the outcomes of risky choices. By boosting dopamine levels using levodopa (l-DOPA) as human subjects made economic decisions and repeatedly reported their momentary happiness, we show here an effect on both choices and happiness. Boosting dopamine levels increased the number of risky options chosen in trials involving potential gains but not trials involving potential losses. This effect could be better captured as increased Pavlovian approach in an approach-avoidance decision model than as a change in risk preferences within an established prospect theory model. Boosting dopamine also increased happiness resulting from some rewards. Our findings thus identify specific novel influences of dopamine on decision making and emotion that are distinct from its established role in learning. Copyright © 2015 Rutledge et al.
Dopaminergic Modulation of Decision Making and Subjective Well-Being
Skandali, Nikolina; Dayan, Peter; Dolan, Raymond J.
2015-01-01
The neuromodulator dopamine has a well established role in reporting appetitive prediction errors that are widely considered in terms of learning. However, across a wide variety of contexts, both phasic and tonic aspects of dopamine are likely to exert more immediate effects that have been less well characterized. Of particular interest is dopamine's influence on economic risk taking and on subjective well-being, a quantity known to be substantially affected by prediction errors resulting from the outcomes of risky choices. By boosting dopamine levels using levodopa (l-DOPA) as human subjects made economic decisions and repeatedly reported their momentary happiness, we show here an effect on both choices and happiness. Boosting dopamine levels increased the number of risky options chosen in trials involving potential gains but not trials involving potential losses. This effect could be better captured as increased Pavlovian approach in an approach–avoidance decision model than as a change in risk preferences within an established prospect theory model. Boosting dopamine also increased happiness resulting from some rewards. Our findings thus identify specific novel influences of dopamine on decision making and emotion that are distinct from its established role in learning. PMID:26156984
Intelligent Diagnostic Assistant for Complicated Skin Diseases through C5's Algorithm.
Jeddi, Fatemeh Rangraz; Arabfard, Masoud; Kermany, Zahra Arab
2017-09-01
An intelligent diagnostic assistant can be used for the complicated diagnosis of skin diseases, which are among the most common causes of disability. The aim of this study was to design and implement a computerized intelligent diagnostic assistant for complicated skin diseases through C5's algorithm. An applied-developmental study was done in 2015. The knowledge base was developed based on interviews with dermatologists through questionnaires and checklists. Knowledge representation was obtained from the training data in the database using Microsoft Office Excel. Clementine software and C5's algorithm were applied to draw the decision tree. Analysis of test accuracy was performed based on rules extracted using inference chains. The rules extracted from the decision tree were entered into the CLIPS programming environment, and the intelligent diagnostic assistant was then designed. The rules were defined using a forward-chaining inference technique and were entered into the CLIPS programming environment as RULE. The accuracy and error rates obtained in the training phase from the decision tree were 99.56% and 0.44%, respectively. The accuracy of the decision tree was 98% and the error was 2% in the test phase. The intelligent diagnostic assistant can be used as a reliable system with high accuracy, sensitivity, specificity, and agreement.
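The training-versus-test accuracy comparison above can be reproduced in structure with any decision-tree learner. The sketch below uses scikit-learn's CART-style DecisionTreeClassifier as a stand-in (C5.0 itself is not available in scikit-learn) and synthetic data, not the study's dermatology records; the figures it prints are therefore only illustrative.

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import accuracy_score

    # Synthetic multi-class data standing in for the skin-disease records.
    X, y = make_classification(n_samples=600, n_features=12, n_informative=6,
                               n_classes=3, n_clusters_per_class=1, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    tree = DecisionTreeClassifier(max_depth=6, random_state=0).fit(X_tr, y_tr)
    train_acc = accuracy_score(y_tr, tree.predict(X_tr))
    test_acc = accuracy_score(y_te, tree.predict(X_te))
    print(f"train accuracy {train_acc:.3f} (error {1 - train_acc:.3f}); "
          f"test accuracy {test_acc:.3f} (error {1 - test_acc:.3f})")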
Neuronal Reward and Decision Signals: From Theories to Data
Schultz, Wolfram
2015-01-01
Rewards are crucial objects that induce learning, approach behavior, choices, and emotions. Whereas emotions are difficult to investigate in animals, the learning function is mediated by neuronal reward prediction error signals which implement basic constructs of reinforcement learning theory. These signals are found in dopamine neurons, which emit a global reward signal to striatum and frontal cortex, and in specific neurons in striatum, amygdala, and frontal cortex projecting to select neuronal populations. The approach and choice functions involve subjective value, which is objectively assessed by behavioral choices eliciting internal, subjective reward preferences. Utility is the formal mathematical characterization of subjective value and a prime decision variable in economic choice theory. It is coded as utility prediction error by phasic dopamine responses. Utility can incorporate various influences, including risk, delay, effort, and social interaction. Appropriate for formal decision mechanisms, rewards are coded as object value, action value, difference value, and chosen value by specific neurons. Although all reward, reinforcement, and decision variables are theoretical constructs, their neuronal signals constitute measurable physical implementations and as such confirm the validity of these concepts. The neuronal reward signals provide guidance for behavior while constraining the free will to act. PMID:26109341
Uncertainty forecasts improve weather-related decisions and attenuate the effects of forecast error.
Joslyn, Susan L; LeClerc, Jared E
2012-03-01
Although uncertainty is inherent in weather forecasts, explicit numeric uncertainty estimates are rarely included in public forecasts for fear that they will be misunderstood. Of particular concern are situations in which precautionary action is required at low probabilities, often the case with severe events. At present, a categorical weather warning system is used. The work reported here tested the relative benefits of several forecast formats, comparing decisions made with and without uncertainty forecasts. In three experiments, participants assumed the role of a manager of a road maintenance company in charge of deciding whether to pay to salt the roads and avoid a potential penalty associated with icy conditions. Participants used overnight low temperature forecasts accompanied in some conditions by uncertainty estimates and in others by decision advice comparable to categorical warnings. Results suggested that uncertainty information improved decision quality overall and increased trust in the forecast. Participants with uncertainty forecasts took appropriate precautionary action and withheld unnecessary action more often than did participants using deterministic forecasts. When error in the forecast increased, participants with conventional forecasts were reluctant to act. However, this effect was attenuated by uncertainty forecasts. Providing categorical decision advice alone did not improve decisions. However, combining decision advice with uncertainty estimates resulted in the best performance overall. The results reported here have important implications for the development of forecast formats to increase compliance with severe weather warnings as well as other domains in which one must act in the face of uncertainty. PsycINFO Database Record (c) 2012 APA, all rights reserved.
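The road-salting scenario above is a classic cost-loss decision: take protective action whenever the forecast probability of freezing exceeds the cost/loss ratio. The dollar amounts below are assumptions for illustration, not the experiment's payoffs.

    def should_salt(p_freeze, salting_cost, icy_penalty):
        """Cost-loss rule: act when the expected penalty avoided exceeds the cost,
        i.e. when p_freeze > cost / loss."""
        return p_freeze * icy_penalty > salting_cost

    cost, penalty = 1000.0, 6000.0           # hypothetical dollar values
    for p in (0.05, 0.15, 0.40):
        print(f"P(freeze)={p:.2f} -> salt roads: {should_salt(p, cost, penalty)}")
    # The action threshold here is cost / penalty = 0.167.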
EEG Classification with a Sequential Decision-Making Method in Motor Imagery BCI.
Liu, Rong; Wang, Yongxuan; Newman, Geoffrey I; Thakor, Nitish V; Ying, Sarah
2017-12-01
To develop subject-specific classifiers that recognize mental states fast and reliably is an important issue in brain-computer interfaces (BCI), particularly in practical real-time applications such as wheelchair or neuroprosthetic control. In this paper, a sequential decision-making strategy is explored in conjunction with an optimal wavelet analysis for EEG classification. The subject-specific wavelet parameters based on a grid-search method were first developed to determine the evidence accumulation curve for the sequential classifier. Then we proposed a new method to set the two constrained thresholds in the sequential probability ratio test (SPRT) based on the cumulative curve and a desired expected stopping time. As a result, it balanced the decision time of each class, and we term it balanced threshold SPRT (BTSPRT). The properties of the method were illustrated on 14 subjects' recordings from offline and online tests. Results showed the average maximum accuracy of the proposed method to be 83.4% and the average decision time of 2.77 s, when compared with 79.2% accuracy and a decision time of 3.01 s for the sequential Bayesian (SB) method. The BTSPRT method not only improves the classification accuracy and decision speed compared with the other nonsequential or SB methods, but also provides an explicit relationship between stopping time, thresholds and error, which is important for balancing the speed-accuracy tradeoff. These results suggest that BTSPRT would be useful in explicitly adjusting the tradeoff between rapid decision-making and error-free device control.
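The SPRT machinery underlying BTSPRT ties the stopping boundaries to the tolerated error rates via Wald's thresholds. The sketch below shows a plain Wald SPRT for two Gaussian classes with assumed means and noise level; the balanced-threshold adjustment for a desired expected stopping time, which is the paper's contribution, is not reproduced here.

    import math, random

    def sprt(samples, mu0, mu1, sigma, alpha=0.05, beta=0.05):
        """Wald's sequential probability ratio test for H1 (mean mu1) vs H0 (mean mu0).
        Returns the decision and the number of samples consumed."""
        upper = math.log((1 - beta) / alpha)      # accept H1 above this boundary
        lower = math.log(beta / (1 - alpha))      # accept H0 below this boundary
        llr = 0.0
        for t, x in enumerate(samples, start=1):
            llr += (mu1 - mu0) * (x - (mu0 + mu1) / 2) / sigma ** 2
            if llr >= upper:
                return "H1", t
            if llr <= lower:
                return "H0", t
        return "undecided", t

    rng = random.Random(7)
    stream = (rng.gauss(0.4, 1.0) for _ in range(10_000))   # data actually drawn from H1
    print(sprt(stream, mu0=0.0, mu1=0.4, sigma=1.0))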
A Model of Supervisor Decision-Making in the Accommodation of Workers with Low Back Pain.
Williams-Whitt, Kelly; Kristman, Vicki; Shaw, William S; Soklaridis, Sophie; Reguly, Paula
2016-09-01
Purpose To explore supervisors' perspectives and decision-making processes in the accommodation of back injured workers. Methods Twenty-three semi-structured, in-depth interviews were conducted with supervisors from eleven Canadian organizations about their role in providing job accommodations. Supervisors were identified through an on-line survey and interviews were recorded, transcribed and entered into NVivo software. The initial analyses identified common units of meaning, which were used to develop a coding guide. Interviews were coded, and a model of supervisor decision-making was developed based on the themes, categories and connecting ideas identified in the data. Results The decision-making model includes a process element that is described as iterative "trial and error" decision-making. Medical restrictions are compared to job demands, employee abilities and available alternatives. A feasible modification is identified through brainstorming and then implemented by the supervisor. Resources used for brainstorming include information, supervisor experience and autonomy, and organizational supports. The model also incorporates the experience of accommodation as a job demand that causes strain for the supervisor. Accommodation demands affect the supervisor's attitude, brainstorming and monitoring effort, and communication with returning employees. Resources and demands have a combined effect on accommodation decision complexity, which in turn affects the quality of the accommodation option selected. If the employee is unable to complete the tasks or is reinjured during the accommodation, the decision cycle repeats. More frequent iteration through the trial and error process reduces the likelihood of return to work success. Conclusion A series of propositions is developed to illustrate the relationships among categories in the model. The model and propositions show: (a) the iterative, problem solving nature of the RTW process; (b) decision resources necessary for accommodation planning, and (c) the impact accommodation demands may have on supervisors and RTW quality.
Colour coding for blood collection tube closures - a call for harmonisation.
Simundic, Ana-Maria; Cornes, Michael P; Grankvist, Kjell; Lippi, Giuseppe; Nybo, Mads; Ceriotti, Ferruccio; Theodorsson, Elvar; Panteghini, Mauro
2015-02-01
At least one in 10 patients experience adverse events while receiving hospital care. Many of the errors are related to laboratory diagnostics. Efforts to reduce laboratory errors over recent decades have primarily focused on the measurement process while pre- and post-analytical errors including errors in sampling, reporting and decision-making have received much less attention. Proper sampling and additives to the samples are essential. Tubes and additives are identified not only in writing on the tubes but also by the colour of the tube closures. Unfortunately these colours have not been standardised, running the risk of error when tubes from one manufacturer are replaced by the tubes from another manufacturer that use different colour coding. EFLM therefore supports the worldwide harmonisation of the colour coding for blood collection tube closures and labels in order to reduce the risk of pre-analytical errors and improve the patient safety.
When is an error not a prediction error? An electrophysiological investigation.
Holroyd, Clay B; Krigolson, Olave E; Baker, Robert; Lee, Seung; Gibson, Jessica
2009-03-01
A recent theory holds that the anterior cingulate cortex (ACC) uses reinforcement learning signals conveyed by the midbrain dopamine system to facilitate flexible action selection. According to this position, the impact of reward prediction error signals on ACC modulates the amplitude of a component of the event-related brain potential called the error-related negativity (ERN). The theory predicts that ERN amplitude is monotonically related to the expectedness of the event: It is larger for unexpected outcomes than for expected outcomes. However, a recent failure to confirm this prediction has called the theory into question. In the present article, we investigated this discrepancy in three trial-and-error learning experiments. All three experiments provided support for the theory, but the effect sizes were largest when an optimal response strategy could actually be learned. This observation suggests that ACC utilizes dopamine reward prediction error signals for adaptive decision making when the optimal behavior is, in fact, learnable.
Dreisinger, Naomi; Zapolsky, Nathan
2017-02-01
The emergency department (ED) is an environment that is conducive to medical errors. The ED is a time-pressured environment where physicians aim to rapidly evaluate and treat patients. Quick thinking and problem-based solutions are often used to assist in evaluation and diagnosis. Error analysis leads to an understanding of the cause of a medical error and is important to prevent future errors. Research suggests mechanisms to prevent medical errors in the pediatric ED, but prevention is not always possible. Transparency about errors is necessary to assure a trusting doctor-patient relationship. Patients want to be informed about all errors, and apologies are hard. Apologizing for a significant medical error that may have caused a complication is even harder. Having a systematic way to go about apologizing makes the process easier, and helps assure that the right information is relayed to the patient and his or her family. This creates an environment of autonomy and shared decision making that is ultimately beneficial to all aspects of patient care.
Least Reliable Bits Coding (LRBC) for high data rate satellite communications
NASA Technical Reports Server (NTRS)
Vanderaar, Mark; Wagner, Paul; Budinger, James
1992-01-01
An analysis and discussion of a bandwidth efficient multi-level/multi-stage block coded modulation technique called Least Reliable Bits Coding (LRBC) is presented. LRBC uses simple multi-level component codes that provide increased error protection on increasingly unreliable modulated bits in order to maintain an overall high code rate that increases spectral efficiency. Further, soft-decision multi-stage decoding is used to make decisions on unprotected bits through corrections made on more protected bits. Using analytical expressions and tight performance bounds it is shown that LRBC can achieve increased spectral efficiency and maintain equivalent or better power efficiency compared to that of Binary Phase Shift Keying (BPSK). Bit error rates (BER) vs. channel bit energy with Additive White Gaussian Noise (AWGN) are given for a set of LRB Reed-Solomon (RS) encoded 8PSK modulation formats with an ensemble rate of 8/9. All formats exhibit a spectral efficiency of 2.67 = (log2(8))(8/9) information bps/Hz. Bit by bit coded and uncoded error probabilities with soft-decision information are determined. These are traded with code rate to determine parameters that achieve good performance. The relative simplicity of Galois field algebra vs. the Viterbi algorithm and the availability of high speed commercial Very Large Scale Integration (VLSI) for block codes indicate that LRBC using block codes is a desirable method for high data rate implementations.
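Two of the quantities quoted above can be reproduced directly: the ensemble spectral efficiency log2(M) times the code rate, and the uncoded BPSK bit error probability on an AWGN channel, Q(sqrt(2 Eb/N0)). The Eb/N0 values chosen below are arbitrary illustration points.

    import math

    def q_function(x):
        """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
        return 0.5 * math.erfc(x / math.sqrt(2))

    # Spectral efficiency of rate-8/9 coded 8PSK, as in the LRBC ensemble above.
    spectral_efficiency = math.log2(8) * (8 / 9)
    print(f"spectral efficiency = {spectral_efficiency:.2f} bit/s/Hz")

    # Uncoded BPSK bit error rate on AWGN: Pb = Q(sqrt(2 * Eb/N0)).
    for ebn0_db in (4, 6, 8):
        ebn0 = 10 ** (ebn0_db / 10)
        print(f"Eb/N0 = {ebn0_db} dB -> BPSK BER = {q_function(math.sqrt(2 * ebn0)):.2e}")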
Clinical decision regret among critical care nurses: a qualitative analysis.
Arslanian-Engoren, Cynthia; Scott, Linda D
2014-01-01
Decision regret is a negative cognitive emotion associated with experiences of guilt and situations of interpersonal harm. These negative affective responses may contribute to emotional exhaustion in critical care nurses (CCNs), increased staff turnover rates and high medication error rates. Yet, little is known about clinical decision regret among CCNs or the conditions or situations (e.g., feeling sleepy) that may precipitate its occurrence. To examine decision regret among CCNs, with an emphasis on clinical decisions made when nurses were most sleepy. A content analytic approach was used to examine the narrative descriptions of clinical decisions by CCNs when sleepy. Six decision regret themes emerged that represented deviations in practice or performance behaviors that were attributed to fatigued CCNs. While 157 CCNs disclosed a clinical decision they made at work while sleepy, the prevalence may be underestimated and warrants further investigation. Copyright © 2014 Elsevier Inc. All rights reserved.
Bornmann, Lutz; Wallon, Gerlind; Ledin, Anna
2008-01-01
Does peer review fulfill its declared objective of identifying the best science and the best scientists? In order to answer this question we analyzed the Long-Term Fellowship and the Young Investigator programmes of the European Molecular Biology Organization. Both programmes aim to identify and support the best postdoctoral fellows and young group leaders in the life sciences. We checked the association between the selection decisions and the scientific performance of the applicants. Our study involved publication and citation data for 668 applicants to the Long-Term Fellowship programme from the year 1998 (130 approved, 538 rejected) and 297 applicants to the Young Investigator programme (39 approved and 258 rejected applicants) from the years 2001 and 2002. If quantity and impact of research publications are used as a criterion for scientific achievement, the results of (zero-truncated) negative binomial models show that the peer review process indeed selects scientists who perform on a higher level than the rejected ones subsequent to application. We determined the extent of errors due to over-estimation (type I errors) and under-estimation (type II errors) of future scientific performance. Our statistical analyses point out that between 26% and 48% of the decisions made to award or reject an application show one of the two error types. Even though for a part of the applicants the selection committee did not correctly estimate the applicant's future performance, the results show a statistically significant association between selection decisions and the applicants' scientific achievements, if quantity and impact of research publications are used as a criterion for scientific achievement. PMID:18941530
NASA Technical Reports Server (NTRS)
Ni, Jianjun David
2011-01-01
This presentation briefly discusses a research effort on mitigation techniques for pulsed radio frequency interference (RFI) on a Low-Density Parity-Check (LDPC) code. This problem is of considerable interest in the context of providing reliable communications to the space vehicle, which might suffer severe degradation due to pulsed RFI sources such as large radars. The LDPC code is one of the modern forward-error-correction (FEC) codes whose decoding performance approaches the Shannon limit. The LDPC code studied here is the AR4JA (2048, 1024) code recommended by the Consultative Committee for Space Data Systems (CCSDS), and it has been chosen for some spacecraft designs. Even though this code is designed as a powerful FEC code in the additive white Gaussian noise channel, simulation data and test results show that the performance of this LDPC decoder is severely degraded when exposed to the pulsed RFI specified in the spacecraft's transponder specifications. An analysis (through modeling and simulation) has been conducted to evaluate the impact of the pulsed RFI, and a few implementation techniques have been investigated to mitigate the pulsed RFI impact by reshuffling the soft-decision data available at the input of the LDPC decoder. The simulation results show that the LDPC decoding performance in terms of codeword error rate (CWER) under pulsed RFI can be improved by up to four orders of magnitude through a simple soft-decision-data reshuffle scheme. This study reveals that an error floor of the LDPC decoding performance appears around CWER=1E-4 when the proposed technique is applied to mitigate the pulsed RFI impact. The mechanism causing this error floor remains unknown; further investigation is necessary.
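For context only: the study evaluates a soft-decision-data reshuffle scheme that is not detailed in the abstract, so the snippet below shows a simpler, generic preprocessing idea instead (zeroing the log-likelihood ratios received during known RFI pulse windows so the decoder treats them as erasures). It is a hedged illustration of handling soft inputs under pulsed interference, not the scheme from the presentation, and no actual LDPC decoder is included.

    def mask_llrs_during_pulses(llrs, pulse_intervals):
        """Zero (erase) the log-likelihood ratios that fall inside known RFI pulse
        windows, so a downstream LDPC decoder treats those symbols as uninformative.
        pulse_intervals: list of (start, end) index pairs, end exclusive."""
        cleaned = list(llrs)
        for start, end in pulse_intervals:
            for i in range(start, min(end, len(cleaned))):
                cleaned[i] = 0.0
        return cleaned

    # Hypothetical soft inputs for a short codeword segment, corrupted in samples 3..6.
    llrs = [4.1, -3.8, 5.0, -0.2, 0.4, 0.1, -0.3, 4.4, -4.9, 3.7]
    print(mask_llrs_during_pulses(llrs, [(3, 7)]))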
Wang, Xiao-Jing
2016-01-01
Automatic responses enable us to react quickly and effortlessly, but they often need to be inhibited so that an alternative, voluntary action can take place. To investigate the brain mechanism of controlled behavior, we investigated a biologically-based network model of spiking neurons for inhibitory control. In contrast to a simple race between pro- versus anti-response, our model incorporates a sensorimotor remapping module, and an action-selection module endowed with a “Stop” process through tonic inhibition. Both are under the modulation of rule-dependent control. We tested the model by applying it to the well known antisaccade task in which one must suppress the urge to look toward a visual target that suddenly appears, and shift the gaze diametrically away from the target instead. We found that the two-stage competition is crucial for reproducing the complex behavior and neuronal activity observed in the antisaccade task across multiple brain regions. Notably, our model demonstrates two types of errors: fast and slow. Fast errors result from failing to inhibit the quick automatic responses and therefore exhibit very short response times. Slow errors, in contrast, are due to incorrect decisions in the remapping process and exhibit long response times comparable to those of correct antisaccade responses. The model thus reveals a circuit mechanism for the empirically observed slow errors and broad distributions of erroneous response times in antisaccade. Our work suggests that selecting between competing automatic and voluntary actions in behavioral control can be understood in terms of near-threshold decision-making, sharing a common recurrent (attractor) neural circuit mechanism with discrimination in perception. PMID:27551824
Lo, Chung-Chuan; Wang, Xiao-Jing
2016-08-01
Automatic responses enable us to react quickly and effortlessly, but they often need to be inhibited so that an alternative, voluntary action can take place. To investigate the brain mechanism of controlled behavior, we investigated a biologically-based network model of spiking neurons for inhibitory control. In contrast to a simple race between pro- versus anti-response, our model incorporates a sensorimotor remapping module, and an action-selection module endowed with a "Stop" process through tonic inhibition. Both are under the modulation of rule-dependent control. We tested the model by applying it to the well known antisaccade task in which one must suppress the urge to look toward a visual target that suddenly appears, and shift the gaze diametrically away from the target instead. We found that the two-stage competition is crucial for reproducing the complex behavior and neuronal activity observed in the antisaccade task across multiple brain regions. Notably, our model demonstrates two types of errors: fast and slow. Fast errors result from failing to inhibit the quick automatic responses and therefore exhibit very short response times. Slow errors, in contrast, are due to incorrect decisions in the remapping process and exhibit long response times comparable to those of correct antisaccade responses. The model thus reveals a circuit mechanism for the empirically observed slow errors and broad distributions of erroneous response times in antisaccade. Our work suggests that selecting between competing automatic and voluntary actions in behavioral control can be understood in terms of near-threshold decision-making, sharing a common recurrent (attractor) neural circuit mechanism with discrimination in perception.
Stress-induced cortisol facilitates threat-related decision making among police officers.
Akinola, Modupe; Mendes, Wendy Berry
2012-02-01
Previous research suggests that cortisol can affect cognitive functions such as memory, decision making, and attentiveness to threat-related cues. Here, we examine whether increases in cortisol, brought on by an acute social stressor, influence threat-related decision making. Eighty-one police officers completed a standardized laboratory stressor and then immediately completed a computer simulated decision-making task designed to examine decisions to accurately shoot or not shoot armed and unarmed Black and White targets. Results indicated that police officers who had larger cortisol increases to the social-stress task subsequently made fewer errors when deciding to shoot armed Black targets relative to armed White targets, suggesting that hypothalamic pituitary adrenal (HPA) activation may exacerbate vigilance for threat cues. We conclude with a discussion of the implications of threat-initiated decision making.
Swarm intelligence: when uncertainty meets conflict.
Conradt, Larissa; List, Christian; Roper, Timothy J
2013-11-01
Good decision making is important for the survival and fitness of stakeholders, but decisions usually involve uncertainty and conflict. We know surprisingly little about profitable decision-making strategies in conflict situations. On the one hand, sharing decisions with others can pool information and decrease uncertainty (swarm intelligence). On the other hand, sharing decisions can hand influence to individuals whose goals conflict. Thus, when should an animal share decisions with others? Using a theoretical model, we show that, contrary to intuition, decision sharing by animals with conflicting goals often increases individual gains as well as decision accuracy. Thus, conflict, far from hampering effective decision making, can improve decision outcomes for all stakeholders, as long as they share large-scale goals. In contrast, decisions shared by animals without conflict were often surprisingly poor. The underlying mechanism is that animals with conflicting goals are less correlated in individual choice errors. These results provide a strong argument, in the interest of all stakeholders, for not excluding other (e.g., minority) factions from collective decisions. The observed benefits of including diverse factions among the decision makers could also be relevant to human collective decision making.
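The key mechanism, that correlated individual errors degrade collective accuracy, can be illustrated with a small Monte Carlo. The signal-plus-noise model and all parameter values below are assumptions for illustration, not the paper's model: each member judges a binary state through shared plus private noise, the group follows the majority, and accuracy drops as the shared component grows.

    import random

    def group_accuracy(n_members=9, shared_noise=0.8, private_noise=0.6,
                       n_trials=20_000, seed=3):
        """Majority voting on a binary state (+1) observed through noise.
        Larger 'shared_noise' makes individual errors more correlated."""
        rng = random.Random(seed)
        correct = 0
        for _ in range(n_trials):
            common = rng.gauss(0, shared_noise)          # noise shared by every member
            votes = sum(1 if 1.0 + common + rng.gauss(0, private_noise) > 0 else -1
                        for _ in range(n_members))
            correct += votes > 0
        return correct / n_trials

    for shared in (0.0, 0.5, 1.0):
        print(f"shared noise sd {shared}: group accuracy {group_accuracy(shared_noise=shared):.3f}")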
New Splitting Criteria for Decision Trees in Stationary Data Streams.
Jaworski, Maciej; Duda, Piotr; Rutkowski, Leszek
2018-06-01
The most popular tools for stream data mining are based on decision trees. In the previous 15 years, all designed methods, headed by the very fast decision tree algorithm, relied on Hoeffding's inequality, and hundreds of researchers followed this scheme. Recently, we have demonstrated that although the Hoeffding decision trees are an effective tool for dealing with stream data, they are a purely heuristic procedure; for example, classical decision trees such as ID3 or CART cannot be adopted to data stream mining using Hoeffding's inequality. Therefore, there is an urgent need to develop new algorithms, which are both mathematically justified and characterized by good performance. In this paper, we address this problem by developing a family of new splitting criteria for classification in stationary data streams and investigating their probabilistic properties. The new criteria, derived using appropriate statistical tools, are based on the misclassification error and the Gini index impurity measures. The general division of splitting criteria into two types is proposed. Attributes chosen based on criteria of the first type guarantee, with high probability, the highest expected value of the split measure. Criteria of the second type ensure that the chosen attribute is the same, with high probability, as the attribute that would be chosen based on the whole infinite data stream. Moreover, in this paper, two hybrid splitting criteria are proposed, which are combinations of single criteria based on the misclassification error and the Gini index.
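The two impurity measures the new criteria build on are, for class proportions p_k, Gini = 1 - sum(p_k^2) and misclassification error = 1 - max(p_k). A small sketch, with hypothetical class counts, evaluates a candidate split by the size-weighted impurity of its children.

    def gini(counts):
        total = sum(counts)
        return 1.0 - sum((c / total) ** 2 for c in counts)

    def misclassification(counts):
        total = sum(counts)
        return 1.0 - max(counts) / total

    def weighted_child_impurity(children, impurity):
        """Impurity of a split = size-weighted average impurity of the child nodes."""
        n = sum(sum(c) for c in children)
        return sum(sum(c) / n * impurity(c) for c in children)

    parent = [40, 60]                       # class counts reaching the node (hypothetical)
    split = [[30, 10], [10, 50]]            # counts in the two children of a candidate split
    for name, f in (("Gini", gini), ("misclassification", misclassification)):
        gain = f(parent) - weighted_child_impurity(split, f)
        print(f"{name}: parent {f(parent):.3f}, split {weighted_child_impurity(split, f):.3f}, gain {gain:.3f}")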
Identification of factors associated with diagnostic error in primary care.
Minué, Sergio; Bermúdez-Tamayo, Clara; Fernández, Alberto; Martín-Martín, José Jesús; Benítez, Vivian; Melguizo, Miguel; Caro, Araceli; Orgaz, María José; Prados, Miguel Angel; Díaz, José Enrique; Montoro, Rafael
2014-05-12
Missed, delayed or incorrect diagnoses are considered to be diagnostic errors. The aim of this paper is to describe the methodology of a study to analyse cognitive aspects of the process by which primary care (PC) physicians diagnose dyspnoea. It examines the possible links between the use of heuristics, suboptimal cognitive acts and diagnostic errors, using Reason's taxonomy of human error (slips, lapses, mistakes and violations). The influence of situational factors (professional experience, perceived overwork and fatigue) is also analysed. Cohort study of new episodes of dyspnoea in patients receiving care from family physicians and residents at PC centres in Granada (Spain). With an initial expected diagnostic error rate of 20%, and a sampling error of 3%, 384 episodes of dyspnoea are calculated to be required. In addition to filling out the electronic medical record of the patients attended, each physician fills out 2 specially designed questionnaires about the diagnostic process performed in each case of dyspnoea. The first questionnaire includes questions on the physician's initial diagnostic impression, the 3 most likely diagnoses (in order of likelihood), and the diagnosis reached after the initial medical history and physical examination. It also includes items on the physicians' perceived overwork and fatigue during patient care. The second questionnaire records the confirmed diagnosis once it is reached. The complete diagnostic process is peer-reviewed to identify and classify the diagnostic errors. The possible use of heuristics of representativeness, availability, and anchoring and adjustment in each diagnostic process is also analysed. Each audit is reviewed with the physician responsible for the diagnostic process. Finally, logistic regression models are used to determine if there are differences in the diagnostic error variables based on the heuristics identified. This work sets out a new approach to studying the diagnostic decision-making process in PC, taking advantage of new technologies which allow immediate recording of the decision-making process.
Identification of factors associated with diagnostic error in primary care
2014-01-01
Background Missed, delayed or incorrect diagnoses are considered to be diagnostic errors. The aim of this paper is to describe the methodology of a study to analyse cognitive aspects of the process by which primary care (PC) physicians diagnose dyspnoea. It examines the possible links between the use of heuristics, suboptimal cognitive acts and diagnostic errors, using Reason’s taxonomy of human error (slips, lapses, mistakes and violations). The influence of situational factors (professional experience, perceived overwork and fatigue) is also analysed. Methods Cohort study of new episodes of dyspnoea in patients receiving care from family physicians and residents at PC centres in Granada (Spain). With an initial expected diagnostic error rate of 20%, and a sampling error of 3%, 384 episodes of dyspnoea are calculated to be required. In addition to filling out the electronic medical record of the patients attended, each physician fills out 2 specially designed questionnaires about the diagnostic process performed in each case of dyspnoea. The first questionnaire includes questions on the physician’s initial diagnostic impression, the 3 most likely diagnoses (in order of likelihood), and the diagnosis reached after the initial medical history and physical examination. It also includes items on the physicians’ perceived overwork and fatigue during patient care. The second questionnaire records the confirmed diagnosis once it is reached. The complete diagnostic process is peer-reviewed to identify and classify the diagnostic errors. The possible use of heuristics of representativeness, availability, and anchoring and adjustment in each diagnostic process is also analysed. Each audit is reviewed with the physician responsible for the diagnostic process. Finally, logistic regression models are used to determine if there are differences in the diagnostic error variables based on the heuristics identified. Discussion This work sets out a new approach to studying the diagnostic decision-making process in PC, taking advantage of new technologies which allow immediate recording of the decision-making process. PMID:24884984
Defining and Measuring Decision-Making for the Management of Trauma Patients.
Madani, Amin; Gips, Amanda; Razek, Tarek; Deckelbaum, Dan L; Mulder, David S; Grushka, Jeremy R
Effective management of trauma patients is heavily dependent on sound judgment and decision-making. Yet, current methods for training and assessing these advanced cognitive skills are subjective, lack standardization, and are prone to error. This qualitative study aims to define and characterize the cognitive and interpersonal competencies required to optimally manage injured patients. Cognitive and hierarchical task analyses for managing unstable trauma patients were performed using qualitative methods to map the thoughts, behaviors, and practices that characterize expert performance. Trauma team leaders and board-certified trauma surgeons participated in semistructured interviews that were transcribed verbatim. Data were supplemented with content from published literature and prospectively collected field notes from observations of the trauma team during trauma activations. The data were coded and analyzed using grounded theory by 2 independent reviewers. A framework was created based on 14 interviews with experts (lasting 1-2 hours each), 35 field observations (20 [57%] blunt; 15 [43%] penetrating; median Injury Severity Score 20 [13-25]), and 15 literary sources. Experts included 11 trauma surgeons and 3 emergency physicians from 7 Level 1 academic institutions in North America (median years in practice: 12 [8-17]). Twenty-nine competencies were identified, including 17 (59%) related to situation awareness, 6 (21%) involving decision-making, and 6 (21%) requiring interpersonal skills. Of 40 potential errors that were identified, root causes were mapped to errors in situation awareness (20 [50%]), decision-making (10 [25%]), or interpersonal skills (10 [25%]). This study defines cognitive and interpersonal competencies that are essential for the management of trauma patients. This framework may serve as the basis for novel curricula to train and assess decision-making skills, and to develop quality-control metrics to improve team and individual performance. Copyright © 2017 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
A Decision Theoretic Approach to Evaluate Radiation Detection Algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nobles, Mallory A.; Sego, Landon H.; Cooley, Scott K.
2013-07-01
There are a variety of sensor systems deployed at U.S. border crossings and ports of entry that scan for illicit nuclear material. In this work, we develop a framework for comparing the performance of detection algorithms that interpret the output of these scans and determine when secondary screening is needed. We optimize each algorithm to minimize its risk, or expected loss. We measure an algorithm’s risk by considering its performance over a sample, the probability distribution of threat sources, and the consequence of detection errors. While it is common to optimize algorithms by fixing one error rate and minimizing another, our framework allows one to simultaneously consider multiple types of detection errors. Our framework is flexible and easily adapted to many different assumptions regarding the probability of a vehicle containing illicit material, and the relative consequences of false positive and false negative errors. Our methods can therefore inform decision makers of the algorithm family and parameter values which best reduce the threat from illicit nuclear material, given their understanding of the environment at any point in time. To illustrate the applicability of our methods, in this paper, we compare the risk from two families of detection algorithms and discuss the policy implications of our results.
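As a hedged illustration of the risk-minimization framework this abstract describes, the sketch below picks a detection threshold by minimizing expected loss over both error types simultaneously, rather than fixing one error rate. The prior, costs, and score distributions are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: choose a detection threshold by minimizing expected loss (risk).
# Priors, costs, and score distributions are illustrative assumptions.
import numpy as np
from scipy import stats

p_threat = 0.001           # assumed prior probability a vehicle carries illicit material
cost_fp = 1.0              # assumed cost of an unnecessary secondary screening
cost_fn = 10_000.0         # assumed cost of missing a real threat

benign = stats.norm(0.0, 1.0)   # assumed score distribution for benign vehicles
threat = stats.norm(3.0, 1.0)   # assumed score distribution for threat vehicles

thresholds = np.linspace(-2, 6, 801)
risk = []
for t in thresholds:
    p_fp = benign.sf(t)          # false-positive rate: benign scored above threshold
    p_fn = threat.cdf(t)         # false-negative rate: threat scored below threshold
    risk.append((1 - p_threat) * p_fp * cost_fp + p_threat * p_fn * cost_fn)

best = thresholds[int(np.argmin(risk))]
print(f"risk-minimizing threshold: {best:.2f}, expected loss: {min(risk):.4f}")
```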
Automation, decision support, and expert systems in nephrology.
Soman, Sandeep; Zasuwa, Gerard; Yee, Jerry
2008-01-01
Increasing data suggest that errors in medicine occur frequently and result in substantial harm to the patient. The Institute of Medicine report described the magnitude of the problem, and public interest in this issue, which was already large, has grown. The traditional approach in medicine has been to identify the persons making the errors and recommend corrective strategies. However, it has become increasingly clear that it is more productive to focus on the systems and processes through which care is provided. If these systems are set up in ways that would both make errors less likely and identify those that do occur and, at the same time, improve efficiency, then safety and productivity would be substantially improved. Clinical decision support systems (CDSSs) are active knowledge systems that use 2 or more items of patient data to generate case specific recommendations. CDSSs are typically designed to integrate a medical knowledge base, patient data, and an inference engine to generate case specific advice. This article describes how automation, templating, and CDSS improve efficiency, patient care, and safety by reducing the frequency and consequences of medical errors in nephrology. We discuss practical applications of these in 3 settings: a computerized anemia-management program (CAMP, Henry Ford Health System, Detroit, MI), vascular access surveillance systems, and monthly capitation notes in the hemodialysis unit.
NASA Technical Reports Server (NTRS)
Furnstenau, Norbert; Ellis, Stephen R.
2015-01-01
In order to determine the required visual frame rate (FR) for minimizing prediction errors with out-the-window video displays at remote/virtual airport towers, thirteen active air traffic controllers viewed high dynamic fidelity simulations of landing aircraft and decided whether aircraft would stop as if to be able to make a turnoff or whether a runway excursion would be expected. The viewing conditions and simulation dynamics replicated visual rates and environments of transport aircraft landing at small commercial airports. The required frame rate was estimated using Bayes inference on prediction errors by linear FR extrapolation of event probabilities conditional on predictions (stop, no-stop). Furthermore, estimates were obtained from exponential model fits to the parametric and non-parametric perceptual discriminabilities d' and A (average area under ROC curves) as functions of FR. Decision errors are biased towards a preference for overshoot and appear to be due to an illusory increase in speed at low frame rates. Both the Bayes and A extrapolations yield a frame rate requirement of 35 < FRmin < 40 Hz. When compared with published results [12] on shooter game scores, the model-based d'(FR) extrapolation exhibits the best agreement and indicates an even higher FRmin > 40 Hz for minimizing decision errors. Definitive recommendations require further experiments with FR > 30 Hz.
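A minimal sketch of the kind of exponential-model extrapolation described above: discriminability d' is fit as a saturating exponential of frame rate, and the curve is read off to find the frame rate at which d' approaches its asymptote. The data points, functional form, and 95%-of-asymptote criterion are illustrative assumptions, not the study's numbers.

```python
# Minimal sketch: fit a saturating exponential d'(FR) and extrapolate a required frame rate.
# Data points and the 95%-of-asymptote criterion are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

frame_rates = np.array([7.5, 15.0, 22.5, 30.0])   # Hz, illustrative
d_prime = np.array([0.9, 1.6, 2.0, 2.2])          # illustrative discriminabilities

def saturating(fr, d_max, tau):
    return d_max * (1.0 - np.exp(-fr / tau))

(d_max, tau), _ = curve_fit(saturating, frame_rates, d_prime, p0=(2.5, 10.0))
fr_min = -tau * np.log(1.0 - 0.95)                # FR where d' reaches 95% of its asymptote
print(f"fitted d_max = {d_max:.2f}, tau = {tau:.1f} Hz, FRmin ~ {fr_min:.0f} Hz")
```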
Systematic sparse matrix error control for linear scaling electronic structure calculations.
Rubensson, Emanuel H; Sałek, Paweł
2005-11-30
Efficient truncation criteria used in multiatom blocked sparse matrix operations for ab initio calculations are proposed. As system size increases, so does the need to stay on top of errors and still achieve high performance. A variant of a blocked sparse matrix algebra to achieve strict error control with good performance is proposed. The presented idea is that the condition to drop a certain submatrix should depend not only on the magnitude of that particular submatrix, but also on which other submatrices are dropped. The decision to remove a certain submatrix is based on the contribution the removal would cause to the error in the chosen norm. We study the effect of an accumulated truncation error in iterative algorithms like trace-correcting density matrix purification. One way to reduce the initial exponential growth of this error is presented. The presented error control for a sparse blocked matrix toolbox allows for achieving optimal performance by performing only the operations needed to maintain the requested level of accuracy. Copyright 2005 Wiley Periodicals, Inc.
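A minimal sketch of the truncation idea described above, under the stated premise that the decision to drop a block depends on the accumulated error contribution of all dropped blocks rather than on each block's magnitude alone. The blocking scheme, norm, and tolerance below are illustrative assumptions, not the authors' toolbox.

```python
# Minimal sketch: drop submatrix blocks only while their accumulated contribution to the
# Frobenius-norm error stays below a requested tolerance. Blocking and tolerance are
# illustrative assumptions.
import numpy as np

def truncate_blocks(blocks, tol):
    """blocks: dict mapping (i, j) -> ndarray submatrix. Returns the blocks to keep."""
    # squared Frobenius norms of disjoint blocks add up to the squared norm of the error
    norms = sorted((np.linalg.norm(b, "fro") ** 2, key) for key, b in blocks.items())
    dropped, acc = set(), 0.0
    for sq_norm, key in norms:            # try dropping the smallest blocks first
        if acc + sq_norm <= tol ** 2:
            acc += sq_norm
            dropped.add(key)
        else:
            break
    return {k: b for k, b in blocks.items() if k not in dropped}

rng = np.random.default_rng(0)
blocks = {(i, j): rng.normal(scale=10.0 ** -(i + j), size=(4, 4))
          for i in range(3) for j in range(3)}
kept = truncate_blocks(blocks, tol=1e-2)
print(f"kept {len(kept)} of {len(blocks)} blocks")
```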
A day in the life of a volunteer incident commander: errors, pressures and mitigating strategies.
Bearman, Christopher; Bremner, Peter A
2013-05-01
To meet an identified gap in the literature this paper investigates the tasks that a volunteer incident commander needs to carry out during an incident, the errors that can be made and the way that errors are managed. In addition, pressure from goal seduction and situation aversion were also examined. Volunteer incident commanders participated in a two-part interview consisting of a critical decision method interview and discussions about a hierarchical task analysis constructed by the authors. A SHERPA analysis was conducted to further identify potential errors. The results identified the key tasks, errors with extreme risk, pressures from strong situations and mitigating strategies for errors and pressures. The errors and pressures provide a basic set of issues that need to be managed by both volunteer incident commanders and fire agencies. The mitigating strategies identified here suggest some ways that this can be done. Copyright © 2012 Elsevier Ltd and The Ergonomics Society. All rights reserved.
ERIC Educational Resources Information Center
Koskey, Kristin L. K.; Cain, Bryce; Sondergeld, Toni A.; Alvim, Henrique G.; Slager, Emily M.
2015-01-01
Achieving respectable response rates to surveys on university campuses has become increasingly more difficult, which can increase non-response error and jeopardize the integrity of data. Prior research has focused on investigating the effect of a single or small set of factors on college students' decision to complete surveys. We used a concurrent…
Co-operation of digital nonlinear equalizers and soft-decision LDPC FEC in nonlinear transmission.
Tanimura, Takahito; Oda, Shoichiro; Hoshida, Takeshi; Aoki, Yasuhiko; Tao, Zhenning; Rasmussen, Jens C
2013-12-30
We experimentally and numerically investigated the characteristics of 128 Gb/s dual-polarization quadrature phase shift keying signals received with two types of nonlinear equalizers (NLEs) followed by soft-decision (SD) low-density parity-check (LDPC) forward error correction (FEC). Successful co-operation between the SD-FEC and NLEs over various nonlinear transmission scenarios was demonstrated by optimization of the NLE parameters.
Decisions without Direction: Career Guidance and Decision-Making among American Youth.
ERIC Educational Resources Information Center
Hurley, Dan, Ed.; Thorp, Jim, Ed.
The attitudes and career plans of high school juniors and seniors were examined in a telephone survey of 809 U.S. high school juniors and seniors (sampling error, +/-3.5%). The respondents ranged in age from 14 to 20 years and were evenly divided between males and females. The key conclusions were as follows: (1) students perceive a lack of career…
Relationship between Recent Flight Experience and Pilot Error General Aviation Accidents
NASA Astrophysics Data System (ADS)
Nilsson, Sarah J.
Aviation insurance agents and fixed-base operation (FBO) owners use recent flight experience, as implied by the 90-day rule, to measure pilot proficiency in physical airplane skills, and to assess the likelihood of a pilot error accident. The generally accepted premise is that more experience in a recent timeframe predicts less of a propensity for an accident, all other factors excluded. Some of these aviation industry stakeholders measure pilot proficiency solely by using time flown within the past 90, 60, or even 30 days, not accounting for extensive research showing aeronautical decision-making and situational awareness training decrease the likelihood of a pilot error accident. In an effort to reduce the pilot error accident rate, the Federal Aviation Administration (FAA) has seen the need to shift pilot training emphasis from proficiency in physical airplane skills to aeronautical decision-making and situational awareness skills. However, current pilot training standards still focus more on the former than on the latter. The relationship between pilot error accidents and recent flight experience implied by the FAA's 90-day rule has not been rigorously assessed using empirical data. The intent of this research was to relate recent flight experience, in terms of time flown in the past 90 days, to pilot error accidents. A quantitative ex post facto approach, focusing on private pilots of single-engine general aviation (GA) fixed-wing aircraft, was used to analyze National Transportation Safety Board (NTSB) accident investigation archival data. The data were analyzed using t-tests and binary logistic regression. T-tests between the mean number of hours of recent flight experience of tricycle gear pilots involved in pilot error accidents (TPE) and non-pilot error accidents (TNPE), t(202) = -.200, p = .842, and conventional gear pilots involved in pilot error accidents (CPE) and non-pilot error accidents (CNPE), t(111) = -.271, p = .787, indicate there is no statistically significant difference between groups. Binary logistic regression indicates that recent flight experience does not reliably distinguish between pilot error and non-pilot error accidents for TPE/TNPE, chi2 = 0.040 (df = 1, p = .841), and CPE/CNPE, chi2 = 0.074 (df = 1, p = .786). Future research could focus on different pilot populations and, to broaden the scope, analyze several years of data.
Error rates in forensic DNA analysis: definition, numbers, impact and communication.
Kloosterman, Ate; Sjerps, Marjan; Quak, Astrid
2014-09-01
Forensic DNA casework is currently regarded as one of the most important types of forensic evidence, and important decisions in intelligence and justice are based on it. However, errors occasionally occur and may have very serious consequences. In other domains, error rates have been defined and published. The forensic domain is lagging behind concerning this transparency for various reasons. In this paper we provide definitions and observed frequencies for different types of errors at the Human Biological Traces Department of the Netherlands Forensic Institute (NFI) over the years 2008-2012. Furthermore, we assess their actual and potential impact and describe how the NFI deals with the communication of these numbers to the legal justice system. We conclude that the observed relative frequency of quality failures is comparable to studies from clinical laboratories and genetic testing centres. Furthermore, this frequency is constant over the five-year study period. The most common causes of failures related to the laboratory process were contamination and human error. Most human errors could be corrected, whereas gross contamination in crime samples often resulted in irreversible consequences. Hence this type of contamination is identified as the most significant source of error. Of the known contamination incidents, most were detected by the NFI quality control system before the report was issued to the authorities, and thus did not lead to flawed decisions like false convictions. However in a very limited number of cases crucial errors were detected after the report was issued, sometimes with severe consequences. Many of these errors were made in the post-analytical phase. The error rates reported in this paper are useful for quality improvement and benchmarking, and contribute to an open research culture that promotes public trust. However, they are irrelevant in the context of a particular case. Here case-specific probabilities of undetected errors are needed. These should be reported, separately from the match probability, when requested by the court or when there are internal or external indications for error. It should also be made clear that there are various other issues to consider, like DNA transfer. Forensic statistical models, in particular Bayesian networks, may be useful to take the various uncertainties into account and demonstrate their effects on the evidential value of the forensic DNA results. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Neural evidence for description dependent reward processing in the framing effect.
Yu, Rongjun; Zhang, Ping
2014-01-01
Human decision making can be influenced by emotionally valenced contexts, known as the framing effect. We used event-related brain potentials to investigate how framing influences the encoding of reward. We found that the feedback related negativity (FRN), which indexes the "worse than expected" negative prediction error in the anterior cingulate cortex (ACC), was more negative for the negative frame than for the positive frame in the win domain. Consistent with previous findings that the FRN is not sensitive to "better than expected" positive prediction error, the FRN did not differentiate the positive and negative frame in the loss domain. Our results provide neural evidence that the description invariance principle which states that reward representation and decision making are not influenced by how options are presented is violated in the framing effect.
Improving patient care. The cognitive psychology of missed diagnoses.
Redelmeier, Donald A
2005-01-18
Cognitive psychology is the science that examines how people reason, formulate judgments, and make decisions. This case involves a patient given a diagnosis of pharyngitis, whose ultimate diagnosis of osteomyelitis was missed through a series of cognitive shortcuts. These errors include the availability heuristic (in which people judge likelihood by how easily examples spring to mind), the anchoring heuristic (in which people stick with initial impressions), framing effects (in which people make different decisions depending on how information is presented), blind obedience (in which people stop thinking when confronted with authority), and premature closure (in which several alternatives are not pursued). Rather than trying to completely eliminate cognitive shortcuts (which often serve clinicians well), becoming aware of common errors might lead to sustained improvement in patient care.
Driving difficulties in Parkinson's disease
Rizzo, Matthew; Uc, Ergun Y; Dawson, Jeffrey; Anderson, Steven; Rodnitzky, Robert
2011-01-01
Safe driving requires the coordination of attention, perception, memory, motor and executive functions (including decision-making) and self-awareness. PD and other disorders may impair these abilities. Because age or medical diagnosis alone is often an unreliable criterion for licensure, decisions on fitness to drive should be based on empirical observations of performance. Linkages between cognitive abilities measured by neuropsychological tasks, and driving behavior assessed using driving simulators, and natural and naturalistic observations in instrumented vehicles, can help standardize the assessment of fitness-to-drive. By understanding the patterns of driver safety errors that cause crashes, it may be possible to design interventions to reduce these errors and injuries and increase mobility. This includes driver performance monitoring devices, collision alerting and warning systems, road design, and graded licensure strategies. PMID:20187237
Do juries meet our expectations?
Arkes, Hal R; Mellers, Barbara A
2002-12-01
Surveys of public opinion indicate that people have high expectations for juries. When it comes to serious crimes, most people want errors of convicting the innocent (false positives) or acquitting the guilty (false negatives) to fall well below 10%. Using expected utility theory, Bayes' Theorem, signal detection theory, and empirical evidence from detection studies of medical decision making, eyewitness testimony, and weather forecasting, we argue that the frequency of mistakes probably far exceeds these "tolerable" levels. We are not arguing against the use of juries. Rather, we point out that a closer look at jury decisions reveals a serious gap between what we expect from juries and what probably occurs. When deciding issues of guilt and/or punishing convicted criminals, we as a society should recognize and acknowledge the abundance of error.
The pitfalls of premature closure: clinical decision-making in a case of aortic dissection
Kumar, Bharat; Kanna, Balavenkatesh; Kumar, Suresh
2011-01-01
Premature closure is a type of cognitive error in which the physician fails to consider reasonable alternatives after an initial diagnosis is made. It is a common cause of delayed diagnosis and misdiagnosis borne out of a faulty clinical decision-making process. The authors present a case of aortic dissection in which premature closure was avoided by the aggressive pursuit of the appropriate differential diagnosis, and discuss the importance of disciplined clinical decision-making in the setting of chest pain. PMID:22679162
Data-driven Modelling for decision making under uncertainty
NASA Astrophysics Data System (ADS)
Angria S, Layla; Dwi Sari, Yunita; Zarlis, Muhammad; Tulus
2018-01-01
Decision making under uncertainty has become a prominent topic in operations research. Many models have been proposed, one of which is data-driven modelling (DDM). The purpose of this paper is to extract and recognize patterns in data and to find the best model for decision-making problems under uncertainty, using a data-driven modelling approach with linear programming, linear and nonlinear differential equations, and a Bayesian approach. Model criteria are tested to determine the smallest error, and the model with the smallest error is selected as the best model to use.
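A minimal sketch of the "pick the model with the smallest error" step described above: several candidate models are fit to the same data and compared on held-out root-mean-square error. The candidate model forms and the data are placeholders, not those of the paper.

```python
# Minimal sketch: compare candidate models on held-out RMSE and keep the best one.
# Candidate forms and data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 4.0, 60)
y = 2.0 * np.exp(0.5 * x) + rng.normal(scale=0.5, size=x.size)   # assumed noisy observations

train, test = slice(0, 40), slice(40, 60)

def rmse(pred, obs):
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

errors = {}

# candidate 1: straight line y = a*x + b
a, b = np.polyfit(x[train], y[train], 1)
errors["linear"] = rmse(a * x[test] + b, y[test])

# candidate 2: quadratic polynomial
quad = np.polyfit(x[train], y[train], 2)
errors["quadratic"] = rmse(np.polyval(quad, x[test]), y[test])

# candidate 3: exponential y = exp(intercept + slope*x), fit in log space (y > 0 here)
slope, intercept = np.polyfit(x[train], np.log(y[train]), 1)
errors["exponential"] = rmse(np.exp(intercept + slope * x[test]), y[test])

best = min(errors, key=errors.get)
print(errors, "-> best model:", best)
```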
Offset quadrature communications with decision-feedback carrier synchronization
NASA Technical Reports Server (NTRS)
Simon, M. K.; Smith, J. G.
1974-01-01
In order to accommodate a quadrature amplitude-shift-keyed (QASK) signal, Simon and Smith (1974) have modified the decision-feedback loop which tracks a quadrature phase-shift-keyed (QPSK). In the investigation reported approaches are considered to modify the loops in such a way that offset QASK signals can be tracked, giving attention to the special case of an offset QPSK. The development of the stochastic integro-differential equation of operation for a decision-feedback offset QASK loop is discussed along with the probability density function of the phase error process.
Credit assignment in movement-dependent reinforcement learning
Boggess, Matthew J.; Crossley, Matthew J.; Parvin, Darius; Ivry, Richard B.; Taylor, Jordan A.
2016-01-01
When a person fails to obtain an expected reward from an object in the environment, they face a credit assignment problem: Did the absence of reward reflect an extrinsic property of the environment or an intrinsic error in motor execution? To explore this problem, we modified a popular decision-making task used in studies of reinforcement learning, the two-armed bandit task. We compared a version in which choices were indicated by key presses, the standard response in such tasks, to a version in which the choices were indicated by reaching movements, which affords execution failures. In the key press condition, participants exhibited a strong risk aversion bias; strikingly, this bias reversed in the reaching condition. This result can be explained by a reinforcement model wherein movement errors influence decision-making, either by gating reward prediction errors or by modifying an implicit representation of motor competence. Two further experiments support the gating hypothesis. First, we used a condition in which we provided visual cues indicative of movement errors but informed the participants that trial outcomes were independent of their actual movements. The main result was replicated, indicating that the gating process is independent of participants’ explicit sense of control. Second, individuals with cerebellar degeneration failed to modulate their behavior between the key press and reach conditions, providing converging evidence of an implicit influence of movement error signals on reinforcement learning. These results provide a mechanistically tractable solution to the credit assignment problem. PMID:27247404
Credit assignment in movement-dependent reinforcement learning.
McDougle, Samuel D; Boggess, Matthew J; Crossley, Matthew J; Parvin, Darius; Ivry, Richard B; Taylor, Jordan A
2016-06-14
When a person fails to obtain an expected reward from an object in the environment, they face a credit assignment problem: Did the absence of reward reflect an extrinsic property of the environment or an intrinsic error in motor execution? To explore this problem, we modified a popular decision-making task used in studies of reinforcement learning, the two-armed bandit task. We compared a version in which choices were indicated by key presses, the standard response in such tasks, to a version in which the choices were indicated by reaching movements, which affords execution failures. In the key press condition, participants exhibited a strong risk aversion bias; strikingly, this bias reversed in the reaching condition. This result can be explained by a reinforcement model wherein movement errors influence decision-making, either by gating reward prediction errors or by modifying an implicit representation of motor competence. Two further experiments support the gating hypothesis. First, we used a condition in which we provided visual cues indicative of movement errors but informed the participants that trial outcomes were independent of their actual movements. The main result was replicated, indicating that the gating process is independent of participants' explicit sense of control. Second, individuals with cerebellar degeneration failed to modulate their behavior between the key press and reach conditions, providing converging evidence of an implicit influence of movement error signals on reinforcement learning. These results provide a mechanistically tractable solution to the credit assignment problem.
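A minimal sketch of the gating hypothesis described in these two abstracts: in a two-armed bandit learner, reward prediction errors are down-weighted on trials where the miss is attributed to one's own movement (execution) error rather than to the chosen option. All parameter values are illustrative assumptions, not the authors' fitted model.

```python
# Minimal sketch: epsilon-greedy bandit learning where reward prediction errors are
# gated (down-weighted) on trials with an execution (movement) error.
import numpy as np

rng = np.random.default_rng(2)
q = np.zeros(2)                  # value estimates for the two options
alpha, gate = 0.2, 0.2           # learning rate; PE weight on execution-error trials
p_reward = np.array([0.3, 0.7])  # assumed reward probabilities of the two options
p_motor_error = 0.15             # assumed chance that a reach misses the chosen target

for trial in range(1000):
    if rng.random() < 0.1:                   # occasional exploration
        choice = int(rng.integers(2))
    else:                                    # otherwise pick the currently best option
        choice = int(np.argmax(q))
    missed = rng.random() < p_motor_error
    reward = 0.0 if missed else float(rng.random() < p_reward[choice])
    pe = reward - q[choice]                  # reward prediction error
    weight = gate if missed else 1.0         # gate the PE when the movement itself failed
    q[choice] += alpha * weight * pe

print("learned values:", np.round(q, 2))
```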
Visual anticipation biases conscious decision making but not bottom-up visual processing.
Mathews, Zenon; Cetnarski, Ryszard; Verschure, Paul F M J
2014-01-01
Prediction plays a key role in the control of attention, but it is not clear which aspects of prediction are most prominent in conscious experience. An evolving view on the brain is that it can be seen as a prediction machine that optimizes its ability to predict states of the world and the self through the top-down propagation of predictions and the bottom-up presentation of prediction errors. There are competing views, though, on whether prediction or prediction errors dominate the formation of conscious experience. Yet, the dynamic effects of prediction on perception, decision making and consciousness have been difficult to assess and to model. We propose a novel mathematical framework and a psychophysical paradigm that allow us to assess the hierarchical structuring of perceptual consciousness, its content, and the impact of predictions and/or errors on conscious experience, attention and decision-making. Using a displacement detection task combined with reverse correlation, we reveal signatures of the usage of prediction at three different levels of perceptual processing: bottom-up fast saccades, top-down driven slow saccades and conscious decisions. Our results suggest that the brain employs multiple parallel mechanisms at different levels of perceptual processing in order to shape effective sensory consciousness within a predicted perceptual scene. We further observe that bottom-up sensory and top-down predictive processes can be dissociated through cognitive load. We propose a probabilistic data association model from dynamical systems theory to model the predictive multi-scale bias in perceptual processing that we observe and its role in the formation of conscious experience. We propose that these results support the hypothesis that consciousness provides a time-delayed description of a task that is used to prospectively optimize real-time control structures, rather than being engaged in the real-time control of behavior itself.
Automation bias and verification complexity: a systematic review.
Lyell, David; Coiera, Enrico
2017-03-01
While potentially reducing decision errors, decision support systems can introduce new types of errors. Automation bias (AB) happens when users become overreliant on decision support, which reduces vigilance in information seeking and processing. Most research originates from the human factors literature, where the prevailing view is that AB occurs only in multitasking environments. This review seeks to compare the human factors and health care literature, focusing on the apparent association of AB with multitasking and task complexity. EMBASE, Medline, Compendex, Inspec, IEEE Xplore, Scopus, Web of Science, PsycINFO, and Business Source Premiere from 1983 to 2015. Evaluation studies where task execution was assisted by automation and resulted in errors were included. Participants needed to be able to verify automation correctness and perform the task manually. Tasks were identified and grouped. Task and automation type and presence of multitasking were noted. Each task was rated for its verification complexity. Of 890 papers identified, 40 met the inclusion criteria; 6 were in health care. Contrary to the prevailing human factors view, AB was found in single tasks, typically involving diagnosis rather than monitoring, and with high verification complexity. The literature is fragmented, with large discrepancies in how AB is reported. Few studies reported the statistical significance of AB compared to a control condition. AB appears to be associated with the degree of cognitive load experienced in decision tasks, and appears to not be uniquely associated with multitasking. Strategies to minimize AB might focus on cognitive load reduction. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com
Improved memory for error feedback.
Van der Borght, Liesbet; Schouppe, Nathalie; Notebaert, Wim
2016-11-01
Surprising feedback in a general knowledge test leads to an improvement in memory for both the surface features and the content of the feedback (Psychon Bull Rev 16:88-92, 2009). Based on the idea that in cognitive tasks, error is surprising (the orienting account, Cognition 111:275-279, 2009), we tested whether error feedback would be better remembered than correct feedback. Colored words were presented as feedback signals in a flanker task, where the color indicated the accuracy. Subsequently, these words were again presented during a recognition task (Experiment 1) or a lexical decision task (Experiments 2 and 3). In all experiments, memory was improved for words seen as error feedback. These results are compared to the attentional boost effect (J Exp Psychol Learn Mem Cogn 39:1223-12231, 2013) and related to the orienting account for post-error slowing (Cognition 111:275-279, 2009).
Error Analysis of CM Data Products Sources of Uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunt, Brian D.; Eckert-Gallup, Aubrey Celia; Cochran, Lainy Dromgoole
The goal of this project is to address the current inability to assess the overall error and uncertainty of data products developed and distributed by DOE’s Consequence Management (CM) Program. This is a widely recognized shortfall, the resolution of which would provide a great deal of value and defensibility to the analysis results, data products, and the decision-making process that follows this work. A global approach to this problem is necessary because multiple sources of error and uncertainty contribute to the ultimate production of CM data products. Therefore, this project will require collaboration with subject matter experts across a wide range of FRMAC skill sets in order to quantify the types of uncertainty that each area of the CM process might contain and to understand how variations in these uncertainty sources contribute to the aggregated uncertainty present in CM data products. The ultimate goal of this project is to quantify the confidence level of CM products to ensure that appropriate public and worker protection decisions are supported by defensible analysis.
Software fault-tolerance by design diversity DEDIX: A tool for experiments
NASA Technical Reports Server (NTRS)
Avizienis, A.; Gunningberg, P.; Kelly, J. P. J.; Lyu, R. T.; Strigini, L.; Traverse, P. J.; Tso, K. S.; Voges, U.
1986-01-01
The use of multiple versions of a computer program, independently designed from a common specification, to reduce the effects of an error is discussed. If these versions are designed by independent programming teams, it is expected that a fault in one version will not have the same behavior as any fault in the other versions. Since the errors in the output of the versions are different and uncorrelated, it is possible to run the versions concurrently, cross-check their results at prespecified points, and mask errors. A DEsign DIversity eXperiments (DEDIX) testbed was implemented to study the influence of common mode errors which can result in a failure of the entire system. The layered design of DEDIX and its decision algorithm are described.
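A minimal sketch of the cross-check-and-mask idea behind design diversity: several independently written versions compute the same result, and a decision function masks a disagreeing (faulty) version by majority vote at a prespecified check point. The toy versions below are stand-ins for illustration, not DEDIX code.

```python
# Minimal sketch: N-version cross-check with majority-vote error masking.
# The three "versions" and the seeded fault are illustrative stand-ins.
from collections import Counter

def version_a(x): return x * x
def version_b(x): return x ** 2
def version_c(x): return x * x + (1 if x == 3 else 0)   # seeded fault for illustration

def vote(results):
    value, count = Counter(results).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority: all versions disagree")
    return value                                        # faulty minority result is masked

for x in range(5):
    results = [version_a(x), version_b(x), version_c(x)]
    print(x, results, "->", vote(results))
```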
Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics
NASA Technical Reports Server (NTRS)
Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
Numerical simulation has now become an integral part of engineering design process. Critical design decisions are routinely made based on the simulation results and conclusions. Verification and validation of the reliability of the numerical simulation is therefore vitally important in the engineering design processes. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of the numerical simulation by estimating numerical approximation error, computational model induced errors and the uncertainties contained in the mathematical models so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation so that the reliability of the numerical simulation can be improved.
A meta-analytic review of two modes of learning and the description-experience gap.
Wulff, Dirk U; Mergenthaler-Canseco, Max; Hertwig, Ralph
2018-02-01
People can learn about the probabilistic consequences of their actions in two ways: One is by consulting descriptions of an action's consequences and probabilities (e.g., reading up on a medication's side effects). The other is by personally experiencing the probabilistic consequences of an action (e.g., beta testing software). In principle, people taking each route can reach analogous states of knowledge and consequently make analogous decisions. In the last dozen years, however, research has demonstrated systematic discrepancies between description- and experienced-based choices. This description-experience gap has been attributed to factors including reliance on a small set of experience, the impact of recency, and different weighting of probability information in the two decision types. In this meta-analysis focusing on studies using the sampling paradigm of decisions from experience, we evaluated these and other determinants of the decision-experience gap by reference to more than 70,000 choices made by more than 6,000 participants. We found, first, a robust description-experience gap but also a key moderator, namely, problem structure. Second, the largest determinant of the gap was reliance on small samples and the associated sampling error: free to terminate search, individuals explored too little to experience all possible outcomes. Third, the gap persisted when sampling error was basically eliminated, suggesting other determinants. Fourth, the occurrence of recency was contingent on decision makers' autonomy to terminate search, consistent with the notion of optional stopping. Finally, we found indications of different probability weighting in decisions from experience versus decisions from description when the problem structure involved a risky and a safe option. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Freitas, Alex A; Limbu, Kriti; Ghafourian, Taravat
2015-01-01
Volume of distribution is an important pharmacokinetic property that indicates the extent of a drug's distribution in the body tissues. This paper addresses the problem of how to estimate the apparent volume of distribution at steady state (Vss) of chemical compounds in the human body using decision tree-based regression methods from the area of data mining (or machine learning). Hence, the pros and cons of several different types of decision tree-based regression methods have been discussed. The regression methods predict Vss using, as predictive features, both the compounds' molecular descriptors and the compounds' tissue:plasma partition coefficients (Kt:p) - often used in physiologically-based pharmacokinetics. Therefore, this work has assessed whether the data mining-based prediction of Vss can be made more accurate by using as input not only the compounds' molecular descriptors but also (a subset of) their predicted Kt:p values. Comparison of the models that used only molecular descriptors, in particular, the Bagging decision tree (mean fold error of 2.33), with those employing predicted Kt:p values in addition to the molecular descriptors, such as the Bagging decision tree using adipose Kt:p (mean fold error of 2.29), indicated that the use of predicted Kt:p values as descriptors may be beneficial for accurate prediction of Vss using decision trees if prior feature selection is applied. Decision tree based models presented in this work have an accuracy that is reasonable and similar to the accuracy of reported Vss inter-species extrapolations in the literature. The estimation of Vss for new compounds in drug discovery will benefit from methods that are able to integrate large and varied sources of data and flexible non-linear data mining methods such as decision trees, which can produce interpretable models. Graphical Abstract: Decision trees for the prediction of tissue partition coefficient and volume of distribution of drugs.
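For readers unfamiliar with the fold-error metric quoted above, the sketch below uses one common definition, the geometric mean fold difference between predicted and observed Vss. The abstract does not state the paper's exact formula, and the values below are illustrative only.

```python
# Minimal sketch: mean fold error as the geometric mean of the predicted/observed fold
# difference. Definition and example values are illustrative assumptions.
import numpy as np

def mean_fold_error(predicted, observed):
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return float(10 ** np.mean(np.abs(np.log10(predicted / observed))))

observed_vss = [0.5, 1.2, 3.0, 10.0]     # L/kg, illustrative
predicted_vss = [0.9, 0.8, 6.5, 4.0]     # L/kg, illustrative
print(f"mean fold error: {mean_fold_error(predicted_vss, observed_vss):.2f}")
```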
Ben-Ezra, Menachem; Bibi, Haim
2016-09-01
The association between psychological distress and decision regret during armed conflict among hospital personnel is of interest. The objective of this study was to examine the association between psychological distress and decision regret during armed conflict. Data were collected from 178 hospital personnel at Barzilai Medical Center in Ashkelon, Israel, during Operation Protective Edge. The survey was based on intranet data collection covering demographics, self-rated health, life satisfaction, psychological distress and decision regret. Among hospital personnel, having higher psychological distress and being young were associated with higher decision regret. This study adds to the existing knowledge by providing novel data about the association between psychological distress and decision regret among hospital personnel during armed conflict. These data open a new avenue for future research into other potentially detrimental factors affecting medical decision making and medical errors made during crises.
Context affects nestmate recognition errors in honey bees and stingless bees.
Couvillon, Margaret J; Segers, Francisca H I D; Cooper-Bowman, Roseanne; Truslove, Gemma; Nascimento, Daniela L; Nascimento, Fabio S; Ratnieks, Francis L W
2013-08-15
Nestmate recognition studies, where a discriminator first recognises and then behaviourally discriminates (accepts/rejects) another individual, have used a variety of methodologies and contexts. This is potentially problematic because recognition errors in discrimination behaviour are predicted to be context-dependent. Here we compare the recognition decisions (accept/reject) of discriminators in two eusocial bees, Apis mellifera and Tetragonisca angustula, under different contexts. These contexts include natural guards at the hive entrance (control); natural guards held in plastic test arenas away from the hive entrance that vary either in the presence or absence of colony odour or the presence or absence of an additional nestmate discriminator; and, for the honey bee, the inside of the nest. For both honey bee and stingless bee guards, total recognition errors of behavioural discrimination made by guards (% nestmates rejected + % non-nestmates accepted) are much lower at the colony entrance (honey bee: 30.9%; stingless bee: 33.3%) than in the test arenas (honey bee: 60-86%; stingless bee: 61-81%; P<0.001 for both). Within the test arenas, the presence of colony odour specifically reduced the total recognition errors in honey bees, although this reduction still fell short of bringing error levels down to what was found at the colony entrance. Lastly, in honey bees, the data show that the in-nest collective behavioural discrimination by ca. 30 workers that contact an intruder is insufficient to achieve error-free recognition and is not as effective as the discrimination by guards at the entrance. Overall, these data demonstrate that context is a significant factor in a discriminator's ability to make appropriate recognition decisions, and should be considered when designing recognition study methodologies.
ERIC Educational Resources Information Center
Severo, Milton; Silva-Pereira, Fernanda; Ferreira, Maria Amelia
2013-01-01
Several studies have shown that the standard error of measurement (SEM) can be used as an additional “safety net” to reduce the frequency of false-positive or false-negative student grading classifications. Practical examinations in clinical anatomy are often used as diagnostic tests to admit students to course final examinations. The aim of this…
Human-Agent Teaming for Multi-Robot Control: A Literature Review
2013-02-01
neurophysiological devices are becoming more cost effective and less invasive, future systems will most likely take advantage of this technology to monitor... (Parasuraman et al., 1993). It has also been reported that both the cost of automation errors and the cost of verification affect humans’ reliance on... decision aids, and the effects are also moderated by age (Ezer et al., 2008). Generally, reliance is reduced as the cost of error increases and it
White, Stuart F; Geraci, Marilla; Lewis, Elizabeth; Leshin, Joseph; Teng, Cindy; Averbeck, Bruno; Meffert, Harma; Ernst, Monique; Blair, James R; Grillon, Christian; Blair, Karina S
2017-02-01
Deficits in reinforcement-based decision making have been reported in generalized anxiety disorder. However, the pathophysiology of these deficits is largely unknown; published studies have mainly examined adolescents, and the integrity of core functional processes underpinning decision making remains undetermined. In particular, it is unclear whether the representation of reinforcement prediction error (PE) (the difference between received and expected reinforcement) is disrupted in generalized anxiety disorder. This study addresses these issues in adults with the disorder. Forty-six unmedicated individuals with generalized anxiety disorder and 32 healthy comparison subjects group-matched on IQ, gender, and age performed a passive avoidance task while undergoing functional MRI. Data analyses were performed using a computational modeling approach. Behaviorally, individuals with generalized anxiety disorder showed impaired reinforcement-based decision making. Imaging results revealed that during feedback, individuals with generalized anxiety disorder relative to healthy subjects showed a reduced correlation between PE and activity within the ventromedial prefrontal cortex, ventral striatum, and other structures implicated in decision making. In addition, individuals with generalized anxiety disorder relative to healthy participants showed a reduced correlation between punishment PEs, but not reward PEs, and activity within the left and right lentiform nucleus/putamen. This is the first study to identify computational impairments during decision making in generalized anxiety disorder. PE signaling is significantly disrupted in individuals with the disorder and may lead to their decision-making deficits and excessive worry about everyday problems by disrupting the online updating ("reality check") of the current relationship between the expected values of current response options and the actual received rewards and punishments.
Our Changing Planet: The U.S. Climate Change Science Program for Fiscal Year 2006
2005-11-01
any remaining uncertainties for the Amazon region of South America. These results are expected to greatly reduce errors and uncertainties concerning... changing the concentration of atmospheric CO2 are fossil-fuel burning, deforestation, land-use change, and cement production. These processes have... the initial phases of work on the remaining products. Specific plans for enhanced decision-support resources include: – Developing decision-support
Aeronautical Decision Making for Student and Private Pilots.
1987-05-01
you learn to gain voluntary control over your body to achieve the relaxation response. In autogenic training, you learn to shut down many bodily... Abstract: "Aviation accident data indicate that the majority of aircraft mishaps are due to judgment error. This training manual is part of a project to... develop materials and techniques to help improve pilot decision making. Training programs using prototype versions of these materials have
MODIS Snow Cover Mapping Decision Tree Technique: Snow and Cloud Discrimination
NASA Technical Reports Server (NTRS)
Riggs, George A.; Hall, Dorothy K.
2010-01-01
Accurate mapping of snow cover continues to challenge cryospheric scientists and modelers. The Moderate-Resolution Imaging Spectroradiometer (MODIS) snow data products have been used since 2000 by many investigators to map and monitor snow cover extent for various applications. Users have reported on the utility of the products and also on problems encountered. Three problems or hindrances in the use of the MODIS snow data products that have been reported in the literature are: cloud obscuration, snow/cloud confusion, and snow omission errors in thin or sparse snow cover conditions. Implementation of the MODIS snow algorithm in a decision tree technique using surface reflectance input to mitigate those problems is being investigated. The objective of this work is to use a decision tree structure for the snow algorithm. This should alleviate snow/cloud confusion and omission errors and provide a snow map with classes that convey information on how snow was detected, e.g. snow under clear sky, snow under cloud, to enable users' flexibility in interpreting and deriving a snow map. Results of a snow cover decision tree algorithm are compared to the standard MODIS snow map and found to exhibit improved ability to alleviate snow/cloud confusion in some situations, allowing up to about a 5% increase in mapped snow cover extent, and thus accuracy, in some scenes.
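A minimal sketch of a decision-tree-style classification over reflectance inputs that returns classes conveying how snow was detected (e.g. snow under clear sky vs. snow under cloud), as described above. The band inputs and thresholds are illustrative placeholders, not the operational MODIS algorithm.

```python
# Minimal sketch: a hand-written decision tree over reflectance inputs producing labelled
# snow classes. Thresholds and inputs are illustrative placeholders.
def classify_pixel(ndsi, visible, cloud_flag):
    """ndsi: normalized difference snow index; visible: visible-band reflectance."""
    if cloud_flag:                            # branch 1: cloudy sky
        if ndsi > 0.4 and visible > 0.1:
            return "snow_under_cloud"         # snow signal still evident despite cloud
        return "cloud"                        # cloud obscured; snow state unknown
    # branch 2: clear sky
    if ndsi > 0.4 and visible > 0.1:
        return "snow_clear_sky"
    if 0.1 < ndsi <= 0.4:
        return "possible_thin_or_sparse_snow"
    return "no_snow"

print(classify_pixel(ndsi=0.65, visible=0.5, cloud_flag=False))   # -> snow_clear_sky
```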
Liu, Rong
2017-01-01
Obtaining a fast and reliable decision is an important issue in brain-computer interfaces (BCI), particularly in practical real-time applications such as wheelchair or neuroprosthetic control. In this study, the EEG signals were first analyzed with a power projective base method. Then we applied a decision-making model, sequential probability ratio testing (SPRT), for single-trial classification of motor imagery movement events. The unique strength of this proposed classification method lies in its accumulative process, which increases the discriminative power as more and more evidence is observed over time. The properties of the method were illustrated on thirteen subjects' recordings from three datasets. Results showed that our proposed power projective method outperformed two benchmark methods for every subject. Moreover, with the sequential classifier, the accuracies across subjects were significantly higher than with nonsequential ones. The average maximum accuracy of the SPRT method was 84.1%, as compared with 82.3% accuracy for the sequential Bayesian (SB) method. The proposed SPRT method provides an explicit relationship between stopping time, thresholds, and error, which is important for balancing the time-accuracy trade-off. These results suggest SPRT would be useful in speeding up decision-making while trading off errors in BCI. PMID:29348781
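A minimal sketch of the SPRT decision rule referred to above, showing the explicit link between the tolerated error rates, the log-likelihood-ratio thresholds, and the stopping time. The Gaussian class models and parameters are assumptions for illustration, not the study's EEG features.

```python
# Minimal sketch: sequential probability ratio test (SPRT) with Wald thresholds.
# Class models and parameters are illustrative assumptions.
import numpy as np
from scipy import stats

alpha, beta = 0.05, 0.05                 # tolerated false-positive / false-negative rates
upper = np.log((1 - beta) / alpha)       # accept class 1 when the cumulative LLR exceeds this
lower = np.log(beta / (1 - alpha))       # accept class 0 when it falls below this

h0 = stats.norm(0.0, 1.0)                # assumed feature distribution under class 0
h1 = stats.norm(0.5, 1.0)                # assumed feature distribution under class 1

def sprt(samples):
    llr = 0.0
    for t, x in enumerate(samples, start=1):
        llr += h1.logpdf(x) - h0.logpdf(x)   # accumulate evidence sample by sample
        if llr >= upper:
            return "class 1", t
        if llr <= lower:
            return "class 0", t
    return "undecided", len(samples)

rng = np.random.default_rng(4)
decision, stop_time = sprt(rng.normal(0.5, 1.0, size=200))   # data drawn from class 1
print(decision, "after", stop_time, "samples")
```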
Market mechanisms protect the vulnerable brain.
Ramchandran, Kanchna; Nayakankuppam, Dhananjay; Berg, Joyce; Tranel, Daniel; Denburg, Natalie L
2011-07-01
Markets are mechanisms of social exchange, intended to facilitate trading. However, the question remains as to whether markets would help or hurt individuals with decision-making deficits, as is frequently encountered in the case of cognitive aging. Essential for predicting future gains and losses in monetary and social domains, the striatal nuclei in the brain undergo structural, neurochemical, and functional decline with age. We correlated the efficacy of market mechanisms with dorsal striatal decline in an aging population, by using market-based trading in the context of the 2008 U.S. Presidential Elections (primary cycle). Impaired decision-makers displayed higher prediction error (difference between their prediction and actual outcome). Lower in vivo caudate volume was also associated with higher prediction error. Importantly, market-based trading protected older adults with lower caudate volume to a greater extent from their own poorly calibrated predictions. Counterintuitive to the traditional public perception of the market as a fickle, risky proposition where vulnerable traders are most surely to be burned, we suggest that market-based mechanisms protect individuals with brain-based decision-making vulnerabilities. Copyright © 2011 Elsevier Ltd. All rights reserved.
Market mechanisms protect the vulnerable brain
Ramchandran, Kanchna; Nayakankuppam, Dhananjay; Berg, Joyce; Tranel, Daniel
2011-01-01
Markets are mechanisms of social exchange, intended to facilitate trading. However, the question remains as to whether markets would help or hurt individuals with decision-making deficits, as is frequently encountered in the case of cognitive aging. Essential for predicting future gains and losses in monetary and social domains, the striatal nuclei in the brain undergo structural, neurochemical, and functional decline with age. We correlated the efficacy of market mechanisms with dorsal striatal decline in an aging population, by using market-based trading in the context of the 2008 U.S. Presidential Elections (primary cycle). Impaired decision-makers displayed higher prediction error (difference between their prediction and actual outcome). Lower in vivo caudate volume was also associated with higher prediction error. Importantly, market-based trading protected older adults with lower caudate volume to a greater extent from their own poorly calibrated predictions. Counterintuitive to the traditional public perception of the market as a fickle, risky proposition where vulnerable traders are most surely to be burned, we suggest that market-based mechanisms protect individuals with brain-based decision-making vulnerabilities. PMID:21600226
Lapish, Christopher C.; Durstewitz, Daniel; Chandler, L. Judson; Seamans, Jeremy K.
2008-01-01
Successful decision making requires an ability to monitor contexts, actions, and outcomes. The anterior cingulate cortex (ACC) is thought to be critical for these functions, monitoring and guiding decisions especially in challenging situations involving conflict and errors. A number of different single-unit correlates have been observed in the ACC that reflect the diverse cognitive components involved. Yet how ACC neurons function as an integrated network is poorly understood. Here we show, using advanced population analysis of multiple single-unit recordings from the rat ACC during performance of an ecologically valid decision-making task, that ensembles of neurons move through different coherent and dissociable states as the cognitive requirements of the task change. This organization into distinct network patterns with respect to both firing-rate changes and correlations among units broke down during trials with numerous behavioral errors, especially at choice points of the task. These results point to an underlying functional organization into cell assemblies in the ACC that may monitor choices, outcomes, and task contexts, thus tracking the animal's progression through “task space.” PMID:18708525
NASA Technical Reports Server (NTRS)
White, W. F. (Compiler)
1978-01-01
The Terminal Configured Vehicle (TCV) program operates a Boeing 737 modified to include a second cockpit and a large amount of experimental navigation, guidance and control equipment for research on advanced avionics systems. Demonstration flights, including curved approaches and automatic landings, were tracked by a phototheodolite system. For 50 approaches during the demonstration flights, the following results were obtained: the navigation system, using TRSB guidance, delivered the aircraft onto the 3 nautical mile final approach leg with an average overshoot of 25 feet past centerline, subject to a 2-sigma dispersion of 90 feet. Lateral tracking data showed a mean error of 4.6 feet left of centerline at the category 1 decision height (200 feet) and 2.7 feet left of centerline at the category 2 decision height (100 feet). These values were subject to a sigma dispersion of about 10 feet. Finally, the glidepath tracking errors were 2.5 feet and 3.0 feet high at the category 1 and 2 decision heights, respectively, with a 2-sigma value of 6 feet.
Introduction to cognitive processes of expert pilots.
Adams, R J; Ericsson, A E
2000-10-01
This report addresses the historical problem that a very high percentage of accidents have been classified as involving "pilot error." Through extensive research since 1977, the Federal Aviation Administration determined that the predominant underlying cause of these types of accidents involved decisional problems or cognitive information processing. To attack these problems, Aeronautical Decision Making (ADM) training materials were developed and tested for ten years. Since the publication of the ADM training manuals in 1987, significant reductions in human performance error (HPE) accidents have been documented both in the U.S. and worldwide. However, shortcomings have been observed in the use of these materials for recurrency training and in their relevance to more experienced pilots. The following discussion defines the differences between expert and novice decision makers from a cognitive information processing perspective, correlates the development of expert pilot cognitive processes with training and experience, and reviews accident scenarios which exemplify those processes. This introductory material is a necessary prerequisite to understanding how to formulate expert pilot decision-making training innovations and to continuing the record of improved safety through ADM training.
Lobach, David F; Kawamoto, Kensaku; Anstrom, Kevin J; Russell, Michael L; Woods, Peter; Smith, Dwight
2007-01-01
Clinical decision support is recognized as one potential remedy for the growing crisis in healthcare quality in the United States and other industrialized nations. While decision support systems have been shown to improve care quality and reduce errors, these systems are not widely available. This lack of availability arises in part because most decision support systems are not portable or scalable. The Health Level 7 international standard development organization recently adopted a draft standard known as the Decision Support Service standard to facilitate the implementation of clinical decision support systems using software services. In this paper, we report the first implementation of a clinical decision support system using this new standard. This system provides point-of-care chronic disease management for diabetes and other conditions and is deployed throughout a large regional health system. We also report process measures and usability data concerning the system. Use of the Decision Support Service standard provides a portable and scalable approach to clinical decision support that could facilitate the more extensive use of decision support systems.
Uncertainty and the difficulty of thinking through disjunctions.
Shafir, E
1994-01-01
This paper considers the relationship between decision under uncertainty and thinking through disjunctions. Decision situations that lead to violations of Savage's sure-thing principle are examined, and a variety of simple reasoning problems that often generate confusion and error are reviewed. The common difficulty is attributed to people's reluctance to think through disjunctions. Instead of hypothetically traveling through the branches of a decision tree, it is suggested, people suspend judgement and remain at the node. This interpretation is applied to instances of decision making, information search, deductive and inductive reasoning, probabilistic judgement, games, puzzles and paradoxes. Some implications of the reluctance to think through disjunctions, as well as potential corrective procedures, are discussed.
Neural evidence for description dependent reward processing in the framing effect
Yu, Rongjun; Zhang, Ping
2014-01-01
Human decision making can be influenced by emotionally valenced contexts, known as the framing effect. We used event-related brain potentials to investigate how framing influences the encoding of reward. We found that the feedback related negativity (FRN), which indexes the “worse than expected” negative prediction error in the anterior cingulate cortex (ACC), was more negative for the negative frame than for the positive frame in the win domain. Consistent with previous findings that the FRN is not sensitive to “better than expected” positive prediction error, the FRN did not differentiate the positive and negative frame in the loss domain. Our results provide neural evidence that the description invariance principle which states that reward representation and decision making are not influenced by how options are presented is violated in the framing effect. PMID:24733998
Liability of physicians supervising nonphysician clinicians.
Paterick, Barbara B; Waterhouse, Blake E; Paterick, Timothy E; Sanbar, Sandy S
2014-01-01
Physicians confront a variety of liability issues when supervising nonphysician clinicians (NPC) including: (1) direct liability resulting from a failure to meet the state-defined standards of supervision/collaboration with NPCs; (2) vicarious liability, arising from agency law, where physicians are held accountable for NPC clinical care that does not meet the national standard of care; and (3) responsibility for medical errors when the NPC and physician are co-employees of the corporate enterprise. Physician-NPC co-employee relationships are highlighted because they are new and becoming predominant in existing healthcare models. Because of their novelty, there is a paucity of judicial decisions determining liability for NPC errors in this setting. Knowledge of the existence of these risks will allow physicians to make informed decisions on what relationships they will enter with NPCs and how these relationships will be structured and monitored.
Ethnic diversity deflates price bubbles
Levine, Sheen S.; Apfelbaum, Evan P.; Bernard, Mark; Bartelt, Valerie L.; Zajac, Edward J.; Stark, David
2014-01-01
Markets are central to modern society, so their failures can be devastating. Here, we examine a prominent failure: price bubbles. Bubbles emerge when traders err collectively in pricing, causing misfit between market prices and the true values of assets. The causes of such collective errors remain elusive. We propose that bubbles are affected by ethnic homogeneity in the market and can be thwarted by diversity. In homogenous markets, traders place undue confidence in the decisions of others. Less likely to scrutinize others’ decisions, traders are more likely to accept prices that deviate from true values. To test this, we constructed experimental markets in Southeast Asia and North America, where participants traded stocks to earn money. We randomly assigned participants to ethnically homogeneous or diverse markets. We find a marked difference: Across markets and locations, market prices fit true values 58% better in diverse markets. The effect is similar across sites, despite sizeable differences in culture and ethnic composition. Specifically, in homogenous markets, overpricing is higher as traders are more likely to accept speculative prices. Their pricing errors are more correlated than in diverse markets. In addition, when bubbles burst, homogenous markets crash more severely. The findings suggest that price bubbles arise not only from individual errors or financial conditions, but also from the social context of decision making. The evidence may inform public discussion on ethnic diversity: it may be beneficial not only for providing variety in perspectives and skills, but also because diversity facilitates friction that enhances deliberation and upends conformity. PMID:25404313
Does a better model yield a better argument? An info-gap analysis
NASA Astrophysics Data System (ADS)
Ben-Haim, Yakov
2017-04-01
Theories, models and computations underlie reasoned argumentation in many areas. The possibility of error in these arguments, though of low probability, may be highly significant when the argument is used in predicting the probability of rare high-consequence events. This implies that the choice of a theory, model or computational method for predicting rare high-consequence events must account for the probability of error in these components. However, error may result from lack of knowledge or surprises of various sorts, and predicting the probability of error is highly uncertain. We show that the putatively best, most innovative and sophisticated argument may not actually have the lowest probability of error. Innovative arguments may entail greater uncertainty than more standard but less sophisticated methods, creating an innovation dilemma in formulating the argument. We employ info-gap decision theory to characterize and support the resolution of this problem and present several examples.
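As a rough, self-contained illustration of the info-gap robustness idea discussed in this abstract, the following Python sketch compares two hypothetical models under a fractional-error uncertainty set; the uncertainty model, the failure-probability requirement, and all numbers are assumptions for illustration, not the author's formulation.

import numpy as np

# Minimal info-gap robustness sketch (illustrative assumptions, not the paper's model).
# Nominal failure-probability estimate p_nominal for a rare event, with fractional
# uncertainty: U(h) = {p : |p - p_nominal| <= h * p_nominal}.  The decision requirement
# is that the worst-case probability stay below a critical threshold p_crit.  The
# robustness h_hat is the largest horizon of uncertainty h at which the requirement
# still holds for every p in U(h).

def robustness(p_nominal, p_crit):
    """Largest h such that the worst case p = p_nominal * (1 + h) stays <= p_crit."""
    if p_nominal >= p_crit:
        return 0.0                      # requirement fails even with no uncertainty
    return p_crit / p_nominal - 1.0

# Two competing "arguments": a sophisticated model with a lower nominal estimate, and a
# standard model with a higher nominal estimate (both hypothetical).
p_crit = 1e-4
sophisticated = robustness(p_nominal=2e-5, p_crit=p_crit)
standard = robustness(p_nominal=5e-5, p_crit=p_crit)
print(f"robustness (sophisticated model): h = {sophisticated:.2f}")
print(f"robustness (standard model):      h = {standard:.2f}")
# The model with the lower nominal error probability is more robust here, but if its
# plausible fractional error grows faster with model complexity, the ranking can flip --
# the innovation dilemma described in the abstract.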
Improving the Glucose Meter Error Grid With the Taguchi Loss Function.
Krouwer, Jan S
2016-07-01
Glucose meters often have similar performance when compared by error grid analysis. This is one reason that other statistics such as mean absolute relative deviation (MARD) are used to further differentiate performance. The problem with MARD is that too much information is lost. But additional information is available within the A zone of an error grid by using the Taguchi loss function. Applying the Taguchi loss function gives each glucose meter difference from reference a value ranging from 0 (no error) to 1 (error reaches the A zone limit). Values are averaged over all data which provides an indication of risk of an incorrect medical decision. This allows one to differentiate glucose meter performance for the common case where meters have a high percentage of values in the A zone and no values beyond the B zone. Examples are provided using simulated data. © 2015 Diabetes Technology Society.
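A minimal Python sketch of the averaging idea described above follows; the quadratic loss form, the simplified A-zone limits, and the simulated meter data are assumptions for illustration rather than the paper's exact specification.

import numpy as np

# Hedged sketch of the Taguchi-loss idea: each meter-reference difference is mapped to a
# loss between 0 (no error) and 1 (difference reaches the A-zone limit), and the losses
# are averaged over all data.  The A-zone limit used here (15 mg/dL below 100 mg/dL,
# 15% at or above 100 mg/dL) and the quadratic form are illustrative assumptions.

def a_zone_limit(reference):
    return 15.0 if reference < 100.0 else 0.15 * reference

def taguchi_loss(meter, reference):
    ratio = abs(meter - reference) / a_zone_limit(reference)
    return min(ratio ** 2, 1.0)          # quadratic loss, capped at the A-zone boundary

rng = np.random.default_rng(0)
reference = rng.uniform(60, 300, size=1000)            # simulated reference values
meter_a = reference + rng.normal(0, 5, size=1000)      # tighter meter
meter_b = reference + rng.normal(0, 10, size=1000)     # noisier meter

for name, meter in [("meter A", meter_a), ("meter B", meter_b)]:
    mean_loss = np.mean([taguchi_loss(m, r) for m, r in zip(meter, reference)])
    print(f"{name}: mean Taguchi loss = {mean_loss:.3f}")
# Both meters may place nearly all points in the A zone, yet their mean losses differ,
# which is the differentiation the abstract describes.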
Measurement Error and Environmental Epidemiology: A Policy Perspective
Edwards, Jessie K.; Keil, Alexander P.
2017-01-01
Purpose of review: Measurement error threatens public health by producing bias in estimates of the population impact of environmental exposures. Quantitative methods to account for measurement bias can improve public health decision making. Recent findings: We summarize traditional and emerging methods to improve inference under a standard perspective, in which the investigator estimates an exposure response function, and a policy perspective, in which the investigator directly estimates population impact of a proposed intervention. Summary: Under a policy perspective, the analysis must be sensitive to errors in measurement of factors that modify the effect of exposure on outcome, must consider whether policies operate on the true or measured exposures, and may increasingly need to account for potentially dependent measurement error of two or more exposures affected by the same policy or intervention. Incorporating approaches to account for measurement error into such a policy perspective will increase the impact of environmental epidemiology. PMID:28138941
Dambacher, Michael; Hübner, Ronald; Schlösser, Jan
2011-01-01
The influence of monetary incentives on performance has been widely investigated among various disciplines. While the results reveal positive incentive effects only under specific conditions, the exact nature, and the contribution of mediating factors are largely unexplored. The present study examined influences of payoff schemes as one of these factors. In particular, we manipulated penalties for errors and slow responses in a speeded categorization task. The data show improved performance for monetary over symbolic incentives when (a) penalties are higher for slow responses than for errors, and (b) neither slow responses nor errors are punished. Conversely, payoff schemes with stronger punishment for errors than for slow responses resulted in worse performance under monetary incentives. The findings suggest that an emphasis of speed is favorable for positive influences of monetary incentives, whereas an emphasis of accuracy under time pressure has the opposite effect. PMID:21980316
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc
1998-01-01
In a coded communication system with equiprobable signaling, MLD minimizes the word error probability and delivers the most likely codeword associated with the corresponding received sequence. This decoding has two drawbacks. First, minimization of the word error probability is not equivalent to minimization of the bit error probability. Therefore, MLD becomes suboptimum with respect to the bit error probability. Second, MLD delivers a hard-decision estimate of the received sequence, so that information is lost between the input and output of the ML decoder. This information is important in coded schemes where the decoded sequence is further processed, such as concatenated coding schemes, multi-stage and iterative decoding schemes. In this chapter, we first present a decoding algorithm which both minimizes bit error probability, and provides the corresponding soft information at the output of the decoder. This algorithm is referred to as the MAP (maximum a posteriori probability) decoding algorithm.
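To make the distinction between word-level MLD and bit-level MAP decoding concrete, here is a hedged Python sketch that computes bit-wise posterior probabilities by brute-force marginalization over the codewords of a toy code; the generator matrix, BPSK mapping, and noise model are illustrative assumptions, and a practical MAP decoder would use a trellis (BCJR) rather than enumeration.

import itertools
import numpy as np

# Bit-wise MAP (APP) decoding by exhaustive marginalization over codewords of a toy
# (5,3) linear code.  The decoder output is a posterior probability (soft information)
# for every code bit, rather than a single most-likely codeword.

G = np.array([[1, 0, 0, 1, 1],      # assumed toy generator matrix, for illustration only
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1]])

codewords = np.array([(np.array(msg) @ G) % 2
                      for msg in itertools.product([0, 1], repeat=3)])

def bitwise_map(received, sigma):
    """Posterior P(bit = 1 | received) for each code bit, assuming BPSK over AWGN."""
    symbols = 1 - 2 * codewords                      # bit 0 -> +1, bit 1 -> -1
    log_lik = -np.sum((received - symbols) ** 2, axis=1) / (2 * sigma ** 2)
    post = np.exp(log_lik - log_lik.max())           # equiprobable codewords
    post /= post.sum()
    return codewords.T @ post                        # marginal P(bit = 1) per position

sigma = 0.8
tx = 1 - 2 * codewords[5]                            # transmit one codeword
rx = tx + np.random.default_rng(1).normal(0, sigma, size=tx.shape)
p1 = bitwise_map(rx, sigma)
print("P(bit = 1 | r):", np.round(p1, 3))
print("hard decisions :", (p1 > 0.5).astype(int), "  transmitted:", codewords[5])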
Sensitivity of geographic information system outputs to errors in remotely sensed data
NASA Technical Reports Server (NTRS)
Ramapriyan, H. K.; Boyd, R. K.; Gunther, F. J.; Lu, Y. C.
1981-01-01
The sensitivity of the outputs of a geographic information system (GIS) to errors in inputs derived from remotely sensed data (RSD) is investigated using a suitability model with per-cell decisions and a gridded geographic data base whose cells are larger than the RSD pixels. The process of preparing RSD as input to a GIS is analyzed, and the errors associated with classification and registration are examined. In the case of the model considered, it is found that the errors caused during classification and registration are partially compensated by the aggregation of pixels. The compensation is quantified by means of an analytical model, a Monte Carlo simulation, and experiments with Landsat data. The results show that error reductions of the order of 50% occur because of aggregation when 25 pixels of RSD are used per cell in the geographic data base.
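The aggregation effect described in this abstract can be illustrated with a small Monte Carlo sketch in Python; the per-pixel error rate, the 25-pixel cells, and the majority-style suitability rule are assumptions chosen for illustration, not the study's model.

import numpy as np

# Each geographic cell aggregates 25 remotely sensed pixels; a per-cell decision is
# based on the fraction of "suitable" pixels, so independent per-pixel classification
# errors partly cancel.

rng = np.random.default_rng(42)
n_cells = 100_000
pixels_per_cell = 25
true_fraction = 0.6          # assumed true proportion of suitable pixels in every cell
pixel_error = 0.15           # assumed probability a pixel is misclassified

true_pixels = rng.random((n_cells, pixels_per_cell)) < true_fraction
flips = rng.random((n_cells, pixels_per_cell)) < pixel_error
observed_pixels = np.logical_xor(true_pixels, flips)

# Per-cell suitability decision: a cell is "suitable" if more than half its pixels are.
true_decision = true_pixels.mean(axis=1) > 0.5
observed_decision = observed_pixels.mean(axis=1) > 0.5

cell_error = np.mean(true_decision != observed_decision)
print(f"per-pixel error rate: {pixel_error:.2f}")
print(f"per-cell decision error rate after aggregating {pixels_per_cell} pixels: "
      f"{cell_error:.3f}")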
Trial Maneuver Generation and Selection in the Paladin Tactical Decision Generation System
NASA Technical Reports Server (NTRS)
Chappell, Alan R.; McManus, John W.; Goodrich, Kenneth H.
1992-01-01
To date, increased levels of maneuverability and controllability in aircraft have been postulated as tactically advantageous, but little research has studied maneuvers or tactics that make use of these capabilities. In order to help fill this void, a real time tactical decision generation system for air combat engagements, Paladin, has been developed. Paladin models an air combat engagement as a series of discrete decisions. A detailed description of Paladin's decision making process is presented. This includes the sources of data used, methods of generating reasonable maneuvers for the Paladin aircraft, and selection criteria for choosing the "best" maneuver. Simulation results are presented that show Paladin to be relatively insensitive to errors introduced into the decision process by estimation of future positional and geometric data.
Self-evaluation of decision-making: A general Bayesian framework for metacognitive computation.
Fleming, Stephen M; Daw, Nathaniel D
2017-01-01
People are often aware of their mistakes, and report levels of confidence in their choices that correlate with objective performance. These metacognitive assessments of decision quality are important for the guidance of behavior, particularly when external feedback is absent or sporadic. However, a computational framework that accounts for both confidence and error detection is lacking. In addition, accounts of dissociations between performance and metacognition have often relied on ad hoc assumptions, precluding a unified account of intact and impaired self-evaluation. Here we present a general Bayesian framework in which self-evaluation is cast as a "second-order" inference on a coupled but distinct decision system, computationally equivalent to inferring the performance of another actor. Second-order computation may ensue whenever there is a separation between internal states supporting decisions and confidence estimates over space and/or time. We contrast second-order computation against simpler first-order models in which the same internal state supports both decisions and confidence estimates. Through simulations we show that second-order computation provides a unified account of different types of self-evaluation often considered in separate literatures, such as confidence and error detection, and generates novel predictions about the contribution of one's own actions to metacognitive judgments. In addition, the model provides insight into why subjects' metacognition may sometimes be better or worse than task performance. We suggest that second-order computation may underpin self-evaluative judgments across a range of domains. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
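The contrast between first-order and second-order confidence can be illustrated with a simple simulation; the Gaussian evidence model, the coupling parameter, and the crude sensitivity measure below are assumptions for illustration, not the authors' model.

import numpy as np

# First-order scheme: the same internal sample drives both choice and confidence.
# Second-order scheme: confidence is computed from a separate, partially correlated
# sample -- treating the deciding system like another actor whose performance is inferred.

rng = np.random.default_rng(0)
n_trials = 50_000
signal = rng.choice([-1.0, 1.0], size=n_trials)         # true stimulus category

x_decision = signal + rng.normal(0, 1.0, n_trials)      # sample used for the choice
rho = 0.6                                                # assumed coupling of the two samples
x_confidence = rho * x_decision + np.sqrt(1 - rho**2) * (signal + rng.normal(0, 1.0, n_trials))

choice = np.sign(x_decision)
correct = choice == signal

conf_first = np.abs(x_decision)                          # first-order: same evidence
conf_second = x_confidence * choice                      # second-order: separate evidence,
                                                         # read out relative to own choice
for name, conf in [("first-order", conf_first), ("second-order", conf_second)]:
    # Crude metacognitive sensitivity: mean confidence on correct minus error trials.
    delta = conf[correct].mean() - conf[~correct].mean()
    print(f"{name}: accuracy = {correct.mean():.3f}, confidence gap = {delta:.3f}")
# Accuracy is identical in both schemes, but the second-order read-out can go negative
# on error trials (error detection), so its separation of correct from incorrect choices
# differs from the first-order read-out -- a dissociation between performance and
# metacognition.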
Azadeh, A; Mokhtari, Z; Sharahi, Z Jiryaei; Zarrin, M
2015-12-01
Decision-making failure is a predominant human error in emergency situations. To demonstrate the subject model, operators of an oil refinery were asked to answer a health, safety and environment (HSE) decision styles (DS) questionnaire. In order to achieve this purpose, qualitative indicators in the HSE and ergonomics domain were collected. Decision styles related to the questions were selected based on the Driver taxonomy of human decision making. Teamwork efficiency was assessed based on different decision style combinations, and the efficiency was ranked based on HSE performance. Results revealed that the efficient decision styles identified by the data envelopment analysis (DEA) optimization model are consistent with the plant's dominant styles. Therefore, improvement in system performance could be achieved by using the best operator for critical posts or in team arrangements. This is the first study that identifies the best decision styles with respect to HSE and ergonomics factors. Copyright © 2015 Elsevier Ltd. All rights reserved.
Detection of digital FSK using a phase-locked loop
NASA Technical Reports Server (NTRS)
Lindsey, W. C.; Simon, M. K.
1975-01-01
A theory is presented for the design of a digital FSK receiver which employs a phase-locked loop to set up the desired matched filter as the arriving signal frequency switches. The developed mathematical model makes it possible to establish the error probability performance of systems which employ a class of digital FM modulations. The noise mechanism which accounts for decision errors is modeled on the basis of the Meyr distribution and renewal Markov process theory.
Microscopic saw mark analysis: an empirical approach.
Love, Jennifer C; Derrick, Sharon M; Wiersema, Jason M; Peters, Charles
2015-01-01
Microscopic saw mark analysis is a well published and generally accepted qualitative analytical method. However, little research has focused on identifying and mitigating potential sources of error associated with the method. The presented study proposes the use of classification trees and random forest classifiers as an optimal, statistically sound approach to mitigating potential sources of variability and outcome error in microscopic saw mark analysis. The statistical model was applied to 58 experimental saw marks created with four types of saws. The saw marks were made in fresh human femurs obtained through anatomical gift and were analyzed using a Keyence digital microscope. The statistical approach weighed the variables based on discriminatory value and produced decision trees with an associated outcome error rate of 8.62-17.82%. © 2014 American Academy of Forensic Sciences.
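A hedged sketch of the general analytic approach named in the abstract (random forest classification with an out-of-bag error estimate) is shown below in Python with scikit-learn; the feature names, saw classes, and synthetic data are invented for illustration and do not reproduce the study's measurements.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Random forest over quantitative saw-mark variables, reporting the out-of-bag error
# rate as the method's outcome error.  All data below are simulated.

rng = np.random.default_rng(7)
n_per_class = 15
saw_types = ["crosscut", "rip", "hacksaw", "reciprocating"]   # hypothetical class labels

# Simulate a few quantitative mark features (e.g. kerf width, striation spacing) whose
# means differ by saw type.
X, y = [], []
for i, saw in enumerate(saw_types):
    means = np.array([1.0 + 0.4 * i, 0.5 + 0.2 * i, 2.0 - 0.3 * i])
    X.append(rng.normal(means, 0.25, size=(n_per_class, 3)))
    y += [saw] * n_per_class
X = np.vstack(X)

forest = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
forest.fit(X, y)
print(f"out-of-bag accuracy: {forest.oob_score_:.3f} "
      f"(outcome error rate: {1 - forest.oob_score_:.3f})")
print("feature importances:", np.round(forest.feature_importances_, 3))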
NASA Astrophysics Data System (ADS)
Wang, Liping; Wang, Boquan; Zhang, Pu; Liu, Minghao; Li, Chuangang
2017-06-01
The study of reservoir deterministic optimal operation can improve the utilization rate of water resource and help the hydropower stations develop more reasonable power generation schedules. However, imprecise forecasting inflow may lead to output error and hinder implementation of power generation schedules. In this paper, output error generated by the uncertainty of the forecasting inflow was regarded as a variable to develop a short-term reservoir optimal operation model for reducing operation risk. To accomplish this, the concept of Value at Risk (VaR) was first applied to present the maximum possible loss of power generation schedules, and then an extreme value theory-genetic algorithm (EVT-GA) was proposed to solve the model. The cascade reservoirs of Yalong River Basin in China were selected as a case study to verify the model, according to the results, different assurance rates of schedules can be derived by the model which can present more flexible options for decision makers, and the highest assurance rate can reach 99%, which is much higher than that without considering output error, 48%. In addition, the model can greatly improve the power generation compared with the original reservoir operation scheme under the same confidence level and risk attitude. Therefore, the model proposed in this paper can significantly improve the effectiveness of power generation schedules and provide a more scientific reference for decision makers.
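As a rough illustration of reporting Value at Risk for a generation schedule, the following Python sketch computes an empirical VaR of the output shortfall caused by simulated inflow-forecast error; the error distribution and all numbers are assumptions, and the paper's EVT-GA method is not reproduced here.

import numpy as np

# Treat the output shortfall caused by inflow-forecast error as a random variable and
# report the loss level not exceeded at a chosen confidence (assurance) level.

rng = np.random.default_rng(3)
scheduled_output = 1000.0                              # MW, assumed schedule
forecast_error = rng.normal(0, 0.08, size=20_000)      # assumed relative forecast error
realized_output = scheduled_output * (1 + forecast_error)
shortfall = np.maximum(scheduled_output - realized_output, 0.0)

for confidence in (0.90, 0.95, 0.99):
    var = np.quantile(shortfall, confidence)
    print(f"VaR at {confidence:.0%} confidence: {var:.1f} MW shortfall")
# A schedule can then be chosen so that its VaR stays within an acceptable limit,
# trading some expected generation for a higher assurance rate.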
Jackson, Simon A.; Kleitman, Sabina; Howie, Pauline; Stankov, Lazar
2016-01-01
In this paper, we investigate whether individual differences in performance on heuristic and biases tasks can be explained by cognitive abilities, monitoring confidence, and control thresholds. Current theories explain individual differences in these tasks by the ability to detect errors and override automatic but biased judgments, and deliberative cognitive abilities that help to construct the correct response. Here we retain cognitive abilities but disentangle error detection, proposing that lower monitoring confidence and higher control thresholds promote error checking. Participants (N = 250) completed tasks assessing their fluid reasoning abilities, stable monitoring confidence levels, and the control threshold they impose on their decisions. They also completed seven typical heuristic and biases tasks such as the cognitive reflection test and Resistance to Framing. Using structural equation modeling, we found that individuals with higher reasoning abilities, lower monitoring confidence, and higher control threshold performed significantly and, at times, substantially better on the heuristic and biases tasks. Individuals with higher control thresholds also showed lower preferences for risky alternatives in a gambling task. Furthermore, residual correlations among the heuristic and biases tasks were reduced to null, indicating that cognitive abilities, monitoring confidence, and control thresholds accounted for their shared variance. Implications include the proposal that the capacity to detect errors does not differ between individuals. Rather, individuals might adopt varied strategies that promote error checking to different degrees, regardless of whether they have made a mistake or not. The results support growing evidence that decision-making involves cognitive abilities that construct actions and monitoring and control processes that manage their initiation. PMID:27790170
Vassena, Eliana; Deraeve, James; Alexander, William H
2017-10-01
Human behavior is strongly driven by the pursuit of rewards. In daily life, however, benefits mostly come at a cost, often requiring that effort be exerted to obtain potential benefits. Medial PFC (MPFC) and dorsolateral PFC (DLPFC) are frequently implicated in the expectation of effortful control, showing increased activity as a function of predicted task difficulty. Such activity partially overlaps with expectation of reward and has been observed both during decision-making and during task preparation. Recently, novel computational frameworks have been developed to explain activity in these regions during cognitive control, based on the principle of prediction and prediction error (predicted response-outcome [PRO] model [Alexander, W. H., & Brown, J. W. Medial prefrontal cortex as an action-outcome predictor. Nature Neuroscience, 14, 1338-1344, 2011], hierarchical error representation [HER] model [Alexander, W. H., & Brown, J. W. Hierarchical error representation: A computational model of anterior cingulate and dorsolateral prefrontal cortex. Neural Computation, 27, 2354-2410, 2015]). Despite the broad explanatory power of these models, it is not clear whether they can also accommodate effects related to the expectation of effort observed in MPFC and DLPFC. Here, we propose a translation of these computational frameworks to the domain of effort-based behavior. First, we discuss how the PRO model, based on prediction error, can explain effort-related activity in MPFC, by reframing effort-based behavior in a predictive context. We propose that MPFC activity reflects monitoring of motivationally relevant variables (such as effort and reward), by coding expectations and discrepancies from such expectations. Moreover, we derive behavioral and neural model-based predictions for healthy controls and clinical populations with impairments of motivation. Second, we illustrate the possible translation to effort-based behavior of the HER model, an extended version of PRO model based on hierarchical error prediction, developed to explain MPFC-DLPFC interactions. We derive behavioral predictions that describe how effort and reward information is coded in PFC and how changing the configuration of such environmental information might affect decision-making and task performance involving motivation.
Should learners reason one step at a time? A randomised trial of two diagnostic scheme designs.
Blissett, Sarah; Morrison, Deric; McCarty, David; Sibbald, Matthew
2017-04-01
Making a diagnosis can be difficult for learners as they must integrate multiple clinical variables. Diagnostic schemes can help learners with this complex task. A diagnostic scheme is an algorithm that organises possible diagnoses by assigning signs or symptoms (e.g. systolic murmur) to groups of similar diagnoses (e.g. aortic stenosis and aortic sclerosis) and provides distinguishing features to help discriminate between similar diagnoses (e.g. carotid pulse). The current literature does not identify whether scheme layouts should guide learners to reason one step at a time in a terminally branching scheme or weigh multiple variables simultaneously in a hybrid scheme. We compared diagnostic accuracy, perceptual errors and cognitive load using two scheme layouts for cardiac auscultation. Focused on the task of identifying murmurs on Harvey, a cardiopulmonary simulator, 86 internal medicine residents used two scheme layouts. The terminally branching scheme organised the information into single variable decisions. The hybrid scheme combined single variable decisions with a chart integrating multiple distinguishing features. Using a crossover design, participants completed one set of murmurs (diastolic or systolic) with either the terminally branching or the hybrid scheme. The second set of murmurs was completed with the other scheme. A repeated-measures MANOVA was performed to compare diagnostic accuracy, perceptual errors and cognitive load between the scheme layouts. There was a main effect of the scheme layout (Wilks' λ = 0.841, F(3,80) = 5.1, p = 0.003). Use of a terminally branching scheme was associated with increased diagnostic accuracy (65 versus 53%, p = 0.02), fewer perceptual errors (0.61 versus 0.98 errors, p = 0.001) and lower cognitive load (3.1 versus 3.5/7, p = 0.023). The terminally branching scheme was associated with improved diagnostic accuracy, fewer perceptual errors and lower cognitive load, suggesting that terminally branching schemes are effective for improving diagnostic accuracy. These findings can inform the design of schemes and other clinical decision aids. © 2017 John Wiley & Sons Ltd and The Association for the Study of Medical Education.
Parker, Andrew M; Weller, Joshua A
2015-01-01
Decision-making competence reflects individual differences in the susceptibility to committing decision-making errors, measured using tasks common from behavioral decision research (e.g., framing effects, under/overconfidence, following decision rules). Prior research demonstrates that those with higher decision-making competence report lower incidence of health-risking and antisocial behaviors, but there has been less focus on intermediate processes that may impact real-world decisions, and, in particular, those implicated by normative models. Here we test the associations between measures of youth decision-making competence (Y-DMC) and one such process, the degree to which individuals make choices consistent with maximizing expected value (EV). Using a task involving hypothetical gambles, we find that greater EV sensitivity is associated with greater Y-DMC. Higher Y-DMC scores are associated with (a) choosing risky options when EV favors those options and (b) avoiding risky options when EV favors a certain option. This relationship is stronger for gambles that involved potential losses. The results suggest that Y-DMC captures decision processes consistent with standard normative evaluations of risky decisions.
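The expected-value sensitivity measure described above can be illustrated with a few lines of Python; the example gambles and the scoring rule are assumptions for illustration, not items from the Y-DMC instrument.

# For each hypothetical gamble, compare the risky option's expected value (EV) with the
# certain option and score a choice as EV-consistent when it favors the higher-EV
# alternative.

def expected_value(outcomes_and_probs):
    return sum(x * p for x, p in outcomes_and_probs)

# Each item: (risky gamble, certain amount, participant's choice: "risky" or "certain").
items = [
    ([(100, 0.5), (0, 0.5)], 40, "risky"),       # risky EV 50 > certain 40
    ([(100, 0.5), (0, 0.5)], 60, "risky"),       # risky EV 50 < certain 60
    ([(-80, 0.25), (0, 0.75)], -30, "certain"),  # loss domain: risky EV -20 > certain -30
]

consistent = 0
for gamble, certain, choice in items:
    better = "risky" if expected_value(gamble) > certain else "certain"
    consistent += (choice == better)
print(f"EV-consistent choices: {consistent}/{len(items)}")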
Nurses' role in medication safety.
Choo, Janet; Hutchinson, Alison; Bucknall, Tracey
2010-10-01
To explore the nurse's role in the process of medication management and identify the challenges associated with safe medication management in contemporary clinical practice. Medication errors have been a long-standing factor affecting consumer safety. The nursing profession has been identified as essential to the promotion of patient safety. A review of the literature on medication errors and the use of electronic prescribing in relation to medication errors. Medication management requires a multidisciplinary approach and interdisciplinary communication is essential to reduce medication errors. Information technologies can help to reduce some medication errors through eradication of transcription and dosing errors. Nurses must play a major role in the design of computerized medication systems to ensure a smooth transition to such a system. The nurses' roles in medication management cannot be over-emphasized. This is particularly true when designing a computerized medication system. The adoption of safety measures during decision making that parallel aviation industry safety procedures can provide some strategies to prevent medication error. Innovations in information technology offer potential mechanisms to avert adverse events in medication management for nurses. © 2010 The Authors. Journal compilation © 2010 Blackwell Publishing Ltd.
[Medical errors: inevitable but preventable].
Giard, R W
2001-10-27
Medical errors are increasingly reported in the lay press. Studies have shown dramatic error rates of 10 percent or even higher. From a methodological point of view, studying the frequency and causes of medical errors is far from simple. Clinical decisions on diagnostic or therapeutic interventions are always taken within a clinical context. Reviewing outcomes of interventions without taking into account both the intentions and the arguments for a particular action will limit the conclusions from a study on the rate and preventability of errors. The interpretation of the preventability of medical errors is fraught with difficulties and probably highly subjective. Blaming the doctor personally does not do justice to the actual situation and especially the organisational framework. Attention for and improvement of the organisational aspects of error are far more important than litigating the person. To err is and will remain human, and if we want to reduce the incidence of faults we must be able to learn from our mistakes. That requires an open attitude towards medical mistakes, a continuous effort in their detection, a sound analysis and, where feasible, the institution of preventive measures.
Nonlinear filter based decision feedback equalizer for optical communication systems.
Han, Xiaoqi; Cheng, Chi-Hao
2014-04-07
Nonlinear impairments in optical communication systems have become a major concern of optical engineers. In this paper, we demonstrate that utilizing a nonlinear filter based Decision Feedback Equalizer (DFE) with error detection capability can deliver better performance than the conventional linear filter based DFE. The proposed algorithms are tested in simulation using a coherent 100 Gb/sec 16-QAM optical communication system in a legacy optical network setting.
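For readers unfamiliar with the decision-feedback structure, the following Python sketch shows a generic DFE on a toy 2-PAM channel; the channel taps, noise level, and known-channel feedback coefficients are illustrative assumptions and do not reproduce the paper's nonlinear-filter design or its 16-QAM optical setting.

import numpy as np

# Generic decision-feedback equalizer: feedback taps subtract the intersymbol
# interference (ISI) caused by previously decided symbols before slicing.

rng = np.random.default_rng(5)
n = 5000
symbols = rng.choice([-1.0, 1.0], size=n)
channel = np.array([1.0, 0.5, 0.2])                    # assumed post-cursor ISI channel
received = np.convolve(symbols, channel)[:n] + rng.normal(0, 0.3, n)

fb_taps = channel[1:]                                  # ideal feedback taps (known channel)
decisions = np.zeros(n)
for k in range(n):
    isi = sum(fb_taps[i] * decisions[k - 1 - i]
              for i in range(len(fb_taps)) if k - 1 - i >= 0)
    decisions[k] = 1.0 if (received[k] - isi) >= 0 else -1.0

ser_dfe = np.mean(decisions != symbols)
ser_slicer = np.mean(np.sign(received) != symbols)     # naive slicer, no equalization
print(f"symbol error rate, plain slicer: {ser_slicer:.4f}")
print(f"symbol error rate, DFE:          {ser_dfe:.4f}")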
Seshia, Shashi S; Bryan Young, G; Makhinson, Michael; Smith, Preston A; Stobart, Kent; Croskerry, Pat
2018-02-01
Although patient safety has improved steadily, harm remains a substantial global challenge. Additionally, safety needs to be ensured not only in hospitals but also across the continuum of care. Better understanding of the complex cognitive factors influencing health care-related decisions and organizational cultures could lead to more rational approaches, and thereby to further improvement. A model integrating the concepts underlying Reason's Swiss cheese theory and the cognitive-affective biases plus cascade could advance the understanding of cognitive-affective processes that underlie decisions and organizational cultures across the continuum of care. Thematic analysis was used, with qualitative information from several sources supporting the argumentation. Complex covert cognitive phenomena underlie decisions influencing health care. In the integrated model, the Swiss cheese slices represent dynamic cognitive-affective (mental) gates: Reason's successive layers of defence. Like firewalls and antivirus programs, cognitive-affective gates normally allow the passage of rational decisions but block or counter unsound ones. Gates can be breached (i.e., holes created) at one or more levels of organizations, teams, and individuals, by (1) any element of cognitive-affective biases plus (conflicts of interest and cognitive biases being the best studied) and (2) other potential error-provoking factors. Conversely, flawed decisions can be blocked and consequences minimized; for example, by addressing cognitive biases plus and error-provoking factors, and being constantly mindful. Informed shared decision making is a neglected but critical layer of defence (cognitive-affective gate). The integrated model can be custom tailored to specific situations, and the underlying principles applied to all methods for improving safety. The model may also provide a framework for developing and evaluating strategies to optimize organizational cultures and decisions. The concept is abstract, the model is virtual, and the best supportive evidence is qualitative and indirect. The proposed model may help enhance rational decision making across the continuum of care, thereby improving patient safety globally. © 2017 The Authors. Journal of Evaluation in Clinical Practice published by John Wiley & Sons, Ltd.
Code of Federal Regulations, 2013 CFR
2013-07-01
... copayments. If the debt represents charges for outpatient medical care, inpatient hospital care, medication... error shown by the evidence in the file at the time of the prior decision as provided in § 1.969 of this...
Code of Federal Regulations, 2014 CFR
2014-07-01
... copayments. If the debt represents charges for outpatient medical care, inpatient hospital care, medication... error shown by the evidence in the file at the time of the prior decision as provided in § 1.969 of this...
Code of Federal Regulations, 2012 CFR
2012-07-01
... copayments. If the debt represents charges for outpatient medical care, inpatient hospital care, medication... error shown by the evidence in the file at the time of the prior decision as provided in § 1.969 of this...
Error assessment for emerging traffic data collection devices.
DOT National Transportation Integrated Search
2014-06-01
Because access to travel time information can significantly influence the decision making of both agencies and travelers, accurate and reliable travel time information is increasingly needed. One important step in providing that information is to...
Mellers, B A; Schwartz, A; Cooke, A D
1998-01-01
For many decades, research in judgment and decision making has examined behavioral violations of rational choice theory. In that framework, rationality is expressed as a single correct decision shared by experimenters and subjects that satisfies internal coherence within a set of preferences and beliefs. Outside of psychology, social scientists are now debating the need to modify rational choice theory with behavioral assumptions. Within psychology, researchers are debating assumptions about errors for many different definitions of rationality. Alternative frameworks are being proposed. These frameworks view decisions as more reasonable and adaptive than previously thought. One example is "rule following": when a rule or norm is applied to a situation, it often minimizes effort and provides satisfying solutions that are "good enough," though not necessarily the best. When rules are ambiguous, people look for reasons to guide their decisions. They may also let their emotions take charge. This chapter presents recent research on judgment and decision making from traditional and alternative frameworks.
Working memory capacity as controlled attention in tactical decision making.
Furley, Philip A; Memmert, Daniel
2012-06-01
The controlled attention theory of working memory capacity (WMC, Engle 2002) suggests that WMC represents a domain free limitation in the ability to control attention and is predictive of an individual's capability of staying focused, avoiding distraction and impulsive errors. In the present paper we test the predictive power of WMC in computer-based sport decision-making tasks. Experiment 1 demonstrated that high-WMC athletes were better able at focusing their attention on tactical decision making while blocking out irrelevant auditory distraction. Experiment 2 showed that high-WMC athletes were more successful at adapting their tactical decision making according to the situation instead of relying on prepotent inappropriate decisions. The present results provide additional but also unique support for the controlled attention theory of WMC by demonstrating that WMC is predictive of controlling attention in complex settings among different modalities and highlight the importance of working memory in tactical decision making.
Human Decision Making Based on Variations in Internal Noise: An EEG Study
Amitay, Sygal; Guiraud, Jeanne; Sohoglu, Ediz; Zobay, Oliver; Edmonds, Barrie A.; Zhang, Yu-Xuan; Moore, David R.
2013-01-01
Perceptual decision making is prone to errors, especially near threshold. Physiological, behavioural and modeling studies suggest this is due to the intrinsic or ‘internal’ noise in neural systems, which derives from a mixture of bottom-up and top-down sources. We show here that internal noise can form the basis of perceptual decision making when the external signal lacks the required information for the decision. We recorded electroencephalographic (EEG) activity in listeners attempting to discriminate between identical tones. Since the acoustic signal was constant, bottom-up and top-down influences were under experimental control. We found that early cortical responses to the identical stimuli varied in global field power and topography according to the perceptual decision made, and activity preceding stimulus presentation could predict both later activity and behavioural decision. Our results suggest that activity variations induced by internal noise of both sensory and cognitive origin are sufficient to drive discrimination judgments. PMID:23840904
Autonomous mechanism of internal choice estimate underlies decision inertia.
Akaishi, Rei; Umeda, Kazumasa; Nagase, Asako; Sakai, Katsuyuki
2014-01-08
Our choice is influenced by choices we made in the past, but the mechanism responsible for the choice bias remains elusive. Here we show that the history-dependent choice bias can be explained by an autonomous learning rule whereby an estimate of the likelihood of a choice to be made is updated in each trial by comparing between the actual and expected choices. We found that in perceptual decision making without performance feedback, a decision on an ambiguous stimulus is repeated on the subsequent trial more often than a decision on a salient stimulus. This inertia of decision was not accounted for by biases in motor response, sensory processing, or attention. The posterior cingulate cortex and frontal eye field represent choice prediction error and choice estimate in the learning algorithm, respectively. Interactions between the two regions during the intertrial interval are associated with decision inertia on a subsequent trial. Copyright © 2014 Elsevier Inc. All rights reserved.
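A minimal simulation of the autonomous update rule summarized above is sketched below in Python; the stimulus statistics, the softmax decision rule, and all parameter values are assumptions chosen to show qualitatively how choice repetition can emerge without any performance feedback.

import numpy as np

# An internal estimate of how likely a choice is to be made is updated every trial from
# the "choice prediction error" (actual choice minus expected choice), and the estimate
# then biases the next decision.

rng = np.random.default_rng(11)
alpha, beta = 0.3, 3.0            # assumed learning rate and inverse temperature
choice_estimate = 0.5             # internal estimate of P(choose option A)

prev_choice = None
repeat_after_ambiguous, repeat_after_salient = [], []
for trial in range(20_000):
    ambiguous = rng.random() < 0.5
    direction = rng.choice([-1.0, 1.0])                  # true stimulus category
    strength = 0.0 if ambiguous else 1.5                 # ambiguous vs salient evidence
    evidence = direction * strength + rng.normal(0, 1.0)
    # The decision combines current evidence with the history-dependent bias.
    p_a = 1.0 / (1.0 + np.exp(-beta * (evidence + (choice_estimate - 0.5))))
    choice_a = rng.random() < p_a
    choice_estimate += alpha * (float(choice_a) - choice_estimate)   # autonomous update
    if prev_choice is not None:
        (repeat_after_ambiguous if prev_ambiguous else repeat_after_salient).append(
            choice_a == prev_choice)
    prev_choice, prev_ambiguous = choice_a, ambiguous

print(f"repeat rate after ambiguous trials: {np.mean(repeat_after_ambiguous):.3f}")
print(f"repeat rate after salient trials:   {np.mean(repeat_after_salient):.3f}")
# With these assumptions, choices made on ambiguous trials tend to be repeated more
# often than choices made on salient trials, qualitatively matching the inertia pattern
# described in the abstract.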
To image analysis in computed tomography
NASA Astrophysics Data System (ADS)
Chukalina, Marina; Nikolaev, Dmitry; Ingacheva, Anastasia; Buzmakov, Alexey; Yakimchuk, Ivan; Asadchikov, Victor
2017-03-01
The presence of errors in a tomographic image may lead to misdiagnosis when computed tomography (CT) is used in medicine, or to wrong decisions about the parameters of technological processes when CT is used in industrial applications. These errors have two main sources. First, errors occur at the measurement step, e.g. incorrect calibration and estimation of the geometric parameters of the set-up. The second source is the nature of the tomographic reconstruction step, in which a mathematical model is created to calculate the projection data; the optimization and regularization methods applied, along with their numerical implementations, have their own specific errors. Nowadays, many research teams try to analyze these errors and establish the relations between error sources. In this paper, we do not analyze the nature of the final error, but present a new approach for calculating its distribution in the reconstructed volume. We hope that visualization of the error distribution will allow experts to qualify the medical report impression or expert summary they give after analyzing CT results. To illustrate the efficiency of the proposed approach we present both simulation and real data processing results.
NASA Astrophysics Data System (ADS)
Terando, A. J.; Wootten, A.; Eaton, M. J.; Runge, M. C.; Littell, J. S.; Bryan, A. M.; Carter, S. L.
2015-12-01
Two types of decisions face society with respect to anthropogenic climate change: (1) whether to enact a global greenhouse gas abatement policy, and (2) how to adapt to the local consequences of current and future climatic changes. The practice of downscaling global climate models (GCMs) is often used to address (2) because GCMs do not resolve key features that will mediate global climate change at the local scale. In response, the development of downscaling techniques and models has accelerated to aid decision makers seeking adaptation guidance. However, quantifiable estimates of the value of information are difficult to obtain, particularly in decision contexts characterized by deep uncertainty and low system-controllability. Here we demonstrate a method to quantify the additional value that decision makers could expect if research investments are directed towards developing new downscaled climate projections. As a proof of concept we focus on a real-world management problem: whether to undertake assisted migration for an endangered tropical avian species. We also take advantage of recently published multivariate methods that account for three vexing issues in climate impacts modeling: maximizing climate model quality information, accounting for model dependence in ensembles of opportunity, and deriving probabilistic projections. We expand on these global methods by including regional (Caribbean Basin) and local (Puerto Rico) domains. In the local domain, we test whether a high resolution (2km) dynamically downscaled GCM reduces the multivariate error estimate compared to the original coarse-scale GCM. Initial tests show little difference between the downscaled and original GCM multivariate error. When propagated through to a species population model, the Value of Information analysis indicates that the expected utility that would accrue to the manager (and species) if this downscaling were completed may not justify the cost compared to alternative actions.
Amemiya, S; Noji, T; Kubota, N; Nishijima, T; Kita, I
2014-04-18
Deliberation between possible options before making a decision is crucial to responding with an optimal choice. However, the neural mechanisms regulating this deliberative decision-making process are still unclear. Recent studies have proposed that the locus coeruleus-noradrenaline (LC-NA) system plays a role in attention, behavioral flexibility, and exploration, which contribute to the search for an optimal choice under uncertain situations. In the present study, we examined whether the LC-NA system relates to the deliberative process in a T-maze spatial decision-making task in rats. To quantify deliberation in rats, we recorded vicarious trial-and-error behavior (VTE), which is considered to reflect a deliberative process exploring optimal choices. In experiment 1, we manipulated the difficulty of choice by varying the amount of reward pellets between the two maze arms (0 vs. 4, 1 vs. 3, 2 vs. 2). A difficulty-dependent increase in VTE was accompanied by a reduction of choice bias toward the high reward arm and an increase in time required to select one of the two arms in the more difficult manipulation. In addition, the increase of c-Fos-positive NA neurons in the LC depended on the task difficulty and the amount of c-Fos expression in LC-NA neurons positively correlated with the occurrence of VTE. In experiment 2, we inhibited LC-NA activity by injection of clonidine, an agonist of the alpha2 autoreceptor, during a decision-making task (1 vs. 3). The clonidine injection suppressed occurrence of VTE in the early phase of the task and subsequently impaired a valuable choice later in the task. These results suggest that the LC-NA system regulates the deliberative process during decision-making. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.
Aydon, Laurene; Hauck, Yvonne; Zimmer, Margo; Murdoch, Jamee
2016-09-01
The aim of this study was to identify factors that influence nurse's decisions to question concerning aspects of medication administration within the context of a neonatal clinical care unit. Medication error in the neonatal setting can be high with this particularly vulnerable population. As the care giver responsible for medication administration, nurses are deemed accountable for most errors. However, they are recognised as the forefront of prevention. Minimal evidence is available around reasoning, decision making and questioning around medication administration. Therefore, this study focuses upon addressing the gap in knowledge around what nurses believe influences their decision to question. A critical incident design was employed where nurses were asked to describe clinical incidents around their decision to question a medication issue. Nurses were recruited from a neonatal clinical care unit and participated in an individual digitally recorded interview. One hundred and three nurses participated between December 2013-August 2014. Use of the constant comparative method revealed commonalities within transcripts. Thirty-six categories were grouped into three major themes: 'Working environment', 'Doing the right thing' and 'Knowledge about medications'. Findings highlight factors that influence nurses' decision to question issues around medication administration. Nurses feel it is their responsibility to do the right thing and speak up for their vulnerable patients to enhance patient safety. Negative dimensions within the themes will inform planning of educational strategies to improve patient safety, whereas positive dimensions must be reinforced within the multidisciplinary team. The working environment must support nurses to question and ultimately provide safe patient care. Clear and up to date policies, formal and informal education, role modelling by senior nurses, effective use of communication skills and a team approach can facilitate nurses to appropriately question aspects around medication administration. © 2016 John Wiley & Sons Ltd.
Kordel, Piotr; Kordel, Krzysztof
2014-11-01
The aim of the study was to present and analyze the verdicts of the Supreme Medical Court concerning professional misconduct among obstetrics and gynecology specialists between 2002-2012. Verdicts of the Supreme Medical Court from 84 cases concerning obstetrics and gynecology specialists, passed between 2002-2012, were analyzed. The following categories were used to classify the types of professional misconduct: decisive error, error in the performance of a medical procedure, organizational error, error of professional judgment, criminal offence, and unethical behavior. The largest group among the accused professionals were doctors working in private offices and on-call doctors in urban and district hospitals. The most frequent type of professional malpractice was decisive error and the most frequent type of case were obstetric labor complications. The analysis also showed a correlation between the type of case and the sentence in the Supreme Medical Court. A respective jurisdiction approach may be observed in the Supreme Medical Court ruling against cases concerning professional misconduct which are also criminal offences (i.e., illegal abortion, working under the influence). The most frequent types of professional misconduct should determine areas for professional training of obstetrics and gynecology specialists.
Performance of Low-Density Parity-Check Coded Modulation
NASA Astrophysics Data System (ADS)
Hamkins, J.
2011-02-01
This article presents the simulated performance of a family of nine AR4JA low-density parity-check (LDPC) codes when used with each of five modulations. In each case, the decoder inputs are code-bit log-likelihood ratios computed from the received (noisy) modulation symbols using a general formula which applies to arbitrary modulations. Suboptimal soft-decision and hard-decision demodulators are also explored. Bit-interleaving and various mappings of bits to modulation symbols are considered. A number of subtle decoder algorithm details are shown to affect performance, especially in the error floor region. Among these are quantization dynamic range and step size, clipping degree-one variable nodes, "Jones clipping" of variable nodes, approximations of the min* function, and partial hard-limiting of messages from check nodes. Using these decoder optimizations, all coded modulations simulated here are free of error floors down to codeword error rates below 10^-6. The purpose of generating this performance data is to aid system engineers in determining an appropriate code and modulation to use under specific power and bandwidth constraints, and to provide information needed to design a variable/adaptive coded modulation (VCM/ACM) system using the AR4JA codes.
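A hedged Python sketch of the general bit-LLR computation mentioned in this abstract follows; the 8-PSK constellation, the bit labeling, and the AWGN model are illustrative assumptions rather than the article's coded-modulation configurations.

import numpy as np

# For an arbitrary modulation with a known bit-to-symbol mapping, the log-likelihood
# ratio (LLR) of each code bit is obtained from the noisy received symbol by summing
# symbol likelihoods over the constellation points whose label has that bit equal to 0
# or 1.

bits_per_symbol = 3
labels = [tuple(int(b) for b in format(i, "03b")) for i in range(8)]
constellation = np.exp(2j * np.pi * np.arange(8) / 8)          # unit-energy 8-PSK

def bit_llrs(received, sigma2):
    """LLR = log P(bit = 0 | r) / P(bit = 1 | r) for each of the 3 label bits."""
    liks = np.exp(-np.abs(received - constellation) ** 2 / sigma2)
    llrs = []
    for pos in range(bits_per_symbol):
        p0 = sum(l for l, lab in zip(liks, labels) if lab[pos] == 0)
        p1 = sum(l for l, lab in zip(liks, labels) if lab[pos] == 1)
        llrs.append(np.log(p0 / p1))
    return llrs

rng = np.random.default_rng(2)
tx_index = 5
sigma2 = 0.2
noise = rng.normal(0, np.sqrt(sigma2 / 2)) + 1j * rng.normal(0, np.sqrt(sigma2 / 2))
rx = constellation[tx_index] + noise
print("transmitted label:", labels[tx_index])
print("bit LLRs:", np.round(bit_llrs(rx, sigma2), 2))
# Positive LLR favors bit 0, negative favors bit 1; these soft values are the decoder
# inputs, in place of hard demodulator decisions.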
Mudge, Joseph F; Penny, Faith M; Houlahan, Jeff E
2012-12-01
Setting optimal significance levels that minimize Type I and Type II errors allows for more transparent and well-considered statistical decision making compared to the traditional α = 0.05 significance level. We use the optimal α approach to re-assess conclusions reached by three recently published tests of the pace-of-life syndrome hypothesis, which attempts to unify occurrences of different physiological, behavioral, and life history characteristics under one theory, over different scales of biological organization. While some of the conclusions reached using optimal α were consistent to those previously reported using the traditional α = 0.05 threshold, opposing conclusions were also frequently reached. The optimal α approach reduced probabilities of Type I and Type II errors, and ensured statistical significance was associated with biological relevance. Biologists should seriously consider their choice of α when conducting null hypothesis significance tests, as there are serious disadvantages with consistent reliance on the traditional but arbitrary α = 0.05 significance level. Copyright © 2012 WILEY Periodicals, Inc.
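The optimal-alpha idea can be illustrated with a short Python sketch that minimizes a weighted combination of Type I and Type II error rates; the one-sided two-sample z-test approximation, the effect size, and equal error weights are assumptions for illustration, not the authors' exact procedure.

import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

# Instead of fixing alpha = 0.05, choose the significance level that minimizes a
# (possibly weighted) average of the Type I and Type II error probabilities for the
# effect size considered biologically relevant.

def type2_error(alpha, effect_size, n_per_group):
    """Beta for a one-sided two-sample z-test at the given alpha."""
    z_crit = norm.ppf(1 - alpha)
    noncentrality = effect_size * np.sqrt(n_per_group / 2)
    return norm.cdf(z_crit - noncentrality)

def optimal_alpha(effect_size, n_per_group, w1=1.0, w2=1.0):
    def mean_error(a):
        return (w1 * a + w2 * type2_error(a, effect_size, n_per_group)) / (w1 + w2)
    res = minimize_scalar(mean_error, bounds=(1e-6, 0.5), method="bounded")
    return res.x, mean_error(res.x)

for n in (10, 30, 100):
    a_opt, avg_err = optimal_alpha(effect_size=0.5, n_per_group=n)
    print(f"n = {n:>3} per group: optimal alpha = {a_opt:.3f}, mean error rate = {avg_err:.3f}")
# With small samples the optimal alpha can sit well above 0.05; with large samples it
# falls below 0.05, which is the core point of the optimal-alpha approach.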
Decision-aided ICI mitigation with time-domain average approximation in CO-OFDM
NASA Astrophysics Data System (ADS)
Ren, Hongliang; Cai, Jiaxing; Ye, Xin; Lu, Jin; Cao, Quanjun; Guo, Shuqin; Xue, Lin-lin; Qin, Yali; Hu, Weisheng
2015-07-01
We introduce and investigate the feasibility of a novel iterative blind phase noise inter-carrier interference (ICI) mitigation scheme for coherent optical orthogonal frequency division multiplexing (CO-OFDM) systems. The ICI mitigation scheme is performed through the combination of frequency-domain symbol decision-aided estimation and time-averaged approximation of the ICI phase noise. An additional initial decision process with a suitable threshold is introduced in order to suppress decision-error symbols. Our proposed ICI mitigation scheme proves effective in removing the ICI for a simulated CO-OFDM system with 16-QAM modulation format. At the cost of slightly higher computational complexity, it outperforms the time-domain average blind ICI (Avg-BL-ICI) algorithm at relatively wide laser linewidths and high OSNR.
Artificial Intelligence in Medicine and Radiation Oncology
Weidlich, Vincent
2018-01-01
Artificial Intelligence (AI) was reviewed with a focus on its potential applicability to radiation oncology. The improvement of process efficiencies and the prevention of errors were found to be the most significant contributions of AI to radiation oncology. It was found that the prevention of errors is most effective when data transfer processes are automated and operational decisions are based on logical or learned evaluations by the system. It was concluded that AI could greatly improve the efficiency and accuracy of radiation oncology operations. PMID:29904616
Artificial Intelligence in Medicine and Radiation Oncology.
Weidlich, Vincent; Weidlich, Georg A
2018-04-13
Artificial Intelligence (AI) was reviewed with a focus on its potential applicability to radiation oncology. The improvement of process efficiencies and the prevention of errors were found to be the most significant contributions of AI to radiation oncology. It was found that the prevention of errors is most effective when data transfer processes are automated and operational decisions are based on logical or learned evaluations by the system. It was concluded that AI could greatly improve the efficiency and accuracy of radiation oncology operations.
Optical communication system performance with tracking error induced signal fading.
NASA Technical Reports Server (NTRS)
Tycz, M.; Fitzmaurice, M. W.; Premo, D. A.
1973-01-01
System performance is determined for an optical communication system using noncoherent detection in the presence of tracking-error-induced signal fading, assuming (1) binary on-off keying (OOK) modulation with both fixed and adaptive threshold receivers, and (2) binary polarization modulation (BPM). BPM is shown to maintain its inherent 2- to 3-dB advantage over OOK when adaptive thresholding is used, and to have a substantially greater advantage when the OOK system is restricted to a fixed decision threshold.
A high speed sequential decoder
NASA Technical Reports Server (NTRS)
Lum, H., Jr.
1972-01-01
The performance and theory of operation for the High Speed Hard Decision Sequential Decoder are delineated. The decoder is a forward error correction system which is capable of accepting data from binary-phase-shift-keyed and quadriphase-shift-keyed modems at input data rates up to 30 megabits per second. Test results show that the decoder is capable of maintaining a composite error rate of 0.00001 at an input E_b/N_0 of 5.6 dB. This performance has been obtained with minimum circuit complexity.
Twenty-First Annual Conference on Manual Control
NASA Technical Reports Server (NTRS)
Miller, R. A. (Compiler); Jagacinski, R. J. (Compiler)
1986-01-01
The proceedings of the conference are presented, comprising twenty-nine manuscripts and eight abstracts pertaining to workload, attention and errors, controller evaluation, movement skills, coordination and decision making, display evaluation, human operator modeling, and manual control.
Low Density Parity Check Codes Based on Finite Geometries: A Rediscovery and More
NASA Technical Reports Server (NTRS)
Kou, Yu; Lin, Shu; Fossorier, Marc
1999-01-01
Low density parity check (LDPC) codes with iterative decoding based on belief propagation achieve astonishing error performance close to the Shannon limit. No algebraic or geometric method for constructing these codes has been reported, and they are largely generated by computer search. As a result, encoding of long LDPC codes is in general very complex. This paper presents two classes of high rate LDPC codes whose constructions are based on finite Euclidean and projective geometries, respectively. These classes of codes are cyclic and have good constraint parameters and minimum distances. The cyclic structure allows the use of linear feedback shift registers for encoding. These finite geometry LDPC codes achieve very good error performance with either soft-decision iterative decoding based on belief propagation or Gallager's hard-decision bit flipping algorithm. These codes can be punctured or extended to obtain other good LDPC codes. A generalization of these codes is also presented.
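For context, Gallager's hard-decision bit-flipping decoding referenced above can be sketched in a few lines: compute the syndrome, count how many unsatisfied checks touch each bit, and flip the worst offenders until all checks pass. The toy parity-check matrix below is a (7,4) Hamming code used only for illustration, not a finite-geometry LDPC code.

```python
import numpy as np

def bit_flip_decode(H, y, max_iters=50):
    """Gallager-style hard-decision bit flipping.

    H : (m, n) binary parity-check matrix
    y : length-n hard-decision received word (0/1)
    Flips, on each iteration, the bits involved in the largest number of
    unsatisfied checks, until all checks are satisfied or iterations run out.
    """
    x = y.copy()
    for _ in range(max_iters):
        syndrome = H.dot(x) % 2
        if not syndrome.any():
            return x, True                      # valid codeword found
        counts = H.T.dot(syndrome)              # unsatisfied checks per bit
        x[counts == counts.max()] ^= 1          # flip the worst offenders
    return x, False

# Toy (7,4) Hamming code written as a parity-check matrix.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
received = np.zeros(7, dtype=int)               # all-zero codeword sent
received[2] ^= 1                                # single bit error
decoded, ok = bit_flip_decode(H, received)
print(decoded, ok)
```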
Wagner, Tyler; DeWeber, Jefferson Tyrell; Tsang, Yin-Phan; Krueger, Damon; Whittier, Joanna B.; Infante, Dana M.; Whelan, Gary
2014-01-01
Flow and water temperature are fundamental properties of stream ecosystems upon which many freshwater resource management decisions are based. U.S. Geological Survey (USGS) gages are the most important source of streamflow and water temperature data available nationwide, but the degree to which gages represent landscape attributes of the larger population of streams has not been thoroughly evaluated. We identified substantial biases for seven landscape attributes in one or more regions across the conterminous United States. Streams with small watersheds (<10 km2) and at high elevations were often underrepresented, and biases were greater for water temperature gages and in arid regions. Biases can fundamentally alter management decisions and at a minimum this potential for error must be acknowledged accurately and transparently. We highlight three strategies that seek to reduce bias or limit errors arising from bias and illustrate how one strategy, supplementing USGS data, can greatly reduce bias.
Multi-criteria decision making approaches for quality control of genome-wide association studies.
Malovini, Alberto; Rognoni, Carla; Puca, Annibale; Bellazzi, Riccardo
2009-03-01
Experimental errors in the genotyping phases of a Genome-Wide Association Study (GWAS) can lead to false positive findings and to spurious associations. An appropriate quality control phase could minimize the effects of this kind of error. Several filtering criteria can be used to perform quality control. Currently, no formal methods have been proposed for taking these criteria and the experimenter's preferences into account at the same time. In this paper we propose two strategies for setting appropriate genotyping rate thresholds for GWAS quality control. These two approaches are based on Multi-Criteria Decision Making theory. We applied our method to a real dataset composed of 734 individuals affected by Arterial Hypertension (AH) and 486 nonagenarians without history of AH. The proposed strategies appear to deal with GWAS quality control in a sound way, as they make the experimenter's choices rational and explicit, thus providing more reproducible results.
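A toy illustration of trading off criteria when choosing a genotyping call-rate threshold: score each candidate threshold with a weighted sum of sample retention and stringency and pick the best. The weights, candidate thresholds, and simulated call rates below are hypothetical and much simpler than the authors' multi-criteria formulation.

```python
import numpy as np

def score_thresholds(call_rates, thresholds, w_retention=0.5, w_stringency=0.5):
    """Score candidate call-rate thresholds with a simple weighted sum of two
    normalized criteria: fraction of samples retained and threshold stringency."""
    call_rates = np.asarray(call_rates)
    scores = {}
    for t in thresholds:
        retained = np.mean(call_rates >= t)            # criterion 1: data kept
        stringency = (t - min(thresholds)) / (max(thresholds) - min(thresholds))
        scores[t] = w_retention * retained + w_stringency * stringency
    return max(scores, key=scores.get), scores

# Hypothetical per-sample genotyping call rates.
rng = np.random.default_rng(0)
rates = np.clip(rng.normal(0.97, 0.02, size=700), 0, 1)
best, all_scores = score_thresholds(rates, thresholds=[0.90, 0.95, 0.97, 0.99])
print("selected call-rate threshold:", best)
```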
Eyewitness identification evidence and innocence risk.
Clark, Steven E; Godfrey, Ryan D
2009-02-01
It is well known that the frailties of human memory and vulnerability to suggestion lead to eyewitness identification errors. However, variations in different aspects of the eyewitnessing conditions produce different kinds of errors that are related to wrongful convictions in very different ways. We present a review of the eyewitness identification literature, organized around underlying cognitive mechanisms, memory, similarity, and decision processes, assessing the effects on both correct and mistaken identification. In addition, we calculate a conditional probability we call innocence risk, which is the probability that the suspect is innocent, given that the suspect was identified. Assessment of innocence risk is critical to the theoretical development of eyewitness identification research, as well as to legal decision making and policy evaluation. Our review shows a complex relationship between misidentification and innocence risk, sheds light on some areas of controversy, and suggests that some issues thought to be resolved are in need of additional research.
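Written out with Bayes' rule (in notation assumed here, not necessarily the authors'), the innocence risk is the posterior probability that the suspect is innocent given that the witness identified the suspect:

```latex
% I = suspect innocent, G = suspect guilty, ID = suspect identified by the witness
\[
P(I \mid \mathrm{ID})
  = \frac{P(\mathrm{ID} \mid I)\,P(I)}
         {P(\mathrm{ID} \mid I)\,P(I) + P(\mathrm{ID} \mid G)\,P(G)}
\]
```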
Case-based clinical reasoning in feline medicine: 2: Managing cognitive error.
Canfield, Paul J; Whitehead, Martin L; Johnson, Robert; O'Brien, Carolyn R; Malik, Richard
2016-03-01
This is Article 2 of a three-part series on clinical reasoning that encourages practitioners to explore and understand how they think and make case-based decisions. It is hoped that, in the process, they will learn to trust their intuition but, at the same time, put in place safeguards to diminish the impact of bias and misguided logic on their diagnostic decision-making. Article 1, published in the January 2016 issue of JFMS, discussed the relative merits and shortcomings of System 1 thinking (immediate and unconscious) and System 2 thinking (effortful and analytical). This second article examines ways of managing cognitive error, particularly the negative impact of bias, when making a diagnosis. Article 3, to appear in the May 2016 issue, explores the use of heuristics (mental short cuts) and illness scripts in diagnostic reasoning. © The Author(s) 2016.
Decision support tool for diagnosing the source of variation
NASA Astrophysics Data System (ADS)
Masood, Ibrahim; Azrul Azhad Haizan, Mohamad; Norbaya Jumali, Siti; Ghazali, Farah Najihah Mohd; Razali, Hazlin Syafinaz Md; Shahir Yahya, Mohd; Azlan, Mohd Azwir bin
2017-08-01
Identifying the source of unnatural variation (SOV) in a manufacturing process is essential for quality control. Shewhart control chart patterns (CCPs) are commonly used to monitor SOV. However, a proper interpretation of CCPs and their associated SOV requires a highly skilled industrial practitioner. Lack of knowledge in process engineering will lead to erroneous corrective action. The objective of this study is to design the operating procedures of a computerized decision support tool (DST) for process diagnosis. The DST is a tool embedded in a CCP recognition scheme. The design methodology involves analysis of the relationships between geometrical features, manufacturing processes and CCPs. The DST contains information about CCPs, their possible root-cause errors, and descriptions of SOV phenomena such as process deterioration due to tool bluntness, tool offsetting, loading error, and changes in material hardness. The DST will help industrial practitioners perform effective troubleshooting.
Shichinohe, Natsuko; Akao, Teppei; Kurkin, Sergei; Fukushima, Junko; Kaneko, Chris R S; Fukushima, Kikuro
2009-06-11
Cortical motor areas are thought to contribute "higher-order processing," but what that processing might include is unknown. Previous studies of the smooth pursuit-related discharge of supplementary eye field (SEF) neurons have not distinguished activity associated with the preparation for pursuit from discharge related to processing or memory of the target motion signals. Using a memory-based task designed to separate these components, we show that the SEF contains signals coding retinal image-slip-velocity, memory, and assessment of visual motion direction, the decision of whether to pursue, and the preparation for pursuit eye movements. Bilateral muscimol injection into SEF resulted in directional errors in smooth pursuit, errors of whether to pursue, and impairment of initial correct eye movements. These results suggest an important role for the SEF in memory and assessment of visual motion direction and the programming of appropriate pursuit eye movements.
Low-density parity-check codes for volume holographic memory systems.
Pishro-Nik, Hossein; Rahnavard, Nazanin; Ha, Jeongseok; Fekri, Faramarz; Adibi, Ali
2003-02-10
We investigate the application of low-density parity-check (LDPC) codes in volume holographic memory (VHM) systems. We show that a carefully designed irregular LDPC code has a very good performance in VHM systems. We optimize high-rate LDPC codes for the nonuniform error pattern in holographic memories to reduce the bit error rate substantially. The prior knowledge of noise distribution is used for designing as well as decoding the LDPC codes. We show that these codes have a superior performance to that of Reed-Solomon (RS) codes and regular LDPC counterparts. Our simulation shows that we can increase the maximum storage capacity of holographic memories by more than 50 percent if we use irregular LDPC codes with soft-decision decoding instead of conventionally employed RS codes with hard-decision decoding. The performance of these LDPC codes is close to the information theoretic capacity.
Benefit-risk Evaluation for Diagnostics: A Framework (BED-FRAME).
Evans, Scott R; Pennello, Gene; Pantoja-Galicia, Norberto; Jiang, Hongyu; Hujer, Andrea M; Hujer, Kristine M; Manca, Claudia; Hill, Carol; Jacobs, Michael R; Chen, Liang; Patel, Robin; Kreiswirth, Barry N; Bonomo, Robert A
2016-09-15
The medical community needs systematic and pragmatic approaches for evaluating the benefit-risk trade-offs of diagnostics that assist in medical decision making. Benefit-Risk Evaluation of Diagnostics: A Framework (BED-FRAME) is a strategy for pragmatic evaluation of diagnostics designed to supplement traditional approaches. BED-FRAME evaluates diagnostic yield and addresses 2 key issues: (1) that diagnostic yield depends on prevalence, and (2) that different diagnostic errors carry different clinical consequences. As such, evaluating and comparing diagnostics depends on prevalence and the relative importance of potential errors. BED-FRAME provides a tool for communicating the expected clinical impact of diagnostic application and the expected trade-offs of diagnostic alternatives. BED-FRAME is a useful fundamental supplement to the standard analysis of diagnostic studies that will aid in clinical decision making. © The Author 2016. Published by Oxford University Press for the Infectious Diseases Society of America. All rights reserved. For permissions, e-mail journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Valdes, Gilmer; Solberg, Timothy D.; Heskel, Marina; Ungar, Lyle; Simone, Charles B., II
2016-08-01
To develop a patient-specific ‘big data’ clinical decision tool to predict pneumonitis in stage I non-small cell lung cancer (NSCLC) patients after stereotactic body radiation therapy (SBRT). 61 features were recorded for 201 consecutive patients with stage I NSCLC treated with SBRT, in whom 8 (4.0%) developed radiation pneumonitis. Pneumonitis thresholds were found for each feature individually using decision stumps. The performance of three different algorithms (Decision Trees, Random Forests, RUSBoost) was evaluated. Learning curves were developed and the training error analyzed and compared to the testing error in order to evaluate the factors needed to obtain a cross-validated error smaller than 0.1. These included the addition of new features, increasing the complexity of the algorithm and enlarging the sample size and number of events. In the univariate analysis, the most important feature selected was the diffusion capacity of the lung for carbon monoxide (DLCO adj%). On multivariate analysis, the three most important features selected were the dose to 15 cc of the heart, dose to 4 cc of the trachea or bronchus, and race. Higher accuracy could be achieved if the RUSBoost algorithm was used with regularization. To predict radiation pneumonitis within an error smaller than 10%, we estimate that a sample size of 800 patients is required. Clinically relevant thresholds that put patients at risk of developing radiation pneumonitis were determined in a cohort of 201 stage I NSCLC patients treated with SBRT. The consistency of these thresholds can provide radiation oncologists with an estimate of their reliability and may inform treatment planning and patient counseling. The accuracy of the classification is limited by the number of patients in the study and not by the features gathered or the complexity of the algorithm.
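A decision stump is simply a depth-1 decision tree, so a per-feature threshold of the kind described can be recovered from the root split. The sketch below uses scikit-learn on synthetic data; the feature name, dose range, and event rate are hypothetical stand-ins, not values from the study.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)

# Synthetic stand-in for one dosimetric feature and a rare pneumonitis label.
heart_d15cc = rng.uniform(0, 30, size=201)                 # Gy, hypothetical
pneumonitis = (heart_d15cc + rng.normal(0, 6, 201) > 24).astype(int)

# A decision stump (depth-1 tree) yields a single split threshold per feature.
stump = DecisionTreeClassifier(max_depth=1, class_weight="balanced")
stump.fit(heart_d15cc.reshape(-1, 1), pneumonitis)
threshold = stump.tree_.threshold[0]
print(f"stump threshold = {threshold:.1f} Gy")
```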
NASA Astrophysics Data System (ADS)
Hancock, S.; Armston, J.; Tang, H.; Patterson, P. L.; Healey, S. P.; Marselis, S.; Duncanson, L.; Hofton, M. A.; Kellner, J. R.; Luthcke, S. B.; Sun, X.; Blair, J. B.; Dubayah, R.
2017-12-01
NASA's Global Ecosystem Dynamics Investigation will mount a multi-track, full-waveform lidar on the International Space Station (ISS) that is optimised for the measurement of forest canopy height and structure. GEDI will use ten laser tracks, two 10 mJ "power beams" and eight 5 mJ "coverage beams" to produce global (51.5°S to 51.5°N) maps of above ground biomass (AGB), canopy height, vegetation structure and other biophysical parameters. The mission has a requirement to generate a 1 km AGB map with 80% of pixels with ≤ 20% standard error or 20 Mg·ha-1, whichever is greater. To assess performance and compare to mission requirements, an end-to-end simulator has been developed. The simulator brings together tools to propagate the effects of measurement and sampling error on GEDI data products. The simulator allows us to evaluate the impact of instrument performance, ISS orbits, processing algorithms and losses of data that may occur due to clouds, snow, leaf-off conditions, and areas with an insufficient signal-to-noise ratio (SNR). By evaluating the consequences of operational decisions on GEDI data products, this tool provides a quantitative framework for decision-making and mission planning. Here we demonstrate the performance tool by using it to evaluate the trade-off between measurement and sampling error on the 1 km AGB data product. Results demonstrate that the use of coverage beams during the day (lowest GEDI SNR case) over very dense forests (>95% canopy cover) will result in some measurement bias. Omitting these low SNR cases increased the sampling error. Through this an SNR threshold for a given expected canopy cover can be set. The other applications of the performance tool are also discussed, such as assessing the impact of decisions made in the AGB modelling and signal processing stages on the accuracy of final data products.
ANFIS multi criteria decision making for overseas construction projects: a methodology
NASA Astrophysics Data System (ADS)
Utama, W. P.; Chan, A. P. C.; Zulherman; Zahoor, H.; Gao, R.; Jumas, D. Y.
2018-02-01
A critical issue when a company targets a foreign market is how to make better decisions about potential project selection. Since the attributes of available information are often incomplete, imprecise and ill-defined in overseas project selection, making decisions by relying on experience and intuition alone is risky. This paper aims to demonstrate a decision support method for deciding on overseas construction projects (OCPs). An Adaptive Neuro-Fuzzy Inference System (ANFIS), an amalgamation of neural networks and fuzzy theory, was used as a decision support tool for go/no-go decisions on OCPs. Root mean square error (RMSE) and the coefficient of correlation (R) were employed to identify the ANFIS configuration giving an optimal and efficient result. The optimum result was obtained from an ANFIS network with two input membership functions, the Gaussian membership function (gaussmf) and the hybrid optimization method. The result shows that ANFIS may help the decision-making process for go/no-go decisions on OCPs.
Bett, David; Allison, Elizabeth; Murdoch, Lauren H.; Kaefer, Karola; Wood, Emma R.; Dudchenko, Paul A.
2012-01-01
Vicarious trial-and-errors (VTEs) are back-and-forth movements of the head exhibited by rodents and other animals when faced with a decision. These behaviors have recently been associated with prospective sweeps of hippocampal place cell firing, and thus may reflect a rodent model of deliberative decision-making. The aim of the current study was to test whether the hippocampus is essential for VTEs in a spatial memory task and in a simple visual discrimination (VD) task. We found that lesions of the hippocampus with ibotenic acid produced a significant impairment in the accuracy of choices in a serial spatial reversal (SR) task. In terms of VTEs, whereas sham-lesioned animals engaged in more VTE behavior prior to identifying the location of the reward as opposed to repeated trials after it had been located, the lesioned animals failed to show this difference. In contrast, damage to the hippocampus had no effect on acquisition of a VD or on the VTEs seen in this task. For both lesion and sham-lesion animals, adding an additional choice to the VD increased the number of VTEs and decreased the accuracy of choices. Together, these results suggest that the hippocampus may be specifically involved in VTE behavior during spatial decision making. PMID:23115549
Executive function and decision-making in women with fibromyalgia.
Verdejo-García, Antonio; López-Torrecillas, Francisca; Calandre, Elena Pita; Delgado-Rodríguez, Antonia; Bechara, Antoine
2009-02-01
Patients with fibromyalgia (FM) typically report cognitive problems, and they state that these deficits are disturbing in everyday life. Despite these substantial subjective complaints by FM patients, very few studies have addressed objectively the effect of such aversive states on neuropsychological performance. In this study we aimed to examine possible impairment of executive function and decision-making in a sample of 36 women diagnosed with FM and 36 healthy women matched in age, education, and socio-economic status. We contrasted performance of both groups on two measures of executive functioning: the Wisconsin Card Sorting Test (WCST), which assesses cognitive flexibility skills, and the Iowa Gambling Task (IGT; original and variant versions), which assesses emotion-based decision-making. We also examined the relationship between executive function performance and pain experience, and between executive function and personality traits of novelty-seeking, harm avoidance, reward dependence, and persistence (measured by the Temperament and Character Inventory-Revised). Results showed that on the WCST, FM women showed poorer performance than healthy comparison women on the number of categories and non-perseverative errors, but not on perseverative errors. FM patients also showed an altered learning curve in the original IGT (where reward is immediate and punishment is delayed), suggesting compromised emotion-based decision-making, but not in the variant IGT (where punishment is immediate but reward is delayed), suggesting hypersensitivity to reward. Personality variables were very mildly associated with cognitive performance in FM women.
Error management for musicians: an interdisciplinary conceptual framework
Kruse-Weber, Silke; Parncutt, Richard
2014-01-01
Musicians tend to strive for flawless performance and perfection, avoiding errors at all costs. Dealing with errors while practicing or performing is often frustrating and can lead to anger and despair, which can explain musicians’ generally negative attitude toward errors and the tendency to aim for flawless learning in instrumental music education. But even the best performances are rarely error-free, and research in general pedagogy and psychology has shown that errors provide useful information for the learning process. Research in instrumental pedagogy is still neglecting error issues; the benefits of risk management (before the error) and error management (during and after the error) are still underestimated. It follows that dealing with errors is a key aspect of music practice at home, teaching, and performance in public. And yet, to be innovative, or to make their performance extraordinary, musicians need to risk errors. Currently, most music students only acquire the ability to manage errors implicitly – or not at all. A more constructive, creative, and differentiated culture of errors would balance error tolerance and risk-taking against error prevention in ways that enhance music practice and music performance. The teaching environment should lay the foundation for the development of such an approach. In this contribution, we survey recent research in aviation, medicine, economics, psychology, and interdisciplinary decision theory that has demonstrated that specific error-management training can promote metacognitive skills that lead to better adaptive transfer and better performance skills. We summarize how this research can be applied to music, and survey relevant research that is specifically tailored to the needs of musicians, including generic guidelines for risk and error management in music teaching and performance. On this basis, we develop a conceptual framework for risk management that can provide orientation for further music education and musicians at all levels. PMID:25120501
Error management for musicians: an interdisciplinary conceptual framework.
Kruse-Weber, Silke; Parncutt, Richard
2014-01-01
Musicians tend to strive for flawless performance and perfection, avoiding errors at all costs. Dealing with errors while practicing or performing is often frustrating and can lead to anger and despair, which can explain musicians' generally negative attitude toward errors and the tendency to aim for flawless learning in instrumental music education. But even the best performances are rarely error-free, and research in general pedagogy and psychology has shown that errors provide useful information for the learning process. Research in instrumental pedagogy is still neglecting error issues; the benefits of risk management (before the error) and error management (during and after the error) are still underestimated. It follows that dealing with errors is a key aspect of music practice at home, teaching, and performance in public. And yet, to be innovative, or to make their performance extraordinary, musicians need to risk errors. Currently, most music students only acquire the ability to manage errors implicitly - or not at all. A more constructive, creative, and differentiated culture of errors would balance error tolerance and risk-taking against error prevention in ways that enhance music practice and music performance. The teaching environment should lay the foundation for the development of such an approach. In this contribution, we survey recent research in aviation, medicine, economics, psychology, and interdisciplinary decision theory that has demonstrated that specific error-management training can promote metacognitive skills that lead to better adaptive transfer and better performance skills. We summarize how this research can be applied to music, and survey relevant research that is specifically tailored to the needs of musicians, including generic guidelines for risk and error management in music teaching and performance. On this basis, we develop a conceptual framework for risk management that can provide orientation for further music education and musicians at all levels.
Currie detection limits in gamma-ray spectroscopy.
De Geer, Lars-Erik
2004-01-01
Currie hypothesis testing is applied to gamma-ray spectral data, where an optimum part of the peak is used and the background is considered well known from nearby channels. With this, the risk of making Type I errors is about 100 times lower than commonly assumed. A programme, PeakMaker, produces random peaks with given characteristics on the screen, and calculations are done to facilitate a full use of Poisson statistics in spectrum analyses. Short technical note summary: The Currie decision limit concept applied to spectral data is reinterpreted, which gives better consistency between the selected error risk and the observed error rates. A PeakMaker program is described and the few-count problem is analyzed.
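For orientation, the conventional Gaussian-approximation Currie limits for a counting measurement are straightforward to compute: the critical level L_C sets the decision threshold for claiming detection, and the detection limit L_D is the smallest true net signal detected with the chosen power. The sketch below uses the textbook formulas with α = β; it does not reproduce this note's exact Poisson treatment.

```python
from scipy.stats import norm

def currie_limits(background_counts, alpha=0.05, well_known_background=True):
    """Currie critical level L_C and detection limit L_D for net counts,
    using the usual Gaussian approximation and alpha = beta."""
    k = norm.ppf(1 - alpha)
    # Variance of the net signal under the null hypothesis.
    sigma0_sq = background_counts if well_known_background else 2 * background_counts
    L_C = k * sigma0_sq ** 0.5      # decide "detected" if net counts exceed L_C
    L_D = k ** 2 + 2 * L_C          # smallest true signal detectable with power 1-beta
    return L_C, L_D

print(currie_limits(background_counts=100))   # roughly (16.4, 35.6)
```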
Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W; Müller, Klaus-Robert; Lemm, Steven
2013-01-01
Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation.
Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W.; Müller, Klaus-Robert; Lemm, Steven
2013-01-01
Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation. PMID:23844016
Information technology and medication safety: what is the benefit?
Kaushal, R; Bates, D
2002-01-01
Medication errors occur frequently and have significant clinical and financial consequences. Several types of information technologies can be used to decrease rates of medication errors. Computerized physician order entry with decision support significantly reduces serious inpatient medication error rates in adults. Other available information technologies that may prove effective for inpatients include computerized medication administration records, robots, automated pharmacy systems, bar coding, "smart" intravenous devices, and computerized discharge prescriptions and instructions. In outpatients, computerization of prescribing and patient oriented approaches such as personalized web pages and delivery of web based information may be important. Public and private mandates for information technology interventions are growing, but further development, application, evaluation, and dissemination are required. PMID:12486992
How Prediction Errors Shape Perception, Attention, and Motivation
den Ouden, Hanneke E. M.; Kok, Peter; de Lange, Floris P.
2012-01-01
Prediction errors (PE) are a central notion in theoretical models of reinforcement learning, perceptual inference, decision-making and cognition, and prediction error signals have been reported across a wide range of brain regions and experimental paradigms. Here, we will make an attempt to see the forest for the trees and consider the commonalities and differences of reported PE signals in light of recent suggestions that the computation of PE forms a fundamental mode of brain function. We discuss where different types of PE are encoded, how they are generated, and the different functional roles they fulfill. We suggest that while encoding of PE is a common computation across brain regions, the content and function of these error signals can be very different and are determined by the afferent and efferent connections within the neural circuitry in which they arise. PMID:23248610
Chen, Cong; Beckman, Robert A
2009-01-01
This manuscript discusses optimal cost-effective designs for Phase II proof of concept (PoC) trials. Unlike a confirmatory registration trial, a PoC trial is exploratory in nature, and sponsors of such trials have the liberty to choose the type I error rate and the power. The decision is largely driven by the perceived probability of having a truly active treatment per patient exposure (a surrogate measure of development cost), which is naturally captured in an efficiency score to be defined in this manuscript. Optimization of the score function leads to a type I error rate and power (and therefore sample size) for the trial that is most cost-effective. This in turn leads to cost-effective go/no-go criteria for development decisions. The idea is applied to derive optimal trial-level, program-level, and franchise-level design strategies. The study is not meant to provide any general conclusion because the settings used are largely simplified for illustrative purposes. However, through the examples provided herein, a reader should be able to gain useful insight into these design problems and apply them to the design of their own PoC trials.
Improved CLARAty Functional-Layer/Decision-Layer Interface
NASA Technical Reports Server (NTRS)
Estlin, Tara; Rabideau, Gregg; Gaines, Daniel; Johnston, Mark; Chouinard, Caroline; Nessnas, Issa; Shu, I-Hsiang
2008-01-01
Improved interface software for communication between the CLARAty Decision and Functional layers has been developed. [The Coupled Layer Architecture for Robotics Autonomy (CLARAty) was described in Coupled-Layer Robotics Architecture for Autonomy (NPO-21218), NASA Tech Briefs, Vol. 26, No. 12 (December 2002), page 48. To recapitulate: the CLARAty architecture was developed to improve the modularity of robotic software while tightening coupling between planning/execution and basic control subsystems. Whereas prior robotic software architectures typically contained three layers, CLARAty contains two layers: a decision layer (DL) and a functional layer (FL).] Types of communication supported by the present software include sending commands from DL modules to FL modules and sending data updates from FL modules to DL modules. The present software supplants prior interface software that had little error-checking capability, supported data parameters in string form only, supported commanding at only one level of the FL, and supported only limited updates of the state of the robot. The present software offers strong error checking, supports complex data structures and commanding at multiple levels of the FL, and, relative to the prior software, offers a much wider spectrum of state-update capabilities.
Closed-Loop Analysis of Soft Decisions for Serial Links
NASA Technical Reports Server (NTRS)
Lansdowne, Chatwin A.; Steele, Glen F.; Zucha, Joan P.; Schlensinger, Adam M.
2012-01-01
Modern receivers are providing soft decision symbol synchronization as radio links are challenged to push more data and more overhead through noisier channels, and software-defined radios use error-correction techniques that approach Shannon's theoretical limit of performance. The authors describe the benefit of closed-loop measurements for a receiver when paired with a counterpart transmitter and representative channel conditions. We also describe a real-time Soft Decision Analyzer (SDA) implementation for closed-loop measurements on single- or dual- (orthogonal) channel serial data communication links. The analyzer has been used to identify, quantify, and prioritize contributors to implementation loss in real time during the development of software-defined radios.
Informed use of patients' records on trusted health care services.
Sahama, Tony; Miller, Evonne
2011-01-01
Health care is an information-intensive business. Sharing information in health care processes is a smart use of data, enabling informed decision-making whilst ensuring the privacy and security of patient information. To achieve this, we propose an Information Accountability Framework (IAF) with embedded data encryption techniques that establishes transitions of the technological concept, thus enabling understanding of shared responsibility, accessibility, and efficient, cost-effective informed decisions between health care professionals and patients. The IAF results reveal possibilities for efficient, informed medical decision making and the minimisation of medical errors. Achieving this will require significant cultural changes and research synergies to ensure the sustainability, acceptability and durability of the IAF.
Reconciling uncertain costs and benefits in bayes nets for invasive species management
Burgman, M.A.; Wintle, B.A.; Thompson, C.A.; Moilanen, A.; Runge, M.C.; Ben-Haim, Y.
2010-01-01
Bayes nets are used increasingly to characterize environmental systems and formalize probabilistic reasoning to support decision making. These networks treat probabilities as exact quantities. Sensitivity analysis can be used to evaluate the importance of assumptions and parameter estimates. Here, we outline an application of info-gap theory to Bayes nets that evaluates the sensitivity of decisions to possibly large errors in the underlying probability estimates and utilities. We apply it to an example of management and eradication of Red Imported Fire Ants in Southern Queensland, Australia and show how changes in management decisions can be justified when uncertainty is considered. © 2009 Society for Risk Analysis.
Visual anticipation biases conscious decision making but not bottom-up visual processing
Mathews, Zenon; Cetnarski, Ryszard; Verschure, Paul F. M. J.
2015-01-01
Prediction plays a key role in the control of attention, but it is not clear which aspects of prediction are most prominent in conscious experience. An evolving view of the brain is that it can be seen as a prediction machine that optimizes its ability to predict states of the world and the self through the top-down propagation of predictions and the bottom-up presentation of prediction errors. There are competing views, though, on whether predictions or prediction errors dominate the formation of conscious experience. Yet the dynamic effects of prediction on perception, decision making and consciousness have been difficult to assess and to model. We propose a novel mathematical framework and a psychophysical paradigm that allow us to assess the hierarchical structuring of perceptual consciousness, its content, and the impact of predictions and/or errors on conscious experience, attention and decision-making. Using a displacement detection task combined with reverse correlation, we reveal signatures of the usage of prediction at three different levels of perceptual processing: bottom-up fast saccades, top-down driven slow saccades and conscious decisions. Our results suggest that the brain employs multiple parallel mechanisms at different levels of perceptual processing in order to shape effective sensory consciousness within a predicted perceptual scene. We further observe that bottom-up sensory and top-down predictive processes can be dissociated through cognitive load. We propose a probabilistic data association model from dynamical systems theory to model the predictive multi-scale bias in perceptual processing that we observe and its role in the formation of conscious experience. We propose that these results support the hypothesis that consciousness provides a time-delayed description of a task that is used to prospectively optimize real-time control structures, rather than being engaged in the real-time control of behavior itself. PMID:25741290
Medical errors in primary care clinics – a cross sectional study
2012-01-01
Background Patient safety is vital in patient care. There is a lack of studies on medical errors in primary care settings. The aim of the study is to determine the extent of diagnostic inaccuracies and management errors in publicly funded primary care clinics. Methods This was a cross-sectional study conducted in twelve publicly funded primary care clinics in Malaysia. A total of 1753 medical records were randomly selected in 12 primary care clinics in 2007 and were reviewed by trained family physicians for diagnostic, management and documentation errors, potential errors causing serious harm and likelihood of preventability of such errors. Results The majority of patient encounters (81%) were with medical assistants. Diagnostic errors were present in 3.6% (95% CI: 2.2, 5.0) of medical records and management errors in 53.2% (95% CI: 46.3, 60.2). For management errors, medication errors were present in 41.1% (95% CI: 35.8, 46.4) of records, investigation errors in 21.7% (95% CI: 16.5, 26.8) and decision making errors in 14.5% (95% CI: 10.8, 18.2). A total of 39.9% (95% CI: 33.1, 46.7) of these errors had the potential to cause serious harm. Problems of documentation including illegible handwriting were found in 98.0% (95% CI: 97.0, 99.1) of records. Nearly all errors (93.5%) detected were considered preventable. Conclusions The occurrence of medical errors was high in primary care clinics, particularly documentation and medication errors. Nearly all were preventable. Remedial interventions addressing completeness of documentation and prescriptions are likely to reduce errors. PMID:23267547
NASA Astrophysics Data System (ADS)
Owens, P. R.; Libohova, Z.; Seybold, C. A.; Wills, S. A.; Peaslee, S.; Beaudette, D.; Lindbo, D. L.
2017-12-01
The measurement errors and spatial prediction uncertainties of soil properties in the modeling community are usually assessed against measured values when available. However, of equal importance is the assessment of the impacts of errors and uncertainty on cost-benefit analysis and risk assessments. Soil pH was selected as one of the most commonly measured soil properties used for liming recommendations. The objective of this study was to assess the error size from different sources and their implications with respect to management decisions. Error sources include measurement methods, laboratory sources, pedotransfer functions, database transactions, spatial aggregations, etc. Several databases of measured and predicted soil pH were used for this study, including the United States National Cooperative Soil Survey Characterization Database (NCSS-SCDB) and the US Soil Survey Geographic (SSURGO) Database. The distribution of errors among different sources, from measurement methods to spatial aggregation, showed a wide range of values. The greatest RMSE of 0.79 pH units was from spatial aggregation (SSURGO vs kriging), while the measurement methods had the lowest RMSE of 0.06 pH units. Assuming the order of data acquisition based on the transaction distance, i.e. from measurement method to spatial aggregation, the RMSE increased from 0.06 to 0.8 pH units, suggesting an "error propagation". This has major implications for practitioners and the modeling community. Most soil liming rate recommendations are based on 0.1 pH unit increments, while the desired soil pH level increments are based on 0.4 to 0.5 pH units. Thus, even when the measured and desired target soil pH are the same, most guidelines recommend 1 ton ha-1 of lime, which translates into 111 ha-1 that the farmer has to factor into the cost-benefit analysis. However, this analysis needs to be based on uncertainty predictions (0.5-1.0 pH units) rather than measurement errors (0.1 pH units), which would translate into a 555-1,111 investment that needs to be assessed against the risk. The modeling community can benefit from such analyses; however, error size and spatial distribution for global and regional predictions need to be assessed against the variability of other drivers and the impact on management decisions.
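One common (and here purely illustrative) way to combine independent error sources is a quadrature sum of their RMSEs. The sketch below reuses the two RMSEs quoted in the abstract plus an assumed intermediate term; the independence assumption is ours, not the authors'.

```python
import numpy as np

# RMSEs (pH units) attributed to successive sources in the abstract,
# treated here as independent and combined in quadrature as an illustration.
rmse_by_source = {
    "measurement method": 0.06,
    "lab / pedotransfer / database steps": 0.40,   # assumed intermediate value
    "spatial aggregation": 0.79,
}
combined = np.sqrt(sum(v ** 2 for v in rmse_by_source.values()))
print(f"combined RMSE under a quadrature assumption: {combined:.2f} pH units")
```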
Reducing diagnostic errors in medicine: what's the goal?
Graber, Mark; Gordon, Ruthanna; Franklin, Nancy
2002-10-01
This review considers the feasibility of reducing or eliminating the three major categories of diagnostic errors in medicine: "No-fault errors" occur when the disease is silent, presents atypically, or mimics something more common. These errors will inevitably decline as medical science advances, new syndromes are identified, and diseases can be detected more accurately or at earlier stages. These errors can never be eradicated, unfortunately, because new diseases emerge, tests are never perfect, patients are sometimes noncompliant, and physicians will inevitably, at times, choose the most likely diagnosis over the correct one, illustrating the concept of necessary fallibility and the probabilistic nature of choosing a diagnosis. "System errors" play a role when diagnosis is delayed or missed because of latent imperfections in the health care system. These errors can be reduced by system improvements, but can never be eliminated because these improvements lag behind and degrade over time, and each new fix creates the opportunity for novel errors. Tradeoffs also guarantee system errors will persist, when resources are just shifted. "Cognitive errors" reflect misdiagnosis from faulty data collection or interpretation, flawed reasoning, or incomplete knowledge. The limitations of human processing and the inherent biases in using heuristics guarantee that these errors will persist. Opportunities exist, however, for improving the cognitive aspect of diagnosis by adopting system-level changes (e.g., second opinions, decision-support systems, enhanced access to specialists) and by training designed to improve cognition or cognitive awareness. Diagnostic error can be substantially reduced, but never eradicated.
ERIC Educational Resources Information Center
Brown, Robert T.; Jackson, Lee A.
1992-01-01
Reviews research on inductive reasoning errors, including seeing patterns or relationships where none exist, neglecting statistical regression, overgeneralizing unrepresentative data, and drawing conclusions based on incomplete decision matrices. Considers "false consensus effect," through which associations with like-minded people lead one to…
76 FR 73006 - General Motors, LLC, Grant of Petition for Decision of Inconsequential Noncompliance
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-28
... the rarest circumstances, the primary function of the PRNDM display is to inform the driver of gear... of shifting errors.'' In all but the rarest circumstances, the primary function of the transmission...
Potential barge transportation for inbound corn and grain
DOT National Transportation Integrated Search
1997-12-31
This research develops a model for estimating future barge and rail rates for decision making. The Box-Jenkins and the Regression Analysis with ARIMA errors forecasting methods were used to develop appropriate models for determining future rates. A s...
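Regression with ARIMA errors of the kind mentioned can be fitted with statsmodels' SARIMAX by passing the explanatory series as exogenous regressors. The sketch below uses synthetic demand and rate series and an assumed AR(1) error order; the study's actual variables and model orders are not given in the snippet above.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 120

# Synthetic stand-ins: a demand-like exogenous driver and a barge-rate series
# whose errors follow an AR(1) process.
demand = 100 + 10 * np.sin(np.arange(n) / 6) + rng.normal(0, 2, n)
errors = np.zeros(n)
for t in range(1, n):
    errors[t] = 0.7 * errors[t - 1] + rng.normal(0, 1)
rate = 5 + 0.3 * demand + errors

# Regression with ARIMA(1, 0, 0) errors, followed by a 12-step forecast.
model = sm.tsa.SARIMAX(rate, exog=demand, order=(1, 0, 0), trend="c")
fit = model.fit(disp=False)
future_demand = np.full((12, 1), demand[-12:].mean())
print(fit.get_forecast(steps=12, exog=future_demand).predicted_mean)
```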
A preliminary taxonomy of medical errors in family practice
Dovey, S; Meyers, D; Phillips, R; Green, L; Fryer, G; Galliher, J; Kappus, J; Grob, P
2002-01-01
Objective: To develop a preliminary taxonomy of primary care medical errors. Design: Qualitative analysis to identify categories of error reported during a randomized controlled trial of computer and paper reporting methods. Setting: The National Network for Family Practice and Primary Care Research. Participants: Family physicians. Main outcome measures: Medical error category, context, and consequence. Results: Forty two physicians made 344 reports: 284 (82.6%) arose from healthcare systems dysfunction; 46 (13.4%) were errors due to gaps in knowledge or skills; and 14 (4.1%) were reports of adverse events, not errors. The main subcategories were: administrative failures (102; 30.9% of errors), investigation failures (82; 24.8%), treatment delivery lapses (76; 23.0%), miscommunication (19; 5.8%), payment systems problems (4; 1.2%), error in the execution of a clinical task (19; 5.8%), wrong treatment decision (14; 4.2%), and wrong diagnosis (13; 3.9%). Most reports were of errors that were recognized and occurred in reporters' practices. Affected patients ranged in age from 8 months to 100 years, were of both sexes, and represented all major US ethnic groups. Almost half the reports were of events which had adverse consequences. Ten errors resulted in patients being admitted to hospital and one patient died. Conclusions: This medical error taxonomy, developed from self-reports of errors observed by family physicians during their routine clinical practice, emphasizes problems in healthcare processes and acknowledges medical errors arising from shortfalls in clinical knowledge and skills. Patient safety strategies with most effect in primary care settings need to be broader than the current focus on medication errors. PMID:12486987
A preliminary taxonomy of medical errors in family practice.
Dovey, S M; Meyers, D S; Phillips, R L; Green, L A; Fryer, G E; Galliher, J M; Kappus, J; Grob, P
2002-09-01
To develop a preliminary taxonomy of primary care medical errors. Qualitative analysis to identify categories of error reported during a randomized controlled trial of computer and paper reporting methods. The National Network for Family Practice and Primary Care Research. Family physicians. Medical error category, context, and consequence. Forty two physicians made 344 reports: 284 (82.6%) arose from healthcare systems dysfunction; 46 (13.4%) were errors due to gaps in knowledge or skills; and 14 (4.1%) were reports of adverse events, not errors. The main subcategories were: administrative failure (102; 30.9% of errors), investigation failures (82; 24.8%), treatment delivery lapses (76; 23.0%), miscommunication (19; 5.8%), payment systems problems (4; 1.2%), error in the execution of a clinical task (19; 5.8%), wrong treatment decision (14; 4.2%), and wrong diagnosis (13; 3.9%). Most reports were of errors that were recognized and occurred in reporters' practices. Affected patients ranged in age from 8 months to 100 years, were of both sexes, and represented all major US ethnic groups. Almost half the reports were of events which had adverse consequences. Ten errors resulted in patients being admitted to hospital and one patient died. This medical error taxonomy, developed from self-reports of errors observed by family physicians during their routine clinical practice, emphasizes problems in healthcare processes and acknowledges medical errors arising from shortfalls in clinical knowledge and skills. Patient safety strategies with most effect in primary care settings need to be broader than the current focus on medication errors.
MacGillivray, Brian H
2017-08-01
In many environmental and public health domains, heuristic methods of risk and decision analysis must be relied upon, either because problem structures are ambiguous, reliable data is lacking, or decisions are urgent. This introduces an additional source of uncertainty beyond model and measurement error - uncertainty stemming from relying on inexact inference rules. Here we identify and analyse heuristics used to prioritise risk objects, to discriminate between signal and noise, to weight evidence, to construct models, to extrapolate beyond datasets, and to make policy. Some of these heuristics are based on causal generalisations, yet can misfire when these relationships are presumed rather than tested (e.g. surrogates in clinical trials). Others are conventions designed to confer stability to decision analysis, yet which may introduce serious error when applied ritualistically (e.g. significance testing). Some heuristics can be traced back to formal justifications, but only subject to strong assumptions that are often violated in practical applications. Heuristic decision rules (e.g. feasibility rules) in principle act as surrogates for utility maximisation or distributional concerns, yet in practice may neglect costs and benefits, be based on arbitrary thresholds, and be prone to gaming. We highlight the problem of rule-entrenchment, where analytical choices that are in principle contestable are arbitrarily fixed in practice, masking uncertainty and potentially introducing bias. Strategies for making risk and decision analysis more rigorous include: formalising the assumptions and scope conditions under which heuristics should be applied; testing rather than presuming their underlying empirical or theoretical justifications; using sensitivity analysis, simulations, multiple bias analysis, and deductive systems of inference (e.g. directed acyclic graphs) to characterise rule uncertainty and refine heuristics; adopting "recovery schemes" to correct for known biases; and basing decision rules on clearly articulated values and evidence, rather than convention. Copyright © 2017. Published by Elsevier Ltd.
Zhu, Lusha; Mathewson, Kyle E.; Hsu, Ming
2012-01-01
Decision-making in the presence of other competitive intelligent agents is fundamental for social and economic behavior. Such decisions require agents to behave strategically, where in addition to learning about the rewards and punishments available in the environment, they also need to anticipate and respond to actions of others competing for the same rewards. However, whereas we know much about strategic learning at both theoretical and behavioral levels, we know relatively little about the underlying neural mechanisms. Here, we show using a multi-strategy competitive learning paradigm that strategic choices can be characterized by extending the reinforcement learning (RL) framework to incorporate agents’ beliefs about the actions of their opponents. Furthermore, using this characterization to generate putative internal values, we used model-based functional magnetic resonance imaging to investigate neural computations underlying strategic learning. We found that the distinct notions of prediction errors derived from our computational model are processed in a partially overlapping but distinct set of brain regions. Specifically, we found that the RL prediction error was correlated with activity in the ventral striatum. In contrast, activity in the ventral striatum, as well as the rostral anterior cingulate (rACC), was correlated with a previously uncharacterized belief-based prediction error. Furthermore, activity in rACC reflected individual differences in degree of engagement in belief learning. These results suggest a model of strategic behavior where learning arises from interaction of dissociable reinforcement and belief-based inputs. PMID:22307594
Zhu, Lusha; Mathewson, Kyle E; Hsu, Ming
2012-01-31
Decision-making in the presence of other competitive intelligent agents is fundamental for social and economic behavior. Such decisions require agents to behave strategically, where in addition to learning about the rewards and punishments available in the environment, they also need to anticipate and respond to actions of others competing for the same rewards. However, whereas we know much about strategic learning at both theoretical and behavioral levels, we know relatively little about the underlying neural mechanisms. Here, we show using a multi-strategy competitive learning paradigm that strategic choices can be characterized by extending the reinforcement learning (RL) framework to incorporate agents' beliefs about the actions of their opponents. Furthermore, using this characterization to generate putative internal values, we used model-based functional magnetic resonance imaging to investigate neural computations underlying strategic learning. We found that the distinct notions of prediction errors derived from our computational model are processed in a partially overlapping but distinct set of brain regions. Specifically, we found that the RL prediction error was correlated with activity in the ventral striatum. In contrast, activity in the ventral striatum, as well as the rostral anterior cingulate (rACC), was correlated with a previously uncharacterized belief-based prediction error. Furthermore, activity in rACC reflected individual differences in degree of engagement in belief learning. These results suggest a model of strategic behavior where learning arises from interaction of dissociable reinforcement and belief-based inputs.
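A toy sketch of the hybrid idea described, i.e. a reinforcement-learning value update driven by a reward prediction error alongside a belief update about the opponent driven by a belief-based prediction error, with choices made from a mixture of the two value estimates. The payoff matrix, learning rates, and mixing weight are arbitrary illustrations, not the fitted model from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
alpha_q, alpha_b, beta, w = 0.2, 0.2, 3.0, 0.5
# Payoff to the learner for (own action, opponent action), matching-pennies-like.
payoff = np.array([[0.0, 1.0],
                   [1.0, 0.0]])

Q = np.zeros(2)                 # directly reinforced action values
belief = np.array([0.5, 0.5])   # belief over the opponent's two actions

for trial in range(200):
    # Belief-based expected values, mixed with reinforcement values for choice.
    ev_belief = payoff @ belief
    values = (1 - w) * Q + w * ev_belief
    p = np.exp(beta * values) / np.exp(beta * values).sum()
    action = rng.choice(2, p=p)
    opponent = rng.integers(2)                      # stand-in opponent policy
    reward = payoff[action, opponent]

    Q[action] += alpha_q * (reward - Q[action])     # RL (reward) prediction error
    observed = np.eye(2)[opponent]
    belief += alpha_b * (observed - belief)         # belief-based prediction error

print("Q:", Q, "belief:", belief)
```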
fMRI evidence for a dual process account of the speed-accuracy tradeoff in decision-making.
Ivanoff, Jason; Branning, Philip; Marois, René
2008-07-09
The speed and accuracy of decision-making have a well-known trading relationship: hasty decisions are more prone to errors while careful, accurate judgments take more time. Despite the pervasiveness of this speed-accuracy trade-off (SAT) in decision-making, its neural basis is still unknown. Using functional magnetic resonance imaging (fMRI) we show that emphasizing the speed of a perceptual decision at the expense of its accuracy lowers the amount of evidence-related activity in lateral prefrontal cortex. Moreover, this speed-accuracy difference in lateral prefrontal cortex activity correlates with the speed-accuracy difference in the decision criterion metric of signal detection theory. We also show that the same instructions increase baseline activity in a dorso-medial cortical area involved in the internal generation of actions. These findings suggest that the SAT is neurally implemented by modulating not only the amount of externally-derived sensory evidence used to make a decision, but also the internal urge to make a response. We propose that these processes combine to control the temporal dynamics of the speed-accuracy trade-off in decision-making.
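The decision criterion metric referred to is the standard signal-detection-theory criterion c, computed together with sensitivity d' from hit and false-alarm rates. The sketch below uses a common log-linear correction and hypothetical response counts; it is not the study's actual data or analysis pipeline.

```python
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Standard signal-detection sensitivity (d') and criterion (c)."""
    # Log-linear correction keeps rates away from 0 and 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa
    criterion = -0.5 * (z_hit + z_fa)   # positive values = conservative responding
    return d_prime, criterion

# Hypothetical counts under "speed" vs "accuracy" instructions.
print("speed:   ", sdt_measures(80, 20, 30, 70))
print("accuracy:", sdt_measures(70, 30, 10, 90))
```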
Bit Error Probability for Maximum Likelihood Decoding of Linear Block Codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc P. C.; Rhee, Dojun
1996-01-01
In this paper, the bit error probability P_b for maximum likelihood decoding of binary linear codes is investigated. The contribution of each information bit to P_b is considered. For randomly generated codes, it is shown that the conventional approximation at high SNR, P_b approximately equal to (d_H/N)P_s, where P_s represents the block error probability, holds for systematic encoding only. Also, systematic encoding provides the minimum P_b when the inverse mapping corresponding to the generator matrix of the code is used to retrieve the information sequence. The bit error performances corresponding to other generator matrix forms are also evaluated. Although derived for codes with a randomly generated generator matrix, these results are shown to provide good approximations for codes used in practice. Finally, for decoding methods which require a generator matrix with a particular structure, such as trellis decoding or algebraic-based soft-decision decoding, equivalent schemes that reduce the bit error probability are discussed.
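The high-SNR approximation discussed above, written out in the abstract's notation:

```latex
\[
P_b \;\approx\; \frac{d_H}{N}\, P_s ,
\]
% with P_s the block (codeword) error probability and N the block length;
% per the abstract, this holds for systematic encoding only.
```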
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calabrese, Edward J., E-mail: edwardc@schoolph.uma
This paper reveals that nearly 25 years after the BEIR I Committee used Russell's dose-rate data to support the adoption of the linear-no-threshold (LNT) dose-response model for genetic and cancer risk assessment, Russell acknowledged a significant under-reporting of the mutation rate of the historical control group. This error, which was unknown to BEIR I, had profound implications, leading it to incorrectly adopt the LNT model, a decision that profoundly changed the course of risk assessment for radiation and chemicals to the present. -- Highlights: • The BEAR I Genetics Panel made an error in denying dose rate for mutation. • The BEIR I Genetics Subcommittee attempted to correct this dose rate error. • The control group used for risk assessment by BEIR I is now known to be in error. • Correcting this error contradicts the LNT, supporting a threshold model.
Homeostatic Regulation of Memory Systems and Adaptive Decisions
Mizumori, Sheri JY; Jo, Yong Sang
2013-01-01
While it is clear that many brain areas process mnemonic information, understanding how their interactions result in continuously adaptive behaviors has been a challenge. A homeostatic-regulated prediction model of memory is presented that considers the existence of a single memory system that is based on a multilevel coordinated and integrated network (from cells to neural systems) that determines the extent to which events and outcomes occur as predicted. The “multiple memory systems of the brain” have in common output that signals errors in the prediction of events and/or their outcomes, although these signals differ in terms of what the error signal represents (e.g., hippocampus: context prediction errors vs. midbrain/striatum: reward prediction errors). The prefrontal cortex likely plays a pivotal role in the coordination of prediction analysis within and across prediction brain areas. By virtue of its widespread control and influence, and its intrinsic working memory mechanisms, the prefrontal cortex supports the flexible processing needed to generate adaptive behaviors and predict future outcomes. It is proposed that prefrontal cortex continually and automatically produces adaptive responses according to homeostatic regulatory principles: prefrontal cortex may serve as a controller that is intrinsically driven to maintain in prediction areas an experience-dependent firing rate set point that ensures adaptive temporally and spatially resolved neural responses to future prediction errors. This same drive by prefrontal cortex may also restore set point firing rates after deviations (i.e., prediction errors) are detected. In this way, prefrontal cortex contributes to reducing uncertainty in prediction systems. An emergent outcome of this homeostatic view may be the flexible and adaptive control that prefrontal cortex is known to implement (i.e., working memory) in the most challenging of situations. Compromise to any of the prediction circuits should result in rigid and suboptimal decision making and memory, as seen in addiction and neurological disease. © 2013 The Authors. Hippocampus Published by Wiley Periodicals, Inc. PMID:23929788
Regenbogen, Scott E; Greenberg, Caprice C; Studdert, David M; Lipsitz, Stuart R; Zinner, Michael J; Gawande, Atul A
2007-11-01
To identify the most prevalent patterns of technical errors in surgery, and evaluate commonly recommended interventions in light of these patterns. The majority of surgical adverse events involve technical errors, but little is known about the nature and causes of these events. We examined characteristics of technical errors and common contributing factors among closed surgical malpractice claims. Surgeon reviewers analyzed 444 randomly sampled surgical malpractice claims from four liability insurers. Among 258 claims in which injuries due to error were detected, 52% (n = 133) involved technical errors. These technical errors were further analyzed with a structured review instrument designed by qualitative content analysis. Forty-nine percent of the technical errors caused permanent disability; an additional 16% resulted in death. Two-thirds (65%) of the technical errors were linked to manual error, 9% to errors in judgment, and 26% to both manual and judgment error. A minority of technical errors involved advanced procedures requiring special training ("index operations"; 16%), surgeons inexperienced with the task (14%), or poorly supervised residents (9%). The majority involved experienced surgeons (73%), and occurred in routine, rather than index, operations (84%). Patient-related complexities (including emergencies, difficult or unexpected anatomy, and previous surgery) contributed to 61% of technical errors, and technology or systems failures contributed to 21%. Most technical errors occur in routine operations with experienced surgeons, under conditions of increased patient complexity or systems failure. Commonly recommended interventions, including restricting high-complexity operations to experienced surgeons, additional training for inexperienced surgeons, and stricter supervision of trainees, are likely to address only a minority of technical errors. Surgical safety research should instead focus on improving decision-making and performance in routine operations for complex patients and circumstances.
Decision-directed detector for overlapping PCM/NRZ signals.
NASA Technical Reports Server (NTRS)
Wang, C. D.; Noack, T. L.
1973-01-01
A decision-directed (DD) technique for the detection of overlapping PCM/NRZ signals in the presence of white Gaussian noise is investigated. The performance of the DD detector is represented by probability of error Pe versus input signal-to-noise ratio (SNR). To examine how much improvement in performance can be achieved with this technique, Pe's with and without DD feedback are evaluated in parallel. Further, analytical results are compared with those found by Monte Carlo simulations. The results are in good agreement.
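As a rough, hedged illustration of the idea, the sketch below models overlapping NRZ symbols as a one-symbol partial overlap and compares plain threshold detection against decision-directed detection, in which the previously decided symbol is fed back and its overlap subtracted before the next decision. The channel model, overlap fraction, and SNR are invented for illustration and are not the paper's signal model.

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, snr_db, nbits = 0.4, 6.0, 100000       # overlap fraction and SNR are illustrative
sigma = 10 ** (-snr_db / 20)

bits = rng.integers(0, 2, nbits)
a = 1 - 2 * bits                              # NRZ levels +/-1
r = a + alpha * np.concatenate(([0], a[:-1])) + sigma * rng.standard_normal(nbits)

# detection without decision feedback: threshold the raw sample
err_plain = np.mean((r < 0) != bits.astype(bool))

# decision-directed detection: subtract the overlap of the previously decided symbol
err_dd = 0
prev = 0.0
for k in range(nbits):
    z = r[k] - alpha * prev
    dec = 1.0 if z >= 0 else -1.0
    err_dd += (dec != a[k])
    prev = dec
err_dd /= nbits

print(err_plain, err_dd)                      # DD feedback should lower the error probability
```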
Graphical user interface for a neonatal parenteral nutrition decision support system.
Peverini, R. L.; Beach, D. S.; Wan, K. W.; Vyhmeister, N. R.
2000-01-01
We developed and implemented a decision support system for prescribing parenteral nutrition (PN) solutions for infants in our neonatal intensive care unit. We employed a graphical user interface to provide clinical guidelines and aid the understanding of the interaction among the various ingredients that make up a PN solution. In particular, by displaying the interaction between the PN total solution volume, protein, calcium and phosphorus, we have eliminated PN orders that previously would have resulted in calcium-phosphorus precipitation errors. PMID:11079964
Failure detection system design methodology. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Chow, E. Y.
1980-01-01
The design of a failure detection and identification system consists of designing a robust residual generation process and a high performance decision making process. The design of these two processes is examined separately. Residual generation is based on analytical redundancy. Redundancy relations that are insensitive to modelling errors and noise effects are important for designing robust residual generation processes. The characterization of the concept of analytical redundancy in terms of a generalized parity space provides a framework in which a systematic approach to the determination of robust redundancy relations is developed. The Bayesian approach is adopted for the design of high performance decision processes. The FDI decision problem is formulated as a Bayes sequential decision problem. Since the optimal decision rule is incomputable, a methodology for designing suboptimal rules is proposed. A numerical algorithm is developed to facilitate the design and performance evaluation of suboptimal rules.
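The Bayes sequential decision rule studied in the thesis is not reproduced here; as one concrete instance of a sequential decision rule operating on residuals, the sketch below applies a simple sequential probability ratio test to a scalar residual, declaring a failure when the accumulated log-likelihood ratio crosses a threshold. The distributions and error probabilities are assumed for illustration.

```python
import numpy as np
from scipy.stats import norm

def sprt(residuals, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    """Sequential test of H0: residual ~ N(0, sigma) vs H1: residual ~ N(mu1, sigma)."""
    upper = np.log((1 - beta) / alpha)        # crossing this accepts "failure"
    lower = np.log(beta / (1 - alpha))        # crossing this accepts "no failure"
    llr = 0.0
    for t, r in enumerate(residuals):
        llr += norm.logpdf(r, mu1, sigma) - norm.logpdf(r, 0.0, sigma)
        if llr >= upper:
            return "failure", t
        if llr <= lower:
            return "no failure", t
    return "undecided", len(residuals)

rng = np.random.default_rng(3)
print(sprt(rng.normal(0.0, 1.0, 200)))        # healthy residual sequence
print(sprt(rng.normal(1.0, 1.0, 200)))        # residual with a bias indicating a fault
```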
Toward Online Measurement of Decision State
NASA Technical Reports Server (NTRS)
Lachter, Joel; Johnston, James C.; Corrado, Greg S.; McClelland, James L.
2009-01-01
In traditional perceptual decision-making experiments, two pieces of data are collected on each trial: response time and accuracy. But how confident were participants and how did their decision state evolve over time? We asked participants to provide a continuous readout of their decision state by moving a cursor along a sliding scale between a 100% certain left response and a 100% certain right response. Subjects did not terminate the trials; rather, trials were timed out at random and subjects were scored based on the cursor position at that time. Higher rewards for correct responses and higher penalties for errors were associated with extreme responses so that the response with the highest expected value was that which accurately reflected the participant's odds of being correct. This procedure encourages participants to expose the time-course of their evolving decision state. Evidence on how well they can do this will be presented.
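The incentive property described (the report with the highest expected value matches the participant's probability of being correct) holds for proper scoring rules; the sketch below uses a quadratic (Brier-style) payoff as a stand-in, since the study's actual payoff schedule is not given in the abstract.

```python
import numpy as np

def expected_value(report, p_right):
    """Quadratic (Brier-style) payoff: large reward for confident correct reports,
    large penalty for confident errors. Illustrative; not the study's actual payoffs."""
    reward_if_right = 1 - (1 - report) ** 2
    reward_if_left = 1 - report ** 2
    return p_right * reward_if_right + (1 - p_right) * reward_if_left

grid = np.linspace(0, 1, 1001)                # cursor position: 0 = certain left, 1 = certain right
for p in (0.5, 0.7, 0.9):
    best = grid[np.argmax(expected_value(grid, p))]
    print(p, round(best, 2))                  # the optimal report tracks the true probability
```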
Training of perceptual-cognitive skills in offside decision making.
Catteeuw, Peter; Gilis, Bart; Jaspers, Arne; Wagemans, Johan; Helsen, Werner
2010-12-01
This study investigates the effect of two off-field training formats to improve offside decision making. One group trained with video simulations and another with computer animations. Feedback after every offside situation allowed assistant referees to compensate for the consequences of the flash-lag effect and to improve their decision-making accuracy. First, response accuracy improved and flag errors decreased for both training groups implying that training interventions with feedback taught assistant referees to better deal with the flash-lag effect. Second, the results demonstrated no effect of format, although assistant referees rated video simulations higher for fidelity than computer animations. This implies that a cognitive correction to a perceptual effect can be learned also when the format does not correspond closely with the original perceptual situation. Off-field offside decision-making training should be considered as part of training because it is a considerable help to gain more experience and to improve overall decision-making performance.
Simulation and Modeling Efforts to Support Decision Making in Healthcare Supply Chain Management
Lazarova-Molnar, Sanja
2014-01-01
Recently, most healthcare organizations have focused their attention on reducing the cost of their supply chain management (SCM) by improving the efficiency of the decision-making processes involved. The availability of products through healthcare SCM is often a matter of life or death to the patient; therefore, trial and error approaches are not an option in this environment. Simulation and modeling (SM) has been presented as an alternative approach for supply chain managers in healthcare organizations to test solutions and to support decision making processes associated with various SCM problems. This paper presents and analyzes past SM efforts to support decision making in healthcare SCM and identifies the key challenges associated with healthcare SCM modeling. We also present and discuss emerging technologies to meet these challenges. PMID:24683333
Higher incentives can impair performance: neural evidence on reinforcement and rationality
Achtziger, Anja; Hügelschäfer, Sabine; Steinhauser, Marco
2015-01-01
Standard economic thinking postulates that increased monetary incentives should increase performance. Human decision makers, however, frequently focus on past performance, a form of reinforcement learning occasionally at odds with rational decision making. We used an incentivized belief-updating task from economics to investigate this conflict through measurements of neural correlates of reward processing. We found that higher incentives fail to improve performance when immediate feedback on decision outcomes is provided. Subsequent analysis of the feedback-related negativity, an early event-related potential following feedback, revealed the mechanism behind this paradoxical effect. As incentives increase, the win/lose feedback becomes more prominent, leading to an increased reliance on reinforcement and more errors. This mechanism is relevant for economic decision making and the debate on performance-based payment. PMID:25816816
75 FR 65054 - General Motors, LLC, Receipt of Petition for Decision of Inconsequential Noncompliance
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-21
... reduce the likelihood of shifting errors.'' Thus, in all but the rarest circumstances, the primary function of the PRNDM display is to inform the driver of gear selection and relative position of the gears...
Gillespie, Mary
2010-11-01
Nurses' clinical decision-making is a complex process that holds potential to influence the quality of care provided and patient outcomes. The evolution of nurses' decision-making that occurs with experience has been well documented. In addition, literature includes numerous strategies and approaches purported to support development of nurses' clinical decision-making. There has been, however, significantly less attention given to the process of assessing nurses' clinical decision-making and novice clinical educators are often challenged with knowing how to best support nurses and nursing students in developing their clinical decision-making capacity. The Situated Clinical Decision-Making framework is presented for use by clinical educators: it provides a structured approach to analyzing nursing students' and novice nurses' decision-making in clinical nursing practice, assists educators in identifying specific issues within nurses' clinical decision-making, and guides selection of relevant strategies to support development of clinical decision-making. A series of questions is offered as a guide for clinical educators when assessing nurses' clinical decision-making. The discussion presents key considerations related to analysis of various decision-making components, including common sources of challenge and errors that may occur within nurses' clinical decision-making. An exemplar illustrates use of the framework and guiding questions. Implications of this approach for selection of strategies that support development of clinical decision-making are highlighted. Copyright © 2010 Elsevier Ltd. All rights reserved.
Horowitz-Kraus, Tzipi
2016-05-01
The error-detection mechanism aids in preventing error repetition during a given task. Electroencephalography demonstrates that error detection involves two event-related potential components: error-related and correct-response negativities (ERN and CRN, respectively). Dyslexia is characterized by slow, inaccurate reading. In particular, individuals with dyslexia have a less active error-detection mechanism during reading than typical readers. In the current study, we examined whether a reading training programme could improve the ability to recognize words automatically (lexical representations) in adults with dyslexia, thereby resulting in more efficient error detection during reading. Behavioural and electrophysiological measures were obtained using a lexical decision task before and after participants trained with the reading acceleration programme. ERN amplitudes were smaller in individuals with dyslexia than in typical readers before training but increased following training, as did behavioural reading scores. Differences between the pre-training and post-training ERN and CRN components were larger in individuals with dyslexia than in typical readers. Also, the error-detection mechanism as represented by the ERN/CRN complex might serve as a biomarker for dyslexia and be used to evaluate the effectiveness of reading intervention programmes. Copyright © 2016 John Wiley & Sons, Ltd.
Defining health information technology-related errors: new developments since To Err Is Human.
Sittig, Dean F; Singh, Hardeep
2011-07-25
Despite the promise of health information technology (HIT), recent literature has revealed possible safety hazards associated with its use. The Office of the National Coordinator for HIT recently sponsored an Institute of Medicine committee to synthesize evidence and experience from the field on how HIT affects patient safety. To lay the groundwork for defining, measuring, and analyzing HIT-related safety hazards, we propose that HIT-related error occurs anytime HIT is unavailable for use, malfunctions during use, is used incorrectly by someone, or when HIT interacts with another system component incorrectly, resulting in data being lost or incorrectly entered, displayed, or transmitted. These errors, or the decisions that result from them, significantly increase the risk of adverse events and patient harm. We describe how a sociotechnical approach can be used to understand the complex origins of HIT errors, which may have roots in rapidly evolving technological, professional, organizational, and policy initiatives.
Joint Seasonal ARMA Approach for Modeling of Load Forecast Errors in Planning Studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hafen, Ryan P.; Samaan, Nader A.; Makarov, Yuri V.
2014-04-14
To make informed and robust decisions in the probabilistic power system operation and planning process, it is critical to conduct multiple simulations of the generated combinations of wind and load parameters and their forecast errors to handle the variability and uncertainty of these time series. In order for the simulation results to be trustworthy, the simulated series must preserve the salient statistical characteristics of the real series. In this paper, we analyze day-ahead load forecast error data from multiple balancing authority locations and characterize statistical properties such as mean, standard deviation, autocorrelation, correlation between series, time-of-day bias, and time-of-day autocorrelation. We then construct and validate a seasonal autoregressive moving average (ARMA) model to model these characteristics, and use the model to jointly simulate day-ahead load forecast error series for all BAs.
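A minimal sketch of fitting and simulating such a seasonal ARMA model is shown below, using synthetic hourly forecast errors with a 24-hour season in place of the balancing-authority data; the model orders and the statsmodels SARIMAX interface are illustrative choices, not the paper's joint multi-BA specification.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(4)

# Stand-in hourly day-ahead load forecast errors (MW); real BA data would be used in practice.
hours = 24 * 60
diurnal = 50 * np.sin(2 * np.pi * np.arange(hours) / 24)        # time-of-day bias
errors = diurnal + 30 * rng.standard_normal(hours)

# Seasonal ARMA with a 24-hour season, as a simple stand-in for the paper's model.
model = SARIMAX(errors, order=(1, 0, 1), seasonal_order=(1, 0, 1, 24))
fit = model.fit(disp=False)

# Simulate new forecast-error series that preserve the fitted autocorrelation structure.
simulated = fit.simulate(nsimulations=hours)
print(fit.params)
print(simulated[:24])
```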
Frequent methodological errors in clinical research.
Silva Aycaguer, L C
2018-03-07
Several errors that are frequently present in clinical research are listed, discussed and illustrated. A distinction is made between what can be considered an "error" arising from ignorance or neglect and what stems from a lack of integrity of the researchers, although it is recognized and documented that it is not easy to establish when we are dealing with one case and when with the other. The work does not intend to make an exhaustive inventory of such problems, but focuses on those that, while frequent, are usually less evident or less emphasized in the various lists of this type that have been published. It was decided to develop in detail the examples that illustrate the problems identified, instead of presenting a list of errors accompanied by a superficial description of their characteristics. Copyright © 2018 Elsevier España, S.L.U. y SEMICYUC. All rights reserved.
Teaching concepts of clinical measurement variation to medical students.
Hodder, R A; Longfield, J N; Cruess, D F; Horton, J A
1982-09-01
An exercise in clinical epidemiology was developed for medical students to demonstrate the process and limitations of scientific measurement using models that simulate common clinical experiences. All scales of measurement (nominal, ordinal and interval) were used to illustrate concepts of intra- and interobserver variation, systematic error, recording error, and procedural error. In a laboratory, students a) determined blood pressures on six videotaped subjects, b) graded sugar content of unknown solutions from 0 to 4+ using Clinitest tablets, c) measured papules that simulated PPD reactions, d) measured heart and kidney size on X-rays and, e) described a model skin lesion (melanoma). Traditionally, measurement variation is taught in biostatistics or epidemiology courses using previously collected data. Use of these models enables students to produce their own data using measurements commonly employed by the clinician. The exercise provided material for a meaningful discussion of the implications of measurement error in clinical decision-making.
Context is everything or how could I have been that stupid?
Croskerry, Pat
2009-01-01
Dual Process Theory provides a useful working model of decision-making. It broadly divides decision-making into intuitive (System 1) and analytical (System 2) processes. System 1 is especially dependent on contextual cues. There appears to be a universal human tendency to contextualize information, mostly in an effort to imbue meaning but also, perhaps, to conserve cognitive energy. Most decision errors occur in System 1, and this has two major implications. The first is that insufficient account may have been taken of context when the original decision was made. Secondly, in trying to learn from decision failures, we need the highest fidelity of context reconstruction possible. It should be appreciated that learning from past events is inevitably an imperfect process. Retrospective investigations, such as root-cause analysis, critical incident review, morbidity and mortality rounds and legal investigations, all suffer the limitation that they cannot faithfully reconstruct the context in which decisions were made and from which actions followed.
2015-12-24
Ripple-Carry RCA Ripple-Carry Adder RF Radio Frequency RMS Root-Mean-Square SEU Single Event Upset SIPI Signal and Image Processing Institute SNR ... correctness, where 0.5 < p < 1, and a probability (1 - p) of error. Errors could be caused by noise, radio frequency (RF) interference, crosstalk ... utilized in the Apollo Guidance Computer is the three-input NOR gate. ... At the time that the decision was made to use integrated circuits, the
Group-sequential three-arm noninferiority clinical trial designs
Ochiai, Toshimitsu; Hamasaki, Toshimitsu; Evans, Scott R.; Asakura, Koko; Ohno, Yuko
2016-01-01
We discuss group-sequential three-arm noninferiority clinical trial designs that include active and placebo controls for evaluating both assay sensitivity and noninferiority. We extend two existing approaches, the fixed margin and fraction approaches, into a group-sequential setting with two decision-making frameworks. We investigate the operating characteristics including power, Type I error rate, maximum and expected sample sizes, as design factors vary. In addition, we discuss sample size recalculation and its impact on the power and Type I error rate via a simulation study. PMID:26892481
Missed opportunities for diagnosis: lessons learned from diagnostic errors in primary care.
Goyder, Clare R; Jones, Caroline H D; Heneghan, Carl J; Thompson, Matthew J
2015-12-01
Because of the difficulties inherent in diagnosis in primary care, it is inevitable that diagnostic errors will occur. However, despite the important consequences associated with diagnostic errors and their estimated high prevalence, teaching and research on diagnostic error is a neglected area. To ascertain the key learning points from GPs' experiences of diagnostic errors and approaches to clinical decision making associated with these. Secondary analysis of 36 qualitative interviews with GPs in Oxfordshire, UK. Two datasets of semi-structured interviews were combined. Questions focused on GPs' experiences of diagnosis and diagnostic errors (or near misses) in routine primary care and out of hours. Interviews were audiorecorded, transcribed verbatim, and analysed thematically. Learning points include GPs' reliance on 'pattern recognition' and the failure of this strategy to identify atypical presentations; the importance of considering all potentially serious conditions using a 'restricted rule out' approach; and identifying and acting on a sense of unease. Strategies to help manage uncertainty in primary care were also discussed. Learning from previous examples of diagnostic errors is essential if these events are to be reduced in the future and this should be incorporated into GP training. At a practice level, learning points from experiences of diagnostic errors should be discussed more frequently; and more should be done to integrate these lessons nationally to understand and characterise diagnostic errors. © British Journal of General Practice 2015.
Voss, Frank D.; Curran, Christopher A.; Mastin, Mark C.
2008-01-01
A mechanistic water-temperature model was constructed by the U.S. Geological Survey for use by the Bureau of Reclamation for studying the effect of potential water management decisions on water temperature in the Yakima River between Roza and Prosser, Washington. Flow and water temperature data for model input were obtained from the Bureau of Reclamation Hydromet database and from measurements collected by the U.S. Geological Survey during field trips in autumn 2005. Shading data for the model were collected by the U.S. Geological Survey in autumn 2006. The model was calibrated with data collected from April 1 through October 31, 2005, and tested with data collected from April 1 through October 31, 2006. Sensitivity analysis results showed that for the parameters tested, daily maximum water temperature was most sensitive to changes in air temperature and solar radiation. Root mean squared error for the five sites used for model calibration ranged from 1.3 to 1.9 degrees Celsius (°C) and mean error ranged from -1.3 to 1.6°C. The root mean squared error for the five sites used for testing simulation ranged from 1.6 to 2.2°C and mean error ranged from 0.1 to 1.3°C. The accuracy of the stream temperatures estimated by the model is limited by four errors (model error, data error, parameter error, and user error).
Physician's error: medical or legal concept?
Mujovic-Zornic, Hajrija M
2010-06-01
This article deals with the common term for the different physician's errors that often happen in the daily practice of health care. The author begins with the term medical malpractice, defined broadly as the practice of unjustified acts or failures to act on the part of a physician or other health care professionals, which results in harm to the patient. It is a common term that includes many types of medical errors, especially physician's errors. The author also discusses the concept of physician's error in particular, which is no longer understood in the traditional way only as a classic error of doing something manually wrong without the necessary skills (medical concept), but as an error which violates the patient's basic rights and which has its final legal consequence (legal concept). In every case the essential element of liability is to establish this error as a breach of the physician's duty. The first point to note is that the standard of procedure and the standard of due care against which the physician will be judged is not going to be that of the ordinary reasonable man who enjoys no medical expertise. The court's decision should give the final answer and legal qualification in each concrete case. The author's conclusion is that higher protection of human rights in the area of health equally demands a broader concept of physician's error, with the accent on its legal subject matter.
An Analysis of the Plumbing Occupation.
ERIC Educational Resources Information Center
Carlton, Earnest L.; Hollar, Charles E.
The occupational analysis contains a brief job description, presenting for the occupation of plumbing 12 detailed task statements which specify job duties (tools, equipment, materials, objects acted upon, performance knowledge, safety considerations/hazards, decisions, cues, and errors) and learning skills (science, mathematics/number systems, and…
Observing Reasonable Consumers.
ERIC Educational Resources Information Center
Silber, Norman I.
1991-01-01
Although courts and legislators usually set legal standards that correspond to empirical knowledge of human behavior, recent developments in behavioral psychology have led courts to appreciate the limits and errors in consumer decision making. "Reasonable consumer" standards that are congruent with cognitive reality should be developed.…
Urban rail transit projects : forecast versus actual ridership and costs. final report
DOT National Transportation Integrated Search
1989-10-01
Substantial errors in forecasting ridership and costs for the ten rail transit projects reviewed in this report put forth the possibility that more accurate forecasts would have led decision-makers to select projects other than those reviewed in thi...
Analysis of the Medical Assisting Occupation.
ERIC Educational Resources Information Center
Keir, Lucille; And Others
The occupational analysis contains a brief job description, presenting for the occupation of medical assistant 113 detailed task statements which specify job duties (tools, equipment, materials, objects acted upon, performance knowledge, safety consideration/hazards, decisions, cues, and errors) and learning skills (science, mathematics/number…
20 CFR 404.1643 - Performance accuracy standard.
Code of Federal Regulations, 2011 CFR
2011-04-01
... DISABILITY INSURANCE (1950- ) Determinations of Disability Performance Standards § 404.1643 Performance... well as the correctness of the decision. For example, if a particular item of medical evidence should... case, that is a performance error. Performance accuracy, therefore, is a higher standard than...
Spatial regression test for ensuring temperature data quality in southern Spain
NASA Astrophysics Data System (ADS)
Estévez, J.; Gavilán, P.; García-Marín, A. P.
2018-01-01
Quality assurance of meteorological data is crucial for ensuring the reliability of applications and models that use such data as input variables, especially in the field of environmental sciences. Spatial validation of meteorological data is based on the application of quality control procedures using data from neighbouring stations to assess the validity of data from a candidate station (the station of interest). These kinds of tests, which are referred to in the literature as spatial consistency tests, take data from neighbouring stations in order to estimate the corresponding measurement at the candidate station. These estimations can be made by weighting values according to the distance between the stations or to the coefficient of correlation, among other methods. The test applied in this study relies on statistical decision-making and uses a weighting based on the standard error of the estimate. This paper summarizes the results of the application of this test to maximum, minimum and mean temperature data from the Agroclimatic Information Network of Andalusia (southern Spain). This quality control procedure includes a decision based on a factor f, the fraction of potential outliers for each station across the region. Using GIS techniques, the geographic distribution of the errors detected has been also analysed. Finally, the performance of the test was assessed by evaluating its effectiveness in detecting known errors.
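A hedged sketch of a neighbour-based spatial consistency check of this kind is given below: each neighbour predicts the candidate value through a linear regression on shared history, the predictions are combined with weights based on the standard error of the estimate, and the observation is flagged when it falls outside a confidence band. The threshold multiplier and synthetic data are invented, and the paper's factor f (the fraction of potential outliers per station) is not modeled here.

```python
import numpy as np

def spatial_consistency_flag(candidate_today, candidate_hist, neighbours_hist,
                             neighbours_today, threshold=3.0):
    """Flag candidate_today as a potential outlier using neighbour-based regression.
    Weights follow the inverse squared standard error of each regression estimate."""
    estimates, weights = [], []
    for hist, today in zip(neighbours_hist, neighbours_today):
        slope, intercept = np.polyfit(hist, candidate_hist, 1)
        resid = candidate_hist - (slope * hist + intercept)
        se = np.sqrt(np.sum(resid ** 2) / (len(hist) - 2))       # standard error of the estimate
        estimates.append(slope * today + intercept)
        weights.append(1.0 / se ** 2)
    estimates, weights = np.array(estimates), np.array(weights)
    x_hat = np.sum(weights * estimates) / np.sum(weights)        # weighted estimate at the candidate
    s_prime = np.sqrt(len(estimates) / np.sum(weights))          # pooled standard error
    return abs(candidate_today - x_hat) > threshold * s_prime

rng = np.random.default_rng(5)
base = 20 + 5 * rng.standard_normal(60)                          # 60 days of regional temperature
neighbours_hist = [base + rng.normal(0, 0.5, 60) for _ in range(4)]
candidate_hist = base + rng.normal(0, 0.5, 60)
neighbours_today = [25.0, 24.6, 25.3, 24.9]
print(spatial_consistency_flag(25.1, candidate_hist, neighbours_hist, neighbours_today))  # plausible
print(spatial_consistency_flag(35.0, candidate_hist, neighbours_hist, neighbours_today))  # suspect
```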
The influence of the uplink noise on the performance of satellite data transmission systems
NASA Astrophysics Data System (ADS)
Dewal, Vrinda P.
The problem of transmission of binary phase shift keying (BPSK) modulated digital data through a bandlimited nonlinear satellite channel in the presence of uplink and downlink Gaussian noise and intersymbol interference is examined. The satellite transponder is represented by a zero memory bandpass nonlinearity, with AM/AM conversion. The proposed optimum linear receiver structure consists of tapped-delay lines followed by a decision device. The linear receiver is designed to minimize the mean square error that is a function of the intersymbol interference, the uplink and the downlink noise. The minimum mean square error (MMSE) equalizer is derived using the Wiener-Kolmogorov theory. In this receiver, the decision about the transmitted signal is made by taking into account the received sequence of the present sample and the interfering past and future samples, which represent the intersymbol interference (ISI). Illustrative examples of the receiver structures are considered for nonlinear channels with symmetrical and asymmetrical frequency responses of the transmitter filter. The transponder nonlinearity is simulated by a polynomial using only the first- and third-order terms. A computer simulation determines the tap gain coefficients of the MMSE equalizer that adapt to the various uplink and downlink noise levels. The performance of the MMSE equalizer is evaluated in terms of an estimate of the average probability of error.
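For orientation, the sketch below builds a tapped-delay-line equalizer whose taps are obtained from the Wiener (MMSE) solution w = R^{-1} p estimated from training symbols, applied to a toy channel with intersymbol interference, a mild memoryless nonlinearity, and additive noise. The channel, nonlinearity, and noise levels are stand-ins, not the satellite link model derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(6)
n, taps, delay = 20000, 7, 3
a = 1 - 2 * rng.integers(0, 2, n)                    # BPSK symbols +/-1

h = np.array([0.1, 0.9, 0.3])                        # ISI channel (illustrative)
x = np.convolve(a, h, mode="same")
x = x - 0.05 * x ** 3                                # mild memoryless nonlinearity (AM/AM stand-in)
r = x + 0.2 * rng.standard_normal(n)                 # additive downlink noise

# Build the tapped-delay-line data matrix and solve the Wiener (MMSE) equations w = R^{-1} p.
X = np.array([r[k:k + taps] for k in range(n - taps)])
d = a[delay:n - taps + delay]                        # desired symbols aligned with the centre tap
R = X.T @ X / len(X)
p = X.T @ d / len(X)
w = np.linalg.solve(R, p)

decisions = np.sign(X @ w)
print(np.mean(decisions != d))                       # symbol error rate after MMSE equalization
```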
Environmental cost of using poor decision metrics to prioritize environmental projects.
Pannell, David J; Gibson, Fiona L
2016-04-01
Conservation decision makers commonly use project-scoring metrics that are inconsistent with theory on optimal ranking of projects. As a result, there may often be a loss of environmental benefits. We estimated the magnitudes of these losses for various metrics that deviate from theory in ways that are common in practice. These metrics included cases where relevant variables were omitted from the benefits metric, project costs were omitted, and benefits were calculated using a faulty functional form. We estimated distributions of parameters from 129 environmental projects from Australia, New Zealand, and Italy for which detailed analyses had been completed previously. The cost of using poor prioritization metrics (in terms of lost environmental values) was often high--up to 80% in the scenarios we examined. The cost in percentage terms was greater when the budget was smaller. The most costly errors were omitting information about environmental values (up to 31% loss of environmental values), omitting project costs (up to 35% loss), omitting the effectiveness of management actions (up to 9% loss), and using a weighted-additive decision metric for variables that should be multiplied (up to 23% loss). The latter 3 are errors that occur commonly in real-world decision metrics, in combination often reducing potential benefits from conservation investments by 30-50%. Uncertainty about parameter values also reduced the benefits from investments in conservation projects but often not by as much as faulty prioritization metrics. © 2016 Society for Conservation Biology.
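A toy version of the comparison is sketched below: projects are ranked either by the theory-consistent benefit-times-effectiveness per unit cost, by a weighted-additive score over the same variables, or with costs omitted, and the expected benefit delivered under a fixed budget is computed for each ranking. All numbers are randomly generated for illustration and are not the 129 projects analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100
value = rng.uniform(1, 10, n)             # environmental value if the project succeeds
effectiveness = rng.uniform(0.1, 1.0, n)  # effectiveness of the management action
cost = rng.uniform(1, 10, n)
budget = 50.0

def realized_benefit(order):
    """Fund projects in the given order until the budget runs out; sum expected benefits."""
    spent = benefit = 0.0
    for i in order:
        if spent + cost[i] <= budget:
            spent += cost[i]
            benefit += value[i] * effectiveness[i]
    return benefit

correct = np.argsort(-(value * effectiveness) / cost)          # theory-consistent ranking
additive = np.argsort(-(value + effectiveness - cost))         # weighted-additive style metric
no_cost = np.argsort(-(value * effectiveness))                 # omits project costs

for name, order in [("benefit/cost", correct), ("additive", additive), ("cost omitted", no_cost)]:
    print(name, round(realized_benefit(order), 1))
```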
Automation bias: a systematic review of frequency, effect mediators, and mitigators.
Goddard, Kate; Roudsari, Abdul; Wyatt, Jeremy C
2012-01-01
Automation bias (AB)--the tendency to over-rely on automation--has been studied in various academic fields. Clinical decision support systems (CDSS) aim to benefit the clinical decision-making process. Although most research shows overall improved performance with use, there is often a failure to recognize the new errors that CDSS can introduce. With a focus on healthcare, a systematic review of the literature from a variety of research fields has been carried out, assessing the frequency and severity of AB, the effect mediators, and interventions potentially mitigating this effect. This is discussed alongside automation-induced complacency, or insufficient monitoring of automation output. A mix of subject specific and freetext terms around the themes of automation, human-automation interaction, and task performance and error were used to search article databases. Of 13 821 retrieved papers, 74 met the inclusion criteria. User factors such as cognitive style, decision support systems (DSS), and task specific experience mediated AB, as did attitudinal driving factors such as trust and confidence. Environmental mediators included workload, task complexity, and time constraint, which pressurized cognitive resources. Mitigators of AB included implementation factors such as training and emphasizing user accountability, and DSS design factors such as the position of advice on the screen, updated confidence levels attached to DSS output, and the provision of information versus recommendation. By uncovering the mechanisms by which AB operates, this review aims to help optimize the clinical decision-making process for CDSS developers and healthcare practitioners.
Situation assessment in the Paladin tactical decision generation system
NASA Technical Reports Server (NTRS)
Mcmanus, John W.; Chappell, Alan R.; Arbuckle, P. Douglas
1992-01-01
Paladin is a real-time tactical decision generator for air combat engagements. Paladin uses specialized knowledge-based systems and other Artificial Intelligence (AI) programming techniques to address the modern air combat environment and agile aircraft in a clear and concise manner. Paladin is designed to provide insight into both the tactical benefits and the costs of enhanced agility. The system was developed using the Lisp programming language on a specialized AI workstation. Paladin utilizes a set of air combat rules, an active throttle controller, and a situation assessment module that have been implemented as a set of highly specialized knowledge-based systems. The situation assessment module was developed to determine the tactical mode of operation (aggressive, defensive, neutral, evasive, or disengagement) used by Paladin at each decision point in the air combat engagement. Paladin uses the situation assessment module and the situationally dependent modes of operation to more accurately represent the complex decision-making process of human pilots. This allows Paladin to adapt its tactics to the current situation and improves system performance. Discussed here are the details of Paladin's situation assessment and modes of operation. The results of simulation testing showing the error introduced into the situation assessment module due to estimation errors in positional and geometric data for the opponent aircraft are presented. Implementation issues for real-time performance are discussed and several solutions are presented, including Paladin's use of an inference engine designed for real-time execution.
Follow the heart or the head? The interactive influence model of emotion and cognition.
Luo, Jiayi; Yu, Rongjun
2015-01-01
The experience of emotion has a powerful influence on daily-life decision making. Following Plato's description of emotion and reason as two horses pulling us in opposite directions, modern dual-system models of decision making endorse the antagonism between reason and emotion. Decision making is perceived as the competition between an emotion system that is automatic but prone to error and a reason system that is slow but rational. The reason system (in "the head") reins in our impulses (from "the heart") and overrides our snap judgments. However, from Darwin's evolutionary perspective, emotion is adaptive, guiding us to make sound decisions in uncertainty. Here, drawing findings from behavioral economics and neuroeconomics, we provide a new model, labeled "The interactive influence model of emotion and cognition," to elaborate the relationship of emotion and reason in decision making. Specifically, in our model, we identify factors that determine when emotions override reason and delineate the type of contexts in which emotions help or hurt decision making. We then illustrate how cognition modulates emotion and how they cooperate to affect decision making.
First- and second-language phonological representations in the mental lexicon.
Sebastian-Gallés, Núria; Rodríguez-Fornells, Antoni; de Diego-Balaguer, Ruth; Díaz, Begoña
2006-08-01
Performance-based studies on the psychological nature of linguistic competence can conceal significant differences in the brain processes that underlie native versus nonnative knowledge of language. Here we report results from the brain activity of very proficient early bilinguals making a lexical decision task that illustrates this point. Two groups of Spanish-Catalan early bilinguals (Spanish-dominant and Catalan-dominant) were asked to decide whether a given form was a Catalan word or not. The nonwords were based on real words, with one vowel changed. In the experimental stimuli, the vowel change involved a Catalan-specific contrast that previous research had shown to be difficult for Spanish natives to perceive. In the control stimuli, the vowel switch involved contrasts common to Spanish and Catalan. The results indicated that the groups of bilinguals did not differ in their behavioral and event-related brain potential measurements for the control stimuli; both groups made very few errors and showed a larger N400 component for control nonwords than for control words. However, significant differences were observed for the experimental stimuli across groups: Specifically, Spanish-dominant bilinguals showed great difficulty in rejecting experimental nonwords. Indeed, these participants not only showed very high error rates for these stimuli, but also did not show an error-related negativity effect in their erroneous nonword decisions. However, both groups of bilinguals showed a larger correct-related negativity when making correct decisions about the experimental nonwords. The results suggest that although some aspects of a second language system may show a remarkable lack of plasticity (like the acquisition of some foreign contrasts), first-language representations seem to be more dynamic in their capacity of adapting and incorporating new information.
Analysis of ETMS Data Quality for Traffic Flow Management Decisions
NASA Technical Reports Server (NTRS)
Chatterji, Gano B.; Sridhar, Banavar; Kim, Douglas
2003-01-01
The data needed for air traffic flow management decision support tools is provided by the Enhanced Traffic Management System (ETMS). This includes both the tools that are in current use and the ones being developed for future deployment. Since the quality of decision support provided by all these tools will be influenced by the quality of the input ETMS data, an assessment of ETMS data quality is needed. Motivated by this desire, ETMS data quality is examined in this paper in terms of the unavailability of flight plans, deviation from the filed flight plans, departure delays, altitude errors and track data drops. Although many of these data quality issues are not new, little is known about their extent. A goal of this paper is to document the magnitude of data quality issues supported by numerical analysis of ETMS data. Guided by this goal, ETMS data for a 24-hour period were processed to determine the number of aircraft with missing flight plan messages at any given instant of time. Results are presented for aircraft above 18,000 feet altitude and also at all altitudes. Since deviation from filed flight plan is also a major cause of trajectory-modeling errors, statistics of deviations are presented. Errors in proposed departure times and ETMS-generated vertical profiles are also shown. A method for conditioning the vertical profiles for improving demand prediction accuracy is described. Graphs of actual sector counts obtained using these vertical profiles are compared with those obtained using the Host data for sectors in the Fort Worth Center to demonstrate the benefit of preprocessing. Finally, results are presented to quantify the extent of data drops. A method for propagating track positions during ETMS data drops is also described.
Leitner, Stephan; Brauneis, Alexander; Rausch, Alexandra
2015-01-01
In this paper, we investigate the impact of inaccurate forecasting on the coordination of distributed investment decisions. In particular, by setting up a computational multi-agent model of a stylized firm, we investigate the case of investment opportunities that are mutually carried out by organizational departments. The forecasts of concern pertain to the initial amount of money necessary to launch and operate an investment opportunity, to the expected intertemporal distribution of cash flows, and the departments' efficiency in operating the investment opportunity at hand. We propose a budget allocation mechanism for coordinating such distributed decisions. The paper provides guidance on how to set framework conditions, in terms of the number of investment opportunities considered in one round of funding and the number of departments operating one investment opportunity, so that the coordination mechanism is highly robust to forecasting errors. Furthermore, we show that, in some setups, a certain extent of misforecasting is desirable from the firm's point of view as it supports the achievement of the corporate objective of value maximization. We then address the question of how to improve forecasting quality in the best possible way, and provide policy advice on how to sequence activities for improving forecasting quality so that the robustness of the coordination mechanism to errors increases in the best possible way. At the same time, we show that wrong decisions regarding the sequencing can lead to a decrease in robustness. Finally, we conduct a comprehensive sensitivity analysis and prove that, in particular for relatively good forecasters, most of our results are robust to changes in setting the parameters of our multi-agent simulation model.
Avoiding and identifying errors and other threats to the credibility of health economic models.
Tappenden, Paul; Chilcott, James B
2014-10-01
Health economic models have become the primary vehicle for undertaking economic evaluation and are used in various healthcare jurisdictions across the world to inform decisions about the use of new and existing health technologies. Models are required because a single source of evidence, such as a randomised controlled trial, is rarely sufficient to provide all relevant information about the expected costs and health consequences of all competing decision alternatives. Whilst models are used to synthesise all relevant evidence, they also contain assumptions, abstractions and simplifications. By their very nature, all models are therefore 'wrong'. As such, the interpretation of estimates of the cost effectiveness of health technologies requires careful judgements about the degree of confidence that can be placed in the models from which they are drawn. The presence of a single error or inappropriate judgement within a model may lead to inappropriate decisions, an inefficient allocation of healthcare resources and ultimately suboptimal outcomes for patients. This paper sets out a taxonomy of threats to the credibility of health economic models. The taxonomy segregates threats to model credibility into three broad categories: (i) unequivocal errors, (ii) violations, and (iii) matters of judgement; and maps these across the main elements of the model development process. These three categories are defined according to the existence of criteria for judging correctness, the degree of force with which such criteria can be applied, and the means by which these credibility threats can be handled. A range of suggested processes and techniques for avoiding and identifying these threats is put forward with the intention of prospectively improving the credibility of models.
Ruva, Christine L; Guenther, Christina C
2015-06-01
This 2-part study explored how exposure to negative pretrial publicity (Neg-PTP) influences the jury process, as well as possible mechanisms responsible for its biasing effects on decisions. Study Part A explored how PTP and jury deliberations affect juror/jury verdicts, memory, and impressions of the defendant and attorneys. One week before viewing a criminal trial, mock-jurors (N = 320 university students) were exposed to Neg-PTP or unrelated crime stories (No-PTP). Two days later deliberating jurors came to a group decision, whereas nondeliberating jurors completed an unrelated task before making an individual decision. Neg-PTP jurors were more likely to vote guilty, make memory errors, and rate the defendant lower in credibility. Deliberation reduced Neg-PTP jurors' memory accuracy and No-PTP jurors' guilty verdicts (leniency bias). Jurors' memory and ratings of the defendant and prosecuting attorney significantly mediated the effect of PTP on guilt ratings. Study Part B content analyzed 30 mock-jury deliberations and explored how PTP influenced deliberations and ultimately jury decisions. Neg-PTP juries were more likely than No-PTP juries to discuss ambiguous trial evidence in a proprosecution manner and less likely to discuss judicial instructions and lack of evidence. All Neg-PTP juries mentioned PTP despite being instructed otherwise, and rarely corrected jury members who mentioned PTP. Discussion of ambiguous trial evidence in a proprosecution manner and lack of evidence significantly mediated the effect of PTP on jury-level guilt ratings. Together the findings suggest that judicial admonishments and deliberations may not be sufficient to reduce PTP bias, because of memory errors, biased impressions, and predecisional distortion. (c) 2015 APA, all rights reserved.
Dehghani Soufi, Mahsa; Samad-Soltani, Taha; Shams Vahdati, Samad; Rezaei-Hachesu, Peyman
2018-06-01
Fast and accurate patient triage for the response process is a critical first step in emergency situations. This process is often performed using a paper-based mode, which intensifies workload and difficulty, wastes time, and is at risk of human errors. This study aims to design and evaluate a decision support system (DSS) to determine the triage level. A combination of the Rule-Based Reasoning (RBR) and Fuzzy Logic Classifier (FLC) approaches was used to predict the triage level of patients according to the triage specialist's opinions and Emergency Severity Index (ESI) guidelines. RBR was applied for modeling the first to fourth decision points of the ESI algorithm. The data relating to vital signs were used as input variables and modeled using fuzzy logic. Narrative knowledge was converted to If-Then rules using XML. The extracted rules were then used to create the rule-based engine and predict the triage levels. Fourteen RBR and 27 fuzzy rules were extracted and used in the rule-based engine. The performance of the system was evaluated using three methods with real triage data. The accuracy of the clinical decision support system (CDSS) on the test data was 99.44%. The evaluation of the error rate revealed that, when using the traditional method, 13.4% of the patients were mis-triaged, which is statistically significant. The completeness of the documentation also improved from 76.72% to 98.5%. The designed system was effective in determining the triage level of patients, and it proved helpful for nurses as they made decisions and generated nursing diagnoses based on triage guidelines. The hybrid approach can reduce triage misdiagnosis in a highly accurate manner and improve the triage outcomes. Copyright © 2018 Elsevier B.V. All rights reserved.
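A minimal sketch of how crisp ESI-style decision points can be combined with fuzzy memberships over vital signs is given below. The thresholds, membership functions, and rules are invented for illustration; they are not the rules extracted in the study and are not clinical guidance.

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function used for fuzzifying vital signs."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def triage_level(heart_rate, resp_rate, spo2, life_threat, high_risk):
    # Crisp ESI-style decision points first (illustrative, not the ESI handbook).
    if life_threat:
        return 1
    if high_risk:
        return 2
    # Fuzzy assessment of vital-sign danger for the remaining levels.
    danger = max(
        trapezoid(heart_rate, 100, 120, 200, 220),
        trapezoid(resp_rate, 20, 26, 50, 60),
        trapezoid(100 - spo2, 6, 10, 40, 60),    # desaturation below roughly 94%
    )
    if danger > 0.7:
        return 2
    if danger > 0.3:
        return 3
    return 4

print(triage_level(heart_rate=130, resp_rate=24, spo2=91, life_threat=False, high_risk=False))
```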
Vaccine administration decision making: the case of yellow fever vaccine.
Lown, Beth A; Chen, Lin H; Wilson, Mary E; Sisson, Emily; Gershman, Mark; Yanni, Emad; Jentes, Emily S; Hochberg, Natasha S; Hamer, Davidson H; Barnett, Elizabeth D
2012-09-01
Providers must counsel travelers to yellow fever (YF)-endemic areas, although risk estimates of disease and vaccine serious adverse events (SAEs) may be imprecise. The impact of risk information and patients' requests for participation in vaccine decisions on providers' recommendations is unknown. Vaccine providers were surveyed regarding decisions for 4 patient scenarios before and after being presented information about risk of YF disease vs vaccine SAEs. Participants' theoretical attitudes were compared with actual responses to scenarios in which patients wanted to share vaccine decisions. Analyses were done using χ² tests with a significance level of .05. Forty-six percent of respondents made appropriate initial YF vaccine administration decisions for a pregnant woman, 73% for an immunosuppressed man, and 49% for an 8-month-old infant. After receiving scenario-specific information, 20%, 54%, and 23%, respectively, of respondents who initially responded incorrectly changed to a more appropriate decision. Thirty-one percent of participants made consistently appropriate decisions. Among participants who made ≥1 incorrect decision, 35.7% made no decision changes after receiving information. In the scenario in which either a decision to withhold or to administer vaccine was acceptable, 19% of respondents refused a patient's request for vaccine. Targeted information is necessary but insufficient to change the process of vaccine administration decision making. Providers need additional education to enable them to apply evidence, overcome cognitive decision-making errors, and involve patients in vaccine decisions.
Pilot age and error in air taxi crashes.
Rebok, George W; Qiang, Yandong; Baker, Susan P; Li, Guohua
2009-07-01
The associations of pilot error with the type of flight operations and basic weather conditions are well documented. The correlation between pilot characteristics and error is less clear. This study aims to examine whether pilot age is associated with the prevalence and patterns of pilot error in air taxi crashes. Investigation reports from the National Transportation Safety Board for crashes involving non-scheduled Part 135 operations (i.e., air taxis) in the United States between 1983 and 2002 were reviewed to identify pilot error and other contributing factors. Crash circumstances and the presence and type of pilot error were analyzed in relation to pilot age using Chi-square tests. Of the 1751 air taxi crashes studied, 28% resulted from mechanical failure, 25% from loss of control at landing or takeoff, 7% from flight under visual flight rules into instrument meteorological conditions, 7% from fuel starvation, 5% from taxiing, and 28% from other causes. Crashes among older pilots were more likely to occur during the daytime rather than at night and off airport than on airport. The patterns of pilot error in air taxi crashes were similar across age groups. Of the errors identified, 27% were flawed decisions, 26% inattentiveness, 23% mishandling of aircraft kinetics, 15% mishandling of wind and/or runway conditions, and 11% other errors. Pilot age is associated with crash circumstances but not with the prevalence and patterns of pilot error in air taxi crashes. Lack of age-related differences in pilot error may be attributable to the "safe worker effect."
Hooper, Brionny J; O'Hare, David P A
2013-08-01
Human error classification systems theoretically allow researchers to analyze postaccident data in an objective and consistent manner. The Human Factors Analysis and Classification System (HFACS) framework is one such practical analysis tool that has been widely used to classify human error in aviation. The Cognitive Error Taxonomy (CET) is another. It has been postulated that the focus on interrelationships within HFACS can facilitate the identification of the underlying causes of pilot error. The CET provides increased granularity at the level of unsafe acts. The aim was to analyze the influence of factors at higher organizational levels on the unsafe acts of front-line operators and to compare the errors of fixed-wing and rotary-wing operations. This study analyzed 288 aircraft incidents involving human error from an Australasian military organization occurring between 2001 and 2008. Action errors accounted for almost twice (44%) the proportion of rotary wing compared to fixed wing (23%) incidents. Both classificatory systems showed significant relationships between precursor factors such as the physical environment, mental and physiological states, crew resource management, training and personal readiness, and skill-based, but not decision-based, acts. The CET analysis showed different predisposing factors for different aspects of skill-based behaviors. Skill-based errors in military operations are more prevalent in rotary wing incidents and are related to higher level supervisory processes in the organization. The Cognitive Error Taxonomy provides increased granularity to HFACS analyses of unsafe acts.
Understanding Decision Making in Critical Care
Lighthall, Geoffrey K.; Vazquez-Guillamet, Cristina
2015-01-01
Background: Human decision making involves the deliberate formulation of hypotheses and plans as well as the use of subconscious means of judging probability, likely outcome, and proper action. Rationale: There is a growing recognition that intuitive strategies such as use of heuristics and pattern recognition described in other industries are applicable to high-acuity environments in medicine. Despite the applicability of theories of cognition to the intensive care unit, a discussion of decision-making strategies is currently absent in the critical care literature. Content: This article provides an overview of known cognitive strategies, as well as a synthesis of their use in critical care. By understanding the ways by which humans formulate diagnoses and make critical decisions, we may be able to minimize errors in our own judgments as well as build training activities around known strengths and limitations of cognition. PMID:26387708
Youth Attitude Tracking Study II Wave 17 -- Fall 1986.
1987-06-01
No abstract was captured for this report; only table-of-contents fragments were extracted, listing sections on Segmentation Analyses, the Methodology of YATS II (Sampling Design Overview), Appendix A (Sampling Design, Estimation Procedures, and Estimated Sampling Errors), and Appendix B (Data Collection Procedures).
4 CFR 21.14 - Request for reconsideration.
Code of Federal Regulations, 2010 CFR
2010-01-01
... GOVERNMENT ACCOUNTABILITY OFFICE GENERAL PROCEDURES BID PROTEST REGULATIONS § 21.14 Request for... reconsideration of a bid protest decision. GAO will not consider a request for reconsideration that does not... deemed warranted, specifying any errors of law made or information not previously considered. (b) A...
Criteria for assessing problem solving and decision making in complex environments
NASA Technical Reports Server (NTRS)
Orasanu, Judith
1993-01-01
Training crews to cope with unanticipated problems in high-risk, high-stress environments requires models of effective problem solving and decision making. Existing decision theories use the criteria of logical consistency and mathematical optimality to evaluate decision quality. While these approaches are useful under some circumstances, the assumptions underlying these models frequently are not met in dynamic time-pressured operational environments. Also, applying formal decision models is both labor and time intensive, a luxury often lacking in operational environments. Alternate approaches and criteria are needed. Given that operational problem solving and decision making are embedded in ongoing tasks, evaluation criteria must address the relation between those activities and satisfaction of broader task goals. Effectiveness and efficiency become relevant for judging reasoning performance in operational environments. New questions must be addressed: What is the relation between the quality of decisions and overall performance by crews engaged in critical high risk tasks? Are different strategies most effective for different types of decisions? How can various decision types be characterized? A preliminary model of decision types found in air transport environments will be described along with a preliminary performance model based on an analysis of 30 flight crews. The performance analysis examined behaviors that distinguish more and less effective crews (based on performance errors). Implications for training and system design will be discussed.
Driving out errors through tight integration between software and automation.
Reifsteck, Mark; Swanson, Thomas; Dallas, Mary
2006-01-01
A clear case has been made for using clinical IT to improve medication safety, particularly bar-code point-of-care medication administration and computerized practitioner order entry (CPOE) with clinical decision support. The equally important role of automation has been overlooked. When the two are tightly integrated, with pharmacy information serving as a hub, the distinctions between software and automation become blurred. A true end-to-end medication management system drives out errors from the dockside to the bedside. Presbyterian Healthcare Services in Albuquerque has been building such a system since 1999, beginning by automating pharmacy operations to support bar-coded medication administration. Encouraged by those results, it then began layering on software to further support clinician workflow and improve communication, culminating with the deployment of CPOE and clinical decision support. This combination, plus a hard-wired culture of safety, has resulted in a dramatically lower mortality and harm rate that could not have been achieved with a partial solution.
Efficient boundary hunting via vector quantization
NASA Astrophysics Data System (ADS)
Diamantini, Claudia; Panti, Maurizio
2001-03-01
A great amount of information about a classification problem is contained in those instances falling near the decision boundary. This intuition dates back to the earliest studies in pattern recognition and to more recent adaptive approaches to so-called boundary hunting, such as the work of Aha et al. on Instance Based Learning and the work of Vapnik et al. on Support Vector Machines. The latter work is of particular interest, since theoretical and experimental results ensure the accuracy of boundary reconstruction. However, its optimization approach has heavy computational and memory requirements, which limits its application to huge amounts of data. In the paper we describe an alternative approach to boundary hunting based on adaptive labeled quantization architectures. The adaptation is performed by a stochastic gradient algorithm for the minimization of the error probability. Error probability minimization guarantees an accurate approximation of the optimal decision boundary, while the use of a stochastic gradient algorithm provides an efficient method to reach such an approximation. In the paper, comparisons to Support Vector Machines are considered.
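The abstract does not give the algorithm in detail, so the following is only a hedged sketch of the general idea of adapting labeled codevectors toward the decision boundary, written as a plain LVQ1-style stochastic update on toy Gaussian data; the authors' method minimizes error probability directly and differs in its update rule.

```python
# LVQ1-style sketch of boundary hunting with a labeled vector quantizer.
import numpy as np

rng = np.random.default_rng(0)

# Two-class toy data: Gaussians separated along the first axis.
X = np.vstack([rng.normal(-1, 1, (200, 2)), rng.normal(+1, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

# A few labeled codevectors per class.
proto = rng.normal(0, 0.5, (6, 2))
proto_y = np.array([0, 0, 0, 1, 1, 1])

lr = 0.05
for epoch in range(30):
    for i in rng.permutation(len(X)):
        j = np.argmin(np.linalg.norm(proto - X[i], axis=1))   # nearest codevector
        step = lr * (X[i] - proto[j])
        proto[j] += step if proto_y[j] == y[i] else -step      # attract / repel
    lr *= 0.95                                                 # stochastic-gradient style decay

# Classify by nearest labeled codevector.
pred = proto_y[np.argmin(np.linalg.norm(X[:, None] - proto[None], axis=2), axis=1)]
print("training accuracy:", (pred == y).mean())
```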
Multi-Criteria Decision Making Approaches for Quality Control of Genome-Wide Association Studies
Malovini, Alberto; Rognoni, Carla; Puca, Annibale; Bellazzi, Riccardo
2009-01-01
Experimental errors in the genotyping phases of a Genome-Wide Association Study (GWAS) can lead to false positive findings and to spurious associations. An appropriate quality control phase could minimize the effects of this kind of error. Several filtering criteria can be used to perform quality control. Currently, no formal methods have been proposed for taking these criteria and the experimenter's preferences into account at the same time. In this paper we propose two strategies for setting appropriate genotyping rate thresholds for GWAS quality control. These two approaches are based on Multi-Criteria Decision Making theory. We applied our method to a real dataset composed of 734 individuals affected by Arterial Hypertension (AH) and 486 nonagenarians without a history of AH. The proposed strategies appear to deal with GWAS quality control in a sound way, as they rationalize and make explicit the experimenter's choices, thus providing more reproducible results. PMID:21347174
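A minimal sketch of how several criteria and experimenter preferences might be combined to choose a genotyping-rate (call-rate) threshold is given below. It uses a simple weighted-sum score over candidate thresholds; the criteria, weights, and simulated call rates are invented for illustration, and the paper's MCDM strategies are more elaborate.

```python
# Weighted-sum multi-criteria selection of a call-rate threshold (toy example).
import numpy as np

rng = np.random.default_rng(1)
call_rate = rng.beta(40, 1, size=5000)             # simulated per-SNP call rates

candidates = [0.90, 0.95, 0.97, 0.99]
weights = {"snps_kept": 0.4, "data_quality": 0.6}  # experimenter preferences (assumed)

def normalise(v):
    v = np.asarray(v, dtype=float)
    return (v - v.min()) / (v.max() - v.min() + 1e-12)

snps_kept = [np.mean(call_rate >= t) for t in candidates]   # more retained SNPs is better
quality   = list(candidates)                                # stricter threshold ~ cleaner data

score = (weights["snps_kept"] * normalise(snps_kept)
         + weights["data_quality"] * normalise(quality))
best = candidates[int(np.argmax(score))]
print("chosen call-rate threshold:", best)
```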
Tiwari, Ruchi; Tsapepas, Demetra S; Powell, Jaclyn T; Martin, Spencer T
2013-01-01
Healthcare organizations continue to adopt information technologies with clinical decision support (CDS) to prevent potential medication-related adverse drug events. End-users who are unfamiliar with certain high-risk patient populations are at an increased risk of unknowingly causing medication errors. The following case describes a heart transplant recipient exposed to supra-therapeutic concentrations of tacrolimus during co-administration of ritonavir as a result of vendor supplied CDS tools that omitted an interaction alert. After review of 4692 potential tacrolimus-based DDIs between 329 different drug pairs supplied by vendor CDS, the severity of 20 DDIs were downgraded and the severity of 62 were upgraded. The need for institution-specific customization of vendor-provided CDS is paramount to ensure avoidance of medication errors. Individualized care will become more important as patient populations and institutions become more specialized. In the future, vendors providing integrated CDS tools must be proactive in developing institution-specific and easily customizable CDS tools.
Four principles for user interface design of computerised clinical decision support systems.
Kanstrup, Anne Marie; Christiansen, Marion Berg; Nøhr, Christian
2011-01-01
The paper presents results from a design research project of a user interface (UI) for a Computerised Clinical Decision Support System (CDSS). The ambition has been to design Human-Computer Interaction (HCI) that can minimise medication errors. Through an iterative design process, a digital prototype for prescription of medicine has been developed. This paper presents results from the formative evaluation of the prototype, conducted in a simulation laboratory with ten participating physicians. Data from the simulation are analysed using theory on how users perceive information. The conclusion is a model that sums up four principles of interaction for the design of CDSS. The four principles for the design of user interfaces for CDSS are summarised as four A's: All in one, At a glance, At hand and Attention. The model emphasises integration of all four interaction principles in the design of user interfaces for CDSS, i.e. it is an integrated model that we suggest as a guide for interaction design aimed at preventing medication errors.
Kluge, Annette; Grauel, Britta; Burkolter, Dina
2013-03-01
Two studies are presented in which the design of a procedural aid and the impact of an additional decision aid for process control were assessed. In Study 1, a procedural aid was developed that avoids imposing unnecessary extraneous cognitive load on novices when controlling a complex technical system. This newly designed procedural aid positively affected germane load, attention, satisfaction, motivation, knowledge acquisition and diagnostic speed for novel faults. In Study 2, the effect of a decision aid for use before the procedural aid was investigated, which was developed based on an analysis of diagnostic errors committed in Study 1. Results showed that novices were able to diagnose both novel faults and practised faults, and were even faster at diagnosing novel faults. This research contributes to the question of how to optimally support novices in dealing with technical faults in process control. Copyright © 2012 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Nievas-Cazorla, Francisco; Soriano-Ferrer, Manuel; Sánchez-López, Pilar
2016-01-01
The aim of this study was to compare the reaction times and errors of Spanish children with developmental dyslexia to the reaction times and errors of readers without dyslexia on a masked lexical decision task with identity or repetition priming. A priming paradigm was used to study the role of the lexical deficit in dyslexic children, manipulating the frequency and length of the words, with a short Stimulus Onset Asynchrony (SOA = 150 ms) and degraded stimuli. The sample consisted of 80 participants from 9 to 14 years old, divided equally into a group with a developmental dyslexia diagnosis and a control group without dyslexia. Results show that identity priming is higher in control children (133 ms) than in dyslexic children (55 ms). Thus, the "frequency" and "word length" variables are not the source or origin of this reduction in identity priming reaction times in children with developmental dyslexia compared to control children.
Mesolimbic Dopamine Signals the Value of Work
Hamid, Arif A.; Pettibone, Jeffrey R.; Mabrouk, Omar S.; Hetrick, Vaughn L.; Schmidt, Robert; Vander Weele, Caitlin M.; Kennedy, Robert T.; Aragona, Brandon J.; Berke, Joshua D.
2015-01-01
Dopamine cell firing can encode errors in reward prediction, providing a learning signal to guide future behavior. Yet dopamine is also a key modulator of motivation, invigorating current behavior. Existing theories propose that fast (“phasic”) dopamine fluctuations support learning, while much slower (“tonic”) dopamine changes are involved in motivation. We examined dopamine release in the nucleus accumbens across multiple time scales, using complementary microdialysis and voltammetric methods during adaptive decision-making. We first show that minute-by-minute dopamine levels covary with reward rate and motivational vigor. We then show that second-by-second dopamine release encodes an estimate of temporally-discounted future reward (a value function). We demonstrate that changing dopamine immediately alters willingness to work, and reinforces preceding action choices by encoding temporal-difference reward prediction errors. Our results indicate that dopamine conveys a single, rapidly-evolving decision variable, the available reward for investment of effort, that is employed for both learning and motivational functions. PMID:26595651
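The reward-prediction-error signal referenced above is the delta term of temporal-difference learning. The sketch below shows a minimal TD(0) value-learning loop on a toy chain task; the task structure and learning parameters are chosen purely for illustration and are not taken from the study.

```python
# Minimal TD(0) sketch: delta plays the role attributed to phasic dopamine,
# while the learned value V plays the role of the motivational decision variable.
import numpy as np

n_states, alpha, gamma = 5, 0.1, 0.9
V = np.zeros(n_states)                               # value estimates

for episode in range(500):
    s = 0
    while s < n_states - 1:
        s_next = s + 1
        r = 1.0 if s_next == n_states - 1 else 0.0   # reward only at the end
        delta = r + gamma * V[s_next] - V[s]         # temporal-difference error
        V[s] += alpha * delta
        s = s_next

print(np.round(V, 2))   # values ramp up toward the rewarded state
```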
NASA Astrophysics Data System (ADS)
Batra, Arun; Zeidler, James R.; Beex, A. A. Louis
2007-12-01
It has previously been shown that a least-mean-square (LMS) decision-feedback filter can mitigate the effect of narrowband interference (L.-M. Li and L. Milstein, 1983). An adaptive implementation of the filter was shown to converge relatively quickly for mild interference. It is shown here, however, that in the case of severe narrowband interference, the LMS decision-feedback equalizer (DFE) requires a very large number of training symbols for convergence, making it unsuitable for some types of communication systems. This paper investigates the introduction of an LMS prediction-error filter (PEF) as a prefilter to the equalizer and demonstrates that it reduces the convergence time of the two-stage system by as much as two orders of magnitude. It is also shown that the steady-state bit-error rate (BER) performance of the proposed system is still approximately equal to that attained in steady-state by the LMS DFE-only. Finally, it is shown that the two-stage system can be implemented without the use of training symbols. This two-stage structure lowers the complexity of the overall system by reducing the number of filter taps that need to be adapted, while incurring a slight loss in the steady-state BER.
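A hedged sketch of the prefiltering idea follows: an LMS prediction-error filter predicts the highly predictable narrowband tone from past received samples and outputs the prediction error, which is largely free of the interference. The toy signal model, filter length, and step size are assumptions, and the decision-feedback stage that would follow is omitted.

```python
# LMS prediction-error filter (PEF) suppressing a narrowband interferer (toy model).
import numpy as np

rng = np.random.default_rng(0)
n, L, mu = 5000, 16, 0.005

symbols = rng.choice([-1.0, 1.0], size=n)               # wideband BPSK-like signal
tone = 3.0 * np.cos(2 * np.pi * 0.05 * np.arange(n))    # severe narrowband interferer
x = symbols + tone + 0.1 * rng.standard_normal(n)

w = np.zeros(L)
e = np.zeros(n)
for k in range(L, n):
    past = x[k - L:k][::-1]          # regression vector of past samples
    pred = w @ past                  # one-step prediction (captures the tone)
    e[k] = x[k] - pred               # prediction error = interference-suppressed output
    w += mu * e[k] * past            # LMS update

print("interference-to-signal power ratio at input:", np.var(tone) / np.var(symbols))
print("residual output power (last 1000 samples)  :", np.var(e[-1000:]))
```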
NASA Astrophysics Data System (ADS)
Kollat, J. B.; Reed, P. M.
2009-12-01
This study contributes the ASSIST (Adaptive Strategies for Sampling in Space and Time) framework for improving long-term groundwater monitoring decisions across space and time while accounting for the influences of systematic model errors (or predictive bias). The ASSIST framework combines contaminant flow-and-transport modeling, bias-aware ensemble Kalman filtering (EnKF) and many-objective evolutionary optimization. Our goal in this work is to provide decision makers with a fuller understanding of the information tradeoffs they must confront when performing long-term groundwater monitoring network design. Our many-objective analysis considers up to 6 design objectives simultaneously and consequently synthesizes prior monitoring network design methodologies into a single, flexible framework. This study demonstrates the ASSIST framework using a tracer study conducted within a physical aquifer transport experimental tank located at the University of Vermont. The tank tracer experiment was extensively sampled to provide high resolution estimates of tracer plume behavior. The simulation component of the ASSIST framework consists of stochastic ensemble flow-and-transport predictions using ParFlow coupled with the Lagrangian SLIM transport model. The ParFlow and SLIM ensemble predictions are conditioned with tracer observations using a bias-aware EnKF. The EnKF allows decision makers to enhance plume transport predictions in space and time in the presence of uncertain and biased model predictions by conditioning them on uncertain measurement data. In this initial demonstration, the position and frequency of sampling were optimized to: (i) minimize monitoring cost, (ii) maximize information provided to the EnKF, (iii) minimize failure to detect the tracer, (iv) maximize the detection of tracer flux, (v) minimize error in quantifying tracer mass, and (vi) minimize error in quantifying the moment of the tracer plume. The results demonstrate that the many-objective problem formulation provides a tremendous amount of information for decision makers. Specifically our many-objective analysis highlights the limitations and potentially negative design consequences of traditional single and two-objective problem formulations. These consequences become apparent through visual exploration of high-dimensional tradeoffs and the identification of regions with interesting compromise solutions. The prediction characteristics of these compromise designs are explored in detail, as well as their implications for subsequent design decisions in both space and time.
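To make the filtering component concrete, the sketch below shows a standard perturbed-observation ensemble Kalman filter analysis step, the kind of update used to condition plume predictions on tracer observations. Dimensions, the observation operator, and error statistics are toy assumptions, and the bias-aware augmentation described in the study is omitted.

```python
# One EnKF analysis step: condition a forecast ensemble on uncertain observations.
import numpy as np

rng = np.random.default_rng(0)
n_state, n_obs, n_ens = 20, 3, 50

X = rng.normal(1.0, 0.5, (n_state, n_ens))       # forecast ensemble (e.g., concentrations)
H = np.zeros((n_obs, n_state))
H[0, 2] = H[1, 9] = H[2, 15] = 1.0               # observe three monitoring locations
R = 0.05 * np.eye(n_obs)                         # observation-error covariance
y = np.array([1.2, 0.8, 1.1])                    # measured tracer concentrations

Xm = X.mean(axis=1, keepdims=True)
A = X - Xm                                       # ensemble anomalies
P = A @ A.T / (n_ens - 1)                        # sample forecast covariance
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain

# Perturbed-observation update of every ensemble member.
Y = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
X_a = X + K @ (Y - H @ X)

print("analysis mean at observed cells:", (H @ X_a.mean(axis=1)).round(2))
```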
Image-based topology for sensor gridlocking and association
NASA Astrophysics Data System (ADS)
Stanek, Clay J.; Javidi, Bahram; Yanni, Philip
2002-07-01
Correlation engines have been evolving since the implementation of radar. In modern sensor fusion architectures, correlation and gridlock filtering are required to produce common, continuous, and unambiguous tracks of all objects in the surveillance area. The objective is to provide a unified picture of the theatre or area of interest to battlefield decision makers, ultimately enabling them to make better inferences for future action and eliminate fratricide by reducing ambiguities. Here, correlation refers to association, which in this context is track-to-track association. A related process, gridlock filtering or gridlocking, refers to the reduction in navigation errors and sensor misalignment errors so that one sensor's track data can be accurately transformed into another sensor's coordinate system. As platforms gain multiple sensors, the correlation and gridlocking of tracks become significantly more difficult. Much of the existing correlation technology revolves around various interpretations of the generalized Bayesian decision rule: choose the action that minimizes conditional risk. One implementation of this principle equates the risk minimization statement to the comparison of ratios of a priori probability distributions to thresholds. The binary decision problem phrased in terms of likelihood ratios is also known as the famed Neyman-Pearson hypothesis test. Using another restatement of the principle for a symmetric loss function, risk minimization leads to a decision that maximizes the a posteriori probability distribution. Even for deterministic decision rules, situations can arise in correlation where there are ambiguities. For these situations, a common algorithm used is a sparse assignment technique such as the Munkres or JVC algorithm. Furthermore, associated tracks may be combined with the hope of reducing the positional uncertainty of a target or object identified by an existing track from the information of several fused/correlated tracks. Gridlocking is typically accomplished with some type of least-squares algorithm, such as the Kalman filtering technique, which attempts to locate the best bias error vector estimate from a set of correlated/fused track pairs. Here, we will introduce a new approach to this longstanding problem by adapting many of the familiar concepts from pattern recognition, ones certainly familiar to target recognition applications. Furthermore, we will show how this technique can lend itself to specialized processing, such as that available through an optical or hybrid correlator.
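A hedged sketch of the two conventional steps mentioned above, assignment-based track-to-track association followed by a least-squares estimate of a constant sensor bias from the associated pairs, is given below on simulated track pairs; the geometry, noise levels, and the use of a pure translation bias are assumptions for illustration.

```python
# Track-to-track association (Munkres/JVC-style assignment) plus bias estimation.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
truth = rng.uniform(0, 100, (8, 2))                  # common objects seen by both sensors
bias = np.array([2.0, -1.5])                         # unknown offset of sensor B
tracks_a = truth + 0.3 * rng.standard_normal((8, 2))
tracks_b = truth + bias + 0.3 * rng.standard_normal((8, 2))

# Association: minimize total pairwise distance (deterministic decision rule).
cost = np.linalg.norm(tracks_a[:, None] - tracks_b[None], axis=2)
row, col = linear_sum_assignment(cost)

# Gridlocking: the least-squares estimate of a constant translation bias is simply
# the mean residual over the associated pairs.
bias_hat = (tracks_b[col] - tracks_a[row]).mean(axis=0)
print("estimated bias:", bias_hat.round(2), "  true bias:", bias)
```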
Study of a co-designed decision feedback equalizer, deinterleaver, and decoder
NASA Technical Reports Server (NTRS)
Peile, Robert E.; Welch, Loyd
1990-01-01
A technique that promises better-quality data from band-limited channels at lower received power in digital transmission systems is presented. Data transmission in such systems often suffers from intersymbol interference (ISI) and noise. Two separate techniques, channel coding and equalization, have driven considerable advances in communication systems, and both concern themselves with removing the undesired effects of a communication channel. Equalizers mitigate ISI, whereas coding schemes incorporate error correction. In the past, most of the research in these two areas has been carried out separately. However, the individual techniques have strengths and weaknesses that are complementary in many applications: an integrated approach realizes gains in excess of those of a simple juxtaposition. Coding schemes have been successfully used in cascade with linear equalizers, which in the absence of ISI provide excellent performance. However, when both ISI and the noise level are relatively high, nonlinear receivers like the decision feedback equalizer (DFE) perform better. The DFE has its drawbacks: it suffers from error propagation. The technique presented here takes advantage of interleaving to integrate the two approaches so that error propagation in the DFE can be reduced with the help of the error correction provided by the decoder. The results of simulations carried out for both binary and non-binary channels confirm that significant gain can be obtained by co-designing the equalizer and decoder. Although only systems with time-invariant channels and a simple DFE with linear filters were examined, the technique is fairly general and can easily be extended to more sophisticated equalizers to obtain even larger gains.
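For orientation, the sketch below implements a bare-bones decision-feedback equalizer on a toy ISI channel with known, fixed taps; it illustrates the error-propagation-prone feedback loop the paper targets, but does not reproduce the co-designed interleaver/decoder feedback described above. The channel model and tap values are assumptions.

```python
# Bare-bones DFE: feedback taps cancel ISI from past symbol decisions.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
symbols = rng.choice([-1.0, 1.0], size=n)
h = np.array([1.0, 0.5, 0.3])                 # ISI channel impulse response (assumed known)
r = np.convolve(symbols, h)[:n] + 0.05 * rng.standard_normal(n)

fb = h[1:]                                    # feedback taps cancel the channel tail
decisions = np.zeros(n)
for k in range(n):
    isi = sum(fb[j] * decisions[k - 1 - j] for j in range(len(fb)) if k - 1 - j >= 0)
    decisions[k] = 1.0 if (r[k] - isi) >= 0 else -1.0   # slicer decision, then fed back

print("symbol error rate:", np.mean(decisions != symbols))
```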
Kuwabara, Masaru; Mansouri, Farshad A.; Buckley, Mark J.
2014-01-01
Monkeys were trained to select one of three targets by matching in color or matching in shape to a sample. Because the matching rule frequently changed and there were no cues for the currently relevant rule, monkeys had to maintain the relevant rule in working memory to select the correct target. We found that monkeys' error commission was not limited to the period after the rule change and occasionally occurred even after several consecutive correct trials, indicating that the task was cognitively demanding. In trials immediately after such error trials, monkeys' speed of selecting targets was slower. Additionally, in trials following consecutive correct trials, the monkeys' target selections for erroneous responses were slower than those for correct responses. We further found evidence for the involvement of the cortex in the anterior cingulate sulcus (ACCs) in these error-related behavioral modulations. First, ACCs cell activity differed between after-error and after-correct trials. In another group of ACCs cells, the activity differed depending on whether the monkeys were making a correct or erroneous decision in target selection. Second, bilateral ACCs lesions significantly abolished the response slowing both in after-error trials and in error trials. The error likelihood in after-error trials could be inferred by the error feedback in the previous trial, whereas the likelihood of erroneous responses after consecutive correct trials could be monitored only internally. These results suggest that ACCs represent both context-dependent and internally detected error likelihoods and promote modes of response selections in situations that involve these two types of error likelihood. PMID:24872558
Improved HDRG decoders for qudit and non-Abelian quantum error correction
NASA Astrophysics Data System (ADS)
Hutter, Adrian; Loss, Daniel; Wootton, James R.
2015-03-01
Hard-decision renormalization group (HDRG) decoders are an important class of decoding algorithms for topological quantum error correction. Due to their versatility, they have been used to decode systems with fractal logical operators, color codes, qudit topological codes, and non-Abelian systems. In this work, we develop a method of performing HDRG decoding which combines strengths of existing decoders and further improves upon them. In particular, we increase the minimal number of errors necessary for a logical error in a system of linear size L from Θ(L^(2/3)) to Ω(L^(1−ε)) for any ε > 0. We apply our algorithm to decoding D(Z_d) quantum double models and a non-Abelian anyon model with Fibonacci-like fusion rules, and show that it indeed significantly outperforms previous HDRG decoders. Furthermore, we provide the first study of continuous error correction with imperfect syndrome measurements for the D(Z_d) quantum double models. The parallelized runtime of our algorithm is poly(log L) for the perfect measurement case. In the continuous case with imperfect syndrome measurements, the averaged runtime is O(1) for Abelian systems, while continuous error correction for non-Abelian anyons stays an open problem.
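As a rough illustration of the hard-decision renormalization-group idea, the sketch below performs only the clustering stage for a Z_2 (toric-code-like) syndrome: defects are merged at geometrically growing distance scales until every cluster is neutral. Lattice size and defect positions are toy assumptions, and the qudit and non-Abelian charge bookkeeping of the actual decoder, as well as the final correction step, are omitted.

```python
# Heavily simplified HDRG-style clustering of Z_2 syndrome defects.
L = 16                                              # linear lattice size (periodic)
defects = [(1, 1), (1, 2), (8, 3), (10, 3), (14, 14), (0, 15)]

def dist(a, b):
    """Periodic (toroidal) Manhattan distance."""
    dx = min(abs(a[0] - b[0]), L - abs(a[0] - b[0]))
    dy = min(abs(a[1] - b[1]), L - abs(a[1] - b[1]))
    return dx + dy

clusters = [[d] for d in defects]                   # start with singleton clusters
scale = 1
while any(len(c) % 2 for c in clusters) and scale < L:
    merged = []
    for c in clusters:                              # merge clusters within 'scale'
        for m in merged:
            if any(dist(a, b) <= scale for a in c for b in m):
                m.extend(c)
                break
        else:
            merged.append(list(c))
    clusters = merged
    scale *= 2                                      # renormalization: double the scale

for c in clusters:
    print("cluster of", len(c), "defects ->",
          "neutral" if len(c) % 2 == 0 else "charged")
```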
Error behaviors associated with loss of competency in Alzheimer's disease.
Marson, D C; Annis, S M; McInturff, B; Bartolucci, A; Harrell, L E
1999-12-10
To investigate qualitative behavioral changes associated with declining medical decision-making capacity (competency) in patients with AD. Qualitative measures can yield clinical information about functional changes in neurologic disease not available through quantitative measures. Normal older controls (n = 21) and patients with mild and moderate probable AD (n = 72) were compared using a standardized competency measure and neuropsychological measures. A system of 16 qualitative error scores representing conceptual domains of language, executive dysfunction, affective dysfunction, and compensatory responses was used to analyze errors produced on the competency measure. Patterns of errors were examined across groups. Relationships between error behaviors and competency performance were determined, and neurocognitive correlates of specific error behaviors were identified. AD patients demonstrated more miscomprehension, factual confusion, intrusions, incoherent responses, nonresponsive answers, loss of task, and delegation than controls. Errors in the executive domain (loss of task, nonresponsive answer, and loss of detachment) were key predictors of declining competency performance by AD patients. Neuropsychological analyses in the AD group generally confirmed the conceptual domain assignments of the qualitative scores. Loss of task, nonresponsive answers, and loss of detachment were key behavioral changes associated with declining competency of AD patients and with neurocognitive measures of executive dysfunction. These findings support the growing linkage between executive dysfunction and competency loss.
Double ErrP Detection for Automatic Error Correction in an ERP-Based BCI Speller.
Cruz, Aniana; Pires, Gabriel; Nunes, Urbano J
2018-01-01
A brain-computer interface (BCI) is a useful device for people with severe motor disabilities. However, due to its low speed and low reliability, BCI still has very limited application in daily real-world tasks. This paper proposes a P300-based BCI speller combined with double error-related potential (ErrP) detection to automatically correct erroneous decisions. This novel approach introduces a second error-detection step to infer whether a wrong automatic correction also elicits a second ErrP. Thus, two single-trial responses, instead of one, contribute to the final selection, improving the reliability of error detection. Moreover, to increase error detection, the evoked potential detected as target by the P300 classifier is combined with the evoked error potential at the feature level. Discriminable error and positive potentials (responses to correct feedback) were clearly identified. The proposed approach was tested on nine healthy participants and one tetraplegic participant. The online average accuracies for the first and second ErrPs were 88.4% and 84.8%, respectively. With automatic correction, spelling accuracy improved by around 5 percentage points, reaching 89.9% at an effective rate of 2.92 symbols/min. These results show that double ErrP detection can improve the reliability and speed of BCI systems.