Neural mechanisms and personality correlates of the sunk cost effect
Fujino, Junya; Fujimoto, Shinsuke; Kodaka, Fumitoshi; Camerer, Colin F.; Kawada, Ryosaku; Tsurumi, Kosuke; Tei, Shisei; Isobe, Masanori; Miyata, Jun; Sugihara, Genichi; Yamada, Makiko; Fukuyama, Hidenao; Murai, Toshiya; Takahashi, Hidehiko
2016-01-01
The sunk cost effect, an interesting and well-known maladaptive behavior, is pervasive in real life, and thus has been studied in various disciplines, including economics, psychology, organizational behavior, politics, and biology. However, the neural mechanisms underlying the sunk cost effect have not been clearly established, nor has its association with individual differences in susceptibility to the effect. Using functional magnetic resonance imaging, we investigated neural responses induced by sunk costs along with measures of core human personality. We found that individuals who tend to adhere to social rules and regulations (who are high in measured agreeableness and conscientiousness) are more susceptible to the sunk cost effect. Furthermore, this behavioral observation was strongly mediated by insula activity during sunk cost decision-making. Tight coupling between the insula and lateral prefrontal cortex was also observed during decision-making under sunk costs. Our findings reveal how individual differences can affect decision-making under sunk costs, thereby contributing to a better understanding of the psychological and neural mechanisms of the sunk cost effect. PMID:27611212
Magalhães, Paula; Geoffrey White, K
2016-05-01
The sunk cost effect is the bias or tendency to persist in a course of action due to prior investments of effort, money or time. At the time of the only review on the sunk cost effect across species (Arkes & Ayton, 1999), research with nonhuman animals had been ecological in nature, and the findings about the effect of past investments on current choice were inconclusive. However, in the last decade a new line of experimental, laboratory-based research has emerged with the promise of revolutionizing the way we approach the study of the sunk cost effect in nonhumans. In the present review we challenge Arkes and Ayton's conclusion that the sunk cost effect is exclusive to humans, and describe evidence for the sunk cost effect in nonhuman animals. By doing so, we also challenge the current explanations for the sunk cost effect in humans, as they are not applicable to nonhumans. We argue that a unified theory is called for, because different independent variables, in particular investment amount, have the same influence on the sunk cost effect across species. Finally, we suggest possible psychological mechanisms shared across species, namely contrast and depreciation, that could explain the sunk cost effect. © 2016 Society for the Experimental Analysis of Behavior.
The Interpersonal Sunk-Cost Effect.
Olivola, Christopher Y
2018-05-01
The sunk-cost fallacy, pursuing an inferior alternative merely because we have previously invested significant but nonrecoverable resources in it, represents a striking violation of rational decision making. Whereas theoretical accounts and empirical examinations of the sunk-cost effect have generally been based on the assumption that it is a purely intrapersonal phenomenon (i.e., solely driven by one's own past investments), the present research demonstrates that it is also an interpersonal effect (i.e., people will alter their choices in response to other people's past investments). Across eight experiments (N = 6,076) covering diverse scenarios, I documented sunk-cost effects when the costs are borne by someone other than the decision maker. Moreover, the interpersonal sunk-cost effect is not moderated by social closeness or whether other people observe their sunk costs being "honored." These findings uncover a previously undocumented bias, reveal that the sunk-cost effect is a much broader phenomenon than previously thought, and pose interesting challenges for existing accounts of this fascinating human tendency.
How does cognitive dissonance influence the sunk cost effect?
Chung, Shao-Hsi; Cheng, Kuo-Chih
2018-01-01
Background: The sunk cost effect describes the situation in which individuals are willing to continue to invest capital in a failing project. The purpose of this study was to explain such irrational behavior by exploring how sunk costs affect individuals' willingness to continue investing in an unfavorable project and to understand the role of cognitive dissonance in the sunk cost effect. Methods: This study used an experimental questionnaire survey of managers of firms listed on the Taiwan Stock Exchange and Over-The-Counter market. Results: The empirical results show that cognitive dissonance does not mediate the relationship between sunk costs and willingness to continue an unfavorable investment project. However, cognitive dissonance has a moderating effect: only when the level of cognitive dissonance is high does the sunk cost have a significantly positive impact on willingness to continue with an unfavorable investment. Conclusion: This study offers psychological mechanisms to explain the sunk cost effect based on the theory of cognitive dissonance, and it also provides some recommendations for corporate management. PMID:29535561
Assessment of the sunk-cost effect in clinical decision-making.
Braverman, Jennifer A; Blumenthal-Barby, J S
2012-07-01
Despite the current push toward the practice of evidence-based medicine and comparative effectiveness research, clinicians' decisions may be influenced not only by evidence, but also by cognitive biases. A cognitive bias describes a tendency to make systematic errors in certain circumstances based on cognitive factors rather than evidence. Though health care providers have been shown in several studies to be susceptible to a variety of types of cognitive biases, research on the role of the sunk-cost bias in clinical decision-making is extremely limited. The sunk-cost bias is the tendency to pursue a course of action, even after it has proved to be suboptimal, because resources have been invested in that course of action. This study explores whether health care providers' medical treatment recommendations are affected by prior investments in a course of treatment. Specifically, we surveyed 389 health care providers in a large urban medical center in the United States during August 2009. We asked participants to make a treatment recommendation based on one of four hypothetical clinical scenarios that varied in the source and type of prior investment described. By comparing recommendations across scenarios, we found that providers did not demonstrate a sunk-cost effect; rather, they demonstrated a significant tendency to over-compensate for the effect. In addition, we found that more than one in ten health care providers recommended continuation of an ineffective treatment. Copyright © 2012 Elsevier Ltd. All rights reserved.
The Sunk Cost Effect with Pigeons: Some Determinants of Decisions about Persistence
ERIC Educational Resources Information Center
Macaskill, Anne C.; Hackenberg, Timothy D.
2012-01-01
The sunk cost effect occurs when an individual persists following an initial investment, even when persisting is costly in the long run. The current study used a laboratory model of the sunk cost effect. Two response alternatives were available: Pigeons could persist by responding on a schedule key with mixed ratio requirements, or escape by…
On the Treatment of Fixed and Sunk Costs in the Principles Textbooks
ERIC Educational Resources Information Center
Colander, David
2004-01-01
The author argues that, although the standard principles level treatment of fixed and sunk costs has problems, it is logically consistent as long as all fixed costs are assumed to be sunk costs. As long as the instructor makes that assumption clear to students, the costs of making the changes recently suggested by X. Henry Wang and Bill Z. Yang in…
Debiasing the mind through meditation: mindfulness and the sunk-cost bias.
Hafenbrack, Andrew C; Kinias, Zoe; Barsade, Sigal G
2014-02-01
In the research reported here, we investigated the debiasing effect of mindfulness meditation on the sunk-cost bias. We conducted four studies (one correlational and three experimental); the results suggest that increased mindfulness reduces the tendency to allow unrecoverable prior costs to influence current decisions. Study 1 served as an initial correlational demonstration of the positive relationship between trait mindfulness and resistance to the sunk-cost bias. Studies 2a and 2b were laboratory experiments examining the effect of a mindfulness-meditation induction on increased resistance to the sunk-cost bias. In Study 3, we examined the mediating mechanisms of temporal focus and negative affect, and we found that the sunk-cost bias was attenuated by drawing one's temporal focus away from the future and past and by reducing state negative affect, both of which were accomplished through mindfulness meditation.
The Sunk Cost Effect in Pigeons and Humans
ERIC Educational Resources Information Center
Navarro, Anton D.; Fantino, Edmund
2005-01-01
The sunk cost effect is the increased tendency to persist in an endeavor once an investment of money, effort, or time has been made. To date, humans are the only animal in which this effect has been observed unambiguously. We developed a behavior-analytic model of the sunk cost effect to explore the potential for this behavior in pigeons as well…
Sunk costs, psychological symptomology, and help seeking.
Jarmolowicz, David P; Bickel, Warren K; Sofis, Michael J; Hatz, Laura E; Mueller, E Terry
2016-01-01
Individuals often allow prior investments of time, money or effort to influence their current behavior. A tendency to allow previous investments to impact further investment, referred to as the sunk-cost fallacy, may be related to adverse psychological health. Unfortunately, little is known about the relation between the sunk-cost fallacy and psychological symptoms or help seeking. The current study used a relatively novel approach (i.e., Amazon.com's Mechanical Turk crowdsourcing [AMT] service) to examine various aspects of psychological health in internet users (n = 1,053) who did and did not commit the sunk-cost fallacy. In this observational study, individuals logged on to AMT, selected the "decision making survey" from the array of currently available tasks, and completed the approximately 200-question survey (which included a two-trial sunk cost task, the Brief Symptom Inventory 18, the Binge Eating Scale, portions of the SF-8 health survey, and other questions about treatment utilization). Individuals who committed the fallacy reported more symptoms related to binge eating disorder and depression and reported being more bothered by emotional problems, yet they waited longer to seek assistance when feeling ill. The current findings are discussed in relation to promoting help-seeking behavior among individuals who commit this logical fallacy.
A query theory account of the effect of memory retrieval on the sunk cost bias.
Ting, Hsuchi; Wallsten, Thomas S
2011-08-01
The sunk cost bias occurs when individuals continue to invest in the same option when better alternatives are available. Many researchers believe that this bias is due to overemphasizing the past investment over the (missed) opportunities offered by alternatives. As an alternative or complement to this view, we show that memory retrieval and attention play important roles in the sunk cost bias. In two experiments, individuals generated more reasons for pursuing the invested option than for an alternative; they generated those reasons earlier in a sequence of reasons; and these effects increased as the individuals made progress toward attaining the reward yielded by the invested option. Associated with these effects, individuals perceived an increasingly wide gap in value between the invested and alternative options as they progressed toward the goal, thereby creating the sunk cost bias. Forcing individuals to reverse the order in which they generated reasons for the invested and alternative options reduced the bias.
Fujimaki, Shun; Sakagami, Takayuki
2016-01-01
The sunk cost fallacy is one of the irrational choice behaviors robustly observed in humans. This fallacy can be defined as a preference for a higher-cost alternative over a lower-cost one after previous investment in the higher-cost alternative. The present study examined this irrational choice by exposing pigeons to several types of trials signaled by different colors. We prepared three types of non-choice trials, in which pigeons experienced different outcomes after presentation of the same or different colors as the alternatives, and three types of choice trials for testing whether pigeons demonstrated irrational choice. In non-choice trials, animals experienced one of the following: (1) no reinforcement after the presentation of a colored stimulus unrelated to the alternatives used in the choice situation, (2) no reinforcement after investment in the lower-cost alternative, or (3) reinforcement or no reinforcement after investment in the higher-cost alternative. In choice trials, animals were required to choose in the following three situations: (A) higher-cost vs. lower-cost alternatives, (B) higher-cost vs. lower-cost alternatives after some investment in the higher-cost alternative, and (C) higher-cost vs. lower-cost alternatives after the presentation of an unrelated colored stimulus. From the definition of the sunk cost fallacy, we assumed that animals would exhibit this fallacy if they preferred the higher-cost alternative in situation (B) compared with (A) or (C). We arranged several conditions, each comprising different combinations of the three types of non-choice trials, and tested preference in the three types of choice trials. Pigeons committed the sunk cost fallacy only in the condition that contained non-choice trials of type (3), i.e., when pigeons experienced reinforcement after investing in the higher-cost alternative. This result suggests that the sunk cost fallacy might be caused by the experience of reinforcement after investing in the higher-cost alternative. PMID:27014166
What Were They Thinking? Reducing Sunk-Cost Bias in a Life-Span Sample
Strough, JoNell; Bruine de Bruin, Wändi; Parker, Andrew M.; Karns, Tara; Lemaster, Philip; Pichayayothin, Nipat; Delaney, Rebecca; Stoiko, Rachel
2016-01-01
We tested interventions to reduce “sunk-cost bias,” the tendency to continue investing in failing plans even when those plans have soured and are no longer rewarding. We showed members of a national U.S. life-span panel a hypothetical scenario about a failing plan that was halfway complete. Participants were randomly assigned to an intervention to focus on how to improve the situation, an intervention to focus on thoughts and feelings, or a no-intervention control group. First, we found that the thoughts and feelings intervention reduced sunk-cost bias in decisions about project completion, as compared to the improvement intervention and the no-intervention control. Second, older age was associated with greater willingness to cancel the failing plan across all three groups. Third, we found that introspection processes helped to explain the effectiveness of the interventions. Specifically, the larger reduction in sunk-cost bias as observed in the thoughts and feelings intervention (vs. the improvement intervention) was associated with suppression of future-oriented thoughts of eventual success, and with suppression of augmentations of the scenario that could make it seem reasonable to continue the plan. Fourth, we found that introspection processes were related to age differences in decisions. Older people were less likely to mention future-oriented thoughts of eventual success associated with greater willingness to continue the failing plan. We discuss factors to consider when designing interventions for reducing sunk-cost bias. PMID:27831712
DDG 1000 Zumwalt Class Destroyer (DDG 1000)
2013-12-01
Missile Defense Radar is the most cost-effective solution to fleet air and missile defense requirements. The Secretary of the Navy notified Congress...not reach an affordable solution and deliveries of these components for DDG 1002 were becoming time-critical. The Navy concurrently pursued a steel...DD(X) Construction (Shared) (Sunk) 2464 DD(X) Sys Design, Dev & Integration (Shared) (Sunk) 2465 DC Survivability (Shared) (Sunk) 2466 MFR
Sunk cost and work ethic effects reflect suboptimal choice between different work requirements.
Magalhães, Paula; White, K Geoffrey
2013-03-01
We investigated suboptimal choice between different work requirements in pigeons (Columba livia), namely the sunk cost effect, an irrational tendency to persist with an initial investment, despite the availability of a better option. Pigeons chose between two keys, one with a fixed work requirement to food of 20 pecks (left key), and the other with a work requirement to food which varied across conditions (center key). On some trials within each session, such choices were preceded by an investment of 35 pecks on the center key, whereas on others they were not. On choice trials preceded by the investment, the pigeons tended to stay and complete the schedule associated with the center key, even when the number of pecks to obtain reward was greater than for the concurrently available left key. This result indicates that pigeons, like humans, commit the sunk cost effect. With higher work requirements, this preference was extended to trials where there was no initial investment, so an overall preference for the key associated with more work was evident, consistent with the work ethic effect. We conclude that a more general work ethic effect is amplified by the effect of the prior investment, that is, the sunk cost effect. Copyright © 2013 Elsevier B.V. All rights reserved.
Feldman, Gilad; Wong, Kin Fai Ellick
2018-04-01
Escalation of commitment to a failing course of action occurs in the presence of (a) sunk costs, (b) negative feedback that things are deviating from expectations, and (c) a decision between escalation and de-escalation. Most of the literature to date has focused on sunk costs, yet we offer a new perspective on the classic escalation-of-commitment phenomenon by focusing on the impact of negative feedback. On the basis of the inaction-effect bias, we theorized that negative feedback results in the tendency to take action, regardless of what that action may be. In four experiments, we demonstrated that people facing escalation-decision situations were indeed action oriented and that framing escalation as action and de-escalation as inaction resulted in a stronger tendency to escalate than framing de-escalation as action and escalation as inaction (mini-meta-analysis effect d = 0.37, 95% confidence interval = [0.21, 0.53]).
Rats behave optimally in a sunk cost task.
Yáñez, Nataly; Bouzas, Arturo; Orduña, Vladimir
2017-07-01
The sunk cost effect has been defined as the tendency to persist in an alternative once an investment of effort, time or money has been made, even if better options are available. The goal of this study was to investigate in rats the relationship between sunk cost and the information about when it is optimal to leave the situation, which was studied by Navarro and Fantino (2005) with pigeons. They developed a procedure in which different fixed-ratio schedules were randomly presented, with the richest one being more likely; subjects could persist in the trial until they obtained the reinforcer, or start a new trial in which the most favorable option would be available with a high probability. The information about the expected number of responses needed to obtain the reinforcer was manipulated through the presence or absence of discriminative stimuli; also, they used different combinations of schedule values and their probabilities of presentation to generate escape-optimal and persistence-optimal conditions. They found optimal behavior in the conditions with discriminative stimuli present, but non-optimal behavior when they were absent. Unlike their results, we found optimal behavior in both conditions, regardless of whether discriminative stimuli were present; rats seemed to use the number of responses already emitted in the trial as a criterion for escaping. In contrast to pigeons, rats behaved optimally and the sunk cost effect was not observed. Copyright © 2017 Elsevier B.V. All rights reserved.
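To make the optimality argument above concrete, here is a rough back-of-the-envelope sketch (in Python) of why escaping can beat persisting in a mixed fixed-ratio procedure of this kind. The particular schedule values and probabilities (a rich FR 10 on 75% of trials, a lean FR 100 on 25%) are illustrative assumptions, not the parameters used in this study or in Navarro and Fantino (2005).

schedules = {10: 0.75, 100: 0.25}   # fixed-ratio requirement -> probability of that schedule

def expected_remaining_if_persisting(n):
    """Expected additional responses to food, given n unreinforced responses so far."""
    still_possible = {s: p for s, p in schedules.items() if s > n}
    total_p = sum(still_possible.values())
    return sum(p / total_p * (s - n) for s, p in still_possible.items())

def expected_cost_of_fresh_trial():
    """Expected responses to food on a brand-new trial (ignoring further escapes,
    so this is a simplified, conservative benchmark)."""
    return sum(p * s for s, p in schedules.items())

for n in (0, 10, 20):
    print(f"after {n:2d} responses: persist ~{expected_remaining_if_persisting(n):5.1f}, "
          f"restart ~{expected_cost_of_fresh_trial():5.1f}")

# Once 10 responses have gone unreinforced, the trial must be the lean FR 100 schedule:
# persisting costs ~90 more responses, whereas an average fresh trial costs ~32.5,
# so escaping is optimal despite the responses already "sunk" in the current trial.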
Motivational Reasons for Biased Decisions: The Sunk-Cost Effect’s Instrumental Rationality
Domeier, Markus; Sachse, Pierre; Schäfer, Bernd
2018-01-01
The present study describes the mechanism of need regulation, which accompanies so-called “biased” decisions. We hypothesized an unconscious urge for psychological need satisfaction as the trigger for cognitive biases. In an experimental study (N = 106), participants had the opportunity to win money in a functionality test. In the test, they could either use the solution they had developed (sunk cost) or an alternative solution that offered a higher probability of winning. The sunk-cost option (SCO) was the most frequently chosen option, supporting the hypothesis of this study. The reason behind the majority of participants choosing the SCO seemed to be the satisfaction of psychological needs, despite a reduced chance of winning money. An intervention aimed at triggering self-reflection had no impact on the decision. The findings of this study contribute to the discussion on the reasons for cognitive biases and their formation in the human mind. Moreover, it discusses the application of the label “irrational” for biased decisions and proposes reasons for instrumental rationality, which exist at an unconscious, need-regulative level. PMID:29881366
Getting older isn't all that bad: better decisions and coping when facing "sunk costs".
Bruine de Bruin, Wändi; Strough, JoNell; Parker, Andrew M
2014-09-01
Because people of all ages face decisions that affect their quality of life, decision-making competence is important across the life span. According to theories of rational decision making, one crucial decision skill involves the ability to discontinue failing commitments despite irrecoverable investments, also referred to as "sunk costs." We find that older adults are better than younger adults at making decisions to discontinue such failing commitments, especially when irrecoverable losses are large, as well as at coping with the associated irrecoverable losses. Our results are relevant to interventions that aim to promote better decision-making competence across the life span. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Culture Moderates Biases in Search Decisions.
Pattaratanakun, Jake A; Mak, Vincent
2015-08-01
Prior studies suggest that people often search insufficiently in sequential-search tasks compared with the predictions of benchmark optimal strategies that maximize expected payoff. However, those studies were mostly conducted in individualist Western cultures; Easterners from collectivist cultures, with their higher susceptibility to escalation of commitment induced by sunk search costs, could exhibit a reversal of this undersearch bias by searching more than optimally, but only when search costs are high. We tested our theory in four experiments. In our pilot experiment, participants generally undersearched when search cost was low, but only Eastern participants oversearched when search cost was high. In Experiments 1 and 2, we obtained evidence for our hypothesized effects via a cultural-priming manipulation on bicultural participants in which we manipulated the language used in the program interface. We obtained further process evidence for our theory in Experiment 3, in which we made sunk costs nonsalient in the search task; as expected, cross-cultural effects were largely mitigated. © The Author(s) 2015.
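For readers unfamiliar with the "benchmark optimal strategies" such sequential-search experiments use as a reference point, the sketch below works through a standard reservation-value rule. The offer distribution (uniform on [0, 1]) and the two search-cost levels are assumptions for illustration, not the actual design of these experiments.

import random

def reservation_value(search_cost):
    # For offers drawn uniformly from [0, 1], the optimal reservation value r* solves
    # search_cost = E[(X - r*)+] = (1 - r*)**2 / 2, i.e. r* = 1 - sqrt(2 * search_cost).
    return 1 - (2 * search_cost) ** 0.5

def simulate_search(search_cost, trials=10_000, seed=0):
    rng = random.Random(seed)
    r_star = reservation_value(search_cost)
    draws = []
    for _ in range(trials):
        n = 1
        while rng.random() < r_star:   # keep searching while the current offer is below r*
            n += 1
        draws.append(n)
    return r_star, sum(draws) / trials

for c in (0.01, 0.10):                 # "low" vs. "high" search cost
    r_star, avg_draws = simulate_search(c)
    print(f"cost {c:.2f}: reservation value {r_star:.2f}, optimal searcher draws ~{avg_draws:.1f} offers")

# Under-searching means stopping before this rule would; the over-searching reported above
# for high search costs means continuing past it, consistent with escalation driven by sunk
# search costs.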
Decision-making competence and attempted suicide.
Szanto, Katalin; Bruine de Bruin, Wändi; Parker, Andrew M; Hallquist, Michael N; Vanyukov, Polina M; Dombrovski, Alexandre Y
2015-12-01
The propensity of people vulnerable to suicide to make poor life decisions is increasingly well documented. Do they display an extreme degree of decision biases? The present study used a behavioral-decision approach to examine the susceptibility of low-lethality and high-lethality suicide attempters to common decision biases that may ultimately obscure alternative solutions and deterrents to suicide in a crisis. We assessed older and middle-aged (42-97 years) individuals who made high-lethality (medically serious) (n = 31) and low-lethality suicide attempts (n = 29). Comparison groups included suicide ideators (n = 30), nonsuicidal depressed participants (n = 53), and psychiatrically healthy participants (n = 28). Attempters, ideators, and nonsuicidal depressed participants had nonpsychotic major depression (DSM-IV criteria). Decision biases included sunk cost (inability to abort an action for which costs are irrecoverable), framing (responding to superficial features of how a problem is presented), underconfidence/overconfidence (appropriateness of confidence in knowledge), and inconsistent risk perception. Data were collected between June 2010 and February 2014. Both high- and low-lethality attempters were more susceptible to framing effects as compared to the other groups included in this study (P ≤ .05, ηp2 = 0.06). In contrast, low-lethality attempters were more susceptible to sunk costs than both the comparison groups and high-lethality attempters (P ≤ .01, ηp2 = 0.09). These group differences remained after accounting for age, global cognitive performance, and impulsive traits. Premorbid IQ partially explained group differences in framing effects. Suicide attempters' failure to resist framing may reflect their inability to consider a decision from an objective standpoint in a crisis. Failure of low-lethality attempters to resist sunk cost may reflect their tendency to confuse past and future costs of their behavior, lowering their threshold for acting on suicidal thoughts. © Copyright 2015 Physicians Postgraduate Press, Inc.
30 CFR 203.68 - What pre-application costs will BSEE consider in determining economic viability?
Code of Federal Regulations, 2012 CFR
2012-07-01
... in determining economic viability? 203.68 Section 203.68 Mineral Resources BUREAU OF SAFETY AND... determining economic viability? (a) We will not consider ineligible costs as set forth in § 203.89(h) in determining economic viability for purposes of royalty relief. (b) We will consider sunk costs according to...
30 CFR 203.68 - What pre-application costs will BSEE consider in determining economic viability?
Code of Federal Regulations, 2013 CFR
2013-07-01
... in determining economic viability? 203.68 Section 203.68 Mineral Resources BUREAU OF SAFETY AND... determining economic viability? (a) We will not consider ineligible costs as set forth in § 203.89(h) in determining economic viability for purposes of royalty relief. (b) We will consider sunk costs according to...
30 CFR 203.68 - What pre-application costs will MMS consider in determining economic viability?
Code of Federal Regulations, 2011 CFR
2011-07-01
... determining economic viability? 203.68 Section 203.68 Mineral Resources BUREAU OF OCEAN ENERGY MANAGEMENT... economic viability? (a) We will not consider ineligible costs as set forth in § 203.89(h) in determining economic viability for purposes of royalty relief. (b) We will consider sunk costs according to the...
30 CFR 203.68 - What pre-application costs will BSEE consider in determining economic viability?
Code of Federal Regulations, 2014 CFR
2014-07-01
... in determining economic viability? 203.68 Section 203.68 Mineral Resources BUREAU OF SAFETY AND... determining economic viability? (a) We will not consider ineligible costs as set forth in § 203.89(h) in determining economic viability for purposes of royalty relief. (b) We will consider sunk costs according to...
76 FR 4254 - Raisins Produced From Grapes Grown in California; Increased Assessment Rate
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-25
... accounting period to another. Because these are ``sunk'' costs, like rent, salaries and other related... committee recommended $1,745,000 to cover salaries for all 18 committee employees, vacation accruals...
Thomas P. Holmes; Will Allen; Robert G. Haight; E. Carina H. Keskitalo; Mariella Marzano; Maria Pettersson; Christopher P. Quine; E. R. Langer
2017-01-01
National and international efforts to manage forest biosecurity create tension between opposing sources of ecological and economic irreversibility. Phytosanitary policies designed to protect national borders from biological invasions incur sunk costs deriving from economic and political irreversibilities that incentivize wait-and-see decision-making. However, the...
Essays on Experimental Economics and Education
ERIC Educational Resources Information Center
Ogawa, Scott Richard
2013-01-01
In Chapter 1 I consider three separate explanations for how price affects the usage rate of a purchased product: Screening, signaling, and sunk-cost bias. I propose an experimental design that disentangles the three effects. Furthermore, in order to quantify and compare these effects I introduce a simple structural model and show that the…
Micro-Level Adaptation, Macro-Level Selection, and the Dynamics of Market Partitioning
García-Díaz, César; van Witteloostuijn, Arjen; Péli, Gábor
2015-01-01
This paper provides a micro-foundation for dual market structure formation through partitioning processes in marketplaces by developing a computational model of interacting economic agents. We propose an agent-based modeling approach, where firms are adaptive and profit-seeking agents entering into and exiting from the market according to their (lack of) profitability. Our firms are characterized by large and small sunk costs, respectively. They locate their offerings along a unimodal demand distribution over a one-dimensional product variety, with the distribution peak constituting the center and the tails standing for the peripheries. We found that large firms may first advance toward the most abundant demand spot, the market center, and release peripheral positions as predicted by extant dual market explanations. However, we also observed that large firms may then move back toward the market fringes to reduce competitive niche overlap in the center, triggering nonlinear resource occupation behavior. Novel results indicate that resource release dynamics depend on firm-level adaptive capabilities, and that a minimum scale of production for low sunk cost firms is key to the formation of the dual structure. PMID:26656107
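The following toy sketch (Python) gives a sense of the kind of agent-based simulation described above: profit-seeking firms enter at random positions on a one-dimensional product space with unimodal demand and exit when unprofitable, with "large" firms carrying broad offerings and high sunk costs and "small" firms carrying narrow offerings and low sunk costs. Every parameter value and update rule here is an illustrative assumption, not a reproduction of the authors' model.

import random

random.seed(1)
GRID = [i / 100 for i in range(101)]                  # product-variety positions in [0, 1]
demand = {x: 1.0 - abs(x - 0.5) for x in GRID}        # unimodal demand, peaked at the center

LARGE = {"width": 0.20, "sunk_cost": 12.0}            # broad niche, high fixed (sunk) cost
SMALL = {"width": 0.05, "sunk_cost": 1.5}             # narrow niche, low fixed (sunk) cost

firms = []                                            # each firm: {"pos": ..., "type": ...}

def profits():
    """Split demand at each position equally among firms covering it; subtract sunk costs."""
    earned = {id(f): 0.0 for f in firms}
    for x, d in demand.items():
        covering = [f for f in firms if abs(f["pos"] - x) <= f["type"]["width"]]
        for f in covering:
            earned[id(f)] += d / len(covering)
    return {id(f): earned[id(f)] - f["type"]["sunk_cost"] for f in firms}

for step in range(500):
    # entry: a prospective firm of a random type tries a random position
    kind = random.choice([LARGE, SMALL])
    firms.append({"pos": random.choice(GRID), "type": kind})
    # exit: unprofitable firms leave the market
    p = profits()
    firms = [f for f in firms if p[id(f)] >= 0.0]

center = [f for f in firms if abs(f["pos"] - 0.5) <= 0.15]
print("surviving firms:", len(firms))
print("large firms near the center:", sum(f["type"] is LARGE for f in center), "of", len(center))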
Escalation of Commitment in the Surgical ICU.
Braxton, Carla C; Robinson, Celia N; Awad, Samir S
2017-04-01
Escalation of commitment is a business term that describes the continued investment of resources into a project even after there is objective evidence of the project's impending failure. Escalation of commitment may be a contributor to high healthcare costs associated with critically ill patients, as it has been shown that, despite almost certain futility, most ICU costs are incurred in the last week of life. Our objective was to determine if escalation of commitment occurs in healthcare settings, specifically in the surgical ICU. We hypothesize that factors previously identified in the business and organizational psychology literature, including self-justification, accountability, sunk costs, and cognitive dissonance, result in escalation of commitment behavior in the surgical ICU setting, resulting in increased utilization of resources and cost. We present a descriptive case study that illustrates common ICU narratives in which escalation of commitment can occur. In addition, we describe factors that are thought to contribute to escalation of commitment behaviors. Escalation of commitment behavior was observed, with self-justification, accountability, and cognitive dissonance accounting for the majority of the behavior. Unlike in business decisions, sunk costs were not as evident. In addition, modulating factors such as personality, individual experience, culture, and gender were identified as contributors to escalation of commitment. Escalation of commitment occurs in the surgical ICU, resulting in significant expenditure of resources despite a predicted and often known poor outcome. Recognition of this phenomenon may lead to actions aimed at more rational decision making and may contribute to lowering healthcare costs. Investigation of objective measures that can help aid decision making in the surgical ICU is warranted.
Mineral Revenues: Potential Cost to Repurchase Offshore Oil and Gas Leases
1991-02-22
Interior to compensate the owner for the lesser of (1) the fair value of the lease at the time of cancellation (fair value) or... might be cancelled, we did not examine the fair value approach. Instead, as agreed with your office, we used the sunk cost approach to estimate the... range from $889.4 million to $970.7 million depending on the interest rate used. The compensation to lessees under the fair value approach
Li, Sean S; Copeland-Halperin, Libby R; Kaminsky, Alexander J; Li, Jihui; Lodhi, Fahad K; Miraliakbari, Reza
2018-06-01
Computer-aided surgical simulation (CASS) has redefined surgery, improved precision and reduced the reliance on intraoperative trial-and-error manipulations. CASS is provided by third-party services; however, it may be cost-effective for some hospitals to develop in-house programs. This study provides the first cost analysis comparison among traditional (no CASS), commercial CASS, and in-house CASS for head and neck reconstruction. The costs of three-dimensional (3D) pre-operative planning for mandibular and maxillary reconstructions were obtained from an in-house CASS program at our large tertiary care hospital in Northern Virginia, as well as a commercial provider (Synthes, Paoli, PA). A cost comparison was performed among these modalities and extrapolated in-house CASS costs were derived. The calculations were based on estimated CASS use with cost structures similar to our institution, and sunk costs were amortized over 10 years. Average operating room time was estimated at 10 hours, with an average of 2 hours saved with CASS. The hourly cost to the hospital for the operating room (including anesthesia and other ancillary costs) was estimated at $4,614/hour. Per case, traditional cases were $46,140, commercial CASS cases were $40,951, and in-house CASS cases were $38,212. Annual in-house CASS costs were $39,590. CASS reduced operating room time, likely due to improved efficiency and accuracy. Our data demonstrate that hospitals with a cost structure similar to ours that perform more than 27 cases of 3D head and neck reconstruction per year can see a financial benefit from developing an in-house CASS program.
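A minimal sketch of the break-even arithmetic behind the comparison reported above. The per-case and annual dollar figures are those quoted in the abstract; how costs split between per-case and annual buckets is an assumption of this sketch, so the threshold it prints is illustrative rather than a reproduction of the paper's 27-case figure.

OR_HOUR_COST = 4_614          # $/hour of operating-room time (incl. anesthesia and ancillary costs)
TRADITIONAL_HOURS = 10        # average OR time without CASS

TRADITIONAL_PER_CASE = OR_HOUR_COST * TRADITIONAL_HOURS   # $46,140, matching the abstract
COMMERCIAL_CASS_PER_CASE = 40_951
IN_HOUSE_CASS_PER_CASE = 38_212       # per-case cost once an in-house program exists (assumed split)
IN_HOUSE_ANNUAL_PROGRAM = 39_590      # yearly cost of running the in-house program (assumed split)

def annual_cost(cases, per_case, annual_fixed=0):
    return cases * per_case + annual_fixed

for cases in (10, 27, 40):
    traditional = annual_cost(cases, TRADITIONAL_PER_CASE)
    commercial = annual_cost(cases, COMMERCIAL_CASS_PER_CASE)
    in_house = annual_cost(cases, IN_HOUSE_CASS_PER_CASE, IN_HOUSE_ANNUAL_PROGRAM)
    print(f"{cases:3d} cases/yr: traditional ${traditional:,.0f}  "
          f"commercial CASS ${commercial:,.0f}  in-house CASS ${in_house:,.0f}")

# Generic break-even volume against commercial CASS: annual program cost / per-case saving.
break_even = IN_HOUSE_ANNUAL_PROGRAM / (COMMERCIAL_CASS_PER_CASE - IN_HOUSE_CASS_PER_CASE)
print(f"under these assumptions, in-house pays off beyond ~{break_even:.0f} cases per year")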
Inaction Inertia, the Sunk Cost Effect, and Handedness: Avoiding the Losses of Past Decisions
ERIC Educational Resources Information Center
Westfall, Jonathan E.; Jasper, John D.; Christman, Stephen
2012-01-01
Strength of handedness, or the degree to which an individual prefers to use a single hand to perform various tasks, is a neurological marker for brain organization and has been shown to be linked to episodic memory, attribute framing, and anchoring, as well as other domains and tasks. The present work explores the relationship of handedness to…
ERIC Educational Resources Information Center
Hawthorn-Embree, Meredith L.; Taylor, Emily P.; Skinner, Christopher H.; Parkhurst, John; Nalls, Meagan L.
2014-01-01
After students acquire a skill, mastery often requires them to choose to engage in assigned academic activities (e.g., independent seatwork, and homework). Although students may be more likely to choose to work on partially completed assignments than on new assignments, the partial assignment completion (PAC) effect may not be very powerful. The…
NASA Astrophysics Data System (ADS)
Nickson, Robert
2012-06-01
Regarding John Chubb's comment that the Titanic might not have sunk if it had included some longitudinal watertight bulkheads (May p22), this issue was actually discussed during the Board of Trade enquiry after the ship sank.
Vaccine supply: a cross-national perspective.
Danzon, Patricia M; Pereira, Nuno Sousa; Tejwani, Sapna S
2005-01-01
In U.S. vaccine markets, competing producers with high fixed, sunk costs face relatively concentrated demand. This tends to lead to exit of all but one or very few producers per vaccine. Detailed evidence of exits and shortages in the flu vaccine market demonstrates the importance of high fixed costs, demand uncertainty, and dynamic quality competition. A comparison of vaccine suppliers in four industrialized countries compared with the United States shows that smaller foreign markets often have more and different vaccine suppliers. High, country-specific, fixed costs, combined with price and volume uncertainty, plausibly deters these potential suppliers from attempting to enter the U.S. market.
U.S. Department of Defense Official Website
Japanese Navy attacked Pearl Harbor, Dec. 7, 1941, five of eight U.S. battleships were sunk or sinking and more than 2
Essays on Industry Response to Energy and Environmental Policy
NASA Astrophysics Data System (ADS)
Sweeney, Richard Leonard
This dissertation consists of three essays on the relationship between firm incentives and energy and environmental policy outcomes. Chapters 1 and 2 study the impact of the 1990 Clean Air Act Amendments on the United States oil refining industry. This legislation imposed extensive restrictions on refined petroleum product markets, requiring select end users to purchase new, cleaner versions of gasoline and diesel. In Chapter 1, I estimate the static impact of this intervention on refining costs, product prices and consumer welfare. Isolating these effects is complicated by several challenges likely to appear in other regulatory settings, including overlap between regulated and non-regulated markets and deviations from perfect competition. Using a rich database of refinery operations, I estimate a structural model that incorporates each of these dimensions, and then use this cost structure to simulate policy counterfactuals. I find that the policies increased gasoline production costs by 7 cents per gallon and diesel costs by 3 cents per gallon on average, although these costs varied considerably across refineries. As a result of these restrictions, consumers in regulated markets experienced welfare losses on the order of $3.7 billion per year, but this welfare loss was partially offset by gains of $1.5 billion per year among consumers in markets not subject to regulation. The results highlight the importance of accounting for imperfect competition and market spillovers when assessing the cost of environmental regulation. Chapter 2 estimates the sunk costs incurred by United States oil refineries as a result of the low sulfur diesel program. The complex, regionally integrated nature of the industry poses many challenges for estimating these costs. I overcome them by placing the decision to invest in sulfur removal technology within the framework of a two-period model and estimate the model using moment inequalities. I find that the regulation induced between $2.8 billion and $3.3 billion worth of investment in order to produce this new fuel. The results highlight the importance of accounting for sunk costs when evaluating environmental regulation, and suggest that the estimation approach used here might provide a viable way to estimate the sunk costs of other environmental policies. Chapter 3, coauthored with Hunt Allcott, turns to the retail market for water heaters to study the topic of energy efficiency. We run a natural field experiment at a large nationwide retailer to measure the effects of energy use information disclosure, customer rebates, and sales agent incentives on demand for energy efficient durable goods. We find that while a combination of large rebates plus sales incentives substantially increases market share, information and sales incentives alone each have zero statistical effect and explain at most a small fraction of the low baseline market share. Sales agents strategically comply only partially with the experiment, targeting information at more interested consumers but not discussing energy efficiency with the disinterested majority. These results suggest that at current prices in this context, seller-provided information is not a major barrier to energy efficiency investments. We theoretically and empirically explore the novel policy option of combining customer subsidies with government-provided sales incentives.
Two-part payments for the reimbursement of investments in health technologies.
Levaggi, Rosella; Moretto, Michele; Pertile, Paolo
2014-04-01
The paper studies the impact of alternative reimbursement systems on two provider decisions: whether to adopt a technology whose provision requires a sunk investment cost and how many patients to treat with it. Using a simple economic model we show that the optimal pricing policy involves a two-part payment: a price equal to the marginal cost of the patient whose benefit of treatment equals the cost of provision, and a separate payment for the partial reimbursement of capital costs. Departures from this scheme, which are frequent in DRG tariff systems designed around the world, lead to a trade-off between the objective of making effective technologies available to patients and the need to ensure appropriateness in use. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
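In symbols (a schematic restatement of the pricing result summarized above; the notation is ours, not the paper's), the two-part payment can be sketched as

    T(q) = K + p·q,   with   p = c'(q*)   and   b(q*) = c'(q*),

where q is the number of patients treated with the technology, c'(q) is the marginal cost of treating the q-th patient, b(q) is that patient's benefit from treatment, q* is the volume at which the marginal patient's benefit just equals the cost of provision, and K is a lump-sum payment (with 0 < K < I) that partially reimburses the sunk investment cost I. Paying only the per-patient price p would under-reward adoption of technologies with large sunk costs, while folding all of I into the per-patient tariff would encourage treating patients beyond q*, which is the trade-off the abstract describes.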
1991-09-01
to Britain by Germany, the British ship Lusitania was sunk, 7 May 1915, by a German submarine. Included in the 1,198 passengers killed were 128...against neutral shipping. Seven months to the day after the Lusitania was sunk, President Wilson asked Congress for an increase in military funds. On February 3
Processing implicit control: evidence from reading times
McCourt, Michael; Green, Jeffrey J.; Lau, Ellen; Williams, Alexander
2015-01-01
Sentences such as “The ship was sunk to collect the insurance” exhibit an unusual form of anaphora, implicit control, where neither anaphor nor antecedent is audible. The non-finite reason clause has an understood subject, PRO, that is anaphoric; here it may be understood as naming the agent of the event of the host clause. Yet since the host is a short passive, this agent is realized by no audible dependent. The putative antecedent to PRO is therefore implicit, which it normally cannot be. What sorts of representations subserve the comprehension of this dependency? Here we present four self-paced reading time studies directed at this question. Previous work showed no processing cost for implicit vs. explicit control, and took this to support the view that PRO is linked syntactically to a silent argument in the passive. We challenge this conclusion by reporting that we also find no processing cost for remote implicit control, as in: “The ship was sunk. The reason was to collect the insurance.” Here the dependency crosses two independent sentences, and so cannot, we argue, be mediated by syntax. Our Experiments 1–4 examined the processing of both implicit (short passive) and explicit (active or long passive) control in both local and remote configurations. Experiments 3 and 4 added either “3 days ago” or “just in order” to the local conditions, to control for the distance between the passive and infinitival verbs, and for the predictability of the reason clause, respectively. We replicate the finding that implicit control does not impose an additional processing cost. But critically we show that remote control does not impose a processing cost either. Reading times at the reason clause were never slower when control was remote. In fact they were always faster. Thus, efficient processing of local implicit control cannot show that implicit control is mediated by syntax; nor, in turn, that there is a silent but grammatically active argument in passives. PMID:26579016
Principles and methods of managerial cost-accounting systems.
Suver, J D; Cooper, J C
1988-01-01
An introduction to cost-accounting systems for pharmacy managers is provided; terms are defined and examples of specific applications are given. Cost-accounting systems determine, record, and report the resources consumed in providing services. An effective cost-accounting system must provide the information needed for both internal and external reports. In accounting terms, cost is the value given up to secure an asset. In determining how volumes of activity affect costs, fixed costs and variable costs are calculated; applications include pricing strategies, cost determinations, and break-even analysis. Also discussed are the concepts of direct and indirect costs, opportunity costs, and incremental and sunk costs. For most pharmacy department services, process costing, an accounting of intermediate outputs and homogeneous units, is used; in determining the full cost of providing a product or service (e.g., patient stay), job-order costing is used. Development of work-performance standards is necessary for monitoring productivity and determining product costs. In allocating pharmacy department costs, a ratio of costs to charges can be used; this method is convenient, but microcosting (specific identification of the costs of products) is more accurate. Pharmacy managers can use cost-accounting systems to evaluate the pharmacy's strategies, policies, and services and to improve budgets and reports.
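Two of the calculations named above, break-even analysis and cost estimation via the ratio of costs to charges, are easy to make concrete. The numbers in the sketch below are invented for illustration and do not come from the article.

# 1) Break-even analysis: the volume at which revenue covers fixed plus variable costs.
fixed_costs = 120_000          # e.g., annual equipment and salary costs (assumed)
price_per_unit = 25.0          # charge per dose dispensed (assumed)
variable_cost_per_unit = 10.0  # drug and supplies per dose (assumed)

break_even_volume = fixed_costs / (price_per_unit - variable_cost_per_unit)
print(f"break-even at {break_even_volume:,.0f} doses per year")

# 2) Ratio of costs to charges (RCC): estimate a service's cost from its charge using the
#    department-wide ratio -- convenient, but less accurate than microcosting.
department_costs = 900_000      # total department costs (assumed)
department_charges = 1_500_000  # total department charges (assumed)
rcc = department_costs / department_charges

charge_for_service = 180.0
estimated_cost = rcc * charge_for_service
print(f"RCC = {rcc:.2f}; estimated cost of a ${charge_for_service:.0f} charge: ${estimated_cost:.2f}")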
Aerospace Technology Innovation. Volume 9
NASA Technical Reports Server (NTRS)
Turner, Janelle (Editor); Cousins, Liz (Editor)
2001-01-01
Commercializing technology is a daunting task. Of every 11 new product ideas, only one will successfully make it to the marketplace. Fully 46% of new-product investment ends up as sunk cost. Yet a few good companies consistently attain an 80% technology commercialization success rate and have led the way in establishing best practices. The NASA Incubator program consists of nine incubators, each residing near a NASA research center. The purpose of the incubators is to use the best practices of technology commercialization to help early-stage businesses successfully launch new products that incorporate NASA technology.
Modeling the violation of reward maximization and invariance in reinforcement schedules.
La Camera, Giancarlo; Richmond, Barry J
2008-08-08
It is often assumed that animals and people adjust their behavior to maximize reward acquisition. In visually cued reinforcement schedules, monkeys make errors in trials that are not immediately rewarded, despite having to repeat error trials. Here we show that error rates are typically smaller in trials equally distant from reward but belonging to longer schedules (referred to as "schedule length effect"). This violates the principles of reward maximization and invariance and cannot be predicted by the standard methods of Reinforcement Learning, such as the method of temporal differences. We develop a heuristic model that accounts for all of the properties of the behavior in the reinforcement schedule task but whose predictions are not different from those of the standard temporal difference model in choice tasks. In the modification of temporal difference learning introduced here, the effect of schedule length emerges spontaneously from the sensitivity to the immediately preceding trial. We also introduce a policy for general Markov Decision Processes, where the decision made at each node is conditioned on the motivation to perform an instrumental action, and show that the application of our model to the reinforcement schedule task and the choice task are special cases of this general theoretical framework. Within this framework, Reinforcement Learning can approach contextual learning with the mixture of empirical findings and principled assumptions that seem to coexist in the best descriptions of animal behavior. As examples, we discuss two phenomena observed in humans that often derive from the violation of the principle of invariance: "framing," wherein equivalent options are treated differently depending on the context in which they are presented, and the "sunk cost" effect, the greater tendency to continue an endeavor once an investment in money, effort, or time has been made. The schedule length effect might be a manifestation of these phenomena in monkeys.
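As a point of reference for the modeling claim above, the sketch below runs a generic, textbook TD(0) value update (not the authors' modified model; schedule lengths and learning parameters are assumed). It illustrates why the standard temporal-difference account assigns essentially the same value to all states equally distant from reward, and so cannot by itself produce the schedule length effect.

import random

ALPHA, GAMMA = 0.1, 0.9
schedules = [1, 2, 3]          # cued schedules of 1, 2, or 3 trials before the reward
V = {}                         # state = (schedule_length, trials_remaining) -> learned value

random.seed(0)
for episode in range(20_000):
    L = random.choice(schedules)
    for remaining in range(L, 0, -1):          # work through the schedule, trial by trial
        state = (L, remaining)
        reward = 1.0 if remaining == 1 else 0.0
        next_value = 0.0 if remaining == 1 else V.get((L, remaining - 1), 0.0)
        v = V.get(state, 0.0)
        V[state] = v + ALPHA * (reward + GAMMA * next_value - v)   # standard TD(0) update

for L in schedules:
    values = [round(V[(L, r)], 2) for r in range(L, 0, -1)]
    print(f"schedule length {L}: values from first to last trial = {values}")

# States with the same number of trials remaining converge to nearly identical values
# (about GAMMA**(remaining - 1)) regardless of schedule length, so predicted behavior tracks
# only proximity to reward, unlike the monkeys' schedule length effect described above.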
Van Dyken, J. David; Wade, Michael J.
2012-01-01
Nature abounds with a rich variety of altruistic strategies, including public resource enhancement, resource provisioning, communal foraging, alarm calling, and nest defense. Yet, despite their vastly different ecological roles, current theory typically treats diverse altruistic traits as being favored under the same general conditions. Here we introduce greater ecological realism into social evolution theory and find evidence of at least four distinct modes of altruism. Contrary to existing theory, we find that altruistic traits contributing to “resource-enhancement” (e.g., siderophore production, provisioning, agriculture) and “resource-efficiency” (e.g., pack hunting, communication) are most strongly favored when there is strong local competition. These resource-based modes of helping are “K-strategies” that increase a social group’s growth yield, and should characterize species with scarce resources and/or high local crowding caused by low mortality, high fecundity, and/or mortality occurring late in the process of resource-acquisition. The opposite conditions, namely weak local competition (abundant resource, low crowding), favor survival (e.g., nest defense) and fecundity (e.g., nurse workers) altruism, which are “r-strategies” that increase a social group’s growth rate. We find that survival altruism is uniquely favored by a novel evolutionary force that we call “sunk cost selection”. Sunk cost selection favors helping that prevents resources from being wasted on individuals destined to die before reproduction. Our results contribute to explaining the observed natural diversity of altruistic strategies, reveal the necessary connection between the evolution and the ecology of sociality, and correct the widespread but inaccurate view that local competition uniformly impedes the evolution of altruism. PMID:22834747
Van Dyken, J David; Wade, Michael J
2012-08-01
Nature abounds with a rich variety of altruistic strategies, including public resource enhancement, resource provisioning, communal foraging, alarm calling, and nest defense. Yet, despite their vastly different ecological roles, current theory typically treats diverse altruistic traits as being favored under the same general conditions. Here, we introduce greater ecological realism into social evolution theory and find evidence of at least four distinct modes of altruism. Contrary to existing theory, we find that altruistic traits contributing to "resource-enhancement" (e.g., siderophore production, provisioning, agriculture) and "resource-efficiency" (e.g., pack hunting, communication) are most strongly favored when there is strong local competition. These resource-based modes of helping are "K-strategies" that increase a social group's growth yield, and should characterize species with scarce resources and/or high local crowding caused by low mortality, high fecundity, and/or mortality occurring late in the process of resource-acquisition. The opposite conditions, namely weak local competition (abundant resource, low crowding), favor survival (e.g., nest defense) and fecundity (e.g., nurse workers) altruism, which are "r-strategies" that increase a social group's growth rate. We find that survival altruism is uniquely favored by a novel evolutionary force that we call "sunk cost selection." Sunk cost selection favors helping that prevents resources from being wasted on individuals destined to die before reproduction. Our results contribute to explaining the observed natural diversity of altruistic strategies, reveal the necessary connection between the evolution and the ecology of sociality, and correct the widespread but inaccurate view that local competition uniformly impedes the evolution of altruism. © 2012 The Author(s). Evolution© 2012 The Society for the Study of Evolution.
NASA Astrophysics Data System (ADS)
Connolly, Barbara Mary
This dissertation applies theoretical insights from transaction cost economics to explain and predict the organizational form of cooperative agreements between Eastern and Western Europe in areas of regional environmental and political concern. It examines five contracting problems related to nuclear power safety and acid rain, and describes the history of international negotiations to manage these problems. It argues that the level of interdependence in a given issue area, or costly effects experienced in one state due to activities and decisions of other states, along with the level of transactional vulnerability, or sunk costs invested in support of a particular contractual relationship among these states, are key determinants of the governance structures states choose to facilitate cooperation in that issue area. Empirically, the dissertation traces the evolution of three sets of institutional arrangements related to nuclear safety: governance for western nuclear safety assistance to Eastern Europe, negotiations of a global convention on safety standards for nuclear power plants, and contracts among utilities and multilateral banks to build new nuclear power plants in Eastern Europe. Next it studies European acid rain, chronicling the history of international acid rain controls within the UNECE Convention on Long-Range Transboundary Air Pollution (LRTAP) and the European Union, and finally examining institutional arrangements for burden-sharing to promote European bargains on emissions reduction, including bilateral aid transfers and proposals for multilateral burden sharing. Political actors have a wide range of choice among institutional arrangements to facilitate international cooperation, from simple market-type exchanges, to arbitration-type regimes that provide information and enhance reputation effects, to self-enforcing agreements such as issue-linkage, to supranational governance. The governance structures states devise to manage their cooperative relations affect outcomes of cooperation, by influencing the bargains states make and how well those bargains stick. This research shows that patterns of interdependence and sunk costs in cooperative relationships with particular states strongly condition the choices states make between these institutional structures to facilitate mutually beneficial international cooperation while protecting against opportunism.
Bleakley, Hoyt; Lin, Jeffrey
2012-01-01
We examine portage sites in the U.S. South, Mid-Atlantic, and Midwest, including those on the fall line, a geomorphological feature in the southeastern U.S. marking the final rapids on rivers before the ocean. Historically, waterborne transport of goods required portage around the falls at these points, while some falls provided water power during early industrialization. These factors attracted commerce and manufacturing. Although these original advantages have long since been made obsolete, we document the continuing importance of these portage sites over time. We interpret these results as path dependence and contrast explanations based on sunk costs interacting with decreasing versus increasing returns to scale. PMID:23935217
Duijmelinck, Daniëlle M I D; Mosca, Ilaria; van de Ven, Wynand P M M
2015-05-01
Competitive health insurance markets will only enhance cost-containment, efficiency, quality, and consumer responsiveness if all consumers feel free to easily switch insurer. Consumers will switch insurer if their perceived switching benefits outweigh their perceived switching costs. We developed a conceptual framework with potential switching benefits and costs in competitive health insurance markets. Moreover, we used a questionnaire among Dutch consumers (1091 respondents) to empirically examine the relevance of the different switching benefits and costs in consumers' decision to (not) switch insurer. Price, insurers' service quality, insurers' contracted provider network, the benefits of supplementary insurance, and welcome gifts are potential switching benefits. Transaction costs, learning costs, 'benefit loss' costs, uncertainty costs, the costs of (not) switching provider, and sunk costs are potential switching costs. In 2013 most Dutch consumers switched insurer because of (1) price and (2) benefits of supplementary insurance. Nearly half of the non-switchers - and particularly unhealthy consumers - mentioned one of the switching costs as their main reason for not switching. Because unhealthy consumers do not feel free to easily switch insurer, insurers have reduced incentives to invest in high-quality care for them. Therefore, policymakers should develop strategies to increase consumer choice. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Reasoned Decision Making Without Math? Adaptability and Robustness in Response to Surprise.
Smithson, Michael; Ben-Haim, Yakov
2015-10-01
Many real-world planning and decision problems are far too uncertain, too variable, and too complicated to support realistic mathematical models. Nonetheless, we explain the usefulness, in these situations, of qualitative insights from mathematical decision theory. We demonstrate the integration of info-gap robustness in decision problems in which surprise and ignorance are predominant and where personal and collective psychological factors are critical. We present practical guidelines for employing adaptable-choice strategies as a proxy for robustness against uncertainty. These guidelines include being prepared for more surprises than we intuitively expect, retaining sufficiently many options to avoid premature closure and conflicts among preferences, and prioritizing outcomes that are steerable, whose consequences are observable, and that do not entail sunk costs, resource depletion, or high transition costs. We illustrate these concepts and guidelines with the example of the medical management of the 2003 SARS outbreak in Vietnam. © 2015 Society for Risk Analysis.
Use of an Arduino to study buoyancy force
NASA Astrophysics Data System (ADS)
Espindola, P. R.; Cena, C. R.; Alves, D. C. B.; Bozano, D. F.; Goncalves, A. M. B.
2018-05-01
The study of buoyancy becomes very interesting when we measure the apparent weight of the body and the liquid vessel weight. In this paper, we propose an experimental apparatus that measures both the forces mentioned before as a function of the depth that a cylinder is sunk into the water. It is done using two load cells connected to an Arduino. With this experiment, the student can verify Archimedes’ principle, Newton’s third law, and calculate the density of a liquid. This apparatus can be used in fluid physics laboratories as a substitute for very expensive sensor kits or even to improve too simple approaches, usually employed, but still at low cost.
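To make the expected measurements concrete, here is an illustrative Python calculation (not the authors' Arduino firmware) of what the two load cells should read as the cylinder is lowered; the cylinder radius and masses are assumed values.

```python
import numpy as np

# Illustrative calculation (not the authors' Arduino firmware): expected load
# cell readings as a cylinder is lowered into water. By Archimedes' principle
# the apparent weight of the cylinder drops by rho*g*V_submerged; by Newton's
# third law the vessel's reading rises by the same amount. Radius and masses
# are assumed values.
rho_water = 1000.0    # kg/m^3
g = 9.81              # m/s^2
radius = 0.02         # m, cylinder radius (assumed)
mass_cyl = 0.30       # kg, cylinder mass (assumed)
mass_vessel = 1.20    # kg, vessel plus water (assumed)

depth = np.linspace(0.0, 0.10, 6)                       # m, submerged depth
buoyancy = rho_water * g * np.pi * radius ** 2 * depth  # N

cylinder_cell = mass_cyl * g - buoyancy      # load cell suspending the cylinder
vessel_cell = mass_vessel * g + buoyancy     # load cell under the vessel

for d, w, v in zip(depth, cylinder_cell, vessel_cell):
    print(f"depth {d:.2f} m: cylinder {w:.2f} N, vessel {v:.2f} N")
```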
A good time to leave?: the sunk time effect in pigeons.
Magalhães, Paula; White, K Geoffrey
2014-06-01
Persistence in a losing course of action due to prior investments of time, known as the sunk time effect, has seldom been studied in nonhuman animals. On every trial in the present study, pigeons were required to choose between two response keys. Responses on one key produced food after a short fixed interval (FI) of time on some trials, or on other trials, no food (Extinction) after a longer time. FI and Extinction trials were not differently signaled, were equiprobable, and alternated randomly. Responses on a second Escape key allowed the pigeon to terminate the current trial and start a new one. The optimal behavior was for pigeons to peck the escape key once the duration equivalent to the short FI had elapsed without reward. Durations of the short FI and the longer Extinction schedules were varied over conditions. In some conditions, the pigeons suboptimally responded through the Extinction interval, thus committing the sunk time effect. The absolute duration of the short FI had no effect on the choice between persisting and escaping. Instead, the ratio of FI and Extinction durations determined the likelihood of persistence during extinction. Copyright © 2014 Elsevier B.V. All rights reserved.
Learning and forgetting in the jet fighter aircraft industry.
Bongers, Anelí
2017-01-01
A recent strategy carried out by the aircraft industry to reduce the total cost of the new generation fighters has consisted in the development of a single airframe with different technical and operational specifications. This strategy has been designed to reduce costs in the Research, Design and Development phase with the ultimate objective of reducing the final unit price per aircraft. This is the case of the F-35 Lightning II, where three versions, with significant differences among them, are produced simultaneously based on a single airframe. Whereas this strategy seems to be useful to cut down pre-production sunk costs, their effects on production costs remain to be studied. This paper shows that this strategy can imply larger costs in the production phase by reducing learning acquisition and hence, the total effect on the final unit price of the aircraft is indeterminate. Learning curves are estimated based on the flyaway cost for the latest three fighter aircraft models: The A/F-18E/F Super Hornet, the F-22A Raptor, and the F-35A Lightning II. We find that learning rates for the F-35A are significantly lower (an estimated learning rate of around 9%) than for the other two models (around 14%).
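The learning-rate estimates quoted above come from log-linear (Wright-type) learning curves. The sketch below, in Python with synthetic unit costs rather than the paper's flyaway data, shows how such a learning rate is recovered from the fitted slope.

```python
import numpy as np

# Hedged sketch: fitting a Wright-type log-linear learning curve
# cost_n = cost_1 * n**b, where the learning rate is 1 - 2**b. The unit costs
# below are synthetic placeholders, not the paper's flyaway cost data.
units = np.arange(1, 21)
unit_cost = 200.0 * units ** np.log2(0.91)   # synthetic series with a 9% learning rate

b, log_cost_1 = np.polyfit(np.log(units), np.log(unit_cost), 1)
learning_rate = 1.0 - 2.0 ** b
print(f"estimated first-unit cost: {np.exp(log_cost_1):.1f}")
print(f"estimated learning rate:   {learning_rate:.1%}")   # about 9%
```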
Learning and forgetting in the jet fighter aircraft industry
2017-01-01
A recent strategy carried out by the aircraft industry to reduce the total cost of the new generation fighters has consisted in the development of a single airframe with different technical and operational specifications. This strategy has been designed to reduce costs in the Research, Design and Development phase with the ultimate objective of reducing the final unit price per aircraft. This is the case of the F-35 Lightning II, where three versions, with significant differences among them, are produced simultaneously based on a single airframe. Whereas this strategy seems to be useful to cut down pre-production sunk costs, their effects on production costs remain to be studied. This paper shows that this strategy can imply larger costs in the production phase by reducing learning acquisition and hence, the total effect on the final unit price of the aircraft is indeterminate. Learning curves are estimated based on the flyaway cost for the latest three fighter aircraft models: The A/F-18E/F Super Hornet, the F-22A Raptor, and the F-35A Lightning II. We find that learning rates for the F-35A are significantly lower (an estimated learning rate of around 9%) than for the other two models (around 14%). PMID:28957359
Cost of an informatics-based diabetes management program.
Blanchfield, Bonnie B; Grant, Richard W; Estey, Greg A; Chueh, Henry C; Gazelle, G Scott; Meigs, James B
2006-01-01
The relatively high cost of information technology systems may be a barrier to hospitals thinking of adopting this technology. The experiences of early adopters may facilitate decision making for hospitals less able to risk their limited resources. This study identifies the costs to design, develop, implement, and operate an innovative informatics-based registry and disease management system (POPMAN) to manage type 2 diabetes in a primary care setting. The various cost components of POPMAN were systematically identified and collected. POPMAN cost 450,000 dollars to develop and operate over 3.5 years (1999-2003). Approximately 250,000 dollars of these costs are one-time expenditures or sunk costs. Annual operating costs are expected to range from 90,000 dollars to 110,000 dollars translating to approximately 90 dollars per patient for a 1,200 patient registry. The cost of POPMAN is comparable to the costs of other quality-improving interventions for patients with diabetes. Modifications to POPMAN for adaptation to other chronic diseases or to interface with new electronic medical record systems will require additional investment but should not be as high as initial development costs. POPMAN provides a means of tracking progress against negotiated quality targets, allowing hospitals to negotiate pay for performance incentives with insurers that may exceed the annual operating cost of POPMAN. As a result, the quality of care of patients with diabetes through use of POPMAN could be improved at a minimal net cost to hospitals.
Can an economist find happiness setting public utility rates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kahn, A.E.
Alfred E. Kahn describes his applications of economic theories to rate level regulatory policies during his career as a public utilities regulator. Two shifts in regulatory thinking have responded to the intent to have rates set to reflect rising costs that will be occurring while the rates are in effect rather than on past costs and have recognized that varying the allowable rate of return can be as effective as changing the rate base. Pressures have been exerted on economists to place too much emphasis on regulatory lag, full-cost pricing, sunk cost, and other factors affecting capital formation. A rational policy recognizes differences between the needs of individual companies to raise capital and the variations that occur during a given accounting period. Rates based on trends that recognize returns over a time span are one solution. Anticipated cost increases have been included in rate levels so that automatic cost adjustments can be made during times of inflation. Because of increases in the marginal cost of capital, regulators must decide how to selectively vary cash flow during construction periods in a way that will give adequate signals to consumers. Situations in which a water company is associated with a real estate developer can avoid a double recovery of investment by granting rate increases in relation to cost over time.
Subjective costs drive overly patient foraging strategies in rats on an intertemporal foraging task.
Wikenheiser, Andrew M; Stephens, David W; Redish, A David
2013-05-14
Laboratory studies of decision making often take the form of two-alternative, forced-choice paradigms. In natural settings, however, many decision problems arise as stay/go choices. We designed a foraging task to test intertemporal decision making in rats via stay/go decisions. Subjects did not follow the rate-maximizing strategy of choosing only food items associated with short delays. Instead, rats were often willing to wait for surprisingly long periods, and consequently earned a lower rate of food intake than they might have by ignoring long-delay options. We tested whether foraging theory or delay discounting models predicted the behavior we observed but found that these models could not account for the strategies subjects selected. Subjects' behavior was well accounted for by a model that incorporated a cost for rejecting potential food items. Interestingly, subjects' cost sensitivity was proportional to environmental richness. These findings are at odds with traditional normative accounts of decision making but are consistent with retrospective considerations having a deleterious influence on decisions (as in the "sunk-cost" effect). More broadly, these findings highlight the utility of complementing existing assays of decision making with tasks that mimic more natural decision topologies.
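A minimal Python sketch of the rate comparison implied by the abstract is given below; the delay distribution, travel time, and rejection-cost parameter are illustrative, not values from the rats' task.

```python
import numpy as np

# Hedged sketch of the rate comparison implied by the abstract: a forager meets
# items with random handling delays and either waits (stay) or skips (go).
# Delays, travel time, and the rejection-cost parameter are illustrative.
delays = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])   # s, possible delays
p = np.full(delays.size, 1.0 / delays.size)           # equiprobable offers
travel = 5.0                                          # s between encounters
reward = 1.0                                          # food value per accepted item

def intake_rate(threshold, rejection_cost=0.0):
    """Long-run reward rate for the policy 'accept any delay <= threshold'.
    rejection_cost is a subjective penalty (in reward units) per skipped item."""
    accept = delays <= threshold
    gain = np.sum(p * np.where(accept, reward, -rejection_cost))
    time = travel + np.sum(p * np.where(accept, delays, 0.0))
    return gain / time

for thr in delays:
    print(f"threshold {thr:>4.0f} s: "
          f"rate {intake_rate(thr):.3f}, "
          f"rate with rejection cost {intake_rate(thr, 0.5):.3f}")
# A positive rejection cost shifts the best threshold toward longer delays,
# i.e., toward the "overly patient" behavior the rats showed.
```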
Coal fired air turbine cogeneration
NASA Astrophysics Data System (ADS)
Foster-Pegg, R. W.
Fuel options and generator configurations for installation of cogenerator equipment are reviewed, noting that the use of oil or gas may be precluded by cost or legislation within the lifetime of any cogeneration equipment yet to be installed. A coal fueled air turbine cogenerator plant is described, which uses external combustion in a limestone bed at atmospheric pressure and in which air tubes are sunk to gain heat for a gas turbine. The limestone in the 26 MW unit absorbs sulfur from the coal, and can be replaced by other sorbents depending on types of coal available and stringency of local environmental regulations. Low temperature combustion reduces NOx formation and release of alkali salts and corrosion. The air heat is exhausted through a heat recovery boiler to produce process steam, then can be refed into the combustion chamber to satisfy preheat requirements. All parts of the cogenerator are designed to withstand full combustion temperature (1500 F) in the event of air flow stoppage. Costs are compared with those of a coal fired boiler and purchased power, and it is shown that the increased capital requirements for cogenerator apparatus will yield a 2.8 year payback. Detailed flow charts, diagrams and costs schedules are included.
Subjective costs drive overly patient foraging strategies in rats on an intertemporal foraging task
Wikenheiser, Andrew M.; Stephens, David W.; Redish, A. David
2013-01-01
Laboratory studies of decision making often take the form of two-alternative, forced-choice paradigms. In natural settings, however, many decision problems arise as stay/go choices. We designed a foraging task to test intertemporal decision making in rats via stay/go decisions. Subjects did not follow the rate-maximizing strategy of choosing only food items associated with short delays. Instead, rats were often willing to wait for surprisingly long periods, and consequently earned a lower rate of food intake than they might have by ignoring long-delay options. We tested whether foraging theory or delay discounting models predicted the behavior we observed but found that these models could not account for the strategies subjects selected. Subjects’ behavior was well accounted for by a model that incorporated a cost for rejecting potential food items. Interestingly, subjects’ cost sensitivity was proportional to environmental richness. These findings are at odds with traditional normative accounts of decision making but are consistent with retrospective considerations having a deleterious influence on decisions (as in the “sunk-cost” effect). More broadly, these findings highlight the utility of complementing existing assays of decision making with tasks that mimic more natural decision topologies. PMID:23630289
Using Support Vector Machine on EEG for Advertisement Impact Assessment.
Wei, Zhen; Wu, Chao; Wang, Xiaoyi; Supratak, Akara; Wang, Pan; Guo, Yike
2018-01-01
The advertising industry depends on an effective assessment of the impact of advertising as a key performance metric for its products. However, current assessment methods rely either on indirect inference from observing changes in consumer behavior after the launch of an advertising campaign, which has long cycle times and requires the campaign to have already been launched (often meaning costs have been sunk), or on surveys and focus groups, which are prone to experimental biases, peer pressure, and other psychological and sociological phenomena that can reduce the effectiveness of the study. In this paper, we investigate a new approach to assessing the impact of advertisement by utilizing low-cost EEG headbands to record and assess the measurable impact of advertising on the brain. Our evaluation shows the desired performance of our method based on a user experiment with 30 recruited subjects watching 220 different advertisements. We believe the proposed SVM method can be further developed into a general and scalable methodology that can enable advertising agencies to assess impact rapidly, quantitatively, and without bias.
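A hedged sketch of the general pipeline named in the abstract (an SVM classifier on EEG-derived features) follows in Python; the simulated signals, variance features, and parameters are placeholders rather than the authors' method.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hedged sketch of the named approach (SVM on EEG features); the simulated
# signals, per-channel variance features, labels, and parameters are
# placeholders, not the authors' pipeline.
rng = np.random.default_rng(0)
n_epochs, n_channels, n_samples = 200, 4, 256   # e.g., a 4-channel headband
eeg = rng.standard_normal((n_epochs, n_channels, n_samples))
labels = rng.integers(0, 2, n_epochs)           # e.g., high vs. low ad impact

features = eeg.var(axis=2)                      # crude per-channel power features

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, features, labels, cv=5)
print("cross-validated accuracy:", scores.mean())   # near chance on random data
```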
Using Support Vector Machine on EEG for Advertisement Impact Assessment
Wei, Zhen; Wu, Chao; Wang, Xiaoyi; Supratak, Akara; Wang, Pan; Guo, Yike
2018-01-01
The advertising industry depends on an effective assessment of the impact of advertising as a key performance metric for its products. However, current assessment methods rely either on indirect inference from observing changes in consumer behavior after the launch of an advertising campaign, which has long cycle times and requires the campaign to have already been launched (often meaning costs have been sunk), or on surveys and focus groups, which are prone to experimental biases, peer pressure, and other psychological and sociological phenomena that can reduce the effectiveness of the study. In this paper, we investigate a new approach to assessing the impact of advertisement by utilizing low-cost EEG headbands to record and assess the measurable impact of advertising on the brain. Our evaluation shows the desired performance of our method based on a user experiment with 30 recruited subjects watching 220 different advertisements. We believe the proposed SVM method can be further developed into a general and scalable methodology that can enable advertising agencies to assess impact rapidly, quantitatively, and without bias. PMID:29593481
Development of an Enhanced Payback Function for the Superior Energy Performance Program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Therkelsen, Peter; Rao, Prakash; McKane, Aimee
2015-08-03
The U.S. DOE Superior Energy Performance (SEP) program provides recognition to industrial and commercial facilities that achieve certification to the ISO 50001 energy management system standard and third party verification of energy performance improvements. Over 50 industrial facilities are participating and 28 facilities have been certified in the SEP program. These facilities find value in the robust, data driven energy performance improvement result that the SEP program delivers. Previous analysis of SEP certified facility data demonstrated the cost effectiveness of SEP and identified internal staff time to be the largest cost component related to SEP implementation and certification. This paper analyzes previously reported and newly collected data on the costs and benefits associated with ISO 50001 implementation and SEP certification. By disaggregating “sunk energy management system (EnMS) labor costs”, this analysis results in a more accurate and detailed understanding of the costs and benefits of SEP participation. SEP is shown to significantly improve and sustain energy performance and energy cost savings, resulting in a highly attractive return on investment. To illustrate these results, a payback function has been developed and is presented. On average, facilities with annual energy spend greater than $2M can expect to implement SEP with a payback of less than 1.5 years. Finally, this paper also observes and details decreasing facility costs associated with implementing ISO 50001 and certifying to the SEP program, as the program has improved from pilot, to demonstration, to full launch.
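The payback function itself is not reproduced in the abstract; the following Python fragment shows only the simple-payback arithmetic it builds on, with invented cost and savings figures.

```python
# Illustrative simple-payback arithmetic (figures invented, not SEP data):
# one-time implementation cost recovered from annual energy cost savings.
implementation_cost = 120_000.0     # internal staff time + certification, $ (assumed)
annual_energy_spend = 2_500_000.0   # facility energy spend, $/yr (assumed)
savings_fraction = 0.05             # energy performance improvement (assumed)

annual_savings = annual_energy_spend * savings_fraction
payback_years = implementation_cost / annual_savings
print(f"simple payback: {payback_years:.1f} years")   # under 1.5 years with these assumptions
```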
As an Administrator, Columbus, Alas, Was Sunk.
ERIC Educational Resources Information Center
Zakariya, Sally Banks
1984-01-01
With reference to the "Columbus principle" of innovative boldness endorsed in the article immediately preceding this one, this columnist observes that, historically, Columbus may have been a good navigator but was a failure in management and died broke. (TE)
Helicopter lifts Grissom from water
NASA Technical Reports Server (NTRS)
1961-01-01
Marine helicopter has astronaut Virgil I. Grissom in harness and is bringing him up out of the water. The Liberty Bell 7 spacecraft has just sunk below the water. His Mercury-Redstone 4 launch was the second in the U.S. manned space effort.
Money for nothing: How firms have financed R&D-projects since the Industrial Revolution
Bakker, Gerben
2013-01-01
We investigate the long-run historical pattern of R&D-outlays by reviewing aggregate growth rates and historical cases of particular R&D projects, following the historical-institutional approach of Chandler (1962), North (1981) and Williamson (1985). We find that even the earliest R&D-projects used non-insignificant cash outlays and that until the 1970s aggregate R&D outlays grew far faster than GDP, despite five well-known challenges that implied that R&D could only be financed with cash, for which no perfect market existed: the presence of sunk costs, real uncertainty, long time lags, adverse selection, and moral hazard. We then review a wide variety of organisational forms and institutional instruments that firms historically have used to overcome these financing obstacles, and without which the enormous growth of R&D outlays since the nineteenth century would not have been possible. PMID:24910477
Promoting de-escalation of commitment: a regulatory-focus perspective on sunk costs.
Molden, Daniel C; Hui, Chin Ming
2011-01-01
People frequently escalate their commitment to failing endeavors. Explanations for such behavior typically involve loss aversion, failure to recognize other alternatives, and concerns with justifying prior actions; all of these factors produce recommitment to previous decisions with the goal of erasing losses and vindicating these decisions. Solutions to escalation of commitment have therefore focused on external oversight and divided responsibility during decision making to attenuate loss aversion, blindness to alternatives, and justification biases. However, these solutions require substantial resources and have additional adverse effects. The present studies tested an alternative method for de-escalating commitment: activating broad motivations for growth and advancement (promotion). This approach should reduce concerns with loss and increase perceptions of alternatives, thereby attenuating justification motives. In two studies featuring hypothetical financial decisions, activating promotion motivations reduced recommitment to poorly performing investments as compared with both not activating any additional motivations and activating motivations for safety and security (prevention).
Money for nothing: How firms have financed R&D-projects since the Industrial Revolution.
Bakker, Gerben
2013-12-01
We investigate the long-run historical pattern of R&D-outlays by reviewing aggregate growth rates and historical cases of particular R&D projects, following the historical-institutional approach of Chandler (1962), North (1981) and Williamson (1985). We find that even the earliest R&D-projects used non-insignificant cash outlays and that until the 1970s aggregate R&D outlays grew far faster than GDP, despite five well-known challenges that implied that R&D could only be financed with cash, for which no perfect market existed: the presence of sunk costs, real uncertainty, long time lags, adverse selection, and moral hazard. We then review a wide variety of organisational forms and institutional instruments that firms historically have used to overcome these financing obstacles, and without which the enormous growth of R&D outlays since the nineteenth century would not have been possible.
On the relative independence of thinking biases and cognitive ability.
Stanovich, Keith E; West, Richard F
2008-04-01
In 7 different studies, the authors observed that a large number of thinking biases are uncorrelated with cognitive ability. These thinking biases include some of the most classic and well-studied biases in the heuristics and biases literature, including the conjunction effect, framing effects, anchoring effects, outcome bias, base-rate neglect, "less is more" effects, affect biases, omission bias, myside bias, sunk-cost effect, and certainty effects that violate the axioms of expected utility theory. In a further experiment, the authors nonetheless showed that cognitive ability does correlate with the tendency to avoid some rational thinking biases, specifically the tendency to display denominator neglect, probability matching rather than maximizing, belief bias, and matching bias on the 4-card selection task. The authors present a framework for predicting when cognitive ability will and will not correlate with a rational thinking tendency. (c) 2008 APA, all rights reserved.
Essays on oil price volatility and irreversible investment
NASA Astrophysics Data System (ADS)
Pastor, Daniel J.
In chapter 1, we provide an extensive and systematic evaluation of the relative forecasting performance of several models for the volatility of daily spot crude oil prices. Empirical research over the past decades has uncovered significant gains in forecasting performance of Markov Switching GARCH models over GARCH models for the volatility of financial assets and crude oil futures. We find that, for spot oil price returns, non-switching models perform better in the short run, whereas switching models tend to do better at longer horizons. In chapter 2, I investigate the impact of volatility on firms' irreversible investment decisions using real options theory. Cost incurred in oil drilling is considered sunk cost, thus irreversible. I collect detailed data on onshore, development oil well drilling on the North Slope of Alaska from 2003 to 2014. Volatility is modeled by constructing GARCH, EGARCH, and GJR-GARCH forecasts based on monthly real oil prices, and realized volatility from 5-minute intraday returns of oil futures prices. Using a duration model, I show that oil price volatility generally has a negative relationship with the hazard rate of drilling an oil well both when aggregating all the fields, and in individual fields.
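For orientation, the following Python fragment runs a plain GARCH(1,1) conditional-variance recursion and a one-step volatility forecast; the parameters and return series are placeholders, and the chapter's EGARCH and GJR-GARCH variants and realized-volatility measures are not shown.

```python
import numpy as np

# Hedged sketch: a GARCH(1,1) conditional-variance recursion with a one-step
# volatility forecast. Parameters and the return series are placeholders; the
# chapter estimates these (and EGARCH/GJR-GARCH variants) from oil price data.
omega, alpha, beta = 0.05, 0.08, 0.90    # assumed GARCH(1,1) parameters
rng = np.random.default_rng(1)
returns = rng.standard_normal(500)       # placeholder demeaned daily returns, %

var = np.empty_like(returns)
var[0] = returns.var()                   # initialize at the sample variance
for t in range(1, len(returns)):
    var[t] = omega + alpha * returns[t - 1] ** 2 + beta * var[t - 1]

one_step = omega + alpha * returns[-1] ** 2 + beta * var[-1]
print(f"one-step-ahead volatility forecast: {np.sqrt(one_step):.3f}")
```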
Error Cost Escalation Through the Project Life Cycle
NASA Technical Reports Server (NTRS)
Stecklein, Jonette M.; Dabney, Jim; Dick, Brandon; Haskins, Bill; Lovell, Randy; Moroney, Gregory
2004-01-01
It is well known that the costs to fix errors increase as the project matures, but how fast do those costs build? A study was performed to determine the relative cost of fixing errors discovered during various phases of a project life cycle. This study used three approaches to determine the relative costs: the bottom-up cost method, the total cost breakdown method, and the top-down hypothetical project method. The approaches and results described in this paper presume development of a hardware/software system having project characteristics similar to those used in the development of a large, complex spacecraft, a military aircraft, or a small communications satellite. The results show the degree to which costs escalate, as errors are discovered and fixed at later and later phases in the project life cycle. If the cost of fixing a requirements error discovered during the requirements phase is defined to be 1 unit, the cost to fix that error if found during the design phase increases to 3 - 8 units; at the manufacturing/build phase, the cost to fix the error is 7 - 16 units; at the integration and test phase, the cost to fix the error becomes 21 - 78 units; and at the operations phase, the cost to fix the requirements error ranged from 29 units to more than 1500 units
Elliott, Rachel A; Putman, Koen D; Franklin, Matthew; Annemans, Lieven; Verhaeghe, Nick; Eden, Martin; Hayre, Jasdeep; Rodgers, Sarah; Sheikh, Aziz; Avery, Anthony J
2014-06-01
We recently showed that a pharmacist-led information technology-based intervention (PINCER) was significantly more effective in reducing medication errors in general practices than providing simple feedback on errors, with cost per error avoided at £79 (US$131). We aimed to estimate cost effectiveness of the PINCER intervention by combining effectiveness in error reduction and intervention costs with the effect of the individual errors on patient outcomes and healthcare costs, to estimate the effect on costs and QALYs. We developed Markov models for each of six medication errors targeted by PINCER. Clinical event probability, treatment pathway, resource use and costs were extracted from literature and costing tariffs. A composite probabilistic model combined patient-level error models with practice-level error rates and intervention costs from the trial. Cost per extra QALY and cost-effectiveness acceptability curves were generated from the perspective of NHS England, with a 5-year time horizon. The PINCER intervention generated £2,679 less cost and 0.81 more QALYs per practice [incremental cost-effectiveness ratio (ICER): -£3,037 per QALY] in the deterministic analysis. In the probabilistic analysis, PINCER generated 0.001 extra QALYs per practice compared with simple feedback, at £4.20 less per practice. Despite this extremely small set of differences in costs and outcomes, PINCER dominated simple feedback with a mean ICER of -£3,936 (standard error £2,970). At a ceiling 'willingness-to-pay' of £20,000/QALY, PINCER reaches 59 % probability of being cost effective. PINCER produced marginal health gain at slightly reduced overall cost. Results are uncertain due to the poor quality of data to inform the effect of avoiding errors.
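The ICER quoted above is just the incremental cost divided by the incremental QALYs; the short fragment below reproduces that arithmetic from the rounded figures in the abstract.

```python
# The ICER is incremental cost divided by incremental QALYs. Using the rounded
# deterministic figures quoted in the abstract:
delta_cost = -2_679.0    # PINCER costs GBP 2,679 less per practice
delta_qalys = 0.81       # PINCER yields 0.81 more QALYs per practice

icer = delta_cost / delta_qalys
print(f"ICER: {icer:,.0f} GBP per QALY")
# About -3,300 GBP/QALY from these rounded inputs; the abstract reports -3,037.
```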
33 CFR 241.7 - Application of test.
Code of Federal Regulations, 2014 CFR
2014-07-01
... payments, equals the amount of Federal expenditures (including sunk pre-construction engineering and design...). (a) A preliminary ability to pay test will be applied during the study phase of any proposed...
33 CFR 241.7 - Application of test.
Code of Federal Regulations, 2010 CFR
2010-07-01
... payments, equals the amount of Federal expenditures (including sunk pre-construction engineering and design...). (a) A preliminary ability to pay test will be applied during the study phase of any proposed...
33 CFR 241.7 - Application of test.
Code of Federal Regulations, 2013 CFR
2013-07-01
... payments, equals the amount of Federal expenditures (including sunk pre-construction engineering and design...). (a) A preliminary ability to pay test will be applied during the study phase of any proposed...
33 CFR 241.7 - Application of test.
Code of Federal Regulations, 2012 CFR
2012-07-01
... payments, equals the amount of Federal expenditures (including sunk pre-construction engineering and design...). (a) A preliminary ability to pay test will be applied during the study phase of any proposed...
33 CFR 241.7 - Application of test.
Code of Federal Regulations, 2011 CFR
2011-07-01
... payments, equals the amount of Federal expenditures (including sunk pre-construction engineering and design...). (a) A preliminary ability to pay test will be applied during the study phase of any proposed...
27. Graffiti in north cells: 'When the golden sun has ...
27. Graffiti in north cells: 'When the golden sun has sunk beyond the desert horizon, and darkness followed, under a dim light casting my lonesome heart.'; 135mm lens with electronic flash illumination. - Tule Lake Project Jail, Post Mile 44.85, State Route 139, Newell, Modoc County, CA
Winning in the Past: The Implications Today
1966-04-08
against further atrocities. Accordingly, Germany instructed her U-boat commanders that no ocean liner was to be sunk without warning or provisions made...remainder of the war. Debate in the United States over the sinking of the Lusitania was bitter, and caused Secretary of State Bryan to resign. Congres
Risør, Bettina Wulff; Lisby, Marianne; Sørensen, Jan
To evaluate the cost-effectiveness of an automated medication system (AMS) implemented in a Danish hospital setting. An economic evaluation was performed alongside a controlled before-and-after effectiveness study with one control ward and one intervention ward. The primary outcome measure was the number of errors in the medication administration process observed prospectively before and after implementation. To determine the difference in proportion of errors after implementation of the AMS, logistic regression was applied with the presence of error(s) as the dependent variable. Time, group, and interaction between time and group were the independent variables. The cost analysis used the hospital perspective with a short-term incremental costing approach. The total 6-month costs with and without the AMS were calculated as well as the incremental costs. The number of avoided administration errors was related to the incremental costs to obtain the cost-effectiveness ratio expressed as the cost per avoided administration error. The AMS resulted in a statistically significant reduction in the proportion of errors in the intervention ward compared with the control ward. The cost analysis showed that the AMS increased the ward's 6-month cost by €16,843. The cost-effectiveness ratio was estimated at €2.01 per avoided administration error, €2.91 per avoided procedural error, and €19.38 per avoided clinical error. The AMS was effective in reducing errors in the medication administration process at a higher overall cost. The cost-effectiveness analysis showed that the AMS was associated with affordable cost-effectiveness rates. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
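A sketch of the before/after, intervention/control logistic regression described above follows in Python with simulated data; the effect size and ward data are invented, and only the €16,843 incremental cost and €2.01-per-error ratio are taken from the abstract.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hedged sketch (simulated data, not the Danish hospital records): logistic
# regression of "error present" on time, group, and their interaction, as in
# the abstract's before-and-after design.
rng = np.random.default_rng(2)
n = 2000
df = pd.DataFrame({
    "time":  rng.integers(0, 2, n),   # 0 = before, 1 = after implementation
    "group": rng.integers(0, 2, n),   # 0 = control ward, 1 = intervention ward
})
logit_p = -1.0 - 0.8 * df["time"] * df["group"]   # assumed effect: fewer errors with the AMS
df["error"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

model = smf.logit("error ~ time * group", data=df).fit(disp=0)
print(model.summary().tables[1])

# Cost-effectiveness ratio as defined in the abstract: incremental cost per
# avoided administration error (EUR 16,843 and EUR 2.01 are quoted figures).
errors_avoided = 16_843.0 / 2.01
print(f"implied administration errors avoided: {errors_avoided:,.0f}")
```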
Operation Cottage: A Cautionary Tale of Assumption and Perceptual Bias
2015-01-01
dealing with churning seas and 25-degree temperatures, but it did not bode well for an advance to the island interior, which now faced murderous mortar...but this scheme was aborted in late June after three submarines assigned to the operation were detected and sunk by Allied destroyers. It was
The Sensitivity of Adverse Event Cost Estimates to Diagnostic Coding Error
Wardle, Gavin; Wodchis, Walter P; Laporte, Audrey; Anderson, Geoffrey M; Baker, Ross G
2012-01-01
Objective: To examine the impact of diagnostic coding error on estimates of hospital costs attributable to adverse events. Data Sources: Original and reabstracted medical records of 9,670 complex medical and surgical admissions at 11 hospital corporations in Ontario from 2002 to 2004. Patient-specific costs, not including physician payments, were retrieved from the Ontario Case Costing Initiative database. Study Design: Adverse events were identified among the original and reabstracted records using ICD10-CA (Canadian adaptation of ICD10) codes flagged as postadmission complications. Propensity score matching and multivariate regression analysis were used to estimate the cost of the adverse events and to determine the sensitivity of cost estimates to diagnostic coding error. Principal Findings: Estimates of the cost of the adverse events ranged from $16,008 (metabolic derangement) to $30,176 (upper gastrointestinal bleeding). Coding errors caused the total cost attributable to the adverse events to be underestimated by 16 percent. The impact of coding error on adverse event cost estimates was highly variable at the organizational level. Conclusions: Estimates of adverse event costs are highly sensitive to coding error. Adverse event costs may be significantly underestimated if the likelihood of error is ignored. PMID:22091908
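The study's propensity-score step can be illustrated with a small Python sketch using simulated admissions; the covariates, cost model, and matching rule below are placeholders, not the study's specification.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hedged sketch of nearest-neighbour propensity-score matching on simulated
# admissions; covariates, cost model, and matching rule are placeholders,
# not the study's specification.
rng = np.random.default_rng(3)
n = 1000
df = pd.DataFrame({
    "age": rng.normal(65, 12, n),
    "comorbidity": rng.integers(0, 5, n),
})
logit = -6.0 + 0.05 * df["age"] + 0.4 * df["comorbidity"]
df["adverse_event"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)
df["cost"] = (10_000 + 1_500 * df["comorbidity"]
              + 20_000 * df["adverse_event"] + rng.normal(0, 2_000, n))

# Propensity of an adverse event given the covariates.
ps_model = LogisticRegression(max_iter=1000).fit(df[["age", "comorbidity"]],
                                                 df["adverse_event"])
df["ps"] = ps_model.predict_proba(df[["age", "comorbidity"]])[:, 1]

treated = df[df["adverse_event"] == 1]
controls = df[df["adverse_event"] == 0]
# Match each adverse-event admission to the control with the closest score.
matched_costs = [controls.loc[(controls["ps"] - p).abs().idxmin(), "cost"]
                 for p in treated["ps"]]
attributable_cost = treated["cost"].mean() - np.mean(matched_costs)
print(f"estimated cost attributable to an adverse event: {attributable_cost:,.0f}")
```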
Financing Strategies For A Nuclear Fuel Cycle Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
David Shropshire; Sharon Chandler
2006-07-01
To help meet the nation’s energy needs, recycling of partially used nuclear fuel is required to close the nuclear fuel cycle, but implementing this step will require considerable investment. This report evaluates financing scenarios for integrating recycling facilities into the nuclear fuel cycle. A range of options, from fully government owned to fully privately owned, was evaluated using DPL (Decision Programming Language 6.0), which can systematically optimize outcomes based on user-defined criteria (e.g., lowest lifecycle cost, lowest unit cost). This evaluation concludes that the lowest unit costs and lifetime costs are found for a fully government-owned financing strategy, due to government forgiveness of debt as sunk costs. However, this does not mean that the facilities should necessarily be constructed and operated by the government. The costs for hybrid combinations of public and private (commercial) financed options can compete under some circumstances with the costs of the government option. This analysis shows that commercial operations have potential to be economical, but there is presently no incentive for private industry involvement. The Nuclear Waste Policy Act (NWPA) currently establishes government ownership of partially used commercial nuclear fuel. In addition, the recently announced Global Nuclear Energy Partnership (GNEP) suggests fuels from several countries will be recycled in the United States as part of an international governmental agreement; this also assumes government ownership. Overwhelmingly, uncertainty in annual facility capacity led to the greatest variations in unit costs necessary for recovery of operating and capital expenditures; the ability to determine annual capacity will be a driving factor in setting unit costs. For private ventures, the costs of capital, especially equity interest rates, dominate the balance sheet; the annual operating costs, forgiveness of debt, and overnight costs dominate the costs computed for the government case. The uncertainty in operations, leading to lower than optimal processing rates (or annual plant throughput), is the most detrimental issue to achieving low unit costs. Conversely, lowering debt interest rates and the required return on investments can reduce costs for private industry.
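As a rough illustration of how annual capacity drives the unit costs discussed above, the following Python fragment computes a levelized cost per kilogram from an annuitized capital charge and operating costs; all inputs are assumptions, not figures from the report.

```python
# Rough illustration (all inputs assumed, not from the report): levelized unit
# cost per kilogram of fuel processed, combining an annuitized capital charge
# with annual operating cost, and its sensitivity to annual facility capacity.
capital_cost = 20e9        # overnight capital cost, $ (assumed)
discount_rate = 0.08       # private cost of capital (assumed; near zero in the government case)
lifetime_years = 40
annual_opex = 1.0e9        # $ per year (assumed)

# Capital recovery factor spreads the capital cost over the plant lifetime.
crf = (discount_rate * (1 + discount_rate) ** lifetime_years
       / ((1 + discount_rate) ** lifetime_years - 1))

for capacity_tonnes in (800, 1500, 2500):   # throughput, tonnes of fuel per year (assumed)
    unit_cost = (capital_cost * crf + annual_opex) / (capacity_tonnes * 1000.0)
    print(f"{capacity_tonnes:>5} t/yr -> {unit_cost:,.0f} $/kg")
```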
Prevalence and cost of hospital medical errors in the general and elderly United States populations.
Mallow, Peter J; Pandya, Bhavik; Horblyuk, Ruslan; Kaplan, Harold S
2013-12-01
The primary objective of this study was to quantify the differences in the prevalence rate and costs of hospital medical errors between the general population and an elderly population aged ≥65 years. Methods from an actuarial study of medical errors were modified to identify medical errors in the Premier Hospital Database using data from 2009. Visits with more than four medical errors were removed from the population to avoid over-estimation of cost. Prevalence rates were calculated based on the total number of inpatient visits. There were 3,466,596 total inpatient visits in 2009. Of these, 1,230,836 (36%) occurred in people aged ≥ 65. The prevalence rate was 49 medical errors per 1000 inpatient visits in the general cohort and 79 medical errors per 1000 inpatient visits for the elderly cohort. The top 10 medical errors accounted for more than 80% of the total in the general cohort and the 65+ cohort. The most costly medical error for the general population was postoperative infection ($569,287,000). Pressure ulcers were most costly ($347,166,257) in the elderly population. This study was conducted with a hospital administrative database, and assumptions were necessary to identify medical errors in the database. Further, there was no method to identify errors of omission or misdiagnoses within the database. This study indicates that prevalence of hospital medical errors for the elderly is greater than the general population and the associated cost of medical errors in the elderly population is quite substantial. Hospitals which further focus their attention on medical errors in the elderly population may see a significant reduction in costs due to medical errors as a disproportionate percentage of medical errors occur in this age group.
Haung, Ching-Ying; Wang, Sheng-Pen; Chiang, Chih-Wei
2010-01-01
Medical tourism is a relatively recent global economic and political phenomenon that has assumed increasing importance for developing countries, particularly in Asia. In fact, Taiwan possesses a niche for developing medical tourism because many hospitals provide state-of-the-art medicine in all disciplines and many doctors are trained in the United States (US). Among the most common medical procedures outsourced, joint replacements such as total knee replacement (TKR) and total hip replacement (THR) are two surgeries offered to US patients at a lower cost and shorter waiting time than in the US. This paper proposed a pre-checking medical tourism system (PCMTS) and evaluated the cost feasibility of recruiting American clients traveling to Taiwan for joint replacement surgery. Cost analysis was used to estimate the prime costs for each stage in the proposed PCMTS. Sensitivity analysis was implemented to examine how different pricings for medical checking and a surgical operation (MC&SO) and recovery, can influence the surplus per patient considering the PCMTS. Finally, the break-even method was adopted to test the tradeoff between the sunk costs of investment in the PCMTS and the annual surplus for participating hospitals. A novel business plan was built showing that pre-checking stations in medical tourism can provide post-operative care and recovery follow-up. Adjustable pricing for hospital administrators engaged in the PCMTS consisted of two main costs: US$3,700 for MC&SO and US$120 for the hospital stay. Guidelines for pricing were provided to maximize the annual surplus from this plan with different number of patients participating in PCMTS. The maximal profit margin from each American patient undertaking joint surgery is about US$24,315. Using cost analysis, this article might be the first to evaluate the feasibility of PCMTS for joint replacement surgeries. The research framework in this article is applicable when hospital administrators evaluate the feasibility of outsourced medical procedures other than TKR and THR.
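The break-even logic mentioned in the abstract reduces to dividing the sunk investment by the per-patient surplus; the fragment below shows that arithmetic, with only the US$24,315 margin taken from the abstract.

```python
# Break-even arithmetic: patients needed before the surplus covers the sunk
# investment. Only the per-patient margin is from the abstract; the investment
# figure is an assumption.
sunk_investment = 500_000.0        # up-front cost of the PCMTS, US$ (assumed)
surplus_per_patient = 24_315.0     # maximal margin per joint-replacement patient (abstract)

break_even_patients = sunk_investment / surplus_per_patient
print(f"break-even at about {break_even_patients:.1f} patients")   # ~21 with these assumptions
```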
Economic impact of medication error: a systematic review.
Walsh, Elaine K; Hansen, Christina Raae; Sahm, Laura J; Kearney, Patricia M; Doherty, Edel; Bradley, Colin P
2017-05-01
Medication error is a significant source of morbidity and mortality among patients. Clinical and cost-effectiveness evidence are required for the implementation of quality of care interventions. Reduction of error-related cost is a key potential benefit of interventions addressing medication error. The aim of this review was to describe and quantify the economic burden associated with medication error. PubMed, Cochrane, Embase, CINAHL, EconLit, ABI/INFORM, Business Source Complete were searched. Studies published 2004-2016 assessing the economic impact of medication error were included. Cost values were expressed in Euro 2015. A narrative synthesis was performed. A total of 4572 articles were identified from database searching, and 16 were included in the review. One study met all applicable quality criteria. Fifteen studies expressed economic impact in monetary terms. Mean cost per error per study ranged from €2.58 to €111 727.08. Healthcare costs were used to measure economic impact in 15 of the included studies with one study measuring litigation costs. Four studies included costs incurred in primary care with the remaining 12 measuring hospital costs. Five studies looked at general medication error in a general population with 11 studies reporting the economic impact of an individual type of medication error or error within a specific patient population. Considerable variability existed between studies in terms of financial cost, patients, settings and errors included. Many were of poor quality. Assessment of economic impact was conducted predominantly in the hospital setting with little assessment of primary care impact. Limited parameters were used to establish economic impact. Copyright © 2017 John Wiley & Sons, Ltd.
25. A QUIRK ON THE FACING OF THE NORTHEASTERN ABUTMENT. ...
25. A QUIRK ON THE FACING OF THE NORTHEASTERN ABUTMENT. IT HAS BEEN CAST IN PLACE, THE GHOSTS OF THE WOODEN FORMERS CAN BE SEEN. EVEN THE MITRES WITHIN THE SUNK PORTIONS OF THE CASTING ARE VISIBLE. POISON IVY AND TRUMPET VINE CLING WELL TO THE ROUGH CONCRETE. - Main Street Bridge, Spanning East Fork Whitewater River, Richmond, Wayne County, IN
Use of an Arduino to Study Buoyancy Force
ERIC Educational Resources Information Center
Espindola, P. R.; Cena, C. R.; Alves, D. C. B.; Bozano, D. F.; Goncalves, A. M. B.
2018-01-01
The study of buoyancy becomes very interesting when we measure the apparent weight of the body and the liquid vessel weight. In this paper, we propose an experimental apparatus that measures both the forces mentioned before as a function of the depth that a cylinder is sunk into the water. It is done using two load cells connected to an Arduino.…
You Sunk My Constitution: Using a Popular Off-the-Shelf Board Game to Simulate Political Concepts
ERIC Educational Resources Information Center
Bridge, Dave
2014-01-01
Using an example, this article demonstrates how instructors can make use of popular off-the-shelf board games to model politics. I show how the rules of the popular board game "Battleship" can be manipulated to simulate centralization of power and, more specifically, the differences between the Articles of Confederation and the…
Vantage Points: Perspectives on Airpower and the Profession of Arms
2007-08-01
do the work of one extraordinary man. —Elbert Hubbard (1856–1915), American essayist and author of “A Message to Garcia”; died when the British liner Lusitania was sunk by the German U-boat U-20, 7 May 1915. As weapons increase in lethality, precision, and standoff
Vaccination against pandemic influenza A/H1N1v in England: a real-time economic evaluation.
Baguelin, Marc; Hoek, Albert Jan Van; Jit, Mark; Flasche, Stefan; White, Peter J; Edmunds, W John
2010-03-11
Decisions on how to mitigate an evolving pandemic are technically challenging. We present a real-time assessment of the effectiveness and cost-effectiveness of alternative influenza A/H1N1v vaccination strategies. A transmission dynamic model was fitted to the estimated number of cases in real-time, and used to generate plausible autumn scenarios under different vaccination options. The proportion of these cases by age and risk group leading to primary care consultations, National Pandemic Flu Service consultations, emergency attendances, hospitalisations, intensive care and death was then estimated using existing data from the pandemic. The real-time model suggests that the epidemic will peak in early November, with the peak height being similar in magnitude to the summer wave. Vaccination of the high-risk groups is estimated to prevent about 45 deaths (80% credibility interval 26-67), and save around 2900 QALYs (80% credibility interval 1600-4500). Such a programme is very likely to be cost-effective if the cost of vaccine purchase itself is treated as a sunk cost. Extending vaccination to low-risk individuals is expected to result in more modest gains in deaths and QALYs averted. Extending vaccination to school-age children would be the most cost-effective extension. The early availability of vaccines is crucial in determining the impact of such extensions. There have been a considerable number of cases of H1N1v in England, and so the benefits of vaccination to mitigate the ongoing autumn wave are limited. However, certain groups appear to be at significantly higher risk of complications and deaths, and so it appears both effective and cost-effective to vaccinate them. The United Kingdom was the first country to have a major epidemic in Europe. In countries where the epidemic is not so far advanced vaccination of children may be cost-effective. Similar, detailed, real-time modelling and economic studies could help to clarify the situation.
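Treating vaccine purchase as a sunk cost, the cost-effectiveness claim reduces to delivery costs divided by QALYs gained; the fragment below shows that arithmetic with an assumed programme size and delivery cost, keeping only the QALY estimate from the abstract.

```python
# Hedged arithmetic sketch: cost per QALY for vaccinating high-risk groups when
# vaccine purchase is treated as a sunk cost, so only delivery costs count.
# Only the QALY estimate is taken from the abstract; the rest are assumptions.
qalys_gained = 2_900.0           # central estimate from the abstract
doses_delivered = 5_000_000.0    # assumed size of the high-risk programme
delivery_cost_per_dose = 7.50    # assumed administration cost per dose, GBP

cost_per_qaly = doses_delivered * delivery_cost_per_dose / qalys_gained
print(f"cost per QALY gained: {cost_per_qaly:,.0f} GBP")   # ~12,900 GBP with these assumptions
```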
Modeling the Violation of Reward Maximization and Invariance in Reinforcement Schedules
La Camera, Giancarlo; Richmond, Barry J.
2008-01-01
It is often assumed that animals and people adjust their behavior to maximize reward acquisition. In visually cued reinforcement schedules, monkeys make errors in trials that are not immediately rewarded, despite having to repeat error trials. Here we show that error rates are typically smaller in trials equally distant from reward but belonging to longer schedules (referred to as “schedule length effect”). This violates the principles of reward maximization and invariance and cannot be predicted by the standard methods of Reinforcement Learning, such as the method of temporal differences. We develop a heuristic model that accounts for all of the properties of the behavior in the reinforcement schedule task but whose predictions are not different from those of the standard temporal difference model in choice tasks. In the modification of temporal difference learning introduced here, the effect of schedule length emerges spontaneously from the sensitivity to the immediately preceding trial. We also introduce a policy for general Markov Decision Processes, where the decision made at each node is conditioned on the motivation to perform an instrumental action, and show that the application of our model to the reinforcement schedule task and the choice task are special cases of this general theoretical framework. Within this framework, Reinforcement Learning can approach contextual learning with the mixture of empirical findings and principled assumptions that seem to coexist in the best descriptions of animal behavior. As examples, we discuss two phenomena observed in humans that often derive from the violation of the principle of invariance: “framing,” wherein equivalent options are treated differently depending on the context in which they are presented, and the “sunk cost” effect, the greater tendency to continue an endeavor once an investment in money, effort, or time has been made. The schedule length effect might be a manifestation of these phenomena in monkeys. PMID:18688266
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-16
Revisions of estimates of annual burden due to agency error (collection; old burden; revision due to error; old minus error): IC1 "Ready to Move?"... (values truncated). Revisions of Estimates of Annual Costs to Respondents (collection; new cost; old cost; reduction, new minus old): IC1 "Ready to Move?" $288,000; $720,000; -$432,000. "Rights & Responsibilities" 3,264,000; 8,160...
Development of a Methodology to Optimally Allocate Visual Inspection Time
1989-06-01
Model and then takes into account the costs of the errors. The purpose of the Alternative Model is to not make costly mistakes while meeting the... worker error, the probability of inspector error, and the cost of system error. Paired comparisons of error phenomena from operational personnel are
Uncharted territory: measuring costs of diagnostic errors outside the medical record.
Schwartz, Alan; Weiner, Saul J; Weaver, Frances; Yudkowsky, Rachel; Sharma, Gunjan; Binns-Calvey, Amy; Preyss, Ben; Jordan, Neil
2012-11-01
In a past study using unannounced standardised patients (USPs), substantial rates of diagnostic and treatment errors were documented among internists. Because the authors know the correct disposition of these encounters and obtained the physicians' notes, they can identify necessary treatment that was not provided and unnecessary treatment. They can also discern which errors can be identified exclusively from a review of the medical records. To estimate the avoidable direct costs incurred by physicians making errors in our previous study. In the study, USPs visited 111 internal medicine attending physicians. They presented variants of four previously validated cases that jointly manipulate the presence or absence of contextual and biomedical factors that could lead to errors in management if overlooked. For example, in a patient with worsening asthma symptoms, a complicating biomedical factor was the presence of reflux disease and a complicating contextual factor was inability to afford the currently prescribed inhaler. Costs of missed or unnecessary services were computed using Medicare cost-based reimbursement data. Fourteen practice locations, including two academic clinics, two community-based primary care networks with multiple sites, a core safety net provider, and three Veteran Administration government facilities. Contribution of errors to costs of care. Overall, errors in care resulted in predicted costs of approximately $174,000 across 399 visits, of which only $8745 was discernible from a review of the medical records alone (without knowledge of the correct diagnoses). The median cost of error per visit with an incorrect care plan differed by case and by presentation variant within case. Chart reviews alone underestimate costs of care because they typically reflect appropriate treatment decisions conditional on (potentially erroneous) diagnoses. Important information about patient context is often entirely missing from medical records. Experimental methods, including the use of USPs, reveal the substantial costs of these errors.
Decision-making heuristics and biases across the life span.
Strough, Jonell; Karns, Tara E; Schlosnagle, Leo
2011-10-01
We outline a contextual and motivational model of judgment and decision-making (JDM) biases across the life span. Our model focuses on abilities and skills that correspond to deliberative, experiential, and affective decision-making processes. We review research that addresses links between JDM biases and these processes as represented by individual differences in specific abilities and skills (e.g., fluid and crystallized intelligence, executive functioning, emotion regulation, personality traits). We focus on two JDM biases-the sunk-cost fallacy (SCF) and the framing effect. We trace the developmental trajectory of each bias from preschool through middle childhood, adolescence, early adulthood, and later adulthood. We conclude that life-span developmental trajectories differ depending on the bias investigated. Existing research suggests relative stability in the framing effect across the life span and decreases in the SCF with age, including in later life. We highlight directions for future research on JDM biases across the life span, emphasizing the need for process-oriented research and research that increases our understanding of JDM biases in people's everyday lives. © 2011 New York Academy of Sciences.
The Multifold Relationship Between Memory and Decision Making: An Individual-differences Study
Del Missier, Fabio; Mäntylä, Timo; Hansson, Patrik; Bruine de Bruin, Wändi; Parker, Andrew M.; Nilsson, Lars-Göran
2014-01-01
Several judgment and decision-making tasks are assumed to involve memory functions, but significant knowledge gaps on the memory processes underlying these tasks remain. In a study on 568 adults between 25 to 80 years, hypotheses were tested on the specific relationships between individual differences in working memory, episodic memory, and semantic memory, respectively, and six main components of decision-making competence. In line with the hypotheses, working memory was positively related with the more cognitively-demanding tasks (Resistance to Framing, Applying Decision Rules, and Under/Overconfidence), whereas episodic memory was positively associated with a more experience-based judgment task (Recognizing Social Norms). Furthermore, semantic memory was positively related with two more knowledge-based decision-making tasks (Consistency in Risk Perception and Resistance to Sunk Costs). Finally, the age-related decline observed in some of the decision-making tasks was (partially or totally) mediated by the age-related decline in working memory or episodic memory. These findings are discussed in relation to the functional roles fulfilled by different memory processes in judgment and decision-making tasks. PMID:23565790
The dragons of inaction: psychological barriers that limit climate change mitigation and adaptation.
Gifford, Robert
2011-01-01
Most people think climate change and sustainability are important problems, but too few global citizens engaged in high-greenhouse-gas-emitting behavior are engaged in enough mitigating behavior to stem the increasing flow of greenhouse gases and other environmental problems. Why is that? Structural barriers such as a climate-averse infrastructure are part of the answer, but psychological barriers also impede behavioral choices that would facilitate mitigation, adaptation, and environmental sustainability. Although many individuals are engaged in some ameliorative action, most could do more, but they are hindered by seven categories of psychological barriers, or "dragons of inaction": limited cognition about the problem, ideological world views that tend to preclude pro-environmental attitudes and behavior, comparisons with key other people, sunk costs and behavioral momentum, discredence toward experts and authorities, perceived risks of change, and positive but inadequate behavior change. Structural barriers must be removed wherever possible, but this is unlikely to be sufficient. Psychologists must work with other scientists, technical experts, and policymakers to help citizens overcome these psychological barriers.
Handedness differences in information framing.
Jasper, John D; Fournier, Candice; Christman, Stephen D
2014-02-01
Previous research has shown that strength of handedness predicts differences in sensory illusions, Stroop interference, episodic memory, and beliefs about body image. Recent evidence also suggests handedness differences in the susceptibility to common decision biases such as anchoring and sunk cost. The present paper extends this line of work to attribute framing effects. Sixty-three undergraduates were asked to advise a friend concerning the use of a safe allergy medication during pregnancy. A third of the participants received negatively-framed information concerning the fetal risk of the drug (1-3% chance of having a malformed child); another third received positively-framed information (97-99% chance of having a normal child); and the final third received no counseling information and served as the control. Results indicated that, as predicted, inconsistent (mixed)-handers were more responsive than consistent (strong)-handers to information changes and readily updated their beliefs. Although not significant, the data also suggested that only inconsistent handers were affected by information framing. Theoretical implications as well as ongoing work in holistic versus analytic processing, contextual sensitivity, and brain asymmetry will be discussed. Copyright © 2013 Elsevier Inc. All rights reserved.
Long, K.R.
1995-01-01
Modern mining law, by facilitating socially and environmentally acceptable exploration, development, and production of mineral materials, helps secure the benefits of mineral production while minimizing environmental harm and accounting for increasing land-use competition. Mining investments are sunk costs, irreversibly tied to a particular mineral site, and require many years to recoup. Providing security of tenure is the most critical element of a practical mining law. Governments owning mineral rights have a conflict of interest between their roles as a profit-maximizing landowner and as a guardian of public welfare. As a monopoly supplier, governments have considerable power to manipulate mineral-rights markets. To avoid monopoly rent-seeking by governments, a competitive market for government-owned mineral rights must be created by artifice. What mining firms will pay for mineral rights depends on expected exploration success and extraction costs. Landowners and mining firms will negotiate respective shares of anticipated differential rents, usually allowing for some form of risk sharing. Private landowners do not normally account for external benefits or costs of minerals use. Government ownership of mineral rights allows for direct accounting of social prices for mineral-bearing lands and external costs. An equitable and efficient method is to charge an appropriate reservation price for surface land use, net of the value of land after reclamation, and to recover all or part of differential rents through a flat income or resource-rent tax. The traditional royalty on gross value of production, essentially a regressive income tax, cannot recover as much rent as a flat income tax, causes arbitrary mineral-reserve sterilization, and creates a bias toward development on the extensive margin where marginal environmental costs are higher. Mitigating environmental costs and resolving land-use conflicts require local evaluation and planning. National oversight ensures that the relative global availability of minerals and other values are considered, and can also promote adaptive efficiency by publicizing creative local solutions, providing technical support, and funding useful research. © 1995 Oxford University Press.
Risør, Bettina Wulff; Lisby, Marianne; Sørensen, Jan
2018-02-01
Automated medication systems have been found to reduce errors in the medication process, but little is known about the cost-effectiveness of such systems. The objective of this study was to perform a model-based indirect cost-effectiveness comparison of three different, real-world automated medication systems compared with current standard practice. The considered automated medication systems were a patient-specific automated medication system (psAMS), a non-patient-specific automated medication system (npsAMS), and a complex automated medication system (cAMS). The economic evaluation used original effect and cost data from prospective, controlled, before-and-after studies of medication systems implemented at a Danish hematological ward and an acute medical unit. Effectiveness was described as the proportion of clinical and procedural error opportunities that were associated with one or more errors. An error was defined as a deviation from the electronic prescription, from standard hospital policy, or from written procedures. The cost assessment was based on 6-month standardization of observed cost data. The model-based comparative cost-effectiveness analyses were conducted with system-specific assumptions of the effect size and costs in scenarios with consumptions of 15,000, 30,000, and 45,000 doses per 6-month period. With 30,000 doses the cost-effectiveness model showed that the cost-effectiveness ratio expressed as the cost per avoided clinical error was €24 for the psAMS, €26 for the npsAMS, and €386 for the cAMS. Comparison of the cost-effectiveness of the three systems in relation to different valuations of an avoided error showed that the psAMS was the most cost-effective system regardless of error type or valuation. The model-based indirect comparison against the conventional practice showed that psAMS and npsAMS were more cost-effective than the cAMS alternative, and that psAMS was more cost-effective than npsAMS.
The cost of implementing inpatient bar code medication administration.
Sakowski, Julie Ann; Ketchel, Alan
2013-02-01
To calculate the costs associated with implementing and operating an inpatient bar-code medication administration (BCMA) system in the community hospital setting and to estimate the cost per harmful error prevented. This is a retrospective, observational study. Costs were calculated from the hospital perspective and a cost-consequence analysis was performed to estimate the cost per preventable adverse drug event averted. Costs were collected from financial records and key informant interviews at 4 not-for-profit community hospitals. Costs included direct expenditures on capital, infrastructure, additional personnel, and the opportunity costs of time for existing personnel working on the project. The number of adverse drug events prevented using BCMA was estimated by multiplying the number of doses administered using BCMA by the rate of harmful errors prevented by interventions in response to system warnings. Our previous work found that BCMA identified and intercepted medication errors in 1.1% of doses administered, 9% of which potentially could have resulted in lasting harm. The cost of implementing and operating BCMA including electronic pharmacy management and drug repackaging over 5 years is $40,000 (range: $35,600 to $54,600) per BCMA-enabled bed and $2000 (range: $1800 to $2600) per harmful error prevented. BCMA can be an effective and potentially cost-saving tool for preventing the harm and costs associated with medication errors.
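The cost-consequence arithmetic described here (harmful errors prevented = doses administered x intercept rate x fraction of intercepted errors that would have caused lasting harm) can be sketched as follows; the per-bed dose volume is an assumption chosen only so that the ratio lands near the reported ~$2,000 per harmful error.

```python
def cost_per_harmful_error_prevented(total_cost, doses_administered,
                                     intercept_rate=0.011, harm_fraction=0.09):
    """Cost-consequence ratio: programme cost divided by harmful errors averted.

    intercept_rate: share of administered doses where BCMA intercepted an error
    harm_fraction:  share of intercepted errors that could have caused lasting harm
    """
    harmful_errors_prevented = doses_administered * intercept_rate * harm_fraction
    return total_cost / harmful_errors_prevented

# Hypothetical example: one BCMA-enabled bed over 5 years. The $40,000 per bed is
# the abstract's figure; the dose volume is an assumed placeholder.
print(round(cost_per_harmful_error_prevented(40_000, 20_000)))   # ~2020 per harmful error
```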
Maloney, Stephen; Nicklen, Peter; Rivers, George; Foo, Jonathan; Ooi, Ying Ying; Reeves, Scott; Walsh, Kieran; Ilic, Dragan
2015-07-21
Blended learning describes a combination of teaching methods, often utilizing digital technologies. Research suggests that learner outcomes can be improved through some blended learning formats. However, the cost-effectiveness of delivering blended learning is unclear. This study aimed to determine the cost-effectiveness of a face-to-face learning and blended learning approach for evidence-based medicine training within a medical program. The economic evaluation was conducted as part of a randomized controlled trial (RCT) comparing the evidence-based medicine (EBM) competency of medical students who participated in two different modes of education delivery. In the traditional face-to-face method, students received ten 2-hour classes. In the blended learning approach, students received the same total face-to-face hours but with different activities and additional online and mobile learning. Online activities utilized YouTube and a library guide indexing electronic databases, guides, and books. Mobile learning involved self-directed interactions with patients in their regular clinical placements. The attribution and differentiation of costs between the interventions within the RCT was measured in conjunction with measured outcomes of effectiveness. An incremental cost-effectiveness ratio was calculated comparing the ongoing operation costs of each method with the level of EBM proficiency achieved. Present value analysis was used to calculate the break-even point considering the transition cost and the difference in ongoing operation cost. The incremental cost-effectiveness ratio indicated that it costs 24% less to educate a student to the same level of EBM competency via the blended learning approach used in the study, when excluding transition costs. The sunk cost of approximately AUD $40,000 to transition to the blended model exceeds any savings from using the approach within the first year of its implementation; however, a break-even point is achieved within its third iteration and relative savings in the subsequent years. The sensitivity analysis indicates that approaches with higher transition costs, or staffing requirements over that of a traditional method, are likely to result in negative value propositions. Under the study conditions, a blended learning approach was more cost-effective to operate and resulted in improved value for the institution after the third year iteration, when compared to the traditional face-to-face model. The wider applicability of the findings are dependent on the type of blended learning utilized, staffing expertise, and educational context.
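A minimal sketch of the break-even reasoning in this abstract, in which the one-off transition cost is compared with the discounted ongoing savings of the cheaper delivery mode, is shown below. The annual saving and discount rate are assumptions; only the AUD $40,000 transition cost is taken from the abstract.

```python
def breakeven_iteration(transition_cost, annual_saving, discount_rate=0.05,
                        max_years=10):
    """Return the first yearly iteration at which the discounted cumulative
    saving from the cheaper delivery mode exceeds the one-off transition
    (sunk) cost, or None if it never does within max_years."""
    cumulative_pv = 0.0
    for year in range(1, max_years + 1):
        cumulative_pv += annual_saving / (1 + discount_rate) ** year
        if cumulative_pv >= transition_cost:
            return year
    return None

# Hypothetical figures: AUD 40,000 transition cost (from the abstract) and an
# assumed ongoing saving of AUD 16,000 per yearly iteration of the course.
print(breakeven_iteration(40_000, 16_000))   # -> 3, i.e. break-even at the third iteration
```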
Human Reliability and the Cost of Doing Business
NASA Technical Reports Server (NTRS)
DeMott, Diana
2014-01-01
Most businesses recognize that people will make mistakes and assume errors are just part of the cost of doing business, but does it need to be? Companies with high risk, or major consequences, should consider the effect of human error. In a variety of industries, human errors have caused costly failures and workplace injuries. These have included airline mishaps, medical malpractice, errors in the administration of medication, and major oil spills, all of which have been blamed on human error. A technique to mitigate or even eliminate some of these costly human errors is the use of Human Reliability Analysis (HRA). Various methodologies are available to perform Human Reliability Assessments that range from identifying the most likely areas for concern to detailed assessments with human error failure probabilities calculated. Which methodology to use would be based on a variety of factors that would include: 1) how people react and act in different industries, and differing expectations based on industry standards, 2) factors that influence how the human errors could occur such as tasks, tools, environment, workplace, support, training and procedure, 3) type and availability of data, and 4) how the industry views risk & reliability influences (types of emergencies, contingencies and routine tasks versus cost-based concerns). The Human Reliability Assessments should be the first step to reduce, mitigate or eliminate the costly mistakes or catastrophic failures. Using Human Reliability techniques to identify and classify human error risks allows a company more opportunities to mitigate or eliminate these risks and prevent costly failures.
Claims, errors, and compensation payments in medical malpractice litigation.
Studdert, David M; Mello, Michelle M; Gawande, Atul A; Gandhi, Tejal K; Kachalia, Allen; Yoon, Catherine; Puopolo, Ann Louise; Brennan, Troyen A
2006-05-11
In the current debate over tort reform, critics of the medical malpractice system charge that frivolous litigation--claims that lack evidence of injury, substandard care, or both--is common and costly. Trained physicians reviewed a random sample of 1452 closed malpractice claims from five liability insurers to determine whether a medical injury had occurred and, if so, whether it was due to medical error. We analyzed the prevalence, characteristics, litigation outcomes, and costs of claims that lacked evidence of error. For 3 percent of the claims, there were no verifiable medical injuries, and 37 percent did not involve errors. Most of the claims that were not associated with errors (370 of 515 [72 percent]) or injuries (31 of 37 [84 percent]) did not result in compensation; most that involved injuries due to error did (653 of 889 [73 percent]). Payment of claims not involving errors occurred less frequently than did the converse form of inaccuracy--nonpayment of claims associated with errors. When claims not involving errors were compensated, payments were significantly lower on average than were payments for claims involving errors (313,205 dollars vs. 521,560 dollars, P=0.004). Overall, claims not involving errors accounted for 13 to 16 percent of the system's total monetary costs. For every dollar spent on compensation, 54 cents went to administrative expenses (including those involving lawyers, experts, and courts). Claims involving errors accounted for 78 percent of total administrative costs. Claims that lack evidence of error are not uncommon, but most are denied compensation. The vast majority of expenditures go toward litigation over errors and payment of them. The overhead costs of malpractice litigation are exorbitant. Copyright 2006 Massachusetts Medical Society.
Estimating the Imputed Social Cost of Errors of Measurement.
1983-10-01
social cost of an error of measurement in the score on a unidimensional test, an asymptotic method, based on item response theory, is developed for... RR-83-33-ONR, Estimating the Imputed Social Cost of Errors of Measurement, Frederic M. Lord. This research was sponsored in part by the Personnel and Training Research Programs, Psychological...
Karnon, Jonathan; Campbell, Fiona; Czoski-Murray, Carolyn
2009-04-01
Medication errors can lead to preventable adverse drug events (pADEs) that have significant cost and health implications. Errors often occur at care interfaces, and various interventions have been devised to reduce medication errors at the point of admission to hospital. The aim of this study is to assess the incremental costs and effects [measured as quality adjusted life years (QALYs)] of a range of such interventions for which evidence of effectiveness exists. A previously published medication errors model was adapted to describe the pathway of errors occurring at admission through to the occurrence of pADEs. The baseline model was populated using literature-based values, and then calibrated to observed outputs. Evidence of effects was derived from a systematic review of interventions aimed at preventing medication error at hospital admission. All five interventions, for which evidence of effectiveness was identified, are estimated to be extremely cost-effective when compared with the baseline scenario. Pharmacist-led reconciliation intervention has the highest expected net benefits, and a probability of being cost-effective of over 60% by a QALY value of £10,000. The medication errors model provides reasonably strong evidence that some form of intervention to improve medicines reconciliation is a cost-effective use of NHS resources. The variation in the reported effectiveness of the few identified studies of medication error interventions illustrates the need for extreme attention to detail in the development of interventions, but also in their evaluation and may justify the primary evaluation of more than one specification of included interventions.
2006-10-01
Colby made CORDS and pacification the principal effort. A rejuvenated civil and rural development program provided increased support, advisers, and...deep into the well of the local populace for a fighting force. Average approximate ages of fighters had sunk to 13-15 years. But rather than 6...feed. Immunizations, coupled with rejuvenating the irrigation apparatus around Baghdad, created conditions for economic independence
2000-04-01
excitement came in responding to a distress call from Lusitania, sunk by a U-boat in April. Herbert and his crew were deeply affected by this incident as...they saw first-hand the recovered bodies of Lusitania passengers that had been laid out on the Queenstown jetty. At mid-day 19 August, Baralong was...Germans. Baralong's crew reveled in its actions, having avenged Lusitania and Arabic. Although accounts vary, eyewitnesses reported that Wegener was
Worldwide Emerging Environmental Issues Affecting the U.S. Military. November 2005 Report
2005-11-01
rapid development. At the program’s launch festivity, the need for developing an international e-waste recycling system along with transparent...electronic equipment. Sources: Roadmap Set for the Environmentally Sound Management of Electronic Waste in Asia-Pacific under the Basel Convention... Tom Dunne, of the agency’s Office of Solid Waste and Emergency Response, wrote in an e-mail message. 4.5 Sunk Weapons Represent a Growing
Design principles in telescope development: invariance, innocence, and the costs
NASA Astrophysics Data System (ADS)
Steinbach, Manfred
1997-03-01
Instrument design is, for the most part, a battle against errors and costs. Passive methods of error damping are in many cases effective and inexpensive. This paper shows examples of error minimization in our design of telescopes, instrumentation and evaluation instruments.
Economic measurement of medical errors using a hospital claims database.
David, Guy; Gunnarsson, Candace L; Waters, Heidi C; Horblyuk, Ruslan; Kaplan, Harold S
2013-01-01
The primary objective of this study was to estimate the occurrence and costs of medical errors from the hospital perspective. Methods from a recent actuarial study of medical errors were used to identify medical injuries. A visit qualified as an injury visit if at least 1 of 97 injury groupings occurred at that visit, and the percentage of injuries caused by medical error was estimated. Visits with more than four injuries were removed from the population to avoid overestimation of cost. Population estimates were extrapolated from the Premier hospital database to all US acute care hospitals. There were an estimated 161,655 medical errors in 2008 and 170,201 medical errors in 2009. Extrapolated to the entire US population, there were more than 4 million unique injury visits containing more than 1 million unique medical errors each year. This analysis estimated that the total annual cost of measurable medical errors in the United States was $985 million in 2008 and just over $1 billion in 2009. The median cost per error to hospitals was $892 for 2008 and rose to $939 in 2009. Nearly one third of all medical injuries were due to error in each year. Medical errors directly impact patient outcomes and hospitals' profitability, especially since 2008 when Medicare stopped reimbursing hospitals for care related to certain preventable medical errors. Hospitals must rigorously analyze causes of medical errors and implement comprehensive preventative programs to reduce their occurrence as the financial burden of medical errors shifts to hospitals. Copyright © 2013 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
The hidden traps in decision making.
Hammond, J S; Keeney, R L; Raiffa, H
1998-01-01
Bad decisions can often be traced back to the way the decisions were made--the alternatives were not clearly defined, the right information was not collected, the costs and benefits were not accurately weighted. But sometimes the fault lies not in the decision-making process but rather in the mind of the decision maker. The way the human brain works can sabotage the choices we make. John Hammond, Ralph Keeney, and Howard Raiffa examine eight psychological traps that are particularly likely to affect the way we make business decisions: The anchoring trap leads us to give disproportionate weight to the first information we receive. The status quo trap biases us toward maintaining the current situation--even when better alternatives exist. The sunk-cost trap inclines us to perpetuate the mistakes of the past. The confirming-evidence trap leads us to seek out information supporting an existing predilection and to discount opposing information. The framing trap occurs when we misstate a problem, undermining the entire decision-making process. The overconfidence trap makes us overestimate the accuracy of our forecasts. The prudence trap leads us to be overcautious when we make estimates about uncertain events. And the recallability trap leads us to give undue weight to recent, dramatic events. The best way to avoid all the traps is awareness--forewarned is forearmed. But executives can also take other simple steps to protect themselves and their organizations from the various kinds of mental lapses. The authors show how to take action to ensure that important business decisions are sound and reliable.
Porter, K.; Jones, Lucile M.; Ross, Stephanie L.; Borrero, J.; Bwarie, J.; Dykstra, D.; Geist, Eric L.; Johnson, L.; Kirby, Stephen H.; Long, K.; Lynett, P.; Miller, K.; Mortensen, Carl E.; Perry, S.; Plumlee, G.; Real, C.; Ritchie, L.; Scawthorn, C.; Thio, H.K.; Wein, Anne; Whitmore, P.; Wilson, R.; Wood, Nathan J.; Ostbo, Bruce I.; Oates, Don
2013-01-01
The U.S. Geological Survey and several partners operate a program called Science Application for Risk Reduction (SAFRR) that produces (among other things) emergency planning scenarios for natural disasters. The scenarios show how science can be used to enhance community resiliency. The SAFRR Tsunami Scenario describes potential impacts of a hypothetical, but realistic, tsunami affecting California (as well as the west coast of the United States, Alaska, and Hawaii) for the purpose of informing planning and mitigation decisions by a variety of stakeholders. The scenario begins with an Mw 9.1 earthquake off the Alaska Peninsula. With Pacific basin-wide modeling, we estimate up to 5m waves and 10 m/sec currents would strike California 5 hours later. In marinas and harbors, 13,000 small boats are damaged or sunk (1 in 3) at a cost of $350 million, causing navigation and environmental problems. Damage in the Ports of Los Angeles and Long Beach amount to $110 million, half of it water damage to vehicles and containerized cargo. Flooding of coastal communities affects 1800 city blocks, resulting in $640 million in damage. The tsunami damages 12 bridge abutments and 16 lane-miles of coastal roadway, costing $85 million to repair. Fire and business interruption losses will substantially add to direct losses. Flooding affects 170,000 residents and workers. A wide range of environmental impacts could occur. An extensive public education and outreach program is underway, as well as an evaluation of the overall effort.
1998-09-01
sunk An. trinkae as its junior synonym. Recent regional keys for identifying anopheline species have followed Peyton (1993) and regarded An. trinkae...as a synonym of An. dunhami (Calderon-Falero 1994). In recent publications we also accepted Peyton’s (1993) use of An. dunhami as a senior synonym...1995)) and a 1-S distance matrix was generated using the similarity option in the RAPDPLOT program
Defeating the U-boat. Inventing Antisubmarine Warfare (Newport Papers Number 36)
2010-08-01
did not explain how this number had been arrived at but claimed it could be achieved by no more than four cruisers and twelve armed liners in the...cover of darkness, ambush and destroy the largest unarmed liners afloat. Aube made clear that unlike in the past, ships, their crews, and cargoes...would not be captured but sunk without warning: “Having followed the liner from afar, come nightfall, the torpedo-boat will, perfectly silently and
British and German Logistics Support during the World War 2 North African Campaign
1990-02-05
situation was now approaching disaster. The tanker, Prosperina, which we had hoped would bring some relief in the petrol situation, had been bombed...and sunk outside Tobruk. There was only enough petrol left to keep supply traffic going between Tripoli and the front for another three days, and that...had not the petrol to do it. So we were compelled to allow the armored formations in the northern part of our line to assault the British salient
Shahly, Victoria; Berglund, Patricia A; Coulouvrat, Catherine; Fitzgerald, Timothy; Hajak, Goeran; Roth, Thomas; Shillington, Alicia C; Stephenson, Judith J; Walsh, James K; Kessler, Ronald C
2012-10-01
Insomnia is a common and seriously impairing condition that often goes unrecognized. To examine associations of broadly defined insomnia (ie, meeting inclusion criteria for a diagnosis from International Statistical Classification of Diseases, 10th Revision, DSM-IV, or Research Diagnostic Criteria/International Classification of Sleep Disorders, Second Edition) with costly workplace accidents and errors after excluding other chronic conditions among workers in the America Insomnia Survey (AIS). A national cross-sectional telephone survey (65.0% cooperation rate) of commercially insured health plan members selected from the more than 34 million in the HealthCore Integrated Research Database. Four thousand nine hundred ninety-one employed AIS respondents. Costly workplace accidents or errors in the 12 months before the AIS interview were assessed with one question about workplace accidents "that either caused damage or work disruption with a value of $500 or more" and another about other mistakes "that cost your company $500 or more." Current insomnia with duration of at least 12 months was assessed with the Brief Insomnia Questionnaire, a validated (area under the receiver operating characteristic curve, 0.86 compared with diagnoses based on blinded clinical reappraisal interviews), fully structured diagnostic interview. Eighteen other chronic conditions were assessed with medical/pharmacy claims records and validated self-report scales. Insomnia had a significant odds ratio with workplace accidents and/or errors controlled for other chronic conditions (1.4). The odds ratio did not vary significantly with respondent age, sex, educational level, or comorbidity. The average costs of insomnia-related accidents and errors ($32 062) were significantly higher than those of other accidents and errors ($21 914). Simulations estimated that insomnia was associated with 7.2% of all costly workplace accidents and errors and 23.7% of all the costs of these incidents. These proportions are higher than for any other chronic condition, with annualized US population projections of 274 000 costly insomnia-related workplace accidents and errors having a combined value of US $31.1 billion. Effectiveness trials are needed to determine whether expanded screening, outreach, and treatment of workers with insomnia would yield a positive return on investment for employers.
Does the cost function matter in Bayes decision rule?
Schlüter, Ralf; Nussbaum-Thom, Markus; Ney, Hermann
2012-02-01
In many tasks in pattern recognition, such as automatic speech recognition (ASR), optical character recognition (OCR), part-of-speech (POS) tagging, and other string recognition tasks, we are faced with a well-known inconsistency: The Bayes decision rule is usually used to minimize string (symbol sequence) error, whereas, in practice, we want to minimize symbol (word, character, tag, etc.) error. When comparing different recognition systems, we do indeed use symbol error rate as an evaluation measure. The topic of this work is to analyze the relation between string (i.e., 0-1) and symbol error (i.e., metric, integer valued) cost functions in the Bayes decision rule, for which fundamental analytic results are derived. Simple conditions are derived for which the Bayes decision rule with integer-valued metric cost function and with 0-1 cost gives the same decisions or leads to classes with limited cost. The corresponding conditions can be tested with complexity linear in the number of classes. The results obtained do not make any assumption w.r.t. the structure of the underlying distributions or the classification problem. Nevertheless, the general analytic results are analyzed via simulations of string recognition problems with Levenshtein (edit) distance cost function. The results support earlier findings that considerable improvements are to be expected when initial error rates are high.
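A toy illustration of the distinction analyzed in this abstract: the 0-1 cost function yields the maximum a posteriori (MAP) string, whereas a Levenshtein-type cost yields the hypothesis with minimum expected edit distance, and the two decisions can differ. The candidate hypotheses and posterior below are invented for illustration.

```python
def levenshtein(a, b):
    """Edit distance between two symbol sequences."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def min_expected_cost(posterior, cost):
    """Bayes decision with a metric cost: hypothesis minimizing expected cost."""
    return min(posterior,
               key=lambda w: sum(p * cost(w.split(), v.split())
                                 for v, p in posterior.items()))

# Invented posterior over word-sequence hypotheses.
posterior = {"a b c": 0.40, "a b d": 0.35, "a x d": 0.25}
map_decision = max(posterior, key=posterior.get)           # 0-1 cost: minimizes string error
wer_decision = min_expected_cost(posterior, levenshtein)   # minimizes expected word errors
print(map_decision, wer_decision)                          # 'a b c' vs 'a b d'
```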
Shimansky, Y P
2011-05-01
It is well known from numerous studies that perception can be significantly affected by intended action in many everyday situations, indicating that perception and related decision-making is not a simple, one-way sequence, but a complex iterative cognitive process. However, the underlying functional mechanisms are yet unclear. Based on an optimality approach, a quantitative computational model of one such mechanism has been developed in this study. It is assumed in the model that significant uncertainty about task-related parameters of the environment results in parameter estimation errors and an optimal control system should minimize the cost of such errors in terms of the optimality criterion. It is demonstrated that, if the cost of a parameter estimation error is significantly asymmetrical with respect to error direction, the tendency to minimize error cost creates a systematic deviation of the optimal parameter estimate from its maximum likelihood value. Consequently, optimization of parameter estimate and optimization of control action cannot be performed separately from each other under parameter uncertainty combined with asymmetry of estimation error cost, thus making the certainty equivalence principle non-applicable under those conditions. A hypothesis that not only the action, but also perception itself is biased by the above deviation of parameter estimate is supported by ample experimental evidence. The results provide important insights into the cognitive mechanisms of interaction between sensory perception and planning an action under realistic conditions. Implications for understanding related functional mechanisms of optimal control in the CNS are discussed.
Disclosure of Medical Errors: What Factors Influence How Patients Respond?
Mazor, Kathleen M; Reed, George W; Yood, Robert A; Fischer, Melissa A; Baril, Joann; Gurwitz, Jerry H
2006-01-01
BACKGROUND Disclosure of medical errors is encouraged, but research on how patients respond to specific practices is limited. OBJECTIVE This study sought to determine whether full disclosure, an existing positive physician-patient relationship, an offer to waive associated costs, and the severity of the clinical outcome influenced patients' responses to medical errors. PARTICIPANTS Four hundred and seven health plan members participated in a randomized experiment in which they viewed video depictions of medical error and disclosure. DESIGN Subjects were randomly assigned to experimental condition. Conditions varied in type of medication error, level of disclosure, reference to a prior positive physician-patient relationship, an offer to waive costs, and clinical outcome. MEASURES Self-reported likelihood of changing physicians and of seeking legal advice; satisfaction, trust, and emotional response. RESULTS Nondisclosure increased the likelihood of changing physicians, and reduced satisfaction and trust in both error conditions. Nondisclosure increased the likelihood of seeking legal advice and was associated with a more negative emotional response in the missed allergy error condition, but did not have a statistically significant impact on seeking legal advice or emotional response in the monitoring error condition. Neither the existence of a positive relationship nor an offer to waive costs had a statistically significant impact. CONCLUSIONS This study provides evidence that full disclosure is likely to have a positive effect or no effect on how patients respond to medical errors. The clinical outcome also influences patients' responses. The impact of an existing positive physician-patient relationship, or of waiving costs associated with the error remains uncertain. PMID:16808770
Cyclonic entrainment of preconditioned shelf waters into a frontal eddy
NASA Astrophysics Data System (ADS)
Everett, J. D.; Macdonald, H.; Baird, M. E.; Humphries, J.; Roughan, M.; Suthers, I. M.
2015-02-01
The volume transport of nutrient-rich continental shelf water into a cyclonic frontal eddy (entrainment) was examined from satellite observations, a Slocum glider and numerical simulation outputs. Within the frontal eddy, parcels of water with temperature/salinity signatures of the continental shelf (18-19°C and >35.5, respectively) were recorded. The distribution of patches of shelf water observed within the eddy was consistent with the spiral pattern shown within the numerical simulations. A numerical dye tracer experiment showed that the surface waters (≤50 m depth) of the frontal eddy are almost entirely (≥95%) shelf waters. Particle tracking experiments showed that water was drawn into the eddy from over 4° of latitude (30-34.5°S). Consistent with the glider observations, the modeled particles entrained into the eddy sunk relative to their initial position. Particles released south of 33°S, where the waters are cooler and denser, sunk 34 m deeper than their release position. Distance to the shelf was a critical factor in determining the volume of shelf water entrained into the eddy. Entrainment reduced to 0.23 Sv when the eddy was furthest from the shelf, compared to 0.61 Sv when the eddy was within 10 km of the shelf. From a biological perspective, quantifying the entrainment of shelf water into frontal eddies is important, as it is thought to play a significant role in providing an offshore nursery habitat for coastally spawned larval fish.
The impact of estimation errors on evaluations of timber production opportunities.
Dennis L. Schweitzer
1970-01-01
Errors in estimating costs and returns, the timing of harvests, and the cost of using funds can greatly affect the apparent desirability of investments in timber production. Partial derivatives are used to measure the impact of these errors on the predicted present net worth of potential investments in timber production. Graphs that illustrate the impact of each type...
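A rough sketch of the kind of sensitivity analysis described here, using a single-harvest present-net-worth formula PNW = R(1+r)^(-t) - C and its partial derivatives with respect to each estimated input; the stand figures are hypothetical and the formula is a simplification, not the paper's model.

```python
from math import log

def present_net_worth(revenue, cost_now, years, rate):
    """PNW of a single harvest `years` from now: discounted revenue minus cost."""
    return revenue / (1 + rate) ** years - cost_now

def pnw_sensitivities(revenue, cost_now, years, rate):
    """Partial derivatives of PNW with respect to each estimated input."""
    disc = (1 + rate) ** -years
    return {
        "d/d_revenue": disc,                                         # per $ of revenue error
        "d/d_cost": -1.0,                                            # per $ of cost error
        "d/d_years": -revenue * log(1 + rate) * disc,                # per year of timing error
        "d/d_rate": -years * revenue * (1 + rate) ** -(years + 1),   # per unit error in the rate
    }

# Hypothetical stand: $3,000/acre harvest revenue in 30 years, $200/acre cost, 5% rate.
print(round(present_net_worth(3000, 200, 30, 0.05), 2))
print({k: round(v, 2) for k, v in pnw_sensitivities(3000, 200, 30, 0.05).items()})
```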
Human-Agent Teaming for Multi-Robot Control: A Literature Review
2013-02-01
neurophysiological devices are becoming more cost effective and less invasive, future systems will most likely take advantage of this technology to monitor...Parasuraman et al., 1993). It has also been reported that both the cost of automation errors and the cost of verification affect humans’ reliance on...decision aids, and the effects are also moderated by age (Ezer et al., 2008). Generally, reliance is reduced as the cost of error increases and it
Cross Validation Through Two-Dimensional Solution Surface for Cost-Sensitive SVM.
Gu, Bin; Sheng, Victor S; Tay, Keng Yeow; Romano, Walter; Li, Shuo
2017-06-01
Model selection plays an important role in cost-sensitive SVM (CS-SVM). It has been proven that the global minimum cross validation (CV) error can be efficiently computed based on the solution path for one-parameter learning problems. However, it is a challenge to obtain the global minimum CV error for CS-SVM based on a one-dimensional solution path and traditional grid search, because CS-SVM has two regularization parameters. In this paper, we propose a solution and error surfaces based CV approach (CV-SES). More specifically, we first compute a two-dimensional solution surface for CS-SVM based on a bi-parameter space partition algorithm, which can fit solutions of CS-SVM for all values of both regularization parameters. Then, we compute a two-dimensional validation error surface for each CV fold, which can fit validation errors of CS-SVM for all values of both regularization parameters. Finally, we obtain the CV error surface by superposing K validation error surfaces, which can find the global minimum CV error of CS-SVM. Experiments are conducted on seven datasets for cost-sensitive learning and on four datasets for imbalanced learning. Experimental results not only show that our proposed CV-SES has a better generalization ability than CS-SVM with various hybrids between grid search and solution path methods, and than the recently proposed cost-sensitive hinge loss SVM with three-dimensional grid search, but also show that CV-SES uses less running time.
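CV-SES itself relies on a bi-parameter solution-surface construction that is not reproduced here; the simpler baseline it is compared against, a grid search over the two regularization parameters of a cost-sensitive SVM, can be sketched with scikit-learn, where per-class costs are expressed through class weights. The dataset, grid values, and scoring choice are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Imbalanced toy problem standing in for a cost-sensitive learning task.
X, y = make_classification(n_samples=600, weights=[0.9, 0.1], random_state=0)

# The two regularization parameters are expressed here as a common C plus a
# weight on the minority class (an assumed parameterization of CS-SVM).
grid = {
    "C": [0.1, 1, 10, 100],
    "class_weight": [{0: 1, 1: w} for w in (1, 2, 5, 10)],
}
search = GridSearchCV(SVC(kernel="rbf"), grid, cv=5, scoring="balanced_accuracy")
search.fit(X, y)
print(search.best_params_, round(1 - search.best_score_, 3))  # best grid point and its CV error
```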
Piekarski, Patrick K.; Carpenter, James M.; Sharanowski, Barbara J.
2017-01-01
A new species of potter wasp from South America, Ancistrocerus sur sp. n., is described. A species key and checklist for all described Ancistrocerus that occur south of the Rio Grande are provided. New synonymy includes Odynerus bolivianus Brèthes = Ancistrocerus pilosus (de Saussure), while the subspecies bustamente discopictus Bequaert, lineativentris kamloopsensis Bequaert, lineativentris sinopis Bohart, tuberculocephalus sutterianus (de Saussure), and pilosus ecuadorianus Bertoni are all sunk under their respective nominotypical taxa. PMID:29290718
1990-12-01
submarines. The British liner Lusitania, sunk by the German submarine U-20 on 07 May, 1915, brought to a boil the issues concerning American neutrality...and Germany’s decision to employ the submarine to counter Britain’s dominance of the sea. The sinking of the Lusitania caused the loss of 1195 lives...from the Entente by August. sinking of the Lusitania was centered in the eastern United States, while the western part of the country reacted much
Al-lela, Omer Qutaiba B; Bahari, Mohd Baidi; Al-abbassi, Mustafa G; Salih, Muhannad R M; Basher, Amena Y
2012-06-06
The immunization status of children is improved by interventions that increase community demand for compulsory and non-compulsory vaccines; among the most important of these are interventions directed at immunization providers. The aim of this study is to evaluate the activities of immunization providers in terms of activity time and cost, to calculate the cost of immunization doses, and to determine the cost of immunization dose errors. A time-motion and cost analysis study design was used. Five public health clinics in Mosul, Iraq, participated in the study. Fifty (50) vaccine doses were required to estimate activity time and cost. A micro-costing method was used; time and cost data were collected for each immunization-related activity performed by the clinic staff. A stopwatch was used to measure the duration of activity interactions between the parents and clinic staff. The immunization service cost was calculated by multiplying the average salary per minute by the activity time in minutes. A total of 528 immunization cards of Iraqi children were scanned to determine the number and the cost of immunization dose errors (extra immunization doses and invalid doses). The average time for child registration was 6.7 min per immunization dose, and the physician spent more than 10 min per dose. Nurses needed more than 5 min to complete child vaccination. The total cost of immunization activities was US$1.67 per immunization dose. The measles vaccine (fifth dose) has a lower price (US$0.42) than all other immunization doses. The cost of a total of 288 invalid doses was US$744.55 and the cost of a total of 195 extra immunization doses was US$503.85. The time spent on physicians' activities was longer than that spent on registrars' and nurses' activities. Physician total cost was higher than registrar cost and nurse cost. The total immunization cost will increase by about 13.3% owing to dose errors. Copyright © 2012 Elsevier Ltd. All rights reserved.
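The micro-costing rule used in this study (activity cost = minutes spent multiplied by the performer's salary per minute, summed over activities, plus the cost attributable to dose errors) can be sketched as follows; the salary rates are assumptions, while the activity times and error-cost totals are taken from the abstract.

```python
def dose_service_cost(activities):
    """Micro-costing: sum over activities of (minutes spent * salary per minute)."""
    return sum(minutes * salary_per_min for minutes, salary_per_min in activities)

# Per-dose activity times from the abstract (registration ~6.7 min, physician
# >10 min, nurse >5 min); the salary rates in US$/min are illustrative assumptions.
per_dose = [(6.7, 0.04), (10.5, 0.10), (5.5, 0.05)]
print(round(dose_service_cost(per_dose), 2))       # cost of delivering one dose

# Cost of dose errors, using the totals reported in the abstract.
invalid_doses_cost = 744.55    # 288 invalid doses
extra_doses_cost = 503.85      # 195 extra immunization doses
print(round(invalid_doses_cost + extra_doses_cost, 2))
```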
Surface Coatings for Gas Detection via Porous Silicon
NASA Astrophysics Data System (ADS)
Ozdemir, Serdar; Li, Ji-Guang; Gole, James
2009-03-01
Nanopore-covered microporous silicon interfaces have been formed via an electrochemical etch for gas sensor applications. Rapid, reversible, and sensitive gas sensors have been fabricated. The fabricated porous silicon (PS) gas sensors display the advantages of operation at room temperature as well as at a single, readily accessible temperature with an insensitivity to temperature drift; operation in a heat-sunk configuration; ease of coating with gas-selective materials; low cost of fabrication and operation; and the ability to rapidly assess false positives by operating the sensor in a pulsed mode. The PS surface has been modified with unique coatings on the basis of a general theory in order to achieve maximum sensitivity and selectivity. Sensing of NH3, NOx and PH3 at or below the ppm level has been observed. A typical PS nanostructure-coated microstructured hybrid configuration, when coated with tin oxide (NOx, CO) and gold nanostructures (NH3), provides a greatly increased sensitivity to the indicated gases. Al2O3 coating of the porous silicon using atomic layer deposition and its effect on PH3 sensing has been investigated. TiO2 nanoparticles of 20-100 nm have been produced using sol-gel methods to coat PS surfaces and the effects on the selectivity and the sensitivity have been studied.
Nair, Vinit; Salmon, J Warren; Kaul, Alan F
2007-12-01
Disease Management (DM) programs have advanced to address costly chronic disease patterns in populations. This is in part due to the programs' significant clinical and economic value, coupled with interest by pharmaceutical manufacturers, managed care organizations, and pharmacy benefit management firms. While cost containment realizations for many such interventions have been less than anticipated, this article explores the potential of incorporating Medication Error Risk Reduction into DM programs within managed care environments. Medication errors are an emerging, serious problem now gaining attention in US health policy. They represent a failure within population-based health programs because they remain significant cost drivers. Therefore, medication errors should be addressed in an organized fashion, with DM being a worthy candidate for piggybacking such programs to achieve the best synergistic effects.
NASA Astrophysics Data System (ADS)
Colins, Karen; Li, Liqian; Liu, Yu
2017-05-01
Mass production of widely used semiconductor digital integrated circuits (ICs) has lowered unit costs to the level of ordinary daily consumables of a few dollars. It is therefore reasonable to contemplate the idea of an engineered system that consumes unshielded low-cost ICs for the purpose of measuring gamma radiation dose. Underlying the idea is the premise of a measurable correlation between an observable property of ICs and radiation dose. Accumulation of radiation-damage-induced state changes or error events is such a property. If correct, the premise could make possible low-cost wide-area radiation dose measurement systems, instantiated as wireless sensor networks (WSNs) with unshielded consumable ICs as nodes, communicating error events to a remote base station. The premise has been investigated quantitatively for the first time in laboratory experiments and related analyses performed at the Canadian Nuclear Laboratories. State changes or error events were recorded in real time during irradiation of samples of ICs of different types in a 60Co gamma cell. From the error-event sequences, empirical distribution functions of dose were generated. The distribution functions were inverted and probabilities scaled by total error events, to yield plots of the relationship between dose and error tallies. Positive correlation was observed, and discrete functional dependence of dose quantiles on error tallies was measured, demonstrating the correctness of the premise. The idea of an engineered system that consumes unshielded low-cost ICs in a WSN, for the purpose of measuring gamma radiation dose over wide areas, is therefore tenable.
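A rough sketch of the dose-reconstruction step described here: build the empirical distribution of cumulative dose over recorded error events, then invert it with probabilities scaled by the total error count so that an observed error tally maps onto a dose quantile. The simulated error-event sequence stands in for the recorded data and is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the recorded data: cumulative dose (arbitrary units) at which each
# state-change/error event occurred while one IC sample was irradiated.
dose_at_each_error = np.sort(rng.gamma(shape=2.0, scale=5.0, size=200).cumsum())

def dose_for_error_tally(tally, dose_events):
    """Invert the empirical distribution of dose over error events, scaling the
    empirical CDF by the total number of events, so an error tally maps to a
    dose quantile."""
    total = len(dose_events)
    quantile = min(tally, total) / total
    return np.quantile(dose_events, quantile)

for tally in (20, 100, 180):
    print(tally, round(float(dose_for_error_tally(tally, dose_at_each_error)), 1))
```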
Three essays on access pricing
NASA Astrophysics Data System (ADS)
Sydee, Ahmed Nasim
In the first essay, a theoretical model is developed to determine the time path of optimal access price in the telecommunications industry. Determining the optimal access price is an important issue in the economics of telecommunications. Setting a high access price discourages potential entrants; a low access price, on the other hand, amounts to confiscation of private property because the infrastructure already built by the incumbent is sunk. Furthermore, a low access price does not give the incumbent incentives to maintain the current network and to invest in new infrastructures. Much of the existing literature on access pricing suffers either from the limitations of a static framework or from the assumption that all costs are avoidable. The telecommunications industry is subject to high stranded costs and, therefore, to address this issue a dynamic model is imperative. This essay presents a dynamic model of one-way access pricing in which the compensation involved in deregulatory taking is formalized and then analyzed. The short run adjustment after deregulatory taking has occurred is carried out and discussed. The long run equilibrium is also analyzed. A time path for the Ramsey price is shown as the correct dynamic price of access. In the second essay, a theoretical model is developed to determine the time path of optimal access price for an infrastructure that is characterized by congestion and lumpy investment. Much of the theoretical literature on access pricing of infrastructure prescribes that the access price be set at the marginal cost of the infrastructure. In proposing this rule of access pricing, the conventional analysis assumes that infrastructure investments are infinitely divisible so that it makes sense to talk about the marginal cost of investment. Often it is the case that investments in infrastructure are lumpy and can only be made in large chunks, and this renders the marginal cost concept meaningless. In this essay, we formalize a model of access pricing with congestion and in which investments in infrastructure are lumpy. To fix ideas, the model is formulated in the context of airport infrastructure investments, which captures both the element of congestion and the lumpiness involved in infrastructure investments. The optimal investment program suggests how many units of capacity should be installed and at which times. Because time is continuous in the model, the discounted cost -- despite the lumpiness of capacity additions -- can be made to vary continuously by varying the time a capacity addition is made. The main results that emerge from the analysis can be described as follows: First, the global demand for air travel rises with time and experiences an upward jump whenever a capacity addition is made. Second, the access price is constant and stays at the basic level when the system is not congested. When the system is congested, a congestion surcharge is imposed on top of the basic level, and the congestion surcharge rises with the level of congestion until the next capacity addition is made at which time the access price takes a downward jump. Third, the individual demand for air travel is constant before congestion sets in and after the last capacity addition takes place. During a time interval in which congestion rises, the individual demand for travel is below the level that prevails when there is no congestion and declines as congestion worsens. 
The third essay contains a model of access pricing for natural gas transmission pipelines, both when pipeline operators are regulated and when they behave strategically. The high sunk costs involved in building a pipeline network constitute a serious barrier to entry, and competitive behaviour in the transmission pipeline sector cannot be expected. Most of the economic analyses of access pricing for natural gas transmission pipelines are carried out from the regulatory perspective, and the access prices paid by shippers are cost-based. The model formalized is intended to capture some essential characteristics of networks in which components interact with one another when combined into an integrated system. The model shows how the topology of the network determines the access prices in different components of the network. The general results that emerge from the analysis can be summarized as follows. First, the monopoly power of a pipeline operator is reduced by the entry of a new supply pipeline connected in parallel to the same demand node. When the pipelines are connected in series, the one upstream enjoys a first-mover advantage over the one downstream, and the toll set by the upstream pipeline operator after entry by the downstream pipeline operator will rise above the original monopoly level. The equilibrium prices of natural gas at the various nodes of the network are also discussed. (Abstract shortened by UMI.)
Introduction to the Application of Web-Based Surveys.
ERIC Educational Resources Information Center
Timmerman, Annemarie
This paper discusses some basic assumptions and issues concerning web-based surveys. Discussion includes: assumptions regarding cost and ease of use; disadvantages of web-based surveys, concerning the inability to compensate for four common errors of survey research: coverage error, sampling error, measurement error and nonresponse error; and…
Exploring Discretization Error in Simulation-Based Aerodynamic Databases
NASA Technical Reports Server (NTRS)
Aftosmis, Michael J.; Nemec, Marian
2010-01-01
This work examines the level of discretization error in simulation-based aerodynamic databases and introduces strategies for error control. Simulations are performed using a parallel, multi-level Euler solver on embedded-boundary Cartesian meshes. Discretization errors in user-selected outputs are estimated using the method of adjoint-weighted residuals, and adaptive mesh refinement is used to reduce these errors to specified tolerances. Using this framework, we examine the behavior of discretization error throughout a token database of 120 cases computed for a NACA 0012 airfoil. We compare the cost and accuracy of two approaches for aerodynamic database generation. In the first approach, mesh adaptation is used to compute all cases in the database to a prescribed level of accuracy. The second approach conducts all simulations using the same computational mesh without adaptation. We quantitatively assess the error landscape and computational costs in both databases. This investigation highlights sensitivities of the database under a variety of conditions. The presence of transonic shocks or the stiffness of the governing equations near the incompressible limit is shown to dramatically increase discretization error, requiring additional mesh resolution to control. Results show that such pathologies lead to error levels that vary by over a factor of 40 when using a fixed mesh throughout the database. Alternatively, controlling this sensitivity through mesh adaptation leads to mesh sizes which span two orders of magnitude. We propose strategies to minimize simulation cost in sensitive regions and discuss the role of error estimation in database quality.
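The two database-generation strategies compared above can be caricatured with a toy cost model: adapt each case until an error estimate meets a tolerance, or run every case on one fixed mesh. The halving-per-refinement error model and the per-case starting errors below are assumptions for illustration only, not the paper's solver or data.

```python
# Toy comparison of the two strategies described above. Assumes, purely for
# illustration, that one refinement roughly doubles the cell count and halves
# the estimated output error.
def adapt_case(initial_error, tol, cells=10_000):
    error, cost = initial_error, cells
    while error > tol:
        cells *= 2          # refine the mesh
        error /= 2          # assumed effect of one adaptation pass
        cost += cells       # accumulated cell-count "cost"
    return error, cost

initial_errors = [0.05, 0.10, 0.40, 2.0]   # hypothetical per-case error estimates
tol = 0.02

adaptive = [adapt_case(e, tol) for e in initial_errors]
fixed = [(e, 10_000) for e in initial_errors]   # same mesh for every case

print("adaptive (error, cost):", adaptive)   # errors all <= tol, costs vary widely
print("fixed    (error, cost):", fixed)      # uniform cost, errors vary by a large factor
```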
Automation for Air Traffic Control: The Rise of a New Discipline
NASA Technical Reports Server (NTRS)
Erzberger, Heinz; Tobias, Leonard (Technical Monitor)
1997-01-01
The current debate over the concept of Free Flight has renewed interest in automated conflict detection and resolution in the enroute airspace. An essential requirement for effective conflict detection is accurate prediction of trajectories. Trajectory prediction is, however, an inexact process which accumulates errors that grow in proportion to the length of the prediction time interval. Using a model of prediction errors for the trajectory predictor incorporated in the Center-TRACON Automation System (CTAS), a computationally fast algorithm for computing conflict probability has been derived. Furthermore, a method of conflict resolution has been formulated that minimizes the average cost of resolution, when cost is defined as the increment in airline operating costs incurred in flying the resolution maneuver. The method optimizes the trade off between early resolution at lower maneuver costs but higher prediction error on the one hand and late resolution with higher maneuver costs but lower prediction errors on the other. The method determines both the time to initiate the resolution maneuver as well as the characteristics of the resolution trajectory so as to minimize the cost of the resolution. Several computational examples relevant to the design of a conflict probe that can support user-preferred trajectories in the enroute airspace will be presented.
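A toy version of the early-versus-late trade-off described above is sketched below: a maneuver started with more lead time is cheaper to fly, but because prediction error grows with the horizon, a larger share of early maneuvers turn out to be unnecessary. The functional forms and constants are illustrative assumptions, not CTAS models.

```python
import numpy as np

# Toy trade-off: choose how many minutes before the predicted conflict to
# start a resolution maneuver. All constants are illustrative.
lead = np.linspace(2.0, 20.0, 200)                 # minutes of lead time

maneuver_cost = 100.0 + 600.0 / lead               # $ per maneuver, rises as lead time shrinks
p_real = 0.3 + 0.7 * np.exp(-(lead - 2.0) / 8.0)   # P(predicted conflict actually occurs)

# If resolutions are triggered at this lead time, 1/p_real maneuvers are flown
# per real conflict, so the expected cost per real conflict is cost / p_real.
expected_cost = maneuver_cost / p_real
best = lead[np.argmin(expected_cost)]
print(f"lowest expected cost at about {best:.1f} minutes of lead time")
```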
The impact of using an intravenous workflow management system (IVWMS) on cost and patient safety.
Lin, Alex C; Deng, Yihong; Thaibah, Hilal; Hingl, John; Penm, Jonathan; Ivey, Marianne F; Thomas, Mark
2018-07-01
The aim of this study was to determine the financial costs associated with wasted and missing doses before and after the implementation of an intravenous workflow management system (IVWMS) and to quantify the number and the rate of detected intravenous (IV) preparation errors. A retrospective analysis of the sample hospital information system database was conducted using three months of data before and after the implementation of an IVWMS (DoseEdge®), which uses barcode scanning and photographic technologies to track and verify each step of the preparation process. The financial impact associated with wasted and missing IV doses was determined by combining drug acquisition, labor, accessory, and disposal costs. The intercepted error reports and pharmacist-detected error reports were drawn from the IVWMS to quantify the number of errors by defined error categories. The total numbers of IV doses prepared before and after the implementation of the IVWMS were 110,963 and 101,765 doses, respectively. The adoption of the IVWMS significantly reduced the numbers of wasted and missing IV doses by 14,176 and 2268 doses, respectively (p < 0.001). The overall cost savings of using the system was $144,019 over 3 months. The total number of errors detected was 1160 (1.14%) after using the IVWMS. The implementation of the IVWMS facilitated workflow changes that led to a positive impact on cost and patient safety. The implementation of the IVWMS increased patient safety by enforcing standard operating procedures and bar code verifications. Published by Elsevier B.V.
Cost-effectiveness of an electronic medication ordering system (CPOE/CDSS) in hospitalized patients.
Vermeulen, K M; van Doormaal, J E; Zaal, R J; Mol, P G M; Lenderink, A W; Haaijer-Ruskamp, F M; Kosterink, J G W; van den Bemt, P M L A
2014-08-01
Prescribing medication is an important aspect of almost all in-hospital treatment regimes. Besides their obviously beneficial effects, medicines can also cause adverse drug events (ADE), which increase morbidity, mortality and health care costs. Partially, these ADEs arise from medication errors, e.g. at the prescribing stage. ADEs caused by medication errors are preventable ADEs. Until now, medication ordering was primarily a paper-based process and consequently, it was error prone. Computerized Physician Order Entry, combined with basic Clinical Decision Support System (CPOE/CDSS), is considered to enhance patient safety. Limited information is available on the balance between the health gains and the costs that need to be invested in order to achieve these positive effects. The aim of this study was to assess the balance between the effects and costs of CPOE/CDSS compared with traditional paper-based medication ordering. The economic evaluation was performed alongside a clinical study (interrupted time series design) on the effectiveness of CPOE/CDSS, including a cost-minimization and a cost-effectiveness analysis. Data collection took place between 2005 and 2008. Analyses were performed from a hospital perspective. The study was performed in a general teaching hospital and a University Medical Centre on general internal medicine, gastroenterology and geriatric wards. Computerized Physician Order Entry, combined with basic Clinical Decision Support System (CPOE/CDSS), was compared to a traditional paper-based system. All costs of both medication ordering systems are based on resources used and time invested. Prices were expressed in Euros (price level 2009). Effectiveness outcomes were medication errors and preventable adverse drug events. During the paper-based prescribing period 592 patients were included, and during the CPOE/CDSS period 603. Total costs of the paper-based system and CPOE/CDSS amounted to €12.37 and €14.91 per patient/day respectively. The Incremental Cost-Effectiveness Ratio (ICER) for medication errors was 3.54 and for preventable adverse drug events 322.70, indicating the extra amount (€) that has to be invested in order to prevent one medication error or one pADE. CPOE with basic CDSS contributes to a decreased risk of preventable harm. Overall, the extra costs of CPOE/CDSS needed to prevent one ME or one pADE seem to be acceptable. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
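The ICER figures above follow the usual formula, extra cost divided by extra effect. The sketch below shows the computation using the reported per-patient-day costs; the error rates are hypothetical placeholders chosen only to demonstrate the arithmetic, since the abstract reports the resulting ratios rather than the underlying rates.

```python
# Incremental cost-effectiveness ratio (ICER): extra cost per extra unit of effect.
# Costs per patient/day come from the abstract; the error rates below are
# illustrative placeholders used only to show the formula.
cost_paper, cost_cpoe = 12.37, 14.91          # EUR per patient/day (from the abstract)
errors_paper, errors_cpoe = 1.50, 0.78        # medication errors per patient/day (hypothetical)

icer = (cost_cpoe - cost_paper) / (errors_paper - errors_cpoe)
print(f"ICER = {icer:.2f} EUR per medication error prevented")
```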
The effect of misclassification errors on case mix measurement.
Sutherland, Jason M; Botz, Chas K
2006-12-01
Case mix systems have been implemented for hospital reimbursement and performance measurement across Europe and North America. Case mix categorizes patients into discrete groups based on clinical information obtained from patient charts in an attempt to identify clinical or cost differences among these groups. The diagnosis related group (DRG) case mix system is the most common methodology, with variants adopted in many countries. External validation studies of coding quality have confirmed that widespread variability exists between originally recorded diagnoses and re-abstracted clinical information. DRG assignment errors in hospitals that share patient-level cost data for the purpose of establishing cost weights affect cost weight accuracy. The purpose of this study is to estimate bias in cost weights due to measurement error of reported clinical information. DRG assignment error rates are simulated based on recent clinical re-abstraction study results. Our simulation study estimates that 47% of cost weights representing the least severe cases are over-weighted by 10%, while 32% of cost weights representing the most severe cases are under-weighted by 10%. Applying the simulated weights to a cross-section of hospitals, we find that teaching hospitals tend to be under-weighted. Since inaccurate cost weights challenge the ability of case mix systems to accurately reflect patient mix and may lead to distortions in hospital funding, bias in hospital case mix measurement highlights the role clinical data quality plays in hospital funding in countries that use DRG-type case mix systems. The quality of clinical information from hospitals that contribute financial data for establishing cost weights should be carefully considered.
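A small simulation in the spirit of the study can show how coding errors bias estimated cost weights: misclassification mixes expensive and cheap cases across groups, pulling severe-group weights down and mild-group weights up. All group means, shares and error rates below are assumptions, not the study's re-abstraction data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" case-mix groups: mean cost and share of discharges.
true_mean_cost = np.array([4_000.0, 9_000.0, 20_000.0])     # least to most severe
shares = np.array([0.6, 0.3, 0.1])

# Assumed misclassification matrix: row = true group, column = coded group.
miscl = np.array([
    [0.90, 0.09, 0.01],
    [0.15, 0.80, 0.05],
    [0.05, 0.20, 0.75],
])

n = 50_000
true_grp = rng.choice(3, size=n, p=shares)
cost = rng.gamma(shape=4.0, scale=true_mean_cost[true_grp] / 4.0)   # skewed patient costs
coded_grp = np.array([rng.choice(3, p=miscl[g]) for g in true_grp])

overall = cost.mean()
true_weights = np.array([cost[true_grp == g].mean() for g in range(3)]) / overall
coded_weights = np.array([cost[coded_grp == g].mean() for g in range(3)]) / overall

# Coding errors inflate the mild-group weight and depress the severe-group weight.
print("true weights :", np.round(true_weights, 3))
print("coded weights:", np.round(coded_weights, 3))
```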
Hanford isotope project strategic business analysis yttrium-90 (Y-90)
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1995-10-01
The purpose of this analysis is to address the short-term direction for the Hanford yttrium-90 (Y-90) project. Hanford is the sole DOE producer of Y-90, and is the largest repository for its source in this country. The production of Y-90 is part of the DOE Isotope Production and Distribution (IP and D) mission. The Y-90 is "milked" from strontium-90 (Sr-90), a byproduct of the previous Hanford missions. The use of Sr-90 to produce Y-90 could help reduce the amount of waste material processed and the related costs incurred by the clean-up mission, while providing medical and economic benefits. The cost of producing Y-90 is being subsidized by DOE-IP and D due to its use for research, and resultant low production level. It is possible that the sales of Y-90 could produce full cost recovery within two to three years, at two curies per week. Preliminary projections place the demand at between 20,000 and 50,000 curies per year within the next ten years, assuming FDA approval of one or more of the current therapies now in clinical trials. This level of production would incentivize private firms to commercialize the operation, and allow the government to recover some of its sunk costs. There are a number of potential barriers to the success of the Y-90 project, outside the control of the Hanford Site. The key issues include: efficacy, Food and Drug Administration (FDA) approval and medical community acceptance. There are at least three other sources for Y-90 available to the US users, but they appear to have limited resources to produce the isotope. Several companies have communicated interest in entering into agreements with Hanford for the processing and distribution of Y-90, including some of the major pharmaceutical firms in this country.
Shaw, Andrew J; Ingham, Stephen A; Fudge, Barry W; Folland, Jonathan P
2013-12-01
This study assessed the between-test reliability of oxygen cost (OC) and energy cost (EC) in distance runners, and contrasted it with the smallest worthwhile change (SWC) of these measures. OC and EC displayed similar levels of within-subject variation (typical error < 3.85%). However, the typical error (2.75% vs 2.74%) was greater than the SWC (1.38% vs 1.71%) for both OC and EC, respectively, indicating insufficient sensitivity to confidently detect small, but meaningful, changes in OC and EC.
Response cost, reinforcement, and children's Porteus Maze qualitative performance.
Neenan, D M; Routh, D K
1986-09-01
Sixty fourth-grade children were given two different series of the Porteus Maze Test. The first series was given as a baseline, and the second series was administered under one of four different experimental conditions: control, response cost, positive reinforcement, or negative verbal feedback. Response cost and positive reinforcement, but not negative verbal feedback, led to significant decreases in the number of all types of qualitative errors in relation to the control group. The reduction of nontargeted as well as targeted errors provides evidence for the generalized effects of response cost and positive reinforcement.
Perceived Cost and Intrinsic Motor Variability Modulate the Speed-Accuracy Trade-Off
Bertucco, Matteo; Bhanpuri, Nasir H.; Sanger, Terence D.
2015-01-01
Fitts’ Law describes the speed-accuracy trade-off of human movements, and it is an elegant strategy that compensates for random and uncontrollable noise in the motor system. The control strategy during targeted movements may also take into account the rewards or costs of any outcomes that may occur. The aim of this study was to test the hypothesis that movement time in Fitts’ Law emerges not only from the accuracy constraints of the task, but also depends on the perceived cost of error for missing the targets. Subjects were asked to touch targets on an iPad® screen with different costs for missed targets. We manipulated the probability of error by comparing children with dystonia (who are characterized by increased intrinsic motor variability) to typically developing children. The results show a strong effect of the cost of error on the Fitts’ Law relationship characterized by an increase in movement time as cost increased. In addition, we observed a greater sensitivity to increased cost for children with dystonia, and this behavior appears to minimize the average cost. The findings support a proposed mathematical model that explains how movement time in a Fitts-like task is related to perceived risk. PMID:26447874
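The central idea, that movement time reflects both accuracy constraints and the perceived cost of missing, can be sketched with a toy expected-cost model: the mover picks the movement time that minimizes time cost plus miss penalty, where endpoint scatter shrinks as movements slow down. The scatter model and constants are illustrative assumptions, not fits to the study's data.

```python
import numpy as np
from scipy.stats import norm

# Toy model: choose movement time MT to minimize
#   total cost = time_cost * MT + miss_penalty * P(miss | MT),
# where endpoint scatter shrinks as MT grows (speed-accuracy trade-off).
def p_miss(mt, target_width, k=0.003, sigma_floor=0.002):
    sigma = sigma_floor + k / mt                 # endpoint SD (m), smaller for slower moves
    half = target_width / 2.0
    return 1.0 - (norm.cdf(half / sigma) - norm.cdf(-half / sigma))

def best_movement_time(target_width, miss_penalty, time_cost=1.0):
    mts = np.linspace(0.15, 1.5, 500)            # candidate movement times (s)
    total = time_cost * mts + miss_penalty * p_miss(mts, target_width)
    return mts[np.argmin(total)]

for penalty in (1.0, 5.0, 20.0):                 # higher perceived cost of error
    mt = best_movement_time(target_width=0.02, miss_penalty=penalty)
    print(f"miss penalty {penalty:>4}: chosen movement time ~ {mt:.2f} s")
```

With these assumptions, raising the miss penalty pushes the chosen movement time upward, mirroring the reported increase in movement time with cost.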
Cost comparison of unit dose and traditional drug distribution in a long-term-care facility.
Lepinski, P W; Thielke, T S; Collins, D M; Hanson, A
1986-11-01
Unit dose and traditional drug distribution systems were compared in a 352-bed long-term-care facility by analyzing nursing time, medication-error rate, medication costs, and waste. Time spent by nurses in preparing, administering, charting, and other tasks associated with medications was measured with a stop-watch on four different nursing units during six-week periods before and after the nursing home began using unit dose drug distribution. Medication-error rate before and after implementation of the unit dose system was determined by patient profile audits and medication inventories. Medication costs consisted of patient billing costs (acquisition cost plus fee) and cost of medications destroyed. The unit dose system required a projected 1507.2 hours less nursing time per year. Mean medication-error rates were 8.53% and 0.97% for the traditional and unit dose systems, respectively. Potential annual savings because of decreased medication waste with the unit dose system were $2238.72. The net increase in cost for the unit dose system was estimated at $615.05 per year, or approximately $1.75 per patient. The unit dose system appears safer and more time-efficient than the traditional system, although its costs are higher.
A Very Low Cost BCH Decoder for High Immunity of On-Chip Memories
NASA Astrophysics Data System (ADS)
Seo, Haejun; Han, Sehwan; Heo, Yoonseok; Cho, Taewon
BCH (Bose-Chaudhuri-Hocquenghem) codes, a class of cyclic block codes, have very strong error-correcting ability, which is vital for error protection in memory systems. Among the several decoding algorithms for BCH codes, the PGZ (Peterson-Gorenstein-Zierler) algorithm is attractive because it corrects errors through relatively simple calculations for a given t value. However, the algorithm runs into a division by zero when the number of actual errors ν is not equal to t. In this paper, the circuit is simplified by proposing a multi-mode hardware architecture that handles the cases ν = 0 to 3. First, production cost would be lower thanks to the smaller number of gates. Second, lower power consumption could lengthen the recharging period. The very low cost and simple datapath make our design a good choice as the ECC (error correction code/circuit) in small-footprint SoC (system-on-chip) memory systems.
1953-01-01
probably will be inexperienced in command in war. Finally, all comments and criticisms are designed to be constructive. By indicating what appear to be...CofS, Combined Fleet estimate "more than six ships sunk or afire" 329 Final probable estimate 329 Learns submarine I-: 5 had departed Kure for des 329...defensive operations, known as the "SHO" (Victory) operations, which were designed to deny to the Allies a foothold in the "last ditch" island
1991-11-01
Just above Cornay’s Bridge they sunk the steamer Flycatcher and a schooner loaded with bricks, plus live oak trees were cut down and thrown into the...contour level) (Feet) Single Objects Engine camshaft 20 ft x 2 m 45 45 x 50 feet 15 Cast iron soil pipe 10 ft long, 100 lbs 1407 45 x 65 feet 4 Iron...hitting any of the numerous fallen trees, snags, submerged logs, shallow sand bars, etc., 52 Chapter 3. Remote-Sensing Survey which occur along much of the
Shackleton's men: life on Elephant Island.
Piggott, Jan R
2004-09-01
The experiences of the 22 men from Ernest Shackleton's Endurance expedition of 1914-1916 who were marooned on Elephant Island during the Antarctic winter are not as well known as the narrative of the ship being beset and sunk, and Shackleton's open boat journey to South Georgia to rescue them. Frank Wild was left in charge of the marooned men by Shackleton and saved them from starvation and despair. The morale of the men in the face of extreme exposure to the elements, the ingenuity of their devices for survival and their diet, conversation and entertainments all reveal heroic qualities of Shackletonian endurance.
1986-06-01
Contents fragment: Introduction; Project Location; Project History; Environment; The Relict Braided Surface; The Old Meander Belt; Soils and Biotic Communities; Macrobiotic... Project Area and the Sunk Lands (after Saucier 1970 and USGS Evadale Quad)... The Old Meander Belt was incised into the Relict Braided... that the silting of the Old Meander Belt by the Mississippi River started in the Late Archaic period (ca. 3000-500 BC). It appears likely that this
Assessing and Valuing Historical Geospatial Data for Decisions
NASA Astrophysics Data System (ADS)
Sylak-Glassman, E.; Gallo, J.
2016-12-01
We will present a method for assessing the use and valuation of historical geospatial data and information products derived from Earth observations (EO). Historical data is widely used in the establishment of baseline reference cases, time-series analysis, and Earth system modeling. Historical geospatial data is used in diverse application areas, such as risk assessment in the insurance and reinsurance industry, disaster preparedness and response planning, historical demography, land-use change analysis, and paleoclimate research, among others. Establishing the current value of previously collected data, often from EO systems that are no longer operating, is difficult since the costs associated with their preservation, maintenance, and dissemination are current, while the costs associated with their original collection are sunk. Understanding their current use and value can aid in funding decisions about the data management infrastructure and workforce allocation required to maintain their availability. Using a value-tree framework to trace the application of data from EO systems, sensors, networks, and surveys, to weighted key Federal objectives, we are able to estimate the relative contribution of individual EO systems, sensors, networks, and surveys to meeting those objectives. The analysis relies on a modified Delphi method to elicit relative levels of reliance on individual EO data inputs, including historical data, from subject matter experts. This results in the identification of a representative portfolio of all EO data used to meet key Federal objectives. Because historical data is collected in conjunction with all other EO data within a weighted framework, its contribution to meeting key Federal objectives can be specifically identified and evaluated in relationship to other EO data. The results of this method could be applied to better understand and project the long-term value of data from current and future EO systems.
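The value-tree roll-up described above amounts to multiplying elicited reliance scores by objective weights and summing per data source. A minimal sketch with hypothetical names and numbers:

```python
import numpy as np

# Toy value tree: rows = key objectives (with weights), columns = EO data
# sources, entries = elicited reliance of each objective on each source.
# All names and numbers are hypothetical placeholders.
objective_weights = np.array([0.5, 0.3, 0.2])            # sums to 1

sources = ["current satellite A", "in-situ network B", "historical archive C"]
reliance = np.array([
    [0.6, 0.3, 0.1],    # objective 1
    [0.2, 0.4, 0.4],    # objective 2
    [0.3, 0.2, 0.5],    # objective 3
])                       # each row sums to 1

contribution = objective_weights @ reliance               # weighted roll-up per source
for name, c in zip(sources, contribution):
    print(f"{name}: {c:.2f}")
```

The historical archive's share in this roll-up is how its current contribution can be compared against operational sources.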
48 CFR 36.608 - Liability for Government costs resulting from design errors or deficiencies.
Code of Federal Regulations, 2013 CFR
2013-10-01
48 CFR 36.608, Liability for Government costs resulting from design errors or deficiencies. Section 36.608, Federal Acquisition Regulations System; Federal Acquisition Regulation; Special Categories of Contracting; Construction and Architect-Engineer Contracts; Architect-Engineer Service...
Lahue, Betsy J; Pyenson, Bruce; Iwasaki, Kosuke; Blumen, Helen E; Forray, Susan; Rothschild, Jeffrey M
2012-11-01
Harmful medication errors, or preventable adverse drug events (ADEs), are a prominent quality and cost issue in healthcare. Injectable medications are important therapeutic agents, but they are associated with a greater potential for serious harm than oral medications. The national burden of preventable ADEs associated with inpatient injectable medications and the associated medical professional liability (MPL) costs have not been previously described in the literature. To quantify the economic burden of preventable ADEs related to inpatient injectable medications in the United States. Medical error data (MedMarx 2009-2011) were utilized to derive the distribution of errors by injectable medication types. Hospital data (Premier 2010-2011) identified the numbers and the types of injections per hospitalization. US payer claims (2009-2010 MarketScan Commercial and Medicare 5% Sample) were used to calculate the incremental cost of ADEs by payer and by diagnosis-related group (DRG). The incremental cost of ADEs was defined as inclusive of the time of inpatient admission and the following 4 months. Actuarial calculations, assumptions based on published literature, and DRG proportions from 17 state discharge databases were used to derive the probability of preventable ADEs per hospitalization and their annual costs. MPL costs were assessed from state- and national-level industry reports, premium rates, and from closed claims databases between 1990 and 2011. The 2010 American Hospital Association database was used for hospital-level statistics. All costs were adjusted to 2013 dollars. Based on this medication-level analysis of reported harmful errors and the frequency of inpatient administrations with actuarial projections, we estimate that preventable ADEs associated with injectable medications impact 1.2 million hospitalizations annually. Using a matched cohort analysis of healthcare claims as a basis for evaluating incremental costs, we estimate that inpatient preventable ADEs associated with injectable medications increase the annual US payer costs by $2.7 billion to $5.1 billion, averaging $600,000 in extra costs per hospital. Across categories of injectable drugs, insulin had the highest risk per administration for a preventable ADE, although errors in the higher-volume categories of anti-infective, narcotic/analgesic, anticoagulant/thrombolytic and anxiolytic/sedative injectable medications harmed more patients. Our analysis of liability claims estimates that MPL associated with injectable medications totals $300 million to $610 million annually, with an average cost of $72,000 per US hospital. The incremental healthcare and MPL costs of preventable ADEs resulting from inpatient injectable medications are substantial. The data in this study strongly support the clinical and business cases of investing in efforts to prevent errors related to injectable medications.
Improving laboratory data entry quality using Six Sigma.
Elbireer, Ali; Le Chasseur, Julie; Jackson, Brooks
2013-01-01
Makerere University provides clinical laboratory support to over 70 clients in Uganda. With increased volume, manual data entry errors have steadily increased, prompting laboratory managers to employ the Six Sigma method to evaluate and reduce their problems. The purpose of this paper is to describe how laboratory data entry quality was improved by using Six Sigma. The Six Sigma Quality Improvement (QI) project team followed a sequence of steps, starting with defining project goals, measuring data entry errors to assess current performance, analyzing data and determining data-entry error root causes. Finally the team implemented changes and control measures to address the root causes and to maintain improvements. Establishing the Six Sigma project required considerable resources and maintaining the gains requires additional personnel time and dedicated resources. After initiating the Six Sigma project, there was a 60.5 percent reduction in data entry errors from 423 errors a month (i.e. 4.34 Six Sigma) in the first month, down to an average 166 errors/month (i.e. 4.65 Six Sigma) over 12 months. The team estimated the average cost of identifying and fixing a data entry error to be $16.25 per error. Thus, reducing errors by an average of 257 errors per month over one year has saved the laboratory an estimated $50,115 a year. The Six Sigma QI project provides a replicable framework for Ugandan laboratory staff and other resource-limited organizations to promote a quality environment. Laboratory staff can deliver excellent care at a lower cost, by applying QI principles. This innovative QI method of reducing data entry errors in medical laboratories may improve the clinical workflow processes and make cost savings across the health care continuum.
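The reported savings figure can be checked directly from the error counts and the per-error cost given in the abstract:

```python
# Back-of-envelope check of the reported annual savings: errors fell from
# 423/month to an average of 166/month, and fixing one data entry error
# costs about $16.25 (figures from the abstract).
errors_before, errors_after = 423, 166
cost_per_error = 16.25

monthly_reduction = errors_before - errors_after          # 257 errors/month
annual_savings = monthly_reduction * cost_per_error * 12
print(f"~${annual_savings:,.0f} per year")                # about $50,115, matching the abstract
```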
NASA Astrophysics Data System (ADS)
Bybee, G. M.; Ashwal, L. D.; Shirey, S. B.; Horan, M.; Mock, T.; Andersen, T. B.
2014-03-01
Proterozoic anorthosites from the 1630-1650 Ma Mealy Mountains Intrusive Suite (Grenville Province, Canada), the 1289-1363 Ma Nain Plutonic Suite (Nain-Churchill Provinces, Canada) and the 920-949 Ma Rogaland Anorthosite Province (Sveconorwegian Province, Norway), all entrain comagmatic, cumulate, high-alumina orthopyroxene megacrysts (HAOMs). The orthopyroxene megacrysts range in size from 0.2 to 1 m and all contain exsolution lamellae of plagioclase that indicate the incorporation of an excess Ca-Al component inherited from the host magma at pressures in excess of 10 kbar at or near Moho depths (>30-40 km). Suites of HAOMs from each intrusion display a large range in 147Sm/144Nd (0.10 to 0.34) making them amenable for precise age dating with the Sm-Nd system. Sm-Nd isochrons for HAOMs give ages of 1765±12 Ma (Mealy Mountains), 1041±17 Ma (Rogaland) and 1444±100 Ma (Nain), all of them older by about 80 to 120 m.y. than the respective 1630-1650, 920-949 and 1289-1363 Ma crystallization ages of their host anorthosites. Internal mineral Sm-Nd isochrons between plagioclase exsolution lamellae and the orthopyroxene host for HAOMs from the Rogaland and Nain complexes yield ages of 968±43 and 1347±6 Ma, respectively - identical within error to the ages of the anorthosites themselves. This age concordance establishes that decompression exsolution in the HAOM was coincident with magmatic emplacement of the anorthosites, ∼100 m.y. after HAOMs crystallization at the Moho. Correspondence of Pb isotope ages (206Pb/204Pb vs. 207Pb/204Pb) with Sm-Nd ages and other strong lines of evidence indicate that the older megacryst ages represent true crystallization ages and not the effects of time-integrated mixing processes in the magmas. Nd isotopic evolution curves, AFC/mixing calculations and the age relations between the HOAMs and their anorthosite hosts show that the HAOMs are much less contaminated with crustal components and are an older part of the same magmatic system from which the anorthosites are derived. Modeling of these anorthositic magmas with MELTS indicates that their ultramafic cumulates would have sunk in the magma and been sequestered at the Moho, where they may have sunk deeper into the mantle resulting in large-scale compositional differentiation. The HAOMs thus represent a rare example of part of a cumulate assemblage that was carried to the upper crust during anorthosite emplacement and, together with the anorthosites, illustrate the dramatic influence that magma ponding and differentiation at the Moho has on residual magmas traveling towards the surface. The new geochronologic and isotopic data indicate that the magmas were derived by melting of the mantle, forming magmatic systems that could have been long-lived (e.g. 80-100 m.y.). A geologic setting that would fit these temporal constraints is a long-lived Andean-type margin.
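For reference, a Sm-Nd isochron slope converts to an age through the standard decay relation t = ln(slope + 1)/λ. The decay constant below is the commonly cited value for 147Sm, and the slope is a placeholder roughly consistent with a ~1.76 Ga age rather than a value taken from this study.

```python
import math

# Isochron age from the slope of a 143Nd/144Nd vs 147Sm/144Nd regression:
#   t = ln(slope + 1) / lambda_147Sm
LAMBDA_SM147 = 6.54e-12          # 1/yr, commonly used decay constant for 147Sm

def isochron_age(slope):
    return math.log(slope + 1.0) / LAMBDA_SM147

# Placeholder slope, chosen to give an age near 1.76 Ga for illustration.
print(f"{isochron_age(0.0116) / 1e6:.0f} Ma")
```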
The economics of health care quality and medical errors.
Andel, Charles; Davidow, Stephen L; Hollander, Mark; Moreno, David A
2012-01-01
Hospitals have been looking for ways to improve quality and operational efficiency and cut costs for nearly three decades, using a variety of quality improvement strategies. However, based on recent reports, approximately 200,000 Americans die from preventable medical errors including facility-acquired conditions and millions may experience errors. In 2008, medical errors cost the United States $19.5 billion. About 87 percent or $17 billion were directly associated with additional medical cost, including: ancillary services, prescription drug services, and inpatient and outpatient care, according to a study sponsored by the Society for Actuaries and conducted by Milliman in 2010. Additional costs of $1.4 billion were attributed to increased mortality rates with $1.1 billion or 10 million days of lost productivity from missed work based on short-term disability claims. The authors estimate that the economic impact is much higher, perhaps nearly $1 trillion annually when quality-adjusted life years (QALYs) are applied to those who die. Using the Institute of Medicine's (IOM) estimate of 98,000 deaths due to preventable medical errors annually in its 1998 report, To Err Is Human, and an average of ten lost years of life at $75,000 to $100,000 per year, there is a loss of $73.5 billion to $98 billion in QALYs for those deaths, conservatively. These numbers are much greater than those we cite from studies that explore the direct costs of medical errors. And if the estimate of a recent Health Affairs article is correct (preventable deaths being ten times the IOM estimate), the cost is $735 billion to $980 billion. Quality care is less expensive care. It is better, more efficient, and by definition, less wasteful. It is the right care, at the right time, every time. It should mean that far fewer patients are harmed or injured. Obviously, quality care is not being delivered consistently throughout U.S. hospitals. Whatever the measure, poor quality is costing payers and society a great deal. However, health care leaders and professionals are focusing on quality and patient safety in ways they never have before because the economics of quality have changed substantially.
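The QALY-loss range quoted above is a straightforward product of the stated assumptions, as the check below shows:

```python
# Reproducing the QALY-loss range quoted above: deaths x lost years x value
# per quality-adjusted life year, using the abstract's own assumptions.
deaths = 98_000                     # IOM estimate of annual preventable-error deaths
lost_years = 10
for value_per_qaly in (75_000, 100_000):
    loss = deaths * lost_years * value_per_qaly
    print(f"${loss / 1e9:.1f} billion")          # $73.5B and $98.0B
```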
Reducing Formation-Keeping Maneuver Costs for Formation Flying Satellites in Low-Earth Orbit
NASA Technical Reports Server (NTRS)
Hamilton, Nicholas
2001-01-01
Several techniques are used to synthesize the formation-keeping control law for a three-satellite formation in low-earth orbit. The objective is to minimize maneuver cost and position tracking error. Initial reductions are found for a one-satellite case by tuning the state-weighting matrix within the linear-quadratic-Gaussian framework. Further savings come from adjusting the maneuver interval. Scenarios examined include cases with and without process noise. These results are then applied to a three-satellite formation. For both the one-satellite and three-satellite cases, increasing the maneuver interval yields a decrease in maneuver cost and an increase in position tracking error. A maneuver interval of 8-10 minutes provides a good trade-off between maneuver cost and position tracking error. An analysis of the closed-loop poles with respect to varying maneuver intervals explains the effectiveness of the chosen maneuver interval.
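The kind of tuning described, scaling the state-weighting matrix to trade position-tracking error against maneuver effort, can be sketched with a textbook LQR design. A double integrator stands in for the actual relative-orbit dynamics; only the Q-versus-R trade-off mechanism is the point here.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Double-integrator stand-in for relative motion: state = [position, velocity].
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
R = np.array([[1.0]])                       # control-effort (maneuver cost) weight

for q_scale in (0.1, 1.0, 10.0):            # heavier Q -> tighter tracking, more control effort
    Q = q_scale * np.eye(2)
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)         # LQR gain, u = -K x
    print(f"q_scale={q_scale:>4}: gain = {np.round(K, 3)}")
```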
NASA Technical Reports Server (NTRS)
Page, J.
1981-01-01
The effects of an independent verification and integration (V and I) methodology on one class of application are described. Resource profiles are discussed. The development environment is reviewed. Seven measures are presented to test the hypothesis that V and I improve the development and product. The V and I methodology provided: (1) a decrease in requirements ambiguities and misinterpretation; (2) no decrease in design errors; (3) no decrease in the cost of correcting errors; (4) a decrease in the cost of system and acceptance testing; (5) an increase in early discovery of errors; (6) no improvement in the quality of software put into operation; and (7) a decrease in productivity and an increase in cost.
NASA Technical Reports Server (NTRS)
Stewart, R. D.
1979-01-01
The Price and Cost Estimating Program (PACE II) was developed to prepare man-hour and material cost estimates. This versatile and flexible tool significantly reduces computation time and errors, as well as the typing and reproduction time involved in preparing cost estimates.
Real options analysis for land use management: Methods, application, and implications for policy.
Regan, Courtney M; Bryan, Brett A; Connor, Jeffery D; Meyer, Wayne S; Ostendorf, Bertram; Zhu, Zili; Bao, Chenming
2015-09-15
Discounted cash flow analysis, including net present value, is an established way to value land use and management investments that accounts for the time value of money. However, it provides a static view and assumes passive commitment to an investment strategy, when real world land use and management investment decisions are characterised by uncertainty, irreversibility, change, and adaptation. Real options analysis has been proposed as a better valuation method under uncertainty and where the opportunity exists to delay investment decisions, pending more information. We briefly review the use of discounted cash flow methods in land use and management and discuss their benefits and limitations. We then provide an overview of real options analysis, describe the main analytical methods, and summarize its application to land use investment decisions. Real options analysis is largely underutilized in evaluating land use decisions, despite uncertainty in policy and economic drivers and the irreversibility and sunk costs involved. New simulation methods offer the potential for overcoming current technical challenges to implementation, as demonstrated with a real options simulation model used to evaluate an agricultural land use decision in South Australia. We conclude that considering option values in future policy design will provide a more realistic assessment of landholder investment decision making and provide insights for improved policy performance. Copyright © 2015 Elsevier Ltd. All rights reserved.
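The option-to-defer logic that distinguishes real options analysis from static NPV can be illustrated with a one-period binomial sketch; all numbers are made up and the example is not from the South Australian case study.

```python
# One-period binomial sketch of the option to defer an irreversible land-use
# investment. Waiting has value when payoffs are uncertain and the sunk cost
# cannot be recovered. All numbers are illustrative.
invest_cost = 100.0                     # sunk once committed
payoff_up, payoff_down = 160.0, 70.0    # project value next year in the two states
p_up, r = 0.5, 0.05                     # up-state probability, discount rate

# Invest now: expected discounted payoff minus the sunk cost.
npv_now = (p_up * payoff_up + (1 - p_up) * payoff_down) / (1 + r) - invest_cost

# Wait one year: invest only in the good state (decline otherwise).
npv_wait = (p_up * max(payoff_up - invest_cost, 0.0)
            + (1 - p_up) * max(payoff_down - invest_cost, 0.0)) / (1 + r)

print(f"invest now: {npv_now:.1f}, wait: {npv_wait:.1f}, "
      f"option value of waiting: {npv_wait - npv_now:.1f}")
```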
The Influence of Prior Choices on Current Choice
de la Piedad, Xochitl; Field, Douglas; Rachlin, Howard
2006-01-01
Three pigeons chose between random-interval (RI) and tandem, continuous-reinforcement, fixed-interval (crf-FI) reinforcement schedules by pecking either of two keys. As long as a pigeon pecked on the RI key, both keys remained available. If a pigeon pecked on the crf-FI key, then the RI key became unavailable and the crf-FI timer began to time out. With this procedure, once the RI key was initially pecked, the prospective value of both alternatives remained constant regardless of time spent pecking on the RI key without reinforcement (RI waiting time). Despite this constancy, the rate at which pigeons switched from the RI to the crf-FI decreased sharply as RI waiting time increased. That is, prior choices influenced current choice—an exercise effect. It is argued that such influence (independent of reinforcement contingencies) may serve as a sunk-cost commitment device in self-control situations. In a second experiment, extinction was programmed if RI waiting time exceeded a certain value. Rate of switching to the crf-FI first decreased and then increased as the extinction point approached, showing sensitivity to both prior choices and reinforcement contingencies. In a third experiment, crf-FI availability was limited to a brief window during the RI waiting time. When constrained in this way, switching occurred at a high rate regardless of when, during the RI waiting time, the crf-FI became available. PMID:16602373
NASA Astrophysics Data System (ADS)
Johnson, Timothy Lawrence
2002-09-01
Stabilization of atmospheric greenhouse gas concentrations will likely require significant cuts in electric sector carbon dioxide (CO2) emissions. The ability to capture and sequester CO2 in a manner compatible with today's fossil-fuel based power generating infrastructure offers a potentially low-cost contribution to a larger climate change mitigation strategy. This thesis fills a niche between economy-wide studies of CO2 abatement and plant-level control technology assessments by examining the contribution that carbon capture and sequestration (CCS) might make toward reducing US electric sector CO2 emissions. The assessment's thirty year perspective ensures that costs sunk in current infrastructure remain relevant and allows time for technological diffusion, but remains free of assumptions about the emergence of unidentified radical innovations. The extent to which CCS might lower CO2 mitigation costs will vary directly with the dispatch of carbon capture plants in actual power-generating systems, and will depend on both the retirement of vintage capacity and competition from abatement alternatives such as coal-to-gas fuel switching and renewable energy sources. This thesis therefore adopts a capacity planning and dispatch model to examine how the current distribution of generating units, natural gas prices, and other industry trends affect the cost of CO2 control via CCS in an actual US electric market. The analysis finds that plants with CO2 capture consistently provide significant reductions in base-load emissions at carbon prices near 100 $/tC, but do not offer an economical means of meeting peak demand unless CO2 reductions in excess of 80 percent are required. Various scenarios estimate the amount by which turn-over of the existing generating infrastructure and the severity of criteria pollutant constraints reduce mitigation costs. A look at CO2 sequestration in the seabed beneath the US Outer Continental Shelf (OCS) complements this model-driven assessment by considering issues of risk, geological storage capacity, and regulation. Extensive experience with offshore oil and gas operations suggests that the technical uncertainties associated with OCS sequestration are not large. The legality of seabed CO2 disposal under US law and international environmental agreements, however, is ambiguous, and the OCS may be the first region where these regulatory regimes clash over CO2 sequestration.
CLEAR: Cross-Layer Exploration for Architecting Resilience
2017-03-01
benchmark analysis, also provides cost-effective solutions (~1% additional energy cost for the same 50× improvement). This paper addresses the...core (OoO-core) [Wang 04], across 18 benchmarks. Such extensive exploration enables us to conclusively answer the above cross-layer resilience...analysis of the effects of soft errors on application benchmarks, provides a highly effective soft error resilience approach. 3. The above
NASA Technical Reports Server (NTRS)
Gordon, Steven C.
1993-01-01
Spacecraft in orbit near libration point L1 in the Sun-Earth system are excellent platforms for research concerning solar effects on the terrestrial environment. One spacecraft mission launched in 1978 used an L1 orbit for nearly 4 years, and future L1 orbital missions are also being planned. Orbit determination and station-keeping are, however, required for these orbits. In particular, orbit determination error analysis may be used to compute the state uncertainty after a predetermined tracking period; the predicted state uncertainty levels then will impact the control costs computed in station-keeping simulations. Error sources, such as solar radiation pressure and planetary mass uncertainties, are also incorporated. For future missions, there may be some flexibility in the type and size of the spacecraft's nominal trajectory, but different orbits may produce varying error analysis and station-keeping results. The nominal path, for instance, can be (nearly) periodic or distinctly quasi-periodic. A periodic 'halo' orbit may be constructed to be significantly larger than a quasi-periodic 'Lissajous' path; both may meet mission requirements, but the required control costs for these orbits may well differ. Also for this spacecraft tracking and control simulation problem, experimental design methods can be used to determine the most significant uncertainties. That is, these methods can determine the error sources in the tracking and control problem that most impact the control cost (output); they also produce an equation that gives the approximate functional relationship between the error inputs and the output.
NASA Technical Reports Server (NTRS)
Rosenberg, Linda H.; Arthur, James D.; Stapko, Ruth K.; Davani, Darush
1999-01-01
The Software Assurance Technology Center (SATC) at NASA Goddard Space Flight Center has been investigating how projects can determine when sufficient testing has been completed. For most projects, schedules are underestimated, and the last phase of the software development, testing, must be decreased. Two questions are frequently asked: "To what extent is the software error-free? " and "How much time and effort is required to detect and remove the remaining errors? " Clearly, neither question can be answered with absolute certainty. Nonetheless, the ability to answer these questions with some acceptable level of confidence is highly desirable. First, knowing the extent to which a product is error-free, we can judge when it is time to terminate testing. Secondly, if errors are judged to be present, we can perform a cost/benefit trade-off analysis to estimate when the software will be ready for use and at what cost. This paper explains the efforts of the SATC to help projects determine what is sufficient testing and when is the most cost-effective time to stop testing.
Charles, Krista; Cannon, Margaret; Hall, Robert; Coustasse, Alberto
2014-01-01
Computerized provider order entry (CPOE) systems allow physicians to prescribe patient services electronically. In hospitals, CPOE essentially eliminates the need for handwritten paper orders and achieves cost savings through increased efficiency. The purpose of this research study was to examine the benefits of and barriers to CPOE adoption in hospitals to determine the effects on medical errors and adverse drug events (ADEs) and examine cost and savings associated with the implementation of this newly mandated technology. This study followed a methodology using the basic principles of a systematic review and referenced 50 sources. CPOE systems in hospitals were found to be capable of reducing medical errors and ADEs, especially when CPOE systems are bundled with clinical decision support systems designed to alert physicians and other healthcare providers of pending lab or medical errors. However, CPOE systems face major barriers associated with adoption in a hospital system, mainly high implementation costs and physicians' resistance to change.
Mistakes as Stepping Stones: Effects of Errors on Episodic Memory among Younger and Older Adults
ERIC Educational Resources Information Center
Cyr, Andrée-Ann; Anderson, Nicole D.
2015-01-01
The memorial costs and benefits of trial-and-error learning have clear pedagogical implications for students, and increasing evidence shows that generating errors during episodic learning can improve memory among younger adults. Conversely, the aging literature has found that errors impair memory among healthy older adults and has advocated for…
Benjamin, David M; Pendrak, Robert F
2003-07-01
Clinical pharmacologists are all dedicated to improving the use of medications and decreasing medication errors and adverse drug reactions. However, quality improvement requires that some significant parameters of quality be categorized, measured, and tracked to provide benchmarks to which future data (performance) can be compared. One of the best ways to accumulate data on medication errors and adverse drug reactions is to look at medical malpractice data compiled by the insurance industry. Using data from PHICO insurance company, PHICO's Closed Claims Data, and PHICO's Event Reporting Trending System (PERTS), this article examines the significance and trends of the claims and events reported between 1996 and 1998. Those who misread history are doomed to repeat the mistakes of the past. From a quality improvement perspective, the categorization of the claims and events is useful for reengineering integrated medication delivery, particularly in a hospital setting, and for redesigning drug administration protocols on low therapeutic index medications and "high-risk" drugs. Demonstrable evidence of quality improvement is being required by state laws and by accreditation agencies. The state of Florida requires that quality improvement data be posted quarterly on the Web sites of the health care facilities. Other states have followed suit. The insurance industry is concerned with costs, and medication errors cost money. Even excluding costs of litigation, an adverse drug reaction may cost up to $2500 in hospital resources, and a preventable medication error may cost almost $4700. To monitor costs and assess risk, insurance companies want to know what errors are made and where the system has broken down, permitting the error to occur. Recording and evaluating reliable data on adverse drug events is the first step in improving the quality of pharmacotherapy and increasing patient safety. Cost savings and quality improvement evolve on parallel paths. The PHICO data provide an excellent opportunity to review information that typically would not be in the public domain. The events captured by PHICO are similar to the errors and "high-risk" drugs described in the literature, the U.S. Pharmacopeia's MedMARx Reporting System, and the Sentinel Event reporting system maintained by the Joint Commission for the Accreditation of Healthcare Organizations. The information in this report serves to alert clinicians to the possibility of adverse events when treating patients with the reported drugs, thus allowing for greater care in their use and closer monitoring. Moreover, when using high-risk drugs, patients should be well informed of known risks, dosage should be titrated slowly, and therapeutic drug monitoring and laboratory monitoring should be employed to optimize therapy and minimize adverse effects.
Acceptance threshold theory can explain occurrence of homosexual behaviour.
Engel, Katharina C; Männer, Lisa; Ayasse, Manfred; Steiger, Sandra
2015-01-01
Same-sex sexual behaviour (SSB) has been documented in a wide range of animals, but its evolutionary causes are not well understood. Here, we investigated SSB in the light of Reeve's acceptance threshold theory. When recognition is not error-proof, the acceptance threshold used by males to recognize potential mating partners should be flexibly adjusted to maximize the fitness pay-off between the costs of erroneously accepting males and the benefits of accepting females. By manipulating male burying beetles' search time for females and their reproductive potential, we influenced their perceived costs of making an acceptance or rejection error. As predicted, when the costs of rejecting females increased, males exhibited more permissive discrimination decisions and showed high levels of SSB; when the costs of accepting males increased, males were more restrictive and showed low levels of SSB. Our results support the idea that in animal species, in which the recognition cues of females and males overlap to a certain degree, SSB is a consequence of an adaptive discrimination strategy to avoid the costs of making rejection errors. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
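Reeve's threshold logic can be sketched numerically: with overlapping male and female recognition-cue distributions, the acceptance cutoff that minimizes expected error costs becomes more permissive as the cost of rejecting a female rises, which mechanically increases acceptance of males. The distributions, encounter probability and costs below are assumptions for illustration only.

```python
import numpy as np
from scipy.stats import norm

# Overlapping recognition-cue distributions for males and females (assumed).
male = norm(loc=0.0, scale=1.0)
female = norm(loc=1.5, scale=1.0)
p_female = 0.5                                   # encounter probability (assumed)

def expected_cost(threshold, cost_accept_male, cost_reject_female):
    accept_male = male.sf(threshold)             # acceptance-error rate
    reject_female = female.cdf(threshold)        # rejection-error rate
    return ((1 - p_female) * cost_accept_male * accept_male
            + p_female * cost_reject_female * reject_female)

thresholds = np.linspace(-3, 5, 800)
for c_reject in (1.0, 5.0):                      # rising cost of missing a female
    costs = [expected_cost(t, cost_accept_male=1.0, cost_reject_female=c_reject)
             for t in thresholds]
    print(f"cost of rejecting a female = {c_reject}: "
          f"optimal threshold ~ {thresholds[np.argmin(costs)]:.2f}")
```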
Eliminating US hospital medical errors.
Kumar, Sameer; Steinebach, Marc
2008-01-01
Healthcare costs in the USA have continued to rise steadily since the 1980s. Medical errors are one of the major causes of deaths and injuries of thousands of patients every year, contributing to soaring healthcare costs. The purpose of this study is to examine what has been done to deal with the medical-error problem in the last two decades and present a closed-loop mistake-proof operation system for surgery processes that would likely eliminate preventable medical errors. The design method used is a combination of creating a service blueprint, implementing the six sigma DMAIC cycle, developing cause-and-effect diagrams as well as devising poka-yokes in order to develop a robust surgery operation process for a typical US hospital. In the improve phase of the six sigma DMAIC cycle, a number of poka-yoke techniques are introduced to prevent typical medical errors (identified through cause-and-effect diagrams) that may occur in surgery operation processes in US hospitals. It is the authors' assertion that implementing the new service blueprint along with the poka-yokes will likely improve the current medical error rate to the six-sigma level. Additionally, designing as many redundancies as possible in the delivery of care will help reduce medical errors. Primary healthcare providers should strongly consider investing in adequate doctor and nurse staffing, and improving their education related to the quality of service delivery to minimize clinical errors. This will lead to higher fixed costs, especially in the shorter time frame. This paper draws additional attention to the need for a sound technical and business case for implementing six sigma tools to eliminate medical errors, which will enable hospital managers to increase their hospital's profitability in the long run and also ensure patient safety.
A systematic comparison of error correction enzymes by next-generation sequencing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lubock, Nathan B.; Zhang, Di; Sidore, Angus M.
Gene synthesis, the process of assembling gene-length fragments from shorter groups of oligonucleotides (oligos), is becoming an increasingly important tool in molecular and synthetic biology. The length, quality and cost of gene synthesis are limited by errors produced during oligo synthesis and subsequent assembly. Enzymatic error correction methods are cost-effective means to ameliorate errors in gene synthesis. Previous analyses of these methods relied on cloning and Sanger sequencing to evaluate their efficiencies, limiting quantitative assessment. Here, we develop a method to quantify errors in synthetic DNA by next-generation sequencing. We analyzed errors in model gene assemblies and systematically compared six different error correction enzymes across 11 conditions. We find that ErrASE and T7 Endonuclease I are the most effective at decreasing average error rates (up to 5.8-fold relative to the input), whereas MutS is the best for increasing the number of perfect assemblies (up to 25.2-fold). We are able to quantify differential specificities: for example, ErrASE preferentially corrects C/G transversions, whereas T7 Endonuclease I preferentially corrects A/T transversions. More generally, this experimental and computational pipeline is a fast, scalable and extensible way to analyze errors in gene assemblies, to profile error correction methods, and to benchmark DNA synthesis methods.
A systematic comparison of error correction enzymes by next-generation sequencing
Lubock, Nathan B.; Zhang, Di; Sidore, Angus M.; ...
2017-08-01
Gene synthesis, the process of assembling gene-length fragments from shorter groups of oligonucleotides (oligos), is becoming an increasingly important tool in molecular and synthetic biology. The length, quality and cost of gene synthesis are limited by errors produced during oligo synthesis and subsequent assembly. Enzymatic error correction methods are cost-effective means to ameliorate errors in gene synthesis. Previous analyses of these methods relied on cloning and Sanger sequencing to evaluate their efficiencies, limiting quantitative assessment. Here, we develop a method to quantify errors in synthetic DNA by next-generation sequencing. We analyzed errors in model gene assemblies and systematically compared six different error correction enzymes across 11 conditions. We find that ErrASE and T7 Endonuclease I are the most effective at decreasing average error rates (up to 5.8-fold relative to the input), whereas MutS is the best for increasing the number of perfect assemblies (up to 25.2-fold). We are able to quantify differential specificities, such as that ErrASE preferentially corrects C/G transversions whereas T7 Endonuclease I preferentially corrects A/T transversions. More generally, this experimental and computational pipeline is a fast, scalable and extensible way to analyze errors in gene assemblies, to profile error correction methods, and to benchmark DNA synthesis methods.
U.S. Navy Crisis Response Activity, 1946-1989: Preliminary Report
1989-11-29
[Table excerpt: numbered crisis-response incidents, including 177 La Belle Disco, Libya (4/10/86); 178 Pakistan Hijacking (Sep-86); and 179 Persian Gulf Ops (Jan-87).] ...operations as no hostages were released. On 5 April, the La Belle Discotheque in the Federal Republic of... naval units were damaged or sunk; and, on 3 July 1988, in the midst of a surface engagement, CG-49 Vincennes shot down an Iran Air Airbus, killing all
Analyses of battle casualties by weapon type aboard U.S. Navy warships.
Blood, C G
1992-03-01
The number of casualties was determined for 513 incidents involving U.S. Navy warships sunk or damaged during World War II. Ship type and weapon were significant factors in determining the numbers of wounded and killed. Multiple weapon attacks and kamikazes yielded more wounded in action than other weapon types. Multiple weapons and torpedoes resulted in a higher incidence of killed in action than other weapons. Penetrating wounds and burns were the most prominent injury types. Kamikaze attacks yielded significantly more burns than incidents involving bombs, gunfire, torpedoes, mines, and multiple weapons. Mine explosions were responsible for more strains, sprains, and dislocations than the other weapon types.
Hitti, Eveline; Tamim, Hani; Bakhti, Rinad; Zebian, Dina; Mufarrij, Afif
2017-01-01
Introduction: Medication errors are common, with studies reporting at least one error per patient encounter. At hospital discharge, medication errors vary from 15%–38%. However, studies assessing the effect of an internally developed electronic (E)-prescription system at discharge from an emergency department (ED) remain scarce. Additionally, commercially available electronic solutions are cost-prohibitive in many resource-limited settings. We assessed the impact of introducing an internally developed, low-cost E-prescription system, with a list of commonly prescribed medications, on prescription error rates at discharge from the ED, compared to handwritten prescriptions. Methods: We conducted a pre- and post-intervention study comparing error rates in a randomly selected sample of discharge prescriptions (handwritten versus electronic) five months pre and four months post the introduction of the E-prescription. The internally developed E-prescription system included a list of 166 commonly prescribed medications with the generic name, strength, dose, frequency and duration. We included a total of 2,883 prescriptions in this study: 1,475 in the pre-intervention phase were handwritten (HW) and 1,408 in the post-intervention phase were electronic. We calculated rates of 14 different errors and compared them between the pre- and post-intervention periods. Results: Overall, E-prescriptions included fewer prescription errors as compared to HW prescriptions. Specifically, E-prescriptions reduced missing dose (11.3% to 4.3%, p <0.0001), missing frequency (3.5% to 2.2%, p=0.04), missing strength (32.4% to 10.2%, p <0.0001) and legibility (0.7% to 0.2%, p=0.005) errors. E-prescriptions, however, were associated with a significant increase in duplication errors, specifically with home medication (1.7% to 3%, p=0.02). Conclusion: A basic, internally developed E-prescription system, featuring commonly used medications, effectively reduced medication errors in a low-resource setting where the costs of sophisticated commercial electronic solutions are prohibitive. PMID:28874948
Tolerance assignment in optical design
NASA Astrophysics Data System (ADS)
Youngworth, Richard Neil
2002-09-01
Tolerance assignment is necessary in any engineering endeavor because fabricated systems---due to the stochastic nature of manufacturing and assembly processes---necessarily deviate from the nominal design. This thesis addresses the problem of optical tolerancing. The work can logically be split into three different components that all play an essential role. The first part addresses the modeling of manufacturing errors in contemporary fabrication and assembly methods. The second component is derived from the design aspect---the development of a cost-based tolerancing procedure. The third part addresses the modeling of image quality in an efficient manner that is conducive to the tolerance assignment process. The purpose of the first component, modeling manufacturing errors, is twofold---to determine the most critical tolerancing parameters and to understand better the effects of fabrication errors. Specifically, mid-spatial-frequency errors, typically introduced in sub-aperture grinding and polishing fabrication processes, are modeled. The implication is that improving process control and understanding better the effects of the errors makes the task of tolerance assignment more manageable. Conventional tolerancing methods do not directly incorporate cost. Consequently, tolerancing approaches tend to focus more on image quality. The goal of the second part of the thesis is to develop cost-based tolerancing procedures that facilitate optimum system fabrication by generating the loosest acceptable tolerances. This work has the potential to impact a wide range of optical designs. The third element, efficient modeling of image quality, is directly related to the cost-based optical tolerancing method. Cost-based tolerancing requires efficient and accurate modeling of the effects of errors on the performance of optical systems. Thus it is important to be able to compute the gradient and the Hessian, with respect to the parameters that need to be toleranced, of the figure of merit that measures the image quality of a system. An algebraic method for computing the gradient and the Hessian is developed using perturbation theory.
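The cost-based tolerancing procedure described above needs the gradient and Hessian of the image-quality merit function with respect to the toleranced parameters. The thesis develops an algebraic perturbation method for this; the sketch below is only a generic central-difference approximation (the merit callable, parameter vector, and step size are placeholders), included to make the quantities concrete.

```python
import numpy as np

def grad_and_hessian(merit, x, h=1e-5):
    """Central-difference gradient and Hessian of a scalar merit function.

    merit: callable mapping a parameter vector to a scalar figure of merit
    x:     nominal (unperturbed) parameter vector
    h:     perturbation step applied to each toleranced parameter
    """
    n = len(x)
    grad = np.zeros(n)
    hess = np.zeros((n, n))
    f0 = merit(x)
    for i in range(n):
        ei = np.zeros(n); ei[i] = h
        f_plus, f_minus = merit(x + ei), merit(x - ei)
        grad[i] = (f_plus - f_minus) / (2 * h)
        hess[i, i] = (f_plus - 2 * f0 + f_minus) / h**2
        for j in range(i + 1, n):
            ej = np.zeros(n); ej[j] = h
            fpp, fpm = merit(x + ei + ej), merit(x + ei - ej)
            fmp, fmm = merit(x - ei + ej), merit(x - ei - ej)
            hess[i, j] = hess[j, i] = (fpp - fpm - fmp + fmm) / (4 * h**2)
    return grad, hess

# Toy quadratic merit function (placeholder, not an optical model)
g, H = grad_and_hessian(lambda p: p[0]**2 + 0.5 * p[0] * p[1] + 2 * p[1]**2,
                        np.array([0.1, -0.2]))
```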
Information systems and human error in the lab.
Bissell, Michael G
2004-01-01
Health system costs in clinical laboratories are incurred daily due to human error. Indeed, a major impetus for automating clinical laboratories has always been the opportunity it presents to simultaneously reduce cost and improve quality of operations by decreasing human error. But merely automating these processes is not enough. To the extent that introduction of these systems results in operators having less practice in dealing with unexpected events or becoming deskilled in problem-solving, new kinds of error will likely appear. Clinical laboratories could potentially benefit by integrating findings on human error from modern behavioral science into their operations. Fully understanding human error requires a deep understanding of human information processing and cognition. Predicting and preventing negative consequences requires application of this understanding to laboratory operations. Although the occurrence of a particular error at a particular instant cannot be absolutely prevented, human error rates can be reduced. The following principles are key: an understanding of the process of learning in relation to error; understanding the origin of errors, since this knowledge can be used to reduce their occurrence; optimal systems should be forgiving to the operator by absorbing errors, at least for a time; although much is known by industrial psychologists about how to write operating procedures and instructions in ways that reduce the probability of error, this expertise is hardly ever put to use in the laboratory; and a feedback mechanism must be designed into the system that enables the operator to recognize in real time that an error has occurred.
New double-byte error-correcting codes for memory systems
NASA Technical Reports Server (NTRS)
Feng, Gui-Liang; Wu, Xinen; Rao, T. R. N.
1996-01-01
Error-correcting or error-detecting codes have been used in the computer industry to increase reliability, reduce service costs, and maintain data integrity. The single-byte error-correcting and double-byte error-detecting (SbEC-DbED) codes have been successfully used in computer memory subsystems. There are many methods to construct double-byte error-correcting (DBEC) codes. In the present paper we construct a class of double-byte error-correcting codes, which are more efficient than those known to be optimum, and a decoding procedure for our codes is also considered.
Multi-bits error detection and fast recovery in RISC cores
NASA Astrophysics Data System (ADS)
Jing, Wang; Xing, Yang; Yuanfu, Zhao; Weigong, Zhang; Jiao, Shen; Keni, Qiu
2015-11-01
Particle-induced soft errors are a major threat to the reliability of microprocessors. Even worse, multi-bit upsets (MBUs) are increasing due to the rapidly shrinking feature size of ICs. Several architecture-level mechanisms have been proposed to protect microprocessors from soft errors, such as dual and triple modular redundancy (DMR and TMR). However, most of them are inefficient against the growing number of multi-bit errors or cannot well balance critical-path delay, area, and power penalties. This paper proposes a novel architecture, self-recovery dual-pipeline (SRDP), to effectively provide soft error detection and recovery with low cost for general RISC structures. We focus on the following three aspects. First, an advanced DMR pipeline is devised to detect soft errors, especially MBUs. Second, SEU/MBU errors can be located by adding self-checking logic to the pipeline stage registers. Third, a recovery scheme is proposed with a recovery cost of 1 or 5 clock cycles. Our evaluation of a prototype implementation shows that the SRDP can detect particle-induced soft errors with up to 100% coverage and recover from nearly 95% of them; the remaining 5% enter a specific trap.
The cost of misremembering: Inferring the loss function in visual working memory.
Sims, Chris R
2015-03-04
Visual working memory (VWM) is a highly limited storage system. A basic consequence of this fact is that visual memories cannot perfectly encode or represent the veridical structure of the world. However, in natural tasks, some memory errors might be more costly than others. This raises the intriguing possibility that the nature of memory error reflects the costs of committing different kinds of errors. Many existing theories assume that visual memories are noise-corrupted versions of afferent perceptual signals. However, this additive noise assumption oversimplifies the problem. Implicit in the behavioral phenomena of visual working memory is the concept of a loss function: a mathematical entity that describes the relative cost to the organism of making different types of memory errors. An optimally efficient memory system is one that minimizes the expected loss according to a particular loss function, while subject to a constraint on memory capacity. This paper describes a novel theoretical framework for characterizing visual working memory in terms of its implicit loss function. Using inverse decision theory, the empirical loss function is estimated from the results of a standard delayed recall visual memory experiment. These results are compared to the predicted behavior of a visual working memory system that is optimally efficient for a previously identified natural task, gaze correction following saccadic error. Finally, the approach is compared to alternative models of visual working memory, and shown to offer a superior account of the empirical data across a range of experimental datasets. © 2015 ARVO.
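The core idea above, that memory efficiency should be judged against a loss function rather than raw error, can be illustrated with a minimal sketch: given a set of recall errors, different candidate loss functions yield different expected losses. This is not the paper's inverse-decision-theory estimator; the error distribution and the power-law loss family below are assumed purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_loss(errors, power):
    """Mean loss for recall errors under a power-law loss L(e) = |e|**power."""
    return np.mean(np.abs(errors) ** power)

# Hypothetical recall errors (e.g., degrees of error in a delayed-recall task);
# a real analysis would use empirical data and estimate the loss exponent.
errors = rng.normal(loc=0.0, scale=12.0, size=10_000)

for p in (0.5, 1.0, 2.0):          # sub-linear, linear, and quadratic losses
    print(f"power={p}: expected loss = {expected_loss(errors, p):.2f}")
```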
Cost effectiveness of the stream-gaging program in South Carolina
Barker, A.C.; Wright, B.C.; Bennett, C.S.
1985-01-01
The cost effectiveness of the stream-gaging program in South Carolina was documented for the 1983 water year. Data uses and funding sources were identified for the 76 continuous stream gages currently being operated in South Carolina. The budget of $422,200 for collecting and analyzing streamflow data also includes the cost of operating stage-only and crest-stage stations. The streamflow records for one stream gage can be determined by alternative, less costly methods, and that gage should be discontinued. The remaining 75 stations should be maintained in the program for the foreseeable future. The current policy for the operation of the 75 stations, including the crest-stage and stage-only stations, would require a budget of $417,200/yr. The average standard error of estimation of streamflow records is 16.9% for the present budget with missing record included. However, the standard error of estimation would decrease to 8.5% if complete streamflow records could be obtained. It was shown that the average standard error of estimation of 16.9% could be obtained at the 75 sites with a budget of approximately $395,000 if the gaging resources were redistributed among the gages. A minimum budget of $383,500 is required to operate the program; a budget less than this does not permit proper service and maintenance of the gages and recorders. At the minimum budget, the average standard error is 18.6%. The maximum budget analyzed was $850,000, which resulted in an average standard error of 7.6%. (Author's abstract)
Tanker Argus: Re-supply for a LEO Cryogenic Propellant Depot
NASA Astrophysics Data System (ADS)
St. Germain, B.; Olds, J.; Kokan, T.; Marcus, L.; Miller, J.
The Argus reusable launch vehicle (RLV) concept is a single-stage-to-orbit conical, winged-body vehicle powered by two liquid hydrogen/liquid oxygen supercharged ejector ramjets. The 3rd-generation Argus launch vehicle utilizes advanced vehicle technologies along with a Maglev launch-assist track. A tanker version of the Argus RLV is envisioned to provide an economical means of delivering liquid fuel and oxidizer to an orbiting low-Earth orbit (LEO) propellant depot. This depot could then provide propellant to various spacecraft, including reusable orbital transfer vehicles used to ferry space solar power satellites to geostationary orbit. Two different tanker Argus configurations were analyzed. The first simply places additional propellant tanks inside the payload bay of an existing Argus reusable launch vehicle. The second concept is a modified Argus RLV in which the payload bay is removed and the vehicle propellant tanks are stretched to hold extra propellant. An iterative conceptual design process was used to design both Argus vehicles. This process involves various disciplines including aerodynamics, trajectory analysis, weights & structures, propulsion, operations, safety, and cost/economics. The payload bay version of tanker Argus, which has a gross mass of 256.3 MT, is designed to deliver a 9.07 MT payload to LEO. This payload includes propellant and the tank structure required to secure this propellant in the payload bay. The modified, pure tanker version of Argus has a gross mass of 218.6 MT and is sized to deliver a full 9.07 MT of propellant to LEO. The economic analysis performed for this study involved the calculation of many factors, including the design/development and recurring costs of each vehicle. These results were used along with other economic assumptions to determine the "per kilogram" cost of delivering propellant to orbit. The results show that for a given flight rate the "per kilogram" cost is cheaper for the pure tanker version of Argus. However, the main goal of this study was to determine the flight rate at which it would be financially beneficial to spend more development money to modify an existing, sunk-cost, payload bay version of Argus in order to create a more efficient pure tanker version. For flight rates greater than approximately 320 flights/year, there is only a small financial motivation to develop a pure tanker version. At this flight rate both versions of Argus are able to deliver propellant to LEO at an approximate cost of $375/kg.
Losses from effluent taxes and quotas under uncertainty
Watson, W.D.; Ridker, R.G.
1984-01-01
Recent theoretical papers by Adar and Griffin (J. Environ. Econ. Manag. 3, 178-188 (1976)), Fishelson (J. Environ. Econ. Manag. 3, 189-197 (1976)), and Weitzman (Rev. Econ. Studies 41, 477-491 (1974)) show that different expected social losses arise from using effluent taxes and quotas as alternative control instruments when marginal control costs are uncertain. Key assumptions in these analyses are linear marginal cost and benefit functions and an additive error for the marginal cost function (to reflect uncertainty). In this paper, empirically derived nonlinear functions and more realistic multiplicative error terms are used to estimate expected control and damage costs and to identify (empirically) the mix of control instruments that minimizes expected losses. © 1984.
Learning from Errors: Critical Incident Reporting in Nursing
ERIC Educational Resources Information Center
Gartmeier, Martin; Ottl, Eva; Bauer, Johannes; Berberat, Pascal Oliver
2017-01-01
Purpose: The purpose of this paper is to conceptualize error reporting as a strategy for informal workplace learning and investigate nurses' error reporting cost/benefit evaluations and associated behaviors. Design/methodology/approach: A longitudinal survey study was carried out in a hospital setting with two measurements (time 1 [t1]:…
Huff, Mark J.; Balota, David A.; Minear, Meredith; Aschenbrenner, Andrew J.; Duchek, Janet M.
2015-01-01
A task-switching paradigm was used to examine differences in attentional control across younger adults, middle-aged adults, healthy older adults, and individuals classified in the earliest detectable stage of Alzheimer's disease (AD). A large sample of participants (N = 570) completed a switching task in which participants were cued to classify either the letter (consonant/vowel) or the number (odd/even) task-set dimension of a bivalent stimulus (e.g., A 14). A Pure block consisting of single-task trials and a Switch block consisting of nonswitch and switch trials were completed. Local (switch vs. nonswitch trials) and global (nonswitch vs. pure trials) costs in mean error rates, mean response latencies, and underlying reaction time distributions, along with stimulus-response congruency effects, were computed. Local costs in errors were group invariant, but global costs in errors systematically increased as a function of age and AD. Response latencies yielded a strong dissociation: Local costs decreased across groups whereas global costs increased across groups. Vincentile distribution analyses revealed that the dissociation of local and global costs primarily occurred in the slowest response latencies. Stimulus-response congruency effects within the Switch block were particularly robust in accuracy in the very mild AD group. We argue that the results are consistent with the notion that the impaired groups show a reduced local cost because the task sets are not as well tuned, and hence produce minimal cost on switch trials. In contrast, global costs increase because of the additional burden on working memory of maintaining two task sets. PMID:26652720
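The local and global costs referred to above are simple contrasts between mean latencies in the three trial types. A minimal sketch with hypothetical group means (the millisecond values are placeholders, not the study's data):

```python
def switch_costs(rt_pure, rt_nonswitch, rt_switch):
    """Local and global task-switching costs from mean response latencies (ms)."""
    local_cost = rt_switch - rt_nonswitch      # switch vs. nonswitch trials
    global_cost = rt_nonswitch - rt_pure       # nonswitch vs. pure-block trials
    return local_cost, global_cost

# Hypothetical group means (ms), for illustration only
local, global_ = switch_costs(rt_pure=620.0, rt_nonswitch=810.0, rt_switch=930.0)
print(f"local cost = {local} ms, global cost = {global_} ms")
```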
McQueen, Robert Brett; Breton, Marc D; Craig, Joyce; Holmes, Hayden; Whittington, Melanie D; Ott, Markus A; Campbell, Jonathan D
2018-04-01
The objective was to model clinical and economic outcomes of self-monitoring blood glucose (SMBG) devices with varying error ranges and strip prices for type 1 and insulin-treated type 2 diabetes patients in England. We programmed a simulation model that included separate risk and complication estimates by type of diabetes and evidence from in silico modeling validated by the Food and Drug Administration. Changes in SMBG error were associated with changes in hemoglobin A1c (HbA1c) and separately, changes in hypoglycemia. Markov cohort simulation estimated clinical and economic outcomes. A SMBG device with 8.4% error and strip price of £0.30 (exceeding accuracy requirements by International Organization for Standardization [ISO] 15197:2013/EN ISO 15197:2015) was compared to a device with 15% error (accuracy meeting ISO 15197:2013/EN ISO 15197:2015) and price of £0.20. Outcomes were lifetime costs, quality-adjusted life years (QALYs) and incremental cost-effectiveness ratios (ICERs). With SMBG errors associated with changes in HbA1c only, the ICER was £3064 per QALY in type 1 diabetes and £264 668 per QALY in insulin-treated type 2 diabetes for an SMBG device with 8.4% versus 15% error. With SMBG errors associated with hypoglycemic events only, the device exceeding accuracy requirements was cost-saving and more effective in insulin-treated type 1 and type 2 diabetes. Investment in devices with higher strip prices but improved accuracy (less error) appears to be an efficient strategy for insulin-treated diabetes patients at high risk of severe hypoglycemia.
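An incremental cost-effectiveness ratio of the kind reported here is the difference in lifetime cost divided by the difference in QALYs between the two strategies. A minimal sketch with hypothetical inputs (the cost and QALY figures below are invented for illustration, not taken from the study):

```python
def icer(cost_new, qaly_new, cost_ref, qaly_ref):
    """Incremental cost-effectiveness ratio of a new strategy vs. a reference.

    A negative value with higher QALYs means the new strategy is dominant
    (cheaper and more effective), as reported for the hypoglycemia scenario.
    """
    return (cost_new - cost_ref) / (qaly_new - qaly_ref)

# Hypothetical lifetime values for a more accurate (8.4% error) device versus
# a reference (15% error) device; not the figures reported in the study.
value = icer(cost_new=41_500, qaly_new=11.30, cost_ref=40_000, qaly_ref=11.05)
print(f"ICER = £{value:,.0f} per QALY gained")
```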
NASA Astrophysics Data System (ADS)
Kurdhi, N. A.; Nurhayati, R. A.; Wiyono, S. B.; Handajani, S. S.; Martini, T. S.
2017-01-01
In this paper, we develop an integrated inventory model considering imperfect-quality items, inspection error, controllable lead time, and a budget capacity constraint. The imperfect items are uniformly distributed and are detected during the screening process. However, inspection is subject to two types of error: a type I error (when a non-defective item is classified as defective) and a type II error (when a defective item is classified as non-defective). The demand during the lead time is unknown, and it follows the normal distribution. The lead time can be controlled by adding a crashing cost. Furthermore, the budget capacity constraint arises from the limited purchasing cost. The purposes of this research are to modify the integrated vendor-buyer inventory model, to establish the optimal solution using the Kuhn-Tucker conditions, and to apply the models. Based on the results of the application and the sensitivity analysis, the integrated model yields a lower total inventory cost than the separated inventory model.
Lerch, Rachel A; Sims, Chris R
2016-06-01
Limitations in visual working memory (VWM) have been extensively studied in psychophysical tasks, but are not well understood in terms of how these memory limits translate to performance in more natural domains. For example, in reaching to grasp an object based on a spatial memory representation, overshooting the intended target may be more costly than undershooting, such as when reaching for a cup of hot coffee. The current body of literature lacks a detailed account of how the costs or consequences of memory error influence what we encode in visual memory and how we act on the basis of remembered information. Here, we study how externally imposed monetary costs influence behavior in a motor decision task that involves reach planning based on recalled information from VWM. We approach this from a decision-theoretic perspective, viewing decisions of where to aim in relation to the utility of their outcomes given the uncertainty of memory representations. Our results indicate that subjects accounted for the uncertainty in their visual memory, showing a significant difference in their reach planning when monetary costs were imposed for memory errors. However, our findings indicate that subjects' memory representations per se were not biased by the imposed costs; rather, subjects adopted a near-optimal post-mnemonic decision strategy in their motor planning.
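The decision-theoretic view described above can be made concrete with a one-dimensional sketch: under Gaussian memory noise and an asymmetric payoff (overshoot penalized more than undershoot), the expected-gain-maximizing aim point shifts away from the costly side. The payoff values, hit region, and noise level below are assumptions for illustration, not the experiment's actual design.

```python
import numpy as np

rng = np.random.default_rng(1)

def expected_gain(aim, target, memory_sd, n=50_000):
    """Monte Carlo expected gain of an aim point under Gaussian memory noise.

    Large overshoots are penalized more heavily than large undershoots
    (assumed payoffs, not the experiment's payoff table).
    """
    endpoints = aim + rng.normal(0.0, memory_sd, n)
    error = endpoints - target
    gain = np.where(np.abs(error) < 5.0, 1.0,          # hit region: reward
                    np.where(error > 0, -4.0, -1.0))   # overshoot costs 4x undershoot
    return gain.mean()

aims = np.linspace(-15, 5, 41)
best = max(aims, key=lambda a: expected_gain(a, target=0.0, memory_sd=8.0))
print(f"optimal aim point is about {best:.1f}, i.e., biased away from the costly overshoot")
```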
Hsueh, Ya-seng Arthur; Brando, Alex; Dunt, David; Anjou, Mitchell D; Boudville, Andrea; Taylor, Hugh
2013-12-01
To estimate the costs of the extra resources required to close the gap of vision between Indigenous and non-Indigenous Australians. Constructing comprehensive eye care pathways for Indigenous Australians with their related probabilities, to capture full eye care usage compared with current usage rate for cataract surgery, refractive error and diabetic retinopathy using the best available data. Urban and remote regions of Australia. The provision of eye care for cataract surgery, refractive error and diabetic retinopathy. Estimated cost needed for full access, estimated current spending and estimated extra cost required to close the gaps of cataract surgery, refractive error and diabetic retinopathy for Indigenous Australians. Total cost needed for full coverage of all three major eye conditions is $45.5 million per year in 2011 Australian dollars. Current annual spending is $17.4 million. Additional yearly cost required to close the gap of vision is $28 million. This includes extra-capped funds of $3 million from the Commonwealth Government and $2 million from the State and Territory Governments. Additional coordination costs per year are $13.3 million. Although available data are limited, this study has produced the first estimates that are indicative of the need for planning and provide equity in eye care. © 2013 The Authors. Australian Journal of Rural Health © National Rural Health Alliance Inc.
Class-specific Error Bounds for Ensemble Classifiers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prenger, R; Lemmond, T; Varshney, K
2009-10-06
The generalization error, or probability of misclassification, of ensemble classifiers has been shown to be bounded above by a function of the mean correlation between the constituent (i.e., base) classifiers and their average strength. This bound suggests that increasing the strength and/or decreasing the correlation of an ensemble's base classifiers may yield improved performance under the assumption of equal error costs. However, this and other existing bounds do not directly address application spaces in which error costs are inherently unequal. For applications involving binary classification, Receiver Operating Characteristic (ROC) curves, performance curves that explicitly trade off false alarms and missed detections, are often utilized to support decision making. To address performance optimization in this context, we have developed a lower bound for the entire ROC curve that can be expressed in terms of the class-specific strength and correlation of the base classifiers. We present empirical analyses demonstrating the efficacy of these bounds in predicting relative classifier performance. In addition, we specify performance regions of the ROC curve that are naturally delineated by the class-specific strengths of the base classifiers and show that each of these regions can be associated with a unique set of guidelines for performance optimization of binary classifiers within unequal error cost regimes.
Inspection error and its adverse effects - A model with implications for practitioners
NASA Technical Reports Server (NTRS)
Collins, R. D., Jr.; Case, K. E.; Bennett, G. K.
1978-01-01
Inspection error has clearly been shown to have adverse effects upon the results desired from a quality assurance sampling plan. These effects upon performance measures have been well documented from a statistical point of view. However, little work has been presented to convince the QC manager of the unfavorable cost consequences resulting from inspection error. This paper develops a very general, yet easily used, mathematical cost model. The basic format of the well-known Guthrie-Johns model is used. However, it is modified as required to assess the effects of attributes sampling errors of the first and second kind. The economic results, under different yet realistic conditions, will no doubt be of interest to QC practitioners who face similar problems daily. Sampling inspection plans are optimized to minimize economic losses due to inspection error. Unfortunately, any error at all results in some economic loss which cannot be compensated for by sampling plan design; however, improvements over plans which neglect the presence of inspection error are possible. Implications for human performance betterment programs are apparent, as are trade-offs between sampling plan modification and inspection and training improvements economics.
Generalized Variance Function Applications in Forestry
James Alegria; Charles T. Scott; Charles T. Scott
1991-01-01
Adequately predicting the sampling errors of tabular data can reduce printing costs by eliminating the need to publish separate sampling error tables. Two generalized variance functions (GVFs) found in the literature and three GVFs derived for this study were evaluated for their ability to predict the sampling error of tabular forestry estimates. The recommended GVFs...
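A common GVF form in the survey-sampling literature models the relative variance of an estimate as a + b/x; fitting a and b to a few published estimate/sampling-error pairs then lets one predict sampling errors for other table cells. The sketch below uses that generic form with invented forestry-style numbers; it is not one of the GVFs evaluated or derived in the study.

```python
import numpy as np

# Hypothetical (estimate, relative variance) pairs, e.g., timber volume totals
# and their published sampling errors; not data from the cited study.
x = np.array([1_000.0, 5_000.0, 20_000.0, 80_000.0])    # estimates
relvar = np.array([0.050, 0.014, 0.006, 0.0035])         # (SE / estimate)**2

# Fit the common GVF form  relvar(x) = a + b / x  by least squares
A = np.column_stack([np.ones_like(x), 1.0 / x])
(a, b), *_ = np.linalg.lstsq(A, relvar, rcond=None)

def predicted_se(estimate):
    """Predicted sampling error (same units as the estimate) from the fitted GVF."""
    return estimate * np.sqrt(a + b / estimate)

print(f"a={a:.4f}, b={b:.1f}; predicted SE for an estimate of 10,000: {predicted_se(10_000):.0f}")
```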
Portable and Error-Free DNA-Based Data Storage.
Yazdi, S M Hossein Tabatabaei; Gabrys, Ryan; Milenkovic, Olgica
2017-07-10
DNA-based data storage is an emerging nonvolatile memory technology of potentially unprecedented density, durability, and replication efficiency. The basic system implementation steps include synthesizing DNA strings that contain user information and subsequently retrieving them via high-throughput sequencing technologies. Existing architectures enable reading and writing but do not offer random-access and error-free data recovery from low-cost, portable devices, which is crucial for making the storage technology competitive with classical recorders. Here we show for the first time that a portable, random-access platform may be implemented in practice using nanopore sequencers. The novelty of our approach is to design an integrated processing pipeline that encodes data to avoid costly synthesis and sequencing errors, enables random access through addressing, and leverages efficient portable sequencing via new iterative alignment and deletion error-correcting codes. Our work represents the only known random access DNA-based data storage system that uses error-prone nanopore sequencers, while still producing error-free readouts with the highest reported information rate/density. As such, it represents a crucial step towards practical employment of DNA molecules as storage media.
Annual Cost of U.S. Hospital Visits for Pediatric Abusive Head Trauma.
Peterson, Cora; Xu, Likang; Florence, Curtis; Parks, Sharyn E
2015-08-01
We estimated the frequency and direct medical cost from the provider perspective of U.S. hospital visits for pediatric abusive head trauma (AHT). We identified treat-and-release hospital emergency department (ED) visits and admissions for AHT among patients aged 0-4 years in the Nationwide Emergency Department Sample and Nationwide Inpatient Sample (NIS), 2006-2011. We applied cost-to-charge ratios and estimated professional fee ratios from Truven Health MarketScan(®) to estimate per-visit and total population costs of AHT ED visits and admissions. Regression models assessed cost differences associated with selected patient and hospital characteristics. AHT was diagnosed during 6,827 (95% confidence interval [CI] [6,072, 7,582]) ED visits and 12,533 (95% CI [10,395, 14,671]) admissions (28% originating in the same hospital's ED) nationwide over the study period. The average medical cost per ED visit and admission were US$2,612 (error bound: 1,644-3,581) and US$31,901 (error bound: 29,266-34,536), respectively (2012 USD). The average total annual nationwide medical cost of AHT hospital visits was US$69.6 million (error bound: 56.9-82.3 million) over the study period. Factors associated with higher per-visit costs included patient age <1 year, males, coexisting chronic conditions, discharge to another facility, death, higher household income, public insurance payer, hospital trauma level, and teaching hospitals in urban locations. Study findings emphasize the importance of focused interventions to reduce this type of high-cost child abuse. © The Author(s) 2015.
Dimensional Error in Rapid Prototyping with Open Source Software and Low-cost 3D-printer.
Rendón-Medina, Marco A; Andrade-Delgado, Laura; Telich-Tarriba, Jose E; Fuente-Del-Campo, Antonio; Altamirano-Arcos, Carlos A
2018-01-01
Rapid prototyping models (RPMs) have been extensively used in craniofacial and maxillofacial surgery, especially in areas such as orthognathic surgery, posttraumatic or oncological reconstructions, and implantology. Economic limitations are higher in developing countries such as Mexico, where resources dedicated to health care are limited, therefore limiting the use of RPM to few selected centers. This article aims to determine the dimensional error of a low-cost fused deposition modeling 3D printer (Tronxy P802MA, Shenzhen, Tronxy Technology Co), with Open source software. An ordinary dry human mandible was scanned with a computed tomography device. The data were processed with open software to build a rapid prototype with a fused deposition machine. Linear measurements were performed to find the mean absolute and relative difference. The mean absolute and relative difference was 0.65 mm and 1.96%, respectively (P = 0.96). Low-cost FDM machines and Open Source Software are excellent options to manufacture RPM, with the benefit of low cost and a similar relative error than other more expensive technologies.
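The dimensional error reported here reduces to the mean absolute and mean relative differences between paired measurements on the reference mandible and the printed model. A minimal sketch (the measurement values are hypothetical):

```python
import numpy as np

def dimensional_error(reference_mm, printed_mm):
    """Mean absolute (mm) and mean relative (%) difference between paired measurements."""
    reference_mm, printed_mm = np.asarray(reference_mm), np.asarray(printed_mm)
    abs_diff = np.abs(printed_mm - reference_mm)
    rel_diff = 100.0 * abs_diff / reference_mm
    return abs_diff.mean(), rel_diff.mean()

# Hypothetical linear measurements (mm) on the CT-derived reference and the 3D print
ref = [45.2, 102.7, 88.4, 30.9]
printed = [45.8, 101.9, 89.1, 30.4]
mad, mrd = dimensional_error(ref, printed)
print(f"mean absolute difference = {mad:.2f} mm, mean relative difference = {mrd:.2f}%")
```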
Committing to coal and gas: Long-term contracts, regulation, and fuel switching in power generation
NASA Astrophysics Data System (ADS)
Rice, Michael
Fuel switching in the electricity sector has important economic and environmental consequences. In the United States, the increased supply of gas during the last decade has led to substantial switching in the short term. Fuel switching is constrained, however, by the existing infrastructure. The power generation infrastructure, in turn, represents commitments to specific sources of energy over the long term. This dissertation explores fuel contracts as the link between short-term price response and long-term plant investments. Contracting choices enable power plant investments that are relationship-specific, often regulated, and face uncertainty. Many power plants are subject to both hold-up in investment and cost-of-service regulation. I find that capital bias is robust when considering either irreversibility or hold-up due to the uncertain arrival of an outside option. For sunk capital, the rental rate is inappropriate for determining capital bias. Instead, capital bias depends on the regulated rate of return, discount rate, and depreciation schedule. If policies such as emissions regulations increase fuel-switching flexibility, this can lead to capital bias. Cost-of-service regulation can shorten the duration of a long-term contract. From the firm's perspective, the existing literature provides limited guidance when bargaining and writing contracts for fuel procurement. I develop a stochastic programming framework to optimize long-term contracting decisions under both endogenous and exogenous sources of hold-up risk. These typically include policy changes, price shocks, availability of fuel, and volatility in derived demand. For price risks, the optimal contract duration is the moment when the expected benefits of the contract are just outweighed by the expected opportunity costs of remaining in the contract. I prove that imposing early renegotiation costs decreases contract duration. Finally, I provide an empirical approach to show how coal contracts can limit short-term fuel switching in power production. During the era prior to shale gas and electricity market deregulation, I do not find evidence that gas generation substituted for coal in response to fuel price changes. However, I do find evidence that coal plant operations are constrained by fuel contracts. As the min-take commitment to coal increases, changes to annual coal plant output decrease. My conclusions are robust in spite of bias due to the selective reporting of proprietary coal delivery contracts by utilities.
Chen, Chia-Chi; Hsiao, Fei-Yuan; Shen, Li-Jiuan; Wu, Chien-Chih
2017-08-01
Medication errors may lead to adverse drug events (ADEs), which endanger patient safety and increase healthcare-related costs. The on-ward deployment of clinical pharmacists has been shown to reduce preventable ADEs and save costs. The purpose of this study was to evaluate the ADE-prevention and cost-saving effects of clinical pharmacist deployment in a nephrology ward. This was a retrospective study, which compared the number of pharmacist interventions 1 year before and after a clinical pharmacist was deployed in a nephrology ward. The clinical pharmacist attended ward rounds, reviewed and revised all medication orders, and gave active recommendations on medication use. For the intervention analysis, the numbers and types of the pharmacist's interventions in medication orders and the active recommendations were compared. For the cost analysis, both estimated cost savings and cost avoidance were calculated and compared. The total numbers of pharmacist interventions in medication orders were 824 in 2012 (preintervention) and 1977 in 2013 (postintervention). The numbers of active recommendations were 40 in 2012 and 253 in 2013. The estimated cost savings in 2012 and 2013 were NT$52,072 and NT$144,138, respectively. The estimated cost avoidances of preventable ADEs in 2012 and 2013 were NT$3,383,700 and NT$7,342,200, respectively. The benefit/cost ratio increased from 4.29 to 9.36, and average admission days decreased by 2 days after the on-ward deployment of a clinical pharmacist. The number of pharmacist interventions increased dramatically after the on-ward deployment. This service could reduce medication errors, preventable ADEs, and the costs of both medications and potential ADEs.
Aczel, Balazs; Bago, Bence; Szollosi, Aba; Foldes, Andrei; Lukacs, Bence
2015-01-01
The aim of this study was to initiate the exploration of debiasing methods applicable in real-life settings for achieving lasting improvement in decision-making competence regarding multiple decision biases. Here, we tested the potential of the analogical encoding method for decision debiasing. The advantage of this method is that it can foster the transfer from learning abstract principles to improving behavioral performance. For the purpose of the study, we devised an analogical debiasing technique for 10 biases (covariation detection, insensitivity to sample size, base rate neglect, regression to the mean, outcome bias, sunk cost fallacy, framing effect, anchoring bias, overconfidence bias, planning fallacy) and assessed the susceptibility of the participants (N = 154) to these biases before and 4 weeks after the training. We also compared the effect of the analogical training to the effect of ‘awareness training’ and a ‘no-training’ control group. Results suggested improved performance of the analogical training group only on tasks where violations of statistical principles are measured. The interpretation of these findings requires further investigation, yet it is possible that analogical training may be most effective for learning abstract concepts, such as statistical principles, which are otherwise difficult to master. The study encourages systematic research on debiasing trainings and the development of intervention assessment methods to measure the endurance of behavior change in decision debiasing. PMID:26300816
What causes trainees to leave oral and maxillofacial surgery? A questionnaire survey.
Herbert, C; Kent, S; Magennis, P; Cleland, J
2017-01-01
Understanding what causes trainees to leave OMFS is essential if we are to retain them within the specialty. Although these factors have been defined for medicine, we know of no previous study for OMFS. An online survey was distributed to roughly 1500 people who had registered an interest in OMFS during the past seven years. Personal information and details of education and employment were gathered, along with personal factors that attracted them to OMFS. Of 251 trainees who responded, 50 (30%) were no longer interested. Factors that significantly correlated with an interest in OMFS included male sex (p=0.020), dual qualification (p=0.024), and (only for women) being single (p=0.024) and having no dependants (p=0.005). We used qualitative analysis to identify work-life balance, duration of training, and financial implications as significant factors. Identification of key factors that affect OMFS trainees allows us to develop ways to keep them in the specialty. The predominant factor is work-life balance, which for women included having children and being married. Financial issues related to the junior doctors' contract and competition ratios for second degrees are also factors for both sexes. Also important is the "sunk costs" fallacy, which causes some trainees to stay in training. This information can be used to help develop higher training, in negotiations of contracts, and to attract and retain future OMFS trainees. Copyright © 2016 The British Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
Baltussen, Rob; Naus, Jeroen; Limburg, Hans
2009-02-01
To estimate the costs and effects of alternative strategies for annual screening of school children for refractive errors, and the provision of spectacles, in different WHO sub-regions in Africa, Asia, America and Europe. We developed a mathematical simulation model for uncorrected refractive error, using prevailing prevalence and incidence rates. Remission rates reflected the absence or presence of screening strategies for school children. All screening strategies were implemented for a period of 10 years and were compared to a situation where no screening was implemented. Outcome measures were disability-adjusted life years (DALYs), costs of screening, provision of spectacles and follow-up for six different screening strategies, and cost-effectiveness in international dollars per DALY averted. Epidemiological information was derived from the burden of disease study of the World Health Organization (WHO). Cost data were derived from large databases from the WHO. Both univariate and multivariate sensitivity analyses were performed on key parameters to determine the robustness of the model results. In all regions, screening of 5-15 year-old children yields the most health effects, followed by screening of 11-15 year-olds, 5-10 year-olds, and screening of 8- and 13-year-olds. Screening of broad age intervals is always more costly than screening of single-age intervals, and there are important economies of scale for simultaneous screening of both 5-10 and 11-15 year-old children. In all regions, screening of 11-15 year-olds is the most cost-effective intervention, with the cost per DALY averted ranging from I$67 in the Asian sub-region to I$458 in the European sub-region. The incremental cost per DALY averted of screening 5-15 year-olds ranges between I$111 in the Asian sub-region and I$672 in the European sub-region. Considering the conservative study assumptions and the robustness of the study conclusions towards changes in these assumptions, screening of school children for refractive error is economically attractive in all regions of the world.
How to Correct a Task Error: Task-Switch Effects Following Different Types of Error Correction
ERIC Educational Resources Information Center
Steinhauser, Marco
2010-01-01
It has been proposed that switch costs in task switching reflect the strengthening of task-related associations and that strengthening is triggered by response execution. The present study tested the hypothesis that only task-related responses are able to trigger strengthening. Effects of task strengthening caused by error corrections were…
NASA Astrophysics Data System (ADS)
Moslehi, M.; de Barros, F.; Rajagopal, R.
2014-12-01
Hydrogeological models that represent flow and transport in subsurface domains are usually large-scale, with excessive computational complexity and uncertain characteristics. Uncertainty quantification for predicting flow and transport in heterogeneous formations often entails a numerical Monte Carlo framework, which repeatedly simulates the model according to a random field representing the hydrogeological characteristics of the field. The physical resolution (e.g., grid resolution associated with the physical space) for the simulation is customarily chosen based on recommendations in the literature, independent of the number of Monte Carlo realizations. This practice may lead to either excessive computational burden or inaccurate solutions. We propose an optimization-based methodology that considers the trade-off between the following conflicting objectives: time associated with computational costs, statistical convergence of the model predictions, and physical errors corresponding to numerical grid resolution. In this research, we optimally allocate computational resources by developing a modeling framework for the overall error based on a joint statistical and numerical analysis and optimizing the error model subject to a given computational constraint. The derived expression for the overall error explicitly takes into account the joint dependence between the discretization error of the physical space and the statistical error associated with Monte Carlo realizations. The accuracy of the proposed framework is verified in this study by applying it to several computationally extensive examples. Having this framework at hand helps hydrogeologists choose the optimum physical and statistical resolutions that minimize the error for a given computational budget. Moreover, the influence of the available computational resources and the geometric properties of the contaminant source zone on the optimum resolutions is investigated. We conclude that the computational cost associated with optimal allocation can be substantially reduced compared with prevalent recommendations in the literature.
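The trade-off described above can be sketched with an assumed error model in which the squared overall error is the sum of a discretization term that grows with grid spacing and a Monte Carlo term that shrinks with the number of realizations, minimized subject to a fixed computational budget. The constants, exponents, and cost model below are placeholders, not the paper's fitted expressions.

```python
import numpy as np

# Assumed error model (placeholders):
# total_error**2 = c1 * h**p   (spatial discretization)  +  c2 / N   (Monte Carlo)
c1, p, c2 = 0.04, 2.0, 5.0
budget = 1e8                            # allowed total cost, in cell-updates

def cells_per_run(h):
    """Number of cells in a unit-square 2-D domain at grid spacing h."""
    return (1.0 / h) ** 2

best = None
for h in np.linspace(0.005, 0.1, 60):              # candidate grid spacings
    n_runs = int(budget / cells_per_run(h))        # realizations affordable at this h
    if n_runs < 2:
        continue
    total_err = np.sqrt(c1 * h**p + c2 / n_runs)
    if best is None or total_err < best[0]:
        best = (total_err, h, n_runs)

err, h_opt, n_opt = best
print(f"optimal spacing h = {h_opt:.3f}, realizations N = {n_opt}, total error = {err:.4f}")
```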
Cost-benefit analysis: newborn screening for inborn errors of metabolism in Lebanon.
Khneisser, I; Adib, S; Assaad, S; Megarbane, A; Karam, P
2015-12-01
Few countries in the Middle East-North Africa region have adopted national newborn screening for inborn errors of metabolism by tandem mass spectrometry (MS/MS). We aimed to evaluate the cost-benefit of newborn screening for such disorders in Lebanon, as a model for other developing countries in the region. Average costs of expected care for inborn errors of metabolism cases as a group, between ages 0 and 18, early and late diagnosed, were calculated from 2007 to 2013. The monetary value of early detection using MS/MS was compared with that of clinical "late detection", including the cost of diagnosis and hospitalizations. During this period, 126,000 newborns were screened. The incidence of detected cases was 1/1482, which can be explained by high consanguinity rates in Lebanon. A reduction of the direct cost of care by half, averaging US$31,631 per detected case, was shown. This difference more than covers the expense of starting a newborn screening programme. Although this model does not take into consideration the indirect benefits of the better quality of life of those screened early, it can be argued that the direct and indirect costs saved through early detection of these disorders are important enough to justify universal publicly-funded screening, especially in developing countries with high consanguinity rates, as shown through these data from Lebanon. © The Author(s) 2015.
Error begat error: design error analysis and prevention in social infrastructure projects.
Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M
2012-09-01
Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research addressing error causation in construction projects, design errors remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g., hospitals, education, law and order type buildings). A systemic model of error causation is propagated and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in congruence to prevent design errors from occurring and so ensure that safety and project performance are ameliorated. Copyright © 2011. Published by Elsevier Ltd.
Gadoury, R.A.; Smath, J.A.; Fontaine, R.A.
1985-01-01
The report documents the results of a study of the cost-effectiveness of the U.S. Geological Survey's continuous-record stream-gaging programs in Massachusetts and Rhode Island. Data uses and funding sources were identified for 91 gaging stations being operated in Massachusetts; some of these stations are being operated to provide data for two special-purpose hydrologic studies, and they are planned to be discontinued at the conclusion of the studies. Cost-effectiveness analyses were performed on 63 continuous-record gaging stations in Massachusetts and 15 stations in Rhode Island, at budgets of $353,000 and $60,500, respectively. Current operations policies result in average standard errors per station of 12.3% in Massachusetts and 9.7% in Rhode Island. Minimum possible budgets to maintain the present numbers of gaging stations in the two States are estimated to be $340,000 and $59,000, with average errors per station of 12.8% and 10.0%, respectively. If the present budget levels were doubled, average standard errors per station would decrease to 8.1% and 4.2%, respectively. Further budget increases would not improve the standard errors significantly. (USGS)
NASA Astrophysics Data System (ADS)
Sinsbeck, Michael; Tartakovsky, Daniel
2015-04-01
Infiltration into top soil can be described by alternative models with different degrees of fidelity: the Richards equation and the Green-Ampt model. These models typically contain uncertain parameters and forcings, rendering predictions of the state variables uncertain as well. Within the probabilistic framework, solutions of these models are given in terms of their probability density functions (PDFs) that, in the presence of data, can be treated as prior distributions. The assimilation of soil moisture data into model predictions, e.g., via a Bayesian updating of solution PDFs, poses a question of model selection: given a significant difference in computational cost, is a lower-fidelity model preferable to its higher-fidelity counterpart? We investigate this question in the context of heterogeneous porous media, whose hydraulic properties are uncertain. While low-fidelity (reduced-complexity) models introduce a model error, their moderate computational cost makes it possible to generate more realizations, which reduces the (e.g., Monte Carlo) sampling or stochastic error. The ratio between these two errors determines the model with the smallest total error. We found assimilation of measurements of a quantity of interest (the soil moisture content, in our example) to decrease the model error, increasing the probability that the predictive accuracy of a reduced-complexity model does not fall below that of its higher-fidelity counterpart.
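A minimal sketch of the model-selection criterion: for a fixed budget, the low-fidelity model carries a bias (model error) but affords many more realizations, so its sampling error is smaller; whichever model has the smaller combined error is preferred. The per-run costs and error magnitudes below are assumptions for illustration only.

```python
import numpy as np

def total_error(model_bias, per_run_cost, budget, sampling_const=1.0):
    """Combine model (bias) error with Monte Carlo sampling error for a given budget."""
    n_runs = max(int(budget / per_run_cost), 1)
    sampling_err = sampling_const / np.sqrt(n_runs)
    return np.hypot(model_bias, sampling_err), n_runs

budget = 200.0                                  # CPU-hours, assumed
# Assumed characteristics (placeholders): a Richards-type solver vs. Green-Ampt
hi_fi = total_error(model_bias=0.00, per_run_cost=2.0, budget=budget)
lo_fi = total_error(model_bias=0.03, per_run_cost=0.02, budget=budget)

print(f"high fidelity: error = {hi_fi[0]:.3f} with N = {hi_fi[1]} runs")
print(f"low fidelity:  error = {lo_fi[0]:.3f} with N = {lo_fi[1]} runs")
```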
Quality Issues of Court Reporters and Transcriptionists for Qualitative Research
Hennink, Monique; Weber, Mary Beth
2015-01-01
Transcription is central to qualitative research, yet few researchers identify the quality of different transcription methods. We described the quality of verbatim transcripts from traditional transcriptionists and court reporters by reviewing 16 transcripts from 8 focus group discussions using four criteria: transcription errors, cost and time of transcription, and effect on study participants. Transcriptionists made fewer errors, captured colloquial dialogue, and errors were largely influenced by the quality of the recording. Court reporters made more errors, particularly in the omission of topical content and contextual detail and were less able to produce a verbatim transcript; however the potential immediacy of the transcript was advantageous. In terms of cost, shorter group discussions favored a transcriptionist and longer groups a court reporter. Study participants reported no effect by either method of recording. Understanding the benefits and limitations of each method of transcription can help researchers select an appropriate method for each study. PMID:23512435
NASA Astrophysics Data System (ADS)
González-Jorge, Higinio; Riveiro, Belén; Varela, María; Arias, Pedro
2012-07-01
A low-cost image orthorectification tool based on the use of compact cameras and scale bars is developed to obtain the main geometric parameters of masonry bridges for inventory and routine inspection purposes. The technique is validated in three different bridges by comparison with laser scanning data. The surveying process is delicate and must balance working distance against viewing angle. Three different cameras are used in the study to establish the relationship between the error and the camera model. The results show that the error does not depend on the length of the bridge element, the type of bridge, or the type of element. Error values for all the cameras are below 4 percent (for 95 percent of the data). A compact Canon camera, the model with the best technical specifications, shows an error level ranging from 0.5 to 1.5 percent.
The Value of Certainty (Invited)
NASA Astrophysics Data System (ADS)
Barkstrom, B. R.
2009-12-01
It is clear that Earth science data are valued, in part, for their ability to provide some certainty about the past state of the Earth and about its probable future states. We can sharpen this notion by using seven categories of value:
● Warning Service, requiring latency of three hours or less, as well as uninterrupted service
● Information Service, requiring latency less than about two weeks, as well as uninterrupted service
● Process Information, requiring the ability to distinguish between alternative processes
● Short-term Statistics, requiring the ability to construct a reliable record of the statistics of a parameter for an interval of five years or less, e.g. crop insurance
● Mid-term Statistics, requiring the ability to construct a reliable record of the statistics of a parameter for an interval of twenty-five years or less, e.g. power plant siting
● Long-term Statistics, requiring the ability to construct a reliable record of the statistics of a parameter for an interval of a century or less, e.g. one hundred year flood planning
● Doomsday Statistics, requiring the ability to construct a reliable statistical record that is useful for reducing the impact of `doomsday' scenarios
While the first two of these categories place high value on having an uninterrupted flow of information, and the third places value on contributing to our understanding of physical processes, it is notable that the last four may be placed on a common footing by considering the ability of observations to reduce uncertainty. Quantitatively, we can often identify metrics for parameters of interest that are fairly simple. For example:
● Detection of change in the average value of a single parameter, such as global temperature
● Detection of a trend, whether linear or nonlinear, such as the trend in cloud forcing known as cloud feedback
● Detection of a change in extreme value statistics, such as flood frequency or drought severity
For such quantities, we can quantify uncertainty in terms of the entropy, which is calculated by creating a set of discrete bins for the value and then using error estimates to assign probabilities, p_i, to each bin. The entropy, H, is simply H = Σ_i p_i log2(1/p_i). The value of a new set of observations is the information gain, I, which is I = H_prior − H_posterior. The probability distributions that appear in this calculation depend on rigorous evaluation of errors in the observations. While direct estimates of the monetary value of data that could be used in budget prioritizations may not capture the value of data to the scientific community, it appears that the information gain may be a useful start in providing a `common currency' for evaluating projects that serve very different communities. In addition, from the standpoint of governmental accounting, it appears reasonable to assume that much of the expense for scientific data becomes sunk costs shortly after operations begin and that the real, long-term value is created by the effort scientists expend in creating the software that interprets the data and in the effort expended in calibration and validation. These efforts are the ones that directly contribute to the information gain that provides the value of these data.
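The entropy and information-gain expressions quoted in the abstract can be evaluated directly once the binned probabilities are in hand; the following sketch uses hypothetical prior and posterior bin probabilities purely for illustration.

```python
import numpy as np

def entropy_bits(prob):
    """Shannon entropy H = sum_i p_i log2(1/p_i) over discrete bins."""
    p = np.asarray(prob, dtype=float)
    p = p[p > 0]                      # bins with p = 0 contribute nothing
    return float(np.sum(p * np.log2(1.0 / p)))

# Hypothetical example: probabilities of a trend falling into five discrete
# bins, before (prior) and after (posterior) a new set of observations.
prior     = [0.20, 0.20, 0.20, 0.20, 0.20]
posterior = [0.02, 0.08, 0.60, 0.25, 0.05]

info_gain = entropy_bits(prior) - entropy_bits(posterior)   # I = H_prior - H_posterior
print(f"H_prior = {entropy_bits(prior):.3f} bits, "
      f"H_posterior = {entropy_bits(posterior):.3f} bits, I = {info_gain:.3f} bits")
```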
Error-Tolerant Quasi-Paraboloidal Solar Concentrator
NASA Technical Reports Server (NTRS)
Wagner, Howard A.
1988-01-01
Scalloping reflector surface reduces sensitivity to manufacturing and aiming errors. Contrary to intuition, most effective shape of concentrating reflector for solar heat engine is not perfect paraboloid. According to design studies for Space Station solar concentrator, scalloped, nonimaging approximation to perfect paraboloid offers better overall performance in view of finite apparent size of Sun, imperfections of real equipment, and cost of accommodating these complexities. Scalloped-reflector concept also applied to improve performance while reducing cost of manufacturing and operation of terrestrial solar concentrator.
Giduthuri, Joseph G.; Maire, Nicolas; Joseph, Saju; Kudale, Abhay; Schaetti, Christian; Sundaram, Neisha; Schindler, Christian; Weiss, Mitchell G.
2014-01-01
Background Mobile electronic devices are replacing paper-based instruments and questionnaires for epidemiological and public health research. The elimination of a data-entry step after an interview is a notable advantage over paper, saving investigator time, decreasing the time lags in managing and analyzing data, and potentially improving the data quality by removing the error-prone data-entry step. Research has not yet provided adequate evidence, however, to substantiate the claim of fewer errors for computerized interviews. Methodology We developed an Android-based illness explanatory interview for influenza vaccine acceptance and tested the instrument in a field study in Pune, India, for feasibility and acceptability. Error rates for tablet and paper were compared with reference to the voice recording of the interview as gold standard to assess discrepancies. We also examined the preference of interviewers for the classical paper-based or the electronic version of the interview and compared the costs of research with both data collection devices. Results In 95 interviews with household respondents, total error rates with paper and tablet devices were nearly the same (2.01% and 1.99% respectively). Most interviewers indicated no preference for a particular device; but those with a preference opted for tablets. The initial investment in tablet-based interviews was higher compared to paper, while the recurring costs per interview were lower with the use of tablets. Conclusion An Android-based tablet version of a complex interview was developed and successfully validated. Advantages were not compromised by increased errors, and field research assistants with a preference preferred the Android device. Use of tablets may be more costly than paper for small samples and less costly for large studies. PMID:25233212
NASA Astrophysics Data System (ADS)
Disney, M. J.; Lang, R. H.
2012-11-01
The Hubble Space Telescope (HST) finds galaxies whose Tolman dimming exceeds 10 mag. Could evolution alone explain these as our ancestor galaxies or could they be representatives of quite a different dynasty whose descendants are no longer prominent today? We explore the latter hypothesis and argue that surface brightness selection effects naturally bring into focus quite different dynasties from different redshifts. Thus, the HST z = 7 galaxies could be examples of galaxies whose descendants are both too small and too choked with dust to be easily recognizable in our neighbourhood today. Conversely, the ancestors of the Milky Way and its obvious neighbours would have completely sunk below the sky at z > 1.2, unless they were more luminous in the past, although their diffused light could account for the missing re-ionization flux. This Succeeding Prominent Dynasties Hypothesis (SPDH) fits the existing observations both naturally and well even without evolution, including the bizarre distributions of galaxy surface brightness found in deep fields, the angular size ∼(1 + z)^-1 law, 'downsizing' which turns out to be an 'illusion' in the sense that it does not imply evolution, 'infant mortality', that is, the discrepancy between stars born and stars seen, the existence of 'red nuggets', and finally the recently discovered and unexpected excess of quasar absorption line damped Lyα systems at high redshift. If galaxies were not significantly brighter in the past and the SPDH were true, then a large proportion of galaxies could remain sunk from sight, possibly at all redshifts, and these sunken galaxies could supply the missing re-ionization flux. We show that fishing these sunken galaxies out of the sky by their optical emissions alone is practically impossible, even when they are nearby. More ingenious methods are needed to detect them. It follows that disentangling galaxy evolution through studying ever higher redshift galaxies may be a forlorn hope because one could be comparing young oranges with old apples, not ancestors with their true descendants.
An Enhanced MEMS Error Modeling Approach Based on Nu-Support Vector Regression
Bhatt, Deepak; Aggarwal, Priyanka; Bhattacharya, Prabir; Devabhaktuni, Vijay
2012-01-01
Micro Electro Mechanical System (MEMS)-based inertial sensors have made possible the development of civilian land vehicle navigation systems by offering a low-cost solution. However, accurate modeling of MEMS sensor errors is one of the most challenging tasks in the design of low-cost navigation systems. These sensors exhibit significant errors such as biases, drift, and noise, which are negligible for higher-grade units. Different conventional techniques utilizing the Gauss-Markov model and neural network methods have previously been used to model the errors. However, the Gauss-Markov model works unsatisfactorily in the case of MEMS units due to the presence of high inherent sensor errors. On the other hand, modeling the random drift with a Neural Network (NN) is time-consuming, thereby affecting its real-time implementation. We overcome these existing drawbacks by developing an enhanced Support Vector Machine (SVM)-based error model. Unlike NNs, SVMs do not suffer from local minima or over-fitting problems and deliver a reliable global solution. Experimental results showed that the proposed SVM approach reduced the noise standard deviation by 10–35% for gyroscopes and 61–76% for accelerometers. Further, positional error drifts under static conditions improved by 41% and 80% in comparison to the NN and GM approaches. PMID:23012552
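As a rough illustration of the kind of support-vector error modeling the abstract describes, the sketch below fits scikit-learn's Nu-SVR to a synthetic drift signal; the drift model, hyperparameters, and compensation scheme are assumptions made for demonstration and are not taken from the paper.

```python
import numpy as np
from sklearn.svm import NuSVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for a MEMS gyro drift record (hypothetical signal).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 100.0, 2000).reshape(-1, 1)            # time stamps [s]
drift = 0.02 * t.ravel() + 0.5 * np.sin(0.1 * t.ravel())     # slowly varying bias
measured_error = drift + 0.05 * rng.standard_normal(t.shape[0])

# Nu-SVR regresses the sensor error on time (or on other observables);
# the fitted model can then be subtracted from raw readings to compensate drift.
model = make_pipeline(StandardScaler(), NuSVR(nu=0.5, C=10.0, kernel="rbf"))
model.fit(t, measured_error)

predicted = model.predict(t)
residual_std = np.std(measured_error - predicted)
print(f"residual std after compensation: {residual_std:.4f}")
```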
DNA assembly with error correction on a droplet digital microfluidics platform.
Khilko, Yuliya; Weyman, Philip D; Glass, John I; Adams, Mark D; McNeil, Melanie A; Griffin, Peter B
2018-06-01
Custom synthesized DNA is in high demand for synthetic biology applications. However, current technologies to produce these sequences using assembly from DNA oligonucleotides are costly and labor-intensive. The automation and reduced sample volumes afforded by microfluidic technologies could significantly decrease materials and labor costs associated with DNA synthesis. The purpose of this study was to develop a gene assembly protocol utilizing a digital microfluidic device. Toward this goal, we adapted bench-scale oligonucleotide assembly methods followed by enzymatic error correction to the Mondrian™ digital microfluidic platform. We optimized Gibson assembly, polymerase chain reaction (PCR), and enzymatic error correction reactions in a single protocol to assemble 12 oligonucleotides into a 339-bp double-stranded DNA sequence encoding part of the human influenza virus hemagglutinin (HA) gene. The reactions were scaled down to 0.6-1.2 μL. Initial microfluidic assembly methods were successful and had an error frequency of approximately 4 errors/kb, with errors originating from the original oligonucleotide synthesis. Relative to conventional benchtop procedures, PCR optimization required additional amounts of MgCl2, Phusion polymerase, and PEG 8000 to achieve amplification of the assembly and error correction products. After one round of error correction, error frequency was reduced to an average of 1.8 errors/kb. We demonstrated that DNA assembly from oligonucleotides and error correction could be completely automated on a digital microfluidic (DMF) platform. The results demonstrate that enzymatic reactions in droplets show a strong dependence on surface interactions, and successful on-chip implementation required supplementation with surfactants, molecular crowding agents, and an excess of enzyme. Enzymatic error correction of assembled fragments improved sequence fidelity by 2-fold, which was a significant improvement but somewhat lower than expected compared to bench-top assays, suggesting an additional capacity for optimization.
Goulet, Eric D B; Baker, Lindsay B
2017-12-01
The B-722 Laqua Twin is a low-cost, portable, battery-operated sodium analyzer that can be used for the assessment of sweat sodium concentration. The Laqua Twin is reliable and provides a degree of accuracy similar to more expensive analyzers; however, its interunit measurement error remains unknown. The purpose of this study was to compare the sodium concentration values of 70 sweat samples measured using three different Laqua Twin units. Mean absolute errors, random errors, and constant errors among the different Laqua Twins ranged, respectively, from 1.7 mmol/L to 3.5 mmol/L, 2.5 mmol/L to 3.7 mmol/L, and -0.6 mmol/L to 3.9 mmol/L. Proportional errors among Laqua Twins were all < 2%. Based on a within-subject biological variability in sweat sodium concentration of ± 12%, the maximal allowable imprecision among instruments was considered to be ≤ 6%. In that respect, the within (2.9%), between (4.5%), and total (5.4%) measurement error coefficients of variation were all < 6%. For a given sweat sodium concentration value, the largest observed differences in mean, lower bound, and upper bound error of measurement among instruments were, respectively, 4.7 mmol/L, 2.3 mmol/L, and 7.0 mmol/L. In conclusion, our findings show that the interunit measurement error of the B-722 Laqua Twin is low and methodologically acceptable.
The fitness cost of mis-splicing is the main determinant of alternative splicing patterns.
Saudemont, Baptiste; Popa, Alexandra; Parmley, Joanna L; Rocher, Vincent; Blugeon, Corinne; Necsulea, Anamaria; Meyer, Eric; Duret, Laurent
2017-10-30
Most eukaryotic genes are subject to alternative splicing (AS), which may contribute to the production of protein variants or to the regulation of gene expression via nonsense-mediated messenger RNA (mRNA) decay (NMD). However, a fraction of splice variants might correspond to spurious transcripts and the question of the relative proportion of splicing errors to functional splice variants remains highly debated. We propose a test to quantify the fraction of AS events corresponding to errors. This test is based on the fact that the fitness cost of splicing errors increases with the number of introns in a gene and with expression level. We analyzed the transcriptome of the intron-rich eukaryote Paramecium tetraurelia. We show that in both normal and in NMD-deficient cells, AS rates strongly decrease with increasing expression level and with increasing number of introns. This relationship is observed for AS events that are detectable by NMD as well as for those that are not, which invalidates the hypothesis of a link with the regulation of gene expression. Our results show that in genes with a median expression level, 92-98% of observed splice variants correspond to errors. We observed the same patterns in human transcriptomes and we further show that AS rates correlate with the fitness cost of splicing errors. These observations indicate that genes under weaker selective pressure accumulate more maladaptive substitutions and are more prone to splicing errors. Thus, to a large extent, patterns of gene expression variants simply reflect the balance between selection, mutation, and drift.
The Accuracy of Webcams in 2D Motion Analysis: Sources of Error and Their Control
ERIC Educational Resources Information Center
Page, A.; Moreno, R.; Candelas, P.; Belmar, F.
2008-01-01
In this paper, we show the potential of webcams as precision measuring instruments in a physics laboratory. Various sources of error appearing in 2D coordinate measurements using low-cost commercial webcams are discussed, quantifying their impact on accuracy and precision, and simple procedures to control these sources of error are presented.…
Incorporating approximation error in surrogate based Bayesian inversion
NASA Astrophysics Data System (ADS)
Zhang, J.; Zeng, L.; Li, W.; Wu, L.
2015-12-01
There is increasing interest in applying surrogates in inverse Bayesian modeling to reduce repetitive evaluations of the original model and thereby save computational cost. However, the approximation error of the surrogate model is usually overlooked, partly because it is difficult to evaluate the approximation error for many surrogates. Previous studies have shown that the direct combination of surrogates and Bayesian methods (e.g., Markov Chain Monte Carlo, MCMC) may lead to biased estimates when the surrogate cannot emulate the highly nonlinear original system. This problem can be alleviated by implementing MCMC in a two-stage manner. However, the computational cost is still high, since a relatively large number of original model simulations are required. In this study, we illustrate the importance of incorporating approximation error in inverse Bayesian modeling. A Gaussian process (GP) is chosen to construct the surrogate for its convenience in approximation error evaluation. Numerical cases of Bayesian experimental design and parameter estimation for contaminant source identification are used to illustrate this idea. It is shown that, once the surrogate approximation error is well incorporated into the Bayesian framework, promising results can be obtained even when the surrogate is used directly, and no further original model simulations are required.
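One common way to incorporate surrogate approximation error, in the spirit of this abstract, is to add the Gaussian-process predictive variance to the observation-noise variance inside the likelihood. The toy one-parameter inversion below sketches that idea; the forward model, noise level, and grid-based posterior are hypothetical and are not the study's actual test cases.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hypothetical 1-D inverse problem: infer theta from one noisy observation of
# an "expensive" forward model f(theta), emulated by a GP trained on few runs.
def forward(theta):
    return np.sin(3.0 * theta) + 0.5 * theta

rng = np.random.default_rng(1)
theta_train = rng.uniform(-2.0, 2.0, size=(12, 1))      # small design of runs
y_train = forward(theta_train).ravel()

gp = GaussianProcessRegressor(ConstantKernel() * RBF(length_scale=0.5),
                              normalize_y=True).fit(theta_train, y_train)

obs, sigma_obs = 0.9, 0.05               # synthetic datum and its noise level

def log_likelihood(theta):
    mu, sd = gp.predict(np.atleast_2d(theta), return_std=True)
    var = sigma_obs**2 + sd[0]**2        # observation noise + surrogate error
    return -0.5 * ((obs - mu[0])**2 / var + np.log(2.0 * np.pi * var))

grid = np.linspace(-2.0, 2.0, 401)       # flat prior evaluated on a grid
post = np.exp([log_likelihood(th) for th in grid])
post /= np.trapz(post, grid)
print("posterior mean estimate:", np.trapz(grid * post, grid))
```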
NASA Astrophysics Data System (ADS)
Sousa, Andre R.; Schneider, Carlos A.
2001-09-01
A touch probe is used on a 3-axis vertical machining center to check against a hole plate calibrated on a coordinate measuring machine (CMM). By comparing the results obtained from the machine tool and the CMM, the main machine tool error components are measured, attesting the machine accuracy. The error values can also be used to update the error compensation table in the CNC, enhancing the machine accuracy. The method is easy to use, has a lower cost than classical test techniques, and preliminary results have shown that its uncertainty is comparable to well-established techniques. In this paper the method is compared with the laser interferometric system regarding reliability, cost, and time efficiency.
An emulator for minimizing computer resources for finite element analysis
NASA Technical Reports Server (NTRS)
Melosh, R.; Utku, S.; Islam, M.; Salama, M.
1984-01-01
A computer code, SCOPE, has been developed for predicting the computer resources required for a given analysis code, computer hardware, and structural problem. The cost of running the code is a small fraction (about 3 percent) of the cost of performing the actual analysis. However, its accuracy in predicting the CPU and I/O resources depends intrinsically on the accuracy of calibration data that must be developed once for the computer hardware and the finite element analysis code of interest. Testing of the SCOPE code on the AMDAHL 470 V/8 computer and the ELAS finite element analysis program indicated small I/O errors (3.2 percent), larger CPU errors (17.8 percent), and negligible total errors (1.5 percent).
Cost-effectiveness of the stream-gaging program in New Jersey
Schopp, R.D.; Ulery, R.L.
1984-01-01
The results of a study of the cost-effectiveness of the stream-gaging program in New Jersey are documented. This study is part of a 5-year nationwide analysis undertaken by the U.S. Geological Survey to define and document the most cost-effective means of furnishing streamflow information. This report identifies the principal uses of the data and relates those uses to funding sources; applies, at selected stations, alternative less costly methods (that is, flow routing and regression analysis) for furnishing the data; and defines a strategy for operating the program which minimizes uncertainty in the streamflow data for specific operating budgets. Uncertainty in streamflow data is primarily a function of the percentage of missing record and the frequency of discharge measurements. In this report, 101 continuous stream gages and 73 crest-stage or stage-only gages are analyzed. A minimum budget of $548,000 is required to operate the present stream-gaging program in New Jersey with an average standard error of 27.6 percent. The maximum budget analyzed was $650,000, which resulted in an average standard error of 17.8 percent. The 1983 budget of $569,000 resulted in a standard error of 24.9 percent under present operating policy. (USGS)
Cost Risk Analysis Based on Perception of the Engineering Process
NASA Technical Reports Server (NTRS)
Dean, Edwin B.; Wood, Darrell A.; Moore, Arlene A.; Bogart, Edward H.
1986-01-01
In most cost estimating applications at the NASA Langley Research Center (LaRC), it is desirable to present predicted cost as a range of possible costs rather than a single predicted cost. A cost risk analysis generates a range of cost for a project and assigns a probability level to each cost value in the range. Constructing a cost risk curve requires a good estimate of the expected cost of a project. It must also include a good estimate of expected variance of the cost. Many cost risk analyses are based upon an expert's knowledge of the cost of similar projects in the past. In a common scenario, a manager or engineer, asked to estimate the cost of a project in his area of expertise, will gather historical cost data from a similar completed project. The cost of the completed project is adjusted using the perceived technical and economic differences between the two projects. This allows errors from at least three sources. The historical cost data may be in error by some unknown amount. The managers' evaluation of the new project and its similarity to the old project may be in error. The factors used to adjust the cost of the old project may not correctly reflect the differences. Some risk analyses are based on untested hypotheses about the form of the statistical distribution that underlies the distribution of possible cost. The usual problem is not just to come up with an estimate of the cost of a project, but to predict the range of values into which the cost may fall and with what level of confidence the prediction is made. Risk analysis techniques that assume the shape of the underlying cost distribution and derive the risk curve from a single estimate plus and minus some amount usually fail to take into account the actual magnitude of the uncertainty in cost due to technical factors in the project itself. This paper addresses a cost risk method that is based on parametric estimates of the technical factors involved in the project being costed. The engineering process parameters are elicited from the engineer/expert on the project and are based on that expert's technical knowledge. These are converted by a parametric cost model into a cost estimate. The method discussed makes no assumptions about the distribution underlying the distribution of possible costs, and is not tied to the analysis of previous projects, except through the expert calibrations performed by the parametric cost analyst.
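A minimal sketch of a cost-risk curve driven by uncertain engineering parameters rather than by an assumed cost distribution is shown below; the cost-estimating relationship, parameter ranges, and reported percentiles are illustrative assumptions, not the LaRC model described in the paper.

```python
import numpy as np

# Hypothetical sketch: propagate expert-elicited ranges of technical parameters
# through a parametric cost relation and read the risk curve off the empirical CDF.
rng = np.random.default_rng(42)
n = 100_000

mass_kg    = rng.triangular(180, 220, 300, n)    # elicited low / most likely / high
power_w    = rng.triangular(400, 500, 700, n)
complexity = rng.uniform(0.9, 1.4, n)            # dimensionless difficulty factor

# Illustrative cost-estimating relationship (not a real NASA CER):
cost_musd = 0.8 * mass_kg**0.7 * (power_w / 100.0)**0.4 * complexity

for p in (10, 50, 70, 90):
    print(f"P{p}: {np.percentile(cost_musd, p):.1f} $M")
# The empirical CDF of cost_musd is the risk curve: cost on the x-axis,
# probability of not exceeding that cost on the y-axis. No distributional
# shape is assumed for the cost itself.
```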
Emulating DC constant power load: a robust sliding mode control approach
NASA Astrophysics Data System (ADS)
Singh, Suresh; Fulwani, Deepak; Kumar, Vinod
2017-09-01
This article presents the emulation of a programmable power-electronic constant power load (CPL) using a dc/dc step-up (boost) converter. The converter is controlled by a robust sliding mode controller (SMC). A novel switching surface is proposed to ensure the required power sunk by the converter. The proposed dc CPL is simple in design, has a fast dynamic response and high accuracy, and offers an inexpensive alternative to study converters for cascaded dc distribution power system applications. Furthermore, the proposed CPL is sufficiently robust against input voltage variations. A laboratory prototype of the proposed dc CPL has been developed and validated, with the SMC realised through the OPAL-RT platform. The capability of the proposed dc CPL is confirmed via experiments in varied scenarios.
Improving hospital billing and receivables management: principles for profitability.
Hemmer, E
1992-01-01
For many hospitals, billing and receivables management are inefficient and costly. Economic recession, increasing costs for patient and provider alike, and cost-containment strategies will only compound difficulties. The author describes the foundations of an automated billing system that would save hospitals time, error, and, most importantly, money.
An Algebraic Approach to Guarantee Harmonic Balance Method Using Gröbner Base
NASA Astrophysics Data System (ADS)
Yagi, Masakazu; Hisakado, Takashi; Okumura, Kohshi
The harmonic balance (HB) method is a well-known principle for analyzing periodic oscillations in nonlinear networks and systems. Because the HB method has a truncation error, approximate solutions have been guaranteed by error bounds. However, numerical computation of such error bounds is very time-consuming compared with solving the HB equation. This paper proposes an algebraic representation of the error bound using a Gröbner base. The algebraic representation makes it possible to decrease the computational cost of the error bound considerably. Moreover, using singular points of the algebraic representation, we can obtain accurate break points of the error bound by collisions.
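For readers unfamiliar with Gröbner bases, the sketch below shows the kind of variable elimination they provide, using SymPy on a toy polynomial system; the system is hypothetical and is not the authors' HB error-bound formulation.

```python
from sympy import symbols, groebner

# Purely illustrative: suppose an HB residual condition couples an amplitude a,
# an error radius r, and a small parameter eps through polynomial relations.
a, r, eps = symbols('a r eps')

system = [a**2 + a*r - 1,      # truncated balance equation (toy)
          r**2 - eps*a]        # error-bound relation (toy)

# Lexicographic order with r first eliminates r: basis elements that do not
# contain r describe the relation between a and eps alone.
G = groebner(system, r, a, eps, order='lex')
print(G)
```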
Mukasa, Oscar; Mushi, Hildegalda P; Maire, Nicolas; Ross, Amanda; de Savigny, Don
2017-01-01
Data entry at the point of collection using mobile electronic devices may make data-handling processes more efficient and cost-effective, but there is little literature documenting and quantifying the gains, especially for longitudinal surveillance systems. The aim was to examine the potential of mobile electronic devices compared with paper-based tools for health data collection. Using data from 961 households from the Rufiji Household and Demographic Survey in Tanzania, the quality and costs of data collected on paper forms and electronic devices were compared. We also documented, using qualitative approaches, the views of field workers, whom we called 'enumerators', and household members on the use of both methods. Existing administrative records were combined with logistics expenditure measured directly from comparison households to approximate annual costs per 1,000 households surveyed. Errors were detected in 17% (166) of households for the paper records and 2% (15) for the electronic records (p < 0.001). There were differences in the types of errors (p = 0.03). Of the errors occurring, a higher proportion were related to accuracy in paper surveys (79%, 95% CI: 72%, 86%) compared with electronic surveys (58%, 95% CI: 29%, 87%). Errors in electronic surveys were more likely to be related to completeness (32%, 95% CI: 12%, 56%) than in paper surveys (11%, 95% CI: 7%, 17%). The median duration of the interviews ('enumeration') per household was 9.4 minutes (90% central range 6.4, 12.2) for paper and 8.3 (6.1, 12.0) for electronic surveys (p = 0.001). Surveys using electronic tools, compared with paper-based tools, were less costly by 28% for recurrent and 19% for total costs. Although there were technical problems with electronic devices, there was good acceptance of both methods by enumerators and members of the community. Our findings support the use of mobile electronic devices for large-scale longitudinal surveys in resource-limited settings.
Interdisciplinary Coordination Reviews: A Process to Reduce Construction Costs.
ERIC Educational Resources Information Center
Fewell, Dennis A.
1998-01-01
Interdisciplinary Coordination design review is instrumental in detecting coordination errors and omissions in construction documents. Cleansing construction documents of interdisciplinary coordination errors reduces time extensions, the largest source of change orders, and limits exposure to liability claims. Improving the quality of design…
Position Tracking During Human Walking Using an Integrated Wearable Sensing System.
Zizzo, Giulio; Ren, Lei
2017-12-10
Progress has been made enabling expensive, high-end inertial measurement units (IMUs) to be used as tracking sensors. However, the cost of these IMUs is prohibitive to their widespread use, and hence the potential of low-cost IMUs is investigated in this study. A wearable low-cost sensing system consisting of IMUs and ultrasound sensors was developed. Core to this system is an extended Kalman filter (EKF), which provides both zero-velocity updates (ZUPTs) and Heuristic Drift Reduction (HDR). The IMU data was combined with ultrasound range measurements to improve accuracy. When a map of the environment was available, a particle filter was used to impose constraints on the possible user motions. The system was therefore composed of three subsystems: IMUs, ultrasound sensors, and a particle filter. A Vicon motion capture system was used to provide ground truth information, enabling validation of the sensing system. Using only the IMU, the system showed loop misclosure errors of 1% with a maximum error of 4-5% during walking. The addition of the ultrasound sensors resulted in a 15% reduction in the total accumulated error. Lastly, the particle filter was capable of providing noticeable corrections, which could keep the tracking error below 2% after the first few steps.
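The zero-velocity update (ZUPT) at the heart of such pedestrian-tracking filters can be illustrated with a short simulation; the synthetic gait profile, accelerometer bias, and idealized stance detector below are assumptions for demonstration, and the paper's full EKF with Heuristic Drift Reduction is not reproduced.

```python
import numpy as np

# During the stance phase of gait the foot is (nearly) stationary, so the
# integrated velocity can be reset there, cancelling most accumulated bias drift.
fs = 100.0                                          # sample rate [Hz]
t = np.arange(0.0, 10.0, 1.0 / fs)
phase = t % 1.0                                     # 1 s gait cycle
true_vel = np.where(phase < 0.6, np.sin(np.pi * phase / 0.6) ** 2, 0.0)  # swing, then stance
true_acc = np.gradient(true_vel, 1.0 / fs)

acc_meas = true_acc + 0.05                          # constant accelerometer bias
stance = true_vel == 0.0                            # idealized stance detector

vel_raw = np.cumsum(acc_meas) / fs                  # naive integration: drifts
vel_zupt = np.empty_like(vel_raw)
v = 0.0
for i in range(t.size):
    v += acc_meas[i] / fs
    if stance[i]:
        v = 0.0                                     # zero-velocity update
    vel_zupt[i] = v

print(f"max speed error, raw integration: {np.max(np.abs(vel_raw - true_vel)):.3f} m/s")
print(f"max speed error, with ZUPT:       {np.max(np.abs(vel_zupt - true_vel)):.3f} m/s")
```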
Cost-effectiveness of the stream-gaging program in North Carolina
Mason, R.R.; Jackson, N.M.
1985-01-01
This report documents the results of a study of the cost-effectiveness of the stream-gaging program in North Carolina. Data uses and funding sources are identified for the 146 gaging stations currently operated in North Carolina with a budget of $777,600 (1984). As a result of the study, eleven stations are nominated for discontinuance and five for conversion from recording to partial-record status. Large parts of North Carolina's Coastal Plain are identified as having sparse streamflow data. This sparsity should be remedied as funds become available. Efforts should also be directed toward defining the effects of drainage improvements on local hydrology and streamflow characteristics. The average standard error of streamflow records in North Carolina is 18.6 percent. This level of accuracy could be improved without increasing cost by increasing the frequency of field visits and streamflow measurements at stations with high standard errors and reducing the frequency at stations with low standard errors. A minimum budget of $762,000 is required to operate the 146-gage program. A budget less than this does not permit proper service and maintenance of the gages and recorders. At the minimum budget, and with the optimum allocation of field visits, the average standard error is 17.6 percent.
Partially Overlapping Mechanisms of Language and Task Control in Young and Older Bilinguals
Weissberger, Gali H.; Wierenga, Christina E.; Bondi, Mark W.; Gollan, Tamar H.
2012-01-01
The current study tested the hypothesis that bilinguals rely on domain-general mechanisms of executive control to achieve language control by asking if linguistic and nonlinguistic switching tasks exhibit similar patterns of aging-related decline. Thirty young and 30 aging bilinguals completed a cued language-switching task and a cued color-shape switching task. Both tasks demonstrated significant aging effects, but aging-related slowing and the aging-related increase in errors were significantly larger on the color-shape than on the language task. In the language task, aging increased language-switching costs in both response times and errors, and language-mixing costs only in response times. In contrast, the color-shape task exhibited an aging-related increase in costs only in mixing errors. Additionally, a subset of the older bilinguals could not do the color-shape task, but were able to do the language task, and exhibited significantly larger language-switching costs than matched controls. These differences, and some subtle similarities, in aging effects observed across tasks imply that mechanisms of nonlinguistic task and language control are only partly shared and demonstrate relatively preserved language control in aging. More broadly, these data suggest that age deficits in switching and mixing costs may depend on task expertise, with mixing deficits emerging for less-practiced tasks and switching deficits for highly practiced, possibly “expert” tasks (i.e., language). PMID:22582883
Cost-effectiveness of the stream-gaging program in Kentucky
Ruhl, K.J.
1989-01-01
This report documents the results of a study of the cost-effectiveness of the stream-gaging program in Kentucky. The total surface-water program includes 97 daily-discharge stations, 12 stage-only stations, and 35 crest-stage stations and is operated on a budget of $950,700. One station used for research lacks an adequate source of funding and should be discontinued when the research ends. Most stations in the network are multiple-use, with 65 stations operated for the purpose of defining hydrologic systems, 48 for project operation, 47 for definition of regional hydrology, and 43 for hydrologic forecasting purposes. Eighteen stations support water-quality monitoring activities, one station is used for planning and design, and one station is used for research. The average standard error of estimation of streamflow records was determined only for stations in the Louisville Subdistrict. Under current operating policy, with a budget of $223,500, the average standard error of estimation is 28.5%. Altering the travel routes and measurement frequency to reduce the amount of lost stage record would allow a slight decrease in standard error to 26.9%. The results indicate that the collection of streamflow records in the Louisville Subdistrict is cost-effective in its present mode of operation. In the Louisville Subdistrict, a minimum budget of $214,200 is required to operate the current network at an average standard error of 32.7%. A budget less than this does not permit proper service and maintenance of the gages and recorders. The maximum budget analyzed was $268,200, which would result in an average standard error of 16.9%, indicating that if the budget were increased by 20%, the standard error would be reduced by 40%. (USGS)
Govindarajan, R; Llueguera, E; Melero, A; Molero, J; Soler, N; Rueda, C; Paradinas, C
2010-01-01
Statistical Process Control (SPC) was applied to monitor patient set-up in radiotherapy and, when the measured set-up error values indicated a loss of process stability, its root cause was identified and eliminated to prevent set-up errors. Set-up errors were measured for the medial-lateral (ml), cranial-caudal (cc), and anterior-posterior (ap) dimensions, and then the upper control limits were calculated. Once the control limits were known and the range variability was acceptable, treatment set-up errors were monitored using sub-groups of 3 patients, three times each shift. These values were plotted on a control chart in real time. Control limit values showed that the existing variation was acceptable. Set-up errors, measured and plotted on an X chart, helped monitor the set-up process stability and, if and when the stability was lost, treatment was interrupted, the particular cause responsible for the non-random pattern was identified, and corrective action was taken before proceeding with the treatment. The SPC protocol focuses on controlling the variability due to assignable causes instead of focusing on patient-to-patient variability, which normally does not exist. Compared to weekly sampling of set-up error in each and every patient, which may only ensure that just those sampled sessions were set up correctly, the SPC method enables set-up error prevention in all treatment sessions for all patients and, at the same time, reduces the control costs. Copyright © 2009 SECA. Published by Elsevier Espana. All rights reserved.
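A minimal sketch of the subgroup-of-three control charting described here is given below, using standard Shewhart X-bar/R constants for subgroups of size 3; the simulated set-up errors and the 30-subgroup baseline are hypothetical, not the clinic's data.

```python
import numpy as np

# X-bar/R control chart for one set-up dimension, with subgroups of 3 patients.
A2, D4, D3 = 1.023, 2.574, 0.0        # Shewhart constants for subgroup size n = 3

rng = np.random.default_rng(7)
subgroups = rng.normal(loc=0.0, scale=2.0, size=(30, 3))   # simulated errors [mm]

xbar = subgroups.mean(axis=1)
rng_ = subgroups.max(axis=1) - subgroups.min(axis=1)        # subgroup ranges
xbarbar, rbar = xbar.mean(), rng_.mean()

ucl_x, lcl_x = xbarbar + A2 * rbar, xbarbar - A2 * rbar
ucl_r, lcl_r = D4 * rbar, D3 * rbar

print(f"X-bar chart: centre {xbarbar:.2f} mm, limits [{lcl_x:.2f}, {ucl_x:.2f}] mm")
print(f"R chart:     centre {rbar:.2f} mm, limits [{lcl_r:.2f}, {ucl_r:.2f}] mm")

# Subgroups falling outside the limits signal an assignable cause and would
# trigger interruption of treatment and a root-cause search.
out = np.where((xbar > ucl_x) | (xbar < lcl_x) | (rng_ > ucl_r))[0]
print("subgroups out of control:", out)
```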
NASA Astrophysics Data System (ADS)
Kato, Takeyoshi; Sone, Akihito; Shimakage, Toyonari; Suzuoki, Yasuo
A microgrid (MG) is one of the measures for enabling high penetration of renewable energy (RE)-based distributed generators (DGs). For constructing a MG economically, the capacity optimization of controllable DGs against RE-based DGs is essential. Using a numerical simulation model developed from demonstrative studies on a MG with a PAFC and a NaS battery as controllable DGs and a photovoltaic power generation system (PVS) as an RE-based DG, this study discusses the influence of the forecast accuracy of PVS output on capacity optimization and daily operation, evaluated in terms of cost. The main results are as follows. The required capacity of the NaS battery must be increased by 10-40% relative to the ideal situation without forecast error in the PVS power output. The influence of forecast error on the received grid electricity would not be significant on an annual basis, because positive and negative forecast errors vary from day to day. The annual total cost of facilities and operation increases by 2-7% due to the forecast error applied in this study. The impact of forecast error on facility optimization and on operation optimization is almost the same, at a few percent each, implying that the forecast accuracy should be improved in terms of both the number of times with large forecast error and the average error.
Human Error and the International Space Station: Challenges and Triumphs in Science Operations
NASA Technical Reports Server (NTRS)
Harris, Samantha S.; Simpson, Beau C.
2016-01-01
Any system with a human component is inherently risky. Studies in human factors and psychology have repeatedly shown that human operators will inevitably make errors, regardless of how well they are trained. Onboard the International Space Station (ISS) where crew time is arguably the most valuable resource, errors by the crew or ground operators can be costly to critical science objectives. Operations experts at the ISS Payload Operations Integration Center (POIC), located at NASA's Marshall Space Flight Center in Huntsville, Alabama, have learned that from payload concept development through execution, there are countless opportunities to introduce errors that can potentially result in costly losses of crew time and science. To effectively address this challenge, we must approach the design, testing, and operation processes with two specific goals in mind. First, a systematic approach to error and human centered design methodology should be implemented to minimize opportunities for user error. Second, we must assume that human errors will be made and enable rapid identification and recoverability when they occur. While a systematic approach and human centered development process can go a long way toward eliminating error, the complete exclusion of operator error is not a reasonable expectation. The ISS environment in particular poses challenging conditions, especially for flight controllers and astronauts. Operating a scientific laboratory 250 miles above the Earth is a complicated and dangerous task with high stakes and a steep learning curve. While human error is a reality that may never be fully eliminated, smart implementation of carefully chosen tools and techniques can go a long way toward minimizing risk and increasing the efficiency of NASA's space science operations.
da Silva, Brianna A; Krishnamurthy, Mahesh
2016-01-01
A 71-year-old female accidentally received thiothixene (Navane), an antipsychotic, instead of her anti-hypertensive medication amlodipine (Norvasc) for 3 months. She sustained physical and psychological harm including ambulatory dysfunction, tremors, mood swings, and personality changes. Despite the many opportunities for intervention, multiple health care providers overlooked her symptoms. Errors occurred at multiple care levels, including prescribing, initial pharmacy dispensation, hospitalization, and subsequent outpatient follow-up. This exemplifies the Swiss Cheese Model of how errors can occur within a system. Adverse drug events (ADEs) account for more than 3.5 million physician office visits and 1 million emergency department visits each year. It is believed that preventable medication errors impact more than 7 million patients and cost almost $21 billion annually across all care settings. About 30% of hospitalized patients have at least one discrepancy on discharge medication reconciliation. Medication errors and ADEs are an underreported burden that adversely affects patients, providers, and the economy. Medication reconciliation including an 'indication review' for each prescription is an important aspect of patient safety. The decreasing frequency of pill bottle reviews, suboptimal patient education, and poor communication between healthcare providers are factors that threaten patient safety. Medication error and ADEs cost billions of health care dollars and are detrimental to the provider-patient relationship.
NASA Astrophysics Data System (ADS)
Greenough, J. A.; Rider, W. J.
2004-05-01
A numerical study is undertaken comparing a fifth-order version of the weighted essentially non-oscillatory (WENO5) method to a modern piecewise-linear, second-order version of Godunov's (PLMDE) method for the compressible Euler equations. A series of one-dimensional test problems are examined, beginning with classical linear problems and ending with complex shock interactions. The problems considered are: (1) linear advection of a Gaussian pulse in density, (2) Sod's shock tube problem, (3) the "peak" shock tube problem, (4) a version of the Shu and Osher shock entropy wave interaction, and (5) the Woodward and Colella interacting shock wave problem. For each problem and method, run times, density error norms, and convergence rates are reported, as produced from a common code test-bed. The linear problem exhibits the advertised convergence rate for both methods as well as the expected large disparity in overall error levels; WENO5 has the smaller errors and an enormous advantage in overall efficiency (in accuracy per unit CPU time). For the nonlinear problems with discontinuities, however, we generally see first-order self-convergence of error, measured against an exact solution or, when an analytic solution is not available, against a converged solution generated on an extremely fine grid. The overall comparison of error levels shows some variation from problem to problem. For Sod's shock tube, PLMDE has nearly half the error, while on the peak problem the errors are nearly the same. For the interacting blast wave problem the two methods again produce a similar level of error, with a slight edge for PLMDE. On the other hand, for the Shu-Osher problem, the errors are similar on the coarser grids but favor WENO5 by a factor of nearly 1.5 on the finer grids used. In all cases, holding mesh resolution constant, PLMDE is less costly in terms of CPU time by approximately a factor of 6. If the CPU cost is taken as fixed, that is, run times are equal for both numerical methods, then PLMDE uniformly produces lower errors than WENO5 for the fixed computation cost on the test problems considered here.
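For context, the sketch below implements the classical Jiang-Shu fifth-order WENO reconstruction of an interface value from five cell averages; it is generic WENO5 machinery offered as an illustration, not the specific code test-bed used in the study.

```python
import numpy as np

def weno5_reconstruct(fm2, fm1, f0, fp1, fp2, eps=1e-6):
    """Reconstruct the value at the i+1/2 interface from cell averages f_{i-2..i+2}."""
    # Smoothness indicators of the three candidate stencils
    b0 = 13/12 * (fm2 - 2*fm1 + f0)**2 + 1/4 * (fm2 - 4*fm1 + 3*f0)**2
    b1 = 13/12 * (fm1 - 2*f0 + fp1)**2 + 1/4 * (fm1 - fp1)**2
    b2 = 13/12 * (f0 - 2*fp1 + fp2)**2 + 1/4 * (3*f0 - 4*fp1 + fp2)**2
    # Third-order candidate reconstructions
    q0 = (2*fm2 - 7*fm1 + 11*f0) / 6
    q1 = ( -fm1 + 5*f0  +  2*fp1) / 6
    q2 = (2*f0  + 5*fp1 -    fp2) / 6
    # Nonlinear weights built from the ideal linear weights (1/10, 6/10, 3/10)
    a0, a1, a2 = 0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2
    s = a0 + a1 + a2
    return (a0*q0 + a1*q1 + a2*q2) / s

# Smooth-data check: reconstruct sin at an interface from its exact cell averages.
xc = np.linspace(0.0, 2*np.pi, 201)                  # cell centres
h = xc[1] - xc[0]
fbar = (np.cos(xc - h/2) - np.cos(xc + h/2)) / h     # exact cell averages of sin
i = 100
approx = weno5_reconstruct(fbar[i-2], fbar[i-1], fbar[i], fbar[i+1], fbar[i+2])
exact = np.sin(xc[i] + h/2)
print(f"reconstructed {approx:.10f}, exact {exact:.10f}, error {abs(approx - exact):.2e}")
```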
Cost effectiveness of ergonomic redesign of electronic motherboard.
Sen, Rabindra Nath; Yeow, Paul H P
2003-09-01
A case study illustrating the cost-effectiveness of the ergonomic redesign of an electronic motherboard is presented. The factory was running at a loss due to the high costs of rejects and poor quality and productivity. Subjective assessments and direct observations were made in the factory. Investigation revealed that, due to motherboard design errors, the machine had difficulty in placing integrated circuits onto the pads, the operators had much difficulty in manually soldering certain components, and much unproductive manual cleaning (MC) was required. Consequently, there were high rejects and occupational health and safety (OHS) problems, such as boredom and work discomfort. Also, much labour and machine cost was spent on repairs. The motherboard was redesigned to correct the design errors, to allow more components to be machine soldered, and to reduce MC. This eliminated rejects, reduced repairs, saved US$581,495/year, and improved operators' OHS. The customer also saved US$142,105/year on loss of business.
Forecasting Construction Cost Index based on visibility graph: A network approach
NASA Astrophysics Data System (ADS)
Zhang, Rong; Ashuri, Baabak; Shyr, Yu; Deng, Yong
2018-03-01
Engineering News-Record (ENR), a professional magazine in the field of global construction engineering, publishes the Construction Cost Index (CCI) every month. Cost estimators and contractors assess projects, arrange budgets, and prepare bids by forecasting CCI. However, fluctuations and uncertainties in CCI occasionally lead to unreliable estimates. This paper aims at achieving more accurate predictions of CCI based on a network approach in which the time series is first converted into a visibility graph and future values are forecast based on link prediction. According to the experimental results, the proposed method shows satisfactory performance, since the error measures are acceptable. Compared with other methods, the proposed method is easier to implement and is able to forecast CCI with smaller errors. The results indicate that the proposed method is efficient in providing considerably accurate CCI predictions, which will contribute to construction engineering by assisting individuals and organizations in reducing costs and making project schedules.
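The first step of the proposed approach, converting a time series into a natural visibility graph, can be sketched in a few lines; the random walk standing in for monthly CCI values is hypothetical, and the link-prediction forecasting stage is not reproduced here.

```python
import numpy as np

def visibility_graph(y):
    """Natural visibility graph: nodes are time points; (a, b) is an edge when no
    intermediate point rises above the straight sight line between them."""
    n = len(y)
    edges = set()
    for a in range(n):
        for b in range(a + 1, n):
            visible = True
            for c in range(a + 1, b):
                # Height of the a-b sight line at position c
                line = y[b] + (y[a] - y[b]) * (b - c) / (b - a)
                if y[c] >= line:
                    visible = False
                    break
            if visible:
                edges.add((a, b))
    return edges

rng = np.random.default_rng(3)
series = rng.normal(size=30).cumsum()          # stand-in for monthly CCI values
g = visibility_graph(series)
degree = np.zeros(len(series), int)
for a, b in g:
    degree[a] += 1
    degree[b] += 1
print(f"{len(g)} edges; max node degree {degree.max()}")
```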
NASA Astrophysics Data System (ADS)
Saga, R. S.; Jauhari, W. A.; Laksono, P. W.
2017-11-01
This paper presents an integrated inventory model consisting of a single vendor and a single buyer. The buyer manages its inventory periodically and orders products from the vendor to satisfy the end customer's demand, where the annual demand and the ordering cost are treated as fuzzy quantities. The buyer uses a service-level constraint instead of a stock-out cost term, so that the stock-out level per cycle is bounded. The vendor then produces and delivers products to the buyer. The vendor has the option of committing an investment to reduce the setup cost. However, the vendor's production process is imperfect, so the delivered lot contains some defective products. Moreover, the buyer's inspection process is not error-free, since the inspector may misclassify product quality. The objective is to find the optimum values of the review period, the setup cost, and the number of deliveries in one production cycle that minimize the joint total cost. Furthermore, an algorithm and a numerical example are provided to illustrate the application of the model.
The Effect of N-3 on N-2 Repetition Costs in Task Switching
ERIC Educational Resources Information Center
Schuch, Stefanie; Grange, James A.
2015-01-01
N-2 task repetition cost is a response time and error cost incurred when returning to a task recently performed after one intervening trial (i.e., an ABA task sequence), compared with returning to a task not recently performed (i.e., a CBA task sequence). This cost is considered a robust measure of inhibitory control during task switching. The present article…
Continuous Process Improvement Transformation Guidebook
2006-05-01
except full-scale implementation. Error Proofing (Poka Yoke): Finding and correcting defects caused by errors costs more and more as a system or...proofing. Shigeo Shingo introduced the concept of Poka-Yoke at Toyota Motor Corporation. Poka Yoke (pronounced “poh-kah yoh-kay”) translates to “avoid
A Kalman Filter Implementation for Precision Improvement in Low-Cost GPS Positioning of Tractors
Gomez-Gil, Jaime; Ruiz-Gonzalez, Ruben; Alonso-Garcia, Sergio; Gomez-Gil, Francisco Javier
2013-01-01
Low-cost GPS receivers provide geodetic positioning information using the NMEA protocol, usually with eight digits for latitude and nine digits for longitude. When these geodetic coordinates are converted into Cartesian coordinates, the positions fit in a quantization grid of some decimeters in size, the dimensions of which vary depending on the point of the terrestrial surface. The aim of this study is to reduce the quantization errors of some low-cost GPS receivers by using a Kalman filter. Kinematic tractor model equations were employed to particularize the filter, which was tuned by applying Monte Carlo techniques to eighteen straight trajectories, to select the covariance matrices that produced the lowest Root Mean Square Error in these trajectories. Filter performance was tested by using straight tractor paths, which were either simulated or real trajectories acquired by a GPS receiver. The results show that the filter can reduce the quantization error in distance by around 43%. Moreover, it reduces the standard deviation of the heading by 75%. Data suggest that the proposed filter can satisfactorily preprocess the low-cost GPS receiver data when used in an assistance guidance GPS system for tractors. It could also be useful to smooth tractor GPS trajectories that are sharpened when the tractor moves over rough terrain. PMID:24217355
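A bare-bones illustration of Kalman filtering quantized GPS fixes is given below, using a generic constant-velocity state model; the paper's kinematic tractor-model equations and Monte Carlo-tuned covariances are not reproduced, and all noise settings here are assumptions.

```python
import numpy as np

# Constant-velocity Kalman filter smoothing quantized 2-D GPS positions.
dt = 1.0                                    # GPS update period [s]
F = np.array([[1, 0, dt, 0],                # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], float)
Q = np.diag([1e-4, 1e-4, 1e-3, 1e-3])       # process noise (tuning choice)
R = np.diag([0.3**2, 0.3**2])               # ~decimetre-level quantization noise

rng = np.random.default_rng(0)
true_xy = np.stack([np.linspace(0, 50, 51), np.linspace(0, 25, 51)], axis=1)
meas = np.round((true_xy + rng.normal(0, 0.05, true_xy.shape)) / 0.3) * 0.3  # quantized fixes

x = np.array([meas[0, 0], meas[0, 1], 0.0, 0.0])
P = np.eye(4)
filtered = []
for z in meas:
    x, P = F @ x, F @ P @ F.T + Q                      # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                     # Kalman gain
    x = x + K @ (z - H @ x)                            # update
    P = (np.eye(4) - K @ H) @ P
    filtered.append(x[:2])
filtered = np.array(filtered)

rmse_meas = np.sqrt(np.mean(np.sum((meas - true_xy) ** 2, axis=1)))
rmse_filt = np.sqrt(np.mean(np.sum((filtered - true_xy) ** 2, axis=1)))
print(f"RMSE raw fixes: {rmse_meas:.3f} m, filtered: {rmse_filt:.3f} m")
```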
Impact of Robotic Antineoplastic Preparation on Safety, Workflow, and Costs
Seger, Andrew C.; Churchill, William W.; Keohane, Carol A.; Belisle, Caryn D.; Wong, Stephanie T.; Sylvester, Katelyn W.; Chesnick, Megan A.; Burdick, Elisabeth; Wien, Matt F.; Cotugno, Michael C.; Bates, David W.; Rothschild, Jeffrey M.
2012-01-01
Purpose: Antineoplastic preparation presents unique safety concerns and consumes significant pharmacy staff time and costs. Robotic antineoplastic and adjuvant medication compounding may provide incremental safety and efficiency advantages compared with standard pharmacy practices. Methods: We conducted a direct observation trial in an academic medical center pharmacy to compare the effects of usual/manual antineoplastic and adjuvant drug preparation (baseline period) with robotic preparation (intervention period). The primary outcomes were serious medication errors and staff safety events with the potential for harm of patients and staff, respectively. Secondary outcomes included medication accuracy determined by gravimetric techniques, medication preparation time, and the costs of both ancillary materials used during drug preparation and personnel time. Results: Among 1,421 and 972 observed medication preparations, we found nine (0.7%) and seven (0.7%) serious medication errors (P = .8) and 73 (5.1%) and 28 (2.9%) staff safety events (P = .007) in the baseline and intervention periods, respectively. Drugs failed accuracy measurements in 12.5% (23 of 184) and 0.9% (one of 110) of preparations in the baseline and intervention periods, respectively (P < .001). Mean drug preparation time increased by 47% when using the robot (P = .009). Labor costs were similar in both study periods, although the ancillary material costs decreased by 56% in the intervention period (P < .001). Conclusion: Although robotically prepared antineoplastic and adjuvant medications did not reduce serious medication errors, both staff safety and accuracy of medication preparation were improved significantly. Future studies are necessary to address the overall cost effectiveness of these robotic implementations. PMID:23598843
A measurement system for large, complex software programs
NASA Technical Reports Server (NTRS)
Rone, Kyle Y.; Olson, Kitty M.; Davis, Nathan E.
1994-01-01
This paper describes measurement systems required to forecast, measure, and control activities for large, complex software development and support programs. Initial software cost and quality analysis provides the foundation for meaningful management decisions as a project evolves. In modeling the cost and quality of software systems, the relationship between the functionality, quality, cost, and schedule of the product must be considered. This explicit relationship is dictated by the criticality of the software being developed. This balance between cost and quality is a viable software engineering trade-off throughout the life cycle. Therefore, the ability to accurately estimate the cost and quality of software systems is essential to providing reliable software on time and within budget. Software cost models relate the product error rate to the percent of the project labor that is required for independent verification and validation. The criticality of the software determines which cost model is used to estimate the labor required to develop the software. Software quality models yield an expected error discovery rate based on the software size, criticality, software development environment, and the level of competence of the project and developers with respect to the processes being employed.
McCabe, Patricia J
2018-06-01
Evidence-based practice (EBP) is a well-accepted theoretical framework around which speech-language pathologists strive to build their clinical decisions. The profession's conceptualisation of EBP has been evolving over the last 20 years with the practice of EBP now needing to balance research evidence, clinical data and informed patient choices. However, although EBP is not a new concept, as a profession, we seem to be no closer to closing the gap between research evidence and practice than we were at the start of the movement toward EBP in the late 1990s. This paper examines why speech-language pathologists find it difficult to change our own practice when we are experts in changing the behaviour of others. Using the lens of behavioural economics to examine the heuristics and cognitive processes which facilitate and inhibit change, the paper explores research showing how inconsistency of belief and action, or cognitive dissonance, is inevitable unless we act reflectively instead of automatically. The paper argues that heuristics that prevent us changing our practice toward EBP include the sunk cost fallacy, loss aversion, social desirability bias, choice overload and inertia. These automatic cognitive processes work to inhibit change and may partially account for the slow translation of research into practice. Fortunately, understanding and using other heuristics such as the framing effect, reciprocity, social proof, consistency and commitment may help us to understand our own behaviour as speech-language pathologists and help the profession, and those we work with, move towards EBP.
A current review of high speed railways experiences in Asia and Europe
NASA Astrophysics Data System (ADS)
Purba, Aleksander; Nakamura, Fumihiko; Dwsbu, Chatarina Niken; Jafri, Muhammad; Pratomo, Priyo
2017-11-01
High-Speed Railway (HSR) is currently regarded as one of the most significant technological breakthroughs in passenger transportation developed in the second half of the 20th century. At the beginning of 2008, there were about 10,000 kilometers of new high-speed lines in operation in Asia and Europe, providing high-speed services to passengers willing to pay for lower travel times and quality improvements in rail transport. Since 2010, HSR has received a great deal of attention in Indonesia. Some transportation analysts contend that Indonesia, particularly the islands of Java and Sumatera, needs a high-speed rail network to be economically competitive with countries in Asia and Europe. In April 2016, the Indonesia-China consortium Kereta Cepat Indonesia China (KCIC) signed an engineering, procurement, and construction contract to build the HSR with a consortium of seven companies called the High-Speed Railway Contractor Consortium. The HSR is expected to debut by May 2019, offering a 45-minute trip covering a roughly 150 km route. However, building, maintaining, and operating an HSR line is expensive; it involves a significant amount of sunk costs and may substantially compromise both the transport policy of a country and the development of its transport sector for decades. The main objective of this paper is to discuss some characteristics of HSR services from an economic viewpoint, while simultaneously developing an empirical framework that should help us understand, in more detail, the factors determining the success of HSR as a transport alternative, based on the current experiences of selected Asian and European countries.
Effect of co-payment on behavioral response to consumer genomic testing.
Liu, Wendy; Outlaw, Jessica J; Wineinger, Nathan; Boeldt, Debra; Bloss, Cinnamon S
2018-01-29
Existing research in consumer behavior suggests that perceptions and usage of a product post-purchase depends, in part, on how the product was marketed, including price paid. In the current study, we examine the effect of providing an out-of-pocket co-payment for consumer genomic testing (CGT) on consumer post-purchase behavior using both correlational field evidence and a hypothetical online experiment. Participants were enrolled in a longitudinal cohort study of the impact of CGT and completed behavioral assessments before and after receipt of CGT results. Most participants provided a co-payment for the test (N = 1668), while others (N = 369) received fully subsidized testing. The two groups were compared regarding changes in health behaviors and post-test use of health care resources. Participants who paid were more likely to share results with their physician (p = .012) and obtain follow-up health screenings (p = .005) relative to those who received fully subsidized testing. A follow-up online experiment in which participants (N = 303) were randomized to a "fully-subsidized" versus "co-payment" condition found that simulating provision of a co-payment significantly increased intentions to seek follow-up screening tests (p = .050) and perceptions of the test results as more trustworthy (p = .02). Provision of an out-of-pocket co-payment for CGT may influence consumer's post-purchase behavior consistent with a price placebo effect. Cognitive dissonance or sunk cost may help explain the increase in screening propensity among paying consumers. Such individuals may obtain follow-up screenings to validate their initial decision to expend personal resources to obtain CGT. © Society of Behavioral Medicine 2018.
Cost-effectiveness of the stream-gaging program in Nebraska
Engel, G.B.; Wahl, K.L.; Boohar, J.A.
1984-01-01
This report documents the results of a study of the cost-effectiveness of the streamflow information program in Nebraska. Presently, 145 continuous surface-water stations are operated in Nebraska on a budget of $908,500. Data uses and funding sources are identified for each of the 145 stations. Data from most stations have multiple uses. All stations have sufficient justification for continuation, but two stations primarily are used in short-term research studies; their continued operation needs to be evaluated when the research studies end. The present measurement frequency produces an average standard error for instantaneous discharges of about 12 percent, including periods when stage data are missing. Altering the travel routes and the measurement frequency will allow a reduction in standard error of about 1 percent with the present budget. Standard error could be reduced to about 8 percent if lost record could be eliminated. A minimum budget of $822,000 is required to operate the present network, but operations at that funding level would result in an increase in standard error to about 16 percent. The maximum budget analyzed was $1,363,000, which would result in an average standard error of 6 percent. (USGS)
Emergency nurse practitioners: a three part study in clinical and cost effectiveness
Sakr, M; Kendall, R; Angus, J; Saunders, A; Nicholl, J; Wardrope, J
2003-01-01
Aims: To compare the clinical effectiveness and costs of minor injury services provided by nurse practitioners with minor injury care provided by an accident and emergency (A&E) department. Methods: A three part prospective study in a city where an A&E department was closing and being replaced by a nurse led minor injury unit (MIU). The first part of the study took a sample of patients attending the A&E department. The second part of the study was a sample of patients from a nurse led MIU that had replaced the A&E department. In each of these samples the clinical effectiveness was judged by comparing the "gold standard" of a research assessment with the clinical assessment. Primary outcome measures were the number of errors in clinical assessment, treatment, and disposal. The third part of the study used routine data whose collection had been prospectively configured to assess the costs and cost consequences of both models of care. Results: The minor injury unit produced a safe service where the total package of care was equal to or in some cases better than the A&E care. Significant process errors were made in 191 of 1447 (13.2%) patients treated by medical staff in the A&E department and in 126 of 1313 (9.6%) patients treated by nurse practitioners in the MIU. Very significant errors were rare (one error). Waiting times were much better at the MIU (mean MIU 19 minutes, A&E department 56.4 minutes). The revenue costs were greater in the MIU (MIU £41.1, A&E department £40.01), and there was a large difference in the rates of follow up, with the nurses referring 47% of patients for follow up and the A&E department referring only 27%. Thus the costs and cost consequences were greater for MIU care compared with A&E care (MIU £12.7 per minor injury case, A&E department £9.66 per minor injury case). Conclusion: A nurse practitioner minor injury service can provide a safe and effective service for the treatment of minor injury. However, the costs of such a service are greater and there seems to be an increased use of outpatient services. PMID:12642530
Rhythmic chaos: irregularities of computer ECG diagnosis.
Wang, Yi-Ting Laureen; Seow, Swee-Chong; Singh, Devinder; Poh, Kian-Keong; Chai, Ping
2017-09-01
Diagnostic errors can occur when physicians rely solely on computer electrocardiogram interpretation. Cardiologists often receive referrals for computer misdiagnoses of atrial fibrillation. Patients may have been inappropriately anticoagulated for pseudo atrial fibrillation. Anticoagulation carries significant risks, and such errors may carry a high cost. Have we become overreliant on machines and technology? In this article, we illustrate three such cases and briefly discuss how we can reduce these errors. Copyright: © Singapore Medical Association.
Cost-effective surgical registration using consumer depth cameras
NASA Astrophysics Data System (ADS)
Potter, Michael; Yaniv, Ziv
2016-03-01
The high costs associated with technological innovation have been previously identified as both a major contributor to the rise of health care expenses, and as a limitation for widespread adoption of new technologies. In this work we evaluate the use of two consumer grade depth cameras, the Microsoft Kinect v1 and 3DSystems Sense, as a means for acquiring point clouds for registration. These devices have the potential to replace professional grade laser range scanning devices in medical interventions that do not require sub-millimetric registration accuracy, and may do so at a significantly reduced cost. To facilitate the use of these devices we have developed a near real-time (1-4 sec/frame) rigid registration framework combining several alignment heuristics with the Iterative Closest Point (ICP) algorithm. Using nearest neighbor registration error as our evaluation criterion we found the optimal scanning distances for the Sense and Kinect to be 50-60cm and 70-80cm respectively. When imaging a skull phantom at these distances, RMS error values of 1.35mm and 1.14mm were obtained. The registration framework was then evaluated using cranial MR scans of two subjects. For the first subject, the RMS error using the Sense was 1.28 +/- 0.01 mm. Using the Kinect this error was 1.24 +/- 0.03 mm. For the second subject, whose MR scan was significantly corrupted by metal implants, the errors increased to 1.44 +/- 0.03 mm and 1.74 +/- 0.06 mm but the system nonetheless performed within acceptable bounds.
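As an illustration of the evaluation described above, the following is a minimal sketch (not the authors' implementation) of point-to-point ICP registration and the nearest-neighbor RMS error metric, assuming the depth-camera and reference point clouds are already available as NumPy arrays; all function names are hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

def rms_nn_error(moving_pts, reference_pts, R, t):
    """RMS nearest-neighbor residual of a transformed point cloud
    against a reference cloud (e.g., points sampled from an MR surface)."""
    transformed = moving_pts @ R.T + t
    dists, _ = cKDTree(reference_pts).query(transformed)
    return np.sqrt(np.mean(dists ** 2))

def best_fit_rigid(src, dst):
    """Least-squares rigid transform (Kabsch, no scaling) mapping src onto dst."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dst.mean(0) - src.mean(0) @ R.T
    return R, t

def icp(moving_pts, reference_pts, n_iter=30):
    """Plain point-to-point ICP: alternate correspondence search and rigid fitting."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(reference_pts)
    for _ in range(n_iter):
        cur = moving_pts @ R.T + t
        _, idx = tree.query(cur)                     # nearest-neighbor correspondences
        R, t = best_fit_rigid(moving_pts, reference_pts[idx])
    return R, t, rms_nn_error(moving_pts, reference_pts, R, t)
```

In practice the alignment heuristics mentioned in the abstract would supply the initial pose; plain ICP as sketched here only converges from a reasonable starting alignment.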
NASA Technical Reports Server (NTRS)
Remer, D. S.
1977-01-01
The described mathematical model calculates life-cycle costs for projects with operating costs increasing or decreasing linearly with time. The cost factors involved in the life-cycle cost are considered, and the errors resulting from the assumption of constant rather than uniformly varying operating costs are examined. Parameters in the study range from 2 to 30 years, for project life; 0 to 15% per year, for interest rate; and 5 to 90% of the initial operating cost, for the operating cost gradient. A numerical example is presented.
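The trade-off described above can be reproduced with standard engineering-economics present-worth factors. The sketch below (illustrative parameter values, not those of the report) compares the life-cycle operating cost computed with a linear cost gradient against the constant-cost approximation:

```python
def pw_uniform(A, i, n):
    """Present worth of a uniform annual cost A over n years at interest rate i (P/A factor)."""
    return A * ((1 + i) ** n - 1) / (i * (1 + i) ** n)

def pw_gradient(G, i, n):
    """Present worth of an arithmetic gradient G (cost grows by G each year, starting in year 2)."""
    return G * ((1 + i) ** n - i * n - 1) / (i ** 2 * (1 + i) ** n)

# Hypothetical example: initial operating cost 100, growing by 10% of that per year,
# 20-year project life, 8% interest rate.
C0, g, n, i = 100.0, 0.10, 20, 0.08
exact = pw_uniform(C0, i, n) + pw_gradient(g * C0, i, n)   # linearly increasing operating costs
approx = pw_uniform(C0, i, n)                              # constant-cost assumption
print(exact, approx, (approx - exact) / exact)             # relative error of the approximation
```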
Stereotype threat can reduce older adults' memory errors.
Barber, Sarah J; Mather, Mara
2013-01-01
Stereotype threat often incurs the cost of reducing the amount of information that older adults accurately recall. In the current research, we tested whether stereotype threat can also benefit memory. According to the regulatory focus account of stereotype threat, threat induces a prevention focus in which people become concerned with avoiding errors of commission and are sensitive to the presence or absence of losses within their environment. Because of this, we predicted that stereotype threat might reduce older adults' memory errors. Results were consistent with this prediction. Older adults under stereotype threat had lower intrusion rates during free-recall tests (Experiments 1 and 2). They also reduced their false alarms and adopted more conservative response criteria during a recognition test (Experiment 2). Thus, stereotype threat can decrease older adults' false memories, albeit at the cost of fewer veridical memories, as well.
2016-04-30
costs of new defense systems. An inappropriate price index can introduce errors in both development of cost estimating relationships (CERs) and in...indexes derived from CERs. These indexes isolate changes in price due to factors other than changes in quality over time. We develop a "Baseline" CER...The hedonic index application has commonalities with cost estimating relationships (CERs), which also model system costs as a function of quality
Qureshi, N A; Neyaz, Y; Khoja, T; Magzoub, M A; Haycox, A; Walley, T
2011-02-01
Medication errors are a problem of enormous global magnitude, associated with high morbidity and mortality as well as substantial costs and legal problems. Medication errors are caused by multiple factors related to health providers, consumers and the health system, but most prescribing errors are preventable. This paper is the third of 3 review articles that form the background for a series of 5 interconnected studies of prescribing patterns and medication errors in the public and private primary health care sectors of Saudi Arabia. A MEDLINE search was conducted to identify papers published in peer-reviewed journals over the previous 3 decades. The paper reviews the etiology, prevention strategies, reporting mechanisms and the myriad consequences of medication errors.
An Evaluation of the Utility and Cost of Computerized Library Catalogs. Final Report.
ERIC Educational Resources Information Center
Dolby, J.L.; And Others
This study analyzes the basic cost factors in the automation of library catalogs, with a separate examination of the influence of typography on the cost of printed catalogs and the use of efficient automatic error detection procedures in processing bibliographic records. The utility of automated catalogs is also studied, based on data from a…
The cost of adherence mismeasurement in serious mental illness: a claims-based analysis.
Shafrin, Jason; Forma, Felicia; Scherer, Ethan; Hatch, Ainslie; Vytlacil, Edward; Lakdawalla, Darius
2017-05-01
To quantify how adherence mismeasurement affects the estimated impact of adherence on inpatient costs among patients with serious mental illness (SMI). Proportion of days covered (PDC) is a common claims-based measure of medication adherence. Because PDC does not measure medication ingestion, however, it may inaccurately measure adherence. We derived a formula to correct the bias that occurs in adherence-utilization studies resulting from errors in claims-based measures of adherence. We conducted a literature review to identify the correlation between gold-standard and claims-based adherence measures. We derived a bias-correction methodology to address claims-based medication adherence measurement error. We then applied this methodology to a case study of patients with SMI who initiated atypical antipsychotics in 2 large claims databases. Our literature review identified 6 studies of interest. The 4 most relevant ones measured correlations between 0.38 and 0.91. Our preferred estimate implies that the effect of adherence on inpatient spending estimated from claims data would understate the true effect by a factor of 5.3, if there were no other sources of bias. Although our procedure corrects for measurement error, such error also may amplify or mitigate other potential biases. For instance, if adherent patients are healthier than nonadherent ones, measurement error makes the resulting bias worse. On the other hand, if adherent patients are sicker, measurement error mitigates the other bias. Measurement error due to claims-based adherence measures is worth addressing, alongside other more widely emphasized sources of bias in inference.
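The understatement factor quoted above is consistent with the classical attenuation result for regression on an error-prone covariate. The relation below is a textbook sketch of that result (the notation is mine, and the paper's own bias-correction formula may account for additional structure):

```latex
% Classical attenuation under non-differential measurement error:
% regressing spending on a noisy adherence proxy a* instead of true adherence a
% shrinks the coefficient toward zero by the reliability ratio lambda.
\hat{\beta}_{\text{claims}} \;\xrightarrow{p}\; \lambda\,\beta_{\text{true}},
\qquad
\lambda \;=\; \frac{\operatorname{Var}(a)}{\operatorname{Var}(a^{*})} \;\approx\; \rho^{2},
\qquad
\beta_{\text{true}} \;\approx\; \frac{\hat{\beta}_{\text{claims}}}{\rho^{2}} .
% With a gold-standard/claims correlation of roughly rho = 0.43, the correction
% factor 1/rho^2 is about 5.3, consistent with the understatement factor cited above.
```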
Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bokanowski, Olivier, E-mail: boka@math.jussieu.fr; Picarelli, Athena, E-mail: athena.picarelli@inria.fr; Zidani, Hasnaa, E-mail: hasnaa.zidani@ensta.fr
2015-02-15
This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachable analysis are included to illustrate our approach.
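A schematic of the kind of boundary-value problem involved (my notation and simplifications, not the paper's exact formulation): augmenting the state with the running maximum y and writing the value function v(t, x, y) gives

```latex
% Schematic only: standard HJB equation in the interior and an oblique
% (here Neumann-type) derivative condition on the moving boundary.
\begin{aligned}
 -\partial_t v + H\!\left(t, x, D_x v, D_x^2 v\right) &= 0 && \text{in } \{\, g(x) < y \,\},\\
 -\partial_y v &= 0 && \text{on } \{\, g(x) = y \,\},\\
 v(T, x, y) &= \varphi(y) && \text{(terminal running-maximum cost).}
\end{aligned}
```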
Gonzalez, Claudia C; Mon-Williams, Mark; Burke, Melanie R
2015-01-01
Numerous activities require an individual to respond quickly to the correct stimulus. The provision of advance information allows response priming but heightened responses can cause errors (responding too early or reacting to the wrong stimulus). Thus, a balance is required between the online cognitive mechanisms (inhibitory and anticipatory) used to prepare and execute a motor response at the appropriate time. We investigated the use of advance information in 71 participants across four different age groups: (i) children, (ii) young adults, (iii) middle-aged adults, and (iv) older adults. We implemented 'cued' and 'non-cued' conditions to assess age-related changes in saccadic and touch responses to targets in three movement conditions: (a) Eyes only; (b) Hands only; (c) Eyes and Hand. Children made fewer saccade errors than young adults, but they also exhibited longer response times in cued versus non-cued conditions. In contrast, older adults showed faster responses in cued conditions but exhibited more errors. The results indicate that young adults (18-25 years) achieve an optimal balance between anticipation and execution. In contrast, children show benefits (few errors) and costs (slow responses) of good inhibition when preparing a motor response based on advance information; whilst older adults show the benefits and costs associated with a prospective response strategy (i.e., good anticipation).
The Treatment of Capital Costs in Educational Projects
ERIC Educational Resources Information Center
Bezeau, Lawrence
1975-01-01
Failure to account for the cost and depreciation of capital leads to suboptimal investments in education, specifically to excessively capital intensive instructional technologies. This type of error, which is particularly serious when planning for developing countries, can be easily avoided. (Author)
NASA Astrophysics Data System (ADS)
Owens, P. R.; Libohova, Z.; Seybold, C. A.; Wills, S. A.; Peaslee, S.; Beaudette, D.; Lindbo, D. L.
2017-12-01
The measurement errors and spatial prediction uncertainties of soil properties in the modeling community are usually assessed against measured values when available. However, of equal importance is the assessment of the impacts of errors and uncertainty on cost-benefit analysis and risk assessments. Soil pH was selected as one of the most commonly measured soil properties used for liming recommendations. The objective of this study was to assess the size of errors from different sources and their implications for management decisions. Error sources include measurement methods, laboratory sources, pedotransfer functions, database transactions, spatial aggregations, etc. Several databases of measured and predicted soil pH were used for this study, including the United States National Cooperative Soil Survey Characterization Database (NCSS-SCDB) and the US Soil Survey Geographic (SSURGO) Database. The distribution of errors among different sources, from measurement methods to spatial aggregation, showed a wide range of values. The greatest RMSE of 0.79 pH units was from spatial aggregation (SSURGO vs kriging), while the measurement methods had the lowest RMSE of 0.06 pH units. Assuming an order of data acquisition based on the transaction distance, i.e. from measurement method to spatial aggregation, the RMSE increased from 0.06 to 0.8 pH units, suggesting an "error propagation". This has major implications for practitioners and the modeling community. Most soil liming rate recommendations are based on 0.1 pH unit increments, while the desired soil pH level increments are based on 0.4 to 0.5 pH units. Thus, even when the measured and desired target soil pH are the same, most guidelines recommend 1 ton ha-1 of lime, which translates into 111 ha-1 that the farmer has to factor into the cost-benefit analysis. However, this analysis needs to be based on uncertainty predictions (0.5-1.0 pH units) rather than measurement errors (0.1 pH units), which would translate into an investment of 555-1,111 that needs to be assessed against the risk. The modeling community can benefit from such analysis; however, error size and spatial distribution for global and regional predictions need to be assessed against the variability of other drivers and the impact on management decisions.
Electrolytic cell-free 57Co deposition for emission Mössbauer spectroscopy
NASA Astrophysics Data System (ADS)
Zyabkin, Dmitry V.; Procházka, Vít; Miglierini, Marcel; Mašláň, Miroslav
2018-05-01
We have developed a simple, inexpensive and efficient method for an electrochemical preparation of samples for emission Mössbauer spectroscopy (EMS) and Mössbauer sources. The proposed electrolytic deposition procedure does not require any special setup, not even an electrolytic cell. It utilizes solely an electrode with a droplet of electrolyte on its surface and the second electrode sunk into the droplet. Its performance is demonstrated using two examples, a metallic glass and a Cu stripe. We present a detailed description of the deposition procedure and resulting emission Mössbauer spectra for both samples. In the case of a Cu stripe, we have performed EMS measurements at different stages of heat-treatment, which are required for the production of Mössbauer sources with the copper matrix.
NASA Astrophysics Data System (ADS)
Zhang, Yunju; Chen, Zhongyi; Guo, Ming; Lin, Shunsheng; Yan, Yinyang
2018-01-01
As power systems grow in capacity and trend toward larger units and higher voltages, dispatching operations are becoming more frequent and complicated, and the probability of operation errors increases. To address the lack of anti-error functions, the narrow scheduling functionality, and the low working efficiency of the technical support systems used in regional regulation and integration, this paper proposes an integrated, cloud-computing-based architecture for power-network dispatching error prevention. An integrated error-prevention system spanning the Energy Management System (EMS) and the Operation Management System (OMS) has also been constructed. The system architecture has good scalability and adaptability; it can improve computational efficiency, reduce the cost of system operation and maintenance, and enhance the capability for regional regulation and anti-error checking, giving it broad development prospects.
Round-off error in long-term orbital integrations using multistep methods
NASA Technical Reports Server (NTRS)
Quinlan, Gerald D.
1994-01-01
Techniques for reducing roundoff error are compared by testing them on high-order Stormer and symmetric multistep methods. The best technique for most applications is to write the equation in summed, function-evaluation form and to store the coefficients as rational numbers. A larger error reduction can be achieved by writing the equation in backward-difference form and performing some of the additions in extended precision, but this entails a larger central processing unit (cpu) cost.
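The paper's specific remedies are the summed, function-evaluation form of the multistep formula, rational coefficient storage, and selective extended precision; the sketch below is not that scheme, but it illustrates the shared principle that compensating for lost low-order bits when accumulating many small increments sharply reduces round-off growth (the data here are arbitrary):

```python
import math
import numpy as np

def naive_sum(increments):
    s = 0.0
    for x in increments:
        s += x                      # every add rounds; errors accumulate over the run
    return s

def kahan_sum(increments):
    s, c = 0.0, 0.0                 # c carries the running rounding-error compensation
    for x in increments:
        y = x - c
        t = s + y
        c = (t - s) - y             # the low-order bits lost in the addition (s + y)
        s = t
    return s

rng = np.random.default_rng(0)
increments = rng.uniform(1e-9, 1e-7, size=2_000_000)   # many tiny steps, as in a long integration
exact = math.fsum(increments)                           # correctly rounded reference sum
print(abs(naive_sum(increments) - exact))               # ordinary accumulation
print(abs(kahan_sum(increments) - exact))               # compensated accumulation: far smaller error
```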
An optimized network for phosphorus load monitoring for Lake Okeechobee, Florida
Gain, W.S.
1997-01-01
Phosphorus load data were evaluated for Lake Okeechobee, Florida, for water years 1982 through 1991. Standard errors for load estimates were computed from available phosphorus concentration and daily discharge data. Components of error were associated with uncertainty in concentration and discharge data and were calculated for existing conditions and for 6 alternative load-monitoring scenarios for each of 48 distinct inflows. Benefit-cost ratios were computed for each alternative monitoring scenario at each site by dividing estimated reductions in load uncertainty by the 5-year average costs of each scenario in 1992 dollars. Absolute and marginal benefit-cost ratios were compared in an iterative optimization scheme to determine the most cost-effective combination of discharge and concentration monitoring scenarios for the lake. If the current (1992) discharge-monitoring network around the lake is maintained, the water-quality sampling at each inflow site twice each year is continued, and the nature of loading remains the same, the standard error of computed mean-annual load is estimated at about 98 metric tons per year compared to an absolute loading rate (inflows and outflows) of 530 metric tons per year. This produces a relative uncertainty of nearly 20 percent. The standard error in load can be reduced to about 20 metric tons per year (4 percent) by adopting an optimized set of monitoring alternatives at a cost of an additional $200,000 per year. The final optimized network prescribes changes to improve both concentration and discharge monitoring. These changes include the addition of intensive sampling with automatic samplers at 11 sites, the initiation of event-based sampling by observers at another 5 sites, the continuation of periodic sampling 12 times per year at 1 site, the installation of acoustic velocity meters to improve discharge gaging at 9 sites, and the improvement of a discharge rating at 1 site.
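A minimal sketch of the kind of marginal benefit-cost selection described above, with entirely hypothetical site names, scenario costs, and error variances (the study's actual optimization and uncertainty model are more elaborate, and a greedy pass is not guaranteed to be globally optimal):

```python
# Greedy network design: at each step adopt the upgrade, at whichever site offers it,
# with the largest marginal reduction in load-error variance per extra dollar,
# until the extra budget is spent.
def optimize_network(sites, extra_budget):
    """sites: {name: [(annual_cost, error_variance), ...]} with scenarios ordered by cost;
    the first scenario at each site is the current (baseline) monitoring plan."""
    chosen = {name: 0 for name in sites}             # index of the adopted scenario per site
    spent = 0.0
    while True:
        best = None
        for name, scenarios in sites.items():
            i = chosen[name]
            if i + 1 >= len(scenarios):
                continue
            d_cost = scenarios[i + 1][0] - scenarios[i][0]
            d_var = scenarios[i][1] - scenarios[i + 1][1]
            if d_cost <= 0 or spent + d_cost > extra_budget:
                continue
            ratio = d_var / d_cost                   # marginal benefit-cost ratio
            if best is None or ratio > best[0]:
                best = (ratio, name, d_cost)
        if best is None:
            break
        _, name, d_cost = best
        chosen[name] += 1
        spent += d_cost
    total_var = sum(s[chosen[n]][1] for n, s in sites.items())
    return chosen, spent, total_var

sites = {
    "inflow_A": [(5_000, 400.0), (20_000, 120.0), (45_000, 60.0)],
    "inflow_B": [(5_000, 900.0), (30_000, 250.0)],
    "inflow_C": [(5_000, 100.0), (25_000, 80.0)],
}
print(optimize_network(sites, extra_budget=60_000))
```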
Cost effectiveness of the US Geological Survey stream-gaging program in Alabama
Jeffcoat, H.H.
1987-01-01
A study of the cost effectiveness of the stream gaging program in Alabama identified data uses and funding sources for 72 surface water stations (including dam stations, slope stations, and continuous-velocity stations) operated by the U.S. Geological Survey in Alabama with a budget of $393,600. Of these, 58 gaging stations were used in all phases of the analysis at a funding level of $328,380. For the current policy of operation of the 58-station program, the average standard error of estimation of instantaneous discharge is 29.3%. This overall level of accuracy can be maintained with a budget of $319,800 by optimizing routes and implementing some policy changes. The maximum budget considered in the analysis was $361,200, which gave an average standard error of estimation of 20.6%. The minimum budget considered was $299,360, with an average standard error of estimation of 36.5%. The study indicates that a major source of error in the stream gaging records is lost or missing data that are the result of streamside equipment failure. If perfect equipment were available, the standard error in estimating instantaneous discharge under the current program and budget could be reduced to 18.6%. This can also be interpreted to mean that the streamflow data records have a standard error of this magnitude during times when the equipment is operating properly. (Author's abstract)
Taming the Hurricane of Acquisition Cost Growth - Or at Least Predicting It
2015-01-01
the practice of generating two different cost estimates dubbed Will Cost and Should Cost. The Should Cost estimate is "based on realistic tech...to predict estimate error in similar future programs. This method is dubbed "macro-stochastic" estimation (Ryan, Schubert Kabban, Jacques...
Cost effectiveness of stream-gaging program in Michigan
Holtschlag, D.J.
1985-01-01
This report documents the results of a study of the cost effectiveness of the stream-gaging program in Michigan. Data uses and funding sources were identified for the 129 continuous gaging stations being operated in Michigan as of 1984. One gaging station was identified as having insufficient reason to continue its operation. Several stations were identified for reactivation, should funds become available, because of insufficiencies in the data network. Alternative methods of developing streamflow information based on routing and regression analyses were investigated for 10 stations. However, no station records were reproduced with sufficient accuracy to replace conventional gaging practices. A cost-effectiveness analysis of the data-collection procedure for the ice-free season was conducted using a Kalman-filter analysis. To define missing-record characteristics, cross-correlation coefficients and coefficients of variation were computed at stations on the basis of daily mean discharge. Discharge-measurement data were used to describe the gage/discharge rating stability at each station. The results of the cost-effectiveness analysis for a 9-month ice-free season show that the current policy of visiting most stations on a fixed servicing schedule once every 6 weeks results in an average standard error of 12.1 percent for the current $718,100 budget. By adopting a flexible servicing schedule, the average standard error could be reduced to 11.1 percent. Alternatively, the budget could be reduced to $700,200 while maintaining the current level of accuracy. A minimum budget of $680,200 is needed to operate the 129-gaging-station program; a budget less than this would not permit proper service and maintenance of stations. At the minimum budget, the average standard error would be 14.4 percent. A budget of $789,900 (the maximum analyzed) would result in a decrease in the average standard error to 9.07 percent. Owing to continual changes in the composition of the network and the changes in the uncertainties of streamflow accuracy at individual stations, the cost-effectiveness analysis will need to be updated regularly if it is to be used as a management tool. Cost of these updates need to be considered in decisions concerning the feasibility of flexible servicing schedules.
Colen, Hadewig B; Neef, Cees; Schuring, Roel W
2003-06-01
Worldwide patient safety has become a major social policy problem for healthcare organisations. As in other organisations, the patients in our hospital also suffer from an inadequate distribution process, as becomes clear from incident reports involving medication errors. Medisch Spectrum Twente is a top primary-care, clinical, teaching hospital. The hospital pharmacy takes care of 1070 internal beds and 1120 beds in an affiliated psychiatric hospital and nursing homes. In the beginning of 1999, our pharmacy group started a large interdisciplinary research project to develop a safe, effective and efficient drug distribution system by using systematic process redesign. The process redesign includes both organisational and technological components. This article describes the identification and verification of critical performance dimensions for the design of drug distribution processes in hospitals (phase 1 of the systematic process redesign of drug distribution). Based on reported errors and related causes, we suggested six generic performance domains. To assess the role of the performance dimensions, we used three approaches: flowcharts, interviews with stakeholders and review of the existing performance using time studies and medication error studies. We were able to set targets for costs, quality of information, responsiveness, employee satisfaction, and degree of innovation. We still have to establish what drug distribution system, in respect of quality and cost-effectiveness, represents the best and most cost-effective way of preventing medication errors. We intend to develop an evaluation model, using the critical performance dimensions as a starting point. This model can be used as a simulation template to compare different drug distribution concepts in order to define the differences in quality and cost-effectiveness.
Urban rail transit projects : forecast versus actual ridership and costs. final report
DOT National Transportation Integrated Search
1989-10-01
Substantial errors in forecasting ridership and costs for the ten rail transit projects reviewed in this report raise the possibility that more accurate forecasts would have led decision-makers to select projects other than those reviewed in thi...
Evaluation of structure from motion for soil microtopography measurement
USDA-ARS?s Scientific Manuscript database
Recent developments in low cost structure from motion (SFM) technologies offer new opportunities for geoscientists to acquire high resolution soil microtopography data at a fraction of the cost of conventional techniques. However, these new methodologies often lack easily accessible error metrics an...
25+ Years of the Hubble Space Telescope and a Simple Error That Cost Millions
ERIC Educational Resources Information Center
Shakerin, Said
2016-01-01
A simple mistake in properly setting up a measuring device caused millions of dollars to be spent in correcting the initial optical failure of the Hubble Space Telescope (HST). This short article is intended as a lesson for a physics laboratory and discussion of errors in measurement.
On the role of cost-sensitive learning in multi-class brain-computer interfaces.
Devlaminck, Dieter; Waegeman, Willem; Wyns, Bart; Otte, Georges; Santens, Patrick
2010-06-01
Brain-computer interfaces (BCIs) present an alternative way of communication for people with severe disabilities. One of the shortcomings in current BCI systems, recently put forward in the fourth BCI competition, is the asynchronous detection of motor imagery versus resting state. We investigated this extension to the three-class case, in which the resting state is considered virtually lying between two motor classes, resulting in a large penalty when one motor task is misclassified into the other motor class. We particularly focus on the behavior of different machine-learning techniques and on the role of multi-class cost-sensitive learning in such a context. To this end, four different kernel methods are empirically compared, namely pairwise multi-class support vector machines (SVMs), two cost-sensitive multi-class SVMs and kernel-based ordinal regression. The experimental results illustrate that ordinal regression performs better than the other three approaches when a cost-sensitive performance measure such as the mean-squared error is considered. By contrast, multi-class cost-sensitive learning enables us to control the number of large errors made between two motor tasks.
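To make the point about cost-sensitive metrics concrete, the toy sketch below scores two hypothetical classifiers with both plain accuracy and the squared-error cost on an ordinal label coding that places the resting state between the two motor-imagery classes (the coding and the example predictions are assumptions, not the competition data):

```python
import numpy as np

# Ordinal coding with "rest" in the middle: left hand = 0, rest = 1, right hand = 2.
LEFT, REST, RIGHT = 0, 1, 2

def mean_squared_error_cost(y_true, y_pred):
    """Squared-error cost: confusing the two motor classes (|0 - 2|^2 = 4) is four
    times as costly as confusing a motor class with rest (|0 - 1|^2 = 1)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.mean((y_true - y_pred) ** 2)

def accuracy(y_true, y_pred):
    return np.mean(np.asarray(y_true) == np.asarray(y_pred))

y_true       = [LEFT, LEFT,  RIGHT, RIGHT, REST, REST]
classifier_a = [LEFT, RIGHT, RIGHT, LEFT,  REST, REST]   # errs between the two motor classes
classifier_b = [LEFT, REST,  RIGHT, REST,  REST, REST]   # errs by falling back to "rest"

for name, pred in [("A (motor<->motor errors)", classifier_a),
                   ("B (motor<->rest errors)", classifier_b)]:
    print(name, "accuracy:", accuracy(y_true, pred),
          "MSE cost:", mean_squared_error_cost(y_true, pred))
# Both classifiers have the same accuracy, but A pays a much higher cost-sensitive penalty.
```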
Liability claims and costs before and after implementation of a medical error disclosure program.
Kachalia, Allen; Kaufman, Samuel R; Boothman, Richard; Anderson, Susan; Welch, Kathleen; Saint, Sanjay; Rogers, Mary A M
2010-08-17
Since 2001, the University of Michigan Health System (UMHS) has fully disclosed and offered compensation to patients for medical errors. To compare liability claims and costs before and after implementation of the UMHS disclosure-with-offer program. Retrospective before-after analysis from 1995 to 2007. Public academic medical center and health system. Inpatients and outpatients involved in claims made to UMHS. Number of new claims for compensation, number of claims compensated, time to claim resolution, and claims-related costs. After full implementation of a disclosure-with-offer program, the average monthly rate of new claims decreased from 7.03 to 4.52 per 100,000 patient encounters (rate ratio [RR], 0.64 [95% CI, 0.44 to 0.95]). The average monthly rate of lawsuits decreased from 2.13 to 0.75 per 100,000 patient encounters (RR, 0.35 [CI, 0.22 to 0.58]). Median time from claim reporting to resolution decreased from 1.36 to 0.95 years. Average monthly cost rates decreased for total liability (RR, 0.41 [CI, 0.26 to 0.66]), patient compensation (RR, 0.41 [CI, 0.26 to 0.67]), and non-compensation-related legal costs (RR, 0.39 [CI, 0.22 to 0.67]). The study design cannot establish causality. Malpractice claims generally declined in Michigan during the latter part of the study period. The findings might not apply to other health systems, given that UMHS has a closed staff model covered by a captive insurance company and often assumes legal responsibility. The UMHS implemented a program of full disclosure of medical errors with offers of compensation without increasing its total claims and liability costs. Blue Cross Blue Shield of Michigan Foundation.
Evaluation of The Operational Benefits Versus Costs of An Automated Cargo Mover
2016-12-01
logistics footprint and life-cycle cost are presented as part of this report. Analysis of modeling and simulation results identified statistically significant differences...
Integrating Solar PV in Utility System Operations: Analytical Framework and Arizona Case Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Jing; Botterud, Audun; Mills, Andrew
2015-06-01
A systematic framework is proposed to estimate the impact on operating costs due to uncertainty and variability in renewable resources. The framework quantifies the integration costs associated with subhourly variability and uncertainty as well as day-ahead forecasting errors in solar PV (photovoltaics) power. A case study illustrates how changes in system operations may affect these costs for a utility in the southwestern United States (Arizona Public Service Company). We conduct an extensive sensitivity analysis under different assumptions about balancing reserves, system flexibility, fuel prices, and forecasting errors. We find that high solar PV penetrations may lead to operational challenges, particularly during low-load and high solar periods. Increased system flexibility is essential for minimizing integration costs and maintaining reliability. In a set of sensitivity cases where such flexibility is provided, in part, by flexible operations of nuclear power plants, the estimated integration costs vary between $1.0 and $4.4/MWh-PV for a PV penetration level of 17%. The integration costs are primarily due to higher needs for hour-ahead balancing reserves to address the increased sub-hourly variability and uncertainty in the PV resource. (C) 2015 Elsevier Ltd. All rights reserved.
Therrell, Bradford L.; Lloyd-Puryear, Michele A.; Camp, Kathryn M.; Mann, Marie Y.
2014-01-01
Inborn errors of metabolism (IEM) are genetic disorders in which specific enzyme defects interfere with the normal metabolism of exogenous (dietary) or endogenous protein, carbohydrate, or fat. In the U.S., many IEM are detected through state newborn screening (NBS) programs. To inform research on IEM and provide necessary resources for researchers, we are providing: tabulation of ten-year state NBS data for selected IEM detected through NBS; costs of medical foods used in the management of IEM; and an assessment of corporate policies regarding provision of nutritional interventions at no or reduced cost to individuals with IEM. The calculated IEM incidences are based on analyses of ten-year data (2001–2011) from the National Newborn Screening Information System (NNSIS). Costs to feed an average person with an IEM were approximated by determining costs to feed an individual with an IEM, minus the annual expenditure for food for an individual without an IEM. Both the incidence and costs of nutritional intervention data will be useful in future research concerning the impact of IEM disorders on families, individuals and society. PMID:25085281
Wang, Wansheng; Chen, Long; Zhou, Jie
2015-01-01
A postprocessing technique for mixed finite element methods for the Cahn-Hilliard equation is developed and analyzed. Once the mixed finite element approximations have been computed at a fixed time on the coarser mesh, the approximations are postprocessed by solving two decoupled Poisson equations in an enriched finite element space (either on a finer grid or a higher-order space) for which many fast Poisson solvers can be applied. The nonlinear iteration is only applied to a much smaller size problem and the computational cost using Newton and direct solvers is negligible compared with the cost of the linear problem. The analysis presented here shows that this technique retains the optimal rate of convergence for both the concentration and the chemical potential approximations. The corresponding error estimates obtained in our paper, especially the negative-norm error estimates, are non-trivial and differ from existing results in the literature. PMID:27110063
Stereotype threat can reduce older adults' memory errors
Barber, Sarah J.; Mather, Mara
2014-01-01
Stereotype threat often incurs the cost of reducing the amount of information that older adults accurately recall. In the current research we tested whether stereotype threat can also benefit memory. According to the regulatory focus account of stereotype threat, threat induces a prevention focus in which people become concerned with avoiding errors of commission and are sensitive to the presence or absence of losses within their environment (Seibt & Förster, 2004). Because of this, we predicted that stereotype threat might reduce older adults' memory errors. Results were consistent with this prediction. Older adults under stereotype threat had lower intrusion rates during free-recall tests (Experiments 1 & 2). They also reduced their false alarms and adopted more conservative response criteria during a recognition test (Experiment 2). Thus, stereotype threat can decrease older adults' false memories, albeit at the cost of fewer veridical memories, as well. PMID:24131297
Clinical laboratory: bigger is not always better.
Plebani, Mario
2018-06-27
Laboratory services around the world are undergoing substantial consolidation and changes through mechanisms ranging from mergers, acquisitions and outsourcing, primarily based on expectations of improving efficiency, increasing volumes and reducing the cost per test. However, the relationship between volume and costs is not linear and numerous variables influence the end cost per test. In particular, the relationship between volumes and costs does not hold across the entire spectrum of clinical laboratories: high costs are associated with low volumes up to a threshold of 1 million tests per year. Over this threshold, there is no linear association between volumes and costs, as laboratory organization rather than test volume more significantly affects the final costs. Currently, data on laboratory errors and associated diagnostic errors and risk for patient harm emphasize the need for a paradigmatic shift: from a focus on volumes and efficiency to a patient-centered vision restoring the nature of laboratory services as an integral part of the diagnostic and therapy process. Process and outcome quality indicators are effective tools to measure and improve laboratory services, by stimulating a competition based on intra- and extra-analytical performance specifications, intermediate outcomes and customer satisfaction. Rather than competing on economic value alone, clinical laboratories should adopt a strategy based on a set of harmonized quality indicators and performance specifications, active laboratory stewardship, and improved patient safety.
7 CFR 272.10 - ADP/CIS Model Plan.
Code of Federal Regulations, 2011 CFR
2011-01-01
... those which result in effective programs or in cost effective reductions in errors and improvements in management efficiency, such as decreases in program administrative costs. Thus, for those State agencies which operate exceptionally efficient and effective programs, a lesser degree of automation may be...
Cost effectiveness of the stream-gaging program in Pennsylvania
Flippo, H.N.; Behrendt, T.E.
1985-01-01
This report documents a cost-effectiveness study of the stream-gaging program in Pennsylvania. Data uses and funding were identified for 223 continuous-record stream gages operated in 1983; four are planned for discontinuance at the close of water-year 1985; two are suggested for conversion, at the beginning of the 1985 water year, to the collection of only continuous stage records. Two of 11 special-purpose short-term gages are recommended for continuation when the supporting project ends; eight of these gages are to be discontinued and the other will be converted to a partial-record type. The cost of operating the 212 stations recommended for continued operation was $1,199,000 per year in 1983. The average standard error of estimation for instantaneous streamflow is 15.2%. An overall average standard error of 9.8% could be attained on a budget of $1,271,000, which is 6% greater than the 1983 budget, by adopting cost-effective stream-gaging operations. (USGS)
An Application of Linear Covariance Analysis to the Design of Responsive Near-Rendezvous Missions
2007-06-01
accurately before making large maneuvers. A fifth type of error is maneuver knowledge error (MKER). This error accounts for how well a spacecraft is able...utilized due in large part to the cost of designing and launching spacecraft, in a market where currently there are not many options for launching...is then ordered to fire its thrusters to increase its orbital altitude to 800 km. Before the maneuver the spacecraft is moving with some velocity, V
Speeding up Coarse Point Cloud Registration by Threshold-Independent Baysac Match Selection
NASA Astrophysics Data System (ADS)
Kang, Z.; Lindenbergh, R.; Pu, S.
2016-06-01
This paper presents an algorithm for the automatic registration of terrestrial point clouds by match selection using an efficient conditional sampling method -- threshold-independent BaySAC (BAYes SAmpling Consensus) -- and employs the error metric of the average point-to-surface residual to reduce the random measurement error and thereby approach the real registration error. BaySAC and other basic sampling algorithms usually need an artificially determined threshold by which inlier points are identified, which leads to a threshold-dependent verification process. Therefore, we applied the LMedS method to construct the cost function used to determine the optimum model, in order to reduce the influence of human factors and improve the robustness of the model estimate. Point-to-point and point-to-surface error metrics are most commonly used. However, point-to-point error in general consists of at least two components, random measurement error and systematic error as a result of a remaining error in the found rigid body transformation. Thus we employ the measure of the average point-to-surface residual to evaluate the registration accuracy. The proposed approaches, together with a traditional RANSAC approach, are tested on four data sets acquired by three different scanners in terms of their computational efficiency and quality of the final registration. The registration results show that the standard deviation of the average point-to-surface residuals is reduced from 1.4 cm (plain RANSAC) to 0.5 cm (threshold-independent BaySAC). The results also show that, compared to the performance of RANSAC, our BaySAC strategies lead to fewer iterations and cheaper computational cost when the hypothesis set is contaminated with more outliers.
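The threshold-free scoring step can be sketched as follows (a generic Least-Median-of-Squares selection, not the paper's BaySAC sampling strategy; the residual function here is a point-to-point placeholder where the paper uses point-to-surface residuals):

```python
import numpy as np

def lmeds_select(hypotheses, residual_fn):
    """Pick the transform hypothesis with the Least Median of Squared residuals.
    No inlier threshold is needed, unlike plain RANSAC consensus counting.
    hypotheses: iterable of (R, t); residual_fn(R, t) -> per-point residual array."""
    best, best_score = None, np.inf
    for R, t in hypotheses:
        score = np.median(residual_fn(R, t) ** 2)   # median is robust to up to ~50% outliers
        if score < best_score:
            best, best_score = (R, t), score
    return best, best_score

def make_residual_fn(src, dst):
    """Placeholder: point-to-point distances to corresponding target points."""
    return lambda R, t: np.linalg.norm(src @ R.T + t - dst, axis=1)

# Tiny demo: the identity hypothesis should beat a wrongly translated one.
rng = np.random.default_rng(0)
src = rng.normal(size=(200, 3))
dst = src.copy()
dst[:40] += rng.normal(scale=0.5, size=(40, 3))      # 20% gross outlier correspondences
res = make_residual_fn(src, dst)
hyps = [(np.eye(3), np.zeros(3)), (np.eye(3), np.array([0.3, 0.0, 0.0]))]
print(lmeds_select(hyps, res)[1])                    # small median residual for the true pose
```

Because the score is the median of the squared residuals, no inlier distance threshold has to be chosen, which is the property the threshold-independent variant exploits.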
Limited-memory adaptive snapshot selection for proper orthogonal decomposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oxberry, Geoffrey M.; Kostova-Vassilevska, Tanya; Arrighi, Bill
2015-04-02
Reduced order models are useful for accelerating simulations in many-query contexts, such as optimization, uncertainty quantification, and sensitivity analysis. However, offline training of reduced order models can have prohibitively expensive memory and floating-point operation costs in high-performance computing applications, where memory per core is limited. To overcome this limitation for proper orthogonal decomposition, we propose a novel adaptive selection method for snapshots in time that limits offline training costs by selecting snapshots according to an error control mechanism similar to that found in adaptive time-stepping ordinary differential equation solvers. The error estimator used in this work is related to theory bounding the approximation error in time of proper orthogonal decomposition-based reduced order models, and memory usage is minimized by computing the singular value decomposition using a single-pass incremental algorithm. Results for a viscous Burgers' test problem demonstrate convergence in the limit as the algorithm error tolerances go to zero; in this limit, the full order model is recovered to within discretization error. The resulting method can be used on supercomputers to generate proper orthogonal decomposition-based reduced order models, or as a subroutine within hyperreduction algorithms that require taking snapshots in time, or within greedy algorithms for sampling parameter space.
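The selection logic can be sketched as below. This is only a schematic: it rebuilds the basis with a plain thin SVD for readability, whereas the paper's point is precisely to avoid that memory cost with a single-pass incremental SVD, and the error test here is a simple relative projection error rather than the paper's estimator. The trajectory is synthetic.

```python
import numpy as np

def adaptive_snapshots(states, tol):
    """Collect snapshots adaptively: keep a state only if its projection error
    onto the span of the snapshots kept so far exceeds tol (relative)."""
    kept = []
    basis = None                                  # orthonormal basis of kept snapshots
    for x in states:
        if basis is not None:
            err = np.linalg.norm(x - basis @ (basis.T @ x)) / (np.linalg.norm(x) + 1e-300)
            if err <= tol:
                continue                          # already well represented; skip it
        kept.append(x)
        # For clarity, rebuild the basis with a thin SVD (the paper uses an incremental one).
        basis, _, _ = np.linalg.svd(np.column_stack(kept), full_matrices=False)
    return np.column_stack(kept), basis

# Hypothetical trajectory: a smoothly decaying mode plus a transient that appears later.
t = np.linspace(0.0, 1.0, 400)
x_grid = np.linspace(0.0, 1.0, 200)
states = (np.exp(-3 * ti) * np.sin(np.pi * x_grid)
          + (ti > 0.5) * 0.2 * np.sin(4 * np.pi * x_grid) for ti in t)
snaps, basis = adaptive_snapshots(states, tol=1e-2)
print(snaps.shape)   # far fewer snapshots kept than the 400 time steps visited
```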
A Compact VLSI System for Bio-Inspired Visual Motion Estimation.
Shi, Cong; Luo, Gang
2018-04-01
This paper proposes a bio-inspired visual motion estimation algorithm based on motion energy, along with its compact very-large-scale integration (VLSI) architecture using low-cost embedded systems. The algorithm mimics motion perception functions of retina, V1, and MT neurons in a primate visual system. It involves operations of ternary edge extraction, spatiotemporal filtering, motion energy extraction, and velocity integration. Moreover, we propose the concept of confidence map to indicate the reliability of estimation results on each probing location. Our algorithm involves only additions and multiplications during runtime, which is suitable for low-cost hardware implementation. The proposed VLSI architecture employs multiple (frame, pixel, and operation) levels of pipeline and massively parallel processing arrays to boost the system performance. The array unit circuits are optimized to minimize hardware resource consumption. We have prototyped the proposed architecture on a low-cost field-programmable gate array platform (Zynq 7020) running at 53-MHz clock frequency. It achieved 30-frame/s real-time performance for velocity estimation on 160 × 120 probing locations. A comprehensive evaluation experiment showed that the estimated velocity by our prototype has relatively small errors (average endpoint error < 0.5 pixel and angular error < 10°) for most motion cases.
Moss, Marshall E.; Gilroy, Edward J.
1980-01-01
This report describes the theoretical developments and illustrates the applications of techniques that recently have been assembled to analyze the cost-effectiveness of federally funded stream-gaging activities in support of the Colorado River compact and subsequent adjudications. The cost effectiveness of 19 stream gages in terms of minimizing the sum of the variances of the errors of estimation of annual mean discharge is explored by means of a sequential-search optimization scheme. The search is conducted over a set of decision variables that describes the number of times that each gaging route is traveled in a year. A gage route is defined as the most expeditious circuit that is made from a field office to visit one or more stream gages and return to the office. The error variance is defined as a function of the frequency of visits to a gage by using optimal estimation theory. Currently a minimum of 12 visits per year is made to any gage. By changing to a six-visit minimum, the same total error variance can be attained for the 19 stations with a budget of 10% less than the current one. Other strategies are also explored. (USGS)
Particle swarm optimization algorithm based low cost magnetometer calibration
NASA Astrophysics Data System (ADS)
Ali, A. S.; Siddharth, S.; Syed, Z.; El-Sheimy, N.
2011-12-01
Inertial Navigation Systems (INS) consist of accelerometers, gyroscopes, and a microprocessor, and provide inertial data from which position and orientation are obtained by integrating the specific forces and rotation rates. In addition to the accelerometers and gyroscopes, magnetometers can be used to derive the absolute user heading based on Earth's magnetic field. Unfortunately, measurements of the magnetic field obtained with low-cost sensors are corrupted by several errors, including manufacturing defects and external electromagnetic fields. Consequently, proper calibration of the magnetometer is required to achieve high-accuracy heading measurements. In this paper, a Particle Swarm Optimization (PSO) based calibration algorithm is presented to estimate the bias and scale factor of a low-cost magnetometer. The main advantage of this technique is its use of artificial intelligence, which does not need any error modeling or awareness of the nonlinearity. The bias and scale factor errors estimated by the proposed algorithm improve the heading accuracy, and the results are statistically significant. The approach can also help in the development of Pedestrian Navigation Devices (PNDs) when combined with INS and GPS/Wi-Fi, especially in indoor environments.
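A minimal sketch of the idea (not the paper's implementation): a plain particle swarm searches for per-axis scale factors and biases that make the calibrated field magnitude match a reference value, here on synthetic data with illustrative swarm parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic raw magnetometer data: a 50 uT field in random orientations, corrupted by
# per-axis scale factors and biases (all values are illustrative only).
true_scale = np.array([1.10, 0.95, 1.05])
true_bias = np.array([8.0, -5.0, 3.0])
dirs = rng.normal(size=(400, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
raw = (50.0 * dirs) * true_scale + true_bias + 0.2 * rng.normal(size=(400, 3))

def cost(params):
    """Sum of squared deviations of the calibrated magnitude from the reference field."""
    scale, bias = params[:3], params[3:]
    cal = (raw - bias) / scale
    return np.sum((np.linalg.norm(cal, axis=1) - 50.0) ** 2)

# Minimal particle swarm over 6 parameters (3 scale factors, 3 biases).
n_particles, n_iter, w, c1, c2 = 40, 200, 0.7, 1.5, 1.5
lo = np.array([0.5, 0.5, 0.5, -20.0, -20.0, -20.0])
hi = np.array([1.5, 1.5, 1.5, 20.0, 20.0, 20.0])
x = rng.uniform(lo, hi, size=(n_particles, 6))
v = np.zeros_like(x)
pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
gbest = pbest[np.argmin(pbest_cost)]
for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 6)), rng.random((n_particles, 6))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # velocity update
    x = np.clip(x + v, lo, hi)                                  # keep particles in bounds
    costs = np.array([cost(p) for p in x])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = x[improved], costs[improved]
    gbest = pbest[np.argmin(pbest_cost)]

print("estimated scale:", gbest[:3], "estimated bias:", gbest[3:])
```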
[Macroeconomic costs of eye diseases].
Hirneiß, C; Kampik, A; Neubauer, A S
2014-05-01
Eye diseases that are relevant regarding their macroeconomic costs and their impact on society include cataract, diabetic retinopathy, age-related maculopathy, glaucoma and refractive errors. The aim of this article is to provide a comprehensive overview of direct and indirect costs for major eye disease categories for Germany, based on existing literature and data sources. A semi-structured literature search was performed in the databases Medline and Embase and in the search machine Google for relevant original papers and reviews on costs of eye diseases with relevance for or transferability to Germany (last research date October 2013). In addition, manual searching was performed in important national databases and information sources, such as the Federal Office of Statistics and scientific societies. The direct costs for these diseases add up to approximately 2.6 billion Euros yearly for the Federal Republic of Germany, including out of the pocket payments from patients but excluding optical aids (e.g. glasses). In addition to those direct costs there are also indirect costs which are caused e.g. by loss of employment or productivity or by a reduction in health-related quality of life. These indirect costs can only be roughly estimated. Including the indirect costs for the eye diseases investigated, a total yearly macroeconomic cost ranging between 4 and 12 billion Euros is estimated for Germany. The costs for the eye diseases cataract, diabetic retinopathy, age-related maculopathy, glaucoma and refractive errors have a macroeconomic relevant dimension. Based on the predicted demographic changes with an ageing society an increase of the prevalence and thus also an increase of costs for eye diseases is expected in the future.
Mobarakabadi, Sedigheh Sedigh; Ebrahimipour, Hosein; Najar, Ali Vafaie; Janghorban, Roksana; Azarkish, Fatemeh
2017-03-01
Patient safety is one of the main objectives in healthcare services; however, medical errors are a prevalent potential occurrence for patients in treatment systems. Medical errors lead to an increase in patient mortality and to challenges such as prolonged inpatient stays in hospitals and increased costs. Controlling medical errors is very important, because these errors, besides being costly, threaten patient safety. To evaluate the attitudes of nurses and midwives toward the causes and rates of medical error reporting. It was a cross-sectional observational study. The study population was 140 midwives and nurses employed in Mashhad Public Hospitals. The data collection was done through the Goldstone 2001 revised questionnaire. SPSS 11.5 software was used for data analysis. To analyze the data, descriptive and inferential statistics were used. Relative frequency distributions and standard deviations were used for calculation of the means, and the results were presented as tables and charts. The chi-square test was used for the inferential analysis of the data. Most of the midwives and nurses (39.4%) were in the age range of 25 to 34 years and the lowest percentage (2.2%) were in the age range of 55-59 years. The highest average of medical errors was related to employees with three to four years of work experience, while the lowest average was related to those with one to two years of work experience. The highest average of medical errors occurred during the evening shift, and the lowest during the night shift. Three main causes of medical errors were considered: illegible physician prescription orders, similarity of names between different drugs, and nurse fatigue. The most important causes of medical errors from the viewpoint of nurses and midwives are illegible physician orders, drug name similarity with other drugs, nurse fatigue, and damaged labels or packaging of the drug, respectively. Head nurse feedback, peer feedback, and fear of punishment or job loss were considered as reasons for under-reporting of medical errors. This research demonstrates the need for greater attention to be paid to the causes of medical errors.
Bi-Objective Optimal Control Modification Adaptive Control for Systems with Input Uncertainty
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2012-01-01
This paper presents a new model-reference adaptive control method based on a bi-objective optimal control formulation for systems with input uncertainty. A parallel predictor model is constructed to relate the predictor error to the estimation error of the control effectiveness matrix. In this work, we develop an optimal control modification adaptive control approach that seeks to minimize a bi-objective linear quadratic cost function of both the tracking error norm and predictor error norm simultaneously. The resulting adaptive laws for the parametric uncertainty and control effectiveness uncertainty are dependent on both the tracking error and predictor error, while the adaptive laws for the feedback gain and command feedforward gain are only dependent on the tracking error. The optimal control modification term provides robustness to the adaptive laws naturally from the optimal control framework. Simulations demonstrate the effectiveness of the proposed adaptive control approach.
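One plausible way to write such a bi-objective cost (the weighting matrices and error symbols below are my own shorthand, not the paper's notation) is

```latex
J \;=\; \tfrac{1}{2}\int_{0}^{\infty}
        \Big[\, e^{\top}(t)\, Q\, e(t) \;+\; \epsilon^{\top}(t)\, R\, \epsilon(t) \,\Big]\, dt ,
\qquad
e = x - x_m \ \ \text{(tracking error)}, \qquad
\epsilon = x - \hat{x} \ \ \text{(predictor error)} .
% Q and R are assumed positive semi-definite weights trading off tracking accuracy
% against accuracy of the parallel predictor model.
```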
Cost effectiveness of the stream-gaging program in Nevada
Arteaga, F.E.
1990-01-01
The stream-gaging network in Nevada was evaluated as part of a nationwide effort by the U.S. Geological Survey to define and document the most cost-effective means of furnishing streamflow information. Specifically, the study dealt with 79 streamflow gages and 2 canal-flow gages that were under the direct operation of Nevada personnel as of 1983. Cost-effective allocations of resources, including budget and operational criteria, were studied using statistical procedures known as Kalman-filtering techniques. The possibility of developing streamflow data at ungaged sites was evaluated using flow-routing and statistical regression analyses. Neither of these methods provided sufficiently accurate results to warrant their use in place of stream gaging. The 81 gaging stations were being operated in 1983 with a budget of $465,500. As a result of this study, all existing stations were determined to be necessary components of the program for the foreseeable future. At the 1983 funding level, the average standard error of streamflow records was nearly 28%. This same overall level of accuracy could have been maintained with a budget of approximately $445,000 if the funds were redistributed more equitably among the gages. The maximum budget analyzed, $1,164,000, would have resulted in an average standard error of 11%. The study indicates that a major source of error is lost data. If perfectly operating equipment were available, the standard error for the 1983 program and budget could have been reduced to 21%. (Thacker-USGS, WRD)
Cost effectiveness of the US Geological Survey's stream-gaging program in New York
Wolcott, S.W.; Gannon, W.B.; Johnston, W.H.
1986-01-01
The U.S. Geological Survey conducted a 5-year nationwide analysis to define and document the most cost effective means of obtaining streamflow data. This report describes the stream gaging network in New York and documents the cost effectiveness of its operation; it also identifies data uses and funding sources for the 174 continuous-record stream gages currently operated (1983). Those gages as well as 189 crest-stage, stage-only, and groundwater gages are operated with a budget of $1.068 million. One gaging station was identified as having insufficient reason for continuous operation and was converted to a crest-stage gage. Current operation of the 363-station program requires a budget of $1.068 million/yr. The average standard error of estimation of continuous streamflow data is 13.4%. Results indicate that this degree of accuracy could be maintained with a budget of approximately $1.006 million if the gaging resources were redistributed among the gages. The average standard error for 174 stations was calculated for five hypothetical budgets. A minimum budget of $970,000 would be needed to operate the 363-gage program; a budget less than this does not permit proper servicing and maintenance of the gages and recorders. Under the restrictions of a minimum budget, the average standard error would be 16.0%. The maximum budget analyzed was $1.2 million, which would decrease the average standard error to 9.4%. (Author's abstract)
Interspecific song imitation by a Prairie Warbler
Bruce E. Byers; Brodie A. Kramer; Michael E. Akresh; David I. King
2013-01-01
Song development in oscine songbirds relies on imitation of adult singers and thus leaves developing birds vulnerable to potentially costly errors caused by imitation of inappropriate models, such as the songs of other species. In May and June 2012, we recorded the songs of a bird that made such an error: a male Prairie Warbler (Setophaga discolor)...
Crop area estimation based on remotely-sensed data with an accurate but costly subsample
NASA Technical Reports Server (NTRS)
Gunst, R. F.
1983-01-01
Alternatives to sampling-theory stratified and regression estimators of crop production and timber biomass were examined. An alternative estimator which is viewed as especially promising is the errors-in-variable regression estimator. Investigations established the need for caution with this estimator when the ratio of two error variances is not precisely known.
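The caution about the error-variance ratio can be made concrete with a small errors-in-variables (Deming) regression sketch: the slope estimate depends directly on the assumed ratio of the two error variances, so mis-specifying that ratio biases the result. The data, error levels, and the helper name deming_slope below are invented for illustration; this is not the estimator or the crop data used in the report.

```python
import numpy as np

def deming_slope(x, y, lam):
    """Errors-in-variables (Deming) regression slope and intercept.

    lam is the assumed ratio of the y-error variance to the x-error
    variance; the estimate is sensitive to mis-specifying this ratio,
    which is the caution raised in the abstract.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxx = np.sum((x - x.mean()) ** 2)
    syy = np.sum((y - y.mean()) ** 2)
    sxy = np.sum((x - x.mean()) * (y - y.mean()))
    d = syy - lam * sxx
    slope = (d + np.sqrt(d ** 2 + 4 * lam * sxy ** 2)) / (2 * sxy)
    intercept = y.mean() - slope * x.mean()
    return slope, intercept

# Synthetic example: remotely sensed area (x, noisier) vs. a costly ground
# subsample (y, more accurate); the true error-variance ratio here is 0.25.
rng = np.random.default_rng(0)
true_area = rng.uniform(10, 100, 200)
x = true_area + rng.normal(0, 5.0, 200)          # satellite estimate, error sd = 5
y = 1.2 * true_area + rng.normal(0, 2.5, 200)    # ground subsample, error sd = 2.5

for lam in (0.25, 1.0, 4.0):                     # assumed error-variance ratios
    b, a = deming_slope(x, y, lam)
    print(f"lambda = {lam:>4}: slope = {b:.3f}, intercept = {a:.2f}")
```

Running the loop with several assumed ratios shows how far the slope drifts from the value obtained when the ratio is specified correctly.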
NASA Technical Reports Server (NTRS)
1987-01-01
In a complex computer environment there is ample opportunity for error: a mistake by a programmer or a software-induced undesirable side effect. In insurance, errors can cost a company heavily, so protection against inadvertent change is a must for the efficient firm. The data processing center at Transport Life Insurance Company has taken a step to guard against accidental changes by adopting a software package called EQNINT (Equations Interpreter Program). EQNINT cross checks the basic formulas in a program against the formulas that make up the major production system. EQNINT assures that formulas are coded correctly and helps catch errors before they affect customer service or the company's profitability.
A stopping criterion for the iterative solution of partial differential equations
NASA Astrophysics Data System (ADS)
Rao, Kaustubh; Malan, Paul; Perot, J. Blair
2018-01-01
A stopping criterion for iterative solution methods is presented that accurately estimates the solution error using low computational overhead. The proposed criterion uses information from prior solution changes to estimate the error. When the solution changes are noisy or stagnating, it reverts to a less accurate but more robust, low-cost singular-value estimate to approximate the error given the residual. This estimator can also be applied to iterative linear matrix solvers such as Krylov subspace or multigrid methods. Examples of the stopping criterion's ability to accurately estimate the non-linear and linear solution error are provided for a number of different test cases in incompressible fluid dynamics.
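A common way to estimate the remaining solution error from prior solution changes, which the abstract alludes to, is to assume roughly geometric convergence and extrapolate the step sizes. The sketch below applies that idea to a Jacobi iteration on a toy diagonally dominant system; the contraction-ratio extrapolation is a generic textbook device, not the paper's full criterion (which also falls back on a singular-value estimate when the changes are noisy or stagnating).

```python
import numpy as np

def solve_with_error_estimate(update, x0, tol=1e-8, max_iter=500):
    """Iterate x <- update(x), stopping on an *estimated solution error*
    rather than on the raw residual.

    Assumes roughly geometric convergence: with step sizes
    d_k = ||x_k - x_{k-1}|| and contraction ratio rho ~ d_k / d_{k-1},
    the remaining error is estimated as d_k * rho / (1 - rho).
    """
    x_prev = np.asarray(x0, float)
    x = update(x_prev)
    d_prev = np.linalg.norm(x - x_prev)
    err_est = np.inf
    for k in range(2, max_iter + 1):
        x_prev, x = x, update(x)
        d = np.linalg.norm(x - x_prev)
        rho = d / d_prev if d_prev > 0 else 0.0
        err_est = d * rho / (1.0 - rho) if rho < 1.0 else np.inf
        if err_est < tol:
            return x, k, err_est
        d_prev = d
    return x, max_iter, err_est

# Example: Jacobi iteration on a small diagonally dominant system A x = b.
A = np.array([[4.0, 1.0, 0.0], [1.0, 5.0, 2.0], [0.0, 2.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
D = np.diag(A)
jacobi = lambda x: (b - (A @ x - D * x)) / D   # x <- D^-1 (b - R x)

x, iters, err_est = solve_with_error_estimate(jacobi, np.zeros(3), tol=1e-10)
print(iters, err_est, np.linalg.norm(x - np.linalg.solve(A, b)))
```

The last print compares the estimated error at the stopping iteration with the true error against the direct solve.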
Aziz, Muhammad Tahir; Ur-Rehman, Tofeeq; Qureshi, Sadia; Bukhari, Nadeem Irfan
Medication errors in chemotherapy are frequent and lead to patient morbidity and mortality, as well as increased rates of re-admission and length of stay, and considerable extra costs. Objective: This study investigated the proposition that computerised chemotherapy ordering reduces the incidence and severity of chemotherapy protocol errors. A computerised physician order entry system for chemotherapy orders (C-CO) with a clinical decision support system was developed in-house, including standardised chemotherapy protocol definitions, automation of pharmacy distribution, clinical checks, labeling and invoicing. A prospective study was then conducted comparing the C-CO with a paper-based chemotherapy order (P-CO) process in a 30-bed chemotherapy bay of a tertiary hospital. Both C-CO and P-CO orders, including pharmacoeconomic analysis and the severity of medication errors, were checked and validated by a clinical pharmacist. A group analysis and field trial were also conducted to assess clarity, feasibility and decision making. The C-CO was very usable in terms of its clarity and feasibility. The incidence of medication errors was significantly lower in the C-CO compared with the P-CO (10/3765 [0.26%] versus 134/5514 [2.4%]). There was also a reduction in dispensing time of chemotherapy protocols in the C-CO. The chemotherapy computerisation with clinical decision support system resulted in a significant decrease in the occurrence and severity of medication errors, improvements in chemotherapy dispensing and administration times, and reduction of chemotherapy cost.
NASA Technical Reports Server (NTRS)
Tangborn, Andrew; Auger, Ludovic
2003-01-01
A suboptimal Kalman filter system which evolves error covariances in terms of a truncated set of wavelet coefficients has been developed for the assimilation of chemical tracer observations of CH4. This scheme projects the discretized covariance propagation equations and covariance matrix onto an orthogonal set of compactly supported wavelets. The wavelet representation is localized in both location and scale, which allows for efficient representation of the inherently anisotropic structure of the error covariances. The truncation is carried out in such a way that the resolution of the error covariance is reduced only in the zonal direction, where gradients are smaller. Assimilation experiments lasting 24 days and using different degrees of truncation were carried out. These reduced the covariance size by 90, 97 and 99% and the computational cost of covariance propagation by 80, 93 and 96%, respectively. The differences in both the error covariance and the tracer field between the truncated and full systems over this period were found not to grow in the first case, and to grow relatively slowly in the latter two cases. The largest errors in the tracer fields occurred in regions of largest zonal gradients in the constituent field. These results indicate that propagation of error covariances for a global two-dimensional data assimilation system is currently feasible. Recommendations for further reduction in computational cost are made with the goal of extending this technique to three-dimensional global assimilation systems.
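A minimal sketch of the wavelet-truncation idea is shown below, assuming the PyWavelets package: a toy anisotropic covariance matrix is projected onto a compactly supported orthogonal wavelet basis, small coefficients are discarded, and the compression and reconstruction error are reported. The grid, kernel, wavelet choice, and truncation level are all invented and far smaller than the CH4 assimilation system described above.

```python
import numpy as np
import pywt  # PyWavelets

# Toy anisotropic correlation matrix on a small lat-lon grid (a stand-in for
# the CH4 error covariance; grid size and length scales are made up).
n_lat, n_lon = 16, 16
lat, lon = np.meshgrid(np.linspace(-1, 1, n_lat), np.linspace(-1, 1, n_lon),
                       indexing="ij")
pts = np.column_stack([lat.ravel(), lon.ravel()])
d_lat = pts[:, None, 0] - pts[None, :, 0]
d_lon = pts[:, None, 1] - pts[None, :, 1]
cov = np.exp(-(d_lat / 0.2) ** 2 - (d_lon / 0.8) ** 2)   # longer zonal scale

# Project onto an orthogonal compactly supported wavelet basis and truncate.
coeffs = pywt.wavedec2(cov, "db4", level=3)
arr, slices = pywt.coeffs_to_array(coeffs)
thresh = np.quantile(np.abs(arr), 0.97)                  # keep roughly 3% of coefficients
arr_trunc = pywt.threshold(arr, thresh, mode="hard")
cov_trunc = pywt.waverec2(
    pywt.array_to_coeffs(arr_trunc, slices, output_format="wavedec2"), "db4")
cov_trunc = cov_trunc[:cov.shape[0], :cov.shape[1]]      # crop any padding

kept = np.count_nonzero(arr_trunc) / arr.size
rel_err = np.linalg.norm(cov_trunc - cov) / np.linalg.norm(cov)
print(f"coefficients kept: {kept:.1%}, relative L2 error: {rel_err:.3%}")
```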
Flexible reserve markets for wind integration
NASA Astrophysics Data System (ADS)
Fernandez, Alisha R.
The increased interconnection of variable generation has motivated the use of improved forecasting to more accurately predict future production, with the purpose of lowering total system costs for balancing when the expected output exceeds or falls short of the actual output. Forecasts are imperfect, and the forecast errors associated with utility-scale generation from variable generators require new balancing capabilities that cannot be handled by existing ancillary services. Our work focuses on strategies for integrating large amounts of wind generation under the flex reserve market, a market that would be called upon for short-term energy services during an under- or oversupply of wind generation to maintain electric grid reliability. The flex reserve market would be utilized for time intervals that fall in between the current ancillary services markets: longer than second-to-second energy services for maintaining system frequency and shorter than reserve capacity services that are called upon for several minutes up to an hour during an unexpected contingency on the grid. In our work, the wind operator would access the flex reserve market as an energy service to correct for unanticipated forecast errors, akin to paying the generators participating in the market to increase generation during a shortfall or paying other generators to decrease generation during an excess of wind generation. Such a market does not currently exist in the Mid-Atlantic United States. The Pennsylvania-New Jersey-Maryland Interconnection (PJM) is the Mid-Atlantic electric grid case study used to examine whether a flex reserve market can be utilized for integrating large capacities of wind generation in a low-cost manner for those providing, purchasing and dispatching these short-term balancing services. The following work consists of three studies. The first examines the ability of a hydroelectric facility to provide short-term forecast error balancing services via a flex reserve market, identifying the operational constraints that inhibit a multi-purpose dam facility from meeting the desired flexible energy demand. The second study transitions from the hydroelectric facility as the decision maker providing flex reserve services to the wind plant as the decision maker purchasing these services. In this second study, methods for allocating the costs of flex reserve services under different wind policy scenarios are explored that aggregate farms into different groupings to identify the least-cost strategy for balancing the costs of hourly day-ahead forecast errors. The least-cost strategy may be different for an individual wind plant and for the system operator, noting that the least-cost strategy is highly sensitive to cost allocation and aggregation schemes. The latter may also cause cross-subsidies in the cost of balancing wind forecast errors among the different wind farms. The third study builds from the second, with the objective of quantifying the amount of flex reserves needed for balancing future forecast errors using a probabilistic approach (quantile regression) to estimating future forecast errors. The results further examine the usefulness of separate flexible markets PJM could use for balancing oversupply and undersupply events, similar to the regulation up and down markets used in Europe. These three studies provide the following results and insights into large-scale wind integration using actual PJM wind farm data that describe the markets and generators within PJM.
• Chapter 2 provides an in-depth analysis of the valuable, yet highly constrained, energy services that multi-purpose hydroelectric facilities can provide, though the opportunity cost of providing these services can result in large deviations from the reservoir policies with minimal revenue gain in comparison to dedicating the whole of dam capacity to providing day-ahead, baseload generation.
• Chapter 3 quantifies the system-wide efficiency gains and the distributive effects of PJM's decision to act as a single balancing authority, which means that it procures ancillary services across its entire footprint simultaneously. This can be contrasted with the Midwest Independent System Operator (MISO), which has several balancing authorities operating under its footprint.
• Chapter 4 uses probabilistic methods to estimate the uncertainty in the forecast errors and the quantity of energy needed to balance these forecast errors at a certain percentile. Current practice is to use a point forecast that describes the conditional expectation of the dependent variable at each time step. The approach here uses quantile regression to describe the relationship between the independent variable and the conditional quantiles (equivalently, the percentiles) of the dependent variable. An estimate of the conditional density is performed, which contains information about the covariate relationship between the sign of the forecast errors (negative for too much wind generation and positive for too little wind generation) and the wind power forecast. This additional knowledge may be implemented in the decision process to more accurately schedule day-ahead wind generation bids and provides an example of using separate markets for balancing an oversupply and undersupply of generation. Such methods are currently used for coordinating large footprints of wind generation in Europe (a quantile regression sketch follows this list).
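A minimal sketch of the Chapter 4 idea, assuming the statsmodels package: fit conditional quantiles of the signed forecast error against the day-ahead forecast and read off flex-up and flex-down reserve levels at a chosen percentile. The data are synthetic stand-ins, not PJM wind farm data, and the linear quantile model is only illustrative.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-in data: hourly day-ahead wind forecasts (MW) and signed
# forecast errors (negative = oversupply, positive = undersupply, matching
# the sign convention above). Both series are invented.
rng = np.random.default_rng(1)
forecast = rng.uniform(0, 1000, 2000)
error = rng.normal(0, 30 + 0.08 * forecast)        # error spread grows with forecast

X = sm.add_constant(forecast)
params = {}
for q in (0.05, 0.50, 0.95):
    fit = sm.QuantReg(error, X).fit(q=q)
    params[q] = fit.params
    print(f"q = {q:.2f}: intercept = {fit.params[0]:7.1f}, slope = {fit.params[1]:.3f}")

# For a given forecast level, the 5th/95th conditional quantiles bound the
# balancing energy needed to cover that share of error outcomes.
f0 = 600.0
lo = params[0.05] @ [1, f0]
hi = params[0.95] @ [1, f0]
print(f"at a {f0:.0f} MW forecast: flex-down reserve ~{abs(lo):.0f} MW, flex-up ~{hi:.0f} MW")
```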
MacKay, Mark; Anderson, Collin; Boehme, Sabrina; Cash, Jared; Zobell, Jeffery
2016-04-01
The Institute for Safe Medication Practices has stated that parenteral nutrition (PN) is considered a high-risk medication and has the potential of causing harm. Three organizations--American Society for Parenteral and Enteral Nutrition (A.S.P.E.N.), American Society of Health-System Pharmacists, and National Advisory Group--have published guidelines for ordering, transcribing, compounding and administering PN. These national organizations have published data on compliance to the guidelines and the risk of errors. The purpose of this article is to report a large pediatric institution's compliance with the ordering, transcription, compounding, and administration guidelines, along with its error rate, and to compare these with published national data. A computerized prescriber order entry (CPOE) program was developed that incorporates dosing with soft and hard stop recommendations and simultaneously eliminating the need for paper transcription. A CPOE team prioritized and identified issues, then developed solutions and integrated innovative CPOE and automated compounding device (ACD) technologies and practice changes to minimize opportunities for medication errors in PN prescription, transcription, preparation, and administration. Thirty developmental processes were identified and integrated in the CPOE program, resulting in practices that were compliant with A.S.P.E.N. safety consensus recommendations. Data from 7 years of development and implementation were analyzed and compared with published literature comparing error, harm rates, and cost reductions to determine if our process showed lower error rates compared with national outcomes. The CPOE program developed was in total compliance with the A.S.P.E.N. guidelines for PN. The frequency of PN medication errors at our hospital over the 7 years was 230 errors/84,503 PN prescriptions, or 0.27% compared with national data that determined that 74 of 4730 (1.6%) of prescriptions over 1.5 years were associated with a medication error. Errors were categorized by steps in the PN process: prescribing, transcription, preparation, and administration. There were no transcription errors, and most (95%) errors occurred during administration. We conclude that PN practices that conferred a meaningful cost reduction and a lower error rate (2.7/1000 PN) than reported in the literature (15.6/1000 PN) were ascribed to the development and implementation of practices that conform to national PN guidelines and recommendations. Electronic ordering and compounding programs eliminated all transcription and related opportunities for errors. © 2015 American Society for Parenteral and Enteral Nutrition.
Baltussen, Rob; Smith, Andrew
2012-03-02
To determine the relative costs, effects, and cost effectiveness of selected interventions to control cataract, trachoma, refractive error, hearing loss, meningitis and chronic otitis media. Cost effectiveness analysis of single or combined strategies for controlling vision and hearing loss by means of a lifetime population model. Two World Health Organization sub-regions of the world where vision and hearing loss are major burdens: sub-Saharan Africa and South East Asia. Biological and behavioural parameters from clinical and observational studies and population based surveys. Intervention effects and resource inputs based on published reports, expert opinion, and the WHO-CHOICE database. Cost per disability adjusted life year (DALY) averted, expressed in international dollars ($Int) for the year 2005. Treatment of chronic otitis media, extracapsular cataract surgery, trichiasis surgery, treatment for meningitis, and annual screening of schoolchildren for refractive error are among the most cost effective interventions to control hearing and vision impairment, with the cost per DALY averted <$Int285 in both regions. Screening of both schoolchildren (annually) and adults (every five years) for hearing loss costs around $Int1000 per DALY averted. These interventions can be considered highly cost effective. Mass treatment with azithromycin to control trachoma can be considered cost effective in the African but not the South East Asian sub-region. Vision and hearing impairment control interventions are generally cost effective. To decide whether substantial investments in these interventions are warranted, this finding should be considered in relation to the economic attractiveness of other, existing or new, interventions in health.
Wind power error estimation in resource assessments.
Rodríguez, Osvaldo; Del Río, Jesús A; Jaramillo, Oscar A; Martínez, Manuel
2015-01-01
Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, probability density function, and wind turbine power curves. This method uses the actual wind speed data without prior statistical treatment, based on 28 wind turbine power curves fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, thereby yielding an error of 5%. The proposed error propagation complements traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment return time. The implementation of this method increases the reliability of techno-economic resource assessment studies.
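The propagation of a wind speed measurement error into a power estimate can be mimicked in spirit with a small sketch: perturb measured wind speeds by a relative error and push both series through a turbine power curve. The power curve, wind distribution, and error levels below are invented, so the numbers will not reproduce the paper's 10%-to-5% figure; the study itself uses 28 manufacturer curves fitted by Lagrange's method.

```python
import numpy as np

# Illustrative turbine power curve (wind speed m/s -> kW); the values are
# made up, not one of the 28 manufacturer curves fitted in the study.
v_tab = np.array([3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 25], dtype=float)
p_tab = np.array([0, 25, 80, 160, 270, 420, 600, 790, 950, 1050, 1100, 1100], dtype=float)

def power(v):
    """Interpolated turbine output; zero below cut-in (3 m/s) and above cut-out (25 m/s)."""
    return np.where((v < 3) | (v > 25), 0.0, np.interp(v, v_tab, p_tab))

rng = np.random.default_rng(2)
v_obs = rng.weibull(2.0, 50_000) * 8.0           # stand-in measured wind speed series

for rel_err in (0.05, 0.10):
    # Propagate a systematic +/- relative measurement error in wind speed
    # through the power curve and compare the aggregate energy estimates.
    e_nom = power(v_obs).sum()
    e_hi = power(v_obs * (1 + rel_err)).sum()
    e_lo = power(v_obs * (1 - rel_err)).sum()
    band = max(abs(e_hi - e_nom), abs(e_nom - e_lo)) / e_nom
    print(f"speed error {rel_err:.0%} -> power estimate error up to {band:.1%}")
```

Because output saturates near rated power, the propagated power error can be smaller than the underlying speed error, which is the qualitative behavior the abstract reports.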
Accuracy, Precision, Ease-Of-Use, and Cost of Methods to Test Ebola-Relevant Chlorine Solutions
Wells, Emma; Wolfe, Marlene K.; Murray, Anna; Lantagne, Daniele
2016-01-01
To prevent transmission in Ebola Virus Disease (EVD) outbreaks, it is recommended to disinfect living things (hands and people) with 0.05% chlorine solution and non-living things (surfaces, personal protective equipment, dead bodies) with 0.5% chlorine solution. In the current West African EVD outbreak, these solutions (manufactured from calcium hypochlorite (HTH), sodium dichloroisocyanurate (NaDCC), and sodium hypochlorite (NaOCl)) have been widely used in both Ebola Treatment Unit and community settings. To ensure solution quality, testing is necessary; however, test method appropriateness for these Ebola-relevant concentrations has not previously been evaluated. We identified fourteen commercially-available methods to test Ebola-relevant chlorine solution concentrations, including two titration methods, four DPD dilution methods, and six test strips. We assessed these methods by: 1) determining accuracy and precision by measuring in quintuplicate five different 0.05% and 0.5% chlorine solutions manufactured from NaDCC, HTH, and NaOCl; 2) conducting volunteer testing to assess ease-of-use; and 3) determining costs. Accuracy was greatest in titration methods (ranging from the reference method itself to 12.4% error compared with the reference method), then DPD dilution methods (2.4–19% error), then test strips (5.2–48% error); precision followed this same trend. Two methods had an accuracy of <10% error across all five chlorine solutions with good precision: Hach digital titration for 0.05% and 0.5% solutions (recommended for contexts with trained personnel and financial resources), and Serim test strips for 0.05% solutions (recommended for contexts where rapid, inexpensive, and low-training burden testing is needed). Measurement error from test methods not including pH adjustment varied significantly across the five chlorine solutions, which had pH values of 5–11. Volunteers found test strips easiest and titration hardest; costs per 100 tests were $14–37 for test strips and $33–609 for titration. Given the ease-of-use and cost benefits of test strips, we recommend further development of test strips robust to pH variation and appropriate for Ebola-relevant chlorine solution concentrations. PMID:27243817
[Refugees at Malmö Epidemic Hospital in 1945].
Cronberg, S
1993-01-01
In 1945, 423 refugees were admitted to Malmö Epidemic Hospital because of contagious disease. Of these refugees, 159 men and 167 women arrived from the German concentration camps in Ravensbrück, Buchenwald, Bergen-Belsen, Neuengamme and others. Others arrived in a boat that had been destined to be sunk when peace came, but the crew changed its mind and let the boat dock at Malmö harbour, thus saving the lives of more than 95% of its passengers. Of the refugees, 31% came from Poland, 24% from Scandinavian countries, 12% from the Benelux countries and 10% from France. Louse-borne typhus was the most frequent diagnosis, occurring in 35%. Other common disorders were diphtheria, scarlet fever, enteric fever and tuberculosis. Almost all prisoners from the concentration camps were malnourished and had endured severe cruelty. Most of them recovered rapidly when given food and vitamins.
Remote-Sensing Data Distribution and Processing in the Cloud at the ASF DAAC
NASA Astrophysics Data System (ADS)
Stoner, C.; Arko, S. A.; Nicoll, J. B.; Labelle-Hamer, A. L.
2016-12-01
The Alaska Satellite Facility (ASF) Distributed Active Archive Center (DAAC) has been tasked to archive and distribute data from both SENTINEL-1 satellites and from the NASA-ISRO Synthetic Aperture Radar (NISAR) satellite in a cost-effective manner. In order to best support processing and distribution of these large data sets for users, the ASF DAAC enhanced our data system in a number of ways that will be detailed in this presentation. The SENTINEL-1 mission comprises a constellation of two polar-orbiting satellites, operating day and night performing C-band Synthetic Aperture Radar (SAR) imaging, enabling them to acquire imagery regardless of the weather. SENTINEL-1A was launched by the European Space Agency (ESA) in April 2014. SENTINEL-1B is scheduled to launch in April 2016. The NISAR satellite is designed to observe and take measurements of some of the planet's most complex processes, including ecosystem disturbances, ice-sheet collapse, and natural hazards such as earthquakes, tsunamis, volcanoes and landslides. NISAR will employ radar imaging, polarimetry, and interferometry techniques using the SweepSAR technology employed for full-resolution wide-swath imaging. NISAR data files are large, making storage and processing a challenge for conventional store-and-download systems. To effectively process, store, and distribute petabytes of data in a high-performance computing environment, ASF took a long view with regard to technology choices and chose the path of greatest flexibility and software re-use. To that end, this software tools and services presentation will cover Web Object Storage (WOS) and the ability to seamlessly move from local sunk cost hardware to a public cloud, such as Amazon Web Services (AWS). A prototype of the SENTINEL-1A system in AWS, as well as a local hardware solution, will be examined to explain the pros and cons of each. In preparation for NISAR files, which will be even larger than SENTINEL-1A, ASF has embarked on a number of cloud initiatives, including processing in the cloud at scale, processing data on-demand, and processing end-user computations on DAAC data in the cloud.
Essays on regulation, institutions, and industrial organization
NASA Astrophysics Data System (ADS)
Bergara, Mario Esteban
Essay I develops a comparative institutional analysis of network access price regulation and "light-handed" regulation. While the former is a specific-agency-based arrangement with higher political influence, the latter is a court-based system. Consequently, the main trade-off between both frameworks reflects the merits of having efficient political and judicial institutions. Price regulation is superior when distributional concerns are irrelevant and information asymmetries are lower. Poorly functioning political systems and high welfare costs of raising funds make price regulation less attractive. Light regulation is more attractive when potential rents are smaller, the monopolist is more risk averse, the judicial system is more efficient, and the threat of government intervention is more credible. The possibility of private transfers makes price regulation more advantageous. Higher information asymmetries among firms make light-handed regulation more attractive. The main results are consistent with a plausible interpretation of the drastic deregulatory process in New Zealand. Essay II studies the preliminary effects of the deregulation of direct access in New Zealand's electricity market. A slight improvement in quality standards and an overall efficiency increase took place after two years of deregulation. Retailers were able to successfully enter large-demand, dense areas with a large proportion of industrial and commercial users, where incumbents were not distributing electricity efficiently. Pricing policies appear to be influenced by market forces (associated with economic and demographic characteristics), as expected in a light regulatory framework. Essay III focuses on the possibility of endogenous sunk costs and the introduction of new products. Firms that exert some monopoly power in one market and introduce a new good whose demand is determined by a broader set of consumers might be forced to change their competing strategies. If the new product is a "quality" good, the resulting competitive process may include advertising outlays, affecting the degree of competition in the old market. In the Uruguayan private banking sector, larger institutions pursued more aggressive advertising strategies to maintain or improve their market positions than smaller firms did. Market power in the financial intermediation market declined considerably after the introduction of new products in the early nineties.
A risk-based prospective payment system that integrates patient, hospital and national costs.
Siegel, C; Jones, K; Laska, E; Meisner, M; Lin, S
1992-05-01
We suggest that a desirable form for prospective payment for inpatient care is hospital average cost plus a linear combination of individual patient and national average cost. When the coefficients are chosen to minimize mean squared error loss between payment and costs, the payment has efficiency and access incentives. The coefficient multiplying patient costs is a hospital specific measure of financial risk of the patient. Access is promoted since providers receive higher reimbursements for risky, high cost patients. Historical cost data can be used to obtain estimates of payment parameters. The method is applied to Medicare data on psychiatric inpatients.
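A schematic reading of the payment rule can be written as a small least-squares exercise: for each hospital, choose the coefficients on the individual patient cost estimate and the national average so that the payment tracks actual costs with minimum mean squared error. Everything below (the dollar figures, the per-patient estimate, and the helper payment_coefficients) is a hypothetical illustration, not the estimator or the Medicare data used in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
national_avg = 9_000.0                       # stand-in national average cost ($)

def payment_coefficients(est_patient_cost, actual_cost, hospital_avg):
    """Least-squares coefficients (a, b) for the payment rule
        payment_i = hospital_avg + a * est_patient_cost_i + b * national_avg,
    chosen to minimize mean squared error between payment and actual cost.
    A larger a means the payment should track individual patient costs more
    closely (the 'financial risk' reading of the coefficient).
    """
    X = np.column_stack([est_patient_cost,
                         np.full_like(est_patient_cost, national_avg)])
    y = actual_cost - hospital_avg
    (a, b), *_ = np.linalg.lstsq(X, y, rcond=None)
    return a, b

# Two hypothetical hospitals: one with a homogeneous case mix, one with a
# highly variable case mix. All dollar figures and noise levels are invented.
for label, spread in [("homogeneous case mix", 500.0), ("variable case mix", 4_000.0)]:
    need = rng.normal(9_000, spread, 400)            # underlying resource need
    est = need + rng.normal(0, 2_000, 400)           # prospective per-patient estimate
    actual = need + rng.normal(0, 1_000, 400)        # realized cost
    a, b = payment_coefficients(est, actual, hospital_avg=actual.mean())
    print(f"{label}: a = {a:.2f}, b = {b:.2f}")
```

The hospital with the more variable case mix ends up with a larger patient-cost coefficient, which matches the access argument: riskier, high-cost patients draw higher reimbursements.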
Bubalo, Joseph; Warden, Bruce A; Wiegel, Joshua J; Nishida, Tess; Handel, Evelyn; Svoboda, Leanne M; Nguyen, Lam; Edillo, P Neil
2014-12-01
Medical errors, in particular medication errors, continue to be a troublesome factor in the delivery of safe and effective patient care. Antineoplastic agents represent a group of medications highly susceptible to medication errors due to their complex regimens and narrow therapeutic indices. As the majority of these medication errors are frequently associated with breakdowns in poorly defined systems, developing technologies and evolving workflows seem to be a logical approach to provide added safeguards against medication errors. This article will review both the pros and cons of today's technologies and their ability to simplify the medication use process, reduce medication errors, improve documentation, improve healthcare costs and increase provider efficiency as they relate to the use of antineoplastic therapy throughout the medication use process. Several technologies, mainly computerized provider order entry (CPOE), barcode medication administration (BCMA), smart pumps, electronic medication administration record (eMAR), and telepharmacy, have been well described and proven to reduce medication errors, improve adherence to quality metrics, and/or improve healthcare costs in a broad scope of patients. The utilization of these technologies during antineoplastic therapy is weak at best and lacking for most. Specific to the antineoplastic medication use system, the only technology with data to adequately support a claim of reduced medication errors is CPOE. In addition to the benefits these technologies can provide, it is also important to recognize their potential to induce new types of errors and inefficiencies which can negatively impact patient care. The utilization of technology reduces but does not eliminate the potential for error. The evidence base to support technology in preventing medication errors is limited in general but even more deficient in the realm of antineoplastic therapy. Though CPOE has the best evidence to support its use in the antineoplastic population, benefit from many other technologies may have to be inferred based on data from other patient populations. As health systems begin to widely adopt and implement new technologies it is important to critically assess their effectiveness in improving patient safety. © The Author(s) 2013.
NASA Technical Reports Server (NTRS)
Rango, A.
1981-01-01
Both LANDSAT and NOAA satellite data were used in improving snowmelt runoff forecasts. When the satellite snow cover data were tested in both empirical seasonal runoff estimation and short term modeling approaches, a definite potential for reducing forecast error was evident. A cost benefit analysis run in conjunction with the snow mapping indicated a $36.5 million annual benefit accruing from a one percent improvement in forecast accuracy using the snow cover data for the western United States. The annual cost of employing the system would be $505,000. The snow mapping has proven that satellite snow cover data can be used to reduce snowmelt runoff forecast error in a cost effective manner once all operational satellite data are available within 72 hours after acquisition. Executive summaries of the individual snow mapping projects are presented.
A reformulation of the Cost Plus Net Value Change (C+NVC) model of wildfire economics
Geoffrey H. Donovan; Douglas B. Rideout
2003-01-01
The Cost plus Net Value Change (C+NVC) model provides the theoretical foundation for wildland fire economics and provides the basis for the National Fire Management Analysis System (NFMAS). The C+NVC model is based on the earlier least Cost plus Loss model (LC+L) expressed by Sparhawk (1925). Mathematical and graphical analysis of the LC+L model illustrates two errors...
Guo, Jiin-Huarng; Luh, Wei-Ming
2009-05-01
When planning a study, sample size determination is one of the most important tasks facing the researcher. The size will depend on the purpose of the study, the cost limitations, and the nature of the data. By specifying the standard deviation ratio and/or the sample size ratio, the present study considers the problem of heterogeneous variances and non-normality for Yuen's two-group test and develops sample size formulas to minimize the total cost or maximize the power of the test. For a given power, the sample size allocation ratio can be manipulated so that the proposed formulas can minimize the total cost, the total sample size, or the sum of total sample size and total cost. On the other hand, for a given total cost, the optimum sample size allocation ratio can maximize the statistical power of the test. After the sample size is determined, the present simulation applies Yuen's test to the sample generated, and then the procedure is validated in terms of Type I errors and power. Simulation results show that the proposed formulas can control Type I errors and achieve the desired power under the various conditions specified. Finally, the implications for determining sample sizes in experimental studies and future research are discussed.
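The cost-minimizing allocation idea can be illustrated with the standard normal-approximation formulas for a two-group mean comparison: for a target power, total cost is minimized when the allocation ratio equals the standard-deviation ratio scaled by the square root of the inverse cost ratio. This sketch is only an approximation of the setting in the abstract; Yuen's trimmed-mean test uses adjusted variances and degrees of freedom, and the paper's formulas differ.

```python
import numpy as np
from scipy import stats

def cost_optimal_sizes(delta, sd1, sd2, cost1, cost2, alpha=0.05, power=0.80):
    """Two-group sample sizes that reach a target power at minimum total cost.

    Uses the normal-approximation sample size for a two-sided test of a mean
    difference `delta` with unequal variances; the cost-minimizing
    allocation ratio is n2/n1 = (sd2/sd1) * sqrt(cost1/cost2).
    """
    z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
    r = (sd2 / sd1) * np.sqrt(cost1 / cost2)          # optimal n2 / n1
    n1 = (z / delta) ** 2 * (sd1 ** 2 + sd2 ** 2 / r)
    n2 = r * n1
    n1, n2 = int(np.ceil(n1)), int(np.ceil(n2))
    return n1, n2, cost1 * n1 + cost2 * n2

# Example: group 2 is twice as variable but three times cheaper to sample.
n1, n2, total = cost_optimal_sizes(delta=0.5, sd1=1.0, sd2=2.0,
                                   cost1=30.0, cost2=10.0)
print(f"n1 = {n1}, n2 = {n2}, total cost = {total:.0f}")
```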
Effect of Job Satisfaction and Motivation towards Employee's Performance in XYZ Shipping Company
ERIC Educational Resources Information Center
Octaviannand, Ramona; Pandjaitan, Nurmala K.; Kuswanto, Sadikin
2017-01-01
In the digital and globalization era, which demands technological progress, human resources need to work with greater focus and concentration. Small errors can lead to fatal errors that result in high costs for the company. The loss of motivation at work influences employee satisfaction and has a negative impact on employee performance. Research was…
A Reduced-Order Model For Zero-Mass Synthetic Jet Actuators
NASA Technical Reports Server (NTRS)
Yamaleev, Nail K.; Carpenter, Mark H.; Vatsa, Veer S.
2007-01-01
Accurate details of the general performance of fluid actuators are desirable over a range of flow conditions, within some predetermined error tolerance. Designers typically model actuators with different levels of fidelity depending on the acceptable level of error in each circumstance. Crude properties of the actuator (e.g., peak mass rate and frequency) may be sufficient for some designs, while detailed information is needed for other applications (e.g., multiple actuator interactions). This work attempts to address two primary objectives. The first objective is to develop a systematic methodology for approximating realistic 3-D fluid actuators, using quasi-1-D reduced-order models. Near full fidelity can be achieved with this approach at a fraction of the cost of full simulation and only a modest increase in cost relative to most actuator models used today. The second objective, which is a direct consequence of the first, is to determine the approximate magnitude of errors committed by actuator model approximations of various fidelities. This objective attempts to identify which model (ranging from simple orifice exit boundary conditions to full numerical simulations of the actuator) is appropriate for a given error tolerance.
Cost-effectiveness of the streamflow-gaging program in Wyoming
Druse, S.A.; Wahl, K.L.
1988-01-01
This report documents the results of a cost-effectiveness study of the streamflow-gaging program in Wyoming. Regression analysis or hydrologic flow-routing techniques were considered for 24 combinations of stations from a 139-station network operated in 1984 to investigate the suitability of these techniques for simulating streamflow records. Only one station was determined to have sufficient accuracy in the regression analysis to consider discontinuance of the gage. The evaluation of the gaging-station network, which included the use of associated uncertainty in streamflow records, is limited to the nonwinter operation of the 47 stations operated by the Riverton Field Office of the U.S. Geological Survey. The current (1987) travel routes and measurement frequencies require a budget of $264,000 and result in an average standard error in streamflow records of 13.2%. Changes in routes and station visits using the same budget could optimally reduce the standard error by 1.6%. Budgets evaluated ranged from $235,000 to $400,000. A $235,000 budget increased the optimal average standard error per station from 11.6 to 15.5%, and a $400,000 budget could reduce it to 6.6%. For all budgets considered, lost record accounts for about 40% of the average standard error. (USGS)
Sun, Xingming; Yan, Shuangshuang; Wang, Baowei; Xia, Li; Liu, Qi; Zhang, Hui
2015-01-01
Air temperature (AT) is an extremely vital factor in meteorology, agriculture, the military, etc., being used for the prediction of weather disasters such as drought, flood, frost, etc. Many efforts have been made to monitor the temperature of the atmosphere, such as automatic weather stations (AWS). Nevertheless, due to the high cost of specialized AT sensors, they cannot be deployed at high spatial density. A novel method, a meteorology wireless sensor network relying on sensing nodes, has been proposed for the purpose of reducing the cost of AT monitoring. However, the temperature sensor on the sensing node can be easily influenced by environmental factors. Previous research has confirmed that there is a close relation between AT and solar radiation (SR). Therefore, this paper presents a method to decrease the error of sensed AT, taking SR into consideration. In this work, we analyzed all of the collected data of AT and SR in May 2014 and found the numerical correspondence between AT error (ATE) and SR. This corresponding relation was used to calculate real-time ATE according to real-time SR and to correct the error of AT in other months. PMID:26213941
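The correction step described above amounts to fitting the ATE-versus-SR correspondence on the calibration month and subtracting the predicted error afterwards. The sketch below assumes a simple linear fit and invented numbers; the paper's actual correspondence was derived from the May 2014 measurements.

```python
import numpy as np

# Synthetic stand-in for the calibration data: sensed air temperature error
# (ATE, degrees C) versus solar radiation (SR, W/m^2); the relationship and
# noise level are invented.
rng = np.random.default_rng(4)
sr_may = rng.uniform(0, 1000, 720)
ate_may = 0.004 * sr_may + rng.normal(0, 0.3, 720)

# Fit the ATE-vs-SR correspondence on the calibration month...
slope, intercept = np.polyfit(sr_may, ate_may, 1)

def correct_temperature(sensed_at, sr_now):
    """Subtract the SR-predicted error from the sensed air temperature."""
    return sensed_at - (slope * sr_now + intercept)

# ...then apply it in another month, given a real-time SR reading.
print(correct_temperature(sensed_at=27.8, sr_now=850.0))
```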
EPDM Based Double Slope Triangular Enclosure Solar Collector: A Novel Approach
Qureshi, Shafiq R.; Khan, Waqar A.
2014-01-01
Solar heating is one of the important utilities of solar energy both in domestic and industrial sectors. Evacuated tube heaters are a commonly used technology for domestic water heating. However, increasing cost of copper and nickel has resulted in huge initial cost for these types of heaters. Utilizing solar energy more economically for domestic use requires new concept which has low initial and operating costs together with ease of maintainability. As domestic heating requires only nominal heating temperature to the range of 60–90°C, therefore replacing nickel coated copper pipes with any cheap alternate can drastically reduce the cost of solar heater. We have proposed a new concept which utilizes double slope triangular chamber with EPDM based synthetic rubber pipes. This has reduced the initial and operating costs substantially. A detailed analytical study was carried out to design a novel solar heater. On the basis of analytical design, a prototype was manufactured. Results obtained from the experiments were found to be in good agreement with the analytical study. A maximum error of 10% was recorded at noon. However, results show that error is less than 5% in early and late hours. PMID:24688407
More attention when speaking: does it help or does it hurt?
Nozari, Nazbanou; Thompson-Schill, Sharon L.
2013-01-01
Paying selective attention to a word in a multi-word utterance results in a decreased probability of error on that word (benefit), but an increased probability of error on the other words (cost). We ask whether excitation of the prefrontal cortex helps or hurts this cost. One hypothesis (the resource hypothesis) predicts a decrease in the cost due to the deployment of more attentional resources, while another (the focus hypothesis) predicts even greater costs due to further fine-tuning of selective attention. Our results are more consistent with the focus hypothesis: prefrontal stimulation caused a reliable increase in the benefit and a marginal increase in the cost of selective attention. To ensure that the effects are due to changes to the prefrontal cortex, we provide two checks: We show that the pattern of results is quite different if, instead, the primary motor cortex is stimulated. We also show that the stimulation-related benefits in the verbal task correlate with the stimulation-related benefits in an N-back task, which is known to tap into a prefrontal function. Our results shed light on how selective attention affects language production, and more generally, on how selective attention affects production of a sequence over time. PMID:24012690
Fifolt, Matthew; Blackburn, Justin; Rhodes, David J; Gillespie, Shemeka; Bennett, Aleena; Wolff, Paul; Rucks, Andrew
Historically, double data entry (DDE) has been considered the criterion standard for minimizing data entry errors. However, previous studies considered data entry alternatives through the limited lens of data accuracy. This study supplies information regarding data accuracy, operational efficiency, and cost for DDE and Optical Mark Recognition (OMR) for processing the Consumer Assessment of Healthcare Providers and Systems 5.0 survey. To assess data accuracy, we compared error rates for DDE and OMR by dividing the number of surveys that were arbitrated by the total number of surveys processed for each method. To assess operational efficiency, we tallied the cost of data entry for DDE and OMR after survey receipt. Costs were calculated on the basis of personnel, depreciation for capital equipment, and costs of noncapital equipment. The cost savings attributed to this method were negated by the operational efficiency of OMR. There was a statistically significant difference in arbitration rates between DDE and OMR; however, this statistical difference did not translate into practical significance. The potential benefits of DDE in terms of data accuracy did not outweigh the operational efficiency, and thereby financial savings, of OMR.
Chen, Cong; Beckman, Robert A
2009-01-01
This manuscript discusses optimal cost-effective designs for Phase II proof of concept (PoC) trials. Unlike a confirmatory registration trial, a PoC trial is exploratory in nature, and sponsors of such trials have the liberty to choose the type I error rate and the power. The decision is largely driven by the perceived probability of having a truly active treatment per patient exposure (a surrogate measure to development cost), which is naturally captured in an efficiency score to be defined in this manuscript. Optimization of the score function leads to type I error rate and power (and therefore sample size) for the trial that is most cost-effective. This in turn leads to cost-effective go-no go criteria for development decisions. The idea is applied to derive optimal trial-level, program-level, and franchise-level design strategies. The study is not meant to provide any general conclusion because the settings used are largely simplified for illustrative purposes. However, through the examples provided herein, a reader should be able to gain useful insight into these design problems and apply them to the design of their own PoC trials.
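The flavor of this optimization can be sketched by putting an explicit expected cost on each (type I error, power) choice: the trial's patient exposure, the cost of advancing an inactive drug on a false positive, and the value lost when an active drug is dropped on a false negative, weighted by a prior probability that the treatment is active. All of the costs, the prior, and the normal-approximation sample size below are invented placeholders, not the efficiency score defined in the manuscript.

```python
import numpy as np
from scipy import stats

def expected_cost(alpha, power, p_active=0.2, effect=0.4,
                  cost_per_patient=50_000, cost_phase3=30e6, value_missed=100e6):
    """Expected cost of a proof-of-concept go/no-go decision (illustrative).

    Sample size comes from the two-arm normal approximation for a one-sided
    test of a standardized effect `effect`. Costs charged: the PoC trial
    itself, a wasted Phase III after a false positive, and the value of an
    active drug lost to a false negative.
    """
    z = stats.norm.ppf(1 - alpha) + stats.norm.ppf(power)
    n = 2 * 2 * (z / effect) ** 2                       # total patients, two arms
    trial_cost = n * cost_per_patient
    cost_fp = (1 - p_active) * alpha * cost_phase3      # advance an inactive drug
    cost_fn = p_active * (1 - power) * value_missed     # drop an active drug
    return trial_cost + cost_fp + cost_fn, n

best = min(((expected_cost(a, p), a, p)
            for a in (0.05, 0.10, 0.20)
            for p in (0.70, 0.80, 0.90)),
           key=lambda t: t[0][0])
(cost, n), a, p = best
print(f"alpha = {a}, power = {p}, n ~ {n:.0f}, expected cost ~ ${cost/1e6:.1f}M")
```

Scanning a grid of type I error rates and powers and keeping the cheapest combination mirrors, in a crude way, the idea of letting the perceived probability of activity drive the design rather than fixing alpha at a conventional level.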
A Novel Low-Cost, Large Curvature Bend Sensor Based on a Bowden-Cable
Jeong, Useok; Cho, Kyu-Jin
2016-01-01
Bend sensors have been developed based on conductive ink, optical fiber, and electronic textiles. Each type has advantages and disadvantages in terms of performance, ease of use, and cost. This study proposes a new and low-cost bend sensor that can measure a wide range of accumulated bend angles with large curvatures. This bend sensor utilizes a Bowden-cable, which consists of a coil sheath and an inner wire. Displacement changes of the Bowden-cable’s inner wire, when the shape of the sheath changes, have been considered to be a position error in previous studies. However, this study takes advantage of this position error to detect the bend angle of the sheath. The bend angle of the sensor can be calculated from the displacement measurement of the sensing wire using a Hall-effect sensor or a potentiometer. Simulations and experiments have shown that the accumulated bend angle of the sensor is linearly related to the sensor signal, with an R-square value up to 0.9969 and a root mean square error of 2% of the full sensing range. The proposed sensor is not affected by a bend curvature of up to 80.0 m^-1, unlike previous bend sensors. The proposed sensor is expected to be useful for various applications, including motion capture devices, wearable robots, surgical devices, or generally any device that requires an affordable and low-cost bend sensor. PMID:27347959
Cost-effectiveness of the stream-gaging program in Missouri
Waite, L.A.
1987-01-01
This report documents the results of an evaluation of the cost effectiveness of the 1986 stream-gaging program in Missouri. Alternative methods of developing streamflow information and cost-effective resource allocation were used to evaluate the Missouri program. Alternative methods were considered statewide, but the cost effective resource allocation study was restricted to the area covered by the Rolla field headquarters. The average standard error of estimate for records of instantaneous discharge was 17 percent; assuming the 1986 budget and operating schedule, it was shown that this overall degree of accuracy could be improved to 16 percent by altering the 1986 schedule of station visitations. A minimum budget of $203,870, with a corresponding average standard error of estimate of 17 percent, is required to operate the 1986 program for the Rolla field headquarters; a budget of less than this would not permit proper service and maintenance of the stations or adequate definition of stage-discharge relations. The maximum budget analyzed was $418,870, which resulted in an average standard error of estimate of 14 percent. Improved instrumentation can have a positive effect on streamflow uncertainties by decreasing lost records. An earlier study of data uses found that data uses were sufficient to justify continued operation of all stations. One of the stations investigated, Current River at Doniphan (07068000), was suitable for the application of alternative methods for simulating discharge records. However, the station was continued because of data use requirements. (Author's abstract)
NASA Astrophysics Data System (ADS)
Almalaq, Yasser; Matin, Mohammad A.
2014-09-01
The broadband passive optical network (BPON) has the ability to support high-speed data, voice, and video services for home and small-business customers. In this work, the performance of a bi-directional BPON is analyzed for both downstream and upstream traffic with the help of an erbium-doped fiber amplifier (EDFA). A key advantage of BPON is reduced cost: because BPON uses a passive splitter, the cost of maintenance between the provider and the customer side remains reasonable. In the proposed research, the BPON has been tested using a bit error rate (BER) analyzer, which reports the maximum Q factor, the minimum bit error rate, and the eye height.
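The Q factor and bit error rate reported by a BER analyzer are linked, under the usual Gaussian-noise approximation, by BER = 0.5·erfc(Q/√2). The snippet below simply evaluates that textbook relation in both directions; it does not model the BPON link itself.

```python
from math import sqrt
from scipy.special import erfc, erfcinv

def ber_from_q(q):
    """Gaussian-noise approximation used by BER analyzers: BER = 0.5*erfc(Q/sqrt(2))."""
    return 0.5 * erfc(q / sqrt(2))

def q_from_ber(ber):
    """Inverse relation: the Q factor required for a target bit error rate."""
    return sqrt(2) * erfcinv(2 * ber)

print(ber_from_q(6.0))      # Q = 6      -> BER ~ 1e-9
print(q_from_ber(1e-12))    # BER = 1e-12 -> Q ~ 7.0
```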
Global cost of correcting vision impairment from uncorrected refractive error.
Fricke, T R; Holden, B A; Wilson, D A; Schlenther, G; Naidoo, K S; Resnikoff, S; Frick, K D
2012-10-01
To estimate the global cost of establishing and operating the educational and refractive care facilities required to provide care to all individuals who currently have vision impairment resulting from uncorrected refractive error (URE). The global cost of correcting URE was estimated using data on the population, the prevalence of URE and the number of existing refractive care practitioners in individual countries, the cost of establishing and operating educational programmes for practitioners and the cost of establishing and operating refractive care facilities. The assumptions made ensured that costs were not underestimated and an upper limit to the costs was derived using the most expensive extreme for each assumption. There were an estimated 158 million cases of distance vision impairment and 544 million cases of near vision impairment caused by URE worldwide in 2007. Approximately 47 000 additional full-time functional clinical refractionists and 18 000 ophthalmic dispensers would be required to provide refractive care services for these individuals. The global cost of educating the additional personnel and of establishing, maintaining and operating the refractive care facilities needed was estimated to be around 20 000 million United States dollars (US$) and the upper-limit cost was US$ 28 000 million. The estimated loss in global gross domestic product due to distance vision impairment caused by URE was US$ 202 000 million annually. The cost of establishing and operating the educational and refractive care facilities required to deal with vision impairment resulting from URE was a small proportion of the global loss in productivity associated with that vision impairment.
A Case for Soft Error Detection and Correction in Computational Chemistry.
van Dam, Hubertus J J; Vishnu, Abhinav; de Jong, Wibe A
2013-09-10
High performance computing platforms are expected to deliver 10^18 floating-point operations per second by the year 2022 through the deployment of millions of cores. Even if every core is highly reliable, the sheer number of them means that the mean time between failures becomes so short that most application runs will suffer at least one fault. In particular, soft errors caused by intermittent incorrect behavior of the hardware are a concern, as they lead to silent data corruption. In this paper we investigate the impact of soft errors on optimization algorithms using Hartree-Fock as a particular example. Optimization algorithms iteratively reduce the error in the initial guess to reach the intended solution. Therefore they may intuitively appear to be resilient to soft errors. Our results show that this is true for soft errors of small magnitudes but not for large errors. We suggest error detection and correction mechanisms for different classes of data structures. The results obtained with these mechanisms indicate that we can correct more than 95% of the soft errors at moderate increases in the computational cost.
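As a generic illustration of detection and correction for a read-mostly data structure (not the Hartree-Fock-specific mechanisms proposed in the paper), the sketch below guards an array with a checksum and a redundant copy: a flipped bit is detected by the checksum mismatch and repaired from the copy. The class name ProtectedArray and all parameters are invented.

```python
import numpy as np
import zlib

class ProtectedArray:
    """Toy soft-error guard for a read-mostly array (e.g., tabulated constants).

    Keeps a CRC32 checksum and a spare copy; verify() detects silent
    corruption and restores the data from the copy.
    """

    def __init__(self, data):
        self.data = np.array(data, dtype=np.float64)
        self._copy = self.data.copy()
        self._crc = zlib.crc32(self.data.tobytes())

    def verify(self):
        """Return True if a corruption was detected (and repaired)."""
        if zlib.crc32(self.data.tobytes()) == self._crc:
            return False
        self.data[:] = self._copy              # correct from the redundant copy
        return True

# Simulate a single bit flip in memory and recover from it.
arr = ProtectedArray(np.linspace(0.0, 1.0, 1000))
raw = arr.data.view(np.uint64)
raw[123] ^= np.uint64(1) << np.uint64(52)      # flip one mantissa bit in place
print("corruption detected:", arr.verify())
print("restored correctly:", np.array_equal(arr.data, np.linspace(0.0, 1.0, 1000)))
```

Duplicate-and-compare storage like this roughly doubles memory for the protected structure, which is the kind of moderate overhead trade-off the abstract alludes to.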
New dimension analyses with error analysis for quaking aspen and black spruce
NASA Technical Reports Server (NTRS)
Woods, K. D.; Botkin, D. B.; Feiveson, A. H.
1987-01-01
Dimension analysis for black spruce in wetland stands and trembling aspen are reported, including new approaches in error analysis. Biomass estimates for sacrificed trees have standard errors of 1 to 3%; standard errors for leaf areas are 10 to 20%. Bole biomass estimation accounts for most of the error for biomass, while estimation of branch characteristics and area/weight ratios accounts for the leaf area error. Error analysis provides insight for cost effective design of future analyses. Predictive equations for biomass and leaf area, with empirically derived estimators of prediction error, are given. Systematic prediction errors for small aspen trees and for leaf area of spruce from different site-types suggest a need for different predictive models within species. Predictive equations are compared with published equations; significant differences may be due to species responses to regional or site differences. Proportional contributions of component biomass in aspen change in ways related to tree size and stand development. Spruce maintains comparatively constant proportions with size, but shows changes corresponding to site. This suggests greater morphological plasticity of aspen and significance for spruce of nutrient conditions.
Valuing urban open space using the travel-cost method and the implications of measurement error.
Hanauer, Merlin M; Reid, John
2017-08-01
Urbanization has placed pressure on open space within and adjacent to cities. In recent decades, a greater awareness has developed to the fact that individuals derive multiple benefits from urban open space. Given the location, there is often a high opportunity cost to preserving urban open space, thus it is important for both public and private stakeholders to justify such investments. The goals of this study are twofold. First, we use detailed surveys and precise, accessible, mapping methods to demonstrate how travel-cost methods can be applied to the valuation of urban open space. Second, we assess the degree to which typical methods of estimating travel times, and thus travel costs, introduce bias to the estimates of welfare. The site we study is Taylor Mountain Regional Park, a 1100-acre space located immediately adjacent to Santa Rosa, California, which is the largest city (∼170,000 population) in Sonoma County and lies 50 miles north of San Francisco. We estimate that the average per trip access value (consumer surplus) is $13.70. We also demonstrate that typical methods of measuring travel costs significantly understate these welfare measures. Our study provides policy-relevant results and highlights the sensitivity of urban open space travel-cost studies to bias stemming from travel-cost measurement error. Copyright © 2017 Elsevier Ltd. All rights reserved.
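A common way to implement the travel-cost method for a single site, offered here only as an illustration rather than the authors' exact model, is a count-data (Poisson) demand regression of annual trips on travel cost; per-trip consumer surplus is then -1/beta, where beta is the travel-cost coefficient. The survey numbers below are invented, and the statsmodels package is assumed.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical single-site travel-cost survey: annual trips and round-trip
# travel cost (drive time valued at a wage fraction plus fuel); all values
# are invented, not the Taylor Mountain survey data.
rng = np.random.default_rng(5)
travel_cost = rng.uniform(2, 60, 800)
trips = rng.poisson(np.exp(2.0 - 0.06 * travel_cost))

X = sm.add_constant(travel_cost)
fit = sm.GLM(trips, X, family=sm.families.Poisson()).fit()
beta_tc = fit.params[1]

# In the count-data travel-cost model, per-trip consumer surplus = -1 / beta_tc.
print(f"travel-cost coefficient: {beta_tc:.4f}")
print(f"access value per trip:   ${-1.0 / beta_tc:.2f}")
```

Because the surplus estimate is the reciprocal of the travel-cost coefficient, any systematic error in measuring travel costs feeds directly into the welfare figure, which is the sensitivity the study emphasizes.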
Task switching costs in preschool children and adults.
Peng, Anna; Kirkham, Natasha Z; Mareschal, Denis
2018-08-01
Past research investigating cognitive flexibility has shown that preschool children make many perseverative errors in tasks that require switching between different sets of rules. However, this inflexibility might not necessarily hold with easier tasks. The current study investigated the developmental differences in cognitive flexibility using a task-switching procedure that compared reaction times and accuracy in 4- and 6-year-olds with those in adults. The experiment involved simple target detection tasks and was intentionally designed in a way that the stimulus and response conflicts were minimal together with a long preparation window. Global mixing costs (performance costs when multiple tasks are relevant in a context), and local switch costs (performance costs due to switching to an alternative task) are typically thought to engage endogenous control processes. If this is the case, we should observe developmental differences with both of these costs. Our results show, however, that when the accuracy was good, there were no age differences in cognitive flexibility (i.e., the ability to manage multiple tasks and to switch between tasks) between children and adults. Even though preschool children had slower reaction times and were less accurate, the mixing and switch costs associated with task switching were not reliably larger for preschool children. Preschool children did, however, show more commission errors and greater response repetition effects than adults, which may reflect differences in inhibitory control. Copyright © 2018 Elsevier Inc. All rights reserved.
Gorban, A N; Mirkes, E M; Zinovyev, A
2016-12-01
Most machine learning approaches have stemmed from applying the principle of minimizing the mean squared distance, which relies on computationally efficient quadratic optimization methods. However, when faced with high-dimensional and noisy data, quadratic error functionals demonstrate many weaknesses, including high sensitivity to contaminating factors and the curse of dimensionality. Therefore, many recent applications in machine learning have exploited the properties of non-quadratic error functionals based on the L1 norm, or even sub-linear potentials corresponding to the quasinorms Lp (0 < p < 1).
A comparison between different error modeling of MEMS applied to GPS/INS integrated systems.
Quinchia, Alex G; Falco, Gianluca; Falletti, Emanuela; Dovis, Fabio; Ferrer, Carles
2013-07-24
Advances in the development of micro-electromechanical systems (MEMS) have made possible the fabrication of cheap and small dimension accelerometers and gyroscopes, which are being used in many applications where the global positioning system (GPS) and the inertial navigation system (INS) integration is carried out, i.e., identifying track defects, terrestrial and pedestrian navigation, unmanned aerial vehicles (UAVs), stabilization of many platforms, etc. Although these MEMS sensors are low-cost, they present different errors, which degrade the accuracy of the navigation systems in a short period of time. Therefore, a suitable modeling of these errors is necessary in order to minimize them and, consequently, improve the system performance. In this work, the techniques currently most used to analyze the stochastic errors that affect these sensors are shown and compared: we examine in detail the autocorrelation, the Allan variance (AV) and the power spectral density (PSD) techniques. Subsequently, an analysis and modeling of the inertial sensors, which combines autoregressive (AR) filters and wavelet de-noising, is also achieved. Since a low-cost INS (MEMS grade) presents error sources with short-term (high-frequency) and long-term (low-frequency) components, we introduce a method that compensates for these error terms by doing a complete analysis of Allan variance, wavelet de-noising and the selection of the level of decomposition for a suitable combination between these techniques. Finally, in order to assess the stochastic models obtained with these techniques, the Extended Kalman Filter (EKF) of a loosely-coupled GPS/INS integration strategy is augmented with different states. Results show a comparison between the proposed method and the traditional sensor error models under GPS signal blockages using real data collected in urban roadways.
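The Allan variance step of the pipeline described above is standard and easy to reproduce; the sketch below computes the overlapped Allan variance of a rate signal (e.g., a static gyro recording), here with simulated white noise only. The AR-filter and wavelet de-noising stages of the method are not shown.

```python
import numpy as np

def allan_variance(rate, dt, m_list):
    """Overlapped Allan variance of a rate signal (e.g., gyro output in deg/s).

    rate   : 1-D array of samples taken every dt seconds
    m_list : cluster sizes (in samples) at which to evaluate the variance
    Returns (tau, avar) arrays.
    """
    theta = np.cumsum(rate) * dt          # integrated signal (e.g., angle)
    n = len(theta)
    taus, avars = [], []
    for m in m_list:
        if 2 * m >= n:
            break
        d = theta[2 * m:] - 2 * theta[m:n - m] + theta[:n - 2 * m]
        tau = m * dt
        avars.append(np.sum(d ** 2) / (2.0 * tau ** 2 * (n - 2 * m)))
        taus.append(tau)
    return np.array(taus), np.array(avars)

# White-noise-only example: the Allan deviation should fall off roughly as 1/sqrt(tau).
rng = np.random.default_rng(1)
rate = 0.05 * rng.standard_normal(200_000)     # deg/s, 100 Hz for 2000 s
m_list = np.unique(np.logspace(0, 4, 30).astype(int))
tau, avar = allan_variance(rate, dt=0.01, m_list=m_list)
print(np.column_stack([tau, np.sqrt(avar)])[:5])
```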
Performability modeling based on real data: A case study
NASA Technical Reports Server (NTRS)
Hsueh, M. C.; Iyer, R. K.; Trivedi, K. S.
1988-01-01
Described is a measurement-based performability model based on error and resource usage data collected on a multiprocessor system. A method for identifying the model structure is introduced and the resulting model is validated against real data. Model development from the collection of raw data to the estimation of the expected reward is described. Both normal and error behavior of the system are characterized. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model system behavior. A reward function, based on the service rate and the error rate in each state, is then defined in order to estimate the performability of the system and to depict the cost of different types of errors.
Performability modeling based on real data: A case study
NASA Technical Reports Server (NTRS)
Hsueh, M. C.; Iyer, R. K.; Trivedi, K. S.
1987-01-01
Described is a measurement-based performability model based on error and resource usage data collected on a multiprocessor system. A method for identifying the model structure is introduced and the resulting model is validated against real data. Model development from the collection of raw data to the estimation of the expected reward is described. Both normal and error behavior of the system are characterized. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A reward function, based on the service rate and the error rate in each state, is then defined in order to estimate the performability of the system and to depict the cost of different types of errors.
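As a toy illustration of the reward-function idea described above (state-dependent service and error rates combined into an expected reward over a semi-Markov model), the sketch below uses an invented three-state model; the states, holding times, rates and cost weight are placeholders, not the measured system.

```python
import numpy as np

# Hypothetical 3-state semi-Markov model: normal, degraded, error-recovery.
P = np.array([[0.0, 0.7, 0.3],       # embedded-chain transition probabilities
              [0.6, 0.0, 0.4],
              [0.9, 0.1, 0.0]])
hold = np.array([50.0, 10.0, 2.0])        # mean holding time in each state (s)
service = np.array([100.0, 60.0, 0.0])    # useful work per second in each state
error_rate = np.array([0.0, 0.5, 5.0])    # errors per second in each state
error_cost = 3.0                          # work-units lost per error

# Stationary distribution of the embedded chain (left eigenvector of P for eigenvalue 1).
w, v = np.linalg.eig(P.T)
pi_embed = np.real(v[:, np.argmin(np.abs(w - 1))])
pi_embed /= pi_embed.sum()

# Time-stationary probabilities weight embedded probabilities by holding times.
pi_time = pi_embed * hold / (pi_embed @ hold)

# Reward per state and expected (performability-style) reward rate.
reward = service - error_cost * error_rate
print("expected reward rate:", pi_time @ reward)
```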
NASA Astrophysics Data System (ADS)
Smith, Gennifer T.; Dwork, Nicholas; Khan, Saara A.; Millet, Matthew; Magar, Kiran; Javanmard, Mehdi; Bowden, Audrey K.
2017-03-01
Urinalysis dipsticks were designed to revolutionize urine-based medical diagnosis. They are cheap, extremely portable, and have multiple assays patterned on a single platform. They were also meant to be incredibly easy to use. Unfortunately, there are many aspects in both the preparation and the analysis of the dipsticks that are plagued by user error. This high error is one reason that dipsticks have failed to flourish in both the at-home market and in low-resource settings. Sources of error include: inaccurate volume deposition, varying lighting conditions, inconsistent timing measurements, and misinterpreted color comparisons. We introduce a novel manifold and companion software for dipstick urinalysis that eliminates the aforementioned error sources. A micro-volume slipping manifold ensures precise sample delivery, an opaque acrylic box guarantees consistent lighting conditions, a simple sticker-based timing mechanism maintains accurate timing, and custom software that processes video data captured by a mobile phone ensures proper color comparisons. We show that the results obtained with the proposed device are as accurate and consistent as a properly executed dip-and-wipe method, the industry gold-standard, suggesting the potential for this strategy to enable confident urinalysis testing. Furthermore, the proposed all-acrylic slipping manifold is reusable and low in cost, making it a potential solution for at-home users and low-resource settings.
Karapinar-Çarkit, Fatma; Borgsteede, Sander D; Zoer, Jan; Egberts, Toine C G; van den Bemt, Patricia M L A; van Tulder, Maurits
2012-03-01
Medication reconciliation aims to correct discrepancies in medication use between health care settings and to check the quality of pharmacotherapy to improve effectiveness and safety. In addition, medication reconciliation might also reduce costs. To evaluate the effect of medication reconciliation on medication costs after hospital discharge in relation to hospital pharmacy labor costs. A prospective observational study was performed. Patients discharged from the pulmonology department were included. A pharmacy team assessed medication errors prevented by medication reconciliation. Interventions were classified into 3 categories: correcting hospital formulary-induced medication changes (eg, reinstating less costly generic drugs used before admission), optimizing pharmacotherapy (eg, discontinuing unnecessary laxative), and eliminating discrepancies (eg, restarting omitted preadmission medication). Because eliminating discrepancies does not represent real costs to society (before hospitalization, the patient was also using the medication), these medication costs were not included in the cost calculation. Medication costs at 1 month and 6 months after hospital discharge and the associated labor costs were assessed using descriptive statistics and scenario analyses. For the 6-month extrapolation, only medication intended for chronic use was included. Two hundred sixty-two patients were included. Correcting hospital formulary changes saved €1.63/patient (exchange rate: EUR 1 = USD 1.3443) in medication costs at 1 month after discharge and €9.79 at 6 months. Optimizing pharmacotherapy saved €20.13/patient in medication costs at 1 month and €86.86 at 6 months. The associated labor costs for performing medication reconciliation were €41.04/patient. Medication cost savings from correcting hospital formulary-induced changes and optimizing of pharmacotherapy (€96.65/patient) outweighed the labor costs at 6 months extrapolation by €55.62/patient (sensitivity analysis €37.25-71.10). Preventing medication errors through medication reconciliation results in higher benefits than the costs related to the net time investment.
The development of a public optometry system in Mozambique: a Cost Benefit Analysis.
Thompson, Stephen; Naidoo, Kovin; Harris, Geoff; Bilotto, Luigi; Ferrão, Jorge; Loughman, James
2014-09-23
The economic burden of uncorrected refractive error (URE) is thought to be high in Mozambique, largely as a consequence of the lack of resources and systems to tackle this largely avoidable problem. The Mozambique Eyecare Project (MEP) has established the first optometry training and human resource deployment initiative to address the burden of URE in Lusophone Africa. The nature of the MEP programme provides the opportunity to determine, using Cost Benefit Analysis (CBA), whether investing in the establishment and delivery of a comprehensive system for optometry human resource development and public sector deployment is economically justifiable for Lusophone Africa. A CBA methodology was applied across the period 2009-2049. Costs associated with establishing and operating a school of optometry, and a programme to address uncorrected refractive error, were included. Benefits were calculated using a human capital approach to valuing sight. Disability weightings from the Global Burden of Disease study were applied. Costs were subtracted from benefits to provide the net societal benefit, which was discounted to provide the net present value using a 3% discount rate. Using the most recently published disability weightings, the potential exists, through the correction of URE in 24.3 million potentially economically productive persons, to achieve a net present value societal benefit of up to $1.1 billion by 2049, at a Benefit-Cost ratio of 14:1. When CBA assumptions are varied as part of the sensitivity analysis, the results suggest the societal benefit could lie in the range of $649 million to $9.6 billion by 2049. This study demonstrates that a programme designed to address the burden of refractive error in Mozambique is economically justifiable in terms of the increased productivity that would result due to its implementation.
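A minimal sketch of the CBA mechanics described above: discount annual benefit and cost streams at 3% and report the net present value and benefit-cost ratio. The yearly figures below are placeholders for illustration, not the study's data.

```python
# Discounted cost-benefit comparison at a 3% rate, as in the study design above.
def present_value(cashflows, rate=0.03):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

years = 40
annual_costs = [2.0e6] * years                        # e.g., training plus service delivery
annual_benefits = [0.0] * 5 + [9.0e6] * (years - 5)   # productivity gains once graduates deploy

pv_costs = present_value(annual_costs)
pv_benefits = present_value(annual_benefits)
print(f"NPV = {pv_benefits - pv_costs:,.0f}, benefit-cost ratio = {pv_benefits / pv_costs:.1f}")
```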
Stetson, Peter D.; McKnight, Lawrence K.; Bakken, Suzanne; Curran, Christine; Kubose, Tate T.; Cimino, James J.
2002-01-01
Medical errors are common, costly and often preventable. Work in understanding the proximal causes of medical errors demonstrates that systems failures predispose to adverse clinical events. Most of these systems failures are due to lack of appropriate information at the appropriate time during the course of clinical care. Problems with clinical communication are common proximal causes of medical errors. We have begun a project designed to measure the impact of wireless computing on medical errors. We report here on our efforts to develop an ontology representing the intersection of medical errors, information needs and the communication space. We will use this ontology to support the collection, storage and interpretation of project data. The ontology’s formal representation of the concepts in this novel domain will help guide the rational deployment of our informatics interventions. A real-life scenario is evaluated using the ontology in order to demonstrate its utility.
A new anisotropic mesh adaptation method based upon hierarchical a posteriori error estimates
NASA Astrophysics Data System (ADS)
Huang, Weizhang; Kamenski, Lennard; Lang, Jens
2010-03-01
A new anisotropic mesh adaptation strategy for finite element solution of elliptic differential equations is presented. It generates anisotropic adaptive meshes as quasi-uniform ones in some metric space, with the metric tensor being computed based on hierarchical a posteriori error estimates. A global hierarchical error estimate is employed in this study to obtain reliable directional information of the solution. Instead of solving the global error problem exactly, which is costly in general, we solve it iteratively using the symmetric Gauß-Seidel method. Numerical results show that a few GS iterations are sufficient for obtaining a reasonably good approximation to the error for use in anisotropic mesh adaptation. The new method is compared with several strategies using local error estimators or recovered Hessians. Numerical results are presented for a selection of test examples and a mathematical model for heat conduction in a thermal battery with large orthotropic jumps in the material coefficients.
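The approximate solve mentioned above (a few symmetric Gauss-Seidel sweeps instead of an exact, costly global solve) looks like the following for a generic linear system; the matrix here is a toy 1-D Poisson operator, not the hierarchical error problem itself.

```python
import numpy as np

def symmetric_gauss_seidel(A, b, x0=None, sweeps=3):
    """A few symmetric Gauss-Seidel sweeps (forward then backward) for A x = b."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.copy()
    for _ in range(sweeps):
        for i in range(n):                   # forward sweep
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
        for i in reversed(range(n)):         # backward sweep
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# 1-D Poisson-like test matrix: a cheap approximate solve is often good enough.
n = 20
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = symmetric_gauss_seidel(A, b, sweeps=5)
print("residual after 5 SGS sweeps:", np.linalg.norm(b - A @ x))
```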
Human Reliability and the Cost of Doing Business
NASA Technical Reports Server (NTRS)
DeMott, D. L.
2014-01-01
Human error cannot be defined unambiguously in advance of it happening; an action often becomes an error only after the fact. The same action can result in a tragic accident in one situation or a heroic act given a more favorable outcome. People often forget that we employ humans in business and industry for their flexibility and capability to change when needed. In complex systems, operations are driven by the system's specifications and structure; people provide the flexibility to make it work. Human error has been reported as being responsible for 60%-80% of failures, accidents and incidents in high-risk industries. We don't have to accept that all human errors are inevitable. Through the use of some basic techniques, many potential human error events can be addressed, and there are actions that can be taken to reduce the risk of human error.
Hessian matrix approach for determining error field sensitivity to coil deviations
NASA Astrophysics Data System (ADS)
Zhu, Caoxiang; Hudson, Stuart R.; Lazerson, Samuel A.; Song, Yuntao; Wan, Yuanxi
2018-05-01
The presence of error fields has been shown to degrade plasma confinement and drive instabilities. Error fields can arise from many sources, but are predominantly attributed to deviations in the coil geometry. In this paper, we introduce a Hessian matrix approach for determining error field sensitivity to coil deviations. A primary cost function used for designing stellarator coils, the surface integral of normalized normal field errors, was adopted to evaluate the deviation of the generated magnetic field from the desired magnetic field. The FOCUS code (Zhu et al 2018 Nucl. Fusion 58 016008) is utilized to provide fast and accurate calculations of the Hessian. The sensitivities of error fields to coil displacements are then determined by the eigenvalues of the Hessian matrix. A proof-of-principle example is given on a CNT-like configuration. We anticipate that this new method could provide information to avoid dominant coil misalignments and simplify coil designs for stellarators.
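FOCUS computes the Hessian quickly and accurately; as a generic stand-in, the sketch below builds a finite-difference Hessian of a scalar cost function and ranks perturbation directions by its eigenvalues, which is the sensitivity logic described above. The cost function here is an invented toy, not the normal-field error integral.

```python
import numpy as np

def numerical_hessian(f, x0, eps=1e-4):
    """Central-difference Hessian of a scalar cost function f at x0."""
    n = len(x0)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i = np.zeros(n); e_i[i] = eps
            e_j = np.zeros(n); e_j[j] = eps
            H[i, j] = (f(x0 + e_i + e_j) - f(x0 + e_i - e_j)
                       - f(x0 - e_i + e_j) + f(x0 - e_i - e_j)) / (4 * eps ** 2)
    return H

# Toy quadratic cost standing in for the normalized normal-field error integral.
def cost(x):
    return 3 * x[0] ** 2 + 0.1 * x[1] ** 2 + 0.5 * x[0] * x[2] + x[2] ** 2

H = numerical_hessian(cost, np.zeros(3))
eigval, eigvec = np.linalg.eigh(H)
print("eigenvalues (sensitivity):", eigval)
print("most sensitive perturbation direction:", eigvec[:, -1])
```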
NASA Technical Reports Server (NTRS)
Goodrich, John W.
2009-01-01
In this paper we show by means of numerical experiments that the error introduced in a numerical domain because of a Perfectly Matched Layer or Damping Layer boundary treatment can be controlled. These experimental demonstrations are for acoustic propagation with the Linearized Euler Equations with both uniform and steady jet flows. The propagating signal is driven by a time harmonic pressure source. Combinations of Perfectly Matched and Damping Layers are used with different damping profiles. These layer and profile combinations allow the relative error introduced by a layer to be kept as small as desired, in principle. Tradeoffs between error and cost are explored.
Simplification of the Kalman filter for meteorological data assimilation
NASA Technical Reports Server (NTRS)
Dee, Dick P.
1991-01-01
The paper proposes a new statistical method of data assimilation that is based on a simplification of the Kalman filter equations. The forecast error covariance evolution is approximated simply by advecting the mass-error covariance field, deriving the remaining covariances geostrophically, and accounting for external model-error forcing only at the end of each forecast cycle. This greatly reduces the cost of computation of the forecast error covariance. In simulations with a linear, one-dimensional shallow-water model and data generated artificially, the performance of the simplified filter is compared with that of the Kalman filter and the optimal interpolation (OI) method. The simplified filter produces analyses that are nearly optimal, and represents a significant improvement over OI.
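For reference, a single forecast/analysis cycle of a standard linear Kalman filter is sketched below; the covariance-forecast line marks the costly step that the simplified filter replaces by advecting the mass-error covariance and deriving the remaining covariances geostrophically. The matrices here are toy placeholders, not a shallow-water model.

```python
import numpy as np

def kalman_cycle(x, P, M, Q, H, R, y):
    # Forecast step
    x_f = M @ x
    P_f = M @ P @ M.T + Q          # <- the costly covariance forecast the simplified filter avoids
    # Analysis (update) step
    S = H @ P_f @ H.T + R
    K = P_f @ H.T @ np.linalg.inv(S)
    x_a = x_f + K @ (y - H @ x_f)
    P_a = (np.eye(len(x)) - K @ H) @ P_f
    return x_a, P_a

n = 4
x = np.zeros(n); P = np.eye(n)
M = np.eye(n) + 0.1 * np.eye(n, k=1)     # toy dynamics
Q = 0.01 * np.eye(n)
H = np.eye(2, n)                          # observe the first two state components
R = 0.1 * np.eye(2)
y = np.array([1.0, 0.5])
x, P = kalman_cycle(x, P, M, Q, H, R, y)
print(x)
```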
A Robust Nonlinear Observer for Real-Time Attitude Estimation Using Low-Cost MEMS Inertial Sensors
Guerrero-Castellanos, José Fermi; Madrigal-Sastre, Heberto; Durand, Sylvain; Torres, Lizeth; Muñoz-Hernández, German Ardul
2013-01-01
This paper deals with the attitude estimation of a rigid body equipped with angular velocity sensors and reference vector sensors. A quaternion-based nonlinear observer is proposed in order to fuse all information sources and to obtain an accurate estimation of the attitude. It is shown that the observer error dynamics can be separated into two passive subsystems connected in “feedback”. Then, this property is used to show that the error dynamics is input-to-state stable when the measurement disturbance is seen as an input and the error as the state. These results allow one to affirm that the observer is “robustly stable”. The proposed observer is evaluated in real-time with the design and implementation of an Attitude and Heading Reference System (AHRS) based on low-cost MEMS (Micro-Electro-Mechanical Systems) Inertial Measure Unit (IMU) and magnetic sensors and a 16-bit microcontroller. The resulting estimates are compared with a high precision motion system to demonstrate its performance. PMID:24201316
Tuning support vector machines for minimax and Neyman-Pearson classification.
Davenport, Mark A; Baraniuk, Richard G; Scott, Clayton D
2010-10-01
This paper studies the training of support vector machine (SVM) classifiers with respect to the minimax and Neyman-Pearson criteria. In principle, these criteria can be optimized in a straightforward way using a cost-sensitive SVM. In practice, however, because these criteria require especially accurate error estimation, standard techniques for tuning SVM parameters, such as cross-validation, can lead to poor classifier performance. To address this issue, we first prove that the usual cost-sensitive SVM, here called the 2C-SVM, is equivalent to another formulation called the 2nu-SVM. We then exploit a characterization of the 2nu-SVM parameter space to develop a simple yet powerful approach to error estimation based on smoothing. In an extensive experimental study, we demonstrate that smoothing significantly improves the accuracy of cross-validation error estimates, leading to dramatic performance gains. Furthermore, we propose coordinate descent strategies that offer significant gains in computational efficiency, with little to no loss in performance.
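The paper's contribution is the 2C-SVM/2nu-SVM equivalence and the smoothed error estimation; the snippet below only illustrates the starting point, a cost-sensitive SVM in which asymmetric class weights shift the operating point toward a Neyman-Pearson-style constraint (scikit-learn assumed, data simulated).

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Two Gaussian classes; class 0 is the "null" class whose false-alarm rate we constrain.
X = np.vstack([rng.normal(0.0, 1.0, size=(300, 2)), rng.normal(1.5, 1.0, size=(300, 2))])
y = np.array([0] * 300 + [1] * 300)

# Asymmetric misclassification costs: penalize errors on class 0 (false alarms) more.
clf = SVC(kernel="rbf", C=1.0, class_weight={0: 5.0, 1: 1.0}).fit(X, y)

pred = clf.predict(X)
false_alarm = np.mean(pred[y == 0] == 1)
miss = np.mean(pred[y == 1] == 0)
print(f"false-alarm rate: {false_alarm:.3f}, miss rate: {miss:.3f}")
```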
Riga, Marina; Vozikis, Athanassios; Pollalis, Yannis; Souliotis, Kyriakos
2015-04-01
The economic crisis in Greece makes it necessary to resolve problems concerning both the spiralling cost of, and quality assurance in, the health system. The detection and analysis of patient adverse events and medical errors are considered crucial elements of this effort. The implementation of MERIS embodies a mandatory module, which adopts the trigger tool methodology for measuring adverse events and medical errors in an intensive care unit (ICU) environment, and a voluntary module based on a web-based public reporting methodology. A pilot implementation of MERIS running in a public hospital identified 35 adverse events, with approx. 12 additional hospital days and an extra healthcare cost of €12,000 per adverse event, or about €312,000 per annum for ICU costs only. At the same time, the voluntary module unveiled 510 reports on adverse events submitted by citizens or patients. MERIS has been evaluated as a comprehensive and effective system; it succeeded in detecting the main factors that cause adverse events and disclosed severe omissions in the Greek health system. MERIS may be incorporated and run efficiently nationally, adapted to the needs and peculiarities of each hospital or clinic. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Process Approach to Determining Quality Inspection Deployment
2015-06-08
The Deming rule, explained in more detail in Appendix B of the report, compares the probability of error with the cost of inspection and the cost or risk of repairing defects that escape inspection, in order to determine whether inspection is effective and whether reducing inspectors is justified.
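Only fragments of this report's abstract survive above, so the following is a hedged sketch: it encodes the standard all-or-none "kp rule" commonly attributed to Deming (inspect everything when the defect probability exceeds the ratio of per-item inspection cost to escaped-defect cost). The report's own formulation lives in its Appendix B, and the variable names here are assumptions.

```python
# All-or-none kp rule (sketch):
#   p  : probability an incoming item is defective
#   k1 : cost to inspect one item
#   k2 : downstream cost if a defective item escapes inspection
# Total cost is minimized by inspecting everything when p > k1/k2 and nothing otherwise.

def inspection_policy(p, k1, k2):
    breakeven = k1 / k2
    return ("inspect 100%" if p > breakeven else "no inspection"), breakeven

policy, breakeven = inspection_policy(p=0.02, k1=0.50, k2=40.0)
print(policy, f"(break-even defect rate = {breakeven:.4f})")
```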
Pollution, Health, and Avoidance Behavior: Evidence from the Ports of Los Angeles
ERIC Educational Resources Information Center
Moretti, Enrico; Neidell, Matthew
2011-01-01
A pervasive problem in estimating the costs of pollution is that optimizing individuals may compensate for increases in pollution by reducing their exposure, resulting in estimates that understate the full welfare costs. To account for this issue, measurement error, and environmental confounding, we estimate the health effects of ozone using daily…
Empirical evidence for resource-rational anchoring and adjustment.
Lieder, Falk; Griffiths, Thomas L; M Huys, Quentin J; Goodman, Noah D
2018-04-01
People's estimates of numerical quantities are systematically biased towards their initial guess. This anchoring bias is usually interpreted as sign of human irrationality, but it has recently been suggested that the anchoring bias instead results from people's rational use of their finite time and limited cognitive resources. If this were true, then adjustment should decrease with the relative cost of time. To test this hypothesis, we designed a new numerical estimation paradigm that controls people's knowledge and varies the cost of time and error independently while allowing people to invest as much or as little time and effort into refining their estimate as they wish. Two experiments confirmed the prediction that adjustment decreases with time cost but increases with error cost regardless of whether the anchor was self-generated or provided. These results support the hypothesis that people rationally adapt their number of adjustments to achieve a near-optimal speed-accuracy tradeoff. This suggests that the anchoring bias might be a signature of the rational use of finite time and limited cognitive resources rather than a sign of human irrationality.
Global optimization method based on ray tracing to achieve optimum figure error compensation
NASA Astrophysics Data System (ADS)
Liu, Xiaolin; Guo, Xuejia; Tang, Tianjin
2017-02-01
Figure error would degrade the performance of optical system. When predicting the performance and performing system assembly, compensation by clocking of optical components around the optical axis is a conventional but user-dependent method. Commercial optical software cannot optimize this clocking. Meanwhile existing automatic figure-error balancing methods can introduce approximate calculation error and the build process of optimization model is complex and time-consuming. To overcome these limitations, an accurate and automatic global optimization method of figure error balancing is proposed. This method is based on precise ray tracing to calculate the wavefront error, not approximate calculation, under a given elements' rotation angles combination. The composite wavefront error root-mean-square (RMS) acts as the cost function. Simulated annealing algorithm is used to seek the optimal combination of rotation angles of each optical element. This method can be applied to all rotational symmetric optics. Optimization results show that this method is 49% better than previous approximate analytical method.
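As a sketch of the optimization loop described above, the code below runs simulated annealing over element clocking angles against a synthetic stand-in for the ray-traced composite RMS wavefront error; the cost function and cooling schedule are assumptions for illustration, not the paper's ray-tracing model.

```python
import numpy as np

rng = np.random.default_rng(3)

def rms_wavefront_error(angles):
    """Stand-in for the composite wavefront RMS; the real method evaluates this
    by exact ray tracing for a given combination of clocking angles."""
    return 1.0 + 0.3 * np.cos(angles[0] - 0.8) + 0.2 * np.cos(angles[1] + angles[2] - 2.0)

def anneal(cost, n_angles=3, steps=5000, t0=1.0, t_end=1e-3):
    x = rng.uniform(0, 2 * np.pi, n_angles)
    best_x, best_c = x.copy(), cost(x)
    c = best_c
    for k in range(steps):
        t = t0 * (t_end / t0) ** (k / steps)              # geometric cooling schedule
        cand = (x + rng.normal(0, 0.3, n_angles)) % (2 * np.pi)
        cand_c = cost(cand)
        # Metropolis acceptance: always take improvements, sometimes accept worse moves.
        if cand_c < c or rng.random() < np.exp(-(cand_c - c) / t):
            x, c = cand, cand_c
            if c < best_c:
                best_x, best_c = x.copy(), c
    return best_x, best_c

angles, err = anneal(rms_wavefront_error)
print("best clocking angles (rad):", np.round(angles, 3), "RMS:", round(err, 4))
```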
Health System Perspective: Variation, Costs, and Physician Behavior.
Jevsevar, David S
2015-01-01
With the generalized rise in cost of US health care, renewed emphasis has been placed on defining the relationship between costs and outcome. The quality of health care delivery has been uneven, as measured by existing quality and performance measures, significant variation in the delivery of care, lack of standardization of care leading to avoidable error, lack of meaningful outcomes measurement, and apparent disconnect with cost. Orthopaedic surgeons are at the nexus of implementing change, as we are the primary decision makers regarding care of our patients. This summary will review efforts that address quality, cost, outcome, safety, and variation in care.
Strain actuated aeroelastic control
NASA Technical Reports Server (NTRS)
Lazarus, Kenneth B.
1992-01-01
Viewgraphs on strain actuated aeroelastic control are presented. Topics covered include: structural and aerodynamic modeling; control law design methodology; system block diagram; adaptive wing test article; bench-top experiments; bench-top disturbance rejection: open and closed loop response; bench-top disturbance rejection: state cost versus control cost; wind tunnel experiments; wind tunnel gust alleviation: open and closed loop response at 60 mph; wind tunnel gust alleviation: state cost versus control cost at 60 mph; wind tunnel command following: open and closed loop error at 60 mph; wind tunnel flutter suppression: open loop flutter speed; and wind tunnel flutter suppression: closed loop state cost curves.
Integrated Blade Inspection System (IBIS) Upgrade Study
1992-12-01
If the maintenance contract for the IBIS is divided equally among its three major components, the FPIM's maintenance costs account for 33% of the total. When a neural network is applied to the blade inspection problem, its output must be interpreted in a slightly different manner to account for the costs of the two types of error, false alarms and missed detections; these costs are drastically different and should be accounted for in the inspection decision.
Li, Tao; Yuan, Gannan; Li, Wang
2016-01-01
The derivation of a conventional error model for the miniature gyroscope-based measurement while drilling (MGWD) system is based on the assumption that the errors of attitude are small enough so that the direction cosine matrix (DCM) can be approximated or simplified by the errors of small-angle attitude. However, the simplification of the DCM would introduce errors to the navigation solutions of the MGWD system if the initial alignment cannot provide precise attitude, especially for the low-cost microelectromechanical system (MEMS) sensors operated in harsh multilateral horizontal downhole drilling environments. This paper proposes a novel nonlinear error model (NNEM) by the introduction of the error of DCM, and the NNEM can reduce the propagated errors under large-angle attitude error conditions. The zero velocity and zero position are the reference points and the innovations in the states estimation of particle filter (PF) and Kalman filter (KF). The experimental results illustrate that the performance of PF is better than KF and the PF with NNEM can effectively restrain the errors of system states, especially for the azimuth, velocity, and height in the quasi-stationary condition. PMID:26999130
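The point about the small-angle DCM simplification can be illustrated numerically: the sketch below compares the exact direction cosine matrix (via Rodrigues' formula) with the first-order approximation used in conventional error models, showing the approximation error growing with attitude error. This only illustrates the motivation, not the proposed NNEM itself.

```python
import numpy as np

def skew(v):
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def dcm_exact(rotvec):
    """Exact rotation matrix via Rodrigues' formula."""
    angle = np.linalg.norm(rotvec)
    if angle < 1e-12:
        return np.eye(3)
    k = skew(rotvec / angle)
    return np.eye(3) + np.sin(angle) * k + (1 - np.cos(angle)) * (k @ k)

def dcm_small_angle(rotvec):
    """First-order approximation used in conventional (small-angle) error models."""
    return np.eye(3) + skew(rotvec)

for deg in (1, 5, 20, 60):
    err_vec = np.deg2rad([deg, 0.5 * deg, 0.2 * deg])
    diff = np.linalg.norm(dcm_exact(err_vec) - dcm_small_angle(err_vec))
    print(f"attitude error {deg:>2} deg -> DCM approximation error (Frobenius): {diff:.4f}")
```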
Tersi, Luca; Barré, Arnaud; Fantozzi, Silvia; Stagni, Rita
2013-03-01
Model-based mono-planar and bi-planar 3D fluoroscopy methods can quantify intact joints kinematics with performance/cost trade-off. The aim of this study was to compare the performances of mono- and bi-planar setups to a marker-based gold-standard, during dynamic phantom knee acquisitions. Absolute pose errors for in-plane parameters were lower than 0.6 mm or 0.6° for both mono- and bi-planar setups. Mono-planar setups resulted critical in quantifying the out-of-plane translation (error < 6.5 mm), and bi-planar in quantifying the rotation along bone longitudinal axis (error < 1.3°). These errors propagated to joint angles and translations differently depending on the alignment of the anatomical axes and the fluoroscopic reference frames. Internal-external rotation was the least accurate angle both with mono- (error < 4.4°) and bi-planar (error < 1.7°) setups, due to bone longitudinal symmetries. Results highlighted that accuracy for mono-planar in-plane pose parameters is comparable to bi-planar, but with halved computational costs, halved segmentation time and halved ionizing radiation dose. Bi-planar analysis better compensated for the out-of-plane uncertainty that is differently propagated to relative kinematics depending on the setup. To take its full benefits, the motion task to be investigated should be designed to maintain the joint inside the visible volume introducing constraints with respect to mono-planar analysis.
Landmark-Based Drift Compensation Algorithm for Inertial Pedestrian Navigation
Munoz Diaz, Estefania; Caamano, Maria; Fuentes Sánchez, Francisco Javier
2017-01-01
The navigation of pedestrians based on inertial sensors, i.e., accelerometers and gyroscopes, has experienced a great growth over the last years. However, the noise of medium- and low-cost sensors causes a high error in the orientation estimation, particularly in the yaw angle. This error, called drift, is due to the bias of the z-axis gyroscope and other slow changing errors, such as temperature variations. We propose a seamless landmark-based drift compensation algorithm that only uses inertial measurements. The proposed algorithm adds a great value to the state of the art, because the vast majority of the drift elimination algorithms apply corrections to the estimated position, but not to the yaw angle estimation. Instead, the presented algorithm computes the drift value and uses it to prevent yaw errors and therefore position errors. In order to achieve this goal, a detector of landmarks, i.e., corners and stairs, and an association algorithm have been developed. The results of the experiments show that it is possible to reliably detect corners and stairs using only inertial measurements eliminating the need that the user takes any action, e.g., pressing a button. Associations between re-visited landmarks are successfully made taking into account the uncertainty of the position. After that, the drift is computed out of all associations and used during a post-processing stage to obtain a low-drifted yaw angle estimation, that leads to successfully drift compensated trajectories. The proposed algorithm has been tested with quasi-error-free turn rate measurements introducing known biases and with medium-cost gyroscopes in 3D indoor and outdoor scenarios. PMID:28671622
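As a toy illustration of the drift-compensation idea (estimate the gyro bias from the heading discrepancy accumulated between two visits to the same landmark, then remove it), the sketch below uses a simulated 1-D heading example; the landmark detection and uncertainty-aware association steps of the real algorithm are not shown.

```python
import numpy as np

dt = 0.01                           # 100 Hz gyro
t = np.arange(0, 120, dt)           # two-minute walk
true_rate = 0.2 * np.sin(0.05 * t)  # true turn rate (rad/s)
bias = 0.01                         # constant z-gyro bias (the drift source)
measured_rate = true_rate + bias

yaw_est = np.cumsum(measured_rate) * dt
yaw_true = np.cumsum(true_rate) * dt

# Landmark (e.g., a corner) recognised at t = 10 s and re-visited at t = 110 s;
# assume the association supplies the true heading change between the two visits
# (taken here from the simulated ground truth).
i1, i2 = int(10 / dt), int(110 / dt)
discrepancy = (yaw_est[i2] - yaw_est[i1]) - (yaw_true[i2] - yaw_true[i1])
bias_hat = discrepancy / (t[i2] - t[i1])

yaw_corrected = np.cumsum(measured_rate - bias_hat) * dt
print(f"estimated bias: {bias_hat:.4f} rad/s (true {bias})")
print(f"final yaw error before/after: {yaw_est[-1]-yaw_true[-1]:.3f} / "
      f"{yaw_corrected[-1]-yaw_true[-1]:.3f} rad")
```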
Reward Pays the Cost of Noise Reduction in Motor and Cognitive Control.
Manohar, Sanjay G; Chong, Trevor T-J; Apps, Matthew A J; Batla, Amit; Stamelou, Maria; Jarman, Paul R; Bhatia, Kailash P; Husain, Masud
2015-06-29
Speed-accuracy trade-off is an intensively studied law governing almost all behavioral tasks across species. Here we show that motivation by reward breaks this law, by simultaneously invigorating movement and improving response precision. We devised a model to explain this paradoxical effect of reward by considering a new factor: the cost of control. Exerting control to improve response precision might itself come at a cost--a cost to attenuate a proportion of intrinsic neural noise. Applying a noise-reduction cost to optimal motor control predicted that reward can increase both velocity and accuracy. Similarly, application to decision-making predicted that reward reduces reaction times and errors in cognitive control. We used a novel saccadic distraction task to quantify the speed and accuracy of both movements and decisions under varying reward. Both faster speeds and smaller errors were observed with higher incentives, with the results best fitted by a model including a precision cost. Recent theories consider dopamine to be a key neuromodulator in mediating motivational effects of reward. We therefore examined how Parkinson's disease (PD), a condition associated with dopamine depletion, alters the effects of reward. Individuals with PD showed reduced reward sensitivity in their speed and accuracy, consistent in our model with higher noise-control costs. Including a cost of control over noise explains how reward may allow apparent performance limits to be surpassed. On this view, the pattern of reduced reward sensitivity in PD patients can specifically be accounted for by a higher cost for controlling noise. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.
An Expert System for the Evaluation of Cost Models
1990-09-01
The system's explanatory screens contrast unequal error variance with the condition of equal error variance, called homoscedasticity (Reference: Applied Linear Regression Models by John Neter, page 423), and note that error terms correlated over time are said to be autocorrelated or serially correlated (Reference: Applied Linear Regression Models by John Neter).
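The residual diagnostics referenced above can be checked directly; the sketch below fits a toy linear cost model and computes the Durbin-Watson statistic for serial correlation (values near 2 suggest no autocorrelation, values well below 2 suggest positive autocorrelation), using invented data.

```python
import numpy as np

rng = np.random.default_rng(7)
x = np.arange(50, dtype=float)

# Hypothetical cost observations with AR(1) errors, so the autocorrelation is visible.
e = np.zeros(50)
for i in range(1, 50):
    e[i] = 0.8 * e[i - 1] + rng.normal(scale=2.0)
y = 100 + 3.0 * x + e

# Ordinary least squares fit and residuals.
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Durbin-Watson statistic.
dw = np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)
print(f"Durbin-Watson statistic: {dw:.2f}")   # expect well below 2 here
```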
Morjaria, Priya; Murali, Kaushik; Evans, Jennifer; Gilbert, Clare
2016-01-19
Uncorrected refractive errors are the commonest cause of visual impairment in children, with myopia being the most frequent type. Myopia usually starts around 9 years of age and progresses throughout adolescence. Hyperopia usually affects younger children, and astigmatism affects all age groups. Many children have a combination of myopia and astigmatism. To correct refractive errors, the type and degree of refractive error are measured and appropriate corrective lenses prescribed and dispensed in the spectacle frame of choice. Custom spectacles (that is, with the correction specifically required for that individual) are required if astigmatism is present, and/or the refractive error differs between eyes. Spectacles without astigmatic correction and where the refractive error is the same in both eyes are straightforward to dispense. These are known as 'ready-made' spectacles. High-quality spectacles of this type can be produced in high volume at an extremely low cost. Although spectacle correction improves visual function, a high proportion of children do not wear their spectacles for a variety of reasons. The aim of this study is to compare spectacle wear at 3-4 months amongst school children aged 11 to 15 years who have significant, simple uncorrected refractive error randomised to ready-made or custom spectacles of equivalent quality, and to evaluate cost savings to programmes. The study will take place in urban and semi-urban government schools in Bangalore, India. The hypothesis is that similar proportions of children randomised to ready-made or custom spectacles will be wearing their spectacles at 3-4 months. The trial is a randomised, non-inferiority, double masked clinical trial of children with simple uncorrected refractive errors. After screening, children will be randomised to ready-made or custom spectacles. Children will choose their preferred frame design. After 3-4 months the children will be followed up to assess spectacle wear. Ready-made spectacles have benefits for providers as well as parents and children, as a wide range of prescriptions and frame types can be taken to schools and dispensed immediately. In contrast, custom spectacles have to be individually made up in optical laboratories, and taken back to the school and given to the correct child. ISRCTN14715120 (Controlled-Trials.com) Date registered: 04 February 2015.
Maskens, Carolyn; Downie, Helen; Wendt, Alison; Lima, Ana; Merkley, Lisa; Lin, Yulia; Callum, Jeannie
2014-01-01
This report provides a comprehensive analysis of transfusion errors occurring at a large teaching hospital and aims to determine key errors that are threatening transfusion safety, despite implementation of safety measures. Errors were prospectively identified from 2005 to 2010. Error data were coded on a secure online database called the Transfusion Error Surveillance System. Errors were defined as any deviation from established standard operating procedures. Errors were identified by clinical and laboratory staff. Denominator data for volume of activity were used to calculate rates. A total of 15,134 errors were reported with a median number of 215 errors per month (range, 85-334). Overall, 9083 (60%) errors occurred on the transfusion service and 6051 (40%) on the clinical services. In total, 23 errors resulted in patient harm: 21 of these errors occurred on the clinical services and two in the transfusion service. Of the 23 harm events, 21 involved inappropriate use of blood. Errors with no harm were 657 times more common than events that caused harm. The most common high-severity clinical errors were sample labeling (37.5%) and inappropriate ordering of blood (28.8%). The most common high-severity error in the transfusion service was sample accepted despite not meeting acceptance criteria (18.3%). The cost of product and component loss due to errors was $593,337. Errors occurred at every point in the transfusion process, with the greatest potential risk of patient harm resulting from inappropriate ordering of blood products and errors in sample labeling. © 2013 American Association of Blood Banks (CME).
Avery, Anthony J; Rodgers, Sarah; Cantrill, Judith A; Armstrong, Sarah; Cresswell, Kathrin; Eden, Martin; Elliott, Rachel A; Howard, Rachel; Kendrick, Denise; Morris, Caroline J; Prescott, Robin J; Swanwick, Glen; Franklin, Matthew; Putman, Koen; Boyd, Matthew; Sheikh, Aziz
2012-01-01
Summary Background Medication errors are common in primary care and are associated with considerable risk of patient harm. We tested whether a pharmacist-led, information technology-based intervention was more effective than simple feedback in reducing the number of patients at risk of measures related to hazardous prescribing and inadequate blood-test monitoring of medicines 6 months after the intervention. Methods In this pragmatic, cluster randomised trial general practices in the UK were stratified by research site and list size, and randomly assigned by a web-based randomisation service in block sizes of two or four to one of two groups. The practices were allocated to either computer-generated simple feedback for at-risk patients (control) or a pharmacist-led information technology intervention (PINCER), composed of feedback, educational outreach, and dedicated support. The allocation was masked to general practices, patients, pharmacists, researchers, and statisticians. Primary outcomes were the proportions of patients at 6 months after the intervention who had had any of three clinically important errors: non-selective non-steroidal anti-inflammatory drugs (NSAIDs) prescribed to those with a history of peptic ulcer without co-prescription of a proton-pump inhibitor; β blockers prescribed to those with a history of asthma; long-term prescription of angiotensin converting enzyme (ACE) inhibitor or loop diuretics to those 75 years or older without assessment of urea and electrolytes in the preceding 15 months. The cost per error avoided was estimated by incremental cost-effectiveness analysis. This study is registered with Controlled-Trials.com, number ISRCTN21785299. Findings 72 general practices with a combined list size of 480 942 patients were randomised. At 6 months' follow-up, patients in the PINCER group were significantly less likely to have been prescribed a non-selective NSAID if they had a history of peptic ulcer without gastroprotection (OR 0·58, 95% CI 0·38–0·89); a β blocker if they had asthma (0·73, 0·58–0·91); or an ACE inhibitor or loop diuretic without appropriate monitoring (0·51, 0·34–0·78). PINCER has a 95% probability of being cost effective if the decision-maker's ceiling willingness to pay reaches £75 per error avoided at 6 months. Interpretation The PINCER intervention is an effective method for reducing a range of medication errors in general practices with computerised clinical records. Funding Patient Safety Research Portfolio, Department of Health, England. PMID:22357106
Lost in Translation: the Case for Integrated Testing
NASA Technical Reports Server (NTRS)
Young, Aaron
2017-01-01
The building of a spacecraft is complex and often involves multiple suppliers and companies that have their own designs and processes. Standards have been developed across the industries to reduce the chances for critical flight errors at the system level, but the spacecraft is still vulnerable to the introduction of critical errors during integration of these systems. Critical errors can occur at any time during the process, and in many cases human reliability analysis (HRA) identifies human error as a risk driver. Most programs have a test plan in place that is intended to catch these errors, but it is not uncommon for schedule and cost stress to result in less testing than initially planned. Therefore, integrated testing, or "testing as you fly," is essential as a final check on the design and assembly to catch any errors prior to the mission. This presentation will outline the unique benefits of integrated testing in catching critical flight errors that can otherwise go undetected, discuss HRA methods used to identify opportunities for human error, and cover lessons learned and challenges over ownership of testing.
Ground support system methodology and architecture
NASA Technical Reports Server (NTRS)
Schoen, P. D.
1991-01-01
A synergistic approach to systems test and support is explored. A building block architecture provides transportability of data, procedures, and knowledge. The synergistic approach also lowers cost and risk over the life cycle of a program. Detecting design errors at the earliest phase reduces the cost of vehicle ownership. A distributed, scalable architecture is based on industry standards, maximizing transparency and maintainability. An autonomous control structure provides for distributed and segmented systems. Control of interfaces maximizes compatibility and reuse, reducing long-term program cost. An intelligent data management architecture also reduces analysis time and cost through automation.
Heuristic-driven graph wavelet modeling of complex terrain
NASA Astrophysics Data System (ADS)
Cioacǎ, Teodor; Dumitrescu, Bogdan; Stupariu, Mihai-Sorin; Pǎtru-Stupariu, Ileana; Nǎpǎrus, Magdalena; Stoicescu, Ioana; Peringer, Alexander; Buttler, Alexandre; Golay, François
2015-03-01
We present a novel method for building a multi-resolution representation of large digital surface models. The surface points coincide with the nodes of a planar graph which can be processed using a critically sampled, invertible lifting scheme. To drive the lazy wavelet node partitioning, we employ an attribute aware cost function based on the generalized quadric error metric. The resulting algorithm can be applied to multivariate data by storing additional attributes at the graph's nodes. We discuss how the cost computation mechanism can be coupled with the lifting scheme and examine the results by evaluating the root mean square error. The algorithm is experimentally tested using two multivariate LiDAR sets representing terrain surface and vegetation structure with different sampling densities.
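The generalized quadric error metric mentioned above builds on the basic plane-distance quadric; the sketch below accumulates plane quadrics around a node and evaluates the cost of a candidate replacement position. The attribute-aware generalization used by the authors, and the lifting scheme itself, are not reproduced here.

```python
import numpy as np

def plane_quadric(p0, p1, p2):
    """4x4 quadric Q for the plane through triangle (p0, p1, p2)."""
    n = np.cross(p1 - p0, p2 - p0)
    n = n / np.linalg.norm(n)
    d = -n @ p0
    plane = np.append(n, d)            # (a, b, c, d) with ax + by + cz + d = 0
    return np.outer(plane, plane)

def quadric_error(Q, v):
    """Squared distance-to-planes cost of placing a vertex at v."""
    vh = np.append(v, 1.0)
    return vh @ Q @ vh

# Accumulate quadrics of the triangles around a node of the terrain graph.
tris = [(np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.1]), np.array([0.0, 1.0, 0.05])),
        (np.array([0.0, 0.0, 0.0]), np.array([-1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.05]))]
Q = sum(plane_quadric(*t) for t in tris)

# Cost of removing the node and representing it by a candidate position.
print(quadric_error(Q, np.array([0.0, 0.3, 0.02])))
```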
Interactive computer graphics - Why's, wherefore's and examples
NASA Technical Reports Server (NTRS)
Gregory, T. J.; Carmichael, R. L.
1983-01-01
The benefits of using computer graphics in design are briefly reviewed. It is shown that computer graphics substantially aids productivity by permitting errors in design to be found immediately and by greatly reducing the cost of fixing the errors and the cost of redoing the process. The possibilities offered by computer-generated displays in terms of information content are emphasized, along with the form in which the information is transferred. The human being is ideally and naturally suited to dealing with information in picture format, and the content rate in communication with pictures is several orders of magnitude greater than with words or even graphs. Since science and engineering involve communicating ideas, concepts, and information, the benefits of computer graphics cannot be overestimated.
[Clinical economics: a concept to optimize healthcare services].
Porzsolt, F; Bauer, K; Henne-Bruns, D
2012-03-01
Clinical economics strives to support healthcare decisions by economic considerations. Making economic decisions does not mean saving costs but rather comparing the gained added value with the burden which has to be accepted. The necessary rules are offered in various disciplines, such as economy, epidemiology and ethics. Medical doctors have recognized these rules but are not applying them in daily clinical practice. This lacking orientation leads to preventable errors. Examples of these errors are shown for diagnosis, screening, prognosis and therapy. As these errors can be prevented by application of clinical economic principles the possible consequences for optimization of healthcare are discussed.
Cao, Hui; Stetson, Peter; Hripcsak, George
2003-01-01
Many types of medical errors occur in and outside of hospitals, some of which have very serious consequences and increase cost. Identifying errors is a critical step for managing and preventing them. In this study, we assessed the explicit reporting of medical errors in the electronic record. We used five search terms "mistake," "error," "incorrect," "inadvertent," and "iatrogenic" to survey several sets of narrative reports including discharge summaries, sign-out notes, and outpatient notes from 1991 to 2000. We manually reviewed all the positive cases and identified them based on the reporting of physicians. We identified 222 explicitly reported medical errors. The positive predictive value varied with different keywords. In general, the positive predictive value for each keyword was low, ranging from 3.4 to 24.4%. Therapeutic-related errors were the most common reported errors and these reported therapeutic-related errors were mainly medication errors. Keyword searches combined with manual review indicated some medical errors that were reported in medical records. It had a low sensitivity and a moderate positive predictive value, which varied by search term. Physicians were most likely to record errors in the Hospital Course and History of Present Illness sections of discharge summaries. The reported errors in medical records covered a broad range and were related to several types of care providers as well as non-health care professionals.
Wu, Li-Wei; Lin, Wen-Jie; Hsu, Yu-Feng
2018-01-01
The Tailed Punch, Dodona eugenes, is widely distributed in East Asia, with seven subspecies currently recognized. However, two of them, namely ssp. formosana and ssp. esakii found in Taiwan, are hard to distinguish from each other due to ambiguous diagnostic characters. In this study, their taxonomic status is clarified by comparing genitalia characters and phylogenetic relationships based on mitochondrial sequences, COI and COII (2211 bp in total). Our results show that there is no reliable feature to separate these two subspecies. Surprisingly, we found that Dodona in Taiwan is more closely related to the Orange Punch, D. egeon, than to other subspecies of D. eugenes. Therefore, the following nomenclatural changes are proposed: Dodona eugenes formosana is revised to specific status as Dodona formosana Matsumura, 1919, stat. rev., and ssp. esakii is sunk to a junior synonym of Dodona formosana syn. n. PMID:29674868
Role of the pharmacist in reducing healthcare costs: current insights
Dalton, Kieran; Byrne, Stephen
2017-01-01
Global healthcare expenditure is escalating at an unsustainable rate. Money spent on medicines and managing medication-related problems continues to grow. The high prevalence of medication errors and inappropriate prescribing is a major issue within healthcare systems, and can often contribute to adverse drug events, many of which are preventable. As a result, there is a huge opportunity for pharmacists to have a significant impact on reducing healthcare costs, as they have the expertise to detect, resolve, and prevent medication errors and medication-related problems. The development of clinical pharmacy practice in recent decades has resulted in an increased number of pharmacists working in clinically advanced roles worldwide. Pharmacist-provided services and clinical interventions have been shown to reduce the risk of potential adverse drug events and improve patient outcomes, and the majority of published studies show that these pharmacist activities are cost-effective or have a good cost:benefit ratio. This review demonstrates that pharmacists can contribute to substantial healthcare savings across a variety of settings. However, there is a paucity of evidence in the literature highlighting the specific aspects of pharmacists’ work which are the most effective and cost-effective. Future high-quality economic evaluations with robust methodologies and study design are required to investigate what pharmacist services have significant clinical benefits to patients and substantiate the greatest cost savings for healthcare budgets. PMID:29354549
Close the gap for vision: The key is to invest on coordination.
Hsueh, Ya-seng Arthur; Dunt, David; Anjou, Mitchell D; Boudville, Andrea; Taylor, Hugh
2013-12-01
The study aims to estimate the costs required for coordination and case management activities that support access to treatment for the three most common eye conditions among Indigenous Australians: cataract, refractive error and diabetic retinopathy. Coordination activities were identified using in-depth interviews, focus groups and face-to-face consultations. Data were collected at 21 sites across Australia. The estimation of costs used salary data from relevant government websites and was organised by diagnosis and type of coordination activity. Urban and remote regions of Australia. Needs-based provision of support services to facilitate access to eye care for cataract, refractive error and diabetic retinopathy for Indigenous Australians. Cost (AUD$ in 2011) of equivalent full-time (EFT) coordination staff. The annual coordination workforce required for the three eye conditions was 8.3 EFT staff per 10 000 Indigenous Australians. The annual cost of the eye care coordination workforce was estimated to be AUD$21 337 012 in 2011. This innovative, 'activity-based' model identified the workforce required to support the provision of eye care for Indigenous Australians and estimated its costs. The findings are of clear value to government funders and other decision makers. The model can potentially be used to estimate staffing and associated costs for other Indigenous and non-Indigenous health needs. © 2013 The Authors. Australian Journal of Rural Health © National Rural Health Alliance Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parks, K.; Wan, Y. H.; Wiener, G.
2011-10-01
The focus of this report is the wind forecasting system developed during this contract period with results of performance through the end of 2010. The report is intentionally high-level, with technical details disseminated at various conferences and academic papers. At the end of 2010, Xcel Energy managed the output of 3372 megawatts of installed wind energy. The wind plants span three operating companies, serving customers in eight states, and three market structures. The great majority of the wind energy is contracted through power purchase agreements (PPAs). The remainder is utility owned, Qualifying Facilities (QF), distributed resources (i.e., 'behind the meter'), or merchant entities within Xcel Energy's Balancing Authority footprints. Regardless of the contractual or ownership arrangements, the output of the wind energy is balanced by Xcel Energy's generation resources that include fossil, nuclear, and hydro based facilities that are owned or contracted via PPAs. These facilities are committed and dispatched or bid into day-ahead and real-time markets by Xcel Energy's Commercial Operations department. Wind energy complicates the short and long-term planning goals of least-cost, reliable operations. Due to the uncertainty of wind energy production, inherent suboptimal commitment and dispatch associated with imperfect wind forecasts drives up costs. For example, a gas combined cycle unit may be turned on, or committed, in anticipation of low winds. The reality is winds stayed high, forcing this unit and others to run, or be dispatched, to sub-optimal loading positions. In addition, commitment decisions are frequently irreversible due to minimum up and down time constraints. That is, a dispatcher lives with inefficient decisions made in prior periods. In general, uncertainty contributes to conservative operations - committing more units and keeping them on longer than may have been necessary for purposes of maintaining reliability. The downside is costs are higher. In organized electricity markets, units that are committed for reliability reasons are paid their offer price even when prevailing market prices are lower. Often, these uplift charges are allocated to market participants that caused the inefficient dispatch in the first place. Thus, wind energy facilities are burdened with their share of costs proportional to their forecast errors. For Xcel Energy, wind energy uncertainty costs manifest depending on specific market structures. In the Public Service of Colorado (PSCo), inefficient commitment and dispatch caused by wind uncertainty increases fuel costs. Wind resources participating in the Midwest Independent System Operator (MISO) footprint make substantial payments in the real-time markets to true-up their day-ahead positions and are additionally burdened with deviation charges called a Revenue Sufficiency Guarantee (RSG) to cover out of market costs associated with operations. Southwest Public Service (SPS) wind plants cause both commitment inefficiencies and are charged Southwest Power Pool (SPP) imbalance payments due to wind uncertainty and variability. Wind energy forecasting helps mitigate these costs. Wind integration studies for the PSCo and Northern States Power (NSP) operating companies have projected increasing costs as more wind is installed on the system due to forecast error. It follows that reducing forecast error would reduce these costs.
This is echoed by large-scale studies in neighboring regions and states that have recommended adoption of state-of-the-art wind forecasting tools in day-ahead and real-time planning and operations. Further, Xcel Energy concluded that a one percent reduction in normalized mean absolute error would have reduced costs in 2008 by over $1 million annually in PSCo alone. The value of reducing forecast error prompted Xcel Energy to make substantial investments in wind energy forecasting research and development.
Errors prevention in manufacturing process through integration of Poka Yoke and TRIZ
NASA Astrophysics Data System (ADS)
Helmi, Syed Ahmad; Nordin, Nur Nashwa; Hisjam, Muhammad
2017-11-01
Integration of Poka Yoke and TRIZ is a method of solving problems using two complementary approaches: Poka Yoke is a trial-and-error method, while TRIZ uses a systematic approach. The main purpose of this technique is to eliminate product defects by preventing or correcting errors as early as possible. Blaming workers for their mistakes is not the best way; rather, the work process should be reviewed so that no worker's behavior or movement can cause errors. This study demonstrates the importance of using both methods, which everyone in the industry needs in order to improve quality, increase productivity, and at the same time reduce production cost.
System of error detection in the manufacture of garments using artificial vision
NASA Astrophysics Data System (ADS)
Moreno, J. J.; Aguila, A.; Partida, E.; Martinez, C. L.; Morales, O.; Tejeida, R.
2017-12-01
A computer vision system is implemented to detect errors in the cutting stage of the garment manufacturing process in the textile industry. It provides a solution to errors within the process that cannot be easily detected by any employee, in addition to significantly increasing the speed of quality review. In the textile industry, as in many others, quality control of manufactured products is required, and over the years this has been carried out manually by means of visual inspection by employees. For this reason, the objective of this project is to design a quality control system using computer vision to identify errors in the cutting stage of the garment manufacturing process and thereby increase the productivity of textile processes by reducing costs.
Long-term care physical environments--effect on medication errors.
Mahmood, Atiya; Chaudhury, Habib; Gaumont, Alana; Rust, Tiana
2012-01-01
Few studies examine physical environmental factors and their effects on staff health, effectiveness, work errors and job satisfaction. To address this gap, this study aims to examine environmental features and their role in medication and nursing errors in long-term care facilities. A mixed methodological strategy was used. Data were collected via focus groups, observing medication preparation and administration, and a nursing staff survey in four facilities. The paper reveals that, during the medication preparation phase, physical design, such as medication room layout, is a major source of potential errors. During medication administration, social environment is more likely to contribute to errors. Interruptions, noise and staff shortages were particular problems. The survey's relatively small sample size needs to be considered when interpreting the findings. Also, actual error data could not be included as existing records were incomplete. The study offers several relatively low-cost recommendations to help staff reduce medication errors. Physical environmental factors are important when addressing measures to reduce errors. The findings of this study underscore the fact that the physical environment's influence on the possibility of medication errors is often neglected. This study contributes to the scarce empirical literature examining the relationship between physical design and patient safety.
Human error and human factors engineering in health care.
Welch, D L
1997-01-01
Human error is inevitable. It happens in health care systems as it does in all other complex systems, and no measure of attention, training, dedication, or punishment is going to stop it. The discipline of human factors engineering (HFE) has been dealing with the causes and effects of human error since the 1940's. Originally applied to the design of increasingly complex military aircraft cockpits, HFE has since been effectively applied to the problem of human error in such diverse systems as nuclear power plants, NASA spacecraft, the process control industry, and computer software. Today the health care industry is becoming aware of the costs of human error and is turning to HFE for answers. Just as early experimental psychologists went beyond the label of "pilot error" to explain how the design of cockpits led to air crashes, today's HFE specialists are assisting the health care industry in identifying the causes of significant human errors in medicine and developing ways to eliminate or ameliorate them. This series of articles will explore the nature of human error and how HFE can be applied to reduce the likelihood of errors and mitigate their effects.
More attention when speaking: does it help or does it hurt?
Nozari, Nazbanou; Thompson-Schill, Sharon L
2013-11-01
Paying selective attention to a word in a multi-word utterance results in a decreased probability of error on that word (benefit), but an increased probability of error on the other words (cost). We ask whether excitation of the prefrontal cortex helps or hurts this cost. One hypothesis (the resource hypothesis) predicts a decrease in the cost due to the deployment of more attentional resources, while another (the focus hypothesis) predicts even greater costs due to further fine-tuning of selective attention. Our results are more consistent with the focus hypothesis: prefrontal stimulation caused a reliable increase in the benefit and a marginal increase in the cost of selective attention. To ensure that the effects are due to changes to the prefrontal cortex, we provide two checks: We show that the pattern of results is quite different if, instead, the primary motor cortex is stimulated. We also show that the stimulation-related benefits in the verbal task correlate with the stimulation-related benefits in an N-back task, which is known to tap into a prefrontal function. Our results shed light on how selective attention affects language production, and more generally, on how selective attention affects production of a sequence over time. Copyright © 2013 Elsevier Ltd. All rights reserved.
Stem revenue losses with effective CDM management.
Alwell, Michael
2003-09-01
Effective CDM management not only minimizes revenue losses due to denied claims, but also helps eliminate administrative costs associated with correcting coding errors. Accountability for CDM management should be assigned to a single individual, who ideally reports to the CFO or high-level finance director. If your organization is prone to making billing errors due to CDM deficiencies, you should consider purchasing CDM software to help you manage your CDM.
Cost effectiveness of the U.S. Geological Survey's stream-gaging program in Wisconsin
Walker, J.F.; Osen, L.L.; Hughes, P.E.
1987-01-01
A minimum budget of $510,000 is required to operate the program; a budget less than this does not permit proper service and maintenance of the gaging stations. At this minimum budget, the theoretical average standard error of instantaneous discharge is 14.4%. The maximum budget analyzed was $650,000 and resulted in an average standard error of instantaneous discharge of 7.2%.
FMLRC: Hybrid long read error correction using an FM-index.
Wang, Jeremy R; Holt, James; McMillan, Leonard; Jones, Corbin D
2018-02-09
Long read sequencing is changing the landscape of genomic research, especially de novo assembly. Despite the high error rate inherent to long read technologies, increased read lengths dramatically improve the continuity and accuracy of genome assemblies. However, the cost and throughput of these technologies limit their application to complex genomes. One solution is to decrease the cost and time to assemble novel genomes by leveraging "hybrid" assemblies that use long reads for scaffolding and short reads for accuracy. We describe a novel method leveraging a multi-string Burrows-Wheeler Transform with auxiliary FM-index to correct errors in long read sequences using a set of complementary short reads. We demonstrate that our method efficiently produces significantly more high quality corrected sequence than existing hybrid error-correction methods. We also show that our method produces more contiguous assemblies, in many cases, than existing state-of-the-art hybrid and long-read only de novo assembly methods. Our method accurately corrects long read sequence data using complementary short reads. We demonstrate higher total throughput of corrected long reads and a corresponding increase in contiguity of the resulting de novo assemblies. Improved throughput and computational efficiency compared with existing methods will help make more economical use of emerging long read sequencing technologies.
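The FM-index correction idea above can be illustrated with a deliberately simplified sketch: short-read k-mer counts are used to choose, at each long-read position, the base with the most short-read support. This is only an illustration of the hybrid-correction principle; FMLRC itself queries k-mer frequencies through a multi-string BWT/FM-index and uses a more elaborate correction pass, and the k-mer size and support threshold below are illustrative assumptions.

```python
# Minimal sketch of short-read k-mer support used for long-read error correction.
from collections import Counter

K = 5  # toy k-mer length; real tools use a much larger k

def kmer_counts(short_reads, k=K):
    """Count k-mers observed in the (accurate) short reads."""
    counts = Counter()
    for read in short_reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

def correct_base(long_read, pos, counts, k=K, min_support=2):
    """Return the base at `pos` whose surrounding k-mers have the most short-read support."""
    start = max(0, pos - k + 1)
    best_base, best_score = long_read[pos], 0
    for base in "ACGT":
        candidate = long_read[:pos] + base + long_read[pos + 1:]
        score = sum(counts[candidate[i:i + k]]
                    for i in range(start, min(pos + 1, len(candidate) - k + 1)))
        if score > best_score and score >= min_support:
            best_base, best_score = base, score
    return best_base

short_reads = ["ACGTACGTGG", "CGTACGTGGA", "GTACGTGGAT"]
noisy_long_read = "ACGTACCTGGAT"           # one substitution error at index 6
counts = kmer_counts(short_reads)
corrected = "".join(correct_base(noisy_long_read, i, counts)
                    for i in range(len(noisy_long_read)))
print(corrected)  # expected: ACGTACGTGGAT
```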
Cost effectiveness of the US Geological Survey's stream-gaging programs in New Hampshire and Vermont
Smath, J.A.; Blackey, F.E.
1986-01-01
Data uses and funding sources were identified for the 73 continuous stream gages currently (1984) being operated. Eight stream gages were identified as having insufficient reason to continue their operation. Parts of New Hampshire and Vermont were identified as needing additional hydrologic data. New gages should be established in these regions as funds become available. Alternative methods for providing hydrologic data at the stream gaging stations currently being operated were found to lack the accuracy that is required for their intended use. The current policy for operation of the stream gages requires a net budget of $297,000/yr. The average standard error of estimation of the streamflow records is 17.9%. This overall level of accuracy could be maintained with a budget of $285,000 if resources were redistributed among gages. Cost-effectiveness analysis indicates that with the present budget, the average standard error could be reduced to 16.6%. A minimum budget of $278,000 is required to operate the present stream gaging program. Below this level, the gages and recorders would not receive the proper service and maintenance. At the minimum budget, the average standard error would be 20.4%. The loss of correlative data is a significant component of the error in streamflow records, especially at lower budgetary levels. (Author's abstract)
Novel parametric reduced order model for aeroengine blade dynamics
NASA Astrophysics Data System (ADS)
Yuan, Jie; Allegri, Giuliano; Scarpa, Fabrizio; Rajasekaran, Ramesh; Patsias, Sophoclis
2015-10-01
The work introduces a novel reduced order model (ROM) technique to describe the dynamic behavior of turbofan aeroengine blades. We introduce an equivalent 3D frame model to describe the coupled flexural/torsional mode shapes, with their relevant natural frequencies and associated modal masses. The frame configurations are identified through a structural identification approach based on a simulated annealing algorithm with stochastic tunneling. The cost functions are constituted by linear combinations of relative errors associated to the resonance frequencies, the individual modal assurance criteria (MAC), and on either overall static or modal masses. When static masses are considered the optimized 3D frame can represent the blade dynamic behavior with an 8% error on the MAC, a 1% error on the associated modal frequencies and a 1% error on the overall static mass. When using modal masses in the cost function the performance of the ROM is similar, but the overall error increases to 7%. The approach proposed in this paper is considerably more accurate than state-of-the-art blade ROMs based on traditional Timoshenko beams, and provides excellent accuracy at reduced computational time when compared against high fidelity FE models. A sensitivity analysis shows that the proposed model can adequately predict the global trends of the variations of the natural frequencies when lumped masses are used for mistuning analysis. The proposed ROM also follows extremely closely the sensitivity of the high fidelity finite element models when the material parameters are used in the sensitivity.
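As a rough illustration of the kind of cost function described above (relative frequency errors, modal assurance criteria, and a mass term combined linearly), the sketch below evaluates such a composite cost for a candidate frame ROM against a reference model. The weights, toy data, and exact combination are assumptions for the sketch rather than the paper's formulation.

```python
# Illustrative evaluation of a composite ROM identification cost:
# weighted relative frequency errors + (1 - MAC) terms + relative mass error.
import numpy as np

def mac(phi_a, phi_b):
    """Modal Assurance Criterion between two mode-shape vectors."""
    num = np.abs(phi_a @ phi_b) ** 2
    den = (phi_a @ phi_a) * (phi_b @ phi_b)
    return num / den

def rom_cost(f_rom, f_ref, modes_rom, modes_ref, m_rom, m_ref,
             w_freq=1.0, w_mac=1.0, w_mass=1.0):
    freq_err = np.mean(np.abs(f_rom - f_ref) / f_ref)                 # relative frequency error
    mac_err = np.mean([1.0 - mac(a, b) for a, b in zip(modes_rom, modes_ref)])
    mass_err = abs(m_rom - m_ref) / m_ref                              # relative static-mass error
    return w_freq * freq_err + w_mac * mac_err + w_mass * mass_err

# Toy data: two modes of a reference (high-fidelity) model vs. a candidate frame ROM.
f_ref = np.array([120.0, 310.0])        # Hz
f_rom = np.array([121.1, 306.5])
modes_ref = [np.array([1.0, 0.8, 0.3]), np.array([1.0, -0.5, -0.9])]
modes_rom = [np.array([0.98, 0.82, 0.28]), np.array([1.0, -0.45, -0.95])]
print(rom_cost(f_rom, f_ref, modes_rom, modes_ref, m_rom=4.9, m_ref=5.0))
```

In an identification setting such as the one described, a simulated annealing routine would repeatedly perturb the frame parameters and accept or reject candidates based on this scalar cost.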
Nursing Home Levels of Care: Reimbursement of Resident Specific Costs
Willemain, Thomas R.
1980-01-01
The companion paper on nursing home levels of care (Bishop, Plough and Willemain, 1980) recommended a “split-rate” approach to nursing home reimbursement that would distinguish between fixed and variable costs. This paper examines three alternative treatments of the variable cost component of the rate: a two-level system similar to the distinction between skilled and intermediate care facilities, an individualized (“patient-centered”) system, and a system that assigns a single facility-specific rate that depends on the facility's case-mix (“case-mix reimbursement”). The aim is to better understand the theoretical strengths and weaknesses of these three approaches. The comparison of reimbursement alternatives is framed in terms of minimizing reimbursement error, meaning overpayment and underpayment. We develop a conceptual model of reimbursement error that stresses that the features of the reimbursement scheme are only some of the factors contributing to over- and underpayment. The conceptual model is translated into a computer program for quantitative comparison of the alternatives. PMID:10309330
Zurovac, Dejan; Larson, Bruce A.; Skarbinski, Jacek; Slutsker, Laurence; Snow, Robert W.; Hamel, Mary J.
2008-01-01
Using data on clinical practices for outpatients 5 years and older, test accuracy, and malaria prevalence, we model financial and clinical implications of malaria rapid diagnostic tests (RDTs) under the new artemether-lumefantrine (AL) treatment policy in one high and one low malaria prevalence district in Kenya. In the high transmission district, RDTs as actually used would improve malaria treatment (61% less over-treatment but 8% more under-treatment) and lower costs (21% less). Nonetheless, the majority of patients with malaria would not be correctly treated with AL. In the low transmission district, especially because the treatment policy was new and AL was not widely used, RDTs as actually used would yield a minor reduction in under-treatment errors (36% less but the base is small) with 41% higher costs. In both districts, adherence to revised clinical practices with RDTs has the potential to further decrease treatment errors with acceptable costs. PMID:18541764
Impact of automatic calibration techniques on HMD life cycle costs and sustainable performance
NASA Astrophysics Data System (ADS)
Speck, Richard P.; Herz, Norman E., Jr.
2000-06-01
Automatic test and calibration has become a valuable feature in many consumer products--ranging from antilock braking systems to auto-tune TVs. This paper discusses HMDs (Helmet Mounted Displays) and how similar techniques can reduce life cycle costs and increase sustainable performance if they are integrated into a program early enough. Optical ATE (Automatic Test Equipment) is already zeroing distortion in the HMDs and thereby making binocular displays a practical reality. A suitcase sized, field portable optical ATE unit could re-zero these errors in the Ready Room to cancel the effects of aging, minor damage and component replacement. Planning on this would yield large savings through relaxed component specifications and reduced logistic costs. Yet, the sustained performance would far exceed that attained with fixed calibration strategies. Major tactical benefits can come from reducing display errors, particularly in information fusion modules and virtual `beyond visual range' operations. Some versions of the ATE described are in production and examples of high resolution optical test data will be discussed.
Error and Error Mitigation in Low-Coverage Genome Assemblies
Hubisz, Melissa J.; Lin, Michael F.; Kellis, Manolis; Siepel, Adam
2011-01-01
The recent release of twenty-two new genome sequences has dramatically increased the data available for mammalian comparative genomics, but twenty of these new sequences are currently limited to ∼2× coverage. Here we examine the extent of sequencing error in these 2× assemblies, and its potential impact in downstream analyses. By comparing 2× assemblies with high-quality sequences from the ENCODE regions, we estimate the rate of sequencing error to be 1–4 errors per kilobase. While this error rate is fairly modest, sequencing error can still have surprising effects. For example, an apparent lineage-specific insertion in a coding region is more likely to reflect sequencing error than a true biological event, and the length distribution of coding indels is strongly distorted by error. We find that most errors are contributed by a small fraction of bases with low quality scores, in particular, by the ends of reads in regions of single-read coverage in the assembly. We explore several approaches for automatic sequencing error mitigation (SEM), making use of the localized nature of sequencing error, the fact that it is well predicted by quality scores, and information about errors that comes from comparisons across species. Our automatic methods for error mitigation cannot replace the need for additional sequencing, but they do allow substantial fractions of errors to be masked or eliminated at the cost of modest amounts of over-correction, and they can reduce the impact of error in downstream phylogenomic analyses. Our error-mitigated alignments are available for download. PMID:21340033
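A minimal sketch of the masking flavor of sequencing-error mitigation discussed above: bases with low quality scores, or near read ends in thin-coverage regions, are replaced with 'N' so downstream analyses ignore them. The thresholds and the simple rule are illustrative assumptions, not the authors' SEM pipeline.

```python
# Minimal quality-based error masking sketch: mask low-quality bases, and bases
# near read ends where coverage is thin (the regions where most errors concentrate).
def mask_low_quality(sequence, phred_quals, min_qual=20, coverage=None, end_trim=1):
    masked = list(sequence)
    n = len(sequence)
    for i, q in enumerate(phred_quals):
        near_end = i < end_trim or i >= n - end_trim
        thin = coverage is not None and coverage[i] <= 1
        if q < min_qual or (near_end and thin):
            masked[i] = "N"
    return "".join(masked)

seq   = "ACGTTGCA"
quals = [35, 12, 40, 38, 37, 9, 33, 36]
cov   = [1, 2, 3, 3, 3, 2, 1, 1]
print(mask_low_quality(seq, quals, coverage=cov))  # -> NNGTTNCN
```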
Williams, Camille K.; Tremblay, Luc; Carnahan, Heather
2016-01-01
Researchers in the domain of haptic training are now entering the long-standing debate regarding whether or not it is best to learn a skill by experiencing errors. Haptic training paradigms provide fertile ground for exploring how various theories about feedback, errors and physical guidance intersect during motor learning. Our objective was to determine how error minimizing, error augmenting and no haptic feedback while learning a self-paced curve-tracing task impact performance on delayed (1 day) retention and transfer tests, which indicate learning. We assessed performance using movement time and tracing error to calculate a measure of overall performance – the speed accuracy cost function. Our results showed that despite exhibiting the worst performance during skill acquisition, the error augmentation group had significantly better accuracy (but not overall performance) than the error minimization group on delayed retention and transfer tests. The control group’s performance fell between that of the two experimental groups but was not significantly different from either on the delayed retention test. We propose that the nature of the task (requiring online feedback to guide performance) coupled with the error augmentation group’s frequent off-target experience and rich experience of error-correction promoted information processing related to error-detection and error-correction that are essential for motor learning. PMID:28082937
Improving posture-motor dual-task with a supraposture-focus strategy in young and elderly adults
Yu, Shu-Han; Huang, Cheng-Ya
2017-01-01
In a postural-suprapostural task, appropriate prioritization is necessary to achieve task goals and maintain postural stability. A “posture-first” principle is typically favored by elderly people in order to secure stance stability, but this comes at the cost of reduced suprapostural performance. Using a postural-suprapostural task with a motor suprapostural goal, this study investigated differences between young and older adults in dual-task cost across varying task prioritization paradigms. Eighteen healthy young (mean age: 24.8 ± 5.2 years) and 18 older (mean age: 68.8 ± 3.7 years) adults executed a designated force-matching task from a stabilometer board using either a stabilometer stance (posture-focus strategy) or force-matching (supraposture-focus strategy) as the primary task. The dual-task effect (DTE: % change in dual-task condition; positive value: dual-task benefit, negative value: dual-task cost) of force-matching error and reaction time (RT), posture error, and approximate entropy (ApEn) of stabilometer movement were measured. When using the supraposture-focus strategy, young adults exhibited larger DTE values in each behavioral parameter than when using the posture-focus strategy. The older adults using the supraposture-focus strategy also attained larger DTE values for posture error, stabilometer movement ApEn, and force-matching error than when using the posture-focus strategy. These results suggest that the supraposture-focus strategy exerted an increased dual-task benefit for posture-motor dual-tasking in both healthy young and elderly adults. The present findings imply that the older adults should make use of the supraposture-focus strategy for fall prevention during dual-task execution. PMID:28151943
Quantization-Based Adaptive Actor-Critic Tracking Control With Tracking Error Constraints.
Fan, Quan-Yong; Yang, Guang-Hong; Ye, Dan
2018-04-01
In this paper, the problem of adaptive actor-critic (AC) tracking control is investigated for a class of continuous-time nonlinear systems with unknown nonlinearities and quantized inputs. Different from the existing results based on reinforcement learning, the tracking error constraints are considered and new critic functions are constructed to improve the performance further. To ensure that the tracking errors keep within the predefined time-varying boundaries, a tracking error transformation technique is used to constitute an augmented error system. Specific critic functions, rather than the long-term cost function, are introduced to supervise the tracking performance and tune the weights of the AC neural networks (NNs). A novel adaptive controller with a special structure is designed to reduce the effect of the NN reconstruction errors, input quantization, and disturbances. Based on the Lyapunov stability theory, the boundedness of the closed-loop signals and the desired tracking performance can be guaranteed. Finally, simulations on two connected inverted pendulums are given to illustrate the effectiveness of the proposed method.
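The tracking-error transformation mentioned above can be illustrated with a common prescribed-performance construction: the raw error is required to stay inside a shrinking time-varying envelope, and a logarithmic map converts the constrained error into an unconstrained augmented variable. The exponential envelope and the log map below are assumed for illustration; the paper's exact transformation is not given in the abstract.

```python
# Sketch of a tracking-error transformation with time-varying bounds (assumed form).
import math

def envelope(t, rho0=1.0, rho_inf=0.05, decay=1.0):
    """Time-varying error bound rho(t), shrinking from rho0 toward rho_inf."""
    return (rho0 - rho_inf) * math.exp(-decay * t) + rho_inf

def transformed_error(e, rho):
    """Map e in (-rho, rho) to an unbounded variable; grows rapidly near the boundary."""
    x = e / rho
    assert -1.0 < x < 1.0, "tracking error violated the prescribed boundary"
    return 0.5 * math.log((1.0 + x) / (1.0 - x))

for t, e in [(0.0, 0.4), (2.0, 0.1), (5.0, 0.04)]:
    rho = envelope(t)
    print(f"t={t:.1f}  rho={rho:.3f}  e={e:.2f}  transformed={transformed_error(e, rho):.3f}")
```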
Hessian matrix approach for determining error field sensitivity to coil deviations.
Zhu, Caoxiang; Hudson, Stuart R.; Lazerson, Samuel A.; ...
2018-03-15
The presence of error fields has been shown to degrade plasma confinement and drive instabilities. Error fields can arise from many sources, but are predominantly attributed to deviations in the coil geometry. In this paper, we introduce a Hessian matrix approach for determining error field sensitivity to coil deviations. A primary cost function used for designing stellarator coils, the surface integral of normalized normal field errors, was adopted to evaluate the deviation of the generated magnetic field from the desired magnetic field. The FOCUS code [Zhu et al., Nucl. Fusion 58(1):016008 (2018)] is utilized to provide fast and accurate calculations of the Hessian. The sensitivities of error fields to coil displacements are then determined by the eigenvalues of the Hessian matrix. A proof-of-principle example is given on a CNT-like configuration. We anticipate that this new method could provide information to avoid dominant coil misalignments and simplify coil designs for stellarators.
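A minimal numpy sketch of the sensitivity step described above: once a Hessian of the normal-field error cost with respect to coil-deviation parameters is available, its eigen-decomposition ranks perturbation directions by how strongly they degrade the field. The random symmetric matrix below is a stand-in for a Hessian that would in practice come from a code such as FOCUS.

```python
# Rank coil-deviation directions by the curvature of the error-field cost.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
H = A + A.T                      # placeholder symmetric Hessian (6 deviation parameters)

eigvals, eigvecs = np.linalg.eigh(H)
order = np.argsort(np.abs(eigvals))[::-1]
print("most sensitive coil-deviation direction:", eigvecs[:, order[0]])
print("corresponding curvature of the cost:", eigvals[order[0]])
print("least sensitive direction curvature:", eigvals[order[-1]])
```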
Simulating and Detecting Radiation-Induced Errors for Onboard Machine Learning
NASA Technical Reports Server (NTRS)
Wagstaff, Kiri L.; Bornstein, Benjamin; Granat, Robert; Tang, Benyang; Turmon, Michael
2009-01-01
Spacecraft processors and memory are subjected to high radiation doses and therefore employ radiation-hardened components. However, these components are orders of magnitude more expensive than typical desktop components, and they lag years behind in terms of speed and size. We have integrated algorithm-based fault tolerance (ABFT) methods into onboard data analysis algorithms to detect radiation-induced errors, which ultimately may permit the use of spacecraft memory that need not be fully hardened, reducing cost and increasing capability at the same time. We have also developed a lightweight software radiation simulator, BITFLIPS, that permits evaluation of error detection strategies in a controlled fashion, including the specification of the radiation rate and selective exposure of individual data structures. Using BITFLIPS, we evaluated our error detection methods when using a support vector machine to analyze data collected by the Mars Odyssey spacecraft. We found ABFT error detection for matrix multiplication is very successful, while error detection for Gaussian kernel computation still has room for improvement.
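The matrix-multiplication error detection mentioned above is commonly implemented with the classic checksum-based ABFT scheme, sketched below under the assumption that a Huang-Abraham style row/column checksum is representative of the approach; the flight software's exact implementation is not described in the abstract.

```python
# Checksum-based ABFT for matrix multiplication: augment A with a column-checksum
# row and B with a row-checksum column, then verify consistency of the product.
import numpy as np

def abft_matmul(A, B, tol=1e-8):
    """Multiply A @ B with checksums and report whether the result is consistent."""
    Ac = np.vstack([A, A.sum(axis=0)])                     # column-checksum row
    Br = np.hstack([B, B.sum(axis=1, keepdims=True)])      # row-checksum column
    Cf = Ac @ Br                                           # full checksum product
    C = Cf[:-1, :-1]
    row_ok = np.allclose(Cf[-1, :-1], C.sum(axis=0), atol=tol)
    col_ok = np.allclose(Cf[:-1, -1], C.sum(axis=1), atol=tol)
    return C, row_ok and col_ok

rng = np.random.default_rng(1)
A, B = rng.random((4, 3)), rng.random((3, 5))
C, ok = abft_matmul(A, B)
print("consistent:", ok)

# Simulate a radiation-induced upset corrupting one element of the stored result.
Cf = np.vstack([A, A.sum(axis=0)]) @ np.hstack([B, B.sum(axis=1, keepdims=True)])
Cf[1, 2] += 0.5
bad = Cf[:-1, :-1]
print("detected:", not (np.allclose(Cf[-1, :-1], bad.sum(axis=0)) and
                        np.allclose(Cf[:-1, -1], bad.sum(axis=1))))
```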
Productivity associated with visual status of computer users.
Daum, Kent M; Clore, Katherine A; Simms, Suzanne S; Vesely, Jon W; Wilczek, Dawn D; Spittle, Brian M; Good, Greg W
2004-01-01
The aim of this project is to examine the potential connection between the astigmatic refractive corrections of subjects using computers and their productivity and comfort. We hypothesize that improving the visual status of subjects using computers results in greater productivity, as well as improved visual comfort. Inclusion criteria required subjects 19 to 30 years of age with complete vision examinations before being enrolled. Using a double-masked, placebo-controlled, randomized design, subjects completed three experimental tasks calculated to assess the effects of refractive error on productivity (time to completion and the number of errors) at a computer. The tasks resembled those commonly undertaken by computer users and involved visual search tasks of: (1) counties and populations; (2) nonsense word search; and (3) a modified text-editing task. Estimates of productivity for time to completion varied from a minimum of 2.5% upwards to 28.7% with 2 D cylinder miscorrection. Assuming a conservative estimate of an overall 2.5% increase in productivity with appropriate astigmatic refractive correction, our data suggest a favorable cost-benefit ratio of at least 2.3 for the visual correction of an employee (total cost 268 dollars) with a salary of 25,000 dollars per year. We conclude that astigmatic refractive error affected both productivity and visual comfort under the conditions of this experiment. These data also suggest a favorable cost-benefit ratio for employers who provide computer-specific eyewear to their employees.
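The quoted cost-benefit ratio follows directly from the abstract's own numbers, as the small check below shows: a conservative 2.5% productivity gain on a $25,000 salary, set against a $268 cost of providing the correction.

```python
# Back-of-envelope check of the cost-benefit figure quoted above.
salary, productivity_gain, correction_cost = 25_000, 0.025, 268
benefit = salary * productivity_gain
print(benefit, round(benefit / correction_cost, 1))  # 625.0 and roughly 2.3
```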
NASA Astrophysics Data System (ADS)
Lock, Jacobus C.; Smit, Willie J.; Treurnicht, Johann
2016-05-01
The Solar Thermal Energy Research Group (STERG) is investigating ways to make heliostats cheaper to reduce the total cost of a concentrating solar power (CSP) plant. One avenue of research is to use unmanned aerial vehicles (UAVs) to automate and assist with the heliostat calibration process. To do this, the pose estimation error of each UAV must be determined and integrated into a calibration procedure. A computer vision (CV) system is used to measure the pose of a quadcopter UAV. However, this CV system contains considerable measurement errors. Since this is a high-dimensional problem, a sophisticated prediction model must be used to estimate the measurement error of the CV system for any given pose measurement vector. This paper attempts to train and validate such a model with the aim of using it to determine the pose error of a quadcopter in a CSP plant setting.
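As an illustration of the error-prediction model discussed above, the sketch below trains a regressor to map a measured pose vector to the expected CV measurement error. The choice of a random forest, the synthetic data, and the feature layout are assumptions for the sketch; the paper does not specify its model in the abstract.

```python
# Learn a mapping from a 6-DOF pose measurement to the expected measurement error.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)
poses = rng.uniform(-1.0, 1.0, size=(2000, 6))      # x, y, z, roll, pitch, yaw (normalized)
# Synthetic "true" error: grows with distance and attitude angle, plus noise.
errors = (0.05 * np.linalg.norm(poses[:, :3], axis=1)
          + 0.02 * np.abs(poses[:, 3:]).sum(axis=1)
          + rng.normal(0.0, 0.005, size=2000))

X_tr, X_te, y_tr, y_te = train_test_split(poses, errors, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```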
Survey Costs and Errors: User’s Manual for the Lotus 1-2-3 Spreadsheet
1991-04-01
... select appropriate options, such as the use of a business reply envelope or a self-addressed, stamped envelope for returning mailed surveys. ... self-explanatory and need not be discussed here. Mode/Systematic: automatically enters ALL time and cost estimates for a survey project. ... the user can choose between a business reply envelope (BRE) or a self-addressed, stamped envelope (SASE) for returning the surveys. For mail surveys, the ...
Cost efficient command management
NASA Technical Reports Server (NTRS)
Brandt, Theresa; Murphy, C. W.; Kuntz, Jon; Barlett, Tom
1996-01-01
The design and implementation of a command management system (CMS) for a NASA control center is described. The technology innovations implemented in the CMS provide the infrastructure required for reducing operations costs and future development costs through increased operational efficiency and reuse in future missions. The command management design facilitates error-free operations, which enables the automation of routine control center functions and allows scheduling responsibility to be distributed to the instrument teams. The reusable system was developed using object-oriented methodologies.
Errors in the Extra-Analytical Phases of Clinical Chemistry Laboratory Testing.
Zemlin, Annalise E
2018-04-01
The total testing process consists of various phases from the pre-preanalytical to the post-postanalytical phase, the so-called brain-to-brain loop. With improvements in analytical techniques and efficient quality control programmes, most laboratory errors now occur in the extra-analytical phases. There has been recent interest in these errors with numerous publications highlighting their effect on service delivery, patient care and cost. This interest has led to the formation of various working groups whose mission is to develop standardized quality indicators which can be used to measure the performance of service of these phases. This will eventually lead to the development of external quality assessment schemes to monitor these phases in agreement with ISO15189:2012 recommendations. This review focuses on potential errors in the extra-analytical phases of clinical chemistry laboratory testing, some of the studies performed to assess the severity and impact of these errors and processes that are in place to address these errors. The aim of this review is to highlight the importance of these errors for the requesting clinician.
Review of "The High Cost of High School Dropouts in Ohio"
ERIC Educational Resources Information Center
Dorn, Sherman
2009-01-01
A new report published by the Buckeye Institute for Public Policy Solutions is a minor variant on six similar reports published by the Friedman Foundation over the past three years. The new report repeats some of the errors in the previous reports, and it follows a parallel structure, arguing that the costs of dropping out are dramatic for the…
Applying airline safety practices to medication administration.
Pape, Theresa M
2003-04-01
Medication administration errors (MAE) continue as major problems for health care institutions, nurses, and patients. However, MAEs are often the result of system failures leading to patient injury, increased hospital costs, and blaming. Costs include those related to increased hospital length of stay and legal expenses. Contributing factors include distractions, lack of focus, poor communication, and failure to follow standard protocols during medication administration.
NASA Astrophysics Data System (ADS)
Gunawardena, N.; Pardyjak, E. R.; Stoll, R.; Khadka, A.
2018-02-01
Over the last decade there has been a proliferation of low-cost sensor networks that enable highly distributed sensor deployments in environmental applications. The technology is easily accessible and rapidly advancing due to the use of open-source microcontrollers. While this trend is extremely exciting, and the technology provides unprecedented spatial coverage, these sensors and associated microcontroller systems have not been well evaluated in the literature. Given the large number of new deployments and proposed research efforts using these technologies, it is necessary to quantify the overall instrument and microcontroller performance for specific applications. In this paper, an Arduino-based weather station system is presented in detail. These low-cost energy-budget measurement stations, or LEMS, have now been deployed for continuous measurements as part of several different field campaigns, which are described herein. The LEMS are low-cost, flexible, and simple to maintain. In addition to presenting the technical details of the LEMS, its errors are quantified in laboratory and field settings. A simple artificial neural network-based radiation-error correction scheme is also presented. Finally, challenges and possible improvements to microcontroller-based atmospheric sensing systems are discussed.
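The radiation-error correction idea can be sketched as follows: a small neural network learns the temperature error of an unshielded sensor as a function of solar radiation and wind speed, and its prediction is subtracted from the raw reading. The network size and synthetic training data are assumptions for illustration; this is not the LEMS scheme itself.

```python
# Sketch of an ANN-based radiative-heating correction for a low-cost temperature sensor.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
solar = rng.uniform(0, 1000, 5000)           # W m^-2
wind = rng.uniform(0.1, 10, 5000)            # m s^-1
# Synthetic radiative heating error: larger in strong sun and light wind.
temp_error = 0.004 * solar / np.sqrt(wind) + rng.normal(0, 0.1, 5000)  # deg C

X = np.column_stack([solar, wind])
net = MLPRegressor(hidden_layer_sizes=(8, 8), max_iter=2000, random_state=0).fit(X, temp_error)

raw_temp = 28.7                               # deg C, uncorrected reading
predicted_error = net.predict([[850.0, 1.2]])[0]
print("corrected temperature:", raw_temp - predicted_error)
```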
Error management for musicians: an interdisciplinary conceptual framework
Kruse-Weber, Silke; Parncutt, Richard
2014-01-01
Musicians tend to strive for flawless performance and perfection, avoiding errors at all costs. Dealing with errors while practicing or performing is often frustrating and can lead to anger and despair, which can explain musicians’ generally negative attitude toward errors and the tendency to aim for flawless learning in instrumental music education. But even the best performances are rarely error-free, and research in general pedagogy and psychology has shown that errors provide useful information for the learning process. Research in instrumental pedagogy is still neglecting error issues; the benefits of risk management (before the error) and error management (during and after the error) are still underestimated. It follows that dealing with errors is a key aspect of music practice at home, teaching, and performance in public. And yet, to be innovative, or to make their performance extraordinary, musicians need to risk errors. Currently, most music students only acquire the ability to manage errors implicitly – or not at all. A more constructive, creative, and differentiated culture of errors would balance error tolerance and risk-taking against error prevention in ways that enhance music practice and music performance. The teaching environment should lay the foundation for the development of such an approach. In this contribution, we survey recent research in aviation, medicine, economics, psychology, and interdisciplinary decision theory that has demonstrated that specific error-management training can promote metacognitive skills that lead to better adaptive transfer and better performance skills. We summarize how this research can be applied to music, and survey relevant research that is specifically tailored to the needs of musicians, including generic guidelines for risk and error management in music teaching and performance. On this basis, we develop a conceptual framework for risk management that can provide orientation for further music education and musicians at all levels. PMID:25120501
A Comprehensive Revision of the Logistics Planning Exercise (Log-Plan-X).
1981-06-01
... teaching objectives. The difference between conventional teaching methods and simulation rests in the fact that most conventional techniques focus on ... Trial-and-error systems in real life can be very costly. Simulations can be an efficient and effective alternative to such trial-and-error methods by allowing ...
Air Force Operational Test and Evaluation Center, Volume 2, Number 2
1988-01-01
... the special class of attributes are recorded, cost or benefit. In place of the normalization (1), we propose the following normalization ... a comprehensive set of modular test tools designed to provide flexible data reduction, starting with basic data flow to meet requirements at test start, then building to ... a combination of the two position-error measurement techniques is used; SLR is a method of fitting a linear model to accumulated position error ...
Improving registration accuracy.
Murphy, J Patrick; Shorrosh, Paul
2008-04-01
A registration quality assurance initiative--whether manual or automated--can result in benefits such as cleaner claims, reduced cost to collect, enhanced revenue, decreased registration error rates, improved staff morale, and fewer customer complaints.
Managing human fallibility in critical aerospace situations
NASA Astrophysics Data System (ADS)
Tew, Larry
2014-11-01
Human fallibility is pervasive in the aerospace industry, with over 50% of errors attributed to human error. Consider the benefits to any organization if those errors were significantly reduced. Aerospace manufacturing involves high value, high profile systems with significant complexity and often repetitive build, assembly, and test operations. In spite of extensive analysis, planning, training, and detailed procedures, human factors can cause unexpected errors. Handling such errors involves extensive cause and corrective action analysis and invariably leads to schedule slips and cost growth. We will discuss success stories, including those associated with electro-optical systems, where very significant reductions in human fallibility errors were achieved after receiving adapted and specialized training. In the eyes of company and customer leadership, the steps used to achieve these results led to a major culture change in both the workforce and the supporting management organization. This approach has proven effective in other industries such as medicine, firefighting, law enforcement, and aviation. The roadmap to success and the steps to minimize human error are known. They can be used by any organization willing to accept human fallibility and take a proactive approach to incorporate the steps needed to manage and minimize error.
Analyzing human errors in flight mission operations
NASA Technical Reports Server (NTRS)
Bruno, Kristin J.; Welz, Linda L.; Barnes, G. Michael; Sherif, Josef
1993-01-01
A long-term program is in progress at JPL to reduce cost and risk of flight mission operations through a defect prevention/error management program. The main thrust of this program is to create an environment in which the performance of the total system, both the human operator and the computer system, is optimized. To this end, 1580 Incident Surprise Anomaly reports (ISA's) from 1977-1991 were analyzed from the Voyager and Magellan projects. A Pareto analysis revealed that 38 percent of the errors were classified as human errors. A preliminary cluster analysis based on the Magellan human errors (204 ISA's) is presented here. The resulting clusters described the underlying relationships among the ISA's. Initial models of human error in flight mission operations are presented. Next, the Voyager ISA's will be scored and included in the analysis. Eventually, these relationships will be used to derive a theoretically motivated and empirically validated model of human error in flight mission operations. Ultimately, this analysis will be used to make continuous process improvements to end-user applications and training requirements. This Total Quality Management approach will enable the management and prevention of errors in the future.
Applications and error correction for adiabatic quantum optimization
NASA Astrophysics Data System (ADS)
Pudenz, Kristen
Adiabatic quantum optimization (AQO) is a fast-developing subfield of quantum information processing which holds great promise in the relatively near future. Here we develop an application, quantum anomaly detection, and an error correction code, Quantum Annealing Correction (QAC), for use with AQO. The motivation for the anomaly detection algorithm is the problematic nature of classical software verification and validation (V&V). The number of lines of code written for safety-critical applications such as cars and aircraft increases each year, and with it the cost of finding errors grows exponentially (the cost of overlooking errors, which can be measured in human safety, is arguably even higher). We approach the V&V problem by using a quantum machine learning algorithm to identify characteristics of software operations that are implemented outside of specifications, then define an AQO to return these anomalous operations as its result. Our error correction work is the first large-scale experimental demonstration of quantum error correcting codes. We develop QAC and apply it to USC's equipment, the first and second generation of commercially available D-Wave AQO processors. We first show comprehensive experimental results for the code's performance on antiferromagnetic chains, scaling the problem size up to 86 logical qubits (344 physical qubits) and recovering significant encoded success rates even when the unencoded success rates drop to almost nothing. A broader set of randomized benchmarking problems is then introduced, for which we observe similar behavior to the antiferromagnetic chain, specifically that the use of QAC is almost always advantageous for problems of sufficient size and difficulty. Along the way, we develop problem-specific optimizations for the code and gain insight into the various on-chip error mechanisms (most prominently thermal noise, since the hardware operates at finite temperature) and the ways QAC counteracts them. We finish by showing that the scheme is robust to qubit loss on-chip, a significant benefit when considering an implemented system.
Comparison of a Virtual Older Driver Assessment with an On-Road Driving Test.
Eramudugolla, Ranmalee; Price, Jasmine; Chopra, Sidhant; Li, Xiaolan; Anstey, Kaarin J
2016-12-01
To design a low-cost simulator-based driving assessment for older adults and to compare its validity with that of an on-road driving assessment and other measures of older driver risk. Cross-sectional observational study. Canberra, Australia. Older adult drivers (N = 47; aged 65-88, mean age 75.2). Error rate on a simulated drive with environment and scoring procedure matched to those of an on-road test. Other measures included participant age, simulator sickness severity, neuropsychological measures, and driver screening measures. Outcome variables included occupational therapist (OT)-rated on-road errors, on-road safety rating, and safety category. Participants' error rate on the simulated drive was significantly correlated with their OT-rated driving safety (correlation coefficient (r) = -0.398, P = .006), even after adjustment for age and simulator sickness (P = .009). The simulator error rate was a significant predictor of categorization as unsafe on the road (P = .02, sensitivity 69.2%, specificity 100%), with 13 (27%) drivers assessed as unsafe. Simulator error was also associated with other older driver safety screening measures such as useful field of view (r = 0.341, P = .02), DriveSafe (r = -0.455, P < .01), and visual motion sensitivity (r = 0.368, P = .01) but was not associated with memory (delayed word recall) or global cognition (Mini-Mental State Examination). Drivers made twice as many errors on the simulated assessment as during the on-road assessment (P < .001), with significant differences in the rate and type of errors between the two mediums. A low-cost simulator-based assessment is valid as a screening instrument for identifying at-risk older drivers but not as an alternative to on-road evaluation when accurate data on competence or pattern of impairment is required for licensing decisions and training programs. © 2016, Copyright the Authors Journal compilation © 2016, The American Geriatrics Society.
Laxy, Michael; Wilson, Edward C F; Boothby, Clare E; Griffin, Simon J
2017-12-01
There is uncertainty about the cost effectiveness of early intensive treatment versus routine care in individuals with type 2 diabetes detected by screening. To derive a trial-informed estimate of the incremental costs of intensive treatment as delivered in the Anglo-Danish-Dutch Study of Intensive Treatment in People with Screen-Detected Diabetes in Primary Care-Europe (ADDITION) trial and to revisit the long-term cost-effectiveness analysis from the perspective of the UK National Health Service. We analyzed the electronic primary care records of a subsample of the ADDITION-Cambridge trial cohort (n = 173). Unit costs of used primary care services were taken from the published literature. Incremental annual costs of intensive treatment versus routine care in years 1 to 5 after diagnosis were calculated using multilevel generalized linear models. We revisited the long-term cost-utility analyses for the ADDITION-UK trial cohort and reported results for ADDITION-Cambridge using the UK Prospective Diabetes Study Outcomes Model and the trial-informed cost estimates according to a previously developed evaluation framework. Incremental annual costs of intensive treatment over years 1 to 5 averaged £29.10 (standard error = £33.00) for consultations with general practitioners and nurses and £54.60 (standard error = £28.50) for metabolic and cardioprotective medication. For ADDITION-UK, over the 10-, 20-, and 30-year time horizon, adjusted incremental quality-adjusted life-years (QALYs) were 0.014, 0.043, and 0.048, and adjusted incremental costs were £1,021, £1,217, and £1,311, resulting in incremental cost-effectiveness ratios of £71,232/QALY, £28,444/QALY, and £27,549/QALY, respectively. Respective incremental cost-effectiveness ratios for ADDITION-Cambridge were slightly higher. The incremental costs of intensive treatment as delivered in the ADDITION-Cambridge trial were lower than expected. Given UK willingness-to-pay thresholds in patients with screen-detected diabetes, intensive treatment is of borderline cost effectiveness over a time horizon of 20 years and more. Copyright © 2017. Published by Elsevier Inc.
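For readers unfamiliar with the metric, the incremental cost-effectiveness ratios above are simply incremental cost divided by incremental QALYs; recomputing them from the rounded figures in the abstract reproduces the reported values approximately (the published ICERs are derived from unrounded inputs).

```python
# ICER = incremental cost / incremental QALYs, using the rounded figures reported above.
horizons  = [10, 20, 30]            # years
inc_costs = [1021, 1217, 1311]      # GBP
inc_qalys = [0.014, 0.043, 0.048]
for h, c, q in zip(horizons, inc_costs, inc_qalys):
    print(f"{h}-year ICER ~ GBP {c / q:,.0f} per QALY")
```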
Current pulse: can a production system reduce medical errors in health care?
Printezis, Antonios; Gopalakrishnan, Mohan
2007-01-01
One of the reasons for rising health care costs is medical errors, a majority of which result from faulty systems and processes. Health care in the past has used process-based initiatives such as Total Quality Management, Continuous Quality Improvement, and Six Sigma to reduce errors. These initiatives to redesign health care, reduce errors, and improve overall efficiency and customer satisfaction have had moderate success. Current trend is to apply the successful Toyota Production System (TPS) to health care since its organizing principles have led to tremendous improvement in productivity and quality for Toyota and other businesses that have adapted them. This article presents insights on the effectiveness of TPS principles in health care and the challenges that lie ahead in successfully integrating this approach with other quality initiatives.
[Risk Management: concepts and chances for public health].
Palm, Stefan; Cardeneo, Margareta; Halber, Marco; Schrappe, Matthias
2002-01-15
Errors are a common problem in medicine and occur as a result of a complex process involving many contributing factors. Medical errors significantly reduce the safety margin for the patient and contribute additional costs in health care delivery. In most cases adverse events cannot be attributed to a single underlying cause. Therefore an effective risk management strategy must follow a system approach, which is based on counting and analysis of near misses. The development of defenses against the undesired effects of errors should be the main focus rather than asking the question "Who blundered?". Analysis of near misses (which in this context can be compared to indicators) offers several methodological advantages as compared to the analysis of errors and adverse events. Risk management is an integral element of quality management.
Li, Ying
2016-09-16
Fault-tolerant quantum computing in systems composed of both Majorana fermions and topologically unprotected quantum systems, e.g., superconducting circuits or quantum dots, is studied in this Letter. Errors caused by topologically unprotected quantum systems need to be corrected with error-correction schemes, for instance, the surface code. We find that the error-correction performance of such a hybrid topological quantum computer is not superior to a normal quantum computer unless the topological charge of Majorana fermions is insusceptible to noise. If errors changing the topological charge are rare, the fault-tolerance threshold is much higher than the threshold of a normal quantum computer and a surface-code logical qubit could be encoded in only tens of topological qubits instead of about 1,000 normal qubits.
NASA Astrophysics Data System (ADS)
Paul, Prakash
2009-12-01
The finite element method (FEM) is used to solve three-dimensional electromagnetic scattering and radiation problems. Finite element (FE) solutions of this kind contain two main types of error: discretization error and boundary error. Discretization error depends on the number of free parameters used to model the problem, and on how effectively these parameters are distributed throughout the problem space. To reduce the discretization error, the polynomial order of the finite elements is increased, either uniformly over the problem domain or selectively in those areas with the poorest solution quality. Boundary error arises from the condition applied to the boundary that is used to truncate the computational domain. To reduce the boundary error, an iterative absorbing boundary condition (IABC) is implemented. The IABC starts with an inexpensive boundary condition and gradually improves the quality of the boundary condition as the iteration continues. An automatic error control (AEC) is implemented to balance the two types of error. With the AEC, the boundary condition is improved when the discretization error has fallen to a low enough level to make this worth doing. The AEC has these characteristics: (i) it uses a very inexpensive truncation method initially; (ii) it allows the truncation boundary to be very close to the scatterer/radiator; (iii) it puts more computational effort on the parts of the problem domain where it is most needed; and (iv) it can provide as accurate a solution as needed depending on the computational price one is willing to pay. To further reduce the computational cost, disjoint scatterers and radiators that are relatively far from each other are bounded separately and solved using a multi-region method (MRM), which leads to savings in computational cost. A simple analytical way to decide whether the MRM or the single region method will be computationally cheaper is also described. To validate the accuracy and savings in computation time, different shaped metallic and dielectric obstacles (spheres, ogives, cube, flat plate, multi-layer slab etc.) are used for the scattering problems. For the radiation problems, waveguide excited antennas (horn antenna, waveguide with flange, microstrip patch antenna) are used. Using the AEC the peak reduction in computation time during the iteration is typically a factor of 2, compared to the IABC using the same element orders throughout. In some cases, it can be as high as a factor of 4.
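The balancing logic of the automatic error control can be sketched with a toy loop: refine the polynomial order while discretization error dominates, and take another IABC step when boundary error dominates, stopping once both fall below a target. The error models below are purely illustrative stand-ins, not the thesis's estimators or solver.

```python
# Toy sketch of the AEC balancing strategy between p-refinement and IABC steps.
def automatic_error_control(target=1e-3):
    p, bc_level = 1, 0
    # Illustrative error models: discretization error falls with polynomial order p,
    # boundary error falls as the IABC is iterated (bc_level).
    disc_err = lambda order: 0.3 * 2.0 ** (-order)
    bnd_err = lambda level: 0.1 * 3.0 ** (-level)
    steps = []
    while max(disc_err(p), bnd_err(bc_level)) > target:
        if disc_err(p) >= bnd_err(bc_level):
            p += 1              # selective p-refinement (uniform here for simplicity)
        else:
            bc_level += 1       # take one more IABC iteration
        steps.append((p, bc_level, disc_err(p), bnd_err(bc_level)))
    return steps

for step in automatic_error_control():
    print(step)
```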
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurnik, Charles W; Khawaja, M. Sami; Rushton, Josh
Evaluating an energy efficiency program requires assessing the total energy and demand saved through all of the energy efficiency measures provided by the program. For large programs, the direct assessment of savings for each participant would be cost-prohibitive. Even if a program is small enough that a full census could be managed, such an undertaking would almost always be an inefficient use of evaluation resources. The bulk of this chapter describes methods for minimizing and quantifying sampling error. Measurement error and regression error are discussed in various contexts in other chapters.
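The sampling-error logic described above can be made concrete with a standard sample-size calculation. The following minimal sketch is not from the chapter; the coefficient of variation, the 90% confidence / 10% relative-precision target, and the population size are illustrative assumptions.

```python
# Illustrative sketch (not from the chapter): sizing a simple random sample of
# program participants so the savings estimate meets a relative-precision target.
# The cv, the 90/10 precision target, and the population size are assumptions.
import math

def required_sample_size(cv, rel_precision, z=1.645, population=None):
    """n = (z * cv / rp)^2, with an optional finite-population correction."""
    n0 = (z * cv / rel_precision) ** 2
    if population:
        n0 = n0 / (1.0 + n0 / population)
    return math.ceil(n0)

# Example: cv = 0.5, 90% confidence / 10% relative precision, 2,000 participants.
print(required_sample_size(cv=0.5, rel_precision=0.10, population=2000))  # about 66
```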
NASA Astrophysics Data System (ADS)
Zimmerman, Naomi; Presto, Albert A.; Kumar, Sriniwasa P. N.; Gu, Jason; Hauryliuk, Aliaksei; Robinson, Ellis S.; Robinson, Allen L.; Subramanian, R.
2018-01-01
Low-cost sensing strategies hold the promise of denser air quality monitoring networks, which could significantly improve our understanding of personal air pollution exposure. Additionally, low-cost air quality sensors could be deployed to areas where limited monitoring exists. However, low-cost sensors are frequently sensitive to environmental conditions and pollutant cross-sensitivities, which have historically been poorly addressed by laboratory calibrations, limiting their utility for monitoring. In this study, we investigated different calibration models for the Real-time Affordable Multi-Pollutant (RAMP) sensor package, which measures CO, NO2, O3, and CO2. We explored three methods: (1) laboratory univariate linear regression, (2) empirical multiple linear regression, and (3) machine-learning-based calibration models using random forests (RF). Calibration models were developed for 16-19 RAMP monitors (varied by pollutant) using training and testing windows spanning August 2016 through February 2017 in Pittsburgh, PA, US. The random forest models matched (CO) or significantly outperformed (NO2, CO2, O3) the other calibration models, and their accuracy and precision were robust over time for testing windows of up to 16 weeks. Following calibration, average mean absolute error on the testing data set from the random forest models was 38 ppb for CO (14 % relative error), 10 ppm for CO2 (2 % relative error), 3.5 ppb for NO2 (29 % relative error), and 3.4 ppb for O3 (15 % relative error), and Pearson r versus the reference monitors exceeded 0.8 for most units. Model performance is explored in detail, including a quantification of model variable importance, accuracy across different concentration ranges, and performance in a range of monitoring contexts including the National Ambient Air Quality Standards (NAAQS) and the US EPA Air Sensors Guidebook recommendations of minimum data quality for personal exposure measurement. A key strength of the RF approach is that it accounts for pollutant cross-sensitivities. This highlights the importance of developing multipollutant sensor packages (as opposed to single-pollutant monitors); we determined this is especially critical for NO2 and CO2. The evaluation reveals that only the RF-calibrated sensors meet the US EPA Air Sensors Guidebook recommendations of minimum data quality for personal exposure measurement. We also demonstrate that the RF-model-calibrated sensors could detect differences in NO2 concentrations between a near-road site and a suburban site less than 1.5 km away. From this study, we conclude that combining RF models with carefully controlled state-of-the-art multipollutant sensor packages as in the RAMP monitors appears to be a very promising approach to address the poor performance that has plagued low-cost air quality sensors.
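As an illustration of the random-forest calibration strategy described above, the sketch below fits a multipollutant calibration model on synthetic co-location data. It is not the study's code: the feature set, the cross-sensitivity used to generate the raw signal, and the time-ordered train/test split are assumptions standing in for the RAMP/reference-monitor records.

```python
# Minimal sketch of random-forest sensor calibration on synthetic co-location data.
# All names and values here are illustrative assumptions, not the RAMP data set.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 2000
temp, rh = rng.uniform(0, 35, n), rng.uniform(20, 90, n)
no2_true = rng.uniform(2, 40, n)              # "reference monitor" NO2, ppb
o3_signal = rng.uniform(5, 60, n)             # co-measured O3 from the same package
# Raw NO2 signal with temperature sensitivity and O3 cross-sensitivity
no2_raw = 0.8 * no2_true + 0.4 * o3_signal + 0.3 * temp + rng.normal(0, 1.5, n)

X = np.column_stack([no2_raw, o3_signal, temp, rh])
train = np.arange(n) < 1200                   # time-ordered split, not random

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[train], no2_true[train])
pred = model.predict(X[~train])
print("MAE (ppb):", round(mean_absolute_error(no2_true[~train], pred), 2))
```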
Commers, Tessa; Swindells, Susan; Sayles, Harlan; Gross, Alan E; Devetten, Marcel; Sandkovsky, Uriel
2014-01-01
Errors in prescribing antiretroviral therapy (ART) often occur with the hospitalization of HIV-infected patients. The rapid identification and prevention of errors may reduce patient harm and healthcare-associated costs. A retrospective review of hospitalized HIV-infected patients was carried out between 1 January 2009 and 31 December 2011. Errors were documented as omission, underdose, overdose, duplicate therapy, incorrect scheduling and/or incorrect therapy. The time to error correction was recorded. Relative risks (RRs) were computed to evaluate patient characteristics and error rates. A total of 289 medication errors were identified in 146/416 admissions (35%). The most common was drug omission (69%). At an error rate of 31%, nucleoside reverse transcriptase inhibitors were associated with an increased risk of error when compared with protease inhibitors (RR 1.32; 95% CI 1.04-1.69) and co-formulated drugs (RR 1.59; 95% CI 1.19-2.09). Of the errors, 31% were corrected within the first 24 h, but over half (55%) were never remedied. Admissions with an omission error were 7.4 times more likely to have all errors corrected within 24 h than were admissions without an omission. Drug interactions with ART were detected on 51 occasions. For the study population (n = 177), an increased risk of admission error was observed for black (43%) compared with white (28%) individuals (RR 1.53; 95% CI 1.16-2.03) but no significant differences were observed between white patients and other minorities or between men and women. Errors in inpatient ART were common, and the majority were never detected. The most common errors involved omission of medication, and nucleoside reverse transcriptase inhibitors had the highest rate of prescribing error. Interventions to prevent and correct errors are urgently needed.
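The relative risks quoted above (e.g., RR 1.53, 95% CI 1.16-2.03) are standard ratios of event proportions with a log-scale confidence interval. A minimal sketch of that calculation follows; the 2x2 counts are invented for illustration, not taken from the study.

```python
# Illustrative computation of a relative risk and its 95% CI from 2x2 counts.
# The counts below are invented, not the study's data.
import math

def relative_risk(a, n1, b, n2):
    """a/n1 = events/total in one group, b/n2 = events/total in the comparison group."""
    rr = (a / n1) / (b / n2)
    se_log = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)
    lo = math.exp(math.log(rr) - 1.96 * se_log)
    hi = math.exp(math.log(rr) + 1.96 * se_log)
    return rr, (lo, hi)

rr, ci = relative_risk(a=40, n1=93, b=24, n2=84)   # hypothetical admission counts
print(f"RR = {rr:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```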
A Low-Cost Data Acquisition System for Automobile Dynamics Applications.
González, Alejandro; Olazagoitia, José Luis; Vinolas, Jordi
2018-01-27
This project addresses the need for the implementation of low-cost acquisition technology in the field of vehicle engineering: the design, development, manufacture, and verification of a low-cost Arduino-based data acquisition platform to be used in <80 Hz data acquisition in vehicle dynamics, using low-cost accelerometers. In addition to this, a comparative study is carried out of professional vibration acquisition technologies and low-cost systems, obtaining optimum results for low- and medium-frequency operations with an error of 2.19% on road tests. It is therefore concluded that these technologies are applicable to the automobile industry, thereby allowing the project costs to be reduced and thus facilitating access to this kind of research that requires limited resources.
Avery, Anthony J; Rodgers, Sarah; Cantrill, Judith A; Armstrong, Sarah; Cresswell, Kathrin; Eden, Martin; Elliott, Rachel A; Howard, Rachel; Kendrick, Denise; Morris, Caroline J; Prescott, Robin J; Swanwick, Glen; Franklin, Matthew; Putman, Koen; Boyd, Matthew; Sheikh, Aziz
2012-04-07
Medication errors are common in primary care and are associated with considerable risk of patient harm. We tested whether a pharmacist-led, information technology-based intervention was more effective than simple feedback in reducing the number of patients at risk from hazardous prescribing and inadequate blood-test monitoring of medicines 6 months after the intervention. In this pragmatic, cluster randomised trial general practices in the UK were stratified by research site and list size, and randomly assigned by a web-based randomisation service in block sizes of two or four to one of two groups. The practices were allocated to either computer-generated simple feedback for at-risk patients (control) or a pharmacist-led information technology intervention (PINCER), composed of feedback, educational outreach, and dedicated support. The allocation was masked to researchers and statisticians involved in processing and analysing the data. The allocation was not masked to general practices, pharmacists, patients, or researchers who visited practices to extract data. Primary outcomes were the proportions of patients at 6 months after the intervention who had had any of three clinically important errors: non-selective non-steroidal anti-inflammatory drugs (NSAIDs) prescribed to those with a history of peptic ulcer without co-prescription of a proton-pump inhibitor; β blockers prescribed to those with a history of asthma; long-term prescription of angiotensin converting enzyme (ACE) inhibitor or loop diuretics to those 75 years or older without assessment of urea and electrolytes in the preceding 15 months. The cost per error avoided was estimated by incremental cost-effectiveness analysis. This study is registered with Controlled-Trials.com, number ISRCTN21785299. 72 general practices with a combined list size of 480,942 patients were randomised. At 6 months' follow-up, patients in the PINCER group were significantly less likely to have been prescribed a non-selective NSAID if they had a history of peptic ulcer without gastroprotection (OR 0·58, 95% CI 0·38-0·89); a β blocker if they had asthma (0·73, 0·58-0·91); or an ACE inhibitor or loop diuretic without appropriate monitoring (0·51, 0·34-0·78). PINCER has a 95% probability of being cost effective if the decision-maker's ceiling willingness to pay reaches £75 per error avoided at 6 months. The PINCER intervention is an effective method for reducing a range of medication errors in general practices with computerised clinical records. Patient Safety Research Portfolio, Department of Health, England. Copyright © 2012 Elsevier Ltd. All rights reserved.
A Micromechanical INS/GPS System for Small Satellites
NASA Technical Reports Server (NTRS)
Barbour, N.; Brand, T.; Haley, R.; Socha, M.; Stoll, J.; Ward, P.; Weinberg, M.
1995-01-01
The cost and complexity of large satellite space missions continue to escalate. To reduce costs, more attention is being directed toward small lightweight satellites where future demand is expected to grow dramatically. Specifically, micromechanical inertial systems and microstrip global positioning system (GPS) antennas incorporating flip-chip bonding, application specific integrated circuits (ASIC) and MCM technologies will be required. Traditional microsatellite pointing systems do not employ active control. Many systems allow the satellite to point coarsely using gravity gradient, then attempt to maintain the image on the focal plane with fast-steering mirrors. Draper's approach is to actively control the line of sight pointing by utilizing on-board attitude determination with micromechanical inertial sensors and reaction wheel control actuators. Draper has developed commercial and tactical-grade micromechanical inertial sensors. The small size, low weight, and low cost of these gyroscopes and accelerometers enable systems previously impractical because of size and cost. Evolving micromechanical inertial sensors can be applied to closed-loop, active control of small satellites for micro-radian precision-pointing missions. An inertial reference feedback control loop can be used to determine attitude and line of sight jitter to provide error information to the controller for correction. At low frequencies, the error signal is provided by GPS. At higher frequencies, feedback is provided by the micromechanical gyros. This blending of sensors provides wide-band sensing from dc to operational frequencies. First order simulation has shown that the performance of existing micromechanical gyros, with integrated GPS, is feasible for a pointing mission of 10 micro-radians of jitter stability and approximately 1 milli-radian absolute error, for a satellite with 1 meter antenna separation. Improved performance micromechanical sensors currently under development will be suitable for a range of micro-nano-satellite applications.
Vosoughi, Aram; Smith, Paul Taylor; Zeitouni, Joseph A; Sodeman, Gregori M; Jorda, Merce; Gomez-Fernandez, Carmen; Garcia-Buitrago, Monica; Petito, Carol K; Chapman, Jennifer R; Campuzano-Zuluaga, German; Rosenberg, Andrew E; Kryvenko, Oleksandr N
2018-04-30
Frozen section telepathology interpretation experience has been largely limited to practices with locations significantly distant from one another with sporadic need for frozen section diagnosis. In 2010 we established a real-time non-robotic telepathology system in a very active cancer center for daily frozen section service. Herein, we evaluate its accuracy compared to direct microscopic interpretation performed in the main hospital by the same faculty and its cost-efficiency over a 1-year period. Of the 643 cases (1416 parts) requiring intraoperative consultation, 333 cases (690 parts) were examined by telepathology and 310 cases (726 parts) by direct microscopy. Corresponding discrepancy rates were 2.6% (18 cases: 6 (0.9%) sampling and 12 (1.7%) diagnostic errors) and 3.2% (23 cases: 8 (1.1%) sampling and 15 (2.1%) diagnostic errors), P=.63. The sensitivity and specificity of intraoperative frozen diagnosis were 0.92 and 0.99, respectively, in telepathology, and 0.90 and 0.99, respectively, in direct microscopy. There was no correlation of error incidence with the postgraduate year level of residents involved in the telepathology service. Cost analysis indicated that the value of the time saved by telepathology was $19,691 over one year of the study period, while the capital cost of establishing the system was $8,924. Thus, real-time non-robotic telepathology is a reliable and easy-to-use tool for frozen section evaluation in busy clinical settings, especially when the frozen section service involves more than one hospital, and it is cost efficient when travel is a component of the service. Copyright © 2018. Published by Elsevier Inc.
Environmental cost of using poor decision metrics to prioritize environmental projects.
Pannell, David J; Gibson, Fiona L
2016-04-01
Conservation decision makers commonly use project-scoring metrics that are inconsistent with theory on optimal ranking of projects. As a result, there may often be a loss of environmental benefits. We estimated the magnitudes of these losses for various metrics that deviate from theory in ways that are common in practice. These metrics included cases where relevant variables were omitted from the benefits metric, project costs were omitted, and benefits were calculated using a faulty functional form. We estimated distributions of parameters from 129 environmental projects from Australia, New Zealand, and Italy for which detailed analyses had been completed previously. The cost of using poor prioritization metrics (in terms of lost environmental values) was often high--up to 80% in the scenarios we examined. The cost in percentage terms was greater when the budget was smaller. The most costly errors were omitting information about environmental values (up to 31% loss of environmental values), omitting project costs (up to 35% loss), omitting the effectiveness of management actions (up to 9% loss), and using a weighted-additive decision metric for variables that should be multiplied (up to 23% loss). The latter 3 are errors that occur commonly in real-world decision metrics, in combination often reducing potential benefits from conservation investments by 30-50%. Uncertainty about parameter values also reduced the benefits from investments in conservation projects but often not by as much as faulty prioritization metrics. © 2016 Society for Conservation Biology.
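A toy example can make the ranking losses described above concrete. The sketch below (all project values invented) contrasts the theoretically preferred benefit-cost ratio, in which value, effectiveness, and adoption multiply and are divided by cost, with a weighted-additive score that omits cost; the two metrics rank the same projects differently.

```python
# A toy illustration (values invented) of the ranking errors discussed above:
# the benefit metric should be multiplicative and divided by cost; a common
# weighted-additive score that ignores cost produces a different ranking.
projects = [
    # (name, environmental value, effectiveness, adoption, cost)
    ("A", 0.9, 0.3, 0.8, 200.0),
    ("B", 0.5, 0.9, 0.9, 150.0),
    ("C", 0.8, 0.6, 0.4, 400.0),
    ("D", 0.4, 0.7, 0.9, 100.0),
]

def bcr(p):           # benefit-cost ratio, the theoretically preferred metric
    _, v, e, a, c = p
    return v * e * a / c

def additive(p):      # a common but faulty metric: weighted sum, cost omitted
    _, v, e, a, _ = p
    return 0.4 * v + 0.3 * e + 0.3 * a

print("By benefit/cost:", [p[0] for p in sorted(projects, key=bcr, reverse=True)])
print("By additive    :", [p[0] for p in sorted(projects, key=additive, reverse=True)])
```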
Reduced cost and improved figure of sapphire optical components
NASA Astrophysics Data System (ADS)
Walters, Mark; Bartlett, Kevin; Brophy, Matthew R.; DeGroote Nelson, Jessica; Medicus, Kate
2015-10-01
Sapphire presents many challenges to optical manufacturers due to its high hardness and anisotropic properties. Long lead times and high prices are the typical result of such challenges. The cost of even a simple 'grind and shine' process can be prohibitive. The high precision surfaces required by optical sensor applications further exacerbate the challenge of processing sapphire thereby increasing cost further. Optimax has demonstrated a production process for such windows that delivers over 50% time reduction as compared to traditional manufacturing processes for sapphire, while producing windows with less than 1/5 wave rms figure error. Optimax's sapphire production process achieves significant improvement in cost by implementation of a controlled grinding process to present the best possible surface to the polishing equipment. Following the grinding process is a polishing process taking advantage of chemical interactions between slurry and substrate to deliver excellent removal rates and surface finish. Through experiments, the mechanics of the polishing process were also optimized to produce excellent optical figure. In addition to reducing the cost of producing large sapphire sensor windows, the grinding and polishing technology Optimax has developed aids in producing spherical sapphire components to better figure quality. Through specially developed polishing slurries, the peak-to-valley figure error of spherical sapphire parts is reduced by over 80%.
Impact of Medicare Part D on out-of-pocket drug costs and medical use for patients with cancer.
Kircher, Sheetal M; Johansen, Michael E; Nimeiri, Halla S; Richardson, Caroline R; Davis, Matthew M
2014-11-01
Medicare Part D was designed to reduce out-of-pocket (OOP) costs for Medicare beneficiaries, but to the authors' knowledge the extent to which this occurred for patients with cancer has not been measured to date. The objective of the current study was to examine the impact of Medicare Part D eligibility on OOP cost for prescription drugs and use of medical services among patients with cancer. Using the Medical Expenditure Panel Survey (MEPS) for the years 2002 through 2010, a differences-in-differences analysis estimated the effects of Medicare Part D eligibility on OOP pharmaceutical costs and medical use. The authors compared per capita OOP cost and use between Medicare beneficiaries (aged ≥65 years) with cancer to near-elderly patients aged 55 years to 64 years with cancer. Statistical weights were used to generate nationally representative estimates. A total of 1878 near-elderly and 4729 individuals with Medicare were included (total of 6607 individuals). The mean OOP pharmaceutical cost for Medicare beneficiaries before the enactment of Part D was $1158 (standard error, ±$52) and decreased to $501 (standard error, ±$30), a decline of 43%. Compared with changes in OOP pharmaceutical costs for nonelderly patients with cancer over the same period, the implementation of Medicare Part D was associated with a further reduction of $356 per person. Medicare Part D appeared to have no significant impact on the use of medications, hospitalizations, or emergency department visits, but was associated with a reduction of 1.55 in outpatient visits. Medicare D has reduced OOP prescription drug costs and outpatient visits for seniors with cancer beyond trends observed for younger patients, with no major impact on the use of other medical services noted. © 2014 American Cancer Society.
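The design described above reduces to a two-group, two-period differences-in-differences comparison. The sketch below shows the arithmetic; the Medicare-group means are the figures quoted in the abstract, while the near-elderly comparison means are placeholders chosen only so the example reproduces the reported $356 per-person effect.

```python
# A minimal two-by-two differences-in-differences sketch of the design above
# (Medicare-eligible vs. near-elderly cancer patients, before vs. after Part D).
def did(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """(Change in the treated group) minus (change in the comparison group)."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

oop = {
    ("medicare", "pre"): 1158.0,      # reported pre-Part D mean OOP drug cost
    ("medicare", "post"): 501.0,      # reported post-Part D mean OOP drug cost
    ("near_elderly", "pre"): 900.0,   # hypothetical comparison-group means
    ("near_elderly", "post"): 599.0,  # (chosen so the example reproduces -$356)
}

effect = did(oop[("medicare", "pre")], oop[("medicare", "post")],
             oop[("near_elderly", "pre")], oop[("near_elderly", "post")])
print(f"DiD estimate of the Part D effect on OOP drug cost: {effect:+.0f} dollars per person")
```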
The next organizational challenge: finding and addressing diagnostic error.
Graber, Mark L; Trowbridge, Robert; Myers, Jennifer S; Umscheid, Craig A; Strull, William; Kanter, Michael H
2014-03-01
Although health care organizations (HCOs) are intensely focused on improving the safety of health care, efforts to date have almost exclusively targeted treatment-related issues. The literature confirms that the approaches HCOs use to identify adverse medical events are not effective in finding diagnostic errors, so the initial challenge is to identify cases of diagnostic error. WHY HEALTH CARE ORGANIZATIONS NEED TO GET INVOLVED: HCOs are preoccupied with many quality- and safety-related operational and clinical issues, including performance measures. The case for paying attention to diagnostic errors, however, is based on the following four points: (1) diagnostic errors are common and harmful, (2) high-quality health care requires high-quality diagnosis, (3) diagnostic errors are costly, and (4) HCOs are well positioned to lead the way in reducing diagnostic error. FINDING DIAGNOSTIC ERRORS: Current approaches to identifying diagnostic errors, such as occurrence screens, incident reports, autopsy, and peer review, were not designed to detect diagnostic issues (or problems of omission in general) and/or rely on voluntary reporting. The realization that the existing tools are inadequate has spurred efforts to identify novel tools that could be used to discover diagnostic errors or breakdowns in the diagnostic process that are associated with errors. New approaches--Maine Medical Center's case-finding of diagnostic errors by facilitating direct reports from physicians and Kaiser Permanente's electronic health record--based reports that detect process breakdowns in the followup of abnormal findings--are described in case studies. By raising awareness and implementing targeted programs that address diagnostic error, HCOs may begin to play an important role in addressing the problem of diagnostic error.
E-prescribing errors in community pharmacies: exploring consequences and contributing factors.
Odukoya, Olufunmilola K; Stone, Jamie A; Chui, Michelle A
2014-06-01
To explore types of e-prescribing errors in community pharmacies and their potential consequences, as well as the factors that contribute to e-prescribing errors. Data collection involved performing 45 total hours of direct observations in five pharmacies. Follow-up interviews were conducted with 20 study participants. Transcripts from observations and interviews were subjected to content analysis using NVivo 10. Pharmacy staff detected 75 e-prescription errors during the 45 h observation in pharmacies. The most common e-prescribing errors were wrong drug quantity, wrong dosing directions, wrong duration of therapy, and wrong dosage formulation. Participants estimated that 5 in 100 e-prescriptions have errors. Drug classes that were implicated in e-prescribing errors were antiinfectives, inhalers, ophthalmic, and topical agents. The potential consequences of e-prescribing errors included increased likelihood of the patient receiving incorrect drug therapy, poor disease management for patients, additional work for pharmacy personnel, increased cost for pharmacies and patients, and frustrations for patients and pharmacy staff. Factors that contribute to errors included: technology incompatibility between pharmacy and clinic systems, technology design issues such as use of auto-populate features and dropdown menus, and inadvertently entering incorrect information. Study findings suggest that a wide range of e-prescribing errors is encountered in community pharmacies. Pharmacists and technicians perceive that causes of e-prescribing errors are multidisciplinary and multifactorial, that is to say e-prescribing errors can originate from technology used in prescriber offices and pharmacies. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Low-Cost, Light Weight, Thin Film Solar Concentrator
NASA Technical Reports Server (NTRS)
Ganapathi, G.; Palisoc, A.; Nesmith, B.; Greschik, G.; Gidanian, K.; Kindler, A.
2013-01-01
This research addresses a cost barrier towards achieving a solar thermal collector system with an installed cost of $75/sq m that meets the Department of Energy's (DOE's) performance targets for optical errors, operation during windy conditions, and lifetime. Current concentrators can cost as much as 40-50% of the total installed costs for a CSP plant. In order to reduce costs from the current $200-$250/sq m, it is important to focus on the overall system. The reflector surface is a key cost driver, and our film-based polymer reflector will help significantly in achieving DOE's cost target of $75/sq m. The ease of manufacturability, installation, and replacement makes this technology a compelling one to develop. The technology can be easily modified for a variety of CSP options, including heliostats, parabolic dishes, and parabolic troughs.
NASA Technical Reports Server (NTRS)
Tuey, R. C.
1972-01-01
Computer solutions of linear programming problems are outlined. Information covers vector spaces, convex sets, and matrix algebra elements for solving simultaneous linear equations. Dual problems, reduced cost analysis, ranges, and error analysis are illustrated.
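For readers unfamiliar with the terms listed above, the following small worked example (not from the original report) solves a textbook linear program, solves its dual, and forms the reduced costs used in sensitivity analysis. It assumes scipy is available.

```python
# Illustrative LP example: primal solution, dual (shadow) prices, and reduced costs.
import numpy as np
from scipy.optimize import linprog

# maximize 3x1 + 5x2  subject to  x1 <= 4, 2x2 <= 12, 3x1 + 2x2 <= 18, x >= 0
c = np.array([3.0, 5.0])
A = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])

primal = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)   # minimize -c^T x
# Dual: minimize b^T y subject to A^T y >= c, y >= 0
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 3)

x, y = primal.x, dual.x
print("optimal x:", x.round(3), " objective:", float(c @ x))
print("dual (shadow) prices y:", y.round(3))
print("reduced costs c - A^T y:", (c - A.T @ y).round(3))
```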
NASA Astrophysics Data System (ADS)
Romo, David Ricardo
Foreign Object Debris/Damage (FOD) has been an issue for military and commercial aircraft manufacturers since the early days of aviation and aerospace. The aerospace industry is currently growing rapidly, and the likelihood of FOD is growing with it. One of the principal causes in manufacturing is human error. The cost associated with human error in commercial and military aircraft is approximately 4 billion dollars per year. The problem is currently addressed with prevention programs, elimination techniques, designation of FOD areas, controlled access, restrictions on personal items entering designated areas, tool accountability, and the use of technology such as Radio Frequency Identification (RFID) tags. These efforts have not produced a significant reduction in FOD occurrence in manufacturing processes; on the contrary, a repetitive pattern of occurrence persists, and the associated cost has not declined significantly. To address the problem, this thesis proposes a new approach based on statistical analysis. The aim is to create a predictive model from historical categorical data from an aircraft manufacturer, focusing only on human error causes. Contingency tables, the natural logarithm of the odds, and a probability transformation are used to provide predicted probabilities for each aircraft. A case study is presented to illustrate the methodology. The approach is able to predict possible FOD outcomes for each workstation/area and to produce monthly predictions per workstation. This thesis is intended as a starting point for statistical data analysis of FOD related to human factors; its purpose is to identify the areas where human error is the primary cause of FOD occurrence so that accurate solutions can be designed and implemented. The advantages of the proposed methodology range from reduced production costs, quality issues, repair costs, and assembly process time to a more reliable overall process, and the methodology may be applied to other aircraft.
High-resolution wavefront control of high-power laser systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brase, J; Brown, C; Carrano, C
1999-07-08
Nearly every new large-scale laser system application at LLNL has requirements for beam control which exceed the current level of available technology. For applications such as inertial confinement fusion, laser isotope separation, and laser machining, the ability to transport significant power to a target while maintaining good beam quality is critical. There are many ways that laser wavefront quality can be degraded. Thermal effects due to the interaction of high-power laser or pump light with the internal optical components or with the ambient gas are common causes of wavefront degradation. For many years, adaptive optics based on thin deformable glass mirrors with piezoelectric or electrostrictive actuators have been used to remove the low-order wavefront errors from high-power laser systems. These adaptive optics systems have successfully improved laser beam quality, but have also generally revealed additional high-spatial-frequency errors, both because the low-order errors have been reduced and because deformable mirrors have often introduced some high-spatial-frequency components due to manufacturing errors. Many current and emerging laser applications fall into the high-resolution category where there is an increased need for the correction of high-spatial-frequency aberrations, which requires correctors with thousands of degrees of freedom. The largest deformable mirrors currently available have less than one thousand degrees of freedom at a cost of approximately $1M. A deformable mirror capable of meeting these high spatial resolution requirements would be cost prohibitive. Therefore a new approach using a different wavefront control technology is needed. One new wavefront control approach is the use of liquid-crystal (LC) spatial light modulator (SLM) technology for controlling the phase of linearly polarized light. Current LC SLM technology provides high-spatial-resolution wavefront control, with hundreds of thousands of degrees of freedom, more than two orders of magnitude greater than the best deformable mirrors currently made. Even with the increased spatial resolution, the cost of these devices is nearly two orders of magnitude less than the cost of the largest deformable mirror.
PACE 2: Pricing and Cost Estimating Handbook
NASA Technical Reports Server (NTRS)
Stewart, R. D.; Shepherd, T.
1977-01-01
An automatic data processing system to be used for the preparation of industrial engineering type manhour and material cost estimates has been established. This computer system has evolved into a highly versatile and highly flexible tool which significantly reduces computation time, eliminates computational errors, and reduces typing and reproduction time for estimators and pricers since all mathematical and clerical functions are automatic once basic inputs are derived.
ERIC Educational Resources Information Center
Smith, Rachel A.; Levine, Timothy R.; Lachlan, Kenneth A.; Fediuk, Thomas A.
2002-01-01
Notes that the availability of statistical software packages has led to a sharp increase in use of complex research designs and complex statistical analyses in communication research. Reports a series of Monte Carlo simulations which demonstrate that this complexity may come at a heavier cost than many communication researchers realize. Warns…
Identification and correction of systematic error in high-throughput sequence data
2011-01-01
Background A feature common to all DNA sequencing technologies is the presence of base-call errors in the sequenced reads. The implications of such errors are application specific, ranging from minor informatics nuisances to major problems affecting biological inferences. Recently developed "next-gen" sequencing technologies have greatly reduced the cost of sequencing, but have been shown to be more error prone than previous technologies. Both position specific (depending on the location in the read) and sequence specific (depending on the sequence in the read) errors have been identified in Illumina and Life Technology sequencing platforms. We describe a new type of systematic error that manifests as statistically unlikely accumulations of errors at specific genome (or transcriptome) locations. Results We characterize and describe systematic errors using overlapping paired reads from high-coverage data. We show that such errors occur in approximately 1 in 1000 base pairs, and that they are highly replicable across experiments. We identify motifs that are frequent at systematic error sites, and describe a classifier that distinguishes heterozygous sites from systematic error. Our classifier is designed to accommodate data from experiments in which the allele frequencies at heterozygous sites are not necessarily 0.5 (such as in the case of RNA-Seq), and can be used with single-end datasets. Conclusions Systematic errors can easily be mistaken for heterozygous sites in individuals, or for SNPs in population analyses. Systematic errors are particularly problematic in low coverage experiments, or in estimates of allele-specific expression from RNA-Seq data. Our characterization of systematic error has allowed us to develop a program, called SysCall, for identifying and correcting such errors. We conclude that correction of systematic errors is important to consider in the design and interpretation of high-throughput sequencing experiments. PMID:22099972
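In the spirit of the error-detection idea described above (this is not the SysCall implementation), a position can be flagged when its mismatch pileup is too large to be random sequencing error yet too infrequent to be a heterozygous site. The per-base error rate, significance threshold, and frequency cutoff below are assumptions.

```python
# Sketch: flag positions whose mismatch counts are statistically unlikely under
# the sequencer's per-base error rate but far below heterozygous (~0.5) frequency.
from scipy.stats import binomtest

PER_BASE_ERROR = 0.005     # assumed average base-call error rate

def is_systematic_error_site(mismatches, coverage, alpha=1e-6):
    """True if the pileup is improbable as random error and not heterozygous-like."""
    test = binomtest(mismatches, coverage, PER_BASE_ERROR, alternative="greater")
    freq = mismatches / coverage
    return test.pvalue < alpha and freq < 0.2

print(is_systematic_error_site(mismatches=9, coverage=120))   # likely systematic
print(is_systematic_error_site(mismatches=55, coverage=120))  # looks heterozygous
```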
Wagner, James; Schroeder, Heather M.; Piskorowski, Andrew; Ursano, Robert J.; Stein, Murray B.; Heeringa, Steven G.; Colpe, Lisa J.
2017-01-01
Mixed-mode surveys need to determine a number of design parameters that may have a strong influence on costs and errors. In a sequential mixed-mode design with web followed by telephone, one of these decisions is when to switch modes. The web mode is relatively inexpensive but produces lower response rates. The telephone mode complements the web mode in that it is relatively expensive but produces higher response rates. Among the potential negative consequences, delaying the switch from web to telephone may lead to lower response rates if the effectiveness of the prenotification contact materials is reduced by longer time lags, or if the additional e-mail reminders to complete the web survey annoy the sampled person. On the positive side, delaying the switch may decrease the costs of the survey. We evaluate these costs and errors by experimentally testing four different timings (1, 2, 3, or 4 weeks) for the mode switch in a web–telephone survey. This experiment was conducted on the fourth wave of a longitudinal study of the mental health of soldiers in the U.S. Army. We find that the different timings of the switch in the range of 1–4 weeks do not produce differences in final response rates or key estimates but longer delays before switching do lead to lower costs. PMID:28943717
DC-Compensated Current Transformer.
Ripka, Pavel; Draxler, Karel; Styblíková, Renata
2016-01-20
Instrument current transformers (CTs) measure AC currents. The DC component in the measured current can saturate the transformer and cause gross error. We use fluxgate detection and digital feedback compensation of the DC flux to suppress the overall error to 0.15%. This concept can be used not only for high-end CTs with a nanocrystalline core, but it also works for low-cost CTs with FeSi cores. The method described here allows simultaneous measurements of the DC current component.
Cost effectiveness of the stream-gaging program in Louisiana
Herbert, R.A.; Carlson, D.D.
1985-01-01
This report documents the results of a study of the cost effectiveness of the stream-gaging program in Louisiana. Data uses and funding sources were identified for the 68 continuous-record stream gages currently (1984) in operation with a budget of $408,700. Three stream gages have uses specific to a short-term study with no need for continued data collection beyond the study. The remaining 65 stations should be maintained in the program for the foreseeable future. In addition to the current operation of continuous-record stations, a number of wells, flood-profile gages, crest-stage gages, and stage stations are serviced on the continuous-record station routes, increasing the current budget to $423,000. The average standard error of estimate for data collected at the stations is 34.6%. Standard errors computed in this study are one measure of streamflow errors, and can be used as guidelines in comparing the effectiveness of alternative networks. By using the routes and number of measurements prescribed by the 'Traveling Hydrographer Program,' the standard error could be reduced to 31.5% with the current budget of $423,000. If the gaging resources are redistributed, the 34.6% overall level of accuracy at the 68 continuous-record sites and the servicing of the additional wells or gages could be maintained with a budget of approximately $410,000. (USGS)
Low-cost FM oscillator for capacitance type of blade tip clearance measurement system
NASA Technical Reports Server (NTRS)
Barranger, John P.
1987-01-01
The frequency-modulated (FM) oscillator described is part of a blade tip clearance measurement system that meets the needs of a wide class of fans, compressors, and turbines. As a result of advancements in the technology of ultra-high-frequency operational amplifiers, the FM oscillator requires only a single low-cost integrated circuit. Its carrier frequency is 42.8 MHz when it is used with an integrated probe and connecting cable assembly consisting of a 0.81 cm diameter engine-mounted capacitance probe and a 61 cm long hermetically sealed coaxial cable. A complete circuit analysis is given, including amplifier negative resistance characteristics. An error analysis of environmentally induced effects is also derived, and an error-correcting technique is proposed. The oscillator can be calibrated in the static mode and has a negative peak frequency deviation of 400 kHz for a rotor blade thickness of 1.2 mm. High-temperature performance tests of the probe and 13 cm of the adjacent cable show good accuracy up to 600 C, the maximum permissible seal temperature. The major source of error is the residual FM oscillator noise, which produces a clearance error of + or - 10 microns at a clearance of 0.5 mm. The oscillator electronics accommodates the high rotor speeds associated with small engines, the signals from which may have frequency components as high as 1 MHz.
Craciun, Stefan; Brockmeier, Austin J; George, Alan D; Lam, Herman; Príncipe, José C
2011-01-01
Methods for decoding movements from neural spike counts using adaptive filters often rely on minimizing the mean-squared error. However, for non-Gaussian distribution of errors, this approach is not optimal for performance. Therefore, rather than using probabilistic modeling, we propose an alternate non-parametric approach. In order to extract more structure from the input signal (neuronal spike counts) we propose using minimum error entropy (MEE), an information-theoretic approach that minimizes the error entropy as part of an iterative cost function. However, the disadvantage of using MEE as the cost function for adaptive filters is the increase in computational complexity. In this paper we present a comparison between the decoding performance of the analytic Wiener filter and a linear filter trained with MEE, which is then mapped to a parallel architecture in reconfigurable hardware tailored to the computational needs of the MEE filter. We observe considerable speedup from the hardware design. The adaptation of filter weights for the multiple-input, multiple-output linear filters, necessary in motor decoding, is a highly parallelizable algorithm. It can be decomposed into many independent computational blocks with a parallel architecture readily mapped to a field-programmable gate array (FPGA) and scales to large numbers of neurons. By pipelining and parallelizing independent computations in the algorithm, the proposed parallel architecture has sublinear increases in execution time with respect to both window size and filter order.
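A compact sketch of the MEE idea follows: adapt linear decoder weights by gradient ascent on a Parzen (kernel) estimate of the error information potential, and compare with the analytic Wiener solution. This is a batch NumPy illustration, not the FPGA implementation; the kernel width, step size, synthetic data, and initialization from the least-squares solution are all assumptions.

```python
# Batch NumPy sketch of minimum-error-entropy (MEE) adaptation vs. the Wiener solution.
import numpy as np

rng = np.random.default_rng(0)
N, D = 400, 8
X = rng.standard_normal((N, D))                       # e.g. binned spike counts
w_true = rng.standard_normal(D)
d = X @ w_true + 0.3 * rng.standard_t(df=2, size=N)   # heavy-tailed, non-Gaussian noise

# Analytic Wiener (least-squares) solution
w_wiener = np.linalg.solve(X.T @ X, X.T @ d)

# MEE refinement: gradient ascent on V(e) = mean_ij G_sigma(e_i - e_j),
# initialized from the least-squares weights (an assumed, common practical choice).
sigma, mu = 1.0, 0.2
w_mee = w_wiener.copy()
for _ in range(200):
    e = d - X @ w_mee
    diff = e[:, None] - e[None, :]                    # pairwise error differences
    G = np.exp(-diff**2 / (2 * sigma**2))
    grad = ((G * diff)[:, :, None] * (X[:, None, :] - X[None, :, :])).mean(axis=(0, 1))
    w_mee += mu * grad / sigma**2

print("Wiener weight error:", round(np.linalg.norm(w_wiener - w_true), 3))
print("MEE    weight error:", round(np.linalg.norm(w_mee - w_true), 3))
```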
Impact and quantification of the sources of error in DNA pooling designs.
Jawaid, A; Sham, P
2009-01-01
The analysis of genome wide variation offers the possibility of unravelling the genes involved in the pathogenesis of disease. Genome wide association studies are also particularly useful for identifying and validating targets for therapeutic intervention as well as for detecting markers for drug efficacy and side effects. The cost of such large-scale genetic association studies may be reduced substantially by the analysis of pooled DNA from multiple individuals. However, experimental errors inherent in pooling studies lead to a potential increase in the false positive rate and a loss in power compared to individual genotyping. Here we quantify various sources of experimental error using empirical data from typical pooling experiments and corresponding individual genotyping counts using two statistical methods. We provide analytical formulas for calculating these different errors in the absence of complete information, such as replicate pool formation, and for adjusting for the errors in the statistical analysis. We demonstrate that DNA pooling has the potential of estimating allele frequencies accurately, and adjusting the pooled allele frequency estimates for differential allelic amplification considerably improves accuracy. Estimates of the components of error show that differential allelic amplification is the most important contributor to the error variance in absolute allele frequency estimation, followed by allele frequency measurement and pool formation errors. Our results emphasise the importance of minimising experimental errors and obtaining correct error estimates in genetic association studies.
Analyses of Blood Bank Efficiency, Cost-Effectiveness and Quality
NASA Astrophysics Data System (ADS)
Lam, Hwai-Tai Chen
In view of the increasing costs of hospital care, it is essential to investigate methods to improve the labor efficiency and the cost-effectiveness of the hospital technical core in order to control costs while maintaining the quality of care. This study was conducted to develop indices to measure efficiency, cost-effectiveness, and the quality of blood banks; to identify factors associated with efficiency, cost-effectiveness, and quality; and to generate strategies to improve blood bank labor efficiency and cost-effectiveness. Indices developed in this study for labor efficiency and cost-effectiveness were not affected by patient case mix and illness severity. Factors that were associated with labor efficiency were identified as managerial styles, and organizational designs that balance workload and labor resources. Medical directors' managerial involvement was not associated with labor efficiency, but their continuing education and specialty in blood bank were found to reduce the performance of unnecessary tests. Surprisingly, performing unnecessary tests had no association with labor efficiency. This suggested the existence of labor slack in blood banks. Cost -effectiveness was associated with workers' benefits, wages, and the production of high-end transfusion products by hospital-based donor rooms. Quality indices used in this study included autologous transfusion rates, platelet transfusion rates, and the check points available in an error-control system. Because the autologous transfusion rate was related to patient case mix, severity of illness, and possible inappropriate transfusion, it was not recommended to be used for quality index. Platelet-pheresis transfusion rates were associated with the transfusion preferences of the blood bank medical directors. The total number of check points in an error -control system was negatively associated with government ownership and workers' experience. Recommendations for improving labor efficiency and cost-effectiveness were focused on an incentive system that encourages team effort, and the use of appropriate measurements for laboratory efficiency and operational system designs.
ERIC Educational Resources Information Center
Kretchmer, Mark R.
2000-01-01
Discusses how to avoid costly errors in high-tech retrofits through proper planning and coordination. Guidelines are offered for selecting cable installers, using a multi-disciplinary consulting engineering firm, and space planning when making high-tech retrofits. (GR)
Improved Quality in Aerospace Testing Through the Modern Design of Experiments
NASA Technical Reports Server (NTRS)
DeLoach, R.
2000-01-01
This paper illustrates how, in the presence of systematic error, the quality of an experimental result can be influenced by the order in which the independent variables are set. It is suggested that in typical experimental circumstances in which systematic errors are significant, the common practice of organizing the set point order of independent variables to maximize data acquisition rate results in a test matrix that fails to produce the highest quality research result. With some care to match the volume of data required to satisfy inference error risk tolerances, it is possible to accept a lower rate of data acquisition and still produce results of higher technical quality (lower experimental error) with less cost and in less time than conventional test procedures, simply by optimizing the sequence in which independent variable levels are set.
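A toy simulation illustrates the point. Assuming a single independent variable and a slow linear instrument drift (the systematic error), the sketch below compares the slope recovered from a monotone, acquisition-rate-friendly set-point order with the slope recovered from a randomized order; the drift magnitude and noise level are invented.

```python
# Toy simulation (assumptions: one factor, linear drift): randomizing the set-point
# order decouples a slow systematic drift from the estimated factor effect.
import numpy as np

rng = np.random.default_rng(1)
levels = np.repeat(np.linspace(0.0, 10.0, 11), 3)        # 11 levels x 3 replicates
true_slope = 2.0

def run_experiment(order):
    drift = 0.05 * np.arange(len(order))                  # slow systematic drift
    y = true_slope * order + drift + rng.normal(0, 0.5, len(order))
    return np.polyfit(order, y, 1)[0]                     # fitted slope

sequential = np.sort(levels)                              # fastest to acquire
randomized = rng.permutation(levels)

print("true slope:      ", true_slope)
print("sequential order:", round(run_experiment(sequential), 3))
print("randomized order:", round(run_experiment(randomized), 3))
```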
Software Requirements Analysis as Fault Predictor
NASA Technical Reports Server (NTRS)
Wallace, Dolores
2003-01-01
Waiting until the integration and system test phase to discover errors leads to more costly rework than resolving those same errors earlier in the lifecycle. Costs increase even more significantly once a software system has become operational. We can assess the quality of system requirements, but do little to correlate this information either to system assurance activities or to long-term reliability projections - both of which remain unclear and anecdotal. Extending earlier work on requirements accomplished by the ARM tool, measuring requirements quality information against code complexity and test data for the same system may be used to predict the specific software modules containing high-impact or deeply embedded faults that now escape into operational systems. Such knowledge would lead to more effective and efficient test programs. It may also enable insight into whether a program should be maintained or started over.
Spitzer Telemetry Processing System
NASA Technical Reports Server (NTRS)
Stanboli, Alice; Martinez, Elmain M.; McAuley, James M.
2013-01-01
The Spitzer Telemetry Processing System (SirtfTlmProc) was designed to address objectives of JPL's Multi-mission Image Processing Lab (MIPL) in processing spacecraft telemetry and distributing the resulting data to the science community. To minimize costs and maximize operability, the software design focused on automated error recovery, performance, and information management. The system processes telemetry from the Spitzer spacecraft and delivers Level 0 products to the Spitzer Science Center. SirtfTlmProc is a unique system with automated error notification and recovery, with a real-time continuous service that can go quiescent after periods of inactivity. The software can process 2 GB of telemetry and deliver Level 0 science products to the end user in four hours. It provides analysis tools so the operator can manage the system and troubleshoot problems. It automates telemetry processing in order to reduce staffing costs.
Use of Earth's magnetic field for mitigating gyroscope errors regardless of magnetic perturbation.
Afzal, Muhammad Haris; Renaudin, Valérie; Lachapelle, Gérard
2011-01-01
Most portable systems like smart-phones are equipped with low cost consumer grade sensors, making them useful as Pedestrian Navigation Systems (PNS). Measurements of these sensors are severely contaminated by errors caused due to instrumentation and environmental issues rendering the unaided navigation solution with these sensors of limited use. The overall navigation error budget associated with pedestrian navigation can be categorized into position/displacement errors and attitude/orientation errors. Most of the research is conducted for tackling and reducing the displacement errors, which either utilize Pedestrian Dead Reckoning (PDR) or special constraints like Zero velocity UPdaTes (ZUPT) and Zero Angular Rate Updates (ZARU). This article targets the orientation/attitude errors encountered in pedestrian navigation and develops a novel sensor fusion technique to utilize the Earth's magnetic field, even perturbed, for attitude and rate gyroscope error estimation in pedestrian navigation environments where it is assumed that Global Navigation Satellite System (GNSS) navigation is denied. As the Earth's magnetic field undergoes severe degradations in pedestrian navigation environments, a novel Quasi-Static magnetic Field (QSF) based attitude and angular rate error estimation technique is developed to effectively use magnetic measurements in highly perturbed environments. The QSF scheme is then used for generating the desired measurements for the proposed Extended Kalman Filter (EKF) based attitude estimator. Results indicate that the QSF measurements are capable of effectively estimating attitude and gyroscope errors, reducing the overall navigation error budget by over 80% in urban canyon environment.
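The quasi-static field detection step can be sketched very simply: magnetometer epochs are accepted for attitude and gyro-bias updates only when the local field magnitude is stable over a short window. The window length, threshold, and synthetic perturbation below are illustrative assumptions, not the paper's tuning.

```python
# Simplified quasi-static field (QSF) detection: use magnetometer epochs only
# when the field magnitude is locally stable. Window/threshold are assumptions.
import numpy as np

def quasi_static_mask(mag_xyz, window=25, max_std_uT=0.5):
    """mag_xyz: (N, 3) magnetometer samples in microtesla.
    Returns a boolean mask marking samples inside quasi-static segments."""
    norm = np.linalg.norm(mag_xyz, axis=1)
    mask = np.zeros(len(norm), dtype=bool)
    for i in range(len(norm) - window + 1):
        if np.std(norm[i:i + window]) < max_std_uT:
            mask[i:i + window] = True
    return mask

# Example: synthetic field with a perturbed segment in the middle.
rng = np.random.default_rng(2)
t = np.arange(1000)
field = np.column_stack([20 + 0.01 * rng.standard_normal(1000),
                         np.zeros(1000),
                         45 + 0.01 * rng.standard_normal(1000)])
field[400:600, 0] += 5 * np.sin(t[400:600] / 10.0)        # magnetic perturbation
mask = quasi_static_mask(field)
print("fraction of samples usable for updates:", round(mask.mean(), 2))
```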
Bayesian analysis of input uncertainty in hydrological modeling: 2. Application
NASA Astrophysics Data System (ADS)
Kavetski, Dmitri; Kuczera, George; Franks, Stewart W.
2006-03-01
The Bayesian total error analysis (BATEA) methodology directly addresses both input and output errors in hydrological modeling, requiring the modeler to make explicit, rather than implicit, assumptions about the likely extent of data uncertainty. This study considers a BATEA assessment of two North American catchments: (1) French Broad River and (2) Potomac basins. It assesses the performance of the conceptual Variable Infiltration Capacity (VIC) model with and without accounting for input (precipitation) uncertainty. The results show the considerable effects of precipitation errors on the predicted hydrographs (especially the prediction limits) and on the calibrated parameters. In addition, the performance of BATEA in the presence of severe model errors is analyzed. While BATEA allows a very direct treatment of input uncertainty and yields some limited insight into model errors, it requires the specification of valid error models, which are currently poorly understood and require further work. Moreover, it leads to computationally challenging highly dimensional problems. For some types of models, including the VIC implemented using robust numerical methods, the computational cost of BATEA can be reduced using Newton-type methods.
An affordable cuff-less blood pressure estimation solution.
Jain, Monika; Kumar, Niranjan; Deb, Sujay
2016-08-01
This paper presents a cuff-less hypertension pre-screening device that non-invasively and continuously monitors Blood Pressure (BP) and Heart Rate (HR). The proposed device simultaneously records two clinically significant and highly correlated biomedical signals, viz., the Electrocardiogram (ECG) and Photoplethysmogram (PPG). The device provides a common data acquisition platform that can interface with a PC/laptop, smartphone/tablet, Raspberry Pi, etc. The hardware stores and processes the recorded ECG and PPG in order to extract real-time BP and HR using a kernel regression approach. The BP and HR estimation error is measured in terms of normalized mean square error, Error Standard Deviation (ESD), and Mean Absolute Error (MAE), with respect to a clinically proven digital BP monitor (OMRON HBP1300). The computed error falls within the maximum allowable error specified by the Association for the Advancement of Medical Instrumentation: MAE < 5 mmHg and ESD < 8 mmHg. The results are also validated using a two-tailed dependent-sample t-test. The proposed device is a portable, low-cost, home- and clinic-based solution for continuous health monitoring.
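The acceptance check quoted above (MAE < 5 mmHg and ESD < 8 mmHg) reduces to a few lines of code. The readings in the sketch below are invented; in the study the reference values came from a validated digital BP monitor.

```python
# The MAE/ESD acceptance check quoted above, with invented paired readings.
import numpy as np

device_sbp    = np.array([118, 124, 131, 140, 122, 135, 128, 119])
reference_sbp = np.array([121, 122, 135, 138, 125, 131, 130, 116])

err = device_sbp - reference_sbp
mae = np.mean(np.abs(err))          # mean absolute error
esd = np.std(err, ddof=1)           # error standard deviation

print(f"MAE = {mae:.1f} mmHg, ESD = {esd:.1f} mmHg")
print("meets criterion:", mae < 5 and esd < 8)
```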
NASA Astrophysics Data System (ADS)
Allen, J. Icarus; Holt, Jason T.; Blackford, Jerry; Proctor, Roger
2007-12-01
Marine systems models are becoming increasingly complex and sophisticated, but far too little attention has been paid to model errors and the extent to which model outputs actually relate to ecosystem processes. Here we describe the application of summary error statistics to a complex 3D model (POLCOMS-ERSEM) run for the period 1988-1989 in the southern North Sea utilising information from the North Sea Project, which collected a wealth of observational data. We demonstrate that to understand model data misfit and the mechanisms creating errors, we need to use a hierarchy of techniques, including simple correlations, model bias, model efficiency, binary discriminator analysis and the distribution of model errors to assess model errors spatially and temporally. We also demonstrate that a linear cost function is an inappropriate measure of misfit. This analysis indicates that the model has some skill for all variables analysed. A summary plot of model performance indicates that model performance deteriorates as we move through the ecosystem from the physics, to the nutrients and plankton.
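The summary statistics named above can be written down compactly. The sketch below computes bias, model efficiency (in Nash-Sutcliffe form), a linear cost function, and correlation for paired observed/modelled values; the numbers are invented and the paper's exact normalizations may differ.

```python
# Sketch of the summary error statistics named above, on invented paired values
# (e.g. observed vs. modelled surface nutrient concentrations at matching stations).
import numpy as np

obs = np.array([4.2, 3.8, 5.1, 6.0, 2.9, 3.3, 4.7, 5.5])
mod = np.array([3.9, 4.4, 4.6, 6.8, 3.5, 2.8, 4.1, 6.1])

bias       = np.mean(mod - obs)
efficiency = 1 - np.sum((obs - mod)**2) / np.sum((obs - np.mean(obs))**2)  # Nash-Sutcliffe form
cost_fn    = np.mean(np.abs(mod - obs)) / np.std(obs, ddof=1)              # linear cost function
corr       = np.corrcoef(obs, mod)[0, 1]

print(f"bias={bias:.2f}  model efficiency={efficiency:.2f}  "
      f"cost function={cost_fn:.2f}  r={corr:.2f}")
```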
Zhou, Mu; Tian, Zengshan; Xu, Kunjie; Yu, Xiang; Wu, Haibo
2014-01-01
This paper studies the statistical errors for the fingerprint-based RADAR neighbor matching localization with the linearly calibrated reference points (RPs) in a logarithmic received signal strength (RSS) varying Wi-Fi environment. To the best of our knowledge, little comprehensive analysis work has appeared on the error performance of neighbor matching localization with respect to the deployment of RPs. However, in order to achieve efficient and reliable location-based services (LBSs) as well as ubiquitous context-awareness in the Wi-Fi environment, much attention has to be paid to highly accurate and cost-efficient localization systems. To this end, the statistical errors of the widely used neighbor matching localization are examined in detail in this paper to reveal the inherent mathematical relations between the localization errors and the locations of RPs, using a basic linear logarithmic strength varying model. Furthermore, based on the mathematical demonstrations and some testing results, the closed-form solutions to the statistical errors of RADAR neighbor matching localization can serve as an effective tool to explore alternative deployments of fingerprint-based neighbor matching localization systems in the future. PMID:24683349
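The setting analyzed above can be sketched with a toy example: fingerprints are generated at linearly placed RPs from a log-distance (logarithmic RSS) path-loss model and a test measurement is matched to its nearest neighbor. The geometry, path-loss constants, and noise level are assumptions, not values from the paper.

```python
# Toy fingerprint-based nearest-neighbor localization with a log-distance
# path-loss model; all constants and positions are illustrative.
import numpy as np

def rss(d, p0=-40.0, n=3.0):
    """Received signal strength [dBm] at distance d [m] from one access point."""
    return p0 - 10.0 * n * np.log10(np.maximum(d, 0.1))

aps = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])         # access points
rps = np.array([[x, 5.0] for x in np.linspace(1.0, 9.0, 9)])   # linearly calibrated RPs
fingerprints = np.array([[rss(np.linalg.norm(rp - a)) for a in aps] for rp in rps])

true_pos = np.array([4.3, 5.0])
noise = np.random.default_rng(2).normal(0.0, 2.0, len(aps))
measured = np.array([rss(np.linalg.norm(true_pos - a)) for a in aps]) + noise

nearest = np.argmin(np.linalg.norm(fingerprints - measured, axis=1))
print("matched RP:", rps[nearest], " error [m]:", np.linalg.norm(rps[nearest] - true_pos))
```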
Factors that influence the generation of autobiographical memory conjunction errors
Devitt, Aleea L.; Monk-Fromont, Edwin; Schacter, Daniel L.; Addis, Donna Rose
2015-01-01
The constructive nature of memory is generally adaptive, allowing us to efficiently store, process and learn from life events, and simulate future scenarios to prepare ourselves for what may come. However, the cost of a flexibly constructive memory system is the occasional conjunction error, whereby the components of an event are authentic, but the combination of those components is false. Using a novel recombination paradigm, it was demonstrated that details from one autobiographical memory may be incorrectly incorporated into another, forming autobiographical memory conjunction errors that elude typical reality monitoring checks. The factors that contribute to the creation of these conjunction errors were examined across two experiments. Conjunction errors were more likely to occur when the corresponding details were partially rather than fully recombined, likely due to increased plausibility and ease of simulation of partially recombined scenarios. Brief periods of imagination increased conjunction error rates, in line with the imagination inflation effect. Subjective ratings suggest that this inflation is due to similarity of phenomenological experience between conjunction and authentic memories, consistent with a source monitoring perspective. Moreover, objective scoring of memory content indicates that increased perceptual detail may be particularly important for the formation of autobiographical memory conjunction errors. PMID:25611492
NASA Astrophysics Data System (ADS)
Navidi, N.; Landry, R., Jr.
2015-08-01
Nowadays, Global Positioning System (GPS) receivers are aided by complementary radio navigation systems and Inertial Navigation Systems (INS) to obtain more accuracy and robustness in land vehicular navigation. The Extended Kalman Filter (EKF) is a widely accepted conventional method for estimating the position, velocity, and attitude of the navigation system when INS measurements are fused with GPS data. However, the use of low-cost Inertial Measurement Units (IMUs) based on Micro-Electro-Mechanical Systems (MEMS) in land navigation systems reduces the precision and stability of the navigation solution because of their inherent errors. The main goal of this paper is to provide a new model for fusing low-cost IMU and GPS measurements. The proposed model is based on an EKF aided by Fuzzy Inference Systems (FIS) as a promising method to solve the mentioned problems. The model uses the parameters of the measurement noise to adjust the measurement and process noise covariances. The simulation results show the efficiency of the proposed method in reducing navigation errors compared with the standard EKF.
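A toy one-dimensional update illustrates the general idea of adapting the measurement-noise covariance from the innovation; the simple threshold rule below is a crude stand-in for the paper's fuzzy inference system, and all numbers are made up.

```python
# Toy 1-D Kalman measurement update with an adaptive measurement-noise
# covariance R: when the normalized innovation is large, R is inflated.
# This is a crude stand-in for a fuzzy-inference adjustment (illustrative only).
import numpy as np

def adaptive_update(x, P, z, H=1.0, R=4.0, thresh=9.0):
    innov = z - H * x
    S = H * P * H + R
    if innov ** 2 / S > thresh:          # suspicious measurement: inflate R
        R *= innov ** 2 / (S * thresh)
        S = H * P * H + R
    K = P * H / S                        # Kalman gain
    return x + K * innov, (1.0 - K * H) * P

x, P = 0.0, 10.0
for z in [0.5, 1.2, 25.0, 1.8]:          # the third measurement is an outlier
    x, P = adaptive_update(x, P, z)
    print(f"z={z:5.1f}  ->  x={x:5.2f}  P={P:5.2f}")
```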
In-situ Testing of the EHT High Gain and Frequency Ultra-Stable Integrators
NASA Astrophysics Data System (ADS)
Miller, Kenneth; Ziemba, Timothy; Prager, James; Slobodov, Ilia; Lotz, Dan
2014-10-01
Eagle Harbor Technologies (EHT) has developed a long-pulse integrator that exceeds the ITER specification for integration error and pulse duration. During the Phase I program, EHT improved the RPPL short-pulse integrators, added a fast digital reset, and demonstrated that the new integrators exceed the ITER integration error and pulse duration requirements. In Phase II, EHT developed Field Programmable Gate Array (FPGA) software that allows for integrator control and real-time signal digitization and processing. In the second year of Phase II, the EHT integrator will be tested at a validation platform experiment (HIT-SI) and a tokamak (DIII-D). In the Phase IIB program, EHT will continue development of the EHT integrator to reduce the overall cost per channel. EHT will test lower-cost components, move to surface-mount components, and add an onboard Field Programmable Gate Array and data acquisition to produce a stand-alone system with lower cost per channel and increased channel density. EHT will test the Phase IIB integrator at a validation platform experiment (HIT-SI) and a tokamak (DIII-D). Work supported by the DOE under Contract Number DE-SC0006281.
Evaluation and analysis of the orbital maneuvering vehicle video system
NASA Technical Reports Server (NTRS)
Moorhead, Robert J., II
1989-01-01
The work accomplished in the summer of 1989 in association with the NASA/ASEE Summer Faculty Research Fellowship Program at Marshall Space Flight Center is summarized. The task involved study of the Orbital Maneuvering Vehicle (OMV) Video Compression Scheme. This included such activities as reviewing the expected scenes to be compressed by the flight vehicle, learning the error characteristics of the communication channel, monitoring the CLASS tests, and assisting in development of test procedures and interface hardware for the bit error rate lab being developed at MSFC to test the VCU/VRU. Numerous comments and suggestions were made during the course of the fellowship period regarding the design and testing of the OMV Video System. Unfortunately, from a technical point of view, the program appears at this point in time to be troubled from an expense perspective and is in fact in danger of being scaled back, if not cancelled altogether. This makes technical improvements prohibitive and cost-reduction measures necessary. Fortunately, some cost-reduction possibilities and some significant technical improvements that should cost very little were identified.
Quantum computation with realistic magic-state factories
NASA Astrophysics Data System (ADS)
O'Gorman, Joe; Campbell, Earl T.
2017-03-01
Leading approaches to fault-tolerant quantum computation dedicate a significant portion of the hardware to computational factories that churn out high-fidelity ancillas called magic states. Consequently, efficient and realistic factory design is of paramount importance. Here we present the most detailed resource assessment to date of magic-state factories within a surface code quantum computer, along the way introducing a number of techniques. We show that the block codes of Bravyi and Haah [Phys. Rev. A 86, 052329 (2012), 10.1103/PhysRevA.86.052329] have been systematically undervalued; we track correlated errors both numerically and analytically, providing fidelity estimates without appeal to the union bound. We also introduce a subsystem code realization of these protocols with constant time and low ancilla cost. Additionally, we confirm that magic-state factories have space-time costs that scale as a constant factor of surface code costs. We find that the magic-state factory required for postclassical factoring can be as small as 6.3 million data qubits, ignoring ancilla qubits, assuming 10^-4 error gates and the availability of long-range interactions.
Accelerating root system phenotyping of seedlings through a computer-assisted processing pipeline.
Dupuy, Lionel X; Wright, Gladys; Thompson, Jacqueline A; Taylor, Anna; Dekeyser, Sebastien; White, Christopher P; Thomas, William T B; Nightingale, Mark; Hammond, John P; Graham, Neil S; Thomas, Catherine L; Broadley, Martin R; White, Philip J
2017-01-01
There are numerous systems and techniques to measure the growth of plant roots. However, phenotyping large numbers of plant roots for breeding and genetic analyses remains challenging. One major difficulty is to achieve high throughput and resolution at a reasonable cost per plant sample. Here we describe a cost-effective root phenotyping pipeline, on which we perform time and accuracy benchmarking to identify bottlenecks in such pipelines and strategies for their acceleration. Our root phenotyping pipeline was assembled with custom software and low-cost materials and equipment. Results show that sample preparation and handling of samples during screening are the most time-consuming tasks in root phenotyping. Algorithms can be used to speed up the extraction of root traits from image data, but when applied to large numbers of images, there is a trade-off between the time taken to process the data and the errors contained in the database. Scaling up root phenotyping to large numbers of genotypes will require not only automation of sample preparation and sample handling, but also efficient algorithms for error detection for more reliable replacement of manual interventions.
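One step of such a pipeline, turning a segmented seedling image into a root-length estimate by skeletonization, can be sketched as follows. The synthetic image, the thresholding shortcut, and the pixel size are placeholders, not the pipeline's actual software.

```python
# Illustrative root-trait extraction: skeletonize a binary root mask and
# estimate total root length as (number of skeleton pixels) x (pixel size).
import numpy as np
from skimage.morphology import skeletonize

img = np.zeros((200, 200))
img[20:180, 100] = 1.0                 # fake vertical primary root
img[100, 100:160] = 1.0                # fake lateral root
mask = img > 0.5                       # in practice: Otsu or adaptive thresholding
skeleton = skeletonize(mask)

pixel_mm = 0.1                         # assumed scan resolution [mm per pixel]
print("estimated total root length:", skeleton.sum() * pixel_mm, "mm")
```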
Cost-estimating relationships for space programs
NASA Technical Reports Server (NTRS)
Mandell, Humboldt C., Jr.
1992-01-01
Cost-estimating relationships (CERs) are defined and discussed as they relate to the estimation of theoretical costs for space programs. The paper primarily addresses CERs based on analogous relationships between physical and performance parameters to estimate future costs. Analytical estimation principles are reviewed examining the sources of errors in cost models, and the use of CERs is shown to be affected by organizational culture. Two paradigms for cost estimation are set forth: (1) the Rand paradigm for single-culture single-system methods; and (2) the Price paradigms that incorporate a set of cultural variables. For space programs that are potentially subject to even small cultural changes, the Price paradigms are argued to be more effective. The derivation and use of accurate CERs is important for developing effective cost models to analyze the potential of a given space program.
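A typical weight-based CER has the power-law form cost = a * mass^b and is fitted in log-log space from analogous historical programs. The data below are hypothetical and only illustrate the fitting step, not either of the paradigms discussed.

```python
# Fit a simple power-law cost-estimating relationship, cost = a * mass^b,
# by linear least squares in log-log space (hypothetical historical data).
import numpy as np

mass = np.array([120.0, 340.0, 560.0, 900.0, 1500.0])   # subsystem dry mass [kg]
cost = np.array([18.0, 42.0, 61.0, 95.0, 140.0])        # historical cost [$M]

b, log_a = np.polyfit(np.log(mass), np.log(cost), 1)    # slope and intercept
a = np.exp(log_a)
print(f"CER: cost ~= {a:.2f} * mass^{b:.2f}")
print(f"estimate for a 700 kg subsystem: {a * 700 ** b:.1f} $M")
```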
Yu, Tzy-Chyi; Zhou, Huanxue
2015-09-01
Evaluate performance of techniques used to handle missing cost-to-charge ratio (CCR) data in the USA Healthcare Cost and Utilization Project's Nationwide Inpatient Sample. Four techniques to replace missing CCR data were evaluated: deleting discharges with missing CCRs (complete case analysis), reweighting as recommended by Healthcare Cost and Utilization Project, reweighting by adjustment cells and hot deck imputation by adjustment cells. Bias and root mean squared error of these techniques on hospital cost were evaluated in five disease cohorts. Similar mean cost estimates would be obtained with any of the four techniques when the percentage of missing data is low (<10%). When total cost is the outcome of interest, a reweighting technique to avoid underestimation from dropping observations with missing data should be adopted.
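Hot-deck imputation by adjustment cells, one of the four techniques evaluated, can be sketched as follows; the adjustment cell (a hospital bed-size class) and the data are invented for illustration.

```python
# Hot-deck imputation of missing cost-to-charge ratios (CCRs) within
# adjustment cells: each missing value is replaced by a randomly drawn
# observed donor value from the same cell. Data and cells are made up.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "bedsize": rng.choice(["small", "medium", "large"], 300),
    "ccr": rng.uniform(0.2, 0.8, 300),
})
df.loc[rng.choice(300, 30, replace=False), "ccr"] = np.nan   # ~10% missing

def hot_deck(s):
    donors = s.dropna().to_numpy()
    out = s.copy()
    out[s.isna()] = rng.choice(donors, s.isna().sum())
    return out

df["ccr_imputed"] = df.groupby("bedsize")["ccr"].transform(hot_deck)
print("missing before:", df["ccr"].isna().sum(), " after:", df["ccr_imputed"].isna().sum())
```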
SEU System Analysis: Not Just the Sum of All Parts
NASA Technical Reports Server (NTRS)
Berg, Melanie D.; Label, Kenneth
2014-01-01
Single event upset (SEU) analysis of complex systems is challenging. Currently, system SEU analysis is performed by component-level partitioning and then either the most dominant SEU cross-sections are used in system error rate calculations, or the partition cross-sections are summed to eventually obtain a system error rate. In many cases, system error rates are overestimated because these methods generally overlook system-level derating factors. The problem with overestimating is that it can cause overdesign and consequently negatively affect cost, schedule, functionality, and validation/verification. The scope of this presentation is to discuss the risks involved with the current scheme of SEU analysis for complex systems, and to provide alternative methods for improvement.
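The difference between summing worst-case component cross-sections and applying system-level derating factors can be shown with a tiny calculation; the rates and derating fractions below are illustrative, not values from the presentation.

```python
# Compare a summed worst-case system SEU rate against one that applies
# system-level derating factors (e.g., utilization and logic masking).
component_rate = {"fpga_fabric": 1.2e-6, "block_ram": 3.5e-6, "processor": 0.8e-6}  # upsets per device-day
derating = {"fpga_fabric": 0.15, "block_ram": 0.40, "processor": 0.25}              # fraction visible at system level

worst_case = sum(component_rate.values())
derated = sum(component_rate[k] * derating[k] for k in component_rate)
print(f"summed worst-case rate: {worst_case:.2e} per day")
print(f"derated system rate:    {derated:.2e} per day ({worst_case / derated:.1f}x lower)")
```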
Linear-quadratic-Gaussian synthesis with reduced parameter sensitivity
NASA Technical Reports Server (NTRS)
Lin, J. Y.; Mingori, D. L.
1992-01-01
We present a method for improving the tolerance of a conventional LQG controller to parameter errors in the plant model. The improvement is achieved by introducing additional terms reflecting the structure of the parameter errors into the LQR cost function, and also the process and measurement noise models. Adjusting the sizes of these additional terms permits a trade-off between robustness and nominal performance. Manipulation of some of the additional terms leads to high gain controllers while other terms lead to low gain controllers. Conditions are developed under which the high-gain approach asymptotically recovers the robustness of the corresponding full-state feedback design, and the low-gain approach makes the closed-loop poles asymptotically insensitive to parameter errors.
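The basic mechanism, trading nominal performance for robustness by adding a structured term to the LQR state weight, can be sketched as below. This is only a minimal illustration under assumed system matrices, not the authors' synthesis procedure or their noise-model modifications.

```python
# Minimal LQR sketch: a nominal design versus one whose state weight is
# augmented with a term along an assumed parameter-error direction V.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
R = np.array([[1.0]])

def lqr_gain(Q):
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)          # K = R^-1 B^T P

Q_nom = np.eye(2)
V = np.array([[1.0], [0.0]])                    # assumed structure of the parameter errors
rho = 10.0                                      # robustness vs. nominal-performance trade-off
print("nominal gain:     ", lqr_gain(Q_nom))
print("robustified gain: ", lqr_gain(Q_nom + rho * (V @ V.T)))
```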
The current approach to human error and blame in the NHS.
Ottewill, Melanie
There is a large body of research to suggest that serious errors are widespread throughout medicine. The traditional response to these adverse events has been to adopt a 'person approach' - blaming the individual seen as 'responsible'. The culture of medicine is highly complicit in this response. Such an approach results in enormous personal costs to the individuals concerned and does little to address the root causes of errors and thus prevent their recurrence. Other industries, such as aviation, where safety is a paramount concern and which have similar structures to the medical profession, have, over the past decade or so, adopted a 'systems' approach to error, recognizing that human error is ubiquitous and inevitable and that systems need to be developed with this in mind. This approach has been highly successful, but has necessitated, first and foremost, a cultural shift. It is in the best interests of patients, and medical professionals alike, that such a shift is embraced in the NHS.
ERIC Educational Resources Information Center
Comptroller General of the U.S., Washington, DC.
Efforts of the U.S. Department of Education to verify data submitted by applicants to the Pell Grant program were analyzed by the General Accounting Office. The effects of carrying out the Department's policy or methodology, called "validation," on financial aid applicants and colleges were assessed. Costs of 1982-1983 validation on…
Sampling error of cruises in the California pine region
A.A. Hasel
1942-01-01
To organize cruises so as to steer a desirable middle course between high accuracy at too much cost and little accuracy at low cost is a problem many foresters have to contend with. It can only be done when the cruiser in charge has a real knowledge of the required standard of accuracy and of the variability existing in the timber stand. The study reported in this...
Economics of human performance and systems total ownership cost.
Onkham, Wilawan; Karwowski, Waldemar; Ahram, Tareq Z
2012-01-01
The financial costs of investing in people, which are associated with training, acquisition, recruiting, and resolving human errors, have a significant impact on increased total ownership costs. These costs can also lead to exaggerated budgets and delayed schedules. The study of the economic assessment of human performance in the system acquisition process enhances the visibility of hidden cost drivers, which supports informed program management decisions. This paper presents a literature review of human total ownership cost (HTOC) and cost impacts on overall system performance. Economic value assessment models such as cost-benefit analysis, risk-cost tradeoff analysis, expected value of utility function analysis (EV), the growth readiness matrix, the multi-attribute utility technique, and multi-regression models were introduced to reflect the HTOC and human performance-technology tradeoffs in dollar terms. A human total ownership cost regression model is introduced to address measurement of the influencing human performance cost components. Results from this study will increase understanding of relevant cost drivers in the system acquisition process over the long term.
Gonser, Phillipp; Fuchsberger, Thomas; Matern, Ulrich
2017-08-01
The use of active medical devices in clinical routine should be as safe and efficient as possible. Usability tests (UTs) help improve these aspects of medical devices during their development, but UTs can also be of use to hospitals after a product has been launched. The present pilot study examines the costs and possible benefits of UTs for hospitals before buying new medical devices for theatre. Two active medical devices of different complexity were tested in a standardized UT, and a cost-benefit analysis was carried out assuming that a different device, bought at the same price but with higher usability, could increase the efficiency of task solving and thereby save valuable theatre time. The cost of the UT amounted to €19,400. Hospitals could benefit from UTs before buying new devices for theatre by reducing time-consuming operator errors and thereby increasing productivity and patient safety. The possible benefits ranged from €23,300 to €1,570,000 (median = €797,000). Not only could hospitals benefit economically from investing in a UT before deciding to buy a medical device, but patients in particular would profit from higher usability through fewer operator errors and increased safety and performance of use.
Voshall, Barbara; Piscotty, Ronald; Lawrence, Jeanette; Targosz, Mary
2013-10-01
Safe medication administration is necessary to ensure quality healthcare. Barcode medication administration systems were developed to reduce drug administration errors and the related costs and improve patient safety. Work-arounds created by nurses in the execution of the required processes can lead to unintended consequences, including errors. This article provides a systematic review of the literature associated with barcoded medication administration and work-arounds and suggests interventions that should be adopted by nurse executives to ensure medication safety.
DC-Compensated Current Transformer †
Ripka, Pavel; Draxler, Karel; Styblíková, Renata
2016-01-01
Instrument current transformers (CTs) measure AC currents. The DC component in the measured current can saturate the transformer and cause gross error. We use fluxgate detection and digital feedback compensation of the DC flux to suppress the overall error to 0.15%. This concept can be used not only for high-end CTs with a nanocrystalline core, but it also works for low-cost CTs with FeSi cores. The method described here allows simultaneous measurements of the DC current component. PMID:26805830
1982-10-22
computation of yearend accruals were not detected. These errors occurred in the computation of: --The net change in the fair value of investments. The...plan administrator made several errors when computing accruals for the net change in fair value of investments, interest receivable, and annuity...value of investment securities instead of actual cost in computing the net change in the fair value of investments. The net change was overstated by
Disclosing Medical Errors to Patients: Attitudes and Practices of Physicians and Trainees
Jones, Elizabeth W.; Wu, Barry J.; Forman-Hoffman, Valerie L.; Levi, Benjamin H.; Rosenthal, Gary E.
2007-01-01
BACKGROUND Disclosing errors to patients is an important part of patient care, but the prevalence of disclosure, and factors affecting it, are poorly understood. OBJECTIVE To survey physicians and trainees about their practices and attitudes regarding error disclosure to patients. DESIGN AND PARTICIPANTS Survey of faculty physicians, resident physicians, and medical students in Midwest, Mid-Atlantic, and Northeast regions of the United States. MEASUREMENTS Actual error disclosure; hypothetical error disclosure; attitudes toward disclosure; demographic factors. RESULTS Responses were received from 538 participants (response rate = 77%). Almost all faculty and residents responded that they would disclose a hypothetical error resulting in minor (97%) or major (93%) harm to a patient. However, only 41% of faculty and residents had disclosed an actual minor error (resulting in prolonged treatment or discomfort), and only 5% had disclosed an actual major error (resulting in disability or death). Moreover, 19% acknowledged not disclosing an actual minor error and 4% acknowledged not disclosing an actual major error. Experience with malpractice litigation was not associated with less actual or hypothetical error disclosure. Faculty were more likely than residents and students to disclose a hypothetical error and less concerned about possible negative consequences of disclosure. Several attitudes were associated with greater likelihood of hypothetical disclosure, including the belief that disclosure is right even if it comes at a significant personal cost. CONCLUSIONS There appears to be a gap between physicians’ attitudes and practices regarding error disclosure. Willingness to disclose errors was associated with higher training level and a variety of patient-centered attitudes, and it was not lessened by previous exposure to malpractice litigation. PMID:17473944
Kilian, Reinhold; Matschinger, Herbert; Löeffler, Walter; Roick, Christiane; Angermeyer, Matthias C
2002-03-01
Transformation of the dependent cost variable is often used to solve the problems of heteroscedasticity and skewness in linear ordinary least squares regression of health service cost data. However, transformation may cause difficulties in the interpretation of regression coefficients and the retransformation of predicted values. The study compares the advantages and disadvantages of different methods to estimate regression-based cost functions using data on the annual costs of schizophrenia treatment. Annual costs of psychiatric service use and clinical and socio-demographic characteristics of the patients were assessed for a sample of 254 patients with a diagnosis of schizophrenia (ICD-10 F 20.0) living in Leipzig. The clinical characteristics of the participants were assessed by means of the BPRS 4.0, the GAF, and the CAN for service needs. Quality of life was measured by the WHOQOL-BREF. A linear OLS regression model with non-parametric standard errors, a log-transformed OLS model and a generalized linear model (GLM) with a log link and a gamma distribution were used to estimate service costs. For the estimation of robust non-parametric standard errors, the variance estimator by White and a bootstrap estimator based on 2000 replications were employed. Models were evaluated by comparison of the R2 and the root mean squared error (RMSE). The RMSE of the log-transformed OLS model was computed with three different methods of bias correction. The 95% confidence intervals for the differences between the RMSE were computed by means of bootstrapping. A split-sample cross-validation procedure was used to forecast the costs for one half of the sample on the basis of a regression equation computed for the other half of the sample. All three methods showed significant positive influences of psychiatric symptoms and met psychiatric service needs on service costs. Only the log-transformed OLS model showed a significant negative impact of age, and only the GLM showed significant negative influences of employment status and partnership on costs. All three models provided an R2 of about .31. The residuals of the linear OLS model revealed significant deviations from normality and homoscedasticity. The residuals of the log-transformed model were normally distributed but still heteroscedastic. The linear OLS model provided the lowest prediction error and the best forecast of the dependent cost variable. The log-transformed model provided the lowest RMSE if the heteroscedastic bias correction was used. The RMSE of the GLM with a log link and a gamma distribution was higher than those of the linear OLS model and the log-transformed OLS model. The difference between the RMSE of the linear OLS model and that of the log-transformed OLS model without bias correction was significant at the 95% level. As a result of the cross-validation procedure, the linear OLS model provided the lowest RMSE, followed by the log-transformed OLS model with a heteroscedastic bias correction. The GLM showed the weakest model fit again. None of the differences between the RMSE resulting from the cross-validation procedure were found to be significant. The comparison of the fit indices of the different regression models revealed that the linear OLS model provided a better fit than the log-transformed model and the GLM, but the differences between the models' RMSE were not significant.
Due to the small number of cases in the study, the lack of significance does not sufficiently prove that the differences between the RMSE for the different models are zero, and the superiority of the linear OLS model cannot be generalized. The lack of significant differences among the alternative estimators may reflect a lack of sample size adequate to detect important differences among the estimators employed. Specification of an adequate regression model requires a careful examination of the characteristics of the data. Estimation of standard errors and confidence intervals by nonparametric methods, which are robust against deviations from normality and homoscedasticity of the residuals, is a suitable alternative to transformation of the skewed dependent variable. Further studies with more adequate case numbers are needed to confirm the results.
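The three estimators compared above can be sketched compactly on synthetic right-skewed cost data (not the study data): linear OLS with robust standard errors, OLS on log costs with Duan's smearing retransformation, and a gamma GLM with log link.

```python
# Sketch of three cost-regression approaches on synthetic skewed cost data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 254
symptoms = rng.normal(0, 1, n)
X = sm.add_constant(symptoms)
cost = np.exp(8.0 + 0.5 * symptoms + rng.normal(0, 0.8, n))   # skewed annual costs

ols = sm.OLS(cost, X).fit(cov_type="HC1")                     # heteroscedasticity-robust SEs
log_ols = sm.OLS(np.log(cost), X).fit()
smearing = np.mean(np.exp(log_ols.resid))                     # Duan's smearing factor
glm = sm.GLM(cost, X,
             family=sm.families.Gamma(link=sm.families.links.Log())).fit()  # use links.log() on older statsmodels

preds = {
    "linear OLS": ols.predict(X),
    "log-OLS + smearing": np.exp(log_ols.predict(X)) * smearing,
    "gamma GLM (log link)": glm.predict(X),
}
for name, p in preds.items():
    print(f"{name:20s} RMSE = {np.sqrt(np.mean((p - cost) ** 2)):,.0f}")
```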
[Improving blood safety: errors management in transfusion medicine].
Bujandrić, Nevenka; Grujić, Jasmina; Krga-Milanović, Mirjana
2014-01-01
The concept of blood safety includes the entire transfusion chain, starting with the collection of blood from the blood donor and ending with blood transfusion to the patient. The concept involves a quality management system with systematic monitoring of adverse reactions and incidents regarding the blood donor or patient. Monitoring of near-miss errors reveals the critical points in the working process and increases transfusion safety. The aim of the study was to present the results of an analysis of adverse and unexpected events in transfusion practice with a potential risk to the health of blood donors and patients. This one-year retrospective study was based on the collection, analysis and interpretation of written reports on medical errors in the Blood Transfusion Institute of Vojvodina. Errors were distributed according to type, frequency and the part of the working process where they occurred. Possible causes and corrective actions were described for each error. The study showed that there were no errors with potential health consequences for the blood donor/patient. Errors with potentially damaging consequences for patients were detected throughout the entire transfusion chain. Most of the errors were identified in the preanalytical phase. The human factor was responsible for the largest number of errors. The error reporting system has an important role in error management and in the reduction of the transfusion-related risk of adverse events and incidents. The ongoing analysis reveals the strengths and weaknesses of the entire process and indicates the necessary changes. A large percentage of errors in transfusion medicine can be avoided, and prevention is cost-effective, systematic and applicable.
Error compensation of single-antenna attitude determination using GNSS for Low-dynamic applications
NASA Astrophysics Data System (ADS)
Chen, Wen; Yu, Chao; Cai, Miaomiao
2017-04-01
The GNSS-based single-antenna pseudo-attitude determination method has attracted more and more attention in the field of high-dynamic navigation due to its low cost, low system complexity, and lack of temporally accumulated errors. Related research indicates that this method can be an important complement, or even an alternative, to traditional sensors for general accuracy requirements (such as small UAV navigation). The application of the single-antenna attitude determination method to low-dynamic carriers has only just started. Unlike the traditional multi-antenna attitude measurement technique, the pseudo-attitude determination method calculates the rotation angle of the carrier trajectory relative to the earth. Thus it inevitably contains some deviations compared with the real attitude angle. In low-dynamic applications, these deviations are particularly noticeable and may not be ignored. The causes of the deviations can be roughly classified into three categories: measurement error, offset error, and lateral error. Empirical correction strategies for the former two errors have been proposed in previous studies, but they lack theoretical support. In this paper, we provide a quantitative description of the three types of errors and discuss the related error compensation methods. Vehicle and shipborne experiments were carried out to verify the feasibility of the proposed correction methods. Keywords: Error compensation; Single-antenna; GNSS; Attitude determination; Low-dynamic
Peterson, J.; Dunham, J.B.
2003-01-01
Effective conservation efforts for at-risk species require knowledge of the locations of existing populations. Species presence can be estimated directly by conducting field-sampling surveys or alternatively by developing predictive models. Direct surveys can be expensive and inefficient, particularly for rare and difficult-to-sample species, and models of species presence may produce biased predictions. We present a Bayesian approach that combines sampling and model-based inferences for estimating species presence. The accuracy and cost-effectiveness of this approach were compared to those of sampling surveys and predictive models for estimating the presence of the threatened bull trout ( Salvelinus confluentus ) via simulation with existing models and empirical sampling data. Simulations indicated that a sampling-only approach would be the most effective and would result in the lowest presence and absence misclassification error rates for three thresholds of detection probability. When sampling effort was considered, however, the combined approach resulted in the lowest error rates per unit of sampling effort. Hence, lower probability-of-detection thresholds can be specified with the combined approach, resulting in lower misclassification error rates and improved cost-effectiveness.
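The combination of model-based and sampling-based inference can be sketched as a Bayes update of the presence probability after repeated sampling passes without a detection; the prior and per-pass detection probability below are hypothetical.

```python
# Posterior probability that a species is present after n sampling passes with
# no detections, starting from a model-predicted prior presence probability.
def posterior_presence(prior, detect_prob, n_passes):
    miss = (1.0 - detect_prob) ** n_passes        # P(no detection | present)
    return prior * miss / (prior * miss + (1.0 - prior))

prior = 0.6                                       # hypothetical model prediction
for n in range(4):
    print(f"{n} passes, no detection -> P(present) = {posterior_presence(prior, 0.5, n):.3f}")
```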
An Effective Terrain Aided Navigation for Low-Cost Autonomous Underwater Vehicles.
Zhou, Ling; Cheng, Xianghong; Zhu, Yixian; Dai, Chenxi; Fu, Jinbo
2017-03-25
Terrain-aided navigation is a potentially powerful solution for obtaining submerged position fixes for autonomous underwater vehicles. The application of terrain-aided navigation with high-accuracy inertial navigation systems has demonstrated meter-level navigation accuracy in sea trials. However, available sensors may be limited depending on the type of the mission. Such limitations, especially for low-grade navigation sensors, not only degrade the accuracy of traditional navigation systems, but further impact the ability to successfully employ terrain-aided navigation. To address this problem, a tightly-coupled navigation approach is presented that estimates the critical sensor errors by incorporating raw sensor data directly into an augmented navigation system. Furthermore, three-dimensional distance errors are calculated, providing measurement updates through the particle filter for absolute and bounded position error. The development of the terrain-aided navigation system is elaborated for a vehicle equipped with a non-inertial-grade strapdown inertial navigation system, a 4-beam Doppler Velocity Log range sensor and a sonar altimeter. Using experimental data for navigation performance evaluation in areas with different terrain characteristics, the experimental results further show that the proposed method can be successfully applied to low-cost AUVs and significantly improves navigation performance.
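The core measurement update of such a terrain-aided particle filter can be sketched in a few lines: particle weights come from comparing a sonar depth measurement against a terrain map evaluated at each particle's position. The bathymetry, prior spread, and noise levels are invented for illustration.

```python
# Toy particle-filter measurement update for terrain-aided navigation.
import numpy as np

rng = np.random.default_rng(5)

def depth_map(x, y):                     # synthetic bathymetry [m]
    return 50.0 + 5.0 * np.sin(0.1 * x) + 3.0 * np.cos(0.07 * y)

n_p = 1000
particles = rng.normal([100.0, 200.0], 20.0, size=(n_p, 2))    # prior position cloud
true_pos = np.array([110.0, 195.0])
z = depth_map(*true_pos) + rng.normal(0.0, 0.5)                # sonar altimeter depth

sigma = 0.7
w = np.exp(-0.5 * ((z - depth_map(particles[:, 0], particles[:, 1])) / sigma) ** 2)
w /= w.sum()
estimate = w @ particles                                       # weighted mean position
particles = particles[rng.choice(n_p, n_p, p=w)]               # multinomial resampling

print("estimate:", estimate, " error [m]:", np.linalg.norm(estimate - true_pos))
```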
Methods developed to elucidate nursing related adverse events in Japan.
Yamagishi, Manaho; Kanda, Katsuya; Takemura, Yukie
2003-05-01
Financial resources for quality assurance in Japanese hospitals are limited and few hospitals have quality monitoring systems of nursing service systems. However, recently its necessity has been recognized. This study has cost effectively used adverse event occurrence rates as indicators of the quality of nursing service, and audited methods of collecting data on adverse events to elucidate their approximate true numbers. Data collection was conducted in July, August and November 2000 at a hospital in Tokyo that administered both primary and secondary health care services (281 beds, six wards, average length of stay 23 days). We collected adverse events through incident reports, logs, check-lists, nurse interviews, medication error questionnaires, urine leucocyte tests, patient interviews and medical records. Adverse events included the unplanned removals of invasive lines, medication errors, falls, pressure sores, skin deficiencies, physical restraints, and nosocomial infections. After evaluating the time and useful outcomes of each source, it soon became clear that we could elucidate adverse events most consistently and cost-effectively through incident reports, check lists, nurse interviews, urine leucocyte tests and medication error questionnaires. This study suggests that many hospitals in Japan could monitor the quality of the nursing service using these sources.
Neural Mechanisms for Adaptive Learned Avoidance of Mental Effort.
Mitsuto Nagase, Asako; Onoda, Keiichi; Clifford Foo, Jerome; Haji, Tomoki; Akaishi, Rei; Yamaguchi, Shuhei; Sakai, Katsuyuki; Morita, Kenji
2018-02-05
Humans tend to avoid mental effort. Previous studies have demonstrated this tendency using various demand-selection tasks; participants generally avoid options associated with higher cognitive demand. However, it remains unclear whether humans avoid mental effort adaptively in uncertain and non-stationary environments, and if so, what neural mechanisms underlie this learned avoidance and whether they remain the same irrespective of cognitive-demand types. We addressed these issues by developing novel demand-selection tasks where associations between choice options and cognitive-demand levels change over time, with two variations using mental arithmetic and spatial reasoning problems (29:4 and 18:2 males:females). Most participants showed avoidance, and their choices depended on the demand experienced on multiple preceding trials. We assumed that participants updated the expected cost of mental effort through experience, and fitted their choices by reinforcement learning models, comparing several possibilities. Model-based fMRI analyses revealed that activity in the dorsomedial and lateral frontal cortices was positively correlated with the trial-by-trial expected cost for the chosen option commonly across the different types of cognitive demand, and also revealed a trend of negative correlation in the ventromedial prefrontal cortex. We further identified correlates of cost-prediction-error at time of problem-presentation or answering the problem, the latter of which partially overlapped with or were proximal to the correlates of expected cost at time of choice-cue in the dorsomedial frontal cortex. These results suggest that humans adaptively learn to avoid mental effort, having neural mechanisms to represent expected cost and cost-prediction-error, and the same mechanisms operate for various types of cognitive demand. SIGNIFICANCE STATEMENT In daily life, humans encounter various cognitive demands, and tend to avoid high-demand options. However, it remains unclear whether humans avoid mental effort adaptively under dynamically changing environments, and if so, what are the underlying neural mechanisms and whether they operate irrespective of cognitive-demand types. To address these issues, we developed novel tasks, where participants could learn to avoid high-demand options under uncertain and non-stationary environments. Through model-based fMRI analyses, we found regions whose activity was correlated with the expected mental effort cost, or cost-prediction-error, regardless of demand-type, with overlap or adjacence in the dorsomedial frontal cortex. This finding contributes to clarifying the mechanisms for cognitive-demand avoidance, and provides empirical building blocks for the emerging computational theory of mental effort. Copyright © 2018 the authors.
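The kind of reinforcement-learning model fitted to such choices can be sketched with a delta rule: the expected effort cost of each option is updated from a cost prediction error, and choices follow a softmax that avoids the costlier option. All parameters and the drifting demand schedule are illustrative only, not the fitted model.

```python
# Delta-rule learning of expected mental-effort cost with softmax avoidance.
import numpy as np

rng = np.random.default_rng(6)
alpha, beta = 0.3, 3.0                   # learning rate, inverse temperature
expected_cost = np.zeros(2)              # learned effort cost of options A and B
true_cost = np.array([0.2, 0.8])         # option B is initially more demanding

choices = []
for t in range(200):
    if t == 100:                         # non-stationary environment: demands swap
        true_cost = true_cost[::-1]
    p = np.exp(-beta * expected_cost)
    p /= p.sum()
    c = rng.choice(2, p=p)
    experienced = true_cost[c] + rng.normal(0, 0.05)
    expected_cost[c] += alpha * (experienced - expected_cost[c])   # cost prediction error
    choices.append(c)

choices = np.array(choices)
print("P(choose B) before swap:", (choices[:100] == 1).mean(),
      " after swap:", (choices[100:] == 1).mean())
```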
Effects of a preceptorship programme on turnover rate, cost, quality and professional development.
Lee, Tso-Ying; Tzeng, Wen-Chii; Lin, Chia-Huei; Yeh, Mei-Ling
2009-04-01
The purpose of the present study was to design a preceptorship programme and to evaluate its effects on turnover rate, turnover cost, quality of care and professional development. A high turnover rate of nurses is a common global problem. How to improve nurses' willingness to stay in their jobs and reduce the high turnover rate has become a focus. Well-designed preceptorship programmes could possibly decrease turnover rates and improve professional development. A quasi-experimental research design was used. First, a preceptorship programme was designed to establish the role and responsibilities of preceptors in instructing new nurses. Second, a quasi-experimental design was used to evaluate the preceptorship programme. Data on new nurses' turnover rate, turnover cost, quality of nursing care, satisfaction with preceptors' teaching and preceptors' perceptions were measured. After conducting the preceptorship programme, the turnover rate was 46.5% less than in the previous year. The turnover cost was decreased by US$186,102. Additionally, medication error rates among new nurses dropped from 50% to 0%, and incident rates of adverse events and falls decreased. All new nurses were satisfied with preceptor guidance. The preceptorship programme effectively lowered the turnover rate of new nurses, reduced turnover costs and enhanced the quality of nursing care, especially by reducing medication error incidents. Positive feedback about the programme was received from new nurses. Study findings may offer healthcare administrators another option for retaining new nurses, controlling costs, improving quality and fostering professional development. In addition, incentives and effective support from the organisation must be considered when preceptors perform preceptorship responsibilities.
Nelson, Richard E; Angelovic, Aaron W; Nelson, Scott D; Gleed, Jeremy R; Drews, Frank A
2015-05-01
Adherence engineering applies human factors principles to examine non-adherence within a specific task and to guide the development of materials or equipment to increase protocol adherence and reduce human error. Central line maintenance (CLM) for intensive care unit (ICU) patients is a task through which error or non-adherence to protocols can cause central line-associated bloodstream infections (CLABSIs). We conducted an economic analysis of an adherence engineering CLM kit designed to improve the CLM task and reduce the risk of CLABSI. We constructed a Markov model to compare the cost-effectiveness of the CLM kit, which contains each of the 27 items necessary for performing the CLM procedure, compared with the standard care procedure for CLM, in which each item for dressing maintenance is gathered separately. We estimated the model using the cost of CLABSI overall ($45,685) as well as the excess LOS (6.9 excess ICU days, 3.5 excess general ward days). Assuming the CLM kit reduces the risk of CLABSI by 100% and 50%, this strategy was less costly (cost savings between $306 and $860) and more effective (between 0.05 and 0.13 more quality-adjusted life-years) compared with not using the pre-packaged kit. We identified threshold values for the effectiveness of the kit in reducing CLABSI for which the kit strategy was no longer less costly. An adherence engineering-based intervention to streamline the CLM process can improve patient outcomes and lower costs. Patient safety can be improved by adopting new approaches that are based on human factors principles.
Case-Mix for Performance Management: A Risk Algorithm Based on ICD-10-CM.
Gao, Jian; Moran, Eileen; Almenoff, Peter L
2018-06-01
Accurate risk adjustment is the key to a reliable comparison of cost and quality performance among providers and hospitals. However, the existing case-mix algorithms based on age, sex, and diagnoses can only explain up to 50% of the cost variation. More accurate risk adjustment is desired for provider performance assessment and improvement. To develop a case-mix algorithm that hospitals and payers can use to measure and compare cost and quality performance of their providers. All 6,048,895 patients with valid diagnoses and cost recorded in the US Veterans health care system in fiscal year 2016 were included in this study. The dependent variable was total cost at the patient level, and the explanatory variables were age, sex, and comorbidities represented by 762 clinically homogeneous groups, which were created by expanding the 283 categories from Clinical Classifications Software based on ICD-10-CM codes. The split-sample method was used to assess model overfitting and coefficient stability. The predictive power of the algorithms was ascertained by comparing the R2, mean absolute percentage error, root mean square error, predictive ratios, and c-statistics. The expansion of the Clinical Classifications Software categories resulted in higher predictive power. The R2 reached 0.72 and 0.52 for the transformed and raw-scale cost, respectively. The case-mix algorithm we developed based on age, sex, and diagnoses outperformed the existing case-mix models reported in the literature. The method developed in this study can be used by other health systems to produce tailored risk models for their specific purpose.
Kim, Sara; Brock, Doug; Prouty, Carolyn D; Odegard, Peggy Soule; Shannon, Sarah E; Robins, Lynne; Boggs, Jim G; Clark, Fiona J; Gallagher, Thomas
2011-01-01
Multiple-choice exams are not well suited for assessing communication skills. Standardized patient assessments are costly and patient and peer assessments are often biased. Web-based assessment using video content offers the possibility of reliable, valid, and cost-efficient means for measuring complex communication skills, including interprofessional communication. We report development of the Web-based Team-Oriented Medical Error Communication Assessment Tool, which uses videotaped cases for assessing skills in error disclosure and team communication. Steps in development included (a) defining communication behaviors, (b) creating scenarios, (c) developing scripts, (d) filming video with professional actors, and (e) writing assessment questions targeting team communication during planning and error disclosure. Using valid data from 78 participants in the intervention group, coefficient alpha estimates of internal consistency were calculated based on the Likert-scale questions and ranged from α=.79 to α=.89 for each set of 7 Likert-type discussion/planning items and from α=.70 to α=.86 for each set of 8 Likert-type disclosure items. The preliminary test-retest Pearson correlation based on the scores of the intervention group was r=.59 for discussion/planning and r=.25 for error disclosure sections, respectively. Content validity was established through reliance on empirically driven published principles of effective disclosure as well as integration of expert views across all aspects of the development process. In addition, data from 122 medicine and surgical physicians and nurses showed high ratings for video quality (4.3 of 5.0), acting (4.3), and case content (4.5). Web assessment of communication skills appears promising. Physicians and nurses across specialties respond favorably to the tool.
Cost-utility analysis of a preventive home visit program for older adults in Germany.
Brettschneider, Christian; Luck, Tobias; Fleischer, Steffen; Roling, Gudrun; Beutner, Katrin; Luppa, Melanie; Behrens, Johann; Riedel-Heller, Steffi G; König, Hans-Helmut
2015-04-03
Most older adults want to live independently in a familiar environment instead of moving to a nursing home. Preventive home visits based on multidimensional geriatric assessment can be one strategy to support this preference and might additionally reduce health care costs, due to the avoidance of costly nursing home admissions. The purpose of this study was to analyse the cost-effectiveness of preventive home visits from a societal perspective in Germany. This study is part of a multi-centre, non-blinded, randomised controlled trial aiming at the reduction of nursing home admissions. Participants were older than 80 years and living at home. Up to three home visits were conducted to identify self-care deficits and risk factors, to present recommendations and to implement solutions. The control group received usual care. A cost-utility analysis using quality-adjusted life years (QALY) based on the EQ-5D was performed. Resource utilization was assessed by means of the interview version of a patient questionnaire. A cost-effectiveness acceptability curve controlled for prognostic variables was constructed and a sensitivity analysis to control for the influence of the mode of QALY calculation was performed. 278 individuals (intervention group: 133; control group: 145) were included in the analysis. During 18 months follow-up mean adjusted total cost (mean: +4,401 EUR; bootstrapped standard error: 3,019.61 EUR) and number of QALY (mean: 0.0061 QALY; bootstrapped standard error: 0.0388 QALY) were higher in the intervention group, but differences were not significant. For preventive home visits the probability of an incremental cost-effectiveness ratio <50,000 EUR per QALY was only 15%. The results were robust with respect to the mode of QALY calculation. The evaluated preventive home visits programme is unlikely to be cost-effective. Clinical Trials.gov Identifier: NCT00644826.
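One point on a cost-effectiveness acceptability curve can be sketched by bootstrapping the probability of a positive net monetary benefit at a given willingness-to-pay threshold. The per-patient distributions below are synthetic, only loosely anchored to the reported mean differences, and do not reproduce the trial analysis.

```python
# Bootstrap the probability that an intervention is cost-effective at a
# willingness-to-pay threshold (one point of an acceptability curve).
import numpy as np

rng = np.random.default_rng(7)
n = 140
d_cost = rng.normal(4401, 3000, n)       # incremental cost per patient [EUR] (synthetic)
d_qaly = rng.normal(0.006, 0.04, n)      # incremental QALYs per patient (synthetic)
wtp = 50_000                             # willingness to pay [EUR per QALY]

nmb = wtp * d_qaly - d_cost              # net monetary benefit per patient
boot = np.array([rng.choice(nmb, n).mean() for _ in range(2000)])
print("P(cost-effective at 50,000 EUR/QALY) =", round(float(np.mean(boot > 0)), 2))
```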
Anticipating cognitive effort: roles of perceived error-likelihood and time demands.
Dunn, Timothy L; Inzlicht, Michael; Risko, Evan F
2017-11-13
Why are some actions evaluated as effortful? In the present set of experiments we address this question by examining individuals' perception of effort when faced with a trade-off between two putative cognitive costs: how much time a task takes vs. how error-prone it is. Specifically, we were interested in whether individuals anticipate engaging in a small amount of hard work (i.e., low time requirement, but high error-likelihood) vs. a large amount of easy work (i.e., high time requirement, but low error-likelihood) as being more effortful. In between-subject designs, Experiments 1 through 3 demonstrated that individuals anticipate options that are high in perceived error-likelihood (yet less time consuming) as more effortful than options that are perceived to be more time consuming (yet low in error-likelihood). Further, when asked to evaluate which of the two tasks was (a) more effortful, (b) more error-prone, and (c) more time consuming, effort-based and error-based choices closely tracked one another, but this was not the case for time-based choices. Utilizing a within-subject design, Experiment 4 demonstrated overall similar pattern of judgments as Experiments 1 through 3. However, both judgments of error-likelihood and time demand similarly predicted effort judgments. Results are discussed within the context of extant accounts of cognitive control, with considerations of how error-likelihood and time demands may independently and conjunctively factor into judgments of cognitive effort.
New developments in spatial interpolation methods of Sea-Level Anomalies in the Mediterranean Sea
NASA Astrophysics Data System (ADS)
Troupin, Charles; Barth, Alexander; Beckers, Jean-Marie; Pascual, Ananda
2014-05-01
The gridding of along-track Sea-Level Anomalies (SLA) measured by a constellation of satellites has numerous applications in oceanography, such as model validation, data assimilation or eddy tracking. Optimal Interpolation (OI) is often the preferred method for this task, as it leads to the lowest expected error and provides an error field associated with the analysed field. However, the numerical cost of the method may limit its utilization in situations where the number of data points is significant. Furthermore, the separation of non-adjacent regions with OI requires adaptation of the code, leading to a further increase of the numerical cost. To solve these issues, the Data-Interpolating Variational Analysis (DIVA), a technique designed to produce gridded fields from sparse in situ measurements, is applied to SLA data in the Mediterranean Sea. DIVA and OI have been shown to be equivalent (provided some assumptions on the covariances are made). The main difference lies in the covariance function, which is not explicitly formulated in DIVA. The particular spatial and temporal distributions of measurements required adaptations of the software tool (data format, parameter determinations, ...). These adaptations are presented in the poster. The daily analysed and error fields obtained with this technique are compared with available products such as the gridded field from the Archiving, Validation and Interpretation of Satellite Oceanographic data (AVISO) data server. The comparison reveals an overall good agreement between the products. The time evolution of the mean error field demonstrates the need for a large number of simultaneous altimetry satellites: in periods during which 4 satellites are available, the mean error is on the order of 17.5%, while when only 2 satellites are available, the error exceeds 25%. Finally, we propose the use of sea currents to improve the results of the interpolation, especially in the coastal area. These currents can be constructed from the bathymetry or extracted from an HF radar located in the Balearic Sea.
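A bare-bones optimal interpolation of scattered anomalies onto a grid, including the associated analysis-error variance, can be sketched as follows. The Gaussian covariance, length scale, noise level, and observations are assumptions for illustration and are unrelated to the DIVA or AVISO products.

```python
# Minimal 1-D optimal interpolation with a Gaussian background covariance,
# returning both the analysed field and its error variance.
import numpy as np

def oi(grid, obs_x, obs_y, L=50.0, noise=0.05, var=1.0):
    def cov(a, b):
        return var * np.exp(-0.5 * ((a[:, None] - b[None, :]) / L) ** 2)
    B_oo = cov(obs_x, obs_x) + noise * np.eye(len(obs_x))    # obs-obs covariance + noise
    B_go = cov(grid, obs_x)                                   # grid-obs covariance
    analysis = B_go @ np.linalg.solve(B_oo, obs_y)
    gain = B_go @ np.linalg.inv(B_oo)
    err_var = var - np.einsum("ij,ij->i", gain, B_go)         # analysis-error variance
    return analysis, err_var

grid = np.linspace(0.0, 500.0, 101)                           # along-track distance [km]
obs_x = np.array([40.0, 120.0, 130.0, 300.0, 420.0])
obs_y = np.array([0.08, 0.12, 0.10, -0.05, 0.02])             # SLA observations [m]
sla, ev = oi(grid, obs_x, obs_y)
print("max relative analysis error:", round(float(ev.max()), 2))
```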
Experiments with explicit filtering for LES using a finite-difference method
NASA Technical Reports Server (NTRS)
Lund, T. S.; Kaltenbach, H. J.
1995-01-01
The equations for large-eddy simulation (LES) are derived formally by applying a spatial filter to the Navier-Stokes equations. The filter width as well as the details of the filter shape are free parameters in LES, and these can be used both to control the effective resolution of the simulation and to establish the relative importance of different portions of the resolved spectrum. An analogous, but less well justified, approach to filtering is more or less universally used in conjunction with LES using finite-difference methods. In this approach, the finite support provided by the computational mesh as well as the wavenumber-dependent truncation errors associated with the finite-difference operators are assumed to define the filter operation. This approach has the advantage that it is also 'automatic' in the sense that no explicit filtering operations need to be performed. While it is certainly convenient to avoid the explicit filtering operation, there are some practical considerations associated with finite-difference methods that favor the use of an explicit filter. Foremost among these considerations is the issue of truncation error. All finite-difference approximations have an associated truncation error that increases with increasing wavenumber. These errors can be quite severe for the smallest resolved scales, and these errors will interfere with the dynamics of the small eddies if no corrective action is taken. Years of experience at CTR with a second-order finite-difference scheme for high Reynolds number LES has repeatedly indicated that truncation errors must be minimized in order to obtain acceptable simulation results. While the potential advantages of explicit filtering are rather clear, there is a significant cost associated with its implementation. In particular, explicit filtering reduces the effective resolution of the simulation compared with that afforded by the mesh. The resolution requirements for LES are usually set by the need to capture most of the energy-containing eddies, and if explicit filtering is used, the mesh must be enlarged so that these motions are passed by the filter. Given the high cost of explicit filtering, the following interesting question arises. Since the mesh must be expanded in order to perform the explicit filter, might it be better to take advantage of the increased resolution and simply perform an unfiltered simulation on the larger mesh? The cost of the two approaches is roughly the same, but the philosophy is rather different. In the filtered simulation, resolution is sacrificed in order to minimize the various forms of numerical error. In the unfiltered simulation, the errors are left intact, but they are concentrated at very small scales that could be dynamically unimportant from an LES perspective. Very little is known about this tradeoff and the objective of this work is to study this relationship in high Reynolds number channel flow simulations using a second-order finite-difference method.
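The effect of an explicit filter on the smallest resolved scales can be illustrated by applying a simple three-point top-hat filter to a one-dimensional field and comparing spectral amplitudes before and after; the filter and the field are illustrative only, not the filters used in the simulations described.

```python
# Apply a (1/4, 1/2, 1/4) top-hat filter to a 1-D periodic field and compare
# the amplitude of a well-resolved mode with a near-grid-scale mode.
import numpy as np

n = 256
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
u = np.sin(3 * x) + 0.3 * np.sin(100 * x)        # large eddy + near-grid-scale mode

u_f = 0.25 * np.roll(u, 1) + 0.5 * u + 0.25 * np.roll(u, -1)  # explicit filter

amp = lambda f: np.abs(np.fft.rfft(f)) * 2.0 / n              # modal amplitudes
for mode in (3, 100):                                         # integer wavenumbers
    print(f"k={mode}: unfiltered {amp(u)[mode]:.3f}, filtered {amp(u_f)[mode]:.3f}")
```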