Risør, Bettina Wulff; Lisby, Marianne; Sørensen, Jan
To evaluate the cost-effectiveness of an automated medication system (AMS) implemented in a Danish hospital setting. An economic evaluation was performed alongside a controlled before-and-after effectiveness study with one control ward and one intervention ward. The primary outcome measure was the number of errors in the medication administration process observed prospectively before and after implementation. To determine the difference in proportion of errors after implementation of the AMS, logistic regression was applied with the presence of error(s) as the dependent variable. Time, group, and interaction between time and group were the independent variables. The cost analysis used the hospital perspective with a short-term incremental costing approach. The total 6-month costs with and without the AMS were calculated as well as the incremental costs. The number of avoided administration errors was related to the incremental costs to obtain the cost-effectiveness ratio expressed as the cost per avoided administration error. The AMS resulted in a statistically significant reduction in the proportion of errors in the intervention ward compared with the control ward. The cost analysis showed that the AMS increased the ward's 6-month cost by €16,843. The cost-effectiveness ratio was estimated at €2.01 per avoided administration error, €2.91 per avoided procedural error, and €19.38 per avoided clinical error. The AMS was effective in reducing errors in the medication administration process at a higher overall cost. The cost-effectiveness analysis showed that the AMS was associated with affordable cost-effectiveness rates. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
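A minimal sketch, in Python, of the cost-effectiveness arithmetic described in the abstract above; the avoided-error counts are back-calculated from the reported figures and are therefore approximations, not data from the study:

# Cost-effectiveness ratio = incremental cost / number of avoided errors.
incremental_cost_eur = 16_843          # reported 6-month incremental cost of the AMS

# Reported ratios (EUR per avoided error); used here to back out implied counts.
reported_ratios = {"administration": 2.01, "procedural": 2.91, "clinical": 19.38}

for error_type, ratio in reported_ratios.items():
    implied_avoided = incremental_cost_eur / ratio   # approximate, rounding aside
    print(f"{error_type}: ~{implied_avoided:,.0f} avoided errors "
          f"=> {incremental_cost_eur / implied_avoided:.2f} EUR per avoided error")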
Elliott, Rachel A; Putman, Koen D; Franklin, Matthew; Annemans, Lieven; Verhaeghe, Nick; Eden, Martin; Hayre, Jasdeep; Rodgers, Sarah; Sheikh, Aziz; Avery, Anthony J
2014-06-01
We recently showed that a pharmacist-led information technology-based intervention (PINCER) was significantly more effective in reducing medication errors in general practices than providing simple feedback on errors, with cost per error avoided at £79 (US$131). We aimed to estimate cost effectiveness of the PINCER intervention by combining effectiveness in error reduction and intervention costs with the effect of the individual errors on patient outcomes and healthcare costs, to estimate the effect on costs and QALYs. We developed Markov models for each of six medication errors targeted by PINCER. Clinical event probability, treatment pathway, resource use and costs were extracted from literature and costing tariffs. A composite probabilistic model combined patient-level error models with practice-level error rates and intervention costs from the trial. Cost per extra QALY and cost-effectiveness acceptability curves were generated from the perspective of NHS England, with a 5-year time horizon. The PINCER intervention generated £2,679 less cost and 0.81 more QALYs per practice [incremental cost-effectiveness ratio (ICER): -£3,037 per QALY] in the deterministic analysis. In the probabilistic analysis, PINCER generated 0.001 extra QALYs per practice compared with simple feedback, at £4.20 less per practice. Despite this extremely small set of differences in costs and outcomes, PINCER dominated simple feedback with a mean ICER of -£3,936 (standard error £2,970). At a ceiling 'willingness-to-pay' of £20,000/QALY, PINCER reaches 59 % probability of being cost effective. PINCER produced marginal health gain at slightly reduced overall cost. Results are uncertain due to the poor quality of data to inform the effect of avoiding errors.
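As an illustration of how the ICER and the cost-effectiveness acceptability curve (CEAC) reported above are typically computed from probabilistic output, here is a hedged sketch; the simulated samples are synthetic stand-ins centred near the reported means, not the trial's actual posterior draws:

import numpy as np

rng = np.random.default_rng(0)
# Synthetic per-practice incremental outcomes (PINCER minus simple feedback),
# centred near the reported probabilistic means (0.001 QALYs gained, GBP 4.20 saved).
d_qaly = rng.normal(0.001, 0.004, 10_000)        # incremental QALYs per practice
d_cost = rng.normal(-4.20, 60.0, 10_000)         # incremental cost per practice (GBP)

icer = d_cost.mean() / d_qaly.mean()             # ratio of mean increments
print(f"ICER: GBP {icer:,.0f} per QALY")

# CEAC point at a willingness-to-pay ceiling of GBP 20,000/QALY:
wtp = 20_000
net_benefit = wtp * d_qaly - d_cost              # incremental net monetary benefit
print(f"P(cost-effective at GBP {wtp}/QALY): {np.mean(net_benefit > 0):.2f}")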
The Sensitivity of Adverse Event Cost Estimates to Diagnostic Coding Error
Wardle, Gavin; Wodchis, Walter P; Laporte, Audrey; Anderson, Geoffrey M; Baker, Ross G
2012-01-01
Objective: To examine the impact of diagnostic coding error on estimates of hospital costs attributable to adverse events. Data Sources: Original and reabstracted medical records of 9,670 complex medical and surgical admissions at 11 hospital corporations in Ontario from 2002 to 2004. Patient-specific costs, not including physician payments, were retrieved from the Ontario Case Costing Initiative database. Study Design: Adverse events were identified among the original and reabstracted records using ICD-10-CA (Canadian adaptation of ICD-10) codes flagged as postadmission complications. Propensity score matching and multivariate regression analysis were used to estimate the cost of the adverse events and to determine the sensitivity of cost estimates to diagnostic coding error. Principal Findings: Estimates of the cost of the adverse events ranged from $16,008 (metabolic derangement) to $30,176 (upper gastrointestinal bleeding). Coding errors caused the total cost attributable to the adverse events to be underestimated by 16 percent. The impact of coding error on adverse event cost estimates was highly variable at the organizational level. Conclusions: Estimates of adverse event costs are highly sensitive to coding error. Adverse event costs may be significantly underestimated if the likelihood of error is ignored. PMID:22091908
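The propensity-matched cost comparison described above can be sketched as follows; the covariates, synthetic sample data, and the 1:1 greedy matching rule are illustrative assumptions rather than the study's actual specification:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2_000
X = rng.normal(size=(n, 3))                                   # stand-in case-mix covariates
p_ae = 1 / (1 + np.exp(-(X @ np.array([0.8, -0.5, 0.3]) - 1.5)))
ae = rng.binomial(1, p_ae)                                    # adverse-event flag
cost = 12_000 + 3_000 * X[:, 0] + 20_000 * ae + rng.normal(0, 5_000, n)

# 1) Propensity score: probability of an adverse event given covariates.
ps = LogisticRegression().fit(X, ae).predict_proba(X)[:, 1]

# 2) Greedy 1:1 nearest-neighbour matching of AE cases to non-AE controls.
cases, controls = np.where(ae == 1)[0], np.where(ae == 0)[0]
used, diffs = set(), []
for i in cases:
    j = min((c for c in controls if c not in used), key=lambda c: abs(ps[c] - ps[i]))
    used.add(j)
    diffs.append(cost[i] - cost[j])

print(f"Estimated attributable cost per adverse event: ${np.mean(diffs):,.0f}")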
CLEAR: Cross-Layer Exploration for Architecting Resilience
2017-03-01
benchmark analysis, also provides cost-effective solutions (~1% additional energy cost for the same 50× improvement). This paper addresses the...core (OoO-core) [Wang 04], across 18 benchmarks. Such extensive exploration enables us to conclusively answer the above cross-layer resilience...analysis of the effects of soft errors on application benchmarks, provides a highly effective soft error resilience approach. 3. The above
The cost of implementing inpatient bar code medication administration.
Sakowski, Julie Ann; Ketchel, Alan
2013-02-01
To calculate the costs associated with implementing and operating an inpatient bar-code medication administration (BCMA) system in the community hospital setting and to estimate the cost per harmful error prevented. This is a retrospective, observational study. Costs were calculated from the hospital perspective and a cost-consequence analysis was performed to estimate the cost per preventable adverse drug event averted. Costs were collected from financial records and key informant interviews at 4 not-for-profit community hospitals. Costs included direct expenditures on capital, infrastructure, additional personnel, and the opportunity costs of time for existing personnel working on the project. The number of adverse drug events prevented using BCMA was estimated by multiplying the number of doses administered using BCMA by the rate of harmful errors prevented by interventions in response to system warnings. Our previous work found that BCMA identified and intercepted medication errors in 1.1% of doses administered, 9% of which potentially could have resulted in lasting harm. The cost of implementing and operating BCMA, including electronic pharmacy management and drug repackaging, over 5 years is $40,000 (range: $35,600 to $54,600) per BCMA-enabled bed and $2000 (range: $1800 to $2600) per harmful error prevented. BCMA can be an effective and potentially cost-saving tool for preventing the harm and costs associated with medication errors.
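A back-of-the-envelope sketch of the cost-per-harmful-error-prevented calculation described above; the dose volume is a hypothetical input, since the abstract reports per-bed and per-error figures rather than raw dose counts:

annual_doses = 1_000_000          # hypothetical BCMA-scanned doses over 5 years (assumption)
intercept_rate = 0.011            # errors identified and intercepted per dose (1.1%)
harmful_share = 0.09              # share of intercepted errors that could have caused lasting harm

harmful_errors_prevented = annual_doses * intercept_rate * harmful_share
total_cost = 2_000 * harmful_errors_prevented    # reported ~$2,000 per harmful error prevented
print(f"Harmful errors prevented: {harmful_errors_prevented:,.0f}")
print(f"Implied 5-year BCMA cost at $2,000/error: ${total_cost:,.0f}")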
An emulator for minimizing computer resources for finite element analysis
NASA Technical Reports Server (NTRS)
Melosh, R.; Utku, S.; Islam, M.; Salama, M.
1984-01-01
A computer code, SCOPE, has been developed for predicting the computer resources required for a given analysis code, computer hardware, and structural problem. The cost of running the code is a small fraction (about 3 percent) of the cost of performing the actual analysis. However, its accuracy in predicting the CPU and I/O resources depends intrinsically on the accuracy of calibration data that must be developed once for the computer hardware and the finite element analysis code of interest. Testing of the SCOPE code on the AMDAHL 470 V/8 computer and the ELAS finite element analysis program indicated small I/O errors (3.2 percent), larger CPU errors (17.8 percent), and negligible total errors (1.5 percent).
NASA Technical Reports Server (NTRS)
Gordon, Steven C.
1993-01-01
Spacecraft in orbit near libration point L1 in the Sun-Earth system are excellent platforms for research concerning solar effects on the terrestrial environment. One spacecraft mission launched in 1978 used an L1 orbit for nearly 4 years, and future L1 orbital missions are also being planned. Orbit determination and station-keeping are, however, required for these orbits. In particular, orbit determination error analysis may be used to compute the state uncertainty after a predetermined tracking period; the predicted state uncertainty levels then will impact the control costs computed in station-keeping simulations. Error sources, such as solar radiation pressure and planetary mass uncertainties, are also incorporated. For future missions, there may be some flexibility in the type and size of the spacecraft's nominal trajectory, but different orbits may produce varying error analysis and station-keeping results. The nominal path, for instance, can be (nearly) periodic or distinctly quasi-periodic. A periodic 'halo' orbit may be constructed to be significantly larger than a quasi-periodic 'Lissajous' path; both may meet mission requirements, but the required control costs for these orbits may differ. Also for this spacecraft tracking and control simulation problem, experimental design methods can be used to determine the most significant uncertainties. That is, these methods can determine the error sources in the tracking and control problem that most impact the control cost (output); they also produce an equation that gives the approximate functional relationship between the error inputs and the output.
Human Reliability and the Cost of Doing Business
NASA Technical Reports Server (NTRS)
DeMott, Diana
2014-01-01
Most businesses recognize that people will make mistakes and assume errors are just part of the cost of doing business, but do they need to be? Companies with high risk, or major consequences, should consider the effect of human error. Human errors have caused costly failures and workplace injuries in a variety of industries: airline mishaps, medical malpractice, medication administration errors, and major oil spills have all been blamed on human error. A technique to mitigate or even eliminate some of these costly human errors is the use of Human Reliability Analysis (HRA). Various methodologies are available to perform Human Reliability Assessments, ranging from identifying the most likely areas for concern to detailed assessments with calculated human error failure probabilities. Which methodology to use depends on a variety of factors, including: 1) how people react and act in different industries, and differing expectations based on industry standards, 2) factors that influence how the human errors could occur, such as tasks, tools, environment, workplace, support, training and procedure, 3) type and availability of data, and 4) how the industry views risk and reliability influences (types of emergencies, contingencies and routine tasks versus cost-based concerns). Human Reliability Assessments should be the first step to reduce, mitigate or eliminate costly mistakes or catastrophic failures. Using Human Reliability techniques to identify and classify human error risks allows a company more opportunities to mitigate or eliminate these risks and prevent costly failures.
Risør, Bettina Wulff; Lisby, Marianne; Sørensen, Jan
2018-02-01
Automated medication systems have been found to reduce errors in the medication process, but little is known about the cost-effectiveness of such systems. The objective of this study was to perform a model-based indirect cost-effectiveness comparison of three different, real-world automated medication systems compared with current standard practice. The considered automated medication systems were a patient-specific automated medication system (psAMS), a non-patient-specific automated medication system (npsAMS), and a complex automated medication system (cAMS). The economic evaluation used original effect and cost data from prospective, controlled, before-and-after studies of medication systems implemented at a Danish hematological ward and an acute medical unit. Effectiveness was described as the proportion of clinical and procedural error opportunities that were associated with one or more errors. An error was defined as a deviation from the electronic prescription, from standard hospital policy, or from written procedures. The cost assessment was based on 6-month standardization of observed cost data. The model-based comparative cost-effectiveness analyses were conducted with system-specific assumptions about effect size and costs in scenarios with consumption of 15,000, 30,000, and 45,000 doses per 6-month period. With 30,000 doses, the cost-effectiveness model showed that the cost-effectiveness ratio expressed as the cost per avoided clinical error was €24 for the psAMS, €26 for the npsAMS, and €386 for the cAMS. Comparison of the cost-effectiveness of the three systems in relation to different valuations of an avoided error showed that the psAMS was the most cost-effective system regardless of error type or valuation. The model-based indirect comparison against conventional practice showed that psAMS and npsAMS were more cost-effective than the cAMS alternative, and that psAMS was more cost-effective than npsAMS.
Karnon, Jonathan; Campbell, Fiona; Czoski-Murray, Carolyn
2009-04-01
Medication errors can lead to preventable adverse drug events (pADEs) that have significant cost and health implications. Errors often occur at care interfaces, and various interventions have been devised to reduce medication errors at the point of admission to hospital. The aim of this study is to assess the incremental costs and effects [measured as quality-adjusted life years (QALYs)] of a range of such interventions for which evidence of effectiveness exists. A previously published medication errors model was adapted to describe the pathway of errors occurring at admission through to the occurrence of pADEs. The baseline model was populated using literature-based values, and then calibrated to observed outputs. Evidence of effects was derived from a systematic review of interventions aimed at preventing medication error at hospital admission. All five interventions, for which evidence of effectiveness was identified, are estimated to be extremely cost-effective when compared with the baseline scenario. The pharmacist-led reconciliation intervention has the highest expected net benefits, and a probability of being cost-effective of over 60% at a QALY value of £10,000. The medication errors model provides reasonably strong evidence that some form of intervention to improve medicines reconciliation is a cost-effective use of NHS resources. The variation in the reported effectiveness of the few identified studies of medication error interventions illustrates the need for extreme attention to detail in the development of interventions and in their evaluation, and may justify the primary evaluation of more than one specification of included interventions.
Economic measurement of medical errors using a hospital claims database.
David, Guy; Gunnarsson, Candace L; Waters, Heidi C; Horblyuk, Ruslan; Kaplan, Harold S
2013-01-01
The primary objective of this study was to estimate the occurrence and costs of medical errors from the hospital perspective. Methods from a recent actuarial study of medical errors were used to identify medical injuries. A visit qualified as an injury visit if at least 1 of 97 injury groupings occurred at that visit, and the percentage of injuries caused by medical error was estimated. Visits with more than four injuries were removed from the population to avoid overestimation of cost. Population estimates were extrapolated from the Premier hospital database to all US acute care hospitals. There were an estimated 161,655 medical errors in 2008 and 170,201 medical errors in 2009. Extrapolated to the entire US population, there were more than 4 million unique injury visits containing more than 1 million unique medical errors each year. This analysis estimated that the total annual cost of measurable medical errors in the United States was $985 million in 2008 and just over $1 billion in 2009. The median cost per error to hospitals was $892 for 2008 and rose to $939 in 2009. Nearly one third of all medical injuries were due to error in each year. Medical errors directly impact patient outcomes and hospitals' profitability, especially since 2008 when Medicare stopped reimbursing hospitals for care related to certain preventable medical errors. Hospitals must rigorously analyze causes of medical errors and implement comprehensive preventative programs to reduce their occurrence as the financial burden of medical errors shifts to hospitals. Copyright © 2013 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Lahue, Betsy J; Pyenson, Bruce; Iwasaki, Kosuke; Blumen, Helen E; Forray, Susan; Rothschild, Jeffrey M
2012-11-01
Harmful medication errors, or preventable adverse drug events (ADEs), are a prominent quality and cost issue in healthcare. Injectable medications are important therapeutic agents, but they are associated with a greater potential for serious harm than oral medications. The national burden of preventable ADEs associated with inpatient injectable medications and the associated medical professional liability (MPL) costs have not been previously described in the literature. To quantify the economic burden of preventable ADEs related to inpatient injectable medications in the United States. Medical error data (MedMarx 2009-2011) were utilized to derive the distribution of errors by injectable medication types. Hospital data (Premier 2010-2011) identified the numbers and the types of injections per hospitalization. US payer claims (2009-2010 MarketScan Commercial and Medicare 5% Sample) were used to calculate the incremental cost of ADEs by payer and by diagnosis-related group (DRG). The incremental cost of ADEs was defined as inclusive of the time of inpatient admission and the following 4 months. Actuarial calculations, assumptions based on published literature, and DRG proportions from 17 state discharge databases were used to derive the probability of preventable ADEs per hospitalization and their annual costs. MPL costs were assessed from state- and national-level industry reports, premium rates, and from closed claims databases between 1990 and 2011. The 2010 American Hospital Association database was used for hospital-level statistics. All costs were adjusted to 2013 dollars. Based on this medication-level analysis of reported harmful errors and the frequency of inpatient administrations with actuarial projections, we estimate that preventable ADEs associated with injectable medications impact 1.2 million hospitalizations annually. Using a matched cohort analysis of healthcare claims as a basis for evaluating incremental costs, we estimate that inpatient preventable ADEs associated with injectable medications increase the annual US payer costs by $2.7 billion to $5.1 billion, averaging $600,000 in extra costs per hospital. Across categories of injectable drugs, insulin had the highest risk per administration for a preventable ADE, although errors in the higher-volume categories of anti-infective, narcotic/analgesic, anticoagulant/thrombolytic and anxiolytic/sedative injectable medications harmed more patients. Our analysis of liability claims estimates that MPL associated with injectable medications totals $300 million to $610 million annually, with an average cost of $72,000 per US hospital. The incremental healthcare and MPL costs of preventable ADEs resulting from inpatient injectable medications are substantial. The data in this study strongly support the clinical and business cases of investing in efforts to prevent errors related to injectable medications.
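To make the aggregate figures above concrete, a short sketch dividing the reported national totals into per-event averages; the per-hospitalization results are derived numbers, not values reported by the authors:

affected_hospitalizations = 1_200_000
payer_cost_low, payer_cost_high = 2.7e9, 5.1e9        # annual incremental payer cost (USD)
mpl_low, mpl_high = 3.0e8, 6.1e8                      # annual medical professional liability cost (USD)

print(f"Incremental payer cost per affected hospitalization: "
      f"${payer_cost_low / affected_hospitalizations:,.0f} - "
      f"${payer_cost_high / affected_hospitalizations:,.0f}")
print(f"MPL cost per affected hospitalization: "
      f"${mpl_low / affected_hospitalizations:,.0f} - "
      f"${mpl_high / affected_hospitalizations:,.0f}")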
Evaluation of The Operational Benefits Versus Costs of An Automated Cargo Mover
2016-12-01
logistics footprint and life-cycle cost are presented as part of this report. Analysis of modeling and simulation results identified statistically significant differences... (Report excerpt; listed figures include "Error of Estimation. Source: Eskew and Lawler (1994)" and "Load Results (100 Runs per Scenario)".)
Reducing Formation-Keeping Maneuver Costs for Formation Flying Satellites in Low-Earth Orbit
NASA Technical Reports Server (NTRS)
Hamilton, Nicholas
2001-01-01
Several techniques are used to synthesize the formation-keeping control law for a three-satellite formation in low-earth orbit. The objective is to minimize maneuver cost and position tracking error. Initial reductions are found for a one-satellite case by tuning the state-weighting matrix within the linear-quadratic-Gaussian framework. Further savings come from adjusting the maneuver interval. Scenarios examined include cases with and without process noise. These results are then applied to a three-satellite formation. For both the one-satellite and three-satellite cases, increasing the maneuver interval yields a decrease in maneuver cost and an increase in position tracking error. A maneuver interval of 8-10 minutes provides a good trade-off between maneuver cost and position tracking error. An analysis of the closed-loop poles with respect to varying maneuver intervals explains the effectiveness of the chosen maneuver interval.
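The maneuver-cost versus tracking-error trade-off described above comes from tuning the state-weighting matrix in a linear-quadratic framework. The toy double integrator below is only a stand-in for the satellite dynamics, intended to show the mechanics of the tuning, not to reproduce the paper's results:

import numpy as np
from scipy.linalg import solve_discrete_are

dt = 1.0
A = np.array([[1.0, dt], [0.0, 1.0]])        # discrete double integrator (position, velocity)
B = np.array([[0.5 * dt**2], [dt]])
R = np.array([[1.0]])                        # control-effort (maneuver cost) weight

rng = np.random.default_rng(0)
for q in (0.1, 1.0, 10.0):                   # larger state weight = tighter tracking
    Q = q * np.eye(2)
    P = solve_discrete_are(A, B, Q, R)
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # LQ gain

    x = np.array([1.0, 0.0])
    pos_err, effort = 0.0, 0.0
    for _ in range(200):
        u = float(-(K @ x)[0])
        x = A @ x + B.flatten() * u + rng.normal(0, 0.01, 2)   # process noise
        pos_err += x[0] ** 2
        effort += u ** 2
    print(f"q={q:5.1f}  tracking error={pos_err:8.2f}  maneuver effort={effort:8.2f}")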
NASA Astrophysics Data System (ADS)
Owens, P. R.; Libohova, Z.; Seybold, C. A.; Wills, S. A.; Peaslee, S.; Beaudette, D.; Lindbo, D. L.
2017-12-01
The measurement errors and spatial prediction uncertainties of soil properties in the modeling community are usually assessed against measured values when available. However, of equal importance is the assessment of the impacts of errors and uncertainty on cost-benefit analysis and risk assessments. Soil pH was selected as one of the most commonly measured soil properties used for liming recommendations. The objective of this study was to assess the error size from different sources and their implications with respect to management decisions. Error sources include measurement methods, laboratory sources, pedotransfer functions, database transactions, spatial aggregations, etc. Several databases of measured and predicted soil pH were used for this study, including the United States National Cooperative Soil Survey Characterization Database (NCSS-SCDB) and the US Soil Survey Geographic (SSURGO) Database. The distribution of errors among different sources, from measurement methods to spatial aggregation, showed a wide range of values. The greatest RMSE of 0.79 pH units was from spatial aggregation (SSURGO vs kriging), while the measurement methods had the lowest RMSE of 0.06 pH units. Assuming the order of data acquisition based on the transaction distance, i.e. from measurement method to spatial aggregation, the RMSE increased from 0.06 to 0.8 pH units, suggesting an "error propagation". This has major implications for practitioners and the modeling community. Most soil liming rate recommendations are based on 0.1 pH unit increments, while the desired soil pH level increments are based on 0.4 to 0.5 pH units. Thus, even when the measured and desired target soil pH are the same, most guidelines recommend 1 ton ha-1 lime, which translates into 111 ha-1 that the farmer has to factor into the cost-benefit analysis. However, this analysis needs to be based on uncertainty predictions (0.5-1.0 pH units) rather than measurement errors (0.1 pH units), which would translate into a 555-1,111 investment that needs to be assessed against the risk. The modeling community can benefit from such analyses; however, error size and spatial distribution for global and regional predictions need to be assessed against the variability of other drivers and the impact on management decisions.
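A small numeric sketch of the "error propagation" point made above: if the error sources were independent, their RMSEs would combine roughly in quadrature, and the spatial-aggregation step would dominate the measurement-method error. Only the two reported values (0.06 and 0.79 pH units) come from the abstract; the middle entries are placeholders:

import math

# RMSE per source, in pH units.
rmse_sources = {
    "measurement method": 0.06,
    "laboratory": 0.10,              # placeholder
    "pedotransfer function": 0.25,   # placeholder
    "spatial aggregation": 0.79,
}

combined = math.sqrt(sum(v ** 2 for v in rmse_sources.values()))
print(f"Combined RMSE (independence assumption): {combined:.2f} pH units")
for name, v in rmse_sources.items():
    print(f"  {name:22s} contributes {100 * v**2 / combined**2:4.1f}% of the error variance")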
Cost Risk Analysis Based on Perception of the Engineering Process
NASA Technical Reports Server (NTRS)
Dean, Edwin B.; Wood, Darrell A.; Moore, Arlene A.; Bogart, Edward H.
1986-01-01
In most cost estimating applications at the NASA Langley Research Center (LaRC), it is desirable to present predicted cost as a range of possible costs rather than a single predicted cost. A cost risk analysis generates a range of cost for a project and assigns a probability level to each cost value in the range. Constructing a cost risk curve requires a good estimate of the expected cost of a project. It must also include a good estimate of expected variance of the cost. Many cost risk analyses are based upon an expert's knowledge of the cost of similar projects in the past. In a common scenario, a manager or engineer, asked to estimate the cost of a project in his area of expertise, will gather historical cost data from a similar completed project. The cost of the completed project is adjusted using the perceived technical and economic differences between the two projects. This allows errors from at least three sources. The historical cost data may be in error by some unknown amount. The managers' evaluation of the new project and its similarity to the old project may be in error. The factors used to adjust the cost of the old project may not correctly reflect the differences. Some risk analyses are based on untested hypotheses about the form of the statistical distribution that underlies the distribution of possible cost. The usual problem is not just to come up with an estimate of the cost of a project, but to predict the range of values into which the cost may fall and with what level of confidence the prediction is made. Risk analysis techniques that assume the shape of the underlying cost distribution and derive the risk curve from a single estimate plus and minus some amount usually fail to take into account the actual magnitude of the uncertainty in cost due to technical factors in the project itself. This paper addresses a cost risk method that is based on parametric estimates of the technical factors involved in the project being costed. The engineering process parameters are elicited from the engineer/expert on the project and are based on that expert's technical knowledge. These are converted by a parametric cost model into a cost estimate. The method discussed makes no assumptions about the distribution underlying the distribution of possible costs, and is not tied to the analysis of previous projects, except through the expert calibrations performed by the parametric cost analyst.
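One way to realize the approach described above in code: elicit ranges for the engineering-process parameters, push samples through a parametric cost model, and read the cost-risk curve off the resulting empirical distribution. The parameter names, ranges, and cost-estimating relationship below are purely illustrative assumptions, not LaRC's model:

import numpy as np

rng = np.random.default_rng(42)
n = 20_000

# Elicited technical parameters (illustrative triangular ranges).
mass_kg    = rng.triangular(800, 1_000, 1_400, n)     # dry mass
complexity = rng.triangular(0.8, 1.0, 1.6, n)         # engineering complexity factor
heritage   = rng.triangular(0.7, 0.9, 1.0, n)         # design heritage (1 = all new)

# Hypothetical parametric cost-estimating relationship ($M).
cost = 0.5 * mass_kg ** 0.7 * complexity * heritage

# Cost-risk curve: cost value vs. probability of not exceeding it.
for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"P(cost <= ${np.quantile(cost, p):,.0f}M) = {p:.0%}")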
New dimension analyses with error analysis for quaking aspen and black spruce
NASA Technical Reports Server (NTRS)
Woods, K. D.; Botkin, D. B.; Feiveson, A. H.
1987-01-01
Dimension analysis for black spruce in wetland stands and trembling aspen are reported, including new approaches in error analysis. Biomass estimates for sacrificed trees have standard errors of 1 to 3%; standard errors for leaf areas are 10 to 20%. Bole biomass estimation accounts for most of the error for biomass, while estimation of branch characteristics and area/weight ratios accounts for the leaf area error. Error analysis provides insight for cost effective design of future analyses. Predictive equations for biomass and leaf area, with empirically derived estimators of prediction error, are given. Systematic prediction errors for small aspen trees and for leaf area of spruce from different site-types suggest a need for different predictive models within species. Predictive equations are compared with published equations; significant differences may be due to species responses to regional or site differences. Proportional contributions of component biomass in aspen change in ways related to tree size and stand development. Spruce maintains comparatively constant proportions with size, but shows changes corresponding to site. This suggests greater morphological plasticity of aspen and significance for spruce of nutrient conditions.
The Accuracy of Webcams in 2D Motion Analysis: Sources of Error and Their Control
ERIC Educational Resources Information Center
Page, A.; Moreno, R.; Candelas, P.; Belmar, F.
2008-01-01
In this paper, we show the potential of webcams as precision measuring instruments in a physics laboratory. Various sources of error appearing in 2D coordinate measurements using low-cost commercial webcams are discussed, quantifying their impact on accuracy and precision, and simple procedures to control these sources of error are presented.
A reformulation of the Cost Plus Net Value Change (C+NVC) model of wildfire economics
Geoffrey H. Donovan; Douglas B. Rideout
2003-01-01
The Cost plus Net Value Change (C+NVC) model provides the theoretical foundation for wildland fire economics and provides the basis for the National Fire Management Analysis System (NFMAS). The C+NVC model is based on the earlier least Cost plus Loss model (LC+L) expressed by Sparhawk (1925). Mathematical and graphical analysis of the LC+L model illustrates two errors...
Cost-effectiveness of the stream-gaging program in New Jersey
Schopp, R.D.; Ulery, R.L.
1984-01-01
The results of a study of the cost-effectiveness of the stream-gaging program in New Jersey are documented. This study is part of a 5-year nationwide analysis undertaken by the U.S. Geological Survey to define and document the most cost-effective means of furnishing streamflow information. This report identifies the principal uses of the data and relates those uses to funding sources, applies, at selected stations, alternative less costly methods (that is flow routing, regression analysis) for furnishing the data, and defines a strategy for operating the program which minimizes uncertainty in the streamflow data for specific operating budgets. Uncertainty in streamflow data is primarily a function of the percentage of missing record and the frequency of discharge measurements. In this report, 101 continuous stream gages and 73 crest-stage or stage-only gages are analyzed. A minimum budget of $548,000 is required to operate the present stream-gaging program in New Jersey with an average standard error of 27.6 percent. The maximum budget analyzed was $650,000, which resulted in an average standard error of 17.8 percent. The 1983 budget of $569,000 resulted in a standard error of 24.9 percent under present operating policy. (USGS)
Error Cost Escalation Through the Project Life Cycle
NASA Technical Reports Server (NTRS)
Stecklein, Jonette M.; Dabney, Jim; Dick, Brandon; Haskins, Bill; Lovell, Randy; Moroney, Gregory
2004-01-01
It is well known that the costs to fix errors increase as the project matures, but how fast do those costs build? A study was performed to determine the relative cost of fixing errors discovered during various phases of a project life cycle. This study used three approaches to determine the relative costs: the bottom-up cost method, the total cost breakdown method, and the top-down hypothetical project method. The approaches and results described in this paper presume development of a hardware/software system having project characteristics similar to those used in the development of a large, complex spacecraft, a military aircraft, or a small communications satellite. The results show the degree to which costs escalate, as errors are discovered and fixed at later and later phases in the project life cycle. If the cost of fixing a requirements error discovered during the requirements phase is defined to be 1 unit, the cost to fix that error if found during the design phase increases to 3 - 8 units; at the manufacturing/build phase, the cost to fix the error is 7 - 16 units; at the integration and test phase, the cost to fix the error becomes 21 - 78 units; and at the operations phase, the cost to fix the requirements error ranged from 29 units to more than 1500 units
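A compact sketch of how the reported escalation factors translate into dollars for a single requirements error; the baseline unit cost is a hypothetical input, while the per-phase ranges are taken directly from the abstract:

unit_cost = 10_000            # hypothetical cost to fix the error in the requirements phase ($)

# Relative cost ranges reported above (requirements-phase fix = 1 unit).
escalation = {
    "requirements":        (1, 1),
    "design":              (3, 8),
    "manufacturing/build": (7, 16),
    "integration & test":  (21, 78),
    "operations":          (29, 1_500),
}

for phase, (lo, hi) in escalation.items():
    print(f"{phase:20s}: ${lo * unit_cost:>12,} - ${hi * unit_cost:>12,}")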
NASA Astrophysics Data System (ADS)
Colins, Karen; Li, Liqian; Liu, Yu
2017-05-01
Mass production of widely used semiconductor digital integrated circuits (ICs) has lowered unit costs to the level of ordinary daily consumables of a few dollars. It is therefore reasonable to contemplate the idea of an engineered system that consumes unshielded low-cost ICs for the purpose of measuring gamma radiation dose. Underlying the idea is the premise of a measurable correlation between an observable property of ICs and radiation dose. Accumulation of radiation-damage-induced state changes or error events is such a property. If correct, the premise could make possible low-cost wide-area radiation dose measurement systems, instantiated as wireless sensor networks (WSNs) with unshielded consumable ICs as nodes, communicating error events to a remote base station. The premise has been investigated quantitatively for the first time in laboratory experiments and related analyses performed at the Canadian Nuclear Laboratories. State changes or error events were recorded in real time during irradiation of samples of ICs of different types in a 60Co gamma cell. From the error-event sequences, empirical distribution functions of dose were generated. The distribution functions were inverted and probabilities scaled by total error events, to yield plots of the relationship between dose and error tallies. Positive correlation was observed, and discrete functional dependence of dose quantiles on error tallies was measured, demonstrating the correctness of the premise. The idea of an engineered system that consumes unshielded low-cost ICs in a WSN, for the purpose of measuring gamma radiation dose over wide areas, is therefore tenable.
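The dose-versus-error-tally relationship described above can be obtained by inverting the empirical distribution of dose at error events. A minimal sketch with a synthetic error-event sequence standing in for the gamma-cell data:

import numpy as np

rng = np.random.default_rng(7)
# Synthetic stand-in: doses (arbitrary units) at which successive error events were logged.
dose_at_error = np.sort(rng.gamma(shape=2.0, scale=50.0, size=120))

# Empirical CDF of dose at error events: F(dose) = fraction of events logged by that dose.
# Inverting it maps an error tally k to the dose quantile reached after k events
# (the k-th order statistic of the logged doses).
for k in (10, 30, 60, 90, 120):
    print(f"after {k:3d} errors: estimated dose ~ {dose_at_error[k - 1]:6.1f} (arbitrary units)")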
NASA Technical Reports Server (NTRS)
Rosenberg, Linda H.; Arthur, James D.; Stapko, Ruth K.; Davani, Darush
1999-01-01
The Software Assurance Technology Center (SATC) at NASA Goddard Space Flight Center has been investigating how projects can determine when sufficient testing has been completed. For most projects, schedules are underestimated, and the last phase of the software development, testing, must be decreased. Two questions are frequently asked: "To what extent is the software error-free? " and "How much time and effort is required to detect and remove the remaining errors? " Clearly, neither question can be answered with absolute certainty. Nonetheless, the ability to answer these questions with some acceptable level of confidence is highly desirable. First, knowing the extent to which a product is error-free, we can judge when it is time to terminate testing. Secondly, if errors are judged to be present, we can perform a cost/benefit trade-off analysis to estimate when the software will be ready for use and at what cost. This paper explains the efforts of the SATC to help projects determine what is sufficient testing and when is the most cost-effective time to stop testing.
Cost effectiveness of stream-gaging program in Michigan
Holtschlag, D.J.
1985-01-01
This report documents the results of a study of the cost effectiveness of the stream-gaging program in Michigan. Data uses and funding sources were identified for the 129 continuous gaging stations being operated in Michigan as of 1984. One gaging station was identified as having insufficient reason to continue its operation. Several stations were identified for reactivation, should funds become available, because of insufficiencies in the data network. Alternative methods of developing streamflow information based on routing and regression analyses were investigated for 10 stations. However, no station records were reproduced with sufficient accuracy to replace conventional gaging practices. A cost-effectiveness analysis of the data-collection procedure for the ice-free season was conducted using a Kalman-filter analysis. To define missing-record characteristics, cross-correlation coefficients and coefficients of variation were computed at stations on the basis of daily mean discharge. Discharge-measurement data were used to describe the gage/discharge rating stability at each station. The results of the cost-effectiveness analysis for a 9-month ice-free season show that the current policy of visiting most stations on a fixed servicing schedule once every 6 weeks results in an average standard error of 12.1 percent for the current $718,100 budget. By adopting a flexible servicing schedule, the average standard error could be reduced to 11.1 percent. Alternatively, the budget could be reduced to $700,200 while maintaining the current level of accuracy. A minimum budget of $680,200 is needed to operate the 129-gaging-station program; a budget less than this would not permit proper service and maintenance of stations. At the minimum budget, the average standard error would be 14.4 percent. A budget of $789,900 (the maximum analyzed) would result in a decrease in the average standard error to 9.07 percent. Owing to continual changes in the composition of the network and the changes in the uncertainties of streamflow accuracy at individual stations, the cost-effectiveness analysis will need to be updated regularly if it is to be used as a management tool. Cost of these updates need to be considered in decisions concerning the feasibility of flexible servicing schedules.
Chen, Chia-Chi; Hsiao, Fei-Yuan; Shen, Li-Jiuan; Wu, Chien-Chih
2017-08-01
Medication errors may lead to adverse drug events (ADEs), which endanger patient safety and increase healthcare-related costs. The on-ward deployment of clinical pharmacists has been shown to reduce preventable ADEs and save costs. The purpose of this study was to evaluate the ADE-prevention and cost-saving effects of clinical pharmacist deployment in a nephrology ward. This was a retrospective study, which compared the number of pharmacist interventions 1 year before and after a clinical pharmacist was deployed in a nephrology ward. The clinical pharmacist attended ward rounds, reviewed and revised all medication orders, and gave active recommendations on medication use. For the intervention analysis, the numbers and types of the pharmacist's interventions in medication orders and the active recommendations were compared. For the cost analysis, both estimated cost savings and cost avoidance were calculated and compared. The total numbers of pharmacist interventions in medication orders were 824 in 2012 (preintervention) and 1977 in 2013 (postintervention). The numbers of active recommendations were 40 in 2012 and 253 in 2013. The estimated cost savings in 2012 and 2013 were NT$52,072 and NT$144,138, respectively. The estimated cost avoidances of preventable ADEs in 2012 and 2013 were NT$3,383,700 and NT$7,342,200, respectively. The benefit/cost ratio increased from 4.29 to 9.36, and average admission days decreased by 2 days after the on-ward deployment of a clinical pharmacist. The number of pharmacist interventions increased dramatically after the on-ward deployment. This service could reduce medication errors, preventable ADEs, and the costs of both medications and potential ADEs.
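A sketch of the benefit/cost arithmetic reported above; the pharmacist service cost used as the denominator is a hypothetical figure chosen only so that the 2013 numbers roughly reproduce the reported ratio of 9.36:

cost_saving_2013    = 144_138      # NT$, direct medication cost savings
cost_avoidance_2013 = 7_342_200    # NT$, estimated cost avoidance from prevented ADEs
pharmacist_cost     = 800_000      # NT$, hypothetical annual cost of the on-ward service (assumption)

benefit = cost_saving_2013 + cost_avoidance_2013
print(f"Benefit/cost ratio: {benefit / pharmacist_cost:.2f}")   # ~9.36 with this assumed denominator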
Al-lela, Omer Qutaiba B; Bahari, Mohd Baidi; Al-abbassi, Mustafa G; Salih, Muhannad R M; Basher, Amena Y
2012-06-06
The immunization status of children is improved by interventions that increase community demand for compulsory and non-compulsory vaccines; among the most important of these are interventions directed at immunization providers. The aim of this study is to evaluate the activities of immunization providers in terms of activity time and cost, to calculate the cost of immunization doses, and to determine the cost of immunization dose errors. A time-motion and cost analysis study design was used. Five public health clinics in Mosul, Iraq, participated in the study. Fifty (50) vaccine doses were required to estimate activity time and cost. A micro-costing method was used; time and cost data were collected for each immunization-related activity performed by the clinic staff. A stopwatch was used to measure the duration of activity interactions between the parents and clinic staff. The immunization service cost was calculated by multiplying the average salary per minute by the activity time in minutes. A total of 528 immunization cards of Iraqi children were scanned to determine the number and cost of immunization dose errors (extra immunization doses and invalid doses). The average time for child registration was 6.7 min per immunization dose, and the physician spent more than 10 min per dose. Nurses needed more than 5 min to complete child vaccination. The total cost of immunization activities was 1.67 US$ per immunization dose. The measles vaccine (fifth dose) has a lower price (0.42 US$) than all other immunization doses. The cost of a total of 288 invalid doses was 744.55 US$, and the cost of a total of 195 extra immunization doses was 503.85 US$. The time spent on physicians' activities was longer than that spent on registrars' and nurses' activities. The physician total cost was higher than the registrar cost and the nurse cost. The total immunization cost will increase by about 13.3% owing to dose errors. Copyright © 2012 Elsevier Ltd. All rights reserved.
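The micro-costing rule described above (activity cost = average salary per minute × activity minutes) in a short sketch; the per-minute salary figures are illustrative assumptions, while the activity times and reported totals come from the abstract:

# minutes per immunization dose (from the abstract) and assumed salary rates (US$/min)
activities = {
    # activity:          (minutes, salary_per_min)  -- salary rates are assumptions
    "registration":       (6.7,  0.05),
    "physician visit":    (10.0, 0.10),
    "nurse vaccination":  (5.0,  0.06),
}

dose_cost = sum(minutes * rate for minutes, rate in activities.values())
print(f"Estimated activity cost per dose: {dose_cost:.2f} US$  (reported total: 1.67 US$)")

# Cost of dose errors, using the reported totals:
invalid, extra = 744.55, 503.85
print(f"Dose-error cost: {invalid + extra:.2f} US$ (about 13.3% added to total immunization cost)")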
The impact of using an intravenous workflow management system (IVWMS) on cost and patient safety.
Lin, Alex C; Deng, Yihong; Thaibah, Hilal; Hingl, John; Penm, Jonathan; Ivey, Marianne F; Thomas, Mark
2018-07-01
The aim of this study was to determine the financial costs associated with wasted and missing doses before and after the implementation of an intravenous workflow management system (IVWMS) and to quantify the number and rate of detected intravenous (IV) preparation errors. A retrospective analysis of the sample hospital information system database was conducted using three months of data before and after the implementation of an IVWMS (DoseEdge®), which uses barcode scanning and photographic technologies to track and verify each step of the preparation process. The financial impact associated with wasted and missing IV doses was determined by combining drug acquisition, labor, accessory, and disposal costs. The intercepted error reports and pharmacist-detected error reports were drawn from the IVWMS to quantify the number of errors by defined error categories. The total numbers of IV doses prepared before and after the implementation of the IVWMS were 110,963 and 101,765 doses, respectively. The adoption of the IVWMS significantly reduced the numbers of wasted and missing IV doses by 14,176 and 2268 doses, respectively (p < 0.001). The overall cost saving of using the system was $144,019 over 3 months. The total number of errors detected was 1160 (1.14%) after using the IVWMS. The implementation of the IVWMS facilitated workflow changes that led to a positive impact on cost and patient safety. The implementation of the IVWMS increased patient safety by enforcing standard operating procedures and bar code verifications. Published by Elsevier B.V.
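A short sketch of the error-rate figure above and a simple annualization of the reported saving; the four-quarter extrapolation is an assumption, not a result from the study:

doses_after = 101_765
errors_detected = 1_160
print(f"Detected IV preparation error rate: {errors_detected / doses_after:.2%}")   # ~1.14%

savings_3_months = 144_019
print(f"Annualized cost saving (simple extrapolation, an assumption): "
      f"${savings_3_months * 4:,.0f}")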
NASA Technical Reports Server (NTRS)
Tuey, R. C.
1972-01-01
Computer solutions of linear programming problems are outlined. Information covers vector spaces, convex sets, and matrix algebra elements for solving simultaneous linear equations. Dual problems, reduced cost analysis, ranges, and error analysis are illustrated.
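Since the report covers reduced-cost analysis, here is a compact numerical illustration of how reduced costs and dual values fall out of a basic solution of a linear program; the example problem is arbitrary and the optimal basis is assumed to be known:

import numpy as np

# Maximize c^T x subject to A x = b, x >= 0 (slack variables already included).
c = np.array([3.0, 5.0, 0.0, 0.0])        # objective coefficients
A = np.array([[1.0, 2.0, 1.0, 0.0],
              [3.0, 1.0, 0.0, 1.0]])
b = np.array([8.0, 9.0])

basis, nonbasis = [0, 1], [2, 3]          # assume x1, x2 are basic at the optimum
B, N = A[:, basis], A[:, nonbasis]

x_B = np.linalg.solve(B, b)               # basic variable values
y = np.linalg.solve(B.T, c[basis])        # simplex multipliers (dual values)
reduced_costs = c[nonbasis] - N.T @ y     # reduced costs of the non-basic variables

print("basic solution:", dict(zip(basis, np.round(x_B, 3))))
print("dual values:   ", np.round(y, 3))
print("reduced costs: ", dict(zip(nonbasis, np.round(reduced_costs, 3))))
# Non-positive reduced costs confirm optimality for this maximization problem.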
SEU System Analysis: Not Just the Sum of All Parts
NASA Technical Reports Server (NTRS)
Berg, Melanie D.; Label, Kenneth
2014-01-01
Single event upset (SEU) analysis of complex systems is challenging. Currently, system SEU analysis is performed by component level partitioning and then either: the most dominant SEU cross-sections (SEUs) are used in system error rate calculations; or the partition SEUs are summed to eventually obtain a system error rate. In many cases, system error rates are overestimated because these methods generally overlook system level derating factors. The problem with overestimating is that it can cause overdesign and consequently negatively affect the following: cost, schedule, functionality, and validation/verification. The scope of this presentation is to discuss the risks involved with our current scheme of SEU analysis for complex systems; and to provide alternative methods for improvement.
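To illustrate the point about derating: a naive system rate sums the partition upset rates, while a derated estimate weights each partition by the fraction of upsets that actually propagate to a system-level error. The cross-sections, flux, and derating values below are invented for illustration only:

particle_flux = 1e-5                      # particles / (cm^2 * s), illustrative environment

# partition: (SEU cross-section cm^2/device, derating factor = fraction of upsets
#             that propagate to a system-level error)
partitions = {
    "control logic": (2.0e-8, 0.30),
    "datapath":      (5.0e-8, 0.10),
    "config memory": (1.0e-7, 0.05),
}

naive = sum(sigma for sigma, _ in partitions.values()) * particle_flux
derated = sum(sigma * d for sigma, d in partitions.values()) * particle_flux

print(f"naive system error rate:   {naive:.3e} errors/s")
print(f"derated system error rate: {derated:.3e} errors/s "
      f"({naive / derated:.1f}x lower than the naive sum)")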
Wang, Wansheng; Chen, Long; Zhou, Jie
2015-01-01
A postprocessing technique for mixed finite element methods for the Cahn-Hilliard equation is developed and analyzed. Once the mixed finite element approximations have been computed at a fixed time on the coarser mesh, the approximations are postprocessed by solving two decoupled Poisson equations in an enriched finite element space (either on a finer grid or a higher-order space) for which many fast Poisson solvers can be applied. The nonlinear iteration is only applied to a much smaller size problem and the computational cost using Newton and direct solvers is negligible compared with the cost of the linear problem. The analysis presented here shows that this technique remains the optimal rate of convergence for both the concentration and the chemical potential approximations. The corresponding error estimate obtained in our paper, especially the negative norm error estimates, are non-trivial and different with the existing results in the literatures. PMID:27110063
Analyzing human errors in flight mission operations
NASA Technical Reports Server (NTRS)
Bruno, Kristin J.; Welz, Linda L.; Barnes, G. Michael; Sherif, Josef
1993-01-01
A long-term program is in progress at JPL to reduce the cost and risk of flight mission operations through a defect prevention/error management program. The main thrust of this program is to create an environment in which the performance of the total system, both the human operator and the computer system, is optimized. To this end, 1580 Incident Surprise Anomaly reports (ISAs) from 1977-1991 were analyzed from the Voyager and Magellan projects. A Pareto analysis revealed that 38 percent of the errors were classified as human errors. A preliminary cluster analysis based on the Magellan human errors (204 ISAs) is presented here. The resulting clusters described the underlying relationships among the ISAs. Initial models of human error in flight mission operations are presented. Next, the Voyager ISAs will be scored and included in the analysis. Eventually, these relationships will be used to derive a theoretically motivated and empirically validated model of human error in flight mission operations. Ultimately, this analysis will be used to make continuous process improvements to end-user applications and training requirements. This Total Quality Management approach will enable the management and prevention of errors in the future.
[Risk Management: concepts and chances for public health].
Palm, Stefan; Cardeneo, Margareta; Halber, Marco; Schrappe, Matthias
2002-01-15
Errors are a common problem in medicine and occur as a result of a complex process involving many contributing factors. Medical errors significantly reduce the safety margin for the patient and contribute additional costs in health care delivery. In most cases adverse events cannot be attributed to a single underlying cause. Therefore an effective risk management strategy must follow a system approach, which is based on counting and analysis of near misses. The development of defenses against the undesired effects of errors should be the main focus rather than asking the question "Who blundered?". Analysis of near misses (which in this context can be compared to indicators) offers several methodological advantages as compared to the analysis of errors and adverse events. Risk management is an integral element of quality management.
Error begat error: design error analysis and prevention in social infrastructure projects.
Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M
2012-09-01
Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research addressing error causation in construction projects, design errors remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospitals, education, law and order type buildings). A systemic model of error causation is propagated and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in congruence to prevent design errors from occurring and so ensure that safety and project performance are ameliorated. Copyright © 2011. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Kurdhi, N. A.; Nurhayati, R. A.; Wiyono, S. B.; Handajani, S. S.; Martini, T. S.
2017-01-01
In this paper, we develop an integrated inventory model considering imperfect quality items, inspection errors, controllable lead time, and a budget capacity constraint. The fraction of imperfect items is assumed to follow a uniform distribution, and imperfect items are detected during the screening process. The inspection itself, however, is subject to two types of error: a type I inspection error (a non-defective item classified as defective) and a type II inspection error (a defective item classified as non-defective). The demand during the lead time is unknown and follows a normal distribution. The lead time can be controlled by adding a crashing cost. Furthermore, the budget capacity constraint arises from the limited purchasing cost. The purposes of this research are: to modify the integrated vendor-buyer inventory model, to establish the optimal solution using the Kuhn-Tucker conditions, and to apply the models. Based on the application results and the sensitivity analysis, the integrated model yields a lower total inventory cost than separate optimization of the vendor and buyer inventories.
Prevalence and cost of hospital medical errors in the general and elderly United States populations.
Mallow, Peter J; Pandya, Bhavik; Horblyuk, Ruslan; Kaplan, Harold S
2013-12-01
The primary objective of this study was to quantify the differences in the prevalence rate and costs of hospital medical errors between the general population and an elderly population aged ≥65 years. Methods from an actuarial study of medical errors were modified to identify medical errors in the Premier Hospital Database using data from 2009. Visits with more than four medical errors were removed from the population to avoid over-estimation of cost. Prevalence rates were calculated based on the total number of inpatient visits. There were 3,466,596 total inpatient visits in 2009. Of these, 1,230,836 (36%) occurred in people aged ≥ 65. The prevalence rate was 49 medical errors per 1000 inpatient visits in the general cohort and 79 medical errors per 1000 inpatient visits for the elderly cohort. The top 10 medical errors accounted for more than 80% of the total in the general cohort and the 65+ cohort. The most costly medical error for the general population was postoperative infection ($569,287,000). Pressure ulcers were most costly ($347,166,257) in the elderly population. This study was conducted with a hospital administrative database, and assumptions were necessary to identify medical errors in the database. Further, there was no method to identify errors of omission or misdiagnoses within the database. This study indicates that prevalence of hospital medical errors for the elderly is greater than the general population and the associated cost of medical errors in the elderly population is quite substantial. Hospitals which further focus their attention on medical errors in the elderly population may see a significant reduction in costs due to medical errors as a disproportionate percentage of medical errors occur in this age group.
Cost-effectiveness of an electronic medication ordering system (CPOE/CDSS) in hospitalized patients.
Vermeulen, K M; van Doormaal, J E; Zaal, R J; Mol, P G M; Lenderink, A W; Haaijer-Ruskamp, F M; Kosterink, J G W; van den Bemt, P M L A
2014-08-01
Prescribing medication is an important aspect of almost all in-hospital treatment regimes. Besides their obviously beneficial effects, medicines can also cause adverse drug events (ADEs), which increase morbidity, mortality and health care costs. In part, these ADEs arise from medication errors, e.g. at the prescribing stage; ADEs caused by medication errors are preventable ADEs. Until now, medication ordering has primarily been a paper-based process and, consequently, error prone. Computerized Physician Order Entry combined with a basic Clinical Decision Support System (CPOE/CDSS) is considered to enhance patient safety. Limited information is available on the balance between the health gains and the costs that need to be invested in order to achieve these positive effects. The aim of this study was to assess the balance between the effects and costs of CPOE/CDSS compared with traditional paper-based medication ordering. The economic evaluation was performed alongside a clinical study (interrupted time series design) on the effectiveness of CPOE/CDSS, including a cost-minimization and a cost-effectiveness analysis. Data collection took place between 2005 and 2008. Analyses were performed from a hospital perspective. The study was performed in a general teaching hospital and a University Medical Centre on general internal medicine, gastroenterology and geriatric wards. Computerized Physician Order Entry combined with basic Clinical Decision Support (CPOE/CDSS) was compared with a traditional paper-based system. All costs of both medication ordering systems are based on resources used and time invested. Prices were expressed in euros (price level 2009). Effectiveness outcomes were medication errors and preventable adverse drug events. During the paper-based prescribing period 592 patients were included, and during the CPOE/CDSS period 603. Total costs of the paper-based system and CPOE/CDSS amounted to €12.37 and €14.91 per patient per day, respectively. The incremental cost-effectiveness ratio (ICER) was €3.54 for medication errors and €322.70 for preventable adverse drug events, indicating the extra amount that has to be invested in order to prevent one medication error or one pADE. CPOE with basic CDSS contributes to a decreased risk of preventable harm. Overall, the extra costs of CPOE/CDSS needed to prevent one ME or one pADE seem acceptable. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Cost effectiveness of the US Geological Survey stream-gaging program in Alabama
Jeffcoat, H.H.
1987-01-01
A study of the cost effectiveness of the stream gaging program in Alabama identified data uses and funding sources for 72 surface water stations (including dam stations, slope stations, and continuous-velocity stations) operated by the U.S. Geological Survey in Alabama with a budget of $393,600. Of these, 58 gaging stations were used in all phases of the analysis at a funding level of $328,380. For the current policy of operation of the 58-station program, the average standard error of estimation of instantaneous discharge is 29.3%. This overall level of accuracy can be maintained with a budget of $319,800 by optimizing routes and implementing some policy changes. The maximum budget considered in the analysis was $361,200, which gave an average standard error of estimation of 20.6%. The minimum budget considered was $299,360, with an average standard error of estimation of 36.5%. The study indicates that a major source of error in the stream gaging records is lost or missing data that are the result of streamside equipment failure. If perfect equipment were available, the standard error in estimating instantaneous discharge under the current program and budget could be reduced to 18.6%. This can also be interpreted to mean that the streamflow data records have a standard error of this magnitude during times when the equipment is operating properly. (Author's abstract)
The development of a public optometry system in Mozambique: a Cost Benefit Analysis.
Thompson, Stephen; Naidoo, Kovin; Harris, Geoff; Bilotto, Luigi; Ferrão, Jorge; Loughman, James
2014-09-23
The economic burden of uncorrected refractive error (URE) is thought to be high in Mozambique, largely as a consequence of the lack of resources and systems to tackle this largely avoidable problem. The Mozambique Eyecare Project (MEP) has established the first optometry training and human resource deployment initiative to address the burden of URE in Lusophone Africa. The nature of the MEP programme provides the opportunity to determine, using Cost Benefit Analysis (CBA), whether investing in the establishment and delivery of a comprehensive system for optometry human resource development and public sector deployment is economically justifiable for Lusophone Africa. A CBA methodology was applied across the period 2009-2049. Costs associated with establishing and operating a school of optometry, and a programme to address uncorrected refractive error, were included. Benefits were calculated using a human capital approach to valuing sight. Disability weightings from the Global Burden of Disease study were applied. Costs were subtracted from benefits to provide the net societal benefit, which was discounted to provide the net present value using a 3% discount rate. Using the most recently published disability weightings, the potential exists, through the correction of URE in 24.3 million potentially economically productive persons, to achieve a net present value societal benefit of up to $1.1 billion by 2049, at a Benefit-Cost ratio of 14:1. When CBA assumptions are varied as part of the sensitivity analysis, the results suggest the societal benefit could lie in the range of $649 million to $9.6 billion by 2049. This study demonstrates that a programme designed to address the burden of refractive error in Mozambique is economically justifiable in terms of the increased productivity that would result due to its implementation.
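A minimal sketch of the discounting step described above, assuming the 3% rate mentioned in the abstract; the yearly net-benefit series is a made-up placeholder, not the MEP programme's actual cash flows.

def npv(net_benefits_by_year, rate=0.03):
    """Present value of a series of yearly net benefits (benefits minus costs)."""
    return sum(b / (1.0 + rate) ** t for t, b in enumerate(net_benefits_by_year))

# Placeholder series for 2009-2049 in million USD: early set-up costs, then productivity gains.
flows = [-5.0] * 5 + [40.0] * 36
print(f"Net present value at 3%: {npv(flows):.0f} million USD")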
Economic impact of medication error: a systematic review.
Walsh, Elaine K; Hansen, Christina Raae; Sahm, Laura J; Kearney, Patricia M; Doherty, Edel; Bradley, Colin P
2017-05-01
Medication error is a significant source of morbidity and mortality among patients. Clinical and cost-effectiveness evidence are required for the implementation of quality of care interventions. Reduction of error-related cost is a key potential benefit of interventions addressing medication error. The aim of this review was to describe and quantify the economic burden associated with medication error. PubMed, Cochrane, Embase, CINAHL, EconLit, ABI/INFORM, Business Source Complete were searched. Studies published 2004-2016 assessing the economic impact of medication error were included. Cost values were expressed in Euro 2015. A narrative synthesis was performed. A total of 4572 articles were identified from database searching, and 16 were included in the review. One study met all applicable quality criteria. Fifteen studies expressed economic impact in monetary terms. Mean cost per error per study ranged from €2.58 to €111 727.08. Healthcare costs were used to measure economic impact in 15 of the included studies with one study measuring litigation costs. Four studies included costs incurred in primary care with the remaining 12 measuring hospital costs. Five studies looked at general medication error in a general population with 11 studies reporting the economic impact of an individual type of medication error or error within a specific patient population. Considerable variability existed between studies in terms of financial cost, patients, settings and errors included. Many were of poor quality. Assessment of economic impact was conducted predominantly in the hospital setting with little assessment of primary care impact. Limited parameters were used to establish economic impact. Copyright © 2017 John Wiley & Sons, Ltd. Copyright © 2017 John Wiley & Sons, Ltd.
An Application of Linear Covariance Analysis to the Design of Responsive Near-Rendezvous Missions
2007-06-01
accurately before making large maneuvers. A fifth type of error is maneuver knowledge error (MKER). This error accounts for how well a spacecraft is able...utilized due in large part to the cost of designing and launching spacecraft, in a market where currently there are not many options for launching...is then ordered to fire its thrusters to increase its orbital altitude to 800 km. Before the maneuver the spacecraft is moving with some velocity, V
Economics of human performance and systems total ownership cost.
Onkham, Wilawan; Karwowski, Waldemar; Ahram, Tareq Z
2012-01-01
The financial costs of investing in people (training, acquisition, recruiting, and resolving human errors) have a significant impact on total ownership costs. These costs can also inflate budgets and delay schedules. Studying the economic assessment of human performance in the system acquisition process enhances the visibility of hidden cost drivers and supports informed program management decisions. This paper presents a literature review of human total ownership cost (HTOC) and of cost impacts on overall system performance. Economic value assessment models such as cost-benefit analysis, risk-cost trade-off analysis, expected value of utility function analysis (EV), the growth readiness matrix, the multi-attribute utility technique, and multi-regression models are introduced to reflect HTOC and human performance-technology trade-offs in dollar terms. A human total ownership regression model is introduced to address the measurement of the cost components that influence human performance. Results from this study will increase understanding of relevant cost drivers in the system acquisition process over the long term.
Cost-effectiveness of the streamflow-gaging program in Wyoming
Druse, S.A.; Wahl, K.L.
1988-01-01
This report documents the results of a cost-effectiveness study of the streamflow-gaging program in Wyoming. Regression analysis or hydrologic flow-routing techniques were considered for 24 combinations of stations from a 139-station network operated in 1984 to investigate suitability of techniques for simulating streamflow records. Only one station was determined to have sufficient accuracy in the regression analysis to consider discontinuance of the gage. The evaluation of the gaging-station network, which included the use of associated uncertainty in streamflow records, is limited to the nonwinter operation of the 47 stations operated by the Riverton Field Office of the U.S. Geological Survey. The current (1987) travel routes and measurement frequencies require a budget of $264,000 and result in an average standard error in streamflow records of 13.2%. Changes in routes and station visits using the same budget could optimally reduce the standard error by 1.6%. Budgets evaluated ranged from $235,000 to $400,000. A $235,000 budget increased the optimal average standard error per station from 11.6 to 15.5%, and a $400,000 budget could reduce it to 6.6%. For all budgets considered, lost record accounts for about 40% of the average standard error. (USGS)
Aziz, Muhammad Tahir; Ur-Rehman, Tofeeq; Qureshi, Sadia; Bukhari, Nadeem Irfan
Medication errors in chemotherapy are frequent and lead to patient morbidity and mortality, as well as increased rates of re-admission and length of stay, and considerable extra costs. Objective: This study investigated the proposition that computerised chemotherapy ordering reduces the incidence and severity of chemotherapy protocol errors. A computerised physician order entry system for chemotherapy orders (C-CO) with a clinical decision support system was developed in-house, including standardised chemotherapy protocol definitions, automation of pharmacy distribution, clinical checks, labeling and invoicing. A prospective study was then conducted comparing the C-CO with paper-based chemotherapy orders (P-CO) in a 30-bed chemotherapy bay of a tertiary hospital. Both C-CO and P-CO orders, including pharmacoeconomic analysis and the severity of medication errors, were checked and validated by a clinical pharmacist. A group analysis and field trial were also conducted to assess clarity, feasibility and decision making. The C-CO was very usable in terms of its clarity and feasibility. The incidence of medication errors was significantly lower in the C-CO compared with the P-CO (10/3765 [0.26%] versus 134/5514 [2.4%]). There was also a reduction in the dispensing time of chemotherapy protocols in the C-CO. Computerisation of chemotherapy ordering with a clinical decision support system resulted in a significant decrease in the occurrence and severity of medication errors, improvements in chemotherapy dispensing and administration times, and a reduction in chemotherapy cost.
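The reported counts lend themselves to a quick significance check; the sketch below applies a chi-squared test to the 2x2 table of erroneous versus error-free orders. The counts come from the abstract, but the choice of test is ours and may not match the authors' analysis.

from scipy.stats import chi2_contingency

table = [[10, 3765 - 10],     # C-CO: orders with errors, orders without errors
         [134, 5514 - 134]]   # P-CO: orders with errors, orders without errors

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-squared = {chi2:.1f}, p = {p_value:.1e}")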
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-16
... (truncated table) Collection burden revisions due to error: IC1 ``Ready to Move... Revisions of Estimates of Annual Costs to Respondents (new cost, old cost, cost reduction = new minus old): IC1 ``Ready to Move?'' $288,000, $720,000, -$432,000; ``Rights & Responsibilities'' $3,264,000, $8,160...
Development of a Methodology to Optimally Allocate Visual Inspection Time
1989-06-01
Model and then takes into account the costs of the errors. The purpose of the Alternative Model is to not make costly mistakes while meeting the... James Buck, and Virgil Anderson, AIIE Transactions, Volume 11, No. 4, December 1979. "Inspection of Sheet Materials - Model and Data", Colin G. Drury... worker error, the probability of inspector error, and the cost of system error. Paired comparisons of error phenomena from operational personnel are
Impact and quantification of the sources of error in DNA pooling designs.
Jawaid, A; Sham, P
2009-01-01
The analysis of genome wide variation offers the possibility of unravelling the genes involved in the pathogenesis of disease. Genome wide association studies are also particularly useful for identifying and validating targets for therapeutic intervention as well as for detecting markers for drug efficacy and side effects. The cost of such large-scale genetic association studies may be reduced substantially by the analysis of pooled DNA from multiple individuals. However, experimental errors inherent in pooling studies lead to a potential increase in the false positive rate and a loss in power compared to individual genotyping. Here we quantify various sources of experimental error using empirical data from typical pooling experiments and corresponding individual genotyping counts using two statistical methods. We provide analytical formulas for calculating these different errors in the absence of complete information, such as replicate pool formation, and for adjusting for the errors in the statistical analysis. We demonstrate that DNA pooling has the potential of estimating allele frequencies accurately, and adjusting the pooled allele frequency estimates for differential allelic amplification considerably improves accuracy. Estimates of the components of error show that differential allelic amplification is the most important contributor to the error variance in absolute allele frequency estimation, followed by allele frequency measurement and pool formation errors. Our results emphasise the importance of minimising experimental errors and obtaining correct error estimates in genetic association studies.
Uncharted territory: measuring costs of diagnostic errors outside the medical record.
Schwartz, Alan; Weiner, Saul J; Weaver, Frances; Yudkowsky, Rachel; Sharma, Gunjan; Binns-Calvey, Amy; Preyss, Ben; Jordan, Neil
2012-11-01
In a past study using unannounced standardised patients (USPs), substantial rates of diagnostic and treatment errors were documented among internists. Because the authors know the correct disposition of these encounters and obtained the physicians' notes, they can identify necessary treatment that was not provided and unnecessary treatment. They can also discern which errors can be identified exclusively from a review of the medical records. To estimate the avoidable direct costs incurred by physicians making errors in our previous study. In the study, USPs visited 111 internal medicine attending physicians. They presented variants of four previously validated cases that jointly manipulate the presence or absence of contextual and biomedical factors that could lead to errors in management if overlooked. For example, in a patient with worsening asthma symptoms, a complicating biomedical factor was the presence of reflux disease and a complicating contextual factor was inability to afford the currently prescribed inhaler. Costs of missed or unnecessary services were computed using Medicare cost-based reimbursement data. Fourteen practice locations, including two academic clinics, two community-based primary care networks with multiple sites, a core safety net provider, and three Veteran Administration government facilities. Contribution of errors to costs of care. Overall, errors in care resulted in predicted costs of approximately $174,000 across 399 visits, of which only $8745 was discernible from a review of the medical records alone (without knowledge of the correct diagnoses). The median cost of error per visit with an incorrect care plan differed by case and by presentation variant within case. Chart reviews alone underestimate costs of care because they typically reflect appropriate treatment decisions conditional on (potentially erroneous) diagnoses. Important information about patient context is often entirely missing from medical records. Experimental methods, including the use of USPs, reveal the substantial costs of these errors.
Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bokanowski, Olivier, E-mail: boka@math.jussieu.fr; Picarelli, Athena, E-mail: athena.picarelli@inria.fr; Zidani, Hasnaa, E-mail: hasnaa.zidani@ensta.fr
2015-02-15
This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachable analysis are included to illustrate our approach.
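For orientation only, and not the paper's exact statement: for a controlled diffusion with a running maximum of a cost function g, problems of this type are commonly reformulated with an auxiliary variable y tracking the maximum, leading to an HJB equation with an oblique derivative boundary condition of roughly the following form. The Hamiltonian H, the payoff function, and the choice of minimisation are schematic assumptions here.

v(t,x,y) = \inf_{u\in\mathcal{U}} \; \mathbb{E}\Big[\varphi\Big(X^{t,x,u}_T,\; \max\big(y,\ \max_{s\in[t,T]} g(X^{t,x,u}_s)\big)\Big)\Big],
\qquad
\begin{cases}
-\partial_t v + H\big(x, D_x v, D_x^2 v\big) = 0 & \text{in } \{(x,y):\, g(x) < y\},\\
-\partial_y v = 0 & \text{on } \{(x,y):\, g(x) = y\},\\
v(T,x,y) = \varphi(x,y). &
\end{cases}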
Cost-benefit analysis: newborn screening for inborn errors of metabolism in Lebanon.
Khneisser, I; Adib, S; Assaad, S; Megarbane, A; Karam, P
2015-12-01
Few countries in the Middle East-North Africa region have adopted national newborn screening for inborn errors of metabolism by tandem mass spectrometry (MS/MS). We aimed to evaluate the cost-benefit of newborn screening for such disorders in Lebanon, as a model for other developing countries in the region. Average costs of expected care for inborn errors of metabolism cases as a group, between ages 0 and 18, early and late diagnosed, were calculated from 2007 to 2013. The monetary value of early detection using MS/MS was compared with that of clinical "late detection", including cost of diagnosis and hospitalizations. During this period, 126000 newborns were screened. Incidence of detected cases was 1/1482, which can be explained by high consanguinity rates in Lebanon. A reduction by half of direct cost of care, reaching on average 31,631 USD per detected case was shown. This difference more than covers the expense of starting a newborn screening programme. Although this model does not take into consideration the indirect benefits of the better quality of life of those screened early, it can be argued that direct and indirect costs saved through early detection of these disorders are important enough to justify universal publicly-funded screening, especially in developing countries with high consanguinity rates, as shown through this data from Lebanon. © The Author(s) 2015.
Modal cost analysis for simple continua
NASA Technical Reports Server (NTRS)
Hu, A.; Skelton, R. E.; Yang, T. Y.
1988-01-01
The most popular finite element codes are based upon appealing theories of convergence of modal frequencies. For example, the popularity of cubic elements for beam-like structures is due to the rapid convergence of modal frequencies and stiffness properties. However, for those problems in which the primary consideration is the accuracy of response of the structure at specified locations, it is more important to obtain accuracy in the modal costs than in the modal frequencies. The modal cost represents the contribution of a mode in the norm of the response vector. This paper provides a complete modal cost analysis for simple continua such as beam-like structures. Upper bounds are developed for mode truncation errors in the model reduction process and modal cost analysis dictates which modes to retain in order to reduce the model for control design purposes.
Cost-utility analysis of a preventive home visit program for older adults in Germany.
Brettschneider, Christian; Luck, Tobias; Fleischer, Steffen; Roling, Gudrun; Beutner, Katrin; Luppa, Melanie; Behrens, Johann; Riedel-Heller, Steffi G; König, Hans-Helmut
2015-04-03
Most older adults want to live independently in a familiar environment instead of moving to a nursing home. Preventive home visits based on multidimensional geriatric assessment can be one strategy to support this preference and might additionally reduce health care costs, due to the avoidance of costly nursing home admissions. The purpose of this study was to analyse the cost-effectiveness of preventive home visits from a societal perspective in Germany. This study is part of a multi-centre, non-blinded, randomised controlled trial aiming at the reduction of nursing home admissions. Participants were older than 80 years and living at home. Up to three home visits were conducted to identify self-care deficits and risk factors, to present recommendations and to implement solutions. The control group received usual care. A cost-utility analysis using quality-adjusted life years (QALY) based on the EQ-5D was performed. Resource utilization was assessed by means of the interview version of a patient questionnaire. A cost-effectiveness acceptability curve controlled for prognostic variables was constructed and a sensitivity analysis to control for the influence of the mode of QALY calculation was performed. 278 individuals (intervention group: 133; control group: 145) were included in the analysis. During 18 months follow-up mean adjusted total cost (mean: +4,401 EUR; bootstrapped standard error: 3,019.61 EUR) and number of QALY (mean: 0.0061 QALY; bootstrapped standard error: 0.0388 QALY) were higher in the intervention group, but differences were not significant. For preventive home visits the probability of an incremental cost-effectiveness ratio <50,000 EUR per QALY was only 15%. The results were robust with respect to the mode of QALY calculation. The evaluated preventive home visits programme is unlikely to be cost-effective. Clinical Trials.gov Identifier: NCT00644826.
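To illustrate how the acceptability curve reported above is typically constructed, the sketch below computes the probability of cost-effectiveness from bootstrapped incremental costs and QALYs at several willingness-to-pay thresholds. The means and standard errors match the abstract, but the normal, independent distributions are simulated stand-ins for the trial's bootstrap samples, so the figures are illustrative only.

import numpy as np

rng = np.random.default_rng(0)
n_boot = 5000
d_cost = rng.normal(4401.0, 3019.61, n_boot)   # incremental cost in EUR (mean and SE from the abstract)
d_qaly = rng.normal(0.0061, 0.0388, n_boot)    # incremental QALYs (mean and SE from the abstract)

for wtp in (10_000, 50_000, 100_000):          # willingness-to-pay per QALY, EUR
    inb = wtp * d_qaly - d_cost                # incremental net benefit of the intervention
    print(f"WTP {wtp:>7} EUR/QALY: P(cost-effective) = {np.mean(inb > 0):.2f}")

Under these assumed distributions, the probability at 50,000 EUR per QALY comes out of the same order as the 15% reported in the abstract.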
Engine dynamic analysis with general nonlinear finite element codes
NASA Technical Reports Server (NTRS)
Adams, M. L.; Padovan, J.; Fertis, D. G.
1991-01-01
A general engine dynamic analysis as a standard design study computational tool is described for the prediction and understanding of complex engine dynamic behavior. Improved definition of engine dynamic response provides valuable information and insights leading to reduced maintenance and overhaul costs on existing engine configurations. Application of advanced engine dynamic simulation methods provides a considerable cost reduction in the development of new engine designs by eliminating some of the trial and error process done with engine hardware development.
A comparison between different error modeling of MEMS applied to GPS/INS integrated systems.
Quinchia, Alex G; Falco, Gianluca; Falletti, Emanuela; Dovis, Fabio; Ferrer, Carles
2013-07-24
Advances in the development of micro-electromechanical systems (MEMS) have made possible the fabrication of cheap and small dimension accelerometers and gyroscopes, which are being used in many applications where the global positioning system (GPS) and the inertial navigation system (INS) integration is carried out, i.e., identifying track defects, terrestrial and pedestrian navigation, unmanned aerial vehicles (UAVs), stabilization of many platforms, etc. Although these MEMS sensors are low-cost, they present different errors, which degrade the accuracy of the navigation systems in a short period of time. Therefore, a suitable modeling of these errors is necessary in order to minimize them and, consequently, improve the system performance. In this work, the techniques currently most used to analyze the stochastic errors that affect these sensors are shown and compared: we examine in detail the autocorrelation, the Allan variance (AV) and the power spectral density (PSD) techniques. Subsequently, an analysis and modeling of the inertial sensors, which combines autoregressive (AR) filters and wavelet de-noising, is also achieved. Since a low-cost INS (MEMS grade) presents error sources with short-term (high-frequency) and long-term (low-frequency) components, we introduce a method that compensates for these error terms by doing a complete analysis of Allan variance, wavelet de-noising and the selection of the level of decomposition for a suitable combination between these techniques. Eventually, in order to assess the stochastic models obtained with these techniques, the Extended Kalman Filter (EKF) of a loosely-coupled GPS/INS integration strategy is augmented with different states. Results show a comparison between the proposed method and the traditional sensor error models under GPS signal blockages using real data collected in urban roadways.
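As a companion to the stochastic-error techniques listed above, here is a minimal non-overlapping Allan-variance estimator of the kind used to characterise MEMS inertial-sensor noise; it is a generic textbook-style sketch, not the authors' implementation, and the sampling rate and white-noise test signal are assumptions.

import numpy as np

def allan_variance(x, m):
    """Non-overlapping Allan variance of rate samples x for cluster size m."""
    n_clusters = len(x) // m
    cluster_means = x[: n_clusters * m].reshape(n_clusters, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(cluster_means) ** 2)

dt = 0.01                                   # assumed 100 Hz sampling interval, s
gyro = 0.02 * np.random.randn(200_000)      # white-noise stand-in for a gyro rate signal, deg/s
for m in (1, 10, 100, 1000):
    print(f"tau = {m * dt:7.2f} s   AVAR = {allan_variance(gyro, m):.3e} (deg/s)^2")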
Alaska national hydrography dataset positional accuracy assessment study
Arundel, Samantha; Yamamoto, Kristina H.; Constance, Eric; Mantey, Kim; Vinyard-Houx, Jeremy
2013-01-01
Initial visual assessments showed a wide range in the quality of fit between features in the NHD and these new image sources. No statistical analysis had been performed to actually quantify accuracy. Determining absolute accuracy is cost prohibitive, because it requires collecting independent, well-defined test points, but quantitative analysis of relative positional error is feasible.
NASA Astrophysics Data System (ADS)
Moslehi, M.; de Barros, F.; Rajagopal, R.
2014-12-01
Hydrogeological models that represent flow and transport in subsurface domains are usually large-scale with excessive computational complexity and uncertain characteristics. Uncertainty quantification for predicting flow and transport in heterogeneous formations often entails utilizing a numerical Monte Carlo framework, which repeatedly simulates the model according to a random field representing hydrogeological characteristics of the field. The physical resolution (e.g. grid resolution associated with the physical space) for the simulation is customarily chosen based on recommendations in the literature, independent of the number of Monte Carlo realizations. This practice may lead to either excessive computational burden or inaccurate solutions. We propose an optimization-based methodology that considers the trade-off between the following conflicting objectives: time associated with computational costs, statistical convergence of the model predictions and physical errors corresponding to numerical grid resolution. In this research, we optimally allocate computational resources by developing a modeling framework for the overall error based on a joint statistical and numerical analysis and optimizing the error model subject to a given computational constraint. The derived expression for the overall error explicitly takes into account the joint dependence between the discretization error of the physical space and the statistical error associated with Monte Carlo realizations. The accuracy of the proposed framework is verified in this study by applying it to several computationally extensive examples. Having this framework at hand helps hydrogeologists achieve the optimum physical and statistical resolutions to minimize the error within a given computational budget. Moreover, the influence of the available computational resources and the geometric properties of the contaminant source zone on the optimum resolutions is investigated. We conclude that the computational cost associated with optimal allocation can be substantially reduced compared with prevalent recommendations in the literature.
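The trade-off described above can be made concrete with a toy error model: a discretisation term that shrinks with grid spacing and a Monte Carlo term that shrinks with the number of realisations, minimised under a fixed computational budget. All constants, exponents and the cost model below are illustrative assumptions, not values from the study.

import numpy as np

c_h, p = 1.0, 2.0                      # assumed discretisation error model: c_h * h**p
c_s = 1.0                              # assumed statistical error model: c_s / sqrt(N)
budget = 1.0e6                         # assumed total computational budget (arbitrary units)
cost_per_run = lambda h: 1.0 / h**3    # assumed cost of one realisation on grid spacing h

h = np.logspace(-2, 0, 200)            # candidate grid spacings
N = budget / cost_per_run(h)           # realisations affordable at each spacing
total_error = c_h * h**p + c_s / np.sqrt(N)

best = np.argmin(total_error)
print(f"optimal h = {h[best]:.3f}, N = {N[best]:.0f}, total error = {total_error[best]:.3f}")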
Managing human fallibility in critical aerospace situations
NASA Astrophysics Data System (ADS)
Tew, Larry
2014-11-01
Human fallibility is pervasive in the aerospace industry, with over 50% of errors attributed to human error. Consider the benefits to any organization if those errors were significantly reduced. Aerospace manufacturing involves high value, high profile systems with significant complexity and often repetitive build, assembly, and test operations. In spite of extensive analysis, planning, training, and detailed procedures, human factors can cause unexpected errors. Handling such errors involves extensive cause and corrective action analysis and invariably leads to schedule slips and cost growth. We will discuss success stories, including those associated with electro-optical systems, where very significant reductions in human fallibility errors were achieved after receiving adapted and specialized training. In the eyes of company and customer leadership, the steps used to achieve these results led to a major culture change in both the workforce and the supporting management organization. This approach has proven effective in other industries like medicine, firefighting, law enforcement, and aviation. The roadmap to success and the steps to minimize human error are known. They can be used by any organization willing to accept human fallibility and take a proactive approach to incorporate the steps needed to manage and minimize error.
Tersi, Luca; Barré, Arnaud; Fantozzi, Silvia; Stagni, Rita
2013-03-01
Model-based mono-planar and bi-planar 3D fluoroscopy methods can quantify intact joint kinematics with different performance/cost trade-offs. The aim of this study was to compare the performances of mono- and bi-planar setups to a marker-based gold-standard, during dynamic phantom knee acquisitions. Absolute pose errors for in-plane parameters were lower than 0.6 mm or 0.6° for both mono- and bi-planar setups. Mono-planar setups were critical in quantifying the out-of-plane translation (error < 6.5 mm), and bi-planar setups in quantifying the rotation about the bone's longitudinal axis (error < 1.3°). These errors propagated to joint angles and translations differently depending on the alignment of the anatomical axes and the fluoroscopic reference frames. Internal-external rotation was the least accurate angle both with mono- (error < 4.4°) and bi-planar (error < 1.7°) setups, due to bone longitudinal symmetries. Results highlighted that accuracy for mono-planar in-plane pose parameters is comparable to bi-planar, but with halved computational costs, halved segmentation time and halved ionizing radiation dose. Bi-planar analysis better compensated for the out-of-plane uncertainty that is differently propagated to relative kinematics depending on the setup. To take full advantage of bi-planar analysis, the motion task to be investigated should be designed to keep the joint inside the visible volume, which introduces constraints with respect to mono-planar analysis.
NASA Astrophysics Data System (ADS)
Wu, Guocan; Zheng, Xiaogu; Dan, Bo
2016-04-01
Shallow soil moisture observations are assimilated into the Common Land Model (CoLM) to estimate the soil moisture in different layers. The forecast error is inflated to improve the analysis state accuracy and the water balance constraint is adopted to reduce the water budget residual in the assimilation procedure. The experiment results illustrate that the adaptive forecast error inflation can reduce the analysis error, while the proper inflation layer can be selected based on the -2log-likelihood function of the innovation statistic. The water balance constraint substantially reduces the water budget residual, at a small cost in assimilation accuracy. The assimilation scheme can potentially be applied to assimilate remote sensing data.
Mobarakabadi, Sedigheh Sedigh; Ebrahimipour, Hosein; Najar, Ali Vafaie; Janghorban, Roksana; Azarkish, Fatemeh
2017-03-01
Patient safety is one of the main objectives of healthcare services; however, medical errors are a prevalent potential occurrence for patients in treatment systems. Medical errors lead to an increase in patient mortality and to challenges such as prolonged inpatient stays and increased cost. Controlling medical errors is very important, because these errors, besides being costly, threaten patient safety. To evaluate the attitudes of nurses and midwives toward the causes and rates of medical error reporting. It was a cross-sectional observational study. The study population was 140 midwives and nurses employed in Mashhad public hospitals. Data collection was done with the Goldstone (2001) revised questionnaire. SPSS 11.5 software was used for data analysis, applying descriptive and inferential statistics. Relative frequency distributions, means and standard deviations were calculated, and the results were presented as tables and charts. The chi-square test was used for inferential analysis of the data. Most of the midwives and nurses (39.4%) were in the age range of 25 to 34 years and the lowest percentage (2.2%) were in the age range of 55-59 years. The highest average number of medical errors was related to employees with three to four years of work experience, while the lowest was related to those with one to two years of work experience. The highest average number of medical errors occurred during the evening shift, while the lowest occurred during the night shift. Three main causes of medical errors were identified: illegible physician prescription orders, similarity of names of different drugs, and nurse fatigue. The most important causes of medical errors from the viewpoint of nurses and midwives are illegible physician orders, drug name similarity with other drugs, nurse fatigue, and damaged labels or packaging of the drug, respectively. Head nurse feedback, peer feedback, and fear of punishment or job loss were considered reasons for under-reporting of medical errors. This research demonstrates the need for greater attention to be paid to the causes of medical errors.
Secondary analysis of national survey datasets.
Boo, Sunjoo; Froelicher, Erika Sivarajan
2013-06-01
This paper describes the methodological issues associated with secondary analysis of large national survey datasets. Issues about survey sampling, data collection, and non-response and missing data in terms of methodological validity and reliability are discussed. Although reanalyzing large national survey datasets is an expedient and cost-efficient way of producing nursing knowledge, successful investigations require a methodological consideration of the intrinsic limitations of secondary survey analysis. Nursing researchers using existing national survey datasets should understand potential sources of error associated with survey sampling, data collection, and non-response and missing data. Although it is impossible to eliminate all potential errors, researchers using existing national survey datasets must be aware of the possible influence of errors on the results of the analyses. © 2012 The Authors. Japan Journal of Nursing Science © 2012 Japan Academy of Nursing Science.
[Improving blood safety: errors management in transfusion medicine].
Bujandrić, Nevenka; Grujić, Jasmina; Krga-Milanović, Mirjana
2014-01-01
The concept of blood safety includes the entire transfusion chain, starting with the collection of blood from the blood donor and ending with blood transfusion to the patient. The concept involves a quality management system for the systematic monitoring of adverse reactions and incidents regarding the blood donor or patient. Monitoring of near-miss errors shows the critical points in the working process and increases transfusion safety. The aim of the study was to present the analysis results of adverse and unexpected events in transfusion practice with a potential risk to the health of blood donors and patients. This one-year retrospective study was based on the collection, analysis and interpretation of written reports on medical errors in the Blood Transfusion Institute of Vojvodina. Errors were distributed according to the type, frequency and part of the working process where they occurred. Possible causes and corrective actions were described for each error. The study showed that there were no errors with health consequences for the blood donor/patient. Errors with potentially damaging consequences for patients were detected throughout the entire transfusion chain. Most of the errors were identified in the preanalytical phase. The human factor was responsible for the largest number of errors. The error reporting system has an important role in error management and the reduction of transfusion-related risk of adverse events and incidents. The ongoing analysis reveals the strengths and weaknesses of the entire process and indicates the necessary changes. A large percentage of errors in transfusion medicine can be avoided, and prevention is cost-effective, systematic and applicable.
Benjamin, David M; Pendrak, Robert F
2003-07-01
Clinical pharmacologists are all dedicated to improving the use of medications and decreasing medication errors and adverse drug reactions. However, quality improvement requires that some significant parameters of quality be categorized, measured, and tracked to provide benchmarks to which future data (performance) can be compared. One of the best ways to accumulate data on medication errors and adverse drug reactions is to look at medical malpractice data compiled by the insurance industry. Using data from PHICO insurance company, PHICO's Closed Claims Data, and PHICO's Event Reporting Trending System (PERTS), this article examines the significance and trends of the claims and events reported between 1996 and 1998. Those who misread history are doomed to repeat the mistakes of the past. From a quality improvement perspective, the categorization of the claims and events is useful for reengineering integrated medication delivery, particularly in a hospital setting, and for redesigning drug administration protocols on low therapeutic index medications and "high-risk" drugs. Demonstrable evidence of quality improvement is being required by state laws and by accreditation agencies. The state of Florida requires that quality improvement data be posted quarterly on the Web sites of the health care facilities. Other states have followed suit. The insurance industry is concerned with costs, and medication errors cost money. Even excluding costs of litigation, an adverse drug reaction may cost up to $2500 in hospital resources, and a preventable medication error may cost almost $4700. To monitor costs and assess risk, insurance companies want to know what errors are made and where the system has broken down, permitting the error to occur. Recording and evaluating reliable data on adverse drug events is the first step in improving the quality of pharmacotherapy and increasing patient safety. Cost savings and quality improvement evolve on parallel paths. The PHICO data provide an excellent opportunity to review information that typically would not be in the public domain. The events captured by PHICO are similar to the errors and "high-risk" drugs described in the literature, the U.S. Pharmacopeia's MedMARx Reporting System, and the Sentinel Event reporting system maintained by the Joint Commission for the Accreditation of Healthcare Organizations. The information in this report serves to alert clinicians to the possibility of adverse events when treating patients with the reported drugs, thus allowing for greater care in their use and closer monitoring. Moreover, when using high-risk drugs, patients should be well informed of known risks, dosage should be titrated slowly, and therapeutic drug monitoring and laboratory monitoring should be employed to optimize therapy and minimize adverse effects.
Ground support system methodology and architecture
NASA Technical Reports Server (NTRS)
Schoen, P. D.
1991-01-01
A synergistic approach to systems test and support is explored. A building block architecture provides transportability of data, procedures, and knowledge. The synergistic approach also lowers cost and risk for life cycle of a program. The determination of design errors at the earliest phase reduces cost of vehicle ownership. Distributed scaleable architecture is based on industry standards maximizing transparency and maintainability. Autonomous control structure provides for distributed and segmented systems. Control of interfaces maximizes compatibility and reuse, reducing long term program cost. Intelligent data management architecture also reduces analysis time and cost (automation).
NASA Astrophysics Data System (ADS)
Almalaq, Yasser; Matin, Mohammad A.
2014-09-01
The broadband passive optical network (BPON) can support high-speed data, voice, and video services for home and small-business customers. In this work, the performance of a bi-directional BPON is analyzed for both downstream and upstream traffic with the help of an erbium-doped fiber amplifier (EDFA). A key advantage of BPON is reduced cost: because BPON uses a passive splitter, the cost of maintenance between the provider and the customer side is kept low. In the proposed research, the BPON has been tested with a bit error rate (BER) analyzer, which reports the maximum Q factor, the minimum bit error rate, and the eye height.
Claims, errors, and compensation payments in medical malpractice litigation.
Studdert, David M; Mello, Michelle M; Gawande, Atul A; Gandhi, Tejal K; Kachalia, Allen; Yoon, Catherine; Puopolo, Ann Louise; Brennan, Troyen A
2006-05-11
In the current debate over tort reform, critics of the medical malpractice system charge that frivolous litigation--claims that lack evidence of injury, substandard care, or both--is common and costly. Trained physicians reviewed a random sample of 1452 closed malpractice claims from five liability insurers to determine whether a medical injury had occurred and, if so, whether it was due to medical error. We analyzed the prevalence, characteristics, litigation outcomes, and costs of claims that lacked evidence of error. For 3 percent of the claims, there were no verifiable medical injuries, and 37 percent did not involve errors. Most of the claims that were not associated with errors (370 of 515 [72 percent]) or injuries (31 of 37 [84 percent]) did not result in compensation; most that involved injuries due to error did (653 of 889 [73 percent]). Payment of claims not involving errors occurred less frequently than did the converse form of inaccuracy--nonpayment of claims associated with errors. When claims not involving errors were compensated, payments were significantly lower on average than were payments for claims involving errors (313,205 dollars vs. 521,560 dollars, P=0.004). Overall, claims not involving errors accounted for 13 to 16 percent of the system's total monetary costs. For every dollar spent on compensation, 54 cents went to administrative expenses (including those involving lawyers, experts, and courts). Claims involving errors accounted for 78 percent of total administrative costs. Claims that lack evidence of error are not uncommon, but most are denied compensation. The vast majority of expenditures go toward litigation over errors and payment of them. The overhead costs of malpractice litigation are exorbitant. Copyright 2006 Massachusetts Medical Society.
A measurement system for large, complex software programs
NASA Technical Reports Server (NTRS)
Rone, Kyle Y.; Olson, Kitty M.; Davis, Nathan E.
1994-01-01
This paper describes measurement systems required to forecast, measure, and control activities for large, complex software development and support programs. Initial software cost and quality analysis provides the foundation for meaningful management decisions as a project evolves. In modeling the cost and quality of software systems, the relationship between the functionality, quality, cost, and schedule of the product must be considered. This explicit relationship is dictated by the criticality of the software being developed. This balance between cost and quality is a viable software engineering trade-off throughout the life cycle. Therefore, the ability to accurately estimate the cost and quality of software systems is essential to providing reliable software on time and within budget. Software cost models relate the product error rate to the percent of the project labor that is required for independent verification and validation. The criticality of the software determines which cost model is used to estimate the labor required to develop the software. Software quality models yield an expected error discovery rate based on the software size, criticality, software development environment, and the level of competence of the project and developers with respect to the processes being employed.
Novel parametric reduced order model for aeroengine blade dynamics
NASA Astrophysics Data System (ADS)
Yuan, Jie; Allegri, Giuliano; Scarpa, Fabrizio; Rajasekaran, Ramesh; Patsias, Sophoclis
2015-10-01
The work introduces a novel reduced order model (ROM) technique to describe the dynamic behavior of turbofan aeroengine blades. We introduce an equivalent 3D frame model to describe the coupled flexural/torsional mode shapes, with their relevant natural frequencies and associated modal masses. The frame configurations are identified through a structural identification approach based on a simulated annealing algorithm with stochastic tunneling. The cost functions are constituted by linear combinations of relative errors associated to the resonance frequencies, the individual modal assurance criteria (MAC), and on either overall static or modal masses. When static masses are considered the optimized 3D frame can represent the blade dynamic behavior with an 8% error on the MAC, a 1% error on the associated modal frequencies and a 1% error on the overall static mass. When using modal masses in the cost function the performance of the ROM is similar, but the overall error increases to 7%. The approach proposed in this paper is considerably more accurate than state-of-the-art blade ROMs based on traditional Timoshenko beams, and provides excellent accuracy at reduced computational time when compared against high fidelity FE models. A sensitivity analysis shows that the proposed model can adequately predict the global trends of the variations of the natural frequencies when lumped masses are used for mistuning analysis. The proposed ROM also follows extremely closely the sensitivity of the high fidelity finite element models when the material parameters are used in the sensitivity.
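To make the identification objective concrete, the fragment below shows one plausible form of such a cost function: a weighted sum of relative natural-frequency errors, modal assurance criterion (MAC) misfits and a relative mass error. The weights and the example frequency, MAC and mass values are placeholders, and the simulated-annealing driver is omitted.

import numpy as np

def identification_cost(f_rom, f_ref, mac_diag, m_rom, m_ref, w=(1.0, 1.0, 1.0)):
    """Weighted misfit between a reduced-order frame model and a reference FE model."""
    freq_err = np.mean(np.abs(f_rom - f_ref) / f_ref)   # relative natural-frequency errors
    mode_err = np.mean(1.0 - mac_diag)                  # misfit of the individual MAC values
    mass_err = abs(m_rom - m_ref) / m_ref               # relative (static or modal) mass error
    return w[0] * freq_err + w[1] * mode_err + w[2] * mass_err

f_ref = np.array([120.0, 310.0, 545.0])    # placeholder FE natural frequencies, Hz
f_rom = np.array([121.1, 306.9, 551.0])    # placeholder frame-model frequencies, Hz
mac = np.array([0.95, 0.92, 0.90])         # placeholder diagonal MAC values
print(identification_cost(f_rom, f_ref, mac, m_rom=4.05, m_ref=4.00))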
Bayesian analysis of input uncertainty in hydrological modeling: 2. Application
NASA Astrophysics Data System (ADS)
Kavetski, Dmitri; Kuczera, George; Franks, Stewart W.
2006-03-01
The Bayesian total error analysis (BATEA) methodology directly addresses both input and output errors in hydrological modeling, requiring the modeler to make explicit, rather than implicit, assumptions about the likely extent of data uncertainty. This study considers a BATEA assessment of two North American catchments: (1) French Broad River and (2) Potomac basins. It assesses the performance of the conceptual Variable Infiltration Capacity (VIC) model with and without accounting for input (precipitation) uncertainty. The results show the considerable effects of precipitation errors on the predicted hydrographs (especially the prediction limits) and on the calibrated parameters. In addition, the performance of BATEA in the presence of severe model errors is analyzed. While BATEA allows a very direct treatment of input uncertainty and yields some limited insight into model errors, it requires the specification of valid error models, which are currently poorly understood and require further work. Moreover, it leads to computationally challenging highly dimensional problems. For some types of models, including the VIC implemented using robust numerical methods, the computational cost of BATEA can be reduced using Newton-type methods.
NASA Astrophysics Data System (ADS)
Allen, J. Icarus; Holt, Jason T.; Blackford, Jerry; Proctor, Roger
2007-12-01
Marine systems models are becoming increasingly complex and sophisticated, but far too little attention has been paid to model errors and the extent to which model outputs actually relate to ecosystem processes. Here we describe the application of summary error statistics to a complex 3D model (POLCOMS-ERSEM) run for the period 1988-1989 in the southern North Sea utilising information from the North Sea Project, which collected a wealth of observational data. We demonstrate that to understand model data misfit and the mechanisms creating errors, we need to use a hierarchy of techniques, including simple correlations, model bias, model efficiency, binary discriminator analysis and the distribution of model errors to assess model errors spatially and temporally. We also demonstrate that a linear cost function is an inappropriate measure of misfit. This analysis indicates that the model has some skill for all variables analysed. A summary plot of model performance indicates that model performance deteriorates as we move through the ecosystem from the physics, to the nutrients and plankton.
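The summary statistics named above are standard; a generic version is sketched below (bias, a Nash-Sutcliffe-style model efficiency, correlation, and a linear cost function taken here as the mean absolute error scaled by the standard deviation of the observations). The exact definitions used for POLCOMS-ERSEM may differ, and the sample series are placeholders.

import numpy as np

def summary_error_statistics(model, obs):
    model, obs = np.asarray(model, dtype=float), np.asarray(obs, dtype=float)
    bias = np.mean(model - obs)                                                   # mean model-minus-observation offset
    efficiency = 1.0 - np.sum((model - obs) ** 2) / np.sum((obs - np.mean(obs)) ** 2)  # 1 = perfect, <0 = worse than obs mean
    correlation = np.corrcoef(model, obs)[0, 1]
    linear_cost = np.mean(np.abs(model - obs)) / np.std(obs)                      # one common "linear cost function"
    return {"bias": bias, "efficiency": efficiency, "r": correlation, "cost": linear_cost}

obs = [1.0, 2.5, 3.0, 4.2, 5.1]       # placeholder observed values (e.g. nitrate)
model = [1.2, 2.2, 3.5, 3.9, 5.6]     # placeholder model output at the same points
print(summary_error_statistics(model, obs))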
Zhou, Mu; Tian, Zengshan; Xu, Kunjie; Yu, Xiang; Wu, Haibo
2014-01-01
This paper studies the statistical errors for the fingerprint-based RADAR neighbor matching localization with the linearly calibrated reference points (RPs) in logarithmic received signal strength (RSS) varying Wi-Fi environment. To the best of our knowledge, little comprehensive analysis work has appeared on the error performance of neighbor matching localization with respect to the deployment of RPs. However, in order to achieve the efficient and reliable location-based services (LBSs) as well as the ubiquitous context-awareness in Wi-Fi environment, much attention has to be paid to the highly accurate and cost-efficient localization systems. To this end, the statistical errors by the widely used neighbor matching localization are significantly discussed in this paper to examine the inherent mathematical relations between the localization errors and the locations of RPs by using a basic linear logarithmic strength varying model. Furthermore, based on the mathematical demonstrations and some testing results, the closed-form solutions to the statistical errors by RADAR neighbor matching localization can be an effective tool to explore alternative deployment of fingerprint-based neighbor matching localization systems in the future.
Smart Annotation of Cyclic Data Using Hierarchical Hidden Markov Models.
Martindale, Christine F; Hoenig, Florian; Strohrmann, Christina; Eskofier, Bjoern M
2017-10-13
Cyclic signals are an intrinsic part of daily life, such as human motion and heart activity. The detailed analysis of them is important for clinical applications such as pathological gait analysis and for sports applications such as performance analysis. Labeled training data for algorithms that analyze these cyclic data come at a high annotation cost due to only limited annotations available under laboratory conditions or requiring manual segmentation of the data under less restricted conditions. This paper presents a smart annotation method that reduces this cost of labeling for sensor-based data, which is applicable to data collected outside of strict laboratory conditions. The method uses semi-supervised learning of sections of cyclic data with a known cycle number. A hierarchical hidden Markov model (hHMM) is used, achieving a mean absolute error of 0.041 ± 0.020 s relative to a manually-annotated reference. The resulting model was also used to simultaneously segment and classify continuous, 'in the wild' data, demonstrating the applicability of using hHMM, trained on limited data sections, to label a complete dataset. This technique achieved comparable results to its fully-supervised equivalent. Our semi-supervised method has the significant advantage of reduced annotation cost. Furthermore, it reduces the opportunity for human error in the labeling process normally required for training of segmentation algorithms. It also lowers the annotation cost of training a model capable of continuous monitoring of cycle characteristics such as those employed to analyze the progress of movement disorders or analysis of running technique.
Benhamou, Dan; Piriou, Vincent; De Vaumas, Cyrille; Albaladejo, Pierre; Malinovsky, Jean-Marc; Doz, Marianne; Lafuma, Antoine; Bouaziz, Hervé
2017-04-01
Patient safety is improved by the use of labelled, ready-to-use, pre-filled syringes (PFS) when compared to conventional methods of syringe preparation (CMP) of the same product from an ampoule. However, the PFS presentation costs more than the CMP presentation. To estimate the budget impact for French hospitals of switching from atropine in ampoules to atropine PFS for anaesthesia care. A model was constructed to simulate the financial consequences of the use of atropine PFS in operating theatres, taking into account wastage and medication errors. The model tested different scenarios and a sensitivity analysis was performed. In a reference scenario, the systematic use of atropine PFS rather than atropine CMP yielded a net one-year budget saving of €5,255,304. Medication errors outweighed other cost factors relating to the use of atropine CMP (€9,425,448). Avoidance of wastage in the case of atropine CMP (prepared and unused) was a major source of savings (€1,167,323). Significant savings were made by means of other scenarios examined. The sensitivity analysis suggests that the results obtained are robust and stable for a range of parameter estimates and assumptions. The financial model was based on data obtained from the literature and expert opinions. The budget impact analysis shows that even though atropine PFS is more expensive than atropine CMP, its use would lead to significant cost savings. Savings would mainly be due to fewer medication errors and their associated consequences and the absence of wastage when atropine syringes are prepared in advance. Copyright © 2016 Société française d'anesthésie et de réanimation (Sfar). Published by Elsevier Masson SAS. All rights reserved.
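The headline figures quoted above fit a simple budget-impact identity: the net saving equals the avoided error and wastage costs minus the extra acquisition cost of the pre-filled presentation. The first two figures below come from the abstract; the extra PFS acquisition cost is back-calculated from them and is therefore an assumption, and the real model includes further cost components.

avoided_error_cost = 9_425_448     # EUR, medication-error costs under CMP (from the abstract)
avoided_wastage_cost = 1_167_323   # EUR, prepared-but-unused CMP syringes (from the abstract)
extra_pfs_cost = 5_337_467         # EUR, assumed so that the reported net saving is recovered

net_saving = avoided_error_cost + avoided_wastage_cost - extra_pfs_cost
print(f"net one-year budget saving: {net_saving:,} EUR")   # about 5,255,304 EUR, as reported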
ERIC Educational Resources Information Center
Smith, Rachel A.; Levine, Timothy R.; Lachlan, Kenneth A.; Fediuk, Thomas A.
2002-01-01
Notes that the availability of statistical software packages has led to a sharp increase in use of complex research designs and complex statistical analyses in communication research. Reports a series of Monte Carlo simulations which demonstrate that this complexity may come at a heavier cost than many communication researchers realize. Warns…
Estimating the Imputed Social Cost of Errors of Measurement.
1983-10-01
…social cost of an error of measurement in the score on a unidimensional test, an asymptotic method, based on item response theory, is developed for… (Frederic M. Lord, report RR-83-33-ONR; this research was sponsored in part by the Personnel and Training Research Programs.)
NASA Technical Reports Server (NTRS)
Rango, A.
1981-01-01
Both LANDSAT and NOAA satellite data were used in improving snowmelt runoff forecasts. When the satellite snow cover data were tested in both empirical seasonal runoff estimation and short term modeling approaches, a definite potential for reducing forecast error was evident. A cost benefit analysis run in conjunction with the snow mapping indicated a $36.5 million annual benefit accruing from a one percent improvement in forecast accuracy using the snow cover data for the western United States. The annual cost of employing the system would be $505,000. The snow mapping has proven that satellite snow cover data can be used to reduce snowmelt runoff forecast error in a cost effective manner once all operational satellite data are available within 72 hours after acquisition. Executive summaries of the individual snow mapping projects are presented.
Accuracy analysis and design of A3 parallel spindle head
NASA Astrophysics Data System (ADS)
Ni, Yanbing; Zhang, Biao; Sun, Yupeng; Zhang, Yuan
2016-03-01
As functional components of machine tools, parallel mechanisms are widely used in the high-efficiency machining of aviation components, and accuracy is one of their critical technical indexes. Many researchers have studied the accuracy of parallel mechanisms, but further effort is needed to control errors and improve accuracy at the design and manufacturing stage. This paper investigates the error model, sensitivity analysis, and tolerance allocation for the accuracy design of a 3-DOF parallel spindle head (A3 head). Based on the inverse kinematic analysis, the error model of the A3 head is established using first-order perturbation theory and the vector chain method. According to the mapping property of the motion and constraint Jacobian matrices, the compensatable and uncompensatable error sources that affect the accuracy of the end-effector are separated. Sensitivity analysis is then performed on the uncompensatable error sources: a sensitivity probabilistic model is established and a global sensitivity index is proposed to analyze the influence of the uncompensatable error sources on end-effector accuracy. The results show that orientation error sources have the larger effect on end-effector accuracy. Building on the sensitivity analysis, the tolerance design is formulated as a nonlinearly constrained optimization problem with minimum manufacturing cost as the objective, and a genetic algorithm is used to determine the tolerance allocated to each component. From the tolerance allocation results, the tolerance ranges of ten kinds of geometric error sources are obtained. These results can provide fundamental guidelines for component manufacturing and assembly of this kind of parallel mechanism.
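A generic sketch of cost-minimizing tolerance allocation under a sensitivity-weighted accuracy constraint, of the kind described in the abstract above. The sensitivities, cost model, and accuracy limit are hypothetical, and SciPy's differential evolution is used as a stand-in for the genetic algorithm.

```python
# Sketch: allocate geometric tolerances t_i to minimize manufacturing cost subject to a
# sensitivity-weighted accuracy constraint. Sensitivities, cost coefficients, and the
# accuracy limit are hypothetical; SciPy's differential evolution stands in for a GA.
import numpy as np
from scipy.optimize import differential_evolution

sensitivity = np.array([0.8, 1.5, 0.4, 2.0, 1.1])   # error amplification per source
cost_coeff = np.array([5.0, 8.0, 3.0, 10.0, 6.0])   # cost scale per source
accuracy_limit = 0.05                                # allowed end-effector error (mm)

def manufacturing_cost(t):
    # Tighter tolerances are assumed to cost more (inverse cost model).
    return float(np.sum(cost_coeff / t))

def end_effector_error(t):
    # Worst-case linear accumulation of source errors through the sensitivities.
    return float(np.sum(sensitivity * t))

def penalized_cost(t):
    violation = max(0.0, end_effector_error(t) - accuracy_limit)
    return manufacturing_cost(t) + 1e6 * violation   # large penalty for infeasibility

bounds = [(0.001, 0.05)] * len(sensitivity)          # tolerance ranges in mm
result = differential_evolution(penalized_cost, bounds, seed=0, tol=1e-8)

print("allocated tolerances (mm):", np.round(result.x, 4))
print("end-effector error (mm):  ", round(end_effector_error(result.x), 4))
print("manufacturing cost (a.u.):", round(manufacturing_cost(result.x), 1))
```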
Low-cost FM oscillator for capacitance type of blade tip clearance measurement system
NASA Technical Reports Server (NTRS)
Barranger, John P.
1987-01-01
The frequency-modulated (FM) oscillator described is part of a blade tip clearance measurement system that meets the needs of a wide class of fans, compressors, and turbines. As a result of advancements in the technology of ultra-high-frequency operational amplifiers, the FM oscillator requires only a single low-cost integrated circuit. Its carrier frequency is 42.8 MHz when it is used with an integrated probe and connecting cable assembly consisting of a 0.81 cm diameter engine-mounted capacitance probe and a 61 cm long hermetically sealed coaxial cable. A complete circuit analysis is given, including amplifier negative resistance characteristics. An error analysis of environmentally induced effects is also derived, and an error-correcting technique is proposed. The oscillator can be calibrated in the static mode and has a negative peak frequency deviation of 400 kHz for a rotor blade thickness of 1.2 mm. High-temperature performance tests of the probe and 13 cm of the adjacent cable show good accuracy up to 600 C, the maximum permissible seal temperature. The major source of error is the residual FM oscillator noise, which produces a clearance error of + or - 10 microns at a clearance of 0.5 mm. The oscillator electronics accommodates the high rotor speeds associated with small engines, the signals from which may have frequency components as high as 1 MHz.
Integrating Solar PV in Utility System Operations: Analytical Framework and Arizona Case Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Jing; Botterud, Audun; Mills, Andrew
2015-06-01
A systematic framework is proposed to estimate the impact on operating costs due to uncertainty and variability in renewable resources. The framework quantifies the integration costs associated with subhourly variability and uncertainty as well as day-ahead forecasting errors in solar PV (photovoltaics) power. A case study illustrates how changes in system operations may affect these costs for a utility in the southwestern United States (Arizona Public Service Company). We conduct an extensive sensitivity analysis under different assumptions about balancing reserves, system flexibility, fuel prices, and forecasting errors. We find that high solar PV penetrations may lead to operational challenges, particularly during low-load and high solar periods. Increased system flexibility is essential for minimizing integration costs and maintaining reliability. In a set of sensitivity cases where such flexibility is provided, in part, by flexible operations of nuclear power plants, the estimated integration costs vary between $1.0 and $4.4/MWh-PV for a PV penetration level of 17%. The integration costs are primarily due to higher needs for hour-ahead balancing reserves to address the increased sub-hourly variability and uncertainty in the PV resource. (C) 2015 Elsevier Ltd. All rights reserved.
Integrating automated structured analysis and design with Ada programming support environments
NASA Technical Reports Server (NTRS)
Hecht, Alan; Simmons, Andy
1986-01-01
Ada Programming Support Environments (APSE) include many powerful tools that address the implementation of Ada code, but these tools do not address the entire software development process. Structured analysis is a methodology that addresses the creation of complete and accurate system specifications. Structured design takes a specification and derives a plan to decompose the system into subcomponents, and provides heuristics to optimize the software design to minimize errors and maintenance; it can also produce reusable modules. Studies have shown that most software errors result from poor system specifications, and that these errors become more expensive to fix as development continues. Structured analysis and design help to uncover errors in the early stages of development. The APSE tools help to ensure that the code produced is correct and aid in finding obscure coding errors, but they cannot detect errors in specifications or poor designs. TEAMWORK, an automated system for structured analysis and design that can be integrated with an APSE to support software systems development from specification through implementation, is described. These tools complement each other to help developers improve quality and productivity and to reduce development and maintenance costs. Complete system documentation and reusable code also result from the use of these tools. Integrating an APSE with automated tools for structured analysis and design provides capabilities and advantages beyond those realized with any of these systems used by themselves.
Gonser, Phillipp; Fuchsberger, Thomas; Matern, Ulrich
2017-08-01
The use of active medical devices in clinical routine should be as safe and efficient as possible. Usability tests (UTs) help improve these aspects of medical devices during their development, but UTs can also be useful to hospitals after a product has been launched. The present pilot study examines the costs and possible benefits of UTs for hospitals before new medical devices are bought for theatre. Two active medical devices of different complexity were tested in a standardized UT, and a cost-benefit analysis was carried out under the assumption that a different device, bought at the same price but with higher usability, could increase the efficiency of task solving and thereby save valuable theatre time. The cost of the UT amounted to €19,400. Hospitals could benefit from UTs before buying new devices for theatre by reducing time-consuming operator errors and thereby increasing productivity and patient safety. The possible benefits ranged from €23,300 to €1,570,000 (median = €797,000). Not only could hospitals benefit economically from investing in a UT before deciding to buy a medical device; patients in particular would profit from higher usability through fewer operator errors and increased safety and performance of use.
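A minimal sketch of the cost-benefit logic in the abstract above: theatre time saved by a more usable device is valued against the cost of running the usability test. All inputs other than the €19,400 UT cost are hypothetical placeholders.

```python
# Sketch of the usability-test (UT) cost-benefit logic: theatre time saved by a more
# usable device is valued against the cost of the UT. Inputs other than the UT cost
# reported in the abstract are hypothetical.
ut_cost = 19_400                 # cost of the usability test (EUR), from the abstract
theatre_minute_cost = 20         # value of one minute of theatre time (EUR), assumed
minutes_saved_per_use = 1.5      # task-time reduction with the more usable device, assumed
uses_per_year = 1_200            # procedures per year using the device, assumed
device_lifetime_years = 8        # assumed service life

benefit = minutes_saved_per_use * uses_per_year * device_lifetime_years * theatre_minute_cost
print(f"UT cost: €{ut_cost:,}  potential benefit: €{benefit:,.0f}  net: €{benefit - ut_cost:,.0f}")
```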
Design principles in telescope development: invariance, innocence, and the costs
NASA Astrophysics Data System (ADS)
Steinbach, Manfred
1997-03-01
Instrument design is, for the most part, a battle against errors and costs. Passive methods of error damping are in many cases effective and inexpensive. This paper shows examples of error minimization in our design of telescopes, instrumentation and evaluation instruments.
The cost of adherence mismeasurement in serious mental illness: a claims-based analysis.
Shafrin, Jason; Forma, Felicia; Scherer, Ethan; Hatch, Ainslie; Vytlacil, Edward; Lakdawalla, Darius
2017-05-01
To quantify how adherence mismeasurement affects the estimated impact of adherence on inpatient costs among patients with serious mental illness (SMI). Proportion of days covered (PDC) is a common claims-based measure of medication adherence. Because PDC does not measure medication ingestion, however, it may inaccurately measure adherence. We derived a formula to correct the bias that occurs in adherence-utilization studies resulting from errors in claims-based measures of adherence. We conducted a literature review to identify the correlation between gold-standard and claims-based adherence measures. We derived a bias-correction methodology to address claims-based medication adherence measurement error. We then applied this methodology to a case study of patients with SMI who initiated atypical antipsychotics in 2 large claims databases. Our literature review identified 6 studies of interest. The 4 most relevant ones measured correlations between 0.38 and 0.91. Our preferred estimate implies that the effect of adherence on inpatient spending estimated from claims data would understate the true effect by a factor of 5.3, if there were no other sources of bias. Although our procedure corrects for measurement error, such error also may amplify or mitigate other potential biases. For instance, if adherent patients are healthier than nonadherent ones, measurement error makes the resulting bias worse. On the other hand, if adherent patients are sicker, measurement error mitigates the other bias. Measurement error due to claims-based adherence measures is worth addressing, alongside other more widely emphasized sources of bias in inference.
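The abstract above does not give the authors' correction formula, so the sketch below shows the standard errors-in-variables (attenuation) correction under classical measurement-error assumptions: the slope estimated with a noisy proxy is attenuated by roughly the squared proxy-truth correlation, and dividing by that reliability undoes the bias. With a correlation near 0.43 the correction factor is about 5.3, consistent with the abstract, but this textbook formula may differ from the authors' exact derivation.

```python
# Sketch of a classical measurement-error (attenuation) correction. Under the textbook
# errors-in-variables model, the regression slope estimated with a noisy adherence proxy
# (e.g. PDC) is attenuated by rho**2, the squared correlation between proxy and true
# adherence; dividing by rho**2 undoes that bias. May differ from the authors' derivation.
def corrected_effect(observed_effect, rho):
    """Rescale an effect estimated with a noisy proxy whose correlation with truth is rho."""
    reliability = rho ** 2          # fraction of proxy variance that is signal
    return observed_effect / reliability

rho = 0.435                          # proxy-vs-gold-standard correlation (illustrative)
observed = -1_000                    # estimated inpatient-cost change per unit adherence ($), illustrative
print("correction factor:", round(1 / rho**2, 1))        # ~5.3
print("bias-corrected effect:", round(corrected_effect(observed, rho)))
```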
A study on characteristics of retrospective optimal interpolation with WRF testbed
NASA Astrophysics Data System (ADS)
Kim, S.; Noh, N.; Lim, G.
2012-12-01
This study presents the application of retrospective optimal interpolation (ROI) with the Weather Research and Forecasting (WRF) model. Song et al. (2009) proposed the ROI method, an optimal interpolation (OI) scheme that gradually assimilates observations over the analysis window to obtain a variance-minimum estimate of the atmospheric state at the initial time of the window. Song and Lim (2011) improved the method by incorporating eigen-decomposition and covariance inflation. The ROI method assimilates data at post-analysis times using the perturbation method (Errico and Raeder, 1999), without an adjoint model. In this study, the ROI method is applied to the WRF model to validate the algorithm and investigate its capability. The computational cost of ROI can be reduced through eigen-decomposition of the background error covariance. Using the background error covariance in eigen-space, a single-profile assimilation experiment is performed. The difference between forecast errors with and without assimilation clearly grows with time, indicating that assimilation improves the forecast. The characteristics and the strengths and weaknesses of the ROI method are investigated by comparing experiments with other data assimilation methods.
Modeling and control of beam-like structures
NASA Technical Reports Server (NTRS)
Hu, A.; Skelton, R. E.; Yang, T. Y.
1987-01-01
The most popular finite element codes are based upon appealing theories of convergence of modal frequencies. For example, the popularity of cubic elements for beam-like structures is due to the rapid convergence of modal frequencies and stiffness properties. However, for those problems in which the primary consideration is the accuracy of response of the structure at specified locations it is more important to obtain accuracy in the modal costs than in the modal frequencies. The modal cost represents the contribution of a mode in the norm of the response vector. This paper provides a complete modal cost analysis for beam-like continua. Upper bounds are developed for mode truncation errors in the model reduction process and modal cost analysis dictates which modes to retain in order to reduce the model for control design purposes.
Limited-memory adaptive snapshot selection for proper orthogonal decomposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oxberry, Geoffrey M.; Kostova-Vassilevska, Tanya; Arrighi, Bill
2015-04-02
Reduced order models are useful for accelerating simulations in many-query contexts, such as optimization, uncertainty quantification, and sensitivity analysis. However, offline training of reduced order models can have prohibitively expensive memory and floating-point operation costs in high-performance computing applications, where memory per core is limited. To overcome this limitation for proper orthogonal decomposition, we propose a novel adaptive selection method for snapshots in time that limits offline training costs by selecting snapshots according to an error control mechanism similar to that found in adaptive time-stepping ordinary differential equation solvers. The error estimator used in this work is related to theory bounding the approximation error in time of proper orthogonal decomposition-based reduced order models, and memory usage is minimized by computing the singular value decomposition using a single-pass incremental algorithm. Results for a viscous Burgers’ test problem demonstrate convergence in the limit as the algorithm error tolerances go to zero; in this limit, the full order model is recovered to within discretization error. The resulting method can be used on supercomputers to generate proper orthogonal decomposition-based reduced order models, or as a subroutine within hyperreduction algorithms that require taking snapshots in time, or within greedy algorithms for sampling parameter space.
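A simplified sketch of error-controlled snapshot selection for POD in the spirit of the abstract above: a snapshot is kept only when its projection error onto the current basis exceeds a tolerance. It uses a batch SVD rather than the single-pass incremental algorithm in the report, and the tolerance and the synthetic advecting-pulse trajectory are illustrative assumptions.

```python
# Sketch: error-controlled snapshot selection for POD. A snapshot is retained only when
# its projection error onto the current basis exceeds a tolerance; the basis is then
# recomputed with a batch SVD (the report uses a single-pass incremental SVD instead).
import numpy as np

def pod_basis(snapshots, energy=0.9999):
    U, s, _ = np.linalg.svd(np.column_stack(snapshots), full_matrices=False)
    keep = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), energy) + 1
    return U[:, :keep]

def select_snapshots(trajectory, tol):
    kept = [trajectory[0]]
    basis = pod_basis(kept)
    for u in trajectory[1:]:
        residual = u - basis @ (basis.T @ u)          # projection-error estimate
        if np.linalg.norm(residual) > tol * np.linalg.norm(u):
            kept.append(u)
            basis = pod_basis(kept)
    return kept, basis

# Synthetic "trajectory": a Gaussian pulse advecting across a 1-D grid.
x = np.linspace(0, 1, 200)
trajectory = [np.exp(-200 * (x - 0.1 - 0.6 * t) ** 2) for t in np.linspace(0, 1, 100)]

kept, basis = select_snapshots(trajectory, tol=1e-2)
print(f"kept {len(kept)} of {len(trajectory)} snapshots; basis rank = {basis.shape[1]}")
```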
Suppression of vapor cell temperature error for spin-exchange-relaxation-free magnetometer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Jixi, E-mail: lujixi@buaa.edu.cn; Qian, Zheng; Fang, Jiancheng
2015-08-15
This paper presents a method to reduce the vapor cell temperature error of the spin-exchange-relaxation-free (SERF) magnetometer. The fluctuation of cell temperature can induce variations of the optical rotation angle, resulting in a scale factor error of the SERF magnetometer. In order to suppress this error, we employ the variation of the probe beam absorption to offset the variation of the optical rotation angle. The theoretical discussion of our method indicates that the scale factor error introduced by the fluctuation of the cell temperature could be suppressed by setting the optical depth close to one. In our experiment, we adjust the probe frequency to obtain various optical depths and then measure the variation of scale factor with respect to the corresponding cell temperature changes. Our experimental results show a good agreement with our theoretical analysis. Under our experimental condition, the error has been reduced significantly compared with that obtained when the probe wavelength is adjusted to maximize the probe signal. The cost of this method is the reduction of the scale factor of the magnetometer. However, according to our analysis, it has only a minor effect on the sensitivity under proper operating parameters.
Analysis and design of algorithm-based fault-tolerant systems
NASA Technical Reports Server (NTRS)
Nair, V. S. Sukumaran
1990-01-01
An important consideration in the design of high performance multiprocessor systems is to ensure the correctness of the results computed in the presence of transient and intermittent failures. Concurrent error detection and correction have been applied to such systems in order to achieve reliability. Algorithm Based Fault Tolerance (ABFT) was suggested as a cost-effective concurrent error detection scheme. The research was motivated by the complexity involved in the analysis and design of ABFT systems. To that end, a matrix-based model was developed and, based on that, algorithms for both the design and analysis of ABFT systems are formulated. These algorithms are less complex than the existing ones. In order to reduce the complexity further, a hierarchical approach is developed for the analysis of large systems.
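A minimal sketch of one standard ABFT construction consistent with the description above: the classic row/column checksum encoding for matrix multiplication (Huang–Abraham style), with a simulated transient fault to show detection and correction. The fault location and matrices are illustrative.

```python
# Sketch of algorithm-based fault tolerance (ABFT) for matrix multiplication using the
# classic row/column checksum encoding: augment A with a column-sum row and B with a
# row-sum column; the product then carries checksums that expose a corrupted element.
import numpy as np

def abft_matmul(A, B, tol=1e-8):
    Ac = np.vstack([A, A.sum(axis=0)])                  # column-checksum-encoded A
    Br = np.hstack([B, B.sum(axis=1, keepdims=True)])   # row-checksum-encoded B
    C = Ac @ Br
    C[1, 2] += 7.0                                      # simulate a transient fault
    row_err = np.abs(C[:-1, :-1].sum(axis=1) - C[:-1, -1])   # compare to row checksums
    col_err = np.abs(C[:-1, :-1].sum(axis=0) - C[-1, :-1])   # compare to column checksums
    if row_err.max() > tol or col_err.max() > tol:
        i, j = int(row_err.argmax()), int(col_err.argmax())
        print(f"fault detected at element ({i}, {j}); correcting")
        C[i, j] -= C[:-1, :-1].sum(axis=1)[i] - C[:-1, -1][i]
    return C[:-1, :-1]

rng = np.random.default_rng(1)
A, B = rng.random((4, 3)), rng.random((3, 5))
print("max error after ABFT correction:", np.abs(abft_matmul(A, B) - A @ B).max())
```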
Avery, Anthony J; Rodgers, Sarah; Cantrill, Judith A; Armstrong, Sarah; Cresswell, Kathrin; Eden, Martin; Elliott, Rachel A; Howard, Rachel; Kendrick, Denise; Morris, Caroline J; Prescott, Robin J; Swanwick, Glen; Franklin, Matthew; Putman, Koen; Boyd, Matthew; Sheikh, Aziz
2012-01-01
Summary Background Medication errors are common in primary care and are associated with considerable risk of patient harm. We tested whether a pharmacist-led, information technology-based intervention was more effective than simple feedback in reducing the number of patients at risk of measures related to hazardous prescribing and inadequate blood-test monitoring of medicines 6 months after the intervention. Methods In this pragmatic, cluster randomised trial general practices in the UK were stratified by research site and list size, and randomly assigned by a web-based randomisation service in block sizes of two or four to one of two groups. The practices were allocated to either computer-generated simple feedback for at-risk patients (control) or a pharmacist-led information technology intervention (PINCER), composed of feedback, educational outreach, and dedicated support. The allocation was masked to general practices, patients, pharmacists, researchers, and statisticians. Primary outcomes were the proportions of patients at 6 months after the intervention who had had any of three clinically important errors: non-selective non-steroidal anti-inflammatory drugs (NSAIDs) prescribed to those with a history of peptic ulcer without co-prescription of a proton-pump inhibitor; β blockers prescribed to those with a history of asthma; long-term prescription of angiotensin converting enzyme (ACE) inhibitor or loop diuretics to those 75 years or older without assessment of urea and electrolytes in the preceding 15 months. The cost per error avoided was estimated by incremental cost-effectiveness analysis. This study is registered with Controlled-Trials.com, number ISRCTN21785299. Findings 72 general practices with a combined list size of 480 942 patients were randomised. At 6 months' follow-up, patients in the PINCER group were significantly less likely to have been prescribed a non-selective NSAID if they had a history of peptic ulcer without gastroprotection (OR 0·58, 95% CI 0·38–0·89); a β blocker if they had asthma (0·73, 0·58–0·91); or an ACE inhibitor or loop diuretic without appropriate monitoring (0·51, 0·34–0·78). PINCER has a 95% probability of being cost effective if the decision-maker's ceiling willingness to pay reaches £75 per error avoided at 6 months. Interpretation The PINCER intervention is an effective method for reducing a range of medication errors in general practices with computerised clinical records. Funding Patient Safety Research Portfolio, Department of Health, England. PMID:22357106
ERIC Educational Resources Information Center
Chiarini, Marc A.
2010-01-01
Traditional methods for system performance analysis have long relied on a mix of queuing theory, detailed system knowledge, intuition, and trial-and-error. These approaches often require construction of incomplete gray-box models that can be costly to build and difficult to scale or generalize. In this thesis, we present a black-box analysis…
Evaluation of errors in quantitative determination of asbestos in rock
NASA Astrophysics Data System (ADS)
Baietto, Oliviero; Marini, Paola; Vitaliti, Martina
2016-04-01
The quantitative determination of the asbestos content of rock matrices is a complex operation that is susceptible to substantial errors. The principal methodologies for the analysis are scanning electron microscopy (SEM) and phase contrast optical microscopy (PCOM). Although the resolution of PCOM is inferior to that of SEM, PCOM analysis has several advantages, including greater representativeness of the analyzed sample, more effective recognition of chrysotile, and a lower cost. The DIATI LAA internal methodology for PCOM analysis is based on mild grinding of a rock sample, its subdivision into 5-6 grain-size classes smaller than 2 mm, and subsequent microscopic analysis of a portion of each class. PCOM relies on the optical properties of asbestos and of liquids of known refractive index in which the particles under analysis are immersed. The evaluation of errors in the analysis of rock samples, unlike the analysis of airborne filters, cannot be based on a statistical distribution. For airborne filters, a binomial (Poisson) distribution can be applied, which theoretically defines the variation in fiber counts obtained from analysis fields chosen randomly on the filter. Analysis of rock matrices, by contrast, cannot rely on any statistical distribution, because the most important quantity is the size of the asbestiform fibers and fiber bundles observed, and hence the ratio between the weight of the fibrous component and that of the granular component. The error estimates generally provided by public and private institutions vary between 50 and 150 percent, but there are no specific studies that discuss the origin of the error or that link it to the asbestos content. Our work aims to provide a reliable estimate of the error in relation to the applied methodologies and to the total asbestos content, especially for values close to the legal limits. The error assessment is made by repeating the same analysis on the same sample, to estimate both the error related to the representativeness of the sample and the error related to the sensitivity of the operator, so as to provide a sufficiently reliable uncertainty for the method. We used about 30 natural rock samples with different asbestos contents, performing 3 analyses on each sample to obtain a sufficiently representative trend of the percentage. Furthermore, on one chosen sample we performed 10 repetitions of the analysis to define the error of the methodology more specifically.
Nelson, Richard E; Angelovic, Aaron W; Nelson, Scott D; Gleed, Jeremy R; Drews, Frank A
2015-05-01
Adherence engineering applies human factors principles to examine non-adherence within a specific task and to guide the development of materials or equipment to increase protocol adherence and reduce human error. Central line maintenance (CLM) for intensive care unit (ICU) patients is a task through which error or non-adherence to protocols can cause central line-associated bloodstream infections (CLABSIs). We conducted an economic analysis of an adherence engineering CLM kit designed to improve the CLM task and reduce the risk of CLABSI. We constructed a Markov model to compare the cost-effectiveness of the CLM kit, which contains each of the 27 items necessary for performing the CLM procedure, compared with the standard care procedure for CLM, in which each item for dressing maintenance is gathered separately. We estimated the model using the cost of CLABSI overall ($45,685) as well as the excess LOS (6.9 excess ICU days, 3.5 excess general ward days). Assuming the CLM kit reduces the risk of CLABSI by 100% and 50%, this strategy was less costly (cost savings between $306 and $860) and more effective (between 0.05 and 0.13 more quality-adjusted life-years) compared with not using the pre-packaged kit. We identified threshold values for the effectiveness of the kit in reducing CLABSI for which the kit strategy was no longer less costly. An adherence engineering-based intervention to streamline the CLM process can improve patient outcomes and lower costs. Patient safety can be improved by adopting new approaches that are based on human factors principles.
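A simplified decision-tree style expected-value comparison in the spirit of the analysis above, not the paper's full Markov model. The $45,685 cost per CLABSI comes from the abstract; the baseline CLABSI risk, kit price, QALY loss per infection, and 50% risk reduction are hypothetical assumptions.

```python
# Simplified expected-cost comparison of standard central-line maintenance vs a pre-packaged
# CLM kit. This is a decision-tree style sketch, not the paper's Markov model; the baseline
# CLABSI risk, kit price, and QALY loss per infection are hypothetical, while the $45,685
# cost per CLABSI comes from the abstract.
def expected_outcomes(clabsi_risk, kit_cost, clabsi_cost=45_685, qaly_loss=0.3):
    cost = kit_cost + clabsi_risk * clabsi_cost
    qalys_lost = clabsi_risk * qaly_loss
    return cost, qalys_lost

baseline_risk = 0.02                      # hypothetical CLABSI risk per line, standard care
std_cost, std_loss = expected_outcomes(baseline_risk, kit_cost=0)
kit_cost, kit_loss = expected_outcomes(baseline_risk * 0.5, kit_cost=25)   # assumed 50% risk reduction

print(f"standard care: ${std_cost:,.0f}, QALYs lost {std_loss:.4f}")
print(f"CLM kit:       ${kit_cost:,.0f}, QALYs lost {kit_loss:.4f}")
print(f"kit saves ${std_cost - kit_cost:,.0f} and {std_loss - kit_loss:.4f} QALYs per patient")
```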
A Multigrid NLS-4DVar Data Assimilation Scheme with Advanced Research WRF (ARW)
NASA Astrophysics Data System (ADS)
Zhang, H.; Tian, X.
2017-12-01
The motions of the atmosphere have multiscale properties in space and/or time, and the background error covariance matrix (Β) should thus contain error information at different correlation scales. To obtain an optimal analysis, the multigrid three-dimensional variational data assimilation scheme is used widely when sequentially correcting errors from large to small scales. However, introduction of the multigrid technique into four-dimensional variational data assimilation is not easy, due to its strong dependence on the adjoint model, which has extremely high computational costs in data coding, maintenance, and updating. In this study, the multigrid technique was introduced into the nonlinear least-squares four-dimensional variational assimilation (NLS-4DVar) method, which is an advanced four-dimensional ensemble-variational method that can be applied without invoking the adjoint models. The multigrid NLS-4DVar (MG-NLS-4DVar) scheme uses the number of grid points to control the scale, with doubling of this number when moving from a coarse to a finer grid. Furthermore, the MG-NLS-4DVar scheme not only retains the advantages of NLS-4DVar, but also sufficiently corrects multiscale errors to achieve a highly accurate analysis. The effectiveness and efficiency of the proposed MG-NLS-4DVar scheme were evaluated by several groups of observing system simulation experiments using the Advanced Research Weather Research and Forecasting Model. MG-NLS-4DVar outperformed NLS-4DVar, with a lower computational cost.
Development of WRF-ROI system by incorporating eigen-decomposition
NASA Astrophysics Data System (ADS)
Kim, S.; Noh, N.; Song, H.; Lim, G.
2011-12-01
This study presents the development of the WRF-ROI system, an implementation of retrospective optimal interpolation (ROI) in the Weather Research and Forecasting (WRF) model. ROI is a data assimilation algorithm introduced by Song et al. (2009) and Song and Lim (2009). Its formulation is similar to that of optimal interpolation (OI), but ROI iteratively assimilates an observation set at a post-analysis time into a prior analysis, potentially providing high-quality reanalysis data. The ROI method assimilates data at post-analysis times using the perturbation method (Errico and Raeder, 1999), without an adjoint model. In a previous study, the ROI method was applied to the Lorenz 40-variable model (Lorenz, 1996) to validate the algorithm and investigate its capability; it is therefore necessary to apply the method within a more realistic and complicated model framework such as WRF. In this research, a reduced-rank formulation of ROI is used instead of a reduced-resolution method. The computational cost can be reduced through the eigen-decomposition of the background error covariance in the reduced-rank formulation. When a single profile of observations is assimilated in the WRF-ROI system with eigen-decomposition, the analysis error tends to be reduced compared with the background error. The difference between forecast errors with and without assimilation clearly grows with time, indicating that assimilation improves the forecast.
Lost in Translation: the Case for Integrated Testing
NASA Technical Reports Server (NTRS)
Young, Aaron
2017-01-01
The building of a spacecraft is complex and often involves multiple suppliers and companies that have their own designs and processes. Standards have been developed across the industries to reduce the chances for critical flight errors at the system level, but the spacecraft is still vulnerable to the introduction of critical errors during integration of these systems. Critical errors can occur at any time during the process, and in many cases human reliability analysis (HRA) identifies human error as a risk driver. Most programs have a test plan in place that is intended to catch these errors, but it is not uncommon for schedule and cost stress to result in less testing than initially planned. Therefore, integrated testing, or "testing as you fly," is essential as a final check on the design and assembly to catch any errors prior to the mission. This presentation outlines the unique benefits of integrated testing in catching critical flight errors that could otherwise go undetected, discusses HRA methods used to identify opportunities for human error, and presents lessons learned and challenges over ownership of testing.
Cost effectiveness of the US Geological Survey's stream-gaging program in New York
Wolcott, S.W.; Gannon, W.B.; Johnston, W.H.
1986-01-01
The U.S. Geological Survey conducted a 5-year nationwide analysis to define and document the most cost effective means of obtaining streamflow data. This report describes the stream gaging network in New York and documents the cost effectiveness of its operation; it also identifies data uses and funding sources for the 174 continuous-record stream gages currently operated (1983). Those gages as well as 189 crest-stage, stage-only, and groundwater gages are operated with a budget of $1.068 million. One gaging station was identified as having insufficient reason for continuous operation and was converted to a crest-stage gage. Current operation of the 363-station program requires a budget of $1.068 million/yr. The average standard error of estimation of continuous streamflow data is 13.4%. Results indicate that this degree of accuracy could be maintained with a budget of approximately $1.006 million if the gaging resources were redistributed among the gages. The average standard error for 174 stations was calculated for five hypothetical budgets. A minimum budget of $970,000 would be needed to operate the 363-gage program; a budget less than this does not permit proper servicing and maintenance of the gages and recorders. Under the restrictions of a minimum budget, the average standard error would be 16.0%. The maximum budget analyzed was $1.2 million, which would decrease the average standard error to 9.4%. (Author's abstract)
Liability claims and costs before and after implementation of a medical error disclosure program.
Kachalia, Allen; Kaufman, Samuel R; Boothman, Richard; Anderson, Susan; Welch, Kathleen; Saint, Sanjay; Rogers, Mary A M
2010-08-17
Since 2001, the University of Michigan Health System (UMHS) has fully disclosed and offered compensation to patients for medical errors. To compare liability claims and costs before and after implementation of the UMHS disclosure-with-offer program. Retrospective before-after analysis from 1995 to 2007. Public academic medical center and health system. Inpatients and outpatients involved in claims made to UMHS. Number of new claims for compensation, number of claims compensated, time to claim resolution, and claims-related costs. After full implementation of a disclosure-with-offer program, the average monthly rate of new claims decreased from 7.03 to 4.52 per 100,000 patient encounters (rate ratio [RR], 0.64 [95% CI, 0.44 to 0.95]). The average monthly rate of lawsuits decreased from 2.13 to 0.75 per 100,000 patient encounters (RR, 0.35 [CI, 0.22 to 0.58]). Median time from claim reporting to resolution decreased from 1.36 to 0.95 years. Average monthly cost rates decreased for total liability (RR, 0.41 [CI, 0.26 to 0.66]), patient compensation (RR, 0.41 [CI, 0.26 to 0.67]), and non-compensation-related legal costs (RR, 0.39 [CI, 0.22 to 0.67]). The study design cannot establish causality. Malpractice claims generally declined in Michigan during the latter part of the study period. The findings might not apply to other health systems, given that UMHS has a closed staff model covered by a captive insurance company and often assumes legal responsibility. The UMHS implemented a program of full disclosure of medical errors with offers of compensation without increasing its total claims and liability costs. Blue Cross Blue Shield of Michigan Foundation.
Sorock, G S; Ranney, T A; Lehto, M R
1996-01-01
Motor vehicle travel through roadway construction workzones has been shown to increase the risk of a crash. The number of workzones has increased due to recent congressional funding in 1991 for expanded roadway maintenance and repair. In this paper, we describe the characteristics and costs of motor vehicle crashes in roadway construction workzones. As opposed to using standard accident codes to identify accident types, automobile insurance claims files from 1990-93 were searched to identify records with the keyword "construction" in the accident narrative field. A total of 3,686 claims were used for the analysis of crashes. Keywords from the accident narrative field were used to identify five pre-crash vehicle activities and five crash types. We evaluated misclassification error by reading 560 randomly selected claims and found it to be only 5%. For each of the four years 1990-93, there were totals of 648, 996, 977, and 1,065 crashes, respectively. There was a 70% increase in the crash rate per 10,000 personal insured vehicles from 1990-93 (2.1-3.6). Most crashes (26%) involved a stopped or slowing vehicle in the workzone. The most common crash (31%) was a rear-end collision. The most costly pre-crash activity was a major judgment error on the part of a driver (n = 120, median cost = $2,628). An overturned vehicle was the most costly crash type (n = 16, median cost = $4,745). In summary, keyword text analysis of accident narrative data used in this study demonstrated its utility and potential for enhancing injury epidemiology. The results suggest interventions are needed to respond to growing traffic hazards in construction workzones.
Rahman, Md. Sayedur; Sathasivam, Kathiresan V.
2015-01-01
Biosorption process is a promising technology for the removal of heavy metals from industrial wastes and effluents using low-cost and effective biosorbents. In the present study, adsorption of Pb2+, Cu2+, Fe2+, and Zn2+ onto dried biomass of red seaweed Kappaphycus sp. was investigated as a function of pH, contact time, initial metal ion concentration, and temperature. The experimental data were evaluated by four isotherm models (Langmuir, Freundlich, Temkin, and Dubinin-Radushkevich) and four kinetic models (pseudo-first-order, pseudo-second-order, Elovich, and intraparticle diffusion models). The adsorption process was feasible, spontaneous, and endothermic in nature. Functional groups in the biomass involved in metal adsorption process were revealed as carboxylic and sulfonic acids and sulfonate by Fourier transform infrared analysis. A total of nine error functions were applied to validate the models. We strongly suggest the analysis of error functions for validating adsorption isotherm and kinetic models using linear methods. The present work shows that the red seaweed Kappaphycus sp. can be used as a potentially low-cost biosorbent for the removal of heavy metal ions from aqueous solutions. Further study is warranted to evaluate its feasibility for the removal of heavy metals from the real environment. PMID:26295032
Advanced Small Modular Reactor Economics Model Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harrison, Thomas J.
2014-10-01
The US Department of Energy Office of Nuclear Energy’s Advanced Small Modular Reactor (SMR) research and development activities focus on four key areas: developing assessment methods for evaluating advanced SMR technologies and characteristics; developing and testing materials, fuels, and fabrication techniques; resolving key regulatory issues identified by the US Nuclear Regulatory Commission and industry; and developing advanced instrumentation and controls and human-machine interfaces. This report focuses on development of assessment methods to evaluate advanced SMR technologies and characteristics. Specifically, this report describes the expansion and application of the economic modeling effort at Oak Ridge National Laboratory. Analysis of the current modeling methods shows that one of the primary concerns for the modeling effort is the handling of uncertainty in cost estimates. Monte Carlo–based methods are commonly used to handle uncertainty, especially when implemented by a stand-alone script within a program such as Python or MATLAB. However, a script-based model requires each potential user to have access to a compiler and an executable capable of handling the script. Making the model accessible to multiple independent analysts is best accomplished by implementing the model in a common computing tool such as Microsoft Excel. Excel is readily available and accessible to most system analysts, but it is not designed for straightforward implementation of a Monte Carlo–based method. Using a Monte Carlo algorithm requires in-spreadsheet scripting and statistical analyses or the use of add-ons such as Crystal Ball. An alternative method uses propagation of error calculations in the existing Excel-based system to estimate system cost uncertainty. This method has the advantage of using Microsoft Excel as is, but it requires the use of simplifying assumptions. These assumptions do not necessarily bring into question the analytical results. In fact, the analysis shows that the propagation of error method introduces essentially negligible error, especially when compared to the uncertainty associated with some of the estimates themselves. The results of these uncertainty analyses generally quantify and identify the sources of uncertainty in the overall cost estimation. The obvious generalization—that capital cost uncertainty is the main driver—can be shown to be an accurate generalization for the current state of reactor cost analysis. However, the detailed analysis on a component-by-component basis helps to demonstrate which components would benefit most from research and development to decrease the uncertainty, as well as which components would benefit from research and development to decrease the absolute cost.
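A minimal sketch contrasting the two approaches named above for a total cost built from independent component costs: a Monte Carlo sum versus the analytic propagation-of-error formula (variances of independent terms add). The component means and uncertainties are hypothetical placeholders, not SMR estimates.

```python
# Sketch comparing Monte Carlo uncertainty propagation with the analytic propagation-of-error
# formula for a total cost built from independent component costs. Component means and
# standard deviations are hypothetical placeholders, not SMR estimates.
import numpy as np

means = np.array([1200.0, 800.0, 450.0, 300.0])   # component cost estimates ($M)
sds = np.array([240.0, 120.0, 90.0, 30.0])        # their 1-sigma uncertainties ($M)

# Propagation of error: for a sum of independent terms, variances add.
poe_sd = np.sqrt(np.sum(sds ** 2))

# Monte Carlo: sample each component and look at the spread of the total.
rng = np.random.default_rng(0)
samples = rng.normal(means, sds, size=(100_000, means.size)).sum(axis=1)

print(f"total cost estimate: ${means.sum():,.0f}M")
print(f"propagation-of-error sd: ${poe_sd:,.1f}M   Monte Carlo sd: ${samples.std():,.1f}M")
```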
Shahly, Victoria; Berglund, Patricia A; Coulouvrat, Catherine; Fitzgerald, Timothy; Hajak, Goeran; Roth, Thomas; Shillington, Alicia C; Stephenson, Judith J; Walsh, James K; Kessler, Ronald C
2012-10-01
Insomnia is a common and seriously impairing condition that often goes unrecognized. To examine associations of broadly defined insomnia (ie, meeting inclusion criteria for a diagnosis from International Statistical Classification of Diseases, 10th Revision, DSM-IV, or Research Diagnostic Criteria/International Classification of Sleep Disorders, Second Edition) with costly workplace accidents and errors after excluding other chronic conditions among workers in the America Insomnia Survey (AIS). A national cross-sectional telephone survey (65.0% cooperation rate) of commercially insured health plan members selected from the more than 34 million in the HealthCore Integrated Research Database. Four thousand nine hundred ninety-one employed AIS respondents. Costly workplace accidents or errors in the 12 months before the AIS interview were assessed with one question about workplace accidents "that either caused damage or work disruption with a value of $500 or more" and another about other mistakes "that cost your company $500 or more." Current insomnia with duration of at least 12 months was assessed with the Brief Insomnia Questionnaire, a validated (area under the receiver operating characteristic curve, 0.86 compared with diagnoses based on blinded clinical reappraisal interviews), fully structured diagnostic interview. Eighteen other chronic conditions were assessed with medical/pharmacy claims records and validated self-report scales. Insomnia had a significant odds ratio with workplace accidents and/or errors controlled for other chronic conditions (1.4). The odds ratio did not vary significantly with respondent age, sex, educational level, or comorbidity. The average costs of insomnia-related accidents and errors ($32 062) were significantly higher than those of other accidents and errors ($21 914). Simulations estimated that insomnia was associated with 7.2% of all costly workplace accidents and errors and 23.7% of all the costs of these incidents. These proportions are higher than for any other chronic condition, with annualized US population projections of 274 000 costly insomnia-related workplace accidents and errors having a combined value of US $31.1 billion. Effectiveness trials are needed to determine whether expanded screening, outreach, and treatment of workers with insomnia would yield a positive return on investment for employers.
NASA Astrophysics Data System (ADS)
Romo, David Ricardo
Foreign Object Debris/Damage (FOD) has been an issue for military and commercial aircraft manufacturers since the early days of aviation and aerospace. Aerospace is currently growing rapidly, and the chances of FOD occurrence are growing as well. One of the principal causes in manufacturing is human error, and the cost associated with human error in commercial and military aircraft is approximately 4 billion dollars per year. The problem is currently addressed with prevention programs, elimination techniques, designation of FOD areas, controlled access, restrictions on personal items entering designated areas, tool accountability, and the use of technology such as Radio Frequency Identification (RFID) tags. These efforts have not shown a significant reduction in occurrence in manufacturing processes; on the contrary, occurrences follow a repetitive pattern, and the associated cost has not declined significantly. To address the problem, this thesis proposes a new approach based on statistical analysis: a predictive model built from historical categorical data from an aircraft manufacturer, focusing only on human-error causes. Contingency tables, the natural logarithm of the odds, and a probability transformation are used to provide predicted probabilities for each aircraft. A case study is presented to illustrate the applied methodology. The resulting approach can predict possible FOD outcomes for each workstation/area of interest and provide monthly predictions per workstation. This thesis is intended as a starting point for statistical data analysis of FOD related to human factors, identifying the areas where human error is the primary cause of FOD occurrence so that accurate solutions can be designed and implemented. The advantages of the proposed methodology range from reduced production cost, quality issues, repair cost, and assembly process time to a more reliable process overall, and the methodology may be applied to other aircraft.
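A minimal sketch of the pipeline described in the abstract above: tabulate historical FOD occurrences by workstation, convert to log-odds, and transform back to predicted probabilities. The workstation names and counts are hypothetical, not data from the thesis.

```python
# Sketch of the contingency-table / log-odds / probability pipeline described above.
# The counts per workstation are hypothetical; a real analysis would use historical
# FOD records from the manufacturer.
import math

# (FOD occurrences, inspections without FOD) per workstation -- hypothetical counts
counts = {"wing join": (12, 488), "fuselage": (30, 970), "final assembly": (5, 995)}

for station, (fod, clean) in counts.items():
    odds = fod / clean
    log_odds = math.log(odds)
    prob = odds / (1 + odds)             # inverse transform back to a probability
    print(f"{station:15s} log-odds = {log_odds:+.2f}  predicted P(FOD) = {prob:.3f}")
```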
Thiboonboon, Kittiphong; Leelahavarong, Pattara; Wattanasirichaigoon, Duangrurdee; Vatanavicharn, Nithiwat; Wasant, Pornswan; Shotelersuk, Vorasuk; Pangkanon, Suthipong; Kuptanon, Chulaluck; Chaisomchit, Sumonta; Teerawattananon, Yot
2015-01-01
Inborn errors of metabolism (IEM) are a rare group of genetic diseases which can lead to several serious long-term complications in newborns. In order to address these issues as early as possible, a process called tandem mass spectrometry (MS/MS) can be used as it allows for rapid and simultaneous detection of the diseases. This analysis was performed to determine whether newborn screening by MS/MS is cost-effective in Thailand. A cost-utility analysis comprising a decision-tree and Markov model was used to estimate the cost in Thai baht (THB) and health outcomes in life-years (LYs) and quality-adjusted life year (QALYs) presented as an incremental cost-effectiveness ratio (ICER). The results were also adjusted to international dollars (I$) using purchasing power parities (PPP) (1 I$ = 17.79 THB for the year 2013). The comparisons were between 1) an expanded neonatal screening programme using MS/MS screening for six prioritised diseases: phenylketonuria (PKU); isovaleric acidemia (IVA); methylmalonic acidemia (MMA); propionic acidemia (PA); maple syrup urine disease (MSUD); and multiple carboxylase deficiency (MCD); and 2) the current practice that is existing PKU screening. A comparison of the outcome and cost of treatment before and after clinical presentations were also analysed to illustrate the potential benefit of early treatment for affected children. A budget impact analysis was conducted to illustrate the cost of implementing the programme for 10 years. The ICER of neonatal screening using MS/MS amounted to 1,043,331 THB per QALY gained (58,647 I$ per QALY gained). The potential benefits of early detection compared with late detection yielded significant results for PKU, IVA, MSUD, and MCD patients. The budget impact analysis indicated that the implementation cost of the programme was expected at approximately 2,700 million THB (152 million I$) over 10 years. At the current ceiling threshold, neonatal screening using MS/MS in the Thai context is not cost-effective. However, the treatment of patients who were detected early for PKU, IVA, MSUD, and MCD, are considered favourable. The budget impact analysis suggests that the implementation of the programme will incur considerable expenses under limited resources. A long-term epidemiological study on the incidence of IEM in Thailand is strongly recommended to ascertain the magnitude of problem.
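A minimal sketch of the ICER arithmetic used above: incremental cost divided by incremental QALYs, converted from Thai baht to international dollars with the PPP factor stated in the abstract. The incremental cost and QALY values below are illustrative placeholders chosen only to reproduce the reported ratio, not the study's actual inputs.

```python
# Sketch of the incremental cost-effectiveness ratio (ICER) calculation: incremental cost
# divided by incremental QALYs, then converted from THB to international dollars using the
# PPP factor from the abstract. The delta values are illustrative, not the study's inputs.
PPP_THB_PER_INTL_DOLLAR = 17.79

def icer(delta_cost_thb, delta_qalys):
    ratio_thb = delta_cost_thb / delta_qalys
    return ratio_thb, ratio_thb / PPP_THB_PER_INTL_DOLLAR

ratio_thb, ratio_intl = icer(delta_cost_thb=52_166_550, delta_qalys=50.0)
print(f"ICER: {ratio_thb:,.0f} THB/QALY  (about {ratio_intl:,.0f} I$/QALY)")
```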
Simulating and Detecting Radiation-Induced Errors for Onboard Machine Learning
NASA Technical Reports Server (NTRS)
Wagstaff, Kiri L.; Bornstein, Benjamin; Granat, Robert; Tang, Benyang; Turmon, Michael
2009-01-01
Spacecraft processors and memory are subjected to high radiation doses and therefore employ radiation-hardened components. However, these components are orders of magnitude more expensive than typical desktop components, and they lag years behind in terms of speed and size. We have integrated algorithm-based fault tolerance (ABFT) methods into onboard data analysis algorithms to detect radiation-induced errors, which ultimately may permit the use of spacecraft memory that need not be fully hardened, reducing cost and increasing capability at the same time. We have also developed a lightweight software radiation simulator, BITFLIPS, that permits evaluation of error detection strategies in a controlled fashion, including the specification of the radiation rate and selective exposure of individual data structures. Using BITFLIPS, we evaluated our error detection methods when using a support vector machine to analyze data collected by the Mars Odyssey spacecraft. We found ABFT error detection for matrix multiplication is very successful, while error detection for Gaussian kernel computation still has room for improvement.
Riga, Marina; Vozikis, Athanassios; Pollalis, Yannis; Souliotis, Kyriakos
2015-04-01
The economic crisis in Greece makes it necessary to resolve problems concerning both the spiralling cost of, and quality assurance in, the health system. The detection and analysis of patient adverse events and medical errors are considered crucial elements of this effort. The implementation of MERIS comprises a mandatory module, which adopts the trigger tool methodology for measuring adverse events and medical errors in an intensive care unit (ICU) environment, and a voluntary module with a web-based public reporting methodology. A pilot implementation of MERIS in a public hospital identified 35 adverse events, with approximately 12 additional hospital days and an extra healthcare cost of €12,000 per adverse event, or about €312,000 per annum for ICU costs alone. At the same time, the voluntary module yielded 510 reports on adverse events submitted by citizens or patients. MERIS has been evaluated as a comprehensive and effective system; it succeeded in detecting the main factors that cause adverse events and disclosed severe omissions in the Greek health system. MERIS may be incorporated and run efficiently at the national level, adapted to the needs and peculiarities of each hospital or clinic. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Does the cost function matter in Bayes decision rule?
Schlüter, Ralf; Nussbaum-Thom, Markus; Ney, Hermann
2012-02-01
In many tasks in pattern recognition, such as automatic speech recognition (ASR), optical character recognition (OCR), part-of-speech (POS) tagging, and other string recognition tasks, we are faced with a well-known inconsistency: The Bayes decision rule is usually used to minimize string (symbol sequence) error, whereas, in practice, we want to minimize symbol (word, character, tag, etc.) error. When comparing different recognition systems, we do indeed use symbol error rate as an evaluation measure. The topic of this work is to analyze the relation between string (i.e., 0-1) and symbol error (i.e., metric, integer valued) cost functions in the Bayes decision rule, for which fundamental analytic results are derived. Simple conditions are derived for which the Bayes decision rule with integer-valued metric cost function and with 0-1 cost gives the same decisions or leads to classes with limited cost. The corresponding conditions can be tested with complexity linear in the number of classes. The results obtained do not make any assumption w.r.t. the structure of the underlying distributions or the classification problem. Nevertheless, the general analytic results are analyzed via simulations of string recognition problems with Levenshtein (edit) distance cost function. The results support earlier findings that considerable improvements are to be expected when initial error rates are high.
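A toy numeric illustration of the inconsistency discussed above: over a small posterior on length-2 strings, the 0-1 (sequence-error) Bayes rule picks the MAP string, while the symbol-error Bayes rule maximizes the per-position marginals, and the two decisions differ. Hamming cost is used here instead of the Levenshtein cost analyzed in the paper, and the posterior is an invented example.

```python
# Toy illustration of the string-vs-symbol cost issue: the Bayes decision under 0-1
# (sequence) cost is the MAP string, while under Hamming (symbol) cost it minimizes the
# expected number of symbol errors -- and the two can disagree. (Hamming cost is used
# here instead of the Levenshtein cost analyzed in the paper, for simplicity.)
from itertools import product

posterior = {("A", "A"): 0.4, ("B", "B"): 0.3, ("B", "C"): 0.3}
alphabet = {"A", "B", "C"}

# 0-1 cost: pick the single most probable string.
map_string = max(posterior, key=posterior.get)

# Hamming cost: minimize the expected number of symbol errors over candidate strings.
def expected_hamming(candidate):
    return sum(p * sum(c != s for c, s in zip(candidate, string))
               for string, p in posterior.items())

min_hamming = min(product(alphabet, repeat=2), key=expected_hamming)

print("MAP (0-1 cost) decision:      ", "".join(map_string))   # AA
print("min-Hamming-cost decision:    ", "".join(min_hamming))  # BA -- differs from MAP
print("expected symbol errors (MAP): ", expected_hamming(map_string))
print("expected symbol errors (best):", expected_hamming(min_hamming))
```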
NASA Astrophysics Data System (ADS)
Smith, Gennifer T.; Dwork, Nicholas; Khan, Saara A.; Millet, Matthew; Magar, Kiran; Javanmard, Mehdi; Bowden, Audrey K.
2017-03-01
Urinalysis dipsticks were designed to revolutionize urine-based medical diagnosis. They are cheap, extremely portable, and have multiple assays patterned on a single platform. They were also meant to be incredibly easy to use. Unfortunately, there are many aspects in both the preparation and the analysis of the dipsticks that are plagued by user error. This high error is one reason that dipsticks have failed to flourish in both the at-home market and in low-resource settings. Sources of error include: inaccurate volume deposition, varying lighting conditions, inconsistent timing measurements, and misinterpreted color comparisons. We introduce a novel manifold and companion software for dipstick urinalysis that eliminates the aforementioned error sources. A micro-volume slipping manifold ensures precise sample delivery, an opaque acrylic box guarantees consistent lighting conditions, a simple sticker-based timing mechanism maintains accurate timing, and custom software that processes video data captured by a mobile phone ensures proper color comparisons. We show that the results obtained with the proposed device are as accurate and consistent as a properly executed dip-and-wipe method, the industry gold-standard, suggesting the potential for this strategy to enable confident urinalysis testing. Furthermore, the proposed all-acrylic slipping manifold is reusable and low in cost, making it a potential solution for at-home users and low-resource settings.
Error and Uncertainty Analysis for Ecological Modeling and Simulation
2001-12-01
management (LRAM) accounting for environmental, training, and economic factors. In the ELVS methodology, soil erosion status is used as a quantitative... Monte-Carlo approach. The optimization is realized through economic functions or on decision constraints, such as unit sample cost, number of samples... nitrate flux to the Gulf of Mexico. Nature (Brief Communication) 414: 166-167. (Uncertainty analysis done with SERDP software) Gertner, G., G
Baltussen, Rob; Smith, Andrew
2012-03-02
To determine the relative costs, effects, and cost effectiveness of selected interventions to control cataract, trachoma, refractive error, hearing loss, meningitis and chronic otitis media. Cost-effectiveness analysis of single or combined strategies for controlling vision and hearing loss by means of a lifetime population model. Two World Health Organization sub-regions of the world where vision and hearing loss are major burdens: sub-Saharan Africa and South East Asia. Biological and behavioural parameters from clinical and observational studies and population based surveys. Intervention effects and resource inputs based on published reports, expert opinion, and the WHO-CHOICE database. Cost per disability adjusted life year (DALY) averted, expressed in international dollars ($Int) for the year 2005. Treatment of chronic otitis media, extracapsular cataract surgery, trichiasis surgery, treatment for meningitis, and annual screening of schoolchildren for refractive error are among the most cost effective interventions to control hearing and vision impairment, with the cost per DALY averted <$Int285 in both regions. Screening of both schoolchildren (annually) and adults (every five years) for hearing loss costs around $Int1000 per DALY averted. These interventions can be considered highly cost effective. Mass treatment with azithromycin to control trachoma can be considered cost effective in the African but not the South East Asian sub-region. Vision and hearing impairment control interventions are generally cost effective. To decide whether substantial investments in these interventions are warranted, this finding should be considered in relation to the economic attractiveness of other, existing or new, interventions in health.
Shimansky, Y P
2011-05-01
It is well known from numerous studies that perception can be significantly affected by intended action in many everyday situations, indicating that perception and related decision-making is not a simple, one-way sequence, but a complex iterative cognitive process. However, the underlying functional mechanisms are yet unclear. Based on an optimality approach, a quantitative computational model of one such mechanism has been developed in this study. It is assumed in the model that significant uncertainty about task-related parameters of the environment results in parameter estimation errors and an optimal control system should minimize the cost of such errors in terms of the optimality criterion. It is demonstrated that, if the cost of a parameter estimation error is significantly asymmetrical with respect to error direction, the tendency to minimize error cost creates a systematic deviation of the optimal parameter estimate from its maximum likelihood value. Consequently, optimization of parameter estimate and optimization of control action cannot be performed separately from each other under parameter uncertainty combined with asymmetry of estimation error cost, thus making the certainty equivalence principle non-applicable under those conditions. A hypothesis that not only the action, but also perception itself is biased by the above deviation of parameter estimate is supported by ample experimental evidence. The results provide important insights into the cognitive mechanisms of interaction between sensory perception and planning an action under realistic conditions. Implications for understanding related functional mechanisms of optimal control in the CNS are discussed.
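As a hedged illustration of the mechanism described above (not the paper's model), the sketch below assumes a Gaussian belief about a task parameter and an asymmetric quadratic error cost, and shows numerically that the cost-minimizing estimate shifts away from the maximum-likelihood value; all constants are hypothetical.

```python
# Illustrative sketch (not the paper's model): with a Gaussian belief about a
# task parameter and an error cost that penalizes overestimation more heavily
# than underestimation, the cost-minimizing estimate is biased below the
# maximum-likelihood value. All constants are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
belief = rng.normal(loc=0.0, scale=1.0, size=200_000)   # samples from the belief

def asymmetric_cost(error, k_under=1.0, k_over=8.0):
    """Quadratic cost; overestimation (positive error) is penalized 8x more."""
    return np.where(error > 0, k_over, k_under) * error**2

candidates = np.linspace(-1.5, 1.5, 301)
expected_cost = [asymmetric_cost(c - belief).mean() for c in candidates]
best = candidates[int(np.argmin(expected_cost))]

print("Maximum-likelihood (mean) estimate:  0.00")
print(f"Cost-minimizing estimate:           {best:+.2f}")
```

The optimal estimate lands below zero, i.e., away from the maximum-likelihood value and toward the cheaper error direction, which is the systematic deviation the abstract describes.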
Prediction errors in wildland fire situation analyses.
Geoffrey H. Donovan; Peter Noordijk
2005-01-01
Wildfires consume budgets and put the heat on fire managers to justify and control suppression costs. To determine the appropriate suppression strategy, land managers must conduct a wildland fire situation analysis (WFSA) when: (1) a wildland fire is expected to or does escape initial attack, or (2) a wildland fire managed for resource benefits...
Disclosure of Medical Errors: What Factors Influence How Patients Respond?
Mazor, Kathleen M; Reed, George W; Yood, Robert A; Fischer, Melissa A; Baril, Joann; Gurwitz, Jerry H
2006-01-01
BACKGROUND Disclosure of medical errors is encouraged, but research on how patients respond to specific practices is limited. OBJECTIVE This study sought to determine whether full disclosure, an existing positive physician-patient relationship, an offer to waive associated costs, and the severity of the clinical outcome influenced patients' responses to medical errors. PARTICIPANTS Four hundred and seven health plan members participated in a randomized experiment in which they viewed video depictions of medical error and disclosure. DESIGN Subjects were randomly assigned to experimental condition. Conditions varied in type of medication error, level of disclosure, reference to a prior positive physician-patient relationship, an offer to waive costs, and clinical outcome. MEASURES Self-reported likelihood of changing physicians and of seeking legal advice; satisfaction, trust, and emotional response. RESULTS Nondisclosure increased the likelihood of changing physicians, and reduced satisfaction and trust in both error conditions. Nondisclosure increased the likelihood of seeking legal advice and was associated with a more negative emotional response in the missed allergy error condition, but did not have a statistically significant impact on seeking legal advice or emotional response in the monitoring error condition. Neither the existence of a positive relationship nor an offer to waive costs had a statistically significant impact. CONCLUSIONS This study provides evidence that full disclosure is likely to have a positive effect or no effect on how patients respond to medical errors. The clinical outcome also influences patients' responses. The impact of an existing positive physician-patient relationship, or of waiving costs associated with the error remains uncertain. PMID:16808770
Report on Automated Semantic Analysis of Scientific and Engineering Codes
NASA Technical Reports Server (NTRS)
Stewart, Mark E. M.; Follen, Greg (Technical Monitor)
2001-01-01
The loss of the Mars Climate Orbiter due to a software error reveals what insiders know: software development is difficult and risky because, in part, current practices do not readily handle the complex details of software. Yet, for scientific software development the MCO mishap represents the tip of the iceberg; few errors are so public, and many errors are avoided with a combination of expertise, care, and testing during development and modification. Further, this effort consumes valuable time and resources even when hardware costs and execution time continually decrease. Software development could use better tools! This lack of tools has motivated the semantic analysis work explained in this report. However, this work has a distinguishing emphasis; the tool focuses on automated recognition of the fundamental mathematical and physical meaning of scientific code. Further, its comprehension is measured by quantitatively evaluating overall recognition with practical codes. This emphasis is necessary if software errors-like the MCO error-are to be quickly and inexpensively avoided in the future. This report evaluates the progress made with this problem. It presents recommendations, describes the approach, the tool's status, the challenges, related research, and a development strategy.
Software Requirements Analysis as Fault Predictor
NASA Technical Reports Server (NTRS)
Wallace, Dolores
2003-01-01
Waiting until the integration and system test phase to discover errors leads to more costly rework than resolving those same errors earlier in the lifecycle. Costs increase even more significantly once a software system has become operational. We can assess the quality of system requirements, but do little to correlate this information either to system assurance activities or long-term reliability projections - both of which remain unclear and anecdotal. Extending earlier work on requirements accomplished by the ARM tool, measuring requirements quality information against code complexity and test data for the same system may be used to predict specific software modules containing high-impact or deeply embedded faults now escaping in operational systems. Such knowledge would lead to more effective and efficient test programs. It may enable insight into whether a program should be maintained or started over.
Spitzer Telemetry Processing System
NASA Technical Reports Server (NTRS)
Stanboli, Alice; Martinez, Elmain M.; McAuley, James M.
2013-01-01
The Spitzer Telemetry Processing System (SirtfTlmProc) was designed to address objectives of JPL's Multi-mission Image Processing Lab (MIPL) in processing spacecraft telemetry and distributing the resulting data to the science community. To minimize costs and maximize operability, the software design focused on automated error recovery, performance, and information management. The system processes telemetry from the Spitzer spacecraft and delivers Level 0 products to the Spitzer Science Center. SirtfTlmProc is a unique system with automated error notification and recovery, with a real-time continuous service that can go quiescent after periods of inactivity. The software can process 2 GB of telemetry and deliver Level 0 science products to the end user in four hours. It provides analysis tools so the operator can manage the system and troubleshoot problems. It automates telemetry processing in order to reduce staffing costs.
Improving the Cost Estimation of Space Systems. Past Lessons and Future Recommendations
2008-01-01
a reasonable gauge for the relative proportions of cost growth attributable to errors, decisions, and other causes in any MDAP. Analysis of the... program. The program offices visited were the Defense Meteorological Satellite Program (DMSP), Evolved Expendable Launch Vehicle (EELV), Advanced...
The impact of estimation errors on evaluations of timber production opportunities.
Dennis L. Schweitzer
1970-01-01
Errors in estimating costs and returns, the timing of harvests, and the cost of using funds can greatly affect the apparent desirability of investments in timber production. Partial derivatives are used to measure the impact of these errors on the predicted present net worth of potential investments in timber production. Graphs that illustrate the impact of each type...
Yu, Tzy-Chyi; Zhou, Huanxue
2015-09-01
Evaluate performance of techniques used to handle missing cost-to-charge ratio (CCR) data in the USA Healthcare Cost and Utilization Project's Nationwide Inpatient Sample. Four techniques to replace missing CCR data were evaluated: deleting discharges with missing CCRs (complete case analysis), reweighting as recommended by Healthcare Cost and Utilization Project, reweighting by adjustment cells and hot deck imputation by adjustment cells. Bias and root mean squared error of these techniques on hospital cost were evaluated in five disease cohorts. Similar mean cost estimates would be obtained with any of the four techniques when the percentage of missing data is low (<10%). When total cost is the outcome of interest, a reweighting technique to avoid underestimation from dropping observations with missing data should be adopted.
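The sketch below illustrates two of the evaluated strategies, complete-case analysis and hot deck imputation by adjustment cells, on a made-up discharge table; the column names and adjustment cells are hypothetical and do not correspond to the actual HCUP variables.

```python
# Sketch of two of the evaluated strategies -- complete-case analysis and hot
# deck imputation within adjustment cells -- on a made-up discharge table.
# Column names and adjustment cells are hypothetical, not HCUP's variables.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 1_000
df = pd.DataFrame({
    "hospital_type": rng.choice(["urban", "rural"], n),
    "bedsize": rng.choice(["small", "large"], n),
    "charges": rng.lognormal(mean=9.0, sigma=0.5, size=n),
    "ccr": np.clip(rng.normal(0.45, 0.08, n), 0.1, 0.9),
})
df.loc[rng.random(n) < 0.08, "ccr"] = np.nan          # ~8% missing CCRs

# 1) Complete-case analysis: drop discharges with a missing CCR.
complete = df.dropna(subset=["ccr"])
cost_complete = (complete["charges"] * complete["ccr"]).mean()

# 2) Hot deck imputation by adjustment cells: within each
#    hospital_type x bedsize cell, replace a missing CCR with a randomly
#    drawn observed CCR from the same cell.
parts = []
for _, cell in df.groupby(["hospital_type", "bedsize"]):
    cell = cell.copy()
    observed = cell["ccr"].dropna().to_numpy()
    missing = cell["ccr"].isna()
    if observed.size and missing.any():
        cell.loc[missing, "ccr"] = rng.choice(observed, size=int(missing.sum()))
    parts.append(cell)
imputed = pd.concat(parts)
cost_hotdeck = (imputed["charges"] * imputed["ccr"]).mean()

print(f"Mean estimated cost, complete-case: {cost_complete:,.0f}")
print(f"Mean estimated cost, hot deck:      {cost_hotdeck:,.0f}")
```

With only ~8% of CCRs missing, the two estimates of mean cost stay close, consistent with the finding that the techniques agree when the missing-data rate is low.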
Cost-effectiveness of orthoptic screening in kindergarten: a decision-analytic model.
König, H H; Barry, J C; Leidl, R; Zrenner, E
2000-06-01
The purpose of this study was to analyze the cost-effectiveness of orthoptic screening for amblyopia in kindergarten. A decision-analytic model was used. In this model all kindergarten children in Germany aged 3 years were examined by an orthoptist. Children with positive screening results were referred to an ophthalmologist for diagnosis. The number of newly diagnosed cases of amblyopia, amblyogenic non-obvious strabismus and amblyogenic refractive errors was used as the measure of effectiveness. Direct costs were measured from a third-party payer perspective. Data for model parameters were obtained from the literature and from the authors' own measurements in kindergartens. A base analysis was performed using median parameter values. The influence of uncertain parameters was tested in sensitivity analyses. According to the base analysis, the cost of one orthoptic screening test was 7.87 euro. One ophthalmologic examination cost 36.40 euro. The total cost of the screening program in all kindergartens was 3.1 million euro. A total of 4,261 new cases would be detected. The cost-effectiveness ratio was 727 euro per case detected. Sensitivity analysis showed considerable influence of the prevalence rate of target conditions and of the specificity of the orthoptic examination on the cost-effectiveness ratio. This analysis provides information which is useful for discussion about the implementation of orthoptic screening and for planning a field study.
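A minimal sketch of the decision-tree logic is given below, assuming hypothetical prevalence, sensitivity, and specificity values; only the two unit costs quoted in the abstract are reused, so the resulting totals are illustrative and are not expected to reproduce the published 727 euro per case detected.

```python
# Minimal decision-tree sketch of the screening model's structure
# (screen -> refer positives -> ophthalmologic examination -> case detected).
# Prevalence, sensitivity, and specificity below are hypothetical placeholders;
# only the two unit costs quoted in the abstract are reused.
n_children  = 800_000     # hypothetical screened cohort of 3-year-olds
prevalence  = 0.04        # hypothetical prevalence of target conditions
sensitivity = 0.85        # hypothetical orthoptic screening sensitivity
specificity = 0.90        # hypothetical orthoptic screening specificity
cost_screen = 7.87        # euro per orthoptic screening test (abstract)
cost_exam   = 36.40       # euro per ophthalmologic examination (abstract)

true_pos  = n_children * prevalence * sensitivity
false_pos = n_children * (1 - prevalence) * (1 - specificity)
referrals = true_pos + false_pos

total_cost     = n_children * cost_screen + referrals * cost_exam
cases_detected = true_pos

print(f"Total program cost     : {total_cost / 1e6:.1f} million euro")
print(f"New cases detected     : {cases_detected:,.0f}")
print(f"Cost per case detected : {total_cost / cases_detected:,.0f} euro")
```

The structure makes the sensitivity results intuitive: prevalence drives the denominator (cases detected), while specificity drives the number of false-positive referrals and hence the examination cost in the numerator.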
Yu, Hao; Qian, Zheng; Liu, Huayi; Qu, Jiaqi
2018-02-14
This paper analyzes the measurement error, caused by the position of the current-carrying conductor, of a circular array of magnetic sensors for current measurement. The circular array of magnetic sensors is an effective approach for AC or DC non-contact measurement, as it is low-cost and light-weight, and has a large linear range, wide bandwidth, and low noise. In particular, it has been claimed that such a structure has excellent reduction ability for errors caused by the position of the current-carrying conductor, crosstalk current interference, the shape of the conduction cross-section, and the Earth's magnetic field. However, the positions of the current-carrying conductor, including un-centeredness and un-perpendicularity, have not been analyzed in detail until now. In this paper, for the purpose of achieving minimum measurement error, a theoretical analysis is proposed based on vector inner and exterior products. In the presented mathematical model of relative error, the un-center offset distance, the un-perpendicular angle, the radius of the circle, and the number of magnetic sensors are expressed in one equation. The relative error caused by the position of the current-carrying conductor is compared for four versus eight sensors. Tunnel magnetoresistance (TMR) sensors are used in the experimental prototype to verify the mathematical model. The analysis results can serve as a reference for designing the details of circular arrays of magnetic sensors for current measurement in practical situations.
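The following numerical sketch (a discretized Ampere's-law calculation of our own, not the paper's closed-form model) reproduces the qualitative point: the relative current-measurement error grows as the conductor moves off-center and shrinks as the number of sensors increases. The geometry and current values are hypothetical.

```python
# Numerical sketch (a discretized Ampere's-law calculation of our own, not the
# paper's closed-form model): the current estimate is the sum of tangential
# field components at N sensors on a circle, and the relative error grows as
# the conductor moves off-center and shrinks with more sensors.
# Geometry and current values are hypothetical.
import numpy as np

MU0 = 4e-7 * np.pi

def estimated_current(n_sensors, radius, offset, current=100.0):
    """Estimate with the conductor displaced from the array center by `offset` (m)."""
    theta = 2 * np.pi * np.arange(n_sensors) / n_sensors
    sensors = radius * np.stack([np.cos(theta), np.sin(theta)], axis=1)
    tangents = np.stack([-np.sin(theta), np.cos(theta)], axis=1)
    rel = sensors - np.array([offset, 0.0])           # conductor -> sensor vectors
    dist = np.linalg.norm(rel, axis=1)
    # Field of an infinite straight conductor, perpendicular to `rel`.
    b_vec = (MU0 * current / (2 * np.pi * dist**2))[:, None] * np.stack(
        [-rel[:, 1], rel[:, 0]], axis=1)
    b_tan = np.sum(b_vec * tangents, axis=1)
    return np.sum(b_tan) * (2 * np.pi * radius / n_sensors) / MU0

radius = 0.05                                         # 5 cm sensor circle
for offset in (0.0, 0.01, 0.02):                      # conductor un-centeredness (m)
    e4 = abs(estimated_current(4, radius, offset) / 100.0 - 1)
    e8 = abs(estimated_current(8, radius, offset) / 100.0 - 1)
    print(f"offset {offset * 1000:4.0f} mm: relative error, 4 sensors {e4:.1e}; 8 sensors {e8:.1e}")
```

For a centered conductor the discrete sum is exact; once the conductor is displaced, the eight-sensor array keeps the relative error orders of magnitude smaller than the four-sensor array, consistent with the comparison reported in the abstract.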
Principles of cost-effective resource allocation in health care organizations.
Weinstein, M C
1990-01-01
Cost-effectiveness analysis (CEA) is a method of economic evaluation that can be used to assess the efficiency with which health care technologies use limited resources to produce health outputs. However, inconsistencies in the way that such ratios are constructed often lead to misleading conclusions when CEAs are compared. Some of these inconsistencies, such as failure to discount or to calculate incremental ratios correctly, reflect analytical errors that, if corrected, would resolve the inconsistencies. Others reflect fundamental differences in the viewpoint of the analysis. The perspectives of different decision-making entities can properly lead to different items in the numerator and denominator of the cost-effectiveness (C/E) ratio. Producers and consumers of CEA need to be more conscious of the perspectives of analysis, so that C/E comparisons from a given perspective are based upon a common understanding of the elements that are properly included.
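Two of the analytical errors named above, failure to discount and failure to compute incremental rather than average ratios, can be made concrete with a small sketch; all costs and QALY values are hypothetical.

```python
# Two of the analytical errors named above, illustrated with hypothetical
# numbers: (1) discounting future costs and effects, (2) using incremental
# rather than average cost-effectiveness ratios against a comparator.

def present_value(stream, rate=0.03):
    """Discount a yearly stream (year 0 undiscounted) at the given annual rate."""
    return sum(x / (1 + rate) ** t for t, x in enumerate(stream))

# Hypothetical programme A vs existing care B (costs in $, effects in QALYs/yr).
cost_A, qaly_A = [50_000, 10_000, 10_000], [0.0, 1.2, 1.2]
cost_B, qaly_B = [20_000,  5_000,  5_000], [0.0, 0.8, 0.8]

pv = present_value
average_ratio_A = pv(cost_A) / pv(qaly_A)                     # misleading "average" C/E
icer = (pv(cost_A) - pv(cost_B)) / (pv(qaly_A) - pv(qaly_B))  # correct incremental ratio

print(f"Average ratio for A : {average_ratio_A:,.0f} $/QALY")
print(f"Incremental ratio   : {icer:,.0f} $/QALY")
```

With these hypothetical inputs the average ratio for the new programme looks far more favourable than the incremental ratio against existing care, which is exactly the kind of inconsistency the abstract warns can mislead comparisons.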
Cost effectiveness of the US Geological Survey's stream-gaging programs in New Hampshire and Vermont
Smath, J.A.; Blackey, F.E.
1986-01-01
Data uses and funding sources were identified for the 73 continuous stream gages currently (1984) being operated. Eight stream gages were identified as having insufficient reason to continue their operation. Parts of New Hampshire and Vermont were identified as needing additional hydrologic data. New gages should be established in these regions as funds become available. Alternative methods for providing hydrologic data at the stream gaging stations currently being operated were found to lack the accuracy that is required for their intended use. The current policy for operation of the stream gages requires a net budget of $297,000/yr. The average standard error of estimation of the streamflow records is 17.9%. This overall level of accuracy could be maintained with a budget of $285,000 if resources were redistributed among gages. Cost-effectiveness analysis indicates that with the present budget, the average standard error could be reduced to 16.6%. A minimum budget of $278,000 is required to operate the present stream gaging program. Below this level, the gages and recorders would not receive the proper service and maintenance. At the minimum budget, the average standard error would be 20.4%. The loss of correlative data is a significant component of the error in streamflow records, especially at lower budgetary levels. (Author's abstract)
Human-Agent Teaming for Multi-Robot Control: A Literature Review
2013-02-01
neurophysiological devices are becoming more cost effective and less invasive, future systems will most likely take advantage of this technology to monitor...Parasuraman et al., 1993). It has also been reported that both the cost of automation errors and the cost of verification affect humans’ reliance on...decision aids, and the effects are also moderated by age (Ezer et al., 2008). Generally, reliance is reduced as the cost of error increases and it
Cross Validation Through Two-Dimensional Solution Surface for Cost-Sensitive SVM.
Gu, Bin; Sheng, Victor S; Tay, Keng Yeow; Romano, Walter; Li, Shuo
2017-06-01
Model selection plays an important role in cost-sensitive SVM (CS-SVM). It has been proven that the global minimum cross validation (CV) error can be efficiently computed based on the solution path for one-parameter learning problems. However, it is a challenge to obtain the global minimum CV error for CS-SVM based on a one-dimensional solution path and traditional grid search, because CS-SVM has two regularization parameters. In this paper, we propose a solution and error surfaces based CV approach (CV-SES). More specifically, we first compute a two-dimensional solution surface for CS-SVM based on a bi-parameter space partition algorithm, which can fit solutions of CS-SVM for all values of both regularization parameters. Then, we compute a two-dimensional validation error surface for each CV fold, which can fit validation errors of CS-SVM for all values of both regularization parameters. Finally, we obtain the CV error surface by superposing K validation error surfaces, which can find the global minimum CV error of CS-SVM. Experiments are conducted on seven datasets for cost-sensitive learning and on four datasets for imbalanced learning. Experimental results not only show that our proposed CV-SES has a better generalization ability than CS-SVM with various hybrids between grid search and solution path methods, and than the recently proposed cost-sensitive hinge loss SVM with three-dimensional grid search, but also show that CV-SES uses less running time.
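For orientation, the sketch below shows only the conventional two-parameter grid search that CV-SES is compared against (not the proposed solution-surface algorithm), using scikit-learn's SVC with the regularization parameter C and the positive-class cost weight as the two knobs.

```python
# Baseline two-parameter grid search for a cost-sensitive SVM (the comparator
# the paper improves on, not the proposed CV-SES surface algorithm). Uses
# scikit-learn's SVC, with C and the positive-class weight as the two knobs.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Small imbalanced toy problem standing in for a cost-sensitive learning task.
X, y = make_classification(n_samples=400, weights=[0.9, 0.1], random_state=0)

param_grid = {
    "C": np.logspace(-2, 2, 9),
    "class_weight": [{1: w} for w in (1, 2, 5, 10, 20)],  # misclassification cost ratio
}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5, scoring="balanced_accuracy")
search.fit(X, y)

print("Best parameters:", search.best_params_)
print(f"Best CV balanced accuracy: {search.best_score_:.3f}")
```

Because the grid only samples a finite lattice of (C, cost-weight) pairs, it can miss the global minimum CV error, which is the gap the paper's exact two-dimensional solution and error surfaces are designed to close.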
Avery, Anthony J; Rodgers, Sarah; Cantrill, Judith A; Armstrong, Sarah; Cresswell, Kathrin; Eden, Martin; Elliott, Rachel A; Howard, Rachel; Kendrick, Denise; Morris, Caroline J; Prescott, Robin J; Swanwick, Glen; Franklin, Matthew; Putman, Koen; Boyd, Matthew; Sheikh, Aziz
2012-04-07
Medication errors are common in primary care and are associated with considerable risk of patient harm. We tested whether a pharmacist-led, information technology-based intervention was more effective than simple feedback in reducing the number of patients at risk of measures related to hazardous prescribing and inadequate blood-test monitoring of medicines 6 months after the intervention. In this pragmatic, cluster randomised trial general practices in the UK were stratified by research site and list size, and randomly assigned by a web-based randomisation service in block sizes of two or four to one of two groups. The practices were allocated to either computer-generated simple feedback for at-risk patients (control) or a pharmacist-led information technology intervention (PINCER), composed of feedback, educational outreach, and dedicated support. The allocation was masked to researchers and statisticians involved in processing and analysing the data. The allocation was not masked to general practices, pharmacists, patients, or researchers who visited practices to extract data. [corrected]. Primary outcomes were the proportions of patients at 6 months after the intervention who had had any of three clinically important errors: non-selective non-steroidal anti-inflammatory drugs (NSAIDs) prescribed to those with a history of peptic ulcer without co-prescription of a proton-pump inhibitor; β blockers prescribed to those with a history of asthma; long-term prescription of angiotensin converting enzyme (ACE) inhibitor or loop diuretics to those 75 years or older without assessment of urea and electrolytes in the preceding 15 months. The cost per error avoided was estimated by incremental cost-effectiveness analysis. This study is registered with Controlled-Trials.com, number ISRCTN21785299. 72 general practices with a combined list size of 480,942 patients were randomised. At 6 months' follow-up, patients in the PINCER group were significantly less likely to have been prescribed a non-selective NSAID if they had a history of peptic ulcer without gastroprotection (OR 0·58, 95% CI 0·38-0·89); a β blocker if they had asthma (0·73, 0·58-0·91); or an ACE inhibitor or loop diuretic without appropriate monitoring (0·51, 0·34-0·78). PINCER has a 95% probability of being cost effective if the decision-maker's ceiling willingness to pay reaches £75 per error avoided at 6 months. The PINCER intervention is an effective method for reducing a range of medication errors in general practices with computerised clinical records. Patient Safety Research Portfolio, Department of Health, England. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Kalton, G.
1983-01-01
A number of surveys were conducted to study the relationship between the level of aircraft or traffic noise exposure experienced by people living in a particular area and their annoyance with it. These surveys generally employ a clustered sample design which affects the precision of the survey estimates. Regression analysis of annoyance on noise measures and other variables is often an important component of the survey analysis. Formulae are presented for estimating the standard errors of regression coefficients and ratio of regression coefficients that are applicable with a two- or three-stage clustered sample design. Using a simple cost function, they also determine the optimum allocation of the sample across the stages of the sample design for the estimation of a regression coefficient.
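As a hedged illustration of this kind of design calculation, the sketch below uses the textbook two-stage optimum allocation for estimating a mean under the simple cost function C = c1*a + c2*a*b (a clusters, b respondents per cluster); the report's formulae for regression coefficients differ, and all unit costs and the intraclass correlation here are hypothetical.

```python
# Textbook two-stage optimum allocation under a simple cost function
# C = c1*a + c2*a*b (a = clusters, b = respondents per cluster). This mirrors
# the kind of calculation described above for a sample mean, not the report's
# regression-coefficient formulae; all numbers are hypothetical.
import math

def optimal_cluster_size(c1, c2, rho):
    """Subsample size minimizing the variance of a mean for a fixed total cost."""
    return math.sqrt((c1 / c2) * (1 - rho) / rho)

def allocation(total_budget, c1, c2, rho):
    b = optimal_cluster_size(c1, c2, rho)
    a = total_budget / (c1 + c2 * b)          # number of clusters affordable
    deff = 1 + (b - 1) * rho                  # design effect for the clustered mean
    return a, b, deff

budget, cluster_cost, interview_cost, rho = 100_000, 300.0, 25.0, 0.05
a, b, deff = allocation(budget, cluster_cost, interview_cost, rho)
print(f"Interviews per cluster: {b:.1f}")
print(f"Clusters:               {a:.1f}")
print(f"Design effect:          {deff:.2f}")
```

The design effect shows how clustering inflates standard errors relative to simple random sampling, which is the same precision penalty the survey estimates and regression coefficients in these noise-annoyance studies must account for.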
Maskens, Carolyn; Downie, Helen; Wendt, Alison; Lima, Ana; Merkley, Lisa; Lin, Yulia; Callum, Jeannie
2014-01-01
This report provides a comprehensive analysis of transfusion errors occurring at a large teaching hospital and aims to determine key errors that are threatening transfusion safety, despite implementation of safety measures. Errors were prospectively identified from 2005 to 2010. Error data were coded on a secure online database called the Transfusion Error Surveillance System. Errors were defined as any deviation from established standard operating procedures. Errors were identified by clinical and laboratory staff. Denominator data for volume of activity were used to calculate rates. A total of 15,134 errors were reported with a median number of 215 errors per month (range, 85-334). Overall, 9083 (60%) errors occurred on the transfusion service and 6051 (40%) on the clinical services. In total, 23 errors resulted in patient harm: 21 of these errors occurred on the clinical services and two in the transfusion service. Of the 23 harm events, 21 involved inappropriate use of blood. Errors with no harm were 657 times more common than events that caused harm. The most common high-severity clinical errors were sample labeling (37.5%) and inappropriate ordering of blood (28.8%). The most common high-severity error in the transfusion service was sample accepted despite not meeting acceptance criteria (18.3%). The cost of product and component loss due to errors was $593,337. Errors occurred at every point in the transfusion process, with the greatest potential risk of patient harm resulting from inappropriate ordering of blood products and errors in sample labeling. © 2013 American Association of Blood Banks (CME).
Corso, Phaedra S.; Ingels, Justin B.; Kogan, Steven M.; Foster, E. Michael; Chen, Yi-Fu; Brody, Gene H.
2013-01-01
Programmatic cost analyses of preventive interventions commonly have a number of methodological difficulties. To determine the mean total costs and properly characterize variability, one often has to deal with small sample sizes, skewed distributions, and especially missing data. Standard approaches for dealing with missing data such as multiple imputation may suffer from a small sample size, a lack of appropriate covariates, or too few details around the method used to handle the missing data. In this study, we estimate total programmatic costs for a prevention trial evaluating the Strong African American Families-Teen program. This intervention focuses on the prevention of substance abuse and risky sexual behavior. To account for missing data in the assessment of programmatic costs we compare multiple imputation to probabilistic sensitivity analysis. The latter approach uses collected cost data to create a distribution around each input parameter. We found that with the multiple imputation approach, the mean (95% confidence interval) incremental difference was $2149 ($397, $3901). With the probabilistic sensitivity analysis approach, the incremental difference was $2583 ($778, $4346). Although the true cost of the program is unknown, probabilistic sensitivity analysis may be a more viable alternative for capturing variability in estimates of programmatic costs when dealing with missing data, particularly with small sample sizes and the lack of strong predictor variables. Further, the larger standard errors produced by the probabilistic sensitivity analysis method may signal its ability to capture more of the variability in the data, thus better informing policymakers on the potentially true cost of the intervention. PMID:23299559
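A minimal sketch of the probabilistic sensitivity analysis idea is shown below: each cost input is given a distribution, and the incremental cost is summarized over Monte Carlo draws. The distributions and parameter values are hypothetical and are not the Strong African American Families-Teen program's data.

```python
# Sketch of probabilistic sensitivity analysis for programmatic costs: each
# input gets a distribution (in practice fit from collected cost data), and
# the incremental cost is summarized over Monte Carlo draws.
# All distributions and values below are hypothetical.
import numpy as np

rng = np.random.default_rng(42)
n_draws = 10_000

def programme_cost(staff_mu, staff_sd, materials_lo, materials_hi, sessions_lam):
    staff = rng.normal(staff_mu, staff_sd, n_draws)            # staff cost per site
    materials = rng.uniform(materials_lo, materials_hi, n_draws)  # cost per session
    sessions = rng.poisson(sessions_lam, n_draws)               # sessions delivered
    return staff + materials * sessions

intervention = programme_cost(4_000, 600, 80, 140, 12)
control      = programme_cost(2_200, 400, 60, 100,  8)
incremental  = intervention - control

mean = incremental.mean()
lo, hi = np.percentile(incremental, [2.5, 97.5])
print(f"Incremental cost: ${mean:,.0f} (95% interval ${lo:,.0f} to ${hi:,.0f})")
```

Because every input varies on every draw, the resulting interval reflects the combined uncertainty in all cost components, which is why this approach can yield wider (and arguably more honest) standard errors than imputation with few covariates.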
On verifying a high-level design. [cost and error analysis]
NASA Technical Reports Server (NTRS)
Mathew, Ben; Wehbeh, Jalal A.; Saab, Daniel G.
1993-01-01
An overview of design verification techniques is presented, and some of the current research in high-level design verification is described. Formal hardware description languages that are capable of adequately expressing the design specifications have been developed, but some time will be required before they can have the expressive power needed to be used in real applications. Simulation-based approaches are more useful in finding errors in designs than they are in proving the correctness of a certain design. Hybrid approaches that combine simulation with other formal design verification techniques are argued to be the most promising over the short term.
MetAMOS: a modular and open source metagenomic assembly and analysis pipeline
2013-01-01
We describe MetAMOS, an open source and modular metagenomic assembly and analysis pipeline. MetAMOS represents an important step towards fully automated metagenomic analysis, starting with next-generation sequencing reads and producing genomic scaffolds, open-reading frames and taxonomic or functional annotations. MetAMOS can aid in reducing assembly errors, commonly encountered when assembling metagenomic samples, and improves taxonomic assignment accuracy while also reducing computational cost. MetAMOS can be downloaded from: https://github.com/treangen/MetAMOS. PMID:23320958
Karapinar-Çarkit, Fatma; Borgsteede, Sander D; Zoer, Jan; Egberts, Toine C G; van den Bemt, Patricia M L A; van Tulder, Maurits
2012-03-01
Medication reconciliation aims to correct discrepancies in medication use between health care settings and to check the quality of pharmacotherapy to improve effectiveness and safety. In addition, medication reconciliation might also reduce costs. To evaluate the effect of medication reconciliation on medication costs after hospital discharge in relation to hospital pharmacy labor costs. A prospective observational study was performed. Patients discharged from the pulmonology department were included. A pharmacy team assessed medication errors prevented by medication reconciliation. Interventions were classified into 3 categories: correcting hospital formulary-induced medication changes (eg, reinstating less costly generic drugs used before admission), optimizing pharmacotherapy (eg, discontinuing unnecessary laxative), and eliminating discrepancies (eg, restarting omitted preadmission medication). Because eliminating discrepancies does not represent real costs to society (before hospitalization, the patient was also using the medication), these medication costs were not included in the cost calculation. Medication costs at 1 month and 6 months after hospital discharge and the associated labor costs were assessed using descriptive statistics and scenario analyses. For the 6-month extrapolation, only medication intended for chronic use was included. Two hundred sixty-two patients were included. Correcting hospital formulary changes saved €1.63/patient (exchange rate: EUR 1 = USD 1.3443) in medication costs at 1 month after discharge and €9.79 at 6 months. Optimizing pharmacotherapy saved €20.13/patient in medication costs at 1 month and €86.86 at 6 months. The associated labor costs for performing medication reconciliation were €41.04/patient. Medication cost savings from correcting hospital formulary-induced changes and optimizing of pharmacotherapy (€96.65/patient) outweighed the labor costs at 6 months extrapolation by €55.62/patient (sensitivity analysis €37.25-71.10). Preventing medication errors through medication reconciliation results in higher benefits than the costs related to the net time investment.
Evaluation and analysis of the orbital maneuvering vehicle video system
NASA Technical Reports Server (NTRS)
Moorhead, Robert J., II
1989-01-01
The work accomplished in the summer of 1989 in association with the NASA/ASEE Summer Faculty Research Fellowship Program at Marshall Space Flight Center is summarized. The task involved study of the Orbital Maneuvering Vehicle (OMV) Video Compression Scheme. This included such activities as reviewing the expected scenes to be compressed by the flight vehicle, learning the error characteristics of the communication channel, monitoring the CLASS tests, and assisting in development of test procedures and interface hardware for the bit error rate lab being developed at MSFC to test the VCU/VRU. Numerous comments and suggestions were made during the course of the fellowship period regarding the design and testing of the OMV Video System. Unfortunately, from a technical point of view, the program appears at this point to be in trouble from an expense perspective and is in fact in danger of being scaled back, if not cancelled altogether. This makes technical improvements prohibitive and cost-reduction measures necessary. Fortunately, some cost-reduction possibilities and some significant technical improvements that should cost very little were identified.
Nair, Vinit; Salmon, J Warren; Kaul, Alan F
2007-12-01
Disease Management (DM) programs have advanced to address costly chronic disease patterns in populations. This is in part due to the programs' significant clinical and economical value, coupled with interest by pharmaceutical manufacturers, managed care organizations, and pharmacy benefit management firms. While cost containment realizations for many such interventions have been less than anticipated, this article explores potentials in marrying Medication Error Risk Reduction into DM programs within managed care environments. Medication errors are an emergent serious problem now gaining attention in US health policy. They represent a failure within population-based health programs because they remain significant cost drivers. Therefore, medication errors should be addressed in an organized fashion, with DM being a worthy candidate for piggybacking such programs to achieve the best synergistic effects.
Analysis of the U.S. geological survey streamgaging network
Scott, A.G.
1987-01-01
This paper summarizes the results from the first 3 years of a 5-year cost-effectiveness study of the U.S. Geological Survey streamgaging network. The objective of the study is to define and document the most cost-effective means of furnishing streamflow information. In the first step of this study, data uses were identified for 3,493 continuous-record stations currently being operated in 32 States. In the second step, an evaluation of alternative methods of providing streamflow information was conducted: flow-routing models and regression models were developed for estimating daily flows at 251 of the 3,493 stations analyzed. In the third step of the analysis, relationships were developed between the accuracy of the streamflow records and the operating budget. The weighted standard error for all stations, with current operating procedures, was 19.9 percent. By altering field activities, as determined by the analyses, this could be reduced to 17.8 percent. The existing streamgaging networks in four Districts were further analyzed to determine the impacts that satellite telemetry would have on cost effectiveness. Satellite telemetry was not found to be cost effective on the basis of hydrologic data collection alone, given present cost of equipment and operation.
Statistical Optimality in Multipartite Ranking and Ordinal Regression.
Uematsu, Kazuki; Lee, Yoonkyung
2015-05-01
Statistical optimality in multipartite ranking is investigated as an extension of bipartite ranking. We consider the optimality of ranking algorithms through minimization of the theoretical risk which combines pairwise ranking errors of ordinal categories with differential ranking costs. The extension shows that for a certain class of convex loss functions including exponential loss, the optimal ranking function can be represented as a ratio of weighted conditional probability of upper categories to lower categories, where the weights are given by the misranking costs. This result also bridges traditional ranking methods such as proportional odds model in statistics with various ranking algorithms in machine learning. Further, the analysis of multipartite ranking with different costs provides a new perspective on non-smooth list-wise ranking measures such as the discounted cumulative gain and preference learning. We illustrate our findings with simulation study and real data analysis.
NASA Technical Reports Server (NTRS)
Nichols, J. D.; Gialdini, M.; Jaakkola, S.
1974-01-01
A quasi-operational study demonstrating that a timber inventory based on manual and automated analysis of ERTS-1, supporting aircraft data and ground data was made using multistage sampling techniques. The inventory proved to be a timely, cost effective alternative to conventional timber inventory techniques. The timber volume on the Quincy Ranger District of the Plumas National Forest was estimated to be 2.44 billion board feet with a sampling error of 8.2 percent. Costs per acre for the inventory procedure at 1.1 cent/acre compared favorably with the costs of a conventional inventory at 25 cents/acre. A point-by-point comparison of CALSCAN-classified ERTS data with human-interpreted low altitude photo plots indicated no significant differences in the overall classification accuracies.
Triangular covariance factorizations for Kalman filtering. Ph.D. Thesis - Calif. Univ.
NASA Technical Reports Server (NTRS)
Thornton, C. L.
1976-01-01
An improved computational form of the discrete Kalman filter is derived using an upper triangular factorization of the error covariance matrix. The covariance P is factored such that P = UDU^T, where U is unit upper triangular and D is diagonal. Recursions are developed for propagating the U-D covariance factors together with the corresponding state estimate. The resulting algorithm, referred to as the U-D filter, combines the superior numerical precision of square root filtering techniques with an efficiency comparable to that of Kalman's original formula. Moreover, this method is easily implemented and involves no more computer storage than the Kalman algorithm. These characteristics make the U-D method an attractive realtime filtering technique. A new covariance error analysis technique is obtained from an extension of the U-D filter equations. This evaluation method is flexible and efficient and may provide significantly improved numerical results. Cost comparisons show that for a large class of problems the U-D evaluation algorithm is noticeably less expensive than conventional error analysis methods.
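The factorization itself is easy to sketch; the code below (a minimal illustration, not the thesis's filter recursions) factors a symmetric positive-definite covariance as P = U D U^T with U unit upper triangular and D diagonal, then verifies the reconstruction.

```python
# Minimal sketch of the U-D covariance factorization underlying the U-D filter:
# factor a symmetric positive-definite P as P = U D U^T with U unit upper
# triangular and D diagonal, then verify the reconstruction. This is only the
# factorization step, not the filter's propagation/measurement recursions.
import numpy as np

def ud_factorize(P):
    """Return (U, d) with P = U @ diag(d) @ U.T and U unit upper triangular."""
    P = np.array(P, dtype=float, copy=True)
    n = P.shape[0]
    U = np.eye(n)
    d = np.zeros(n)
    for j in range(n - 1, -1, -1):
        d[j] = P[j, j]
        if d[j] <= 0.0:
            raise ValueError("P must be symmetric positive definite")
        U[:j, j] = P[:j, j] / d[j]
        # Peel off this column's rank-one contribution and recurse on the
        # leading submatrix.
        P[:j, :j] -= d[j] * np.outer(U[:j, j], U[:j, j])
    return U, d

# Example: random symmetric positive-definite covariance.
rng = np.random.default_rng(7)
A = rng.standard_normal((4, 4))
P = A @ A.T + 4 * np.eye(4)

U, d = ud_factorize(P)
print("max reconstruction error:", np.abs(U @ np.diag(d) @ U.T - P).max())
```

Propagating U and d instead of P is what gives the U-D filter its square-root-like numerical robustness without the cost of explicit matrix square roots.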
2003-03-01
test returns a p-value greater than 0.05. Similarly, the assumption of constant variance can be confirmed using the Breusch-Pagan test... megaphone effect. To test this visual observation, the Breusch-Pagan test is applied. The p-value returned from this... The data points have a relatively even spread, but a potential megaphone pattern is present. An application of the more robust Breusch-Pagan test
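For reference, a constant-variance check of the kind described in this fragment can be run with the Breusch-Pagan test in statsmodels; the sketch below uses synthetic data with a deliberate megaphone (increasing-spread) pattern.

```python
# Constant-variance (homoscedasticity) check of the kind described above,
# using the Breusch-Pagan test from statsmodels on synthetic data that has a
# deliberate "megaphone" pattern (error spread growing with x).
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(3)
x = np.linspace(1, 10, 200)
y = 2.0 + 0.5 * x + rng.normal(scale=0.3 * x)     # heteroscedastic errors

X = sm.add_constant(x)
fit = sm.OLS(y, X).fit()

lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(fit.resid, X)
print(f"Breusch-Pagan LM p-value: {lm_pvalue:.4f}")
print("Constant variance rejected" if lm_pvalue < 0.05
      else "No evidence against constant variance")
```

A p-value greater than 0.05, as described in the fragment, would leave the constant-variance assumption standing; the synthetic megaphone data here should instead produce a small p-value and a rejection.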
De la Hoz-Restrepo, Fernando; Castañeda-Orjuela, Carlos; Paternina, Angel; Alvis-Guzman, Nelson
2013-07-02
To review the approaches used in the cost-effectiveness analysis (CEA) literature to estimate the cost of expanded program on immunization (EPI) activities, other than vaccine purchase, for rotavirus and pneumococcal vaccines. A systematic review of rotavirus and pneumococcal vaccine CEAs in the PubMed and NHS EED databases was done. Selected articles were read and information on how EPI costs were calculated was extracted. EPI costing approaches were classified according to the method or assumption used for estimation. Seventy-nine studies that evaluated the cost effectiveness of rotavirus (n=43) or pneumococcal (n=36) vaccines were identified. In general, there are few details on how EPI costs other than vaccine procurement were estimated. While 30 studies used some measurement of that cost, only one study on pneumococcal vaccine used a primary cost evaluation (bottom-up costing analysis) and one study used a costing tool. Twenty-seven studies (17 on rotavirus and 10 on pneumococcal vaccine) assumed the non-vaccine costs. Five studies made no reference to additional costs. Fourteen studies (9 rotavirus and 5 pneumococcal) did not consider any additional EPI cost beyond vaccine procurement. For rotavirus studies, the median non-vaccine cost per dose was US$0.74 in developing countries and US$6.39 in developed countries. For pneumococcal vaccines, the median non-vaccine cost per dose was US$1.27 in developing countries and US$8.71 in developed countries. Many pneumococcal (52.8%) and rotavirus (60.4%) cost-effectiveness analyses did not consider additional EPI costs or used poorly supported assumptions. Ignoring EPI costs beyond vaccine procurement in CEAs of new vaccines may lead to significant errors in the estimation of ICERs, since several factors such as personnel, cold chain, or social mobilization can be substantially affected by the introduction of new vaccines. Copyright © 2013 Elsevier Ltd. All rights reserved.
Introduction to the Application of Web-Based Surveys.
ERIC Educational Resources Information Center
Timmerman, Annemarie
This paper discusses some basic assumptions and issues concerning web-based surveys. Discussion includes: assumptions regarding cost and ease of use; disadvantages of web-based surveys, concerning the inability to compensate for four common errors of survey research: coverage error, sampling error, measurement error and nonresponse error; and…
A Semantic Analysis of XML Schema Matching for B2B Systems Integration
ERIC Educational Resources Information Center
Kim, Jaewook
2011-01-01
One of the most critical steps to integrating heterogeneous e-Business applications using different XML schemas is schema matching, which is known to be costly and error-prone. Many automatic schema matching approaches have been proposed, but the challenge is still daunting because of the complexity of schemas and immaturity of technologies in…
2009-08-01
habitat analysis because of the high horizontal error between the mosaicked image tiles. The imagery was collected with a non-metric camera and likewise... possible with true color imagery (digital orthophotos) or multispectral imagery, but usually comes at a much higher cost. Due to its availability and
Analysis of medication-related malpractice claims: causes, preventability, and costs.
Rothschild, Jeffrey M; Federico, Frank A; Gandhi, Tejal K; Kaushal, Rainu; Williams, Deborah H; Bates, David W
2002-11-25
Adverse drug events (ADEs) may lead to serious injury and may result in malpractice claims. While ADEs resulting in claims are not representative of all ADEs, such data provide a useful resource for studying ADEs. Therefore, we conducted a review of medication-related malpractice claims to study their frequency, nature, and costs and to assess the human factor failures associated with preventable ADEs. We also assessed the potential benefits of proved effective ADE prevention strategies on ADE claims prevention. We conducted a retrospective analysis of a New England malpractice insurance company claims records from January 1, 1990, to December 31, 1999. Cases were electronically screened for possible ADEs and followed up by independent review of abstracts by 2 physician reviewers (T.K.G. and R.K.). Additional in-depth claims file reviews identified potential human factor failures associated with ADEs. Adverse drug events represented 6.3% (129/2040) of claims. Adverse drug events were judged preventable in 73% (n = 94) of the cases and were nearly evenly divided between outpatient and inpatient settings. The most frequently involved medication classes were antibiotics, antidepressants or antipsychotics, cardiovascular drugs, and anticoagulants. Among these ADEs, 46% were life threatening or fatal. System deficiencies and performance errors were the most frequent cause of preventable ADEs. The mean costs of defending malpractice claims due to ADEs were comparable for nonpreventable inpatient and outpatient ADEs and preventable outpatient ADEs (mean, $64,700-74,200), but costs were considerably greater for preventable inpatient ADEs (mean, $376,500). Adverse drug events associated with malpractice claims were often severe, costly, and preventable, and about half occurred in outpatients. Many interventions could potentially have prevented ADEs, with error proofing and process standardization covering the greatest proportion of events.
Exploring Discretization Error in Simulation-Based Aerodynamic Databases
NASA Technical Reports Server (NTRS)
Aftosmis, Michael J.; Nemec, Marian
2010-01-01
This work examines the level of discretization error in simulation-based aerodynamic databases and introduces strategies for error control. Simulations are performed using a parallel, multi-level Euler solver on embedded-boundary Cartesian meshes. Discretization errors in user-selected outputs are estimated using the method of adjoint-weighted residuals, and we use adaptive mesh refinement to reduce these errors to specified tolerances. Using this framework, we examine the behavior of discretization error throughout a token database computed for a NACA 0012 airfoil consisting of 120 cases. We compare the cost and accuracy of two approaches for aerodynamic database generation. In the first approach, mesh adaptation is used to compute all cases in the database to a prescribed level of accuracy. The second approach conducts all simulations using the same computational mesh without adaptation. We quantitatively assess the error landscape and computational costs in both databases. This investigation highlights sensitivities of the database under a variety of conditions. The presence of transonic shocks or the stiffness in the governing equations near the incompressible limit are shown to dramatically increase discretization error, requiring additional mesh resolution to control. Results show that such pathologies lead to error levels that vary by over a factor of 40 when using a fixed mesh throughout the database. Alternatively, controlling this sensitivity through mesh adaptation leads to mesh sizes which span two orders of magnitude. We propose strategies to minimize simulation cost in sensitive regions and discuss the role of error estimation in database quality.
The Extended HANDS Characterization and Analysis of Metric Biases
NASA Astrophysics Data System (ADS)
Kelecy, T.; Knox, R.; Cognion, R.
The Extended High Accuracy Network Determination System (Extended HANDS) consists of a network of low cost, high accuracy optical telescopes designed to support space surveillance and development of space object characterization technologies. Comprising off-the-shelf components, the telescopes are designed to provide sub arc-second astrometric accuracy. The design and analysis team are in the process of characterizing the system through development of an error allocation tree whose assessment is supported by simulation, data analysis, and calibration tests. The metric calibration process has revealed 1-2 arc-second biases in the right ascension and declination measurements of reference satellite position, and these have been observed to have fairly distinct characteristics that appear to have some dependence on orbit geometry and tracking rates. The work presented here outlines error models developed to aid in development of the system error budget, and examines characteristic errors (biases, time dependence, etc.) that might be present in each of the relevant system elements used in the data collection and processing, including the metric calibration processing. The relevant reference frames are identified, and include the sensor (CCD camera) reference frame, Earth-fixed topocentric frame, topocentric inertial reference frame, and the geocentric inertial reference frame. The errors modeled in each of these reference frames, when mapped into the topocentric inertial measurement frame, reveal how errors might manifest themselves through the calibration process. The error analysis results that are presented use satellite-sensor geometries taken from periods where actual measurements were collected, and reveal how modeled errors manifest themselves over those specific time periods. These results are compared to the real calibration metric data (right ascension and declination residuals), and sources of the bias are hypothesized. In turn, the actual right ascension and declination calibration residuals are also mapped to other relevant reference frames in an attempt to validate the source of the bias errors. These results will serve as the basis for more focused investigation into specific components embedded in the system and system processes that might contain the source of the observed biases.
Budiman, Erwin S; Samant, Navendu; Resch, Ansgar
2013-03-01
Despite accuracy standards, there are performance differences among commercially available blood glucose monitoring (BGM) systems. The objective of this analysis was to assess the potential clinical and economic impact of accuracy differences of various BGM systems using a modeling approach. We simulated additional risk of hypoglycemia due to blood glucose (BG) measurement errors of five different BGM systems based on results of a real-world accuracy study, while retaining other sources of glycemic variability. Using data from published literature, we estimated an annual additional number of required medical interventions as a result of hypoglycemia. We based our calculations on patients with type 1 diabetes mellitus (T1DM) and T2DM requiring multiple daily injections (MDIs) of insulin in a U.S. health care system. We estimated additional costs attributable to treatment of severe hypoglycemic episodes resulting from BG measurement errors. Results from our model predict an annual difference of approximately 296,000 severe hypoglycemic episodes from BG measurement errors for T1DM (105,000 for T2DM MDI) patients for the estimated U.S. population of 958,800 T1DM and 1,353,600 T2DM MDI patients, using the least accurate BGM system versus patients using the most accurate system in a U.S. health care system. This resulted in additional direct costs of approximately $339 million for T1DM and approximately $121 million for T2DM MDI patients per year. Our analysis shows that error patterns over the operating range of BGM meter may lead to relevant clinical and economic outcome differences that may not be reflected in a common accuracy metric or standard. Further research is necessary to validate the findings of this model-based approach. © 2013 Diabetes Technology Society.
Automation for Air Traffic Control: The Rise of a New Discipline
NASA Technical Reports Server (NTRS)
Erzberger, Heinz; Tobias, Leonard (Technical Monitor)
1997-01-01
The current debate over the concept of Free Flight has renewed interest in automated conflict detection and resolution in the enroute airspace. An essential requirement for effective conflict detection is accurate prediction of trajectories. Trajectory prediction is, however, an inexact process which accumulates errors that grow in proportion to the length of the prediction time interval. Using a model of prediction errors for the trajectory predictor incorporated in the Center-TRACON Automation System (CTAS), a computationally fast algorithm for computing conflict probability has been derived. Furthermore, a method of conflict resolution has been formulated that minimizes the average cost of resolution, when cost is defined as the increment in airline operating costs incurred in flying the resolution maneuver. The method optimizes the trade off between early resolution at lower maneuver costs but higher prediction error on the one hand and late resolution with higher maneuver costs but lower prediction errors on the other. The method determines both the time to initiate the resolution maneuver as well as the characteristics of the resolution trajectory so as to minimize the cost of the resolution. Several computational examples relevant to the design of a conflict probe that can support user-preferred trajectories in the enroute airspace will be presented.
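A toy sketch of the underlying trade-off is given below (not the CTAS algorithm): the predicted miss distance is treated as Gaussian with a standard deviation that grows with look-ahead time, so the conflict prediction is more uncertain early on, while the assumed maneuver cost rises as the remaining time shrinks. All numbers are hypothetical.

```python
# Toy sketch of the early-vs-late trade-off (not the CTAS algorithm): the
# predicted miss distance is modeled as Gaussian with a standard deviation
# that grows with look-ahead time, while the assumed maneuver cost rises as
# the remaining time shrinks. All numbers are hypothetical.
import math

def conflict_probability(predicted_miss_nm, lookahead_min,
                         sep_nm=5.0, sigma_rate_nm_per_min=0.25):
    """P(|actual miss distance| < required separation) under Gaussian error."""
    sigma = max(sigma_rate_nm_per_min * lookahead_min, 1e-6)
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return phi((sep_nm - predicted_miss_nm) / sigma) - phi((-sep_nm - predicted_miss_nm) / sigma)

def maneuver_cost(lookahead_min, base_cost=40.0):
    """Assumed cost model: later maneuvers require sharper, costlier path changes."""
    return base_cost * 20.0 / max(lookahead_min, 1.0)

predicted_miss = 3.0                        # nautical miles at closest approach
for t in (20, 15, 10, 5):                   # minutes before predicted closest approach
    print(f"t = {t:2d} min: P(conflict) = {conflict_probability(predicted_miss, t):.2f}, "
          f"maneuver cost = {maneuver_cost(t):5.1f}")
```

As the look-ahead time shrinks, the conflict probability sharpens toward 0 or 1 while the assumed resolution cost climbs, which is the tension the optimized resolution timing described above is balancing.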
NASA Astrophysics Data System (ADS)
Xu, Yadong; Serre, Marc L.; Reyes, Jeanette M.; Vizuete, William
2017-10-01
We have developed a Bayesian Maximum Entropy (BME) framework that integrates observations from a surface monitoring network and predictions from a Chemical Transport Model (CTM) to create improved exposure estimates that can be resolved into any spatial and temporal resolution. The flexibility of the framework allows for input of data in any choice of time scales and CTM predictions of any spatial resolution with varying associated degrees of estimation error and cost in terms of implementation and computation. This study quantifies the impact on exposure estimation error due to these choices by first comparing estimation errors when BME relied on ozone concentration data either as an hourly average, the daily maximum 8-h average (DM8A), or the daily 24-h average (D24A). Our analysis found that the use of DM8A and D24A data, although less computationally intensive, reduced estimation error more when compared to the use of hourly data. This was primarily due to the poorer CTM model performance in the hourly average predicted ozone. Our second analysis compared spatial variability and estimation errors when BME relied on CTM predictions with a grid cell resolution of 12 × 12 km2 versus a coarser resolution of 36 × 36 km2. Our analysis found that integrating the finer grid resolution CTM predictions not only reduced estimation error, but also increased the spatial variability in daily ozone estimates by 5 times. This improvement was due to the improved spatial gradients and model performance found in the finer resolved CTM simulation. The integration of observations and model predictions that is permitted in a BME framework continues to be a powerful approach for improving exposure estimates of ambient air pollution. The results of this analysis demonstrate the importance of also understanding model performance variability and its implications for exposure error.
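As a simplified stand-in for the data fusion step (inverse-variance weighting rather than the full BME machinery), the sketch below combines a monitor observation with a CTM prediction and shows how the assumed model-error variance, which improves with finer grid resolution and better model performance, controls the fused estimate. Values are hypothetical.

```python
# Simplified stand-in for the data fusion idea (inverse-variance weighting of
# a monitor observation and a CTM prediction), not the BME framework itself.
# It shows how the assumed model-error variance controls the fused estimate.
def fuse(obs, obs_var, ctm, ctm_var):
    """Precision-weighted combination of observation and model prediction."""
    w_obs = 1.0 / obs_var
    w_ctm = 1.0 / ctm_var
    estimate = (w_obs * obs + w_ctm * ctm) / (w_obs + w_ctm)
    variance = 1.0 / (w_obs + w_ctm)
    return estimate, variance

obs, obs_var = 52.0, 4.0                  # hypothetical monitor DM8A ozone (ppb) and error variance
for ctm_var in (9.0, 36.0, 144.0):        # finer grid / better model -> smaller model-error variance
    est, var = fuse(obs, obs_var, ctm=44.0, ctm_var=ctm_var)
    print(f"CTM error variance {ctm_var:5.0f}: fused estimate {est:5.1f} ppb "
          f"(variance {var:.1f})")
```

Away from monitors the model prediction dominates, so a smaller CTM error variance both tightens the fused estimate and lets more of the model's spatial gradients show through, mirroring the resolution effect reported above.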
Cost-effectiveness of the US Geological Survey stream-gaging program in Arkansas
Darling, M.E.; Lamb, T.E.
1984-01-01
This report documents the results of a cost-effectiveness analysis of the stream-gaging program in Arkansas. Data uses and funding sources were identified for the daily-discharge stations. All daily-discharge stations were found to be in one or more data-use categories, and none were candidates for alternate methods that would result in discontinuation or conversion to a partial record station. The costs of operating the daily-discharge stations, together with the routing costs to partial record stations, crest gages, pollution control stations, and seven recording ground-water stations, were evaluated in the Kalman-Filtering Cost-Effective Resource Allocation (K-CERA) analysis. This operation under current practices requires a budget of $292,150. The average standard error of estimate of streamflow record for the Arkansas District was determined to be 33 percent.
The effect of misclassification errors on case mix measurement.
Sutherland, Jason M; Botz, Chas K
2006-12-01
Case mix systems have been implemented for hospital reimbursement and performance measurement across Europe and North America. Case mix categorizes patients into discrete groups based on clinical information obtained from patient charts in an attempt to identify clinical or cost differences among these groups. The diagnosis related group (DRG) case mix system is the most common methodology, with variants adopted in many countries. External validation studies of coding quality have confirmed that widespread variability exists between originally recorded diagnoses and re-abstracted clinical information. DRG assignment errors in hospitals that share patient-level cost data for the purpose of establishing cost weights affect cost weight accuracy. The purpose of this study is to estimate bias in cost weights due to measurement error in reported clinical information. DRG assignment error rates are simulated based on recent clinical re-abstraction study results. Our simulation study estimates that 47% of cost weights representing the least severe cases are overweighted by 10%, while 32% of cost weights representing the most severe cases are underweighted by 10%. Applying the simulated weights to a cross-section of hospitals, we find that teaching hospitals tend to be underweighted. Since inaccurate cost weights challenge the ability of case mix systems to accurately reflect patient mix and may lead to distortions in hospital funding, bias in hospital case mix measurement highlights the role clinical data quality plays in hospital funding in countries that use DRG-type case mix systems. The quality of clinical information from hospitals that contribute financial data for establishing cost weights should be carefully considered.
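The mechanism can be illustrated with a small simulation in the spirit of the study (the error rates and cost distributions below are assumed for illustration, not the re-abstraction figures): when some severe cases are recorded as non-severe and vice versa, weights computed from the recorded groups are pulled toward each other, overweighting the least severe group and underweighting the most severe one.

# Toy simulation of cost-weight bias from DRG misclassification (assumed parameters).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_severe = rng.random(n) < 0.3                    # 30% of cases truly severe (assumed)
cost = np.where(true_severe,
                rng.lognormal(9.6, 0.5, n),          # severe cases cost more on average
                rng.lognormal(8.8, 0.5, n))

# assumed coding errors: 15% of severe cases under-coded, 5% of non-severe over-coded
recorded_severe = true_severe.copy()
recorded_severe[true_severe & (rng.random(n) < 0.15)] = False
recorded_severe[~true_severe & (rng.random(n) < 0.05)] = True

def weight(group):
    return cost[group].mean() / cost.mean(), cost[~group].mean() / cost.mean()

print("severe / non-severe weights, true coding:     %.3f / %.3f" % weight(true_severe))
print("severe / non-severe weights, recorded coding: %.3f / %.3f" % weight(recorded_severe))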
Cromwell, I; Ferreira, Z; Smith, L; van der Hoek, K; Ogilvie, G; Coldman, A; Peacock, S J
2016-02-01
We set out to assess the health care resource utilization and cost of cervical cancer from the perspective of a single-payer health care system. Retrospective observational data for women diagnosed with cervical cancer in British Columbia between 2004 and 2009 were analyzed to calculate patient-level resource utilization patterns from diagnosis to death or 5-year discharge. Domains of resource use within the scope of this cost analysis were chemotherapy, radiotherapy, and brachytherapy administered by the BC Cancer Agency; resource utilization related to hospitalization and outpatient visits as recorded by the B.C. Ministry of Health; medically required services billed under the B.C. Medical Services Plan; and prescriptions dispensed under British Columbia's health insurance programs. Unit costs were applied to radiotherapy and brachytherapy, producing per-patient costs. The mean cost per case of treating cervical cancer in British Columbia was $19,153 (standard error: $3,484). Inpatient hospitalizations, at 35%, represented the largest proportion of the total cost (95% confidence interval: 32.9% to 36.9%). Costs were compared for subgroups of the total cohort. As health care systems change the way they manage, screen for, and prevent cervical cancer, cost-effectiveness evaluations of the overall approach will require up-to-date data for resource utilization and costs. We provide information suitable for such a purpose and also identify factors that influence costs.
E-prescribing errors in community pharmacies: exploring consequences and contributing factors.
Odukoya, Olufunmilola K; Stone, Jamie A; Chui, Michelle A
2014-06-01
To explore types of e-prescribing errors in community pharmacies and their potential consequences, as well as the factors that contribute to e-prescribing errors. Data collection involved performing 45 total hours of direct observations in five pharmacies. Follow-up interviews were conducted with 20 study participants. Transcripts from observations and interviews were subjected to content analysis using NVivo 10. Pharmacy staff detected 75 e-prescription errors during the 45 h observation in pharmacies. The most common e-prescribing errors were wrong drug quantity, wrong dosing directions, wrong duration of therapy, and wrong dosage formulation. Participants estimated that 5 in 100 e-prescriptions have errors. Drug classes that were implicated in e-prescribing errors were antiinfectives, inhalers, ophthalmic, and topical agents. The potential consequences of e-prescribing errors included increased likelihood of the patient receiving incorrect drug therapy, poor disease management for patients, additional work for pharmacy personnel, increased cost for pharmacies and patients, and frustrations for patients and pharmacy staff. Factors that contribute to errors included: technology incompatibility between pharmacy and clinic systems, technology design issues such as use of auto-populate features and dropdown menus, and inadvertently entering incorrect information. Study findings suggest that a wide range of e-prescribing errors is encountered in community pharmacies. Pharmacists and technicians perceive that causes of e-prescribing errors are multidisciplinary and multifactorial, that is to say e-prescribing errors can originate from technology used in prescriber offices and pharmacies. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Design of surface-water data networks for regional information
Moss, Marshall E.; Gilroy, E.J.; Tasker, Gary D.; Karlinger, M.R.
1982-01-01
This report describes a technique, Network Analysis of Regional Information (NARI), and the existing computer procedures that have been developed for the specification of the regional information-cost relation for several statistical parameters of streamflow. The measure of information used is the true standard error of estimate of a regional logarithmic regression. The cost is a function of the number of stations at which hydrologic data are collected and the number of years for which the data are collected. The technique can be used to obtain either (1) a minimum cost network that will attain a prespecified accuracy and reliability or (2) a network that maximizes information given a set of budgetary and time constraints.
Man power/cost estimation model: Automated planetary projects
NASA Technical Reports Server (NTRS)
Kitchen, L. D.
1975-01-01
A manpower/cost estimation model is developed based on a detailed financial analysis of over 30 million raw data points, which are then compacted by more than three orders of magnitude to the level at which the model is applicable. The major parameter of expenditure is manpower (specifically direct labor hours) for all spacecraft subsystem and technical support categories. The resultant model provides a mean absolute error of less than fifteen percent for the eight programs comprising the model data base. The model includes cost-saving inheritance factors, broken down into four levels, for estimating follow-on programs where hardware and design inheritance are evident or expected.
Shaw, Andrew J; Ingham, Stephen A; Fudge, Barry W; Folland, Jonathan P
2013-12-01
This study assessed the between-test reliability of oxygen cost (OC) and energy cost (EC) in distance runners, and contrasted it with the smallest worthwhile change (SWC) of these measures. OC and EC displayed similar levels of within-subject variation (typical error < 3.85%). However, the typical error (2.75% vs 2.74%) was greater than the SWC (1.38% vs 1.71%) for both OC and EC, respectively, indicating insufficient sensitivity to confidently detect small, but meaningful, changes in OC and EC.
Response cost, reinforcement, and children's Porteus Maze qualitative performance.
Neenan, D M; Routh, D K
1986-09-01
Sixty fourth-grade children were given two different series of the Porteus Maze Test. The first series was given as a baseline, and the second series was administered under one of four different experimental conditions: control, response cost, positive reinforcement, or negative verbal feedback. Response cost and positive reinforcement, but not negative verbal feedback, led to significant decreases in the number of all types of qualitative errors in relation to the control group. The reduction of nontargeted as well as targeted errors provides evidence for the generalized effects of response cost and positive reinforcement.
Perceived Cost and Intrinsic Motor Variability Modulate the Speed-Accuracy Trade-Off
Bertucco, Matteo; Bhanpuri, Nasir H.; Sanger, Terence D.
2015-01-01
Fitts’ Law describes the speed-accuracy trade-off of human movements, and it is an elegant strategy that compensates for random and uncontrollable noise in the motor system. The control strategy during targeted movements may also take into account the rewards or costs of any outcomes that may occur. The aim of this study was to test the hypothesis that movement time in Fitts’ Law emerges not only from the accuracy constraints of the task, but also depends on the perceived cost of error for missing the targets. Subjects were asked to touch targets on an iPad® screen with different costs for missed targets. We manipulated the probability of error by comparing children with dystonia (who are characterized by increased intrinsic motor variability) to typically developing children. The results show a strong effect of the cost of error on the Fitts’ Law relationship characterized by an increase in movement time as cost increased. In addition, we observed a greater sensitivity to increased cost for children with dystonia, and this behavior appears to minimize the average cost. The findings support a proposed mathematical model that explains how movement time in a Fitts-like task is related to perceived risk. PMID:26447874
Cost comparison of unit dose and traditional drug distribution in a long-term-care facility.
Lepinski, P W; Thielke, T S; Collins, D M; Hanson, A
1986-11-01
Unit dose and traditional drug distribution systems were compared in a 352-bed long-term-care facility by analyzing nursing time, medication-error rate, medication costs, and waste. Time spent by nurses in preparing, administering, charting, and other tasks associated with medications was measured with a stop-watch on four different nursing units during six-week periods before and after the nursing home began using unit dose drug distribution. Medication-error rate before and after implementation of the unit dose system was determined by patient profile audits and medication inventories. Medication costs consisted of patient billing costs (acquisition cost plus fee) and cost of medications destroyed. The unit dose system required a projected 1507.2 hours less nursing time per year. Mean medication-error rates were 8.53% and 0.97% for the traditional and unit dose systems, respectively. Potential annual savings because of decreased medication waste with the unit dose system were $2238.72. The net increase in cost for the unit dose system was estimated at $615.05 per year, or approximately $1.75 per patient. The unit dose system appears safer and more time-efficient than the traditional system, although its costs are higher.
Sustainable Mining Land Use for Lignite Based Energy Projects
NASA Astrophysics Data System (ADS)
Dudek, Michal; Krysa, Zbigniew
2017-12-01
This research discusses the economic viability of complex lignite-based energy projects and its impact on sustainable land use, with respect to project risk and uncertainty, economics, optimisation (e.g. Lerchs-Grossmann), and the importance of lignite as a fuel whose value may be expressed in situ as a deposit of energy. The sensitivity analysis and simulation consider estimated variable land acquisition costs, geostatistics, 3D deposit block modelling, the electricity price treated as the project product price, power station efficiency and power station lignite processing unit cost, CO2 allowance costs, mining unit cost, and lignite availability treated as the kriging estimation error of lignite reserves. The investigated parameters have a nonlinear influence on the results, so the economically viable amount of lignite in the optimal pit varies, which in turn has a nonlinear impact on the land area required for the mining operation.
Essays in financial economics and econometrics
NASA Astrophysics Data System (ADS)
La Spada, Gabriele
Chapter 1 (my job market paper) asks the following question: Do asset managers reach for yield because of competitive pressures in a low-rate environment? I propose a tournament model of money market funds (MMFs) to study this issue. I show that funds with different costs of default respond differently to changes in interest rates, and that it is important to distinguish the role of risk-free rates from that of risk premia. An increase in the risk premium leads funds with lower default costs to increase risk-taking, while funds with higher default costs reduce risk-taking. Without changes in the premium, low risk-free rates reduce risk-taking. My empirical analysis shows that these predictions are consistent with the risk-taking of MMFs during the 2006-2008 period. Chapter 2, co-authored with Fabrizio Lillo and published in Studies in Nonlinear Dynamics and Econometrics (2014), studies the effect of round-off error (or discretization) on stationary Gaussian long-memory processes. For large lags, the autocovariance is rescaled by a factor smaller than one, and we compute this factor exactly. Hence, the discretized process has the same Hurst exponent as the underlying one. We show that in the presence of round-off error, two common estimators of the Hurst exponent, the local Whittle (LW) estimator and the detrended fluctuation analysis (DFA), are severely negatively biased in finite samples. We derive conditions for consistency and asymptotic normality of the LW estimator applied to discretized processes and compute the asymptotic properties of the DFA for generic long-memory processes that encompass discretized processes. Chapter 3, co-authored with Fabrizio Lillo, studies the effect of round-off error on integrated Gaussian processes with possibly correlated increments. We derive the variance and kurtosis of the realized increment process in the limit of both "small" and "large" round-off errors, and its autocovariance for large lags. We propose novel estimators for the variance and lag-one autocorrelation of the underlying, unobserved increment process. We also show that for fractionally integrated processes, the realized increments have the same Hurst exponent as the underlying ones, but the LW estimator applied to the realized series is severely negatively biased in medium-sized samples.
A Very Low Cost BCH Decoder for High Immunity of On-Chip Memories
NASA Astrophysics Data System (ADS)
Seo, Haejun; Han, Sehwan; Heo, Yoonseok; Cho, Taewon
BCH (Bose-Chaudhuri-Hocquenghem) codes, a class of cyclic block codes, have very strong error-correcting ability, which is vital for error protection in memory systems. Among the various BCH decoding algorithms, the PGZ (Peterson-Gorenstein-Zierler) algorithm is advantageous because it corrects errors through simple calculations for a given t value. However, it is problematic when a division by zero occurs in the case ν ≠ t. In this paper, the circuit is simplified by a multi-mode hardware architecture that handles ν = 0 to 3. First, production cost is lower thanks to the smaller number of gates. Second, the reduced power consumption can lengthen the recharging period. The very low cost and simple datapath make our design a good choice as ECC (error correction code/circuit) for memory systems in small-footprint SoCs (Systems on Chip).
Vosoughi, Aram; Smith, Paul Taylor; Zeitouni, Joseph A; Sodeman, Gregori M; Jorda, Merce; Gomez-Fernandez, Carmen; Garcia-Buitrago, Monica; Petito, Carol K; Chapman, Jennifer R; Campuzano-Zuluaga, German; Rosenberg, Andrew E; Kryvenko, Oleksandr N
2018-04-30
Frozen section telepathology interpretation experience has largely been limited to practices with locations significantly distant from one another and a sporadic need for frozen section diagnosis. In 2010 we established a real-time non-robotic telepathology system in a very active cancer center for daily frozen section service. Herein, we evaluate its accuracy compared to direct microscopic interpretation performed in the main hospital by the same faculty, and its cost-efficiency, over a 1-year period. Of 643 cases (1416 parts) requiring intraoperative consultation, 333 cases (690 parts) were examined by telepathology and 310 cases (726 parts) by direct microscopy. Corresponding discrepancy rates were 2.6% (18 cases: 6 (0.9%) sampling and 12 (1.7%) diagnostic errors) and 3.2% (23 cases: 8 (1.1%) sampling and 15 (2.1%) diagnostic errors), P=.63. The sensitivity and specificity of intraoperative frozen diagnosis were 0.92 and 0.99, respectively, in telepathology, and 0.90 and 0.99, respectively, in direct microscopy. There was no correlation of error incidence with the postgraduate year level of residents involved in the telepathology service. Cost analysis indicated that the time saved by telepathology was worth $19,691 over the one-year study period, while the capital cost of establishing the system was $8,924. Thus, real-time non-robotic telepathology is a reliable and easy-to-use tool for frozen section evaluation in busy clinical settings, especially when the frozen section service involves more than one hospital, and it is cost-efficient when travel is a component of the service. Copyright © 2018. Published by Elsevier Inc.
48 CFR 36.608 - Liability for Government costs resulting from design errors or deficiencies.
Code of Federal Regulations, 2013 CFR
2013-10-01
Section 36.608, Liability for Government costs resulting from design errors or deficiencies (Title 48, Federal Acquisition Regulations System; Federal Acquisition Regulation; Special Categories of Contracting; Construction and Architect-Engineer Contracts; Architect-Engineer Service...).
Robust nonlinear canonical correlation analysis: application to seasonal climate forecasting
NASA Astrophysics Data System (ADS)
Cannon, A. J.; Hsieh, W. W.
2008-02-01
Robust variants of nonlinear canonical correlation analysis (NLCCA) are introduced to improve performance on datasets with low signal-to-noise ratios, for example those encountered when making seasonal climate forecasts. The neural network model architecture of standard NLCCA is kept intact, but the cost functions used to set the model parameters are replaced with more robust variants. The Pearson product-moment correlation in the double-barreled network is replaced by the biweight midcorrelation, and the mean squared error (mse) in the inverse mapping networks can be replaced by the mean absolute error (mae). Robust variants of NLCCA are demonstrated on a synthetic dataset and are used to forecast sea surface temperatures in the tropical Pacific Ocean based on the sea level pressure field. Results suggest that adoption of the biweight midcorrelation can lead to improved performance, especially when a strong, common event exists in both predictor/predictand datasets. Replacing the mse by the mae leads to improved performance on the synthetic dataset, but not on the climate dataset except at the longest lead time, which suggests that the appropriate cost function for the inverse mapping networks is more problem dependent.
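For reference, one common formulation of the biweight midcorrelation (the robust replacement for the Pearson correlation described above) is sketched below; this is a generic implementation for illustration, not the authors' code, and the tuning constant c = 9 is the conventional choice.

# Minimal biweight midcorrelation sketch; gross outliers receive zero weight.
import numpy as np

def biweight_midcorrelation(x, y, c=9.0):
    def weighted(v):
        med = np.median(v)
        mad = np.median(np.abs(v - med))             # median absolute deviation
        u = (v - med) / (c * mad)
        w = np.where(np.abs(u) < 1.0, (1.0 - u**2) ** 2, 0.0)
        return (v - med) * w
    xt, yt = weighted(np.asarray(x, float)), weighted(np.asarray(y, float))
    return np.sum(xt * yt) / np.sqrt(np.sum(xt**2) * np.sum(yt**2))

rng = np.random.default_rng(5)
a = rng.normal(size=200)
b = a + rng.normal(scale=0.5, size=200)
b[:3] += 20.0                                        # a few gross outliers
print(np.corrcoef(a, b)[0, 1], biweight_midcorrelation(a, b))  # Pearson vs. robust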
Impact of Medicare Part D on out-of-pocket drug costs and medical use for patients with cancer.
Kircher, Sheetal M; Johansen, Michael E; Nimeiri, Halla S; Richardson, Caroline R; Davis, Matthew M
2014-11-01
Medicare Part D was designed to reduce out-of-pocket (OOP) costs for Medicare beneficiaries, but to the authors' knowledge the extent to which this occurred for patients with cancer has not been measured to date. The objective of the current study was to examine the impact of Medicare Part D eligibility on OOP cost for prescription drugs and use of medical services among patients with cancer. Using the Medical Expenditure Panel Survey (MEPS) for the years 2002 through 2010, a differences-in-differences analysis estimated the effects of Medicare Part D eligibility on OOP pharmaceutical costs and medical use. The authors compared per capita OOP cost and use between Medicare beneficiaries (aged ≥65 years) with cancer and near-elderly patients aged 55 to 64 years with cancer. Statistical weights were used to generate nationally representative estimates. A total of 1878 near-elderly and 4729 individuals with Medicare were included (total of 6607 individuals). The mean OOP pharmaceutical cost for Medicare beneficiaries before the enactment of Part D was $1158 (standard error, ±$52) and decreased to $501 (standard error, ±$30), a decline of 43%. Compared with changes in OOP pharmaceutical costs for nonelderly patients with cancer over the same period, the implementation of Medicare Part D was associated with a further reduction of $356 per person. Medicare Part D appeared to have no significant impact on the use of medications, hospitalizations, or emergency department visits, but was associated with a reduction of 1.55 in outpatient visits. Medicare Part D has reduced OOP prescription drug costs and outpatient visits for seniors with cancer beyond trends observed for younger patients, with no major impact on the use of other medical services noted. © 2014 American Cancer Society.
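The differences-in-differences logic can be sketched as a simple interaction regression on synthetic data; the variable names and the simulated effect size (set to the reported $356 reduction) are illustrative, and the actual analysis used MEPS survey weights and additional controls.

# Hedged differences-in-differences sketch on synthetic data (illustrative names/values).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 4000
df = pd.DataFrame({
    "medicare": rng.integers(0, 2, n),   # 1 = aged 65+ with cancer, 0 = near-elderly 55-64
    "post":     rng.integers(0, 2, n),   # 1 = after Part D implementation, 0 = before
})
df["oop_rx"] = (900
                - 300 * df["post"]
                + 250 * df["medicare"]
                - 356 * df["medicare"] * df["post"]  # true simulated policy effect
                + rng.normal(0, 200, n))

m = smf.ols("oop_rx ~ medicare * post", data=df).fit()
print(m.params["medicare:post"])          # the difference-in-differences estimate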
Improving laboratory data entry quality using Six Sigma.
Elbireer, Ali; Le Chasseur, Julie; Jackson, Brooks
2013-01-01
The Uganda Makerere University provides clinical laboratory support to over 70 clients in Uganda. With increased volume, manual data entry errors have steadily increased, prompting laboratory managers to employ the Six Sigma method to evaluate and reduce their problems. The purpose of this paper is to describe how laboratory data entry quality was improved by using Six Sigma. The Six Sigma Quality Improvement (QI) project team followed a sequence of steps, starting with defining project goals, measuring data entry errors to assess current performance, analyzing data and determining data-entry error root causes. Finally the team implemented changes and control measures to address the root causes and to maintain improvements. Establishing the Six Sigma project required considerable resources and maintaining the gains requires additional personnel time and dedicated resources. After initiating the Six Sigma project, there was a 60.5 percent reduction in data entry errors from 423 errors a month (i.e. 4.34 Six Sigma) in the first month, down to an average 166 errors/month (i.e. 4.65 Six Sigma) over 12 months. The team estimated the average cost of identifying and fixing a data entry error to be $16.25 per error. Thus, reducing errors by an average of 257 errors per month over one year has saved the laboratory an estimated $50,115 a year. The Six Sigma QI project provides a replicable framework for Ugandan laboratory staff and other resource-limited organizations to promote quality environment. Laboratory staff can deliver excellent care at a lower cost, by applying QI principles. This innovative QI method of reducing data entry errors in medical laboratories may improve the clinical workflow processes and make cost savings across the health care continuum.
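The reported savings follow directly from the quoted figures, as the short calculation below shows; the percentage reduction computed this way is about 60.8 percent, slightly above the quoted 60.5 percent, presumably because the monthly error counts are rounded averages.

# Arithmetic behind the reported error reduction and savings (figures from the abstract).
errors_before = 423          # data entry errors per month at baseline
errors_after = 166           # average errors per month over the following 12 months
cost_per_error = 16.25       # USD to identify and fix one data entry error

reduction = (errors_before - errors_after) / errors_before
annual_savings = (errors_before - errors_after) * cost_per_error * 12
print(f"reduction: {reduction:.1%}")               # ~60.8%
print(f"annual savings: ${annual_savings:,.0f}")   # $50,115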
GazeParser: an open-source and multiplatform library for low-cost eye tracking and analysis.
Sogo, Hiroyuki
2013-09-01
Eye movement analysis is an effective method for research on visual perception and cognition. However, recordings of eye movements present practical difficulties related to the cost of the recording devices and the programming of device controls for use in experiments. GazeParser is an open-source library for low-cost eye tracking and data analysis; it consists of a video-based eyetracker and libraries for data recording and analysis. The libraries are written in Python and can be used in conjunction with PsychoPy and VisionEgg experimental control libraries. Three eye movement experiments are reported on performance tests of GazeParser. These showed that the means and standard deviations for errors in sampling intervals were less than 1 ms. Spatial accuracy ranged from 0.7° to 1.2°, depending on participant. In gap/overlap tasks and antisaccade tasks, the latency and amplitude of the saccades detected by GazeParser agreed with those detected by a commercial eyetracker. These results showed that the GazeParser demonstrates adequate performance for use in psychological experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clark, Brett W.; Diaz, Kimberly A.; Ochiobi, Chinaza Darlene
2015-09-01
3D printing, originally known as additive manufacturing, is a process of making 3-dimensional solid objects from a CAD file. This groundbreaking technology is widely used for industrial and biomedical purposes such as building objects, tools, body parts and cosmetics. An important benefit of 3D printing is the cost reduction and manufacturing flexibility; complex parts are built at a fraction of the price. However, layer-by-layer printing of complex shapes adds error due to the surface roughness. Any such error results in poor-quality products with inaccurate dimensions. The main purpose of this research is to measure the amount of printing error for parts with different geometric shapes and to analyze it to find optimal printing settings that minimize the error. We use a Design of Experiments framework, and focus on studying parts with cone and ellipsoid shapes. We found that the orientation and the shape of the geometry have a significant effect on the printing error. From our analysis, we also determined the optimal orientation that gives the least printing error.
Reliability and coverage analysis of non-repairable fault-tolerant memory systems
NASA Technical Reports Server (NTRS)
Cox, G. W.; Carroll, B. D.
1976-01-01
A method was developed for the construction of probabilistic state-space models for nonrepairable systems. Models were developed for several systems which achieved reliability improvement by means of error-coding, modularized sparing, massive replication and other fault-tolerant techniques. From the models developed, sets of reliability and coverage equations for the systems were developed. Comparative analyses of the systems were performed using these equation sets. In addition, the effects of varying subunit reliabilities on system reliability and coverage were described. The results of these analyses indicated that a significant gain in system reliability may be achieved by use of combinations of modularized sparing, error coding, and software error control. For sufficiently reliable system subunits, this gain may far exceed the reliability gain achieved by use of massive replication techniques, yet result in a considerable saving in system cost.
NASA Astrophysics Data System (ADS)
Mosier, Gary E.; Femiano, Michael; Ha, Kong; Bely, Pierre Y.; Burg, Richard; Redding, David C.; Kissil, Andrew; Rakoczy, John; Craig, Larry
1998-08-01
All current concepts for the NGST are innovative designs which present unique systems-level challenges. The goals are to outperform existing observatories at a fraction of the current price/performance ratio. Standard practices for developing systems error budgets, such as the 'root-sum-of-squares' error tree, are insufficient for designs of this complexity. Simulation and optimization are the tools needed for this project; in particular tools that integrate controls, optics, thermal and structural analysis, and design optimization. This paper describes such an environment, which allows sub-system performance specifications to be analyzed parametrically, and includes optimizing metrics that capture the science requirements. The resulting systems-level design trades are greatly facilitated, and significant cost savings can be realized. This modeling environment, built around a tightly integrated combination of commercial off-the-shelf and in-house-developed codes, provides the foundation for linear and non-linear analysis in both the time and frequency domains, statistical analysis, and design optimization. It features an interactive user interface and integrated graphics that allow highly effective, real-time work to be done by multidisciplinary design teams. For the NGST, it has been applied to issues such as pointing control, dynamic isolation of spacecraft disturbances, wavefront sensing and control, on-orbit thermal stability of the optics, and development of systems-level error budgets. In this paper, results are presented from parametric trade studies that assess requirements for pointing control, structural dynamics, reaction wheel dynamic disturbances, and vibration isolation. These studies attempt to define requirements bounds such that the resulting design is optimized at the systems level, without attempting to optimize each subsystem individually. The performance metrics are defined in terms of image quality, specifically centroiding error and RMS wavefront error, which directly link to science requirements.
Analysis of tractable distortion metrics for EEG compression applications.
Bazán-Prieto, Carlos; Blanco-Velasco, Manuel; Cárdenas-Barrera, Julián; Cruz-Roldán, Fernando
2012-07-01
Coding distortion in lossy electroencephalographic (EEG) signal compression methods is evaluated through tractable objective criteria. The percentage root-mean-square difference, which is a global and relative indicator of the quality held by reconstructed waveforms, is the most widely used criterion. However, this parameter does not ensure compliance with clinical standard guidelines that specify limits to allowable noise in EEG recordings. As a result, expert clinicians may have difficulties interpreting the resulting distortion of the EEG for a given value of this parameter. Conversely, the root-mean-square error is an alternative criterion that quantifies distortion in understandable units. In this paper, we demonstrate that the root-mean-square error is better suited to control and to assess the distortion introduced by compression methods. The experiments conducted in this paper show that the use of the root-mean-square error as target parameter in EEG compression allows both clinicians and scientists to infer whether coding error is clinically acceptable or not at no cost for the compression ratio.
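For concreteness, one common definition of each criterion is sketched below on a synthetic trace; definitions of the PRD vary in the literature (for example, with or without mean removal), so this is only the plain form.

# Percentage root-mean-square difference (relative) vs. root-mean-square error (absolute).
import numpy as np

def prd(x, x_rec):
    return 100.0 * np.linalg.norm(x - x_rec) / np.linalg.norm(x)

def rmse(x, x_rec):
    return np.sqrt(np.mean((x - x_rec) ** 2))

x = 50.0 * np.sin(np.linspace(0, 20 * np.pi, 5000))           # toy trace, microvolts
x_rec = x + np.random.default_rng(2).normal(0, 2, x.size)     # reconstruction with error
print(f"PRD  = {prd(x, x_rec):.2f} %")
print(f"RMSE = {rmse(x, x_rec):.2f} microvolts")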
Borycki, Elizabeth M; Kushniruk, Andre W; Kuwata, Shigeki; Kannry, Joseph
2011-01-01
Electronic health records (EHRs) promise to improve and streamline healthcare through electronic entry and retrieval of patient data. Furthermore, based on a number of studies showing their positive benefits, they promise to reduce medical error and make healthcare safer. However, a growing body of literature has clearly documented that if EHRs are not designed properly, with usability as an important design goal, EHR deployment has the potential to increase rather than reduce medical error. In this paper we describe our approach to engineering (and reengineering) EHRs in order to increase their beneficial potential while at the same time improving their safety. The approach described in this paper involves an integration of the methods of usability analysis with video analysis of end users interacting with EHR systems, and extends the evaluation of the usability of EHRs to include the assessment of the impact of these systems on work practices. Using clinical simulations, we analyze human-computer interaction in real healthcare settings (in a portable, low-cost and high-fidelity manner) and include both artificial and naturalistic data collection to identify potential usability problems and sources of technology-induced error prior to widespread system release. Two case studies where the methods we have developed and refined have been applied at different levels of user-computer interaction are described.
Estimation of spatial-temporal gait parameters using a low-cost ultrasonic motion analysis system.
Qi, Yongbin; Soh, Cheong Boon; Gunawan, Erry; Low, Kay-Soon; Thomas, Rijil
2014-08-20
In this paper, a low-cost motion analysis system using a wireless ultrasonic sensor network is proposed and investigated. A methodology has been developed to extract spatial-temporal gait parameters including stride length, stride duration, stride velocity, stride cadence, and stride symmetry from 3D foot displacements estimated by the combination of spherical positioning technique and unscented Kalman filter. The performance of this system is validated against a camera-based system in the laboratory with 10 healthy volunteers. Numerical results show the feasibility of the proposed system with average error of 2.7% for all the estimated gait parameters. The influence of walking speed on the measurement accuracy of proposed system is also evaluated. Statistical analysis demonstrates its capability of being used as a gait assessment tool for some medical applications.
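Once 3D foot displacements are available, the spatial-temporal parameters follow from simple geometry between successive heel-strike events. The sketch below uses a synthetic forward trajectory and assumes the event indices are already known; the ultrasonic spherical positioning and unscented Kalman filtering steps of the paper are not reproduced.

# Illustrative stride-parameter extraction from a 3D foot trajectory (synthetic data).
import numpy as np

fs = 50.0                                            # assumed sampling rate, Hz
t = np.arange(0, 6, 1 / fs)
x = 1.1 * t + 0.1 * np.sin(2 * np.pi * t / 1.1)      # forward displacement, metres
pos = np.column_stack([x, np.zeros_like(t), np.zeros_like(t)])   # (N, 3) positions
heel_strikes = np.arange(5, len(t), int(1.1 * fs))               # assumed event samples

stride_len = np.linalg.norm(np.diff(pos[heel_strikes], axis=0), axis=1)   # m
stride_dur = np.diff(heel_strikes) / fs                                   # s
stride_vel = stride_len / stride_dur                                      # m/s
cadence = 60.0 / stride_dur                                               # strides per minute
print(stride_len.mean(), stride_dur.mean(), stride_vel.mean(), cadence.mean())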
The Significance of the Record Length in Flood Frequency Analysis
NASA Astrophysics Data System (ADS)
Senarath, S. U.
2013-12-01
Of all of the potential natural hazards, flood is the most costly in many regions of the world. For example, floods cause over a third of Europe's average annual catastrophe losses and affect about two thirds of the people impacted by natural catastrophes. Increased attention is being paid to determining flow estimates associated with pre-specified return periods so that flood-prone areas can be adequately protected against floods of particular magnitudes or return periods. Flood frequency analysis, which is conducted by using an appropriate probability density function that fits the observed annual maximum flow data, is frequently used for obtaining these flow estimates. Consequently, flood frequency analysis plays an integral role in determining the flood risk in flood prone watersheds. A long annual maximum flow record is vital for obtaining accurate estimates of discharges associated with high return period flows. However, in many areas of the world, flood frequency analysis is conducted with limited flow data or short annual maximum flow records. These inevitably lead to flow estimates that are subject to error. This is especially the case with high return period flow estimates. In this study, several statistical techniques are used to identify errors caused by short annual maximum flow records. The flow estimates used in the error analysis are obtained by fitting a log-Pearson III distribution to the flood time-series. These errors can then be used to better evaluate the return period flows in data limited streams. The study findings, therefore, have important implications for hydrologists, water resources engineers and floodplain managers.
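The record-length point can be made concrete with a small resampling experiment: fit a log-Pearson III distribution to a long synthetic annual-maximum series, then refit it to short subsamples and observe how the 100-year flow estimate scatters. The distribution parameters and record lengths below are assumptions for illustration.

# Log-Pearson III fit plus a resampling check of record-length effects (synthetic data).
import numpy as np
from scipy.stats import pearson3

rng = np.random.default_rng(3)
q = rng.lognormal(mean=6.0, sigma=0.5, size=80)      # synthetic 80-year annual maxima
log_q = np.log10(q)

def q_return(log_sample, T):
    skew, loc, scale = pearson3.fit(log_sample)      # fit in log space
    return 10 ** pearson3.ppf(1 - 1 / T, skew, loc=loc, scale=scale)

q100_full = q_return(log_q, 100)
q100_short = [q_return(rng.choice(log_q, 20, replace=False), 100) for _ in range(200)]
print(f"100-yr flow from the 80-yr record: {q100_full:,.0f}")
print(f"relative spread from 20-yr records: {np.std(q100_short) / np.mean(q100_short):.0%}")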
The economics of health care quality and medical errors.
Andel, Charles; Davidow, Stephen L; Hollander, Mark; Moreno, David A
2012-01-01
Hospitals have been looking for ways to improve quality and operational efficiency and cut costs for nearly three decades, using a variety of quality improvement strategies. However, based on recent reports, approximately 200,000 Americans die from preventable medical errors, including facility-acquired conditions, and millions may experience errors. In 2008, medical errors cost the United States $19.5 billion. About 87 percent, or $17 billion, was directly associated with additional medical costs, including ancillary services, prescription drug services, and inpatient and outpatient care, according to a study sponsored by the Society of Actuaries and conducted by Milliman in 2010. Additional costs of $1.4 billion were attributed to increased mortality rates, with $1.1 billion, or 10 million days of lost productivity from missed work, based on short-term disability claims. The authors estimate that the economic impact is much higher, perhaps nearly $1 trillion annually, when quality-adjusted life years (QALYs) are applied to those who die. Using the Institute of Medicine's (IOM) estimate of 98,000 deaths due to preventable medical errors annually in its 1999 report, To Err Is Human, and an average of ten lost years of life at $75,000 to $100,000 per year, there is a loss of $73.5 billion to $98 billion in QALYs for those deaths, conservatively. These numbers are much greater than those we cite from studies that explore the direct costs of medical errors. And if the estimate of a recent Health Affairs article is correct, with preventable deaths being ten times the IOM estimate, the cost is $735 billion to $980 billion. Quality care is less expensive care. It is better, more efficient, and by definition, less wasteful. It is the right care, at the right time, every time. It should mean that far fewer patients are harmed or injured. Obviously, quality care is not being delivered consistently throughout U.S. hospitals. Whatever the measure, poor quality is costing payers and society a great deal. However, health care leaders and professionals are focusing on quality and patient safety in ways they never have before because the economics of quality have changed substantially.
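The QALY figures quoted above follow from straightforward multiplication, reproduced in the sketch below using the values stated in the article.

# Reproducing the quoted QALY-loss arithmetic.
deaths_iom = 98_000                  # IOM estimate of annual preventable deaths
years_lost = 10                      # average life-years lost per death (authors' figure)
for value_per_qaly in (75_000, 100_000):
    loss = deaths_iom * years_lost * value_per_qaly
    print(f"${loss / 1e9:.1f}B at ${value_per_qaly:,}/QALY "
          f"(${10 * loss / 1e9:.0f}B if deaths are ten times the IOM estimate)")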
Risk management and measuring productivity with POAS--point of act system.
Akiyama, Masanori; Kondo, Tatsuya
2007-01-01
The concept of our system is not only to manage material flows, but also to provide an integrated management resource, a means of correcting errors in medical treatment, and applications to EBM through the data mining of medical records. Prior to the development of this system, electronic processing systems in hospitals did a poor job of accurately grasping medical practice and medical material flows. With POAS (Point of Act System), hospital managers can address the so-called 'man, money, material, and information' issues inherent in the costs of healthcare. The POAS system synchronizes with each department system, from finance and accounting, to pharmacy, to imaging, and allows information exchange. It enables complete management of man, material, money, and information. Our analysis has shown that this system has a remarkable investment effect, saving over four million dollars per year, through cost savings in logistics and business process efficiencies. In addition, the quality of care has been improved dramatically while error rates have been reduced, nearly to zero in some cases.
NASA Technical Reports Server (NTRS)
Page, J.
1981-01-01
The effects of an independent verification and integration (V and I) methodology on one class of application are described. Resource profiles are discussed. The development environment is reviewed. Seven measures are presented to test the hypothesis that V and I improve the development and product. The V and I methodology provided: (1) a decrease in requirements ambiguities and misinterpretation; (2) no decrease in design errors; (3) no decrease in the cost of correcting errors; (4) a decrease in the cost of system and acceptance testing; (5) an increase in early discovery of errors; (6) no improvement in the quality of software put into operation; and (7) a decrease in productivity and an increase in cost.
Dalley, C; Basarir, H; Wright, J G; Fernando, M; Pearson, D; Ward, S E; Thokula, P; Krishnankutty, A; Wilson, G; Dalton, A; Talley, P; Barnett, D; Hughes, D; Porter, N R; Reilly, J T; Snowden, J A
2015-04-01
Specialist Integrated Haematological Malignancy Diagnostic Services (SIHMDS) were introduced as a standard of care within the UK National Health Service to reduce diagnostic error and improve clinical outcomes. Two broad models of service delivery have become established: 'co-located' services operating from a single site and 'networked' services, with geographically separated laboratories linked by common management and information systems. Detailed systematic cost analysis has never been published on any established SIHMDS model. We used Activity Based Costing (ABC) to construct a cost model for our regional 'networked' SIHMDS covering a two-million population, based on activity in 2011. Overall running costs were estimated at £1 056 260 per annum (£733 400 excluding consultant costs), with individual running costs for the diagnosis, staging, disease monitoring and end of treatment assessment components of £723 138, £55 302, £184 152 and £94 134 per annum, respectively. The cost distribution by department was 28.5% for haematology, 29.5% for histopathology and 42% for genetics laboratories. Costs of the diagnostic pathways varied considerably; pathways for myelodysplastic syndromes and lymphoma were the most expensive, and the pathways for essential thrombocythaemia and polycythaemia vera the least. ABC analysis enables estimation of the running costs of a SIHMDS model composed of 'networked' laboratories. Similar cost analyses for other SIHMDS models covering varying populations are warranted to optimise quality and cost-effectiveness in delivery of modern haemato-oncology diagnostic services in the UK as well as internationally. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Laxy, Michael; Wilson, Edward C F; Boothby, Clare E; Griffin, Simon J
2017-12-01
There is uncertainty about the cost effectiveness of early intensive treatment versus routine care in individuals with type 2 diabetes detected by screening. To derive a trial-informed estimate of the incremental costs of intensive treatment as delivered in the Anglo-Danish-Dutch Study of Intensive Treatment in People with Screen-Detected Diabetes in Primary Care-Europe (ADDITION) trial and to revisit the long-term cost-effectiveness analysis from the perspective of the UK National Health Service. We analyzed the electronic primary care records of a subsample of the ADDITION-Cambridge trial cohort (n = 173). Unit costs of used primary care services were taken from the published literature. Incremental annual costs of intensive treatment versus routine care in years 1 to 5 after diagnosis were calculated using multilevel generalized linear models. We revisited the long-term cost-utility analyses for the ADDITION-UK trial cohort and reported results for ADDITION-Cambridge using the UK Prospective Diabetes Study Outcomes Model and the trial-informed cost estimates according to a previously developed evaluation framework. Incremental annual costs of intensive treatment over years 1 to 5 averaged £29.10 (standard error = £33.00) for consultations with general practitioners and nurses and £54.60 (standard error = £28.50) for metabolic and cardioprotective medication. For ADDITION-UK, over the 10-, 20-, and 30-year time horizon, adjusted incremental quality-adjusted life-years (QALYs) were 0.014, 0.043, and 0.048, and adjusted incremental costs were £1,021, £1,217, and £1,311, resulting in incremental cost-effectiveness ratios of £71,232/QALY, £28,444/QALY, and £27,549/QALY, respectively. Respective incremental cost-effectiveness ratios for ADDITION-Cambridge were slightly higher. The incremental costs of intensive treatment as delivered in the ADDITION-Cambridge trial were lower than expected. Given UK willingness-to-pay thresholds in patients with screen-detected diabetes, intensive treatment is of borderline cost effectiveness over a time horizon of 20 years and more. Copyright © 2017. Published by Elsevier Inc.
Prevention of medication errors: detection and audit.
Montesi, Germana; Lechi, Alessandro
2009-06-01
1. Medication errors have important implications for patient safety, and their identification is a main target in improving clinical practice, in order to prevent adverse events. 2. Error detection is the first crucial step. Approaches to this are likely to be different in research and routine care, and the most suitable must be chosen according to the setting. 3. The major methods for detecting medication errors and associated adverse drug-related events are chart review, computerized monitoring, administrative databases, and claims data, as well as direct observation, incident reporting, and patient monitoring. All of these methods have both advantages and limitations. 4. Reporting discloses medication errors, can trigger warnings, and encourages the diffusion of a culture of safe practice. Combining and comparing data from various sources increases the reliability of the system. 5. Error prevention can be planned by means of retroactive and proactive tools, such as audit and Failure Mode, Effect, and Criticality Analysis (FMECA). Audit is also an educational activity, which promotes high-quality care; it should be carried out regularly. In an audit cycle we can compare what is actually done against reference standards and put in place corrective actions to improve the performance of individuals and systems. 6. Patient safety must be the first aim in every setting, in order to build safer systems, learning from errors and reducing the human and fiscal costs.
NASA Technical Reports Server (NTRS)
Stewart, R. D.
1979-01-01
Price and Cost Estimating Program (PACE II) was developed to prepare man-hour and material cost estimates. Versatile and flexible tool significantly reduces computation time and errors and reduces typing and reproduction time involved in preparation of cost estimates.
Access Based Cost Estimation for Beddown Analysis
2006-03-23
This research expands upon the existing research by using Visual Basic for Applications (VBA) to further customize and streamline the...methods with the use of VBA. Calculations are completed in either underlying Form VBA code or through global modules accessible throughout the...query and SQL referencing. Attempts were made where possible to align data structures with possible external sources to minimize import errors and
Innovative and Cost Effective Remediation of Orbital Debris
2014-04-25
to face international opposition because it could be used offensively to disable spacecraft. Technical Analysis: Most of StreamSat's... LDR). They demonstrated droplet dispersion of less than 1 micro radian for some generators and devised an instrument for measuring the...error can be limited to less than one micro radian using existing technology and techniques. During transit, external forces will alter the path of
NASA Astrophysics Data System (ADS)
Xue, ShiChuan; Wu, JunJie; Xu, Ping; Yang, XueJun
2018-02-01
Quantum computing is a significant computing capability that is superior to classical computing because of its superposition feature. Distinguishing several quantum states from quantum algorithm outputs is often a vital computational task. In most cases, the quantum states tend to be non-orthogonal due to superposition; quantum mechanics has proved that perfect outcomes cannot be achieved by measurement, forcing repeated measurements. Hence, it is important to determine the optimum measuring method, which requires fewer repetitions and a lower error rate. However, extending current measurement approaches, which mainly target quantum cryptography, to multi-qubit situations for quantum computing confronts challenges, such as conducting global operations, which have considerable costs in the experimental realm. Therefore, in this study, we have proposed an optimum subsystem method to avoid these difficulties. We provide an analysis of the comparison between the reduced subsystem method and the global minimum error method for two-qubit problems; the conclusions have been verified experimentally. The results showed that the subsystem method could effectively discriminate non-orthogonal two-qubit states, such as separable states, entangled pure states, and mixed states; the cost of the experimental process was significantly reduced, in most circumstances, with an acceptable error rate. We believe the optimal subsystem method is the most valuable and promising approach for multi-qubit quantum computing applications.
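The global minimum-error benchmark referred to above is the Helstrom bound; a short sketch of that baseline for two assumed two-qubit pure states is given below (the reduced-subsystem measurement itself is not reproduced here).

# Helstrom (global minimum-error) bound for discriminating two assumed two-qubit states.
import numpy as np

def min_error_prob(rho0, rho1, p0=0.5, p1=0.5):
    eigvals = np.linalg.eigvalsh(p0 * rho0 - p1 * rho1)
    return 0.5 * (1.0 - np.abs(eigvals).sum())       # 0.5 * (1 - trace norm)

def ket(*amps):
    v = np.array(amps, dtype=complex)
    return v / np.linalg.norm(v)

# two non-orthogonal two-qubit states: |00> and (|00> + |11>)/sqrt(2)
psi0, psi1 = ket(1, 0, 0, 0), ket(1, 0, 0, 1)
rho0, rho1 = np.outer(psi0, psi0.conj()), np.outer(psi1, psi1.conj())
print(min_error_prob(rho0, rho1))    # > 0: perfect single-shot discrimination is impossible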
Draft versus finished sequence data for DNA and protein diagnostic signature development
Gardner, Shea N.; Lam, Marisa W.; Smith, Jason R.; Torres, Clinton L.; Slezak, Tom R.
2005-01-01
Sequencing pathogen genomes is costly, demanding careful allocation of limited sequencing resources. We built a computational Sequencing Analysis Pipeline (SAP) to guide decisions regarding the amount of genomic sequencing necessary to develop high-quality diagnostic DNA and protein signatures. SAP uses simulations to estimate the number of target genomes and close phylogenetic relatives (near neighbors or NNs) to sequence. We use SAP to assess whether draft data are sufficient or finished sequencing is required using Marburg and variola virus sequences. Simulations indicate that intermediate to high-quality draft with error rates of 10^-3 to 10^-5 (∼8× coverage) of target organisms is suitable for DNA signature prediction. Low-quality draft with error rates of ∼1% (3× to 6× coverage) of target isolates is inadequate for DNA signature prediction, although low-quality draft of NNs is sufficient, as long as the target genomes are of high quality. For protein signature prediction, sequencing errors in target genomes substantially reduce the detection of amino acid sequence conservation, even if the draft is of high quality. In summary, high-quality draft of target and low-quality draft of NNs appears to be a cost-effective investment for DNA signature prediction, but may lead to underestimation of predicted protein signatures. PMID:16243783
Interpolation for de-Dopplerisation
NASA Astrophysics Data System (ADS)
Graham, W. R.
2018-05-01
'De-Dopplerisation' is one aspect of a problem frequently encountered in experimental acoustics: deducing an emitted source signal from received data. It is necessary when source and receiver are in relative motion, and requires interpolation of the measured signal. This introduces error. In acoustics, typical current practice is to employ linear interpolation and reduce error by over-sampling. In other applications, more advanced approaches with better performance have been developed. Associated with this work is a large body of theoretical analysis, much of which is highly specialised. Nonetheless, a simple and compact performance metric is available: the Fourier transform of the 'kernel' function underlying the interpolation method. Furthermore, in the acoustics context, it is a more appropriate indicator than other, more abstract, candidates. On this basis, interpolators from three families previously identified as promising (piecewise-polynomial, windowed-sinc, and B-spline-based) are compared. The results show that significant improvements over linear interpolation can straightforwardly be obtained. The recommended approach is B-spline-based interpolation, which performs best irrespective of accuracy specification. Its only drawback is a pre-filtering requirement, which represents an additional implementation cost compared to other methods. If this cost is unacceptable, and aliasing errors (on re-sampling) up to approximately 1% can be tolerated, a family of piecewise-cubic interpolators provides the best alternative.
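The gap relative to linear interpolation is easy to demonstrate numerically; in the sketch below an interpolating cubic spline stands in for the recommended B-spline-based approach (it is not the same method, but it illustrates the accuracy difference on a band-limited tone). Signal and sampling parameters are arbitrary.

# Linear vs. cubic-spline interpolation error at off-grid sample times (toy example).
import numpy as np
from scipy.interpolate import CubicSpline

fs = 2000.0
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 150 * t)

t_query = t[:-1] + 0.37 / fs                 # off-grid query times (arbitrary fraction)
truth = np.sin(2 * np.pi * 150 * t_query)

lin = np.interp(t_query, t, x)
cub = CubicSpline(t, x)(t_query)
for name, y in [("linear", lin), ("cubic spline", cub)]:
    print(name, np.sqrt(np.mean((y - truth) ** 2)))  # RMS interpolation error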
Xanthium strumarium L. seed hull as a zero cost alternative for Rhodamine B dye removal.
Khamparia, Shraddha; Jaspal, Dipika Kaur
2017-07-15
Treatment of polluted water is considered one of the most important aspects of environmental science. The present study explores the decolorization potential of a low-cost natural adsorbent, Xanthium strumarium L. seed hull, for the adsorption of a toxic xanthene dye, Rhodamine B (RHB). Characterization of the adsorbent by Energy Dispersive Spectroscopy (EDS) revealed the presence of a high amount of carbon. Appreciable decolorization took place, which was confirmed by Fourier Transform Infrared Spectroscopy (FTIR) analysis showing shifts in peaks. Isothermal studies indicated multilayer adsorption following the Freundlich isotherm. The rate of adsorption followed second-order kinetics, indicating a chemical process with film diffusion dominating as the rate-governing step. Moreover, the paper aims to correlate the chemical and mathematical aspects, providing in-depth information on the studied treatment process. For proper assessment and validation, the experimental data were statistically treated with different error functions, namely the Chi-square test (χ²), the sum of absolute errors (EABS), and the normalized standard deviation (NSD). The practical applicability of the low-cost adsorbent was further evaluated by continuous column-mode studies, with 72.2% dye recovery. Xanthium strumarium L. proved to be an environmentally friendly, low-cost natural adsorbent for decolorizing RHB from aquatic systems. Copyright © 2017 Elsevier Ltd. All rights reserved.
Charles, Krista; Cannon, Margaret; Hall, Robert; Coustasse, Alberto
2014-01-01
Computerized provider order entry (CPOE) systems allow physicians to prescribe patient services electronically. In hospitals, CPOE essentially eliminates the need for handwritten paper orders and achieves cost savings through increased efficiency. The purpose of this research study was to examine the benefits of and barriers to CPOE adoption in hospitals to determine the effects on medical errors and adverse drug events (ADEs) and examine cost and savings associated with the implementation of this newly mandated technology. This study followed a methodology using the basic principles of a systematic review and referenced 50 sources. CPOE systems in hospitals were found to be capable of reducing medical errors and ADEs, especially when CPOE systems are bundled with clinical decision support systems designed to alert physicians and other healthcare providers of pending lab or medical errors. However, CPOE systems face major barriers associated with adoption in a hospital system, mainly high implementation costs and physicians' resistance to change.
An arbitrary-order staggered time integrator for the linear acoustic wave equation
NASA Astrophysics Data System (ADS)
Lee, Jaejoon; Park, Hyunseo; Park, Yoonseo; Shin, Changsoo
2018-02-01
We suggest a staggered time integrator whose order of accuracy can arbitrarily be extended to solve the linear acoustic wave equation. A strategy to select the appropriate order of accuracy is also proposed based on the error analysis that quantitatively predicts the truncation error of the numerical solution. This strategy not only reduces the computational cost several times, but also allows us to flexibly set the modelling parameters such as the time step length, grid interval and P-wave speed. It is demonstrated that the proposed method can almost eliminate temporal dispersive errors during long term simulations regardless of the heterogeneity of the media and time step lengths. The method can also be successfully applied to the source problem with an absorbing boundary condition, which is frequently encountered in the practical usage for the imaging algorithms or the inverse problems.
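For orientation, the lowest-order member of this family is the familiar second-order staggered (leapfrog) update for the first-order acoustic system $p_t = -\rho c^2 v_x$, $v_t = -(1/\rho)\,p_x$; the notation below is generic and is not taken from the paper, whose integrator extends the accuracy to arbitrary order.

```latex
v_{i+1/2}^{\,n+1/2} = v_{i+1/2}^{\,n-1/2} - \frac{\Delta t}{\rho\,\Delta x}\left(p_{i+1}^{\,n} - p_{i}^{\,n}\right),
\qquad
p_{i}^{\,n+1} = p_{i}^{\,n} - \frac{\rho c^{2}\,\Delta t}{\Delta x}\left(v_{i+1/2}^{\,n+1/2} - v_{i-1/2}^{\,n+1/2}\right).
```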
White, Robin R; Capper, Judith L
2014-03-01
The objective of this study was to use a precision nutrition model to simulate the relationship between diet formulation frequency and dairy cattle performance across various climates. Agricultural Modeling and Training Systems (AMTS) CattlePro diet-balancing software (Cornell Research Foundation, Ithaca, NY) was used to compare 3 diet formulation frequencies (weekly, monthly, or seasonal) and 3 levels of climate variability (hot, cold, or variable). Predicted daily milk yield (MY), metabolizable energy (ME) balance, and dry matter intake (DMI) were recorded for each frequency-variability combination. Economic analysis was conducted to calculate the predicted revenue over feed and labor costs. Diet formulation frequency affected ME balance and MY but did not affect DMI. Climate variability affected ME balance and DMI but not MY. The interaction between climate variability and formulation frequency did not affect ME balance, MY, or DMI. Formulating diets more frequently increased MY, DMI, and ME balance. Economic analysis showed that formulating diets weekly rather than seasonally could improve returns over variable costs by $25,000 per year for a moderate-sized (300-cow) operation. To achieve this increase in returns, an entire feeding system margin of error of <1% was required. Formulating monthly, rather than seasonally, may be a more feasible alternative as this requires a margin of error of only 2.5% for the entire feeding system. Feeding systems with a low margin of error must be developed to better take advantage of the benefits of precision nutrition. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Mistakes as Stepping Stones: Effects of Errors on Episodic Memory among Younger and Older Adults
ERIC Educational Resources Information Center
Cyr, Andrée-Ann; Anderson, Nicole D.
2015-01-01
The memorial costs and benefits of trial-and-error learning have clear pedagogical implications for students, and increasing evidence shows that generating errors during episodic learning can improve memory among younger adults. Conversely, the aging literature has found that errors impair memory among healthy older adults and has advocated for…
Understanding product cost vs. performance through an in-depth system Monte Carlo analysis
NASA Astrophysics Data System (ADS)
Sanson, Mark C.
2017-08-01
The manner in which an optical system is toleranced and compensated greatly affects the cost to build it. By having a detailed understanding of different tolerance and compensation methods, the end user can decide on the balance of cost and performance. A detailed, phased-approach Monte Carlo analysis can be used to demonstrate the tradeoffs between cost and performance. In complex, high-performance optical systems, performance is fine-tuned by making adjustments to the optical systems after they are initially built. This process enables the overall best system performance, without the need for fabricating components to stringent tolerance levels that often can be outside of a fabricator's manufacturing capabilities. A good simulation of as-built performance can interrogate different steps of the fabrication and build process. Such a simulation may aid the evaluation of whether the measured parameters are within the acceptable range of system performance at that stage of the build process. Finding errors before an optical system progresses further into the build process saves both time and money. Having the appropriate tolerances and compensation strategy tied to a specific performance level will optimize the overall product cost.
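A minimal Python sketch of a phased Monte Carlo tolerance analysis of this kind appears below; the quadratic merit function, tolerance half-widths and refocus compensator are invented placeholders, not the optical system analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, spec = 10_000, 0.05          # spec: max acceptable RMS wavefront error (waves)

# Hypothetical tolerances (half-widths of uniform distributions): two element
# decenters in mm and one tilt in mrad. Purely illustrative values.
tol = np.array([0.02, 0.02, 0.5])

def merit(x, focus=0.0):
    """Toy as-built merit function: quadratic sensitivity to each perturbation,
    with a refocus compensator that removes part of the error."""
    wfe = 40.0 * x[..., 0] ** 2 + 30.0 * x[..., 1] ** 2 + 0.15 * x[..., 2] ** 2
    return np.abs(wfe - focus)

x = rng.uniform(-tol, tol, size=(n_trials, 3))        # Monte Carlo draws
uncompensated = merit(x)
compensated = np.minimum.reduce([merit(x, f) for f in np.linspace(0, 0.1, 6)])

print(f"Yield without compensation: {np.mean(uncompensated <= spec):.1%}")
print(f"Yield with refocus compensation: {np.mean(compensated <= spec):.1%}")
```

The point of the phased approach is visible even in this toy: the compensated yield is higher, so looser (cheaper) fabrication tolerances can meet the same specification.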
Ahmed, Sharmina; Makrides, Maria; Sim, Nicholas; McPhee, Andy; Quinlivan, Julie; Gibson, Robert; Umberger, Wendy
2015-12-01
Recent research has emphasized the nutritional benefits of omega-3 long-chain polyunsaturated fatty acids (LCPUFAs) during pregnancy. Based on a double-blind randomised controlled trial named "DHA to Optimize Mother and Infant Outcome" (DOMInO), we examined how omega-3 DHA supplementation during pregnancy may affect pregnancy-related in-patient hospital costs. We conducted an econometric analysis based on ordinary least squares and quantile regressions with bootstrapped standard errors. Using these approaches, we also examined whether smoking, drinking, maternal age and BMI could influence the effect of DHA supplementation during pregnancy on hospital costs. Our regressions showed that in-patient hospital costs could decrease by AUD92 (P<0.05) on average per singleton pregnancy when DHA supplements were consumed during pregnancy. Our regression results also showed that the cost savings to the Australian public hospital system could be between AUD15 and AUD51 million per year. Given that a simple intervention like DHA-rich fish-oil supplementation could generate savings to the public, it may be worthwhile from a policy perspective to encourage DHA supplementation among pregnant women. Copyright © 2015 Elsevier Ltd. All rights reserved.
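A minimal sketch of the ordinary-least-squares step with bootstrapped standard errors is shown below (the quantile regressions mentioned in the abstract are omitted); the data, covariates and coefficients are placeholders and have no connection to the DOMInO trial data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Placeholder data standing in for the trial variables: hospital cost (AUD),
# a DHA-supplementation indicator and a few maternal covariates.
n = 500
dha = rng.integers(0, 2, n)
covars = rng.normal(size=(n, 3))                     # e.g. age, BMI, smoking score
cost = 5000 - 90 * dha + covars @ [40, 25, 60] + rng.normal(0, 800, n)

X = sm.add_constant(np.column_stack([dha, covars]))
ols_fit = sm.OLS(cost, X).fit()
print("OLS effect of DHA on cost:", round(ols_fit.params[1], 1))

# Non-parametric bootstrap of the DHA coefficient's standard error.
boot = []
for _ in range(1000):
    idx = rng.integers(0, n, n)                      # resample rows with replacement
    boot.append(sm.OLS(cost[idx], X[idx]).fit().params[1])
print("Bootstrapped SE:", round(np.std(boot, ddof=1), 1))
```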
Acceptance threshold theory can explain occurrence of homosexual behaviour.
Engel, Katharina C; Männer, Lisa; Ayasse, Manfred; Steiger, Sandra
2015-01-01
Same-sex sexual behaviour (SSB) has been documented in a wide range of animals, but its evolutionary causes are not well understood. Here, we investigated SSB in the light of Reeve's acceptance threshold theory. When recognition is not error-proof, the acceptance threshold used by males to recognize potential mating partners should be flexibly adjusted to maximize the fitness pay-off between the costs of erroneously accepting males and the benefits of accepting females. By manipulating male burying beetles' search time for females and their reproductive potential, we influenced their perceived costs of making an acceptance or rejection error. As predicted, when the costs of rejecting females increased, males exhibited more permissive discrimination decisions and showed high levels of SSB; when the costs of accepting males increased, males were more restrictive and showed low levels of SSB. Our results support the idea that in animal species, in which the recognition cues of females and males overlap to a certain degree, SSB is a consequence of an adaptive discrimination strategy to avoid the costs of making rejection errors. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
Eliminating US hospital medical errors.
Kumar, Sameer; Steinebach, Marc
2008-01-01
Healthcare costs in the USA have continued to rise steadily since the 1980s. Medical errors are one of the major causes of deaths and injuries of thousands of patients every year, contributing to soaring healthcare costs. The purpose of this study is to examine what has been done to deal with the medical-error problem in the last two decades and present a closed-loop mistake-proof operation system for surgery processes that would likely eliminate preventable medical errors. The design method used is a combination of creating a service blueprint, implementing the six sigma DMAIC cycle, developing cause-and-effect diagrams as well as devising poka-yokes in order to develop a robust surgery operation process for a typical US hospital. In the improve phase of the six sigma DMAIC cycle, a number of poka-yoke techniques are introduced to prevent typical medical errors (identified through cause-and-effect diagrams) that may occur in surgery operation processes in US hospitals. It is the authors' assertion that implementing the new service blueprint along with the poka-yokes will likely improve the current medical error rate to the six-sigma level. Additionally, designing as many redundancies as possible in the delivery of care will help reduce medical errors. Primary healthcare providers should strongly consider investing in adequate doctor and nurse staffing, and improving their education related to the quality of service delivery to minimize clinical errors. This will increase fixed costs, especially in the short term. This paper focuses on the additional attention needed to make a sound technical and business case for implementing six sigma tools to eliminate medical errors, which will enable hospital managers to increase their hospital's profitability in the long run and also ensure patient safety.
Medina, K.D.; Tasker, Gary D.
1987-01-01
This report documents the results of an analysis of the surface-water data network in Kansas for its effectiveness in providing regional streamflow information. The network was analyzed using generalized least squares regression. The correlation and time-sampling error of the streamflow characteristic are considered in the generalized least squares method. Unregulated medium-, low-, and high-flow characteristics were selected to be representative of the regional information that can be obtained from streamflow-gaging-station records for use in evaluating the effectiveness of continuing the present network stations, discontinuing some stations, and (or) adding new stations. The analysis used streamflow records for all currently operated stations that were not affected by regulation and for discontinued stations for which unregulated flow characteristics, as well as physical and climatic characteristics, were available. The State was divided into three network areas, western, northeastern, and southeastern Kansas, and analysis was made for the three streamflow characteristics in each area, using three planning horizons. The analysis showed that the maximum reduction of sampling mean-square error for each cost level could be obtained by adding new stations and discontinuing some current network stations. Large reductions in sampling mean-square error for low-flow information could be achieved in all three network areas, the reduction in western Kansas being the most dramatic. The addition of new stations would be most beneficial for mean-flow information in western Kansas. The reduction of sampling mean-square error for high-flow information would benefit most from the addition of new stations in western Kansas. Southeastern Kansas showed the smallest error reduction in high-flow information. A comparison among all three network areas indicated that funding resources could be most effectively used by discontinuing more stations in northeastern and southeastern Kansas and establishing more new stations in western Kansas.
A systematic comparison of error correction enzymes by next-generation sequencing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lubock, Nathan B.; Zhang, Di; Sidore, Angus M.
2017-08-01
Gene synthesis, the process of assembling gene-length fragments from shorter groups of oligonucleotides (oligos), is becoming an increasingly important tool in molecular and synthetic biology. The length, quality and cost of gene synthesis are limited by errors produced during oligo synthesis and subsequent assembly. Enzymatic error correction methods are cost-effective means to ameliorate errors in gene synthesis. Previous analyses of these methods relied on cloning and Sanger sequencing to evaluate their efficiencies, limiting quantitative assessment. Here, we develop a method to quantify errors in synthetic DNA by next-generation sequencing. We analyzed errors in model gene assemblies and systematically compared six different error correction enzymes across 11 conditions. We find that ErrASE and T7 Endonuclease I are the most effective at decreasing average error rates (up to 5.8-fold relative to the input), whereas MutS is the best for increasing the number of perfect assemblies (up to 25.2-fold). We are able to quantify differential specificities, such as that ErrASE preferentially corrects C/G transversions whereas T7 Endonuclease I preferentially corrects A/T transversions. More generally, this experimental and computational pipeline is a fast, scalable and extensible way to analyze errors in gene assemblies, to profile error correction methods, and to benchmark DNA synthesis methods.
Performance Analysis of Classification Methods for Indoor Localization in Vlc Networks
NASA Astrophysics Data System (ADS)
Sánchez-Rodríguez, D.; Alonso-González, I.; Sánchez-Medina, J.; Ley-Bosch, C.; Díaz-Vilariño, L.
2017-09-01
Indoor localization has gained considerable attention over the past decade because of the emergence of numerous location-aware services. Many approaches based on wireless networks have been proposed to solve this problem. Nevertheless, there is still much room for improvement in the quality of the proposed classification models. In recent years, the emergence of Visible Light Communication (VLC) has brought a brand new approach to high-quality indoor positioning. Among its advantages, this technology is immune to electromagnetic interference and exhibits a smaller variance of received signal power than RF-based technologies. In this paper, a performance analysis of seventeen machine learning classifiers for indoor localization in VLC networks is carried out. The analysis covers accuracy, average distance error, computational cost, training size, precision and recall. Results show that most classifiers achieve an accuracy above 90%. The best tested classifier yielded 99.0% accuracy, with an average distance error of 0.3 centimetres.
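The comparison workflow can be sketched in a few lines of Python with scikit-learn; the fingerprint data below are synthetic stand-ins (received power from four hypothetical LED anchors), and the three classifiers shown are only a subset of the seventeen evaluated in the paper.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for a VLC fingerprint dataset: received power from
# 4 LED anchors, one class label per reference cell (not the paper's data).
n_cells, per_cell = 25, 40
centres = rng.uniform(-60, -20, size=(n_cells, 4))           # dBm per anchor
X = np.repeat(centres, per_cell, axis=0) + rng.normal(0, 1.5, (n_cells * per_cell, 4))
y = np.repeat(np.arange(n_cells), per_cell)

classifiers = {
    "k-NN (k=5)": KNeighborsClassifier(n_neighbors=5),
    "Random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM (RBF)": SVC(),
}
for name, clf in classifiers.items():
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name:>14s}: mean CV accuracy = {acc:.3f}")
```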
Hitti, Eveline; Tamim, Hani; Bakhti, Rinad; Zebian, Dina; Mufarrij, Afif
2017-08-01
Medication errors are common, with studies reporting at least one error per patient encounter. At hospital discharge, medication errors vary from 15%-38%. However, studies assessing the effect of an internally developed electronic (E)-prescription system at discharge from an emergency department (ED) are comparatively minimal. Additionally, commercially available electronic solutions are cost-prohibitive in many resource-limited settings. We assessed the impact of introducing an internally developed, low-cost E-prescription system, with a list of commonly prescribed medications, on prescription error rates at discharge from the ED, compared to handwritten prescriptions. We conducted a pre- and post-intervention study comparing error rates in a randomly selected sample of discharge prescriptions (handwritten versus electronic) five months pre and four months post the introduction of the E-prescription. The internally developed, E-prescription system included a list of 166 commonly prescribed medications with the generic name, strength, dose, frequency and duration. We included a total of 2,883 prescriptions in this study: 1,475 in the pre-intervention phase were handwritten (HW) and 1,408 in the post-intervention phase were electronic. We calculated rates of 14 different errors and compared them between the pre- and post-intervention period. Overall, E-prescriptions included fewer prescription errors as compared to HW-prescriptions. Specifically, E-prescriptions reduced missing dose (11.3% to 4.3%, p <0.0001), missing frequency (3.5% to 2.2%, p=0.04), missing strength errors (32.4% to 10.2%, p <0.0001) and legibility (0.7% to 0.2%, p=0.005). E-prescriptions, however, were associated with a significant increase in duplication errors, specifically with home medication (1.7% to 3%, p=0.02). A basic, internally developed E-prescription system, featuring commonly used medications, effectively reduced medication errors in a low-resource setting where the costs of sophisticated commercial electronic solutions are prohibitive.
NASA Technical Reports Server (NTRS)
Martos, Borja; Kiszely, Paul; Foster, John V.
2011-01-01
As part of the NASA Aviation Safety Program (AvSP), a novel pitot-static calibration method was developed to allow rapid in-flight calibration for subscale aircraft while flying within confined test areas. This approach uses Global Positioning System (GPS) technology coupled with modern system identification methods that rapidly compute optimal pressure-error models over a range of airspeeds with defined confidence bounds. This method has been demonstrated in subscale flight tests and has shown small 2-sigma error bounds with significant reduction in test time compared to other methods. The current research was motivated by the desire to further evaluate and develop this method for full-scale aircraft. A goal of this research was to develop an accurate calibration method that enables reductions in test equipment and flight time, thus reducing costs. The approach involved analysis of data acquisition requirements, development of efficient flight patterns, and analysis of pressure-error models based on system identification methods. Flight tests were conducted at The University of Tennessee Space Institute (UTSI) utilizing an instrumented Piper Navajo research aircraft. In addition, the UTSI engineering flight simulator was used to investigate test maneuver requirements and handling qualities issues associated with this technique. This paper provides a summary of piloted simulation and flight test results that illustrates the performance and capabilities of the NASA calibration method. Discussion of maneuver requirements and data analysis methods is included as well as recommendations for piloting technique.
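As a simplified illustration of fitting a pressure (airspeed) error model with parameter confidence bounds, the sketch below performs an ordinary least-squares fit of GPS-derived minus indicated airspeed; the flight-test points and error model are invented, and the actual NASA method uses optimized maneuvers and system identification rather than a plain polynomial fit.

```python
import numpy as np

rng = np.random.default_rng(2)

# Placeholder flight-test points: GPS-derived true airspeed vs indicated
# airspeed (knots). Values are invented for illustration only.
v_indicated = np.linspace(80, 180, 25)
true_bias = 2.0 + 0.03 * v_indicated            # assumed "true" position-error model
v_gps = v_indicated + true_bias + rng.normal(0, 0.8, v_indicated.size)

# Fit a low-order polynomial airspeed-error model and report 2-sigma bounds.
coeffs, cov = np.polyfit(v_indicated, v_gps - v_indicated, deg=1, cov=True)
sigma = np.sqrt(np.diag(cov))
print("error model: dV = {:.3f}*V + {:.2f}".format(coeffs[0], coeffs[1]))
print("2-sigma parameter bounds:", 2 * sigma)
```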
Evaluating performance of stormwater sampling approaches using a dynamic watershed model.
Ackerman, Drew; Stein, Eric D; Ritter, Kerry J
2011-09-01
Accurate quantification of stormwater pollutant levels is essential for estimating overall contaminant discharge to receiving waters. Numerous sampling approaches exist that attempt to balance accuracy against the costs associated with the sampling method. This study employs a novel and practical approach of evaluating the accuracy of different stormwater monitoring methodologies using stormflows and constituent concentrations produced by a fully validated continuous simulation watershed model. A major advantage of using a watershed model to simulate pollutant concentrations is that a large number of storms representing a broad range of conditions can be applied in testing the various sampling approaches. Seventy-eight distinct methodologies were evaluated by "virtual samplings" of 166 simulated storms of varying size, intensity and duration, representing 14 years of storms in Ballona Creek near Los Angeles, California. The 78 methods can be grouped into four general strategies: volume-paced compositing, time-paced compositing, pollutograph sampling, and microsampling. The performance of each sampling strategy was evaluated by comparing (1) the median relative error between the virtually sampled and the true modeled event mean concentration (EMC) of each storm (accuracy), (2) the median absolute deviation about the median ("MAD") of the relative error (precision), and (3) the percentage of storms where sampling methods were within 10% of the true EMC (a combined measure of accuracy and precision). Finally, costs associated with site setup, sampling, and laboratory analysis were estimated for each method. Pollutograph sampling consistently outperformed the other three methods both in terms of accuracy and precision, but was the most costly method evaluated. Time-paced sampling consistently underestimated, while volume-paced sampling overestimated, the storm EMCs. Microsampling performance approached that of pollutograph sampling at a substantial cost savings. The most efficient method for routine stormwater monitoring in terms of a balance between performance and cost was volume-paced microsampling, with variable sample pacing to ensure that the entirety of the storm was captured. Pollutograph sampling is recommended if the data are to be used for detailed analysis of runoff dynamics.
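The "virtual sampling" idea can be illustrated with a toy pollutograph in Python: compute the true flow-weighted event mean concentration, then the EMC a volume-paced composite sampler would report, and compare. The storm shape and pacing below are invented and are not output of the watershed model used in the study.

```python
import numpy as np

# Toy storm (not model output): 5-minute flow (m3/s) and concentration (mg/L).
t = np.arange(0, 6 * 60, 5)                               # minutes
flow = 4.0 * np.exp(-((t - 90) / 60.0) ** 2) + 0.2
conc = 30.0 * np.exp(-t / 80.0) + 5.0                     # first-flush shape

dt = 5 * 60.0                                             # seconds per step
true_emc = np.sum(conc * flow * dt) / np.sum(flow * dt)   # flow-weighted mean

# Volume-paced composite: take an aliquot every time a fixed runoff volume passes.
cum_vol = np.cumsum(flow * dt)
pacing = cum_vol[-1] / 12                                 # aim for ~12 aliquots
sample_idx = np.searchsorted(cum_vol, np.arange(pacing, cum_vol[-1], pacing))
composite_emc = conc[sample_idx].mean()

rel_err = (composite_emc - true_emc) / true_emc
print(f"true EMC {true_emc:.2f} mg/L, composite {composite_emc:.2f} mg/L, "
      f"relative error {rel_err:+.1%}")
```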
New developments in spatial interpolation methods of Sea-Level Anomalies in the Mediterranean Sea
NASA Astrophysics Data System (ADS)
Troupin, Charles; Barth, Alexander; Beckers, Jean-Marie; Pascual, Ananda
2014-05-01
The gridding of along-track Sea-Level Anomalies (SLA) measured by a constellation of satellites has numerous applications in oceanography, such as model validation, data assimilation or eddy tracking. Optimal Interpolation (OI) is often the preferred method for this task, as it leads to the lowest expected error and provides an error field associated with the analysed field. However, the numerical cost of the method may limit its utilization in situations where the number of data points is significant. Furthermore, the separation of non-adjacent regions with OI requires adaptation of the code, leading to a further increase of the numerical cost. To solve these issues, the Data-Interpolating Variational Analysis (DIVA), a technique designed to produce gridded fields from sparse in situ measurements, is applied to SLA data in the Mediterranean Sea. DIVA and OI have been shown to be equivalent (provided some assumptions on the covariances are made). The main difference lies in the covariance function, which is not explicitly formulated in DIVA. The particular spatial and temporal distributions of measurements required adaptations of the software tool (data format, parameter determination, ...). These adaptations are presented in the poster. The daily analysed and error fields obtained with this technique are compared with available products such as the gridded field from the Archiving, Validation and Interpretation of Satellite Oceanographic data (AVISO) data server. The comparison reveals an overall good agreement between the products. The time evolution of the mean error field evidences the need for a large number of simultaneous altimetry satellites: in periods during which 4 satellites are available, the mean error is on the order of 17.5%, while when only 2 satellites are available, the error exceeds 25%. Finally, we propose the use of sea currents to improve the results of the interpolation, especially in the coastal area. These currents can be constructed from the bathymetry or extracted from an HF radar located in the Balearic Sea.
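For context, the classical Optimal Interpolation analysis that DIVA is compared with can be written in a few lines; the sketch below uses an assumed Gaussian background covariance and synthetic "along-track" observations, and is not the DIVA algorithm itself (which avoids forming the covariance matrix explicitly).

```python
import numpy as np

rng = np.random.default_rng(3)

def gauss_cov(a, b, L=100.0, var=1.0):
    """Isotropic Gaussian background covariance between point sets a and b (km)."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-d2 / (2 * L ** 2))

# Sparse "along-track" observations of SLA (cm) at scattered positions (km).
obs_xy = rng.uniform(0, 500, size=(40, 2))
obs = 10 * np.sin(obs_xy[:, 0] / 120.0) + rng.normal(0, 1.0, 40)

# Regular analysis grid.
gx, gy = np.meshgrid(np.linspace(0, 500, 51), np.linspace(0, 500, 51))
grid_xy = np.column_stack([gx.ravel(), gy.ravel()])

R = 1.0 ** 2 * np.eye(len(obs))                    # observation-error covariance
B_oo = gauss_cov(obs_xy, obs_xy, var=25.0)         # background cov at obs points
B_go = gauss_cov(grid_xy, obs_xy, var=25.0)        # grid-to-obs covariance

weights = np.linalg.solve(B_oo + R, obs)           # (B + R)^-1 y
analysis = (B_go @ weights).reshape(gx.shape)      # OI analysis field
error_var = 25.0 - np.einsum("ij,ij->i", B_go, np.linalg.solve(B_oo + R, B_go.T).T)
print(f"analysis shape {analysis.shape}, mean expected error variance {error_var.mean():.2f} cm^2")
```

The expected-error term mirrors the error field mentioned above: the background variance minus the variance explained by the observations.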
Low sidelobe level low-cost earth station antennas for the 12 GHz broadcasting satellite service
NASA Technical Reports Server (NTRS)
Collin, R. E.; Gabel, L. R.
1979-01-01
An experimental investigation of the performance of 1.22 m and 1.83 m diameter paraboloid antennas with an f/D ratio of 0.38 and using a feed developed by Kumar is reported. It is found that sidelobes below 30 dB can be obtained only if the paraboloids are relatively free of surface errors. A theoretical analysis of clam shell distortion shows that this is a limiting factor in achieving low sidelobe levels with many commercially available low cost paraboloids. The use of absorbing pads and small reflecting plates for sidelobe reduction is also considered.
Tolerance assignment in optical design
NASA Astrophysics Data System (ADS)
Youngworth, Richard Neil
2002-09-01
Tolerance assignment is necessary in any engineering endeavor because fabricated systems---due to the stochastic nature of manufacturing and assembly processes---necessarily deviate from the nominal design. This thesis addresses the problem of optical tolerancing. The work can logically be split into three different components that all play an essential role. The first part addresses the modeling of manufacturing errors in contemporary fabrication and assembly methods. The second component is derived from the design aspect---the development of a cost-based tolerancing procedure. The third part addresses the modeling of image quality in an efficient manner that is conducive to the tolerance assignment process. The purpose of the first component, modeling manufacturing errors, is twofold---to determine the most critical tolerancing parameters and to understand better the effects of fabrication errors. Specifically, mid-spatial-frequency errors, typically introduced in sub-aperture grinding and polishing fabrication processes, are modeled. The implication is that improving process control and understanding better the effects of the errors makes the task of tolerance assignment more manageable. Conventional tolerancing methods do not directly incorporate cost. Consequently, tolerancing approaches tend to focus more on image quality. The goal of the second part of the thesis is to develop cost-based tolerancing procedures that facilitate optimum system fabrication by generating the loosest acceptable tolerances. This work has the potential to impact a wide range of optical designs. The third element, efficient modeling of image quality, is directly related to the cost-based optical tolerancing method. Cost-based tolerancing requires efficient and accurate modeling of the effects of errors on the performance of optical systems. Thus it is important to be able to compute the gradient and the Hessian, with respect to the parameters that need to be toleranced, of the figure of merit that measures the image quality of a system. An algebraic method for computing the gradient and the Hessian is developed using perturbation theory.
Systems Engineering Programmatic Estimation Using Technology Variance
NASA Technical Reports Server (NTRS)
Mog, Robert A.
2000-01-01
Unique and innovative system programmatic estimation is conducted using the variance of the packaged technologies. Covariance analysis is performed on the subsystems and components comprising the system of interest. Technological "return" and "variation" parameters are estimated. These parameters are combined with the model error to arrive at a measure of system development stability. The resulting estimates provide valuable information concerning the potential cost growth of the system under development.
Martin, Adrian; Schiavi, Emanuele; Eryaman, Yigitcan; Herraiz, Joaquin L; Gagoski, Borjan; Adalsteinsson, Elfar; Wald, Lawrence L; Guerin, Bastien
2016-06-01
A new framework for the design of parallel transmit (pTx) pulses is presented introducing constraints for local and global specific absorption rate (SAR) in the presence of errors in the radiofrequency (RF) transmit chain. The first step is the design of a pTx RF pulse with explicit constraints for global and local SAR. Then, the worst possible SAR associated with that pulse due to RF transmission errors ("worst-case SAR") is calculated. Finally, this information is used to re-calculate the pulse with lower SAR constraints, iterating this procedure until its worst-case SAR is within safety limits. Analysis of an actual pTx RF transmit chain revealed amplitude errors as high as 8% (20%) and phase errors above 3° (15°) for spokes (spiral) pulses. Simulations show that using the proposed framework, pulses can be designed with controlled "worst-case SAR" in the presence of errors of this magnitude at minor cost of the excitation profile quality. Our worst-case SAR-constrained pTx design strategy yields pulses with local and global SAR within the safety limits even in the presence of RF transmission errors. This strategy is a natural way to incorporate SAR safety factors in the design of pTx pulses. Magn Reson Med 75:2493-2504, 2016. © 2015 Wiley Periodicals, Inc. © 2015 Wiley Periodicals, Inc.
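The iterate-until-safe logic described above can be sketched as a short, runnable toy in which the "pulse" is reduced to a single RF amplitude and SAR to one scalar; the models and numbers are invented placeholders and do not represent the authors' pTx design code.

```python
# Toy sketch of the described design loop: design under a SAR constraint, evaluate
# the worst-case SAR under RF amplitude error, tighten the constraint, repeat.

def design_pulse(sar_constraint):
    """Toy design step: a higher allowed SAR permits a larger RF amplitude."""
    return sar_constraint ** 0.5                    # amplitude (arbitrary units)

def worst_case_sar(amplitude, amp_err=0.08):
    """Toy worst case: SAR scales with the square of the worst-case amplitude."""
    return (amplitude * (1.0 + amp_err)) ** 2

limit = 10.0                                        # safety limit (e.g. W/kg)
constraint = limit
for it in range(20):
    amp = design_pulse(constraint)
    wc = worst_case_sar(amp)
    print(f"iter {it}: constraint {constraint:.3f}, worst-case SAR {wc:.3f}")
    if wc <= limit:
        break                                       # worst case is within safety limits
    constraint *= limit / wc                        # tighten and redesign
```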
Walton, T R; Layton, D M
2012-09-01
The aim of this study was to apply a novel economic tool (cost satisfaction analysis) to assess the utility of fixed prosthodontics, to review its applicability, and to explore the perceived value of treatment. The cost satisfaction analysis employed the validated Patient Satisfaction Questionnaire (PSQ). Patients with a known prostheses outcome over 1-20 years were mailed the PSQ. Five hundred patients (50·7%) responded. Remembered satisfaction at insertion (initial costs) and current satisfaction (costs in hindsight) were reported on VAS, and the difference calculated (costs with time). Percentage and grouped responses (low, <40%; medium, 40-70%; high, > 70%) were analysed in relation to patient gender, age and willingness to have undergone the same treatment again, and in relation to prostheses age, type, complexity and outcome. Significance was set at P = 0·05. Averages were reported as means ± standard error. Satisfaction with initial costs and costs in hindsight were unrelated to patient gender and age, and prostheses age, type and complexity. Patients with a failure and those who would elect to not undergo the same treatment again were significantly less satisfied with initial costs (P = 0·021, P < 0·001) and costs in hindsight (P = 0·021, P < 0·001) than their counterparts. Patient's cost satisfaction (entire cohort) had significantly improved from 53 ± 1% at insertion to 81 ± 0·9% in hindsight (28 ± 1% improvement, P < 0·001). Patient cost satisfaction had also significantly improved, and the magnitude of improvement was the same within every individual cohort (P = 0·004 to P < 0·001), including patients with failures, and those who in hindsight would not undergo the same treatment again. Low satisfaction was reported by 166 patients initially, but 94% of these reported improvements in hindsight. Fourteen patients (3%) remained dissatisfied in hindsight, although 71% of these would still choose to undergo the same treatment again. Cost satisfaction analysis provided an evaluation of the patient's perspective of the value of fixed prosthodontic treatment. Although fixed prosthodontic treatment was perceived by patients to be expensive, it was also perceived to impart value with time. Cost satisfaction analysis provides a clinically useful insight into patient behaviour. © 2012 Blackwell Publishing Ltd.
Information systems and human error in the lab.
Bissell, Michael G
2004-01-01
Health system costs in clinical laboratories are incurred daily due to human error. Indeed, a major impetus for automating clinical laboratories has always been the opportunity it presents to simultaneously reduce cost and improve quality of operations by decreasing human error. But merely automating these processes is not enough: to the extent that the introduction of these systems leaves operators with less practice in dealing with unexpected events, or deskills them in problem-solving, new kinds of error will likely appear. Clinical laboratories could potentially benefit by integrating findings on human error from modern behavioral science into their operations. Fully understanding human error requires a deep understanding of human information processing and cognition. Predicting and preventing negative consequences requires application of this understanding to laboratory operations. Although the occurrence of a particular error at a particular instant cannot be absolutely prevented, human error rates can be reduced. The following principles are key: an understanding of the process of learning in relation to error; understanding the origin of errors, since this knowledge can be used to reduce their occurrence; optimal systems should be forgiving to the operator by absorbing errors, at least for a time; although much is known by industrial psychologists about how to write operating procedures and instructions in ways that reduce the probability of error, this expertise is hardly ever put to use in the laboratory; and a feedback mechanism must be designed into the system that enables the operator to recognize in real time that an error has occurred.
Cost implications of organizing nursing home workforce in teams.
Mukamel, Dana B; Cai, Shubing; Temkin-Greener, Helena
2009-08-01
To estimate the costs associated with formal and self-managed daily practice teams in nursing homes. Medicaid cost reports for 135 nursing homes in New York State in 2006 and survey data for 6,137 direct care workers. A retrospective statistical analysis: We estimated hybrid cost functions that include team penetration variables. Inference was based on robust standard errors. Formal and self-managed team penetration (i.e., percent of staff working in a team) were calculated from survey responses. Annual variable costs, beds, case mix-adjusted days, admissions, home care visits, outpatient clinic visits, day care days, wages, and ownership were calculated from the cost reports. Formal team penetration was significantly associated with costs, while self-managed team penetration was not. Costs declined with increasing penetration up to 13 percent of formal teams, and increased above this level. Formal teams in nursing homes in the upward-sloping range of the curve were more diverse, with a larger number of participating disciplines and were more likely to include physicians. Organization of the workforce in formal teams may offer nursing homes a cost-saving strategy. More research is required to understand the relationship between team composition and costs.
New double-byte error-correcting codes for memory systems
NASA Technical Reports Server (NTRS)
Feng, Gui-Liang; Wu, Xinen; Rao, T. R. N.
1996-01-01
Error-correcting or error-detecting codes have been used in the computer industry to increase reliability, reduce service costs, and maintain data integrity. The single-byte error-correcting and double-byte error-detecting (SbEC-DbED) codes have been successfully used in computer memory subsystems. There are many methods to construct double-byte error-correcting (DBEC) codes. In the present paper we construct a class of double-byte error-correcting codes, which are more efficient than those known to be optimum, and a decoding procedure for our codes is also considered.
NASA's X-Plane Database and Parametric Cost Model v 2.0
NASA Technical Reports Server (NTRS)
Sterk, Steve; Ogluin, Anthony; Greenberg, Marc
2016-01-01
The NASA Armstrong Cost Engineering Team, with technical assistance from NASA HQ (SID), has gone through the full process of developing new CERs from Version #1 to Version #2 CERs. We took a step backward and reexamined all of the data collected, such as dependent and independent variables, cost, dry weight, length, wingspan, manned versus unmanned, altitude, Mach number, thrust, and skin. We used a well-known statistical analysis tool called CO$TAT instead of "R" multiple linear regression or the "Regression" tool found in Microsoft Excel. We set up an "array of data" by adding 21 "dummy variables"; we analyzed the standard error (SE) and then determined the "best fit." We have parametrically priced out several future X-planes and compared our results to those of other resources. More work needs to be done in getting "accurate and traceable cost data" from historical X-plane records!
Probabilistic/Fracture-Mechanics Model For Service Life
NASA Technical Reports Server (NTRS)
Watkins, T., Jr.; Annis, C. G., Jr.
1991-01-01
Computer program makes probabilistic estimates of lifetime of engine and components thereof. Developed to fill need for more accurate life-assessment technique that avoids errors in estimated lives and provides for statistical assessment of levels of risk created by engineering decisions in designing system. Implements mathematical model combining techniques of statistics, fatigue, fracture mechanics, nondestructive analysis, life-cycle cost analysis, and management of engine parts. Used to investigate effects of such engine-component life-controlling parameters as return-to-service intervals, stresses, capabilities for nondestructive evaluation, and qualities of materials.
Multi-bits error detection and fast recovery in RISC cores
NASA Astrophysics Data System (ADS)
Jing, Wang; Xing, Yang; Yuanfu, Zhao; Weigong, Zhang; Jiao, Shen; Keni, Qiu
2015-11-01
Particle-induced soft errors are a major threat to the reliability of microprocessors. Even worse, multi-bit upsets (MBUs) are ever more frequent due to the rapidly shrinking feature size of ICs. Several architecture-level mechanisms have been proposed to protect microprocessors from soft errors, such as dual and triple modular redundancy (DMR and TMR). However, most of them are inefficient against the growing number of multi-bit errors or cannot properly balance critical-path delay, area and power penalties. This paper proposes a novel architecture, self-recovery dual-pipeline (SRDP), to effectively provide soft error detection and recovery at low cost for general RISC structures. We focus on the following three aspects. First, an advanced DMR pipeline is devised to detect soft errors, especially MBUs. Second, SEU/MBU errors can be located by adding self-checking logic to the pipeline stage registers. Third, a recovery scheme is proposed with a recovery cost of 1 or 5 clock cycles. Our evaluation of a prototype implementation shows that SRDP can detect up to 100% of particle-induced soft errors and recover from nearly 95% of them; the remaining 5% enter a specific trap.
Michaelidis, Constantinos I; Fine, Michael J; Lin, Chyongchiou Jeng; Linder, Jeffrey A; Nowalk, Mary Patricia; Shields, Ryan K; Zimmerman, Richard K; Smith, Kenneth J
2016-11-08
Ambulatory antibiotic prescribing contributes to the development of antibiotic resistance and increases societal costs. Here, we estimate the hidden societal cost of antibiotic resistance per antibiotic prescribed in the United States. In an exploratory analysis, we used published data to develop point and range estimates for the hidden societal cost of antibiotic resistance (SCAR) attributable to each ambulatory antibiotic prescription in the United States. We developed four estimation methods that focused on the antibiotic-resistance attributable costs of hospitalization, second-line inpatient antibiotic use, second-line outpatient antibiotic use, and antibiotic stewardship, then summed the estimates across all methods. The total SCAR attributable to each ambulatory antibiotic prescription was estimated to be $13 (range: $3-$95). The greatest contributor to the total SCAR was the cost of hospitalization ($9; 69 % of the total SCAR). The costs of second-line inpatient antibiotic use ($1; 8 % of the total SCAR), second-line outpatient antibiotic use ($2; 15 % of the total SCAR) and antibiotic stewardship ($1; 8 % of the total SCAR) were modest contributors to the total SCAR. Assuming an average antibiotic cost of $20, the total SCAR attributable to each ambulatory antibiotic prescription would increase antibiotic costs by 65 % (range: 15-475 %) if incorporated into antibiotic costs paid by patients or payers. Each ambulatory antibiotic prescription is associated with a hidden SCAR that substantially increases the cost of an antibiotic prescription in the United States. This finding raises concerns regarding the magnitude of misalignment between individual and societal antibiotic costs.
The cost of misremembering: Inferring the loss function in visual working memory.
Sims, Chris R
2015-03-04
Visual working memory (VWM) is a highly limited storage system. A basic consequence of this fact is that visual memories cannot perfectly encode or represent the veridical structure of the world. However, in natural tasks, some memory errors might be more costly than others. This raises the intriguing possibility that the nature of memory error reflects the costs of committing different kinds of errors. Many existing theories assume that visual memories are noise-corrupted versions of afferent perceptual signals. However, this additive noise assumption oversimplifies the problem. Implicit in the behavioral phenomena of visual working memory is the concept of a loss function: a mathematical entity that describes the relative cost to the organism of making different types of memory errors. An optimally efficient memory system is one that minimizes the expected loss according to a particular loss function, while subject to a constraint on memory capacity. This paper describes a novel theoretical framework for characterizing visual working memory in terms of its implicit loss function. Using inverse decision theory, the empirical loss function is estimated from the results of a standard delayed recall visual memory experiment. These results are compared to the predicted behavior of a visual working memory system that is optimally efficient for a previously identified natural task, gaze correction following saccadic error. Finally, the approach is compared to alternative models of visual working memory, and shown to offer a superior account of the empirical data across a range of experimental datasets. © 2015 ARVO.
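One common way to formalise the idea of an optimally efficient memory is a rate-distortion-style programme; the notation below is generic and is not copied from the paper: choose the encoding distribution that minimises expected loss subject to a capacity bound on the mutual information between stimulus and report,

```latex
\min_{p(\hat{x}\mid x)} \; \mathbb{E}\big[\, L(x,\hat{x}) \,\big]
\quad \text{subject to} \quad I(X;\hat{X}) \le C ,
```

where $L$ is the loss function and $C$ the memory capacity.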
Cost effectiveness of the stream-gaging program in South Carolina
Barker, A.C.; Wright, B.C.; Bennett, C.S.
1985-01-01
The cost effectiveness of the stream-gaging program in South Carolina was documented for the 1983 water yr. Data uses and funding sources were identified for the 76 continuous stream gages currently being operated in South Carolina. The budget of $422,200 for collecting and analyzing streamflow data also includes the cost of operating stage-only and crest-stage stations. The streamflow records for one stream gage can be determined by alternate, less costly methods, and should be discontinued. The remaining 75 stations should be maintained in the program for the foreseeable future. The current policy for the operation of the 75 stations including the crest-stage and stage-only stations would require a budget of $417,200/yr. The average standard error of estimation of streamflow records is 16.9% for the present budget with missing record included. However, the standard error of estimation would decrease to 8.5% if complete streamflow records could be obtained. It was shown that the average standard error of estimation of 16.9% could be obtained at the 75 sites with a budget of approximately $395,000 if the gaging resources were redistributed among the gages. A minimum budget of $383,500 is required to operate the program; a budget less than this does not permit proper service and maintenance of the gages and recorders. At the minimum budget, the average standard error is 18.6%. The maximum budget analyzed was $850,000, which resulted in an average standard error of 7.6%. (Author's abstract)
Feature Acquisition with Imbalanced Training Data
NASA Technical Reports Server (NTRS)
Thompson, David R.; Wagstaff, Kiri L.; Majid, Walid A.; Jones, Dayton L.
2011-01-01
This work considers cost-sensitive feature acquisition that attempts to classify a candidate datapoint from incomplete information. In this task, an agent acquires features of the datapoint using one or more costly diagnostic tests, and eventually ascribes a classification label. A cost function describes both the penalties for feature acquisition, as well as misclassification errors. A common solution is a Cost Sensitive Decision Tree (CSDT), a branching sequence of tests with features acquired at interior decision points and class assignment at the leaves. CSDT's can incorporate a wide range of diagnostic tests and can reflect arbitrary cost structures. They are particularly useful for online applications due to their low computational overhead. In this innovation, CSDT's are applied to cost-sensitive feature acquisition where the goal is to recognize very rare or unique phenomena in real time. Example applications from this domain include four areas. In stream processing, one seeks unique events in a real time data stream that is too large to store. In fault protection, a system must adapt quickly to react to anticipated errors by triggering repair activities or follow-up diagnostics. With real-time sensor networks, one seeks to classify unique, new events as they occur. With observational sciences, a new generation of instrumentation seeks unique events through online analysis of large observational datasets. This work presents a solution based on transfer learning principles that permits principled CSDT learning while exploiting any prior knowledge of the designer to correct both between-class and within-class imbalance. Training examples are adaptively reweighted based on a decomposition of the data attributes. The result is a new, nonparametric representation that matches the anticipated attribute distribution for the target events.
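A minimal scikit-learn sketch of the reweighting idea for imbalanced, cost-sensitive tree learning is given below; the data and the 50:1 misclassification cost ratio are assumptions for illustration, not the transfer-learning decomposition described above.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)

# Imbalanced toy data standing in for "rare event" detection (roughly 1-2% positives).
n = 20_000
X = rng.normal(size=(n, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, n) > 3.2).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Reweight training examples so that missing a rare event costs 50x more than
# a false alarm (an assumed cost structure).
weights = np.where(y_tr == 1, 50.0, 1.0)

for w, label in [(None, "unweighted"), (weights, "cost-weighted")]:
    tree = DecisionTreeClassifier(max_depth=5, random_state=0)
    tree.fit(X_tr, y_tr, sample_weight=w)
    print(label, "confusion matrix:\n", confusion_matrix(y_te, tree.predict(X_te)))
```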
Flexible reserve markets for wind integration
NASA Astrophysics Data System (ADS)
Fernandez, Alisha R.
The increased interconnection of variable generation has motivated the use of improved forecasting to more accurately predict future production, with the purpose of lowering total system costs for balancing when the expected output exceeds or falls short of the actual output. Forecasts are imperfect, and the forecast errors associated with utility-scale generation from variable generators need new balancing capabilities that cannot be handled by existing ancillary services. Our work focuses on strategies for integrating large amounts of wind generation under the flex reserve market, a market that would be called upon for short-term energy services during an under- or oversupply of wind generation to maintain electric grid reliability. The flex reserve market would be utilized for time intervals that fall between the current ancillary services markets: longer than the second-to-second energy services for maintaining system frequency, yet shorter than the reserve capacity services that are called upon for several minutes up to an hour during an unexpected contingency on the grid. In our work, the wind operator would access the flex reserve market as an energy service to correct for unanticipated forecast errors, akin to paying the generators participating in the market to increase generation during a shortfall or paying the other generators to decrease generation during an excess of wind generation. Such a market does not currently exist in the Mid-Atlantic United States. The Pennsylvania-New Jersey-Maryland Interconnection (PJM) is the Mid-Atlantic electric grid case study that was used to examine whether a flex reserve market can be utilized for integrating large capacities of wind generation in a low-cost manner for those providing, purchasing and dispatching these short-term balancing services. The following work consists of three studies. The first examines the ability of a hydroelectric facility to provide short-term forecast error balancing services via a flex reserve market, identifying the operational constraints that prevent a multi-purpose dam facility from meeting the desired flexible energy demand. The second study transitions from the hydroelectric facility as the decision maker providing flex reserve services to the wind plant as the decision maker purchasing these services. In this second study, methods for allocating the costs of flex reserve services under different wind policy scenarios are explored that aggregate farms into different groupings to identify the least-cost strategy for balancing the costs of hourly day-ahead forecast errors. The least-cost strategy may be different for an individual wind plant and for the system operator, noting that the least-cost strategy is highly sensitive to cost allocation and aggregation schemes. The latter may also cause cross-subsidies in the cost for balancing wind forecast errors among the different wind farms. The third study builds from the second, with the objective of quantifying the amount of flex reserves needed for balancing future forecast errors using a probabilistic approach (quantile regression) to estimate future forecast errors. The results further examine the usefulness of separate flexible markets PJM could use for balancing oversupply and undersupply events, similar to the regulation up and down markets used in Europe. These three studies provide the following results and insights into large-scale wind integration, using actual PJM wind farm data that describe the markets and generators within PJM.
• Chapter 2 provides an in-depth analysis of the valuable, yet highly-constrained, energy services multi-purpose hydroelectric facilities can provide, though the opportunity cost for providing these services can result in large deviations from the reservoir policies with minimal revenue gain in comparison to dedicating the whole of dam capacity to providing day-ahead, baseload generation.
• Chapter 3 quantifies the system-wide efficiency gains and the distributive effects of PJM's decision to act as a single balancing authority, which means that it procures ancillary services across its entire footprint simultaneously. This can be contrasted to Midwest Independent System Operator (MISO), which has several balancing authorities operating under its footprint.
• Chapter 4 uses probabilistic methods to estimate the uncertainty in the forecast errors and the quantity of energy needed to balance these forecast errors at a certain percentile. Current practice is to use a point forecast that describes the conditional expectation of the dependent variable at each time step. The approach here uses quantile regression to describe the relationship between independent variable and the conditional quantiles (equivalently the percentiles) of the dependent variable. An estimate of the conditional density is performed, which contains information about the covariate relationship of the sign of the forecast errors (negative for too much wind generation and positive for too little wind generation) and the wind power forecast. This additional knowledge may be implemented in the decision process to more accurately schedule day-ahead wind generation bids and provide an example for using separate markets for balancing an oversupply and undersupply of generation. Such methods are currently used for coordinating large footprints of wind generation in Europe.
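A minimal sketch of the quantile-regression step described in the Chapter 4 summary above is given below; the synthetic forecast/error data and the heteroscedastic error model are assumptions for illustration only.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)

# Synthetic stand-in for day-ahead wind data: the forecast level (MW) and the
# realised forecast error, whose spread is assumed to grow with the forecast.
n = 2000
forecast = rng.uniform(0, 1000, n)
error = rng.normal(0.0, 20.0 + 0.08 * forecast)      # MW, heteroscedastic by design

X = sm.add_constant(forecast)
for q in (0.05, 0.95):
    fit = sm.QuantReg(error, X).fit(q=q)
    band = fit.predict([[1.0, 600.0]])[0]            # conditional quantile at a 600 MW forecast
    print(f"{q:.0%} quantile of the error at a 600 MW forecast: {band:+.1f} MW")
```

Reading off the 5% and 95% conditional quantiles at a given forecast level is one simple way to size separate down- and up-flexibility requirements.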
Insight into biases and sequencing errors for amplicon sequencing with the Illumina MiSeq platform.
Schirmer, Melanie; Ijaz, Umer Z; D'Amore, Rosalinda; Hall, Neil; Sloan, William T; Quince, Christopher
2015-03-31
With read lengths of currently up to 2 × 300 bp, high throughput and low sequencing costs Illumina's MiSeq is becoming one of the most utilized sequencing platforms worldwide. The platform is manageable and affordable even for smaller labs. This enables quick turnaround on a broad range of applications such as targeted gene sequencing, metagenomics, small genome sequencing and clinical molecular diagnostics. However, Illumina error profiles are still poorly understood and programs are therefore not designed for the idiosyncrasies of Illumina data. A better knowledge of the error patterns is essential for sequence analysis and vital if we are to draw valid conclusions. Studying true genetic variation in a population sample is fundamental for understanding diseases, evolution and origin. We conducted a large study on the error patterns for the MiSeq based on 16S rRNA amplicon sequencing data. We tested state-of-the-art library preparation methods for amplicon sequencing and showed that the library preparation method and the choice of primers are the most significant sources of bias and cause distinct error patterns. Furthermore we tested the efficiency of various error correction strategies and identified quality trimming (Sickle) combined with error correction (BayesHammer) followed by read overlapping (PANDAseq) as the most successful approach, reducing substitution error rates on average by 93%. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
Kreilinger, Alex; Hiebel, Hannah; Müller-Putz, Gernot R
2016-03-01
This work aimed to find and evaluate a new method for detecting errors in continuous brain-computer interface (BCI) applications. Instead of classifying errors on a single-trial basis, the new method was based on multiple events (MEs) analysis to increase the accuracy of error detection. In a BCI-driven car game, based on motor imagery (MI), discrete events were triggered whenever subjects collided with coins and/or barriers. Coins counted as correct events, whereas barriers were errors. This new method, termed ME method, combined and averaged the classification results of single events (SEs) and determined the correctness of MI trials, which consisted of event sequences instead of SEs. The benefit of this method was evaluated in an offline simulation. In an online experiment, the new method was used to detect erroneous MI trials. Such MI trials were discarded and could be repeated by the users. We found that, even with low SE error potential (ErrP) detection rates, feasible accuracies can be achieved when combining MEs to distinguish erroneous from correct MI trials. Online, all subjects reached higher scores with error detection than without, at the cost of longer times needed for completing the game. Findings suggest that ErrP detection may become a reliable tool for monitoring continuous states in BCI applications when combining MEs. This paper demonstrates a novel technique for detecting errors in online continuous BCI applications, which yields promising results even with low single-trial detection rates.
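The multiple-events idea can be illustrated with a small simulation: average weak single-event error-probability outputs within a trial, then threshold the average; all classifier outputs below are simulated, with separability chosen arbitrarily rather than taken from the study.

```python
import numpy as np

rng = np.random.default_rng(5)
n_trials, n_events = 2000, 8

# Simulated single-event ErrP classifier outputs (probability of "error"),
# deliberately weak at the single-event level; all numbers are assumptions.
is_error = rng.integers(0, 2, n_trials).astype(bool)
means = np.where(is_error, 0.58, 0.46)[:, None]
event_probs = np.clip(rng.normal(means, 0.15, (n_trials, n_events)), 0, 1)

single_acc = np.mean((event_probs[:, 0] > 0.5) == is_error)        # one event only
multi_acc = np.mean((event_probs.mean(axis=1) > 0.5) == is_error)  # average of events
print(f"single-event accuracy {single_acc:.2f}, multi-event accuracy {multi_acc:.2f}")
```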
König, H H; Barry, J C; Leidl, R; Zrenner, E
2000-04-01
Orthoptic screening in the kindergarten is one option to improve early detection of amblyopia in children aged 3 years. The purpose of this study was to analyse the cost-effectiveness of such a screening programme in Germany. Based on data from the literature and our own experience with orthoptic screening in kindergarten, a decision-analytic model was developed. According to the model, all children in kindergarten aged 3 years who had not been treated for amblyopia before were subjected to an orthoptic examination. Non-cooperative children were re-examined in kindergarten after one year. Children with positive test results were examined by an ophthalmologist for diagnosis. Effects were measured by the number of newly diagnosed cases of amblyopia, non-obvious strabismus and amblyogenic refractive errors. Direct costs were estimated from a third-party payer perspective. The influence of uncertain model parameters was tested by sensitivity analysis. In the base analysis the cost per orthoptic screening test was DM 15.39. Examination by an ophthalmologist cost DM 71.20. The total cost of the screening programme in all German kindergartens was DM 6.1 million. With a 1.5% age-specific prevalence of undiagnosed cases, a sensitivity of 95% and a specificity of 98%, a total of 4,261 new cases would be detected. The cost-effectiveness ratio was DM 1,421 per case detected. Sensitivity analysis showed considerable influence of prevalence and specificity on the cost-effectiveness ratio. It was more cost-effective to re-screen non-cooperative children in kindergarten than to have them examined by an ophthalmologist straight away. The decision-analytic model showed stable results, which may serve as a basis for discussion on the implementation of orthoptic screening and for planning a field study.
Losses from effluent taxes and quotas under uncertainty
Watson, W.D.; Ridker, R.G.
1984-01-01
Recent theoretical papers by Adar and Griffin (J. Environ. Econ. Manag. 3, 178-188 (1976)), Fishelson (J. Environ. Econ. Manag. 3, 189-197 (1976)), and Weitzman (Rev. Econ. Studies 41, 477-491 (1974)) show that different expected social losses arise from using effluent taxes and quotas as alternative control instruments when marginal control costs are uncertain. Key assumptions in these analyses are linear marginal cost and benefit functions and an additive error for the marginal cost function (to reflect uncertainty). In this paper, empirically derived nonlinear functions and more realistic multiplicative error terms are used to estimate expected control and damage costs and to identify (empirically) the mix of control instruments that minimizes expected losses. © 1984.
Learning from Errors: Critical Incident Reporting in Nursing
ERIC Educational Resources Information Center
Gartmeier, Martin; Ottl, Eva; Bauer, Johannes; Berberat, Pascal Oliver
2017-01-01
Purpose: The purpose of this paper is to conceptualize error reporting as a strategy for informal workplace learning and investigate nurses' error reporting cost/benefit evaluations and associated behaviors. Design/methodology/approach: A longitudinal survey study was carried out in a hospital setting with two measurements (time 1 [t1]:…
Huff, Mark J.; Balota, David A.; Minear, Meredith; Aschenbrenner, Andrew J.; Duchek, Janet M.
2015-01-01
A task-switching paradigm was used to examine differences in attentional control across younger adults, middle-aged adults, healthy older adults, and individuals classified in the earliest detectable stage of Alzheimer's disease (AD). A large sample of participants (570) completed a switching task in which participants were cued to classify the letter (consonant/vowel) or number (odd/even) task-set dimension of a bivalent stimulus (e.g., A 14), respectively. A Pure block consisting of single-task trials and a Switch block consisting of nonswitch and switch trials were completed. Local (switch vs. nonswitch trials) and global (nonswitch vs. pure trials) costs in mean error rates, mean response latencies, underlying reaction time distributions, along with stimulus-response congruency effects were computed. Local costs in errors were group invariant, but global costs in errors systematically increased as a function of age and AD. Response latencies yielded a strong dissociation: Local costs decreased across groups whereas global costs increased across groups. Vincentile distribution analyses revealed that the dissociation of local and global costs primarily occurred in the slowest response latencies. Stimulus-response congruency effects within the Switch block were particularly robust in accuracy in the very mild AD group. We argue that the results are consistent with the notion that the impaired groups show a reduced local cost because the task sets are not as well tuned, and hence produce minimal cost on switch trials. In contrast, global costs increase because of the additional burden on working memory of maintaining two task sets. PMID:26652720
McQueen, Robert Brett; Breton, Marc D; Craig, Joyce; Holmes, Hayden; Whittington, Melanie D; Ott, Markus A; Campbell, Jonathan D
2018-04-01
The objective was to model clinical and economic outcomes of self-monitoring blood glucose (SMBG) devices with varying error ranges and strip prices for type 1 and insulin-treated type 2 diabetes patients in England. We programmed a simulation model that included separate risk and complication estimates by type of diabetes and evidence from in silico modeling validated by the Food and Drug Administration. Changes in SMBG error were associated with changes in hemoglobin A1c (HbA1c) and, separately, changes in hypoglycemia. Markov cohort simulation estimated clinical and economic outcomes. An SMBG device with 8.4% error and strip price of £0.30 (exceeding accuracy requirements by International Organization for Standardization [ISO] 15197:2013/EN ISO 15197:2015) was compared to a device with 15% error (accuracy meeting ISO 15197:2013/EN ISO 15197:2015) and price of £0.20. Outcomes were lifetime costs, quality-adjusted life years (QALYs) and incremental cost-effectiveness ratios (ICERs). With SMBG errors associated with changes in HbA1c only, the ICER was £3064 per QALY in type 1 diabetes and £264 668 per QALY in insulin-treated type 2 diabetes for an SMBG device with 8.4% versus 15% error. With SMBG errors associated with hypoglycemic events only, the device exceeding accuracy requirements was cost-saving and more effective in insulin-treated type 1 and type 2 diabetes. Investment in devices with higher strip prices but improved accuracy (less error) appears to be an efficient strategy for insulin-treated diabetes patients at high risk of severe hypoglycemia.
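The cost-effectiveness ratios quoted above reduce to incremental cost divided by incremental QALYs; the sketch below shows that arithmetic with placeholder numbers rather than the study's model outputs.

```python
# Toy ICER calculation (placeholder numbers, not the study's inputs).
def icer(cost_new: float, cost_old: float, qaly_new: float, qaly_old: float) -> float:
    """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical lifetime costs (GBP) and QALYs for two SMBG strips
print(icer(cost_new=21_500.0, cost_old=21_200.0, qaly_new=10.12, qaly_old=10.02))  # ~3000 GBP/QALY
```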
Modeling Pumped Thermal Energy Storage with Waste Heat Harvesting
NASA Astrophysics Data System (ADS)
Abarr, Miles L. Lindsey
This work introduces a new concept for a utility-scale combined energy storage and generation system. The proposed design uses a pumped thermal energy storage (PTES) system that also harvests waste heat leaving a natural gas peaker plant, creating a low-cost utility-scale energy storage system by leveraging this dual functionality. This dissertation first presents a review of previous work in PTES as well as the details of the proposed integrated bottoming and energy storage system. A time-domain system model was developed in Mathworks R2016a Simscape and Simulink software to analyze this system. Validation of both the fluid state model and the thermal energy storage model is provided. The experimental results showed the average error in cumulative fluid energy between simulation and measurement was +/- 0.3% per hour. Comparison to a Finite Element Analysis (FEA) model showed <1% error for bottoming mode heat transfer. The system model was used to conduct sensitivity analysis and to estimate the baseline performance and levelized cost of energy of a recently proposed Pumped Thermal Energy Storage and Bottoming System (Bot-PTES) that uses ammonia as the working fluid. This analysis focused on the effects of hot thermal storage utilization, system pressure, and evaporator/condenser size on the system performance. This work presents the estimated performance for a proposed baseline Bot-PTES. Results of this analysis showed that all selected parameters had significant effects on efficiency, with the evaporator/condenser size having the largest effect over the selected ranges. Results for the baseline case showed stand-alone energy storage efficiencies between 51 and 66% for varying power levels and charge states, and a stand-alone bottoming efficiency of 24%. The resulting efficiencies for this case were low compared to competing technologies; however, the dual functionality of the Bot-PTES gives it a higher capacity factor, leading to a levelized cost of energy of $91-197/MWh compared to $262-284/MWh for batteries and $172-254/MWh for Compressed Air Energy Storage.
The impacts of observing flawed and flawless demonstrations on clinical skill learning.
Domuracki, Kurt; Wong, Arthur; Olivieri, Lori; Grierson, Lawrence E M
2015-02-01
Clinical skills expertise can be advanced through accessible and cost-effective video-based observational practice activities. Previous findings suggest that the observation of performances of skills that include flaws can be beneficial to trainees. Observing the scope of variability within a skilled movement allows learners to develop strategies to manage the potential for and consequences associated with errors. This study tests this observational learning approach on the development of the skills of central line insertion (CLI). Medical trainees with no CLI experience (n = 39) were randomised to three observational practice groups: a group which viewed and assessed videos of an expert performing a CLI without any errors (F); a group which viewed and assessed videos that contained a mix of flawless and errorful performances (E), and a group which viewed the same videos as the E group but were also given information concerning the correctness of their assessments (FA). All participants interacted with their observational videos each day for 4 days. Following this period, participants returned to the laboratory and performed a simulation-based insertion, which was assessed using a standard checklist and a global rating scale for the skill. These ratings served as the dependent measures for analysis. The checklist analysis revealed no differences between observational learning groups (grand mean ± standard error: [20.3 ± 0.7]/25). However, the global rating analysis revealed a main effect of group (F(2,36) = 4.51, p = 0.018), which describes better CLI performance in the FA group, compared with the F and E groups. Observational practice that includes errors improves the global performance aspects of clinical skill learning as long as learners are given confirmation that what they are observing is errorful. These findings provide a refined perspective on the optimal organisation of skill education programmes that combine physical and observational practice activities. © 2015 John Wiley & Sons Ltd.
A secure RFID authentication protocol adopting error correction code.
Chen, Chien-Ming; Chen, Shuai-Min; Zheng, Xinying; Chen, Pei-Yu; Sun, Hung-Min
2014-01-01
RFID technology has become popular in many applications; however, most of the RFID products lack security related functionality due to the hardware limitation of the low-cost RFID tags. In this paper, we propose a lightweight mutual authentication protocol adopting error correction code for RFID. Besides, we also propose an advanced version of our protocol to provide key updating. Based on the secrecy of shared keys, the reader and the tag can establish a mutual authenticity relationship. Further analysis of the protocol showed that it also satisfies integrity, forward secrecy, anonymity, and untraceability. Compared with other lightweight protocols, the proposed protocol provides stronger resistance to tracing attacks, compromising attacks and replay attacks. We also compare our protocol with previous works in terms of performance.
Cost of Contralateral Prophylactic Mastectomy
Deshmukh, Ashish A.; Cantor, Scott B.; Crosby, Melissa A.; Dong, Wenli; Shen, Yu; Bedrosian, Isabelle; Peterson, Susan K.; Parker, Patricia A.; Brewster, Abenaa M.
2014-01-01
Purpose To compare the health care costs of women with unilateral breast cancer who underwent contralateral prophylactic mastectomy (CPM) with those of women who did not. Methods We conducted a retrospective study of 904 women treated for stage I–III breast cancer with or without CPM. Women were matched according to age, year at diagnosis, stage, and receipt of chemotherapy. We included healthcare costs starting from the date of surgery to 24 months. We identified whether care was immediate or delayed (CPM within 6 months or 6–24 months after initial surgery, respectively). Costs were converted to approximate Medicare reimbursement values and adjusted for inflation. Multivariable regression analysis was performed to evaluate the effect of CPM on total breast cancer care costs adjusting for patient characteristics and accounting for matched pairs. Results The mean difference between the CPM and no-CPM matched groups was $3,573 (standard error [SE]=$455) for professional costs, $4,176 (SE=$1,724) for technical costs, and $7,749 (SE=$2,069) for total costs. For immediate and delayed CPM, the mean difference for total costs was $6,528 (SE =$2,243) and $16,744 (SE=$5,017), respectively. In multivariable analysis, the CPM group had a statistically significant increase of 16.9% in mean total costs compared to the no-CPM group (P<0.0001). HER-2/neu-positive status, receipt of radiation, and reconstruction were associated with increases in total costs. Conclusions CPM significantly increases short-term healthcare costs for women with unilateral breast cancer. These patient-level cost results can be used for future studies that evaluate the influence of costs of CPM on decision making. PMID:24809301
Derks, Marjolein; Hogeveen, Henk; Kooistra, Sake R; van Werven, Tine; Tauer, Loren W
2014-12-01
This paper compares the farm efficiencies of dairies participating in a veterinary herd health management (VHHM) program with those of dairies not participating in such a program, to determine whether participation is associated with farm efficiency. In 2011, 572 dairy farmers received a questionnaire concerning the participation and execution of a VHHM program on their farms. Data from the questionnaire were combined with farm accountancy data from 2008 through 2012 from farms that used calendar-year accounting periods, and were analyzed using Stochastic Frontier Analysis (SFA). Two separate models were specified: model 1 was the basic stochastic frontier model (output: total revenue; inputs: feed costs, land costs, cattle costs, non-operational costs), without explanatory variables embedded in the efficiency component of the error term. Model 2 expanded model 1 by including explanatory variables (number of FTE; total kg milk delivered; price of concentrate; milk per hectare; cows per FTE; nutritional yield per hectare) in the efficiency component of the joint error term. Both models were estimated with the financial parameters expressed per 100 kg fat- and protein-corrected milk and per cow. Land costs, cattle costs, feed costs and non-operational costs were statistically significant and positive in all models (P<0.01). Frequency distributions of the efficiency scores for the VHHM dairies and the non-VHHM dairies were plotted in a kernel density plot, and differences were tested using the Kolmogorov-Smirnov two-sample test. VHHM dairies had higher total revenue per cow, but not per 100 kg milk. For all SFA models, the difference in distribution between VHHM dairies and non-VHHM dairies was not statistically significant (P values 0.94, 0.35, 0.95 and 0.89 for the basic and complete models per 100 kg fat- and protein-corrected milk and per cow, respectively). We therefore conclude that, with our data, farm participation in VHHM is not related to overall farm efficiency. Copyright © 2014 Elsevier B.V. All rights reserved.
Lerch, Rachel A; Sims, Chris R
2016-06-01
Limitations in visual working memory (VWM) have been extensively studied in psychophysical tasks, but not well understood in terms of how these memory limits translate to performance in more natural domains. For example, in reaching to grasp an object based on a spatial memory representation, overshooting the intended target may be more costly than undershooting, such as when reaching for a cup of hot coffee. The current body of literature lacks a detailed account of how the costs or consequences of memory error influence what we encode in visual memory and how we act on the basis of remembered information. Here, we study how externally imposed monetary costs influence behavior in a motor decision task that involves reach planning based on recalled information from VWM. We approach this from a decision-theoretic perspective, viewing decisions of where to aim in relation to the utility of their outcomes given the uncertainty of memory representations. Our results indicate that subjects accounted for the uncertainty in their visual memory, showing a significant difference in their reach planning when monetary costs were imposed for memory errors. However, our findings indicate that subjects' memory representations per se were not biased by the imposed costs; rather, subjects adopted a near-optimal post-mnemonic decision strategy in their motor planning.
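A minimal sketch of the decision-theoretic framing described here: with Gaussian uncertainty about a remembered location and an assumed asymmetric penalty (overshooting costing more than undershooting), the expected-loss-minimizing aim point shifts toward the cheaper side. The loss function and parameter values are illustrative assumptions, not the experiment's payoffs.

```python
# Sketch: choose an aim point that minimizes expected loss under memory noise.
# Asymmetric loss and all parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
target = 0.0                      # remembered target position
memory_sd = 1.0                   # std. dev. of spatial memory noise

def loss(endpoint: np.ndarray) -> np.ndarray:
    # Overshooting (endpoint > target) is penalized 4x more than undershooting.
    err = endpoint - target
    return np.where(err > 0, 4.0 * err**2, err**2)

aims = np.linspace(-2.0, 2.0, 401)
noise = rng.normal(0.0, memory_sd, 20_000)
expected_loss = [loss(a + noise).mean() for a in aims]   # Monte Carlo expectation
best_aim = aims[int(np.argmin(expected_loss))]
print(f"optimal aim point shifts to {best_aim:+.2f} (negative = undershoot side)")
```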
Hsueh, Ya-seng Arthur; Brando, Alex; Dunt, David; Anjou, Mitchell D; Boudville, Andrea; Taylor, Hugh
2013-12-01
To estimate the costs of the extra resources required to close the gap of vision between Indigenous and non-Indigenous Australians. Constructing comprehensive eye care pathways for Indigenous Australians with their related probabilities, to capture full eye care usage compared with current usage rate for cataract surgery, refractive error and diabetic retinopathy using the best available data. Urban and remote regions of Australia. The provision of eye care for cataract surgery, refractive error and diabetic retinopathy. Estimated cost needed for full access, estimated current spending and estimated extra cost required to close the gaps of cataract surgery, refractive error and diabetic retinopathy for Indigenous Australians. Total cost needed for full coverage of all three major eye conditions is $45.5 million per year in 2011 Australian dollars. Current annual spending is $17.4 million. Additional yearly cost required to close the gap of vision is $28 million. This includes extra-capped funds of $3 million from the Commonwealth Government and $2 million from the State and Territory Governments. Additional coordination costs per year are $13.3 million. Although available data are limited, this study has produced the first estimates that are indicative of the need for planning and provide equity in eye care. © 2013 The Authors. Australian Journal of Rural Health © National Rural Health Alliance Inc.
Explanation Capabilities for Behavior-Based Robot Control
NASA Technical Reports Server (NTRS)
Huntsberger, Terrance L.
2012-01-01
A recent study that evaluated issues associated with remote interaction with an autonomous vehicle within the framework of grounding found that missing contextual information led to uncertainty in the interpretation of collected data, and so introduced errors into the command logic of the vehicle. As the vehicles became more autonomous through the activation of additional capabilities, more errors were made. This is an inefficient use of the platform, since the behavior of remotely located autonomous vehicles didn't coincide with the "mental models" of human operators. One of the conclusions of the study was that there should be a way for the autonomous vehicles to describe what action they choose and why. Robotic agents with enough self-awareness to dynamically adjust the information conveyed back to the Operations Center based on a detail level component analysis of requests could provide this description capability. One way to accomplish this is to map the behavior base of the robot into a formal mathematical framework called a cost-calculus. A cost-calculus uses composition operators to build up sequences of behaviors that can then be compared to what is observed using well-known inference mechanisms.
Salamone, Francesco; Danza, Ludovico; Meroni, Italo; Pollastro, Maria Cristina
2017-04-11
nEMoS (nano Environmental Monitoring System) is a 3D-printed device built following the Do-It-Yourself (DIY) approach. It can be connected to the web and it can be used to assess indoor environmental quality (IEQ). It is built using some low-cost sensors connected to an Arduino microcontroller board. The device is assembled in a small-sized case and both thermohygrometric sensors used to measure the air temperature and relative humidity, and the globe thermometer used to measure the radiant temperature, can be subject to thermal effects due to overheating of some nearby components. A thermographic analysis was made to rule out this possibility. The paper shows how the pervasive technique of additive manufacturing can be combined with the more traditional thermographic techniques to redesign the case and to verify the accuracy of the optimized system in order to prevent instrumental systematic errors in terms of the difference between experimental and actual values of the above-mentioned environmental parameters.
Improved estimation of anomalous diffusion exponents in single-particle tracking experiments
NASA Astrophysics Data System (ADS)
Kepten, Eldad; Bronshtein, Irena; Garini, Yuval
2013-05-01
The mean square displacement is a central tool in the analysis of single-particle tracking experiments, shedding light on various biophysical phenomena. Frequently, parameters are extracted by performing time averages on single-particle trajectories followed by ensemble averaging. This procedure, however, suffers from two systematic errors when applied to particles that perform anomalous diffusion. The first is significant at short-time lags and is induced by measurement errors. The second arises from the natural heterogeneity in biophysical systems. We show how to estimate and correct these two errors and improve the estimation of the anomalous parameters for the whole particle distribution. As a consequence, we manage to characterize ensembles of heterogeneous particles even for rather short and noisy measurements where regular time-averaged mean square displacement analysis fails. We apply this method to both simulations and in vivo measurements of telomere diffusion in 3T3 mouse embryonic fibroblast cells. The motion of telomeres is found to be subdiffusive with an average exponent constant in time. Individual telomere exponents are normally distributed around the average exponent. The proposed methodology has the potential to improve experimental accuracy while maintaining lower experimental costs and complexity.
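A bare-bones sketch of the time-averaged mean square displacement underlying this kind of analysis, with the anomalous exponent read off as the slope of log MSD versus log lag; the trajectory here is synthetic Brownian motion standing in for real particle tracks.

```python
# Sketch: time-averaged MSD of a single trajectory and a log-log slope estimate.
import numpy as np

rng = np.random.default_rng(2)
x = np.cumsum(rng.normal(0.0, 1.0, 5000))          # synthetic 1-D Brownian trajectory

def ta_msd(traj: np.ndarray, max_lag: int) -> np.ndarray:
    """Time-averaged MSD for lags 1..max_lag."""
    return np.array([np.mean((traj[lag:] - traj[:-lag]) ** 2) for lag in range(1, max_lag + 1)])

lags = np.arange(1, 51)
msd = ta_msd(x, 50)
alpha = np.polyfit(np.log(lags), np.log(msd), 1)[0]   # anomalous exponent estimate
print(f"estimated exponent alpha ~ {alpha:.2f} (about 1 for normal diffusion)")
```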
Bayesian assessment of overtriage and undertriage at a level I trauma centre.
DiDomenico, Paul B; Pietzsch, Jan B; Paté-Cornell, M Elisabeth
2008-07-13
We analysed the trauma triage system at a specific level I trauma centre to assess rates of over- and undertriage and to support recommendations for system improvements. The triage process is designed to estimate the severity of patient injury and allocate resources accordingly, with potential errors of overestimation (overtriage) consuming excess resources and underestimation (undertriage) potentially leading to medical errors. We first modelled the overall trauma system using risk analysis methods to understand interdependencies among the actions of the participants. We interviewed six experienced trauma surgeons to obtain their expert opinion of the over- and undertriage rates occurring in the trauma centre. We then assessed actual over- and undertriage rates in a random sample of 86 trauma cases collected over a six-week period at the same centre. We employed Bayesian analysis to quantitatively combine the data with the prior probabilities derived from expert opinion in order to obtain posterior distributions. The results were estimates of overtriage and undertriage in 16.1% and 4.9% of patients, respectively. This Bayesian approach, which provides a quantitative assessment of the error rates using both case data and expert opinion, provides a rational means of obtaining a best estimate of the system's performance. The overall approach that we describe in this paper can be employed more widely to analyse complex health care delivery systems, with the objective of reduced errors, patient risk and excess costs.
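A small sketch of the Bayesian step described above: a Beta prior standing in for elicited expert opinion, updated with binomial case counts. The prior parameters and observed counts below are placeholders, not the study's elicited values or chart-review results.

```python
# Sketch: Beta prior (from expert opinion) updated with observed triage outcomes.
# Prior parameters and observed counts are placeholders, not the study's data.
from scipy import stats

a_prior, b_prior = 2.0, 10.0          # hypothetical expert prior on the overtriage rate
n_cases, n_overtriaged = 86, 13       # hypothetical counts from the chart review

a_post = a_prior + n_overtriaged
b_post = b_prior + (n_cases - n_overtriaged)
posterior = stats.beta(a_post, b_post)
print(f"posterior mean overtriage rate: {posterior.mean():.3f}")
print(f"95% credible interval: {posterior.ppf([0.025, 0.975])}")
```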
Accuracy Enhancement of Inertial Sensors Utilizing High Resolution Spectral Analysis
Noureldin, Aboelmagd; Armstrong, Justin; El-Shafie, Ahmed; Karamat, Tashfeen; McGaughey, Don; Korenberg, Michael; Hussain, Aini
2012-01-01
In both military and civilian applications, the inertial navigation system (INS) and the global positioning system (GPS) are two complementary technologies that can be integrated to provide reliable positioning and navigation information for land vehicles. The accuracy enhancement of INS sensors and the integration of INS with GPS are the subjects of widespread research. Wavelet de-noising of INS sensors has had limited success in removing the long-term (low-frequency) inertial sensor errors. The primary objective of this research is to develop a novel inertial sensor accuracy enhancement technique that can remove both short-term and long-term error components from inertial sensor measurements prior to INS mechanization and INS/GPS integration. A high resolution spectral analysis technique called the fast orthogonal search (FOS) algorithm is used to accurately model the low frequency range of the spectrum, which includes the vehicle motion dynamics and inertial sensor errors. FOS models the spectral components with the most energy first and uses an adaptive threshold to stop adding frequency terms when fitting a term does not reduce the mean squared error more than fitting white noise. The proposed method was developed, tested and validated through road test experiments involving both low-end tactical grade and low cost MEMS-based inertial systems. The results demonstrate that in most cases the position accuracy during GPS outages using FOS de-noised data is superior to the position accuracy using wavelet de-noising.
Class-specific Error Bounds for Ensemble Classifiers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prenger, R; Lemmond, T; Varshney, K
2009-10-06
The generalization error, or probability of misclassification, of ensemble classifiers has been shown to be bounded above by a function of the mean correlation between the constituent (i.e., base) classifiers and their average strength. This bound suggests that increasing the strength and/or decreasing the correlation of an ensemble's base classifiers may yield improved performance under the assumption of equal error costs. However, this and other existing bounds do not directly address application spaces in which error costs are inherently unequal. For applications involving binary classification, Receiver Operating Characteristic (ROC) curves, performance curves that explicitly trade off false alarms and missed detections, are often utilized to support decision making. To address performance optimization in this context, we have developed a lower bound for the entire ROC curve that can be expressed in terms of the class-specific strength and correlation of the base classifiers. We present empirical analyses demonstrating the efficacy of these bounds in predicting relative classifier performance. In addition, we specify performance regions of the ROC curve that are naturally delineated by the class-specific strengths of the base classifiers and show that each of these regions can be associated with a unique set of guidelines for performance optimization of binary classifiers within unequal error cost regimes.
Cost-Benefit Analysis of Computer Resources for Machine Learning
Champion, Richard A.
2007-01-01
Machine learning describes pattern-recognition algorithms - in this case, probabilistic neural networks (PNNs). These can be computationally intensive, in part because of the nonlinear optimizer, a numerical process that calibrates the PNN by minimizing a sum of squared errors. This report suggests efficiencies that are expressed as cost and benefit. The cost is computer time needed to calibrate the PNN, and the benefit is goodness-of-fit, how well the PNN learns the pattern in the data. There may be a point of diminishing returns where a further expenditure of computer resources does not produce additional benefits. Sampling is suggested as a cost-reduction strategy. One consideration is how many points to select for calibration and another is the geometric distribution of the points. The data points may be nonuniformly distributed across space, so that sampling at some locations provides additional benefit while sampling at other locations does not. A stratified sampling strategy can be designed to select more points in regions where they reduce the calibration error and fewer points in regions where they do not. Goodness-of-fit tests ensure that the sampling does not introduce bias. This approach is illustrated by statistical experiments for computing correlations between measures of roadless area and population density for the San Francisco Bay Area. The alternative to training efficiencies is to rely on high-performance computer systems. These may require specialized programming and algorithms that are optimized for parallel performance.
Medina, K.D.; Tasker, Gary D.
1985-01-01
The surface water data network in Kansas was analyzed using generalized least squares regression for its effectiveness in providing regional streamflow information. The correlation and time-sampling error of the streamflow characteristic are considered in the generalized least squares method. Unregulated medium-flow, low-flow and high-flow characteristics were selected to be representative of the regional information that can be obtained from streamflow gaging station records for use in evaluating the effectiveness of continuing the present network stations, discontinuing some stations, and/or adding new stations. The analysis used streamflow records for all currently operated stations that were not affected by regulation and discontinued stations for which unregulated flow characteristics, as well as physical and climatic characteristics, were available. The state was divided into three network areas, western, northeastern, and southeastern Kansas, and analysis was made for three streamflow characteristics in each area, using three planning horizons. The analysis showed that the maximum reduction of sampling mean square error for each cost level could be obtained by adding new stations and discontinuing some of the present network stations. Large reductions in sampling mean square error for low-flow information could be accomplished in all three network areas, with western Kansas having the most dramatic reduction. The addition of new stations would be most beneficial for medium-flow information in western Kansas, and to lesser degrees in the other two areas. The reduction of sampling mean square error for high-flow information would benefit most from the addition of new stations in western Kansas, and the effect diminishes to lesser degrees in the other two areas. Southeastern Kansas showed the smallest error reduction in high-flow information. A comparison among all three network areas indicated that funding resources could be most effectively used by discontinuing more stations in northeastern and southeastern Kansas and establishing more new stations in western Kansas. (Author's abstract)
Data processing 1: Advancements in machine analysis of multispectral data
NASA Technical Reports Server (NTRS)
Swain, P. H.
1972-01-01
Multispectral data processing procedures are outlined beginning with the data display process used to accomplish data editing and proceeding through clustering, feature selection criterion for error probability estimation, and sample clustering and sample classification. The effective utilization of large quantities of remote sensing data by formulating a three stage sampling model for evaluation of crop acreage estimates represents an improvement in determining the cost benefit relationship associated with remote sensing technology.
Artificial Intelligence Techniques for Automatic Screening of Amblyogenic Factors
Van Eenwyk, Jonathan; Agah, Arvin; Giangiacomo, Joseph; Cibis, Gerhard
2008-01-01
Purpose To develop a low-cost automated video system to effectively screen children aged 6 months to 6 years for amblyogenic factors. Methods In 1994 one of the authors (G.C.) described video vision development assessment, a digitizable analog video-based system combining Brückner pupil red reflex imaging and eccentric photorefraction to screen young children for amblyogenic factors. The images were analyzed manually with this system. We automated the capture of digital video frames and pupil images and applied computer vision and artificial intelligence to analyze and interpret results. The artificial intelligence systems were evaluated by a tenfold testing method. Results The best system was the decision tree learning approach, which had an accuracy of 77%, compared to the “gold standard” specialist examination with a “refer/do not refer” decision. Criteria for referral were strabismus, including microtropia, and refractive errors and anisometropia considered to be amblyogenic. Eighty-two percent of strabismic individuals were correctly identified. High refractive errors were also correctly identified and referred 90% of the time, as well as significant anisometropia. The program was less correct in identifying more moderate refractive errors, below +5 and less than −7. Conclusions Although we are pursuing a variety of avenues to improve the accuracy of the automated analysis, the program in its present form provides acceptable cost benefits for detecting ambylogenic factors in children aged 6 months to 6 years. PMID:19277222
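A generic sketch of the "decision tree learning with tenfold evaluation" approach named above, using scikit-learn on placeholder features; the feature values and labels are assumptions, not the authors' video-derived measurements.

```python
# Sketch: decision-tree classifier with 10-fold cross-validated accuracy (placeholder data).
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 4))                 # e.g. pupil reflex / refraction features (assumed)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 200) > 0).astype(int)  # refer / do-not-refer

clf = DecisionTreeClassifier(max_depth=3, random_state=0)
scores = cross_val_score(clf, X, y, cv=10, scoring="accuracy")
print(f"10-fold accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```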
Comparative abilities of Microsoft Kinect and Vicon 3D motion capture for gait analysis.
Pfister, Alexandra; West, Alexandre M; Bronner, Shaw; Noah, Jack Adam
2014-07-01
Biomechanical analysis is a powerful tool in the evaluation of movement dysfunction in orthopaedic and neurologic populations. Three-dimensional (3D) motion capture systems are widely used, accurate systems, but are costly and not available in many clinical settings. The Microsoft Kinect™ has the potential to be used as an alternative low-cost motion analysis tool. The purpose of this study was to assess concurrent validity of the Kinect™ with Brekel Kinect software in comparison to Vicon Nexus during sagittal plane gait kinematics. Twenty healthy adults (nine male, 11 female) were tracked while walking and jogging at three velocities on a treadmill. Concurrent hip and knee peak flexion and extension and stride timing measurements were compared between Vicon and Kinect™. Although Kinect measurements were representative of normal gait, the Kinect™ generally under-estimated joint flexion and over-estimated extension. Kinect™ and Vicon hip angular displacement correlation was very low and error was large. Kinect™ knee measurements were somewhat better than hip, but were not consistent enough for clinical assessment. Correlation between Kinect™ and Vicon stride timing was high and error was fairly small. Variability in Kinect™ measurements was smallest at the slowest velocity. The Kinect™ has basic motion capture capabilities and with some minor adjustments will be an acceptable tool to measure stride timing, but sophisticated advances in software and hardware are necessary to improve Kinect™ sensitivity before it can be implemented for clinical use.
NASA Astrophysics Data System (ADS)
Woldesellasse, H. T.; Marpu, P. R.; Ouarda, T.
2016-12-01
Wind is one of the crucial renewable energy sources that is expected to bring solutions to the challenges of clean energy and the global issue of climate change. A number of linear and nonlinear multivariate techniques have been used to predict the stochastic character of wind speed. A wind forecast with good accuracy has a positive impact on reducing electricity system cost and is essential for effective grid management. Over the past years, few studies have been done on the assessment of teleconnections and their possible effects on long-term wind speed variability in the UAE region. In this study, the Nonlinear Canonical Correlation Analysis (NLCCA) method is applied to study the relationship between global climate oscillation indices and meteorological variables, with a major emphasis on the wind speed and wind direction of Abu Dhabi, UAE. The wind dataset was obtained from six ground stations. The first mode of NLCCA is capable of capturing the nonlinear mode of the climate indices at different seasons, showing the symmetry between the warm states and the cool states. The strength of the nonlinear canonical correlation between the two sets of variables varies with the lead/lag time. The performance of the models is assessed by calculating error indices such as the root mean square error (RMSE) and the mean absolute error (MAE). The results indicated that NLCCA models provide more accurate information about the nonlinear intrinsic behaviour of the dataset than the linear CCA model in terms of correlation and root mean square error. Key words: Nonlinear Canonical Correlation Analysis (NLCCA), Canonical Correlation Analysis, Neural Network, Climate Indices, wind speed, wind direction
Inspection error and its adverse effects - A model with implications for practitioners
NASA Technical Reports Server (NTRS)
Collins, R. D., Jr.; Case, K. E.; Bennett, G. K.
1978-01-01
Inspection error has clearly been shown to have adverse effects upon the results desired from a quality assurance sampling plan. These effects upon performance measures have been well documented from a statistical point of view. However, little work has been presented to convince the QC manager of the unfavorable cost consequences resulting from inspection error. This paper develops a very general, yet easily used, mathematical cost model. The basic format of the well-known Guthrie-Johns model is used. However, it is modified as required to assess the effects of attributes sampling errors of the first and second kind. The economic results, under different yet realistic conditions, will no doubt be of interest to QC practitioners who face similar problems daily. Sampling inspection plans are optimized to minimize economic losses due to inspection error. Unfortunately, any error at all results in some economic loss which cannot be compensated for by sampling plan design; however, improvements over plans which neglect the presence of inspection error are possible. Implications for human performance betterment programs are apparent, as are trade-offs between sampling plan modification and inspection and training improvements economics.
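One way to see how attributes-sampling errors of the first and second kind distort a plan's behaviour is through the apparent defect rate seen by an imperfect inspector; the sketch below uses a simple binomial single-sampling plan with illustrative numbers and is not the Guthrie-Johns cost model itself.

```python
# Sketch: apparent defect rate and lot acceptance probability under inspection error.
# e1 = P(classify a good item as defective), e2 = P(classify a defective item as good).
from scipy.stats import binom

def acceptance_prob(p: float, n: int, c: int, e1: float = 0.0, e2: float = 0.0) -> float:
    p_apparent = p * (1.0 - e2) + (1.0 - p) * e1   # defect rate as seen by an imperfect inspector
    return binom.cdf(c, n, p_apparent)             # accept if observed defectives <= c

# Illustrative single-sampling plan: n = 80, acceptance number c = 2, true defect rate 3%
print("perfect inspection :", round(acceptance_prob(0.03, 80, 2), 3))
print("with e1=2%, e2=10% :", round(acceptance_prob(0.03, 80, 2, e1=0.02, e2=0.10), 3))
```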
Generalized Variance Function Applications in Forestry
James Alegria; Charles T. Scott
1991-01-01
Adequately predicting the sampling errors of tabular data can reduce printing costs by eliminating the need to publish separate sampling error tables. Two generalized variance functions (GVFs) found in the literature and three GVFs derived for this study were evaluated for their ability to predict the sampling error of tabular forestry estimates. The recommended GVFs...
Portable and Error-Free DNA-Based Data Storage.
Yazdi, S M Hossein Tabatabaei; Gabrys, Ryan; Milenkovic, Olgica
2017-07-10
DNA-based data storage is an emerging nonvolatile memory technology of potentially unprecedented density, durability, and replication efficiency. The basic system implementation steps include synthesizing DNA strings that contain user information and subsequently retrieving them via high-throughput sequencing technologies. Existing architectures enable reading and writing but do not offer random-access and error-free data recovery from low-cost, portable devices, which is crucial for making the storage technology competitive with classical recorders. Here we show for the first time that a portable, random-access platform may be implemented in practice using nanopore sequencers. The novelty of our approach is to design an integrated processing pipeline that encodes data to avoid costly synthesis and sequencing errors, enables random access through addressing, and leverages efficient portable sequencing via new iterative alignment and deletion error-correcting codes. Our work represents the only known random access DNA-based data storage system that uses error-prone nanopore sequencers, while still producing error-free readouts with the highest reported information rate/density. As such, it represents a crucial step towards practical employment of DNA molecules as storage media.
Annual Cost of U.S. Hospital Visits for Pediatric Abusive Head Trauma.
Peterson, Cora; Xu, Likang; Florence, Curtis; Parks, Sharyn E
2015-08-01
We estimated the frequency and direct medical cost from the provider perspective of U.S. hospital visits for pediatric abusive head trauma (AHT). We identified treat-and-release hospital emergency department (ED) visits and admissions for AHT among patients aged 0-4 years in the Nationwide Emergency Department Sample and Nationwide Inpatient Sample (NIS), 2006-2011. We applied cost-to-charge ratios and estimated professional fee ratios from Truven Health MarketScan(®) to estimate per-visit and total population costs of AHT ED visits and admissions. Regression models assessed cost differences associated with selected patient and hospital characteristics. AHT was diagnosed during 6,827 (95% confidence interval [CI] [6,072, 7,582]) ED visits and 12,533 (95% CI [10,395, 14,671]) admissions (28% originating in the same hospital's ED) nationwide over the study period. The average medical costs per ED visit and per admission were US$2,612 (error bound: 1,644-3,581) and US$31,901 (error bound: 29,266-34,536), respectively (2012 USD). The average total annual nationwide medical cost of AHT hospital visits was US$69.6 million (error bound: 56.9-82.3 million) over the study period. Factors associated with higher per-visit costs included patient age <1 year, males, coexisting chronic conditions, discharge to another facility, death, higher household income, public insurance payer, hospital trauma level, and teaching hospitals in urban locations. Study findings emphasize the importance of focused interventions to reduce this type of high-cost child abuse. © The Author(s) 2015.
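The headline annual figure can be reproduced, to rounding, from the reported visit counts and average per-visit costs over the 2006-2011 (six-year) window, as the short arithmetic check below shows.

```python
# Rough reproduction of the annual cost estimate from the reported counts and unit costs.
ed_visits, ed_cost_per_visit = 6_827, 2_612          # treat-and-release ED visits
admissions, cost_per_admission = 12_533, 31_901      # inpatient admissions
study_years = 6                                      # 2006-2011

total = ed_visits * ed_cost_per_visit + admissions * cost_per_admission
print(f"total over study period: ${total/1e6:.1f} M")              # about $417.6 M
print(f"average per year:        ${total/study_years/1e6:.1f} M")  # about $69.6 M
```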
Reliability and Validity in Hospital Case-Mix Measurement
Pettengill, Julian; Vertrees, James
1982-01-01
There is widespread interest in the development of a measure of hospital output. This paper describes the problem of measuring the expected cost of the mix of inpatient cases treated in a hospital (hospital case-mix) and a general approach to its solution. The solution is based on a set of homogenous groups of patients, defined by a patient classification system, and a set of estimated relative cost weights corresponding to the patient categories. This approach is applied to develop a summary measure of the expected relative costliness of the mix of Medicare patients treated in 5,576 participating hospitals. The Medicare case-mix index is evaluated by estimating a hospital average cost function. This provides a direct test of the hypothesis that the relationship between Medicare case-mix and Medicare cost per case is proportional. The cost function analysis also provides a means of simulating the effects of classification error on our estimate of this relationship. Our results indicate that this general approach to measuring hospital case-mix provides a valid and robust measure of the expected cost of a hospital's case-mix. PMID:10309909
Toward an affordable and user-friendly visual motion capture system.
Bonnet, V; Sylla, N; Cherubini, A; Gonzáles, A; Azevedo Coste, C; Fraisse, P; Venture, G
2014-01-01
The present study aims at designing and evaluating a low-cost, simple and portable system for arm joint angle estimation during grasping-like motions. The system is based on a single RGB-D camera and three customized markers. The automatically detected and tracked marker positions were used as inputs to an offline inverse kinematic process based on bio-mechanical constraints to reduce noise effect and handle marker occlusion. The method was validated on 4 subjects with different motions. The joint angles were estimated both with the proposed low-cost system and a stereophotogrammetric system. Comparative analysis shows good accuracy with a high correlation coefficient (r = 0.92) and low average RMS error (3.8 deg).
NASA Technical Reports Server (NTRS)
Keppenne, Christian L.; Rienecker, Michele M.; Kovach, Robin M.; Vernieres, Guillaume; Koster, Randal D. (Editor)
2014-01-01
An attractive property of ensemble data assimilation methods is that they provide flow dependent background error covariance estimates which can be used to update fields of observed variables as well as fields of unobserved model variables. Two methods to estimate background error covariances are introduced which share the above property with ensemble data assimilation methods but do not involve the integration of multiple model trajectories. Instead, all the necessary covariance information is obtained from a single model integration. The Space Adaptive Forecast error Estimation (SAFE) algorithm estimates error covariances from the spatial distribution of model variables within a single state vector. The Flow Adaptive error Statistics from a Time series (FAST) method constructs an ensemble sampled from a moving window along a model trajectory. SAFE and FAST are applied to the assimilation of Argo temperature profiles into version 4.1 of the Modular Ocean Model (MOM4.1) coupled to the GEOS-5 atmospheric model and to the CICE sea ice model. The results are validated against unassimilated Argo salinity data. They show that SAFE and FAST are competitive with the ensemble optimal interpolation (EnOI) used by the Global Modeling and Assimilation Office (GMAO) to produce its ocean analysis. Because of their reduced cost, SAFE and FAST hold promise for high-resolution data assimilation applications.
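A schematic sketch of the FAST idea as described here: treat a moving window of states along a single model trajectory as a pseudo-ensemble and estimate a flow-dependent background error covariance from it. The state dimension, window length, and synthetic trajectory are assumptions for illustration.

```python
# Schematic FAST-style covariance: an "ensemble" drawn from a moving window of one trajectory.
import numpy as np

rng = np.random.default_rng(4)
n_state, n_steps, window = 6, 500, 40

# Synthetic model trajectory (stand-in for a single ocean-model integration).
states = np.cumsum(rng.normal(0.0, 0.1, (n_steps, n_state)), axis=0)

t = 300                                             # analysis time
ensemble = states[t - window:t]                     # window of recent states as pseudo-members
anomalies = ensemble - ensemble.mean(axis=0)        # remove the window mean
B = anomalies.T @ anomalies / (window - 1)          # flow-dependent background covariance estimate
print("estimated background covariance shape:", B.shape)
```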
A variable acceleration calibration system
NASA Astrophysics Data System (ADS)
Johnson, Thomas H.
2011-12-01
A variable acceleration calibration system that applies loads using gravitational and centripetal acceleration serves as an alternative, efficient and cost effective method for calibrating internal wind tunnel force balances. Two proof-of-concept variable acceleration calibration systems are designed, fabricated and tested. The NASA UT-36 force balance served as the test balance for the calibration experiments. The variable acceleration calibration systems are shown to be capable of performing three component calibration experiments with an approximate applied load error on the order of 1% of the full scale calibration loads. Sources of error are identified using experimental design methods and a propagation of uncertainty analysis. Three types of uncertainty are identified for the systems and are attributed to prediction error, calibration error and pure error. Angular velocity uncertainty is shown to be the largest identified source of prediction error. The calibration uncertainties using a production variable acceleration based system are shown to be potentially equivalent to current methods. The production quality system can be realized using lighter materials and more precise instrumentation. Further research is needed to account for balance deflection, forcing effects due to vibration, and large tare loads. A gyroscope measurement technique is shown to be capable of resolving the balance deflection angle calculation. Long term research objectives include a demonstration of a six degree of freedom calibration, and a large capacity balance calibration.
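Because a centripetal load grows with the square of angular velocity (F = m ω² r), a given relative error in ω produces roughly twice that relative error in the applied load, which is consistent with angular velocity uncertainty being the largest identified source of prediction error. The propagation-of-uncertainty sketch below uses illustrative numbers, not the system's actual specifications.

```python
# Sketch: first-order propagation of angular-velocity uncertainty into a centripetal load.
# F = m * omega**2 * r  =>  dF/F ~ 2 * domega/omega (holding m and r fixed).
m, r = 2.0, 0.5                 # kg, m (illustrative)
omega, d_omega = 20.0, 0.05     # rad/s and its 1-sigma uncertainty (illustrative)

F = m * omega**2 * r
dF = 2.0 * m * omega * r * d_omega        # |dF/domega| * d_omega
print(f"F = {F:.1f} N, dF = {dF:.2f} N ({100 * dF / F:.2f}% of load)")
print(f"relative check: 2*domega/omega = {100 * 2 * d_omega / omega:.2f}%")
```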
Dimensional Error in Rapid Prototyping with Open Source Software and Low-cost 3D-printer.
Rendón-Medina, Marco A; Andrade-Delgado, Laura; Telich-Tarriba, Jose E; Fuente-Del-Campo, Antonio; Altamirano-Arcos, Carlos A
2018-01-01
Rapid prototyping models (RPMs) had been extensively used in craniofacial and maxillofacial surgery, especially in areas such as orthognathic surgery, posttraumatic or oncological reconstructions, and implantology. Economic limitations are higher in developing countries such as Mexico, where resources dedicated to health care are limited, therefore limiting the use of RPM to few selected centers. This article aims to determine the dimensional error of a low-cost fused deposition modeling 3D printer (Tronxy P802MA, Shenzhen, Tronxy Technology Co), with Open source software. An ordinary dry human mandible was scanned with a computed tomography device. The data were processed with open software to build a rapid prototype with a fused deposition machine. Linear measurements were performed to find the mean absolute and relative difference. The mean absolute and relative difference was 0.65 mm and 1.96%, respectively (P = 0.96). Low-cost FDM machines and Open Source Software are excellent options to manufacture RPM, with the benefit of low cost and a similar relative error than other more expensive technologies.
Artificial Vector Calibration Method for Differencing Magnetic Gradient Tensor Systems
Li, Zhining; Zhang, Yingtang; Yin, Gang
2018-01-01
The measurement error of the differencing (i.e., using two homogenous field sensors at a known baseline distance) magnetic gradient tensor system includes the biases, scale factors, nonorthogonality of the single magnetic sensor, and the misalignment error between the sensor arrays, all of which can severely affect the measurement accuracy. In this paper, we propose a low-cost artificial vector calibration method for the tensor system. Firstly, the error parameter linear equations are constructed based on the single-sensor’s system error model to obtain the artificial ideal vector output of the platform, with the total magnetic intensity (TMI) scalar as a reference by two nonlinear conversions, without any mathematical simplification. Secondly, the Levenberg–Marquardt algorithm is used to compute the integrated model of the 12 error parameters by nonlinear least-squares fitting method with the artificial vector output as a reference, and a total of 48 parameters of the system is estimated simultaneously. The calibrated system outputs along the reference platform-orthogonal coordinate system. The analysis results show that the artificial vector calibrated output can track the orientation fluctuations of TMI accurately, effectively avoiding the “overcalibration” problem. The accuracy of the error parameters’ estimation in the simulation is close to 100%. The experimental root-mean-square error (RMSE) of the TMI and tensor components is less than 3 nT and 20 nT/m, respectively, and the estimation of the parameters is highly robust. PMID:29373544
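A generic sketch of the nonlinear least-squares fitting step named above, using SciPy's Levenberg-Marquardt solver on a toy per-axis scale-and-bias error model; this is only the fitting pattern, not the paper's 12-parameter (48 parameters in total) integrated model.

```python
# Sketch: Levenberg-Marquardt fit of a toy sensor error model (scale and bias per axis).
# This is NOT the paper's integrated model, just the fitting pattern.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(5)
true_field = rng.normal(0.0, 40_000.0, (300, 3))             # nT, synthetic reference vectors
true_scale, true_bias = np.array([1.02, 0.98, 1.01]), np.array([120.0, -80.0, 50.0])
measured = true_field * true_scale + true_bias + rng.normal(0.0, 5.0, (300, 3))

def residuals(params: np.ndarray) -> np.ndarray:
    scale, bias = params[:3], params[3:]
    return ((measured - bias) / scale - true_field).ravel()

fit = least_squares(residuals, x0=np.ones(6), method="lm")   # Levenberg-Marquardt
print("estimated scale:", np.round(fit.x[:3], 3))
print("estimated bias :", np.round(fit.x[3:], 1))
```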
Kedir, Jafer; Girma, Abonesh
2014-10-01
Refractive error is one of the major causes of blindness and visual impairment in children, but community-based studies are scarce, especially in rural parts of Ethiopia. This study therefore aimed to assess the prevalence of refractive error and its magnitude as a cause of visual impairment among school-age children of a rural community. This community-based cross-sectional descriptive study was conducted from March 1 to April 30, 2009 in rural villages of Goro district of Gurage Zone, located southwest of Addis Ababa, the capital of Ethiopia. A multistage cluster sampling method was used with simple random selection of representative villages in the district. Chi-square and t-tests were used in the data analysis. A total of 570 school-age children (age 7-15) were evaluated, 54% boys and 46% girls. The prevalence of refractive error was 3.5% (myopia 2.6% and hyperopia 0.9%). Refractive error was the major cause of visual impairment, accounting for 54% of all causes in the study group. No child was found wearing corrective spectacles during the study period. Refractive error was the commonest cause of visual impairment in children of the district, but no measures were taken to reduce the burden in the community. Large-scale community-level screening for refractive error should therefore be conducted and integrated with regular school eye screening programs. Effective strategies need to be devised to provide low-cost corrective spectacles in the rural community.
Llorca, David F.; Sotelo, Miguel A.; Parra, Ignacio; Ocaña, Manuel; Bergasa, Luis M.
2010-01-01
This paper presents an analytical study of the depth estimation error of a stereo vision-based pedestrian detection sensor for automotive applications such as pedestrian collision avoidance and/or mitigation. The sensor comprises two synchronized and calibrated low-cost cameras. Pedestrians are detected by combining a 3D clustering method with Support Vector Machine-based (SVM) classification. The influence of the sensor parameters in the stereo quantization errors is analyzed in detail providing a point of reference for choosing the sensor setup according to the application requirements. The sensor is then validated in real experiments. Collision avoidance maneuvers by steering are carried out by manual driving. A real time kinematic differential global positioning system (RTK-DGPS) is used to provide ground truth data corresponding to both the pedestrian and the host vehicle locations. The performed field test provided encouraging results and proved the validity of the proposed sensor for being used in the automotive sector towards applications such as autonomous pedestrian collision avoidance. PMID:22319323
Modified linear predictive coding approach for moving target tracking by Doppler radar
NASA Astrophysics Data System (ADS)
Ding, Yipeng; Lin, Xiaoyi; Sun, Ke-Hui; Xu, Xue-Mei; Liu, Xi-Yao
2016-07-01
Doppler radar is a cost-effective tool for moving target tracking, which can support a large range of civilian and military applications. A modified linear predictive coding (LPC) approach is proposed to increase the target localization accuracy of the Doppler radar. Based on time-frequency analysis of the received echo, the proposed approach first estimates the noise statistics in real time and constructs an adaptive filter to suppress the noise interference. Then, a linear predictive model is applied to extend the available data, which helps improve the resolution of the target localization result. Compared with the traditional LPC method, which decides the extension data length empirically, the proposed approach develops an error array to evaluate the prediction accuracy and thus adjusts the optimum extension data length adaptively. Finally, the prediction error array is superimposed on the predictor output to correct the prediction error. A series of experiments are conducted to illustrate the validity and performance of the proposed techniques.
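The data-extension step rests on ordinary linear predictive coding: fit prediction coefficients to the received echo and extrapolate it forward. A minimal sketch of plain autocorrelation-method LPC on a synthetic echo follows; the paper's adaptive noise filter and error-array correction are not reproduced.

```python
import numpy as np
from scipy.linalg import toeplitz

def lpc_coeffs(x, order):
    # Autocorrelation method: solve the Yule-Walker equations R a = r.
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]   # lags 0..order
    R = toeplitz(r[:order])
    return np.linalg.solve(R, r[1:order + 1])

def lpc_extend(x, order, n_extra):
    # Predict n_extra future samples recursively from the last `order` samples.
    a = lpc_coeffs(x, order)
    y = list(x)
    for _ in range(n_extra):
        y.append(np.dot(a, y[-1:-order - 1:-1]))
    return np.array(y)

# Toy echo: a slowly chirping tone in noise, extended by 25% to sharpen the analysis window.
t = np.arange(512) / 1e3
echo = np.cos(2 * np.pi * (50 + 20 * t) * t) + 0.05 * np.random.randn(t.size)
extended = lpc_extend(echo, order=16, n_extra=128)
```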
Yan, Song; Li, Yun
2014-02-15
Despite its great capability to detect rare variant associations, next-generation sequencing is still prohibitively expensive when applied to large samples. In case-control studies, it is thus appealing to sequence only a subset of cases to discover variants and to genotype the identified variants in controls and the remaining cases, under the reasonable assumption that causal variants are usually enriched among cases. However, this approach leads to inflated type-I error if analyzed naively for rare variant association. Several methods have been proposed in the recent literature to control type-I error at the cost of either excluding some sequenced cases or correcting the genotypes of discovered rare variants. All of these approaches thus suffer some degree of information loss and are underpowered. We propose a novel method (BETASEQ), which corrects the inflation of type-I error by supplementing pseudo-variants while keeping the original sequence and genotype data intact. Extensive simulations and real data analysis demonstrate that, in most practical situations, BETASEQ leads to higher testing power than existing approaches with guaranteed (controlled or conservative) type-I error. BETASEQ and associated R files, including documentation and examples, are available at http://www.unc.edu/~yunmli/betaseq
Rein, David B; Wittenborn, John S; Zhang, Xinzhi; Allaire, Benjamin A; Song, Michael S; Klein, Ronald; Saaddine, Jinan B
2011-01-01
Objective To determine whether biennial eye evaluation or telemedicine screening are cost-effective alternatives to current recommendations for the estimated 10 million people aged 30–84 with diabetes but no or minimal diabetic retinopathy. Data Sources United Kingdom Prospective Diabetes Study, National Health and Nutrition Examination Survey, American Academy of Ophthalmology Preferred Practice Patterns, Medicare Payment Schedule. Study Design Cost-effectiveness Monte Carlo simulation. Data Collection/Extraction Methods Literature review, analysis of existing surveys. Principal Findings Biennial eye evaluation was the most cost-effective treatment option when the ability to detect other eye conditions was included in the model. Telemedicine was most cost-effective when other eye conditions were not considered or when telemedicine was assumed to detect refractive error. The current annual eye evaluation recommendation was costly compared with either treatment alternative. Self-referral was most cost-effective up to a willingness to pay (WTP) of U.S.$37,600, with either biennial or annual evaluation most cost-effective at higher WTP levels. Conclusions Annual eye evaluations are costly and add little benefit compared with either plausible alternative. More research on the ability of telemedicine to detect other eye conditions is needed to determine whether it is more cost-effective than biennial eye evaluation. PMID:21492158
Cost Implications of Organizing Nursing Home Workforce in Teams
Mukamel, Dana B; Cai, Shubing; Temkin-Greener, Helena
2009-01-01
Objective To estimate the costs associated with formal and self-managed daily practice teams in nursing homes. Data Sources/Study Setting Medicaid cost reports for 135 nursing homes in New York State in 2006 and survey data for 6,137 direct care workers. Study Design A retrospective statistical analysis: We estimated hybrid cost functions that include team penetration variables. Inference was based on robust standard errors. Data Collection Formal and self-managed team penetration (i.e., percent of staff working in a team) were calculated from survey responses. Annual variable costs, beds, case mix-adjusted days, admissions, home care visits, outpatient clinic visits, day care days, wages, and ownership were calculated from the cost reports. Principal Findings Formal team penetration was significantly associated with costs, while self-managed teams penetration was not. Costs declined with increasing penetration up to 13 percent of formal teams, and increased above this level. Formal teams in nursing homes in the upward sloping range of the curve were more diverse, with a larger number of participating disciplines and more likely to include physicians. Conclusions Organization of workforce in formal teams may offer nursing homes a cost-saving strategy. More research is required to understand the relationship between team composition and costs. PMID:19486181
Baltussen, Rob; Naus, Jeroen; Limburg, Hans
2009-02-01
To estimate the costs and effects of alternative strategies for annual screening of school children for refractive errors, and the provision of spectacles, in different WHO sub-regions in Africa, Asia, America and Europe. We developed a mathematical simulation model for uncorrected refractive error, using prevailing prevalence and incidence rates. Remission rates reflected the absence or presence of screening strategies for school children. All screening strategies were implemented for a period of 10 years and were compared with a situation where no screening was implemented. Outcome measures were disability-adjusted life years (DALYs), costs of screening, provision of spectacles and follow-up for six different screening strategies, and cost-effectiveness in international dollars per DALY averted. Epidemiological information was derived from the burden of disease study of the World Health Organization (WHO). Cost data were derived from large WHO databases. Both univariate and multivariate sensitivity analyses were performed on key parameters to determine the robustness of the model results. In all regions, screening of 5-15-year-old children yields the most health effects, followed by screening of 11-15-year-olds, 5-10-year-olds, and screening of 8- and 13-year-olds. Screening of broad age intervals is always more costly than screening of single-age intervals, and there are important economies of scale in simultaneous screening of both 5-10- and 11-15-year-old children. In all regions, screening of 11-15-year-olds is the most cost-effective intervention, with the cost per DALY averted ranging from I$67 in the Asian sub-region to I$458 in the European sub-region. The incremental cost per DALY averted of screening 5-15-year-olds ranges between I$111 in the Asian sub-region and I$672 in the European sub-region. Considering the conservative study assumptions and the robustness of the study conclusions to changes in these assumptions, screening of school children for refractive error is economically attractive in all regions of the world.
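The reported ratios are incremental cost-effectiveness ratios: the extra cost of a strategy divided by the DALYs it averts relative to no screening. A toy calculation with invented numbers (not the study's values) illustrates the arithmetic.

```python
# Illustrative (made-up) 10-year totals for one WHO sub-region.
cost_screening, cost_no_screening = 1_500_000.0, 0.0   # international dollars (I$)
dalys_averted = 22_000.0                                # DALYs averted vs. no screening

cer = (cost_screening - cost_no_screening) / dalys_averted
print(f"cost-effectiveness ratio: I${cer:.0f} per DALY averted")
```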
Reassessing the human health benefits from cleaner air.
Cox, Louis Anthony
2012-05-01
Recent proposals to further reduce permitted levels of air pollution emissions are supported by high projected values of resulting public health benefits. For example, the Environmental Protection Agency recently estimated that the 1990 Clean Air Act Amendment (CAAA) will produce human health benefits in 2020, from reduced mortality rates, valued at nearly $2 trillion per year, compared to compliance costs of $65 billion ($0.065 trillion). However, while compliance costs can be measured, health benefits are unproved: they depend on a series of uncertain assumptions. Among these are that additional life expectancy gained by a beneficiary (with median age of about 80 years) should be valued at about $80,000 per month; that there is a 100% probability that a positive, linear, no-threshold, causal relation exists between PM2.5 concentration and mortality risk; and that progress in medicine and disease prevention will not greatly diminish this relationship. We present an alternative uncertainty analysis that assigns a positive probability of error to each assumption. This discrete uncertainty analysis suggests (with probability >90% under plausible alternative assumptions) that the costs of CAAA exceed its benefits. Thus, instead of suggesting to policymakers that CAAA benefits are almost certainly far larger than its costs, we believe that accuracy requires acknowledging that the costs purchase a relatively uncertain, possibly much smaller, benefit. The difference between these contrasting conclusions is driven by different approaches to uncertainty analysis, that is, excluding or including discrete uncertainties about the main assumptions required for nonzero health benefits to exist at all. © 2011 Society for Risk Analysis.
Cao, Youfang; Terebus, Anna; Liang, Jie
2016-01-01
The discrete chemical master equation (dCME) provides a general framework for studying stochasticity in mesoscopic reaction networks. Since its direct solution rapidly becomes intractable due to the increasing size of the state space, truncation of the state space is necessary for solving most dCMEs. It is therefore important to assess the consequences of state space truncations so errors can be quantified and minimized. Here we describe a novel method for state space truncation. By partitioning a reaction network into multiple molecular equivalence groups (MEGs), we truncate the state space by limiting the total molecular copy numbers in each MEG. We further describe a theoretical framework for analysis of the truncation error in the steady state probability landscape using reflecting boundaries. By aggregating the state space based on the usage of a MEG and constructing an aggregated Markov process, we show that the truncation error of a MEG can be asymptotically bounded by the probability of states on the reflecting boundary of the MEG. Furthermore, truncating states of an arbitrary MEG will not undermine the estimated error of truncating any other MEGs. We then provide an overall error estimate for networks with multiple MEGs. To rapidly determine the appropriate size of an arbitrary MEG, we also introduce an a priori method to estimate the upper bound of its truncation error. This a priori estimate can be rapidly computed from reaction rates of the network, without the need of costly trial solutions of the dCME. As examples, we show results of applying our methods to the four stochastic networks of 1) the birth and death model, 2) the single gene expression model, 3) the genetic toggle switch model, and 4) the phage lambda bistable epigenetic switch model. We demonstrate how truncation errors and steady state probability landscapes can be computed using different sizes of the MEG(s) and how the results validate our theories. Overall, the novel state space truncation and error analysis methods developed here can be used to ensure accurate direct solutions to the dCME for a large number of stochastic networks. PMID:27105653
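The reflecting-boundary idea can be illustrated on the first example network, the birth and death model: build the truncated generator, solve for the steady state probability landscape, and read off the probability on the boundary as a proxy for the truncation error bound. The sketch below is a toy single-species version and does not implement the MEG aggregation or the a priori bound.

```python
import numpy as np

def birth_death_steady_state(k=10.0, g=1.0, N=60):
    """Steady state of a truncated birth-death dCME with a reflecting upper boundary."""
    A = np.zeros((N + 1, N + 1))           # generator over copy numbers n = 0..N
    for n in range(N + 1):
        if n < N:                          # birth n -> n+1 (blocked at the boundary)
            A[n + 1, n] += k
            A[n, n] -= k
        if n > 0:                          # death n -> n-1, rate g*n
            A[n - 1, n] += g * n
            A[n, n] -= g * n
    # Steady state: A p = 0 with sum(p) = 1, solved as an augmented least-squares system.
    M = np.vstack([A, np.ones(N + 1)])
    b = np.zeros(N + 2)
    b[-1] = 1.0
    p, *_ = np.linalg.lstsq(M, b, rcond=None)
    return p

p = birth_death_steady_state()
print("boundary probability (truncation error proxy):", p[-1])
```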
Cao, Youfang; Terebus, Anna; Liang, Jie
2016-04-22
The discrete chemical master equation (dCME) provides a general framework for studying stochasticity in mesoscopic reaction networks. Since its direct solution rapidly becomes intractable due to the increasing size of the state space, truncation of the state space is necessary for solving most dCMEs. It is therefore important to assess the consequences of state space truncations so errors can be quantified and minimized. Here we describe a novel method for state space truncation. By partitioning a reaction network into multiple molecular equivalence groups (MEGs), we truncate the state space by limiting the total molecular copy numbers in each MEG. We further describe a theoretical framework for analysis of the truncation error in the steady-state probability landscape using reflecting boundaries. By aggregating the state space based on the usage of a MEG and constructing an aggregated Markov process, we show that the truncation error of a MEG can be asymptotically bounded by the probability of states on the reflecting boundary of the MEG. Furthermore, truncating states of an arbitrary MEG will not undermine the estimated error of truncating any other MEGs. We then provide an overall error estimate for networks with multiple MEGs. To rapidly determine the appropriate size of an arbitrary MEG, we also introduce an a priori method to estimate the upper bound of its truncation error. This a priori estimate can be rapidly computed from reaction rates of the network, without the need of costly trial solutions of the dCME. As examples, we show results of applying our methods to the four stochastic networks of (1) the birth and death model, (2) the single gene expression model, (3) the genetic toggle switch model, and (4) the phage lambda bistable epigenetic switch model. We demonstrate how truncation errors and steady-state probability landscapes can be computed using different sizes of the MEG(s) and how the results validate our theories. Overall, the novel state space truncation and error analysis methods developed here can be used to ensure accurate direct solutions to the dCME for a large number of stochastic networks.
How to Correct a Task Error: Task-Switch Effects Following Different Types of Error Correction
ERIC Educational Resources Information Center
Steinhauser, Marco
2010-01-01
It has been proposed that switch costs in task switching reflect the strengthening of task-related associations and that strengthening is triggered by response execution. The present study tested the hypothesis that only task-related responses are able to trigger strengthening. Effects of task strengthening caused by error corrections were…
Photographic and photometric enhancement of Lunar Orbiter products, projects A, B and C
NASA Technical Reports Server (NTRS)
1972-01-01
A detailed discussion is presented of the framelet joining, photometric data improvement, and statistical error analysis. The Lunar Orbiter film handling system, readout system, and digitization are described, along with the technique of joining adjacent framelets by using a digital computer. Time and cost estimates are given. The problems and techniques involved in improving the digitized data are discussed. It was found that spectacular improvements are possible. Program documentation is included.
ADCS controllers comparison for small satellites in Low Earth Orbit
NASA Astrophysics Data System (ADS)
Calvo, Daniel; Laverón-Simavilla, Ana; Lapuerta, Victoria
2016-07-01
Fuzzy logic controllers are flexible and simple, suitable for small satellite Attitude Determination and Control Subsystems (ADCS). In a previous work, a tailored Fuzzy controller was designed for a nanosatellite. Its performance and efficiency were compared with a traditional Proportional Integral Derivative (PID) controller within the same specific mission. The orbit height varied along the mission from injection at around 380 km down to 200 km, and the mission required pointing accuracy over the whole time. Due to both the requirements imposed by such a low orbit and the limited power available for attitude control, an efficient ADCS is required. Both methodologies, Fuzzy and PID, were fine-tuned using an automated procedure to grant maximum efficiency with fixed performance. The simulations showed that the Fuzzy controller is much more efficient (up to 65% less power required) in single manoeuvres, achieving similar, or even better, precision than the PID. The accuracy and efficiency improvement of the Fuzzy controller increases with orbit height because the environmental disturbances decrease, approaching the ideal scenario. However, the controllers are meant to be used in a vast range of situations and configurations which exceed those used in the calibration process carried out in the previous work. To assess the suitability and performance of both controllers in a wider framework, parametric and statistical methods have been applied using the Monte Carlo technique. Several parameters have been modified randomly at the beginning of each simulation: the moments of inertia of the whole satellite and of the momentum wheel, the residual magnetic dipole, and the initial conditions of the test. These parameters have been chosen because they are the main source of uncertainty during the design phase. The variables used for the analysis are the error (critical for science) and the operation cost (which impacts the mission lifetime and outcome). The analysis of the simulations has shown that, overall, the PID error is more than twice the Fuzzy error and the PID cost is more than 40% higher than the Fuzzy cost. This suggests that a Fuzzy controller may be a better solution in a wider range of configurations than classical solutions such as the PID.
Hess, Lisa M; Cui, Zhanglin Lin; Wu, Yixun; Fang, Yun; Gaynor, Paula J; Oton, Ana B
2017-08-01
The objective of this study was to quantify the current and to project future patient and insurer costs for the care of patients with non-small cell lung cancer in the US. An analysis of administrative claims data among patients diagnosed with non-small cell lung cancer from 2007-2015 was conducted. Future costs were projected through 2040 based on these data using autoregressive models. Analysis of claims data found the average total cost of care during first- and second-line therapy was $1,161.70 and $561.80 for patients, and $45,175.70 and $26,201.40 for insurers, respectively. By 2040, the average total patient out-of-pocket costs are projected to reach $3,047.67 for first-line and $2,211.33 for second-line therapy, and insurance will pay an average of $131,262.39 for first-line and $75,062.23 for second-line therapy. Claims data are not collected for research purposes; therefore, there may be errors in entry and coding. Additionally, claims data do not contain important clinical factors, such as stage of disease at diagnosis, tumor histology, or data on disease progression, which may have important implications on the cost of care. The trajectory of the cost of lung cancer care is growing. This study estimates that the cost of care may double by 2040, with the greatest proportion of increase in patient out-of-pocket costs. Despite the average cost projections, these results suggest that a small sub-set of patients with very high costs could be at even greater risk in the future.
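Projection by autoregressive models amounts to fitting an AR process to the observed annual cost series and extrapolating it forward. A minimal sketch of that step using statsmodels' AutoReg on an invented annual cost series (the figures below are illustrative, not the study's claims data):

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

# Hypothetical mean annual first-line out-of-pocket cost (US$), 2007-2015 (invented numbers).
costs = np.array([820., 860., 905., 940., 990., 1035., 1080., 1120., 1160.])

model = AutoReg(costs, lags=1, trend="ct").fit()     # AR(1) with constant and linear trend
end_index = len(costs) + (2040 - 2016)               # index of the year 2040
forecast = model.predict(start=len(costs), end=end_index)
print("projected cost in 2040: $%.2f" % forecast[-1])
```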
Bozzette, S A; Parker, R; Hay, J
1994-04-01
Treatment with zidovudine has been standard therapy for patients with advanced HIV infection, but intolerance is common. Previously, management of intolerance has consisted of symptomatic therapy, dose interruption/discontinuation, and, when appropriate, transfusion. The availability of new antiretroviral agents such as didanosine, as well as adjunctive recombinant hematopoietic growth factors, makes additional strategies possible for the zidovudine-intolerant patient. Because all of these agents are costly, we evaluated the cost implications of these various strategies for the management of zidovudine-intolerant individuals within a population of persons with advanced HIV disease. We performed a decision analysis using iterative algorithmic models of 1 year of antiretroviral care under various strategies. The real costs of providing antiretroviral therapy were estimated by deflating medical center charges by specific Medi-Cal (Medicaid) charge-to-payment ratios. Clinical data were extracted from the medical literature, product package inserts, investigator updates, and personal communications. Sensitivity analysis was used to test the effect of error in the estimation of parameters. The models predict that a strategy of dose interruption and transfusion for zidovudine intolerance will provide an average of 46 weeks of therapy per year to the average patient at a cost of $5,555/year of therapy provided (1991 U.S. dollars). The models predict that a strategy of adding hematopoietic growth factors to the regimen of appropriate patients would increase the average amount of therapy provided to the average patient by 3 weeks (6%) and the costs attributable to therapy by 77% to $9,805/year of therapy provided.(ABSTRACT TRUNCATED AT 250 WORDS)
The Impact of Soil Sampling Errors on Variable Rate Fertilization
DOE Office of Scientific and Technical Information (OSTI.GOV)
R. L. Hoskinson; R C. Rope; L G. Blackwood
2004-07-01
Variable rate fertilization of an agricultural field is done taking into account spatial variability in the soil’s characteristics. Most often, spatial variability in the soil’s fertility is the primary characteristic used to determine the differences in fertilizers applied from one point to the next. For several years the Idaho National Engineering and Environmental Laboratory (INEEL) has been developing a Decision Support System for Agriculture (DSS4Ag) to determine the economically optimum recipe of various fertilizers to apply at each site in a field, based on existing soil fertility at the site, predicted yield of the crop that would result (and a predicted harvest-time market price), and the current costs and compositions of the fertilizers to be applied. Typically, soil is sampled at selected points within a field, the soil samples are analyzed in a lab, and the lab-measured soil fertility of the point samples is used for spatial interpolation, in some statistical manner, to determine the soil fertility at all other points in the field. Then a decision tool determines the fertilizers to apply at each point. Our research was conducted to measure the impact on the variable rate fertilization recipe caused by variability in the measurement of the soil’s fertility at the sampling points. The variability could be laboratory analytical errors or errors from variation in the sample collection method. The results show that for many of the fertility parameters, laboratory measurement error variance exceeds the estimated variability of the fertility measure across grid locations. These errors resulted in DSS4Ag fertilizer recipe recommended application rates that differed by up to 138 pounds of urea per acre, with half the field differing by more than 57 pounds of urea per acre. For potash the difference in application rate was up to 895 pounds per acre and over half the field differed by more than 242 pounds of potash per acre. Urea and potash differences accounted for almost 87% of the cost difference. The sum of these differences could result in a $34 per acre cost difference for the fertilization. Because of these differences, better analysis or better sampling methods may need to be done, or more samples collected, to ensure that the soil measurements are truly representative of the field’s spatial variability.
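The key comparison above is between laboratory measurement error variance and the fertility variance across grid locations. A small sketch of that comparison, assuming duplicate lab analyses of the same samples are available to isolate the analytical error (all numbers are invented):

```python
import numpy as np

# Hypothetical soil-test data: duplicate lab analyses of the same samples (lab error)
# versus single measurements across grid locations (spatial variability).
lab_dup_a = np.array([212., 198., 305., 250., 190.])   # ppm, first lab run
lab_dup_b = np.array([230., 205., 287., 268., 181.])   # ppm, repeat run on the same samples
grid = np.array([240., 199., 310., 150., 420., 275., 205., 330.])  # ppm across the field

lab_error_var = np.var(lab_dup_a - lab_dup_b, ddof=1) / 2.0   # per-measurement error variance
spatial_var = np.var(grid, ddof=1)                            # across-grid fertility variance

print(f"lab measurement error variance: {lab_error_var:.1f}")
print(f"across-grid fertility variance: {spatial_var:.1f}")
print("lab error dominates" if lab_error_var > spatial_var else "spatial signal dominates")
```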
Fast Formal Analysis of Requirements via "Topoi Diagrams"
NASA Technical Reports Server (NTRS)
Menzies, Tim; Powell, John; Houle, Michael E.; Kelly, John C. (Technical Monitor)
2001-01-01
Early testing of requirements can decrease the cost of removing errors in software projects. However, unless done carefully, that testing process can significantly add to the cost of requirements analysis. We show here that requirements expressed as topoi diagrams can be built and tested cheaply using our SP2 algorithm: the formal temporal properties of a large class of topoi can be proven very quickly, in time nearly linear in the number of nodes and edges in the diagram. There are two limitations to our approach. Firstly, topoi diagrams cannot express certain complex concepts such as iteration and subroutine calls. Hence, our approach is more useful for requirements engineering than for traditional model checking domains. Secondly, our approach is better for exploring the temporal occurrence of properties than the temporal ordering of properties. Within these restrictions, we can express a useful range of concepts currently seen in requirements engineering, and a wide range of interesting temporal properties.
Zhang, W P; Yamauchi, K; Mizuno, S; Zhang, R; Huang, D M
2004-01-01
The purpose of this study was to clarify the implementation and maintenance costs of a computerized patient record (CPR) system by means of a questionnaire survey. Moreover, the benefits of CPR systems were evaluated to determine their contribution to enhancing the quality of medical care and hospital management. Data were collected by a questionnaire survey mailed out to participants. The per-bed mean cost for implementation was 14,308 dollars (range: 3538-38,077 dollars). The mean annual maintenance cost for the CPR system was 457,615 dollars (range: 39,769-2,307,692 dollars). The multivariate analysis (Hayashi's Quantification Type I) revealed high partial correlation coefficients between implementation cost and the CPR system maker. In addition, the multiple correlation coefficient for four factors (CPR system maker, number of servers, institution type and implementation date) in predicting implementation cost was 0.798. Over 60% of respondents replied that their satisfaction with the CPR system was 'very high' or 'high.' Eighty-two percent of the hospitals responded positively that CPR systems improve the quality of medical care, and 70% felt that the systems help prevent medical errors. Our findings indicate that the maker of CPR system, number of servers, institution type and implementation date had a strong influence on per-bed implementation costs in that order. Finally, it was found that CPR systems were considered effective for hospital administration and medical examinations, based on the high assessments of the results of installing a CPR system.
Beam collimation and focusing and error analysis of LD and fiber coupling system based on ZEMAX
NASA Astrophysics Data System (ADS)
Qiao, Lvlin; Zhou, Dejian; Xiao, Lei
2017-10-01
Laser diodes have many advantages, such as high efficiency, small volume, low cost, and easy integration, so they are widely used. However, their poor beam quality has seriously hampered the application of semiconductor lasers. In view of this poor beam quality, the ZEMAX optical design software is used to simulate the far-field characteristics of the semiconductor laser beam, and the coupling module between the semiconductor laser and the optical fiber is designed and optimized. The beam is coupled into an optical fiber with core diameter d=200 µm and numerical aperture NA=0.22, and the coupled output power can reach 95%. Finally, the influence of the three docking errors on the coupling efficiency during the installation process is analyzed.
Straub, D.E.
1998-01-01
The streamflow-gaging station network in Ohio was evaluated for its effectiveness in providing regional streamflow information. The analysis involved application of the principles of generalized least squares regression between streamflow and climatic and basin characteristics. Regression equations were developed for three flow characteristics: (1) the instantaneous peak flow with a 100-year recurrence interval (P100), (2) the mean annual flow (Qa), and (3) the 7-day, 10-year low flow (7Q10). All active and discontinued gaging stations with 5 or more years of unregulated-streamflow data with respect to each flow characteristic were used to develop the regression equations. The gaging-station network was evaluated for the current (1996) condition of the network and estimated conditions of various network strategies if an additional 5 and 20 years of streamflow data were collected. Any active or discontinued gaging stations with (1) less than 5 years of unregulated-streamflow record, (2) previously defined basin and climatic characteristics, and (3) the potential for collection of more unregulated-streamflow record were included in the network strategies involving the additional 5 and 20 years of data. The network analysis involved use of the regression equations, in combination with location, period of record, and cost of operation, to determine the contribution of the data for each gaging station to regional streamflow information. The contribution of each gaging station was based on a cost-weighted reduction of the mean square error (average sampling-error variance) associated with each regional estimating equation. All gaging stations included in the network analysis were then ranked according to their contribution to the regional information for each flow characteristic. The predictive ability of the regression equations developed from the gaging station network could be improved for all three flow characteristics with the collection of additional streamflow data. The addition of new gaging stations to the network would result in an even greater improvement of the accuracy of the regional regression equations. Typically, continued data collection at stations with unregulated streamflow for all flow conditions that had less than 11 years of record with drainage areas smaller than 200 square miles contributed the largest cost-weighted reduction to the average sampling-error variance of the regional estimating equations. The results of the network analyses can be used to prioritize the continued operation of active gaging stations or the reactivation of discontinued gaging stations if the objective is to maximize the regional information content in the streamflow-gaging station network.
Gadoury, R.A.; Smath, J.A.; Fontaine, R.A.
1985-01-01
The report documents the results of a study of the cost-effectiveness of the U.S. Geological Survey's continuous-record stream-gaging programs in Massachusetts and Rhode Island. Data uses and funding sources were identified for the 91 gaging stations being operated in Massachusetts and Rhode Island. Some of the Massachusetts stations are being operated to provide data for two special-purpose hydrologic studies, and they are planned to be discontinued at the conclusion of the studies. Cost-effectiveness analyses were performed on 63 continuous-record gaging stations in Massachusetts and 15 stations in Rhode Island, at budgets of $353,000 and $60,500, respectively. Current operations policies result in average standard errors per station of 12.3% in Massachusetts and 9.7% in Rhode Island. Minimum possible budgets to maintain the present numbers of gaging stations in the two States are estimated to be $340,000 and $59,000, with average errors per station of 12.8% and 10.0%, respectively. If the present budget levels were doubled, average standard errors per station would decrease to 8.1% and 4.2%, respectively. Further budget increases would not improve the standard errors significantly. (USGS)
NASA Astrophysics Data System (ADS)
Sinsbeck, Michael; Tartakovsky, Daniel
2015-04-01
Infiltration into top soil can be described by alternative models with different degrees of fidelity: Richards equation and the Green-Ampt model. These models typically contain uncertain parameters and forcings, rendering predictions of the state variables uncertain as well. Within the probabilistic framework, solutions of these models are given in terms of their probability density functions (PDFs) that, in the presence of data, can be treated as prior distributions. The assimilation of soil moisture data into model predictions, e.g., via a Bayesian updating of solution PDFs, poses a question of model selection: Given a significant difference in computational cost, is a lower-fidelity model preferable to its higher-fidelity counterpart? We investigate this question in the context of heterogeneous porous media, whose hydraulic properties are uncertain. While low-fidelity (reduced-complexity) models introduce a model error, their moderate computational cost makes it possible to generate more realizations, which reduces the (e.g., Monte Carlo) sampling or stochastic error. The ratio between these two errors determines the model with the smallest total error. We found assimilation of measurements of a quantity of interest (the soil moisture content, in our example) to decrease the model error, increasing the probability that the predictive accuracy of a reduced-complexity model does not fall below that of its higher-fidelity counterpart.
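The selection criterion described above can be phrased as a comparison of total errors under a fixed computational budget. The following is a schematic sketch with invented model-error, cost, and noise figures (not values from the study):

```python
import numpy as np

def total_error(model_error, per_run_cost, budget, sigma):
    """Total prediction error = model (bias) error + Monte Carlo sampling error.

    The sampling error shrinks as sigma / sqrt(n), with n runs affordable under the budget."""
    n = max(int(budget // per_run_cost), 1)
    return model_error + sigma / np.sqrt(n)

# Invented figures: Richards equation (high fidelity, costly) vs. Green-Ampt (cheap, biased).
budget = 1000.0                                # e.g., CPU-hours available
err_richards = total_error(model_error=0.00, per_run_cost=10.0, budget=budget, sigma=0.5)
err_greenampt = total_error(model_error=0.02, per_run_cost=0.5, budget=budget, sigma=0.5)

print("Richards total error:   %.4f" % err_richards)
print("Green-Ampt total error: %.4f" % err_greenampt)
print("prefer", "Green-Ampt" if err_greenampt < err_richards else "Richards")
```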
Quality Issues of Court Reporters and Transcriptionists for Qualitative Research
Hennink, Monique; Weber, Mary Beth
2015-01-01
Transcription is central to qualitative research, yet few researchers identify the quality of different transcription methods. We described the quality of verbatim transcripts from traditional transcriptionists and court reporters by reviewing 16 transcripts from 8 focus group discussions using four criteria: transcription errors, cost and time of transcription, and effect on study participants. Transcriptionists made fewer errors, captured colloquial dialogue, and errors were largely influenced by the quality of the recording. Court reporters made more errors, particularly in the omission of topical content and contextual detail and were less able to produce a verbatim transcript; however the potential immediacy of the transcript was advantageous. In terms of cost, shorter group discussions favored a transcriptionist and longer groups a court reporter. Study participants reported no effect by either method of recording. Understanding the benefits and limitations of each method of transcription can help researchers select an appropriate method for each study. PMID:23512435
NASA Astrophysics Data System (ADS)
González-Jorge, Higinio; Riveiro, Belén; Varela, María; Arias, Pedro
2012-07-01
A low-cost image orthorectification tool based on the use of compact cameras and scale bars is developed to obtain the main geometric parameters of masonry bridges for inventory and routine inspection purposes. The technique is validated in three different bridges by comparison with laser scanning data. The surveying process is very delicate and must strike a balance between working distance and angle. Three different cameras are used in the study to establish the relationship between the error and the camera model. The results show that the error does not depend on the length of the bridge element, the type of bridge, or the type of element. Error values for all the cameras are below 4 percent (95 percent of the data). A compact Canon camera, the model with the best technical specifications, shows an error level ranging from 0.5 to 1.5 percent.
Interventions to reduce medication errors in neonatal care: a systematic review
Nguyen, Minh-Nha Rhylie; Mosel, Cassandra
2017-01-01
Background: Medication errors represent a significant but often preventable cause of morbidity and mortality in neonates. The objective of this systematic review was to determine the effectiveness of interventions to reduce neonatal medication errors. Methods: A systematic review was undertaken of all comparative and noncomparative studies published in any language, identified from searches of PubMed and EMBASE and reference-list checking. Eligible studies were those investigating the impact of any medication safety interventions aimed at reducing medication errors in neonates in the hospital setting. Results: A total of 102 studies were identified that met the inclusion criteria, including 86 comparative and 16 noncomparative studies. Medication safety interventions were classified into six themes: technology (n = 38; e.g. electronic prescribing), organizational (n = 16; e.g. guidelines, policies, and procedures), personnel (n = 13; e.g. staff education), pharmacy (n = 9; e.g. clinical pharmacy service), hazard and risk analysis (n = 8; e.g. error detection tools), and multifactorial (n = 18; e.g. any combination of previous interventions). Significant variability was evident across all included studies, with differences in intervention strategies, trial methods, types of medication errors evaluated, and how medication errors were identified and evaluated. Most studies demonstrated an appreciable risk of bias. The vast majority of studies (>90%) demonstrated a reduction in medication errors. A similar median reduction of 50–70% in medication errors was evident across studies included within each of the identified themes, but findings varied considerably from a 16% increase in medication errors to a 100% reduction in medication errors. Conclusion: While neonatal medication errors can be reduced through multiple interventions aimed at improving the medication use process, no single intervention appeared clearly superior. Further research is required to evaluate the relative cost-effectiveness of the various medication safety interventions to facilitate decisions regarding uptake and implementation into clinical practice. PMID:29387337
Modeling congenital disease and inborn errors of development in Drosophila melanogaster
Moulton, Matthew J.; Letsou, Anthea
2016-01-01
ABSTRACT Fly models that faithfully recapitulate various aspects of human disease and human health-related biology are being used for research into disease diagnosis and prevention. Established and new genetic strategies in Drosophila have yielded numerous substantial successes in modeling congenital disorders or inborn errors of human development, as well as neurodegenerative disease and cancer. Moreover, although our ability to generate sequence datasets continues to outpace our ability to analyze these datasets, the development of high-throughput analysis platforms in Drosophila has provided access through the bottleneck in the identification of disease gene candidates. In this Review, we describe both the traditional and newer methods that are facilitating the incorporation of Drosophila into the human disease discovery process, with a focus on the models that have enhanced our understanding of human developmental disorders and congenital disease. Enviable features of the Drosophila experimental system, which make it particularly useful in facilitating the much anticipated move from genotype to phenotype (understanding and predicting phenotypes directly from the primary DNA sequence), include its genetic tractability, the low cost for high-throughput discovery, and a genome and underlying biology that are highly evolutionarily conserved. In embracing the fly in the human disease-gene discovery process, we can expect to speed up and reduce the cost of this process, allowing experimental scales that are not feasible and/or would be too costly in higher eukaryotes. PMID:26935104
Fast and low-cost method for VBES bathymetry generation in coastal areas
NASA Astrophysics Data System (ADS)
Sánchez-Carnero, N.; Aceña, S.; Rodríguez-Pérez, D.; Couñago, E.; Fraile, P.; Freire, J.
2012-12-01
Sea floor topography is key information in coastal area management. Nowadays, LiDAR and multibeam technologies provide accurate bathymetries in those areas; however, these methodologies are still too expensive for small customers (fishermen associations, small research groups) wishing to maintain periodic surveillance of environmental resources. In this paper, we analyse a simple methodology for vertical beam echosounder (VBES) bathymetric data acquisition and postprocessing, using low-cost means and free customizable tools such as ECOSONS and gvSIG (which is compared with the industry-standard ArcGIS). Echosounder data were filtered, resampled, and interpolated (using kriging or radial basis functions). Moreover, the presented methodology includes two data correction processes: Monte Carlo simulation, used to reduce GPS errors, and manually applied bathymetric line transformations, both of which improve the obtained results. As an example, we present the bathymetry of the Ría de Cedeira (Galicia, NW Spain), a good testbed area for coastal bathymetry methodologies given its extension and rich topography. The statistical analysis, performed by direct ground-truthing, rendered an upper bound of 1.7 m error, at the 95% confidence level, and 0.7 m r.m.s. (cross-validation provided 30 cm and 25 cm, respectively). The methodology presented is fast and easy to implement, accurate outside transects (accuracy can be estimated), and can be used as a low-cost periodic monitoring method.
Error-Tolerant Quasi-Paraboloidal Solar Concentrator
NASA Technical Reports Server (NTRS)
Wagner, Howard A.
1988-01-01
Scalloping reflector surface reduces sensitivity to manufacturing and aiming errors. Contrary to intuition, most effective shape of concentrating reflector for solar heat engine is not perfect paraboloid. According to design studies for Space Station solar concentrator, scalloped, nonimaging approximation to perfect paraboloid offers better overall performance in view of finite apparent size of Sun, imperfections of real equipment, and cost of accommodating these complexities. Scalloped-reflector concept also applied to improve performance while reducing cost of manufacturing and operation of terrestrial solar concentrator.
Model-based optimization of near-field binary-pixelated beam shapers
Dorrer, C.; Hassett, J.
2017-01-23
The optimization of components that rely on spatially dithered distributions of transparent or opaque pixels and an imaging system with far-field filtering for transmission control is demonstrated. The binary-pixel distribution can be iteratively optimized to lower an error function that takes into account the design transmission and the characteristics of the required far-field filter. Simulations using a design transmission chosen in the context of high-energy lasers show that the beam-fluence modulation at an image plane can be reduced by a factor of 2, leading to performance similar to using a non-optimized spatial-dithering algorithm with pixels of size reduced by a factor of 2, without the additional fabrication complexity or cost. The optimization process preserves the pixel distribution statistical properties. Analysis shows that the optimized pixel distribution starting from a high-noise distribution defined by a random-draw algorithm should be more resilient to fabrication errors than the optimized pixel distributions starting from a low-noise, error-diffusion algorithm, while leading to similar beam-shaping performance. Furthermore, this is confirmed by experimental results obtained with various pixel distributions and induced fabrication errors.
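As a rough illustration of the error function being minimized, the sketch below treats the imaged transmission as the binary pixel mask low-pass filtered in the Fourier domain and uses a naive greedy pixel-flip loop; this is a simplified stand-in, not the authors' optimization algorithm, and all parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 128
y, x = np.indices((n, n))
# Invented smooth design transmission (values between 0.2 and 0.8).
design = np.clip(0.2 + 0.6 * np.exp(-((x - n / 2) ** 2 + (y - n / 2) ** 2) / (2 * (n / 5) ** 2)), 0, 1)

mask = (rng.random((n, n)) < design).astype(float)      # random-draw binary pixel start

fx = np.fft.fftfreq(n)
farfield_filter = (np.hypot(fx[None, :], fx[:, None]) < 0.06).astype(float)  # crude pinhole model

def error(m):
    # Imaged transmission = binary mask low-pass filtered by the far-field pinhole.
    imaged = np.real(np.fft.ifft2(np.fft.fft2(m) * farfield_filter))
    return np.sqrt(np.mean((imaged - design) ** 2))

# Greedy pixel flips: accept a flip only if it lowers the error function.
best = error(mask)
for _ in range(2000):
    i, j = rng.integers(0, n, size=2)
    mask[i, j] = 1.0 - mask[i, j]
    e = error(mask)
    if e < best:
        best = e
    else:
        mask[i, j] = 1.0 - mask[i, j]                   # revert the rejected flip
print("RMS fluence error after greedy optimization:", best)
```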
Giduthuri, Joseph G.; Maire, Nicolas; Joseph, Saju; Kudale, Abhay; Schaetti, Christian; Sundaram, Neisha; Schindler, Christian; Weiss, Mitchell G.
2014-01-01
Background Mobile electronic devices are replacing paper-based instruments and questionnaires for epidemiological and public health research. The elimination of a data-entry step after an interview is a notable advantage over paper, saving investigator time, decreasing the time lags in managing and analyzing data, and potentially improving the data quality by removing the error-prone data-entry step. Research has not yet provided adequate evidence, however, to substantiate the claim of fewer errors for computerized interviews. Methodology We developed an Android-based illness explanatory interview for influenza vaccine acceptance and tested the instrument in a field study in Pune, India, for feasibility and acceptability. Error rates for tablet and paper were compared with reference to the voice recording of the interview as gold standard to assess discrepancies. We also examined the preference of interviewers for the classical paper-based or the electronic version of the interview and compared the costs of research with both data collection devices. Results In 95 interviews with household respondents, total error rates with paper and tablet devices were nearly the same (2.01% and 1.99% respectively). Most interviewers indicated no preference for a particular device; but those with a preference opted for tablets. The initial investment in tablet-based interviews was higher compared to paper, while the recurring costs per interview were lower with the use of tablets. Conclusion An Android-based tablet version of a complex interview was developed and successfully validated. Advantages were not compromised by increased errors, and field research assistants with a preference preferred the Android device. Use of tablets may be more costly than paper for small samples and less costly for large studies. PMID:25233212
An Enhanced MEMS Error Modeling Approach Based on Nu-Support Vector Regression
Bhatt, Deepak; Aggarwal, Priyanka; Bhattacharya, Prabir; Devabhaktuni, Vijay
2012-01-01
Micro Electro Mechanical System (MEMS)-based inertial sensors have made possible the development of a civilian land vehicle navigation system by offering a low-cost solution. However, accurate modeling of the MEMS sensor errors is one of the most challenging tasks in the design of low-cost navigation systems. These sensors exhibit significant errors, such as biases, drift, and noise, which are negligible for higher-grade units. Different conventional techniques utilizing the Gauss Markov model and neural network methods have previously been used to model the errors. However, the Gauss Markov model works unsatisfactorily in the case of MEMS units due to the presence of high inherent sensor errors. On the other hand, modeling the random drift using a Neural Network (NN) is time consuming, thereby affecting its real-time implementation. We overcome these existing drawbacks by developing an enhanced Support Vector Machine (SVM) based error model. Unlike NN, SVMs do not suffer from local minima or over-fitting problems and deliver a reliable global solution. Experimental results proved that the proposed SVM approach reduced the noise standard deviation by 10–35% for gyroscopes and 61–76% for accelerometers. Further, positional error drifts under static conditions improved by 41% and 80% in comparison to the NN and GM approaches. PMID:23012552
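A minimal sketch of SVR-based inertial-sensor error modeling in this spirit, using scikit-learn's NuSVR on a synthetic static gyroscope record (not the authors' data or exact pipeline; the drift model and noise levels are invented):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import NuSVR

# Synthetic static gyro record: model the slowly varying drift as a function of time.
t = np.arange(0, 600, 0.5)                                   # seconds
drift = 0.002 * t + 0.05 * np.sin(2 * np.pi * t / 120.0)     # invented bias drift (deg/s)
gyro = drift + 0.03 * np.random.randn(t.size)                # drift + wideband sensor noise

model = make_pipeline(StandardScaler(), NuSVR(nu=0.5, C=1.0, kernel="rbf"))
model.fit(t.reshape(-1, 1), gyro)

corrected = gyro - model.predict(t.reshape(-1, 1))           # residual after removing modeled drift
print("output std before: %.4f  after drift removal: %.4f" % (gyro.std(), corrected.std()))
```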
DNA assembly with error correction on a droplet digital microfluidics platform.
Khilko, Yuliya; Weyman, Philip D; Glass, John I; Adams, Mark D; McNeil, Melanie A; Griffin, Peter B
2018-06-01
Custom synthesized DNA is in high demand for synthetic biology applications. However, current technologies to produce these sequences using assembly from DNA oligonucleotides are costly and labor-intensive. The automation and reduced sample volumes afforded by microfluidic technologies could significantly decrease materials and labor costs associated with DNA synthesis. The purpose of this study was to develop a gene assembly protocol utilizing a digital microfluidic device. Toward this goal, we adapted bench-scale oligonucleotide assembly methods followed by enzymatic error correction to the Mondrian™ digital microfluidic platform. We optimized Gibson assembly, polymerase chain reaction (PCR), and enzymatic error correction reactions in a single protocol to assemble 12 oligonucleotides into a 339-bp double-stranded DNA sequence encoding part of the human influenza virus hemagglutinin (HA) gene. The reactions were scaled down to 0.6-1.2 μL. Initial microfluidic assembly methods were successful and had an error frequency of approximately 4 errors/kb with errors originating from the original oligonucleotide synthesis. Relative to conventional benchtop procedures, PCR optimization required additional amounts of MgCl2, Phusion polymerase, and PEG 8000 to achieve amplification of the assembly and error correction products. After one round of error correction, error frequency was reduced to an average of 1.8 errors/kb. We demonstrated that DNA assembly from oligonucleotides and error correction could be completely automated on a digital microfluidic (DMF) platform. The results demonstrate that enzymatic reactions in droplets show a strong dependence on surface interactions, and successful on-chip implementation required supplementation with surfactants, molecular crowding agents, and an excess of enzyme. Enzymatic error correction of assembled fragments improved sequence fidelity by 2-fold, which was a significant improvement but somewhat lower than expected compared to bench-top assays, suggesting an additional capacity for optimization.
Lu, Lingbo; Li, Jingshan; Gisler, Paula
2011-06-01
Radiology tests, such as MRI, CT scan, X-ray, and ultrasound, are cost intensive, and insurance pre-approvals are necessary to obtain reimbursement. In some cases, tests may be denied payment by insurance companies due to lack of pre-approval or inaccurate or missing information. This can lead to substantial revenue losses for the hospital. In this paper, we present a simulation study of a centralized scheduling process for outpatient radiology tests at a large community hospital (Central Baptist Hospital in Lexington, Kentucky). Based on analysis of the central scheduling process, a simulation model of information flow in the process has been developed. Using this model, the root causes of financial losses associated with errors and omissions in the process were identified and analyzed, and their impacts were quantified. In addition, "what-if" analysis was conducted to identify potential process improvement strategies in the form of recommendations to the hospital leadership. Such a model provides a quantitative tool for continuous improvement and process control in the radiology outpatient test scheduling process to reduce financial losses associated with process errors. This method of analysis is also applicable to other departments in the hospital.
Goulet, Eric D B; Baker, Lindsay B
2017-12-01
The B-722 Laqua Twin is a low-cost, portable, battery-operated sodium analyzer that can be used for the assessment of sweat sodium concentration. The Laqua Twin is reliable and provides a degree of accuracy similar to more expensive analyzers; however, its interunit measurement error remains unknown. The purpose of this study was to compare the sodium concentration values of 70 sweat samples measured using three different Laqua Twin units. Mean absolute errors, random errors, and constant errors among the different Laqua Twins ranged, respectively, between 1.7 mmol/L and 3.5 mmol/L, 2.5 mmol/L and 3.7 mmol/L, and -0.6 mmol/L and 3.9 mmol/L. Proportional errors among Laqua Twins were all < 2%. Based on a within-subject biological variability in sweat sodium concentration of ± 12%, the maximal allowable imprecision among instruments was considered to be ≤ 6%. In that respect, the within (2.9%), between (4.5%), and total (5.4%) measurement error coefficients of variation were all < 6%. For a given sweat sodium concentration value, the largest observed differences in mean, lower-bound, and upper-bound error of measurement among instruments were, respectively, 4.7 mmol/L, 2.3 mmol/L, and 7.0 mmol/L. In conclusion, our findings show that the interunit measurement error of the B-722 Laqua Twin is low and methodologically acceptable.
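The interunit statistics quoted above can be reproduced from paired readings of the same sweat samples on two units. A minimal sketch, assuming hypothetical input files and taking the proportional error from an ordinary least-squares slope (the study's exact regression method may differ):

```python
import numpy as np

# Hypothetical readings (mmol/L) of the same 70 sweat samples on two Laqua Twin units.
unit_a = np.loadtxt("unit_a.txt")     # assumed file, shape (70,)
unit_b = np.loadtxt("unit_b.txt")     # assumed file, shape (70,)

diff = unit_b - unit_a
mean_abs_error = np.abs(diff).mean()                 # mean absolute error between units
constant_error = diff.mean()                         # systematic offset of unit B vs. unit A
random_error = diff.std(ddof=1)                      # scatter of the between-unit differences
slope = np.polyfit(unit_a, unit_b, 1)[0]             # OLS slope of unit B against unit A
proportional_error = 100.0 * (slope - 1.0)           # % deviation of the slope from unity

print(f"MAE {mean_abs_error:.1f}  constant {constant_error:.1f}  "
      f"random {random_error:.1f} mmol/L  proportional {proportional_error:.1f}%")
```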
The fitness cost of mis-splicing is the main determinant of alternative splicing patterns.
Saudemont, Baptiste; Popa, Alexandra; Parmley, Joanna L; Rocher, Vincent; Blugeon, Corinne; Necsulea, Anamaria; Meyer, Eric; Duret, Laurent
2017-10-30
Most eukaryotic genes are subject to alternative splicing (AS), which may contribute to the production of protein variants or to the regulation of gene expression via nonsense-mediated messenger RNA (mRNA) decay (NMD). However, a fraction of splice variants might correspond to spurious transcripts, and the relative proportion of splicing errors to functional splice variants remains highly debated. We propose a test to quantify the fraction of AS events corresponding to errors. This test is based on the fact that the fitness cost of splicing errors increases with the number of introns in a gene and with expression level. We analyzed the transcriptome of the intron-rich eukaryote Paramecium tetraurelia. We show that in both normal and NMD-deficient cells, AS rates strongly decrease with increasing expression level and with increasing number of introns. This relationship is observed for AS events that are detectable by NMD as well as for those that are not, which invalidates the hypothesis of a link with the regulation of gene expression. Our results show that in genes with a median expression level, 92-98% of observed splice variants correspond to errors. We observed the same patterns in human transcriptomes, and we further show that AS rates correlate with the fitness cost of splicing errors. These observations indicate that genes under weaker selective pressure accumulate more maladaptive substitutions and are more prone to splicing errors. Thus, to a large extent, patterns of splice variants simply reflect the balance between selection, mutation, and drift.
Avery, Anthony J; Rodgers, Sarah; Cantrill, Judith A; Armstrong, Sarah; Elliott, Rachel; Howard, Rachel; Kendrick, Denise; Morris, Caroline J; Murray, Scott A; Prescott, Robin J; Cresswell, Kathrin; Sheikh, Aziz
2009-01-01
Background Medication errors are an important cause of morbidity and mortality in primary care. The aims of this study are to determine the effectiveness, cost effectiveness and acceptability of a pharmacist-led information-technology-based complex intervention compared with simple feedback in reducing proportions of patients at risk from potentially hazardous prescribing and medicines management in general (family) practice. Methods Research subject group: "At-risk" patients registered with computerised general practices in two geographical regions in England. Design: Parallel group pragmatic cluster randomised trial. Interventions: Practices will be randomised to either (i) computer-generated feedback, or (ii) a pharmacist-led intervention comprising computer-generated feedback, educational outreach and dedicated support. Primary outcome measures: The proportion of patients in each practice at six and 12 months post intervention (i) with a computer-recorded history of peptic ulcer being prescribed non-selective non-steroidal anti-inflammatory drugs; (ii) with a computer-recorded diagnosis of asthma being prescribed beta-blockers; and (iii) aged 75 years and older receiving long-term prescriptions for angiotensin converting enzyme inhibitors or loop diuretics without a recorded assessment of renal function and electrolytes in the preceding 15 months. Secondary outcome measures: These relate to a number of other examples of potentially hazardous prescribing and medicines management. Economic analysis: An economic evaluation will be conducted of the cost per error avoided, from the perspective of the UK National Health Service (NHS), comparing the pharmacist-led intervention with simple feedback. Qualitative analysis: A qualitative study will be conducted to explore the views and experiences of health care professionals and NHS managers concerning the interventions, and to investigate possible reasons why the interventions prove effective or ineffective. Sample size: 34 practices in each of the two treatment arms would provide at least 80% power (two-tailed alpha of 0.05) to demonstrate a 50% reduction in error rates for each of the three primary outcome measures in the pharmacist-led intervention arm compared with an 11% reduction in the simple feedback arm. Discussion At the time of submission of this article, 72 general practices have been recruited (36 in each arm of the trial) and the interventions have been delivered. Analysis has not yet been undertaken. Trial registration Current Controlled Trials ISRCTN21785299 PMID:19409095
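As a rough illustration of the kind of power calculation summarized above, the sketch below applies the standard normal-approximation sample-size formula for comparing two proportions. It is not the trial's calculation: a cluster randomised design would additionally inflate the sample size by a design effect for within-practice correlation, and the baseline error rate assumed here (6% of at-risk patients) is a placeholder, not a figure from the protocol.

```python
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm for detecting p1 vs p2."""
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_a + z_b) ** 2 * variance / (p1 - p2) ** 2

# Assumed baseline: 6% of at-risk patients have the error. A 50% reduction gives 3.0%,
# an 11% reduction gives 5.34%. A cluster trial would multiply this by a design effect.
print(round(n_per_group(0.06 * 0.50, 0.06 * 0.89)))  # at-risk patients needed per arm
```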
Incorporating approximation error in surrogate based Bayesian inversion
NASA Astrophysics Data System (ADS)
Zhang, J.; Zeng, L.; Li, W.; Wu, L.
2015-12-01
There is increasing interest in applying surrogates in inverse Bayesian modeling to reduce repetitive evaluations of the original model and thereby save computational cost. However, the approximation error of the surrogate model is usually overlooked, partly because it is difficult to evaluate for many surrogates. Previous studies have shown that the direct combination of surrogates and Bayesian methods (e.g., Markov chain Monte Carlo, MCMC) may lead to biased estimations when the surrogate cannot emulate the highly nonlinear original system. This problem can be alleviated by implementing MCMC in a two-stage manner; however, the computational cost is still high because a relatively large number of original model simulations are required. In this study, we illustrate the importance of incorporating approximation error in inverse Bayesian modeling. A Gaussian process (GP) is chosen to construct the surrogate because its approximation error is convenient to evaluate. Numerical cases of Bayesian experimental design and parameter estimation for contaminant source identification are used to illustrate this idea. It is shown that, once the surrogate approximation error is properly incorporated into the Bayesian framework, promising results can be obtained even when the surrogate is used directly, and no further original model simulations are required.
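To make the idea concrete, here is a minimal Python sketch, assuming a toy one-dimensional forward model, a flat prior, and scikit-learn's Gaussian process regressor: the GP's predictive variance is added to the observation-noise variance inside the likelihood, so parameter regions where the surrogate is unsure are automatically down-weighted. None of the numbers or modeling choices come from the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)

def forward(theta):                      # stand-in for an expensive simulator
    return np.sin(3 * theta) + 0.5 * theta

# Cheap GP surrogate built from a handful of "expensive" runs
X_train = np.linspace(-2, 2, 8).reshape(-1, 1)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0)).fit(X_train, forward(X_train[:, 0]))

y_obs, sigma_obs = forward(np.array([0.7]))[0], 0.05

def log_post(theta):
    """Gaussian log-likelihood (flat prior) with surrogate error added to the noise."""
    mu, std = gp.predict(np.array([[theta]]), return_std=True)
    total_var = sigma_obs**2 + std[0]**2          # observation noise + approximation error
    return -0.5 * (y_obs - mu[0])**2 / total_var - 0.5 * np.log(total_var)

# Random-walk Metropolis on the surrogate-based posterior
theta, lp, samples = 0.0, None, []
lp = log_post(theta)
for _ in range(5000):
    prop = theta + 0.2 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)
print("posterior mean ~", np.mean(samples[1000:]))
```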
Cost-effectiveness analysis of a hospital electronic medication management system.
Westbrook, Johanna I; Gospodarevskaya, Elena; Li, Ling; Richardson, Katrina L; Roffe, David; Heywood, Maureen; Day, Richard O; Graves, Nicholas
2015-07-01
To conduct a cost-effectiveness analysis of a hospital electronic medication management system (eMMS). We compared costs and benefits of paper-based prescribing with a commercial eMMS (CSC MedChart) on one cardiology ward in a major 326-bed teaching hospital, assuming a 15-year time horizon and a health system perspective. The eMMS implementation and operating costs were obtained from the study site. We used data on eMMS effectiveness in reducing potential adverse drug events (ADEs), and potential ADEs intercepted, based on review of 1,202 patient charts before (n = 801) and after (n = 401) eMMS. These were combined with published estimates of actual ADEs and their costs. The rate of potential ADEs following eMMS fell from 0.17 per admission to 0.05; a reduction of 71%. The annualized eMMS implementation, maintenance, and operating costs for the cardiology ward were A$61,741 (US$55,296). The estimated reduction in ADEs post eMMS was approximately 80 actual ADEs per year. The reduced costs associated with these ADEs were more than sufficient to offset the costs of the eMMS. Estimated savings resulting from eMMS implementation were A$63-66 (US$56-59) per admission (A$97,740-$102,000 per annum for this ward). Sensitivity analyses demonstrated results were robust when both eMMS effectiveness and costs of actual ADEs were varied substantially. The eMMS within this setting was more effective and less expensive than paper-based prescribing. Comparison with the few previous full economic evaluations available suggests a marked improvement in the cost-effectiveness of eMMS, largely driven by increased effectiveness of contemporary eMMS in reducing medication errors. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
NASA Astrophysics Data System (ADS)
Boes, Kelsey S.; Roberts, Michael S.; Vinueza, Nelson R.
2018-03-01
Complex mixture analysis is a costly and time-consuming task facing researchers with foci as varied as food science and fuel analysis. When faced with the task of quantifying oxygen-rich bio-oil molecules in a complex diesel mixture, we asked whether complex mixtures could be qualitatively and quantitatively analyzed on a single mass spectrometer with mid-range resolving power without the use of lengthy separations. To answer this question, we developed and evaluated a quantitation method that eliminated chromatography steps and expanded the use of quadrupole-time-of-flight mass spectrometry from primarily qualitative to quantitative as well. To account for mixture complexity, the method employed an ionization dopant, targeted tandem mass spectrometry, and an internal standard. This combination of three techniques achieved reliable quantitation of oxygen-rich eugenol in diesel from 300 to 2500 ng/mL with sufficient linearity (R2 = 0.97 ± 0.01) and excellent accuracy (percent error = 0% ± 5). To understand the limitations of the method, it was compared to quantitation attained on a triple quadrupole mass spectrometer, the gold standard for quantitation. The triple quadrupole quantified eugenol from 50 to 2500 ng/mL with stronger linearity (R2 = 0.996 ± 0.003) than the quadrupole-time-of-flight and comparable accuracy (percent error = 4% ± 5). This demonstrates that a quadrupole-time-of-flight can be used for not only qualitative analysis but also targeted quantitation of oxygen-rich lignin molecules in complex mixtures without extensive sample preparation. The rapid and cost-effective method presented here offers new possibilities for bio-oil research, including: (1) allowing for bio-oil studies that demand repetitive analysis as process parameters are changed and (2) making this research accessible to more laboratories.
Jones, Reese E; Mandadapu, Kranthi K
2012-04-21
We present a rigorous Green-Kubo methodology for calculating transport coefficients based on on-the-fly estimates of: (a) statistical stationarity of the relevant process, and (b) error in the resulting coefficient. The methodology uses time samples efficiently across an ensemble of parallel replicas to yield accurate estimates, which is particularly useful for estimating the thermal conductivity of semi-conductors near their Debye temperatures where the characteristic decay times of the heat flux correlation functions are large. Employing and extending the error analysis of Zwanzig and Ailawadi [Phys. Rev. 182, 280 (1969)] and Frenkel [in Proceedings of the International School of Physics "Enrico Fermi", Course LXXV (North-Holland Publishing Company, Amsterdam, 1980)] to the integral of correlation, we are able to provide tight theoretical bounds for the error in the estimate of the transport coefficient. To demonstrate the performance of the method, four test cases of increasing computational cost and complexity are presented: the viscosity of Ar and water, and the thermal conductivity of Si and GaN. In addition to producing accurate estimates of the transport coefficients for these materials, this work demonstrates precise agreement of the computed variances in the estimates of the correlation and the transport coefficient with the extended theory based on the assumption that fluctuations follow a Gaussian process. The proposed algorithm in conjunction with the extended theory enables the calculation of transport coefficients with the Green-Kubo method accurately and efficiently.
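The essence of a Green-Kubo estimate, integrating the flux autocorrelation function and quantifying the error across independent replicas, can be sketched in a few lines of Python. The "flux" here is a synthetic Ornstein-Uhlenbeck signal whose exact answer is known, not molecular-dynamics data, and the lag cutoff and replica count are arbitrary choices made for the illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, n_steps, tau, sigma = 0.01, 100_000, 0.5, 1.0

def ou_series():
    """Synthetic stationary 'flux' with exact correlation sigma^2 * exp(-t/tau)."""
    x = np.empty(n_steps)
    x[0] = sigma * rng.standard_normal()
    a = np.exp(-dt / tau)
    noise = sigma * np.sqrt(1 - a**2) * rng.standard_normal(n_steps)
    for i in range(1, n_steps):
        x[i] = a * x[i - 1] + noise[i]
    return x

def gk_integral(x, max_lag=1000):
    """Green-Kubo style estimate: integrate the flux autocorrelation up to a cutoff."""
    x = x - x.mean()
    n = x.size
    acf = np.array([np.dot(x[:n - k], x[k:]) / (n - k) for k in range(max_lag)])
    return float(acf.sum() * dt)      # simple rectangle-rule integral of the ACF

replicas = [gk_integral(ou_series()) for _ in range(8)]
est, err = np.mean(replicas), np.std(replicas, ddof=1) / np.sqrt(len(replicas))
print(f"estimate {est:.3f} +/- {err:.3f} (exact sigma^2 * tau = {sigma**2 * tau:.3f})")
```

The spread across replicas plays the role of the error estimate; the paper's contribution is a much tighter, theoretically grounded bound on that error computed on the fly.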
Cost-effectiveness of the U.S. Geological Survey stream-gaging program in Indiana
Stewart, J.A.; Miller, R.L.; Butch, G.K.
1986-01-01
Analysis of the stream gaging program in Indiana was divided into three phases. The first phase involved collecting information concerning the data need and the funding source for each of the 173 surface water stations in Indiana. The second phase used alternate methods to produce streamflow records at selected sites. Statistical models were used to generate stream flow data for three gaging stations. In addition, flow routing models were used at two of the sites. Daily discharges produced from models did not meet the established accuracy criteria and, therefore, these methods should not replace stream gaging procedures at those gaging stations. The third phase of the study determined the uncertainty of the rating and the error at individual gaging stations, and optimized travel routes and frequency of visits to gaging stations. The annual budget, in 1983 dollars, for operating the stream gaging program in Indiana is $823,000. The average standard error of instantaneous discharge for all continuous record gaging stations is 25.3%. A budget of $800,000 could maintain this level of accuracy if stream gaging stations were visited according to phase III results. A minimum budget of $790,000 is required to operate the gaging network. At this budget, the average standard error of instantaneous discharge would be 27.7%. A maximum budget of $1,000,000 was simulated in the analysis and the average standard error of instantaneous discharge was reduced to 16.8%. (Author's abstract)
NASA Technical Reports Server (NTRS)
Hofman, L. B.; Erickson, W. K.; Donovan, W. E.
1984-01-01
The Image Display and Analysis System (MIDAS), developed at NASA/Ames for the analysis of Landsat MSS images, is described. The MIDAS computer power and memory, graphics, resource-sharing, expansion and upgrade, environment and maintenance, and software/user-interface requirements are outlined; the implementation hardware (including a 32-bit microprocessor, 512K of error-correcting RAM, a 70- or 140-Mbyte formatted disk drive, a 512 x 512 x 24 color frame buffer, and a local-area-network transceiver) and applications software (ELAS, CIE, and P-EDITOR) are characterized; and implementation problems, performance data, and costs are examined. Planned improvements in MIDAS hardware and design goals and areas of exploration for MIDAS software are discussed.
NASA Astrophysics Data System (ADS)
Sousa, Andre R.; Schneider, Carlos A.
2001-09-01
A touch probe mounted on a 3-axis vertical machining center is used to check a hole plate calibrated on a coordinate measuring machine (CMM). By comparing the results obtained from the machine tool and the CMM, the main machine tool error components are measured, attesting to the machine's accuracy. The error values can also be used to update the error compensation table in the CNC, enhancing the machine accuracy. The method is easy to use, has a lower cost than classical test techniques, and preliminary results have shown that its uncertainty is comparable to well-established techniques. In this paper the method is compared with the laser interferometric system with regard to reliability, cost, and time efficiency.
Yilmaz, Yildiz E; Bull, Shelley B
2011-11-29
Use of trait-dependent sampling designs in whole-genome association studies of sequence data can reduce total sequencing costs with modest losses of statistical efficiency. In a quantitative trait (QT) analysis of data from the Genetic Analysis Workshop 17 mini-exome for unrelated individuals in the Asian subpopulation, we investigate alternative designs that sequence only 50% of the entire cohort. In addition to a simple random sampling design, we consider extreme-phenotype designs that are of increasing interest in genetic association analysis of QTs, especially in studies concerned with the detection of rare genetic variants. We also evaluate a novel sampling design in which all individuals have a nonzero probability of being selected into the sample but in which individuals with extreme phenotypes have a proportionately larger probability. We take differential sampling of individuals with informative trait values into account by inverse probability weighting using standard survey methods, so that results generalize to the source population. In replicate 1 data, we applied the designs in association analysis of Q1 with both rare and common variants in the FLT1 gene, based on knowledge of the generating model. Using all 200 replicate data sets, we similarly analyzed Q1 and Q4 (which is known to be free of association with FLT1) to evaluate relative efficiency, type I error, and power. Simulation study results suggest that the QT-dependent selection designs generally yield greater than 50% relative efficiency compared to using the entire cohort, implying cost-effectiveness of 50% sample selection and a worthwhile reduction of sequencing costs.
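A minimal sketch of the inverse-probability-weighting idea, assuming a toy additive genetic effect, a made-up extreme-phenotype selection rule, and hand-picked inclusion probabilities (none of which come from the workshop data): each sequenced individual is weighted by the inverse of its selection probability in a weighted least-squares fit, so the estimate targets the full source cohort.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
geno = rng.binomial(2, 0.3, n)                  # additive genotype coding 0/1/2
qt = 0.25 * geno + rng.standard_normal(n)       # quantitative trait, true effect 0.25

# Trait-dependent selection: everyone selectable, extremes more likely (hypothetical)
extreme = (qt < np.quantile(qt, 0.1)) | (qt > np.quantile(qt, 0.9))
pi = np.where(extreme, 0.9, 0.4)                # inclusion probabilities
sampled = rng.random(n) < pi

# Inverse-probability-weighted least squares on the sequenced subsample
X = np.column_stack([np.ones(sampled.sum()), geno[sampled]])
w = 1.0 / pi[sampled]
beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * qt[sampled]))
print("IPW estimate of the genotype effect:", round(beta[1], 3))
```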
Repeat-aware modeling and correction of short read errors.
Yang, Xiao; Aluru, Srinivas; Dorman, Karin S
2011-02-15
High-throughput short read sequencing is revolutionizing genomics and systems biology research by enabling cost-effective deep coverage sequencing of genomes and transcriptomes. Error detection and correction are crucial to many short read sequencing applications including de novo genome sequencing, genome resequencing, and digital gene expression analysis. Short read error detection is typically carried out by counting the observed frequencies of kmers in reads and validating those with frequencies exceeding a threshold. In case of genomes with high repeat content, an erroneous kmer may be frequently observed if it has few nucleotide differences with valid kmers with multiple occurrences in the genome. Error detection and correction were mostly applied to genomes with low repeat content and this remains a challenging problem for genomes with high repeat content. We develop a statistical model and a computational method for error detection and correction in the presence of genomic repeats. We propose a method to infer genomic frequencies of kmers from their observed frequencies by analyzing the misread relationships among observed kmers. We also propose a method to estimate the threshold useful for validating kmers whose estimated genomic frequency exceeds the threshold. We demonstrate that superior error detection is achieved using these methods. Furthermore, we break away from the common assumption of uniformly distributed errors within a read, and provide a framework to model position-dependent error occurrence frequencies common to many short read platforms. Lastly, we achieve better error correction in genomes with high repeat content. The software is implemented in C++ and is freely available under GNU GPL3 license and Boost Software V1.0 license at "http://aluru-sun.ece.iastate.edu/doku.php?id=redeem". We introduce a statistical framework to model sequencing errors in next-generation reads, which led to promising results in detecting and correcting errors for genomes with high repeat content.
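The baseline approach the abstract builds on (count k-mers, treat rare ones as likely errors, and correct them toward a frequent neighbour) can be sketched in a few lines of Python. The reads, the choice of k = 5, and the frequency threshold below are toy values, and the sketch deliberately omits the paper's repeat-aware statistical model and position-dependent error handling.

```python
from collections import Counter
from itertools import product

K = 5
reads = ["ACGTACGTACGT", "ACGTACGAACGT", "ACGTACGTACGT", "ACGTACGTACGG"]

def kmers(seq, k=K):
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

counts = Counter(km for r in reads for km in kmers(r))
THRESHOLD = 2          # k-mers seen fewer times than this are treated as likely errors

def correct(kmer):
    """Replace a low-frequency k-mer with its most frequent single-mismatch neighbour."""
    if counts[kmer] >= THRESHOLD:
        return kmer
    best, best_count = kmer, counts[kmer]
    for pos, base in product(range(K), "ACGT"):
        cand = kmer[:pos] + base + kmer[pos + 1:]
        if counts[cand] > best_count:
            best, best_count = cand, counts[cand]
    return best

for km in kmers("ACGTACGAACGT"):      # the read containing a simulated error
    print(km, "->", correct(km))
```

In a repeat-rich genome this naive thresholding misclassifies k-mers, which is precisely the gap the paper's inferred genomic frequencies are meant to close.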
Ballistic intercept missions to Comet Encke
NASA Technical Reports Server (NTRS)
Mumma, M. (Compiler)
1975-01-01
The optimum ballistic intercept of a spacecraft with the comet Encke is determined. The following factors are considered in the analysis: energy requirements, encounter conditions, targeting error, comet activity, spacecraft engineering requirements and restraints, communications, and scientific return of the mission. A baseline model is formulated which includes the basic elements necessary to estimate the scientific return for the different missions considered. Tradeoffs which have major impact on the cost and/or scientific return of a ballistic mission to comet Encke are identified and discussed. Recommendations are included.
Data Analysis and Its Impact on Predicting Schedule & Cost Risk
2006-03-01
The variance of the error term was examined with the Breusch-Pagan test for constant variance (Neter et al., 1996). Using Microsoft Excel, p-values of 0.2257 and 0.1212 were calculated for the Breusch-Pagan tests; comparing these with an alpha of 0.05 supports the assumption of constant variance.
NASA Astrophysics Data System (ADS)
Hacker, Joshua; Vandenberghe, Francois; Jung, Byoung-Jo; Snyder, Chris
2017-04-01
Effective assimilation of cloud-affected radiance observations from space-borne imagers, with the aim of improving cloud analysis and forecasting, has proven to be difficult. Large observation biases, nonlinear observation operators, and non-Gaussian innovation statistics present many challenges. Ensemble-variational data assimilation (EnVar) systems offer the benefits of flow-dependent background error statistics from an ensemble, and the ability of variational minimization to handle nonlinearity. The specific benefits of ensemble statistics, relative to the static background errors more commonly used in variational systems, have not been quantified for the problem of assimilating cloudy radiances. A simple experiment framework is constructed with a regional NWP model and an operational variational data assimilation system to provide a basis for understanding the importance of ensemble statistics in cloudy radiance assimilation. Restricting the observations to those corresponding to clouds in the background forecast leads to innovations that are more Gaussian. The number of large innovations is reduced compared to the more general case of all observations, but not eliminated. The Huber norm is investigated to handle the fat tails of the distributions and to allow more observations to be assimilated without the need for strict background checks that would otherwise eliminate them. Comparing assimilation using only ensemble background error statistics with assimilation using only static background error statistics elucidates the importance of the ensemble statistics. Although the cost functions in both experiments converge to similar values after sufficient outer-loop iterations, the resulting cloud water, ice, and snow contents are greater in the ensemble-based analysis. The subsequent forecasts from the ensemble-based analysis also retain more condensed water species, indicating that the local environment is more supportive of clouds. In this presentation we provide details that explain the apparent benefit of using ensembles for cloudy radiance assimilation in an EnVar context.
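The role of the Huber norm mentioned above is easiest to see next to the usual quadratic observation term; the Python sketch below uses the conventional tuning constant c = 1.345 and made-up innovation values, not anything from the study.

```python
import numpy as np

def quadratic_cost(d, sigma=1.0):
    """Standard Gaussian observation term: grows quadratically with the innovation."""
    return 0.5 * (d / sigma) ** 2

def huber_cost(d, sigma=1.0, c=1.345):
    """Huber observation term: quadratic for small innovations, linear beyond c*sigma,
    so a few large (fat-tailed) cloudy-radiance departures do not dominate the cost."""
    a = np.abs(d) / sigma
    return np.where(a <= c, 0.5 * a**2, c * a - 0.5 * c**2)

innovations = np.array([0.3, -0.8, 1.2, 4.0, 9.0])   # last two mimic fat-tailed departures
print(quadratic_cost(innovations))
print(huber_cost(innovations))
```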
The Grapefruit: An Alternative Arthroscopic Tool Skill Platform.
Molho, David A; Sylvia, Stephen M; Schwartz, Daniel L; Merwin, Sara L; Levy, I Martin
2017-08-01
To establish the construct validity of an arthroscopic training model that teaches arthroscopic tool skills (triangulation, grasping, precision biting, implant delivery, and ambidexterity) and uses a whole grapefruit as its training platform. For the grapefruit training model (GTM), an arthroscope and arthroscopic instruments were introduced through portals cut in the skin of a whole prepared grapefruit. After institutional review board approval, participants performed a set of tasks inside the grapefruit. Performance for each component was assessed by recording errors, achievement of criteria, and time to completion. A total of 19 medical students, orthopaedic surgery residents, and fellowship-trained orthopaedic surgeons were included in the analysis and were divided into 3 groups based on arthroscopic experience. One-way analysis of variance (ANOVA) and the post hoc Tukey test were used for statistical analysis. One-way ANOVA showed significant differences in both time to completion and errors between groups, F(2, 16) = 16.10, P < .001; F(2, 16) = 17.43, P < .001. Group A had a longer time to completion and more errors than group B (P = .025, P = .019), and group B had a longer time to completion and more errors than group C (P = .023, P = .018). The GTM is an easily assembled alternative arthroscopic training model that bridges the gap between box trainers, cadavers, and virtual reality simulators. Our findings suggest construct validity when evaluating its use for teaching basic arthroscopic tool skills. As such, it is a useful addition to the arthroscopic training toolbox. There is a need for validated low-cost arthroscopic training models that are easily accessible. Copyright © 2017 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
Clinical Laboratory Automation: A Case Study
Archetti, Claudia; Montanelli, Alessandro; Finazzi, Dario; Caimi, Luigi; Garrafa, Emirena
2017-01-01
Background This paper presents a case study of an automated clinical laboratory in a large urban academic teaching hospital in the North of Italy, the Spedali Civili in Brescia, where four laboratories were merged into a single laboratory through the introduction of laboratory automation. Materials and Methods The analysis compares the preautomation situation and the new setting from a cost perspective, by considering direct and indirect costs. It also presents an analysis of the turnaround time (TAT). The study considers equipment, staff and indirect costs. Results The introduction of automation led to a slight increase in equipment costs, which was more than compensated by a remarkable decrease in staff costs. Consequently, total costs decreased by 12.55%. The analysis of the TAT shows an improvement for nonemergency exams, while emergency exams are still validated within the maximum time imposed by the hospital. Conclusions The strategy adopted by the management, which was based on re-using the available equipment and staff when merging the pre-existing laboratories, has reached its goal: introducing automation while minimizing the costs. Significance for public health Automation is an emerging trend in modern clinical laboratories, with a positive impact on service level to patients and on staff safety, as shown by different studies. In fact, it allows process standardization which, in turn, decreases the frequency of outliers and errors. In addition, it induces faster processing times, thus improving the service level. Automation also decreases staff exposure to accidents, strongly improving staff safety. In this study, we analyse a further potential benefit of automation, that is, economic convenience. We study the case of the automated laboratory of one of the biggest hospitals in Italy and compare the costs related to the pre- and post-automation situations. Introducing automation led to a cost decrease without affecting the service level to patients. This was a key goal of the hospital, which, as public health entities in general, is constantly struggling with budget constraints. PMID:28660178
Akiyama, M
2007-01-01
The concept of our system is not only to manage material flows, but also to provide an integrated management resource, a means of correcting errors in medical treatment, and applications to EBM (evidence-based medicine) through the data mining of medical records. Prior to the development of this system, electronic processing systems in hospitals did a poor job of accurately grasping medical practice and medical material flows. With POAS (Point of Act System), hospital managers can solve the so-called "man, money, material, and information" issues inherent in the costs of healthcare. The POAS system synchronizes with each department system, from finance and accounting, to pharmacy, to imaging, and allows information exchange. With this system we can completely manage Man (Business Process), Material (Medical Materials and Medicine), Money (Expenditure for Purchase and Receipt), and Information (Medical Records). Our analysis has shown that this system has a remarkable investment effect - saving over four million dollars per year - through cost savings in logistics and business process efficiencies. In addition, the quality of care has been improved dramatically while error rates have been reduced - nearly to zero in some cases.
Preliminary design of the redundant software experiment
NASA Technical Reports Server (NTRS)
Campbell, Roy; Deimel, Lionel; Eckhardt, Dave, Jr.; Kelly, John; Knight, John; Lauterbach, Linda; Lee, Larry; Mcallister, Dave; Mchugh, John
1985-01-01
The goal of the present experiment is to characterize the fault distributions of highly reliable software replicates, constructed using techniques and environments which are similar to those used in contemporary industrial software facilities. The fault distributions and their effect on the reliability of fault tolerant configurations of the software will be determined through extensive life testing of the replicates against carefully constructed, randomly generated test data. Each detected error will be carefully analyzed to provide insight into its nature and cause. A direct objective is to develop techniques for reducing the intensity of coincident errors, thus increasing the reliability gain which can be achieved with fault tolerance. Data on the reliability gains realized, and the cost of the fault tolerant configurations, can be used to design a companion experiment to determine the cost effectiveness of the fault tolerant strategy. Finally, the data and analysis produced by this experiment will be valuable to the software engineering community as a whole because they will provide useful insight into the nature and cause of hard-to-find, subtle faults which escape standard software engineering validation techniques and thus persist far into the software life cycle.
NASA Astrophysics Data System (ADS)
Kim, Shin-Woo; Noh, Nam-Kyu; Lim, Gyu-Ho
2013-04-01
This study presents the introduction of retrospective optimal interpolation (ROI) and its application with the Weather Research and Forecasting (WRF) model. Song et al. (2009) proposed the ROI method, an optimal interpolation (OI) scheme that gradually assimilates observations over the analysis window to obtain a variance-minimum estimate of the atmospheric state at the initial time of the window. The assimilation window of the ROI algorithm is gradually lengthened, similar to that of quasi-static variational assimilation (QSVA; Pires et al., 1996). Unlike QSVA, however, ROI assimilates data at post-analysis times using a perturbation method (Verlaan and Heemink, 1997) without an adjoint model. Song and Lim (2011) improved the method by incorporating eigen-decomposition and covariance inflation. The computational cost of ROI can be reduced by the eigen-decomposition of the background error covariance, which concentrates the ROI analyses on the error variances of the governing eigenmodes by transforming the control variables into eigenspace. A total energy norm is used for the normalization of the control variables. In this study, ROI is applied to the WRF model in an Observing System Simulation Experiment (OSSE) to validate the algorithm and to investigate its capability. Horizontal wind, pressure, potential temperature, and water vapor mixing ratio are used as control variables and observations. First, a single-profile assimilation experiment is performed. Subsequently, OSSEs are performed using a virtual observing system consisting of synop, ship, and sonde data. The difference between forecast errors with and without assimilation clearly grows with time, indicating that assimilation by ROI improves the forecast. The characteristics and strengths/weaknesses of the ROI method are also investigated by conducting experiments with the 3D-Var (three-dimensional variational) and 4D-Var (four-dimensional variational) methods. At the initial time, ROI produces a larger forecast error than 4D-Var. However, the difference between the two results decreases gradually with time, and ROI shows clearly better results (i.e., smaller forecast errors) than 4D-Var after the 9-hour forecast.
Improving hospital billing and receivables management: principles for profitability.
Hemmer, E
1992-01-01
For many hospitals, billing and receivables management are inefficient and costly. Economic recession, increasing costs for patient and provider alike, and cost-containment strategies will only compound these difficulties. The author describes the foundations of an automated billing system that would save hospitals time, errors, and, most importantly, money.
An Algebraic Approach to Guarantee Harmonic Balance Method Using Gröbner Base
NASA Astrophysics Data System (ADS)
Yagi, Masakazu; Hisakado, Takashi; Okumura, Kohshi
The harmonic balance (HB) method is a well-known principle for analyzing periodic oscillations in nonlinear networks and systems. Because the HB method has a truncation error, approximate solutions have been guaranteed by error bounds. However, the numerical computation of such bounds is very time-consuming compared with solving the HB equation itself. This paper proposes an algebraic representation of the error bound using a Gröbner basis. The algebraic representation makes it possible to decrease the computational cost of the error bound considerably. Moreover, using singular points of the algebraic representation, we can obtain accurate break points of the error bound by collisions.
Mukasa, Oscar; Mushi, Hildegalda P; Maire, Nicolas; Ross, Amanda; de Savigny, Don
2017-01-01
Data entry at the point of collection using mobile electronic devices may make data-handling processes more efficient and cost-effective, but there is little literature to document and quantify gains, especially for longitudinal surveillance systems. To examine the potential of mobile electronic devices compared with paper-based tools in health data collection. Using data from 961 households from the Rufiji Household and Demographic Survey in Tanzania, the quality and costs of data collected on paper forms and electronic devices were compared. We also documented, using qualitative approaches, the views of field workers ('enumerators') and household members on the use of both methods. Existing administrative records were combined with logistics expenditure measured directly from comparison households to approximate annual costs per 1,000 households surveyed. Errors were detected in 17% (166) of households for the paper records and 2% (15) for the electronic records (p < 0.001). There were differences in the types of errors (p = 0.03). Of the errors occurring, a higher proportion were due to accuracy in paper surveys (79%, 95% CI: 72%, 86%) compared with electronic surveys (58%, 95% CI: 29%, 87%). Errors in electronic surveys were more likely to be related to completeness (32%, 95% CI: 12%, 56%) than in paper surveys (11%, 95% CI: 7%, 17%). The median duration of the interviews ('enumeration') per household was 9.4 minutes (90% central range: 6.4, 12.2) for paper and 8.3 (6.1, 12.0) for electronic surveys (p = 0.001). Surveys using electronic tools were less costly than paper-based tools by 28% for recurrent costs and 19% for total costs. Although there were technical problems with electronic devices, there was good acceptance of both methods by enumerators and members of the community. Our findings support the use of mobile electronic devices for large-scale longitudinal surveys in resource-limited settings.
Akazawa, Manabu; Stearns, Sally C; Biddle, Andrea K
2008-01-01
Objective To assess costs, effectiveness, and cost-effectiveness of inhaled corticosteroids (ICS) augmenting bronchodilator treatment for chronic obstructive pulmonary disease (COPD). Data Sources Claims between 1997 and 2005 from a large managed care database. Study Design Individual-level, fixed-effects regression models estimated the effects of initiating ICS on medical expenses and likelihood of severe exacerbation. Bootstrapping provided estimates of the incremental cost per severe exacerbation avoided. Data Extraction Methods COPD patients aged 40 or older with ≥15 months of continuous eligibility were identified. Monthly observations for 1 year before and up to 2 years following initiation of bronchodilators were constructed. Principal Findings ICS treatment reduced the monthly risk of severe exacerbation by 25 percent. Total costs with ICS increased for 16 months, but declined thereafter. ICS use was cost saving 46 percent of the time, with an incremental cost-effectiveness ratio of $2,973 per exacerbation avoided; for patients ≥50 years old, ICS was cost saving 57 percent of the time. Conclusions ICS treatment reduces exacerbations, with an increase in total costs initially for the full sample. Compared with younger patients with COPD, patients aged 50 or older have reduced costs and improved outcomes. The estimated cost per severe exacerbation avoided, however, may be high for either group because of uncertainty as reflected by the large standard errors of the parameter estimates. PMID:18671750
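The bootstrapped cost-per-exacerbation-avoided calculation described above can be sketched with entirely synthetic per-patient increments; the sample size, cost distribution, and event rate below are invented for illustration and are not taken from the claims data.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 400                                             # hypothetical matched patient sample

# Synthetic per-patient increments attributable to adding ICS (NOT the study data):
d_cost = rng.normal(200, 1500, n)                   # extra cost in dollars; can be negative
d_exac = rng.binomial(1, 0.05, n)                   # 1 = a severe exacerbation avoided

def icer(idx):
    """Incremental cost per exacerbation avoided for a resampled set of patients."""
    return d_cost[idx].mean() / max(d_exac[idx].mean(), 1e-9)

boot = np.array([icer(rng.integers(0, n, n)) for _ in range(2000)])
print(f"ICER point estimate: ${icer(np.arange(n)):,.0f} per exacerbation avoided")
print(f"bootstrap 95% interval: ${np.percentile(boot, 2.5):,.0f} to ${np.percentile(boot, 97.5):,.0f}")
print(f"proportion of replicates where ICS is cost-saving: {(boot < 0).mean():.0%}")
```

The share of bootstrap replicates with negative incremental cost mirrors the "cost saving X percent of the time" statements in the abstract, and the wide interval illustrates why large standard errors make the ratio hard to pin down.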
Interdisciplinary Coordination Reviews: A Process to Reduce Construction Costs.
ERIC Educational Resources Information Center
Fewell, Dennis A.
1998-01-01
Interdisciplinary Coordination design review is instrumental in detecting coordination errors and omissions in construction documents. Cleansing construction documents of interdisciplinary coordination errors reduces time extensions, the largest source of change orders, and limits exposure to liability claims. Improving the quality of design…
Fast online generalized multiscale finite element method using constraint energy minimization
NASA Astrophysics Data System (ADS)
Chung, Eric T.; Efendiev, Yalchin; Leung, Wing Tat
2018-02-01
Local multiscale methods often construct multiscale basis functions in the offline stage without taking into account input parameters, such as source terms, boundary conditions, and so on. These basis functions are then used in the online stage with a specific input parameter to solve the global problem at a reduced computational cost. Recently, online approaches have been introduced, where multiscale basis functions are adaptively constructed in some regions to reduce the error significantly. In multiscale methods, it is desired to have only 1-2 iterations to reduce the error to a desired threshold. Using the Generalized Multiscale Finite Element Framework [10], it was shown that by choosing a sufficient number of offline basis functions, the error reduction can be made independent of physical parameters, such as scales and contrast. In this paper, our goal is to improve this. Using our recently proposed approach [4] and a special online basis construction in oversampled regions, we show that the error reduction can be made sufficiently large by appropriately selecting oversampling regions. Our numerical results show that one can achieve a three-order-of-magnitude error reduction, which is better than our previous methods. We also develop an adaptive algorithm that enriches selected regions with large residuals. In our adaptive method, we show that the convergence rate can be determined by a user-defined parameter, and we confirm this by numerical simulations. The analysis of the method is presented.
NASA Astrophysics Data System (ADS)
Becker, Roland; Vexler, Boris
2005-06-01
We consider the calibration of parameters in physical models described by partial differential equations. This task is formulated as a constrained optimization problem with a cost functional of least squares type using information obtained from measurements. An important issue in the numerical solution of this type of problem is the control of the errors introduced, first, by discretization of the equations describing the physical model, and second, by measurement errors or other perturbations. Our strategy is as follows: we suppose that the user defines an interest functional I, which might depend on both the state variable and the parameters and which represents the goal of the computation. First, we propose an a posteriori error estimator which measures the error with respect to this functional. This error estimator is used in an adaptive algorithm to construct economic meshes by local mesh refinement. The proposed estimator requires the solution of an auxiliary linear equation. Second, we address the question of sensitivity. Applying similar techniques as before, we derive quantities which describe the influence of small changes in the measurements on the value of the interest functional. These numbers, which we call relative condition numbers, give additional information on the problem under consideration. They can be computed by means of the solution of the auxiliary problem determined before. Finally, we demonstrate our approach at hand of a parameter calibration problem for a model flow problem.
Human performance cognitive-behavioral modeling: a benefit for occupational safety.
Gore, Brian F
2002-01-01
Human Performance Modeling (HPM) is a computer-aided job analysis software methodology used to generate predictions of complex human-automation integration and system flow patterns with the goal of improving operator and system safety. The use of HPM tools has recently been increasing due to reductions in computational cost, augmentations in the tools' fidelity, and usefulness in the generated output. An examination of an Air Man-machine Integration Design and Analysis System (Air MIDAS) model evaluating complex human-automation integration currently underway at NASA Ames Research Center will highlight the importance to occupational safety of considering both cognitive and physical aspects of performance when researching human error.
Analysis of forecasting and inventory control of raw material supplies in PT INDAC INT’L
NASA Astrophysics Data System (ADS)
Lesmana, E.; Subartini, B.; Riaman; Jabar, D. A.
2018-03-01
This study discusses forecasting of carbon electrode sales data at PT INDAC INT'L using the Winters and double moving average methods, while the Economic Order Quantity (EOQ) model is used to predict the amount of inventory and the cost required for ordering carbon electrode raw material in the next period. The error analysis, based on MAE, MSE, and MAPE, shows that the Winters method is the better method for forecasting sales of carbon electrode products. PT INDAC INT'L is therefore advised to stock products in line with the sales quantities forecast by the Winters method.
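As a minimal illustration of the two ingredients named above, the Python sketch below computes MAE, MSE, and MAPE for a forecast against actual sales and evaluates the textbook EOQ formula Q* = sqrt(2DS/H); the sales figures, annual demand, ordering cost, and holding cost are hypothetical, not the company's data.

```python
import numpy as np

def forecast_errors(actual, forecast):
    """Standard forecast accuracy metrics used to compare forecasting methods."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    e = actual - forecast
    return {"MAE": np.mean(np.abs(e)),
            "MSE": np.mean(e ** 2),
            "MAPE": np.mean(np.abs(e / actual)) * 100}

def eoq(annual_demand, order_cost, holding_cost_per_unit):
    """Classic Economic Order Quantity: Q* = sqrt(2*D*S/H)."""
    return np.sqrt(2 * annual_demand * order_cost / holding_cost_per_unit)

# Hypothetical monthly carbon-electrode sales vs. a Winters-style forecast
actual   = [120, 135, 150, 160, 155, 170]
forecast = [118, 140, 147, 158, 160, 165]
print(forecast_errors(actual, forecast))
print("order quantity:", round(eoq(annual_demand=1800, order_cost=50, holding_cost_per_unit=2)))
```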
All-digital precision processing of ERTS images
NASA Technical Reports Server (NTRS)
Bernstein, R. (Principal Investigator)
1975-01-01
The author has identified the following significant results. Digital techniques have been developed and used to apply precision-grade radiometric and geometric corrections to ERTS MSS and RBV scenes. Geometric accuracies sufficient for mapping at 1:250,000 scale have been demonstrated. Radiometric quality has been superior to ERTS NDPF precision products. A configuration analysis has shown that feasible, cost-effective all-digital systems for correcting ERTS data are easily obtainable. This report contains a summary of all results obtained during this study and includes: (1) radiometric and geometric correction techniques, (2) reseau detection, (3) GCP location, (4) resampling, (5) alternative configuration evaluations, and (6) error analysis.
Reconciling uncertain costs and benefits in bayes nets for invasive species management
Burgman, M.A.; Wintle, B.A.; Thompson, C.A.; Moilanen, A.; Runge, M.C.; Ben-Haim, Y.
2010-01-01
Bayes nets are used increasingly to characterize environmental systems and formalize probabilistic reasoning to support decision making. These networks treat probabilities as exact quantities. Sensitivity analysis can be used to evaluate the importance of assumptions and parameter estimates. Here, we outline an application of info-gap theory to Bayes nets that evaluates the sensitivity of decisions to possibly large errors in the underlying probability estimates and utilities. We apply it to an example of management and eradication of Red Imported Fire Ants in Southern Queensland, Australia and show how changes in management decisions can be justified when uncertainty is considered. © 2009 Society for Risk Analysis.
Blaya, J A; Gomez, W; Rodriguez, P; Fraser, H
2008-08-01
One hundred and twenty-six public health centers and laboratories in Lima, Peru, without internet. We have previously shown that a personal digital assistant (PDA) based system reduces data collection delays and errors for tuberculosis (TB) laboratory results when compared to a paper system. To assess the data collection efficiency of each system and the resources required to develop, implement and transfer the PDA-based system to a resource-poor setting. Time-motion study of data collectors using the PDA-based and paper systems. Cost analysis of developing, implementing and transferring the PDA-based system to a local organization and their redeployment of the system. Work hours spent collecting and processing results decreased by 60% (P < 0.001). Users perceived this decrease to be 70% and had no technical problems they failed to fix. The total cost and time to develop and implement the intervention was US$26,092 and 22 weeks. The cost to extend the system to cover nine more districts was $1,125 and to implement collecting patient weights was $4,107. A PDA-based system drastically reduced the effort required to collect TB laboratory results from remote locations. With the framework described, open-source software and local development, organizations in resource-poor settings could reap the benefits of this technology.
Evaluation of the CEAS model for barley yields in North Dakota and Minnesota
NASA Technical Reports Server (NTRS)
Barnett, T. L. (Principal Investigator)
1981-01-01
The CEAS yield model is based upon multiple regression analysis at the CRD and state levels. For the historical time series, yield is regressed on a set of variables derived from monthly mean temperature and monthly precipitation. Technological trend is represented by piecewise linear and/or quadratic functions of year. Indicators of yield reliability obtained from a ten-year bootstrap test (1970-79) demonstrated that biases are small and that performance, as indicated by the root mean square errors, is acceptable for the intended application; however, model response for individual years, particularly unusual years, is not very reliable and shows some large errors. The model is objective, adequate, timely, simple and not costly. It considers scientific knowledge on a broad scale but not in detail, and does not provide a good current measure of modeled yield reliability.
Position Tracking During Human Walking Using an Integrated Wearable Sensing System.
Zizzo, Giulio; Ren, Lei
2017-12-10
Progress has been made enabling expensive, high-end inertial measurement units (IMUs) to be used as tracking sensors. However, the cost of these IMUs is prohibitive to their widespread use, and hence the potential of low-cost IMUs is investigated in this study. A wearable low-cost sensing system consisting of IMUs and ultrasound sensors was developed. Core to this system is an extended Kalman filter (EKF), which provides both zero-velocity updates (ZUPTs) and Heuristic Drift Reduction (HDR). The IMU data was combined with ultrasound range measurements to improve accuracy. When a map of the environment was available, a particle filter was used to impose constraints on the possible user motions. The system was therefore composed of three subsystems: IMUs, ultrasound sensors, and a particle filter. A Vicon motion capture system was used to provide ground truth information, enabling validation of the sensing system. Using only the IMU, the system showed loop misclosure errors of 1% with a maximum error of 4-5% during walking. The addition of the ultrasound sensors resulted in a 15% reduction in the total accumulated error. Lastly, the particle filter was capable of providing noticeable corrections, which could keep the tracking error below 2% after the first few steps.
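The value of zero-velocity updates is easy to demonstrate with a one-dimensional toy that strips away the EKF, HDR, and ultrasound fusion used in the actual system: a synthetic foot-mounted accelerometer with a small bias is double-integrated, with and without resetting velocity whenever a window of samples looks like stance. All signal parameters and thresholds below are invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
dt = 0.01                                    # 100 Hz IMU
t = np.arange(0.0, 10.0, dt)

# Toy foot-mounted accelerometer: a burst of acceleration during each swing phase,
# near zero during stance, plus a small constant bias that causes drift when integrated.
true_acc = np.where((t % 1.0) < 0.4, 2.0 * np.sin(2 * np.pi * (t % 1.0) / 0.4), 0.0)
meas_acc = true_acc + 0.05 + 0.02 * rng.standard_normal(t.size)

def integrate(acc, zupt=False, thresh=0.15, window=10):
    """Dead reckoning by double integration; optionally reset velocity to zero
    whenever the last `window` samples all look like stance (|acc| < thresh)."""
    v = x = 0.0
    for i in range(1, acc.size):
        v += acc[i] * dt
        if zupt and i >= window and np.all(np.abs(acc[i - window:i]) < thresh):
            v = 0.0                          # zero-velocity update during stance
        x += v * dt
    return x

true_x = integrate(true_acc)
print("drift without ZUPT: %.2f m" % (integrate(meas_acc) - true_x))
print("drift with ZUPT:    %.2f m" % (integrate(meas_acc, zupt=True) - true_x))
```

Even this crude stance detector cuts the accumulated drift by more than an order of magnitude, which is the effect the EKF-based ZUPT exploits in the wearable system.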
Cost-effectiveness of the stream-gaging program in North Carolina
Mason, R.R.; Jackson, N.M.
1985-01-01
This report documents the results of a study of the cost-effectiveness of the stream-gaging program in North Carolina. Data uses and funding sources are identified for the 146 gaging stations currently operated in North Carolina with a budget of $777,600 (1984). As a result of the study, eleven stations are nominated for discontinuance and five for conversion from recording to partial-record status. Large parts of North Carolina's Coastal Plain are identified as having sparse streamflow data. This sparsity should be remedied as funds become available. Efforts should also be directed toward defining the effects of drainage improvements on local hydrology and streamflow characteristics. The average standard error of streamflow records in North Carolina is 18.6 percent. This level of accuracy could be improved without increasing cost by increasing the frequency of field visits and streamflow measurements at stations with high standard errors and reducing the frequency at stations with low standard errors. A minimum budget of $762,000 is required to operate the 146-gage program. A budget less than this does not permit proper service and maintenance of the gages and recorders. At the minimum budget, and with the optimum allocation of field visits, the average standard error is 17.6 percent.
Partially Overlapping Mechanisms of Language and Task Control in Young and Older Bilinguals
Weissberger, Gali H.; Wierenga, Christina E.; Bondi, Mark W.; Gollan, Tamar H.
2012-01-01
The current study tested the hypothesis that bilinguals rely on domain-general mechanisms of executive control to achieve language control by asking if linguistic and nonlinguistic switching tasks exhibit similar patterns of aging-related decline. Thirty young and 30 aging bilinguals completed a cued language-switching task and a cued color-shape switching task. Both tasks demonstrated significant aging effects, but aging-related slowing and the aging-related increase in errors were significantly larger on the color-shape than on the language task. In the language task, aging increased language-switching costs in both response times and errors, and language-mixing costs only in response times. In contrast, the color-shape task exhibited an aging-related increase in costs only in mixing errors. Additionally, a subset of the older bilinguals could not do the color-shape task, but were able to do the language task, and exhibited significantly larger language-switching costs than matched controls. These differences, and some subtle similarities, in aging effects observed across tasks imply that mechanisms of nonlinguistic task and language control are only partly shared and demonstrate relatively preserved language control in aging. More broadly, these data suggest that age deficits in switching and mixing costs may depend on task expertise, with mixing deficits emerging for less-practiced tasks and switching deficits for highly practiced, possibly “expert” tasks (i.e., language). PMID:22582883
Hitting the Optimal Vaccination Percentage and the Risks of Error: Why to Miss Right.
Harvey, Michael J; Prosser, Lisa A; Messonnier, Mark L; Hutton, David W
2016-01-01
To determine the optimal level of vaccination coverage defined as the level that minimizes total costs and explore how economic results change with marginal changes to this level of coverage. A susceptible-infected-recovered-vaccinated model designed to represent theoretical infectious diseases was created to simulate disease spread. Parameter inputs were defined to include ranges that could represent a variety of possible vaccine-preventable conditions. Costs included vaccine costs and disease costs. Health benefits were quantified as monetized quality adjusted life years lost from disease. Primary outcomes were the number of infected people and the total costs of vaccination. Optimization methods were used to determine population vaccination coverage that achieved a minimum cost given disease and vaccine characteristics. Sensitivity analyses explored the effects of changes in reproductive rates, costs and vaccine efficacies on primary outcomes. Further analysis examined the additional cost incurred if the optimal coverage levels were not achieved. Results indicate that the relationship between vaccine and disease cost is the main driver of the optimal vaccination level. Under a wide range of assumptions, vaccination beyond the optimal level is less expensive compared to vaccination below the optimal level. This observation did not hold when the cost of the vaccine became approximately equal to the cost of disease. These results suggest that vaccination below the optimal level of coverage is more costly than vaccinating beyond the optimal level. This work helps provide information for assessing the impact of changes in vaccination coverage at a societal level.
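To make the cost trade-off concrete, here is a minimal Python sketch of a discrete-time SIR model with a vaccinated fraction and a grid search for the coverage that minimizes total cost; the transmission, recovery, and price parameters are hypothetical and are not taken from the paper.

```python
import numpy as np

def sirv_total_cost(coverage, beta=0.3, gamma=0.1, days=365,
                    c_vax=50.0, c_case=2000.0, n=100_000):
    """Simulate a simple discrete-time SIR epidemic with a fraction of the
    population vaccinated at t=0; return total cost (hypothetical prices)."""
    s = n * (1 - coverage) - 1
    i, r = 1.0, n * coverage            # vaccinated counted as removed
    cases = 0.0
    for _ in range(days):
        new_inf = beta * s * i / n
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        cases += new_inf
    return coverage * n * c_vax + cases * c_case

coverages = np.linspace(0, 1, 101)
costs = [sirv_total_cost(c) for c in coverages]
print(f"cost-minimizing coverage ~ {coverages[int(np.argmin(costs))]:.2f}")
```

Because the epidemic largely dies out once coverage passes the herd-immunity threshold, the cost curve in such a toy model is typically much flatter above the optimum than below it, which is consistent with the asymmetry the study reports.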
Cost-effectiveness of the stream-gaging program in Kentucky
Ruhl, K.J.
1989-01-01
This report documents the results of a study of the cost-effectiveness of the stream-gaging program in Kentucky. The total surface-water program includes 97 daily-discharge stations, 12 stage-only stations, and 35 crest-stage stations and is operated on a budget of $950,700. One station used for research lacks an adequate source of funding and should be discontinued when the research ends. Most stations in the network are multiple-use with 65 stations operated for the purpose of defining hydrologic systems, 48 for project operation, 47 for definition of regional hydrology, and 43 for hydrologic forecasting purposes. Eighteen stations support water quality monitoring activities, one station is used for planning and design, and one station is used for research. The average standard error of estimation of streamflow records was determined only for stations in the Louisville Subdistrict. Under current operating policy, with a budget of $223,500, the average standard error of estimation is 28.5%. Altering the travel routes and measurement frequency to reduce the amount of lost stage record would allow a slight decrease in standard error to 26.9%. The results indicate that the collection of streamflow records in the Louisville Subdistrict is cost effective in its present mode of operation. In the Louisville Subdistrict, a minimum budget of $214,200 is required to operate the current network at an average standard error of 32.7%. A budget less than this does not permit proper service and maintenance of the gages and recorders. The maximum budget analyzed was $268,200, which would result in an average standard error of 16.9%, indicating that if the budget were increased by 20%, the standard error would be reduced by 40%. (USGS)
Govindarajan, R; Llueguera, E; Melero, A; Molero, J; Soler, N; Rueda, C; Paradinas, C
2010-01-01
Statistical Process Control (SPC) was applied to monitor patient set-up in radiotherapy and, when the measured set-up error values indicated a loss of process stability, its root cause was identified and eliminated to prevent set-up errors. Set-up errors were measured for medial-lateral (ml), cranial-caudal (cc) and anterior-posterior (ap) dimensions and then the upper control limits were calculated. Once the control limits were known and the range variability was acceptable, treatment set-up errors were monitored using sub-groups of 3 patients, three times each shift. These values were plotted on a control chart in real time. Control limit values showed that the existing variation was acceptable. Set-up errors, measured and plotted on an X chart, helped monitor the set-up process stability and, if and when the stability was lost, treatment was interrupted, the particular cause responsible for the non-random pattern was identified and corrective action was taken before proceeding with the treatment. The SPC protocol focuses on controlling the variability due to assignable causes instead of focusing on patient-to-patient variability, which normally does not exist. Compared to weekly sampling of set-up error in each and every patient, which may only ensure that just those sampled sessions were set-up correctly, the SPC method enables set-up error prevention in all treatment sessions for all patients and, at the same time, reduces the control costs. Copyright © 2009 SECA. Published by Elsevier Espana. All rights reserved.
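A minimal sketch of the X-bar control-chart computation for subgroups of three patients is given below; the Shewhart A2 constant is the standard tabulated value for subgroup size 3, and the data layout is an assumption rather than the clinic's actual protocol.

```python
import numpy as np

A2 = 1.023  # standard Shewhart X-bar factor for subgroups of size n = 3

def xbar_control_chart(setup_errors_mm):
    """setup_errors_mm: (k, 3) array of set-up errors for k subgroups of
    3 patients (one axis). Returns subgroup means, (LCL, CL, UCL), and an
    out-of-control flag per subgroup."""
    x = np.asarray(setup_errors_mm, dtype=float)
    xbar = x.mean(axis=1)                             # subgroup means
    rbar = (x.max(axis=1) - x.min(axis=1)).mean()     # mean subgroup range
    center = xbar.mean()
    ucl, lcl = center + A2 * rbar, center - A2 * rbar
    out_of_control = (xbar > ucl) | (xbar < lcl)
    return xbar, (lcl, center, ucl), out_of_control
```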
Kilian, Reinhold; Matschinger, Herbert; Löeffler, Walter; Roick, Christiane; Angermeyer, Matthias C
2002-03-01
Transformation of the dependent cost variable is often used to solve the problems of heteroscedasticity and skewness in linear ordinary least square regression of health service cost data. However, transformation may cause difficulties in the interpretation of regression coefficients and the retransformation of predicted values. The study compares the advantages and disadvantages of different methods to estimate regression-based cost functions using data on the annual costs of schizophrenia treatment. Annual costs of psychiatric service use and clinical and socio-demographic characteristics of the patients were assessed for a sample of 254 patients with a diagnosis of schizophrenia (ICD-10 F 20.0) living in Leipzig. The clinical characteristics of the participants were assessed by means of the BPRS 4.0, the GAF, and the CAN for service needs. Quality of life was measured by WHOQOL-BREF. A linear OLS regression model with non-parametric standard errors, a log-transformed OLS model and a generalized linear model with a log-link and a gamma distribution were used to estimate service costs. For the estimation of robust non-parametric standard errors, the variance estimator by White and a bootstrap estimator based on 2000 replications were employed. Models were evaluated by the comparison of the R2 and the root mean squared error (RMSE). RMSE of the log-transformed OLS model was computed with three different methods of bias correction. The 95% confidence intervals for the differences between the RMSE were computed by means of bootstrapping. A split-sample cross-validation procedure was used to forecast the costs for one half of the sample on the basis of a regression equation computed for the other half of the sample. All three methods showed significant positive influences of psychiatric symptoms and met psychiatric service needs on service costs. Only the log-transformed OLS model showed a significant negative impact of age, and only the GLM showed a significant negative influence of employment status and partnership on costs. All three models provided an R2 of about 0.31. The residuals of the linear OLS model revealed significant deviations from normality and homoscedasticity. The residuals of the log-transformed model were normally distributed but still heteroscedastic. The linear OLS model provided the lowest prediction error and the best forecast of the dependent cost variable. The log-transformed model provided the lowest RMSE if the heteroscedastic bias correction was used. The RMSE of the GLM with a log link and a gamma distribution was higher than those of the linear OLS model and the log-transformed OLS model. The difference between the RMSE of the linear OLS model and that of the log-transformed OLS model without bias correction was significant at the 95% level. As a result of the cross-validation procedure, the linear OLS model provided the lowest RMSE, followed by the log-transformed OLS model with a heteroscedastic bias correction. The GLM showed the weakest model fit again. None of the differences between the RMSE resulting from the cross-validation procedure were found to be significant. The comparison of the fit indices of the different regression models revealed that the linear OLS model provided a better fit than the log-transformed model and the GLM, but the differences between the models' RMSE were not significant.
Due to the small number of cases in the study, the lack of significance does not sufficiently prove that the differences between the RMSE for the different models are zero, and the superiority of the linear OLS model cannot be generalized. The lack of significant differences among the alternative estimators may reflect a lack of sample size adequate to detect important differences among the estimators employed. Further studies with larger case numbers are necessary to confirm the results. Specification of an adequate regression model requires a careful examination of the characteristics of the data. Estimation of standard errors and confidence intervals by nonparametric methods, which are robust against deviations from normality and homoscedasticity of the residuals, is a suitable alternative to transformation of the skewed dependent cost variable.
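For readers who want to reproduce this kind of comparison, the sketch below fits the three model families with statsmodels (linear OLS with robust standard errors, log-OLS with Duan's smearing retransformation, and a gamma GLM with log link); the variable names are placeholders, not the study's dataset.

```python
import numpy as np
import statsmodels.api as sm

def fit_cost_models(df):
    """df: DataFrame with an 'annual_cost' column and covariates (placeholder names).
    Returns in-sample RMSE for the three model families."""
    X = sm.add_constant(df[["bprs", "gaf", "needs_met", "age"]])
    y = df["annual_cost"]

    ols = sm.OLS(y, X).fit(cov_type="HC3")            # robust (White-type) SEs
    log_ols = sm.OLS(np.log(y), X).fit()
    smear = np.mean(np.exp(log_ols.resid))            # Duan's smearing factor
    pred_log = smear * np.exp(log_ols.fittedvalues)   # bias-corrected retransformation
    glm = sm.GLM(y, X, family=sm.families.Gamma(sm.families.links.Log())).fit()

    rmse = lambda pred: float(np.sqrt(np.mean((y - pred) ** 2)))
    return {"linear OLS": rmse(ols.fittedvalues),
            "log-OLS (smearing)": rmse(pred_log),
            "GLM gamma/log": rmse(glm.fittedvalues)}
```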
NASA Astrophysics Data System (ADS)
Kato, Takeyoshi; Sone, Akihito; Shimakage, Toyonari; Suzuoki, Yasuo
A microgrid (MG) is one of the measures for enhancing the high penetration of renewable energy (RE)-based distributed generators (DGs). For constructing an MG economically, the capacity optimization of controllable DGs against RE-based DGs is essential. By using a numerical simulation model developed on the basis of demonstrative studies on an MG using a PAFC and a NaS battery as controllable DGs and a photovoltaic power generation system (PVS) as an RE-based DG, this study discusses the influence of the forecast accuracy of PVS output on capacity optimization and daily operation, evaluated in terms of cost. The main results are as follows. The required capacity of the NaS battery must be increased by 10-40% relative to the ideal situation without forecast error in the PVS power output. The influence of forecast error on the received grid electricity would not be significant on an annual basis because positive and negative forecast errors vary from day to day. The annual total cost of facility and operation increases by 2-7% due to the forecast error applied in this study. The impacts of forecast error on facility optimization and on operation optimization are similar, each amounting to a few percent, implying that forecast accuracy should be improved in terms of both the frequency of large forecast errors and the average error.
On-line estimation of error covariance parameters for atmospheric data assimilation
NASA Technical Reports Server (NTRS)
Dee, Dick P.
1995-01-01
A simple scheme is presented for on-line estimation of covariance parameters in statistical data assimilation systems. The scheme is based on a maximum-likelihood approach in which estimates are produced on the basis of a single batch of simultaneous observations. Single-sample covariance estimation is reasonable as long as the number of available observations exceeds the number of tunable parameters by two or three orders of magnitude. Not much is known at present about model error associated with actual forecast systems. Our scheme can be used to estimate some important statistical model error parameters such as regionally averaged variances or characteristic correlation length scales. The advantage of the single-sample approach is that it does not rely on any assumptions about the temporal behavior of the covariance parameters: time-dependent parameter estimates can be continuously adjusted on the basis of current observations. This is of practical importance since it is likely to be the case that both model error and observation error strongly depend on the actual state of the atmosphere. The single-sample estimation scheme can be incorporated into any four-dimensional statistical data assimilation system that involves explicit calculation of forecast error covariances, including optimal interpolation (OI) and the simplified Kalman filter (SKF). The computational cost of the scheme is high but not prohibitive; on-line estimation of one or two covariance parameters in each analysis box of an operational OI system is currently feasible. A number of numerical experiments performed with an adaptive SKF and an adaptive version of OI, using a linear two-dimensional shallow-water model and artificially generated model error, are described. The performance of the nonadaptive versions of these methods turns out to depend rather strongly on correct specification of model error parameters. These parameters are estimated under a variety of conditions, including uniformly distributed model error and time-dependent model error statistics.
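A stripped-down version of the single-batch maximum-likelihood idea is sketched below for one scalar covariance parameter; the covariance structure and the optimizer are illustrative assumptions, not the operational implementation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def estimate_variance_scale(d, S_model, R):
    """Maximum-likelihood estimate of a scalar alpha in Cov(d) = alpha*S_model + R,
    from a single batch of innovations d (observation minus forecast).
    S_model: forecast-error covariance mapped to observation space (H P H^T).
    R: observation-error covariance. All inputs are illustrative placeholders."""
    d = np.asarray(d, dtype=float)

    def neg_log_likelihood(alpha):
        S = alpha * S_model + R
        _, logdet = np.linalg.slogdet(S)
        return 0.5 * (logdet + d @ np.linalg.solve(S, d))

    res = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 1e3), method="bounded")
    return res.x
```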
Multiplate Radiation Shields: Investigating Radiational Heating Errors
NASA Astrophysics Data System (ADS)
Richardson, Scott James
1995-01-01
Multiplate radiation shield errors are examined using the following techniques: (1) analytic heat transfer analysis, (2) optical ray tracing, (3) numerical fluid flow modeling, (4) laboratory testing, (5) wind tunnel testing, and (6) field testing. Guidelines for reducing radiational heating errors are given that are based on knowledge of the temperature sensor to be used, with the shield being chosen to match the sensor design. Small, reflective sensors that are exposed directly to the air stream (not inside a filter as is the case for many temperature and relative humidity probes) should be housed in a shield that provides ample mechanical and rain protection while impeding the air flow as little as possible; protection from radiation sources is of secondary importance. If a sensor does not meet the above criteria (i.e., is large or absorbing), then a standard Gill shield performs reasonably well. A new class of shields, called part-time aspirated multiplate radiation shields, is introduced. This type of shield consists of a multiplate design usually operated in a passive manner but equipped with a fan-forced aspiration capability to be used when necessary (e.g., low wind speed). The fans used here are 12-V DC units that can be operated with a small dedicated solar panel. This feature allows the fan to operate when global solar radiation is high, which is when the largest radiational heating errors usually occur. A prototype shield was constructed and field tested and an example is given in which radiational heating errors were reduced from 2 °C to 1.2 °C. The fan was run continuously to investigate night-time low wind speed errors and the prototype shield reduced errors from 1.6 °C to 0.3 °C. Part-time aspirated shields are an inexpensive alternative to fully aspirated shields and represent a good compromise between cost, power consumption, reliability (because they should be no worse than a standard multiplate shield if the fan fails), and accuracy. In addition, it is possible to modify existing passive shields to incorporate part-time aspiration, thus making them even more cost-effective. Finally, a new shield is described that incorporates a large diameter top plate that is designed to shade the lower portion of the shield. This shield increases flow through it by 60% compared with the Gill design, and it is likely to reduce radiational heating errors, although it has not been tested.
NASA Astrophysics Data System (ADS)
Pack, Robert C.; Standiford, Keith; Lukanc, Todd; Ning, Guo Xiang; Verma, Piyush; Batarseh, Fadi; Chua, Gek Soon; Fujimura, Akira; Pang, Linyong
2014-10-01
A methodology is described wherein a calibrated model-based 'Virtual' Variable Shaped Beam (VSB) mask writer process simulator is used to accurately verify complex Optical Proximity Correction (OPC) and Inverse Lithography Technology (ILT) mask designs prior to Mask Data Preparation (MDP) and mask fabrication. This type of verification addresses physical effects which occur in mask writing that may impact lithographic printing fidelity and variability. The work described here is motivated by requirements for extreme accuracy and control of variations for today's most demanding IC products. These extreme demands necessitate careful and detailed analysis of all potential sources of uncompensated error or variation and extreme control of these at each stage of the integrated OPC/MDP/mask/silicon lithography flow. The important potential sources of variation we focus on here originate on the basis of VSB mask writer physics and other errors inherent in the mask writing process. The deposited electron beam dose distribution may be examined in a manner similar to optical lithography aerial image analysis and image edge log-slope analysis. This approach enables one to catch, grade, and mitigate problems early and thus reduce the likelihood for costly long-loop iterations between OPC, MDP, and wafer fabrication flows. It moreover describes how to detect regions of a layout or mask where hotspots may occur or where the robustness to intrinsic variations may be improved by modification to the OPC, choice of mask technology, or by judicious design of VSB shots and dose assignment.
Convergence analysis of surrogate-based methods for Bayesian inverse problems
NASA Astrophysics Data System (ADS)
Yan, Liang; Zhang, Yuan-Xiang
2017-12-01
The major challenges in Bayesian inverse problems arise from the need for repeated evaluations of the forward model, as required by Markov chain Monte Carlo (MCMC) methods for posterior sampling. Many attempts at accelerating Bayesian inference have relied on surrogates for the forward model, typically constructed through repeated forward simulations that are performed in an offline phase. Although such approaches can be quite effective at reducing computation cost, there has been little analysis of the effect of the approximation on posterior inference. In this work, we prove error bounds on the Kullback-Leibler (KL) distance between the true posterior distribution and the approximation based on surrogate models. Our rigorous error analysis shows that if the forward model approximation converges at a certain rate in the prior-weighted L2 norm, then the posterior distribution generated by the approximation converges to the true posterior at least two times faster in the KL sense. The error bound on the Hellinger distance is also provided. To provide concrete examples focusing on the use of surrogate-model-based methods, we present an efficient technique for constructing stochastic surrogate models to accelerate the Bayesian inference approach. The Christoffel least squares algorithms, based on generalized polynomial chaos, are used to construct a polynomial approximation of the forward solution over the support of the prior distribution. The numerical strategy and the predicted convergence rates are then demonstrated on nonlinear inverse problems involving the inference of parameters appearing in partial differential equations.
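The offline/online division described above can be illustrated with a deliberately simplified sketch: an ordinary least-squares polynomial surrogate (rather than the Christoffel-weighted scheme of the paper) is built from a handful of forward-model runs and then used inside a Metropolis sampler; the forward model, prior, and noise level are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(theta):                      # stand-in for an expensive forward model
    return np.sin(3 * theta) + 0.5 * theta

# Offline phase: degree-6 polynomial surrogate over the prior support [-1, 1].
train_theta = rng.uniform(-1, 1, 60)
coeffs = np.polyfit(train_theta, forward(train_theta), deg=6)
surrogate = lambda t: np.polyval(coeffs, t)

# Online phase: Metropolis sampling of the posterior using only the surrogate.
y_obs, sigma = 0.3, 0.1
def log_post(t):
    if abs(t) > 1:                       # uniform prior on [-1, 1]
        return -np.inf
    return -0.5 * ((y_obs - surrogate(t)) / sigma) ** 2

samples, t = [], 0.0
for _ in range(5000):
    prop = t + 0.2 * rng.standard_normal()
    if np.log(rng.uniform()) < log_post(prop) - log_post(t):
        t = prop
    samples.append(t)
print("posterior mean ~", np.mean(samples[1000:]))
```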
Human Error and the International Space Station: Challenges and Triumphs in Science Operations
NASA Technical Reports Server (NTRS)
Harris, Samantha S.; Simpson, Beau C.
2016-01-01
Any system with a human component is inherently risky. Studies in human factors and psychology have repeatedly shown that human operators will inevitably make errors, regardless of how well they are trained. Onboard the International Space Station (ISS) where crew time is arguably the most valuable resource, errors by the crew or ground operators can be costly to critical science objectives. Operations experts at the ISS Payload Operations Integration Center (POIC), located at NASA's Marshall Space Flight Center in Huntsville, Alabama, have learned that from payload concept development through execution, there are countless opportunities to introduce errors that can potentially result in costly losses of crew time and science. To effectively address this challenge, we must approach the design, testing, and operation processes with two specific goals in mind. First, a systematic approach to error and human centered design methodology should be implemented to minimize opportunities for user error. Second, we must assume that human errors will be made and enable rapid identification and recoverability when they occur. While a systematic approach and human centered development process can go a long way toward eliminating error, the complete exclusion of operator error is not a reasonable expectation. The ISS environment in particular poses challenging conditions, especially for flight controllers and astronauts. Operating a scientific laboratory 250 miles above the Earth is a complicated and dangerous task with high stakes and a steep learning curve. While human error is a reality that may never be fully eliminated, smart implementation of carefully chosen tools and techniques can go a long way toward minimizing risk and increasing the efficiency of NASA's space science operations.
da Silva, Brianna A; Krishnamurthy, Mahesh
2016-01-01
A 71-year-old female accidentally received thiothixene (Navane), an antipsychotic, instead of her anti-hypertensive medication amlodipine (Norvasc) for 3 months. She sustained physical and psychological harm including ambulatory dysfunction, tremors, mood swings, and personality changes. Despite the many opportunities for intervention, multiple health care providers overlooked her symptoms. Errors occurred at multiple care levels, including prescribing, initial pharmacy dispensation, hospitalization, and subsequent outpatient follow-up. This exemplifies the Swiss Cheese Model of how errors can occur within a system. Adverse drug events (ADEs) account for more than 3.5 million physician office visits and 1 million emergency department visits each year. It is believed that preventable medication errors impact more than 7 million patients and cost almost $21 billion annually across all care settings. About 30% of hospitalized patients have at least one discrepancy on discharge medication reconciliation. Medication errors and ADEs are an underreported burden that adversely affects patients, providers, and the economy. Medication reconciliation including an 'indication review' for each prescription is an important aspect of patient safety. The decreasing frequency of pill bottle reviews, suboptimal patient education, and poor communication between healthcare providers are factors that threaten patient safety. Medication error and ADEs cost billions of health care dollars and are detrimental to the provider-patient relationship.
Cost analysis serves many purposes.
Finger, W R
1998-01-01
This article discusses the utility of performing cost analysis of family planning (FP) personnel resources by relying on a system analysis framework in developing countries. A study of a national provider that distributes 16% of all FP services in Mexico found that more efficient use of staff would increase the number of clients served. Nurses and doctors worked slightly more than 6 hours/day, and 38% of a nurse's time and 47% of a physician's time was spent in meetings, administrative duties, unoccupied work time, and personal time. The Mexican government proposed increasing the work day to 8 hours and increasing to 66% the portion of the work day spent on direct client activity. With this change, services would increase from 1.5 million couple-years of protection (CYP) to 1.8 million CYP in 2010, without additional staff, and CYP cost would decline. CYP costs could potentially be reduced by increasing the number of contraceptive units provided per visit and switching from a 1-month- to a 3-month-duration injectable contraceptive. A Bangladesh study found that CYP costs could be reduced by eliminating absenteeism and increasing work time/day by 1 hour. Cost studies can address specific human resource issues. A study in Thailand found that Norplant was more expensive per CYP than injectables and the IUD, and Norplant acceptors were willing to switch to other effective modern methods. The Thai government decided to target Norplant to a few target groups. Staff time use evaluations can be conducted by requiring staff to record their time or by having clients maintain records of staff time on their health cards. The time-motion study, which involves direct observations of how staff spend their time, is costly but avoids estimation error. A CEMOPLAF study in Ecuador found that 1 visit detected almost as many health problems as 4 visits. Some studies examine cost savings related to other services.
An Analysis of U.S. Civil Rotorcraft Accidents by Cost and Injury (1990-1996)
NASA Technical Reports Server (NTRS)
Iseler, Laura; DeMaio, Joe; Rutkowski, Michael (Technical Monitor)
2002-01-01
A study of rotorcraft accidents was conducted to identify safety issues and research areas that might lead to a reduction in rotorcraft accidents and fatalities. The primary source of data was summaries of National Transportation Safety Board (NTSB) accident reports. From 1990 to 1996, the NTSB documented 1396 civil rotorcraft accidents in the United States in which 491 people were killed. The rotorcraft data were compared to airline and general aviation data to determine the relative safety of rotorcraft compared to other segments of the aviation industry. In-depth analysis of the rotorcraft data addressed demographics, mission, and operational factors. Rotorcraft were found to have an accident rate about ten times that of commercial airliners and about the same as that of general aviation. The likelihood that an accident would be fatal was about equal for all three classes of operation. The most dramatic division in rotorcraft accidents is between flights flown by private pilots and those flown by professional pilots. Private pilots, flying low cost aircraft in benign environments, have accidents that are due, in large part, to their own errors. Professional pilots, in contrast, are more likely to have accidents that are a result of exacting missions or use of specialized equipment. For both groups, judgement error is more likely to lead to a fatal accident than are other types of causes. Several approaches to improving the rotorcraft accident rate are recommended. These mostly address improvement in the training of new pilots and improving the safety awareness of private pilots.
Design of Complex BPF with Automatic Digital Tuning Circuit for Low-IF Receivers
NASA Astrophysics Data System (ADS)
Kondo, Hideaki; Sawada, Masaru; Murakami, Norio; Masui, Shoichi
This paper describes the architecture and implementations of an automatic digital tuning circuit for a complex bandpass filter (BPF) in a low-power and low-cost transceiver for applications such as personal authentication and wireless sensor network systems. The architectural design analysis demonstrates that an active RC filter in a low-IF architecture can be at least 47.7% smaller in area than a conventional gm-C filter; in addition, it features a simple implementation of an associated tuning circuit. The principle of simultaneous tuning of both the center frequency and bandwidth through calibration of a capacitor array is illustrated based on an analysis of filter characteristics, and a scalable automatic digital tuning circuit with simple analog blocks and control logic having only 835 gates is introduced. The developed capacitor tuning technique can achieve a tuning error of less than ±3.5% and lower peaking in the passband filter characteristics. An experimental complex BPF using 0.18 µm CMOS technology can successfully reduce the tuning error from an initial value of -20% to less than ±2.5% after tuning. The filter block dimensions are 1.22 mm × 1.01 mm; in measurements of the developed complex BPF with the automatic digital tuning circuit, the current consumption is 705 µA and the image rejection ratio is 40.3 dB. Complete evaluation of the BPF indicates that this technique can be applied to low-power, low-cost transceivers.
NASA Astrophysics Data System (ADS)
Greenough, J. A.; Rider, W. J.
2004-05-01
A numerical study is undertaken comparing a fifth-order version of the weighted essentially non-oscillatory numerical (WENO5) method to a modern piecewise-linear, second-order, version of Godunov's (PLMDE) method for the compressible Euler equations. A series of one-dimensional test problems are examined beginning with classical linear problems and ending with complex shock interactions. The problems considered are: (1) linear advection of a Gaussian pulse in density, (2) Sod's shock tube problem, (3) the "peak" shock tube problem, (4) a version of the Shu and Osher shock entropy wave interaction and (5) the Woodward and Colella interacting shock wave problem. For each problem, run times, density error norms and convergence rates are reported for each method, as produced from a common code test-bed. The linear problem exhibits the advertised convergence rate for both methods as well as the expected large disparity in overall error levels; WENO5 has the smaller errors and an enormous advantage in overall efficiency (in accuracy per unit CPU time). For the nonlinear problems with discontinuities, however, we generally see first-order self-convergence of error, measured against an exact solution or, when an analytic solution is not available, against a converged solution generated on an extremely fine grid. The overall comparison of error levels shows some variation from problem to problem. For Sod's shock tube, PLMDE has nearly half the error, while on the peak problem the errors are nearly the same. For the interacting blast wave problem the two methods again produce a similar level of error, with a slight edge for PLMDE. On the other hand, for the Shu-Osher problem, the errors are similar on the coarser grids but favor WENO5 by a factor of nearly 1.5 on the finer grids used. In all cases, holding mesh resolution constant, PLMDE is less costly in terms of CPU time by approximately a factor of 6. If the CPU cost is taken as fixed, that is, run times are equal for both numerical methods, then PLMDE uniformly produces lower errors than WENO5 for the fixed computation cost on the test problems considered here.
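The error norms and self-convergence rates reported here follow the usual grid-refinement recipe; a minimal sketch (with made-up numbers in the example call) is shown below.

```python
import numpy as np

def l1_error(rho, rho_ref, dx):
    """Discrete L1 error norm of a density field against a reference solution
    sampled on the same grid."""
    return float(np.sum(np.abs(rho - rho_ref)) * dx)

def observed_order(err_h, err_h2):
    """Observed convergence rate from errors on grids with spacing h and h/2."""
    return float(np.log2(err_h / err_h2))

# Example: errors that halve with dx indicate first-order behaviour.
print(observed_order(2.0e-2, 1.0e-2))   # -> 1.0
```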
Neonatal screening for inborn errors of metabolism: cost, yield and outcome.
Pollitt, R J; Green, A; McCabe, C J; Booth, A; Cooper, N J; Leonard, J V; Nicholl, J; Nicholson, P; Tunaley, J R; Virdi, N K
1997-01-01
OBJECTIVES. To systematically review the literature on inborn errors of metabolism, neonatal screening technology and screening programmes in order to analyse the costs and benefits of introducing screening based on tandem mass-spectrometry (tandem MS) for a wide range of disorders of amino acid and organic acid metabolism in the UK. To evaluate screening for cystic fibrosis, Duchenne muscular dystrophy and other disorders which are tested on an individual basis. HOW THE RESEARCH WAS CONDUCTED. Systematic searches were carried out of the literature on inborn errors of metabolism, neonatal screening programmes, tandem MS-based neonatal screening technology, economic evaluations of neonatal screening programmes and psychological aspects of neonatal screening. Background material on the biology of inherited metabolic disease, the basic philosophy, and the history and current status of the UK screening programme was also collected. Relevant papers in the grey literature and recent publications were identified by hand-searching. Each paper was graded. For each disease an aggregate grade for the state of knowledge in six key areas was awarded. Additional data were prospectively collected on activity and costs in UK neonatal screening laboratories, and expert clinical opinion on current treatment modalities and outcomes. These data were used to construct a decision-analysis model of neonatal screening technologies, comparing tandem MS with the existing phenylketonuria screening methods. This model determined the cost per additional case identified and, for each disease, the additional treatment costs per case, and the cost per life-year saved. All costs and benefits were discounted at 6% per annum. One-way sensitivity analysis was performed showing the effect of varying the discount rate, the incidence rate of each disorder, the number of neonates screened and the cost of tandem MS, on the cost per life-year gained. RESEARCH FINDINGS. The UK screening programmes for phenylketonuria and congenital hypothyroidism have largely achieved the expected objectives and are cost-effective. Current concerns are the difficulty of maintaining adequate coverage, perceived organisational weaknesses, and a lack of overview. For many of the organic acid disorders it was necessary to rely on data obtained from clinically-diagnosed cases. Many of these diseases can be treated very effectively and a sensitive screening test was available for most of the diseases. Except for cystic fibrosis, there have been no randomised controlled trials of the overall effectiveness of neonatal screening. Despite the anxiety generated by the screening process, there is strong parental support for screening. The effects of diagnosis through screening on subsequent reproductive behaviour are less clear. Conflicts exist between current concepts and the traditional principles of screening. The availability of effective treatment is not an absolute prerequisite: early diagnosis is of value to the family concerned and, to the extent that it leads to increased use of prenatal diagnosis, may help to reduce the overall burden of disease. Neonatal screening is also of value in diseases which present early but with non-specific symptoms. Indeed, almost all of the diseases considered could merit neonatal screening. The majority of economic evaluations failed to incorporate the health benefits from screening, and therefore failed to address the value of the information which the screening programmes provided to parents.
The marginal cost of changing from present technology to tandem MS would be approximately 0.60 pounds per baby at a workload of 100,000 samples a year, and 0.87 pounds at 50,000 samples per year. The ability to screen for a wider range of diseases would lead to the identification of some 20 additional cases per 100,000 infants screened, giving a laboratory cost per additional diagnosis of 3000 pounds at an annual workload of 100,000 babies per year.(ABSTRACT TRUNCATED)
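The quoted figure follows directly from the marginal cost per baby and the extra detection rate; a quick check using the numbers in the abstract:

```python
marginal_cost_per_baby = 0.60        # pounds, at 100,000 samples per year
babies_screened_per_year = 100_000
extra_cases_detected = 20            # per 100,000 infants screened

cost_per_additional_diagnosis = (marginal_cost_per_baby * babies_screened_per_year
                                 / extra_cases_detected)
print(cost_per_additional_diagnosis)  # 3000.0 (pounds), as quoted above
```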
Text Classification for Assisting Moderators in Online Health Communities
Huh, Jina; Yetisgen-Yildiz, Meliha; Pratt, Wanda
2013-01-01
Objectives Patients increasingly visit online health communities to get help on managing health. The large scale of these online communities makes it impossible for the moderators to engage in all conversations; yet, some conversations need their expertise. Our work explores low-cost text classification methods for this new domain of determining whether a thread in an online health forum needs moderators’ help. Methods We employed a binary classifier on WebMD’s online diabetes community data. To train the classifier, we considered three feature types: (1) word unigram, (2) sentiment analysis features, and (3) thread length. We applied feature selection methods based on χ2 statistics and undersampling to account for unbalanced data. We then performed a qualitative error analysis to investigate the appropriateness of the gold standard. Results Using sentiment analysis features, feature selection methods, and balanced training data increased the AUC value up to 0.75 and the F1-score up to 0.54 compared to the baseline of using word unigrams with no feature selection methods on unbalanced data (0.65 AUC and 0.40 F1-score). The error analysis uncovered additional reasons for why moderators respond to patients’ posts. Discussion We showed how feature selection methods and balanced training data can improve the overall classification performance. We present implications of weighing precision versus recall for assisting moderators of online health communities. Our error analysis uncovered social, legal, and ethical issues around addressing community members’ needs. We also note challenges in producing a gold standard, and discuss potential solutions for addressing these challenges. Conclusion Social media environments provide popular venues in which patients gain health-related information. Our work contributes to understanding scalable solutions for providing moderators’ expertise in these large-scale, social media environments. PMID:24025513
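A sketch of the described pipeline in scikit-learn is shown below (word unigrams, chi-squared feature selection, undersampling of the majority class, and a linear classifier); the classifier choice, feature count, and the assumption that "needs moderator" is the minority class are ours, and evaluation on held-out folds is omitted.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression

def train_moderation_classifier(texts, labels, k_features=2000, seed=0):
    """texts: list of thread texts; labels: 1 = needs moderator (assumed minority),
    0 = otherwise. Undersamples the majority class, extracts word unigrams,
    keeps the k best features by chi-squared, and fits a linear classifier.
    AUC/F1 should be estimated on held-out folds, which is omitted here."""
    y = np.asarray(labels)
    rng = np.random.default_rng(seed)
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
    keep_neg = rng.choice(neg, size=len(pos), replace=False)
    idx = np.concatenate([pos, keep_neg])            # balanced training indices

    vec = CountVectorizer(ngram_range=(1, 1))        # word unigram features
    X = vec.fit_transform([texts[i] for i in idx])
    sel = SelectKBest(chi2, k=min(k_features, X.shape[1])).fit(X, y[idx])
    clf = LogisticRegression(max_iter=1000).fit(sel.transform(X), y[idx])
    return vec, sel, clf
```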
MacNeil Vroomen, Janet; Eekhout, Iris; Dijkgraaf, Marcel G; van Hout, Hein; de Rooij, Sophia E; Heymans, Martijn W; Bosmans, Judith E
2016-11-01
Cost and effect data often have missing data because economic evaluations are frequently added onto clinical studies where cost data are rarely the primary outcome. The objective of this article was to investigate which multiple imputation strategy is most appropriate to use for missing cost-effectiveness data in a randomized controlled trial. Three incomplete data sets were generated from a complete reference data set with 17, 35 and 50 % missing data in effects and costs. The strategies evaluated included complete case analysis (CCA), multiple imputation with predictive mean matching (MI-PMM), MI-PMM on log-transformed costs (log MI-PMM), and a two-step MI. Mean cost and effect estimates, standard errors and incremental net benefits were compared with the results of the analyses on the complete reference data set. The CCA, MI-PMM, and the two-step MI strategy diverged from the results for the reference data set when the amount of missing data increased. In contrast, the estimates of the Log MI-PMM strategy remained stable irrespective of the amount of missing data. MI provided better estimates than CCA in all scenarios. With low amounts of missing data the MI strategies appeared equivalent but we recommend using the log MI-PMM with missing data greater than 35 %.
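For orientation, here is a minimal single-variable sketch of predictive mean matching on log-transformed costs; in practice one would use an established multiple-imputation implementation and repeat the imputation several times, and the donor count and regression form below are assumptions.

```python
import numpy as np

def pmm_impute_log_costs(costs, X, n_donors=5, seed=0):
    """Predictive mean matching for a cost vector with NaNs, on the log scale.
    costs: (n,) array with np.nan for missing values; X: (n, p) complete covariates.
    Returns one completed cost vector (repeat with different seeds for MI)."""
    rng = np.random.default_rng(seed)
    costs = np.asarray(costs, dtype=float)
    obs, mis = ~np.isnan(costs), np.isnan(costs)
    Xd = np.column_stack([np.ones(len(costs)), X])

    # Regress log costs on covariates using the observed cases only.
    beta, *_ = np.linalg.lstsq(Xd[obs], np.log(costs[obs]), rcond=None)
    pred = Xd @ beta                                  # predicted log costs, all rows

    completed = costs.copy()
    for i in np.where(mis)[0]:
        # Donors: observed cases whose predictions are closest to this case's.
        dist = np.abs(pred[obs] - pred[i])
        donors = np.argsort(dist)[:n_donors]
        completed[i] = costs[obs][rng.choice(donors)]  # copy an observed value
    return completed
```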
Tailoring a Human Reliability Analysis to Your Industry Needs
NASA Technical Reports Server (NTRS)
DeMott, D. L.
2016-01-01
Companies at risk of accidents caused by human error that result in catastrophic consequences include: airline industry mishaps, medical malpractice, medication mistakes, aerospace failures, major oil spills, transportation mishaps, power production failures and manufacturing facility incidents. Human Reliability Assessment (HRA) is used to analyze the inherent risk of human behavior or actions introducing errors into the operation of a system or process. These assessments can be used to identify where errors are most likely to arise and the potential risks involved if they do occur. Using the basic concepts of HRA, an evolving group of methodologies are used to meet various industry needs. Determining which methodology or combination of techniques will provide a quality human reliability assessment is a key element to developing effective strategies for understanding and dealing with risks caused by human errors. There are a number of concerns and difficulties in "tailoring" a Human Reliability Assessment (HRA) for different industries. Although a variety of HRA methodologies are available to analyze human error events, determining the most appropriate tools to provide the most useful results can depend on industry specific cultures and requirements. Methodology selection may be based on a variety of factors that include: 1) how people act and react in different industries, 2) expectations based on industry standards, 3) factors that influence how the human errors could occur such as tasks, tools, environment, workplace, support, training and procedure, 4) type and availability of data, 5) how the industry views risk & reliability, and 6) types of emergencies, contingencies and routine tasks. Other considerations for methodology selection should be based on what information is needed from the assessment. If the principal concern is determination of the primary risk factors contributing to the potential human error, a more detailed analysis method may be employed versus a requirement to provide a numerical value as part of a probabilistic risk assessment. Industries involved with humans operating large equipment or transport systems (ex. railroads or airlines) would have more need to address the man machine interface than medical workers administering medications. Human error occurs in every industry; in most cases the consequences are relatively benign and occasionally beneficial. In cases where the results can have disastrous consequences, the use of Human Reliability techniques to identify and classify the risk of human errors allows a company more opportunities to mitigate or eliminate these types of risks and prevent costly tragedies.
Cost effectiveness of ergonomic redesign of electronic motherboard.
Sen, Rabindra Nath; Yeow, Paul H P
2003-09-01
A case study to illustrate the cost effectiveness of ergonomic redesign of an electronic motherboard was presented. The factory was running at a loss due to the high costs of rejects and poor quality and productivity. Subjective assessments and direct observations were made at the factory. Investigation revealed that due to motherboard design errors, the machine had difficulty in placing integrated circuits onto the pads, the operators had much difficulty in manually soldering certain components and much unproductive manual cleaning (MC) was required. Consequently, there were high rejects and occupational health and safety (OHS) problems, such as boredom and work discomfort. Also, much labour and machine costs were spent on repairs. The motherboard was redesigned to correct the design errors, to allow more components to be machine soldered and to reduce MC. This eliminated rejects, reduced repairs, saved US$581,495/year and improved operators' OHS. The customer also saved US$142,105/year on loss of business.
Forecasting Construction Cost Index based on visibility graph: A network approach
NASA Astrophysics Data System (ADS)
Zhang, Rong; Ashuri, Baabak; Shyr, Yu; Deng, Yong
2018-03-01
Engineering News-Record (ENR), a professional magazine in the field of global construction engineering, publishes the Construction Cost Index (CCI) every month. Cost estimators and contractors assess projects, arrange budgets and prepare bids by forecasting CCI. However, fluctuations and uncertainties in CCI lead to unreliable estimates from time to time. This paper aims at achieving more accurate predictions of CCI based on a network approach in which the time series is first converted into a visibility graph and future values are forecast using link prediction. According to the experimental results, the proposed method shows satisfactory performance since the error measures are acceptable. Compared with other methods, the proposed method is easier to implement and is able to forecast CCI with smaller errors. The results indicate that the proposed method is efficient in providing considerably accurate CCI predictions, which will contribute to construction engineering by assisting individuals and organizations in reducing costs and preparing project schedules.
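The natural visibility graph construction that underlies this approach is simple to state: two time points are linked whenever no intermediate point rises above the straight line joining them. A minimal sketch (without the link-prediction forecasting step) follows.

```python
import numpy as np

def visibility_graph(series):
    """Natural visibility graph of a 1-D time series.
    Nodes are time indices; (a, b) is an edge if every intermediate point c
    lies strictly below the straight line joining (a, y_a) and (b, y_b)."""
    y = np.asarray(series, dtype=float)
    n = len(y)
    edges = set()
    for a in range(n):
        for b in range(a + 1, n):
            visible = all(
                y[c] < y[a] + (y[b] - y[a]) * (c - a) / (b - a)
                for c in range(a + 1, b)
            )
            if visible:
                edges.add((a, b))
    return edges

print(visibility_graph([3.0, 1.0, 2.0, 4.0]))
```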
NASA Astrophysics Data System (ADS)
Saga, R. S.; Jauhari, W. A.; Laksono, P. W.
2017-11-01
This paper presents an integrated inventory model which consists of a single vendor and a single buyer. The buyer managed its inventory periodically and ordered products from the vendor to satisfy the end customer’s demand, where the annual demand and the ordering cost were modeled as fuzzy quantities. The buyer used a service level constraint instead of a stock-out cost term, so that the stock-out level per cycle was bounded. Then, the vendor produced and delivered products to the buyer. The vendor had the option of committing an investment to reduce the setup cost. However, the vendor’s production process was imperfect, so the delivered lot contained some defective products. Moreover, the buyer’s inspection process was not error-free, since the inspector could be mistaken in categorizing the product’s quality. The objective was to find the optimum values of the review period, the setup cost, and the number of deliveries in one production cycle that minimize the joint total cost. Furthermore, an algorithm and a numerical example were provided to illustrate the application of the model.
NASA Astrophysics Data System (ADS)
Fillion, Anthony; Bocquet, Marc; Gratton, Serge
2018-04-01
The analysis in nonlinear variational data assimilation is the solution of a non-quadratic minimization. Thus, the analysis efficiency relies on its ability to locate a global minimum of the cost function. If this minimization uses a Gauss-Newton (GN) method, it is critical for the starting point to be in the attraction basin of a global minimum. Otherwise the method may converge to a local extremum, which degrades the analysis. With chaotic models, the number of local extrema often increases with the temporal extent of the data assimilation window, making the former condition harder to satisfy. This is unfortunate because the assimilation performance also increases with this temporal extent. However, a quasi-static (QS) minimization may overcome these local extrema. It accomplishes this by gradually injecting the observations in the cost function. This method was introduced by Pires et al. (1996) in a 4D-Var context. We generalize this approach to four-dimensional strong-constraint nonlinear ensemble variational (EnVar) methods, which are based on both a nonlinear variational analysis and the propagation of dynamical error statistics via an ensemble. This forces one to consider the cost function minimizations in the broader context of cycled data assimilation algorithms. We adapt this QS approach to the iterative ensemble Kalman smoother (IEnKS), an exemplar of nonlinear deterministic four-dimensional EnVar methods. Using low-order models, we quantify the positive impact of the QS approach on the IEnKS, especially for long data assimilation windows. We also examine the computational cost of QS implementations and suggest cheaper algorithms.
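A toy illustration of the quasi-static idea, outside any ensemble or Gauss-Newton machinery, is sketched below: observation batches are injected into the cost function one at a time and each minimization is warm-started from the previous solution. The cost function, the batches, and the use of a generic BFGS minimizer are invented for illustration, not the IEnKS itself.

```python
import numpy as np
from scipy.optimize import minimize

def quasi_static_minimize(x0, obs_batches, cost_fn):
    """Gradually inject observation batches into the cost function, warm-starting
    each minimization at the previous solution (quasi-static continuation)."""
    x = np.asarray(x0, dtype=float)
    active = []
    for batch in obs_batches:
        active.extend(batch)                       # add the next observations
        res = minimize(lambda z: cost_fn(z, active), x, method="BFGS")
        x = res.x                                  # starting point for next stage
    return x

# Toy multimodal cost (illustrative): several observations of a nonlinear map.
def cost_fn(z, obs):
    return sum((np.sin(3 * z[0]) + z[0] - y) ** 2 for y in obs) + 0.1 * z[0] ** 2

x_final = quasi_static_minimize([2.0], [[0.2], [0.25], [0.3]], cost_fn)
```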
Error modeling and analysis of star cameras for a class of 1U spacecraft
NASA Astrophysics Data System (ADS)
Fowler, David M.
As spacecraft today become increasingly smaller, the demand for smaller components and sensors rises as well. The smartphone, a cutting-edge consumer technology, has impressive collections of both sensors and processing capabilities and may have the potential to fill this demand in the spacecraft market. If the technologies of a smartphone can be used in space, the cost of building miniature satellites would drop significantly and give a boost to the aerospace and scientific communities. Concentrating on the problem of spacecraft orientation, this study lays the groundwork for determining the capabilities of a smartphone camera when acting as a star camera. Orientations determined from star images taken with a smartphone camera are compared to those of higher quality cameras in order to determine the associated accuracies. The results of the study reveal the abilities of low-cost off-the-shelf imagers in space and give a starting point for future research in the field. The study began with a complete geometric calibration of each analyzed imager such that all comparisons start from the same base. After the cameras were calibrated, image processing techniques were introduced to correct for atmospheric, lens, and image sensor effects. Orientations for each test image are calculated by identifying the stars captured in each image. Analyses of these orientations allow the overall errors of each camera to be defined and provide insight into the abilities of low-cost imagers.
Cost-effectiveness analysis of a hospital electronic medication management system
Gospodarevskaya, Elena; Li, Ling; Richardson, Katrina L; Roffe, David; Heywood, Maureen; Day, Richard O; Graves, Nicholas
2015-01-01
Objective To conduct a cost–effectiveness analysis of a hospital electronic medication management system (eMMS). Methods We compared costs and benefits of paper-based prescribing with a commercial eMMS (CSC MedChart) on one cardiology ward in a major 326-bed teaching hospital, assuming a 15-year time horizon and a health system perspective. The eMMS implementation and operating costs were obtained from the study site. We used data on eMMS effectiveness in reducing potential adverse drug events (ADEs), and potential ADEs intercepted, based on review of 1 202 patient charts before (n = 801) and after (n = 401) eMMS. These were combined with published estimates of actual ADEs and their costs. Results The rate of potential ADEs following eMMS fell from 0.17 per admission to 0.05; a reduction of 71%. The annualized eMMS implementation, maintenance, and operating costs for the cardiology ward were A$61 741 (US$55 296). The estimated reduction in ADEs post eMMS was approximately 80 actual ADEs per year. The reduced costs associated with these ADEs were more than sufficient to offset the costs of the eMMS. Estimated savings resulting from eMMS implementation were A$63–66 (US$56–59) per admission (A$97 740–$102 000 per annum for this ward). Sensitivity analyses demonstrated results were robust when both eMMS effectiveness and costs of actual ADEs were varied substantially. Conclusion The eMMS within this setting was more effective and less expensive than paper-based prescribing. Comparison with the few previous full economic evaluations available suggests a marked improvement in the cost–effectiveness of eMMS, largely driven by increased effectiveness of contemporary eMMs in reducing medication errors. PMID:25670756
Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.
Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
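The balanced one-factor ANOVA estimator referred to above can be written in a few lines; the sketch below returns the systematic (inter-patient) and random (intra-patient) standard deviations, with the data layout assumed rather than taken from the note.

```python
import numpy as np

def variance_components(errors):
    """errors: (m, n) setup errors for m patients with n fractions each (balanced).
    Returns (systematic_sd, random_sd) from one-way random-effects ANOVA."""
    x = np.asarray(errors, dtype=float)
    m, n = x.shape
    patient_means = x.mean(axis=1)
    grand_mean = x.mean()

    ms_between = n * np.sum((patient_means - grand_mean) ** 2) / (m - 1)
    ms_within = np.sum((x - patient_means[:, None]) ** 2) / (m * (n - 1))

    sigma2_random = ms_within                          # intra-patient (random)
    sigma2_systematic = max((ms_between - ms_within) / n, 0.0)  # inter-patient
    return np.sqrt(sigma2_systematic), np.sqrt(sigma2_random)
```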
The Effect of N-3 on N-2 Repetition Costs in Task Switching
ERIC Educational Resources Information Center
Schuch, Stefanie; Grange, James A.
2015-01-01
N-2 task repetition cost is the response time and error cost of returning to a task recently performed after one intervening trial (i.e., an ABA task sequence) compared with returning to a task not recently performed (i.e., a CBA task sequence). This cost is considered a robust measure of inhibitory control during task switching. The present article…
Modeling the topography of shallow braided rivers using Structure-from-Motion photogrammetry
NASA Astrophysics Data System (ADS)
Javernick, L.; Brasington, J.; Caruso, B.
2014-05-01
Recent advances in computer vision and image analysis have led to the development of a novel, fully automated photogrammetric method to generate dense 3d point cloud data. This approach, termed Structure-from-Motion or SfM, requires only limited ground-control and is ideally suited to imagery obtained from low-cost, non-metric cameras acquired either at close-range or using aerial platforms. Terrain models generated using SfM have begun to emerge recently and with a growing spectrum of software now available, there is an urgent need to provide a robust quality assessment of the data products generated using standard field and computational workflows. To address this demand, we present a detailed error analysis of sub-meter resolution terrain models of two contiguous reaches (1.6 and 1.7 km long) of the braided Ahuriri River, New Zealand, generated using SfM. A six stage methodology is described, involving: i) hand-held image acquisition from an aerial platform, ii) 3d point cloud extraction modeling using Agisoft PhotoScan, iii) georeferencing on a redundant network of GPS-surveyed ground-control points, iv) point cloud filtering to reduce computational demand as well as reduce vegetation noise, v) optical bathymetric modeling of inundated areas; and vi) data fusion and surface modeling to generate sub-meter raster terrain models. Bootstrapped geo-registration as well as extensive distributed GPS and sonar-based bathymetric check-data were used to quantify the quality of the models generated after each processing step. The results obtained provide the first quantified analysis of SfM applied to model the complex terrain of a braided river. Results indicate that geo-registration errors of 0.04 m (planar) and 0.10 m (elevation) and vertical surface errors of 0.10 m in non-vegetation areas can be achieved from a dataset of photographs taken at 600 m and 800 m above the ground level. These encouraging results suggest that this low-cost, logistically simple method can deliver high quality terrain datasets competitive with those obtained with significantly more expensive laser scanning, and suitable for geomorphic change detection and hydrodynamic modeling.
Continuous Process Improvement Transformation Guidebook
2006-05-01
except full-scale implementation. Error Proofing (Poka Yoke) Finding and correcting defects caused by errors costs more and more as a system or...proofing. Shigeo Shingo introduced the concept of Poka-Yoke at Toyota Motor Corporation. Poka Yoke (pronounced “poh-kah yoh-kay”) translates to “avoid
NASA Astrophysics Data System (ADS)
Bousserez, Nicolas; Henze, Daven; Bowman, Kevin; Liu, Junjie; Jones, Dylan; Keller, Martin; Deng, Feng
2013-04-01
This work presents improved analysis error estimates for 4D-Var systems. From operational NWP models to top-down constraints on trace gas emissions, many of today's data assimilation and inversion systems in atmospheric science rely on variational approaches. This success is due to both the mathematical clarity of these formulations and the availability of computationally efficient minimization algorithms. However, unlike Kalman Filter-based algorithms, these methods do not provide an estimate of the analysis or forecast error covariance matrices, these error statistics being propagated only implicitly by the system. From both a practical (cycling assimilation) and scientific perspective, assessing uncertainties in the solution of the variational problem is critical. For large-scale linear systems, deterministic or randomization approaches can be considered based on the equivalence between the inverse Hessian of the cost function and the covariance matrix of analysis error. For perfectly quadratic systems, like incremental 4D-Var, Lanczos/Conjugate-Gradient algorithms have proven to be most efficient in generating low-rank approximations of the Hessian matrix during the minimization. For weakly non-linear systems though, the Limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS), a quasi-Newton descent algorithm, is usually considered the best method for the minimization. Suitable for large-scale optimization, this method allows one to generate an approximation to the inverse Hessian using the latest m vector/gradient pairs generated during the minimization, m depending upon the available core memory. At each iteration, an initial low-rank approximation to the inverse Hessian has to be provided, which is called preconditioning. The ability of the preconditioner to retain useful information from previous iterations largely determines the efficiency of the algorithm. Here we assess the performance of different preconditioners to estimate the inverse Hessian of a large-scale 4D-Var system. The impact of using the diagonal preconditioners proposed by Gilbert and Le Maréchal (1989) instead of the usual Oren-Spedicato scalar will be first presented. We will also introduce new hybrid methods that combine randomization estimates of the analysis error variance with L-BFGS diagonal updates to improve the inverse Hessian approximation. Results from these new algorithms will be evaluated against standard large ensemble Monte-Carlo simulations. The methods explored here are applied to the problem of inferring global atmospheric CO2 fluxes using remote sensing observations, and are intended to be integrated with the future NASA Carbon Monitoring System.
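For context, the role of the initial diagonal matrix in L-BFGS can be seen in the standard two-loop recursion, sketched below; this is the generic recursion, not the authors' hybrid randomization/L-BFGS scheme.

```python
import numpy as np

def lbfgs_apply_inverse_hessian(grad, s_list, y_list, h0_diag):
    """Two-loop recursion: approximately apply the L-BFGS inverse Hessian to grad,
    using stored pairs s_i = x_{i+1} - x_i, y_i = g_{i+1} - g_i and an initial
    diagonal approximation h0_diag (the role played by the preconditioner)."""
    q = np.array(grad, dtype=float)
    alphas, rhos = [], []
    for s, y in reversed(list(zip(s_list, y_list))):   # newest pair first
        rho = 1.0 / (y @ s)
        alpha = rho * (s @ q)
        q -= alpha * y
        alphas.append(alpha)
        rhos.append(rho)

    r = h0_diag * q                                    # apply the diagonal initial H0
    for (s, y), alpha, rho in zip(zip(s_list, y_list),
                                  reversed(alphas), reversed(rhos)):
        beta = rho * (y @ r)                           # oldest pair first
        r += s * (alpha - beta)
    return r
```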
Wasim, Fatima; Mahmood, Tariq; Ayub, Khurshid
2016-07-28
Density functional theory (DFT) calculations have been performed to study the response of polypyrrole towards nitrate ions in gas and aqueous phases. First, an accurate estimate of interaction energies is obtained by methods calibrated against the gold standard CCSD(T) method. Then, a number of low cost DFT methods are also evaluated for their ability to accurately estimate the binding energies of polymer-nitrate complexes. The low cost methods evaluated here include dispersion corrected potential (DCP), Grimme's D3 correction, counterpoise correction of the B3LYP method, and Minnesota functionals (M05-2X). The interaction energies calculated using the counterpoise (CP) correction and DCP methods at the B3LYP level are in better agreement with the interaction energies calculated using the calibrated methods. The interaction energies of an infinite polymer (polypyrrole) with nitrate ions are calculated by a variety of low cost methods in order to find the associated errors. The electronic and spectroscopic properties of polypyrrole oligomers nPy (where n = 1-9) and nPy-NO3(-) complexes are calculated, and then extrapolated for an infinite polymer through a second degree polynomial fit. Charge analysis, frontier molecular orbital (FMO) analysis and density of state studies also reveal the sensing ability of polypyrrole towards nitrate ions. Interaction energies, charge analysis and density of states analyses illustrate that the response of polypyrrole towards nitrate ions is considerably reduced in the aqueous medium (compared to the gas phase).
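A brief sketch of the extrapolation step mentioned above: an oligomer property is fitted with a second-degree polynomial and extrapolated to the infinite-chain limit. Fitting in 1/n and taking the intercept at 1/n → 0 is an assumption here, and the property values are invented, not the paper's DFT results.

```python
# Hypothetical extrapolation of an oligomer property to the infinite polymer.
import numpy as np

n = np.arange(1, 10)                       # nPy oligomers, n = 1..9
prop = 3.5 + 1.8 / n + 0.6 / n**2          # made-up property values

coeffs = np.polyfit(1.0 / n, prop, deg=2)  # second-degree polynomial in 1/n
infinite_limit = np.polyval(coeffs, 0.0)   # value as 1/n -> 0
print(f"extrapolated infinite-polymer value: {infinite_limit:.3f}")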
Developing appropriate methods for cost-effectiveness analysis of cluster randomized trials.
Gomes, Manuel; Ng, Edmond S-W; Grieve, Richard; Nixon, Richard; Carpenter, James; Thompson, Simon G
2012-01-01
Cost-effectiveness analyses (CEAs) may use data from cluster randomized trials (CRTs), where the unit of randomization is the cluster, not the individual. However, most studies use analytical methods that ignore clustering. This article compares alternative statistical methods for accommodating clustering in CEAs of CRTs. Our simulation study compared the performance of statistical methods for CEAs of CRTs with 2 treatment arms. The study considered a method that ignored clustering--seemingly unrelated regression (SUR) without a robust standard error (SE)--and 4 methods that recognized clustering--SUR and generalized estimating equations (GEEs), both with robust SE, a "2-stage" nonparametric bootstrap (TSB) with shrinkage correction, and a multilevel model (MLM). The base case assumed CRTs with moderate numbers of balanced clusters (20 per arm) and normally distributed costs. Other scenarios included CRTs with few clusters, imbalanced cluster sizes, and skewed costs. Performance was reported as bias, root mean squared error (rMSE), and confidence interval (CI) coverage for estimating incremental net benefits (INBs). We also compared the methods in a case study. Each method reported low levels of bias. Without the robust SE, SUR gave poor CI coverage (base case: 0.89 v. nominal level: 0.95). The MLM and TSB performed well in each scenario (CI coverage, 0.92-0.95). With few clusters, the GEE and SUR (with robust SE) had coverage below 0.90. In the case study, the mean INBs were similar across all methods, but ignoring clustering underestimated statistical uncertainty and the value of further research. MLMs and the TSB are appropriate analytical methods for CEAs of CRTs with the characteristics described. SUR and GEE are not recommended for studies with few clusters.
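A minimal sketch of one of the approaches recommended above: a multilevel (random-intercept) model fitted to individual-level net benefit, so that the arm coefficient estimates the incremental net benefit with clustering-aware uncertainty. Column names, the willingness-to-pay value, and the simulated data are assumptions, not the authors' simulation code.

```python
# Random-intercept model for incremental net benefit in a two-arm CRT.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
clusters = np.repeat(np.arange(40), 25)           # 20 clusters per arm
arm = (clusters >= 20).astype(int)
u = rng.normal(0, 200, 40)[clusters]              # cluster-level random effect
cost = 1000 + 150 * arm + u + rng.normal(0, 300, clusters.size)
qaly = 0.70 + 0.02 * arm + rng.normal(0, 0.05, clusters.size)

lam = 20000                                       # willingness to pay per QALY
df = pd.DataFrame({"cluster": clusters, "arm": arm,
                   "nb": lam * qaly - cost})      # individual net benefit

# The 'arm' coefficient is the estimated INB; its SE respects clustering.
mlm = smf.mixedlm("nb ~ arm", df, groups=df["cluster"]).fit()
print(mlm.summary())
```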
Troëng, T; Bergqvist, D; Norrving, B; Ahari, A
1999-07-01
To study possible relations between indications, contraindications and surgical technique and stroke and/or death within 30 days of carotid endarterectomy (CEA). Analysis of hospital records for patients identified in a national vascular registry. During 1995-1996, 1518 patients were reported to the Swedish Vascular Registry - Swedvasc. Among these, the sixty-five with a stroke and/or death within 30 days were selected for study. Complete surgical records were reviewed by three approved reviewers using predetermined criteria for indications and possible errors. An error of surgical technique or postoperative management was found in eleven patients (17%). In six cases (9%) the indication was inappropriate or there was an obvious contraindication. The indication was questionable in fourteen (21.5%). Half of the patients (52.5%) had surgery for an appropriate indication, and no contraindication or error in surgical technique or management was identified. More than half the complications of CEA represent the "method cost", i.e. the indication, risk and surgical technique were correct. However, the stroke and/or death rate might be reduced if all operations conformed to agreed criteria. Copyright 1999 W.B. Saunders Company Ltd.
A Kalman Filter Implementation for Precision Improvement in Low-Cost GPS Positioning of Tractors
Gomez-Gil, Jaime; Ruiz-Gonzalez, Ruben; Alonso-Garcia, Sergio; Gomez-Gil, Francisco Javier
2013-01-01
Low-cost GPS receivers provide geodetic positioning information using the NMEA protocol, usually with eight digits for latitude and nine digits for longitude. When these geodetic coordinates are converted into Cartesian coordinates, the positions fit in a quantization grid of some decimeters in size, the dimensions of which vary depending on the point of the terrestrial surface. The aim of this study is to reduce the quantization errors of some low-cost GPS receivers by using a Kalman filter. Kinematic tractor model equations were employed to particularize the filter, which was tuned by applying Monte Carlo techniques to eighteen straight trajectories, to select the covariance matrices that produced the lowest Root Mean Square Error in these trajectories. Filter performance was tested by using straight tractor paths, which were either simulated or real trajectories acquired by a GPS receiver. The results show that the filter can reduce the quantization error in distance by around 43%. Moreover, it reduces the standard deviation of the heading by 75%. Data suggest that the proposed filter can satisfactorily preprocess the low-cost GPS receiver data when used in an assistance guidance GPS system for tractors. It could also be useful to smooth tractor GPS trajectories that are sharpened when the tractor moves over rough terrain. PMID:24217355
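A minimal sketch of the filtering idea described above: a Kalman filter smooths positions that sit on a decimetre-scale quantization grid. The paper uses kinematic tractor model equations and Monte Carlo tuning; the constant-velocity state model and covariance values below are simplifying assumptions.

```python
# Constant-velocity Kalman filter smoothing quantized GPS fixes.
import numpy as np

def kalman_smooth(z, dt=0.2, q=0.05, r=0.15):
    """z: (N, 2) quantized x/y positions; returns filtered positions."""
    F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                  [0, 0, 1, 0], [0, 0, 0, 1]])        # state: x, y, vx, vy
    Hm = np.array([[1, 0, 0, 0], [0, 1, 0, 0]])       # only x, y are observed
    Q = q * np.eye(4)                                 # process noise
    R = r ** 2 * np.eye(2)                            # measurement noise
    x = np.array([z[0, 0], z[0, 1], 0.0, 0.0])
    P = np.eye(4)
    out = []
    for zk in z:
        x, P = F @ x, F @ P @ F.T + Q                 # predict
        S = Hm @ P @ Hm.T + R
        K = P @ Hm.T @ np.linalg.inv(S)               # Kalman gain
        x = x + K @ (zk - Hm @ x)                     # update
        P = (np.eye(4) - K @ Hm) @ P
        out.append(x[:2])
    return np.array(out)

# Straight trajectory with a ~0.3 m quantization grid, as described above
t = np.arange(0, 60, 0.2)
true = np.c_[1.5 * t, 0.5 * t]
quantized = np.round(true / 0.3) * 0.3
smoothed = kalman_smooth(quantized)
print("mean position error (m):", np.sqrt(((smoothed - true) ** 2).sum(1)).mean())
```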
Learning-based landmarks detection for osteoporosis analysis
NASA Astrophysics Data System (ADS)
Cheng, Erkang; Zhu, Ling; Yang, Jie; Azhari, Azhari; Sitam, Suhardjo; Liang, Xin; Megalooikonomou, Vasileios; Ling, Haibin
2016-03-01
Osteoporosis is a common cause of broken bones among senior citizens. Early diagnosis of osteoporosis requires routine examination, which may be costly for patients. A potential low-cost alternative is to identify senior citizens at high risk of osteoporosis by pre-screening during routine dental examination. Osteoporosis analysis using dental radiographs therefore serves as a key step in routine dental examination. The aim of this study is to localize landmarks in dental radiographs that help to assess the evidence of osteoporosis. We collect eight landmarks that are critical in osteoporosis analysis, and our goal is to localize these landmarks automatically in a given dental radiographic image. To address challenges such as large variations in appearance across subjects, we formulate the task as a multi-class classification problem. A hybrid feature pool is used to represent these landmarks, and a random forest is used to fuse the hybrid feature representation for the discriminative classification problem. In the experiments, we also evaluate the performance of individual feature components and of the hybrid fused feature. Our proposed method achieves an average detection error of 2.9 mm.
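A short sketch of the classification set-up described above: a hybrid feature vector per candidate location is fused by a random forest that assigns one of the landmark classes. The features, labels, and class count (eight landmarks plus background) are synthetic stand-ins, not the study's data.

```python
# Multi-class random forest over stand-in hybrid features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n_samples, n_features, n_classes = 2000, 64, 9      # 8 landmarks + background
X = rng.normal(size=(n_samples, n_features))        # stand-in hybrid features
y = rng.integers(0, n_classes, n_samples)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))  # ~chance on random data
```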
Impact of Robotic Antineoplastic Preparation on Safety, Workflow, and Costs
Seger, Andrew C.; Churchill, William W.; Keohane, Carol A.; Belisle, Caryn D.; Wong, Stephanie T.; Sylvester, Katelyn W.; Chesnick, Megan A.; Burdick, Elisabeth; Wien, Matt F.; Cotugno, Michael C.; Bates, David W.; Rothschild, Jeffrey M.
2012-01-01
Purpose: Antineoplastic preparation presents unique safety concerns and consumes significant pharmacy staff time and costs. Robotic antineoplastic and adjuvant medication compounding may provide incremental safety and efficiency advantages compared with standard pharmacy practices. Methods: We conducted a direct observation trial in an academic medical center pharmacy to compare the effects of usual/manual antineoplastic and adjuvant drug preparation (baseline period) with robotic preparation (intervention period). The primary outcomes were serious medication errors and staff safety events with the potential for harm of patients and staff, respectively. Secondary outcomes included medication accuracy determined by gravimetric techniques, medication preparation time, and the costs of both ancillary materials used during drug preparation and personnel time. Results: Among 1,421 and 972 observed medication preparations, we found nine (0.7%) and seven (0.7%) serious medication errors (P = .8) and 73 (5.1%) and 28 (2.9%) staff safety events (P = .007) in the baseline and intervention periods, respectively. Drugs failed accuracy measurements in 12.5% (23 of 184) and 0.9% (one of 110) of preparations in the baseline and intervention periods, respectively (P < .001). Mean drug preparation time increased by 47% when using the robot (P = .009). Labor costs were similar in both study periods, although the ancillary material costs decreased by 56% in the intervention period (P < .001). Conclusion: Although robotically prepared antineoplastic and adjuvant medications did not reduce serious medication errors, both staff safety and accuracy of medication preparation were improved significantly. Future studies are necessary to address the overall cost effectiveness of these robotic implementations. PMID:23598843
NASA Technical Reports Server (NTRS)
Prive, N. C.; Errico, R. M.; Tai, K.-S.
2013-01-01
The Global Modeling and Assimilation Office (GMAO) observing system simulation experiment (OSSE) framework is used to explore the response of analysis error and forecast skill to observation quality. In an OSSE, synthetic observations may be created that have much smaller error than real observations, and precisely quantified error may be applied to these synthetic observations. Three experiments are performed in which synthetic observations with magnitudes of applied observation error that vary from zero to twice the estimated realistic error are ingested into the Goddard Earth Observing System Model (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation for a one-month period representing July. The analysis increment and observation innovation are strongly impacted by observation error, with much larger variances for increased observation error. The analysis quality is degraded by increased observation error, but the change in root-mean-square error of the analysis state is small relative to the total analysis error. Surprisingly, in the 120-hour forecast, increased observation error yields only a slight decline in forecast skill in the extratropics and no discernible degradation of forecast skill in the tropics.
Cost-effectiveness of the stream-gaging program in Nebraska
Engel, G.B.; Wahl, K.L.; Boohar, J.A.
1984-01-01
This report documents the results of a study of the cost-effectiveness of the streamflow information program in Nebraska. Presently, 145 continuous surface-water stations are operated in Nebraska on a budget of $908,500. Data uses and funding sources are identified for each of the 145 stations. Data from most stations have multiple uses. All stations have sufficient justification for continuation, but two stations primarily are used in short-term research studies; their continued operation needs to be evaluated when the research studies end. The present measurement frequency produces an average standard error for instantaneous discharges of about 12 percent, including periods when stage data are missing. Altering the travel routes and the measurement frequency will allow a reduction in standard error of about 1 percent with the present budget. Standard error could be reduced to about 8 percent if lost record could be eliminated. A minimum budget of $822,000 is required to operate the present network, but operations at that funding level would result in an increase in standard error to about 16 percent. The maximum budget analyzed was $1,363,000, which would result in an average standard error of 6 percent. (USGS)
Souvestre, P A; Landrock, C K; Blaber, A P
2008-08-01
Human factors centered aviation accident analyses report that skill-based errors are the cause of 80% of all accidents, decision-making related errors of 30%, and perceptual errors of 6%. In-flight decision-making error has long been recognized as a major avenue leading to incidents and accidents. Over the past three decades, tremendous and costly efforts have been made to clarify causation, roles and responsibility, and to elaborate various preventative and curative countermeasures blending state-of-the-art biomedical and technological advances with psychophysiological training strategies. In-flight error statistics have not changed significantly, and a significant number of issues remain unresolved. The Fine Postural System and its corollary, Postural Deficiency Syndrome (PDS), both defined in the 1980s, are respectively neurophysiological and medical diagnostic models that reflect the status of central neural sensory-motor and cognitive regulatory controls. They have been used successfully in complex neurotraumatology and related rehabilitation for over two decades. Analysis of clinical data taken over a ten-year period from acute and chronic post-traumatic PDS patients shows a strong correlation between symptoms commonly exhibited before, alongside, or even after an error, and sensory-motor or PDS-related symptoms. Examples are given of how PDS-related central sensory-motor control dysfunction can be correctly identified and monitored via a neurophysiological ocular-vestibular-postural monitoring system. The data presented provide strong evidence that a specific biomedical assessment methodology can lead to a better understanding of in-flight adaptive neurophysiological, cognitive and perceptual dysfunctional status that could induce in-flight errors. How relevant human factors can be identified and leveraged to maintain optimal performance will also be addressed.
Emergency nurse practitioners: a three part study in clinical and cost effectiveness
Sakr, M; Kendall, R; Angus, J; Saunders, A; Nicholl, J; Wardrope, J
2003-01-01
Aims: To compare the clinical effectiveness and costs of minor injury services provided by nurse practitioners with minor injury care provided by an accident and emergency (A&E) department. Methods: A three-part prospective study in a city where an A&E department was closing and being replaced by a nurse-led minor injury unit (MIU). The first part of the study took a sample of patients attending the A&E department. The second part of the study was a sample of patients from a nurse-led MIU that had replaced the A&E department. In each of these samples the clinical effectiveness was judged by comparing the "gold standard" of a research assessment with the clinical assessment. Primary outcome measures were the number of errors in clinical assessment, treatment, and disposal. The third part of the study used routine data whose collection had been prospectively configured to assess the costs and cost consequences of both models of care. Results: The minor injury unit produced a safe service where the total package of care was equal to or in some cases better than the A&E care. Significant process errors were made in 191 of 1447 (13.2%) patients treated by medical staff in the A&E department and 126 of 1313 (9.6%) patients treated by nurse practitioners in the MIU. Very significant errors were rare (one error). Waiting times were much better at the MIU (mean MIU 19 minutes, A&E department 56.4 minutes). The revenue costs were greater in the MIU (MIU £41.1, A&E department £40.01), and there was a large difference in follow-up rates, with the nurses referring 47% of patients for follow-up and the A&E department referring only 27%. Thus the costs and cost consequences were greater for MIU care compared with A&E care (MIU £12.7 per minor injury case, A&E department £9.66 per minor injury case). Conclusion: A nurse practitioner minor injury service can provide a safe and effective service for the treatment of minor injury. However, the costs of such a service are greater and there seems to be an increased use of outpatient services. PMID:12642530
Projection-Based Reduced Order Modeling for Spacecraft Thermal Analysis
NASA Technical Reports Server (NTRS)
Qian, Jing; Wang, Yi; Song, Hongjun; Pant, Kapil; Peabody, Hume; Ku, Jentung; Butler, Charles D.
2015-01-01
This paper presents a mathematically rigorous, subspace projection-based reduced order modeling (ROM) methodology and an integrated framework to automatically generate reduced order models for spacecraft thermal analysis. Two key steps in the reduced order modeling procedure are described: (1) the acquisition of a full-scale spacecraft model in the ordinary differential equation (ODE) and differential algebraic equation (DAE) form to resolve its dynamic thermal behavior; and (2) the ROM to markedly reduce the dimension of the full-scale model. Specifically, proper orthogonal decomposition (POD) in conjunction with discrete empirical interpolation method (DEIM) and trajectory piece-wise linear (TPWL) methods are developed to address the strong nonlinear thermal effects due to coupled conductive and radiative heat transfer in the spacecraft environment. Case studies using NASA-relevant satellite models are undertaken to verify the capability and to assess the computational performance of the ROM technique in terms of speed-up and error relative to the full-scale model. ROM exhibits excellent agreement in spatiotemporal thermal profiles (<0.5% relative error in pertinent time scales) along with salient computational acceleration (up to two orders of magnitude speed-up) over the full-scale analysis. These findings establish the feasibility of ROM to perform rational and computationally affordable thermal analysis, develop reliable thermal control strategies for spacecraft, and greatly reduce the development cycle times and costs.
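A toy sketch of the POD step only (the DEIM/TPWL treatment of the nonlinear radiative terms is omitted): snapshots of a full-order state are collected, an SVD extracts the dominant modes, and the system is Galerkin-projected onto them. The linear toy ODE below stands in for the full spacecraft thermal model and is not the NASA framework.

```python
# POD-Galerkin reduction of a stand-in linear thermal model dT/dt = A T + f.
import numpy as np

n, m, k = 500, 120, 10                      # full dimension, snapshots, modes
rng = np.random.default_rng(4)
A = -np.diag(rng.uniform(0.5, 2.0, n))      # stand-in conduction operator
f = rng.normal(size=n)

# Collect snapshots by explicit Euler time stepping of the full model
T = np.zeros(n)
dt = 0.01
snapshots = []
for _ in range(m):
    T = T + dt * (A @ T + f)
    snapshots.append(T.copy())
S = np.array(snapshots).T                   # n x m snapshot matrix

U, s, _ = np.linalg.svd(S, full_matrices=False)
Phi = U[:, :k]                              # POD basis (dominant modes)
A_r, f_r = Phi.T @ A @ Phi, Phi.T @ f       # reduced operators (k x k, k)

# March the reduced model and compare with the full state at the final time
Tr = np.zeros(k)
for _ in range(m):
    Tr = Tr + dt * (A_r @ Tr + f_r)
print("relative error:", np.linalg.norm(Phi @ Tr - T) / np.linalg.norm(T))
```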
Hughes, Paul; Deng, Wenjie; Olson, Scott C; Coombs, Robert W; Chung, Michael H; Frenkel, Lisa M
2016-03-01
Accurate analysis of minor populations of drug-resistant HIV requires analysis of a sufficient number of viral templates. We assessed the effect of experimental conditions on the analysis of HIV pol 454 pyrosequences generated from plasma using (1) the "Insertion-deletion (indel) and Carry Forward Correction" (ICC) pipeline, which clusters sequence reads using a nonsubstitution approach and can correct for indels and carry forward errors, and (2) the "Primer Identification (ID)" method, which facilitates construction of a consensus sequence to correct for sequencing errors and allelic skewing. The Primer ID and ICC methods produced similar estimates of viral diversity, but differed in the number of sequence variants generated. Sequence preparation for ICC was comparably simple, but was limited by an inability to assess the number of templates analyzed and allelic skewing. The more costly Primer ID method corrected for allelic skewing and provided the number of viral templates analyzed, which revealed that amplifiable HIV templates varied across specimens and did not correlate with clinical viral load. This latter observation highlights the value of the Primer ID method, which by determining the number of templates amplified, enables more accurate assessment of minority species in the virus population, which may be relevant to prescribing effective antiretroviral therapy.
Rhythmic chaos: irregularities of computer ECG diagnosis.
Wang, Yi-Ting Laureen; Seow, Swee-Chong; Singh, Devinder; Poh, Kian-Keong; Chai, Ping
2017-09-01
Diagnostic errors can occur when physicians rely solely on computer electrocardiogram interpretation. Cardiologists often receive referrals for computer misdiagnoses of atrial fibrillation. Patients may have been inappropriately anticoagulated for pseudo atrial fibrillation. Anticoagulation carries significant risks, and such errors may carry a high cost. Have we become overreliant on machines and technology? In this article, we illustrate three such cases and briefly discuss how we can reduce these errors. Copyright: © Singapore Medical Association.
Cost-effective surgical registration using consumer depth cameras
NASA Astrophysics Data System (ADS)
Potter, Michael; Yaniv, Ziv
2016-03-01
The high costs associated with technological innovation have been previously identified as both a major contributor to the rise of health care expenses, and as a limitation for widespread adoption of new technologies. In this work we evaluate the use of two consumer grade depth cameras, the Microsoft Kinect v1 and 3DSystems Sense, as a means for acquiring point clouds for registration. These devices have the potential to replace professional grade laser range scanning devices in medical interventions that do not require sub-millimetric registration accuracy, and may do so at a significantly reduced cost. To facilitate the use of these devices we have developed a near real-time (1-4 sec/frame) rigid registration framework combining several alignment heuristics with the Iterative Closest Point (ICP) algorithm. Using nearest neighbor registration error as our evaluation criterion we found the optimal scanning distances for the Sense and Kinect to be 50-60cm and 70-80cm respectively. When imaging a skull phantom at these distances, RMS error values of 1.35mm and 1.14mm were obtained. The registration framework was then evaluated using cranial MR scans of two subjects. For the first subject, the RMS error using the Sense was 1.28 +/- 0.01 mm. Using the Kinect this error was 1.24 +/- 0.03 mm. For the second subject, whose MR scan was significantly corrupted by metal implants, the errors increased to 1.44 +/- 0.03 mm and 1.74 +/- 0.06 mm but the system nonetheless performed within acceptable bounds.
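A bare-bones rigid ICP sketch of the kind of loop described above (the paper's framework adds coarse alignment heuristics and runs near real time; those are omitted here). Nearest neighbours come from a k-d tree and the best-fit rotation from the SVD (Kabsch) solution; the synthetic point cloud is an assumption.

```python
# Minimal rigid ICP: nearest-neighbour matching + Kabsch alignment.
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=30):
    """Rigidly align source (N,3) to target (M,3); returns aligned source."""
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iters):
        _, idx = tree.query(src)                  # nearest-neighbour matches
        tgt = target[idx]
        mu_s, mu_t = src.mean(0), tgt.mean(0)
        H = (src - mu_s).T @ (tgt - mu_t)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                        # proper rotation (det = +1)
        t = mu_t - R @ mu_s
        src = src @ R.T + t
    return src

# Synthetic check: a rotated, shifted copy of a point cloud realigns to the original
rng = np.random.default_rng(5)
cloud = rng.uniform(-1, 1, (1000, 3))
theta = 0.2
R0 = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
moved = cloud @ R0.T + np.array([0.05, -0.02, 0.03])
aligned = icp(moved, cloud)
print("RMS error:", np.sqrt(((aligned - cloud) ** 2).sum(1).mean()))
```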
[Organization of safe cost-effective blood transfusion: experience APHM-EFSAM].
Ferrera-Tourenc, V; Dettori, I; Chiaroni, J; Lassale, B
2013-03-01
Blood transfusion safety depends on strict compliance with each step of a process beginning with the order for labile blood products and related immunohematologic testing and ending with administration and follow-up of the recipient. This process is governed by stringent regulatory texts and guidelines. Despite precautions, processing errors are still reported. Analysis of incident reports shows that the most common cause involves patient identification and that most errors occur at two levels, i.e. the entry of patient information and the management of multiple regulatory crosschecks and record-keeping using different systems. The purpose of this report is to describe the collaborative approach implemented by the Établissement français du Sang Alpes-Méditerranée (EFSAM) and the Assistance publique des Hôpitaux de Marseille (APHM) to secure the blood transfusion process and protect interfaces while simplifying and facilitating exchanges. Close cooperation has had a threefold impact, with simplification of administration, improvement of experience feedback, and better management of test ordering. The organization implemented between the two institutions has minimized document redundancy and interfaces between immunohematologic testing and delivery. Collaboration based on experience feedback has improved the level of quality and cost control. In the domain of blood transfusion safety, the threshold of 10⁻⁵ has been reached with regard to the risk of ABO errors in the distribution of concentrated red cells (CRC). In addition, this collaborative organization has created further opportunity for improvement by deploying new methods to identify simplification measures and by controlling demand and usage. Copyright © 2013 Elsevier Masson SAS. All rights reserved.
NASA Technical Reports Server (NTRS)
Remer, D. S.
1977-01-01
The described mathematical model calculates life-cycle costs for projects with operating costs increasing or decreasing linearly with time. The cost factors involved in the life-cycle cost are considered, and the errors resulting from the assumption of constant rather than uniformly varying operating costs are examined. Parameters in the study range from 2 to 30 years, for project life; 0 to 15% per year, for interest rate; and 5 to 90% of the initial operating cost, for the operating cost gradient. A numerical example is presented.
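An illustrative sketch of the comparison examined above: the present value of a linearly growing operating cost (arithmetic-gradient series) versus the approximation that costs stay constant at their initial value. The standard uniform-series (P/A) and arithmetic-gradient (P/G) present-worth factors are used; the numbers are made up and not the paper's results.

```python
# Present-value error from treating a linearly increasing operating cost as constant.
def pv_constant(a0, i, n):
    """PV of a constant end-of-year cost a0 for n years at rate i."""
    return a0 * (1 - (1 + i) ** -n) / i

def pv_with_gradient(a0, g, i, n):
    """PV of a cost starting at a0 and increasing by g each subsequent year."""
    p_a = (1 - (1 + i) ** -n) / i
    p_g = ((1 + i) ** n - i * n - 1) / (i ** 2 * (1 + i) ** n)
    return a0 * p_a + g * p_g

a0, i, n = 100_000, 0.10, 20          # initial cost, interest rate, project life
g = 0.05 * a0                         # gradient: 5% of initial cost per year
exact = pv_with_gradient(a0, g, i, n)
approx = pv_constant(a0, i, n)
print(f"underestimate from assuming constant costs: {1 - approx / exact:.1%}")
```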
Stereotype threat can reduce older adults' memory errors.
Barber, Sarah J; Mather, Mara
2013-01-01
Stereotype threat often incurs the cost of reducing the amount of information that older adults accurately recall. In the current research, we tested whether stereotype threat can also benefit memory. According to the regulatory focus account of stereotype threat, threat induces a prevention focus in which people become concerned with avoiding errors of commission and are sensitive to the presence or absence of losses within their environment. Because of this, we predicted that stereotype threat might reduce older adults' memory errors. Results were consistent with this prediction. Older adults under stereotype threat had lower intrusion rates during free-recall tests (Experiments 1 and 2). They also reduced their false alarms and adopted more conservative response criteria during a recognition test (Experiment 2). Thus, stereotype threat can decrease older adults' false memories, albeit at the cost of fewer veridical memories, as well.
Object-based image analysis for cadastral mapping using satellite images
NASA Astrophysics Data System (ADS)
Kohli, D.; Crommelinck, S.; Bennett, R.; Koeva, M.; Lemmen, C.
2017-10-01
Cadasters together with land registry form a core ingredient of any land administration system. Cadastral maps comprise the extent, ownership and value of land, which are essential for recording and updating land records. Traditional methods for cadastral surveying and mapping often prove to be labor-, cost- and time-intensive; alternative approaches are thus being researched for creating such maps. With the advent of very high resolution (VHR) imagery, satellite remote sensing offers a tremendous opportunity for (semi-)automation of cadastral boundary detection. In this paper, we explore the potential of the object-based image analysis (OBIA) approach for this purpose by applying two segmentation methods, i.e. MRS (multi-resolution segmentation) and ESP (estimation of scale parameter), to identify visible cadastral boundaries. Results show that a balance between a high percentage of completeness and correctness is hard to achieve: a low error of commission often comes with a high error of omission. However, we conclude that the resulting segments/land-use polygons can potentially be used as a base for further aggregation into tenure polygons using participatory mapping.
2016-04-30
costs of new defense systems. An inappropriate price index can introduce errors in both development of cost estimating relationships (CERs) and in ... indexes derived from CERs. These indexes isolate changes in price due to factors other than changes in quality over time. We develop a “Baseline” CER ... The hedonic index application has commonalities with cost estimating relationships (CERs), which also model system costs as a function of quality
Qureshi, N A; Neyaz, Y; Khoja, T; Magzoub, M A; Haycox, A; Walley, T
2011-02-01
Medication errors are globally huge in magnitude and associated with high morbidity and mortality together with high costs and legal problems. Medication errors are caused by multiple factors related to health providers, consumers and health system, but most prescribing errors are preventable. This paper is the third of 3 review articles that form the background for a series of 5 interconnected studies of prescribing patterns and medication errors in the public and private primary health care sectors of Saudi Arabia. A MEDLINE search was conducted to identify papers published in peer-reviewed journals over the previous 3 decades. The paper reviews the etiology, prevention strategies, reporting mechanisms and the myriad consequences of medication errors.
Design and Field Test of a WSN Platform Prototype for Long-Term Environmental Monitoring
Lazarescu, Mihai T.
2015-01-01
Long-term wildfire monitoring using distributed in situ temperature sensors is an accurate, yet demanding environmental monitoring application, which requires long-life, low-maintenance, low-cost sensors and a simple, fast, error-proof deployment procedure. We present in this paper the most important design considerations and optimizations of all elements of a low-cost WSN platform prototype for long-term, low-maintenance pervasive wildfire monitoring, its preparation for a nearly three-month field test, the analysis of the causes of failure during the test and the lessons learned for platform improvement. The main components of the total cost of the platform (nodes, deployment and maintenance) are carefully analyzed and optimized for this application. The gateways are designed to operate with resources that are generally used for sensor nodes, while the requirements and cost of the sensor nodes are significantly lower. We define and test in simulation and in the field experiment a simple, but effective communication protocol for this application. It helps to lower the cost of the nodes and field deployment procedure, while extending the theoretical lifetime of the sensor nodes to over 16 years on a single 1 Ah lithium battery. PMID:25912349
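A back-of-envelope check of the lifetime claim above: the average current a node may draw to last about 16 years on a 1 Ah lithium cell. Self-discharge, capacity derating, and the platform's actual duty-cycling details are ignored; this is only an order-of-magnitude sanity check.

```python
# Average current budget implied by a 16-year lifetime on a 1 Ah cell.
capacity_mAh = 1000
years = 16
hours = years * 365.25 * 24
avg_current_uA = capacity_mAh / hours * 1000
print(f"average current budget: {avg_current_uA:.1f} uA")   # ~7 uA
```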
Clinical consequences and economic costs of untreated obstructive sleep apnea syndrome.
Knauert, Melissa; Naik, Sreelatha; Gillespie, M Boyd; Kryger, Meir
2015-09-01
To provide an overview of the healthcare and societal consequences and costs of untreated obstructive sleep apnea syndrome. PubMed database for English-language studies with no start date restrictions and with an end date of September 2014. A comprehensive literature review was performed to identify all studies that discussed the physiologic, clinical and societal consequences of obstructive sleep apnea syndrome as well as the costs associated with these consequences. There were 106 studies that formed the basis of this analysis. Undiagnosed and untreated obstructive sleep apnea syndrome can lead to abnormal physiology that can have serious implications including increased cardiovascular disease, stroke, metabolic disease, excessive daytime sleepiness, work-place errors, traffic accidents and death. These consequences result in significant economic burden. Both, the health and societal consequences and their costs can be decreased with identification and treatment of sleep apnea. Treatment of obstructive sleep apnea syndrome, despite its consequences, is limited by lack of diagnosis, poor patient acceptance, lack of access to effective therapies, and lack of a variety of effective therapies. Newer modes of therapy that are effective, cost efficient and more accepted by patients need to be developed.
van den Broek, Frank J C; de Graaf, Eelco J R; Dijkgraaf, Marcel G W; Reitsma, Johannes B; Haringsma, Jelle; Timmer, Robin; Weusten, Bas L A M; Gerhards, Michael F; Consten, Esther C J; Schwartz, Matthijs P; Boom, Maarten J; Derksen, Erik J; Bijnen, A Bart; Davids, Paul H P; Hoff, Christiaan; van Dullemen, Hendrik M; Heine, G Dimitri N; van der Linde, Klaas; Jansen, Jeroen M; Mallant-Hent, Rosalie C H; Breumelhof, Ronald; Geldof, Han; Hardwick, James C H; Doornebosch, Pascal G; Depla, Annekatrien C T M; Ernst, Miranda F; van Munster, Ivo P; de Hingh, Ignace H J T; Schoon, Erik J; Bemelman, Willem A; Fockens, Paul; Dekker, Evelien
2009-03-13
Recent non-randomized studies suggest that extended endoscopic mucosal resection (EMR) is equally effective in removing large rectal adenomas as transanal endoscopic microsurgery (TEM). If equally effective, EMR might be a more cost-effective approach as this strategy does not require expensive equipment, general anesthesia and hospital admission. Furthermore, EMR appears to be associated with fewer complications.The aim of this study is to compare the cost-effectiveness and cost-utility of TEM and EMR for the resection of large rectal adenomas. Multicenter randomized trial among 15 hospitals in the Netherlands. Patients with a rectal adenoma > or = 3 cm, located between 1-15 cm ab ano, will be randomized to a TEM- or EMR-treatment strategy. For TEM, patients will be treated under general anesthesia, adenomas will be dissected en-bloc by a full-thickness excision, and patients will be admitted to the hospital. For EMR, no or conscious sedation is used, lesions will be resected through the submucosal plane in a piecemeal fashion, and patients will be discharged from the hospital. Residual adenoma that is visible during the first surveillance endoscopy at 3 months will be removed endoscopically in both treatment strategies and is considered as part of the primary treatment. Primary outcome measure is the proportion of patients with recurrence after 3 months. Secondary outcome measures are: 2) number of days not spent in hospital from initial treatment until 2 years afterwards; 3) major and minor morbidity; 4) disease specific and general quality of life; 5) anorectal function; 6) health care utilization and costs. A cost-effectiveness and cost-utility analysis of EMR against TEM for large rectal adenomas will be performed from a societal perspective with respectively the costs per recurrence free patient and the cost per quality adjusted life year as outcome measures. Based on comparable recurrence rates for TEM and EMR of 3.3% and considering an upper-limit of 10% for EMR to be non-inferior (beta-error 0.2 and one-sided alpha-error 0.05), 89 patients are needed per group. The TREND study is the first randomized trial evaluating whether TEM or EMR is more cost-effective for the treatment of large rectal adenomas. (trialregister.nl) NTR1422.
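An approximate check of the stated sample size using the usual normal-approximation formula for non-inferiority of two proportions. The exact formula the investigators used is not stated, so a small rounding difference (about 88 versus the reported 89 per group) is expected.

```python
# Normal-approximation sample size for a non-inferiority comparison of proportions.
import math
from scipy.stats import norm

p = 0.033                      # anticipated recurrence rate in both arms
margin = 0.10 - p              # non-inferiority margin relative to 10%
alpha, beta = 0.05, 0.20       # one-sided alpha, beta (power = 80%)

z_a, z_b = norm.ppf(1 - alpha), norm.ppf(1 - beta)
n = (z_a + z_b) ** 2 * 2 * p * (1 - p) / margin ** 2
print(math.ceil(n))            # ~88 per group, close to the reported 89
```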
An Evaluation of the Utility and Cost of Computerized Library Catalogs. Final Report.
ERIC Educational Resources Information Center
Dolby, J.L.; And Others
This study analyzes the basic cost factors in the automation of library catalogs, with a separate examination of the influence of typography on the cost of printed catalogs and the use of efficient automatic error detection procedures in processing bibliographic records. The utility of automated catalogs is also studied, based on data from a…
Generalized Redistribute-to-the-Right Algorithm: Application to the Analysis of Censored Cost Data
CHEN, SHUAI; ZHAO, HONGWEI
2013-01-01
Medical cost estimation is a challenging task when censoring of data is present. Although researchers have proposed methods for estimating mean costs, these are often derived from theory and are not always easy to understand. We provide an alternative method, based on a replace-from-the-right algorithm, for estimating mean costs more efficiently. We show that our estimator is equivalent to an existing one that is based on the inverse probability weighting principle and semiparametric efficiency theory. We also propose an alternative method for estimating the survival function of costs, based on the redistribute-to-the-right algorithm, that was originally used for explaining the Kaplan–Meier estimator. We show that this second proposed estimator is equivalent to a simple weighted survival estimator of costs. Finally, we develop a more efficient survival estimator of costs, using the same redistribute-to-the-right principle. This estimator is naturally monotone, more efficient than some existing survival estimators, and has a quite small bias in many realistic settings. We conduct numerical studies to examine the finite sample property of the survival estimators for costs, and show that our new estimator has small mean squared errors when the sample size is not too large. We apply both existing and new estimators to a data example from a randomized cardiovascular clinical trial. PMID:24403869
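A sketch of the simple inverse-probability-weighted (IPW) mean-cost estimator of the kind the proposed algorithm is shown to be equivalent to: costs of uncensored subjects are weighted by the inverse of the Kaplan-Meier estimate of the censoring survival function. The data below are synthetic and the implementation ignores ties, so it is an illustration rather than the authors' estimator.

```python
# IPW mean-cost estimation under right censoring (Bang-Tsiatis-style "simple" form).
import numpy as np

def km_censoring_survival(time, event):
    """Kaplan-Meier survival of the censoring time, evaluated just before each T_i."""
    order = np.argsort(time)
    t, e = time[order], event[order]
    n = len(t)
    at_risk = n - np.arange(n)
    factors = 1.0 - (1 - e) / at_risk          # censoring 'events' where event == 0
    surv = np.cumprod(factors)
    K = np.ones(n)
    K[order] = np.concatenate(([1.0], surv[:-1]))   # left-continuous K(T_i-)
    return K

def ipw_mean_cost(cost, time, event):
    K = km_censoring_survival(time, event)
    return np.mean(event * cost / np.clip(K, 1e-12, None))

rng = np.random.default_rng(6)
n = 2000
true_t = rng.exponential(2.0, n)               # survival times
cens_t = rng.exponential(4.0, n)               # censoring times
time = np.minimum(true_t, cens_t)
event = (true_t <= cens_t).astype(float)
cost = 500 * time + rng.normal(0, 50, n)       # cost accrued up to follow-up
print("IPW mean cost:", ipw_mean_cost(cost, time, event))
print("true mean cost ~", 500 * 2.0)
```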
A Case Study of IV&V Cost Effectiveness
NASA Technical Reports Server (NTRS)
Neal, Ralph D.; McCaugherty, Dan; Joshi, Tulasi; Callahan, John
1997-01-01
This paper looks at the Independent Verification and Validation (IV&V) of NASA's Space Shuttle Day of Launch I-Load Update (DoLILU) project. IV&V is defined. The system's development life cycle is explained. Data collection and analysis are described. DoLILU Issue Tracking Reports (DITRs) authored by IV&V personnel are analyzed to determine the effectiveness of IV&V in finding errors before the code, testing, and integration phase of the software development life cycle. The study's findings are reported along with the limitations of the study and planned future research.
Odlum, Michelle
2016-01-01
Health Information Technology (HIT) adoption by clinicians, including nurses, will lead to reduction in healthcare costs and clinical errors and improve health outcomes. Understanding the importance of technology adoption, the current study utilized the Technology Readiness Index to explore technology perceptions of nursing students. Our analysis identifies factors that may influence perceptions of technology, including decreased optimism for students with clinical experience and increased discomfort of US born students. Our study provides insight to inform training programs to further meet the increasing demands of skilled nursing staff.
AIRSAR Automated Web-based Data Processing and Distribution System
NASA Technical Reports Server (NTRS)
Chu, Anhua; vanZyl, Jakob; Kim, Yunjin; Lou, Yunling; Imel, David; Tung, Wayne; Chapman, Bruce; Durden, Stephen
2005-01-01
In this paper, we present an integrated, end-to-end synthetic aperture radar (SAR) processing system that accepts data processing requests, submits processing jobs, performs quality analysis, delivers and archives processed data. This fully automated SAR processing system utilizes database and internet/intranet web technologies to allow external users to browse and submit data processing requests and receive processed data. It is a cost-effective way to manage a robust SAR processing and archival system. The integration of these functions has reduced operator errors and increased processor throughput dramatically.
NASA Astrophysics Data System (ADS)
Soto-López, Carlos D.; Meixner, Thomas; Ferré, Ty P. A.
2011-12-01
From its inception in the mid-1960s, the use of temperature time series (thermographs) to estimate vertical fluxes has found increasing use in the hydrologic community. Beginning in 2000, researchers have examined the impacts of measurement and parameter uncertainty on the estimates of vertical fluxes. To date, the effects of temperature measurement discretization (resolution), a characteristic of all digital temperature loggers, on the determination of vertical fluxes has not been considered. In this technical note we expand the analysis of recently published work to include the effects of temperature measurement resolution on estimates of vertical fluxes using temperature amplitude and phase shift information. We show that errors in thermal front velocity estimation introduced by discretizing thermographs differ when amplitude or phase shift data are used to estimate vertical fluxes. We also show that under similar circumstances sensor resolution limits the range over which vertical velocities are accurately reproduced more than uncertainty in temperature measurements, uncertainty in sensor separation distance, and uncertainty in the thermal diffusivity combined. These effects represent the baseline error present and thus the best-case scenario when discrete temperature measurements are used to infer vertical fluxes. The errors associated with measurement resolution can be minimized by using the highest-resolution sensors available. But thoughtful experimental design could allow users to select the most cost-effective temperature sensors to fit their measurement needs.
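A short sketch of the resolution effect discussed above: a sinusoidal diel thermograph is discretized to a logger resolution, and the recovered amplitude is compared with the true value. The signal parameters and the least-squares amplitude fit are illustrative assumptions, not the published analysis.

```python
# Effect of logger resolution on the recovered diel temperature amplitude.
import numpy as np

def fitted_amplitude(signal, t, period=86400.0):
    """Least-squares amplitude of the sinusoidal component with the given period."""
    X = np.c_[np.sin(2 * np.pi * t / period), np.cos(2 * np.pi * t / period),
              np.ones_like(t)]
    coef, *_ = np.linalg.lstsq(X, signal, rcond=None)
    return np.hypot(coef[0], coef[1])

t = np.arange(0, 5 * 86400, 900.0)                 # 5 days, 15-min samples
true_amp = 0.25                                    # deep sensor: small amplitude
temp = 12.0 + true_amp * np.sin(2 * np.pi * t / 86400)

for resolution in (0.0625, 0.25, 0.5):             # logger resolution in deg C
    quantized = np.round(temp / resolution) * resolution
    amp = fitted_amplitude(quantized, t)
    print(f"resolution {resolution:>6} C -> amplitude error "
          f"{100 * (amp - true_amp) / true_amp:+.1f}%")
```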
Gonzalez, Claudia C; Mon-Williams, Mark; Burke, Melanie R
2015-01-01
Numerous activities require an individual to respond quickly to the correct stimulus. The provision of advance information allows response priming but heightened responses can cause errors (responding too early or reacting to the wrong stimulus). Thus, a balance is required between the online cognitive mechanisms (inhibitory and anticipatory) used to prepare and execute a motor response at the appropriate time. We investigated the use of advance information in 71 participants across four different age groups: (i) children, (ii) young adults, (iii) middle-aged adults, and (iv) older adults. We implemented 'cued' and 'non-cued' conditions to assess age-related changes in saccadic and touch responses to targets in three movement conditions: (a) Eyes only; (b) Hands only; (c) Eyes and Hand. Children made less saccade errors compared to young adults, but they also exhibited longer response times in cued versus non-cued conditions. In contrast, older adults showed faster responses in cued conditions but exhibited more errors. The results indicate that young adults (18-25 years) achieve an optimal balance between anticipation and execution. In contrast, children show benefits (few errors) and costs (slow responses) of good inhibition when preparing a motor response based on advance information; whilst older adults show the benefits and costs associated with a prospective response strategy (i.e., good anticipation).
Thin film concentrator panel development
NASA Technical Reports Server (NTRS)
Zimmerman, D. K.
1982-01-01
The development and testing of a rigid panel concept that utilizes a thin film reflective surface for application to a low-cost point-focusing solar concentrator is discussed. It is shown that a thin film reflective surface is acceptable for use on solar concentrators, including 1500°F applications. Additionally, it is shown that a formed steel sheet substrate is a good choice for concentrator panels. The panel has good optical properties, acceptable forming tolerances, an environmentally resistant substrate and stiffeners, and adaptability to production rates ranging from low volume to mass production. Computer simulations of the concentrator optics were run using the selected reflector panel design. Experimentally determined values for reflector surface specularity and reflectivity, along with dimensional data, were used in the analysis. The simulations provided intercept factor and net energy into the aperture as a function of aperture size for different surface errors and pointing errors. Point-source and Sun-source optical tests were also performed.
[Does clinical risk management require a structured conflict management?].
Neumann, Stefan
2015-01-01
A key element of clinical risk management is the analysis of errors causing near misses or patient damage. After analyzing the causes and circumstances, measures for process improvement have to be taken. Process management, human resource development and other established methods are used. If an interpersonal conflict is a contributory factor to the error, there is usually no structured conflict management available which includes selection criteria for various methods of conflict processing. The European University Viadrina in Frankfurt (Oder) has created a process model for introducing a structured conflict management system which is suitable for hospitals and could fill the gap in the methodological spectrum of clinical risk management. There is initial evidence that a structured conflict management reduces staff fluctuation and hidden conflict costs. This article should be understood as an impulse for discussion on to what extent the range of methods of clinical risk management should be complemented by conflict management.
The Treatment of Capital Costs in Educational Projects
ERIC Educational Resources Information Center
Bezeau, Lawrence
1975-01-01
Failure to account for the cost and depreciation of capital leads to suboptimal investments in education, specifically to excessively capital intensive instructional technologies. This type of error, which is particularly serious when planning for developing countries, can be easily avoided. (Author)
Cost awareness of physicians in intensive care units: a multicentric national study.
Hernu, Romain; Cour, Martin; de la Salle, Sylvie; Robert, Dominique; Argaud, Laurent
2015-08-01
Physicians play an important role in strategies to control health care spending. Being aware of the cost of prescriptions is surely the first step to incorporating cost-consciousness into medical practice. The aim of this study was to evaluate current intensivists' knowledge of the costs of common prescriptions and to identify factors influencing the accuracy of cost estimations. Junior and senior physicians in 99 French intensive care units were asked, by questionnaire, to estimate the true hospital costs of 46 selected prescriptions commonly used in critical care practice. With an 83% response rate, 1092 questionnaires were examined, completed by 575 (53%) and 517 (47%) junior and senior intensivists, respectively. Only 315 (29%) of the overall estimates were within 50% of the true cost. Response errors included a 14,756 ± 301 € underestimation, i.e., -58 ± 1% of the total sum (25,595 €). High-cost drugs (>1000 €) were significantly (p < 0.001) the most underestimated prescriptions (-67 ± 1%). Junior grade physicians underestimated more costs than senior physicians (p < 0.001). Using multivariate analysis, junior physicians [odds ratio (OR), 2.1; 95% confidence interval (95% CI), 1.43-3.08; p = 0.0002] and female gender (OR, 1.4; 95% CI, 1.04-1.89; p = 0.02) were both independently associated with incorrect cost estimations. ICU physicians have a poor awareness of prescriptions costs, especially with regards to high-cost drugs. Considerable emphasis and effort are still required to integrate the cost-containment problem into the daily prescriptions in ICUs.
NASA Astrophysics Data System (ADS)
Zhang, Yunju; Chen, Zhongyi; Guo, Ming; Lin, Shunsheng; Yan, Yinyang
2018-01-01
As power systems grow in capacity and move toward larger generating units and higher voltages, dispatching operations become more frequent and complicated, and the probability of operation error increases. To address the lack of anti-error functions, the fragmentation of scheduling functions, and the low working efficiency of the technical support systems used for regional regulation and integration, this paper proposes an integrated, cloud-computing-based architecture for a power-network dispatching anti-error system. An integrated error-prevention system spanning the Energy Management System (EMS) and the Operation Management System (OMS) has also been constructed. The architecture has good scalability and adaptability; it can improve computational efficiency, reduce system operation and maintenance costs, and enhance regional regulation and anti-error checking capability, with broad development prospects.
Photometric method for determination of acidity constants through integral spectra analysis
NASA Astrophysics Data System (ADS)
Zevatskiy, Yuriy Eduardovich; Ruzanov, Daniil Olegovich; Samoylov, Denis Vladimirovich
2015-04-01
An express method for the determination of acidity constants of organic acids, based on analysis of the integral transmittance vs. pH dependence, is developed. The integral value is registered as the photocurrent of a photometric device simultaneously with potentiometric titration. The proposed method allows pKa to be obtained using only simple, low-cost instrumentation. The optical part of the experimental setup has been optimized by excluding the monochromator. It thus takes only 10-15 min to obtain one pKa value, with an absolute error of less than 0.15 pH units. Application limitations and the reliability of the method have been tested for a series of organic acids of various nature.
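A minimal sketch of the data-analysis step implied above: the integral transmittance vs. pH curve is fitted with a Henderson-Hasselbalch-type two-state sigmoid whose midpoint is pKa. The functional form and the synthetic data are assumptions made for illustration, not the paper's procedure.

```python
# Fit a two-state transmittance vs. pH model to extract pKa.
import numpy as np
from scipy.optimize import curve_fit

def transmittance(ph, t_acid, t_base, pka):
    """Mixture of acid/base forms weighted by the degree of dissociation."""
    frac_base = 1.0 / (1.0 + 10 ** (pka - ph))
    return t_acid + (t_base - t_acid) * frac_base

rng = np.random.default_rng(7)
ph = np.linspace(2.0, 7.0, 25)
data = transmittance(ph, 0.35, 0.80, 4.20) + rng.normal(0, 0.005, ph.size)

popt, pcov = curve_fit(transmittance, ph, data, p0=[0.3, 0.9, 4.0])
pka, pka_err = popt[2], np.sqrt(pcov[2, 2])
print(f"pKa = {pka:.2f} +/- {pka_err:.2f}")
```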
A digital flight control system verification laboratory
NASA Technical Reports Server (NTRS)
De Feo, P.; Saib, S.
1982-01-01
A NASA/FAA program has been established for the verification and validation of digital flight control systems (DFCS), with the primary objective being the development and analysis of automated verification tools. To enhance the capabilities, effectiveness, and ease of use of the test environment, software verification tools can be applied. The tool design includes a static analyzer, an assertion generator, a symbolic executor, a dynamic analysis instrument, and an automated documentation generator. Static and dynamic tools are integrated with error detection capabilities, resulting in a facility that analyzes a representative testbed of DFCS software. Future investigations will focus in particular on increasing the number of software test tools and on a cost-effectiveness assessment.
Design and performance evaluation of a master controller for endovascular catheterization.
Guo, Jin; Guo, Shuxiang; Tamiya, Takashi; Hirata, Hideyuki; Ishihara, Hidenori
2016-01-01
It is difficult to manipulate a flexible catheter to target a position within a patient's complicated and delicate vessels. However, few researchers focused on the controller designs with much consideration of the natural catheter manipulation skills obtained from manual catheterization. Also, the existing catheter motion measurement methods probably lead to the difficulties in designing the force feedback device. Additionally, the commercially available systems are too expensive which makes them cost prohibitive to most hospitals. This paper presents a simple and cost-effective master controller for endovascular catheterization that can allow the interventionalists to apply the conventional pull, push and twist of the catheter used in current practice. A catheter-sensing unit (used to measure the motion of the catheter) and a force feedback unit (used to provide a sense of resistance force) are both presented. A camera was used to allow a contactless measurement avoiding additional friction, and the force feedback in the axial direction was provided by the magnetic force generated between the permanent magnets and the powered coil. Performance evaluation of the controller was evaluated by first conducting comparison experiments to quantify the accuracy of the catheter-sensing unit, and then conducting several experiments to evaluate the force feedback unit. From the experimental results, the minimum and the maximum errors of translational displacement were 0.003 mm (0.01 %) and 0.425 mm (1.06 %), respectively. The average error was 0.113 mm (0.28 %). In terms of rotational angles, the minimum and the maximum errors were 0.39°(0.33 %) and 7.2°(6 %), respectively. The average error was 3.61°(3.01 %). The force resolution was approximately 25 mN and a maximum current of 3A generated an approximately 1.5 N force. Based on analysis of requirements and state-of-the-art computer-assisted and robot-assisted training systems for endovascular catheterization, a new master controller with force feedback interface was proposed to maintain the natural endovascular catheterization skills of the interventionalists.
Round-off error in long-term orbital integrations using multistep methods
NASA Technical Reports Server (NTRS)
Quinlan, Gerald D.
1994-01-01
Techniques for reducing round-off error are compared by testing them on high-order Stormer and symmetric multistep methods. The best technique for most applications is to write the equation in summed, function-evaluation form and to store the coefficients as rational numbers. A larger error reduction can be achieved by writing the equation in backward-difference form and performing some of the additions in extended precision, but this entails a larger central processing unit (CPU) cost.
An optimized network for phosphorus load monitoring for Lake Okeechobee, Florida
Gain, W.S.
1997-01-01
Phosphorus load data were evaluated for Lake Okeechobee, Florida, for water years 1982 through 1991. Standard errors for load estimates were computed from available phosphorus concentration and daily discharge data. Components of error were associated with uncertainty in concentration and discharge data and were calculated for existing conditions and for 6 alternative load-monitoring scenarios for each of 48 distinct inflows. Benefit-cost ratios were computed for each alternative monitoring scenario at each site by dividing estimated reductions in load uncertainty by the 5-year average costs of each scenario in 1992 dollars. Absolute and marginal benefit-cost ratios were compared in an iterative optimization scheme to determine the most cost-effective combination of discharge and concentration monitoring scenarios for the lake. If the current (1992) discharge-monitoring network around the lake is maintained, the water-quality sampling at each inflow site twice each year is continued, and the nature of loading remains the same, the standard error of computed mean-annual load is estimated at about 98 metric tons per year compared to an absolute loading rate (inflows and outflows) of 530 metric tons per year. This produces a relative uncertainty of nearly 20 percent. The standard error in load can be reduced to about 20 metric tons per year (4 percent) by adopting an optimized set of monitoring alternatives at a cost of an additional $200,000 per year. The final optimized network prescribes changes to improve both concentration and discharge monitoring. These changes include the addition of intensive sampling with automatic samplers at 11 sites, the initiation of event-based sampling by observers at another 5 sites, the continuation of periodic sampling 12 times per year at 1 site, the installation of acoustic velocity meters to improve discharge gaging at 9 sites, and the improvement of a discharge rating at 1 site.
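The iterative optimization described above can be illustrated with a minimal greedy sketch that repeatedly picks the monitoring upgrade offering the largest marginal reduction in error variance per added dollar. The site options and numbers below are illustrative, not the Lake Okeechobee data.

```python
# Each site: list of (annual_cost, error_variance) options, ordered by increasing cost.
sites = {
    "S1": [(5_000, 400.0), (20_000, 150.0), (60_000, 60.0)],
    "S2": [(5_000, 900.0), (25_000, 300.0)],
    "S3": [(5_000, 250.0), (15_000, 120.0), (40_000, 50.0)],
}

def optimize(sites, budget_increase):
    chosen = {s: 0 for s in sites}          # start from the cheapest option everywhere
    spent = 0.0
    while True:
        best = None
        for s, opts in sites.items():
            i = chosen[s]
            if i + 1 < len(opts):
                d_cost = opts[i + 1][0] - opts[i][0]
                d_var = opts[i][1] - opts[i + 1][1]
                if spent + d_cost <= budget_increase:
                    ratio = d_var / d_cost   # marginal benefit-cost ratio
                    if best is None or ratio > best[0]:
                        best = (ratio, s, d_cost)
        if best is None:
            break
        _, s, d_cost = best
        chosen[s] += 1
        spent += d_cost
    total_var = sum(sites[s][i][1] for s, i in chosen.items())
    return chosen, spent, total_var

plan, extra_cost, variance = optimize(sites, budget_increase=50_000)
# Upgrades chosen, added annual cost, and the network standard error
# (treating the per-site errors as independent).
print(plan, extra_cost, variance ** 0.5)
```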
NASA Astrophysics Data System (ADS)
Khalilinezhad, Mahdieh; Minaei, Behrooz; Vernazza, Gianni; Dellepiane, Silvana
2015-03-01
Data mining (DM) is the process of discovering knowledge in large databases. Applications of data mining in Blood Transfusion Organizations could be useful for improving the performance of the blood donation service. The aim of this research is the prediction of the healthiness of blood donors in the Blood Transfusion Organization (BTO). For this goal, three well-known algorithms, Decision Tree C4.5, the Naïve Bayesian classifier, and the Support Vector Machine, were chosen and applied to a real database of 11,006 donors. Seven fields, namely sex, age, job, education, marital status, type of donor, and results of blood tests (doctors' comments and lab results about healthy or unhealthy blood donors), were selected as inputs to these algorithms. The results of the three algorithms were compared and an error cost analysis was performed. According to this research and the obtained results, the best algorithm with low error cost and high accuracy is the SVM. This research helps the BTO to build a model of blood donors in each area in order to predict whether a donor's blood is healthy or unhealthy. It could be useful when used in parallel with laboratory tests to better identify unhealthy blood.
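A minimal sketch of such a comparison is given below, assuming the donor records are exported to a flat table with the seven listed fields encoded numerically and a binary healthy/unhealthy label. The file name, column names and cost matrix are hypothetical, and scikit-learn's entropy-based decision tree stands in for C4.5.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

df = pd.read_csv("donors.csv")          # hypothetical numeric export from the BTO database
X = df[["sex", "age", "job", "education", "marital_status", "donor_type", "lab_result"]]
y = df["healthy"]                        # 1 = healthy blood, 0 = unhealthy
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# cost[true, pred]: calling unhealthy blood "healthy" is the expensive mistake here.
cost = np.array([[0, 10],
                 [1, 0]])

models = {
    "C4.5-like tree": DecisionTreeClassifier(criterion="entropy", random_state=0),
    "Naive Bayes": GaussianNB(),
    "SVM": SVC(kernel="rbf", C=1.0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    cm = confusion_matrix(y_test, model.predict(X_test), labels=[0, 1])
    total_cost = (cm * cost).sum()
    accuracy = np.trace(cm) / cm.sum()
    print(f"{name}: accuracy={accuracy:.3f}, error cost={total_cost}")
```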
Lorenzo, Rosa A.; Carro, Antonia M.; Alvarez-Lorenzo, Carmen; Concheiro, Angel
2011-01-01
Template removal is a critical step in the preparation of most molecularly imprinted polymers (MIPs). The polymer network itself and the affinity of the imprinted cavities for the template make its removal difficult. If template molecules remain in the MIP, fewer cavities are available for rebinding, which decreases efficiency. Furthermore, if template bleeding occurs during analytical applications, errors arise. Despite its relevance to MIP performance, template removal has received scarce attention and is currently the least cost-effective step of MIP development. Attempts to achieve complete template removal may involve the use of overly drastic conditions in conventional extraction techniques, resulting in damage to, or collapse of, the imprinted cavities. Advances in extraction techniques in the last decade may provide optimized tools. The aim of this review is to analyze the available data on the efficiency of diverse extraction techniques for template removal, paying attention not only to the removal yield but also to MIP performance. Such an analysis is expected to be useful for opening a way to rational approaches for template removal (minimizing the costs of solvents and time) instead of the current trial-and-error methods. PMID:21845081
Palta, Jatinder R; Liu, Chihray; Li, Jonathan G
2008-01-01
The traditional prescriptive quality assurance (QA) programs that attempt to ensure the safety and reliability of traditional external beam radiation therapy are limited in their applicability to such advanced radiation therapy techniques as three-dimensional conformal radiation therapy, intensity-modulated radiation therapy, inverse treatment planning, stereotactic radiosurgery/radiotherapy, and image-guided radiation therapy. The conventional QA paradigm, illustrated by the American Association of Physicists in Medicine Radiation Therapy Committee Task Group 40 (TG-40) report, consists of developing a consensus menu of tests and device performance specifications from a generic process model that is assumed to apply to all clinical applications of the device. The complexity, variation in practice patterns, and level of automation of high-technology radiotherapy renders this "one-size-fits-all" prescriptive QA paradigm ineffective or cost prohibitive if the high-probability error pathways of all possible clinical applications of the device are to be covered. The current approaches to developing comprehensive prescriptive QA protocols can be prohibitively time consuming and cost ineffective and may sometimes fail to adequately safeguard patients. It therefore is important to evaluate more formal error mitigation and process analysis methods of industrial engineering to more optimally focus available QA resources on process components that have a significant likelihood of compromising patient safety or treatment outcomes.
Darvasi, A.; Soller, M.
1994-01-01
Selective genotyping is a method to reduce costs in marker-quantitative trait locus (QTL) linkage determination by genotyping only those individuals with extreme, and hence most informative, quantitative trait values. The DNA pooling strategy (termed "selective DNA pooling") takes this one step further by pooling DNA from the selected individuals at each of the two phenotypic extremes and basing the test for linkage on marker allele frequencies as estimated from the pooled samples only. This can reduce the genotyping costs of marker-QTL linkage determination by up to two orders of magnitude. Theoretical analysis of selective DNA pooling shows that for experiments involving backcross, F2 and half-sib designs, the power of selective DNA pooling for detecting genes with large effect can be the same as that obtained by individual selective genotyping. Power for detecting genes with small effect, however, was found to decrease strongly with increases in the technical error of estimating allele frequencies in the pooled samples. The effect of technical error, however, can be markedly reduced by replication of the technical procedures. It is also shown that a selected proportion of 0.1 at each tail will be appropriate for a wide range of experimental conditions. PMID:7896115
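The role of technical error and replication can be illustrated with a rough Monte Carlo sketch: the pooled allele-frequency estimate carries binomial sampling error plus a technical measurement error, and averaging r replicate determinations shrinks only the technical component. All parameter values below are illustrative and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pool = 200        # individuals in one phenotypic tail
p_true = 0.5        # true marker allele frequency in the pool
sigma_tech = 0.05   # sd of the technical error of one pooled-frequency determination
n_sim = 20_000

def estimate_freq(replicates):
    # The pool frequency varies by binomial sampling of 2*n_pool alleles;
    # each technical replicate adds independent measurement noise, then replicates are averaged.
    p_pool = rng.binomial(2 * n_pool, p_true, n_sim) / (2 * n_pool)
    measurements = p_pool[:, None] + rng.normal(0, sigma_tech, (n_sim, replicates))
    return measurements.mean(axis=1)

for r in (1, 2, 4, 8):
    print(f"replicates={r}: sd of estimated pool frequency = {estimate_freq(r).std():.4f}")
```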
3D Capturing Performances of Low-Cost Range Sensors for Mass-Market Applications
NASA Astrophysics Data System (ADS)
Guidi, G.; Gonizzi, S.; Micoli, L.
2016-06-01
Since the advent of the first Kinect as a motion controller device for the Microsoft XBOX platform (November 2010), several similar active and low-cost range sensing devices have been introduced on the mass market for various purposes, including gesture-based interfaces, 3D multimedia interaction, robot navigation, finger tracking, 3D body scanning for garment design, and proximity sensors for automotive applications. Given their capability to generate a real-time stream of range images, these devices have also been used in some projects as general-purpose range devices, with performances that might be satisfactory for some applications. This paper presents the working principles of the various devices and analyzes them in terms of systematic and random errors in order to explore their applicability to standard 3D capturing problems. Five actual devices have been tested, featuring three different technologies: i) the Kinect V1 by Microsoft, the Structure Sensor by Occipital, and the Xtion PRO by ASUS, all based on different implementations of the Primesense sensor; ii) the F200 by Intel/Creative, implementing the Realsense pattern projection technology; and iii) the Kinect V2 by Microsoft, equipped with the Canesta TOF camera. A critical analysis of the results first compares the devices and then identifies the range of applications for which such devices could actually work as a viable solution.
Taming the Hurricane of Acquisition Cost Growth - Or at Least Predicting It
2015-01-01
the practice of generating two different cost estimates dubbed Will Cost and Should Cost. The Should Cost estimate is “based on realistic tech...” ...to predict estimate error in similar future programs. This method is dubbed “macro-stochastic” estimation (Ryan, Schubert Kabban, Jacques...)
Colen, Hadewig B; Neef, Cees; Schuring, Roel W
2003-06-01
Worldwide, patient safety has become a major social policy problem for healthcare organisations. As in other organisations, the patients in our hospital also suffer from an inadequate distribution process, as becomes clear from incident reports involving medication errors. Medisch Spectrum Twente is a top primary-care, clinical, teaching hospital. The hospital pharmacy serves 1070 internal beds and 1120 beds in an affiliated psychiatric hospital and nursing homes. At the beginning of 1999, our pharmacy group started a large interdisciplinary research project to develop a safe, effective and efficient drug distribution system by using systematic process redesign. The process redesign includes both organisational and technological components. This article describes the identification and verification of critical performance dimensions for the design of drug distribution processes in hospitals (phase 1 of the systematic process redesign of drug distribution). Based on reported errors and related causes, we suggested six generic performance domains. To assess the role of the performance dimensions, we used three approaches: flowcharts, interviews with stakeholders, and a review of existing performance using time studies and medication error studies. We were able to set targets for costs, quality of information, responsiveness, employee satisfaction, and degree of innovation. We still have to establish which drug distribution system, in terms of quality and cost-effectiveness, represents the best and most cost-effective way of preventing medication errors. We intend to develop an evaluation model, using the critical performance dimensions as a starting point. This model can be used as a simulation template to compare different drug distribution concepts in order to define the differences in quality and cost-effectiveness.
Urban rail transit projects : forecast versus actual ridership and costs. final report
DOT National Transportation Integrated Search
1989-10-01
Substantial errors in forecasting ridership and costs for the ten rail transit projects reviewed in this report put forth the possibility that more accurate forecasts would have led decision-makers to select projects other than those reviewed in thi...
Evaluation of structure from motion for soil microtopography measurement
USDA-ARS?s Scientific Manuscript database
Recent developments in low cost structure from motion (SFM) technologies offer new opportunities for geoscientists to acquire high resolution soil microtopography data at a fraction of the cost of conventional techniques. However, these new methodologies often lack easily accessible error metrics an...
[Proximate analysis of straw by near infrared spectroscopy (NIRS)].
Huang, Cai-jin; Han, Lu-jia; Liu, Xian; Yang, Zeng-ling
2009-04-01
Proximate analysis is one of the routine analysis procedures in the utilization of straw for biomass energy. The present paper studied the applicability of rapid proximate analysis of straw by near infrared spectroscopy (NIRS) technology, in which the authors constructed the first NIRS models to predict the volatile matter and fixed carbon contents of straw. NIRS models were developed using a Foss 6500 spectrometer with spectra in the range of 1,108-2,492 nm to predict the contents of moisture, ash, volatile matter and fixed carbon in directly cut straw samples, and to predict ash, volatile matter and fixed carbon in dried milled straw samples. For the models based on directly cut straw samples, the determination coefficient of independent validation (R2v) and standard error of prediction (SEP) were 0.92 and 0.76% for moisture, 0.94 and 0.84% for ash, 0.88 and 0.82% for volatile matter, and 0.75 and 0.65% for fixed carbon, respectively. For the models based on dried milled straw samples, R2v and SEP were 0.98 and 0.54% for ash, 0.95 and 0.57% for volatile matter, and 0.78 and 0.61% for fixed carbon, respectively. It was concluded that NIRS models can serve as an accurate alternative analysis method; rapid and simultaneous analysis of multiple components can therefore be achieved by NIRS technology, decreasing the cost of proximate analysis of straw.
25+ Years of the Hubble Space Telescope and a Simple Error That Cost Millions
ERIC Educational Resources Information Center
Shakerin, Said
2016-01-01
A simple mistake in properly setting up a measuring device caused millions of dollars to be spent in correcting the initial optical failure of the Hubble Space Telescope (HST). This short article is intended as a lesson for a physics laboratory and discussion of errors in measurement.
On the role of cost-sensitive learning in multi-class brain-computer interfaces.
Devlaminck, Dieter; Waegeman, Willem; Wyns, Bart; Otte, Georges; Santens, Patrick
2010-06-01
Brain-computer interfaces (BCIs) present an alternative way of communication for people with severe disabilities. One of the shortcomings in current BCI systems, recently put forward in the fourth BCI competition, is the asynchronous detection of motor imagery versus resting state. We investigated this extension to the three-class case, in which the resting state is considered to lie virtually between the two motor classes, resulting in a large penalty when one motor task is misclassified as the other motor class. We particularly focus on the behavior of different machine-learning techniques and on the role of multi-class cost-sensitive learning in such a context. To this end, four different kernel methods are empirically compared, namely pairwise multi-class support vector machines (SVMs), two cost-sensitive multi-class SVMs, and kernel-based ordinal regression. The experimental results illustrate that ordinal regression performs better than the other three approaches when a cost-sensitive performance measure such as the mean squared error is considered. By contrast, multi-class cost-sensitive learning enables us to control the number of large errors made between two motor tasks.
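The evaluation point can be made concrete with a small sketch: when the three classes are encoded ordinally (motor, rest, motor), a mean-squared-error measure penalises motor-to-motor confusions four times as heavily as confusions with the resting class, while plain accuracy treats them equally. The labels and predictions below are illustrative.

```python
import numpy as np

# Ordinal encoding: 0 = left motor imagery, 1 = rest, 2 = right motor imagery.
y_true = np.array([0, 0, 1, 1, 2, 2, 0, 2, 1, 2])
y_a    = np.array([0, 1, 1, 1, 2, 1, 0, 2, 1, 2])  # classifier A: its errors land on "rest"
y_b    = np.array([0, 2, 1, 1, 2, 0, 0, 2, 1, 2])  # classifier B: its errors jump to the other motor class

for name, y_pred in (("A", y_a), ("B", y_b)):
    acc = (y_pred == y_true).mean()
    mse = ((y_pred - y_true) ** 2).mean()           # cost-sensitive: distance-2 mistakes cost 4x
    print(f"classifier {name}: accuracy={acc:.2f}, mean squared error={mse:.2f}")
# Both classifiers have the same accuracy, but B's motor-to-motor confusions
# give it a much larger mean squared error.
```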
Error analysis of mathematical problems on TIMSS: A case of Indonesian secondary students
NASA Astrophysics Data System (ADS)
Priyani, H. A.; Ekawati, R.
2018-01-01
Indonesian students' competence in solving mathematical problems is still considered weak, as indicated by the results of international assessments such as TIMSS. This might be caused by the various types of errors made. Hence, this study aimed at identifying students' errors in solving TIMSS mathematical problems on the topic of numbers, which is considered a fundamental concept in mathematics. The study applied descriptive qualitative analysis. The subjects were the three students who made the most errors on the test indicators, selected from 34 8th-grade students. Data were obtained through a paper-and-pencil test and student interviews. The error analysis indicated that, in solving the Applying-level problem, the type of error students made was operational errors. For the Reasoning-level problem, three types of errors were made: conceptual errors, operational errors and principal errors. Meanwhile, the analysis of the causes of students' errors showed that students did not comprehend the mathematical problems given.
NASA Technical Reports Server (NTRS)
LaValley, Brian W.; Little, Phillip D.; Walter, Chris J.
2011-01-01
This report documents the capabilities of the EDICT tools for error modeling and error propagation analysis when operating with models defined in the Architecture Analysis & Design Language (AADL). We discuss our experience using the EDICT error analysis capabilities on a model of the Scalable Processor-Independent Design for Enhanced Reliability (SPIDER) architecture that uses the Reliable Optical Bus (ROBUS). Based on these experiences we draw some initial conclusions about model based design techniques for error modeling and analysis of highly reliable computing architectures.
Therrell, Bradford L.; Lloyd-Puryear, Michele A.; Camp, Kathryn M.; Mann, Marie Y.
2014-01-01
Inborn errors of metabolism (IEM) are genetic disorders in which specific enzyme defects interfere with the normal metabolism of exogenous (dietary) or endogenous protein, carbohydrate, or fat. In the U.S., many IEM are detected through state newborn screening (NBS) programs. To inform research on IEM and provide necessary resources for researchers, we are providing: tabulation of ten-year state NBS data for selected IEM detected through NBS; costs of medical foods used in the management of IEM; and an assessment of corporate policies regarding provision of nutritional interventions at no or reduced cost to individuals with IEM. The calculated IEM incidences are based on analyses of ten-year data (2001–2011) from the National Newborn Screening Information System (NNSIS). Costs to feed an average person with an IEM were approximated by determining costs to feed an individual with an IEM, minus the annual expenditure for food for an individual without an IEM. Both the incidence and costs of nutritional intervention data will be useful in future research concerning the impact of IEM disorders on families, individuals and society. PMID:25085281
Avery, Anthony J; Rodgers, Sarah; Cantrill, Judith A; Armstrong, Sarah; Elliott, Rachel; Howard, Rachel; Kendrick, Denise; Morris, Caroline J; Murray, Scott A; Prescott, Robin J; Cresswell, Kathrin; Sheikh, Aziz
2009-05-01
Medication errors are an important cause of morbidity and mortality in primary care. The aims of this study are to determine the effectiveness, cost effectiveness and acceptability of a pharmacist-led information-technology-based complex intervention compared with simple feedback in reducing proportions of patients at risk from potentially hazardous prescribing and medicines management in general (family) practice. RESEARCH SUBJECT GROUP: "At-risk" patients registered with computerised general practices in two geographical regions in England. Parallel group pragmatic cluster randomised trial. Practices will be randomised to either: (i) Computer-generated feedback; or (ii) Pharmacist-led intervention comprising computer-generated feedback, educational outreach and dedicated support. The proportion of patients in each practice at six and 12 months post intervention: - with a computer-recorded history of peptic ulcer being prescribed non-selective non-steroidal anti-inflammatory drugs; - with a computer-recorded diagnosis of asthma being prescribed beta-blockers; - aged 75 years and older receiving long-term prescriptions for angiotensin converting enzyme inhibitors or loop diuretics without a recorded assessment of renal function and electrolytes in the preceding 15 months. SECONDARY OUTCOME MEASURES: These relate to a number of other examples of potentially hazardous prescribing and medicines management. An economic evaluation will be done of the cost per error avoided, from the perspective of the UK National Health Service (NHS), comparing the pharmacist-led intervention with simple feedback. QUALITATIVE ANALYSIS: A qualitative study will be conducted to explore the views and experiences of health care professionals and NHS managers concerning the interventions, and investigate possible reasons why the interventions prove effective, or conversely prove ineffective. 34 practices in each of the two treatment arms would provide at least 80% power (two-tailed alpha of 0.05) to demonstrate a 50% reduction in error rates for each of the three primary outcome measures in the pharmacist-led intervention arm compared with an 11% reduction in the simple feedback arm. At the time of submission of this article, 72 general practices have been recruited (36 in each arm of the trial) and the interventions have been delivered. Analysis has not yet been undertaken.
Olson, Scott A.
2003-01-01
The stream-gaging network in New Hampshire was analyzed for its effectiveness in providing regional information on peak-flood flow, mean-flow, and low-flow frequency. The data available for analysis were from stream-gaging stations in New Hampshire and selected stations in adjacent States. The principles of generalized-least-squares regression analysis were applied to develop regional regression equations that relate streamflow-frequency characteristics to watershed characteristics. Regression equations were developed for (1) the instantaneous peak flow with a 100-year recurrence interval, (2) the mean-annual flow, and (3) the 7-day, 10-year low flow. Active and discontinued stream-gaging stations with 10 or more years of flow data were used to develop the regression equations. Each stream-gaging station in the network was evaluated and ranked on the basis of how much the data from that station contributed to the cost-weighted sampling-error component of the regression equation. The potential effect of data from proposed and new stream-gaging stations on the sampling error also was evaluated. The stream-gaging network was evaluated for conditions in water year 2000 and for estimated conditions under various network strategies if an additional 5 years and 20 years of streamflow data were collected. The effectiveness of the stream-gaging network in providing regional streamflow information could be improved for all three flow characteristics with the collection of additional flow data, both temporally and spatially. With additional years of data collection, the greatest reduction in the average sampling error of the regional regression equations was found for the peak- and low-flow characteristics. In general, additional data collection at stream-gaging stations with unregulated flow, relatively short-term record (less than 20 years), and drainage areas smaller than 45 square miles contributed the largest cost-weighted reduction to the average sampling error of the regional estimating equations. The results of the network analyses can be used to prioritize the continued operation of active stations, the reactivation of discontinued stations, or the activation of new stations to maximize the regional information content provided by the stream-gaging network. Final decisions regarding altering the New Hampshire stream-gaging network would require the consideration of the many uses of the streamflow data serving local, State, and Federal interests.
Stereotype threat can reduce older adults' memory errors
Barber, Sarah J.; Mather, Mara
2014-01-01
Stereotype threat often incurs the cost of reducing the amount of information that older adults accurately recall. In the current research we tested whether stereotype threat can also benefit memory. According to the regulatory focus account of stereotype threat, threat induces a prevention focus in which people become concerned with avoiding errors of commission and are sensitive to the presence or absence of losses within their environment (Seibt & Förster, 2004). Because of this, we predicted that stereotype threat might reduce older adults' memory errors. Results were consistent with this prediction. Older adults under stereotype threat had lower intrusion rates during free-recall tests (Experiments 1 & 2). They also reduced their false alarms and adopted more conservative response criteria during a recognition test (Experiment 2). Thus, stereotype threat can decrease older adults' false memories, albeit at the cost of fewer veridical memories, as well. PMID:24131297
Lambe, Tosin; Frew, Emma; Ives, Natalie J; Woolley, Rebecca L; Cummins, Carole; Brettell, Elizabeth A; Barsoum, Emma N; Webb, Nicholas J A
2018-04-01
The Paediatric Quality of Life Inventory (PedsQL™) questionnaire is a widely used, generic instrument designed for measuring health-related quality of life (HRQoL); however, it is not preference-based and therefore not suitable for cost-utility analysis. The Child Health Utility Index-9 Dimension (CHU-9D), however, is a preference-based instrument that has been primarily developed to support cost-utility analysis. This paper presents a method for estimating CHU-9D index scores from responses to the PedsQL™ using data from a randomised controlled trial of prednisolone therapy for treatment of childhood corticosteroid-sensitive nephrotic syndrome. HRQoL data were collected from children at randomisation, week 16, and months 12, 18, 24, 36 and 48. Observations on children aged 5 years and older were pooled across all data collection timepoints and were then randomised into an estimation (n = 279) and validation (n = 284) sample. A number of models were developed using the estimation data before internal validation. The best model was chosen using multi-stage selection criteria. Most of the models developed accurately predicted the CHU-9D mean index score. The best performing model was a generalised linear model (mean absolute error = 0.0408; mean square error = 0.0035). The proportion of index scores deviating from the observed scores by < 0.03 was 53%. The mapping algorithm provides an empirical tool for estimating CHU-9D index scores and for conducting cost-utility analyses within clinical studies that have only collected PedsQL™ data. It is valid for children aged 5 years or older. Caution should be exercised when using this with children younger than 5 years, older adolescents (> 13 years) or patient groups with particularly poor quality of life. 16645249.
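A hedged sketch of such a mapping model is shown below, assuming paired PedsQL™ predictors and observed CHU-9D index scores in a flat file; the file and column names are hypothetical, and a Gaussian GLM stands in for whatever specification the authors finally selected.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("hrqol.csv")                   # hypothetical paired PedsQL / CHU-9D observations
predictors = ["physical", "emotional", "social", "school", "age"]
est = df.sample(frac=0.5, random_state=0)       # estimation sample
val = df.drop(est.index)                        # validation sample

X_est = sm.add_constant(est[predictors])
model = sm.GLM(est["chu9d_index"], X_est, family=sm.families.Gaussian()).fit()

X_val = sm.add_constant(val[predictors])
pred = model.predict(X_val)
abs_err = np.abs(pred - val["chu9d_index"])
mae = abs_err.mean()
mse = np.mean((pred - val["chu9d_index"]) ** 2)
within_003 = np.mean(abs_err < 0.03)            # share of predictions within 0.03 of observed
print(f"MAE={mae:.4f}, MSE={mse:.4f}, within 0.03: {within_003:.2%}")
```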
NASA Technical Reports Server (NTRS)
Joiner, J.; Dee, D. P.
1998-01-01
One of the outstanding problems in data assimilation has been and continues to be how best to utilize satellite data while balancing the tradeoff between accuracy and computational cost. A number of weather prediction centers have recently achieved remarkable success in improving their forecast skill by changing the method by which satellite data are assimilated into the forecast model from the traditional approach of assimilating retrievals to the direct assimilation of radiances in a variational framework. The operational implementation of such a substantial change in methodology involves a great number of technical details, e.g., pertaining to quality control procedures, systematic error correction techniques, and tuning of the statistical parameters in the analysis algorithm. Although there are clear theoretical advantages to the direct radiance assimilation approach, it is not obvious at all to what extent the improvements that have been obtained so far can be attributed to the change in methodology, or to various technical aspects of the implementation. The issue is of interest because retrieval assimilation retains many practical and logistical advantages which may become even more significant in the near future when increasingly high-volume data sources become available. The central question we address here is: how much improvement can we expect from assimilating radiances rather than retrievals, all other things being equal? We compare the two approaches in a simplified one-dimensional theoretical framework, in which problems related to quality control and systematic error correction are conveniently absent. By assuming a perfect radiative transfer model and perfect knowledge of radiance and background error covariances, we are able to formulate a nonlinear local error analysis for each assimilation method. Direct radiance assimilation is optimal in this idealized context, while the traditional method of assimilating retrievals is suboptimal because it ignores the cross-covariances between background errors and retrieval errors. We show that interactive retrieval assimilation (where the same background used for assimilation is also used in the retrieval step) is equivalent to direct assimilation of radiances with suboptimal analysis weights. We illustrate and extend these theoretical arguments with several one-dimensional assimilation experiments, where we estimate vertical atmospheric profiles using simulated data from both the High-resolution InfraRed Sounder 2 (HIRS2) and the future Atmospheric InfraRed Sounder (AIRS).
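The argument about suboptimal weights can be illustrated with a scalar worked example: for a single state variable with background error variance B and one radiance y = Hx + ε with error variance R, the analysis error variance is minimised at the optimal gain and inflated for any other weight. The numbers below are illustrative, not from the paper.

```python
import numpy as np

B, H, R = 1.0, 0.8, 0.5          # background error variance, Jacobian, radiance error variance

def analysis_error_variance(K):
    """Error variance of x_a = x_b + K (y - H x_b) for uncorrelated background/obs errors."""
    return (1.0 - K * H) ** 2 * B + K ** 2 * R

K_opt = B * H / (H * B * H + R)   # optimal gain for direct radiance assimilation
for K in (K_opt, 0.5 * K_opt, 1.5 * K_opt):
    print(f"K={K:.3f}: analysis error variance = {analysis_error_variance(K):.3f}")
# Per the paper's argument, assimilating an interactive retrieval amounts to applying
# a gain different from K_opt, hence a larger analysis error variance.
```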
Litigation related to anaesthesia: an analysis of claims against the NHS in England 1995-2007.
Cook, T M; Bland, L; Mihai, R; Scott, S
2009-07-01
The distribution of medico-legal claims in English anaesthetic practice is unreported. We studied National Health Service Litigation Authority claims related to anaesthesia since 1995. All claims were reviewed by three clinicians and variously categorised, including by type of incident, claimed outcome and cost. Anaesthesia-related claims account for 2.5% of all claims and 2.4% of the value of all claims. Of 841 relevant claims 366 (44%) were related to regional anaesthesia, 245 (29%) obstetric anaesthesia, 164 (20%) inadequate anaesthesia, 95 (11%) dental damage, 71 (8%) airway (excluding dental damage), 63 (7%) drug related (excluding allergy), 31 (4%) drug allergy related, 31 (4%) positioning, 29 (3%) respiratory, 26 (3%) consent, 21 (2%) central venous cannulation and 18 (2%) peripheral venous cannulation. Defining which cases are, from a medico-legal viewpoint, 'high risk' is uncertain, but the clinical categories with the largest number of claims were regional anaesthesia, obstetric anaesthesia, inadequate anaesthesia, dental damage and airway, those with the highest overall cost were regional anaesthesia, obstetric anaesthesia, and airway and those with the highest mean cost per closed claim were respiratory, central venous cannulation and drug error excluding allergy. The data currently available have limitations but offer useful information. A closed claims analysis similar to that in the USA would improve the clinical usefulness of analysis.
Risk Analysis of Underestimate Cost Offer to The Project Quality in Aceh Province
NASA Astrophysics Data System (ADS)
Rani, Hafnidar A.
2016-11-01
The possibility of errors in the process of determining an offer price can be considerable, and such errors can lead to an underestimated project cost, which in turn reduces profit during implementation. The Government Equipment/Service Procurement Policy Institution (LKPP) assesses that underpriced offers are still commonly found in government equipment/service procurement and can potentially decrease project quality. This study aimed to analyze the most dominant factors in the practice of underestimate cost offers, to analyze the relationship of underestimate cost offer risk factors to road construction project quality in Aceh Province, and to analyze the most potential underestimate cost offer risk factors affecting road construction project quality in Aceh Province. The road construction projects observed were those implemented in Aceh Province from 2013 to 2015. The study was conducted by interviewing the Government Budget Authority (KPA) and distributing a questionnaire to road construction contractors with the qualifications K1, K2, K3, M1, M2 and B1. Based on 2016 data from the Construction Service Development Institution (LPJK) of Aceh Province, the population comprises 2,717 contractors. Using the Slovin equation, a research sample of 97 contractors was obtained. The most dominant factor in the underestimate cost offer risk of road construction projects in Aceh Province is the contingency cost factor, with a mean of 4.374.
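The reported sample size is consistent with Slovin's formula at the commonly used 10% margin of error (the margin is inferred from the reported numbers, not stated in the abstract):

```latex
n = \frac{N}{1 + N e^{2}}
  = \frac{2717}{1 + 2717 \times 0.10^{2}}
  \approx 96.5 \;\Rightarrow\; n = 97
```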
A Simple Exoskeleton That Assists Plantarflexion Can Reduce the Metabolic Cost of Human Walking
Malcolm, Philippe; Derave, Wim; Galle, Samuel; De Clercq, Dirk
2013-01-01
Background: Even though walking can be sustained for great distances, considerable energy is required for plantarflexion around the instant of opposite leg heel contact. Different groups have attempted to reduce metabolic cost with exoskeletons, but none could achieve a reduction beyond the level of walking without an exoskeleton, possibly because there is no consensus on the optimal actuation timing. The main research question of our study was whether it is possible to obtain a higher reduction in metabolic cost by tuning the actuation timing. Methodology/Principal Findings: We measured metabolic cost by means of respiratory gas analysis. Test subjects walked with a simple pneumatic exoskeleton that assists plantarflexion with different actuation timings. We found that the exoskeleton can reduce metabolic cost by 0.18±0.06 W kg(-1), or 6±2% (standard error of the mean) (p = 0.019), below the cost of walking without the exoskeleton if actuation starts just before opposite leg heel contact. Conclusions/Significance: The optimum timing that we found concurs with the prediction from a mathematical model of walking. While the present exoskeleton was not ambulant, measurements of joint kinetics reveal that the required power could be recycled from knee extension deceleration work that occurs naturally during walking. This demonstrates that it is theoretically possible to build future ambulant exoskeletons that reduce metabolic cost, without power supply restrictions. PMID:23418524
Clinical laboratory: bigger is not always better.
Plebani, Mario
2018-06-27
Laboratory services around the world are undergoing substantial consolidation and changes through mechanisms ranging from mergers, acquisitions and outsourcing, primarily based on expectations of improving efficiency, increasing volumes and reducing the cost per test. However, the relationship between volume and costs is not linear, and numerous variables influence the final cost per test. In particular, the relationship between volumes and costs does not hold across the entire spectrum of clinical laboratories: high costs are associated with low volumes up to a threshold of 1 million tests per year. Over this threshold, there is no linear association between volumes and costs, as laboratory organization rather than test volume more significantly affects the final costs. Currently, data on laboratory errors and associated diagnostic errors and risk for patient harm emphasize the need for a paradigmatic shift: from a focus on volumes and efficiency to a patient-centered vision restoring the nature of laboratory services as an integral part of the diagnostic and therapy process. Process and outcome quality indicators are effective tools to measure and improve laboratory services by stimulating competition based on intra- and extra-analytical performance specifications, intermediate outcomes and customer satisfaction. Rather than competing on economic value alone, clinical laboratories should adopt a strategy based on a set of harmonized quality indicators and performance specifications, active laboratory stewardship, and improved patient safety.
7 CFR 272.10 - ADP/CIS Model Plan.
Code of Federal Regulations, 2011 CFR
2011-01-01
... those which result in effective programs or in cost effective reductions in errors and improvements in management efficiency, such as decreases in program administrative costs. Thus, for those State agencies which operate exceptionally efficient and effective programs, a lesser degree of automation may be...
Medication errors with electronic prescribing (eP): Two views of the same picture
2010-01-01
Background: Quantitative prospective methods are widely used to evaluate the impact of new technologies such as electronic prescribing (eP) on medication errors. However, they are labour-intensive and it is not always feasible to obtain pre-intervention data. Our objective was to compare the eP medication error picture obtained with retrospective quantitative and qualitative methods. Methods: The study was carried out at one English district general hospital approximately two years after implementation of an integrated electronic prescribing, administration and records system. Quantitative: A structured retrospective analysis was carried out of clinical records and medication orders for 75 randomly selected patients admitted to three wards (medicine, surgery and paediatrics) six months after eP implementation. Qualitative: Eight doctors, 6 nurses, 8 pharmacy staff and 4 other staff at senior, middle and junior grades, and 19 adult patients on acute surgical and medical wards were interviewed. Staff interviews explored experiences of developing and working with the system; patient interviews focused on experiences of medicine prescribing and administration on the ward. Interview transcripts were searched systematically for accounts of medication incidents. A classification scheme was developed and applied to the errors identified in the records review. Results: The two approaches produced similar pictures of the drug use process. Interviews identified types of error identified in the retrospective notes review plus two eP-specific errors which were not detected by record review. Interview data took less time to collect than record review, and provided rich data on the prescribing process, and reasons for delays or non-administration of medicines, including "once only" orders and "as required" medicines. Conclusions: The qualitative approach provided more understanding of processes, and some insights into why medication errors can happen. The method is cost-effective and could be used to supplement information from anonymous error reporting schemes. PMID:20497532
Barone, V; Verdini, F; Burattini, L; Di Nardo, F; Fioretti, S
2016-03-01
A markerless, low-cost prototype has been developed for the determination of some spatio-temporal parameters of human gait: step length, step width and cadence have been considered. Only a smartphone and a high-definition webcam have been used. The signals obtained from the accelerometer embedded in the smartphone are used to recognize heel strike events, while the feet positions are calculated through image processing of the webcam stream. Step length and width are computed during gait trials on a treadmill at various speeds (3, 4 and 5 km/h). Six subjects have been tested for a total of 504 steps. The results were compared with those obtained by a stereo-photogrammetric system (Elite, BTS Engineering). The maximum average errors were 3.7 cm (5.36%) for the right step length and 1.63 cm (15.16%) for the right step width at 5 km/h. The maximum average error for step duration was 0.02 s (1.69%) at 5 km/h for the right steps. The system is characterized by a very high level of automation that allows its use by non-expert users in non-structured environments. A low-cost system able to automatically provide a reliable and repeatable evaluation of some gait events and parameters during treadmill walking is also relevant from a clinical point of view, because it allows the analysis of hundreds of steps and consequently an analysis of their variability. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
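A minimal sketch of the accelerometer side of such a pipeline is given below, assuming a vertical-acceleration trace recorded by the smartphone: heel strikes are taken as prominent peaks separated by a refractory period, and cadence follows from their spacing. The file name and thresholds are illustrative, not the authors' settings.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 100.0                              # sampling rate of the smartphone accelerometer (Hz)
acc = np.loadtxt("vertical_acc.txt")    # hypothetical recording, one value per sample

# Heel strikes appear as sharp peaks; enforce a refractory period of 0.4 s between steps.
peaks, _ = find_peaks(acc, height=1.2 * np.mean(acc), distance=int(0.4 * fs))
step_times = peaks / fs
step_durations = np.diff(step_times)
cadence = 60.0 / step_durations.mean()  # steps per minute
print(f"{len(peaks)} heel strikes, mean step duration {step_durations.mean():.2f} s, "
      f"cadence {cadence:.0f} steps/min")
```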
Cost effectiveness of the stream-gaging program in Pennsylvania
Flippo, H.N.; Behrendt, T.E.
1985-01-01
This report documents a cost-effectiveness study of the stream-gaging program in Pennsylvania. Data uses and funding were identified for 223 continuous-record stream gages operated in 1983; four are planned for discontinuance at the close of water-year 1985, and two are suggested for conversion, at the beginning of the 1985 water year, to the collection of continuous stage records only. Two of 11 special-purpose short-term gages are recommended for continuation when the supporting project ends; eight of these gages are to be discontinued and the other will be converted to a partial-record type. The current (1983) cost of operating the 212 stations recommended for continued operation is $1,199,000 per year. The average standard error of estimation for instantaneous streamflow is 15.2%. An overall average standard error of 9.8% could be attained on a budget of $1,271,000, 6% greater than the 1983 budget, by adopting cost-effective stream-gaging operations. (USGS)
Speeding up Coarse Point Cloud Registration by Threshold-Independent Baysac Match Selection
NASA Astrophysics Data System (ADS)
Kang, Z.; Lindenbergh, R.; Pu, S.
2016-06-01
This paper presents an algorithm for the automatic registration of terrestrial point clouds by match selection using an efficient conditional sampling method, threshold-independent BaySAC (BAYes SAmpling Consensus), and employs the error metric of the average point-to-surface residual to reduce the random measurement error and thereby approach the real registration error. BaySAC and other basic sampling algorithms usually need an artificially determined threshold by which inlier points are identified, which leads to a threshold-dependent verification process. Therefore, we applied the LMedS method to construct the cost function used to determine the optimum model, reducing the influence of human factors and improving the robustness of the model estimate. Point-to-point and point-to-surface error metrics are the most commonly used. However, the point-to-point error in general consists of at least two components: random measurement error and systematic error resulting from a remaining error in the estimated rigid-body transformation. Thus, we employ the average point-to-surface residual to evaluate the registration accuracy. The proposed approaches, together with a traditional RANSAC approach, are tested on four data sets acquired by three different scanners in terms of their computational efficiency and the quality of the final registration. The registration results show that the standard deviation of the average point-to-surface residuals is reduced from 1.4 cm (plain RANSAC) to 0.5 cm (threshold-independent BaySAC). The results also show that, compared to the performance of RANSAC, our BaySAC strategies lead to fewer iterations and lower computational cost when the hypothesis set is contaminated with more outliers.
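The threshold-free scoring idea can be sketched as follows: each hypothesised rigid-body transform is ranked by the least median of squared point-to-plane residuals instead of by counting inliers against a hand-picked threshold. The hypothesis generation and the surface normals are assumed to be given; this is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

def point_to_plane_residuals(R, t, src, dst, dst_normals):
    """Signed distances of transformed source points to the tangent planes of their matches."""
    transformed = src @ R.T + t
    return np.einsum("ij,ij->i", transformed - dst, dst_normals)

def lmeds_score(R, t, src, dst, dst_normals):
    """Least-median-of-squares cost: no inlier threshold is needed."""
    r = point_to_plane_residuals(R, t, src, dst, dst_normals)
    return np.median(r ** 2)

def select_best(hypotheses, src, dst, dst_normals):
    """Pick the (R, t) hypothesis with the smallest LMedS cost."""
    scores = [lmeds_score(R, t, src, dst, dst_normals) for R, t in hypotheses]
    best = int(np.argmin(scores))
    return hypotheses[best], scores[best]

# After selecting the winning hypothesis, the registration quality can be reported as the
# average absolute point-to-surface residual, in line with the paper's error metric.
```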
A framework of medical equipment management system for in-house clinical engineering department.
Chien, Chia-Hung; Huang, Yi-You; Chong, Fok-Ching
2010-01-01
Medical equipment management is an important issue for safety and cost in modern hospital operation. In addition, the use of an efficient information system effectively improves management performance. In this study, we designed a framework for a medical equipment management system for use by an in-house clinical engineering department. The system is web-based and integrates clinical engineering and hospital information system components. Through the application of the related information, it efficiently improves the operational management of medical devices, immediately and continuously. This system has been running in the National Taiwan University Hospital. The results showed only a few error cases in the error analysis of medical equipment by the maintenance sub-system. The information can be used to improve work quality, reduce maintenance costs, and promote the safety of medical devices used by patients and clinical staff.
TOOKUIL: A case study in user interface development for safety code application
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gray, D.L.; Harkins, C.K.; Hoole, J.G.
1997-07-01
Traditionally, there has been a very high learning curve associated with using nuclear power plant (NPP) analysis codes. Even for seasoned plant analysts and engineers, the process of building or modifying an input model for present day NPP analysis codes is tedious, error prone, and time consuming. Current cost constraints and performance demands place an additional burden on today's safety analysis community. Advances in graphical user interface (GUI) technology have been applied to obtain significant productivity and quality assurance improvements for the Transient Reactor Analysis Code (TRAC) input model development. KAPL Inc. has developed an X Windows-based graphical user interface named TOOKUIL which supports the design and analysis process, acting as a preprocessor, runtime editor, help system, and post processor for TRAC. This paper summarizes the objectives of the project, the GUI development process and experiences, and the resulting end product, TOOKUIL.
Awareness of surgical costs: a multicenter cross-sectional survey.
Bade, Kim; Hoogerbrug, Jonathan
2015-01-01
Resource scarcity continues to be an important problem in modern surgical practice. Studies in North America and Europe have found that medical professionals have a limited understanding of the costs of medical care. No cost awareness studies have been undertaken in Australasia or focused specifically on the surgical team. This study determined the cost of a range of commonly used diagnostic tests, procedures, and hospital resources associated with the care of the surgical patient. The surgical teams' awareness of these costs was then assessed in a multicenter cross-sectional survey. In total, 14 general surgical consultants, 14 registrars, and 25 house officers working in three New Zealand hospitals were asked to estimate the costs of 14 items commonly associated with patient care. Cost estimates were considered correct if within plus or minus 25% of the actual cost. Accuracy was assessed by calculating the median, mean, and absolute percentage discrepancy. A total of 57 surveys were completed; four were incomplete and were not included in the analysis. Cost awareness was generally poor, and members of the surgical team were rarely able to estimate costs to within 25%. The mean absolute percentage error was 0.87 (95% CI: 0.58-1.18), and underestimates were most common. There was no significant difference in estimate accuracy between consultants, registrars, or house officers, or between consultants working in both public and private practice compared with those working in public practice alone. There is poor awareness of surgical costs among consultant surgeons, registrars, and junior physicians working in Australasia. Copyright © 2014 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Bapat, Prashant M; Das, Debasish; Dave, Nishant N; Wangikar, Pramod P
2006-12-15
Antibiotic fermentation processes are raw material cost intensive and the profitability is greatly dependent on the product yield per unit substrate consumed. In order to reduce costs, industrial processes use organic nitrogen substrates (ONS) such as corn steep liquor and yeast extract. Thus, although the stoichiometric analysis is the first logical step in process development, it is often difficult to achieve due to the ill-defined nature of the medium. Here, we present a black-box stoichiometric model for rifamycin B production via Amycolatopsis mediterranei S699 fermentation in complex multi-substrate medium. The stoichiometric coefficients have been experimentally evaluated for nine different media compositions. The ONS was quantified in terms of the amino acid content that it provides. Note that the black box stoichiometric model is an overall result of the metabolic reactions that occur during growth. Hence, the observed stoichiometric coefficients are liable to change during the batch cycle. To capture the shifts in stoichiometry, we carried out the stoichiometric analysis over short intervals of 8-16 h in a batch cycle of 100-200 h. An error analysis shows that there are no systematic errors in the measurements and that there are no unaccounted products in the process. The growth stoichiometry shows a shift from one substrate combination to another during the batch cycle. The shifts were observed to correlate well with the shifts in the trends of pH and exit carbon dioxide profiles. To exemplify, the ammonia uptake and nitrate uptake phases were marked by a decreasing pH trend and an increasing pH trend, respectively. Further, we find the product yield per unit carbon substrate to be greatly dependent on the nature of the nitrogen substrate. The analysis presented here can be readily applied to other fermentation systems that employ multi-substrate complex media.
A Compact VLSI System for Bio-Inspired Visual Motion Estimation.
Shi, Cong; Luo, Gang
2018-04-01
This paper proposes a bio-inspired visual motion estimation algorithm based on motion energy, along with its compact very-large-scale integration (VLSI) architecture using low-cost embedded systems. The algorithm mimics motion perception functions of retina, V1, and MT neurons in a primate visual system. It involves operations of ternary edge extraction, spatiotemporal filtering, motion energy extraction, and velocity integration. Moreover, we propose the concept of confidence map to indicate the reliability of estimation results on each probing location. Our algorithm involves only additions and multiplications during runtime, which is suitable for low-cost hardware implementation. The proposed VLSI architecture employs multiple (frame, pixel, and operation) levels of pipeline and massively parallel processing arrays to boost the system performance. The array unit circuits are optimized to minimize hardware resource consumption. We have prototyped the proposed architecture on a low-cost field-programmable gate array platform (Zynq 7020) running at 53-MHz clock frequency. It achieved 30-frame/s real-time performance for velocity estimation on 160 × 120 probing locations. A comprehensive evaluation experiment showed that the estimated velocity by our prototype has relatively small errors (average endpoint error < 0.5 pixel and angular error < 10°) for most motion cases.
Moss, Marshall E.; Gilroy, Edward J.
1980-01-01
This report describes the theoretical developments and illustrates the applications of techniques that recently have been assembled to analyze the cost-effectiveness of federally funded stream-gaging activities in support of the Colorado River compact and subsequent adjudications. The cost effectiveness of 19 stream gages in terms of minimizing the sum of the variances of the errors of estimation of annual mean discharge is explored by means of a sequential-search optimization scheme. The search is conducted over a set of decision variables that describes the number of times that each gaging route is traveled in a year. A gage route is defined as the most expeditious circuit that is made from a field office to visit one or more stream gages and return to the office. The error variance is defined as a function of the frequency of visits to a gage by using optimal estimation theory. Currently a minimum of 12 visits per year is made to any gage. By changing to a six-visit minimum, the same total error variance can be attained for the 19 stations with a budget of 10% less than the current one. Other strategies are also explored. (USGS)
Particle swarm optimization algorithm based low cost magnetometer calibration
NASA Astrophysics Data System (ADS)
Ali, A. S.; Siddharth, S.; Syed, Z.; El-Sheimy, N.
2011-12-01
Inertial Navigation Systems (INS) consist of accelerometers, gyroscopes and a microprocessor that provide inertial digital data from which position and orientation are obtained by integrating the specific forces and rotation rates. In addition to the accelerometers and gyroscopes, magnetometers can be used to derive the absolute user heading based on Earth's magnetic field. Unfortunately, the measurements of the magnetic field obtained with low cost sensors are corrupted by several errors, including manufacturing defects and external electromagnetic fields. Consequently, proper calibration of the magnetometer is required to achieve high accuracy heading measurements. In this paper, a Particle Swarm Optimization (PSO) based calibration algorithm is presented to estimate the values of the bias and scale factor of a low cost magnetometer. The main advantage of this technique is the use of artificial intelligence, which does not need any error modeling or awareness of the nonlinearity. The bias and scale factor errors estimated by the proposed algorithm improve the heading accuracy, and the results are statistically significant. The approach can also help in the development of Pedestrian Navigation Devices (PNDs) when combined with INS and GPS/Wi-Fi, especially in indoor environments.
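A hand-rolled PSO sketch of this calibration idea is shown below, assuming raw three-axis magnetometer samples collected while rotating the sensor, a per-axis bias and scale-factor error model, and a known local field magnitude; the bounds and swarm settings are illustrative, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(params, raw, field_ref):
    """Spread of the calibrated field magnitude around the reference magnitude."""
    bias, scale = params[:3], params[3:]
    calibrated = (raw - bias) / scale
    return np.mean((np.linalg.norm(calibrated, axis=1) - field_ref) ** 2)

def pso_calibrate(raw, field_ref, n_particles=40, n_iter=200):
    lo = np.array([-100, -100, -100, 0.5, 0.5, 0.5])   # bias (uT) and scale-factor bounds
    hi = np.array([100, 100, 100, 1.5, 1.5, 1.5])
    x = rng.uniform(lo, hi, (n_particles, 6))
    v = np.zeros_like(x)
    p_best = x.copy()
    p_cost = np.array([cost(p, raw, field_ref) for p in x])
    g_best = p_best[p_cost.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, 6))
        v = 0.7 * v + 1.5 * r1 * (p_best - x) + 1.5 * r2 * (g_best - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p, raw, field_ref) for p in x])
        improved = c < p_cost
        p_best[improved], p_cost[improved] = x[improved], c[improved]
        g_best = p_best[p_cost.argmin()].copy()
    return g_best[:3], g_best[3:]                      # estimated bias and scale factors

# Usage: bias, scale = pso_calibrate(raw_samples, field_ref=50.0)  # raw_samples: (N, 3) array in uT
```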
A Cost-Effective Geodetic Strainmeter Based on Dual Coaxial Cable Bragg Gratings
Fu, Jihua; Wang, Xu; Wei, Tao; Wei, Meng; Shen, Yang
2017-01-01
Observations of surface deformation are essential for understanding a wide range of geophysical problems, including earthquakes, volcanoes, landslides, and glaciers. Current geodetic technologies, such as global positioning system (GPS), interferometric synthetic aperture radar (InSAR), borehole and laser strainmeters, are costly and limited in their temporal or spatial resolutions. Here we present a new type of strainmeters based on the coaxial cable Bragg grating (CCBG) sensing technology that provides cost-effective strain measurements. Two CCBGs are introduced into the geodetic strainmeter: one serves as a sensor to measure the strain applied on it, and the other acts as a reference to detect environmental noises. By integrating the sensor and reference signals in a mixer, the environmental noises are minimized and a lower mixed frequency is obtained. The lower mixed frequency allows for measurements to be taken with a portable spectrum analyzer, rather than an expensive spectrum analyzer or a vector network analyzer (VNA). Analysis of laboratory experiments shows that the strain can be measured by the CCBG sensor, and the portable spectrum analyzer can make measurements with the accuracy similar to the expensive spectrum analyzer, whose relative error to the spectrum analyzer R3272 is less than ±0.4%. The outputs of the geodetic strainmeter show a linear relationship with the strains that the CCBG sensor experienced. The measured sensitivity of the geodetic strainmeter is about −0.082 kHz/με; it can cover a large dynamic measuring range up to 2%, and its nonlinear errors can be less than 5.3%. PMID:28417925
A Cost-Effective Geodetic Strainmeter Based on Dual Coaxial Cable Bragg Gratings.
Fu, Jihua; Wang, Xu; Wei, Tao; Wei, Meng; Shen, Yang
2017-04-12
Observations of surface deformation are essential for understanding a wide range of geophysical problems, including earthquakes, volcanoes, landslides, and glaciers. Current geodetic technologies, such as the global positioning system (GPS), interferometric synthetic aperture radar (InSAR), and borehole and laser strainmeters, are costly and limited in their temporal or spatial resolutions. Here we present a new type of strainmeter based on the coaxial cable Bragg grating (CCBG) sensing technology that provides cost-effective strain measurements. Two CCBGs are introduced into the geodetic strainmeter: one serves as a sensor to measure the strain applied to it, and the other acts as a reference to detect environmental noise. By integrating the sensor and reference signals in a mixer, the environmental noise is minimized and a lower mixed frequency is obtained. The lower mixed frequency allows measurements to be taken with a portable spectrum analyzer rather than an expensive spectrum analyzer or a vector network analyzer (VNA). Analysis of laboratory experiments shows that the strain can be measured by the CCBG sensor and that the portable spectrum analyzer can make measurements with accuracy similar to that of an expensive spectrum analyzer; its relative error with respect to the R3272 spectrum analyzer is less than ±0.4%. The outputs of the geodetic strainmeter show a linear relationship with the strains experienced by the CCBG sensor. The measured sensitivity of the geodetic strainmeter is about -0.082 kHz/με; it can cover a large dynamic measuring range of up to 2%, and its nonlinear errors can be less than 5.3%.
Analysis of the Accuracy of Ballistic Descent from a Circular Circumterrestrial Orbit
NASA Astrophysics Data System (ADS)
Sikharulidze, Yu. G.; Korchagin, A. N.
2002-01-01
The problem of transporting the results of experiments and observations to Earth arises regularly in space research. Its simplest and lowest-cost solution is the use of a small ballistic reentry spacecraft. Such a spacecraft has no system for controlling the descent trajectory in the atmosphere, which can result in a large spread of landing points, making it difficult to locate the spacecraft and, in many cases, to achieve a safe landing. In this work, the choice of a compromise flight scheme is considered, which includes the optimum braking maneuver, acceptable conditions of entry into the atmosphere with limited heating and overload, and the possibility of landing within a circle with a radius of 12.5 km. The following disturbing factors were taken into account in the analysis of landing accuracy: errors in the execution of the braking impulse, variations in atmospheric density and wind, the error in the specification of the ballistic coefficient of the reentry spacecraft, and a displacement of its center of mass from the symmetry axis. It is demonstrated that the optimum maneuver ensures the maximum absolute value of the reentry angle and the insensitivity of the descent trajectory to small errors in the orientation of the braking engine in the plane of the orbit. It is also demonstrated that the possible landing-point error due to the error in the specification of the ballistic coefficient does not depend (in the linear approximation) on the coefficient's value, but only on the reentry angle and on the accuracy with which the coefficient is specified. A guided parachute with an aerodynamic efficiency of about two should be used on the last leg of the reentry trajectory. This will allow landing within a prescribed range and provide adequate conditions for interception of the reentry spacecraft by a helicopter in order to prevent a rough landing.
[Research on Resistant Starch Content of Rice Grain Based on NIR Spectroscopy Model].
Luo, Xi; Wu, Fang-xi; Xie, Hong-guang; Zhu, Yong-sheng; Zhang, Jian-fu; Xie, Hua-an
2016-03-01
A new method based on near-infrared reflectance spectroscopy (NIRS) was explored to determine the resistant starch content of rice, replacing the common chemical method, which is time-consuming and costly. First, 62 spectral samples with large differences in rice resistant starch content were collected, and the spectral data and measured chemical values were imported into chemometrics software. A near-infrared spectroscopy calibration model for rice resistant starch content was then constructed with the partial least squares (PLS) method. The results are as follows. In internal cross-validation, the coefficients of determination (R2) for untreated spectra, pretreatment with MSC+1stD, and pretreatment with 1stD+SNV were 0.9202, 0.9670 and 0.9767, respectively, and the root mean square errors of prediction (RMSEP) were 1.5337, 1.0112 and 0.8371, respectively. In external validation, the coefficients of determination (R2) for untreated spectra, pretreatment with MSC+1stD, and pretreatment with 1stD+SNV were 0.805, 0.976 and 0.992, respectively, and the average absolute errors were 1.456, 0.818 and 0.515, respectively. There was no significant difference between chemical and predicted values (Tukey multiple comparison), so near-infrared spectral analysis appears more practical than chemical measurement. Among the different pretreatments, the first derivative combined with standard normal variate (1stD+SNV) gave a higher coefficient of determination (R2) and lower error in both internal and external validation; in other words, the calibration model pretreated with 1stD+SNV has higher precision and smaller error.
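A calibration of this kind can be sketched with scikit-learn's PLS implementation. The data below are synthetic stand-ins for NIR spectra, and the pretreatment (first derivative followed by SNV) and number of latent variables are assumptions chosen for illustration, not the settings used in the study.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict, train_test_split
from sklearn.metrics import r2_score, mean_squared_error

# Synthetic stand-in for 62 NIR spectra; real use would load measured spectra
# and laboratory reference values for resistant starch content.
rng = np.random.default_rng(1)
n_samples, n_wavelengths = 62, 700
X = rng.normal(size=(n_samples, n_wavelengths)).cumsum(axis=1)
y = 0.05 * X[:, 100] - 0.03 * X[:, 400] + rng.normal(scale=0.2, size=n_samples)

def first_derivative(spectra):
    """First derivative along the wavelength axis (the '1stD' pretreatment)."""
    return np.gradient(spectra, axis=1)

def snv(spectra):
    """Standard normal variate: centre and scale each spectrum individually."""
    return (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)

X_pre = snv(first_derivative(X))
X_cal, X_val, y_cal, y_val = train_test_split(X_pre, y, test_size=0.25, random_state=0)

pls = PLSRegression(n_components=8)   # number of latent variables is an assumption
y_cv = cross_val_predict(pls, X_cal, y_cal, cv=10).ravel()
print("internal CV R2   :", r2_score(y_cal, y_cv))
print("internal CV RMSEP:", np.sqrt(mean_squared_error(y_cal, y_cv)))

pls.fit(X_cal, y_cal)
y_pred = pls.predict(X_val).ravel()
print("external R2      :", r2_score(y_val, y_pred))
print("external RMSEP   :", np.sqrt(mean_squared_error(y_val, y_pred)))
```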
Automated brainstem co-registration (ABC) for MRI.
Napadow, Vitaly; Dhond, Rupali; Kennedy, David; Hui, Kathleen K S; Makris, Nikos
2006-09-01
Group data analysis in brainstem neuroimaging is predicated on accurate co-registration of anatomy. As the brainstem is comprised of many functionally heterogeneous nuclei densely situated adjacent to one another, relatively small errors in co-registration can manifest in increased variance or decreased sensitivity (or significance) in detecting activations. We have devised a 2-stage automated, reference mask guided registration technique (Automated Brainstem Co-registration, or ABC) for improved brainstem co-registration. Our approach utilized a brainstem mask dataset to weight an automated co-registration cost function. Our method was validated through measurement of RMS error at 12 manually defined landmarks. These landmarks were also used as guides for a secondary manual co-registration option, intended for outlier individuals that may not adequately co-register with our automated method. Our methodology was tested on 10 healthy human subjects and compared to traditional co-registration techniques (Talairach transform and automated affine transform to the MNI-152 template). We found that ABC had a significantly lower mean RMS error (1.22 +/- 0.39 mm) than Talairach transform (2.88 +/- 1.22 mm, mu +/- sigma) and the global affine (3.26 +/- 0.81 mm) method. Improved accuracy was also found for our manual-landmark-guided option (1.51 +/- 0.43 mm). Visualizing individual brainstem borders demonstrated more consistent and uniform overlap for ABC compared to traditional global co-registration techniques. Improved robustness (lower susceptibility to outliers) was demonstrated with ABC through lower inter-subject RMS error variance compared with traditional co-registration methods. The use of easily available and validated tools (AFNI and FSL) for this method should ease adoption by other investigators interested in brainstem data group analysis.
Comparing drinking water treatment costs to source water protection costs using time series analysis
NASA Astrophysics Data System (ADS)
Heberling, Matthew T.; Nietch, Christopher T.; Thurston, Hale W.; Elovitz, Michael; Birkenhauer, Kelly H.; Panguluri, Srinivas; Ramakrishnan, Balaji; Heiser, Eric; Neyer, Tim
2015-11-01
We present a framework to compare water treatment costs to source water protection costs, an important knowledge gap for drinking water treatment plants (DWTPs). This trade-off helps to determine what incentives a DWTP has to invest in natural infrastructure or pollution reduction in the watershed rather than pay for treatment on site. To illustrate, we use daily observations from 2007 to 2011 for the Bob McEwen Water Treatment Plant, Clermont County, Ohio, to understand the relationship between treatment costs and water quality and operational variables (e.g., turbidity, total organic carbon [TOC], pool elevation, and production volume). Part of our contribution to understanding drinking water treatment costs is examining both long-run and short-run relationships using error correction models (ECMs). Treatment costs per 1000 gallons (per 3.79 m3) were based on chemical, pumping, and granular activated carbon costs. Results from the ECM suggest that a 1% decrease in turbidity decreases treatment costs by 0.02% immediately and an additional 0.1% over future days. Using mean values for the plant, a 1% decrease in turbidity leads to $1123/year decrease in treatment costs. To compare these costs with source water protection costs, we use a polynomial distributed lag model to link total phosphorus loads, a source water quality parameter affected by land use changes, to turbidity at the plant. We find the costs for source water protection to reduce loads much greater than the reduction in treatment costs during these years. Although we find no incentive to protect source water in our case study, this framework can help DWTPs quantify the trade-offs.
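The two-step error correction modelling used in this type of analysis can be sketched with statsmodels. The synthetic series, variable names, and single-regressor form below are assumptions for illustration; the study's actual models include several water-quality and operational covariates.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic daily series: log treatment cost driven by log turbidity (illustrative only).
rng = np.random.default_rng(2)
n = 1500
log_turb = 2.0 + np.cumsum(rng.normal(scale=0.05, size=n))
log_cost = 0.5 + 0.10 * log_turb + rng.normal(scale=0.02, size=n)
df = pd.DataFrame({"log_cost": log_cost, "log_turb": log_turb})

# Step 1: long-run (levels) relationship; its residual is the error-correction term.
long_run = sm.OLS(df["log_cost"], sm.add_constant(df["log_turb"])).fit()
df["ect"] = long_run.resid

# Step 2: short-run dynamics in first differences with the lagged error-correction term.
d = df[["log_cost", "log_turb"]].diff().dropna()
d["ect_lag"] = df["ect"].shift(1)          # pandas aligns on the index
short_run = sm.OLS(d["log_cost"], sm.add_constant(d[["log_turb", "ect_lag"]])).fit()

print(long_run.params)    # long-run elasticity of cost with respect to turbidity
print(short_run.params)   # immediate (short-run) effect and speed of adjustment
```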
[Macroeconomic costs of eye diseases].
Hirneiß, C; Kampik, A; Neubauer, A S
2014-05-01
Eye diseases that are relevant with regard to their macroeconomic costs and their impact on society include cataract, diabetic retinopathy, age-related maculopathy, glaucoma and refractive errors. The aim of this article is to provide a comprehensive overview of direct and indirect costs for the major eye disease categories in Germany, based on existing literature and data sources. A semi-structured literature search was performed in the databases Medline and Embase and in the Google search engine for relevant original papers and reviews on the costs of eye diseases with relevance for or transferability to Germany (last search date October 2013). In addition, manual searching was performed in important national databases and information sources, such as the Federal Office of Statistics and scientific societies. The direct costs for these diseases add up to approximately 2.6 billion Euros yearly for the Federal Republic of Germany, including out-of-pocket payments by patients but excluding optical aids (e.g. glasses). In addition to these direct costs there are also indirect costs, caused for example by loss of employment or productivity or by a reduction in health-related quality of life. These indirect costs can only be roughly estimated. Including the indirect costs for the eye diseases investigated, a total yearly macroeconomic cost ranging between 4 and 12 billion Euros is estimated for Germany. The costs for the eye diseases cataract, diabetic retinopathy, age-related maculopathy, glaucoma and refractive errors thus have a macroeconomically relevant dimension. Based on the predicted demographic changes of an ageing society, an increase in the prevalence, and thus in the costs, of eye diseases is expected in the future.
SU-E-T-88: Comprehensive Automated Daily QA for Hypo-Fractionated Treatments
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGuinness, C; Morin, O
2014-06-01
Purpose: The trend towards more SBRT treatments with fewer high-dose fractions places increased importance on daily QA. Patient plan-specific QA with 3%/3 mm gamma analysis and daily output constancy checks may not be enough to guarantee the level of accuracy required for SBRT treatments, but increasing the already extensive set of required QA procedures is a daunting proposition. We performed a feasibility study of more comprehensive automated daily QA that could improve the diagnostic capabilities of QA without increasing workload. Methods: We performed the study on a Siemens Artiste linear accelerator using the integrated flat-panel EPID. We included square fields, a picket fence, overlap and representative IMRT fields to measure output, flatness, symmetry, beam center, and percent difference from the standard. We also imposed a set of machine errors (MLC leaf position, machine output, and beam steering) for comparison with the standard. Results: Daily output was consistent within +/- 1%. Changes in steering current of 1.4% and 2.4% resulted in 3.2% and 6.3% changes in flatness. MLC leaf offset errors of 1 and 2 mm were visibly obvious in difference plots but passed a 3%/3 mm gamma analysis. A simple test of transmission in a picket fence can catch a 1 mm offset error of a single leaf. The entire morning QA sequence is performed in less than 30 minutes and images are automatically analyzed. Conclusion: Automated QA procedures could be used to provide more comprehensive information about the machine with less time and human involvement. We have also shown that other simple tests are better able to catch MLC leaf position errors than the 3%/3 mm gamma analysis commonly used for IMRT and modulated arc treatments. Finally, this information could be used to watch machine trends and predict problems before they lead to costly machine downtime.
Bi-Objective Optimal Control Modification Adaptive Control for Systems with Input Uncertainty
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2012-01-01
This paper presents a new model-reference adaptive control method based on a bi-objective optimal control formulation for systems with input uncertainty. A parallel predictor model is constructed to relate the predictor error to the estimation error of the control effectiveness matrix. In this work, we develop an optimal control modification adaptive control approach that seeks to minimize a bi-objective linear quadratic cost function of both the tracking error norm and predictor error norm simultaneously. The resulting adaptive laws for the parametric uncertainty and control effectiveness uncertainty are dependent on both the tracking error and predictor error, while the adaptive laws for the feedback gain and command feedforward gain are only dependent on the tracking error. The optimal control modification term provides robustness to the adaptive laws naturally from the optimal control framework. Simulations demonstrate the effectiveness of the proposed adaptive control approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brito, Thiago V.; Morley, Steven K.
2017-10-25
A method for comparing and optimizing the accuracy of empirical magnetic field models using in situ magnetic field measurements is presented in this paper. The optimization method minimizes a cost function—τ—that explicitly includes both a magnitude and an angular term. A time span of 21 days, including periods of mild and intense geomagnetic activity, was used for this analysis. A comparison between five magnetic field models (T96, T01S, T02, TS04, and TS07) widely used by the community demonstrated that the T02 model was, on average, the most accurate when driven by the standard model input parameters. The optimization procedure, performed in all models except TS07, generally improved the results when compared to unoptimized versions of the models. Additionally, using more satellites in the optimization procedure produces more accurate results. This procedure reduces the number of large errors in the model, that is, it reduces the number of outliers in the error distribution. The TS04 model shows the most accurate results after the optimization in terms of both the magnitude and direction, when using at least six satellites in the fitting. It gave a smaller error than its unoptimized counterpart 57.3% of the time and outperformed the best unoptimized model (T02) 56.2% of the time. Its median percentage error in |B| was reduced from 4.54% to 3.84%. Finally, the difference among the models analyzed, when compared in terms of the median of the error distributions, is not very large. However, the unoptimized models can have very large errors, which are much reduced after the optimization.
On Accuracy of Adaptive Grid Methods for Captured Shocks
NASA Technical Reports Server (NTRS)
Yamaleev, Nail K.; Carpenter, Mark H.
2002-01-01
The accuracy of two grid adaptation strategies, grid redistribution and local grid refinement, is examined by solving the 2-D Euler equations for the supersonic steady flow around a cylinder. Second- and fourth-order linear finite difference shock-capturing schemes, based on the Lax-Friedrichs flux splitting, are used to discretize the governing equations. The grid refinement study shows that for the second-order scheme, neither grid adaptation strategy improves the numerical solution accuracy compared to that calculated on a uniform grid with the same number of grid points. For the fourth-order scheme, the dominant first-order error component is reduced by the grid adaptation, while the design-order error component drastically increases because of the grid nonuniformity. As a result, both grid adaptation techniques improve the numerical solution accuracy only on the coarsest mesh or on very fine grids that are seldom found in practical applications because of the computational cost involved. Similar error behavior has been obtained for the pressure integral across the shock. A simple analysis shows that both grid adaptation strategies are not without penalties in the numerical solution accuracy. Based on these results, a new grid adaptation criterion for captured shocks is proposed.
NASA Astrophysics Data System (ADS)
Varotsos, G. K.; Nistazakis, H. E.; Petkovic, M. I.; Djordjevic, G. T.; Tombras, G. S.
2017-11-01
Over recent years, terrestrial free-space optical (FSO) communication systems have attracted increasing scientific and commercial interest in response to the growing demand for ultra-high-bandwidth, cost-effective and secure wireless data transmission. However, because the signal propagates through the atmosphere, the performance of such links depends strongly on atmospheric conditions such as weather phenomena and the turbulence effect. Additionally, their operation is affected significantly by the pointing-error effect, which is caused by misalignment of the optical beam between the transmitter and the receiver. To address this significant performance degradation, several statistical models have been proposed, and particular attention has also been given to diversity methods. Here, the turbulence-induced fading of the received optical signal irradiance is studied through the M (Málaga) distribution, an accurate model suitable for weak to strong turbulence conditions that unifies most of the well-known, previously proposed models. Thus, taking into account the atmospheric turbulence conditions along with the pointing-error effect with nonzero boresight and the modulation technique used, we derive mathematical expressions for estimating the average bit error rate of SIMO FSO links. Finally, numerical results are given to verify the derived expressions, and Monte Carlo simulations are provided to further validate the accuracy of the proposed analysis and the obtained mathematical expressions.
Fang, Cheng; Butler, David Lee
2013-05-01
In this paper, an innovative method for CMM (Coordinate Measuring Machine) self-calibration is proposed. In contrast to conventional CMM calibration, which relies heavily on a high-precision reference standard such as a laser interferometer, the proposed calibration method is based on a low-cost artefact fabricated from commercially available precision ball bearings. By optimizing the mathematical model and rearranging the data sampling positions, the experimental process and data analysis can be simplified. In mathematical terms, the number of samples can be minimized by eliminating the redundant equations among those formed from the experimental data array. The section lengths of the artefact are measured at arranged positions, from which an equation set can be configured to determine the measurement errors at the corresponding positions. With the proposed method, the equation set is short of one equation, which can be supplied either by measuring the total length of the artefact with a higher-precision CMM or by calibrating the single-point error at the extreme position with a laser interferometer; in this paper, the latter is selected. With spline interpolation, the error compensation curve can then be determined. To verify the proposed method, a simple calibration system was set up on a commercial CMM. Experimental results showed that, with the error compensation curve, the measurement uncertainty can be reduced to 50%.
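The spline-interpolated compensation curve can be sketched as follows; the positions and error values are invented for illustration, not measurements from the paper.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Positional errors (micrometres) determined at a few positions along the axis (mm).
# These values are invented for illustration.
positions_mm = np.array([0, 100, 200, 300, 400, 500], dtype=float)
error_um = np.array([0.0, 1.2, 2.1, 1.8, 0.9, 0.3])

compensation = CubicSpline(positions_mm, error_um)

# Compensate a raw CMM reading by subtracting the interpolated error at that position.
raw_position_mm = 237.5
corrected_mm = raw_position_mm - compensation(raw_position_mm) * 1e-3  # um -> mm
print(f"interpolated error at {raw_position_mm} mm: {compensation(raw_position_mm):.3f} um")
print(f"corrected position: {corrected_mm:.6f} mm")
```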
NASA Astrophysics Data System (ADS)
Zhang, Jianbin; Sun, Xiantao; Chen, Weihai; Chen, Wenjie; Jiang, Lusha
2014-12-01
In microelectromechanical system (MEMS) optical switch assembly, collisions between the optical fiber and the edges of the U-groove are unavoidable because of positioning errors between them. Such collisions can cause irreparable damage, since the optical fiber and the silicon U-groove are usually very fragile. The typical solution is first to detect the positioning errors with machine vision or high-resolution sensors and then to actively eliminate them through the motion of precision mechanisms; however, this approach increases the cost and complexity of the system. In this paper, we present a passive compensation method to accommodate the positioning errors. First, we study the insertion process of the optical fiber into the U-groove to analyze all possible positioning errors as well as the conditions for successful insertion. Then, a novel passive flexure-based mechanism based on the remote-center-of-compliance concept is designed to satisfy the required insertion condition. The pseudo-rigid-body-model method is used to calculate the stiffness of the mechanism along the different directions, which is verified by finite element analysis (FEA). Finally, a prototype of the passive flexure-based mechanism is fabricated for performance tests. Both FEA and experimental results indicate that the designed mechanism is suitable for MEMS optical switch assembly.
Application of statistical machine translation to public health information: a feasibility study.
Kirchhoff, Katrin; Turner, Anne M; Axelrod, Amittai; Saavedra, Francisco
2011-01-01
Accurate, understandable public health information is important for ensuring the health of the nation. The large portion of the US population with Limited English Proficiency is best served by translations of public-health information into other languages. However, a large number of health departments and primary care clinics face significant barriers to fulfilling federal mandates to provide multilingual materials to Limited English Proficiency individuals. This article presents a pilot study on the feasibility of using freely available statistical machine translation technology to translate health promotion materials. The authors gathered health-promotion materials in English from local and national public-health websites. Spanish versions were created by translating the documents using a freely available machine-translation website. Translations were rated for adequacy and fluency, analyzed for errors, manually corrected by a human posteditor, and compared with exclusively manual translations. Machine translation plus postediting took 15-53 min per document, compared to the reported days or even weeks for the standard translation process. A blind comparison of machine-assisted and human translations of six documents revealed overall equivalency between machine-translated and manually translated materials. The analysis of translation errors indicated that the most important errors were word-sense errors. The results indicate that machine translation plus postediting may be an effective method of producing multilingual health materials with equivalent quality but lower cost compared to manual translations.
Application of statistical machine translation to public health information: a feasibility study
Turner, Anne M; Axelrod, Amittai; Saavedra, Francisco
2011-01-01
Objective Accurate, understandable public health information is important for ensuring the health of the nation. The large portion of the US population with Limited English Proficiency is best served by translations of public-health information into other languages. However, a large number of health departments and primary care clinics face significant barriers to fulfilling federal mandates to provide multilingual materials to Limited English Proficiency individuals. This article presents a pilot study on the feasibility of using freely available statistical machine translation technology to translate health promotion materials. Design The authors gathered health-promotion materials in English from local and national public-health websites. Spanish versions were created by translating the documents using a freely available machine-translation website. Translations were rated for adequacy and fluency, analyzed for errors, manually corrected by a human posteditor, and compared with exclusively manual translations. Results Machine translation plus postediting took 15–53 min per document, compared to the reported days or even weeks for the standard translation process. A blind comparison of machine-assisted and human translations of six documents revealed overall equivalency between machine-translated and manually translated materials. The analysis of translation errors indicated that the most important errors were word-sense errors. Conclusion The results indicate that machine translation plus postediting may be an effective method of producing multilingual health materials with equivalent quality but lower cost compared to manual translations. PMID:21498805
Cost effectiveness of the stream-gaging program in Nevada
Arteaga, F.E.
1990-01-01
The stream-gaging network in Nevada was evaluated as part of a nationwide effort by the U.S. Geological Survey to define and document the most cost-effective means of furnishing streamflow information. Specifically, the study dealt with 79 streamflow gages and 2 canal-flow gages that were under the direct operation of Nevada personnel as of 1983. Cost-effective allocations of resources, including budget and operational criteria, were studied using statistical procedures known as Kalman-filtering techniques. The possibility of developing streamflow data at ungaged sites was evaluated using flow-routing and statistical regression analyses. Neither of these methods provided sufficiently accurate results to warrant their use in place of stream gaging. The 81 gaging stations were being operated in 1983 with a budget of $465,500. As a result of this study, all existing stations were determined to be necessary components of the program for the foreseeable future. At the 1983 funding level, the average standard error of streamflow records was nearly 28%. This same overall level of accuracy could have been maintained with a budget of approximately $445,000 if the funds were redistributed more equitably among the gages. The maximum budget analyzed, $1,164,000, would have resulted in an average standard error of 11%. The study indicates that a major source of error is lost data. If perfectly operating equipment were available, the standard error for the 1983 program and budget could have been reduced to 21%. (Thacker-USGS, WRD)
Interspecific song imitation by a Prairie Warbler
Bruce E. Byers; Brodie A. Kramer; Michael E. Akresh; David I. King
2013-01-01
Song development in oscine songbirds relies on imitation of adult singers and thus leaves developing birds vulnerable to potentially costly errors caused by imitation of inappropriate models, such as the songs of other species. In May and June 2012, we recorded the songs of a bird that made such an error: a male Prairie Warbler (Setophaga discolor)...
Crop area estimation based on remotely-sensed data with an accurate but costly subsample
NASA Technical Reports Server (NTRS)
Gunst, R. F.
1983-01-01
Alternatives to sampling-theory stratified and regression estimators of crop production and timber biomass were examined. An alternative estimator which is viewed as especially promising is the errors-in-variables regression estimator. Investigations established the need for caution with this estimator when the ratio of two error variances is not precisely known.
Meads, Catherine; Glover, Matthew; Dimmock, Paul; Pokhrel, Subhash
2016-12-01
As part of the development of the National Institute for Health and Care Excellence (NICE) Medical Technologies Guidance on Parafricta Bootees and Undergarments to reduce skin breakdown in people with, or at risk of, pressure ulcers, the manufacturer (APA Parafricta Ltd) submitted clinical and economic evidence, which was critically appraised by an External Assessment Centre (EAC) and subsequently used by the Medical Technologies Advisory Committee (MTAC) to develop recommendations for further research. The University of Birmingham and Brunel University, acting as a consortium, were commissioned to act as the EAC, independently appraising the submission. This article is an overview of the original evidence submitted, the EAC's findings and the final NICE guidance. Very little comparative evidence was submitted to demonstrate the effectiveness of Parafricta Bootees or Undergarments. The sponsor submitted a simple cost analysis to estimate the costs of using Parafricta in addition to current practice, in comparison with current practice alone, in hospital and community settings separately. The analysis took a National Health Service (NHS) perspective. The basis of the analysis was a previously published comparative study, which showed no statistical difference in average lengths of stay between patients who wore Parafricta Undergarments and Bootees, and those who did not. The economic model incorporated the costs of Parafricta but assumed shorter lengths of stay with Parafricta. The sponsor concluded that Parafricta was cost saving relative to the comparators. The EAC made amendments to the sponsor's analysis to correct for errors and to reflect alternative assumptions. Parafricta remained cost saving in most analyses, and the savings per prevalent case ranged from £757 in the hospital model to £3455 in the community model. All analyses were severely limited by the available data on effectiveness, in particular a lack of good-quality comparative studies.
42 CFR 431.992 - Corrective action plan.
Code of Federal Regulations, 2010 CFR
2010-10-01
... CMS, designed to reduce improper payments in each program based on its analysis of the error causes in... State must take the following actions: (1) Data analysis. States must conduct data analysis such as reviewing clusters of errors, general error causes, characteristics, and frequency of errors that are...
42 CFR 431.992 - Corrective action plan.
Code of Federal Regulations, 2011 CFR
2011-10-01
... CMS, designed to reduce improper payments in each program based on its analysis of the error causes in... State must take the following actions: (1) Data analysis. States must conduct data analysis such as reviewing clusters of errors, general error causes, characteristics, and frequency of errors that are...
Du, Jiaying; Gerdtman, Christer; Lindén, Maria
2018-04-06
Motion sensors such as MEMS gyroscopes and accelerometers are characterized by a small size, light weight, high sensitivity, and low cost. They are used in an increasing number of applications. However, they are easily influenced by environmental effects such as temperature change, shock, and vibration. Thus, signal processing is essential for minimizing errors and improving signal quality and system stability. The aim of this work is to investigate and present a systematic review of different signal error reduction algorithms that are used for MEMS gyroscope-based motion analysis systems for human motion analysis or have the potential to be used in this area. A systematic search was performed with the search engines/databases of the ACM Digital Library, IEEE Xplore, PubMed, and Scopus. Sixteen papers that focus on MEMS gyroscope-related signal processing and were published in journals or conference proceedings in the past 10 years were found and fully reviewed. Seventeen algorithms were categorized into four main groups: Kalman-filter-based algorithms, adaptive-based algorithms, simple filter algorithms, and compensation-based algorithms. The algorithms were analyzed and presented along with their characteristics such as advantages, disadvantages, and time limitations. A user guide to the most suitable signal processing algorithms within this area is presented.
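As a concrete instance of the 'simple filter' category reviewed above, the sketch below fuses a drifting gyroscope rate with a noisy accelerometer tilt angle using a complementary filter. The sampling rate, bias, noise levels, and blending constant are assumed values chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
dt, n = 0.01, 2000                       # 100 Hz for 20 s (assumed)
t = np.arange(n) * dt
true_angle = 10.0 * np.sin(0.5 * t)      # reference tilt angle in degrees

# Simulated sensors: gyro rate with a constant bias, accelerometer tilt with noise.
gyro_rate = np.gradient(true_angle, dt) + 1.5 + rng.normal(scale=0.5, size=n)
accel_angle = true_angle + rng.normal(scale=2.0, size=n)

alpha = 0.98                             # blending constant (assumed)
est = np.zeros(n)
for k in range(1, n):
    # High-pass the integrated gyro, low-pass the accelerometer angle.
    est[k] = alpha * (est[k - 1] + gyro_rate[k] * dt) + (1 - alpha) * accel_angle[k]

gyro_only = np.cumsum(gyro_rate) * dt    # integration alone drifts with the bias
rms = lambda e: np.sqrt(np.mean(e ** 2))
print("RMS error, gyro integration only:", rms(gyro_only - true_angle))
print("RMS error, complementary filter :", rms(est - true_angle))
```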
Gerdtman, Christer
2018-01-01
Motion sensors such as MEMS gyroscopes and accelerometers are characterized by a small size, light weight, high sensitivity, and low cost. They are used in an increasing number of applications. However, they are easily influenced by environmental effects such as temperature change, shock, and vibration. Thus, signal processing is essential for minimizing errors and improving signal quality and system stability. The aim of this work is to investigate and present a systematic review of different signal error reduction algorithms that are used for MEMS gyroscope-based motion analysis systems for human motion analysis or have the potential to be used in this area. A systematic search was performed with the search engines/databases of the ACM Digital Library, IEEE Xplore, PubMed, and Scopus. Sixteen papers that focus on MEMS gyroscope-related signal processing and were published in journals or conference proceedings in the past 10 years were found and fully reviewed. Seventeen algorithms were categorized into four main groups: Kalman-filter-based algorithms, adaptive-based algorithms, simple filter algorithms, and compensation-based algorithms. The algorithms were analyzed and presented along with their characteristics such as advantages, disadvantages, and time limitations. A user guide to the most suitable signal processing algorithms within this area is presented. PMID:29642412
NASA Astrophysics Data System (ADS)
Tao, Zhu; Shi, Runhe; Zeng, Yuyan; Gao, Wei
2017-09-01
The 3D model is an important part of simulated remote sensing for Earth observation. At the small spatial scales handled by the DART software, both the detail of the model itself and the number of model instances distributed in the scene have an important impact on the canopy Normalized Difference Vegetation Index (NDVI). Taking Phragmites australis in the Yangtze Estuary as an example, this paper studied the effect of the P. australis model on canopy NDVI, building on previous studies of model precision, mainly with respect to the cell dimension of the DART software and the density distribution of the P. australis model in the scene, as well as the choice of model density given the cost of computing time in actual simulations. The DART cell dimensions and the density of the scene model were set using the optimal-precision model from existing research results. The simulated NDVI for different model densities under different cell dimensions was examined by error analysis. By studying the relationship between relative error, absolute error and time cost, we established a density selection method for the P. australis model in the simulation of small-scale scenes. Experiments showed that, because of the differences between the 3D model and real scenarios, the number of P. australis plants in the simulated scene need not be the same as in the real environment. The best simulation results, in terms of both error and visual effect, could be obtained by keeping a density of about 40 plants per square meter.
Mohino-Herranz, Inma; Gil-Pita, Roberto; Ferreira, Javier; Rosa-Zurera, Manuel; Seoane, Fernando
2015-10-08
Determining the stress level of a subject in real time could be of special interest in certain professional activities to allow the monitoring of soldiers, pilots, emergency personnel and other professionals responsible for human lives. Assessment of current mental fitness for executing a task at hand might avoid unnecessary risks. To obtain this knowledge, two physiological measurements were recorded in this work using customized non-invasive wearable instrumentation that measures electrocardiogram (ECG) and thoracic electrical bioimpedance (TEB) signals. The relevant information from each measurement is extracted via evaluation of a reduced set of selected features. These features are primarily obtained from filtered and processed versions of the raw time measurements with calculations of certain statistical and descriptive parameters. Selection of the reduced set of features was performed using genetic algorithms, thus constraining the computational cost of the real-time implementation. Different classification approaches have been studied, but neural networks were chosen for this investigation because they represent a good tradeoff between the intelligence of the solution and computational complexity. Three different application scenarios were considered. In the first scenario, the proposed system is capable of distinguishing among different types of activity with a 21.2% probability error, for activities coded as neutral, emotional, mental and physical. In the second scenario, the proposed solution distinguishes among the three different emotional states of neutral, sadness and disgust, with a probability error of 4.8%. In the third scenario, the system is able to distinguish between low mental load and mental overload with a probability error of 32.3%. The computational cost was calculated, and the solution was implemented in commercially available Android-based smartphones. The results indicate that the computational cost of executing such a monitoring solution is negligible compared to the nominal computational load of current smartphones.
Gene expression inference with deep learning.
Chen, Yifei; Li, Yi; Narayan, Rajiv; Subramanian, Aravind; Xie, Xiaohui
2016-06-15
Large-scale gene expression profiling has been widely used to characterize cellular states in response to various disease conditions, genetic perturbations, etc. Although the cost of whole-genome expression profiles has been dropping steadily, generating a compendium of expression profiling over thousands of samples is still very expensive. Recognizing that gene expressions are often highly correlated, researchers from the NIH LINCS program have developed a cost-effective strategy of profiling only ∼1000 carefully selected landmark genes and relying on computational methods to infer the expression of remaining target genes. However, the computational approach adopted by the LINCS program is currently based on linear regression (LR), limiting its accuracy since it does not capture complex nonlinear relationship between expressions of genes. We present a deep learning method (abbreviated as D-GEX) to infer the expression of target genes from the expression of landmark genes. We used the microarray-based Gene Expression Omnibus dataset, consisting of 111K expression profiles, to train our model and compare its performance to those from other methods. In terms of mean absolute error averaged across all genes, deep learning significantly outperforms LR with 15.33% relative improvement. A gene-wise comparative analysis shows that deep learning achieves lower error than LR in 99.97% of the target genes. We also tested the performance of our learned model on an independent RNA-Seq-based GTEx dataset, which consists of 2921 expression profiles. Deep learning still outperforms LR with 6.57% relative improvement, and achieves lower error in 81.31% of the target genes. D-GEX is available at https://github.com/uci-cbcl/D-GEX. Contact: xhx@ics.uci.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Gene expression inference with deep learning
Chen, Yifei; Li, Yi; Narayan, Rajiv; Subramanian, Aravind; Xie, Xiaohui
2016-01-01
Motivation: Large-scale gene expression profiling has been widely used to characterize cellular states in response to various disease conditions, genetic perturbations, etc. Although the cost of whole-genome expression profiles has been dropping steadily, generating a compendium of expression profiling over thousands of samples is still very expensive. Recognizing that gene expressions are often highly correlated, researchers from the NIH LINCS program have developed a cost-effective strategy of profiling only ∼1000 carefully selected landmark genes and relying on computational methods to infer the expression of remaining target genes. However, the computational approach adopted by the LINCS program is currently based on linear regression (LR), limiting its accuracy since it does not capture complex nonlinear relationship between expressions of genes. Results: We present a deep learning method (abbreviated as D-GEX) to infer the expression of target genes from the expression of landmark genes. We used the microarray-based Gene Expression Omnibus dataset, consisting of 111K expression profiles, to train our model and compare its performance to those from other methods. In terms of mean absolute error averaged across all genes, deep learning significantly outperforms LR with 15.33% relative improvement. A gene-wise comparative analysis shows that deep learning achieves lower error than LR in 99.97% of the target genes. We also tested the performance of our learned model on an independent RNA-Seq-based GTEx dataset, which consists of 2921 expression profiles. Deep learning still outperforms LR with 6.57% relative improvement, and achieves lower error in 81.31% of the target genes. Availability and implementation: D-GEX is available at https://github.com/uci-cbcl/D-GEX. Contact: xhx@ics.uci.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26873929
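The landmark-to-target inference idea can be illustrated with a small multi-output neural network in scikit-learn. The sketch below uses synthetic data and a single hidden layer; it is not the D-GEX architecture, and the dimensions and training settings are assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Synthetic expression data: "landmark" genes predict "target" genes through a
# mildly nonlinear map. Sample counts and dimensions are assumptions.
rng = np.random.default_rng(4)
n, n_landmark, n_target = 2000, 1000, 200
X = rng.normal(size=(n, n_landmark))
W = rng.normal(scale=0.05, size=(n_landmark, n_target))
Y = np.tanh(X @ W) + 0.1 * rng.normal(size=(n, n_target))

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

lr = LinearRegression().fit(X_tr, Y_tr)
mlp = MLPRegressor(hidden_layer_sizes=(256,), max_iter=50,
                   random_state=0).fit(X_tr, Y_tr)   # small net; may warn about convergence

print("linear regression MAE:", mean_absolute_error(Y_te, lr.predict(X_te)))
print("neural network    MAE:", mean_absolute_error(Y_te, mlp.predict(X_te)))
```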
NASA Technical Reports Server (NTRS)
1987-01-01
In a complex computer environment there is ample opportunity for error, a mistake by a programmer, or a software-induced undesirable side effect. In insurance, errors can cost a company heavily, so protection against inadvertent change is a must for the efficient firm. The data processing center at Transport Life Insurance Company has taken a step to guard against accidental changes by adopting a software package called EQNINT (Equations Interpreter Program). EQNINT cross checks the basic formulas in a program against the formulas that make up the major production system. EQNINT assures that formulas are coded correctly and helps catch errors before they affect the customer service or its profitability.
A stopping criterion for the iterative solution of partial differential equations
NASA Astrophysics Data System (ADS)
Rao, Kaustubh; Malan, Paul; Perot, J. Blair
2018-01-01
A stopping criterion for iterative solution methods is presented that accurately estimates the solution error using low computational overhead. The proposed criterion uses information from prior solution changes to estimate the error. When the solution changes are noisy or stagnating it reverts to a less accurate but more robust, low-cost singular value estimate to approximate the error given the residual. This estimator can also be applied to iterative linear matrix solvers such as Krylov subspace or multigrid methods. Examples of the stopping criterion's ability to accurately estimate the non-linear and linear solution error are provided for a number of different test cases in incompressible fluid dynamics.
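A minimal version of the underlying idea, estimating the remaining error from the history of iterate changes via an estimated contraction factor, is sketched below for a Jacobi iteration. This is a generic textbook-style estimator under the assumption of roughly geometric convergence, not the authors' estimator.

```python
import numpy as np

# Jacobi iteration on a strongly diagonally dominant system A x = b.
rng = np.random.default_rng(5)
n = 200
A = rng.normal(size=(n, n)) + n * np.eye(n)
b = rng.normal(size=n)
x_exact = np.linalg.solve(A, b)

D = np.diag(A)
x = np.zeros(n)
prev_change = None
for k in range(1, 200):
    x_new = (b - (A @ x - D * x)) / D         # Jacobi update
    change = np.linalg.norm(x_new - x)
    if prev_change is not None and change < prev_change:
        rho = change / prev_change             # estimated contraction factor
        err_est = rho / (1.0 - rho) * change   # geometric-series estimate of remaining error
        if err_est < 1e-8:
            true_err = np.linalg.norm(x_new - x_exact)
            print(f"stop at iteration {k}: estimated error {err_est:.2e}, true error {true_err:.2e}")
            break
    prev_change = change
    x = x_new
```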
NASA Technical Reports Server (NTRS)
Tangborn, Andrew; Auger, Ludovic
2003-01-01
A suboptimal Kalman filter system that evolves error covariances in terms of a truncated set of wavelet coefficients has been developed for the assimilation of chemical tracer observations of CH4. This scheme projects the discretized covariance propagation equations and covariance matrix onto an orthogonal set of compactly supported wavelets. The wavelet representation is localized in both location and scale, which allows for efficient representation of the inherently anisotropic structure of the error covariances. The truncation is carried out in such a way that the resolution of the error covariance is reduced only in the zonal direction, where gradients are smaller. Assimilation experiments lasting 24 days and using different degrees of truncation were carried out. These reduced the covariance size by 90, 97 and 99% and the computational cost of covariance propagation by 80, 93 and 96%, respectively. The differences in both the error covariance and the tracer field between the truncated and full systems over this period were found not to grow in the first case and to grow relatively slowly in the latter two cases. The largest errors in the tracer fields were found to occur in regions of largest zonal gradients in the constituent field. These results indicate that propagation of error covariances for a global two-dimensional data assimilation system is currently feasible. Recommendations for further reduction in computational cost are made with the goal of extending this technique to three-dimensional global assimilation systems.
MacKay, Mark; Anderson, Collin; Boehme, Sabrina; Cash, Jared; Zobell, Jeffery
2016-04-01
The Institute for Safe Medication Practices has stated that parenteral nutrition (PN) is considered a high-risk medication with the potential to cause harm. Three organizations (the American Society for Parenteral and Enteral Nutrition [A.S.P.E.N.], the American Society of Health-System Pharmacists, and the National Advisory Group) have published guidelines for ordering, transcribing, compounding and administering PN. These national organizations have published data on compliance with the guidelines and the risk of errors. The purpose of this article is to compare total compliance with the ordering, transcription, compounding and administration guidelines, as well as error rates, at a large pediatric institution. A computerized prescriber order entry (CPOE) program was developed that incorporates dosing with soft- and hard-stop recommendations while simultaneously eliminating the need for paper transcription. A CPOE team prioritized and identified issues, then developed solutions and integrated innovative CPOE and automated compounding device (ACD) technologies and practice changes to minimize opportunities for medication errors in PN prescription, transcription, preparation, and administration. Thirty developmental processes were identified and integrated in the CPOE program, resulting in practices that were compliant with A.S.P.E.N. safety consensus recommendations. Data from 7 years of development and implementation were analyzed and compared with published literature on error and harm rates and cost reductions to determine whether our process showed lower error rates than national outcomes. The CPOE program developed was in total compliance with the A.S.P.E.N. guidelines for PN. The frequency of PN medication errors at our hospital over the 7 years was 230 errors per 84,503 PN prescriptions, or 0.27%, compared with national data in which 74 of 4730 prescriptions (1.6%) over 1.5 years were associated with a medication error. Errors were categorized by steps in the PN process: prescribing, transcription, preparation, and administration. There were no transcription errors, and most (95%) errors occurred during administration. We conclude that the meaningful cost reduction and the lower error rate (2.7/1000 PN) than reported in the literature (15.6/1000 PN) can be ascribed to the development and implementation of practices that conform to national PN guidelines and recommendations. Electronic ordering and compounding programs eliminated all transcription and related opportunities for errors. © 2015 American Society for Parenteral and Enteral Nutrition.
Low-dimensional Representation of Error Covariance
NASA Technical Reports Server (NTRS)
Tippett, Michael K.; Cohn, Stephen E.; Todling, Ricardo; Marchesin, Dan
2000-01-01
Ensemble and reduced-rank approaches to prediction and assimilation rely on low-dimensional approximations of the estimation error covariances. Here stability properties of the forecast/analysis cycle for linear, time-independent systems are used to identify factors that cause the steady-state analysis error covariance to admit a low-dimensional representation. A useful measure of forecast/analysis cycle stability is the bound matrix, a function of the dynamics, observation operator and assimilation method. Upper and lower estimates for the steady-state analysis error covariance matrix eigenvalues are derived from the bound matrix. The estimates generalize to time-dependent systems. If much of the steady-state analysis error variance is due to a few dominant modes, the leading eigenvectors of the bound matrix approximate those of the steady-state analysis error covariance matrix. The analytical results are illustrated in two numerical examples where the Kalman filter is carried to steady state. The first example uses the dynamics of a generalized advection equation exhibiting nonmodal transient growth. Failure to observe growing modes leads to increased steady-state analysis error variances. Leading eigenvectors of the steady-state analysis error covariance matrix are well approximated by leading eigenvectors of the bound matrix. The second example uses the dynamics of a damped baroclinic wave model. The leading eigenvectors of a lowest-order approximation of the bound matrix are shown to approximate well the leading eigenvectors of the steady-state analysis error covariance matrix.
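The notion of a low-dimensional representation can be made concrete with a small eigenvalue computation: a covariance matrix whose spectrum decays rapidly is well approximated by its leading eigenpairs. The matrix below is synthetic and unrelated to the paper's assimilation systems.

```python
import numpy as np

rng = np.random.default_rng(6)
n, rank_kept = 300, 10

# Synthetic covariance with rapidly decaying eigenvalues.
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
eigvals = np.exp(-0.5 * np.arange(n))
P = (Q * eigvals) @ Q.T                     # P = Q diag(eigvals) Q^T

# Keep only the leading eigenpairs of P.
w, V = np.linalg.eigh(P)
idx = np.argsort(w)[::-1][:rank_kept]
P_low = (V[:, idx] * w[idx]) @ V[:, idx].T

print(f"variance captured by {rank_kept} modes: {w[idx].sum() / w.sum():.4f}")
print("relative Frobenius error of the low-rank form:",
      np.linalg.norm(P - P_low) / np.linalg.norm(P))
```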
Ewen, Edward F; Zhao, Liping; Kolm, Paul; Jurkovitz, Claudine; Fidan, Dogan; White, Harvey D; Gallo, Richard; Weintraub, William S
2009-06-01
The economic impact of bleeding in the setting of nonemergent percutaneous coronary intervention (PCI) is poorly understood and complicated by the variety of bleeding definitions currently employed. This retrospective analysis examines and contrasts the in-hospital cost of bleeding associated with this procedure using six bleeding definitions employed in recent clinical trials. All nonemergent PCI cases at Christiana Care Health System not requiring a subsequent coronary artery bypass were identified between January 2003 and March 2006. Bleeding events were identified by chart review, registry, laboratory, and administrative data. A microcosting strategy was applied utilizing hospital charges converted to costs using departmental level direct cost-to-charge ratios. The independent contributions of bleeding, both major and minor, to cost were determined by multiple regression. Bootstrap methods were employed to obtain estimates of regression parameters and their standard errors. A total of 6,008 cases were evaluated. By GUSTO definitions there were 65 (1.1%) severe, 52 (0.9%) moderate, and 321 (5.3%) mild bleeding episodes with estimated bleeding costs of $14,006; $6,980; and $4,037, respectively. When applying TIMI definitions there were 91 (1.5%) major and 178 (3.0%) minor bleeding episodes with estimated costs of $8,794 and $4,310, respectively. In general, the four additional trial-specific definitions identified more bleeding events, provided lower estimates of major bleeding cost, and similar estimates of minor bleeding costs. Bleeding is associated with considerable cost over and above interventional procedures; however, the choice of bleeding definition impacts significantly on both the incidence and economic consequences of these events.
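The bootstrap approach to obtaining standard errors for regression-based cost estimates can be sketched as follows. The synthetic case mix, cost values, and number of resamples are assumptions for illustration, not the study's data or model.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 6000

# Synthetic PCI cases: in-hospital cost depends on major/minor bleeding plus noise.
major = rng.binomial(1, 0.015, size=n)
minor = rng.binomial(1, 0.03, size=n) * (1 - major)
cost = 12000 + 8800 * major + 4300 * minor + rng.gamma(2.0, 2500.0, size=n)

X = sm.add_constant(np.column_stack([major, minor]).astype(float))
fit = sm.OLS(cost, X).fit()

# Nonparametric bootstrap: resample cases, refit, and take the spread of coefficients.
n_boot = 1000
boot = np.empty((n_boot, 3))
for b in range(n_boot):
    idx = rng.integers(0, n, size=n)
    boot[b] = sm.OLS(cost[idx], X[idx]).fit().params

print("point estimates (const, major, minor):", fit.params)
print("bootstrap standard errors            :", boot.std(axis=0, ddof=1))
```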
Error-Analysis for Correctness, Effectiveness, and Composing Procedure.
ERIC Educational Resources Information Center
Ewald, Helen Rothschild
The assumptions underpinning grammatical mistakes can often be detected by looking for patterns of errors in a student's work. Assumptions that negatively influence rhetorical effectiveness can similarly be detected through error analysis. On a smaller scale, error analysis can also reveal assumptions affecting rhetorical choice. Snags in the…
Automatic Error Analysis Using Intervals
ERIC Educational Resources Information Center
Rothwell, E. J.; Cloud, M. J.
2012-01-01
A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
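The interval approach to automatic error analysis can be illustrated with a tiny hand-rolled interval type (INTLAB itself is a MATLAB toolbox); the formula and measurement uncertainties below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    """A closed interval [lo, hi] with the basic arithmetic needed for error analysis."""
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi, self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def __truediv__(self, other):
        assert other.lo > 0 or other.hi < 0, "division by an interval containing zero"
        return self * Interval(1.0 / other.hi, 1.0 / other.lo)

def measured(value, uncertainty):
    """Turn a measurement +/- uncertainty into an interval."""
    return Interval(value - uncertainty, value + uncertainty)

# Example: resistance from voltage and current measurements, R = V / I.
V = measured(5.00, 0.05)    # volts
I = measured(0.250, 0.002)  # amperes
R = V / I
print(f"R lies in [{R.lo:.2f}, {R.hi:.2f}] ohms")  # encloses the measurement uncertainty
```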
Wind power error estimation in resource assessments.
Rodríguez, Osvaldo; Del Río, Jesús A; Jaramillo, Oscar A; Martínez, Manuel
2015-01-01
Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, probability density function, and wind turbine power curves. This method uses the actual wind speed data without prior statistical treatment based on 28 wind turbine power curves, which were fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, thereby yielding an error of 5%. The proposed error propagation complements the traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment return time. The implementation of this method increases the reliability of techno-economic resource assessment studies.
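The propagation of a wind-speed measurement error through a turbine power curve can be sketched as below. The piecewise power-curve points, the Weibull wind regime, and the 10% speed error are illustrative assumptions; the paper fits 28 manufacturer curves with Lagrange's method.

```python
import numpy as np

# Simplified turbine power curve: wind speed (m/s) -> power (kW). The points are
# illustrative, not a real manufacturer curve; outside the range the turbine is off.
speeds = np.array([3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 25], dtype=float)
power = np.array([0, 50, 120, 230, 380, 570, 790, 1000, 1150, 1250, 1300, 1300], dtype=float)

def curve(v):
    return np.interp(v, speeds, power, left=0.0, right=0.0)

# Measured wind-speed series (Weibull-like regime) with an assumed 10% measurement error.
rng = np.random.default_rng(8)
v = 8.0 * rng.weibull(2.0, size=10000)
rel_err = 0.10

p_nom = curve(v).mean()
p_low = curve(v * (1 - rel_err)).mean()
p_high = curve(v * (1 + rel_err)).mean()

print(f"mean power estimate: {p_nom:.1f} kW")
print(f"propagated interval: [{p_low:.1f}, {p_high:.1f}] kW "
      f"({100 * (p_low - p_nom) / p_nom:+.1f}% to {100 * (p_high - p_nom) / p_nom:+.1f}%)")
```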
The impact of response measurement error on the analysis of designed experiments
Anderson-Cook, Christine Michaela; Hamada, Michael Scott; Burr, Thomas Lee
2016-11-01
This study considers the analysis of designed experiments when there is measurement error in the true response or so-called response measurement error. We consider both additive and multiplicative response measurement errors. Through a simulation study, we investigate the impact of ignoring the response measurement error in the analysis, that is, by using a standard analysis based on t-tests. In addition, we examine the role of repeat measurements in improving the quality of estimation and prediction in the presence of response measurement error. We also study a Bayesian approach that accounts for the response measurement error directly through the specification of the model, and allows including additional information about variability in the analysis. We consider the impact on power, prediction, and optimization. Copyright © 2015 John Wiley & Sons, Ltd.
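A minimal sketch of the kind of simulation described, assuming a two-group design, additive Gaussian response measurement error, and a standard two-sample t-test; the effect size, variances, and sample size below are arbitrary.

```python
# Estimate how additive response measurement error erodes t-test power.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def rejection_rate(effect=1.0, sd_true=1.0, sd_meas=0.0, n=20, reps=5000, alpha=0.05):
    """Power of a two-sample t-test when observed response = true response + measurement error."""
    hits = 0
    for _ in range(reps):
        a = rng.normal(0.0, sd_true, n) + rng.normal(0.0, sd_meas, n)
        b = rng.normal(effect, sd_true, n) + rng.normal(0.0, sd_meas, n)
        if stats.ttest_ind(a, b).pvalue < alpha:
            hits += 1
    return hits / reps

print(rejection_rate(sd_meas=0.0))   # power without response measurement error
print(rejection_rate(sd_meas=1.0))   # power eroded by additive measurement error
```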
Accuracy, Precision, Ease-Of-Use, and Cost of Methods to Test Ebola-Relevant Chlorine Solutions
Wells, Emma; Wolfe, Marlene K.; Murray, Anna; Lantagne, Daniele
2016-01-01
To prevent transmission in Ebola Virus Disease (EVD) outbreaks, it is recommended to disinfect living things (hands and people) with 0.05% chlorine solution and non-living things (surfaces, personal protective equipment, dead bodies) with 0.5% chlorine solution. In the current West African EVD outbreak, these solutions (manufactured from calcium hypochlorite (HTH), sodium dichloroisocyanurate (NaDCC), and sodium hypochlorite (NaOCl)) have been widely used in both Ebola Treatment Unit and community settings. To ensure solution quality, testing is necessary; however, the appropriateness of test methods for these Ebola-relevant concentrations has not previously been evaluated. We identified fourteen commercially available methods to test Ebola-relevant chlorine solution concentrations, including two titration methods, four DPD dilution methods, and six test strips. We assessed these methods by: 1) determining accuracy and precision by measuring in quintuplicate five different 0.05% and 0.5% chlorine solutions manufactured from NaDCC, HTH, and NaOCl; 2) conducting volunteer testing to assess ease-of-use; and 3) determining costs. Accuracy was greatest in titration methods (reference–12.4% error compared with the reference method), then DPD dilution methods (2.4–19% error), then test strips (5.2–48% error); precision followed this same trend. Two methods had an accuracy of <10% error across all five chlorine solutions with good precision: Hach digital titration for 0.05% and 0.5% solutions (recommended for contexts with trained personnel and financial resources), and Serim test strips for 0.05% solutions (recommended for contexts where rapid, inexpensive, low-training-burden testing is needed). Measurement error from test methods not including pH adjustment varied significantly across the five chlorine solutions, which had pH values of 5–11. Volunteers found test strips easiest and titration hardest to use; costs per 100 tests were $14–37 for test strips and $33–609 for titration. Given the ease-of-use and cost benefits of test strips, we recommend further development of test strips that are robust to pH variation and appropriate for Ebola-relevant chlorine solution concentrations. PMID:27243817
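A small sketch of the accuracy and precision calculation described, using made-up quintuplicate readings: accuracy as mean absolute percent error against the known reference concentration, and precision as the relative standard deviation of the replicates.

```python
# Accuracy/precision of replicate chlorine readings against a reference concentration.
import numpy as np

def accuracy_precision(readings, reference):
    """Return (mean absolute % error, relative SD %) for replicate readings."""
    readings = np.asarray(readings, dtype=float)
    pct_error = 100 * np.abs(readings - reference) / reference
    rel_sd = 100 * readings.std(ddof=1) / readings.mean()
    return pct_error.mean(), rel_sd

# Hypothetical quintuplicate readings (% chlorine) of a 0.50% NaOCl solution
acc, prec = accuracy_precision([0.48, 0.51, 0.52, 0.47, 0.50], reference=0.50)
print(f"accuracy: {acc:.1f}% error, precision (RSD): {prec:.1f}%")
```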
VCSEL-based fiber optic link for avionics: implementation and performance analyses
NASA Astrophysics Data System (ADS)
Shi, Jieqin; Zhang, Chunxi; Duan, Jingyuan; Wen, Huaitao
2006-11-01
A Gb/s fiber optic link with built-in test (BIT) capability, based on vertical-cavity surface-emitting laser (VCSEL) sources and intended for a next-generation military avionics bus, is presented in this paper. To accurately predict link performance, statistical methods and bit error rate (BER) measurements were examined. The results show that the 1 Gb/s fiber optic link meets the BER requirement and that the link margin can reach up to 13 dB. The analysis shows that the suggested photonic network may provide a high-performance, low-cost interconnection alternative for future military avionics.
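For context, the sketch below shows the standard Gaussian-noise relation between a link's Q factor and bit error rate, plus a simple link-margin calculation; the power levels are placeholders rather than measurements from the paper.

```python
# Q factor to BER (Gaussian-noise approximation) and link margin in dB.
import math

def ber_from_q(q: float) -> float:
    """BER = 0.5 * erfc(Q / sqrt(2)); Q ~ 6 corresponds to roughly 1e-9."""
    return 0.5 * math.erfc(q / math.sqrt(2))

def link_margin_db(received_power_dbm: float, receiver_sensitivity_dbm: float) -> float:
    """Margin is the received optical power above the sensitivity needed for the target BER."""
    return received_power_dbm - receiver_sensitivity_dbm

print(ber_from_q(6.0))               # ~1e-9, a common BER target
print(link_margin_db(-4.0, -17.0))   # 13 dB margin for these placeholder power levels
```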
Flyby Error Analysis Based on Contour Plots for the Cassini Tour
NASA Technical Reports Server (NTRS)
Stumpf, P. W.; Gist, E. M.; Goodson, T. D.; Hahn, Y.; Wagner, S. V.; Williams, P. N.
2008-01-01
The maneuver cancellation analysis consists of cost contour plots employed by the Cassini maneuver team. The plots are two-dimensional linear representations of a larger six-dimensional solution to a multi-maneuver, multi-encounter mission at Saturn. By plotting contours against the B·R and B·T components (the dot products of the B vector with the R and T axes), it is possible to view the effects on delta-V for various encounter positions in the B-plane. The plot is used in operations to help determine whether the Approach Maneuver (ensuing encounter minus three days) and/or the Cleanup Maneuver (ensuing encounter plus three days) can be cancelled, and it also serves as a linear check of an integrated solution.
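A purely illustrative sketch of a B-plane cost contour plot: the delta-V cost surface below is a made-up stand-in for the team's linearized six-dimensional solution, and the target point, ranges, and scaling are arbitrary.

```python
# Plot hypothetical delta-V cost contours over a grid of B-plane coordinates.
import numpy as np
import matplotlib.pyplot as plt

b_dot_r = np.linspace(-500.0, 500.0, 201)   # km, hypothetical B.R range
b_dot_t = np.linspace(-500.0, 500.0, 201)   # km, hypothetical B.T range
BR, BT = np.meshgrid(b_dot_r, b_dot_t)

# Hypothetical delta-V cost (m/s) as a function of B-plane miss from a target point
target_br, target_bt = 120.0, -80.0
dv_cost = 0.002 * np.hypot(BR - target_br, BT - target_bt)

fig, ax = plt.subplots()
cs = ax.contour(BR, BT, dv_cost, levels=10)
ax.clabel(cs, inline=True, fontsize=8)
ax.set_xlabel("B·R (km)")
ax.set_ylabel("B·T (km)")
ax.set_title("Delta-V cost contours in the B-plane (illustrative)")
plt.show()
```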
Integrating Solar PV in Utility System Operations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mills, A.; Botterud, A.; Wu, J.
2013-10-31
This study develops a systematic framework for estimating the increase in operating costs due to uncertainty and variability in renewable resources, uses the framework to quantify the integration costs associated with sub-hourly solar power variability and uncertainty, and shows how changes in system operations may affect these costs. Toward this end, we present a statistical method for estimating the required balancing reserves to maintain system reliability along with a model for commitment and dispatch of the portfolio of thermal and renewable resources at different stages of system operations. We estimate the costs of sub-hourly solar variability, short-term forecast errors, and day-ahead (DA) forecast errors as the difference in production costs between a case with “realistic” PV (i.e., sub-hourly solar variability and uncertainty are fully included in the modeling) and a case with “well behaved” PV (i.e., PV is assumed to have no sub-hourly variability and can be perfectly forecasted). In addition, we highlight current practices that allow utilities to compensate for the issues encountered at the sub-hourly time frame with increased levels of PV penetration. In this analysis we use the analytical framework to simulate utility operations with increasing deployment of PV in a case study of Arizona Public Service Company (APS), a utility in the southwestern United States. In our analysis, we focus on three processes that are important in understanding the management of PV variability and uncertainty in power system operations. First, we represent the decisions made the day before the operating day through a DA commitment model that relies on imperfect DA forecasts of load and wind as well as PV generation. Second, we represent the decisions made by schedulers in the operating day through hour-ahead (HA) scheduling. Peaking units can be committed or decommitted in the HA schedules and online units can be redispatched using forecasts that are improved relative to DA forecasts, but still imperfect. Finally, we represent decisions within the operating hour by schedulers and transmission system operators as real-time (RT) balancing. We simulate the DA and HA scheduling processes with a detailed unit-commitment (UC) and economic dispatch (ED) optimization model. This model creates a least-cost dispatch and commitment plan for the conventional generating units using forecasts and reserve requirements as inputs. We consider only the generation units and load of the utility in this analysis; we do not consider opportunities to trade power with neighboring utilities. We also do not consider provision of reserves from renewables or from demand-side options. We estimate dynamic reserve requirements in order to meet reliability requirements in the RT operations, considering the uncertainty and variability in load, solar PV, and wind resources. Balancing reserve requirements are based on the 2.5th and 97.5th percentile of 1-min deviations from the HA schedule in a previous year. We then simulate RT deployment of balancing reserves using a separate minute-by-minute simulation of deviations from the HA schedules in the operating year. In the simulations we assume that balancing reserves can be fully deployed in 10 min. The minute-by-minute deviations account for HA forecasting errors and the actual variability of the load, wind, and solar generation. Using these minute-by-minute deviations and deployment of balancing reserves, we evaluate the impact of PV on system reliability through the calculation of the standard reliability metric called Control Performance Standard 2 (CPS2). Broadly speaking, the CPS2 score measures the percentage of 10-min periods in which a balancing area is able to balance supply and demand within a specific threshold. Compliance with the North American Electric Reliability Corporation (NERC) reliability standards requires that the CPS2 score must exceed 90% (i.e., the balancing area must maintain adequate balance for 90% of the 10-min periods). The combination of representing DA forecast errors in the DA commitments, using 1-min PV data to simulate RT balancing, and estimates of reliability performance through the CPS2 metric, all factors that are important to operating systems with increasing amounts of PV, makes this study unique in its scope.
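Two of the calculations in this abstract translate directly into code: the balancing-reserve requirement from percentiles of 1-min deviations, and a CPS2-style share of 10-min periods kept within a bound. The sketch below is a simplified rendering with synthetic deviation data and a placeholder L10 threshold, not the APS study's actual inputs.

```python
# Balancing reserves from 1-min deviation percentiles and a simplified CPS2 score.
import numpy as np

def reserve_requirements(deviations_mw: np.ndarray) -> tuple[float, float]:
    """Down/up balancing reserve (MW) from the 2.5th/97.5th percentiles of 1-min deviations."""
    return (np.percentile(deviations_mw, 2.5), np.percentile(deviations_mw, 97.5))

def cps2_score(ace_mw: np.ndarray, l10_mw: float) -> float:
    """Percentage of 10-min periods whose mean area control error stays within +/- L10."""
    n_periods = len(ace_mw) // 10
    blocks = ace_mw[: n_periods * 10].reshape(n_periods, 10).mean(axis=1)
    return 100.0 * np.mean(np.abs(blocks) <= l10_mw)

minute_dev = np.random.default_rng(1).normal(0.0, 25.0, 60 * 24 * 365)  # synthetic 1-min deviations
print(reserve_requirements(minute_dev))
print(cps2_score(minute_dev, l10_mw=50.0))   # must exceed 90% for NERC compliance
```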
A risk-based prospective payment system that integrates patient, hospital and national costs.
Siegel, C; Jones, K; Laska, E; Meisner, M; Lin, S
1992-05-01
We suggest that a desirable form for prospective payment for inpatient care is hospital average cost plus a linear combination of individual patient and national average cost. When the coefficients are chosen to minimize mean squared error loss between payment and costs, the payment has efficiency and access incentives. The coefficient multiplying patient costs is a hospital specific measure of financial risk of the patient. Access is promoted since providers receive higher reimbursements for risky, high cost patients. Historical cost data can be used to obtain estimates of payment parameters. The method is applied to Medicare data on psychiatric inpatients.
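A minimal sketch of the payment form described, with synthetic data: payment equals the hospital average cost plus a linear combination of a patient-level expected-cost proxy and the national average cost, with the coefficients chosen by least squares to minimize the mean squared error between payment and actual cost. The expected-cost proxy is a stand-in for whatever prospective patient-level cost measure the method would use.

```python
# Fit payment = hospital_mean + a * expected_cost + b * national_mean by least squares.
import numpy as np

rng = np.random.default_rng(7)
actual_cost = rng.gamma(shape=2.0, scale=4000.0, size=500)        # synthetic actual inpatient costs
expected_cost = actual_cost * rng.normal(1.0, 0.3, size=500)      # noisy prospective cost estimate
hospital_mean = actual_cost.mean()
national_mean = 9000.0                                            # hypothetical national average

X = np.column_stack([expected_cost, np.full_like(expected_cost, national_mean)])
y = actual_cost - hospital_mean
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
a, b = coef

payment = hospital_mean + a * expected_cost + b * national_mean
print(a, b, np.mean((payment - actual_cost) ** 2))                # risk weight a and payment MSE
```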
Bubalo, Joseph; Warden, Bruce A; Wiegel, Joshua J; Nishida, Tess; Handel, Evelyn; Svoboda, Leanne M; Nguyen, Lam; Edillo, P Neil
2014-12-01
Medical errors, in particular medication errors, continue to be a troublesome factor in the delivery of safe and effective patient care. Antineoplastic agents represent a group of medications highly susceptible to medication errors due to their complex regimens and narrow therapeutic indices. As the majority of these medication errors are frequently associated with breakdowns in poorly defined systems, developing technologies and evolving workflows seem to be a logical approach to provide added safeguards against medication errors. This article will review both the pros and cons of today's technologies and their ability to simplify the medication use process, reduce medication errors, improve documentation, lower healthcare costs, and increase provider efficiency as they relate to the use of antineoplastic therapy throughout the medication use process. Several technologies, mainly computerized provider order entry (CPOE), barcode medication administration (BCMA), smart pumps, the electronic medication administration record (eMAR), and telepharmacy, have been well described and proven to reduce medication errors, improve adherence to quality metrics, and/or lower healthcare costs in a broad scope of patients. The utilization of these technologies during antineoplastic therapy is weak at best and lacking for most. Specific to the antineoplastic medication use system, the only technology with data to adequately support a claim of reduced medication errors is CPOE. In addition to the benefits these technologies can provide, it is also important to recognize their potential to induce new types of errors and inefficiencies that can negatively impact patient care. The utilization of technology reduces but does not eliminate the potential for error. The evidence base to support technology in preventing medication errors is limited in general but even more deficient in the realm of antineoplastic therapy. Though CPOE has the best evidence to support its use in the antineoplastic population, benefit from many other technologies may have to be inferred from data in other patient populations. As health systems begin to widely adopt and implement new technologies, it is important to critically assess their effectiveness in improving patient safety. © The Author(s) 2013. Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.