NASA Astrophysics Data System (ADS)
Huo, Ming-Xia; Li, Ying
2017-12-01
Quantum error correction is essential to quantum information processing: it allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from knowledge of the error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g., the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on past error correction data. We find that, using these estimated error rates, the probability of error correction failures can be significantly reduced by a factor that increases with the code distance.
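A minimal sketch of how the described estimation step could look: a Gaussian process regressor fit to a time series of per-round error-rate estimates and used to predict the rate for upcoming rounds. The synthetic drift model, kernel choice, and scikit-learn implementation are illustrative assumptions, not the authors' actual pipeline.

```python
# Hedged sketch: estimating and forecasting a qubit error rate from noisy
# per-round estimates with a Gaussian process (scikit-learn). The synthetic
# drift model and kernel are illustrative assumptions, not the paper's setup.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Synthetic "true" error rate that drifts slowly over error-correction rounds.
rounds = np.arange(0, 200)
true_rate = 0.01 + 0.003 * np.sin(rounds / 40.0)

# Noisy estimates, e.g. derived from syndrome statistics in each round.
observed = true_rate + rng.normal(scale=0.002, size=rounds.size)

# GP with a smooth RBF component plus a white-noise term for estimation noise.
kernel = RBF(length_scale=30.0) + WhiteKernel(noise_level=1e-6)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(rounds.reshape(-1, 1), observed)

# Predict (and extrapolate a little into future rounds) with uncertainty.
query = np.arange(0, 220).reshape(-1, 1)
mean, std = gp.predict(query, return_std=True)
print(f"predicted rate at round 210: {mean[210]:.4f} +/- {std[210]:.4f}")
```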
An educational and audit tool to reduce prescribing error in intensive care.
Thomas, A N; Boxall, E M; Laha, S K; Day, A J; Grundy, D
2008-10-01
To reduce prescribing errors in an intensive care unit by providing prescriber education in tutorials, ward-based teaching and feedback in 3-monthly cycles with each new group of trainee medical staff. Prescribing audits were conducted three times in each 3-month cycle: once pretraining, once post-training, and a final audit after 6 weeks. The audit information was fed back to prescribers with their correct prescribing rates, rates for individual error types and total error rates, together with anonymised information about other prescribers' error rates. The percentage of prescriptions with errors decreased over each 3-month cycle (pretraining: 25%, 19% (one missing data point); post-training: 23%, 6%, 11%; final audit: 7%, 3%, 5%; p<0.0005). The total number of prescriptions and error rates varied widely between trainees (data collection one, cycle two: range of prescriptions written 1-61, median 18; error rate 0-100%, median 15%). Prescriber education and feedback reduce manual prescribing errors in intensive care.
Reducing the Familiarity of Conjunction Lures with Pictures
ERIC Educational Resources Information Center
Lloyd, Marianne E.
2013-01-01
Four experiments were conducted to test whether conjunction errors were reduced after pictorial encoding and whether the semantic overlap between study and conjunction items would impact error rates. Across 4 experiments, compound words studied with a single-picture had lower conjunction error rates during a recognition test than those words…
Effects of Correlated Errors on the Analysis of Space Geodetic Data
NASA Technical Reports Server (NTRS)
Romero-Wolf, Andres; Jacobs, C. S.
2011-01-01
As thermal errors are reduced, instrumental and troposphere correlated errors will become increasingly important. Work in progress shows that troposphere covariance error models improve data analysis results. We expect to see stronger effects with higher data rates. Temperature modeling of delay errors may further reduce temporal correlations in the data.
Errors Affect Hypothetical Intertemporal Food Choice in Women
Sellitto, Manuela; di Pellegrino, Giuseppe
2014-01-01
Growing evidence suggests that the ability to control behavior is enhanced in contexts in which errors are more frequent. Here we investigated whether pairing desirable food with errors could decrease impulsive choice during hypothetical temporal decisions about food. To this end, healthy women performed a Stop-signal task in which one food cue predicted a high error rate and another food cue predicted a low error rate. Afterwards, we measured participants’ intertemporal preferences during decisions between smaller-immediate and larger-delayed amounts of food. We expected reduced sensitivity to smaller-immediate amounts of the food associated with the high error rate. Moreover, taking into account that deprivational states affect sensitivity to food, we controlled for participants’ hunger. Results showed that pairing food with a high error likelihood decreased temporal discounting. This effect was modulated by hunger: the lower the hunger level, the greater the reduction in impulsive preference for the food previously associated with a high number of errors, compared with the other food. These findings reveal that errors, which are motivationally salient events that recruit cognitive control and drive avoidance learning against error-prone behavior, are effective in reducing impulsive choice for edible outcomes. PMID:25244534
Prakash, Varuna; Koczmara, Christine; Savage, Pamela; Trip, Katherine; Stewart, Janice; McCurdie, Tara; Cafazzo, Joseph A; Trbovich, Patricia
2014-01-01
Background Nurses are frequently interrupted during medication verification and administration; however, few interventions exist to mitigate resulting errors, and the impact of these interventions on medication safety is poorly understood. Objective The study objectives were to (A) assess the effects of interruptions on medication verification and administration errors, and (B) design and test the effectiveness of targeted interventions at reducing these errors. Methods The study focused on medication verification and administration in an ambulatory chemotherapy setting. A simulation laboratory experiment was conducted to determine interruption-related error rates during specific medication verification and administration tasks. Interventions to reduce these errors were developed through a participatory design process, and their error reduction effectiveness was assessed through a postintervention experiment. Results Significantly more nurses committed medication errors when interrupted than when uninterrupted. With use of interventions when interrupted, significantly fewer nurses made errors in verifying medication volumes contained in syringes (16/18; 89% preintervention error rate vs 11/19; 58% postintervention error rate; p=0.038; Fisher's exact test) and programmed in ambulatory pumps (17/18; 94% preintervention vs 11/19; 58% postintervention; p=0.012). The rate of error commission significantly decreased with use of interventions when interrupted during intravenous push (16/18; 89% preintervention vs 6/19; 32% postintervention; p=0.017) and pump programming (7/18; 39% preintervention vs 1/19; 5% postintervention; p=0.017). No statistically significant differences were observed for other medication verification tasks. Conclusions Interruptions can lead to medication verification and administration errors. Interventions were highly effective at reducing unanticipated errors of commission in medication administration tasks, but showed mixed effectiveness at reducing predictable errors of detection in medication verification tasks. These findings can be generalised and adapted to mitigate interruption-related errors in other settings where medication verification and administration are required. PMID:24906806
Newman, Craig G J; Bevins, Adam D; Zajicek, John P; Hodges, John R; Vuillermoz, Emil; Dickenson, Jennifer M; Kelly, Denise S; Brown, Simona; Noad, Rupert F
2018-01-01
Ensuring reliable administration and reporting of cognitive screening tests is fundamental to establishing good clinical practice and research. This study captured the rate and type of errors in clinical practice using the Addenbrooke's Cognitive Examination-III (ACE-III), and then the reduction in error rate using a computerized alternative, the ACEmobile app. In study 1, we evaluated ACE-III assessments completed in National Health Service (NHS) clinics (n = 87) for administrator error. In study 2, ACEmobile and ACE-III were then evaluated for their ability to capture accurate measurement. In study 1, 78% of clinically administered ACE-IIIs were either scored incorrectly or had arithmetical errors. In study 2, error rates seen with the ACE-III were reduced by 85%-93% using ACEmobile. Error rates are ubiquitous in routine clinical use of cognitive screening tests and the ACE-III. ACEmobile provides a framework for reducing administration, scoring, and arithmetical errors during cognitive screening.
Huckels-Baumgart, Saskia; Baumgart, André; Buschmann, Ute; Schüpfer, Guido; Manser, Tanja
2016-12-21
Interruptions and errors during the medication process are common, but published literature shows no evidence supporting whether separate medication rooms are an effective single intervention in reducing interruptions and errors during medication preparation in hospitals. We tested the hypothesis that the rate of interruptions and reported medication errors would decrease as a result of the introduction of separate medication rooms. Our aim was to evaluate the effect of separate medication rooms on interruptions during medication preparation and on self-reported medication error rates. We performed a preintervention and postintervention study using direct structured observation of nurses during medication preparation and daily structured medication error self-reporting of nurses by questionnaires in 2 wards at a major teaching hospital in Switzerland. A volunteer sample of 42 nurses was observed preparing 1498 medications for 366 patients over 17 hours preintervention and postintervention on both wards. During 122 days, nurses completed 694 reporting sheets containing 208 medication errors. After the introduction of the separate medication room, the mean interruption rate decreased significantly from 51.8 to 30 interruptions per hour (P < 0.01), and the interruption-free preparation time increased significantly from 1.4 to 2.5 minutes (P < 0.05). Overall, the mean medication error rate per day was also significantly reduced after implementation of the separate medication room from 1.3 to 0.9 errors per day (P < 0.05). The present study showed the positive effect of a hospital-based intervention; after the introduction of the separate medication room, the interruption and medication error rates decreased significantly.
Kim, Myoung-Soo; Kim, Jung-Soon; Jung, In Sook; Kim, Young Hae; Kim, Ho Jung
2007-03-01
The purpose of this study was to develop and evaluate an error-reporting promoting program (ERPP) to systematically reduce the incidence rate of nursing errors in the operating room. A non-equivalent control group non-synchronized design was used. Twenty-six operating room nurses from one university hospital in Busan participated in this study. They were stratified into four groups according to their operating room experience and were allocated to the experimental and control groups using a matching method. The Mann-Whitney U test was used to analyze differences in pre- and post-intervention incidence rates of nursing errors between the two groups. The incidence rate of nursing errors decreased significantly in the experimental group, from 28.4% pre-test to 15.7% post-test. By domain, the incidence rate decreased significantly in three domains ("compliance with aseptic technique", "management of documents", "environmental management") in the experimental group, while it also decreased in the control group, which used the ordinary error-reporting method. An error-reporting system makes it possible to share errors and learn from them. The ERPP was effective in reducing errors in recognition-related nursing activities. For more effective error prevention, this program should be applied together with risk-management efforts across the whole health care system.
Unforced errors and error reduction in tennis
Brody, H
2006-01-01
Only at the highest level of tennis is the number of winners comparable to the number of unforced errors. As the average player loses many more points due to unforced errors than due to winners by an opponent, if the rate of unforced errors can be reduced, it should lead to an increase in points won. This article shows how players can improve their game by understanding and applying the laws of physics to reduce the number of unforced errors. PMID:16632568
Performance improvement of robots using a learning control scheme
NASA Technical Reports Server (NTRS)
Krishna, Ramuhalli; Chiang, Pen-Tai; Yang, Jackson C. S.
1987-01-01
Many applications of robots require that the same task be repeated a number of times. In such applications, the errors associated with one cycle are also repeated in every cycle of the operation. An off-line learning control scheme is used here to modify the command function so that errors are smaller in the next operation. The learning scheme is based on a knowledge of the errors and error rates associated with each cycle. Necessary conditions for the iterative scheme to converge to zero errors are derived analytically for a second-order servosystem model. Computer simulations show that the errors are reduced at a faster rate if the error rate is included in the iteration scheme. The results also indicate that the scheme may increase the magnitude of errors if the rate information is not included in the iteration scheme. Modification of the command input using a phase and gain adjustment is also proposed to reduce the errors in a single attempt. The scheme is then applied to a computer model of a robot system similar to the PUMA 560. Improved performance of the robot is shown by considering various cases of trajectory tracing. The scheme can be successfully used to improve the performance of actual robots within the limitations of the repeatability and noise characteristics of the robot.
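A hedged sketch of the kind of error-plus-error-rate iterative update the abstract describes, applied to a simple discretized second-order servo model. The plant parameters and learning gains are illustrative choices, not the paper's values.

```python
# Hedged sketch of off-line iterative learning control on a second-order servo:
# u_{k+1}(t) = u_k(t) + kp*e_k(t) + kd*de_k(t)/dt, with e_k = reference - output.
# Plant parameters and learning gains below are illustrative choices only.
import numpy as np

dt, n = 0.01, 500
t = np.arange(n) * dt
reference = np.sin(2 * np.pi * 0.5 * t)          # desired trajectory

wn, zeta = 8.0, 0.7                              # servo natural frequency and damping

def run_plant(u):
    """Simulate x'' + 2*zeta*wn*x' + wn^2*x = wn^2*u by explicit Euler."""
    x = v = 0.0
    y = np.zeros(n)
    for i in range(n):
        a = wn**2 * (u[i] - x) - 2 * zeta * wn * v
        v += a * dt
        x += v * dt
        y[i] = x
    return y

u = reference.copy()                             # initial command = reference
kp, kd = 0.6, 0.05
for cycle in range(10):
    y = run_plant(u)
    e = reference - y
    de = np.gradient(e, dt)
    print(f"cycle {cycle}: rms error = {np.sqrt(np.mean(e**2)):.4f}")
    u = u + kp * e + kd * de                     # learning update for the next cycle
```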
Martis, Walston R; Hannam, Jacqueline A; Lee, Tracey; Merry, Alan F; Mitchell, Simon J
2016-09-09
A new approach to administering the surgical safety checklist (SSC) at our institution, using wall-mounted charts for each SSC domain coupled with migrated leadership among operating room (OR) sub-teams, led to improved compliance with the Sign Out domain. Since surgical specimens are reviewed at Sign Out, we aimed to quantify any related change in surgical specimen labelling errors. Prospectively maintained error logs for surgical specimens sent to pathology were examined for the six months before and after introduction of the new SSC administration paradigm. We recorded errors made in the labelling or completion of the specimen pot and on the specimen laboratory request form. Total error rates were calculated as the number of errors divided by the total number of specimens. Rates from the two periods were compared using a chi-square test. There were 19 errors in 4,760 specimens (rate 3.99/1,000) and eight errors in 5,065 specimens (rate 1.58/1,000) before and after the change in SSC administration paradigm (P=0.0225). Improved compliance with administering the Sign Out domain of the SSC can reduce surgical specimen errors. This finding provides further evidence that OR teams should optimise compliance with the SSC.
Vélez-Díaz-Pallarés, Manuel; Delgado-Silveira, Eva; Carretero-Accame, María Emilia; Bermejo-Vicedo, Teresa
2013-01-01
To identify actions to reduce medication errors in the process of drug prescription, validation and dispensing, and to evaluate the impact of their implementation. A Health Care Failure Mode and Effect Analysis (HFMEA) was supported by a before-and-after medication error study to measure the actual impact on error rate after the implementation of corrective actions in the process of drug prescription, validation and dispensing in wards equipped with computerised physician order entry (CPOE) and unit-dose distribution system (788 beds out of 1080) in a Spanish university hospital. The error study was carried out by two observers who reviewed medication orders on a daily basis to register prescription errors by physicians and validation errors by pharmacists. Drugs dispensed in the unit-dose trolleys were reviewed for dispensing errors. Error rates were expressed as the number of errors for each process divided by the total opportunities for error in that process times 100. A reduction in prescription errors was achieved by providing training for prescribers on CPOE, updating prescription procedures, improving clinical decision support and automating the software connection to the hospital census (relative risk reduction (RRR), 22.0%; 95% CI 12.1% to 31.8%). Validation errors were reduced after optimising time spent in educating pharmacy residents on patient safety, developing standardised validation procedures and improving aspects of the software's database (RRR, 19.4%; 95% CI 2.3% to 36.5%). Two actions reduced dispensing errors: reorganising the process of filling trolleys and drawing up a protocol for drug pharmacy checking before delivery (RRR, 38.5%; 95% CI 14.1% to 62.9%). HFMEA facilitated the identification of actions aimed at reducing medication errors in a healthcare setting, as the implementation of several of these led to a reduction in errors in the process of drug prescription, validation and dispensing.
Linear and Order Statistics Combiners for Pattern Classification
NASA Technical Reports Server (NTRS)
Tumer, Kagan; Ghosh, Joydeep; Lau, Sonie (Technical Monitor)
2001-01-01
Several researchers have experimentally shown that substantial improvements can be obtained in difficult pattern recognition problems by combining or integrating the outputs of multiple classifiers. This chapter provides an analytical framework to quantify the improvements in classification results due to combining. The results apply to both linear combiners and order statistics combiners. We first show that, to a first-order approximation, the error rate obtained over and above the Bayes error rate is directly proportional to the variance of the actual decision boundaries around the Bayes optimum boundary. Combining classifiers in output space reduces this variance, and hence reduces the 'added' error. If N unbiased classifiers are combined by simple averaging, the added error rate can be reduced by a factor of N if the individual errors in approximating the decision boundaries are uncorrelated. Expressions are then derived for linear combiners which are biased or correlated, and the effect of output correlations on ensemble performance is quantified. For order statistics based non-linear combiners, we derive expressions that indicate how much the median, the maximum and, in general, the i-th order statistic can improve classifier performance. The analysis presented here facilitates the understanding of the relationships among error rates, classifier boundary distributions, and combining in output space. Experimental results on several public domain data sets are provided to illustrate the benefits of combining and to support the analytical results.
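The 1/N claim for averaging unbiased, uncorrelated classifiers can be illustrated with a short Monte Carlo sketch; the Gaussian model for each classifier's boundary estimate is an assumption made purely for illustration.

```python
# Monte Carlo sketch: variance of an averaged decision boundary falls ~ 1/N
# when individual classifiers give unbiased, uncorrelated boundary estimates.
import numpy as np

rng = np.random.default_rng(1)
true_boundary = 0.0
sigma = 0.3          # std of each classifier's boundary estimate (assumption)
trials = 20000

for n_classifiers in (1, 2, 5, 10, 25):
    estimates = true_boundary + rng.normal(scale=sigma,
                                           size=(trials, n_classifiers))
    combined = estimates.mean(axis=1)            # simple averaging combiner
    print(f"N={n_classifiers:2d}: boundary variance = {combined.var():.5f} "
          f"(theory {sigma**2 / n_classifiers:.5f})")
```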
Zhou, Shuntai; Jones, Corbin; Mieczkowski, Piotr
2015-01-01
ABSTRACT Validating the sampling depth and reducing sequencing errors are critical for studies of viral populations using next-generation sequencing (NGS). We previously described the use of Primer ID to tag each viral RNA template with a block of degenerate nucleotides in the cDNA primer. We now show that low-abundance Primer IDs (offspring Primer IDs) are generated due to PCR/sequencing errors. These artifactual Primer IDs can be removed using a cutoff model for the number of reads required to make a template consensus sequence. We have modeled the fraction of sequences lost due to Primer ID resampling. For a typical sequencing run, less than 10% of the raw reads are lost to offspring Primer ID filtering and resampling. The remaining raw reads are used to correct for PCR resampling and sequencing errors. We also demonstrate that Primer ID reveals bias intrinsic to PCR, especially at low template input or utilization. cDNA synthesis and PCR convert ca. 20% of RNA templates into recoverable sequences, and 30-fold sequence coverage recovers most of these template sequences. We have directly measured the residual error rate to be around 1 in 10,000 nucleotides. We use this error rate and the Poisson distribution to define the cutoff to identify preexisting drug resistance mutations at low abundance in an HIV-infected subject. Collectively, these studies show that >90% of the raw sequence reads can be used to validate template sampling depth and to dramatically reduce the error rate in assessing a genetically diverse viral population using NGS. IMPORTANCE Although next-generation sequencing (NGS) has revolutionized sequencing strategies, it suffers from serious limitations in defining sequence heterogeneity in a genetically diverse population, such as HIV-1 due to PCR resampling and PCR/sequencing errors. The Primer ID approach reveals the true sampling depth and greatly reduces errors. Knowing the sampling depth allows the construction of a model of how to maximize the recovery of sequences from input templates and to reduce resampling of the Primer ID so that appropriate multiplexing can be included in the experimental design. With the defined sampling depth and measured error rate, we are able to assign cutoffs for the accurate detection of minority variants in viral populations. This approach allows the power of NGS to be realized without having to guess about sampling depth or to ignore the problem of PCR resampling, while also being able to correct most of the errors in the data set. PMID:26041299
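A sketch of how the reported residual error rate (about 1 in 10,000 nucleotides) and a Poisson model could set the minimum count for calling a minority variant; the coverage and significance threshold below are assumed values, not the study's.

```python
# Hedged sketch: choose the smallest mutation count k at a position such that
# observing >= k copies is unlikely (< alpha) under a Poisson error model with
# the ~1e-4 residual error rate reported above. Coverage/alpha are assumptions.
from scipy.stats import poisson

error_rate = 1e-4          # residual errors per nucleotide (from the abstract)
coverage = 5000            # template consensus sequences at this position (assumed)
alpha = 0.001              # tolerated false-positive probability (assumed)

lam = error_rate * coverage                 # expected error count at the site
k = 1
while poisson.sf(k - 1, lam) >= alpha:      # P(X >= k) under Poisson(lam)
    k += 1
print(f"call variants present in at least {k} consensus sequences")
```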
New architecture for dynamic frame-skipping transcoder.
Fung, Kai-Tat; Chan, Yui-Lam; Siu, Wan-Chi
2002-01-01
Transcoding is a key technique for reducing the bit rate of a previously compressed video signal. A high transcoding ratio may result in an unacceptable picture quality when the full frame rate of the incoming video bitstream is used. Frame skipping is often used as an efficient scheme to allocate more bits to the representative frames, so that an acceptable quality for each frame can be maintained. However, the skipped frame must still be decompressed completely, since it might act as a reference frame for nonskipped frames during reconstruction. The newly quantized discrete cosine transform (DCT) coefficients of the prediction errors need to be re-computed for the nonskipped frame with reference to the previous nonskipped frame; this can create undesirable complexity as well as introduce re-encoding errors. In this paper, we propose new algorithms and a novel architecture for frame-rate reduction to improve picture quality and to reduce complexity. The proposed architecture operates mainly in the DCT domain to achieve a transcoder with low complexity. With the direct addition of DCT coefficients and an error compensation feedback loop, re-encoding errors are reduced significantly. Furthermore, we propose a frame-rate control scheme which can dynamically adjust the number of skipped frames according to the incoming motion vectors and the re-encoding errors due to transcoding, such that the decoded sequence has smooth motion as well as better transcoded pictures. Experimental results show that, compared to the conventional transcoder, the new architecture for the frame-skipping transcoder is more robust, produces fewer requantization errors, and has reduced computational complexity.
Poon, Eric G; Cina, Jennifer L; Churchill, William W; Mitton, Patricia; McCrea, Michelle L; Featherstone, Erica; Keohane, Carol A; Rothschild, Jeffrey M; Bates, David W; Gandhi, Tejal K
2005-01-01
We performed a direct observation pre-post study to evaluate the impact of barcode technology on medication dispensing errors and potential adverse drug events in the pharmacy of a tertiary-academic medical center. We found that barcode technology significantly reduced the rate of target dispensing errors leaving the pharmacy by 85%, from 0.37% to 0.06%. The rate of potential adverse drug events (ADEs) due to dispensing errors was also significantly reduced by 63%, from 0.19% to 0.069%. In a 735-bed hospital where 6 million doses of medications are dispensed per year, this technology is expected to prevent about 13,000 dispensing errors and 6,000 potential ADEs per year. PMID:16779372
Fanning, Laura; Jones, Nick; Manias, Elizabeth
2016-04-01
The implementation of automated dispensing cabinets (ADCs) in healthcare facilities appears to be increasing, in particular within Australian hospital emergency departments (EDs). While the investment in ADCs is on the increase, no studies have specifically investigated the impacts of ADCs on medication selection and preparation error rates in EDs. Our aim was to assess the impact of ADCs on medication selection and preparation error rates in an ED of a tertiary teaching hospital. Pre intervention and post intervention study involving direct observations of nurses completing medication selection and preparation activities before and after the implementation of ADCs in the original and new emergency departments within a 377-bed tertiary teaching hospital in Australia. Medication selection and preparation error rates were calculated and compared between these two periods. Secondary end points included the impact on medication error type and severity. A total of 2087 medication selection and preparations were observed among 808 patients pre and post intervention. Implementation of ADCs in the new ED resulted in a 64.7% (1.96% versus 0.69%, respectively, P = 0.017) reduction in medication selection and preparation errors. All medication error types were reduced in the post intervention study period. There was an insignificant impact on medication error severity as all errors detected were categorised as minor. The implementation of ADCs could reduce medication selection and preparation errors and improve medication safety in an ED setting. © 2015 John Wiley & Sons, Ltd.
Simulation: learning from mistakes while building communication and teamwork.
Kuehster, Christina R; Hall, Carla D
2010-01-01
Medical errors are one of the leading causes of death annually in the United States. Many of these errors are related to poor communication and/or lack of teamwork. Using simulation as a teaching modality provides a dual role in helping to reduce these errors. Thorough integration of clinical practice with teamwork and communication in a safe environment increases the likelihood of reducing the error rates in medicine. By allowing practitioners to make potential errors in a safe environment, such as simulation, these valuable lessons improve retention and will rarely be repeated.
Reduction in pediatric identification band errors: a quality collaborative.
Phillips, Shannon Connor; Saysana, Michele; Worley, Sarah; Hain, Paul D
2012-06-01
Accurate and consistent placement of a patient identification (ID) band is used in health care to reduce errors associated with patient misidentification. Multiple safety organizations have devoted time and energy to improving patient ID, but no multicenter improvement collaboratives have shown scalability of previously successful interventions. We hoped to reduce by half the pediatric patient ID band error rate, defined as absent, illegible, or inaccurate ID band, across a quality improvement learning collaborative of hospitals in 1 year. On the basis of a previously successful single-site intervention, we conducted a self-selected 6-site collaborative to reduce ID band errors in heterogeneous pediatric hospital settings. The collaborative had 3 phases: preparatory work and employee survey of current practice and barriers, data collection (ID band failure rate), and intervention driven by data and collaborative learning to accelerate change. The collaborative audited 11377 patients for ID band errors between September 2009 and September 2010. The ID band failure rate decreased from 17% to 4.1% (77% relative reduction). Interventions including education of frontline staff regarding correct ID bands as a safety strategy; a change to softer ID bands, including "luggage tag" type ID bands for some patients; and partnering with families and patients through education were applied at all institutions. Over 13 months, a collaborative of pediatric institutions significantly reduced the ID band failure rate. This quality improvement learning collaborative demonstrates that safety improvements tested in a single institution can be disseminated to improve quality of care across large populations of children.
Error Correction using Quantum Quasi-Cyclic Low-Density Parity-Check (LDPC) Codes
NASA Astrophysics Data System (ADS)
Jing, Lin; Brun, Todd; Quantum Research Team
Quasi-cyclic LDPC codes can approach the Shannon capacity and have efficient decoders. Hagiwara et al. (2007) presented a method to calculate parity check matrices with high girth. Two distinct, orthogonal matrices Hc and Hd are used. Using submatrices obtained from Hc and Hd by deleting rows, we can alter the code rate. The submatrix of Hc is used to correct Pauli X errors, and the submatrix of Hd to correct Pauli Z errors. We simulated this system for depolarizing noise on USC's High Performance Computing Cluster and obtained the block error rate (BER) as a function of the error weight and code rate. From the rates of uncorrectable errors under different error weights we can extrapolate the BER to any small error probability. Our results show that this code family can perform reasonably well even at high code rates, thus considerably reducing the overhead compared to concatenated and surface codes. This makes these codes promising as storage blocks in fault-tolerant quantum computation.
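A small illustration of the quasi-cyclic structure itself: a parity-check matrix assembled from circulant permutation matrices, with block-rows deleted to raise the code rate. The block size and shift exponents are arbitrary examples, not the Hagiwara et al. construction or the CSS pairing of Hc and Hd.

```python
# Illustrative quasi-cyclic parity-check matrix built from circulant permutation
# matrices I(s) (identity cyclically shifted by s). The shift exponents and block
# size are arbitrary examples, not the construction of Hagiwara et al. (2007).
import numpy as np

def circulant_permutation(p, shift):
    """p x p identity matrix with columns cyclically shifted by `shift`."""
    return np.roll(np.eye(p, dtype=int), shift, axis=1)

p = 7                                     # circulant block size (assumed)
shifts = [[1, 2, 4],                      # each row of shifts -> one block-row of H
          [3, 6, 5]]

H = np.block([[circulant_permutation(p, s) for s in row] for row in shifts])
print("H shape:", H.shape)                # (2*p, 3*p) block structure

# Deleting block-rows (as in the abstract) raises the code rate:
H_high_rate = H[:p, :]                    # keep a single block-row
print("high-rate H shape:", H_high_rate.shape)
```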
Sethuraman, Usha; Kannikeswaran, Nirupama; Murray, Kyle P; Zidan, Marwan A; Chamberlain, James M
2015-06-01
Prescription errors occur frequently in pediatric emergency departments (PEDs). The effect of computerized physician order entry (CPOE) with an electronic medication alert system (EMAS) on these is unknown. The objective was to compare prescription error rates before and after introduction of CPOE with EMAS in a PED. The hypothesis was that CPOE with EMAS would significantly reduce the rate and severity of prescription errors in the PED. A prospective comparison of a sample of outpatient medication prescriptions 5 months before and after CPOE with EMAS implementation (7,268 before and 7,292 after) was performed. Error types and rates, alert types and significance, and physician response were noted. Medication errors were deemed significant if there was a potential to cause life-threatening injury, failure of therapy, or an adverse drug effect. There was a significant reduction in errors per 100 prescriptions (10.4 before vs. 7.3 after; absolute risk reduction = 3.1, 95% confidence interval [CI] = 2.2 to 4.0). Drug dosing error rates decreased from 8 to 5.4 per 100 (absolute risk reduction = 2.6, 95% CI = 1.8 to 3.4). Alerts were generated for 29.6% of prescriptions, with 45% involving drug dose range checking. The sensitivity of CPOE with EMAS in identifying errors in prescriptions was 45.1% (95% CI = 40.8% to 49.6%), and the specificity was 57% (95% CI = 55.6% to 58.5%). Prescribers modified 20% of the dosing alerts, resulting in the error not reaching the patient. Conversely, 11% of true dosing alerts for medication errors were overridden by the prescribers: 88 (11.3%) resulted in medication errors, and 684 (88.6%) were false-positive alerts. A CPOE with EMAS was associated with a decrease in overall prescription errors in our PED. Further system refinements are required to reduce the high false-positive alert rates. © 2015 by the Society for Academic Emergency Medicine.
Bit-error rate for free-space adaptive optics laser communications.
Tyson, Robert K
2002-04-01
An analysis of adaptive optics compensation for atmospheric-turbulence-induced scintillation is presented with the figure of merit being the laser communications bit-error rate. The formulation covers weak, moderate, and strong turbulence; on-off keying; and amplitude-shift keying, over horizontal propagation paths or on a ground-to-space uplink or downlink. The theory shows that under some circumstances the bit-error rate can be improved by a few orders of magnitude with the addition of adaptive optics to compensate for the scintillation. Low-order compensation (less than 40 Zernike modes) appears to be feasible as well as beneficial for reducing the bit-error rate and increasing the throughput of the communication link.
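A Monte Carlo sketch of the underlying idea: the on-off-keying bit-error rate is the turbulence-free BER averaged over a fluctuating irradiance, so reducing the scintillation index (as adaptive optics compensation does) lowers the mean BER. The log-normal fading model, SNR value, and BER expression are illustrative assumptions, not the paper's formulation.

```python
# Hedged sketch: OOK bit-error rate averaged over log-normal irradiance fading.
# Lowering the scintillation index (what adaptive optics compensation does)
# reduces the mean BER. SNR value and fading model are illustrative assumptions.
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(2)
snr_db = 16.0
snr = 10 ** (snr_db / 10)
samples = 200_000

def mean_ber_ook(scintillation_index):
    # Log-normal irradiance I with <I> = 1 and the given normalized variance.
    sigma2 = np.log(1.0 + scintillation_index)
    I = rng.lognormal(mean=-sigma2 / 2, sigma=np.sqrt(sigma2), size=samples)
    # Instantaneous OOK BER ~ 0.5*erfc(sqrt(SNR)*I / (2*sqrt(2))), averaged over I.
    return np.mean(0.5 * erfc(np.sqrt(snr) * I / (2 * np.sqrt(2))))

for s_idx in (0.5, 0.1, 0.01):     # uncompensated -> partially/fully compensated
    print(f"scintillation index {s_idx:.2f}: mean BER = {mean_ber_ook(s_idx):.2e}")
```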
Reduction in chemotherapy order errors with computerized physician order entry.
Meisenberg, Barry R; Wright, Robert R; Brady-Copertino, Catherine J
2014-01-01
To measure the number and type of errors in chemotherapy order composition under three sequential methods of ordering: handwritten orders, preprinted orders, and computerized physician order entry (CPOE) embedded in the electronic health record. From 2008 to 2012, a sample of completed chemotherapy orders was reviewed by a pharmacist for the number and type of errors as part of routine performance improvement monitoring. Error frequencies for each of the three distinct methods of composing chemotherapy orders were compared using statistical methods. The rate of problematic order sets (those requiring significant rework for clarification) was reduced from 30.6% with handwritten orders to 12.6% with preprinted orders (preprinted v handwritten, P < .001) to 2.2% with CPOE (preprinted v CPOE, P < .001). The incidence of errors capable of causing harm was reduced from 4.2% with handwritten orders to 1.5% with preprinted orders (preprinted v handwritten, P < .001) to 0.1% with CPOE (CPOE v preprinted, P < .001). The number of problem- and error-containing chemotherapy orders was reduced sequentially by preprinted order sets and then by CPOE. CPOE is associated with low error rates, but it did not eliminate all errors, and the technology can introduce novel types of errors not seen with traditional handwritten or preprinted orders. Vigilance even with CPOE is still required to avoid patient harm.
Use of Earth’s Magnetic Field for Mitigating Gyroscope Errors Regardless of Magnetic Perturbation
Afzal, Muhammad Haris; Renaudin, Valérie; Lachapelle, Gérard
2011-01-01
Most portable systems like smart-phones are equipped with low cost consumer grade sensors, making them useful as Pedestrian Navigation Systems (PNS). Measurements of these sensors are severely contaminated by errors caused due to instrumentation and environmental issues rendering the unaided navigation solution with these sensors of limited use. The overall navigation error budget associated with pedestrian navigation can be categorized into position/displacement errors and attitude/orientation errors. Most of the research is conducted for tackling and reducing the displacement errors, which either utilize Pedestrian Dead Reckoning (PDR) or special constraints like Zero velocity UPdaTes (ZUPT) and Zero Angular Rate Updates (ZARU). This article targets the orientation/attitude errors encountered in pedestrian navigation and develops a novel sensor fusion technique to utilize the Earth’s magnetic field, even perturbed, for attitude and rate gyroscope error estimation in pedestrian navigation environments where it is assumed that Global Navigation Satellite System (GNSS) navigation is denied. As the Earth’s magnetic field undergoes severe degradations in pedestrian navigation environments, a novel Quasi-Static magnetic Field (QSF) based attitude and angular rate error estimation technique is developed to effectively use magnetic measurements in highly perturbed environments. The QSF scheme is then used for generating the desired measurements for the proposed Extended Kalman Filter (EKF) based attitude estimator. Results indicate that the QSF measurements are capable of effectively estimating attitude and gyroscope errors, reducing the overall navigation error budget by over 80% in urban canyon environment. PMID:22247672
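A much-simplified sketch of the idea (not the paper's EKF): detect quasi-static-field windows from the magnetometer and use them to estimate a single gyro bias; the signals, thresholds, and one-axis geometry are synthetic assumptions.

```python
# Simplified sketch (not the paper's EKF): detect quasi-static magnetic field
# (QSF) windows from magnetometer heading variance, then estimate a z-gyro bias
# by comparing gyro-integrated heading change with the magnetometer heading
# change inside each window. Signals and thresholds are synthetic assumptions.
import numpy as np

rng = np.random.default_rng(3)
dt, n = 0.01, 6000
true_bias = 0.02                                  # rad/s gyro bias to recover

# Synthetic walk: heading steady, then turning, then steady again.
true_rate = np.zeros(n)
true_rate[2000:4000] = 0.5                        # a turn; heading not quasi-static here
heading = np.cumsum(true_rate) * dt

gyro = true_rate + true_bias + rng.normal(scale=0.01, size=n)
mag_heading = heading + rng.normal(scale=0.02, size=n)   # magnetometer-derived heading

win = 200                                         # 2 s windows
bias_estimates = []
for start in range(0, n - win, win):
    seg = slice(start, start + win)
    # QSF test: magnetometer heading nearly constant over the window.
    if np.var(mag_heading[seg]) < 1e-3:
        gyro_delta = np.sum(gyro[seg]) * dt       # gyro-integrated heading change
        mag_delta = mag_heading[start + win - 1] - mag_heading[start]
        bias_estimates.append((gyro_delta - mag_delta) / (win * dt))

print(f"estimated bias = {np.mean(bias_estimates):.4f} rad/s (true {true_bias})")
```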
A long-term follow-up evaluation of electronic health record prescribing safety
Abramson, Erika L; Malhotra, Sameer; Osorio, S Nena; Edwards, Alison; Cheriff, Adam; Cole, Curtis; Kaushal, Rainu
2013-01-01
Objective To be eligible for incentives through the Electronic Health Record (EHR) Incentive Program, many providers using older or locally developed EHRs will be transitioning to new, commercial EHRs. We previously evaluated prescribing errors made by providers in the first year following transition from a locally developed EHR with minimal prescribing clinical decision support (CDS) to a commercial EHR with robust CDS. Following system refinements, we conducted this study to assess the rates and types of errors 2 years after transition and determine the evolution of errors. Materials and methods We conducted a mixed methods cross-sectional case study of 16 physicians at an academic-affiliated ambulatory clinic from April to June 2010. We utilized standardized prescription and chart review to identify errors. Fourteen providers also participated in interviews. Results We analyzed 1905 prescriptions. The overall prescribing error rate was 3.8 per 100 prescriptions (95% CI 2.8 to 5.1). Error rates were significantly lower 2 years after transition (p<0.001 compared to pre-implementation, 12 weeks and 1 year after transition). Rates of near misses remained unchanged. Providers positively appreciated most system refinements, particularly reduced alert firing. Discussion Our study suggests that over time and with system refinements, use of a commercial EHR with advanced CDS can lead to low prescribing error rates, although more serious errors may require targeted interventions to eliminate them. Reducing alert firing frequency appears particularly important. Our results provide support for federal efforts promoting meaningful use of EHRs. Conclusions Ongoing error monitoring can allow CDS to be optimally tailored and help achieve maximal safety benefits. Clinical Trials Registration ClinicalTrials.gov, Identifier: NCT00603070. PMID:23578816
Fore, Amanda M; Sculli, Gary L; Albee, Doreen; Neily, Julia
2013-01-01
To implement the sterile cockpit principle to decrease interruptions and distractions during high-volume medication administration and reduce the number of medication errors. While some studies have described the importance of reducing interruptions as a tactic to reduce medication errors, work is needed to assess the impact on patient outcomes. Data regarding the type and frequency of distractions were collected during the first 11 weeks of implementation. Medication error rates were tracked for 1 year before and 1 year after implementation. Simple regression analysis showed a decrease in the mean number of distractions (β = -0.193, P = 0.02) over time. The medication error rate decreased by 42.78% (P = 0.04) after implementation of the sterile cockpit principle. The use of crew resource management techniques, including the sterile cockpit principle, applied to medication administration has a significant impact on patient safety. Applying the sterile cockpit principle to inpatient medical units is a feasible approach to reduce the number of distractions during the administration of medication and, thus, the likelihood of medication error. 'Do Not Disturb' signs and vests are inexpensive, simple interventions that can be used as reminders to decrease distractions. © 2012 Blackwell Publishing Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arnold, Anthony, E-mail: anthony.arnold@sesiahs.health.nsw.gov.a; Delaney, Geoff P.; Cassapi, Lynette
Purpose: Radiotherapy is a common treatment for cancer patients. Although the incidence of error is low, errors can be severe or affect significant numbers of patients. In addition, errors will often not manifest until long periods after treatment. This study describes the development of an incident reporting tool that allows categorical analysis and time trend reporting, covering the first 3 years of use. Methods and Materials: A radiotherapy-specific incident analysis system was established. Staff members were encouraged to report actual errors and near-miss events detected at prescription, simulation, planning, or treatment phases of radiotherapy delivery. Trend reporting was reviewed monthly. Results: Reports were analyzed for the first 3 years of operation (May 2004-2007). A total of 688 reports was received during the study period. The actual error rate was 0.2% per treatment episode. During the study period, the actual error rates decreased significantly from 1% per year to 0.3% per year (p < 0.001), as did the total event report rates (p < 0.0001). There were 3.5 times as many near misses reported compared with actual errors. Conclusions: This system has allowed real-time analysis of events within a radiation oncology department and has contributed to a reduced error rate through a focus on learning and prevention from the near-miss reports. Plans are underway to develop this reporting tool for Australia and New Zealand.
Errors as a Means of Reducing Impulsive Food Choice
Sellitto, Manuela; di Pellegrino, Giuseppe
2016-01-01
Nowadays, the increasing incidence of eating disorders due to poor self-control has given rise to increased obesity and other chronic weight problems, and ultimately, to reduced life expectancy. The capacity to refrain from automatic responses is usually high in situations in which making errors is highly likely. The protocol described here aims at reducing imprudent preference in women during hypothetical intertemporal choices about appetitive food by associating it with errors. First, participants undergo an error task where two different edible stimuli are associated with two different error likelihoods (high and low). Second, they make intertemporal choices about the two edible stimuli, separately. As a result, this method decreases the discount rate for future amounts of the edible reward that cued higher error likelihood, selectively. This effect is under the influence of the self-reported hunger level. The present protocol demonstrates that errors, well known as motivationally salient events, can induce the recruitment of cognitive control, thus being ultimately useful in reducing impatient choices for edible commodities. PMID:27341281
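The discount rate referred to here is commonly quantified with a hyperbolic model, V = A/(1 + kD); a short sketch of fitting k to hypothetical indifference points (the data are invented purely for illustration).

```python
# Illustrative sketch: estimating a hyperbolic discount rate k from hypothetical
# indifference points, V = A / (1 + k*D). Data below are made up for demonstration.
import numpy as np
from scipy.optimize import curve_fit

delays_days = np.array([2, 7, 14, 30, 90])
# Immediate amounts judged equivalent to 100 units of delayed food (assumed data).
indifference = np.array([85, 70, 60, 45, 25])

def hyperbolic(delay, k, amount=100.0):
    return amount / (1.0 + k * delay)

(k_hat,), _ = curve_fit(hyperbolic, delays_days, indifference, p0=[0.05])
print(f"estimated discount rate k = {k_hat:.3f} per day")
# A smaller fitted k after the error manipulation would indicate less impulsive choice.
```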
NASA Astrophysics Data System (ADS)
Liao, Renbo; Liu, Hongzhan; Qiao, Yaojun
2014-05-01
In order to improve the power efficiency and reduce the packet error rate of reverse differential pulse position modulation (RDPPM) for wireless optical communication (WOC), a hybrid reverse differential pulse position width modulation (RDPPWM) scheme is proposed, based on RDPPM and reverse pulse width modulation. Subsequently, the symbol structure of RDPPWM is briefly analyzed, and its performance is compared with that of other modulation schemes in terms of average transmitted power, bandwidth requirement, and packet error rate over ideal additive white Gaussian noise (AWGN) channels. Based on the given model, the simulation results show that the proposed modulation scheme has the advantages of improving the power efficiency and reducing the bandwidth requirement. Moreover, in terms of error probability performance, RDPPWM can achieve a much lower packet error rate than that of RDPPM. For example, at the same received signal power of -28 dBm, the packet error rate of RDPPWM can decrease to 2.6×10⁻¹², while that of RDPPM is 2.2×10. Furthermore, RDPPWM does not need symbol synchronization at the receiving end. These considerations make RDPPWM a favorable candidate to select as the modulation scheme in the WOC systems.
Palmero, David; Di Paolo, Ermindo R; Beauport, Lydie; Pannatier, André; Tolsa, Jean-François
2016-01-01
The objective of this study was to assess whether the introduction of a new preformatted medical order sheet coupled with an introductory course affected prescription quality and the frequency of errors during the prescription stage in a neonatal intensive care unit (NICU). Two-phase observational study consisting of two consecutive 4-month phases, pre-intervention (phase 0) and post-intervention (phase I), conducted in an 11-bed NICU in a Swiss university hospital. Interventions consisted of the introduction of a new preformatted medical order sheet with explicit information supplied, coupled with a staff introductory course on appropriate prescription and medication errors. The main outcomes measured were formal aspects of prescription and the frequency and nature of prescription errors. Eighty-three and 81 patients were included in phase 0 and phase I, respectively. A total of 505 handwritten prescriptions in phase 0 and 525 in phase I were analysed. The rate of prescription errors decreased significantly from 28.9% in phase 0 to 13.5% in phase I (p < 0.05). Compared with phase 0, dose errors, name confusion and errors in frequency and rate of drug administration decreased in phase I, from 5.4 to 2.7% (p < 0.05), 5.9 to 0.2% (p < 0.05), 3.6 to 0.2% (p < 0.05), and 4.7 to 2.1% (p < 0.05), respectively. The rate of incomplete and ambiguous prescriptions decreased from 44.2 to 25.7% and 8.5 to 3.2% (p < 0.05), respectively. Inexpensive and simple interventions can improve the intelligibility of prescriptions and reduce medication errors. Medication errors are frequent in NICUs, and prescription is one of the most critical steps. CPOE reduces prescription errors, but its implementation is not available everywhere. A preformatted medical order sheet coupled with an introductory course decreased medication errors in a NICU. A preformatted medical order sheet is an inexpensive and readily implemented alternative to CPOE.
Extracellular space preservation aids the connectomic analysis of neural circuits
Pallotto, Marta; Watkins, Paul V; Fubara, Boma; Singer, Joshua H; Briggman, Kevin L
2015-01-01
Dense connectomic mapping of neuronal circuits is limited by the time and effort required to analyze 3D electron microscopy (EM) datasets. Algorithms designed to automate image segmentation suffer from substantial error rates and require significant manual error correction. Any improvement in segmentation error rates would therefore directly reduce the time required to analyze 3D EM data. We explored preserving extracellular space (ECS) during chemical tissue fixation to improve the ability to segment neurites and to identify synaptic contacts. ECS preserved tissue is easier to segment using machine learning algorithms, leading to significantly reduced error rates. In addition, we observed that electrical synapses are readily identified in ECS preserved tissue. Finally, we determined that antibodies penetrate deep into ECS preserved tissue with only minimal permeabilization, thereby enabling correlated light microscopy (LM) and EM studies. We conclude that preservation of ECS benefits multiple aspects of the connectomic analysis of neural circuits. DOI: http://dx.doi.org/10.7554/eLife.08206.001 PMID:26650352
Derks, E M; Zwinderman, A H; Gamazon, E R
2017-05-01
Population divergence impacts the degree of population stratification in Genome-Wide Association Studies. We aim to: (i) investigate the type-I error rate as a function of population divergence (F_ST) in multi-ethnic (admixed) populations; (ii) evaluate the statistical power and effect size estimates; and (iii) investigate the impact of population stratification on the results of gene-based analyses. Quantitative phenotypes were simulated. The type-I error rate was investigated for Single Nucleotide Polymorphisms (SNPs) with varying levels of F_ST between the ancestral European and African populations. The type-II error rate was investigated for a SNP characterized by a high value of F_ST. In all tests, genomic MDS components were included to correct for population stratification. Type-I and type-II error rates were adequately controlled in a population that included two distinct ethnic populations but not in admixed samples. Statistical power was reduced in the admixed samples. Gene-based tests showed no residual inflation in the type-I error rate.
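A sketch of the stratification correction described above: ancestry components included as covariates in a per-SNP association test. The data are simulated, a single ancestry axis stands in for the genomic MDS components, and plain linear regression is used purely for illustration.

```python
# Illustrative sketch: including ancestry (MDS-like) components as covariates in a
# per-SNP association test on a simulated admixed sample. Data are synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 2000
ancestry = rng.uniform(0, 1, n)                   # admixture proportion (stand-in MDS axis)

# SNP with ancestry-dependent allele frequency (high-FST-like) but no true effect.
freq = 0.1 + 0.6 * ancestry
genotype = rng.binomial(2, freq)

# Phenotype depends on ancestry (stratification), not on the SNP itself.
phenotype = 0.5 * ancestry + rng.normal(size=n)

naive = sm.OLS(phenotype, sm.add_constant(genotype)).fit()
adjusted = sm.OLS(phenotype,
                  sm.add_constant(np.column_stack([genotype, ancestry]))).fit()
print(f"naive SNP p-value:    {naive.pvalues[1]:.2e}  (expected to be spuriously small)")
print(f"adjusted SNP p-value: {adjusted.pvalues[1]:.2f}  (should no longer be inflated)")
```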
[Diagnostic Errors in Medicine].
Buser, Claudia; Bankova, Andriyana
2015-12-09
The recognition of diagnostic errors in everyday practice can help improve patient safety. The most common diagnostic errors are the cognitive errors, followed by system-related errors and no fault errors. The cognitive errors often result from mental shortcuts, known as heuristics. The rate of cognitive errors can be reduced by a better understanding of heuristics and the use of checklists. The autopsy as a retrospective quality assessment of clinical diagnosis has a crucial role in learning from diagnostic errors. Diagnostic errors occur more often in primary care in comparison to hospital settings. On the other hand, the inpatient errors are more severe than the outpatient errors.
Zhang, Jiayu; Li, Jie; Zhang, Xi; Che, Xiaorui; Huang, Yugang; Feng, Kaiqiang
2018-05-04
The Semi-Strapdown Inertial Navigation System (SSINS) provides a new solution to attitude measurement of a high-speed rotating missile. However, micro-electro-mechanical-systems (MEMS) inertial measurement unit (MIMU) outputs are corrupted by significant sensor errors. In order to improve the navigation precision, a rotation modulation technique called the Rotation Semi-Strapdown Inertial Navigation System (RSSINS) is introduced into SINS. In practice, the stability of the modulation angular rate is difficult to achieve in a high-speed rotation environment. The changing rotary angular rate has an impact on the inertial sensor error self-compensation. In this paper, the influence of modulation angular rate error, including the acceleration-deceleration process and instability of the angular rate, on the navigation accuracy of RSSINS is deduced, and the error characteristics of the reciprocating rotation scheme are analyzed. A new compensation method is proposed to remove or reduce sensor errors so as to make it possible to maintain high-precision autonomous navigation performance with the MIMU when there is no external aid. Experiments have been carried out to validate the performance of the method. In addition, the proposed method is applicable for modulation angular rate error compensation under various dynamic conditions.
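A toy illustration of why rotation modulation suppresses a constant gyro bias, and why an imperfect modulation rate (for example an acceleration phase) degrades the cancellation; all parameters are illustrative.

```python
# Toy illustration of rotation modulation: a constant x-axis gyro bias, when the
# sensor is rotated about z, projects onto the navigation axes as a sinusoid and
# averages toward zero over whole revolutions. An acceleration phase in the
# modulation rate degrades that cancellation. All parameters are illustrative.
import numpy as np

dt, T = 0.001, 10.0
t = np.arange(0, T, dt)
bias_body_x = 0.01                        # rad/s constant sensor bias

def residual_drift(rate):
    phase = np.cumsum(rate) * dt
    nav_bias_x = bias_body_x * np.cos(phase)   # bias seen on the navigation x-axis
    return abs(np.sum(nav_bias_x) * dt)        # accumulated attitude error (rad)

steady = np.full(t.size, 2 * np.pi)            # ideal 1 rev/s modulation
ramped = np.clip(t / 2.0, 0, 1) * 2 * np.pi    # 2 s acceleration before steady rotation

print(f"no modulation          : {bias_body_x * T:.4f} rad")
print(f"steady modulation      : {residual_drift(steady):.5f} rad")
print(f"with acceleration phase: {residual_drift(ramped):.5f} rad")
```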
NASA Technical Reports Server (NTRS)
Noble, Viveca K.
1994-01-01
When data is transmitted through a noisy channel, errors are produced within the data, rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes the CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2(exp 8)) with an interleave depth of 5 as the outermost code, a (7, 1/2) convolutional code as an inner code and the CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The received signal-to-noise ratio required for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability length is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.
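The role of interleaving can be made concrete with a small sketch (an illustration only, not part of the simulator): a depth-5 block interleaver spreads a 40-symbol channel burst across five RS(255,223) codewords, leaving each codeword with only a handful of symbol errors, well inside the 16-symbol correction capability.

    # Toy depth-5 block interleaver: write codewords row by row, read the channel
    # column by column, so a channel burst is shared out among the codewords.
    DEPTH, N = 5, 255                      # interleave depth, RS codeword length

    def interleave(symbols):
        rows = [symbols[i * N:(i + 1) * N] for i in range(DEPTH)]
        return [rows[r][c] for c in range(N) for r in range(DEPTH)]

    def deinterleave(symbols):
        cols = [symbols[c * DEPTH:(c + 1) * DEPTH] for c in range(N)]
        return [cols[c][r] for r in range(DEPTH) for c in range(N)]

    data = list(range(DEPTH * N))
    channel = interleave(data)
    for i in range(100, 140):              # a 40-symbol burst on the channel
        channel[i] = -1
    received = deinterleave(channel)
    for r in range(DEPTH):
        errs = sum(1 for s in received[r * N:(r + 1) * N] if s == -1)
        print(f"codeword {r}: {errs} symbol errors")   # 8 each, well under t = 16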
Heat conduction errors and time lag in cryogenic thermometer installations
NASA Technical Reports Server (NTRS)
Warshawsky, I.
1973-01-01
Installation practices are recommended that will increase rate of heat exchange between the thermometric sensing element and the cryogenic fluid and that will reduce the rate of undesired heat transfer to higher-temperature objects. Formulas and numerical data are given that help to estimate the magnitude of heat-conduction errors and of time lag in response.
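As a rough guide to the kind of estimate such a report provides, two standard textbook expressions (a generic rendering, not necessarily the exact formulas of the report) are the fin-type immersion error for a conducting stem of immersed length L and the first-order time lag of the sensing element:

\[
\frac{T_{\mathrm{sensor}}-T_{\mathrm{fluid}}}{T_{\mathrm{mount}}-T_{\mathrm{fluid}}} \approx \frac{1}{\cosh(mL)},
\qquad m=\sqrt{\frac{hP}{kA_c}},
\]
\[
\tau \approx \frac{\rho V c_p}{hA_s},
\qquad T_{\mathrm{sensor}}(t)-T_{\mathrm{fluid}} = \bigl(T_0-T_{\mathrm{fluid}}\bigr)\,e^{-t/\tau},
\]

where h is the fluid-side heat transfer coefficient, P, A_c and k are the perimeter, cross-section and conductivity of the stem, and rho*V*c_p and A_s are the thermal mass and wetted surface of the element. Increasing the heat exchange with the fluid (larger h and A_s, deeper immersion) reduces both the conduction error and the time lag, which is the thrust of the recommended practices.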
Cryptographic robustness of a quantum cryptography system using phase-time coding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molotkov, S. N.
2008-01-15
A cryptographic analysis is presented of a new quantum key distribution protocol using phase-time coding. An upper bound is obtained for the error rate that guarantees secure key distribution. It is shown that the maximum tolerable error rate for this protocol depends on the counting rate in the control time slot. When no counts are detected in the control time slot, the protocol guarantees secure key distribution if the bit error rate in the sifted key does not exceed 50%. This protocol partially discriminates between errors due to system defects (e.g., imbalance of a fiber-optic interferometer) and eavesdropping. In the absence of eavesdropping, the counts detected in the control time slot are not caused by interferometer imbalance, which reduces the requirements for interferometer stability.
Hodgson, Catherine; Lambon Ralph, Matthew A
2008-01-01
Semantic errors are commonly found in semantic dementia (SD) and some forms of stroke aphasia and provide insights into semantic processing and speech production. Low error rates are found in standard picture naming tasks in normal controls. In order to increase error rates and thus provide an experimental model of aphasic performance, this study utilised a novel method: tempo picture naming. Experiment 1 showed that, compared to standard deadline naming tasks, participants made more errors on the tempo picture naming tasks. Further, RTs were longer and more errors were produced to living items than non-living items, a pattern seen in both semantic dementia and semantically-impaired stroke aphasic patients. Experiment 2 showed that providing the initial phoneme as a cue enhanced performance whereas providing an incorrect phonemic cue further reduced performance. These results support the contention that the tempo picture naming paradigm reduces the time allowed for controlled semantic processing, causing increased error rates. This experimental procedure would, therefore, appear to mimic the performance of aphasic patients with multi-modal semantic impairment that results from poor semantic control rather than the degradation of semantic representations observed in semantic dementia [Jefferies, E. A., & Lambon Ralph, M. A. (2006). Semantic impairment in stroke aphasia vs. semantic dementia: A case-series comparison. Brain, 129, 2132-2147]. Further implications for theories of semantic cognition and models of speech processing are discussed.
Hayden, Randall T; Patterson, Donna J; Jay, Dennis W; Cross, Carl; Dotson, Pamela; Possel, Robert E; Srivastava, Deo Kumar; Mirro, Joseph; Shenep, Jerry L
2008-02-01
To assess the ability of a bar code-based electronic positive patient and specimen identification (EPPID) system to reduce identification errors in a pediatric hospital's clinical laboratory. An EPPID system was implemented at a pediatric oncology hospital to reduce errors in patient and laboratory specimen identification. The EPPID system included bar-code identifiers and handheld personal digital assistants supporting real-time order verification. System efficacy was measured in 3 consecutive 12-month time frames, corresponding to periods before, during, and immediately after full EPPID implementation. A significant reduction in the median percentage of mislabeled specimens was observed in the 3-year study period. A decline from 0.03% to 0.005% (P < .001) was observed in the 12 months after full system implementation. On the basis of the pre-intervention detected error rate, it was estimated that EPPID prevented at least 62 mislabeling events during its first year of operation. EPPID decreased the rate of misidentification of clinical laboratory samples. The diminution of errors observed in this study provides support for the development of national guidelines for the use of bar coding for laboratory specimens, paralleling recent recommendations for medication administration.
Elliott, Rachel A; Putman, Koen D; Franklin, Matthew; Annemans, Lieven; Verhaeghe, Nick; Eden, Martin; Hayre, Jasdeep; Rodgers, Sarah; Sheikh, Aziz; Avery, Anthony J
2014-06-01
We recently showed that a pharmacist-led information technology-based intervention (PINCER) was significantly more effective in reducing medication errors in general practices than providing simple feedback on errors, with cost per error avoided at £79 (US$131). We aimed to estimate cost effectiveness of the PINCER intervention by combining effectiveness in error reduction and intervention costs with the effect of the individual errors on patient outcomes and healthcare costs, to estimate the effect on costs and QALYs. We developed Markov models for each of six medication errors targeted by PINCER. Clinical event probability, treatment pathway, resource use and costs were extracted from literature and costing tariffs. A composite probabilistic model combined patient-level error models with practice-level error rates and intervention costs from the trial. Cost per extra QALY and cost-effectiveness acceptability curves were generated from the perspective of NHS England, with a 5-year time horizon. The PINCER intervention generated £2,679 less cost and 0.81 more QALYs per practice [incremental cost-effectiveness ratio (ICER): -£3,037 per QALY] in the deterministic analysis. In the probabilistic analysis, PINCER generated 0.001 extra QALYs per practice compared with simple feedback, at £4.20 less per practice. Despite this extremely small set of differences in costs and outcomes, PINCER dominated simple feedback with a mean ICER of -£3,936 (standard error £2,970). At a ceiling 'willingness-to-pay' of £20,000/QALY, PINCER reaches 59 % probability of being cost effective. PINCER produced marginal health gain at slightly reduced overall cost. Results are uncertain due to the poor quality of data to inform the effect of avoiding errors.
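The headline economic quantities can be illustrated with a toy probabilistic sensitivity analysis (hypothetical distributions, not the study's Markov models): the ICER is the ratio of mean incremental cost to mean incremental QALYs, and one point on the cost-effectiveness acceptability curve is the share of simulations with positive incremental net benefit at a given willingness-to-pay ceiling.

    # Toy PSA sketch (not the PINCER models; spreads are invented for illustration).
    import numpy as np

    rng = np.random.default_rng(1)
    n = 10_000
    d_cost = rng.normal(-4.20, 3000, n)     # incremental cost per practice (GBP), hypothetical spread
    d_qaly = rng.normal(0.001, 0.01, n)     # incremental QALYs per practice, hypothetical spread

    icer = d_cost.mean() / d_qaly.mean()    # ratio of means, as in trial-based CEA
    wtp = 20_000                            # willingness-to-pay ceiling, GBP per QALY
    net_benefit = wtp * d_qaly - d_cost
    prob_ce = (net_benefit > 0).mean()      # one point on the CEAC

    print(f"mean ICER: {icer:,.0f} GBP per QALY")
    print(f"P(cost-effective at {wtp:,} GBP/QALY): {prob_ce:.2f}")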
Olson, Stephen M; Hussaini, Mohammad; Lewis, James S
2011-05-01
Frozen section analysis is an essential tool for assessing margins intra-operatively to assure complete resection. Many institutions evaluate surgical defect edge tissue provided by the surgeon after the main lesion has been removed. With the increasing use of transoral laser microsurgery, this method is becoming even more prevalent. We sought to evaluate error rates at our large academic institution and to see if sampling errors could be reduced by the simple method change of taking an additional third section on these specimens. All head and neck tumor resection cases from January 2005 through August 2008 with margins evaluated by frozen section were identified by database search. These cases were analyzed by cutting two levels during frozen section and a third permanent section later. All resection cases from August 2008 through July 2009 were identified as well. These were analyzed by cutting three levels during frozen section (the third a 'much deeper' level) and a fourth permanent section later. Error rates for both of these periods were determined. Errors were separated into sampling and interpretation types. There were 4976 total frozen section specimens from 848 patients. The overall error rate was 2.4% for all frozen sections where just two levels were evaluated and was 2.5% when three levels were evaluated (P=0.67). The sampling error rate was 1.6% for two-level sectioning and 1.2% for three-level sectioning (P=0.42). However, when considering only the frozen section cases where tumor was ultimately identified (either at the time of frozen section or on permanent sections), the sampling error rate for two-level sectioning was 15.3% versus 7.4% for three-level sectioning. This difference was statistically significant (P=0.006). Cutting a single additional 'deeper' level at the time of frozen section identifies more tumor-bearing specimens and may reduce the number of sampling errors.
Feedback on prescribing errors to junior doctors: exploring views, problems and preferred methods.
Bertels, Jeroen; Almoudaris, Alex M; Cortoos, Pieter-Jan; Jacklin, Ann; Franklin, Bryony Dean
2013-06-01
Prescribing errors are common in hospital inpatients. However, the literature suggests that doctors are often unaware of their errors as they are not always informed of them. It has been suggested that providing more feedback to prescribers may reduce subsequent error rates. Only a few studies have investigated the views of prescribers towards receiving such feedback, or the views of hospital pharmacists as potential feedback providers. Our aim was to explore the views of junior doctors and hospital pharmacists regarding feedback on individual doctors' prescribing errors. Objectives were to determine how feedback was currently provided and any associated problems, to explore views on other approaches to feedback, and to make recommendations for designing suitable feedback systems. A large London NHS hospital trust. To explore views on current and possible feedback mechanisms, self-administered questionnaires were given to all junior doctors and pharmacists, combining both 5-point Likert scale statements and open-ended questions. Agreement scores for statements regarding perceived prescribing error rates, opinions on feedback, barriers to feedback, and preferences for future practice. Response rates were 49% (37/75) for junior doctors and 57% (57/100) for pharmacists. In general, doctors did not feel threatened by feedback on their prescribing errors. They felt that feedback currently provided was constructive but often irregular and insufficient. Most pharmacists provided feedback in various ways; however, some did not or were inconsistent. They were willing to provide more feedback, but did not feel it was always effective or feasible due to barriers such as communication problems and time constraints. Both professional groups preferred individual feedback with additional regular generic feedback on common or serious errors. Feedback on prescribing errors was valued and acceptable to both professional groups. From the results, several suggested methods of providing feedback on prescribing errors emerged. Addressing barriers such as the identification of individual prescribers would facilitate feedback in practice. Research investigating whether or not feedback reduces the subsequent error rate is now needed.
Shah, Priya; Wyatt, Jeremy C; Makubate, Boikanyo; Cross, Frank W
2011-01-01
Objective Expert authorities recommend clinical decision support systems to reduce prescribing error rates, yet large numbers of insignificant on-screen alerts presented in modal dialog boxes persistently interrupt clinicians, limiting the effectiveness of these systems. This study compared the impact of modal and non-modal electronic (e-) prescribing alerts on prescribing error rates, to help inform the design of clinical decision support systems. Design A randomized study of 24 junior doctors each performing 30 simulated prescribing tasks in random order with a prototype e-prescribing system. Using a within-participant design, doctors were randomized to be shown one of three types of e-prescribing alert (modal, non-modal, no alert) during each prescribing task. Measurements The main outcome measure was prescribing error rate. Structured interviews were performed to elicit participants' preferences for the prescribing alerts and their views on clinical decision support systems. Results Participants exposed to modal alerts were 11.6 times less likely to make a prescribing error than those not shown an alert (OR 11.56, 95% CI 6.00 to 22.26). Those shown a non-modal alert were 3.2 times less likely to make a prescribing error (OR 3.18, 95% CI 1.91 to 5.30) than those not shown an alert. The error rate with non-modal alerts was 3.6 times higher than with modal alerts (95% CI 1.88 to 7.04). Conclusions Both kinds of e-prescribing alerts significantly reduced prescribing error rates, but modal alerts were over three times more effective than non-modal alerts. This study provides new evidence about the relative effects of modal and non-modal alerts on prescribing outcomes. PMID:21836158
Rate, causes and reporting of medication errors in Jordan: nurses' perspectives.
Mrayyan, Majd T; Shishani, Kawkab; Al-Faouri, Ibrahim
2007-09-01
The aim of the study was to describe Jordanian nurses' perceptions about various issues related to medication errors. This is the first nursing study about medication errors in Jordan. This was a descriptive study. A convenience sample of 799 nurses from 24 hospitals was obtained. Descriptive and inferential statistics were used for data analysis. Over the course of their nursing career, the average number of medication errors each nurse recalled committing was 2.2. Using incident reports, the rate of medication errors reported to nurse managers was 42.1%. Medication errors occurred mainly when medication labels/packaging were of poor quality or damaged. Nurses failed to report medication errors because they were afraid that they might be subjected to disciplinary actions or even lose their jobs. In the stepwise regression model, gender was the only predictor of medication errors in Jordan. Strategies to reduce or eliminate medication errors are required.
Schroeder, Scott R; Salomon, Meghan M; Galanter, William L; Schiff, Gordon D; Vaida, Allen J; Gaunt, Michael J; Bryson, Michelle L; Rash, Christine; Falck, Suzanne; Lambert, Bruce L
2017-01-01
Background Drug name confusion is a common type of medication error and a persistent threat to patient safety. In the USA, roughly one per thousand prescriptions results in the wrong drug being filled, and most of these errors involve drug names that look or sound alike. Prior to approval, drug names undergo a variety of tests to assess their potential for confusability, but none of these preapproval tests has been shown to predict real-world error rates. Objectives We conducted a study to assess the association between error rates in laboratory-based tests of drug name memory and perception and real-world drug name confusion error rates. Methods Eighty participants, comprising doctors, nurses, pharmacists, technicians and lay people, completed a battery of laboratory tests assessing visual perception, auditory perception and short-term memory of look-alike and sound-alike drug name pairs (eg, hydroxyzine/hydralazine). Results Laboratory test error rates (and other metrics) significantly predicted real-world error rates obtained from a large, outpatient pharmacy chain, with the best-fitting model accounting for 37% of the variance in real-world error rates. Cross-validation analyses confirmed these results, showing that the laboratory tests also predicted errors from a second pharmacy chain, with 45% of the variance being explained by the laboratory test data. Conclusions Across two distinct pharmacy chains, there is a strong and significant association between drug name confusion error rates observed in the real world and those observed in laboratory-based tests of memory and perception. Regulators and drug companies seeking a validated preapproval method for identifying confusing drug names ought to consider using these simple tests. By using a standard battery of memory and perception tests, it should be possible to reduce the number of confusing look-alike and sound-alike drug name pairs that reach the market, which will help protect patients from potentially harmful medication errors. PMID:27193033
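A schematic of the statistical idea follows (hypothetical data, not the study dataset; the real analysis used a richer model): regress real-world confusion rates on laboratory error rates for one pharmacy chain, then check how much variance the fitted model explains in a second chain.

    # Sketch of fitting lab-test error rates against real-world error rates for
    # chain A and validating on chain B. All numbers below are invented.
    import numpy as np

    rng = np.random.default_rng(2)
    n_pairs = 60
    lab_error = rng.uniform(0.0, 0.5, n_pairs)                  # lab error rate per name pair
    chain_a = 0.002 * lab_error + rng.normal(0, 3e-4, n_pairs)  # real-world rates, chain A
    chain_b = 0.002 * lab_error + rng.normal(0, 3e-4, n_pairs)  # real-world rates, chain B

    X = np.column_stack([np.ones(n_pairs), lab_error])
    beta, *_ = np.linalg.lstsq(X, chain_a, rcond=None)          # fit on chain A only

    def r_squared(y, y_hat):
        ss_res = np.sum((y - y_hat) ** 2)
        ss_tot = np.sum((y - y.mean()) ** 2)
        return 1 - ss_res / ss_tot

    print("R^2, chain A (fit):       ", round(r_squared(chain_a, X @ beta), 2))
    print("R^2, chain B (validation):", round(r_squared(chain_b, X @ beta), 2))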
Effect of bar-code technology on the safety of medication administration.
Poon, Eric G; Keohane, Carol A; Yoon, Catherine S; Ditmore, Matthew; Bane, Anne; Levtzion-Korach, Osnat; Moniz, Thomas; Rothschild, Jeffrey M; Kachalia, Allen B; Hayes, Judy; Churchill, William W; Lipsitz, Stuart; Whittemore, Anthony D; Bates, David W; Gandhi, Tejal K
2010-05-06
Serious medication errors are common in hospitals and often occur during order transcription or administration of medication. To help prevent such errors, technology has been developed to verify medications by incorporating bar-code verification technology within an electronic medication-administration system (bar-code eMAR). We conducted a before-and-after, quasi-experimental study in an academic medical center that was implementing the bar-code eMAR. We assessed rates of errors in order transcription and medication administration on units before and after implementation of the bar-code eMAR. Errors that involved early or late administration of medications were classified as timing errors and all others as nontiming errors. Two clinicians reviewed the errors to determine their potential to harm patients and classified those that could be harmful as potential adverse drug events. We observed 14,041 medication administrations and reviewed 3082 order transcriptions. Observers noted 776 nontiming errors in medication administration on units that did not use the bar-code eMAR (an 11.5% error rate) versus 495 such errors on units that did use it (a 6.8% error rate)--a 41.4% relative reduction in errors (P<0.001). The rate of potential adverse drug events (other than those associated with timing errors) fell from 3.1% without the use of the bar-code eMAR to 1.6% with its use, representing a 50.8% relative reduction (P<0.001). The rate of timing errors in medication administration fell by 27.3% (P<0.001), but the rate of potential adverse drug events associated with timing errors did not change significantly. Transcription errors occurred at a rate of 6.1% on units that did not use the bar-code eMAR but were completely eliminated on units that did use it. Use of the bar-code eMAR substantially reduced the rate of errors in order transcription and in medication administration as well as potential adverse drug events, although it did not eliminate such errors. Our data show that the bar-code eMAR is an important intervention to improve medication safety. (ClinicalTrials.gov number, NCT00243373.) 2010 Massachusetts Medical Society
Ultrasound transducer function: annual testing is not sufficient.
Mårtensson, Mattias; Olsson, Mats; Brodin, Lars-Åke
2010-10-01
The objective was to follow up the study 'High incidence of defective ultrasound transducers in use in routine clinical practice' and to evaluate whether annual testing is good enough to reduce the incidence of defective ultrasound transducers in routine clinical practice to an acceptable level. A total of 299 transducers were tested in 13 clinics at five hospitals in the Stockholm area. Approximately 7000-15,000 ultrasound examinations are carried out at these clinics every year. The transducers tested in the study had been tested and classified as fully operational 1 year before and had since been in normal use in routine clinical practice. The transducers were tested with the Sonora FirstCall Test System. There were 81 (27.1%) defective transducers found, giving a 95% confidence interval ranging from 22.1 to 32.1%. The most common transducer errors were 'delamination' of the ultrasound lens and 'break in the cable', which together constituted 82.7% of all transducer errors found. The highest error rate was found at the radiological clinics, with a mean error rate of 36.0%. There was a significant difference in error rate between the two observed ways in which the clinics handled the transducers. There was no significant difference in the error rates of the transducer brands or transducer models. Annual testing is not sufficient to reduce the incidence of defective ultrasound transducers in routine clinical practice to an acceptable level and it is strongly advisable to create a user routine that minimizes the handling of the transducers.
Information systems and human error in the lab.
Bissell, Michael G
2004-01-01
Health system costs in clinical laboratories are incurred daily due to human error. Indeed, a major impetus for automating clinical laboratories has always been the opportunity it presents to simultaneously reduce cost and improve quality of operations by decreasing human error. But merely automating these processes is not enough. To the extent that the introduction of these systems leaves operators with less practice in dealing with unexpected events, or deskills them in problem-solving, new kinds of error will likely appear. Clinical laboratories could potentially benefit by integrating findings on human error from modern behavioral science into their operations. Fully understanding human error requires a deep understanding of human information processing and cognition. Predicting and preventing negative consequences requires application of this understanding to laboratory operations. Although the occurrence of a particular error at a particular instant cannot be absolutely prevented, human error rates can be reduced. The following principles are key: an understanding of the process of learning in relation to error; an understanding of the origin of errors, since this knowledge can be used to reduce their occurrence; optimal systems should be forgiving to the operator by absorbing errors, at least for a time; although much is known by industrial psychologists about how to write operating procedures and instructions in ways that reduce the probability of error, this expertise is hardly ever put to use in the laboratory; and a feedback mechanism must be designed into the system that enables the operator to recognize in real time that an error has occurred.
Comparing errors in ED computer-assisted vs conventional pediatric drug dosing and administration.
Yamamoto, Loren; Kanemori, Joan
2010-06-01
Compared to fixed-dose single-vial drug administration in adults, pediatric drug dosing and administration requires a series of calculations, all of which are potentially error prone. The purpose of this study is to compare error rates and task completion times for common pediatric medication scenarios using computer program assistance vs conventional methods. Two versions of a 4-part paper-based test were developed. Each part consisted of a set of medication administration and/or dosing tasks. Emergency department and pediatric intensive care unit nurse volunteers completed these tasks using both methods (sequence assigned to start with a conventional or a computer-assisted approach). Completion times, errors, and the reason for the error were recorded. Thirty-eight nurses completed the study. Summing the completion of all 4 parts, the mean conventional total time was 1243 seconds vs the mean computer program total time of 879 seconds (P < .001). The conventional manual method had a mean of 1.8 errors vs the computer program with a mean of 0.7 errors (P < .001). Of the 97 total errors, 36 were due to misreading the drug concentration on the label, 34 were due to calculation errors, and 8 were due to misplaced decimals. Of the 36 label interpretation errors, 18 (50%) occurred with digoxin or insulin. Computerized assistance reduced errors and the time required for drug administration calculations. A pattern of errors emerged, noting that reading/interpreting certain drug labels were more error prone. Optimizing the layout of drug labels could reduce the error rate for error-prone labels. Copyright (c) 2010 Elsevier Inc. All rights reserved.
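The kind of calculation such a program automates can be sketched as follows (a minimal illustration with a hypothetical drug table; not the study's software and not clinical guidance): a weight-based dose in mg, then the volume to draw up at a given vial concentration.

    # Minimal sketch of computer-assisted pediatric dosing. The drug table below is
    # hypothetical and for illustration only.
    DRUGS = {
        # name: (dose in mg per kg, vial concentration in mg per mL) -- assumed values
        "ceftriaxone": (50.0, 100.0),
        "gentamicin": (2.5, 10.0),
    }

    def dose_volume_ml(drug: str, weight_kg: float) -> float:
        mg_per_kg, mg_per_ml = DRUGS[drug]
        dose_mg = mg_per_kg * weight_kg
        return round(dose_mg / mg_per_ml, 2)   # volume to draw up, in mL

    print(dose_volume_ml("gentamicin", 12.0))  # 2.5 mg/kg x 12 kg / 10 mg/mL = 3.0 mL

By reading the concentration from a structured table rather than the label, this kind of tool removes the two error sources the study found most common: label misinterpretation and manual arithmetic.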
Identification of Carbon loss in the production of pilot-scale Carbon nanotube using gauze reactor
NASA Astrophysics Data System (ADS)
Wulan, P. P. D. K.; Purwanto, W. W.; Yeni, N.; Lestari, Y. D.
2018-03-01
Carbon loss of more than 65% was the major obstacle in Carbon Nanotube (CNT) production using a gauze pilot-scale reactor. The results showed that the initial carbon loss calculation was 27.64%. The carbon loss calculation was then refined with corrections for: product flow rate measurement error, feed flow rate changes, gas product composition by Gas Chromatography Flame Ionization Detector (GC FID), and carbon particulate captured by glass fiber filters. The error in the product flow rate due to measurement with a soap-bubble meter contributes a carbon loss calculation error of about ±4.14%. Changes in the feed flow rate due to CNT growth in the reactor reduce the carbon loss by 4.97%. The detection of secondary hydrocarbons with GC FID during the CNT production process reduces the carbon loss by 5.14%. Particulates carried by the product stream are very few and correct the carbon loss by only about 0.05%. Taking all the factors into account, the amount of carbon loss within this study is (17.21 ± 4.14)%. Assuming that 4.14% of the carbon loss is due to the error in measuring the product flow rate, the amount of carbon loss is 13.07%. This means that more than 57% of the carbon loss within this study is identified.
Russo, Paola; Piazza, Miriam; Leonardi, Giorgio; Roncoroni, Layla; Russo, Carlo; Spadaro, Salvatore; Quaglini, Silvana
2012-01-01
Blood transfusion is a complex activity subject to a high risk of potentially fatal errors. The development and application of computer-based systems could help reduce the error rate, playing a fundamental role in the improvement of the quality of care. This poster presents an eLearning tool under development that formalizes the guidelines of the transfusion process. This system, implemented in YAWL (Yet Another Workflow Language), will be used to train the personnel in order to improve the efficiency of care and to reduce errors.
Pegler, Joe; Lehane, Elaine; Livingstone, Vicki; McCarthy, Nora; Sahm, Laura J.; Tabirca, Sabin; O’Driscoll, Aoife; Corrigan, Mark
2016-01-01
Background Patient safety requires optimal management of medications. Electronic systems are encouraged to reduce medication errors. Near field communications (NFC) is an emerging technology that may be used to develop novel medication management systems. Methods An NFC-based system was designed to facilitate prescribing, administration and review of medications commonly used on surgical wards. Final year medical, nursing, and pharmacy students were recruited to test the electronic system in a cross-over observational setting on a simulated ward. Medication errors were compared against errors recorded using a paper-based system. Results A significant difference in the commission of medication errors was seen when NFC and paper-based medication systems were compared. Paper use resulted in a mean of 4.09 errors per prescribing round while NFC prescribing resulted in a mean of 0.22 errors per simulated prescribing round (P=0.000). Likewise, medication administration errors were reduced from a mean of 2.30 per drug round with a Paper system to a mean of 0.80 errors per round using NFC (P<0.015). A mean satisfaction score of 2.30 was reported by users, (rated on seven-point scale with 1 denoting total satisfaction with system use and 7 denoting total dissatisfaction). Conclusions An NFC based medication system may be used to effectively reduce medication errors in a simulated ward environment. PMID:28293602
O'Connell, Emer; Pegler, Joe; Lehane, Elaine; Livingstone, Vicki; McCarthy, Nora; Sahm, Laura J; Tabirca, Sabin; O'Driscoll, Aoife; Corrigan, Mark
2016-01-01
Patient safety requires optimal management of medications. Electronic systems are encouraged to reduce medication errors. Near field communications (NFC) is an emerging technology that may be used to develop novel medication management systems. An NFC-based system was designed to facilitate prescribing, administration and review of medications commonly used on surgical wards. Final year medical, nursing, and pharmacy students were recruited to test the electronic system in a cross-over observational setting on a simulated ward. Medication errors were compared against errors recorded using a paper-based system. A significant difference in the commission of medication errors was seen when NFC and paper-based medication systems were compared. Paper use resulted in a mean of 4.09 errors per prescribing round while NFC prescribing resulted in a mean of 0.22 errors per simulated prescribing round (P=0.000). Likewise, medication administration errors were reduced from a mean of 2.30 per drug round with a Paper system to a mean of 0.80 errors per round using NFC (P<0.015). A mean satisfaction score of 2.30 was reported by users, (rated on seven-point scale with 1 denoting total satisfaction with system use and 7 denoting total dissatisfaction). An NFC based medication system may be used to effectively reduce medication errors in a simulated ward environment.
Structured Head and Neck CT Angiography Reporting Reduces Resident Revision Rates.
Johnson, Tucker F; Brinjikji, Waleed; Doolittle, Derrick A; Nagelschneider, Alex A; Welch, Brian T; Kotsenas, Amy L
2018-04-12
This resident-driven quality improvement project was undertaken to assess the effectiveness of structured reporting in reducing revision rates for after-hours reports dictated by residents. The first part of the study assessed baseline revision rates for head and neck CT angiography (CTA) examinations dictated by residents during after-hours call. A structured report was subsequently created based on templates on the RSNA informatics reporting website and critical findings that should be assessed for on all CTA examinations. The template was made available to residents through the speech recognition software for all head and neck CTA examinations for a duration of 2 months. Report revision rates were then compared with and without use of the structured template. The structured template was found to reduce revision rates by approximately 50%, with 10/41 unstructured reports revised and 2/17 structured reports revised. We believe that structured reporting can help reduce reporting errors, particularly typographical errors, train residents to evaluate complex examinations in a systematic fashion, and assist them in recalling critical findings on these examinations. Copyright © 2018 Elsevier Inc. All rights reserved.
Data mining of air traffic control operational errors
DOT National Transportation Integrated Search
2006-01-01
In this paper we present the results of applying data mining techniques to identify patterns and anomalies in air traffic control operational errors (OEs). Reducing the OE rate is of high importance and remains a challenge in the aviation saf...
Estimating Rain Rates from Tipping-Bucket Rain Gauge Measurements
NASA Technical Reports Server (NTRS)
Wang, Jianxin; Fisher, Brad L.; Wolff, David B.
2007-01-01
This paper describes the cubic-spline-based operational system for the generation of the TRMM one-minute rain rate product 2A-56 from Tipping Bucket (TB) gauge measurements. Methodological issues associated with applying the cubic spline to the TB gauge rain rate estimation are closely examined. A simulated TB gauge from a Joss-Waldvogel (JW) disdrometer is employed to evaluate the effects of time scales and rain event definitions on errors of the rain rate estimation. The comparison between rain rates measured from the JW disdrometer and those estimated from the simulated TB gauge shows good overall agreement; however, the TB gauge suffers from sampling problems, resulting in errors in the rain rate estimation. These errors are very sensitive to the time scale of the rain rates. One-minute rain rates suffer substantial errors, especially at low rain rates. When one-minute rain rates are averaged to 4-7 minute or longer time scales, the errors are dramatically reduced. The rain event duration is very sensitive to the event definition but the event rain total is rather insensitive, provided that events with less than 1 millimeter rain totals are excluded. Estimated lower rain rates are sensitive to the event definition whereas the higher rates are not. The median relative absolute errors are about 22% and 32% for 1-minute TB rain rates higher and lower than 3 mm per hour, respectively. These errors decrease to 5% and 14% when TB rain rates are used at the 7-minute scale. The radar reflectivity-rain rate (Ze-R) distributions drawn from a large amount of 7-minute TB rain rates and radar reflectivity data are mostly insensitive to the event definition.
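The core of the method can be sketched in a few lines (an illustration with hypothetical tip times, not the operational 2A-56 code): fit a cubic spline to the cumulative depth registered at each bucket tip and differentiate it to obtain minute-scale rain rates.

    # Cubic-spline rain-rate sketch: cumulative depth at each tip of a 0.254 mm
    # bucket, differentiated to mm/h. Tip times below are invented.
    import numpy as np
    from scipy.interpolate import CubicSpline

    TIP_MM = 0.254                                    # depth per bucket tip
    tip_times_min = np.array([0.0, 2.1, 3.0, 3.6, 4.1, 4.8, 6.0, 9.5])
    cum_depth_mm = TIP_MM * np.arange(len(tip_times_min))

    spline = CubicSpline(tip_times_min, cum_depth_mm)
    minutes = np.arange(0, 10)
    rate_mm_per_hr = spline(minutes, 1) * 60.0        # first derivative (mm/min) -> mm/h
    print(np.round(rate_mm_per_hr, 2))

Between widely spaced tips the spline derivative can wander, which is one face of the sampling problem at low rain rates that the paper quantifies.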
Human operator response to error-likely situations in complex engineering systems
NASA Technical Reports Server (NTRS)
Morris, Nancy M.; Rouse, William B.
1988-01-01
The causes of human error in complex systems are examined. First, a conceptual framework is provided in which two broad categories of error are discussed: errors of action, or slips, and errors of intention, or mistakes. Conditions in which slips and mistakes might be expected to occur are identified, based on existing theories of human error. Regarding the role of workload, it is hypothesized that workload may act as a catalyst for error. Two experiments are presented in which humans' responses to error-likely situations were examined. Subjects controlled PLANT under a variety of conditions and periodically provided subjective ratings of mental effort. A complex pattern of results was obtained, which was not consistent with predictions. Generally, the results of this research indicate that: (1) humans respond to conditions in which errors might be expected by attempting to reduce the possibility of error, and (2) adaptation to conditions is a potent influence on human behavior in discretionary situations. Subjects' explanations for changes in effort ratings are also explored.
NASA Technical Reports Server (NTRS)
Noble, Viveca K.
1993-01-01
There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and to evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal-to-noise ratio needed for a certain desired bit error rate. The use of concatenated coding, e.g. an inner convolutional code and an outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
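For the error-detection layer, a compact sketch of the 16-bit CRC of the CCITT family commonly cited for CCSDS frames is shown below (generator polynomial 0x1021 with an all-ones preset; an illustration only, consult the applicable CCSDS book for the normative definition).

    # Bitwise CRC-16 (poly 0x1021, initial value 0xFFFF, no final inversion).
    def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
        for byte in data:
            crc ^= byte << 8
            for _ in range(8):
                if crc & 0x8000:
                    crc = ((crc << 1) ^ 0x1021) & 0xFFFF
                else:
                    crc = (crc << 1) & 0xFFFF
        return crc

    frame = b"CCSDS transfer frame payload"
    check = crc16_ccitt(frame)
    print(hex(check))
    # A receiver recomputes the CRC over the payload plus the appended check bits;
    # any single error burst of 16 bits or fewer is guaranteed to be detected.
    assert crc16_ccitt(frame + check.to_bytes(2, "big")) == 0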
The influence of the structure and culture of medical group practices on prescription drug errors.
Kralewski, John E; Dowd, Bryan E; Heaton, Alan; Kaissi, Amer
2005-08-01
This project was designed to identify the magnitude of prescription drug errors in medical group practices and to explore the influence of the practice structure and culture on those error rates. Seventy-eight practices serving an upper Midwest managed care (Care Plus) plan during 2001 were included in the study. Using Care Plus claims data, prescription drug error rates were calculated at the enrollee level and then were aggregated to the group practice that each enrollee selected to provide and manage their care. Practice structure and culture data were obtained from surveys of the practices. Data were analyzed using multivariate regression. Both the culture and the structure of these group practices appear to influence prescription drug error rates. Seeing more patients per clinic hour, more prescriptions per patient, and being cared for in a rural clinic were all strongly associated with more errors. Conversely, having a case manager program is strongly related to fewer errors in all of our analyses. The culture of the practices clearly influences error rates, but the findings are mixed. Practices with cohesive cultures have lower error rates but, contrary to our hypothesis, cultures that value physician autonomy and individuality also have lower error rates than those with a more organizational orientation. Our study supports the contention that there are a substantial number of prescription drug errors in the ambulatory care sector. Even by the strictest definition, there were about 13 errors per 100 prescriptions for Care Plus patients in these group practices during 2001. Our study demonstrates that the structure of medical group practices influences prescription drug error rates. In some cases, this appears to be a direct relationship, such as the effects of having a case manager program on fewer drug errors, but in other cases the effect appears to be indirect through the improvement of drug prescribing practices. An important aspect of this study is that it provides insights into the relationships of the structure and culture of medical group practices and prescription drug errors and provides direction for future research. Research focused on the factors influencing the high error rates in rural areas and how the interaction of practice structural and cultural attributes influence error rates would add important insights into our findings. For medical practice directors, our data show that they should focus on patient care coordination to reduce errors.
Achieving High Reliability in Histology: An Improvement Series to Reduce Errors.
Heher, Yael K; Chen, Yigu; Pyatibrat, Sergey; Yoon, Edward; Goldsmith, Jeffrey D; Sands, Kenneth E
2016-11-01
Despite sweeping medical advances in other fields, histology processes have by and large remained constant over the past 175 years. Patient label identification errors are a known liability in the laboratory and can be devastating, resulting in incorrect diagnoses and inappropriate treatment. The objective of this study was to identify vulnerable steps in the histology workflow and reduce the frequency of labeling errors (LEs). In this 36-month study period, a numerical step key (SK) was developed to capture LEs. The two most prevalent root causes were targeted for Lean workflow redesign: manual slide printing and microtome cutting. The numbers and rates of LEs before and after interventions were compared to evaluate the effectiveness of interventions. Following the adoption of a barcode-enabled laboratory information system, the error rate decreased from a baseline of 1.03% (794 errors in 76,958 cases) to 0.28% (107 errors in 37,880 cases). After the implementation of an innovative ice tool box, allowing single-piece workflow for histology microtome cutting, the rate came down to 0.22% (119 errors in 54,342 cases). The study pointed out the importance of tracking and understanding LEs by using a simple numerical SK and quantified the effectiveness of two customized Lean interventions. Overall, a 78.64% reduction in LEs and a 35.28% reduction in time spent on rework have been observed since the study began. © American Society for Clinical Pathology, 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
Effectiveness of Toyota process redesign in reducing thyroid gland fine-needle aspiration error.
Raab, Stephen S; Grzybicki, Dana Marie; Sudilovsky, Daniel; Balassanian, Ronald; Janosky, Janine E; Vrbin, Colleen M
2006-10-01
Our objective was to determine whether the Toyota Production System process redesign resulted in diagnostic error reduction for patients who underwent cytologic evaluation of thyroid nodules. In this longitudinal, nonconcurrent cohort study, we compared the diagnostic error frequency of a thyroid aspiration service before and after implementation of error reduction initiatives consisting of adoption of a standardized diagnostic terminology scheme and an immediate interpretation service. A total of 2,424 patients underwent aspiration. Following terminology standardization, the false-negative rate decreased from 41.8% to 19.1% (P = .006), the specimen nondiagnostic rate increased from 5.8% to 19.8% (P < .001), and the sensitivity increased from 70.2% to 90.6% (P < .001). Cases with an immediate interpretation had a lower noninterpretable specimen rate than those without immediate interpretation (P < .001). Toyota process change led to significantly fewer diagnostic errors for patients who underwent thyroid fine-needle aspiration.
NASA Astrophysics Data System (ADS)
Blackford, Ethan B.; Estepp, Justin R.
2015-03-01
Non-contact, imaging photoplethysmography uses cameras to facilitate measurements including pulse rate, pulse rate variability, respiration rate, and blood perfusion by measuring characteristic changes in light absorption at the skin's surface resulting from changes in blood volume in the superficial microvasculature. Several factors may affect the accuracy of the physiological measurement including imager frame rate, resolution, compression, lighting conditions, image background, participant skin tone, and participant motion. Before this method can gain wider use outside basic research settings, its constraints and capabilities must be well understood. Recently, we presented a novel approach utilizing a synchronized, nine-camera, semicircular array backed by measurement of an electrocardiogram and fingertip reflectance photoplethysmogram. Twenty-five individuals participated in six, five-minute, controlled head motion artifact trials in front of a black and dynamic color backdrop. Increasing the input channel space for blind source separation using the camera array was effective in mitigating error from head motion artifact. Herein we present the effects of lower frame rates at 60 and 30 (reduced from 120) frames per second and reduced image resolution at 329x246 pixels (one-quarter of the original 658x492 pixel resolution) using bilinear and zero-order downsampling. This is the first time these factors have been examined for a multiple-imager array, and the results align well with previous findings utilizing a single imager. Examining windowed pulse rates, there is little observable difference in mean absolute error or error distributions resulting from reduced frame rates or image resolution, thus lowering requirements for systems measuring pulse rate over sufficiently long time windows.
Accurate Magnetometer/Gyroscope Attitudes Using a Filter with Correlated Sensor Noise
NASA Technical Reports Server (NTRS)
Sedlak, J.; Hashmall, J.
1997-01-01
Magnetometers and gyroscopes have been shown to provide very accurate attitudes for a variety of spacecraft. These results have been obtained, however, using a batch-least-squares algorithm and long periods of data. For use in onboard applications, attitudes are best determined using sequential estimators such as the Kalman filter. When a filter is used to determine attitudes using magnetometer and gyroscope data for input, the resulting accuracy is limited by both the sensor accuracies and errors inherent in the Earth magnetic field model. The Kalman filter accounts for the random component by modeling the magnetometer and gyroscope errors as white noise processes. However, even when these tuning parameters are physically realistic, the rate biases (included in the state vector) have been found to show systematic oscillations. These are attributed to the field model errors. If the gyroscope noise is sufficiently small, the tuned filter 'memory' will be long compared to the orbital period. In this case, the variations in the rate bias induced by field model errors are substantially reduced. Mistuning the filter to have a short memory time leads to strongly oscillating rate biases and increased attitude errors. To reduce the effect of the magnetic field model errors, these errors are estimated within the filter and used to correct the reference model. An exponentially-correlated noise model is used to represent the filter estimate of the systematic error. Results from several test cases using in-flight data from the Compton Gamma Ray Observatory are presented. These tests emphasize magnetometer errors, but the method is generally applicable to any sensor subject to a combination of random and systematic noise.
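The "exponentially-correlated noise model" referred to here is commonly written as a first-order Gauss-Markov process; one standard discrete-time form (a generic rendering, not necessarily the paper's exact notation) for a field-model error state b with correlation time tau and steady-state variance sigma^2 is

\[
b_{k+1} = e^{-\Delta t/\tau}\, b_k + w_k,
\qquad
\operatorname{E}\!\left[w_k^2\right] = \sigma^2\left(1 - e^{-2\Delta t/\tau}\right),
\]

which can be appended to the filter state vector so that the systematic component of the magnetic field error is estimated and removed alongside the attitude and the gyroscope rate bias.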
Tully, Mary P; Buchan, Iain E
2009-12-01
To investigate the prevalence of prescribing errors identified by pharmacists in hospital inpatients and the factors influencing error identification rates by pharmacists throughout hospital admission. 880-bed university teaching hospital in North-west England. Data about prescribing errors identified by pharmacists (median 9, range 4-17, pharmacists collecting data per day) when conducting routine work were prospectively recorded on 38 randomly selected days over 18 months. Proportion of new medication orders in which an error was identified; predictors of error identification rate, adjusted for workload and seniority of pharmacist, day of week, type of ward or stage of patient admission. 33,012 new medication orders were reviewed for 5,199 patients; 3,455 errors (in 10.5% of orders) were identified for 2,040 patients (39.2%; median 1, range 1-12). Most were problem orders (1,456, 42.1%) or potentially significant errors (1,748, 50.6%); 197 (5.7%) were potentially serious; 1.6% (n = 54) were potentially severe or fatal. Errors were 41% (CI: 28-56%) more likely to be identified at the patient's admission than at other times, independent of confounders. Workload was the strongest predictor of error identification rates, with 40% (33-46%) fewer errors identified on the busiest days than at other times. Errors identified fell by 1.9% (1.5-2.3%) for every additional chart checked, independent of confounders. Pharmacists routinely identify errors but increasing workload may reduce identification rates. Where resources are limited, they may be better spent on identifying and addressing errors immediately after admission to hospital.
Schroeder, Scott R; Salomon, Meghan M; Galanter, William L; Schiff, Gordon D; Vaida, Allen J; Gaunt, Michael J; Bryson, Michelle L; Rash, Christine; Falck, Suzanne; Lambert, Bruce L
2017-05-01
Drug name confusion is a common type of medication error and a persistent threat to patient safety. In the USA, roughly one per thousand prescriptions results in the wrong drug being filled, and most of these errors involve drug names that look or sound alike. Prior to approval, drug names undergo a variety of tests to assess their potential for confusability, but none of these preapproval tests has been shown to predict real-world error rates. We conducted a study to assess the association between error rates in laboratory-based tests of drug name memory and perception and real-world drug name confusion error rates. Eighty participants, comprising doctors, nurses, pharmacists, technicians and lay people, completed a battery of laboratory tests assessing visual perception, auditory perception and short-term memory of look-alike and sound-alike drug name pairs (eg, hydroxyzine/hydralazine). Laboratory test error rates (and other metrics) significantly predicted real-world error rates obtained from a large, outpatient pharmacy chain, with the best-fitting model accounting for 37% of the variance in real-world error rates. Cross-validation analyses confirmed these results, showing that the laboratory tests also predicted errors from a second pharmacy chain, with 45% of the variance being explained by the laboratory test data. Across two distinct pharmacy chains, there is a strong and significant association between drug name confusion error rates observed in the real world and those observed in laboratory-based tests of memory and perception. Regulators and drug companies seeking a validated preapproval method for identifying confusing drug names ought to consider using these simple tests. By using a standard battery of memory and perception tests, it should be possible to reduce the number of confusing look-alike and sound-alike drug name pairs that reach the market, which will help protect patients from potentially harmful medication errors. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
Vairy, Stephanie; Corny, Jennifer; Jamoulle, Olivier; Levy, Arielle; Lebel, Denis; Carceller, Ana
2017-12-01
A high rate of prescription errors exists in pediatric teaching hospitals, especially during initial training. To determine the effectiveness of a two-hour lecture by a pharmacist on rates of prescription errors and quality of prescriptions. A two-hour lecture led by a pharmacist was provided to 11 junior pediatric residents (PGY-1) as part of a one-month immersion program. A control group included 15 residents without the intervention. We reviewed charts to analyze the first 50 prescriptions of each resident. Data were collected from 1300 prescriptions involving 451 patients, 550 in the intervention group and 750 in the control group. The rate of prescription errors in the intervention group was 9.6% compared to 11.3% in the control group (p=0.32), affecting 106 patients. Statistically significant differences between both groups were prescriptions with unwritten doses (p=0.01) and errors involving overdosing (p=0.04). We identified many errors as well as issues surrounding quality of prescriptions. We found a 10.6% prescription error rate. This two-hour lecture seems insufficient to reduce prescription errors among junior pediatric residents. This study highlights the most frequent types of errors and prescription quality issues that should be targeted by future educational interventions.
Steward, Christine D.; Stocker, Sheila A.; Swenson, Jana M.; O’Hara, Caroline M.; Edwards, Jonathan R.; Gaynes, Robert P.; McGowan, John E.; Tenover, Fred C.
1999-01-01
Fluoroquinolone resistance appears to be increasing in many species of bacteria, particularly in those causing nosocomial infections. However, the accuracy of some antimicrobial susceptibility testing methods for detecting fluoroquinolone resistance remains uncertain. Therefore, we compared the accuracy of the results of agar dilution, disk diffusion, MicroScan Walk Away Neg Combo 15 conventional panels, and Vitek GNS-F7 cards to the accuracy of the results of the broth microdilution reference method for detection of ciprofloxacin and ofloxacin resistance in 195 clinical isolates of the family Enterobacteriaceae collected from six U.S. hospitals for a national surveillance project (Project ICARE [Intensive Care Antimicrobial Resistance Epidemiology]). For ciprofloxacin, very major error rates were 0% (disk diffusion and MicroScan), 0.9% (agar dilution), and 2.7% (Vitek), while major error rates ranged from 0% (agar dilution) to 3.7% (MicroScan and Vitek). Minor error rates ranged from 12.3% (agar dilution) to 20.5% (MicroScan). For ofloxacin, no very major errors were observed, and major errors were noted only with MicroScan (3.7% major error rate). Minor error rates ranged from 8.2% (agar dilution) to 18.5% (Vitek). Minor errors for all methods were substantially reduced when results with MICs within ±1 dilution of the broth microdilution reference MIC were excluded from analysis. However, the high number of minor errors by all test systems remains a concern. PMID:9986809
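The error categories reported here follow the usual convention for comparing a test method against the broth microdilution reference (a summary assuming standard definitions, which the abstract itself does not spell out): a very major error is false susceptibility, a major error is false resistance, and minor errors involve an intermediate result on one side.

    # Sketch of the assumed error-category definitions.
    def classify(reference: str, test: str) -> str:
        # reference/test are "S" (susceptible), "I" (intermediate) or "R" (resistant)
        if reference == test:
            return "agreement"
        if reference == "R" and test == "S":
            return "very major error"   # false susceptibility
        if reference == "S" and test == "R":
            return "major error"        # false resistance
        return "minor error"            # any discrepancy involving an intermediate result

    pairs = [("R", "S"), ("S", "R"), ("I", "S"), ("S", "S")]
    print([classify(r, t) for r, t in pairs])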
Research on the output bit error rate of 2DPSK signal based on stochastic resonance theory
NASA Astrophysics Data System (ADS)
Yan, Daqin; Wang, Fuzhong; Wang, Shuo
2017-12-01
Binary differential phase-shift keying (2DPSK) signals are mainly used for high-speed data transmission. However, the bit error rate of a digital signal receiver is high in poor channel environments. In view of this situation, a novel method based on stochastic resonance (SR) is proposed, which aims to reduce the bit error rate of 2DPSK signals received by coherent demodulation. According to the theory of SR, a nonlinear receiver model is established, which is used to receive 2DPSK signals under small signal-to-noise ratio (SNR) circumstances (between -15 dB and 5 dB), and compared with the conventional demodulation method. The experimental results demonstrate that when the input SNR is in the range of -15 dB to 5 dB, the output bit error rate of the nonlinear system model based on SR shows a significant decline compared to the conventional model; it is reduced by 86.15% when the input SNR equals -7 dB. Meanwhile, the peak value of the output signal spectrum is 4.25 times that of the conventional model. Consequently, the output signal of the system is more likely to be detected and the accuracy can be greatly improved.
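The SR mechanism itself can be illustrated with a toy overdamped double-well system driven by a weak periodic input plus noise (a sketch, not the paper's 2DPSK receiver model; all parameters are hypothetical): at an intermediate noise level the inter-well hopping synchronizes with the weak input and the output component at the driving frequency grows.

    # Overdamped bistable system dx/dt = a*x - b*x^3 + s(t) + noise, integrated with
    # Euler-Maruyama; the amplitude of the output at the driving frequency is read
    # from a single DFT bin.
    import numpy as np

    def sr_output(noise_std, a=1.0, b=1.0, dt=1e-2, n=100_000, f0=0.01, amp=0.3, seed=0):
        rng = np.random.default_rng(seed)
        t = np.arange(n) * dt
        s = amp * np.sin(2 * np.pi * f0 * t)            # weak, sub-threshold signal
        x = np.empty(n)
        x[0] = -1.0
        for k in range(n - 1):
            drift = a * x[k] - b * x[k] ** 3 + s[k]
            x[k + 1] = x[k] + drift * dt + noise_std * np.sqrt(dt) * rng.standard_normal()
        return np.abs(np.exp(-2j * np.pi * f0 * t) @ x) / n   # amplitude at f0

    for sigma in (0.1, 0.5, 1.0, 2.0):
        print(f"noise std {sigma}: output amplitude at f0 = {sr_output(sigma):.3f}")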
Refining Field Measurements of Methane Flux Rates from Abandoned Oil and Gas Wells
NASA Astrophysics Data System (ADS)
Lagron, C. S.; Kang, M.; Riqueros, N. S.; Jackson, R. B.
2015-12-01
Recent studies in Pennsylvania demonstrate the potential for significant methane emissions from abandoned oil and gas wells. A subset of tested wells was high emitting, with methane flux rates up to seven orders of magnitude greater than natural fluxes (up to 10^5 mg CH4/hour, or about 2.5 LPM). These wells contribute disproportionately to the total methane emissions from abandoned oil and gas wells. The principles guiding the chamber design have been developed for lower flux rates, typically found in natural environments, and chamber design modifications may reduce uncertainty in flux rates associated with high-emitting wells. Kang et al. estimate errors of a factor of two in measured values based on previous studies. We conduct controlled releases of methane to refine error estimates and improve chamber design with a focus on high-emitters. Controlled releases of methane are conducted at 0.05 LPM, 0.50 LPM, 1.0 LPM, 2.0 LPM, 3.0 LPM, and 5.0 LPM, and at two chamber dimensions typically used in field measurement studies of abandoned wells. As most sources of error tabulated by Kang et al. tend to bias the results toward underreporting of methane emissions, a flux-targeted chamber design modification can reduce error margins and/or provide grounds for a potential upward revision of emission estimates.
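The usual static-chamber calculation behind such flux estimates can be sketched as follows (hypothetical numbers, not the study's data or exact processing): a linear fit to the chamber methane concentration rise gives dC/dt, which the ideal-gas law converts to a mass flux for the enclosed well.

    # Static-chamber flux sketch: ppm rise rate -> mg CH4 per hour.
    import numpy as np

    t_min = np.array([0, 2, 4, 6, 8, 10], dtype=float)          # minutes after closure
    ch4_ppm = np.array([2.0, 45.0, 90.0, 133.0, 178.0, 221.0])  # hypothetical rise

    slope_ppm_per_min, _ = np.polyfit(t_min, ch4_ppm, 1)

    V_chamber_L = 30.0            # chamber volume (assumed)
    P_atm, T_K, R = 101_325.0, 288.0, 8.314
    M_ch4 = 16.04                 # g/mol

    mol_air = P_atm * (V_chamber_L / 1000.0) / (R * T_K)        # moles of gas enclosed
    mol_ch4_per_min = slope_ppm_per_min * 1e-6 * mol_air
    flux_mg_per_hr = mol_ch4_per_min * M_ch4 * 1000.0 * 60.0
    print(f"{flux_mg_per_hr:.0f} mg CH4/hour")

For a high-emitting well the concentration rises so fast that the linear window shrinks to seconds, which is one reason chamber dimensions and flow handling need to be rethought for those wells.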
Mahrooghy, Majid; Yarahmadian, Shantia; Menon, Vineetha; Rezania, Vahid; Tuszynski, Jack A
2015-10-01
Microtubules (MTs) are intra-cellular cylindrical protein filaments. They exhibit a unique phenomenon of stochastic growth and shrinkage, called dynamic instability. In this paper, we introduce a theoretical framework for applying Compressive Sensing (CS) to the sampled data of microtubule length during dynamic instability. To reduce data density and reconstruct the original signal at relatively low sampling rates, we have applied CS to experimental MT filament length time series modeled as a Dichotomous Markov Noise (DMN). The results show that using CS along with the wavelet transform significantly reduces the recovery errors compared with CS without the wavelet transform, especially at low and medium sampling rates. For sampling rates ranging from 0.2 to 0.5, the Root-Mean-Squared Error (RMSE) decreases by approximately a factor of 3, and between 0.5 and 1 the RMSE is small. We also apply a peak detection technique to the wavelet coefficients to detect and closely approximate the growth and shrinkage of MTs for computing the essential dynamic instability parameters, i.e., transition frequencies and especially growth and shrinkage rates. The results show that using compressive sensing along with the peak detection technique and the wavelet transform also reduces the recovery errors for these parameters. Copyright © 2015 Elsevier Ltd. All rights reserved.
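As a loose illustration of the recovery step (not the authors' pipeline; the signal, sampling pattern, and parameters are invented, and an orthonormal DCT stands in for the wavelet sparsifying basis), the sketch below reconstructs a subsampled time series by iterative soft thresholding (ISTA):

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)
n = 512
t = np.arange(n)
# Toy piecewise-linear trace standing in for a microtubule length time series
signal = np.where(t < 200, 0.02 * t, 4.0 - 0.03 * (t - 200))

ratio = 0.3                                   # sampling rate (fraction of points kept)
idx = np.sort(rng.choice(n, int(ratio * n), replace=False))
y = signal[idx]

# Forward operator: coefficients -> subsampled signal; adjoint: zero-fill then DCT
A = lambda c: idct(c, norm='ortho')[idx]
def At(r):
    z = np.zeros(n)
    z[idx] = r
    return dct(z, norm='ortho')

c = np.zeros(n)
lam = 0.05
for _ in range(300):                          # ISTA; step size 1 is valid since ||A|| <= 1
    c = c + At(y - A(c))
    c = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)

recon = idct(c, norm='ortho')
print("RMSE at 30% sampling:", np.sqrt(np.mean((recon - signal) ** 2)))
```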
Evaluation and optimization of sampling errors for the Monte Carlo Independent Column Approximation
NASA Astrophysics Data System (ADS)
Räisänen, Petri; Barker, Howard W.
2004-07-01
The Monte Carlo Independent Column Approximation (McICA) method for computing domain-average broadband radiative fluxes is unbiased with respect to the full ICA, but its flux estimates contain conditional random noise. McICA's sampling errors are evaluated here using a global climate model (GCM) dataset and a correlated-k distribution (CKD) radiation scheme. Two approaches to reduce McICA's sampling variance are discussed. The first is to simply restrict all of McICA's samples to cloudy regions. This avoids wasting precious few samples on essentially homogeneous clear skies. Clear-sky fluxes need to be computed separately for this approach, but this is usually done in GCMs for diagnostic purposes anyway. Second, accuracy can be improved by repeatedly sampling, and averaging, those CKD terms with large cloud radiative effects. Although this naturally increases computational costs over the standard CKD model, random errors for fluxes and heating rates are reduced by typically 50% to 60%, for the present radiation code, when the total number of samples is increased by 50%. When both variance reduction techniques are applied simultaneously, globally averaged flux and heating rate random errors are reduced by a factor of approximately 3.
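The toy sketch below illustrates the unbiased-but-noisy character of McICA sampling, with a random array standing in for per-subcolumn, per-g-point radiative transfer; it assumes a fully cloudy domain with equally likely subcolumns and is not the actual radiation code:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sub, n_g = 100, 30          # cloudy subcolumns and spectral g-points (made up)
# Toy "flux" for subcolumn j and g-point k, standing in for a radiative transfer call
flux = rng.gamma(shape=2.0, scale=10.0, size=(n_sub, n_g))

# Full ICA reference: average over every subcolumn for every g-point, then sum over g
ica = flux.mean(axis=0).sum()

def mcica_estimate():
    """McICA: each g-point sees a single randomly drawn subcolumn."""
    cols = rng.integers(0, n_sub, size=n_g)
    return flux[cols, np.arange(n_g)].sum()

samples = np.array([mcica_estimate() for _ in range(5000)])
print("ICA:", ica, " McICA mean:", samples.mean(), " sampling noise std:", samples.std())
```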
Hitti, Eveline; Tamim, Hani; Bakhti, Rinad; Zebian, Dina; Mufarrij, Afif
2017-01-01
Introduction Medication errors are common, with studies reporting at least one error per patient encounter. At hospital discharge, medication errors vary from 15%–38%. However, studies assessing the effect of an internally developed electronic (E)-prescription system at discharge from an emergency department (ED) are comparatively minimal. Additionally, commercially available electronic solutions are cost-prohibitive in many resource-limited settings. We assessed the impact of introducing an internally developed, low-cost E-prescription system, with a list of commonly prescribed medications, on prescription error rates at discharge from the ED, compared to handwritten prescriptions. Methods We conducted a pre- and post-intervention study comparing error rates in a randomly selected sample of discharge prescriptions (handwritten versus electronic) five months pre and four months post the introduction of the E-prescription. The internally developed, E-prescription system included a list of 166 commonly prescribed medications with the generic name, strength, dose, frequency and duration. We included a total of 2,883 prescriptions in this study: 1,475 in the pre-intervention phase were handwritten (HW) and 1,408 in the post-intervention phase were electronic. We calculated rates of 14 different errors and compared them between the pre- and post-intervention period. Results Overall, E-prescriptions included fewer prescription errors as compared to HW-prescriptions. Specifically, E-prescriptions reduced missing dose (11.3% to 4.3%, p <0.0001), missing frequency (3.5% to 2.2%, p=0.04), missing strength errors (32.4% to 10.2%, p <0.0001) and legibility (0.7% to 0.2%, p=0.005). E-prescriptions, however, were associated with a significant increase in duplication errors, specifically with home medication (1.7% to 3%, p=0.02). Conclusion A basic, internally developed E-prescription system, featuring commonly used medications, effectively reduced medication errors in a low-resource setting where the costs of sophisticated commercial electronic solutions are prohibitive. PMID:28874948
2011-01-01
Background The generation and analysis of high-throughput sequencing data are becoming a major component of many studies in molecular biology and medical research. Illumina's Genome Analyzer (GA) and HiSeq instruments are currently the most widely used sequencing devices. Here, we comprehensively evaluate properties of genomic HiSeq and GAIIx data derived from two plant genomes and one virus, with read lengths of 95 to 150 bases. Results We provide quantifications and evidence for GC bias, error rates, error sequence context, effects of quality filtering, and the reliability of quality values. By combining different filtering criteria we reduced error rates 7-fold at the expense of discarding 12.5% of alignable bases. While overall error rates are low in HiSeq data we observed regions of accumulated wrong base calls. Only 3% of all error positions accounted for 24.7% of all substitution errors. Analyzing the forward and reverse strands separately revealed error rates of up to 18.7%. Insertions and deletions occurred at very low rates on average but increased to up to 2% in homopolymers. A positive correlation between read coverage and GC content was found depending on the GC content range. Conclusions The errors and biases we report have implications for the use and the interpretation of Illumina sequencing data. GAIIx and HiSeq data sets show slightly different error profiles. Quality filtering is essential to minimize downstream analysis artifacts. Supporting previous recommendations, the strand-specificity provides a criterion to distinguish sequencing errors from low abundance polymorphisms. PMID:22067484
NASA Technical Reports Server (NTRS)
Olson, William S.; Kummerow, Christian D.; Yang, Song; Petty, Grant W.; Tao, Wei-Kuo; Bell, Thomas L.; Braun, Scott A.; Wang, Yansen; Lang, Stephen E.; Johnson, Daniel E.;
2006-01-01
A revised Bayesian algorithm for estimating surface rain rate, convective rain proportion, and latent heating profiles from satellite-borne passive microwave radiometer observations over ocean backgrounds is described. The algorithm searches a large database of cloud-radiative model simulations to find cloud profiles that are radiatively consistent with a given set of microwave radiance measurements. The properties of these radiatively consistent profiles are then composited to obtain best estimates of the observed properties. The revised algorithm is supported by an expanded and more physically consistent database of cloud-radiative model simulations. The algorithm also features a better quantification of the convective and nonconvective contributions to total rainfall, a new geographic database, and an improved representation of background radiances in rain-free regions. Bias and random error estimates are derived from applications of the algorithm to synthetic radiance data, based upon a subset of cloud-resolving model simulations, and from the Bayesian formulation itself. Synthetic rain-rate and latent heating estimates exhibit a trend of high (low) bias for low (high) retrieved values. The Bayesian estimates of random error are propagated to represent errors at coarser time and space resolutions, based upon applications of the algorithm to TRMM Microwave Imager (TMI) data. Errors in TMI instantaneous rain-rate estimates at 0.5° resolution range from approximately 50% at 1 mm/h to 20% at 14 mm/h. Errors in collocated spaceborne radar rain-rate estimates are roughly 50%-80% of the TMI errors at this resolution. The estimated algorithm random error in TMI rain rates at monthly, 2.5° resolution is relatively small (less than 6% at 5 mm day⁻¹) in comparison with the random error resulting from infrequent satellite temporal sampling (8%-35% at the same rain rate). Percentage errors resulting from sampling decrease with increasing rain rate, and sampling errors in latent heating rates follow the same trend. Averaging over 3 months reduces sampling errors in rain rates to 6%-15% at 5 mm day⁻¹, with proportionate reductions in latent heating sampling errors.
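A minimal sketch of the Bayesian compositing idea (made-up database, channel count, and error assumptions; not the actual retrieval database or observation operator): each simulated profile is weighted by its radiative consistency with the observed brightness temperatures, and the weighted mean gives the rain-rate estimate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical database of cloud-radiative simulations: brightness temps and rain rates
n_db, n_chan = 2000, 9
tb_db = rng.normal(250.0, 15.0, size=(n_db, n_chan))    # simulated brightness temperatures (K)
rain_db = rng.gamma(1.5, 2.0, size=n_db)                 # associated surface rain rates (mm/h)

obs_tb = tb_db[42] + rng.normal(0.0, 2.0, size=n_chan)   # a pseudo-observation
obs_err = 2.0                                            # assumed per-channel error std (K)

# Bayesian composite: weight each database entry by its radiative consistency
d2 = ((tb_db - obs_tb) ** 2).sum(axis=1) / obs_err ** 2
w = np.exp(-0.5 * d2)
rain_est = (w * rain_db).sum() / w.sum()
print("retrieved rain rate:", rain_est, "mm/h")
```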
ERIC Educational Resources Information Center
Alper, Jaclyn
2012-01-01
A total of 52 Wechsler Intelligence Scale for Children, Fourth Edition (WISC-IV) protocols, administered by graduate students were examined to obtain data on the type and frequency of examiner errors, the impact of errors on resultant test scores as well as improvement rate over the course of two years in training. Findings were consistent with…
Experimental investigation of false positive errors in auditory species occurrence surveys
Miller, David A.W.; Weir, Linda A.; McClintock, Brett T.; Grant, Evan H. Campbell; Bailey, Larissa L.; Simons, Theodore R.
2012-01-01
False positive errors are a significant component of many ecological data sets, which in combination with false negative errors, can lead to severe biases in conclusions about ecological systems. We present results of a field experiment where observers recorded observations for known combinations of electronically broadcast calling anurans under conditions mimicking field surveys to determine species occurrence. Our objectives were to characterize false positive error probabilities for auditory methods based on a large number of observers, to determine if targeted instruction could be used to reduce false positive error rates, and to establish useful predictors of among-observer and among-species differences in error rates. We recruited 31 observers, ranging in abilities from novice to expert, that recorded detections for 12 species during 180 calling trials (66,960 total observations). All observers made multiple false positive errors and on average 8.1% of recorded detections in the experiment were false positive errors. Additional instruction had only minor effects on error rates. After instruction, false positive error probabilities decreased by 16% for treatment individuals compared to controls with broad confidence interval overlap of 0 (95% CI: -46 to 30%). This coincided with an increase in false negative errors due to the treatment (26%; -3 to 61%). Differences among observers in false positive and in false negative error rates were best predicted by scores from an online test and a self-assessment of observer ability completed prior to the field experiment. In contrast, years of experience conducting call surveys was a weak predictor of error rates. False positive errors were also more common for species that were played more frequently, but were not related to the dominant spectral frequency of the call. Our results corroborate other work that demonstrates false positives are a significant component of species occurrence data collected by auditory methods. Instructing observers to only report detections they are completely certain are correct is not sufficient to eliminate errors. As a result, analytical methods that account for false positive errors will be needed, and independent testing of observer ability is a useful predictor for among-observer variation in observation error rates.
Particle Swarm Optimization approach to defect detection in armour ceramics.
Kesharaju, Manasa; Nagarajah, Romesh
2017-03-01
In this research, various extracted features were used in the development of an automated ultrasonic sensor based inspection system that enables defect classification in each ceramic component prior to despatch to the field. Classification is an important task, and the large number of irrelevant and redundant features commonly introduced into a dataset reduces the classifier's performance. Feature selection aims to reduce the dimensionality of the dataset while improving the performance of a classification system. In the context of a multi-criteria optimization problem (i.e. minimizing the classification error rate and reducing the number of features) such as the one discussed in this research, the literature suggests that evolutionary algorithms offer good results. Moreover, Particle Swarm Optimization (PSO) has not been explored in the field of classification of high-frequency ultrasonic signals. Hence, a binary coded Particle Swarm Optimization (BPSO) technique is investigated for feature subset selection and for optimizing the classification error rate. In the proposed method, the population data is used as input to an Artificial Neural Network (ANN) based classification system to obtain the error rate, with the ANN serving as the evaluator of the PSO fitness function. Copyright © 2016. Published by Elsevier B.V.
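The sketch below is a minimal, hypothetical illustration of binary-coded PSO feature selection with a neural-network classifier as the fitness evaluator; the dataset, swarm size, and hyperparameters are invented and will differ from the system described in the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=30, n_informative=8, random_state=0)

def fitness(mask):
    """Classification error of an ANN trained on the selected feature subset."""
    if mask.sum() == 0:
        return 1.0
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=300, random_state=0)
    acc = cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()
    return 1.0 - acc

n_particles, n_feat, iters = 10, X.shape[1], 10
w, c1, c2 = 0.7, 1.5, 1.5
pos = rng.integers(0, 2, size=(n_particles, n_feat))
vel = rng.normal(0, 0.1, size=(n_particles, n_feat))
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    prob = 1.0 / (1.0 + np.exp(-vel))                 # sigmoid transfer for binary PSO
    pos = (rng.random(pos.shape) < prob).astype(int)
    fit = np.array([fitness(p) for p in pos])
    improved = fit < pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmin()].copy()

print("selected features:", np.flatnonzero(gbest), "error rate:", pbest_fit.min())
```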
Influence of measurement error on Maxwell's demon
NASA Astrophysics Data System (ADS)
Sørdal, Vegard; Bergli, Joakim; Galperin, Y. M.
2017-06-01
In any general cycle of measurement, feedback, and erasure, the measurement will reduce the entropy of the system when information about the state is obtained, while erasure, according to Landauer's principle, is accompanied by a corresponding increase in entropy due to the compression of logical and physical phase space. The total process can in principle be fully reversible. A measurement error reduces the information obtained and the entropy decrease of the system. The erasure still gives the same increase in entropy, and the total process is irreversible. Another consequence of measurement error is that suboptimal feedback is applied, which further increases the entropy production if the proper protocol adapted to the expected error rate is not applied. We consider the effect of measurement error on a realistic single-electron box Szilard engine, and we find the optimal protocol for the cycle as a function of the desired power P and error ɛ.
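For reference, a standard information-thermodynamics bound (not necessarily the specific single-electron-box protocol optimized in the paper) caps the average extractable work per cycle at kT times the mutual information of the measurement; for a binary measurement with error probability ε this is kT[ln 2 + ε ln ε + (1 - ε) ln(1 - ε)]:

```python
import numpy as np

def w_max_over_kT(eps):
    """Mutual information (nats) of a binary symmetric measurement with error eps,
    which bounds the extractable work per cycle in units of kT."""
    eps = np.clip(eps, 1e-12, 1 - 1e-12)
    return np.log(2) + eps * np.log(eps) + (1 - eps) * np.log(1 - eps)

for eps in [0.0, 0.05, 0.1, 0.25, 0.5]:
    print(f"eps = {eps:.2f}: W_max = {w_max_over_kT(eps):.3f} kT")
```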
Automated drug dispensing system reduces medication errors in an intensive care setting.
Chapuis, Claire; Roustit, Matthieu; Bal, Gaëlle; Schwebel, Carole; Pansu, Pascal; David-Tchouda, Sandra; Foroni, Luc; Calop, Jean; Timsit, Jean-François; Allenet, Benoît; Bosson, Jean-Luc; Bedouch, Pierrick
2010-12-01
We aimed to assess the impact of an automated dispensing system on the incidence of medication errors related to picking, preparation, and administration of drugs in a medical intensive care unit. We also evaluated the clinical significance of such errors and user satisfaction. Preintervention and postintervention study involving a control and an intervention medical intensive care unit. Two medical intensive care units in the same department of a 2,000-bed university hospital. Adult medical intensive care patients. After a 2-month observation period, we implemented an automated dispensing system in one of the units (study unit) chosen randomly, with the other unit being the control. The overall error rate was expressed as a percentage of total opportunities for error. The severity of errors was classified according to National Coordinating Council for Medication Error Reporting and Prevention categories by an expert committee. User satisfaction was assessed through self-administered questionnaires completed by nurses. A total of 1,476 medications for 115 patients were observed. After automated dispensing system implementation, we observed a reduced percentage of total opportunities for error in the study compared to the control unit (13.5% and 18.6%, respectively; p<.05); however, no significant difference was observed before automated dispensing system implementation (20.4% and 19.3%, respectively; not significant). Before-and-after comparisons in the study unit also showed a significantly reduced percentage of total opportunities for error (20.4% and 13.5%; p<.01). An analysis of detailed opportunities for error showed a significant impact of the automated dispensing system in reducing preparation errors (p<.05). Most errors caused no harm (National Coordinating Council for Medication Error Reporting and Prevention category C). The automated dispensing system did not reduce errors causing harm. Finally, the mean for working conditions improved from 1.0±0.8 to 2.5±0.8 on the four-point Likert scale. The implementation of an automated dispensing system reduced overall medication errors related to picking, preparation, and administration of drugs in the intensive care unit. Furthermore, most nurses favored the new drug dispensation organization.
Technological Advancements and Error Rates in Radiation Therapy Delivery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Margalit, Danielle N., E-mail: dmargalit@partners.org; Harvard Cancer Consortium and Brigham and Women's Hospital/Dana Farber Cancer Institute, Boston, MA; Chen, Yu-Hui
2011-11-15
Purpose: Technological advances in radiation therapy (RT) delivery have the potential to reduce errors via increased automation and built-in quality assurance (QA) safeguards, yet may also introduce new types of errors. Intensity-modulated RT (IMRT) is an increasingly used technology that is more technically complex than three-dimensional (3D)-conformal RT and conventional RT. We determined the rate of reported errors in RT delivery among IMRT and 3D/conventional RT treatments and characterized the errors associated with the respective techniques to improve existing QA processes. Methods and Materials: All errors in external beam RT delivery were prospectively recorded via a nonpunitive error-reporting system at Brigham and Women's Hospital/Dana Farber Cancer Institute. Errors are defined as any unplanned deviation from the intended RT treatment and are reviewed during monthly departmental quality improvement meetings. We analyzed all reported errors since the routine use of IMRT in our department, from January 2004 to July 2009. Fisher's exact test was used to determine the association between treatment technique (IMRT vs. 3D/conventional) and specific error types. Effect estimates were computed using logistic regression. Results: There were 155 errors in RT delivery among 241,546 fractions (0.06%), and none were clinically significant. IMRT was commonly associated with errors in machine parameters (nine of 19 errors) and data entry and interpretation (six of 19 errors). IMRT was associated with a lower rate of reported errors compared with 3D/conventional RT (0.03% vs. 0.07%, p = 0.001) and specifically fewer accessory errors (odds ratio, 0.11; 95% confidence interval, 0.01-0.78) and setup errors (odds ratio, 0.24; 95% confidence interval, 0.08-0.79). Conclusions: The rate of errors in RT delivery is low. The types of errors differ significantly between IMRT and 3D/conventional RT, suggesting that QA processes must be uniquely adapted for each technique. There was a lower error rate with IMRT compared with 3D/conventional RT, highlighting the need for sustained vigilance against errors common to more traditional treatment techniques.
NASA Technical Reports Server (NTRS)
Ulvestad, J. S.
1989-01-01
Errors from a number of sources in astrometric very long baseline interferometry (VLBI) have been reduced in recent years through a variety of methods of calibration and modeling. Such reductions have led to a situation in which the extended structure of the natural radio sources used in VLBI is a significant error source in the effort to improve the accuracy of the radio reference frame. In the past, work has been done on individual radio sources to establish the magnitude of the errors caused by their particular structures. The results of calculations on 26 radio sources are reported in which an effort is made to determine the typical delay and delay-rate errors for a number of sources having different types of structure. It is found that for single observations of the types of radio sources present in astrometric catalogs, group-delay and phase-delay scatter in the 50 to 100 psec range due to source structure can be expected at 8.4 GHz on the intercontinental baselines available in the Deep Space Network (DSN). Delay-rate scatter of approx. 5 × 10⁻¹⁵ s s⁻¹ (or approx. 0.002 mm s⁻¹) is also expected. If such errors mapped directly into source position errors, they would correspond to position uncertainties of approx. 2 to 5 nrad, similar to the best position determinations in the current JPL VLBI catalog. With the advent of wider bandwidth VLBI systems on the large DSN antennas, the system noise will be low enough so that the structure-induced errors will be a significant part of the error budget. Several possibilities for reducing the structure errors are discussed briefly, although it is likely that considerable effort will have to be devoted to the structure problem in order to reduce the typical error by a factor of two or more.
Zucker, Jason; Mittal, Jaimie; Jen, Shin-Pung; Cheng, Lucy; Cennimo, David
2016-03-01
There is a high prevalence of HIV infection in Newark, New Jersey, with University Hospital admitting approximately 600 HIV-infected patients per year. Medication errors involving antiretroviral therapy (ART) could significantly affect treatment outcomes. The goal of this study was to evaluate the effectiveness of various stewardship interventions in reducing the prevalence of prescribing errors involving ART. This was a retrospective review of all inpatients receiving ART for HIV treatment during three distinct 6-month intervals over a 3-year period. During the first year, the baseline prevalence of medication errors was determined. During the second year, physician and pharmacist education was provided, and a computerized order entry system with drug information resources and prescribing recommendations was implemented. Prospective audit of ART orders with feedback was conducted in the third year. Analyses and comparisons were made across the three phases of this study. Of the 334 patients with HIV admitted in the first year, 45% had at least one antiretroviral medication error and 38% had uncorrected errors at the time of discharge. After education and computerized order entry, significant reductions in medication error rates were observed compared to baseline rates; 36% of 315 admissions had at least one error and 31% had uncorrected errors at discharge. While the prevalence of antiretroviral errors in year 3 was similar to that of year 2 (37% of 276 admissions), there was a significant decrease in the prevalence of uncorrected errors at discharge (12%) with the use of prospective review and intervention. Interventions, such as education and guideline development, can aid in reducing ART medication errors, but a committed stewardship program is necessary to elicit the greatest impact. © 2016 Pharmacotherapy Publications, Inc.
DNA/RNA transverse current sequencing: intrinsic structural noise from neighboring bases
Alvarez, Jose R.; Skachkov, Dmitry; Massey, Steven E.; Kalitsov, Alan; Velev, Julian P.
2015-01-01
Nanopore DNA sequencing via transverse current has emerged as a promising candidate for third-generation sequencing technology. It produces long read lengths which could alleviate problems with assembly errors inherent in current technologies. However, the high error rates of nanopore sequencing have to be addressed. A very important source of the error is the intrinsic noise in the current arising from carrier dispersion along the chain of the molecule, i.e., from the influence of neighboring bases. In this work we perform calculations of the transverse current within an effective multi-orbital tight-binding model derived from first-principles calculations of the DNA/RNA molecules, to study the effect of this structural noise on the error rates in DNA/RNA sequencing via transverse current in nanopores. We demonstrate that a statistical technique, utilizing not only the currents through the nucleotides but also the correlations in the currents, can in principle reduce the error rate below any desired precision. PMID:26150827
Error-rate prediction for programmable circuits: methodology, tools and studied cases
NASA Astrophysics Data System (ADS)
Velazco, Raoul
2013-05-01
This work presents an approach to predict the error rates due to Single Event Upsets (SEU) occurring in programmable circuits as a consequence of the impact of energetic particles present in the environment in which the circuits operate. For a chosen application, the error rate is predicted by combining the results obtained from radiation ground testing with the results of fault-injection campaigns performed off-beam, during which large numbers of SEUs are injected during the execution of the studied application. The goal of this strategy is to obtain accurate estimates of different applications' error rates without using particle accelerator facilities, thus significantly reducing the cost of the sensitivity evaluation. As a case study, this methodology was applied to a complex processor, the PowerPC 7448, executing a program derived from a real space application, and to a crypto-processor application implemented in an SRAM-based FPGA and accepted to be embedded in the payload of a NASA scientific satellite. The accuracy of the predicted error rates was confirmed by comparing, for the same circuit and application, the predictions with measurements from radiation ground testing performed at the Cyclone cyclotron of the Heavy Ion Facility (HIF) at Louvain-la-Neuve (Belgium).
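A rough sketch of the arithmetic behind this kind of prediction, with entirely hypothetical numbers (the paper's actual procedure and figures differ): the static cross-section measured under beam is scaled by the fraction of injected faults that corrupt the application, then multiplied by the particle flux of the target environment.

```python
# Hypothetical figures for illustration only.
sigma_static = 2.0e-8      # static SEU cross-section per bit (cm^2/bit), from beam testing
n_bits = 4.0e6             # number of sensitive bits used by the application
faults_injected = 100000   # SEUs injected off-beam during fault-injection campaigns
errors_observed = 1800     # injected faults that actually corrupted the application result

flux = 1.0e-3              # assumed orbital particle flux (particles cm^-2 s^-1)

# Application cross-section = static cross-section scaled by the error-producing fraction
sigma_app = sigma_static * n_bits * (errors_observed / faults_injected)
error_rate_per_day = sigma_app * flux * 86400.0
print(f"predicted application error rate: {error_rate_per_day:.2e} errors/day")
```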
Hansen, Heidi; Ben-David, Merav; McDonald, David B
2008-03-01
In noninvasive genetic sampling, when genotyping error rates are high and recapture rates are low, misidentification of individuals can lead to overestimation of population size. Thus, estimating genotyping errors is imperative. Nonetheless, conducting multiple polymerase chain reactions (PCRs) at multiple loci is time-consuming and costly. To address the controversy regarding the minimum number of PCRs required for obtaining a consensus genotype, we compared the performance of two genotyping protocols (multiple-tubes and 'comparative method') with respect to genotyping success and error rates. Our results from 48 faecal samples of river otters (Lontra canadensis) collected in Wyoming in 2003, and from blood samples of five captive river otters amplified with four different primers, suggest that use of the comparative genotyping protocol can minimize the number of PCRs per locus. For all but five samples at one locus, the same consensus genotypes were reached with fewer PCRs and with reduced error rates with this protocol compared to the multiple-tubes method. This finding is reassuring because genotyping errors can occur at relatively high rates even in tissues such as blood and hair. In addition, we found that loci that amplify readily and yield consensus genotypes may still exhibit high error rates (7-32%) and that amplification with different primers resulted in different types and rates of error. Thus, assigning a genotype based on a single PCR for several loci could result in misidentification of individuals. We recommend that programs designed to statistically assign consensus genotypes should be modified to allow the different treatment of heterozygotes and homozygotes intrinsic to the comparative method. © 2007 The Authors.
Stereotype threat can reduce older adults' memory errors.
Barber, Sarah J; Mather, Mara
2013-01-01
Stereotype threat often incurs the cost of reducing the amount of information that older adults accurately recall. In the current research, we tested whether stereotype threat can also benefit memory. According to the regulatory focus account of stereotype threat, threat induces a prevention focus in which people become concerned with avoiding errors of commission and are sensitive to the presence or absence of losses within their environment. Because of this, we predicted that stereotype threat might reduce older adults' memory errors. Results were consistent with this prediction. Older adults under stereotype threat had lower intrusion rates during free-recall tests (Experiments 1 and 2). They also reduced their false alarms and adopted more conservative response criteria during a recognition test (Experiment 2). Thus, stereotype threat can decrease older adults' false memories, albeit at the cost of fewer veridical memories, as well.
Error reporting in transfusion medicine at a tertiary care centre: a patient safety initiative.
Elhence, Priti; Shenoy, Veena; Verma, Anupam; Sachan, Deepti
2012-11-01
Errors in the transfusion process can compromise patient safety. A study was undertaken at our center to identify the errors in the transfusion process and their causes in order to reduce their occurrence by corrective and preventive actions. All near miss, no harm events and adverse events reported in the 'transfusion process' during 1 year study period were recorded, classified and analyzed at a tertiary care teaching hospital in North India. In total, 285 transfusion related events were reported during the study period. Of these, there were four adverse (1.5%), 10 no harm (3.5%) and 271 (95%) near miss events. Incorrect blood component transfusion rate was 1 in 6031 component units. ABO incompatible transfusion rate was one in 15,077 component units issued or one in 26,200 PRBC units issued and acute hemolytic transfusion reaction due to ABO incompatible transfusion was 1 in 60,309 component units issued. Fifty-three percent of the antecedent near miss events were bedside events. Patient sample handling errors were the single largest category of errors (n=94, 33%) followed by errors in labeling and blood component handling and storage in user areas. The actual and near miss event data obtained through this initiative provided us with clear evidence about latent defects and critical points in the transfusion process so that corrective and preventive actions could be taken to reduce errors and improve transfusion safety.
Computer calculated dose in paediatric prescribing.
Kirk, Richard C; Li-Meng Goh, Denise; Packia, Jeya; Min Kam, Huey; Ong, Benjamin K C
2005-01-01
Medication errors are an important cause of hospital-based morbidity and mortality. However, only a few medication error studies have been conducted in children. These have mainly quantified errors in the inpatient setting; there is very little data available on paediatric outpatient and emergency department medication errors and none on discharge medication. This deficiency is of concern because medication errors are more common in children and it has been suggested that the risk of an adverse drug event as a consequence of a medication error is higher in children than in adults. The aims of this study were to assess the rate of medication errors in predominantly ambulatory paediatric patients and the effect of computer calculated doses on medication error rates of two commonly prescribed drugs. This was a prospective cohort study performed in a paediatric unit in a university teaching hospital between March 2003 and August 2003. The hospital's existing computer clinical decision support system was modified so that doctors could choose the traditional prescription method or the enhanced method of computer calculated dose when prescribing paracetamol (acetaminophen) or promethazine. All prescriptions issued to children (<16 years of age) at the outpatient clinic, emergency department and at discharge from the inpatient service were analysed. A medication error was defined as to have occurred if there was an underdose (below the agreed value), an overdose (above the agreed value), no frequency of administration specified, no dose given or excessive total daily dose. The medication error rates and the factors influencing medication error rates were determined using SPSS version 12. From March to August 2003, 4281 prescriptions were issued. Seven prescriptions (0.16%) were excluded, hence 4274 prescriptions were analysed. Most prescriptions were issued by paediatricians (including neonatologists and paediatric surgeons) and/or junior doctors. The error rate in the children's emergency department was 15.7%, for outpatients was 21.5% and for discharge medication was 23.6%. Most errors were the result of an underdose (64%; 536/833). The computer calculated dose error rate was 12.6% compared with the traditional prescription error rate of 28.2%. Logistical regression analysis showed that computer calculated dose was an important and independent variable influencing the error rate (adjusted relative risk = 0.436, 95% CI 0.336, 0.520, p < 0.001). Other important independent variables were seniority and paediatric training of the person prescribing and the type of drug prescribed. Medication error, especially underdose, is common in outpatient, emergency department and discharge prescriptions. Computer calculated doses can significantly reduce errors, but other risk factors have to be concurrently addressed to achieve maximum benefit.
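As an illustration of what a computer calculated dose check involves (a hypothetical sketch with made-up limits for a fictional drug, not the hospital's decision-support rules or clinical guidance), a weight-based calculator can flag the underdose, overdose, missing-frequency, and excessive-daily-dose categories used in the study:

```python
# Hypothetical per-dose and daily limits for a fictional drug; not clinical guidance.
MG_PER_KG_LOW, MG_PER_KG_HIGH = 10.0, 15.0   # acceptable single-dose range (mg/kg)
MAX_DAILY_MG_PER_KG = 60.0                   # maximum total daily dose (mg/kg/day)

def check_prescription(weight_kg, dose_mg, doses_per_day):
    """Classify a prescription using the error categories described in the study."""
    errors = []
    if dose_mg < MG_PER_KG_LOW * weight_kg:
        errors.append("underdose")
    if dose_mg > MG_PER_KG_HIGH * weight_kg:
        errors.append("overdose")
    if doses_per_day is None:
        errors.append("no frequency specified")
    elif dose_mg * doses_per_day > MAX_DAILY_MG_PER_KG * weight_kg:
        errors.append("excessive total daily dose")
    return errors or ["ok"]

print(check_prescription(weight_kg=12.0, dose_mg=100.0, doses_per_day=4))   # underdose
print(check_prescription(weight_kg=12.0, dose_mg=180.0, doses_per_day=6))   # excessive daily dose
```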
Debiasing affective forecasting errors with targeted, but not representative, experience narratives.
Shaffer, Victoria A; Focella, Elizabeth S; Scherer, Laura D; Zikmund-Fisher, Brian J
2016-10-01
To determine whether representative experience narratives (describing a range of possible experiences) or targeted experience narratives (targeting the direction of forecasting bias) can reduce affective forecasting errors, or errors in predictions of experiences. In Study 1, participants (N=366) were surveyed about their experiences with 10 common medical events. Those who had never experienced the event provided ratings of predicted discomfort and those who had experienced the event provided ratings of actual discomfort. Participants making predictions were randomly assigned to either the representative experience narrative condition or the control condition in which they made predictions without reading narratives. In Study 2, participants (N=196) were again surveyed about their experiences with these 10 medical events, but participants making predictions were randomly assigned to either the targeted experience narrative condition or the control condition. Affective forecasting errors were observed in both studies. These forecasting errors were reduced with the use of targeted experience narratives (Study 2) but not representative experience narratives (Study 1). Targeted, but not representative, narratives improved the accuracy of predicted discomfort. Public collections of patient experiences should favor stories that target affective forecasting biases over stories representing the range of possible experiences. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Medication safety initiative in reducing medication errors.
Nguyen, Elisa E; Connolly, Phyllis M; Wong, Vivian
2010-01-01
The purpose of the study was to evaluate whether a Medication Pass Time Out initiative was effective and sustainable in reducing medication administration errors. A retrospective descriptive method was used for this research, where a structured Medication Pass Time Out program was implemented following staff and physician education. As a result, the rate of interruptions during the medication administration process decreased from 81% to 0. From the observations at baseline, 6 months, and 1 year after implementation, the percent of doses of medication administered without interruption improved from 81% to 99%. Medication doses administered without errors at baseline, 6 months, and 1 year improved from 98% to 100%.
Improving TCP Network Performance by Detecting and Reacting to Packet Reordering
NASA Technical Reports Server (NTRS)
Kruse, Hans; Ostermann, Shawn; Allman, Mark
2003-01-01
There are many factors governing the performance of TCP-based applications traversing satellite channels. The end-to-end performance of TCP is known to be degraded by the reordering, delay, noise and asymmetry inherent in geosynchronous systems. This result has been largely based on experiments that evaluate the performance of TCP in single-flow tests. While single-flow tests are useful for deriving information on the theoretical behavior of TCP and allow for easy diagnosis of problems, they do not represent a broad range of realistic situations and therefore cannot be used to authoritatively comment on performance issues. The experiments discussed in this report test TCP's performance in a more dynamic environment with competing traffic flows from hundreds of TCP connections running simultaneously across the satellite channel. Another aspect we investigate is TCP's reaction to bit errors on satellite channels. TCP interprets loss as a sign of network congestion. This causes TCP to reduce its transmission rate, leading to reduced performance when loss is due to corruption. We allowed the bit error rate on our satellite channel to vary widely and tested the performance of TCP as a function of these bit error rates. Our results show that the average performance of TCP on satellite channels is good even under conditions of loss as high as bit error rates of 10⁻⁵.
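To see why loss misread as congestion hurts throughput, the sketch below evaluates the widely cited Mathis approximation for steady-state TCP throughput, BW ≈ (MSS/RTT) x sqrt(3/(2p)), over a range of loss rates; the link parameters are illustrative (a geosynchronous hop with roughly 560 ms RTT) and the formula is an external rule of thumb rather than a result from this report:

```python
import math

MSS = 1460 * 8          # maximum segment size in bits
RTT = 0.56              # round-trip time over a geosynchronous hop (s), illustrative

def mathis_throughput_bps(loss_rate):
    """Steady-state TCP throughput estimate from the Mathis et al. approximation.
    loss_rate is the segment loss probability; random bit errors show up as lost segments."""
    return (MSS / RTT) * math.sqrt(1.5 / loss_rate)

for p in (1e-7, 1e-6, 1e-5, 1e-4):
    print(f"loss rate {p:.0e}: ~{mathis_throughput_bps(p) / 1e6:.2f} Mbit/s")
```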
Nya-Ngatchou, Jean-Jacques; Corl, Dawn; Onstad, Susan; Yin, Tom; Tylee, Tracy; Suhr, Louise; Thompson, Rachel E; Wisse, Brent E
2015-02-01
Hypoglycaemia is associated with morbidity and mortality in critically ill patients, and many hospitals have programmes to minimize hypoglycaemia rates. Recent studies have established the hypoglycaemic patient-day as a key metric and have published benchmark inpatient hypoglycaemia rates on the basis of point-of-care blood glucose data even though these values are prone to measurement errors. A retrospective, cohort study including all patients admitted to Harborview Medical Center Intensive Care Units (ICUs) during 2010 and 2011 was conducted to evaluate a quality improvement programme to reduce inappropriate documentation of point-of-care blood glucose measurement errors. Laboratory Medicine point-of-care blood glucose data and patient charts were reviewed to evaluate all episodes of hypoglycaemia. A quality improvement intervention decreased measurement errors from 31% of hypoglycaemic (<70 mg/dL) patient-days in 2010 to 14% in 2011 (p < 0.001) and decreased the observed hypoglycaemia rate from 4.3% of ICU patient-days to 3.4% (p < 0.001). Hypoglycaemic events were frequently recurrent or prolonged (~40%), and these events are not identified by the hypoglycaemic patient-day metric, which also may be confounded by a large number of very low risk or minimally monitored patient-days. Documentation of point-of-care blood glucose measurement errors likely overestimates ICU hypoglycaemia rates and can be reduced by a quality improvement effort. The currently used hypoglycaemic patient-day metric does not evaluate recurrent or prolonged events that may be more likely to cause patient harm. The monitored patient-day as currently defined may not be the optimal denominator to determine inpatient hypoglycaemic risk. Copyright © 2014 John Wiley & Sons, Ltd.
Implementing smart infusion pumps with dose-error reduction software: real-world experiences.
Heron, Claire
2017-04-27
Intravenous (IV) drug administration, especially with 'smart pumps', is complex and susceptible to errors. Although errors can occur at any stage of the IV medication process, most errors occur during reconstitution and administration. Dose-error reduction software (DERS) loaded onto infusion pumps incorporates a drug library with predefined upper and lower drug dose limits and infusion rates, which can reduce IV infusion errors. Although this is an important advance for patient safety at the point of care, uptake is still relatively low. This article discusses the challenges and benefits of implementing DERS in clinical practice as experienced by three UK trusts.
Morales-González, María Fernanda; Galiano Gálvez, María Alejandra
2017-09-08
Our institution implemented the use of pre-designed labeling of intravenous drugs and fluids, administration routes and infusion pumps to prevent medication errors. To evaluate the effectiveness of pre-designed labeling in reducing medication errors in the preparation and administration stages of prescribed medication in patients hospitalized with invasive lines, and to characterize medication errors. This is a pre/post intervention study. Pre-intervention group: doses administered invasively from July 1st to December 31st, 2014, using traditional labeling (handwritten adhesive paper notes). Post-intervention group: doses administered from January 1st to June 30th, 2015, using pre-designed labeling (labels with preset data fields, adhesive labels color-grouped by drug, and colored labels for invasive lines). Outcome: medication errors in hospitalized patients, as measured with a notification form and electronic records. Tabulation and analysis were performed in Stata 10, with descriptive statistics, hypothesis testing, and risk estimation with 95% confidence intervals. In the pre-intervention group, 5,819 doses of drugs were administered invasively in 634 patients, with an error rate of 1.4 per 1,000 administrations. The post-intervention group comprised 8,585 doses administered to 1,088 patients via similar routes, with an error rate of 0.3 per 1,000 (p = 0.034). Patients receiving medication through an invasive route without pre-designed labeling had 4.6 times the risk of medication error compared with those for whom pre-designed labels were used (95% CI: 1.25 to 25.4). The adult critically ill patient unit had the highest proportion of medication errors. The most frequent error was wrong dose administration, and 41.2% of errors produced harm to the patient. The use of pre-designed labeling on invasive lines reduces medication errors in the last two phases: preparation and administration.
Booth, Rachelle; Sturgess, Emma; Taberner-Stokes, Alison; Peters, Mark
2012-11-01
To establish the baseline prescribing error rate in a tertiary paediatric intensive care unit (PICU) and to determine the impact of a zero tolerance prescribing (ZTP) policy incorporating a dedicated prescribing area and daily feedback of prescribing errors. A prospective, non-blinded, observational study was undertaken in a 12-bed tertiary PICU over a period of 134 weeks. Baseline prescribing error data were collected on weekdays for all patients for a period of 32 weeks, following which the ZTP policy was introduced. Daily error feedback was introduced after a further 12 months. Errors were sub-classified as 'clinical', 'non-clinical' and 'infusion prescription' errors and the effects of interventions considered separately. The baseline combined prescribing error rate was 892 (95 % confidence interval (CI) 765-1,019) errors per 1,000 PICU occupied bed days (OBDs), comprising 25.6 % clinical, 44 % non-clinical and 30.4 % infusion prescription errors. The combined interventions of ZTP plus daily error feedback were associated with a reduction in the combined prescribing error rate to 447 (95 % CI 389-504) errors per 1,000 OBDs (p < 0.0001), an absolute risk reduction of 44.5 % (95 % CI 40.8-48.0 %). Introduction of the ZTP policy was associated with a significant decrease in clinical and infusion prescription errors, while the introduction of daily error feedback was associated with a significant reduction in non-clinical prescribing errors. The combined interventions of ZTP and daily error feedback were associated with a significant reduction in prescribing errors in the PICU, in line with Department of Health requirements of a 40 % reduction within 5 years.
Errors in laboratory medicine: practical lessons to improve patient safety.
Howanitz, Peter J
2005-10-01
Patient safety is influenced by the frequency and seriousness of errors that occur in the health care system. Error rates in laboratory practices are collected routinely for a variety of performance measures in all clinical pathology laboratories in the United States, but a list of critical performance measures has not yet been recommended. The most extensive databases describing error rates in pathology were developed and are maintained by the College of American Pathologists (CAP). These databases include the CAP's Q-Probes and Q-Tracks programs, which provide information on error rates from more than 130 interlaboratory studies. To define critical performance measures in laboratory medicine, describe error rates of these measures, and provide suggestions to decrease these errors, thereby ultimately improving patient safety. A review of experiences from Q-Probes and Q-Tracks studies supplemented with other studies cited in the literature. Q-Probes studies are carried out as time-limited studies lasting 1 to 4 months and have been conducted since 1989. In contrast, Q-Tracks investigations are ongoing studies performed on a yearly basis and have been conducted only since 1998. Participants from institutions throughout the world simultaneously conducted these studies according to specified scientific designs. The CAP has collected and summarized data for participants about these performance measures, including the significance of errors, the magnitude of error rates, tactics for error reduction, and willingness to implement each of these performance measures. A list of recommended performance measures, the frequency of errors when these performance measures were studied, and suggestions to improve patient safety by reducing these errors. Error rates for preanalytic and postanalytic performance measures were higher than for analytic measures. Eight performance measures were identified, including customer satisfaction, test turnaround times, patient identification, specimen acceptability, proficiency testing, critical value reporting, blood product wastage, and blood culture contamination. Error rate benchmarks for these performance measures were cited and recommendations for improving patient safety presented. Not only has each of the 8 performance measures proven practical, useful, and important for patient care, taken together, they also fulfill regulatory requirements. All laboratories should consider implementing these performance measures and standardizing their own scientific designs, data analysis, and error reduction strategies according to findings from these published studies.
ADEPT, a dynamic next generation sequencing data error-detection program with trimming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Shihai; Lo, Chien-Chi; Li, Po-E
Illumina is the most widely used next generation sequencing technology and produces millions of short reads that contain errors. These sequencing errors constitute a major problem in applications such as de novo genome assembly, metagenomics analysis and single nucleotide polymorphism discovery. In this study, we present ADEPT, a dynamic error detection method based on the quality scores of each nucleotide and its neighboring nucleotides, together with their positions within the read, compared to the position-specific quality score distribution of all bases within the sequencing run. This method greatly improves upon other available methods in terms of the true positive rate of error discovery without affecting the false positive rate, particularly within the middle of reads. We conclude that ADEPT is the only tool to date that dynamically assesses errors within reads by comparing position-specific and neighboring base quality scores with the distribution of quality scores for the dataset being analyzed. The result is a method that is less prone to position-dependent under-prediction, which is one of the most prominent issues in error prediction. The outcome is that ADEPT improves upon prior efforts in identifying true errors, primarily within the middle of reads, while reducing the false positive rate.
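The sketch below illustrates the general idea of comparing a base's quality, together with its neighbors', against the run-wide position-specific quality distribution; it is a hypothetical illustration of that comparison, not the actual ADEPT algorithm, scoring model, or thresholds:

```python
import numpy as np

rng = np.random.default_rng(0)
n_reads, read_len = 2000, 100
# Toy Phred quality scores for a run: quality drifts downward along the read
quals = np.clip(rng.normal(38 - 0.1 * np.arange(read_len), 4.0, size=(n_reads, read_len)), 2, 41)

# Run-wide, position-specific quality distribution
pos_mean = quals.mean(axis=0)
pos_std = quals.std(axis=0)

def flag_suspect_bases(read_quals, z_cut=2.5):
    """Flag positions whose quality, averaged with its two neighbors,
    falls far below the position-specific distribution for the run."""
    padded = np.pad(read_quals, 1, mode='edge')
    neighborhood = (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0
    z = (neighborhood - pos_mean) / pos_std
    return np.flatnonzero(z < -z_cut)

print("suspect positions in read 0:", flag_suspect_bases(quals[0]))
```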
Weaver, Amy L; Stutzman, Sonja E; Supnet, Charlene; Olson, DaiWai M
2016-03-01
The emergency department (ED) is demanding and high risk. The impact of sleep quantity has been hypothesized to impact patient care. This study investigated the hypothesis that fatigue and impaired mentation, due to sleep disturbance and shortened overall sleeping hours, would lead to increased nursing errors. This is a prospective observational study of 30 ED nurses using self-administered survey and sleep architecture measured by wrist actigraphy as predictors of self-reported error rates. An actigraphy device was worn prior to working a 12-hour shift and nurses completed the Pittsburgh Sleep Quality Index (PSQI). Error rates were reported on a visual analog scale at the end of a 12-hour shift. The PSQI responses indicated that 73.3% of subjects had poor sleep quality. Lower sleep quality measured by actigraphy (hours asleep/hours in bed) was associated with higher self-perceived minor errors. Sleep quantity (total hours slept) was not associated with minor, moderate, nor severe errors. Our study found that ED nurses' sleep quality, immediately prior to a working 12-hour shift, is more predictive of error than sleep quantity. These results present evidence that a "good night's sleep" prior to working a nursing shift in the ED is beneficial for reducing minor errors. Copyright © 2016 Elsevier Ltd. All rights reserved.
The effectiveness of risk management program on pediatric nurses' medication error.
Dehghan-Nayeri, Nahid; Bayat, Fariba; Salehi, Tahmineh; Faghihzadeh, Soghrat
2013-09-01
Medication therapy is one of the most complex and high-risk clinical processes that nurses deal with. Medication error is the most common type of error that causes harm and death to patients, especially pediatric ones. However, these errors are preventable. Identifying and preventing undesirable events leading to medication errors are the main risk management activities. The aim of this study was to investigate the effectiveness of a risk management program on the pediatric nurses' medication error rate. This study is a quasi-experimental one with a comparison group. In this study, 200 nurses were recruited from two main pediatric hospitals in Tehran. In the experimental hospital, we applied the risk management program for a period of 6 months. Nurses at the control hospital followed the routine hospital schedule. A pre- and post-test was performed to measure the frequency of medication error events. SPSS software, t-test, and regression analysis were used for data analysis. After the intervention, the medication error rate of nurses at the experimental hospital was significantly lower (P < 0.001) and the error-reporting rate was higher (P < 0.007) compared to before the intervention and also in comparison to the nurses of the control hospital. Based on the results of this study and taking into account the high-risk nature of the medical environment, applying quality-control programs such as risk management can effectively prevent undesirable hospital events. Nursing managers can reduce the medication error rate by applying risk management programs. However, this program cannot succeed without nurses' cooperation.
Commers, Tessa; Swindells, Susan; Sayles, Harlan; Gross, Alan E; Devetten, Marcel; Sandkovsky, Uriel
2014-01-01
Errors in prescribing antiretroviral therapy (ART) often occur with the hospitalization of HIV-infected patients. The rapid identification and prevention of errors may reduce patient harm and healthcare-associated costs. A retrospective review of hospitalized HIV-infected patients was carried out between 1 January 2009 and 31 December 2011. Errors were documented as omission, underdose, overdose, duplicate therapy, incorrect scheduling and/or incorrect therapy. The time to error correction was recorded. Relative risks (RRs) were computed to evaluate patient characteristics and error rates. A total of 289 medication errors were identified in 146/416 admissions (35%). The most common was drug omission (69%). At an error rate of 31%, nucleoside reverse transcriptase inhibitors were associated with an increased risk of error when compared with protease inhibitors (RR 1.32; 95% CI 1.04-1.69) and co-formulated drugs (RR 1.59; 95% CI 1.19-2.09). Of the errors, 31% were corrected within the first 24 h, but over half (55%) were never remedied. Admissions with an omission error were 7.4 times more likely to have all errors corrected within 24 h than were admissions without an omission. Drug interactions with ART were detected on 51 occasions. For the study population (n = 177), an increased risk of admission error was observed for black (43%) compared with white (28%) individuals (RR 1.53; 95% CI 1.16-2.03) but no significant differences were observed between white patients and other minorities or between men and women. Errors in inpatient ART were common, and the majority were never detected. The most common errors involved omission of medication, and nucleoside reverse transcriptase inhibitors had the highest rate of prescribing error. Interventions to prevent and correct errors are urgently needed.
Servo control booster system for minimizing following error
Wise, William L.
1985-01-01
A closed-loop feedback-controlled servo system is disclosed which reduces command-to-response error to the system's position feedback resolution least increment, ΔS_R, on a continuous real-time basis for all operating speeds. The servo system employs a second position feedback control loop on a by-exception basis, when the command-to-response error ≥ ΔS_R, to produce precise position correction signals. When the command-to-response error is less than ΔS_R, control automatically reverts to conventional control means as the second position feedback control loop is disconnected, becoming transparent to conventional servo control means. By operating the second unique position feedback control loop used herein at the appropriate clocking rate, command-to-response error may be reduced to the position feedback resolution least increment. The present system may be utilized in combination with a tachometer loop for increased stability.
Rotation Matrix Method Based on Ambiguity Function for GNSS Attitude Determination.
Yang, Yingdong; Mao, Xuchu; Tian, Weifeng
2016-06-08
Global navigation satellite systems (GNSS) are well suited for attitude determination. In this study, we use the rotation matrix method to resolve the attitude angle. This method achieves better performance in reducing computational complexity and selecting satellites. The baseline length condition is combined with the ambiguity function method (AFM) to search for the integer ambiguity, and it is validated to reduce the span of candidates. Noise error is always the key factor in the success rate, and it is closely related to the satellite geometry model. In contrast to the AFM, the LAMBDA (Least-squares AMBiguity Decorrelation Adjustment) method achieves better results in treating the relationship between the geometric model and the noise error. Although the AFM is more flexible, it lacks analysis in this respect. In this study, the influence of the satellite geometry model on the success rate is analyzed in detail. The computation error and the noise error are effectively treated. Not only is the flexibility of the AFM inherited, but the success rate is also increased. An experiment was conducted on a selected campus, and the performance proved effective. Our results are based on simulated and real-time GNSS data and apply to single-frequency processing, which is known as one of the most challenging cases of GNSS attitude determination.
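For readers unfamiliar with the ambiguity function method referred to above, the sketch below shows a simplified single-epoch, single-baseline form: the ambiguity function is evaluated over candidate baseline orientations of known length, and its maximiser approximates the attitude. This is an illustration only (it is not the paper's rotation matrix method), and the wavelength, baseline length, and synthetic data are assumptions.

```python
import numpy as np

LAMBDA = 0.1903      # GPS L1 carrier wavelength (m)
BASELINE_LEN = 2.0   # assumed known baseline length (m)

def ambiguity_function(dphi_cycles, los_units, baseline):
    """Single-epoch ambiguity function value for a candidate baseline vector.
    dphi_cycles : measured single-difference carrier phases (fractional cycles)
    los_units   : unit line-of-sight vectors to each satellite, shape (m, 3)
    """
    predicted = los_units @ baseline / LAMBDA
    return np.mean(np.cos(2 * np.pi * (dphi_cycles - predicted)))

def search_attitude(dphi_cycles, los_units, step_deg=0.5):
    """Grid search over heading/pitch; the maximiser approximates the attitude."""
    best = (-np.inf, None)
    for yaw in np.arange(0.0, 360.0, step_deg):
        for pitch in np.arange(-30.0, 30.0, step_deg):
            y, p = np.radians(yaw), np.radians(pitch)
            b = BASELINE_LEN * np.array([np.cos(p) * np.cos(y),
                                         np.cos(p) * np.sin(y),
                                         np.sin(p)])
            val = ambiguity_function(dphi_cycles, los_units, b)
            if val > best[0]:
                best = (val, (yaw, pitch))
    return best

# Synthetic check: 8 satellites, a true baseline at yaw 40 deg, pitch 5 deg
rng = np.random.default_rng(0)
los = rng.normal(size=(8, 3))
los /= np.linalg.norm(los, axis=1, keepdims=True)
y, p = np.radians(40.0), np.radians(5.0)
b_true = BASELINE_LEN * np.array([np.cos(p) * np.cos(y), np.cos(p) * np.sin(y), np.sin(p)])
dphi = (los @ b_true / LAMBDA + rng.normal(0, 0.02, 8)) % 1.0
print(search_attitude(dphi, los))
```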
NASA Astrophysics Data System (ADS)
Gourdji, S. M.; Yadav, V.; Karion, A.; Mueller, K. L.; Conley, S.; Ryerson, T.; Nehrkorn, T.; Kort, E. A.
2018-04-01
Urban greenhouse gas (GHG) flux estimation with atmospheric measurements and modeling, i.e. the ‘top-down’ approach, can potentially support GHG emission reduction policies by assessing trends in surface fluxes and detecting anomalies from bottom-up inventories. Aircraft-collected GHG observations also have the potential to help quantify point-source emissions that may not be adequately sampled by fixed surface tower-based atmospheric observing systems. Here, we estimate CH4 emissions from a known point source, the Aliso Canyon natural gas leak in Los Angeles, CA from October 2015–February 2016, using atmospheric inverse models with airborne CH4 observations from twelve flights ≈4 km downwind of the leak and surface sensitivities from a mesoscale atmospheric transport model. This leak event has been well-quantified previously using various methods by the California Air Resources Board, thereby providing high confidence in the mass-balance leak rate estimates of (Conley et al 2016), used here for comparison to inversion results. Inversions with an optimal setup are shown to provide estimates of the leak magnitude, on average, within a third of the mass balance values, with remaining errors in estimated leak rates predominantly explained by modeled wind speed errors of up to 10 m s‑1, quantified by comparing airborne meteorological observations with modeled values along the flight track. An inversion setup using scaled observational wind speed errors in the model-data mismatch covariance matrix is shown to significantly reduce the influence of transport model errors on spatial patterns and estimated leak rates from the inversions. In sum, this study takes advantage of a natural tracer release experiment (i.e. the Aliso Canyon natural gas leak) to identify effective approaches for reducing the influence of transport model error on atmospheric inversions of point-source emissions, while suggesting future potential for integrating surface tower and aircraft atmospheric GHG observations in top-down urban emission monitoring systems.
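To make the inversion setup described above concrete, here is a minimal sketch of a batch linear inversion in which the model-data mismatch covariance R is either fixed or scaled by a per-observation wind-speed error, showing how down-weighting transport-error-affected flights changes the posterior leak estimate. All matrices, units, and numbers are hypothetical illustrations, not the study's configuration.

```python
import numpy as np

def batch_inversion(H, z, s_prior, Q, R):
    """Posterior mean flux estimate for the linear model z = H s + e,
    with e ~ N(0, R) and prior s ~ N(s_prior, Q)."""
    gain = Q @ H.T @ np.linalg.inv(H @ Q @ H.T + R)
    return s_prior + gain @ (z - H @ s_prior)

# Hypothetical example: one unknown leak rate, 12 flight-mean observations
rng = np.random.default_rng(0)
true_leak = 50.0                         # illustrative flux units
H = rng.uniform(0.5, 1.5, (12, 1))       # transport sensitivities
wind_err = rng.uniform(1.0, 10.0, 12)    # observed-minus-modelled wind speed error (m/s)
z = H[:, 0] * true_leak + rng.normal(0, 2.0 * wind_err)   # noisier obs when winds are wrong
s_prior = np.array([20.0])
Q = np.array([[100.0 ** 2]])

# Fixed mismatch variance vs. variance scaled by the per-flight wind-speed error
R_fixed = np.diag(np.full(12, 10.0 ** 2))
R_scaled = np.diag((2.0 * wind_err) ** 2)
print("fixed R :", batch_inversion(H, z, s_prior, Q, R_fixed))
print("scaled R:", batch_inversion(H, z, s_prior, Q, R_scaled))
```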
Bohil, Corey J; Higgins, Nicholas A; Keebler, Joseph R
2014-01-01
We compared methods for predicting and understanding the source of confusion errors during military vehicle identification training. Participants completed training to identify main battle tanks. They also completed card-sorting and similarity-rating tasks to express their mental representation of resemblance across the set of training items. We expected participants to selectively attend to a subset of vehicle features during these tasks, and we hypothesised that we could predict identification confusion errors based on the outcomes of the card-sort and similarity-rating tasks. Based on card-sorting results, we were able to predict about 45% of observed identification confusions. Based on multidimensional scaling of the similarity-rating data, we could predict more than 80% of identification confusions. These methods also enabled us to infer the dimensions receiving significant attention from each participant. This understanding of mental representation may be crucial in creating personalised training that directs attention to features that are critical for accurate identification. Participants completed military vehicle identification training and testing, along with card-sorting and similarity-rating tasks. The data enabled us to predict up to 84% of identification confusion errors and to understand the mental representation underlying these errors. These methods have potential to improve training and reduce identification errors leading to fratricide.
Stereotype threat can reduce older adults' memory errors
Barber, Sarah J.; Mather, Mara
2014-01-01
Stereotype threat often incurs the cost of reducing the amount of information that older adults accurately recall. In the current research we tested whether stereotype threat can also benefit memory. According to the regulatory focus account of stereotype threat, threat induces a prevention focus in which people become concerned with avoiding errors of commission and are sensitive to the presence or absence of losses within their environment (Seibt & Förster, 2004). Because of this, we predicted that stereotype threat might reduce older adults' memory errors. Results were consistent with this prediction. Older adults under stereotype threat had lower intrusion rates during free-recall tests (Experiments 1 & 2). They also reduced their false alarms and adopted more conservative response criteria during a recognition test (Experiment 2). Thus, stereotype threat can decrease older adults' false memories, albeit at the cost of fewer veridical memories, as well. PMID:24131297
Comparisons of single event vulnerability of GaAs SRAMS
NASA Astrophysics Data System (ADS)
Weatherford, T. R.; Hauser, J. R.; Diehl, S. E.
1986-12-01
A GaAs MESFET/JFET model incorporated into SPICE has been used to accurately describe C-EJFET, E/D MESFET and D MESFET/resistor GaAs memory technologies. These cells have been evaluated for critical charges due to gate-to-drain and drain-to-source charge collection. Low gate-to-drain critical charges limit conventional GaAs SRAM soft error rates to approximately 1E-6 errors/bit-day. SEU hardening approaches including decoupling resistors, diodes, and FETs have been investigated. Results predict GaAs RAM cell critical charges can be increased to over 0.1 pC. Soft error rates in such hardened memories may approach 1E-7 errors/bit-day without significantly reducing memory speed. Tradeoffs between hardening level, performance and fabrication complexity are discussed.
Paediatric in-patient prescribing errors in Malaysia: a cross-sectional multicentre study.
Khoo, Teik Beng; Tan, Jing Wen; Ng, Hoong Phak; Choo, Chong Ming; Bt Abdul Shukor, Intan Nor Chahaya; Teh, Siao Hean
2017-06-01
Background There is a lack of large comprehensive studies in developing countries on paediatric in-patient prescribing errors in different settings. Objectives To determine the characteristics of in-patient prescribing errors among paediatric patients. Setting General paediatric wards, neonatal intensive care units and paediatric intensive care units in government hospitals in Malaysia. Methods This is a cross-sectional multicentre study involving 17 participating hospitals. Drug charts were reviewed in each ward to identify the prescribing errors. All prescribing errors identified were further assessed for their potential clinical consequences, likely causes and contributing factors. Main outcome measures Incidence, types, potential clinical consequences, causes and contributing factors of the prescribing errors. Results The overall prescribing error rate was 9.2% among 17,889 prescribed medications. There was no significant difference in the prescribing error rates between different types of hospitals or wards. The use of electronic prescribing had a higher prescribing error rate than manual prescribing (16.9 vs 8.2%, p < 0.05). Twenty-eight (1.7%) prescribing errors were deemed to have serious potential clinical consequences and 2 (0.1%) were judged to be potentially fatal. Most of the errors were attributed to human factors, i.e. performance or knowledge deficits. The most common contributing factors were lack of supervision and lack of knowledge. Conclusions Although electronic prescribing may potentially improve safety, it may conversely cause prescribing errors due to suboptimal interfaces and cumbersome work processes. Junior doctors need specific training in paediatric prescribing and close supervision to reduce prescribing errors in paediatric in-patients.
Application of a Noise Adaptive Contrast Sensitivity Function to Image Data Compression
NASA Astrophysics Data System (ADS)
Daly, Scott J.
1989-08-01
The visual contrast sensitivity function (CSF) has found increasing use in image compression as new algorithms optimize the display-observer interface in order to reduce the bit rate and increase the perceived image quality. In most compression algorithms, increasing the quantization intervals reduces the bit rate at the expense of introducing more quantization error, a potential image quality degradation. The CSF can be used to distribute this error as a function of spatial frequency such that it is undetectable by the human observer. Thus, instead of being mathematically lossless, the compression algorithm can be designed to be visually lossless, with the advantage of a significantly reduced bit rate. However, the CSF is strongly affected by image noise, changing in both shape and peak sensitivity. This work describes a model of the CSF that includes these changes as a function of image noise level by using the concepts of internal visual noise, and tests this model in the context of image compression with an observer study.
Westbrook, Johanna I; Raban, Magdalena Z; Walter, Scott R; Douglas, Heather
2018-01-09
Interruptions and multitasking have been demonstrated in experimental studies to reduce individuals' task performance. These behaviours are frequently used by clinicians in high-workload, dynamic clinical environments, yet their effects have rarely been studied. To assess the relative contributions of interruptions and multitasking by emergency physicians to prescribing errors. 36 emergency physicians were shadowed over 120 hours. All tasks, interruptions and instances of multitasking were recorded. Physicians' working memory capacity (WMC) and preference for multitasking were assessed using the Operation Span Task (OSPAN) and Inventory of Polychronic Values. Following observation, physicians were asked about their sleep in the previous 24 hours. Prescribing errors were used as a measure of task performance. We performed multivariate analysis of prescribing error rates to determine associations with interruptions and multitasking, also considering physician seniority, age, psychometric measures, workload and sleep. Physicians experienced 7.9 interruptions/hour. 28 clinicians were observed prescribing 239 medication orders which contained 208 prescribing errors. While prescribing, clinicians were interrupted 9.4 times/hour. Error rates increased significantly if physicians were interrupted (rate ratio (RR) 2.82; 95% CI 1.23 to 6.49) or multitasked (RR 1.86; 95% CI 1.35 to 2.56) while prescribing. Having below-average sleep showed a >15-fold increase in clinical error rate (RR 16.44; 95% CI 4.84 to 55.81). WMC was protective against errors; for every 10-point increase on the 75-point OSPAN, a 19% decrease in prescribing errors was observed. There was no effect of polychronicity, workload, physician gender or above-average sleep on error rates. Interruptions, multitasking and poor sleep were associated with significantly increased rates of prescribing errors among emergency physicians. WMC mitigated the negative influence of these factors to an extent. These results confirm experimental findings in other fields and raise questions about the acceptability of the high rates of multitasking and interruption in clinical environments. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
NASA Astrophysics Data System (ADS)
Bechet, P.; Mitran, R.; Munteanu, M.
2013-08-01
Non-contact methods for the assessment of vital signs are of great interest to specialists because of the benefits obtained in both medical and special applications, such as surveillance, monitoring, and search and rescue. This paper investigates the possibility of implementing a digital processing algorithm based on MUSIC (Multiple Signal Classification) parametric spectral estimation in order to reduce the observation time needed to accurately measure the heart rate. It demonstrates that, by properly dimensioning the signal subspace, the MUSIC algorithm can be optimized to accurately assess the heart rate over an 8-28 s observation interval. The performance of the processing algorithm was validated by minimizing the mean error of the heart rate in simultaneous comparative measurements on several subjects. To calculate the error, the reference heart rate was measured using a classic contact-based measurement system.
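As a minimal sketch of how a MUSIC pseudospectrum can locate a heart-rate line in a short record, the code below estimates a correlation matrix, splits it into signal and noise subspaces, and scans candidate frequencies. The sampling rate, model order, and synthetic signal are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from scipy.linalg import eigh, toeplitz

def music_spectrum(x, fs, signal_dim=2, order=40, freqs=None):
    """MUSIC pseudospectrum of a real 1-D signal x sampled at fs (Hz)."""
    x = x - np.mean(x)
    # Autocorrelation estimate and Toeplitz correlation matrix
    r = np.correlate(x, x, mode="full")[len(x) - 1: len(x) - 1 + order] / len(x)
    R = toeplitz(r)
    w, v = eigh(R)                          # eigenvalues in ascending order
    noise_sub = v[:, : order - signal_dim]  # noise subspace
    if freqs is None:
        freqs = np.linspace(0.5, 3.0, 500)  # 30-180 beats per minute
    n = np.arange(order)
    p = []
    for f in freqs:
        a = np.exp(-2j * np.pi * f / fs * n)  # steering vector
        p.append(1.0 / np.linalg.norm(noise_sub.conj().T @ a) ** 2)
    return freqs, np.asarray(p)

# Illustrative use: 20 s record at 100 Hz with a 1.2 Hz (72 bpm) component plus noise
fs = 100.0
t = np.arange(0, 20, 1 / fs)
x = np.sin(2 * np.pi * 1.2 * t) + 0.5 * np.random.randn(t.size)
freqs, p = music_spectrum(x, fs)
print("estimated heart rate: %.1f bpm" % (freqs[np.argmax(p)] * 60))
```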
A hybrid method for synthetic aperture ladar phase-error compensation
NASA Astrophysics Data System (ADS)
Hua, Zhili; Li, Hongping; Gu, Yongjian
2009-07-01
Synthetic aperture ladar is a high-resolution imaging sensor whose data contain phase errors, with sources including uncompensated platform motion and atmospheric turbulence distortion. Two previously devised methods, the rank-one phase-error estimation (ROPE) algorithm and iterative blind deconvolution (IBD), are re-examined, and a hybrid method is built from them that can recover both the images and the PSFs without any a priori information on the PSF, speeding up the convergence rate through the choice of initialization. When integrated into the spotlight-mode SAL imaging model, all three methods can effectively reduce the phase-error distortion. For each approach, the signal-to-noise ratio, root mean square error and CPU time are computed, which show that the convergence rate of the hybrid method can be improved because of a more efficient initialization of the blind deconvolution. Moreover, a further discussion of the hybrid method finds the weight distribution between ROPE and IBD to be an important factor affecting the final result of the whole compensation process.
Syntactic and semantic errors in radiology reports associated with speech recognition software.
Ringler, Michael D; Goss, Brian C; Bartholmai, Brian J
2017-03-01
Speech recognition software can increase the frequency of errors in radiology reports, which may affect patient care. We retrieved 213,977 speech recognition software-generated reports from 147 different radiologists and proofread them for errors. Errors were classified as "material" if they were believed to alter interpretation of the report. "Immaterial" errors were subclassified as intrusion/omission or spelling errors. The proportion of errors and error type were compared among individual radiologists, imaging subspecialty, and time periods. In all, 20,759 reports (9.7%) contained errors, of which 3992 (1.9%) were material errors. Among immaterial errors, spelling errors were more common than intrusion/omission errors (p < .001). Proportion of errors and fraction of material errors varied significantly among radiologists and between imaging subspecialties (p < .001). Errors were more common in cross-sectional reports, reports reinterpreting results of outside examinations, and procedural studies (all p < .001). Error rate decreased over time (p < .001), which suggests that a quality control program with regular feedback may reduce errors.
NASA Technical Reports Server (NTRS)
Balla, R. Jeffrey; Miller, Corey A.
2008-01-01
This study seeks a numerical algorithm which optimizes frequency precision for the damped sinusoids generated by the nonresonant LITA technique. It compares computed frequencies, frequency errors, and fit errors obtained using five primary signal analysis methods. Using variations on different algorithms within each primary method, results from 73 fits are presented. Best results are obtained using an AutoRegressive method. Compared to previous results using Prony's method, single shot waveform frequencies are reduced approx. 0.4% and frequency errors are reduced by a factor of approx. 20 at 303 K to approx. 0.1%. We explore the advantages of high waveform sample rates and potential for measurements in low density gases.
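To show the flavour of an autoregressive frequency estimate for a damped sinusoid, the sketch below fits an AR(2) model by least squares and recovers the frequency from the pole angle. It is an illustration only, not the study's algorithm; the sample rate, decay constant, and noise level are assumed values.

```python
import numpy as np

def ar2_frequency(x, fs):
    """Estimate the frequency of a damped sinusoid from an AR(2) least-squares fit."""
    # Model: x[n] = a1*x[n-1] + a2*x[n-2] + e[n]
    X = np.column_stack([x[1:-1], x[:-2]])
    y = x[2:]
    a1, a2 = np.linalg.lstsq(X, y, rcond=None)[0]
    poles = np.roots([1.0, -a1, -a2])          # roots of z^2 - a1*z - a2
    return np.abs(np.angle(poles[0])) * fs / (2 * np.pi)

# Illustrative LITA-like waveform: damped sinusoid in noise (hypothetical parameters)
fs = 50e6                          # 50 MHz sample rate
t = np.arange(512) / fs
true_f, tau = 2.0e6, 2e-6          # 2 MHz signal, 2 microsecond decay
x = np.exp(-t / tau) * np.sin(2 * np.pi * true_f * t) + 0.05 * np.random.randn(t.size)
print("estimated frequency: %.3f MHz" % (ar2_frequency(x, fs) / 1e6))
```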
NASA Technical Reports Server (NTRS)
Borgia, Andrea; Spera, Frank J.
1990-01-01
This work discusses the propagation of errors for the recovery of the shear rate from wide-gap concentric cylinder viscometric measurements of non-Newtonian fluids. A least-square regression of stress on angular velocity data to a system of arbitrary functions is used to propagate the errors for the series solution to the viscometric flow developed by Krieger and Elrod (1953) and Pawlowski (1953) ('power-law' approximation) and for the first term of the series developed by Krieger (1968). A numerical experiment shows that, for measurements affected by significant errors, the first term of the Krieger-Elrod-Pawlowski series ('infinite radius' approximation) and the power-law approximation may recover the shear rate with equal accuracy as the full Krieger-Elrod-Pawlowski solution. An experiment on a clay slurry indicates that the clay has a larger yield stress at rest than during shearing, and that, for the range of shear rates investigated, a four-parameter constitutive equation approximates reasonably well its rheology. The error analysis presented is useful for studying the rheology of fluids such as particle suspensions, slurries, foams, and magma.
Outpatient Prescribing Errors and the Impact of Computerized Prescribing
Gandhi, Tejal K; Weingart, Saul N; Seger, Andrew C; Borus, Joshua; Burdick, Elisabeth; Poon, Eric G; Leape, Lucian L; Bates, David W
2005-01-01
Background Medication errors are common among inpatients and many are preventable with computerized prescribing. Relatively little is known about outpatient prescribing errors or the impact of computerized prescribing in this setting. Objective To assess the rates, types, and severity of outpatient prescribing errors and understand the potential impact of computerized prescribing. Design Prospective cohort study in 4 adult primary care practices in Boston using prescription review, patient survey, and chart review to identify medication errors, potential adverse drug events (ADEs) and preventable ADEs. Participants Outpatients over age 18 who received a prescription from 24 participating physicians. Results We screened 1879 prescriptions from 1202 patients, and completed 661 surveys (response rate 55%). Of the prescriptions, 143 (7.6%; 95% confidence interval (CI) 6.4% to 8.8%) contained a prescribing error. Three errors led to preventable ADEs and 62 (43%; 3% of all prescriptions) had potential for patient injury (potential ADEs); 1 was potentially life-threatening (2%) and 15 were serious (24%). Errors in frequency (n=77, 54%) and dose (n=26, 18%) were common. The rates of medication errors and potential ADEs were not significantly different at basic computerized prescribing sites (4.3% vs 11.0%, P=.31; 2.6% vs 4.0%, P=.16) compared to handwritten sites. Advanced checks (including dose and frequency checking) could have prevented 95% of potential ADEs. Conclusions Prescribing errors occurred in 7.6% of outpatient prescriptions and many could have harmed patients. Basic computerized prescribing systems may not be adequate to reduce errors. More advanced systems with dose and frequency checking are likely needed to prevent potentially harmful errors. PMID:16117752
Advancing the research agenda for diagnostic error reduction.
Zwaan, Laura; Schiff, Gordon D; Singh, Hardeep
2013-10-01
Diagnostic errors remain an underemphasised and understudied area of patient safety research. We briefly summarise the methods that have been used to conduct research on epidemiology, contributing factors and interventions related to diagnostic error and outline directions for future research. Research methods that have studied epidemiology of diagnostic error provide some estimate on diagnostic error rates. However, there appears to be a large variability in the reported rates due to the heterogeneity of definitions and study methods used. Thus, future methods should focus on obtaining more precise estimates in different settings of care. This would lay the foundation for measuring error rates over time to evaluate improvements. Research methods have studied contributing factors for diagnostic error in both naturalistic and experimental settings. Both approaches have revealed important and complementary information. Newer conceptual models from outside healthcare are needed to advance the depth and rigour of analysis of systems and cognitive insights of causes of error. While the literature has suggested many potentially fruitful interventions for reducing diagnostic errors, most have not been systematically evaluated and/or widely implemented in practice. Research is needed to study promising intervention areas such as enhanced patient involvement in diagnosis, improving diagnosis through the use of electronic tools and identification and reduction of specific diagnostic process 'pitfalls' (eg, failure to conduct appropriate diagnostic evaluation of a breast lump after a 'normal' mammogram). The last decade of research on diagnostic error has made promising steps and laid a foundation for more rigorous methods to advance the field.
Information technology and medication safety: what is the benefit?
Kaushal, R; Bates, D
2002-01-01
Medication errors occur frequently and have significant clinical and financial consequences. Several types of information technologies can be used to decrease rates of medication errors. Computerized physician order entry with decision support significantly reduces serious inpatient medication error rates in adults. Other available information technologies that may prove effective for inpatients include computerized medication administration records, robots, automated pharmacy systems, bar coding, "smart" intravenous devices, and computerized discharge prescriptions and instructions. In outpatients, computerization of prescribing and patient oriented approaches such as personalized web pages and delivery of web based information may be important. Public and private mandates for information technology interventions are growing, but further development, application, evaluation, and dissemination are required. PMID:12486992
Morbi, Abigail H M; Hamady, Mohamad S; Riga, Celia V; Kashef, Elika; Pearch, Ben J; Vincent, Charles; Moorthy, Krishna; Vats, Amit; Cheshire, Nicholas J W; Bicknell, Colin D
2012-08-01
To determine the type and frequency of errors during vascular interventional radiology (VIR) and design and implement an intervention to reduce error and improve efficiency in this setting. Ethical guidance was sought from the Research Services Department at Imperial College London. Informed consent was not obtained. Field notes were recorded during 55 VIR procedures by a single observer. Two blinded assessors identified failures from field notes and categorized them into one or more errors by using a 22-part classification system. The potential to cause harm, disruption to procedural flow, and preventability of each failure was determined. A preprocedural team rehearsal (PPTR) was then designed and implemented to target frequent preventable potential failures. Thirty-three procedures were observed subsequently to determine the efficacy of the PPTR. Nonparametric statistical analysis was used to determine the effect of intervention on potential failure rates, potential to cause harm and procedural flow disruption scores (Mann-Whitney U test), and number of preventable failures (Fisher exact test). Before intervention, 1197 potential failures were recorded, of which 54.6% were preventable. A total of 2040 errors were deemed to have occurred to produce these failures. Planning error (19.7%), staff absence (16.2%), equipment unavailability (12.2%), communication error (11.2%), and lack of safety consciousness (6.1%) were the most frequent errors, accounting for 65.4% of the total. After intervention, 352 potential failures were recorded. Classification resulted in 477 errors. Preventable failures decreased from 54.6% to 27.3% (P < .001) with implementation of PPTR. Potential failure rates per hour decreased from 18.8 to 9.2 (P < .001), with no increase in potential to cause harm or procedural flow disruption per failure. Failures during VIR procedures are largely because of ineffective planning, communication error, and equipment difficulties, rather than a result of technical or patient-related issues. Many of these potential failures are preventable. A PPTR is an effective means of targeting frequent preventable failures, reducing procedural delays and improving patient safety.
[Medical errors: inevitable but preventable].
Giard, R W
2001-10-27
Medical errors are increasingly reported in the lay press. Studies have shown dramatic error rates of 10 percent or even higher. From a methodological point of view, studying the frequency and causes of medical errors is far from simple. Clinical decisions on diagnostic or therapeutic interventions are always taken within a clinical context. Reviewing outcomes of interventions without taking into account both the intentions and the arguments for a particular action will limit the conclusions from a study on the rate and preventability of errors. The interpretation of the preventability of medical errors is fraught with difficulties and probably highly subjective. Blaming the doctor personally does not do justice to the actual situation and especially the organisational framework. Attention to and improvement of the organisational aspects of error are far more important than litigating against the person. To err is and will remain human, and if we want to reduce the incidence of faults we must be able to learn from our mistakes. That requires an open attitude towards medical mistakes, a continuous effort in their detection, a sound analysis and, where feasible, the institution of preventive measures.
Berwid, Olga G.; Halperin, Jeffrey M.; Johnson, Ray E.; Marks, David J.
2013-01-01
Background Attention-Deficit/Hyperactivity Disorder has been associated with deficits in self-regulatory cognitive processes, some of which are thought to lie at the heart of the disorder. Slowing of reaction times (RTs) for correct responses following errors made during decision tasks has been interpreted as an indication of intact self-regulatory functioning and has been shown to be attenuated in school-aged children with ADHD. This study attempted to examine whether ADHD symptoms are associated with an early-emerging deficit in post-error slowing. Method A computerized two-choice RT task was administered to an ethnically diverse sample of preschool-aged children classified as either ‘control’ (n = 120) or ‘hyperactive/inattentive’ (HI; n = 148) using parent- and teacher-rated ADHD symptoms. Analyses were conducted to determine whether HI preschoolers exhibit a deficit in this self-regulatory ability. Results HI children exhibited reduced post-error slowing relative to controls on the trials selected for analysis. Supplementary analyses indicated that this may have been due to a reduced proportion of trials following errors on which HI children slowed rather than to a reduction in the absolute magnitude of slowing on all trials following errors. Conclusions High levels of ADHD symptoms in preschoolers may be associated with a deficit in error processing as indicated by post-error slowing. The results of supplementary analyses suggest that this deficit is perhaps more a result of failures to perceive errors than of difficulties with executive control. PMID:23387525
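For readers unfamiliar with the post-error slowing index used in studies like the one above, the sketch below computes it in its common form: mean reaction time on correct trials that follow an error minus mean reaction time on correct trials that follow a correct response. The trial data are hypothetical.

```python
import numpy as np

def post_error_slowing(rts, correct):
    """Mean RT after errors minus mean RT after correct responses,
    computed over correct trials only (a common post-error slowing index)."""
    rts = np.asarray(rts, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    post_error, post_correct = [], []
    for i in range(1, len(rts)):
        if not correct[i]:
            continue                      # score only correct trials
        (post_correct if correct[i - 1] else post_error).append(rts[i])
    return np.mean(post_error) - np.mean(post_correct)

# Hypothetical two-choice RT data (ms): slower responses right after an error
rts     = [420, 610, 455, 430, 640, 460, 440, 435, 630, 450]
correct = [1,   0,   1,   1,   0,   1,   1,   1,   0,   1]
print("post-error slowing: %.1f ms" % post_error_slowing(rts, correct))
```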
Hasni, Nesrine; Ben Hamida, Emira; Ben Jeddou, Khouloud; Ben Hamida, Sarra; Ayadi, Imene; Ouahchi, Zeineb; Marrakchi, Zahra
2016-12-01
The iatrogenic risk of medication is largely unevaluated in neonatology. Objective: To assess errors that occurred during the preparation and administration of injectable medicines in a neonatal unit in order to implement corrective actions to reduce the occurrence of these errors. A prospective, observational study was performed in a neonatal unit over a period of one month. The practices of preparing and administering injectable medications were identified through a standardized data collection form. These practices were compared with the summary of product characteristics (RCP) of each drug and the literature. One hundred preparations of 13 different drugs were observed. Eighty-five errors were detected during the preparation and administration steps. These comprised preparation errors in 59% of cases, such as changes to the dilution protocol (32%) and use of the wrong solvent (11%), and administration errors in 41% of cases, such as wrong timing of administration (18%) or omission of administration (9%). This study showed a high rate of errors during the preparation and administration of injectable drugs. In order to optimize the care of newborns and reduce the risk of medication errors, corrective actions have been implemented through the establishment of a quality assurance system, which consisted of the development of injectable drug preparation procedures, the introduction of a labeling system and staff training.
The impact of cine EPID image acquisition frame rate on markerless soft-tissue tracking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yip, Stephen, E-mail: syip@lroc.harvard.edu; Rottmann, Joerg; Berbeco, Ross
2014-06-15
Purpose: Although reduction of the cine electronic portal imaging device (EPID) acquisition frame rate through multiple frame averaging may reduce hardware memory burden and decrease image noise, it can hinder the continuity of soft-tissue motion leading to poor autotracking results. The impact of motion blurring and image noise on the tracking performance was investigated. Methods: Phantom and patient images were acquired at a frame rate of 12.87 Hz with an amorphous silicon portal imager (AS1000, Varian Medical Systems, Palo Alto, CA). The maximum frame rate of 12.87 Hz is imposed by the EPID. Low frame rate images were obtained by continuous frame averaging. A previously validated tracking algorithm was employed for autotracking. The difference between the programmed and autotracked positions of a Las Vegas phantom moving in the superior-inferior direction defined the tracking error (δ). Motion blurring was assessed by measuring the area change of the circle with the greatest depth. Additionally, lung tumors on 1747 frames acquired at 11 field angles from four radiotherapy patients are manually and automatically tracked with varying frame averaging. δ was defined by the position difference of the two tracking methods. Image noise was defined as the standard deviation of the background intensity. Motion blurring and image noise are correlated with δ using the Pearson correlation coefficient (R). Results: For both phantom and patient studies, the autotracking errors increased at frame rates lower than 4.29 Hz. Above 4.29 Hz, changes in errors were negligible with δ < 1.60 mm. Motion blurring and image noise were observed to increase and decrease with frame averaging, respectively. Motion blurring and tracking errors were significantly correlated for the phantom (R = 0.94) and patient studies (R = 0.72). Moderate to poor correlation was found between image noise and tracking error, with R = −0.58 and −0.19 for the two studies, respectively. Conclusions: Cine EPID image acquisition at a frame rate of at least 4.29 Hz is recommended. Motion blurring in images with frame rates below 4.29 Hz can significantly reduce the accuracy of autotracking.
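The sketch below illustrates the two operations at the core of the analysis above: continuous frame averaging to emulate a lower acquisition frame rate, and a Pearson correlation between a blurring surrogate and tracking error. The array shapes and the blur/error values are hypothetical, not the study's data.

```python
import numpy as np

def average_frames(frames, n):
    """Average each block of n consecutive frames, emulating a lower frame rate.
    frames: array of shape (num_frames, rows, cols)."""
    usable = (frames.shape[0] // n) * n
    return frames[:usable].reshape(-1, n, *frames.shape[1:]).mean(axis=1)

# Hypothetical example: a 12.87 Hz stream averaged in blocks of 3 -> about 4.29 Hz
rng = np.random.default_rng(1)
frames = rng.normal(size=(120, 64, 64))
low_rate = average_frames(frames, 3)
print(frames.shape, "->", low_rate.shape)

# Pearson correlation between motion blurring and tracking error (hypothetical values)
blur_area_change = np.array([0.02, 0.05, 0.11, 0.18, 0.26])
tracking_error_mm = np.array([0.4, 0.7, 1.3, 2.2, 3.1])
print("Pearson R = %.2f" % np.corrcoef(blur_area_change, tracking_error_mm)[0, 1])
```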
Neuropsychological analysis of a typewriting disturbance following cerebral damage.
Boyle, M; Canter, G J
1987-01-01
Following a left CVA, a skilled professional typist sustained a disturbance of typing disproportionate to her handwriting disturbance. Typing errors were predominantly of the sequencing type, with spatial errors much less frequent, suggesting that the impairment was based on a relatively early (premotor) stage of processing. Depriving the subject of visual feedback during handwriting greatly increased her error rate. Similarly, interfering with auditory feedback during speech substantially reduced her self-correction of speech errors. These findings suggested that impaired ability to utilize somesthetic information--probably caused by the subject's parietal lobe lesion--may have been the basis of the typing disorder.
Decreasing patient identification band errors by standardizing processes.
Walley, Susan Chu; Berger, Stephanie; Harris, Yolanda; Gallizzi, Gina; Hayes, Leslie
2013-04-01
Patient identification (ID) bands are an essential component in patient ID. Quality improvement methodology has been applied as a model to reduce ID band errors although previous studies have not addressed standardization of ID bands. Our specific aim was to decrease ID band errors by 50% in a 12-month period. The Six Sigma DMAIC (define, measure, analyze, improve, and control) quality improvement model was the framework for this study. ID bands at a tertiary care pediatric hospital were audited from January 2011 to January 2012 with continued audits to June 2012 to confirm the new process was in control. After analysis, the major improvement strategy implemented was standardization of styles of ID bands and labels. Additional interventions included educational initiatives regarding the new ID band processes and disseminating institutional and nursing unit data. A total of 4556 ID bands were audited with a preimprovement ID band error average rate of 9.2%. Significant variation in the ID band process was observed, including styles of ID bands. Interventions were focused on standardization of the ID band and labels. The ID band error rate improved to 5.2% in 9 months (95% confidence interval: 2.5-5.5; P < .001) and was maintained for 8 months. Standardization of ID bands and labels in conjunction with other interventions resulted in a statistical decrease in ID band error rates. This decrease in ID band error rates was maintained over the subsequent 8 months.
Venkataraman, Aishwarya; Siu, Emily; Sadasivam, Kalaimaran
2016-11-01
Medication errors, including infusion prescription errors, are a major public health concern, especially in paediatric patients. There is some evidence that electronic or web-based calculators could minimise these errors. To evaluate the impact of an electronic infusion calculator on the frequency of infusion errors in the Paediatric Critical Care Unit of The Royal London Hospital, London, United Kingdom, we devised an electronic infusion calculator that calculates the appropriate concentration, rate and dose for the selected medication based on the recorded weight and age of the child and then prints a valid prescription chart. The electronic infusion calculator was implemented from April 2015 in the Paediatric Critical Care Unit. A prospective study, five months before and five months after implementation of the electronic infusion calculator, was conducted. Data on the following variables were collected onto a proforma: medication dose, infusion rate, volume, concentration, diluent, legibility, and missing or incorrect patient details. A total of 132 handwritten prescriptions were reviewed before electronic infusion calculator implementation and 119 electronic infusion calculator prescriptions were reviewed afterwards. Handwritten prescriptions had a higher error rate (32.6%) than electronic infusion calculator prescriptions (<1%) (p < 0.001). Electronic infusion calculator prescriptions had no errors in dose, volume or rate calculation, in contrast to handwritten prescriptions, and hence required very few pharmacy interventions. Use of the electronic infusion calculator for infusion prescription significantly reduced the total number of infusion prescribing errors in the Paediatric Critical Care Unit and has enabled more efficient use of medical and pharmacy time resources.
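To illustrate the kind of arithmetic such a calculator automates, the sketch below converts a weight-based dose into a pump rate for a given syringe concentration. The dose, weight, and concentration are purely illustrative placeholders and are not clinical guidance or the calculator's actual logic.

```python
def infusion_rate_ml_per_hr(dose_mcg_kg_min, weight_kg, concentration_mcg_ml):
    """Convert a weight-based dose (mcg/kg/min) into a pump rate (mL/h)
    for a syringe of the given concentration (mcg/mL)."""
    dose_mcg_per_hr = dose_mcg_kg_min * weight_kg * 60.0
    return dose_mcg_per_hr / concentration_mcg_ml

# Purely illustrative numbers (not dosing guidance):
# 0.1 mcg/kg/min for a 12 kg child from a 200 mcg/mL syringe
rate = infusion_rate_ml_per_hr(dose_mcg_kg_min=0.1, weight_kg=12, concentration_mcg_ml=200)
print(f"{rate:.2f} mL/h")   # 0.36 mL/h
```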
Local position control: A new concept for control of manipulators
NASA Technical Reports Server (NTRS)
Kelly, Frederick A.
1988-01-01
Resolved motion rate control is currently one of the most frequently used methods of manipulator control. It is currently used in the Space Shuttle remote manipulator system (RMS) and in prosthetic devices. Position control is predominately used in locating the end-effector of an industrial manipulator along a path with prescribed timing. In industrial applications, resolved motion rate control is inappropriate since position error accumulates. This is due to velocity being the control variable. In some applications this property is an advantage rather than a disadvantage. It may be more important for motion to end as soon as the input command is removed rather than reduce the position error to zero. Local position control is a new concept for manipulator control which retains the important properties of resolved motion rate control, but reduces the drift. Local position control can be considered to be a generalization of resolved position and resolved rate control. It places both control schemes on a common mathematical basis.
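A minimal sketch of the contrast drawn above: resolved motion rate control maps a commanded Cartesian rate to joint rates through the Jacobian pseudoinverse, so position error can drift, while a local-position-style variant adds a proportional correction on the accumulated Cartesian error. The formulation and gains are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def resolved_rate_step(J, x_dot_cmd):
    """Joint rates from the commanded Cartesian rate (resolved motion rate control)."""
    return np.linalg.pinv(J) @ x_dot_cmd

def local_position_step(J, x_dot_cmd, x_err, k_p=5.0):
    """Same mapping, but with a proportional correction on the accumulated
    Cartesian position error so the error does not drift (illustrative form)."""
    return np.linalg.pinv(J) @ (x_dot_cmd + k_p * x_err)

# Hypothetical 2-link planar arm Jacobian at some configuration
J = np.array([[-0.8, -0.3],
              [ 0.6,  0.4]])
x_dot_cmd = np.array([0.05, 0.0])        # commanded end-effector velocity (m/s)
x_err = np.array([0.002, -0.001])        # accumulated position error (m)
print("rate control   :", resolved_rate_step(J, x_dot_cmd))
print("local position :", local_position_step(J, x_dot_cmd, x_err))
```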
Creel, Scott; Spong, Goran; Sands, Jennifer L; Rotella, Jay; Zeigle, Janet; Joe, Lawrence; Murphy, Kerry M; Smith, Douglas
2003-07-01
Determining population sizes can be difficult, but is essential for conservation. By counting distinct microsatellite genotypes, DNA from noninvasive samples (hair, faeces) allows estimation of population size. Problems arise because genotypes from noninvasive samples are error-prone, but genotyping errors can be reduced by multiple polymerase chain reaction (PCR). For faecal genotypes from wolves in Yellowstone National Park, error rates varied substantially among samples, often above the 'worst-case threshold' suggested by simulation. Consequently, a substantial proportion of multilocus genotypes held one or more errors, despite multiple PCR. These genotyping errors created several genotypes per individual and caused overestimation (up to 5.5-fold) of population size. We propose a 'matching approach' to eliminate this overestimation bias.
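The sketch below illustrates the spirit of a matching approach: multilocus genotypes that differ at no more than a small number of loci are grouped as the same individual before counting, which counters the inflation caused by genotyping errors. The mismatch threshold, loci, and samples are hypothetical, and this greedy grouping is an illustration rather than the authors' exact procedure.

```python
def mismatches(g1, g2):
    """Number of loci at which two multilocus genotypes differ."""
    return sum(a != b for a, b in zip(g1, g2))

def count_individuals(genotypes, max_mismatch=1):
    """Greedy matching: a new genotype is assigned to an existing individual
    if it differs at <= max_mismatch loci; otherwise it founds a new one."""
    individuals = []
    for g in genotypes:
        for rep in individuals:
            if mismatches(g, rep) <= max_mismatch:
                break
        else:
            individuals.append(g)
    return len(individuals)

# Hypothetical 4-locus faecal genotypes; the second differs from the first by one error
samples = [("A1", "B2", "C1", "D3"),
           ("A1", "B2", "C1", "D1"),   # likely the same wolf, one genotyping error
           ("A2", "B1", "C3", "D2")]
print("naive count   :", len(set(samples)))           # 3
print("matched count :", count_individuals(samples))  # 2
```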
Cochran, Gary L; Barrett, Ryan S; Horn, Susan D
2016-08-01
The role of pharmacist transcription, onsite pharmacist dispensing, use of automated dispensing cabinets (ADCs), nurse-nurse double checks, or barcode-assisted medication administration (BCMA) in reducing medication error rates in critical access hospitals (CAHs) was evaluated. Investigators used the practice-based evidence methodology to identify predictors of medication errors in 12 Nebraska CAHs. Detailed information about each medication administered was recorded through direct observation. Errors were identified by comparing the observed medication administered with the physician's order. Chi-square analysis and Fisher's exact test were used to measure differences between groups of medication-dispensing procedures. Nurses observed 6497 medications being administered to 1374 patients. The overall error rate was 1.2%. The transcription error rates for orders transcribed by an onsite pharmacist were slightly lower than for orders transcribed by a telepharmacy service (0.10% and 0.33%, respectively). Fewer dispensing errors occurred when medications were dispensed by an onsite pharmacist versus any other method of medication acquisition (0.10% versus 0.44%, p = 0.0085). The rates of dispensing errors for medications that were retrieved from a single-cell ADC (0.19%), a multicell ADC (0.45%), or a drug closet or general supply (0.77%) did not differ significantly. BCMA was associated with a higher proportion of dispensing and administration errors intercepted before reaching the patient (66.7%) compared with either manual double checks (10%) or no BCMA or double check (30.4%) of the medication before administration (p = 0.0167). Onsite pharmacist dispensing and BCMA were associated with fewer medication errors and are important components of a medication safety strategy in CAHs. Copyright © 2016 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
Chua, Siew-Siang; Choo, Sim-Mei; Sulaiman, Che Zuraini; Omar, Asma; Thong, Meow-Keong
2017-01-01
Background and purpose Drug administration errors are more likely to reach the patient than other medication errors. The main aim of this study was to determine whether the sharing of information on drug administration errors among health care providers would reduce such problems. Patients and methods This study involved direct, undisguised observations of drug administrations in two pediatric wards of a major teaching hospital in Kuala Lumpur, Malaysia. This study consisted of two phases: Phase 1 (pre-intervention) and Phase 2 (post-intervention). Data were collected by two observers over a 40-day period in both Phase 1 and Phase 2 of the study. Both observers were pharmacy graduates: Observer 1 just completed her undergraduate pharmacy degree, whereas Observer 2 was doing her one-year internship as a provisionally registered pharmacist in the hospital under study. A drug administration error was defined as a discrepancy between the drug regimen received by the patient and that intended by the prescriber and also drug administration procedures that did not follow standard hospital policies and procedures. Results from Phase 1 of the study were analyzed, presented and discussed with the ward staff before commencement of data collection in Phase 2. Results A total of 1,284 and 1,401 doses of drugs were administered in Phase 1 and Phase 2, respectively. The rate of drug administration errors reduced significantly from Phase 1 to Phase 2 (44.3% versus 28.6%, respectively; P<0.001). Logistic regression analysis showed that the adjusted odds of drug administration errors in Phase 1 of the study were almost three times that in Phase 2 (P<0.001). The most common types of errors were incorrect administration technique and incorrect drug preparation. Nasogastric and intravenous routes of drug administration contributed significantly to the rate of drug administration errors. Conclusion This study showed that sharing of the types of errors that had occurred was significantly associated with a reduction in drug administration errors. PMID:28356748
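As a sketch of how adjusted odds of an administration error between study phases can be estimated with logistic regression, the code below fits a simple model on synthetic data. The covariates, coefficients, and sample size are invented for illustration and do not reproduce the study's analysis.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 400
phase2 = rng.integers(0, 2, n)                 # 0 = Phase 1, 1 = Phase 2
iv_route = rng.integers(0, 2, n)               # hypothetical covariate
# Synthetic error indicator: lower error odds in Phase 2, higher for the IV route
logit = -0.3 - 1.0 * phase2 + 0.6 * iv_route
error = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([phase2, iv_route]))
fit = sm.Logit(error, X).fit(disp=False)
odds_ratios = np.exp(fit.params)
print("adjusted OR (Phase 1 vs Phase 2): %.2f" % (1 / odds_ratios[1]))
```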
NASA Technical Reports Server (NTRS)
Burken, John J. (Inventor); Burcham, Frank W., Jr. (Inventor); Bull, John (Inventor)
2000-01-01
Development of an emergency flight control system for lateral control using only differential engine thrust modulation of multiengine aircraft is disclosed and is currently underway. The multiengine aircraft has at least two engines laterally displaced to the left and right from the axis of the aircraft. A heading angle command psi_c is to be tracked. By continually sensing the heading angle psi of the aircraft and computing a heading error signal psi_e as a function of the difference between the heading angle command psi_c and the sensed heading angle psi, a track control signal is developed with compensation as a function of sensed bank angle phi, bank angle rate (or roll rate p), yaw rate tau, and true velocity, to produce an aircraft thrust control signal ATC_psi(L,R). The thrust control signal is differentially applied to the left and right engines, with equal amplitude and opposite sign, such that a negative sign is applied to the control signal on the side of the aircraft toward which a turn is required to reduce the error signal, and the turn continues until the heading feedback reduces the error to zero.
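A heavily simplified sketch of the kind of control law described: the heading error drives a differential thrust command, with damping terms from bank angle and yaw rate, applied with opposite signs to the two engines. The gains, sign conventions, and structure are assumptions for illustration, not the patented law.

```python
def differential_thrust_command(psi_cmd, psi, phi, yaw_rate,
                                k_psi=0.8, k_phi=0.4, k_r=1.2):
    """Simplified lateral control law: heading error with bank-angle and
    yaw-rate damping produces equal and opposite thrust increments."""
    psi_err = psi_cmd - psi
    atc = k_psi * psi_err - k_phi * phi - k_r * yaw_rate
    return +atc, -atc     # (left engine delta, right engine delta), opposite signs

# Illustrative call: 5 deg heading error, wings level, no yaw rate
left, right = differential_thrust_command(psi_cmd=90.0, psi=85.0, phi=0.0, yaw_rate=0.0)
print(left, right)
```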
Servo control booster system for minimizing following error
Wise, W.L.
1979-07-26
A closed-loop feedback-controlled servo system is disclosed which reduces command-to-response error to the system's position feedback resolution least increment, ΔS_R, on a continuous real-time basis, for all operational times of consequence and for all operating speeds. The servo system employs a second position feedback control loop on a by-exception basis, when the command-to-response error is greater than or equal to ΔS_R, to produce precise position correction signals. When the command-to-response error is less than ΔS_R, control automatically reverts to conventional control means as the second position feedback control loop is disconnected, becoming transparent to conventional servo control means. By operating the second unique position feedback control loop used herein at the appropriate clocking rate, command-to-response error may be reduced to the position feedback resolution least increment. The present system may be utilized in combination with a tachometer loop for increased stability.
Niccum, Brittany A; Lee, Heewook; MohammedIsmail, Wazim; Tang, Haixu; Foster, Patricia L
2018-06-15
When the DNA polymerase that replicates the Escherichia coli chromosome, DNA Pol III, makes an error, there are two primary defenses against mutation: proofreading by the epsilon subunit of the holoenzyme and mismatch repair. In proofreading-deficient strains, mismatch repair is partially saturated and the cell's response to DNA damage, the SOS response, may be partially induced. To investigate the nature of replication errors, we used mutation accumulation experiments and whole genome sequencing to determine mutation rates and mutational spectra across the entire chromosome of strains deficient in proofreading, mismatch repair, and the SOS response. We report that a proofreading-deficient strain has a mutation rate 4,000-fold greater than wild-type strains. While the SOS response may be induced in these cells, it does not contribute to the mutational load. Inactivating mismatch repair in a proofreading-deficient strain increases the mutation rate another 1.5-fold. DNA polymerase has a bias for converting G:C to A:T base pairs, but proofreading reduces the impact of these mutations, helping to maintain the genomic G:C content. These findings give an unprecedented view of how polymerase and error-correction pathways work together to maintain E. coli's low mutation rate of 1 per thousand generations. Copyright © 2018, Genetics.
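The basic arithmetic of a mutation accumulation estimate is sketched below: a pooled per-genome, per-generation rate from the total mutations observed by whole-genome sequencing. The line counts and mutation totals are illustrative values chosen only to reproduce ratios of the size quoted above; they are not the paper's data.

```python
def mutation_rate_per_generation(total_mutations, n_lines, generations_per_line):
    """Mutations per genome per generation in a mutation accumulation
    experiment (simple pooled estimate)."""
    total_generations = n_lines * generations_per_line
    return total_mutations / total_generations

# Illustrative numbers only, chosen to mirror the quoted ratios
wt = mutation_rate_per_generation(total_mutations=200, n_lines=50, generations_per_line=4000)
mut = mutation_rate_per_generation(total_mutations=16000, n_lines=20, generations_per_line=200)
print(f"wild-type-like rate        : {wt:.1e} per genome per generation")
print(f"proofreading-deficient rate: {mut:.1e} ({mut / wt:.0f}-fold higher)")
```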
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olama, Mohammed M; Matalgah, Mustafa M; Bobrek, Miljko
Traditional encryption techniques require packet overhead, produce processing time delay, and suffer from severe quality of service deterioration due to fades and interference in wireless channels. These issues reduce the effective transmission data rate (throughput) considerably in wireless communications, where data rate with limited bandwidth is the main constraint. In this paper, performance evaluation analyses are conducted for an integrated signaling-encryption mechanism that is secure and enables improved throughput and probability of bit-error in wireless channels. This mechanism eliminates the drawbacks stated herein by encrypting only a small portion of an entire transmitted frame, while the rest is not subject to traditional encryption but goes through a signaling process (designed transformation) with the plaintext of the portion selected for encryption. We also propose to incorporate error correction coding solely on the small encrypted portion of the data to drastically improve the overall bit-error rate performance while not noticeably increasing the required bit-rate. We focus on validating the signaling-encryption mechanism utilizing Hamming and convolutional error correction coding by conducting an end-to-end system-level simulation-based study. The average probability of bit-error and throughput of the encryption mechanism are evaluated over standard Gaussian and Rayleigh fading-type channels and compared to the ones of the conventional advanced encryption standard (AES).
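To give a feel for the channel-level comparison mentioned above, the sketch below estimates the uncoded BPSK bit-error rate over Gaussian and flat Rayleigh fading channels by Monte Carlo simulation. It does not implement the proposed signaling-encryption mechanism or any coding; modulation, SNR points, and bit counts are illustrative assumptions.

```python
import numpy as np

def ber_bpsk(snr_db, n_bits=200_000, fading=False, rng=None):
    """Monte Carlo bit-error rate of BPSK over AWGN, optionally with
    flat Rayleigh fading (coherent detection assumed)."""
    rng = rng or np.random.default_rng(0)
    bits = rng.integers(0, 2, n_bits)
    symbols = 1.0 - 2.0 * bits                    # 0 -> +1, 1 -> -1
    snr = 10 ** (snr_db / 10.0)
    noise = rng.normal(0.0, np.sqrt(1.0 / (2.0 * snr)), n_bits)
    h = rng.rayleigh(scale=np.sqrt(0.5), size=n_bits) if fading else 1.0
    received = h * symbols + noise
    decided = (received < 0).astype(int)          # coherent detection: h > 0
    return np.mean(decided != bits)

for snr_db in (0, 5, 10):
    print(f"{snr_db:2d} dB  AWGN: {ber_bpsk(snr_db):.4f}   "
          f"Rayleigh: {ber_bpsk(snr_db, fading=True):.4f}")
```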
Marquardt, Lynn; Eichele, Heike; Lundervold, Astri J.; Haavik, Jan; Eichele, Tom
2018-01-01
Introduction: Attention-deficit hyperactivity disorder (ADHD) is one of the most frequent neurodevelopmental disorders in children and tends to persist into adulthood. Evidence from neuropsychological, neuroimaging, and electrophysiological studies indicates that alterations of error processing are core symptoms in children and adolescents with ADHD. To test whether adults with ADHD show persisting deficits and compensatory processes, we investigated performance monitoring during stimulus-evaluation and response-selection, with a focus on errors, as well as within-group correlations with symptom scores. Methods: Fifty-five participants (27 ADHD and 28 controls) aged 19–55 years performed a modified flanker task during EEG recording with 64 electrodes, and the ADHD and control groups were compared on measures of behavioral task performance, event-related potentials of performance monitoring (N2, P3), and error processing (ERN, Pe). Adult ADHD Self-Report Scale (ASRS) was used to assess ADHD symptom load. Results: Adults with ADHD showed higher error rates in incompatible trials, and these error rates correlated positively with the ASRS scores. Also, we observed lower P3 amplitudes in incompatible trials, which were inversely correlated with symptom load in the ADHD group. Adults with ADHD also displayed reduced error-related ERN and Pe amplitudes. There were no significant differences in reaction time (RT) and RT variability between the two groups. Conclusion: Our findings show deviations of electrophysiological measures, suggesting reduced effortful engagement of attentional and error-monitoring processes in adults with ADHD. Associations between ADHD symptom scores, event-related potential amplitudes, and poorer task performance in the ADHD group further support this notion. PMID:29706908
SU-E-J-112: The Impact of Cine EPID Image Acquisition Frame Rate On Markerless Soft-Tissue Tracking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yip, S; Rottmann, J; Berbeco, R
2014-06-01
Purpose: Although reduction of the cine EPID acquisition frame rate through multiple frame averaging may reduce hardware memory burden and decrease image noise, it can hinder the continuity of soft-tissue motion leading to poor auto-tracking results. The impact of motion blurring and image noise on the tracking performance was investigated. Methods: Phantom and patient images were acquired at a frame rate of 12.87Hz on an AS1000 portal imager. Low frame rate images were obtained by continuous frame averaging. A previously validated tracking algorithm was employed for auto-tracking. The difference between the programmed and auto-tracked positions of a Las Vegas phantom moving in the superior-inferior direction defined the tracking error (δ). Motion blurring was assessed by measuring the area change of the circle with the greatest depth. Additionally, lung tumors on 1747 frames acquired at eleven field angles from four radiotherapy patients are manually and automatically tracked with varying frame averaging. δ was defined by the position difference of the two tracking methods. Image noise was defined as the standard deviation of the background intensity. Motion blurring and image noise were correlated with δ using the Pearson correlation coefficient (R). Results: For both phantom and patient studies, the auto-tracking errors increased at frame rates lower than 4.29Hz. Above 4.29Hz, changes in errors were negligible with δ<1.60mm. Motion blurring and image noise were observed to increase and decrease with frame averaging, respectively. Motion blurring and tracking errors were significantly correlated for the phantom (R=0.94) and patient studies (R=0.72). Moderate to poor correlation was found between image noise and tracking error, with R = -0.58 and -0.19 for the two studies, respectively. Conclusion: An image acquisition frame rate of at least 4.29Hz is recommended for cine EPID tracking. Motion blurring in images with frame rates below 4.29Hz can substantially reduce the accuracy of auto-tracking. This work is supported in part by the Varian Medical Systems, Inc.
Improving registration accuracy.
Murphy, J Patrick; Shorrosh, Paul
2008-04-01
A registration quality assurance initiative--whether manual or automated--can result in benefits such as: Cleaner claims, Reduced cost to collect, Enhanced revenue, Decreased registration error rates, Improved staff morale, Fewer customer complaints
Zupanc, Christine M; Burgess-Limerick, Robin J; Wallis, Guy
2007-08-01
To investigate error and reaction time consequences of alternating compatible and incompatible steering arrangements during a simulated obstacle avoidance task. Underground coal mine shuttle cars provide an example of a vehicle in which operators are required to alternate between compatible and incompatible steering configurations. This experiment examines the performance of 48 novice participants in a virtual analogy of an underground coal mine shuttle car. Participants were randomly assigned to a compatible condition, an incompatible condition, an alternating condition in which compatibility alternated within and between hands, or an alternating condition in which compatibility alternated between hands. Participants made fewer steering direction errors and made correct steering responses more quickly in the compatible condition. Error rate decreased over time in the incompatible condition. A compatibility effect for both errors and reaction time was also found when the control-response relationship alternated; however, performance improvements over time were not consistent. Isolating compatibility to a hand resulted in reduced error rate and faster reaction time than when compatibility alternated within and between hands. The consequences of alternating control-response relationships are higher error rates and slower responses, at least in the early stages of learning. This research highlights the importance of ensuring consistently compatible human-machine directional control-response relationships.
Viterbi equalization for long-distance, high-speed underwater laser communication
NASA Astrophysics Data System (ADS)
Hu, Siqi; Mi, Le; Zhou, Tianhua; Chen, Weibiao
2017-07-01
In long-distance, high-speed underwater laser communication, because of the strong absorption and scattering processes, the laser pulse is stretched with the increase in communication distance and the decrease in water clarity. The maximum communication bandwidth is limited by laser-pulse stretching. Improving the communication rate increases the intersymbol interference (ISI). To reduce the effect of ISI, the Viterbi equalization (VE) algorithm is used to estimate the maximum-likelihood receiving sequence. The Monte Carlo method is used to simulate the stretching of the received laser pulse and the maximum communication rate at a wavelength of 532 nm in Jerlov IB and Jerlov II water channels with communication distances of 80, 100, and 130 m, respectively. The high-data rate communication performance for the VE and hard-decision algorithms is compared. The simulation results show that the VE algorithm can be used to reduce the ISI by selecting the minimum error path. The trade-off between the high-data rate communication performance and minor bit-error rate performance loss makes VE a promising option for applications in long-distance, high-speed underwater laser communication systems.
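The sketch below shows maximum-likelihood sequence estimation with the Viterbi algorithm for a simple two-tap ISI channel, which is the kind of equalization described above; it is not the paper's simulated underwater channel, and the channel taps, noise level, and on-off keying framing are assumptions.

```python
import numpy as np

def viterbi_equalize(received, taps):
    """Maximum-likelihood sequence estimation of binary symbols {0,1}
    through an ISI channel y[n] = taps[0]*x[n] + taps[1]*x[n-1] + noise."""
    metrics = np.full(2, 0.0)          # state = previous symbol (0 or 1)
    paths = [[], []]
    for y in received:
        new_metrics = np.full(2, np.inf)
        new_paths = [None, None]
        for prev in (0, 1):
            for cur in (0, 1):
                expected = taps[0] * cur + taps[1] * prev
                m = metrics[prev] + (y - expected) ** 2
                if m < new_metrics[cur]:
                    new_metrics[cur] = m
                    new_paths[cur] = paths[prev] + [cur]
        metrics, paths = new_metrics, new_paths
    return paths[int(np.argmin(metrics))]

# Illustrative use: OOK symbols through a pulse-stretching channel plus noise
rng = np.random.default_rng(3)
bits = rng.integers(0, 2, 2000)
taps = np.array([1.0, 0.6])                       # 60% of the pulse leaks into the next slot
x_prev = np.concatenate(([0], bits[:-1]))
received = taps[0] * bits + taps[1] * x_prev + rng.normal(0, 0.3, bits.size)
hard = (received > (taps.sum() / 2)).astype(int)  # naive hard-decision threshold
ve = np.array(viterbi_equalize(received, taps))
print("hard-decision errors:", np.sum(hard != bits))
print("Viterbi errors      :", np.sum(ve != bits))
```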
Error and Error Mitigation in Low-Coverage Genome Assemblies
Hubisz, Melissa J.; Lin, Michael F.; Kellis, Manolis; Siepel, Adam
2011-01-01
The recent release of twenty-two new genome sequences has dramatically increased the data available for mammalian comparative genomics, but twenty of these new sequences are currently limited to ∼2× coverage. Here we examine the extent of sequencing error in these 2× assemblies, and its potential impact in downstream analyses. By comparing 2× assemblies with high-quality sequences from the ENCODE regions, we estimate the rate of sequencing error to be 1–4 errors per kilobase. While this error rate is fairly modest, sequencing error can still have surprising effects. For example, an apparent lineage-specific insertion in a coding region is more likely to reflect sequencing error than a true biological event, and the length distribution of coding indels is strongly distorted by error. We find that most errors are contributed by a small fraction of bases with low quality scores, in particular, by the ends of reads in regions of single-read coverage in the assembly. We explore several approaches for automatic sequencing error mitigation (SEM), making use of the localized nature of sequencing error, the fact that it is well predicted by quality scores, and information about errors that comes from comparisons across species. Our automatic methods for error mitigation cannot replace the need for additional sequencing, but they do allow substantial fractions of errors to be masked or eliminated at the cost of modest amounts of over-correction, and they can reduce the impact of error in downstream phylogenomic analyses. Our error-mitigated alignments are available for download. PMID:21340033
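A minimal sketch of the quality-score-driven masking idea (not the authors' SEM pipeline): bases whose Phred quality falls below a threshold, or that lie near a read end in a single-read-coverage region, are replaced by 'N' so downstream analyses ignore them. The threshold and window values are illustrative assumptions.

```python
def mask_low_quality(seq, quals, min_q=20, end_window=3, single_read=False):
    """Mask likely sequencing errors in a low-coverage assembly span.

    seq         -- nucleotide string
    quals       -- per-base Phred quality scores (same length as seq)
    min_q       -- bases below this quality are masked
    end_window  -- bases this close to a read end are masked when the
                   region has single-read coverage
    single_read -- True if the whole span is covered by a single read
    """
    out = []
    n = len(seq)
    for i, (base, q) in enumerate(zip(seq, quals)):
        near_end = single_read and (i < end_window or i >= n - end_window)
        out.append("N" if (q < min_q or near_end) else base)
    return "".join(out)

# Example: the two low-quality bases are masked.  -> "ANGTNCGTAC"
print(mask_low_quality("ACGTACGTAC", [30, 12, 35, 40, 8, 30, 30, 30, 30, 30]))
```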
Requirements for fault-tolerant factoring on an atom-optics quantum computer.
Devitt, Simon J; Stephens, Ashley M; Munro, William J; Nemoto, Kae
2013-01-01
Quantum information processing and its associated technologies have reached a pivotal stage in their development, with many experiments having established the basic building blocks. Moving forward, the challenge is to scale up to larger machines capable of performing computational tasks not possible today. This raises questions that need to be urgently addressed, such as what resources these machines will consume and how large will they be. Here we estimate the resources required to execute Shor's factoring algorithm on an atom-optics quantum computer architecture. We determine the runtime and size of the computer as a function of the problem size and physical error rate. Our results suggest that once the physical error rate is low enough to allow quantum error correction, optimization to reduce resources and increase performance will come mostly from integrating algorithms and circuits within the error correction environment, rather than from improving the physical hardware.
A Bayesian approach to model structural error and input variability in groundwater modeling
NASA Astrophysics Data System (ADS)
Xu, T.; Valocchi, A. J.; Lin, Y. F. F.; Liang, F.
2015-12-01
Effective water resource management typically relies on numerical models to analyze groundwater flow and solute transport processes. Model structural error (due to simplification and/or misrepresentation of the "true" environmental system) and input forcing variability (which commonly arises since some inputs are uncontrolled or estimated with high uncertainty) are ubiquitous in groundwater models. Calibration that overlooks errors in model structure and input data can lead to biased parameter estimates and compromised predictions. We present a fully Bayesian approach for a complete assessment of uncertainty for spatially distributed groundwater models. The approach explicitly recognizes stochastic input and uses data-driven error models based on nonparametric kernel methods to account for model structural error. We employ exploratory data analysis to assist in specifying informative prior for error models to improve identifiability. The inference is facilitated by an efficient sampling algorithm based on DREAM-ZS and a parameter subspace multiple-try strategy to reduce the required number of forward simulations of the groundwater model. We demonstrate the Bayesian approach through a synthetic case study of surface-ground water interaction under changing pumping conditions. It is found that explicit treatment of errors in model structure and input data (groundwater pumping rate) has substantial impact on the posterior distribution of groundwater model parameters. Using error models reduces predictive bias caused by parameter compensation. In addition, input variability increases parametric and predictive uncertainty. The Bayesian approach allows for a comparison among the contributions from various error sources, which could inform future model improvement and data collection efforts on how to best direct resources towards reducing predictive uncertainty.
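The core idea of jointly inferring model parameters and a structural-error term can be shown with a toy random-walk Metropolis sampler (a stand-in for the DREAM-ZS sampler the authors actually use). The linear "model", the constant bias term, the priors, and the noise level are illustrative assumptions, not the groundwater model from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic observations: the "true" system has structure the simple model lacks.
x = np.linspace(0, 1, 40)
obs = 2.0 * x + 0.3 * np.sin(4 * x) + rng.normal(0, 0.05, x.size)

def log_post(theta):
    """Log-posterior for parameter k and a constant structural-error term b."""
    k, b = theta
    resid = obs - (k * x + b)                            # model + structural error
    loglik = -0.5 * np.sum((resid / 0.05) ** 2)
    logprior = -0.5 * (k ** 2 / 10.0 + b ** 2 / 1.0)     # weak Gaussian priors
    return loglik + logprior

# Plain random-walk Metropolis; DREAM-ZS explores the same posterior far more efficiently.
theta = np.array([1.0, 0.0])
lp = log_post(theta)
chain = []
for _ in range(5000):
    prop = theta + rng.normal(0, 0.02, 2)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
chain = np.array(chain[1000:])                           # discard burn-in
print("posterior mean of (k, b):", chain.mean(axis=0))
```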
Methods to Improve the Maintenance of the Earth Catalog of Satellites During Severe Solar Storms
NASA Technical Reports Server (NTRS)
Wilkin, Paul G.; Tolson, Robert H.
1998-01-01
The objective of this thesis is to investigate methods to improve the ability to maintain the inventory of orbital elements of Earth satellites during periods of atmospheric disturbance brought on by severe solar activity. Existing techniques do not account for such atmospheric dynamics, resulting in tracking errors of several seconds in predicted crossing time. Two techniques are examined to reduce these tracking errors. First, density predicted from various atmospheric models is fit to the orbital decay rate for a number of satellites. An orbital decay model is then developed that could be used to reduce tracking errors by accounting for atmospheric changes. The second approach utilizes a Kalman filter to estimate the orbital decay rate of a satellite after every observation. The new information is used to predict the next observation. Results from the first approach demonstrated the feasibility of building an orbital decay model based on predicted atmospheric density. Correlation of atmospheric density to orbital decay was as high as 0.88. However, it is clear that contemporary atmospheric models need further improvement in modeling density perturbations in the polar region brought on by solar activity. The second approach resulted in a dramatic reduction in tracking errors for certain satellites during severe solar storms. For example, in the limited cases studied, the reduction in tracking errors ranged from 25 to 79 percent.
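The second (Kalman filter) approach can be sketched as a two-state filter tracking a satellite's crossing-time residual and its drift, a proxy for the orbital decay rate. The time step and noise levels below are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def kalman_decay_filter(z, dt=1.0, q=1e-4, r=0.5):
    """Track [timing error, drift rate] from noisy crossing-time residuals z.

    The state transition assumes the drift rate (decay proxy) is nearly
    constant between observations; q is process noise, r measurement noise.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])
    Q = q * np.array([[dt ** 3 / 3, dt ** 2 / 2], [dt ** 2 / 2, dt]])
    R = np.array([[r]])
    x = np.zeros(2)
    P = np.eye(2) * 10.0
    estimates = []
    for zk in z:
        # Predict the next observation, then correct with the new measurement.
        x = F @ x
        P = F @ P @ F.T + Q
        y = zk - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ y).ravel()
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x.copy())
    return np.array(estimates)

# Usage: residuals whose drift increases during a simulated solar storm.
rng = np.random.default_rng(2)
true_rate = np.where(np.arange(100) < 50, 0.05, 0.2)   # storm accelerates the drift
z = np.cumsum(true_rate) + rng.normal(0, 0.5, 100)
print(kalman_decay_filter(z)[-1])   # [timing error, estimated drift rate]
```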
Multitasking simulation: Present application and future directions.
Adams, Traci Nicole; Rho, Jason C
2017-02-01
The Accreditation Council for Graduate Medical Education lists multi-tasking as a core competency in several medical specialties due to increasing demands on providers to manage the care of multiple patients simultaneously. Trainees often learn multitasking on the job without any formal curriculum, leading to high error rates. Multitasking simulation training has demonstrated success in reducing error rates among trainees. Studies of multitasking simulation demonstrate that this type of simulation is feasible, does not hinder the acquisition of procedural skill, and leads to better performance during subsequent periods of multitasking. Although some healthcare agencies have discouraged multitasking due to higher error rates among multitasking providers, it cannot be eliminated entirely in settings such as the emergency department in which providers care for more than one patient simultaneously. Simulation can help trainees to identify situations in which multitasking is inappropriate, while preparing them for situations in which multitasking is inevitable.
Code of Federal Regulations, 2010 CFR
2010-01-01
... informational activities but not for recruitment activities. Recruitment activities are those activities... decreased based upon its payment error rate as described in § 275.23. The rates of Federal funding for the activities identified in paragraphs (b)(2) and (b)(3) of this section shall not be reduced based upon the...
Chua, S S; Tea, M H; Rahman, M H A
2009-04-01
Drug administration errors were the second most frequent type of medication error, after prescribing errors, but the latter were often intercepted; hence, administration errors were more likely to reach the patients. Therefore, this study was conducted to determine the frequency and types of drug administration errors in a Malaysian hospital ward. This is a prospective study that involved direct, undisguised observations of drug administrations in a hospital ward. A researcher was stationed in the ward under study for 15 days to observe all drug administrations, which were recorded in a data collection form and then compared with the drugs prescribed for the patient. A total of 1118 opportunities for errors were observed and 127 administrations had errors. This gave an error rate of 11.4% [95% confidence interval (CI) 9.5-13.3]. If incorrect time errors were excluded, the error rate was reduced to 8.7% (95% CI 7.1-10.4). The most common types of drug administration errors were incorrect time (25.2%), followed by incorrect technique of administration (16.3%) and unauthorized drug errors (14.1%). In terms of clinical significance, 10.4% of the administration errors were considered potentially life-threatening. Intravenous routes were more likely to be associated with an administration error than oral routes (21.3% vs. 7.9%, P < 0.001). The study indicates that the frequency of drug administration errors in developing countries such as Malaysia is similar to that in developed countries. Incorrect time errors were also the most common type of drug administration error. A non-punitive system of reporting medication errors should be established to encourage more information to be documented so that a risk management protocol could be developed and implemented.
Liu, Shi Qiang; Zhu, Rong
2016-01-01
Error compensation of micromachined inertial measurement units (MIMU) is essential in practical applications. This paper presents a new compensation method using a neural-network-based identification for MIMU, which solves the universal problems of cross-coupling, misalignment, eccentricity, and other deterministic errors existing in a three-dimensional integrated system. Using a neural network to model a complex multivariate and nonlinear coupling system, the errors can be readily compensated through a comprehensive calibration. In this paper, we also present a thermal-gas MIMU based on thermal expansion, which measures three-axis angular rates and three-axis accelerations using only three thermal-gas inertial sensors, each of which measures one-axis angular rate and one-axis acceleration simultaneously in one chip. The developed MIMU (100 × 100 × 100 mm3) possesses the advantages of simple structure, high shock resistance, and large measuring ranges (three-axis angular rates of ±4000°/s and three-axis accelerations of ±10 g) compared with conventional MIMU, due to using a gas medium instead of a mechanical proof mass as the key moving and sensing element. However, the gas MIMU suffers from cross-coupling effects, which corrupt the system accuracy. The proposed compensation method is, therefore, applied to compensate the system errors of the MIMU. Experiments validate the effectiveness of the compensation, and the measurement errors of three-axis angular rates and three-axis accelerations are reduced to less than 1% and 3% of the uncompensated errors in the rotation range of ±600°/s and the acceleration range of ±1 g, respectively. PMID:26840314
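A minimal sketch of the calibration idea using scikit-learn's MLPRegressor: a small network learns the mapping from the six raw, cross-coupled sensor outputs to the six reference inertial quantities recorded on a rate table. The synthetic coupling matrix, nonlinearity, and noise are illustrative assumptions, not the MIMU's measured error model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)

# Reference motion: 3-axis angular rates and 3-axis accelerations (normalized units).
true = rng.uniform(-1, 1, size=(2000, 6))

# Synthetic raw outputs with cross-coupling, a mild nonlinearity, and noise.
coupling = np.eye(6) + 0.05 * rng.normal(size=(6, 6))
raw = true @ coupling.T + 0.02 * true ** 2 + 0.01 * rng.normal(size=true.shape)

# Neural-network compensation: learn raw -> true on calibration data.
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(raw[:1500], true[:1500])

residual = net.predict(raw[1500:]) - true[1500:]
print("RMS error after compensation:", np.sqrt(np.mean(residual ** 2)))
```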
NASA Astrophysics Data System (ADS)
Li, L. L.; Jin, C. L.; Ge, X.
2018-01-01
In this paper, the output regulation problem with a dissipative property for a class of switched stochastic delay systems is investigated, based on an error-dependent switching law. Under the assumption that the problem is not solvable for any individual subsystem, a sufficient condition is derived by constructing multiple Lyapunov-Krasovskii functionals with respect to multiple supply rates and by designing error feedback regulators. The condition is also established when the dissipative property reduces to passivity. Finally, two numerical examples are given to demonstrate the feasibility and efficiency of the present method.
Cerebral metabolic dysfunction and impaired vigilance in recently abstinent methamphetamine abusers.
London, Edythe D; Berman, Steven M; Voytek, Bradley; Simon, Sara L; Mandelkern, Mark A; Monterosso, John; Thompson, Paul M; Brody, Arthur L; Geaga, Jennifer A; Hong, Michael S; Hayashi, Kiralee M; Rawson, Richard A; Ling, Walter
2005-11-15
Methamphetamine (MA) abusers have cognitive deficits, abnormal metabolic activity and structural deficits in limbic and paralimbic cortices, and reduced hippocampal volume. The links between cognitive impairment and these cerebral abnormalities are not established. We assessed cerebral glucose metabolism with [F-18]fluorodeoxyglucose positron emission tomography in 17 abstinent (4 to 7 days) methamphetamine users and 16 control subjects performing an auditory vigilance task and obtained structural magnetic resonance brain scans. Regional brain radioactivity served as a marker for relative glucose metabolism. Error rates on the task were related to regional radioactivity and hippocampal morphology. Methamphetamine users had higher error rates than control subjects on the vigilance task. The groups showed different relationships between error rates and relative activity in the anterior and middle cingulate gyrus and the insula. Whereas the MA user group showed negative correlations involving these regions, the control group showed positive correlations involving the cingulate cortex. Across groups, hippocampal metabolic and structural measures were negatively correlated with error rates. Dysfunction in the cingulate and insular cortices of recently abstinent MA abusers contributes to impaired vigilance and other cognitive functions requiring sustained attention. Hippocampal integrity predicts task performance in methamphetamine users as well as control subjects.
Customization of user interfaces to reduce errors and enhance user acceptance.
Burkolter, Dina; Weyers, Benjamin; Kluge, Annette; Luther, Wolfram
2014-03-01
Customization is assumed to reduce error and increase user acceptance in the human-machine relation. Reconfiguration gives the operator the option to customize a user interface according to his or her own preferences. An experimental study with 72 computer science students using a simulated process control task was conducted. The reconfiguration group (RG) interactively reconfigured their user interfaces and used the reconfigured user interface in the subsequent test whereas the control group (CG) used a default user interface. Results showed significantly lower error rates and higher acceptance of the RG compared to the CG while there were no significant differences between the groups regarding situation awareness and mental workload. Reconfiguration seems to be promising and therefore warrants further exploration. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Making Residents Part of the Safety Culture: Improving Error Reporting and Reducing Harms.
Fox, Michael D; Bump, Gregory M; Butler, Gabriella A; Chen, Ling-Wan; Buchert, Andrew R
2017-01-30
Reporting medical errors is a focus of the patient safety movement. As frontline physicians, residents are optimally positioned to recognize errors and flaws in systems of care. Previous work highlights the difficulty of engaging residents in identification and/or reduction of medical errors and in integrating these trainees into their institutions' cultures of safety. The authors describe the implementation of a longitudinal, discipline-based, multifaceted curriculum to enhance the reporting of errors by pediatric residents at Children's Hospital of Pittsburgh of University of Pittsburgh Medical Center. The key elements of this curriculum included providing the necessary education to identify medical errors with an emphasis on systems-based causes, modeling of error reporting by faculty, and integrating error reporting and discussion into the residents' daily activities. The authors tracked monthly error reporting rates by residents and other health care professionals, in addition to serious harm event rates at the institution. The interventions resulted in significant increases in error reports filed by residents, from 3.6 to 37.8 per month over 4 years (P < 0.0001). This increase in resident error reporting correlated with a decline in serious harm events, from 15.0 to 8.1 per month over 4 years (P = 0.01). Integrating patient safety into the everyday resident responsibilities encourages frequent reporting and discussion of medical errors and leads to improvements in patient care. Multiple simultaneous interventions are essential to making residents part of the safety culture of their training hospitals.
A Quality Improvement Project to Decrease Human Milk Errors in the NICU.
Oza-Frank, Reena; Kachoria, Rashmi; Dail, James; Green, Jasmine; Walls, Krista; McClead, Richard E
2017-02-01
Ensuring safe human milk in the NICU is a complex process with many potential points for error, of which one of the most serious is administration of the wrong milk to the wrong infant. Our objective was to describe a quality improvement initiative that was associated with a reduction in human milk administration errors identified over a 6-year period in a typical, large NICU setting. We employed a quasi-experimental time series quality improvement initiative by using tools from the model for improvement, Six Sigma methodology, and evidence-based interventions. Scanned errors were identified from the human milk barcode medication administration system. Scanned errors of interest were wrong-milk-to-wrong-infant, expired-milk, or preparation errors. The scanned error rate and the impact of additional improvement interventions from 2009 to 2015 were monitored by using statistical process control charts. From 2009 to 2015, the total number of errors scanned declined from 97.1 per 1000 bottles to 10.8. Specifically, the number of expired milk error scans declined from 84.0 per 1000 bottles to 8.9. The number of preparation errors (4.8 per 1000 bottles to 2.2) and wrong-milk-to-wrong-infant errors scanned (8.3 per 1000 bottles to 2.0) also declined. By reducing the number of errors scanned, the number of opportunities for errors also decreased. Interventions that likely had the greatest impact on reducing the number of scanned errors included installation of bedside (versus centralized) scanners and dedicated staff to handle milk. Copyright © 2017 by the American Academy of Pediatrics.
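Monitoring scanned-error rates with a statistical process control chart can be sketched as a u-chart (errors per 1000 bottles with unequal monthly volumes). The monthly counts below are made-up placeholders, not the hospital's data.

```python
import numpy as np

def u_chart(errors, bottles, per=1000.0):
    """u-chart for error counts with unequal subgroup sizes.

    Returns the per-month rate, the centre line, and 3-sigma limits
    (limits vary by month because the denominators differ).
    """
    errors = np.asarray(errors, float)
    n = np.asarray(bottles, float) / per          # exposure in units of 1000 bottles
    u = errors / n
    u_bar = errors.sum() / n.sum()                # centre line
    sigma = np.sqrt(u_bar / n)
    ucl, lcl = u_bar + 3 * sigma, np.maximum(u_bar - 3 * sigma, 0)
    return u, u_bar, lcl, ucl

# Hypothetical monthly counts straddling an intervention.
errors  = [95, 90, 88, 40, 20, 12, 11]
bottles = [1000, 1100, 950, 1050, 1200, 1000, 980]
u, centre, lcl, ucl = u_chart(errors, bottles)
flags = (u < lcl) | (u > ucl)                     # points signalling special-cause change
print(np.round(u, 1), round(centre, 1), flags)
```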
NASA Astrophysics Data System (ADS)
Tian, Lizhi; Xiong, Zhenhua; Wu, Jianhua; Ding, Han
2017-05-01
Feedforward-feedback control is widely used in motion control of piezoactuator systems. Due to the phase lag caused by incomplete dynamics compensation, the performance of the composite controller is greatly limited at high frequency. This paper proposes a new rate-dependent model to improve the high-frequency tracking performance by reducing the dynamics compensation error. The rate-dependent model is designed as a function of the input and input variation rate to describe the input-output relationship of the residual system dynamics, which mainly manifests as phase lag over a wide frequency band. The direct inversion of the proposed rate-dependent model is then used to compensate the residual system dynamics. Using the proposed rate-dependent model as the feedforward term, the open-loop performance can be improved significantly at medium-high frequency. Combined with the feedback controller, the composite controller can then provide enhanced closed-loop performance from low frequency to high frequency. At a frequency of 1 Hz, the proposed controller presents the same performance as previous methods. However, at a frequency of 900 Hz, the tracking error is reduced to 30.7% of that of the decoupled approach.
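The compensation idea can be sketched by approximating the residual dynamics (phase lag) with a first-order rate-dependent map and inverting it directly for the feedforward term, alongside a simple proportional feedback loop. The lag constant, gain, reference frequency, and toy plant are illustrative assumptions, not the authors' identified model.

```python
import numpy as np

dt, tau, kp = 1e-5, 2e-4, 0.5            # sample time, assumed lag constant, P gain

def plant(y, u):
    """Toy piezo stage: first-order lag standing in for the residual dynamics."""
    return y + dt / tau * (u - y)

t = np.arange(0.0, 0.01, dt)
ref = np.sin(2 * np.pi * 900 * t)        # 900 Hz reference trajectory
ref_rate = np.gradient(ref, dt)

y_rd, y_plain = 0.0, 0.0
e_rd, e_plain = [], []
for k in range(t.size):
    # Rate-dependent feedforward: first-order inverse u_ff = r + tau * dr/dt
    u_rd = ref[k] + tau * ref_rate[k] + kp * (ref[k] - y_rd)
    y_rd = plant(y_rd, u_rd)
    e_rd.append(ref[k] - y_rd)

    # Rate-independent feedforward: pass the reference straight through
    u_plain = ref[k] + kp * (ref[k] - y_plain)
    y_plain = plant(y_plain, u_plain)
    e_plain.append(ref[k] - y_plain)

print("RMS error, rate-dependent FF:", np.sqrt(np.mean(np.square(e_rd))))
print("RMS error, plain FF:         ", np.sqrt(np.mean(np.square(e_plain))))
```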
Soshi, Takahiro; Ando, Kumiko; Noda, Takamasa; Nakazawa, Kanako; Tsumura, Hideki; Okada, Takayuki
2014-01-01
Post-error slowing (PES) is an error recovery strategy that contributes to action control, and occurs after errors in order to prevent future behavioral flaws. Error recovery often malfunctions in clinical populations, but the relationship between behavioral traits and recovery from error is unclear in healthy populations. The present study investigated the relationship between impulsivity and error recovery by simulating a speeded response situation using a Go/No-go paradigm that forced the participants to constantly make accelerated responses prior to stimuli disappearance (stimulus duration: 250 ms). Neural correlates of post-error processing were examined using event-related potentials (ERPs). Impulsivity traits were measured with self-report questionnaires (BIS-11, BIS/BAS). Behavioral results demonstrated that the commission error for No-go trials was 15%, but PES did not take place immediately. Delayed PES was negatively correlated with error rates and impulsivity traits, showing that response slowing was associated with reduced error rates and changed with impulsivity. Response-locked error ERPs were clearly observed for the error trials. Contrary to previous studies, error ERPs were not significantly related to PES. Stimulus-locked N2 was negatively correlated with PES and positively correlated with impulsivity traits at the second post-error Go trial: larger N2 activity was associated with greater PES and less impulsivity. In summary, under constant speeded conditions, error monitoring was dissociated from post-error action control, and PES did not occur quickly. Furthermore, PES and its neural correlate (N2) were modulated by impulsivity traits. These findings suggest that there may be clinical and practical efficacy of maintaining cognitive control of actions during error recovery under common daily environments that frequently evoke impulsive behaviors.
Elsaid, K; Truong, T; Monckeberg, M; McCarthy, H; Butera, J; Collins, C
2013-12-01
To evaluate the impact of electronic standardized chemotherapy templates on incidence and types of prescribing errors. A quasi-experimental interrupted time series with segmented regression. A 700-bed multidisciplinary tertiary care hospital with an ambulatory cancer center. A multidisciplinary team including oncology physicians, nurses, pharmacists and information technologists. Standardized, regimen-specific, chemotherapy prescribing forms were developed and implemented over a 32-month period. Trend of monthly prevented prescribing errors per 1000 chemotherapy doses during the pre-implementation phase (30 months), immediate change in the error rate from pre-implementation to implementation and trend of errors during the implementation phase. Errors were analyzed according to their types: errors in communication or transcription, errors in dosing calculation and errors in regimen frequency or treatment duration. Relative risk (RR) of errors in the post-implementation phase (28 months) compared with the pre-implementation phase was computed with 95% confidence interval (CI). Baseline monthly error rate was stable with 16.7 prevented errors per 1000 chemotherapy doses. A 30% reduction in prescribing errors was observed with initiating the intervention. With implementation, a negative change in the slope of prescribing errors was observed (coefficient = -0.338; 95% CI: -0.612 to -0.064). The estimated RR of transcription errors was 0.74; 95% CI (0.59-0.92). The estimated RR of dosing calculation errors was 0.06; 95% CI (0.03-0.10). The estimated RR of chemotherapy frequency/duration errors was 0.51; 95% CI (0.42-0.62). Implementing standardized chemotherapy-prescribing templates significantly reduced all types of prescribing errors and improved chemotherapy safety.
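A minimal sketch of the segmented (interrupted time series) regression, fit with statsmodels OLS: the design matrix carries a baseline intercept and trend, a level-change indicator at implementation, and a post-implementation slope-change term. The monthly rates are simulated placeholders, not the study's data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)

# Hypothetical monthly prevented-error rates: 30 months pre, 28 months post.
pre = 16.7 + 0.02 * np.arange(30) + rng.normal(0, 1.5, 30)
post = 11.5 - 0.30 * np.arange(28) + rng.normal(0, 1.5, 28)
y = np.concatenate([pre, post])

t = np.arange(58)                         # month index
step = (t >= 30).astype(float)            # level change at implementation
t_after = np.where(t >= 30, t - 30, 0.0)  # slope change after implementation

X = sm.add_constant(np.column_stack([t, step, t_after]))
fit = sm.OLS(y, X).fit()
print(fit.params)   # [baseline level, baseline trend, level change, trend change]
```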
Coppens-Hofman, Marjolein C.; Terband, Hayo; Snik, Ad F.M.; Maassen, Ben A.M.
2017-01-01
Purpose Adults with intellectual disabilities (ID) often show reduced speech intelligibility, which affects their social interaction skills. This study aims to establish the main predictors of this reduced intelligibility in order to ultimately optimise management. Method Spontaneous speech and picture naming tasks were recorded in 36 adults with mild or moderate ID. Twenty-five naïve listeners rated the intelligibility of the spontaneous speech samples. Performance on the picture-naming task was analysed by means of a phonological error analysis based on expert transcriptions. Results The transcription analyses showed that the phonemic and syllabic inventories of the speakers were complete. However, multiple errors at the phonemic and syllabic level were found. The frequencies of specific types of errors were related to intelligibility and quality ratings. Conclusions The development of the phonemic and syllabic repertoire appears to be completed in adults with mild-to-moderate ID. The charted speech difficulties can be interpreted to indicate speech motor control and planning difficulties. These findings may aid the development of diagnostic tests and speech therapies aimed at improving speech intelligibility in this specific group. PMID:28118637
NASA Astrophysics Data System (ADS)
Alvarez, Jose; Massey, Steven; Kalitsov, Alan; Velev, Julian
Nanopore sequencing via transverse current has emerged as a competitive candidate for mapping DNA methylation without the need for bisulfite treatment, fluorescent tags, or PCR amplification. By eliminating the error-producing amplification step, long read lengths become feasible, which greatly simplifies the assembly process and reduces the time and cost inherent in current technologies. However, due to the large error rates of nanopore sequencing, single-base resolution has not been reached. A very important source of noise is the intrinsic structural noise in the electric signature of the nucleotide arising from the influence of neighboring nucleotides. In this work we perform calculations of the tunneling current through DNA molecules in nanopores using the non-equilibrium electron transport method within an effective multi-orbital tight-binding model derived from first-principles calculations. We develop a base-calling algorithm accounting for the correlations of the current through neighboring bases, which in principle can reduce the error rate below any desired precision. Using this method we show that we can clearly distinguish DNA methylation and other base modifications based on the reading of the tunneling current.
Improved Conflict Detection for Reducing Operational Errors in Air Traffic Control
NASA Technical Reports Server (NTRS)
Paielli, Russell A.; Erzberger, Heinz
2003-01-01
An operational error is an incident in which an air traffic controller allows the separation between two aircraft to fall below the minimum separation standard. The rates of such errors in the US have increased significantly over the past few years. This paper proposes new detection methods that can help correct this trend by improving on the performance of Conflict Alert, the existing software in the Host Computer System that is intended to detect and warn controllers of imminent conflicts. In addition to the usual trajectory based on the flight plan, a "dead-reckoning" trajectory (current velocity projection) is also generated for each aircraft and checked for conflicts. Filters for reducing common types of false alerts were implemented. The new detection methods were tested in three different ways. First, a simple flightpath command language was developed to generate precisely controlled encounters for the purpose of testing the detection software. Second, written reports and tracking data were obtained for actual operational errors that occurred in the field, and these were "replayed" to test the new detection algorithms. Finally, the detection methods were used to shadow live traffic, and performance was analysed, particularly with regard to the false-alert rate. The results indicate that the new detection methods can provide timely warnings of imminent conflicts more consistently than Conflict Alert.
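The dead-reckoning check can be sketched as a straight-line projection of each aircraft along its current velocity, flagging a conflict when the projected horizontal separation falls below the minimum within a look-ahead window. The 5 NM minimum and 120 s look-ahead used here are illustrative assumptions, not Conflict Alert's actual parameters.

```python
import numpy as np

def dead_reckoning_conflict(p1, v1, p2, v2, sep_nm=5.0, lookahead_s=120.0):
    """Project both aircraft along current velocity and test separation.

    p1, p2 -- positions in NM (x, y); v1, v2 -- velocities in NM/s.
    Returns (conflict?, time of closest approach in s, minimum separation in NM).
    """
    dp = np.asarray(p2, float) - np.asarray(p1, float)
    dv = np.asarray(v2, float) - np.asarray(v1, float)
    if np.dot(dv, dv) < 1e-12:
        t_cpa = 0.0                       # constant relative geometry
    else:
        t_cpa = float(np.clip(-np.dot(dp, dv) / np.dot(dv, dv), 0.0, lookahead_s))
    min_sep = float(np.linalg.norm(dp + dv * t_cpa))
    return min_sep < sep_nm, t_cpa, min_sep

# Usage: two aircraft converging at roughly 480 kt ground speed each.
kt = 1.0 / 3600.0                          # knots -> NM per second
print(dead_reckoning_conflict((0, 0), (480 * kt, 0), (10, 6), (0, -480 * kt)))
```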
Hard decoding algorithm for optimizing thresholds under general Markovian noise
NASA Astrophysics Data System (ADS)
Chamberland, Christopher; Wallman, Joel; Beale, Stefanie; Laflamme, Raymond
2017-04-01
Quantum error correction is instrumental in protecting quantum systems from noise in quantum computing and communication settings. Pauli channels can be efficiently simulated and threshold values for Pauli error rates under a variety of error-correcting codes have been obtained. However, realistic quantum systems can undergo noise processes that differ significantly from Pauli noise. In this paper, we present an efficient hard decoding algorithm for optimizing thresholds and lowering failure rates of an error-correcting code under general completely positive and trace-preserving (i.e., Markovian) noise. We use our hard decoding algorithm to study the performance of several error-correcting codes under various non-Pauli noise models by computing threshold values and failure rates for these codes. We compare the performance of our hard decoding algorithm to decoders optimized for depolarizing noise and show improvements in thresholds and reductions in failure rates by several orders of magnitude. Our hard decoding algorithm can also be adapted to take advantage of a code's non-Pauli transversal gates to further suppress noise. For example, we show that using the transversal gates of the 5-qubit code allows arbitrary rotations around certain axes to be perfectly corrected. Furthermore, we show that Pauli twirling can increase or decrease the threshold depending upon the code properties. Lastly, we show that even if the physical noise model differs slightly from the hypothesized noise model used to determine an optimized decoder, failure rates can still be reduced by applying our hard decoding algorithm.
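The effect of Pauli twirling mentioned above can be illustrated on a single qubit: averaging a channel over conjugation by the Pauli group keeps only the diagonal of its Pauli transfer matrix, turning a non-Pauli channel such as amplitude damping into a Pauli channel. This is a standard construction shown for illustration, not the authors' decoder.

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]], complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)
paulis = [I, X, Y, Z]

def ptm(kraus):
    """Pauli transfer matrix R_ij = (1/2) Tr[P_i E(P_j)] of a channel E."""
    R = np.zeros((4, 4))
    for j, Pj in enumerate(paulis):
        out = sum(K @ Pj @ K.conj().T for K in kraus)
        for i, Pi in enumerate(paulis):
            R[i, j] = 0.5 * np.real(np.trace(Pi @ out))
    return R

# Amplitude damping (decay probability gamma) -- a distinctly non-Pauli channel.
gamma = 0.1
kraus_ad = [np.array([[1, 0], [0, np.sqrt(1 - gamma)]]),
            np.array([[0, np.sqrt(gamma)], [0, 0]])]

R = ptm(kraus_ad)
R_twirled = np.diag(np.diag(R))   # Pauli twirling keeps only the diagonal of the PTM
print(np.round(R, 3))
print(np.round(R_twirled, 3))
```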
Russ, Alissa L; Zillich, Alan J; Melton, Brittany L; Russell, Scott A; Chen, Siying; Spina, Jeffrey R; Weiner, Michael; Johnson, Elizabette G; Daggy, Joanne K; McManus, M Sue; Hawsey, Jason M; Puleo, Anthony G; Doebbeling, Bradley N; Saleem, Jason J
2014-01-01
Objective To apply human factors engineering principles to improve alert interface design. We hypothesized that incorporating human factors principles into alerts would improve usability, reduce workload for prescribers, and reduce prescribing errors. Materials and methods We performed a scenario-based simulation study using a counterbalanced, crossover design with 20 Veterans Affairs prescribers to compare original versus redesigned alerts. We redesigned drug–allergy, drug–drug interaction, and drug–disease alerts based upon human factors principles. We assessed usability (learnability of redesign, efficiency, satisfaction, and usability errors), perceived workload, and prescribing errors. Results Although prescribers received no training on the design changes, prescribers were able to resolve redesigned alerts more efficiently (median (IQR): 56 (47) s) compared to the original alerts (85 (71) s; p=0.015). In addition, prescribers rated redesigned alerts significantly higher than original alerts across several dimensions of satisfaction. Redesigned alerts led to a modest but significant reduction in workload (p=0.042) and significantly reduced the number of prescribing errors per prescriber (median (range): 2 (1–5) compared to original alerts: 4 (1–7); p=0.024). Discussion Aspects of the redesigned alerts that likely contributed to better prescribing include design modifications that reduced usability-related errors, providing clinical data closer to the point of decision, and displaying alert text in a tabular format. Displaying alert text in a tabular format may help prescribers extract information quickly and thereby increase responsiveness to alerts. Conclusions This simulation study provides evidence that applying human factors design principles to medication alerts can improve usability and prescribing outcomes. PMID:24668841
Davis, Stephen Jerome; Hurtado, Josephine; Nguyen, Rosemary; Huynh, Tran; Lindon, Ivan; Hudnall, Cedric; Bork, Sara
2017-01-01
Background: USP <797> regulatory requirements have mandated that pharmacies improve aseptic techniques and cleanliness of the medication preparation areas. In addition, the Institute for Safe Medication Practices (ISMP) recommends that technology and automation be used as much as possible for preparing and verifying compounded sterile products. Objective: To determine the benefits associated with the implementation of the workflow management system, such as reducing medication preparation and delivery errors, reducing quantity and frequency of medication errors, avoiding costs, and enhancing the organization's decision to move toward positive patient identification (PPID). Methods: At Texas Children's Hospital, data were collected and analyzed from January 2014 through August 2014 in the pharmacy areas in which the workflow management system would be implemented. Data were excluded for September 2014 during the workflow management system oral liquid implementation phase. Data were collected and analyzed from October 2014 through June 2015 to determine whether the implementation of the workflow management system reduced the quantity and frequency of reported medication errors. Data collected and analyzed during the study period included the quantity of doses prepared, number of incorrect medication scans, number of doses discontinued from the workflow management system queue, and the number of doses rejected. Data were collected and analyzed to identify patterns of incorrect medication scans, to determine reasons for rejected medication doses, and to determine the reduction in wasted medications. Results: During the 17-month study period, the pharmacy department dispensed 1,506,220 oral liquid and injectable medication doses. From October 2014 through June 2015, the pharmacy department dispensed 826,220 medication doses that were prepared and checked via the workflow management system. Of those 826,220 medication doses, there were 16 reported incorrect volume errors. The error rate after the implementation of the workflow management system averaged 8.4%, which was a 1.6% reduction. After the implementation of the workflow management system, the average number of reported oral liquid medication and injectable medication errors decreased to 0.4 and 0.2 times per week, respectively. Conclusion: The organization was able to achieve its purpose and goal of improving the provision of quality pharmacy care through optimal medication use and safety by reducing medication preparation errors. Error rates decreased and the workflow processes were streamlined, which has led to seamless operations within the pharmacy department. There has been significant cost avoidance and waste reduction and enhanced interdepartmental satisfaction due to the reduction of reported medication errors.
Control techniques to improve Space Shuttle solid rocket booster separation
NASA Technical Reports Server (NTRS)
Tomlin, D. D.
1983-01-01
The present Space Shuttle's control system does not prevent the Orbiter's main engines from being in gimbal positions that are adverse to solid rocket booster separation. By eliminating the attitude error and attitude rate feedback just prior to solid rocket booster separation, the detrimental effects of the Orbiter's main engines can be reduced. In addition, if angular acceleration feedback is applied, the gimbal torques produced by the Orbiter's engines can reduce the detrimental effects of the aerodynamic torques. This paper develops these control techniques and compares the separation capability of the developed control systems. Currently, with the worst case initial conditions and each Shuttle system dispersion aligned in the worst direction (which is more conservative than will be experienced in flight), the solid rocket booster has an interference with the Shuttle's external tank of 30 in. Elimination of the attitude error and attitude rate feedback reduces that interference to 19 in. Substitution of angular acceleration feedback reduces the interference to 6 in. The two latter interferences can be eliminated by less conservative analysis techniques, that is, by using a root sum square of the system dispersions.
Effects of a preceptorship programme on turnover rate, cost, quality and professional development.
Lee, Tso-Ying; Tzeng, Wen-Chii; Lin, Chia-Huei; Yeh, Mei-Ling
2009-04-01
The purpose of the present study was to design a preceptorship programme and to evaluate its effects on turnover rate, turnover cost, quality of care and professional development. A high turnover rate of nurses is a common global problem. How to improve nurses' willingness to stay in their jobs and reduce the high turnover rate has become a focus. Well-designed preceptorship programmes could possibly decrease turnover rates and improve professional development. A quasi-experimental research design was used. First, a preceptorship programme was designed to establish the role and responsibilities of preceptors in instructing new nurses. Second, a quasi-experimental design was used to evaluate the preceptorship programme. Data on new nurses' turnover rate, turnover cost, quality of nursing care, satisfaction with preceptors' teaching and preceptors' perceptions were measured. After conducting the preceptorship programme, the turnover rate was 46.5% less than the previous year. The turnover cost was decreased by US$186,102. Additionally, medication error rates among new nurses dropped from 50% to 0%, and incident rates of adverse events and falls decreased. All new nurses were satisfied with preceptor guidance. The preceptorship programme effectively lowered the turnover rate of new nurses, reduced turnover costs and enhanced the quality of nursing care, especially by reducing medication error incidents. Positive feedback about the programme was received from new nurses. Study findings may offer healthcare administrators another option for retaining new nurses, controlling costs, improving quality and fostering professional development. In addition, incentives and effective support from the organisation must be considered when preceptors perform preceptorship responsibilities.
Residents' numeric inputting error in computerized physician order entry prescription.
Wu, Xue; Wu, Changxu; Zhang, Kan; Wei, Dong
2016-04-01
Computerized physician order entry (CPOE) systems with embedded clinical decision support (CDS) can significantly reduce certain types of prescription error. However, prescription errors still occur. Various factors, such as the numeric inputting methods used in human-computer interaction (HCI), produce different error rates and types, but this has received relatively little attention. This study aimed to examine the effects of numeric inputting methods and urgency levels on numeric inputting errors in prescriptions, as well as to categorize the types of errors. Thirty residents participated in four prescribing tasks in which two factors were manipulated: numeric inputting method (numeric row in the main keyboard vs. numeric keypad) and urgency level (urgent situation vs. non-urgent situation). Multiple aspects of participants' prescribing behavior were measured in sober prescribing situations. The results revealed that in urgent situations, participants were prone to make mistakes when using the numeric row in the main keyboard. With control of performance in the sober prescribing situation, the effects of the input methods disappeared, and urgency was found to play a significant role in the generalized linear model. Most errors were either omission or substitution types, but the proportions of transposition and intrusion error types were significantly higher than in previous research. Among the numbers 3, 8, and 9, which were the less common digits used in prescriptions, the error rate was higher, posing a substantial risk to patient safety. Urgency played a more important role in CPOE numeric typing errors than typing skills and typing habits. Inputting with the numeric keypad is recommended because it produced lower error rates in urgent situations. An alternative design could consider increasing the sensitivity of the keys with lower frequency of occurrence and decimals. To improve the usability of CPOE, numeric keyboard design and error detection could benefit from the spatial incidence of errors found in this study. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
On Rater Agreement and Rater Training
ERIC Educational Resources Information Center
Wang, Binhong
2010-01-01
This paper first analyzes two studies on rater factors and rating criteria to raise the problem of rater agreement. The author then reveals the causes of discrepancies in rating administration by discussing rater variability and rater bias. The author argues that rater bias cannot be eliminated completely; we can only reduce the error to a…
A methodology to reduce uncertainties in the high-flow portion of a rating curve
USDA-ARS's Scientific Manuscript database
Flow monitoring at watershed scale relies on the establishment of a rating curve that describes the relationship between stage and flow and is developed from actual flow measurements at various stages. Measurement errors increase with out-of-bank flow conditions because of safety concerns and diffic...
Steiner, Lisa; Burgess-Limerick, Robin; Porter, William
2014-03-01
The authors examine the pattern of direction errors made during the manipulation of a physical simulation of an underground coal mine bolting machine to assess the directional control-response compatibility relationships associated with the device and to compare these results to data obtained from a virtual simulation of a generic device. Directional errors during the manual control of underground coal roof bolting equipment are associated with serious injuries. Directional control-response relationships have previously been examined using a virtual simulation of a generic device; however, the applicability of these results to a specific physical device may be questioned. Forty-eight participants randomly assigned to different directional control-response relationships manipulated horizontal or vertical control levers to move a simulated bolter arm in three directions (elevation, slew, and sump) as well as to cause a light to become illuminated and to raise or lower a stabilizing jack. Directional errors were recorded during the completion of 240 trials by each participant. Directional error rates increased when the control and response were in opposite directions or when the directions of the control and response were perpendicular. The pattern of direction error rates was consistent with results obtained from a generic device in a virtual environment. Error rates are increased by incompatible directional control-response relationships. Ensuring that the design of equipment controls maintains compatible directional control-response relationships has the potential to reduce the errors made in high-risk situations, such as underground coal mining.
Huff, Mark J; Umanath, Sharda
2018-06-01
In 2 experiments, we assessed age-related suggestibility to additive and contradictory misinformation (i.e., remembering of false details from an external source). After reading a fictional story, participants answered questions containing misleading details that were either additive (misleading details that supplemented an original event) or contradictory (errors that changed original details). On a final test, suggestibility was greater for additive than contradictory misinformation, and older adults endorsed fewer false contradictory details than younger adults. To mitigate suggestibility in Experiment 2, participants were warned about potential errors, instructed to detect errors, or instructed to detect errors after exposure to examples of additive and contradictory details. Again, suggestibility to additive misinformation was greater than to contradictory misinformation, and older adults endorsed less contradictory misinformation. Only after detection instructions with misinformation examples were younger adults able to reduce contradictory misinformation effects, bringing these effects down to the level of older adults. Additive misinformation, however, was immune to all warning and detection instructions. Thus, older adults were less susceptible to contradictory misinformation errors, and younger adults could match this misinformation rate when warning/detection instructions were strong. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Fast and memory efficient text image compression with JBIG2.
Ye, Yan; Cosman, Pamela
2003-01-01
In this paper, we investigate ways to reduce encoding time, memory consumption and substitution errors for text image compression with JBIG2. We first look at page striping where the encoder splits the input image into horizontal stripes and processes one stripe at a time. We propose dynamic dictionary updating procedures for page striping to reduce the bit rate penalty it incurs. Experiments show that splitting the image into two stripes can save 30% of encoding time and 40% of physical memory with a small coding loss of about 1.5%. Using more stripes brings further savings in time and memory but the return diminishes. We also propose an adaptive way to update the dictionary only when it has become out-of-date. The adaptive updating scheme can resolve the time versus bit rate tradeoff and the memory versus bit rate tradeoff well simultaneously. We then propose three speedup techniques for pattern matching, the most time-consuming encoding activity in JBIG2. When combined together, these speedup techniques can save up to 75% of the total encoding time with at most 1.7% of bit rate penalty. Finally, we look at improving reconstructed image quality for lossy compression. We propose enhanced prescreening and feature monitored shape unifying to significantly reduce substitution errors in the reconstructed images.
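The "update the dictionary only when it has become out-of-date" idea can be sketched as a simple trigger on the recent rate of symbols that fail to match any dictionary entry. The window size and threshold are illustrative assumptions, not the paper's actual criterion.

```python
def should_update_dictionary(miss_history, window=200, threshold=0.25):
    """Decide whether the symbol dictionary has become out-of-date.

    miss_history -- 1/0 flags: 1 when a new symbol failed to match any
                    dictionary entry and had to be coded directly.
    The dictionary is rebuilt when the recent miss rate exceeds `threshold`.
    """
    recent = miss_history[-window:]
    if len(recent) < window:
        return False                       # not enough evidence yet
    return sum(recent) / len(recent) > threshold

# Example: a font change midway through a stripe drives the miss rate up.
history = [0] * 300 + [1, 1, 0, 1] * 60
print(should_update_dictionary(history))   # True -> rebuild before the next stripe
```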
UAS Well Clear Recovery Against Non-Cooperative Intruders Using Vertical Maneuvers
NASA Technical Reports Server (NTRS)
Cone, Andrew C.; Thipphavong, David; Lee, Seung Man; Santiago, Confesor
2017-01-01
This paper documents a study that drove the development of a mathematical expression in the detect-and-avoid (DAA) minimum operational performance standards (MOPS) for unmanned aircraft systems (UAS). This equation describes the conditions under which vertical maneuver guidance should be provided during recovery of DAA well clear separation with a non-cooperative VFR aircraft. Although the original hypothesis was that vertical maneuvers for DAA well clear recovery should only be offered when sensor vertical rate errors are small, this paper suggests that UAS climb and descent performance should be considered-in addition to sensor errors for vertical position and vertical rate-when determining whether to offer vertical guidance. A fast-time simulation study involving 108,000 encounters between a UAS and a non-cooperative visual-flight-rules aircraft was conducted. Results are presented showing that, when vertical maneuver guidance for DAA well clear recovery was suppressed, the minimum vertical separation increased by roughly 50 feet (or horizontal separation by 500 to 800 feet). However, the percentage of encounters that had a risk of collision when performing vertical well clear recovery maneuvers was reduced as UAS vertical rate performance increased and sensor vertical rate errors decreased. A class of encounter is identified for which vertical-rate error had a large effect on the efficacy of horizontal maneuvers due to the difficulty of making the correct left/right turn decision: crossing conflict with intruder changing altitude. Overall, these results support logic that would allow vertical maneuvers when UAS vertical performance is sufficient to avoid the intruder, based on the intruder's estimated vertical position and vertical rate, as well as the vertical rate error of the UAS' sensor.
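A hedged sketch of the kind of gate the study supports (the actual MOPS expression is not reproduced in the abstract): vertical recovery guidance is offered only if the UAS can out-climb the intruder's worst-case vertical motion, inflated by the sensor's vertical-rate and altitude error bounds, before the closest point of approach. The 450 ft separation target and the simple kinematics are illustrative assumptions.

```python
def allow_vertical_guidance(uas_climb_fps, intr_vs_fps, vs_error_fps,
                            alt_error_ft, t_cpa_s, required_sep_ft=450.0):
    """Illustrative gate for offering a vertical well-clear-recovery maneuver.

    uas_climb_fps -- achievable UAS climb/descent rate magnitude (ft/s)
    intr_vs_fps   -- estimated intruder vertical speed (ft/s)
    vs_error_fps  -- sensor vertical-rate error bound (ft/s)
    alt_error_ft  -- sensor vertical-position error bound (ft)
    t_cpa_s       -- time remaining to closest point of approach (s)
    Returns True when the UAS can plausibly open the required vertical gap
    against the worst-case intruder motion within t_cpa_s.
    """
    worst_intruder_climb = abs(intr_vs_fps) + vs_error_fps
    achievable_gap = (uas_climb_fps - worst_intruder_climb) * t_cpa_s
    return achievable_gap >= required_sep_ft + alt_error_ft

# Example: a slow-climbing UAS with a noisy vertical-rate estimate should be
# given horizontal guidance instead.
print(allow_vertical_guidance(uas_climb_fps=8, intr_vs_fps=5,
                              vs_error_fps=6, alt_error_ft=150, t_cpa_s=40))
```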
Clawson, Ann; Clayson, Peter E; Keith, Cierra M; Catron, Christina; Larson, Michael J
2017-03-01
Cognitive control includes higher-level cognitive processes used to evaluate environmental conflict. Given the importance of cognitive control in regulating behavior, understanding the developmental course of these processes may contribute to a greater understanding of normal and abnormal development. We examined behavioral (response times [RTs], error rates) and event-related potential data (N2, error-related negativity [ERN], correct-response negativity [CRN], error positivity [Pe]) during a flanker task in cross-sectional groups of 45 youth (ages 8-18), 52 younger adults (ages 20-28), and 58 older adults (ages 56-91). Younger adults displayed the most efficient processing, including significantly reduced CRN and N2 amplitude, increased Pe amplitude, and significantly better task performance than youth or older adults (e.g., faster RTs, fewer errors). Youth displayed larger CRN and N2, attenuated Pe, and significantly worse task performance than younger adults. Older adults fell either between youth and younger adults (e.g., CRN amplitudes, N2 amplitudes) or displayed neural and behavioral performance that was similar to youth (e.g., Pe amplitudes, error rates). These findings point to underdeveloped neural and cognitive processes early in life and reduced efficiency in older adulthood, contributing to poor implementation and modulation of cognitive control in response to conflict. Thus, cognitive control processing appears to reach peak performance and efficiency in younger adulthood, marked by improved task performance with less neural activation. Copyright © 2017 Elsevier B.V. All rights reserved.
A Real-Time Position-Locating Algorithm for CCD-Based Sunspot Tracking
NASA Technical Reports Server (NTRS)
Taylor, Jaime R.
1996-01-01
NASA Marshall Space Flight Center's (MSFC) EXperimental Vector Magnetograph (EXVM) polarimeter measures the sun's vector magnetic field. These measurements are taken to improve understanding of the sun's magnetic field in the hope of better predicting solar flares. Part of the procedure for the EXVM requires image motion stabilization over a period of a few minutes. A high speed tracker can be used to reduce image motion produced by wind loading on the EXVM, fluctuations in the atmosphere, and other vibrations. The tracker consists of two elements, an image motion detector and a control system. The image motion detector determines the image movement from one frame to the next and sends an error signal to the control system. The ground-based application of reducing image motion due to atmospheric fluctuations requires an error determination rate of at least 100 Hz. It would be desirable to have an error determination rate of 1 kHz to ensure that higher-rate image motion is reduced and to increase the control system stability. Two algorithms are presented that are typically used for tracking. These algorithms are examined for their applicability to tracking sunspots, specifically their accuracy if only one column and one row of CCD pixels are used. To examine the accuracy of this method two techniques are used. One involves moving a sunspot image a known distance with computer software, then applying the particular algorithm to see how accurately it determines this movement. The second technique involves using a rate table to control the object motion, then applying the algorithms to see how accurately each determines the actual motion. Results from these two techniques are presented.
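One way to read the single-row/single-column idea is as two 1-D cross-correlations: the intensity profile of one CCD row (and one column) in the current frame is correlated against the reference frame, and the lag of the correlation peak gives the x (and y) image shift for the error signal. A minimal numpy sketch under the assumption that the sunspot stays inside the correlation window:

```python
import numpy as np

def shift_1d(ref_profile, cur_profile, max_shift=20):
    """Estimate the integer-pixel shift of cur_profile relative to ref_profile."""
    ref = ref_profile - ref_profile.mean()
    cur = cur_profile - cur_profile.mean()
    lags = np.arange(-max_shift, max_shift + 1)
    scores = [np.sum(ref * np.roll(cur, -lag)) for lag in lags]
    return int(lags[int(np.argmax(scores))])

def track_error(ref_frame, cur_frame, row, col):
    """x/y error signal computed from one CCD row and one CCD column only."""
    dx = shift_1d(ref_frame[row, :], cur_frame[row, :])
    dy = shift_1d(ref_frame[:, col], cur_frame[:, col])
    return dx, dy

# Usage: a dark sunspot displaced by (3, -2) pixels is recovered from
# just one row and one column of the detector.
y, x = np.mgrid[0:128, 0:128]
spot = lambda cx, cy: 1.0 - 0.8 * np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / 50.0)
ref, cur = spot(64, 64), spot(67, 62)
print(track_error(ref, cur, row=64, col=64))
```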
[The factors affecting the results of mechanical jaundice management].
Malkov, I S; Shaimardanov, R Sh; Korobkov, V N; Filippov, V A; Khisamiev, I G
To improve the results of obstructive jaundice management through rational diagnostic and treatment strategies. Outcomes of 820 patients with obstructive jaundice syndrome were analyzed. Diagnostic and tactical mistakes were made at the pre-hospital stage in 143 (17.4%) patients and at the hospital stage in 105 (12.8%); in 53 (6.5%) cases, errors were observed at both stages. Retrospective analysis of severe postoperative complications and lethal outcomes in patients with obstructive jaundice showed that in 23.8% of cases they were explained by diagnostic and tactical mistakes at various stages of examination and treatment. We developed an algorithm for obstructive jaundice management to reduce the number of diagnostic and tactical errors. Its use reduced the postoperative complication rate to 16.5% and the mortality rate to 3.0%.
Siebert, Johan N; Ehrler, Frederic; Combescure, Christophe; Lacroix, Laurence; Haddad, Kevin; Sanchez, Oliver; Gervaix, Alain; Lovis, Christian; Manzano, Sergio
2017-02-01
During pediatric cardiopulmonary resuscitation (CPR), vasoactive drug preparation for continuous infusion is both complex and time-consuming, placing children at higher risk than adults for medication errors. Following an evidence-based, ergonomics-driven approach, we developed a mobile device app called Pediatric Accurate Medication in Emergency Situations (PedAMINES), intended to guide caregivers step-by-step from preparation to delivery of drugs requiring continuous infusion. The aim of our study was to determine whether the use of PedAMINES reduces drug preparation time (TDP) and time to delivery (TDD; primary outcome), as well as medication errors (secondary outcomes), when compared with conventional preparation methods. The study was a randomized controlled crossover trial with 2 parallel groups comparing PedAMINES with a conventional and internationally used drug infusion rate table in the preparation of continuous drug infusions. We used a simulation-based pediatric CPR cardiac arrest scenario with a high-fidelity manikin in the shock room of a tertiary care pediatric emergency department. After epinephrine-induced return of spontaneous circulation, pediatric emergency nurses were first asked to prepare a continuous infusion of dopamine, using either PedAMINES (intervention group) or the infusion table (control group), and second, a continuous infusion of norepinephrine by crossing over to the other method. The primary outcome was the elapsed time in seconds, in each allocation group, from the oral prescription by the physician to TDD by the nurse. TDD included TDP. The secondary outcome was the medication dosage error rate during the sequence from drug preparation to drug injection. A total of 20 nurses were randomized into 2 groups. During the first study period, mean TDP while using PedAMINES and conventional preparation methods was 128.1 s (95% CI 102-154) and 308.1 s (95% CI 216-400), respectively (180 s reduction, P=.002). Mean TDD was 214 s (95% CI 171-256) and 391 s (95% CI 298-483), respectively (177.3 s reduction, P=.002). Medication errors were reduced from 70% to 0% (P<.001) by using PedAMINES when compared with conventional methods. In this simulation-based study, PedAMINES dramatically reduced drug preparation time, time to delivery, and the rate of medication errors. ©Johan N Siebert, Frederic Ehrler, Christophe Combescure, Laurence Lacroix, Kevin Haddad, Oliver Sanchez, Alain Gervaix, Christian Lovis, Sergio Manzano. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 01.02.2017.
Pilot error in air carrier mishaps: longitudinal trends among 558 reports, 1983-2002.
Baker, Susan P; Qiang, Yandong; Rebok, George W; Li, Guohua
2008-01-01
Many interventions have been implemented in recent decades to reduce pilot error in flight operations. This study aims to identify longitudinal trends in the prevalence and patterns of pilot error and other factors in U.S. air carrier mishaps. National Transportation Safety Board investigation reports were examined for 558 air carrier mishaps during 1983-2002. Pilot errors and circumstances of mishaps were described and categorized. Rates were calculated per 10 million flights. The overall mishap rate remained fairly stable, but the proportion of mishaps involving pilot error decreased from 42% in 1983-87 to 25% in 1998-2002, a 40% reduction. The rate of mishaps related to poor decisions declined from 6.2 to 1.8 per 10 million flights, a 71% reduction; much of this decrease was due to a 76% reduction in poor decisions related to weather. Mishandling wind or runway conditions declined by 78%. The rate of mishaps involving poor crew interaction declined by 68%. Mishaps during takeoff declined by 70%, from 5.3 to 1.6 per 10 million flights. The latter reduction was offset by an increase in mishaps while the aircraft was standing, from 2.5 to 6.0 per 10 million flights, and during pushback, which increased from 0 to 3.1 per 10 million flights. Reductions in pilot errors involving decision making and crew coordination are important trends that may reflect improvements in training and technological advances that facilitate good decisions. Mishaps while aircraft are standing and during pushback have increased and deserve special attention.
NASA Astrophysics Data System (ADS)
Gourdji, S.; Yadav, V.; Karion, A.; Mueller, K. L.; Kort, E. A.; Conley, S.; Ryerson, T. B.; Nehrkorn, T.
2017-12-01
The ability of atmospheric inverse models to detect, spatially locate and quantify emissions from large point sources in urban domains needs improvement before inversions can be used reliably as carbon monitoring tools. In this study, we use the Aliso Canyon natural gas leak from October 2015 to February 2016 (near Los Angeles, CA) as a natural tracer experiment to assess inversion quality by comparison with published estimates of leak rates calculated using a mass balance approach (Conley et al., 2016). Fourteen dedicated flights were flown in horizontal transects downwind and throughout the duration of the leak to sample CH4 mole fractions and collect meteorological information for use in the mass-balance estimates. The same CH4 observational data were then used here in geostatistical inverse models with no prior assumptions about the leak location or emission rate and flux sensitivity matrices generated using the WRF-STILT atmospheric transport model. Transport model errors were assessed by comparing WRF-STILT wind speeds, wind direction and planetary boundary layer (PBL) height to those observed on the plane; the impact of these errors in the inversions, and the optimal inversion setup for reducing their influence was also explored. WRF-STILT provides a reasonable simulation of true atmospheric conditions on most flight dates, given the complex terrain and known difficulties in simulating atmospheric transport under such conditions. Moreover, even large (>120°) errors in wind direction were found to be tolerable in terms of spatially locating the leak rate within a 5-km radius of the actual site. Errors in the WRF-STILT wind speed (>50%) and PBL height have more negative impacts on the inversions, with too high wind speeds (typically corresponding with too low PBL heights) resulting in overestimated leak rates, and vice-versa. Coarser data averaging intervals and the use of observed wind speed errors in the model-data mismatch covariance matrix are shown to help reduce the influence of transport model errors, by averaging out compensating errors and de-weighting the influence of problematic observations. This study helps to enable the integration of aircraft measurements with other tower-based data in larger inverse models that can reliably detect, locate and quantify point source emissions in urban areas.
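As a hedged illustration of the de-weighting idea described above (not the study's geostatistical inversion), the sketch below estimates a single source rate by inverse-variance weighted least squares, inflating the model-data mismatch variance for observations with larger wind-speed error. All variable names and values are invented for the example.

```python
import numpy as np

# Minimal sketch (illustrative only): estimate a single point-source rate q
# from enhancements y = h*q + noise, where h is the transport-model footprint.
# Observations with larger wind-speed error get larger model-data mismatch
# variance and therefore less weight in the estimate.
rng = np.random.default_rng(0)
true_q = 50.0                                 # hypothetical source rate
h = rng.uniform(0.5, 2.0, size=30)            # footprint sensitivities (assumed)
wind_err = rng.uniform(0.05, 0.6, size=30)    # fractional wind-speed error
sigma = 1.0 + 5.0 * wind_err                  # mismatch std grows with wind error
y = h * true_q + rng.normal(0.0, sigma)

w = 1.0 / sigma**2                            # inverse-variance weights
q_hat = np.sum(w * h * y) / np.sum(w * h**2)
print(q_hat)                                  # close to true_q
```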
STAG2 promotes error correction in mitosis by regulating kinetochore-microtubule attachments.
Kleyman, Marianna; Kabeche, Lilian; Compton, Duane A
2014-10-01
Mutations in the STAG2 gene are present in ∼20% of tumors from different tissues of origin. STAG2 encodes a subunit of the cohesin complex, and tumors with loss-of-function mutations are usually aneuploid and display elevated frequencies of lagging chromosomes during anaphase. Lagging chromosomes are a hallmark of chromosomal instability (CIN) arising from persistent errors in kinetochore-microtubule (kMT) attachment. To determine whether the loss of STAG2 increases the rate of formation of kMT attachment errors or decreases the rate of their correction, we examined mitosis in STAG2-deficient cells. STAG2 depletion does not impair bipolar spindle formation or delay mitotic progression. Instead, loss of STAG2 permits excessive centromere stretch along with hyperstabilization of kMT attachments. STAG2-deficient cells display mislocalization of Bub1 kinase, Bub3 and the chromosome passenger complex. Importantly, strategically destabilizing kMT attachments in tumor cells harboring STAG2 mutations by overexpression of the microtubule-destabilizing enzymes MCAK (also known as KIF2C) and Kif2B decreased the rate of lagging chromosomes and reduced the rate of chromosome missegregation. These data demonstrate that STAG2 promotes the correction of kMT attachment errors to ensure faithful chromosome segregation during mitosis. © 2014. Published by The Company of Biologists Ltd.
Sada, Oumer; Melkie, Addisu; Shibeshi, Workineh
2015-09-16
Medication errors (MEs) are important problems in all hospitalized populations, especially in the intensive care unit (ICU). Little is known about the prevalence of medication prescribing errors in the ICU of hospitals in Ethiopia. The aim of this study was to assess medication prescribing errors in the ICU of Tikur Anbessa Specialized Hospital using a retrospective cross-sectional analysis of patient cards and medication charts. About 220 patient charts were reviewed, covering a total of 1311 patient-days and 882 prescription episodes. 359 MEs were detected, a prevalence of 40 per 100 orders. Common prescribing errors were omission errors, 154 (42.89%); wrong combination, 101 (28.13%); wrong abbreviation, 48 (13.37%); wrong dose, 30 (8.36%); wrong frequency, 18 (5.01%); and wrong indication, 8 (2.23%). The present study shows that medication errors are common in the medical ICU of Tikur Anbessa Specialized Hospital. These results suggest future targets of prevention strategies to reduce the rate of medication error.
USDA-ARS?s Scientific Manuscript database
Flow monitoring at watershed scale relies on the establishment of a rating curve that describes the relationship between stage and flow and is developed from actual flow measurements at various stages. Measurement errors increase with out-of-bank flow conditions because of safety concerns and diffic...
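The truncated abstract above concerns stage-discharge rating curves. As a hedged illustration of how such a curve is commonly developed from paired stage and flow measurements, the sketch below fits a power-law rating curve Q = a·(h − h0)^b by log-linear regression; the data points and the datum offset h0 are synthetic assumptions, not values from the manuscript.

```python
import numpy as np

# Fit Q = a * (h - h0)**b to paired stage/flow measurements (synthetic data).
h0 = 0.2                                                 # assumed datum offset (m)
stage = np.array([0.5, 0.8, 1.1, 1.6, 2.2, 3.0])         # stage (m)
flow = np.array([0.9, 2.4, 4.6, 9.8, 19.0, 36.0])        # discharge (m^3/s)

b, log_a = np.polyfit(np.log(stage - h0), np.log(flow), 1)
a = np.exp(log_a)
print(a, b)
print(a * (2.5 - h0) ** b)   # estimated flow at a stage of 2.5 m
```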
Alsulami, Zayed; Choonara, Imti; Conroy, Sharon
2014-06-01
To evaluate how closely double-checking policies are followed by nurses in paediatric areas and also to identify the types, frequency and rates of medication administration errors that occur despite the double-checking process. Double-checking by two nurses is an intervention used in many UK hospitals to prevent or reduce medication administration errors. There is, however, insufficient evidence to either support or refute the practice of double-checking in terms of medication error risk reduction. Prospective observational study. This was a prospective observational study of paediatric nurses' adherence to the double-checking process for medication administration from April-July 2012. Drug dose administration events (n = 2000) were observed. Independent drug dose calculation, rate of administering intravenous bolus drugs and labelling of flush syringes were the steps with lowest adherence rates. Drug dose calculation was only double-checked independently in 591 (30%) drug administrations. There was a statistically significant difference in nurses' adherence rate to the double-checking steps between weekdays and weekends in nine of the 15 evaluated steps. Medication administration errors (n = 191) or deviations from policy were observed, at a rate of 9·6% of drug administrations. These included 64 drug doses, which were left for parents to administer without nurse observation. There was variation between paediatric nurses' adherence to double-checking steps during medication administration. The most frequent type of administration errors or deviation from policy involved the medicine being given to the parents to administer to the child when the nurse was not present. © 2013 John Wiley & Sons Ltd.
Schnock, Kumiko O; Biggs, Bonnie; Fladger, Anne; Bates, David W; Rozenblum, Ronen
2017-02-22
Retained surgical instruments (RSI) are one of the most serious preventable complications in operating room settings, potentially leading to profound adverse effects for patients, as well as costly legal and financial consequences for hospitals. Safety measures to eliminate RSIs have been widely adopted in the United States and abroad, but despite widespread efforts, medical errors with RSI have not been eliminated. Through a systematic review of recent studies, we aimed to identify the impact of radio frequency identification (RFID) technology on reducing RSI errors and improving patient safety. A literature search on the effects of RFID technology on RSI error reduction was conducted in PubMed and CINAHL (2000-2016). Relevant articles were selected and reviewed by 4 researchers. After the literature search, 385 articles were identified and the full texts of the 88 articles were assessed for eligibility. Of these, 5 articles were included to evaluate the benefits and drawbacks of using RFID for preventing RSI-related errors. The use of RFID resulted in rapid detection of RSI through body tissue with high accuracy rates, reducing risk of counting errors and improving workflow. Based on the existing literature, RFID technology seems to have the potential to substantially improve patient safety by reducing RSI errors, although the body of evidence is currently limited. Better designed research studies are needed to get a clear understanding of this domain and to find new opportunities to use this technology and improve patient safety.
Fast QC-LDPC code for free space optical communication
NASA Astrophysics Data System (ADS)
Wang, Jin; Zhang, Qi; Udeh, Chinonso Paschal; Wu, Rangzhong
2017-02-01
Free Space Optical (FSO) communication systems use the atmosphere as the propagation medium, so atmospheric turbulence produces multiplicative noise related to signal intensity. To suppress the signal fading induced by this multiplicative noise, we propose a fast Quasi-Cyclic (QC) Low-Density Parity-Check (LDPC) code for FSO communication systems. As a linear block code based on a sparse matrix, the QC-LDPC code performs extremely close to the Shannon limit. Studies of LDPC codes in FSO communications have so far focused mainly on Gaussian and Rayleigh channels; in this study, the LDPC code is designed for an atmospheric turbulence channel, which is neither Gaussian nor Rayleigh and is closer to the practical situation. Based on the characteristics of the atmospheric channel, modeled with the log-normal and K distributions, we designed a special QC-LDPC code and derived the log-likelihood ratio (LLR). An irregular QC-LDPC code for fast encoding with variable code rates is proposed. The proposed code achieves the excellent performance expected of LDPC codes: high efficiency at low rates, stability at high rates, and a reduced number of decoding iterations. Belief propagation (BP) decoding shows that the bit error rate (BER) decreases markedly as the signal-to-noise ratio (SNR) increases, so LDPC channel coding can effectively improve FSO performance. Moreover, the post-decoding BER continues to fall as the SNR increases, without exhibiting an error floor.
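The paper's QC-LDPC construction and LLR derivation are not reproduced here; as a hedged, minimal illustration of the channel it targets, the sketch below runs a Monte Carlo BER estimate for uncoded on-off keying over a unit-mean log-normal scintillation channel and compares it with the non-fading case. The scintillation strength, amplitude, and threshold are assumed values; a channel code such as LDPC would be applied on top of this link to recover the fading loss.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500_000
amp = 6.0                      # signal amplitude; noise std is 1 (assumed)
sigma_s = 0.3                  # log-amplitude scintillation strength (assumed)

bits = rng.integers(0, 2, n)
scint = np.exp(rng.normal(-0.5 * sigma_s**2, sigma_s, n))   # unit-mean fading
noise = rng.normal(0.0, 1.0, n)

rx_fading = bits * amp * scint + noise     # turbulent (log-normal) channel
rx_awgn = bits * amp + noise               # non-fading reference channel
thresh = amp / 2.0

ber_fading = np.mean((rx_fading > thresh) != bits.astype(bool))
ber_awgn = np.mean((rx_awgn > thresh) != bits.astype(bool))
print(ber_fading, ber_awgn)                # fading BER is noticeably higher
```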
Philippoff, Joanna; Baumgartner, Erin
2016-03-01
The scientific value of citizen-science programs is limited when the data gathered are inconsistent, erroneous, or otherwise unusable. Long-term monitoring studies, such as Our Project In Hawai'i's Intertidal (OPIHI), have clear and consistent procedures and are thus a good model for evaluating the quality of participant data. The purpose of this study was to examine the kinds of errors made by student researchers during OPIHI data collection and factors that increase or decrease the likelihood of these errors. Twenty-four different types of errors were grouped into four broad error categories: missing data, sloppiness, methodological errors, and misidentification errors. "Sloppiness" was the most prevalent error type. Error rates decreased with field trip experience and student age. We suggest strategies to reduce data collection errors applicable to many types of citizen-science projects including emphasizing neat data collection, explicitly addressing and discussing the problems of falsifying data, emphasizing the importance of using standard scientific vocabulary, and giving participants multiple opportunities to practice to build their data collection techniques and skills.
Automated Identification of Abnormal Adult EEGs
López, S.; Suarez, G.; Jungreis, D.; Obeid, I.; Picone, J.
2016-01-01
The interpretation of electroencephalograms (EEGs) is a process that is still dependent on the subjective analysis of the examiners. Though interrater agreement on critical events such as seizures is high, it is much lower on subtler events (e.g., when there are benign variants). The process used by an expert to interpret an EEG is quite subjective and hard to replicate by machine. The performance of machine learning technology is far from human performance. We have been developing an interpretation system, AutoEEG, with a goal of exceeding human performance on this task. In this work, we are focusing on one of the early decisions made in this process – whether an EEG is normal or abnormal. We explore two baseline classification algorithms: k-Nearest Neighbor (kNN) and Random Forest Ensemble Learning (RF). A subset of the TUH EEG Corpus was used to evaluate performance. Principal Components Analysis (PCA) was used to reduce the dimensionality of the data. kNN achieved a 41.8% detection error rate while RF achieved an error rate of 31.7%. These error rates are significantly lower than those obtained by random guessing based on priors (49.5%). The majority of the errors were related to misclassification of normal EEGs. PMID:27195311
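A hedged sketch of the baseline pipeline described above (PCA for dimensionality reduction followed by kNN or Random Forest classification), using synthetic stand-in features rather than the TUH EEG Corpus; the classifier settings and feature dimensions are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 200))            # synthetic EEG-derived features
y = rng.integers(0, 2, 600)                # 0 = normal, 1 = abnormal (toy labels)
X[y == 1, :10] += 0.8                      # inject a weak class signal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for name, clf in [("kNN", KNeighborsClassifier(n_neighbors=5)),
                  ("RF", RandomForestClassifier(n_estimators=200, random_state=0))]:
    model = make_pipeline(PCA(n_components=20), clf)
    model.fit(X_tr, y_tr)
    err = 1.0 - model.score(X_te, y_te)
    print(name, f"error rate = {err:.3f}")
```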
Digital Intraoral Imaging Re-Exposure Rates of Dental Students.
Senior, Anthea; Winand, Curtis; Ganatra, Seema; Lai, Hollis; Alsulfyani, Noura; Pachêco-Pereira, Camila
2018-01-01
A guiding principle of radiation safety is ensuring that radiation dosage is as low as possible while yielding the necessary diagnostic information. Intraoral images taken with conventional dental film have a higher re-exposure rate when taken by dental students compared to experienced staff. The aim of this study was to examine the prevalence of and reasons for re-exposure of digital intraoral images taken by third- and fourth-year dental students in a dental school clinic. At one dental school in Canada, the total number of intraoral images taken by third- and fourth-year dental students, re-exposures, and error descriptions were extracted from patient clinical records for an eight-month period (September 2015 to April 2016). The data were categorized to distinguish between digital images taken with solid-state sensors or photostimulable phosphor plates (PSP). The results showed that 9,397 intraoral images were made, and 1,064 required re-exposure. The most common error requiring re-exposure for bitewing images was an error in placement of the receptor too far mesially or distally (29% for sensors and 18% for PSP). The most common error requiring re-exposure for periapical images was inadequate capture of the periapical area (37% for sensors and 6% for PSP). A retake rate of 11% was calculated, and the common technique errors causing image deficiencies were identified. Educational intervention can now be specifically designed to reduce the retake rate and radiation dose for future patients.
Using snowball sampling method with nurses to understand medication administration errors.
Sheu, Shuh-Jen; Wei, Ien-Lan; Chen, Ching-Huey; Yu, Shu; Tang, Fu-In
2009-02-01
We aimed to encourage nurses to release information about drug administration errors to increase understanding of error-related circumstances and to identify high-alert situations. Drug administration errors represent the majority of medication errors, but errors are underreported. Effective ways are lacking to encourage nurses to actively report errors. Snowball sampling was conducted to recruit participants. A semi-structured questionnaire was used to record types of error, hospital and nurse backgrounds, patient consequences, error discovery mechanisms and reporting rates. Eighty-five nurses participated, reporting 328 administration errors (259 actual, 69 near misses). Most errors occurred in medical surgical wards of teaching hospitals, during day shifts, committed by nurses working fewer than two years. Leading errors were wrong drugs and doses, each accounting for about one-third of total errors. Among 259 actual errors, 83.8% resulted in no adverse effects; among remaining 16.2%, 6.6% had mild consequences and 9.6% had serious consequences (severe reaction, coma, death). Actual errors and near misses were discovered mainly through double-check procedures by colleagues and nurses responsible for errors; reporting rates were 62.5% (162/259) vs. 50.7% (35/69) and only 3.5% (9/259) vs. 0% (0/69) were disclosed to patients and families. High-alert situations included administration of 15% KCl, insulin and Pitocin; using intravenous pumps; and implementation of cardiopulmonary resuscitation (CPR). Snowball sampling proved to be an effective way to encourage nurses to release details concerning medication errors. Using empirical data, we identified high-alert situations. Strategies for reducing drug administration errors by nurses are suggested. Survey results suggest that nurses should double check medication administration in known high-alert situations. Nursing management can use snowball sampling to gather error details from nurses in a non-reprimanding atmosphere, helping to establish standard operational procedures for known high-alert situations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kalapurakal, John A., E-mail: j-kalapurakal@northwestern.edu; Zafirovski, Aleksandar; Smith, Jeffery
Purpose: This report describes the value of a voluntary error reporting system and the impact of a series of quality assurance (QA) measures including checklists and timeouts on reported error rates in patients receiving radiation therapy. Methods and Materials: A voluntary error reporting system was instituted with the goal of recording errors, analyzing their clinical impact, and guiding the implementation of targeted QA measures. In response to errors committed in relation to treatment of the wrong patient, wrong treatment site, and wrong dose, a novel initiative involving the use of checklists and timeouts for all staff was implemented. The impact of these and other QA initiatives was analyzed. Results: From 2001 to 2011, a total of 256 errors in 139 patients after 284,810 external radiation treatments (0.09% per treatment) were recorded in our voluntary error database. The incidence of errors related to patient/tumor site, treatment planning/data transfer, and patient setup/treatment delivery was 9%, 40.2%, and 50.8%, respectively. The compliance rate for the checklists and timeouts initiative was 97% (P<.001). These and other QA measures resulted in a significant reduction in many categories of errors. The introduction of checklists and timeouts has been successful in eliminating errors related to wrong patient, wrong site, and wrong dose. Conclusions: A comprehensive QA program that regularly monitors staff compliance together with a robust voluntary error reporting system can reduce or eliminate errors that could result in serious patient injury. We recommend the adoption of these relatively simple QA initiatives including the use of checklists and timeouts for all staff to improve the safety of patients undergoing radiation therapy in the modern era.
Wang, Hua-fen; Jin, Jing-fen; Feng, Xiu-qin; Huang, Xin; Zhu, Ling-ling; Zhao, Xiao-ying; Zhou, Quan
2015-01-01
Background Medication errors may occur during prescribing, transcribing, prescription auditing, preparing, dispensing, administration, and monitoring. Medication administration errors (MAEs) are those that actually reach patients and remain a threat to patient safety. The Joint Commission International (JCI) advocates medication error prevention, but experience in reducing MAEs during the period of before and after JCI accreditation has not been reported. Methods An intervention study, aimed at reducing MAEs in hospitalized patients, was performed in the Second Affiliated Hospital of Zhejiang University, Hangzhou, People’s Republic of China, during the journey to JCI accreditation and in the post-JCI accreditation era (first half-year of 2011 to first half-year of 2014). Comprehensive interventions included organizational, information technology, educational, and process optimization-based measures. Data mining was performed on MAEs derived from a compulsory electronic reporting system. Results The number of MAEs continuously decreased from 143 (first half-year of 2012) to 64 (first half-year of 2014), with a decrease in occurrence rate by 60.9% (0.338% versus 0.132%, P<0.05). The number of MAEs related to high-alert medications decreased from 32 (the second half-year of 2011) to 16 (the first half-year of 2014), with a decrease in occurrence rate by 57.9% (0.0787% versus 0.0331%, P<0.05). Omission was the top type of MAE during the first half-year of 2011 to the first half-year of 2014, with a decrease by 50% (40 cases versus 20 cases). Intravenous administration error was the top type of error regarding administration route, but it continuously decreased from 64 (first half-year of 2012) to 27 (first half-year of 2014). More experienced registered nurses made fewer medication errors. The number of MAEs in surgical wards was twice that in medicinal wards. Compared with non-intensive care units, the intensive care units exhibited higher occurrence rates of MAEs (1.81% versus 0.24%, P<0.001). Conclusion A 3-and-a-half-year intervention program on MAEs was confirmed to be effective. MAEs made by nursing staff can be reduced, but cannot be eliminated. The depth, breadth, and efficiency of multidiscipline collaboration among physicians, pharmacists, nurses, information engineers, and hospital administrators are pivotal to safety in medication administration. JCI accreditation may help health systems enhance the awareness and ability to prevent MAEs and achieve successful quality improvements. PMID:25767393
Genetic mapping in the presence of genotyping errors.
Cartwright, Dustin A; Troggio, Michela; Velasco, Riccardo; Gutin, Alexander
2007-08-01
Genetic maps are built using the genotypes of many related individuals. Genotyping errors in these data sets can distort genetic maps, especially by inflating the distances. We have extended the traditional likelihood model used for genetic mapping to include the possibility of genotyping errors. Each individual marker is assigned an error rate, which is inferred from the data, just as the genetic distances are. We have developed a software package, called TMAP, which uses this model to find maximum-likelihood maps for phase-known pedigrees. We have tested our methods using a data set in Vitis and on simulated data and confirmed that our method dramatically reduces the inflationary effect caused by increasing the number of markers and leads to more accurate orders.
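TMAP's full likelihood model is not reproduced here; the hedged sketch below only illustrates the underlying point with two backcross markers: a per-marker genotyping error rate inflates the naive recombination-fraction estimate, and modelling that error rate removes the bias. The error rate, recombination fraction, and sample size are assumed values.

```python
import numpy as np

rng = np.random.default_rng(2)
n, r_true, e = 5000, 0.10, 0.02                    # sample size, rec. fraction, error rate

g1 = rng.integers(0, 2, n)                         # true genotype at marker 1
rec = rng.random(n) < r_true
g2 = np.where(rec, 1 - g1, g1)                     # true genotype at marker 2
obs1 = np.where(rng.random(n) < e, 1 - g1, g1)     # observed genotypes with errors
obs2 = np.where(rng.random(n) < e, 1 - g2, g2)

r_naive = np.mean(obs1 != obs2)                    # inflated by genotyping errors
p_flip = 2 * e * (1 - e)                           # prob. exactly one marker is wrong
r_corrected = (r_naive - p_flip) / (1 - 2 * p_flip)
print(r_naive, r_corrected)                        # ~0.13 vs ~0.10
```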
RMP: Reduced-set matching pursuit approach for efficient compressed sensing signal reconstruction.
Abdel-Sayed, Michael M; Khattab, Ahmed; Abu-Elyazeed, Mohamed F
2016-11-01
Compressed sensing enables the acquisition of sparse signals at a rate that is much lower than the Nyquist rate. Compressed sensing initially adopted ℓ1 minimization for signal reconstruction, which is computationally expensive. Several greedy recovery algorithms have been recently proposed for signal reconstruction at a lower computational complexity compared to the optimal ℓ1 minimization, while maintaining a good reconstruction accuracy. In this paper, the Reduced-set Matching Pursuit (RMP) greedy recovery algorithm is proposed for compressed sensing. Unlike existing approaches which either select too many or too few values per iteration, RMP aims at selecting a sufficient number of correlation values per iteration, which improves both the reconstruction time and error. Furthermore, RMP prunes the estimated signal, and hence, excludes the incorrectly selected values. The RMP algorithm achieves a higher reconstruction accuracy at a significantly lower computational complexity compared to existing greedy recovery algorithms. It is even superior to ℓ1 minimization in terms of the normalized time-error product, a new metric introduced to measure the trade-off between the reconstruction time and error. RMP's superior performance is illustrated with both noiseless and noisy samples.
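RMP's reduced-set selection and pruning rules are not given in the abstract, so the sketch below shows only a generic greedy recovery loop in the orthogonal-matching-pursuit style, to illustrate the class of algorithms being compared; matrix sizes and sparsity level are assumptions.

```python
import numpy as np

def omp(A, y, k, tol=1e-10):
    """Recover an approximately k-sparse x from y = A @ x by greedy selection."""
    residual = y.copy()
    support = []
    x_hat = np.zeros(A.shape[1])
    for _ in range(k):
        corr = A.T @ residual                      # correlate residual with columns
        support.append(int(np.argmax(np.abs(corr))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef        # update residual on current support
        if np.linalg.norm(residual) < tol:
            break
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(3)
m, n, k = 64, 256, 8
A = rng.normal(size=(m, n)) / np.sqrt(m)
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)
y = A @ x
print(np.linalg.norm(omp(A, y, k) - x))            # close to zero
```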
Fast online generalized multiscale finite element method using constraint energy minimization
NASA Astrophysics Data System (ADS)
Chung, Eric T.; Efendiev, Yalchin; Leung, Wing Tat
2018-02-01
Local multiscale methods often construct multiscale basis functions in the offline stage without taking into account input parameters, such as source terms, boundary conditions, and so on. These basis functions are then used in the online stage with a specific input parameter to solve the global problem at a reduced computational cost. Recently, online approaches have been introduced, where multiscale basis functions are adaptively constructed in some regions to reduce the error significantly. In multiscale methods, it is desired to have only 1-2 iterations to reduce the error to a desired threshold. Using the Generalized Multiscale Finite Element Framework [10], it was shown that by choosing a sufficient number of offline basis functions, the error reduction can be made independent of physical parameters, such as scales and contrast. In this paper, our goal is to improve this. Using our recently proposed approach [4] and special online basis construction in oversampled regions, we show that the error reduction can be made sufficiently large by appropriately selecting oversampling regions. Our numerical results show that one can achieve a three-orders-of-magnitude error reduction, which is better than our previous methods. We also develop an adaptive algorithm and enrich the basis in selected regions with large residuals. In our adaptive method, we show that the convergence rate can be determined by a user-defined parameter and we confirm this by numerical simulations. The analysis of the method is presented.
pyAmpli: an amplicon-based variant filter pipeline for targeted resequencing data.
Beyens, Matthias; Boeckx, Nele; Van Camp, Guy; Op de Beeck, Ken; Vandeweyer, Geert
2017-12-14
Haloplex targeted resequencing is a popular method to analyze both germline and somatic variants in gene panels. However, the wet-lab procedures involved may introduce false positives that need to be considered in subsequent data analysis. To our knowledge, no all-in-one package exists that provides a variant filtering rationale addressing systematic errors related to amplicon enrichment. We present pyAmpli, a platform-independent, parallelized Python package that implements an amplicon-based germline and somatic variant filtering strategy for Haloplex data. pyAmpli can filter variants for systematic errors according to user-defined criteria. We show that pyAmpli significantly increases specificity, without reducing sensitivity, which is essential for reporting true positive, clinically relevant mutations in gene panel data. pyAmpli is an easy-to-use software tool which increases the true positive variant call rate in targeted resequencing data. It specifically reduces errors related to PCR-based enrichment of targeted regions.
EPs welcome new focus on reducing diagnostic errors.
2015-12-01
Emergency medicine leaders welcome a major new report from the Institute of Medicine (IOM) calling on providers, policy makers, and government agencies to institute changes to reduce the incidence of diagnostic errors. The 369-page report, "Improving Diagnosis in Health Care," states that the rate of diagnostic errors in this country is unacceptably high and offers a long list of recommendations aimed at addressing the problem. These include large, systemic changes that involve improvements in multiple areas, including health information technology (HIT), professional education, teamwork, and payment reform. Further, of particular interest to emergency physicians are recommended changes to the liability system. The authors of the IOM report state that while most people will likely experience a significant diagnostic error in their lifetime, the importance of this problem is under-appreciated. According to conservative estimates, the report says 5% of adults who seek outpatient care each year experience a diagnostic error. The report also notes that research over many decades shows diagnostic errors contribute to roughly 10% of all deaths. The report says more steps need to be taken to facilitate inter-professional and intra-professional teamwork throughout the diagnostic process. Experts concur with the report's finding that mechanisms need to be developed so that providers receive ongoing feedback on their diagnostic performance.
Correcting for sequencing error in maximum likelihood phylogeny inference.
Kuhner, Mary K; McGill, James
2014-11-04
Accurate phylogenies are critical to taxonomy as well as studies of speciation processes and other evolutionary patterns. Accurate branch lengths in phylogenies are critical for dating and rate measurements. Such accuracy may be jeopardized by unacknowledged sequencing error. We use simulated data to test a correction for DNA sequencing error in maximum likelihood phylogeny inference. Over a wide range of data polymorphism and true error rate, we found that correcting for sequencing error improves recovery of the branch lengths, even if the assumed error rate is up to twice the true error rate. Low error rates have little effect on recovery of the topology. When error is high, correction improves topological inference; however, when error is extremely high, using an assumed error rate greater than the true error rate leads to poor recovery of both topology and branch lengths. The error correction approach tested here was proposed in 2004 but has not been widely used, perhaps because researchers do not want to commit to an estimate of the error rate. This study shows that correction with an approximate error rate is generally preferable to ignoring the issue. Copyright © 2014 Kuhner and McGill.
Disturbance torque rejection properties of the NASA/JPL 70-meter antenna axis servos
NASA Technical Reports Server (NTRS)
Hill, R. E.
1989-01-01
Analytic methods for evaluating pointing errors caused by external disturbance torques are developed and applied to determine the effects of representative values of wind and friction torque. The expressions relating pointing errors to disturbance torques are shown to be strongly dependent upon the state estimator parameters, as well as upon the state feedback gain and the flow versus pressure characteristics of the hydraulic system. Under certain conditions, when control is derived from an uncorrected estimate of integral position error, the desired type 2 servo properties are not realized and finite steady-state position errors result. Methods for reducing these errors to negligible proportions through the proper selection of control gain and estimator correction parameters are demonstrated. The steady-state error produced by a disturbance torque is found to be directly proportional to the hydraulic internal leakage. This property can be exploited to provide a convenient method of determining system leakage from field measurements of estimator error, axis rate, and hydraulic differential pressure.
Greenland, Sander; Gustafson, Paul
2006-07-01
Researchers sometimes argue that their exposure-measurement errors are independent of other errors and are nondifferential with respect to disease, resulting in estimation bias toward the null. Among well-known problems with such arguments are that independence and nondifferentiality are harder to satisfy than ordinarily appreciated (e.g., because of correlation of errors in questionnaire items, and because of uncontrolled covariate effects on error rates); small violations of independence or nondifferentiality may lead to bias away from the null; and, if exposure is polytomous, the bias produced by independent nondifferential error is not always toward the null. The authors add to this list by showing that, in a 2 x 2 table (for which independent nondifferential error produces bias toward the null), accounting for independent nondifferential error does not reduce the p value even though it increases the point estimate. Thus, such accounting should not increase certainty that an association is present.
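As a hedged numerical illustration of bias toward the null under independent nondifferential exposure misclassification in a 2 x 2 table, the sketch below applies the standard sensitivity/specificity (matrix) correction to invented counts; the corrected odds ratio moves away from the null, which is the point-estimate half of the argument above. The p-value behaviour discussed by the authors is not reproduced here, and all counts, sensitivity, and specificity are assumptions.

```python
import numpy as np

se, sp = 0.85, 0.90                          # assumed sensitivity / specificity
obs = {"cases": np.array([60.0, 40.0]),      # observed exposed, unexposed
       "controls": np.array([45.0, 55.0])}

def correct(counts):
    a_obs, b_obs = counts                    # observed exposed / unexposed
    n = a_obs + b_obs
    # Solve a_obs = se*a + (1 - sp)*(n - a) for the true exposed count a.
    a_true = (a_obs - (1 - sp) * n) / (se + sp - 1)
    return a_true, n - a_true

(a1, b1), (a0, b0) = correct(obs["cases"]), correct(obs["controls"])
or_obs = (60 * 55) / (40 * 45)
or_corr = (a1 * b0) / (b1 * a0)
print(round(or_obs, 2), round(or_corr, 2))   # corrected OR is further from 1
```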
Elimination of Emergency Department Medication Errors Due To Estimated Weights.
Greenwalt, Mary; Griffen, David; Wilkerson, Jim
2017-01-01
From 7/2014 through 6/2015, 10 emergency department (ED) medication dosing errors were reported through the electronic incident reporting system of an urban academic medical center. Analysis of these medication errors identified inaccurate estimated weight on patients as the root cause. The goal of this project was to reduce weight-based dosing medication errors due to inaccurate estimated weights on patients presenting to the ED. Chart review revealed that 13.8% of estimated weights documented on admitted ED patients varied more than 10% from subsequent actual admission weights recorded. A random sample of 100 charts containing estimated weights revealed 2 previously unreported significant medication dosage errors (.02 significant error rate). Key improvements included removing barriers to weighing ED patients, storytelling to engage staff and change culture, and removal of the estimated weight documentation field from the ED electronic health record (EHR) forms. With these improvements estimated weights on ED patients, and the resulting medication errors, were eliminated.
Effect of Pointing Error on the BER Performance of an Optical CDMA FSO Link with SIK Receiver
NASA Astrophysics Data System (ADS)
Nazrul Islam, A. K. M.; Majumder, S. P.
2017-12-01
An analytical approach is presented for an optical code division multiple access (OCDMA) system over a free space optical (FSO) channel considering the effect of pointing error between the transmitter and the receiver. Analysis is carried out with an optical sequence inverse keying (SIK) correlator receiver with intensity modulation and direct detection (IM/DD) to find the bit error rate (BER) with pointing error. The results are evaluated numerically in terms of signal-to-noise plus multi-access interference (MAI) ratio, BER and power penalty due to pointing error. It is noticed that the OCDMA FSO system is highly affected by pointing error with significant power penalty at a BER of 10^-6 and 10^-9. For example, the penalty at a BER of 10^-9 is found to be 9 dB corresponding to a normalized pointing error of 1.4 for 16 users with a processing gain of 256 and is reduced to 6.9 dB when the processing gain is increased to 1,024.
Forrester, Janet E
2015-12-01
Errors in the statistical presentation and analyses of data in the medical literature remain common despite efforts to improve the review process, including the creation of guidelines for authors and the use of statistical reviewers. This article discusses common elementary statistical errors seen in manuscripts recently submitted to Clinical Therapeutics and describes some ways in which authors and reviewers can identify errors and thus correct them before publication. A nonsystematic sample of manuscripts submitted to Clinical Therapeutics over the past year was examined for elementary statistical errors. Clinical Therapeutics has many of the same errors that reportedly exist in other journals. Authors require additional guidance to avoid elementary statistical errors and incentives to use the guidance. Implementation of reporting guidelines for authors and reviewers by journals such as Clinical Therapeutics may be a good approach to reduce the rate of statistical errors. Copyright © 2015 Elsevier HS Journals, Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Maginnis, P. A.; West, M.; Dullerud, G. E.
2016-10-01
We propose an algorithm to accelerate Monte Carlo simulation for a broad class of stochastic processes. Specifically, the class of countable-state, discrete-time Markov chains driven by additive Poisson noise, or lattice discrete-time Markov chains. In particular, this class includes simulation of reaction networks via the tau-leaping algorithm. To produce the speedup, we simulate pairs of fair-draw trajectories that are negatively correlated. Thus, when averaged, these paths produce an unbiased Monte Carlo estimator that has reduced variance and, therefore, reduced error. Numerical results for three example systems included in this work demonstrate two to four orders of magnitude reduction of mean-square error. The numerical examples were chosen to illustrate different application areas and levels of system complexity. The areas are: gene expression (affine state-dependent rates), aerosol particle coagulation with emission and human immunodeficiency virus infection (both with nonlinear state-dependent rates). Our algorithm views the system dynamics as a "black-box", i.e., we only require control of pseudorandom number generator inputs. As a result, typical codes can be retrofitted with our algorithm using only minor changes. We prove several analytical results. Among these, we characterize the relationship of covariances between paths in the general nonlinear state-dependent intensity rates case, and we prove variance reduction of mean estimators in the special case of affine intensity rates.
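A hedged, minimal sketch of the antithetic-pair idea for Poisson-driven chains: two paths per pair are driven by uniforms u and 1 − u through inverse-CDF Poisson sampling, so their errors are negatively correlated and the paired mean estimator has lower variance than an average of independent paths. The constant-rate birth process below is an assumption made for illustration; the paper's state-dependent-rate systems are not reproduced.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(4)
steps, pairs, rate = 50, 2000, 0.8

def simulate(u):
    # One path per row: a running sum of Poisson increments drawn by inverse CDF.
    x = np.zeros(u.shape[0])
    for t in range(u.shape[1]):
        x += poisson.ppf(u[:, t], rate)
    return x

def draw(shape):
    # Keep uniforms strictly inside (0, 1) so the inverse CDF stays finite.
    return np.clip(rng.random(shape), 1e-12, 1.0 - 1e-12)

u = draw((pairs, steps))
antithetic = 0.5 * (simulate(u) + simulate(1.0 - u))      # negatively correlated pair
independent = simulate(draw((2 * pairs, steps))).reshape(2, pairs).mean(axis=0)
print(antithetic.var(), independent.var())                # antithetic variance is smaller
```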
Good ergonomics and team diversity reduce absenteeism and errors in car manufacturing.
Fritzsche, Lars; Wegge, Jürgen; Schmauder, Martin; Kliegel, Matthias; Schmidt, Klaus-Helmut
2014-01-01
Prior research suggests that ergonomics work design and mixed teams (in age and gender) may compensate declines in certain abilities of ageing employees. This study investigates simultaneous effects of both team level factors on absenteeism and performance (error rates) over one year in a sample of 56 car assembly teams (N = 623). Results show that age was related to prolonged absenteeism and more mistakes in work planning, but not to overall performance. In comparison, high-physical workload was strongly associated with longer absenteeism and increased error rates. Furthermore, controlling for physical workload, age diversity was related to shorter absenteeism, and the presence of females in the team was associated with shorter absenteeism and better performance. In summary, this study suggests that both ergonomics work design and mixed team composition may compensate age-related productivity risks in manufacturing by maintaining the work ability of older employees and improving job quality.
Psychophysical measurements in children: challenges, pitfalls, and considerations.
Witton, Caroline; Talcott, Joel B; Henning, G Bruce
2017-01-01
Measuring sensory sensitivity is important in studying development and developmental disorders. However, with children, there is a need to balance reliable but lengthy sensory tasks with the child's ability to maintain motivation and vigilance. We used simulations to explore the problems associated with shortening adaptive psychophysical procedures, and suggest how these problems might be addressed. We quantify how adaptive procedures with too few reversals can over-estimate thresholds, introduce substantial measurement error, and make estimates of individual thresholds less reliable. The associated measurement error also obscures group differences. Adaptive procedures with children should therefore use as many reversals as possible, to reduce the effects of both Type 1 and Type 2 errors. Differences in response consistency, resulting from lapses in attention, further increase the over-estimation of threshold. Comparisons between data from individuals who may differ in lapse rate are therefore problematic, but measures to estimate and account for lapse rates in analyses may mitigate this problem.
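As a hedged illustration of the reversal-count issue, the sketch below simulates a 2-down/1-up staircase on a synthetic observer and compares threshold estimates from short and long tracks. The psychometric function, step size, and suprathreshold starting level are assumptions; the procedures simulated in the study are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(5)

def run_staircase(n_reversals, slope=1.5, start=6.0, step=1.0):
    """2-down/1-up staircase; returns the mean reversal level after
    discarding the first two reversals."""
    level, direction, run = start, -1, 0
    reversals = []
    while len(reversals) < n_reversals:
        p_correct = 1.0 / (1.0 + np.exp(-slope * level))
        if rng.random() < p_correct:
            run += 1
            move = -1 if run >= 2 else 0           # two correct -> harder
        else:
            run, move = 0, +1                       # one wrong -> easier
        if move == -1:
            run = 0
        if move != 0:
            if move != direction:                   # direction change = reversal
                reversals.append(level)
                direction = move
            level += move * step
    usable = reversals[2:] if len(reversals) > 2 else reversals
    return float(np.mean(usable))

for n_rev in (4, 8, 16):
    est = [run_staircase(n_rev) for _ in range(500)]
    print(n_rev, round(float(np.mean(est)), 2), round(float(np.std(est)), 2))
# With this suprathreshold starting level, short tracks (few reversals) tend to
# give higher and more variable estimates than longer tracks.
```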
NASA Technical Reports Server (NTRS)
Yang, Song; Olson, William S.; Wang, Jian-Jian; Bell, Thomas L.; Smith, Eric A.; Kummerow, Christian D.
2006-01-01
Rainfall rate estimates from spaceborne microwave radiometers are generally accepted as reliable by a majority of the atmospheric science community. One of the Tropical Rainfall Measuring Mission (TRMM) facility rain-rate algorithms is based upon passive microwave observations from the TRMM Microwave Imager (TMI). In Part I of this series, improvements of the TMI algorithm that are required to introduce latent heating as an additional algorithm product are described. Here, estimates of surface rain rate, convective proportion, and latent heating are evaluated using independent ground-based estimates and satellite products. Instantaneous, 0.5°-resolution estimates of surface rain rate over ocean from the improved TMI algorithm are well correlated with independent radar estimates (r approx. 0.88 over the Tropics), but bias reduction is the most significant improvement over earlier algorithms. The bias reduction is attributed to the greater breadth of cloud-resolving model simulations that support the improved algorithm and the more consistent and specific convective/stratiform rain separation method utilized. The bias of monthly 2.5°-resolution estimates is similarly reduced, with comparable correlations to radar estimates. Although the amount of independent latent heating data is limited, TMI-estimated latent heating profiles compare favorably with instantaneous estimates based upon dual-Doppler radar observations, and time series of surface rain-rate and heating profiles are generally consistent with those derived from rawinsonde analyses. Still, some biases in profile shape are evident, and these may be resolved with (a) additional contextual information brought to the estimation problem and/or (b) physically consistent and representative databases supporting the algorithm. A model of the random error in instantaneous 0.5°-resolution rain-rate estimates appears to be consistent with the levels of error determined from TMI comparisons with collocated radar. Error model modifications for nonraining situations will be required, however. Sampling error represents only a portion of the total error in monthly 2.5°-resolution TMI estimates; the remaining error is attributed to random and systematic algorithm errors arising from the physical inconsistency and/or nonrepresentativeness of cloud-resolving-model-simulated profiles that support the algorithm.
Correcting reaction rates measured by saturation-transfer magnetic resonance spectroscopy
NASA Astrophysics Data System (ADS)
Gabr, Refaat E.; Weiss, Robert G.; Bottomley, Paul A.
2008-04-01
Off-resonance or spillover irradiation and incomplete saturation can introduce significant errors in the estimates of chemical rate constants measured by saturation-transfer magnetic resonance spectroscopy (MRS). Existing methods of correction are effective only over a limited parameter range. Here, a general approach of numerically solving the Bloch-McConnell equations to calculate exchange rates, relaxation times and concentrations for the saturation-transfer experiment is investigated, but found to require more measurements and higher signal-to-noise ratios than in vivo studies can practically afford. As an alternative, correction formulae for the reaction rate are provided which account for the expected parameter ranges and limited measurements available in vivo. The correction term is a quadratic function of experimental measurements. In computer simulations, the new formulae showed negligible bias and reduced the maximum error in the rate constants by about 3-fold compared to traditional formulae, and the error scatter by about 4-fold, over a wide range of parameters for conventional saturation transfer employing progressive saturation, and for the four-angle saturation-transfer method applied to the creatine kinase (CK) reaction in the human heart at 1.5 T. In normal in vivo spectra affected by spillover, the correction increases the mean calculated forward CK reaction rate by 6-16% over traditional and prior correction formulae.
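The correction formulae themselves are not given in the abstract; the hedged sketch below only illustrates the underlying effect by integrating a two-pool Bloch-McConnell longitudinal-magnetization model with a finite saturation rate on pool B, showing how incomplete saturation biases the apparent forward rate obtained from the classic Ma_ss/Ma0 relation. All relaxation times, exchange rates, and pool sizes are assumed values.

```python
import numpy as np

# Two-pool exchange A <-> B with pool B irradiated at rate w_sat; forward Euler.
T1a, T1b = 1.5, 1.0          # longitudinal relaxation times (s), assumed
kf, kr = 0.4, 0.8            # forward / reverse exchange rates (1/s), assumed
Ma0, Mb0 = 1.0, 0.5          # equilibrium magnetizations, assumed

def steady_state_Ma(w_sat, dt=1e-3, t_end=30.0):
    Ma, Mb = Ma0, Mb0
    for _ in range(int(t_end / dt)):
        dMa = (Ma0 - Ma) / T1a - kf * Ma + kr * Mb
        dMb = (Mb0 - Mb) / T1b + kf * Ma - kr * Mb - w_sat * Mb
        Ma, Mb = Ma + dt * dMa, Mb + dt * dMb
    return Ma

for w_sat in (2.0, 10.0, 100.0):                 # weak ... nearly ideal saturation
    Ma_ss = steady_state_Ma(w_sat)
    kf_apparent = (Ma0 / Ma_ss - 1.0) / T1a      # classic saturation-transfer formula
    print(w_sat, round(Ma_ss, 3), round(kf_apparent, 3))
# With weak saturation, kf_apparent underestimates kf = 0.4; it approaches the
# true value only as the saturation becomes effectively complete.
```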
ERROR REDUCTION IN DUCT LEAKAGE TESTING THROUGH DATA CROSS-CHECKS
DOE Office of Scientific and Technical Information (OSTI.GOV)
ANDREWS, J.W.
1998-12-31
One way to reduce uncertainty in scientific measurement is to devise a protocol in which more quantities are measured than are absolutely required, so that the result is over constrained. This report develops a method for so combining data from two different tests for air leakage in residential duct systems. An algorithm, which depends on the uncertainty estimates for the measured quantities, optimizes the use of the excess data. In many cases it can significantly reduce the error bar on at least one of the two measured duct leakage rates (supply or return), and it provides a rational method of reconciling any conflicting results from the two leakage tests.
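As a hedged sketch of the over-constrained idea (not the report's algorithm), suppose the protocol yields direct supply and return leakage estimates plus an independent total-leakage estimate; weighted least squares then reconciles the three numbers and shrinks the error bars on the individual leakage rates. All measurements and uncertainties below are invented.

```python
import numpy as np

y = np.array([80.0, 55.0, 150.0])        # measured supply, return, total (cfm)
sigma = np.array([15.0, 12.0, 10.0])     # 1-sigma uncertainties (assumed)
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])               # model: y = A @ [supply, return]

W = np.diag(1.0 / sigma**2)              # inverse-variance weighting
cov = np.linalg.inv(A.T @ W @ A)
est = cov @ (A.T @ W @ y)
print(est)                               # reconciled supply and return leakage
print(np.sqrt(np.diag(cov)), "vs", sigma[:2])   # reduced error bars
```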
Robust Linear Models for Cis-eQTL Analysis.
Rantalainen, Mattias; Lindgren, Cecilia M; Holmes, Christopher C
2015-01-01
Expression Quantitative Trait Loci (eQTL) analysis enables characterisation of functional genetic variation influencing expression levels of individual genes. In outbred populations, including humans, eQTLs are commonly analysed using the conventional linear model, adjusting for relevant covariates, assuming an allelic dosage model and a Gaussian error term. However, gene expression data generally have noise that induces heavy-tailed errors relative to the Gaussian distribution and often include atypical observations, or outliers. Such departures from modelling assumptions can lead to an increased rate of type II errors (false negatives), and to some extent also type I errors (false positives). Careful model checking can reduce the risk of type I errors but often not type II errors, since it is generally too time-consuming to carefully check all models with a non-significant effect in large-scale and genome-wide studies. Here we propose the application of a robust linear model for eQTL analysis to reduce adverse effects of deviations from the assumption of Gaussian residuals. We present results from a simulation study as well as results from the analysis of real eQTL data sets. Our findings suggest that in many situations robust models have the potential to provide more reliable eQTL results compared to conventional linear models, particularly with respect to reducing type II errors due to non-Gaussian noise. Post-genomic data, such as that generated in genome-wide eQTL studies, are often noisy and frequently contain atypical observations. Robust statistical models have the potential to provide more reliable results and increased statistical power under non-Gaussian conditions. The results presented here suggest that robust models should be considered routinely alongside other commonly used methodologies for eQTL analysis.
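A hedged sketch contrasting ordinary least squares with a robust (Huber) linear model on synthetic dosage-expression data containing a few outliers, using statsmodels; the effect size, outlier magnitude, and sample size are assumptions, not values from the study.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 300
dosage = rng.integers(0, 3, n).astype(float)        # 0/1/2 allele copies
expr = 0.3 * dosage + rng.normal(0, 1, n)           # expression with true effect 0.3
expr[rng.choice(n, 5, replace=False)] += 8.0        # a few heavy-tailed outliers

X = sm.add_constant(dosage)
ols = sm.OLS(expr, X).fit()
rlm = sm.RLM(expr, X, M=sm.robust.norms.HuberT()).fit()
print("OLS effect, SE:", ols.params[1], ols.bse[1])
print("RLM effect, SE:", rlm.params[1], rlm.bse[1])  # robust fit is less perturbed
```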
Robust Transmission of H.264/AVC Streams Using Adaptive Group Slicing and Unequal Error Protection
NASA Astrophysics Data System (ADS)
Thomos, Nikolaos; Argyropoulos, Savvas; Boulgouris, Nikolaos V.; Strintzis, Michael G.
2006-12-01
We present a novel scheme for the transmission of H.264/AVC video streams over lossy packet networks. The proposed scheme exploits the error-resilient features of the H.264/AVC codec and employs Reed-Solomon codes to protect the streams effectively. A novel technique for adaptive classification of macroblocks into three slice groups is also proposed. The optimal classification of macroblocks and the optimal channel rate allocation are achieved by iterating two interdependent steps. Dynamic programming techniques are used for the channel rate allocation process in order to reduce complexity. Simulations clearly demonstrate the superiority of the proposed method over other recent algorithms for transmission of H.264/AVC streams.
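The paper's exact optimization is not reproduced here; as a toy illustration of dynamic-programming rate allocation, the sketch below splits a fixed parity-symbol budget across three slice groups to minimize total expected distortion. The distortion tables are hypothetical stand-ins for values a real system would derive from packet-loss statistics and per-group importance.

```python
# Toy dynamic program: unequal error protection by allocating a parity budget
# across slice groups so that total expected distortion is minimized.
def allocate(distortion, budget):
    """distortion[g][p] = expected distortion of group g when given p parity symbols."""
    G = len(distortion)
    INF = float("inf")
    best = [[INF] * (budget + 1) for _ in range(G + 1)]
    choice = [[0] * (budget + 1) for _ in range(G + 1)]
    best[0] = [0.0] * (budget + 1)
    for g in range(1, G + 1):
        for b in range(budget + 1):
            for p in range(min(b, len(distortion[g - 1]) - 1) + 1):
                cand = best[g - 1][b - p] + distortion[g - 1][p]
                if cand < best[g][b]:
                    best[g][b], choice[g][b] = cand, p
    alloc, b = [], budget          # trace back the optimal per-group allocation
    for g in range(G, 0, -1):
        alloc.append(choice[g][b])
        b -= choice[g][b]
    return best[G][budget], list(reversed(alloc))

D = [[9.0, 4.0, 1.5, 0.6],    # most important slice group (hypothetical values)
     [6.0, 3.0, 1.2, 0.5],
     [3.0, 1.8, 0.9, 0.4]]    # least important slice group
print(allocate(D, budget=6))
```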
NASA Astrophysics Data System (ADS)
Almalaq, Yasser; Matin, Mohammad A.
2014-09-01
The broadband passive optical network (BPON) can deliver high-speed data, voice, and video services to homes and small-business customers. In this work, the performance of a bi-directional BPON is analyzed for both downstream and upstream traffic with the help of an erbium-doped fiber amplifier (EDFA). A key advantage of BPON is reduced cost: because BPON uses a passive splitter, maintenance costs between the provider and the customer side remain manageable. In the proposed research, the BPON has been tested with a bit error rate (BER) analyzer, which reports the maximum Q factor, minimum bit error rate, and eye height.
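For reference, the quantities reported by a BER analyzer are linked by the standard Gaussian-noise relation between Q factor and bit error rate; a short check of that relation (not specific to this system) is shown below.

```python
# Standard Gaussian-noise relation between Q factor and bit error rate:
# BER = 0.5 * erfc(Q / sqrt(2)).
import math

def ber_from_q(q):
    return 0.5 * math.erfc(q / math.sqrt(2.0))

for q in (6.0, 7.0, 8.0):
    print(f"Q = {q:.1f}  ->  BER ~ {ber_from_q(q):.2e}")
# Q of about 6 corresponds to a BER of about 1e-9, a common optical-link threshold.
```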
Li, Mengfei; Hansen, Christian; Rose, Georg
2017-09-01
Electromagnetic tracking systems (EMTS) have achieved a high level of acceptance in clinical settings, e.g., to support tracking of medical instruments in image-guided interventions. However, tracking errors caused by movable metallic medical instruments and electronic devices are a critical problem that prevents the wider clinical application of EMTS. We introduce a method to dynamically reduce tracking errors caused by metallic objects in proximity to the magnetic sensor coil of the EMTS. We propose a method using ramp waveform excitation based on modeling the conductive distorter as a resistance-inductance circuit. Additionally, a fast data acquisition method is presented to speed up the refresh rate. With the current approach, the sensor's mean positioning error is estimated to be 3.4, 1.3 and 0.7 mm, corresponding to distances between the sensor and the center of the transmitter coil array of up to 200, 150 and 100 mm, respectively. The sensor pose error caused by different medical instruments placed in proximity was reduced by the proposed method to a level lower than 0.5 mm in position and [Formula: see text] in orientation. By applying the newly developed fast data acquisition method, we achieved a system refresh rate of up to approximately 12.7 frames per second. Our software-based approach can be integrated into existing medical EMTS seamlessly with no change in hardware. It improves the tracking accuracy of clinical EMTS when a metallic object is placed near the sensor coil and has the potential to improve the safety and outcome of image-guided interventions.
Lin, Chia-Chi; Kuo, Hao-Chung; Peng, Peng-Chun; Lin, Gong-Ru
2008-03-31
An optically injection-locked, single-wavelength, gain-switched VCSEL-based all-optical converter is demonstrated to generate RZ data at 2.5 Gbit/s with a bit error rate of 10^-9 at a receiving power of -29.3 dBm. A modified rate equation model is established to elucidate the optical-injection-induced gain switching and NRZ-to-RZ data conversion in the VCSEL. The peak-to-peak frequency chirp of the VCSEL-based NRZ-to-RZ converter is 4.5 GHz, with a reduced frequency chirp rate of 178 MHz/ps at an input optical NRZ power of -21 dBm, roughly one third of the chirp of the SOA-based NRZ-to-RZ converter reported previously. The power penalty of the BER measured back-to-back is about 2 dB from 1 Gbit/s to 2.5 Gbit/s.
An intervention to decrease patient identification band errors in a children's hospital.
Hain, Paul D; Joers, B; Rush, M; Slayton, J; Throop, P; Hoagg, S; Allen, L; Grantham, J; Deshpande, J K
2010-06-01
Patient misidentification continues to be a quality and safety issue. There is a paucity of US data describing interventions to reduce identification band error rates. The setting was the Monroe Carell Jr Children's Hospital at Vanderbilt; the main outcome measure was the percentage of patients with defective identification bands. Web-based surveys were sent, asking hospital personnel to anonymously identify perceived barriers to reaching zero defects with identification bands. Corrective action plans were created and implemented with ideas from leadership, front-line staff and the online survey. Data from unannounced audits of patient identification bands were plotted on statistical process control charts and shared monthly with staff. All hospital personnel were expected to "stop the line" if there were any patient identification questions. The first audit showed a defect rate of 20.4%. The original mean defect rate was 6.5%. After interventions and education, the new mean defect rate was 2.6%. (a) The initial rate of patient identification band errors in the hospital was higher than expected. (b) The action resulting in the most significant improvement was staff awareness of the problem, with clear expectations to immediately stop the line if a patient identification error was present. (c) Staff surveys are an excellent source of suggestions for combating patient identification issues. (d) Continued audit and data collection are necessary for sustainable staff focus and continued improvement. (e) Statistical process control charts are both an effective method to track results and an easily understood tool for sharing data with staff.
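A minimal p-chart sketch of the kind of statistical process control described above is shown below; the audit sizes and defect counts are hypothetical, not the hospital's data.

```python
# Minimal p-chart for monthly identification-band audits (hypothetical counts).
import math

audits = [(120, 9), (115, 6), (130, 4), (125, 3), (118, 2)]   # (n checked, defects)
p_bar = sum(d for _, d in audits) / sum(n for n, _ in audits)  # overall defect rate

for month, (n, d) in enumerate(audits, start=1):
    sigma = math.sqrt(p_bar * (1 - p_bar) / n)
    ucl = p_bar + 3 * sigma                      # upper control limit
    lcl = max(0.0, p_bar - 3 * sigma)            # lower control limit, floored at 0
    p = d / n
    flag = "out of control" if (p > ucl or p < lcl) else "in control"
    print(f"month {month}: p={p:.3f}  LCL={lcl:.3f}  UCL={ucl:.3f}  ({flag})")
```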
Bayesian assessment of overtriage and undertriage at a level I trauma centre.
DiDomenico, Paul B; Pietzsch, Jan B; Paté-Cornell, M Elisabeth
2008-07-13
We analysed the trauma triage system at a specific level I trauma centre to assess rates of over- and undertriage and to support recommendations for system improvements. The triage process is designed to estimate the severity of patient injury and allocate resources accordingly, with potential errors of overestimation (overtriage) consuming excess resources and underestimation (undertriage) potentially leading to medical errors. We first modelled the overall trauma system using risk analysis methods to understand interdependencies among the actions of the participants. We interviewed six experienced trauma surgeons to obtain their expert opinion of the over- and undertriage rates occurring in the trauma centre. We then assessed actual over- and undertriage rates in a random sample of 86 trauma cases collected over a six-week period at the same centre. We employed Bayesian analysis to quantitatively combine the data with the prior probabilities derived from expert opinion in order to obtain posterior distributions. The results were estimates of overtriage and undertriage in 16.1% and 4.9% of patients, respectively. This Bayesian approach, which provides a quantitative assessment of the error rates using both case data and expert opinion, provides a rational means of obtaining a best estimate of the system's performance. The overall approach that we describe in this paper can be employed more widely to analyse complex health care delivery systems, with the objective of reducing errors, patient risk and excess costs.
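The full risk model is not reproduced here; a minimal Beta-Binomial sketch of the Bayesian step shows how an expert-elicited prior and the 86-case sample combine. The prior parameters and the observed count are illustrative assumptions, not the paper's numbers.

```python
# Minimal Beta-Binomial update: expert prior + audit data -> posterior overtriage rate.
from scipy import stats

# Expert prior on the overtriage rate, encoded as Beta(a, b) (illustrative values)
a_prior, b_prior = 8.0, 40.0          # prior mean of about 0.17

# Hypothetical audit outcome: overtriage observed in 13 of 86 sampled cases
overtriaged, n = 13, 86

posterior = stats.beta(a_prior + overtriaged, b_prior + n - overtriaged)
print("posterior mean overtriage rate:", posterior.mean())
print("95% credible interval:", posterior.interval(0.95))
```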
Wedell, Douglas H; Moro, Rodrigo
2008-04-01
Two experiments used within-subject designs to examine how conjunction errors depend on the use of (1) choice versus estimation tasks, (2) probability versus frequency language, and (3) conjunctions of two likely events versus conjunctions of likely and unlikely events. All problems included a three-option format verified to minimize misinterpretation of the base event. In both experiments, conjunction errors were reduced when likely events were conjoined. Conjunction errors were also reduced for estimations compared with choices, with this reduction greater for likely conjuncts, an interaction effect. Shifting conceptual focus from probabilities to frequencies did not affect conjunction error rates. Analyses of numerical estimates for a subset of the problems provided support for the use of three general models by participants for generating estimates. Strikingly, the order in which the two tasks were carried out did not affect the pattern of results, supporting the idea that the mode of responding strongly determines the mode of thinking about conjunctions and hence the occurrence of the conjunction fallacy. These findings were evaluated in terms of implications for rationality of human judgment and reasoning.
Ning, Hsiao-Chen; Lin, Chia-Ni; Chiu, Daniel Tsun-Yee; Chang, Yung-Ta; Wen, Chiao-Ni; Peng, Shu-Yu; Chu, Tsung-Lan; Yu, Hsin-Ming; Wu, Tsu-Lan
2016-01-01
Background: Accurate patient identification and specimen labeling at the time of collection are crucial steps in the prevention of medical errors, thereby improving patient safety. Methods: All patient specimen identification errors that occurred in the outpatient department (OPD), emergency department (ED), and inpatient department (IPD) of a 3,800-bed academic medical center in Taiwan were documented and analyzed retrospectively from 2005 to 2014. To reduce such errors, the following series of strategies were implemented: a restrictive specimen acceptance policy for the ED and IPD in 2006; a computer-assisted barcode positive patient identification system for the ED and IPD in 2007 and 2010; and automated sample labeling combined with electronic identification systems introduced to the OPD in 2009. Results: Of the 2,000,345 specimens collected in 2005, 1,023 (0.0511%) were identified as having patient identification errors, compared with 58 errors (0.0015%) among 3,761,238 specimens collected in 2014, after serial interventions; this represents a 97% relative reduction. The total numbers (rates) of institutional identification errors contributed by the ED, IPD, and OPD over the 10-year period were 423 (0.1058%), 556 (0.0587%), and 44 (0.0067%) errors before the interventions, and 3 (0.0007%), 52 (0.0045%) and 3 (0.0001%) after the interventions, representing relative reductions of 99%, 92%, and 98%, respectively. Conclusions: Accurate patient identification is a challenge of patient safety in different health settings. The data collected in our study indicate that a restrictive specimen acceptance policy, computer-generated positive identification systems, and interdisciplinary cooperation can significantly reduce patient identification errors. PMID:27494020
On Time/Space Aggregation of Fine-Scale Error Estimates (Invited)
NASA Astrophysics Data System (ADS)
Huffman, G. J.
2013-12-01
Estimating errors inherent in fine time/space-scale satellite precipitation data sets is still an on-going problem and a key area of active research. Complicating features of these data sets include the intrinsic intermittency of the precipitation in space and time and the resulting highly skewed distribution of precipitation rates. Additional issues arise from the subsampling errors that satellites introduce, the errors due to retrieval algorithms, and the correlated error that retrieval and merger algorithms sometimes introduce. Several interesting approaches have been developed recently that appear to make progress on these long-standing issues. At the same time, the monthly averages over 2.5°x2.5° grid boxes in the Global Precipitation Climatology Project (GPCP) Satellite-Gauge (SG) precipitation data set follow a very simple sampling-based error model (Huffman 1997) with coefficients that are set using coincident surface and GPCP SG data. This presentation outlines the unsolved problem of how to aggregate the fine-scale errors (discussed above) to an arbitrary time/space averaging volume for practical use in applications, reducing in the limit to simple Gaussian expressions at the monthly 2.5°x2.5° scale. Scatter diagrams with different time/space averaging show that the relationship between the satellite and validation data improves due to the reduction in random error. One of the key, and highly non-linear, issues is that fine-scale estimates tend to have large numbers of cases with points near the axes on the scatter diagram (one of the values is exactly or nearly zero, while the other value is higher). Averaging 'pulls' the points away from the axes and towards the 1:1 line, which usually happens for higher precipitation rates before lower rates. Given this qualitative observation of how aggregation affects error, we observe that existing aggregation rules, such as the Steiner et al. (2003) power law, only depend on the aggregated precipitation rate. Is this sufficient, or is it necessary to aggregate the precipitation error estimates across the time/space data cube used for averaging? At least for small time/space data cubes it would seem that the detailed variables that affect each precipitation error estimate in the aggregation, such as sensor type, land/ocean surface type, convective/stratiform type, and so on, drive variations that must be accounted for explicitly.
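As one concrete illustration of why aggregation reduces random error, the sketch below assumes fully independent fine-scale errors so that the error of a time/space average shrinks like 1/sqrt(N); the correlated retrieval error discussed above is exactly what breaks this assumption, which is why more detailed aggregation rules may be needed.

```python
# Illustration only: error of a mean under the (strong) assumption of
# independent fine-scale random errors. Counts below are hypothetical.
import numpy as np

def aggregated_sigma(fine_sigmas):
    """1-sigma error of the mean of estimates with the given 1-sigma errors,
    assuming the errors are independent."""
    fine_sigmas = np.asarray(fine_sigmas, dtype=float)
    return np.sqrt(np.sum(fine_sigmas**2)) / fine_sigmas.size

# e.g. averaging 0.5-degree, 3-hourly estimates, each with 100% relative error,
# up to a 2.5-degree monthly box: 5 x 5 grid boxes x ~240 time steps
n = 5 * 5 * 240
print(aggregated_sigma(np.full(n, 1.0)))   # relative error of the mean, ~0.013
```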
Lendvay, Thomas S; Brand, Timothy C; White, Lee; Kowalewski, Timothy; Jonnadula, Saikiran; Mercer, Laina D; Khorsand, Derek; Andros, Justin; Hannaford, Blake; Satava, Richard M
2013-06-01
Preoperative simulation warm-up has been shown to improve performance and reduce errors in novice and experienced surgeons, yet existing studies have only investigated conventional laparoscopy. We hypothesized that a brief virtual reality (VR) robotic warm-up would enhance robotic task performance and reduce errors. In a 2-center randomized trial, 51 residents and experienced minimally invasive surgery faculty in General Surgery, Urology, and Gynecology underwent a validated robotic surgery proficiency curriculum on a VR robotic simulator and on the da Vinci surgical robot (Intuitive Surgical Inc). Once they successfully achieved performance benchmarks, surgeons were randomized to either receive a 3- to 5-minute VR simulator warm-up or read a leisure book for 10 minutes before performing similar and dissimilar (intracorporeal suturing) robotic surgery tasks. The primary outcomes compared were task time, tool path length, economy of motion, technical, and cognitive errors. Task time (-29.29 seconds, p = 0.001; 95% CI, -47.03 to -11.56), path length (-79.87 mm; p = 0.014; 95% CI, -144.48 to -15.25), and cognitive errors were reduced in the warm-up group compared with the control group for similar tasks. Global technical errors in intracorporeal suturing (0.32; p = 0.020; 95% CI, 0.06-0.59) were reduced after the dissimilar VR task. When surgeons were stratified by earlier robotic and laparoscopic clinical experience, the more experienced surgeons (n = 17) demonstrated significant improvements from warm-up in task time (-53.5 seconds; p = 0.001; 95% CI, -83.9 to -23.0) and economy of motion (0.63 mm/s; p = 0.007; 95% CI, 0.18-1.09), and improvement in these metrics was not statistically significantly appreciated in the less-experienced cohort (n = 34). We observed significant performance improvement and error reduction rates among surgeons of varying experience after VR warm-up for basic robotic surgery tasks. In addition, the VR warm-up reduced errors on a more complex task (robotic suturing), suggesting the generalizability of the warm-up. Copyright © 2013 American College of Surgeons. All rights reserved.
Competence in Streptococcus pneumoniae is regulated by the rate of ribosomal decoding errors.
Stevens, Kathleen E; Chang, Diana; Zwack, Erin E; Sebert, Michael E
2011-01-01
Competence for genetic transformation in Streptococcus pneumoniae develops in response to accumulation of a secreted peptide pheromone and was one of the initial examples of bacterial quorum sensing. Activation of this signaling system induces not only expression of the proteins required for transformation but also the production of cellular chaperones and proteases. We have shown here that activity of this pathway is sensitively responsive to changes in the accuracy of protein synthesis that are triggered by either mutations in ribosomal proteins or exposure to antibiotics. Increasing the error rate during ribosomal decoding promoted competence, while reducing the error rate below the baseline level repressed the development of both spontaneous and antibiotic-induced competence. This pattern of regulation was promoted by the bacterial HtrA serine protease. Analysis of strains with the htrA (S234A) catalytic site mutation showed that the proteolytic activity of HtrA selectively repressed competence when translational fidelity was high but not when accuracy was low. These findings redefine the pneumococcal competence pathway as a response to errors during protein synthesis. This response has the capacity to address the immediate challenge of misfolded proteins through production of chaperones and proteases and may also be able to address, through genetic exchange, upstream coding errors that cause intrinsic protein folding defects. The competence pathway may thereby represent a strategy for dealing with lesions that impair proper protein coding and for maintaining the coding integrity of the genome. The signaling pathway that governs competence in the human respiratory tract pathogen Streptococcus pneumoniae regulates both genetic transformation and the production of cellular chaperones and proteases. The current study shows that this pathway is sensitively controlled in response to changes in the accuracy of protein synthesis. Increasing the error rate during ribosomal decoding induced competence, while decreasing the error rate repressed competence. This pattern of regulation was promoted by the HtrA protease, which selectively repressed competence when translational fidelity was high but not when accuracy was low. Our findings demonstrate that this organism is able to monitor the accuracy of information used for protein biosynthesis and suggest that errors trigger a response addressing both the immediate challenge of misfolded proteins and, through genetic exchange, upstream coding errors that may underlie protein folding defects. This pathway may represent an evolutionary strategy for maintaining the coding integrity of the genome.
Addressing the Hard Factors for Command File Errors by Probabilistic Reasoning
NASA Technical Reports Server (NTRS)
Meshkat, Leila; Bryant, Larry
2014-01-01
Command File Errors (CFEs) are managed using standard risk management approaches at the Jet Propulsion Laboratory. Over the last few years, more emphasis has been placed on the collection, organization, and analysis of these errors for the purpose of reducing CFE rates. More recently, probabilistic modeling techniques have been used for more in-depth analysis of the perceived error rates of the DAWN mission and for managing the soft factors in the upcoming phases of the mission. We broadly classify the factors that can lead to CFEs as soft factors, which relate to the cognition of the operators, and hard factors, which relate to the mission system composed of the hardware, software and procedures used for the generation, verification and validation, and execution of commands. The focus of this paper is to use probabilistic models that represent multiple missions at JPL to determine the root causes and sensitivities of the various components of the mission system and to develop recommendations and techniques for addressing them. The customization of these multi-mission models to a sample interplanetary spacecraft is done for this purpose.
Evaluation of analytical errors in a clinical chemistry laboratory: a 3 year experience.
Sakyi, As; Laing, Ef; Ephraim, Rk; Asibey, Of; Sadique, Ok
2015-01-01
Proficient laboratory service is the cornerstone of modern healthcare systems and has an impact on over 70% of medical decisions on admission, discharge, and medications. In recent years, there has been increasing awareness of the importance of errors in laboratory practice and their possible negative impact on patient outcomes. We retrospectively analyzed data spanning a period of 3 years on analytical errors observed in our laboratory. The data covered errors over the whole testing cycle including pre-, intra-, and post-analytical phases, and we discuss strategies pertinent to our setting to minimize their occurrence. We describe the occurrence of pre-analytical, analytical and post-analytical errors observed at the Komfo Anokye Teaching Hospital clinical biochemistry laboratory during a 3-year period from January, 2010 to December, 2012. Data were analyzed with GraphPad Prism 5 (GraphPad Software Inc., CA, USA). A total of 589,510 tests were performed on 188,503 outpatients and hospitalized patients. The overall error rate for the 3 years was 4.7% (27,520/58,950). Pre-analytical, analytical and post-analytical errors contributed 3.7% (2210/58,950), 0.1% (108/58,950), and 0.9% (512/58,950), respectively. The number of tests decreased significantly over the 3-year period, but this did not correspond with a reduction in the overall error rate (P = 0.90) over the years. Errors are embedded throughout our total testing process, especially in the pre-analytical and post-analytical phases. Strategic measures, including quality assessment programs for staff involved in pre-analytical processes, should be intensified.
Oldland, Alan R.; May, Sondra K.; Barber, Gerard R.; Stolpman, Nancy M.
2015-01-01
Purpose: To measure the effects associated with sequential implementation of electronic medication storage and inventory systems and product verification devices on pharmacy technical accuracy and rates of potential medication dispensing errors in an academic medical center. Methods: During four 28-day periods of observation, pharmacists recorded all technical errors identified at the final visual check of pharmaceuticals prior to dispensing. Technical filling errors involving deviations from order-specific selection of product, dosage form, strength, or quantity were documented when dispensing medications using (a) a conventional unit dose (UD) drug distribution system, (b) an electronic storage and inventory system utilizing automated dispensing cabinets (ADCs) within the pharmacy, (c) ADCs combined with barcode (BC) verification, and (d) ADCs and BC verification utilized with changes in product labeling and individualized personnel training in systems application. Results: Using a conventional UD system, the overall incidence of technical error was 0.157% (24/15,271). Following implementation of ADCs, the comparative overall incidence of technical error was 0.135% (10/7,379; P = .841). Following implementation of BC scanning, the comparative overall incidence of technical error was 0.137% (27/19,708; P = .729). Subsequent changes in product labeling and intensified staff training in the use of BC systems was associated with a decrease in the rate of technical error to 0.050% (13/26,200; P = .002). Conclusions: Pharmacy ADCs and BC systems provide complementary effects that improve technical accuracy and reduce the incidence of potential medication dispensing errors if this technology is used with comprehensive personnel training. PMID:25684799
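The exact statistical test behind the quoted P values is not stated in this excerpt; a two-sample test of proportions is one plausible way such comparisons could be made, sketched below with the reported counts.

```python
# Plausible (not confirmed) analysis: compare dispensing-error proportions
# between two observation periods with a two-proportion z-test.
from statsmodels.stats.proportion import proportions_ztest

# conventional unit-dose period vs. ADC + barcode + retraining period
errors = [24, 13]
doses = [15271, 26200]
stat, pvalue = proportions_ztest(count=errors, nobs=doses)
print(f"z = {stat:.2f}, p = {pvalue:.4f}")   # error rates 0.157% vs 0.050%
```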
Morrison, Aileen P; Tanasijevic, Milenko J; Goonan, Ellen M; Lobo, Margaret M; Bates, Michael M; Lipsitz, Stuart R; Bates, David W; Melanson, Stacy E F
2010-06-01
Ensuring accurate patient identification is central to preventing medical errors, but it can be challenging. We implemented a bar code-based positive patient identification system for use in inpatient phlebotomy. A before-after design was used to evaluate the impact of the identification system on the frequency of mislabeled and unlabeled samples reported in our laboratory. Labeling errors fell from 5.45 in 10,000 before implementation to 3.2 in 10,000 afterward (P = .0013). An estimated 108 mislabeling events were prevented by the identification system in 1 year. Furthermore, a workflow step requiring manual preprinting of labels, which was accompanied by potential labeling errors in about one quarter of blood "draws," was removed as a result of the new system. After implementation, a higher percentage of patients reported having their wristband checked before phlebotomy. Bar code technology significantly reduced the rate of specimen identification errors.
Errors in clinical laboratories or errors in laboratory medicine?
Plebani, Mario
2006-01-01
Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes in pre- and post-examination steps must be minimized to guarantee the total quality of laboratory services.
Hartwell, Christopher J; Campion, Michael A
2016-06-01
This study explores normative feedback as a way to reduce rating errors and increase the reliability and validity of structured interview ratings. Based in control theory and social comparison theory, we propose a model of normative feedback interventions (NFIs) in the context of structured interviews and test our model using data from over 20,000 interviews conducted by more than 100 interviewers over a period of more than 4 years. Results indicate that lenient and severe interviewers reduced discrepancies between their ratings and the overall normative mean rating after receipt of normative feedback, though changes were greater for lenient interviewers. When various waves of feedback were presented in later NFIs, the combined normative mean rating over multiple time periods was more predictive of subsequent rating changes than the normative mean rating from the most recent time period. Mean within-interviewer rating variance, along with interrater agreement and interrater reliability, increased after the initial NFI, but results from later NFIs were more complex and revealed that feedback interventions may lose effectiveness over time. A second study using simulated data indicated that leniency and severity errors did not impact rating validity, but did affect which applicants were hired. We conclude that giving normative feedback to interviewers will aid in minimizing interviewer rating differences and enhance the reliability of structured interview ratings. We suggest that interviewer feedback might be considered as a potential new component of interview structure, though future research is needed before a definitive conclusion can be drawn. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Multiple description distributed image coding with side information for mobile wireless transmission
NASA Astrophysics Data System (ADS)
Wu, Min; Song, Daewon; Chen, Chang Wen
2005-03-01
Multiple description coding (MDC) is a source coding technique that codes the source information into multiple descriptions and then transmits them over different channels in a packet network or an error-prone wireless environment, to achieve graceful degradation if parts of the descriptions are lost at the receiver. In this paper, we propose a multiple description distributed wavelet zero-tree image coding system for mobile wireless transmission. We provide two innovations to achieve an excellent error-resilient capability. First, when MDC is applied to wavelet subband-based image coding, it is possible to introduce correlation between the descriptions in each subband. We use this correlation, together with a potentially error-corrupted description, as side information in the decoding, formulating MDC decoding as a Wyner-Ziv decoding problem. If only part of a description is lost, its correlation information is still available, and the proposed Wyner-Ziv decoder can recover the description by using the correlation information and the error-corrupted description as side information. Secondly, within each description, single-bitstream wavelet zero-tree coding is very vulnerable to channel errors: the first bit error may cause the decoder to discard all subsequent bits whether or not they are correctly received. Therefore, we integrate multiple description scalar quantization (MDSQ) with the multiple wavelet tree image coding method to reduce error propagation. We first group wavelet coefficients into multiple trees according to the parent-child relationship and then code them separately with the SPIHT algorithm to form multiple bitstreams. Such decomposition reduces error propagation and therefore improves the error-correcting capability of the Wyner-Ziv decoder. Experimental results show that the proposed scheme not only exhibits excellent error-resilient performance but also demonstrates graceful degradation with increasing packet loss rate.
Inui, Hiroshi; Taketomi, Shuji; Tahara, Keitarou; Yamagami, Ryota; Sanada, Takaki; Tanaka, Sakae
2017-03-01
Bone cutting errors can cause malalignment of unicompartmental knee arthroplasties (UKA). Although the extent of tibial malalignment due to horizontal cutting errors has been well reported, there is a lack of studies evaluating malalignment as a consequence of keel cutting errors, particularly in the Oxford UKA. The purpose of this study was to examine keel cutting errors during Oxford UKA placement using a navigation system and to clarify whether two different tibial keel cutting techniques would have different error rates. The alignment of the tibial cut surface after a horizontal osteotomy and the surface of the tibial trial component was measured with a navigation system. Cutting error was defined as the angular difference between these measurements. The following two techniques were used: the standard "pushing" technique in 83 patients (group P) and a modified "dolphin" technique in 41 patients (group D). In all 123 patients studied, the mean absolute keel cutting error was 1.7° and 1.4° in the coronal and sagittal planes, respectively. In group P, there were 22 outlier patients (27 %) in the coronal plane and 13 (16 %) in the sagittal plane. Group D had three outlier patients (8 %) in the coronal plane and none (0 %) in the sagittal plane. Significant differences were observed in the outlier ratio of these techniques in both the sagittal (P = 0.014) and coronal (P = 0.008) planes. Our study demonstrated overall keel cutting errors of 1.7° in the coronal plane and 1.4° in the sagittal plane. The "dolphin" technique was found to significantly reduce keel cutting errors on the tibial side. This technique will be useful for accurate component positioning and therefore improve the longevity of Oxford UKAs. Retrospective comparative study, Level III.
Colour and spatial cueing in low-prevalence visual search.
Russell, Nicholas C C; Kunar, Melina A
2012-01-01
In visual search, 30-40% of targets with a prevalence rate of 2% are missed, compared to 7% of targets with a prevalence rate of 50% (Wolfe, Horowitz, & Kenner, 2005). This "low-prevalence" (LP) effect is thought to occur as participants are making motor errors, changing their response criteria, and/or quitting their search too soon. We investigate whether colour and spatial cues, known to improve visual search when the target has a high prevalence (HP), benefit search when the target is rare. Experiments 1 and 2 showed that although knowledge of the target's colour reduces miss errors overall, it does not eliminate the LP effect as more targets were missed at LP than at HP. Furthermore, detection of a rare target is significantly impaired if it appears in an unexpected colour-more so than if the prevalence of the target is high (Experiment 2). Experiment 3 showed that, if a rare target is exogenously cued, target detection is improved but still impaired relative to high-prevalence conditions. Furthermore, if the cue is absent or invalid, the percentage of missed targets increases. Participants were given the option to correct motor errors in all three experiments, which reduced but did not eliminate the LP effect. The results suggest that although valid colour and spatial cues improve target detection, participants still miss more targets at LP than at HP. Furthermore, invalid cues at LP are very costly in terms of miss errors. We discuss our findings in relation to current theories and applications of LP search.
Impact of Temporal Masking of Flip-Flop Upsets on Soft Error Rates of Sequential Circuits
NASA Astrophysics Data System (ADS)
Chen, R. M.; Mahatme, N. N.; Diggins, Z. J.; Wang, L.; Zhang, E. X.; Chen, Y. P.; Liu, Y. N.; Narasimham, B.; Witulski, A. F.; Bhuva, B. L.; Fleetwood, D. M.
2017-08-01
Reductions in single-event (SE) upset (SEU) rates for sequential circuits due to temporal masking effects are evaluated. The impacts of supply voltage, combinational-logic delay, flip-flop (FF) SEU performance, and particle linear energy transfer (LET) values are analyzed for SE cross sections of sequential circuits. Alpha particles and heavy ions with different LET values are used to characterize the circuits fabricated at the 40-nm bulk CMOS technology node. Experimental results show that increasing the delay of the logic circuit present between FFs and decreasing the supply voltage are two effective ways of reducing SE error rates for sequential circuits for particles with low LET values due to temporal masking. SEU-hardened FFs benefit less from temporal masking than conventional FFs. Circuit hardening implications for SEU-hardened and unhardened FFs are discussed.
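A back-of-the-envelope sketch of the temporal-masking idea is shown below: it assumes a flip-flop upset matters only if it occurs early enough in the clock cycle to propagate through the downstream logic before the next capture edge, so that longer combinational delay masks a larger fraction of upsets. This is a deliberate simplification of the effects studied above, with hypothetical numbers.

```python
# Simplified temporal-masking derating of a flip-flop soft error rate.
def derated_ser(raw_ser, t_clk_ns, t_logic_ns):
    vulnerable_fraction = max(0.0, (t_clk_ns - t_logic_ns) / t_clk_ns)
    return raw_ser * vulnerable_fraction

raw = 1e-7                       # hypothetical upsets per flip-flop per second
for t_logic in (0.2, 0.5, 0.8):  # ns of logic delay in a 1 ns (1 GHz) cycle
    print(t_logic, "ns logic delay ->", derated_ser(raw, 1.0, t_logic))
```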
Simulating and Detecting Radiation-Induced Errors for Onboard Machine Learning
NASA Technical Reports Server (NTRS)
Wagstaff, Kiri L.; Bornstein, Benjamin; Granat, Robert; Tang, Benyang; Turmon, Michael
2009-01-01
Spacecraft processors and memory are subjected to high radiation doses and therefore employ radiation-hardened components. However, these components are orders of magnitude more expensive than typical desktop components, and they lag years behind in terms of speed and size. We have integrated algorithm-based fault tolerance (ABFT) methods into onboard data analysis algorithms to detect radiation-induced errors, which ultimately may permit the use of spacecraft memory that need not be fully hardened, reducing cost and increasing capability at the same time. We have also developed a lightweight software radiation simulator, BITFLIPS, that permits evaluation of error detection strategies in a controlled fashion, including the specification of the radiation rate and selective exposure of individual data structures. Using BITFLIPS, we evaluated our error detection methods when using a support vector machine to analyze data collected by the Mars Odyssey spacecraft. We found ABFT error detection for matrix multiplication is very successful, while error detection for Gaussian kernel computation still has room for improvement.
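A minimal sketch of ABFT for matrix multiplication, the case the abstract reports as very successful, is shown below; this is a generic checksum construction, not the flight code.

```python
# ABFT sketch: append a column-checksum row to A and a row-checksum column to B,
# multiply, and verify the product's checksums to detect a radiation-induced flip.
import numpy as np

def abft_matmul(A, B, tol=1e-8):
    Ac = np.vstack([A, A.sum(axis=0)])                   # column-checksum matrix
    Br = np.hstack([B, B.sum(axis=1, keepdims=True)])    # row-checksum matrix
    C_full = Ac @ Br
    C = C_full[:-1, :-1]
    row_ok = np.allclose(C_full[-1, :-1], C.sum(axis=0), atol=tol)
    col_ok = np.allclose(C_full[:-1, -1], C.sum(axis=1), atol=tol)
    return C, (row_ok and col_ok)

A = np.random.rand(4, 3)
B = np.random.rand(3, 5)
C, ok = abft_matmul(A, B)
print("checksums consistent:", ok)

# simulate a bit-flip-like corruption of one product entry and re-check
C_full = np.vstack([A, A.sum(axis=0)]) @ np.hstack([B, B.sum(axis=1, keepdims=True)])
C_full[1, 2] += 0.5
row_ok = np.allclose(C_full[-1, :-1], C_full[:-1, :-1].sum(axis=0))
col_ok = np.allclose(C_full[:-1, -1], C_full[:-1, :-1].sum(axis=1))
print("corruption detected:", not (row_ok and col_ok))
```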
Selectively Encrypted Pull-Up Based Watermarking of Biometric data
NASA Astrophysics Data System (ADS)
Shinde, S. A.; Patel, Kushal S.
2012-10-01
Biometric authentication systems are becoming increasingly popular due to their potential usage in information security. However, digital biometric data (e.g. thumb impressions) are themselves vulnerable to security attacks. Various methods are available to secure biometric data. In biometric watermarking, the data are embedded in an image container and are only retrieved if the secret key is available. This container image is encrypted to provide more security against attack. As wireless devices run on battery power, they have limited computational capabilities; therefore, to reduce energy consumption, we use selective encryption of the container image. The bit pull-up-based biometric watermarking scheme is based on amplitude modulation and bit priority, which reduces the retrieval error rate to a great extent. By using a selective encryption mechanism we expect greater time efficiency during both encryption and decryption. A significant reduction in error rate is expected to be achieved by the bit pull-up method.
Smart photodetector arrays for error control in page-oriented optical memory
NASA Astrophysics Data System (ADS)
Schaffer, Maureen Elizabeth
1998-12-01
Page-oriented optical memories (POMs) have been proposed to meet high speed, high capacity storage requirements for input/output intensive computer applications. This technology offers the capability for storage and retrieval of optical data in two-dimensional pages resulting in high throughput data rates. Since currently measured raw bit error rates for these systems fall several orders of magnitude short of industry requirements for binary data storage, powerful error control codes must be adopted. These codes must be designed to take advantage of the two-dimensional memory output. In addition, POMs require an optoelectronic interface to transfer the optical data pages to one or more electronic host systems. Conventional charge coupled device (CCD) arrays can receive optical data in parallel, but the relatively slow serial electronic output of these devices creates a system bottleneck thereby eliminating the POM advantage of high transfer rates. Also, CCD arrays are "unintelligent" interfaces in that they offer little data processing capabilities. The optical data page can be received by two-dimensional arrays of "smart" photo-detector elements that replace conventional CCD arrays. These smart photodetector arrays (SPAs) can perform fast parallel data decoding and error control, thereby providing an efficient optoelectronic interface between the memory and the electronic computer. This approach optimizes the computer memory system by combining the massive parallelism and high speed of optics with the diverse functionality, low cost, and local interconnection efficiency of electronics. In this dissertation we examine the design of smart photodetector arrays for use as the optoelectronic interface for page-oriented optical memory. We review options and technologies for SPA fabrication, develop SPA requirements, and determine SPA scalability constraints with respect to pixel complexity, electrical power dissipation, and optical power limits. Next, we examine data modulation and error correction coding for the purpose of error control in the POM system. These techniques are adapted, where possible, for 2D data and evaluated as to their suitability for a SPA implementation in terms of BER, code rate, decoder time and pixel complexity. Our analysis shows that differential data modulation combined with relatively simple block codes known as array codes provide a powerful means to achieve the desired data transfer rates while reducing error rates to industry requirements. Finally, we demonstrate the first smart photodetector array designed to perform parallel error correction on an entire page of data and satisfy the sustained data rates of page-oriented optical memories. Our implementation integrates a monolithic PN photodiode array and differential input receiver for optoelectronic signal conversion with a cluster error correction code using 0.35-µm CMOS. This approach provides high sensitivity, low electrical power dissipation, and fast parallel correction of 2 x 2-bit cluster errors in an 8 x 8-bit code block to achieve corrected output data rates scalable to 102 Gbps in the current technology increasing to 1.88 Tbps in 0.1-µm CMOS.
Reduction in Chemotherapy Mixing Errors Using Six Sigma: Illinois CancerCare Experience.
Heard, Bridgette; Miller, Laura; Kumar, Pankaj
2012-03-01
Chemotherapy mixing errors (CTMRs), although rare, have serious consequences. Illinois CancerCare is a large practice with multiple satellite offices. The goal of this study was to reduce the number of CTMRs using Six Sigma methods. A Six Sigma team consisting of five participants (registered nurses and pharmacy technicians [PTs]) was formed. The team had 10 hours of Six Sigma training in the DMAIC (ie, Define, Measure, Analyze, Improve, Control) process. Measurement of errors started from the time the CT order was verified by the PT to the time of CT administration by the nurse. Data collection included retrospective error tracking software, system audits, and staff surveys. Root causes of CTMRs included inadequate knowledge of CT mixing protocol, inconsistencies in checking methods, and frequent changes in staffing of clinics. Initial CTMRs (n = 33,259) constituted 0.050%, with 77% of these errors affecting patients. The action plan included checklists, education, and competency testing. The postimplementation error rate (n = 33,376, annualized) over a 3-month period was reduced to 0.019%, with only 15% of errors affecting patients. Initial Sigma was calculated at 4.2; this process resulted in the improvement of Sigma to 5.2, representing a 100-fold reduction. Financial analysis demonstrated a reduction in annualized loss of revenue (administration charges and drug wastage) from $11,537.95 (Medicare Average Sales Price) before the start of the project to $1,262.40. The Six Sigma process is a powerful technique in the reduction of CTMRs.
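For readers unfamiliar with the Six Sigma bookkeeping, the sketch below converts a defect rate to defects per million opportunities (DPMO) and to a sigma level using the conventional 1.5-sigma shift. How opportunities were counted in this project is not stated, so these numbers are illustrative and need not reproduce the reported sigma values exactly.

```python
# Conventional DPMO -> sigma-level conversion (with the customary 1.5-sigma shift).
from scipy.stats import norm

def sigma_level(defect_rate, shift=1.5):
    dpmo = defect_rate * 1_000_000
    return norm.ppf(1.0 - defect_rate) + shift, dpmo

for rate in (0.00050, 0.00019):      # pre- and post-intervention mixing error rates
    level, dpmo = sigma_level(rate)
    print(f"error rate {rate:.5f} -> DPMO {dpmo:.0f} -> sigma level ~ {level:.2f}")
```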
Roch, Marie A; Stinner-Sloan, Johanna; Baumann-Pickering, Simone; Wiggins, Sean M
2015-01-01
A concern for applications of machine learning techniques to bioacoustics is whether or not classifiers learn the categories for which they were trained. Unfortunately, information such as characteristics of specific recording equipment or noise environments can also be learned. This question is examined in the context of identifying delphinid species by their echolocation clicks. To reduce the ambiguity between species classification performance and other confounding factors, species whose clicks can be readily distinguished were used in this study: Pacific white-sided and Risso's dolphins. A subset of data from autonomous acoustic recorders located at seven sites in the Southern California Bight collected between 2006 and 2012 was selected. Cepstral-based features were extracted for each echolocation click and Gaussian mixture models were used to classify groups of 100 clicks. One hundred Monte-Carlo three-fold experiments were conducted to examine classification performance where fold composition was determined by acoustic encounter, recorder characteristics, or recording site. The error rate increased from 6.1% when grouped by acoustic encounter to 18.1%, 46.2%, and 33.2% for grouping by equipment, equipment category, and site, respectively. A noise compensation technique reduced error for these grouping schemes to 2.7%, 4.4%, 6.7%, and 11.4%, respectively, a reduction in error rate of 56%-86%.
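A minimal per-class Gaussian mixture classifier over cepstral feature vectors, in the spirit of the method described above, is sketched below; real cepstral extraction, grouping into 100-click bins, and the noise compensation step are omitted, and the data are synthetic.

```python
# Per-class GMM classification of click feature vectors (synthetic illustration).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
train = {
    "Pacific white-sided": rng.normal(0.0, 1.0, size=(2000, 12)),
    "Risso's":             rng.normal(0.8, 1.0, size=(2000, 12)),
}
models = {sp: GaussianMixture(n_components=8, covariance_type="diag",
                              random_state=0).fit(X)
          for sp, X in train.items()}

def classify_group(clicks):
    """Score a group of click feature vectors under each species model and
    return the species with the highest summed log-likelihood."""
    scores = {sp: gmm.score_samples(clicks).sum() for sp, gmm in models.items()}
    return max(scores, key=scores.get)

test_group = rng.normal(0.8, 1.0, size=(100, 12))   # 100 clicks from one encounter
print(classify_group(test_group))
```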
NASA Astrophysics Data System (ADS)
Xue, ShiChuan; Wu, JunJie; Xu, Ping; Yang, XueJun
2018-02-01
Quantum computing is a significant computing capability that is superior to classical computing because of its superposition feature. Distinguishing several quantum states from quantum algorithm outputs is often a vital computational task. In most cases, the quantum states tend to be non-orthogonal due to superposition; quantum mechanics shows that perfect discrimination cannot be achieved by measurement, forcing repeated measurements. Hence, it is important to determine the optimum measurement method, one that requires fewer repetitions and gives a lower error rate. However, extending current measurement approaches, which mainly target quantum cryptography, to multi-qubit situations for quantum computing confronts challenges, such as the need for global operations, which have considerable costs in the experimental realm. Therefore, in this study, we propose an optimum subsystem method to avoid these difficulties. We provide an analysis comparing the reduced subsystem method with the global minimum error method for two-qubit problems; the conclusions have been verified experimentally. The results show that the subsystem method can effectively discriminate non-orthogonal two-qubit states, such as separable states, entangled pure states, and mixed states; the cost of the experimental process is significantly reduced, in most circumstances, with an acceptable error rate. We believe the optimal subsystem method is the most valuable and promising approach for multi-qubit quantum computing applications.
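For reference, the global minimum-error benchmark mentioned above is, for two equiprobable pure states, given by the Helstrom bound; the sketch below computes it for two illustrative (not the paper's) two-qubit states.

```python
# Helstrom bound: minimum error probability for discriminating two equiprobable
# pure states, P_err = 0.5 * (1 - sqrt(1 - |<psi|phi>|^2)).
import numpy as np

def helstrom_error(psi, phi):
    psi = psi / np.linalg.norm(psi)
    phi = phi / np.linalg.norm(phi)
    overlap = abs(np.vdot(psi, phi)) ** 2
    return 0.5 * (1.0 - np.sqrt(1.0 - overlap))

# two non-orthogonal two-qubit states (4-dimensional vectors, illustrative)
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)       # a Bell-like entangled state
phi = np.array([1.0, 0.2, 0.0, 0.0])                    # a separable state
print("minimum error probability:", helstrom_error(psi, phi))
```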
Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection.
Meng, Jiahui; Zhao, Danfeng; Tian, Hai; Zhang, Liang
2018-01-15
In order to improve the performance of hard-decision decoding of non-binary low-density parity-check (LDPC) codes and to reduce decoding complexity, a sum-of-the-magnitude hard-decision decoding algorithm based on loop update detection is proposed. This also helps ensure the reliability, stability, and high transmission rate required for 5G mobile communication. The algorithm builds on the hard-decision decoding algorithm (HDA) and uses soft information from the channel to calculate reliability, while the sum of the variable nodes' (VN) magnitudes is excluded when computing the reliability of the parity checks. At the same time, the reliability information of the variable nodes is taken into account and a loop update detection algorithm is introduced. Bits of the erroneous code word are flipped repeatedly, searched in order of decreasing error likelihood, until the correct code word is found. Simulation results show that the performance of one of the improved schemes is better than the weighted symbol flipping (WSF) algorithm under different hexadecimal numbers by about 2.2 dB and 2.35 dB at a bit error rate (BER) of 10^-5 over an additive white Gaussian noise (AWGN) channel, respectively. Furthermore, the average number of decoding iterations is significantly reduced.
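The proposed algorithm works on non-binary symbols with reliability sums and loop-update detection, none of which are reproduced here; as a much simpler illustration of the underlying hard-decision flipping idea, the sketch below implements a plain binary Gallager bit-flipping decoder on a toy code.

```python
# Much-simplified illustration: binary Gallager bit-flipping decoding.
import numpy as np

def bit_flip_decode(H, r, max_iters=50):
    """H: parity-check matrix (0/1), r: received hard-decision bits (0/1)."""
    x = r.copy()
    for _ in range(max_iters):
        syndrome = H.dot(x) % 2
        if not syndrome.any():
            return x, True                     # all parity checks satisfied
        # count, for each bit, how many unsatisfied checks it participates in
        unsat_counts = H.T.dot(syndrome)
        worst = unsat_counts.max()
        x = np.where(unsat_counts == worst, x ^ 1, x)   # flip the most suspect bits
    return x, False

# toy parity-check matrix and a single-bit channel error on the all-zero codeword
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
received = np.zeros(7, dtype=int)
received[3] ^= 1
print(bit_flip_decode(H, received))
```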
Sayler, Elaine; Eldredge-Hindy, Harriet; Dinome, Jessie; Lockamy, Virginia; Harrison, Amy S
2015-01-01
The planning procedure for Valencia and Leipzig surface applicators (VLSAs) (Nucletron, Veenendaal, The Netherlands) differs substantially from CT-based planning; the unfamiliarity could lead to significant errors. This study applies failure modes and effects analysis (FMEA) to high-dose-rate (HDR) skin brachytherapy using VLSAs to ensure safety and quality. A multidisciplinary team created a protocol for HDR VLSA skin treatments and applied FMEA. Failure modes were identified and scored by severity, occurrence, and detectability. The clinical procedure was then revised to address high-scoring process nodes. Several key components were added to the protocol to minimize risk probability numbers. (1) Diagnosis, prescription, applicator selection, and setup are reviewed at weekly quality assurance rounds. Peer review reduces the likelihood of an inappropriate treatment regime. (2) A template for HDR skin treatments was established in the clinic's electronic medical record system to standardize treatment instructions. This reduces the chances of miscommunication between the physician and planner as well as increases the detectability of an error. (3) A screen check was implemented during the second check to increase detectability of an error. (4) To reduce error probability, the treatment plan worksheet was designed to display plan parameters in a format visually similar to the treatment console display, facilitating data entry and verification. (5) VLSAs are color coded and labeled to match the electronic medical record prescriptions, simplifying in-room selection and verification. Multidisciplinary planning and FMEA increased detectability and reduced error probability during VLSA HDR brachytherapy. This clinical model may be useful to institutions implementing similar procedures. Copyright © 2015 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
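A minimal sketch of the FMEA scoring step described above is shown below: each failure mode gets severity, occurrence, and detectability scores, and their product (the risk priority number) ranks what to fix first. The failure modes and scores here are hypothetical, not the protocol's.

```python
# FMEA risk priority numbers (RPN = severity * occurrence * detectability, each 1-10).
failure_modes = [
    ("wrong applicator size selected",            7, 3, 4),
    ("prescription transcribed incorrectly",      8, 2, 3),
    ("treatment parameters mistyped at console",  9, 2, 5),
    ("applicator mislabeled in treatment room",   6, 2, 2),
]

scored = [(name, s * o * d) for name, s, o, d in failure_modes]
for name, rpn in sorted(scored, key=lambda t: t[1], reverse=True):
    print(f"RPN {rpn:3d}  {name}")
# High-RPN modes are addressed first; adding a second check or a screen
# verification improves detection, which lowers the D score and hence the RPN.
```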
Incorporating harvest rates into the sex-age-kill model for white-tailed deer
Norton, Andrew S.; Diefenbach, Duane R.; Rosenberry, Christopher S.; Wallingford, Bret D.
2013-01-01
Although monitoring population trends is an essential component of game species management, wildlife managers rarely have complete counts of abundance. Often, they rely on population models to monitor population trends. As imperfect representations of real-world populations, models must be rigorously evaluated to be applied appropriately. Previous research has evaluated population models for white-tailed deer (Odocoileus virginianus); however, the precision and reliability of these models have rarely been tested against empirical measures of variability and bias. We were able to statistically evaluate the Pennsylvania sex-age-kill (PASAK) population model using realistic error measured with data from 1,131 radiocollared white-tailed deer in Pennsylvania from 2002 to 2008. We used these data and harvest data (number killed, age-sex structure, etc.) to estimate precision of abundance estimates, identify the most efficient harvest data collection with respect to precision of parameter estimates, and evaluate PASAK model robustness to violation of assumptions. Median coefficient of variation (CV) estimates by Wildlife Management Unit, 13.2% in the most recent year, were slightly above benchmarks recommended for managing game species populations. Doubling reporting rates by hunters or doubling the number of deer checked by personnel in the field reduced median CVs to recommended levels. The PASAK model was robust to errors in estimates for adult male harvest rates but was sensitive to errors in subadult male harvest rates, especially in populations with lower harvest rates. In particular, an error in subadult (1.5-yr-old) male harvest rates resulted in the opposite error in subadult male, adult female, and juvenile population estimates. Also, evidence of a greater harvest probability for subadult female deer when compared with adult (≥2.5-yr-old) female deer resulted in a 9.5% underestimate of the population using the PASAK model. Because obtaining appropriate sample sizes, by management unit, to estimate harvest rate parameters each year may be too expensive, assumptions of constant annual harvest rates may be necessary. However, if changes in harvest regulations or hunter behavior influence subadult male harvest rates, the PASAK model could provide an unreliable index to population changes.
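The sketch below is a deliberately simplified illustration, not the PASAK model: in harvest-based reconstruction, abundance is roughly harvest divided by harvest rate, so overestimating the rate underestimates the population (and vice versa), which is why a harvest-rate error propagates into an opposite error in the abundance estimate. All numbers are hypothetical.

```python
# Simplified harvest-rate sensitivity: N ~ harvest / harvest_rate.
def abundance_from_harvest(harvest, harvest_rate):
    return harvest / harvest_rate

harvest = 12000                      # subadult males reported killed (hypothetical)
true_rate = 0.40
for assumed_rate in (0.32, 0.40, 0.48):   # -20%, correct, +20% assumed harvest rate
    est = abundance_from_harvest(harvest, assumed_rate)
    bias = est / abundance_from_harvest(harvest, true_rate) - 1.0
    print(f"assumed rate {assumed_rate:.2f} -> estimate {est:,.0f} ({bias:+.0%} bias)")
```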
An audit on the reporting of critical results in a tertiary institute.
Rensburg, Megan A; Nutt, Louise; Zemlin, Annalise E; Erasmus, Rajiv T
2009-03-01
Critical result reporting is a requirement for accreditation by accreditation bodies worldwide. Accurate, prompt communication of results to the clinician by the laboratory is of extreme importance. Repeating of the critical result by the recipient has been used as a means to improve the accuracy of notification. Our objective was to assess the accuracy of notification of critical chemical pathology laboratory results telephoned out to clinicians/clinical areas. We hypothesize that read-back of telephoned critical laboratory results by the recipient may improve the accuracy of the notification. This was a prospective study, where all critical results telephoned by chemical pathologists and registrars at Tygerberg Hospital were monitored for one month. The recipient was required to repeat the result (patient name, folder number and test results). Any error, as well as the designation of the recipient was logged. Of 472 outgoing telephone calls, 51 errors were detected (error rate 10.8%). Most errors were made when recording the folder number (64.7%), with incorrect patient name being the lowest (5.9%). Calls to the clinicians had the highest error rate (20%), most of them being the omission of recording folder numbers. Our audit highlights the potential errors during the post-analytical phase of laboratory testing. The importance of critical result reporting is still poorly recognized in South Africa. Implementation of a uniform accredited practice for communication of critical results can reduce error and improve patient safety.
Thompson, Shirley; Sawyer, Jennifer; Bonam, Rathan; Valdivia, J E
2009-07-01
The German EPER, TNO, Belgium, LandGEM, and Scholl Canyon models for estimating methane production were compared to methane recovery rates for 35 Canadian landfills, assuming that 20% of emissions were not recovered. Two different fractions of degradable organic carbon (DOC(f)) were applied in all models. Most models performed better when the DOC(f) was 0.5 compared to 0.77. The Belgium, Scholl Canyon, and LandGEM version 2.01 models produced the best results of the existing models with respective mean absolute errors compared to methane generation rates (recovery rates + 20%) of 91%, 71%, and 89% at 0.50 DOC(f) and 171%, 115%, and 81% at 0.77 DOC(f). The Scholl Canyon model typically overestimated methane recovery rates and the LandGEM version 2.01 model, which modifies the Scholl Canyon model by dividing waste by 10, consistently underestimated methane recovery rates; this comparison suggested that modifying the divisor for waste in the Scholl Canyon model between one and ten could improve its accuracy. At 0.50 DOC(f) and 0.77 DOC(f) the modified model had the lowest absolute mean error when divided by 1.5 yielding 63 +/- 45% and 2.3 yielding 57 +/- 47%, respectively. These modified models reduced error and variability substantially and both have a strong correlation of r = 0.92.
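The LandGEM and Scholl Canyon estimates discussed above share the same first-order decay structure, differing mainly in the divisor applied to the annual waste mass. The sketch below illustrates that structure in simplified annual increments; the waste series, k, L0, and the 1.5 divisor are assumed illustrative values, not data from the study.

```python
import math

def methane_generation(waste_by_year, k, L0, divisor, horizon):
    """First-order decay estimate of annual methane generation (m^3/yr).

    waste_by_year: annual waste acceptance (Mg), index 0 = opening year.
    k: decay rate constant (1/yr); L0: methane generation potential (m^3/Mg).
    divisor: waste divisor discussed in the study (1 = Scholl Canyon,
             10 = LandGEM v2.01, ~1.5-2.3 = modified models).
    """
    rates = []
    for t in range(1, horizon + 1):             # years since opening
        q = 0.0
        for i, m in enumerate(waste_by_year):   # each year's waste cohort
            age = t - i
            if age > 0:
                q += k * L0 * (m / divisor) * math.exp(-k * age)
        rates.append(q)
    return rates

# Illustrative inputs only (not values from the paper).
waste = [50000.0] * 20   # 50,000 Mg accepted per year for 20 years
gen = methane_generation(waste, k=0.05, L0=100.0, divisor=1.5, horizon=25)
print(f"Year-25 methane generation: {gen[-1]:.0f} m^3/yr")
```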
NASA Technical Reports Server (NTRS)
Yang, Song; Olson, William S.; Wang, Jian-Jian; Bell, Thomas L.; Smith, Eric A.; Kummerow, Christian D.
2004-01-01
Rainfall rate estimates from space-borne instruments are generally accepted as reliable by a majority of the atmospheric science community. One of the Tropical Rainfall Measuring Mission (TRMM) facility rain rate algorithms is based upon passive microwave observations from the TRMM Microwave Imager (TMI). Part I of this study describes improvements in the TMI algorithm that are required to introduce cloud latent heating and drying as additional algorithm products. Here, estimates of surface rain rate, convective proportion, and latent heating are evaluated using independent ground-based estimates and satellite products. Instantaneous, 0.5°-resolution estimates of surface rain rate over ocean from the improved TMI algorithm are well correlated with independent radar estimates (r approx. 0.88 over the Tropics), but bias reduction is the most significant improvement over forerunning algorithms. The bias reduction is attributed to the greater breadth of cloud-resolving model simulations that support the improved algorithm, and the more consistent and specific convective/stratiform rain separation method utilized. The bias of monthly, 2.5°-resolution estimates is similarly reduced, with comparable correlations to radar estimates. Although the amount of independent latent heating data is limited, TMI estimated latent heating profiles compare favorably with instantaneous estimates based upon dual-Doppler radar observations, and time series of surface rain rate and heating profiles are generally consistent with those derived from rawinsonde analyses. Still, some biases in profile shape are evident, and these may be resolved with: (a) additional contextual information brought to the estimation problem, and/or (b) physically-consistent and representative databases supporting the algorithm. A model of the random error in instantaneous, 0.5°-resolution rain rate estimates appears to be consistent with the levels of error determined from TMI comparisons to collocated radar. Error model modifications for non-raining situations will be required, however. Sampling error appears to represent only a fraction of the total error in monthly, 2.5°-resolution TMI estimates; the remaining error is attributed to physical inconsistency or non-representativeness of cloud-resolving model simulated profiles supporting the algorithm.
Neural Network and Letter Recognition.
NASA Astrophysics Data System (ADS)
Lee, Hue Yeon
Neural net architectures and learning algorithms that recognize 36 handwritten alphanumeric characters are studied. Thin-line input patterns written in a 32 x 32 binary array are used. The system comprises two major components, a preprocessing unit and a recognition unit. The preprocessing unit in turn consists of three layers of neurons: the U-layer, the V-layer, and the C-layer. The function of the U-layer is to extract local features by template matching. The correlations between the detected local features are then considered. By correlating neurons in a plane with their neighboring neurons, the V-layer thickens the on-cells, or lines (groups of on-cells), of the previous layer. These two correlations yield some deformation tolerance and partial rotational tolerance in the system. The C-layer then compresses the data through the Gabor transform. The pattern-dependent choice of centers and wavelengths of the Gabor filters gives the system its shift and scale tolerance. Three different learning schemes were investigated in the recognition unit: error back-propagation learning with hidden units, simple perceptron learning, and competitive learning. Their performances were analyzed and compared. Since the network sometimes fails to distinguish between two letters that are inherently similar, additional ambiguity-resolving neural nets are introduced on top of the main neural net. The two-dimensional Fourier transform is used as the preprocessing and a perceptron as the recognition unit of the ambiguity resolver. Handwriting sets from one hundred different people were collected. Some of these are used as training sets and the remainder as test sets. The correct recognition rate of the system increases with the number of training sets and eventually saturates at a certain value. Similar recognition rates are obtained for the three learning algorithms. The minimum error rate, 4.9%, is achieved for alphanumeric sets when 50 sets are trained. With the ambiguity resolver, it is reduced to 2.5%. When only numeral sets are trained and tested, a 2.0% error rate is achieved. When only alphabet sets are considered, the error rate is reduced to 1.1%.
Spacecraft and propulsion technician error
NASA Astrophysics Data System (ADS)
Schultz, Daniel Clyde
Commercial aviation and commercial space similarly launch, fly, and land passenger vehicles. Unlike aviation, the U.S. government has not established maintenance policies for commercial space. This study conducted a mixed methods review of 610 U.S. space launches from 1984 through 2011, which included 31 failures. An analysis of the failure causal factors showed that human error accounted for 76% of those failures, which included workmanship error accounting for 29% of the failures. With the imminent future of commercial space travel, the increased potential for the loss of human life demands that changes be made to the standardized procedures, training, and certification to reduce human error and failure rates. Several recommendations were made by this study to the FAA's Office of Commercial Space Transportation, space launch vehicle operators, and maintenance technician schools in an effort to increase the safety of the space transportation passengers.
Crosstalk eliminating and low-density parity-check codes for photochromic dual-wavelength storage
NASA Astrophysics Data System (ADS)
Wang, Meicong; Xiong, Jianping; Jian, Jiqi; Jia, Huibo
2005-01-01
Multi-wavelength storage is an approach to increasing memory density, but the accompanying crosstalk must be dealt with. Building on investigations of LDPC codes in optical data storage, we apply Low-Density Parity-Check (LDPC) codes as error-correcting codes in photochromic dual-wavelength optical storage. A crosstalk-reduction method is applied, and simulation results show that this operation improves bit error rate (BER) performance. The results also show that LDPC codes outperform RS codes in the crosstalk channel.
Data entry errors and design for model-based tight glycemic control in critical care.
Ward, Logan; Steel, James; Le Compte, Aaron; Evans, Alicia; Tan, Chia-Siong; Penning, Sophie; Shaw, Geoffrey M; Desaive, Thomas; Chase, J Geoffrey
2012-01-01
Tight glycemic control (TGC) has shown benefits but has been difficult to achieve consistently. Model-based methods and computerized protocols offer the opportunity to improve TGC quality but require human data entry, particularly of blood glucose (BG) values, which can be significantly prone to error. This study presents the design and optimization of data entry methods to minimize error for a computerized and model-based TGC method prior to pilot clinical trials. To minimize data entry error, two tests were carried out to optimize a method with errors less than the 5%-plus reported in other studies. Four initial methods were tested on 40 subjects in random order, and the best two were tested more rigorously on 34 subjects. The tests measured entry speed and accuracy. Errors were reported as corrected and uncorrected errors, with the sum comprising a total error rate. The first set of tests used randomly selected values, while the second set used the same values for all subjects to allow comparisons across users and direct assessment of the magnitude of errors. These research tests were approved by the University of Canterbury Ethics Committee. The final data entry method tested reduced errors to less than 1-2%, a 60-80% reduction from reported values. The magnitude of errors was clinically significant, typically 10.0 mmol/liter or an order of magnitude, but only for extreme values of BG < 2.0 mmol/liter or BG > 15.0-20.0 mmol/liter, both of which could be easily corrected with automated checking of extreme values for safety. The data entry method selected significantly reduced data entry errors in the limited design tests presented, and is in use on a clinical pilot TGC study. The overall approach and testing methods are easily performed and generalizable to other applications and protocols. © 2012 Diabetes Technology Society.
Nondimensional parameter for conformal grinding: combining machine and process parameters
NASA Astrophysics Data System (ADS)
Funkenbusch, Paul D.; Takahashi, Toshio; Gracewski, Sheryl M.; Ruckman, Jeffrey L.
1999-11-01
Conformal grinding of optical materials with CNC (Computer Numerical Control) machining equipment can be used to achieve precise control over complex part configurations. However, complications can arise due to the need to fabricate complex geometrical shapes at reasonable production rates. For example, high machine stiffness is essential, but the need to grind 'inside' small or highly concave surfaces may require use of tooling with less than ideal stiffness characteristics. If grinding generates loads sufficient for significant tool deflection, the programmed removal depth will not be achieved. Moreover, since grinding load is a function of the volumetric removal rate, the amount of load deflection can vary with location on the part, potentially producing complex figure errors. In addition to machine/tool stiffness and removal rate, load generation is a function of the process parameters. For example, by reducing the feed rate of the tool into the part, both the load and the resultant deflection/removal error can be decreased. However, this must be balanced against the need for part throughput. In this paper, a simple model that combines machine stiffness and process parameters into a single nondimensional parameter is adapted to a conformal grinding geometry. Errors in removal can be minimized by maintaining this parameter above a critical value. Moreover, since the value of this parameter depends on the local part geometry, it can be used to optimize process settings during grinding. For example, it may be used to guide adjustment of the feed rate as a function of location on the part to eliminate figure errors while minimizing the total grinding time required.
Murugesan, Yahini Prabha; Alsadoon, Abeer; Manoranjan, Paul; Prasad, P W C
2018-06-01
Augmented reality-based surgeries have not been successfully implemented in oral and maxillofacial areas due to limitations in geometric accuracy and image registration. This paper aims to improve the accuracy and depth perception of the augmented video. The proposed system consists of a rotational matrix and translation vector algorithm to reduce the geometric error and improve the depth perception by including 2 stereo cameras and a translucent mirror in the operating room. The results on the mandible/maxilla area show that the new algorithm improves the video accuracy by 0.30-0.40 mm (in terms of overlay error) and the processing rate to 10-13 frames/s compared to 7-10 frames/s in existing systems. The depth perception increased by 90-100 mm. The proposed system concentrates on reducing the geometric error. Thus, this study provides an acceptable range of accuracy with a shorter operating time, which provides surgeons with a smooth surgical flow. Copyright © 2018 John Wiley & Sons, Ltd.
Automated Processing of Plasma Samples for Lipoprotein Separation by Rate-Zonal Ultracentrifugation.
Peters, Carl N; Evans, Iain E J
2016-12-01
Plasma lipoproteins are the primary means of lipid transport among tissues. Defining alterations in lipid metabolism is critical to our understanding of disease processes. However, lipoprotein measurement is limited to specialized centers. Preparation for ultracentrifugation involves the formation of complex density gradients, a process that is both laborious and subject to handling errors. We created a fully automated device capable of forming the required gradient. The design has been made freely available for download by the authors. It is inexpensive relative to commercial density gradient formers, which generally create linear gradients unsuitable for rate-zonal ultracentrifugation. The design can easily be modified to suit user requirements and any potential future improvements. Evaluation of the device showed reliable peristaltic pump accuracy and precision for fluid delivery. We also demonstrate accurate fluid layering with reduced mixing at the gradient layers when compared to usual practice by experienced laboratory personnel. Reduction in layer mixing is of critical importance for reliable lipoprotein separation. The automated device significantly reduces laboratory staff input and reduces the likelihood of error. Overall, this device creates a simple and effective solution to the formation of complex density gradients. © 2015 Society for Laboratory Automation and Screening.
Evaluation of Analytical Errors in a Clinical Chemistry Laboratory: A 3 Year Experience
Sakyi, AS; Laing, EF; Ephraim, RK; Asibey, OF; Sadique, OK
2015-01-01
Background: Proficient laboratory service is the cornerstone of modern healthcare systems and has an impact on over 70% of medical decisions on admission, discharge, and medications. In recent years, there has been increasing awareness of the importance of errors in laboratory practice and their possible negative impact on patient outcomes. Aim: We retrospectively analyzed data spanning a period of 3 years on analytical errors observed in our laboratory. The data covered errors over the whole testing cycle including pre-, intra-, and post-analytical phases and discussed strategies pertinent to our settings to minimize their occurrence. Materials and Methods: We described the occurrence of pre-analytical, analytical and post-analytical errors observed at the Komfo Anokye Teaching Hospital clinical biochemistry laboratory during a 3-year period from January, 2010 to December, 2012. Data were analyzed with GraphPad Prism 5 (GraphPad Software Inc., CA, USA). Results: A total of 589,510 tests were performed on 188,503 outpatients and hospitalized patients. The overall error rate for the 3 years was 4.7% (27,520/58,950). Pre-analytical, analytical and post-analytical errors contributed 3.7% (2210/58,950), 0.1% (108/58,950), and 0.9% (512/58,950), respectively. The number of tests decreased significantly over the 3-year period, but this was not accompanied by a reduction in the overall error rate (P = 0.90). Conclusion: Analytical errors are embedded within our total process setup, especially the pre-analytical and post-analytical phases. Strategic measures including quality assessment programs for staff involved in pre-analytical processes should be intensified. PMID:25745569
Torres, Viviana; Cerda, Mauricio; Knaup, Petra; Löpprich, Martin
2016-01-01
An important part of the electronic information available in a Hospital Information System (HIS) has the potential to be automatically exported to Electronic Data Capture (EDC) platforms to improve clinical research. This automation has the advantage of reducing manual data transcription, a time-consuming and error-prone process. However, quantitative evaluations of the process of exporting data from a HIS to an EDC system have rarely been reported, particularly in comparison with manual transcription. In this work, an assessment of the quality of an automatic export process, focused on laboratory data from a HIS, is presented. Quality of the laboratory data was assessed for two types of processes: (1) a manual process of data transcription, and (2) an automatic process of data transference. The automatic transference was implemented as an Extract, Transform and Load (ETL) process. A comparison was then carried out between the manual and automatic data collection methods. The criteria used to measure data quality were correctness and completeness. The manual process had a general error rate of 2.6% to 7.1%, with the lowest error rate obtained when data fields without a clear definition were removed from the analysis (p < 10E-3). For the automatic process, the general error rate was 1.9% to 12.1%, with the lowest error rate obtained when excluding information that was missing in the HIS but transcribed into the EDC from other physical sources. The automatic ETL process can be used to collect laboratory data for clinical research if the data in the HIS, as well as physical documentation not included in the HIS, are identified in advance and follow a standardized data collection protocol.
A nudging data assimilation algorithm for the identification of groundwater pumping
NASA Astrophysics Data System (ADS)
Cheng, Wei-Chen; Kendall, Donald R.; Putti, Mario; Yeh, William W.-G.
2009-08-01
This study develops a nudging data assimilation algorithm for estimating unknown pumping from private wells in an aquifer system using measured data of hydraulic head. The proposed algorithm treats the unknown pumping as an additional sink term in the governing equation of groundwater flow and provides a consistent physical interpretation for pumping rate identification. The algorithm identifies the unknown pumping and, at the same time, reduces the forecast error in hydraulic heads. We apply the proposed algorithm to the Las Posas Groundwater Basin in southern California. We consider the following three pumping scenarios: constant pumping rates, spatially varying pumping rates, and temporally varying pumping rates. We also study the impact of head measurement errors on the proposed algorithm. In the case study we seek to estimate the six unknown pumping rates from private wells using head measurements from four observation wells. The results show an excellent rate of convergence for pumping estimation. The case study demonstrates the applicability, accuracy, and efficiency of the proposed data assimilation algorithm for the identification of unknown pumping in an aquifer system.
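A minimal sketch of the nudging idea, using a toy single-cell linear aquifer in place of the full groundwater flow model: the unknown pumping is carried as an extra sink, and both the head and the pumping estimate are nudged by the head misfit. The storage coefficient, recharge, gains, and "true" pumping rate are assumed values for illustration and are not taken from the Las Posas case study.

```python
import numpy as np

# Toy single-cell aquifer: S*dh/dt = R - a*h - Q, with Q the unknown pumping.
rng = np.random.default_rng(0)
S, a, R = 0.2, 0.05, 1.0          # storage, outflow coefficient, recharge (assumed)
Q_true = 0.6                      # "private well" pumping to be identified
dt, n_steps = 1.0, 300
G_h, G_q = 0.3, 0.05              # nudging gains for head and pumping (assumed)

def step(h, Q, dt):
    return h + dt / S * (R - a * h - Q)

# Generate synthetic "observed" heads from the true system (with noise).
h_true, obs = 10.0, []
for _ in range(n_steps):
    h_true = step(h_true, Q_true, dt)
    obs.append(h_true + rng.normal(0.0, 0.02))

# Assimilate: the model starts with no pumping and learns it from the misfit.
h_model, Q_est = 10.0, 0.0
for h_obs in obs:
    h_model = step(h_model, Q_est, dt)
    innov = h_obs - h_model       # forecast error in head
    h_model += G_h * innov        # nudge the state toward the observation
    Q_est -= G_q * innov          # lower head than forecast implies more pumping

print(f"estimated pumping: {Q_est:.3f} (true {Q_true})")
```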
A nudging data assimilation algorithm for the identification of groundwater pumping
NASA Astrophysics Data System (ADS)
Cheng, W.; Kendall, D. R.; Putti, M.; Yeh, W. W.
2008-12-01
This study develops a nudging data assimilation algorithm for estimating unknown pumping from private wells in an aquifer system using measurement data of hydraulic head. The proposed algorithm treats the unknown pumping as an additional sink term in the governing equation of groundwater flow and provides a consistent physical interpretation for pumping rate identification. The algorithm identifies unknown pumping and, at the same time, reduces the forecast error in hydraulic heads. We apply the proposed algorithm to the Las Posas Groundwater Basin in southern California. We consider the following three pumping scenarios: constant pumping rates, spatially varying pumping rates, and temporally varying pumping rates. We also study the impact of head measurement errors on the proposed algorithm. In the case study, we seek to estimate the six unknown pumping rates from private wells using head measurements from four observation wells. The results show an excellent rate of convergence for pumping estimation. The case study demonstrates the applicability, accuracy, and efficiency of the proposed data assimilation algorithm for the identification of unknown pumping in an aquifer system.
Nurses' attitudes and perceived barriers to the reporting of medication administration errors.
Yung, Hai-Peng; Yu, Shu; Chu, Chi; Hou, I-Ching; Tang, Fu-In
2016-07-01
(1) To explore nurses' attitudes and perceived barriers to reporting medication administration errors and (2) to understand the characteristics of error reports and nurses' feelings about them. Under-reporting of medication administration errors is a global concern related to the safety of patient care. Understanding nurses' attitudes and perceived barriers to error reporting is the initial step to increasing the reporting rate. A cross-sectional, descriptive survey with a self-administered questionnaire was completed by the nurses of a medical centre hospital in Taiwan. A total of 306 nurses participated in the study. Nurses' attitudes towards medication administration error reporting were generally positive. The major perceived barrier was fear of the consequences after reporting. The results demonstrated that 88.9% of medication administration errors were reported orally, whereas 19.0% were reported through the hospital internet system. Self-recrimination was a common feeling among nurses after committing a medication administration error. Even though hospital management encourages errors to be reported without recrimination, nurses' attitudes toward medication administration error reporting are not very positive, and fear is the most prominent barrier contributing to underreporting. Nursing managers should establish anonymous reporting systems and counselling classes to create a secure atmosphere, reduce nurses' fear, and provide incentives to encourage reporting. © 2016 John Wiley & Sons Ltd.
A Reduced-Order Model For Zero-Mass Synthetic Jet Actuators
NASA Technical Reports Server (NTRS)
Yamaleev, Nail K.; Carpenter, Mark H.; Vatsa, Veer S.
2007-01-01
Accurate details of the general performance of fluid actuators are desirable over a range of flow conditions, within some predetermined error tolerance. Designers typically model actuators with different levels of fidelity depending on the acceptable level of error in each circumstance. Crude properties of the actuator (e.g., peak mass rate and frequency) may be sufficient for some designs, while detailed information is needed for other applications (e.g., multiple actuator interactions). This work attempts to address two primary objectives. The first objective is to develop a systematic methodology for approximating realistic 3-D fluid actuators, using quasi-1-D reduced-order models. Near full fidelity can be achieved with this approach at a fraction of the cost of full simulation and only a modest increase in cost relative to most actuator models used today. The second objective, which is a direct consequence of the first, is to determine the approximate magnitude of errors committed by actuator model approximations of various fidelities. This objective attempts to identify which model (ranging from simple orifice exit boundary conditions to full numerical simulations of the actuator) is appropriate for a given error tolerance.
Simultaneous Control of Error Rates in fMRI Data Analysis
Kang, Hakmook; Blume, Jeffrey; Ombao, Hernando; Badre, David
2015-01-01
The key idea of statistical hypothesis testing is to fix, and thereby control, the Type I error (false positive) rate across samples of any size. Multiple comparisons inflate the global (family-wise) Type I error rate, and the traditional solution to maintaining control of the error rate is to increase the local (comparison-wise) Type II error (false negative) rates. However, in the analysis of human brain imaging data, the number of comparisons is so large that this solution breaks down: the local Type II error rate ends up being so large that scientifically meaningful analysis is precluded. Here we propose a novel solution to this problem: allow the Type I error rate to converge to zero along with the Type II error rate. It works because when the Type I error rate per comparison is very small, the accumulated (or global) Type I error rate is also small. This solution is achieved by employing the Likelihood paradigm, which uses likelihood ratios to measure the strength of evidence on a voxel-by-voxel basis. In this paper, we provide theoretical and empirical justification for a likelihood approach to the analysis of human brain imaging data. In addition, we present extensive simulations showing that the likelihood approach is viable, leading to 'cleaner'-looking brain maps and operational superiority (lower average error rates). Finally, we include a case study on cognitive control related activation in the prefrontal cortex of the human brain. PMID:26272730
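A toy sketch of the voxel-wise likelihood-ratio idea: for each simulated voxel, the likelihood of a fixed activation mean is compared against the null mean of zero, and voxels are flagged when the ratio exceeds a threshold k, so both empirical error rates can be read off. The effect size, noise level, activation proportion, and threshold are assumptions for illustration, not the authors' settings.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_voxels, n_scans, delta, sigma = 5000, 40, 0.5, 1.0   # assumed values
truly_active = rng.random(n_voxels) < 0.05             # 5% of voxels active

# Simulated per-voxel contrast values across scans.
means = np.where(truly_active, delta, 0.0)
data = rng.normal(means, sigma, size=(n_scans, n_voxels))

# Voxel-wise likelihood ratio: L(mean = delta) / L(mean = 0), pooled over scans.
ll_alt = stats.norm.logpdf(data, loc=delta, scale=sigma).sum(axis=0)
ll_null = stats.norm.logpdf(data, loc=0.0, scale=sigma).sum(axis=0)
lr = np.exp(ll_alt - ll_null)

k = 8.0                                    # evidence threshold (assumed)
flagged = lr >= k
fp = np.mean(flagged[~truly_active])       # empirical Type I error rate
fn = np.mean(~flagged[truly_active])       # empirical Type II error rate
print(f"false positive rate {fp:.4f}, false negative rate {fn:.4f}")
```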
Errors in radiation oncology: A study in pathways and dosimetric impact
Drzymala, Robert E.; Purdy, James A.; Michalski, Jeff
2005-01-01
As complexity for treating patients increases, so does the risk of error. Some publications have suggested that record and verify (R&V) systems may contribute to propagating errors. Direct data transfer has the potential to eliminate most, but not all, errors. Although the dosimetric consequences may be obvious in some cases, a detailed study does not exist. In this effort, we examined potential errors in terms of scenarios, pathways of occurrence, and dosimetry. Our goal was to prioritize error prevention according to likelihood of event and dosimetric impact. For conventional photon treatments, we investigated errors of incorrect source-to-surface distance (SSD), energy, omitted wedge (physical, dynamic, or universal) or compensating filter, incorrect wedge or compensating filter orientation, improper rotational rate for arc therapy, and geometrical misses due to incorrect gantry, collimator or table angle, reversed field settings, and setup errors. For electron beam therapy, errors investigated included incorrect energy and incorrect SSD, along with geometric misses. For special procedures we examined errors for total body irradiation (TBI; incorrect field size, dose rate, treatment distance) and LINAC radiosurgery (incorrect collimation setting, incorrect rotational parameters). Likelihood of error was determined and subsequently rated according to our history of detecting such errors. Dosimetric evaluation was conducted by using dosimetric data, treatment plans, or measurements. We found geometric misses to have the highest error probability. They most often occurred due to improper setup via coordinate shift errors or incorrect field shaping. The dosimetric impact is unique for each case and depends on the proportion of fields in error and the volume mistreated. These errors were short-lived due to rapid detection via port films. The most significant dosimetric error was related to a reversed wedge direction. This may occur due to an incorrect collimator angle or wedge orientation. For parallel-opposed 60° wedge fields, this error could be as high as 80% to a point off-axis. Other examples of dosimetric impact included the following: SSD, ~2%/cm for photons or electrons; photon energy (6 MV vs. 18 MV), on average 16% depending on depth; electron energy, ~0.5 cm of depth coverage per MeV (mega-electron volt). Of these examples, incorrect distances were most likely but were rapidly detected by in vivo dosimetry. Errors were categorized by occurrence rate, methods and timing of detection, longevity, and dosimetric impact. Solutions were devised according to these criteria. To date, no one has studied the dosimetric impact of global errors in radiation oncology. Although there is heightened awareness that, with increased use of ancillary devices and automation, there must be a parallel increase in quality check systems and processes, errors do and will continue to occur. This study has helped us identify and prioritize potential errors in our clinic according to frequency and dosimetric impact. For example, to reduce the use of an incorrect wedge direction, our clinic employs off-axis in vivo dosimetry. To avoid a treatment distance setup error, we use both vertical table settings and optical distance indicator (ODI) values to properly set up fields. As R&V systems become more automated, more accurate and efficient data transfer will occur. This will require further analysis. Finally, we have begun examining potential intensity-modulated radiation therapy (IMRT) errors according to the same criteria.
PACS numbers: 87.53.Xd, 87.53.St PMID:16143793
NASA Astrophysics Data System (ADS)
Cottrell, Paul Edward
There is a lack of research in the area of hedging futures contracts, especially in illiquid or very volatile market conditions. It is important to understand the volatility of the oil and currency markets because reduced fluctuations in these markets could lead to better hedging performance. This study compared different hedging methods using a hedging error metric, supplementing the Receding Horizon Control and Stochastic Programming (RHCSP) method by utilizing the London Interbank Offered Rate with the Levy process. The RHCSP hedging method was investigated to determine whether it improved hedging error compared to the Black-Scholes, Leland, and Whalley and Wilmott methods when applied to simulated, oil, and currency futures markets. A modified RHCSP method was also investigated to determine whether it could significantly reduce hedging error under extreme market illiquidity conditions when applied to simulated, oil, and currency futures markets. This quantitative study used chaos theory and emergence for its theoretical foundation. An experimental research method was utilized with a sample size of 506 hedging errors pertaining to historical and simulation data. The historical data were from January 1, 2005 through December 31, 2012. The modified RHCSP method was found to significantly reduce hedging error for the oil and currency futures markets, as shown by a two-way ANOVA with a t test and post hoc Tukey test. This study promotes positive social change by identifying better risk controls for investment portfolios and illustrating how to benefit from high volatility in markets. Economists, professional investment managers, and independent investors could benefit from the findings of this study.
NASA Astrophysics Data System (ADS)
Yeh, Chien-Hung; Chow, Chi-Wai; Chiang, Ming-Feng; Shih, Fu-Yuan; Pan, Ci-Ling
2011-09-01
In a wavelength division multiplexed passive optical network (WDM-PON), different fiber lengths and optical components introduce different power budgets for different optical networking units (ONUs). In addition, power decay of the distributed optical carrier from the optical line terminal, owing to aging of the optical transmitter, could also reduce the power injected into the ONU. In this work, we propose and demonstrate a carrier-distributed WDM-PON using a reflective semiconductor optical amplifier-based ONU that can adjust its upstream data rate to accommodate different injected optical powers. The WDM-PON is evaluated at standard reach (25 km) and long reach (100 km). Bit-error rate measurements at different injected optical powers and transmission lengths show that by adjusting the upstream data rate of the system (622 Mb/s, 1.25 Gb/s, and 2.5 Gb/s), error-free (<10^-9) operation can still be achieved when the power budget drops.
Rekaya, Romdhane; Smith, Shannon; Hay, El Hamidi; Farhat, Nourhene; Aggrey, Samuel E
2016-01-01
Errors in the binary status of some response traits are frequent in human, animal, and plant applications. These error rates tend to differ between cases and controls because diagnostic and screening tests have different sensitivity and specificity. This increases the inaccuracies of classifying individuals into the correct groups, giving rise to both false-positive and false-negative cases. The analysis of these noisy binary responses due to misclassification will undoubtedly reduce the statistical power of genome-wide association studies (GWAS). A threshold model that accommodates varying diagnostic errors between cases and controls was investigated. A simulation study was carried out in which several binary (case-control) data sets were generated with varying effects for the most influential single nucleotide polymorphisms (SNPs) and different diagnostic error rates for cases and controls. Each simulated data set consisted of 2000 individuals. Ignoring misclassification resulted in biased estimates of true influential SNP effects and inflated estimates for truly noninfluential markers. A substantial reduction in bias and increase in accuracy, ranging from 12% to 32%, was observed when the misclassification procedure was invoked. In fact, the majority of influential SNPs that were not identified using the noisy data were captured using the proposed method. Additionally, truly misclassified binary records were identified with high probability using the proposed method. The superiority of the proposed method was maintained across different simulation parameters (misclassification rates and odds ratios), attesting to its robustness.
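A small sketch of how differential misclassification between cases and controls can be generated, in the spirit of the simulation described above; the sample size matches the abstract, but the allele frequency, SNP effect, sensitivity, and specificity are assumed illustrative values.

```python
import numpy as np

rng = np.random.default_rng(7)
n, maf, beta = 2000, 0.3, 0.4             # individuals, allele freq, SNP effect (assumed)

geno = rng.binomial(2, maf, size=n)       # additive genotype coding 0/1/2
logit = -0.5 + beta * geno
p_case = 1.0 / (1.0 + np.exp(-logit))
y_true = rng.binomial(1, p_case)          # true case/control status

# Differential diagnostic errors: sensitivity for cases, specificity for controls.
sensitivity, specificity = 0.90, 0.95     # assumed values
y_obs = y_true.copy()
cases, controls = y_true == 1, y_true == 0
y_obs[cases] = rng.binomial(1, sensitivity, cases.sum())              # false negatives
y_obs[controls] = rng.binomial(1, 1.0 - specificity, controls.sum())  # false positives

misclassified = np.mean(y_obs != y_true)
print(f"observed misclassification rate: {misclassified:.3f}")
# A naive association test on y_obs attenuates the estimated SNP effect,
# which is the bias a threshold model with misclassification aims to correct.
```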
A fresh look at the predictors of naming accuracy and errors in Alzheimer's disease.
Cuetos, Fernando; Rodríguez-Ferreiro, Javier; Sage, Karen; Ellis, Andrew W
2012-09-01
In recent years, a considerable number of studies have tried to establish which characteristics of objects and their names predict the responses of patients with Alzheimer's disease (AD) in the picture-naming task. The frequency of use of words and their age of acquisition (AoA) have been implicated as two of the most influential variables, with naming being best preserved for objects with high-frequency, early-acquired names. The present study takes a fresh look at the predictors of naming success in Spanish and English AD patients using a range of measures of word frequency and AoA along with visual complexity, imageability, and word length as predictors. Analyses using generalized linear mixed modelling found that naming accuracy was better predicted by AoA ratings taken from older adults than conventional ratings from young adults. Older frequency measures based on written language samples predicted accuracy better than more modern measures based on the frequencies of words in film subtitles. Replacing adult frequency with an estimate of cumulative (lifespan) frequency did not reduce the impact of AoA. Semantic error rates were predicted by both written word frequency and senior AoA while null response errors were only predicted by frequency. Visual complexity, imageability, and word length did not predict naming accuracy or errors. ©2012 The British Psychological Society.
NASA Astrophysics Data System (ADS)
Wang, Liping; Wang, Boquan; Zhang, Pu; Liu, Minghao; Li, Chuangang
2017-06-01
The study of deterministic optimal reservoir operation can improve the utilization of water resources and help hydropower stations develop more reasonable power generation schedules. However, imprecise inflow forecasts may lead to output errors and hinder the implementation of power generation schedules. In this paper, the output error generated by inflow forecast uncertainty was regarded as a variable in developing a short-term optimal reservoir operation model for reducing operational risk. To accomplish this, the concept of Value at Risk (VaR) was first applied to represent the maximum possible loss of power generation schedules, and then an extreme value theory-genetic algorithm (EVT-GA) was proposed to solve the model. The cascade reservoirs of the Yalong River Basin in China were selected as a case study to verify the model. According to the results, different assurance rates of schedules can be derived by the model, which presents more flexible options for decision makers, and the highest assurance rate can reach 99%, much higher than the 48% obtained without considering output error. In addition, the model can greatly improve power generation compared with the original reservoir operation scheme under the same confidence level and risk attitude. Therefore, the model proposed in this paper can significantly improve the effectiveness of power generation schedules and provide a more scientific reference for decision makers.
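One way to read the EVT-plus-VaR step is to fit a generalized Pareto distribution to the tail of historical output errors and take a high quantile as the VaR of the schedule shortfall. The sketch below, using synthetic errors, an assumed 95% tail threshold, and a 99% confidence level, only illustrates that reading; it is not the paper's EVT-GA formulation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Synthetic output errors (MW) caused by inflow forecast uncertainty (assumed data).
errors = rng.gamma(shape=2.0, scale=15.0, size=2000)

u = np.quantile(errors, 0.95)              # tail threshold (assumed choice)
excess = errors[errors > u] - u

# Fit a generalized Pareto distribution to the threshold excesses (peaks-over-threshold).
c, loc, scale = stats.genpareto.fit(excess, floc=0.0)

# VaR at confidence alpha: threshold plus the GPD quantile of the conditional tail.
alpha = 0.99
p_exceed = np.mean(errors > u)
q = 1.0 - (1.0 - alpha) / p_exceed         # conditional tail probability
var_alpha = u + stats.genpareto.ppf(q, c, loc=0.0, scale=scale)
print(f"99% VaR of output error: {var_alpha:.1f} MW")
```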
A channel estimation scheme for MIMO-OFDM systems
NASA Astrophysics Data System (ADS)
He, Chunlong; Tian, Chu; Li, Xingquan; Zhang, Ce; Zhang, Shiqi; Liu, Chaowen
2017-08-01
To balance the performance of time-domain least squares (LS) channel estimation against its practical implementation complexity, a reduced-complexity pilot-based channel estimation method for multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) is derived. This approach transforms the MIMO-OFDM channel estimation problem into a set of simple single-input single-output OFDM (SISO-OFDM) channel estimation problems, so no large matrix pseudo-inverse is required, which greatly reduces the complexity of the algorithm. Simulation results show that the bit error rate (BER) performance of the obtained method with time-orthogonal training sequences and the linear minimum mean square error (LMMSE) criterion is better than that of the time-domain LS estimator and is nearly optimal.
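A minimal sketch of the decoupling idea: with time-orthogonal pilots, each transmit antenna is estimated separately on each subcarrier by a scalar LS division, so no large pseudo-inverse is ever formed. The antenna counts, pilot design, and channel statistics below are assumptions for illustration rather than the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(5)
n_tx, n_rx, n_sc = 2, 2, 64                  # antennas and subcarriers (assumed)

# Time-orthogonal pilots: each transmit antenna sends its pilot OFDM symbol in
# its own time slot, so per subcarrier the LS estimate is a scalar division.
pilots = np.exp(1j * 2 * np.pi * rng.random((n_tx, n_sc)))   # unit-modulus pilots

H = (rng.normal(size=(n_rx, n_tx, n_sc)) +
     1j * rng.normal(size=(n_rx, n_tx, n_sc))) / np.sqrt(2)  # true channel
noise_std = 0.05

H_hat = np.zeros_like(H)
for t in range(n_tx):                        # one pilot time slot per Tx antenna
    y = H[:, t, :] * pilots[t] + noise_std * (
        rng.normal(size=(n_rx, n_sc)) + 1j * rng.normal(size=(n_rx, n_sc)))
    H_hat[:, t, :] = y / pilots[t]           # scalar per-subcarrier LS estimate

mse = np.mean(np.abs(H_hat - H) ** 2)
print(f"per-subcarrier LS channel MSE: {mse:.4f}")
```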
Using failure mode and effects analysis to improve the safety of neonatal parenteral nutrition.
Arenas Villafranca, Jose Javier; Gómez Sánchez, Araceli; Nieto Guindo, Miriam; Faus Felipe, Vicente
2014-07-15
Failure mode and effects analysis (FMEA) was used to identify potential errors and to enable the implementation of measures to improve the safety of neonatal parenteral nutrition (PN). FMEA was used to analyze the preparation and dispensing of neonatal PN from the perspective of the pharmacy service in a general hospital. A process diagram was drafted, illustrating the different phases of the neonatal PN process. Next, the failures that could occur in each of these phases were compiled and cataloged, and a questionnaire was developed in which respondents were asked to rate the following aspects of each error: incidence, detectability, and severity. The highest scoring failures were considered high risk and identified as priority areas for improvements to be made. The evaluation process detected a total of 82 possible failures. Among the phases with the highest number of possible errors were transcription of the medical order, formulation of the PN, and preparation of material for the formulation. After the classification of these 82 possible failures and of their relative importance, a checklist was developed to achieve greater control in the error-detection process. FMEA demonstrated that use of the checklist reduced the level of risk and improved the detectability of errors. FMEA was useful for detecting medication errors in the PN preparation process and enabling corrective measures to be taken. A checklist was developed to reduce errors in the most critical aspects of the process. Copyright © 2014 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
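The scoring step described above is often summarized as a risk priority number (RPN), the product of the incidence, detectability, and severity ratings. A minimal sketch follows, in which the failure modes, scores, and the 150-point high-risk cutoff are invented placeholders rather than the study's 82 catalogued failures.

```python
# Hypothetical FMEA scoring sketch: rank failure modes by risk priority number.
# RPN = incidence x detectability x severity, each rated 1 (best) to 10 (worst).
failure_modes = [
    # (phase, failure, incidence, detectability, severity) -- invented examples
    ("transcription", "wrong amino acid dose transcribed", 6, 5, 8),
    ("formulation",   "incorrect dilution of electrolyte", 4, 6, 9),
    ("preparation",   "wrong bag size selected",           3, 3, 4),
    ("dispensing",    "label attached to wrong bag",       2, 7, 9),
]

scored = [(phase, failure, i * d * s) for phase, failure, i, d, s in failure_modes]
for phase, failure, rpn in sorted(scored, key=lambda row: row[2], reverse=True):
    flag = "HIGH RISK" if rpn >= 150 else ""
    print(f"{rpn:4d}  {phase:13s} {failure}  {flag}")
```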
Computerized N-acetylcysteine physician order entry by template protocol for acetaminophen toxicity.
Thompson, Trevonne M; Lu, Jenny J; Blackwood, Louisa; Leikin, Jerrold B
2011-01-01
Some medication dosing protocols are logistically complex for traditional physician ordering. The use of computerized physician order entry (CPOE) with templates, or order sets, may be useful to reduce medication administration errors. This study evaluated the rate of medication administration errors using CPOE order sets for N-acetylcysteine (NAC) use in treating acetaminophen poisoning. An 18-month retrospective review of computerized inpatient pharmacy records for NAC use was performed. All patients who received NAC for the treatment of acetaminophen poisoning were included. Each record was analyzed to determine the form of NAC given and whether an administration error occurred. In the 82 cases of acetaminophen poisoning in which NAC was given, no medication administration errors were identified. Oral NAC was given in 31 (38%) cases; intravenous NAC was given in 51 (62%) cases. In this retrospective analysis of N-acetylcysteine administration using computerized physician order entry and order sets, no medication administration errors occurred. CPOE is an effective tool in safely executing complicated protocols in an inpatient setting.
Accessory stimulus modulates executive function during stepping task
Watanabe, Tatsunori; Koyama, Soichiro; Tanabe, Shigeo
2015-01-01
When multiple sensory modalities are presented simultaneously, reaction time can be reduced while interference increases. The purpose of this research was to examine the effects of task-irrelevant acoustic accessory stimuli presented simultaneously with visual imperative stimuli on executive function during stepping. Executive functions were assessed by analyzing temporal events and errors in the initial weight transfer of the postural responses prior to a step (anticipatory postural adjustment errors). Eleven healthy young adults stepped forward in response to a visual stimulus. We applied a choice reaction time task and the Simon task, which consisted of congruent and incongruent conditions. Accessory stimuli were randomly presented with the visual stimuli. Compared with trials without accessory stimuli, the anticipatory postural adjustment error rates were higher in trials with accessory stimuli in the incongruent condition, and the reaction times were shorter in trials with accessory stimuli in all the task conditions. Analyses after dividing trials according to whether an anticipatory postural adjustment error occurred revealed that the reaction times of trials with anticipatory postural adjustment errors were reduced more than those of trials without such errors in the incongruent condition. These results suggest that accessory stimuli modulate the initial motor programming of stepping by lowering the decision threshold and, exclusively under spatial incompatibility, facilitate automatic response activation. The present findings advance the knowledge of intersensory judgment processes during stepping and may aid in the development of intervention and evaluation tools for individuals at risk of falls. PMID:25925321
Bishop, Lauri; Khan, Moiz; Martelli, Dario; Quinn, Lori; Stein, Joel; Agrawal, Sunil
2017-10-01
Many robotic devices in rehabilitation incorporate an assist-as-needed haptic guidance paradigm to promote training. This error reduction model, while beneficial for skill acquisition, could be detrimental for long-term retention. Error augmentation (EA) models have been explored as alternatives. A robotic Tethered Pelvic Assist Device has been developed to study force application to the pelvis on gait and was used here to induce weight shift onto the paretic (error reduction) or nonparetic (error augmentation) limb during treadmill training. The purpose of these case reports is to examine effects of training with these two paradigms to reduce load force asymmetry during gait in two individuals after stroke (>6 mos). Participants presented with baseline gait asymmetry, although independent community ambulators. Participants underwent 1-hr trainings for 3 days using either the error reduction or error augmentation model. Outcomes included the Borg rating of perceived exertion scale for treatment tolerance and measures of force and stance symmetry. Both participants tolerated training. Force symmetry (measured on treadmill) improved from pretraining to posttraining (36.58% and 14.64% gains), however, with limited transfer to overground gait measures (stance symmetry gains of 9.74% and 16.21%). Training with the Tethered Pelvic Assist Device device proved feasible to improve force symmetry on the treadmill irrespective of training model. Future work should consider methods to increase transfer to overground gait.
A cascaded coding scheme for error control and its performance analysis
NASA Technical Reports Server (NTRS)
Lin, Shu; Kasami, Tadao; Fujiwara, Tohru; Takata, Toyoo
1986-01-01
A coding scheme is investigated for error control in data communication systems. The scheme is obtained by cascading two error-correcting codes, called the inner and outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit error rate epsilon < 1/2. It is shown that if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit error rate. Various specific example schemes with inner codes ranging from high rates to very low rates and Reed-Solomon codes as outer codes are considered, and their error probabilities are evaluated. They all provide extremely high reliability even for very high bit error rates. Several example schemes are being considered by NASA for satellite and spacecraft downlink error control.
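A back-of-the-envelope sketch of this kind of analysis: an inner t-error-correcting block code sees the raw channel bit error rate, and the outer Reed-Solomon code then sees inner-block failures as symbol errors. Bounded-distance decoding and the particular code parameters below are assumptions for illustration, not the specific schemes evaluated in the paper.

```python
from scipy.stats import binom

def block_error_prob(n, t, eps):
    """P(more than t errors among n symbols), each independently wrong with prob eps."""
    return float(binom.sf(t, n, eps))

# Assumed inner code: n = 63 bits, corrects t = 3 bit errors; assumed outer
# RS(255, 223) code over 8-bit symbols corrects up to 16 symbol errors.
eps = 1e-2                              # raw channel bit error rate
p_inner = block_error_prob(63, 3, eps)  # inner decoding failure probability

# Treat each inner block as one outer-code symbol; outer decoding fails only
# if more than 16 of the 255 symbols are in error.
p_outer = block_error_prob(255, 16, p_inner)

print(f"inner block error rate: {p_inner:.3e}")
print(f"outer (cascaded) block error rate: {p_outer:.3e}")
```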
Evaluation of voice codecs for the Australian mobile satellite system
NASA Technical Reports Server (NTRS)
Bundrock, Tony; Wilkinson, Mal
1990-01-01
The evaluation procedure used to choose a low bit rate voice coding algorithm for the Australian land mobile satellite system is described. The procedure is designed to assess both the inherent quality of the codec under 'normal' conditions and its robustness under 'severe' conditions. For the assessment, normal conditions were chosen to be a random bit error rate with added background acoustic noise, and the severe condition is designed to represent burst error conditions when the mobile satellite channel suffers from signal fading due to roadside vegetation. The assessment is divided into two phases. First, a reduced set of conditions is used to determine a short list of candidate codecs for more extensive testing in the second phase. The first-phase conditions include quality and robustness, and codecs are ranked with a 60:40 weighting on the two. Second, the short-listed codecs are assessed over a range of input voice levels, BERs, background noise conditions, and burst error distributions. Assessment is by subjective rating on a five-level opinion scale, and all results are then used to derive a weighted Mean Opinion Score using appropriate weights for each of the test conditions.
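A small sketch of the phase-one ranking arithmetic, combining each codec's quality and robustness scores with the 60:40 weighting described above; the codec names and scores are invented placeholders.

```python
# Phase-one codec ranking sketch: 60:40 weighting of quality vs. robustness.
# Mean opinion scores (1-5 scale) below are invented for illustration.
codecs = {
    "codec_A": {"quality": 3.9, "robustness": 3.1},
    "codec_B": {"quality": 3.5, "robustness": 3.8},
    "codec_C": {"quality": 4.1, "robustness": 2.6},
}

W_QUALITY, W_ROBUSTNESS = 0.6, 0.4
ranked = sorted(
    ((name, W_QUALITY * s["quality"] + W_ROBUSTNESS * s["robustness"])
     for name, s in codecs.items()),
    key=lambda item: item[1], reverse=True)

for name, score in ranked:
    print(f"{name}: weighted MOS {score:.2f}")
```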
Buffer management for sequential decoding. [block erasure probability reduction
NASA Technical Reports Server (NTRS)
Layland, J. W.
1974-01-01
Sequential decoding has been found to be an efficient means of communicating at low undetected error rates from deep space probes, but erasure or computational overflow remains a significant problem. Erasure of a block occurs when the decoder has not finished decoding that block at the time that it must be output. By drawing upon analogies in computer time sharing, this paper develops a buffer-management strategy which reduces the decoder idle time to a negligible level, and therefore improves the erasure probability of a sequential decoder. For a decoder with a speed advantage of ten and a buffer size of ten blocks, operating at an erasure rate of .01, use of this buffer-management strategy reduces the erasure rate to less than .0001.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carson, M; Molineu, A; Taylor, P
Purpose: To analyze the most recent results of IROC Houston's anthropomorphic H&N phantom to determine the nature of failing irradiations and the feasibility of altering pass/fail credentialing criteria. Methods: IROC Houston's H&N phantom, used for IMRT credentialing for NCI-sponsored clinical trials, requires that an institution's treatment plan must agree with measurement within 7% (TLD doses) and ≥85% pixels must pass 7%/4 mm gamma analysis. 156 phantom irradiations (November 2014 – October 2015) were re-evaluated using tighter criteria: 1) 5% TLD and 5%/4 mm, 2) 5% TLD and 5%/3 mm, 3) 4% TLD and 4%/4 mm, and 4) 3% TLD and 3%/3 mm. Failure/poor performance rates were evaluated with respect to individual film and TLD performance by location in the phantom. Overall poor phantom results were characterized qualitatively as systematic (dosimetric) errors, setup errors/positional shifts, global but non-systematic errors, and errors affecting only a local region. Results: The pass rate for these phantoms using current criteria is 90%. Substituting criteria 1-4 reduces the overall pass rate to 77%, 70%, 63%, and 37%, respectively. Statistical analyses indicated the probability of noise-induced TLD failure at the 5% criterion was <0.5%. Using criteria 1, TLD results were most often the cause of failure (86% failed TLD while 61% failed film), with most failures identified in the primary PTV (77% cases). Other criteria posed similar results. Irradiations that failed from film only were overwhelmingly associated with phantom shifts/setup errors (≥80% cases). Results failing criteria 1 were primarily diagnosed as systematic: 58% of cases. 11% were setup/positioning errors, 8% were global non-systematic errors, and 22% were local errors. Conclusion: This study demonstrates that 5% TLD and 5%/4 mm gamma criteria may be both practically and theoretically achievable. Further work is necessary to diagnose and resolve dosimetric inaccuracy in these trials, particularly for systematic dose errors. This work is funded by NCI Grant CA180803.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, J; Wang, J; Peng, J
Purpose: To implement an entire workflow quality assurance (QA) process in the radiotherapy department and to reduce radiotherapy error rates through entire workflow management in a developing country. Methods: The entire workflow QA process starts from patient registration and continues to the end of the last treatment, covering all steps of the radiotherapy process. The error rate from chart checks is used to evaluate the entire workflow QA process. Two to three qualified senior medical physicists checked the documents before the first treatment fraction of every patient. Random checks of the treatment history during treatment were also performed. Treatment data from around 6000 patients before and after implementing the entire workflow QA process were compared from May 2014 to December 2015. Results: A systematic checklist was established. It mainly includes patient registration, treatment plan QA, information export to the OIS (Oncology Information System), treatment QA documents, and QA of the treatment history. The error rate derived from chart checks decreased from 1.7% to 0.9% after the entire workflow QA process was implemented. All errors detected before the first treatment fraction were corrected as soon as the oncologist re-confirmed them, and reinforced staff training followed to prevent those errors. Conclusion: The entire workflow QA process improved the safety and quality of radiotherapy in our department, and we consider our QA experience applicable to heavily loaded radiotherapy departments in developing countries.
45 CFR 98.100 - Error Rate Report.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 45 Public Welfare 1 2013-10-01 2013-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...
45 CFR 98.100 - Error Rate Report.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 45 Public Welfare 1 2014-10-01 2014-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare Department of Health and Human Services GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...
45 CFR 98.100 - Error Rate Report.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 45 Public Welfare 1 2012-10-01 2012-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...
45 CFR 98.100 - Error Rate Report.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 45 Public Welfare 1 2011-10-01 2011-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...
Asynchronous RTK precise DGNSS positioning method for deriving a low-latency high-rate output
NASA Astrophysics Data System (ADS)
Liang, Zhang; Hanfeng, Lv; Dingjie, Wang; Yanqing, Hou; Jie, Wu
2015-07-01
Low-latency high-rate (1 Hz) precise real-time kinematic (RTK) positioning can be applied in high-speed scenarios such as aircraft automatic landing, precision agriculture, and intelligent vehicles. The classic synchronous RTK (SRTK) precise differential GNSS (DGNSS) positioning technology, however, cannot deliver a low-latency high-rate output for the rover receiver because of long data link transmission time delays (DLTTD) from the reference receiver. To overcome the long DLTTD, this paper proposes an asynchronous real-time kinematic (ARTK) method using asynchronous observations from two receivers. The asynchronous observation model (AOM) is developed based on undifferenced carrier phase observation equations of the two receivers at different epochs over a short baseline. The ephemeris error and atmospheric delay are the possible main error sources affecting positioning accuracy in this model, and they are analyzed theoretically. For a short DLTTD during a period of quiet ionospheric activity, the main error sources degrading positioning accuracy are satellite orbital errors: the "inverted ephemeris error" and the integral of the satellite velocity error, which increase linearly with DLTTD. Cycle slips in the asynchronous double-differenced carrier phase are detected by the TurboEdit method and repaired by the additional ambiguity parameter method. The AOM can also handle the synchronous observation model (SOM) and achieve a precise positioning solution with synchronous observations, since the SOM is simply a special case of the AOM. The proposed method not only reduces the cost of data collection and transmission, but also supports a mobile phone network data link for transferring the reference receiver data. The method avoids the data synchronization process apart from the ambiguity initialization step, which is very convenient for real-time vehicle navigation. The static and kinematic experiment results show that this method achieves output at 20 Hz or even higher rates in real time. The ARTK positioning accuracy is better and more robust than that of the combination of phase difference over time (PDOT) and the SRTK method at a high rate. The ARTK positioning accuracy is equivalent to the SRTK solution when the DLTTD is 0.5 s, and centimeter-level accuracy can be achieved even when the DLTTD is 15 s.
A Six Sigma Trial For Reduction of Error Rates in Pathology Laboratory.
Tosuner, Zeynep; Gücin, Zühal; Kiran, Tuğçe; Büyükpinarbaşili, Nur; Turna, Seval; Taşkiran, Olcay; Arici, Dilek Sema
2016-01-01
A major target of quality assurance is the minimization of error rates in order to enhance patient safety. Six Sigma is a method used in industry that targets near-zero error (3.4 errors per million events). The five main principles of Six Sigma are defining, measuring, analyzing, improving, and controlling. Using this methodology, the causes of errors can be examined and process improvement strategies can be identified. The aim of our study was to evaluate the utility of Six Sigma methodology for error reduction in our pathology laboratory. The errors encountered between April 2014 and April 2015 were recorded by the pathology personnel. Error follow-up forms were examined by the quality control supervisor, the administrative supervisor, and the head of the department. Using Six Sigma methodology, the rate of errors was measured monthly and the distribution of errors across the pre-analytic, analytic, and post-analytical phases was analysed. Improvement strategies were put forward in the monthly intradepartmental meetings, and control of the units with high error rates was provided. Fifty-six (52.4%) of the 107 recorded errors were at the pre-analytic phase. Forty-five errors (42%) were recorded as analytical and 6 errors (5.6%) as post-analytical. Two of the 45 errors were major irrevocable errors. The error rate was 6.8 per million in the first half of the year and 1.3 per million in the second half, decreasing by 79.77%. The Six Sigma trial in our pathology laboratory led to a reduction of the error rates mainly in the pre-analytic and analytic phases.
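The per-million figures quoted above follow from defects-per-million-opportunities (DPMO) arithmetic; the sketch below also converts a DPMO value to a sigma level using the conventional 1.5-sigma shift. The counts are placeholders, not the laboratory's actual error and opportunity counts.

```python
from scipy.stats import norm

def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value):
    """Short-term sigma level using the conventional 1.5-sigma shift."""
    return norm.ppf(1.0 - dpmo_value / 1_000_000) + 1.5

# Placeholder counts: 56 errors among many specimens, several error
# opportunities per specimen (values invented for illustration).
d = dpmo(defects=56, units=4_000_000, opportunities_per_unit=2)
print(f"DPMO = {d:.1f}, sigma level = {sigma_level(d):.2f}")
```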
Data Analysis & Statistical Methods for Command File Errors
NASA Technical Reports Server (NTRS)
Meshkat, Leila; Waggoner, Bruce; Bryant, Larry
2014-01-01
This paper describes current work on modeling for managing the risk of command file errors. It focuses on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors and the number of files radiated, including the number of commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We assessed these data using different curve-fitting and distribution-fitting techniques, such as multiple regression analysis and maximum likelihood estimation, to see how much of the variability in the error rates can be explained by these variables. We also used goodness-of-fit testing and principal component analysis to further assess the data. Finally, we constructed a model of expected error rates based on the variables that these statistics identified as critical drivers of the error rate. This model allows project management to evaluate the observed error rate against a theoretically expected rate as well as anticipate future error rates.
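A minimal sketch of the kind of regression fit described above, relating error counts to workload-style predictors with statsmodels; all variable names and data are hypothetical placeholders rather than the JPL dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 60  # hypothetical number of reporting periods

# Hypothetical predictors: files radiated, commands per file, subjective workload (1-5).
df = pd.DataFrame({
    "files_radiated": rng.integers(5, 60, n),
    "commands_per_file": rng.integers(10, 200, n),
    "workload": rng.integers(1, 6, n),
})
# Hypothetical response: command file errors per period.
lam = 0.02 * df["files_radiated"] + 0.005 * df["commands_per_file"] + 0.3 * df["workload"]
df["errors"] = rng.poisson(lam)

# Ordinary least squares on the error counts, as one of several candidate fits.
X = sm.add_constant(df[["files_radiated", "commands_per_file", "workload"]])
ols_fit = sm.OLS(df["errors"], X).fit()
print(ols_fit.summary().tables[1])

# A Poisson GLM (maximum likelihood) is a natural alternative for count data.
pois_fit = sm.GLM(df["errors"], X, family=sm.families.Poisson()).fit()
print(pois_fit.params)
```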
Evaluation of normalization methods for cDNA microarray data by k-NN classification
Wu, Wei; Xing, Eric P; Myers, Connie; Mian, I Saira; Bissell, Mina J
2005-01-01
Background Non-biological factors give rise to unwanted variations in cDNA microarray data. There are many normalization methods designed to remove such variations. However, to date there have been few published systematic evaluations of these techniques for removing variations arising from dye biases in the context of downstream, higher-order analytical tasks such as classification. Results Ten location normalization methods that adjust spatial- and/or intensity-dependent dye biases, and three scale methods that adjust scale differences, were applied, individually and in combination, to five distinct, published, cancer biology-related cDNA microarray data sets. Leave-one-out cross-validation (LOOCV) classification error was employed as the quantitative end-point for assessing the effectiveness of a normalization method. In particular, a known classifier, k-nearest neighbor (k-NN), was estimated from data normalized using a given technique, and the LOOCV error rate of the resulting model was computed. We found that k-NN classifiers are sensitive to dye biases in the data. Using NONRM and GMEDIAN as baseline methods, our results show that single-bias-removal techniques, which remove either the spatial-dependent dye bias (referred to below as the spatial effect) or the intensity-dependent dye bias (the intensity effect), moderately reduce LOOCV classification errors, whereas double-bias-removal techniques, which remove both the spatial and the intensity effects, reduce LOOCV classification errors even further. Of the 41 different strategies examined, three two-step processes, IGLOESS-SLFILTERW7, ISTSPLINE-SLLOESS and IGLOESS-SLLOESS, all of which removed the intensity effect globally and the spatial effect locally, appear to reduce LOOCV classification errors most consistently and effectively across all data sets. We also found that the investigated scale normalization methods do not reduce LOOCV classification error. Conclusion Using the LOOCV error of k-NNs as the evaluation criterion, three double-bias-removal normalization strategies, IGLOESS-SLFILTERW7, ISTSPLINE-SLLOESS and IGLOESS-SLLOESS, outperform the other strategies for removing the spatial effect, intensity effect and scale differences from cDNA microarray data. The apparent sensitivity of k-NN LOOCV classification error to dye biases suggests that this criterion provides an informative measure for evaluating normalization methods. All the computational tools used in this study were implemented using the R language for statistical computing and graphics. PMID:16045803
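The evaluation criterion used in the study (leave-one-out cross-validation error of a k-NN classifier trained on normalized data) can be sketched with scikit-learn as follows; the toy expression matrix stands in for a normalized microarray data set and is not one of the five published sets.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(1)

# Hypothetical normalized log-ratio expression matrix: 40 arrays x 200 genes,
# with two classes separated on a subset of genes.
X = rng.normal(size=(40, 200))
y = np.repeat([0, 1], 20)
X[y == 1, :20] += 1.0  # class-related signal confined to the first 20 genes

knn = KNeighborsClassifier(n_neighbors=3)
scores = cross_val_score(knn, X, y, cv=LeaveOneOut())  # accuracy per left-out array
loocv_error = 1.0 - scores.mean()
print(f"LOOCV classification error: {loocv_error:.3f}")
```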
Securing quantum key distribution systems using fewer states
NASA Astrophysics Data System (ADS)
Islam, Nurul T.; Lim, Charles Ci Wen; Cahall, Clinton; Kim, Jungsang; Gauthier, Daniel J.
2018-04-01
Quantum key distribution (QKD) allows two remote users to establish a secret key in the presence of an eavesdropper. The users share quantum states prepared in two mutually unbiased bases: one basis is used to generate the key while the other monitors the presence of the eavesdropper. Here, we show that a general d-dimensional QKD system can be secured by transmitting only a subset of the monitoring states. In particular, we find that there is no loss in the secure key rate when dropping one of the monitoring states. Furthermore, it is possible to use only a single monitoring state if the quantum bit error rates are low enough. We apply our formalism to an experimental d = 4 time-phase QKD system, where only one monitoring state is transmitted, and obtain a secret key rate of 17.4 ± 2.8 Mbits/s at a 4 dB channel loss and with quantum bit error rates of 0.045 ± 0.001 and 0.037 ± 0.001 in the time and phase bases, respectively, which is 58.4% of the secret key rate that can be achieved with the full setup. This ratio can be increased, potentially up to 100%, if the error rates in the time and phase bases are reduced. Our results demonstrate that it is possible to substantially simplify the design of high-dimensional QKD systems, including those that use the spatial or temporal degrees of freedom of the photon, and still outperform qubit-based (d = 2) protocols.
Detecting Signatures of GRACE Sensor Errors in Range-Rate Residuals
NASA Astrophysics Data System (ADS)
Goswami, S.; Flury, J.
2016-12-01
Efforts have been ongoing for a decade to reach the GRACE baseline accuracy predicted by the original design simulations. The GRACE error budget is dominated by noise from the sensors, the dealiasing models and modeling errors, and the GRACE range-rate residuals contain these errors. Their analysis therefore provides insight into the individual contributions to the error budget. Hence, we analyze the range-rate residuals with a focus on the contribution of sensor errors due to mis-pointing and poor ranging performance in GRACE solutions. For the analysis of pointing errors, we consider two different reprocessed attitude datasets with differences in pointing performance. Range-rate residuals are then computed from these two datasets and analysed. We further compare the system noise of the four K- and Ka-band frequencies of the two spacecraft with the range-rate residuals. Strong signatures of mis-pointing errors can be seen in the range-rate residuals, and correlations between range frequency noise and range-rate residuals are also seen.
Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection
Meng, Jiahui; Zhao, Danfeng; Tian, Hai; Zhang, Liang
2018-01-01
In order to improve the performance of the hard-decision decoding algorithm for non-binary low-density parity-check (LDPC) codes and to reduce decoding complexity, a sum-of-the-magnitude hard-decision decoding algorithm based on loop update detection is proposed. This also helps to ensure the reliability, stability and high transmission rate of 5G mobile communication. The algorithm is based on the hard decision decoding algorithm (HDA) and uses the soft information from the channel to calculate the reliability, while the sum of the variable nodes' (VN) magnitude is excluded when computing the reliability of the parity checks. At the same time, the reliability information of the variable node is considered and a loop update detection algorithm is introduced. The bits corresponding to the erroneous code word are flipped multiple times, searched in order of the most likely error probability, to finally find the correct code word. Simulation results show that the performance of one of the improved schemes is better than the weighted symbol flipping (WSF) algorithm under different hexadecimal numbers by about 2.2 dB and 2.35 dB at a bit error rate (BER) of 10^-5 over an additive white Gaussian noise (AWGN) channel, respectively. Furthermore, the average number of decoding iterations is significantly reduced. PMID:29342963
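The proposed algorithm is non-binary and uses channel magnitudes, so it is not reproduced here; as a hedged illustration of hard-decision decoding by flipping the bits involved in the most unsatisfied parity checks, a minimal binary Gallager-style bit-flipping decoder is sketched below with a hypothetical toy parity-check matrix.

```python
import numpy as np

def bit_flip_decode(H, r, max_iter=20):
    """Hard-decision bit-flipping decoding over GF(2).
    H: parity-check matrix, r: received hard-decision bits."""
    x = r.copy()
    for _ in range(max_iter):
        syndrome = H @ x % 2
        if not syndrome.any():
            return x, True                            # all checks satisfied
        # number of unsatisfied checks each bit participates in
        unsat = H.T @ syndrome
        x = np.where(unsat == unsat.max(), 1 - x, x)  # flip the worst bits
    return x, False

# Hypothetical small parity-check matrix (not a real LDPC design).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])
codeword = np.zeros(6, dtype=int)   # the all-zero codeword is always valid
received = codeword.copy()
received[2] ^= 1                    # introduce a single bit error
decoded, ok = bit_flip_decode(H, received)
print("decoded:", decoded, "success:", ok)
```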
High-speed tracking control of piezoelectric actuators using an ellipse-based hysteresis model.
Gu, Guoying; Zhu, Limin
2010-08-01
In this paper, an ellipse-based mathematical model is developed to characterize the rate-dependent hysteresis in piezoelectric actuators. Based on the proposed model, an expanded input space is constructed to describe the multivalued hysteresis function H[u](t) by a multiple input single output (MISO) mapping Γ: R² → R. Subsequently, the inverse MISO mapping Γ⁻¹(H[u](t), Ḣ[u](t); u(t)) is proposed for real-time hysteresis compensation. In controller design, a hybrid control strategy combining a model-based feedforward controller and a proportional-integral-derivative (PID) feedback loop is used for high-accuracy and high-speed tracking control of piezoelectric actuators. The real-time feedforward controller is developed to cancel the rate-dependent hysteresis based on the inverse hysteresis model, while the PID controller is used to compensate for creep, modeling errors, and parameter uncertainties. Finally, experiments with and without hysteresis compensation are conducted and the experimental results are compared. They show that the hysteresis compensation in the feedforward path can reduce the hysteresis-caused error by up to 88% and that the tracking performance of the hybrid controller is greatly improved in high-speed tracking applications; e.g., the root-mean-square tracking error is reduced to only 0.34% of the displacement range at an input frequency of 100 Hz.
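A skeletal view of the hybrid scheme described above, an inverse-model feedforward term plus a PID feedback correction, might look like the following; the inverse-hysteresis function and the measured displacement are placeholders, not the ellipse-based model from the paper.

```python
def make_pid(kp, ki, kd, dt):
    """Discrete PID controller; returns a function mapping tracking error to output."""
    state = {"integral": 0.0, "prev_err": 0.0}
    def pid(err):
        state["integral"] += err * dt
        deriv = (err - state["prev_err"]) / dt
        state["prev_err"] = err
        return kp * err + ki * state["integral"] + kd * deriv
    return pid

def inverse_hysteresis(ref):
    # Placeholder for the inverse hysteresis model; a real implementation would map
    # the desired displacement (and its rate) to a driving voltage.
    return 0.1 * ref

dt = 1e-4                            # hypothetical 10 kHz control loop
pid = make_pid(kp=0.08, ki=0.5, kd=0.0, dt=dt)

ref = 1.0                            # desired displacement (arbitrary units)
y_meas = 0.92                        # hypothetical measured displacement
u_ff = inverse_hysteresis(ref)       # feedforward cancels the (modeled) hysteresis
u_fb = pid(ref - y_meas)             # feedback compensates creep and model error
u = u_ff + u_fb                      # command sent to the actuator amplifier
print(f"feedforward {u_ff:.3f} + feedback {u_fb:.4f} = command {u:.4f}")
```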
Choi, Seung Hoan; Labadorf, Adam T; Myers, Richard H; Lunetta, Kathryn L; Dupuis, Josée; DeStefano, Anita L
2017-02-06
Next generation sequencing provides a count of RNA molecules in the form of short reads, yielding discrete, often highly non-normally distributed gene expression measurements. Although Negative Binomial (NB) regression has been generally accepted in the analysis of RNA sequencing (RNA-Seq) data, its appropriateness has not been exhaustively evaluated. We explore logistic regression as an alternative method for RNA-Seq studies designed to compare cases and controls, where disease status is modeled as a function of RNA-Seq reads using simulated and Huntington disease data. We evaluate the effect of adjusting for covariates that have an unknown relationship with gene expression. Finally, we incorporate the data adaptive method in order to compare false positive rates. When the sample size is small or the expression levels of a gene are highly dispersed, the NB regression shows inflated Type-I error rates but the Classical logistic and Bayes logistic (BL) regressions are conservative. Firth's logistic (FL) regression performs well or is slightly conservative. Large sample size and low dispersion generally make Type-I error rates of all methods close to nominal alpha levels of 0.05 and 0.01. However, Type-I error rates are controlled after applying the data adaptive method. The NB, BL, and FL regressions gain increased power with large sample size, large log2 fold-change, and low dispersion. The FL regression has comparable power to NB regression. We conclude that implementing the data adaptive method appropriately controls Type-I error rates in RNA-Seq analysis. Firth's logistic regression provides a concise statistical inference process and reduces spurious associations from inaccurately estimated dispersion parameters in the negative binomial framework.
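To make the modeling comparison concrete, here is a hedged sketch contrasting a negative binomial model of counts with a logistic model of disease status using statsmodels on simulated data; Firth's and Bayes logistic regression are omitted because they are not part of the standard statsmodels API, and the simulation settings are arbitrary.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 40                                    # hypothetical sample size (cases + controls)
status = np.repeat([0, 1], n // 2)        # 0 = control, 1 = case

# Simulated RNA-Seq counts for one gene: higher, overdispersed expression in cases.
mu = np.where(status == 1, 120, 80)
counts = rng.negative_binomial(n=5, p=5 / (5 + mu))

# Negative binomial regression: counts modeled as a function of disease status.
X_nb = sm.add_constant(status.astype(float))
nb_fit = sm.GLM(counts, X_nb, family=sm.families.NegativeBinomial()).fit()

# Logistic regression: disease status modeled as a function of (log) counts.
X_logit = sm.add_constant(np.log(counts + 1.0))
logit_fit = sm.Logit(status, X_logit).fit(disp=False)

print("NB coefficient for status:", nb_fit.params[1])
print("Logistic coefficient for log-count:", logit_fit.params[1])
```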
Quantum cryptographic system with reduced data loss
Lo, H.K.; Chau, H.F.
1998-03-24
A secure method for distributing a random cryptographic key with reduced data loss is disclosed. Traditional quantum key distribution systems employ similar probabilities for the different communication modes and thus reject at least half of the transmitted data. The invention substantially reduces the amount of discarded data (those that are encoded and decoded in different communication modes e.g. using different operators) in quantum key distribution without compromising security by using significantly different probabilities for the different communication modes. Data is separated into various sets according to the actual operators used in the encoding and decoding process and the error rate for each set is determined individually. The invention increases the key distribution rate of the BB84 key distribution scheme proposed by Bennett and Brassard in 1984. Using the invention, the key distribution rate increases with the number of quantum signals transmitted and can be doubled asymptotically. 23 figs.
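The efficiency gain from biased basis choice can be illustrated with a toy simulation of the sifting step: with equal basis probabilities roughly half the signals are discarded, while strongly biased (but still non-zero) probabilities retain most of them. This sketch covers only the sifting fraction under those assumptions, not the security analysis in the patent.

```python
import numpy as np

def sifted_fraction(p_rect, n_signals=200_000, rng=np.random.default_rng(7)):
    """Fraction of signals kept after basis reconciliation when both parties choose
    the rectilinear basis with probability p_rect (diagonal with 1 - p_rect)."""
    alice = rng.random(n_signals) < p_rect
    bob = rng.random(n_signals) < p_rect
    return np.mean(alice == bob)   # kept only when the bases match

print("balanced bases   (p = 0.5):  kept ~", round(sifted_fraction(0.5), 3))
print("biased bases     (p = 0.9):  kept ~", round(sifted_fraction(0.9), 3))
print("strongly biased  (p = 0.99): kept ~", round(sifted_fraction(0.99), 3))
```

The kept fraction is p² + (1 - p)², so it approaches 1 as the bias grows, which is the sense in which the key distribution rate can be doubled asymptotically.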
The Implementation of an Automated Assessment Feedback and Quality Assurance System for ICT Courses
ERIC Educational Resources Information Center
Debuse, J.; Lawley, M.; Shibl, R.
2007-01-01
Providing detailed, constructive and helpful feedback is an important contribution to effective student learning. Quality assurance is also required to ensure consistency across all students and reduce error rates. However, with increasing workloads and student numbers these goals are becoming more difficult to achieve. An automated feedback…
The Effect of Health Information Technology on Hospital Quality of Care
ERIC Educational Resources Information Center
Sun, Ruirui
2016-01-01
Health Information Technology (Health IT) is designed to store patients' records safely and clearly, to reduce input errors and missing records, and to make communications more efficiently. Concerned with the relatively lower adoption rate among the US hospitals compared to most developed countries, the Bush Administration set up the Office of…
ERIC Educational Resources Information Center
Chen, Ru San; Dunlap, William P.
1994-01-01
The present simulation study confirms that the corrected epsilon approximate test of B. Lecoutre yields a less biased estimation of population epsilon and reduces Type I error rates when compared to the epsilon approximate test of H. Huynh and L. S. Feldt. (SLD)
NASA Astrophysics Data System (ADS)
Ciaramello, Frank M.; Hemami, Sheila S.
2009-02-01
Communication of American Sign Language (ASL) over mobile phones would be very beneficial to the Deaf community. ASL video encoded at the rates provided by current cellular networks must be heavily compressed, and appropriate assessment techniques are required to analyze the intelligibility of the compressed video. As an extension to a purely spatial measure of intelligibility, this paper quantifies the effect of temporal compression artifacts on sign language intelligibility. These artifacts can be the result of motion-compensation errors that distract the observer or of frame rate reductions. They reduce the perception of smooth motion and disrupt the temporal coherence of the video. Motion-compensation errors that affect temporal coherence are identified by measuring the block-level correlation between co-located macroblocks in adjacent frames. The impact of frame rate reductions was quantified through experimental testing. A subjective study was performed in which fluent ASL participants rated the intelligibility of sequences encoded at 5 different frame rates and 3 different levels of distortion. The subjective data are used to parameterize an objective intelligibility measure that is highly correlated with subjective ratings at multiple frame rates.
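The temporal-coherence measurement described above, block-level correlation between co-located macroblocks in adjacent frames, can be sketched as follows; the frames are random arrays standing in for decoded luma frames, and the 16x16 block size is an assumption.

```python
import numpy as np

def block_correlations(frame_a, frame_b, block=16):
    """Pearson correlation between co-located macroblocks of two adjacent frames."""
    h, w = frame_a.shape
    corrs = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            a = frame_a[y:y + block, x:x + block].ravel().astype(float)
            b = frame_b[y:y + block, x:x + block].ravel().astype(float)
            if a.std() > 0 and b.std() > 0:
                corrs.append(np.corrcoef(a, b)[0, 1])
    return np.array(corrs)

rng = np.random.default_rng(3)
prev = rng.integers(0, 256, size=(144, 176)).astype(np.uint8)      # QCIF-sized stand-in
curr = np.clip(prev.astype(int) + rng.integers(-8, 9, prev.shape), 0, 255)

c = block_correlations(prev, curr)
print(f"mean block correlation: {c.mean():.3f} (low values suggest temporal incoherence)")
```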
Acetaminophen attenuates error evaluation in cortex
Kam, Julia W.Y.; Heine, Steven J.; Inzlicht, Michael; Handy, Todd C.
2016-01-01
Acetaminophen has recently been recognized as having impacts that extend into the affective domain. In particular, double blind placebo controlled trials have revealed that acetaminophen reduces the magnitude of reactivity to social rejection, frustration, dissonance and to both negatively and positively valenced attitude objects. Given this diversity of consequences, it has been proposed that the psychological effects of acetaminophen may reflect a widespread blunting of evaluative processing. We tested this hypothesis using event-related potentials (ERPs). Sixty-two participants received acetaminophen or a placebo in a double-blind protocol and completed the Go/NoGo task. Participants’ ERPs were observed following errors on the Go/NoGo task, in particular the error-related negativity (ERN; measured at FCz) and error-related positivity (Pe; measured at Pz and CPz). Results show that acetaminophen inhibits the Pe, but not the ERN, and the magnitude of an individual’s Pe correlates positively with omission errors, partially mediating the effects of acetaminophen on the error rate. These results suggest that recently documented affective blunting caused by acetaminophen may best be described as an inhibition of evaluative processing. They also contribute to the growing work suggesting that the Pe is more strongly associated with conscious awareness of errors relative to the ERN. PMID:26892161
Tarrasch, Ricardo; Berman, Zohar; Friedmann, Naama
2016-01-01
This study explored the effects of a Mindfulness-Based Stress Reduction (MBSR) intervention on reading, attention, and psychological well-being among people with developmental dyslexia and/or attention deficits. Various types of dyslexia exist, characterized by different error types. We examined a question that has not been tested so far: which types of errors (and dyslexias) are affected by MBSR training. To do so, we tested, using an extensive battery of reading tests, whether each participant had dyslexia and which error types s/he makes, and then compared the rate of each error type before and after the MBSR workshop. We used a similar approach to attention disorders: we evaluated the participants' sustained, selective, executive, and orienting of attention to assess whether they had attention disorders, and if so, which functions were impaired. We then evaluated the effect of MBSR on each of the attention functions. Psychological measures including mindfulness, stress, reflection and rumination, life satisfaction, depression, anxiety, and sleep disturbances were also evaluated. Nineteen Hebrew readers completed a 2-month mindfulness workshop. The results showed that whereas reading errors of letter migrations within and between words and vowel-letter errors did not decrease following the workshop, most participants made fewer reading errors in general following the workshop, with a significant reduction of 19% from their original number of errors. This decrease mainly resulted from a decrease in errors that occur due to reading via the sublexical rather than the lexical route. It seems, therefore, that mindfulness helped reading by keeping the readers on the lexical route. This improvement in reading probably resulted from improved sustained attention: the reduction in sublexical reading was significant for the dyslexic participants who also had attention deficits, and there were significant correlations between reduced reading errors and decreases in impulsivity. Following the meditation workshop, the rate of commission errors decreased, indicating decreased impulsivity, and the variation in RTs in the CPT task decreased, indicating improved sustained attention. Significant improvements were obtained in participants' mindfulness, perceived stress, rumination, depression, state anxiety, and sleep disturbances. Correlations were also obtained between reading improvement and increased mindfulness following the workshop. Thus, whereas mindfulness training did not affect specific types of errors and did not improve dyslexia, it did affect the reading of adults with developmental dyslexia and ADHD by helping them to stay on the straight path of the lexical route while reading. The reading improvement induced by mindfulness thus sheds light on the intricate relation between attention and reading: mindfulness reduced impulsivity and improved sustained attention, and this, in turn, improved the reading of adults with developmental dyslexia and ADHD by helping them to read via the straight path of the lexical route. PMID:27242565
Reducing Error Rates for Iris Image using higher Contrast in Normalization process
NASA Astrophysics Data System (ADS)
Aminu Ghali, Abdulrahman; Jamel, Sapiee; Abubakar Pindar, Zahraddeen; Hasssan Disina, Abdulkadir; Mat Daris, Mustafa
2017-08-01
Iris recognition is among the most secure and fastest means of identification and authentication. However, an iris recognition system suffers from blurring, low contrast and poor illumination in low-quality images, which compromises the accuracy of the system. The acceptance or rejection of a verified user depends solely on the quality of the image, and in many cases an iris recognition system with low image contrast can falsely accept or reject a user. This paper therefore adopts a histogram equalization technique to address the problems of False Rejection Rate (FRR) and False Acceptance Rate (FAR) by enhancing the contrast of the iris image. Histogram equalization enhances the image quality and neutralizes the low contrast of the image at the normalization stage. The experimental results show that the histogram equalization technique reduces FRR and FAR compared with existing techniques.
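A minimal NumPy sketch of global histogram equalization for an 8-bit grayscale image follows; in an iris pipeline this would presumably be applied to the normalized (unwrapped) iris strip before feature encoding, and the low-contrast input below is synthetic.

```python
import numpy as np

def equalize_histogram(img):
    """Global histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Map each gray level through the normalized cumulative distribution.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]

rng = np.random.default_rng(5)
# Hypothetical low-contrast iris strip: values squeezed into a narrow band.
low_contrast = rng.integers(90, 140, size=(64, 512)).astype(np.uint8)
enhanced = equalize_histogram(low_contrast)
print("input range:", low_contrast.min(), "-", low_contrast.max())
print("output range:", enhanced.min(), "-", enhanced.max())
```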
A cascaded coding scheme for error control and its performance analysis
NASA Technical Reports Server (NTRS)
Lin, S.
1986-01-01
A coding scheme for error control in data communication systems is investigated. The scheme is obtained by cascading two error-correcting codes, called the inner and outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit error rate epsilon < 1/2. It is shown that, if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit error rate. Various specific example schemes, with inner codes ranging from high rates to very low rates and with Reed-Solomon codes, are considered, and their error probabilities are evaluated. They all provide extremely high reliability even for very high bit error rates, say 0.1 to 0.01. Several example schemes are being considered by NASA for satellite and spacecraft downlink error control.
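As a hedged illustration of why cascading helps, the sketch below evaluates a generic inner block code that corrects up to t bit errors feeding an outer code that corrects up to T symbol errors over a binary symmetric channel; the parameters are illustrative and are not one of the NASA example schemes.

```python
from scipy.stats import binom

def inner_symbol_error(eps, n_inner, t_inner):
    """Probability an inner code word (n_inner bits, corrects t_inner errors) decodes
    wrongly, bounded by the probability of more than t_inner channel bit errors."""
    return binom.sf(t_inner, n_inner, eps)

def outer_block_error(p_sym, n_outer, t_outer):
    """Probability the outer code word (n_outer symbols, corrects t_outer symbol errors) fails."""
    return binom.sf(t_outer, n_outer, p_sym)

eps = 0.05                                   # high channel bit error rate
p_sym = inner_symbol_error(eps, n_inner=63, t_inner=7)
p_block = outer_block_error(p_sym, n_outer=255, t_outer=16)
print(f"inner symbol error  ~ {p_sym:.2e}")
print(f"outer block failure ~ {p_block:.2e}")
```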
Essays in financial economics and econometrics
NASA Astrophysics Data System (ADS)
La Spada, Gabriele
Chapter 1 (my job market paper) asks the following question: do asset managers reach for yield because of competitive pressures in a low-rate environment? I propose a tournament model of money market funds (MMFs) to study this issue. I show that funds with different costs of default respond differently to changes in interest rates, and that it is important to distinguish the role of risk-free rates from that of risk premia. An increase in the risk premium leads funds with lower default costs to increase risk-taking, while funds with higher default costs reduce risk-taking. Without changes in the premium, low risk-free rates reduce risk-taking. My empirical analysis shows that these predictions are consistent with the risk-taking of MMFs during the 2006–2008 period. Chapter 2, co-authored with Fabrizio Lillo and published in Studies in Nonlinear Dynamics and Econometrics (2014), studies the effect of round-off error (or discretization) on stationary Gaussian long-memory processes. For large lags, the autocovariance is rescaled by a factor smaller than one, and we compute this factor exactly; hence, the discretized process has the same Hurst exponent as the underlying one. We show that in the presence of round-off error, two common estimators of the Hurst exponent, the local Whittle (LW) estimator and detrended fluctuation analysis (DFA), are severely negatively biased in finite samples. We derive conditions for consistency and asymptotic normality of the LW estimator applied to discretized processes and compute the asymptotic properties of the DFA for generic long-memory processes that encompass discretized processes. Chapter 3, co-authored with Fabrizio Lillo, studies the effect of round-off error on integrated Gaussian processes with possibly correlated increments. We derive the variance and kurtosis of the realized increment process in the limit of both "small" and "large" round-off errors, and its autocovariance for large lags. We propose novel estimators for the variance and lag-one autocorrelation of the underlying, unobserved increment process. We also show that for fractionally integrated processes, the realized increments have the same Hurst exponent as the underlying ones, but the LW estimator applied to the realized series is severely negatively biased in medium-sized samples.
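A rough sketch of detrended fluctuation analysis (DFA) applied to a rounded-off series is given below to illustrate the kind of estimator discussed in Chapters 2 and 3; the input is plain Gaussian noise (Hurst exponent 0.5) rather than a true long-memory process, so it demonstrates only the estimator mechanics and the discretization step, not the bias results.

```python
import numpy as np

def dfa_exponent(x, scales=(8, 16, 32, 64, 128)):
    """Order-1 DFA: returns the slope of log F(s) versus log s."""
    y = np.cumsum(x - x.mean())                 # profile of the increment series
    fluct = []
    for s in scales:
        n_seg = len(y) // s
        rms = []
        for i in range(n_seg):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # linear detrending
            rms.append(np.mean((seg - trend) ** 2))
        fluct.append(np.sqrt(np.mean(rms)))
    slope, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
    return slope

rng = np.random.default_rng(11)
x = rng.normal(size=20_000)          # increments with Hurst exponent 0.5
x_rounded = np.round(x)              # round-off (discretization) of the series

print("DFA exponent, original series:", round(dfa_exponent(x), 3))
print("DFA exponent, rounded series: ", round(dfa_exponent(x_rounded), 3))
```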
2014-01-01
Background The Health Information Technology for Economic and Clinical Health (HITECH) Act subsidizes implementation by hospitals of electronic health records with computerized provider order entry (CPOE), which may reduce patient injuries caused by medication errors (preventable adverse drug events, pADEs). Effects on pADEs have not been rigorously quantified, and effects on medication errors have been variable. The objectives of this analysis were to assess the effectiveness of CPOE at reducing pADEs in hospital-related settings, and examine reasons for heterogeneous effects on medication errors. Methods Articles were identified using MEDLINE, Cochrane Library, Econlit, web-based databases, and bibliographies of previous systematic reviews (September 2013). Eligible studies compared CPOE with paper-order entry in acute care hospitals, and examined diverse pADEs or medication errors. Studies on children or with limited event-detection methods were excluded. Two investigators extracted data on events and factors potentially associated with effectiveness. We used random effects models to pool data. Results Sixteen studies addressing medication errors met pooling criteria; six also addressed pADEs. Thirteen studies used pre-post designs. Compared with paper-order entry, CPOE was associated with half as many pADEs (pooled risk ratio (RR) = 0.47, 95% CI 0.31 to 0.71) and medication errors (RR = 0.46, 95% CI 0.35 to 0.60). Regarding reasons for heterogeneous effects on medication errors, five intervention factors and two contextual factors were sufficiently reported to support subgroup analyses or meta-regression. Differences between commercial versus homegrown systems, presence and sophistication of clinical decision support, hospital-wide versus limited implementation, and US versus non-US studies were not significant, nor was timing of publication. Higher baseline rates of medication errors predicted greater reductions (P < 0.001). Other context and implementation variables were seldom reported. Conclusions In hospital-related settings, implementing CPOE is associated with a greater than 50% decline in pADEs, although the studies used weak designs. Decreases in medication errors are similar and robust to variations in important aspects of intervention design and context. This suggests that CPOE implementation, as subsidized under the HITECH Act, may benefit public health. More detailed reporting of the context and process of implementation could shed light on factors associated with greater effectiveness. PMID:24894078
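For readers unfamiliar with the pooling step, here is a hedged sketch of DerSimonian-Laird random-effects pooling of log risk ratios; the study-level risk ratios and standard errors are hypothetical, not the reviewed studies.

```python
import numpy as np

def random_effects_pool(log_rr, se):
    """DerSimonian-Laird random-effects pooling of log risk ratios."""
    w = 1.0 / se**2                                  # fixed-effect weights
    fixed = np.sum(w * log_rr) / np.sum(w)
    q = np.sum(w * (log_rr - fixed) ** 2)            # Cochran's Q
    df = len(log_rr) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                    # between-study variance
    w_star = 1.0 / (se**2 + tau2)                    # random-effects weights
    pooled = np.sum(w_star * log_rr) / np.sum(w_star)
    se_pooled = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se_pooled

# Hypothetical study-level risk ratios and standard errors of the log risk ratio.
rr = np.array([0.40, 0.55, 0.62, 0.35, 0.48])
se = np.array([0.20, 0.25, 0.15, 0.30, 0.22])

pooled, se_pooled = random_effects_pool(np.log(rr), se)
lo, hi = np.exp(pooled - 1.96 * se_pooled), np.exp(pooled + 1.96 * se_pooled)
print(f"pooled RR = {np.exp(pooled):.2f} (95% CI {lo:.2f} to {hi:.2f})")
```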
A comparison of orthogonal transformations for digital speech processing.
NASA Technical Reports Server (NTRS)
Campanella, S. J.; Robinson, G. S.
1971-01-01
Discrete forms of the Fourier, Hadamard, and Karhunen-Loeve transforms are examined for their capacity to reduce the bit rate necessary to transmit speech signals. To rate their effectiveness in accomplishing this goal, the quantizing error (or noise) resulting from each transformation method at various bit rates is computed and compared with that for conventional companded PCM processing. Based on this comparison, it is found that the Karhunen-Loeve transform provides a reduction in bit rate of 13.5 kbits/s, the Fourier transform 10 kbits/s, and the Hadamard transform 7.5 kbits/s compared with the bit rate required for companded PCM. These bit-rate reductions are shown to be somewhat independent of the transmission bit rate.
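The bit-rate savings come from energy compaction: after transformation, most of the signal energy sits in a few coefficients, so the rest can be coarsely quantized or dropped. The sketch below compares the compaction of the discrete cosine and Hadamard transforms on a synthetic correlated signal; it does not reproduce the paper's quantizing-noise calculations.

```python
import numpy as np
from scipy.fft import dct
from scipy.linalg import hadamard

def energy_in_top_k(coeffs, k):
    """Fraction of total energy captured by the k largest-magnitude coefficients."""
    e = np.sort(coeffs**2)[::-1]
    return e[:k].sum() / e.sum()

rng = np.random.default_rng(2)
n = 64
# Synthetic correlated (AR(1)) frame standing in for a short speech segment.
x = np.zeros(n)
for i in range(1, n):
    x[i] = 0.95 * x[i - 1] + rng.normal()

dct_coeffs = dct(x, norm="ortho")
had_coeffs = hadamard(n) @ x / np.sqrt(n)     # orthonormal Hadamard transform

k = 8
print("energy in top 8 coefficients:")
print(f"  DCT      : {energy_in_top_k(dct_coeffs, k):.3f}")
print(f"  Hadamard : {energy_in_top_k(had_coeffs, k):.3f}")
print(f"  identity : {energy_in_top_k(x, k):.3f}")
```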
Siebert, Johan N; Ehrler, Frederic; Lovis, Christian; Combescure, Christophe; Haddad, Kevin; Gervaix, Alain; Manzano, Sergio
2017-08-22
During pediatric cardiopulmonary resuscitation (CPR), vasoactive drug preparation for continuous infusions is complex and time-consuming. The need for individual specific weight-based drug dose calculation and preparation places children at higher risk than adults for medication errors. Following an evidence-based and ergonomic driven approach, we developed a mobile device app called Pediatric Accurate Medication in Emergency Situations (PedAMINES), intended to guide caregivers step-by-step from preparation to delivery of drugs requiring continuous infusion. In a prior single center randomized controlled trial, medication errors were reduced from 70% to 0% by using PedAMINES when compared with conventional preparation methods. The purpose of this study is to determine whether the use of PedAMINES in both university and smaller hospitals reduces medication dosage errors (primary outcome), time to drug preparation (TDP), and time to drug delivery (TDD) (secondary outcomes) during pediatric CPR when compared with conventional preparation methods. This is a multicenter, prospective, randomized controlled crossover trial with 2 parallel groups comparing PedAMINES with a conventional and internationally used drug infusion rate table in the preparation of continuous drug infusion. The evaluation setting uses a simulation-based pediatric CPR cardiac arrest scenario with a high-fidelity manikin. The study involving 120 certified nurses (sample size) will take place in the resuscitation rooms of 3 tertiary pediatric emergency departments and 3 smaller hospitals. After epinephrine-induced return of spontaneous circulation, nurses will be asked to prepare a continuous infusion of dopamine using either PedAMINES (intervention group) or the infusion table (control group) and then prepare a continuous infusion of norepinephrine by crossing the procedure. The primary outcome is the medication dosage error rate. The secondary outcome is the time in seconds elapsed since the oral prescription by the physician to drug delivery by the nurse in each allocation group. TDD includes TDP. Stress level during the resuscitation scenario will be assessed for each participant by questionnaire and recorded by the heart rate monitor of a fitness watch. The study is formatted according to the Consolidated Standards of Reporting Trials Statement for Randomized Controlled Trials of Electronic and Mobile Health Applications and Online TeleHealth (CONSORT-EHEALTH) and the Reporting Guidelines for Health Care Simulation Research. Enrollment and data analysis started in March 2017. We anticipate the intervention will be completed in late 2017, and study results will be submitted in early 2018 for publication expected in mid-2018. Results will be reported in line with recommendations from CONSORT-EHEALTH and the Reporting Guidelines for Health Care Simulation Research . This paper describes the protocol used for a clinical trial assessing the impact of a mobile device app to reduce the rate of medication errors, time to drug preparation, and time to drug delivery during pediatric resuscitation. As research in this area is scarce, results generated from this study will be of great importance and might be sufficient to change and improve the pediatric emergency care practice. ClinicalTrials.gov NCT03021122; https://clinicaltrials.gov/ct2/show/NCT03021122 (Archived by WebCite at http://www.webcitation.org/6nfVJ5b4R). 
The effect of tropospheric fluctuations on the accuracy of water vapor radiometry
NASA Technical Reports Server (NTRS)
Wilcox, J. Z.
1992-01-01
Line-of-sight path delay calibration accuracies of 1 mm are needed to improve both angular and Doppler tracking capabilities. Fluctuations in the refractivity of tropospheric water vapor limit the present accuracies to about 1 nrad for the angular position and to a delay rate of 3×10⁻¹³ sec/sec over a 100-sec time interval for Doppler tracking. This article describes progress in evaluating the limitations of the technique of water vapor radiometry at the 1-mm level. The two effects evaluated here are: (1) errors arising from tip-curve calibration of WVRs in the presence of tropospheric fluctuations and (2) errors due to the use of nonzero beamwidths for water vapor radiometer (WVR) horns. The error caused by tropospheric water vapor fluctuations during instrument calibration from a single tip curve is 0.26 percent in the estimated gain for a tip-curve duration of several minutes or less. This gain error causes a 3-mm bias and a 1-mm scale factor error in the estimated path delay at a 10-deg elevation per 1 g/cm² of zenith water vapor column density present in the troposphere during the astrometric observation. The error caused by WVR beam averaging of tropospheric fluctuations is 3 mm at a 10-deg elevation per 1 g/cm² of zenith water vapor (and is proportionally higher for higher water vapor content) for current WVR beamwidths (full width at half maximum of approximately 6 deg). This is a stochastic error (which cannot be calibrated) and can be reduced to about half of its instantaneous value by time averaging the radio signal over several minutes. The results presented here suggest two improvements to WVR design: first, the gain of the instruments should be stabilized to 4 parts in 10⁴ over a calibration period lasting 5 hours, and second, the WVR antenna beamwidth should be reduced to about 0.2 deg. This will reduce the error induced by water vapor fluctuations in the estimated path delays to less than 1 mm for the elevation range from zenith to 6 deg for most observation weather conditions.
Impact and quantification of the sources of error in DNA pooling designs.
Jawaid, A; Sham, P
2009-01-01
The analysis of genome wide variation offers the possibility of unravelling the genes involved in the pathogenesis of disease. Genome wide association studies are also particularly useful for identifying and validating targets for therapeutic intervention as well as for detecting markers for drug efficacy and side effects. The cost of such large-scale genetic association studies may be reduced substantially by the analysis of pooled DNA from multiple individuals. However, experimental errors inherent in pooling studies lead to a potential increase in the false positive rate and a loss in power compared to individual genotyping. Here we quantify various sources of experimental error using empirical data from typical pooling experiments and corresponding individual genotyping counts using two statistical methods. We provide analytical formulas for calculating these different errors in the absence of complete information, such as replicate pool formation, and for adjusting for the errors in the statistical analysis. We demonstrate that DNA pooling has the potential of estimating allele frequencies accurately, and adjusting the pooled allele frequency estimates for differential allelic amplification considerably improves accuracy. Estimates of the components of error show that differential allelic amplification is the most important contributor to the error variance in absolute allele frequency estimation, followed by allele frequency measurement and pool formation errors. Our results emphasise the importance of minimising experimental errors and obtaining correct error estimates in genetic association studies.
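The differential-amplification adjustment mentioned above is often made by scaling one allele's signal with a correction factor k estimated from heterozygous individuals; the sketch below applies that idea to hypothetical pooled intensities and is not the authors' full error model.

```python
import numpy as np

def corrected_pool_frequency(signal_a, signal_b, k):
    """Allele-A frequency estimate from pooled signals, adjusting for differential
    amplification with k = mean(A signal / B signal) measured in heterozygotes."""
    return signal_a / (signal_a + k * signal_b)

# Hypothetical heterozygote measurements used to estimate k.
het_a = np.array([1.22, 1.15, 1.30, 1.18])
het_b = np.array([1.00, 0.95, 1.02, 0.97])
k = np.mean(het_a / het_b)

# Hypothetical pooled intensities for one SNP.
pool_a, pool_b = 6.1, 4.2
print(f"k = {k:.2f}")
print(f"uncorrected frequency: {pool_a / (pool_a + pool_b):.3f}")
print(f"corrected frequency:   {corrected_pool_frequency(pool_a, pool_b, k):.3f}")
```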
A Simple Exact Error Rate Analysis for DS-CDMA with Arbitrary Pulse Shape in Flat Nakagami Fading
NASA Astrophysics Data System (ADS)
Rahman, Mohammad Azizur; Sasaki, Shigenobu; Kikuchi, Hisakazu; Harada, Hiroshi; Kato, Shuzo
A simple exact error rate analysis is presented for random binary direct sequence code division multiple access (DS-CDMA) considering a general pulse shape and a flat Nakagami fading channel. First, a simple model is developed for the multiple access interference (MAI). Based on this, a simple exact expression for the characteristic function (CF) of the MAI is developed in a straightforward manner. Finally, an exact expression for the error rate is obtained following the CF method of error rate analysis. The exact error rate so obtained can be evaluated much more easily than the only reliable approximate error rate expression currently available, which is based on the Improved Gaussian Approximation (IGA).
Influence of video compression on the measurement error of the television system
NASA Astrophysics Data System (ADS)
Sotnik, A. V.; Yarishev, S. N.; Korotaev, V. V.
2015-05-01
Video data require a very large memory capacity, and finding the optimal trade-off between quality and volume in video encoding is one of the most pressing problems, given the urgent need to transfer large amounts of video over various networks. Digital TV signal compression reduces the amount of data used to represent the video stream, effectively reducing the stream required for transmission and storage. It is important to take into account the uncertainties caused by compression of the video signal when television measuring systems are used. There are many digital compression methods, and the aim of the proposed work is to research the influence of video compression on the measurement error in television systems. The measurement error of the object parameter is the main characteristic of a television measuring system: accuracy characterizes the difference between the measured value and the actual parameter value. Errors caused by the optical system and by the method used to process the received video signal are both sources of error in television system measurements. The presence of errors leads to large distortions in the case of compression with a constant data stream rate, and increases the amount of data required to transmit or record an image frame in the case of constant quality. The purpose of intra-coding is to reduce the spatial redundancy within a frame (or field) of the television image; this redundancy is caused by the strong correlation between the elements of the image. If a suitable orthogonal transformation can be found, an array of image samples can be converted into a matrix of coefficients that are not correlated with each other, and entropy coding can then be applied to these uncorrelated coefficients to reduce the digital stream. One can select a transformation for which most of the matrix coefficients are almost zero for typical images; excluding these near-zero coefficients reduces the digital stream further. The discrete cosine transform is the most widely used of the possible orthogonal transformations. The errors of television measuring systems and of data compression protocols are analyzed in this paper: the main characteristics of measuring systems are described, the sources of their errors are identified, the most effective methods of video compression are determined, and the influence of video compression error on television measuring systems is researched. The obtained results will increase the accuracy of measuring systems. In a television measuring system, image quality is degraded both by distortions identical to those in analog systems and by specific distortions resulting from encoding/decoding of the digital video signal and from errors in the transmission channel. The distortions associated with encoding/decoding include quantization noise, reduced resolution, the mosaic effect, the "mosquito" effect, edging at sharp brightness transitions, color blurring, false patterns, the "dirty window" effect and other defects. The video compression algorithms used in television measuring systems are based on encoding the image with intra- and inter-prediction of individual fragments. The encoding/decoding process is non-linear in space and in time, because the playback quality at the receiver depends on the preceding and succeeding frames, which can make an individual sub-picture, and the corresponding measuring signal, inadequate.
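The intra-coding idea described above (transform an image block so that most coefficients are near zero, then drop them) can be sketched with an 8x8 two-dimensional DCT as follows; the block data are synthetic and no quantization tables or entropy coding are included.

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(4)
# Hypothetical smooth 8x8 luma block (a gentle gradient plus a little noise).
yy, xx = np.mgrid[0:8, 0:8]
block = 100 + 6 * xx + 3 * yy + rng.normal(scale=2.0, size=(8, 8))

coeffs = dctn(block, norm="ortho")            # 2-D DCT: decorrelates the samples
keep = 8                                      # retain only the largest coefficients
thresh = np.sort(np.abs(coeffs).ravel())[-keep]
truncated = np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)

reconstructed = idctn(truncated, norm="ortho")
rmse = np.sqrt(np.mean((block - reconstructed) ** 2))
print(f"coefficients kept: {keep}/64, reconstruction RMSE: {rmse:.2f} gray levels")
```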
Is GPS telemetry location error screening beneficial?
Ironside, Kirsten E.; Mattson, David J.; Arundel, Terry; Hansen, Jered R.
2017-01-01
The accuracy of global positioning system (GPS) locations obtained from study animals tagged with GPS monitoring devices has been a concern because of the degree to which it influences assessments of movement patterns, space use, and resource selection estimates. Many methods have been proposed for screening data to retain the most accurate positions for analysis, based on dilution of precision (DOP) measures and on whether the position is a two-dimensional or three-dimensional fix. Here we further explore the utility of these measures by testing a Telonics GEN3 GPS collar's positional accuracy across a wide range of environmental conditions. We found the relationship between location error and fix dimension and DOP metrics extremely weak (adjusted r² ≈ 0.01) in our study area. Environmental factors such as topographic exposure, canopy cover, and vegetation height explained more of the variance (adjusted r² = 15.08%). Our field testing covered sites where sky view was so limited that it affected GPS performance to the degree that fix attempts failed frequently (fix success rates ranged from 0.00 to 100.00% over 67 sites). Screening data using PDOP did not effectively reduce the location error in the remaining dataset. Removing two-dimensional fixes reduced the mean location error by 10.95 meters, but also resulted in a 54.50% data reduction. Therefore, screening data under the range of conditions sampled here would reduce information on animal movement with only minor improvements in accuracy and could potentially introduce bias towards more open terrain and vegetation.
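The trade-off reported above, where removing two-dimensional fixes lowers mean error but discards much of the data, reduces to a simple screening calculation; the location errors and fix types below are hypothetical stand-ins for field-test records.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 1_000
# Hypothetical records: fix dimension (2-D or 3-D) and location error in meters.
is_3d = rng.random(n) < 0.45
error_m = np.where(is_3d, rng.gamma(2.0, 8.0, n), rng.gamma(2.0, 14.0, n))

kept = error_m[is_3d]                    # screen: retain only 3-D fixes
print(f"mean error, all fixes:    {error_m.mean():.1f} m")
print(f"mean error, 3-D only:     {kept.mean():.1f} m")
print(f"data discarded by screen: {100 * (1 - is_3d.mean()):.1f}%")
```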
Awareness of deficits and error processing after traumatic brain injury.
Larson, Michael J; Perlstein, William M
2009-10-28
Severe traumatic brain injury is frequently associated with alterations in performance monitoring, including reduced awareness of physical and cognitive deficits. We examined the relationship between awareness of deficits and electrophysiological indices of performance monitoring, including the error-related negativity and posterror positivity (Pe) components of the scalp-recorded event-related potential, in 16 traumatic brain injury survivors who completed a Stroop color-naming task while event-related potential measurements were recorded. Awareness of deficits was measured as the discrepancy between patient and significant-other ratings on the Frontal Systems Behavior Scale. The amplitude of the Pe, but not error-related negativity, was reliably associated with decreased awareness of deficits. Results indicate that Pe amplitude may serve as an electrophysiological indicator of awareness of abilities and deficits.
Common errors of drug administration in infants: causes and avoidance.
Anderson, B J; Ellis, J F
1999-01-01
Drug administration errors are common in infants. Although the infant population has a high exposure to drugs, there are few data concerning pharmacokinetics or pharmacodynamics, or the influence of paediatric diseases on these processes. Children remain therapeutic orphans. Formulations are often suitable only for adults; in addition, the lack of maturation of drug elimination processes, alteration of body composition and influence of size render the calculation of drug doses complex in infants. The commonest drug administration error in infants is one of dose, and the commonest hospital site for this error is the intensive care unit. Drug errors are a consequence of system error, and preventive strategies are possible through system analysis. The goal of a zero drug error rate should be aggressively sought, with systems in place that aim to eliminate the effects of inevitable human error. This involves review of the entire system from drug manufacture to drug administration. The nuclear industry, telecommunications and air traffic control services all practise error reduction policies with zero error as a clear goal, not by finding fault in the individual, but by identifying faults in the system and building into that system mechanisms for picking up faults before they occur. Such policies could be adapted to medicine using interventions both specific (the production of formulations which are for children only and clearly labelled, regular audit by pharmacists, legible prescriptions, standardised dose tables) and general (paediatric drug trials, education programmes, nonpunitive error reporting) to reduce the number of errors made in giving medication to infants.
Moore, Laura J.; Griggs, Gary B.
2002-01-01
Quantification of cliff retreat rates for the southern half of Santa Cruz County, CA, USA, located within the Monterey Bay National Marine Sanctuary, using the softcopy/geographic information system (GIS) methodology results in average cliff retreat rates of 7–15 cm/yr between 1953 and 1994. The coastal dunes at the southern end of Santa Cruz County migrate seaward and landward through time and display net accretion between 1953 and 1994, which is partially due to development. In addition, three critically eroding segments of coastline with high average erosion rates ranging from 20 to 63 cm/yr are identified as erosion ‘hotspots’. These locations include: Opal Cliffs, Depot Hill and Manresa. Although cliff retreat is episodic, spatially variable at the scale of meters, and the factors affecting cliff retreat vary along the Santa Cruz County coastline, there is a compensation between factors affecting retreat such that over the long-term the coastline maintains a relatively smooth configuration. The softcopy/GIS methodology significantly reduces errors inherent in the calculation of retreat rates in high-relief areas (e.g. erosion rates generated in this study are generally correct to within 10 cm) by removing errors due to relief displacement. Although the resulting root mean squared error for erosion rates is relatively small, simple projections of past erosion rates are inadequate to provide predictions of future cliff position. Improved predictions can be made for individual coastal segments by using a mean erosion rate and the standard deviation as guides to future cliff behavior in combination with an understanding of processes acting along the coastal segments in question. This methodology can be applied on any high-relief coast where retreat rates can be measured.
Development and implementation of a human accuracy program in patient foodservice.
Eden, S H; Wood, S M; Ptak, K M
1987-04-01
For many years, industry has utilized the concept of human error rates to monitor and minimize human errors in the production process: a consistent, quality-controlled product increases consumer satisfaction and repeat purchase of the product. Administrative dietitians have applied the concept of human error rates (the number of errors divided by the number of opportunities for error) at four hospitals, with a total bed capacity of 788, within a tertiary-care medical center. The human error rate was used to monitor and evaluate trayline employee performance and to evaluate the layout and tasks of trayline stations, in addition to evaluating employees in patient service areas. Long-term employees initially opposed the error rate system with some hostility and resentment, while newer employees accepted the system. All employees now believe that the constant feedback given by supervisors enhances their self-esteem and productivity. Employee error rates are monitored daily and are used to counsel employees when necessary; they are also utilized during annual performance evaluations. Average daily error rates for a facility staffed by new employees decreased from 7% to an acceptable 3%. In a facility staffed by long-term employees, the error rate increased, reflecting improper error documentation. Patient satisfaction surveys show that satisfaction with tray accuracy increased from 88% to 92% in the facility staffed by long-term employees and has remained above the 90% standard in the facility staffed by new employees.
Are "Human Factors" the Primary Cause of Complications in the Field of Implant Dentistry?
Renouard, Franck; Amalberti, René; Renouard, Erell
Complications in medicine and dentistry are usually analyzed from a purely technical point of view. Rarely is the role of human behavior or judgment considered as a reason for adverse outcomes. When the role of human factors is considered, these are usually described in general terms rather than specifically identifying the factors responsible for an adverse event. The impact of cognitive and behavioral factors in the explanation of adverse events has been studied in other high-stakes areas such as aviation and nuclear power. Specific protocols have been developed to reduce rates of human error, and, where human error is unavoidable, to lessen its impact. This approach has dramatically reduced the incidence of accidents in these fields. This article aims to review how a similar approach may prove valuable in the reduction of complications in implant dentistry.
The control of translational accuracy is a determinant of healthy ageing in yeast
von der Haar, Tobias; Leadsham, Jane E.; Sauvadet, Aimie; Tarrant, Daniel; Adam, Ilectra S.; Saromi, Kofo; Laun, Peter; Rinnerthaler, Mark; Breitenbach-Koller, Hannelore; Breitenbach, Michael; Tuite, Mick F.; Gourlay, Campbell W.
2017-01-01
Life requires the maintenance of molecular function in the face of stochastic processes that tend to adversely affect macromolecular integrity. This is particularly relevant during ageing, as many cellular functions decline with age, including growth, mitochondrial function and energy metabolism. Protein synthesis must deliver functional proteins at all times, implying that the effects of protein synthesis errors like amino acid misincorporation and stop-codon read-through must be minimized during ageing. Here we show that loss of translational accuracy accelerates the loss of viability in stationary phase yeast. Since reduced translational accuracy also reduces the folding competence of at least some proteins, we hypothesize that negative interactions between translational errors and age-related protein damage together overwhelm the cellular chaperone network. We further show that multiple cellular signalling networks control basal error rates in yeast cells, including a ROS signal controlled by mitochondrial activity, and the Ras pathway. Together, our findings indicate that signalling pathways regulating growth, protein homeostasis and energy metabolism may jointly safeguard accurate protein synthesis during healthy ageing. PMID:28100667
The control of translational accuracy is a determinant of healthy ageing in yeast.
von der Haar, Tobias; Leadsham, Jane E; Sauvadet, Aimie; Tarrant, Daniel; Adam, Ilectra S; Saromi, Kofo; Laun, Peter; Rinnerthaler, Mark; Breitenbach-Koller, Hannelore; Breitenbach, Michael; Tuite, Mick F; Gourlay, Campbell W
2017-01-01
Life requires the maintenance of molecular function in the face of stochastic processes that tend to adversely affect macromolecular integrity. This is particularly relevant during ageing, as many cellular functions decline with age, including growth, mitochondrial function and energy metabolism. Protein synthesis must deliver functional proteins at all times, implying that the effects of protein synthesis errors like amino acid misincorporation and stop-codon read-through must be minimized during ageing. Here we show that loss of translational accuracy accelerates the loss of viability in stationary phase yeast. Since reduced translational accuracy also reduces the folding competence of at least some proteins, we hypothesize that negative interactions between translational errors and age-related protein damage together overwhelm the cellular chaperone network. We further show that multiple cellular signalling networks control basal error rates in yeast cells, including a ROS signal controlled by mitochondrial activity, and the Ras pathway. Together, our findings indicate that signalling pathways regulating growth, protein homeostasis and energy metabolism may jointly safeguard accurate protein synthesis during healthy ageing. © 2017 The Authors.
NASA Astrophysics Data System (ADS)
Bojko, Brian T.
Accounting for the effects of finite rate chemistry in reacting flows is intractable when considering the number of species and reactions to be solved for during a large scale flow simulation. This is especially complicated when solid/liquid fuels are also considered. While modeling the reacting boundary layer with the use of finite-rate chemistry may allow for a highly accurate description of the coupling between the flame and fuel surface, it is not tractable in large scale simulations when considering detailed chemical kinetics. It is the goal of this research to investigate a Flamelet-Generated Manifold (FGM) method in order to reduce the finite rate chemistry to a lookup table cataloged by progress variables and queried during runtime. In this study, simplified unsteady 1D flames with mass blowing are considered for a solid biomass fuel where the FGM method is employed as a model reduction strategy for potential application to multidimensional calculations. Two types of FGM are considered. The first is a set of steady-state flames differentiated by their scalar dissipation rate. Results show that the use of steady flames produces unacceptable errors compared to the finite-rate chemistry solution, with temperature errors in excess of 45%. To avoid these errors, a new methodology for developing an unsteady FGM (UFGM) is presented that accounts for unsteady diffusion effects and greatly reduces errors in temperature with differences that are under 10%. The FGM modeling is then extended to individual droplet combustion with the development of a Droplet Flamelet-Generated Manifold (DFGM) to account for the effects of finite-rate chemistry of individual droplets. A spherically symmetric droplet model is developed for methanol and aluminum. The inclusion of finite-rate chemistry allows the capturing of the transition from diffusion to kinetically controlled combustion as the droplet diameter decreases. The droplet model is then used to create a DFGM by successively solving the 1D flame equations at varying drop sizes, where the source terms for energy, mixture fraction, and progress variable are cataloged as a function of normalized diameter. A unique coupling of the DFGM and planar UFGM is developed and is used to account for individual and gas phase combustion processes in turbulent combustion situations, such as spray flames, particle-laden blasts, etc. The DFGMs for the methanol and aluminum droplets are used in mixed Eulerian and Eulerian-Lagrangian formulations of compressible multiphase flows. System level simulations are conducted and compared to experimental data for a methanol spray flame and an aluminized blast studied at the Explosives Components Facility (ECF) at Sandia National Laboratories.
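The central idea of replacing finite-rate kinetics with a precomputed table that is interpolated at runtime can be illustrated in a few lines. The manifold below is a toy function, not anything from this work, and the controlling variables (mixture fraction Z, normalized progress variable C) are generic assumptions:

```python
# Illustrative sketch only: tabulate a progress-variable source term omega_c(Z, C)
# from offline flamelet solutions, then query it during a flow simulation instead
# of integrating detailed chemistry.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

Z = np.linspace(0.0, 1.0, 51)          # mixture fraction grid
C = np.linspace(0.0, 1.0, 41)          # normalized progress variable grid
ZZ, CC = np.meshgrid(Z, C, indexing="ij")
omega_c_table = 50.0 * CC * (1.0 - CC) * np.exp(-((ZZ - 0.3) / 0.15) ** 2)  # toy data

lookup = RegularGridInterpolator((Z, C), omega_c_table,
                                 bounds_error=False, fill_value=0.0)

def progress_source(z, c):
    """Runtime query of the manifold at a local (Z, C) state."""
    return lookup(np.column_stack([np.atleast_1d(z), np.atleast_1d(c)]))

print(progress_source(0.3, 0.5))
```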
Urban, Michal; Leššo, Roman; Pelclová, Daniela
2016-07-01
The purpose of the article was to study unintentional pharmaceutical-related poisonings committed by laypersons that were reported to the Toxicological Information Centre in the Czech Republic. Identifying frequency, sources, reasons and consequences of the medication errors in laypersons could help to reduce the overall rate of medication errors. Records of medication error enquiries from 2013 to 2014 were extracted from the electronic database, and the following variables were reviewed: drug class, dosage form, dose, age of the subject, cause of the error, time interval from ingestion to the call, symptoms, prognosis at the time of the call and first aid recommended. Of the calls, 1354 met the inclusion criteria. Among them, central nervous system-affecting drugs (23.6%), respiratory drugs (18.5%) and alimentary drugs (16.2%) were the most common drug classes involved in the medication errors. The highest proportion of the patients was in the youngest age subgroup, 0-5 years old (46%). The reasons for the medication errors involved leaflet misinterpretation and mistaken doses (53.6%), mixing up medications (19.2%), attempting to reduce pain with repeated doses (6.4%), erroneous routes of administration (2.2%), psychiatric/elderly patients (2.7%), others (9.0%) or unknown (6.9%). A high proportion of children among the patients may be due to the fact that children's dosages for many drugs vary by their weight, and more medications come in a variety of concentrations. Most overdoses could be prevented by safer labelling, proper cap closure systems for liquid products and medication reconciliation by both physicians and pharmacists. © 2016 Nordic Association for the Publication of BCPT (former Nordic Pharmacological Society).
Khammarnia, Mohammad; Sharifian, Roxana; Zand, Farid; Barati, Omid; Keshtkaran, Ali; Sabetian, Golnar; Shahrokh, Nasim; Setoodezadeh, Fatemeh
2017-01-01
Background: One way to reduce medical errors associated with physician orders is computerized physician order entry (CPOE) software. This study was conducted to compare prescription orders between 2 groups before and after CPOE implementation in a hospital. Methods: We conducted a before-after prospective study in 2 intensive care unit (ICU) wards (as intervention and control wards) in the largest tertiary public hospital in the south of Iran during 2014 and 2016. All prescription orders were validated by a clinical pharmacist and an ICU physician. The rates of ordering errors in medical orders were compared before (manual ordering) and after implementation of the CPOE. A standard checklist was used for data collection. For the data analysis, SPSS Version 21, descriptive statistics, and analytical tests such as McNemar, chi-square, and logistic regression were used. Results: The CPOE significantly decreased 2 types of errors, illegible orders and failure to specify the drug form, in the intervention ward compared to the control ward (p < 0.05); however, 2 types of errors increased due to a defect in the CPOE (p < 0.001). The use of CPOE decreased the prescription errors from 19% to 3% (p = 0.001); however, no differences were observed in the control ward (p > 0.05). In addition, more errors occurred in the morning shift (p < 0.001). Conclusion: In general, the use of CPOE significantly reduced the prescription errors. Nonetheless, more caution should be exercised in the use of this system, and its deficiencies should be resolved. Furthermore, it is recommended that CPOE be used to improve the quality of delivered services in hospitals. PMID:29445698
Adaptive Trajectory Prediction Algorithm for Climbing Flights
NASA Technical Reports Server (NTRS)
Schultz, Charles Alexander; Thipphavong, David P.; Erzberger, Heinz
2012-01-01
Aircraft climb trajectories are difficult to predict, and large errors in these predictions reduce the potential operational benefits of some advanced features for NextGen. The algorithm described in this paper improves climb trajectory prediction accuracy by adjusting trajectory predictions based on observed track data. It utilizes rate-of-climb and airspeed measurements derived from position data to dynamically adjust the aircraft weight modeled for trajectory predictions. In simulations with weight uncertainty, the algorithm is able to adapt to within 3 percent of the actual gross weight within two minutes of the initial adaptation. The root-mean-square of altitude errors for five-minute predictions was reduced by 73 percent. Conflict detection performance also improved, with a 15 percent reduction in missed alerts and a 10 percent reduction in false alerts. In a simulation with climb speed capture intent and weight uncertainty, the algorithm improved climb trajectory prediction accuracy by up to 30 percent and conflict detection performance, reducing missed and false alerts by up to 10 percent.
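The weight-adaptation idea can be illustrated with a toy update rule (this is not the NASA algorithm; gains, limits and the assumption that specific excess power is roughly weight-independent, so that rate of climb scales as 1/W, are introduced here for illustration):

```python
# Toy illustration: nudge the modeled gross weight so the predicted rate of
# climb matches the rate of climb observed from track data (ROC ~ P_s / W).
def adapt_weight(w_model_kg, roc_predicted_mps, roc_observed_mps,
                 gain=0.3, w_min=40_000.0, w_max=90_000.0):
    w_target = w_model_kg * roc_predicted_mps / roc_observed_mps
    w_new = w_model_kg + gain * (w_target - w_model_kg)   # smooth the update
    return min(max(w_new, w_min), w_max)                  # keep within limits

w = 70_000.0                              # initial modeled gross weight (kg), hypothetical
for roc_obs in [12.0, 12.5, 12.8]:        # observed climb rates (m/s)
    w = adapt_weight(w, roc_predicted_mps=10.0, roc_observed_mps=roc_obs)
    print(round(w))                       # converges toward a lighter aircraft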
Emergency department discharge prescription errors in an academic medical center
Belanger, April; Devine, Lauren T.; Lane, Aaron; Condren, Michelle E.
2017-01-01
This study described discharge prescription medication errors written for emergency department patients. This study used content analysis in a cross-sectional design to systematically categorize prescription errors found in a report of 1000 discharge prescriptions submitted in the electronic medical record in February 2015. Two pharmacy team members reviewed the discharge prescription list for errors. Open-ended data were coded by an additional rater for agreement on coding categories. Coding was based upon majority rule. Descriptive statistics were used to address the study objective. Categories evaluated were patient age, provider type, drug class, and type and time of error. The discharge prescription error rate out of 1000 prescriptions was 13.4%, with “incomplete or inadequate prescription” being the most commonly detected error (58.2%). The adult and pediatric error rates were 11.7% and 22.7%, respectively. The antibiotics reviewed had the highest number of errors. The highest within-class error rates were with antianginal medications, antiparasitic medications, antacids, appetite stimulants, and probiotics. Emergency medicine residents wrote the highest percentage of prescriptions (46.7%) and had an error rate of 9.2%. Residents of other specialties wrote 340 prescriptions and had an error rate of 20.9%. Errors occurred most often between 10:00 am and 6:00 pm. PMID:28405061
Modeling streamflow from coupled airborne laser scanning and acoustic Doppler current profiler data
Lam, Norris; Kean, Jason W.; Lyon, Steve
2016-01-01
The rating curve enables the translation of water depth into stream discharge through a reference cross-section. This study investigates coupling national scale airborne laser scanning (ALS) and acoustic Doppler current profiler (ADCP) bathymetric survey data for generating stream rating curves. A digital terrain model was defined from these data and applied in a physically based 1-D hydraulic model to generate rating curves for a regularly monitored location in northern Sweden. Analysis of the ALS data showed that overestimation of the streambank elevation could be adjusted with a root mean square error (RMSE) block adjustment using a higher accuracy manual topographic survey. The results of our study demonstrate that the rating curve generated from the vertically corrected ALS data combined with ADCP data had lower errors (RMSE = 0.79 m3/s) than the empirical rating curve (RMSE = 1.13 m3/s) when compared to streamflow measurements. We consider these findings encouraging as hydrometric agencies can potentially leverage national-scale ALS and ADCP instrumentation to reduce the cost and effort required for maintaining and establishing rating curves at gauging station sites similar to the Röån River.
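An empirical rating curve of the kind referred to above is commonly a power law fit to stage-discharge pairs, scored by RMSE against flow measurements; a sketch with synthetic observations (the coefficients and data are invented, not from the Röån River record):

```python
# Hedged sketch: fit Q = a * (h - h0)**b to stage/discharge pairs and report RMSE.
import numpy as np
from scipy.optimize import curve_fit

def rating_curve(h, a, b, h0):
    return a * np.clip(h - h0, 1e-6, None) ** b

stage = np.array([0.4, 0.6, 0.8, 1.0, 1.3, 1.6, 2.0])        # stage (m), synthetic
discharge = np.array([0.8, 2.1, 4.0, 6.5, 11.0, 17.0, 26.0])  # discharge (m3/s), synthetic

params, _ = curve_fit(rating_curve, stage, discharge, p0=[5.0, 1.7, 0.2])
predicted = rating_curve(stage, *params)
rmse = np.sqrt(np.mean((predicted - discharge) ** 2))
print(f"a={params[0]:.2f}, b={params[1]:.2f}, h0={params[2]:.2f}, RMSE={rmse:.2f} m3/s")
```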
Levy, Scott; Ferreira, Kurt B.; Bridges, Patrick G.; ...
2014-12-09
Building the next-generation of extreme-scale distributed systems will require overcoming several challenges related to system resilience. As the number of processors in these systems grows, the failure rate increases proportionally. One of the most common sources of failure in large-scale systems is memory. In this paper, we propose a novel runtime for transparently exploiting memory content similarity to improve system resilience by reducing the rate at which memory errors lead to node failure. We evaluate the viability of this approach by examining memory snapshots collected from eight high-performance computing (HPC) applications and two important HPC operating systems. Based on the characteristics of the similarity uncovered, we conclude that our proposed approach shows promise for addressing system resilience in large-scale systems.
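"Memory content similarity" can be made concrete with a simple page-hashing pass over a snapshot. This is only an illustration of the measurement idea, not the authors' runtime, and the snapshot contents are fabricated:

```python
# Simplified sketch: hash fixed-size "pages" of a memory snapshot and report how
# much content is duplicated; duplicated pages are candidates for replication so
# that a memory error in one copy need not take down the node.
import hashlib
import os
from collections import Counter

PAGE_SIZE = 4096

def page_hashes(snapshot: bytes):
    return [hashlib.sha256(snapshot[i:i + PAGE_SIZE]).hexdigest()
            for i in range(0, len(snapshot), PAGE_SIZE)]

# Hypothetical snapshot: 20 zero-filled pages plus 5 distinct random pages
snapshot = bytes(PAGE_SIZE) * 20 + os.urandom(PAGE_SIZE * 5)
counts = Counter(page_hashes(snapshot))
duplicated = sum(c for c in counts.values() if c > 1)
total = sum(counts.values())
print(f"{duplicated}/{total} pages share content with another page")
```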
Human Factors and Ergonomics for the Dental Profession.
Ross, Al
2016-09-01
This paper proposes that the science of Human Factors and Ergonomics (HFE) is suitable for wide application in dental education, training and practice to improve safety, quality and efficiency. Three areas of interest are highlighted. First it is proposed that individual and team Non-Technical Skills (NTS), such as communication, leadership and stress management can improve error rates and efficiency of procedures. Secondly, in a physically and technically challenging environment, staff can benefit from ergonomic principles which examine design in supporting safe work. Finally, examination of organizational human factors can help anticipate stressors and plan for flexible responses to multiple, variable demands, and fluctuating resources. Clinical relevance: HFE is an evidence-based approach to reducing error rates and procedural complications, and avoiding problems associated with stress and fatigue. Improved teamwork and organizational planning and efficiency can impact directly on patient outcomes.
The effect of sampling rate and lowpass filters on saccades - A modeling approach.
Mack, David J; Belfanti, Sandro; Schwarz, Urs
2017-12-01
The study of eye movements has become popular in many fields of science. However, using the preprocessed output of an eye tracker without scrutiny can lead to low-quality or even erroneous data. For example, the sampling rate of the eye tracker influences saccadic peak velocity, while inadequate filters fail to suppress noise or introduce artifacts. Despite previously published guiding values, most filter choices still seem motivated by a trial-and-error approach, and a thorough analysis of filter effects is missing. Therefore, we developed a simple and easy-to-use saccade model that incorporates measured amplitude-velocity main sequences and produces saccades with a similar frequency content to real saccades. We also derived a velocity divergence measure to rate deviations between velocity profiles. In total, we simulated 155 saccades ranging from 0.5° to 60° and subjected them to different sampling rates, noise compositions, and various filter settings. The final goal was to compile a list with the best filter settings for each of these conditions. Replicating previous findings, we observed reduced peak velocities at lower sampling rates. However, this effect was highly non-linear over amplitudes and increasingly stronger for smaller saccades. Interpolating the data to a higher sampling rate significantly reduced this effect. We hope that our model and the velocity divergence measure will be used to provide a quickly accessible ground truth without the need for recording and manually labeling saccades. The comprehensive list of filters allows one to choose the correct filter for analyzing saccade data without resorting to trial-and-error methods.
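The headline effect, lower measured peak velocity at lower sampling rates, is easy to reproduce with a synthetic saccade. The logistic position profile below is a stand-in assumption, not the measured main-sequence model the authors built:

```python
# Illustrative sketch: sample a synthetic 10-degree saccade at different rates and
# compare the peak velocity recovered from finite differences of position.
import numpy as np

def saccade_position(t, amplitude=10.0, duration=0.045):
    """Smooth position profile (deg) for a saccade centred on t = 0."""
    return amplitude / (1.0 + np.exp(-t / (duration / 12.0)))

def peak_velocity(sampling_rate_hz):
    t = np.arange(-0.1, 0.1, 1.0 / sampling_rate_hz)
    pos = saccade_position(t)
    vel = np.gradient(pos, t)          # deg/s from sampled positions
    return vel.max()

for rate in (2000, 500, 250, 60):
    print(f"{rate:5d} Hz -> peak velocity {peak_velocity(rate):6.1f} deg/s")
```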
Plessen, Kerstin J.; Allen, Elena A.; Eichele, Heike; van Wageningen, Heidi; Høvik, Marie Farstad; Sørensen, Lin; Worren, Marius Kalsås; Hugdahl, Kenneth; Eichele, Tom
2016-01-01
Background We examined the blood-oxygen level–dependent (BOLD) activation in brain regions that signal errors and their association with intraindividual behavioural variability and adaptation to errors in children with attention-deficit/hyperactivity disorder (ADHD). Methods We acquired functional MRI data during a Flanker task in medication-naive children with ADHD and healthy controls aged 8–12 years and analyzed the data using independent component analysis. For components corresponding to performance monitoring networks, we compared activations across groups and conditions and correlated them with reaction times (RT). Additionally, we analyzed post-error adaptations in behaviour and motor component activations. Results We included 25 children with ADHD and 29 controls in our analysis. Children with ADHD displayed reduced activation to errors in cingulo-opercular regions and higher RT variability, but no differences of interference control. Larger BOLD amplitude to error trials significantly predicted reduced RT variability across all participants. Neither group showed evidence of post-error response slowing; however, post-error adaptation in motor networks was significantly reduced in children with ADHD. This adaptation was inversely related to activation of the right-lateralized ventral attention network (VAN) on error trials and to task-driven connectivity between the cingulo-opercular system and the VAN. Limitations Our study was limited by the modest sample size and imperfect matching across groups. Conclusion Our findings show a deficit in cingulo-opercular activation in children with ADHD that could relate to reduced signalling for errors. Moreover, the reduced orienting of the VAN signal may mediate deficient post-error motor adaptions. Pinpointing general performance monitoring problems to specific brain regions and operations in error processing may help to guide the targets of future treatments for ADHD. PMID:26441332
Speech errors of amnesic H.M.: unlike everyday slips-of-the-tongue.
MacKay, Donald G; James, Lori E; Hadley, Christopher B; Fogler, Kethera A
2011-03-01
Three language production studies indicate that amnesic H.M. produces speech errors unlike everyday slips-of-the-tongue. Study 1 was a naturalistic task: H.M. and six controls closely matched for age, education, background and IQ described what makes captioned cartoons funny. Nine judges rated the descriptions blind to speaker identity and gave reliably more negative ratings for coherence, vagueness, comprehensibility, grammaticality, and adequacy of humor-description for H.M. than the controls. Study 2 examined "major errors", a novel type of speech error that is uncorrected and reduces the coherence, grammaticality, accuracy and/or comprehensibility of an utterance. The results indicated that H.M. produced seven types of major errors reliably more often than controls: substitutions, omissions, additions, transpositions, reading errors, free associations, and accuracy errors. These results contradict recent claims that H.M. retains unconscious or implicit language abilities and produces spoken discourse that is "sophisticated," "intact" and "without major errors." Study 3 examined whether three classical types of errors (omissions, additions, and substitutions of words and phrases) differed for H.M. versus controls in basic nature and relative frequency by error type. The results indicated that omissions, and especially multi-word omissions, were relatively more common for H.M. than the controls; and substitutions violated the syntactic class regularity (whereby, e.g., nouns substitute with nouns but not verbs) relatively more often for H.M. than the controls. These results suggest that H.M.'s medial temporal lobe damage impaired his ability to rapidly form new connections between units in the cortex, a process necessary to form complete and coherent internal representations for novel sentence-level plans. In short, different brain mechanisms underlie H.M.'s major errors (which reflect incomplete and incoherent sentence-level plans) versus everyday slips-of-the tongue (which reflect errors in activating pre-planned units in fully intact sentence-level plans). Implications of the results of Studies 1-3 are discussed for systems theory, binding theory and relational memory theories. Copyright © 2010 Elsevier Srl. All rights reserved.
Force-Time Entropy of Isometric Impulse.
Hsieh, Tsung-Yu; Newell, Karl M
2016-01-01
The relation between force and temporal variability in discrete impulse production has been viewed as independent (R. A. Schmidt, H. Zelaznik, B. Hawkins, J. S. Frank, & J. T. Quinn, 1979 ) or dependent on the rate of force (L. G. Carlton & K. M. Newell, 1993 ). Two experiments in an isometric single finger force task investigated the joint force-time entropy with (a) fixed time to peak force and different percentages of force level and (b) fixed percentage of force level and different times to peak force. The results showed that the peak force variability increased either with the increment of force level or through a shorter time to peak force that also reduced timing error variability. The peak force entropy and entropy of time to peak force increased on the respective dimension as the parameter conditions approached either maximum force or a minimum rate of force production. The findings show that force error and timing error are dependent but complementary when considered in the same framework with the joint force-time entropy at a minimum in the middle parameter range of discrete impulse.
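For readers unfamiliar with the measure, a joint force-time entropy can be estimated by binning the two outcome variables across trials and summing -p*log2(p); the trial data and bin count below are invented for illustration, not those of the experiments:

```python
# Hedged sketch: histogram-based estimate of a joint force-time entropy.
import numpy as np

rng = np.random.default_rng(0)
peak_force = rng.normal(20.0, 2.0, size=500)      # N, hypothetical trials
time_to_peak = rng.normal(0.15, 0.02, size=500)   # s, hypothetical trials

hist, _, _ = np.histogram2d(peak_force, time_to_peak, bins=12)
p = hist / hist.sum()
joint_entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
print(f"joint force-time entropy ~ {joint_entropy:.2f} bits")
```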
Flexible sequential designs for multi-arm clinical trials.
Magirr, D; Stallard, N; Jaki, T
2014-08-30
Adaptive designs that are based on group-sequential approaches have the benefit of being efficient as stopping boundaries can be found that lead to good operating characteristics with test decisions based solely on sufficient statistics. The drawback of these so-called 'pre-planned adaptive' designs is that unexpected design changes are not possible without impacting the error rates. 'Flexible adaptive designs', on the other hand, can cope with a large number of contingencies at the cost of reduced efficiency. In this work, we focus on two different approaches for multi-arm multi-stage trials, which are based on group-sequential ideas, and discuss how these 'pre-planned adaptive designs' can be modified to allow for flexibility. We then show how the added flexibility can be used for treatment selection and sample size reassessment and evaluate the impact on the error rates in a simulation study. The results show that an impressive overall procedure can be found by combining a well-chosen pre-planned design with an application of the conditional error principle to allow flexible treatment selection. Copyright © 2014 John Wiley & Sons, Ltd.
Contributions of the cerebellum and the motor cortex to acquisition and retention of motor memories
Herzfeld, David J.; Pastor, Damien; Haith, Adrian M.; Rossetti, Yves; Shadmehr, Reza; O’Shea, Jacinta
2014-01-01
We investigated the contributions of the cerebellum and the motor cortex (M1) to acquisition and retention of human motor memories in a force field reaching task. We found that anodal transcranial direct current stimulation (tDCS) of the cerebellum, a technique that is thought to increase neuronal excitability, increased the ability to learn from error and form an internal model of the field, while cathodal cerebellar stimulation reduced this error-dependent learning. In addition, cathodal cerebellar stimulation disrupted the ability to respond to error within a reaching movement, reducing the gain of the sensory-motor feedback loop. By contrast, anodal M1 stimulation had no significant effects on these variables. During sham stimulation, early in training the acquired motor memory exhibited rapid decay in error-clamp trials. With further training the rate of decay decreased, suggesting that with training the motor memory was transformed from a labile to a more stable state. Surprisingly, neither cerebellar nor M1 stimulation altered these decay patterns. Participants returned 24 hours later and were re-tested in error-clamp trials without stimulation. The cerebellar group that had learned the task with cathodal stimulation exhibited significantly impaired retention, and retention was not improved by M1 anodal stimulation. In summary, non-invasive cerebellar stimulation resulted in polarity-dependent up- or down-regulation of error-dependent motor learning. In addition, cathodal cerebellar stimulation during acquisition impaired the ability to retain the motor memory overnight. Thus, in the force field task we found a critical role for the cerebellum in both formation of motor memory and its retention. PMID:24816533
Sediment transport primer: estimating bed-material transport in gravel-bed rivers
Peter Wilcock; John Pitlick; Yantao Cui
2009-01-01
This primer accompanies the release of BAGS, software developed to calculate sediment transport rate in gravel-bed rivers. BAGS and other programs facilitate calculation and can reduce some errors, but cannot ensure that calculations are accurate or relevant. This primer was written to help the software user define relevant and tractable problems, select appropriate...
NASA Technical Reports Server (NTRS)
Gundy-Burlet, Karen
2003-01-01
The Neural Flight Control System (NFCS) was developed to address the need for control systems that can be produced and tested at lower cost, easily adapted to prototype vehicles and for flight systems that can accommodate damaged control surfaces or changes to aircraft stability and control characteristics resulting from failures or accidents. NFCS utilizes a neural network-based flight control algorithm which automatically compensates for a broad spectrum of unanticipated damage or failures of an aircraft in flight. Pilot stick and rudder pedal inputs are fed into a reference model which produces pitch, roll and yaw rate commands. The reference model frequencies and gains can be set to provide handling quality characteristics suitable for the aircraft of interest. The rate commands are used in conjunction with estimates of the aircraft's stability and control (S&C) derivatives by a simplified Dynamic Inverse controller to produce virtual elevator, aileron and rudder commands. These virtual surface deflection commands are optimally distributed across the aircraft's available control surfaces using linear programming theory. Sensor data is compared with the reference model rate commands to produce an error signal. A Proportional/Integral (PI) error controller "winds up" on the error signal and adds an augmented command to the reference model output with the effect of zeroing the error signal. In order to provide more consistent handling qualities for the pilot, neural networks learn the behavior of the error controller and add in the augmented command before the integrator winds up. In the case of damage sufficient to affect the handling qualities of the aircraft, an Adaptive Critic is utilized to reduce the reference model frequencies and gains to stay within a flyable envelope of the aircraft.
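The PI error-controller idea described above can be sketched for a single axis; the gains, the first-order response model, and the signal values are invented for illustration and this is not the NFCS implementation:

```python
# Minimal sketch of a proportional/integral (PI) error controller that augments a
# reference-model rate command until the tracking error is driven to zero.
class PIErrorController:
    def __init__(self, kp=0.8, ki=2.0, dt=0.02):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def augment(self, commanded_rate, measured_rate):
        error = commanded_rate - measured_rate
        self.integral += error * self.dt          # the integrator "winds up" on the error
        return self.kp * error + self.ki * self.integral

pi = PIErrorController()
measured = 0.0
for step in range(5):
    augmented_cmd = 1.0 + pi.augment(commanded_rate=1.0, measured_rate=measured)
    measured += 0.3 * (augmented_cmd - measured)   # toy first-order aircraft response
    print(f"step {step}: command {augmented_cmd:.2f}, response {measured:.2f}")
```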
Wonnemann, Meinolf; Frömke, Cornelia; Koch, Armin
2015-01-01
We investigated different evaluation strategies for bioequivalence trials with highly variable drugs on their resulting empirical type I error and empirical power. The classical 'unscaled' crossover design with average bioequivalence evaluation, the Add-on concept of the Japanese guideline, and the current 'scaling' approach of EMA were compared. Simulation studies were performed based on the assumption of a single dose drug administration while changing the underlying intra-individual variability. Inclusion of Add-on subjects following the Japanese concept led to slight increases of the empirical α-error (≈7.5%). For the approach of EMA we noted an unexpected tremendous increase of the rejection rate at a geometric mean ratio of 1.25. Moreover, we detected error rates slightly above the pre-set limit of 5% even at the proposed 'scaled' bioequivalence limits. With the classical 'unscaled' approach and the Japanese guideline concept the goal of reduced subject numbers in bioequivalence trials of HVDs cannot be achieved. On the other hand, widening the acceptance range comes at the price that quite a number of products will be accepted bioequivalent that had not been accepted in the past. A two-stage design with control of the global α therefore seems the better alternative.
Turtle: identifying frequent k-mers with cache-efficient algorithms.
Roy, Rajat Shuvro; Bhattacharya, Debashish; Schliep, Alexander
2014-07-15
Counting the frequencies of k-mers in read libraries is often a first step in the analysis of high-throughput sequencing data. Infrequent k-mers are assumed to be a result of sequencing errors. The frequent k-mers constitute a reduced but error-free representation of the experiment, which can inform read error correction or serve as the input to de novo assembly methods. Ideally, the memory requirement for counting should be linear in the number of frequent k-mers and not in the, typically much larger, total number of k-mers in the read library. We present a novel method that balances time, space and accuracy requirements to efficiently extract frequent k-mers even for high-coverage libraries and large genomes such as human. Our method is designed to minimize cache misses in a cache-efficient manner by using a pattern-blocked Bloom filter to remove infrequent k-mers from consideration in combination with a novel sort-and-compact scheme, instead of a hash, for the actual counting. Although this increases theoretical complexity, the savings in cache misses reduce the empirical running times. A variant of the method can resort to a counting Bloom filter for even larger savings in memory at the expense of false-negative rates in addition to the false-positive rates common to all Bloom filter-based approaches. A comparison with the state-of-the-art shows reduced memory requirements and running times. The tools are freely available for download at http://bioinformatics.rutgers.edu/Software/Turtle and http://figshare.com/articles/Turtle/791582. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
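The pre-filtering idea, letting only k-mers that have been seen before reach the exact counting structure, can be sketched as follows. This uses a plain Bloom filter and a Python dict rather than Turtle's pattern-blocked filter and sort-and-compact counter, so it illustrates the strategy, not the tool:

```python
# Sketch: a Bloom filter records first sightings; a k-mer is only counted exactly
# once it has (probably) been seen before, so most error-induced singleton k-mers
# never enter the count table.
import hashlib
from collections import defaultdict

class BloomFilter:
    def __init__(self, size=1 << 20, hashes=3):
        self.size, self.hashes = size, hashes
        self.bits = bytearray(size // 8 + 1)

    def _positions(self, item):
        for i in range(self.hashes):
            h = hashlib.blake2b(item.encode(), digest_size=8, salt=bytes([i])).digest()
            yield int.from_bytes(h, "little") % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

def count_frequent_kmers(reads, k=5):
    seen_once, counts = BloomFilter(), defaultdict(int)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            if kmer in seen_once:                       # seen before (up to false positives)
                counts[kmer] += 1 if kmer in counts else 2
            else:
                seen_once.add(kmer)
    return counts

reads = ["ACGTACGTACGT", "ACGTACGTACGA", "TTTTTGGGGGCC"]
print(count_frequent_kmers(reads))
```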
Dispensing error rate after implementation of an automated pharmacy carousel system.
Oswald, Scott; Caldwell, Richard
2007-07-01
A study was conducted to determine filling and dispensing error rates before and after the implementation of an automated pharmacy carousel system (APCS). The study was conducted in a 613-bed acute and tertiary care university hospital. Before the implementation of the APCS, filling and dispensing rates were recorded during October through November 2004 and January 2005. Postimplementation data were collected during May through June 2006. Errors were recorded in three areas of pharmacy operations: first-dose or missing medication fill, automated dispensing cabinet fill, and interdepartmental request fill. A filling error was defined as an error caught by a pharmacist during the verification step. A dispensing error was defined as an error caught by a pharmacist observer after verification by the pharmacist. Before implementation of the APCS, 422 first-dose or missing medication orders were observed between October 2004 and January 2005. Independent data collected in December 2005, approximately six weeks after the introduction of the APCS, found that filling and error rates had increased. The filling rate for automated dispensing cabinets was associated with the largest decrease in errors. Filling and dispensing error rates had decreased by December 2005. In terms of interdepartmental request fill, no dispensing errors were noted in 123 clinic orders dispensed before the implementation of the APCS. One dispensing error out of 85 clinic orders was identified after implementation of the APCS. The implementation of an APCS at a university hospital decreased medication filling errors related to automated cabinets only and did not affect other filling and dispensing errors.
Niland, Courtney N.; Jankowsky, Eckhard; Harris, Michael E.
2016-01-01
Quantification of the specificity of RNA binding proteins and RNA processing enzymes is essential to understanding their fundamental roles in biological processes. High Throughput Sequencing Kinetics (HTS-Kin) uses high throughput sequencing and internal competition kinetics to simultaneously monitor the processing rate constants of thousands of substrates by RNA processing enzymes. This technique has provided unprecedented insight into the substrate specificity of the tRNA processing endonuclease ribonuclease P. Here, we investigate the accuracy and robustness of measurements associated with each step of the HTS-Kin procedure. We examine the effect of substrate concentration on the observed rate constant, determine the optimal kinetic parameters, and provide guidelines for reducing error in amplification of the substrate population. Importantly, we find that high-throughput sequencing and experimental reproducibility contribute their own sources of error, and these are the main sources of imprecision in the quantified results when otherwise optimized guidelines are followed. PMID:27296633
Critical Findings: Attempts at Reducing Notification Errors.
Shahriari, Mona; Liu, Li; Yousem, David M
2016-11-01
Ineffective communication of critical findings (CFs) is a patient safety issue. The aim of this study was to assess whether a feedback program for faculty members failing to correctly report CFs would lead to improved compliance. Fifty randomly selected reports were reviewed by the chief of neuroradiology each month for 42 months. Errors included (1) not calling for a CF, (2) not identifying a CF as such, (3) mischaracterizing non-CFs as CFs, and (4) calling for non-CFs. The number of appropriately handled and mishandled reports in each month was recorded. The trend of error reduction after the division chief provided feedback in the subsequent months was evaluated, and the equality of time interval between errors was tested. Among 2,100 reports, 49 (2.3%) were handled inappropriately. Among non-CF reports, 98.97% (1,817 of 1,836) were appropriately not called and not flagged, and 88.64% (234 of 264) of CF reports were called and flagged appropriately. The error rate during the 11th through 32nd months of review (1.28%) was significantly lower than the error rate in the first 10 months of review (3.98%) (P = .001). This benefit lasted for 21 months. Review and giving feedback to radiologists increased their compliance with the CF protocol and decreased deviations from standard operating procedures for about 2 years (from month 10 to month 32). Developing new ideas for improving CF policy compliance may be required at 2- to 3-year intervals to provide continuous quality improvement. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.
McDonnell, Mark D.; Tissera, Migel D.; Vladusich, Tony; van Schaik, André; Tapson, Jonathan
2015-01-01
Recent advances in training deep (multi-layer) architectures have inspired a renaissance in neural network use. For example, deep convolutional networks are becoming the default option for difficult tasks on large datasets, such as image and speech recognition. However, here we show that error rates below 1% on the MNIST handwritten digit benchmark can be replicated with shallow non-convolutional neural networks. This is achieved by training such networks using the ‘Extreme Learning Machine’ (ELM) approach, which also enables a very rapid training time (∼ 10 minutes). Adding distortions, as is common practise for MNIST, reduces error rates even further. Our methods are also shown to be capable of achieving less than 5.5% error rates on the NORB image database. To achieve these results, we introduce several enhancements to the standard ELM algorithm, which individually and in combination can significantly improve performance. The main innovation is to ensure each hidden-unit operates only on a randomly sized and positioned patch of each image. This form of random ‘receptive field’ sampling of the input ensures the input weight matrix is sparse, with about 90% of weights equal to zero. Furthermore, combining our methods with a small number of iterations of a single-batch backpropagation method can significantly reduce the number of hidden-units required to achieve a particular performance. Our close to state-of-the-art results for MNIST and NORB suggest that the ease of use and accuracy of the ELM algorithm for designing a single-hidden-layer neural network classifier should cause it to be given greater consideration either as a standalone method for simpler problems, or as the final classification stage in deep neural networks applied to more difficult problems. PMID:26262687
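A stripped-down version of the approach (random sparse "receptive field" input weights, a tanh hidden layer, and output weights solved in closed form by ridge regression) is sketched below on toy data standing in for MNIST; the paper's further enhancements, such as distortions and single-batch backpropagation, are not included:

```python
# Sketch of a single-hidden-layer ELM classifier with random receptive-field weights.
import numpy as np

rng = np.random.default_rng(0)

def random_receptive_weights(n_hidden, side=28, min_patch=6, max_patch=20):
    W = np.zeros((n_hidden, side * side))
    for j in range(n_hidden):
        size = rng.integers(min_patch, max_patch + 1)
        r0 = rng.integers(0, side - size + 1)
        c0 = rng.integers(0, side - size + 1)
        mask = np.zeros((side, side), dtype=bool)
        mask[r0:r0 + size, c0:c0 + size] = True          # most weights stay zero (sparse)
        W[j, mask.ravel()] = rng.normal(size=mask.sum())
    return W

def train_elm(X, y_onehot, n_hidden=500, ridge=1e-2):
    W = random_receptive_weights(n_hidden)
    H = np.tanh(X @ W.T)                                  # hidden activations
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ y_onehot)
    return W, beta

def predict(X, W, beta):
    return np.argmax(np.tanh(X @ W.T) @ beta, axis=1)

# Toy "images": two classes distinguished by the mean of the first pixels
X = rng.random((400, 28 * 28))
labels = (X[:, :100].mean(axis=1) > 0.5).astype(int)
Y = np.eye(2)[labels]
W, beta = train_elm(X, Y)
print("training error rate:", np.mean(predict(X, W, beta) != labels))
```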
Increased instrument intelligence--can it reduce laboratory error?
Jekelis, Albert W
2005-01-01
Recent literature has focused on the reduction of laboratory errors and the potential impact on patient management. This study assessed the intelligent, automated preanalytical process-control abilities in newer generation analyzers as compared with older analyzers and the impact on error reduction. Three generations of immuno-chemistry analyzers were challenged with pooled human serum samples for a 3-week period. One of the three analyzers had an intelligent process of fluidics checks, including bubble detection. Bubbles can cause erroneous results due to incomplete sample aspiration. This variable was chosen because it is the most easily controlled sample defect that can be introduced. Traditionally, lab technicians have had to visually inspect each sample for the presence of bubbles. This is time-consuming and introduces the possibility of human error. Instruments with bubble detection may be able to eliminate the human factor and reduce errors associated with the presence of bubbles. Specific samples were vortexed daily to introduce a visible quantity of bubbles, then immediately placed in the daily run. Errors were defined as a reported result greater than three standard deviations below the mean and associated with incomplete sample aspiration of the analyte of the individual analyzer. Three standard deviations represented the target limits of proficiency testing. The results of the assays were examined for accuracy and precision. Efficiency, measured as process throughput, was also measured to associate a cost factor and potential impact of the error detection on the overall process. The analyzers' performance stratified according to their level of internal process control. The older analyzers without bubble detection reported 23 erred results. The newest analyzer with bubble detection reported one specimen incorrectly. The precision and accuracy of the nonvortexed specimens were excellent and acceptable for all three analyzers. No errors were found in the nonvortexed specimens. There were no significant differences in overall process time for any of the analyzers when tests were arranged in an optimal configuration. The analyzer with advanced fluidic intelligence demonstrated the greatest ability to appropriately deal with an incomplete aspiration by not processing and reporting a result for the sample. This study suggests that preanalytical process-control capabilities could reduce errors. By association, it implies that similar intelligent process controls could favorably impact the error rate and, in the case of this instrument, do it without negatively impacting process throughput. Other improvements may be realized as a result of having an intelligent error-detection process including further reduction in misreported results, fewer repeats, less operator intervention, and less reagent waste.
NASA Astrophysics Data System (ADS)
Mena, Marcelo Andres
During 2004 and 2006 the University of Iowa provided air quality forecast support for flight planning of the ICARTT and MILAGRO field campaigns. A method for improvement of model performance in comparison to observations is shown. The method allows identifying sources of model error from boundary conditions and emissions inventories. Simultaneous analysis of horizontal interpolation of model error and error covariance showed that error in ozone modeling is highly correlated to the error of its precursors, and that there is geographical correlation also. During ICARTT ozone modeling error was improved by updating the National Emissions Inventory from 1999 to 2001, and furthermore by updating large point source emissions from continuous monitoring data. Further improvements were achieved by reducing area emissions of NOx by 60% for states in the Southeast United States. Ozone error was highly correlated to NOy error during this campaign. Also, ozone production in the United States was most sensitive to NOx emissions. During MILAGRO, model performance in terms of correlation coefficients was higher, but model error in ozone modeling was high due to overestimation of NOx and VOC emissions in Mexico City during forecasting. Large model improvements were shown by decreasing NOx emissions in Mexico City by 50% and VOC by 60%. Recurring ozone error is spatially correlated to CO and NOy error. Sensitivity studies show that Mexico City aerosol can reduce regional photolysis rates by 40% and ozone formation by 5-10%. Mexico City emissions can enhance NOy and O3 concentrations over the Gulf of Mexico by up to 10-20%. Mexico City emissions can convert regional ozone production regimes from VOC to NOx limited. A method of interpolation of observations along flight tracks is shown, which can be used to infer the direction of outflow plumes. Ratios such as O3/NOy and NOx/NOy can be used to provide information on chemical characteristics of the plume, such as age and ozone production regime. Interpolated MTBE observations can be used as a tracer of urban mobile source emissions. Finally, procedures for estimating and gridding emissions inventories in Brazil and Mexico are presented.
Using Gaussian mixture models to detect and classify dolphin whistles and pulses.
Peso Parada, Pablo; Cardenal-López, Antonio
2014-06-01
In recent years, a number of automatic detection systems for free-ranging cetaceans have been proposed that aim to detect not just surfaced, but also submerged, individuals. These systems are typically based on pattern-recognition techniques applied to underwater acoustic recordings. Using a Gaussian mixture model, a classification system was developed that detects sounds in recordings and classifies them as one of four types: background noise, whistles, pulses, and combined whistles and pulses. The classifier was tested using a database of underwater recordings made off the Spanish coast during 2011. Using cepstral-coefficient-based parameterization, a sound detection rate of 87.5% was achieved for a 23.6% classification error rate. To improve these results, two parameters computed using the multiple signal classification algorithm and an unpredictability measure were included in the classifier. These parameters, which helped to classify the segments containing whistles, increased the detection rate to 90.3% and reduced the classification error rate to 18.1%. Finally, the potential of the multiple signal classification algorithm and unpredictability measure for estimating whistle contours and classifying cetacean species was also explored, with promising results.
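The classification idea (one Gaussian mixture model per class over cepstral-style features, with segments labelled by the highest-scoring model) can be sketched with scikit-learn. The features here are random stand-ins, not real MFCCs from the recordings, and the class structure is invented:

```python
# Hedged sketch: per-class GMMs scored by log-likelihood to label a segment.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
classes = ["noise", "whistle", "pulse", "whistle+pulse"]

# Hypothetical training data: 200 segments per class, 13-dimensional feature vectors
train = {c: rng.normal(loc=i, scale=1.0, size=(200, 13)) for i, c in enumerate(classes)}

models = {c: GaussianMixture(n_components=4, covariance_type="diag",
                             random_state=0).fit(X)
          for c, X in train.items()}

def classify(segment_features):
    scores = {c: m.score(segment_features.reshape(1, -1)) for c, m in models.items()}
    return max(scores, key=scores.get)

test_segment = rng.normal(loc=1.0, scale=1.0, size=13)   # should look like "whistle"
print(classify(test_segment))
```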
Designing robust watermark barcodes for multiplex long-read sequencing.
Ezpeleta, Joaquín; Krsticevic, Flavia J; Bulacio, Pilar; Tapia, Elizabeth
2017-03-15
To attain acceptable sample misassignment rates, current approaches to multiplex single-molecule real-time sequencing require upstream quality improvement, which is obtained from multiple passes over the sequenced insert and significantly reduces the effective read length. In order to fully exploit the raw read length on multiplex applications, robust barcodes capable of dealing with the full single-pass error rates are needed. We present a method for designing sequencing barcodes that can withstand a large number of insertion, deletion and substitution errors and are suitable for use in multiplex single-molecule real-time sequencing. The manuscript focuses on the design of barcodes for full-length single-pass reads, impaired by challenging error rates in the order of 11%. The proposed barcodes can multiplex hundreds or thousands of samples while achieving sample misassignment probabilities as low as 10^-7 under the above conditions, and are designed to be compatible with chemical constraints imposed by the sequencing process. Software tools for constructing watermark barcode sets and demultiplexing barcoded reads, together with example sets of barcodes and synthetic barcoded reads, are freely available at www.cifasis-conicet.gov.ar/ezpeleta/NS-watermark . ezpeleta@cifasis-conicet.gov.ar. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
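The demultiplexing problem being solved, assigning a read to a barcode despite insertions, deletions and substitutions, can be illustrated with plain Levenshtein matching; this is a simplification, not the paper's watermark decoding, and the barcode sequences, lengths and thresholds below are invented:

```python
# Simplified indel/substitution-tolerant demultiplexing sketch.
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def demultiplex(read: str, barcodes: dict, bc_len=16, max_dist=3):
    prefix = read[:bc_len + max_dist]                 # allow room for insertions
    dists = {name: min(levenshtein(prefix[:bc_len + d], bc) for d in (-2, 0, 2))
             for name, bc in barcodes.items()}
    best = min(dists, key=dists.get)
    return best if dists[best] <= max_dist else None  # reject ambiguous reads

barcodes = {"sample_A": "ACGTACGTACGTACGT", "sample_B": "TTGGCCAATTGGCCAA"}
read = "ACGTACGACGTACGTAGGGTTTCCCAAATTT"              # one deletion in the barcode region
print(demultiplex(read, barcodes))                    # -> sample_A
```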
Long, Elliot; Fitzpatrick, Patrick; Cincotta, Domenic R; Grindlay, Joanne; Barrett, Michael Joseph
2016-01-27
Safety of emergency intubation may be improved by standardising equipment preparation; the efficacy of cognitive aids is unknown. This randomised controlled trial compared no cognitive aid (control) with the use of a checklist or picture template for emergency airway equipment preparation in the Emergency Department of The Royal Children's Hospital, Melbourne. Sixty-three participants were recruited, 21 randomised to each group. Equal numbers of nursing, junior medical, and senior medical staff were included in each group. Compared to controls, the checklist or template group had significantly lower equipment omission rates (median 30% IQR 20-40% control, median 10% IQR 5-10% checklist, median 10% IQR 5-20% template; p < 0.05). The combined omission rate and sizing error rate was lower using a checklist or template (median 35% IQR 30-45% control, median 15% IQR 10-20% checklist, median 15% IQR 10-30% template; p < 0.05). The template group had less variation in equipment location compared to checklist or controls. There was no significant difference in preparation time in controls (mean 3 min 14 s sd 56 s) compared to checklist (mean 3 min 46 s sd 1 min 15 s) or template (mean 3 min 6 s sd 49 s; p = 0.06). Template use reduces variation in airway equipment location during preparation for emergency intubation, with an equivalent reduction in equipment omission rate to the use of a checklist. The use of a template for equipment preparation and a checklist for team, patient, and monitoring preparation may provide the best combination of both cognitive aids. The use of a cognitive aid for emergency airway equipment preparation reduces errors of omission. Template utilisation reduces variation in equipment location. Australian and New Zealand Trials Registry (ACTRN12615000541505).
Drug error in paediatric anaesthesia: current status and where to go now.
Anderson, Brian J
2018-06-01
Medication errors in paediatric anaesthesia and the perioperative setting continue to occur despite widespread recognition of the problem and published advice for reduction of this predicament at international, national, local and individual levels. Current literature was reviewed to ascertain drug error rates and to appraise causes and proposed solutions to reduce these errors. The medication error incidence remains high. There is documentation of reduction through identification of causes with consequent education and application of safety analytics and quality improvement programs in anaesthesia departments. Children remain at higher risk than adults because of additional complexities such as drug dose calculations, increased susceptibility to some adverse effects and changes associated with growth and maturation. Major improvements are best made through institutional system changes rather than a commitment to do better on the part of each practitioner. Medication errors in paediatric anaesthesia represent an important risk to children and most are avoidable. There is now an understanding of the genesis of adverse drug events and this understanding should facilitate the implementation of known effective countermeasures. An institution-wide commitment and strategy are the basis for a worthwhile and sustained improvement in medication safety.
Using warnings to reduce categorical false memories in younger and older adults.
Carmichael, Anna M; Gutchess, Angela H
2016-07-01
Warnings about memory errors can reduce their incidence, although past work has largely focused on associative memory errors. The current study sought to explore whether warnings could be tailored to specifically reduce false recall of categorical information in both younger and older populations. Before encoding word pairs designed to induce categorical false memories, half of the younger and older participants were warned to avoid committing these types of memory errors. Older adults who received a warning committed fewer categorical memory errors, as well as other types of semantic memory errors, than those who did not receive a warning. In contrast, young adults' memory errors did not differ for the warning versus no-warning groups. Our findings provide evidence for the effectiveness of warnings at reducing categorical memory errors in older adults, perhaps by supporting source monitoring, reduction in reliance on gist traces, or through effective metacognitive strategies.
Differential detection in quadrature-quadrature phase shift keying (Q2PSK) systems
NASA Astrophysics Data System (ADS)
El-Ghandour, Osama M.; Saha, Debabrata
1991-05-01
A generalized quadrature-quadrature phase shift keying (Q2PSK) signaling format is considered for differential encoding and differential detection. Performance in the presence of additive white Gaussian noise (AWGN) is analyzed. Symbol error rate is found to be approximately twice the symbol error rate in a quaternary DPSK system operating at the same Eb/N0. However, the bandwidth efficiency of differential Q2PSK is substantially higher than that of quaternary DPSK. When the error is due to AWGN, the ratio of double error rate to single error rate can be very high, and the ratio may approach zero at high SNR. To improve error rate, differential detection through maximum-likelihood decoding based on multiple or N symbol observations is considered. If N and SNR are large this decoding gives a 3-dB advantage in error rate over conventional N = 2 differential detection, fully recovering the energy loss (as compared to coherent detection) if the observation is extended to a large number of symbol durations.
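For intuition about the error rates being compared, a Monte Carlo of conventional differential detection for ordinary quaternary DPSK in AWGN is easy to run; Q2PSK itself (with its two quadrature pulse shapes) is not simulated here, and the symbol counts and Eb/N0 values are arbitrary:

```python
# Monte Carlo sketch: symbol error rate of differentially detected QPSK in AWGN.
import numpy as np

rng = np.random.default_rng(1)

def dqpsk_symbol_error_rate(ebn0_db, n_symbols=200_000):
    ebn0 = 10 ** (ebn0_db / 10)
    es_n0 = 2 * ebn0                              # 2 bits per QPSK symbol
    data = rng.integers(0, 4, n_symbols)
    phases = np.cumsum(data) * (np.pi / 2)        # differential encoding
    tx = np.exp(1j * phases)
    noise_std = np.sqrt(1 / (2 * es_n0))          # per-dimension noise std for unit Es
    rx = tx + noise_std * (rng.normal(size=n_symbols) + 1j * rng.normal(size=n_symbols))
    diff = np.angle(rx[1:] * np.conj(rx[:-1]))    # phase difference between symbols
    decisions = np.round(diff / (np.pi / 2)) % 4
    return np.mean(decisions != data[1:])

for ebn0_db in (6, 8, 10, 12):
    print(f"Eb/N0 = {ebn0_db:2d} dB -> SER ~ {dqpsk_symbol_error_rate(ebn0_db):.4f}")
```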
A comparison of acoustic montoring methods for common anurans of the northeastern United States
Brauer, Corinne; Donovan, Therese; Mickey, Ruth M.; Katz, Jonathan; Mitchell, Brian R.
2016-01-01
Many anuran monitoring programs now include autonomous recording units (ARUs). These devices collect audio data for extended periods of time with little maintenance and at sites where traditional call surveys might be difficult. Additionally, computer software programs have grown increasingly accurate at automatically identifying the calls of species. However, increased automation may cause increased error. We collected 435 min of audio data with 2 types of ARUs at 10 wetland sites in Vermont and New York, USA, from 1 May to 1 July 2010. For each minute, we determined presence or absence of 4 anuran species (Hyla versicolor, Pseudacris crucifer, Anaxyrus americanus, and Lithobates clamitans) using 1) traditional human identification versus 2) computer-mediated identification with software package, Song Scope® (Wildlife Acoustics, Concord, MA). Detections were compared with a data set consisting of verified calls in order to quantify false positive, false negative, true positive, and true negative rates. Multinomial logistic regression analysis revealed a strong (P < 0.001) 3-way interaction between the ARU recorder type, identification method, and focal species, as well as a trend in the main effect of rain (P = 0.059). Overall, human surveyors had the lowest total error rate (<2%) compared with 18–31% total errors with automated methods. Total error rates varied by species, ranging from 4% for A. americanus to 26% for L. clamitans. The presence of rain may reduce false negative rates. For survey minutes where anurans were known to be calling, the odds of a false negative were increased when fewer individuals of the same species were calling.
Errors in fluid balance with pump control of continuous hemodialysis.
Roberts, M; Winney, R J
1992-02-01
The use of pumps both proximal and distal to the dialyzer during continuous hemodialysis provides control of dialysate and ultrafiltration flow rates, thereby reducing nursing time. However, we had noted unexpected severe extracellular fluid depletion suggesting that errors in pump delivery may be responsible. We measured in vitro the operation of various pumps under conditions similar to continuous hemodialysis. Fluid delivery of peristaltic and roller pumps varied with how the tubing set was inserted in the pump. Piston and peristaltic pumps with dedicated pump segments were more accurate. Pumps should be calibrated and tested under conditions simulating continuous hemodialysis prior to in vivo use.
On the Reliability of Photovoltaic Short-Circuit Current Temperature Coefficient Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Osterwald, Carl R.; Campanelli, Mark; Kelly, George J.
2015-06-14
The changes in short-circuit current of photovoltaic (PV) cells and modules with temperature are routinely modeled through a single parameter, the temperature coefficient (TC). This parameter is vital for the translation equations used in system sizing, yet in practice is very difficult to measure. In this paper, we discuss these inherent problems and demonstrate how they can introduce unacceptably large errors in PV ratings. A method for quantifying the spectral dependence of TCs is derived, and then used to demonstrate that databases of module parameters commonly contain values that are physically unreasonable. Possible ways to reduce measurement errors are also discussed.
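In practice the coefficient is typically extracted as the slope of a linear fit of Isc against temperature, normalized to Isc at 25 °C; a minimal sketch with invented measurements (not the NREL method or data):

```python
# Hedged sketch: alpha_Isc from a linear fit of measured Isc versus cell temperature.
import numpy as np

temps_c = np.array([15.0, 25.0, 35.0, 45.0, 55.0])
isc_a = np.array([9.055, 9.10, 9.148, 9.19, 9.240])    # measured Isc (A), hypothetical

slope, intercept = np.polyfit(temps_c, isc_a, 1)        # A per deg C
isc_25 = slope * 25.0 + intercept
tc_percent_per_c = 100.0 * slope / isc_25
print(f"alpha_Isc ~ {tc_percent_per_c:.3f} %/degC")     # typically around +0.05 %/degC
```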
Robust vector quantization for noisy channels
NASA Technical Reports Server (NTRS)
Demarca, J. R. B.; Farvardin, N.; Jayant, N. S.; Shoham, Y.
1988-01-01
The paper briefly discusses techniques for making vector quantizers more tolerant to transmission errors. Two algorithms are presented for obtaining an efficient binary word assignment to the vector quantizer codewords without increasing the transmission rate. It is shown that about 4.5 dB gain over random assignment can be achieved with these algorithms. It is also proposed to reduce the effects of error propagation in vector-predictive quantizers by appropriately constraining the response of the predictive loop. The constrained system is shown to have about 4 dB of SNR gain over an unconstrained system in a noisy channel, with a small loss of clean-channel performance.
Decision feedback equalizer for holographic data storage.
Kim, Kyuhwan; Kim, Seung Hun; Koo, Gyogwon; Seo, Min Seok; Kim, Sang Woo
2018-05-20
Holographic data storage (HDS) has attracted much attention as a next-generation storage medium. Because HDS suffers from two-dimensional (2D) inter-symbol interference (ISI), the partial-response maximum-likelihood (PRML) method has been studied to reduce 2D ISI. However, the PRML method has various drawbacks. To solve the problems, we propose a modified decision feedback equalizer (DFE) for HDS. To prevent the error propagation problem, which is a typical problem in DFEs, we also propose a reliability factor for HDS. Various simulations were executed to analyze the performance of the proposed methods. The proposed methods showed fast processing speed after training, superior bit error rate performance, and consistency.
NASA Astrophysics Data System (ADS)
Mollenauer, Linn F.; Grant, Andrew; Liu, Xiang; Wei, Xing; Xie, Chongjin; Kang, Inuk
2003-11-01
In an all-Raman amplified, recirculating loop containing 100-km spans, we have tested dense wavelength-division multiplexing at 10 Gbits/s per channel, using dispersion-managed solitons and a novel, periodic-group-delay-complemented dispersion-compensation scheme that greatly reduces the timing jitter from interchannel collisions. The achieved working distances are ~9000 and ~20,000 km for uncorrected bit error rates of <10-8 and <10-3, respectively, the latter corresponding to the use of ``enhanced'' forward error correction; significantly, these distances are very close to those achievable in single-channel transmission in the same system.
Research on On-Line Modeling of Fed-Batch Fermentation Process Based on v-SVR
NASA Astrophysics Data System (ADS)
Ma, Yongjun
The fermentation process is highly complex and non-linear, and many parameters are not easy to measure directly on-line, so soft-sensor modeling is a good solution. This paper introduces v-support vector regression (v-SVR) for soft-sensor modeling of the fed-batch fermentation process. v-SVR is a novel type of learning machine that can control fitting accuracy and prediction error by adjusting the parameter v. An on-line training algorithm is discussed in detail to reduce the training complexity of v-SVR. The experimental results show that v-SVR has a low error rate and better generalization with an appropriate v.
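A minimal sketch of a v-SVR (nu-SVR) soft sensor is shown below using scikit-learn's NuSVR, whose `nu` parameter plays the role of the "v" discussed in the abstract. The synthetic data and parameter values are illustrative assumptions only, not the paper's on-line algorithm.

```python
# nu-SVR soft-sensor regression sketch with assumed synthetic data.
import numpy as np
from sklearn.svm import NuSVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 3))                  # on-line measurable inputs
y = np.sin(X[:, 0]) + 0.1 * X[:, 1] + 0.05 * rng.normal(size=200)  # target variable

model = make_pipeline(StandardScaler(), NuSVR(nu=0.5, C=10.0, kernel="rbf"))
model.fit(X[:150], y[:150])
print("test RMSE:", np.sqrt(np.mean((model.predict(X[150:]) - y[150:]) ** 2)))
```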
Merry, Alan F; Webster, Craig S; Hannam, Jacqueline; Mitchell, Simon J; Henderson, Robert; Reid, Papaarangi; Edwards, Kylie-Ellen; Jardim, Anisoara; Pak, Nick; Cooper, Jeremy; Hopley, Lara; Frampton, Chris; Short, Timothy G
2011-09-22
To clinically evaluate a new patented multimodal system (SAFERSleep) designed to reduce errors in the recording and administration of drugs in anaesthesia. Prospective randomised open label clinical trial. Five designated operating theatres in a major tertiary referral hospital. Eighty nine consenting anaesthetists managing 1075 cases in which there were 10,764 drug administrations. Use of the new system (which includes customised drug trays and purpose designed drug trolley drawers to promote a well organised anaesthetic workspace and aseptic technique; pre-filled syringes for commonly used anaesthetic drugs; large legible colour coded drug labels; a barcode reader linked to a computer, speakers, and touch screen to provide automatic auditory and visual verification of selected drugs immediately before each administration; automatic compilation of an anaesthetic record; an on-screen and audible warning if an antibiotic has not been administered within 15 minutes of the start of anaesthesia; and certain procedural rules-notably, scanning the label before each drug administration) versus conventional practice in drug administration with a manually compiled anaesthetic record. Primary: composite of errors in the recording and administration of intravenous drugs detected by direct observation and by detailed reconciliation of the contents of used drug vials against recorded administrations; and lapses in responding to an intermittent visual stimulus (vigilance latency task). Secondary: outcomes in patients; analyses of anaesthetists' tasks and assessments of workload; evaluation of the legibility of anaesthetic records; evaluation of compliance with the procedural rules of the new system; and questionnaire based ratings of the respective systems by participants. The overall mean rate of drug errors per 100 administrations was 9.1 (95% confidence interval 6.9 to 11.4) with the new system (one in 11 administrations) and 11.6 (9.3 to 13.9) with conventional methods (one in nine administrations) (P = 0.045 for difference). Most were recording errors, and, though fewer drug administration errors occurred with the new system, the comparison with conventional methods did not reach significance. Rates of errors in drug administration were lower when anaesthetists consistently applied two key principles of the new system (scanning the drug barcode before administering each drug and keeping the voice prompt active) than when they did not: mean 6.0 (3.1 to 8.8) errors per 100 administrations v 9.7 (8.4 to 11.1) respectively (P = 0.004). Lapses in the vigilance latency task occurred in 12% (58/471) of cases with the new system and 9% (40/473) with conventional methods (P = 0.052). The records generated by the new system were more legible, and anaesthetists preferred the new system, particularly in relation to long, complex, and emergency cases. There were no differences between new and conventional systems in respect of outcomes in patients or anaesthetists' workload. The new system was associated with a reduction in errors in the recording and administration of drugs in anaesthesia, attributable mainly to a reduction in recording errors. Automatic compilation of the anaesthetic record increased legibility but also increased lapses in a vigilance latency task and decreased time spent watching monitors. Trial registration Australian New Zealand Clinical Trials Registry No 12608000068369.
Executive Council lists and general practitioner files
Farmer, R. D. T.; Knox, E. G.; Cross, K. W.; Crombie, D. L.
1974-01-01
An investigation of the accuracy of general practitioner and Executive Council files was approached by a comparison of the two. High error rates were found, including both file errors and record errors. On analysis it emerged that file error rates could not be satisfactorily expressed except in a time-dimensioned way, and we were unable to do this within the context of our study. Record error rates and field error rates were expressible as proportions of the number of records on both the lists; 79·2% of all records exhibited non-congruencies and particular information fields had error rates ranging from 0·8% (assignation of sex) to 68·6% (assignation of civil state). Many of the errors, both field errors and record errors, were attributable to delayed updating of mutable information. It is concluded that the simple transfer of Executive Council lists to a computer filing system would not solve all the inaccuracies and would not in itself permit Executive Council registers to be used for any health care applications requiring high accuracy. For this it would be necessary to design and implement a purpose designed health care record system which would include, rather than depend upon, the general practitioner remuneration system. PMID:4816588
Westbrook, Johanna I.; Li, Ling; Lehnbom, Elin C.; Baysari, Melissa T.; Braithwaite, Jeffrey; Burke, Rosemary; Conn, Chris; Day, Richard O.
2015-01-01
Objectives To (i) compare medication errors identified at audit and observation with medication incident reports; (ii) identify differences between two hospitals in incident report frequency and medication error rates; (iii) identify prescribing error detection rates by staff. Design Audit of 3291 patient records at two hospitals to identify prescribing errors and evidence of their detection by staff. Medication administration errors were identified from a direct observational study of 180 nurses administering 7451 medications. Severity of errors was classified. Those likely to lead to patient harm were categorized as ‘clinically important’. Setting Two major academic teaching hospitals in Sydney, Australia. Main Outcome Measures Rates of medication errors identified from audit and from direct observation were compared with reported medication incident reports. Results A total of 12 567 prescribing errors were identified at audit. Of these 1.2/1000 errors (95% CI: 0.6–1.8) had incident reports. Clinically important prescribing errors (n = 539) were detected by staff at a rate of 218.9/1000 (95% CI: 184.0–253.8), but only 13.0/1000 (95% CI: 3.4–22.5) were reported. 78.1% (n = 421) of clinically important prescribing errors were not detected. A total of 2043 drug administrations (27.4%; 95% CI: 26.4–28.4%) contained ≥1 errors; none had an incident report. Hospital A had a higher frequency of incident reports than Hospital B, but a lower rate of errors at audit. Conclusions Prescribing errors with the potential to cause harm frequently go undetected. Reported incidents do not reflect the profile of medication errors which occur in hospitals or the underlying rates. This demonstrates the inaccuracy of using incident frequency to compare patient risk or quality performance within or across hospitals. New approaches including data mining of electronic clinical information systems are required to support more effective medication error detection and mitigation. PMID:25583702
Acetaminophen attenuates error evaluation in cortex.
Randles, Daniel; Kam, Julia W Y; Heine, Steven J; Inzlicht, Michael; Handy, Todd C
2016-06-01
Acetaminophen has recently been recognized as having impacts that extend into the affective domain. In particular, double blind placebo controlled trials have revealed that acetaminophen reduces the magnitude of reactivity to social rejection, frustration, dissonance and to both negatively and positively valenced attitude objects. Given this diversity of consequences, it has been proposed that the psychological effects of acetaminophen may reflect a widespread blunting of evaluative processing. We tested this hypothesis using event-related potentials (ERPs). Sixty-two participants received acetaminophen or a placebo in a double-blind protocol and completed the Go/NoGo task. Participants' ERPs were observed following errors on the Go/NoGo task, in particular the error-related negativity (ERN; measured at FCz) and error-related positivity (Pe; measured at Pz and CPz). Results show that acetaminophen inhibits the Pe, but not the ERN, and the magnitude of an individual's Pe correlates positively with omission errors, partially mediating the effects of acetaminophen on the error rate. These results suggest that recently documented affective blunting caused by acetaminophen may best be described as an inhibition of evaluative processing. They also contribute to the growing work suggesting that the Pe is more strongly associated with conscious awareness of errors relative to the ERN. © The Author (2016). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
Kalman Filter Estimation of Spinning Spacecraft Attitude using Markley Variables
NASA Technical Reports Server (NTRS)
Sedlak, Joseph E.; Harman, Richard
2004-01-01
There are several different ways to represent spacecraft attitude and its time rate of change. For spinning or momentum-biased spacecraft, one particular representation has been put forward as a superior parameterization for numerical integration. Markley has demonstrated that these new variables have fewer rapidly varying elements for spinning spacecraft than other commonly used representations and provide advantages when integrating the equations of motion. The current work demonstrates how a Kalman filter can be devised to estimate the attitude using these new variables. The seven Markley variables are subject to one constraint condition, making the error covariance matrix singular. The filter design presented here explicitly accounts for this constraint by using a six-component error state in the filter update step. The reduced dimension error state is unconstrained and its covariance matrix is nonsingular.
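A generic sketch of the kind of reduced-dimension error-state update the abstract describes is given below: the full state has seven components subject to one constraint, while the filter carries a nonsingular 6x6 error covariance. The projection matrix S and the measurement model here are illustrative assumptions only, not the actual Markley-variable formulation.

```python
# Reduced-dimension error-state Kalman measurement update (illustrative only).
import numpy as np

n_full, n_err = 7, 6
P = np.eye(n_err) * 0.01            # 6x6 error-state covariance (nonsingular)
H_full = np.zeros((3, n_full)); H_full[:, :3] = np.eye(3)    # toy measurement model
S = np.zeros((n_full, n_err)); S[:n_err, :] = np.eye(n_err)  # error-state to full-state map (assumed)
R = np.eye(3) * 1e-4                # measurement noise covariance
residual = np.array([1e-3, -2e-3, 5e-4])

H = H_full @ S                                    # measurement model in error-state space
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)      # Kalman gain (6x3)
dx = K @ residual                                 # 6-component error-state correction
P = (np.eye(n_err) - K @ H) @ P                   # covariance update stays 6x6

# dx would then be mapped back through S and the constraint re-imposed when
# correcting the 7-component state.
print(dx)
```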
Reach distance but not judgment error is associated with falls in older people.
Butler, Annie A; Lord, Stephen R; Fitzpatrick, Richard C
2011-08-01
Reaching is a vital action requiring precise motor coordination and attempting to reach for objects that are too far away can destabilize balance and result in falls and injury. This could be particularly important for many elderly people with age-related loss of sensorimotor function and a reduced ability to recover balance. Here, we investigate the interaction between reaching ability, errors in judging reach, and the incidence of falling (retrospectively and prospectively) in a large cohort of older people. Participants (n = 415, 70-90 years) had to estimate the furthest distance they could reach to retrieve a broomstick hanging in front of them. In an iterative dialog with the experimenter, the stick was moved until it was at the furthest distance they estimated to be reached successfully. At this point, participants were asked to attempt to retrieve the stick. Actual maximal reach was then measured. The difference between attempted reach and actual maximal reach provided a measure of judgment error. One-year retrospective fall rates were obtained at initial assessment and prospective falls were monitored by monthly calendar. Participants with poor maximal reach attempted shorter reaches than those who had good reaching ability. Those with the best reaching ability most accurately judged their maximal reach, whereas poor performers were dichotomous and either underestimated or overestimated their reach with few judging exactly. Fall rates were significantly associated with reach distance but not with reach judgment error. Maximal reach but not error in perceived reach is associated with falls in older people.
Lee, Eunjoo
2016-09-01
This study compared registered nurses' perceptions of safety climate and attitude toward medication error reporting before and after completing a hospital accreditation program. Medication errors are the most prevalent adverse events threatening patient safety; reducing underreporting of medication errors significantly improves patient safety. Safety climate in hospitals may affect medication error reporting. This study employed a longitudinal, descriptive design. Data were collected using questionnaires. A tertiary acute hospital in South Korea undergoing a hospital accreditation program. Nurses, pre- and post-accreditation (217 and 373); response rate: 58% and 87%, respectively. Hospital accreditation program. Perceived safety climate and attitude toward medication error reporting. The level of safety climate and attitude toward medication error reporting increased significantly following accreditation; however, measures of institutional leadership and management did not improve significantly. Participants' perception of safety climate was positively correlated with their attitude toward medication error reporting; this correlation strengthened following completion of the program. Improving hospitals' safety climate increased nurses' medication error reporting; interventions that help hospital administration and managers to provide more supportive leadership may facilitate safety climate improvement. Hospitals and their units should develop more friendly and intimate working environments that remove nurses' fear of penalties. Administration and managers should support nurses who report their own errors. © The Author 2016. Published by Oxford University Press in association with the International Society for Quality in Health Care. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Ultra-deep mutant spectrum profiling: improving sequencing accuracy using overlapping read pairs.
Chen-Harris, Haiyin; Borucki, Monica K; Torres, Clinton; Slezak, Tom R; Allen, Jonathan E
2013-02-12
High throughput sequencing is beginning to make a transformative impact in the area of viral evolution. Deep sequencing has the potential to reveal the mutant spectrum within a viral sample at high resolution, thus enabling the close examination of viral mutational dynamics both within- and between-hosts. The challenge however, is to accurately model the errors in the sequencing data and differentiate real viral mutations, particularly those that exist at low frequencies, from sequencing errors. We demonstrate that overlapping read pairs (ORP) -- generated by combining short fragment sequencing libraries and longer sequencing reads -- significantly reduce sequencing error rates and improve rare variant detection accuracy. Using this sequencing protocol and an error model optimized for variant detection, we are able to capture a large number of genetic mutations present within a viral population at ultra-low frequency levels (<0.05%). Our rare variant detection strategies have important implications beyond viral evolution and can be applied to any basic and clinical research area that requires the identification of rare mutations.
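A much-simplified sketch of the overlapping-read-pair idea follows: where the forward and reverse reads of a short fragment overlap, positions on which both reads agree are kept as consensus and disagreements are masked, so isolated single-read sequencing errors are not called as rare variants. Real pipelines also use base qualities and alignment; this toy (with assumed inputs) is not the authors' software.

```python
# Consensus calling over fully overlapping read pairs (toy example).
def consensus_from_overlap(read1: str, read2_revcomp: str) -> str:
    assert len(read1) == len(read2_revcomp), "assume the reads fully overlap"
    return "".join(b1 if b1 == b2 else "N"          # mask disagreements as 'N'
                   for b1, b2 in zip(read1, read2_revcomp))

print(consensus_from_overlap("ACGTTGCA", "ACGTAGCA"))   # -> "ACGTNGCA"
```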
NASA Astrophysics Data System (ADS)
Makrakis, Dimitrios; Mathiopoulos, P. Takis
A maximum likelihood sequential decoder for the reception of digitally modulated signals with single or multiamplitude constellations transmitted over a multiplicative, nonselective fading channel is derived. It is shown that its structure consists of a combination of envelope, multiple differential, and coherent detectors. The outputs of each of these detectors are jointly processed by means of an algorithm. This algorithm is presented in a recursive form. The derivation of the new receiver is general enough to accommodate uncoded as well as coded (e.g., trellis-coded) schemes. Performance evaluation results for a reduced-complexity trellis-coded QPSK system have demonstrated that the proposed receiver dramatically reduces the error floors caused by fading. At Eb/N0 = 20 dB the new receiver structure results in bit-error-rate reductions of more than three orders of magnitude compared to a conventional Viterbi receiver, while being reasonably simple to implement.
An audit of the global carbon budget: identifying and reducing sources of uncertainty
NASA Astrophysics Data System (ADS)
Ballantyne, A. P.; Tans, P. P.; Marland, G.; Stocker, B. D.
2012-12-01
Uncertainties in our carbon accounting practices may limit our ability to objectively verify emission reductions on regional scales. Furthermore uncertainties in the global C budget must be reduced to benchmark Earth System Models that incorporate carbon-climate interactions. Here we present an audit of the global C budget where we try to identify sources of uncertainty for major terms in the global C budget. The atmospheric growth rate of CO2 has increased significantly over the last 50 years, while the uncertainty in calculating the global atmospheric growth rate has been reduced from 0.4 ppm/yr to 0.2 ppm/yr (95% confidence). Although we have greatly reduced global CO2 growth rate uncertainties, there remain regions, such as the Southern Hemisphere, Tropics and Arctic, where changes in regional sources/sinks will remain difficult to detect without additional observations. Increases in fossil fuel (FF) emissions are the primary factor driving the increase in global CO2 growth rate; however, our confidence in FF emission estimates has actually gone down. Based on a comparison of multiple estimates, FF emissions have increased from 2.45 ± 0.12 PgC/yr in 1959 to 9.40 ± 0.66 PgC/yr in 2010. Major sources of increasing FF emission uncertainty are increased emissions from emerging economies, such as China and India, as well as subtle differences in accounting practices. Lastly, we evaluate emission estimates from Land Use Change (LUC). Although relative errors in emission estimates from LUC are quite high (2 sigma ~ 50%), LUC emissions have remained fairly constant in recent decades. We evaluate the three commonly used approaches to estimating LUC emissions - Bookkeeping, Satellite Imagery, and Model Simulations - to identify their main sources of error and their ability to detect net emissions from LUC.
Optimization of the ANFIS using a genetic algorithm for physical work rate classification.
Habibi, Ehsanollah; Salehi, Mina; Yadegarfar, Ghasem; Taheri, Ali
2018-03-13
Recently, a new method was proposed for physical work rate classification based on an adaptive neuro-fuzzy inference system (ANFIS). This study aims to present a genetic algorithm (GA)-optimized ANFIS model for a highly accurate classification of physical work rate. Thirty healthy men participated in this study. Directly measured heart rate and oxygen consumption of the participants in the laboratory were used for training the ANFIS classifier model in MATLAB version 8.0.0 using a hybrid algorithm. A similar process was done using the GA as an optimization technique. The accuracy, sensitivity and specificity of the ANFIS classifier model were increased successfully. The mean accuracy of the model was increased from 92.95 to 97.92%. Also, the calculated root mean square error of the model was reduced from 5.4186 to 3.1882. The maximum estimation error of the optimized ANFIS during the network testing process was ± 5%. The GA can be effectively used for ANFIS optimization and leads to an accurate classification of physical work rate. In addition to high accuracy, simple implementation and inter-individual variability consideration are two other advantages of the presented model.
Mistake proofing: changing designs to reduce error
Grout, J R
2006-01-01
Mistake proofing uses changes in the physical design of processes to reduce human error. It can be used to change designs in ways that prevent errors from occurring, to detect errors after they occur but before harm occurs, to allow processes to fail safely, or to alter the work environment to reduce the chance of errors. Effective mistake proofing design changes should initially be effective in reducing harm, be inexpensive, and easily implemented. Over time these design changes should make life easier and speed up the process. Ideally, the design changes should increase patients' and visitors' understanding of the process. These designs should themselves be mistake proofed and follow the good design practices of other disciplines. PMID:17142609
Short‐term time step convergence in a climate model
Rasch, Philip J.; Taylor, Mark A.; Jablonowski, Christiane
2015-01-01
Abstract This paper evaluates the numerical convergence of very short (1 h) simulations carried out with a spectral‐element (SE) configuration of the Community Atmosphere Model version 5 (CAM5). While the horizontal grid spacing is fixed at approximately 110 km, the process‐coupling time step is varied between 1800 and 1 s to reveal the convergence rate with respect to the temporal resolution. Special attention is paid to the behavior of the parameterized subgrid‐scale physics. First, a dynamical core test with reduced dynamics time steps is presented. The results demonstrate that the experimental setup is able to correctly assess the convergence rate of the discrete solutions to the adiabatic equations of atmospheric motion. Second, results from full‐physics CAM5 simulations with reduced physics and dynamics time steps are discussed. It is shown that the convergence rate is 0.4—considerably slower than the expected rate of 1.0. Sensitivity experiments indicate that, among the various subgrid‐scale physical parameterizations, the stratiform cloud schemes are associated with the largest time‐stepping errors, and are the primary cause of slow time step convergence. While the details of our findings are model specific, the general test procedure is applicable to any atmospheric general circulation model. The need for more accurate numerical treatments of physical parameterizations, especially the representation of stratiform clouds, is likely common in many models. The suggested test technique can help quantify the time‐stepping errors and identify the related model sensitivities. PMID:27660669
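To make the reported convergence rate concrete, the sketch below shows how such a temporal rate can be estimated in practice: run the model with a sequence of time steps, measure the solution error against a reference (smallest-time-step) run, and fit the slope of log(error) versus log(dt). The error values are made-up placeholders, not CAM5 output.

```python
# Estimating an observed temporal convergence rate from assumed error data.
import numpy as np

dt = np.array([1800.0, 900.0, 450.0, 225.0])      # process-coupling time steps (s)
err = np.array([2.0e-1, 1.5e-1, 1.1e-1, 8.3e-2])  # RMS error vs. reference run (placeholder)

rate = np.polyfit(np.log(dt), np.log(err), 1)[0]  # slope = observed order of accuracy
print(f"observed convergence rate ~ {rate:.2f} (1.0 expected for first order)")
```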
Cross, Paul C.; Caillaud, Damien; Heisey, Dennis M.
2013-01-01
Many ecological and epidemiological studies occur in systems with mobile individuals and heterogeneous landscapes. Using a simulation model, we show that the accuracy of inferring an underlying biological process from observational data depends on movement and spatial scale of the analysis. As an example, we focused on estimating the relationship between host density and pathogen transmission. Observational data can result in highly biased inference about the underlying process when individuals move among sampling areas. Even without sampling error, the effect of host density on disease transmission is underestimated by approximately 50 % when one in ten hosts move among sampling areas per lifetime. Aggregating data across larger regions causes minimal bias when host movement is low, and results in less biased inference when movement rates are high. However, increasing data aggregation reduces the observed spatial variation, which would lead to the misperception that a spatially targeted control effort may not be very effective. In addition, averaging over the local heterogeneity will result in underestimating the importance of spatial covariates. Minimizing the bias due to movement is not just about choosing the best spatial scale for analysis, but also about reducing the error associated with using the sampling location as a proxy for an individual’s spatial history. This error associated with the exposure covariate can be reduced by choosing sampling regions with less movement, including longitudinal information of individuals’ movements, or reducing the window of exposure by using repeated sampling or younger individuals.
Bias in the Wagner-Nelson estimate of the fraction of drug absorbed.
Wang, Yibin; Nedelman, Jerry
2002-04-01
To examine and quantify bias in the Wagner-Nelson estimate of the fraction of drug absorbed resulting from the estimation error of the elimination rate constant (k), measurement error of the drug concentration, and the truncation error in the area under the curve. Bias in the Wagner-Nelson estimate was derived as a function of post-dosing time (t), k, ratio of absorption rate constant to k (r), and the coefficient of variation for estimates of k (CVk), or CV% for the observed concentration, by assuming a one-compartment model and using an independent estimate of k. The derived functions were used for evaluating the bias with r = 0.5, 3, or 6; k = 0.1 or 0.2; CVk = 0.2 or 0.4; and CV for the observed concentration = 0.2 or 0.4; for t = 0 to 30 or 60. Estimation error of k resulted in an upward bias in the Wagner-Nelson estimate that could lead to the estimate of the fraction absorbed being greater than unity. The bias resulting from the estimation error of k inflates the fraction of absorption vs. time profiles mainly in the early post-dosing period. The magnitude of the bias in the Wagner-Nelson estimate resulting from estimation error of k was mainly determined by CVk. The bias in the Wagner-Nelson estimate resulting from estimation error in k can be dramatically reduced by use of the mean of several independent estimates of k, as in studies for development of an in vivo-in vitro correlation. The truncation error in the area under the curve can introduce a negative bias in the Wagner-Nelson estimate. This can partially offset the bias resulting from estimation error of k in the early post-dosing period. Measurement error of concentration does not introduce bias in the Wagner-Nelson estimate. Estimation error of k results in an upward bias in the Wagner-Nelson estimate, mainly in the early drug absorption phase. The truncation error in AUC can result in a downward bias, which may partially offset the upward bias due to estimation error of k in the early absorption phase. Measurement error of concentration does not introduce bias. The joint effect of estimation error of k and truncation error in AUC can result in a non-monotonic fraction-of-drug-absorbed-vs-time profile. However, only estimation error of k can lead to the Wagner-Nelson estimate of fraction of drug absorbed greater than unity.
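For orientation, the Wagner-Nelson estimate for a one-compartment model is F(t) = (C(t) + k·AUC[0,t]) / (k·AUC[0,∞]), with AUC[0,∞] commonly approximated by AUC[0,tlast] + C(tlast)/k. The minimal numerical sketch below (hypothetical concentration data and k value) only shows where an error in k, or a truncated AUC, enters the estimate.

```python
# Wagner-Nelson fraction-absorbed sketch with assumed data.
import numpy as np

t = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 12.0, 24.0])   # time, h
C = np.array([0.0, 4.1, 6.3, 7.0, 5.2, 2.4, 1.1, 0.1])     # plasma concentration (assumed)
k = 0.20                                                    # elimination rate constant, 1/h (assumed)

# Cumulative trapezoidal AUC[0,t] and the extrapolated AUC[0,inf].
auc_t = np.concatenate(([0.0], np.cumsum(np.diff(t) * (C[1:] + C[:-1]) / 2)))
auc_inf = auc_t[-1] + C[-1] / k                  # truncation correction term
frac_absorbed = (C + k * auc_t) / (k * auc_inf)  # Wagner-Nelson estimate
print(np.round(frac_absorbed, 3))
```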
NASA Astrophysics Data System (ADS)
Tacik, Nick; Foucart, Francois; Pfeiffer, Harald P.; Haas, Roland; Ossokine, Serguei; Kaplan, Jeff; Muhlberger, Curran; Duez, Matt D.; Kidder, Lawrence E.; Scheel, Mark A.; Szilágyi, Béla
2016-08-01
The code used in [Phys. Rev. D 92, 124012 (2015)] erroneously computed the enthalpy at the center of the neutron stars. Upon correcting this error, density oscillations in evolutions of rotating neutron stars are significantly reduced (from ˜20 % to ˜0.5 % ). Furthermore, it is possible to construct neutron stars with faster rotation rates.
Advanced error-prediction LDPC with temperature compensation for highly reliable SSDs
NASA Astrophysics Data System (ADS)
Tokutomi, Tsukasa; Tanakamaru, Shuhei; Iwasaki, Tomoko Ogura; Takeuchi, Ken
2015-09-01
To improve the reliability of NAND Flash memory based solid-state drives (SSDs), error-prediction LDPC (EP-LDPC) has been proposed for multi-level-cell (MLC) NAND Flash memory (Tanakamaru et al., 2012, 2013), which is effective for long retention times. However, EP-LDPC is not as effective for triple-level cell (TLC) NAND Flash memory, because TLC NAND Flash has higher error rates and is more sensitive to program-disturb error. Therefore, advanced error-prediction LDPC (AEP-LDPC) has been proposed for TLC NAND Flash memory (Tokutomi et al., 2014). AEP-LDPC can correct errors more accurately by precisely describing the error phenomena. In this paper, the effects of AEP-LDPC are investigated in a 2×nm TLC NAND Flash memory with temperature characterization. Compared with LDPC-with-BER-only, the SSD's data-retention time is increased by 3.4× and 9.5× at room-temperature (RT) and 85 °C, respectively. Similarly, the acceptable BER is increased by 1.8× and 2.3×, respectively. Moreover, AEP-LDPC can correct errors with pre-determined tables made at higher temperatures to shorten the measurement time before shipping. Furthermore, it is found that one table can cover behavior over a range of temperatures in AEP-LDPC. As a result, the total table size can be reduced to 777 kBytes, which makes this approach more practical.
System care improves trauma outcome: patient care errors dominate reduced preventable death rate.
Thoburn, E; Norris, P; Flores, R; Goode, S; Rodriguez, E; Adams, V; Campbell, S; Albrink, M; Rosemurgy, A
1993-01-01
A review of 452 trauma deaths in Hillsborough County, Florida, in 1984 documented that 23% of non-CNS trauma deaths were preventable and occurred because of inadequate resuscitation or delay in proper surgical care. In late 1988 Hillsborough County organized a County Trauma Agency (HCTA) to coordinate trauma care among prehospital providers and state-designated trauma centers. The purpose of this study was to review county trauma deaths after the inception of the HCTA to determine the frequency of preventable deaths. 504 trauma deaths occurring between October 1989 and April 1991 were reviewed. Through committee review, 10 deaths were deemed preventable; 2 occurred outside the trauma system. Of the 10 deaths, 5 preventable deaths occurred late in severely injured patients. The preventable death rate has decreased to 7.0% with system care. The causes of preventable deaths have changed from delayed or inadequate intervention to postoperative care errors.
Low-density parity-check codes for volume holographic memory systems.
Pishro-Nik, Hossein; Rahnavard, Nazanin; Ha, Jeongseok; Fekri, Faramarz; Adibi, Ali
2003-02-10
We investigate the application of low-density parity-check (LDPC) codes in volume holographic memory (VHM) systems. We show that a carefully designed irregular LDPC code has a very good performance in VHM systems. We optimize high-rate LDPC codes for the nonuniform error pattern in holographic memories to reduce the bit error rate extensively. The prior knowledge of noise distribution is used for designing as well as decoding the LDPC codes. We show that these codes have a superior performance to that of Reed-Solomon (RS) codes and regular LDPC counterparts. Our simulation shows that we can increase the maximum storage capacity of holographic memories by more than 50 percent if we use irregular LDPC codes with soft-decision decoding instead of conventionally employed RS codes with hard-decision decoding. The performance of these LDPC codes is close to the information theoretic capacity.
Bishara, Anthony J; Hittner, James B
2012-09-01
It is well known that when data are nonnormally distributed, a test of the significance of Pearson's r may inflate Type I error rates and reduce power. Statistics textbooks and the simulation literature provide several alternatives to Pearson's correlation. However, the relative performance of these alternatives has been unclear. Two simulation studies were conducted to compare 12 methods, including Pearson, Spearman's rank-order, transformation, and resampling approaches. With most sample sizes (n ≥ 20), Type I and Type II error rates were minimized by transforming the data to a normal shape prior to assessing the Pearson correlation. Among transformation approaches, a general purpose rank-based inverse normal transformation (i.e., transformation to rankit scores) was most beneficial. However, when samples were both small (n ≤ 10) and extremely nonnormal, the permutation test often outperformed other alternatives, including various bootstrap tests.
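The transformation favored for most sample sizes is easy to apply: replace each value by its rank, convert ranks to normal quantiles, and then compute the Pearson correlation. The sketch below uses the (rank − 0.5)/n convention, which is one common choice; the skewed synthetic data are illustrative only.

```python
# Rank-based inverse normal ("rankit") transform followed by Pearson's r.
import numpy as np
from scipy import stats

def rankit(x):
    ranks = stats.rankdata(x)                     # average ranks for ties
    return stats.norm.ppf((ranks - 0.5) / len(x))

rng = np.random.default_rng(1)
x = rng.exponential(size=50)                      # skewed, nonnormal variable
y = x ** 2 + rng.exponential(size=50)

r_raw, _ = stats.pearsonr(x, y)
r_rin, _ = stats.pearsonr(rankit(x), rankit(y))
print(f"Pearson on raw data: r={r_raw:.2f}; after rankit transform: r={r_rin:.2f}")
```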
Improved classification accuracy by feature extraction using genetic algorithms
NASA Astrophysics Data System (ADS)
Patriarche, Julia; Manduca, Armando; Erickson, Bradley J.
2003-05-01
A feature extraction algorithm has been developed for the purposes of improving classification accuracy. The algorithm uses a genetic algorithm / hill-climber hybrid to generate a set of linearly recombined features, which may be of reduced dimensionality compared with the original set. The genetic algorithm performs the global exploration, and a hill climber explores local neighborhoods. Hybridizing the genetic algorithm with a hill climber improves both the rate of convergence, and the final overall cost function value; it also reduces the sensitivity of the genetic algorithm to parameter selection. The genetic algorithm includes the operators: crossover, mutation, and deletion / reactivation - the last of these effects dimensionality reduction. The feature extractor is supervised, and is capable of deriving a separate feature space for each tissue (which are reintegrated during classification). A non-anatomical digital phantom was developed as a gold standard for testing purposes. In tests with the phantom, and with images of multiple sclerosis patients, classification with feature extractor derived features yielded lower error rates than using standard pulse sequences, and with features derived using principal components analysis. Using the multiple sclerosis patient data, the algorithm resulted in a mean 31% reduction in classification error of pure tissues.
Classification based upon gene expression data: bias and precision of error rates.
Wood, Ian A; Visscher, Peter M; Mengersen, Kerrie L
2007-06-01
Gene expression data offer a large number of potentially useful predictors for the classification of tissue samples into classes, such as diseased and non-diseased. The predictive error rate of classifiers can be estimated using methods such as cross-validation. We have investigated issues of interpretation and potential bias in the reporting of error rate estimates. The issues considered here are optimization and selection biases, sampling effects, measures of misclassification rate, baseline error rates, two-level external cross-validation and a novel proposal for detection of bias using the permutation mean. Reporting an optimal estimated error rate incurs an optimization bias. Downward bias of 3-5% was found in an existing study of classification based on gene expression data and may be endemic in similar studies. Using a simulated non-informative dataset and two example datasets from existing studies, we show how bias can be detected through the use of label permutations and avoided using two-level external cross-validation. Some studies avoid optimization bias by using single-level cross-validation and a test set, but error rates can be more accurately estimated via two-level cross-validation. In addition to estimating the simple overall error rate, we recommend reporting class error rates plus where possible the conditional risk incorporating prior class probabilities and a misclassification cost matrix. We also describe baseline error rates derived from three trivial classifiers which ignore the predictors. R code which implements two-level external cross-validation with the PAMR package, experiment code, dataset details and additional figures are freely available for non-commercial use from http://www.maths.qut.edu.au/profiles/wood/permr.jsp
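The two-level external cross-validation recommended above corresponds to what many libraries call nested cross-validation: an inner loop tunes the classifier, and an outer loop estimates the error rate of the entire tuning-plus-fitting procedure, avoiding the optimization bias of reporting the best inner-loop error. The sketch below is a generic scikit-learn illustration; the classifier, grid, and simulated "many predictors" data are assumptions, not the authors' setup.

```python
# Two-level (nested) cross-validation sketch.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=120, n_features=500, n_informative=10,
                           random_state=0)            # few samples, many predictors
inner = GridSearchCV(SVC(kernel="linear"), {"C": [0.01, 0.1, 1, 10]}, cv=5)  # level 1: tuning
outer_scores = cross_val_score(inner, X, y, cv=5)      # level 2: honest error estimate
print("estimated error rate:", 1 - np.mean(outer_scores))
```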
Do Errors on Classroom Reading Tasks Slow Growth in Reading? Technical Report No. 404.
ERIC Educational Resources Information Center
Anderson, Richard C.; And Others
A pervasive finding from research on teaching and classroom learning is that a low rate of error on classroom tasks is associated with large year to year gains in achievement, particularly for reading in the primary grades. The finding of a negative relationship between error rate, especially rate of oral reading errors, and gains in reading…
Estimating genotype error rates from high-coverage next-generation sequence data.
Wall, Jeffrey D; Tang, Ling Fung; Zerbe, Brandon; Kvale, Mark N; Kwok, Pui-Yan; Schaefer, Catherine; Risch, Neil
2014-11-01
Exome and whole-genome sequencing studies are becoming increasingly common, but little is known about the accuracy of the genotype calls made by the commonly used platforms. Here we use replicate high-coverage sequencing of blood and saliva DNA samples from four European-American individuals to estimate lower bounds on the error rates of Complete Genomics and Illumina HiSeq whole-genome and whole-exome sequencing. Error rates for nonreference genotype calls range from 0.1% to 0.6%, depending on the platform and the depth of coverage. Additionally, we found (1) no difference in the error profiles or rates between blood and saliva samples; (2) Complete Genomics sequences had substantially higher error rates than Illumina sequences had; (3) error rates were higher (up to 6%) for rare or unique variants; (4) error rates generally declined with genotype quality (GQ) score, but in a nonlinear fashion for the Illumina data, likely due to loss of specificity of GQ scores greater than 60; and (5) error rates increased with increasing depth of coverage for the Illumina data. These findings, especially (3)-(5), suggest that caution should be taken in interpreting the results of next-generation sequencing-based association studies, and even more so in clinical application of this technology in the absence of validation by other more robust sequencing or genotyping methods. © 2014 Wall et al.; Published by Cold Spring Harbor Laboratory Press.
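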
NASA Astrophysics Data System (ADS)
Chida, Y.; Takagawa, T.
2017-12-01
Observation data from GPS buoys installed offshore of Japan are used to monitor not only waves but also tsunamis. The real-time data were successfully used to upgrade the tsunami warnings just after the 2011 Tohoku earthquake. Huge tsunamis are easily detected because the signal-to-noise ratio is high enough, but moderate tsunamis are not. GPS data sometimes include error waveforms that resemble tsunamis because positioning accuracy changes with the number and positions of the GPS satellites. Distinguishing true tsunami waveforms from pseudo-tsunami waveforms is therefore important for tsunami detection. In this research, a method was developed to reduce misdetections of tsunamis in GPS buoy observation data and to increase the efficiency of tsunami detection. First, error waveforms were extracted using indexes of position dilution of precision, the reliability of GPS satellite positioning, and the number of satellites used in the calculation. Then, the output from this procedure was analyzed with the Continuous Wavelet Transform (CWT) to compare the time-frequency characteristics of error waveforms and real tsunami waveforms. We found that error waveforms tended to appear when the positioning accuracy of the GPS buoys was low. By extracting these waveforms, it was possible to remove about 43% of the error waveforms without reducing the tsunami detection rate. Moreover, we found that the power spectra of error waveforms and real tsunamis were similar in the long-period component (4-65 minutes); on the other hand, the short-period component (< 1 minute) of the error waveforms was significantly larger than that of the real tsunami waveforms. By thresholding the short-period component, further error waveforms could be removed without a significant reduction in the tsunami detection rate.
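The key discriminator described above is that error waveforms carry much more short-period (< 1 min) energy than real tsunami signals. The sketch below is a simplified stand-in (FFT band power rather than the paper's continuous wavelet transform) on a synthetic 1 Hz sea-surface series; the sampling rate, signal shapes, and band limits are illustrative assumptions.

```python
# Short-period vs. long-period band power as a crude tsunami/error discriminator.
import numpy as np

def band_power(x, dt, period_min, period_max):
    freqs = np.fft.rfftfreq(len(x), d=dt)[1:]        # drop the zero frequency
    power = np.abs(np.fft.rfft(x - x.mean()))[1:] ** 2
    periods = 1.0 / freqs
    mask = (periods >= period_min) & (periods <= period_max)
    return power[mask].sum()

dt = 1.0                                             # assumed 1 Hz buoy sampling, seconds
t = np.arange(0, 3600, dt)
tsunami_like = 0.3 * np.sin(2 * np.pi * t / 900)               # ~15 min period signal
error_like = tsunami_like + 0.3 * np.sin(2 * np.pi * t / 30)   # extra sub-minute energy

for name, x in [("tsunami-like", tsunami_like), ("error-like", error_like)]:
    ratio = band_power(x, dt, 1, 60) / band_power(x, dt, 240, 3900)
    print(name, "short/long period power ratio:", round(ratio, 3))
```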
Speech Errors across the Lifespan
ERIC Educational Resources Information Center
Vousden, Janet I.; Maylor, Elizabeth A.
2006-01-01
Dell, Burger, and Svec (1997) proposed that the proportion of speech errors classified as anticipations (e.g., "moot and mouth") can be predicted solely from the overall error rate, such that the greater the error rate, the lower the anticipatory proportion (AP) of errors. We report a study examining whether this effect applies to changes in error…
Determining relative error bounds for the CVBEM
Hromadka, T.V.
1985-01-01
The Complex Variable Boundary Element Method (CVBEM) provides a measure of relative error which can be utilized to subsequently reduce the error or provide information for further modeling analysis. By maximizing the relative error norm on each boundary element, a bound on the total relative error for each boundary element can be evaluated. This bound can be utilized to test CVBEM convergence, to analyze the effects of additional boundary nodal points in reducing the modeling error, and to evaluate the sensitivity of resulting modeling error within a boundary element from the error produced in another boundary element as a function of geometric distance. © 1985.
Measurement of gas exchange in intensive care: laboratory and clinical validation of a new device.
Takala, J; Keinänen, O; Väisänen, P; Kari, A
1989-10-01
The performance of a new gas exchange monitor was assessed both in laboratory simulation and in ICU patients. Laboratory simulation using N2 and CO2 injections resulted in a mean error of 2 +/- 2% in CO2 production (VCO2) and 4 +/- 4% in oxygen consumption (VO2) in respirator measurements (n = 55) and in a mean error of 3 +/- 2% in VCO2 and 4 +/- 2% in VO2 in canopy measurements (n = 25). The mean error in RQ during ethanol burning was 2 +/- 2% in respirator measurements (n = 45) and 1 +/- 1% in canopy measurements. FIO2 had little effect on the accuracy of VCO2, whereas the accuracy at high rates of VO2 (400 ml/min) was reduced when FIO2 increased: the error ranged from 1 +/- 1% to 6 +/- 1%, except at VO2 400 ml/min during FIO2 0.8, where the error was 16 +/- 3%. Neither peak airway pressure (+13 to +63 cm H2O) nor PEEP (0 to +20 cm H2O) had an effect on the accuracy. The highest level of minute ventilation studied (22.5 L/min) reduced the accuracy slightly (mean error of VCO2 4 +/- 1% and VO2 7 +/- 2%). In patients during controlled mechanical ventilation, increasing FIO2 from 0.4 to 0.6 had no effect on the results. VO2 was consistently higher by gas exchange than by the Fick principle: 16 +/- 9% during controlled ventilation (n = 20), 21 +/- 8% on synchronized intermittent mandatory ventilation (n = 10) and 25 +/- 8% during spontaneous breathing. We conclude that the device proved to be accurate for gas exchange measurements in the ICU.
Angular rate optimal design for the rotary strapdown inertial navigation system.
Yu, Fei; Sun, Qian
2014-04-22
Because it maintains high precision over long durations, the rotary strapdown inertial navigation system (RSINS) has been widely used in submarines and surface ships. Its core technology, the rotating scheme, has been studied by numerous researchers. It is well known that the rotating angular rate, as one of the key design parameters, strongly influences the effectiveness of error modulation. In order to design the optimal rotating angular rate of the RSINS, this paper analyzes in detail the relationship between the rotating angular rate and the velocity error of the RSINS based on the Laplace transform and the inverse Laplace transform. The analysis shows that the velocity error of the RSINS depends not only on the sensor error, but also on the rotating angular rate. To minimize the velocity error, the rotating angular rate of the RSINS should match the sensor error. An optimal design method for the rotating rate of the RSINS is also proposed. Simulation and experimental results verify the validity and superiority of this optimal design method.
Reduction of Orifice-Induced Pressure Errors
NASA Technical Reports Server (NTRS)
Plentovich, Elizabeth B.; Gloss, Blair B.; Eves, John W.; Stack, John P.
1987-01-01
Use of porous-plug orifice reduces or eliminates errors, induced by orifice itself, in measuring static pressure on airfoil surface in wind-tunnel experiments. Piece of sintered metal press-fitted into static-pressure orifice so it matches surface contour of model. Porous material reduces orifice-induced pressure error associated with conventional orifice of same or smaller diameter. Also reduces or eliminates additional errors in pressure measurement caused by orifice imperfections. Provides more accurate measurements in regions with very thin boundary layers.
Bulik, Catharine C.; Fauntleroy, Kathy A.; Jenkins, Stephen G.; Abuali, Mayssa; LaBombardi, Vincent J.; Nicolau, David P.; Kuti, Joseph L.
2010-01-01
We describe the levels of agreement between broth microdilution, Etest, Vitek 2, Sensititre, and MicroScan methods to accurately define the meropenem MIC and categorical interpretation of susceptibility against carbapenemase-producing Klebsiella pneumoniae (KPC). A total of 46 clinical K. pneumoniae isolates with KPC genotypes, all modified Hodge test and blaKPC positive, collected from two hospitals in NY were included. Results obtained by each method were compared with those from broth microdilution (the reference method), and agreement was assessed based on MICs and Clinical Laboratory Standards Institute (CLSI) interpretative criteria using 2010 susceptibility breakpoints. Based on broth microdilution, 0%, 2.2%, and 97.8% of the KPC isolates were classified as susceptible, intermediate, and resistant to meropenem, respectively. Results from MicroScan demonstrated the most agreement with those from broth microdilution, with 95.6% agreement based on the MIC and 2.2% classified as minor errors, and no major or very major errors. Etest demonstrated 82.6% agreement with broth microdilution MICs, a very major error rate of 2.2%, and a minor error rate of 2.2%. Vitek 2 MIC agreement was 30.4%, with a 23.9% very major error rate and a 39.1% minor error rate. Sensititre demonstrated MIC agreement for 26.1% of isolates, with a 3% very major error rate and a 26.1% minor error rate. Application of FDA breakpoints had little effect on minor error rates but increased very major error rates to 58.7% for Vitek 2 and Sensititre. Meropenem MIC results and categorical interpretations for carbapenemase-producing K. pneumoniae differ by methodology. Confirmation of testing results is encouraged when an accurate MIC is required for antibiotic dosing optimization. PMID:20484603
Reducing errors benefits the field-based learning of a fundamental movement skill in children.
Capio, C M; Poolton, J M; Sit, C H P; Holmstrom, M; Masters, R S W
2013-03-01
Proficient fundamental movement skills (FMS) are believed to form the basis of more complex movement patterns in sports. This study examined the development of the FMS of overhand throwing in children through either an error-reduced (ER) or error-strewn (ES) training program. Students (n = 216), aged 8-12 years (M = 9.16, SD = 0.96), practiced overhand throwing in either a program that reduced errors during practice (ER) or one that was ES. The ER program reduced errors by incrementally raising task difficulty, while the ES program incrementally lowered task difficulty. Process-oriented assessment of throwing movement form (Test of Gross Motor Development-2) and product-oriented assessment of throwing accuracy (absolute error) were performed. Changes in performance were examined among children in the upper and lower quartiles of the pretest throwing accuracy scores. ER training participants showed greater gains in movement form and accuracy, and performed throwing more effectively with a concurrent secondary cognitive task. Movement form improved among girls, while throwing accuracy improved among children with low ability. Reducing performance errors during FMS training resulted in greater learning than a program that did not restrict errors. The reduced cognitive processing costs (effective dual-task performance) associated with such an approach suggest its potential benefits for children with developmental conditions. © 2011 John Wiley & Sons A/S.
Corrections of clinical chemistry test results in a laboratory information system.
Wang, Sihe; Ho, Virginia
2004-08-01
The recently released reports by the Institute of Medicine, To Err Is Human and Patient Safety, have received national attention because of their focus on the problem of medical errors. Although a small number of studies have reported on errors in general clinical laboratories, there are, to our knowledge, no reported studies that focus on errors in pediatric clinical laboratory testing. To characterize the errors that have caused corrections to have to be made in pediatric clinical chemistry results in the laboratory information system, Misys. To provide initial data on the errors detected in pediatric clinical chemistry laboratories in order to improve patient safety in pediatric health care. All clinical chemistry staff members were informed of the study and were requested to report in writing when a correction was made in the laboratory information system, Misys. Errors were detected either by the clinicians (the results did not fit the patients' clinical conditions) or by the laboratory technologists (the results were double-checked, and the worksheets were carefully examined twice a day). No incident that was discovered before or during the final validation was included. On each Monday of the study, we generated a report from Misys that listed all of the corrections made during the previous week. We then categorized the corrections according to the types and stages of the incidents that led to the corrections. A total of 187 incidents were detected during the 10-month study, representing a 0.26% error detection rate per requisition. The distribution of the detected incidents included 31 (17%) preanalytic incidents, 46 (25%) analytic incidents, and 110 (59%) postanalytic incidents. The errors related to noninterfaced tests accounted for 50% of the total incidents and for 37% of the affected tests and orderable panels, while the noninterfaced tests and panels accounted for 17% of the total test volume in our laboratory. This pilot study provided the rate and categories of errors detected in a pediatric clinical chemistry laboratory based on the corrections of results in the laboratory information system. A direct interface of the instruments to the laboratory information system showed that it had favorable effects on reducing laboratory errors.
Data-driven region-of-interest selection without inflating Type I error rate.
Brooks, Joseph L; Zoumpoulaki, Alexia; Bowman, Howard
2017-01-01
In ERP and other large multidimensional neuroscience data sets, researchers often select regions of interest (ROIs) for analysis. The method of ROI selection can critically affect the conclusions of a study by causing the researcher to miss effects in the data or to detect spurious effects. In practice, to avoid inflating Type I error rate (i.e., false positives), ROIs are often based on a priori hypotheses or independent information. However, this can be insensitive to experiment-specific variations in effect location (e.g., latency shifts) reducing power to detect effects. Data-driven ROI selection, in contrast, is nonindependent and uses the data under analysis to determine ROI positions. Therefore, it has potential to select ROIs based on experiment-specific information and increase power for detecting effects. However, data-driven methods have been criticized because they can substantially inflate Type I error rate. Here, we demonstrate, using simulations of simple ERP experiments, that data-driven ROI selection can indeed be more powerful than a priori hypotheses or independent information. Furthermore, we show that data-driven ROI selection using the aggregate grand average from trials (AGAT), despite being based on the data at hand, can be safely used for ROI selection under many circumstances. However, when there is a noise difference between conditions, using the AGAT can inflate Type I error and should be avoided. We identify critical assumptions for use of the AGAT and provide a basis for researchers to use, and reviewers to assess, data-driven methods of ROI localization in ERP and other studies. © 2016 Society for Psychophysiological Research.
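A toy sketch of the AGAT idea follows: pool trials from all conditions into one aggregate grand average, pick the ROI/time window where that aggregate effect peaks, and only then compare conditions within the chosen window. The array shapes, window width, and synthetic ERP-like data are assumptions for illustration, not the authors' simulation code.

```python
# Data-driven ROI selection from the aggregate grand average (AGAT), toy version.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_trials, n_time = 60, 300                        # trials per condition, samples per epoch
signal = np.exp(-((np.arange(n_time) - 180) ** 2) / 200)       # shared ERP-like component
cond_a = signal * 1.2 + rng.normal(0, 1.0, (n_trials, n_time))
cond_b = signal * 1.0 + rng.normal(0, 1.0, (n_trials, n_time))

agat = np.vstack([cond_a, cond_b]).mean(axis=0)   # aggregate grand average across conditions
center = int(np.argmax(np.abs(agat)))             # ROI centered on the aggregate peak
window = slice(max(center - 10, 0), center + 10)

t, p = stats.ttest_ind(cond_a[:, window].mean(axis=1),
                       cond_b[:, window].mean(axis=1))
print(f"ROI centered at sample {center}: t={t:.2f}, p={p:.3f}")
```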
Validation Relaxation: A Quality Assurance Strategy for Electronic Data Collection.
Kenny, Avi; Gordon, Nicholas; Griffiths, Thomas; Kraemer, John D; Siedner, Mark J
2017-08-18
The use of mobile devices for data collection in developing world settings is becoming increasingly common and may offer advantages in data collection quality and efficiency relative to paper-based methods. However, mobile data collection systems can hamper many standard quality assurance techniques due to the lack of a hardcopy backup of data. Consequently, mobile health data collection platforms have the potential to generate datasets that appear valid, but are susceptible to unidentified database design flaws, areas of miscomprehension by enumerators, and data recording errors. We describe the design and evaluation of a strategy for estimating data error rates and assessing enumerator performance during electronic data collection, which we term "validation relaxation." Validation relaxation involves the intentional omission of data validation features for select questions to allow for data recording errors to be committed, detected, and monitored. We analyzed data collected during a cluster sample population survey in rural Liberia using an electronic data collection system (Open Data Kit). We first developed a classification scheme for types of detectable errors and validation alterations required to detect them. We then implemented the following validation relaxation techniques to enable data error conduct and detection: intentional redundancy, removal of "required" constraint, and illogical response combinations. This allowed for up to 11 identifiable errors to be made per survey. The error rate was defined as the total number of errors committed divided by the number of potential errors. We summarized crude error rates and estimated changes in error rates over time for both individuals and the entire program using logistic regression. The aggregate error rate was 1.60% (125/7817). Error rates did not differ significantly between enumerators (P=.51), but decreased for the cohort with increasing days of application use, from 2.3% at survey start (95% CI 1.8%-2.8%) to 0.6% at day 45 (95% CI 0.3%-0.9%; OR=0.969; P<.001). The highest error rate (84/618, 13.6%) occurred for an intentional redundancy question for a birthdate field, which was repeated in separate sections of the survey. We found low error rates (0.0% to 3.1%) for all other possible errors. A strategy of removing validation rules on electronic data capture platforms can be used to create a set of detectable data errors, which can subsequently be used to assess group and individual enumerator error rates, their trends over time, and categories of data collection that require further training or additional quality control measures. This strategy may be particularly useful for identifying individual enumerators or systematic data errors that are responsive to enumerator training and is best applied to questions for which errors cannot be prevented through training or software design alone. Validation relaxation should be considered as a component of a holistic data quality assurance strategy. ©Avi Kenny, Nicholas Gordon, Thomas Griffiths, John D Kraemer, Mark J Siedner. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 18.08.2017.
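As a rough illustration of the analysis described above, the sketch below computes an aggregate error rate (errors committed divided by error opportunities) and fits a logistic regression of error occurrence on days of application use, from which a per-day odds ratio can be read off. The data frame, column names, and simulated values are hypothetical stand-ins for the survey data.

```python
# Sketch: aggregate error rate and error-rate trend over days of application use.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "error":    rng.binomial(1, 0.016, 7817),   # 1 = detectable error committed
    "days_use": rng.integers(0, 46, 7817),      # enumerator's days of app use
})

rate = df["error"].mean()
print(f"aggregate error rate: {rate:.2%} ({df['error'].sum()}/{len(df)})")

# Logistic regression of error occurrence on days of use; exp(coef) is the per-day odds ratio
X = sm.add_constant(df["days_use"])
fit = sm.Logit(df["error"], X).fit(disp=False)
print("per-day odds ratio:", float(np.exp(fit.params["days_use"])))
```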
Quality improvement through implementation of discharge order reconciliation.
Lu, Yun; Clifford, Pamela; Bjorneby, Andreas; Thompson, Bruce; VanNorman, Samuel; Won, Katie; Larsen, Kevin
2013-05-01
A coordinated multidisciplinary process to reduce medication errors related to patient discharges to skilled-nursing facilities (SNFs) is described. After determining that medication errors were a frequent cause of readmission among patients discharged to SNFs, a medical center launched a two-phase quality-improvement project focused on cardiac and medical patients. Phase one of the project entailed a three-month failure modes and effects analysis of existing discharge procedures, followed by the development and pilot testing of a multidisciplinary, closed-loop workflow process involving staff and resident physicians, clinical nurse coordinators, and clinical pharmacists. During pilot testing of the new workflow process, the rate of discharge medication errors involving SNF patients was tracked, and data on medication-related readmissions in a designated intervention group (n = 87) and a control group of patients (n = 1893) discharged to SNFs via standard procedures during a nine-month period were collected, with the data stratified using severity of illness (SOI) classification. Analysis of the collected data indicated a cumulative 30-day medication-related readmission rate for study group patients in the minor, moderate, and major SOI categories of 5.4% (4 of 74 patients), compared with a rate of 9.5% (169 of 1780 patients) in the control group. In phase two of the project, the revised SNF discharge medication reconciliation procedure was implemented throughout the hospital; since hospitalwide implementation of the new workflow, the readmission rate for SNF patients has been maintained at about 6.7%. Implementing a standardized discharge order reconciliation process that includes pharmacists led to decreased readmission rates and improved care for patients discharged to SNFs.
Evaluation of causes and frequency of medication errors during information technology downtime.
Hanuscak, Tara L; Szeinbach, Sheryl L; Seoane-Vazquez, Enrique; Reichert, Brendan J; McCluskey, Charles F
2009-06-15
The causes and frequency of medication errors occurring during information technology downtime were evaluated. Individuals from a convenience sample of 78 hospitals who were directly responsible for supporting and maintaining clinical information systems (CISs) and automated dispensing systems (ADSs) were surveyed using an online tool between February 2007 and May 2007 to determine if medication errors were reported during periods of system downtime. The errors were classified using the National Coordinating Council for Medication Error Reporting and Prevention severity scoring index. The percentage of respondents reporting downtime was estimated. Of the 78 eligible hospitals, 32 respondents with CIS and ADS responsibilities completed the online survey for a response rate of 41%. For computerized prescriber order entry, patch installations and system upgrades were associated with an average downtime response of 57% over a 12-month period. Lost interface and interface malfunction were reported for centralized and decentralized ADSs, with an average downtime response of 34% and 29%, respectively. The average downtime response was 31% for software malfunctions linked to clinical decision-support systems. Although patient harm did not result from 30 (54%) medication errors, the potential for harm was present for 9 (16%) of these errors. Medication errors occurred during CIS and ADS downtime despite the availability of backup systems and standard protocols to handle periods of system downtime. Efforts should be directed to reduce the frequency and length of downtime in order to minimize medication errors during such downtime.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nose, Takayuki, E-mail: nose-takayuki@nms.ac.jp; Chatani, Masashi; Otani, Yuki
Purpose: High-dose-rate (HDR) brachytherapy misdeliveries can occur at any institution, and they can cause disastrous results. Even a patient's death has been reported. Misdeliveries could be avoided with real-time verification methods. In 1996, we developed a modified C-arm fluoroscopic verification of an HDR Iridium 192 source position to prevent these misdeliveries. This method provided excellent image quality sufficient to detect errors, and it has been in clinical use at our institutions for 20 years. The purpose of the current study is to introduce the mechanisms and validity of our straightforward C-arm fluoroscopic verification method. Methods and Materials: Conventional X-ray fluoroscopic images are degraded by spurious signals and quantum noise from Iridium 192 photons, which make source verification impractical. To improve image quality, we quadrupled the C-arm fluoroscopic X-ray dose per pulse. The pulse rate was reduced by a factor of 4 to keep the average exposure compliant with Japanese medical regulations. The images were then displayed with quarter-frame rates. Results: Sufficient quality was obtained to enable observation of the source position relative to both the applicators and the anatomy. With this method, 2 errors were detected among 2031 treatment sessions for 370 patients within a 6-year period. Conclusions: With the use of a modified C-arm fluoroscopic verification method, treatment errors that were otherwise overlooked were detected in real time. This method should be given consideration for widespread use.
Nose, Takayuki; Chatani, Masashi; Otani, Yuki; Teshima, Teruki; Kumita, Shinichirou
2017-03-15
High-dose-rate (HDR) brachytherapy misdeliveries can occur at any institution, and they can cause disastrous results. Even a patient's death has been reported. Misdeliveries could be avoided with real-time verification methods. In 1996, we developed a modified C-arm fluoroscopic verification of an HDR Iridium 192 source position to prevent these misdeliveries. This method provided excellent image quality sufficient to detect errors, and it has been in clinical use at our institutions for 20 years. The purpose of the current study is to introduce the mechanisms and validity of our straightforward C-arm fluoroscopic verification method. Conventional X-ray fluoroscopic images are degraded by spurious signals and quantum noise from Iridium 192 photons, which make source verification impractical. To improve image quality, we quadrupled the C-arm fluoroscopic X-ray dose per pulse. The pulse rate was reduced by a factor of 4 to keep the average exposure compliant with Japanese medical regulations. The images were then displayed with quarter-frame rates. Sufficient quality was obtained to enable observation of the source position relative to both the applicators and the anatomy. With this method, 2 errors were detected among 2031 treatment sessions for 370 patients within a 6-year period. With the use of a modified C-arm fluoroscopic verification method, treatment errors that were otherwise overlooked were detected in real time. This method should be given consideration for widespread use. Copyright © 2016 Elsevier Inc. All rights reserved.
Load Sharing Behavior of Star Gearing Reducer for Geared Turbofan Engine
NASA Astrophysics Data System (ADS)
Mo, Shuai; Zhang, Yidu; Wu, Qiong; Wang, Feiming; Matsumura, Shigeki; Houjoh, Haruo
2017-07-01
Load sharing behavior is very important for power-split gearing systems; the star gearing reducer, as a new type of special transmission system, can be used in many industrial fields. However, there is little literature on the key multiple-split load sharing issue in the main gearbox used in the new type of geared turbofan engine. A further mechanism analysis is made of the load sharing behavior among the star gears of a star gearing reducer for a geared turbofan engine. A comprehensive meshing error analysis is conducted on the eccentricity error, gear thickness error, base pitch error, assembly error, and bearing error of the star gearing reducer. The floating meshing error resulting from meshing clearance variation caused by the simultaneous floating of the sun gear and annular gear is taken into account. A refined mathematical model for load sharing coefficient calculation is established in consideration of the different meshing stiffness and supporting stiffness of the components. The curves of the load sharing coefficient under the interaction, single action, and single variation of the various component errors are obtained, and the sensitivity of the load sharing coefficient to the different errors is determined. The load sharing coefficient of the star gearing reducer is 1.033 and the maximum meshing force on a gear tooth is about 3010 N. This paper provides theoretical evidence for optimal parameter design and proper tolerance allocation in the development and manufacturing process, so as to achieve the best economic and technical outcome.
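For context, one common way to express the load sharing coefficient reported above is the ratio of the most heavily loaded branch to the average branch load in a power-split stage. The sketch below illustrates that definition only; the per-branch forces are hypothetical and the paper's refined model accounts for meshing stiffness, support stiffness, and component errors that this simple ratio ignores.

```python
# Hedged sketch of a basic load sharing coefficient (LSC): N * F_max / sum(F_i).
import numpy as np

mesh_forces = np.array([3010.0, 2930.0, 2890.0, 2860.0, 2840.0])  # hypothetical branch forces, N
n_branches = len(mesh_forces)
lsc = n_branches * mesh_forces.max() / mesh_forces.sum()
print(f"load sharing coefficient ~ {lsc:.3f}")   # values near 1.0 indicate even load sharing
```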
Improved Correction System for Vibration Sensitive Inertial Angle of Attack Measurement Devices
NASA Technical Reports Server (NTRS)
Crawford, Bradley L.; Finley, Tom D.
2000-01-01
Inertial angle of attack (AoA) devices currently in use at NASA Langley Research Center (LaRC) are subject to inaccuracies due to centrifugal accelerations caused by model dynamics, also known as sting whip. Recent literature suggests that these errors can be as high as 0.25 deg. With the current AoA accuracy target at LaRC being 0.01 deg., there is a dire need for improvement. With other errors in the inertial system (temperature, rectification, resolution, etc.) having been reduced to acceptable levels, a system is currently being developed at LaRC to measure and correct for the sting-whip-induced errors. By using miniaturized piezoelectric accelerometers and magnetohydrodynamic rate sensors, not only can the total centrifugal acceleration be measured, but yaw and pitch dynamics in the tunnel can also be characterized. These corrections can be used to determine a tunnel's past performance and can also indicate where efforts need to be concentrated to reduce these dynamics. Included in this paper are data on individual sensors, laboratory testing techniques, package evaluation, and wind tunnel test results on a High Speed Research (HSR) model in the Langley 16-Foot Transonic Wind Tunnel.
El Camino Hospital: using health information technology to promote patient safety.
Bukunt, Susan; Hunter, Christine; Perkins, Sharon; Russell, Diana; Domanico, Lee
2005-10-01
El Camino Hospital is a leader in the use of health information technology to promote patient safety, including bar coding, computerized order entry, electronic medical records, and wireless communications. Each year, El Camino Hospital's board of directors sets performance expectations for the chief executive officer, which are tied to achievement of local, regional, and national safety and quality standards, including the six Institute of Medicine quality dimensions. The chief executive officer then determines a set of explicit quality goals and measurable actions, which serve as guidelines for the overall hospital. The goals and progress reports are widely shared with employees, medical staff, patients and families, and the public. For safety, for example, the medication error reduction team tracks and reviews medication error rates. The hospital has virtually eliminated transcription errors through its 100% use of computerized physician order entry. Clinical pathways and standard order sets have reduced practice variation, providing a safer environment. Many projects focused on timeliness, such as emergency department wait time, lab turnaround time, and pneumonia time to initial antibiotic. Results have been mixed, with projects most successful when a link was established with patient outcomes, such as in reducing time to percutaneous transluminal coronary angioplasty for patients with acute myocardial infarction.
Integrating GPS with GLONASS for high-rate seismogeodesy
NASA Astrophysics Data System (ADS)
Geng, Jianghui; Jiang, Peng; Liu, Jingnan
2017-04-01
High-rate GPS is a precious seismogeodetic tool to capture coseismic displacements unambiguously and usually improved by sidereal filtering to mitigate multipath effects dominating the periods of tens of seconds to minutes. We further introduced GLONASS (Globalnaya navigatsionnaya sputnikovaya sistema) data into high-rate GPS to deliver over 2000 24 h displacements at 99 stations in Europe. We find that the major displacement errors induced by orbits and atmosphere on the low-frequency band that are not characterized by sidereal repeatabilities can be amplified markedly by up to 40% after GPS sidereal filtering. In contrast, integration with GLONASS can reduce the noise of high-rate GPS significantly and near uniformly over the entire frequency band, especially for the north components by up to 40%, suggesting that this integration is able to mitigate more errors than only multipath within high-rate GPS. Integrating GPS with GLONASS outperforms GPS sidereal filtering substantially in ameliorating displacement noise by up to 60% over a wide frequency band (e.g., 2 s-0.5 days) except a minor portion between 100 and 1000 s. High-rate multi-GNSS (Global Navigation Satellite System) can be enhanced further by sidereal filtering, which should however be carefully implemented to avoid adverse complications of the noise spectrum of displacements.
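The sidereal filtering used as the comparison baseline above can be summarized as subtracting the previous day's displacement series, advanced by the constellation's orbit-repeat offset, so that geometry-dependent multipath largely cancels. The sketch below is a minimal illustration under assumed sampling and repeat-period values; it is not the processing chain used in the study.

```python
# Minimal sketch of GPS sidereal filtering on 1 Hz displacement series.
import numpy as np

FS = 1.0                                         # sampling rate of the displacement series (Hz)
REPEAT = 86154.0                                 # approximate GPS constellation repeat period (s)
ADVANCE = int(round((86400.0 - REPEAT) * FS))    # ~246 samples at 1 Hz

def sidereal_filter(today: np.ndarray, yesterday: np.ndarray) -> np.ndarray:
    """Subtract yesterday's displacement, advanced by the orbit-repeat offset."""
    n = min(len(today), len(yesterday) - ADVANCE)
    return today[:n] - yesterday[ADVANCE:ADVANCE + n]
```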
Etzel, C J; Shete, S; Beasley, T M; Fernandez, J R; Allison, D B; Amos, C I
2003-01-01
Non-normality of the phenotypic distribution can affect power to detect quantitative trait loci in sib pair studies. Previously, we observed that Winsorizing the sib pair phenotypes increased the power of quantitative trait locus (QTL) detection for both Haseman-Elston (H-E) least-squares tests [Hum Hered 2002;53:59-67] and maximum likelihood-based variance components (MLVC) analysis [Behav Genet (in press)]. Winsorizing the phenotypes led to a slight increase in type I error in H-E tests and a slight decrease in type I error for MLVC analysis. Herein, we considered transforming the sib pair phenotypes using the Box-Cox family of transformations. Data were simulated for normal and non-normal (skewed and kurtic) distributions. Phenotypic values were replaced by Box-Cox transformed values. Twenty thousand replications were performed for three H-E tests of linkage and the likelihood ratio test (LRT), the Wald test and other robust versions based on the MLVC method. We calculated the relative nominal inflation rate as the ratio of observed empirical type I error divided by the set alpha level (5, 1 and 0.1% alpha levels). MLVC tests applied to non-normal data had inflated type I errors (rate ratio greater than 1.0), which were controlled best by Box-Cox transformation and to a lesser degree by Winsorizing. For example, for non-transformed, skewed phenotypes (derived from a chi2 distribution with 2 degrees of freedom), the rates of empirical type I error with respect to set alpha level=0.01 were 0.80, 4.35 and 7.33 for the original H-E test, LRT and Wald test, respectively. For the same alpha level=0.01, these rates were 1.12, 3.095 and 4.088 after Winsorizing and 0.723, 1.195 and 1.905 after Box-Cox transformation. Winsorizing reduced inflated error rates for the leptokurtic distribution (derived from a Laplace distribution with mean 0 and variance 8). Further, power (adjusted for empirical type I error) at the 0.01 alpha level ranged from 4.7 to 17.3% across all tests using the non-transformed, skewed phenotypes, from 7.5 to 20.1% after Winsorizing and from 12.6 to 33.2% after Box-Cox transformation. Likewise, power (adjusted for empirical type I error) using leptokurtic phenotypes at the 0.01 alpha level ranged from 4.4 to 12.5% across all tests with no transformation, from 7 to 19.2% after Winsorizing and from 4.5 to 13.8% after Box-Cox transformation. Thus the Box-Cox transformation apparently provided the best type I error control and maximal power among the procedures we considered for analyzing a non-normal, skewed distribution (chi2), while Winsorizing worked best for the non-normal, kurtic distribution (Laplace). We repeated the same simulations using a larger sample size (200 sib pairs) and found similar results. Copyright 2003 S. Karger AG, Basel
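The two transformations compared above are both available in SciPy; the sketch below shows them applied to a skewed phenotype similar to the chi-square case in the simulations. The sample size, Winsorizing limits, and offset are illustrative assumptions, not the study's settings.

```python
# Sketch: Box-Cox transformation and Winsorizing of a skewed phenotype.
import numpy as np
from scipy import stats
from scipy.stats.mstats import winsorize

rng = np.random.default_rng(0)
pheno = rng.chisquare(df=2, size=1000)              # skewed phenotype (chi-square, 2 df)

boxcox_pheno, lam = stats.boxcox(pheno + 1e-9)      # Box-Cox needs strictly positive values
wins_pheno = np.asarray(winsorize(pheno, limits=[0.05, 0.05]))  # cap lowest/highest 5%

print(f"Box-Cox lambda = {lam:.3f}")
print("skewness (raw, Box-Cox, Winsorized):",
      stats.skew(pheno), stats.skew(boxcox_pheno), stats.skew(wins_pheno))
```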
NASA Technical Reports Server (NTRS)
Hinton, David A.; Lohr, Gary W.
1988-01-01
Studies have shown that radio communications between pilots and air traffic control contribute to high pilot workload and are subject to various errors. These errors result from congestion on the voice radio channel, and missed and misunderstood messages. The use of digital data link has been proposed as a means of reducing this workload and error rate. A critical factor, however, in determining the potential benefit of data link will be the interface between future data link systems and the operator of those systems, both in the air and on the ground. The purpose of this effort was to evaluate the pilot interface with various levels of data link capability, in simulated general aviation, single-pilot instrument flight rule operations. Results show that the data link reduced demands on pilots' short-term memory, reduced the number of communication transmissions, and permitted the pilots to more easily allocate time to critical cockpit tasks while receiving air traffic control messages. The pilots who participated unanimously indicated a preference for data link communications over voice-only communications. There were, however, situations in which the pilot preferred the use of voice communications, and the ability for pilots to delay processing the data link messages, during high workload events, caused delays in the acknowledgement of messages to air traffic control.
An error criterion for determining sampling rates in closed-loop control systems
NASA Technical Reports Server (NTRS)
Brecher, S. M.
1972-01-01
The determination of an error criterion which will give a sampling rate for adequate performance of linear, time-invariant closed-loop, discrete-data control systems was studied. The proper modelling of the closed-loop control system for characterization of the error behavior, and the determination of an absolute error definition for performance of the two commonly used holding devices are discussed. The definition of an adequate relative error criterion as a function of the sampling rate and the parameters characterizing the system is established along with the determination of sampling rates. The validity of the expressions for the sampling interval was confirmed by computer simulations. Their application solves the problem of making a first choice in the selection of sampling rates.
Westbrook, Johanna I; Li, Ling; Lehnbom, Elin C; Baysari, Melissa T; Braithwaite, Jeffrey; Burke, Rosemary; Conn, Chris; Day, Richard O
2015-02-01
To (i) compare medication errors identified at audit and observation with medication incident reports; (ii) identify differences between two hospitals in incident report frequency and medication error rates; (iii) identify prescribing error detection rates by staff. Audit of 3291 patient records at two hospitals to identify prescribing errors and evidence of their detection by staff. Medication administration errors were identified from a direct observational study of 180 nurses administering 7451 medications. Severity of errors was classified. Those likely to lead to patient harm were categorized as 'clinically important'. Two major academic teaching hospitals in Sydney, Australia. Rates of medication errors identified from audit and from direct observation were compared with reported medication incident reports. A total of 12 567 prescribing errors were identified at audit. Of these 1.2/1000 errors (95% CI: 0.6-1.8) had incident reports. Clinically important prescribing errors (n = 539) were detected by staff at a rate of 218.9/1000 (95% CI: 184.0-253.8), but only 13.0/1000 (95% CI: 3.4-22.5) were reported. 78.1% (n = 421) of clinically important prescribing errors were not detected. A total of 2043 drug administrations (27.4%; 95% CI: 26.4-28.4%) contained ≥ 1 errors; none had an incident report. Hospital A had a higher frequency of incident reports than Hospital B, but a lower rate of errors at audit. Prescribing errors with the potential to cause harm frequently go undetected. Reported incidents do not reflect the profile of medication errors which occur in hospitals or the underlying rates. This demonstrates the inaccuracy of using incident frequency to compare patient risk or quality performance within or across hospitals. New approaches including data mining of electronic clinical information systems are required to support more effective medication error detection and mitigation. © The Author 2015. Published by Oxford University Press in association with the International Society for Quality in Health Care.
Twice cutting method reduces tibial cutting error in unicompartmental knee arthroplasty.
Inui, Hiroshi; Taketomi, Shuji; Yamagami, Ryota; Sanada, Takaki; Tanaka, Sakae
2016-01-01
Bone cutting error can be one of the causes of malalignment in unicompartmental knee arthroplasty (UKA). The amount of cutting error in total knee arthroplasty has been reported. However, none have investigated cutting error in UKA. The purpose of this study was to reveal the amount of cutting error in UKA when open cutting guide was used and clarify whether cutting the tibia horizontally twice using the same cutting guide reduced the cutting errors in UKA. We measured the alignment of the tibial cutting guides, the first-cut cutting surfaces and the second cut cutting surfaces using the navigation system in 50 UKAs. Cutting error was defined as the angular difference between the cutting guide and cutting surface. The mean absolute first-cut cutting error was 1.9° (1.1° varus) in the coronal plane and 1.1° (0.6° anterior slope) in the sagittal plane, whereas the mean absolute second-cut cutting error was 1.1° (0.6° varus) in the coronal plane and 1.1° (0.4° anterior slope) in the sagittal plane. Cutting the tibia horizontally twice reduced the cutting errors in the coronal plane significantly (P<0.05). Our study demonstrated that in UKA, cutting the tibia horizontally twice using the same cutting guide reduced cutting error in the coronal plane. Copyright © 2014 Elsevier B.V. All rights reserved.
Wireless visual sensor network resource allocation using cross-layer optimization
NASA Astrophysics Data System (ADS)
Bentley, Elizabeth S.; Matyjas, John D.; Medley, Michael J.; Kondi, Lisimachos P.
2009-01-01
In this paper, we propose an approach to manage network resources for a Direct Sequence Code Division Multiple Access (DS-CDMA) visual sensor network where nodes monitor scenes with varying levels of motion. It uses cross-layer optimization across the physical layer, the link layer and the application layer. Our technique simultaneously assigns a source coding rate, a channel coding rate, and a power level to all nodes in the network based on one of two criteria that maximize the quality of video of the entire network as a whole, subject to a constraint on the total chip rate. One criterion results in the minimal average end-to-end distortion amongst all nodes, while the other criterion minimizes the maximum distortion of the network. Our approach allows one to determine the capacity of the visual sensor network based on the number of nodes and the quality of video that must be transmitted. For bandwidth-limited applications, one can also determine the minimum bandwidth needed to accommodate a number of nodes with a specific target chip rate. Video captured by a sensor node camera is encoded and decoded using the H.264 video codec by a centralized control unit at the network layer. To reduce the computational complexity of the solution, Universal Rate-Distortion Characteristics (URDCs) are obtained experimentally to relate bit error probabilities to the distortion of corrupted video. Bit error rates are found first by using Viterbi's upper bounds on the bit error probability and second, by simulating nodes transmitting data spread by Total Square Correlation (TSC) codes over a Rayleigh-faded DS-CDMA channel and receiving that data using Auxiliary Vector (AV) filtering.
Personal digital assistant-based drug information sources: potential to improve medication safety.
Galt, Kimberly A; Rule, Ann M; Houghton, Bruce; Young, Daniel O; Remington, Gina
2005-04-01
This study compared the potential for personal digital assistant (PDA)-based drug information sources to minimize potential medication errors dependent on accurate and complete drug information at the point of care. A quality and safety framework for drug information resources was developed to evaluate 11 PDA-based drug information sources. Three drug information sources met the criteria of the framework: Epocrates Rx Pro, Lexi-Drugs, and mobileMICROMEDEX. Medication error types related to drug information at the point of care were then determined. Forty-seven questions were developed to test the potential of the sources to prevent these error types. Pharmacists and physician experts from Creighton University created these questions based on the most common types of questions asked by primary care providers. Three physicians evaluated the drug information sources, rating the source for each question: 1=no information available, 2=some information available, or 3=adequate amount of information available. The mean ratings for the drug information sources were: 2.0 (Epocrates Rx Pro), 2.5 (Lexi-Drugs), and 2.03 (mobileMICROMEDEX). Lexi-Drugs was significantly better (mobileMICROMEDEX t test, P=0.05; Epocrates Rx Pro t test, P=0.01). Lexi-Drugs was found to be the most specific and complete PDA resource available to optimize medication safety by reducing potential errors associated with drug information. No resource was sufficient to address the patient safety information needs for all cases.
Islam, Mohammad Tariqul; Tanvir Ahmed, Sk.; Zabir, Ishmam; Shahnaz, Celia
2018-01-01
Photoplethysmographic (PPG) signals are gaining popularity for monitoring heart rate in wearable devices because of the simplicity of construction and low cost of the sensor. The task becomes very difficult due to the presence of various motion artefacts. In this study, an algorithm based on cascade and parallel combination (CPC) of adaptive filters is proposed in order to reduce the effect of motion artefacts. First, preliminary noise reduction is performed by averaging two channel PPG signals. Next, in order to reduce the effect of motion artefacts, a cascaded filter structure consisting of three cascaded adaptive filter blocks is developed where three-channel accelerometer signals are used as references to motion artefacts. To further reduce the effect of noise, a scheme based on convex combination of two such cascaded adaptive noise cancelers is introduced, where two widely used adaptive filters, namely recursive least squares and least mean squares filters, are employed. Heart rates are estimated from the noise reduced PPG signal in the spectral domain. Finally, an efficient heart rate tracking algorithm is designed based on the nature of the heart rate variability. The performance of the proposed CPC method is tested on a widely used public database. It is found that the proposed method offers very low estimation error and smooth heart rate tracking with a simple algorithmic approach. PMID:29515812
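A single block of such a cascaded structure can be illustrated with a basic least-mean-squares (LMS) adaptive noise canceler: one accelerometer axis serves as the motion-artefact reference and its filtered version is subtracted from the PPG. The filter order and step size below are illustrative assumptions, and this sketch does not reproduce the paper's RLS branch or the convex combination stage.

```python
# Minimal LMS adaptive noise canceler, cascaded over three accelerometer axes.
import numpy as np

def lms_cancel(ppg: np.ndarray, accel: np.ndarray, order: int = 16, mu: float = 0.01) -> np.ndarray:
    """Return the PPG with the accelerometer-correlated component removed."""
    w = np.zeros(order)
    out = np.zeros_like(ppg, dtype=float)
    for n in range(order, len(ppg)):
        x = accel[n - order:n][::-1]     # reference tap vector
        est = w @ x                      # estimated motion artefact
        e = ppg[n] - est                 # error = cleaned PPG sample
        w += 2 * mu * e * x              # LMS weight update (mu must suit the input power)
        out[n] = e
    return out

def cascade_cancel(ppg, accel_xyz):
    """Apply one LMS block per accelerometer axis, in cascade."""
    cleaned = np.asarray(ppg, dtype=float)
    for axis in accel_xyz:               # accel_xyz: iterable of three 1-D arrays
        cleaned = lms_cancel(cleaned, np.asarray(axis, dtype=float))
    return cleaned
```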
Baek, Hyun Jae; Shin, JaeWook; Jin, Gunwoo; Cho, Jaegeol
2017-10-24
Photoplethysmographic signals are useful for heart rate variability analysis in practical ambulatory applications. While reducing the sampling rate of signals is an important consideration for modern wearable devices that enable 24/7 continuous monitoring, there have not been many studies that have investigated how to compensate the low timing resolution of low-sampling-rate signals for accurate heart rate variability analysis. In this study, we utilized the parabola approximation method and measured it against the conventional cubic spline interpolation method for the time, frequency, and nonlinear domain variables of heart rate variability. For each parameter, the intra-class correlation, standard error of measurement, Bland-Altman 95% limits of agreement and root mean squared relative error were presented. Also, elapsed time taken to compute each interpolation algorithm was investigated. The results indicated that parabola approximation is a simple, fast, and accurate algorithm-based method for compensating the low timing resolution of pulse beat intervals. In addition, the method showed comparable performance with the conventional cubic spline interpolation method. Even though the absolute value of the heart rate variability variables calculated using a signal sampled at 20 Hz were not exactly matched with those calculated using a reference signal sampled at 250 Hz, the parabola approximation method remains a good interpolation method for assessing trends in HRV measurements for low-power wearable applications.
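The parabola approximation evaluated above is essentially three-point peak interpolation: a parabola is fitted through the sample at a detected peak and its two neighbours, and the vertex gives a sub-sample estimate of the true peak time. The sketch below illustrates that calculation under an assumed 20 Hz sampling rate; it is not the authors' implementation.

```python
# Sketch: three-point parabolic refinement of a peak location in a low-rate PPG signal.
import numpy as np

def parabolic_peak(y: np.ndarray, i: int, fs: float) -> float:
    """Refine the time (seconds) of a local maximum at sample index i (0 < i < len(y)-1)."""
    y0, y1, y2 = y[i - 1], y[i], y[i + 1]
    denom = y0 - 2 * y1 + y2
    delta = 0.0 if denom == 0 else 0.5 * (y0 - y2) / denom   # fractional offset in [-0.5, 0.5]
    return (i + delta) / fs

# Usage idea (hypothetical arrays): refined beat times, then beat-to-beat intervals
# peak_times = [parabolic_peak(ppg, i, fs=20.0) for i in raw_peak_indices]
# intervals = np.diff(peak_times)
```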
Error Rate Comparison during Polymerase Chain Reaction by DNA Polymerase
McInerney, Peter; Adams, Paul; Hadi, Masood Z.
2014-01-01
As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for 6 DNA polymerases commonly used in PCR applications, including 3 polymerases typically used for cloning applications requiring high fidelity. Error rate measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. Of the six enzymes included in the study (Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase), we find the lowest error rates with Pfu, Phusion, and Pwo polymerases. Error rates are comparable for these 3 enzymes and are >10x lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the 3 high fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition.
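PCR fidelity is commonly expressed as errors per base per template doubling, which is how error rates from different amplification conditions can be compared. The short worked example below uses made-up numbers, not the study's data, to show the arithmetic.

```python
# Worked example: PCR error rate as errors per base per template doubling (hypothetical numbers).
import math

mutations = 14               # mutations found by sequencing cloned PCR products
bases_sequenced = 2.1e5      # total target bases interrogated
fold_amplification = 1.0e5
doublings = math.log2(fold_amplification)        # ~16.6 template doublings

error_rate = mutations / (bases_sequenced * doublings)
print(f"{error_rate:.2e} errors per base per doubling")
```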
NASA Astrophysics Data System (ADS)
Rahmat, R. F.; Nasution, F. R.; Seniman; Syahputra, M. F.; Sitompul, O. S.
2018-02-01
Weather is the condition of the air in a certain region over a relatively short period of time, measured with various parameters such as temperature, air pressure, wind velocity, humidity, and other phenomena in the atmosphere. In fact, extreme weather due to global warming can lead to drought, flood, hurricane, and other weather events, which directly affect social and economic activities. Hence, a forecasting technique is needed to predict weather with distinctive output, particularly a GIS-based mapping process with information about the current weather status at certain coordinates of each region and the capability to forecast the seven days afterward. Data used in this research are retrieved in real time from the openweathermap server and BMKG. In order to obtain a low error rate and high forecasting accuracy, the authors use the Bayesian Model Averaging (BMA) method. The results show that the BMA method has good accuracy. The forecasting error is calculated as the mean square error (MSE). The error value for minimum temperature is 0.28 and for maximum temperature 0.15. Meanwhile, the error value for minimum humidity is 0.38 and for maximum humidity 0.04. Finally, the forecasting error for wind speed is 0.076. The lower the forecasting error rate, the more optimized the accuracy is.
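At its core, Bayesian Model Averaging produces a forecast as a weighted combination of member forecasts, which can then be scored with the MSE used above. The sketch below illustrates only that combination step with assumed weights and made-up temperatures; it omits the estimation of the posterior weights themselves.

```python
# Hedged sketch: BMA forecast as a weighted combination of member forecasts, scored by MSE.
import numpy as np

member_forecasts = np.array([            # rows: forecast sources, cols: days (hypothetical)
    [30.1, 30.4, 29.8, 31.0, 30.6, 30.2, 30.9],
    [29.7, 30.0, 29.5, 30.6, 30.2, 29.9, 30.4],
])
weights = np.array([0.6, 0.4])           # assumed posterior model weights (sum to 1)
observed = np.array([29.9, 30.2, 29.6, 30.8, 30.4, 30.0, 30.6])

bma_forecast = weights @ member_forecasts
mse = np.mean((bma_forecast - observed) ** 2)
print("BMA forecast:", np.round(bma_forecast, 1), " MSE:", round(float(mse), 3))
```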
Schwantes-An, Tae-Hwi; Sung, Heejong; Sabourin, Jeremy A; Justice, Cristina M; Sorant, Alexa J M; Wilson, Alexander F
2016-01-01
In this study, the effects of (a) the minor allele frequency of the single nucleotide variant (SNV), (b) the degree of departure from normality of the trait, and (c) the position of the SNVs on type I error rates were investigated in the Genetic Analysis Workshop (GAW) 19 whole exome sequence data. To test the distribution of the type I error rate, 5 simulated traits were considered: standard normal and gamma distributed traits; 2 transformed versions of the gamma trait (log 10 and rank-based inverse normal transformations); and trait Q1 provided by GAW 19. Each trait was tested with 313,340 SNVs. Tests of association were performed with simple linear regression and average type I error rates were determined for minor allele frequency classes. Rare SNVs (minor allele frequency < 0.05) showed inflated type I error rates for non-normally distributed traits that increased as the minor allele frequency decreased. The inflation of average type I error rates increased as the significance threshold decreased. Normally distributed traits did not show inflated type I error rates with respect to the minor allele frequency for rare SNVs. There was no consistent effect of transformation on the uniformity of the distribution of the location of SNVs with a type I error.
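The kind of simulation described above can be sketched as follows: generate rare-variant genotypes and a skewed trait under the null (no association), regress the trait on the genotype, and count how often the p-value falls below the significance threshold. Sample size, minor allele frequency, and trait distribution below are illustrative choices, not the GAW 19 settings.

```python
# Sketch: empirical type I error of simple linear regression for a rare SNV and a skewed trait.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, maf, alpha, reps = 500, 0.01, 0.05, 2000
hits = tested = 0
for _ in range(reps):
    geno = rng.binomial(2, maf, n)       # genotypes 0/1/2, no true effect on the trait
    trait = rng.gamma(1.0, 1.0, n)       # skewed trait, independent of genotype
    if geno.std() == 0:                  # skip degenerate replicates with no carriers
        continue
    tested += 1
    p = stats.linregress(geno, trait).pvalue
    hits += p < alpha
print(f"empirical type I error at alpha={alpha}: {hits / tested:.3f} over {tested} replicates")
```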
Learning feature representations with a cost-relevant sparse autoencoder.
Längkvist, Martin; Loutfi, Amy
2015-02-01
There is an increasing interest in the machine learning community to automatically learn feature representations directly from the (unlabeled) data instead of using hand-designed features. The autoencoder is one method that can be used for this purpose. However, for data sets with a high degree of noise, a large amount of the representational capacity in the autoencoder is used to minimize the reconstruction error for these noisy inputs. This paper proposes a method that improves the feature learning process by focusing on the task relevant information in the data. This selective attention is achieved by weighting the reconstruction error and reducing the influence of noisy inputs during the learning process. The proposed model is trained on a number of publicly available image data sets and the test error rate is compared to a standard sparse autoencoder and other methods, such as the denoising autoencoder and contractive autoencoder.
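The central idea, down-weighting the reconstruction error of inputs judged noisy or task-irrelevant, can be illustrated with a small autoencoder whose loss applies per-feature weights. The fixed weight vector below is a stand-in assumption for the paper's cost-relevant weighting scheme, and the layer sizes are arbitrary.

```python
# Sketch (PyTorch): autoencoder trained with a weighted reconstruction error.
import torch
import torch.nn as nn

class WeightedAE(nn.Module):
    def __init__(self, n_in=784, n_hidden=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
        self.dec = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        return self.dec(self.enc(x))

def weighted_recon_loss(x_hat, x, w):
    # w: per-feature weights in [0, 1]; small weights reduce the influence of noisy inputs
    return ((w * (x_hat - x)) ** 2).mean()

model = WeightedAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(64, 784)                      # toy batch
w = torch.ones(784); w[:100] = 0.1           # assumed: first 100 features treated as noisy
loss = weighted_recon_loss(model(x), x, w)
opt.zero_grad(); loss.backward(); opt.step()
```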
Resource allocation for error resilient video coding over AWGN using optimization approach.
An, Cheolhong; Nguyen, Truong Q
2008-12-01
The number of slices for error resilient video coding is jointly optimized with 802.11a-like media access control and the physical layers with automatic repeat request and rate compatible punctured convolutional code over additive white gaussian noise channel as well as channel times allocation for time division multiple access. For error resilient video coding, the relation between the number of slices and coding efficiency is analyzed and formulated as a mathematical model. It is applied for the joint optimization problem, and the problem is solved by a convex optimization method such as the primal-dual decomposition method. We compare the performance of a video communication system which uses the optimal number of slices with one that codes a picture as one slice. From numerical examples, end-to-end distortion of utility functions can be significantly reduced with the optimal slices of a picture especially at low signal-to-noise ratio.
Uy, Raymonde Charles Y; Kury, Fabricio P; Fontelo, Paul A
2015-01-01
The standard of safe medication practice requires strict observance of the five rights of medication administration: the right patient, drug, time, dose, and route. Despite adherence to these guidelines, medication errors remain a public health concern that has generated health policies and hospital processes that leverage automation and computerization to reduce these errors. Bar code, RFID, biometrics, and pharmacy automation technologies have been demonstrated in the literature to decrease the incidence of medication errors by minimizing the human factors involved in the process. Despite evidence suggesting the effectiveness of these technologies, adoption rates and trends vary across hospital systems. The objective of this study is to examine the state and adoption trends of automatic identification and data capture (AIDC) methods and pharmacy automation technologies in U.S. hospitals. A retrospective descriptive analysis of survey data from the HIMSS Analytics® Database was done, demonstrating optimistic growth in the adoption of these patient safety solutions.
Density-matrix simulation of small surface codes under current and projected experimental noise
NASA Astrophysics Data System (ADS)
O'Brien, T. E.; Tarasinski, B.; DiCarlo, L.
2017-09-01
We present a density-matrix simulation of the quantum memory and computing performance of the distance-3 logical qubit Surface-17, following a recently proposed quantum circuit and using experimental error parameters for transmon qubits in a planar circuit QED architecture. We use this simulation to optimize components of the QEC scheme (e.g., trading off stabilizer measurement infidelity for reduced cycle time) and to investigate the benefits of feedback harnessing the fundamental asymmetry of relaxation-dominated error in the constituent transmons. A lower-order approximate calculation extends these predictions to the distance-5 Surface-49. These results clearly indicate error rates below the fault-tolerance threshold of the surface code, and the potential for Surface-17 to perform beyond the break-even point of quantum memory. However, Surface-49 is required to surpass the break-even point of computation at state-of-the-art qubit relaxation times and readout speeds.
NASA Astrophysics Data System (ADS)
Han, Xifeng; Zhou, Wen
2018-03-01
Optical vector radio-frequency (RF) signal generation based on optical carrier suppression (OCS) in one Mach-Zehnder modulator (MZM) can realize frequency doubling. In order to match the phase or amplitude of the recovered quadrature amplitude modulation (QAM) signal, phase or amplitude pre-coding is necessary on the transmitter side. The detected QAM signals usually have a non-uniform phase distribution after square-law detection at the photodiode because of the imperfect characteristics of the optical and electrical devices. We propose to use an optimal error-decision threshold for this non-uniform phase distribution to reduce the bit error rate (BER). By employing this scheme, the BER of a 16 Gbaud (32 Gbit/s) quadrature-phase-shift-keying (QPSK) millimeter-wave signal at 36 GHz is improved from 1 × 10^-3 to 1 × 10^-4 at -4.6 dBm input power into the photodiode.
Sandford, II, Maxwell T.; Handel, Theodore G.; Ettinger, J. Mark
1999-01-01
A method of embedding auxiliary information into the digital representation of host data containing noise in the low-order bits. The method applies to digital data representing analog signals, for example digital images. The method reduces the error introduced by other methods that replace the low-order bits with auxiliary information. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user through use of a digital key. The modular error embedding method includes a process to permute the order in which the host data values are processed. The method doubles the amount of auxiliary information that can be added to host data values, in comparison with bit-replacement methods for high bit-rate coding. The invention preserves human perception of the meaning and content of the host data, permitting the addition of auxiliary data in the amount of 50% or greater of the original host data.
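For orientation, the sketch below shows plain least-significant-bit embedding with a key-seeded permutation of the processing order, which is the baseline the modular error embedding method improves upon; it is not the patented method itself, and the key, bit pattern, and host array are made up.

```python
# Simplified LSB embedding/extraction with a keyed permutation of the host-sample order.
import numpy as np

def embed(host: np.ndarray, bits: np.ndarray, key: int) -> np.ndarray:
    order = np.random.default_rng(key).permutation(host.size)[: bits.size]
    out = host.copy().ravel()
    out[order] = (out[order] & 0xFE) | bits       # replace the least significant bit
    return out.reshape(host.shape)

def extract(stego: np.ndarray, n_bits: int, key: int) -> np.ndarray:
    order = np.random.default_rng(key).permutation(stego.size)[:n_bits]
    return stego.ravel()[order] & 1

host = np.random.default_rng(0).integers(0, 256, (8, 8), dtype=np.uint8)
bits = np.array([1, 0, 1, 1, 0, 1, 0, 0], dtype=np.uint8)
stego = embed(host, bits, key=1234)
assert np.array_equal(extract(stego, len(bits), key=1234), bits)
```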
Giardina, M; Castiglia, F; Tomarchio, E
2014-12-01
Failure mode, effects and criticality analysis (FMECA) is a safety technique extensively used in many different industrial fields to identify and prevent potential failures. In the application of traditional FMECA, the risk priority number (RPN) is determined to rank the failure modes; however, the method has been criticised for having several weaknesses. Moreover, it is unable to adequately deal with human errors or negligence. In this paper, a new versatile fuzzy rule-based assessment model is proposed to evaluate the RPN index to rank both component failure and human error. The proposed methodology is applied to potential radiological over-exposure of patients during high-dose-rate brachytherapy treatments. The critical analysis of the results can provide recommendations and suggestions regarding safety provisions for the equipment and procedures required to reduce the occurrence of accidental events.
Objective Assessment of Patient Inhaler User Technique Using an Audio-Based Classification Approach.
Taylor, Terence E; Zigel, Yaniv; Egan, Clarice; Hughes, Fintan; Costello, Richard W; Reilly, Richard B
2018-02-01
Many patients make critical user technique errors when using pressurised metered dose inhalers (pMDIs) which reduce the clinical efficacy of respiratory medication. Such critical errors include poor actuation coordination (poor timing of medication release during inhalation) and inhaling too fast (peak inspiratory flow rate over 90 L/min). Here, we present a novel audio-based method that objectively assesses patient pMDI user technique. The Inhaler Compliance Assessment device was employed to record inhaler audio signals from 62 respiratory patients as they used a pMDI with an In-Check Flo-Tone device attached to the inhaler mouthpiece. Using a quadratic discriminant analysis approach, the audio-based method generated a total frame-by-frame accuracy of 88.2% in classifying sound events (actuation, inhalation and exhalation). The audio-based method estimated the peak inspiratory flow rate and volume of inhalations with an accuracy of 88.2% and 83.94% respectively. It was detected that 89% of patients made at least one critical user technique error even after tuition from an expert clinical reviewer. This method provides a more clinically accurate assessment of patient inhaler user technique than standard checklist methods.
Processing Images of Craters for Spacecraft Navigation
NASA Technical Reports Server (NTRS)
Cheng, Yang; Johnson, Andrew E.; Matthies, Larry H.
2009-01-01
A crater-detection algorithm has been conceived to enable automation of what, heretofore, have been manual processes for utilizing images of craters on a celestial body as landmarks for navigating a spacecraft flying near or landing on that body. The images are acquired by an electronic camera aboard the spacecraft, then digitized, then processed by the algorithm, which consists mainly of the following steps: 1. Edges in an image are detected and placed in a database. 2. Crater rim edges are selected from the edge database. 3. Edges that belong to the same crater are grouped together. 4. An ellipse is fitted to each group of crater edges. 5. Ellipses are refined directly in the image domain to reduce errors introduced in the detection of edges and fitting of ellipses. 6. The quality of each detected crater is evaluated. It is planned to utilize this algorithm as the basis of a computer program for automated, real-time, onboard processing of crater-image data. Experimental studies have led to the conclusion that this algorithm is capable of a detection rate >93 percent, a false-alarm rate <5 percent, a geometric error <0.5 pixel, and a position error <0.3 pixel.
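Steps 1 through 4 can be roughly approximated with standard image-processing primitives, as in the sketch below (edge detection, grouping via contours, ellipse fitting). This is only a loose analogue for illustration: the file name is hypothetical, the OpenCV 4 findContours signature is assumed, and the flight algorithm's rim selection and in-image refinement are not reproduced.

```python
# Rough analogue of the edge-detection and ellipse-fitting steps using OpenCV.
import cv2

image = cv2.imread("crater.png", cv2.IMREAD_GRAYSCALE)        # hypothetical input image
edges = cv2.Canny(image, 50, 150)                              # step 1: edge detection
contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)

candidates = []
for c in contours:                                             # steps 3-4: group edges, fit ellipses
    if len(c) >= 5:                                            # fitEllipse needs at least 5 points
        (cx, cy), (major, minor), angle = cv2.fitEllipse(c)
        if minor > 4:                                          # crude size filter for plausible rims
            candidates.append(((cx, cy), (major, minor), angle))
print(f"{len(candidates)} candidate crater ellipses")
```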
Factors associated with aberrant imprint methylation and oligozoospermia
Kobayashi, Norio; Miyauchi, Naoko; Tatsuta, Nozomi; Kitamura, Akane; Okae, Hiroaki; Hiura, Hitoshi; Sato, Akiko; Utsunomiya, Takafumi; Yaegashi, Nobuo; Nakai, Kunihiko; Arima, Takahiro
2017-01-01
Disturbingly, the number of patients with oligozoospermia (low sperm count) has been gradually increasing in industrialized countries. Epigenetic alterations are believed to be involved in this condition. Recent studies have clarified that intrinsic and extrinsic factors can induce epigenetic transgenerational phenotypes through apparent reprogramming of the male germ line. Here we examined DNA methylation levels of 22 human imprinted loci in a total of 221 purified sperm samples from infertile couples and found methylation alterations in 24.8% of the patients. Structural equation model suggested that the cause of imprint methylation errors in sperm might have been environmental factors. More specifically, aberrant methylation and a particular lifestyle (current smoking, excess consumption of carbonated drinks) were associated with severe oligozoospermia, while aging probably affected this pathology indirectly through the accumulation of PCB in the patients. Next we examined the pregnancy outcomes for patients when the sperm had abnormal imprint methylation. The live-birth rate decreased and the miscarriage rate increased with the methylation errors. Our research will be useful for the prevention of methylation errors in sperm from infertile men, and sperm with normal imprint methylation might increase the safety of assisted reproduction technology (ART) by reducing methylation-induced diseases of children conceived via ART. PMID:28186187
ERIC Educational Resources Information Center
Boedigheimer, Dan
2010-01-01
Approximately 70% of aviation accidents are attributable to human error. The greatest opportunity for further improving aviation safety is found in reducing human errors in the cockpit. The purpose of this quasi-experimental, mixed-method research was to evaluate whether there was a difference in pilot attitudes toward reducing human error in the…
Perceptual processing deficits underlying reduced FFOV efficiency in older adults.
Power, Garry F; Conlon, Elizabeth G
2017-01-01
Older adults are known to perform more poorly on measures of the functional field of view (FFOV) than younger adults. Specific contributions by poor bottom-up and or top-down control of visual attention to the reduced FFOV of older adults were investigated. Error rates of older and younger adults were compared on a FFOV task in which a central identification task, peripheral localization task, and peripheral distractors were presented in high and low contrast. Older adults made more errors in all conditions. The effect of age was independent of the contrast of the peripheral target or distractors. The performance cost of including the central task was measured and found to be negligible for younger adults. For older adults performance costs were present in all conditions, greater with distractors than without, and greater for a low rather than high contrast central stimulus when the peripheral target was high contrast. These results are consistent with older adults compensating for reduced sensory input or bottom-up capture of attention by relying more heavily on top-down control for which they are resource limited.
Reducing diagnostic errors in medicine: what's the goal?
Graber, Mark; Gordon, Ruthanna; Franklin, Nancy
2002-10-01
This review considers the feasibility of reducing or eliminating the three major categories of diagnostic errors in medicine: "No-fault errors" occur when the disease is silent, presents atypically, or mimics something more common. These errors will inevitably decline as medical science advances, new syndromes are identified, and diseases can be detected more accurately or at earlier stages. These errors can never be eradicated, unfortunately, because new diseases emerge, tests are never perfect, patients are sometimes noncompliant, and physicians will inevitably, at times, choose the most likely diagnosis over the correct one, illustrating the concept of necessary fallibility and the probabilistic nature of choosing a diagnosis. "System errors" play a role when diagnosis is delayed or missed because of latent imperfections in the health care system. These errors can be reduced by system improvements, but can never be eliminated because these improvements lag behind and degrade over time, and each new fix creates the opportunity for novel errors. Tradeoffs also guarantee system errors will persist, when resources are just shifted. "Cognitive errors" reflect misdiagnosis from faulty data collection or interpretation, flawed reasoning, or incomplete knowledge. The limitations of human processing and the inherent biases in using heuristics guarantee that these errors will persist. Opportunities exist, however, for improving the cognitive aspect of diagnosis by adopting system-level changes (e.g., second opinions, decision-support systems, enhanced access to specialists) and by training designed to improve cognition or cognitive awareness. Diagnostic error can be substantially reduced, but never eradicated.
Guo, Hongbin; Renaut, Rosemary A; Chen, Kewei; Reiman, Eric M
2010-01-01
Graphical analysis methods are widely used in positron emission tomography quantification because of their simplicity and model independence. But they may, particularly for reversible kinetics, lead to bias in the estimated parameters. The source of the bias is commonly attributed to noise in the data. Assuming a two-tissue compartmental model, we investigate the bias that originates from modeling error. This bias is an intrinsic property of the simplified linear models used for limited scan durations, and it is exaggerated by random noise and numerical quadrature error. Conditions are derived under which Logan's graphical method either over- or under-estimates the distribution volume in the noise-free case. The bias caused by modeling error is quantified analytically. The presented analysis shows that the bias of graphical methods is inversely proportional to the dissociation rate. Furthermore, visual examination of the linearity of the Logan plot is not sufficient for guaranteeing that equilibrium has been reached. A new model which retains the elegant properties of graphical analysis methods is presented, along with a numerical algorithm for its solution. We perform simulations with the fibrillar amyloid β radioligand [11C] benzothiazole-aniline using published data from the University of Pittsburgh and Rotterdam groups. The results show that the proposed method significantly reduces the bias due to modeling error. Moreover, the results for data acquired over a 70 minutes scan duration are at least as good as those obtained using existing methods for data acquired over a 90 minutes scan duration. PMID:20493196
Approximation of Bit Error Rates in Digital Communications
2007-06-01
Defence Science and Technology Organisation, DSTO-TN-0761. This report investigates the estimation of bit error rates in digital communications, motivated by recent work in [6]. In the latter, bounds are used to construct estimates for bit error rates in the case of differentially coherent quadrature phase-shift keying.
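A common reference point for such estimates is a Monte Carlo simulation checked against the closed-form expression; the sketch below does this for coherent Gray-coded QPSK over AWGN, where the bit error rate is Q(sqrt(2 Eb/N0)). Coherent detection is used here only for simplicity and is not the differentially coherent case analyzed in the report; the Eb/N0 value and bit count are arbitrary.

```python
# Toy Monte Carlo BER estimate for Gray-coded QPSK over AWGN vs. the theoretical value.
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(0)
ebn0_db, n_bits = 6.0, 200_000
ebn0 = 10 ** (ebn0_db / 10)

bits = rng.integers(0, 2, n_bits)
symbols = (1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])     # I/Q mapping, Eb = 1
noise_std = np.sqrt(1 / (2 * ebn0))                            # per-dimension noise std
rx = symbols + noise_std * (rng.normal(size=symbols.size) + 1j * rng.normal(size=symbols.size))

bit_hat = np.empty(n_bits, dtype=int)
bit_hat[0::2] = (rx.real < 0).astype(int)
bit_hat[1::2] = (rx.imag < 0).astype(int)

ber_sim = np.mean(bit_hat != bits)
ber_theory = 0.5 * erfc(np.sqrt(ebn0))                         # Q(sqrt(2 Eb/N0))
print(f"simulated BER = {ber_sim:.2e}, theoretical BER = {ber_theory:.2e}")
```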
Implementing an error disclosure coaching model: A multicenter case study.
White, Andrew A; Brock, Douglas M; McCotter, Patricia I; Shannon, Sarah E; Gallagher, Thomas H
2017-01-01
National guidelines call for health care organizations to provide around-the-clock coaching for medical error disclosure. However, frontline clinicians may not always seek risk managers for coaching. As part of a demonstration project designed to improve patient safety and reduce malpractice liability, we trained multidisciplinary disclosure coaches at 8 health care organizations in Washington State. The training was highly rated by participants, although not all emerged confident in their coaching skill. This multisite intervention can serve as a model for other organizations looking to enhance existing disclosure capabilities. Success likely requires cultural change and repeated practice opportunities for coaches. © 2017 American Society for Healthcare Risk Management of the American Hospital Association.
Symmetric Blind Information Reconciliation for Quantum Key Distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kiktenko, Evgeniy O.; Trushechkin, Anton S.; Lim, Charles Ci Wen
Quantum key distribution (QKD) is a quantum-proof key-exchange scheme which is fast approaching the communication industry. An essential component in QKD is the information reconciliation step, which is used for correcting the quantum-channel noise errors. The recently suggested blind-reconciliation technique, based on low-density parity-check codes, offers remarkable prospectives for efficient information reconciliation without an a priori quantum bit error rate estimation. We suggest an improvement of the blind-information-reconciliation protocol promoting a significant increase in the efficiency of the procedure and reducing its interactivity. Finally, the proposed technique is based on introducing symmetry in operations of parties, and the consideration of results of unsuccessful belief-propagation decodings.
Symmetric Blind Information Reconciliation for Quantum Key Distribution
NASA Astrophysics Data System (ADS)
Kiktenko, E. O.; Trushechkin, A. S.; Lim, C. C. W.; Kurochkin, Y. V.; Fedorov, A. K.
2017-10-01
Quantum key distribution (QKD) is a quantum-proof key-exchange scheme which is fast approaching the communication industry. An essential component in QKD is the information reconciliation step, which is used for correcting the quantum-channel noise errors. The recently suggested blind-reconciliation technique, based on low-density parity-check codes, offers remarkable prospects for efficient information reconciliation without an a priori quantum bit error rate estimation. We suggest an improvement of the blind-information-reconciliation protocol promoting a significant increase in the efficiency of the procedure and reducing its interactivity. The proposed technique is based on introducing symmetry in the operations of the parties and on the consideration of results of unsuccessful belief-propagation decodings.
Laser in situ keratomileusis in 2012: a review.
Sutton, Gerard; Lawless, Michael; Hodge, Christopher
2014-01-01
Laser in situ keratomileusis (LASIK) is a safe and effective treatment for refractive error. A combination of technological advances and increasing surgeon experience has served to further refine refractive outcomes and reduce complication rates. In this article, we review LASIK as it stands in late 2012: the procedure, indications, technology, complications and refractive outcomes. © 2013 The Authors. Clinical and Experimental Optometry © 2013 Optometrists Association Australia.
Vigoda, Michael M; Gencorelli, Frank J; Lubarsky, David A
2007-10-01
Accurate recording of the disposition of controlled substances is required by regulatory agencies. Linking anesthesia information management systems (AIMS) with medication dispensing systems may facilitate automated reconciliation of medication discrepancies. In this retrospective investigation at a large academic hospital, we reviewed 11,603 cases (spanning an 8-month period), comparing records of medications (i.e., narcotics, benzodiazepines, ketamine, and thiopental) recorded as removed from our automated medication dispensing system with medications recorded as administered in our AIMS. In 15% of cases, we found discrepancies between dispensed versus administered medications. Discrepancies occurred in both the AIMS (8% of cases) and the medication dispensing system (10% of cases). Although there were many different types of user errors, nearly 75% of them resulted from either an error in the amount of drug waste documented in the medication dispensing system (35%) or an error in documenting the medication in the AIMS (40%). A significant percentage of cases contained data entry errors in both the automated dispensing system and the AIMS. This error rate limits the current practicality of automating the necessary reconciliation. An electronic interface between an AIMS and a medication dispensing system could alert users to medication entry errors prior to finalizing a case, thus reducing the time (and cost) of reconciling discrepancies.
Li, Shumei; Tian, Junzhang; Bauer, Andreas; Huang, Ruiwang; Wen, Hua; Li, Meng; Wang, Tianyue; Xia, Likun; Jiang, Guihua
2016-08-01
Purpose To analyze the integrity of white matter (WM) tracts in primary insomnia patients and provide better characterization of abnormal WM integrity and its relationship with disease duration and clinical features of primary insomnia. Materials and Methods This prospective study was approved by the ethics committee of the Guangdong No. 2 Provincial People's Hospital. Tract-based spatial statistics were used to compare changes in diffusion parameters of WM tracts from 23 primary insomnia patients and 30 healthy control (HC) participants, and the accuracy of these changes in distinguishing insomnia patients from HC participants was evaluated. Voxel-wise statistics across subjects was performed by using a 5000-permutation set with family-wise error correction (family-wise error, P < .05). Multiple regressions were used to analyze the associations between the abnormal fractional anisotropy (FA) in WM with disease duration, Pittsburgh Sleep Quality Index, insomnia severity index, self-rating anxiety scale, and the self-rating depression scale in primary insomnia. Characteristics for abnormal WM were also investigated in tract-level analyses. Results Primary insomnia patients had lower FA values mainly in the right anterior limb of the internal capsule, right posterior limb of the internal capsule, right anterior corona radiata, right superior corona radiata, right superior longitudinal fasciculus, body of the corpus callosum, and right thalamus (P < .05, family-wise error correction). The receiver operating characteristic areas for the seven regions were acceptable (range, 0.60-0.74; 60%-74%). Multiple regression models showed abnormal FA values in the thalamus and body corpus callosum were associated with the disease duration, self-rating depression scale, and Pittsburgh Sleep Quality Index scores. Tract-level analysis suggested that the reduced FA values might be related to greater radial diffusivity. Conclusion This study showed that WM tracts related to regulation of sleep and wakefulness, and limbic cognitive and sensorimotor regions, are disrupted in the right brain in patients with primary insomnia. The reduced integrity of these WM tracts may be because of loss of myelination. (©) RSNA, 2016.
Joint Denoising/Compression of Image Contours via Shape Prior and Context Tree
NASA Astrophysics Data System (ADS)
Zheng, Amin; Cheung, Gene; Florencio, Dinei
2018-07-01
With the advent of depth sensing technologies, the extraction of object contours in images---a common and important pre-processing step for later higher-level computer vision tasks like object detection and human action recognition---has become easier. However, acquisition noise in captured depth images means that detected contours suffer from unavoidable errors. In this paper, we propose to jointly denoise and compress detected contours in an image for bandwidth-constrained transmission to a client, who can then carry out aforementioned application-specific tasks using the decoded contours as input. We first prove theoretically that in general a joint denoising / compression approach can outperform a separate two-stage approach that first denoises then encodes contours lossily. Adopting a joint approach, we first propose a burst error model that models typical errors encountered in an observed string y of directional edges. We then formulate a rate-constrained maximum a posteriori (MAP) problem that trades off the posterior probability p(x'|y) of an estimated string x' given y with its code rate R(x'). We design a dynamic programming (DP) algorithm that solves the posed problem optimally, and propose a compact context representation called total suffix tree (TST) that can reduce complexity of the algorithm dramatically. Experimental results show that our joint denoising / compression scheme outperformed a competing separate scheme in rate-distortion performance noticeably.
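The rate-constrained MAP trade-off described above can be illustrated with a toy search that scores each candidate string by -log p(x'|y) + λ·R(x'). The sketch below uses an i.i.d. symbol error channel and a crude run-length proxy for the code rate instead of the paper's burst error model, dynamic program, and total suffix tree; the alphabet, observation, and λ are invented for illustration.

```python
import itertools
import math

# Toy alphabet of directional edges and a short observed (possibly noisy) string.
ALPHABET = "LRUD"
observed = "RRULD"

def neg_log_posterior(candidate, observed, p_error=0.1):
    # Simple i.i.d. error channel (the paper uses a burst error model instead).
    nll = 0.0
    for c, o in zip(candidate, observed):
        nll -= math.log(1 - p_error if c == o else p_error / 3)
    return nll

def code_rate(candidate):
    # Crude stand-in for R(x'): strings with fewer direction changes are
    # assumed cheaper to encode with a context-based coder.
    return 1.0 + sum(1 for a, b in zip(candidate, candidate[1:]) if a != b)

lam = 0.8  # Lagrange multiplier trading off fidelity against rate
best = min(
    ("".join(c) for c in itertools.product(ALPHABET, repeat=len(observed))),
    key=lambda c: neg_log_posterior(c, observed) + lam * code_rate(c),
)
print("jointly denoised / compressed estimate:", best)
```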
Arba-Mosquera, Samuel; Aslanides, Ioannis M.
2012-01-01
Purpose To analyze the effects of eye-tracker performance on pulse positioning errors during refractive surgery. Methods A comprehensive model has been developed that directly considers eye movements (saccades, vestibular, optokinetic, vergence, and miniature movements) as well as eye-tracker acquisition rate, eye-tracker latency time, scanner positioning time, laser firing rate, and laser trigger delay. Results Eye-tracker acquisition rates below 100 Hz correspond to pulse positioning errors above 1.5 mm. Eye-tracker latency times of up to about 15 ms correspond to pulse positioning errors of up to 3.5 mm. Scanner positioning times of up to about 9 ms correspond to pulse positioning errors of up to 2 mm. Laser firing rates faster than the eye-tracker acquisition rate roughly double pulse-positioning errors. Laser trigger delays of up to about 300 μs have minor to no impact on pulse-positioning errors. Conclusions The proposed model can be used for comparison of laser systems used for ablation processes. Due to the pseudo-random nature of eye movements, positioning errors of single pulses are much larger than the decentrations observed in clinical settings. There is no single parameter that 'alone' minimizes the positioning error; it is the optimal combination of several parameters that minimizes the error. The results of this analysis are important for understanding the limitations of correcting very irregular ablation patterns.
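A rough feel for why acquisition rate, latency, and scanner settling matter can be obtained from a back-of-the-envelope model in which the pulse lands where the eye was one total system delay ago. The sketch below assumes a globe radius of about 12 mm and a modest saccadic speed; neither value is taken from the paper, whose model is far more detailed.

```python
import math

EYE_RADIUS_MM = 12.0            # assumed globe radius
saccade_speed_deg_s = 100.0     # assumed angular speed during a small saccade

def positioning_error_mm(acq_rate_hz, latency_ms, scanner_ms):
    # Worst-case delay: one acquisition interval plus latency plus scanner settle time.
    delay_s = 1.0 / acq_rate_hz + latency_ms / 1e3 + scanner_ms / 1e3
    angle_rad = math.radians(saccade_speed_deg_s * delay_s)
    return EYE_RADIUS_MM * angle_rad   # arc length on the cornea

for rate in (60, 100, 1000):
    err = positioning_error_mm(rate, latency_ms=5, scanner_ms=3)
    print(f"{rate:5d} Hz tracker -> ~{err:.2f} mm pulse positioning error")
```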
Failure analysis and modeling of a multicomputer system. M.S. Thesis
NASA Technical Reports Server (NTRS)
Subramani, Sujatha Srinivasan
1990-01-01
This thesis describes the results of an extensive measurement-based analysis of real error data collected from a 7-machine DEC VaxCluster multicomputer system. In addition to evaluating basic system error and failure characteristics, we develop reward models to analyze the impact of failures and errors on the system. The results show that, although 98 percent of errors in the shared resources recover, they result in 48 percent of all system failures. The analysis of rewards shows that the expected reward rate for the VaxCluster decreases to 0.5 in 100 days for a 3-out-of-7 model, which is well over 100 times that for a 7-out-of-7 model. A comparison of the reward rates for a range of k-out-of-n models indicates that the maximum increase in reward rate (0.25) occurs in going from the 6-out-of-7 model to the 5-out-of-7 model. The analysis also shows that software errors have the lowest reward (0.2 vs. 0.91 for network errors). The large loss in reward rate for software errors is due to the fact that a large proportion (94 percent) of software errors lead to failure. In comparison, the high reward rate for network errors is due to fast recovery from a majority of these errors (median recovery duration is 0 seconds).
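The k-out-of-n reward comparisons above can be illustrated with a much simpler availability model: if each machine is up independently with some probability, the expected reward rate of a k-out-of-n configuration can be approximated by the probability that at least k machines are up. The sketch below uses an assumed single-machine availability; the thesis itself derives rewards from measured failure and recovery data.

```python
from math import comb

def k_out_of_n_reward(p_up: float, n: int, k: int) -> float:
    """Probability that at least k of n machines are operational
    (a simple stand-in for the expected reward rate of a k-out-of-n model)."""
    return sum(comb(n, j) * p_up**j * (1 - p_up)**(n - j) for j in range(k, n + 1))

p = 0.90  # assumed single-machine availability, not a measured value
for k in range(7, 0, -1):
    print(f"{k}-out-of-7 reward rate: {k_out_of_n_reward(p, 7, k):.3f}")
```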
NASA Technical Reports Server (NTRS)
Denning, Peter J.
1988-01-01
Accidental overwriting of files or of memory regions belonging to other programs, browsing of personal files by superusers, Trojan horses, and viruses are examples of breakdowns in workstations and personal computers that would be significantly reduced by memory protection. Memory protection is the capability of an operating system and supporting hardware to delimit segments of memory, to control whether segments can be read from or written into, and to confine accesses of a program to its segments alone. The absence of memory protection in many operating systems today is the result of a bias toward a narrow definition of performance as maximum instruction-execution rate. A broader definition, including the time to get the job done, makes clear that cost of recovery from memory interference errors reduces expected performance. The mechanisms of memory protection are well understood, powerful, efficient, and elegant. They add to performance in the broad sense without reducing instruction execution rate.
Angular Rate Optimal Design for the Rotary Strapdown Inertial Navigation System
Yu, Fei; Sun, Qian
2014-01-01
Because it maintains high precision over long durations, the rotary strapdown inertial navigation system (RSINS) has been widely used in submarines and surface ships. Its core technology, the rotation scheme, has been studied by numerous researchers. It is well known that the rotating angular rate, one of the key parameters, strongly influences the effectiveness of error modulation. In order to design the optimal rotating angular rate of the RSINS, the relationship between the rotating angular rate and the velocity error of the RSINS was analyzed in detail in this paper, based on the Laplace transform and the inverse Laplace transform. The analysis showed that the velocity error of the RSINS depends not only on the sensor error but also on the rotating angular rate. In order to minimize the velocity error, the rotating angular rate of the RSINS should match the sensor error. One optimal design method for the rotating rate of the RSINS was also proposed in this paper. Simulation and experimental results verified the validity and superiority of this optimal design method for the rotating rate of the RSINS. PMID:24759115
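The intuition behind rotation-based error modulation can be shown with a one-axis toy model: a constant accelerometer bias that is carousel-rotated at angular rate ω integrates to a bounded velocity error of roughly b/ω instead of growing linearly with time. The sketch below illustrates this; the bias value and rotation rates are arbitrary, and the paper's Laplace-domain analysis covers the full error dynamics.

```python
import numpy as np

b = 1e-3                              # assumed accelerometer bias, m/s^2
t = np.linspace(0, 3600, 36001)       # one hour of navigation

def velocity_error(w_deg_s):
    if w_deg_s == 0:
        return b * t                   # unmodulated bias integrates linearly with time
    w = np.deg2rad(w_deg_s)
    return b * np.sin(w * t) / w       # integral of b*cos(w*t): bounded by b/w

for w in (0, 1, 6, 30):
    print(f"rotation rate {w:>2} deg/s -> max velocity error "
          f"{np.max(np.abs(velocity_error(w))):.4f} m/s")
```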
Atik, Alp
2013-10-01
In 2006, the National Inpatient Medication Chart (NIMC) was introduced as a uniform medication chart in Australian public hospitals with the aim of reducing prescription error. The rate of regular medication prescription error in the NIMC was assessed. Data was collected using the NIMC Audit Tool and analyzed with respect to causes of error per medication prescription and per medication chart. The following prescription requirements were assessed: date, generic drug name, route of administration, dose, frequency, administration time, indication, signature, name and contact details. A total of 1877 medication prescriptions were reviewed. 1653 prescriptions (88.07%) had no contact number, 1630 (86.84%) did not have an indication, 1230 and 675 (35.96%) used a drug's trade name. Within 261 medication charts, all had at least one entry, which did not include an indication, 258 (98.85%) had at least one entry, which did not have a contact number and 200 (76.63%) had at least one entry, which used a trade name. The introduction of a uniform national medication chart is a positive step, but more needs to be done to address the root causes of prescription error. © 2012 John Wiley & Sons Ltd.
Efficient preparation of large-block-code ancilla states for fault-tolerant quantum computation
NASA Astrophysics Data System (ADS)
Zheng, Yi-Cong; Lai, Ching-Yi; Brun, Todd A.
2018-03-01
Fault-tolerant quantum computation (FTQC) schemes that use multiqubit large block codes can potentially reduce the resource overhead to a great extent. A major obstacle is the requirement for a large number of clean ancilla states of different types without correlated errors inside each block. These ancilla states are usually logical stabilizer states of the data-code blocks, which are generally difficult to prepare if the code size is large. Previously, we have proposed an ancilla distillation protocol for Calderbank-Shor-Steane (CSS) codes by classical error-correcting codes. It was assumed that the quantum gates in the distillation circuit were perfect; however, in reality, noisy quantum gates may introduce correlated errors that are not treatable by the protocol. In this paper, we show that additional postselection by another classical error-detecting code can be applied to remove almost all correlated errors. Consequently, the revised protocol is fully fault tolerant and capable of preparing a large set of stabilizer states sufficient for FTQC using large block codes. At the same time, the yield rate can be boosted from O(t^{-2}) to O(1) in practice for an [[n, k, d = 2t + 1]] code.
Reverse Transcription Errors and RNA-DNA Differences at Short Tandem Repeats.
Fungtammasan, Arkarachai; Tomaszkiewicz, Marta; Campos-Sánchez, Rebeca; Eckert, Kristin A; DeGiorgio, Michael; Makova, Kateryna D
2016-10-01
Transcript variation has important implications for organismal function in health and disease. Most transcriptome studies focus on assessing variation in gene expression levels and isoform representation. Variation at the level of transcript sequence is caused by RNA editing and transcription errors, and leads to nongenetically encoded transcript variants, or RNA-DNA differences (RDDs). Such variation has been understudied, in part because its detection is obscured by reverse transcription (RT) and sequencing errors. It has only been evaluated for intertranscript base substitution differences. Here, we investigated transcript sequence variation for short tandem repeats (STRs). We developed the first maximum-likelihood estimator (MLE) to infer RT error and RDD rates, taking next generation sequencing error rates into account. Using the MLE, we empirically evaluated RT error and RDD rates for STRs in a large-scale DNA and RNA replicated sequencing experiment conducted in a primate species. The RT error rates increased exponentially with STR length and were biased toward expansions. The RDD rates were approximately 1 order of magnitude lower than the RT error rates. The RT error rates estimated with the MLE from a primate data set were concordant with those estimated with an independent method, barcoded RNA sequencing, from a Caenorhabditis elegans data set. Our results have important implications for medical genomics, as STR allelic variation is associated with >40 diseases. STR nonallelic transcript variation can also contribute to disease phenotype. The MLE and empirical rates presented here can be used to evaluate the probability of disease-associated transcripts arising due to RDD. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
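The flavour of such an estimator can be conveyed by a toy maximum-likelihood fit in which DNA reads constrain the sequencing error rate and RNA reads constrain the combined RT-error/RDD rate. The counts, the single-locus setup, and the lumping of RT error with RDD in the sketch below are illustrative simplifications; the paper's MLE separates the two using replicated libraries.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom

# Illustrative counts at one STR locus (not from the paper):
dna_noncanonical, dna_total = 30, 2000    # DNA reads: sequencing error only
rna_noncanonical, rna_total = 120, 2000   # RNA reads: sequencing + RT error + RDD

def neg_log_lik(params):
    seq_err, rt_plus_rdd = params
    if not (0 < seq_err < 1 and 0 < rt_plus_rdd < 1):
        return np.inf
    p_dna = seq_err                                   # deviation prob. in DNA library
    p_rna = 1 - (1 - seq_err) * (1 - rt_plus_rdd)     # deviation prob. in RNA library
    return -(binom.logpmf(dna_noncanonical, dna_total, p_dna)
             + binom.logpmf(rna_noncanonical, rna_total, p_rna))

fit = minimize(neg_log_lik, x0=[0.01, 0.02], method="Nelder-Mead")
print("estimated sequencing error rate :", round(float(fit.x[0]), 4))
print("estimated RT-error + RDD rate   :", round(float(fit.x[1]), 4))
```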
Narayan, Sreenath; Kalhan, Satish C.; Wilson, David L.
2012-01-01
Purpose To reduce swaps in fat-water separation methods, a particular issue on 7T small animal scanners due to field inhomogeneity, using image postprocessing innovations that detect and correct errors in the B0 field map. Materials and Methods Fat-water decompositions and B0 field maps were computed for images of mice acquired on a 7T Bruker BioSpec scanner, using a computationally efficient method for solving the Markov Random Field formulation of the multi-point Dixon model. The B0 field maps were processed with a novel hole-filling method, based on edge strength between regions, and a novel k-means method, based on field-map intensities, which were iteratively applied to automatically detect and reinitialize error regions in the B0 field maps. Errors were manually assessed in the B0 field maps and chemical parameter maps both before and after error correction. Results Partial swaps were found in 6% of images when processed with FLAWLESS. After REFINED correction, only 0.7% of images contained partial swaps, resulting in an 88% decrease in error rate. Complete swaps were not problematic. Conclusion Ex post facto error correction is a viable supplement to a priori techniques for producing globally smooth B0 field maps, without partial swaps. With our processing pipeline, it is possible to process image volumes rapidly, robustly, and almost automatically. PMID:23023815
Narayan, Sreenath; Kalhan, Satish C; Wilson, David L
2013-05-01
To reduce swaps in fat-water separation methods, a particular issue on 7 Tesla (T) small animal scanners due to field inhomogeneity, using image postprocessing innovations that detect and correct errors in the B0 field map. Fat-water decompositions and B0 field maps were computed for images of mice acquired on a 7T Bruker BioSpec scanner, using a computationally efficient method for solving the Markov Random Field formulation of the multi-point Dixon model. The B0 field maps were processed with a novel hole-filling method, based on edge strength between regions, and a novel k-means method, based on field-map intensities, which were iteratively applied to automatically detect and reinitialize error regions in the B0 field maps. Errors were manually assessed in the B0 field maps and chemical parameter maps both before and after error correction. Partial swaps were found in 6% of images when processed with FLAWLESS. After REFINED correction, only 0.7% of images contained partial swaps, resulting in an 88% decrease in error rate. Complete swaps were not problematic. Ex post facto error correction is a viable supplement to a priori techniques for producing globally smooth B0 field maps, without partial swaps. With our processing pipeline, it is possible to process image volumes rapidly, robustly, and almost automatically. Copyright © 2012 Wiley Periodicals, Inc.
45 CFR 98.102 - Content of Error Rate Reports.
Code of Federal Regulations, 2013 CFR
2013-10-01
....102 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia or...
45 CFR 98.102 - Content of Error Rate Reports.
Code of Federal Regulations, 2014 CFR
2014-10-01
....102 Public Welfare Department of Health and Human Services GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia or...
45 CFR 98.102 - Content of Error Rate Reports.
Code of Federal Regulations, 2012 CFR
2012-10-01
....102 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia or...
45 CFR 98.102 - Content of Error Rate Reports.
Code of Federal Regulations, 2011 CFR
2011-10-01
....102 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia or...
The performance of the standard rate turn (SRT) by student naval helicopter pilots.
Chapman, F; Temme, L A; Still, D L
2001-04-01
During flight training, student naval helicopter pilots learn the use of flight instruments through a prescribed series of simulator training events. The training simulator is a 6-degrees-of-freedom, motion-based, high-fidelity instrument trainer. From the final basic instrument simulator flights of student pilots, we selected for evaluation and analysis their performance of the Standard Rate Turn (SRT), a routine flight maneuver. SRT performance was scored using the average errors from target values, and the standard deviations, of airspeed, altitude, and heading. These average errors and standard deviations were used in a multivariate analysis of variance (MANOVA) to evaluate the effects of three independent variables: 1) direction of turn (left vs. right), 2) degree of turn (180 vs. 360 degrees), and 3) segment of turn (roll-in, first 30 s, last 30 s, and roll-out of turn). Only the main effects of the three independent variables were significant; there were no significant interactions. This result greatly reduces the number of different conditions that should be scored separately for the evaluation of SRT performance. The results also showed that the magnitude of the heading and altitude errors at the beginning of the SRT correlated with the magnitude of the heading and altitude errors throughout the turn. This result suggests that for the turn to be well executed, it is important for it to begin with little error in these two response parameters. The observations reported here should be considered when establishing SRT performance norms and comparing student scores. Furthermore, it seems easier for pilots to maintain good performance than to correct poor performance.
Inhibitory saccadic dysfunction is associated with cerebellar injury in multiple sclerosis.
Kolbe, Scott C; Kilpatrick, Trevor J; Mitchell, Peter J; White, Owen; Egan, Gary F; Fielding, Joanne
2014-05-01
Cognitive dysfunction is common in patients with multiple sclerosis (MS). Saccadic eye movement paradigms such as antisaccades (AS) can sensitively interrogate cognitive function, in particular, the executive and attentional processes of response selection and inhibition. Although we have previously demonstrated significant deficits in the generation of AS in MS patients, the neuropathological changes underlying these deficits were not elucidated. In this study, 24 patients with relapsing-remitting MS underwent testing using an AS paradigm. Rank correlation and multiple regression analyses were subsequently used to determine whether AS errors in these patients were associated with: (i) neurological and radiological abnormalities, as measured by standard clinical techniques, (ii) cognitive dysfunction, and (iii) regionally specific cerebral white and gray-matter damage. Although AS error rates in MS patients did not correlate with clinical disability (using the Expanded Disability Status Score), T2 lesion load or brain parenchymal fraction, AS error rate did correlate with performance on the Paced Auditory Serial Addition Task and the Symbol Digit Modalities Test, neuropsychological tests commonly used in MS. Further, voxel-wise regression analyses revealed associations between AS errors and reduced fractional anisotropy throughout most of the cerebellum, and increased mean diffusivity in the cerebellar vermis. Region-wise regression analyses confirmed that AS errors also correlated with gray-matter atrophy in the cerebellum right VI subregion. These results support the use of the AS paradigm as a marker for cognitive dysfunction in MS and implicate structural and microstructural changes to the cerebellum as a contributing mechanism for AS deficits in these patients. Copyright © 2013 Wiley Periodicals, Inc.
What Can We Learn From Point-of-Care Blood Glucose Values Deleted and Repeated by Nurses?
Corl, Dawn; Yin, Tom; Ulibarri, May; Lien, Heather; Tylee, Tracy; Chao, Jing; Wisse, Brent E
2018-03-01
Hospitals rely on point-of-care (POC) blood glucose (BG) values to guide important decisions related to insulin administration and glycemic control. Evaluation of POC BG in hospitalized patients is associated with measurement and operator errors. Based on a previous quality improvement (QI) project we introduced an option for operators to delete and repeat POC BG values suspected as erroneous. The current project evaluated our experience with deleted POC BG values over a 2-year period. A retrospective QI project included all patients hospitalized at two regional academic medical centers in the Pacific Northwest during 2014 and 2015. Laboratory Medicine POC BG data were reviewed to evaluate all inpatient episodes of deleted and repeated POC BG. Inpatient operators choose to delete and repeat only 0.8% of all POC BG tests. Hypoglycemic and extreme hyperglycemic BG values are more likely to be deleted and repeated. Of initial values <40 mg/dL, 58% of deleted values (18% of all values) are errors. Of values >400 mg/dL, 40% of deleted values (5% of all values) are errors. Not all repeated POC BG values are first deleted. Optimal use of the option to delete and repeat POC BG values <40 mg/dL could decrease reported rates of severe hypoglycemia by as much as 40%. This project demonstrates that operators are frequently able to identify POC BG values that are measurement/operator errors. Eliminating these errors significantly reduces documented rates of severe hypoglycemia and hyperglycemia, and has the potential to improve patient safety.
Impact of an antiretroviral stewardship strategy on medication error rates.
Shea, Katherine M; Hobbs, Athena Lv; Shumake, Jason D; Templet, Derek J; Padilla-Tolentino, Eimeira; Mondy, Kristin E
2018-05-02
The impact of an antiretroviral stewardship strategy on medication error rates was evaluated. This single-center, retrospective, comparative cohort study included patients at least 18 years of age infected with human immunodeficiency virus (HIV) who were receiving antiretrovirals and admitted to the hospital. A multicomponent approach was developed and implemented and included modifications to the order-entry and verification system, pharmacist education, and a pharmacist-led antiretroviral therapy checklist. Pharmacists performed prospective audits using the checklist at the time of order verification. To assess the impact of the intervention, a retrospective review was performed before and after implementation to assess antiretroviral errors. Totals of 208 and 24 errors were identified before and after the intervention, respectively, resulting in a significant reduction in the overall error rate ( p < 0.001). In the postintervention group, significantly lower medication error rates were found in both patient admissions containing at least 1 medication error ( p < 0.001) and those with 2 or more errors ( p < 0.001). Significant reductions were also identified in each error type, including incorrect/incomplete medication regimen, incorrect dosing regimen, incorrect renal dose adjustment, incorrect administration, and the presence of a major drug-drug interaction. A regression tree selected ritonavir as the only specific medication that best predicted more errors preintervention ( p < 0.001); however, no antiretrovirals reliably predicted errors postintervention. An antiretroviral stewardship strategy for hospitalized HIV patients including prospective audit by staff pharmacists through use of an antiretroviral medication therapy checklist at the time of order verification decreased error rates. Copyright © 2018 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
Balasuriya, Lilanthi; Vyles, David; Bakerman, Paul; Holton, Vanessa; Vaidya, Vinay; Garcia-Filion, Pamela; Westdorp, Joan; Sanchez, Christine; Kurz, Rhonda
2017-09-01
An enhanced dose range checking (DRC) system was developed to evaluate prescription error rates in the pediatric intensive care unit and the pediatric cardiovascular intensive care unit. An enhanced DRC system incorporating "soft" and "hard" alerts was designed and implemented. Practitioner responses to alerts for patients admitted to the pediatric intensive care unit and the pediatric cardiovascular intensive care unit were retrospectively reviewed. Alert rates increased from 0.3% to 3.4% after "go-live" (P < 0.001). Before go-live, all alerts were soft alerts. In the period after go-live, 68% of alerts were soft alerts and 32% were hard alerts. Before go-live, providers reduced doses only 1 time for every 10 dose alerts. After implementation of the enhanced computerized physician order entry system, the practitioners responded to soft alerts by reducing doses to more appropriate levels in 24.7% of orders (70/283), compared with 10% (3/30) before go-live (P = 0.0701). The practitioners deleted orders in 9.5% of cases (27/283) after implementation of the enhanced DRC system, as compared with no cancelled orders before go-live (P = 0.0774). Medication orders that triggered a soft alert were submitted unmodified in 65.7% (186/283) as compared with 90% (27/30) of orders before go-live (P = 0.0067). After go-live, 28.7% of hard alerts resulted in a reduced dose, 64% resulted in a cancelled order, and 7.4% were submitted as written. Before go-live, alerts were often clinically irrelevant. After go-live, there was a statistically significant decrease in orders that were submitted unmodified and an increase in the number of orders that were reduced or cancelled.
NASA Astrophysics Data System (ADS)
Young, A. J.; Kuiken, T. A.; Hargrove, L. J.
2014-10-01
Objective. The purpose of this study was to determine the contribution of electromyography (EMG) data, in combination with a diverse array of mechanical sensors, to locomotion mode intent recognition in transfemoral amputees using powered prostheses. Additionally, we determined the effect of adding time history information using a dynamic Bayesian network (DBN) for both the mechanical and EMG sensors. Approach. EMG signals from the residual limbs of amputees have been proposed to enhance pattern recognition-based intent recognition systems for powered lower limb prostheses, but mechanical sensors on the prosthesis—such as inertial measurement units, position and velocity sensors, and load cells—may be just as useful. EMG and mechanical sensor data were collected from 8 transfemoral amputees using a powered knee/ankle prosthesis over basic locomotion modes such as walking, slopes and stairs. An offline study was conducted to determine the benefit of different sensor sets for predicting intent. Main results. EMG information was not as accurate alone as mechanical sensor information (p < 0.05) for any classification strategy. However, EMG in combination with the mechanical sensor data did significantly reduce intent recognition errors (p < 0.05) both for transitions between locomotion modes and steady-state locomotion. The sensor time history (DBN) classifier significantly reduced error rates compared to a linear discriminant classifier for steady-state steps, without increasing the transitional error, for both EMG and mechanical sensors. Combining EMG and mechanical sensor data with sensor time history reduced the average transitional error from 18.4% to 12.2% and the average steady-state error from 3.8% to 1.0% when classifying level-ground walking, ramps, and stairs in eight transfemoral amputee subjects. Significance. These results suggest that a neural interface in combination with time history methods for locomotion mode classification can enhance intent recognition performance; this strategy should be considered for future real-time experiments.
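The offline comparison described above amounts to training a classifier on different feature subsets and comparing error rates. A minimal sketch with a linear discriminant classifier (the study's non-time-history baseline) on synthetic stand-in features is shown below; the feature counts, class structure, and data are invented for illustration.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in data: 3 locomotion modes, 200 strides each,
# 12 mechanical-sensor features and 8 EMG features per stride.
n_per_class, n_mech, n_emg = 200, 12, 8
X_list, y_list = [], []
for mode in range(3):
    mech = rng.normal(loc=mode, scale=1.0, size=(n_per_class, n_mech))
    emg = rng.normal(loc=0.5 * mode, scale=1.5, size=(n_per_class, n_emg))
    X_list.append(np.hstack([mech, emg]))
    y_list.append(np.full(n_per_class, mode))
X, y = np.vstack(X_list), np.concatenate(y_list)

for name, cols in [("mechanical only", slice(0, n_mech)),
                   ("EMG only", slice(n_mech, n_mech + n_emg)),
                   ("mechanical + EMG", slice(0, n_mech + n_emg))]:
    acc = cross_val_score(LinearDiscriminantAnalysis(), X[:, cols], y, cv=5).mean()
    print(f"{name:18s} classification error rate: {1 - acc:.3f}")
```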
Grammatical Constraints on Language Switching: Language Control is not Just Executive Control
Gollan, Tamar H.; Goldrick, Matthew
2016-01-01
The current study investigated the roles of grammaticality and executive control on bilingual language selection by examining production speed and failures of language control, or intrusion errors (e.g., saying el instead of the), in young and aging bilinguals. Production of mixed-language connected speech was elicited by asking Spanish-English bilinguals to read aloud paragraphs that had mostly grammatical (conforming to naturally occurring constraints) or mostly ungrammatical (haphazard mixing) language switches, and low or high switching rate. Mixed-language speech was slower and less accurate when switch-rate was high, but especially (for speed) or only (for intrusion errors) if switches were also ungrammatical. Executive function ability (measured with a variety of tasks in young bilinguals in Experiment 1, and aging bilinguals in Experiment 2), slowed production and increased intrusion rate in a generalized fashion, but with little or no interaction with grammaticality. Aging effects appeared to reflect reduced monitoring ability (evidenced by a lower rate of self-corrected intrusions). These results demonstrate robust effects of grammatical encoding on language selection, and imply that executive control influences bilingual language production only after sentence planning and lexical selection. PMID:27667899
Pilots Rate Augmented Generalized Predictive Control for Reconfiguration
NASA Technical Reports Server (NTRS)
Soloway, Don; Haley, Pam
2004-01-01
The objective of this paper is to report the results from the research being conducted in reconfigurable flight controls at NASA Ames. A study was conducted with three NASA Dryden test pilots to evaluate two approaches to reconfiguring an aircraft's control system when failures occur in the control surfaces and engine. NASA Ames is investigating both a Neural Generalized Predictive Control scheme and a Neural Network based Dynamic Inverse controller. This paper highlights the Predictive Control scheme, where a simple augmentation to reduce the steady-state error to zero made the neural network predictor model redundant for the task. Instead of using a neural network predictor model, a nominal single-point linear model was used and then augmented with an error corrector. This paper shows that the Generalized Predictive Controller and the Dynamic Inverse Neural Network controller perform equally well at reconfiguration, but with lower rate requirements on the actuators. Also presented are the pilot ratings for each controller for various failure scenarios and two samples of the required control actuation during reconfiguration. Finally, the paper concludes by stepping through the Generalized Predictive Control's reconfiguration process for an elevator failure.
Spencer, Bruce D
2012-06-01
Latent class models are increasingly used to assess the accuracy of medical diagnostic tests and other classifications when no gold standard is available and the true state is unknown. When the latent class is treated as the true class, the latent class models provide measures of components of accuracy including specificity and sensitivity and their complements, type I and type II error rates. The error rates according to the latent class model differ from the true error rates, however, and empirical comparisons with a gold standard suggest the true error rates often are larger. We investigate conditions under which the true type I and type II error rates are larger than those provided by the latent class models. Results from Uebersax (1988, Psychological Bulletin 104, 405-416) are extended to accommodate random effects and covariates affecting the responses. The results are important for interpreting the results of latent class analyses. An error decomposition is presented that incorporates an error component from invalidity of the latent class model. © 2011, The International Biometric Society.
Han, Mira V; Thomas, Gregg W C; Lugo-Martinez, Jose; Hahn, Matthew W
2013-08-01
Current sequencing methods produce large amounts of data, but genome assemblies constructed from these data are often fragmented and incomplete. Incomplete and error-filled assemblies result in many annotation errors, especially in the number of genes present in a genome. This means that methods attempting to estimate rates of gene duplication and loss often will be misled by such errors and that rates of gene family evolution will be consistently overestimated. Here, we present a method that takes these errors into account, allowing one to accurately infer rates of gene gain and loss among genomes even with low assembly and annotation quality. The method is implemented in the newest version of the software package CAFE, along with several other novel features. We demonstrate the accuracy of the method with extensive simulations and reanalyze several previously published data sets. Our results show that errors in genome annotation do lead to higher inferred rates of gene gain and loss but that CAFE 3 sufficiently accounts for these errors to provide accurate estimates of important evolutionary parameters.
A Spanish Pillbox App for Elderly Patients Taking Multiple Medications: Randomized Controlled Trial
Mira, José Joaquín; Navarro, Isabel; Botella, Federico; Borrás, Fernando; Orozco, Domingo; Iglesias-Alonso, Fuencisla; Pérez-Pérez, Pastora; Lorenzo, Susana; Toro, Nuria
2014-01-01
Background Nonadherence and medication errors are common among patients with complex drug regimens. Apps for smartphones and tablets are effective for improving adherence, but they have not been tested in elderly patients with complex chronic conditions and who typically have less experience with this type of technology. Objective The objective of this study was to design, implement, and evaluate a medication self-management app (called ALICE) for elderly patients taking multiple medications with the intention of improving adherence and safe medication use. Methods A single-blind randomized controlled trial was conducted with a control and an experimental group (N=99) in Spain in 2013. The characteristics of ALICE were specified based on the suggestions of 3 nominal groups with a total of 23 patients and a focus group with 7 professionals. ALICE was designed for Android and iOS to allow for the personalization of prescriptions and medical advice, showing images of each of the medications (the packaging and the medication itself) together with alerts and multiple reminders for each alert. The randomly assigned patients in the control group received oral and written information on the safe use of their medications and the patients in the experimental group used ALICE for 3 months. Pre and post measures included rate of missed doses and medication errors reported by patients, scores from the 4-item Morisky Medication Adherence Scale (MMAS-4), level of independence, self-perceived health status, and biochemical test results. In the experimental group, data were collected on their previous experience with information and communication technologies, their rating of ALICE, and their perception of the level of independence they had achieved. The intergroup intervention effects were calculated by univariate linear models and ANOVA, with the pre to post intervention differences as the dependent variables. Results Data were obtained from 99 patients (48 and 51 in the control and experimental groups, respectively). Patients in the experimental group obtained better MMAS-4 scores (P<.001) and reported fewer missed doses of medication (P=.02). ALICE only helped to significantly reduce medication errors in patients with an initially higher rate of errors (P<.001). Patients with no experience with information and communication technologies reported better adherence (P<.001), fewer missed doses (P<.001), and fewer medication errors (P=.02). The mean satisfaction score for ALICE was 8.5 out of 10. In all, 45 of 51 patients (88%) felt that ALICE improved their independence in managing their medications. Conclusions The ALICE app improves adherence, helps reduce rates of forgetting and of medication errors, and increases perceived independence in managing medication. Elderly patients with no previous experience with information and communication technologies are capable of effectively using an app designed to help them take their medicine more safely. Trial Registration Clinicaltrials.gov NCT02071498; http://clinicaltrials.gov/ct2/show/NCT02071498 (Archived by WebCite at http://www.webcitation.org/6OJjdHVhD). PMID:24705022
A Spanish pillbox app for elderly patients taking multiple medications: randomized controlled trial.
Mira, José Joaquín; Navarro, Isabel; Botella, Federico; Borrás, Fernando; Nuño-Solinís, Roberto; Orozco, Domingo; Iglesias-Alonso, Fuencisla; Pérez-Pérez, Pastora; Lorenzo, Susana; Toro, Nuria
2014-04-04
Nonadherence and medication errors are common among patients with complex drug regimens. Apps for smartphones and tablets are effective for improving adherence, but they have not been tested in elderly patients with complex chronic conditions and who typically have less experience with this type of technology. The objective of this study was to design, implement, and evaluate a medication self-management app (called ALICE) for elderly patients taking multiple medications with the intention of improving adherence and safe medication use. A single-blind randomized controlled trial was conducted with a control and an experimental group (N=99) in Spain in 2013. The characteristics of ALICE were specified based on the suggestions of 3 nominal groups with a total of 23 patients and a focus group with 7 professionals. ALICE was designed for Android and iOS to allow for the personalization of prescriptions and medical advice, showing images of each of the medications (the packaging and the medication itself) together with alerts and multiple reminders for each alert. The randomly assigned patients in the control group received oral and written information on the safe use of their medications and the patients in the experimental group used ALICE for 3 months. Pre and post measures included rate of missed doses and medication errors reported by patients, scores from the 4-item Morisky Medication Adherence Scale (MMAS-4), level of independence, self-perceived health status, and biochemical test results. In the experimental group, data were collected on their previous experience with information and communication technologies, their rating of ALICE, and their perception of the level of independence they had achieved. The intergroup intervention effects were calculated by univariate linear models and ANOVA, with the pre to post intervention differences as the dependent variables. Data were obtained from 99 patients (48 and 51 in the control and experimental groups, respectively). Patients in the experimental group obtained better MMAS-4 scores (P<.001) and reported fewer missed doses of medication (P=.02). ALICE only helped to significantly reduce medication errors in patients with an initially higher rate of errors (P<.001). Patients with no experience with information and communication technologies reported better adherence (P<.001), fewer missed doses (P<.001), and fewer medication errors (P=.02). The mean satisfaction score for ALICE was 8.5 out of 10. In all, 45 of 51 patients (88%) felt that ALICE improved their independence in managing their medications. The ALICE app improves adherence, helps reduce rates of forgetting and of medication errors, and increases perceived independence in managing medication. Elderly patients with no previous experience with information and communication technologies are capable of effectively using an app designed to help them take their medicine more safely. Clinicaltrials.gov NCT02071498; http://clinicaltrials.gov/ct2/show/NCT02071498.
Derivation of an analytic expression for the error associated with the noise reduction rating
NASA Astrophysics Data System (ADS)
Murphy, William J.
2005-04-01
Hearing protection devices are assessed using the Real Ear Attenuation at Threshold (REAT) measurement procedure for the purpose of estimating the amount of noise reduction provided when worn by a subject. The rating number provided on the protector label is a function of the mean and standard deviation of the REAT results achieved by the test subjects. If a group of subjects have a large variance, then it follows that the certainty of the rating should be correspondingly lower. No estimate of the error of a protector's rating is given by existing standards or regulations. Propagation of errors was applied to the Noise Reduction Rating to develop an analytic expression for the hearing protector rating error term. Comparison of the analytic expression for the error to the standard deviation estimated from Monte Carlo simulation of subject attenuations yielded a linear relationship across several protector types and assumptions for the variance of the attenuations.
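The propagation-of-errors idea can be checked numerically: simulate many panels of subject attenuations, compute a simplified mean-minus-two-standard-deviations rating for each, and compare the spread of those ratings with the analytic standard error. The sketch below does this for normally distributed attenuations with assumed mean, spread, and panel size; the real NRR additionally involves octave-band spectral corrections.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n_subjects = 30.0, 6.0, 20     # assumed REAT mean/std (dB) and panel size

def simplified_rating(attenuations):
    # Simplified label-rating statistic: mean attenuation minus two standard
    # deviations (the real NRR also applies spectral corrections and a 3 dB offset).
    return attenuations.mean() - 2.0 * attenuations.std(ddof=1)

# Monte Carlo estimate of the rating's standard error across repeated panels.
ratings = np.array([simplified_rating(rng.normal(mu, sigma, n_subjects))
                    for _ in range(20_000)])
mc_se = ratings.std(ddof=1)

# Propagation of errors for normal data: Var(mean) = s^2/n, Var(s) ~ s^2/(2(n-1)).
analytic_se = np.sqrt(sigma**2 / n_subjects + 4 * sigma**2 / (2 * (n_subjects - 1)))
print(f"Monte Carlo SE: {mc_se:.2f} dB, analytic SE: {analytic_se:.2f} dB")
```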
Evaluating a medical error taxonomy.
Brixey, Juliana; Johnson, Todd R; Zhang, Jiajie
2002-01-01
Healthcare has been slow in using human factors principles to reduce medical errors. The Center for Devices and Radiological Health (CDRH) recognizes that a lack of attention to human factors during product development may lead to errors that have the potential for patient injury, or even death. In response to the need for reducing medication errors, the National Coordinating Council for Medication Errors Reporting and Prevention (NCC MERP) released the NCC MERP taxonomy that provides a standard language for reporting medication errors. This project maps the NCC MERP taxonomy of medication error to MedWatch medical errors involving infusion pumps. Of particular interest are human factors associated with medical device errors. The NCC MERP taxonomy of medication errors is limited in mapping information from MEDWATCH because of the focus on the medical device and the format of reporting.
Insight into biases and sequencing errors for amplicon sequencing with the Illumina MiSeq platform.
Schirmer, Melanie; Ijaz, Umer Z; D'Amore, Rosalinda; Hall, Neil; Sloan, William T; Quince, Christopher
2015-03-31
With read lengths currently up to 2 × 300 bp, high throughput, and low sequencing costs, Illumina's MiSeq is becoming one of the most utilized sequencing platforms worldwide. The platform is manageable and affordable even for smaller labs. This enables quick turnaround on a broad range of applications such as targeted gene sequencing, metagenomics, small genome sequencing and clinical molecular diagnostics. However, Illumina error profiles are still poorly understood and programs are therefore not designed for the idiosyncrasies of Illumina data. A better knowledge of the error patterns is essential for sequence analysis and vital if we are to draw valid conclusions. Studying true genetic variation in a population sample is fundamental for understanding diseases, evolution and origin. We conducted a large study on the error patterns for the MiSeq based on 16S rRNA amplicon sequencing data. We tested state-of-the-art library preparation methods for amplicon sequencing and showed that the library preparation method and the choice of primers are the most significant sources of bias and cause distinct error patterns. Furthermore, we tested the efficiency of various error correction strategies and identified quality trimming (Sickle) combined with error correction (BayesHammer) followed by read overlapping (PANDAseq) as the most successful approach, reducing substitution error rates on average by 93%. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
Risør, Bettina Wulff; Lisby, Marianne; Sørensen, Jan
To evaluate the cost-effectiveness of an automated medication system (AMS) implemented in a Danish hospital setting. An economic evaluation was performed alongside a controlled before-and-after effectiveness study with one control ward and one intervention ward. The primary outcome measure was the number of errors in the medication administration process observed prospectively before and after implementation. To determine the difference in proportion of errors after implementation of the AMS, logistic regression was applied with the presence of error(s) as the dependent variable. Time, group, and interaction between time and group were the independent variables. The cost analysis used the hospital perspective with a short-term incremental costing approach. The total 6-month costs with and without the AMS were calculated as well as the incremental costs. The number of avoided administration errors was related to the incremental costs to obtain the cost-effectiveness ratio expressed as the cost per avoided administration error. The AMS resulted in a statistically significant reduction in the proportion of errors in the intervention ward compared with the control ward. The cost analysis showed that the AMS increased the ward's 6-month cost by €16,843. The cost-effectiveness ratio was estimated at €2.01 per avoided administration error, €2.91 per avoided procedural error, and €19.38 per avoided clinical error. The AMS was effective in reducing errors in the medication administration process at a higher overall cost. The cost-effectiveness analysis showed that the AMS was associated with affordable cost-effectiveness rates. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
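The reported ratios follow from a simple incremental calculation: the extra 6-month cost divided by the number of errors avoided. The sketch below illustrates the arithmetic; the avoided-error count is a hypothetical placeholder back-calculated from the published €2.01 figure, not a number reported in the study.

```python
# Incremental cost-effectiveness ratio: incremental cost / avoided errors.
incremental_cost_eur = 16_843        # 6-month incremental cost reported above
avoided_admin_errors = 8_380         # hypothetical count, for illustration only

cost_per_avoided_error = incremental_cost_eur / avoided_admin_errors
print(f"~EUR {cost_per_avoided_error:.2f} per avoided administration error")
```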
The statistical validity of nursing home survey findings.
Woolley, Douglas C
2011-11-01
The Medicare nursing home survey is a high-stakes process whose findings greatly affect nursing homes, their current and potential residents, and the communities they serve. Therefore, survey findings must achieve high validity. This study looked at the validity of one key assessment made during a nursing home survey: the observation of the rate of errors in administration of medications to residents (med-pass). Statistical analysis of the case under study and of alternative hypothetical cases. A skilled nursing home affiliated with a local medical school. The nursing home administrators and the medical director. Observational study. The probability that state nursing home surveyors make a Type I or Type II error in observing med-pass error rates, based on the current case and on a series of postulated med-pass error rates. In the common situation such as our case, where med-pass errors occur at slightly above a 5% rate after 50 observations, and therefore trigger a citation, the chance that the true rate remains above 5% after a large number of observations is just above 50%. If the true med-pass error rate were as high as 10%, and the survey team wished to achieve 75% accuracy in determining that a citation was appropriate, they would have to make more than 200 med-pass observations. In the more common situation where med pass errors are closer to 5%, the team would have to observe more than 2000 med-passes to achieve even a modest 75% accuracy in their determinations. In settings where error rates are low, large numbers of observations of an activity must be made to reach acceptable validity of estimates for the true rates of errors. In observing key nursing home functions with current methodology, the State Medicare nursing home survey process does not adhere to well-known principles of valid error determination. Alternate approaches in survey methodology are discussed. Copyright © 2011 American Medical Directors Association. Published by Elsevier Inc. All rights reserved.
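The sample-size argument can be reproduced with a simple binomial model of med-pass observations: given a true error rate, how often does a survey of n observations exceed the 5% citation threshold? The sketch below computes this for assumed true rates of 4% and 6%; the threshold rule and rates are illustrative, not the exact figures used in the study.

```python
from scipy.stats import binom

def prob_citation(p_true: float, n: int, threshold: float = 0.05) -> float:
    """Probability that observed med-pass errors exceed the citation threshold."""
    k_min = int(n * threshold) + 1          # smallest error count above the threshold
    return 1.0 - binom.cdf(k_min - 1, n, p_true)

for n in (50, 200, 2000):
    print(f"n={n:4d}  P(cite | true rate 6%) = {prob_citation(0.06, n):.2f}  "
          f"P(cite | true rate 4%) = {prob_citation(0.04, n):.2f}")
```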
How does aging affect the types of error made in a visual short-term memory ‘object-recall’ task?
Sapkota, Raju P.; van der Linde, Ian; Pardhan, Shahina
2015-01-01
This study examines how normal aging affects the occurrence of different types of incorrect responses in a visual short-term memory (VSTM) object-recall task. Seventeen young (Mean = 23.3 years, SD = 3.76) and 17 normally aging older (Mean = 66.5 years, SD = 6.30) adults participated. Memory stimuli comprised two or four real world objects (the memory load) presented sequentially, each for 650 ms, at random locations on a computer screen. After a 1000 ms retention interval, a test display was presented, comprising an empty box at one of the previously presented two or four memory stimulus locations. Participants were asked to report the name of the object presented at the cued location. Error rates wherein participants reported the names of objects that had been presented in the memory display but not at the cued location (non-target errors) vs. objects that had not been presented at all in the memory display (non-memory errors) were compared. Significant effects of aging, memory load and target recency on error type and absolute error rates were found. Non-target error rate was higher than non-memory error rate in both age groups, indicating that VSTM may have been more often than not populated with partial traces of previously presented items. At high memory load, non-memory error rate was higher in young participants (compared to older participants) when the memory target had been presented at the earliest temporal position. However, non-target error rates exhibited a reversed trend, i.e., greater error rates were found in older participants when the memory target had been presented at the two most recent temporal positions. Data are interpreted in terms of proactive interference (earlier examined non-target items interfering with more recent items), false memories (non-memory items which have a categorical relationship to presented items, interfering with memory targets), slot and flexible resource models, and spatial coding deficits. PMID:25653615
Clinical biochemistry laboratory rejection rates due to various types of preanalytical errors.
Atay, Aysenur; Demir, Leyla; Cuhadar, Serap; Saglam, Gulcan; Unal, Hulya; Aksun, Saliha; Arslan, Banu; Ozkan, Asuman; Sutcu, Recep
2014-01-01
Preanalytical errors, occurring along the process from test request to the admission of specimens to the laboratory, cause the rejection of samples. The aim of this study was to better explain the reasons for rejected samples with respect to their rates in certain test groups in our laboratory. This preliminary study was designed around the samples rejected over a one-year period, based on the rates and types of inappropriateness. Test requests and blood samples of the clinical chemistry, immunoassay, hematology, glycated hemoglobin, coagulation and erythrocyte sedimentation rate test units were evaluated. Types of inappropriateness were evaluated as follows: improperly labelled samples, hemolysed specimens, clotted specimens, insufficient specimen volume and total request errors. A total of 5,183,582 test requests from 1,035,743 blood collection tubes were considered. The total rejection rate was 0.65%. The rejection rate of the coagulation group was significantly higher (2.28%) than that of the other test groups (P < 0.001), including an insufficient-volume error rate of 1.38%. Rejection rates for hemolysis, clotted specimens and insufficient sample volume were found to be 8%, 24% and 34%, respectively. Total request errors, particularly unintelligible requests, accounted for 32% of the total for inpatients. The errors were especially attributable to unintelligible or inappropriate test requests, improperly labelled samples from inpatients, and blood-drawing errors, particularly insufficient specimen volume in the coagulation test group. Further studies should be performed after corrective and preventive actions to detect a possible decrease in rejected samples.
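As a quick illustration of the arithmetic behind these figures, the sketch below converts the reported overall rejection rate into approximate tube counts per rejection cause. Only the 1,035,743 tubes and the 0.65% overall rejection rate are reported figures; reading the 8%, 24% and 34% values as shares of all rejections (rather than as per-group rates) is an assumption, and the resulting counts are illustrative only.

```python
# Back-of-the-envelope reading of the reported figures (assumed interpretation).
total_tubes = 1_035_743
rejected = round(total_tubes * 0.0065)           # ~6,732 rejected tubes overall

shares_of_rejections = {"insufficient volume": 0.34, "clotted": 0.24, "hemolysed": 0.08}
for cause, share in shares_of_rejections.items():
    print(f"{cause}: ~{share * rejected:,.0f} tubes ({share:.0%} of rejections)")
```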
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kertzscher, Gustavo, E-mail: guke@dtu.dk; Andersen, Claus E., E-mail: clan@dtu.dk; Tanderup, Kari, E-mail: karitand@rm.dk
Purpose: This study presents an adaptive error detection algorithm (AEDA) for real-time in vivo point dosimetry during high dose rate (HDR) or pulsed dose rate (PDR) brachytherapy (BT) where the error identification, in contrast to existing approaches, does not depend on an a priori reconstruction of the dosimeter position. Instead, the treatment is judged based on dose rate comparisons between measurements and calculations of the most viable dosimeter position provided by the AEDA in a data driven approach. As a result, the AEDA compensates for false error cases related to systematic effects of the dosimeter position reconstruction. Given its nearly exclusive dependence on stable dosimeter positioning, the AEDA allows for a substantially simplified and time efficient real-time in vivo BT dosimetry implementation. Methods: In the event of a measured potential treatment error, the AEDA proposes the most viable dosimeter position out of alternatives to the original reconstruction by means of a data driven matching procedure between dose rate distributions. If measured dose rates do not differ significantly from the most viable alternative, the initial error indication may be attributed to a mispositioned or misreconstructed dosimeter (false error). However, if the error declaration persists, no viable dosimeter position can be found to explain the error, hence the discrepancy is more likely to originate from a misplaced or misreconstructed source applicator or from erroneously connected source guide tubes (true error). Results: The AEDA applied on two in vivo dosimetry implementations for pulsed dose rate BT demonstrated that the AEDA correctly described effects responsible for initial error indications. The AEDA was able to correctly identify the major part of all permutations of simulated guide tube swap errors and simulated shifts of individual needles from the original reconstruction. Unidentified errors corresponded to scenarios where the dosimeter position was sufficiently symmetric with respect to error and no-error source position constellations. The AEDA was able to correctly identify all false errors represented by mispositioned dosimeters contrary to an error detection algorithm relying on the original reconstruction. Conclusions: The study demonstrates that the AEDA error identification during HDR/PDR BT relies on a stable dosimeter position rather than on an accurate dosimeter reconstruction, and the AEDA’s capacity to distinguish between true and false error scenarios. The study further shows that the AEDA can offer guidance in decision making in the event of potential errors detected with real-time in vivo point dosimetry.
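A minimal sketch of the decision logic described above, assuming a hypothetical interface in which measured dose rates are compared against pre-computed dose-rate curves for the original reconstruction and a set of alternative dosimeter positions. The function name, the 10% agreement tolerance and the convention that the dictionary contains an 'original' key are all assumptions for illustration, not the published algorithm.

```python
import numpy as np

def aeda_classify(measured, candidate_calcs, rel_tol=0.10):
    """Toy sketch of the AEDA decision logic (hypothetical interface).

    measured        : measured dose rates, one value per dwell position
    candidate_calcs : dict mapping a candidate dosimeter position id to the dose rates
                      calculated for that position; assumed to include the key
                      'original' for the original reconstruction
    rel_tol         : relative deviation accepted as agreement (assumed value)
    """
    measured = np.asarray(measured, dtype=float)

    def mean_rel_dev(calc):
        calc = np.asarray(calc, dtype=float)
        return float(np.mean(np.abs(measured - calc) / np.maximum(calc, 1e-9)))

    # Rank candidate dosimeter positions by agreement with the measurement.
    devs = {pos: mean_rel_dev(calc) for pos, calc in candidate_calcs.items()}
    best_pos = min(devs, key=devs.get)

    if devs['original'] <= rel_tol:
        return 'no error', 'original'      # original reconstruction already explains the data
    if devs[best_pos] <= rel_tol:
        return 'false error', best_pos     # a shifted/misreconstructed dosimeter explains it
    return 'true error', best_pos          # more likely an applicator or guide-tube error
```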
Error rate information in attention allocation pilot models
NASA Technical Reports Server (NTRS)
Faulkner, W. H.; Onstott, E. D.
1977-01-01
The Northrop urgency decision pilot model was used in a command tracking task to compare the optimized performance of multiaxis attention allocation pilot models whose urgency functions were (1) based on tracking error alone, and (2) based on both tracking error and error rate. A matrix of system dynamics and command inputs was employed, to create both symmetric and asymmetric two axis compensatory tracking tasks. All tasks were single loop on each axis. Analysis showed that a model that allocates control attention through nonlinear urgency functions using only error information could not achieve performance of the full model whose attention shifting algorithm included both error and error rate terms. Subsequent to this analysis, tracking performance predictions for the full model were verified by piloted flight simulation. Complete model and simulation data are presented.
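A toy sketch of the comparison the study makes, assuming simple weighted-absolute-value urgency functions (the weights and functional form are illustrative, not Northrop's model): one allocation rule uses tracking error alone, the other adds an error-rate term so that attention shifts to an axis before its error has grown large.

```python
def urgency_error_only(e, k=1.0):
    # Urgency based on tracking error magnitude alone (assumed weight k)
    return k * abs(e)

def urgency_error_and_rate(e, e_dot, k=1.0, k_dot=0.5):
    # Urgency combining error and error rate; the rate term anticipates error growth
    return k * abs(e) + k_dot * abs(e_dot)

def allocate_attention(errors, error_rates, use_rate=True):
    """Return the index of the axis that receives control attention at this instant."""
    if use_rate:
        u = [urgency_error_and_rate(e, ed) for e, ed in zip(errors, error_rates)]
    else:
        u = [urgency_error_only(e) for e in errors]
    return max(range(len(u)), key=u.__getitem__)

# Example: axis 1 has a small error that is growing fast; only the error+rate rule picks it.
print(allocate_attention([0.6, 0.3], [0.0, 1.2], use_rate=False))  # -> 0
print(allocate_attention([0.6, 0.3], [0.0, 1.2], use_rate=True))   # -> 1
```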
7 CFR 275.23 - Determination of State agency program performance.
Code of Federal Regulations, 2011 CFR
2011-01-01
... NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE FOOD STAMP AND FOOD DISTRIBUTION PROGRAM PERFORMANCE REPORTING... section, the adjusted regressed payment error rate shall be calculated to yield the State agency's payment error rate. The adjusted regressed payment error rate is given by r 1″ + r 2″. (ii) If FNS determines...
Oliven, A; Zalman, D; Shilankov, Y; Yeshurun, D; Odeh, M
2002-01-01
Computerized prescription of drugs is expected to reduce the number of preventable drug ordering errors. In the present study we evaluated the usefulness of a computerized drug order entry (CDOE) system in reducing prescription errors. A department of internal medicine using a comprehensive CDOE, which also included patient-related drug-laboratory, drug-disease and drug-allergy on-line surveillance, was compared to a similar department in which drug orders were handwritten. CDOE reduced prescription errors to 25-35%. The causes of errors remained similar, and most errors, in both departments, were associated with abnormal renal function and electrolyte balance. Residual errors remaining in the CDOE-using department were due to handwriting on the typed order, failure to enter patients' diseases into the system, and system failures. The use of CDOE was associated with a significant reduction in mean hospital stay and in the number of changes made to the prescription. The findings of this study both quantify the impact of comprehensive CDOE on prescription errors and delineate the causes of remaining errors.
COMPLEX VARIABLE BOUNDARY ELEMENT METHOD: APPLICATIONS.
Hromadka, T.V.; Yen, C.C.; Guymon, G.L.
1985-01-01
The complex variable boundary element method (CVBEM) is used to approximate several potential problems where analytical solutions are known. A modeling result produced from the CVBEM is a measure of relative error in matching the known boundary condition values of the problem. A CVBEM error-reduction algorithm is used to reduce the relative error of the approximation by adding nodal points in boundary regions where error is large. From the test problems, overall error is reduced significantly by utilizing the adaptive integration algorithm.
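The adaptive refinement idea, stripped to a schematic: wherever the relative error along a boundary segment exceeds a tolerance, insert a midpoint node and re-solve. The sketch below shows only the node-insertion step under that assumption; it is not the CVBEM itself, and the tolerance, data layout and function name are hypothetical.

```python
def refine_nodes(nodes, boundary_error, tol=1e-3):
    """One pass of adaptive node insertion (schematic, not the CVBEM solver).

    nodes          : ordered boundary nodes (floats or complex points), length n
    boundary_error : relative error on each of the n-1 segments between nodes
    tol            : error tolerance above which a segment is refined (assumed value)
    """
    new_nodes = []
    for (a, b), err in zip(zip(nodes, nodes[1:]), boundary_error):
        new_nodes.append(a)
        if err > tol:
            new_nodes.append(0.5 * (a + b))   # midpoint refinement where error is large
    new_nodes.append(nodes[-1])
    return new_nodes

# Usage idea: alternate between solving the boundary problem, evaluating the relative
# error in the boundary conditions, and refining until all segment errors fall below tol.
```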
Alexander, John H; Levy, Elliott; Lawrence, Jack; Hanna, Michael; Waclawski, Anthony P; Wang, Junyuan; Califf, Robert M; Wallentin, Lars; Granger, Christopher B
2013-09-01
In ARISTOTLE, apixaban resulted in a 21% reduction in stroke, a 31% reduction in major bleeding, and an 11% reduction in death. However, approval of apixaban was delayed to investigate a statement in the clinical study report that "7.3% of subjects in the apixaban group and 1.2% of subjects in the warfarin group received, at some point during the study, a container of the wrong type." Rates of study medication dispensing error were characterized through reviews of study medication container tear-off labels in 6,520 participants from randomly selected study sites. The potential effect of dispensing errors on study outcomes was statistically simulated in sensitivity analyses in the overall population. The rate of medication dispensing error resulting in treatment error was 0.04%. Rates of participants receiving at least 1 incorrect container were 1.04% (34/3,273) in the apixaban group and 0.77% (25/3,247) in the warfarin group. Most of the originally reported errors were data entry errors in which the correct medication container was dispensed but the wrong container number was entered into the case report form. Sensitivity simulations in the overall trial population showed no meaningful effect of medication dispensing error on the main efficacy and safety outcomes. Rates of medication dispensing error were low and balanced between treatment groups. The initially reported dispensing error rate was the result of data recording and data management errors and not true medication dispensing errors. These analyses confirm the previously reported results of ARISTOTLE. © 2013.
Propagation of stage measurement uncertainties to streamflow time series
NASA Astrophysics Data System (ADS)
Horner, Ivan; Le Coz, Jérôme; Renard, Benjamin; Branger, Flora; McMillan, Hilary
2016-04-01
Streamflow uncertainties due to stage measurement errors are generally overlooked in the promising probabilistic approaches that have emerged in the last decade. We introduce an original error model for propagating stage uncertainties through a stage-discharge rating curve within a Bayesian probabilistic framework. The method takes into account both rating curve errors (parametric errors and structural errors) and stage uncertainty (systematic and non-systematic errors). Practical ways to estimate the different types of stage errors are also presented: (1) non-systematic errors due to instrument resolution and precision and non-stationary waves and (2) systematic errors due to gauge calibration against the staff gauge. The method is illustrated at a site where the rating-curve-derived streamflow can be compared with an accurate streamflow reference. The agreement between the two time series is overall satisfactory. Moreover, the quantification of uncertainty is also satisfactory, since the streamflow reference is compatible with the streamflow uncertainty intervals derived from the rating curve and the stage uncertainties. Illustrations from other sites are also presented. Results vary considerably depending on site features. In some cases, streamflow uncertainty is mainly due to stage measurement errors. The results also show the importance of discriminating systematic and non-systematic stage errors, especially for long-term flow averages. Perspectives for improving and validating the streamflow uncertainty estimates are finally discussed.
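A minimal Monte Carlo sketch of how stage errors could be propagated through a power-law rating curve Q = a(h − b)^c, distinguishing a systematic offset (constant within each realization, as for a gauge-calibration error) from non-systematic noise (independent at each time step). The rating-curve parameters, error magnitudes and stage record are invented for illustration, and the sketch omits the paper's Bayesian treatment of rating-curve parametric and structural errors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical rating curve Q = a*(h - b)^c and stage record (metres)
a, b, c = 40.0, 0.20, 1.6
stage = np.array([0.8, 1.1, 1.5, 2.0, 1.3])

n_sim = 5000
sigma_sys = 0.01      # systematic gauge-calibration error (m), constant within a realization
sigma_rand = 0.005    # non-systematic error (m), independent at each time step

q_sims = np.empty((n_sim, stage.size))
for i in range(n_sim):
    h_err = stage + rng.normal(0.0, sigma_sys) + rng.normal(0.0, sigma_rand, stage.size)
    q_sims[i] = a * np.clip(h_err - b, 0.0, None) ** c

# 95% streamflow uncertainty band due to stage errors alone
q_lo, q_hi = np.percentile(q_sims, [2.5, 97.5], axis=0)
print(q_lo.round(1), q_hi.round(1))
```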
Grane, Venke Arntsberg; Endestad, Tor; Pinto, Arnfrid Farbu; Solbakk, Anne-Kristin
2014-01-01
We investigated performance-derived measures of executive control, and their relationship with self- and informant reported executive functions in everyday life, in treatment-naive adults with newly diagnosed Attention Deficit Hyperactivity Disorder (ADHD; n = 36) and in healthy controls (n = 35). Sustained attentional control and response inhibition were examined with the Test of Variables of Attention (T.O.V.A.). Delayed responses, increased reaction time variability, and higher omission error rate to Go signals in ADHD patients relative to controls indicated fluctuating levels of attention in the patients. Furthermore, an increment in NoGo commission errors when Go stimuli increased relative to NoGo stimuli suggests reduced inhibition of task-irrelevant stimuli in conditions demanding frequent responding. The ADHD group reported significantly more cognitive and behavioral executive problems than the control group on the Behavior Rating Inventory of Executive Function-Adult Version (BRIEF-A). There were overall not strong associations between task performance and ratings of everyday executive function. However, for the ADHD group, T.O.V.A. omission errors predicted self-reported difficulties on the Organization of Materials scale, and commission errors predicted informant reported difficulties on the same scale. Although ADHD patients endorsed more symptoms of depression and anxiety on the Achenbach System of Empirically Based Assessment (ASEBA) than controls, ASEBA scores were not significantly associated with T.O.V.A. performance scores. Altogether, the results indicate multifaceted alteration of attentional control in adult ADHD, and accompanying subjective difficulties with several aspects of executive function in everyday living. The relationships between the two sets of data were modest, indicating that the measures represent non-redundant features of adult ADHD. PMID:25545156
Dudoit, Sandrine; Gilbert, Houston N.; van der Laan, Mark J.
2014-01-01
Summary This article proposes resampling-based empirical Bayes multiple testing procedures for controlling a broad class of Type I error rates, defined as generalized tail probability (gTP) error rates, gTP(q, g) = Pr(g(Vn, Sn) > q), and generalized expected value (gEV) error rates, gEV(g) = E[g(Vn, Sn)], for arbitrary functions g(Vn, Sn) of the numbers of false positives Vn and true positives Sn. Of particular interest are error rates based on the proportion g(Vn, Sn) = Vn/(Vn + Sn) of Type I errors among the rejected hypotheses, such as the false discovery rate (FDR), FDR = E[Vn/(Vn + Sn)]. The proposed procedures offer several advantages over existing methods. They provide Type I error control for general data generating distributions, with arbitrary dependence structures among variables. Gains in power are achieved by deriving rejection regions based on guessed sets of true null hypotheses and null test statistics randomly sampled from joint distributions that account for the dependence structure of the data. The Type I error and power properties of an FDR-controlling version of the resampling-based empirical Bayes approach are investigated and compared to those of widely-used FDR-controlling linear step-up procedures in a simulation study. The Type I error and power trade-off achieved by the empirical Bayes procedures under a variety of testing scenarios allows this approach to be competitive with or outperform the Storey and Tibshirani (2003) linear step-up procedure, as an alternative to the classical Benjamini and Hochberg (1995) procedure. PMID:18932138
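For orientation, the baseline the article compares against is the Benjamini and Hochberg (1995) linear step-up procedure, which controls FDR = E[Vn/(Vn + Sn)] under independence. A compact sketch of that baseline follows; the resampling-based empirical Bayes procedures proposed in the article are substantially more involved and are not reproduced here.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Classical BH linear step-up procedure (the comparison baseline, not the proposed method).

    Rejects the hypotheses with the k smallest p-values, where k is the largest index
    satisfying p_(k) <= k*q/m; this controls the FDR at level q under independence.
    """
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresholds = q * (np.arange(1, m + 1) / m)
    passed = p[order] <= thresholds
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

# Example: benjamini_hochberg([0.001, 0.008, 0.04, 0.2, 0.9]) rejects the first two or three
# hypotheses depending on q, never the clearly null ones.
```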
Govindan, Siva Shangari; Agamuthu, P
2014-10-01
Waste management can be regarded as a cross-cutting environmental 'mega-issue'. Sound waste management practices support the provision of basic needs for general health, such as clean air, clean water and a safe supply of food. In addition, climate change mitigation can be achieved through reduction of greenhouse gas emissions from waste management operations, such as landfills. Landfills generate landfill gas, especially methane, as a result of anaerobic degradation of the degradable components of municipal solid waste. Evaluating the mode of generation and collection of landfill gas has posed a challenge over time. Scientifically, landfill gas generation rates are presently estimated using numerical models. In this study the Intergovernmental Panel on Climate Change's Waste Model is used to estimate the methane generated from a Malaysian sanitary landfill. Key parameters of the model, namely the decay rate and degradable organic carbon, are analysed using two different approaches: the bulk waste approach and the waste composition approach. The model is then validated using error function analysis, and the optimum decay rate and degradable organic carbon for both approaches are obtained. The best fitting values for the bulk waste approach are a decay rate of 0.08 y(-1) and a degradable organic carbon value of 0.12; for the waste composition approach the decay rate was found to be 0.09 y(-1) and the degradable organic carbon value 0.08. From this validation exercise, the estimated error was reduced by 81% and 69% for the bulk waste and waste composition approach, respectively. In conclusion, this type of modelling could constitute a sensible starting point for careful planning of efficient gas recovery in individual landfills. © The Author(s) 2014.
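A highly simplified first-order decay sketch in the spirit of the IPCC Waste Model, using the bulk-waste best-fit values reported above (k = 0.08 y⁻¹, DOC = 0.12). The DOCf value, the methane fraction, the absence of unit conversion and the error-function definition are all assumptions for illustration, not the study's calibration.

```python
import numpy as np

def fod_methane_proxy(waste_per_year, k=0.08, doc=0.12, docf=0.5, f=0.5, years=30):
    """Simplified first-order decay sketch (illustrative, not the IPCC Waste Model itself).

    waste_per_year : tonnes of waste landfilled in each year (index 0 = first year)
    k, doc         : decay rate (1/yr) and degradable organic carbon fraction
    docf, f        : assumed fraction of DOC that decomposes and CH4 fraction of the gas
    Returns a yearly methane-generation proxy (no conversion to CH4 mass is attempted).
    """
    gen = np.zeros(years)
    for t_dep, w in enumerate(waste_per_year):
        ddoc = w * doc * docf                 # decomposable carbon deposited in year t_dep
        t = np.arange(years) - t_dep
        mask = t >= 0
        gen[mask] += f * ddoc * k * np.exp(-k * t[mask])
    return gen

def sse(model, observed):
    # Error-function analysis: a smaller sum of squared errors means a better (k, DOC) pair.
    return float(np.sum((np.asarray(model)[:len(observed)] - np.asarray(observed)) ** 2))
```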
Rudall, Nicola; McKenzie, Catherine; Landa, June; Bourne, Richard S; Bates, Ian; Shulman, Rob
2017-08-01
Clinical pharmacist (CP) interventions from the PROTECTED-UK cohort, a multi-site critical care interventions study, were further analysed to assess the effects of time spent on critical care, number of interventions, CP expertise and day of the week on the impact of interventions and, ultimately, their contribution to patient care. Intervention data were collected from 21 adult critical care units over 14 days. Interventions could be error, optimisation or consults, and were blind-coded to ensure consistency, prior to bivariate analysis. Pharmacy service demographics were further collated by investigator survey. Of the 20 758 prescriptions reviewed, 3375 interventions were made (intervention rate 16.1%). CPs spent 3.5 h per day (mean, ±SD 1.7) on direct patient care, reviewed 10.3 patients per day (±SD 4.2) and required 22.5 min (±SD 9.5) per review. Intervention rate had a moderate inverse correlation with the time the pharmacist spent on critical care (P = 0.05; r = 0.4). Optimisation rate had a strong inverse association with the total number of prescriptions reviewed per day (P = 0.001; r = 0.7). The presence of a consultant CP had a moderate inverse correlation with the number of errors identified (P = 0.008; r = 0.6). No correlation existed between the presence of electronic prescribing in critical care and any intervention rate. Few centres provided weekend services, although the intervention rate was significantly higher on weekends than on weekdays. A CP is essential for safe and optimised patient medication therapy; an extended and developed pharmacy service is expected to reduce errors. CP services should be adequately staffed to enable adequate time for prescription review and maximal therapy optimisation. © 2016 Royal Pharmaceutical Society.
Nickerson, Naomi H; Li, Ying; Benjamin, Simon C
2013-01-01
A scalable quantum computer could be built by networking together many simple processor cells, thus avoiding the need to create a single complex structure. The difficulty is that realistic quantum links are very error prone. A solution is for cells to repeatedly communicate with each other and so purify any imperfections; however prior studies suggest that the cells themselves must then have prohibitively low internal error rates. Here we describe a method by which even error-prone cells can perform purification: groups of cells generate shared resource states, which then enable stabilization of topologically encoded data. Given a realistically noisy network (≥10% error rate) we find that our protocol can succeed provided that intra-cell error rates for initialisation, state manipulation and measurement are below 0.82%. This level of fidelity is already achievable in several laboratory systems.
Fottrell, Edward; Byass, Peter; Berhane, Yemane
2008-03-25
As in any measurement process, a certain amount of error may be expected in routine population surveillance operations such as those in demographic surveillance sites (DSSs). Vital events are likely to be missed and errors made no matter what method of data capture is used or what quality control procedures are in place. The extent to which random errors in large, longitudinal datasets affect overall health and demographic profiles has important implications for the role of DSSs as platforms for public health research and clinical trials. Such knowledge is also of particular importance if the outputs of DSSs are to be extrapolated and aggregated with realistic margins of error and validity. This study uses the first 10-year dataset from the Butajira Rural Health Project (BRHP) DSS, Ethiopia, covering approximately 336,000 person-years of data. Simple programmes were written to introduce random errors and omissions into new versions of the definitive 10-year Butajira dataset. Key parameters of sex, age, death, literacy and roof material (an indicator of poverty) were selected for the introduction of errors based on their obvious importance in demographic and health surveillance and their established significant associations with mortality. Defining the original 10-year dataset as the 'gold standard' for the purposes of this investigation, population, age and sex compositions and Poisson regression models of mortality rate ratios were compared between each of the intentionally erroneous datasets and the original 'gold standard' 10-year data. The composition of the Butajira population was well represented despite introducing random errors, and differences between population pyramids based on the derived datasets were subtle. Regression analyses of well-established mortality risk factors were largely unaffected even by relatively high levels of random errors in the data. The low sensitivity of parameter estimates and regression analyses to significant amounts of randomly introduced errors indicates a high level of robustness of the dataset. This apparent inertia of population parameter estimates to simulated errors is largely due to the size of the dataset. Tolerable margins of random error in DSS data may exceed 20%. While this is not an argument in favour of poor quality data, reducing the time and valuable resources spent on detecting and correcting random errors in routine DSS operations may be justifiable as the returns from such procedures diminish with increasing overall accuracy. The money and effort currently spent on endlessly correcting DSS datasets would perhaps be better spent on increasing the surveillance population size and geographic spread of DSSs and analysing and disseminating research findings.
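A sketch of the error-injection step under stated assumptions: a chosen fraction of values in selected columns is replaced by values resampled from the same column, mimicking random transcription or coding errors, after which the same mortality model would be refitted on the corrupted data and compared with the 'gold standard' results. The function name, the pandas representation and the resampling scheme are illustrative, not the Butajira programmes.

```python
import numpy as np
import pandas as pd

def corrupt(df, cols, error_fraction=0.20, seed=0):
    """Randomly perturb a fraction of values in selected columns (illustrative only).

    Each selected cell is replaced by a value drawn at random from the same column,
    so marginal distributions are roughly preserved while individual records are wrong.
    """
    rng = np.random.default_rng(seed)
    noisy = df.copy()
    for col in cols:
        idx = rng.random(len(df)) < error_fraction
        noisy.loc[idx, col] = rng.choice(df[col].to_numpy(), size=int(idx.sum()))
    return noisy

# Usage idea: fit the same Poisson mortality-rate model on df and on
# corrupt(df, ["sex", "age", "literacy", "roof"], 0.20), then compare rate ratios
# to gauge robustness, as the Butajira analysis does with its simulated errors.
```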
Word-level language modeling for P300 spellers based on discriminative graphical models
NASA Astrophysics Data System (ADS)
Delgado Saa, Jaime F.; de Pesters, Adriana; McFarland, Dennis; Çetin, Müjdat
2015-04-01
Objective. In this work we propose a probabilistic graphical model framework that uses language priors at the level of words as a mechanism to increase the performance of P300-based spellers. Approach. This paper is concerned with brain-computer interfaces based on P300 spellers. Motivated by P300 spelling scenarios involving communication based on a limited vocabulary, we propose a probabilistic graphical model framework and an associated classification algorithm that uses learned statistical models of language at the level of words. Exploiting such high-level contextual information helps reduce the error rate of the speller. Main results. Our experimental results demonstrate that the proposed approach offers several advantages over existing methods. Most importantly, it increases the classification accuracy while reducing the number of times the letters need to be flashed, increasing the communication rate of the system. Significance. The proposed approach models all the variables in the P300 speller in a unified framework and has the capability to correct errors in previous letters in a word, given the data for the current one. The structure of the model we propose allows the use of efficient inference algorithms, which in turn makes it possible to use this approach in real-time applications.
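A toy illustration of the word-level idea, assuming fixed-length candidate words, per-position letter log-likelihoods from the P300 classifier, and a word-level log-prior from a language model (the names and the factorized scoring are assumptions; the paper uses a richer discriminative graphical model). Because scoring is done per word, evidence from the current letter can revise the decision about earlier letters.

```python
import numpy as np

def word_posterior(letter_loglik, vocab, word_logprior):
    """Combine per-position letter evidence with a word-level prior (toy sketch).

    letter_loglik : list of dicts, one per letter position, mapping letter -> log-likelihood
                    produced by the P300 classifier
    vocab         : candidate words, all assumed to have the same length here
    word_logprior : dict mapping word -> log prior probability from the language model
    """
    scores = {}
    for w in vocab:
        s = word_logprior.get(w, -np.inf)
        for pos, ch in enumerate(w):
            s += letter_loglik[pos].get(ch, -np.inf)
        scores[w] = s
    logZ = np.logaddexp.reduce(list(scores.values()))   # normalize to a posterior
    return {w: float(np.exp(s - logZ)) for w, s in scores.items()}

# Example: even if the classifier slightly prefers 'o' over 'a' at position 1,
# a vocabulary containing 'cat' but not 'cot' pulls the word decision back to 'cat'.
```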
Reliable Fusion of Stereo Matching and Depth Sensor for High Quality Dense Depth Maps
Liu, Jing; Li, Chunpeng; Fan, Xuefeng; Wang, Zhaoqi
2015-01-01
Depth estimation is a classical problem in computer vision, which typically relies on either a depth sensor or stereo matching alone. The depth sensor provides real-time estimates in repetitive and textureless regions where stereo matching is not effective. However, stereo matching can obtain more accurate results in rich texture regions and object boundaries where the depth sensor often fails. We fuse stereo matching and the depth sensor using their complementary characteristics to improve the depth estimation. Here, texture information is incorporated as a constraint to restrict the pixel’s scope of potential disparities and to reduce noise in repetitive and textureless regions. Furthermore, a novel pseudo-two-layer model is used to represent the relationship between disparities in different pixels and segments. It is more robust to luminance variation by treating information obtained from a depth sensor as prior knowledge. Segmentation is viewed as a soft constraint to reduce ambiguities caused by under- or over-segmentation. Compared to the average error rate 3.27% of the previous state-of-the-art methods, our method provides an average error rate of 2.61% on the Middlebury datasets, which shows that our method performs almost 20% better than other “fused” algorithms in the aspect of precision. PMID:26308003
Mogull, Scott A
2017-01-01
Previous reviews estimated that approximately 20 to 25% of assertions cited from original research articles, or "facts," are inaccurately quoted in the medical literature. These reviews noted that the original studies were dissimilar and only began to compare the methods of the original studies. The aim of this review is to examine the methods of the original studies and provide a more specific rate of incorrectly cited assertions, or quotation errors, in original research articles published in medical journals. Additionally, the estimate of quotation errors calculated here is based on the ratio of quotation errors to quotations examined (a percent) rather than the more prevalent and weighted metric of quotation errors to the references selected. Overall, this resulted in a lower estimate of the quotation error rate in original medical research articles. A total of 15 studies met the criteria for inclusion in the primary quantitative analysis. Quotation errors were divided into two categories: content ("factual") or source (improper indirect citation) errors. Content errors were further subdivided into major and minor errors depending on the degree that the assertion differed from the original source. The rate of quotation errors recalculated here is 14.5% (10.5% to 18.6% at a 95% confidence interval). These content errors are predominantly, 64.8% (56.1% to 73.5% at a 95% confidence interval), major errors or cited assertions in which the referenced source either fails to substantiate, is unrelated to, or contradicts the assertion. Minor errors, which are an oversimplification, overgeneralization, or trivial inaccuracies, are 35.2% (26.5% to 43.9% at a 95% confidence interval). Additionally, improper secondary (or indirect) citations, which are distinguished from calculations of quotation accuracy, occur at a rate of 10.4% (3.4% to 17.5% at a 95% confidence interval).
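The headline figure is a simple proportion with a confidence interval: quotation errors divided by quotations examined. A minimal sketch of that calculation follows; the counts in the usage comment are invented and do not reproduce the review's pooled data.

```python
import math

def proportion_ci(errors, quotations, z=1.96):
    """Normal-approximation 95% CI for an error rate expressed per quotation examined."""
    p = errors / quotations
    half = z * math.sqrt(p * (1 - p) / quotations)
    return p, max(0.0, p - half), min(1.0, p + half)

# Illustrative counts only: 29 erroneous quotations out of 200 examined
# -> (0.145, ~0.096, ~0.194); the review's pooled 14.5% (10.5-18.6%) combines 15 studies.
print(proportion_ci(29, 200))
```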
Tsuji, Toshikazu; Nagata, Kenichiro; Kawashiri, Takehiro; Yamada, Takaaki; Irisa, Toshihiro; Murakami, Yuko; Kanaya, Akiko; Egashira, Nobuaki; Masuda, Satohiro
2016-01-01
There are many reports regarding various medical institutions' attempts at the prevention of dispensing errors. However, the relationship between the occurrence timing of dispensing errors and subsequent danger to patients has not been studied with drugs classified by efficacy. Therefore, we analyzed the relationship between position and time in the occurrence of dispensing errors, and further investigated the relationship between their occurrence timing and danger to patients. In this study, dispensing errors and incidents in three categories (drug name errors, drug strength errors, drug count errors) were classified into two groups in terms of drug efficacy (efficacy similarity (-) group, efficacy similarity (+) group) and into three classes in terms of the occurrence timing of dispensing errors (initial phase errors, middle phase errors, final phase errors). The rates at which "dispensing errors" escalated to "damage to patients" were then compared as an index of danger between the two groups and among the three classes. The rate of damage in the efficacy similarity (-) group was significantly higher than that in the efficacy similarity (+) group. Furthermore, the rate of damage was highest for initial phase errors and lowest for final phase errors among the three classes. These results make clear that the earlier a dispensing error occurs, the more severe the damage to patients becomes.
Discrepancies in reporting the CAG repeat lengths for Huntington's disease
Quarrell, Oliver W; Handley, Olivia; O'Donovan, Kirsty; Dumoulin, Christine; Ramos-Arroyo, Maria; Biunno, Ida; Bauer, Peter; Kline, Margaret; Landwehrmeyer, G Bernhard
2012-01-01
Huntington's disease results from a CAG repeat expansion within the Huntingtin gene; this is measured routinely in diagnostic laboratories. The European Huntington's Disease Network REGISTRY project centrally measures CAG repeat lengths on fresh samples; these were compared with the original results from 121 laboratories across 15 countries. We report on 1326 duplicate results; a discrepancy in reporting the upper allele occurred in 51% of cases, this reduced to 13.3% and 9.7% when we applied acceptable measurement errors proposed by the American College of Medical Genetics and the Draft European Best Practice Guidelines, respectively. Duplicate results were available for 1250 lower alleles; discrepancies occurred in 40% of cases. Clinically significant discrepancies occurred in 4.0% of cases with a potential unexplained misdiagnosis rate of 0.3%. There was considerable variation in the discrepancy rate among 10 of the countries participating in this study. Out of 1326 samples, 348 were re-analysed by an accredited diagnostic laboratory, based in Germany, with concordance rates of 93% and 94% for the upper and lower alleles, respectively. This became 100% if the acceptable measurement errors were applied. The central laboratory correctly reported allele sizes for six standard reference samples, blind to the known result. Our study differs from external quality assessment (EQA) schemes in that these are duplicate results obtained from a large sample of patients across the whole diagnostic range. We strongly recommend that laboratories state an error rate for their measurement on the report, participate in EQA schemes and use reference materials regularly to adjust their own internal standards. PMID:21811303
ERIC Educational Resources Information Center
Birjandi, Parviz; Siyyari, Masood
2016-01-01
This paper presents the results of an investigation into the role of two personality traits (i.e. Agreeableness and Conscientiousness from the Big Five personality traits) in predicting rating error in the self-assessment and peer-assessment of composition writing. The average self/peer-rating errors of 136 Iranian English major undergraduates…
National Suicide Rates a Century after Durkheim: Do We Know Enough to Estimate Error?
ERIC Educational Resources Information Center
Claassen, Cynthia A.; Yip, Paul S.; Corcoran, Paul; Bossarte, Robert M.; Lawrence, Bruce A.; Currier, Glenn W.
2010-01-01
Durkheim's nineteenth-century analysis of national suicide rates dismissed prior concerns about mortality data fidelity. Over the intervening century, however, evidence documenting various types of error in suicide data has only mounted, and surprising levels of such error continue to be routinely uncovered. Yet the annual suicide rate remains the…
The Relationship of Error Rate and Comprehension in Second and Third Grade Oral Reading Fluency
ERIC Educational Resources Information Center
Abbott, Mary; Wills, Howard; Miller, Angela; Kaufman, Journ
2012-01-01
This study explored the relationships of oral reading speed and error rate on comprehension with second and third grade students with identified reading risk. The study included 920 second and 974 third graders. Results found a significant relationship between error rate, oral reading fluency, and reading comprehension performance, and…
What Are Error Rates for Classifying Teacher and School Performance Using Value-Added Models?
ERIC Educational Resources Information Center
Schochet, Peter Z.; Chiang, Hanley S.
2013-01-01
This article addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using a realistic performance measurement system scheme based on hypothesis testing, the authors develop error rate formulas based on ordinary least squares and…
An adaptive reentry guidance method considering the influence of blackout zone
NASA Astrophysics Data System (ADS)
Wu, Yu; Yao, Jianyao; Qu, Xiangju
2018-01-01
Reentry guidance is a popular research topic because it is critical for a successful flight. Given that existing guidance methods do not take into account the navigation error accumulated by the Inertial Navigation System (INS) in the blackout zone, this paper proposes an adaptive reentry guidance method to obtain the optimal reentry trajectory quickly with the target of minimum aerodynamic heating rate. The terminal error in position and attitude can also be reduced with the proposed method. In this method, the whole reentry guidance task is divided into two phases, i.e., the trajectory updating phase and the trajectory planning phase. In the first phase, the idea of model predictive control (MPC) is used, and the receding optimization procedure ensures the optimal trajectory over the next few seconds. In the trajectory planning phase, after the vehicle has flown out of the blackout zone, the optimal reentry trajectory is obtained by online planning that adapts to the navigation information. An effective swarm intelligence algorithm, the pigeon inspired optimization (PIO) algorithm, is applied to obtain the optimal reentry trajectory in both phases. Compared to the trajectory updating method alone, the proposed method can reduce the terminal error by about 30% considering both position and attitude; in particular, the terminal error in height is almost eliminated. In addition, the PIO algorithm performs better than the particle swarm optimization (PSO) algorithm in both the trajectory updating and trajectory planning phases.
On the statistical assessment of classifiers using DNA microarray data
Ancona, N; Maglietta, R; Piepoli, A; D'Addabbo, A; Cotugno, R; Savino, M; Liuni, S; Carella, M; Pesole, G; Perri, F
2006-01-01
Background In this paper we present a method for the statistical assessment of cancer predictors that make use of gene expression profiles. The methodology is applied to a new data set of microarray gene expression data collected in Casa Sollievo della Sofferenza Hospital, Foggia, Italy. The data set is made up of normal (22) and tumor (25) specimens extracted from 25 patients affected by colon cancer. We propose to give answers to some questions which are relevant for the automatic diagnosis of cancer, such as: Is the size of the available data set sufficient to build accurate classifiers? What is the statistical significance of the associated error rates? In what ways can accuracy be considered dependent on the adopted classification scheme? How many genes are correlated with the pathology and how many are sufficient for an accurate colon cancer classification? The method we propose answers these questions whilst avoiding the potential pitfalls hidden in the analysis and interpretation of microarray data. Results We estimate the generalization error, evaluated through the Leave-K-Out Cross Validation error, for three different classification schemes by varying the number of training examples and the number of genes used. The statistical significance of the error rate is measured by using a permutation test. We provide a statistical analysis in terms of the frequencies of the genes involved in the classification. Using the whole set of genes, we found that the Weighted Voting Algorithm (WVA) classifier learns the distinction between normal and tumor specimens with 25 training examples, providing e = 21% (p = 0.045) as an error rate. This remains constant even when the number of examples increases. Moreover, Regularized Least Squares (RLS) and Support Vector Machines (SVM) classifiers can learn with only 15 training examples, with error rates of e = 19% (p = 0.035) and e = 18% (p = 0.037) respectively. Moreover, the error rate decreases as the training set size increases, reaching its best performance with 35 training examples. In this case, RLS and SVM have error rates of e = 14% (p = 0.027) and e = 11% (p = 0.019). Concerning the number of genes, we found about 6000 genes (p < 0.05) correlated with the pathology, based on the signal-to-noise statistic. Moreover, the performances of the RLS and SVM classifiers do not change when 74% of the genes are used; performance then progressively degrades, reaching e = 16% (p < 0.05) when only 2 genes are employed. The biological relevance of a set of genes determined by our statistical analysis and the major roles they play in colorectal tumorigenesis is discussed. Conclusions The method proposed provides statistically significant answers to precise questions relevant for the diagnosis and prognosis of cancer. We found that, with as few as 15 examples, it is possible to train statistically significant classifiers for colon cancer diagnosis. As for the definition of the number of genes sufficient for a reliable classification of colon cancer, our results suggest that it depends on the accuracy required. PMID:16919171
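A minimal scikit-learn sketch of the evaluation pattern described above: repeated splits with a fixed training-set size approximate the Leave-K-Out Cross Validation error, and a permutation test attaches a p-value to it. The synthetic data, the linear SVM stand-in and the split/permutation counts are assumptions for illustration; the paper's WVA and RLS classifiers and its exact resampling scheme are not reproduced.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedShuffleSplit, permutation_test_score

# Placeholder expression matrix (samples x genes); the study uses 22 normal and 25 tumor specimens.
rng = np.random.default_rng(0)
X = rng.normal(size=(47, 2000))
y = np.array([0] * 22 + [1] * 25)

# Repeated stratified splits with a fixed training size of 35 approximate Leave-K-Out CV.
cv = StratifiedShuffleSplit(n_splits=30, train_size=35, random_state=0)
clf = SVC(kernel="linear")

# Permutation test: refit on label-shuffled data to judge whether the error rate is significant.
score, perm_scores, pvalue = permutation_test_score(
    clf, X, y, cv=cv, n_permutations=100, scoring="accuracy")
print(f"error rate = {1 - score:.2f}, permutation p-value = {pvalue:.3f}")
```

On the random placeholder data the permutation p-value should be non-significant; with real expression data carrying class signal, the error rate drops and the p-value becomes small, which is the pattern reported in the abstract.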