Mobarakabadi, Sedigheh Sedigh; Ebrahimipour, Hosein; Najar, Ali Vafaie; Janghorban, Roksana; Azarkish, Fatemeh
2017-03-01
Patient safety is one of the main objectives in healthcare services; however, medical errors are a prevalent potential occurrence for patients in treatment systems. Medical errors increase patient mortality and create challenges such as prolonged inpatient stays and increased costs. Controlling medical errors is very important because, besides being costly, these errors threaten patient safety. To evaluate the attitudes of nurses and midwives toward the causes and rates of medical error reporting. It was a cross-sectional observational study. The study population was 140 midwives and nurses employed in Mashhad public hospitals. Data were collected with the revised Goldstone (2001) questionnaire. SPSS 11.5 software was used for data analysis. Descriptive and inferential statistics were used: means, standard deviations and relative frequency distributions were calculated, the results were presented as tables and charts, and the chi-square test was used for the inferential analysis of the data. Most of the midwives and nurses (39.4%) were in the age range of 25 to 34 years and the lowest percentage (2.2%) were in the age range of 55-59 years. The highest average number of medical errors was related to employees with three to four years of work experience, while the lowest was related to those with one to two years of work experience. The highest average number of medical errors occurred during the evening shift, and the lowest during the night shift. Three main causes of medical errors were identified: illegible physician prescription orders, similarity of names between different drugs, and nurse fatigue. The most important causes of medical errors from the viewpoint of nurses and midwives were illegible physician orders, drug name similarity with other drugs, nurse fatigue and damaged drug labels or packaging, respectively. Head nurse feedback, peer feedback, and fear of punishment or job loss were considered reasons for underreporting of medical errors. This research demonstrates the need for greater attention to be paid to the causes of medical errors.
Peterson, J.; Dunham, J.B.
2003-01-01
Effective conservation efforts for at-risk species require knowledge of the locations of existing populations. Species presence can be estimated directly by conducting field-sampling surveys or alternatively by developing predictive models. Direct surveys can be expensive and inefficient, particularly for rare and difficult-to-sample species, and models of species presence may produce biased predictions. We present a Bayesian approach that combines sampling and model-based inferences for estimating species presence. The accuracy and cost-effectiveness of this approach were compared to those of sampling surveys and predictive models for estimating the presence of the threatened bull trout (Salvelinus confluentus) via simulation with existing models and empirical sampling data. Simulations indicated that a sampling-only approach would be the most effective and would result in the lowest presence and absence misclassification error rates for three thresholds of detection probability. When sampling effort was considered, however, the combined approach resulted in the lowest error rates per unit of sampling effort. Hence, lower probability-of-detection thresholds can be specified with the combined approach, resulting in lower misclassification error rates and improved cost-effectiveness.
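As a rough, hedged illustration of the combined approach described above (not the authors' code), the sketch below updates a model-based prior probability of presence with the outcome of repeated surveys via Bayes' rule; the prior and per-visit detection probability are invented for the example.

```python
# A minimal sketch of combining a model-based prior for species presence with
# survey data via Bayes' rule. Prior and detection probability are assumptions.

def posterior_presence(prior, p_detect, n_visits, detected):
    """P(present | survey outcome) for a site surveyed n_visits times."""
    if detected:
        return 1.0  # any detection confirms presence (assuming no false positives)
    # P(no detections | present) = (1 - p_detect) ** n_visits
    p_miss = (1.0 - p_detect) ** n_visits
    return prior * p_miss / (prior * p_miss + (1.0 - prior))

# Example: model predicts 40% chance of bull trout presence; three visits,
# each with 0.5 detection probability, find nothing.
print(posterior_presence(prior=0.4, p_detect=0.5, n_visits=3, detected=False))  # ~0.077
```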
Effects of dynamic aeroelasticity on handling qualities and pilot rating
NASA Technical Reports Server (NTRS)
Swaim, R. L.; Yen, W.-Y.
1978-01-01
Pilot performance parameters, such as pilot ratings, tracking errors, and pilot comments, were recorded and analyzed for a longitudinal pitch tracking task on a large, flexible aircraft. The tracking task was programmed on a fixed-base simulator with a CRT attitude director display of pitch angle command, pitch angle, and pitch angle error. Parametric variations in the undamped natural frequencies of the two lowest frequency symmetric elastic modes were made to induce varying degrees of rigid body and elastic mode interaction. The results indicate that such mode interaction can drastically affect the handling qualities and pilot ratings of the task.
Error Rate Comparison during Polymerase Chain Reaction by DNA Polymerase
McInerney, Peter; Adams, Paul; Hadi, Masood Z.
2014-01-01
As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for 6 DNA polymerases commonly used in PCR applications, including 3 polymerases typically used for cloning applications requiring high fidelity. Error rate measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. Among the six enzymes included in the study (Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase), we find the lowest error rates with Pfu, Phusion, and Pwo polymerases. Error rates are comparable for these 3 enzymes and are >10x lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the 3 high fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition.
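For readers unfamiliar with how such fidelity values are expressed, the sketch below shows a commonly used errors-per-base-per-duplication calculation; the counts are made up and this is not the study's analysis pipeline.

```python
# Illustrative calculation (made-up counts) of a per-base, per-duplication PCR error
# rate: observed mutations divided by total bases sequenced and template doublings.
import math

def pcr_error_rate(mutations, bases_sequenced, fold_amplification):
    doublings = math.log2(fold_amplification)  # number of template duplications
    return mutations / (bases_sequenced * doublings)

rate = pcr_error_rate(mutations=12, bases_sequenced=450_000, fold_amplification=1.0e5)
print(f"{rate:.2e} errors per base per duplication")  # ~1.6e-6 with these toy numbers
```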
Failure analysis and modeling of a multicomputer system. M.S. Thesis
NASA Technical Reports Server (NTRS)
Subramani, Sujatha Srinivasan
1990-01-01
This thesis describes the results of an extensive measurement-based analysis of real error data collected from a 7-machine DEC VaxCluster multicomputer system. In addition to evaluating basic system error and failure characteristics, we develop reward models to analyze the impact of failures and errors on the system. The results show that, although 98 percent of errors in the shared resources recover, they result in 48 percent of all system failures. The analysis of rewards shows that the expected reward rate for the VaxCluster decreases to 0.5 in 100 days for a 3-out-of-7 model, which is well over 100 times that for a 7-out-of-7 model. A comparison of the reward rates for a range of k-out-of-n models indicates that the maximum increase in reward rate (0.25) occurs in going from the 6-out-of-7 model to the 5-out-of-7 model. The analysis also shows that software errors have the lowest reward (0.2 vs. 0.91 for network errors). The large loss in reward rate for software errors is due to the fact that a large proportion (94 percent) of software errors lead to failure. In comparison, the high reward rate for network errors is due to fast recovery from a majority of these errors (median recovery duration is 0 seconds).
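The k-out-of-n comparison above can be illustrated with a toy availability model. The sketch below assumes independent, identical machines with an invented availability, unlike the measured reward models in the thesis, and shows how the probability that at least k of 7 machines are up changes with k.

```python
# Hedged illustration of a k-out-of-n availability comparison; the single-machine
# availability below is an assumption, not a value from the thesis.
from math import comb

def k_out_of_n_up(k, n, a):
    """Probability that at least k of n independent machines are simultaneously up."""
    return sum(comb(n, i) * a**i * (1 - a)**(n - i) for i in range(k, n + 1))

a = 0.93  # assumed single-machine availability
for k in range(7, 2, -1):
    print(f"{k}-out-of-7: {k_out_of_n_up(k, 7, a):.4f}")
# The largest jump typically appears between adjacent k values near the knee of the
# binomial tail, mirroring the 6-of-7 to 5-of-7 gain reported above.
```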
Torres, Viviana; Cerda, Mauricio; Knaup, Petra; Löpprich, Martin
2016-01-01
An important part of the electronic information available in a Hospital Information System (HIS) has the potential to be automatically exported to Electronic Data Capture (EDC) platforms to improve clinical research. This automation has the advantage of reducing manual data transcription, a time-consuming and error-prone process. However, quantitative evaluations of the process of exporting data from a HIS to an EDC system, particularly in comparison with manual transcription, have not been reported extensively. In this work, an assessment of the quality of an automatic export process, focused on laboratory data from a HIS, is presented. Quality of the laboratory data was assessed for two types of processes: (1) a manual process of data transcription, and (2) an automatic process of data transference. The automatic transference was implemented as an Extract, Transform and Load (ETL) process. A comparison was then carried out between manual and automatic data collection methods. The criteria used to measure data quality were correctness and completeness. The manual process had a general error rate of 2.6% to 7.1%, with the lowest error rate obtained when data fields without a clear definition were removed from the analysis (p < 10E-3). For the automatic process, the general error rate was 1.9% to 12.1%, where the lowest error rate was obtained when excluding information missing in the HIS but transcribed to the EDC from other physical sources. The automatic ETL process can be used to collect laboratory data for clinical research if data in the HIS, as well as physical documentation not included in the HIS, are identified beforehand and follow a standardized data collection protocol.
Failure analysis and modeling of a VAXcluster system
NASA Technical Reports Server (NTRS)
Tang, Dong; Iyer, Ravishankar K.; Subramani, Sujatha S.
1990-01-01
This paper discusses the results of a measurement-based analysis of real error data collected from a DEC VAXcluster multicomputer system. In addition to evaluating basic system dependability characteristics such as error and failure distributions and hazard rates for both individual machines and for the VAXcluster, reward models were developed to analyze the impact of failures on the system as a whole. The results show that more than 46 percent of all failures were due to errors in shared resources. This is despite the fact that these errors have a recovery probability greater than 0.99. The hazard rate calculations show that not only errors, but also failures occur in bursts. Approximately 40 percent of all failures occurred in bursts and involved multiple machines. This result indicates that correlated failures are significant. Analysis of rewards shows that software errors have the lowest reward (0.05 vs 0.74 for disk errors). The expected reward rate (reliability measure) of the VAXcluster drops to 0.5 in 18 hours for the 7-out-of-7 model and in 80 days for the 3-out-of-7 model.
Tsuji, Toshikazu; Nagata, Kenichiro; Kawashiri, Takehiro; Yamada, Takaaki; Irisa, Toshihiro; Murakami, Yuko; Kanaya, Akiko; Egashira, Nobuaki; Masuda, Satohiro
2016-01-01
There are many reports of various medical institutions' attempts to prevent dispensing errors. However, the relationship between the occurrence timing of dispensing errors and the subsequent danger to patients has not been studied with drugs classified by efficacy. Therefore, we analyzed the relationship between position and time in the occurrence of dispensing errors, and investigated the relationship between their occurrence timing and the danger to patients. In this study, dispensing errors and incidents in three categories (drug name errors, drug strength errors, drug count errors) were classified into two groups in terms of drug efficacy (efficacy similarity (-) group, efficacy similarity (+) group) and into three classes in terms of the occurrence timing of dispensing errors (initial phase errors, middle phase errors, final phase errors). The rates of progression from "dispensing errors" to "damage to patients" were then compared as an index of danger between the two groups and among the three classes. The rate of damage in the "efficacy similarity (-) group" was significantly higher than that in the "efficacy similarity (+) group". Furthermore, the rate of damage was highest for "initial phase errors" and lowest for "final phase errors" among the three classes. These results make clear that the earlier a dispensing error occurs, the more severe the damage to patients becomes.
Shcherbina, Anna; Mattsson, C. Mikael; Waggott, Daryl; Salisbury, Heidi; Christle, Jeffrey W.; Hastie, Trevor; Wheeler, Matthew T.; Ashley, Euan A.
2017-01-01
The ability to measure physical activity through wrist-worn devices provides an opportunity for cardiovascular medicine. However, the accuracy of commercial devices is largely unknown. The aim of this work is to assess the accuracy of seven commercially available wrist-worn devices in estimating heart rate (HR) and energy expenditure (EE) and to propose a wearable sensor evaluation framework. We evaluated the Apple Watch, Basis Peak, Fitbit Surge, Microsoft Band, Mio Alpha 2, PulseOn, and Samsung Gear S2. Participants wore devices while being simultaneously assessed with continuous telemetry and indirect calorimetry while sitting, walking, running, and cycling. Sixty volunteers (29 male, 31 female, age 38 ± 11 years) of diverse age, height, weight, skin tone, and fitness level were selected. Error in HR and EE was computed for each subject/device/activity combination. Devices reported the lowest error for cycling and the highest for walking. Device error was higher for males, greater body mass index, darker skin tone, and walking. Six of the devices achieved a median error for HR below 5% during cycling. No device achieved an error in EE below 20 percent. The Apple Watch achieved the lowest overall error in both HR and EE, while the Samsung Gear S2 reported the highest. In conclusion, most wrist-worn devices adequately measure HR in laboratory-based activities, but poorly estimate EE, suggesting caution in the use of EE measurements as part of health improvement programs. We propose reference standards for the validation of consumer health devices (http://precision.stanford.edu/). PMID:28538708
Shcherbina, Anna; Mattsson, C Mikael; Waggott, Daryl; Salisbury, Heidi; Christle, Jeffrey W; Hastie, Trevor; Wheeler, Matthew T; Ashley, Euan A
2017-05-24
The ability to measure physical activity through wrist-worn devices provides an opportunity for cardiovascular medicine. However, the accuracy of commercial devices is largely unknown. The aim of this work is to assess the accuracy of seven commercially available wrist-worn devices in estimating heart rate (HR) and energy expenditure (EE) and to propose a wearable sensor evaluation framework. We evaluated the Apple Watch, Basis Peak, Fitbit Surge, Microsoft Band, Mio Alpha 2, PulseOn, and Samsung Gear S2. Participants wore devices while being simultaneously assessed with continuous telemetry and indirect calorimetry while sitting, walking, running, and cycling. Sixty volunteers (29 male, 31 female, age 38 ± 11 years) of diverse age, height, weight, skin tone, and fitness level were selected. Error in HR and EE was computed for each subject/device/activity combination. Devices reported the lowest error for cycling and the highest for walking. Device error was higher for males, greater body mass index, darker skin tone, and walking. Six of the devices achieved a median error for HR below 5% during cycling. No device achieved an error in EE below 20 percent. The Apple Watch achieved the lowest overall error in both HR and EE, while the Samsung Gear S2 reported the highest. In conclusion, most wrist-worn devices adequately measure HR in laboratory-based activities, but poorly estimate EE, suggesting caution in the use of EE measurements as part of health improvement programs. We propose reference standards for the validation of consumer health devices (http://precision.stanford.edu/).
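A minimal sketch of the per-subject error metric described above, assuming percent error of a device reading against a reference measurement and a median summary; the numbers are invented for illustration.

```python
# Small sketch of a per-subject error metric: percent error of a device's heart-rate
# estimate against a reference (e.g., ECG-derived) value, summarized by the median.
import statistics

def percent_error(device_hr, reference_hr):
    return abs(device_hr - reference_hr) / reference_hr * 100.0

device_vs_reference = [(148, 151), (162, 158), (121, 124), (95, 99)]  # (device, reference) bpm
errors = [percent_error(d, r) for d, r in device_vs_reference]
print(f"median HR error: {statistics.median(errors):.1f}%")
```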
Teamwork and clinical error reporting among nurses in Korean hospitals.
Hwang, Jee-In; Ahn, Jeonghoon
2015-03-01
To examine levels of teamwork and its relationship with clinical error reporting among Korean hospital nurses. The study employed a cross-sectional survey design. We distributed a questionnaire to 674 nurses in two teaching hospitals in Korea. The questionnaire included items on teamwork and the reporting of clinical errors. We measured teamwork using the Teamwork Perceptions Questionnaire, which has five subscales including team structure, leadership, situation monitoring, mutual support, and communication. Using logistic regression analysis, we determined the relationships between teamwork and error reporting. The response rate was 85.5%. The mean score of teamwork was 3.5 out of 5. At the subscale level, mutual support was rated highest, while leadership was rated lowest. Of the participating nurses, 522 responded that they had experienced at least one clinical error in the last 6 months. Among those, only 53.0% responded that they always or usually reported clinical errors to their managers and/or the patient safety department. Teamwork was significantly associated with better error reporting. Specifically, nurses with a higher team communication score were more likely to report clinical errors to their managers and the patient safety department (odds ratio = 1.82, 95% confidence interval [1.05, 3.14]). Teamwork was rated as moderate and was positively associated with nurses' error reporting performance. Hospital executives and nurse managers should make substantial efforts to enhance teamwork, which will contribute to encouraging the reporting of errors and improving patient safety. Copyright © 2015. Published by Elsevier B.V.
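For illustration only, the sketch below shows how an odds ratio of this kind is typically obtained from a logistic regression; the data are simulated (with an assumed effect size chosen to land near the reported OR of 1.82), and this is not the study's dataset or code.

```python
# Simulated example of estimating an odds ratio for error reporting from a
# logistic regression; all values are assumptions for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
communication = rng.normal(3.5, 0.6, n)            # 1-5 teamwork communication score
logit_p = -2.0 + 0.6 * communication               # assumed true relationship
reported = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(communication)
fit = sm.Logit(reported, X).fit(disp=False)
print("odds ratio per 1-point increase:", np.exp(fit.params[1]))
print("95% CI:", np.exp(fit.conf_int()[1]))
```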
Vogel, Erin A.; Billups, Sarah J.; Herner, Sheryl J.
2016-01-01
Summary Objective The purpose of this study was to compare the effectiveness of an outpatient renal dose adjustment alert via a computerized provider order entry (CPOE) clinical decision support system (CDSS) versus a CDSS with alerts made to dispensing pharmacists. Methods This was a retrospective analysis of patients with renal impairment and 30 medications that are contraindicated or require dose-adjustment in such patients. The primary outcome was the rate of renal dosing errors for study medications that were dispensed between August and December 2013, when a pharmacist-based CDSS was in place, versus August through December 2014, when a prescriber-based CDSS was in place. A dosing error was defined as a prescription for one of the study medications dispensed to a patient where the medication was contraindicated or improperly dosed based on the patient’s renal function. The denominator was all prescriptions for the study medications dispensed during each respective study period. Results During the pharmacist- and prescriber-based CDSS study periods, 49,054 and 50,678 prescriptions, respectively, were dispensed for one of the included medications. Of these, 878 (1.8%) and 758 (1.5%) prescriptions were dispensed to patients with renal impairment in the respective study periods. Patients in each group were similar with respect to age, sex, and renal function stage. Overall, the five-month error rate was 0.38%. Error rates were similar between the two groups: 0.36% and 0.40% in the pharmacist- and prescriber-based CDSS, respectively (p=0.523). The medication with the highest error rate was dofetilide (0.51% overall) while the medications with the lowest error rate were dabigatran, fondaparinux, and spironolactone (0.00% overall). Conclusions Prescriber- and pharmacist-based CDSS provided comparable, low rates of potential medication errors. Future studies should be undertaken to examine patient benefits of the prescriber-based CDSS. PMID:27466041
Kim, Myoung Soo
2012-08-01
The purpose of this cross-sectional study was to examine the current status of IT-based medication error prevention system construction and the relationships among system construction, medication error management climate and perceptions of system use. The participants were 124 patient safety chief managers working for 124 hospitals with over 300 beds in Korea. The characteristics of the participants, the construction status and perception of systems (electronic pharmacopoeia, electronic drug dosage calculation system, computer-based patient safety reporting and bar-code system) and the medication error management climate were measured in this study. The data were collected between June and August 2011. Descriptive statistics, partial Pearson correlation and MANCOVA were used for data analysis. Electronic pharmacopoeia were constructed in 67.7% of participating hospitals, computer-based patient safety reporting systems in 50.8%, and electronic drug dosage calculation systems in 32.3%. Bar-code systems showed the lowest construction rate, at 16.1% of Korean hospitals. Higher rates of construction of IT-based medication error prevention systems were associated with greater safety and a more positive error management climate. Supportive strategies for improving perceptions of IT-based system use would encourage further system construction, and a positive error management climate would be more easily promoted.
A prospective audit of a nurse independent prescribing within critical care.
Carberry, Martin; Connelly, Sarah; Murphy, Jennifer
2013-05-01
To determine the prescribing activity of different staff groups within an intensive care unit (ICU) and combined high dependency unit (HDU), namely trainee and consultant medical staff and advanced nurse practitioners in critical care (ANPCC); to determine the number and type of prescription errors; to compare error rates between prescribing groups; and to raise awareness of prescribing activity within critical care. The introduction of government legislation has led to the development of non-medical prescribing roles in acute care. This has given ANPCCs working in critical care an opportunity to develop a prescribing role. The audit was performed over 7 days (Monday-Sunday), on rolling days over a 7-week period in September and October 2011 in three ICUs. All drug entries made on the ICU prescription by the three groups, trainee medical staff, ANPCCs and consultant anaesthetists, were audited once for errors. Data were collected by reviewing all drug entries for errors in patient data, drug dose, concentration, rate and frequency, legibility and prescriber signature. A paper data collection tool was used initially; data were later entered into a Microsoft Access database. A total of 1418 drug entries were audited from 77 patient prescription Cardexes. Error rates were 40 errors in 1418 prescriptions (2·8%) overall: ANPCC errors, n = 2 in 388 prescriptions (0·6%); trainee medical staff errors, n = 33 in 984 (3·4%); consultant errors, n = 5 in 73 (6·8%). The error rates differed significantly between prescribing groups (p < 0·01). This audit shows that prescribing error rates were low (2·8%). With the lowest error rate, the nurse practitioners were at least as effective as the other prescribing groups in this audit, in terms of errors only, in their prescribing diligence. National data are required in order to benchmark independent nurse prescribing practice in critical care. These findings could be used to inform research and role development within critical care. © 2012 The Authors. Nursing in Critical Care © 2012 British Association of Critical Care Nurses.
Zhang, Juanjuan; Collins, Steven H.
2017-01-01
This study uses theory and experiments to investigate the relationship between the passive stiffness of series elastic actuators and torque tracking performance in lower-limb exoskeletons during human walking. Through theoretical analysis with our simplified system model, we found that the optimal passive stiffness matches the slope of the desired torque-angle relationship. We also conjectured that a bandwidth limit resulted in a maximum rate of change in torque error that can be commanded through control input, which is fixed across desired and passive stiffness conditions. This led to hypotheses about the interactions among optimal control gains, passive stiffness and desired quasi-stiffness. Walking experiments were conducted with multiple angle-based desired torque curves. The lowest torque tracking errors observed for each combination of desired and passive stiffness were shown to be linearly proportional to the magnitude of the difference between the two stiffnesses. The proportional gains corresponding to the lowest observed errors were seen to be inversely proportional to both the passive stiffness and the desired stiffness. These findings supported our hypotheses and provide guidance for application-specific hardware customization as well as controller design for torque-controlled robotic legged locomotion. PMID:29326580
Hashim, Syaratul-Emma; Tan, Hui-Ken; Wan-Hazabbah, W H; Ibrahim, Mohtar
2008-11-01
Refractive error remains one of the primary causes of visual impairment in children worldwide, and its prevalence varies widely. The objective of this study was to determine the prevalence of refractive error and to study the possible factors associated with refractive error among primary school children of Malay ethnicity in the suburban area of Kota Bharu, Kelantan, Malaysia. A school-based cross-sectional study was performed from January to July 2006 by random selection of Standard 1 to Standard 6 students from 10 primary schools in the Kota Bharu district. Visual acuity was assessed using the logMAR ETDRS chart. Uncorrected visual acuity equal to or worse than 20/40 was used as the cut-off point for further evaluation by automated refraction and retinoscopic refraction. A total of 840 students were enumerated but only 705 were examined. Uncorrected visual impairment was seen in 54 (7.7%) children. The main cause of the uncorrected visual impairment was refractive error, which accounted for 90.7% of the total, giving a prevalence of 7.0% in the studied population. Myopia was the most common type of refractive error among children aged 6 to 12 years, with a prevalence of 5.4%, followed by hyperopia at 1.0% and astigmatism at 0.6%. Significant positive associations were noted between myopia development and increasing age (P <0.005), more hours spent reading books (P <0.005), having siblings who wear glasses (P <0.005) and having parents with a higher educational level (P <0.005). Malays in suburban Kelantan (5.4%) have the lowest prevalence of myopia compared with Malays in the metropolitan cities of Kuala Lumpur (9.2%) and Singapore (22.1%). The ethnicity-specific prevalence of myopia was thus lowest among Malays in Kota Bharu, followed by Kuala Lumpur, and highest among Singaporean Malays. Better socio-economic conditions could have contributed to the higher myopia rates in the cities, since the genetic background of these ethnic Malays is similar.
Ferrer-Mileo, V; Guede-Fernandez, F; Fernandez-Chimeno, M; Ramos-Castro, J; Garcia-Gonzalez, M A
2015-08-01
This work compares several fiducial points for detecting the arrival of a new pulse in a photoplethysmographic signal acquired with the built-in camera of smartphones or with a photoplethysmograph. An optimization process for the signal preprocessing stage has also been carried out. Finally, we characterize the error produced when the best cutoff frequencies and fiducial point are used for smartphones and for the photoplethysmograph, and assess whether the error of smartphones can reasonably be explained by variations in pulse transit time. The results reveal that the peak of the first derivative and the minimum of the second derivative of the pulse wave have the lowest error. Moreover, for these points, high-pass filtering the signal between 0.1 and 0.8 Hz and low-pass filtering around 2.7 Hz or 3.5 Hz are the best cutoff frequencies. Finally, the error in smartphones is slightly higher than in a photoplethysmograph.
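A hedged sketch of the preprocessing and fiducial-point choice the abstract identifies as best (band-pass filtering followed by the peak of the first derivative); the sampling rate, filter order, exact cutoffs and synthetic signal are assumptions, not the study's settings.

```python
# Sketch: band-pass a toy PPG signal, then detect pulse arrivals at the peak of the
# first derivative of each beat. Parameters are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

fs = 100.0                                   # sampling rate, Hz (assumed)
t = np.arange(0, 10, 1 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)   # toy pulse ~72 bpm

b, a = butter(2, [0.5 / (fs / 2), 2.7 / (fs / 2)], btype="band")
filtered = filtfilt(b, a, ppg)

d1 = np.gradient(filtered, 1 / fs)           # first derivative of the pulse wave
arrivals, _ = find_peaks(d1, distance=int(0.4 * fs))   # one fiducial point per beat
print("beat-to-beat intervals (s):", np.diff(arrivals) / fs)
```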
Alsulami, Zayed; Choonara, Imti; Conroy, Sharon
2014-06-01
To evaluate how closely double-checking policies are followed by nurses in paediatric areas and also to identify the types, frequency and rates of medication administration errors that occur despite the double-checking process. Double-checking by two nurses is an intervention used in many UK hospitals to prevent or reduce medication administration errors. There is, however, insufficient evidence to either support or refute the practice of double-checking in terms of medication error risk reduction. Prospective observational study. This was a prospective observational study of paediatric nurses' adherence to the double-checking process for medication administration from April-July 2012. Drug dose administration events (n = 2000) were observed. Independent drug dose calculation, rate of administering intravenous bolus drugs and labelling of flush syringes were the steps with the lowest adherence rates. Drug dose calculation was only double-checked independently in 591 (30%) drug administrations. There was a statistically significant difference in nurses' adherence rate to the double-checking steps between weekdays and weekends in nine of the 15 evaluated steps. Medication administration errors (n = 191) or deviations from policy were observed, at a rate of 9·6% of drug administrations. These included 64 drug doses, which were left for parents to administer without nurse observation. There was variation in paediatric nurses' adherence to double-checking steps during medication administration. The most frequent type of administration error or deviation from policy involved the medicine being given to the parents to administer to the child when the nurse was not present. © 2013 John Wiley & Sons Ltd.
A Modified MinMax k-Means Algorithm Based on PSO.
Wang, Xiaoyan; Bai, Yanping
The MinMax k-means algorithm is widely used to tackle the effect of bad initialization by minimizing the maximum intraclustering errors. Two parameters, including the exponent parameter and memory parameter, are involved in the executive process. Since different parameters have different clustering errors, it is crucial to choose appropriate parameters. In the original algorithm, a practical framework is given. Such framework extends the MinMax k-means to automatically adapt the exponent parameter to the data set. It has been believed that if the maximum exponent parameter has been set, then the programme can reach the lowest intraclustering errors. However, our experiments show that this is not always correct. In this paper, we modified the MinMax k-means algorithm by PSO to determine the proper values of parameters which can subject the algorithm to attain the lowest clustering errors. The proposed clustering method is tested on some favorite data sets in several different initial situations and is compared to the k-means algorithm and the original MinMax k-means algorithm. The experimental results indicate that our proposed algorithm can reach the lowest clustering errors automatically.
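For context, the sketch below outlines the weighted-objective iteration that MinMax k-means is generally understood to use, with the exponent p and memory beta exposed as the parameters the paper proposes to tune with PSO. It is a simplified reading of the algorithm, not the authors' implementation.

```python
# Hedged sketch of a MinMax k-means style iteration: clusters with larger error
# receive larger weights, so minimizing the weighted sum pushes down the maximum
# intra-cluster error. p (exponent) and beta (memory) are the tunable parameters.
import numpy as np

def minmax_kmeans(X, k, p=0.5, beta=0.3, iters=100, seed=0):
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    w = np.full(k, 1.0 / k)                        # cluster weights
    V = np.zeros(k)
    for _ in range(iters):
        # assign each point to the cluster minimizing the weighted squared distance
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)   # (n, k)
        labels = np.argmin((w ** p) * d2, axis=1)
        # update centers and per-cluster errors
        V = np.zeros(k)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
                V[j] = ((pts - centers[j]) ** 2).sum()
        # weight update: clusters with large error get more weight
        new_w = (V + 1e-12) ** (1.0 / (1.0 - p))
        new_w /= new_w.sum()
        w = beta * w + (1.0 - beta) * new_w        # memory term
    return labels, centers, V.max()                # V.max() is the quantity being minimized
```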
Mandava, Pitchaiah; Krumpelman, Chase S; Shah, Jharna N; White, Donna L; Kent, Thomas A
2013-01-01
Clinical trial outcomes often involve an ordinal scale of subjective functional assessments, but the optimal way to quantify results is not clear. In stroke, for the most commonly used scale, the modified Rankin Score (mRS), analysis over a range of scores ("Shift") has been proposed as superior to dichotomization because of greater information transfer. The influence of known uncertainties in mRS assessment has not been quantified. We hypothesized that errors caused by these uncertainties could be quantified by applying information theory. Using Shannon's model, we quantified errors of the "Shift" compared to dichotomized outcomes using published distributions of mRS uncertainties and applied this model to clinical trials. We identified 35 randomized stroke trials that met inclusion criteria. Each trial's mRS distribution was multiplied with the noise distribution from published mRS inter-rater variability to generate an error percentage for "shift" and for dichotomized cut-points. For the SAINT I neuroprotectant trial, considered positive by "shift" mRS while the larger follow-up SAINT II trial was negative, we recalculated the sample size that would have been required if classification uncertainty were taken into account. Considering the full mRS range, the error rate was 26.1%±5.31 (mean±SD). Error rates were lower for all dichotomizations tested using cut-points (e.g. mRS 1: 6.8%±2.89; overall p<0.001). Taking errors into account, SAINT I would have required 24% more subjects than were randomized. We show that when uncertainty in assessments is considered, the lowest error rates are obtained with dichotomization. While using the full range of mRS is conceptually appealing, the gain of information is counterbalanced by a decrease in reliability. The resultant errors need to be considered, since sample size may otherwise be underestimated. In principle, we have outlined an approach to error estimation for any condition in which there are uncertainties in outcome assessment. We provide the user with programs to calculate and incorporate errors into sample size estimation.
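A toy illustration of the core idea: propagate an mRS outcome distribution through an inter-rater confusion (noise) matrix and compare misclassification on the full scale against a dichotomized cut-point. All numbers are invented, not the trial data or the published inter-rater distributions.

```python
# Toy comparison of misclassification rates for the full ordinal scale vs. a
# dichotomized cut-point, given an assumed inter-rater confusion matrix.
import numpy as np

p_true = np.array([0.10, 0.15, 0.20, 0.20, 0.15, 0.10, 0.10])   # P(true mRS = 0..6), invented
# confusion[i, j] = P(rated j | true i); mostly diagonal with spill to neighbours
confusion = np.zeros((7, 7))
for i in range(7):
    confusion[i, i] = 0.7
    if i > 0: confusion[i, i - 1] = 0.15
    if i < 6: confusion[i, i + 1] = 0.15
confusion /= confusion.sum(axis=1, keepdims=True)

full_scale_error = sum(p_true[i] * (1 - confusion[i, i]) for i in range(7))

cut = 1                                    # dichotomize as mRS <= 1 vs. > 1
good = np.arange(7) <= cut
flip = np.array([confusion[i, ~good].sum() if good[i] else confusion[i, good].sum()
                 for i in range(7)])       # P(dichotomized class flips | true i)
dichotomized_error = float(p_true @ flip)

print(f"full-scale error: {full_scale_error:.1%}, dichotomized error: {dichotomized_error:.1%}")
```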
Thompson, Shirley; Sawyer, Jennifer; Bonam, Rathan; Valdivia, J E
2009-07-01
The German EPER, TNO, Belgium, LandGEM, and Scholl Canyon models for estimating methane production were compared to methane recovery rates for 35 Canadian landfills, assuming that 20% of emissions were not recovered. Two different fractions of degradable organic carbon (DOC(f)) were applied in all models. Most models performed better when the DOC(f) was 0.5 rather than 0.77. The Belgium, Scholl Canyon, and LandGEM version 2.01 models produced the best results of the existing models, with mean absolute errors relative to methane generation rates (recovery rates + 20%) of 91%, 71%, and 89%, respectively, at 0.50 DOC(f), and 171%, 115%, and 81% at 0.77 DOC(f). The Scholl Canyon model typically overestimated methane recovery rates, and the LandGEM version 2.01 model, which modifies the Scholl Canyon model by dividing waste by 10, consistently underestimated them; this comparison suggested that setting the divisor for waste in the Scholl Canyon model between one and ten could improve its accuracy. At 0.50 DOC(f) and 0.77 DOC(f), the modified model had the lowest mean absolute error with divisors of 1.5 (63 +/- 45%) and 2.3 (57 +/- 47%), respectively. These modified models reduced error and variability substantially, and both showed a strong correlation of r = 0.92.
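The models compared above share a first-order-decay form; the sketch below exposes the waste divisor discussed in the abstract as a parameter (1 for the Scholl Canyon form, 10 for LandGEM v2.01, 1.5-2.3 for the modified divisors). The rate constant, methane potential and waste stream are placeholders, not the study's calibrated inputs.

```python
# Hedged sketch of a first-order-decay methane estimate with an adjustable waste
# divisor, following the abstract's description. All parameter values are assumptions.
import math

def methane_generation(annual_waste_tonnes, k=0.05, L0=170.0, divisor=1.0):
    """Estimated CH4 generation (m^3/yr) in the year after the last deposit."""
    total = 0.0
    years_since = len(annual_waste_tonnes)
    for age, mass in enumerate(annual_waste_tonnes, start=1):
        t = years_since - age + 1              # years since this mass was landfilled
        total += k * L0 * (mass / divisor) * math.exp(-k * t)
    return total

waste = [50_000] * 20                          # 20 years of constant filling (tonnes/yr)
for d in (1.0, 1.5, 2.3, 10.0):
    print(f"divisor {d}: {methane_generation(waste, divisor=d):,.0f} m^3 CH4/yr")
```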
An audit on the reporting of critical results in a tertiary institute.
Rensburg, Megan A; Nutt, Louise; Zemlin, Annalise E; Erasmus, Rajiv T
2009-03-01
Critical result reporting is a requirement for accreditation by accreditation bodies worldwide. Accurate, prompt communication of results to the clinician by the laboratory is of extreme importance. Repeating of the critical result by the recipient has been used as a means to improve the accuracy of notification. Our objective was to assess the accuracy of notification of critical chemical pathology laboratory results telephoned out to clinicians/clinical areas. We hypothesize that read-back of telephoned critical laboratory results by the recipient may improve the accuracy of the notification. This was a prospective study, where all critical results telephoned by chemical pathologists and registrars at Tygerberg Hospital were monitored for one month. The recipient was required to repeat the result (patient name, folder number and test results). Any error, as well as the designation of the recipient was logged. Of 472 outgoing telephone calls, 51 errors were detected (error rate 10.8%). Most errors were made when recording the folder number (64.7%), with incorrect patient name being the lowest (5.9%). Calls to the clinicians had the highest error rate (20%), most of them being the omission of recording folder numbers. Our audit highlights the potential errors during the post-analytical phase of laboratory testing. The importance of critical result reporting is still poorly recognized in South Africa. Implementation of a uniform accredited practice for communication of critical results can reduce error and improve patient safety.
A Modified MinMax k-Means Algorithm Based on PSO
2016-01-01
The MinMax k-means algorithm is widely used to tackle the effect of bad initialization by minimizing the maximum intraclustering errors. Two parameters, including the exponent parameter and memory parameter, are involved in the executive process. Since different parameters have different clustering errors, it is crucial to choose appropriate parameters. In the original algorithm, a practical framework is given. Such framework extends the MinMax k-means to automatically adapt the exponent parameter to the data set. It has been believed that if the maximum exponent parameter has been set, then the programme can reach the lowest intraclustering errors. However, our experiments show that this is not always correct. In this paper, we modified the MinMax k-means algorithm by PSO to determine the proper values of parameters which can subject the algorithm to attain the lowest clustering errors. The proposed clustering method is tested on some favorite data sets in several different initial situations and is compared to the k-means algorithm and the original MinMax k-means algorithm. The experimental results indicate that our proposed algorithm can reach the lowest clustering errors automatically. PMID:27656201
Horvath, Gabor; Farkas, Etelka; Boncz, Ildiko; Blaho, Miklos; Kriska, Gyorgy
2012-01-01
Experts in animal locomotion have known the characteristics of quadruped walking since the pioneering work of Eadweard Muybridge in the 1880s. Most quadrupeds advance their legs in the same lateral sequence when walking, and only the timing of their supporting feet differs more or less. How did this scientific knowledge influence the correctness of quadruped walking depictions in the fine arts? Did the proportion of erroneous quadruped walking illustrations relative to their total number (i.e. the error rate) decrease after Muybridge? How correctly did cavemen (upper palaeolithic Homo sapiens) illustrate the walking of their quadruped prey in prehistoric times? The aim of this work is to answer these questions. We analyzed 1000 prehistoric and modern artistic quadruped walking depictions and determined whether they are correct or not with respect to the limb attitudes presented, assuming that the other aspects of the depictions used to determine the animals' gait are illustrated correctly. The error rate of modern pre-Muybridgean quadruped walking illustrations was 83.5%, much higher than the 73.3% error rate of mere chance. It decreased to 57.9% after 1887, that is, in the post-Muybridgean period. Most surprisingly, the prehistoric quadruped walking depictions had the lowest error rate, 46.2%. All these differences were statistically significant. Thus, cavemen were more keenly aware of the slower motion of their prey animals and illustrated quadruped walking more precisely than later artists. PMID:23227149
Determination of Lowest Achievable Emission Rate for Coors Container Corporation
This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
Transfer of Technology in Determining Lowest Achievable Emission Rate (LAER)
This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
Hayashino, Yasuaki; Utsugi-Ozaki, Makiko; Feldman, Mitchell D.; Fukuhara, Shunichi
2012-01-01
The presence of hope has been found to influence an individual's ability to cope with stressful situations. The objective of this study was to evaluate the relationship between medical errors, hope and burnout among practicing physicians using validated metrics. A prospective cohort study was conducted among hospital-based physicians practicing in Japan (N = 836). Measures included the validated Burnout Scale, self-assessment of medical errors and the Herth Hope Index (HHI). The main outcome measure was the frequency of self-perceived medical errors, and Poisson regression analysis was used to evaluate the association between hope and medical error. A total of 361 errors were reported in 836 physician-years. We observed a significant association between hope and self-report of medical errors. Compared with the lowest tertile of HHI, the incidence rate ratios (IRRs) of self-perceived medical errors for physicians in the 2nd and 3rd tertiles were 0.44 (95%CI, 0.34 to 0.58) and 0.54 (95%CI, 0.42 to 0.70), respectively. In stratified analysis by hope score, among physicians with a low hope score, those who experienced higher burnout reported a higher incidence of errors; physicians with high hope scores did not report a high incidence of errors, even if they experienced high burnout. Self-perceived medical errors showed a strong association with physicians' hope, and hope modified the association between physicians' burnout and self-perceived medical errors. PMID:22530055
Tembuyser, Lien; Tack, Véronique; Zwaenepoel, Karen; Pauwels, Patrick; Miller, Keith; Bubendorf, Lukas; Kerr, Keith; Schuuring, Ed; Thunnissen, Erik; Dequeker, Elisabeth M. C.
2014-01-01
Background and Purpose Molecular profiling should be performed on all advanced non-small cell lung cancer with non-squamous histology to allow treatment selection. Currently, this should include EGFR mutation testing and testing for ALK rearrangements. ROS1 is another emerging target. ALK rearrangement status is a critical biomarker to predict response to tyrosine kinase inhibitors such as crizotinib. To promote high quality testing in non-small cell lung cancer, the European Society of Pathology has introduced an external quality assessment scheme. This article summarizes the results of the first two pilot rounds organized in 2012–2013. Materials and Methods Tissue microarray slides consisting of cell-lines and resection specimens were distributed with the request for routine ALK testing using IHC or FISH. Participation in ALK FISH testing included the interpretation of four digital FISH images. Results Data from 173 different laboratories was obtained. Results demonstrate decreased error rates in the second round for both ALK FISH and ALK IHC, although the error rates were still high and the need for external quality assessment in laboratories performing ALK testing is evident. Error rates obtained by FISH were lower than by IHC. The lowest error rates were observed for the interpretation of digital FISH images. Conclusion There was a large variety in FISH enumeration practices. Based on the results from this study, recommendations for the methodology, analysis, interpretation and result reporting were issued. External quality assessment is a crucial element to improve the quality of molecular testing. PMID:25386659
Tembuyser, Lien; Tack, Véronique; Zwaenepoel, Karen; Pauwels, Patrick; Miller, Keith; Bubendorf, Lukas; Kerr, Keith; Schuuring, Ed; Thunnissen, Erik; Dequeker, Elisabeth M C
2014-01-01
Molecular profiling should be performed on all advanced non-small cell lung cancer with non-squamous histology to allow treatment selection. Currently, this should include EGFR mutation testing and testing for ALK rearrangements. ROS1 is another emerging target. ALK rearrangement status is a critical biomarker to predict response to tyrosine kinase inhibitors such as crizotinib. To promote high quality testing in non-small cell lung cancer, the European Society of Pathology has introduced an external quality assessment scheme. This article summarizes the results of the first two pilot rounds organized in 2012-2013. Tissue microarray slides consisting of cell-lines and resection specimens were distributed with the request for routine ALK testing using IHC or FISH. Participation in ALK FISH testing included the interpretation of four digital FISH images. Data from 173 different laboratories was obtained. Results demonstrate decreased error rates in the second round for both ALK FISH and ALK IHC, although the error rates were still high and the need for external quality assessment in laboratories performing ALK testing is evident. Error rates obtained by FISH were lower than by IHC. The lowest error rates were observed for the interpretation of digital FISH images. There was a large variety in FISH enumeration practices. Based on the results from this study, recommendations for the methodology, analysis, interpretation and result reporting were issued. External quality assessment is a crucial element to improve the quality of molecular testing.
Quotation accuracy in medical journal articles-a systematic review and meta-analysis.
Jergas, Hannah; Baethge, Christopher
2015-01-01
Background. Quotations and references are an indispensable element of scientific communication. They should support what authors claim or provide important background information for readers. Studies indicate, however, that quotations not serving their purpose-quotation errors-may be prevalent. Methods. We carried out a systematic review, meta-analysis and meta-regression of quotation errors, taking account of differences between studies in error ascertainment. Results. Out of 559 studies screened we included 28 in the main analysis, and estimated major, minor and total quotation error rates of 11.9%, 95% CI [8.4, 16.6], 11.5% [8.3, 15.7], and 25.4% [19.5, 32.4]. While heterogeneity was substantial, even the lowest estimate of total quotation errors was considerable (6.7%). Indirect references accounted for less than one sixth of all quotation problems. The findings remained robust in a number of sensitivity and subgroup analyses (including risk of bias analysis) and in meta-regression. There was no indication of publication bias. Conclusions. Readers of medical journal articles should be aware of the fact that quotation errors are common. Measures against quotation errors include spot checks by editors and reviewers, correct placement of citations in the text, and declarations by authors that they have checked cited material. Future research should elucidate if and to what degree quotation errors are detrimental to scientific progress.
Strong diffusion formulation of Markov chain ensembles and its optimal weaker reductions
NASA Astrophysics Data System (ADS)
Güler, Marifi
2017-10-01
Two self-contained diffusion formulations, in the form of coupled stochastic differential equations, are developed for the temporal evolution of state densities over an ensemble of Markov chains evolving independently under a common transition rate matrix. Our first formulation derives from Kurtz's strong approximation theorem of density-dependent Markov jump processes [Stoch. Process. Their Appl. 6, 223 (1978), 10.1016/0304-4149(78)90020-0] and, therefore, strongly converges with an error bound of the order of ln N/N for ensemble size N. The second formulation eliminates some fluctuation variables, and correspondingly some noise terms, within the governing equations of the strong formulation, with the objective of achieving a simpler analytic formulation and a faster computation algorithm when the transition rates are constant or slowly varying. There, the reduction of the structural complexity is optimal in the sense that the elimination of any given set of variables takes place with the lowest attainable increase in the error bound. The resultant formulations are supported by numerical simulations.
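As a minimal, hedged illustration of this kind of diffusion formulation (not the paper's equations), the sketch below integrates a Langevin approximation for an ensemble of N independent two-state chains, where the fluctuation term shrinks like 1/sqrt(N) in line with the stated error bound.

```python
# Euler-Maruyama sketch of a diffusion approximation for the fraction x of chains in
# state 1, for N independent two-state chains with rates a (0 -> 1) and b (1 -> 0).
import numpy as np

def simulate(N=1000, a=2.0, b=1.0, x0=0.1, T=5.0, dt=1e-3, seed=1):
    rng = np.random.default_rng(seed)
    steps = int(T / dt)
    x = np.empty(steps + 1)
    x[0] = x0
    for i in range(steps):
        drift = a * (1 - x[i]) - b * x[i]
        # fluctuations shrink like 1/sqrt(N), as in the strong approximation bound
        diffusion = np.sqrt(max(a * (1 - x[i]) + b * x[i], 0.0) / N)
        x[i + 1] = x[i] + drift * dt + diffusion * np.sqrt(dt) * rng.standard_normal()
        x[i + 1] = min(max(x[i + 1], 0.0), 1.0)   # keep the density in [0, 1]
    return x

traj = simulate()
print("stationary mean ~ a/(a+b):", traj[-1000:].mean())
```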
Effects of input device and motion type on a cursor-positioning task.
Yau, Yi-Jan; Hwang, Sheue-Ling; Chao, Chin-Jung
2008-02-01
Many studies have investigated the performance of non-keyboard input devices in static situations, but few have considered the effects of motion type on manipulating these devices. In this study, the performance of 12 men using four input devices (three trackballs: a currently used trackball, a trackman wheel, and an erectly held trackball, as well as a touch screen) was compared under five motion types: static, heave, roll, pitch, and random movements. The input device and motion type significantly affected movement speed and accuracy, and their interaction significantly affected movement speed. The touch screen was the fastest but the least accurate input device. The erectly held trackball was the slowest, whereas the error rate of the currently used trackball was the lowest. The impairments caused by random motion on movement time and error rate were larger than those of the other motion types. Considering both objective and subjective evaluations, the trackman wheel and the currently used trackball were more efficient to operate than the erectly held trackball and touch screen under the motion environments.
A comparison of acoustic monitoring methods for common anurans of the northeastern United States
Brauer, Corinne; Donovan, Therese; Mickey, Ruth M.; Katz, Jonathan; Mitchell, Brian R.
2016-01-01
Many anuran monitoring programs now include autonomous recording units (ARUs). These devices collect audio data for extended periods of time with little maintenance and at sites where traditional call surveys might be difficult. Additionally, computer software programs have grown increasingly accurate at automatically identifying the calls of species. However, increased automation may cause increased error. We collected 435 min of audio data with 2 types of ARUs at 10 wetland sites in Vermont and New York, USA, from 1 May to 1 July 2010. For each minute, we determined presence or absence of 4 anuran species (Hyla versicolor, Pseudacris crucifer, Anaxyrus americanus, and Lithobates clamitans) using 1) traditional human identification versus 2) computer-mediated identification with the software package Song Scope® (Wildlife Acoustics, Concord, MA). Detections were compared with a data set consisting of verified calls in order to quantify false positive, false negative, true positive, and true negative rates. Multinomial logistic regression analysis revealed a strong (P < 0.001) 3-way interaction between ARU recorder type, identification method, and focal species, as well as a trend in the main effect of rain (P = 0.059). Overall, human surveyors had the lowest total error rate (<2%), compared with 18–31% total errors for the automated methods. Total error rates varied by species, ranging from 4% for A. americanus to 26% for L. clamitans. The presence of rain may reduce false negative rates. For survey minutes where anurans were known to be calling, the odds of a false negative were increased when fewer individuals of the same species were calling.
Quotation accuracy in medical journal articles—a systematic review and meta-analysis
Jergas, Hannah
2015-01-01
Background. Quotations and references are an indispensable element of scientific communication. They should support what authors claim or provide important background information for readers. Studies indicate, however, that quotations not serving their purpose—quotation errors—may be prevalent. Methods. We carried out a systematic review, meta-analysis and meta-regression of quotation errors, taking account of differences between studies in error ascertainment. Results. Out of 559 studies screened we included 28 in the main analysis, and estimated major, minor and total quotation error rates of 11.9%, 95% CI [8.4, 16.6], 11.5% [8.3, 15.7], and 25.4% [19.5, 32.4]. While heterogeneity was substantial, even the lowest estimate of total quotation errors was considerable (6.7%). Indirect references accounted for less than one sixth of all quotation problems. The findings remained robust in a number of sensitivity and subgroup analyses (including risk of bias analysis) and in meta-regression. There was no indication of publication bias. Conclusions. Readers of medical journal articles should be aware of the fact that quotation errors are common. Measures against quotation errors include spot checks by editors and reviewers, correct placement of citations in the text, and declarations by authors that they have checked cited material. Future research should elucidate if and to what degree quotation errors are detrimental to scientific progress. PMID:26528420
2014-01-01
Myoelectric control has been used for decades to control powered upper limb prostheses. Conventional, amplitude-based control has been employed to control a single prosthesis degree of freedom (DOF) such as closing and opening of the hand. Within the last decade, new and advanced arm and hand prostheses have been constructed that are capable of actuating numerous DOFs. Pattern recognition control has been proposed to control a greater number of DOFs than conventional control, but has traditionally been limited to sequentially controlling DOFs one at a time. However, able-bodied individuals use multiple DOFs simultaneously, and it may be beneficial to provide amputees the ability to perform simultaneous movements. In this study, four amputees who had undergone targeted motor reinnervation (TMR) surgery with previous training using myoelectric prostheses were configured to use three control strategies: 1) conventional amplitude-based myoelectric control, 2) sequential (one-DOF) pattern recognition control, 3) simultaneous pattern recognition control. Simultaneous pattern recognition was enabled by having amputees train each simultaneous movement as a separate motion class. For tasks that required control over just one DOF, sequential pattern recognition based control performed the best with the lowest average completion times, completion rates and length error. For tasks that required control over 2 DOFs, the simultaneous pattern recognition controller performed the best with the lowest average completion times, completion rates and length error compared to the other control strategies. In the two strategies in which users could employ simultaneous movements (conventional and simultaneous pattern recognition), amputees chose to use simultaneous movements 78% of the time with simultaneous pattern recognition and 64% of the time with conventional control for tasks that required two DOF motions to reach the target. These results suggest that when amputees are given the ability to control multiple DOFs simultaneously, they choose to perform tasks that utilize multiple DOFs with simultaneous movements. Additionally, they were able to perform these tasks with higher performance (faster speed, lower length error and higher completion rates) without losing substantial performance in 1 DOF tasks. PMID:24410948
The error analysis of Lobular and segmental division of right liver by volume measurement.
Zhang, Jianfei; Lin, Weigang; Chi, Yanyan; Zheng, Nan; Xu, Qiang; Zhang, Guowei; Yu, Shengbo; Li, Chan; Wang, Bin; Sui, Hongjin
2017-07-01
The aim of this study is to explore the inconsistencies between right liver volume as measured by imaging and the actual anatomical appearance of the right lobe. Five healthy donated livers were studied. The liver slices were obtained with hepatic segments multicolor-infused through the portal vein. In the slices, the lobes were divided by two methods: radiological landmarks and real anatomical boundaries. The areas of the right anterior lobe (RAL) and right posterior lobe (RPL) on each slice were measured using Photoshop CS5 and AutoCAD, and the volumes of the two lobes were calculated. There was no statistically significant difference between the volumes of the RAL or RPL as measured by the radiological landmarks (RL) and anatomical boundaries (AB) methods. However, the curves of the square error value of the RAL and RPL measured using CT showed that the three lowest points were at the cranial, intermediate, and caudal levels. The U- or V-shaped curves of the square error rate of the RAL and RPL revealed that the lowest value is at the intermediate level and the highest at the cranial and caudal levels. On CT images, less accurate landmarks were used to divide the RAL and RPL at the cranial and caudal layers. The measured volumes of hepatic segments VIII and VI would be less than their true values, and the measured volumes of hepatic segments VII and V would be greater than their true values, according to radiological landmarks. Clin. Anat. 30:585-590, 2017. © 2017 Wiley Periodicals, Inc.
Single-sample method for the estimation of glomerular filtration rate in children
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tauxe, W.N.; Bagchi, A.; Tepe, P.G.
1987-03-01
A method for the determination of the glomerular filtration rate (GFR) in children which involves the use of a single-plasma sample (SPS) after the injection of a radioactive indicator such as radioiodine labeled diatrizoate (Hypaque) has been developed. This is analogous to previously published SPS techniques of effective renal plasma flow (ERPF) in adults and children and GFR SPS techniques in adults. As a reference standard, GFR has been calculated from compartment analysis of injected radiopharmaceuticals (Sapirstein Method). Theoretical volumes of distribution were calculated at various times after injection (Vt) by dividing the total injected counts (I) by the plasma concentration (Ct) expressed in liters, determined by counting an aliquot of plasma in a well type scintillation counter. Errors of predicting GFR from the various Vt values were determined as the standard error of estimate (Sy.x) in ml/min. They were found to be relatively high early after injection and to fall to a nadir of 3.9 ml/min at 91 min. The Sy.x-Vt relationship was examined in linear, quadratic, and exponential form, but the simpler linear relationship was found to yield the lowest error. Other data calculated from the compartment analysis of the reference plasma disappearance curves are presented, but at this time have apparently little clinical relevance.
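For readers unfamiliar with the single-sample calculation, the arithmetic can be sketched as follows. This is a minimal illustration in Python with entirely hypothetical numbers (the injected counts, plasma concentrations and reference GFR values are placeholders, not data from the study): the apparent volume of distribution Vt = I/Ct is computed and GFR is then predicted from Vt by the linear relationship that the authors found to give the lowest standard error of estimate.

```python
import numpy as np

injected_counts = 1.0e6                                   # I: total injected counts (hypothetical)
plasma_conc = np.array([5200.0, 3900.0, 2700.0, 1800.0])  # Ct at 91 min, counts per litre (hypothetical)
v_t = injected_counts / plasma_conc                       # theoretical volume of distribution Vt = I / Ct (litres)

gfr_ref = np.array([28.0, 41.0, 63.0, 95.0])              # reference GFR from compartment analysis (ml/min, hypothetical)

# linear fit GFR ~ a * Vt + b; the study found the linear form gave the lowest Sy.x
a, b = np.polyfit(v_t, gfr_ref, 1)
residuals = gfr_ref - (a * v_t + b)
sy_x = np.sqrt(np.sum(residuals ** 2) / (len(gfr_ref) - 2))   # standard error of estimate (ml/min)
print(f"GFR ~ {a:.2f} * Vt + {b:.1f},  Sy.x = {sy_x:.2f} ml/min")
```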
Attitude Error Representations for Kalman Filtering
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Bauer, Frank H. (Technical Monitor)
2002-01-01
The quaternion has the lowest dimensionality possible for a globally nonsingular attitude representation. The quaternion must obey a unit norm constraint, though, which has led to the development of an extended Kalman filter using a quaternion for the global attitude estimate and a three-component representation for attitude errors. We consider various attitude error representations for this Multiplicative Extended Kalman Filter and its second-order extension.
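As a rough illustration of the multiplicative error representation discussed here, the sketch below (Python, with made-up numbers; not the filter equations from the paper) builds an error quaternion from a three-component rotation-vector error and applies it multiplicatively to a reference quaternion, renormalising to respect the unit-norm constraint.

```python
import numpy as np

def quat_mult(p, q):
    # Hamilton product, scalar-last convention [x, y, z, w]
    px, py, pz, pw = p
    qx, qy, qz, qw = q
    return np.array([
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
        pw*qw - px*qx - py*qy - pz*qz,
    ])

def error_quat(a):
    # small three-component rotation vector a (rad) -> first-order error quaternion
    dq = np.concatenate([0.5 * a, [1.0]])
    return dq / np.linalg.norm(dq)           # renormalise to keep the unit norm

q_ref = np.array([0.0, 0.0, 0.0, 1.0])        # reference (estimated) attitude, hypothetical
a_err = np.array([0.01, -0.02, 0.005])        # three-component attitude error (rad), hypothetical
q_true = quat_mult(error_quat(a_err), q_ref)  # multiplicative correction of the global estimate
print(q_true)
```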
Robust Timing Synchronization in Aeronautical Mobile Communication Systems
NASA Technical Reports Server (NTRS)
Xiong, Fu-Qin; Pinchak, Stanley
2004-01-01
This work details a study of robust synchronization schemes suitable for satellite to mobile aeronautical applications. A new scheme, the Modified Sliding Window Synchronizer (MSWS), is devised and compared with existing schemes, including the traditional Early-Late Gate Synchronizer (ELGS), the Gardner Zero-Crossing Detector (GZCD), and the Sliding Window Synchronizer (SWS). Performance of the synchronization schemes is evaluated by a set of metrics that indicate performance in digital communications systems. The metrics are convergence time, mean square phase error (or root mean-square phase error), lowest SNR for locking, initial frequency offset performance, midstream frequency offset performance, and system complexity. The performance of the synchronizers is evaluated by means of Matlab simulation models. A simulation platform is devised to model the satellite to mobile aeronautical channel, consisting of a Quadrature Phase Shift Keying modulator, an additive white Gaussian noise channel, and a demodulator front end. Simulation results show that the MSWS provides the most robust performance at the cost of system complexity. The GZCD provides a good tradeoff between robustness and system complexity for communication systems that require high symbol rates or low overall system costs. The ELGS has a high system complexity despite its average performance. Overall, the SWS, originally designed for multi-carrier systems, performs very poorly in single-carrier communications systems. Table 5.1 in Section 5 provides a ranking of each of the synchronization schemes in terms of the metrics set forth in Section 4.1. Details of comparison are given in Section 5. Based on the results presented in Table 5.1, the most robust synchronization scheme examined in this work is the high-sample-rate Modified Sliding Window Synchronizer. A close second is its low-sample-rate cousin. The tradeoff between complexity and lowest mean-square phase error determines the rankings of the Gardner Zero-Crossing Detector and both versions of the Early-Late Gate Synchronizer. The least robust models are the high and low-sample-rate Sliding Window Synchronizers. Consequently, the recommended replacement synchronizer for NASA's Advanced Air Transportation Technologies mobile aeronautical communications system is the high-sample-rate Modified Sliding Window Synchronizer. By incorporating this synchronizer into their system, NASA can be assured that their system will be operational in extremely adverse conditions. The quick convergence time of the MSWS should allow the use of high-level protocols. However, if NASA feels that reduced system complexity is the most important aspect of their replacement synchronizer, the Gardner Zero-Crossing Detector would be the best choice.
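For orientation, the Gardner zero-crossing idea mentioned above can be sketched generically as follows. This is a textbook-style illustration in Python, not the MSWS or the exact detector from the report; it assumes two real-valued samples per symbol and a toy, perfectly timed stream in which mid-symbol samples sit halfway between adjacent symbol values, so the detector output is zero.

```python
import numpy as np

def gardner_ted(samples):
    """Gardner-style timing error detector; samples at 2 samples per symbol (real-valued)."""
    errors = []
    for k in range(2, len(samples), 2):
        strobe_prev = samples[k - 2]   # previous symbol-time sample
        midpoint    = samples[k - 1]   # half-symbol (zero-crossing) sample
        strobe_curr = samples[k]       # current symbol-time sample
        errors.append(midpoint * (strobe_prev - strobe_curr))
    return np.array(errors)

# toy stream: strobe samples equal the symbol values, mid samples sit on the linear transition
sym = np.array([1.0, -1.0, 1.0, 1.0, -1.0])
stream = []
for i in range(len(sym) - 1):
    stream += [sym[i], (sym[i] + sym[i + 1]) / 2.0]
stream.append(sym[-1])

print(gardner_ted(np.array(stream)))   # ~0 for every symbol when timing is correct
```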
Dudley, Robert W.
2015-12-03
The largest average errors of prediction are associated with regression equations for the lowest streamflows derived for months during which the lowest streamflows of the year occur (such as the 5 and 1 monthly percentiles for August and September). The regression equations have been derived on the basis of streamflow and basin characteristics data for unregulated, rural drainage basins without substantial streamflow or drainage modifications (for example, diversions and (or) regulation by dams or reservoirs, tile drainage, irrigation, channelization, and impervious paved surfaces); therefore, using the equations for regulated or urbanized basins with substantial streamflow or drainage modifications will yield results of unknown error. Input basin characteristics derived using techniques or datasets other than those documented in this report or using values outside the ranges used to develop these regression equations also will yield results of unknown error.
Ellakwa, A; Elnajar, S; Littlefair, D; Sara, G
2018-05-03
The aim of the current study is to develop a novel method to investigate the accuracy of 3D scanners and digital articulation systems. An upper and a lower poured stone model were created by taking an impression of a fully dentate male participant (fifty years old). Titanium spheres were added to the models to allow for an easily recognisable geometric shape for measurement after scanning and digital articulation. Measurements were obtained using a Coordinate Measuring Machine (CMM) to record volumetric error, articulation error and clinical effect error. Three scanners were compared, including the Imetric 3D iScan d104i, Shining 3D AutoScan-DS100 and 3Shape D800, as well as their respective digital articulation software packages. The Stoneglass Industries PDC digital articulation system was also applied to the Imetric scans for comparison with the CMM measurements. All the scans displayed low volumetric error (p > 0.05), indicating that the scanners themselves had a minor contribution to the articulation and clinical effect errors. The PDC digital articulation system was found to deliver the lowest average errors, with good repeatability of results. The new measuring technique in the current study was able to assess the scanning and articulation accuracy of the four systems investigated. The PDC digital articulation system using Imetric scans was recommended as it displayed the lowest articulation error and clinical effect error with good repeatability. The low errors from the PDC system may have been due to its use of a 3D axis for alignment rather than the use of a best fit. Copyright© 2018 Dennis Barber Ltd.
Loaiza-Echeverri, A M; Bergmann, J A G; Toral, F L B; Osorio, J P; Carmo, A S; Mendonça, L F; Moustacas, V S; Henry, M
2013-03-15
The objective was to use various nonlinear models to describe scrotal circumference (SC) growth in Guzerat bulls on three farms in the state of Minas Gerais, Brazil. The nonlinear models were: Brody, Logistic, Gompertz, Richards, Von Bertalanffy, and Tanaka, where parameter A is the estimated testis size at maturity, B is the integration constant, k is a maturating index and, for the Richards and Tanaka models, m determines the inflection point. In Tanaka, A is an indefinite size of the testis, and B and k adjust the shape and inclination of the curve. A total of 7410 SC records were obtained every 3 months from 1034 bulls with ages varying between 2 and 69 months (<240 days of age = 159; 241-365 days = 451; 366-550 days = 1443; 551-730 days = 1705; and >731 days = 3652 SC measurements). Goodness of fit was evaluated by coefficients of determination (R²), error sum of squares, average prediction error (APE), and mean absolute deviation. The Richards model did not reach the convergence criterion. The R² values were similar for all models (0.68-0.69). The error sum of squares was lowest for the Tanaka model. All models fit the SC data poorly in the early and late periods. Logistic was the model which best estimated SC in the early phase (based on APE and mean absolute deviation). The Tanaka and Logistic models had the lowest APE between 300 and 1600 days of age. The Logistic model was chosen for analysis of the environmental influence on parameters A and k. Based on absolute growth rate, SC increased from 0.019 cm/d, peaking at 0.025 cm/d between 318 and 435 days of age. Farm, year, and season of birth significantly affected size of adult SC and SC growth rate. An increase in SC adult size (parameter A) was accompanied by decreased SC growth rate (parameter k). In conclusion, SC growth in Guzerat bulls was characterized by an accelerated growth phase, followed by decreased growth; this was best represented by the Logistic model. The inflection point occurred at approximately 376 days of age (mean SC of 17.9 cm). We inferred that early selection of testicular size might result in smaller testes at maturity. Copyright © 2013 Elsevier Inc. All rights reserved.
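As an illustration of how such a model comparison can be set up, the sketch below fits the Logistic form SC(t) = A / (1 + B·exp(-k·t)) to synthetic data with scipy and computes an average prediction error and mean absolute deviation; all values are hypothetical and the code is not the authors' analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, A, B, k):
    # Logistic growth curve with the parameter names used in the abstract
    return A / (1.0 + B * np.exp(-k * t))

age_days = np.linspace(100, 1800, 40)                         # hypothetical ages (days)
true_sc = logistic(age_days, 32.0, 8.0, 0.006)                # hypothetical "true" growth curve (cm)
sc_obs = true_sc + np.random.default_rng(1).normal(0, 1.0, age_days.size)   # noisy observations

params, _ = curve_fit(logistic, age_days, sc_obs, p0=[30.0, 5.0, 0.01])
pred = logistic(age_days, *params)
ape = np.mean(pred - sc_obs)                                  # average prediction error
mad = np.mean(np.abs(pred - sc_obs))                          # mean absolute deviation
print(np.round(params, 4), round(ape, 3), round(mad, 3))
```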
Effect of partition board color on mood and autonomic nervous function.
Sakuragi, Sokichi; Sugiyama, Yoshiki
2011-12-01
The purpose of this study was to evaluate the effects of the presence or absence (control) of a partition board and its color (red, yellow, blue) on subjective mood ratings and changes in autonomic nervous system indicators induced by a video game task. The increase in the mean Profile of Mood States (POMS) Fatigue score and mean Oppressive feeling rating after the task was lowest with the blue partition board. Multiple-regression analysis identified oppressive feeling and error scores on the second half of the task as statistically significant contributors to Fatigue. When explanatory variables were limited to the physiological indices, multiple-regression analysis identified a significant contribution of autonomic reactivity (assessed by heart rate variability) to Fatigue. These results suggest that a blue partition board would reduce task-induced subjective fatigue, in part by lowering the oppressive feeling of being enclosed during the task, possibly by increasing autonomic reactivity.
Porter, Teresita M.; Golding, G. Brian
2012-01-01
Nuclear large subunit ribosomal DNA is widely used in fungal phylogenetics and, to an increasing extent, also in amplicon-based environmental sequencing. The relatively short reads produced by next-generation sequencing, however, make primer choice and sequence error important variables for obtaining accurate taxonomic classifications. In this simulation study we tested the performance of three classification methods: 1) a similarity-based method (BLAST + Metagenomic Analyzer, MEGAN); 2) a composition-based method (Ribosomal Database Project naïve Bayesian classifier, NBC); and 3) a phylogeny-based method (Statistical Assignment Package, SAP). We also tested the effects of sequence length, primer choice, and sequence error on classification accuracy and perceived community composition. Using a leave-one-out cross validation approach, results for classifications to the genus rank were as follows: BLAST + MEGAN had the lowest error rate and was particularly robust to sequence error; SAP accuracy was highest when long LSU query sequences were classified; and NBC ran significantly faster than the other tested methods. All methods performed poorly with the shortest 50–100 bp sequences. Increasing simulated sequence error reduced classification accuracy. Community shifts were detected due to sequence error and primer selection even though there was no change in the underlying community composition. Short read datasets from individual primers, as well as pooled datasets, appear to only approximate the true community composition. We hope this work informs investigators of some of the factors that affect the quality and interpretation of their environmental gene surveys. PMID:22558215
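The leave-one-out cross-validation scheme used for the comparison can be illustrated generically as below (Python/scikit-learn on synthetic composition-style features and a Gaussian naive Bayes stand-in; none of the actual classifiers, primers, or sequence data are involved).

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
n_seqs, n_features = 60, 30
X = rng.normal(size=(n_seqs, n_features))        # hypothetical composition features per read
genus = rng.integers(0, 3, size=n_seqs)          # hypothetical genus labels (3 classes)
X += genus[:, None] * 0.8                        # make the classes partly separable

correct = 0
for train_idx, test_idx in LeaveOneOut().split(X):
    clf = GaussianNB().fit(X[train_idx], genus[train_idx])
    correct += int(clf.predict(X[test_idx])[0] == genus[test_idx][0])
print("LOOCV accuracy:", correct / n_seqs)
```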
Recommender systems in knowledge-mining
NASA Astrophysics Data System (ADS)
Volna, Eva
2017-07-01
The subject of the paper is to analyse the possibilities of applying recommender systems in the field of data mining. The work focuses on three basic types of recommender systems (collaborative, content-based and hybrid). The goal of the article is to evaluate which of these three concepts of recommender systems provides forecasts with the lowest error rate in the domain of recommending movies. This goal is addressed in the practical part of the work: first, a custom recommender system was designed and implemented, capable of generating movie recommendations from a database based on the user's preferences. Next, we verified experimentally which recommender system produces more accurate results.
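As a minimal sketch of the collaborative concept evaluated in the paper (not the system actually built), the following Python example predicts held-out ratings on a tiny hypothetical ratings matrix with user-based cosine similarity and reports the RMSE used as the forecast error measure.

```python
import numpy as np

ratings = np.array([            # rows = users, cols = movies, 0 = unrated (hypothetical data)
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def predict(ratings, user, item):
    """User-based collaborative prediction of ratings[user, item]."""
    others = np.where(ratings[:, item] > 0)[0]           # users who rated this item
    sims = []
    for o in others:
        common = (ratings[user] > 0) & (ratings[o] > 0)  # items both users rated
        if common.sum() == 0:
            sims.append(0.0)
            continue
        a, b = ratings[user, common], ratings[o, common]
        sims.append(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    sims = np.array(sims)
    if np.abs(sims).sum() == 0:
        return ratings[ratings > 0].mean()               # fallback: global mean rating
    return np.dot(sims, ratings[others, item]) / np.abs(sims).sum()

# leave-one-out RMSE over the observed ratings
errs = []
for u in range(ratings.shape[0]):
    for i in range(ratings.shape[1]):
        if ratings[u, i] > 0:
            held = ratings[u, i]
            ratings[u, i] = 0                            # hide the rating
            errs.append(predict(ratings, u, i) - held)
            ratings[u, i] = held                         # restore it
print("RMSE:", np.sqrt(np.mean(np.square(errs))))
```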
Smartphone Text Input Method Performance, Usability, and Preference With Younger and Older Adults.
Smith, Amanda L; Chaparro, Barbara S
2015-09-01
User performance, perceived usability, and preference for five smartphone text input methods were compared with younger and older novice adults. Smartphones are used for a variety of functions other than phone calls, including text messaging, e-mail, and web browsing. Research comparing performance with methods of text input on smartphones reveals a high degree of variability in reported measures, procedures, and results. This study reports on a direct comparison of five of the most common input methods among a population of younger and older adults, who had no experience with any of the methods. Fifty adults (25 younger, 18-35 years; 25 older, 60-84 years) completed a text entry task using five text input methods (physical Qwerty, onscreen Qwerty, tracing, handwriting, and voice). Entry and error rates, perceived usability, and preference were recorded. Both age groups input text equally fast using voice input, but older adults were slower than younger adults using all other methods. Both age groups had low error rates when using physical Qwerty and voice, but older adults committed more errors with the other three methods. Both younger and older adults preferred voice and physical Qwerty input to the remaining methods. Handwriting consistently performed the worst and was rated lowest by both groups. Voice and physical Qwerty input methods proved to be the most effective for both younger and older adults, and handwriting input was the least effective overall. These findings have implications to the design of future smartphone text input methods and devices, particularly for older adults. © 2015, Human Factors and Ergonomics Society.
Samsuri, Srima Elina; Pei Lin, Lua; Fahrni, Mathumalar Loganathan
2015-01-01
Objective To assess the safety attitudes of pharmacists, provide a profile of their domains of safety attitude and correlate their attitudes with self-reported rates of medication errors. Design A cross-sectional study utilising the Safety Attitudes Questionnaire (SAQ). Setting 3 public hospitals and 27 health clinics. Participants 117 pharmacists. Main outcome measure(s) Safety culture mean scores, variation in scores across working units and between hospitals versus health clinics, predictors of safety culture, and medication errors and their correlation. Results Response rate was 83.6% (117 valid questionnaires returned). Stress recognition (73.0±20.4) and working condition (54.8±17.4) received the highest and lowest mean scores, respectively. Pharmacists exhibited positive attitudes towards: stress recognition (58.1%), job satisfaction (46.2%), teamwork climate (38.5%), safety climate (33.3%), perception of management (29.9%) and working condition (15.4%). With the exception of stress recognition, those who worked in health clinics scored higher than those in hospitals (p<0.05) and higher scores (overall score as well as score for each domain except for stress recognition) correlated negatively with reported number of medication errors. Conversely, those working in hospital (versus health clinic) were 8.9 times more likely (p<0.01) to report a medication error (OR 8.9, CI 3.08 to 25.7). As stress recognition increased, the number of medication errors reported increased (p=0.023). Years of work experience (p=0.017) influenced the number of medication errors reported. For every additional year of work experience, pharmacists were 0.87 times less likely to report a medication error (OR 0.87, CI 0.78 to 0.98). Conclusions A minority (20.5%) of the pharmacists working in hospitals and health clinics was in agreement with the overall SAQ questions and scales. Pharmacists in outpatient and ambulatory units and those in health clinics had better perceptions of safety culture. As perceptions improved, the number of medication errors reported decreased. Group-specific interventions that target specific domains are necessary to improve the safety culture. PMID:26610761
DOE Office of Scientific and Technical Information (OSTI.GOV)
Almasbahi, M.S.
In a world of generalized floating exchange rates, it is not enough to solve the problem of exchange rate policy by determining whether to peg or float the currency under consideration. It is also necessary to choose to what major currency to peg. The main purpose of this study is to investigate and determine empirically the optimum currency peg for the Saudi riyal. To accomplish this goal, a simple conventional trade model that includes variables found in many other studies of import and export demand was used. In addition, an exchange rate term was added as a separate independent variable in the import and export demand equations in order to assess the effect of the exchange rate on the trade flows. The criteria for the optimal currency peg in this study were based on two factors. First, the error statistics for projected imports and exports using alternative exchange rate regimes. Second, variances of projected imports, exports and trade balance using alternative exchange rate regimes. The exchange rate has a significant impact on the Saudi Arabian trade flows, which implies that changes in the riyal's value affect the Saudi trade deficit. Moreover, the exchange rate has a more powerful effect on its aggregate imports than on the world demand for its exports. There is also strong support for the hypothesis that the exchange rate affects the value of the Saudi bilateral trade with its five major trade partners. On the aggregate level, the SDR peg seems to be the best currency peg for the Saudi riyal since it provides the best prediction errors and the lowest variance for the trade balance. Finally, on the disaggregate level, the US dollar provides the best performance and yields the best results among all the six currency pegs considered in this study.
Ly, Thomas; Pamer, Carol; Dang, Oanh; Brajovic, Sonja; Haider, Shahrukh; Botsis, Taxiarchis; Milward, David; Winter, Andrew; Lu, Susan; Ball, Robert
2018-05-31
The FDA Adverse Event Reporting System (FAERS) is a primary data source for identifying unlabeled adverse events (AEs) in a drug or biologic drug product's postmarketing phase. Many AE reports must be reviewed by drug safety experts to identify unlabeled AEs, even if the reported AEs are previously identified, labeled AEs. Integrating the labeling status of drug product AEs into FAERS could increase report triage and review efficiency. Medical Dictionary for Regulatory Activities (MedDRA) is the standard for coding AE terms in FAERS cases. However, drug manufacturers are not required to use MedDRA to describe AEs in product labels. We hypothesized that natural language processing (NLP) tools could assist in automating the extraction and MedDRA mapping of AE terms in drug product labels. We evaluated the performance of three NLP systems, (ETHER, I2E, MetaMap) for their ability to extract AE terms from drug labels and translate the terms to MedDRA Preferred Terms (PTs). Pharmacovigilance-based annotation guidelines for extracting AE terms from drug labels were developed for this study. We compared each system's output to MedDRA PT AE lists, manually mapped by FDA pharmacovigilance experts using the guidelines, for ten drug product labels known as the "gold standard AE list" (GSL) dataset. Strict time and configuration conditions were imposed in order to test each system's capabilities under conditions of no human intervention and minimal system configuration. Each NLP system's output was evaluated for precision, recall and F measure in comparison to the GSL. A qualitative error analysis (QEA) was conducted to categorize a random sample of each NLP system's false positive and false negative errors. A total of 417, 278, and 250 false positive errors occurred in the ETHER, I2E, and MetaMap outputs, respectively. A total of 100, 80, and 187 false negative errors occurred in ETHER, I2E, and MetaMap outputs, respectively. Precision ranged from 64% to 77%, recall from 64% to 83% and F measure from 67% to 79%. I2E had the highest precision (77%), recall (83%) and F measure (79%). ETHER had the lowest precision (64%). MetaMap had the lowest recall (64%). The QEA found that the most prevalent false positive errors were context errors such as "Context error/General term", "Context error/Instructions or monitoring parameters", "Context error/Medical history preexisting condition underlying condition risk factor or contraindication", and "Context error/AE manifestations or secondary complication". The most prevalent false negative errors were in the "Incomplete or missed extraction" error category. Missing AE terms were typically due to long terms, or terms containing non-contiguous words which do not correspond exactly to MedDRA synonyms. MedDRA mapping errors were a minority of errors for ETHER and I2E but were the most prevalent false positive errors for MetaMap. The results demonstrate that it may be feasible to use NLP tools to extract and map AE terms to MedDRA PTs. However, the NLP tools we tested would need to be modified or reconfigured to lower the error rates to support their use in a regulatory setting. Tools specific for extracting AE terms from drug labels and mapping the terms to MedDRA PTs may need to be developed to support pharmacovigilance. Conducting research using additional NLP systems on a larger, diverse GSL would also be informative. Copyright © 2018. Published by Elsevier Inc.
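For reference, the precision, recall and F measure quoted above follow directly from the true positive, false positive and false negative counts; the sketch below shows the arithmetic with hypothetical counts (the true positive totals are not reported in the abstract).

```python
def prf(tp, fp, fn):
    """Precision, recall and F measure from raw counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, f_measure

# hypothetical counts for one system, for illustration only
print(prf(tp=900, fp=278, fn=80))
```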
A greedy algorithm for species selection in dimension reduction of combustion chemistry
NASA Astrophysics Data System (ADS)
Hiremath, Varun; Ren, Zhuyin; Pope, Stephen B.
2010-09-01
Computational calculations of combustion problems involving large numbers of species and reactions with a detailed description of the chemistry can be very expensive. Numerous dimension reduction techniques have been developed in the past to reduce the computational cost. In this paper, we consider the rate controlled constrained-equilibrium (RCCE) dimension reduction method, in which a set of constrained species is specified. For a given number of constrained species, the 'optimal' set of constrained species is that which minimizes the dimension reduction error. The direct determination of the optimal set is computationally infeasible, and instead we present a greedy algorithm which aims at determining a 'good' set of constrained species; that is, one leading to near-minimal dimension reduction error. The partially-stirred reactor (PaSR) involving methane premixed combustion with chemistry described by the GRI-Mech 1.2 mechanism containing 31 species is used to test the algorithm. Results on dimension reduction errors for different sets of constrained species are presented to assess the effectiveness of the greedy algorithm. It is shown that the first four constrained species selected using the proposed greedy algorithm produce lower dimension reduction error than constraints on the major species: CH4, O2, CO2 and H2O. It is also shown that the first ten constrained species selected using the proposed greedy algorithm produce a non-increasing dimension reduction error with every additional constrained species; and produce the lowest dimension reduction error in many cases tested over a wide range of equivalence ratios, pressures and initial temperatures.
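The greedy strategy described can be sketched generically as follows; the error function and species names here are toy stand-ins for the expensive RCCE/PaSR dimension-reduction error evaluation, not the authors' implementation.

```python
def greedy_select(candidates, n_constraints, reduction_error):
    """At each step, add the candidate whose inclusion gives the smallest error."""
    selected = []
    for _ in range(n_constraints):
        best_species, best_err = None, float("inf")
        for sp in candidates:
            if sp in selected:
                continue
            err = reduction_error(selected + [sp])   # error with this species added
            if err < best_err:
                best_species, best_err = sp, err
        selected.append(best_species)
    return selected

# toy stand-in for the dimension-reduction error: prefers species earlier in a hypothetical ranking
ranking = ["H", "OH", "CH3", "HO2", "CH4", "O2", "CO2", "H2O"]

def toy_error(subset):
    return sum(ranking.index(s) for s in subset) + 1.0 / len(subset)

print(greedy_select(ranking, 4, toy_error))
```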
Bedini, José Luis; Wallace, Jane F; Pardo, Scott; Petruschke, Thorsten
2015-10-07
Blood glucose monitoring is an essential component of diabetes management. Inaccurate blood glucose measurements can severely impact patients' health. This study evaluated the performance of 3 blood glucose monitoring systems (BGMS), Contour® Next USB, FreeStyle InsuLinx®, and OneTouch® Verio™ IQ, under routine hospital conditions. Venous blood samples (N = 236) obtained for routine laboratory procedures were collected at a Spanish hospital, and blood glucose (BG) concentrations were measured with each BGMS and with the available reference (hexokinase) method. Accuracy of the 3 BGMS was compared according to ISO 15197:2013 accuracy limit criteria, by mean absolute relative difference (MARD), consensus error grid (CEG) and surveillance error grid (SEG) analyses, and an insulin dosing error model. All BGMS met the accuracy limit criteria defined by ISO 15197:2013. While all measurements of the 3 BGMS were within low-risk zones in both error grid analyses, the Contour Next USB showed significantly smaller MARDs from the reference values compared to the other 2 BGMS. Insulin dosing errors were lower for the Contour Next USB than for the other two systems. All BGMS fulfilled ISO 15197:2013 accuracy limit criteria and the CEG criterion. However, taking all analyses together, differences in performance of potential clinical relevance may be observed. Results showed that the Contour Next USB had the lowest MARD values across the tested glucose range, as compared with the 2 other BGMS. CEG and SEG analyses as well as calculation of the hypothetical bolus insulin dosing error suggest a high accuracy of the Contour Next USB. © 2015 Diabetes Technology Society.
Száz, Dénes; Farkas, Alexandra; Barta, András; Kretzer, Balázs; Egri, Ádám; Horváth, Gábor
2016-07-01
The theory of sky-polarimetric Viking navigation has been widely accepted for decades without any information about the accuracy of this method. Previously, we have measured the accuracy of the first and second steps of this navigation method in psychophysical laboratory and planetarium experiments. Now, we have tested the accuracy of the third step in a planetarium experiment, assuming that the first and second steps are errorless. Using the fists of their outstretched arms, 10 test persons had to estimate the elevation angles (measured in numbers of fists and fingers) of black dots (representing the position of the occluded Sun) projected onto the planetarium dome. The test persons performed 2400 elevation estimations, 48% of which were more accurate than ±1°. We selected three test persons with the (i) largest and (ii) smallest elevation errors and (iii) highest standard deviation of the elevation error. From the errors of these three persons, we calculated their error function, from which the North errors (the angles with which they deviated from the geographical North) were determined for summer solstice and spring equinox, two specific dates of the Viking sailing period. The range of possible North errors ΔωN was the lowest and highest at low and high solar elevations, respectively. At high elevations, the maximal ΔωN was 35.6° and 73.7° at summer solstice and 23.8° and 43.9° at spring equinox for the best and worst test person (navigator), respectively. Thus, the best navigator was twice as good as the worst one. At solstice and equinox, high elevations occur the most frequently during the day, thus high North errors could occur more frequently than expected before. According to our findings, the ideal periods for sky-polarimetric Viking navigation are immediately after sunrise and before sunset, because the North errors are the lowest at low solar elevations.
Száz, Dénes; Farkas, Alexandra; Barta, András; Kretzer, Balázs; Egri, Ádám
2016-01-01
The theory of sky-polarimetric Viking navigation has been widely accepted for decades without any information about the accuracy of this method. Previously, we have measured the accuracy of the first and second steps of this navigation method in psychophysical laboratory and planetarium experiments. Now, we have tested the accuracy of the third step in a planetarium experiment, assuming that the first and second steps are errorless. Using the fists of their outstretched arms, 10 test persons had to estimate the elevation angles (measured in numbers of fists and fingers) of black dots (representing the position of the occluded Sun) projected onto the planetarium dome. The test persons performed 2400 elevation estimations, 48% of which were more accurate than ±1°. We selected three test persons with the (i) largest and (ii) smallest elevation errors and (iii) highest standard deviation of the elevation error. From the errors of these three persons, we calculated their error function, from which the North errors (the angles with which they deviated from the geographical North) were determined for summer solstice and spring equinox, two specific dates of the Viking sailing period. The range of possible North errors ΔωN was the lowest and highest at low and high solar elevations, respectively. At high elevations, the maximal ΔωN was 35.6° and 73.7° at summer solstice and 23.8° and 43.9° at spring equinox for the best and worst test person (navigator), respectively. Thus, the best navigator was twice as good as the worst one. At solstice and equinox, high elevations occur the most frequently during the day, thus high North errors could occur more frequently than expected before. According to our findings, the ideal periods for sky-polarimetric Viking navigation are immediately after sunrise and before sunset, because the North errors are the lowest at low solar elevations. PMID:27493566
Discrimination of plant-parasitic nematodes from complex soil communities using ecometagenetics.
Porazinska, Dorota L; Morgan, Matthew J; Gaspar, John M; Court, Leon N; Hardy, Christopher M; Hodda, Mike
2014-07-01
Many plant pathogens are microscopic, cryptic, and difficult to diagnose. The new approach of ecometagenetics, involving ultrasequencing, bioinformatics, and biostatistics, has the potential to improve diagnoses of plant pathogens such as nematodes from the complex mixtures found in many agricultural and biosecurity situations. We tested this approach on a gradient of complexity ranging from a few individuals from a few species of known nematode pathogens in a relatively defined substrate to a complex and poorly known suite of nematode pathogens in a complex forest soil, including its associated biota of unknown protists, fungi, and other microscopic eukaryotes. We added three known but contrasting species (Pratylenchus neglectus, the closely related P. thornei, and Heterodera avenae) to half the set of substrates, leaving the other half without them. We then tested whether all nematode pathogens-known and unknown, indigenous, and experimentally added-were detected consistently present or absent. We always detected the Pratylenchus spp. correctly and with the number of sequence reads proportional to the numbers added. However, a single cyst of H. avenae was only identified approximately half the time it was present. Other plant-parasitic nematodes and nematodes from other trophic groups were detected well but other eukaryotes were detected less consistently. DNA sampling errors or informatic errors or both were involved in misidentification of H. avenae; however, the proportions of each varied in the different bioinformatic pipelines and with different parameters used. To a large extent, false-positive and false-negative errors were complementary: pipelines and parameters with the highest false-positive rates had the lowest false-negative rates and vice versa. Sources of error identified included assumptions in the bioinformatic pipelines, slight differences in primer regions, the number of sequence reads regarded as the minimum threshold for inclusion in analysis, and inaccessible DNA in resistant life stages. Identification of the sources of error allows us to suggest ways to improve identification using ecometagenetics.
A two-phase sampling design for increasing detections of rare species in occupancy surveys
Pacifici, Krishna; Dorazio, Robert M.; Dorazio, Michael J.
2012-01-01
1. Occupancy estimation is a commonly used tool in ecological studies owing to the ease with which data can be collected and the large spatial extent that can be covered. One major obstacle to using an occupancy-based approach is the complications associated with designing and implementing an efficient survey. These logistical challenges become magnified when working with rare species, when effort can be wasted in areas with no or very few individuals. 2. Here, we develop a two-phase sampling approach that mitigates these problems by using a design that places more effort in areas with higher predicted probability of occurrence. We compare our new sampling design to traditional single-season occupancy estimation under a range of conditions and population characteristics. We develop an intuitive measure of predictive error to compare the two approaches and use simulations to assess the relative accuracy of each approach. 3. Our two-phase approach exhibited lower predictive error rates compared to the traditional single-season approach in highly spatially correlated environments. The difference was greatest when detection probability was high (0·75) regardless of the habitat or sample size. When the true occupancy rate was below 0·4 (0·05-0·4), we found that allocating 25% of the sample to the first phase resulted in the lowest error rates. 4. In the majority of scenarios, the two-phase approach showed lower error rates compared to the traditional single-season approach, suggesting our new approach is fairly robust to a broad range of conditions and design factors and merits use under a wide variety of settings. 5. Synthesis and applications. Conservation and management of rare species are a challenging task facing natural resource managers. It is critical for studies involving rare species to efficiently allocate effort and resources as they are usually of a finite nature. We believe our approach provides a framework for optimal allocation of effort while maximizing the information content of the data in an attempt to provide the highest conservation value per unit of effort.
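A minimal sketch of the allocation idea, under assumed inputs: predicted occupancy probabilities are taken as given (hypothetical numbers), 25% of the budget goes to a first-phase random sample, and the remaining effort is directed to the unsurveyed sites with the highest predictions. This only illustrates the design logic, not the authors' estimator or simulations.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sites, budget = 200, 80
psi_hat = rng.beta(1, 6, size=n_sites)   # hypothetical predicted occupancy probabilities (rare species)

# phase 1: a simple random sample using 25% of the effort
phase1 = rng.choice(n_sites, size=budget // 4, replace=False)

# phase 2: remaining effort goes to the unsurveyed sites with the highest predicted occupancy
remaining = np.setdiff1d(np.arange(n_sites), phase1)
order = np.argsort(psi_hat[remaining])[::-1]
phase2 = remaining[order[: budget - phase1.size]]
print(phase1.size, phase2.size, float(psi_hat[phase2].min()))
```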
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.
2010-01-01
Cell-centered and node-centered approaches have been compared for unstructured finite-volume discretization of inviscid fluxes. The grids range from regular grids to irregular grids, including mixed-element grids and grids with random perturbations of nodes. Accuracy, complexity, and convergence rates of defect-correction iterations are studied for eight nominally second-order accurate schemes: two node-centered schemes with weighted and unweighted least-squares (LSQ) methods for gradient reconstruction and six cell-centered schemes: two node-averaging schemes with and without clipping and four schemes that employ different stencils for LSQ gradient reconstruction. The cell-centered nearest-neighbor (CC-NN) scheme has the lowest complexity; a version of the scheme that involves smart augmentation of the LSQ stencil (CC-SA) has only a marginal complexity increase. All other schemes have larger complexity; the complexity of node-centered (NC) schemes is somewhat lower than the complexity of cell-centered node-averaging (CC-NA) and full-augmentation (CC-FA) schemes. On highly anisotropic grids typical of those encountered in grid adaptation, discretization errors of five of the six cell-centered schemes converge with second order on all tested grids; the CC-NA scheme with clipping degrades solution accuracy to first order. The NC schemes converge with second order on regular and/or triangular grids and with first order on perturbed quadrilaterals and mixed-element grids. All schemes may produce large relative errors in gradient reconstruction on grids with perturbed nodes. Defect-correction iterations for schemes employing weighted least-squares gradient reconstruction diverge on perturbed stretched grids. Overall, the CC-NN and CC-SA schemes offer the best options: the lowest complexity and second-order discretization errors. On anisotropic grids over a curved body typical of turbulent flow simulations, the discretization errors converge with second order and are small for the CC-NN, CC-SA, and CC-FA schemes on all grids and for NC schemes on triangular grids; the discretization errors of the CC-NA scheme without clipping do not converge on irregular grids. Accurate gradient reconstruction can be achieved by introducing a local approximate mapping; without approximate mapping, only the NC scheme with weighted LSQ method provides accurate gradients. Defect correction iterations for the CC-NA scheme without clipping diverge; for the NC scheme with weighted LSQ method, the iterations either diverge or converge very slowly. The best option in curved geometries is the CC-SA scheme that offers low complexity, second-order discretization errors, and fast convergence.
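The least-squares gradient reconstruction that several of these schemes rely on can be illustrated in a few lines; the sketch below (generic Python, not the paper's discretization code) recovers the exact gradient of a linear field from an irregular stencil with unweighted and inverse-distance-weighted LSQ.

```python
import numpy as np

def lsq_gradient(x0, xs, u0, us, weighted=False):
    """Estimate grad(u) at point x0 from neighbour points xs with values us (2D)."""
    dx = xs - x0                       # displacement vectors, shape (n, 2)
    du = us - u0                       # value differences, shape (n,)
    if weighted:
        w = 1.0 / np.linalg.norm(dx, axis=1)   # inverse-distance weights
        dx, du = dx * w[:, None], du * w
    grad, *_ = np.linalg.lstsq(dx, du, rcond=None)
    return grad

x0, u0 = np.array([0.0, 0.0]), 1.0
xs = np.array([[1.0, 0.1], [-0.9, 0.2], [0.1, 1.1], [0.2, -0.8]])   # perturbed neighbour stencil
us = 1.0 + 2.0 * xs[:, 0] + 3.0 * xs[:, 1]                          # exact linear field u = 1 + 2x + 3y
print(lsq_gradient(x0, xs, u0, us), lsq_gradient(x0, xs, u0, us, weighted=True))   # both ~ [2, 3]
```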
NASA Astrophysics Data System (ADS)
Książek, Judyta
2015-10-01
There is currently great interest in the development of texture-based image classification methods in many different areas. This study presents the results of research carried out to assess the usefulness of selected textural features for the detection of asbestos-cement roofs in orthophotomap classification. Two different orthophotomaps of southern Poland (with ground resolutions of 5 cm and 25 cm) were used. On both orthoimages, representative samples for two classes, asbestos-cement roofing sheets and other roofing materials, were selected. Estimation of the usefulness of texture analysis was conducted using machine learning methods based on decision trees (C5.0 algorithm). For this purpose, various sets of texture parameters were calculated in MaZda software. Different numbers of texture parameter groups were considered when building the decision trees. Cross-validation was performed to obtain the best settings for the decision tree models, and the models with the lowest mean classification error were selected. The accuracy of the classification was assessed on validation data sets that were not used for training. For the 5 cm ground resolution samples, the lowest mean classification error was 15.6%. The lowest mean classification error in the case of 25 cm ground resolution was 20.0%. The obtained results confirm the potential usefulness of texture-parameter image processing for the detection of asbestos-cement roofing sheets. To improve the accuracy, a further extended study should be considered in which additional textural features as well as spectral characteristics are analyzed.
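The model-selection step can be illustrated schematically as below, using synthetic texture features and a scikit-learn decision tree in place of C5.0 and MaZda-derived parameters; the depths, feature counts and data are assumptions for the sketch only.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 400
X = rng.normal(size=(n, 12))                        # 12 hypothetical texture parameters per sample
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.8, size=n) > 0).astype(int)   # asbestos vs other (synthetic)

best_depth, best_err = None, 1.0
for depth in (2, 3, 5, 8, None):
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0)
    err = 1.0 - cross_val_score(tree, X, y, cv=10).mean()   # mean classification error from cross-validation
    if err < best_err:
        best_depth, best_err = depth, err
print("best depth:", best_depth, "mean classification error:", round(best_err, 3))
```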
Sequeira, Ivana R.; Lentle, Roger G.; Kruger, Marlena C.; Hurst, Roger D.
2014-01-01
Background Lactulose mannitol ratio tests are clinically useful for assessing disorders characterised by changes in gut permeability and for assessing mixing in the intestinal lumen. Variations between currently used test protocols preclude meaningful comparisons between studies. We determined the optimal sampling period and related this to intestinal residence. Methods Half-hourly lactulose and mannitol urinary excretions were determined over 6 hours in 40 healthy female volunteers after administration of either 600 mg aspirin or placebo, in randomised order at weekly intervals. Gastric and small intestinal transit times were assessed by the SmartPill in 6 subjects from the same population. Half-hourly percentage recoveries of lactulose and mannitol were grouped on a basis of compartment transit time. The rate of increase or decrease of each sugar within each group was explored by simple linear regression to assess the optimal period of sampling. Key Results The between subject standard errors for each half-hourly lactulose and mannitol excretion were lowest, the correlation of the quantity of each sugar excreted with time was optimal and the difference between the two sugars in this temporal relationship maximal during the period from 2½-4 h after ingestion. Half-hourly lactulose excretions were generally increased after dosage with aspirin whilst those of mannitol were unchanged, as was the temporal pattern and period of lowest between subject standard error for both sugars. Conclusion The results indicate that between subject variation in the percentage excretion of the two sugars would be minimised and the differences in the temporal patterns of excretion would be maximised if the period of collection of urine used in clinical tests of small intestinal permeability were restricted to 2½-4 h post dosage. This period corresponds to a period when the column of digesta containing the probes is passing from the small to the large intestine. PMID:24901524
Is 50 Hz high enough ECG sampling frequency for accurate HRV analysis?
Mahdiani, Shadi; Jeyhani, Vala; Peltokangas, Mikko; Vehkaoja, Antti
2015-01-01
With the worldwide growth of mobile wireless technologies, healthcare services can be provided anytime and anywhere. Usage of wearable wireless physiological monitoring systems has increased extensively during the last decade. These mobile devices can continuously measure, for example, heart activity and wirelessly transfer the data to the patient's mobile phone. One of the significant restrictions for these devices is energy usage, which calls for a low sampling rate. This article investigates the lowest adequate sampling frequency of the ECG signal for achieving sufficiently accurate time-domain heart rate variability (HRV) parameters. For this purpose, ECG signals originally measured at a high 5 kHz sampling rate were down-sampled to simulate measurement at lower sampling rates. Down-sampling loses information and decreases temporal accuracy, which was then restored by interpolating the signals back to their original sampling rates. The HRV parameters obtained from the ECG signals with lower sampling rates were compared. The results show that even when the sampling rate of the ECG signal is as low as 50 Hz, the HRV parameters remain almost accurate, with a reasonable error.
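The evaluation idea can be sketched as follows: generate a synthetic ECG from a known RR series, decimate it, interpolate back to the original rate, and compare a time-domain HRV parameter (RMSSD) with the reference. The signal model and the naive peak detector below are placeholders, not the methods used in the study.

```python
import numpy as np

def rmssd(rr_ms):
    return np.sqrt(np.mean(np.diff(rr_ms) ** 2))

def r_peaks(sig, fs, thresh=0.5):
    """Naive local-maximum R-peak detector; returns peak times in seconds."""
    core = sig[1:-1]
    idx = np.where((core > thresh) & (core >= sig[:-2]) & (core >= sig[2:]))[0] + 1
    return idx / fs

fs_hi = 1000
t = np.arange(0, 60, 1 / fs_hi)
rr = 0.8 + 0.05 * np.sin(2 * np.pi * 0.1 * np.arange(80))        # hypothetical RR interval series (s)
beat_times = np.cumsum(rr)
beat_times = beat_times[beat_times < t[-1]]
ecg = np.zeros_like(t)
for bt in beat_times:
    ecg += np.exp(-((t - bt) ** 2) / (2 * 0.02 ** 2))             # crude Gaussian "R waves"

ref = rmssd(np.diff(beat_times) * 1000)                           # reference RMSSD (ms)
for fs_lo in (500, 250, 100, 50):
    step = fs_hi // fs_lo
    up = np.interp(t, t[::step], ecg[::step])                     # decimate, then interpolate back
    est = rmssd(np.diff(r_peaks(up, fs_hi)) * 1000)
    print(f"{fs_lo:>4} Hz: RMSSD error = {abs(est - ref):.2f} ms")
```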
Samsuri, Srima Elina; Pei Lin, Lua; Fahrni, Mathumalar Loganathan
2015-11-26
To assess the safety attitudes of pharmacists, provide a profile of their domains of safety attitude and correlate their attitudes with self-reported rates of medication errors. A cross-sectional study utilising the Safety Attitudes Questionnaire (SAQ). 3 public hospitals and 27 health clinics. 117 pharmacists. Safety culture mean scores, variation in scores across working units and between hospitals versus health clinics, predictors of safety culture, and medication errors and their correlation. Response rate was 83.6% (117 valid questionnaires returned). Stress recognition (73.0±20.4) and working condition (54.8±17.4) received the highest and lowest mean scores, respectively. Pharmacists exhibited positive attitudes towards: stress recognition (58.1%), job satisfaction (46.2%), teamwork climate (38.5%), safety climate (33.3%), perception of management (29.9%) and working condition (15.4%). With the exception of stress recognition, those who worked in health clinics scored higher than those in hospitals (p<0.05) and higher scores (overall score as well as score for each domain except for stress recognition) correlated negatively with reported number of medication errors. Conversely, those working in hospital (versus health clinic) were 8.9 times more likely (p<0.01) to report a medication error (OR 8.9, CI 3.08 to 25.7). As stress recognition increased, the number of medication errors reported increased (p=0.023). Years of work experience (p=0.017) influenced the number of medication errors reported. For every additional year of work experience, pharmacists were 0.87 times less likely to report a medication error (OR 0.87, CI 0.78 to 0.98). A minority (20.5%) of the pharmacists working in hospitals and health clinics was in agreement with the overall SAQ questions and scales. Pharmacists in outpatient and ambulatory units and those in health clinics had better perceptions of safety culture. As perceptions improved, the number of medication errors reported decreased. Group-specific interventions that target specific domains are necessary to improve the safety culture. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
Performance Evaluation of Three Blood Glucose Monitoring Systems Using ISO 15197
Bedini, José Luis; Wallace, Jane F.; Pardo, Scott; Petruschke, Thorsten
2015-01-01
Background: Blood glucose monitoring is an essential component of diabetes management. Inaccurate blood glucose measurements can severely impact patients' health. This study evaluated the performance of 3 blood glucose monitoring systems (BGMS), Contour® Next USB, FreeStyle InsuLinx®, and OneTouch® Verio™ IQ, under routine hospital conditions. Methods: Venous blood samples (N = 236) obtained for routine laboratory procedures were collected at a Spanish hospital, and blood glucose (BG) concentrations were measured with each BGMS and with the available reference (hexokinase) method. Accuracy of the 3 BGMS was compared according to ISO 15197:2013 accuracy limit criteria, by mean absolute relative difference (MARD), consensus error grid (CEG) and surveillance error grid (SEG) analyses, and an insulin dosing error model. Results: All BGMS met the accuracy limit criteria defined by ISO 15197:2013. While all measurements of the 3 BGMS were within low-risk zones in both error grid analyses, the Contour Next USB showed significantly smaller MARDs from the reference values compared to the other 2 BGMS. Insulin dosing errors were lower for the Contour Next USB than for the other two systems. Conclusions: All BGMS fulfilled ISO 15197:2013 accuracy limit criteria and the CEG criterion. However, taking all analyses together, differences in performance of potential clinical relevance may be observed. Results showed that the Contour Next USB had the lowest MARD values across the tested glucose range, as compared with the 2 other BGMS. CEG and SEG analyses as well as calculation of the hypothetical bolus insulin dosing error suggest a high accuracy of the Contour Next USB. PMID:26445813
Umut, İlhan; Çentik, Güven
2016-01-01
The number of channels used for polysomnographic recording frequently causes difficulties for patients because of the many cables connected. It also increases the risk of problems during the recording process and increases the storage volume. In this study, we aim to detect periodic leg movement (PLM) in sleep using the channels other than leg electromyography (EMG) by analysing polysomnography (PSG) data with digital signal processing (DSP) and machine learning methods. PSG records of 153 patients of different ages and genders with a PLM disorder diagnosis were examined retrospectively. A novel software tool was developed for the analysis of PSG records. The software utilizes machine learning algorithms, statistical methods, and DSP methods. In order to classify PLM, popular machine learning methods (multilayer perceptron, K-nearest neighbour, and random forests) and logistic regression were used. Comparison of the classification results showed that the K-nearest neighbour algorithm had the highest average classification rate (91.87%) and the lowest average classification error value (RMSE = 0.2850), while the multilayer perceptron algorithm had the lowest average classification rate (83.29%) and the highest average classification error value (RMSE = 0.3705). Results showed that PLM can be classified with high accuracy (91.87%) without a leg EMG recording being present. PMID:27213008
Umut, İlhan; Çentik, Güven
2016-01-01
The number of channels used for polysomnographic recording frequently causes difficulties for patients because of the many cables connected. It also increases the risk of problems during the recording process and increases the storage volume. In this study, we aim to detect periodic leg movement (PLM) in sleep using the channels other than leg electromyography (EMG) by analysing polysomnography (PSG) data with digital signal processing (DSP) and machine learning methods. PSG records of 153 patients of different ages and genders with a PLM disorder diagnosis were examined retrospectively. A novel software tool was developed for the analysis of PSG records. The software utilizes machine learning algorithms, statistical methods, and DSP methods. In order to classify PLM, popular machine learning methods (multilayer perceptron, K-nearest neighbour, and random forests) and logistic regression were used. Comparison of the classification results showed that the K-nearest neighbour algorithm had the highest average classification rate (91.87%) and the lowest average classification error value (RMSE = 0.2850), while the multilayer perceptron algorithm had the lowest average classification rate (83.29%) and the highest average classification error value (RMSE = 0.3705). Results showed that PLM can be classified with high accuracy (91.87%) without a leg EMG recording being present.
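A schematic version of the classifier comparison (synthetic features standing in for the non-EMG PSG channels, scikit-learn models rather than the authors' software) might look like the following; the classification rate and RMSE computed here mirror the metrics reported above.

```python
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 300
X = rng.normal(size=(n, 10))                                    # hypothetical features from non-EMG PSG channels
y = (X[:, 0] - 0.7 * X[:, 4] + rng.normal(scale=0.7, size=n) > 0).astype(int)   # PLM vs no PLM (synthetic labels)

for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("MLP", MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0))]:
    pred = cross_val_predict(clf, X, y, cv=5)
    rate = np.mean(pred == y)                                   # classification rate
    rmse = np.sqrt(np.mean((pred - y) ** 2))                    # RMSE on 0/1 labels, as in the abstract's metric
    print(f"{name}: rate = {rate:.3f}, RMSE = {rmse:.3f}")
```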
PAN, CHEN-WEI; KLEIN, BARBARA E.K.; COTCH, MARY FRANCES; SHRAGER, SANDI; KLEIN, RONALD; FOLSOM, AARON; KRONMAL, RICHARD; SHEA, STEVEN J.; BURKE, GREGORY L.; SAW, SEANG-MEI; WONG, TIEN Y.
2013-01-01
PURPOSE To describe racial variations in the prevalence of refractive errors among adult white, Chinese, Hispanic, and black subjects in the United States. DESIGN Cross-sectional data from a prospective cohort study—the Multi-Ethnic Study of Atherosclerosis (MESA). METHODS A total of 6000 adults aged 45 to 84 years living in the United States participated in the study. Refractive error was assessed, without cycloplegia, in both eyes of all participants using an autorefractor. After excluding eyes with cataract, cataract surgery, or previous refractive surgery, the eye with the larger absolute spherical equivalent (SE) value for each participant was used to classify refractive error. Any myopia was defined as SE of −1.0 diopters (D) or less; high myopia was defined as SE of −5.0 D or less; any hyperopia was defined as SE of +1.0 D or more; clinically significant hyperopia was defined as SE of +3.0 D or more. Astigmatism was defined as a cylinder value of +1.0 D or more. RESULTS After excluding 508 participants with cataracts in both eyes, 838 participants with cataract surgery, 90 participants with laser refractive surgery, and 134 participants who refused to remove their contact lenses for the refraction measurement, 4430 adults with refractive error assessment in at least 1 eye contributed to the analysis. The prevalence of myopia among MESA participants was 25.1%, with lowest rates in Hispanic participants (14.2%), followed by black (21.5%) and white participants (31.0%), and highest rates in Chinese participants (37.2%). The overall rates of high myopia and astigmatism were 4.6% and 45.0%, respectively, with Chinese subjects also having the highest rates of high myopia (11.8%) and astigmatism (53.4%). The overall prevalence of any hyperopia was 38.2% and clinically significant hyperopia was 6.1%, with Hispanic participants having the highest rates of hyperopia (50.2%) and clinically significant hyperopia (8.8%). In multivariate analyses adjusting for age, sex, race, and study site, higher education level, being employed, and being taller were associated with a higher prevalence of myopia. In contrast, lower educational level and being shorter were associated with a higher prevalence of hyperopia. CONCLUSIONS Myopia and astigmatism were most prevalent in the Chinese population, with Chinese subjects having 3 times the prevalence of myopia as Hispanic subjects. Hyperopia was most common in Hispanic subjects. These findings provide further insights into variations in refractive errors among different racial groups and have important implications for the eye care services in the United States. PMID:23453694
Pan, Chen-Wei; Klein, Barbara E K; Cotch, Mary Frances; Shrager, Sandi; Klein, Ronald; Folsom, Aaron; Kronmal, Richard; Shea, Steven J; Burke, Gregory L; Saw, Seang-Mei; Wong, Tien Y
2013-06-01
To describe racial variations in the prevalence of refractive errors among adult white, Chinese, Hispanic, and black subjects in the United States. Cross-sectional data from a prospective cohort study-the Multi-Ethnic Study of Atherosclerosis (MESA). A total of 6000 adults aged 45 to 84 years living in the United States participated in the study. Refractive error was assessed, without cycloplegia, in both eyes of all participants using an autorefractor. After excluding eyes with cataract, cataract surgery, or previous refractive surgery, the eye with the larger absolute spherical equivalent (SE) value for each participant was used to classify refractive error. Any myopia was defined as SE of -1.0 diopters (D) or less; high myopia was defined as SE of -5.0 D or less; any hyperopia was defined as SE of +1.0 D or more; clinically significant hyperopia was defined as SE of +3.0 D or more. Astigmatism was defined as a cylinder value of +1.0 D or more. After excluding 508 participants with cataracts in both eyes, 838 participants with cataract surgery, 90 participants with laser refractive surgery, and 134 participants who refused to remove their contact lenses for the refraction measurement, 4430 adults with refractive error assessment in at least 1 eye contributed to the analysis. The prevalence of myopia among MESA participants was 25.1%, with lowest rates in Hispanic participants (14.2%), followed by black (21.5%) and white participants (31.0%), and highest rates in Chinese participants (37.2%). The overall rates of high myopia and astigmatism were 4.6% and 45.0%, respectively, with Chinese subjects also having the highest rates of high myopia (11.8%) and astigmatism (53.4%). The overall prevalence of any hyperopia was 38.2% and clinically significant hyperopia was 6.1%, with Hispanic participants having the highest rates of hyperopia (50.2%) and clinically significant hyperopia (8.8%). In multivariate analyses adjusting for age, sex, race, and study site, higher education level, being employed, and being taller were associated with a higher prevalence of myopia. In contrast, lower educational level and being shorter were associated with a higher prevalence of hyperopia. Myopia and astigmatism were most prevalent in the Chinese population, with Chinese subjects having 3 times the prevalence of myopia as Hispanic subjects. Hyperopia was most common in Hispanic subjects. These findings provide further insights into variations in refractive errors among different racial groups and have important implications for the eye care services in the United States. Copyright © 2013 Elsevier Inc. All rights reserved.
Application of a Hybrid Model for Predicting the Incidence of Tuberculosis in Hubei, China
Zhang, Guoliang; Huang, Shuqiong; Duan, Qionghong; Shu, Wen; Hou, Yongchun; Zhu, Shiyu; Miao, Xiaoping; Nie, Shaofa; Wei, Sheng; Guo, Nan; Shan, Hua; Xu, Yihua
2013-01-01
Background A prediction model for tuberculosis incidence is needed in China that may be used as a decision-support tool for planning health interventions and allocating health resources. Methods The autoregressive integrated moving average (ARIMA) model was first constructed with the tuberculosis report rates in Hubei Province from Jan 2004 to Dec 2011. The data from Jan 2012 to Jun 2012 were used to validate the model. Then the generalized regression neural network (GRNN)-ARIMA combination model was established based on the constructed ARIMA model. Finally, the fitting and prediction accuracy of the two models was evaluated. Results A total of 465,960 cases were reported between Jan 2004 and Dec 2011 in Hubei Province. The report rate of tuberculosis was highest in 2005 (119.932 per 100,000 population) and lowest in 2010 (84.724 per 100,000 population). The time series of the tuberculosis report rate shows a gradual secular decline and a striking seasonal variation. The ARIMA (2, 1, 0) × (0, 1, 1)₁₂ model was selected from several plausible ARIMA models. The residual mean square errors of the GRNN-ARIMA and ARIMA models were 0.4467 and 0.6521 in the training part, and 0.0958 and 0.1133 in the validation part, respectively. The mean absolute error and mean absolute percentage error of the hybrid model were also less than those of the ARIMA model. Discussion and Conclusions The gradual decline in the tuberculosis report rate may be attributed to the effect of intensive tuberculosis control measures. The striking seasonal variation may have resulted from several factors. We suppose that a delay in the surveillance system may also have contributed to the variation. According to the fitting and prediction accuracy, the hybrid model outperforms the traditional ARIMA model, which may facilitate the allocation of health resources in China. PMID:24223232
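For readers who want to reproduce the ARIMA stage of such a hybrid model, the following minimal Python sketch fits a seasonal ARIMA(2, 1, 0) × (0, 1, 1) model with seasonal period 12 using statsmodels and reports a validation RMSE. The monthly report-rate series and the split dates are illustrative placeholders, not the Hubei data, and the GRNN residual stage is only indicated in a comment.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical monthly TB report rates (per 100,000), Jan 2004 - Jun 2012.
idx = pd.date_range("2004-01", periods=102, freq="MS")
rates = pd.Series(90 + 10 * np.sin(2 * np.pi * np.arange(102) / 12)
                  - 0.05 * np.arange(102), index=idx)

train, test = rates[:"2011-12"], rates["2012-01":]

# Seasonal ARIMA(2, 1, 0) x (0, 1, 1) with period 12, the order selected in the paper.
model = SARIMAX(train, order=(2, 1, 0), seasonal_order=(0, 1, 1, 12)).fit(disp=False)

pred = model.forecast(steps=len(test))
rmse = np.sqrt(np.mean((pred.values - test.values) ** 2))
print(f"validation RMSE: {rmse:.4f}")
# In the hybrid scheme, a GRNN (or any regressor) would then be trained on the
# in-sample residuals of this fit and its output added to the ARIMA forecast.
```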
2017-03-01
proposed. Expected profiles can incorporate a level of overdesign. Finally, the Design Integrity measuring techniques are applied to five Test Article ... Inserted into Test System. Table 2 presents the results of the analysis applied to each of the test article designs. Each of the domains are ... the lowest integrities. Based on the analysis, the DI metric shows measurable differentiation between all five Test Article ... Error Location Error
Bacheler, N.M.; Buckel, J.A.; Hightower, J.E.; Paramore, L.M.; Pollock, K.H.
2009-01-01
A joint analysis of tag return and telemetry data should improve estimates of mortality rates for exploited fishes; however, the combined approach has thus far only been tested in terrestrial systems. We tagged subadult red drum (Sciaenops ocellatus) with conventional tags and ultrasonic transmitters over 3 years in coastal North Carolina, USA, to test the efficacy of the combined telemetry - tag return approach. There was a strong seasonal pattern to monthly fishing mortality rate (F) estimates from both conventional and telemetry tags; highest F values occurred in fall months and lowest levels occurred during winter. Although monthly F values were similar in pattern and magnitude between conventional tagging and telemetry, information on F in the combined model came primarily from conventional tags. The estimated natural mortality rate (M) in the combined model was low (estimated annual rate ± standard error: 0.04 ± 0.04) and was based primarily upon the telemetry approach. Using high-reward tagging, we estimated different tag reporting rates for state agency and university tagging programs. The combined telemetry - tag return approach can be an effective approach for estimating F and M as long as several key assumptions of the model are met.
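The instantaneous rates F and M estimated here translate into finite survival and death fractions through the standard Baranov catch-equation relationships; the minimal sketch below shows that conversion for one month. The numerical values are illustrative only and are not the study's estimates.

```python
import numpy as np

def monthly_fates(F, M):
    """Survival and death fractions for one month at instantaneous fishing (F)
    and natural (M) mortality rates (standard Baranov catch-equation relations)."""
    Z = F + M
    survival = np.exp(-Z)
    harvested = (F / Z) * (1.0 - survival)   # fraction removed by the fishery
    natural = (M / Z) * (1.0 - survival)     # fraction dying of natural causes
    return survival, harvested, natural

# Illustrative values: high fall F, low annual M spread over a month.
F_fall, M_month = 0.15, 0.04 / 12
s, h, d = monthly_fates(F_fall, M_month)
print(f"survive {s:.3f}, harvested {h:.3f}, natural deaths {d:.3f}")
```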
NASA Astrophysics Data System (ADS)
Baker, S.; Berryman, E.; Hawbaker, T. J.; Ewers, B. E.
2015-12-01
While much attention has been focused on large-scale forest disturbances such as fire, harvesting, drought and insect attacks, small-scale forest disturbances that create gaps in forest canopies and below-ground root and mycorrhizal networks may accumulate to impact regional-scale carbon budgets. In a lodgepole pine (Pinus contorta) forest near Fox Park, WY, clusters of 15 and 30 trees were removed in 1988 to assess the effect of tree gap disturbance on fine root density and nitrogen transformation. Twenty-seven years later the gaps remain, with limited regeneration present only in the center of the 30-tree plots, beyond the influence of roots from adjacent intact trees. Soil respiration was measured in the summer of 2015 to assess the influence of these disturbances on carbon cycling in Pinus contorta forests. Positions at the centers of experimental disturbances were found to have the lowest respiration rates (mean 2.45 μmol C/m2/s, standard error 0.17 μmol C/m2/s), control plots in the undisturbed forest were highest (mean 4.15 μmol C/m2/s, standard error 0.63 μmol C/m2/s), and positions near the margin of the disturbance were intermediate (mean 3.7 μmol C/m2/s, standard error 0.34 μmol C/m2/s). Fine root densities, soil nitrogen, and microclimate changes were also measured and played an important role in respiration rates of disturbed plots. This demonstrates that a long-term effect on carbon cycling occurs when gaps are created in the canopy and root network of lodgepole forests.
ERIC Educational Resources Information Center
Marling, David
2012-01-01
Native American Nations have perpetually had the highest rates of poverty and unemployment and the lowest per capita income of any ethnic population in the United States. Additionally, American Indian students have the highest high school dropout rates and lowest academic performance rates as well as the lowest college admission and retention…
Estimating top-of-atmosphere thermal infrared radiance using MERRA-2 atmospheric data
NASA Astrophysics Data System (ADS)
Kleynhans, Tania; Montanaro, Matthew; Gerace, Aaron; Kanan, Christopher
2017-05-01
Thermal infrared satellite images have been widely used in environmental studies. However, satellites have limited temporal resolution, e.g., 16-day Landsat or 1- to 2-day Terra MODIS. This paper investigates the use of the Modern-Era Retrospective analysis for Research and Applications, Version 2 (MERRA-2) reanalysis data product, produced by NASA's Global Modeling and Assimilation Office (GMAO), to predict global top-of-atmosphere (TOA) thermal infrared radiance. The high temporal resolution of the MERRA-2 data product presents opportunities for novel research and applications. Various methods were applied to estimate TOA radiance from MERRA-2 variables, namely (1) a parameterized physics-based method, (2) linear regression models, and (3) non-linear support vector regression. Model prediction accuracy was evaluated using temporally and spatially coincident Moderate Resolution Imaging Spectroradiometer (MODIS) thermal infrared data as reference data. This research found that support vector regression with a radial basis function kernel produced the lowest error rates. Sources of errors are discussed and defined. Further research is currently being conducted to train deep learning models to predict TOA thermal radiance.
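A hedged sketch of the best-performing approach named above (support vector regression with an RBF kernel) is shown below using scikit-learn. The predictor matrix stands in for MERRA-2 variables and the targets for coincident MODIS radiances; all values are synthetic assumptions rather than the study's data.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_squared_error

# Hypothetical training set: rows are grid cells, columns are reanalysis
# variables (e.g., skin temperature, column water vapor, ...); targets are
# coincident MODIS TOA thermal radiances.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
y = 10 + 2 * X[:, 0] - X[:, 1] + rng.normal(scale=0.3, size=500)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X[:400], y[:400])

rmse = np.sqrt(mean_squared_error(y[400:], model.predict(X[400:])))
print(f"hold-out RMSE: {rmse:.3f}")
```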
Selective Weighted Least Squares Method for Fourier Transform Infrared Quantitative Analysis.
Wang, Xin; Li, Yan; Wei, Haoyun; Chen, Xia
2017-06-01
Classical least squares (CLS) regression is a popular multivariate statistical method used frequently for quantitative analysis using Fourier transform infrared (FT-IR) spectrometry. Classical least squares provides the best unbiased estimator for uncorrelated residual errors with zero mean and equal variance. However, the noise in FT-IR spectra, which accounts for a large portion of the residual errors, is heteroscedastic. Thus, if this noise with zero mean dominates the residual errors, the weighted least squares (WLS) regression method described in this paper is a better estimator than CLS. However, if bias errors, such as the residual baseline error, are significant, WLS may perform worse than CLS. In this paper, we compare the effect of noise and bias error on the use of CLS and WLS in quantitative analysis. Results indicated that for wavenumbers with low absorbance, the bias error significantly affected the error, such that the performance of CLS is better than that of WLS. However, for wavenumbers with high absorbance, the noise significantly affected the error, and WLS proves to be better than CLS. Thus, we propose a selective weighted least squares (SWLS) regression that processes data at different wavenumbers using either CLS or WLS based on a selection criterion, i.e., lower or higher than an absorbance threshold. The effects of various factors on the optimal threshold value (OTV) for SWLS have been studied through numerical simulations. These studies reported that: (1) the concentration and the analyte type had minimal effect on the OTV; and (2) the major factor that influences the OTV is the ratio between the bias error and the standard deviation of the noise. The last part of this paper is dedicated to the quantitative analysis of methane gas spectra and methane/toluene mixture gas spectra, measured using FT-IR spectrometry and analyzed with CLS, WLS, and SWLS. The standard error of prediction (SEP), bias of prediction (bias), and the residual sum of squares of the errors (RSS) from the three quantitative analyses were compared. In methane gas analysis, SWLS yielded the lowest SEP and RSS among the three methods. In methane/toluene mixture gas analysis, a modification of SWLS has been presented to tackle the bias error from other components. The SWLS without modification presents the lowest SEP in all cases but not the lowest bias and RSS. The modification of SWLS reduced the bias and showed a lower RSS than CLS, especially for small components.
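The core contrast between CLS and WLS can be written in a few lines of NumPy. The sketch below is a simplified reading in which the weights are inverse noise variances and a threshold on absorbance (playing the role of the OTV) decides which treatment each wavenumber would receive; the spectra, noise model, and threshold are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical pure-component spectra K (wavenumbers x components) and a mixture
# spectrum 'a' with heteroscedastic noise whose variance grows with absorbance.
K = np.abs(rng.normal(size=(200, 2)))
c_true = np.array([0.7, 0.3])
var_noise = 1e-6 + 1e-4 * (K @ c_true) ** 2
a = K @ c_true + rng.normal(scale=np.sqrt(var_noise))

# CLS: ordinary least squares, equal weight at every wavenumber.
c_cls, *_ = np.linalg.lstsq(K, a, rcond=None)

# WLS: weight each wavenumber by the inverse of its noise variance.
w = 1.0 / var_noise
c_wls, *_ = np.linalg.lstsq(K * np.sqrt(w)[:, None], a * np.sqrt(w), rcond=None)

# SWLS (simplified reading): wavenumbers below an absorbance threshold (the OTV
# analogue) are treated CLS-style, the rest WLS-style.
threshold = 0.5
use_wls = a > threshold
print("CLS estimate:", c_cls)
print("WLS estimate:", c_wls)
print("wavenumbers assigned to WLS:", int(use_wls.sum()), "of", a.size)
```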
Fisher, Moria E; Huang, Felix C; Wright, Zachary A; Patton, James L
2014-01-01
Manipulation of error feedback has been of great interest to recent studies in motor control and rehabilitation. Typically, motor adaptation is shown as a change in performance with a single scalar metric for each trial, yet such an approach might overlook details about how error evolves through the movement. We believe that statistical distributions of movement error through the extent of the trajectory can reveal unique patterns of adaptation and possibly reveal clues to how the motor system processes information about error. This paper describes different possible ordinate domains, focusing on representations in time and state-space, used to quantify reaching errors. We hypothesized that the domain with the lowest amount of variability would lead to a predictive model of reaching error with the highest accuracy. Here we showed that errors represented in the time domain demonstrate the least variance and allow for the most accurate predictive model of reaching errors. These predictive models will give rise to more specialized methods of robotic feedback and improve previous techniques of error augmentation.
Empirical performance of interpolation techniques in risk-neutral density (RND) estimation
NASA Astrophysics Data System (ADS)
Bahaludin, H.; Abdullah, M. H.
2017-03-01
The objective of this study is to evaluate the empirical performance of interpolation techniques in risk-neutral density (RND) estimation. Firstly, the empirical performance is evaluated by using statistical analysis based on the implied mean and the implied variance of the RND. Secondly, the interpolation performance is measured based on pricing error. We propose using the leave-one-out cross-validation (LOOCV) pricing error for interpolation selection purposes. The statistical analyses indicate that there are statistical differences between the interpolation techniques: second-order polynomial, fourth-order polynomial and smoothing spline. The results of the LOOCV pricing error show that interpolation using a fourth-order polynomial provides the best fit to option prices, as it has the lowest error value.
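A minimal sketch of the LOOCV pricing-error criterion is given below: each strike is left out in turn, a polynomial is fitted to the remaining prices, and the held-out pricing errors are accumulated into an RMSE. For brevity the fit is done directly on option prices rather than in implied-volatility space, and the price curve is synthetic.

```python
import numpy as np

def loocv_error(strikes, prices, degree):
    """Leave-one-out RMSE of a polynomial fit of price against strike."""
    errors = []
    for i in range(len(strikes)):
        mask = np.arange(len(strikes)) != i
        coeffs = np.polyfit(strikes[mask], prices[mask], degree)
        errors.append(np.polyval(coeffs, strikes[i]) - prices[i])
    return np.sqrt(np.mean(np.square(errors)))

# Hypothetical option-price curve across strikes.
strikes = np.linspace(80, 120, 15)
prices = np.maximum(100 - strikes, 0) + 5 * np.exp(-((strikes - 100) / 15) ** 2)

for deg in (2, 4):
    print(f"degree {deg}: LOOCV pricing RMSE = {loocv_error(strikes, prices, deg):.4f}")
```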
Batch growth kinetic studies of locally isolated cyanide-degrading Serratia marcescens strain AQ07.
Karamba, Kabiru Ibrahim; Ahmad, Siti Aqlima; Zulkharnain, Azham; Yasid, Nur Adeela; Ibrahim, Salihu; Shukor, Mohd Yunus
2018-01-01
The evaluation of degradation and growth kinetics of Serratia marcescens strain AQ07 was carried out using three half-order models at all the initial concentrations of cyanide, with regression values exceeding 0.97. The presence of varying cyanide concentrations reveals that the growth and degradation of the bacteria were affected by the increase in cyanide concentration, with a total halt at 700 ppm KCN after 72 h incubation. In this study, specific growth and degradation rates were found to follow substrate inhibition kinetics. These two rates fitted well to the kinetic models of Teissier, Luong, Aiba and Haldane, while the performance of the Monod model was found to be unsatisfactory. These models were used to clarify the substrate inhibition of bacterial growth. The analyses of these models have shown that the Luong model fitted the experimental data with the highest coefficient of determination (R2) values of 0.9794 and 0.9582 and the lowest root mean square error (RMSE) values of 0.000204 and 0.001, respectively, for the specific rates of degradation and growth. It is the only model that estimates the maximum substrate concentration (Sm) of 713.4 and the empirical constant (n) of 1.516. Teissier and Aiba fitted the experimental data with R2 values of 0.8002 and 0.7661 and a low RMSE of 0.0006, respectively, for the specific biodegradation rate, while having an R2 value of 0.9 and RMSE of 0.001, respectively, for the specific growth rate. Haldane had the lowest R2 values of 0.67 and 0.78 for the specific biodegradation and growth rates, with RMSE of 0.0006 and 0.002, respectively. This indicates the level of the bacteria's stability in varying concentrations of cyanide and the maximum cyanide concentration it can tolerate within a specific time period. The biokinetic constants predicted from this model demonstrate a good ability of the locally isolated bacteria for cyanide remediation in industrial effluents.
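Assuming the usual form of the Luong substrate-inhibition model, μ(S) = μmax·S/(Ks + S)·(1 − S/Sm)^n, the sketch below fits it with SciPy's curve_fit and reports the RMSE used to compare models. The concentration and growth-rate values are made up for illustration and are not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def luong(S, mu_max, Ks, Sm, n):
    """Luong substrate-inhibition model: mu = mu_max*S/(Ks+S)*(1 - S/Sm)**n."""
    return mu_max * S / (Ks + S) * np.clip(1.0 - S / Sm, 0.0, None) ** n

# Illustrative cyanide concentrations (ppm KCN) and specific growth rates (1/h).
S = np.array([50, 100, 200, 300, 400, 500, 600, 700], dtype=float)
mu = np.array([0.055, 0.072, 0.080, 0.070, 0.055, 0.035, 0.015, 0.0])

popt, _ = curve_fit(luong, S, mu, p0=[0.1, 50, 750, 1.5], maxfev=10000)
rmse = np.sqrt(np.mean((luong(S, *popt) - mu) ** 2))
print(f"mu_max={popt[0]:.3f}, Ks={popt[1]:.1f}, Sm={popt[2]:.1f}, "
      f"n={popt[3]:.2f}, RMSE={rmse:.4f}")
```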
Resident self-other assessor agreement: influence of assessor, competency, and performance level.
Lipsett, Pamela A; Harris, Ilene; Downing, Steven
2011-08-01
To review the literature on self-assessment in the context of resident performance and to determine the correlation between self-assessment across competencies in high- and low-performing residents and assessments performed by raters from a variety of professional roles (peers, nurses, and faculty). Retrospective analysis of prospectively collected anonymous self-assessment and multiprofessional (360-degree) performance assessments by competency and overall. University-based academic general surgery program. Sixty-two residents rotating in general surgery. Mean difference for each self-assessment dyad (self-peer, self-nurse, and self-attending physician) by resident performance quartile, adjusted for measurement error, correlation coefficients, and summed differences across all competencies. Irrespective of self-other dyad, residents asked to rate their global performance overestimated their skills. Residents in the upper quartile underestimated their specific skills while those in the lowest-performing quartile overestimated their abilities when compared with faculty, peers, and especially nurse raters. Moreover, overestimation was greatest in competencies related to interpersonal skills, communication, teamwork, and professionalism. Rater, level of performance, and the competency being assessed all influence the comparison of the resident's self-assessment and those of other raters. Self-assessment of competencies related to behavior may be inaccurate when compared with raters from various professions. Residents in the lowest-performing quartile are least able to identify their weaknesses. These data have important implications for residents, program directors, and the public and suggest that strategies that help the lowest-performing residents recognize areas in need of improvement are needed.
Eagles-Smith, Collin A.; Ackerman, Joshua T.
2014-01-01
Estuaries are globally important areas for methylmercury bioaccumulation because of high methylmercury production rates and use by fish and wildlife. We measured total mercury (THg) concentrations in ten fish species from 32 wetland and open bay sites in San Francisco Bay Estuary (2005–2008). Fish THg concentrations (μg/g dry weight ± standard error) differed by up to 7.4× among estuary habitats. Concentrations were lowest in open bay (0.17 ± 0.02) and tidal wetlands (0.42 ± 0.02), and highest in managed seasonal saline wetlands (1.27 ± 0.05) and decommissioned high salinity salt ponds (1.14 ± 0.07). Mercury also differed among fishes, with Mississippi silversides (0.87 ± 0.03) having the highest and longjaw mudsuckers (0.37 ± 0.01) the lowest concentrations. Overall, 26% and 12% of fish exceeded toxicity benchmarks for fish (0.20 μg/g wet weight) and piscivorous bird (0.30 μg/g wet weight) health, respectively. Our results suggest that despite managed wetlands' limited abundance within estuaries, they may be disproportionately important habitats of Hg risk to coastal wildlife.
Reliability study of biometrics "do not contact" in myopia.
Migliorini, R; Fratipietro, M; Comberiati, A M; Pattavina, L; Arrico, L
The aim of the study is a comparison between the refractive condition of the eye actually achieved after surgery and the expected refractive condition as calculated via a biometer. The study was conducted in a random group of 38 eyes of patients undergoing surgery by phacoemulsification. The mean absolute error between the values predicted from the optical biometer measurements and those obtained post-operatively was around 0.47%. Our study shows results not far from those reported in the literature; in relation to the mean absolute error, our value of 0.47 ± 0.11 (SEM) is among the lowest reported.
State variation in rates of cesarean and VBAC delivery: 1989 and 1993.
Clarke, S C; Taffel, S M
1996-01-01
There is wide variation among states in rates of cesarean and vaginal birth after cesarean (VBAC) deliveries. In general, states in the South have the highest cesarean rates, states in the West have the lowest, and states in the Northeast and Midwest are intermediate. Louisiana had the highest overall rate in 1993 (27.7 per 100 births) while Alaska had the lowest rate (15.2). The majority of states had declines in their cesarean rates between 1989 and 1993. Patterns in primary cesarean rates are similar to those of the overall rate: states in the South generally have the highest rates while states in the West have the lowest rates. Primary cesarean rates ranged from a high of 19.6 in Louisiana to a low of 10.6 in Wisconsin. In general, states with low cesarean rates have among the highest rates of VBAC delivery. Alaska had the highest VBAC rate (40.0), which was almost quadruple the rate of Louisiana (11.2), the state with the lowest rate. Most states had substantial increases in VBAC rates between 1989 and 1993. When examining cesarean rates by maternal age and birth order, states with the highest overall rates also have among the highest age/birth order-specific rates. Cesarean rates were lowest for mothers under 25 years of age having a second or higher order birth in Alaska, 10.4, and highest for mothers 35 years of age or over having a first birth in Mississippi, 51.3. Standardized cesarean rates, which were adjusted for differences between states in maternal age and birth order distributions, did not diminish the variation among areas.
Correcting for sequencing error in maximum likelihood phylogeny inference.
Kuhner, Mary K; McGill, James
2014-11-04
Accurate phylogenies are critical to taxonomy as well as studies of speciation processes and other evolutionary patterns. Accurate branch lengths in phylogenies are critical for dating and rate measurements. Such accuracy may be jeopardized by unacknowledged sequencing error. We use simulated data to test a correction for DNA sequencing error in maximum likelihood phylogeny inference. Over a wide range of data polymorphism and true error rate, we found that correcting for sequencing error improves recovery of the branch lengths, even if the assumed error rate is up to twice the true error rate. Low error rates have little effect on recovery of the topology. When error is high, correction improves topological inference; however, when error is extremely high, using an assumed error rate greater than the true error rate leads to poor recovery of both topology and branch lengths. The error correction approach tested here was proposed in 2004 but has not been widely used, perhaps because researchers do not want to commit to an estimate of the error rate. This study shows that correction with an approximate error rate is generally preferable to ignoring the issue. Copyright © 2014 Kuhner and McGill.
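The idea of folding a sequencing error rate into a per-site likelihood can be illustrated with a simple uniform error model: a true base is observed correctly with probability 1 − ε and as each of the other three bases with probability ε/3. The sketch below shows that marginalization; it is a generic illustration, not the specific correction evaluated in the paper.

```python
import numpy as np

BASES = "ACGT"

def observation_probs(true_base_probs, epsilon):
    """P(observed base) given P(true base) under a uniform error model:
    a true base is read correctly with prob 1-eps, or as each of the
    other three bases with prob eps/3."""
    error_matrix = np.full((4, 4), epsilon / 3.0)
    np.fill_diagonal(error_matrix, 1.0 - epsilon)
    return true_base_probs @ error_matrix

# Example: the phylogenetic model puts probability 0.9 on 'A' at this site.
true_probs = np.array([0.9, 0.05, 0.03, 0.02])
for eps in (0.0, 0.01, 0.05):
    p_obs = observation_probs(true_probs, eps)
    print(f"eps={eps:.2f}:", dict(zip(BASES, np.round(p_obs, 4))))
```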
The Structure of Titan's Atmosphere from Cassini Radio Occultations
NASA Technical Reports Server (NTRS)
Schinder, Paul J.; Flasar, F. Michael; Marouf, Essam A.; French, Richard G.; McGhee, Colleen A.; Kliore, Arvydas J.; Rappaport, Nicole J.; Barbinis, Elias; Fleischman, Don; Anabtawi, Aseel
2011-01-01
We present results from the two radio occultations of the Cassini spacecraft by Titan in 2006, which probed mid-southern latitudes. Three of the ingress and egress soundings occurred within a narrow latitude range, 31-34 deg S near the surface, and the fourth at 52.8 deg S. Temperature-altitude profiles for all four occultation soundings are presented, and compared with the results of the Voyager 1 radio occultation (Lindal et al., 1983), the HASI instrument on the Huygens descent probe (Fulchignoni et al., 2005), and Cassini CIRS results (Flasar et al., 2005; Achterberg et al., 2008b). Sources of error in the retrieved temperature-altitude profiles are also discussed; a major contribution is from spacecraft velocity errors in the reconstructed ephemeris. These can be reduced by using CIRS data at 300 km to make along-track adjustments of the spacecraft timing. The occultation soundings indicate that the temperatures just above the surface at 31-34 deg S are about 93 K, while that at 53 deg S is about 1 K colder. At the tropopause, the temperatures at the lower latitudes are all about 70 K, while the 53 deg S profile is again 1 K colder. The temperature lapse rate in the lowest 2 km for the two ingress (dawn) profiles at 31 and 33 deg S lies along a dry adiabat except within approximately 200 m of the surface, where a small stable inversion occurs. This could be explained by turbulent mixing with low viscosity near the surface. The egress profile near 34 deg S shows a more complex structure in the lowest 2 km, while the egress profile at 53 deg S is more stable.
[Analysis of causes of incorrect use of dose aerosols].
Petro, W; Gebert, P; Lauber, B
1994-03-01
Preparations administered by inhalation make relatively high demands on the skill and knowledge of the patient in handling this form of application, for the effectiveness of the therapy is inseparably linked to its faultless application. The present article aims at analysing possible mistakes in handling and at finding the most effective way of avoiding them. Several groups of patients with different previous knowledge were analysed in respect of handling skill and the influence of training on its improvement; the patients' self-assessment was analysed by questioning them. Most mistakes are committed by patients whose only information consists of the contents of the package insert. Written instructions alone cannot convey sufficient information, especially on how to synchronize the release operations. Major mistakes are insufficient expiration before application in 85.6% of the patients and lack of synchronisation in 55.9%, while the lowest rate of handling errors was seen in patients who had undergone training and instruction. Training in application, associated with demonstration and subsequent exercise, reduces the error ratio to a tolerable level. Propellant-free powder inhalers and preparations applied by means of a spacer are clearly superior to others in respect of a comparatively low error rate. 99.3% of all patients believe they are correctly following the instructions, but on going into the question more deeply it becomes apparent that 37.1% of them make incorrect statements. Hence, practical training in application should get top priority in the treatment of obstructive diseases of the airways. The individual steps of the inhalation technique must be explained in detail and demonstrated by means of a placebo dosage aerosol. (ABSTRACT TRUNCATED AT 250 WORDS)
NASA Astrophysics Data System (ADS)
Huo, Ming-Xia; Li, Ying
2017-12-01
Quantum error correction is important to quantum information processing, as it allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from knowledge of the error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g., the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on past error correction data. We find that, using these estimated error rates, the probability of error correction failures can be significantly reduced by a factor that increases with the code distance.
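A hedged sketch of the estimation step is shown below: per-block error fractions derived from error-correction data are regressed on time with a Gaussian process (scikit-learn) and extrapolated with uncertainty. The data, kernel, and block structure are assumptions for illustration, not the authors' exact algorithm.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical data: observed error fraction in successive blocks of
# error-correction rounds, for a slowly drifting physical error rate.
rng = np.random.default_rng(2)
t = np.arange(100, dtype=float)[:, None]               # block index (time)
true_rate = 0.01 + 0.004 * np.sin(t.ravel() / 15.0)
observed = rng.binomial(n=1000, p=true_rate) / 1000.0   # per-block error fraction

kernel = 1.0 * RBF(length_scale=10.0) + WhiteKernel(noise_level=1e-5)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, observed)

t_future = np.array([[105.0], [110.0]])
mean, std = gp.predict(t_future, return_std=True)
print("predicted error rates:", mean, "+/-", std)
```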
1981-08-01
provide the lowest rate of momentum outflow and thus yield maximum diffuser efficiency. In their study, Wolf and Johnston (Ref. 1.12) used screens made ... other words, the uniform velocity at the diffuser exit implies the lowest exit velocity attainable for a given flow rate and lowest rate of momentum ... momentum, and energy and the equation of state. The procedures of manipulating these partial differential equations into an analytical model for analyzing
Differential Item Functioning of the Psychological Domain of the Menopause Rating Scale.
Monterrosa-Castro, Alvaro; Portela-Buelvas, Katherin; Oviedo, Heidi C; Herazo, Edwin; Campo-Arias, Adalberto
2016-01-01
Introduction. Quality of life could be quantified with the Menopause Rating Scale (MRS), which evaluates the severity of somatic, psychological, and urogenital symptoms in menopause. However, differential item functioning (DIF) analysis has not been applied previously. Objective. To establish the DIF of the psychological domain of the MRS in Colombian women. Methods. 4,009 women aged between 40 and 59 years, who participated in the CAVIMEC (Calidad de Vida en la Menopausia y Etnias Colombianas) project, were included. Average age was 49.0 ± 5.9 years. Women were classified as mestizo, Afro-Colombian, and indigenous. The results were presented as averages and standard deviations (X ± SD). A p value <0.001 was considered statistically significant. Results. In mestizo women, the highest X ± SD were obtained in physical and mental exhaustion (PME) (0.86 ± 0.93) and the lowest ones in anxiety (0.44 ± 0.79). In Afro-Colombian women, average scores of 0.99 ± 1.07 for PME and 0.63 ± 0.88 for anxiety were obtained. Indigenous women obtained an increased average score for PME (1.33 ± 0.93). The lowest score was evidenced in depressive mood (0.50 ± 0.81), which is different from other Colombian women (p < 0.001). Conclusions. The psychological items of the MRS show differential functioning according to the ethnic group, which may induce systematic error in the measurement of the construct.
Differential Item Functioning of the Psychological Domain of the Menopause Rating Scale
Portela-Buelvas, Katherin; Oviedo, Heidi C.; Herazo, Edwin; Campo-Arias, Adalberto
2016-01-01
Introduction. Quality of life could be quantified with the Menopause Rating Scale (MRS), which evaluates the severity of somatic, psychological, and urogenital symptoms in menopause. However, differential item functioning (DIF) analysis has not been applied previously. Objective. To establish the DIF of the psychological domain of the MRS in Colombian women. Methods. 4,009 women aged between 40 and 59 years, who participated in the CAVIMEC (Calidad de Vida en la Menopausia y Etnias Colombianas) project, were included. Average age was 49.0 ± 5.9 years. Women were classified as mestizo, Afro-Colombian, and indigenous. The results were presented as averages and standard deviations (X ± SD). A p value <0.001 was considered statistically significant. Results. In mestizo women, the highest X ± SD were obtained in physical and mental exhaustion (PME) (0.86 ± 0.93) and the lowest ones in anxiety (0.44 ± 0.79). In Afro-Colombian women, average scores of 0.99 ± 1.07 for PME and 0.63 ± 0.88 for anxiety were obtained. Indigenous women obtained an increased average score for PME (1.33 ± 0.93). The lowest score was evidenced in depressive mood (0.50 ± 0.81), which is different from other Colombian women (p < 0.001). Conclusions. The psychological items of the MRS show differential functioning according to the ethnic group, which may induce systematic error in the measurement of the construct. PMID:27847825
Belley, Matthew D; Wang, Chu; Nguyen, Giao; Gunasingha, Rathnayaka; Chao, Nelson J; Chen, Benny J; Dewhirst, Mark W; Yoshizumi, Terry T
2014-03-01
Accurate dosimetry is essential when irradiating mice to ensure that functional and molecular endpoints are well understood for the radiation dose delivered. Conventional methods of prescribing dose in mice involve the use of a single dose rate measurement and assume a uniform average dose throughout all organs of the entire mouse. Here, the authors report the individual average organ dose values for the irradiation of a 12, 23, and 33 g mouse on a 320 kVp x-ray irradiator and calculate the resulting error from using conventional dose prescription methods. Organ doses were simulated in the Geant4 application for tomographic emission toolkit using the MOBY mouse whole-body phantom. Dosimetry was performed for three beams utilizing filters A (1.65 mm Al), B (2.0 mm Al), and C (0.1 mm Cu + 2.5 mm Al), respectively. In addition, simulated x-ray spectra were validated with physical half-value layer measurements. Average doses in soft-tissue organs were found to vary by as much as 23%-32% depending on the filter. Compared to filters A and B, filter C provided the hardest beam and had the lowest variation in soft-tissue average organ doses across all mouse sizes, with a difference of 23% for the median mouse size of 23 g. This work suggests a new dose prescription method in small animal dosimetry: it presents a departure from the conventional approach of assigning a single dose value for irradiation of mice to a more comprehensive approach of characterizing individual organ doses to minimize the error and uncertainty. In human radiation therapy, clinical treatment planning establishes the target dose as well as the dose distribution; however, this has generally not been done in small animal research. These results suggest that organ dose errors will be minimized by calibrating the dose rates for all filters, and using different dose rates for different organs.
Belley, Matthew D.; Wang, Chu; Nguyen, Giao; Gunasingha, Rathnayaka; Chao, Nelson J.; Chen, Benny J.; Dewhirst, Mark W.; Yoshizumi, Terry T.
2014-01-01
Purpose: Accurate dosimetry is essential when irradiating mice to ensure that functional and molecular endpoints are well understood for the radiation dose delivered. Conventional methods of prescribing dose in mice involve the use of a single dose rate measurement and assume a uniform average dose throughout all organs of the entire mouse. Here, the authors report the individual average organ dose values for the irradiation of a 12, 23, and 33 g mouse on a 320 kVp x-ray irradiator and calculate the resulting error from using conventional dose prescription methods. Methods: Organ doses were simulated in the Geant4 application for tomographic emission toolkit using the MOBY mouse whole-body phantom. Dosimetry was performed for three beams utilizing filters A (1.65 mm Al), B (2.0 mm Al), and C (0.1 mm Cu + 2.5 mm Al), respectively. In addition, simulated x-ray spectra were validated with physical half-value layer measurements. Results: Average doses in soft-tissue organs were found to vary by as much as 23%–32% depending on the filter. Compared to filters A and B, filter C provided the hardest beam and had the lowest variation in soft-tissue average organ doses across all mouse sizes, with a difference of 23% for the median mouse size of 23 g. Conclusions: This work suggests a new dose prescription method in small animal dosimetry: it presents a departure from the conventional approach of assigning a single dose value for irradiation of mice to a more comprehensive approach of characterizing individual organ doses to minimize the error and uncertainty. In human radiation therapy, clinical treatment planning establishes the target dose as well as the dose distribution, however, this has generally not been done in small animal research. These results suggest that organ dose errors will be minimized by calibrating the dose rates for all filters, and using different dose rates for different organs. PMID:24593746
DOE Office of Scientific and Technical Information (OSTI.GOV)
Belley, Matthew D.; Wang, Chu; Nguyen, Giao
2014-03-15
Purpose: Accurate dosimetry is essential when irradiating mice to ensure that functional and molecular endpoints are well understood for the radiation dose delivered. Conventional methods of prescribing dose in mice involve the use of a single dose rate measurement and assume a uniform average dose throughout all organs of the entire mouse. Here, the authors report the individual average organ dose values for the irradiation of a 12, 23, and 33 g mouse on a 320 kVp x-ray irradiator and calculate the resulting error from using conventional dose prescription methods. Methods: Organ doses were simulated in the Geant4 application for tomographic emission toolkit using the MOBY mouse whole-body phantom. Dosimetry was performed for three beams utilizing filters A (1.65 mm Al), B (2.0 mm Al), and C (0.1 mm Cu + 2.5 mm Al), respectively. In addition, simulated x-ray spectra were validated with physical half-value layer measurements. Results: Average doses in soft-tissue organs were found to vary by as much as 23%–32% depending on the filter. Compared to filters A and B, filter C provided the hardest beam and had the lowest variation in soft-tissue average organ doses across all mouse sizes, with a difference of 23% for the median mouse size of 23 g. Conclusions: This work suggests a new dose prescription method in small animal dosimetry: it presents a departure from the conventional approach of assigning a single dose value for irradiation of mice to a more comprehensive approach of characterizing individual organ doses to minimize the error and uncertainty. In human radiation therapy, clinical treatment planning establishes the target dose as well as the dose distribution; however, this has generally not been done in small animal research. These results suggest that organ dose errors will be minimized by calibrating the dose rates for all filters, and using different dose rates for different organs.
Analyzing communication errors in an air medical transport service.
Dalto, Joseph D; Weir, Charlene; Thomas, Frank
2013-01-01
Poor communication can result in adverse events. Presently, no standards exist for classifying and analyzing air medical communication errors. This study sought to determine the frequency and types of communication errors reported within an air medical quality and safety assurance reporting system. Of 825 quality assurance reports submitted in 2009, 278 were randomly selected and analyzed for communication errors. Each communication error was classified and mapped to Clark's communication level hierarchy (ie, levels 1-4). Descriptive statistics were performed, and comparisons were evaluated using chi-square analysis. Sixty-four communication errors were identified in 58 reports (21% of 278). Of the 64 identified communication errors, only 18 (28%) were classified by the staff to be communication errors. Communication errors occurred most often at level 1 (n = 42/64, 66%) followed by level 4 (21/64, 33%). Level 2 and 3 communication failures were rare (<1%). Communication errors were found in a fifth of quality and safety assurance reports. The reporting staff identified less than a third of these errors. Nearly all communication errors (99%) occurred at either the lowest level of communication (level 1, 66%) or the highest level (level 4, 33%). An air medical communication ontology is necessary to improve the recognition and analysis of communication errors. Copyright © 2013 Air Medical Journal Associates. Published by Elsevier Inc. All rights reserved.
Does a better model yield a better argument? An info-gap analysis
NASA Astrophysics Data System (ADS)
Ben-Haim, Yakov
2017-04-01
Theories, models and computations underlie reasoned argumentation in many areas. The possibility of error in these arguments, though of low probability, may be highly significant when the argument is used in predicting the probability of rare high-consequence events. This implies that the choice of a theory, model or computational method for predicting rare high-consequence events must account for the probability of error in these components. However, error may result from lack of knowledge or surprises of various sorts, and predicting the probability of error is highly uncertain. We show that the putatively best, most innovative and sophisticated argument may not actually have the lowest probability of error. Innovative arguments may entail greater uncertainty than more standard but less sophisticated methods, creating an innovation dilemma in formulating the argument. We employ info-gap decision theory to characterize and support the resolution of this problem and present several examples.
Resolving Mixed Algal Species in Hyperspectral Images
Mehrubeoglu, Mehrube; Teng, Ming Y.; Zimba, Paul V.
2014-01-01
We investigated a lab-based hyperspectral imaging system's response to pure (single) and mixed (two) algal cultures containing known algae types and volumetric combinations to characterize the system's performance. The spectral response to volumetric changes in single algal cultures and in combinations of algal mixtures with known ratios was tested. Constrained linear spectral unmixing was applied to extract the algal content of the mixtures based on the abundances that produced the lowest root mean square (RMS) error. Percent prediction error was computed as the difference between the actual percent volumetric content and the abundances at minimum RMS error. The best prediction errors were computed as 0.4%, 0.4% and 6.3% for the mixed spectra from three independent experiments. The worst prediction errors were found to be 5.6%, 5.4% and 13.4% for the same order of experiments. Additionally, the Beer-Lambert law was utilized to relate transmittance to different volumes of pure algal suspensions, demonstrating linear logarithmic trends for optical property measurements. PMID:24451451
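Constrained linear unmixing of a mixed-pixel spectrum can be sketched with non-negative least squares plus a sum-to-one constraint appended as a heavily weighted extra equation; the endmember spectra and mixture below are synthetic stand-ins for the two algal cultures.

```python
import numpy as np
from scipy.optimize import nnls

def unmix(endmembers, pixel, sum_weight=1e3):
    """Estimate abundances a >= 0 with sum(a) ~= 1 by appending the
    sum-to-one constraint as a heavily weighted extra equation."""
    E = np.vstack([endmembers, sum_weight * np.ones(endmembers.shape[1])])
    y = np.append(pixel, sum_weight)
    abundances, _ = nnls(E, y)
    rmse = np.sqrt(np.mean((endmembers @ abundances - pixel) ** 2))
    return abundances, rmse

# Hypothetical endmember spectra for two algal cultures (bands x species).
rng = np.random.default_rng(3)
endmembers = np.abs(rng.normal(size=(50, 2)))
mixed_pixel = endmembers @ np.array([0.6, 0.4]) + rng.normal(scale=0.01, size=50)

abund, rmse = unmix(endmembers, mixed_pixel)
print("estimated abundances:", abund, "RMSE:", rmse)
```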
Kuriakose, Saji; Joe, I Hubert
2013-11-01
Determination of the authenticity of essential oils has become more significant, in recent years, following some illegal adulteration and contamination scandals. The present investigative study focuses on the application of near infrared spectroscopy to detect sample authenticity and quantify economic adulteration of sandalwood oils. Several data pre-treatments are investigated for calibration and prediction using partial least square regression (PLSR). The quantitative data analysis is done using a new spectral approach - full spectrum or sequential spectrum. The optimum number of PLS components is obtained according to the lowest root mean square error of calibration (RMSEC=0.00009% v/v). The lowest root mean square error of prediction (RMSEP=0.00016% v/v) in the test set and the highest coefficient of determination (R(2)=0.99989) are used as the evaluation tools for the best model. A nonlinear method, locally weighted regression (LWR), is added to extract nonlinear information and to compare with the linear PLSR model. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Kuriakose, Saji; Joe, I. Hubert
2013-11-01
Determination of the authenticity of essential oils has become more significant, in recent years, following some illegal adulteration and contamination scandals. The present investigative study focuses on the application of near infrared spectroscopy to detect sample authenticity and quantify economic adulteration of sandalwood oils. Several data pre-treatments are investigated for calibration and prediction using partial least square regression (PLSR). The quantitative data analysis is done using a new spectral approach - full spectrum or sequential spectrum. The optimum number of PLS components is obtained according to the lowest root mean square error of calibration (RMSEC = 0.00009% v/v). The lowest root mean square error of prediction (RMSEP = 0.00016% v/v) in the test set and the highest coefficient of determination (R2 = 0.99989) are used as the evaluation tools for the best model. A nonlinear method, locally weighted regression (LWR), is added to extract nonlinear information and to compare with the linear PLSR model.
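A minimal scikit-learn sketch of the PLSR calibration/prediction workflow described above, including RMSEC and RMSEP for different numbers of components, is given below; the simulated spectra and adulterant levels are placeholders, not the sandalwood oil data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Simulated NIR spectra (samples x wavelengths) with adulterant level as target.
rng = np.random.default_rng(4)
levels = rng.uniform(0, 10, size=120)                       # adulterant % v/v
base = np.exp(-((np.linspace(0, 1, 300) - 0.5) ** 2) / 0.02)
spectra = np.outer(levels, base) + rng.normal(scale=0.05, size=(120, 300))

X_cal, X_test, y_cal, y_test = train_test_split(spectra, levels, random_state=0)

def rmse(y, yhat):
    return np.sqrt(np.mean((y - yhat.ravel()) ** 2))

# Choose the number of PLS components by the lowest calibration/prediction error.
for n in (2, 4, 6, 8):
    pls = PLSRegression(n_components=n).fit(X_cal, y_cal)
    print(f"{n} comps: RMSEC={rmse(y_cal, pls.predict(X_cal)):.4f}, "
          f"RMSEP={rmse(y_test, pls.predict(X_test)):.4f}")
```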
Influence of nuclei segmentation on breast cancer malignancy classification
NASA Astrophysics Data System (ADS)
Jelen, Lukasz; Fevens, Thomas; Krzyzak, Adam
2009-02-01
Breast Cancer is one of the most deadly cancers affecting middle-aged women. Accurate diagnosis and prognosis are crucial to reduce the high death rate. Nowadays there are numerous diagnostic tools for breast cancer diagnosis. In this paper we discuss a role of nuclear segmentation from fine needle aspiration biopsy (FNA) slides and its influence on malignancy classification. Classification of malignancy plays a very important role during the diagnosis process of breast cancer. Out of all cancer diagnostic tools, FNA slides provide the most valuable information about the cancer malignancy grade which helps to choose an appropriate treatment. This process involves assessing numerous nuclear features and therefore precise segmentation of nuclei is very important. In this work we compare three powerful segmentation approaches and test their impact on the classification of breast cancer malignancy. The studied approaches involve level set segmentation, fuzzy c-means segmentation and textural segmentation based on co-occurrence matrix. Segmented nuclei were used to extract nuclear features for malignancy classification. For classification purposes four different classifiers were trained and tested with previously extracted features. The compared classifiers are Multilayer Perceptron (MLP), Self-Organizing Maps (SOM), Principal Component-based Neural Network (PCA) and Support Vector Machines (SVM). The presented results show that level set segmentation yields the best results over the three compared approaches and leads to a good feature extraction with a lowest average error rate of 6.51% over four different classifiers. The best performance was recorded for multilayer perceptron with an error rate of 3.07% using fuzzy c-means segmentation.
Influence of survey strategy and interpolation model on DEM quality
NASA Astrophysics Data System (ADS)
Heritage, George L.; Milan, David J.; Large, Andrew R. G.; Fuller, Ian C.
2009-11-01
Accurate characterisation of morphology is critical to many studies in the field of geomorphology, particularly those dealing with changes over time. Digital elevation models (DEMs) are commonly used to represent morphology in three dimensions. The quality of the DEM is largely a function of the accuracy of individual survey points, field survey strategy, and the method of interpolation. Recommendations concerning field survey strategy and appropriate methods of interpolation are currently lacking. Furthermore, the majority of studies to date consider error to be uniform across a surface. This study quantifies survey strategy and interpolation error for a gravel bar on the River Nent, Blagill, Cumbria, UK. Five sampling strategies were compared: (i) cross section; (ii) bar outline only; (iii) bar and chute outline; (iv) bar and chute outline with spot heights; and (v) aerial LiDAR equivalent, derived from degraded terrestrial laser scan (TLS) data. Digital elevation models were then produced using five different common interpolation algorithms. Each resultant DEM was differenced from a terrestrial laser scan of the gravel bar surface in order to define the spatial distribution of vertical and volumetric error. Overall, triangulation with linear interpolation (TIN) or point kriging appeared to provide the best interpolators for the bar surface. The lowest error on average was found for the simulated aerial LiDAR survey strategy, regardless of interpolation technique. However, comparably low errors were also found for the bar-chute-spot sampling strategy when TINs or point kriging were used as the interpolator. The magnitude of the errors between survey strategies exceeded that found between interpolation techniques for a specific survey strategy. Strong relationships between local surface topographic variation (as defined by the standard deviation of vertical elevations in a 0.2-m diameter moving window) and DEM errors were also found, with much greater errors found at slope breaks such as bank edges. A series of curves is presented that demonstrates these relationships for each interpolation and survey strategy. The simulated aerial LiDAR data set displayed the lowest errors across the flatter surfaces; however, sharp slope breaks are better modelled by the morphologically based survey strategy. The curves presented have general application to spatially distributed data of river beds and may be applied to standard deviation grids to predict spatial error within a surface, depending upon sampling strategy and interpolation algorithm.
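The core comparison, interpolating sparse survey points onto a grid and differencing the result against a dense reference surface, can be sketched with SciPy's griddata as below. The synthetic surface and survey points are illustrative; scipy's 'linear' method stands in for TIN-style linear interpolation, and kriging would require a separate library such as pykrige.

```python
import numpy as np
from scipy.interpolate import griddata

# Synthetic "true" bar surface on a dense grid (stand-in for the TLS reference).
gx, gy = np.meshgrid(np.linspace(0, 10, 200), np.linspace(0, 5, 100))
true_z = 0.2 * np.sin(gx) + 0.1 * gy

# Sparse survey points mimicking a field survey strategy.
rng = np.random.default_rng(5)
px = rng.uniform(0, 10, 300)
py = rng.uniform(0, 5, 300)
pz = 0.2 * np.sin(px) + 0.1 * py

for method in ("linear", "cubic", "nearest"):   # 'linear' ~ TIN-style interpolation
    dem = griddata((px, py), pz, (gx, gy), method=method)
    err = dem - true_z
    print(f"{method}: mean |error| = {np.nanmean(np.abs(err)):.4f} m, "
          f"max |error| = {np.nanmax(np.abs(err)):.4f} m")
```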
Atmospheric Correction of Satellite Imagery Using Modtran 3.5 Code
NASA Technical Reports Server (NTRS)
Gonzales, Fabian O.; Velez-Reyes, Miguel
1997-01-01
When performing satellite remote sensing of the earth in the solar spectrum, atmospheric scattering and absorption effects provide the sensors with corrupted information about the target's radiance characteristics. We are faced with the problem of reconstructing the signal that was reflected from the target, from the data sensed by the remote sensing instrument. This article presents a method for simulating radiance characteristic curves of satellite images using a MODTRAN 3.5 band model (BM) code to solve the radiative transfer equation (RTE), and proposes a method for the implementation of an adaptive system for automated atmospheric corrections. The simulation procedure is carried out as follows: (1) for each satellite digital image a radiance characteristic curve is obtained by performing a digital number (DN) to radiance conversion, (2) using MODTRAN 3.5 a simulation of the image's characteristic curves is generated, (3) the output of the code is processed to generate radiance characteristic curves for the simulated cases. The simulation algorithm was used to simulate Landsat Thematic Mapper (TM) images for two types of locations: the ocean surface and a forest surface. The simulation procedure was validated by computing the error between the empirical and simulated radiance curves. While results in the visible region of the spectrum were not very accurate, those for the infrared region of the spectrum were encouraging. This information can be used for correction of the atmospheric effects. For the simulation over ocean, the lowest error produced in this region was of the order of 10-5 and up to 14 times smaller than errors in the visible region. For the same spectral region in the forest case, the lowest error produced was of the order of 10-4, and up to 41 times smaller than errors in the visible region.
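Step (1) of the procedure, converting digital numbers to at-sensor radiance, is a linear calibration of the form L = gain · DN + offset; the sketch below shows that conversion with placeholder coefficients that are not actual TM band values.

```python
import numpy as np

def dn_to_radiance(dn, gain, offset):
    """At-sensor spectral radiance from digital numbers using the standard
    linear calibration L = gain * DN + offset (units depend on the sensor)."""
    return gain * np.asarray(dn, dtype=float) + offset

# Placeholder calibration coefficients (not actual TM band values).
gain, offset = 0.055, 1.2
dn_band = np.array([[52, 60, 75], [80, 91, 103]])
print(dn_to_radiance(dn_band, gain, offset))
```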
NASA Technical Reports Server (NTRS)
Gaier, T.; Kangaslahti, P.; Lawrence, C. R.; Leitch, E. M.; Wollack, E. J.
2012-01-01
The Q/U Imaging ExperimenT (QUIET) is designed to measure polarization in the Cosmic Microwave Background, targeting the imprint of inflationary gravitational waves at large angular scales (approx. 1 deg). Between 2008 October and 2010 December, two independent receiver arrays were deployed sequentially on a 1.4 m side-fed Dragonian telescope. The polarimeters which form the focal planes use a highly compact design based on High Electron Mobility Transistors (HEMTs) that provides simultaneous measurements of the Stokes parameters Q, U, and I in a single module. The 17-element Q-band polarimeter array, with a central frequency of 43.1 GHz, has the best sensitivity (69 μK s^(1/2)) and the lowest instrumental systematic errors ever achieved in this band, contributing to the tensor-to-scalar ratio at r < 0.1. The 84-element W-band polarimeter array has a sensitivity of 87 μK s^(1/2) at a central frequency of 94.5 GHz. It has the lowest systematic errors to date, contributing at r < 0.01 (QUIET Collaboration 2012). The two arrays together cover multipoles in the range l ≈ 25-975. These are the largest HEMT-based arrays deployed to date. This article describes the design, calibration, and performance of, and sources of systematic error for, the instrument.
Native American Transfer Students from Tribal Institutions in Montana
ERIC Educational Resources Information Center
Brown, Lindsey
2017-01-01
Nationally, and within Montana, Native American college student retention and graduation rates are significantly lower than those of their majority counterparts. Native American high school students have the highest dropout rates, lowest college enrollment rates, and lowest percentage distribution of degrees conferred of any minority population in…
Solar dynamic heat receiver thermal characteristics in low earth orbit
NASA Technical Reports Server (NTRS)
Wu, Y. C.; Roschke, E. J.; Birur, G. C.
1988-01-01
A simplified system model is under development for evaluating the thermal characteristics and thermal performance of a solar dynamic spacecraft energy system's heat receiver. Results based on the baseline orbit, power system configuration, and operational conditions are generated for three basic receiver concepts and three concentrator surface slope errors. Receiver thermal characteristics and thermal behavior in LEO conditions are presented. The configuration in which heat is directly transferred to the working fluid is noted to generate the best system and thermal characteristics, as well as the lowest performance degradation with increasing slope error.
Low-lying electronic states of Li2+ and Li2-
NASA Astrophysics Data System (ADS)
Konowalow, Daniel D.; Fish, James L.
1984-02-01
Potential curves for the eight lowest-lying electronic states of Li2+ and the two lowest-lying states of Li2- are obtained by valence configuration calculations which utilize an effective core potential. The calculated ionization potential of the ground state of Li2 is found to be 5.16 eV and its electron affinity is 0.429 eV. Both values are in excellent agreement with recent experimental values and with values deduced from other high-quality ab initio quantum mechanical treatments. When our potential curve for the Li2+ (1²Σg+) state is corrected for the core-valence correlation error, we obtain spectroscopic constants which agree nicely with the experimental values of Bernheim, Gold and Tipton (BGT). For example, we find De = 10460 ± 140 cm-1 while BGT report De = 10469 ± 6 cm-1.
Krapu, G.L.; Johnson, D.H.; Dane, C.W.
1979-01-01
A technique for distinguishing adult from yearling wild mallards (Anas platyrhynchos), from late winter through the nesting season, was developed by applying discriminant analysis procedures to selected wing feather characters of 126 yearlings and 76 adults (2-year-olds) hand-reared from wild eggs during 1974, 1975, and 1977. Average values for feather characters generally increased as the birds advanced from yearlings to adults. Black-white surface area of greater secondary covert 2 was the single most reliable aging character identified during the study. The error rate was lowest in females (3%) when discriminant functions were used with measurements of primary 1 weight and black-white area of greater secondary covert 2 and in males (9%) when the functions were used with black-white area of greater secondary coverts 1, 2, and 3. Methodology precludes aging of birds in the field during capture operations.
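The discriminant-function approach described above can be sketched with modern tools as follows; the scikit-learn example below uses made-up feather measurements, not the study's data, and estimates the aging error rate by cross-validation.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Made-up feather measurements (primary 1 weight, black-white area of
# greater secondary covert 2) for yearling (0) and adult (1) females.
rng = np.random.default_rng(6)
yearlings = rng.normal([0.30, 110.0], [0.03, 12.0], size=(60, 2))
adults = rng.normal([0.34, 135.0], [0.03, 12.0], size=(40, 2))
X = np.vstack([yearlings, adults])
y = np.array([0] * 60 + [1] * 40)

lda = LinearDiscriminantAnalysis()
acc = cross_val_score(lda, X, y, cv=5).mean()
print(f"cross-validated aging error rate: {1 - acc:.2%}")
```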
Simultaneous Control of Error Rates in fMRI Data Analysis
Kang, Hakmook; Blume, Jeffrey; Ombao, Hernando; Badre, David
2015-01-01
The key idea of statistical hypothesis testing is to fix, and thereby control, the Type I error (false positive) rate across samples of any size. Multiple comparisons inflate the global (family-wise) Type I error rate and the traditional solution to maintaining control of the error rate is to increase the local (comparison-wise) Type II error (false negative) rates. However, in the analysis of human brain imaging data, the number of comparisons is so large that this solution breaks down: the local Type II error rate ends up being so large that scientifically meaningful analysis is precluded. Here we propose a novel solution to this problem: allow the Type I error rate to converge to zero along with the Type II error rate. It works because when the Type I error rate per comparison is very small, the accumulated (global) Type I error rate is also small. This solution is achieved by employing the Likelihood paradigm, which uses likelihood ratios to measure the strength of evidence on a voxel-by-voxel basis. In this paper, we provide theoretical and empirical justification for a likelihood approach to the analysis of human brain imaging data. In addition, we present extensive simulations that show the likelihood approach is viable, leading to 'cleaner' looking brain maps and operational superiority (lower average error rate). Finally, we include a case study on cognitive control related activation in the prefrontal cortex of the human brain. PMID:26272730
Ghahramanian, Akram; Rezaei, Tayyebeh; Abdullahzadeh, Farahnaz; Sheikhalipour, Zahra; Dianat, Iman
2017-01-01
Background: This study investigated the quality of healthcare services from patients' perspectives and its relationship with patient safety culture and nurse-physician professional communication. Methods: A cross-sectional study was conducted among 300 surgery patients and 101 nurses caring for them in a public hospital in Tabriz, Iran. Data were collected using the service quality measurement scale (SERVQUAL), the hospital survey on patient safety culture (HSOPSC) and the nurse-physician professional communication questionnaire. Results: The highest and lowest mean (±SD) scores of the patients' perception of healthcare service quality belonged to the assurance 13.92 (±3.55) and empathy 6.78 (±1.88) domains, respectively. With regard to the patient safety culture, the mean percentage of positive answers ranged from 45.87% for "non-punitive response to errors" to 68.21% for "organizational continuous learning" domains. The highest and lowest mean (±SD) scores for nurse-physician professional communication were obtained for the "cooperation" 3.44 (±0.35) and "non-participative decision-making" 2.84 (±0.34) domains, respectively. The "frequency of reported errors by healthcare professionals" (B=-4.20, 95% CI = -7.14 to -1.27, P<0.01) and "respect and sharing of information" (B=7.69, 95% CI=4.01 to 11.36, P<0.001) predicted the patients' perceptions of the quality of healthcare services. Conclusion: Organizational culture in dealing with medical error should be changed to a non-punitive response. Changes in safety culture towards reporting of errors, effective communication and teamwork between healthcare professionals are recommended. PMID:28695106
Gestational diabetes and macrosomia by race/ethnicity in Hawaii
2013-01-01
Background Gestational diabetes (GDM) has been shown to have long-term sequelae for both the mother and infant. Women with GDM are at increased risk of macrosomia, which predisposes the infant to birth injuries. Previous studies noted increased rates of GDM in Asian and Pacific Islander (API) women; however, the rate of macrosomia in API women with GDM is unclear. The objective of this study was to examine the relationship between ethnicity, gestational diabetes (GDM), and macrosomia in Hawaii. Methods A retrospective cohort study was performed using Hawaii Pregnancy Risk Assessment Monitoring System (PRAMS) data. Data from 2009–2011, linked with selected items from birth certificates, were used to examine GDM and macrosomia by ethnicity. SAS-callable SUDAAN 10.0 was used to generate odds ratios, point estimates and standard errors. Results Data from 4735 respondents were weighted to represent all pregnancies resulting in live births in Hawaii from 2009–2011. The overall prevalence of GDM in Hawaii was 10.9%. The highest prevalence of GDM was in Filipina (13.1%) and Hawaiian/Pacific Islander (12.1%) women. The lowest prevalence was in white women (7.4%). Hawaiian/Pacific Islander, Filipina, and other Asian women all had an increased risk of GDM compared to white women using bivariate analysis. Adjusting for obesity, age, maternal nativity, and smoking, Asian Pacific Islander (API) women, which includes Hawaiian/Pacific Islander, Filipina, and other Asian women, had a 50% increased odds of having GDM compared to white women when compared using multivariate analysis. Among women with GDM, the highest prevalence of macrosomia was in white women (14.5%) while the lowest was in Filipina (5.3%) women. Conclusions API women in Hawaii have increased rates of GDM compared to white women. Paradoxically, this elevated GDM risk in API women is not associated with an increased rate of macrosomia. This suggests the relationship between GDM and macrosomia is more complex in this population. PMID:24083634
Gestational diabetes and macrosomia by race/ethnicity in Hawaii.
Tsai, Pai-Jong Stacy; Roberson, Emily; Dye, Timothy
2013-10-01
Gestational diabetes (GDM) has been shown to have long-term sequelae for both the mother and infant. Women with GDM are at increased risk of macrosomia, which predisposes the infant to birth injuries. Previous studies noted increased rates of GDM in Asian and Pacific Islander (API) women; however, the rate of macrosomia in API women with GDM is unclear. The objective of this study was to examine the relationship between ethnicity, gestational diabetes (GDM), and macrosomia in Hawaii. A retrospective cohort study was performed using Hawaii Pregnancy Risk Assessment Monitoring System (PRAMS) data. Data from 2009-2011, linked with selected items from birth certificates, were used to examine GDM and macrosomia by ethnicity. SAS-callable SUDAAN 10.0 was used to generate odds ratios, point estimates and standard errors. Data from 4735 respondents were weighted to represent all pregnancies resulting in live births in Hawaii from 2009-2011. The overall prevalence of GDM in Hawaii was 10.9%. The highest prevalence of GDM was in Filipina (13.1%) and Hawaiian/Pacific Islander (12.1%) women. The lowest prevalence was in white women (7.4%). Hawaiian/Pacific Islander, Filipina, and other Asian women all had an increased risk of GDM compared to white women using bivariate analysis. Adjusting for obesity, age, maternal nativity, and smoking, Asian Pacific Islander (API) women, which includes Hawaiian/Pacific Islander, Filipina, and other Asian women, had a 50% increased odds of having GDM compared to white women when compared using multivariate analysis. Among women with GDM, the highest prevalence of macrosomia was in white women (14.5%) while the lowest was in Filipina (5.3%) women. API women in Hawaii have increased rates of GDM compared to white women. Paradoxically, this elevated GDM risk in API women is not associated with an increased rate of macrosomia. This suggests the relationship between GDM and macrosomia is more complex in this population.
NASA Astrophysics Data System (ADS)
Owens, P. R.; Libohova, Z.; Seybold, C. A.; Wills, S. A.; Peaslee, S.; Beaudette, D.; Lindbo, D. L.
2017-12-01
The measurement errors and spatial prediction uncertainties of soil properties in the modeling community are usually assessed against measured values when available. However, of equal importance is the assessment of the impacts of errors and uncertainty on cost-benefit analyses and risk assessments. Soil pH was selected as one of the most commonly measured soil properties used for liming recommendations. The objective of this study was to assess the error size from different sources and their implications with respect to management decisions. Error sources include measurement methods, laboratory sources, pedotransfer functions, database transactions, spatial aggregations, etc. Several databases of measured and predicted soil pH were used for this study, including the United States National Cooperative Soil Survey Characterization Database (NCSS-SCDB) and the US Soil Survey Geographic (SSURGO) Database. The distribution of errors among different sources, from measurement methods to spatial aggregation, showed a wide range of values. The greatest RMSE of 0.79 pH units was from spatial aggregation (SSURGO vs Kriging), while the measurement methods had the lowest RMSE of 0.06 pH units. Assuming the order of data acquisition follows the transaction distance, i.e. from measurement method to spatial aggregation, the RMSE increased from 0.06 to 0.8 pH units, suggesting an "error propagation". This has major implications for practitioners and the modeling community. Most soil liming rate recommendations are based on 0.1 pH unit increments, while the desired soil pH level increments are based on 0.4 to 0.5 pH units. Thus, even when the measured and desired target soil pH are the same, most guidelines recommend 1 ton ha-1 lime, which translates into a cost of 111 ha-1 that the farmer has to factor into the cost-benefit analysis. However, this analysis needs to be based on uncertainty predictions (0.5-1.0 pH units) rather than measurement errors (0.1 pH units), which would translate into an investment of 555-1,111 that needs to be assessed against the risk. The modeling community can benefit from such analyses; however, the error size and spatial distribution for global and regional predictions need to be assessed against the variability of other drivers and the impact on management decisions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van den Heuvel, F; Hackett, S; Fiorini, F
Purpose: Currently, planning systems allow robustness calculations to be performed, but a generalized assessment methodology is not yet available. We introduce and evaluate a methodology to quantify the robustness of a plan on an individual patient basis. Methods: We introduce the notion of characterizing a treatment instance (i.e. one single fraction delivery) by describing the dose distribution within an organ as an alpha-stable distribution. The parameters of the distribution (shape (α), scale (γ), position (δ), and symmetry (β)) will vary continuously (in a mathematical sense) as the distributions change with the different positions. The rate of change of the parameters provides a measure of the robustness of the treatment. The methodology is tested in a planning study of 25 patients with known residual errors at each fraction. Each patient was planned using Eclipse with an IBA proton beam model. The residual error space for every patient was sampled 30 times, yielding 31 treatment plans for each patient and dose distributions in 5 organs. The parameters' change rate as a function of Euclidean distance from the original plan was analyzed. Results: More than 1,000 dose distributions were analyzed. For 4 of the 25 patients the change rate of the scale parameter (γ) was considerably higher than the lowest change rate, indicating a lack of robustness. The sign of the shape change rate (α) also seemed indicative, but the experiment lacked the power to prove significance. Conclusion: There are indications that this robustness measure is a valuable tool to allow a more patient-individualized approach to the determination of margins. In a further study we will also evaluate this robustness measure using photon treatments, and evaluate the impact of using breath-hold techniques and of a Monte Carlo-based dose deposition calculation. A principal component analysis is also planned.
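As a rough illustration of the robustness measure described above, the sketch below (Python, not the authors' implementation) fits an alpha-stable distribution to the organ doses of each sampled plan and estimates how quickly the fitted scale parameter changes with the Euclidean distance of each residual-error sample from the nominal plan; the use of scipy.stats.levy_stable and all variable names are assumptions.

```python
import numpy as np
from scipy.stats import levy_stable

def stable_params(dose_values):
    """Fit an alpha-stable distribution to the voxel doses of one organ for one
    sampled treatment plan; returns (alpha, beta, loc, scale).
    Note: levy_stable.fit can be slow; this is illustrative only."""
    alpha, beta, loc, scale = levy_stable.fit(dose_values)
    return alpha, beta, loc, scale

def scale_change_rate(distances, scales):
    """Least-squares slope of the fitted scale parameter (gamma) against the
    Euclidean distance of each resampled plan from the nominal plan.
    A large slope suggests a less robust plan."""
    distances = np.asarray(distances, dtype=float)
    scales = np.asarray(scales, dtype=float)
    slope, _intercept = np.polyfit(distances, scales, 1)
    return slope
```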
A cascaded coding scheme for error control and its performance analysis
NASA Technical Reports Server (NTRS)
Lin, Shu; Kasami, Tadao; Fujiwara, Tohru; Takata, Toyoo
1986-01-01
A coding scheme is investigated for error control in data communication systems. The scheme is obtained by cascading two error correcting codes, called the inner and outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit error rate epsilon < 1/2. It is shown that if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit error rate. Various specific example schemes, with inner codes ranging from high rates to very low rates and Reed-Solomon codes as outer codes, are considered, and their error probabilities are evaluated. They all provide extremely high reliability even for very high bit error rates. Several example schemes are being considered by NASA for satellite and spacecraft downlink error control.
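The cascaded construction itself is not reproduced here, but a basic building block of such an analysis is the block-error probability of a t-error-correcting code of length n on a binary symmetric channel; the short sketch below is a generic textbook calculation, not taken from the paper.

```python
from math import comb

def bsc_block_error_prob(n, t, eps):
    """Probability that a t-error-correcting block code of length n fails to
    decode on a binary symmetric channel with bit error rate eps:
    P(block error) = sum_{i=t+1..n} C(n, i) * eps^i * (1 - eps)^(n - i)."""
    return sum(comb(n, i) * eps**i * (1 - eps)**(n - i) for i in range(t + 1, n + 1))

# Example: a length-23 code correcting t = 3 errors at eps = 0.01.
print(bsc_block_error_prob(23, 3, 0.01))
```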
Flavour and identification threshold detection overview of Slovak adepts for certified testing.
Vietoris, Vladimir; Barborova, Petra; Jancovicova, Jana; Eliasova, Lucia; Karvaj, Marian
2016-07-01
During the certification process for sensory assessors of the Slovak certification body, we obtained results on basic taste thresholds and lifestyle habits. A total of 500 adults with a food industry background were screened during the experiment. For the analysis of basic and non-basic tastes, we used the standardized procedure of ISO 8586-1:1993. In the flavour test, the 26-35-year-old group produced the lowest error ratio (1.438) and the 56+ group the highest (2.0). By gender, the average error value was 1.510 for women compared with 1.477 for men. People with allergies had an average error ratio of 1.437, compared with 1.511 for people without allergies. Non-smokers produced fewer errors (1.484) than smokers (1.576). Another flavour threshold identification test detected differences among age groups (values increased with age). Metallic taste accounted for the highest share of errors for both men (24%) and women (22%). Men made more errors than women in salty taste (19% vs. 10%). The analysis detected some differences between the allergic/non-allergic and smoker/non-smoker groups.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bianco, Laura; Friedrich, Katja; Wilczak, James M.
To assess current remote-sensing capabilities for wind energy applications, a remote-sensing system evaluation study, called XPIA (eXperimental Planetary boundary layer Instrument Assessment), was held in the spring of 2015 at NOAA's Boulder Atmospheric Observatory (BAO) facility. Several remote-sensing platforms were evaluated to determine their suitability for the verification and validation processes used to test the accuracy of numerical weather prediction models. The evaluation of these platforms was performed with respect to well-defined reference systems: the BAO's 300 m tower equipped at six levels (50, 100, 150, 200, 250, and 300 m) with 12 sonic anemometers and six temperature (T) and relative humidity (RH) sensors; and approximately 60 radiosonde launches. In this study we first employ these reference measurements to validate temperature profiles retrieved by two co-located microwave radiometers (MWRs) as well as virtual temperature (Tv) measured by co-located wind profiling radars equipped with radio acoustic sounding systems (RASSs). Results indicate a mean absolute error (MAE) in the temperature retrieved by the microwave radiometers below 1.5 K in the lowest 5 km of the atmosphere and a mean absolute error in the virtual temperature measured by the radio acoustic sounding systems below 0.8 K in the layer of the atmosphere covered by these measurements (up to approximately 1.6-2 km). We also investigated the benefit of the vertical velocity correction applied to the speed of sound before computing the virtual temperature by the radio acoustic sounding systems. We find that using this correction frequently increases the RASS error, and that it should not be routinely applied to all data. Water vapor density (WVD) profiles measured by the MWRs were also compared with similar measurements from the soundings, showing the capability of MWRs to follow the vertical profile measured by the sounding and finding a mean absolute error below 0.5 g m-3 in the lowest 5 km of the atmosphere. However, the relative humidity profiles measured by the microwave radiometer lack the high-resolution details available from radiosonde profiles. Furthermore, an encouraging and significant finding of this study was that the coefficient of determination between the lapse rate measured by the microwave radiometer and the tower measurements over the tower levels between 50 and 300 m ranged from 0.76 to 0.91, proving that these remote-sensing instruments can provide accurate information on atmospheric stability conditions in the lower boundary layer.
NASA Astrophysics Data System (ADS)
Bianco, Laura; Friedrich, Katja; Wilczak, James M.; Hazen, Duane; Wolfe, Daniel; Delgado, Ruben; Oncley, Steven P.; Lundquist, Julie K.
2017-05-01
To assess current remote-sensing capabilities for wind energy applications, a remote-sensing system evaluation study, called XPIA (eXperimental Planetary boundary layer Instrument Assessment), was held in the spring of 2015 at NOAA's Boulder Atmospheric Observatory (BAO) facility. Several remote-sensing platforms were evaluated to determine their suitability for the verification and validation processes used to test the accuracy of numerical weather prediction models. The evaluation of these platforms was performed with respect to well-defined reference systems: the BAO's 300 m tower equipped at six levels (50, 100, 150, 200, 250, and 300 m) with 12 sonic anemometers and six temperature (T) and relative humidity (RH) sensors; and approximately 60 radiosonde launches. In this study we first employ these reference measurements to validate temperature profiles retrieved by two co-located microwave radiometers (MWRs) as well as virtual temperature (Tv) measured by co-located wind profiling radars equipped with radio acoustic sounding systems (RASSs). Results indicate a mean absolute error (MAE) in the temperature retrieved by the microwave radiometers below 1.5 K in the lowest 5 km of the atmosphere and a mean absolute error in the virtual temperature measured by the radio acoustic sounding systems below 0.8 K in the layer of the atmosphere covered by these measurements (up to approximately 1.6-2 km). We also investigated the benefit of the vertical velocity correction applied to the speed of sound before computing the virtual temperature by the radio acoustic sounding systems. We find that using this correction frequently increases the RASS error, and that it should not be routinely applied to all data. Water vapor density (WVD) profiles measured by the MWRs were also compared with similar measurements from the soundings, showing the capability of MWRs to follow the vertical profile measured by the sounding and finding a mean absolute error below 0.5 g m-3 in the lowest 5 km of the atmosphere. However, the relative humidity profiles measured by the microwave radiometer lack the high-resolution details available from radiosonde profiles. An encouraging and significant finding of this study was that the coefficient of determination between the lapse rate measured by the microwave radiometer and the tower measurements over the tower levels between 50 and 300 m ranged from 0.76 to 0.91, proving that these remote-sensing instruments can provide accurate information on atmospheric stability conditions in the lower boundary layer.
[Detection and classification of medication errors at Joan XXIII University Hospital].
Jornet Montaña, S; Canadell Vilarrasa, L; Calabuig Muñoz, M; Riera Sendra, G; Vuelta Arce, M; Bardají Ruiz, A; Gallart Mora, M J
2004-01-01
Medication errors are multifactorial and multidisciplinary, and may originate in processes such as drug prescription, transcription, dispensation, preparation and administration. The goal of this work was to measure the incidence of detectable medication errors that arise within a unit dose drug distribution and control system, from drug prescription to drug administration, by means of an observational method confined to the Pharmacy Department, as well as a voluntary, anonymous report system. The acceptance of this voluntary report system's implementation was also assessed. A prospective descriptive study was conducted. Data collection was performed at the Pharmacy Department from a review of prescribed medical orders, a review of pharmaceutical transcriptions, a review of dispensed medication and a review of medication returned in unit dose medication carts. A voluntary, anonymous report system centralized in the Pharmacy Department was also set up to detect medication errors. Prescription errors were the most frequent (1.12%), closely followed by dispensation errors (1.04%). Transcription errors (0.42%) and administration errors (0.69%) had the lowest overall incidence. Voluntary report involved only 4.25% of all detected errors, whereas unit dose medication cart review contributed the most to error detection. Recognizing the incidence and types of medication errors that occur in a health-care setting allows us to analyze their causes and effect changes in different stages of the process in order to ensure maximal patient safety.
Re-evaluating alkenone based CO2 estimates
NASA Astrophysics Data System (ADS)
Pagani, M.
2013-05-01
Multi-million year patterns of ocean temperatures and ice accumulation are relatively consistent with reconstructed CO2 records. Existing records allow for broad statements regarding climate sensitivity, but uncertainties in reconstructions can lead to considerable error. For example, alkenone-based CO2 reconstructions assume that diffusion of CO2aq is the dominant source of inorganic carbon for photosynthesis. However, the concentration of CO2aq is the lowest of all dissolved carbon species, constituting <1% of the total inorganic aqueous pool. This poses a problem for sustaining reasonable algal growth rates because the half-saturation constant for the enzyme Rubisco, the primary carboxylase involved in algal photosynthesis, is commonly higher than the average concentration of seawater CO2aq. That is, the concentration of CO2aq in the modern ocean is too low to maintain adequate reaction rates for Rubisco, and thus, algal growth. In order to maintain algal growth rates, most modern algae have strategies to increase intracellular CO2 concentrations. But, if such strategies were prevalent for alkenone-producing algae in the past, CO2 reconstructions could be compromised. This presentation will assess time periods when carbon-concentration strategies were potentially in play and the consequences for existing CO2 records.
Prediction of road traffic death rate using neural networks optimised by genetic algorithm.
Jafari, Seyed Ali; Jahandideh, Sepideh; Jahandideh, Mina; Asadabadi, Ebrahim Barzegari
2015-01-01
Road traffic injuries (RTIs) are recognised as a main cause of public health problems at global, regional and national levels. Therefore, prediction of the road traffic death rate will be helpful in its management. Based on this, we used an artificial neural network model optimised through a genetic algorithm to predict mortality. In this study, a five-fold cross-validation procedure on a data set containing a total of 178 countries was used to verify the performance of the models. The best-fit model was selected according to the root mean square error (RMSE). The genetic algorithm, which has not previously been applied to mortality prediction to this extent, showed high performance. The lowest RMSE obtained was 0.0808. Such satisfactory results can be attributed to the use of the genetic algorithm as a powerful optimiser that selects the best input feature set to feed into the neural network. Seven factors were identified with high accuracy as the most influential factors on the road traffic mortality rate. The results show that our model is promising and may play a useful role in developing a better method for assessing the influence of road traffic mortality risk factors.
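A minimal sketch of the kind of fitness evaluation such a study implies is given below: the five-fold cross-validated RMSE of a small neural network trained on a candidate feature subset, which a genetic algorithm could minimize. The network size, library calls and variable names are assumptions, not the authors' configuration.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPRegressor

def cv_rmse(X, y, feature_idx, n_splits=5, seed=0):
    """Five-fold cross-validated RMSE of a small neural network trained on the
    feature subset 'feature_idx'; a plausible fitness value for a genetic
    algorithm that searches over feature subsets."""
    X_sub = X[:, feature_idx]
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    errors = []
    for train, test in kf.split(X_sub):
        model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=seed)
        model.fit(X_sub[train], y[train])
        pred = model.predict(X_sub[test])
        errors.append(np.sqrt(np.mean((y[test] - pred) ** 2)))
    return float(np.mean(errors))
```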
45 CFR 98.100 - Error Rate Report.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 45 Public Welfare 1 2013-10-01 2013-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...
45 CFR 98.100 - Error Rate Report.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 45 Public Welfare 1 2014-10-01 2014-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare Department of Health and Human Services GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...
45 CFR 98.100 - Error Rate Report.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 45 Public Welfare 1 2012-10-01 2012-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...
45 CFR 98.100 - Error Rate Report.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 45 Public Welfare 1 2011-10-01 2011-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...
Use of streamflow data to estimate base flow/ground-water recharge for Wisconsin
Gebert, W.A.; Radloff, M.J.; Considine, E.J.; Kennedy, J.L.
2007-01-01
The average annual base flow/recharge was determined for streamflow-gaging stations throughout Wisconsin by base-flow separation. A map of the State was prepared that shows the average annual base flow for the period 1970-99 for watersheds at 118 gaging stations. Trend analysis was performed on 22 of the 118 streamflow-gaging stations that had long-term records and unregulated flow and provided areal coverage of the State. The analysis found that a statistically significant increasing trend was occurring for watersheds where the primary land use was agriculture. Most gaging stations where the land cover was forest had no significant trend. A method to estimate the average annual base flow at ungaged sites was developed by multiple-regression analysis using basin characteristics. The equation with the lowest standard error of estimate, 9.5%, has drainage area, soil infiltration, and base-flow factor as independent variables. To determine the average annual base flow for smaller watersheds, estimates were made at low-flow partial-record stations in 3 of the 12 major river basins in Wisconsin. Regression equations were developed for each of the three major river basins using basin characteristics. Drainage area, soil infiltration, basin storage, and base-flow factor were the independent variables in the regression equations with the lowest standard error of estimate. The standard error of estimate ranged from 17% to 52% for the three river basins. © 2007 American Water Resources Association.
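The regression machinery described above can be sketched as follows; the code estimates ordinary least-squares coefficients for base flow against basin characteristics and reports the standard error of estimate. The variable layout is an assumption, and the study's actual transformations (e.g. logarithms) are not reproduced.

```python
import numpy as np

def regression_standard_error(X, y):
    """Ordinary least-squares fit of base flow (y) on basin characteristics (X,
    one column per characteristic such as drainage area, soil infiltration and
    base-flow factor); returns the coefficients and the standard error of estimate."""
    A = np.column_stack([np.ones(len(y)), X])        # add intercept column
    coef, _, _, _ = np.linalg.lstsq(A, y, rcond=None)
    residuals = y - A @ coef
    dof = len(y) - A.shape[1]                        # n - (p + 1)
    see = np.sqrt(np.sum(residuals ** 2) / dof)      # standard error of estimate
    return coef, see
```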
Kraal, Jos J; Sartor, Francesco; Papini, Gabriele; Stut, Wim; Peek, Niels; Kemps, Hareld Mc; Bonomi, Alberto G
2016-11-01
Accurate assessment of energy expenditure provides an opportunity to monitor physical activity during cardiac rehabilitation. However, the available assessment methods, based on the combination of heart rate (HR) and body movement data, are not applicable for patients using beta-blocker medication. Therefore, we developed an energy expenditure prediction model for beta-blocker-medicated cardiac rehabilitation patients. Sixteen male cardiac rehabilitation patients (age: 55.8 ± 7.3 years, weight: 93.1 ± 11.8 kg) underwent a physical activity protocol with 11 low- to moderate-intensity common daily life activities. Energy expenditure was assessed using a portable indirect calorimeter. HR and body movement data were recorded during the protocol using unobtrusive wearable devices. In addition, patients underwent a symptom-limited exercise test and resting metabolic rate assessment. Energy expenditure estimation models were developed using multivariate regression analyses based on HR and body movement data and/or patient characteristics. In addition, an HR-flex model was developed. The model combining HR and body movement data with patient characteristics showed the highest correlation and lowest error (r2 = 0.84, root mean squared error = 0.834 kcal/minute) with total energy expenditure. The method based on individual calibration data (HR-flex) showed lower accuracy (r2 = 0.83, root mean squared error = 0.992 kcal/minute). Our results show that combining HR and body movement data improves the accuracy of energy expenditure prediction models in cardiac patients, similar to methods that have been developed for healthy subjects. The proposed methodology does not require individual calibration and is based on data that are available in clinical practice. © The European Society of Cardiology 2016.
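A hedged sketch of such a multivariate model is shown below: energy expenditure per minute regressed on heart rate, body movement counts and patient characteristics, with r2 and RMSE reported. The inputs are placeholders; the study's actual predictors and coefficients are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_ee_model(hr, counts, age, weight, ee_per_min):
    """Multivariate regression of minute-by-minute energy expenditure on heart
    rate, accelerometer counts and patient characteristics (placeholder inputs)."""
    X = np.column_stack([hr, counts, age, weight])
    model = LinearRegression().fit(X, ee_per_min)
    pred = model.predict(X)
    r2 = model.score(X, ee_per_min)                             # coefficient of determination
    rmse = float(np.sqrt(np.mean((ee_per_min - pred) ** 2)))    # kcal/minute if inputs are per minute
    return model, r2, rmse
```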
How Many Alternatives Can Be Ranked? A Comparison of the Paired Comparison and Ranking Methods.
Ock, Minsu; Yi, Nari; Ahn, Jeonghoon; Jo, Min-Woo
2016-01-01
To determine the feasibility of converting ranking data into paired comparison (PC) data and suggest the number of alternatives that can be ranked by comparing a PC and a ranking method. Using a total of 222 health states, a household survey was conducted in a sample of 300 individuals from the general population. Each respondent performed a PC 15 times and a ranking method 6 times (two attempts of ranking three, four, and five health states, respectively). The health states of the PC and the ranking method were constructed to overlap each other. We converted the ranked data into PC data and examined the consistency of the response rate. Applying probit regression, we obtained the predicted probability of each method. Pearson correlation coefficients were determined between the predicted probabilities of those methods. The mean absolute error was also assessed between the observed and the predicted values. The overall consistency of the response rate was 82.8%. The Pearson correlation coefficients were 0.789, 0.852, and 0.893 for ranking three, four, and five health states, respectively. The lowest mean absolute error was 0.082 (95% confidence interval [CI] 0.074-0.090) in ranking five health states, followed by 0.123 (95% CI 0.111-0.135) in ranking four health states and 0.126 (95% CI 0.113-0.138) in ranking three health states. After empirically examining the consistency of the response rate between a PC and a ranking method, we suggest that using five alternatives in the ranking method may be superior to using three or four alternatives. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
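Converting a ranking into paired-comparison data is mechanical: each alternative is treated as preferred over every alternative ranked below it. The sketch below illustrates this expansion; it is a generic construction, not the authors' code.

```python
def ranking_to_pairs(ranked_states):
    """Expand one respondent's ranking (best first) into the implied paired
    comparisons: each state is 'preferred' over every state ranked below it,
    so ranking five states yields 10 pairs."""
    return [(winner, loser)
            for i, winner in enumerate(ranked_states)
            for loser in ranked_states[i + 1:]]

# Example: one ranking of three hypothetical health states.
print(ranking_to_pairs(["A", "B", "C"]))   # [('A', 'B'), ('A', 'C'), ('B', 'C')]
```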
An educational and audit tool to reduce prescribing error in intensive care.
Thomas, A N; Boxall, E M; Laha, S K; Day, A J; Grundy, D
2008-10-01
To reduce prescribing errors in an intensive care unit by providing prescriber education in tutorials, ward-based teaching and feedback in 3-monthly cycles with each new group of trainee medical staff. Prescribing audits were conducted three times in each 3-month cycle, once pretraining, once post-training and a final audit after 6 weeks. The audit information was fed back to prescribers with their correct prescribing rates, rates for individual error types and total error rates together with anonymised information about other prescribers' error rates. The percentage of prescriptions with errors decreased over each 3-month cycle (pretraining 25%, 19%, (one missing data point), post-training 23%, 6%, 11%, final audit 7%, 3%, 5% (p<0.0005)). The total number of prescriptions and error rates varied widely between trainees (data collection one; cycle two: range of prescriptions written: 1-61, median 18; error rate: 0-100%; median: 15%). Prescriber education and feedback reduce manual prescribing errors in intensive care.
A Six Sigma Trial For Reduction of Error Rates in Pathology Laboratory.
Tosuner, Zeynep; Gücin, Zühal; Kiran, Tuğçe; Büyükpinarbaşili, Nur; Turna, Seval; Taşkiran, Olcay; Arici, Dilek Sema
2016-01-01
A major target of quality assurance is the minimization of error rates in order to enhance patient safety. Six Sigma is a method used in industry that targets near-zero error (3.4 errors per million events). The five main principles of Six Sigma are defining, measuring, analysing, improving and controlling. Using this methodology, the causes of errors can be examined and process improvement strategies can be identified. The aim of our study was to evaluate the utility of the Six Sigma methodology for error reduction in our pathology laboratory. The errors encountered between April 2014 and April 2015 were recorded by the pathology personnel. Error follow-up forms were examined by the quality control supervisor, the administrative supervisor and the head of the department. Using the Six Sigma methodology, the rate of errors was measured monthly and the distribution of errors across the preanalytical, analytical and postanalytical phases was analysed. Improvement strategies were discussed in the monthly intradepartmental meetings, and units with high error rates were monitored. Fifty-six (52.4%) of the 107 recorded errors were at the preanalytical phase. Forty-five errors (42%) were recorded as analytical and 6 errors (5.6%) as postanalytical. Two of the 45 errors were major, irrevocable errors. The error rate was 6.8 per million in the first half of the year and 1.3 per million in the second half, a decrease of 79.77%. The Six Sigma trial in our pathology laboratory achieved a reduction of error rates, mainly in the preanalytical and analytical phases.
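The per-million error rates and the reported reduction can be reproduced with simple arithmetic, sketched below. The opportunity counts in the example are hypothetical (they are not given in the abstract), and small differences from the published 79.77% come from rounding of the per-million figures.

```python
def errors_per_million(n_errors, n_opportunities):
    """Defect rate scaled to a per-million basis, as used in Six Sigma reporting."""
    return 1_000_000 * n_errors / n_opportunities

def relative_reduction(before, after):
    """Percentage reduction of an error rate between two periods."""
    return 100 * (before - after) / before

print(errors_per_million(3, 440_000))   # hypothetical counts, roughly 6.8 per million
print(relative_reduction(6.8, 1.3))     # roughly 80 percent, close to the reported 79.77%
```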
Data Analysis & Statistical Methods for Command File Errors
NASA Technical Reports Server (NTRS)
Meshkat, Leila; Waggoner, Bruce; Bryant, Larry
2014-01-01
This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors and the number of files radiated, including the number of commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We assessed these data using different curve-fitting and distribution-fitting techniques, such as multiple regression analysis and maximum likelihood estimation, to see how much of the variability in the error rates can be explained by them. We also used goodness-of-fit testing strategies and principal component analysis to further assess the data. Finally, we constructed a model of expected error rates based on what these statistics identified as the critical drivers of the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as to anticipate future error rates.
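One plausible way to model error counts against such covariates is a Poisson regression, sketched below; this is an illustrative stand-in for the multiple-regression and maximum-likelihood modelling the abstract describes, and the covariate names are assumptions.

```python
import numpy as np
import statsmodels.api as sm

def fit_error_rate_model(files_radiated, commands, workload, novelty, n_errors):
    """Poisson regression of command-file error counts on workload-related
    covariates (illustrative stand-in for the modelling described in the abstract)."""
    X = sm.add_constant(np.column_stack([files_radiated, commands, workload, novelty]))
    model = sm.GLM(n_errors, X, family=sm.families.Poisson())
    result = model.fit()
    return result   # result.params gives coefficients, result.summary() a full report
```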
Detecting Signatures of GRACE Sensor Errors in Range-Rate Residuals
NASA Astrophysics Data System (ADS)
Goswami, S.; Flury, J.
2016-12-01
Efforts to reach the accuracy of the GRACE baseline, predicted earlier from design simulations, have been ongoing for a decade. The GRACE error budget is dominated by noise from sensors, dealiasing models and modeling errors. GRACE range-rate residuals contain these errors; their analysis therefore provides insight into the individual contributions to the error budget. Hence, we analyze the range-rate residuals with a focus on the contribution of sensor errors due to mis-pointing and poor ranging performance in GRACE solutions. For the analysis of pointing errors, we consider two different reprocessed attitude datasets with differences in pointing performance. Range-rate residuals are then computed from each of the two datasets and analysed. We further compare the system noise of the four K- and Ka-band frequencies of the two spacecraft with the range-rate residuals. Strong signatures of mis-pointing errors can be seen in the range-rate residuals. Correlations between range frequency noise and range-rate residuals are also seen.
Pajunen, Tuuli; Saranto, Kaija; Lehtonen, Lasse
2016-01-01
Background The rapid expansion in the use of electronic health records (EHR) has increased the number of medical errors originating in health information systems (HIS). The sociotechnical approach helps in understanding risks in the development, implementation, and use of EHR and health information technology (HIT) while accounting for complex interactions of technology within the health care system. Objective This study addresses two important questions: (1) “which of the common EHR error types are associated with perceived high- and extreme-risk severity ratings among EHR users?”, and (2) “which variables are associated with high- and extreme-risk severity ratings?” Methods This study was a quantitative, non-experimental, descriptive study of EHR users. We conducted a cross-sectional web-based questionnaire study at the largest hospital district in Finland. Statistical tests included the reliability of the summative scales tested with Cronbach’s alpha. Logistic regression served to assess the association of the independent variables to each of the eight risk factors examined. Results A total of 2864 eligible respondents provided the final data. Almost half of the respondents reported a high level of risk related to the error type “extended EHR unavailability”. The lowest overall risk level was associated with “selecting incorrectly from a list of items”. In multivariate analyses, profession and clinical unit proved to be the strongest predictors for high perceived risk. Physicians perceived risk levels to be the highest (P<.001 in six of eight error types), while emergency departments, operating rooms, and procedure units were associated with higher perceived risk levels (P<.001 in four of eight error types). Previous participation in eLearning courses on EHR-use was associated with lower risk for some of the risk factors. Conclusions Based on a large number of Finnish EHR users in hospitals, this study indicates that HIT safety hazards should be taken very seriously, particularly in operating rooms, procedure units, emergency departments, and intensive care units/critical care units. Health care organizations should use proactive and systematic assessments of EHR risks before harmful events occur. An EHR training program should be compulsory for all EHR users in order to address EHR safety concerns resulting from the failure to use HIT appropriately. PMID:27154599
LPTA versus Tradeoff: Analysis of Contract Source Selection Strategies and Performance Outcomes
2016-06-15
Contracting professionals may use lowest price technically acceptable (LPTA) or tradeoff source selection methodologies to acquire what the DOD needs.
A cascaded coding scheme for error control and its performance analysis
NASA Technical Reports Server (NTRS)
Lin, S.
1986-01-01
A coding scheme for error control in data communication systems is investigated. The scheme is obtained by cascading two error correcting codes, called the inner and the outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit error rate epsilon < 1/2. It is shown that, if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit error rate. Various specific example schemes with inner codes ranging from high rates to very low rates and Reed-Solomon codes are considered, and their error probabilities are evaluated. They all provide extremely high reliability even for very high bit error rates, say 0.1 to 0.01. Several example schemes are being considered by NASA for satellite and spacecraft downlink error control.
Secure and Privacy Enhanced Gait Authentication on Smart Phone
Choi, Deokjai
2014-01-01
Smart environments established by the development of mobile technology have brought vast benefits to human beings. However, authentication mechanisms on portable smart devices, particularly conventional biometric-based approaches, still raise security and privacy concerns. These traditional systems are mostly based on pattern recognition and machine learning algorithms, wherein original biometric templates or extracted features are stored in unconcealed form for matching against a new biometric sample in the authentication phase. In this paper, we propose a novel gait-based authentication using a biometric cryptosystem to enhance system security and user privacy on the smart phone. Extracted gait features are merely used to biometrically encrypt a cryptographic key, which acts as the authentication factor. Gait signals are acquired using an inertial sensor (accelerometer) in the mobile device, and error correcting codes are adopted to deal with the natural variation of gait measurements. We evaluate our proposed system on a dataset consisting of gait samples of 34 volunteers. We achieved the lowest false acceptance rate (FAR) and false rejection rate (FRR) of 3.92% and 11.76%, respectively, for a key length of 50 bits. PMID:24955403
Insam, Charlène; Paccaud, Fred; Marques-Vidal, Pedro
2014-05-01
In Switzerland, health policies are decided at the local level, but little is known regarding their impact on the management of acute myocardial infarction (AMI). In this study, we assessed geographical differences within Switzerland regarding management of AMI. Cross-sectional study. Swiss hospital discharge database for period 2007-2008 (26,204 discharges from AMI). Seven Swiss regions (Leman, Mittelland, Northwest, Zurich, Central, Eastern, and Ticino) were analysed. Almost 53.7% of discharges from AMI were managed in a single hospital, ranging from 62.1% (Leman) to 31.6% (Ticino). The highest intensive care unit admission rate was in Leman (69.4%), the lowest (16.9%) in Ticino (Swiss average: 36.0%). Intracoronary revascularization rates were highest in Leman (51.1%) and lowest (30.9%) in Central Switzerland (average: 41.0%). Bare (non-drug-eluting) stent use was highest in Leman (61.4%) and lowest (16.9%) in Ticino (average: 42.1%), while drug-eluting stent use was highest (83.2%) in Ticino and lowest (38.6%) in Leman (average: 57.9%). Coronary artery bypass graft rates were highest (4.8%) in Ticino and lowest (0.5%) in Eastern Switzerland (average: 2.8%). Mechanical circulatory assistance rates were highest (4.2%) in Zurich and lowest (0.5%) in Ticino (average: 1.8%). The differences remained after adjusting for age, single or multiple hospital management, and gender. In Switzerland, significant geographical differences in management and revascularization procedures for AMI were found.
Evaluation of recent GRACE monthly solution series with an ice sheet perspective
NASA Astrophysics Data System (ADS)
Horwath, Martin; Groh, Andreas
2016-04-01
GRACE monthly global gravity field solutions have undergone a remarkable evolution, leading to the latest (Release 5) series by CSR, GFZ, and JPL, to new series by other processing centers, such as ITSG and AIUB, as well as to efforts to derive combined solutions, particularly by the EGSIEM (European Gravity Service for Improved Emergency Management) project. For applications, such as GRACE inferences on ice sheet mass balance, the obvious question is on what GRACE solution series to base the assessment. Here we evaluate different GRACE solution series (including the ones listed above) in a unified framework. We concentrate on solutions expanded up to degree 90 or higher, since this is most appropriate for polar applications. We empirically assess the error levels in the spectral as well as in the spatial domain based on the month-to-month scatter in the high spherical harmonic degrees. We include empirical assessment of error correlations. We then apply all series to infer Antarctic and Greenland mass change time series and compare the results in terms of apparent signal content and noise level. We find that the ITSG solutions show lowest noise level in the high degrees (above 60). A preliminary combined solution from the EGSIEM project shows lowest noise in the degrees below 60. This virtue maps into the derived ice mass time series, where the EGSIEM-based results show the lowest noise in most cases. Meanwhile, there is no indication that any of the considered series systematically dampens actual geophysical signals.
A Simple Exact Error Rate Analysis for DS-CDMA with Arbitrary Pulse Shape in Flat Nakagami Fading
NASA Astrophysics Data System (ADS)
Rahman, Mohammad Azizur; Sasaki, Shigenobu; Kikuchi, Hisakazu; Harada, Hiroshi; Kato, Shuzo
A simple exact error rate analysis is presented for random binary direct-sequence code division multiple access (DS-CDMA) considering a general pulse shape and a flat Nakagami fading channel. First, a simple model is developed for the multiple access interference (MAI). Based on this, a simple exact expression for the characteristic function (CF) of the MAI is derived in a straightforward manner. Finally, an exact expression for the error rate is obtained following the CF method of error rate analysis. The exact error rate so obtained can be evaluated much more easily than the only reliable approximate error rate expression currently available, which is based on the Improved Gaussian Approximation (IGA).
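The paper's exact expression is not reproduced here, but the general characteristic-function route from a decision variable to an error probability (Gil-Pelaez inversion) can be sketched numerically as below, checked against the Gaussian closed form.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def error_prob_from_cf(cf, upper=200.0):
    """Gil-Pelaez inversion: P(D < 0) = 1/2 - (1/pi) * int_0^inf Im[cf(w)]/w dw,
    i.e. the probability that the decision variable D falls in the wrong
    decision region (an error when the correct region is D > 0)."""
    integrand = lambda w: np.imag(cf(w)) / w
    val, _ = quad(integrand, 1e-9, upper, limit=500)
    return 0.5 - val / np.pi

# Sanity check with a Gaussian decision variable D ~ N(mu, sigma^2).
mu, sigma = 1.0, 0.7
cf_gauss = lambda w: np.exp(1j * w * mu - 0.5 * (sigma * w) ** 2)
print(error_prob_from_cf(cf_gauss), norm.cdf(-mu / sigma))   # the two values should agree
```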
Evaluation of nine popular de novo assemblers in microbial genome assembly.
Forouzan, Esmaeil; Maleki, Masoumeh Sadat Mousavi; Karkhane, Ali Asghar; Yakhchali, Bagher
2017-12-01
Next generation sequencing (NGS) technologies are revolutionizing biology, with Illumina being the most popular NGS platform. Short-read assembly is a critical part of most genome studies using NGS. Hence, in this study, the performance of nine well-known assemblers was evaluated in the assembly of seven different microbial genomes. The effects of different read coverages and k-mer parameters on the quality of the assembly were also evaluated on both simulated and actual read datasets. Our results show that the performance of assemblers on real and simulated datasets can differ significantly, mainly because of coverage bias. According to the outputs on actual read datasets, for all studied read coverages (7×, 25× and 100×), SPAdes and IDBA-UD clearly outperformed the other assemblers based on NGA50 and accuracy metrics. Velvet is the most conservative assembler, with the lowest NGA50 and error rate. Copyright © 2017. Published by Elsevier B.V.
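NGA50 requires aligning contigs to the reference, which is beyond a short sketch, but its unaligned counterpart N50 is easy to illustrate; the helper below is a generic definition, not taken from any particular assembler's tooling.

```python
def n50(contig_lengths):
    """N50: the length L such that contigs of length >= L cover at least half of
    the total assembly length. NGA50 is the analogous statistic computed on
    reference-aligned blocks, which needs an aligner and is not shown here."""
    lengths = sorted(contig_lengths, reverse=True)
    half = sum(lengths) / 2.0
    running = 0
    for length in lengths:
        running += length
        if running >= half:
            return length

print(n50([100, 200, 300, 400, 500]))   # 400
```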
Auger recombination in Dirac materials: A tangle of many-body effects
NASA Astrophysics Data System (ADS)
Alymov, Georgy; Vyurkov, Vladimir; Ryzhii, Victor; Satou, Akira; Svintsov, Dmitry
2018-05-01
The peculiar electron dispersion in Dirac materials makes lowest-order Auger processes prohibited or marginally prohibited by energy and momentum conservation laws. Thus, Auger recombination (AR) in these materials is very sensitive to many-body effects. We incorporate them at the level of the GW approximation into the nonequilibrium Green's functions approach to AR and study the role of dynamic screening, spectrum broadening, and renormalization in the case of weakly pumped undoped graphene. We find that incorrect treatment of many-body effects can lead to an order-of-magnitude error in the recombination rate. We show that the AR time depends weakly (sublinearly) on the background dielectric constant, which limits the possibility to control recombination by the choice of substrate. However, the AR time can be considerably prolonged by placing graphene under a metal gate or by introducing a band gap. With carrier cooling taken into account, our results comply with experiments on photoexcited graphene.
Effect of bar-code technology on the safety of medication administration.
Poon, Eric G; Keohane, Carol A; Yoon, Catherine S; Ditmore, Matthew; Bane, Anne; Levtzion-Korach, Osnat; Moniz, Thomas; Rothschild, Jeffrey M; Kachalia, Allen B; Hayes, Judy; Churchill, William W; Lipsitz, Stuart; Whittemore, Anthony D; Bates, David W; Gandhi, Tejal K
2010-05-06
Serious medication errors are common in hospitals and often occur during order transcription or administration of medication. To help prevent such errors, technology has been developed to verify medications by incorporating bar-code verification technology within an electronic medication-administration system (bar-code eMAR). We conducted a before-and-after, quasi-experimental study in an academic medical center that was implementing the bar-code eMAR. We assessed rates of errors in order transcription and medication administration on units before and after implementation of the bar-code eMAR. Errors that involved early or late administration of medications were classified as timing errors and all others as nontiming errors. Two clinicians reviewed the errors to determine their potential to harm patients and classified those that could be harmful as potential adverse drug events. We observed 14,041 medication administrations and reviewed 3082 order transcriptions. Observers noted 776 nontiming errors in medication administration on units that did not use the bar-code eMAR (an 11.5% error rate) versus 495 such errors on units that did use it (a 6.8% error rate)--a 41.4% relative reduction in errors (P<0.001). The rate of potential adverse drug events (other than those associated with timing errors) fell from 3.1% without the use of the bar-code eMAR to 1.6% with its use, representing a 50.8% relative reduction (P<0.001). The rate of timing errors in medication administration fell by 27.3% (P<0.001), but the rate of potential adverse drug events associated with timing errors did not change significantly. Transcription errors occurred at a rate of 6.1% on units that did not use the bar-code eMAR but were completely eliminated on units that did use it. Use of the bar-code eMAR substantially reduced the rate of errors in order transcription and in medication administration as well as potential adverse drug events, although it did not eliminate such errors. Our data show that the bar-code eMAR is an important intervention to improve medication safety. (ClinicalTrials.gov number, NCT00243373.) 2010 Massachusetts Medical Society
Development and implementation of a human accuracy program in patient foodservice.
Eden, S H; Wood, S M; Ptak, K M
1987-04-01
For many years, industry has utilized the concept of human error rates to monitor and minimize human errors in the production process. A consistent, quality-controlled product increases consumer satisfaction and repeat purchases. Administrative dietitians have applied human error rates (the number of errors divided by the number of opportunities for error) at four hospitals, with a total bed capacity of 788, within a tertiary-care medical center. The human error rate was used to monitor and evaluate trayline employee performance and to evaluate the layout and tasks of trayline stations, in addition to evaluating employees in patient service areas. Long-term employees initially opposed the error rate system with some hostility and resentment, while newer employees accepted the system. All employees now believe that the constant feedback given by supervisors enhances their self-esteem and productivity. Employee error rates are monitored daily and are used to counsel employees when necessary; they are also utilized during annual performance evaluations. Average daily error rates for a facility staffed by new employees decreased from 7% to an acceptable 3%. In a facility staffed by long-term employees, the error rate increased, reflecting improper error documentation. Patient satisfaction surveys show that satisfaction with tray accuracy increased from 88% to 92% in the facility staffed by long-term employees and has remained above the 90% standard in the facility staffed by new employees.
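The error-rate definition quoted above translates directly into a one-line helper; the tray counts in the example are hypothetical and only illustrate the 3% level cited in the abstract.

```python
def human_error_rate(n_errors, n_opportunities):
    """Error rate as defined in the abstract: errors divided by opportunities
    for error, expressed as a percentage."""
    return 100.0 * n_errors / n_opportunities

# Hypothetical volumes: 21 tray errors out of 700 tray-assembly opportunities.
print(human_error_rate(21, 700))   # 3.0 percent
```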
NASA Astrophysics Data System (ADS)
Bang, Jeongho; Lee, Seung-Woo; Lee, Chang-Woo; Jeong, Hyunseok
2015-01-01
We propose a quantum algorithm to obtain the lowest eigenstate of any Hamiltonian simulated by a quantum computer. The proposed algorithm begins with an arbitrary initial state of the simulated system. A finite series of transforms is iteratively applied to the initial state, assisted by an ancillary qubit. The fraction of the lowest eigenstate in the initial state is then amplified up to 1. We prove in the theoretical analysis that our algorithm can faithfully work for any arbitrary Hamiltonian. Numerical analyses are also carried out. We first provide a numerical proof-of-principle demonstration with a simple Hamiltonian in order to compare our scheme with the so-called "Demon-like algorithmic cooling (DLAC)", recently proposed in Xu (Nat Photonics 8:113, 2014). The result shows good agreement with our theoretical analysis, exhibiting behavior comparable to the best 'cooling' obtained with the DLAC method. We then consider a random Hamiltonian model for further analysis of our algorithm. By numerical simulations, we show that the total number of iterations is proportional to a quantity determined by the difference between the two lowest eigenvalues and by an error defined as the probability that the finally obtained system state is in an unexpected (i.e., not the lowest) eigenstate.
Bertolini, F; Galimberti, G; Schiavo, G; Mastrangelo, S; Di Gerlando, R; Strillacci, M G; Bagnato, A; Portolano, B; Fontanesi, L
2018-01-01
Commercial single nucleotide polymorphism (SNP) arrays have recently been developed for several species and can be used to identify informative markers to differentiate breeds or populations for several downstream applications. To identify the most discriminating genetic markers among thousands of genotyped SNPs, a few statistical approaches have been proposed. In this work, we compared several methods of SNP preselection (Delta, FST and principal component analysis (PCA)) in combination with Random Forest classification to analyse SNP data from six dairy cattle breeds, including cosmopolitan breeds (Holstein, Brown and Simmental) and autochthonous Italian breeds raised in two different regions and subjected to limited or no breeding programmes (Cinisara and Modicana, raised only in Sicily, and Reggiana, raised only in Emilia Romagna). From these classifications, two panels of 96 and 48 SNPs containing the most discriminant SNPs were created for each preselection method. These panels were evaluated in terms of the ability to discriminate as a whole and breed-by-breed, as well as the linkage disequilibrium within each panel. The results showed that for the 48-SNP panels, the error rate increased mainly for the autochthonous breeds, probably as a consequence of their admixed origin, lower selection pressure and ascertainment bias in the construction of the SNP chip. The 96-SNP panels were generally more able to discriminate all breeds. The panel derived by PCA-chrom (obtained by a chromosome-by-chromosome preselection) identified informative SNPs that were particularly useful for the assignment of the minor breeds, reaching the lowest out-of-bag (OOB) error even in the Cinisara, for which the error was quite high with all other panels. Moreover, this panel also contained the lowest number of SNPs in linkage disequilibrium. Several selected SNPs are located near genes affecting breed-specific phenotypic traits (coat colour and stature) or associated with production traits. In general, our results demonstrate the usefulness of Random Forest in combination with other reduction techniques to identify population-informative SNPs.
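A hedged sketch of how Random Forest importances and out-of-bag (OOB) error could drive the selection of a 96-SNP panel is given below; the genotype coding, forest size and variable names are assumptions rather than the authors' pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def select_snp_panel(genotypes, breeds, panel_size=96, seed=0):
    """Rank SNPs by Random Forest importance and report the OOB error of a
    forest refit on the selected panel. 'genotypes' is an (animals x SNPs)
    matrix coded 0/1/2; 'breeds' holds the breed labels."""
    rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=seed)
    rf.fit(genotypes, breeds)
    top = np.argsort(rf.feature_importances_)[::-1][:panel_size]   # most informative SNPs
    rf_panel = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=seed)
    rf_panel.fit(genotypes[:, top], breeds)
    return top, 1.0 - rf_panel.oob_score_   # selected SNP indices, OOB error of the panel
```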
Liboiron, Max; Liboiron, France; Wells, Emily; Richárd, Natalie; Zahara, Alexander; Mather, Charles; Bradshaw, Hillary; Murichi, Judyannet
2016-12-15
Marine microplastics are a contaminant of concern because their small size allows ingestion by a wide range of marine life. Using citizen science during the Newfoundland recreational cod fishery, we sampled 205 Atlantic cod (Gadus morhua) destined for human consumption and found that 5 had eaten plastic, an ingestion prevalence rate of 2.4%. This ingestion rate for Atlantic cod is the second lowest recorded rate in the reviewed published literature (the lowest is 1.4%), and the lowest for any fish in the North Atlantic. This is the first report for plastic ingestion in fish in Newfoundland, Canada, a province dependent on fish for sustenance and livelihoods. Copyright © 2016 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aristophanous, M; Court, L
Purpose: Despite daily image guidance, setup uncertainties can be high when treating large areas of the body. The aim of this study was to measure local uncertainties inside the PTV for patients receiving IMRT to the mediastinum region. Methods: Eleven lymphoma patients who received radiotherapy (breath-hold) to the mediastinum were included in this study. The treated region could range all the way from the neck to the diaphragm. Each patient had a CT scan with a CT-on-rails system prior to every treatment. The entire PTV region was matched to the planning CT using automatic rigid registration. The PTV was then split into five regions: neck, supraclavicular, superior mediastinum, upper heart, and lower heart. Additional auto-registrations for each of the five local PTV regions were performed. The residual local setup errors were calculated as the difference between the final global PTV position and the individual final local PTV positions in the AP, SI and RL directions. For each patient, four CT scans were analyzed (one per week of treatment). Results: The residual mean group error (M) and standard deviation of the inter-patient (or systematic) error (Σ) were lowest in the RL direction of the superior mediastinum (0.0 mm and 0.5 mm) and highest in the RL direction of the lower heart (3.5 mm and 2.9 mm). The standard deviation of the inter-fraction (or random) error (σ) was lowest in the RL direction of the superior mediastinum (0.5 mm) and highest in the SI direction of the lower heart (3.9 mm). The directionality of local uncertainties is important; a superior residual error in the lower heart, for example, keeps it within the global PTV. Conclusion: There is a complex relationship between breath-holding and positioning uncertainties that needs further investigation. Residual setup uncertainties can be significant even under daily CT image guidance when treating large regions of the body.
NASA Astrophysics Data System (ADS)
Harudin, N.; Jamaludin, K. R.; Muhtazaruddin, M. Nabil; Ramlie, F.; Muhamad, Wan Zuki Azman Wan
2018-03-01
The T-Method is one of the techniques governed by the Mahalanobis-Taguchi System, developed specifically for multivariate prediction. Prediction using the T-Method is possible even with very limited sample sizes. Users of the T-Method are required to clearly understand the trend of the population data, since the method does not account for the effect of outliers. Outliers may cause apparent non-normality, under which the classical methods break down. Robust parameter estimates exist that provide satisfactory results when the data contain outliers as well as when they are free of them; among these are the location and scale estimators called Shamos-Bickel (SB) and Hodges-Lehmann (HL), which can be used in place of the classical mean and standard deviation. Embedding these into the normalization stage of the T-Method may help enhance the accuracy of the T-Method as well as characterize its robustness. However, the results of the higher-sample-size case study show that the T-Method has the lowest average error percentage (3.09%) on data with extreme outliers, while HL and SB have the lowest error percentage (4.67%) for data without extreme outliers, with only a minimal difference from the T-Method. The trend in prediction error percentages is reversed for the lower-sample-size case study. The results show that with a minimum sample size, where the risk from outliers is always low, the T-Method performs better, while with a higher sample size and extreme outliers the T-Method also shows better predictions than the alternatives. For the case studies conducted in this research, normalization with the T-Method gives satisfactory results, and it is not worthwhile to adapt HL and SB, or the normal mean and standard deviation, into it, since they change the percentage errors only minimally. Normalization using the T-Method is still considered to carry lower risk with respect to the effect of outliers.
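The two robust estimators named above have standard definitions, sketched below together with the drop-in normalization they enable; the pairing convention for the Hodges-Lehmann estimator and the omission of consistency constants are choices of this sketch, not the paper's.

```python
import numpy as np
from itertools import combinations

def hodges_lehmann_location(x):
    """Median of the Walsh averages (pairwise means, here including each point
    paired with itself): a robust analogue of the mean."""
    x = np.asarray(x, dtype=float)
    pair_means = [(a + b) / 2.0 for a, b in combinations(x, 2)]
    return float(np.median(np.concatenate([x, pair_means])))

def shamos_scale(x):
    """Median of all pairwise absolute differences: a robust analogue of the
    standard deviation (often multiplied by about 1.048 for Gaussian consistency)."""
    x = np.asarray(x, dtype=float)
    diffs = [abs(a - b) for a, b in combinations(x, 2)]
    return float(np.median(diffs))

def robust_normalize(x):
    """Drop-in replacement for (x - mean) / std in the normalization stage."""
    x = np.asarray(x, dtype=float)
    return (x - hodges_lehmann_location(x)) / shamos_scale(x)
```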
Wavefront Sensing Analysis of Grazing Incidence Optical Systems
NASA Technical Reports Server (NTRS)
Rohrbach, Scott; Saha, Timo
2012-01-01
Wavefront sensing is a process by which optical system errors are deduced from the aberrations in the image of an ideal source. The method has been used successfully in near-normal incidence, but not for grazing incidence systems. This innovation highlights the ability to examine out-of-focus images from grazing incidence telescopes (typically operating in the x-ray wavelengths, but integrated using optical wavelengths) and determine the lower-order deformations. This is important because as a metrology tool, this method would allow the integration of high angular resolution optics without the use of normal incidence interferometry, which requires direct access to the front surface of each mirror. Measuring the surface figure of mirror segments in a highly nested x-ray telescope mirror assembly is difficult due to the tight packing of elements and blockage of all but the innermost elements to normal incidence light. While this can be done on an individual basis in a metrology mount, once the element is installed and permanently bonded into the assembly, it is impossible to verify the figure of each element and ensure that the necessary imaging quality will be maintained. By examining on-axis images of an ideal point source, one can gauge the low-order figure errors of individual elements, even when integrated into an assembly. This technique is known as wavefront sensing (WFS). By shining collimated light down the optical axis of the telescope and looking at out-of-focus images, the blur due to low-order figure errors of individual elements can be seen, and the figure error necessary to produce that blur can be calculated. The method avoids the problem of requiring normal incidence access to the surface of each mirror segment. Mirror figure errors span a wide range of spatial frequencies, from the lowest-order bending to the highest order micro-roughness. While all of these can be measured in normal incidence, only the lowest-order contributors can be determined through this WFS technique.
Reliability of Semi-Automated Segmentations in Glioblastoma.
Huber, T; Alber, G; Bette, S; Boeckh-Behrens, T; Gempt, J; Ringel, F; Alberts, E; Zimmer, C; Bauer, J S
2017-06-01
In glioblastoma, quantitative volumetric measurements of contrast-enhancing or fluid-attenuated inversion recovery (FLAIR) hyperintense tumor compartments are needed for an objective assessment of therapy response. The aim of this study was to evaluate the reliability of a semi-automated, region-growing segmentation tool for determining tumor volume in patients with glioblastoma among different users of the software. A total of 320 segmentations of tumor-associated FLAIR changes and contrast-enhancing tumor tissue were performed by different raters (neuroradiologists, medical students, and volunteers). All patients underwent high-resolution magnetic resonance imaging including a 3D-FLAIR and a 3D-MPRAGE sequence. Segmentations were done using a semi-automated, region-growing segmentation tool. Intra- and inter-rater reliability were addressed by intra-class correlation (ICC). Root-mean-square error (RMSE) was used to determine the precision error. The Dice score was calculated to measure the overlap between segmentations. Semi-automated segmentation showed a high ICC (>0.985) for all groups, indicating excellent intra- and inter-rater reliability. Significantly smaller precision errors and higher Dice scores were observed for FLAIR segmentations compared with segmentations of contrast enhancement. Single-rater segmentations showed the lowest RMSE for FLAIR of 3.3% (MPRAGE: 8.2%). Both single raters and neuroradiologists had the lowest precision error for longitudinal evaluation of FLAIR changes. Semi-automated volumetry of glioblastoma was reliably performed by all groups of raters, even without neuroradiologic expertise. Interestingly, segmentations of tumor-associated FLAIR changes were more reliable than segmentations of contrast enhancement. In longitudinal evaluations, an experienced rater can reliably detect progressive FLAIR changes of less than 15% in a quantitative way, which could help detect progressive disease earlier.
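As an editorial illustration of the two agreement metrics reported above, here is a minimal sketch of a Dice overlap between binary masks and a root-mean-square precision error across repeated volume measurements; the array names and the RMS-of-CV formulation are assumptions, not necessarily the study's exact implementation.

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks."""
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def precision_error_pct(volumes):
    """Root-mean-square precision error (%) from repeated volume measurements:
    rows = lesions, columns = repeated segmentations of the same lesion."""
    v = np.asarray(volumes, float)
    cv = v.std(axis=1, ddof=1) / v.mean(axis=1) * 100.0  # per-lesion CV in %
    return float(np.sqrt(np.mean(cv ** 2)))              # RMS across lesions
```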
The influence of the structure and culture of medical group practices on prescription drug errors.
Kralewski, John E; Dowd, Bryan E; Heaton, Alan; Kaissi, Amer
2005-08-01
This project was designed to identify the magnitude of prescription drug errors in medical group practices and to explore the influence of the practice structure and culture on those error rates. Seventy-eight practices serving an upper Midwest managed care (Care Plus) plan during 2001 were included in the study. Using Care Plus claims data, prescription drug error rates were calculated at the enrollee level and then were aggregated to the group practice that each enrollee selected to provide and manage their care. Practice structure and culture data were obtained from surveys of the practices. Data were analyzed using multivariate regression. Both the culture and the structure of these group practices appear to influence prescription drug error rates. Seeing more patients per clinic hour, more prescriptions per patient, and being cared for in a rural clinic were all strongly associated with more errors. Conversely, having a case manager program is strongly related to fewer errors in all of our analyses. The culture of the practices clearly influences error rates, but the findings are mixed. Practices with cohesive cultures have lower error rates but, contrary to our hypothesis, cultures that value physician autonomy and individuality also have lower error rates than those with a more organizational orientation. Our study supports the contention that there are a substantial number of prescription drug errors in the ambulatory care sector. Even by the strictest definition, there were about 13 errors per 100 prescriptions for Care Plus patients in these group practices during 2001. Our study demonstrates that the structure of medical group practices influences prescription drug error rates. In some cases, this appears to be a direct relationship, such as the effects of having a case manager program on fewer drug errors, but in other cases the effect appears to be indirect through the improvement of drug prescribing practices. An important aspect of this study is that it provides insights into the relationships of the structure and culture of medical group practices and prescription drug errors and provides direction for future research. Research focused on the factors influencing the high error rates in rural areas and how the interaction of practice structural and cultural attributes influence error rates would add important insights into our findings. For medical practice directors, our data show that they should focus on patient care coordination to reduce errors.
Pulse oximetry-derived respiratory rate in general care floor patients.
Addison, Paul S; Watson, James N; Mestek, Michael L; Ochs, James P; Uribe, Alberto A; Bergese, Sergio D
2015-02-01
Respiratory rate is recognized as a clinically important parameter for monitoring respiratory status on the general care floor (GCF). Currently, intermittent manual assessment of respiratory rate is the standard of care on the GCF. This technique has several clinically-relevant shortcomings, including the following: (1) it is not a continuous measurement, (2) it is prone to observer error, and (3) it is inefficient for the clinical staff. We report here on an algorithm designed to meet clinical needs by providing respiratory rate through a standard pulse oximeter. Finger photoplethysmograms were collected from a cohort of 63 GCF patients monitored during free breathing over a 25-min period. These were processed using a novel in-house algorithm based on continuous wavelet-transform technology within an infrastructure incorporating confidence-based averaging and logical decision-making processes. The computed oximeter respiratory rates (RRoxi) were compared to an end-tidal CO2 reference rate (RRETCO2). RRETCO2 ranged from a lowest recorded value of 4.7 breaths per minute (brpm) to a highest value of 32.0 brpm. The mean respiratory rate was 16.3 brpm with standard deviation of 4.7 brpm. Excellent agreement was found between RRoxi and RRETCO2, with a mean difference of -0.48 brpm and standard deviation of 1.77 brpm. These data demonstrate that our novel respiratory rate algorithm is a potentially viable method of monitoring respiratory rate in GCF patients. This technology provides the means to facilitate continuous monitoring of respiratory rate, coupled with arterial oxygen saturation and pulse rate, using a single non-invasive sensor in low acuity settings.
Emergency department discharge prescription errors in an academic medical center
Belanger, April; Devine, Lauren T.; Lane, Aaron; Condren, Michelle E.
2017-01-01
This study described discharge prescription medication errors written for emergency department patients. This study used content analysis in a cross-sectional design to systematically categorize prescription errors found in a report of 1000 discharge prescriptions submitted in the electronic medical record in February 2015. Two pharmacy team members reviewed the discharge prescription list for errors. Open-ended data were coded by an additional rater for agreement on coding categories. Coding was based upon majority rule. Descriptive statistics were used to address the study objective. Categories evaluated were patient age, provider type, drug class, and type and time of error. The discharge prescription error rate out of 1000 prescriptions was 13.4%, with “incomplete or inadequate prescription” being the most commonly detected error (58.2%). The adult and pediatric error rates were 11.7% and 22.7%, respectively. The antibiotics reviewed had the highest number of errors. The highest within-class error rates were with antianginal medications, antiparasitic medications, antacids, appetite stimulants, and probiotics. Emergency medicine residents wrote the highest percentage of prescriptions (46.7%) and had an error rate of 9.2%. Residents of other specialties wrote 340 prescriptions and had an error rate of 20.9%. Errors occurred most often between 10:00 am and 6:00 pm. PMID:28405061
Testing a new application for TOPSIS: monitoring drought and wet periods in Iran
NASA Astrophysics Data System (ADS)
Roshan, Gholamreza; Ghanghermeh, AbdolAzim; Grab, Stefan W.
2018-01-01
Globally, droughts are a recurring major natural disaster owing to below normal precipitation, and are occasionally associated with high temperatures, which together negatively impact upon human health and social, economic, and cultural activities. Drought early warning and monitoring is thus essential for reducing such potential impacts on society. To this end, several experimental methods have previously been proposed for calculating drought, yet these are based almost entirely on precipitation alone. Here, for the first time, and in contrast to previous studies, we use seven climate parameters to establish drought/wet periods; these include: Tmin, Tmax, sunshine hours, relative humidity, average rainfall, number of rain days greater than 1 mm, and the ratio of total precipitation to number of days with precipitation, using the technique for order of preference by similarity to ideal solution (TOPSIS) algorithm. To test the TOPSIS method for different climate zones, six sample stations representing a variety of different climate conditions were used by assigning weight changes to climate parameters, which are then applied to the model, together with multivariate regression analysis. For the six stations tested, model results indicate the lowest errors for Zabol station and maximum errors for Kermanshah. The validation techniques strongly support our proposed new method for calculating and rating drought/wet events using TOPSIS.
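For readers unfamiliar with the ranking step, the following minimal, editorial sketch implements a generic TOPSIS scoring of observations against weighted, normalized criteria; the weights and the benefit/cost designation of each criterion are illustrative assumptions, not the study's configuration.

```python
import numpy as np

def topsis(X, weights, benefit):
    """X: (n_obs, n_criteria) matrix of criterion values (e.g. monthly climate
    parameters at a station); weights sum to 1; benefit[j] is True when larger
    values of criterion j are better. Returns a closeness score in [0, 1]."""
    X = np.asarray(X, float)
    w = np.asarray(weights, float)
    R = X / np.linalg.norm(X, axis=0)            # vector-normalize each criterion
    V = R * w                                    # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)    # distance to the ideal point
    d_neg = np.linalg.norm(V - anti, axis=1)     # distance to the anti-ideal point
    return d_neg / (d_pos + d_neg)
```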
Kernel Machine SNP-set Testing under Multiple Candidate Kernels
Wu, Michael C.; Maity, Arnab; Lee, Seunggeun; Simmons, Elizabeth M.; Harmon, Quaker E.; Lin, Xinyi; Engel, Stephanie M.; Molldrem, Jeffrey J.; Armistead, Paul M.
2013-01-01
Joint testing for the cumulative effect of multiple single nucleotide polymorphisms grouped on the basis of prior biological knowledge has become a popular and powerful strategy for the analysis of large scale genetic association studies. The kernel machine (KM) testing framework is a useful approach that has been proposed for testing associations between multiple genetic variants and many different types of complex traits by comparing pairwise similarity in phenotype between subjects to pairwise similarity in genotype, with similarity in genotype defined via a kernel function. An advantage of the KM framework is its flexibility: choosing different kernel functions allows for different assumptions concerning the underlying model and can allow for improved power. In practice, it is difficult to know which kernel to use a priori since this depends on the unknown underlying trait architecture and selecting the kernel which gives the lowest p-value can lead to inflated type I error. Therefore, we propose practical strategies for KM testing when multiple candidate kernels are present based on constructing composite kernels and based on efficient perturbation procedures. We demonstrate through simulations and real data applications that the procedures protect the type I error rate and can lead to substantially improved power over poor choices of kernels and only modest differences in power versus using the best candidate kernel. PMID:23471868
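Since kernel choice is central to the approach above, the following minimal, editorial sketch illustrates one of the strategies described: combining several candidate kernels into a composite (weighted-sum) kernel rather than selecting the single kernel with the smallest p-value. The linear and IBS-style kernels and the equal weights are illustrative assumptions, not the authors' exact construction.

```python
import numpy as np

def linear_kernel(G):
    """Linear kernel on a (subjects x SNPs) genotype matrix coded 0/1/2."""
    return G @ G.T

def ibs_kernel(G):
    """Identity-by-state style similarity for 0/1/2 genotype codes."""
    G = np.asarray(G, float)
    n, m = G.shape
    K = np.zeros((n, n))
    for i in range(n):
        K[i] = (2 * m - np.abs(G - G[i]).sum(axis=1)) / (2 * m)
    return K

def composite_kernel(G, kernels, weights=None):
    """Weighted sum of candidate kernel matrices (weights default to equal)."""
    mats = [k(G) for k in kernels]
    if weights is None:
        weights = np.full(len(mats), 1.0 / len(mats))
    return sum(w * K for w, K in zip(weights, mats))

# Example: K = composite_kernel(G, [linear_kernel, ibs_kernel])
```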
Welles, Alexander P; Buller, Mark J; Looney, David P; Rumpler, William V; Gribok, Andrei V; Hoyt, Reed W
2018-02-01
Human metabolic energy expenditure is critical to many scientific disciplines but can only be measured using expensive and/or restrictive equipment. The aim of this work is to determine whether the SCENARIO thermoregulatory model can be adapted to estimate metabolic rate (M) from core body temperature (T_C). To validate this method of M estimation, data were collected from fifteen test volunteers (age = 23 ± 3 yr, height = 1.73 ± 0.07 m, mass = 68.6 ± 8.7 kg, body fat = 16.7 ± 7.3%; mean ± SD) who wore long-sleeved nylon jackets and pants (I_tot,clo = 1.22, I_m = 0.41) during treadmill exercise tasks (32 trials; 7.8 ± 0.5 km in 1 h; air temp. = 22°C, 50% RH, wind speed = 0.35 m/s). Core body temperatures were recorded by ingested thermometer pill and M data were measured via whole-room indirect calorimetry. Metabolic rate was estimated for 5-min epochs in a two-step process. First, for a given epoch, a range of M values were input to the SCENARIO model and a corresponding range of T_C values were output. Second, the output T_C value with the lowest absolute error relative to the observed T_C for the given epoch was identified, and its corresponding M input was selected as the estimated M for that epoch. This process was then repeated for each subsequent remaining epoch. Root mean square error (RMSE), mean absolute error (MAE), and bias between observed and estimated M were 186 W, 130 ± 174 W, and 33 ± 183 W, respectively. The RMSE for total energy expenditure by exercise period was 0.30 MJ. These results indicate that the SCENARIO model is useful for estimating M from T_C when measurement is otherwise impractical.
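The two-step estimation procedure described above amounts to a per-epoch grid search over candidate metabolic rates. The sketch below illustrates that logic with a placeholder predict_core_temp callable standing in for the SCENARIO model, which is not reproduced here; the search range is an assumption.

```python
import numpy as np

def estimate_metabolic_rate(observed_tc, candidate_m, predict_core_temp):
    """Pick, for each epoch, the metabolic-rate input whose predicted core
    temperature is closest to the observed value.

    observed_tc       : sequence of observed core temperatures, one per epoch
    candidate_m       : array of candidate metabolic rates (W) to try
    predict_core_temp : callable(m_watts, epoch_index) -> predicted T_C; this
                        stands in for the thermoregulatory model (assumption)
    """
    estimates = []
    for epoch, tc_obs in enumerate(observed_tc):
        preds = np.array([predict_core_temp(m, epoch) for m in candidate_m])
        best = np.argmin(np.abs(preds - tc_obs))   # lowest absolute T_C error
        estimates.append(candidate_m[best])
    return np.array(estimates)

# e.g. candidate_m = np.arange(100, 1000, 25)  # W, illustrative search range
```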
Taylor, Erin A; Saltzman, Evan; Bauhoff, Sebastian; Pacula, Rosalie L; Eibner, Christine
2015-01-01
Federal subsidies available to enrollees in health insurance Marketplaces are pegged to the premium of the second-lowest-cost silver plan available in each rating area (as defined by each state). People who qualify for the subsidy contribute a percentage of their income to purchase coverage, and the federal government covers the remaining cost up to the price of that premium. Because the number of plans offered and plan premiums vary substantially across rating areas, the effective value of the subsidy may vary geographically. We found that the availability of more plans in a rating area was associated with lower premiums but higher deductibles for enrollees in the second-lowest-cost silver plan. In rating areas with more than twenty plans, the average deductible in the second-lowest-cost silver plan was nearly $1,000 higher than it was in rating areas with fewer than thirteen plans. Because premium costs for second-lowest-cost silver plans are capped, deductibles may be a more salient measure of plan value for enrollees than premiums are. Greater standardization of plans or an alternative approach to calculating the subsidy could provide a more consistent benefit to enrollees across various rating areas. Project HOPE—The People-to-People Health Foundation, Inc.
Juvenile Arrests, 2000. Juvenile Justice Bulletin.
ERIC Educational Resources Information Center
Snyder, Howard N.
This bulletin examines the national and state juvenile arrest rate in 2000 using data reported annually by local law enforcement agencies nationwide to the FBI's Uniform Crime Reporting program. Results indicate that the murder rate in 2000 was the lowest since 1965; juvenile arrests for violence in 2000 were the lowest since 1988; few juveniles…
48 CFR 52.247-59 - F.o.b. Origin-Carload and Truckload Shipments.
Code of Federal Regulations, 2011 CFR
2011-10-01
.... origin contracts with shipments in carloads or truckloads. This will facilitate realistic freight cost... weight that will result in the lowest freight rate (or per car charge) on file or published in common... highest applicable minimum weight that will result in the lowest possible freight rate (or per car charge...
48 CFR 52.247-59 - F.o.b. Origin-Carload and Truckload Shipments.
Code of Federal Regulations, 2014 CFR
2014-10-01
.... origin contracts with shipments in carloads or truckloads. This will facilitate realistic freight cost... weight that will result in the lowest freight rate (or per car charge) on file or published in common... highest applicable minimum weight that will result in the lowest possible freight rate (or per car charge...
48 CFR 52.247-59 - F.o.b. Origin-Carload and Truckload Shipments.
Code of Federal Regulations, 2012 CFR
2012-10-01
.... origin contracts with shipments in carloads or truckloads. This will facilitate realistic freight cost... weight that will result in the lowest freight rate (or per car charge) on file or published in common... highest applicable minimum weight that will result in the lowest possible freight rate (or per car charge...
48 CFR 52.247-59 - F.o.b. Origin-Carload and Truckload Shipments.
Code of Federal Regulations, 2013 CFR
2013-10-01
.... origin contracts with shipments in carloads or truckloads. This will facilitate realistic freight cost... weight that will result in the lowest freight rate (or per car charge) on file or published in common... highest applicable minimum weight that will result in the lowest possible freight rate (or per car charge...
Schroeder, Scott R; Salomon, Meghan M; Galanter, William L; Schiff, Gordon D; Vaida, Allen J; Gaunt, Michael J; Bryson, Michelle L; Rash, Christine; Falck, Suzanne; Lambert, Bruce L
2017-01-01
Background Drug name confusion is a common type of medication error and a persistent threat to patient safety. In the USA, roughly one per thousand prescriptions results in the wrong drug being filled, and most of these errors involve drug names that look or sound alike. Prior to approval, drug names undergo a variety of tests to assess their potential for confusability, but none of these preapproval tests has been shown to predict real-world error rates. Objectives We conducted a study to assess the association between error rates in laboratory-based tests of drug name memory and perception and real-world drug name confusion error rates. Methods Eighty participants, comprising doctors, nurses, pharmacists, technicians and lay people, completed a battery of laboratory tests assessing visual perception, auditory perception and short-term memory of look-alike and sound-alike drug name pairs (eg, hydroxyzine/hydralazine). Results Laboratory test error rates (and other metrics) significantly predicted real-world error rates obtained from a large, outpatient pharmacy chain, with the best-fitting model accounting for 37% of the variance in real-world error rates. Cross-validation analyses confirmed these results, showing that the laboratory tests also predicted errors from a second pharmacy chain, with 45% of the variance being explained by the laboratory test data. Conclusions Across two distinct pharmacy chains, there is a strong and significant association between drug name confusion error rates observed in the real world and those observed in laboratory-based tests of memory and perception. Regulators and drug companies seeking a validated preapproval method for identifying confusing drug names ought to consider using these simple tests. By using a standard battery of memory and perception tests, it should be possible to reduce the number of confusing look-alike and sound-alike drug name pairs that reach the market, which will help protect patients from potentially harmful medication errors. PMID:27193033
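As an editorial illustration of the kind of analysis described, the sketch below regresses real-world confusion error rates on laboratory error rates and reports both in-sample and cross-validated variance explained. The arrays are synthetic placeholders, not the study data, and the three-column layout of laboratory metrics is an assumption.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score

# Placeholder data: one row per drug-name pair (illustrative, not study data)
rng = np.random.default_rng(0)
lab_error_rates = rng.random((100, 3))  # e.g. visual, auditory, memory test errors
real_world_rates = lab_error_rates @ np.array([0.2, 0.3, 0.1]) + 0.05 * rng.random(100)

model = LinearRegression().fit(lab_error_rates, real_world_rates)
print("in-sample R^2:", model.score(lab_error_rates, real_world_rates))

# Cross-validated predictions guard against overfitting to a single pharmacy chain
cv_pred = cross_val_predict(LinearRegression(), lab_error_rates, real_world_rates, cv=5)
print("cross-validated R^2:", r2_score(real_world_rates, cv_pred))
```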
Karunaratne, Nicholas
2013-12-01
To compare the accuracy of the Pentacam Holladay equivalent keratometry readings with the IOL Master 500 keratometry in calculating intraocular lens power. Non-randomized, prospective clinical study conducted in private practice. Forty-five consecutive normal patients undergoing cataract surgery. Forty-five consecutive patients had Pentacam equivalent keratometry readings at the 2-, 3- and 4.5-mm corneal zones and IOL Master keratometry measurements prior to cataract surgery. For each Pentacam equivalent keratometry reading zone and IOL Master measurement, the difference between the observed and expected refractive error was calculated using the Holladay 2 and Sanders, Retzlaff and Kraff theoretic (SRKT) formulas. Mean keratometric value and mean absolute refractive error. There was a statistically significant difference between the mean keratometric values of the IOL Master and the Pentacam equivalent keratometry reading 2-, 3- and 4.5-mm measurements (P < 0.0001, analysis of variance). There was no statistically significant difference between the mean absolute refraction error for the IOL Master and equivalent keratometry readings at the 2 mm, 3 mm and 4.5 mm zones for either the Holladay 2 formula (P = 0.14) or the SRKT formula (P = 0.47). The lowest mean absolute refraction error for the Holladay 2 equivalent keratometry reading was in the 4.5 mm zone (mean 0.25 D ± 0.17 D). The lowest mean absolute refraction error for the SRKT equivalent keratometry reading was in the 4.5 mm zone (mean 0.25 D ± 0.19 D). Comparing the absolute refraction error of the IOL Master and the Pentacam equivalent keratometry reading, the best agreement was with Holladay 2 and the equivalent keratometry reading at 4.5 mm, with a mean difference of 0.02 D and 95% limits of agreement of -0.35 and 0.39 D. The IOL Master keratometry and Pentacam equivalent keratometry reading were not equivalent when used only for corneal power measurements. However, the keratometry measurements of the IOL Master and the Pentacam equivalent keratometry reading at 4.5 mm may be similarly effective when used in intraocular lens power calculation formulas, following constant optimization.
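The agreement statistics quoted above (mean difference and 95% limits of agreement) follow the usual Bland-Altman computation; a minimal, editorial sketch is given below, with the argument names as illustrative placeholders.

```python
import numpy as np

def limits_of_agreement(a, b):
    """Bland-Altman mean difference and 95% limits of agreement
    between two paired measurement methods."""
    d = np.asarray(a, float) - np.asarray(b, float)
    mean_diff = d.mean()
    sd = d.std(ddof=1)
    return mean_diff, (mean_diff - 1.96 * sd, mean_diff + 1.96 * sd)

# e.g. limits_of_agreement(abs_err_iolmaster, abs_err_pentacam_ekr45)
```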
Low-dimensional Representation of Error Covariance
NASA Technical Reports Server (NTRS)
Tippett, Michael K.; Cohn, Stephen E.; Todling, Ricardo; Marchesin, Dan
2000-01-01
Ensemble and reduced-rank approaches to prediction and assimilation rely on low-dimensional approximations of the estimation error covariances. Here stability properties of the forecast/analysis cycle for linear, time-independent systems are used to identify factors that cause the steady-state analysis error covariance to admit a low-dimensional representation. A useful measure of forecast/analysis cycle stability is the bound matrix, a function of the dynamics, observation operator and assimilation method. Upper and lower estimates for the steady-state analysis error covariance matrix eigenvalues are derived from the bound matrix. The estimates generalize to time-dependent systems. If much of the steady-state analysis error variance is due to a few dominant modes, the leading eigenvectors of the bound matrix approximate those of the steady-state analysis error covariance matrix. The analytical results are illustrated in two numerical examples where the Kalman filter is carried to steady state. The first example uses the dynamics of a generalized advection equation exhibiting nonmodal transient growth. Failure to observe growing modes leads to increased steady-state analysis error variances. Leading eigenvectors of the steady-state analysis error covariance matrix are well approximated by leading eigenvectors of the bound matrix. The second example uses the dynamics of a damped baroclinic wave model. The leading eigenvectors of a lowest-order approximation of the bound matrix are shown to approximate well the leading eigenvectors of the steady-state analysis error covariance matrix.
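The following minimal, editorial sketch mirrors the style of numerical experiment described: a linear Kalman filter forecast/analysis cycle is iterated to steady state and the eigenstructure of the resulting analysis error covariance is inspected. The toy dynamics, observation operator, and noise covariances are illustrative assumptions, not the paper's models.

```python
import numpy as np

n, p = 50, 10
rng = np.random.default_rng(0)
A = 0.95 * np.eye(n) + 0.02 * rng.standard_normal((n, n))   # toy linear dynamics
H = np.zeros((p, n)); H[np.arange(p), np.arange(p)] = 1.0    # observe first p states
Q = 0.01 * np.eye(n)                                         # model error covariance
R = 0.1 * np.eye(p)                                          # observation error covariance

P = np.eye(n)                                # initial analysis error covariance
for _ in range(2000):                        # forecast/analysis cycle to steady state
    Pf = A @ P @ A.T + Q                     # forecast error covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain
    P = (np.eye(n) - K @ H) @ Pf             # analysis error covariance

evals, evecs = np.linalg.eigh(P)
print("largest analysis-error variances:", evals[-5:][::-1])
# A steep eigenvalue spectrum indicates that a low-dimensional representation suffices.
```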
Dispensing error rate after implementation of an automated pharmacy carousel system.
Oswald, Scott; Caldwell, Richard
2007-07-01
A study was conducted to determine filling and dispensing error rates before and after the implementation of an automated pharmacy carousel system (APCS). The study was conducted in a 613-bed acute and tertiary care university hospital. Before the implementation of the APCS, filling and dispensing rates were recorded during October through November 2004 and January 2005. Postimplementation data were collected during May through June 2006. Errors were recorded in three areas of pharmacy operations: first-dose or missing medication fill, automated dispensing cabinet fill, and interdepartmental request fill. A filling error was defined as an error caught by a pharmacist during the verification step. A dispensing error was defined as an error caught by a pharmacist observer after verification by the pharmacist. Before implementation of the APCS, 422 first-dose or missing medication orders were observed between October 2004 and January 2005. Independent data collected in December 2005, approximately six weeks after the introduction of the APCS, found that filling and error rates had increased. The filling rate for automated dispensing cabinets was associated with the largest decrease in errors. Filling and dispensing error rates had decreased by December 2005. In terms of interdepartmental request fill, no dispensing errors were noted in 123 clinic orders dispensed before the implementation of the APCS. One dispensing error out of 85 clinic orders was identified after implementation of the APCS. The implementation of an APCS at a university hospital decreased medication filling errors related to automated cabinets only and did not affect other filling and dispensing errors.
Gilbert, Sarah Skye; Thakare, Neeraj; Ramanujapuram, Arun; Akkihal, Anup
2017-04-19
Immunization supply chains in low resource settings do not always reach children with necessary vaccines. Digital information systems can enable real time visibility of inventory and improve vaccine availability. In 2014, a digital, mobile/web-based information system was implemented in two districts of Uttar Pradesh, India. This retrospective investigates improvements and stabilization of supply chain performance following introduction of the digital information system. All data were collected via the digital information system between March 2014 and September 2015. Data included metadata and transaction logs providing information about users, facilities, and vaccines. Metrics evaluated include adoption (system access, timeliness and completeness), data quality (error rates), and performance (stock availability on immunization session days, replenishment response duration, rate of zero stock events). Stability was defined as the phase in which quality and performance metrics achieved equilibrium rates with minimal volatility. The analysis compared performance across different facilities and vaccines. Adoption appeared sufficiently high from the onset to commence stability measures of data quality and supply chain performance. Data quality stabilized from month 3 onwards, and supply chain performance stabilized from month 13 onwards. For data quality, error rates reduced by two thirds post stabilization. Although vaccine availability remained high throughout the pilot, the three lowest-performing facilities improved from 91.05% pre-stability to 98.70% post-stability (p<0.01; t-test). Average replenishment duration (as a corrective response to stock-out events) decreased 52.3% from 4.93days to 2.35days (p<0.01; t-test). Diphtheria-tetanus-pertussis vaccine was significantly less likely to be stocked out than any other material. The results suggest that given sufficient adoption, stability is sequentially achieved, beginning with data quality, and then performance. Identifying when a pilot stabilizes can enable more predictable, reliable cost estimates, and outcome forecasts in the scale-up phase. Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.
Guzman Castillo, Maria; Gillespie, Duncan O. S.; Allen, Kirk; Bandosz, Piotr; Schmid, Volker; Capewell, Simon; O’Flaherty, Martin
2014-01-01
Background Coronary Heart Disease (CHD) remains a major cause of mortality in the United Kingdom. Yet predictions of future CHD mortality are potentially problematic due to population ageing and increase in obesity and diabetes. Here we explore future projections of CHD mortality in England & Wales under two contrasting future trend assumptions. Methods In scenario A, we used the conventional counterfactual scenario that the last-observed CHD mortality rates from 2011 would persist unchanged to 2030. The future number of deaths was calculated by applying those rates to the 2012–2030 population estimates. In scenario B, we assumed that the recent falling trend in CHD mortality rates would continue. Using Lee-Carter and Bayesian Age Period Cohort (BAPC) models, we projected the linear trends up to 2030. We validate our methods using past data to predict mortality from 2002–2011. Then, we computed the error between observed and projected values. Results In scenario A, assuming that 2011 mortality rates stayed constant by 2030, the number of CHD deaths would increase 62% or approximately 39,600 additional deaths. In scenario B, assuming recent declines continued, the BAPC model (the model with lowest error) suggests the number of deaths will decrease by 56%, representing approximately 36,200 fewer deaths by 2030. Conclusions The decline in CHD mortality has been reasonably continuous since 1979, and there is little reason to believe it will soon halt. The commonly used assumption that mortality will remain constant from 2011 therefore appears slightly dubious. By contrast, using the BAPC model and assuming continuing mortality falls offers a more plausible prediction of future trends. Thus, despite population ageing, the number of CHD deaths might halve again between 2011 and 2030. This has implications for how the potential benefits of future cardiovascular strategies might best be calculated and presented. PMID:24918442
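As an editorial illustration of the trend-extrapolation approach in scenario B, here is a minimal Lee-Carter-style sketch: log death rates are decomposed via SVD into an age pattern plus an age-sensitivity term times a period index, the period index is extrapolated with a random walk with drift, and future rates are reconstructed. The input matrix is a placeholder, and the BAPC model is not reproduced here.

```python
import numpy as np

def lee_carter_forecast(log_rates, horizon):
    """log_rates: (n_ages, n_years) matrix of log CHD death rates.
    Returns projected death rates for the next `horizon` years."""
    a = log_rates.mean(axis=1)                         # average age pattern a_x
    centered = log_rates - a[:, None]
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    b = U[:, 0] * s[0]                                 # age sensitivities b_x
    k = Vt[0]                                          # period index k_t
    drift = (k[-1] - k[0]) / (len(k) - 1)              # random walk with drift
    k_future = k[-1] + drift * np.arange(1, horizon + 1)
    return np.exp(a[:, None] + np.outer(b, k_future))  # projected rates by age x year
```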
Entropy of space-time outcome in a movement speed-accuracy task.
Hsieh, Tsung-Yu; Pacheco, Matheus Maia; Newell, Karl M
2015-12-01
The experiment reported was set-up to investigate the space-time entropy of movement outcome as a function of a range of spatial (10, 20 and 30 cm) and temporal (250-2500 ms) criteria in a discrete aiming task. The variability and information entropy of the movement spatial and temporal errors considered separately increased and decreased on the respective dimension as a function of an increment of movement velocity. However, the joint space-time entropy was lowest when the relative contribution of spatial and temporal task criteria was comparable (i.e., mid-range of space-time constraints), and it increased with a greater trade-off between spatial or temporal task demands, revealing a U-shaped function across space-time task criteria. The traditional speed-accuracy functions of spatial error and temporal error considered independently mapped to this joint space-time U-shaped entropy function. The trade-off in movement tasks with joint space-time criteria is between spatial error and timing error, rather than movement speed and accuracy. Copyright © 2015 Elsevier B.V. All rights reserved.
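A minimal, editorial sketch of estimating the joint space-time entropy of movement outcome from binned spatial and temporal errors is shown below; the histogram bin count is an arbitrary choice, not the study's procedure.

```python
import numpy as np

def joint_entropy(spatial_err, temporal_err, bins=15):
    """Shannon entropy (bits) of the joint distribution of spatial and
    temporal movement errors, estimated from a 2-D histogram."""
    counts, _, _ = np.histogram2d(spatial_err, temporal_err, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]                      # drop empty bins before taking logs
    return float(-(p * np.log2(p)).sum())
```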
A nonlinear model of gold production in Malaysia
NASA Astrophysics Data System (ADS)
Ramli, Norashikin; Muda, Nora; Umor, Mohd Rozi
2014-06-01
Malaysia is a country rich in natural resources, one of which is gold. Gold has already become an important national commodity. This study was conducted to determine a model that fits gold production in Malaysia well for the years 1995-2010. Five nonlinear models are presented: the Logistic, Gompertz, Richards, Weibull and Chapman-Richards models. These models are used to fit the cumulative gold production in Malaysia, and the best model is then selected based on performance. The performance of the fitted models is measured by sum of squared errors, root mean squared error, coefficient of determination, mean relative error, mean absolute error and mean absolute percentage error. The study found that the Weibull model significantly outperforms the other models. To confirm that Weibull is the best model, the latest data were fitted to the model; once again, the Weibull model gave the lowest values for all error measures. We conclude that future gold production in Malaysia can be predicted with the Weibull model, which could be an important finding for Malaysia in planning its economic activities.
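As an editorial illustration, the sketch below fits a Weibull-type growth curve to a cumulative-production series by nonlinear least squares and reports RMSE and MAPE; the parameterization, starting values, and data are placeholders, not the study's figures.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_growth(t, a, b, c, d):
    """Weibull growth curve: asymptote a, with shape/scale parameters b, c, d."""
    return a - b * np.exp(-c * t ** d)

# t: years since the start of the series; y: cumulative production (synthetic placeholder)
t = np.arange(16, dtype=float)
y = 100 - 90 * np.exp(-0.05 * t ** 1.5) + np.random.default_rng(1).normal(0, 1, 16)

params, _ = curve_fit(weibull_growth, t, y, p0=[100, 90, 0.05, 1.5], maxfev=10000)
pred = weibull_growth(t, *params)
rmse = np.sqrt(np.mean((y - pred) ** 2))
mape = np.mean(np.abs((y - pred) / y)) * 100
print(params, rmse, mape)
```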
Setting and validating the pass/fail score for the NBDHE.
Tsai, Tsung-Hsun; Dixon, Barbara Leatherman
2013-04-01
This report describes the overall process used for setting the pass/fail score for the National Board Dental Hygiene Examination (NBDHE). The Objective Standard Setting (OSS) method was used for setting the pass/fail score for the NBDHE. The OSS method requires a panel of experts to determine the criterion items and the proportion of these items that minimally competent candidates would answer correctly, the percentage of mastery, and the confidence level of the error band. A panel of 11 experts was selected by the Joint Commission on National Dental Examinations (Joint Commission). Panel members represented geographic distribution across the U.S. and had the following characteristics: full-time dental hygiene practitioners with experience in areas of preventive, periodontal, geriatric and special needs care, and full-time dental hygiene educators with experience in areas of scientific basis for dental hygiene practice, provision of clinical dental hygiene services and community health/research principles. Utilizing the expert panel's judgments, the pass/fail score was set and then the score scale was established using the Rasch measurement model. Statistical and psychometric analysis shows that the actual failure rate and the OSS failure rate are reasonably consistent (2.4% vs. 2.8%). The analysis also showed that the lowest error of measurement (an index of precision) and the highest reliability (0.97) are achieved at the pass/fail score point. The pass/fail score is a valid guide for making decisions about candidates for dental hygiene licensure. This new standard was reviewed and approved by the Joint Commission and was implemented beginning in 2011.
Analysis of significant factors for dengue fever incidence prediction.
Siriyasatien, Padet; Phumee, Atchara; Ongruk, Phatsavee; Jampachaisri, Katechan; Kesorn, Kraisak
2016-04-16
Many popular dengue forecasting techniques have been used by several researchers to extrapolate dengue incidence rates, including the K-H model, support vector machines (SVM), and artificial neural networks (ANN). The time series analysis methodology, particularly ARIMA and SARIMA, has been increasingly applied to the field of epidemiological research for dengue fever, dengue hemorrhagic fever, and other infectious diseases. The main drawback of these methods is that they do not consider other variables that are associated with the dependent variable. Additionally, new factors correlated to the disease are needed to enhance the prediction accuracy of the model when it is applied to areas of similar climates, where weather factors such as temperature, total rainfall, and humidity are not substantially different. Such drawbacks may consequently lower the predictive power for the outbreak. The predictive power of the forecasting model, assessed by Akaike's information criterion (AIC), Bayesian information criterion (BIC), and the mean absolute percentage error (MAPE), is improved by including the new parameters for dengue outbreak prediction. This study's selected model outperforms all three other competing models with the lowest AIC, the lowest BIC, and a small MAPE value. The exclusive use of climate factors from similar locations decreases a model's prediction power. The multivariate Poisson regression, however, effectively forecasts even when climate variables are slightly different. Female mosquitoes and seasons were strongly correlated with dengue cases. Therefore, the dengue incidence trends provided by this model will assist the optimization of dengue prevention. The present work demonstrates the important roles of female mosquito infection rates from the previous season and climate factors (represented as seasons) in dengue outbreaks. Incorporating these two factors in the model significantly improves the predictive power of dengue hemorrhagic fever forecasting models, as confirmed by AIC, BIC, and MAPE.
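As an editorial illustration of comparing candidate multivariate Poisson regression models by AIC, BIC, and MAPE, the sketch below fits a Poisson GLM and scores it; the data frame and predictor names (for example a lagged female mosquito infection rate and a season indicator) are assumptions, not the study's variables.

```python
import numpy as np
import statsmodels.api as sm

def fit_and_score(df, predictors, response="cases"):
    """Fit a Poisson GLM on a pandas DataFrame of monthly counts and return
    (AIC, BIC, MAPE) for the in-sample fit. Assumes nonzero counts for MAPE."""
    X = sm.add_constant(df[predictors])
    model = sm.GLM(df[response], X, family=sm.families.Poisson()).fit()
    pred = model.predict(X)
    mape = np.mean(np.abs((df[response] - pred) / df[response])) * 100
    return model.aic, model.bic, mape

# Candidate models might differ in whether the lagged infection rate and season
# are included (column names are assumptions):
# fit_and_score(df, ["rainfall", "temperature"])
# fit_and_score(df, ["rainfall", "temperature", "season", "female_infection_lag"])
```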
Ground Validation Assessments of GPM Core Observatory Science Requirements
NASA Astrophysics Data System (ADS)
Petersen, Walt; Huffman, George; Kidd, Chris; Skofronick-Jackson, Gail
2017-04-01
NASA Global Precipitation Measurement (GPM) Mission science requirements define specific measurement error standards for retrieved precipitation parameters such as rain rate, raindrop size distribution, and falling snow detection on instantaneous temporal scales and spatial resolutions ranging from effective instrument fields of view [FOV], to grid scales of 50 km x 50 km. Quantitative evaluation of these requirements intrinsically relies on GPM precipitation retrieval algorithm performance in myriad precipitation regimes (and hence, assumptions related to physics) and on the quality of ground-validation (GV) data being used to assess the satellite products. We will review GPM GV products, their quality, and their application to assessing GPM science requirements, interleaving measurement and precipitation physical considerations applicable to the approaches used. Core GV data products used to assess GPM satellite products include 1) two minute and 30-minute rain gauge bias-adjusted radar rain rate products and precipitation types (rain/snow) adapted/modified from the NOAA/OU multi-radar multi-sensor (MRMS) product over the continental U.S.; 2) Polarimetric radar estimates of rain rate over the ocean collected using the K-Pol radar at Kwajalein Atoll in the Marshall Islands and the Middleton Island WSR-88D radar located in the Gulf of Alaska; and 3) Multi-regime, field campaign and site-specific disdrometer-measured rain/snow size distribution (DSD), phase and fallspeed information used to derive polarimetric radar-based DSD retrievals and snow water equivalent rates (SWER) for comparison to coincident GPM-estimated DSD and precipitation rates/types, respectively. Within the limits of GV-product uncertainty we demonstrate that the GPM Core satellite meets its basic mission science requirements for a variety of precipitation regimes. For the liquid phase, we find that GPM radar-based products are particularly successful in meeting bias and random error requirements associated with retrievals of rain rate and required +/- 0.5 millimeter error bounds for mass-weighted mean drop diameter. Version-04 (V4) GMI GPROF radiometer-based rain rate products exhibit reasonable agreement with GV, but do not completely meet mission science requirements over the continental U.S. for lighter rain rates (e.g., 1 mm/hr) due to excessive random error ( 75%). Importantly, substantial corrections were made to the V4 GPROF algorithm and preliminary analysis of Version 5 (V5) rain products indicates more robust performance relative to GV. For the frozen phase and a modest GPM requirement to "demonstrate detection of snowfall", DPR products do successfully identify snowfall within the sensitivity and beam sampling limits of the DPR instrument ( 12 dBZ lower limit; lowest clutter-free bins). Similarly, the GPROF algorithm successfully "detects" falling snow and delineates it from liquid precipitation. However, the GV approach to computing falling-snow "detection" statistics is intrinsically tied to GPROF Bayesian algorithm-based thresholds of precipitation "detection" and model analysis temperature, and is not sufficiently tied to SWER. Hence we will also discuss ongoing work to establish the lower threshold SWER for "detection" using combined GV radar, gauge and disdrometer-based case studies.
Asteroid thermal modeling in the presence of reflected sunlight
NASA Astrophysics Data System (ADS)
Myhrvold, Nathan
2018-03-01
A new derivation of simple asteroid thermal models is presented, investigating the need to account correctly for Kirchhoff's law of thermal radiation when IR observations contain substantial reflected sunlight. The framework applies to both the NEATM and related thermal models. A new parameterization of these models eliminates the dependence of thermal modeling on visible absolute magnitude H, which is not always available. Monte Carlo simulations are used to assess the potential impact of violating Kirchhoff's law on estimates of physical parameters such as diameter and IR albedo, with an emphasis on NEOWISE results. The NEOWISE papers use ten different models, applied to 12 different combinations of WISE data bands, in 47 different combinations. The most prevalent combinations are simulated, and the accuracy of diameter estimates is found to depend critically on the model and data band combination. In the best case, full thermal modeling of all four bands, the 1σ (68.27%) confidence interval of the errors in an idealized model is -5% to +6%, but this combination covers just 1.9% of NEOWISE results. Other combinations representing 42% of the NEOWISE results have about twice the CI, at -10% to +12%, before accounting for errors due to irregular shape or other real-world effects that are not simulated. The model and data band combinations found for the majority of NEOWISE results have much larger systematic and random errors. Kirchhoff's law violation by NEOWISE models leads to errors in estimation accuracy that are strongest for asteroids with W1, W2 band emissivity ɛ12 in both the lowest (0.605 ≤ ɛ12 ≤ 0.780) and highest (0.969 ≤ ɛ12 ≤ 0.988) deciles, corresponding to the highest and lowest deciles of near-IR albedo pIR. Systematic accuracy error between deciles ranges from a low of 5% to as much as 45%, and there are also differences in the random errors. Kirchhoff's law effects also produce large errors in NEOWISE estimates of pIR, particularly for high values. IR observations of asteroids in bands that have substantial reflected sunlight can largely avoid these problems by adopting the Kirchhoff-law-compliant modeling framework presented here, which is conceptually straightforward and comes without computational cost.
Mendez, Michelle A.; Popkin, Barry M.; Buckland, Genevieve; Schroder, Helmut; Amiano, Pilar; Barricarte, Aurelio; Huerta, José-María; Quirós, José R.; Sánchez, María-José; González, Carlos A
2011-01-01
Misreporting characterized by the reporting of implausible energy intakes may undermine the valid estimation of diet-disease relations, but the methods to best identify and account for misreporting are unknown. The present study compared how alternate approaches affected associations between selected dietary factors and body mass index (BMI) by using data from the European Prospective Investigation Into Cancer and Nutrition-Spain. A total of 24,332 women and 15,061 men 29–65 years of age recruited from 1992 to 1996 for whom measured height and weight and validated diet history data were available were included. Misreporters were identified on the basis of disparities between reported energy intakes and estimated requirements calculated using the original Goldberg method and 2 alternatives: one that substituted basal metabolic rate equations that are more valid at higher BMIs and another that used doubly labeled water-predicted total energy expenditure equations. Compared with results obtained using the original method, underreporting was considerably lower and overreporting higher with alternative methods, which were highly concordant. Accounting for misreporters with all methods yielded diet-BMI relations that were more consistent with expectations; alternative methods often strengthened associations. For example, among women, multivariable-adjusted differences in BMI for the highest versus lowest vegetable intake tertile (β = 0.37 (standard error, 0.07)) were neutral after adjusting with the original method (β = 0.01 (standard error, 0.07)) and negative using the predicted total energy expenditure method with stringent cutoffs (β = −0.15 (standard error, 0.07)). Alternative methods may yield more valid associations between diet and obesity-related outcomes. PMID:21242302
Differential detection in quadrature-quadrature phase shift keying (Q2PSK) systems
NASA Astrophysics Data System (ADS)
El-Ghandour, Osama M.; Saha, Debabrata
1991-05-01
A generalized quadrature-quadrature phase shift keying (Q2PSK) signaling format is considered for differential encoding and differential detection. Performance in the presence of additive white Gaussian noise (AWGN) is analyzed. Symbol error rate is found to be approximately twice the symbol error rate in a quaternary DPSK system operating at the same Eb/N0. However, the bandwidth efficiency of differential Q2PSK is substantially higher than that of quaternary DPSK. When the error is due to AWGN, the ratio of double error rate to single error rate can be very high, and the ratio may approach zero at high SNR. To improve error rate, differential detection through maximum-likelihood decoding based on multiple or N symbol observations is considered. If N and SNR are large this decoding gives a 3-dB advantage in error rate over conventional N = 2 differential detection, fully recovering the energy loss (as compared to coherent detection) if the observation is extended to a large number of symbol durations.
Error Correction using Quantum Quasi-Cyclic Low-Density Parity-Check(LDPC) Codes
NASA Astrophysics Data System (ADS)
Jing, Lin; Brun, Todd; Quantum Research Team
Quasi-cyclic LDPC codes can approach the Shannon capacity and have efficient decoders. Manabu Hagiwara et al. (2007) presented a method to calculate parity check matrices with high girth. Two distinct, orthogonal matrices Hc and Hd are used. Using submatrices obtained from Hc and Hd by deleting rows, we can alter the code rate. The submatrix of Hc is used to correct Pauli X errors, and the submatrix of Hd to correct Pauli Z errors. We simulated this system for depolarizing noise on USC's High Performance Computing Cluster, and obtained the block error rate (BER) as a function of the error weight and code rate. From the rates of uncorrectable errors under different error weights we can extrapolate the BER to any small error probability. Our results show that this code family can perform reasonably well even at high code rates, thus considerably reducing the overhead compared to concatenated and surface codes. This makes these codes promising as storage blocks in fault-tolerant quantum computation.
Graduation Rates Hit Lowest Level in 7 Years for Athletes in Football and Basketball.
ERIC Educational Resources Information Center
Suggs, Welch
1999-01-01
Graduation rates of football players and men's and women's basketball players at National Collegiate Athletic Association Division I colleges have reached their lowest level in seven years, with fewer black athletes graduating than at any time since the mid-1980s. Eight universities graduated at least 90% of athletes enrolling between 1989 and…
An evaluation of the underlying mechanisms of bloodstain pattern analysis error.
Behrooz, Nima; Hulse-Smith, Lee; Chandra, Sanjeev
2011-09-01
An experiment was designed to explore the underlying mechanisms of blood disintegration and its subsequent effect on area of origin (AO) calculations. Blood spatter patterns were created through the controlled application of pressurized air (20-80 kPa) for 0.1 msec onto suspended blood droplets (2.7-3.2 mm diameter). The resulting disintegration process was captured using high-speed photography. Straight-line triangulation resulted in a 50% height overestimation, whereas using the lowest calculated height for each spatter pattern reduced this error to 8%. Incorporation of projectile motion resulted in a 28% height underestimation. The AO xy-coordinate was found to be very accurate with a maximum offset of only 4 mm, while AO size calculations were found to be two- to fivefold greater than expected. Subsequently, reverse triangulation analysis revealed the rotational offset for 26% of stains could not be attributed to measurement error, suggesting that some portion of error is inherent in the disintegration process. © 2011 American Academy of Forensic Sciences.
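For readers unfamiliar with the straight-line triangulation discussed above, the following minimal, editorial sketch computes a stain's impact angle from its width-to-length ratio and the implied origin height from the tangent of that angle times the distance to the area of convergence; ignoring projectile motion in this way is precisely what produces the height overestimation reported. The function names and the min-height summary are illustrative.

```python
import numpy as np

def impact_angle(width, length):
    """Impact angle (radians) from an elliptical stain's width/length ratio."""
    return np.arcsin(np.clip(width / length, 0.0, 1.0))

def straight_line_height(width, length, distance_to_convergence):
    """Origin height implied by one stain, ignoring gravity and drag."""
    return distance_to_convergence * np.tan(impact_angle(width, length))

# Area-of-origin height is often summarized over several stains, e.g. the mean
# or (as the study suggests) the lowest of the per-stain estimates:
# heights = [straight_line_height(w, l, d) for w, l, d in stains]
# ao_height = min(heights)
```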
Fletcher, Timothy L; Popelier, Paul L A
2016-06-14
A machine learning method called kriging is applied to the set of all 20 naturally occurring amino acids. Kriging models are built that predict electrostatic multipole moments for all topological atoms in any amino acid based on molecular geometry only. These models then predict molecular electrostatic interaction energies. On the basis of 200 unseen test geometries for each amino acid, no amino acid shows a mean prediction error above 5.3 kJ mol(-1), while the lowest error observed is 2.8 kJ mol(-1). The mean error across the entire set is only 4.2 kJ mol(-1) (or 1 kcal mol(-1)). Charged systems are created by protonating or deprotonating selected amino acids, and these show no significant deviation in prediction error over their neutral counterparts. Similarly, the proposed methodology can also handle amino acids with aromatic side chains, without the need for modification. Thus, we present a generic method capable of accurately capturing multipolar polarizable electrostatics in amino acids.
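Kriging is the geostatistical name for Gaussian-process regression, so a minimal, editorial sketch with scikit-learn conveys the flavor of the models described: geometric descriptors in, an electrostatic quantity out, with prediction error measured on unseen geometries. The synthetic features, targets, and kernel choice are assumptions, not the authors' setup.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# X: (n_geometries, n_features) internal-coordinate descriptors of each geometry
# y: (n_geometries,) target property, e.g. an atomic multipole moment (placeholders)
rng = np.random.default_rng(0)
X = rng.random((200, 12))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(200)

kernel = ConstantKernel(1.0) * RBF(length_scale=np.ones(X.shape[1]))
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X[:150], y[:150])

pred = gp.predict(X[150:])
mae = np.mean(np.abs(pred - y[150:]))   # mean prediction error on unseen geometries
print(f"MAE on held-out geometries: {mae:.3f}")
```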
Menéndez, Lumila Paula
2017-05-01
Intraobserver error (INTRA-OE) is the difference between repeated measurements of the same variable made by the same observer. The objective of this work was to evaluate INTRA-OE from 3D landmarks registered with a Microscribe, in different datasets: (A) the 3D coordinates, (B) linear measurements calculated from A, and (C) the six-first principal component axes. INTRA-OE was analyzed by digitizing 42 landmarks from 23 skulls in three events two weeks apart from each other. Systematic error was tested through repeated measures ANOVA (ANOVA-RM), while random error through intraclass correlation coefficient. Results showed that the largest differences between the three observations were found in the first dataset. Some anatomical points like nasion, ectoconchion, temporosphenoparietal, asterion, and temporomandibular presented the highest INTRA-OE. In the second dataset, local distances had higher INTRA-OE than global distances while the third dataset showed the lowest INTRA-OE. © 2016 American Academy of Forensic Sciences.
Executive Council lists and general practitioner files
Farmer, R. D. T.; Knox, E. G.; Cross, K. W.; Crombie, D. L.
1974-01-01
An investigation of the accuracy of general practitioner and Executive Council files was approached by a comparison of the two. High error rates were found, including both file errors and record errors. On analysis it emerged that file error rates could not be satisfactorily expressed except in a time-dimensioned way, and we were unable to do this within the context of our study. Record error rates and field error rates were expressible as proportions of the number of records on both the lists; 79·2% of all records exhibited non-congruencies and particular information fields had error rates ranging from 0·8% (assignation of sex) to 68·6% (assignation of civil state). Many of the errors, both field errors and record errors, were attributable to delayed updating of mutable information. It is concluded that the simple transfer of Executive Council lists to a computer filing system would not solve all the inaccuracies and would not in itself permit Executive Council registers to be used for any health care applications requiring high accuracy. For this it would be necessary to design and implement a purpose designed health care record system which would include, rather than depend upon, the general practitioner remuneration system. PMID:4816588
Westbrook, Johanna I.; Li, Ling; Lehnbom, Elin C.; Baysari, Melissa T.; Braithwaite, Jeffrey; Burke, Rosemary; Conn, Chris; Day, Richard O.
2015-01-01
Objectives To (i) compare medication errors identified at audit and observation with medication incident reports; (ii) identify differences between two hospitals in incident report frequency and medication error rates; (iii) identify prescribing error detection rates by staff. Design Audit of 3291 patient records at two hospitals to identify prescribing errors and evidence of their detection by staff. Medication administration errors were identified from a direct observational study of 180 nurses administering 7451 medications. Severity of errors was classified. Those likely to lead to patient harm were categorized as ‘clinically important’. Setting Two major academic teaching hospitals in Sydney, Australia. Main Outcome Measures Rates of medication errors identified from audit and from direct observation were compared with reported medication incident reports. Results A total of 12 567 prescribing errors were identified at audit. Of these, 1.2/1000 errors (95% CI: 0.6–1.8) had incident reports. Clinically important prescribing errors (n = 539) were detected by staff at a rate of 218.9/1000 (95% CI: 184.0–253.8), but only 13.0/1000 (95% CI: 3.4–22.5) were reported. 78.1% (n = 421) of clinically important prescribing errors were not detected. A total of 2043 drug administrations (27.4%; 95% CI: 26.4–28.4%) contained ≥1 errors; none had an incident report. Hospital A had a higher frequency of incident reports than Hospital B, but a lower rate of errors at audit. Conclusions Prescribing errors with the potential to cause harm frequently go undetected. Reported incidents do not reflect the profile of medication errors which occur in hospitals or the underlying rates. This demonstrates the inaccuracy of using incident frequency to compare patient risk or quality performance within or across hospitals. New approaches including data mining of electronic clinical information systems are required to support more effective medication error detection and mitigation. PMID:25583702
Schroeder, Scott R; Salomon, Meghan M; Galanter, William L; Schiff, Gordon D; Vaida, Allen J; Gaunt, Michael J; Bryson, Michelle L; Rash, Christine; Falck, Suzanne; Lambert, Bruce L
2017-05-01
Drug name confusion is a common type of medication error and a persistent threat to patient safety. In the USA, roughly one per thousand prescriptions results in the wrong drug being filled, and most of these errors involve drug names that look or sound alike. Prior to approval, drug names undergo a variety of tests to assess their potential for confusability, but none of these preapproval tests has been shown to predict real-world error rates. We conducted a study to assess the association between error rates in laboratory-based tests of drug name memory and perception and real-world drug name confusion error rates. Eighty participants, comprising doctors, nurses, pharmacists, technicians and lay people, completed a battery of laboratory tests assessing visual perception, auditory perception and short-term memory of look-alike and sound-alike drug name pairs (eg, hydroxyzine/hydralazine). Laboratory test error rates (and other metrics) significantly predicted real-world error rates obtained from a large, outpatient pharmacy chain, with the best-fitting model accounting for 37% of the variance in real-world error rates. Cross-validation analyses confirmed these results, showing that the laboratory tests also predicted errors from a second pharmacy chain, with 45% of the variance being explained by the laboratory test data. Across two distinct pharmacy chains, there is a strong and significant association between drug name confusion error rates observed in the real world and those observed in laboratory-based tests of memory and perception. Regulators and drug companies seeking a validated preapproval method for identifying confusing drug names ought to consider using these simple tests. By using a standard battery of memory and perception tests, it should be possible to reduce the number of confusing look-alike and sound-alike drug name pairs that reach the market, which will help protect patients from potentially harmful medication errors. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
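The core analysis described here is a regression of real-world confusion error rates on laboratory test error rates, validated against a second pharmacy chain. The Python sketch below, with entirely hypothetical per-name-pair rates, shows the shape of such an analysis; it is not the authors' model, which used additional metrics and a richer cross-validation design.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
# Hypothetical per-drug-name-pair error rates (fractions), not study data.
lab_error_rate = rng.uniform(0.0, 0.3, size=80)                      # laboratory battery
real_error_rate = 0.02 * lab_error_rate + rng.normal(0, 0.001, 80)   # pharmacy chain

X = lab_error_rate.reshape(-1, 1)
model = LinearRegression().fit(X, real_error_rate)
r2 = model.score(X, real_error_rate)          # in-sample variance explained
cv_r2 = cross_val_score(model, X, real_error_rate, cv=5, scoring="r2")
print(f"R^2 = {r2:.2f}, 5-fold CV R^2 = {cv_r2.mean():.2f}")
```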
Worldwide prevalence and incidence of osteoporotic vertebral fractures.
Ballane, G; Cauley, J A; Luckey, M M; El-Hajj Fuleihan, G
2017-05-01
We investigated the prevalence and incidence of vertebral fractures worldwide. We used a systematic Medline search current to 2015 and updated as per authors' libraries. A total of 62 articles of fair to good quality and comparable methods for vertebral fracture identification were considered. The prevalence of morphometric vertebral fractures in European women is highest in Scandinavia (26%) and lowest in Eastern Europe (18%). Prevalence rates in North America (NA) for White women ≥50 are 20-24%, with a White/Black ratio of 1.6. Rates in women ≥50 years in Latin America are overall lower than Europe and NA (11-19%). In Asia, rates in women above ≥65 are highest in Japan (24%), lowest in Indonesia (9%), and in the Middle East, Lebanon, rates are 20%. The highest-lowest ratio between countries, within and across continents, varied from 1.4-2.6. Incidence data is less abundant and more heterogeneous. Age-standardized rates in studies combining hospitalized and ambulatory vertebral fractures are highest in South Korea, USA, and Hong Kong and lowest in the UK. Neither a North-South gradient nor a relation to urbanization is evident. Conversely, the incidence of hospitalized vertebral fractures in European patients ≥50 shows a North-South gradient with 3-3.7-fold variability. In the USA, rates in Whites are approximately 4-fold higher than in Blacks. Vertebral fractures variation worldwide is lower than observed with hip fractures, and some of highest rates are unexpectedly from Asia. Better quality representative studies are needed. We investigate the occurrence of vertebral fractures, worldwide, using published data current until the present. Worldwide, the variation in vertebral fractures is lower than observed for hip fractures. Some of the highest rates are from North America and unexpectedly Asia. The highest-lowest ratio between countries, within and across continents, varied from 1.4-2.6. Better quality representative data is needed.
Classification based upon gene expression data: bias and precision of error rates.
Wood, Ian A; Visscher, Peter M; Mengersen, Kerrie L
2007-06-01
Gene expression data offer a large number of potentially useful predictors for the classification of tissue samples into classes, such as diseased and non-diseased. The predictive error rate of classifiers can be estimated using methods such as cross-validation. We have investigated issues of interpretation and potential bias in the reporting of error rate estimates. The issues considered here are optimization and selection biases, sampling effects, measures of misclassification rate, baseline error rates, two-level external cross-validation and a novel proposal for detection of bias using the permutation mean. Reporting an optimal estimated error rate incurs an optimization bias. Downward bias of 3-5% was found in an existing study of classification based on gene expression data and may be endemic in similar studies. Using a simulated non-informative dataset and two example datasets from existing studies, we show how bias can be detected through the use of label permutations and avoided using two-level external cross-validation. Some studies avoid optimization bias by using single-level cross-validation and a test set, but error rates can be more accurately estimated via two-level cross-validation. In addition to estimating the simple overall error rate, we recommend reporting class error rates plus where possible the conditional risk incorporating prior class probabilities and a misclassification cost matrix. We also describe baseline error rates derived from three trivial classifiers which ignore the predictors. R code which implements two-level external cross-validation with the PAMR package, experiment code, dataset details and additional figures are freely available for non-commercial use from http://www.maths.qut.edu.au/profiles/wood/permr.jsp
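A minimal sketch of the two safeguards recommended above, two-level (nested) external cross-validation and a label-permutation check, on a deliberately non-informative synthetic dataset; the classifier, tuning grid, and data dimensions are illustrative assumptions rather than the paper's setup (which used the PAMR package in R).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score

rng = np.random.default_rng(2)
# Hypothetical non-informative data: 60 samples x 500 "genes", random labels.
X = rng.normal(size=(60, 500))
y = rng.integers(0, 2, size=60)

inner = GridSearchCV(                      # level 1: tune within each training fold
    LogisticRegression(penalty="l1", solver="liblinear", max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0]},
    cv=3,
)
outer_acc = cross_val_score(inner, X, y, cv=5)   # level 2: external error estimate
print(f"two-level CV error rate: {1 - outer_acc.mean():.2f}")

# Permutation check: on non-informative data the error should stay near the
# baseline (~0.5 here); a large gap signals optimization/selection bias.
perm_acc = [cross_val_score(inner, X, rng.permutation(y), cv=5).mean()
            for _ in range(5)]
print(f"permutation-mean error rate: {1 - np.mean(perm_acc):.2f}")
```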
Do Errors on Classroom Reading Tasks Slow Growth in Reading? Technical Report No. 404.
ERIC Educational Resources Information Center
Anderson, Richard C.; And Others
A pervasive finding from research on teaching and classroom learning is that a low rate of error on classroom tasks is associated with large year to year gains in achievement, particularly for reading in the primary grades. The finding of a negative relationship between error rate, especially rate of oral reading errors, and gains in reading…
Estimating genotype error rates from high-coverage next-generation sequence data.
Wall, Jeffrey D; Tang, Ling Fung; Zerbe, Brandon; Kvale, Mark N; Kwok, Pui-Yan; Schaefer, Catherine; Risch, Neil
2014-11-01
Exome and whole-genome sequencing studies are becoming increasingly common, but little is known about the accuracy of the genotype calls made by the commonly used platforms. Here we use replicate high-coverage sequencing of blood and saliva DNA samples from four European-American individuals to estimate lower bounds on the error rates of Complete Genomics and Illumina HiSeq whole-genome and whole-exome sequencing. Error rates for nonreference genotype calls range from 0.1% to 0.6%, depending on the platform and the depth of coverage. Additionally, we found (1) no difference in the error profiles or rates between blood and saliva samples; (2) Complete Genomics sequences had substantially higher error rates than Illumina sequences had; (3) error rates were higher (up to 6%) for rare or unique variants; (4) error rates generally declined with genotype quality (GQ) score, but in a nonlinear fashion for the Illumina data, likely due to loss of specificity of GQ scores greater than 60; and (5) error rates increased with increasing depth of coverage for the Illumina data. These findings, especially (3)-(5), suggest that caution should be taken in interpreting the results of next-generation sequencing-based association studies, and even more so in clinical application of this technology in the absence of validation by other more robust sequencing or genotyping methods. © 2014 Wall et al.; Published by Cold Spring Harbor Laboratory Press.
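The lower-bound error estimate described above rests on discordance between replicate calls at non-reference sites. A toy Python sketch of that calculation follows; the variant identifiers and genotype strings are made up, and a real pipeline would parse VCFs and stratify by platform, depth, and GQ score.

```python
import pandas as pd

# Hypothetical replicate genotype calls for one individual: one row per variant,
# columns are the genotype string from each sequencing replicate.
calls = pd.DataFrame({
    "variant": ["chr1:1000", "chr1:2500", "chr2:300", "chr2:910", "chr3:77"],
    "rep1":    ["0/1",       "1/1",       "0/0",      "0/1",      "0/1"],
    "rep2":    ["0/1",       "0/1",       "0/0",      "0/1",      "1/1"],
})

# Simplified lower-bound error estimate: discordance among sites where at
# least one replicate calls a non-reference genotype.
nonref = calls[(calls.rep1 != "0/0") | (calls.rep2 != "0/0")]
discordant = (nonref.rep1 != nonref.rep2).sum()
print(f"non-reference discordance: {discordant}/{len(nonref)} "
      f"= {discordant / len(nonref):.1%}")
```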
ERIC Educational Resources Information Center
Richard, Alan
2006-01-01
Florida's voucher program for students in the lowest-rated public schools is unconstitutional, the state supreme court ruled early January 2006 in a 5-2 decision that friends and foes of private school choice are scrutinizing for its potential impact on voucher debates nationwide. Chief Justice Barbara J. Pariente of the Florida Supreme Court…
Taylor, Gordon T.; Suter, Elizabeth A.; Li, Zhuo Q.; Chow, Stephanie; Stinton, Dallyce; Zaliznyak, Tatiana; Beaupré, Steven R.
2017-01-01
A new method to measure growth rates of individual photoautotrophic cells by combining stable isotope probing (SIP) and single-cell resonance Raman microspectrometry is introduced. This report explores optimal experimental design and the theoretical underpinnings for quantitative responses of Raman spectra to cellular isotopic composition. Resonance Raman spectra of isogenic cultures of the cyanobacterium, Synechococcus sp., grown in 13C-bicarbonate revealed linear covariance between wavenumber (cm−1) shifts in dominant carotenoid Raman peaks and a broad range of cellular 13C fractional isotopic abundance. Single-cell growth rates were calculated from spectra-derived isotopic content and empirical relationships. Growth rates among any 25 cells in a sample varied considerably; mean coefficient of variation, CV, was 29 ± 3% (σ/x¯), of which only ~2% was propagated analytical error. Instantaneous population growth rates measured independently by in vivo fluorescence also varied daily (CV ≈ 53%) and were statistically indistinguishable from single-cell growth rates at all but the lowest levels of cell labeling. SCRR censuses of mixtures prepared from Synechococcus sp. and T. pseudonana (a diatom) populations with varying 13C-content and growth rates closely approximated predicted spectral responses and fractional labeling of cells added to the sample. This approach enables direct microspectrometric interrogation of isotopically- and phylogenetically-labeled cells and detects as little as 3% changes in cellular fractional labeling. This is the first description of a non-destructive technique to measure single-cell photoautotrophic growth rates based on Raman spectroscopy and well-constrained assumptions, while requiring few ancillary measurements. PMID:28824580
Speech Errors across the Lifespan
ERIC Educational Resources Information Center
Vousden, Janet I.; Maylor, Elizabeth A.
2006-01-01
Dell, Burger, and Svec (1997) proposed that the proportion of speech errors classified as anticipations (e.g., "moot and mouth") can be predicted solely from the overall error rate, such that the greater the error rate, the lower the anticipatory proportion (AP) of errors. We report a study examining whether this effect applies to changes in error…
Computer calculated dose in paediatric prescribing.
Kirk, Richard C; Li-Meng Goh, Denise; Packia, Jeya; Min Kam, Huey; Ong, Benjamin K C
2005-01-01
Medication errors are an important cause of hospital-based morbidity and mortality. However, only a few medication error studies have been conducted in children. These have mainly quantified errors in the inpatient setting; there is very little data available on paediatric outpatient and emergency department medication errors and none on discharge medication. This deficiency is of concern because medication errors are more common in children and it has been suggested that the risk of an adverse drug event as a consequence of a medication error is higher in children than in adults. The aims of this study were to assess the rate of medication errors in predominantly ambulatory paediatric patients and the effect of computer calculated doses on medication error rates of two commonly prescribed drugs. This was a prospective cohort study performed in a paediatric unit in a university teaching hospital between March 2003 and August 2003. The hospital's existing computer clinical decision support system was modified so that doctors could choose the traditional prescription method or the enhanced method of computer calculated dose when prescribing paracetamol (acetaminophen) or promethazine. All prescriptions issued to children (<16 years of age) at the outpatient clinic, emergency department and at discharge from the inpatient service were analysed. A medication error was defined to have occurred if there was an underdose (below the agreed value), an overdose (above the agreed value), no frequency of administration specified, no dose given or excessive total daily dose. The medication error rates and the factors influencing medication error rates were determined using SPSS version 12. From March to August 2003, 4281 prescriptions were issued. Seven prescriptions (0.16%) were excluded, hence 4274 prescriptions were analysed. Most prescriptions were issued by paediatricians (including neonatologists and paediatric surgeons) and/or junior doctors. The error rate in the children's emergency department was 15.7%, for outpatients was 21.5% and for discharge medication was 23.6%. Most errors were the result of an underdose (64%; 536/833). The computer calculated dose error rate was 12.6% compared with the traditional prescription error rate of 28.2%. Logistic regression analysis showed that computer calculated dose was an important and independent variable influencing the error rate (adjusted relative risk = 0.436, 95% CI 0.336, 0.520, p < 0.001). Other important independent variables were seniority and paediatric training of the person prescribing and the type of drug prescribed. Medication error, especially underdose, is common in outpatient, emergency department and discharge prescriptions. Computer calculated doses can significantly reduce errors, but other risk factors have to be concurrently addressed to achieve maximum benefit.
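The intervention hinges on comparing a prescribed dose against a weight-based acceptable range. The Python fragment below sketches such a check; the paracetamol mg/kg limits shown are placeholders for illustration only and are not taken from the study or from clinical guidance.

```python
# Illustrative dose-range check in the spirit of a computer-calculated dose aid.
# The mg/kg limits below are placeholders, not clinical guidance.
PARACETAMOL_MG_PER_KG = (10.0, 15.0)   # hypothetical per-dose lower/upper bound

def check_dose(weight_kg: float, prescribed_mg: float,
               limits=PARACETAMOL_MG_PER_KG) -> str:
    low, high = weight_kg * limits[0], weight_kg * limits[1]
    if prescribed_mg < low:
        return f"underdose (expected {low:.0f}-{high:.0f} mg)"
    if prescribed_mg > high:
        return f"overdose (expected {low:.0f}-{high:.0f} mg)"
    return "within range"

print(check_dose(weight_kg=18.0, prescribed_mg=120.0))   # -> underdose
```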
Angular rate optimal design for the rotary strapdown inertial navigation system.
Yu, Fei; Sun, Qian
2014-04-22
Because it maintains high precision over long durations, the rotary strapdown inertial navigation system (RSINS) has been widely used in submarines and surface ships. Its core technology, the rotating scheme, has been studied by numerous researchers. As one of the key design parameters, the rotating angular rate strongly influences the effectiveness of error modulation. To design the optimal rotating angular rate of the RSINS, this paper analyzes in detail the relationship between the rotating angular rate and the velocity error of the RSINS, based on the Laplace transform and the inverse Laplace transform. The analysis shows that the velocity error of the RSINS depends not only on the sensor error but also on the rotating angular rate. To minimize the velocity error, the rotating angular rate of the RSINS should be matched to the sensor error. An optimal design method for the rotating rate of the RSINS is also proposed. Simulation and experimental results verified the validity and superiority of this optimal design method.
Bulik, Catharine C.; Fauntleroy, Kathy A.; Jenkins, Stephen G.; Abuali, Mayssa; LaBombardi, Vincent J.; Nicolau, David P.; Kuti, Joseph L.
2010-01-01
We describe the levels of agreement between broth microdilution, Etest, Vitek 2, Sensititre, and MicroScan methods to accurately define the meropenem MIC and categorical interpretation of susceptibility against carbapenemase-producing Klebsiella pneumoniae (KPC). A total of 46 clinical K. pneumoniae isolates with KPC genotypes, all modified Hodge test and blaKPC positive, collected from two hospitals in NY were included. Results obtained by each method were compared with those from broth microdilution (the reference method), and agreement was assessed based on MICs and Clinical Laboratory Standards Institute (CLSI) interpretative criteria using 2010 susceptibility breakpoints. Based on broth microdilution, 0%, 2.2%, and 97.8% of the KPC isolates were classified as susceptible, intermediate, and resistant to meropenem, respectively. Results from MicroScan demonstrated the most agreement with those from broth microdilution, with 95.6% agreement based on the MIC and 2.2% classified as minor errors, and no major or very major errors. Etest demonstrated 82.6% agreement with broth microdilution MICs, a very major error rate of 2.2%, and a minor error rate of 2.2%. Vitek 2 MIC agreement was 30.4%, with a 23.9% very major error rate and a 39.1% minor error rate. Sensititre demonstrated MIC agreement for 26.1% of isolates, with a 3% very major error rate and a 26.1% minor error rate. Application of FDA breakpoints had little effect on minor error rates but increased very major error rates to 58.7% for Vitek 2 and Sensititre. Meropenem MIC results and categorical interpretations for carbapenemase-producing K. pneumoniae differ by methodology. Confirmation of testing results is encouraged when an accurate MIC is required for antibiotic dosing optimization. PMID:20484603
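Categorical agreement in this kind of comparison is defined against the reference method: a very major error is a resistant isolate called susceptible, a major error is the reverse, and a minor error involves the intermediate category. A short Python sketch of that bookkeeping follows; the MIC values and breakpoints in it are invented for illustration and are not the CLSI or FDA meropenem breakpoints.

```python
# Sketch of categorical agreement against the reference method.
# Breakpoints and MIC values are illustrative only.
def category(mic, susceptible_bp, resistant_bp):
    if mic <= susceptible_bp:
        return "S"
    if mic >= resistant_bp:
        return "R"
    return "I"

def error_rates(reference_mics, test_mics, s_bp, r_bp):
    vme = me = minor = 0
    for ref, test in zip(reference_mics, test_mics):
        ref_cat, test_cat = category(ref, s_bp, r_bp), category(test, s_bp, r_bp)
        if ref_cat == "R" and test_cat == "S":
            vme += 1                       # very major error
        elif ref_cat == "S" and test_cat == "R":
            me += 1                        # major error
        elif ref_cat != test_cat:
            minor += 1                     # minor error
    n = len(reference_mics)
    return vme / n, me / n, minor / n

ref = [16, 8, 32, 4, 16]                  # hypothetical broth microdilution MICs
test = [16, 1, 32, 8, 16]                 # hypothetical commercial-system MICs
print(error_rates(ref, test, s_bp=1, r_bp=4))
```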
Kim, Myoung-Soo; Kim, Jung-Soon; Jung, In Sook; Kim, Young Hae; Kim, Ho Jung
2007-03-01
The purpose of this study was to develop and evaluate an error-reporting promoting program (ERPP) to systematically reduce the incidence rate of nursing errors in the operating room. A non-equivalent control group, non-synchronized design was used. Twenty-six operating room nurses from one university hospital in Busan participated in this study. They were stratified into four groups according to their operating room experience and were allocated to the experimental and control groups using a matching method. The Mann-Whitney U test was used to analyze differences between the two groups in pre- and post-intervention incidence rates of nursing errors. The incidence rate of nursing errors decreased significantly in the experimental group, from 28.4% at pre-test to 15.7%. By domain, the incidence rate decreased significantly in three domains ("compliance with aseptic technique", "management of documents", "environmental management") in the experimental group, while it also decreased in the control group, which used the ordinary error-reporting method. An error-reporting system makes it possible to share errors and to learn from them. The ERPP was effective in reducing errors in recognition-related nursing activities. For more effective error prevention, risk management efforts should be applied across the whole health care system together with this program.
Validation Relaxation: A Quality Assurance Strategy for Electronic Data Collection.
Kenny, Avi; Gordon, Nicholas; Griffiths, Thomas; Kraemer, John D; Siedner, Mark J
2017-08-18
The use of mobile devices for data collection in developing world settings is becoming increasingly common and may offer advantages in data collection quality and efficiency relative to paper-based methods. However, mobile data collection systems can hamper many standard quality assurance techniques due to the lack of a hardcopy backup of data. Consequently, mobile health data collection platforms have the potential to generate datasets that appear valid, but are susceptible to unidentified database design flaws, areas of miscomprehension by enumerators, and data recording errors. We describe the design and evaluation of a strategy for estimating data error rates and assessing enumerator performance during electronic data collection, which we term "validation relaxation." Validation relaxation involves the intentional omission of data validation features for select questions to allow for data recording errors to be committed, detected, and monitored. We analyzed data collected during a cluster sample population survey in rural Liberia using an electronic data collection system (Open Data Kit). We first developed a classification scheme for types of detectable errors and validation alterations required to detect them. We then implemented the following validation relaxation techniques to enable data error conduct and detection: intentional redundancy, removal of "required" constraint, and illogical response combinations. This allowed for up to 11 identifiable errors to be made per survey. The error rate was defined as the total number of errors committed divided by the number of potential errors. We summarized crude error rates and estimated changes in error rates over time for both individuals and the entire program using logistic regression. The aggregate error rate was 1.60% (125/7817). Error rates did not differ significantly between enumerators (P=.51), but decreased for the cohort with increasing days of application use, from 2.3% at survey start (95% CI 1.8%-2.8%) to 0.6% at day 45 (95% CI 0.3%-0.9%; OR=0.969; P<.001). The highest error rate (84/618, 13.6%) occurred for an intentional redundancy question for a birthdate field, which was repeated in separate sections of the survey. We found low error rates (0.0% to 3.1%) for all other possible errors. A strategy of removing validation rules on electronic data capture platforms can be used to create a set of detectable data errors, which can subsequently be used to assess group and individual enumerator error rates, their trends over time, and categories of data collection that require further training or additional quality control measures. This strategy may be particularly useful for identifying individual enumerators or systematic data errors that are responsive to enumerator training and is best applied to questions for which errors cannot be prevented through training or software design alone. Validation relaxation should be considered as a component of a holistic data quality assurance strategy. ©Avi Kenny, Nicholas Gordon, Thomas Griffiths, John D Kraemer, Mark J Siedner. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 18.08.2017.
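The per-enumerator and cohort-level trend analysis described above amounts to a logistic regression of error occurrence on days of application use. A small Python sketch of that model on simulated opportunity-level data follows; the sample size, intercept, and slope are arbitrary assumptions chosen only to mimic a declining error rate.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
# Hypothetical per-opportunity records: day of application use and whether an
# error was committed on that detectable-error opportunity.
days = rng.integers(0, 46, size=5000)
p_error = 1 / (1 + np.exp(-(-3.7 - 0.03 * days)))       # simulated decline
error = rng.binomial(1, p_error)

X = sm.add_constant(days.astype(float))
fit = sm.GLM(error, X, family=sm.families.Binomial()).fit()
odds_ratio_per_day = np.exp(fit.params[1])
print(f"OR per additional day of use: {odds_ratio_per_day:.3f}")
```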
NASA Technical Reports Server (NTRS)
Olson, William S.; Kummerow, Christian D.; Yang, Song; Petty, Grant W.; Tao, Wei-Kuo; Bell, Thomas L.; Braun, Scott A.; Wang, Yansen; Lang, Stephen E.; Johnson, Daniel E.;
2006-01-01
A revised Bayesian algorithm for estimating surface rain rate, convective rain proportion, and latent heating profiles from satellite-borne passive microwave radiometer observations over ocean backgrounds is described. The algorithm searches a large database of cloud-radiative model simulations to find cloud profiles that are radiatively consistent with a given set of microwave radiance measurements. The properties of these radiatively consistent profiles are then composited to obtain best estimates of the observed properties. The revised algorithm is supported by an expanded and more physically consistent database of cloud-radiative model simulations. The algorithm also features a better quantification of the convective and nonconvective contributions to total rainfall, a new geographic database, and an improved representation of background radiances in rain-free regions. Bias and random error estimates are derived from applications of the algorithm to synthetic radiance data, based upon a subset of cloud-resolving model simulations, and from the Bayesian formulation itself. Synthetic rain-rate and latent heating estimates exhibit a trend of high (low) bias for low (high) retrieved values. The Bayesian estimates of random error are propagated to represent errors at coarser time and space resolutions, based upon applications of the algorithm to TRMM Microwave Imager (TMI) data. Errors in TMI instantaneous rain-rate estimates at 0.5° resolution range from approximately 50% at 1 mm/h to 20% at 14 mm/h. Errors in collocated spaceborne radar rain-rate estimates are roughly 50%-80% of the TMI errors at this resolution. The estimated algorithm random error in TMI rain rates at monthly, 2.5° resolution is relatively small (less than 6% at 5 mm day⁻¹) in comparison with the random error resulting from infrequent satellite temporal sampling (8%-35% at the same rain rate). Percentage errors resulting from sampling decrease with increasing rain rate, and sampling errors in latent heating rates follow the same trend. Averaging over 3 months reduces sampling errors in rain rates to 6%-15% at 5 mm day⁻¹, with proportionate reductions in latent heating sampling errors.
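At its core, this kind of Bayesian retrieval weights each database profile by its radiative consistency with the observed brightness temperatures and composites the associated rain rates. The Python toy below shows that weighting step with synthetic numbers; the Gaussian weight with a single diagonal observation-error variance is a simplification of the full algorithm's covariance treatment.

```python
import numpy as np

# Toy Bayesian retrieval: weight database profiles by how consistent their
# simulated brightness temperatures are with the observed radiances.
# Database values, observation, and error variance are all synthetic.
rng = np.random.default_rng(4)
n_profiles, n_channels = 5000, 9
tb_db = rng.normal(250.0, 15.0, size=(n_profiles, n_channels))  # simulated Tb (K)
rain_db = rng.gamma(shape=1.2, scale=2.0, size=n_profiles)      # rain rate (mm/h)

tb_obs = tb_db[123] + rng.normal(0.0, 2.0, size=n_channels)     # "observation"
obs_error_var = 4.0                                             # K^2, diagonal

d2 = ((tb_db - tb_obs) ** 2).sum(axis=1) / obs_error_var        # squared distance
weights = np.exp(-0.5 * d2)
rain_estimate = (weights * rain_db).sum() / weights.sum()
print(f"retrieved rain rate: {rain_estimate:.2f} mm/h")
```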
2015-04-30
Proposal Prep Delays [F6]). For this particular factor, PCOs reported the highest impact while PMs ranked it as the lowest. Senior staff cited that Shortage...of Contracting Officers (F4) created the highest impact, while PCOs reported it had the lowest impact. With 7%, it was the lowest among all six
An error criterion for determining sampling rates in closed-loop control systems
NASA Technical Reports Server (NTRS)
Brecher, S. M.
1972-01-01
The determination of an error criterion which will give a sampling rate for adequate performance of linear, time-invariant closed-loop, discrete-data control systems was studied. The proper modelling of the closed-loop control system for characterization of the error behavior, and the determination of an absolute error definition for performance of the two commonly used holding devices are discussed. The definition of an adequate relative error criterion as a function of the sampling rate and the parameters characterizing the system is established along with the determination of sampling rates. The validity of the expressions for the sampling interval was confirmed by computer simulations. Their application solves the problem of making a first choice in the selection of sampling rates.
Experimental investigation of false positive errors in auditory species occurrence surveys
Miller, David A.W.; Weir, Linda A.; McClintock, Brett T.; Grant, Evan H. Campbell; Bailey, Larissa L.; Simons, Theodore R.
2012-01-01
False positive errors are a significant component of many ecological data sets, which in combination with false negative errors, can lead to severe biases in conclusions about ecological systems. We present results of a field experiment where observers recorded observations for known combinations of electronically broadcast calling anurans under conditions mimicking field surveys to determine species occurrence. Our objectives were to characterize false positive error probabilities for auditory methods based on a large number of observers, to determine if targeted instruction could be used to reduce false positive error rates, and to establish useful predictors of among-observer and among-species differences in error rates. We recruited 31 observers, ranging in abilities from novice to expert, that recorded detections for 12 species during 180 calling trials (66,960 total observations). All observers made multiple false positive errors and on average 8.1% of recorded detections in the experiment were false positive errors. Additional instruction had only minor effects on error rates. After instruction, false positive error probabilities decreased by 16% for treatment individuals compared to controls with broad confidence interval overlap of 0 (95% CI: -46 to 30%). This coincided with an increase in false negative errors due to the treatment (26%; -3 to 61%). Differences among observers in false positive and in false negative error rates were best predicted by scores from an online test and a self-assessment of observer ability completed prior to the field experiment. In contrast, years of experience conducting call surveys was a weak predictor of error rates. False positive errors were also more common for species that were played more frequently, but were not related to the dominant spectral frequency of the call. Our results corroborate other work that demonstrates false positives are a significant component of species occurrence data collected by auditory methods. Instructing observers to only report detections they are completely certain are correct is not sufficient to eliminate errors. As a result, analytical methods that account for false positive errors will be needed, and independent testing of observer ability is a useful predictor for among-observer variation in observation error rates.
NASA Technical Reports Server (NTRS)
Gaskell, R. W.; Synnott, S. P.
1987-01-01
To investigate the large scale topography of the Jovian satellite Io, both limb observations and stereographic techniques applied to landmarks are used. The raw data for this study consists of Voyager 1 images of Io, 800x800 arrays of picture elements each of which can take on 256 possible brightness values. In analyzing this data it was necessary to identify and locate landmarks and limb points on the raw images, remove the image distortions caused by the camera electronics and translate the corrected locations into positions relative to a reference geoid. Minimizing the uncertainty in the corrected locations is crucial to the success of this project. In the highest resolution frames, an error of a tenth of a pixel in image space location can lead to a 300 m error in true location. In the lowest resolution frames, the same error can lead to an uncertainty of several km.
Delay compensation - Its effect in reducing sampling errors in Fourier spectroscopy
NASA Technical Reports Server (NTRS)
Zachor, A. S.; Aaronson, S. M.
1979-01-01
An approximate formula is derived for the spectrum ghosts caused by periodic drive speed variations in a Michelson interferometer. The solution represents the case of fringe-controlled sampling and is applicable when the reference fringes are delayed to compensate for the delay introduced by the electrical filter in the signal channel. Numerical results are worked out for several common low-pass filters. It is shown that the maximum relative ghost amplitude over the range of frequencies corresponding to the lower half of the filter band is typically 20 times smaller than the relative zero-to-peak velocity error, when delayed sampling is used. In the lowest quarter of the filter band it is more than 100 times smaller than the relative velocity error. These values are ten and forty times smaller, respectively, than they would be without delay compensation if the filter is a 6-pole Butterworth.
Cross-Spectrum PM Noise Measurement, Thermal Energy, and Metamaterial Filters.
Gruson, Yannick; Giordano, Vincent; Rohde, Ulrich L; Poddar, Ajay K; Rubiola, Enrico
2017-03-01
Virtually all commercial instruments for the measurement of the oscillator PM noise make use of the cross-spectrum method (arXiv:1004.5539 [physics.ins-det], 2010). High sensitivity is achieved by correlation and averaging on two equal channels, which measure the same input, and reject the background of the instrument. We show that a systematic error is always present if the thermal energy of the input power splitter is not accounted for. Such error can result in noise underestimation up to a few decibels in the lowest-noise quartz oscillators, and in an invalid measurement in the case of cryogenic oscillators. As another alarming fact, the presence of metamaterial components in the oscillator results in unpredictable behavior and large errors, even in well controlled experimental conditions. We observed a spread of 40 dB in the phase noise spectra of an oscillator, just replacing the output filter.
Technological Advancements and Error Rates in Radiation Therapy Delivery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Margalit, Danielle N., E-mail: dmargalit@partners.org; Harvard Cancer Consortium and Brigham and Women's Hospital/Dana Farber Cancer Institute, Boston, MA; Chen, Yu-Hui
2011-11-15
Purpose: Technological advances in radiation therapy (RT) delivery have the potential to reduce errors via increased automation and built-in quality assurance (QA) safeguards, yet may also introduce new types of errors. Intensity-modulated RT (IMRT) is an increasingly used technology that is more technically complex than three-dimensional (3D)-conformal RT and conventional RT. We determined the rate of reported errors in RT delivery among IMRT and 3D/conventional RT treatments and characterized the errors associated with the respective techniques to improve existing QA processes. Methods and Materials: All errors in external beam RT delivery were prospectively recorded via a nonpunitive error-reporting system at Brigham and Women's Hospital/Dana Farber Cancer Institute. Errors are defined as any unplanned deviation from the intended RT treatment and are reviewed during monthly departmental quality improvement meetings. We analyzed all reported errors since the routine use of IMRT in our department, from January 2004 to July 2009. Fisher's exact test was used to determine the association between treatment technique (IMRT vs. 3D/conventional) and specific error types. Effect estimates were computed using logistic regression. Results: There were 155 errors in RT delivery among 241,546 fractions (0.06%), and none were clinically significant. IMRT was commonly associated with errors in machine parameters (nine of 19 errors) and data entry and interpretation (six of 19 errors). IMRT was associated with a lower rate of reported errors compared with 3D/conventional RT (0.03% vs. 0.07%, p = 0.001) and specifically fewer accessory errors (odds ratio, 0.11; 95% confidence interval, 0.01-0.78) and setup errors (odds ratio, 0.24; 95% confidence interval, 0.08-0.79). Conclusions: The rate of errors in RT delivery is low. The types of errors differ significantly between IMRT and 3D/conventional RT, suggesting that QA processes must be uniquely adapted for each technique. There was a lower error rate with IMRT compared with 3D/conventional RT, highlighting the need for sustained vigilance against errors common to more traditional treatment techniques.
NASA Astrophysics Data System (ADS)
Rahmat, R. F.; Nasution, F. R.; Seniman; Syahputra, M. F.; Sitompul, O. S.
2018-02-01
Weather is the condition of the air in a given region over a relatively short period of time, measured with various parameters such as temperature, air pressure, wind velocity, humidity, and other atmospheric phenomena. Extreme weather due to global warming can lead to drought, flood, hurricanes, and other weather events, which directly affect social and economic activities. This work presents a forecasting technique that predicts the weather and maps it with GIS, reporting the current weather status at given coordinates of each region together with a forecast for the following seven days. The data used in this research are retrieved in real time from the openweathermap server and BMKG. To obtain a low error rate and high forecasting accuracy, the authors use the Bayesian Model Averaging (BMA) method. The results show that the BMA method has good accuracy. Forecasting error is calculated as the mean square error (MSE). The error value is 0.28 for minimum temperature and 0.15 for maximum temperature. The error value for minimum humidity is 0.38 and for maximum humidity is 0.04. Finally, the forecasting error for wind speed is 0.076. The lower the forecasting error, the better the accuracy.
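The reported accuracy figures are mean square errors of the combined forecast against observations. The short Python sketch below computes member and combined MSE on made-up temperature series; the inverse-MSE weighting is only a naive stand-in for the EM-fitted weights of a real BMA model.

```python
import numpy as np

# Toy check of forecast error in the same spirit: mean square error between
# forecast and observed series; values are synthetic placeholders.
observed = np.array([24.1, 25.0, 23.7, 26.2, 25.5])        # e.g. max temperature
member_forecasts = np.array([                               # two model members
    [24.5, 25.2, 23.0, 26.0, 25.9],
    [23.8, 24.6, 24.1, 26.5, 25.1],
])

# A naive stand-in for BMA: weight members by inverse training MSE and average.
mse_members = ((member_forecasts - observed) ** 2).mean(axis=1)
weights = (1 / mse_members) / (1 / mse_members).sum()
combined = weights @ member_forecasts

print("member MSE:", np.round(mse_members, 3))
print("combined MSE:", round(float(((combined - observed) ** 2).mean()), 3))
```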
Schwantes-An, Tae-Hwi; Sung, Heejong; Sabourin, Jeremy A; Justice, Cristina M; Sorant, Alexa J M; Wilson, Alexander F
2016-01-01
In this study, the effects of (a) the minor allele frequency of the single nucleotide variant (SNV), (b) the degree of departure from normality of the trait, and (c) the position of the SNVs on type I error rates were investigated in the Genetic Analysis Workshop (GAW) 19 whole exome sequence data. To test the distribution of the type I error rate, 5 simulated traits were considered: standard normal and gamma distributed traits; 2 transformed versions of the gamma trait (log 10 and rank-based inverse normal transformations); and trait Q1 provided by GAW 19. Each trait was tested with 313,340 SNVs. Tests of association were performed with simple linear regression and average type I error rates were determined for minor allele frequency classes. Rare SNVs (minor allele frequency < 0.05) showed inflated type I error rates for non-normally distributed traits that increased as the minor allele frequency decreased. The inflation of average type I error rates increased as the significance threshold decreased. Normally distributed traits did not show inflated type I error rates with respect to the minor allele frequency for rare SNVs. There was no consistent effect of transformation on the uniformity of the distribution of the location of SNVs with a type I error.
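The inflation described above can be reproduced with a small simulation: regress a skewed trait on a rare variant under the null many times and count how often p falls below the threshold. The Python sketch below does this with simple linear regression; the sample size, minor allele frequency, and gamma trait are illustrative choices, not the GAW 19 data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n, maf, alpha, n_reps = 1000, 0.01, 0.05, 2000

rejections = 0
for _ in range(n_reps):
    genotype = rng.binomial(2, maf, size=n)          # rare SNV under the null
    trait = rng.gamma(shape=1.0, scale=1.0, size=n)  # skewed, non-normal trait
    if genotype.std() == 0:                          # monomorphic draw: skip
        continue
    p = stats.linregress(genotype, trait).pvalue     # simple linear regression
    rejections += p < alpha

print(f"empirical type I error rate: {rejections / n_reps:.3f} (nominal {alpha})")
```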
Estimating Rain Rates from Tipping-Bucket Rain Gauge Measurements
NASA Technical Reports Server (NTRS)
Wang, Jianxin; Fisher, Brad L.; Wolff, David B.
2007-01-01
This paper describes the cubic spline based operational system for the generation of the TRMM one-minute rain rate product 2A-56 from Tipping Bucket (TB) gauge measurements. Methodological issues associated with applying the cubic spline to the TB gauge rain rate estimation are closely examined. A simulated TB gauge from a Joss-Waldvogel (JW) disdrometer is employed to evaluate effects of time scales and rain event definitions on errors of the rain rate estimation. The comparison between rain rates measured from the JW disdrometer and those estimated from the simulated TB gauge shows good overall agreement; however, the TB gauge suffers sampling problems, resulting in errors in the rain rate estimation. These errors are very sensitive to the time scale of rain rates. One-minute rain rates suffer substantial errors, especially at low rain rates. When one minute rain rates are averaged to 4-7 minute or longer time scales, the errors dramatically reduce. The rain event duration is very sensitive to the event definition but the event rain total is rather insensitive, provided that the events with less than 1 millimeter rain totals are excluded. Estimated lower rain rates are sensitive to the event definition whereas the higher rates are not. The median relative absolute errors are about 22% and 32% for 1-minute TB rain rates higher and lower than 3 mm per hour, respectively. These errors decrease to 5% and 14% when TB rain rates are used at 7-minute scale. The radar reflectivity-rainrate (Ze-R) distributions drawn from large amount of 7-minute TB rain rates and radar reflectivity data are mostly insensitive to the event definition.
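The essential operation is fitting a cubic spline to the cumulative rainfall implied by the tip times and differentiating it to obtain one-minute rain rates. A minimal Python sketch of that step follows; the tip times and the 0.254 mm bucket depth are illustrative, and the operational 2A-56 system adds event definitions and quality control not shown here.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Toy tipping-bucket record: each tip adds 0.254 mm; tip times in minutes.
tip_times = np.array([0.0, 3.2, 5.1, 6.0, 6.7, 7.6, 9.4, 13.0])
tip_depth_mm = 0.254
cumulative_mm = tip_depth_mm * np.arange(len(tip_times))

# Fit a cubic spline to cumulative rainfall and differentiate to get rain rate.
spline = CubicSpline(tip_times, cumulative_mm)
minutes = np.arange(0.0, 13.0, 1.0)
rain_rate_mm_per_hr = spline(minutes, 1) * 60.0       # derivative, mm/min -> mm/h
print(np.round(rain_rate_mm_per_hr, 2))
```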
Relative Error Evaluation to Typical Open Global dem Datasets in Shanxi Plateau of China
NASA Astrophysics Data System (ADS)
Zhao, S.; Zhang, S.; Cheng, W.
2018-04-01
Produced from radar data or stereo remote sensing image pairs, global DEM datasets are one of the most important types of DEM data. Relative error relates to the quality of the surface created from DEM data, and therefore to geomorphological and hydrological applications using DEM data. Taking the Shanxi Plateau of China as the study area, this research evaluated the relative error of typical open global DEM datasets, including Shuttle Radar Topography Mission (SRTM) data with 1 arc second resolution (SRTM1), SRTM data with 3 arc second resolution (SRTM3), ASTER global DEM data in its second version (GDEM-v2), and ALOS World 3D-30m (AW3D) data. After processing and selection, more than 300,000 ICESat/GLA14 points were used as GCP data, and the vertical error was computed and compared among the four global DEM datasets. Then, more than 2,600,000 ICESat/GLA14 point pairs were formed using a distance threshold between 100 m and 500 m. The horizontal distance between each point pair was computed, and the relative error was obtained as a slope value based on the vertical error difference and the horizontal distance of the point pair. Finally, a false slope ratio (FSR) index was computed by analyzing the difference between DEM and ICESat/GLA14 values for every point pair. Both the relative error and the FSR index were compared categorically for the four DEM datasets under different slope classes. Results show that, overall, AW3D has the lowest relative error in mean error, mean absolute error, root mean square error, and standard deviation; SRTM1 follows, with values slightly higher than AW3D; SRTM3 and GDEM-v2 have the highest relative error values, and the two are similar. Considering different slope conditions, all four DEM datasets perform better in flat areas and worse in sloping regions; AW3D performs best in all slope classes, a little better than SRTM1; as slope increases, the relative error of SRTM3 increases faster than that of the other DEM datasets, so SRTM3 is better than GDEM-v2 in flat regions but worse in sloping regions. As to the FSR value, AW3D has the lowest, 4.37%; SRTM1 follows at 5.80%, similar to AW3D; SRTM3 is higher, about 8.27%; and GDEM-v2 has the highest FSR value, about 12.15%. FSR represents how well the earth surface is correctly reproduced from DEM data. Hence, AW3D has the best performance, close to but a little better than SRTM1. SRTM3 and GDEM-v2 perform similarly, much worse than AW3D and SRTM1, and GDEM-v2 performs worst of all. Originating from a DEM dataset with 5 m resolution, AW3D is regarded as the most precise global DEM dataset to date, so it may exert more influence in topographic analysis and geographic research. Through analysis and comparison of the relative error of the four open global DEM datasets, this research provides a reference for the selection and application of open global DEM datasets in geosciences and other relevant fields.
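A toy Python illustration of the point-pair relative error defined above follows; the vertical errors, pair sampling, and separations are synthetic, whereas the study derived them from DEM minus ICESat/GLA14 elevations and actual point geometry.

```python
import numpy as np

# Toy relative (point-pair) error: for each point the vertical error is DEM
# height minus ICESat height; for each pair of nearby points the relative
# error is the difference of vertical errors divided by the horizontal
# distance (a slope-like quantity). All values are synthetic.
rng = np.random.default_rng(9)
n_points = 1000
dem_minus_icesat = rng.normal(0.0, 3.0, size=n_points)      # vertical error, m

# Hypothetical pairs of point indices with horizontal separations of 100-500 m.
i = rng.integers(0, n_points, size=5000)
j = rng.integers(0, n_points, size=5000)
keep = i != j
i, j = i[keep], j[keep]
horizontal_m = rng.uniform(100.0, 500.0, size=i.size)

relative_error = (dem_minus_icesat[i] - dem_minus_icesat[j]) / horizontal_m
print(f"mean abs relative error: {np.abs(relative_error).mean():.4f} m/m")
```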
Li, Guang; Wei, Jie; Huang, Hailiang; Gaebler, Carl Philipp; Yuan, Amy; Deasy, Joseph O
2015-12-01
To automatically estimate the average diaphragm motion trajectory (ADMT) based on four-dimensional computed tomography (4DCT), facilitating clinical assessment of respiratory motion and motion variation and retrospective motion study. We have developed an effective motion extraction approach and a machine-learning-based algorithm to estimate the ADMT. Eleven patients with 22 sets of 4DCT images (4DCT1 at simulation and 4DCT2 at treatment) were studied. After automatically segmenting the lungs, the differential volume-per-slice (dVPS) curves of the left and right lungs were calculated as a function of slice number for each phase with respect to full exhalation. After a 5-slice moving average was performed, the discrete cosine transform (DCT) was applied to analyze the dVPS curves in the frequency domain. The dimensionality of the spectrum data was reduced by using the several lowest-frequency coefficients (f_v) that account for most of the spectrum energy (Σ f_v²). The multiple linear regression (MLR) method was then applied to determine the weights of these frequencies by fitting the ground truth, the measured ADMT, which is represented by three pivot points of the diaphragm on each side. The leave-one-out cross-validation method was employed to analyze the statistical performance of the predictions in three image sets: 4DCT1, 4DCT2, and 4DCT1 + 4DCT2. The seven lowest frequencies in the DCT domain were found to be sufficient to approximate the patient dVPS curves (R = 91%-96% in MLR fitting). The mean error in the predicted ADMT using the leave-one-out method was 0.3 ± 1.9 mm for the left-side diaphragm and 0.0 ± 1.4 mm for the right-side diaphragm. The prediction error is lower in 4DCT2 than in 4DCT1, and is lowest in 4DCT1 and 4DCT2 combined. This frequency-analysis-based machine learning technique predicts the ADMT automatically with an acceptable error (0.2 ± 1.6 mm). This volumetric approach is not affected by the presence of lung tumors, providing an automatic and robust tool to evaluate diaphragm motion.
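The prediction pipeline is a DCT of each dVPS curve, truncation to the lowest seven coefficients, multiple linear regression against the measured trajectory, and leave-one-out validation. The Python sketch below mirrors those steps on random placeholder curves and targets; the array sizes and values are assumptions, not the patient data.

```python
import numpy as np
from scipy.fft import dct
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(6)
# Hypothetical stand-ins: 22 scans, each with a 60-slice dVPS curve and a
# measured average diaphragm displacement (mm) as the target.
curves = rng.normal(size=(22, 60))
target_mm = rng.normal(15.0, 5.0, size=22)

n_coeff = 7
features = dct(curves, norm="ortho", axis=1)[:, :n_coeff]   # lowest frequencies

errors = []
for train, test in LeaveOneOut().split(features):
    model = LinearRegression().fit(features[train], target_mm[train])
    errors.append(model.predict(features[test])[0] - target_mm[test][0])
errors = np.array(errors)
print(f"prediction error: {errors.mean():.2f} +/- {errors.std():.2f} mm")
```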
Chen, Judy Ying; Ma, Qiufei; Everhard, Francois; Yermilov, Irina; Tian, Haijun; Mayer, Kenneth Hugh
2011-06-01
The Centers for Disease Control strongly recommends HIV screening for all patients who present to health care settings with sexually transmitted diseases (STD) or blood-borne pathogens exposure. The objective of this study is to assess the rates and determinants of HIV screening in a national sample of commercially insured patients screened or diagnosed with an STD or hepatitis B or C. We used Poisson regression model with a robust error variance to assess the determinants of HIV screening using administrative claims data from health plans across 6 states (n = 270,423). The overall HIV screening rate of patients who were diagnosed or screened for STDs or hepatitis was low (32.7%); rates were lowest for patients presenting with epididymitis or granuloma inguinale (<10%). Patients aged 25 to 34 years were more likely to be screened than other age groups. Females were significantly less likely to be screened for HIV (prevalence ratio = 0.90; 95% CI = 0.89, 0.91) than males. Patients living in states where no written HIV informed consent was required were significantly more likely to be screened than those living in states where written HIV informed consent was specifically required. HIV screening rates were low and varied by STD categories. Females and younger and older patients were at increased risk of no HIV screening. Requiring specific written informed consent for HIV screening resulted in less HIV screening. Interventions are urgently needed to increase the HIV screening rate among this at-risk population.
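The modified Poisson approach named here (Poisson regression with a robust error variance) yields prevalence ratios for a binary outcome directly. A small Python sketch on synthetic data follows; the screening probabilities and the single sex covariate are illustrative, whereas the study adjusted for many patient and state-level factors.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
# Synthetic cohort: binary outcome (HIV screening done) and a binary exposure
# (female sex); a modified Poisson model gives the prevalence ratio directly.
n = 20000
female = rng.binomial(1, 0.5, size=n)
p_screen = 0.35 * np.where(female == 1, 0.9, 1.0)
screened = rng.binomial(1, p_screen)

X = sm.add_constant(female.astype(float))
fit = sm.GLM(screened, X, family=sm.families.Poisson()).fit(cov_type="HC0")
pr = np.exp(fit.params[1])
ci = np.exp(fit.conf_int()[1])
print(f"prevalence ratio (female vs male): {pr:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```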
Cancer death rates in US congressional districts.
Siegel, Rebecca L; Sahar, Liora; Portier, Kenneth M; Ward, Elizabeth M; Jemal, Ahmedin
2015-01-01
Knowledge of the cancer burden is important for informing and advocating cancer prevention and control. Mortality data are readily available for states and counties, but not for congressional districts, from which representatives are elected and which may be more influential in compelling legislation and policy. The authors calculated average annual cancer death rates during 2002 to 2011 for each of the 435 congressional districts using mortality data from the National Center for Health Statistics and population estimates from the US Census Bureau. Age-standardized death rates were mapped for all sites combined and separately for cancers of the lung and bronchus, colorectum, breast, and prostate by race/ethnicity and sex. Overall cancer death rates vary by almost 2-fold and are generally lowest in Mountain states and highest in Appalachia and areas of the South. The distribution is similar for lung and colorectal cancers, with the lowest rates consistently noted in districts in Utah. However, for breast and prostate cancers, while the highest rates are again scattered throughout the South, the geographic pattern is less clear and the lowest rates are in Hawaii and southern Texas and Florida. Within-state heterogeneity is limited, particularly for men, with the exceptions of Texas, Georgia, and Florida. Patterns also vary by race/ethnicity. For example, the highest prostate cancer death rates are in the West and north central United States among non-Hispanic whites, but in the deep South among African Americans. Hispanics have the lowest rates except for colorectal cancer in Wyoming, eastern Colorado, and northern New Mexico. These data can facilitate cancer control and stimulate conversation about the relationship between cancer and policies that influence access to health care and the prevalence of behavioral and environmental risk factors. © 2015 American Cancer Society.
Zhang, Hongsheng; Zhang, Xianxing; Zhang, Chu; Liu, Song; He, Jian-Feng
2014-04-01
To analyze the results of an occupational hazards survey of specially supervised enterprises (156 enterprise-times) during 2011-2012 in one district of Shenzhen, China, identify the changes in occupational hazards in these enterprises, and put forward countermeasures for the prevention and control of occupational hazards. Occupational hazards monitoring results for specially supervised enterprises (156 enterprise-times) during 2011-2012 were included. Comparison and analysis were performed between different years, different industries, different occupational hazards, and different sizes of enterprises. A total of 1274 monitoring sites from these specially supervised enterprises were included, of which the qualification rate was 73.55% (937/1274), and the noise monitoring sites showed the lowest qualification rate. The overall qualification rate in 2012 (70.37%) was significantly lower than that in 2011 (80.94%) (χ² = 15.38, P < 0.01). In the electronics industry, the qualification rate in 2012 was significantly lower than that in 2011 (χ² = 11.27, P = 0.001). Comparison of various hazards in different industries indicated that electronic enterprises and furniture enterprises had the lowest qualification rate in noise monitoring, printing enterprises had the lowest qualification rate in organic solvent monitoring, and furniture enterprises had the lowest qualification rate in dust monitoring. Comparison between different sizes of enterprises indicated that the qualification rate of large and medium enterprises in 2012 was significantly lower than that in 2011, while the qualification rate of small enterprises in 2012 was significantly higher than that in 2011 (P < 0.01 or P < 0.05). In the prevention and control of occupational hazards in specially supervised enterprises, special attention should be paid to the control of organic solvents in printing enterprises and noise and dust in furniture enterprises.
Approximation of Bit Error Rates in Digital Communications
2007-06-01
and Technology Organisation, DSTO-TN-0761. This report investigates the estimation of bit error rates in digital communications, motivated by...recent work in [6]. In the latter, bounds are used to construct estimates for bit error rates in the case of differentially coherent quadrature phase
Advances in nowcasting influenza-like illness rates using search query logs
NASA Astrophysics Data System (ADS)
Lampos, Vasileios; Miller, Andrew C.; Crossan, Steve; Stefansen, Christian
2015-08-01
User-generated content can assist epidemiological surveillance in the early detection and prevalence estimation of infectious diseases, such as influenza. Google Flu Trends embodies the first public platform for transforming search queries to indications about the current state of flu in various places all over the world. However, the original model significantly mispredicted influenza-like illness rates in the US during the 2012-13 flu season. In this work, we build on the previous modeling attempt, proposing substantial improvements. Firstly, we investigate the performance of a widely used linear regularized regression solver, known as the Elastic Net. Then, we expand on this model by incorporating the queries selected by the Elastic Net into a nonlinear regression framework, based on a composite Gaussian Process. Finally, we augment the query-only predictions with an autoregressive model, injecting prior knowledge about the disease. We assess predictive performance using five consecutive flu seasons spanning from 2008 to 2013 and qualitatively explain certain shortcomings of the previous approach. Our results indicate that a nonlinear query modeling approach delivers the lowest cumulative nowcasting error, and also suggest that query information significantly improves autoregressive inferences, obtaining state-of-the-art performance.
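The modeling sequence described, sparse query selection with the Elastic Net followed by nonlinear Gaussian Process regression on the selected queries, can be sketched in a few lines of Python. The example below uses synthetic query and ILI series and a plain RBF-plus-noise kernel; it omits the composite kernel and the autoregressive augmentation of the actual models.

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(8)
# Synthetic stand-ins: weekly frequencies of 200 candidate queries and an
# ILI rate series; real pipelines would use search-log and surveillance data.
n_weeks, n_queries = 260, 200
queries = rng.random((n_weeks, n_queries))
ili = 2.0 * queries[:, 0] + 1.5 * queries[:, 3] + 0.1 * rng.normal(size=n_weeks)

train, test = slice(0, 200), slice(200, 260)

# Step 1: sparse query selection with the Elastic Net.
enet = ElasticNetCV(l1_ratio=0.5, cv=5).fit(queries[train], ili[train])
selected = np.flatnonzero(enet.coef_)

# Step 2: nonlinear regression on the selected queries with a Gaussian Process.
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(queries[train][:, selected], ili[train])
pred = gp.predict(queries[test][:, selected])
print(f"{len(selected)} queries selected; "
      f"nowcast MAE = {np.mean(np.abs(pred - ili[test])):.3f}")
```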
Arendrup, Maiken Cavling; Garcia-Effron, Guillermo; Lass-Flörl, Cornelia; Lopez, Alicia Gomez; Rodriguez-Tudela, Juan-Luis; Cuenca-Estrella, Manuel; Perlin, David S.
2010-01-01
This study compared nine susceptibility testing methods and 12 endpoints for anidulafungin, caspofungin, and micafungin with the same collection of blinded FKS hot spot mutant (n = 29) and wild-type isolates (n = 94). The susceptibility tests included EUCAST Edef 7.1, agar dilution, Etest, and disk diffusion with RPMI-1640 plus 2% glucose (2G) and IsoSensitest-2G media and CLSI M27A-3. Microdilution plates were read after 24 and 48 h. The following test parameters were evaluated: fks hot spot mutants overlapping the wild-type distribution, distance between the two populations, number of very major errors (VMEs; fks mutants misclassified as susceptible), and major errors (MEs; wild-type isolates classified as resistant) using a wild-type-upper-limit value (WT-UL) (two twofold-dilutions higher than the MIC50) as the susceptibility breakpoint. The methods with the lowest number of errors (given as VMEs/MEs) across the three echinocandins were CLSI (12%/1%), agar dilution with RPMI-2G medium (14%/0%), and Etest with RPMI-2G medium (8%/3%). The fewest errors overall were observed for anidulafungin (4%/1% for EUCAST, 4%/3% for CLSI, and 3%/9% for Etest with RPMI-2G). For micafungin, VME rates of 10 to 71% were observed. For caspofungin, agar dilution with either medium was superior (VMEs/MEs of 0%/1%), while CLSI, EUCAST with IsoSensitest-2G medium, and Etest were less optimal (VMEs of 7%, 10%, and 10%, respectively). Applying the CLSI breakpoint (S ≤ 2 μg/ml) for CLSI results, 89.2% fks hot spot mutants were classified as anidulafungin susceptible, 60.7% as caspofungin susceptible, and 92.9% as micafungin susceptible. In conclusion, no test was perfect, but anidulafungin susceptibility testing using the WT-UL to define susceptibility reliably identified fks hot spot mutants. PMID:19884370
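A rough sketch of the wild-type-upper-limit rule used above for classifying errors, assuming only that the WT-UL sits two twofold dilutions above the wild-type MIC50; the MIC values below are invented for illustration.

```python
# Sketch of the WT-UL classification rule: the breakpoint is set two twofold
# dilutions above the wild-type MIC50, isolates at or below it are called
# susceptible, and misclassifications are tallied as very major errors
# (mutants called susceptible) or major errors (wild-type called resistant).
import numpy as np

wild_type_mics = np.array([0.015, 0.03, 0.03, 0.06, 0.06, 0.06, 0.12])  # ug/ml (toy)
mutant_mics    = np.array([0.06, 0.25, 0.5, 1.0, 2.0])                  # ug/ml (toy)

wt_ul = np.median(wild_type_mics) * 4          # two twofold dilutions above MIC50

vme_rate = np.mean(mutant_mics <= wt_ul)       # fks mutants called susceptible
me_rate  = np.mean(wild_type_mics > wt_ul)     # wild-type isolates called resistant
print(f"WT-UL = {wt_ul} ug/ml, VME = {vme_rate:.0%}, ME = {me_rate:.0%}")
```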
Arba-Mosquera, Samuel; Aslanides, Ioannis M.
2012-01-01
Purpose To analyze the effects of eye-tracker performance on pulse positioning errors during refractive surgery. Methods A comprehensive model has been developed that directly considers eye movements (saccades, vestibular, optokinetic, vergence, and miniature movements) as well as eye-tracker acquisition rate, eye-tracker latency time, scanner positioning time, laser firing rate, and laser trigger delay. Results Eye-tracker acquisition rates below 100 Hz correspond to pulse positioning errors above 1.5 mm. Eye-tracker latency times of up to about 15 ms correspond to pulse positioning errors of up to 3.5 mm. Scanner positioning times of up to about 9 ms correspond to pulse positioning errors of up to 2 mm. Laser firing rates faster than the eye-tracker acquisition rate essentially double pulse-positioning errors. Laser trigger delays of up to about 300 μs have minor to no impact on pulse-positioning errors. Conclusions The proposed model can be used to compare laser systems used for ablation. Because of the pseudo-random nature of eye movements, positioning errors of single pulses are much larger than the decentrations observed in clinical settings. No single parameter alone minimizes the positioning error; it is the optimal combination of several parameters that minimizes it. The results of this analysis are important for understanding the limitations of correcting very irregular ablation patterns.
Angular Rate Optimal Design for the Rotary Strapdown Inertial Navigation System
Yu, Fei; Sun, Qian
2014-01-01
Owing to its high precision over long durations, the rotary strapdown inertial navigation system (RSINS) has been widely used in submarines and surface ships. The core technology, the rotating scheme, has been studied by numerous researchers. It is well known that, as one of the key parameters, the rotating angular rate strongly influences the effectiveness of the error modulation. In order to design the optimal rotating angular rate of the RSINS, the relationship between the rotating angular rate and the velocity error of the RSINS was analyzed in detail using the Laplace transform and its inverse. The analysis showed that the velocity error of the RSINS depends not only on the sensor error but also on the rotating angular rate; to minimize the velocity error, the rotating angular rate of the RSINS should match the sensor error. An optimal design method for the rotating rate of the RSINS is also proposed in this paper. Simulation and experimental results verified the validity and superiority of this optimal design method for the rotating rate of the RSINS. PMID:24759115
Reverse Transcription Errors and RNA-DNA Differences at Short Tandem Repeats.
Fungtammasan, Arkarachai; Tomaszkiewicz, Marta; Campos-Sánchez, Rebeca; Eckert, Kristin A; DeGiorgio, Michael; Makova, Kateryna D
2016-10-01
Transcript variation has important implications for organismal function in health and disease. Most transcriptome studies focus on assessing variation in gene expression levels and isoform representation. Variation at the level of transcript sequence is caused by RNA editing and transcription errors, and leads to nongenetically encoded transcript variants, or RNA-DNA differences (RDDs). Such variation has been understudied, in part because its detection is obscured by reverse transcription (RT) and sequencing errors. It has only been evaluated for intertranscript base substitution differences. Here, we investigated transcript sequence variation for short tandem repeats (STRs). We developed the first maximum-likelihood estimator (MLE) to infer RT error and RDD rates, taking next generation sequencing error rates into account. Using the MLE, we empirically evaluated RT error and RDD rates for STRs in a large-scale DNA and RNA replicated sequencing experiment conducted in a primate species. The RT error rates increased exponentially with STR length and were biased toward expansions. The RDD rates were approximately 1 order of magnitude lower than the RT error rates. The RT error rates estimated with the MLE from a primate data set were concordant with those estimated with an independent method, barcoded RNA sequencing, from a Caenorhabditis elegans data set. Our results have important implications for medical genomics, as STR allelic variation is associated with >40 diseases. STR nonallelic transcript variation can also contribute to disease phenotype. The MLE and empirical rates presented here can be used to evaluate the probability of disease-associated transcripts arising due to RDD. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
Zhang, Jiayu; Li, Jie; Zhang, Xi; Che, Xiaorui; Huang, Yugang; Feng, Kaiqiang
2018-05-04
The Semi-Strapdown Inertial Navigation System (SSINS) provides a new solution to attitude measurement of a high-speed rotating missile. However, micro-electro-mechanical-systems (MEMS) inertial measurement unit (MIMU) outputs are corrupted by significant sensor errors. To improve navigation precision, a rotation modulation technique, the Rotation Semi-Strapdown Inertial Navigation System (RSSINS), is introduced into the SINS. In practice, however, a stable modulation angular rate is difficult to achieve in a high-speed rotation environment, and the changing rotary angular rate affects the self-compensation of inertial sensor errors. In this paper, the influence of modulation angular rate error, including the acceleration-deceleration process and instability of the angular rate, on the navigation accuracy of the RSSINS is derived, and the error characteristics of the reciprocating rotation scheme are analyzed. A new compensation method is proposed to remove or reduce sensor errors so as to make it possible to maintain high-precision autonomous navigation by the MIMU when no external aid is available. Experiments have been carried out to validate the performance of the method. In addition, the proposed method is applicable to modulation angular rate error compensation under various dynamic conditions.
45 CFR 98.102 - Content of Error Rate Reports.
Code of Federal Regulations, 2013 CFR
2013-10-01
....102 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia or...
45 CFR 98.102 - Content of Error Rate Reports.
Code of Federal Regulations, 2014 CFR
2014-10-01
....102 Public Welfare Department of Health and Human Services GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia or...
45 CFR 98.102 - Content of Error Rate Reports.
Code of Federal Regulations, 2012 CFR
2012-10-01
....102 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia or...
45 CFR 98.102 - Content of Error Rate Reports.
Code of Federal Regulations, 2011 CFR
2011-10-01
....102 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia or...
Impact of an antiretroviral stewardship strategy on medication error rates.
Shea, Katherine M; Hobbs, Athena Lv; Shumake, Jason D; Templet, Derek J; Padilla-Tolentino, Eimeira; Mondy, Kristin E
2018-05-02
The impact of an antiretroviral stewardship strategy on medication error rates was evaluated. This single-center, retrospective, comparative cohort study included patients at least 18 years of age infected with human immunodeficiency virus (HIV) who were receiving antiretrovirals and admitted to the hospital. A multicomponent approach was developed and implemented and included modifications to the order-entry and verification system, pharmacist education, and a pharmacist-led antiretroviral therapy checklist. Pharmacists performed prospective audits using the checklist at the time of order verification. To assess the impact of the intervention, a retrospective review was performed before and after implementation to assess antiretroviral errors. Totals of 208 and 24 errors were identified before and after the intervention, respectively, resulting in a significant reduction in the overall error rate ( p < 0.001). In the postintervention group, significantly lower medication error rates were found in both patient admissions containing at least 1 medication error ( p < 0.001) and those with 2 or more errors ( p < 0.001). Significant reductions were also identified in each error type, including incorrect/incomplete medication regimen, incorrect dosing regimen, incorrect renal dose adjustment, incorrect administration, and the presence of a major drug-drug interaction. A regression tree selected ritonavir as the only specific medication that best predicted more errors preintervention ( p < 0.001); however, no antiretrovirals reliably predicted errors postintervention. An antiretroviral stewardship strategy for hospitalized HIV patients including prospective audit by staff pharmacists through use of an antiretroviral medication therapy checklist at the time of order verification decreased error rates. Copyright © 2018 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
Vanderhoof, Melanie; Brunner, Nicole M.; Beal, Yen-Ju G.; Hawbaker, Todd J.
2017-01-01
The U.S. Geological Survey has produced the Landsat Burned Area Essential Climate Variable (BAECV) product for the conterminous United States (CONUS), which provides wall-to-wall annual maps of burned area at 30 m resolution (1984–2015). Validation is a critical component in the generation of such remotely sensed products. Previous efforts to validate the BAECV relied on a reference dataset derived from Landsat, which was effective in evaluating the product across its timespan but did not allow for consideration of inaccuracies imposed by the Landsat sensor itself. In this effort, the BAECV was validated using 286 high-resolution images, collected from GeoEye-1, QuickBird-2, Worldview-2 and RapidEye satellites. A disproportionate sampling strategy was utilized to ensure enough burned area pixels were collected. Errors of omission and commission for burned area averaged 22 ± 4% and 48 ± 3%, respectively, across CONUS. Errors were lowest across the western U.S. The elevated error of commission relative to omission was largely driven by patterns in the Great Plains which saw low errors of omission (13 ± 13%) but high errors of commission (70 ± 5%) and potentially a region-growing function included in the BAECV algorithm. While the BAECV reliably detected agricultural fires in the Great Plains, it frequently mapped tilled areas or areas with low vegetation as burned. Landscape metrics were calculated for individual fire events to assess the influence of image resolution (2 m, 30 m and 500 m) on mapping fire heterogeneity. As the spatial detail of imagery increased, fire events were mapped in a patchier manner with greater patch and edge densities, and shape complexity, which can influence estimates of total greenhouse gas emissions and rates of vegetation recovery. The increasing number of satellites collecting high-resolution imagery and rapid improvements in the frequency with which imagery is being collected means greater opportunities to utilize these sources of imagery for Landsat product validation.
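The omission and commission error rates reported above follow from a standard burned/unburned confusion matrix; a minimal sketch, with made-up counts chosen only to mirror the CONUS averages, is:

```python
# Errors of omission and commission for the "burned" class, computed from a
# simple confusion matrix. The counts are illustrative, not the study's data.
import numpy as np

# rows = reference (truth), cols = map (BAECV); order: [burned, unburned]
confusion = np.array([[780, 220],     # reference burned: 220 pixels missed
                      [720, 9000]])   # reference unburned: 720 mapped as burned

omission   = confusion[0, 1] / confusion[0, :].sum()   # burned pixels not mapped
commission = confusion[1, 0] / confusion[:, 0].sum()   # mapped burned that are not
print(f"omission = {omission:.1%}, commission = {commission:.1%}")
```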
Adnan, Tassha Hilda; Hashim, Nadiah Hanis; Mohan, Kirubashni; Kim Liong, Ang; Ahmad, Ghazali; Bak Leong, Goh; Bavanandan, Sunita; Haniff, Jamaiyah
2017-01-01
Background. The incidence of patients with end-stage renal disease (ESRD) requiring dialysis has been growing rapidly in Malaysia, from 18 per million population (pmp) in 1993 to 231 pmp in 2013. Objective. To forecast the incidence and prevalence of ESRD patients requiring dialysis treatment in Malaysia until 2040. Methodology. Univariate forecasting models were fitted to the numbers of new and current dialysis patients reported by the Malaysian Dialysis and Transplant Registry from 1993 to 2013. Four forecasting models were evaluated, and the model with the smallest error was selected for prediction. Result. ARIMA(0, 2, 1) modeling, with the lowest error, was selected to predict both the incidence (RMSE = 135.50, MAPE = 2.85, and MAE = 87.71) and the prevalence (RMSE = 158.79, MAPE = 1.29, and MAE = 117.21) of dialysis patients. The estimated incidences of new dialysis patients in 2020 and 2040 are 10,208 and 19,418 cases, respectively, while the estimated prevalences are 51,269 and 106,249 cases. Conclusion. The growth in the number of ESRD patients on dialysis in Malaysia can be expected to continue at an alarming rate. Effective steps to address and curb further increases in new patients requiring dialysis are urgently needed in order to mitigate the expected financial and health catastrophes associated with the projected increase in such patients. PMID:28348890
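A minimal sketch of the ARIMA(0, 2, 1) fit-and-forecast step named above, using statsmodels on a synthetic annual series; the data, train/test split, and horizon are illustrative, not the registry counts.

```python
# Fit ARIMA(0, 2, 1) to an annual incidence series, score a held-out tail with
# RMSE/MAE/MAPE, then refit on the full series and project to 2040 (toy data).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
incidence = np.linspace(400, 7000, 21) + rng.normal(0, 150, 21)  # 1993-2013 (toy)

train, test = incidence[:-3], incidence[-3:]
fit = ARIMA(train, order=(0, 2, 1)).fit()
pred = np.asarray(fit.forecast(steps=test.size))

rmse = np.sqrt(np.mean((pred - test) ** 2))
mae = np.mean(np.abs(pred - test))
mape = np.mean(np.abs((pred - test) / test)) * 100
print(f"RMSE={rmse:.2f}, MAE={mae:.2f}, MAPE={mape:.2f}%")

full_fit = ARIMA(incidence, order=(0, 2, 1)).fit()
print("2040 forecast:", np.asarray(full_fit.forecast(steps=27))[-1])
```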
Spencer, Bruce D
2012-06-01
Latent class models are increasingly used to assess the accuracy of medical diagnostic tests and other classifications when no gold standard is available and the true state is unknown. When the latent class is treated as the true class, the latent class models provide measures of components of accuracy including specificity and sensitivity and their complements, type I and type II error rates. The error rates according to the latent class model differ from the true error rates, however, and empirical comparisons with a gold standard suggest the true error rates often are larger. We investigate conditions under which the true type I and type II error rates are larger than those provided by the latent class models. Results from Uebersax (1988, Psychological Bulletin 104, 405-416) are extended to accommodate random effects and covariates affecting the responses. The results are important for interpreting the results of latent class analyses. An error decomposition is presented that incorporates an error component from invalidity of the latent class model. © 2011, The International Biometric Society.
Han, Mira V; Thomas, Gregg W C; Lugo-Martinez, Jose; Hahn, Matthew W
2013-08-01
Current sequencing methods produce large amounts of data, but genome assemblies constructed from these data are often fragmented and incomplete. Incomplete and error-filled assemblies result in many annotation errors, especially in the number of genes present in a genome. This means that methods attempting to estimate rates of gene duplication and loss often will be misled by such errors and that rates of gene family evolution will be consistently overestimated. Here, we present a method that takes these errors into account, allowing one to accurately infer rates of gene gain and loss among genomes even with low assembly and annotation quality. The method is implemented in the newest version of the software package CAFE, along with several other novel features. We demonstrate the accuracy of the method with extensive simulations and reanalyze several previously published data sets. Our results show that errors in genome annotation do lead to higher inferred rates of gene gain and loss but that CAFE 3 sufficiently accounts for these errors to provide accurate estimates of important evolutionary parameters.
Derivation of an analytic expression for the error associated with the noise reduction rating
NASA Astrophysics Data System (ADS)
Murphy, William J.
2005-04-01
Hearing protection devices are assessed using the Real Ear Attenuation at Threshold (REAT) measurement procedure for the purpose of estimating the amount of noise reduction provided when worn by a subject. The rating number provided on the protector label is a function of the mean and standard deviation of the REAT results achieved by the test subjects. If a group of subjects have a large variance, then it follows that the certainty of the rating should be correspondingly lower. No estimate of the error of a protector's rating is given by existing standards or regulations. Propagation of errors was applied to the Noise Reduction Rating to develop an analytic expression for the hearing protector rating error term. Comparison of the analytic expression for the error to the standard deviation estimated from Monte Carlo simulation of subject attenuations yielded a linear relationship across several protector types and assumptions for the variance of the attenuations.
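The exact NRR formula is not reproduced here, but the propagation-of-errors idea can be sketched with a simplified stand-in rating (mean attenuation minus two standard deviations) and checked against Monte Carlo simulation, under an assumed normal model for subject attenuations.

```python
# Generic propagation-of-errors sketch: derive an analytic standard error for a
# rating statistic and compare it with Monte Carlo. The rating used here
# (mean attenuation minus two standard deviations) is a simplified stand-in,
# not the actual NRR formula.
import numpy as np

mu, sigma, n = 30.0, 6.0, 10           # assumed subject attenuation stats (dB)

# For normal data, var(xbar) = sigma^2/n and var(s) ~ sigma^2/(2(n-1)), and the
# two estimators are independent, so for R = xbar - 2*s:
se_analytic = np.sqrt(sigma**2 / n + 4 * sigma**2 / (2 * (n - 1)))

# Monte Carlo: simulate many panels of n subjects and recompute the rating.
rng = np.random.default_rng(1)
panels = rng.normal(mu, sigma, size=(100_000, n))
ratings = panels.mean(axis=1) - 2 * panels.std(axis=1, ddof=1)
print(f"analytic SE = {se_analytic:.2f} dB, Monte Carlo SE = {ratings.std():.2f} dB")
```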
Georgsson, Mattias; Staggers, Nancy
2016-01-01
Mobile health (mHealth) systems are becoming more common for chronic disease management, but usability studies are still needed on patients' perspectives and mHealth interaction performance. This deficiency is addressed by our quantitative usability study of an mHealth diabetes system evaluating patients' task performance, satisfaction, and the relationship of these measures to user characteristics. We used metrics in the International Organization for Standardization (ISO) 9241-11 standard. After standardized training, 10 patients performed representative tasks and were assessed on individual task success, errors, efficiency (time on task), satisfaction (System Usability Scale [SUS]) and user characteristics. Tasks of exporting and correcting values proved the most difficult, had the most errors, the lowest task success rates, and consumed the longest times on task. The average SUS satisfaction score was 80.5, indicating good but not excellent system usability. Data trends showed males were more successful in task completion, and younger participants had higher performance scores. Educational level did not influence performance, but a more recent diabetes diagnosis did. Patients with more experience in information technology (IT) also had higher performance rates. Difficult task performance indicated areas for redesign. Our methods can assist others in identifying areas in need of improvement. Data about user background and IT skills also showed how user characteristics influence performance and can provide future considerations for targeted mHealth designs. Using the ISO 9241-11 usability standard, the SUS instrument for satisfaction and measuring user characteristics provided objective measures of patients' experienced usability. These could serve as an exemplar for standardized, quantitative methods for usability studies on mHealth systems. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
Errors in laboratory medicine: practical lessons to improve patient safety.
Howanitz, Peter J
2005-10-01
Patient safety is influenced by the frequency and seriousness of errors that occur in the health care system. Error rates in laboratory practices are collected routinely for a variety of performance measures in all clinical pathology laboratories in the United States, but a list of critical performance measures has not yet been recommended. The most extensive databases describing error rates in pathology were developed and are maintained by the College of American Pathologists (CAP). These databases include the CAP's Q-Probes and Q-Tracks programs, which provide information on error rates from more than 130 interlaboratory studies. To define critical performance measures in laboratory medicine, describe error rates of these measures, and provide suggestions to decrease these errors, thereby ultimately improving patient safety. A review of experiences from Q-Probes and Q-Tracks studies supplemented with other studies cited in the literature. Q-Probes studies are carried out as time-limited studies lasting 1 to 4 months and have been conducted since 1989. In contrast, Q-Tracks investigations are ongoing studies performed on a yearly basis and have been conducted only since 1998. Participants from institutions throughout the world simultaneously conducted these studies according to specified scientific designs. The CAP has collected and summarized data for participants about these performance measures, including the significance of errors, the magnitude of error rates, tactics for error reduction, and willingness to implement each of these performance measures. A list of recommended performance measures, the frequency of errors when these performance measures were studied, and suggestions to improve patient safety by reducing these errors. Error rates for preanalytic and postanalytic performance measures were higher than for analytic measures. Eight performance measures were identified, including customer satisfaction, test turnaround times, patient identification, specimen acceptability, proficiency testing, critical value reporting, blood product wastage, and blood culture contamination. Error rate benchmarks for these performance measures were cited and recommendations for improving patient safety presented. Not only has each of the 8 performance measures proven practical, useful, and important for patient care, taken together, they also fulfill regulatory requirements. All laboratories should consider implementing these performance measures and standardizing their own scientific designs, data analysis, and error reduction strategies according to findings from these published studies.
Morrow, Linda; Hompesch, Marcus; Tideman, Ann M; Matson, Jennifer; Dunne, Nancy; Pardo, Scott; Parkes, Joan L; Schachner, Holly C; Simmons, David A
2011-07-01
This glucose clamp study assessed the performance of an electrochemical continuous glucose monitoring (CGM) system for monitoring levels of interstitial glucose. This novel system does not require use of a trocar or needle for sensor insertion. Continuous glucose monitoring sensors were inserted subcutaneously into the abdominal tissue of 14 adults with type 1 or type 2 diabetes. Subjects underwent an automated glucose clamp procedure with four consecutive post-steady-state glucose plateau periods (40 min each): (a) hypoglycemic (50 mg/dl), (b) hyperglycemic (250 mg/dl), (c) second hypoglycemic (50 mg/dl), and (d) euglycemic (90 mg/dl). Plasma glucose results obtained with YSI glucose analyzers were used for sensor calibration. Accuracy was assessed retrospectively for plateau periods and transition states, when glucose levels were changing rapidly (approximately 2 mg/dl/min). Mean absolute percent difference (APD) was lowest during hypoglycemic plateaus (11.68%, 14.15%) and the euglycemic-to-hypoglycemic transition (14.21%). Mean APD during the hyperglycemic plateau was 17.11%; mean APDs were 18.12% and 19.25% during the hypoglycemic-to-hyperglycemic and hyperglycemic-to-hypoglycemic transitions, respectively. Parkes (consensus) error grid analysis (EGA) and rate EGA of the plateaus and transition periods, respectively, yielded 86.8% and 68.6% accurate results (zone A) and 12.1% and 20.0% benign errors (zone B). Continuous EGA yielded 88.5%, 75.4%, and 79.3% accurate results and 8.3%, 14.3%, and 2.4% benign errors for the euglycemic, hyperglycemic, and hypoglycemic transition periods, respectively. Adverse events were mild and unlikely to be device related. This novel CGM system was safe and accurate across the clinically relevant glucose range. © 2011 Diabetes Technology Society.
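The mean absolute percent difference (APD) figures above come from comparing sensor readings against paired reference values; a minimal sketch of the metric, with illustrative paired values, is:

```python
# Mean absolute percent difference (APD) of CGM readings relative to paired
# YSI plasma glucose references. The paired values are illustrative only.
import numpy as np

ysi = np.array([52, 48, 95, 240, 255, 250], dtype=float)   # reference (mg/dl)
cgm = np.array([58, 44, 104, 210, 230, 270], dtype=float)  # sensor (mg/dl)

apd = np.abs(cgm - ysi) / ysi * 100
print(f"mean APD = {apd.mean():.2f}%")
```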
Beauty and ugliness in the bodies and faces of others: an fMRI study of person esthetic judgement.
Martín-Loeches, M; Hernández-Tamames, J A; Martín, A; Urrutia, M
2014-09-26
Whether beauty and ugliness represent two independent judgement categories or, instead, opposite extremes of a single dimension is a matter of debate. In the present 3T-functional Magnetic Resonance Imaging (fMRI) study, 20 participants were scanned while judging faces and nude bodies of people classified as extremely ugly, extremely beautiful, or indifferent. Certain areas, such as the caudate/nucleus accumbens (NAcc) and the anterior cingulate cortex (ACC), exhibited a linear relationship across esthetic judgments, supporting ugliness as the lowest extreme of a beauty continuum. Other regions, such as basal occipital areas, displayed an inverse pattern, with the highest activations for ugly stimuli and the lowest for beautiful ones. Further, several areas were engaged by both the very beautiful and the very ugly stimuli alike; among these were the medial orbitofrontal cortex (mOFC) and the posterior and medial portions of the cingulate gyrus. This is interpreted as the activation of neural circuits related to self- vs. other-assessment. Beauty and ugliness in the brain, at least in relation to natural and biologically and socially relevant stimuli (faces and bodies), appear tightly related and non-independent. Finally, neutral stimuli elicited strong and wide activations of the somatosensory and somatomotor systems together with longer reaction times and higher error rates, probably reflecting the difficulty the human brain has in classifying someone as indifferent. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.
The statistical validity of nursing home survey findings.
Woolley, Douglas C
2011-11-01
The Medicare nursing home survey is a high-stakes process whose findings greatly affect nursing homes, their current and potential residents, and the communities they serve. Therefore, survey findings must achieve high validity. This study looked at the validity of one key assessment made during a nursing home survey: the observation of the rate of errors in administration of medications to residents (med-pass). Statistical analysis of the case under study and of alternative hypothetical cases. A skilled nursing home affiliated with a local medical school. The nursing home administrators and the medical director. Observational study. The probability that state nursing home surveyors make a Type I or Type II error in observing med-pass error rates, based on the current case and on a series of postulated med-pass error rates. In the common situation such as our case, where med-pass errors occur at slightly above a 5% rate after 50 observations, and therefore trigger a citation, the chance that the true rate remains above 5% after a large number of observations is just above 50%. If the true med-pass error rate were as high as 10%, and the survey team wished to achieve 75% accuracy in determining that a citation was appropriate, they would have to make more than 200 med-pass observations. In the more common situation where med pass errors are closer to 5%, the team would have to observe more than 2000 med-passes to achieve even a modest 75% accuracy in their determinations. In settings where error rates are low, large numbers of observations of an activity must be made to reach acceptable validity of estimates for the true rates of errors. In observing key nursing home functions with current methodology, the State Medicare nursing home survey process does not adhere to well-known principles of valid error determination. Alternate approaches in survey methodology are discussed. Copyright © 2011 American Medical Directors Association. Published by Elsevier Inc. All rights reserved.
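The sample-size argument above can be illustrated with a simple binomial model (not necessarily the author's exact framing): when the true error rate sits near the 5% citation threshold, small surveys classify it almost at random, and only very large numbers of observations make the determination reliable.

```python
# Simple binomial illustration of med-pass survey validity. Rates and survey
# sizes are illustrative, not the author's exact calculation.
from scipy.stats import binom

true_rate, n_obs = 0.05, 50
# A citation is triggered when more than 5% of observed med-passes are in error,
# i.e. 3 or more errors out of 50.
p_citation = binom.sf(2, n_obs, true_rate)   # P(errors >= 3)
print(f"P(citation | true rate exactly 5%, n=50) = {p_citation:.2f}")

# A true rate just below the threshold (4%) is still frequently cited unless
# the survey includes thousands of observations.
for n in (50, 200, 2000):
    cutoff = int(0.05 * n)                    # largest error count not exceeding 5%
    p_false_cite = binom.sf(cutoff, n, 0.04)
    print(f"n={n}: P(false citation | true rate 4%) = {p_false_cite:.2f}")
```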
Comparison of surgical techniques of 111 medial malleolar fractures classified by fracture geometry.
Ebraheim, Nabil A; Ludwig, Todd; Weston, John T; Carroll, Trevor; Liu, Jiayong
2014-05-01
Evaluation of operative techniques used for medial malleolar fractures by classifying fracture geometry has not been well documented. One hundred eleven patients with medial malleolar fractures (transverse n = 63, oblique n = 29, vertical n = 7, comminuted n = 12) were included in this study. Seventy-two patients had complicating comorbidities. All patients were treated with buttress plate, lag screw, tension band, or K-wire fixation. Treatment outcomes were evaluated on the basis of radiological outcome (union, malunion, delayed union, or nonunion), need for operative revision, presence of postoperative complications, and AOFAS Ankle-Hindfoot score. For transverse fractures, tension band fixation showed the highest rate of union (79%), highest average AOFAS score (86), lowest revision rate (5%), and lowest complication rate (16%). For oblique fractures, lag screws showed the highest rate of union (71%), highest average AOFAS score (80), lowest revision rate (19%), and lowest complication rate (33%) of the commonly used fixation techniques. For vertical fractures, buttress plating was used in every case but 1, achieving union (whether normal or delayed) in all cases with an average AOFAS score of 84, no revisions, and a 17% complication rate. Comminuted fractures had relatively poor outcomes regardless of fixation method. The results of this study suggest that both tension bands and lag screws result in similar rates of union for transverse fractures of the medial malleolus, but that tension band constructs are associated with less need for revision surgery and fewer complications. In addition, our data demonstrate that oblique fractures were most effectively treated with lag screws and that vertical fractures attained superior outcomes with buttress plating. Level III, retrospective comparative series.
How does aging affect the types of error made in a visual short-term memory ‘object-recall’ task?
Sapkota, Raju P.; van der Linde, Ian; Pardhan, Shahina
2015-01-01
This study examines how normal aging affects the occurrence of different types of incorrect responses in a visual short-term memory (VSTM) object-recall task. Seventeen young (Mean = 23.3 years, SD = 3.76) and 17 normally aging older (Mean = 66.5 years, SD = 6.30) adults participated. Memory stimuli comprised two or four real world objects (the memory load) presented sequentially, each for 650 ms, at random locations on a computer screen. After a 1000 ms retention interval, a test display was presented, comprising an empty box at one of the previously presented two or four memory stimulus locations. Participants were asked to report the name of the object presented at the cued location. Error rates wherein participants reported the names of objects that had been presented in the memory display but not at the cued location (non-target errors) vs. objects that had not been presented at all in the memory display (non-memory errors) were compared. Significant effects of aging, memory load and target recency on error type and absolute error rates were found. Non-target error rate was higher than non-memory error rate in both age groups, indicating that VSTM may have been more often than not populated with partial traces of previously presented items. At high memory load, non-memory error rate was higher in young participants (compared to older participants) when the memory target had been presented at the earliest temporal position. However, non-target error rates exhibited a reversed trend, i.e., greater error rates were found in older participants when the memory target had been presented at the two most recent temporal positions. Data are interpreted in terms of proactive interference (earlier examined non-target items interfering with more recent items), false memories (non-memory items which have a categorical relationship to presented items, interfering with memory targets), slot and flexible resource models, and spatial coding deficits. PMID:25653615
Clinical biochemistry laboratory rejection rates due to various types of preanalytical errors.
Atay, Aysenur; Demir, Leyla; Cuhadar, Serap; Saglam, Gulcan; Unal, Hulya; Aksun, Saliha; Arslan, Banu; Ozkan, Asuman; Sutcu, Recep
2014-01-01
Preanalytical errors, occurring anywhere along the process from test request to admission of the specimen to the laboratory, cause the rejection of samples. The aim of this study was to better explain the reasons for rejected samples and their rates in certain test groups in our laboratory. This preliminary study covered the samples rejected over a one-year period, based on the rates and types of inappropriateness. Test requests and blood samples from the clinical chemistry, immunoassay, hematology, glycated hemoglobin, coagulation, and erythrocyte sedimentation rate test units were evaluated. Types of inappropriateness were evaluated as follows: improperly labelled samples, hemolysed specimens, clotted specimens, insufficient specimen volume, and total request errors. A total of 5,183,582 test requests from 1,035,743 blood collection tubes were considered. The total rejection rate was 0.65%. The rejection rate of the coagulation group (2.28%) was significantly higher than that of the other test groups (P < 0.001), including an insufficient-specimen-volume error rate of 1.38%. Rejection rates due to hemolysis, clotted specimens, and insufficient sample volume were 8%, 24%, and 34%, respectively. Total request errors, particularly unintelligible requests, accounted for 32% of the total for inpatients. The errors were especially attributable to unintelligible or inappropriate test requests, improperly labelled samples from inpatients, and blood drawing errors, particularly insufficient specimen volumes in the coagulation test group.
Pelgrim, Gert Jan; van Hamersvelt, Robbert W; Willemink, Martin J; Schmidt, Bernhard T; Flohr, Thomas; Schilham, Arnold; Milles, Julien; Oudkerk, Matthijs; Leiner, Tim; Vliegenthart, Rozemarijn
2017-09-01
To determine the accuracy of iodine quantification with dual energy computed tomography (DECT) in two high-end CT systems with different spectral imaging techniques. Five tubes with different iodine concentrations (0, 5, 10, 15, 20 mg/ml) were analysed in an anthropomorphic thoracic phantom. Adding two phantom rings simulated increased patient size. For third-generation dual source CT (DSCT), tube voltage combinations of 150Sn and 70, 80, 90, 100 kVp were analysed. For dual layer CT (DLCT), 120 and 140 kVp were used. Scans were repeated three times. Median normalized values and interquartile ranges (IQRs) were calculated for all kVp settings and phantom sizes. Correlation between measured and known iodine concentrations was excellent for both systems (R = 0.999-1.000, p < 0.0001). For DSCT, median measurement errors ranged from -0.5% (IQR -2.0, 2.0%) at 150Sn/70 kVp and -2.3% (IQR -4.0, -0.1%) at 150Sn/80 kVp to -4.0% (IQR -6.0, -2.8%) at 150Sn/90 kVp. For DLCT, median measurement errors ranged from -3.3% (IQR -4.9, -1.5%) at 140 kVp to -4.6% (IQR -6.0, -3.6%) at 120 kVp. Larger phantom sizes increased variability of iodine measurements (p < 0.05). Iodine concentration can be accurately quantified with state-of-the-art DECT systems from two vendors. The lowest absolute errors were found for DSCT using the 150Sn/70 kVp or 150Sn/80 kVp combinations, which was slightly more accurate than 140 kVp in DLCT. • High-end CT scanners allow accurate iodine quantification using different DECT techniques. • Lowest measurement error was found in scans with largest photon energy separation. • Dual-source CT quantified iodine slightly more accurately than dual layer CT.
Welles, Alexander P; Xu, Xiaojiang; Santee, William R; Looney, David P; Buller, Mark J; Potter, Adam W; Hoyt, Reed W
2018-05-18
Core body temperature (TC) is a key physiological metric of thermal heat strain, yet it remains difficult to measure non-invasively in the field. This work used combinations of observations of skin temperature (TS), heat flux (HF), and heart rate (HR) to accurately estimate TC using a Kalman Filter (KF). Data were collected from eight volunteers (age 22 ± 4 yr, height 1.75 ± 0.10 m, body mass 76.4 ± 10.7 kg, and body fat 23.4 ± 5.8%, mean ± standard deviation) while walking at two different metabolic rates (∼350 and ∼550 W) under three conditions (warm: 25 °C, 50% relative humidity (RH); hot-humid: 35 °C, 70% RH; and hot-dry: 40 °C, 20% RH). Skin temperature and HF data were collected from six locations: pectoralis, inner thigh, scapula, sternum, rib cage, and forehead. Kalman filter variables were learned via linear regression and covariance calculations between TC and TS, HF, and HR. Root mean square error (RMSE) and bias were calculated to identify the best performing models. The pectoralis (RMSE 0.18 ± 0.04 °C; bias -0.01 ± 0.09 °C), rib (RMSE 0.18 ± 0.09 °C; bias -0.03 ± 0.09 °C), and sternum (RMSE 0.20 ± 0.10 °C; bias -0.04 ± 0.13 °C) were found to have the lowest error values when using TS, HF, and HR, but using only two of these measures provided similar accuracy. Copyright © 2018. Published by Elsevier Ltd.
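A minimal sketch of a Kalman filter of the general kind described above, with a random-walk core-temperature state updated from a linear combination of skin temperature, heat flux, and heart rate; the coefficients, noise variances, and function names are placeholders, not the learned values from the study.

```python
# Scalar Kalman filter sketch: core temperature modelled as a random walk,
# updated from a linear observation built from TS, HF, and HR. All numeric
# constants are placeholders for illustration.
import numpy as np

def kf_core_temp(ts, hf, hr, coeff=(0.02, 0.001, 0.004), intercept=35.0,
                 q=0.01, r=0.25, tc0=37.0, p0=1.0):
    """Estimate core temperature from skin temp, heat flux and heart rate."""
    tc, p = tc0, p0
    estimates = []
    for obs in zip(ts, hf, hr):
        # Predict: random-walk state model with process variance q.
        p = p + q
        # Observation: a linear proxy for core temperature (placeholder model).
        z = intercept + np.dot(coeff, obs)
        # Update with the Kalman gain.
        k = p / (p + r)
        tc = tc + k * (z - tc)
        p = (1 - k) * p
        estimates.append(tc)
    return np.array(estimates)

# Toy minute-by-minute inputs for a short bout of walking in the heat.
ts = np.linspace(34.0, 36.5, 30)      # skin temperature, deg C
hf = np.linspace(120, 60, 30)         # heat flux, W/m^2
hr = np.linspace(90, 140, 30)         # heart rate, bpm
print(kf_core_temp(ts, hf, hr)[-1])
```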
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kertzscher, Gustavo, E-mail: guke@dtu.dk; Andersen, Claus E., E-mail: clan@dtu.dk; Tanderup, Kari, E-mail: karitand@rm.dk
Purpose: This study presents an adaptive error detection algorithm (AEDA) for real-time in vivo point dosimetry during high dose rate (HDR) or pulsed dose rate (PDR) brachytherapy (BT) where the error identification, in contrast to existing approaches, does not depend on an a priori reconstruction of the dosimeter position. Instead, the treatment is judged based on dose rate comparisons between measurements and calculations of the most viable dosimeter position provided by the AEDA in a data driven approach. As a result, the AEDA compensates for false error cases related to systematic effects of the dosimeter position reconstruction. Given its nearly exclusive dependence on stable dosimeter positioning, the AEDA allows for a substantially simplified and time efficient real-time in vivo BT dosimetry implementation. Methods: In the event of a measured potential treatment error, the AEDA proposes the most viable dosimeter position out of alternatives to the original reconstruction by means of a data driven matching procedure between dose rate distributions. If measured dose rates do not differ significantly from the most viable alternative, the initial error indication may be attributed to a mispositioned or misreconstructed dosimeter (false error). However, if the error declaration persists, no viable dosimeter position can be found to explain the error, hence the discrepancy is more likely to originate from a misplaced or misreconstructed source applicator or from erroneously connected source guide tubes (true error). Results: The AEDA applied on two in vivo dosimetry implementations for pulsed dose rate BT demonstrated that the AEDA correctly described effects responsible for initial error indications. The AEDA was able to correctly identify the major part of all permutations of simulated guide tube swap errors and simulated shifts of individual needles from the original reconstruction. Unidentified errors corresponded to scenarios where the dosimeter position was sufficiently symmetric with respect to error and no-error source position constellations. The AEDA was able to correctly identify all false errors represented by mispositioned dosimeters contrary to an error detection algorithm relying on the original reconstruction. Conclusions: The study demonstrates that the AEDA error identification during HDR/PDR BT relies on a stable dosimeter position rather than on an accurate dosimeter reconstruction, and the AEDA’s capacity to distinguish between true and false error scenarios. The study further shows that the AEDA can offer guidance in decision making in the event of potential errors detected with real-time in vivo point dosimetry.
Error rate information in attention allocation pilot models
NASA Technical Reports Server (NTRS)
Faulkner, W. H.; Onstott, E. D.
1977-01-01
The Northrop urgency decision pilot model was used in a command tracking task to compare the optimized performance of multiaxis attention allocation pilot models whose urgency functions were (1) based on tracking error alone, and (2) based on both tracking error and error rate. A matrix of system dynamics and command inputs was employed, to create both symmetric and asymmetric two axis compensatory tracking tasks. All tasks were single loop on each axis. Analysis showed that a model that allocates control attention through nonlinear urgency functions using only error information could not achieve performance of the full model whose attention shifting algorithm included both error and error rate terms. Subsequent to this analysis, tracking performance predictions for the full model were verified by piloted flight simulation. Complete model and simulation data are presented.
Prediction of municipal solid waste generation using nonlinear autoregressive network.
Younes, Mohammad K; Nopiah, Z M; Basri, N E Ahmad; Basri, H; Abushammala, Mohammed F M; Maulud, K N A
2015-12-01
Most of the developing countries have solid waste management problems. Solid waste strategic planning requires accurate prediction of the quality and quantity of the generated waste. In developing countries, such as Malaysia, the solid waste generation rate is increasing rapidly, due to population growth and new consumption trends that characterize society. This paper proposes an artificial neural network (ANN) approach using feedforward nonlinear autoregressive network with exogenous inputs (NARX) to predict annual solid waste generation in relation to demographic and economic variables like population number, gross domestic product, electricity demand per capita and employment and unemployment numbers. In addition, variable selection procedures are also developed to select a significant explanatory variable. The model evaluation was performed using coefficient of determination (R(2)) and mean square error (MSE). The optimum model that produced the lowest testing MSE (2.46) and the highest R(2) (0.97) had three inputs (gross domestic product, population and employment), eight neurons and one lag in the hidden layer, and used Fletcher-Powell's conjugate gradient as the training algorithm.
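As a rough stand-in for the NARX network described above (here approximated with a plain feedforward regressor on one lagged target value plus the exogenous inputs), a sketch with synthetic data might look like this; the network size and variable names are illustrative, not the paper's configuration.

```python
# Feedforward regression with one autoregressive lag of the target plus
# exogenous inputs (GDP, population, employment), as a simplified NARX stand-in.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(3)
years = 40
gdp, pop, emp = (rng.random(years) for _ in range(3))
waste = 0.5 * gdp + 0.3 * pop + 0.2 * emp + 0.05 * rng.standard_normal(years)

# Lagged target (previous year's waste) plus the three exogenous variables.
X = np.column_stack([waste[:-1], gdp[1:], pop[1:], emp[1:]])
y = waste[1:]
X_train, X_test, y_train, y_test = X[:-8], X[-8:], y[:-8], y[-8:]

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X_train, y_train)
pred = model.predict(X_test)
print("MSE:", mean_squared_error(y_test, pred), "R2:", r2_score(y_test, pred))
```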
Five-year growth trajectories of kindergarten children with learning difficulties in mathematics.
Morgan, Paul L; Farkas, George; Qiong Wu
2009-01-01
The investigators used data from the Early Childhood Longitudinal Study-Kindergarten Cohort (ECLS-K) to estimate whether and to what extent the timing and persistence of mathematics difficulties (MD) in kindergarten predicted children's first through fifth grade math growth trajectories. Results indicated that children persistently displaying MD (i.e., those experiencing MD in both fall and spring of kindergarten) had the lowest subsequent growth rates, children with MD in spring only had the second-lowest growth rates, and children with MD in the fall only (and who had thus recovered from their MD by the spring of kindergarten) had the next-lowest growth rates. The children who did not have MD in either fall or spring of kindergarten had the highest growth rates. These results were observed prior to and after statistical control for additional variables. They indicate that measuring the timing and persistence of kindergarten children's mathematics learning difficulties may help identify those most at risk for failing to become mathematically proficient during elementary school.
7 CFR 275.23 - Determination of State agency program performance.
Code of Federal Regulations, 2011 CFR
2011-01-01
... NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE FOOD STAMP AND FOOD DISTRIBUTION PROGRAM PERFORMANCE REPORTING... section, the adjusted regressed payment error rate shall be calculated to yield the State agency's payment error rate. The adjusted regressed payment error rate is given by r1″ + r2″. (ii) If FNS determines...
Derks, E M; Zwinderman, A H; Gamazon, E R
2017-05-01
Population divergence impacts the degree of population stratification in Genome Wide Association Studies. We aim to: (i) investigate type-I error rate as a function of population divergence (FST) in multi-ethnic (admixed) populations; (ii) evaluate the statistical power and effect size estimates; and (iii) investigate the impact of population stratification on the results of gene-based analyses. Quantitative phenotypes were simulated. Type-I error rate was investigated for Single Nucleotide Polymorphisms (SNPs) with varying levels of FST between the ancestral European and African populations. Type-II error rate was investigated for a SNP characterized by a high value of FST. In all tests, genomic MDS components were included to correct for population stratification. Type-I and type-II error rates were adequately controlled in a population that included two distinct ethnic populations but not in admixed samples. Statistical power was reduced in the admixed samples. Gene-based tests showed no residual inflation in type-I error rate.
Collisional excitation of intersteller molecules: Ammonia
NASA Technical Reports Server (NTRS)
Green, S.
1981-01-01
Theoretical rate constants are presented for excitation of NH3 by collisions with He. The lowest 22 levels of ortho-NH3 and the lowest 16 levels of para-NH3 are considered at kinetic temperatures of 15 to 300 K.
Association between split selection instability and predictive error in survival trees.
Radespiel-Tröger, M; Gefeller, O; Rabenstein, T; Hothorn, T
2006-01-01
To evaluate split selection instability in six survival tree algorithms and its relationship with predictive error by means of a bootstrap study. We study the following algorithms: logrank statistic with multivariate p-value adjustment without pruning (LR), Kaplan-Meier distance of survival curves (KM), martingale residuals (MR), Poisson regression for censored data (PR), within-node impurity (WI), and exponential log-likelihood loss (XL). With the exception of LR, initial trees are pruned by using split-complexity, and final trees are selected by means of cross-validation. We employ a real dataset from a clinical study of patients with gallbladder stones. The predictive error is evaluated using the integrated Brier score for censored data. The relationship between split selection instability and predictive error is evaluated by means of box-percentile plots, covariate and cutpoint selection entropy, and cutpoint selection coefficients of variation, respectively, in the root node. We found a positive association between covariate selection instability and predictive error in the root node. LR yields the lowest predictive error, while KM and MR yield the highest predictive error. The predictive error of survival trees is related to split selection instability. Based on the low predictive error of LR, we recommend the use of this algorithm for the construction of survival trees. Unpruned survival trees with multivariate p-value adjustment can perform equally well compared to pruned trees. The analysis of split selection instability can be used to communicate the results of tree-based analyses to clinicians and to support the application of survival trees.
Trommer, J.T.; Loper, J.E.; Hammett, K.M.; Bowman, Georgia
1996-01-01
Hydrologists use several traditional techniques for estimating peak discharges and runoff volumes from ungaged watersheds. However, applying these techniques to watersheds in west-central Florida requires that empirical relationships be extrapolated beyond tested ranges. As a result, there is some uncertainty as to their accuracy. Sixty-six storms in 15 west-central Florida watersheds were modeled using (1) the rational method, (2) the U.S. Geological Survey regional regression equations, (3) the Natural Resources Conservation Service (formerly the Soil Conservation Service) TR-20 model, (4) the Army Corps of Engineers HEC-1 model, and (5) the Environmental Protection Agency SWMM model. The watersheds ranged between fully developed urban and undeveloped natural watersheds. Peak discharges and runoff volumes were estimated using standard or recommended methods for determining input parameters. All model runs were uncalibrated and the selection of input parameters was not influenced by observed data. The rational method, only used to calculate peak discharges, overestimated 45 storms, underestimated 20 storms and estimated the same discharge for 1 storm. The mean estimation error for all storms indicates the method overestimates the peak discharges. Estimation errors were generally smaller in the urban watersheds and larger in the natural watersheds. The U.S. Geological Survey regression equations provide peak discharges for storms of specific recurrence intervals. Therefore, direct comparison with observed data was limited to sixteen observed storms that had precipitation equivalent to specific recurrence intervals. The mean estimation error for all storms indicates the method overestimates both peak discharges and runoff volumes. Estimation errors were smallest for the larger natural watersheds in Sarasota County, and largest for the small watersheds located in the eastern part of the study area. The Natural Resources Conservation Service TR-20 model overestimated peak discharges for 45 storms and underestimated 21 storms, and overestimated runoff volumes for 44 storms and underestimated 22 storms. The mean estimation error for all storms modeled indicates that the model overestimates peak discharges and runoff volumes. The smaller estimation errors in both peak discharges and runoff volumes were for storms occurring in the urban watersheds, and the larger errors were for storms occurring in the natural watersheds. The HEC-1 model overestimated peak discharge rates for 55 storms and underestimated 11 storms. Runoff volumes were overestimated for 44 storms and underestimated for 22 storms using the Army Corps of Engineers HEC-1 model. The mean estimation error for all the storms modeled indicates that the model overestimates peak discharge rates and runoff volumes. Generally, the smaller estimation errors in peak discharges were for storms occurring in the urban watersheds, and the larger errors were for storms occurring in the natural watersheds. Estimation errors in runoff volumes, however, were smallest for the 3 natural watersheds located in the southernmost part of Sarasota County. The Environmental Protection Agency Storm Water Management model produced similar peak discharges and runoff volumes when using both the Green-Ampt and Horton infiltration methods. Estimated peak discharge and runoff volume data calculated with the Horton method were only slightly higher than those calculated with the Green-Ampt method.
The mean estimation error for all the storms modeled indicates the model using the Green-Ampt infiltration method overestimates peak discharges and slightly underestimates runoff volumes. Using the Horton infiltration method, the model overestimates both peak discharges and runoff volumes. The smaller estimation errors in both peak discharges and runoff volumes were for storms occurring in the five natural watersheds in Sarasota County with the least amount of impervious cover and the lowest slopes. The largest er
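For reference, the rational method evaluated above estimates peak discharge as Q = C·i·A. A minimal sketch in customary US units is shown below (the product of intensity in inches per hour and area in acres gives discharge approximately in cubic feet per second); the coefficient and storm values are illustrative and are not taken from the study.

```python
# Rational method sketch: Q = C * i * A
# C: dimensionless runoff coefficient, i: rainfall intensity (in/hr), A: drainage area (acres).
# With these customary US units the result is approximately in cubic feet per second (cfs).
def rational_peak_discharge(c, intensity_in_per_hr, area_acres):
    return c * intensity_in_per_hr * area_acres

# Illustrative values only: a partly urbanised basin, 2 in/hr design storm, 150 acres.
print(f"{rational_peak_discharge(0.45, 2.0, 150.0):.0f} cfs")
```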
Newman, Craig G J; Bevins, Adam D; Zajicek, John P; Hodges, John R; Vuillermoz, Emil; Dickenson, Jennifer M; Kelly, Denise S; Brown, Simona; Noad, Rupert F
2018-01-01
Ensuring reliable administration and reporting of cognitive screening tests is fundamental in establishing good clinical practice and research. This study captured the rate and type of errors in clinical practice, using the Addenbrooke's Cognitive Examination-III (ACE-III), and then the reduction in error rate using a computerized alternative, the ACEmobile app. In study 1, we evaluated ACE-III assessments completed in National Health Service (NHS) clinics ( n = 87) for administrator error. In study 2, ACEmobile and ACE-III were then evaluated for their ability to capture accurate measurement. In study 1, 78% of clinically administered ACE-IIIs were either scored incorrectly or had arithmetical errors. In study 2, error rates seen in the ACE-III were reduced by 85%-93% using ACEmobile. Error rates are ubiquitous in routine clinical use of cognitive screening tests and the ACE-III. ACEmobile provides a framework for supporting reduced administration, scoring, and arithmetical error during cognitive screening.
Alexander, John H; Levy, Elliott; Lawrence, Jack; Hanna, Michael; Waclawski, Anthony P; Wang, Junyuan; Califf, Robert M; Wallentin, Lars; Granger, Christopher B
2013-09-01
In ARISTOTLE, apixaban resulted in a 21% reduction in stroke, a 31% reduction in major bleeding, and an 11% reduction in death. However, approval of apixaban was delayed to investigate a statement in the clinical study report that "7.3% of subjects in the apixaban group and 1.2% of subjects in the warfarin group received, at some point during the study, a container of the wrong type." Rates of study medication dispensing error were characterized through reviews of study medication container tear-off labels in 6,520 participants from randomly selected study sites. The potential effect of dispensing errors on study outcomes was statistically simulated in sensitivity analyses in the overall population. The rate of medication dispensing error resulting in treatment error was 0.04%. Rates of participants receiving at least 1 incorrect container were 1.04% (34/3,273) in the apixaban group and 0.77% (25/3,247) in the warfarin group. Most of the originally reported errors were data entry errors in which the correct medication container was dispensed but the wrong container number was entered into the case report form. Sensitivity simulations in the overall trial population showed no meaningful effect of medication dispensing error on the main efficacy and safety outcomes. Rates of medication dispensing error were low and balanced between treatment groups. The initially reported dispensing error rate was the result of data recording and data management errors and not true medication dispensing errors. These analyses confirm the previously reported results of ARISTOTLE. © 2013.
Propagation of stage measurement uncertainties to streamflow time series
NASA Astrophysics Data System (ADS)
Horner, Ivan; Le Coz, Jérôme; Renard, Benjamin; Branger, Flora; McMillan, Hilary
2016-04-01
Streamflow uncertainties due to stage measurement errors are generally overlooked in the promising probabilistic approaches that have emerged in the last decade. We introduce an original error model for propagating stage uncertainties through a stage-discharge rating curve within a Bayesian probabilistic framework. The method takes into account both rating curve uncertainty (parametric and structural errors) and stage uncertainty (systematic and non-systematic errors). Practical ways to estimate the different types of stage errors are also presented: (1) non-systematic errors due to instrument resolution and precision and non-stationary waves and (2) systematic errors due to gauge calibration against the staff gauge. The method is illustrated at a site where the rating-curve-derived streamflow can be compared with an accurate streamflow reference. The agreement between the two time series is satisfactory overall. Moreover, the quantification of uncertainty is also satisfactory, since the streamflow reference is compatible with the streamflow uncertainty intervals derived from the rating curve and the stage uncertainties. Illustrations from other sites are also presented. Results vary considerably depending on site features. In some cases, streamflow uncertainty is mainly due to stage measurement errors. The results also show the importance of discriminating systematic and non-systematic stage errors, especially for long-term flow averages. Perspectives for improving and validating the streamflow uncertainty estimates are discussed in closing.
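As an illustration of the propagation step described above (a simple Monte Carlo sketch, not the authors' Bayesian implementation), the code below pushes systematic and non-systematic stage errors through an assumed power-law rating curve Q = a(h − b)^c and reports the resulting streamflow uncertainty intervals. All parameter values and error magnitudes are hypothetical.

```python
# Monte Carlo propagation of stage errors through a power-law rating curve (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
a, b, c = 30.0, 0.10, 1.7          # hypothetical rating-curve parameters: Q = a * (h - b)^c
h_obs = np.array([0.8, 1.2, 2.0])  # observed stages (m)

n_mc = 10_000
sys_err = rng.normal(0.0, 0.01, size=n_mc)                    # systematic error (gauge calibration), m
nonsys_err = rng.normal(0.0, 0.02, size=(n_mc, h_obs.size))   # non-systematic error (resolution, waves), m
h_mc = h_obs + sys_err[:, None] + nonsys_err                  # same systematic draw applied to all stages
q_mc = a * np.clip(h_mc - b, 0.0, None) ** c                  # propagate through the rating curve

for i, h in enumerate(h_obs):
    lo, hi = np.percentile(q_mc[:, i], [2.5, 97.5])
    print(f"h = {h:.2f} m -> Q 95% interval: [{lo:.1f}, {hi:.1f}] m^3/s")
```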
Tully, Mary P; Buchan, Iain E
2009-12-01
To investigate the prevalence of prescribing errors identified by pharmacists in hospital inpatients and the factors influencing error identification rates by pharmacists throughout hospital admission. 880-bed university teaching hospital in North-west England. Data about prescribing errors identified by pharmacists (median 9, range 4-17, collecting data per day) when conducting routine work were prospectively recorded on 38 randomly selected days over 18 months. Proportion of new medication orders in which an error was identified; predictors of error identification rate, adjusted for workload and seniority of pharmacist, day of week, type of ward or stage of patient admission. 33,012 new medication orders were reviewed for 5,199 patients; 3,455 errors (in 10.5% of orders) were identified for 2,040 patients (39.2%; median 1, range 1-12). Most were problem orders (1,456, 42.1%) or potentially significant errors (1,748, 50.6%); 197 (5.7%) were potentially serious; 1.6% (n = 54) were potentially severe or fatal. Errors were 41% (CI: 28-56%) more likely to be identified at the patient's admission than at other times, independent of confounders. Workload was the strongest predictor of error identification rates, with 40% (33-46%) fewer errors identified on the busiest days than at other times. Errors identified fell by 1.9% (1.5-2.3%) for every additional chart checked, independent of confounders. Pharmacists routinely identify errors but increasing workload may reduce identification rates. Where resources are limited, they may be better spent on identifying and addressing errors immediately after admission to hospital.
Quadratic Zeeman effect for hydrogen: A method for rigorous bound-state error estimates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fonte, G.; Falsaperla, P.; Schiffrer, G.
1990-06-01
We present a variational method, based on direct minimization of energy, for the calculation of eigenvalues and eigenfunctions of a hydrogen atom in a strong uniform magnetic field in the framework of the nonrelativistic theory (quadratic Zeeman effect). Using semiparabolic coordinates and a harmonic-oscillator basis, we show that it is possible to give rigorous error estimates for both eigenvalues and eigenfunctions by applying some results of Kato (Proc. Phys. Soc. Jpn. 4, 334 (1949)). The method can be applied in this simple form only to the lowest level of given angular momentum and parity, but it is also possible to apply it to any excited state by using the standard Rayleigh-Ritz diagonalization method. However, due to the particular basis, the method is expected to be more effective, the weaker the field and the smaller the excitation energy, while the results of Kato we have employed lead to good estimates only when the level spacing is not too small. We present a numerical application to the m^p = 0^+ ground state and the lowest m^p = 1^- excited state, giving results that are among the most accurate in the literature for magnetic fields up to about 10^10 G.
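For context, the nonrelativistic Hamiltonian usually written for this problem (a standard reference form, not a quotation from the paper) is, in atomic units with the field B along z and γ = B/B₀ the reduced field strength:

```latex
% Standard nonrelativistic Hamiltonian for hydrogen in a uniform magnetic field along z,
% in atomic units; \gamma = B / B_0 with B_0 \approx 2.35\times10^{9}\,\mathrm{G}.
% The \gamma^{2} term is the quadratic Zeeman (diamagnetic) term.
H = -\tfrac{1}{2}\nabla^{2} - \frac{1}{r}
    + \tfrac{1}{2}\gamma L_{z}
    + \tfrac{1}{8}\gamma^{2}\left(x^{2}+y^{2}\right)
```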
Dudoit, Sandrine; Gilbert, Houston N.; van der Laan, Mark J.
2014-01-01
Summary This article proposes resampling-based empirical Bayes multiple testing procedures for controlling a broad class of Type I error rates, defined as generalized tail probability (gTP) error rates, gTP(q, g) = Pr(g(Vn, Sn) > q), and generalized expected value (gEV) error rates, gEV(g) = E[g(Vn, Sn)], for arbitrary functions g(Vn, Sn) of the numbers of false positives Vn and true positives Sn. Of particular interest are error rates based on the proportion g(Vn, Sn) = Vn/(Vn + Sn) of Type I errors among the rejected hypotheses, such as the false discovery rate (FDR), FDR = E[Vn/(Vn + Sn)]. The proposed procedures offer several advantages over existing methods. They provide Type I error control for general data generating distributions, with arbitrary dependence structures among variables. Gains in power are achieved by deriving rejection regions based on guessed sets of true null hypotheses and null test statistics randomly sampled from joint distributions that account for the dependence structure of the data. The Type I error and power properties of an FDR-controlling version of the resampling-based empirical Bayes approach are investigated and compared to those of widely-used FDR-controlling linear step-up procedures in a simulation study. The Type I error and power trade-off achieved by the empirical Bayes procedures under a variety of testing scenarios allows this approach to be competitive with or outperform the Storey and Tibshirani (2003) linear step-up procedure, as an alternative to the classical Benjamini and Hochberg (1995) procedure. PMID:18932138
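For comparison with the empirical Bayes procedures discussed above, the classical Benjamini and Hochberg (1995) linear step-up procedure cited in the abstract can be sketched in a few lines of Python (a textbook implementation, not the authors' resampling-based method); the p-values in the example are made up.

```python
# Benjamini-Hochberg linear step-up procedure: reject the k hypotheses with the
# smallest p-values, where k is the largest i such that p_(i) <= (i/m) * q.
import numpy as np

def bh_rejections(pvalues, q=0.05):
    p = np.asarray(pvalues)
    m = p.size
    order = np.argsort(p)
    thresholds = (np.arange(1, m + 1) / m) * q
    below = p[order] <= thresholds
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True
    return rejected

# Illustrative use with made-up p-values.
print(bh_rejections([0.001, 0.008, 0.039, 0.041, 0.20, 0.74], q=0.05))
```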
du Rose, Alister; Breen, Alan
2016-03-10
Intervertebral motion impairment is widely thought to be related to chronic back disability; however, the movements of inter-vertebral pairs are not independent of each other and motion may also be related to morphology. Furthermore, maximum intervertebral range of motion (IV-RoMmax) is difficult to measure accurately in living subjects. The purpose of this study was to explore possible relationships between IV-RoMmax and lordosis, initial attainment rate and IV-RoMmax at other levels during weight-bearing flexion using quantitative fluoroscopy (QF). Continuous QF motion sequences were recorded during controlled active sagittal flexion of 60° in 18 males (mean age 27.6, SD 4.4) with no history of low back pain in the previous year. IV-RoMmax, lordotic angle, and initial attainment rate at all inter-vertebral levels from L2-S1 were extracted. Relationships between IV-RoMmax and the other variables were explored using correlation coefficients, and simple linear regression was used to determine the effects of any significant relationships. Within- and between-observer repeatability of IV-RoMmax and initial attainment rate measurements were assessed in a sub-set of ten participants, using the intra-class correlation coefficient (ICC) and standard error of measurement (SEM). QF measurements were highly repeatable, with the lowest ICC for IV-RoMmax being 0.94 (0.80-0.99) and the highest SEM 0.76°. For initial attainment rate, the lowest ICC was 0.84 (0.49-0.96) and the highest SEM 0.036. The results also demonstrated significant positive and negative correlations between IV-RoMmax and IV-RoMmax at other lumbar levels (r = -0.64-0.65), lordosis (r = -0.52-0.54), and initial attainment rate (r = -0.64-0.73). Simple linear regression analysis of all significant relationships showed that these predict between 28 and 42 % of the variance in IV-RoMmax. This study found weak to moderate effects of individual kinematic variables and lumbar lordosis on IV-RoMmax at other intervertebral levels. These effects, when combined, may be important when such levels are being considered by healthcare professionals as potential sources of pain generation. Multivariate investigations in larger samples are warranted.
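The standard error of measurement quoted above is conventionally derived from a reliability coefficient such as the ICC together with the between-subject standard deviation; the abstract does not spell out its exact computation, so the relationship below is given only as the usual textbook form:

```latex
% Conventional relationship between the standard error of measurement (SEM), a reliability
% coefficient such as the ICC, and the between-subject standard deviation (SD) of the measurements.
\mathrm{SEM} = \mathrm{SD}\,\sqrt{1-\mathrm{ICC}}
```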
Nickerson, Naomi H; Li, Ying; Benjamin, Simon C
2013-01-01
A scalable quantum computer could be built by networking together many simple processor cells, thus avoiding the need to create a single complex structure. The difficulty is that realistic quantum links are very error-prone. A solution is for cells to repeatedly communicate with each other and so purify any imperfections; however, prior studies suggest that the cells themselves must then have prohibitively low internal error rates. Here we describe a method by which even error-prone cells can perform purification: groups of cells generate shared resource states, which then enable stabilization of topologically encoded data. Given a realistically noisy network (≥10% error rate), we find that our protocol can succeed provided that intra-cell error rates for initialisation, state manipulation and measurement are below 0.82%. This level of fidelity is already achievable in several laboratory systems.
Zhang, Jiayu; Li, Jie; Zhang, Xi; Che, Xiaorui; Huang, Yugang; Feng, Kaiqiang
2018-01-01
The Semi-Strapdown Inertial Navigation System (SSINS) provides a new solution to attitude measurement of a high-speed rotating missile. However, micro-electro-mechanical-systems (MEMS) inertial measurement unit (MIMU) outputs are corrupted by significant sensor errors. In order to improve the navigation precision, a rotation modulation technique, the Rotation Semi-Strapdown Inertial Navigation System (RSSINS), is introduced into the SINS. In practice, however, a stable modulation angular rate is difficult to achieve in a high-speed rotation environment. The changing rotary angular rate has an impact on the inertial sensor error self-compensation. In this paper, the influence of modulation angular rate error, including the acceleration-deceleration process and instability of the angular rate, on the navigation accuracy of RSSINS is derived, and the error characteristics of the reciprocating rotation scheme are analyzed. A new compensation method is proposed to remove or reduce sensor errors so as to make it possible to maintain high-precision autonomous navigation performance by MIMU when there is no external aid. Experiments have been carried out to validate the performance of the method. In addition, the proposed method is applicable for modulation angular rate error compensation under various dynamic conditions. PMID:29734707
Mogull, Scott A
2017-01-01
Previous reviews estimated that approximately 20 to 25% of assertions cited from original research articles, or "facts," are inaccurately quoted in the medical literature. These reviews noted that the original studies were dissimilar and only began to compare the methods of the original studies. The aim of this review is to examine the methods of the original studies and provide a more specific rate of incorrectly cited assertions, or quotation errors, in original research articles published in medical journals. Additionally, the estimate of quotation errors calculated here is based on the ratio of quotation errors to quotations examined (a percent) rather than the more prevalent and weighted metric of quotation errors to the references selected. Overall, this resulted in a lower estimate of the quotation error rate in original medical research articles. A total of 15 studies met the criteria for inclusion in the primary quantitative analysis. Quotation errors were divided into two categories: content ("factual") or source (improper indirect citation) errors. Content errors were further subdivided into major and minor errors depending on the degree that the assertion differed from the original source. The rate of quotation errors recalculated here is 14.5% (10.5% to 18.6% at a 95% confidence interval). These content errors are predominantly, 64.8% (56.1% to 73.5% at a 95% confidence interval), major errors or cited assertions in which the referenced source either fails to substantiate, is unrelated to, or contradicts the assertion. Minor errors, which are an oversimplification, overgeneralization, or trivial inaccuracies, are 35.2% (26.5% to 43.9% at a 95% confidence interval). Additionally, improper secondary (or indirect) citations, which are distinguished from calculations of quotation accuracy, occur at a rate of 10.4% (3.4% to 17.5% at a 95% confidence interval).
ERIC Educational Resources Information Center
Birjandi, Parviz; Siyyari, Masood
2016-01-01
This paper presents the results of an investigation into the role of two personality traits (i.e. Agreeableness and Conscientiousness from the Big Five personality traits) in predicting rating error in the self-assessment and peer-assessment of composition writing. The average self/peer-rating errors of 136 Iranian English major undergraduates…
National Suicide Rates a Century after Durkheim: Do We Know Enough to Estimate Error?
ERIC Educational Resources Information Center
Claassen, Cynthia A.; Yip, Paul S.; Corcoran, Paul; Bossarte, Robert M.; Lawrence, Bruce A.; Currier, Glenn W.
2010-01-01
Durkheim's nineteenth-century analysis of national suicide rates dismissed prior concerns about mortality data fidelity. Over the intervening century, however, evidence documenting various types of error in suicide data has only mounted, and surprising levels of such error continue to be routinely uncovered. Yet the annual suicide rate remains the…
The Relationship of Error Rate and Comprehension in Second and Third Grade Oral Reading Fluency
ERIC Educational Resources Information Center
Abbott, Mary; Wills, Howard; Miller, Angela; Kaufman, Journ
2012-01-01
This study explored the relationships of oral reading speed and error rate on comprehension with second and third grade students with identified reading risk. The study included 920 second and 974 third graders. Results found a significant relationship between error rate, oral reading fluency, and reading comprehension performance, and…
What Are Error Rates for Classifying Teacher and School Performance Using Value-Added Models?
ERIC Educational Resources Information Center
Schochet, Peter Z.; Chiang, Hanley S.
2013-01-01
This article addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using a realistic performance measurement system scheme based on hypothesis testing, the authors develop error rate formulas based on ordinary least squares and…
A retrospective review of medical errors adjudicated in court between 2002 and 2012 in Spain.
Giraldo, Priscila; Sato, Luke; Sala, María; Comas, Merce; Dywer, Kathy; Castells, Xavier
2016-02-01
This paper describes verdicts in court involving injury-producing medical errors in Spain. A descriptive analysis of 1041 closed court verdicts from Spain between January 2002 and December 2012 was performed. It was determined whether a medical error had occurred, and among those with medical error (n = 270), characteristics and results of litigation were analyzed. Data on litigation were obtained from the Thomson Reuters Aranzadi Westlaw databases. All verdicts involving the health system were reviewed and classified according to the presence of medical error. Among those, contributory factors, medical specialty involved, health impact (death, disability and severity) and results of litigation (resolution, time to verdict and economic compensations) were described. Medical errors were involved in 25.9% of court verdicts. The cause of medical error was a diagnosis-related problem in 25.1% and surgical treatment in 22.2%, and Obstetrics-Gynecology was the most frequently involved specialty (21%). Most of them were of high severity (59.4%), and one-third (32%) caused death. The average time interval between the occurrence of the error and the verdict was 7.8 years. The average indemnity payment was €239 505.24; the highest was in Psychiatry (€7 585 075.86) and the lowest in Emergency Medicine (€69 871.19). This study indicates that in Spain medical errors are common among verdicts involving the health system, most of them causing high-severity adverse outcomes. The interval between the medical error and the verdict is excessive, and there is a wide range of economic compensation. © The Author 2015. Published by Oxford University Press in association with the International Society for Quality in Health Care; all rights reserved.
ERIC Educational Resources Information Center
Shear, Benjamin R.; Zumbo, Bruno D.
2013-01-01
Type I error rates in multiple regression, and hence the chance for false positive research findings, can be drastically inflated when multiple regression models are used to analyze data that contain random measurement error. This article shows the potential for inflated Type I error rates in commonly encountered scenarios and provides new…
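A minimal simulation sketch of the phenomenon described above (illustrative assumptions only, not the authors' scenarios): a covariate with no true effect but correlated with an error-prone covariate shows an inflated Type I error rate, because the mismeasured covariate cannot fully adjust for its own effect.

```python
# Sketch: Type I error inflation for x2 when the true predictor x1 is measured with error.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n, reps, rho, reliability = 200, 2000, 0.6, 0.5
hits = 0
for _ in range(reps):
    x1 = rng.normal(size=n)
    x2 = rho * x1 + np.sqrt(1 - rho**2) * rng.normal(size=n)   # correlated with x1, no true effect
    y = 1.0 * x1 + rng.normal(size=n)                          # outcome depends on x1 only
    w1 = x1 + np.sqrt((1 - reliability) / reliability) * rng.normal(size=n)  # error-prone x1
    X = sm.add_constant(np.column_stack([w1, x2]))
    hits += sm.OLS(y, X).fit().pvalues[2] < 0.05               # test H0: beta_x2 = 0

print(f"empirical Type I error for x2 at alpha = 0.05: {hits / reps:.3f}")
```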
Decrease in medical command errors with use of a "standing orders" protocol system.
Holliman, C J; Wuerz, R C; Meador, S A
1994-05-01
The purpose of this study was to determine the physician medical command error rates and paramedic error rates after implementation of a "standing orders" protocol system for medical command. These patient-care error rates were compared with the previously reported rates for a "required call-in" medical command system (Ann Emerg Med 1992; 21(4):347-350). A secondary aim of the study was to determine if the on-scene time interval was increased by the standing orders system. A prospective audit of prehospital advanced life support (ALS) trip sheets was conducted at an urban ALS paramedic service with on-line physician medical command from three local hospitals. All ALS run sheets from the start time of the standing orders system (April 1, 1991) for a 1-year period ending on March 30, 1992, were reviewed as part of an ongoing quality assurance program. Cases were identified as nonjustifiably deviating from regional emergency medical services (EMS) protocols as judged by agreement of three physician reviewers (the same methodology as a previously reported command error study in the same ALS system). Medical command and paramedic errors were identified from the prehospital ALS run sheets and categorized. Two thousand one ALS runs were reviewed; 24 physician errors (1.2% of the 1,928 "command" runs) and eight paramedic errors (0.4% of runs) were identified. The physician error rate was decreased from the 2.6% rate in the previous study (P < .0001 by chi-square analysis). The on-scene time interval did not increase with the "standing orders" system. (ABSTRACT TRUNCATED AT 250 WORDS)
Quantifying Data Quality for Clinical Trials Using Electronic Data Capture
Nahm, Meredith L.; Pieper, Carl F.; Cunningham, Maureen M.
2008-01-01
Background Historically, only partial assessments of data quality have been performed in clinical trials, for which the most common method of measuring database error rates has been to compare the case report form (CRF) to database entries and count discrepancies. Importantly, errors arising from medical record abstraction and transcription are rarely evaluated as part of such quality assessments. Electronic Data Capture (EDC) technology has had a further impact, as paper CRFs typically leveraged for quality measurement are not used in EDC processes. Methods and Principal Findings The National Institute on Drug Abuse Treatment Clinical Trials Network has developed, implemented, and evaluated methodology for holistically assessing data quality on EDC trials. We characterize the average source-to-database error rate (14.3 errors per 10,000 fields) for the first year of use of the new evaluation method. This error rate was significantly lower than the average of published error rates for source-to-database audits, and was similar to CRF-to-database error rates reported in the published literature. We attribute this largely to an absence of medical record abstraction on the trials we examined, and to an outpatient setting characterized by less acute patient conditions. Conclusions Historically, medical record abstraction is the most significant source of error by an order of magnitude, and should be measured and managed during the course of clinical trials. Source-to-database error rates are highly dependent on the amount of structured data collection in the clinical setting and on the complexity of the medical record, dependencies that should be considered when developing data quality benchmarks. PMID:18725958
Interactions between Patients, Providers, and Health Systems and Technical Quality of Care
Yelin, Edward; Yazdany, Jinoos; Tonner, Chris; Trupin, Laura; Criswell, Lindsey A.; Katz, Patricia; Schmajuk, Gabriela
2014-01-01
Objectives Prior studies have established disparities by race/ethnicity and socioeconomic status (SES) in the kind, quantity, and technical quality of SLE care and outcomes. In this study we evaluate whether disparities exist in assessments of interactions with health care providers and health plans and whether such interactions affect the technical quality of SLE care. Methods Data derive from the Lupus Outcomes Study (LOS). Principal data collection is an annual structured phone interview including items from the Consumer Assessment of Health Plans and Interpersonal Processes of Care Scale measuring dimensions of health care interactions. We use general estimating equations to assess whether disparities exist by race/ethnicity and SES in being in the lowest quartile of ratings of such interactions and whether ratings in the lowest quartile of interactions are associated with technical quality of care after adjustment for sociodemographic and disease characteristics. Results In the 2012 LOS interview, there were 793 respondents, of whom 640 had ≥1 visits to their principal SLE provider. Non-white race/ethnicity and education were not associated with low ratings on any dimension of provider or system interaction; poverty was associated only with low ratings of health plan interactions. After adjustment for demographics, SLE status, and health care variables, ratings in the lowest quartile on all dimensions were associated with significantly lower technical quality of care. Conclusions Ratings in the lowest quartile on all dimensions of interactions with providers and the health care system were associated with lower technical quality of care, potentially resulting in poorer SLE outcomes. PMID:25132660
Interactions between patients, providers, and health systems and technical quality of care.
Yelin, Edward; Yazdany, Jinoos; Tonner, Chris; Trupin, Laura; Criswell, Lindsey A; Katz, Patricia; Schmajuk, Gabriela
2015-03-01
Prior studies have established disparities by race/ethnicity and socioeconomic status (SES) in the kind, quantity, and technical quality of systemic lupus erythematosus (SLE) care and outcomes. In this study we evaluate whether disparities exist in assessments of interactions with health care providers and health plans and whether such interactions affect the technical quality of SLE care. Data derive from the Lupus Outcomes Study (LOS). Principal data collection is an annual structured phone interview including items from the Consumer Assessment of Health Plans and Interpersonal Processes of Care Scale measuring dimensions of health care interactions. We use general estimating equations to assess whether disparities exist by race/ethnicity and SES in being in the lowest quartile of ratings of such interactions and whether ratings in the lowest quartile of interactions are associated with technical quality of care after adjustment for sociodemographic and disease characteristics. In the 2012 LOS interview, there were 793 respondents, of whom 640 had ≥1 visit to their principal SLE provider. Nonwhite race/ethnicity and education were not associated with low ratings on any dimension of provider or system interaction; poverty was associated only with low ratings of health plan interactions. After adjustment for demographics, SLE status, and health care variables, ratings in the lowest quartile on all dimensions were associated with significantly lower technical quality of care. Ratings in the lowest quartile on all dimensions of interactions with providers and the health care system were associated with lower technical quality of care, potentially resulting in poorer SLE outcomes. Copyright © 2015 by the American College of Rheumatology.
NASA Technical Reports Server (NTRS)
Safren, H. G.
1987-01-01
The effect of atmospheric turbulence on the bit error rate of a space-to-ground near-infrared laser communications link is investigated for a link using binary pulse position modulation and an avalanche photodiode detector. Formulas are presented for the mean and variance of the bit error rate as a function of signal strength. Because these formulas require numerical integration, they are of limited practical use. Approximate formulas are derived which are easy to compute and sufficiently accurate for system feasibility studies, as shown by numerical comparison with the exact formulas. A very simple formula is derived for the bit error rate as a function of signal strength, which requires only the evaluation of an error function. It is shown by numerical calculations that, for realistic values of the system parameters, the increase in the bit error rate due to turbulence does not exceed about thirty percent for signal strengths of four hundred photons per bit or less. The increase in signal strength required to maintain an error rate of one in 10 million is about one or two tenths of a dB.
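As a generic illustration of the kind of closed-form expression described above (a textbook Gaussian-approximation bit error rate requiring only an error function, not the specific formula derived in the report), one can write BER = ½ erfc(Q/√2), with Q the ratio of the mean signal separation to the sum of the noise standard deviations; all numbers below are illustrative.

```python
# Generic Gaussian-approximation BER (textbook form, not the report's formula):
# BER = 0.5 * erfc(Q / sqrt(2)),  Q = (mu1 - mu0) / (sigma1 + sigma0).
from math import erfc, sqrt

def ber_gaussian(mu1, mu0, sigma1, sigma0):
    q = (mu1 - mu0) / (sigma1 + sigma0)
    return 0.5 * erfc(q / sqrt(2))

# Illustrative numbers only: a Q-factor of about 5.6 gives a BER near 1e-8.
print(ber_gaussian(mu1=400.0, mu0=0.0, sigma1=40.0, sigma0=31.0))
```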
NASA Astrophysics Data System (ADS)
Abecia, J. A.; Arrébola, F.; Macías, A.; Laviña, A.; González-Casquet, O.; Benítez, F.; Palacios, C.
2016-10-01
A total of 1092 artificial inseminations (AIs) performed from March to May were documented over four consecutive years on 10 Payoya goat farms (36° N) and 19,392 AIs on 102 Rasa Aragonesa sheep farms (41° N) over 10 years. Mean, maximum, and minimum ambient temperatures, mean relative humidity, mean solar radiation, and total rainfall on each insemination day were recorded. Overall, fertility rates were 58 % in goats and 45 % in sheep. The fertility rates of the highest and lowest deciles of each of the meteorological variables indicated that temperature and rainfall had a significant effect on fertility in goats. Specifically, inseminations that were performed when mean (68 %), maximum (68 %), and minimum (66 %) temperatures were in the highest decile, and rainfall was in the lowest decile (59 %), had a significantly ( P < 0.0001) higher proportion of does that became pregnant than did those in the lowest decile (56, 54, 58, and 49 %, respectively). In sheep, the fertility rates of the highest decile of mean (62 %), maximum (62 %), and minimum (52 %) temperature, RH (52 %), THI (53 %), and rainfall (45 %) were significantly higher ( P < 0.0001) than were the fertility rates among ewes in the lowest decile (46, 45, 45, 45, 46, and 43 %, respectively). In conclusion, weather was related to fertility in small ruminants after AI in spring. It remains to be determined whether scheduling the dates of insemination based on forecasted temperatures can improve the success of AI in goats and sheep.
Sliceable transponders for metro-access transmission links
NASA Astrophysics Data System (ADS)
Wagner, C.; Madsen, P.; Spolitis, S.; Vegas Olmos, J. J.; Tafur Monroy, I.
2015-01-01
This paper presents a solution for upgrading optical access networks by reusing existing electronics or optical equipment: sliceable transponders using a signal spectrum slicing and stitching-back method after direct detection. This technique allows transmission of wide-bandwidth signals from the service provider (OLT - optical line terminal) to the end user (ONU - optical network unit) over an optical distribution network (ODN) via low-bandwidth equipment. We show simulation and experimental results for duobinary signaling of 1 Gbit/s and 10 Gbit/s waveforms. The number of slices is adjusted to match the lowest analog bandwidth of the electrical devices used, and scales from 2 to 10 slices. Results of experimental transmission show error-free signal recovery after forward error correction with 7% overhead.
NASA Astrophysics Data System (ADS)
Kim, Garam; Kang, Hyung-Ku
2018-03-01
We investigated the feeding habits of Calanus sinicus during its four developmental stages as copepodite 4 (CIV), copepodite 5 (CV), adult males and females in early June 2015 at 12 sampling stations along the southern coast of Korea to the northern East China Sea, to better understand the role of C. sinicus in controlling phytoplankton stocks. Ingestion rate, daily ration as body carbon, population ingestion rate, and grazing impact were estimated using the gut pigment method. The mean biomass of CVs was the greatest (13.5 mg C m⁻³) and that of adult males was the lowest (0.7 mg C m⁻³). The ingestion rate per C. sinicus individual tended to increase with developmental stage, with the highest rate in adult females (519 ng chl ind⁻¹ d⁻¹) and the lowest rate in CIVs (305 ng chl ind⁻¹ d⁻¹). A significant correlation was found between ingestion rate and temperature, but not salinity or chlorophyll-a concentration. The daily ration of C. sinicus as body carbon significantly decreased with increased body weight, with the highest value found in CIVs (66.4%) and the lowest value in adult males (30%). Despite the high ingestion rate of the adults, the mean grazing impact of C. sinicus on phytoplankton biomass, in terms of chlorophyll-a concentration, was the highest in CVs (2.6%), followed by CIVs and adult females, and was the lowest in adult males (0.1%). The higher grazing impact of copepodites than adults underscores the importance of evaluating copepodite stages in the feeding studies of marine food webs.
Gibson, Rachel; Eriksen, Rebeca; Lamb, Kathryn; McMeel, Yvonne; Vergnaud, Anne-Claire; Spear, Jeanette; Aresu, Maria; Chan, Queenie; Elliott, Paul; Frost, Gary
2017-01-01
Objectives Dietary intake is a key aspect of occupational health. To capture the characteristics of dietary behaviour that are affected by the occupational environment and that may affect disease risk, a collection of prospective multiday dietary records is required. The aims of this paper are to: (1) collect multiday dietary data in the Airwave Health Monitoring Study, (2) describe the dietary coding procedures applied and (3) investigate the plausibility of dietary reporting in this occupational cohort. Design A dietary coding protocol for this large-scale study was developed to minimise coding error rate. Participants (n = 4412) who completed 7-day food records were included for cross-sectional analyses. Energy intake (EI) misreporting was estimated using the Goldberg method. Multivariate logistic regression models were applied to determine participant characteristics associated with EI misreporting. Setting British police force employees enrolled (2007–2012) into the Airwave Health Monitoring Study. Results The mean code error rate per food diary was 3.7% (SD 3.2%). The strongest predictors of EI under-reporting were body mass index (BMI) and physical activity. Compared with participants with BMI <25 kg/m², those with BMI >30 kg/m² had increased odds of being classified as under-reporting EI (men OR 5.20 95% CI 3.92 to 6.89; women OR 2.66 95% CI 1.85 to 3.83). Men and women in the highest physical activity category compared with the lowest were also more likely to be classified as under-reporting (men OR 3.33 95% CI 2.46 to 4.50; women OR 4.34 95% CI 2.91 to 6.55). Conclusions A reproducible dietary record coding procedure has been developed to minimise coding error in complex 7-day diet diaries. The prevalence of EI under-reporting is comparable with existing national UK cohorts and, in agreement with previous studies, classification of under-reporting was biased towards specific subgroups of participants. PMID:28377391
The random coding bound is tight for the average code.
NASA Technical Reports Server (NTRS)
Gallager, R. G.
1973-01-01
The random coding bound of information theory provides a well-known upper bound to the probability of decoding error for the best code of a given rate and block length. The bound is constructed by upperbounding the average error probability over an ensemble of codes. The bound is known to give the correct exponential dependence of error probability on block length for transmission rates above the critical rate, but it gives an incorrect exponential dependence at rates below a second lower critical rate. Here we derive an asymptotic expression for the average error probability over the ensemble of codes used in the random coding bound. The result shows that the weakness of the random coding bound at rates below the second critical rate is due not to upperbounding the ensemble average, but rather to the fact that the best codes are much better than the average at low rates.
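For reference, the random coding bound discussed above is usually stated in the following standard (textbook) form due to Gallager; the ensemble-average asymptotics derived in the paper refine this statement rather than restate it.

```latex
% Random coding bound: ensemble-average block error probability at rate R (nats per symbol)
% and block length N, for a fixed input distribution Q and channel transition probabilities P(y|x).
\bar{P}_e \le \exp\!\bigl[-N\,E_r(R)\bigr],\qquad
E_r(R) = \max_{0\le\rho\le 1}\bigl[E_0(\rho,Q) - \rho R\bigr],\qquad
E_0(\rho,Q) = -\ln \sum_{y}\Bigl[\sum_{x} Q(x)\,P(y\mid x)^{1/(1+\rho)}\Bigr]^{1+\rho}
```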
Huckels-Baumgart, Saskia; Baumgart, André; Buschmann, Ute; Schüpfer, Guido; Manser, Tanja
2016-12-21
Interruptions and errors during the medication process are common, but published literature shows no evidence supporting whether separate medication rooms are an effective single intervention in reducing interruptions and errors during medication preparation in hospitals. We tested the hypothesis that the rate of interruptions and reported medication errors would decrease as a result of the introduction of separate medication rooms. Our aim was to evaluate the effect of separate medication rooms on interruptions during medication preparation and on self-reported medication error rates. We performed a preintervention and postintervention study using direct structured observation of nurses during medication preparation and daily structured medication error self-reporting of nurses by questionnaires in 2 wards at a major teaching hospital in Switzerland. A volunteer sample of 42 nurses was observed preparing 1498 medications for 366 patients over 17 hours preintervention and postintervention on both wards. During 122 days, nurses completed 694 reporting sheets containing 208 medication errors. After the introduction of the separate medication room, the mean interruption rate decreased significantly from 51.8 to 30 interruptions per hour (P < 0.01), and the interruption-free preparation time increased significantly from 1.4 to 2.5 minutes (P < 0.05). Overall, the mean medication error rate per day was also significantly reduced after implementation of the separate medication room from 1.3 to 0.9 errors per day (P < 0.05). The present study showed the positive effect of a hospital-based intervention; after the introduction of the separate medication room, the interruption and medication error rates decreased significantly.
2011-01-01
Background The generation and analysis of high-throughput sequencing data are becoming a major component of many studies in molecular biology and medical research. Illumina's Genome Analyzer (GA) and HiSeq instruments are currently the most widely used sequencing devices. Here, we comprehensively evaluate properties of genomic HiSeq and GAIIx data derived from two plant genomes and one virus, with read lengths of 95 to 150 bases. Results We provide quantifications and evidence for GC bias, error rates, error sequence context, effects of quality filtering, and the reliability of quality values. By combining different filtering criteria we reduced error rates 7-fold at the expense of discarding 12.5% of alignable bases. While overall error rates are low in HiSeq data we observed regions of accumulated wrong base calls. Only 3% of all error positions accounted for 24.7% of all substitution errors. Analyzing the forward and reverse strands separately revealed error rates of up to 18.7%. Insertions and deletions occurred at very low rates on average but increased to up to 2% in homopolymers. A positive correlation between read coverage and GC content was found depending on the GC content range. Conclusions The errors and biases we report have implications for the use and the interpretation of Illumina sequencing data. GAIIx and HiSeq data sets show slightly different error profiles. Quality filtering is essential to minimize downstream analysis artifacts. Supporting previous recommendations, the strand-specificity provides a criterion to distinguish sequencing errors from low abundance polymorphisms. PMID:22067484
Grogger, P; Sacher, C; Weber, S; Millesi, G; Seemann, R
2018-04-10
Deviations in measuring dentofacial components in a lateral X-ray represent a major hurdle in the subsequent treatment of dysgnathic patients. In a retrospective study, we investigated the most prevalent source of error in the following commonly used cephalometric measurements: the angles Sella-Nasion-Point A (SNA), Sella-Nasion-Point B (SNB) and Point A-Nasion-Point B (ANB); the Wits appraisal; the anteroposterior dysplasia indicator (APDI); and the overbite depth indicator (ODI). Preoperative lateral radiographic images of patients with dentofacial deformities were collected and the landmarks digitally traced by three independent raters. Cephalometric analysis was automatically performed based on 1116 tracings. Error analysis identified the x-coordinate of Point A as the prevalent source of error in all investigated measurements, except SNB, in which it is not incorporated. In SNB, the y-coordinate of Nasion predominated error variance. SNB showed lowest inter-rater variation. In addition, our observations confirmed previous studies showing that landmark identification variance follows characteristic error envelopes in the highest number of tracings analysed up to now. Variance orthogonal to defining planes was of relevance, while variance parallel to planes was not. Taking these findings into account, orthognathic surgeons as well as orthodontists would be able to perform cephalometry more accurately and accomplish better therapeutic results. Copyright © 2018 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
Magnitude of pseudopotential localization errors in fixed node diffusion quantum Monte Carlo
Kent, Paul R.; Krogel, Jaron T.
2017-06-22
Growth in computational resources has led to the application of real space diffusion quantum Monte Carlo to increasingly heavy elements. Although generally assumed to be small, we find that when using standard techniques, the pseudopotential localization error can be large, on the order of an electron volt for an isolated cerium atom. We formally show that the localization error can be reduced to zero with improvements to the Jastrow factor alone, and we define a metric of Jastrow sensitivity that may be useful in the design of pseudopotentials. We employ an extrapolation scheme to extract the bare fixed node energy and estimate the localization error in both the locality approximation and the T-moves schemes for the Ce atom in charge states 3+/4+. The locality approximation exhibits the lowest Jastrow sensitivity and generally smaller localization errors than T-moves although the locality approximation energy approaches the localization free limit from above/below for the 3+/4+ charge state. We find that energy minimized Jastrow factors including three-body electron-electron-ion terms are the most effective at reducing the localization error for both the locality approximation and T-moves for the case of the Ce atom. Less complex or variance minimized Jastrows are generally less effective. Finally, our results suggest that further improvements to Jastrow factors and trial wavefunction forms may be needed to reduce localization errors to chemical accuracy when medium core pseudopotentials are applied to heavy elements such as Ce.
Patient identification using a near-infrared laser scanner
NASA Astrophysics Data System (ADS)
Manit, Jirapong; Bremer, Christina; Schweikard, Achim; Ernst, Floris
2017-03-01
We propose a new biometric approach where the tissue thickness of a person's forehead is used as a biometric feature. Given that the spatial registration of two 3D laser scans of the same human face usually produces a low error value, the principle of point cloud registration and its error metric can be applied to human classification techniques. However, by only considering the spatial error, it is not possible to reliably verify a person's identity. We propose to use a novel near-infrared laser-based head tracking system to determine an additional feature, the tissue thickness, and include this in the error metric. Using MRI as a ground truth, data from the foreheads of 30 subjects were collected, from which a 4D reference point cloud was created for each subject. The measurements from the near-infrared system were registered with all reference point clouds using the ICP algorithm. Afterwards, the spatial and tissue thickness errors were extracted, forming a 2D feature space. For all subjects, the lowest feature distance resulted from the registration of a measurement and the reference point cloud of the same person. The combined registration error features yielded two clusters in the feature space, one from the same subject and another from the other subjects. When only the tissue thickness error was considered, these clusters were less distinct but still present. These findings could help to raise safety standards for head and neck cancer patients and lay the foundation for a future human identification technique.
ERIC Educational Resources Information Center
Schochet, Peter Z.; Chiang, Hanley S.
2010-01-01
This paper addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using realistic performance measurement system schemes based on hypothesis testing, we develop error rate formulas based on OLS and Empirical Bayes estimators.…
Goo, Yeung-Ja James; Chi, Der-Jang; Shen, Zong-De
2016-01-01
The purpose of this study is to establish rigorous and reliable going concern doubt (GCD) prediction models. This study first uses the least absolute shrinkage and selection operator (LASSO) to select variables and then applies data mining techniques to establish prediction models, such as neural network (NN), classification and regression tree (CART), and support vector machine (SVM). The samples of this study include 48 GCD listed companies and 124 NGCD (non-GCD) listed companies from 2002 to 2013 in the TEJ database. We conduct fivefold cross validation in order to identify the prediction accuracy. According to the empirical results, the prediction accuracy of the LASSO-NN model is 88.96 % (Type I error rate is 12.22 %; Type II error rate is 7.50 %), the prediction accuracy of the LASSO-CART model is 88.75 % (Type I error rate is 13.61 %; Type II error rate is 14.17 %), and the prediction accuracy of the LASSO-SVM model is 89.79 % (Type I error rate is 10.00 %; Type II error rate is 15.83 %).
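A minimal scikit-learn sketch of the two-stage design described above (L1-penalised, LASSO-type feature selection followed by an SVM, evaluated with fivefold cross-validation) is shown below. The data loading is hypothetical and the hyperparameters are illustrative, not those tuned in the study; an L1-penalised logistic regression stands in for the LASSO selection step because the outcome is binary.

```python
# Sketch: L1-penalised (LASSO-type) feature selection followed by an SVM classifier,
# evaluated with fivefold cross-validation. Data loading is hypothetical.
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical data: rows = firms, columns = financial ratios, y = 1 for GCD firms.
rng = np.random.default_rng(0)
X = rng.normal(size=(172, 20))
y = np.r_[np.ones(48), np.zeros(124)].astype(int)

model = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectFromModel(LogisticRegression(penalty="l1", C=0.5, solver="liblinear"))),
    ("svm", SVC(kernel="rbf", C=1.0)),
])
scores = cross_val_score(model, X, y, cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0))
print(f"fivefold CV accuracy: {scores.mean():.3f}")
```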
Sethuraman, Usha; Kannikeswaran, Nirupama; Murray, Kyle P; Zidan, Marwan A; Chamberlain, James M
2015-06-01
Prescription errors occur frequently in pediatric emergency departments (PEDs). The effect of computerized physician order entry (CPOE) with an electronic medication alert system (EMAS) on these is unknown. The objective was to compare prescription error rates before and after introduction of CPOE with EMAS in a PED. The hypothesis was that CPOE with EMAS would significantly reduce the rate and severity of prescription errors in the PED. A prospective comparison of a sample of outpatient medication prescriptions 5 months before and after CPOE with EMAS implementation (7,268 before and 7,292 after) was performed. Error types and rates, alert types and significance, and physician response were noted. Medication errors were deemed significant if there was a potential to cause life-threatening injury, failure of therapy, or an adverse drug effect. There was a significant reduction in the errors per 100 prescriptions (10.4 before vs. 7.3 after; absolute risk reduction = 3.1, 95% confidence interval [CI] = 2.2 to 4.0). Drug dosing error rates decreased from 8 to 5.4 per 100 (absolute risk reduction = 2.6, 95% CI = 1.8 to 3.4). Alerts were generated for 29.6% of prescriptions, with 45% involving drug dose range checking. The sensitivity of CPOE with EMAS in identifying errors in prescriptions was 45.1% (95% CI = 40.8% to 49.6%), and the specificity was 57% (95% CI = 55.6% to 58.5%). Prescribers modified 20% of the dosing alerts, resulting in the error not reaching the patient. Conversely, of the dosing alerts overridden by the prescribers, 88 (11.3%) were true alerts that resulted in medication errors, and 684 (88.6%) were false-positive alerts. A CPOE with EMAS was associated with a decrease in overall prescription errors in our PED. Further system refinements are required to reduce the high false-positive alert rates. © 2015 by the Society for Academic Emergency Medicine.
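The absolute risk reduction and confidence interval quoted above can be reproduced approximately from the reported denominators and rates with a standard two-proportion (Wald) calculation; the sketch below illustrates the arithmetic and is not necessarily the authors' exact method.

```python
# Absolute risk reduction (per 100 prescriptions) with a Wald 95% CI for the difference
# of two proportions, using the denominators and error rates reported in the abstract.
from math import sqrt

n_before, n_after = 7268, 7292
p_before, p_after = 10.4 / 100, 7.3 / 100          # errors per prescription

arr = p_before - p_after
se = sqrt(p_before * (1 - p_before) / n_before + p_after * (1 - p_after) / n_after)
lo, hi = arr - 1.96 * se, arr + 1.96 * se
print(f"ARR = {100*arr:.1f} per 100 (95% CI {100*lo:.1f} to {100*hi:.1f})")
```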
Performance improvement of robots using a learning control scheme
NASA Technical Reports Server (NTRS)
Krishna, Ramuhalli; Chiang, Pen-Tai; Yang, Jackson C. S.
1987-01-01
Many applications of robots require that the same task be repeated a number of times. In such applications, the errors associated with one cycle are also repeated every cycle of the operation. An off-line learning control scheme is used here to modify the command function which would result in smaller errors in the next operation. The learning scheme is based on a knowledge of the errors and error rates associated with each cycle. Necessary conditions for the iterative scheme to converge to zero errors are derived analytically considering a second order servosystem model. Computer simulations show that the errors are reduced at a faster rate if the error rate is included in the iteration scheme. The results also indicate that the scheme may increase the magnitude of errors if the rate information is not included in the iteration scheme. Modification of the command input using a phase and gain adjustment is also proposed to reduce the errors with one attempt. The scheme is then applied to a computer model of a robot system similar to PUMA 560. Improved performance of the robot is shown by considering various cases of trajectory tracing. The scheme can be successfully used to improve the performance of actual robots within the limitations of the repeatability and noise characteristics of the robot.
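A compact numerical sketch of the off-line learning idea described above is given below: a generic PD-type iterative learning control update applied to a simple discrete second-order servo model, in which the command is corrected after each repetition using that cycle's error and error rate. The plant parameters and learning gains are illustrative, not those of the paper or of the PUMA 560 model.

```python
# Sketch: PD-type iterative learning control on a discrete second-order servo model.
# After each repetition the command is updated from that cycle's error and error rate.
import numpy as np

dt, t = 0.01, np.arange(0, 2, 0.01)
yd = np.sin(np.pi * t)                      # desired trajectory, repeated every cycle
wn, zeta = 8.0, 0.7                         # illustrative servo parameters

def run_cycle(u):
    """Simulate y'' = wn^2*(u - y) - 2*zeta*wn*y' with explicit Euler integration."""
    y = np.zeros_like(u); yv = 0.0
    for k in range(1, len(u)):
        acc = wn**2 * (u[k-1] - y[k-1]) - 2 * zeta * wn * yv
        yv += acc * dt
        y[k] = y[k-1] + yv * dt
    return y

u = yd.copy()                               # initial command = desired trajectory
kp, kd = 0.6, 0.05                          # illustrative learning gains
for cycle in range(8):
    e = yd - run_cycle(u)
    u = u + kp * e + kd * np.gradient(e, dt)  # learning update from error and error rate
    print(f"cycle {cycle}: max |error| = {np.abs(e).max():.4f}")
```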
Martis, Walston R; Hannam, Jacqueline A; Lee, Tracey; Merry, Alan F; Mitchell, Simon J
2016-09-09
A new approach to administering the surgical safety checklist (SSC) at our institution, using wall-mounted charts for each SSC domain coupled with migrated leadership among operating room (OR) sub-teams, led to improved compliance with the Sign Out domain. Since surgical specimens are reviewed at Sign Out, we aimed to quantify any related change in surgical specimen labelling errors. Prospectively maintained error logs for surgical specimens sent to pathology were examined for the six months before and after introduction of the new SSC administration paradigm. We recorded errors made in the labelling or completion of the specimen pot and on the specimen laboratory request form. Total error rates were calculated from the number of errors divided by the total number of specimens. Rates from the two periods were compared using a chi-square test. There were 19 errors in 4,760 specimens (rate 3.99/1,000) and eight errors in 5,065 specimens (rate 1.58/1,000) before and after the change in SSC administration paradigm (P=0.0225). Improved compliance with administering the Sign Out domain of the SSC can reduce surgical specimen errors. This finding provides further evidence that OR teams should optimise compliance with the SSC.
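The before/after comparison of specimen error rates reported above can be reproduced with a standard 2×2 chi-square test; the sketch below builds the contingency table from the published counts (the exact p-value depends on whether a continuity correction is applied, and scipy applies the Yates correction by default for 2×2 tables).

```python
# Chi-square comparison of surgical specimen error rates before vs after the new
# SSC administration paradigm, using the counts reported in the abstract.
from scipy.stats import chi2_contingency

table = [[19, 4760 - 19],    # before: errors, error-free specimens
         [8, 5065 - 8]]      # after
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")  # without continuity correction, p is near the published 0.0225
```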
DOE Office of Scientific and Technical Information (OSTI.GOV)
O’Shea, T; Bamber, J; Harris, E
Purpose: For ultrasound speckle tracking there is some evidence that the envelope-detected signal (the main step in B-mode image formation) may be more accurate than raw ultrasound data for tracking larger inter-frame tissue motion. This study investigates the accuracy of raw radio-frequency (RF) versus non-logarithmically compressed envelope-detected (B-mode) data for ultrasound speckle tracking in the context of image-guided radiation therapy. Methods: Transperineal ultrasound RF data were acquired (with a 7.5 MHz linear transducer operating at a 12 Hz frame rate) from a speckle phantom moving with realistic intra-fraction prostate motion derived from a commercial tracking system. A normalised cross-correlation template matching algorithm was used to track speckle motion at the focus using (i) the RF signal and (ii) the B-mode signal. A range of imaging rates (0.5 to 12 Hz) was simulated by decimating the imaging sequences, thereby simulating larger to smaller inter-frame displacements. Motion estimation accuracy was quantified by comparison with known phantom motion. Results: The differences between RF and B-mode motion estimation accuracy (2D mean and 95% errors relative to ground truth displacements) were less than 0.01 mm for stable and persistent motion types and 0.2 mm for transient motion for imaging rates of 0.5 to 12 Hz. The mean correlation for all motion types and imaging rates was 0.851 and 0.845 for RF and B-mode data, respectively. Data type is expected to have the most impact on axial (Superior-Inferior) motion estimation. Axial differences were <0.004 mm for stable and persistent motion and <0.3 mm for transient motion (axial mean errors were lowest for B-mode in all cases). Conclusions: Using the RF or B-mode signal for speckle motion estimation is comparable for translational prostate motion. B-mode image formation may involve other signal-processing steps which also influence motion estimation accuracy. A similar study for respiratory-induced motion would also be prudent. This work is supported by Cancer Research UK Programme Grant C33589/A19727.
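A minimal sketch of the two signal representations and the matching step, assuming a 1-D A-line and illustrative variable names (this is not the authors' implementation): the envelope is taken as the magnitude of the analytic signal, and displacement is found by maximising the normalised cross-correlation of a template against a search window.

    import numpy as np
    from scipy.signal import hilbert

    def envelope(rf):
        # B-mode-style envelope without log compression: |analytic signal|
        return np.abs(hilbert(rf))

    def track_shift(template, search):
        # Normalised cross-correlation template matching; returns the lag
        # (in samples) and correlation of the best match.
        t = (template - template.mean()) / (template.std() + 1e-12)
        best_corr, best_lag = -np.inf, 0
        for lag in range(len(search) - len(template) + 1):
            w = search[lag:lag + len(template)]
            w = (w - w.mean()) / (w.std() + 1e-12)
            corr = float(np.mean(t * w))
            if corr > best_corr:
                best_corr, best_lag = corr, lag
        return best_lag, best_corr

    # The same tracker can be run on raw RF frames or on envelope(rf) frames
    # to compare the two data types, as in the study.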
Citation Help in Databases: The More Things Change, the More They Stay the Same
ERIC Educational Resources Information Center
Van Ullen, Mary; Kessler, Jane
2012-01-01
In 2005, the authors reviewed citation help in databases and found an error rate of 4.4 errors per citation. This article describes a follow-up study that revealed a modest improvement in the error rate to 3.4 errors per citation, still unacceptably high. The most problematic area was retrieval statements. The authors conclude that librarians…
ERIC Educational Resources Information Center
Hodgson, Catherine; Lambon Ralph, Matthew A.
2008-01-01
Semantic errors are commonly found in semantic dementia (SD) and some forms of stroke aphasia and provide insights into semantic processing and speech production. Low error rates are found in standard picture naming tasks in normal controls. In order to increase error rates and thus provide an experimental model of aphasic performance, this study…
Physical fault tolerance of nanoelectronics.
Szkopek, Thomas; Roychowdhury, Vwani P; Antoniadis, Dimitri A; Damoulakis, John N
2011-04-29
The error rate in complementary transistor circuits is suppressed exponentially in electron number, arising from an intrinsic physical implementation of fault-tolerant error correction. Contrariwise, explicit assembly of gates into the most efficient known fault-tolerant architecture is characterized by a subexponential suppression of error rate with electron number, and incurs significant overhead in wiring and complexity. We conclude that it is more efficient to prevent logical errors with physical fault tolerance than to correct logical errors with fault-tolerant architecture.
Steward, Christine D.; Stocker, Sheila A.; Swenson, Jana M.; O’Hara, Caroline M.; Edwards, Jonathan R.; Gaynes, Robert P.; McGowan, John E.; Tenover, Fred C.
1999-01-01
Fluoroquinolone resistance appears to be increasing in many species of bacteria, particularly in those causing nosocomial infections. However, the accuracy of some antimicrobial susceptibility testing methods for detecting fluoroquinolone resistance remains uncertain. Therefore, we compared the accuracy of the results of agar dilution, disk diffusion, MicroScan Walk Away Neg Combo 15 conventional panels, and Vitek GNS-F7 cards to the accuracy of the results of the broth microdilution reference method for detection of ciprofloxacin and ofloxacin resistance in 195 clinical isolates of the family Enterobacteriaceae collected from six U.S. hospitals for a national surveillance project (Project ICARE [Intensive Care Antimicrobial Resistance Epidemiology]). For ciprofloxacin, very major error rates were 0% (disk diffusion and MicroScan), 0.9% (agar dilution), and 2.7% (Vitek), while major error rates ranged from 0% (agar dilution) to 3.7% (MicroScan and Vitek). Minor error rates ranged from 12.3% (agar dilution) to 20.5% (MicroScan). For ofloxacin, no very major errors were observed, and major errors were noted only with MicroScan (3.7% major error rate). Minor error rates ranged from 8.2% (agar dilution) to 18.5% (Vitek). Minor errors for all methods were substantially reduced when results with MICs within ±1 dilution of the broth microdilution reference MIC were excluded from analysis. However, the high number of minor errors by all test systems remains a concern. PMID:9986809
Multinational Comparison of Prophylactic Antibiotic Use for Eyelid Surgery.
Fay, Aaron; Nallasamy, Nambi; Bernardini, Francesco; Wladis, Edward J; Durand, Marlene L; Devoto, Martin H; Meyer, Dale; Hartstein, Morris; Honavar, Santosh; Osaki, Midori H; Osaki, Tammy H; Santiago, Yvette M; Sales-Sanz, Marco; Vadala, Giuseppe; Verity, David
2015-07-01
Antibiotic stewardship is important in controlling resistance, adverse reactions, and cost. The literature regarding antibiotic use for eyelid surgery is lacking. To determine standard care and assess factors influencing antibiotic prescribing practices for eyelid surgery. A survey study was conducted from February 2, 2014, to March 24, 2014. The survey was distributed to 2397 oculoplastic surgeons in private and academic oculoplastic surgery practices in 43 countries. All surgeons were members of ophthalmic plastic and reconstructive surgery societies. Data were analyzed by geographic location. Linear regression was performed to quantify contributions to rates of prescribing postoperative antibiotics for routine eyelid surgical procedures. Rates of prescribing prophylactic intravenous, oral, and topical antibiotics as well as factors that influence surgeons' prescribing practices. A total of 782 responses were received from 2397 surgeons (average response rate, 36.7%; 2.5% margin of error) from 43 countries. Topical antibiotic use was common in all regions (85.2%). Perioperative intravenous antibiotic use was uncommon in all regions (13.5%). Geographic location was the greatest predictor of antibiotic prescribing practices (range, 2.9% in the United Kingdom to 86.7% in India; mean, 24%). Within Europe, Italy had the highest rate of antibiotic prescriptions for eyelid surgery (41.7%) and the United Kingdom had the lowest rate (2.9%). In South America, Venezuela had the highest rate of antibiotic prescriptions for eyelid surgery (83.3%) and Chile had the lowest rate (0%). The practice locations that were associated with routinely prescribing postoperative oral antibiotics were India (odds ratio [OR], 15.83; 95% CI, 4.85-51.68; P < .001), Venezuela (OR, 13.47; 95% CI, 1.43-127.19; P = .02), and Southeast Asia (OR, 2.80; 95% CI, 1.15-6.84; P = .02). Conversely, practice locations in the United Kingdom (OR, 0.048; 95% CI, 0.0063-0.37; P = .004), Australia and New Zealand (OR, 0.15; 95% CI, 0.033-0.67; P = .01), and the United States and Canada (OR, 0.41; 95% CI, 0.23-0.72; P = .002) were associated with decreased rates of postoperative oral antibiotic use. Surgeons' concern for allergic reactions was associated with decreased rates of prescribing antibiotics (OR, 0.34; 95% CI, 0.23-0.49; P < .001), while surgeons' concern for infection was associated with increased rates of prescribing antibiotics (OR 1.80; 95% CI, 1.45-2.23; P < .001). These results from members of ophthalmic plastic and reconstructive surgery societies confirm that antibiotic prescribing practices for routine eyelid surgical procedures vary widely throughout the world. No standard of care has been established that would require the routine use of postoperative prophylactic antibiotics following eyelid surgery.
White, R; Mitchell, T; Gyorfi-Dyke, E; Sweet, L; Hebert, R; Moase, O; MacPhee, R; MacDonald, B
2001-01-01
This paper provides an overview of the Prince Edward Island Heart Health Program (PEIHHP) Dissemination Research Project. Prince Edward Island (PEI) is a small province in the Atlantic region of Canada with a population of 137,980. The Island's economy is dependent on the fishery, agriculture, and tourism industries. Although unemployment rates are high (14.4%), Prince Edward Island has the lowest poverty rate in the country at 15.2%, high levels of social support (86%), and the second lowest rate of high chronic stress (Report on the Health of Canadians, 1996, 1999).
Quantizing and sampling considerations in digital phased-locked loops
NASA Technical Reports Server (NTRS)
Hurst, G. T.; Gupta, S. C.
1974-01-01
The quantizer problem is first considered. The conditions under which the uniform white sequence model for the quantizer error is valid are established independent of the sampling rate. An equivalent spectral density is defined for the quantizer error resulting in an effective SNR value. This effective SNR may be used to determine quantized performance from infinitely fine quantized results. Attention is given to sampling rate considerations. Sampling rate characteristics of the digital phase-locked loop (DPLL) structure are investigated for the infinitely fine quantized system. The predicted phase error variance equation is examined as a function of the sampling rate. Simulation results are presented and a method is described which enables the minimum required sampling rate to be determined from the predicted phase error variance equations.
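For context, under the uniform white-noise model referred to above, a quantizer with step size Δ contributes an error variance of Δ²/12, from which an effective signal-to-noise ratio can be formed against the signal power; this is the textbook relation, and the paper's equivalent-spectral-density definition may differ in detail:

    \sigma_q^2 = \frac{\Delta^2}{12}, \qquad
    \mathrm{SNR}_{\mathrm{eff}} = \frac{\sigma_s^2}{\sigma_q^2} = \frac{12\,\sigma_s^2}{\Delta^2}.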
Organizational safety culture and medical error reporting by Israeli nurses.
Kagan, Ilya; Barnoy, Sivia
2013-09-01
To investigate the association between patient safety culture (PSC) and the incidence and reporting rate of medical errors by Israeli nurses. Self-administered structured questionnaires were distributed to a convenience sample of 247 registered nurses enrolled in training programs at Tel Aviv University (response rate = 91%). The questionnaire's three sections examined the incidence of medication mistakes in clinical practice, the reporting rate for these errors, and the participants' views and perceptions of the safety culture in their workplace at three levels (organizational, departmental, and individual performance). Pearson correlation coefficients, t tests, and multiple regression analysis were used to analyze the data. Most nurses encountered medical errors on a daily to weekly basis. Six percent of the sample never reported their own errors, while half reported their own errors "rarely or sometimes." The level of PSC was positively and significantly correlated with the error reporting rate. PSC, place of birth, error incidence, and not having an academic nursing degree were significant predictors of error reporting, together explaining 28% of variance. This study confirms the influence of an organizational safety climate on readiness to report errors. Senior healthcare executives and managers can make a major impact on safety culture development by creating and promoting a vision and strategy for quality and safety and fostering their employees' motivation to implement improvement programs at the departmental and individual level. A positive, carefully designed organizational safety culture can encourage error reporting by staff and so improve patient safety. © 2013 Sigma Theta Tau International.
Software for Quantifying and Simulating Microsatellite Genotyping Error
Johnson, Paul C.D.; Haydon, Daniel T.
2007-01-01
Microsatellite genetic marker data are exploited in a variety of fields, including forensics, gene mapping, kinship inference and population genetics. In all of these fields, inference can be thwarted by failure to quantify and account for data errors, and kinship inference in particular can benefit from separating errors into two distinct classes: allelic dropout and false alleles. Pedant is MS Windows software for estimating locus-specific maximum likelihood rates of these two classes of error. Estimation is based on comparison of duplicate error-prone genotypes: neither reference genotypes nor pedigree data are required. Other functions include: plotting of error rate estimates and confidence intervals; simulations for performing power analysis and for testing the robustness of error rate estimates to violation of the underlying assumptions; and estimation of expected heterozygosity, which is a required input. The program, documentation and source code are available from http://www.stats.gla.ac.uk/~paulj/pedant.html. PMID:20066126
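The distinction between the two error classes can be pictured with a naive counting heuristic over duplicate genotypes; this is only a hedged sketch with a hypothetical function, not Pedant's per-locus maximum-likelihood estimator.

    def classify_mismatches(duplicate_pairs):
        # duplicate_pairs: list of ((a1, a2), (b1, b2)) allele pairs from
        # repeat PCRs of the same sample at one locus.
        dropout_like, false_allele_like, n = 0, 0, 0
        for g1, g2 in duplicate_pairs:
            n += 1
            s1, s2 = set(g1), set(g2)
            if s1 == s2:
                continue
            if s1 < s2 or s2 < s1:
                dropout_like += 1       # one replicate lost an allele
            else:
                false_allele_like += 1  # one replicate shows a novel allele
        return dropout_like / n, false_allele_like / n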
Send, Robert; Kaila, Ville R. I.; Sundholm, Dage
2011-01-01
We investigate how the reduction of the virtual space affects coupled-cluster excitation energies at the approximate singles and doubles coupled-cluster level (CC2). In this reduced-virtual-space (RVS) approach, all virtual orbitals above a certain energy threshold are omitted in the correlation calculation. The effects of the RVS approach are assessed by calculations on the two lowest excitation energies of 11 biochromophores using different sizes of the virtual space. Our set of biochromophores consists of common model systems for the chromophores of the photoactive yellow protein, the green fluorescent protein, and rhodopsin. The RVS calculations show that most of the high-lying virtual orbitals can be neglected without significantly affecting the accuracy of the obtained excitation energies. Omitting all virtual orbitals above 50 eV in the correlation calculation introduces errors in the excitation energies that are smaller than 0.1 eV. By using a RVS energy threshold of 50 eV, the CC2 calculations using triple-ζ basis sets (TZVP) on protonated Schiff base retinal are accelerated by a factor of 6. We demonstrate the applicability of the RVS approach by performing CC2/TZVP calculations on the lowest singlet excitation energy of a rhodopsin model consisting of 165 atoms using RVS thresholds between 20 eV and 120 eV. The calculations on the rhodopsin model show that the RVS errors determined in the gas-phase are a very good approximation to the RVS errors in the protein environment. The RVS approach thus renders purely quantum mechanical treatments of chromophores in protein environments feasible and offers an ab initio alternative to quantum mechanics/molecular mechanics separation schemes. PMID:21663351
Westbrook, Johanna I; Raban, Magdalena Z; Walter, Scott R; Douglas, Heather
2018-01-09
Interruptions and multitasking have been demonstrated in experimental studies to reduce individuals' task performance. These behaviours are frequently used by clinicians in high-workload, dynamic clinical environments, yet their effects have rarely been studied. To assess the relative contributions of interruptions and multitasking by emergency physicians to prescribing errors. 36 emergency physicians were shadowed over 120 hours. All tasks, interruptions and instances of multitasking were recorded. Physicians' working memory capacity (WMC) and preference for multitasking were assessed using the Operation Span Task (OSPAN) and Inventory of Polychronic Values. Following observation, physicians were asked about their sleep in the previous 24 hours. Prescribing errors were used as a measure of task performance. We performed multivariate analysis of prescribing error rates to determine associations with interruptions and multitasking, also considering physician seniority, age, psychometric measures, workload and sleep. Physicians experienced 7.9 interruptions/hour. 28 clinicians were observed prescribing 239 medication orders which contained 208 prescribing errors. While prescribing, clinicians were interrupted 9.4 times/hour. Error rates increased significantly if physicians were interrupted (rate ratio (RR) 2.82; 95% CI 1.23 to 6.49) or multitasked (RR 1.86; 95% CI 1.35 to 2.56) while prescribing. Having below-average sleep showed a >15-fold increase in clinical error rate (RR 16.44; 95% CI 4.84 to 55.81). WMC was protective against errors; for every 10-point increase on the 75-point OSPAN, a 19% decrease in prescribing errors was observed. There was no effect of polychronicity, workload, physician gender or above-average sleep on error rates. Interruptions, multitasking and poor sleep were associated with significantly increased rates of prescribing errors among emergency physicians. WMC mitigated the negative influence of these factors to an extent. These results confirm experimental findings in other fields and raise questions about the acceptability of the high rates of multitasking and interruption in clinical environments. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Model studies of the beam-filling error for rain-rate retrieval with microwave radiometers
NASA Technical Reports Server (NTRS)
Ha, Eunho; North, Gerald R.
1995-01-01
Low-frequency (less than 20 GHz) single-channel microwave retrievals of rain rate encounter the problem of beam-filling error. This error stems from the fact that the relationship between microwave brightness temperature and rain rate is nonlinear, coupled with the fact that the field of view is large or comparable to important scales of variability of the rain field. This means that one may not simply insert the area average of the brightness temperature into the formula for rain rate without incurring both bias and random error. The statistical heterogeneity of the rain-rate field in the footprint of the instrument is key to determining the nature of these errors. This paper makes use of a series of random rain-rate fields to study the size of the bias and random error associated with beam filling. A number of examples are analyzed in detail: the binomially distributed field, the gamma, the Gaussian, the mixed gamma, the lognormal, and the mixed lognormal ('mixed' here means there is a finite probability of no rain rate at a point of space-time). Of particular interest are the applicability of a simple error formula due to Chiu and collaborators and a formula that might hold in the large field of view limit. It is found that the simple formula holds for Gaussian rain-rate fields but begins to fail for highly skewed fields such as the mixed lognormal. While not conclusively demonstrated here, it is suggested that the notion of climatologically adjusting the retrievals to remove the beam-filling bias is a reasonable proposition.
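The source of the bias can be made concrete with a small Monte Carlo sketch: a concave (saturating) brightness-temperature model is applied point-wise to a mixed lognormal rain field, and the rain rate retrieved from the footprint-averaged brightness temperature is compared with the true area-average rain rate. The model and parameters below are invented for illustration and are not those used in the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def tb_of_rain(R):
        # Illustrative saturating brightness-temperature model (K)
        return 280.0 - 120.0 * np.exp(-0.2 * R)

    def rain_of_tb(Tb):
        # Inverse of the illustrative model
        return -5.0 * np.log((280.0 - Tb) / 120.0)

    # Mixed lognormal footprint: finite probability of no rain at a point
    n = 100_000
    raining = rng.random(n) < 0.7
    R = np.where(raining, rng.lognormal(mean=0.5, sigma=1.0, size=n), 0.0)

    true_mean = R.mean()
    retrieved = rain_of_tb(tb_of_rain(R).mean())        # retrieval from averaged Tb
    print(true_mean, retrieved, retrieved / true_mean)  # ratio < 1: beam-filling bias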
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1994-01-01
The unequal error protection capabilities of convolutional and trellis codes are studied. In certain environments, a discrepancy in the amount of error protection placed on different information bits is desirable. Examples of environments which have data of varying importance are a number of speech coding algorithms, packet switched networks, multi-user systems, embedded coding systems, and high definition television. Encoders which provide more than one level of error protection to information bits are called unequal error protection (UEP) codes. In this work, the effective free distance vector, d, is defined as an alternative to the free distance as a primary performance parameter for UEP convolutional and trellis encoders. For a given (n, k) convolutional encoder G, the effective free distance vector is defined as the k-dimensional vector d = (d_0, d_1, ..., d_(k-1)), where d_j, the j-th effective free distance, is the lowest Hamming weight among all code sequences that are generated by input sequences with at least one '1' in the j-th position. It is shown that, although the free distance for a code is unique to the code and independent of the encoder realization, the effective distance vector is dependent on the encoder realization.
Shipitsin, M; Small, C; Choudhury, S; Giladi, E; Friedlander, S; Nardone, J; Hussain, S; Hurley, A D; Ernst, C; Huang, Y E; Chang, H; Nifong, T P; Rimm, D L; Dunyak, J; Loda, M; Berman, D M; Blume-Jensen, P
2014-09-09
Key challenges of biopsy-based determination of prostate cancer aggressiveness include tumour heterogeneity, biopsy-sampling error, and variations in biopsy interpretation. The resulting uncertainty in risk assessment leads to significant overtreatment, with associated costs and morbidity. We developed a performance-based strategy to identify protein biomarkers predictive of prostate cancer aggressiveness and lethality regardless of biopsy-sampling variation. Prostatectomy samples from a large patient cohort with long follow-up were blindly assessed by expert pathologists who identified the tissue regions with the highest and lowest Gleason grade from each patient. To simulate biopsy-sampling error, a core from a high- and a low-Gleason area from each patient sample was used to generate a 'high' and a 'low' tumour microarray, respectively. Using a quantitative proteomics approach, we identified from 160 candidates 12 biomarkers that predicted prostate cancer aggressiveness (surgical Gleason and TNM stage) and lethal outcome robustly in both high- and low-Gleason areas. Conversely, a previously reported lethal outcome-predictive marker signature for prostatectomy tissue was unable to perform under circumstances of maximal sampling error. Our results have important implications for cancer biomarker discovery in general and development of a sampling error-resistant clinical biopsy test for prediction of prostate cancer aggressiveness.
NASA Astrophysics Data System (ADS)
Situmorang, B. H.; Setiawan, M. P.; Tosida, E. T.
2017-01-01
Refractive errors are abnormalities of the refraction of light such that images do not focus precisely on the retina, resulting in blurred vision [1]. Refractive errors require the patient to wear glasses or contact lenses so that eyesight returns to normal. The glasses or contact lenses used by one person will differ from those used by another; the choice is influenced by patient age, the amount of tear production, vision prescription, and astigmatism. Because the eye is a vital organ for vision, accuracy in determining the glasses or contact lenses to be used is required. This research aims to develop a decision support system that can recommend the right contact lenses for refractive-error patients with 100% accuracy. The Iterative Dichotomiser 3 (ID3) classification method generates gain and entropy values for attributes that include the sample data code, patient age, astigmatism, the ratio of tear production, and vision prescription, along with the classes that determine the outcome of the decision tree. The eye specialist test on the training data obtained an accuracy rate of 96.7% and an error rate of 3.3%; the test using a confusion matrix obtained an accuracy rate of 96.1% and an error rate of 3.1%; for the testing data, the accuracy rate was 100% with an error rate of 0%.
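For reference, a compact sketch of the entropy and information-gain computations at the heart of ID3; the toy attributes and labels are hypothetical (loosely modelled on the classic contact-lens example) and are not the study's dataset.

    import math
    from collections import Counter

    def entropy(labels):
        # Shannon entropy of a list of class labels
        n = len(labels)
        return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

    def information_gain(rows, labels, attr_index):
        # ID3 criterion: entropy reduction from splitting on one attribute
        total = entropy(labels)
        by_value = {}
        for row, label in zip(rows, labels):
            by_value.setdefault(row[attr_index], []).append(label)
        remainder = sum(len(subset) / len(labels) * entropy(subset)
                        for subset in by_value.values())
        return total - remainder

    rows = [("young", "reduced"), ("young", "normal"),
            ("presbyopic", "reduced"), ("presbyopic", "normal")]
    labels = ["none", "soft", "none", "hard"]
    print(information_gain(rows, labels, 1))   # gain of splitting on tear production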
Prakash, Varuna; Koczmara, Christine; Savage, Pamela; Trip, Katherine; Stewart, Janice; McCurdie, Tara; Cafazzo, Joseph A; Trbovich, Patricia
2014-11-01
Nurses are frequently interrupted during medication verification and administration; however, few interventions exist to mitigate resulting errors, and the impact of these interventions on medication safety is poorly understood. The study objectives were to (A) assess the effects of interruptions on medication verification and administration errors, and (B) design and test the effectiveness of targeted interventions at reducing these errors. The study focused on medication verification and administration in an ambulatory chemotherapy setting. A simulation laboratory experiment was conducted to determine interruption-related error rates during specific medication verification and administration tasks. Interventions to reduce these errors were developed through a participatory design process, and their error reduction effectiveness was assessed through a postintervention experiment. Significantly more nurses committed medication errors when interrupted than when uninterrupted. With use of interventions when interrupted, significantly fewer nurses made errors in verifying medication volumes contained in syringes (16/18; 89% preintervention error rate vs 11/19; 58% postintervention error rate; p=0.038; Fisher's exact test) and programmed in ambulatory pumps (17/18; 94% preintervention vs 11/19; 58% postintervention; p=0.012). The rate of error commission significantly decreased with use of interventions when interrupted during intravenous push (16/18; 89% preintervention vs 6/19; 32% postintervention; p=0.017) and pump programming (7/18; 39% preintervention vs 1/19; 5% postintervention; p=0.017). No statistically significant differences were observed for other medication verification tasks. Interruptions can lead to medication verification and administration errors. Interventions were highly effective at reducing unanticipated errors of commission in medication administration tasks, but showed mixed effectiveness at reducing predictable errors of detection in medication verification tasks. These findings can be generalised and adapted to mitigate interruption-related errors in other settings where medication verification and administration are required. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
1990-07-01
relations, variety, working conditions, cultural identity, physical prowess, and economic security. The measures of reliability were computed for the VS: 1...Prowess, Ability Utilization, and Physical Activity and the lowest scale scores on Cultural Identity, Social Relations, and Prestige. Those with neutral...and Cultural Identity and the lowest scale scores on Physical Prowess, Advancement, and Risk. Table 14 Highest and Lowest Rated Value Scale Dimensions
Hansen, Heidi; Ben-David, Merav; McDonald, David B
2008-03-01
In noninvasive genetic sampling, when genotyping error rates are high and recapture rates are low, misidentification of individuals can lead to overestimation of population size. Thus, estimating genotyping errors is imperative. Nonetheless, conducting multiple polymerase chain reactions (PCRs) at multiple loci is time-consuming and costly. To address the controversy regarding the minimum number of PCRs required for obtaining a consensus genotype, we compared the performance of two genotyping protocols (multiple-tubes and 'comparative method') with respect to genotyping success and error rates. Our results from 48 faecal samples of river otters (Lontra canadensis) collected in Wyoming in 2003, and from blood samples of five captive river otters amplified with four different primers, suggest that use of the comparative genotyping protocol can minimize the number of PCRs per locus. For all but five samples at one locus, the same consensus genotypes were reached with fewer PCRs and with reduced error rates with this protocol compared to the multiple-tubes method. This finding is reassuring because genotyping errors can occur at relatively high rates even in tissues such as blood and hair. In addition, we found that loci that amplify readily and yield consensus genotypes may still exhibit high error rates (7-32%) and that amplification with different primers resulted in different types and rates of error. Thus, assigning a genotype based on a single PCR for several loci could result in misidentification of individuals. We recommend that programs designed to statistically assign consensus genotypes should be modified to allow the different treatment of heterozygotes and homozygotes intrinsic to the comparative method. © 2007 The Authors.
National suicide rates a century after Durkheim: do we know enough to estimate error?
Claassen, Cynthia A; Yip, Paul S; Corcoran, Paul; Bossarte, Robert M; Lawrence, Bruce A; Currier, Glenn W
2010-06-01
Durkheim's nineteenth-century analysis of national suicide rates dismissed prior concerns about mortality data fidelity. Over the intervening century, however, evidence documenting various types of error in suicide data has only mounted, and surprising levels of such error continue to be routinely uncovered. Yet the annual suicide rate remains the most widely used population-level suicide metric today. After reviewing the unique sources of bias incurred during stages of suicide data collection and concatenation, we propose a model designed to uniformly estimate error in future studies. A standardized method of error estimation uniformly applied to mortality data could produce data capable of promoting high quality analyses of cross-national research questions.
Does McRuer's Law Hold for Heart Rate Control via Biofeedback Display?
NASA Technical Reports Server (NTRS)
Courter, B. J.; Jex, H. R.
1984-01-01
Some persons can control their pulse rate with the aid of a biofeedback display. If the biofeedback display is modified to show the error between a command pulse-rate and the measured rate, a compensatory (error correcting) heart rate tracking control loop can be created. The dynamic response characteristics of this control loop when subjected to step and quasi-random disturbances were measured. The control loop includes a beat-to-beat cardiotachometer differenced with a forcing function from a quasi-random input generator; the resulting error pulse-rate is displayed as feedback. The subject acts to null the displayed pulse-rate error, thereby closing a compensatory control loop. McRuer's Law should hold for this case. A few subjects already skilled in voluntary pulse-rate control were tested for heart-rate control response. Control-law properties are derived, such as crossover frequency, stability margins, and closed-loop bandwidth. These are evaluated for a range of forcing functions and for step as well as random disturbances.
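For context, McRuer's crossover model states that, near the crossover frequency, the combined describing function of the human operator and the controlled element approximates an integrator with an effective time delay (standard form, quoted here for orientation rather than taken from the paper):

    Y_p(j\omega)\, Y_c(j\omega) \approx \frac{\omega_c\, e^{-j\omega \tau_e}}{j\omega},

where ω_c is the crossover frequency and τ_e the effective time delay.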
Online automatic tuning and control for fed-batch cultivation
van Straten, Gerrit; van der Pol, Leo A.; van Boxtel, Anton J. B.
2007-01-01
Performance of controllers applied in biotechnological production is often below expectation. Online automatic tuning has the capability to improve control performance by adjusting control parameters. This work presents automatic tuning approaches for model reference specific growth rate control during fed-batch cultivation. The approaches are direct methods that use the error between observed specific growth rate and its set point; systematic perturbations of the cultivation are not necessary. Two automatic tuning methods proved to be efficient, in which the adaptation rate is based on a combination of the error, squared error and integral error. These methods are relatively simple and robust against disturbances, parameter uncertainties, and initialization errors. Application of the specific growth rate controller yields a stable system. The controller and automatic tuning methods are qualified by simulations and laboratory experiments with Bordetella pertussis. PMID:18157554
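One way to picture such a direct tuning rule is the illustrative update below, which nudges a controller gain using the set-point error, a signed squared-error term, and the integrated error; the functional form, constants, and function name are assumptions rather than the paper's exact adaptation laws.

    def tune_gain(K, e, e_int, dt, gamma=0.05, alpha=1.0, beta=0.2):
        # One on-line tuning step for a specific-growth-rate controller gain.
        # e: current set-point error; e_int: running integral of the error.
        e_int = e_int + e * dt
        K = K + gamma * (e + alpha * e * abs(e) + beta * e_int) * dt
        return K, e_int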
Total Dose Effects on Error Rates in Linear Bipolar Systems
NASA Technical Reports Server (NTRS)
Buchner, Stephen; McMorrow, Dale; Bernard, Muriel; Roche, Nicholas; Dusseau, Laurent
2007-01-01
The shapes of single event transients in linear bipolar circuits are distorted by exposure to total ionizing dose radiation. Some transients become broader and others become narrower. Such distortions may affect SET system error rates in a radiation environment. If the transients are broadened by TID, the error rate could increase during the course of a mission, a possibility that has implications for hardness assurance.
Performance analysis of a cascaded coding scheme with interleaved outer code
NASA Technical Reports Server (NTRS)
Lin, S.
1986-01-01
A cascaded coding scheme for a random error channel with a bit-error rate is analyzed. In this scheme, the inner code C_1 is an (n_1, m_1 l) binary linear block code which is designed for simultaneous error correction and detection. The outer code C_2 is a linear block code with symbols from the Galois field GF(2^l) which is designed for correcting both symbol errors and erasures, and is interleaved with a degree m_1. A procedure for computing the probability of a correct decoding is presented and an upper bound on the probability of a decoding error is derived. The bound provides much better results than the previous bound for a cascaded coding scheme with an interleaved outer code. Example schemes with inner codes ranging from high rates to very low rates are evaluated. Several schemes provide extremely high reliability even for very high bit-error rates, say 10^-1 to 10^-2.
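A basic building block in this kind of analysis is the probability that an inner-code block experiences no more than its correctable number of channel errors; the short sketch below computes that binomial sum (the paper's procedure additionally accounts for error detection, erasure declaration, and the interleaved outer code).

    from math import comb

    def p_at_most_t_errors(n, t, p):
        # Probability of at most t bit errors in an n-bit block on a binary
        # symmetric channel with bit-error rate p.
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(t + 1))

    print(p_at_most_t_errors(63, 2, 1e-2))   # example inner-code block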
Fossett, Tepanta R D; McNeil, Malcolm R; Pratt, Sheila R; Tompkins, Connie A; Shuster, Linda I
Although many speech errors can be generated at either a linguistic or motoric level of production, phonetically well-formed sound-level serial-order errors are generally assumed to result from disruption of phonologic encoding (PE) processes. An influential model of PE (Dell, 1986; Dell, Burger & Svec, 1997) predicts that speaking rate should affect the relative proportion of these serial-order sound errors (anticipations, perseverations, exchanges). These predictions have been extended to, and have special relevance for, persons with aphasia (PWA) because of the increased frequency with which speech errors occur and because their localization within the functional linguistic architecture may help in diagnosis and treatment. Supporting evidence regarding the effect of speaking rate on phonological encoding has been provided by studies using young normal language (NL) speakers and computer simulations. Limited data exist for older NL users and no group data exist for PWA. This study tested the phonologic encoding properties of Dell's model of speech production (Dell, 1986; Dell et al., 1997), which predicts that increasing speaking rate affects the relative proportion of serial-order sound errors (i.e., anticipations, perseverations, and exchanges). The effects of speech rate on the error ratios of anticipation/exchange (AE) and anticipation/perseveration (AP), and on vocal reaction time (VRT), were examined in 16 normal healthy controls (NHC) and 16 PWA without concomitant motor speech disorders. The participants were recorded performing a phonologically challenging (tongue twister) speech production task at their typical and two faster speaking rates. A significant effect of increased rate was obtained for the AP but not the AE ratio. Significant effects of group and rate were obtained for VRT. Although the significant effect of rate for the AP ratio provided evidence that changes in speaking rate did affect PE, the results failed to support the model-derived predictions regarding the direction of change for error type proportions. The current findings argued for an alternative concept of the role of activation and decay in influencing types of serial-order sound errors. Rather than a slow activation decay rate (Dell, 1986), the results of the current study were more compatible with an alternative explanation of rapid activation decay or slow build-up of residual activation.
NASA Astrophysics Data System (ADS)
Taba, Paulina; Kasim, S.; Raya, I.
2018-03-01
Research on the application of MSSIP-2 nutrient in marine phytoplankton culture has been conducted to produce biomass for use as a raw material for biofuel. Marine phytoplankton were cultivated using the nutrient media, and the growth rates were studied by measuring the cell density at various growth times. Seven phytoplankton species, Isochrysis aff. galbana, Spirulina sp., Thalassiosira sp., Nitzchia sp., Chlorella vulgaris, Chaetoceros calcitrans, and Isochrysis tahiti, were used in the research. The temperature, salinity, and pH of the media were measured, and the specific growth rates of the phytoplankton were determined using the first-order rate equation. Results showed that the highest specific growth rate was given by Chlorella vulgaris (0.0322 cells/hour) and the lowest by Thalassiosira sp. (0.0277 cells/hour). The highest biomass weight was obtained from Isochrysis aff. galbana (0.329 g), whereas the lowest (0.27 g) was from Nitzchia sp. The carbohydrate content varied; the highest was 34.07%, found in Isochrysis aff. galbana, and the lowest was 28.16%, in Thalassiosira sp.
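The first-order rate form referred to above is usually written as follows (standard expression with generic symbols, not quoted from the paper):

    \mu = \frac{\ln N_2 - \ln N_1}{t_2 - t_1},

where N_1 and N_2 are the cell densities at times t_1 and t_2 and μ is the specific growth rate.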
Evaluation of TRMM Ground-Validation Radar-Rain Errors Using Rain Gauge Measurements
NASA Technical Reports Server (NTRS)
Wang, Jianxin; Wolff, David B.
2009-01-01
Ground-validation (GV) radar-rain products are often utilized for validation of the Tropical Rainfall Measuring Mission (TRMM) space-based rain estimates, and hence, quantitative evaluation of the GV radar-rain product error characteristics is vital. This study uses quality-controlled gauge data to compare with TRMM GV radar rain rates in an effort to provide such error characteristics. The results show that significant differences of concurrent radar-gauge rain rates exist at various time scales ranging from 5 min to 1 day, despite lower overall long-term bias. However, the differences between the radar area-averaged rain rates and gauge point rain rates cannot be explained as due to radar error only. The error variance separation method is adapted to partition the variance of radar-gauge differences into the gauge area-point error variance and radar rain estimation error variance. The results provide relatively reliable quantitative uncertainty evaluation of TRMM GV radar rain estimates at various time scales, and are helpful to better understand the differences between measured radar and gauge rain rates. It is envisaged that this study will contribute to better utilization of GV radar rain products to validate versatile space-based rain estimates from TRMM, as well as the proposed Global Precipitation Measurement, and other satellites.
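The error variance separation idea can be summarised schematically, assuming independent error sources (the adapted method in the study includes further refinements):

    \operatorname{Var}(R - G) \approx \sigma^{2}_{\mathrm{radar}} + \sigma^{2}_{\mathrm{area\text{-}point}},

where R is the radar area-averaged rain rate, G the gauge point rain rate, σ²_radar the radar rain estimation error variance, and σ²_area-point the gauge area-point error variance.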
Adamo, Margaret Peggy; Boten, Jessica A; Coyle, Linda M; Cronin, Kathleen A; Lam, Clara J K; Negoita, Serban; Penberthy, Lynne; Stevens, Jennifer L; Ward, Kevin C
2017-02-15
Researchers have used prostate-specific antigen (PSA) values collected by central cancer registries to evaluate tumors for potential aggressive clinical disease. An independent study collecting PSA values suggested a high error rate (18%) related to implied decimal points. To evaluate the error rate in the Surveillance, Epidemiology, and End Results (SEER) program, a comprehensive review of PSA values recorded across all SEER registries was performed. Consolidated PSA values for eligible prostate cancer cases in SEER registries were reviewed and compared with text documentation from abstracted records. Four types of classification errors were identified: implied decimal point errors, abstraction or coding implementation errors, nonsignificant errors, and changes related to "unknown" values. A total of 50,277 prostate cancer cases diagnosed in 2012 were reviewed. Approximately 94.15% of cases did not have meaningful changes (85.85% correct, 5.58% with a nonsignificant change of <1 ng/mL, and 2.80% with no clinical change). Approximately 5.70% of cases had meaningful changes (1.93% due to implied decimal point errors, 1.54% due to abstract or coding errors, and 2.23% due to errors related to unknown categories). Only 419 of the original 50,277 cases (0.83%) resulted in a change in disease stage due to a corrected PSA value. The implied decimal error rate was only 1.93% of all cases in the current validation study, with a meaningful error rate of 5.81%. The reasons for the lower error rate in SEER are likely due to ongoing and rigorous quality control and visual editing processes by the central registries. The SEER program currently is reviewing and correcting PSA values back to 2004 and will re-release these data in the public use research file. Cancer 2017;123:697-703. © 2016 American Cancer Society. © 2016 The Authors. Cancer published by Wiley Periodicals, Inc. on behalf of American Cancer Society.
Errors Affect Hypothetical Intertemporal Food Choice in Women
Sellitto, Manuela; di Pellegrino, Giuseppe
2014-01-01
Growing evidence suggests that the ability to control behavior is enhanced in contexts in which errors are more frequent. Here we investigated whether pairing desirable food with errors could decrease impulsive choice during hypothetical temporal decisions about food. To this end, healthy women performed a Stop-signal task in which one food cue predicted high-error rate, and another food cue predicted low-error rate. Afterwards, we measured participants’ intertemporal preferences during decisions between smaller-immediate and larger-delayed amounts of food. We expected reduced sensitivity to smaller-immediate amounts of food associated with high-error rate. Moreover, taking into account that deprivational states affect sensitivity for food, we controlled for participants’ hunger. Results showed that pairing food with high-error likelihood decreased temporal discounting. This effect was modulated by hunger, indicating that, the lower the hunger level, the more participants showed reduced impulsive preference for the food previously associated with a high number of errors as compared with the other food. These findings reveal that errors, which are motivationally salient events that recruit cognitive control and drive avoidance learning against error-prone behavior, are effective in reducing impulsive choice for edible outcomes. PMID:25244534
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barbee, D; McCarthy, A; Galavis, P
Purpose: Errors found during initial physics plan checks frequently require replanning and reprinting, resulting in decreased departmental efficiency. Additionally, errors may be missed during physics checks, resulting in potential treatment errors or interruption. This work presents a process control created using the Eclipse Scripting API (ESAPI) enabling dosimetrists and physicists to detect potential errors in the Eclipse treatment planning system prior to performing any plan approvals or printing. Methods: Potential failure modes for five categories were generated based on available ESAPI (v11) patient object properties: Images, Contours, Plans, Beams, and Dose. An Eclipse script plugin (PlanCheck) was written in C# to check the errors most frequently observed clinically in each of the categories. The PlanCheck algorithms were devised to check technical aspects of plans, such as deliverability (e.g. minimum EDW MUs), in addition to ensuring that policies and procedures relating to planning were being followed. The effect on clinical workflow efficiency was measured by tracking the plan document error rate and plan revision/retirement rates in the Aria database over monthly intervals. Results: The numbers of potential failure modes the PlanCheck script is currently capable of checking for are as follows: Images (6), Contours (7), Plans (8), Beams (17), and Dose (4). Prior to implementation of the PlanCheck plugin, the observed error rates in errored plan documents and revised/retired plans in the Aria database were 20% and 22%, respectively. Error rates were seen to decrease gradually over time as adoption of the script improved. Conclusion: A process control created using the Eclipse scripting API enabled plan checks to occur within the planning system, resulting in reduction in error rates and improved efficiency. Future work includes initiating full FMEA for planning workflow, extending categories to include additional checks outside of ESAPI via Aria database queries, and eventual automated plan checks.
Bit-error rate for free-space adaptive optics laser communications.
Tyson, Robert K
2002-04-01
An analysis of adaptive optics compensation for atmospheric-turbulence-induced scintillation is presented with the figure of merit being the laser communications bit-error rate. The formulation covers weak, moderate, and strong turbulence; on-off keying; and amplitude-shift keying, over horizontal propagation paths or on a ground-to-space uplink or downlink. The theory shows that under some circumstances the bit-error rate can be improved by a few orders of magnitude with the addition of adaptive optics to compensate for the scintillation. Low-order compensation (less than 40 Zernike modes) appears to be feasible as well as beneficial for reducing the bit-error rate and increasing the throughput of the communication link.
Sultana, Shemaila; Solotchi, Mihai; Ramachandran, Aparna; Patel, Smita S
2017-11-03
Single-subunit RNA polymerases (RNAPs) are present in phage T7 and in mitochondria of all eukaryotes. This RNAP class plays important roles in biotechnology and cellular energy production, but we know little about its fidelity and error rates. Herein, we report the error rates of three single-subunit RNAPs measured from the catalytic efficiencies of correct and all possible incorrect nucleotides. The average error rates of T7 RNAP (2 × 10^-6), yeast mitochondrial Rpo41 (6 × 10^-6), and human mitochondrial POLRMT (RNA polymerase mitochondrial) (2 × 10^-5) indicate high accuracy/fidelity of RNA synthesis resembling those of replicative DNA polymerases. All three RNAPs exhibit a distinctly high propensity for GTP misincorporation opposite dT, predicting frequent A→G errors in RNA with rates of ∼10^-4. The A→C, G→A, A→U, C→U, G→U, U→C, and U→G errors mostly due to pyrimidine-purine mismatches were relatively frequent (10^-5 to 10^-6), whereas C→G, U→A, G→C, and C→A errors from purine-purine and pyrimidine-pyrimidine mismatches were rare (10^-7 to 10^-10). POLRMT also shows a high C→A error rate on 8-oxo-dG templates (∼10^-4). Strikingly, POLRMT shows a high mutagenic bypass rate, which is exacerbated by TEFM (transcription elongation factor mitochondrial). The lifetime of POLRMT on terminally mismatched elongation substrate is increased in the presence of TEFM, which allows POLRMT to efficiently bypass the error and continue with transcription. This investigation of nucleotide selectivity on normal and oxidatively damaged DNA by three single-subunit RNAPs provides the basic information to understand the error rates in mitochondria and, in the case of T7 RNAP, to assess the quality of in vitro transcribed RNAs. © 2017 by The American Society for Biochemistry and Molecular Biology, Inc.
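A common way to express an error rate from such kinetic measurements, consistent with the description above though not necessarily the authors' exact formula, is

    \text{error rate} \approx \frac{(k_{\mathrm{cat}}/K_m)_{\mathrm{incorrect}}}{(k_{\mathrm{cat}}/K_m)_{\mathrm{correct}} + (k_{\mathrm{cat}}/K_m)_{\mathrm{incorrect}}},

which, when discrimination is strong, reduces to the ratio of the incorrect to the correct catalytic efficiency.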
Salorio-Corbetto, Marina; Baer, Thomas; Moore, Brian C. J.
2017-01-01
Objective: The objective was to assess the degradation of speech sound quality produced by frequency compression for listeners with extensive high-frequency dead regions (DRs). Design: Quality ratings were obtained using values of the starting frequency (Sf) of the frequency compression both below and above the estimated edge frequency, fe, of each DR. Thus, the value of Sf often fell below the lowest value currently used in clinical practice. Several compression ratios were used for each value of Sf. Stimuli were sentences processed via a prototype hearing aid based on Phonak Exélia Art P. Study sample: Five participants (eight ears) with extensive high-frequency DRs were tested. Results: Reductions of sound-quality produced by frequency compression were small to moderate. Ratings decreased significantly with decreasing Sf and increasing CR. The mean ratings were lowest for the lowest Sf and highest CR. Ratings varied across participants, with one participant rating frequency compression lower than no frequency compression even when Sf was above fe. Conclusions: Frequency compression degraded sound quality somewhat for this small group of participants with extensive high-frequency DRs. The degradation was greater for lower values of Sf relative to fe, and for greater values of CR. Results varied across participants. PMID:27724057
Abecia, J A; Arrébola, F; Macías, A; Laviña, A; González-Casquet, O; Benítez, F; Palacios, C
2016-10-01
A total number of 1092 artificial inseminations (AIs) performed from March to May were documented over four consecutive years on 10 Payoya goat farms (36° N) and 19,392 AIs on 102 Rasa Aragonesa sheep farms (41° N) over 10 years. Mean, maximum, and minimum ambient temperatures, mean relative humidity, mean solar radiation, and total rainfall on each insemination day were recorded. Overall, fertility rates were 58 % in goats and 45 % in sheep. The fertility rates of the highest and lowest deciles of each of the meteorological variables indicated that temperature and rainfall had a significant effect on fertility in goats. Specifically, inseminations that were performed when mean (68 %), maximum (68 %), and minimum (66 %) temperatures were in the highest decile, and rainfall was in the lowest decile (59 %), had a significantly (P < 0.0001) higher proportion of does that became pregnant than did the does in the lowest decile (56, 54, 58, and 49 %, respectively). In sheep, the fertility rates of the highest decile of mean (62 %), maximum (62 %), and minimum (52 %) temperature, RH (52 %), THI (53 %), and rainfall (45 %) were significantly higher (P < 0.0001) than were the fertility rates among ewes in the lowest decile (46, 45, 45, 45, 46, and 43 %, respectively). In conclusion, weather was related to fertility in small ruminants after AI in spring. It remains to be determined whether scheduling the dates of insemination based on forecasted temperatures can improve the success of AI in goats and sheep.
Gilmartin-Thomas, Julia Fiona-Maree; Smith, Felicity; Wolfe, Rory; Jani, Yogini
2017-07-01
No published study has been specifically designed to compare medication administration errors between original medication packaging and multi-compartment compliance aids in care homes, using direct observation. Compare the effect of original medication packaging and multi-compartment compliance aids on medication administration accuracy. Prospective observational. Ten Greater London care homes. Nurses and carers administering medications. Between October 2014 and June 2015, a pharmacist researcher directly observed solid, orally administered medications in tablet or capsule form at ten purposively sampled care homes (five only used original medication packaging and five used both multi-compartment compliance aids and original medication packaging). The medication administration error rate was calculated as the number of observed doses administered (or omitted) in error according to medication administration records, compared to the opportunities for error (total number of observed doses plus omitted doses). Over 108.4h, 41 different staff (35 nurses, 6 carers) were observed to administer medications to 823 residents during 90 medication administration rounds. A total of 2452 medication doses were observed (1385 from original medication packaging, 1067 from multi-compartment compliance aids). One hundred and seventy eight medication administration errors were identified from 2493 opportunities for error (7.1% overall medication administration error rate). A greater medication administration error rate was seen for original medication packaging than multi-compartment compliance aids (9.3% and 3.1% respectively, risk ratio (RR)=3.9, 95% confidence interval (CI) 2.4 to 6.1, p<0.001). Similar differences existed when comparing medication administration error rates between original medication packaging (from original medication packaging-only care homes) and multi-compartment compliance aids (RR=2.3, 95%CI 1.1 to 4.9, p=0.03), and between original medication packaging and multi-compartment compliance aids within care homes that used a combination of both medication administration systems (RR=4.3, 95%CI 2.7 to 6.8, p<0.001). A significant difference in error rate was not observed between use of a single or combination medication administration system (p=0.44). The significant difference in, and high overall, medication administration error rate between original medication packaging and multi-compartment compliance aids supports the use of the latter in care homes, as well as local investigation of tablet and capsule impact on medication administration errors and staff training to prevent errors occurring. As a significant difference in error rate was not observed between use of a single or combination medication administration system, common practice of using both multi-compartment compliance aids (for most medications) and original packaging (for medications with stability issues) is supported. Copyright © 2017 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calabrese, Edward J., E-mail: edwardc@schoolph.uma
This paper reveals that nearly 25 years after the BEIR I Committee used Russell's dose-rate data to support the adoption of the linear-no-threshold (LNT) dose response model for genetic and cancer risk assessment, Russell acknowledged a significant under-reporting of the mutation rate of the historical control group. This error, which was unknown to BEIR I, had profound implications, leading it to incorrectly adopt the LNT model, a decision that profoundly changed the course of risk assessment for radiation and chemicals to the present. -- Highlights: • The BEAR I Genetics Panel made an error in denying dose rate for mutation. • The BEIR I Genetics Subcommittee attempted to correct this dose rate error. • The control group used for risk assessment by BEIR I is now known to be in error. • Correcting this error contradicts the LNT, supporting a threshold model.
Fanning, Laura; Jones, Nick; Manias, Elizabeth
2016-04-01
The implementation of automated dispensing cabinets (ADCs) in healthcare facilities appears to be increasing, in particular within Australian hospital emergency departments (EDs). While the investment in ADCs is on the increase, no studies have specifically investigated the impacts of ADCs on medication selection and preparation error rates in EDs. Our aim was to assess the impact of ADCs on medication selection and preparation error rates in an ED of a tertiary teaching hospital. Pre intervention and post intervention study involving direct observations of nurses completing medication selection and preparation activities before and after the implementation of ADCs in the original and new emergency departments within a 377-bed tertiary teaching hospital in Australia. Medication selection and preparation error rates were calculated and compared between these two periods. Secondary end points included the impact on medication error type and severity. A total of 2087 medication selection and preparations were observed among 808 patients pre and post intervention. Implementation of ADCs in the new ED resulted in a 64.7% (1.96% versus 0.69%, respectively, P = 0.017) reduction in medication selection and preparation errors. All medication error types were reduced in the post intervention study period. There was an insignificant impact on medication error severity as all errors detected were categorised as minor. The implementation of ADCs could reduce medication selection and preparation errors and improve medication safety in an ED setting. © 2015 John Wiley & Sons, Ltd.
Determination of Type I Error Rates and Power of Answer Copying Indices under Various Conditions
ERIC Educational Resources Information Center
Yormaz, Seha; Sünbül, Önder
2017-01-01
This study aims to determine the Type I error rates and power of S[subscript 1] , S[subscript 2] indices and kappa statistic at detecting copying on multiple-choice tests under various conditions. It also aims to determine how copying groups are created in order to calculate how kappa statistics affect Type I error rates and power. In this study,…
Vairy, Stephanie; Corny, Jennifer; Jamoulle, Olivier; Levy, Arielle; Lebel, Denis; Carceller, Ana
2017-12-01
A high rate of prescription errors exists in pediatric teaching hospitals, especially during initial training. To determine the effectiveness of a two-hour lecture by a pharmacist on rates of prescription errors and quality of prescriptions. A two-hour lecture led by a pharmacist was provided to 11 junior pediatric residents (PGY-1) as part of a one-month immersion program. A control group included 15 residents without the intervention. We reviewed charts to analyze the first 50 prescriptions of each resident. Data were collected from 1300 prescriptions involving 451 patients, 550 in the intervention group and 750 in the control group. The rate of prescription errors in the intervention group was 9.6% compared to 11.3% in the control group (p=0.32), affecting 106 patients. Statistically significant differences between both groups were prescriptions with unwritten doses (p=0.01) and errors involving overdosing (p=0.04). We identified many errors as well as issues surrounding quality of prescriptions. We found a 10.6% prescription error rate. This two-hour lecture seems insufficient to reduce prescription errors among junior pediatric residents. This study highlights the most frequent types of errors and prescription quality issues that should be targeted by future educational interventions.
Booth, Rachelle; Sturgess, Emma; Taberner-Stokes, Alison; Peters, Mark
2012-11-01
To establish the baseline prescribing error rate in a tertiary paediatric intensive care unit (PICU) and to determine the impact of a zero tolerance prescribing (ZTP) policy incorporating a dedicated prescribing area and daily feedback of prescribing errors. A prospective, non-blinded, observational study was undertaken in a 12-bed tertiary PICU over a period of 134 weeks. Baseline prescribing error data were collected on weekdays for all patients for a period of 32 weeks, following which the ZTP policy was introduced. Daily error feedback was introduced after a further 12 months. Errors were sub-classified as 'clinical', 'non-clinical' and 'infusion prescription' errors and the effects of interventions considered separately. The baseline combined prescribing error rate was 892 (95 % confidence interval (CI) 765-1,019) errors per 1,000 PICU occupied bed days (OBDs), comprising 25.6 % clinical, 44 % non-clinical and 30.4 % infusion prescription errors. The combined interventions of ZTP plus daily error feedback were associated with a reduction in the combined prescribing error rate to 447 (95 % CI 389-504) errors per 1,000 OBDs (p < 0.0001), an absolute risk reduction of 44.5 % (95 % CI 40.8-48.0 %). Introduction of the ZTP policy was associated with a significant decrease in clinical and infusion prescription errors, while the introduction of daily error feedback was associated with a significant reduction in non-clinical prescribing errors. The combined interventions of ZTP and daily error feedback were associated with a significant reduction in prescribing errors in the PICU, in line with Department of Health requirements of a 40 % reduction within 5 years.
Motyer, R E; Liddy, S; Torreggiani, W C; Buckley, O
2016-11-01
Voice recognition (VR) dictation of radiology reports has become the mainstay of reporting in many institutions worldwide. Despite its benefits, such software is not without limitations, and transcription errors have been widely reported. Evaluate the frequency and nature of non-clinical transcription error using VR dictation software. Retrospective audit of 378 finalised radiology reports. Errors were counted and categorised by significance, error type and sub-type. Data regarding imaging modality, report length and dictation time were collected. 67 (17.72 %) reports contained ≥1 error, with 7 (1.85 %) containing 'significant' and 9 (2.38 %) containing 'very significant' errors. A total of 90 errors were identified from the 378 reports analysed, with 74 (82.22 %) classified as 'insignificant', 7 (7.78 %) as 'significant', and 9 (10 %) as 'very significant'. 68 (75.56 %) errors were 'spelling and grammar', 20 (22.22 %) 'missense' and 2 (2.22 %) 'nonsense'. 'Punctuation' error was the most common sub-type, accounting for 27 errors (30 %). Complex imaging modalities had higher error rates per report and per sentence. Computed tomography contained 0.040 errors per sentence compared to plain film with 0.030. Longer reports had a higher error rate, with reports >25 sentences containing an average of 1.23 errors per report compared to 0.09 for reports of 0-5 sentences. These findings highlight the limitations of VR dictation software. While most errors were deemed insignificant, some had the potential to alter report interpretation and patient management. Longer reports and reports on more complex imaging had higher error rates, and this should be taken into account by the reporting radiologist.
Addressing Angular Single-Event Effects in the Estimation of On-Orbit Error Rates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, David S.; Swift, Gary M.; Wirthlin, Michael J.
2015-12-01
Our study describes complications introduced by angular direct ionization events on space error rate predictions. In particular, prevalence of multiple-cell upsets and a breakdown in the application of effective linear energy transfer in modern-scale devices can skew error rates approximated from currently available estimation models. Moreover, this paper highlights the importance of angular testing and proposes a methodology to extend existing error estimation tools to properly consider angular strikes in modern-scale devices. Finally, these techniques are illustrated with test data provided from a modern 28 nm SRAM-based device.
Reducing the Familiarity of Conjunction Lures with Pictures
ERIC Educational Resources Information Center
Lloyd, Marianne E.
2013-01-01
Four experiments were conducted to test whether conjunction errors were reduced after pictorial encoding and whether the semantic overlap between study and conjunction items would impact error rates. Across 4 experiments, compound words studied with a single-picture had lower conjunction error rates during a recognition test than those words…
45 CFR 98.100 - Error Rate Report.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND... rates, which is defined as the percentage of cases with an error (expressed as the total number of cases with an error compared to the total number of cases); the percentage of cases with an improper payment...
[Error analysis of functional articulation disorders in children].
Zhou, Qiao-juan; Yin, Heng; Shi, Bing
2008-08-01
To explore the clinical characteristics of functional articulation disorders in children and provide more evidence for differential diagnosis and speech therapy. 172 children with functional articulation disorders were grouped by age. Children aged 4-5 years were assigned to one group, and those aged 6-10 years to another. Their phonological samples were collected and analyzed. In both groups, substitution and omission (deletion) were the main articulation errors, dental consonants were the most commonly misarticulated sounds, and bilabial and labio-dental consonants were rarely wrong. In the age 4-5 group, the sequence from highest to lowest error frequency was dental, velar, lingual, apical, bilabial, and labio-dental. In the age 6-10 group, the sequence was dental, lingual, apical, velar, bilabial, labio-dental. Lateral misarticulation and palatalized misarticulation occurred more often in the age 6-10 group than in the age 4-5 group and were only found in lingual and dental consonants in both groups. Misarticulation in functional articulation disorders mainly occurs in dental consonants and rarely in bilabial and labio-dental consonants. Substitution and omission are the most frequent errors. Lateral misarticulation and palatalized misarticulation occur mainly in lingual and dental consonants.
Habets, Bas; Staal, J Bart; Tijssen, Marsha; van Cingel, Robert
2018-01-10
To determine the intrarater reliability of the Humac NORM isokinetic dynamometer for concentric and eccentric strength tests of knee and shoulder muscles. 54 participants (50% female, average age 20.9 ± 3.1 years) performed concentric and eccentric strength measures of the knee extensors and flexors, and the shoulder internal and external rotators on two different Humac NORM isokinetic dynamometers, which were situated at two different centers. The knee extensors and flexors were tested concentrically at 60° and 180°/s, and eccentrically at 60°/s. Concentric strength of the shoulder internal and external rotators, and eccentric strength of the external rotators, were measured at 60° and 120°/s. We calculated intraclass correlation coefficients (ICCs), the standard error of measurement (SEM), the SEM expressed as a percentage, and the smallest detectable change (SDC) to determine reliability and measurement error. ICCs for the knee tests ranged from 0.74 to 0.89, whereas ICC values for the shoulder tests ranged from 0.72 to 0.94. Measurement error was highest for the concentric test of the knee extensors and lowest for the concentric test of the shoulder external rotators.
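The measurement-error indices named above are commonly computed from the between-session standard deviation and the ICC; a standard formulation (not necessarily the authors' exact equations) is:

```latex
\mathrm{SEM} = \mathrm{SD}\,\sqrt{1-\mathrm{ICC}}, \qquad
\mathrm{SEM\%} = 100\,\frac{\mathrm{SEM}}{\bar{x}}, \qquad
\mathrm{SDC} = 1.96\,\sqrt{2}\,\mathrm{SEM}
```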
Pogue, Brian W; Song, Xiaomei; Tosteson, Tor D; McBride, Troy O; Jiang, Shudong; Paulsen, Keith D
2002-07-01
Near-infrared (NIR) diffuse tomography is an emerging method for imaging the interior of tissues to quantify concentrations of hemoglobin and exogenous chromophores non-invasively in vivo. It often exploits an optical diffusion model-based image reconstruction algorithm to estimate spatial property values from measurements of the light flux at the surface of the tissue. In this study, mean-squared error (MSE) over the image is used to evaluate methods for regularizing the ill-posed inverse image reconstruction problem in NIR tomography. Estimates of image bias and image standard deviation were calculated based upon 100 repeated reconstructions of a test image with randomly distributed noise added to the light flux measurements. It was observed that the bias error dominates at high regularization parameter values while variance dominates as the algorithm is allowed to approach the optimal solution. This optimum does not necessarily correspond to the minimum projection error solution, but typically requires further iteration with a decreasing regularization parameter to reach the lowest image error. Increasing measurement noise causes a need to constrain the minimum regularization parameter to higher values in order to achieve a minimum in the overall image MSE.
Zhang, Wei; Regterschot, G Ruben H; Wahle, Fabian; Geraedts, Hilde; Baldus, Heribert; Zijlstra, Wiebren
2014-01-01
Falls result in substantial disability, morbidity, and mortality among older people. Early detection of fall risks and timely intervention can prevent falls and injuries due to falls. Simple field tests, such as repeated chair rise, are used in clinical assessment of fall risks in older people. Development of on-body sensors introduces potential beneficial alternatives for traditional clinical methods. In this article, we present a pendant sensor based chair rise detection and analysis algorithm for fall risk assessment in older people. The recall and the precision of the transfer detection were 85% and 87% in standard protocol, and 61% and 89% in daily life activities. Estimation errors of chair rise performance indicators: duration, maximum acceleration, peak power and maximum jerk were tested in over 800 transfers. Median estimation error in transfer peak power ranged from 1.9% to 4.6% in various tests. Among all the performance indicators, maximum acceleration had the lowest median estimation error of 0% and duration had the highest median estimation error of 24% over all tests. The developed algorithm might be feasible for continuous fall risk assessment in older people.
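The recall and precision figures quoted above follow the usual detection definitions; a minimal sketch with hypothetical counts:

```python
# Minimal sketch of the detection metrics; the counts are hypothetical.
def recall_precision(true_pos, false_neg, false_pos):
    recall = true_pos / (true_pos + false_neg)     # share of actual transfers detected
    precision = true_pos / (true_pos + false_pos)  # share of detections that were real
    return recall, precision

print(recall_precision(85, 15, 13))  # roughly 0.85 recall, 0.87 precision
```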
Certification of ICI 1012 optical data storage tape
NASA Technical Reports Server (NTRS)
Howell, J. M.
1993-01-01
ICI has developed a unique and novel method of certifying a Terabyte optical tape. The tape quality is guaranteed as a statistical upper limit on the probability of uncorrectable errors. This is called the Corrected Byte Error Rate or CBER. We developed this probabilistic method because of two reasons why error rate cannot be measured directly. Firstly, written data is indelible, so one cannot employ write/read tests such as used for magnetic tape. Secondly, the anticipated error rates need impractically large samples to measure accurately. For example, a rate of 1E-12 implies only one byte in error per tape. The archivability of ICI 1012 Data Storage Tape in general is well characterized and understood. Nevertheless, customers expect performance guarantees to be supported by test results on individual tapes. In particular, they need assurance that data is retrievable after decades in archive. This paper describes the mathematical basis, measurement apparatus and applicability of the certification method.
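One standard way to state such a probabilistic upper limit on an error rate from a finite read-back sample is a one-sided Clopper-Pearson bound; the sketch below illustrates that general idea only and is not ICI's actual certification model, which the abstract does not specify.

```python
# Sketch of a one-sided upper confidence bound on a byte error rate when few
# or no errors are observed in a sample of n bytes (Clopper-Pearson bound).
# Illustrative only, not ICI's certification procedure.
from scipy.stats import beta

def upper_error_rate(errors_seen, bytes_sampled, confidence=0.95):
    return beta.ppf(confidence, errors_seen + 1, bytes_sampled - errors_seen)

# With zero observed errors in 3e9 sampled bytes, the 95% bound is ~1e-9.
print(upper_error_rate(0, 3_000_000_000))
```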
Cherner, M; Suarez, P; Lazzaretto, D; Fortuny, L Artiola I; Mindt, Monica Rivera; Dawes, S; Marcotte, Thomas; Grant, I; Heaton, R
2007-03-01
The large number of primary Spanish speakers both in the United States and the world makes it imperative that appropriate neuropsychological assessment instruments be available to serve the needs of these populations. In this article we describe the norming process for Spanish speakers from the U.S.-Mexico border region on the Brief Visuospatial Memory Test-revised and the Hopkins Verbal Learning Test-revised. We computed the rates of impairment that would be obtained by applying the original published norms for these tests to raw scores from the normative sample, and found substantial overestimates compared to expected rates. As expected, these overestimates were most salient at the lowest levels of education, given the under-representation of poorly educated subjects in the original normative samples. Results suggest that demographically corrected norms derived from healthy Spanish-speaking adults with a broad range of education, are less likely to result in diagnostic errors. At minimum, demographic corrections for the tests in question should include the influence of literacy or education, in addition to the traditional adjustments for age. Because the age range of our sample was limited, the norms presented should not be applied to elderly populations.
Within-Session Stability of Short-Term Heart Rate Variability Measurement
2016-01-01
Abstract The primary aim of this study was to assess the retest stability of the short-term heart rate variability (HRV) measurement performed within one session and without the use of any intervention. Additionally, a precise investigation of the possible impact of intrinsic biological variation on HRV reliability was also performed. First, a single test-retest HRV measurement was conducted, with the test and retest 20-30 min apart. Second, the HRV measurement was repeated in ten non-interrupted consecutive intervals. The lowest typical error (CV = 21.1%) was found for the square root of the mean squared differences of successive RR intervals (rMSSD) and the highest for the low frequency power (PLF) (CV = 93.9%). The standardized changes in the mean were trivial to small. The correlation analysis revealed the highest level for ln rMSSD (ICC = 0.87), while ln PLF represented the worst case (ICC = 0.59). The reliability indices for ln rMSSD in 10 consecutive intervals improved (CV = 9.9%; trivial standardized changes in the mean; ICC = 0.96). In conclusion, major differences were found in the reliability level between the HRV indices. The rMSSD demonstrated the highest reliability level. No substantial influence of intrinsic biological variation on the HRV reliability was observed. PMID:28149345
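For reference, rMSSD, the most reliable index above, is computed from successive RR-interval differences; a minimal sketch with synthetic values:

```python
# Sketch of the rMSSD computation for RR intervals in milliseconds (synthetic data).
import numpy as np

def rmssd(rr_intervals_ms):
    diffs = np.diff(np.asarray(rr_intervals_ms, dtype=float))  # successive differences
    return np.sqrt(np.mean(diffs ** 2))                        # root mean square

print(rmssd([812, 805, 830, 842, 818, 825, 840]))  # result in ms
```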
Martínez-Martí, Chele; Jiménez-Franco, María V.; Royle, J. Andrew; Palazón, José A.; Calvo, José F.
2016-01-01
Occurrence models that account for imperfect detection of species are increasingly used for estimating geographical range, for determining species-landscape relations and to prioritize conservation actions worldwide. In 2010, we conducted a large-scale survey in Río Muni, the mainland territory of Equatorial Guinea, which aimed to estimate the probabilities of occurrence and detection of threatened mammals based on environmental covariates, and to identify priority areas for conservation. Interviews with hunters were designed to record presence/absence data of seven species (golden cat, leopard, forest elephant, forest buffalo, western gorilla, chimpanzee and mandrill) in 225 sites throughout the region. We fitted single-season occupancy models and recently developed models that also include false positive errors (i.e., the species is detected at sites where it does not actually occur), which should provide more accurate estimates for most species, since they are susceptible to misidentification. Golden cat and leopard had the lowest occurrence rates in the region, whereas primates had the highest rates. All species, except gorilla, were affected negatively by human settlements. The southern half of Río Muni showed the highest occurrence of the species studied, and conservation strategies for ensuring the persistence of threatened mammals should be focused on this area. PMID:27666671
The dependence of crowding on flanker complexity and target-flanker similarity
Bernard, Jean-Baptiste; Chung, Susana T.L.
2013-01-01
We examined the effects of the spatial complexity of flankers and target-flanker similarity on the performance of identifying crowded letters. On each trial, observers identified the middle character of random strings of three characters (“trigrams”) briefly presented at 10° below fixation. We tested the 26 lowercase letters of the Times-Roman and Courier fonts, a set of 79 characters (letters and non-letters) of the Times-Roman font, and the uppercase letters of two highly complex ornamental fonts, Edwardian and Aristocrat. Spatial complexity of characters was quantified by the length of the morphological skeleton of each character, and target-flanker similarity was defined based on a psychometric similarity matrix. Our results showed that (1) letter identification error rate increases with flanker complexity up to a certain value, beyond which error rate becomes independent of flanker complexity; (2) the increase of error rate is slower for high-complexity target letters; (3) error rate increases with target-flanker similarity; and (4) mislocation error rate increases with target-flanker similarity. These findings, combined with the current understanding of the faulty feature integration account of crowding, provide some constraints of how the feature integration process could cause perceptual errors. PMID:21730225
Total energy based flight control system
NASA Technical Reports Server (NTRS)
Lambregts, Antonius A. (Inventor)
1985-01-01
An integrated aircraft longitudinal flight control system uses a generalized thrust and elevator command computation (38), which accepts flight path angle and longitudinal acceleration command signals, along with associated feedback signals, to form energy rate error (20) and energy rate distribution error (18) signals. The engine thrust command is developed (22) as a function of the energy rate error and the elevator position command is developed (26) as a function of the energy rate distribution error. For any vertical flight path and speed mode the outer-loop errors are normalized (30, 34) to produce flight path angle and longitudinal acceleration commands. The system provides decoupled flight path and speed control for all control modes previously provided by the longitudinal autopilot, autothrottle and flight management systems.
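In a total-energy formulation of this kind, the normalized specific energy rate and its distribution between the flight-path and acceleration channels are commonly written as follows (a standard TECS-style statement, not a quotation from the patent; γ is flight path angle, V̇ longitudinal acceleration, g gravitational acceleration):

```latex
\dot{E}_s \approx \gamma + \frac{\dot{V}}{g}, \qquad
\dot{D} \approx \gamma - \frac{\dot{V}}{g}
```

Thrust is then driven by the total energy rate error (commanded minus actual Ė_s) and the elevator by the distribution error (commanded minus actual Ḋ), which is what decouples flight path and speed control.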
Evaluation of stream flow effects on smolt survival in the Yakima River basin, Washington
Courter,; Garrison,; Kock, Tobias J.; Perry, Russell W.
2012-01-01
Yearling Chinook smolt survival and travel time estimates from 2012 suggest that migration rates and survival rates in the Roza Reach may be associated with stream flow, water temperature, release timing (i.e., migratory disposition), and fish size, but the extent to which each variable influenced survival is yet to be determined. The lowest survival rate (61%) and longest travel time (median 2.26 days) were observed in Release Group 1, which had the smallest size distribution and experienced the lowest flows, lowest temperatures, and earliest release date among the three groups. Release Groups 2 and 3 survived at 96% and 98%, respectively, and traveled through the Roza Reach in less than eight hours. The primary focus of years two and three of this study will be to collect data that minimizes the effect of confounding explanatory variables, so that flow effects on emigration survival can be quantified independent of these other influential factors.
Stress Corrosion Cracking Behavior of Interstitial Free Steel Via Slow Strain Rate Technique
NASA Astrophysics Data System (ADS)
Murkute, Pratik; Ramkumar, J.; Mondal, K.
2016-07-01
An interstitial free steel is subjected to slow strain rate tests to investigate its stress corrosion cracking (SCC) behavior at strain rates ranging from 10^-4 to 10^-6 s^-1 in air and in 3.5 wt.% NaCl solution. The ratios of time to failure, failure strain, and ultimate tensile stress at different strain rates in air to those in the corrosive solution were used as measures of SCC susceptibility. The serrated stress-strain curve observed at the lowest strain rate is explained by the Portevin-Le Chatelier effect. The maximum susceptibility to SCC at the lowest strain rate is attributed to the soluble γ-FeOOH in the rust, analyzed by Fourier transformed infrared spectroscopy. The proposed SCC mechanism involves anodic dissolution forming a groove, where hydrogen embrittlement can set in, with final fracture occurring due to triaxiality.
Azin, Arash; Saleh, Fady; Cleghorn, Michelle; Yuen, Andrew; Jackson, Timothy; Okrainec, Allan; Quereshy, Fayez A
2017-03-01
Colonoscopy for colorectal cancer (CRC) has a localization error rate as high as 21 %. Such errors can have substantial clinical consequences, particularly in laparoscopic surgery. The primary objective of this study was to compare accuracy of tumor localization at initial endoscopy performed by either the operating surgeon or non-operating referring endoscopist. All patients who underwent surgical resection for CRC at a large tertiary academic hospital between January 2006 and August 2014 were identified. The exposure of interest was the initial endoscopist: (1) surgeon who also performed the definitive operation (operating surgeon group); and (2) referring gastroenterologist or general surgeon (referring endoscopist group). The outcome measure was localization error, defined as a difference in at least one anatomic segment between initial endoscopy and final operative location. Multivariate logistic regression was used to explore the association between localization error rate and the initial endoscopist. A total of 557 patients were included in the study; 81 patients in the operating surgeon cohort and 476 patients in the referring endoscopist cohort. Initial diagnostic colonoscopy performed by the operating surgeon compared to referring endoscopist demonstrated statistically significant lower intraoperative localization error rate (1.2 vs. 9.0 %, P = 0.016); shorter mean time from endoscopy to surgery (52.3 vs. 76.4 days, P = 0.015); higher tattoo localization rate (32.1 vs. 21.0 %, P = 0.027); and lower preoperative repeat endoscopy rate (8.6 vs. 40.8 %, P < 0.001). Initial endoscopy performed by the operating surgeon was protective against localization error on both univariate analysis, OR 7.94 (95 % CI 1.08-58.52; P = 0.016), and multivariate analysis, OR 7.97 (95 % CI 1.07-59.38; P = 0.043). This study demonstrates that diagnostic colonoscopies performed by an operating surgeon are independently associated with a lower localization error rate. Further research exploring the factors influencing localization accuracy and why operating surgeons have lower error rates relative to non-operating endoscopists is necessary to understand differences in care.
Di Nuovo, Alessandro G; Di Nuovo, Santo; Buono, Serafino
2012-02-01
The estimation of a person's intelligence quotient (IQ) by means of psychometric tests is indispensable in the application of psychological assessment to several fields. When complex tests such as the Wechsler scales, which are the most commonly used and universally recognized parameter for the diagnosis of degrees of retardation, are not applicable, it is necessary to use other psycho-diagnostic tools more suited to the subject's specific condition. But to ensure a homogeneous diagnosis it is necessary to reach a common metric; thus, the aim of our work is to build models able to estimate the Wechsler IQ accurately and reliably, starting from different psycho-diagnostic tools. Four different psychometric tests (Leiter international performance scale; coloured progressive matrices test; the mental development scale; psycho educational profile), along with the Wechsler scale, were administered to a group of 40 mentally retarded subjects, with various pathologies, and control persons. The resulting database is used to evaluate Wechsler IQ estimation models starting from the scores obtained in the other tests. Five modelling methods, two statistical and three from machine learning, the latter belonging to the family of artificial neural networks (ANNs), are employed to build the estimator. Several error metrics for estimated IQ and for retardation level classification are defined to compare the performance of the various models with univariate and multivariate analyses. Eight empirical studies show that, after ten-fold cross-validation, the best average estimation error is 3.37 IQ points and the best mental retardation level classification error is 7.5%. Furthermore, our experiments demonstrate the superior performance of ANN methods over statistical regression methods, because in all cases considered the ANN models show the lowest estimation error (from 0.12 to 0.9 IQ points) and the lowest classification error (from 2.5% to 10%). Since the estimation performance is better than the confidence interval of the Wechsler scales (five IQ points), we consider the models very accurate and reliable, and they can be used to support clinical diagnosis. Therefore a computer software program based on the results of our work is currently used in a clinical center, and empirical trials confirm its validity. Furthermore, positive results in our multivariate studies suggest new approaches for clinicians. Copyright © 2011 Elsevier B.V. All rights reserved.
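A minimal sketch of this kind of cross-validated neural-network estimation is given below; the data are synthetic and the model is a generic scikit-learn regressor, not the authors' implementation.

```python
# Illustrative sketch only: estimating one test's score (a stand-in for
# Wechsler IQ) from other test scores with a small neural network and
# ten-fold cross-validation. Data are synthetic, not the authors' dataset.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 4))                    # scores from four other tests
y = 70 + 10 * X @ np.array([0.5, 0.3, 0.1, 0.1]) + rng.normal(scale=3, size=40)

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
mae = -cross_val_score(model, X, y, cv=10, scoring="neg_mean_absolute_error")
print(mae.mean())   # mean absolute estimation error in "IQ points"
```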
Socioeconomic factors and cervical cancer mortality in Spain during the period 1989-1997.
Morales Suarez-Varela, M M; Jiménez-López, M C; Llópis-González, A
2004-01-01
A study was made of cervical cancer (CC) mortality trends in Spain during the period 1989-1997 at National, Autonomous Community and Provincial levels, in relation to different socioeconomic factors. Data were obtained from the Spanish National Institute of Statistics (Instituto Nacional de Estadística, INE). The crude mortality rates were age-adjusted using the indirect method, with Gross Domestic Product (GDP) as the socioeconomic status (SES) indicator. National CC age-adjusted mortality rates increased slightly, varying from 3.09 deaths/100,000 women in 1989 to 3.42 in 1996. The highest age-adjusted mortality rates corresponded to Seville, Palencia and Orense, with 4.13, 4.06 and 3.98 cases/100,000 women, respectively. The lowest mortality rates were found in Las Palmas, Cantabria and Alicante, with 2.63, 2.77 and 2.80 deaths/100,000 women, respectively. A relative risk (RR) of 1.14 (95% CI: 0.98-1.32) (P=0.048) was observed between the provinces with the lowest SES and highest mortality rate and those with the highest SES and lowest mortality rate. The results of our study show a slightly increasing trend in CC mortality rates in Spain during the period 1989-1997, and suggest that the variations among provinces and Autonomous Communities could be due to CC risk factors (SES related to human papillomavirus, parity, diet, etc.) and differences in early diagnosis.
Mathes, Tim; Klaßen, Pauline; Pieper, Dawid
2017-11-28
Our objective was to assess the frequency of data extraction errors and its potential impact on results in systematic reviews. Furthermore, we evaluated the effect of different extraction methods, reviewer characteristics and reviewer training on error rates and results. We performed a systematic review of methodological literature in PubMed, Cochrane methodological registry, and by manual searches (12/2016). Studies were selected by two reviewers independently. Data were extracted in standardized tables by one reviewer and verified by a second. The analysis included six studies; four studies on extraction error frequency, one study comparing different reviewer extraction methods and two studies comparing different reviewer characteristics. We did not find a study on reviewer training. There was a high rate of extraction errors (up to 50%). Errors often had an influence on effect estimates. Different data extraction methods and reviewer characteristics had moderate effect on extraction error rates and effect estimates. The evidence base for established standards of data extraction seems weak despite the high prevalence of extraction errors. More comparative studies are needed to get deeper insights into the influence of different extraction methods.
PRESAGE: Protecting Structured Address Generation against Soft Errors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram
Modern computer scaling trends in pursuit of larger component counts and power efficiency have, unfortunately, led to less reliable hardware and consequently to soft errors escaping into application data ("silent data corruptions"). Techniques to enhance system resilience hinge on the availability of efficient error detectors that have high detection rates, low false positive rates, and low computational overhead. Unfortunately, efficient detectors to detect faults during address generation (to index large arrays) have not been widely researched. We present a novel lightweight compiler-driven technique called PRESAGE for detecting bit-flips affecting structured address computations. A key insight underlying PRESAGE is that any address computation scheme that flows an already incurred error is better than a scheme that corrupts one particular array access but otherwise (falsely) appears to compute perfectly. Enabling the flow of errors allows one to situate detectors at loop exit points, and helps turn silent corruptions into easily detectable error situations. Our experiments using the PolyBench benchmark suite indicate that PRESAGE-based error detectors have a high error-detection rate while incurring low overheads.
Optimizing Terminal Conditions Using Geometric Guidance for Low-Control Authority Munitions
2008-06-01
[Only fragments of this report's nomenclature and assumptions survive extraction: h_T - lowest altitude allowable for maximum canard deflection; canard deflection per unit of acceleration constant; "...target within that range window in less than five minutes from time of fire [17]"; "The launch platform can supply the munition with some preflight..."; assumption 7, "The information supplied by the onboard navigation system has no errors"; assumption 8, "The control system is always able to generate the exact amount..."]
Optimizing Internal Wave Drag in a Forward Barotropic Model with Semidiurnal Tides
2015-01-23
...a global tuning factor with a larger value in the Atlantic. Our best global mean RMS error of 4.4 cm for areas deeper than 1000 m and equatorward of 66° is among the lowest obtained in a forward barotropic tide model. Keywords: barotropic tides; global modeling; linear wave drag.
Commers, Tessa; Swindells, Susan; Sayles, Harlan; Gross, Alan E; Devetten, Marcel; Sandkovsky, Uriel
2014-01-01
Errors in prescribing antiretroviral therapy (ART) often occur with the hospitalization of HIV-infected patients. The rapid identification and prevention of errors may reduce patient harm and healthcare-associated costs. A retrospective review of hospitalized HIV-infected patients was carried out between 1 January 2009 and 31 December 2011. Errors were documented as omission, underdose, overdose, duplicate therapy, incorrect scheduling and/or incorrect therapy. The time to error correction was recorded. Relative risks (RRs) were computed to evaluate patient characteristics and error rates. A total of 289 medication errors were identified in 146/416 admissions (35%). The most common was drug omission (69%). At an error rate of 31%, nucleoside reverse transcriptase inhibitors were associated with an increased risk of error when compared with protease inhibitors (RR 1.32; 95% CI 1.04-1.69) and co-formulated drugs (RR 1.59; 95% CI 1.19-2.09). Of the errors, 31% were corrected within the first 24 h, but over half (55%) were never remedied. Admissions with an omission error were 7.4 times more likely to have all errors corrected within 24 h than were admissions without an omission. Drug interactions with ART were detected on 51 occasions. For the study population (n = 177), an increased risk of admission error was observed for black (43%) compared with white (28%) individuals (RR 1.53; 95% CI 1.16-2.03) but no significant differences were observed between white patients and other minorities or between men and women. Errors in inpatient ART were common, and the majority were never detected. The most common errors involved omission of medication, and nucleoside reverse transcriptase inhibitors had the highest rate of prescribing error. Interventions to prevent and correct errors are urgently needed.
Online Error Reporting for Managing Quality Control Within Radiology.
Golnari, Pedram; Forsberg, Daniel; Rosipko, Beverly; Sunshine, Jeffrey L
2016-06-01
Information technology systems within health care, such as the picture archiving and communication system (PACS) in radiology, can have a positive impact on production but can also risk compromising quality. The widespread use of PACS has removed the previous feedback loop between radiologists and technologists. Instead of directly communicating quality discrepancies found for an examination, the radiologist submitted a paper-based quality-control report. A web-based issue-reporting tool can help restore some of the feedback loop and also provide possibilities for more detailed analysis of submitted errors. The purpose of this study was to evaluate the hypothesis that data from use of an online error reporting software for quality control can focus our efforts within our department. For the 372,258 radiologic examinations conducted during the 6-month study period, 930 errors (390 exam protocol, 390 exam validation, and 150 exam technique) were submitted, corresponding to an error rate of 0.25%. Within the exam protocol category, technologist documentation had the highest number of submitted errors in ultrasonography (77 errors [44%]), while imaging protocol errors were the highest error subtype for the computed tomography modality (35 errors [18%]). Positioning and incorrect accession had the highest errors in the exam technique and exam validation error categories, respectively, for nearly all of the modalities. An error rate of less than 1% could signify a system with very high quality; however, a more likely explanation is that not all errors were detected or reported. Furthermore, staff reception of the error reporting system could also affect the reporting rate.
Evaluation of complications of root canal treatment performed by undergraduate dental students.
AlRahabi, Mothanna K
2017-12-01
This study evaluated the technical quality of root canal treatment (RCT) and detected iatrogenic errors in an undergraduate dental clinic at the College of Dentistry, Taibah University, Saudi Arabia. Dental records of 280 patients who received RCT between 2013 and 2016 undertaken by dental students were investigated by retrospective chart review. Root canal obturation was evaluated on the basis of the length of obturation being ≤2 mm from the radiographic apex, with uniform radiodensity and good adaptation to root canal walls. Inadequate root canal obturation included cases containing procedural errors such as furcal perforation, ledge, canal transportation, strip perforation, root perforation, instrument separation, voids in the obturation, or underfilling or overfilling of the obturation. In 193 (68.9%) teeth, RCT was adequate and without procedural errors. However, in 87 (31.1%) teeth, RCT was inadequate and contained procedural errors. The frequency of procedural errors in the entire sample was 31.1% as follows: underfilling, 49.9%; overfilling, 24.1%; voids, 12.6%; broken instruments, 9.2%; apical perforation, 2.3%; and root canal transportation, 2.3%. There were no significant differences (p > 0.05) in the type or frequency of procedural errors between the fourth- and fifth-year students. Lower molars (43.1%) and upper incisors (19.2%) exhibited the highest and lowest frequencies of procedural errors, respectively. The technical quality of RCT performed by undergraduate dental students was classified as 'adequate' in 68.9% of the cases. There is a need for improvement in the training of students at the preclinical and clinical levels.
High Precision Ranging and Range-Rate Measurements over Free-Space-Laser Communication Link
NASA Technical Reports Server (NTRS)
Yang, Guangning; Lu, Wei; Krainak, Michael; Sun, Xiaoli
2016-01-01
We present a high-precision ranging and range-rate measurement system via an optical-ranging or combined ranging-communication link. A complete bench-top optical communication system was built. It included a ground terminal and a space terminal. Ranging and range-rate tests were conducted in two configurations. In the communication configuration with a 622 Mbps data rate, we achieved a two-way range-rate error of 2 microns/s, or a modified Allan deviation of 9 x 10 (exp -15) with 10 second averaging time. Ranging and range-rate performance as a function of the bit error rate of the communication link is reported; they are not sensitive to the link error rate. In the single-frequency amplitude modulation mode, we report a two-way range-rate error of 0.8 microns/s, or a modified Allan deviation of 2.6 x 10 (exp -15) with 10 second averaging time. We identified the major noise sources in the current system as transmitter modulation injected noise and receiver electronics generated noise. A new improved system will be constructed to further improve the system performance for both operating modes.
Liu, Xiaoming; Fu, Yun-Xin; Maxwell, Taylor J.; Boerwinkle, Eric
2010-01-01
It is known that sequencing error can bias estimation of evolutionary or population genetic parameters. This problem is more prominent in deep resequencing studies because of their large sample size n, and a higher probability of error at each nucleotide site. We propose a new method based on the composite likelihood of the observed SNP configurations to infer population mutation rate θ = 4Neμ, population exponential growth rate R, and error rate ɛ, simultaneously. Using simulation, we show the combined effects of the parameters, θ, n, ɛ, and R on the accuracy of parameter estimation. We compared our maximum composite likelihood estimator (MCLE) of θ with other θ estimators that take into account the error. The results show the MCLE performs well when the sample size is large or the error rate is high. Using parametric bootstrap, composite likelihood can also be used as a statistic for testing the model goodness-of-fit of the observed DNA sequences. The MCLE method is applied to sequence data on the ANGPTL4 gene in 1832 African American and 1045 European American individuals. PMID:19952140
Hadronic Contribution to Muon g-2 with Systematic Error Correlations
NASA Astrophysics Data System (ADS)
Brown, D. H.; Worstell, W. A.
1996-05-01
We have performed a new evaluation of the hadronic contribution to a_μ=(g-2)/2 of the muon with explicit correlations of systematic errors among the experimental data on σ( e^+e^- → hadrons ). Our result for the lowest order hadronic vacuum polarization contribution is a_μ^hvp = 701.7(7.6)(13.4) × 10-10 where the total systematic error contributions from below and above √s = 1.4 GeV are (12.5) × 10-10 and (4.8) × 10-10 respectively. Therefore new measurements on σ( e^+e^- → hadrons ) below 1.4 GeV in Novosibirsk, Russia can significantly reduce the total error on a_μ^hvp. This contrasts with a previous evaluation which indicated that the dominant error is due to the energy region above 1.4 GeV. The latter analysis correlated systematic errors at each energy point separately but not across energy ranges as we have done. Combination with higher order hadronic contributions is required for a new measurement of a_μ at Brookhaven National Laboratory to be sensitive to electroweak and possibly supergravity and muon substructure effects. Our analysis may also be applied to calculations of hadronic contributions to the running of α(s) at √s= M_Z, the hyperfine structure of muonium, and the running of sin^2 θW in Møller scattering. The analysis of the new Novosibirsk data will also be given.
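The lowest-order hadronic vacuum polarization term evaluated above is conventionally obtained from the dispersion integral over the measured cross section, with R(s) the ratio σ(e^+e^- → hadrons)/σ(e^+e^- → μ^+μ^-) and K(s) the QED kernel function (standard form, not a quotation from the paper):

```latex
a_\mu^{\mathrm{hvp}} \;=\; \frac{\alpha^{2}}{3\pi^{2}} \int_{4m_\pi^{2}}^{\infty} \frac{ds}{s}\, K(s)\, R(s)
```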
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kent, Paul R.; Krogel, Jaron T.
Growth in computational resources has led to the application of real space diffusion quantum Monte Carlo to increasingly heavy elements. Although generally assumed to be small, we find that when using standard techniques, the pseudopotential localization error can be large, on the order of an electron volt for an isolated cerium atom. We formally show that the localization error can be reduced to zero with improvements to the Jastrow factor alone, and we define a metric of Jastrow sensitivity that may be useful in the design of pseudopotentials. We employ an extrapolation scheme to extract the bare fixed node energy and estimate the localization error in both the locality approximation and the T-moves schemes for the Ce atom in charge states 3+/4+. The locality approximation exhibits the lowest Jastrow sensitivity and generally smaller localization errors than T-moves, although the locality approximation energy approaches the localization-free limit from above/below for the 3+/4+ charge state. We find that energy-minimized Jastrow factors including three-body electron-electron-ion terms are the most effective at reducing the localization error for both the locality approximation and T-moves for the case of the Ce atom. Less complex or variance-minimized Jastrows are generally less effective. Finally, our results suggest that further improvements to Jastrow factors and trial wavefunction forms may be needed to reduce localization errors to chemical accuracy when medium-core pseudopotentials are applied to heavy elements such as Ce.
NASA Astrophysics Data System (ADS)
Ewan, B. C. R.; Ireland, S. N.
2000-12-01
Acoustic pyrometry uses the temperature dependence of sound speed in materials to measure temperature. This is normally achieved by measuring the transit time for a sound signal over a known path length and applying the material's relation between temperature and velocity to extract an "average" temperature. Sources of error associated with the measurement of mean transit time are discussed for implementing the technique in gases, one of the principal causes being background noise in typical industrial environments. A number of transmitted-signal and processing strategies which can be used in this area are examined and the expected error in mean transit time associated with each technique is quantified. Transmitted signals included pulses, pure frequencies, chirps, and pseudorandom binary sequences (PRBS), while processing involved edge detection and correlation. Errors arise through misinterpretation of the positions of edge arrivals or correlation peaks due to instantaneous deviations associated with background noise, and these become more severe as signal-to-noise amplitude ratios decrease. Population errors in the mean transit time are estimated for the different measurement strategies, and it is concluded that PRBS combined with correlation can provide the lowest errors when operating in high-noise environments. The operation of an instrument based on PRBS transmitted signals is described and test results under controlled noise conditions are presented. These confirm the value of the strategy and demonstrate that measurements can be made with signal-to-noise amplitude ratios down to 0.5.
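As a pointer to the underlying relation, the sketch below converts a measured transit time over a known path into a path-averaged gas temperature via the ideal-gas sound speed; the property values are nominal for air and are assumptions, not values from the paper.

```python
# Sketch of the basic acoustic-pyrometry relation: c = sqrt(gamma*R*T/M) for an
# ideal gas, so a transit time t over a known path L gives a path-averaged T.
# Gas properties below are nominal for air (assumptions, not from the paper).
def path_average_temperature(path_length_m, transit_time_s,
                             gamma=1.4, molar_mass_kg=0.02897, R=8.314):
    c = path_length_m / transit_time_s            # mean sound speed along the path
    return c * c * molar_mass_kg / (gamma * R)    # temperature in kelvin

print(path_average_temperature(10.0, 0.015))      # ~667 m/s -> roughly 1100 K
```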
A long-term follow-up evaluation of electronic health record prescribing safety
Abramson, Erika L; Malhotra, Sameer; Osorio, S Nena; Edwards, Alison; Cheriff, Adam; Cole, Curtis; Kaushal, Rainu
2013-01-01
Objective To be eligible for incentives through the Electronic Health Record (EHR) Incentive Program, many providers using older or locally developed EHRs will be transitioning to new, commercial EHRs. We previously evaluated prescribing errors made by providers in the first year following transition from a locally developed EHR with minimal prescribing clinical decision support (CDS) to a commercial EHR with robust CDS. Following system refinements, we conducted this study to assess the rates and types of errors 2 years after transition and determine the evolution of errors. Materials and methods We conducted a mixed methods cross-sectional case study of 16 physicians at an academic-affiliated ambulatory clinic from April to June 2010. We utilized standardized prescription and chart review to identify errors. Fourteen providers also participated in interviews. Results We analyzed 1905 prescriptions. The overall prescribing error rate was 3.8 per 100 prescriptions (95% CI 2.8 to 5.1). Error rates were significantly lower 2 years after transition (p<0.001 compared to pre-implementation, 12 weeks and 1 year after transition). Rates of near misses remained unchanged. Providers positively appreciated most system refinements, particularly reduced alert firing. Discussion Our study suggests that over time and with system refinements, use of a commercial EHR with advanced CDS can lead to low prescribing error rates, although more serious errors may require targeted interventions to eliminate them. Reducing alert firing frequency appears particularly important. Our results provide support for federal efforts promoting meaningful use of EHRs. Conclusions Ongoing error monitoring can allow CDS to be optimally tailored and help achieve maximal safety benefits. Clinical Trials Registration ClinicalTrials.gov, Identifier: NCT00603070. PMID:23578816
Optimum coding techniques for MST radars
NASA Technical Reports Server (NTRS)
Sulzer, M. P.; Woodman, R. F.
1986-01-01
The optimum coding technique for MST (mesosphere stratosphere troposphere) radars is that which gives the lowest possible sidelobes in practice and can be implemented without too much computing power. Coding techniques are described in Farley (1985). A technique mentioned briefly there but not fully developed and not in general use is discussed here. This is decoding by means of a filter which is not matched to the transmitted waveform, in order to reduce sidelobes below the level obtained with a matched filter. This is the first part of the technique discussed here; the second part consists of measuring the transmitted waveform and using it as the basis for the decoding filter, thus reducing errors due to imperfections in the transmitter. There are two limitations to this technique. The first is a small loss in signal to noise ratio (SNR), which usually is not significant. The second problem is related to incomplete information received at the lowest ranges. An appendix shows a technique for handling this problem. Finally, it is shown that the use of complementary codes on transmission and nonmatched decoding gives the lowest possible sidelobe level and the minimum loss in SNR due to mismatch.
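The zero-sidelobe property of complementary code pairs that underlies this conclusion can be seen in a small sketch; this shows the idealized matched-filter case only and does not reproduce the nonmatched decoding or transmitter-measurement steps described above.

```python
# Sketch: autocorrelations of a Golay complementary pair cancel at all nonzero
# lags, which is why summing the two decoded returns leaves no range sidelobes.
import numpy as np

def golay_pair(n_doublings):
    a, b = np.array([1]), np.array([1])
    for _ in range(n_doublings):                   # standard recursive construction
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

def acf(x):
    return np.correlate(x, x, mode="full")         # full autocorrelation

a, b = golay_pair(3)                               # length-8 complementary pair
print(acf(a) + acf(b))                             # zeros everywhere except peak 16
```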
Prevalence and cost of hospital medical errors in the general and elderly United States populations.
Mallow, Peter J; Pandya, Bhavik; Horblyuk, Ruslan; Kaplan, Harold S
2013-12-01
The primary objective of this study was to quantify the differences in the prevalence rate and costs of hospital medical errors between the general population and an elderly population aged ≥65 years. Methods from an actuarial study of medical errors were modified to identify medical errors in the Premier Hospital Database using data from 2009. Visits with more than four medical errors were removed from the population to avoid over-estimation of cost. Prevalence rates were calculated based on the total number of inpatient visits. There were 3,466,596 total inpatient visits in 2009. Of these, 1,230,836 (36%) occurred in people aged ≥ 65. The prevalence rate was 49 medical errors per 1000 inpatient visits in the general cohort and 79 medical errors per 1000 inpatient visits for the elderly cohort. The top 10 medical errors accounted for more than 80% of the total in the general cohort and the 65+ cohort. The most costly medical error for the general population was postoperative infection ($569,287,000). Pressure ulcers were most costly ($347,166,257) in the elderly population. This study was conducted with a hospital administrative database, and assumptions were necessary to identify medical errors in the database. Further, there was no method to identify errors of omission or misdiagnoses within the database. This study indicates that prevalence of hospital medical errors for the elderly is greater than the general population and the associated cost of medical errors in the elderly population is quite substantial. Hospitals which further focus their attention on medical errors in the elderly population may see a significant reduction in costs due to medical errors as a disproportionate percentage of medical errors occur in this age group.
The effectiveness of risk management program on pediatric nurses' medication error.
Dehghan-Nayeri, Nahid; Bayat, Fariba; Salehi, Tahmineh; Faghihzadeh, Soghrat
2013-09-01
Medication therapy is one of the most complex and high-risk clinical processes that nurses deal with. Medication error is the most common type of error that brings about damage and death to patients, especially pediatric ones. However, these errors are preventable. Identifying and preventing undesirable events leading to medication errors are the main risk management activities. The aim of this study was to investigate the effectiveness of a risk management program on the pediatric nurses' medication error rate. This study is a quasi-experimental one with a comparison group. In this study, 200 nurses were recruited from two main pediatric hospitals in Tehran. In the experimental hospital, we applied the risk management program for a period of 6 months. Nurses at the control hospital followed the hospital's routine schedule. A pre- and post-test was performed to measure the frequency of the medication error events. SPSS software, t-test, and regression analysis were used for data analysis. After the intervention, the medication error rate of nurses at the experimental hospital was significantly lower (P < 0.001) and the error-reporting rate was higher (P < 0.007) compared to before the intervention and also in comparison to the nurses of the control hospital. Based on the results of this study and taking into account the high-risk nature of the medical environment, applying quality-control programs such as risk management can effectively prevent the occurrence of undesirable hospital events. Nursing managers can reduce the medication error rate by applying risk management programs. However, this program cannot succeed without nurses' cooperation.
Rate, causes and reporting of medication errors in Jordan: nurses' perspectives.
Mrayyan, Majd T; Shishani, Kawkab; Al-Faouri, Ibrahim
2007-09-01
The aim of the study was to describe Jordanian nurses' perceptions about various issues related to medication errors. This is the first nursing study about medication errors in Jordan. This was a descriptive study. A convenience sample of 799 nurses from 24 hospitals was obtained. Descriptive and inferential statistics were used for data analysis. Over the course of their nursing career, the average number of recalled committed medication errors per nurse was 2.2. Using incident reports, the rate of medication errors reported to nurse managers was 42.1%. Medication errors occurred mainly when medication labels/packaging were of poor quality or damaged. Nurses failed to report medication errors because they were afraid that they might be subjected to disciplinary actions or even lose their jobs. In the stepwise regression model, gender was the only predictor of medication errors in Jordan. Strategies to reduce or eliminate medication errors are required.
Image data compression having minimum perceptual error
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
1995-01-01
A method for performing image compression that eliminates redundant and invisible image components is described. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix that determines the perceived image quality and the bit rate of the compressed image. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix is constructed using visual masking by luminance and contrast, together with an error-pooling technique, resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for any given perceptual error.
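The following minimal sketch illustrates the general DCT-plus-quantization-matrix mechanism the patent abstract refers to; the 8x8 block, the level shift, and the frequency-dependent quantization matrix are illustrative placeholders, not the perceptually optimized matrix of the invention.

```python
import numpy as np
from scipy.fft import dctn, idctn

def quantize_block(block, qmatrix):
    """Forward 8x8 DCT, then divide each coefficient by its quantization-matrix
    entry and round; larger entries discard more (less visible) detail."""
    coeffs = dctn(block, norm="ortho")
    return np.round(coeffs / qmatrix)

def dequantize_block(qcoeffs, qmatrix):
    """Rescale the quantized coefficients and invert the DCT."""
    return idctn(qcoeffs * qmatrix, norm="ortho")

rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8)).astype(float)   # stand-in 8x8 image block

# Illustrative quantization matrix: coarser quantization at higher spatial frequencies.
u, v = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
qmatrix = 8.0 + 4.0 * (u + v)

q = quantize_block(block - 128.0, qmatrix)        # level shift as in JPEG-style coding
recon = dequantize_block(q, qmatrix) + 128.0
print("nonzero coefficients kept:", np.count_nonzero(q), "of 64")
print("RMS reconstruction error :", np.sqrt(np.mean((recon - block) ** 2)))
```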
Error analysis of high-rate GNSS precise point positioning for seismic wave measurement
NASA Astrophysics Data System (ADS)
Shu, Yuanming; Shi, Yun; Xu, Peiliang; Niu, Xiaoji; Liu, Jingnan
2017-06-01
High-rate GNSS precise point positioning (PPP) has been playing an increasingly important role in providing precise positioning information in fast time-varying environments. Although kinematic PPP is commonly known to have a precision of a few centimeters, the precision of high-rate PPP within a short period of time has been reported recently with experiments to reach a few millimeters in the horizontal components and sub-centimeters in the vertical component to measure seismic motion, which is several times better than the conventional kinematic PPP practice. To fully understand the mechanism behind this unexpectedly good short-term performance of high-rate PPP, we have carried out a theoretical error analysis of PPP and conducted the corresponding short-term simulations. The theoretical analysis has clearly indicated that the high-rate PPP errors consist of two types: the residual systematic errors at the starting epoch, which affect high-rate PPP through the change of satellite geometry, and the time-varying systematic errors between the starting epoch and the current epoch. Both the theoretical error analysis and simulated results are fully consistent with and thus have unambiguously confirmed the reported high precision of high-rate PPP, which has been further affirmed here by the real data experiments, indicating that high-rate PPP can indeed achieve the millimeter level of precision in the horizontal components and the sub-centimeter level of precision in the vertical component to measure motion within a short period of time. The simulation results have clearly shown that the random noise of carrier phases and higher order ionospheric errors are two major factors affecting the precision of high-rate PPP within a short period of time. The experiments with real data have also indicated that the precision of PPP solutions can degrade to the cm level in both the horizontal and vertical components, if the geometry of satellites is rather poor with a large DOP value.
Warner, D; Dijkstra, J; Hendriks, W H; Pellikaan, W F
2013-01-01
Fractional passage rates are required to predict nutrient absorption in ruminants but data on nutrient-specific passage kinetics are largely lacking. With the use of the stable isotope ratio (δ) as an internal marker, we assessed passage kinetics of fiber and fiber-bound nitrogen (N) of intrinsically labeled grass silage from fecal and omasal excretion patterns of δ(13)C and δ(15)N. In a 6×6 Latin square, lactating dairy cows received grass silages [455 g/kg of total diet dry matter (DM) ] in a 2×3 factorial arrangement from ryegrass swards fertilized at low (45 kg of N/ha) or high (90 kg of N/ha) levels of N and harvested at 3 maturity stages. Feed intake (16.7±0.48 kg of DM/d; mean ± standard error of the mean) and milk yield (26.7±0.92 kg/d) increased at the high level of N fertilization and at decreasing maturity. Nutrient digestibility decreased with increasing plant maturity, particularly at the high level of N fertilization, essentially reflecting dietary treatment effects on the nutritional composition of the grass silage. Fractional rumen passage rates (K1) were highest and total mean retention time in the gastrointestinal tract (TMRT) was lowest when based on the external marker chromium mordanted fiber (Cr-NDF; 0.047/h and 38.0 h, respectively). Fecal δ(13)C in the acid detergent fiber fraction ((13)CADF) provided the lowest K1 (0.023/h) and the highest TMRT (61.1 h) and highest peak concentration time (PCT; 24.3h) among markers. In comparison, fecal fiber-bound N ((15)NADF) had a considerably higher K1 (0.032/h) and lower TMRT (46.4 h) than (13)CADF. Total N (measured with (15)NDM) had a comparable K1 (0.034/h) to that of (15)NADF but provided the highest fractional passage rates from the proximal colon-cecum (K2; 0.37/h) and lowest PCT (17.4 h) among markers. A literature review indicated unclear effects of grass silage maturity on K1 and unknown effects of N fertilization on K1. Our study indicated no effect of advancing maturity on fecal K1 and a trend for K1 to increase with the high level of N fertilization. Parameter K2 increased, whereas PCT and TMRT generally decreased with the high level of N fertilization. Omasal digesta sampling largely confirmed results based on fecal sampling. Results indicate that the use of δ(13)C and δ(15)N can describe fiber-specific passage kinetics of forage. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Douglas, Julie A.; Skol, Andrew D.; Boehnke, Michael
2002-01-01
Gene-mapping studies routinely rely on checking for Mendelian transmission of marker alleles in a pedigree, as a means of screening for genotyping errors and mutations, with the implicit assumption that, if a pedigree is consistent with Mendel’s laws of inheritance, then there are no genotyping errors. However, the occurrence of inheritance inconsistencies alone is an inadequate measure of the number of genotyping errors, since the rate of occurrence depends on the number and relationships of genotyped pedigree members, the type of errors, and the distribution of marker-allele frequencies. In this article, we calculate the expected probability of detection of a genotyping error or mutation as an inheritance inconsistency in nuclear-family data, as a function of both the number of genotyped parents and offspring and the marker-allele frequency distribution. Through computer simulation, we explore the sensitivity of our analytic calculations to the underlying error model. Under a random-allele–error model, we find that detection rates are 51%–77% for multiallelic markers and 13%–75% for biallelic markers; detection rates are generally lower when the error occurs in a parent than in an offspring, unless a large number of offspring are genotyped. Errors are especially difficult to detect for biallelic markers with equally frequent alleles, even when both parents are genotyped; in this case, the maximum detection rate is 34% for four-person nuclear families. Error detection in families in which parents are not genotyped is limited, even with multiallelic markers. Given these results, we recommend that additional error checking (e.g., on the basis of multipoint analysis) be performed, beyond routine checking for Mendelian consistency. Furthermore, our results permit assessment of the plausibility of an observed number of inheritance inconsistencies for a family, allowing the detection of likely pedigree—rather than genotyping—errors in the early stages of a genome scan. Such early assessments are valuable in either the targeting of families for resampling or discontinued genotyping. PMID:11791214
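A small Monte Carlo sketch of the quantity discussed above, the probability that a single genotyping error appears as a Mendelian inconsistency in a fully genotyped trio, is given below. The random-allele error model, the placement of the error in the offspring, and the allele-frequency settings are illustrative assumptions, and the article's analytic calculations are not reproduced.

```python
import itertools
import random

def consistent(child, mother, father):
    """A trio is Mendelian-consistent if the child's two alleles can be
    assigned one to each parent."""
    return any(c1 in mother and c2 in father
               for c1, c2 in itertools.permutations(child))

def detection_rate(freqs, n_trios=100_000, seed=1):
    """Fraction of single random-allele errors (placed in the offspring)
    that show up as a Mendelian inconsistency in a fully genotyped trio."""
    rng = random.Random(seed)
    alleles = list(range(len(freqs)))
    detected = 0
    for _ in range(n_trios):
        mother = rng.choices(alleles, freqs, k=2)
        father = rng.choices(alleles, freqs, k=2)
        child = [rng.choice(mother), rng.choice(father)]
        # Random-allele error: one child allele is replaced by a random allele.
        bad = child.copy()
        bad[rng.randrange(2)] = rng.choices(alleles, freqs, k=1)[0]
        if not consistent(bad, mother, father):
            detected += 1
    return detected / n_trios

print("biallelic, equal frequencies:", detection_rate([0.5, 0.5]))
print("4 alleles, equal frequencies:", detection_rate([0.25] * 4))
```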
Bakker, Marjan; Wicherts, Jelte M
2014-09-01
In psychology, outliers are often excluded before running an independent samples t test, and data are often nonnormal because of the use of sum scores based on tests and questionnaires. This article concerns the handling of outliers in the context of independent samples t tests applied to nonnormal sum scores. After reviewing common practice, we present results of simulations of artificial and actual psychological data, which show that the removal of outliers based on commonly used Z value thresholds severely increases the Type I error rate. We found Type I error rates of above 20% after removing outliers with a threshold value of Z = 2 in a short and difficult test. Inflations of Type I error rates are particularly severe when researchers are given the freedom to alter threshold values of Z after having seen the effects thereof on outcomes. We recommend the use of nonparametric Mann-Whitney-Wilcoxon tests or robust Yuen-Welch tests without removing outliers. These alternatives to independent samples t tests are found to have nominal Type I error rates with a minimal loss of power when no outliers are present in the data and to have nominal Type I error rates and good power when outliers are present. PsycINFO Database Record (c) 2014 APA, all rights reserved.
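The simulation sketch below illustrates the phenomenon described above: two groups drawn from the same skewed "sum score" distribution, with |Z| > 2 observations removed separately in each group before an independent-samples t test. The binomial score distribution and sample sizes are illustrative assumptions, not the article's exact simulation design.

```python
import numpy as np
from scipy import stats

def simulate(n_sims=5000, n=30, z_thresh=2.0, seed=0):
    """Empirical Type I error rate of the independent-samples t test at
    alpha = .05, with and without removing |Z| > z_thresh 'outliers'
    separately in each group. Both groups come from the same skewed
    distribution, so every rejection is a false positive."""
    rng = np.random.default_rng(seed)
    hits_raw = hits_trim = 0
    for _ in range(n_sims):
        # Skewed, bounded scores, loosely mimicking sum scores on a hard test.
        g1 = rng.binomial(10, 0.15, size=n).astype(float)
        g2 = rng.binomial(10, 0.15, size=n).astype(float)
        if stats.ttest_ind(g1, g2).pvalue < 0.05:
            hits_raw += 1
        t1 = g1[np.abs(stats.zscore(g1)) <= z_thresh]
        t2 = g2[np.abs(stats.zscore(g2)) <= z_thresh]
        if stats.ttest_ind(t1, t2).pvalue < 0.05:
            hits_trim += 1
    return hits_raw / n_sims, hits_trim / n_sims

raw, trimmed = simulate()
print(f"Type I error, no removal   : {raw:.3f}")
print(f"Type I error, |Z|>2 removal: {trimmed:.3f}")
```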
Durand, Casey P
2013-01-01
Statistical interactions are a common component of data analysis across a broad range of scientific disciplines. However, the statistical power to detect interactions is often undesirably low. One solution is to elevate the Type 1 error rate so that important interactions are not missed in a low power situation. To date, no study has quantified the effects of this practice on power in a linear regression model. A Monte Carlo simulation study was performed. A continuous dependent variable was specified, along with three types of interactions: continuous variable by continuous variable; continuous by dichotomous; and dichotomous by dichotomous. For each of the three scenarios, the interaction effect sizes, sample sizes, and Type 1 error rate were varied, resulting in a total of 240 unique simulations. In general, power to detect the interaction effect was either so low or so high at α = 0.05 that raising the Type 1 error rate only served to increase the probability of including a spurious interaction in the model. A small number of scenarios were identified in which an elevated Type 1 error rate may be justified. Routinely elevating Type 1 error rate when testing interaction effects is not an advisable practice. Researchers are best served by positing interaction effects a priori and accounting for them when conducting sample size calculations.
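A compact sketch of one cell of such a Monte Carlo study is shown below for a continuous-by-dichotomous interaction: it estimates, for several alpha levels, both the power when a true interaction is present and the false-positive rate when it is absent. The effect sizes, sample size, and use of statsmodels OLS are illustrative assumptions, not the article's simulation settings.

```python
import numpy as np
import statsmodels.api as sm

def interaction_rejection_rate(beta_int, n=200, alpha=0.05, n_sims=2000, seed=42):
    """Monte Carlo estimate of the probability that the interaction term in
    y = b1*x1 + b2*x2 + beta_int*x1*x2 + e is significant at level alpha.
    With beta_int = 0 this is the Type 1 error rate for the interaction."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sims):
        x1 = rng.normal(size=n)                  # continuous predictor
        x2 = rng.integers(0, 2, size=n)          # dichotomous predictor
        y = 0.3 * x1 + 0.3 * x2 + beta_int * x1 * x2 + rng.normal(size=n)
        X = sm.add_constant(np.column_stack([x1, x2, x1 * x2]))
        fit = sm.OLS(y, X).fit()
        if fit.pvalues[3] < alpha:               # p value of the interaction term
            rejections += 1
    return rejections / n_sims

for a in (0.05, 0.10, 0.20):
    power = interaction_rejection_rate(0.2, alpha=a)
    false_pos = interaction_rejection_rate(0.0, alpha=a)
    print(f"alpha={a:.2f}  power(beta_int=0.2)={power:.3f}  false-positive rate(beta_int=0)={false_pos:.3f}")
```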
An Evaluation of Commercial Pedometers for Monitoring Slow Walking Speed Populations.
Beevi, Femina H A; Miranda, Jorge; Pedersen, Christian F; Wagner, Stefan
2016-05-01
Pedometers are considered desirable devices for monitoring physical activity. Two population groups of interest include patients having undergone surgery in the lower extremities or who are otherwise weakened through disease, medical treatment, or surgery procedures, as well as the slow walking senior population. For these population groups, pedometers must be able to perform reliably and accurately at slow walking speeds. The objectives of this study were to evaluate the step count accuracy of three commercially available pedometers, the Yamax (Tokyo, Japan) Digi-Walker(®) SW-200 (YM), the Omron (Kyoto, Japan) HJ-720 (OM), and the Fitbit (San Francisco, CA) Zip (FB), at slow walking speeds, specifically at 1, 2, and 3 km/h, and to raise awareness of the necessity of focusing research on step-counting devices and algorithms for slow walking populations. Fourteen participants 29.93 ±4.93 years of age were requested to walk on a treadmill at the three specified speeds, in four trials of 100 steps each. The devices were worn by the participants on the waist belt. The pedometer counts were recorded, and the error percentage was calculated. The error rate of all three evaluated pedometers decreased with the increase of speed: at 1 km/h the error rates varied from 87.11% (YM) to 95.98% (FB), at 2 km/h the error rates varied from 17.27% (FB) to 46.46% (YM), and at 3 km/h the error rates varied from 22.46% (YM) to a slight overcount of 0.70% (FB). It was observed that all the evaluated devices have high error rates at 1 km/h and mixed error rates at 2 km/h, and at 3 km/h the error rates are the smallest of the three assessed speeds, with the OM and the FB having a slight overcount. These results show that research on pedometers' software and hardware should focus more on accurate step detection at slow walking speeds.
Impact of Measurement Error on Synchrophasor Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yilu; Gracia, Jose R.; Ewing, Paul D.
2015-07-01
Phasor measurement units (PMUs), a type of synchrophasor, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is more likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.
Test functions for three-dimensional control-volume mixed finite-element methods on irregular grids
Naff, R.L.; Russell, T.F.; Wilson, J.D.
2000-01-01
Numerical methods based on unstructured grids, with irregular cells, usually require discrete shape functions to approximate the distribution of quantities across cells. For control-volume mixed finite-element methods, vector shape functions are used to approximate the distribution of velocities across cells and vector test functions are used to minimize the error associated with the numerical approximation scheme. For a logically cubic mesh, the lowest-order shape functions are chosen in a natural way to conserve intercell fluxes that vary linearly in logical space. Vector test functions, while somewhat restricted by the mapping into the logical reference cube, admit a wider class of possibilities. Ideally, an error minimization procedure to select the test function from an acceptable class of candidates would be the best procedure. Lacking such a procedure, we first investigate the effect of possible test functions on the pressure distribution over the control volume; specifically, we look for test functions that allow for the elimination of intermediate pressures on cell faces. From these results, we select three forms for the test function for use in a control-volume mixed method code and subject them to an error analysis for different forms of grid irregularity; errors are reported in terms of the discrete L2 norm of the velocity error. Of these three forms, one appears to produce optimal results for most forms of grid irregularity.
New Protocol for Skin Landmark Registration in Image-Guided Neurosurgery: Technical Note.
Gerard, Ian J; Hall, Jeffery A; Mok, Kelvin; Collins, D Louis
2015-09-01
Newer versions of the commercial Medtronic StealthStation allow the use of only 8 landmark pairs for patient-to-image registration as opposed to 9 landmarks in older systems. The choice of which landmark pair to drop in these newer systems can have an effect on the quality of the patient-to-image registration. To investigate 4 landmark registration protocols based on 8 landmark pairs and compare the resulting registration accuracy with a 9-landmark protocol. Four different protocols were tested on both phantoms and patients. Two of the protocols involved using 4 ear landmarks and 4 facial landmarks and the other 2 involved using 3 ear landmarks and 5 facial landmarks. Both the fiducial registration error and target registration error were evaluated for each of the different protocols to determine any difference between them and the 9-landmark protocol. No difference in fiducial registration error was found between any of the 8-landmark protocols and the 9-landmark protocol. A significant decrease (P < .05) in target registration error was found when using a protocol based on 4 ear landmarks and 4 facial landmarks compared with the other protocols based on 3 ear landmarks. When using 8 landmarks to perform the patient-to-image registration, the protocol using 4 ear landmarks and 4 facial landmarks greatly outperformed the other 8-landmark protocols and 9-landmark protocol, resulting in the lowest target registration error.
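For readers unfamiliar with the two error measures, the sketch below shows a generic point-based rigid registration (least-squares rotation and translation via SVD, the Kabsch/Umeyama solution) and how fiducial registration error and target registration error are computed from it on synthetic landmarks; it is not the StealthStation's registration algorithm, and all coordinates and noise levels are invented.

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rotation R and translation t mapping src points onto dst
    points (Kabsch/SVD solution for point-based rigid registration)."""
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    return R, dst_c - R @ src_c

rng = np.random.default_rng(3)
landmarks = rng.uniform(-80, 80, size=(8, 3))      # e.g. 8 skin landmark positions (mm)
target = np.array([0.0, 0.0, 40.0])                # deep surgical target (mm)

# Simulate the "patient space" as a known rigid motion plus localization noise.
theta = np.deg2rad(10.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([5.0, -3.0, 2.0])
noisy = landmarks @ R_true.T + t_true + rng.normal(0, 1.5, size=landmarks.shape)

R, t = rigid_fit(landmarks, noisy)
fre = np.sqrt(np.mean(np.sum((landmarks @ R.T + t - noisy) ** 2, axis=1)))
tre = np.linalg.norm((target @ R.T + t) - (target @ R_true.T + t_true))
print(f"fiducial registration error (RMS, mm): {fre:.2f}")
print(f"target registration error (mm)       : {tre:.2f}")
```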
Refractive errors in medical students in Singapore.
Woo, W W; Lim, K A; Yang, H; Lim, X Y; Liew, F; Lee, Y S; Saw, S M
2004-10-01
Refractive errors are becoming more of a problem in many societies, with prevalence rates of myopia in many Asian urban countries reaching epidemic proportions. This study aims to determine the prevalence rates of various refractive errors in Singapore medical students. A total of 157 second-year medical students (aged 19-23 years) in Singapore were examined. Refractive error measurements were determined using a stand-alone autorefractor. Additional demographic data were obtained via questionnaires filled in by the students. The prevalence rate of myopia in Singapore medical students was 89.8 percent (spherical equivalent (SE) of at least -0.50 D). Hyperopia was present in 1.3 percent (SE more than +0.50 D) of the participants and the overall astigmatism prevalence rate was 82.2 percent (cylinder of at least 0.50 D). Prevalence rates of myopia and astigmatism in second-year Singapore medical students are among the highest in the world.
Social deviance activates the brain's error-monitoring system.
Kim, Bo-Rin; Liss, Alison; Rao, Monica; Singer, Zachary; Compton, Rebecca J
2012-03-01
Social psychologists have long noted the tendency for human behavior to conform to social group norms. This study examined whether feedback indicating that participants had deviated from group norms would elicit a neural signal previously shown to be elicited by errors and monetary losses. While electroencephalograms were recorded, participants (N = 30) rated the attractiveness of 120 faces and received feedback giving the purported average rating made by a group of peers. The feedback was manipulated so that group ratings either were the same as a participant's rating or deviated by 1, 2, or 3 points. Feedback indicating deviance from the group norm elicited a feedback-related negativity, a brainwave signal known to be elicited by objective performance errors and losses. The results imply that the brain treats deviance from social norms as an error.
Cryptographic robustness of a quantum cryptography system using phase-time coding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molotkov, S. N.
2008-01-15
A cryptographic analysis is presented of a new quantum key distribution protocol using phase-time coding. An upper bound is obtained for the error rate that guarantees secure key distribution. It is shown that the maximum tolerable error rate for this protocol depends on the counting rate in the control time slot. When no counts are detected in the control time slot, the protocol guarantees secure key distribution if the bit error rate in the sifted key does not exceed 50%. This protocol partially discriminates between errors due to system defects (e.g., imbalance of a fiber-optic interferometer) and eavesdropping. In the absence of eavesdropping, the counts detected in the control time slot are not caused by interferometer imbalance, which reduces the requirements for interferometer stability.
Automatic learning rate adjustment for self-supervising autonomous robot control
NASA Technical Reports Server (NTRS)
Arras, Michael K.; Protzel, Peter W.; Palumbo, Daniel L.
1992-01-01
Described is an application in which an Artificial Neural Network (ANN) controls the positioning of a robot arm with five degrees of freedom by using visual feedback provided by two cameras. This application and the specific ANN model, local linear maps, are based on the work of Ritter, Martinetz, and Schulten. We extended their approach by generating a filtered, average positioning error from the continuous camera feedback and by coupling the learning rate to this error. When the network learns to position the arm, the positioning error decreases and so does the learning rate until the system stabilizes at a minimum error and learning rate. This abolishes the need for a predetermined cooling schedule. The automatic cooling procedure results in a closed loop control with no distinction between a learning phase and a production phase. If the positioning error suddenly starts to increase due to an internal failure such as a broken joint, or an environmental change such as a camera moving, the learning rate increases accordingly. Thus, learning is automatically activated and the network adapts to the new condition after which the error decreases again and learning is 'shut off'. The automatic cooling is therefore a prerequisite for the autonomy and the fault tolerance of the system.
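A conceptual sketch of coupling the learning rate to a filtered positioning error, in the spirit of the automatic cooling described above, is given below; the exponential moving average, gain, and synthetic error trace are illustrative choices rather than the authors' actual controller or network.

```python
import numpy as np

def error_coupled_rate(errors, gain=0.5, smoothing=0.05, min_rate=1e-3):
    """Low-pass filter the positioning-error magnitude with an exponential
    moving average and make the learning rate proportional to it, so
    learning 'cools' as performance improves and reactivates if the error
    suddenly grows (e.g., a camera is moved or a joint fails)."""
    filtered, rates = 0.0, []
    for e in errors:
        filtered += smoothing * (abs(e) - filtered)    # EMA of the error
        rates.append(max(min_rate, gain * filtered))   # learning rate tracks it
    return np.array(rates)

# Error decays while the arm is learned, then jumps when the environment changes.
errors = np.concatenate([np.exp(-np.linspace(0, 4, 200)),          # learning phase
                         0.8 * np.exp(-np.linspace(0, 4, 200))])   # after a disturbance
rates = error_coupled_rate(errors)
print("learning rate just before the disturbance:", round(float(rates[199]), 4))
print("peak learning rate after the disturbance :", round(float(rates[200:].max()), 4))
```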
An Automated Method to Generate e-Learning Quizzes from Online Language Learner Writing
ERIC Educational Resources Information Center
Flanagan, Brendan; Yin, Chengjiu; Hirokawa, Sachio; Hashimoto, Kiyota; Tabata, Yoshiyuki
2013-01-01
In this paper, the entries of Lang-8, which is a Social Networking Site (SNS) site for learning and practicing foreign languages, were analyzed and found to contain similar rates of errors for most error categories reported in previous research. These similarly rated errors were then processed using an algorithm to determine corrections suggested…
Code of Federal Regulations, 2012 CFR
2012-10-01
..., financial records, and automated data systems; (ii) The data are free from computational errors and are internally...
DNA Barcoding through Quaternary LDPC Codes
Tapia, Elizabeth; Spetale, Flavio; Krsticevic, Flavia; Angelone, Laura; Bulacio, Pilar
2015-01-01
For many parallel applications of Next-Generation Sequencing (NGS) technologies short barcodes able to accurately multiplex a large number of samples are demanded. To address these competitive requirements, the use of error-correcting codes is advised. Current barcoding systems are mostly built from short random error-correcting codes, a feature that strongly limits their multiplexing accuracy and experimental scalability. To overcome these problems on sequencing systems impaired by mismatch errors, the alternative use of binary BCH and pseudo-quaternary Hamming codes has been proposed. However, these codes either fail to provide a fine-scale with regard to size of barcodes (BCH) or have intrinsic poor error correcting abilities (Hamming). Here, the design of barcodes from shortened binary BCH codes and quaternary Low Density Parity Check (LDPC) codes is introduced. Simulation results show that although accurate barcoding systems of high multiplexing capacity can be obtained with any of these codes, using quaternary LDPC codes may be particularly advantageous due to the lower rates of read losses and undetected sample misidentification errors. Even at mismatch error rates of 10^-2 per base, 24-nt LDPC barcodes can be used to multiplex roughly 2000 samples with a sample misidentification error rate in the order of 10^-9 at the expense of a rate of read losses just in the order of 10^-6. PMID:26492348
Comparison of the biometric formulas used for applanation A-scan ultrasound biometry.
Özcura, Fatih; Aktaş, Serdar; Sağdık, Hacı Murat; Tetikoğlu, Mehmet
2016-10-01
The purpose of the study was to compare the accuracy of various biometric formulas for predicting postoperative refraction determined using applanation A-scan ultrasound. This retrospective comparative study included 485 eyes that underwent uneventful phacoemulsification with intraocular lens (IOL) implantation. Applanation A-scan ultrasound biometry and postoperative manifest refraction were obtained in all eyes. Biometric data were entered into each of the five IOL power calculation formulas: SRK-II, SRK/T, Holladay I, Hoffer Q, and Binkhorst II. All eyes were divided into three groups according to axial length: short (≤22.0 mm), average (22.0-25.0 mm), and long (≥25.0 mm) eyes. The postoperative spherical equivalent was calculated and compared with the predicted refractive error using each biometric formula. The results showed that all formulas had significantly lower mean absolute error (MAE) in comparison with Binkhorst II formula (P < 0.01). The lowest MAE was obtained with the SRK-II for average (0.49 ± 0.40 D) and short (0.67 ± 0.54 D) eyes and the SRK/T for long (0.61 ± 0.50 D) eyes. The highest postoperative hyperopic shift was seen with the SRK-II for average (46.8 %), short (28.1 %), and long (48.4 %) eyes. The highest postoperative myopic shift was seen with the Holladay I for average (66.4 %) and long (71.0 %) eyes and the SRK/T for short eyes (80.6 %). In conclusion, the SRK-II formula produced the lowest MAE in average and short eyes and the SRK/T formula produced the lowest MAE in long eyes. The SRK-II has the highest postoperative hyperopic shift in all eyes. The highest postoperative myopic shift is with the Holladay I for average and long eyes and SRK/T for short eyes.
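For reference, the sketch below implements the SRK and SRK-II regression formulas in their commonly quoted form (P = A - 2.5L - 0.9K, with the SRK-II axial-length adjustment of the A-constant); the A-constant and example biometry values are hypothetical, and the other formulas compared in the study are not reproduced here.

```python
def srk(a_const, axial_len_mm, avg_k_diopters):
    """Original SRK regression formula: P = A - 2.5*L - 0.9*K."""
    return a_const - 2.5 * axial_len_mm - 0.9 * avg_k_diopters

def srk_ii(a_const, axial_len_mm, avg_k_diopters):
    """SRK-II: the same regression, but with the A-constant adjusted by axial
    length (adjustment values as commonly quoted in the literature)."""
    if axial_len_mm < 20.0:
        a_adj = a_const + 3.0
    elif axial_len_mm < 21.0:
        a_adj = a_const + 2.0
    elif axial_len_mm < 22.0:
        a_adj = a_const + 1.0
    elif axial_len_mm <= 24.5:
        a_adj = a_const
    else:
        a_adj = a_const - 0.5
    return srk(a_adj, axial_len_mm, avg_k_diopters)

# Hypothetical example: A-constant 118.4, short vs average vs long eyes, K = 43.5 D.
for L in (21.5, 23.5, 26.0):
    print(f"axial length {L} mm -> emmetropic IOL power {srk_ii(118.4, L, 43.5):.2f} D")
```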
Optimized FPGA Implementation of Multi-Rate FIR Filters Through Thread Decomposition
NASA Technical Reports Server (NTRS)
Zheng, Jason Xin; Nguyen, Kayla; He, Yutao
2010-01-01
Multirate (decimation/interpolation) filters are among the essential signal processing components in spaceborne instruments where Finite Impulse Response (FIR) filters are often used to minimize nonlinear group delay and finite-precision effects. Cascaded (multi-stage) designs of Multi-Rate FIR (MRFIR) filters are further used for large rate change ratio, in order to lower the required throughput while simultaneously achieving comparable or better performance than single-stage designs. Traditional representation and implementation of MRFIR employ polyphase decomposition of the original filter structure, whose main purpose is to compute only the needed output at the lowest possible sampling rate. In this paper, an alternative representation and implementation technique, called TD-MRFIR (Thread Decomposition MRFIR), is presented. The basic idea is to decompose MRFIR into output computational threads, in contrast to a structural decomposition of the original filter as done in the polyphase decomposition. Each thread represents an instance of the finite convolution required to produce a single output of the MRFIR. The filter is thus viewed as a finite collection of concurrent threads. The technical details of TD-MRFIR will be explained, first showing its applicability to the implementation of downsampling, upsampling, and resampling FIR filters, and then describing a general strategy to optimally allocate the number of filter taps. A particular FPGA design of multi-stage TD-MRFIR for the L-band radar of NASA's SMAP (Soil Moisture Active Passive) instrument is demonstrated; and its implementation results in several targeted FPGA devices are summarized in terms of the functional (bit width, fixed-point error) and performance (time closure, resource usage, and power estimation) parameters.
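The sketch below illustrates the conventional polyphase decomposition that the abstract contrasts with thread decomposition: a decimate-by-M FIR filter implemented as M subfilters running at the output rate, verified against direct filtering followed by downsampling. The prototype filter and decimation factor are illustrative, and TD-MRFIR itself is not reproduced here.

```python
import numpy as np

def polyphase_decimate(x, h, M):
    """Decimate-by-M FIR filtering via polyphase decomposition: each of the M
    subfilters h[k::M] runs at the low (output) rate on its own delayed input
    branch, and the branch outputs are summed."""
    n_out = len(x) // M
    y = np.zeros(n_out)
    for k in range(M):
        hk = h[k::M]                                        # k-th polyphase subfilter
        if k == 0:
            branch = np.convolve(x[0::M], hk)[:n_out]
        else:
            branch = np.convolve(x[M - k::M], hk)[:n_out]
            branch = np.concatenate(([0.0], branch[:-1]))   # one output-sample delay
        y += branch
    return y

rng = np.random.default_rng(7)
x = rng.normal(size=4096)
n = np.arange(48)
h = np.hamming(48) * np.sinc((n - 23.5) / 4.0)              # crude low-pass prototype filter
M = 4

reference = np.convolve(x, h)[: len(x)][::M]                # filter at full rate, keep every M-th sample
fast = polyphase_decimate(x, h, M)
print("max |difference| vs direct implementation:", np.max(np.abs(reference - fast)))
```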
Human operator response to error-likely situations in complex engineering systems
NASA Technical Reports Server (NTRS)
Morris, Nancy M.; Rouse, William B.
1988-01-01
The causes of human error in complex systems are examined. First, a conceptual framework is provided in which two broad categories of error are discussed: errors of action, or slips, and errors of intention, or mistakes. Conditions in which slips and mistakes might be expected to occur are identified, based on existing theories of human error. Regarding the role of workload, it is hypothesized that workload may act as a catalyst for error. Two experiments are presented in which humans' responses to error-likely situations were examined. Subjects controlled PLANT under a variety of conditions and periodically provided subjective ratings of mental effort. A complex pattern of results was obtained, which was not consistent with predictions. Generally, the results of this research indicate that: (1) humans respond to conditions in which errors might be expected by attempting to reduce the possibility of error, and (2) adaptation to conditions is a potent influence on human behavior in discretionary situations. Subjects' explanations for changes in effort ratings are also explored.
NASA Technical Reports Server (NTRS)
Noble, Viveca K.
1993-01-01
There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software that simulates error control circuits and to evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal-to-noise ratio needed for a certain desired bit error rate. The use of concatenated coding, e.g., an inner convolutional code and an outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
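As a hedged illustration of the error-detection component mentioned last, the sketch below computes a 16-bit CRC in the common CCITT style (generator polynomial 0x1021, initial value 0xFFFF) and shows that a single flipped bit changes the checksum; whether these exact parameters match the CCSDS recommendation is not asserted here.

```python
def crc16_ccitt(data: bytes, poly: int = 0x1021, init: int = 0xFFFF) -> int:
    """Bitwise CRC-16 in the CCITT style (generator x^16 + x^12 + x^5 + 1).
    The sender appends this value to the frame; the receiver recomputes it,
    and any mismatch indicates the frame was corrupted in transit."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

frame = b"telemetry frame payload"
checksum = crc16_ccitt(frame)
print(f"transmitted CRC: 0x{checksum:04X}")

corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]   # a single bit flipped in transit
print("error detected:", crc16_ccitt(corrupted) != checksum)
```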
Mapping DNA polymerase errors by single-molecule sequencing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, David F.; Lu, Jenny; Chang, Seungwoo
Genomic integrity is compromised by DNA polymerase replication errors, which occur in a sequence-dependent manner across the genome. Accurate and complete quantification of a DNA polymerase's error spectrum is challenging because errors are rare and difficult to detect. We report a high-throughput sequencing assay to map in vitro DNA replication errors at the single-molecule level. Unlike previous methods, our assay is able to rapidly detect a large number of polymerase errors at base resolution over any template substrate without quantification bias. To overcome the high error rate of high-throughput sequencing, our assay uses a barcoding strategy in which each replication product is tagged with a unique nucleotide sequence before amplification. This allows multiple sequencing reads of the same product to be compared so that sequencing errors can be found and removed. We demonstrate the ability of our assay to characterize the average error rate, error hotspots and lesion bypass fidelity of several DNA polymerases.
Paediatric in-patient prescribing errors in Malaysia: a cross-sectional multicentre study.
Khoo, Teik Beng; Tan, Jing Wen; Ng, Hoong Phak; Choo, Chong Ming; Bt Abdul Shukor, Intan Nor Chahaya; Teh, Siao Hean
2017-06-01
Background There is a lack of large comprehensive studies in developing countries on paediatric in-patient prescribing errors in different settings. Objectives To determine the characteristics of in-patient prescribing errors among paediatric patients. Setting General paediatric wards, neonatal intensive care units and paediatric intensive care units in government hospitals in Malaysia. Methods This is a cross-sectional multicentre study involving 17 participating hospitals. Drug charts were reviewed in each ward to identify the prescribing errors. All prescribing errors identified were further assessed for their potential clinical consequences, likely causes and contributing factors. Main outcome measures Incidence, types, potential clinical consequences, causes and contributing factors of the prescribing errors. Results The overall prescribing error rate was 9.2% out of 17,889 prescribed medications. There was no significant difference in the prescribing error rates between different types of hospitals or wards. The use of electronic prescribing had a higher prescribing error rate than manual prescribing (16.9% vs 8.2%, p < 0.05). Twenty-eight (1.7%) prescribing errors were deemed to have serious potential clinical consequences and 2 (0.1%) were judged to be potentially fatal. Most of the errors were attributed to human factors, i.e. performance or knowledge deficit. The most common contributing factors were lack of supervision and lack of knowledge. Conclusions Although electronic prescribing may potentially improve safety, it may conversely cause prescribing errors due to suboptimal interfaces and cumbersome work processes. Junior doctors need specific training in paediatric prescribing and close supervision to reduce prescribing errors in paediatric in-patients.
Automatic detection of malaria parasite in blood images using two parameters.
Kim, Jong-Dae; Nam, Kyeong-Min; Park, Chan-Young; Kim, Yu-Seop; Song, Hye-Jeong
2015-01-01
Malaria must be diagnosed quickly and accurately at the initial infection stage and treated early to cure it properly. The malaria diagnosis method using a microscope requires much labor and time of a skilled expert, and the diagnosis results vary greatly between individual diagnosticians. Therefore, to be able to measure the malaria parasite infection quickly and accurately, studies have been conducted for automated classification techniques using various parameters. In this study, classification performance was measured as two parameters were varied, in order to determine the parameter values that best distinguish normal from plasmodium-infected red blood cells. To reduce the stain deviation of the acquired images, a principal component analysis (PCA) grayscale conversion method was used, and as parameters, we used a malaria-infected area and a threshold value used in binarization. The best classification performance for detecting plasmodium-infected red blood cells was obtained by fixing the cell threshold value at 128 and selecting the malaria threshold value (72) that gave the lowest error rate.
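A minimal sketch of the two-parameter decision rule suggested by this abstract follows: pixels are projected onto the first principal component of their RGB values as a stand-in for the PCA grayscale conversion, binarized at an intensity threshold, and the cell is flagged as infected when the stained area reaches an area threshold. Treating 128 as the binarization threshold and 72 as the area threshold is an assumption made here for illustration.

```python
import numpy as np

def pca_grayscale(rgb):
    """Project RGB pixels onto their first principal component, a simple
    stand-in for the PCA grayscale conversion used to reduce stain variation."""
    pixels = rgb.reshape(-1, 3).astype(float)
    centered = pixels - pixels.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    pc = vt[0]
    if pc.sum() < 0:                 # orient so brighter pixels map to larger values
        pc = -pc
    gray = centered @ pc
    gray = (gray - gray.min()) / (np.ptp(gray) + 1e-12) * 255.0
    return gray.reshape(rgb.shape[:2])

def classify_cell(gray_cell, intensity_thresh=128, area_thresh=72):
    """Binarize the cell image at intensity_thresh and call the cell
    plasmodium-infected if the number of dark 'stained' pixels reaches
    area_thresh (both thresholds are the tunable parameters)."""
    stained_area = int(np.count_nonzero(gray_cell < intensity_thresh))
    return stained_area >= area_thresh, stained_area

rng = np.random.default_rng(5)
cell_rgb = rng.integers(150, 255, size=(40, 40, 3))               # mostly bright background
cell_rgb[15:25, 15:25] = rng.integers(0, 80, size=(10, 10, 3))    # dark stained blob

gray = pca_grayscale(cell_rgb)
infected, area = classify_cell(gray)
print(f"stained area = {area} px -> infected: {infected}")
```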
Yarber, William L; Milhausen, Robin R; Beavers, Karly A; Ryan, Rebecca; Sullivan, Margaret J; Vanterpool, Karen B; Sanders, Stephanie A; Graham, Cynthia A; Crosby, Richard A
2018-07-01
To conduct a pilot test of a brief, self-guided, home-based program designed to improve male condom use attitudes and behaviors among young women. Women aged 18-24 years from a large Midwestern University reporting having had penile-vaginal sex with two or more partners in the past 3 months. Sixty-seven enrolled; 91.0% completed the study. A repeated measures design was used, with assessments occurring at baseline, immediately post intervention (T2), and 30 days subsequent (T3). Condom use errors and problems decreased, condom-related attitudes and self-efficacy improved, and experiences of condom-protected sex were rated more positively when comparing baseline with T2 and T3 scores. Further, the proportion of condom-protected episodes more than doubled between T1 and T3 for those in the lowest quartile for condom use at baseline. This low-resource, home-based program improved condom-related attitudes and promoted the correct and consistent use of condoms.
The assessment of cognitive errors using an observer-rated method.
Drapeau, Martin
2014-01-01
Cognitive Errors (CEs) are a key construct in cognitive behavioral therapy (CBT). Integral to CBT is that individuals with depression process information in an overly negative or biased way, and that this bias is reflected in specific depressotypic CEs which are distinct from normal information processing. Despite the importance of this construct in CBT theory, practice, and research, few methods are available to researchers and clinicians to reliably identify CEs as they occur. In this paper, the author presents a rating system, the Cognitive Error Rating Scale, which can be used by trained observers to identify and assess the cognitive errors of patients or research participants in vivo, i.e., as they are used or reported by the patients or participants. The method is described, including some of the more important rating conventions to be considered when using the method. This paper also describes the 15 cognitive errors assessed, and the different summary scores, including valence of the CEs, that can be derived from the method.
Cooperative MIMO communication at wireless sensor network: an error correcting code approach.
Islam, Mohammad Rakibul; Han, Young Shin
2011-01-01
Cooperative communication in wireless sensor network (WSN) explores the energy efficient wireless communication schemes between multiple sensors and data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy efficient cooperative MIMO (C-MIMO) technique is proposed where low density parity check (LDPC) code is used as an error correcting code. The rate of LDPC code is varied by varying the length of message and parity bits. Simulation results show that the cooperative communication scheme outperforms SISO scheme in the presence of LDPC code. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenario. Energy efficiencies are compared for different targeted probability of bit error p(b). It is observed that C-MIMO performs more efficiently when the targeted p(b) is smaller. Also the lower encoding rate for LDPC code offers better error characteristics.
Parental Cognitive Errors Mediate Parental Psychopathology and Ratings of Child Inattention.
Haack, Lauren M; Jiang, Yuan; Delucchi, Kevin; Kaiser, Nina; McBurnett, Keith; Hinshaw, Stephen; Pfiffner, Linda
2017-09-01
We investigate the Depression-Distortion Hypothesis in a sample of 199 school-aged children with ADHD-Predominantly Inattentive presentation (ADHD-I) by examining relations and cross-sectional mediational pathways between parental characteristics (i.e., levels of parental depressive and ADHD symptoms) and parental ratings of child problem behavior (inattention, sluggish cognitive tempo, and functional impairment) via parental cognitive errors. Results demonstrated a positive association between parental factors and parental ratings of inattention, as well as a mediational pathway between parental depressive and ADHD symptoms and parental ratings of inattention via parental cognitive errors. Specifically, higher levels of parental depressive and ADHD symptoms predicted higher levels of cognitive errors, which in turn predicted higher parental ratings of inattention. Findings provide evidence for core tenets of the Depression-Distortion Hypothesis, which state that parents with high rates of psychopathology hold negative schemas for their child's behavior and subsequently, report their child's behavior as more severe. © 2016 Family Process Institute.
Decoy-state quantum key distribution with more than three types of photon intensity pulses
NASA Astrophysics Data System (ADS)
Chau, H. F.
2018-04-01
The decoy-state method closes source security loopholes in quantum key distribution (QKD) using a laser source. In this method, accurate estimates of the detection rates of vacuum and single-photon events plus the error rate of single-photon events are needed to give a good enough lower bound of the secret key rate. Nonetheless, the current estimation method for these detection and error rates, which uses three types of photon intensities, is accurate up to about 1 % relative error. Here I report an experimentally feasible way that greatly improves these estimates and hence increases the one-way key rate of the BB84 QKD protocol with unbiased bases selection by at least 20% on average in realistic settings. The major tricks are the use of more than three types of photon intensities plus the fact that estimating bounds of the above detection and error rates is numerically stable, although these bounds are related to the inversion of a high condition number matrix.
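The toy numerical sketch below illustrates the kind of linear estimation the decoy-state method relies on: with a phase-randomized source, the gain at intensity mu is a Poisson-weighted sum of the photon-number yields, so several intensities give a linear system for the low-order yields whose conditioning limits the achievable accuracy. The yields, intensities, and noise level are illustrative, and none of the protocol's security analysis is reproduced.

```python
import numpy as np
from math import factorial

def poisson_weights(mu, n_max):
    """P(n photons | pulse intensity mu) for a phase-randomized laser pulse."""
    return np.array([np.exp(-mu) * mu ** n / factorial(n) for n in range(n_max + 1)])

# Illustrative "true" yields Y_n (detection probability given an n-photon pulse).
dark, eta, n_max = 1e-5, 0.10, 4
true_Y = np.array([1 - (1 - dark) * (1 - eta) ** n for n in range(n_max + 1)])

intensities = np.array([0.05, 0.1, 0.2, 0.35, 0.5])      # five photon intensity settings
A = np.vstack([poisson_weights(mu, n_max) for mu in intensities])
gains = A @ true_Y                                        # overall gains Q_mu one would measure

rng = np.random.default_rng(0)
noisy = gains * (1 + 1e-3 * rng.normal(size=gains.size))  # 0.1% statistical fluctuation

# Recover the low-order yields by inverting the (ill-conditioned) intensity matrix;
# small fluctuations in the gains are amplified in the recovered yields.
est_Y = np.linalg.solve(A, noisy)
print(f"condition number of intensity matrix: {np.linalg.cond(A):.1e}")
print(f"Y0: true {true_Y[0]:.2e}  estimated {est_Y[0]:.2e}")
print(f"Y1: true {true_Y[1]:.3f}  estimated {est_Y[1]:.3f}")
```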
Northern Hemisphere Merged Cloud Analyses from the United States Air Force Cloud Depiction Forecasting System II
Pasillas, Chandra M.
2013-03-01
...layering and typing to provide a vertical stratification of the cloud-filled pixels detected in Level 2. Level 3 output is remapped to the standard AFWA... analyses are compared to one another to see if the most recent analysis also has the lowest estimated error. Optimum interpolation (OI) occurs when...
McCabe, G.J.
1989-01-01
Errors of the Thornthwaite model can be analyzed using adjusted pan evaporation as an index of potential evapotranspiration. An examination of ratios of adjusted pan evaporation to Thornthwaite potential evapotranspiration indicates that the ratios are highest in the winter and lowest during summer months. This trend suggests a parabolic pattern. In this study a parabolic function is used to adjust Thornthwaite estimates of potential evapotranspiration. Forty locations east of the Rocky Mountains are analyzed. -from Author
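For context, a sketch of the unadjusted Thornthwaite potential evapotranspiration model, with the empirical constants as commonly quoted, is given below; the parabolic correction proposed in the study is not reproduced because its coefficients are not stated in the abstract, and the example temperatures are invented.

```python
import numpy as np

def thornthwaite_pet(monthly_temp_c):
    """Unadjusted Thornthwaite potential evapotranspiration (mm/month) for a
    standard 30-day month with 12 hours of daylight, using the commonly quoted
    empirical constants. Months with mean temperature <= 0 C contribute no PET."""
    T = np.clip(np.asarray(monthly_temp_c, dtype=float), 0.0, None)
    I = np.sum((T / 5.0) ** 1.514)                           # annual heat index
    a = 6.75e-7 * I**3 - 7.71e-5 * I**2 + 1.792e-2 * I + 0.49239
    return 16.0 * (10.0 * T / I) ** a

# Illustrative mid-latitude monthly mean temperatures (deg C), Jan..Dec.
temps = [-2, 0, 5, 11, 17, 22, 25, 24, 19, 13, 6, 0]
pet = thornthwaite_pet(temps)
for month, (t, e) in enumerate(zip(temps, pet), start=1):
    print(f"month {month:2d}: T = {t:5.1f} C  PET = {e:6.1f} mm")
```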
Families as Partners in Hospital Error and Adverse Event Surveillance
Khan, Alisa; Coffey, Maitreya; Litterer, Katherine P.; Baird, Jennifer D.; Furtak, Stephannie L.; Garcia, Briana M.; Ashland, Michele A.; Calaman, Sharon; Kuzma, Nicholas C.; O’Toole, Jennifer K.; Patel, Aarti; Rosenbluth, Glenn; Destino, Lauren A.; Everhart, Jennifer L.; Good, Brian P.; Hepps, Jennifer H.; Dalal, Anuj K.; Lipsitz, Stuart R.; Yoon, Catherine S.; Zigmont, Katherine R.; Srivastava, Rajendu; Starmer, Amy J.; Sectish, Theodore C.; Spector, Nancy D.; West, Daniel C.; Landrigan, Christopher P.
2017-01-01
IMPORTANCE Medical errors and adverse events (AEs) are common among hospitalized children. While clinician reports are the foundation of operational hospital safety surveillance and a key component of multifaceted research surveillance, patient and family reports are not routinely gathered. We hypothesized that a novel family-reporting mechanism would improve incident detection. OBJECTIVE To compare error and AE rates (1) gathered systematically with vs without family reporting, (2) reported by families vs clinicians, and (3) reported by families vs hospital incident reports. DESIGN, SETTING, AND PARTICIPANTS We conducted a prospective cohort study including the parents/caregivers of 989 hospitalized patients 17 years and younger (total 3902 patient-days) and their clinicians from December 2014 to July 2015 in 4 US pediatric centers. Clinician abstractors identified potential errors and AEs by reviewing medical records, hospital incident reports, and clinician reports as well as weekly and discharge Family Safety Interviews (FSIs). Two physicians reviewed and independently categorized all incidents, rating severity and preventability (agreement, 68%–90%; κ, 0.50–0.68). Discordant categorizations were reconciled. Rates were generated using Poisson regression estimated via generalized estimating equations to account for repeated measures on the same patient. MAIN OUTCOMES AND MEASURES Error and AE rates. RESULTS Overall, 746 parents/caregivers consented for the study. Of these, 717 completed FSIs. Their median (interquartile range) age was 32.5 (26–40) years; 380 (53.0%) were nonwhite, 566 (78.9%) were female, 603 (84.1%) were English speaking, and 380 (53.0%) had attended college. Of 717 parents/caregivers completing FSIs, 185 (25.8%) reported a total of 255 incidents, which were classified as 132 safety concerns (51.8%), 102 nonsafety-related quality concerns (40.0%), and 21 other concerns (8.2%). These included 22 preventable AEs (8.6%), 17 nonharmful medical errors (6.7%), and 11 nonpreventable AEs (4.3%) on the study unit. In total, 179 errors and 113 AEs were identified from all sources. Family reports included 8 otherwise unidentified AEs, including 7 preventable AEs. Error rates with family reporting (45.9 per 1000 patient-days) were 1.2-fold (95%CI, 1.1–1.2) higher than rates without family reporting (39.7 per 1000 patient-days). Adverse event rates with family reporting (28.7 per 1000 patient-days) were 1.1-fold (95%CI, 1.0–1.2; P=.006) higher than rates without (26.1 per 1000 patient-days). Families and clinicians reported similar rates of errors (10.0 vs 12.8 per 1000 patient-days; relative rate, 0.8; 95%CI, .5–1.2) and AEs (8.5 vs 6.2 per 1000 patient-days; relative rate, 1.4; 95%CI, 0.8–2.2). Family-reported error rates were 5.0-fold (95%CI, 1.9–13.0) higher and AE rates 2.9-fold (95% CI, 1.2–6.7) higher than hospital incident report rates. CONCLUSIONS AND RELEVANCE Families provide unique information about hospital safety and should be included in hospital safety surveillance in order to facilitate better design and assessment of interventions to improve safety. PMID:28241211
Star tracker error analysis: Roll-to-pitch nonorthogonality
NASA Technical Reports Server (NTRS)
Corson, R. W.
1979-01-01
An error analysis is described for an anomaly isolated in the star tracker software line-of-sight (LOS) rate test. The LOS rate cosine was found to be greater than one in certain cases, which implied that one or both of the star-tracker-measured end-point unit vectors used to compute the LOS rate cosine had lengths greater than unity. The roll/pitch nonorthogonality matrix in the TNB CL module of the IMU software is examined as the source of the error.
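A small numeric sketch of the anomaly follows: the raw dot product of two measured "unit" vectors can exceed one whenever their lengths are slightly greater than unity, whereas normalizing before forming the LOS rate cosine keeps it within [-1, 1]. The vectors below are invented for illustration and do not come from the flight data.

```python
import numpy as np

def los_rate_cosine(u1, u2, normalize=True):
    """Cosine of the angle between two measured line-of-sight 'unit' vectors.
    If the measured vectors are slightly longer than unity and are not
    renormalized, the raw dot product can exceed one."""
    u1, u2 = np.asarray(u1, float), np.asarray(u2, float)
    if normalize:
        u1 = u1 / np.linalg.norm(u1)
        u2 = u2 / np.linalg.norm(u2)
    return float(np.dot(u1, u2))

# Two nearly parallel measurements whose lengths slightly exceed one.
direction = np.array([0.99999, 0.00447, 0.0])
direction /= np.linalg.norm(direction)
v1 = 1.0005 * np.array([1.0, 0.0, 0.0])
v2 = 1.0004 * direction

print("raw dot product (can exceed 1):", los_rate_cosine(v1, v2, normalize=False))
print("normalized LOS rate cosine    :", los_rate_cosine(v1, v2))
```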
Error monitoring issues for common channel signaling
NASA Astrophysics Data System (ADS)
Hou, Victor T.; Kant, Krishna; Ramaswami, V.; Wang, Jonathan L.
1994-04-01
Motivated by field data which showed a large number of link changeovers and incidences of link oscillations between in-service and out-of-service states in common channel signaling (CCS) networks, a number of analyses of the link error monitoring procedures in the SS7 protocol were performed by the authors. This paper summarizes the results obtained thus far and includes the following: (1) results of an exact analysis of the performance of the error monitoring procedures under both random and bursty errors; (2) a demonstration that there exists a range of error rates within which the error monitoring procedures of SS7 may induce frequent changeovers and changebacks; (3) an analysis of the performance of the SS7 level-2 transmission protocol to determine the tolerable error rates within which the delay requirements can be met; (4) a demonstration that the tolerable error rate depends strongly on various link and traffic characteristics, thereby implying that a single set of error monitor parameters will not work well in all situations; (5) some recommendations on a customizable/adaptable scheme of error monitoring with a discussion of their implementability. These issues may be particularly relevant in the presence of anticipated increases in SS7 traffic due to widespread deployment of Advanced Intelligent Network (AIN) and Personal Communications Service (PCS) as well as for developing procedures for high-speed SS7 links currently under consideration by standards bodies.
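As a hedged illustration of point (2), the sketch below simulates a simplified leaky-bucket signal-unit error-rate monitor (counter incremented for each errored signal unit, periodically leaked, changeover at a threshold). The increment, leak interval, and threshold used here are illustrative defaults rather than normative SS7 parameter values, but the simulation does show the narrow band of error rates in which changeovers become frequent.

```python
import random

def changeover_rate(su_error_rate, threshold=64, leak_interval=256,
                    n_signal_units=500_000, seed=0):
    """Simplified leaky-bucket signal-unit error-rate monitor: the counter is
    incremented for each errored signal unit and decremented (not below zero)
    once every leak_interval signal units; reaching the threshold triggers a
    link changeover. Returns changeovers per simulated signal unit."""
    rng = random.Random(seed)
    counter = changeovers = 0
    for n in range(1, n_signal_units + 1):
        if rng.random() < su_error_rate:
            counter += 1
        if n % leak_interval == 0 and counter > 0:
            counter -= 1
        if counter >= threshold:
            changeovers += 1
            counter = 0               # link taken out of service and restored (simplified)
    return changeovers / n_signal_units

# The leak rate is 1/leak_interval errors per signal unit, so changeovers become
# frequent only once the error rate approaches and exceeds that value.
for p in (1e-3, 3e-3, 4e-3, 6e-3, 1e-2):
    print(f"SU error rate {p:.0e}: changeovers per SU = {changeover_rate(p):.2e}")
```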
Gildersleeve-Neumann, Christina E; Kester, Ellen S; Davis, Barbara L; Peña, Elizabeth D
2008-07-01
English speech acquisition by typically developing 3- to 4-year-old children with monolingual English was compared to English speech acquisition by typically developing 3- to 4-year-old children with bilingual English-Spanish backgrounds. We predicted that exposure to Spanish would not affect the English phonetic inventory but would increase error frequency and type in bilingual children. Single-word speech samples were collected from 33 children. Phonetically transcribed samples for the 3 groups (monolingual English children, English-Spanish bilingual children who were predominantly exposed to English, and English-Spanish bilingual children with relatively equal exposure to English and Spanish) were compared at 2 time points and for change over time for phonetic inventory, phoneme accuracy, and error pattern frequencies. Children demonstrated similar phonetic inventories. Some bilingual children produced Spanish phonemes in their English and produced few consonant cluster sequences. Bilingual children with relatively equal exposure to English and Spanish averaged more errors than did bilingual children who were predominantly exposed to English. Both bilingual groups showed higher error rates than English-only children overall, particularly for syllable-level error patterns. All language groups decreased in some error patterns, although the ones that decreased were not always the same across language groups. Some group differences of error patterns and accuracy were significant. Vowel error rates did not differ by language group. Exposure to English and Spanish may result in a higher English error rate in typically developing bilinguals, including the application of Spanish phonological properties to English. Slightly higher error rates are likely typical for bilingual preschool-aged children. Change over time at these time points for all 3 groups was similar, suggesting that all will reach an adult-like system in English with exposure and practice.
Antidepressant and antipsychotic medication errors reported to United States poison control centers.
Kamboj, Alisha; Spiller, Henry A; Casavant, Marcel J; Chounthirath, Thitphalak; Hodges, Nichole L; Smith, Gary A
2018-05-08
To investigate unintentional therapeutic medication errors associated with antidepressant and antipsychotic medications in the United States and expand current knowledge on the types of errors commonly associated with these medications. A retrospective analysis of non-health care facility unintentional therapeutic errors associated with antidepressant and antipsychotic medications was conducted using data from the National Poison Data System. From 2000 to 2012, poison control centers received 207 670 calls reporting unintentional therapeutic errors associated with antidepressant or antipsychotic medications that occurred outside of a health care facility, averaging 15 975 errors annually. The rate of antidepressant-related errors increased by 50.6% from 2000 to 2004, decreased by 6.5% from 2004 to 2006, and then increased 13.0% from 2006 to 2012. The rate of errors related to antipsychotic medications increased by 99.7% from 2000 to 2004 and then increased by 8.8% from 2004 to 2012. Overall, 70.1% of reported errors occurred among adults, and 59.3% were among females. The medications most frequently associated with errors were selective serotonin reuptake inhibitors (30.3%), atypical antipsychotics (24.1%), and other types of antidepressants (21.5%). Most medication errors took place when an individual inadvertently took or was given a medication twice (41.0%), inadvertently took someone else's medication (15.6%), or took the wrong medication (15.6%). This study provides a comprehensive overview of non-health care facility unintentional therapeutic errors associated with antidepressant and antipsychotic medications. The frequency and rate of these errors increased significantly from 2000 to 2012. Given that use of these medications is increasing in the US, this study provides important information about the epidemiology of the associated medication errors. Copyright © 2018 John Wiley & Sons, Ltd.
JPEG 2000 Encoding with Perceptual Distortion Control
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Liu, Zhen; Karam, Lina J.
2008-01-01
An alternative approach has been devised for encoding image data in compliance with JPEG 2000, the most recent still-image data-compression standard of the Joint Photographic Experts Group. Heretofore, JPEG 2000 encoding has been implemented by several related schemes classified as rate-based distortion-minimization encoding. In each of these schemes, the end user specifies a desired bit rate and the encoding algorithm strives to attain that rate while minimizing a mean squared error (MSE). While rate-based distortion minimization is appropriate for transmitting data over a limited-bandwidth channel, it is not the best approach for applications in which the perceptual quality of reconstructed images is a major consideration. A better approach for such applications is the present alternative one, denoted perceptual distortion control, in which the encoding algorithm strives to compress data to the lowest bit rate that yields at least a specified level of perceptual image quality. Some additional background information on JPEG 2000 is prerequisite to a meaningful summary of JPEG encoding with perceptual distortion control. The JPEG 2000 encoding process includes two subprocesses known as tier-1 and tier-2 coding. In order to minimize the MSE for the desired bit rate, a rate-distortion-optimization subprocess is introduced between the tier-1 and tier-2 subprocesses. In tier-1 coding, each coding block is independently bit-plane coded from the most-significant-bit (MSB) plane to the least-significant-bit (LSB) plane, using three coding passes (except for the MSB plane, which is coded using only one "clean up" coding pass). For M bit planes, this subprocess involves a total number of (3M - 2) coding passes. An embedded bit stream is then generated for each coding block. Information on the reduction in distortion and the increase in the bit rate associated with each coding pass is collected. This information is then used in a rate-control procedure to determine the contribution of each coding block to the output compressed bit stream.
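As a concrete illustration of the tier-1 pass count described above, the following Python sketch computes the (3M - 2) coding passes for a code-block with M bit planes and pairs hypothetical per-pass rate and distortion statistics of the kind a rate-control step would consume; the function names and numbers are illustrative and are not taken from any JPEG 2000 implementation.

```python
# Illustrative sketch (not from any JPEG 2000 library): number of tier-1
# coding passes for a code-block with M bit planes, plus a toy record of
# per-pass (rate increase, distortion decrease) pairs that a rate-control
# step could later consume.

def num_tier1_passes(num_bit_planes: int) -> int:
    """Each bit plane gets 3 passes except the MSB plane, which gets 1."""
    if num_bit_planes < 1:
        return 0
    return 3 * num_bit_planes - 2

def collect_pass_stats(pass_rates, pass_distortion_drops):
    """Pair the rate cost and distortion reduction of each coding pass.

    The two input lists are hypothetical per-pass numbers; a real encoder
    would measure them while bit-plane coding the block.
    """
    return list(zip(pass_rates, pass_distortion_drops))

if __name__ == "__main__":
    M = 8  # hypothetical number of bit planes in one code-block
    print(num_tier1_passes(M))  # 3*8 - 2 = 22 coding passes
```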
Observations on the clinical examination of the camel (Camelus dromedarius) in the field.
Tefera, M
2004-07-01
Camels usually inhabit remote areas, where diagnostic facilities and laboratories are very scarce. The species differences between the camel and other domestic animals necessitate some specific examination techniques. The objective of this study was to describe the clinical examination methods and sources of common errors that require special consideration in the camel. Young camels are examined in the standing position, while adults require restraint. Restraining procedures, both standing and in sternal recumbency, are described. New equipment and a crush were designed. The body temperature of the camels examined fluctuated from 35.7 to 38.9 degrees C, being lowest in the morning and highest in the afternoon; high temperature in the morning is indicative of fever, while high afternoon temperatures could be hyperthermia. It was difficult to take the pulse rate for routine procedures. The heart rate ranged from 35 to 50 per min; there was no difference between the heart rate in the morning and in the afternoon. The mean respiratory rate was 11 per min and respiration was of the thoraco-lumbar type. The mucous membranes of the eye were an important site for appreciating signs of discoloration, while those of the mouth, rectum and vagina were unsuitable. The left flank was the best site for determining the rate of rumen contractions, which was 3+/-1.2 every 5 min, as determined by auscultation; counting the contractions by the application of the fist was difficult. The palpable external lymph nodes were the parotid, maxillary, prescapular, inferior cervical, thoracic, cubital, iliac and popliteal; they are large and can be seen on inspection in healthy animals, so their visibility alone is not indicative of disease. A list of diagnostic indicators for the rapid diagnosis of ten endemic camel diseases was generated from the empirical signs.
Torsheim, Torbjørn; Nygren, Jens M; Rasmussen, Mette; Arnarsson, Arsæll M; Bendtsen, Pernille; Schnohr, Christina W; Nielsen, Line; Nyholm, Maria
2018-02-01
We aimed to estimate the magnitude of socioeconomic inequality in self-rated health among Nordic adolescents (aged 11, 13 and 15 years) using the Family Affluence Scale (a composite measure of material assets) and perceived family wealth as indicators of socioeconomic status. Data were collected from the Health Behaviour in School-aged Children (HBSC) survey in 2013-2014. A sample of 32,560 adolescents from Denmark, Norway, Finland, Iceland, Greenland and Sweden was included in the study. Age-adjusted regression analyses were used to estimate associations between fair or poor self-rated health and the ridit scores for family affluence and perceived wealth. The pooled relative index of inequality of 2.10 indicates that the risk of fair or poor health was about twice as high for young people with the lowest family affluence relative to those with the highest family affluence. The relative index of inequality for observed family affluence was highest in Denmark and lowest in Norway. For perceived family wealth, the pooled relative index of inequality of 3.99 indicates that the risk of fair or poor health was about four times as high for young people with the lowest perceived family wealth relative to those with the highest perceived family wealth. The relative index of inequality for perceived family wealth was highest in Iceland and lowest in Greenland. Social inequality in self-rated health among adolescents was found to be robust across subjective and objective indicators of family affluence in the Nordic welfare states.
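A minimal sketch, on simulated data, of the ridit-score and relative-index-of-inequality computation described above; it uses a Poisson-family GLM with robust standard errors as a stand-in for the risk-ratio regression, omits the age adjustment, and all column names and numbers are assumptions rather than the HBSC data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical data: an ordinal family-affluence group (0 = lowest) and a
# binary indicator of fair/poor self-rated health.
n = 5000
affluence = rng.integers(0, 5, size=n)
p_fairpoor = 0.25 - 0.03 * affluence          # assumed risk gradient
fair_poor = rng.binomial(1, p_fairpoor)
df = pd.DataFrame({"affluence": affluence, "fair_poor": fair_poor})

# Ridit score: cumulative share of the sample below a group plus half the
# group's own share, so scores run from near 0 (lowest) to near 1 (highest).
shares = df["affluence"].value_counts(normalize=True).sort_index()
ridit = shares.cumsum() - shares / 2
df["ridit"] = df["affluence"].map(ridit)

# Regress the outcome on the ridit score with a log-link model; exp(-beta)
# is then interpretable as a relative index of inequality, i.e. the risk of
# fair/poor health at the bottom versus the top of the affluence distribution.
X = sm.add_constant(df["ridit"])
fit = sm.GLM(df["fair_poor"], X, family=sm.families.Poisson()).fit(cov_type="HC0")
rii = np.exp(-fit.params["ridit"])
print(f"estimated RII (lowest vs highest affluence): {rii:.2f}")
```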
Hypothesis Testing Using Factor Score Regression
Devlieger, Ines; Mayer, Axel; Rosseel, Yves
2015-01-01
In this article, an overview is given of four methods to perform factor score regression (FSR), namely regression FSR, Bartlett FSR, the bias avoiding method of Skrondal and Laake, and the bias correcting method of Croon. The bias correcting method is extended to include a reliable standard error. The four methods are compared with each other and with structural equation modeling (SEM) by using analytic calculations and two Monte Carlo simulation studies to examine their finite sample characteristics. Several performance criteria are used, such as the bias using the unstandardized and standardized parameterization, efficiency, mean square error, standard error bias, type I error rate, and power. The results show that the bias correcting method, with the newly developed standard error, is the only suitable alternative for SEM. While it has a higher standard error bias than SEM, it has a comparable bias, efficiency, mean square error, power, and type I error rate. PMID:29795886
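The following sketch illustrates two of the factor-score computations compared above (Bartlett and regression/Thomson scores) on simulated one-factor data, using an assumed loading matrix rather than estimated parameters; it is not the Croon bias-correcting estimator or its corrected standard error, only a demonstration of why the scoring method matters for the second-stage regression.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical one-factor measurement model: 4 indicators of a latent eta,
# plus an observed outcome y regressed on eta. Loadings and uniquenesses are
# assumptions for the sketch, not estimates from data.
n = 2000
loadings = np.array([0.8, 0.7, 0.6, 0.5])        # Lambda (4 x 1)
uniq_var = 1.0 - loadings**2                      # diagonal of Psi
eta = rng.normal(size=n)
X = eta[:, None] * loadings + rng.normal(size=(n, 4)) * np.sqrt(uniq_var)
y = 0.4 * eta + rng.normal(scale=0.8, size=n)

Lam = loadings[:, None]
Psi_inv = np.diag(1.0 / uniq_var)
Sigma = Lam @ Lam.T + np.diag(uniq_var)           # implied covariance (Phi = 1)

# Bartlett factor scores: (L' Psi^-1 L)^-1 L' Psi^-1 x
W_bartlett = np.linalg.solve(Lam.T @ Psi_inv @ Lam, Lam.T @ Psi_inv)
f_bartlett = X @ W_bartlett.T

# Regression (Thomson) factor scores: L' Sigma^-1 x
W_regression = Lam.T @ np.linalg.inv(Sigma)
f_regression = X @ W_regression.T

# Second-stage regression of y on each type of factor score. With a single
# exogenous factor as predictor, the regression-score slope is essentially
# unbiased while the Bartlett-score slope is attenuated; which score type is
# biased depends on the model, which is what the bias-avoiding/correcting
# methods address.
for name, f in [("Bartlett", f_bartlett), ("regression", f_regression)]:
    slope = np.polyfit(f.ravel(), y, 1)[0]
    print(f"{name:>10} FSR slope: {slope:.3f}  (true structural slope 0.4)")
```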
Use of Earth’s Magnetic Field for Mitigating Gyroscope Errors Regardless of Magnetic Perturbation
Afzal, Muhammad Haris; Renaudin, Valérie; Lachapelle, Gérard
2011-01-01
Most portable systems like smart-phones are equipped with low cost consumer grade sensors, making them useful as Pedestrian Navigation Systems (PNS). Measurements of these sensors are severely contaminated by errors caused due to instrumentation and environmental issues rendering the unaided navigation solution with these sensors of limited use. The overall navigation error budget associated with pedestrian navigation can be categorized into position/displacement errors and attitude/orientation errors. Most of the research is conducted for tackling and reducing the displacement errors, which either utilize Pedestrian Dead Reckoning (PDR) or special constraints like Zero velocity UPdaTes (ZUPT) and Zero Angular Rate Updates (ZARU). This article targets the orientation/attitude errors encountered in pedestrian navigation and develops a novel sensor fusion technique to utilize the Earth’s magnetic field, even perturbed, for attitude and rate gyroscope error estimation in pedestrian navigation environments where it is assumed that Global Navigation Satellite System (GNSS) navigation is denied. As the Earth’s magnetic field undergoes severe degradations in pedestrian navigation environments, a novel Quasi-Static magnetic Field (QSF) based attitude and angular rate error estimation technique is developed to effectively use magnetic measurements in highly perturbed environments. The QSF scheme is then used for generating the desired measurements for the proposed Extended Kalman Filter (EKF) based attitude estimator. Results indicate that the QSF measurements are capable of effectively estimating attitude and gyroscope errors, reducing the overall navigation error budget by over 80% in urban canyon environment. PMID:22247672
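A simplified one-axis sketch of the quasi-static-field idea described above: flag windows in which the variance of the magnetic field magnitude is small, and use only those windows to estimate a constant gyroscope bias against the magnetometer-derived heading. The signal models, thresholds, and the simple averaging estimator are assumptions for illustration; the article's EKF formulation is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated 1-axis example: a true yaw rate, a gyro with constant bias, and a
# magnetometer-derived heading that is only trustworthy while the field is
# quasi-static. All signal models and thresholds here are assumptions.
dt, n = 0.01, 6000
true_rate = 0.2 * np.sin(0.05 * np.arange(n) * dt * 2 * np.pi)
gyro_bias = 0.05                                   # rad/s, to be estimated
gyro = true_rate + gyro_bias + rng.normal(0, 0.005, n)

true_heading = np.cumsum(true_rate) * dt
field_mag = np.ones(n) * 50.0                      # nominal field magnitude (uT)
field_mag[2000:3500] += rng.normal(0, 5.0, 1500)   # a perturbed interval
mag_heading = true_heading + rng.normal(0, 0.01, n)
mag_heading[2000:3500] += rng.normal(0, 0.3, 1500) # heading corrupted there

# Quasi-static field (QSF) detector: flag windows where the variance of the
# field magnitude stays below a threshold.
win, var_thresh = 100, 1.0
qsf = np.array([field_mag[i:i + win].var() < var_thresh
                for i in range(0, n - win, win)])

# During QSF windows only, compare the integrated gyro rate with the change
# in magnetometer heading to estimate the gyro bias.
bias_samples = []
for k, ok in enumerate(qsf):
    if not ok:
        continue
    i, j = k * win, k * win + win
    gyro_delta = gyro[i:j].sum() * dt
    mag_delta = mag_heading[j - 1] - mag_heading[i]
    bias_samples.append((gyro_delta - mag_delta) / (win * dt))

print(f"estimated gyro bias: {np.mean(bias_samples):.4f} rad/s (true {gyro_bias})")
```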
Prediction of pilot reserve attention capacity during air-to-air target tracking
NASA Technical Reports Server (NTRS)
Onstott, E. D.; Faulkner, W. H.
1977-01-01
Reserve attention capacity of a pilot was calculated using a pilot model that allocates exclusive model attention according to the ranking of task urgency functions whose variables are tracking error and error rate. The modeled task consisted of tracking a maneuvering target aircraft both vertically and horizontally, and when possible, performing a diverting side task which was simulated by the precise positioning of an electrical stylus and modeled as a task of constant urgency in the attention allocation algorithm. The urgency of the single loop vertical task is simply the magnitude of the vertical tracking error, while the multiloop horizontal task requires a nonlinear urgency measure of error and error rate terms. Comparison of model results with flight simulation data verified the computed model statistics of tracking error of both axes, lateral and longitudinal stick amplitude and rate, and side task episodes. Full data for the simulation tracking statistics as well as the explicit equations and structure of the urgency function multiaxis pilot model are presented.
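A minimal sketch of the exclusive attention-allocation rule described above: each task's urgency is evaluated at every instant and the task with the largest urgency receives the model's attention, with the side task held at constant urgency. Apart from "vertical urgency = |error|", the urgency functions and constants below are assumptions, not the published model parameters.

```python
# Minimal sketch of urgency-driven exclusive attention allocation. The
# horizontal urgency function and all numeric constants are assumptions for
# illustration only.

def vertical_urgency(err: float) -> float:
    return abs(err)

def horizontal_urgency(err: float, err_rate: float, k: float = 2.0) -> float:
    # Assumed nonlinear combination of error and error rate.
    return (err + k * err_rate) ** 2

SIDE_TASK_URGENCY = 0.3   # constant urgency assigned to the diverting side task

def allocate_attention(v_err, h_err, h_err_rate):
    """Return which task receives exclusive model attention this instant."""
    urgencies = {
        "vertical": vertical_urgency(v_err),
        "horizontal": horizontal_urgency(h_err, h_err_rate),
        "side_task": SIDE_TASK_URGENCY,
    }
    return max(urgencies, key=urgencies.get)

if __name__ == "__main__":
    print(allocate_attention(v_err=0.1, h_err=0.05, h_err_rate=0.0))  # side_task
    print(allocate_attention(v_err=0.8, h_err=0.2, h_err_rate=0.1))   # vertical
```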
The Effects of Non-Normality on Type III Error for Comparing Independent Means
ERIC Educational Resources Information Center
Mendes, Mehmet
2007-01-01
The major objective of this study was to investigate the effects of non-normality on Type III error rates for ANOVA F and its three commonly recommended parametric counterparts, namely the Welch, Brown-Forsythe, and Alexander-Govern tests. Therefore, these tests were compared in terms of Type III error rates across a variety of population distributions,…
NASA Astrophysics Data System (ADS)
Bezan, Scott; Shirani, Shahram
2006-12-01
To reliably transmit video over error-prone channels, the data should be both source and channel coded. When multiple channels are available for transmission, the problem extends to that of partitioning the data across these channels. The condition of transmission channels, however, varies with time. Therefore, the error protection added to the data at one instant of time may not be optimal at the next. In this paper, we propose a method for adaptively adding error correction code in a rate-distortion (RD) optimized manner using rate-compatible punctured convolutional codes to an MJPEG2000 constant rate-coded frame of video. We perform an analysis on the rate-distortion tradeoff of each of the coding units (tiles and packets) in each frame and adapt the error correction code assigned to the unit taking into account the bandwidth and error characteristics of the channels. This method is applied to both single and multiple time-varying channel environments. We compare our method with a basic protection method in which data is either not transmitted, transmitted with no protection, or transmitted with a fixed amount of protection. Simulation results show promising performance for our proposed method.
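A hedged sketch of the rate-distortion-driven protection allocation idea: each coding unit offers a few options (from "not transmitted" through increasing amounts of protection) with an associated bit cost and expected distortion, and a greedy pass repeatedly buys the upgrade with the best distortion reduction per bit until the channel budget is exhausted. The option tables are made up and the greedy rule is a simple heuristic, not the paper's RCPC-based optimization.

```python
# Hedged sketch of RD-driven allocation of protection levels to coding units.
# Each unit offers options (bits_needed, expected_distortion); option 0 can be
# read as "do not transmit". The numbers are illustrative, not RCPC rates.

def allocate_protection(units, budget_bits):
    """Greedy allocation: always take the upgrade with the best
    distortion-reduction-per-bit ratio that still fits in the budget."""
    choice = [0] * len(units)                 # start every unit at option 0
    spent = sum(units[i][0][0] for i in range(len(units)))
    while True:
        best = None
        for i, options in enumerate(units):
            j = choice[i] + 1
            if j >= len(options):
                continue
            extra_bits = options[j][0] - options[choice[i]][0]
            gain = options[choice[i]][1] - options[j][1]
            if spent + extra_bits > budget_bits or extra_bits <= 0:
                continue
            ratio = gain / extra_bits
            if best is None or ratio > best[0]:
                best = (ratio, i, extra_bits)
        if best is None:
            return choice, spent
        _, i, extra_bits = best
        choice[i] += 1
        spent += extra_bits

units = [
    # (bits, expected distortion) per protection option, per coding unit
    [(0, 90.0), (120, 40.0), (200, 25.0)],
    [(0, 60.0), (100, 30.0), (180, 22.0)],
    [(0, 30.0), (80, 20.0), (150, 17.0)],
]
print(allocate_protection(units, budget_bits=400))
```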
Errors in fluid therapy in medical wards.
Mousavi, Maryam; Khalili, Hossein; Dashti-Khavidaki, Simin
2012-04-01
Intravenous fluid therapy remains an essential part of patients' care during hospitalization. There are only a few studies focused on fluid therapy in hospitalized patients, and there is no consensus statement about fluid therapy in patients who are hospitalized in medical wards. The aim of the present study was to assess intravenous fluid therapy status and related errors in the patients during the course of hospitalization in the infectious diseases wards of a referral teaching hospital. This study was conducted in the infectious diseases wards of Imam Khomeini Complex Hospital, Tehran, Iran. During a retrospective study, data related to intravenous fluid therapy were collected by two clinical pharmacists of infectious diseases from 2008 to 2010. Intravenous fluid therapy information including indication, type, volume and rate of fluid administration was recorded for each patient. An internal protocol for intravenous fluid therapy was designed based on literature review and available recommendations. The data related to patients' fluid therapy were compared with this protocol. The fluid therapy was considered appropriate if it was compatible with the protocol regarding indication of intravenous fluid therapy, type, electrolyte content and rate of fluid administration. Any mistake in the selection of fluid type, content, volume and rate of administration was considered an intravenous fluid therapy error. Five hundred and ninety-six medication errors were detected in the patients during the study period. The overall rate of fluid therapy errors was 1.3 errors per patient during hospitalization. Errors in the rate of fluid administration (29.8%), incorrect fluid volume calculation (26.5%) and incorrect type of fluid selection (24.6%) were the most common types of errors. Male sex, old age, baseline renal disease, diabetes co-morbidity, and hospitalization due to endocarditis, HIV infection or sepsis were predisposing factors for the occurrence of fluid therapy errors. Our results showed that intravenous fluid therapy errors occurred commonly in hospitalized patients, especially in the medical wards. Improving health-care workers' knowledge of and attention to these errors is essential for preventing medication errors related to fluid therapy.
Bayes Error Rate Estimation Using Classifier Ensembles
NASA Technical Reports Server (NTRS)
Tumer, Kagan; Ghosh, Joydeep
2003-01-01
The Bayes error rate gives a statistical lower bound on the error achievable for a given classification problem and the associated choice of features. By reliably estimating this rate, one can assess the usefulness of the feature set that is being used for classification. Moreover, by comparing the accuracy achieved by a given classifier with the Bayes rate, one can quantify how effective that classifier is. Classical approaches for estimating or finding bounds for the Bayes error, in general, yield rather weak results for small sample sizes, unless the problem has some simple characteristics, such as Gaussian class-conditional likelihoods. This article shows how the outputs of a classifier ensemble can be used to provide reliable and easily obtainable estimates of the Bayes error with negligible extra computation. Three methods of varying sophistication are described. First, we present a framework that estimates the Bayes error when multiple classifiers, each providing an estimate of the a posteriori class probabilities, are combined through averaging. Second, we bolster this approach by adding an information theoretic measure of output correlation to the estimate. Finally, we discuss a more general method that just looks at the class labels indicated by ensemble members and provides error estimates based on the disagreements among classifiers. The methods are illustrated for artificial data, a difficult four-class problem involving underwater acoustic data, and two benchmark problems. For data sets with known Bayes error, the combiner-based methods introduced in this article outperform existing methods. The estimates obtained by the proposed methods also seem quite reliable for the real-life data sets for which the true Bayes rates are unknown.
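A toy sketch of the two simplest ideas above: average the ensemble members' a posteriori estimates and take the plug-in error of the averaged posterior, and compute a label-disagreement rate from predicted labels only. The data, ensemble construction, and summary statistics below are illustrative, not the article's exact estimators.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy illustration: average the a posteriori estimates of several classifiers
# and use the plug-in Bayes-error formula E[1 - max_c p(c|x)]; also compute a
# label-disagreement rate as a rough companion statistic.
X, y = make_classification(n_samples=4000, n_features=10, n_informative=5,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

members = []
rng = np.random.default_rng(0)
for _ in range(7):
    idx = rng.choice(len(X_tr), size=len(X_tr), replace=True)   # bootstrap
    members.append(LogisticRegression(max_iter=1000).fit(X_tr[idx], y_tr[idx]))

probs = np.mean([m.predict_proba(X_te) for m in members], axis=0)
plugin_bayes_error = np.mean(1.0 - probs.max(axis=1))

labels = np.array([m.predict(X_te) for m in members])
pairwise_disagreement = np.mean([
    np.mean(labels[i] != labels[j])
    for i in range(len(members)) for j in range(i + 1, len(members))
])

ensemble_error = np.mean(probs.argmax(axis=1) != y_te)
print(f"plug-in Bayes error estimate : {plugin_bayes_error:.3f}")
print(f"mean pairwise disagreement   : {pairwise_disagreement:.3f}")
print(f"ensemble test error          : {ensemble_error:.3f}")
```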
Wu, Zhijin; Liu, Dongmei; Sui, Yunxia
2008-02-01
The process of identifying active targets (hits) in high-throughput screening (HTS) usually involves 2 steps: first, removing or adjusting for systematic variation in the measurement process so that extreme values represent strong biological activity instead of systematic biases such as plate effect or edge effect and, second, choosing a meaningful cutoff on the calculated statistic to declare positive compounds. Both false-positive and false-negative errors are inevitable in this process. Common control or estimation of error rates is often based on an assumption of normal distribution of the noise. The error rates in hit detection, especially false-negative rates, are hard to verify because in most assays, only compounds selected in primary screening are followed up in confirmation experiments. In this article, the authors take advantage of a quantitative HTS experiment in which all compounds are tested 42 times over a wide range of 14 concentrations so true positives can be found through a dose-response curve. Using the activity status defined by dose curve, the authors analyzed the effect of various data-processing procedures on the sensitivity and specificity of hit detection, the control of error rate, and hit confirmation. A new summary score is proposed and demonstrated to perform well in hit detection and useful in confirmation rate estimation. In general, adjusting for positional effects is beneficial, but a robust test can prevent overadjustment. Error rates estimated based on normal assumption do not agree with actual error rates, for the tails of noise distribution deviate from normal distribution. However, false discovery rate based on empirically estimated null distribution is very close to observed false discovery proportion.
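The following sketch mimics, on simulated plates, two ingredients discussed above: a robust (median/MAD) score with a simple positional adjustment, and an empirical estimate of the false discovery proportion at a hit cutoff that does not rely on a normality assumption. The plate layout, effect sizes, and thresholds are assumptions, not the quantitative HTS data set analyzed in the article.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated primary-screen readout: mostly inactive compounds with heavy-tailed
# noise, a few percent true actives with shifted signal, and an edge effect.
n_plates, rows, cols = 10, 16, 24
data, is_active = [], []
for _ in range(n_plates):
    plate = rng.standard_t(df=5, size=(rows, cols))       # non-normal noise
    plate[0, :] += 1.0                                     # edge effect
    plate[-1, :] += 1.0
    active = rng.random((rows, cols)) < 0.02
    plate[active] -= 4.0                                   # actives score low
    data.append(plate)
    is_active.append(active)
data, is_active = np.array(data), np.array(is_active)

# Robust per-plate score: subtract the plate median and scale by MAD, which
# resists distortion from the actives themselves and from heavy tails.
med = np.median(data, axis=(1, 2), keepdims=True)
mad = np.median(np.abs(data - med), axis=(1, 2), keepdims=True) * 1.4826
score = (data - med) / mad

# Simple positional adjustment: remove robust per-plate row effects.
score = score - np.median(score, axis=2, keepdims=True)

# Empirical false-discovery estimate at a cutoff: compare the observed hit
# count with the count expected from the bulk, assumed roughly symmetric.
cutoff = -3.0
observed_hits = np.sum(score < cutoff)
expected_nulls = np.sum(score > -cutoff)
fdr_hat = min(1.0, expected_nulls / max(observed_hits, 1))
true_fdp = np.sum((score < cutoff) & ~is_active) / max(observed_hits, 1)
print(f"hits: {observed_hits}, estimated FDR: {fdr_hat:.2f}, true FDP: {true_fdp:.2f}")
```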
Outpatient Prescribing Errors and the Impact of Computerized Prescribing
Gandhi, Tejal K; Weingart, Saul N; Seger, Andrew C; Borus, Joshua; Burdick, Elisabeth; Poon, Eric G; Leape, Lucian L; Bates, David W
2005-01-01
Background Medication errors are common among inpatients and many are preventable with computerized prescribing. Relatively little is known about outpatient prescribing errors or the impact of computerized prescribing in this setting. Objective To assess the rates, types, and severity of outpatient prescribing errors and understand the potential impact of computerized prescribing. Design Prospective cohort study in 4 adult primary care practices in Boston using prescription review, patient survey, and chart review to identify medication errors, potential adverse drug events (ADEs) and preventable ADEs. Participants Outpatients over age 18 who received a prescription from 24 participating physicians. Results We screened 1879 prescriptions from 1202 patients, and completed 661 surveys (response rate 55%). Of the prescriptions, 143 (7.6%; 95% confidence interval (CI) 6.4% to 8.8%) contained a prescribing error. Three errors led to preventable ADEs and 62 (43%; 3% of all prescriptions) had potential for patient injury (potential ADEs); 1 was potentially life-threatening (2%) and 15 were serious (24%). Errors in frequency (n=77, 54%) and dose (n=26, 18%) were common. The rates of medication errors and potential ADEs were not significantly different at basic computerized prescribing sites (4.3% vs 11.0%, P=.31; 2.6% vs 4.0%, P=.16) compared to handwritten sites. Advanced checks (including dose and frequency checking) could have prevented 95% of potential ADEs. Conclusions Prescribing errors occurred in 7.6% of outpatient prescriptions and many could have harmed patients. Basic computerized prescribing systems may not be adequate to reduce errors. More advanced systems with dose and frequency checking are likely needed to prevent potentially harmful errors. PMID:16117752
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arnold, Anthony, E-mail: anthony.arnold@sesiahs.health.nsw.gov.a; Delaney, Geoff P.; Cassapi, Lynette
Purpose: Radiotherapy is a common treatment for cancer patients. Although incidence of error is low, errors can be severe or affect significant numbers of patients. In addition, errors will often not manifest until long periods after treatment. This study describes the development of an incident reporting tool that allows categorical analysis and time trend reporting, covering the first 3 years of use. Methods and Materials: A radiotherapy-specific incident analysis system was established. Staff members were encouraged to report actual errors and near-miss events detected at prescription, simulation, planning, or treatment phases of radiotherapy delivery. Trend reporting was reviewed monthly. Results: Reports were analyzed for the first 3 years of operation (May 2004-2007). A total of 688 reports was received during the study period. The actual error rate was 0.2% per treatment episode. During the study period, the actual error rates decreased significantly from 1% per year to 0.3% per year (p < 0.001), as did the total event report rates (p < 0.0001). There were 3.5 times as many near misses reported compared with actual errors. Conclusions: This system has allowed real-time analysis of events within a radiation oncology department, leading to a reduced error rate through a focus on learning and prevention from the near-miss reports. Plans are underway to develop this reporting tool for Australia and New Zealand.
Syndromic surveillance for health information system failures: a feasibility study.
Ong, Mei-Sing; Magrabi, Farah; Coiera, Enrico
2013-05-01
To explore the applicability of a syndromic surveillance method to the early detection of health information technology (HIT) system failures. A syndromic surveillance system was developed to monitor a laboratory information system at a tertiary hospital. Four indices were monitored: (1) total laboratory records being created; (2) total records with missing results; (3) average serum potassium results; and (4) total duplicated tests on a patient. The goal was to detect HIT system failures causing: data loss at the record level; data loss at the field level; erroneous data; and unintended duplication of data. Time-series models of the indices were constructed, and statistical process control charts were used to detect unexpected behaviors. The ability of the models to detect HIT system failures was evaluated using simulated failures, each lasting for 24 h, with error rates ranging from 1% to 35%. In detecting data loss at the record level, the model achieved a sensitivity of 0.26 when the simulated error rate was 1%, while maintaining a specificity of 0.98. Detection performance improved with increasing error rates, achieving a perfect sensitivity when the error rate was 35%. In the detection of missing results, erroneous serum potassium results and unintended repetition of tests, perfect sensitivity was attained when the error rate was as small as 5%. Decreasing the error rate to 1% resulted in a drop in sensitivity to 0.65-0.85. Syndromic surveillance methods can potentially be applied to monitor HIT systems, to facilitate the early detection of failures.
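A minimal sketch of the monitoring idea for one of the four indices: model the expected daily count of laboratory records with a simple day-of-week baseline and raise an alarm when an observed count leaves a Shewhart-style 3-sigma band. The simulated weekly pattern, the injected 24-hour failure, and the control limits are illustrative, not the article's fitted time-series models.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated daily record counts with a weekly pattern, then a simulated HIT
# failure that silently drops 20% of records for 24 hours.
weekly_pattern = np.array([900, 950, 960, 940, 930, 500, 450])  # Mon..Sun
days = 120
counts = np.array([rng.poisson(weekly_pattern[d % 7]) for d in range(days)])
failure_day = 100
counts[failure_day] = int(counts[failure_day] * 0.8)   # 20% record loss

# Baseline: day-of-week means and standard deviations from the first 90 days.
train = 90
alarms = []
for d in range(train, days):
    history = counts[np.arange(d % 7, train, 7)]        # same weekday in training
    mean, sd = history.mean(), history.std(ddof=1)
    if abs(counts[d] - mean) > 3 * sd:                   # Shewhart-style 3-sigma rule
        alarms.append(d)

print("alarm days:", alarms)   # should include the simulated failure day
```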
Kim, Joo Hyoung; Cha, Jung Yul; Hwang, Chung Ju
2012-12-01
This in vitro study was undertaken to evaluate the physical, chemical, and biological properties of commercially available metal orthodontic brackets in South Korea, because national standards for these products are lacking. Four bracket brands were tested for dimensional accuracy (manufacturing errors in angulation and torque), cytotoxicity, composition, elution, and corrosion: Archist (Daeseung Medical), Victory (3M Unitek), Kosaka (Tomy), and Confidence (Shinye Odontology Materials). The tested brackets showed no significant differences in manufacturing errors in angulation, but Confidence brackets showed a significant difference in manufacturing errors in torque. None of the brackets were cytotoxic to mouse fibroblasts. The metal ion components did not show a regular increasing or decreasing trend of elution over time, but the volume of the total eluted metal ions increased: Archist brackets had the maximal Cr elution, and Confidence brackets appeared to have the largest volume of total eluted metal ions because of excessive Ni elution. Confidence brackets showed the lowest corrosion resistance during potentiodynamic polarization. The results of this study could potentially be applied in establishing national standards for metal orthodontic brackets and in evaluating commercially available products.
Thermally stratified squeezed flow between two vertical Riga plates with no slip conditions
NASA Astrophysics Data System (ADS)
Farooq, M.; Mansoor, Zahira; Ijaz Khan, M.; Hayat, T.; Anjum, A.; Mir, N. A.
2018-04-01
This paper demonstrates the mixed convective squeezing nanomaterials flow between two vertical plates, one of which is a Riga plate embedded in a thermally stratified medium subject to convective boundary conditions. Heat transfer features are elaborated with viscous dissipation. Single-wall and multi-wall carbon nanotubes are taken as nanoparticles to form a homogeneous solution in water. A non-linear system of differential equations is obtained for the considered flow by using suitable transformations. Convergence analysis for velocity and temperature is computed and discussed explicitly through BVPh 2.0. Residual errors are also computed by BVPh 2.0 for the dimensionless governing equations. We introduce two undetermined convergence control parameters, i.e. ℏ_θ and ℏ_f, to compute the lowest total error. The average residual error for the k-th-order approximation is given in a table. The effects of different flow variables on temperature and velocity distributions are sketched graphically and discussed comprehensively. Furthermore, the skin friction coefficient and the Nusselt number are also analyzed through graphical data.
Extended calculations of energies, transition rates, and lifetimes for F-like Kr XXVIII
NASA Astrophysics Data System (ADS)
Zhang, C. Y.; Si, R.; Yao, K.; Gu, M. F.; Wang, K.; Chen, C. Y.
2018-02-01
The excitation energies, lifetimes, wavelengths and E1, E2, M1 and M2 transition rates for the lowest 389 levels of the 2l⁷, 2l⁶3l′, 2l⁶4l′, and 2l⁶5l′ configurations from second-order many-body perturbation theory (MBPT) calculations, and the results for the lowest 200 states of the 2l⁷, 2l⁶3l′, and 2l⁶4l′ configurations from multi-configuration Dirac-Hartree-Fock (MCDHF) calculations in F-like Kr XXVIII are presented in this work. The relative differences between our two sets of level energies are mostly within 0.005% for the lowest 200 levels. Comparisons are made with experimental and other available theoretical results to assess the reliability and accuracy of the present calculations. We believe them to be the most complete and accurate results for Kr XXVIII at present.
2007 Washington State collision data summary
DOT National Transportation Integrated Search
2008-10-13
In 2007, Washington's traffic fatality rate decreased to 1.00 per 100 million vehicle miles traveled (VMT), the lowest fatality rate in state history. Washington State is 27% below the 2007 U.S. fatality rate of 1.37 fatalities per 100 million VMT. Bet...
2009 Washington State collision data summary
DOT National Transportation Integrated Search
2010-07-02
In 2009, Washington's traffic fatality rate decreased to 0.87 per 100 million vehicle miles traveled (VMT), the lowest fatality rate in state history. Washington State is 33% below the 2009 U.S. preliminary fatality rate of 1.16 fatalities per 10...
2008 Washington State collision data summary
DOT National Transportation Integrated Search
2009-07-28
In 2008, Washington's traffic fatality rate decreased to 0.94 per 100 million vehicle miles traveled (VMT), the lowest fatality rate in state history. Washington State is 35% below the 2008 U.S. fatality rate of 1.27 fatalities per 100 million VMT. Bet...
Sheehan, B; Lee, Y; Rodriguez, M; Tiase, V; Schnall, R
2012-01-01
Mobile health (mHealth) is a growing field aimed at developing mobile information and communication technologies for healthcare. Adolescents are known for their ubiquitous use of mobile technologies in everyday life. However, the use of mHealth tools among adolescents is not well described. We examined the usability of four commonly used mobile devices (an iPhone, an Android with touchscreen keyboard, an Android with built-in keyboard, and an iPad) for accessing healthcare information among a group of urban-dwelling adolescents. Guided by the FITT (Fit between Individuals, Task, and Technology) framework, a think-aloud protocol was combined with a questionnaire to describe usability on three dimensions: 1) task-technology fit; 2) individual-technology fit; and 3) individual-task fit. For task-technology fit, we compared the efficiency and effectiveness of each of the devices tested and found that the iPhone was the most usable: it had the fewest errors and prompts and the lowest mean overall task time. For individual-task fit, we compared efficiency and learnability measures by website tasks and found no statistically significant effect on task steps, task time and number of errors. Following our comparison of success rates by website tasks, we compared the difference between two mobile applications which were used for diet tracking and found a statistically significant effect on task steps, task time and number of errors. For individual-technology fit, interface quality was significantly different across devices, indicating that this is an important factor to be considered in developing future mobile devices. All of our users were able to complete all of the tasks; however, the time needed to complete the tasks was significantly different by mobile device and mHealth application. Future design of mobile technology and mHealth applications should place particular importance on interface quality.
Robinson, Charlotte S; Sharp, Patrick
2012-05-01
Blood glucose monitoring systems (BGMS) are used in the hospital environment to manage blood glucose levels in patients at the bedside. The International Organization for Standardization (ISO) 15197:2003 standard is currently used by regulatory bodies as a minimum requirement for the performance of BGMS, specific to self-testing. There are calls for the tightening of accuracy requirements and implementation of a standard specifically for point-of-care (POC) BGMS. The accuracy of six commonly used BGMS was assessed in a clinical setting, with 108 patients' finger stick capillary samples. Using the accuracy criteria from the existing standard and a range of tightened accuracy criteria, system performance was compared. Other contributors to system performance have been measured, including hematocrit sensitivity and meter error rates encountered in the clinical setting. Five of the six BGMS evaluated met current accuracy criteria within the ISO 15197 standard. Only the Optium Xceed system had >95% of all readings within a tightened criteria of ±12.5% from the reference at glucose levels ≥72 mg/dl (4 mmol/liter) and ±9 mg/dl (0.5 mmol/liter) at glucose levels <72 mg/dl (4 mmol/liter). The Nova StatStrip Xpress had the greatest number of error messages observed; Optium Xceed the least. OneTouch Ultra2, Nova StatStrip Xpress, Accu-Chek Performa, and Contour TS products were all significantly influenced by blood hematocrit levels. From evidence obtained during this clinical evaluation, the Optium Xceed system is most likely to meet future anticipated accuracy standards for POC BGMS. In this clinical study, the results demonstrated the Optium Xceed product to have the highest level of accuracy, to have the lowest occurrence of error messages, and to be least influenced by blood hematocrit levels. © 2012 Diabetes Technology Society.
MO-D-BRB-02: The Radiological Physics Center's Quality Audit Program: Where Can We Improve?
Followill, D; Lowenstein, J; Molineu, A; Alvarez, P; Aguirre, J; Kry, S; Summers, P; Ibbott, G
2012-06-01
To analyze the findings of the Radiological Physics Center's (RPC) QA audits of institutions participating in NCI sponsored clinical trials. The RPC has developed an extensive Quality Assurance (QA) program over the past 44 years. This program includes on-site dosimetry reviews where measurements on therapy machines are made, records are reviewed and personnel are interviewed. The program's remote audit tools include mailed dosimeters (OSLD/TLD) to verify output calibration, comparison of dosimetry data with RPC 'standard' data, evaluation of benchmark and patient calculations to verify the treatment planning algorithms, review of institutions' QA procedures and records, and use of anthropomorphic phantoms to verify tumor dose delivery. The RPC endeavors to assist institutions in finding the origins of any detected discrepancies, and to resolve them. The proportion of institutions receiving dosimetry recommendations, about ninety percent, has remained level for the past 5 years. The most frequent recommendations were for not performing TG-40 QA tests, wedge factors, small field size output factors and off-axis factors. Since TG-51 was published, the number of beam calibrations audited during visits with ion chambers that met the RPC's ±3% criterion decreased initially but has risen to pre-TG-51 levels. The OSLD/TLD program shows that only ∼3% of the beams are outside our ±5% criteria, but these discrepancies are distributed over 12-20% of the institutions. The percentage of institutions with ≥1 beam outside the RPC's criteria is approximately the same whether OSLD/TLD or ion chambers were used. The first time passing rate for the anthropomorphic phantoms is increasing with time. The prostate phantom has the highest pass rate while the spine phantom has the lowest. Numerous dosimetry errors continue to be discovered by the RPC's QA program and the RPC continues to play an important role in helping institutions resolve these errors. This work was supported by PHS grants CA10953 and CA081647 awarded by NCI. © 2012 American Association of Physicists in Medicine.
Publication bias was not a good reason to discourage trials with low power.
Borm, George F; den Heijer, Martin; Zielhuis, Gerhard A
2009-01-01
The objective was to investigate whether it is justified to discourage trials with less than 80% power. Trials with low power are unlikely to produce conclusive results, but their findings can be used by pooling them in a meta-analysis. However, such an analysis may be biased, because trials with low power are likely to have a nonsignificant result and are less likely to be published than trials with a statistically significant outcome. We simulated several series of studies with varying degrees of publication bias and then calculated the "real" one-sided type I error and the bias of meta-analyses with a "nominal" error rate (significance level) of 2.5%. In single trials, in which heterogeneity was set at zero, low, and high, the error rates were 2.3%, 4.7%, and 16.5%, respectively. In multiple trials with 80%-90% power and a publication rate of 90% when the results were nonsignificant, the error rates could be as high as 5.1%. When the power was 50% and the publication rate of non-significant results was 60%, the error rates did not exceed 5.3%, whereas the bias was at most 15% of the difference used in the power calculation. The impact of publication bias does not warrant the exclusion of trials with 50% power.
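A hedged sketch of the simulation logic: generate series of two-arm trials under the null, publish significant trials always and non-significant ones with a given probability, pool the published trials with a fixed-effect meta-analysis, and record how often the one-sided pooled test rejects at the nominal 2.5% level. Sample sizes, publication probabilities, and the number of replications below are illustrative, not the article's settings.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

def realized_type1_error(n_per_arm, trials_per_meta, pub_prob_nonsig,
                         n_meta=2000, alpha=0.025):
    """Fraction of fixed-effect meta-analyses of published null trials that
    reject H0 one-sided at `alpha`."""
    rejections = 0
    for _ in range(n_meta):
        effects, ses = [], []
        for _ in range(trials_per_meta):
            a = rng.normal(0, 1, n_per_arm)              # both arms under H0
            b = rng.normal(0, 1, n_per_arm)
            diff = a.mean() - b.mean()
            se = np.sqrt(a.var(ddof=1) / n_per_arm + b.var(ddof=1) / n_per_arm)
            p_two_sided = 2 * (1 - stats.norm.cdf(abs(diff) / se))
            published = p_two_sided < 0.05 or rng.random() < pub_prob_nonsig
            if published:
                effects.append(diff)
                ses.append(se)
        if not effects:
            continue
        w = 1 / np.square(ses)                            # fixed-effect weights
        pooled = np.sum(w * np.array(effects)) / np.sum(w)
        z = pooled / np.sqrt(1 / np.sum(w))
        rejections += z > stats.norm.ppf(1 - alpha)
    return rejections / n_meta

# Low-powered trials (small n) with only 60% of non-significant results published.
print(realized_type1_error(n_per_arm=20, trials_per_meta=10, pub_prob_nonsig=0.6))
```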
Preemptive spatial competition under a reproduction-mortality constraint.
Allstadt, Andrew; Caraco, Thomas; Korniss, G
2009-06-21
Spatially structured ecological interactions can shape selection pressures experienced by a population's different phenotypes. We study spatial competition between phenotypes subject to antagonistic pleiotropy between reproductive effort and mortality rate. The constraint we invoke reflects a previous life-history analysis; the implied dependence indicates that although propagation and mortality rates both vary, their ratio is fixed. We develop a stochastic invasion approximation predicting that phenotypes with higher propagation rates will invade an empty environment (no biotic resistance) faster, despite their higher mortality rate. However, once population density approaches demographic equilibrium, phenotypes with lower mortality are favored, despite their lower propagation rate. We conducted a set of pairwise invasion analyses by simulating an individual-based model of preemptive competition. In each case, the phenotype with the lowest mortality rate and (via antagonistic pleiotropy) the lowest propagation rate qualified as evolutionarily stable among strategies simulated. This result, for a fixed propagation to mortality ratio, suggests that a selective response to spatial competition can extend the time scale of the population's dynamics, which in turn decelerates phenotypic evolution.
Medication Errors in Vietnamese Hospitals: Prevalence, Potential Outcome and Associated Factors
Nguyen, Huong-Thao; Nguyen, Tuan-Dung; van den Heuvel, Edwin R.; Haaijer-Ruskamp, Flora M.; Taxis, Katja
2015-01-01
Background Evidence from developed countries showed that medication errors are common and harmful. Little is known about medication errors in resource-restricted settings, including Vietnam. Objectives To determine the prevalence and potential clinical outcome of medication preparation and administration errors, and to identify factors associated with errors. Methods This was a prospective study conducted on six wards in two urban public hospitals in Vietnam. Data of preparation and administration errors of oral and intravenous medications was collected by direct observation, 12 hours per day on 7 consecutive days, on each ward. Multivariable logistic regression was applied to identify factors contributing to errors. Results In total, 2060 out of 5271 doses had at least one error. The error rate was 39.1% (95% confidence interval 37.8%- 40.4%). Experts judged potential clinical outcomes as minor, moderate, and severe in 72 (1.4%), 1806 (34.2%) and 182 (3.5%) doses. Factors associated with errors were drug characteristics (administration route, complexity of preparation, drug class; all p values < 0.001), and administration time (drug round, p = 0.023; day of the week, p = 0.024). Several interactions between these factors were also significant. Nurse experience was not significant. Higher error rates were observed for intravenous medications involving complex preparation procedures and for anti-infective drugs. Slightly lower medication error rates were observed during afternoon rounds compared to other rounds. Conclusions Potentially clinically relevant errors occurred in more than a third of all medications in this large study conducted in a resource-restricted setting. Educational interventions, focusing on intravenous medications with complex preparation procedure, particularly antibiotics, are likely to improve patient safety. PMID:26383873
Gray Wolf (Canis lupus) dyad monthly association rates by demographic group.
Barber-Meyer, Shannon M.; Mech, L. David
2015-01-01
Preliminary data from GPS-collared wolves (Canis lupus) in the Superior National Forest of northeastern Minnesota indicated wolves had low association rates with packmates during summer. However, aerial-telemetry locations of very high frequency (VHF)-radioed wolves in this same area showed high associations among packmates during winter. We analyzed aerial-telemetry-location data from VHF-collared wolves in several packs (n=18 dyads) in this same area from 1994-2012 by month, and found lowest association rates occurred during June. While other studies have found low association among wolf packmates during summer, information on differences in association patterns depending on the wolf associates’ demographics is sparse. During May-July, association rates were greatest for breeding pairs, followed by sibling dyads, and lowest for parent– offspring dyads. Our findings improve our understanding of how individual wolf relationships affect monthly association rates. We highlight some important remaining questions regarding wolf packmate associations.
Effects of the liver volume and donor steatosis on errors in the estimated standard liver volume.
Siriwardana, Rohan Chaminda; Chan, See Ching; Chok, Kenneth Siu Ho; Lo, Chung Mau; Fan, Sheung Tat
2011-12-01
An accurate assessment of donor and recipient liver volumes is essential in living donor liver transplantation. Many liver donors are affected by mild to moderate steatosis, and steatotic livers are known to have larger volumes. This study analyzes errors in liver volume estimation by commonly used formulas and the effects of donor steatosis on these errors. Three hundred twenty-five Asian donors who underwent right lobe donor hepatectomy were the subjects of this study. The percentage differences between the liver volumes from computed tomography (CT) and the liver volumes estimated with each formula (ie, the error percentages) were calculated. Five popular formulas were tested. The degrees of steatosis were categorized as follows: no steatosis [n = 178 (54.8%)], ≤ 10% steatosis [n = 128 (39.4%)], and >10% to 20% steatosis [n = 19 (5.8%)]. The median errors ranged from 0.6% (7 mL) to 24.6% (360 mL). The lowest was seen with the locally derived formula. All the formulas showed a significant association between the error percentage and the CT liver volume (P < 0.001). Overestimation was seen with smaller liver volumes, whereas underestimation was seen with larger volumes. The locally derived formula was most accurate when the liver volume was 1001 to 1250 mL. A multivariate analysis showed that the estimation error was dependent on the liver volume (P = 0.001) and the anthropometric measurement that was used in the calculation (P < 0.001) rather than steatosis (P ≥ 0.07). In conclusion, all the formulas have a similar pattern of error that is possibly related to the anthropometric measurement. Clinicians should be aware of this pattern of error and the liver volume with which their formula is most accurate. Copyright © 2011 American Association for the Study of Liver Diseases.
[Validation of a method for notifying and monitoring medication errors in pediatrics].
Guerrero-Aznar, M D; Jiménez-Mesa, E; Cotrina-Luque, J; Villalba-Moreno, A; Cumplido-Corbacho, R; Fernández-Fernández, L
2014-12-01
To analyze the impact of a multidisciplinary and decentralized safety committee in the pediatric management unit, and the joint implementation of a computing network application for reporting medication errors, monitoring the follow-up of the errors, and an analysis of the improvements introduced. An observational, descriptive, cross-sectional, pre-post intervention study was performed. An analysis was made of medication errors reported to the central safety committee in the twelve months prior to introduction, and those reported to the decentralized safety committee in the management unit in the nine months after implementation, using the computer application, and the strategies generated by the analysis of reported errors. Number of reported errors/10,000 days of stay, number of reported errors with harm per 10,000 days of stay, types of error, categories based on severity, stage of the process, and groups involved in the notification of medication errors. Reported medication errors increased 4.6-fold, from 7.6 notifications of medication errors per 10,000 days of stay in the pre-intervention period to 36 in the post-intervention period (rate ratio 0.21; 95% CI: 0.11-0.39; P<.001). The rate of reported medication errors with harm or requiring monitoring per 10,000 days of stay was virtually unchanged from one period to the other (rate ratio 0.77; 95% CI: 0.31-1.91; P>.05). The notification of potential errors or errors without harm per 10,000 days of stay increased 17.4-fold (rate ratio 0.005; 95% CI: 0.001-0.026; P<.001). The increase in medication errors notified in the post-intervention period is a reflection of an increase in the motivation of health professionals to report errors through this new method. Copyright © 2013 Asociación Española de Pediatría. Published by Elsevier Espana. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gopan, O; Kalet, A; Smith, W
2016-06-15
Purpose: A standard tool for ensuring the quality of radiation therapy treatments is the initial physics plan review. However, little is known about its performance in practice. The goal of this study is to measure the effectiveness of physics plan review by introducing simulated errors into "mock" treatment plans and measuring the performance of plan review by physicists. Methods: We generated six mock treatment plans containing multiple errors. These errors were based on incident learning system data both within the department and internationally (SAFRON). These errors were scored for severity and frequency. Those with the highest scores were included in the simulations (13 errors total). Observer bias was minimized using a multiple co-correlated distractor approach. Eight physicists reviewed these plans for errors, with each physicist reviewing, on average, 3/6 plans. The confidence interval for the proportion of errors detected was computed using the Wilson score interval. Results: Simulated errors were detected in 65% of reviews [51–75%] (95% confidence interval [CI] in brackets). The following error scenarios had the highest detection rates: incorrect isocenter in DRRs/CBCT (91% [73–98%]) and a planned dose different from the prescribed dose (100% [61–100%]). Errors with low detection rates involved incorrect field parameters in the record and verify system (38% [18–61%]) and incorrect isocenter localization in the planning system (29% [8–64%]). Though pre-treatment QA failure was reliably identified (100%), less than 20% of participants reported the error that caused the failure. Conclusion: This is one of the first quantitative studies of error detection. Although physics plan review is a key safety measure and can identify some errors with high fidelity, other errors are more challenging to detect. This data will guide future work on standardization and automation. Creating new checks or improving existing ones (i.e., via automation) will help in detecting those errors with low detection rates.
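For reference, a small helper implementing the Wilson score interval used above for the detection proportions; the example counts (13 detections in 20 reviews) are hypothetical.

```python
from math import sqrt

def wilson_interval(successes: int, n: int, z: float = 1.96):
    """95% Wilson score interval for a binomial proportion."""
    if n == 0:
        return (0.0, 1.0)
    p_hat = successes / n
    denom = 1 + z**2 / n
    centre = (p_hat + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return (max(0.0, centre - half), min(1.0, centre + half))

# Hypothetical example: an error detected in 13 of 20 reviews.
print([round(x, 2) for x in wilson_interval(13, 20)])   # roughly (0.43, 0.82)
```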
Graf, Alexandra C; Bauer, Peter
2011-06-30
We calculate the maximum type 1 error rate of the pre-planned conventional fixed sample size test for comparing the means of independent normal distributions (with common known variance) which can be yielded when sample size and allocation rate to the treatment arms can be modified in an interim analysis. Thereby it is assumed that the experimenter fully exploits knowledge of the unblinded interim estimates of the treatment effects in order to maximize the conditional type 1 error rate. The 'worst-case' strategies require knowledge of the unknown common treatment effect under the null hypothesis. Although this is a rather hypothetical scenario it may be approached in practice when using a standard control treatment for which precise estimates are available from historical data. The maximum inflation of the type 1 error rate is substantially larger than derived by Proschan and Hunsberger (Biometrics 1995; 51:1315-1324) for design modifications applying balanced samples before and after the interim analysis. Corresponding upper limits for the maximum type 1 error rate are calculated for a number of situations arising from practical considerations (e.g. restricting the maximum sample size, not allowing sample size to decrease, allowing only increase in the sample size in the experimental treatment). The application is discussed for a motivating example. Copyright © 2011 John Wiley & Sons, Ltd.
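A simplified numerical illustration of the mechanism studied above, in a one-sample z-test setting rather than the two-arm comparison with flexible allocation: after the interim look, the second-stage sample size is chosen, for each interim outcome, to maximize the conditional probability that the conventional fixed-sample test rejects, and integrating that maximum over the interim statistic under the null gives a worst-case type I error above the nominal level. All settings (n1, the n2 grid, alpha) are illustrative.

```python
import numpy as np
from scipy import stats

# Simplified one-sample setting under H0: after n1 observations with interim
# z-statistic z1, a second-stage size n2 is chosen from a grid; the final test
# naively uses the conventional fixed-sample critical value.
alpha, n1 = 0.025, 50
z_crit = stats.norm.ppf(1 - alpha)
n2_grid = np.arange(1, 501)                       # allowed second-stage sizes

def conditional_type1(z1, n2):
    # P(final z > z_crit | interim) under H0 for the naive pooled statistic.
    thresh = (z_crit * np.sqrt(n1 + n2) - z1 * np.sqrt(n1)) / np.sqrt(n2)
    return 1 - stats.norm.cdf(thresh)

# Worst case: for each interim outcome take the n2 maximizing the conditional
# error, then average over the distribution of z1 under H0.
z1_grid = np.linspace(-6, 6, 2001)
dz = z1_grid[1] - z1_grid[0]
worst = np.array([conditional_type1(z1, n2_grid).max() for z1 in z1_grid])
max_type1 = np.sum(worst * stats.norm.pdf(z1_grid)) * dz
print(f"nominal alpha = {alpha}, worst-case type I error approx {max_type1:.3f}")
```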
Classification of echolocation clicks from odontocetes in the Southern California Bight.
Roch, Marie A; Klinck, Holger; Baumann-Pickering, Simone; Mellinger, David K; Qui, Simon; Soldevilla, Melissa S; Hildebrand, John A
2011-01-01
This study presents a system for classifying echolocation clicks of six species of odontocetes in the Southern California Bight: Visually confirmed bottlenose dolphins, short- and long-beaked common dolphins, Pacific white-sided dolphins, Risso's dolphins, and presumed Cuvier's beaked whales. Echolocation clicks are represented by cepstral feature vectors that are classified by Gaussian mixture models. A randomized cross-validation experiment is designed to provide conditions similar to those found in a field-deployed system. To prevent matched conditions from inappropriately lowering the error rate, echolocation clicks associated with a single sighting are never split across the training and test data. Sightings are randomly permuted before assignment to folds in the experiment. This allows different combinations of the training and test data to be used while keeping data from each sighting entirely in the training or test set. The system achieves a mean error rate of 22% across 100 randomized three-fold cross-validation experiments. Four of the six species had mean error rates lower than the overall mean, with the presumed Cuvier's beaked whale clicks showing the best performance (<2% error rate). Long-beaked common and bottlenose dolphins proved the most difficult to classify, with mean error rates of 53% and 68%, respectively.
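The evaluation design described above can be sketched as follows: fit one Gaussian mixture model per class on (here, synthetic stand-ins for) cepstral feature vectors, classify test clicks by the highest per-class log-likelihood, and build cross-validation folds with a group constraint so that clicks from a single sighting never straddle the training and test sets. Feature dimensions, mixture sizes, and the simulated sighting effect are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(6)

# Synthetic stand-in for cepstral feature vectors: 3 "species", 12 sightings
# each, 200 clicks per sighting, 10-dimensional features.
n_species, n_sightings, clicks, dim = 3, 12, 200, 10
X, y, groups = [], [], []
for s in range(n_species):
    class_mean = rng.normal(0, 2, dim)
    for g in range(n_sightings):
        sighting_shift = rng.normal(0, 0.5, dim)   # sighting-level correlation
        X.append(class_mean + sighting_shift + rng.normal(0, 1, (clicks, dim)))
        y.append(np.full(clicks, s))
        groups.append(np.full(clicks, s * n_sightings + g))
X, y, groups = np.vstack(X), np.concatenate(y), np.concatenate(groups)

# Grouped 3-fold cross-validation: clicks from one sighting never appear in
# both the training and the test split, mimicking field-deployment conditions.
errors = []
for train_idx, test_idx in GroupKFold(n_splits=3).split(X, y, groups):
    models = {c: GaussianMixture(n_components=4, random_state=0)
                 .fit(X[train_idx][y[train_idx] == c])
              for c in np.unique(y)}
    scores = np.column_stack([models[c].score_samples(X[test_idx])
                              for c in sorted(models)])
    pred = np.array(sorted(models))[scores.argmax(axis=1)]
    errors.append(np.mean(pred != y[test_idx]))
print(f"mean grouped-CV error rate: {np.mean(errors):.3f}")
```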
Comparison of disagreement and error rates for three types of interdepartmental consultations.
Renshaw, Andrew A; Gould, Edwin W
2005-12-01
Previous studies have documented a relatively high rate of disagreement for interdepartmental consultations, but follow-up is limited. We reviewed the results of 3 types of interdepartmental consultations in our hospital during a 2-year period, including 328 incoming, 928 pathologist-generated outgoing, and 227 patient- or clinician-generated outgoing consults. The disagreement rate was significantly higher for incoming consults (10.7%) than for outgoing pathologist-generated consults (5.9%) (P = .06). Disagreement rates for outgoing patient- or clinician-generated consults were not significantly different from either other type (7.9%). Additional consultation, biopsy, or testing follow-up was available for 19 (54%) of 35, 14 (25%) of 55, and 6 (33%) of 18 incoming, outgoing pathologist-generated, and outgoing patient- or clinician-generated consults with disagreements, respectively; the percentage of errors varied widely (15/19 [79%], 8/14 [57%], and 2/6 [33%], respectively), but differences were not significant (P >.05 for each). Review of the individual errors revealed specific diagnostic areas in which improvement in performance might be made. Disagreement rates for interdepartmental consultation ranged from 5.9% to 10.7%, but only 33% to 79% represented errors. Additional consultation, tissue, and testing results can aid in distinguishing disagreements from errors.
Error-rate prediction for programmable circuits: methodology, tools and studied cases
NASA Astrophysics Data System (ADS)
Velazco, Raoul
2013-05-01
This work presents an approach to predict the error rates due to Single Event Upsets (SEU) occurring in programmable circuits as a consequence of the impact of energetic particles present in the environment in which the circuits operate. For a chosen application, the error rate is predicted by combining the results obtained from radiation ground testing with the results of fault injection campaigns performed off-beam, during which huge numbers of SEUs are injected during the execution of the studied application. The goal of this strategy is to obtain accurate results about different applications' error rates without using particle accelerator facilities, thus significantly reducing the cost of the sensitivity evaluation. As a case study, this methodology was applied to a complex processor, the PowerPC 7448, executing a program issued from a real space application, and to a crypto-processor application implemented in an SRAM-based FPGA and accepted to be embedded in the payload of a scientific satellite of NASA. The accuracy of predicted error rates was confirmed by comparing, for the same circuit and application, predictions with measurements issued from radiation ground testing performed at the Cyclone cyclotron of the Heavy Ion Facility (HIF) at Louvain-la-Neuve (Belgium).
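A hedged sketch of the arithmetic that combines the two experiment types described above: a static SEU cross-section from beam testing, the fraction of injected SEUs that produced an application-level error in the off-beam fault-injection campaign, and the particle flux of the target environment. All numeric values are placeholders, not measurements for the PowerPC 7448 or the FPGA design.

```python
# Hedged sketch of the error-rate prediction arithmetic. All numbers are
# placeholders, not measured values for any real device or environment.

def predicted_application_error_rate(static_cross_section_cm2: float,
                                     injected_faults: int,
                                     faults_causing_app_error: int,
                                     particle_flux_cm2_s: float) -> float:
    """Application error rate (errors/s) estimated as the device SEU rate
    (cross-section x flux) scaled by the fraction of injected SEUs that led
    to an application-level error."""
    app_sensitivity = faults_causing_app_error / injected_faults
    device_seu_rate = static_cross_section_cm2 * particle_flux_cm2_s
    return device_seu_rate * app_sensitivity

rate = predicted_application_error_rate(
    static_cross_section_cm2=1e-9,    # placeholder static cross-section
    injected_faults=100_000,          # SEUs injected off-beam
    faults_causing_app_error=2_300,   # of which this many corrupted the output
    particle_flux_cm2_s=5e-3)         # placeholder particle flux
print(f"predicted error rate: {rate:.3e} errors/s "
      f"({rate * 86400:.3e} errors/day)")
```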
NASA Astrophysics Data System (ADS)
Zhang, Kuiyuan; Umehara, Shigehiro; Yamaguchi, Junki; Furuta, Jun; Kobayashi, Kazutoshi
2016-08-01
This paper analyzes how body bias and BOX region thickness affect soft error rates in 65-nm SOTB (Silicon on Thin BOX) and 28-nm UTBB (Ultra Thin Body and BOX) FD-SOI processes. Soft errors are induced by alpha-particle and neutron irradiation and the results are then analyzed by Monte Carlo based simulation using PHITS-TCAD. The alpha-particle-induced single event upset (SEU) cross-section and neutron-induced soft error rate (SER) obtained by simulation are consistent with measurement results. We clarify that SERs decrease as BOX thickness increases for SOTB, while SERs in UTBB are independent of BOX thickness. We also find that SOTB develops a higher tolerance to soft errors when reverse body bias is applied, while UTBB becomes more susceptible.
Effectiveness of Toyota process redesign in reducing thyroid gland fine-needle aspiration error.
Raab, Stephen S; Grzybicki, Dana Marie; Sudilovsky, Daniel; Balassanian, Ronald; Janosky, Janine E; Vrbin, Colleen M
2006-10-01
Our objective was to determine whether the Toyota Production System process redesign resulted in diagnostic error reduction for patients who underwent cytologic evaluation of thyroid nodules. In this longitudinal, nonconcurrent cohort study, we compared the diagnostic error frequency of a thyroid aspiration service before and after implementation of error reduction initiatives consisting of adoption of a standardized diagnostic terminology scheme and an immediate interpretation service. A total of 2,424 patients underwent aspiration. Following terminology standardization, the false-negative rate decreased from 41.8% to 19.1% (P = .006), the specimen nondiagnostic rate increased from 5.8% to 19.8% (P < .001), and the sensitivity increased from 70.2% to 90.6% (P < .001). Cases with an immediate interpretation had a lower noninterpretable specimen rate than those without immediate interpretation (P < .001). Toyota process change led to significantly fewer diagnostic errors for patients who underwent thyroid fine-needle aspiration.
Continuous quantum error correction for non-Markovian decoherence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oreshkov, Ognyan; Brun, Todd A.; Communication Sciences Institute, University of Southern California, Los Angeles, California 90089
2007-08-15
We study the effect of continuous quantum error correction in the case where each qubit in a codeword is subject to a general Hamiltonian interaction with an independent bath. We first consider the scheme in the case of a trivial single-qubit code, which provides useful insights into the workings of continuous error correction and the difference between Markovian and non-Markovian decoherence. We then study the model of a bit-flip code with each qubit coupled to an independent bath qubit and subject to continuous correction, and find its solution. We show that for sufficiently large error-correction rates, the encoded state approximately follows an evolution of the type of a single decohering qubit, but with an effectively decreased coupling constant. The factor by which the coupling constant is decreased scales quadratically with the error-correction rate. This is compared to the case of Markovian noise, where the decoherence rate is effectively decreased by a factor which scales only linearly with the rate of error correction. The quadratic enhancement depends on the existence of a Zeno regime in the Hamiltonian evolution which is absent in purely Markovian dynamics. We analyze the range of validity of this result and identify two relevant time scales. Finally, we extend the result to more general codes and argue that the performance of continuous error correction will exhibit the same qualitative characteristics.
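Schematically, and in our own notation (not the paper's), the scaling reported above can be summarized as follows, where λ and Γ denote the bare qubit-bath coupling and decoherence rate and κ the error-correction rate; c1 and c2 are unspecified constants absorbing the dimensions.

```latex
% Schematic restatement of the scaling in the abstract (symbols are ours).
\[
  \lambda_{\mathrm{eff}} \;\sim\; \frac{\lambda}{c_{1}\,\kappa^{2}}
  \quad\text{(non-Markovian, Zeno regime)},
  \qquad
  \Gamma_{\mathrm{eff}} \;\sim\; \frac{\Gamma}{c_{2}\,\kappa}
  \quad\text{(Markovian)}.
\]
```

That is, the effective decoherence is suppressed by a factor growing quadratically in κ in the non-Markovian (Zeno) case, but only linearly in the Markovian case.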
Olson, Stephen M; Hussaini, Mohammad; Lewis, James S
2011-05-01
Frozen section analysis is an essential tool for assessing margins intra-operatively to ensure complete resection. Many institutions evaluate surgical defect edge tissue provided by the surgeon after the main lesion has been removed. With the increasing use of transoral laser microsurgery, this method is becoming even more prevalent. We sought to evaluate error rates at our large academic institution and to see if sampling errors could be reduced by the simple method change of taking an additional third section on these specimens. All head and neck tumor resection cases from January 2005 through August 2008 with margins evaluated by frozen section were identified by database search. These cases were analyzed by cutting two levels during frozen section and a third permanent section later. All resection cases from August 2008 through July 2009 were identified as well. These were analyzed by cutting three levels during frozen section (the third a 'much deeper' level) and a fourth permanent section later. Error rates for both of these periods were determined. Errors were separated into sampling and interpretation types. There were 4976 total frozen section specimens from 848 patients. The overall error rate was 2.4% for all frozen sections where just two levels were evaluated and was 2.5% when three levels were evaluated (P=0.67). The sampling error rate was 1.6% for two-level sectioning and 1.2% for three-level sectioning (P=0.42). However, when considering only the frozen section cases where tumor was ultimately identified (either at the time of frozen section or on permanent sections), the sampling error rate for two-level sectioning was 15.3% versus 7.4% for three-level sectioning. This difference was statistically significant (P=0.006). Cutting a single additional 'deeper' level at the time of frozen section identifies more tumor-bearing specimens and may reduce the number of sampling errors.
47 CFR 73.1942 - Candidate rates.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 4 2011-10-01 2011-10-01 false Candidate rates. 73.1942 Section 73.1942... Rules Applicable to All Broadcast Stations § 73.1942 Candidate rates. (a) Charges for use of stations... the lowest unit charge by such a method must include in that calculation all rates for all...
47 CFR 73.1942 - Candidate rates.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 47 Telecommunication 4 2014-10-01 2014-10-01 false Candidate rates. 73.1942 Section 73.1942... Rules Applicable to All Broadcast Stations § 73.1942 Candidate rates. (a) Charges for use of stations... the lowest unit charge by such a method must include in that calculation all rates for all...
47 CFR 73.1942 - Candidate rates.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 47 Telecommunication 4 2013-10-01 2013-10-01 false Candidate rates. 73.1942 Section 73.1942... Rules Applicable to All Broadcast Stations § 73.1942 Candidate rates. (a) Charges for use of stations... the lowest unit charge by such a method must include in that calculation all rates for all...
47 CFR 73.1942 - Candidate rates.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 47 Telecommunication 4 2012-10-01 2012-10-01 false Candidate rates. 73.1942 Section 73.1942... Rules Applicable to All Broadcast Stations § 73.1942 Candidate rates. (a) Charges for use of stations... the lowest unit charge by such a method must include in that calculation all rates for all...
Global Vertical Rates from VLBI
NASA Technical Reports Server (NTRS)
Ma, Chopo; MacMillan, D.; Petrov, L.
2003-01-01
The analysis of global VLBI observations provides vertical rates for 50 sites with formal errors less than 2 mm/yr and median formal error of 0.4 mm/yr. These sites are largely in Europe and North America with a few others in east Asia, Australia, South America and South Africa. The time interval of observations is up to 20 years. The error of the velocity reference frame is less than 0.5 mm/yr, but results from several sites with observations from more than one antenna suggest that the estimated vertical rates may have temporal variations or non-geophysical components. Comparisons with GPS rates and corresponding site position time series will be discussed.
NASA Astrophysics Data System (ADS)
Mai, Tam V.-T.; Duong, Minh v.; Huynh, Lam K.
2018-03-01
This short communication discusses the role of the newly-found lowest-lying structures of the transition states (∼3.0 kcal/mol lower than those previously reported by Ren et al. (2018)), together with the inclusion of the hindered internal rotation correction, in obtaining reliable kinetic data for the hydrogen abstraction from sevoflurane by the OH radical. With the new structures and the more rigorous kinetic model, the calculated rate constants agree much better with the experimental data than those suggested by Ren and coworkers.
Component Analysis of Errors on PERSIANN Precipitation Estimates over Urmia Lake Basin, IRAN
NASA Astrophysics Data System (ADS)
Ghajarnia, N.; Daneshkar Arasteh, P.; Liaghat, A. M.; Araghinejad, S.
2016-12-01
In this study, the PERSIANN daily dataset is evaluated from 2000 to 2011 over 69 pixels covering the Urmia Lake basin in northwest Iran. Different analytical approaches and indexes are used to examine PERSIANN's precision in detecting rainfall and estimating rainfall rate. The residuals are decomposed into Hit, Miss and False Alarm (FA) estimation biases, while a continuous decomposition into systematic and random error components is also analyzed seasonally and categorically. A new interpretation of estimation accuracy, termed "reliability of PERSIANN estimations", is introduced, and the behavior of existing categorical/statistical measures and error components is analyzed seasonally over different rainfall-rate categories. This study yields new insights into the nature of PERSIANN errors over the Urmia Lake basin, a semi-arid region in the Middle East, including the following:
- The contingency table indexes indicate better detection precision during spring and fall.
- A relatively constant level of error is generally observed among the different categories. The range of precipitation estimates across rainfall-rate categories is nearly invariant, a sign of systematic error.
- Reliability of PERSIANN estimations is low in the different categories, which is mostly associated with a high level of FA error. However, as the precipitation rate increases, PERSIANN's ability and precision in rainfall detection also increase.
- The systematic and random error decomposition in this area shows that PERSIANN has more difficulty modeling the system and pattern of rainfall than bias arising from rainfall uncertainties. The level of systematic error also increases considerably for heavier rainfall.
It is also important to note that PERSIANN's error characteristics vary by season because of the conditions and rainfall patterns of each season, which shows the need for a seasonally differentiated approach when calibrating this product. Overall, we believe that the error-component analyses performed in this study can substantially help further local studies on the post-calibration and bias reduction of PERSIANN estimations.
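The sketch below illustrates the kind of residual decomposition referred to above (it is not the authors' code): the total satellite-minus-reference bias is split into hit, miss and false-alarm components using an assumed rain/no-rain threshold.

```python
# Illustrative decomposition of satellite rainfall bias into hit, miss and
# false-alarm components (threshold value is an assumption).
import numpy as np

def decompose_bias(sat, ref, threshold=0.1):
    """sat, ref : paired satellite and reference (gauge/radar) rain rates."""
    sat, ref = np.asarray(sat, float), np.asarray(ref, float)
    hits    = (sat >= threshold) & (ref >= threshold)
    misses  = (sat <  threshold) & (ref >= threshold)
    false_a = (sat >= threshold) & (ref <  threshold)

    hit_bias   = np.sum(sat[hits] - ref[hits])   # error when both detect rain
    miss_bias  = -np.sum(ref[misses])            # rain missed by the satellite
    fa_bias    = np.sum(sat[false_a])            # rain reported but not observed
    total_bias = np.sum(sat - ref)
    return hit_bias, miss_bias, fa_bias, total_bias

h, m, f, t = decompose_bias([0.0, 2.5, 5.0, 0.3], [1.0, 2.0, 4.0, 0.0])
print(h, m, f, t)   # h + m + f approximately recovers t (exactly, if sub-threshold values are zero)
```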
Feedback on prescribing errors to junior doctors: exploring views, problems and preferred methods.
Bertels, Jeroen; Almoudaris, Alex M; Cortoos, Pieter-Jan; Jacklin, Ann; Franklin, Bryony Dean
2013-06-01
Prescribing errors are common in hospital inpatients. However, the literature suggests that doctors are often unaware of their errors as they are not always informed of them. It has been suggested that providing more feedback to prescribers may reduce subsequent error rates. Only a few studies have investigated the views of prescribers towards receiving such feedback, or the views of hospital pharmacists as potential feedback providers. Our aim was to explore the views of junior doctors and hospital pharmacists regarding feedback on individual doctors' prescribing errors. Objectives were to determine how feedback was currently provided and any associated problems, to explore views on other approaches to feedback, and to make recommendations for designing suitable feedback systems. The study was set in a large London NHS hospital trust. To explore views on current and possible feedback mechanisms, self-administered questionnaires were given to all junior doctors and pharmacists, combining both 5-point Likert scale statements and open-ended questions. The main outcome measures were agreement scores for statements regarding perceived prescribing error rates, opinions on feedback, barriers to feedback, and preferences for future practice. Response rates were 49% (37/75) for junior doctors and 57% (57/100) for pharmacists. In general, doctors did not feel threatened by feedback on their prescribing errors. They felt that feedback currently provided was constructive but often irregular and insufficient. Most pharmacists provided feedback in various ways; however, some did not or were inconsistent. They were willing to provide more feedback, but did not feel it was always effective or feasible due to barriers such as communication problems and time constraints. Both professional groups preferred individual feedback with additional regular generic feedback on common or serious errors. Feedback on prescribing errors was valued and acceptable to both professional groups. From the results, several suggested methods of providing feedback on prescribing errors emerged. Addressing barriers such as the identification of individual prescribers would facilitate feedback in practice. Research investigating whether or not feedback reduces the subsequent error rate is now needed.
Induced seasonal reproductive performance in two breeds of sheep.
de Nicolo, G; Morris, S T; Kenyon, P R; Morel, P C H; Parkinson, T J
2008-01-30
The objective of the present experiment was to determine whether failure to become pregnant through out-of-season reproductive management regimes is attributable to a failure to stimulate normal corpora lutea (CL) production. Romney ewes and East Friesian Composite (EF) ewes were mated in summer, autumn, winter, and early and late spring after administration of intravaginal progesterone inserts (plus eCG in spring and summer) in a factorial (two breeds x five mating periods) experimental design. Oestrus rate was determined from mating data, and the numbers of CL were determined by laparoscopy 9 days after ram introduction. Conception rate, pregnancy rate and litter size were determined by transabdominal ultrasonography 62 days after ram introduction. Proportions of ewes displaying oestrus did not differ significantly (75-100%) between seasons or breeds, except for the Romney ewes in summer (67%; P<0.05). Numbers of CL on Day 9 for EF ewes were lowest in autumn (1.5) and winter (1.5), and highest in late spring (2.1; P<0.05). In contrast, Romney ewes had the lowest CL count in summer (1.3), and the highest in autumn (1.8) and winter (1.6; P<0.05). Only in summer was there a difference between breeds (EF 1.8; Romney 1.3; P<0.01). The proportion of ewes that failed to conceive despite having one or more CL present was highest in late spring (EF 41%, Romney 43%), and lowest in autumn (EF 9%, Romney 4%) and winter (EF 14%, Romney 4%; P<0.05). Conception and pregnancy rates followed similar patterns, with values for autumn (EF 91%, 91%; Romney 96%, 96%) and June (EF 86%, 82%; Romney 91%, 83%) being significantly (P<0.05) higher than in early (EF 50%, 40%; Romney 54%, 50%) and late spring (EF 44%, 36%; Romney 42%, 36%). Pregnancy rate in summer was higher for EF ewes (60%) than for Romney ewes (39%), but conception rates were not statistically different (EF 68%; Romney 60%). Numbers of foetuses identified at scanning were highest in autumn (1.5) and lowest in late spring (0.5; P<0.001). Litter size in pregnant EF ewes was highest in early spring and lowest in winter (1.8 versus 1.2; P<0.01), but for pregnant Romney ewes was highest in winter and lowest in early spring (1.9 versus 1.3; P<0.001). It was concluded that seasonal differences in the ability of ewes to conceive are not the consequence of failure to display oestrus or to ovulate, but probably are a result of failure of fertilisation or the establishment of pregnancy.
Image Data Compression Having Minimum Perceptual Error
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
1997-01-01
A method is presented for performing color or grayscale image compression that eliminates redundant and invisible image components. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques, resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.
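A minimal sketch of DCT-domain quantization in the spirit of the abstract is given below; the quantization-matrix values are placeholders, not the luminance/contrast-masking matrix of the method itself.

```python
# Quantize one 8x8 block in the DCT domain: larger matrix entries discard
# more information in the corresponding (less visible) frequencies.
import numpy as np
from scipy.fft import dct, idct

def dct2(block):  return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')
def idct2(block): return idct(idct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def quantize_block(block, qmatrix):
    coeffs = dct2(block.astype(float))
    quantized = np.round(coeffs / qmatrix)      # lossy step that sets the bit rate
    return idct2(quantized * qmatrix)           # reconstruction for comparison

block = np.random.default_rng(0).integers(0, 256, (8, 8))
# Placeholder matrix: coarser quantization at higher spatial frequencies.
qmatrix = 1.0 + 4.0 * np.add.outer(np.arange(8), np.arange(8))
print(np.abs(quantize_block(block, qmatrix) - block).max())
```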
[Diagnostic Errors in Medicine].
Buser, Claudia; Bankova, Andriyana
2015-12-09
The recognition of diagnostic errors in everyday practice can help improve patient safety. The most common diagnostic errors are the cognitive errors, followed by system-related errors and no fault errors. The cognitive errors often result from mental shortcuts, known as heuristics. The rate of cognitive errors can be reduced by a better understanding of heuristics and the use of checklists. The autopsy as a retrospective quality assessment of clinical diagnosis has a crucial role in learning from diagnostic errors. Diagnostic errors occur more often in primary care in comparison to hospital settings. On the other hand, the inpatient errors are more severe than the outpatient errors.
On the robustness of bucket brigade quantum RAM
NASA Astrophysics Data System (ADS)
Arunachalam, Srinivasan; Gheorghiu, Vlad; Jochym-O'Connor, Tomas; Mosca, Michele; Varshinee Srinivasan, Priyaa
2015-12-01
We study the robustness of the bucket brigade quantum random access memory model introduced by Giovannetti et al (2008 Phys. Rev. Lett. 100 160501). Due to a result of Regev and Schiff (ICALP ’08 733), we show that for a class of error models the error rate per gate in the bucket brigade quantum memory has to be of order o(2^{-n/2}) (where N = 2^n is the size of the memory) whenever the memory is used as an oracle for the quantum searching problem. We conjecture that this is the case for any realistic error model that will be encountered in practice, and that for algorithms with super-polynomially many oracle queries the error rate must be super-polynomially small, which further motivates the need for quantum error correction. By contrast, for algorithms such as matrix inversion (Harrow et al 2009 Phys. Rev. Lett. 103 150502) or quantum machine learning (Rebentrost et al 2014 Phys. Rev. Lett. 113 130503) that only require a polynomial number of queries, the error rate only needs to be polynomially small and quantum error correction may not be required. We introduce a circuit model for the quantum bucket brigade architecture and argue that quantum error correction for the circuit causes the quantum bucket brigade architecture to lose its primary advantage of a small number of ‘active’ gates, since all components have to be actively error corrected.
Error-Related Psychophysiology and Negative Affect
ERIC Educational Resources Information Center
Hajcak, G.; McDonald, N.; Simons, R.F.
2004-01-01
The error-related negativity (ERN/Ne) and error positivity (Pe) have been associated with error detection and response monitoring. More recently, heart rate (HR) and skin conductance (SC) have also been shown to be sensitive to the internal detection of errors. An enhanced ERN has consistently been observed in anxious subjects and there is some…
Effects of Correlated Errors on the Analysis of Space Geodetic Data
NASA Technical Reports Server (NTRS)
Romero-Wolf, Andres; Jacobs, C. S.
2011-01-01
As thermal errors are reduced, instrumental and troposphere-correlated errors will become increasingly important. Work in progress shows that troposphere covariance error models improve data analysis results. We expect to see stronger effects with higher data rates. Temperature modeling of delay errors may further reduce temporal correlations in the data.
2008 traffic crash facts annual report
DOT National Transportation Integrated Search
2008-01-01
In 2008, the death rate on Nebraska roadways was 1.1 persons killed per 100 million vehicle miles traveled. This is the lowest rate ever recorded. The death rate in Nebraska, from 1961 to 2008, is represented in Figure 1. Even though the death r...
78 FR 42764 - Cumberland System
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-17
... INFORMATION CONTACT: Virgil G. Hobbs III, Assistant Administrator, Finance and Marketing, Southeastern Power... responsible for marketing may exceed market rates. Southeastern will work with the U.S. Army Corps of... rates for power will be the lowest possible rates consistent with sound business principles. Response 4...
Confidence Intervals for Error Rates Observed in Coded Communications Systems
NASA Astrophysics Data System (ADS)
Hamkins, J.
2015-05-01
We present methods to compute confidence intervals for the codeword error rate (CWER) and bit error rate (BER) of a coded communications link. We review several methods to compute exact and approximate confidence intervals for the CWER, and specifically consider the situation in which the true CWER is so low that only a handful, if any, codeword errors are able to be simulated. In doing so, we answer the question of how long an error-free simulation must be run in order to certify that a given CWER requirement is met with a given level of confidence, and discuss the bias introduced by aborting a simulation after observing the first codeword error. Next, we turn to the lesser studied problem of determining confidence intervals for the BER of coded systems. Since bit errors in systems that use coding or higher-order modulation do not occur independently, blind application of a method that assumes independence leads to inappropriately narrow confidence intervals. We present a new method to compute the confidence interval properly, using the first and second sample moments of the number of bit errors per codeword. This is the first method we know of to compute a confidence interval for the BER of a coded or higher-order modulation system.
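Two of the calculations discussed above can be illustrated with standard formulas (a hedged sketch, not necessarily the paper's exact method): a Clopper-Pearson upper confidence bound on the CWER, and the number of error-free codeword simulations needed to certify a CWER requirement.

```python
# Exact (Clopper-Pearson) one-sided upper bound on the CWER, and the
# run length an error-free simulation must reach to certify a requirement.
import math
from scipy.stats import beta

def cwer_upper_bound(errors, trials, confidence=0.95):
    """One-sided exact upper confidence bound on the codeword error rate."""
    if errors >= trials:
        return 1.0
    return beta.ppf(confidence, errors + 1, trials - errors)

def error_free_trials_needed(cwer_requirement, confidence=0.95):
    """Codewords to simulate with zero observed errors so the upper bound
    falls below the requirement: n >= ln(1 - conf) / ln(1 - p_req)."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - cwer_requirement))

print(cwer_upper_bound(0, 10_000))        # about 3.0e-4
print(error_free_trials_needed(1e-5))     # about 3.0e5 codewords
```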
ADEPT, a dynamic next generation sequencing data error-detection program with trimming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Shihai; Lo, Chien-Chi; Li, Po-E
Illumina is the most widely used next generation sequencing technology and produces millions of short reads that contain errors. These sequencing errors constitute a major problem in applications such as de novo genome assembly, metagenomics analysis and single nucleotide polymorphism discovery. In this study, we present ADEPT, a dynamic error detection method based on the quality scores of each nucleotide and its neighboring nucleotides, together with their positions within the read, comparing these to the position-specific quality score distribution of all bases within the sequencing run. This method greatly improves upon other available methods in terms of the true positive rate of error discovery without affecting the false positive rate, particularly within the middle of reads. We conclude that ADEPT is the only tool to date that dynamically assesses errors within reads by comparing position-specific and neighboring base quality scores with the distribution of quality scores for the dataset being analyzed. The result is a method that is less prone to position-dependent under-prediction, which is one of the most prominent issues in error prediction. The outcome is that ADEPT improves upon prior efforts in identifying true errors, primarily within the middle of reads, while reducing the false positive rate.
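The toy sketch below illustrates the general idea described in the abstract (it is not ADEPT's actual algorithm): a base is flagged as a likely error when its quality score, combined with its neighbors', falls far below the run-wide quality distribution for that read position. The window size and z-score cutoff are arbitrary choices.

```python
# Toy quality-score based error flagging (illustrative only).
import numpy as np

def flag_suspect_bases(read_quals, position_mean, position_std,
                       window=1, z_cut=-2.5):
    """read_quals        : quality scores of one read
    position_mean/std : per-position statistics over the whole run
    Returns indices of bases whose windowed z-score is below z_cut."""
    q = np.asarray(read_quals, float)
    flagged = []
    for i in range(len(q)):
        lo, hi = max(0, i - window), min(len(q), i + window + 1)
        local = q[lo:hi].mean()                          # base plus neighbors
        z = (local - position_mean[i]) / max(position_std[i], 1e-6)
        if z < z_cut:
            flagged.append(i)
    return flagged

pos_mean = np.full(10, 35.0)
pos_std  = np.full(10, 3.0)
print(flag_suspect_bases([36, 35, 34, 12, 11, 33, 36, 35, 34, 36],
                         pos_mean, pos_std))
```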
Refractive errors in children and adolescents in Bucaramanga (Colombia).
Galvis, Virgilio; Tello, Alejandro; Otero, Johanna; Serrano, Andrés A; Gómez, Luz María; Castellanos, Yuly
2017-01-01
The aim of this study was to establish the frequency of refractive errors in children and adolescents aged between 8 and 17 years, living in the metropolitan area of Bucaramanga (Colombia). This study was a secondary analysis of two descriptive cross-sectional studies that applied sociodemographic surveys and assessed visual acuity and refraction. Ametropias were classified as myopic errors, hyperopic errors, and mixed astigmatism. Eyes were considered emmetropic if none of these classifications were made. The data were collated using free software and analyzed with STATA/IC 11.2. One thousand two hundred twenty-eight individuals were included in this study. Girls showed a higher rate of ametropia than boys. Hyperopic refractive errors were present in 23.1% of the subjects, and myopic errors in 11.2%. Only 0.2% of the eyes had high myopia (≤-6.00 D). Mixed astigmatism and anisometropia were uncommon, and myopia frequency increased with age. Keratometric readings were significantly steeper in myopic than in hyperopic eyes. The overall frequency of refractive errors we found (36.7%) is moderate compared to global data. The rates and parameters statistically differed by sex and age groups. Our findings are useful for establishing refractive error rate benchmarks in low-middle-income countries and as a baseline for following their variation by sociodemographic factors.
Weaver, Amy L; Stutzman, Sonja E; Supnet, Charlene; Olson, DaiWai M
2016-03-01
The emergency department (ED) is demanding and high risk. Sleep quantity has been hypothesized to affect patient care. This study investigated the hypothesis that fatigue and impaired mentation, due to sleep disturbance and shortened overall sleeping hours, would lead to increased nursing errors. This is a prospective observational study of 30 ED nurses using a self-administered survey and sleep architecture measured by wrist actigraphy as predictors of self-reported error rates. An actigraphy device was worn prior to working a 12-hour shift and nurses completed the Pittsburgh Sleep Quality Index (PSQI). Error rates were reported on a visual analog scale at the end of a 12-hour shift. The PSQI responses indicated that 73.3% of subjects had poor sleep quality. Lower sleep quality measured by actigraphy (hours asleep/hours in bed) was associated with higher self-perceived minor errors. Sleep quantity (total hours slept) was not associated with minor, moderate, or severe errors. Our study found that ED nurses' sleep quality, immediately prior to working a 12-hour shift, is more predictive of error than sleep quantity. These results present evidence that a "good night's sleep" prior to working a nursing shift in the ED is beneficial for reducing minor errors. Copyright © 2016 Elsevier Ltd. All rights reserved.
Advancing the research agenda for diagnostic error reduction.
Zwaan, Laura; Schiff, Gordon D; Singh, Hardeep
2013-10-01
Diagnostic errors remain an underemphasised and understudied area of patient safety research. We briefly summarise the methods that have been used to conduct research on epidemiology, contributing factors and interventions related to diagnostic error and outline directions for future research. Research methods that have studied epidemiology of diagnostic error provide some estimate on diagnostic error rates. However, there appears to be a large variability in the reported rates due to the heterogeneity of definitions and study methods used. Thus, future methods should focus on obtaining more precise estimates in different settings of care. This would lay the foundation for measuring error rates over time to evaluate improvements. Research methods have studied contributing factors for diagnostic error in both naturalistic and experimental settings. Both approaches have revealed important and complementary information. Newer conceptual models from outside healthcare are needed to advance the depth and rigour of analysis of systems and cognitive insights of causes of error. While the literature has suggested many potentially fruitful interventions for reducing diagnostic errors, most have not been systematically evaluated and/or widely implemented in practice. Research is needed to study promising intervention areas such as enhanced patient involvement in diagnosis, improving diagnosis through the use of electronic tools and identification and reduction of specific diagnostic process 'pitfalls' (eg, failure to conduct appropriate diagnostic evaluation of a breast lump after a 'normal' mammogram). The last decade of research on diagnostic error has made promising steps and laid a foundation for more rigorous methods to advance the field.
Practical scheme to share a secret key through a quantum channel with a 27.6% bit error rate
NASA Astrophysics Data System (ADS)
Chau, H. F.
2002-12-01
A secret key shared through quantum key distribution between two cooperative players is secure against any eavesdropping attack allowed by the laws of physics. Yet, such a key can be established only when the quantum channel error rate due to eavesdropping or imperfect apparatus is low. Here, a practical quantum key distribution scheme making use of an adaptive privacy amplification procedure with two-way classical communication is reported. Then, it is proven that the scheme generates a secret key whenever the bit error rate of the quantum channel is less than 0.5 − 0.1√5 ≈ 27.6%, thereby making it the most error-resistant scheme known to date.
Comparison of a Virtual Older Driver Assessment with an On-Road Driving Test.
Eramudugolla, Ranmalee; Price, Jasmine; Chopra, Sidhant; Li, Xiaolan; Anstey, Kaarin J
2016-12-01
To design a low-cost simulator-based driving assessment for older adults and to compare its validity with that of an on-road driving assessment and other measures of older driver risk. Cross-sectional observational study. Canberra, Australia. Older adult drivers (N = 47; aged 65-88, mean age 75.2). Error rate on a simulated drive with environment and scoring procedure matched to those of an on-road test. Other measures included participant age, simulator sickness severity, neuropsychological measures, and driver screening measures. Outcome variables included occupational therapist (OT)-rated on-road errors, on-road safety rating, and safety category. Participants' error rate on the simulated drive was significantly correlated with their OT-rated driving safety (correlation coefficient (r) = -0.398, P = .006), even after adjustment for age and simulator sickness (P = .009). The simulator error rate was a significant predictor of categorization as unsafe on the road (P = .02, sensitivity 69.2%, specificity 100%), with 13 (27%) drivers assessed as unsafe. Simulator error was also associated with other older driver safety screening measures such as useful field of view (r = 0.341, P = .02), DriveSafe (r = -0.455, P < .01), and visual motion sensitivity (r = 0.368, P = .01) but was not associated with memory (delayed word recall) or global cognition (Mini-Mental State Examination). Drivers made twice as many errors on the simulated assessment as during the on-road assessment (P < .001), with significant differences in the rate and type of errors between the two mediums. A low-cost simulator-based assessment is valid as a screening instrument for identifying at-risk older drivers but not as an alternative to on-road evaluation when accurate data on competence or pattern of impairment is required for licensing decisions and training programs. © 2016, Copyright the Authors Journal compilation © 2016, The American Geriatrics Society.
People in sub-Saharan Africa rate their health and health care among the lowest in the world.
Deaton, Angus S; Tortora, Robert
2015-03-01
The health of people in sub-Saharan Africa is a major global concern. However, data are weak, and little is known about how people in the region perceive their health or their health care. We used data from the Gallup World Poll in 2012 to document sub-Saharan Africans' perceived health status, their satisfaction with health care, their contact with medical professionals, and the priority they attach to health care. In comparison to other regions of the world, sub-Saharan Africa has the lowest ratings for well-being and the lowest satisfaction with health care. It also has the second-lowest perception of personal health, after only the former Soviet Union and its Eastern European satellites. HIV prevalence is positively correlated with perceived improvements in health care in countries with high prevalence. This is consistent with an improvement in at least some health care services as a result of the largely aid-funded rollout of antiretroviral treatment. Even so, sub-Saharan Africans do not prioritize health care as a matter of policy, although donors are increasingly shifting their aid efforts in the region toward health. Project HOPE—The People-to-People Health Foundation, Inc.
Report of the 1988 2-D Intercomparison Workshop, chapter 3
NASA Technical Reports Server (NTRS)
Jackman, Charles H.; Brasseur, Guy; Soloman, Susan; Guthrie, Paul D.; Garcia, Rolando; Yung, Yuk L.; Gray, Lesley J.; Tung, K. K.; Ko, Malcolm K. W.; Isaken, Ivar
1989-01-01
Several factors contribute to the errors encountered. With the exception of the line-by-line model, all of the models employ simplifying assumptions that place fundamental limits on their accuracy and range of validity. For example, all 2-D modeling groups use the diffusivity factor approximation. This approximation produces little error in tropospheric H2O and CO2 cooling rates, but can produce significant errors in CO2 and O3 cooling rates at the stratopause. All models suffer from fundamental uncertainties in shapes and strengths of spectral lines. Thermal flux algorithms being used in 2-D tracer transport models produce cooling rates that differ by as much as 40 percent for the same input model atmosphere. Disagreements of this magnitude are important since the thermal cooling rates must be subtracted from the almost-equal solar heating rates to derive the net radiative heating rates and the 2-D model diabatic circulation. For much of the annual cycle, the net radiative heating rates are comparable in magnitude to the cooling rate differences described. Many of the models underestimate the cooling rates in the middle and lower stratosphere. The consequences of these errors for the net heating rates and the diabatic circulation will depend on their meridional structure, which was not tested here. Other models underestimate the cooling near 1 mbar. Such errors pose potential problems for future interactive ozone assessment studies, since they could produce artificially-high temperatures and increased O3 destruction at these levels. These concerns suggest that a great deal of work is needed to improve the performance of thermal cooling rate algorithms used in the 2-D tracer transport models.
Allan, Darcey M.; Lonigan, Christopher J.
2014-01-01
Although both the Continuous Performance Test (CPT) and behavior rating scales are used in both practice and research to assess inattentive and hyperactive/impulsive behaviors, the correlations between performance on the CPT and teachers' ratings are typically only small-to-moderate. This study examined trajectories of performance on a low target-frequency visual CPT in a sample of preschool children and how these trajectories were associated with teacher-ratings of problem behaviors (i.e., inattention, hyperactivity/impulsivity [H/I], and oppositional/defiant behavior). Participants included 399 preschool children (Mean age = 56 months; 49.4% female; 73.7% White/Caucasian). An ADHD-rating scale was completed by teachers, and the CPT was completed by the preschoolers. Results showed that children's performance across four temporal blocks on the CPT was not stable across the duration of the task, with error rates generally increasing from initial to later blocks. The predictive relations of teacher-rated problem behaviors to performance trajectories on the CPT were examined using growth curve models. Higher rates of teacher-reported inattention and H/I were uniquely associated with higher rates of initial omission errors and initial commission errors, respectively. Higher rates of teacher-reported overall problem behaviors were associated with increasing rates of omission but not commission errors during the CPT; however, the relation was not specific to one type of problem behavior. The results of this study indicate that the pattern of errors on the CPT in preschool samples is complex and may be determined by multiple behavioral factors. These findings have implications for the interpretation of CPT performance in young children. PMID:25419645
Allan, Darcey M; Lonigan, Christopher J
2015-06-01
Although both the continuous performance test (CPT) and behavior rating scales are used in both practice and research to assess inattentive and hyperactive/impulsive behaviors, the correlations between performance on the CPT and teachers' ratings are typically only small-to-moderate. This study examined trajectories of performance on a low target-frequency visual CPT in a sample of preschool children and how these trajectories were associated with teacher-ratings of problem behaviors (i.e., inattention, hyperactivity/impulsivity [H/I], and oppositional/defiant behavior). Participants included 399 preschool children (mean age = 56 months; 49.4% female; 73.7% White/Caucasian). An attention deficit/hyperactivity disorder (ADHD) rating scale was completed by teachers, and the CPT was completed by the preschoolers. Results showed that children's performance across 4 temporal blocks on the CPT was not stable across the duration of the task, with error rates generally increasing from initial to later blocks. The predictive relations of teacher-rated problem behaviors to performance trajectories on the CPT were examined using growth curve models. Higher rates of teacher-reported inattention and H/I were uniquely associated with higher rates of initial omission errors and initial commission errors, respectively. Higher rates of teacher-reported overall problem behaviors were associated with increasing rates of omission but not commission errors during the CPT; however, the relation was not specific to 1 type of problem behavior. The results of this study indicate that the pattern of errors on the CPT in preschool samples is complex and may be determined by multiple behavioral factors. These findings have implications for the interpretation of CPT performance in young children. (c) 2015 APA, all rights reserved).
Cochran, Gary L; Barrett, Ryan S; Horn, Susan D
2016-08-01
The role of pharmacist transcription, onsite pharmacist dispensing, use of automated dispensing cabinets (ADCs), nurse-nurse double checks, or barcode-assisted medication administration (BCMA) in reducing medication error rates in critical access hospitals (CAHs) was evaluated. Investigators used the practice-based evidence methodology to identify predictors of medication errors in 12 Nebraska CAHs. Detailed information about each medication administered was recorded through direct observation. Errors were identified by comparing the observed medication administered with the physician's order. Chi-square analysis and Fisher's exact test were used to measure differences between groups of medication-dispensing procedures. Nurses observed 6497 medications being administered to 1374 patients. The overall error rate was 1.2%. The transcription error rates for orders transcribed by an onsite pharmacist were slightly lower than for orders transcribed by a telepharmacy service (0.10% and 0.33%, respectively). Fewer dispensing errors occurred when medications were dispensed by an onsite pharmacist versus any other method of medication acquisition (0.10% versus 0.44%, p = 0.0085). The rates of dispensing errors for medications that were retrieved from a single-cell ADC (0.19%), a multicell ADC (0.45%), or a drug closet or general supply (0.77%) did not differ significantly. BCMA was associated with a higher proportion of dispensing and administration errors intercepted before reaching the patient (66.7%) compared with either manual double checks (10%) or no BCMA or double check (30.4%) of the medication before administration (p = 0.0167). Onsite pharmacist dispensing and BCMA were associated with fewer medication errors and are important components of a medication safety strategy in CAHs. Copyright © 2016 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
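The kind of group comparison reported above can be sketched as follows; the counts are invented for illustration, and this is not the authors' analysis code.

```python
# Compare dispensing error proportions between onsite-pharmacist dispensing
# and other acquisition methods with Fisher's exact test (hypothetical counts).
from scipy.stats import fisher_exact

# Each row is one dispensing method: [errors, non-errors]
table = [[3, 2997],      # hypothetical: 3 errors in 3000 doses (onsite pharmacist)
         [14, 3186]]     # hypothetical: 14 errors in 3200 doses (other methods)
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```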
Type I and Type II error concerns in fMRI research: re-balancing the scale
Cunningham, William A.
2009-01-01
Statistical thresholding (i.e. P-values) in fMRI research has become increasingly conservative over the past decade in an attempt to diminish Type I errors (i.e. false alarms) to a level traditionally allowed in behavioral science research. In this article, we examine the unintended negative consequences of this single-minded devotion to Type I errors: increased Type II errors (i.e. missing true effects), a bias toward studying large rather than small effects, a bias toward observing sensory and motor processes rather than complex cognitive and affective processes and deficient meta-analyses. Power analyses indicate that the reductions in acceptable P-values over time are producing dramatic increases in the Type II error rate. Moreover, the push for a mapwide false discovery rate (FDR) of 0.05 is based on the assumption that this is the FDR in most behavioral research; however, this is an inaccurate assessment of the conventions in actual behavioral research. We report simulations demonstrating that combined intensity and cluster size thresholds such as P < 0.005 with a 10 voxel extent produce a desirable balance between Types I and II error rates. This joint threshold produces high but acceptable Type II error rates and produces a FDR that is comparable to the effective FDR in typical behavioral science articles (while a 20 voxel extent threshold produces an actual FDR of 0.05 with relatively common imaging parameters). We recommend a greater focus on replication and meta-analysis rather than emphasizing single studies as the unit of analysis for establishing scientific truth. From this perspective, Type I errors are self-erasing because they will not replicate, thus allowing for more lenient thresholding to avoid Type II errors. PMID:20035017
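A minimal sketch of the joint intensity plus cluster-extent threshold discussed above (p < .005 with a 10-voxel extent) is given below; the statistical map is random noise purely for illustration, not fMRI data.

```python
# Keep voxels with p < p_cut that belong to clusters of at least
# min_voxels contiguous suprathreshold voxels.
import numpy as np
from scipy import ndimage

def cluster_threshold(p_map, p_cut=0.005, min_voxels=10):
    supra = p_map < p_cut
    labels, n_clusters = ndimage.label(supra)      # 3D connected components
    sizes = ndimage.sum(supra, labels, range(1, n_clusters + 1))
    keep = {i + 1 for i, s in enumerate(sizes) if s >= min_voxels}
    return np.isin(labels, list(keep))

p_map = np.random.default_rng(1).uniform(size=(40, 48, 40))
mask = cluster_threshold(p_map)
print(mask.sum(), "voxels survive the joint threshold")
```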
Accuracy assessment of high-rate GPS measurements for seismology
NASA Astrophysics Data System (ADS)
Elosegui, P.; Davis, J. L.; Ekström, G.
2007-12-01
Analysis of GPS measurements with a controlled laboratory system, built to simulate the ground motions caused by tectonic earthquakes and other transient geophysical signals such as glacial earthquakes, enables us to assess the technique of high-rate GPS. The root-mean-square (rms) position error of this system when undergoing realistic simulated seismic motions is 0.05 mm, with maximum position errors of 0.1 mm, thus providing "ground truth" GPS displacements. We have acquired an extensive set of high-rate GPS measurements while inducing seismic motions on a GPS antenna mounted on this system with a temporal spectrum similar to real seismic events. We found that, for a particular 15-min-long test event, the rms error of the 1-Hz GPS position estimates was 2.5 mm, with maximum position errors of 10 mm, and the error spectrum of the GPS estimates was approximately flicker noise. These results may however represent a best-case scenario since they were obtained over a short (approximately 10 m) baseline, thereby greatly mitigating baseline-dependent errors, and when the number and distribution of satellites on the sky was good. For example, we have determined that the rms error can increase by a factor of 2-3 as the GPS constellation changes throughout the day, with an average value of 3.5 mm for eight identical, hourly-spaced, consecutive test events. The rms error also increases with increasing baseline, as one would expect, with an average rms error of 9 mm for a baseline of approximately 1400 km. We will present an assessment of the accuracy of high-rate GPS based on these measurements, discuss the implications of this study for seismology, and describe new applications in glaciology.
Validating Microwave-Based Satellite Rain Rate Retrievals Over TRMM Ground Validation Sites
NASA Astrophysics Data System (ADS)
Fisher, B. L.; Wolff, D. B.
2008-12-01
Multi-channel, passive microwave instruments are commonly used today to probe the structure of rain systems and to estimate surface rainfall from space. Until the advent of meteorological satellites and the development of remote sensing techniques for measuring precipitation from space, there was no observational system capable of providing accurate estimates of surface precipitation on global scales. Since the early 1970s, microwave measurements from satellites have provided quantitative estimates of surface rainfall by observing the emission and scattering processes due to the existence of clouds and precipitation in the atmosphere. This study assesses the relative performance of microwave precipitation estimates from seven polar-orbiting satellites and the TRMM TMI using four years (2003-2006) of instantaneous radar rain estimates obtained from Tropical Rainfall Measuring Mission (TRMM) Ground Validation (GV) sites at Kwajalein, Republic of the Marshall Islands (KWAJ) and Melbourne, Florida (MELB). The seven polar orbiters include three different sensor types: SSM/I (F13, F14 and F15), AMSU-B (N15, N16 and N17), and AMSR-E. The TMI aboard the TRMM satellite flies in a sun-asynchronous orbit between 35 S and 35 N latitudes. The rain information from these satellites is combined and used to generate several multi-satellite rain products, namely the Goddard TRMM Multi-satellite Precipitation Analysis (TMPA), NOAA's CPC Morphing Technique (CMORPH) and Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks (PERSIANN). Instantaneous rain rates derived from each sensor were matched to the GV estimates in time and space at a resolution of 0.25 degrees. The study evaluates the measurement and error characteristics of the various satellite estimates through inter-comparisons with GV radar estimates. The GV rain observations provided an empirical ground-based reference for assessing the relative performance of each sensor and sensor class. Because the relative performance of the rain algorithms depends on the underlying surface terrain, the data for MELB were further stratified into ocean, land and coast categories using a 0.25-degree terrain mask. Relative to GV, AMSR-E and the TMI exhibited the highest correlation and skill over the full dynamic range of observed rain rates at both validation sites. The AMSU sensors, on the other hand, exhibited the lowest correlation and skill, though all sensors performed reasonably well compared to GV. The general tendency was for the microwave sensors to overestimate rain rates below 1 mm/hr, where the sampling was highest, and to underestimate the high rain rates above 10 mm/hr, where the sampling was lowest. Underestimation of the low rain rate regime is attributed to difficulties of detecting and measuring low rain rates, while overestimation over the oceans was attributed largely to saturation of the brightness temperatures at high rain rates. Overall biases depended on the relative differences in the total rainfall at the extremes and the performance of each sensor at the nominal rain rates.
The incidence and severity of errors in pharmacist-written discharge medication orders.
Onatade, Raliat; Sawieres, Sara; Veck, Alexandra; Smith, Lindsay; Gore, Shivani; Al-Azeib, Sumiah
2017-08-01
Background Errors in discharge prescriptions are problematic. When hospital pharmacists write discharge prescriptions, improvements are seen in the quality and efficiency of discharge. There is limited information on the incidence of errors in pharmacists' medication orders. Objective To investigate the extent and clinical significance of errors in pharmacist-written discharge medication orders. Setting 1000-bed teaching hospital in London, UK. Method Pharmacists in this London hospital routinely write discharge medication orders as part of the clinical pharmacy service. Convenient days, based on researcher availability, between October 2013 and January 2014 were selected. Pre-registration pharmacists reviewed all discharge medication orders written by pharmacists on these days and identified discrepancies between the medication history, inpatient chart, patient records and discharge summary. A senior clinical pharmacist confirmed the presence of an error. Each error was assigned a potential clinical significance rating (based on the NCCMERP scale) by a physician and an independent senior clinical pharmacist, working separately. Main outcome measure Incidence of errors in pharmacist-written discharge medication orders. Results 509 prescriptions, written by 51 pharmacists, containing 4258 discharge medication orders were assessed (8.4 orders per prescription). Ten prescriptions (2%) contained a total of ten erroneous orders (order error rate 0.2%). The pharmacist considered that one error had the potential to cause temporary harm (0.02% of all orders). The physician did not rate any of the errors with the potential to cause harm. Conclusion The incidence of errors in pharmacists' discharge medication orders was low. The quality, safety and policy implications of pharmacists routinely writing discharge medication orders should be further explored.
A fast radiative transfer method for the simulation of visible satellite imagery
NASA Astrophysics Data System (ADS)
Scheck, Leonhard; Frèrebeau, Pascal; Buras-Schnell, Robert; Mayer, Bernhard
2016-05-01
A computationally efficient radiative transfer method for the simulation of visible satellite images is presented. The top of atmosphere reflectance is approximated by a function depending on vertically integrated optical depths and effective particle sizes for water and ice clouds, the surface albedo, the sun and satellite zenith angles and the scattering angle. A look-up table (LUT) for this reflectance function is generated by means of the discrete ordinate method (DISORT). For a constant scattering angle the reflectance is a relatively smooth and symmetric function of the two zenith angles, which can be well approximated by the lowest-order terms of a 2D Fourier series. By storing only the lowest Fourier coefficients and adopting a non-equidistant grid for the scattering angle, the LUT is reduced to a size of 21 MB per satellite channel. The computation of the top of atmosphere reflectance requires only the calculation of the cloud parameters from the model state and the evaluation and interpolation of the reflectance function using the compressed LUT, and is thus orders of magnitude faster than DISORT. The accuracy of the method is tested by generating synthetic satellite images for the 0.6 μm and 0.8 μm channels of the SEVIRI instrument for operational COSMO-DE model forecasts from the German Weather Service (DWD) and comparing them to DISORT results. For a test period in June the root mean squared absolute reflectance error is about 10^-2 and the mean relative reflectance error is less than 2% for both channels. For scattering angles larger than 170° the rapid variation of reflectance with the particle size related to the backscatter glory reduces the accuracy and the errors increase by a factor of 3-4. Speed and accuracy of the new method are sufficient for operational data assimilation and high-resolution model verification applications.
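A simplified sketch of the compression idea described above follows: for a fixed scattering angle and fixed cloud parameters, the reflectance as a function of the two zenith angles is stored as a small block of low-order Fourier (cosine) coefficients and re-evaluated on demand. The coefficient values, truncation order and angle mapping here are illustrative assumptions, not the paper's actual table.

```python
# Evaluate a truncated 2D cosine series as a stand-in for the compressed LUT.
import numpy as np

def eval_reflectance(coeffs, sun_zenith, sat_zenith):
    """coeffs[k, l] multiplies cos(k * x_sun) * cos(l * x_sat), where the
    zenith angles (degrees, 0..90) are mapped onto [0, pi]."""
    x_sun = np.deg2rad(sun_zenith) * 2.0     # map 0..90 deg -> 0..pi
    x_sat = np.deg2rad(sat_zenith) * 2.0
    k = np.arange(coeffs.shape[0])
    l = np.arange(coeffs.shape[1])
    return np.cos(k * x_sun) @ coeffs @ np.cos(l * x_sat)

# Keeping only a small block of coefficients (toy values) instead of a dense
# angular grid is what shrinks the table to the lowest-order terms.
coeffs = np.array([[0.30, 0.05, 0.01, 0.00],
                   [0.04, 0.02, 0.00, 0.00],
                   [0.01, 0.00, 0.00, 0.00],
                   [0.00, 0.00, 0.00, 0.00]])
print(eval_reflectance(coeffs, sun_zenith=30.0, sat_zenith=45.0))
```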
Sequential Tests of Multiple Hypotheses Controlling Type I and II Familywise Error Rates
Bartroff, Jay; Song, Jinlin
2014-01-01
This paper addresses the following general scenario: A scientist wishes to perform a battery of experiments, each generating a sequential stream of data, to investigate some phenomenon. The scientist would like to control the overall error rate in order to draw statistically-valid conclusions from each experiment, while being as efficient as possible. The between-stream data may differ in distribution and dimension but also may be highly correlated, even duplicated exactly in some cases. Treating each experiment as a hypothesis test and adopting the familywise error rate (FWER) metric, we give a procedure that sequentially tests each hypothesis while controlling both the type I and II FWERs regardless of the between-stream correlation, and only requires arbitrary sequential test statistics that control the error rates for a given stream in isolation. The proposed procedure, which we call the sequential Holm procedure because of its inspiration from Holm’s (1979) seminal fixed-sample procedure, shows simultaneous savings in expected sample size and less conservative error control relative to fixed sample, sequential Bonferroni, and other recently proposed sequential procedures in a simulation study. PMID:25092948
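For reference, the sketch below implements the classical fixed-sample Holm (1979) step-down procedure that inspires the sequential version above; it controls the familywise error rate at level alpha for any dependence between the tests. It is not the sequential procedure of the paper itself.

```python
# Classical Holm step-down procedure (fixed-sample version).
def holm_rejections(p_values, alpha=0.05):
    """Return the indices of hypotheses rejected by Holm's procedure."""
    order = sorted(range(len(p_values)), key=lambda i: p_values[i])
    m = len(p_values)
    rejected = []
    for step, idx in enumerate(order):
        # Compare the (step+1)-th smallest p-value against alpha / (m - step).
        if p_values[idx] <= alpha / (m - step):
            rejected.append(idx)
        else:
            break   # once one test fails, all larger p-values are retained
    return rejected

print(holm_rejections([0.001, 0.04, 0.03, 0.20]))   # rejects only the first hypothesis
```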
An experiment in software reliability: Additional analyses using data from automated replications
NASA Technical Reports Server (NTRS)
Dunham, Janet R.; Lauterbach, Linda A.
1988-01-01
A study undertaken to collect software error data of laboratory quality for use in the development of credible methods for predicting the reliability of software used in life-critical applications is summarized. The software error data reported were acquired through automated repetitive-run testing of three independent implementations of a launch interceptor condition module of a radar tracking problem. The results are based on 100 test applications to accumulate a sufficient sample size for error rate estimation. The data collected is used to confirm the results of two Boeing studies reported in NASA-CR-165836, Software Reliability: Repetitive Run Experimentation and Modeling, and NASA-CR-172378, Software Reliability: Additional Investigations into Modeling With Replicated Experiments, respectively. That is, the results confirm the log-linear pattern of software error rates and reject the hypothesis of equal error rates per individual fault. This rejection casts doubt on the assumption that the program's failure rate is a constant multiple of the number of residual bugs, an assumption which underlies some of the current models of software reliability. The data also raises new questions concerning the phenomenon of interacting faults.
NASA Astrophysics Data System (ADS)
Belda, Santiago; Heinkelmann, Robert; Ferrándiz, José M.; Karbon, Maria; Nilsson, Tobias; Schuh, Harald
2017-10-01
Very Long Baseline Interferometry (VLBI) is the only space geodetic technique capable of measuring all the Earth orientation parameters (EOP) accurately and simultaneously. Modeling the Earth's rotational motion in space within the stringent consistency goals of the Global Geodetic Observing System (GGOS) makes VLBI observations essential for constraining the rotation theories. However, the inaccuracy of early VLBI data and the outdated products could cause non-compliance with these goals. In this paper, we perform a global VLBI analysis of sessions with different processing settings to determine a new set of empirical corrections to the precession offsets and rates, and to the amplitudes of a wide set of terms included in the IAU 2006/2000A precession-nutation theory. We discuss the results in terms of consistency, systematic errors, and physics of the Earth. We find that the largest improvements w.r.t. the values from IAU 2006/2000A precession-nutation theory are associated with the longest periods (e.g., 18.6-yr nutation). A statistical analysis of the residuals shows that the provided corrections attain an error reduction at the level of 15 μas. Additionally, including a Free Core Nutation (FCN) model into a priori Celestial Pole Offsets (CPOs) provides the lowest Weighted Root Mean Square (WRMS) of residuals. We show that the CPO estimates are quite insensitive to TRF choice, but slightly sensitive to the a priori EOP and the inclusion of different VLBI sessions. Finally, the remaining residuals reveal two apparent retrograde signals with periods of nearly 2069 and 1034 days.
Kim, Myoungsoo
2010-04-01
The purpose of this study was to examine the impact of strategies to promote reporting of errors on nurses' attitude to reporting errors, organizational culture related to patient safety, intention to report, and reporting rate in hospital nurses. A nonequivalent control group non-synchronized design was used for this study. The program was developed and then administered to the experimental group for 12 weeks. Data were analyzed using descriptive analysis, the χ²-test, t-test, and ANCOVA with the SPSS 12.0 program. After the intervention, the experimental group showed significantly higher scores for nurses' attitude to reporting errors (experimental: 20.73 vs control: 20.52, F=5.483, p=.021) and reporting rate (experimental: 3.40 vs control: 1.33, F=1998.083, p<.001). There was no significant difference in some categories of organizational culture or in intention to report. The study findings indicate that strategies that promote reporting of errors play an important role in producing positive attitudes to reporting errors and improving reporting behavior. Further advanced strategies for reporting errors that can lead to improved patient safety should be developed and applied in a broad range of hospitals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Proctor, Timothy; Rudinger, Kenneth; Young, Kevin
Randomized benchmarking (RB) is widely used to measure an error rate of a set of quantum gates, by performing random circuits that would do nothing if the gates were perfect. In the limit of no finite-sampling error, the exponential decay rate of the observable survival probabilities, versus circuit length, yields a single error metric r. For Clifford gates with arbitrary small errors described by process matrices, r was believed to reliably correspond to the mean, over all Clifford gates, of the average gate infidelity between the imperfect gates and their ideal counterparts. We show that this quantity is not a well-defined property of a physical gate set. It depends on the representations used for the imperfect and ideal gates, and the variant typically computed in the literature can differ from r by orders of magnitude. We present new theories of the RB decay that are accurate for all small errors describable by process matrices, and show that the RB decay curve is a simple exponential for all such errors. These theories allow explicit computation of the error rate that RB measures (r), but as far as we can tell it does not correspond to the infidelity of a physically allowed (completely positive) representation of the imperfect gates.
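For readers who want the standard RB fitting step in concrete form, the sketch below fits the usual exponential decay A·p^m + B to survival probabilities versus sequence length m and converts the decay parameter into the conventional error rate r = (d-1)(1-p)/d; the survival data are synthetic, and the paper's point is precisely that r need not equal the average gate infidelity of any particular representation of the gates.

# Standard RB fit: survival probability ~ A * p**m + B versus sequence length
# m; data below are synthetic placeholders, not measured survival probabilities.
import numpy as np
from scipy.optimize import curve_fit

def rb_decay(m, A, p, B):
    return A * p**m + B

lengths = np.array([1, 2, 4, 8, 16, 32, 64, 128])
survival = np.array([0.995, 0.990, 0.980, 0.961, 0.926, 0.862, 0.763, 0.638])

(A, p, B), _ = curve_fit(rb_decay, lengths, survival, p0=[0.5, 0.99, 0.5])
d = 2                                  # Hilbert-space dimension (one qubit)
r = (d - 1) * (1 - p) / d              # conventional RB error rate
print(f"decay parameter p = {p:.4f}, RB error rate r = {r:.2e}")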
The Relationship among Correct and Error Oral Reading Rates and Comprehension.
ERIC Educational Resources Information Center
Roberts, Michael; Smith, Deborah Deutsch
1980-01-01
Eight learning disabled boys (10 to 12 years old) who were seriously deficient in both their oral reading and comprehension performances participated in the study which investigated, through an applied behavior analysis model, the interrelationships of three reading variables--correct oral reading rates, error oral reading rates, and percentage of…
Patient safety culture perceptions in the college of dentistry
Al Sweleh, Fahad Saleh; Al Saedan, Abdullah Mohammed; Al Dayel, Omar Abdullah
2018-01-01
A positive safety culture is essential to patient safety because it improves quality of care. The aim of this study was to assess staff and student perceptions of the patient safety culture in the clinics of the College of Dentistry at King Saud University in Saudi Arabia. A cross-sectional study was conducted in the College of Dentistry at King Saud University in Saudi Arabia. It included 4th and 5th year students, interns, general practitioners, and dental assistants. The data were collected using a paper-based questionnaire, a modified version of the Agency for Healthcare Research and Quality Hospital Survey on Patient Safety Culture. Data were entered into SPSS Version 20, and a score for each safety culture dimension was calculated. The overall response rate was 72.8% (390/536). The Teamwork dimension had the highest average percent positive score (72.3%), while Staffing had the lowest (10%). Dental assistants showed high agreement on the Teamwork dimension (87.8%); the Supervisor/Manager Expectations and Actions Promoting Patient Safety dimension (66.9%); the Organizational Learning—Continuous Improvement dimension (79.1%); the Management Support for Patient Safety dimension (84.5%); the Feedback and Communication About Error dimension (58.3%); the Frequency of Events Reported dimension (54.0%); and the Teamwork Across Units dimension (73.2%). Most respondents reported no events (76.1-85.3%) in the past 12 months. The overall patient safety grade in the clinic was better than moderate. The Teamwork within Units and Organizational Learning—Continuous Improvement dimensions had the highest scores, while Staffing had the lowest. Dental assistants gave positive scores in most dimensions, while students gave slightly negative scores in most dimensions. PMID:29480849
NASA Technical Reports Server (NTRS)
Clinton, N. J. (Principal Investigator)
1980-01-01
Labeling errors made in the large area crop inventory experiment transition year estimates by Earth Observation Division image analysts are identified and quantified. The analysis was made from a subset of blind sites in six U.S. Great Plains states (Oklahoma, Kansas, Montana, Minnesota, North and South Dakota). The image interpretation was generally well done, resulting in a total omission error rate of 24 percent and a commission error rate of 4 percent. The largest amount of error was caused by factors beyond the control of the analysts, who were following the interpretation procedures. Odd signatures, the largest error-cause group, occurred mostly in areas of moisture abnormality. Multicrop labeling was tabulated, showing the distribution of labels for all crops.
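As a brief worked example of how omission and commission error rates of the kind quoted above are obtained, the sketch below computes them from labeling counts for a single crop class; the counts are hypothetical, not the blind-site data.

# Omission and commission error rates for one class, from hypothetical counts.
true_positives = 380    # pixels of the class labeled correctly
false_negatives = 120   # pixels of the class labeled as something else (omitted)
false_positives = 20    # pixels of other classes labeled as the class (committed)

omission_rate = false_negatives / (true_positives + false_negatives)
commission_rate = false_positives / (true_positives + false_positives)
print(f"omission error:   {omission_rate:.0%}")    # 24%
print(f"commission error: {commission_rate:.0%}")  # 5%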
Burger, Joanna
2005-02-01
Considerable attention has been devoted to the concerns and perceptions of people residing around contaminated facilities, both brownfields in urban areas and others located in remote and lightly populated areas. This paper examines the concerns of recreationists and sportsmen residing near the Department of Energy's (DOE) Brookhaven National Laboratory, in central Long Island, one of the most densely populated regions in the United States, where tourism is of prime importance. On an open-ended question, the greatest concern was pollution, followed by environmental health as a global concern, and human health as a concern for Brookhaven. Accidents/spills, loss of public health, and loss of ecological health were rated highest among a list of concerns, and change in property values was rated lowest. When asked to rank seven concerns, protecting human health was ranked the highest, and economic interests were ranked the lowest. For future land use at Brookhaven, recreational uses were rated the highest, while building houses and factories, and storage of nuclear material were rated the lowest. These data can be used by managers, decision and policy makers, and the general public to assess and manage local and regional environmental concerns and to consider future land uses for decommissioned lands, such as those at Brookhaven.
Eberth, Jan M.; Huber, John Charles; Rene, Antonio
2010-01-01
Background Breast cancer incidence and mortality have been increasing among American Indian and Alaska Native (AI/AN) women, and their survival rate is the lowest of all racial/ethnic groups. Nevertheless, knowledge of AI/AN women’s breast cancer screening practices and their correlates is limited. Methods Using the 2003 California Health Interview Survey, we 1) compared the breast cancer screening practices of AI/AN women to other groups and 2) explored the association of several factors known or thought to influence AI/AN women’s breast cancer screening practices. Findings Compared to other races, AI/AN women had the lowest rate of mammogram screening (ever and within the past 2 years). For clinical breast exam receipt, Asian women had the lowest rate, followed by AI/AN women. Factors associated with AI/AN women’s breast cancer screening practices included older age, having a high school diploma or some college education, receipt of a Pap test within the past 3 years, and having visited a doctor within the past year. Conclusions Significant differences in breast cancer screening practices were noted between races, with AI/AN women often having significantly lower rates. Integrating these epidemiological findings into effective policy and practice requires additional applied research initiatives. PMID:20211430
NASA Technical Reports Server (NTRS)
Noble, Viveca K.
1994-01-01
When data is transmitted through a noisy channel, errors are produced within the data, rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes the CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2(exp 8)) with interleave depth of 5 as the outermost code, a (7, 1/2) convolutional code as an inner code, and the CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The signal-to-noise ratio required for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst-error correction capability is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.
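As a small illustration of the innermost layer of the coding scheme described above, the sketch below implements a generic CRC-16 of the CCITT family (polynomial 0x1021, all-ones preset), the kind of 16-parity-bit check an (n, n-16) CRC code refers to; it is a reference sketch, not the simulator's code, and the Reed-Solomon and convolutional layers are not shown.

# Generic CRC-16 (polynomial 0x1021, initial value 0xFFFF), the family of
# cyclic redundancy check used as an innermost (n, n-16) code; a reference
# sketch, not the simulator's implementation.
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

frame = b"CCSDS transfer frame payload"
parity = crc16_ccitt(frame)                 # 16 parity bits appended to the frame
print(f"CRC-16: 0x{parity:04X}")
# The receiver recomputes the CRC over the received payload and compares it
# with the transmitted parity; a mismatch indicates uncorrected bit errors.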
Quantitative evaluation of patient-specific quality assurance using online dosimetry system
NASA Astrophysics Data System (ADS)
Jung, Jae-Yong; Shin, Young-Ju; Sohn, Seung-Chang; Min, Jung-Whan; Kim, Yon-Lae; Kim, Dong-Su; Choe, Bo-Young; Suh, Tae-Suk
2018-01-01
In this study, we investigated the clinical performance of an online dosimetry system (Mobius FX system, MFX) by 1) dosimetric plan verification using gamma passing rates and dose volume metrics and 2) evaluation of error-detection capability using deliberately introduced machine errors. Eighteen volumetric modulated arc therapy (VMAT) plans were studied. To evaluate the clinical performance of the MFX, we used gamma analysis and dose volume histogram (DVH) analysis. In addition, to evaluate the error-detection capability, we used gamma analysis and DVH analysis with three types of deliberately introduced errors (Type 1: gantry angle-independent multi-leaf collimator (MLC) error, Type 2: gantry angle-dependent MLC error, and Type 3: gantry angle error). In the dosimetric verification comparison of the physical dosimetry system (Delta4PT) and the online dosimetry system (MFX), the gamma passing rates of the two dosimetry systems showed very good agreement with the treatment planning system (TPS) calculation. For the average dose difference between the TPS calculation and the MFX measurement, most of the dose metrics agreed within a tolerance of 3%. In the error-detection comparison of the Delta4PT and the MFX, the gamma passing rates of the two dosimetry systems did not meet the 90% acceptance criterion when the magnitude of error exceeded 2 mm or 1.5°, respectively, for error plans of Types 1, 2, and 3. For delivery with all error types, the average dose difference of the PTV as a function of error magnitude agreed within 1% between the TPS calculation and the MFX measurement. Overall, the results of the online dosimetry system showed very good agreement with those of the physical dosimetry system. Our results suggest that a log file-based online dosimetry system is a very suitable verification tool for accurate and efficient clinical routines in patient-specific quality assurance (QA).
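A minimal sketch of the gamma analysis used in the comparisons above, for a one-dimensional dose profile with a common 3%/3 mm criterion; the dose profiles are synthetic, and clinical gamma software evaluates full 3-D dose grids with interpolation and dose thresholds.

# Minimal 1-D gamma analysis (3% global dose difference / 3 mm distance to
# agreement); synthetic profiles, not the TPS-calculated or measured doses.
import numpy as np

def gamma_1d(x, dose_ref, dose_eval, dose_tol=0.03, dist_tol=3.0):
    """Gamma value at each evaluation point, global dose normalisation."""
    d_norm = dose_tol * dose_ref.max()
    gammas = np.empty(x.size)
    for i in range(x.size):
        dd = (dose_eval[i] - dose_ref) / d_norm    # dose-difference term
        dx = (x[i] - x) / dist_tol                 # distance-to-agreement term
        gammas[i] = np.sqrt(dd**2 + dx**2).min()   # minimum over reference points
    return gammas

x = np.linspace(0.0, 100.0, 201)                   # positions in mm
dose_ref = np.exp(-((x - 50.0) / 20.0) ** 2)       # reference (calculated) profile
dose_eval = np.exp(-((x - 51.0) / 20.0) ** 2)      # evaluated profile, 1 mm shift
gamma = gamma_1d(x, dose_ref, dose_eval)
print(f"gamma passing rate (gamma <= 1): {(gamma <= 1.0).mean():.1%}")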
Simulation of rare events in quantum error correction
NASA Astrophysics Data System (ADS)
Bravyi, Sergey; Vargo, Alexander
2013-12-01
We consider the problem of calculating the logical error probability for a stabilizer quantum code subject to random Pauli errors. To access the regime of large code distances, where logical errors are extremely unlikely, we adopt the splitting method widely used in Monte Carlo simulations of rare events and Bennett's acceptance ratio method for estimating the free energy difference between two canonical ensembles. To illustrate the power of these methods in the context of error correction, we calculate the logical error probability PL for the two-dimensional surface code on a square lattice with a pair of holes for all code distances d ≤ 20 and all error rates p below the fault-tolerance threshold. Our numerical results confirm the expected exponential decay PL ~ exp[-α(p)d] and provide a simple fitting formula for the decay rate α(p). Both noiseless and noisy syndrome readout circuits are considered.
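To make the quoted scaling concrete, the sketch below fits the exponential form PL ≈ C·exp[-α·d] to logical error probabilities across code distances at a fixed physical error rate; the probabilities are synthetic placeholders rather than the paper's surface-code results.

# Fit PL ~ C * exp(-alpha * d) across code distances d at fixed physical error
# rate p; the logical error probabilities below are synthetic placeholders.
import numpy as np

d = np.array([4, 8, 12, 16, 20])
p_logical = np.array([3.0e-2, 2.1e-3, 1.5e-4, 1.1e-5, 7.5e-7])

slope, log_C = np.polyfit(d, np.log(p_logical), 1)   # log PL = log C - alpha*d
alpha = -slope
print(f"decay rate alpha(p) ~ {alpha:.3f} per unit code distance")
print(f"extrapolated PL at d = 30: {np.exp(log_C - alpha * 30):.1e}")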
Global distortion of GPS networks associated with satellite antenna model errors
NASA Astrophysics Data System (ADS)
Cardellach, E.; Elósegui, P.; Davis, J. L.
2007-07-01
Recent studies of the GPS satellite phase center offsets (PCOs) suggest that these have been in error by ˜1 m. Previous studies had shown that PCO errors are absorbed mainly by parameters representing satellite clock and the radial components of site position. On the basis of the assumption that the radial errors are equal, PCO errors will therefore introduce an error in network scale. However, PCO errors also introduce distortions, or apparent deformations, within the network, primarily in the radial (vertical) component of site position that cannot be corrected via a Helmert transformation. Using numerical simulations to quantify the effects of PCO errors, we found that these PCO errors lead to a vertical network distortion of 6-12 mm per meter of PCO error. The network distortion depends on the minimum elevation angle used in the analysis of the GPS phase observables, becoming larger as the minimum elevation angle increases. The steady evolution of the GPS constellation as new satellites are launched, age, and are decommissioned, leads to the effects of PCO errors varying with time that introduce an apparent global-scale rate change. We demonstrate here that current estimates for PCO errors result in a geographically variable error in the vertical rate at the 1-2 mm yr-1 level, which will impact high-precision crustal deformation studies.
Global Distortion of GPS Networks Associated with Satellite Antenna Model Errors
NASA Technical Reports Server (NTRS)
Cardellach, E.; Elosequi, P.; Davis, J. L.
2007-01-01
Recent studies of the GPS satellite phase center offsets (PCOs) suggest that these have been in error by approx. 1 m. Previous studies had shown that PCO errors are absorbed mainly by parameters representing satellite clock and the radial components of site position. On the basis of the assumption that the radial errors are equal, PCO errors will therefore introduce an error in network scale. However, PCO errors also introduce distortions, or apparent deformations, within the network, primarily in the radial (vertical) component of site position that cannot be corrected via a Helmert transformation. Using numerical simulations to quantify the effects of PCO errors, we found that these PCO errors lead to a vertical network distortion of 6-12 mm per meter of PCO error. The network distortion depends on the minimum elevation angle used in the analysis of the GPS phase observables, becoming larger as the minimum elevation angle increases. The steady evolution of the GPS constellation as new satellites are launched, age, and are decommissioned, leads to the effects of PCO errors varying with time that introduce an apparent global-scale rate change. We demonstrate here that current estimates for PCO errors result in a geographically variable error in the vertical rate at the 1-2 mm/yr level, which will impact high-precision crustal deformation studies.
Venkataraman, Aishwarya; Siu, Emily; Sadasivam, Kalaimaran
2016-11-01
Medication errors, including infusion prescription errors, are a major public health concern, especially in paediatric patients. There is some evidence that electronic or web-based calculators could minimise these errors. The aim was to evaluate the impact of an electronic infusion calculator on the frequency of infusion errors in the Paediatric Critical Care Unit of The Royal London Hospital, London, United Kingdom. We devised an electronic infusion calculator that calculates the appropriate concentration, rate and dose for the selected medication based on the recorded weight and age of the child and then prints a valid prescription chart. The electronic infusion calculator was implemented in the Paediatric Critical Care Unit from April 2015. A prospective study was conducted over the five months before and five months after implementation of the electronic infusion calculator. Data on the following variables were collected onto a proforma: medication dose, infusion rate, volume, concentration, diluent, legibility, and missing or incorrect patient details. A total of 132 handwritten prescriptions were reviewed prior to implementation and 119 electronic infusion calculator prescriptions were reviewed after implementation. Handwritten prescriptions had a higher error rate (32.6%) than electronic infusion calculator prescriptions (<1%), p < 0.001. Electronic infusion calculator prescriptions had no errors in dose, volume or rate calculation, unlike handwritten prescriptions, hence warranting very few pharmacy interventions. Use of the electronic infusion calculator for infusion prescriptions significantly reduced the total number of infusion prescribing errors in the Paediatric Critical Care Unit and has enabled more efficient use of medical and pharmacy time resources.
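As a hedged sketch of the kind of calculation such a tool automates, the example below converts a weight-based dose prescription into a pump rate for a given syringe mixture; the drug amount, dilution and dose are hypothetical illustration values, not the unit's formulary or the calculator's actual rules, and nothing here is clinical guidance.

# Illustrative infusion-rate calculation of the kind an electronic calculator
# automates; the numbers are hypothetical examples, NOT clinical guidance.
def infusion_rate_ml_per_hr(weight_kg, dose_mcg_per_kg_per_min,
                            drug_mcg, diluent_ml):
    """Convert a weight-based dose into a pump rate for a given syringe mix."""
    concentration_mcg_per_ml = drug_mcg / diluent_ml
    dose_mcg_per_hr = dose_mcg_per_kg_per_min * weight_kg * 60.0
    return dose_mcg_per_hr / concentration_mcg_per_ml

# Example: 12 kg child, 0.1 mcg/kg/min, 3000 mcg of drug made up to 50 mL.
rate = infusion_rate_ml_per_hr(12.0, 0.1, 3000.0, 50.0)
print(f"pump rate: {rate:.2f} mL/h")   # 1.20 mL/h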
Analysis of the “naming game” with learning errors in communications
NASA Astrophysics Data System (ADS)
Lou, Yang; Chen, Guanrong
2015-07-01
The naming game simulates the process of naming an object by a population of agents organized in a certain communication network. Through pair-wise iterative interactions, the population reaches consensus asymptotically. We study the naming game with communication errors during pair-wise conversations, with error rates following a uniform probability distribution. First, a model of the naming game with learning errors in communications (NGLE) is proposed. Then, a strategy for agents to prevent learning errors is suggested. Three typical topologies of communication networks, namely random-graph, small-world and scale-free networks, are employed to investigate the effects of various learning errors. Simulation results on these models show that 1) learning errors slightly affect the convergence speed but distinctly increase the memory required of each agent during lexicon propagation; 2) the maximum number of different words held by the population increases linearly as the error rate increases; and 3) without any strategy to eliminate learning errors, there is a threshold of the learning error rate beyond which convergence is impaired. These findings may help to better understand the role of learning errors in the naming game as well as in human language development from a network science perspective.
Analysis of the "naming game" with learning errors in communications.
Lou, Yang; Chen, Guanrong
2015-07-16
The naming game simulates the process of naming an object by a population of agents organized in a certain communication network. Through pair-wise iterative interactions, the population reaches consensus asymptotically. We study the naming game with communication errors during pair-wise conversations, with error rates following a uniform probability distribution. First, a model of the naming game with learning errors in communications (NGLE) is proposed. Then, a strategy for agents to prevent learning errors is suggested. Three typical topologies of communication networks, namely random-graph, small-world and scale-free networks, are employed to investigate the effects of various learning errors. Simulation results on these models show that 1) learning errors slightly affect the convergence speed but distinctly increase the memory required of each agent during lexicon propagation; 2) the maximum number of different words held by the population increases linearly as the error rate increases; and 3) without any strategy to eliminate learning errors, there is a threshold of the learning error rate beyond which convergence is impaired. These findings may help to better understand the role of learning errors in the naming game as well as in human language development from a network science perspective.
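A minimal simulation sketch of the naming game with learning errors of the kind analyzed above: at each step a randomly chosen speaker transmits a word to a neighbouring hearer, and with probability equal to the error rate the hearer receives a corrupted word instead. The network construction, word encoding and error model are simplified assumptions for illustration, not the exact NGLE specification from the paper.

# Toy naming game with learning errors. The network, word encoding and error
# model are simplified illustrative assumptions, not the paper's NGLE model.
import random

def naming_game(n_agents=200, n_neighbors=8, error_rate=0.01,
                max_steps=500_000, seed=1):
    rng = random.Random(seed)
    # Toy directed neighbourhoods standing in for the paper's network topologies.
    neighbors = [rng.sample([j for j in range(n_agents) if j != i], n_neighbors)
                 for i in range(n_agents)]
    lexicon = [set() for _ in range(n_agents)]
    next_word = 0
    for step in range(max_steps):
        speaker = rng.randrange(n_agents)
        hearer = rng.choice(neighbors[speaker])
        if not lexicon[speaker]:               # speaker invents a new word
            lexicon[speaker].add(next_word)
            next_word += 1
        word = rng.choice(tuple(lexicon[speaker]))
        if rng.random() < error_rate:          # learning error: word is corrupted
            word = next_word
            next_word += 1
        if word in lexicon[hearer]:            # success: both keep only this word
            lexicon[speaker] = {word}
            lexicon[hearer] = {word}
        else:                                  # failure: hearer memorises the word
            lexicon[hearer].add(word)
        if all(len(lx) == 1 for lx in lexicon) and len(set.union(*lexicon)) == 1:
            return step                        # global consensus reached
    return None                                # no consensus within max_steps

print("interactions to consensus:", naming_game())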