Differential detection in quadrature-quadrature phase shift keying (Q2PSK) systems
NASA Astrophysics Data System (ADS)
El-Ghandour, Osama M.; Saha, Debabrata
1991-05-01
A generalized quadrature-quadrature phase shift keying (Q2PSK) signaling format is considered for differential encoding and differential detection. Performance in the presence of additive white Gaussian noise (AWGN) is analyzed. Symbol error rate is found to be approximately twice the symbol error rate in a quaternary DPSK system operating at the same Eb/N0. However, the bandwidth efficiency of differential Q2PSK is substantially higher than that of quaternary DPSK. When the error is due to AWGN, the ratio of double error rate to single error rate can be very high, and the ratio may approach zero at high SNR. To improve error rate, differential detection through maximum-likelihood decoding based on multiple or N symbol observations is considered. If N and SNR are large this decoding gives a 3-dB advantage in error rate over conventional N = 2 differential detection, fully recovering the energy loss (as compared to coherent detection) if the observation is extended to a large number of symbol durations.
Westbrook, Johanna I.; Li, Ling; Lehnbom, Elin C.; Baysari, Melissa T.; Braithwaite, Jeffrey; Burke, Rosemary; Conn, Chris; Day, Richard O.
2015-01-01
Objectives To (i) compare medication errors identified at audit and observation with medication incident reports; (ii) identify differences between two hospitals in incident report frequency and medication error rates; (iii) identify prescribing error detection rates by staff. Design Audit of 3291 patient records at two hospitals to identify prescribing errors and evidence of their detection by staff. Medication administration errors were identified from a direct observational study of 180 nurses administering 7451 medications. Severity of errors was classified. Those likely to lead to patient harm were categorized as ‘clinically important’. Setting Two major academic teaching hospitals in Sydney, Australia. Main Outcome Measures Rates of medication errors identified from audit and from direct observation were compared with reported medication incident reports. Results A total of 12 567 prescribing errors were identified at audit. Of these 1.2/1000 errors (95% CI: 0.6–1.8) had incident reports. Clinically important prescribing errors (n = 539) were detected by staff at a rate of 218.9/1000 (95% CI: 184.0–253.8), but only 13.0/1000 (95% CI: 3.4–22.5) were reported. 78.1% (n = 421) of clinically important prescribing errors were not detected. A total of 2043 drug administrations (27.4%; 95% CI: 26.4–28.4%) contained ≥1 errors; none had an incident report. Hospital A had a higher frequency of incident reports than Hospital B, but a lower rate of errors at audit. Conclusions Prescribing errors with the potential to cause harm frequently go undetected. Reported incidents do not reflect the profile of medication errors which occur in hospitals or the underlying rates. This demonstrates the inaccuracy of using incident frequency to compare patient risk or quality performance within or across hospitals. New approaches including data mining of electronic clinical information systems are required to support more effective medication error detection and mitigation. PMID:25583702
Westbrook, Johanna I; Li, Ling; Lehnbom, Elin C; Baysari, Melissa T; Braithwaite, Jeffrey; Burke, Rosemary; Conn, Chris; Day, Richard O
2015-02-01
To (i) compare medication errors identified at audit and observation with medication incident reports; (ii) identify differences between two hospitals in incident report frequency and medication error rates; (iii) identify prescribing error detection rates by staff. Audit of 3291 patient records at two hospitals to identify prescribing errors and evidence of their detection by staff. Medication administration errors were identified from a direct observational study of 180 nurses administering 7451 medications. Severity of errors was classified. Those likely to lead to patient harm were categorized as 'clinically important'. Two major academic teaching hospitals in Sydney, Australia. Rates of medication errors identified from audit and from direct observation were compared with reported medication incident reports. A total of 12 567 prescribing errors were identified at audit. Of these 1.2/1000 errors (95% CI: 0.6-1.8) had incident reports. Clinically important prescribing errors (n = 539) were detected by staff at a rate of 218.9/1000 (95% CI: 184.0-253.8), but only 13.0/1000 (95% CI: 3.4-22.5) were reported. 78.1% (n = 421) of clinically important prescribing errors were not detected. A total of 2043 drug administrations (27.4%; 95% CI: 26.4-28.4%) contained ≥ 1 errors; none had an incident report. Hospital A had a higher frequency of incident reports than Hospital B, but a lower rate of errors at audit. Prescribing errors with the potential to cause harm frequently go undetected. Reported incidents do not reflect the profile of medication errors which occur in hospitals or the underlying rates. This demonstrates the inaccuracy of using incident frequency to compare patient risk or quality performance within or across hospitals. New approaches including data mining of electronic clinical information systems are required to support more effective medication error detection and mitigation. © The Author 2015. Published by Oxford University Press in association with the International Society for Quality in Health Care.
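A minimal sketch (not from the paper) of how a reporting rate per 1000 errors and its 95% confidence interval, as quoted above, can be reproduced with a normal approximation for a binomial proportion. The count of 15 reported errors is back-calculated from the quoted 1.2/1000 rate for illustration; it is not stated in the abstract.

```python
import math

def rate_per_1000_with_ci(events, total, z=1.96):
    """Rate per 1000 with a normal-approximation 95% confidence interval."""
    p = events / total
    se = math.sqrt(p * (1 - p) / total)
    return 1000 * p, 1000 * (p - z * se), 1000 * (p + z * se)

# 12,567 prescribing errors identified at audit; roughly 15 of them had incident
# reports (an assumed count: 15/12567 ~= 1.2 per 1000, matching the abstract).
rate, lo, hi = rate_per_1000_with_ci(15, 12567)
print(f"{rate:.1f}/1000 (95% CI: {lo:.1f}-{hi:.1f})")   # ~1.2/1000 (0.6-1.8)
```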
Impact of Measurement Error on Synchrophasor Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yilu; Gracia, Jose R.; Ewing, Paul D.
2015-07-01
Phasor measurement units (PMUs), a type of synchrophasor, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is more likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.
Syndromic surveillance for health information system failures: a feasibility study.
Ong, Mei-Sing; Magrabi, Farah; Coiera, Enrico
2013-05-01
To explore the applicability of a syndromic surveillance method to the early detection of health information technology (HIT) system failures. A syndromic surveillance system was developed to monitor a laboratory information system at a tertiary hospital. Four indices were monitored: (1) total laboratory records being created; (2) total records with missing results; (3) average serum potassium results; and (4) total duplicated tests on a patient. The goal was to detect HIT system failures causing: data loss at the record level; data loss at the field level; erroneous data; and unintended duplication of data. Time-series models of the indices were constructed, and statistical process control charts were used to detect unexpected behaviors. The ability of the models to detect HIT system failures was evaluated using simulated failures, each lasting for 24 h, with error rates ranging from 1% to 35%. In detecting data loss at the record level, the model achieved a sensitivity of 0.26 when the simulated error rate was 1%, while maintaining a specificity of 0.98. Detection performance improved with increasing error rates, achieving a perfect sensitivity when the error rate was 35%. In the detection of missing results, erroneous serum potassium results and unintended repetition of tests, perfect sensitivity was attained when the error rate was as small as 5%. Decreasing the error rate to 1% resulted in a drop in sensitivity to 0.65-0.85. Syndromic surveillance methods can potentially be applied to monitor HIT systems, to facilitate the early detection of failures.
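The monitoring approach described above (time-series indices watched with statistical process control charts) can be illustrated with a minimal sketch. The Shewhart-style 3-sigma limits, baseline window length, and simulated index values below are illustrative assumptions, not the authors' actual model.

```python
import numpy as np

def control_chart_alerts(index_values, baseline_days=30, n_sigma=3.0):
    """Flag days on which a monitored index falls outside Shewhart-style
    control limits estimated from a baseline window."""
    x = np.asarray(index_values, dtype=float)
    baseline = x[:baseline_days]
    mu, sigma = baseline.mean(), baseline.std(ddof=1)
    lower, upper = mu - n_sigma * sigma, mu + n_sigma * sigma
    return [d for d in range(baseline_days, len(x)) if not (lower <= x[d] <= upper)]

# Simulated daily counts of laboratory records created, with a failure
# (record-level data loss) injected on the last day.
rng = np.random.default_rng(0)
daily_records = rng.normal(5000, 150, size=60)
daily_records[-1] *= 0.65                    # 35% of records lost
print(control_chart_alerts(daily_records))   # expect the last day to be flagged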
PRESAGE: Protecting Structured Address Generation against Soft Errors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram
Modern computer scaling trends in pursuit of larger component counts and power efficiency have, unfortunately, led to less reliable hardware and consequently soft errors escaping into application data ("silent data corruptions"). Techniques to enhance system resilience hinge on the availability of efficient error detectors that have high detection rates, low false positive rates, and lower computational overhead. Unfortunately, efficient detectors to detect faults during address generation (to index large arrays) have not been widely researched. We present a novel lightweight compiler-driven technique called PRESAGE for detecting bit-flips affecting structured address computations. A key insight underlying PRESAGE is that any address computation scheme that flows an already incurred error is better than a scheme that corrupts one particular array access but otherwise (falsely) appears to compute perfectly. Enabling the flow of errors allows one to situate detectors at loop exit points, and helps turn silent corruptions into easily detectable error situations. Our experiments using the PolyBench benchmark suite indicate that PRESAGE-based error detectors have a high error-detection rate while incurring low overheads.
PRESAGE: Protecting Structured Address Generation against Soft Errors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram
Modern computer scaling trends in pursuit of larger component counts and power efficiency have, unfortunately, led to less reliable hardware and consequently soft errors escaping into application data ("silent data corruptions"). Techniques to enhance system resilience hinge on the availability of efficient error detectors that have high detection rates, low false positive rates, and lower computational overhead. Unfortunately, efficient detectors to detect faults during address generation have not been widely researched (especially in the context of indexing large arrays). We present a novel lightweight compiler-driven technique called PRESAGE for detecting bit-flips affecting structured address computations. A key insight underlying PRESAGE is that any address computation scheme that propagates an already incurred error is better than a scheme that corrupts one particular array access but otherwise (falsely) appears to compute perfectly. Ensuring the propagation of errors allows one to place detectors at loop exit points and helps turn silent corruptions into easily detectable error situations. Our experiments using the PolyBench benchmark suite indicate that PRESAGE-based error detectors have a high error-detection rate while incurring low overheads.
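The core PRESAGE idea, propagating an address-computation error so that a cheap detector at the loop exit can catch it, can be sketched in ordinary Python. The original work operates on compiled code; the relative-addressing scheme, the injected bit flip, and the array sizes below are illustrative assumptions, not the published compiler transformation.

```python
def sum_rows(matrix, n_rows, n_cols, flip_at=None, flip_bit=2):
    """Walk a flattened matrix with a relative (incrementally updated) offset.
    A bit flip in the offset propagates to all later accesses, so a single
    loop-exit check against the closed-form final offset detects it."""
    total, offset = 0.0, 0
    for i in range(n_rows):
        if i == flip_at:                   # injected soft error in the
            offset ^= (1 << flip_bit)      # address computation (simulated bit flip)
        for j in range(n_cols):
            total += matrix[offset + j]
        offset += n_cols                   # relative update: the error keeps flowing
    expected_final = n_rows * n_cols       # detector at loop exit
    if offset != expected_final:
        raise RuntimeError("soft error detected in address computation")
    return total

flat = list(range(12))                     # 3 x 4 matrix, flattened
print(sum_rows(flat, 3, 4))                # clean run: 66
try:
    sum_rows(flat, 3, 4, flip_at=1)
except RuntimeError as e:
    print(e)                               # corruption caught at the loop exit
```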
Douglas, Julie A.; Skol, Andrew D.; Boehnke, Michael
2002-01-01
Gene-mapping studies routinely rely on checking for Mendelian transmission of marker alleles in a pedigree, as a means of screening for genotyping errors and mutations, with the implicit assumption that, if a pedigree is consistent with Mendel’s laws of inheritance, then there are no genotyping errors. However, the occurrence of inheritance inconsistencies alone is an inadequate measure of the number of genotyping errors, since the rate of occurrence depends on the number and relationships of genotyped pedigree members, the type of errors, and the distribution of marker-allele frequencies. In this article, we calculate the expected probability of detection of a genotyping error or mutation as an inheritance inconsistency in nuclear-family data, as a function of both the number of genotyped parents and offspring and the marker-allele frequency distribution. Through computer simulation, we explore the sensitivity of our analytic calculations to the underlying error model. Under a random-allele–error model, we find that detection rates are 51%–77% for multiallelic markers and 13%–75% for biallelic markers; detection rates are generally lower when the error occurs in a parent than in an offspring, unless a large number of offspring are genotyped. Errors are especially difficult to detect for biallelic markers with equally frequent alleles, even when both parents are genotyped; in this case, the maximum detection rate is 34% for four-person nuclear families. Error detection in families in which parents are not genotyped is limited, even with multiallelic markers. Given these results, we recommend that additional error checking (e.g., on the basis of multipoint analysis) be performed, beyond routine checking for Mendelian consistency. Furthermore, our results permit assessment of the plausibility of an observed number of inheritance inconsistencies for a family, allowing the detection of likely pedigree—rather than genotyping—errors in the early stages of a genome scan. Such early assessments are valuable in either the targeting of families for resampling or discontinued genotyping. PMID:11791214
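The detection-rate calculation described above can be approximated by simulation. Below is a minimal sketch for a biallelic marker in a nuclear family (two genotyped parents, k genotyped offspring) with a random-allele error introduced into one offspring; the allele frequency, family size, and exact error model are illustrative assumptions, so the resulting rate should not be read as reproducing the paper's figures.

```python
import random

def mendelian_consistent(parents, kids):
    """Check whether every offspring genotype can be produced from the parents."""
    p1, p2 = parents
    possible = {tuple(sorted((a, b))) for a in p1 for b in p2}
    return all(tuple(sorted(k)) in possible for k in kids)

def detection_rate(p=0.5, n_kids=2, n_sim=20000, seed=1):
    """Fraction of single random-allele errors (in one offspring) that appear
    as a Mendelian inconsistency, for a biallelic marker with freq(A) = p."""
    rng = random.Random(seed)
    detected = 0
    for _ in range(n_sim):
        draw = lambda: tuple(rng.choices("Aa", weights=[p, 1 - p], k=2))
        parents = (draw(), draw())
        kids = [tuple(sorted((rng.choice(parents[0]), rng.choice(parents[1]))))
                for _ in range(n_kids)]
        g = list(kids[0])                                  # corrupt one offspring:
        g[rng.randrange(2)] = rng.choices("Aa", weights=[p, 1 - p])[0]
        kids[0] = tuple(sorted(g))
        if not mendelian_consistent(parents, kids):
            detected += 1
    return detected / n_sim

# Detection rate under this simplified error model (equally frequent alleles).
print(detection_rate())
```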
Syndromic surveillance for health information system failures: a feasibility study
Ong, Mei-Sing; Magrabi, Farah; Coiera, Enrico
2013-01-01
Objective To explore the applicability of a syndromic surveillance method to the early detection of health information technology (HIT) system failures. Methods A syndromic surveillance system was developed to monitor a laboratory information system at a tertiary hospital. Four indices were monitored: (1) total laboratory records being created; (2) total records with missing results; (3) average serum potassium results; and (4) total duplicated tests on a patient. The goal was to detect HIT system failures causing: data loss at the record level; data loss at the field level; erroneous data; and unintended duplication of data. Time-series models of the indices were constructed, and statistical process control charts were used to detect unexpected behaviors. The ability of the models to detect HIT system failures was evaluated using simulated failures, each lasting for 24 h, with error rates ranging from 1% to 35%. Results In detecting data loss at the record level, the model achieved a sensitivity of 0.26 when the simulated error rate was 1%, while maintaining a specificity of 0.98. Detection performance improved with increasing error rates, achieving a perfect sensitivity when the error rate was 35%. In the detection of missing results, erroneous serum potassium results and unintended repetition of tests, perfect sensitivity was attained when the error rate was as small as 5%. Decreasing the error rate to 1% resulted in a drop in sensitivity to 0.65–0.85. Conclusions Syndromic surveillance methods can potentially be applied to monitor HIT systems, to facilitate the early detection of failures. PMID:23184193
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gopan, O; Kalet, A; Smith, W
2016-06-15
Purpose: A standard tool for ensuring the quality of radiation therapy treatments is the initial physics plan review. However, little is known about its performance in practice. The goal of this study is to measure the effectiveness of physics plan review by introducing simulated errors into “mock” treatment plans and measuring the performance of plan review by physicists. Methods: We generated six mock treatment plans containing multiple errors. These errors were based on incident learning system data both within the department and internationally (SAFRON). These errors were scored for severity and frequency. Those with the highest scores were included in the simulations (13 errors total). Observer bias was minimized using a multiple co-correlated distractor approach. Eight physicists reviewed these plans for errors, with each physicist reviewing, on average, 3/6 plans. The confidence interval for the proportion of errors detected was computed using the Wilson score interval. Results: Simulated errors were detected in 65% of reviews [51–75%] (95% confidence interval [CI] in brackets). The following error scenarios had the highest detection rates: incorrect isocenter in DRRs/CBCT (91% [73–98%]) and a planned dose different from the prescribed dose (100% [61–100%]). Errors with low detection rates involved incorrect field parameters in record and verify system (38%, [18–61%]) and incorrect isocenter localization in planning system (29% [8–64%]). Though pre-treatment QA failure was reliably identified (100%), less than 20% of participants reported the error that caused the failure. Conclusion: This is one of the first quantitative studies of error detection. Although physics plan review is a key safety measure and can identify some errors with high fidelity, other errors are more challenging to detect. This data will guide future work on standardization and automation. Creating new checks or improving existing ones (i.e., via automation) will help in detecting those errors with low detection rates.
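The Wilson score interval used above for the proportion of detected errors can be computed directly. A minimal sketch follows; the counts of 26 detections out of 40 opportunities are hypothetical placeholders chosen only to yield a proportion near 65%, since the abstract reports proportions rather than denominators.

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Hypothetical counts: 26 simulated errors detected out of 40 detection opportunities.
lo, hi = wilson_interval(26, 40)
print(f"detected 65% (95% CI {lo:.0%}-{hi:.0%})")
```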
Validation Relaxation: A Quality Assurance Strategy for Electronic Data Collection
Gordon, Nicholas; Griffiths, Thomas; Kraemer, John D; Siedner, Mark J
2017-01-01
Background The use of mobile devices for data collection in developing world settings is becoming increasingly common and may offer advantages in data collection quality and efficiency relative to paper-based methods. However, mobile data collection systems can hamper many standard quality assurance techniques due to the lack of a hardcopy backup of data. Consequently, mobile health data collection platforms have the potential to generate datasets that appear valid, but are susceptible to unidentified database design flaws, areas of miscomprehension by enumerators, and data recording errors. Objective We describe the design and evaluation of a strategy for estimating data error rates and assessing enumerator performance during electronic data collection, which we term “validation relaxation.” Validation relaxation involves the intentional omission of data validation features for select questions to allow for data recording errors to be committed, detected, and monitored. Methods We analyzed data collected during a cluster sample population survey in rural Liberia using an electronic data collection system (Open Data Kit). We first developed a classification scheme for types of detectable errors and validation alterations required to detect them. We then implemented the following validation relaxation techniques to enable data error conduct and detection: intentional redundancy, removal of “required” constraint, and illogical response combinations. This allowed for up to 11 identifiable errors to be made per survey. The error rate was defined as the total number of errors committed divided by the number of potential errors. We summarized crude error rates and estimated changes in error rates over time for both individuals and the entire program using logistic regression. Results The aggregate error rate was 1.60% (125/7817). Error rates did not differ significantly between enumerators (P=.51), but decreased for the cohort with increasing days of application use, from 2.3% at survey start (95% CI 1.8%-2.8%) to 0.6% at day 45 (95% CI 0.3%-0.9%; OR=0.969; P<.001). The highest error rate (84/618, 13.6%) occurred for an intentional redundancy question for a birthdate field, which was repeated in separate sections of the survey. We found low error rates (0.0% to 3.1%) for all other possible errors. Conclusions A strategy of removing validation rules on electronic data capture platforms can be used to create a set of detectable data errors, which can subsequently be used to assess group and individual enumerator error rates, their trends over time, and categories of data collection that require further training or additional quality control measures. This strategy may be particularly useful for identifying individual enumerators or systematic data errors that are responsive to enumerator training and is best applied to questions for which errors cannot be prevented through training or software design alone. Validation relaxation should be considered as a component of a holistic data quality assurance strategy. PMID:28821474
Validation Relaxation: A Quality Assurance Strategy for Electronic Data Collection.
Kenny, Avi; Gordon, Nicholas; Griffiths, Thomas; Kraemer, John D; Siedner, Mark J
2017-08-18
The use of mobile devices for data collection in developing world settings is becoming increasingly common and may offer advantages in data collection quality and efficiency relative to paper-based methods. However, mobile data collection systems can hamper many standard quality assurance techniques due to the lack of a hardcopy backup of data. Consequently, mobile health data collection platforms have the potential to generate datasets that appear valid, but are susceptible to unidentified database design flaws, areas of miscomprehension by enumerators, and data recording errors. We describe the design and evaluation of a strategy for estimating data error rates and assessing enumerator performance during electronic data collection, which we term "validation relaxation." Validation relaxation involves the intentional omission of data validation features for select questions to allow for data recording errors to be committed, detected, and monitored. We analyzed data collected during a cluster sample population survey in rural Liberia using an electronic data collection system (Open Data Kit). We first developed a classification scheme for types of detectable errors and validation alterations required to detect them. We then implemented the following validation relaxation techniques to enable data error conduct and detection: intentional redundancy, removal of "required" constraint, and illogical response combinations. This allowed for up to 11 identifiable errors to be made per survey. The error rate was defined as the total number of errors committed divided by the number of potential errors. We summarized crude error rates and estimated changes in error rates over time for both individuals and the entire program using logistic regression. The aggregate error rate was 1.60% (125/7817). Error rates did not differ significantly between enumerators (P=.51), but decreased for the cohort with increasing days of application use, from 2.3% at survey start (95% CI 1.8%-2.8%) to 0.6% at day 45 (95% CI 0.3%-0.9%; OR=0.969; P<.001). The highest error rate (84/618, 13.6%) occurred for an intentional redundancy question for a birthdate field, which was repeated in separate sections of the survey. We found low error rates (0.0% to 3.1%) for all other possible errors. A strategy of removing validation rules on electronic data capture platforms can be used to create a set of detectable data errors, which can subsequently be used to assess group and individual enumerator error rates, their trends over time, and categories of data collection that require further training or additional quality control measures. This strategy may be particularly useful for identifying individual enumerators or systematic data errors that are responsive to enumerator training and is best applied to questions for which errors cannot be prevented through training or software design alone. Validation relaxation should be considered as a component of a holistic data quality assurance strategy. ©Avi Kenny, Nicholas Gordon, Thomas Griffiths, John D Kraemer, Mark J Siedner. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 18.08.2017.
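A minimal sketch of the two summary analyses described above: the crude (aggregate) error rate and a logistic model of error probability against days of application use. The simulated dataset, starting error probability, and per-day odds ratio are illustrative assumptions tuned to resemble the quoted figures, and statsmodels is used for the regression.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)

# Simulated error-opportunity data: each row is one potential error on a survey,
# with the day of application use on which it was collected (assumed values).
days = rng.integers(0, 46, size=7817)
p_err = 1 / (1 + np.exp(-(np.log(0.023 / 0.977) + np.log(0.969) * days)))
errors = rng.binomial(1, p_err)

# Crude (aggregate) error rate: errors committed / potential errors.
print(f"aggregate error rate: {errors.mean():.2%}")

# Logistic regression of error occurrence on days of use (cohort-level trend).
X = sm.add_constant(days.astype(float))
fit = sm.Logit(errors, X).fit(disp=0)
print(f"odds ratio per day of use: {np.exp(fit.params[1]):.3f}")
```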
Error-Related Psychophysiology and Negative Affect
ERIC Educational Resources Information Center
Hajcak, G.; McDonald, N.; Simons, R.F.
2004-01-01
The error-related negativity (ERN/Ne) and error positivity (Pe) have been associated with error detection and response monitoring. More recently, heart rate (HR) and skin conductance (SC) have also been shown to be sensitive to the internal detection of errors. An enhanced ERN has consistently been observed in anxious subjects and there is some…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Damato, AL; Bhagwat, MS; Buzurovic, I
Purpose: To investigate the use of a system using EM tracking, postprocessing and error-detection algorithms for measuring brachytherapy catheter locations and for detecting errors and resolving uncertainties in treatment-planning catheter digitization. Methods: An EM tracker was used to localize 13 catheters in a clinical surface applicator (A) and 15 catheters inserted into a phantom (B). Two pairs of catheters in (B) crossed paths at a distance <2 mm, producing an undistinguishable catheter artifact in that location. EM data was post-processed for noise reduction and reformatted to provide the dwell location configuration. CT-based digitization was automatically extracted from the brachytherapy plan DICOM files (CT). EM dwell digitization error was characterized in terms of the average and maximum distance between corresponding EM and CT dwells per catheter. The error detection rate (detected errors / all errors) was calculated for 3 types of errors: swap of two catheter numbers; incorrect catheter number identification superior to the closest position between two catheters (mix); and catheter-tip shift. Results: The averages ± 1 standard deviation of the average and maximum registration error per catheter were 1.9±0.7 mm and 3.0±1.1 mm for (A) and 1.6±0.6 mm and 2.7±0.8 mm for (B). The error detection rate was 100% (A and B) for swap errors, mix errors, and shift >4.5 mm (A) and >5.5 mm (B); errors were detected for shifts on average >2.0 mm (A) and >2.4 mm (B). Both mix errors associated with undistinguishable catheter artifacts were detected and at least one of the involved catheters was identified. Conclusion: We demonstrated the use of an EM tracking system for localization of brachytherapy catheters, detection of digitization errors and resolution of undistinguishable catheter artifacts. Automatic digitization may be possible with a registration between the imaging and the EM frame of reference. Research funded by the Kaye Family Award 2012.
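The registration-error metrics and the shift-detection check described above reduce to per-catheter distance comparisons. The sketch below assumes the EM and CT dwell positions are already in a common frame and ordered identically; the array shapes and the tolerance value are illustrative, not the study's parameters.

```python
import numpy as np

def catheter_registration_errors(em_dwells, ct_dwells):
    """Per-catheter average and maximum Euclidean distance (mm) between
    corresponding EM-tracked and CT-digitized dwell positions."""
    d = np.linalg.norm(np.asarray(em_dwells) - np.asarray(ct_dwells), axis=1)
    return d.mean(), d.max()

def flag_shifted_catheters(em_by_catheter, ct_by_catheter, tol_mm=2.5):
    """Return catheter indices whose average EM-CT discrepancy exceeds a
    tolerance, e.g. as a screen for tip shifts or digitization errors."""
    flagged = []
    for idx, (em, ct) in enumerate(zip(em_by_catheter, ct_by_catheter)):
        avg, _ = catheter_registration_errors(em, ct)
        if avg > tol_mm:
            flagged.append(idx)
    return flagged

# Two toy catheters with 5 dwells each; the second is shifted 4 mm along x.
ct = [np.column_stack([np.arange(5) * 5.0, np.zeros(5), np.zeros(5)]) for _ in range(2)]
em = [ct[0] + 0.5, ct[1] + np.array([4.0, 0.0, 0.0])]
print(flag_shifted_catheters(em, ct))   # expect [1]
```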
Decoy-state quantum key distribution with more than three types of photon intensity pulses
NASA Astrophysics Data System (ADS)
Chau, H. F.
2018-04-01
The decoy-state method closes source security loopholes in quantum key distribution (QKD) using a laser source. In this method, accurate estimates of the detection rates of vacuum and single-photon events plus the error rate of single-photon events are needed to give a good enough lower bound of the secret key rate. Nonetheless, the current estimation method for these detection and error rates, which uses three types of photon intensities, is accurate up to about 1 % relative error. Here I report an experimentally feasible way that greatly improves these estimates and hence increases the one-way key rate of the BB84 QKD protocol with unbiased bases selection by at least 20% on average in realistic settings. The major tricks are the use of more than three types of photon intensities plus the fact that estimating bounds of the above detection and error rates is numerically stable, although these bounds are related to the inversion of a high condition number matrix.
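The estimation problem described above amounts to inverting a Poissonian linear system relating measured gains to photon-number yields. Below is a minimal sketch using more than three intensities and a truncated least-squares inversion; the intensities, yields, truncation order, and the absence of statistical noise are all simplifying assumptions, and the paper's actual bounds come from a more careful constrained analysis.

```python
import numpy as np
from math import exp, factorial

# Assumed "true" yields Y_n (detection probability given an n-photon pulse).
Y_true = np.array([2e-5, 5e-3, 1.0e-2, 1.5e-2, 2.0e-2, 2.5e-2, 3.0e-2])
N = len(Y_true) - 1

def gain(mu):
    """Overall gain Q_mu = sum_n e^{-mu} mu^n / n! * Y_n, truncated at N photons."""
    return sum(exp(-mu) * mu**n / factorial(n) * Y_true[n] for n in range(N + 1))

# More than three photon intensities, as advocated above.
intensities = [0.05, 0.1, 0.2, 0.3, 0.45, 0.6, 0.8]
A = np.array([[exp(-mu) * mu**n / factorial(n) for n in range(N + 1)]
              for mu in intensities])
Q = np.array([gain(mu) for mu in intensities])

Y_est, *_ = np.linalg.lstsq(A, Q, rcond=None)
print("condition number of A:", f"{np.linalg.cond(A):.2e}")   # ill-conditioned
print("estimated Y0, Y1:", Y_est[0], Y_est[1])
```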
Experimental investigation of observation error in anuran call surveys
McClintock, B.T.; Bailey, L.L.; Pollock, K.H.; Simons, T.R.
2010-01-01
Occupancy models that account for imperfect detection are often used to monitor anuran and songbird species occurrence. However, presence-absence data arising from auditory detections may be more prone to observation error (e.g., false-positive detections) than are sampling approaches utilizing physical captures or sightings of individuals. We conducted realistic, replicated field experiments using a remote broadcasting system to simulate simple anuran call surveys and to investigate potential factors affecting observation error in these studies. Distance, time, ambient noise, and observer abilities were the most important factors explaining false-negative detections. Distance and observer ability were the best overall predictors of false-positive errors, but ambient noise and competing species also affected error rates for some species. False-positive errors made up 5% of all positive detections, with individual observers exhibiting false-positive rates between 0.5% and 14%. Previous research suggests false-positive errors of these magnitudes would induce substantial positive biases in standard estimators of species occurrence, and we recommend practices to mitigate for false positives when developing occupancy monitoring protocols that rely on auditory detections. These recommendations include additional observer training, limiting the number of target species, and establishing distance and ambient noise thresholds during surveys. © 2010 The Wildlife Society.
Steward, Christine D.; Stocker, Sheila A.; Swenson, Jana M.; O’Hara, Caroline M.; Edwards, Jonathan R.; Gaynes, Robert P.; McGowan, John E.; Tenover, Fred C.
1999-01-01
Fluoroquinolone resistance appears to be increasing in many species of bacteria, particularly in those causing nosocomial infections. However, the accuracy of some antimicrobial susceptibility testing methods for detecting fluoroquinolone resistance remains uncertain. Therefore, we compared the accuracy of the results of agar dilution, disk diffusion, MicroScan Walk Away Neg Combo 15 conventional panels, and Vitek GNS-F7 cards to the accuracy of the results of the broth microdilution reference method for detection of ciprofloxacin and ofloxacin resistance in 195 clinical isolates of the family Enterobacteriaceae collected from six U.S. hospitals for a national surveillance project (Project ICARE [Intensive Care Antimicrobial Resistance Epidemiology]). For ciprofloxacin, very major error rates were 0% (disk diffusion and MicroScan), 0.9% (agar dilution), and 2.7% (Vitek), while major error rates ranged from 0% (agar dilution) to 3.7% (MicroScan and Vitek). Minor error rates ranged from 12.3% (agar dilution) to 20.5% (MicroScan). For ofloxacin, no very major errors were observed, and major errors were noted only with MicroScan (3.7% major error rate). Minor error rates ranged from 8.2% (agar dilution) to 18.5% (Vitek). Minor errors for all methods were substantially reduced when results with MICs within ±1 dilution of the broth microdilution reference MIC were excluded from analysis. However, the high number of minor errors by all test systems remains a concern. PMID:9986809
Kreilinger, Alex; Hiebel, Hannah; Müller-Putz, Gernot R
2016-03-01
This work aimed to find and evaluate a new method for detecting errors in continuous brain-computer interface (BCI) applications. Instead of classifying errors on a single-trial basis, the new method was based on multiple events (MEs) analysis to increase the accuracy of error detection. In a BCI-driven car game, based on motor imagery (MI), discrete events were triggered whenever subjects collided with coins and/or barriers. Coins counted as correct events, whereas barriers were errors. This new method, termed ME method, combined and averaged the classification results of single events (SEs) and determined the correctness of MI trials, which consisted of event sequences instead of SEs. The benefit of this method was evaluated in an offline simulation. In an online experiment, the new method was used to detect erroneous MI trials. Such MI trials were discarded and could be repeated by the users. We found that, even with low SE error potential (ErrP) detection rates, feasible accuracies can be achieved when combining MEs to distinguish erroneous from correct MI trials. Online, all subjects reached higher scores with error detection than without, at the cost of longer times needed for completing the game. Findings suggest that ErrP detection may become a reliable tool for monitoring continuous states in BCI applications when combining MEs. This paper demonstrates a novel technique for detecting errors in online continuous BCI applications, which yields promising results even with low single-trial detection rates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kertzscher, Gustavo, E-mail: guke@dtu.dk; Andersen, Claus E., E-mail: clan@dtu.dk; Tanderup, Kari, E-mail: karitand@rm.dk
Purpose: This study presents an adaptive error detection algorithm (AEDA) for real-time in vivo point dosimetry during high dose rate (HDR) or pulsed dose rate (PDR) brachytherapy (BT) where the error identification, in contrast to existing approaches, does not depend on an a priori reconstruction of the dosimeter position. Instead, the treatment is judged based on dose rate comparisons between measurements and calculations of the most viable dosimeter position provided by the AEDA in a data driven approach. As a result, the AEDA compensates for false error cases related to systematic effects of the dosimeter position reconstruction. Given its nearly exclusive dependence on stable dosimeter positioning, the AEDA allows for a substantially simplified and time efficient real-time in vivo BT dosimetry implementation. Methods: In the event of a measured potential treatment error, the AEDA proposes the most viable dosimeter position out of alternatives to the original reconstruction by means of a data driven matching procedure between dose rate distributions. If measured dose rates do not differ significantly from the most viable alternative, the initial error indication may be attributed to a mispositioned or misreconstructed dosimeter (false error). However, if the error declaration persists, no viable dosimeter position can be found to explain the error, hence the discrepancy is more likely to originate from a misplaced or misreconstructed source applicator or from erroneously connected source guide tubes (true error). Results: The AEDA applied on two in vivo dosimetry implementations for pulsed dose rate BT demonstrated that the AEDA correctly described effects responsible for initial error indications. The AEDA was able to correctly identify the major part of all permutations of simulated guide tube swap errors and simulated shifts of individual needles from the original reconstruction. Unidentified errors corresponded to scenarios where the dosimeter position was sufficiently symmetric with respect to error and no-error source position constellations. The AEDA was able to correctly identify all false errors represented by mispositioned dosimeters contrary to an error detection algorithm relying on the original reconstruction. Conclusions: The study demonstrates that the AEDA error identification during HDR/PDR BT relies on a stable dosimeter position rather than on an accurate dosimeter reconstruction, and the AEDA’s capacity to distinguish between true and false error scenarios. The study further shows that the AEDA can offer guidance in decision making in the event of potential errors detected with real-time in vivo point dosimetry.
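The data-driven matching step at the heart of the AEDA can be sketched as a search over candidate dosimeter positions for the one whose calculated dose-rate trace best matches the measurement. The candidate grid, the RMS metric, the toy inverse-square dose model, and the tolerance below are illustrative assumptions, not the published algorithm's exact parameters.

```python
import numpy as np

def classify_error(measured, calc_for_position, candidates, tol=0.10):
    """Compare a measured dose-rate trace against calculations for candidate
    dosimeter positions. If some candidate explains the measurement within
    tolerance, the alarm is treated as a false error (mispositioned or
    misreconstructed dosimeter); otherwise it is flagged as a true error."""
    best_pos, best_rms = None, np.inf
    for pos in candidates:
        calc = calc_for_position(pos)
        rms = np.sqrt(np.mean(((measured - calc) / calc) ** 2))
        if rms < best_rms:
            best_pos, best_rms = pos, rms
    label = ("false error (most viable position offset: %.1f mm)" % best_pos
             if best_rms < tol else "true error")
    return label, best_rms

# Toy model: dose rate falls off as 1/r^2 from four source dwell positions.
dwell_r = np.array([10.0, 12.0, 15.0, 20.0])           # mm, assumed geometry
calc = lambda offset: 1e4 / (dwell_r + offset) ** 2     # toy dose-rate calculation

measured = calc(3.0) * (1 + np.random.default_rng(7).normal(0, 0.02, 4))
label, rms = classify_error(measured, calc, candidates=np.arange(-5, 5.5, 0.5))
print(label)    # the 3 mm offset candidate should explain the measurement
```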
Wu, Zhijin; Liu, Dongmei; Sui, Yunxia
2008-02-01
The process of identifying active targets (hits) in high-throughput screening (HTS) usually involves 2 steps: first, removing or adjusting for systematic variation in the measurement process so that extreme values represent strong biological activity instead of systematic biases such as plate effect or edge effect and, second, choosing a meaningful cutoff on the calculated statistic to declare positive compounds. Both false-positive and false-negative errors are inevitable in this process. Common control or estimation of error rates is often based on an assumption of normal distribution of the noise. The error rates in hit detection, especially false-negative rates, are hard to verify because in most assays, only compounds selected in primary screening are followed up in confirmation experiments. In this article, the authors take advantage of a quantitative HTS experiment in which all compounds are tested 42 times over a wide range of 14 concentrations so true positives can be found through a dose-response curve. Using the activity status defined by dose curve, the authors analyzed the effect of various data-processing procedures on the sensitivity and specificity of hit detection, the control of error rate, and hit confirmation. A new summary score is proposed and demonstrated to perform well in hit detection and useful in confirmation rate estimation. In general, adjusting for positional effects is beneficial, but a robust test can prevent overadjustment. Error rates estimated based on normal assumption do not agree with actual error rates, for the tails of noise distribution deviate from normal distribution. However, false discovery rate based on empirically estimated null distribution is very close to observed false discovery proportion.
Experimental investigation of false positive errors in auditory species occurrence surveys
Miller, David A.W.; Weir, Linda A.; McClintock, Brett T.; Grant, Evan H. Campbell; Bailey, Larissa L.; Simons, Theodore R.
2012-01-01
False positive errors are a significant component of many ecological data sets, which in combination with false negative errors, can lead to severe biases in conclusions about ecological systems. We present results of a field experiment where observers recorded observations for known combinations of electronically broadcast calling anurans under conditions mimicking field surveys to determine species occurrence. Our objectives were to characterize false positive error probabilities for auditory methods based on a large number of observers, to determine if targeted instruction could be used to reduce false positive error rates, and to establish useful predictors of among-observer and among-species differences in error rates. We recruited 31 observers, ranging in abilities from novice to expert, that recorded detections for 12 species during 180 calling trials (66,960 total observations). All observers made multiple false positive errors and on average 8.1% of recorded detections in the experiment were false positive errors. Additional instruction had only minor effects on error rates. After instruction, false positive error probabilities decreased by 16% for treatment individuals compared to controls with broad confidence interval overlap of 0 (95% CI: -46 to 30%). This coincided with an increase in false negative errors due to the treatment (26%; -3 to 61%). Differences among observers in false positive and in false negative error rates were best predicted by scores from an online test and a self-assessment of observer ability completed prior to the field experiment. In contrast, years of experience conducting call surveys was a weak predictor of error rates. False positive errors were also more common for species that were played more frequently, but were not related to the dominant spectral frequency of the call. Our results corroborate other work that demonstrates false positives are a significant component of species occurrence data collected by auditory methods. Instructing observers to only report detections they are completely certain are correct is not sufficient to eliminate errors. As a result, analytical methods that account for false positive errors will be needed, and independent testing of observer ability is a useful predictor for among-observer variation in observation error rates.
Durand, Casey P
2013-01-01
Statistical interactions are a common component of data analysis across a broad range of scientific disciplines. However, the statistical power to detect interactions is often undesirably low. One solution is to elevate the Type 1 error rate so that important interactions are not missed in a low power situation. To date, no study has quantified the effects of this practice on power in a linear regression model. A Monte Carlo simulation study was performed. A continuous dependent variable was specified, along with three types of interactions: continuous variable by continuous variable; continuous by dichotomous; and dichotomous by dichotomous. For each of the three scenarios, the interaction effect sizes, sample sizes, and Type 1 error rate were varied, resulting in a total of 240 unique simulations. In general, power to detect the interaction effect was either so low or so high at α = 0.05 that raising the Type 1 error rate only served to increase the probability of including a spurious interaction in the model. A small number of scenarios were identified in which an elevated Type 1 error rate may be justified. Routinely elevating Type 1 error rate when testing interaction effects is not an advisable practice. Researchers are best served by positing interaction effects a priori and accounting for them when conducting sample size calculations.
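A minimal sketch of the kind of Monte Carlo simulation described above: the rejection rate for a continuous-by-dichotomous interaction term in a linear regression at a chosen alpha, which gives power when the interaction is nonzero and the Type 1 error rate when it is zero. The sample size, effect sizes, and number of replicates are illustrative assumptions, and statsmodels provides the OLS fit.

```python
import numpy as np
import statsmodels.api as sm

def interaction_rejection_rate(n=200, beta_int=0.2, alpha=0.05, n_sim=2000, seed=3):
    """Proportion of simulated datasets in which the interaction coefficient is
    significant at alpha (power if beta_int != 0, Type 1 error rate if 0)."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sim):
        x = rng.normal(size=n)                      # continuous predictor
        g = rng.integers(0, 2, size=n)              # dichotomous predictor
        y = 0.3 * x + 0.3 * g + beta_int * x * g + rng.normal(size=n)
        X = sm.add_constant(np.column_stack([x, g, x * g]))
        p_int = sm.OLS(y, X).fit().pvalues[3]       # p-value of the interaction
        hits += p_int < alpha
    return hits / n_sim

print("power at alpha=0.05:", interaction_rejection_rate(alpha=0.05))
print("power at alpha=0.10:", interaction_rejection_rate(alpha=0.10))
print("Type 1 error at alpha=0.10:", interaction_rejection_rate(beta_int=0.0, alpha=0.10))
```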
Cryptographic robustness of a quantum cryptography system using phase-time coding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molotkov, S. N.
2008-01-15
A cryptographic analysis is presented of a new quantum key distribution protocol using phase-time coding. An upper bound is obtained for the error rate that guarantees secure key distribution. It is shown that the maximum tolerable error rate for this protocol depends on the counting rate in the control time slot. When no counts are detected in the control time slot, the protocol guarantees secure key distribution if the bit error rate in the sifted key does not exceed 50%. This protocol partially discriminates between errors due to system defects (e.g., imbalance of a fiber-optic interferometer) and eavesdropping. In the absence of eavesdropping, the counts detected in the control time slot are not caused by interferometer imbalance, which reduces the requirements for interferometer stability.
Adaboost multi-view face detection based on YCgCr skin color model
NASA Astrophysics Data System (ADS)
Lan, Qi; Xu, Zhiyong
2016-09-01
The traditional Adaboost face detection algorithm uses Haar-like features to train face classifiers, which achieve a low detection error rate in face regions. Under complex backgrounds, however, the classifiers easily misclassify background regions whose gray-level distribution resembles that of faces, so the error rate of the traditional Adaboost algorithm is high. As one of the most important features of a face, skin color clusters well in the YCgCr color space, so non-face areas can be quickly excluded with a skin color model. Combining the advantages of the Adaboost algorithm and skin color detection, this paper therefore proposes an Adaboost face detection method based on the YCgCr skin color model. Experiments show that, compared with the traditional algorithm, the proposed method significantly improves detection accuracy and reduces false detections.
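A minimal sketch of the skin pre-filter stage described above: converting RGB pixels to the YCgCr space and thresholding Cg/Cr to build a skin mask that excludes most non-face regions before a cascade detector runs. The conversion coefficients and threshold ranges are common values from the skin-detection literature and should be treated as assumptions rather than the paper's exact parameters.

```python
import numpy as np

def rgb_to_ycgcr(rgb):
    """Convert an (H, W, 3) uint8 RGB image to Y, Cg, Cr planes
    (BT.601-style coefficients, with Cg replacing Cb)."""
    r, g, b = [rgb[..., i].astype(np.float64) / 255.0 for i in range(3)]
    y  =  16 +  65.481 * r + 128.553 * g +  24.966 * b
    cg = 128 -  81.085 * r + 112.000 * g -  30.915 * b
    cr = 128 + 112.000 * r -  93.786 * g -  18.214 * b
    return y, cg, cr

def skin_mask(rgb, cg_range=(85, 135), cr_range=(135, 175)):
    """Boolean mask of candidate skin pixels; regions outside the mask can be
    skipped by the Adaboost cascade to cut false detections on backgrounds."""
    _, cg, cr = rgb_to_ycgcr(rgb)
    return (cg >= cg_range[0]) & (cg <= cg_range[1]) & \
           (cr >= cr_range[0]) & (cr <= cr_range[1])

# Toy image: a skin-toned patch on a gray background.
img = np.full((4, 4, 3), 128, dtype=np.uint8)
img[1:3, 1:3] = (220, 170, 140)          # approximate skin tone
print(skin_mask(img).astype(int))        # 1s only where the skin patch sits
```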
Statistical approaches to account for false-positive errors in environmental DNA samples.
Lahoz-Monfort, José J; Guillera-Arroita, Gurutzeta; Tingley, Reid
2016-05-01
Environmental DNA (eDNA) sampling is prone to both false-positive and false-negative errors. We review statistical methods to account for such errors in the analysis of eDNA data and use simulations to compare the performance of different modelling approaches. Our simulations illustrate that even low false-positive rates can produce biased estimates of occupancy and detectability. We further show that removing or classifying single PCR detections in an ad hoc manner under the suspicion that such records represent false positives, as sometimes advocated in the eDNA literature, also results in biased estimation of occupancy, detectability and false-positive rates. We advocate alternative approaches to account for false-positive errors that rely on prior information, or the collection of ancillary detection data at a subset of sites using a sampling method that is not prone to false-positive errors. We illustrate the advantages of these approaches over ad hoc classifications of detections and provide practical advice and code for fitting these models in maximum likelihood and Bayesian frameworks. Given the severe bias induced by false-negative and false-positive errors, the methods presented here should be more routinely adopted in eDNA studies. © 2015 John Wiley & Sons Ltd.
Fault-tolerant quantum error detection.
Linke, Norbert M; Gutierrez, Mauricio; Landsman, Kevin A; Figgatt, Caroline; Debnath, Shantanu; Brown, Kenneth R; Monroe, Christopher
2017-10-01
Quantum computers will eventually reach a size at which quantum error correction becomes imperative. Quantum information can be protected from qubit imperfections and flawed control operations by encoding a single logical qubit in multiple physical qubits. This redundancy allows the extraction of error syndromes and the subsequent detection or correction of errors without destroying the logical state itself through direct measurement. We show the encoding and syndrome measurement of a fault-tolerantly prepared logical qubit via an error detection protocol on four physical qubits, represented by trapped atomic ions. This demonstrates the robustness of a logical qubit to imperfections in the very operations used to encode it. The advantage persists in the face of large added error rates and experimental calibration errors.
Classification based upon gene expression data: bias and precision of error rates.
Wood, Ian A; Visscher, Peter M; Mengersen, Kerrie L
2007-06-01
Gene expression data offer a large number of potentially useful predictors for the classification of tissue samples into classes, such as diseased and non-diseased. The predictive error rate of classifiers can be estimated using methods such as cross-validation. We have investigated issues of interpretation and potential bias in the reporting of error rate estimates. The issues considered here are optimization and selection biases, sampling effects, measures of misclassification rate, baseline error rates, two-level external cross-validation and a novel proposal for detection of bias using the permutation mean. Reporting an optimal estimated error rate incurs an optimization bias. Downward bias of 3-5% was found in an existing study of classification based on gene expression data and may be endemic in similar studies. Using a simulated non-informative dataset and two example datasets from existing studies, we show how bias can be detected through the use of label permutations and avoided using two-level external cross-validation. Some studies avoid optimization bias by using single-level cross-validation and a test set, but error rates can be more accurately estimated via two-level cross-validation. In addition to estimating the simple overall error rate, we recommend reporting class error rates plus where possible the conditional risk incorporating prior class probabilities and a misclassification cost matrix. We also describe baseline error rates derived from three trivial classifiers which ignore the predictors. R code which implements two-level external cross-validation with the PAMR package, experiment code, dataset details and additional figures are freely available for non-commercial use from http://www.maths.qut.edu.au/profiles/wood/permr.jsp
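A minimal sketch of two practices recommended above: two-level (nested) external cross-validation, so that hyperparameter tuning does not bias the reported error rate, and a label-permutation baseline to reveal optimistic bias. The classifier, tuning grid, and simulated non-informative data are illustrative choices using scikit-learn, not the study's pipeline.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, cross_val_score, StratifiedKFold
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 500))    # simulated non-informative "expression" data
y = np.repeat([0, 1], 30)

# Inner loop tunes the hyperparameter; outer loop estimates the error rate.
inner = GridSearchCV(SVC(kernel="linear"), {"C": [0.01, 0.1, 1, 10]},
                     cv=StratifiedKFold(5))
outer_acc = cross_val_score(inner, X, y, cv=StratifiedKFold(5))
print("nested-CV error rate:", 1 - outer_acc.mean())    # should be near 0.5 here

# Permutation baseline: the same pipeline on shuffled labels should also give
# ~50% error; a much lower value would indicate selection/optimization bias.
perm_acc = cross_val_score(inner, X, rng.permutation(y), cv=StratifiedKFold(5))
print("permuted-label error rate:", 1 - perm_acc.mean())
```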
Mapping DNA polymerase errors by single-molecule sequencing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, David F.; Lu, Jenny; Chang, Seungwoo
Genomic integrity is compromised by DNA polymerase replication errors, which occur in a sequence-dependent manner across the genome. Accurate and complete quantification of a DNA polymerase's error spectrum is challenging because errors are rare and difficult to detect. We report a high-throughput sequencing assay to map in vitro DNA replication errors at the single-molecule level. Unlike previous methods, our assay is able to rapidly detect a large number of polymerase errors at base resolution over any template substrate without quantification bias. To overcome the high error rate of high-throughput sequencing, our assay uses a barcoding strategy in which each replication product is tagged with a unique nucleotide sequence before amplification. Here, this allows multiple sequencing reads of the same product to be compared so that sequencing errors can be found and removed. We demonstrate the ability of our assay to characterize the average error rate, error hotspots and lesion bypass fidelity of several DNA polymerases.
Mapping DNA polymerase errors by single-molecule sequencing
Lee, David F.; Lu, Jenny; Chang, Seungwoo; ...
2016-05-16
Genomic integrity is compromised by DNA polymerase replication errors, which occur in a sequence-dependent manner across the genome. Accurate and complete quantification of a DNA polymerase's error spectrum is challenging because errors are rare and difficult to detect. We report a high-throughput sequencing assay to map in vitro DNA replication errors at the single-molecule level. Unlike previous methods, our assay is able to rapidly detect a large number of polymerase errors at base resolution over any template substrate without quantification bias. To overcome the high error rate of high-throughput sequencing, our assay uses a barcoding strategy in which each replication product is tagged with a unique nucleotide sequence before amplification. Here, this allows multiple sequencing reads of the same product to be compared so that sequencing errors can be found and removed. We demonstrate the ability of our assay to characterize the average error rate, error hotspots and lesion bypass fidelity of several DNA polymerases.
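The barcoding strategy described above can be sketched as grouping reads by their unique molecular barcode and taking a per-position consensus, so that discordant bases (sequencing errors) are masked and only consistent differences from the reference are scored as polymerase errors. The read format, barcode labels, and majority rule below are illustrative assumptions, not the published pipeline.

```python
from collections import Counter, defaultdict

def consensus(reads, min_agreement=0.9):
    """Per-position majority base across reads sharing a barcode; positions
    without strong agreement are masked as 'N' (treated as sequencing noise)."""
    out = []
    for bases in zip(*reads):
        base, count = Counter(bases).most_common(1)[0]
        out.append(base if count / len(bases) >= min_agreement else "N")
    return "".join(out)

def call_polymerase_errors(tagged_reads, reference):
    """Group reads by barcode, build consensus sequences, and report positions
    where the consensus (not an individual read) differs from the reference."""
    groups = defaultdict(list)
    for barcode, read in tagged_reads:
        groups[barcode].append(read)
    errors = []
    for barcode, reads in groups.items():
        cons = consensus(reads)
        errors += [(barcode, i, reference[i], b)
                   for i, b in enumerate(cons) if b not in ("N", reference[i])]
    return errors

ref = "ACGTACGTAC"
reads = [("BC1", "ACGTACGTAC"), ("BC1", "ACGAACGTAC"),           # sequencing error
         ("BC2", "ACGTACCTAC"), ("BC2", "ACGTACCTAC"), ("BC2", "ACGTACCTAC")]
print(call_polymerase_errors(reads, ref))   # only BC2's consistent G->C change remains
```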
Performance of the ICAO standard core service modulation and coding techniques
NASA Technical Reports Server (NTRS)
Lodge, John; Moher, Michael
1988-01-01
Aviation binary phase shift keying (A-BPSK) is described and simulated performance results are given that demonstrate robust performance in the presence of hardlimiting amplifiers. The performance of coherently detected A-BPSK with rate 1/2 convolutional coding is given. The performance loss due to Rician fading was shown to be less than 1 dB over the simulated range. A partially coherent detection scheme that does not require carrier phase recovery is also described. This scheme exhibits similar performance to coherent detection at high bit error rates, while it is superior at lower bit error rates.
Fault-tolerant quantum error detection
Linke, Norbert M.; Gutierrez, Mauricio; Landsman, Kevin A.; Figgatt, Caroline; Debnath, Shantanu; Brown, Kenneth R.; Monroe, Christopher
2017-01-01
Quantum computers will eventually reach a size at which quantum error correction becomes imperative. Quantum information can be protected from qubit imperfections and flawed control operations by encoding a single logical qubit in multiple physical qubits. This redundancy allows the extraction of error syndromes and the subsequent detection or correction of errors without destroying the logical state itself through direct measurement. We show the encoding and syndrome measurement of a fault-tolerantly prepared logical qubit via an error detection protocol on four physical qubits, represented by trapped atomic ions. This demonstrates the robustness of a logical qubit to imperfections in the very operations used to encode it. The advantage persists in the face of large added error rates and experimental calibration errors. PMID:29062889
Design Consideration and Performance of Networked Narrowband Waveforms for Tactical Communications
2010-09-01
Four proposed CPM modes are evaluated, with perfect acquisition parameters, for both coherent and noncoherent detection using an iterative receiver with both inner… [Figure and table captions recovered from the excerpt: Figure 1, bit error rate performance of various CPM modes with coherent and noncoherent detection (coherent results shown as crosses, noncoherent results as diamonds); Figure 3, the corresponding relationship between … symbols; Table 2, summary of the parameters.]
Teerawattananon, Kanlaya; Myint, Chaw-Yin; Wongkittirux, Kwanjai; Teerawattananon, Yot; Chinkulkitnivat, Bunyong; Orprayoon, Surapong; Kusakul, Suwat; Tengtrisorn, Supaporn; Jenchitr, Watanee
2014-01-01
As part of the development of a system for the screening of refractive error in Thai children, this study describes the accuracy and feasibility of establishing a program conducted by teachers. To assess the accuracy and feasibility of screening by teachers. A cross-sectional descriptive and analytical study was conducted in 17 schools in four provinces representing four geographic regions in Thailand. A two-staged cluster sampling was employed to compare the detection rate of refractive error among eligible students between trained teachers and health professionals. Serial focus group discussions were held for teachers and parents in order to understand their attitude towards refractive error screening at schools and the potential success factors and barriers. The detection rate of refractive error screening by teachers among pre-primary school children is relatively low (21%) for mild visual impairment but higher for moderate visual impairment (44%). The detection rate for primary school children is high for both levels of visual impairment (52% for mild and 74% for moderate). The focus group discussions reveal that both teachers and parents would benefit from further education regarding refractive errors and that the vast majority of teachers are willing to conduct a school-based screening program. Refractive error screening by health professionals in pre-primary and primary school children is not currently implemented in Thailand due to resource limitations. However, evidence suggests that a refractive error screening program conducted in schools by teachers in the country is reasonable and feasible because the detection and treatment of refractive error in very young generations is important and the screening program can be implemented and conducted with relatively low costs.
Detecting Signatures of GRACE Sensor Errors in Range-Rate Residuals
NASA Astrophysics Data System (ADS)
Goswami, S.; Flury, J.
2016-12-01
In order to reach the accuracy of the GRACE baseline predicted earlier from design simulations, efforts have been ongoing for a decade. The GRACE error budget is dominated by noise from sensors, dealiasing models, and modeling errors. GRACE range-rate residuals contain these errors; thus, their analysis provides insight into the individual contributions to the error budget. Hence, we analyze the range-rate residuals with a focus on the contribution of sensor errors due to mis-pointing and poor ranging performance in GRACE solutions. For the analysis of pointing errors, we consider two different reprocessed attitude datasets with differences in pointing performance. Range-rate residuals are then computed from these two datasets, respectively, and analysed. We further compare the system noise of the four K- and Ka-band frequencies of the two spacecraft with the range-rate residuals. Strong signatures of mis-pointing errors can be seen in the range-rate residuals, and correlations between range frequency noise and range-rate residuals are also seen.
Determination of Type I Error Rates and Power of Answer Copying Indices under Various Conditions
ERIC Educational Resources Information Center
Yormaz, Seha; Sünbül, Önder
2017-01-01
This study aims to determine the Type I error rates and power of S[subscript 1] , S[subscript 2] indices and kappa statistic at detecting copying on multiple-choice tests under various conditions. It also aims to determine how copying groups are created in order to calculate how kappa statistics affect Type I error rates and power. In this study,…
Passarge, Michelle; Fix, Michael K; Manser, Peter; Stampanoni, Marco F M; Siebers, Jeffrey V
2017-04-01
To develop a robust and efficient process that detects relevant dose errors (dose errors of ≥5%) in external beam radiation therapy and directly indicates the origin of the error. The process is illustrated in the context of electronic portal imaging device (EPID)-based angle-resolved volumetric-modulated arc therapy (VMAT) quality assurance (QA), particularly as would be implemented in a real-time monitoring program. A Swiss cheese error detection (SCED) method was created as a paradigm for a cine EPID-based during-treatment QA. For VMAT, the method compares a treatment plan-based reference set of EPID images with images acquired over each 2° gantry angle interval. The process utilizes a sequence of independent consecutively executed error detection tests: an aperture check that verifies in-field radiation delivery and ensures no out-of-field radiation; output normalization checks at two different stages; global image alignment check to examine if rotation, scaling, and translation are within tolerances; pixel intensity check containing the standard gamma evaluation (3%, 3 mm) and pixel intensity deviation checks including and excluding high dose gradient regions. Tolerances for each check were determined. To test the SCED method, 12 different types of errors were selected to modify the original plan. A series of angle-resolved predicted EPID images were artificially generated for each test case, resulting in a sequence of precalculated frames for each modified treatment plan. The SCED method was applied multiple times for each test case to assess the ability to detect introduced plan variations. To compare the performance of the SCED process with that of a standard gamma analysis, both error detection methods were applied to the generated test cases with realistic noise variations. Averaged over ten test runs, 95.1% of all plan variations that resulted in relevant patient dose errors were detected within 2° and 100% within 14° (<4% of patient dose delivery). Including cases that led to slightly modified but clinically equivalent plans, 89.1% were detected by the SCED method within 2°. Based on the type of check that detected the error, determination of error sources was achieved. With noise ranging from no random noise to four times the established noise value, the averaged relevant dose error detection rate of the SCED method was between 94.0% and 95.8% and that of gamma between 82.8% and 89.8%. An EPID-frame-based error detection process for VMAT deliveries was successfully designed and tested via simulations. The SCED method was inspected for robustness with realistic noise variations, demonstrating that it has the potential to detect a large majority of relevant dose errors. Compared to a typical (3%, 3 mm) gamma analysis, the SCED method produced a higher detection rate for all introduced dose errors, identified errors in an earlier stage, displayed a higher robustness to noise variations, and indicated the error source. © 2017 American Association of Physicists in Medicine.
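The pixel-intensity check above includes a standard gamma evaluation (3%, 3 mm). A minimal 1D sketch of the global gamma index and passing rate follows (brute-force search over reference points, global dose normalization); the grid spacing and test profiles are illustrative, and production implementations use 2D/3D data and interpolation.

```python
import numpy as np

def gamma_1d(ref_dose, eval_dose, spacing_mm, dose_tol=0.03, dist_tol_mm=3.0):
    """Global 1D gamma index: for each evaluated point, minimize the combined
    dose-difference / distance-to-agreement metric over all reference points."""
    x = np.arange(len(ref_dose)) * spacing_mm
    d_max = ref_dose.max()                          # global dose normalization
    gammas = np.empty(len(eval_dose))
    for i, (xi, de) in enumerate(zip(x, eval_dose)):
        dd = (de - ref_dose) / (dose_tol * d_max)
        dr = (xi - x) / dist_tol_mm
        gammas[i] = np.sqrt(dd**2 + dr**2).min()
    return gammas

# Reference profile and an evaluated profile with a small shift plus a local error.
xs = np.linspace(-20, 20, 81)                       # 0.5 mm spacing
ref = np.exp(-xs**2 / 150.0)
ev = np.exp(-(xs - 0.8)**2 / 150.0)
ev[40] *= 1.08                                      # 8% local dose error
g = gamma_1d(ref, ev, spacing_mm=0.5)
print(f"gamma passing rate (3%/3 mm): {(g <= 1).mean():.1%}")
```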
Invariance of the bit error rate in the ancilla-assisted homodyne detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoshida, Yuhsuke; Takeoka, Masahiro; Sasaki, Masahide
2010-11-15
We investigate the minimum achievable bit error rate of the discrimination of binary coherent states with the help of arbitrary ancillary states. We adopt homodyne measurement with a common phase of the local oscillator and classical feedforward control. After one ancillary state is measured, its outcome is referred to the preparation of the next ancillary state and the tuning of the next mixing with the signal. It is shown that the minimum bit error rate of the system is invariant under the following operations: feedforward control, deformations, and introduction of any ancillary state. We also discuss the possible generalization of the homodyne detection scheme.
NASA Technical Reports Server (NTRS)
Heck, M. L.; Findlay, J. T.; Compton, H. R.
1983-01-01
The Aerodynamic Coefficient Identification Package (ACIP) is an instrument consisting of body mounted linear accelerometers, rate gyros, and angular accelerometers for measuring the Space Shuttle vehicular dynamics. The high rate recorded data are utilized for postflight aerodynamic coefficient extraction studies. Although consistent with pre-mission accuracies specified by the manufacturer, the ACIP data were found to contain detectable levels of systematic error, primarily bias, as well as scale factor, static misalignment, and temperature dependent errors. This paper summarizes the technique whereby the systematic ACIP error sources were detected, identified, and calibrated with the use of recorded dynamic data from the low rate, highly accurate Inertial Measurement Units.
Quantitative evaluation of patient-specific quality assurance using online dosimetry system
NASA Astrophysics Data System (ADS)
Jung, Jae-Yong; Shin, Young-Ju; Sohn, Seung-Chang; Min, Jung-Whan; Kim, Yon-Lae; Kim, Dong-Su; Choe, Bo-Young; Suh, Tae-Suk
2018-01-01
In this study, we investigated the clinical performance of an online dosimetry system (Mobius FX system, MFX) by 1) dosimetric plan verification using gamma passing rates and dose volume metrics and 2) error-detection capability evaluation by deliberately introduced machine error. Eighteen volumetric modulated arc therapy (VMAT) plans were studied. To evaluate the clinical performance of the MFX, we used gamma analysis and dose volume histogram (DVH) analysis. In addition, to evaluate the error-detection capability, we used gamma analysis and DVH analysis utilizing three types of deliberately introduced errors (Type 1: gantry angle-independent multi-leaf collimator (MLC) error, Type 2: gantry angle-dependent MLC error, and Type 3: gantry angle error). In the dosimetric verification comparison of the physical dosimetry system (Delta4PT) and the online dosimetry system (MFX), the gamma passing rates of the two dosimetry systems showed very good agreement with the treatment planning system (TPS) calculation. For the average dose difference between the TPS calculation and the MFX measurement, most of the dose metrics showed good agreement within a tolerance of 3%. For the error-detection comparison of Delta4PT and MFX, the gamma passing rates of the two dosimetry systems did not meet the 90% acceptance criterion with the magnitude of error exceeding 2 mm and 1.5°, respectively, for error plans of Types 1, 2, and 3. For delivery with all error types, the average dose difference of the PTV due to error magnitude showed good agreement between the TPS calculation and the MFX measurement within 1%. Overall, the results of the online dosimetry system showed very good agreement with those of the physical dosimetry system. Our results suggest that a log file-based online dosimetry system is a very suitable verification tool for accurate and efficient clinical routines for patient-specific quality assurance (QA).
Simulating and Detecting Radiation-Induced Errors for Onboard Machine Learning
NASA Technical Reports Server (NTRS)
Wagstaff, Kiri L.; Bornstein, Benjamin; Granat, Robert; Tang, Benyang; Turmon, Michael
2009-01-01
Spacecraft processors and memory are subjected to high radiation doses and therefore employ radiation-hardened components. However, these components are orders of magnitude more expensive than typical desktop components, and they lag years behind in terms of speed and size. We have integrated algorithm-based fault tolerance (ABFT) methods into onboard data analysis algorithms to detect radiation-induced errors, which ultimately may permit the use of spacecraft memory that need not be fully hardened, reducing cost and increasing capability at the same time. We have also developed a lightweight software radiation simulator, BITFLIPS, that permits evaluation of error detection strategies in a controlled fashion, including the specification of the radiation rate and selective exposure of individual data structures. Using BITFLIPS, we evaluated our error detection methods when using a support vector machine to analyze data collected by the Mars Odyssey spacecraft. We found ABFT error detection for matrix multiplication is very successful, while error detection for Gaussian kernel computation still has room for improvement.
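Algorithm-based fault tolerance for matrix multiplication is commonly implemented with row and column checksums: the operands are extended with checksum vectors, and an inconsistent checksum in the product reveals a radiation-induced corruption. The following sketch illustrates that checksum idea under simple assumptions; it is not the BITFLIPS simulator or the flight code.

```python
import numpy as np

# Sketch of checksum-based ABFT for C = A @ B: append a column-checksum row to
# A and a row-checksum column to B; the checksums carried through the product
# must match the sums of the data block, otherwise a fault occurred during the
# computation or while the result sat in memory. Illustrative only.

def abft_matmul(A, B):
    Ac = np.vstack([A, A.sum(axis=0, keepdims=True)])   # column-checksum row
    Br = np.hstack([B, B.sum(axis=1, keepdims=True)])   # row-checksum column
    return Ac @ Br                                       # extended product

def verify(C_ext, tol=1e-8):
    C = C_ext[:-1, :-1]
    row_ok = np.allclose(C_ext[-1, :-1], C.sum(axis=0), atol=tol)
    col_ok = np.allclose(C_ext[:-1, -1], C.sum(axis=1), atol=tol)
    return row_ok and col_ok

rng = np.random.default_rng(0)
A, B = rng.normal(size=(4, 3)), rng.normal(size=(3, 5))
C_ext = abft_matmul(A, B)
print("clean product passes:", verify(C_ext))      # True

C_ext[1, 2] += 0.5                                  # simulated bit-flip damage
print("corrupted product passes:", verify(C_ext))  # False
```

Detection costs only one extra row and column per multiplication; intersecting the inconsistent row and column checksums can also localize, and in simple cases correct, the faulty element.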
Kuchenbecker, Joern
2018-05-22
Pseudoisochromatic colour plates are constructed according to specific principles. They can be very different in quality. To check the diagnostic quality, they have to be tested on a large number of subjects, but this procedure can be tedious and expensive. Therefore, the use of a standardised web-based test is recommended. Eight Pflüger trident colour plates (including 1 demo plate) according to the Velhagen edition of 1980 were digitised and inserted into a web-based colour vision test (www.color-vision-test.info). After visual display calibration and 2 demonstrations of the demo plate (#1) to introduce the test procedure, 7 red-green colour plates (#3, 4, 10, 11, 12, 13, 16) were presented in a randomised order in 3 different randomised positions each for 10 seconds. The user had to specify the opening of the Pflüger trident by a mouse click or arrow keys. 6360 evaluations of all plates from 2120 randomised subjects were included. Without error, the detection rates of the plates were between 72.2% (plate #3) and 90.7% (plate #16; n = 6360). With an error number of 7 errors per test, the detection rates of the plates were between 21.6% (plate #3) and 67.7% (plate #16; n = 1556). If an error number of 14 errors was used, the detection rates of the plates were between 10.9% (plate #11) and 40.1% (plate #16; n = 606). Plate #16 showed the highest detection rate - at zero error number as well as at the 7 and 14 error limit. The diagnostic quality of this plate was low. The colourimetric data were improved. The detection rate was then significantly lower. The differences in quality of pseudoisochromatic Pflüger trident colour plates can be tested without great effort using a web-based test. Optimisation of a poor quality colour plate can then be carried out. Georg Thieme Verlag KG Stuttgart · New York.
Component Analysis of Errors on PERSIANN Precipitation Estimates over Urmia Lake Basin, IRAN
NASA Astrophysics Data System (ADS)
Ghajarnia, N.; Daneshkar Arasteh, P.; Liaghat, A. M.; Araghinejad, S.
2016-12-01
In this study, the PERSIANN daily dataset is evaluated from 2000 to 2011 in 69 pixels over the Urmia Lake basin in the northwest of Iran. Different analytical approaches and indexes are used to examine PERSIANN precision in the detection and estimation of rainfall rate. The residuals are decomposed into Hit, Miss and False Alarm (FA) estimation biases, while a continuous decomposition of systematic and random error components is also analyzed seasonally and categorically. A new interpretation of estimation accuracy, named "reliability of PERSIANN estimations", is introduced, while the changing behavior of existing categorical/statistical measures and error components is also seasonally analyzed over different rainfall rate categories. This study yields new insights into the nature of PERSIANN errors over the Urmia Lake basin, a semi-arid region in the Middle East, including the following: - The analyzed contingency table indexes indicate better detection precision during spring and fall. - A relatively constant level of error is generally observed among different categories. The range of precipitation estimates at different rainfall rate categories is nearly invariant, a sign of the existence of systematic error. - A low level of reliability is observed in PERSIANN estimations at different categories, mostly associated with a high level of FA error. However, as the rate of precipitation increases, the ability and precision of PERSIANN in rainfall detection also increase. - The systematic and random error decomposition in this area shows that PERSIANN has more difficulty in modeling the system and pattern of rainfall than bias due to rainfall uncertainties. The level of systematic error also increases considerably in heavier rainfalls. It is also important to note that PERSIANN error characteristics vary from season to season due to the conditions and rainfall patterns of each season, which shows the necessity of a seasonally different approach for the calibration of this product. Overall, we believe that the error component analyses performed in this study can substantially help further local studies for post-calibration and bias reduction of PERSIANN estimations.
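The categorical indices and the Hit/Miss/False-Alarm bias decomposition referred to above are computed from the rain/no-rain contingency table of paired satellite and reference observations. The sketch below shows one plausible way to compute them; the 0.1 mm/day threshold and variable names are assumptions, not values from the study.

```python
import numpy as np

# Sketch of contingency-table indices (POD, FAR) and a Hit/Miss/False-Alarm
# decomposition of the total bias for a satellite product vs. a gauge
# reference. The 0.1 mm/day rain/no-rain threshold is an assumption.

def evaluate(sat, ref, thresh=0.1):
    sat, ref = np.asarray(sat, float), np.asarray(ref, float)
    hit  = (sat >= thresh) & (ref >= thresh)
    miss = (sat <  thresh) & (ref >= thresh)
    fa   = (sat >= thresh) & (ref <  thresh)
    pod = hit.sum() / max(hit.sum() + miss.sum(), 1)   # probability of detection
    far = fa.sum()  / max(hit.sum() + fa.sum(), 1)     # false alarm ratio
    bias = {
        "hit_bias":  (sat[hit] - ref[hit]).sum(),      # error on detected rain
        "miss_bias": (-ref[miss]).sum(),               # rain entirely missed
        "fa_bias":   sat[fa].sum(),                    # rain reported where none fell
    }
    bias["total"] = sum(bias.values())                 # equals sum(sat - ref)
    return pod, far, bias

sat = [0.0, 2.5, 7.0, 0.4, 0.0]   # toy daily estimates (mm/day)
ref = [0.0, 3.0, 5.0, 0.0, 1.2]   # toy gauge reference (mm/day)
print(evaluate(sat, ref))
```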
Metacognition and proofreading: the roles of aging, motivation, and interest.
Hargis, Mary B; Yue, Carole L; Kerr, Tyson; Ikeda, Kenji; Murayama, Kou; Castel, Alan D
2017-03-01
The current study examined younger and older adults' error detection accuracy, prediction calibration, and postdiction calibration on a proofreading task, to determine if age-related differences would be present in this type of common error detection task. Participants were given text passages, and were first asked to predict the percentage of errors they would detect in the passage. They then read the passage and circled errors (which varied in complexity and locality), and made postdictions regarding their performance, before repeating this with another passage and answering a comprehension test of both passages. There were no age-related differences in error detection accuracy, text comprehension, or metacognitive calibration, though participants in both age groups were overconfident overall in their metacognitive judgments. Both groups gave similar ratings of motivation to complete the task. The older adults rated the passages as more interesting than younger adults did, although this level of interest did not appear to influence error-detection performance. The age equivalence in both proofreading ability and calibration suggests that the ability to proofread text passages and the associated metacognitive monitoring used in judging one's own performance are maintained in aging. These age-related similarities persisted when younger adults completed the proofreading tasks on a computer screen, rather than with paper and pencil. The findings provide novel insights regarding the influence that cognitive aging may have on metacognitive accuracy and text processing in an everyday task.
Improved Conflict Detection for Reducing Operational Errors in Air Traffic Control
NASA Technical Reports Server (NTRS)
Paielli, Russell A.; Erzberger, Hainz
2003-01-01
An operational error is an incident in which an air traffic controller allows the separation between two aircraft to fall below the minimum separation standard. The rates of such errors in the US have increased significantly over the past few years. This paper proposes new detection methods that can help correct this trend by improving on the performance of Conflict Alert, the existing software in the Host Computer System that is intended to detect and warn controllers of imminent conflicts. In addition to the usual trajectory based on the flight plan, a "dead-reckoning" trajectory (current velocity projection) is also generated for each aircraft and checked for conflicts. Filters for reducing common types of false alerts were implemented. The new detection methods were tested in three different ways. First, a simple flightpath command language was developed to generate precisely controlled encounters for the purpose of testing the detection software. Second, written reports and tracking data were obtained for actual operational errors that occurred in the field, and these were "replayed" to test the new detection algorithms. Finally, the detection methods were used to shadow live traffic, and performance was analysed, particularly with regard to the false-alert rate. The results indicate that the new detection methods can provide timely warnings of imminent conflicts more consistently than Conflict Alert.
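A dead-reckoning trajectory is simply a straight-line projection of each aircraft's current state at its current velocity, after which the detector searches for a future time at which the pair violates the separation standard. The following sketch illustrates that probe under assumed units and thresholds; it is not the Conflict Alert code or the algorithm evaluated in the paper.

```python
import numpy as np

# Dead-reckoning conflict probe: project current positions forward at current
# velocity and report the first time horizontal and vertical separation are
# both below the standard. Units (NM, knots, feet) and the 5 NM / 1000 ft
# thresholds are assumptions for illustration.

def first_conflict(p1, v1, p2, v2, horizon_min=5.0, step_s=10.0,
                   h_sep_nm=5.0, v_sep_ft=1000.0):
    """p = (x_nm, y_nm, alt_ft); v = (vx_kt, vy_kt, climb_fpm)."""
    p1, v1, p2, v2 = (np.asarray(a, float) for a in (p1, v1, p2, v2))
    for t_s in np.arange(0.0, horizon_min * 60.0 + step_s, step_s):
        xy1 = p1[:2] + v1[:2] * t_s / 3600.0        # knots -> NM
        xy2 = p2[:2] + v2[:2] * t_s / 3600.0
        alt1 = p1[2] + v1[2] * t_s / 60.0           # ft/min -> ft
        alt2 = p2[2] + v2[2] * t_s / 60.0
        if np.linalg.norm(xy1 - xy2) < h_sep_nm and abs(alt1 - alt2) < v_sep_ft:
            return t_s   # seconds until predicted loss of separation
    return None

# Two aircraft head-on at the same altitude, 20 NM apart, 480 kt each.
print(first_conflict((0, 0, 33000), (480, 0, 0), (20, 0, 33000), (-480, 0, 0)))
```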
Caffeine enhances real-world language processing: evidence from a proofreading task.
Brunyé, Tad T; Mahoney, Caroline R; Rapp, David N; Ditman, Tali; Taylor, Holly A
2012-03-01
Caffeine has become the most prevalently consumed psychostimulant in the world, but its influences on daily real-world functioning are relatively unknown. The present work investigated the effects of caffeine (0 mg, 100 mg, 200 mg, 400 mg) on a commonplace language task that required readers to identify and correct 4 error types in extended discourse: simple local errors (misspelling 1- to 2-syllable words), complex local errors (misspelling 3- to 5-syllable words), simple global errors (incorrect homophones), and complex global errors (incorrect subject-verb agreement and verb tense). In 2 placebo-controlled, double-blind studies using repeated-measures designs, we found higher detection and repair rates for complex global errors, asymptoting at 200 mg in low consumers (Experiment 1) and peaking at 400 mg in high consumers (Experiment 2). In both cases, covariate analyses demonstrated that arousal state mediated the relationship between caffeine consumption and the detection and repair of complex global errors. Detection and repair rates for the other 3 error types were not affected by caffeine consumption. Taken together, we demonstrate that caffeine has differential effects on error detection and repair as a function of dose and error type, and this relationship is closely tied to caffeine's effects on subjective arousal state. These results support the notion that central nervous system stimulants may enhance global processing of language-based materials and suggest that such effects may originate in caffeine-related right hemisphere brain processes. Implications for understanding the relationships between caffeine consumption and real-world cognitive functioning are discussed. PsycINFO Database Record (c) 2012 APA, all rights reserved.
A data-driven modeling approach to stochastic computation for low-energy biomedical devices.
Lee, Kyong Ho; Jang, Kuk Jin; Shoeb, Ali; Verma, Naveen
2011-01-01
Low-power devices that can detect clinically relevant correlations in physiologically-complex patient signals can enable systems capable of closed-loop response (e.g., controlled actuation of therapeutic stimulators, continuous recording of disease states, etc.). In ultra-low-power platforms, however, hardware error sources are becoming increasingly limiting. In this paper, we present how data-driven methods, which allow us to accurately model physiological signals, also allow us to effectively model and overcome prominent hardware error sources with nearly no additional overhead. Two applications, EEG-based seizure detection and ECG-based arrhythmia-beat classification, are synthesized to a logic-gate implementation, and two prominent error sources are introduced: (1) SRAM bit-cell errors and (2) logic-gate switching errors ('stuck-at' faults). Using patient data from the CHB-MIT and MIT-BIH databases, performance similar to error-free hardware is achieved even for very high fault rates (up to 0.5 for SRAMs and 7 × 10⁻² for logic) that cause computational bit error rates as high as 50%.
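Hardware faults of the kind studied here are often emulated in software by flipping stored bits with a chosen probability. The toy injector below illustrates SRAM-style bit errors on quantized feature words; the word width and fault rates are illustrative assumptions rather than the paper's hardware error model.

```python
import numpy as np

# Toy SRAM bit-flip injector: each bit of each 16-bit word is flipped
# independently with probability fault_rate. The word width and rates are
# illustrative; the paper's hardware error model is more detailed.

def inject_bit_errors(words, fault_rate, n_bits=16, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    words = np.asarray(words, dtype=np.uint16).copy()
    for bit in range(n_bits):
        flips = rng.random(words.shape) < fault_rate
        words[flips] ^= np.uint16(1 << bit)
    return words

rng = np.random.default_rng(1)
clean = np.arange(1000, dtype=np.uint16)            # stand-in for stored features
noisy = inject_bit_errors(clean, fault_rate=1e-2, rng=rng)
bit_errors = np.unpackbits((clean ^ noisy).view(np.uint8)).mean()
print(f"word error rate: {(clean != noisy).mean():.3f}, "
      f"bit error rate: {bit_errors:.4f}")
```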
ADEPT, a dynamic next generation sequencing data error-detection program with trimming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Shihai; Lo, Chien-Chi; Li, Po-E
Illumina is the most widely used next generation sequencing technology and produces millions of short reads that contain errors. These sequencing errors constitute a major problem in applications such as de novo genome assembly, metagenomics analysis and single nucleotide polymorphism discovery. In this study, we present ADEPT, a dynamic error detection method based on the quality scores of each nucleotide and its neighboring nucleotides, together with their positions within the read, which are compared to the position-specific quality score distribution of all bases within the sequencing run. This method greatly improves upon other available methods in terms of the true positive rate of error discovery without affecting the false positive rate, particularly within the middle of reads. We conclude that ADEPT is the only tool to date that dynamically assesses errors within reads by comparing position-specific and neighboring base quality scores with the distribution of quality scores for the dataset being analyzed. The result is a method that is less prone to position-dependent under-prediction, which is one of the most prominent issues in error prediction. The outcome is that ADEPT improves upon prior efforts in identifying true errors, primarily within the middle of reads, while reducing the false positive rate.
ADEPT, a dynamic next generation sequencing data error-detection program with trimming
Feng, Shihai; Lo, Chien-Chi; Li, Po-E; ...
2016-02-29
Illumina is the most widely used next generation sequencing technology and produces millions of short reads that contain errors. These sequencing errors constitute a major problem in applications such as de novo genome assembly, metagenomics analysis and single nucleotide polymorphism discovery. In this study, we present ADEPT, a dynamic error detection method based on the quality scores of each nucleotide and its neighboring nucleotides, together with their positions within the read, which are compared to the position-specific quality score distribution of all bases within the sequencing run. This method greatly improves upon other available methods in terms of the true positive rate of error discovery without affecting the false positive rate, particularly within the middle of reads. We conclude that ADEPT is the only tool to date that dynamically assesses errors within reads by comparing position-specific and neighboring base quality scores with the distribution of quality scores for the dataset being analyzed. The result is a method that is less prone to position-dependent under-prediction, which is one of the most prominent issues in error prediction. The outcome is that ADEPT improves upon prior efforts in identifying true errors, primarily within the middle of reads, while reducing the false positive rate.
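The central comparison described in both records above, judging a base's quality score against the position-specific score distribution of the whole run together with its neighbors, can be caricatured as a per-position z-score test. The sketch below is a hedged illustration of that idea only; the threshold, smoothing window, and synthetic data are assumptions and do not reproduce ADEPT's actual model.

```python
import numpy as np

# Hedged sketch of position-aware quality filtering: flag a base as a likely
# error when its Phred quality, judged with its neighbors, falls far below the
# run-wide distribution at the same read position. The z-score threshold and
# window are assumptions; ADEPT's real model is more elaborate.

def positional_stats(quality_matrix):
    """quality_matrix: (n_reads, read_len) array of Phred scores for the run."""
    q = np.asarray(quality_matrix, float)
    return q.mean(axis=0), q.std(axis=0) + 1e-9

def flag_errors(read_quals, pos_mean, pos_std, z_thresh=-2.0, window=1):
    z = (np.asarray(read_quals, float) - pos_mean) / pos_std
    kernel = np.ones(2 * window + 1) / (2 * window + 1)
    z_local = np.convolve(z, kernel, mode="same")   # include neighboring bases
    return np.where(z_local < z_thresh)[0]          # positions flagged as errors

run = np.clip(np.random.default_rng(2).normal(34, 3, size=(5000, 100)), 2, 40)
mean, std = positional_stats(run)
read = run[0].copy(); read[48:51] = 8               # synthetic low-quality patch
print(flag_errors(read, mean, std))
```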
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gonzalez, P; Olaciregui-Ruiz, I; Mijnheer, B
2016-06-15
Purpose: To investigate the sensitivity of an EPID-based 3D dose verification system to detect delivery errors in VMAT treatments. Methods: For this study 41 EPID-reconstructed 3D in vivo dose distributions of 15 different VMAT plans (H&N, lung, prostate and rectum) were selected. To simulate the effect of delivery errors, their TPS plans were modified by: 1) scaling of the monitor units by ±3% and ±6% and 2) systematic shifting of leaf bank positions by ±1mm, ±2mm and ±5mm. The 3D in vivo dose distributions were then compared to the unmodified and modified treatment plans. To determine the detectability of the various delivery errors, we made use of a receiver operator characteristic (ROC) methodology. True positive and false positive rates were calculated as a function of the γ-parameters γmean, γ1% (near-maximum γ) and the PTV dose parameter ΔD50 (i.e. D50(EPID) − D50(TPS)). The ROC curve is constructed by plotting the true positive rate vs. the false positive rate. The area under the ROC curve (AUC) then serves as a measure of the performance of the EPID dosimetry system in detecting a particular error; an ideal system has AUC=1. Results: The AUC ranges for the machine output errors and systematic leaf position errors were [0.64 – 0.93] and [0.48 – 0.92] respectively using γmean, [0.57 – 0.79] and [0.46 – 0.85] using γ1% and [0.61 – 0.77] and [0.48 – 0.62] using ΔD50. Conclusion: For the verification of VMAT deliveries, the parameter γmean is the best discriminator for the detection of systematic leaf position errors and monitor unit scaling errors. Compared to γmean and γ1%, the parameter ΔD50 performs worse as a discriminator in all cases.
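The ROC methodology described above treats a γ or dose metric as a score that separates error-containing deliveries from error-free ones and integrates the true positive rate over the false positive rate. A brief sketch with synthetic metric values (the numbers are made up, and γmean is used only as an example of a candidate discriminator):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Sketch of the ROC/AUC methodology: label each verification as error (1) or
# error-free (0) and use a discriminating metric as the score. Values below
# are synthetic; in the study the candidate metrics were gamma_mean,
# gamma_1% and the PTV D50 difference.

rng = np.random.default_rng(3)
labels = np.r_[np.zeros(40), np.ones(40)]              # 0 = unmodified plan
gamma_mean = np.r_[rng.normal(0.35, 0.05, 40),         # error-free deliveries
                   rng.normal(0.55, 0.12, 40)]         # deliveries with errors

auc = roc_auc_score(labels, gamma_mean)
fpr, tpr, thresholds = roc_curve(labels, gamma_mean)
best = np.argmax(tpr - fpr)                            # Youden-style operating point
print(f"AUC = {auc:.2f}, alert threshold ~ {thresholds[best]:.2f}")
```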
Commers, Tessa; Swindells, Susan; Sayles, Harlan; Gross, Alan E; Devetten, Marcel; Sandkovsky, Uriel
2014-01-01
Errors in prescribing antiretroviral therapy (ART) often occur with the hospitalization of HIV-infected patients. The rapid identification and prevention of errors may reduce patient harm and healthcare-associated costs. A retrospective review of hospitalized HIV-infected patients was carried out between 1 January 2009 and 31 December 2011. Errors were documented as omission, underdose, overdose, duplicate therapy, incorrect scheduling and/or incorrect therapy. The time to error correction was recorded. Relative risks (RRs) were computed to evaluate patient characteristics and error rates. A total of 289 medication errors were identified in 146/416 admissions (35%). The most common was drug omission (69%). At an error rate of 31%, nucleoside reverse transcriptase inhibitors were associated with an increased risk of error when compared with protease inhibitors (RR 1.32; 95% CI 1.04-1.69) and co-formulated drugs (RR 1.59; 95% CI 1.19-2.09). Of the errors, 31% were corrected within the first 24 h, but over half (55%) were never remedied. Admissions with an omission error were 7.4 times more likely to have all errors corrected within 24 h than were admissions without an omission. Drug interactions with ART were detected on 51 occasions. For the study population (n = 177), an increased risk of admission error was observed for black (43%) compared with white (28%) individuals (RR 1.53; 95% CI 1.16-2.03) but no significant differences were observed between white patients and other minorities or between men and women. Errors in inpatient ART were common, and the majority were never detected. The most common errors involved omission of medication, and nucleoside reverse transcriptase inhibitors had the highest rate of prescribing error. Interventions to prevent and correct errors are urgently needed.
Generalized site occupancy models allowing for false positive and false negative errors
Royle, J. Andrew; Link, W.A.
2006-01-01
Site occupancy models have been developed that allow for imperfect species detection or 'false negative' observations. Such models have become widely adopted in surveys of many taxa. The most fundamental assumption underlying these models is that 'false positive' errors are not possible. That is, one cannot detect a species where it does not occur. However, such errors are possible in many sampling situations for a number of reasons, and even low false positive error rates can induce extreme bias in estimates of site occupancy when they are not accounted for. In this paper, we develop a model for site occupancy that allows for both false negative and false positive error rates. This model can be represented as a two-component finite mixture model and can be easily fitted using freely available software. We provide an analysis of avian survey data using the proposed model and present results of a brief simulation study evaluating the performance of the maximum-likelihood estimator and the naive estimator in the presence of false positive errors.
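The two-component finite mixture has a compact likelihood: a site is occupied with probability ψ and generates detections with probability p11 per visit, or is unoccupied and still generates false positive detections with probability p10. Below is a hedged maximum-likelihood sketch of that model on simulated data; the parameter values, starting point, and optimizer choice are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom

# Site-occupancy model allowing false positives: for site i with y_i
# detections in J visits,
#   L_i = psi * Binom(y_i | J, p11) + (1 - psi) * Binom(y_i | J, p10).
# Simulated data and starting values are illustrative assumptions.

def neg_log_lik(theta, y, J):
    psi, p11, p10 = 1.0 / (1.0 + np.exp(-np.asarray(theta)))   # logit -> (0, 1)
    lik = psi * binom.pmf(y, J, p11) + (1 - psi) * binom.pmf(y, J, p10)
    return -np.sum(np.log(lik + 1e-300))

rng = np.random.default_rng(4)
J, n_sites, psi_true, p11_true, p10_true = 5, 300, 0.6, 0.5, 0.05
occupied = rng.random(n_sites) < psi_true
y = np.where(occupied, rng.binomial(J, p11_true, n_sites),
                       rng.binomial(J, p10_true, n_sites))

fit = minimize(neg_log_lik, x0=np.array([0.0, 1.0, -2.0]),
               args=(y, J), method="Nelder-Mead")
psi, p11, p10 = 1.0 / (1.0 + np.exp(-fit.x))
print(f"psi={psi:.2f}  p11={p11:.2f}  p10={p10:.2f}")
```

A starting point with p11 > p10 helps the optimizer avoid the label-switched solution; in practice the model is most useful when false positive rates are low relative to true detection rates.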
Emergency department discharge prescription errors in an academic medical center
Belanger, April; Devine, Lauren T.; Lane, Aaron; Condren, Michelle E.
2017-01-01
This study described discharge prescription medication errors written for emergency department patients. This study used content analysis in a cross-sectional design to systematically categorize prescription errors found in a report of 1000 discharge prescriptions submitted in the electronic medical record in February 2015. Two pharmacy team members reviewed the discharge prescription list for errors. Open-ended data were coded by an additional rater for agreement on coding categories. Coding was based upon majority rule. Descriptive statistics were used to address the study objective. Categories evaluated were patient age, provider type, drug class, and type and time of error. The discharge prescription error rate out of 1000 prescriptions was 13.4%, with “incomplete or inadequate prescription” being the most commonly detected error (58.2%). The adult and pediatric error rates were 11.7% and 22.7%, respectively. The antibiotics reviewed had the highest number of errors. The highest within-class error rates were with antianginal medications, antiparasitic medications, antacids, appetite stimulants, and probiotics. Emergency medicine residents wrote the highest percentage of prescriptions (46.7%) and had an error rate of 9.2%. Residents of other specialties wrote 340 prescriptions and had an error rate of 20.9%. Errors occurred most often between 10:00 am and 6:00 pm. PMID:28405061
NASA Astrophysics Data System (ADS)
Chida, Y.; Takagawa, T.
2017-12-01
The observation data of GPS buoys installed offshore of Japan are used for monitoring not only waves but also tsunamis in Japan. The real-time data were successfully used to upgrade the tsunami warnings just after the 2011 Tohoku earthquake. Huge tsunamis can be easily detected because the signal-to-noise ratio is high enough, but moderate tsunamis are not. GPS data sometimes include error waveforms that resemble tsunamis because the positioning accuracy changes with the number and position of GPS satellites. Distinguishing true tsunami waveforms from pseudo-tsunami ones is important for tsunami detection. In this research, a method to reduce misdetections of tsunamis in the observation data of GPS buoys and to increase the efficiency of tsunami detection was developed. Firstly, the error waveforms were extracted by using the indexes of position dilution of precision, reliability of GPS satellite positioning and the number of satellites used for calculation. Then, the output from this procedure was used for the Continuous Wavelet Transform (CWT) to analyze the time-frequency characteristics of error waveforms and real tsunami waveforms. We found that the error waveforms tended to appear when the accuracy of GPS buoy positioning was low. By extracting these waveforms, it was possible to remove about 43% of the error waveforms without reducing the tsunami detection rate. Moreover, we found that the amplitudes of the power spectra obtained from the error waveforms and real tsunamis were similar in the long-period component (4-65 minutes); on the other hand, the amplitude in the short-period component (< 1 minute) obtained from the error waveforms was significantly larger than that of the real tsunami waveforms. By thresholding the short-period component, further extraction of error waveforms became possible without a significant reduction of the tsunami detection rate.
Huff, Mark J; Umanath, Sharda
2018-06-01
In 2 experiments, we assessed age-related suggestibility to additive and contradictory misinformation (i.e., remembering of false details from an external source). After reading a fictional story, participants answered questions containing misleading details that were either additive (misleading details that supplemented an original event) or contradictory (errors that changed original details). On a final test, suggestibility was greater for additive than contradictory misinformation, and older adults endorsed fewer false contradictory details than younger adults. To mitigate suggestibility in Experiment 2, participants were warned about potential errors, instructed to detect errors, or instructed to detect errors after exposure to examples of additive and contradictory details. Again, suggestibility to additive misinformation was greater than contradictory, and older adults endorsed less contradictory misinformation. Only after detection instructions with misinformation examples were younger adults able to reduce contradictory misinformation effects, and they reduced these effects to the level of older adults. Additive misinformation, however, was immune to all warning and detection instructions. Thus, older adults were less susceptible to contradictory misinformation errors, and younger adults could match this misinformation rate when warning/detection instructions were strong. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Corrections of clinical chemistry test results in a laboratory information system.
Wang, Sihe; Ho, Virginia
2004-08-01
The recently released reports by the Institute of Medicine, To Err Is Human and Patient Safety, have received national attention because of their focus on the problem of medical errors. Although a small number of studies have reported on errors in general clinical laboratories, there are, to our knowledge, no reported studies that focus on errors in pediatric clinical laboratory testing. To characterize the errors that have caused corrections to have to be made in pediatric clinical chemistry results in the laboratory information system, Misys. To provide initial data on the errors detected in pediatric clinical chemistry laboratories in order to improve patient safety in pediatric health care. All clinical chemistry staff members were informed of the study and were requested to report in writing when a correction was made in the laboratory information system, Misys. Errors were detected either by the clinicians (the results did not fit the patients' clinical conditions) or by the laboratory technologists (the results were double-checked, and the worksheets were carefully examined twice a day). No incident that was discovered before or during the final validation was included. On each Monday of the study, we generated a report from Misys that listed all of the corrections made during the previous week. We then categorized the corrections according to the types and stages of the incidents that led to the corrections. A total of 187 incidents were detected during the 10-month study, representing a 0.26% error detection rate per requisition. The distribution of the detected incidents included 31 (17%) preanalytic incidents, 46 (25%) analytic incidents, and 110 (59%) postanalytic incidents. The errors related to noninterfaced tests accounted for 50% of the total incidents and for 37% of the affected tests and orderable panels, while the noninterfaced tests and panels accounted for 17% of the total test volume in our laboratory. This pilot study provided the rate and categories of errors detected in a pediatric clinical chemistry laboratory based on the corrections of results in the laboratory information system. A direct interface of the instruments to the laboratory information system showed that it had favorable effects on reducing laboratory errors.
Using Gaussian mixture models to detect and classify dolphin whistles and pulses.
Peso Parada, Pablo; Cardenal-López, Antonio
2014-06-01
In recent years, a number of automatic detection systems for free-ranging cetaceans have been proposed that aim to detect not just surfaced, but also submerged, individuals. These systems are typically based on pattern-recognition techniques applied to underwater acoustic recordings. Using a Gaussian mixture model, a classification system was developed that detects sounds in recordings and classifies them as one of four types: background noise, whistles, pulses, and combined whistles and pulses. The classifier was tested using a database of underwater recordings made off the Spanish coast during 2011. Using cepstral-coefficient-based parameterization, a sound detection rate of 87.5% was achieved for a 23.6% classification error rate. To improve these results, two parameters computed using the multiple signal classification algorithm and an unpredictability measure were included in the classifier. These parameters, which helped to classify the segments containing whistles, increased the detection rate to 90.3% and reduced the classification error rate to 18.1%. Finally, the potential of the multiple signal classification algorithm and unpredictability measure for estimating whistle contours and classifying cetacean species was also explored, with promising results.
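The classification scheme described above can be reduced to fitting one Gaussian mixture per sound class on cepstral-style features and assigning each segment to the class whose mixture yields the highest likelihood. The sketch below uses scikit-learn with synthetic stand-in features; the mixture sizes and feature dimensions are assumptions, not the study's configuration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# One GMM per class (noise, whistle, pulse, whistle+pulse); a segment is
# assigned to the class whose model gives the highest average log-likelihood
# of its feature frames. Features here are synthetic stand-ins for cepstral
# coefficients; mixture sizes are assumptions.

rng = np.random.default_rng(5)
classes = ["noise", "whistle", "pulse", "whistle+pulse"]
train = {c: rng.normal(loc=i, scale=1.0, size=(500, 12))   # 12-dim "cepstra"
         for i, c in enumerate(classes)}

models = {c: GaussianMixture(n_components=4, covariance_type="diag",
                             random_state=0).fit(X)
          for c, X in train.items()}

def classify(segment_frames):
    scores = {c: m.score(segment_frames) for c, m in models.items()}
    return max(scores, key=scores.get)

test_segment = rng.normal(loc=1.0, scale=1.0, size=(40, 12))   # whistle-like
print(classify(test_segment))                                   # -> "whistle"
```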
McClintock, Brett T.; Bailey, Larissa L.; Pollock, Kenneth H.; Simons, Theodore R.
2010-01-01
The recent surge in the development and application of species occurrence models has been associated with an acknowledgment among ecologists that species are detected imperfectly due to observation error. Standard models now allow unbiased estimation of occupancy probability when false negative detections occur, but this is conditional on no false positive detections and sufficient incorporation of explanatory variables for the false negative detection process. These assumptions are likely reasonable in many circumstances, but there is mounting evidence that false positive errors and detection probability heterogeneity may be much more prevalent in studies relying on auditory cues for species detection (e.g., songbird or calling amphibian surveys). We used field survey data from a simulated calling anuran system of known occupancy state to investigate the biases induced by these errors in dynamic models of species occurrence. Despite the participation of expert observers in simplified field conditions, both false positive errors and site detection probability heterogeneity were extensive for most species in the survey. We found that even low levels of false positive errors, constituting as little as 1% of all detections, can cause severe overestimation of site occupancy, colonization, and local extinction probabilities. Further, unmodeled detection probability heterogeneity induced substantial underestimation of occupancy and overestimation of colonization and local extinction probabilities. Completely spurious relationships between species occurrence and explanatory variables were also found. Such misleading inferences would likely have deleterious implications for conservation and management programs. We contend that all forms of observation error, including false positive errors and heterogeneous detection probabilities, must be incorporated into the estimation framework to facilitate reliable inferences about occupancy and its associated vital rate parameters.
Mahrooghy, Majid; Yarahmadian, Shantia; Menon, Vineetha; Rezania, Vahid; Tuszynski, Jack A
2015-10-01
Microtubules (MTs) are intra-cellular cylindrical protein filaments. They exhibit a unique phenomenon of stochastic growth and shrinkage, called dynamic instability. In this paper, we introduce a theoretical framework for applying Compressive Sensing (CS) to the sampled data of the microtubule length in the process of dynamic instability. To reduce data density and reconstruct the original signal with relatively low sampling rates, we have applied CS to experimental MT filament length time series modeled as a Dichotomous Markov Noise (DMN). The results show that using CS along with the wavelet transform significantly reduces the recovery errors compared with those obtained in the absence of the wavelet transform, especially at low and medium sampling rates. For sampling rates ranging from 0.2 to 0.5, the Root-Mean-Squared Error (RMSE) decreases by approximately a factor of 3, and between 0.5 and 1, the RMSE is small. We also apply a peak detection technique to the wavelet coefficients to detect and closely approximate the growth and shrinkage of MTs for computing the essential dynamic instability parameters, i.e., transition frequencies and especially growth and shrinkage rates. The results show that using compressive sensing along with the peak detection technique and wavelet transform at these sampling rates reduces the recovery errors for the parameters. Copyright © 2015 Elsevier Ltd. All rights reserved.
Heart rate detection from an electronic weighing scale.
González-Landaeta, R; Casas, O; Pallàs-Areny, R
2007-01-01
We propose a novel technique for heart rate detection on a subject that stands on a common electronic weighing scale. The detection relies on sensing force variations related to the blood acceleration in the aorta, works even if wearing footwear, and does not require any sensors attached to the body. We have applied our method to three different weighing scales, and estimated whether their sensitivity and frequency response suited heart rate detection. Scale sensitivities were from 490 nV/V/N to 1670 nV/V/N, all had an underdamped transient response, and their dynamic gain error was below 19% at 10 Hz, which are acceptable values for heart rate estimation. We also designed a pulse detection system based on off-the-shelf integrated circuits, whose gain was about 70×10³ and which was able to sense force variations of about 240 mN. The signal-to-noise ratio (SNR) of the main peaks of the detected pulse signal was higher than 48 dB, which is large enough to estimate the heart rate by simple signal processing methods. To validate the method, the ECG and the force signal were simultaneously recorded on 12 volunteers. The maximal error obtained from heart rates determined from these two signals was ±0.6 beats/minute.
Errors in radiation oncology: A study in pathways and dosimetric impact
Drzymala, Robert E.; Purdy, James A.; Michalski, Jeff
2005-01-01
As complexity for treating patients increases, so does the risk of error. Some publications have suggested that record and verify (R&V) systems may contribute in propagating errors. Direct data transfer has the potential to eliminate most, but not all, errors. And although the dosimetric consequences may be obvious in some cases, a detailed study does not exist. In this effort, we examined potential errors in terms of scenarios, pathways of occurrence, and dosimetry. Our goal was to prioritize error prevention according to likelihood of event and dosimetric impact. For conventional photon treatments, we investigated errors of incorrect source‐to‐surface distance (SSD), energy, omitted wedge (physical, dynamic, or universal) or compensating filter, incorrect wedge or compensating filter orientation, improper rotational rate for arc therapy, and geometrical misses due to incorrect gantry, collimator or table angle, reversed field settings, and setup errors. For electron beam therapy, errors investigated included incorrect energy, incorrect SSD, along with geometric misses. For special procedures we examined errors for total body irradiation (TBI, incorrect field size, dose rate, treatment distance) and LINAC radiosurgery (incorrect collimation setting, incorrect rotational parameters). Likelihood of error was determined and subsequently rated according to our history of detecting such errors. Dosimetric evaluation was conducted by using dosimetric data, treatment plans, or measurements. We found geometric misses to have the highest error probability. They most often occurred due to improper setup via coordinate shift errors or incorrect field shaping. The dosimetric impact is unique for each case and depends on the proportion of fields in error and volume mistreated. These errors were short‐lived due to rapid detection via port films. The most significant dosimetric error was related to a reversed wedge direction. This may occur due to incorrect collimator angle or wedge orientation. For parallel‐opposed 60° wedge fields, this error could be as high as 80% to a point off‐axis. Other examples of dosimetric impact included the following: SSD, ~2%/cm for photons or electrons; photon energy (6 MV vs. 18 MV), on average 16% depending on depth, electron energy, ~0.5cm of depth coverage per MeV (mega‐electron volt). Of these examples, incorrect distances were most likely but rapidly detected by in vivo dosimetry. Errors were categorized by occurrence rate, methods and timing of detection, longevity, and dosimetric impact. Solutions were devised according to these criteria. To date, no one has studied the dosimetric impact of global errors in radiation oncology. Although there is heightened awareness that with increased use of ancillary devices and automation, there must be a parallel increase in quality check systems and processes, errors do and will continue to occur. This study has helped us identify and prioritize potential errors in our clinic according to frequency and dosimetric impact. For example, to reduce the use of an incorrect wedge direction, our clinic employs off‐axis in vivo dosimetry. To avoid a treatment distance setup error, we use both vertical table settings and optical distance indicator (ODI) values to properly set up fields. As R&V systems become more automated, more accurate and efficient data transfer will occur. This will require further analysis. Finally, we have begun examining potential intensity‐modulated radiation therapy (IMRT) errors according to the same criteria. 
PACS numbers: 87.53.Xd, 87.53.St PMID:16143793
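The roughly 2%/cm sensitivity quoted above for an SSD setup error is consistent with the inverse-square law: a 1 cm error at a nominal 100 cm SSD changes the dose by about (100/101)^2 - 1, or about -2%. A short numerical check of that estimate (the 100 cm nominal SSD is an assumption for the example, and scatter and output factors are ignored):

```python
# Inverse-square estimate of the dose error caused by an SSD setup error.
# Assumes a nominal 100 cm SSD; scatter and output-factor changes are ignored.
nominal_ssd_cm = 100.0
for delta_cm in (1.0, 2.0, -1.0):
    rel_dose_change = (nominal_ssd_cm / (nominal_ssd_cm + delta_cm)) ** 2 - 1.0
    print(f"SSD error {delta_cm:+.0f} cm -> dose change {rel_dose_change:+.1%}")
```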
Using EHR Data to Detect Prescribing Errors in Rapidly Discontinued Medication Orders.
Burlison, Jonathan D; McDaniel, Robert B; Baker, Donald K; Hasan, Murad; Robertson, Jennifer J; Howard, Scott C; Hoffman, James M
2018-01-01
Previous research developed a new method for locating prescribing errors in rapidly discontinued electronic medication orders. Although effective, the prospective design of that research hinders its feasibility for regular use. Our objectives were to assess a method to retrospectively detect prescribing errors, to characterize the identified errors, and to identify potential improvement opportunities. Electronically submitted medication orders from 28 randomly selected days that were discontinued within 120 minutes of submission were reviewed and categorized as most likely errors, nonerrors, or not enough information to determine status. Identified errors were evaluated by amount of time elapsed from original submission to discontinuation, error type, staff position, and potential clinical significance. Pearson's chi-square test was used to compare rates of errors across prescriber types. In all, 147 errors were identified in 305 medication orders. The method was most effective for orders that were discontinued within 90 minutes. Duplicate orders were most common; physicians in training had the highest error rate ( p < 0.001), and 24 errors were potentially clinically significant. None of the errors were voluntarily reported. It is possible to identify prescribing errors in rapidly discontinued medication orders by using retrospective methods that do not require interrupting prescribers to discuss order details. Future research could validate our methods in different clinical settings. Regular use of this measure could help determine the causes of prescribing errors, track performance, and identify and evaluate interventions to improve prescribing systems and processes. Schattauer GmbH Stuttgart.
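The screening step described above, isolating orders discontinued within a short window of submission, is straightforward to reproduce from order timestamps. The sketch below uses pandas with assumed column names and the 120-minute window mentioned in the abstract; it is not the authors' actual query.

```python
import pandas as pd

# Flag electronically submitted orders discontinued within 120 minutes of
# submission as candidates for manual prescribing-error review.
# Column names and the toy data are assumptions about the EHR extract layout.

orders = pd.DataFrame({
    "order_id": [1, 2, 3],
    "submitted": pd.to_datetime(["2015-01-05 09:00", "2015-01-05 10:15",
                                 "2015-01-05 11:00"]),
    "discontinued": pd.to_datetime(["2015-01-05 09:45", "2015-01-07 08:00",
                                    "2015-01-05 11:30"]),
})

window = pd.Timedelta(minutes=120)
orders["minutes_to_dc"] = (orders["discontinued"]
                           - orders["submitted"]).dt.total_seconds() / 60
candidates = orders[orders["discontinued"] - orders["submitted"] <= window]
print(candidates[["order_id", "minutes_to_dc"]])
```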
Tomography of a displacement photon counter for discrimination of single-rail optical qubits
NASA Astrophysics Data System (ADS)
Izumi, Shuro; Neergaard-Nielsen, Jonas S.; Andersen, Ulrik L.
2018-04-01
We investigate the performance of a detection strategy composed of a displacement operation and a photon counter, which is known as a beneficial tool in optical coherent communications, for the quantum state discrimination of the two superpositions of vacuum and single-photon states corresponding to the σ̂_x eigenstates in the single-rail encoding of photonic qubits. We experimentally characterize the detection strategy in the vacuum-single photon two-dimensional space using quantum detector tomography and evaluate the achievable discrimination error probability from the reconstructed measurement operators. We furthermore derive the minimum error rate obtainable with Gaussian transformations and homodyne detection. Our proof-of-principle experiment shows that the proposed scheme can achieve a discrimination error surpassing homodyne detection.
Automatic mouse ultrasound detector (A-MUD): A new tool for processing rodent vocalizations.
Zala, Sarah M; Reitschmidt, Doris; Noll, Anton; Balazs, Peter; Penn, Dustin J
2017-01-01
House mice (Mus musculus) emit complex ultrasonic vocalizations (USVs) during social and sexual interactions, which have features similar to bird song (i.e., they are composed of several different types of syllables, uttered in succession over time to form a pattern of sequences). Manually processing complex vocalization data is time-consuming and potentially subjective, and therefore, we developed an algorithm that automatically detects mouse ultrasonic vocalizations (Automatic Mouse Ultrasound Detector or A-MUD). A-MUD is a script that runs on STx acoustic software (S_TOOLS-STx version 4.2.2), which is free for scientific use. This algorithm improved the efficiency of processing USV files, as it was 4-12 times faster than manual segmentation, depending upon the size of the file. We evaluated A-MUD error rates using manually segmented sound files as a 'gold standard' reference, and compared them to a commercially available program. A-MUD had lower error rates than the commercial software, as it detected significantly more correct positives, and fewer false positives and false negatives. The errors generated by A-MUD were mainly false negatives, rather than false positives. This study is the first to systematically compare error rates for automatic ultrasonic vocalization detection methods, and A-MUD and subsequent versions will be made available for the scientific community.
Performance analysis of a cascaded coding scheme with interleaved outer code
NASA Technical Reports Server (NTRS)
Lin, S.
1986-01-01
A cascaded coding scheme for a random error channel with a bit-error rate is analyzed. In this scheme, the inner code C1 is an (n1, m1l) binary linear block code which is designed for simultaneous error correction and detection. The outer code C2 is a linear block code with symbols from the Galois field GF(2^l) which is designed for correcting both symbol errors and erasures, and is interleaved with a degree m1. A procedure for computing the probability of a correct decoding is presented and an upper bound on the probability of a decoding error is derived. The bound provides much better results than the previous bound for a cascaded coding scheme with an interleaved outer code. Example schemes with inner codes ranging from high rates to very low rates are evaluated. Several schemes provide extremely high reliability even for very high bit-error rates, say 10^-1 to 10^-2.
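For the inner stage of such a cascaded scheme, the probability that an inner-code block is decoded correctly on a random-error channel has the standard binomial form: a block of length n1 decoded by a t-error-correcting bounded-distance decoder is correct whenever at most t of its bits are flipped. A small numeric sketch of that lower bound (the code parameters are illustrative examples, not those analyzed in the paper):

```python
from math import comb

# P(block decoded correctly) >= sum_{i=0}^{t} C(n1, i) p^i (1-p)^(n1-i)
# for an inner code of length n1 correcting t random errors, with channel
# bit-error rate p. The parameters below are illustrative examples.

def p_block_correct(n1, t, p):
    return sum(comb(n1, i) * p**i * (1 - p)**(n1 - i) for i in range(t + 1))

for p in (1e-1, 1e-2):
    print(f"p = {p:g}: P(correct) >= {p_block_correct(63, 5, p):.4f}")
```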
Throughput of Coded Optical CDMA Systems with AND Detectors
NASA Astrophysics Data System (ADS)
Memon, Kehkashan A.; Umrani, Fahim A.; Umrani, A. W.; Umrani, Naveed A.
2012-09-01
Conventional detection techniques used in optical code-division multiple access (OCDMA) systems are not optimal and result in poor bit error rate performance. This paper analyzes the coded performance of optical CDMA systems with AND detectors for enhanced throughput efficiencies and improved error rate performance. The results show that the use of AND detectors significantly improve the performance of an optical channel.
Detection of Methicillin-Resistant Coagulase-Negative Staphylococci by the Vitek 2 System
Johnson, Kristen N.; Andreacchio, Kathleen
2014-01-01
The accurate performance of the Vitek 2 GP67 card for detecting methicillin-resistant coagulase-negative staphylococci (CoNS) is not known. We prospectively determined the ability of the Vitek 2 GP67 card to accurately detect methicillin-resistant CoNS, with mecA PCR results used as the gold standard for a 4-month period in 2012. Included in the study were 240 consecutively collected nonduplicate CoNS isolates. Cefoxitin susceptibility by disk diffusion testing was determined for all isolates. We found that the three tested systems, Vitek 2 oxacillin and cefoxitin testing and cefoxitin disk susceptibility testing, lacked specificity and, in some cases, sensitivity for detecting methicillin resistance. The Vitek 2 oxacillin and cefoxitin tests had very major error rates of 4% and 8%, respectively, and major error rates of 38% and 26%, respectively. Disk cefoxitin testing gave the best performance, with very major and major error rates of 2% and 24%, respectively. The test performances were species dependent, with the greatest errors found for Staphylococcus saprophyticus. While the 2014 CLSI guidelines recommend reporting isolates that test resistant by the oxacillin MIC or cefoxitin disk test as oxacillin resistant, following such guidelines produces erroneous results, depending on the test method and bacterial species tested. Vitek 2 cefoxitin testing is not an adequate substitute for cefoxitin disk testing. For critical-source isolates, mecA PCR, rather than Vitek 2 or cefoxitin disk testing, is required for optimal antimicrobial therapy. PMID:24951799
A burst-mode photon counting receiver with automatic channel estimation and bit rate detection
NASA Astrophysics Data System (ADS)
Rao, Hemonth G.; DeVoe, Catherine E.; Fletcher, Andrew S.; Gaschits, Igor D.; Hakimi, Farhad; Hamilton, Scott A.; Hardy, Nicholas D.; Ingwersen, John G.; Kaminsky, Richard D.; Moores, John D.; Scheinbart, Marvin S.; Yarnall, Timothy M.
2016-04-01
We demonstrate a multi-rate burst-mode photon-counting receiver for undersea communication at data rates up to 10.416 Mb/s over a 30-foot water channel. To the best of our knowledge, this is the first demonstration of burst-mode photon-counting communication. With added attenuation, the maximum link loss is 97.1 dB at λ=517 nm. In clear ocean water, this equates to link distances up to 148 meters. For λ=470 nm, the achievable link distance in clear ocean water is 450 meters. The receiver incorporates soft-decision forward error correction (FEC) based on a product code of an inner LDPC code and an outer BCH code. The FEC supports multiple code rates to achieve error-free performance. We have selected a burst-mode receiver architecture to provide robust performance with respect to unpredictable channel obstructions. The receiver is capable of on-the-fly data rate detection and adapts to changing levels of signal and background light. The receiver updates its phase alignment and channel estimates every 1.6 ms, allowing for rapid changes in water quality as well as motion between transmitter and receiver. We demonstrate on-the-fly rate detection, channel BER within 0.2 dB of theory across all data rates, and error-free performance within 1.82 dB of soft-decision capacity across all tested code rates. All signal processing is done in FPGAs and runs continuously in real time.
Spatial heterogeneity of type I error for local cluster detection tests
2014-01-01
Background Just as power, the type I error of cluster detection tests (CDTs) should be assessed spatially. Indeed, CDTs' type I error and power both have a spatial component, as CDTs both detect and locate clusters. In the case of type I error, the spatial distribution of wrongly detected clusters (WDCs) can be particularly affected by edge effect. This simulation study aims to describe the spatial distribution of WDCs and to confirm and quantify the presence of edge effect. Methods A simulation of 40 000 datasets was performed under the null hypothesis of risk homogeneity. The simulation design used realistic parameters from survey data on birth defects and, in particular, two baseline risks. The simulated datasets were analyzed using Kulldorff's spatial scan, a commonly used test whose behavior is otherwise well known. To describe the spatial distribution of type I error, we defined the participation rate for each spatial unit of the region. We used this indicator in a new statistical test proposed to confirm, as well as quantify, the edge effect. Results The predefined type I error of 5% was respected for both baseline risks. Results showed a strong edge effect in participation rates, with a descending gradient from center to edge, and WDCs more often centrally situated. Conclusions In routine analysis of real data, clusters on the edge of the region should be carefully considered, as they rarely occur when there is no cluster. Further work is needed to combine results from power studies with this work in order to optimize CDT performance. PMID:24885343
Processing Images of Craters for Spacecraft Navigation
NASA Technical Reports Server (NTRS)
Cheng, Yang; Johnson, Andrew E.; Matthies, Larry H.
2009-01-01
A crater-detection algorithm has been conceived to enable automation of what, heretofore, have been manual processes for utilizing images of craters on a celestial body as landmarks for navigating a spacecraft flying near or landing on that body. The images are acquired by an electronic camera aboard the spacecraft, then digitized, then processed by the algorithm, which consists mainly of the following steps: 1. Edges in an image are detected and placed in a database. 2. Crater rim edges are selected from the edge database. 3. Edges that belong to the same crater are grouped together. 4. An ellipse is fitted to each group of crater edges. 5. Ellipses are refined directly in the image domain to reduce errors introduced in the detection of edges and fitting of ellipses. 6. The quality of each detected crater is evaluated. It is planned to utilize this algorithm as the basis of a computer program for automated, real-time, onboard processing of crater-image data. Experimental studies have led to the conclusion that this algorithm is capable of a detection rate >93 percent, a false-alarm rate <5 percent, a geometric error <0.5 pixel, and a position error <0.3 pixel.
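Steps 1 through 4 of the algorithm map naturally onto standard edge-detection and ellipse-fitting primitives. The OpenCV sketch below illustrates that core on a synthetic image; the thresholds and the grouping of edges by contour are simplifying assumptions, and the refinement and quality-evaluation steps (5 and 6) are omitted.

```python
import cv2
import numpy as np

# Simplified crater-candidate pipeline: detect edges, group connected edge
# pixels, and fit an ellipse to each sufficiently large group. Thresholds are
# assumptions; the flight algorithm adds rim selection, image-domain
# refinement, and candidate quality scoring.

def detect_crater_candidates(gray, canny_lo=50, canny_hi=150, min_points=30):
    edges = cv2.Canny(gray, canny_lo, canny_hi)                     # step 1
    res = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    contours = res[-2]     # works for both OpenCV 3.x and 4.x return signatures
    ellipses = []
    for c in contours:                                              # steps 2-3 (simplified)
        if len(c) >= max(min_points, 5):                            # fitEllipse needs >= 5 points
            ellipses.append(cv2.fitEllipse(c))                      # step 4
    return ellipses   # each: ((cx, cy), (major, minor), angle_deg)

# Synthetic test image with one elliptical "crater rim".
img = np.zeros((200, 200), dtype=np.uint8)
cv2.ellipse(img, (100, 100), (40, 25), 15, 0, 360, 255, 2)
for (cx, cy), (major, minor), ang in detect_crater_candidates(img):
    print(f"center=({cx:.1f},{cy:.1f}) axes=({major:.1f},{minor:.1f}) angle={ang:.1f}")
```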
Photodiode-based cutting interruption sensor for near-infrared lasers.
Adelmann, B; Schleier, M; Neumeier, B; Hellmann, R
2016-03-01
We report on a photodiode-based sensor system to detect cutting interruptions during laser cutting with a fiber laser. An InGaAs diode records the thermal radiation from the process zone with a ring mirror and optical filter arrangement mounted between a collimation unit and a cutting head. The photodiode current is digitalized with a sample rate of 20 kHz and filtered with a Chebyshev Type I filter. From the measured signal during the piercing, a threshold value is calculated. When the diode signal exceeds this threshold during cutting, a cutting interruption is indicated. This method is applied to sensor signals from cutting mild steel, stainless steel, and aluminum, as well as different material thicknesses and also laser flame cutting, showing the possibility to detect cutting interruptions in a broad variety of applications. In a series of 83 incomplete cuts, every cutting interruption is successfully detected (alpha error of 0%), while no cutting interruption is reported in 266 complete cuts (beta error of 0%). With this remarkably high detection rate and low error rate, the ability to work with different materials and thicknesses, combined with the easy mounting of the sensor unit on existing cutting machines, highlights the enormous potential of this sensor system for industrial applications.
Arduino-based noise robust online heart-rate detection.
Das, Sangita; Pal, Saurabh; Mitra, Madhuchhanda
2017-04-01
This paper introduces a noise-robust real-time heart rate detection system based on electrocardiogram (ECG) data. An online data acquisition system is developed to collect ECG signals from human subjects. Heart rate is detected using a window-based autocorrelation peak localisation technique. A low-cost Arduino UNO board is used to implement the complete automated process. The performance of the system is compared with a PC-based heart rate detection technique. Accuracy of the system is validated through simulated noisy ECG data with various levels of signal-to-noise ratio (SNR). The mean percentage error of detected heart rate is found to be 0.72% for the noisy database with five different noise levels.
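Window-based autocorrelation peak localisation estimates the heart rate from the lag of the strongest autocorrelation peak within a physiologically plausible range. The NumPy sketch below illustrates the idea on a synthetic pulse train; the sampling rate, window length, and 40-200 bpm search band are assumptions rather than the paper's settings.

```python
import numpy as np

# Estimate heart rate as 60 / (lag of the strongest autocorrelation peak),
# searching only lags that correspond to 40-200 bpm. The synthetic "ECG" is a
# stand-in for the acquired signal; fs and band limits are assumptions.

def heart_rate_bpm(x, fs, bpm_lo=40, bpm_hi=200):
    x = np.asarray(x, float) - np.mean(x)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # one-sided autocorrelation
    lag_min = int(fs * 60.0 / bpm_hi)                   # shortest plausible period
    lag_max = int(fs * 60.0 / bpm_lo)                   # longest plausible period
    lag = lag_min + np.argmax(ac[lag_min:lag_max])
    return 60.0 * fs / lag

fs, bpm_true = 250.0, 72.0
t = np.arange(0, 10, 1 / fs)
beat_phase = (t * bpm_true / 60.0) % 1.0
ecg = (np.exp(-((beat_phase - 0.5) ** 2) / 0.001)       # narrow periodic pulses
       + 0.05 * np.random.default_rng(6).normal(size=t.size))
print(f"estimated HR: {heart_rate_bpm(ecg, fs):.1f} bpm")
```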
Item Discrimination and Type I Error in the Detection of Differential Item Functioning
ERIC Educational Resources Information Center
Li, Yanju; Brooks, Gordon P.; Johanson, George A.
2012-01-01
In 2009, DeMars stated that when impact exists there will be Type I error inflation, especially with larger sample sizes and larger discrimination parameters for items. One purpose of this study is to present the patterns of Type I error rates using Mantel-Haenszel (MH) and logistic regression (LR) procedures when the mean ability between the…
Controlling false-negative errors in microarray differential expression analysis: a PRIM approach.
Cole, Steve W; Galic, Zoran; Zack, Jerome A
2003-09-22
Theoretical considerations suggest that current microarray screening algorithms may fail to detect many true differences in gene expression (Type II analytic errors). We assessed 'false negative' error rates in differential expression analyses by conventional linear statistical models (e.g. t-test), microarray-adapted variants (e.g. SAM, Cyber-T), and a novel strategy based on hold-out cross-validation. The latter approach employs the machine-learning algorithm Patient Rule Induction Method (PRIM) to infer minimum thresholds for reliable change in gene expression from Boolean conjunctions of fold-induction and raw fluorescence measurements. Monte Carlo analyses based on four empirical data sets show that conventional statistical models and their microarray-adapted variants overlook more than 50% of genes showing significant up-regulation. Conjoint PRIM prediction rules recover approximately twice as many differentially expressed transcripts while maintaining strong control over false-positive (Type I) errors. As a result, experimental replication rates increase and total analytic error rates decline. RT-PCR studies confirm that gene inductions detected by PRIM but overlooked by other methods represent true changes in mRNA levels. PRIM-based conjoint inference rules thus represent an improved strategy for high-sensitivity screening of DNA microarrays. Freestanding JAVA application at http://microarray.crump.ucla.edu/focus
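The conjunction-rule idea can be illustrated with a short sketch; in the study the thresholds are inferred by PRIM under hold-out cross-validation, whereas the cutoff values below are hypothetical.

```python
import numpy as np

def conjoint_rule(fold_change, raw_fluorescence,
                  min_fold=1.8, min_fluorescence=200.0):
    """Flag transcripts satisfying both the fold-induction and raw-intensity terms."""
    fold_change = np.asarray(fold_change)
    raw_fluorescence = np.asarray(raw_fluorescence)
    # Boolean conjunction: a gene is called only when both conditions hold,
    # which is the form of rule PRIM induces from the data.
    return (fold_change >= min_fold) & (raw_fluorescence >= min_fluorescence)
```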
TU-G-BRD-08: In-Vivo EPID Dosimetry: Quantifying the Detectability of Four Classes of Errors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ford, E; Phillips, M; Bojechko, C
Purpose: EPID dosimetry is an emerging method for treatment verification and QA. Given that the in-vivo EPID technique is in clinical use at some centers, we investigate the sensitivity and specificity for detecting different classes of errors. We assess the impact of these errors using dose volume histogram endpoints. Though data exist for EPID dosimetry performed pre-treatment, this is the first study quantifying its effectiveness when used during patient treatment (in-vivo). Methods: We analyzed 17 patients; EPID images of the exit dose were acquired and used to reconstruct the planar dose at isocenter. This dose was compared to the TPS dose using a 3%/3 mm gamma criterion. To simulate errors, modifications were made to treatment plans using four possible classes of error: 1) patient misalignment, 2) changes in patient body habitus, 3) machine output changes and 4) MLC misalignments. Each error was applied with varying magnitudes. To assess the detectability of the error, the area under a ROC curve (AUC) was analyzed. The AUC was compared to changes in D99 of the PTV introduced by the simulated error. Results: For systematic changes in the MLC leaves, changes in the machine output and patient habitus, the AUC varied from 0.78–0.97, scaling with the magnitude of the error. The optimal gamma threshold as determined by the ROC curve varied between 84–92%. There was little diagnostic power in detecting random MLC leaf errors and patient shifts (AUC 0.52–0.74). Some errors with weak detectability had large changes in D99. Conclusion: These data demonstrate the ability of EPID-based in-vivo dosimetry to detect variations in patient habitus and errors related to machine parameters such as systematic MLC misalignments and machine output changes. There was no correlation found between the detectability of the error, as measured by the gamma pass rate and ROC analysis, and the impact on the dose volume histogram. Funded by grant R18HS022244 from AHRQ.
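The ROC step can be sketched as follows, assuming gamma pass rates scored against known error/no-error labels; the numbers and the Youden-style threshold choice are illustrative and are not data from the study.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# 1 = error present, 0 = no error; a lower pass rate should indicate an error,
# so the score fed to the ROC is (100 - pass rate).
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
gamma_pass_rate = np.array([97.0, 95.5, 93.0, 96.2, 88.0, 91.0, 84.5, 90.2])

auc = roc_auc_score(labels, 100.0 - gamma_pass_rate)
fpr, tpr, thresholds = roc_curve(labels, 100.0 - gamma_pass_rate)
# Youden's J picks the operating point maximizing (sensitivity - false positive rate).
optimal_pass_rate_threshold = 100.0 - thresholds[np.argmax(tpr - fpr)]
print(f"AUC = {auc:.2f}, optimal gamma pass-rate threshold ~ {optimal_pass_rate_threshold:.1f}%")
```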
NASA Technical Reports Server (NTRS)
Noble, Viveca K.
1993-01-01
There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and to evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal-to-noise ratio needed for a given desired bit error rate. The use of concatenated coding, e.g. inner convolutional code and outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
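As an illustration of CRC-based error detection, the sketch below computes a 16-bit CRC of the CCITT family (generator 0x1021 with an all-ones preset); the exact parameters recommended by CCSDS may differ in detail from this assumption.

```python
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """Bitwise MSB-first CRC-16 with generator x^16 + x^12 + x^5 + 1."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

frame = b"telemetry frame payload"
# The transmitter appends this value; the receiver recomputes it to detect errors.
print(hex(crc16_ccitt(frame)))
```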
Signal Detection and Frame Synchronization of Multiple Wireless Networking Waveforms
2007-09-01
punctured to obtain coding rates of 2/3 and 3/4. Convolutional forward error correction coding is used to detect and correct bit … likely to be isolated and be correctable by the convolutional decoder. … [table residue: data rate (Mbps), modulation, coding rate, coded bits per subcarrier] … binary convolutional code. A shortened Reed-Solomon technique is employed first. The code is shortened depending upon the data
Automatic mouse ultrasound detector (A-MUD): A new tool for processing rodent vocalizations
Reitschmidt, Doris; Noll, Anton; Balazs, Peter; Penn, Dustin J.
2017-01-01
House mice (Mus musculus) emit complex ultrasonic vocalizations (USVs) during social and sexual interactions, which have features similar to bird song (i.e., they are composed of several different types of syllables, uttered in succession over time to form a pattern of sequences). Manually processing complex vocalization data is time-consuming and potentially subjective, and therefore, we developed an algorithm that automatically detects mouse ultrasonic vocalizations (Automatic Mouse Ultrasound Detector or A-MUD). A-MUD is a script that runs on STx acoustic software (S_TOOLS-STx version 4.2.2), which is free for scientific use. This algorithm improved the efficiency of processing USV files, as it was 4–12 times faster than manual segmentation, depending upon the size of the file. We evaluated A-MUD error rates using manually segmented sound files as a ‘gold standard’ reference, and compared them to a commercially available program. A-MUD had lower error rates than the commercial software, as it detected significantly more correct positives, and fewer false positives and false negatives. The errors generated by A-MUD were mainly false negatives, rather than false positives. This study is the first to systematically compare error rates for automatic ultrasonic vocalization detection methods, and A-MUD and subsequent versions will be made available for the scientific community. PMID:28727808
Peterson, J.; Dunham, J.B.
2003-01-01
Effective conservation efforts for at-risk species require knowledge of the locations of existing populations. Species presence can be estimated directly by conducting field-sampling surveys or alternatively by developing predictive models. Direct surveys can be expensive and inefficient, particularly for rare and difficult-to-sample species, and models of species presence may produce biased predictions. We present a Bayesian approach that combines sampling and model-based inferences for estimating species presence. The accuracy and cost-effectiveness of this approach were compared to those of sampling surveys and predictive models for estimating the presence of the threatened bull trout (Salvelinus confluentus) via simulation with existing models and empirical sampling data. Simulations indicated that a sampling-only approach would be the most effective and would result in the lowest presence and absence misclassification error rates for three thresholds of detection probability. When sampling effort was considered, however, the combined approach resulted in the lowest error rates per unit of sampling effort. Hence, lower probability-of-detection thresholds can be specified with the combined approach, resulting in lower misclassification error rates and improved cost-effectiveness.
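The combined estimate can be sketched with a standard occupancy-style calculation in which a model-based prior is updated by repeated survey visits; the notation and the no-false-positive assumption are mine and need not match the paper's exact formulation.

```python
def presence_posterior(psi, p, n_visits, detected):
    """Posterior probability the species is present at a site.

    psi: model-predicted (prior) presence probability
    p: per-visit detection probability given presence
    """
    if detected:
        return 1.0  # any detection confirms presence (false positives assumed absent)
    miss_all = (1.0 - p) ** n_visits
    return psi * miss_all / (psi * miss_all + (1.0 - psi))

# e.g. a model prior of 0.6 drops below 0.1 after four undetected visits at p = 0.5
print(presence_posterior(psi=0.6, p=0.5, n_visits=4, detected=False))
```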
Prescribing Errors Involving Medication Dosage Forms
Lesar, Timothy S
2002-01-01
CONTEXT Prescribing errors involving medication dose formulations have been reported to occur frequently in hospitals. No systematic evaluations of the characteristics of errors related to medication dosage formulation have been performed. OBJECTIVE To quantify the characteristics, frequency, and potential adverse patient effects of prescribing errors involving medication dosage forms . DESIGN Evaluation of all detected medication prescribing errors involving or related to medication dosage forms in a 631-bed tertiary care teaching hospital. MAIN OUTCOME MEASURES Type, frequency, and potential for adverse effects of prescribing errors involving or related to medication dosage forms. RESULTS A total of 1,115 clinically significant prescribing errors involving medication dosage forms were detected during the 60-month study period. The annual number of detected errors increased throughout the study period. Detailed analysis of the 402 errors detected during the last 16 months of the study demonstrated the most common errors to be: failure to specify controlled release formulation (total of 280 cases; 69.7%) both when prescribing using the brand name (148 cases; 36.8%) and when prescribing using the generic name (132 cases; 32.8%); and prescribing controlled delivery formulations to be administered per tube (48 cases; 11.9%). The potential for adverse patient outcome was rated as potentially “fatal or severe” in 3 cases (0.7%), and “serious” in 49 cases (12.2%). Errors most commonly involved cardiovascular agents (208 cases; 51.7%). CONCLUSIONS Hospitalized patients are at risk for adverse outcomes due to prescribing errors related to inappropriate use of medication dosage forms. This information should be considered in the development of strategies to prevent adverse patient outcomes resulting from such errors. PMID:12213138
Clinical biochemistry laboratory rejection rates due to various types of preanalytical errors.
Atay, Aysenur; Demir, Leyla; Cuhadar, Serap; Saglam, Gulcan; Unal, Hulya; Aksun, Saliha; Arslan, Banu; Ozkan, Asuman; Sutcu, Recep
2014-01-01
Preanalytical errors, occurring anywhere along the process from the initial test request to the admission of the specimen to the laboratory, cause the rejection of samples. The aim of this study was to better explain the reasons for rejected samples and their rates in certain test groups in our laboratory. This preliminary study was based on the samples rejected over a one-year period and on the rates and types of inappropriateness. Test requests and blood samples from the clinical chemistry, immunoassay, hematology, glycated hemoglobin, coagulation and erythrocyte sedimentation rate test units were evaluated. Types of inappropriateness were evaluated as follows: improperly labelled samples, hemolysed specimens, clotted specimens, insufficient specimen volume and total request errors. A total of 5,183,582 test requests from 1,035,743 blood collection tubes were considered. The total rejection rate was 0.65%. The rejection rate of the coagulation group was significantly higher (2.28%) than that of the other test groups (P < 0.001), including an insufficient-specimen-volume error rate of 1.38%. Rejection rates due to hemolysis, clotted specimens and insufficient sample volume were found to be 8%, 24% and 34%, respectively. Total request errors, particularly unintelligible requests, accounted for 32% of the total for inpatients. The errors were especially attributable to unintelligible or inappropriate test requests, improperly labelled samples for inpatients, and blood drawing errors, especially insufficient specimen volume in the coagulation test group. Further studies should be performed after corrective and preventive actions to detect a possible decrease in rejected samples.
Reverse Transcription Errors and RNA-DNA Differences at Short Tandem Repeats.
Fungtammasan, Arkarachai; Tomaszkiewicz, Marta; Campos-Sánchez, Rebeca; Eckert, Kristin A; DeGiorgio, Michael; Makova, Kateryna D
2016-10-01
Transcript variation has important implications for organismal function in health and disease. Most transcriptome studies focus on assessing variation in gene expression levels and isoform representation. Variation at the level of transcript sequence is caused by RNA editing and transcription errors, and leads to nongenetically encoded transcript variants, or RNA-DNA differences (RDDs). Such variation has been understudied, in part because its detection is obscured by reverse transcription (RT) and sequencing errors. It has only been evaluated for intertranscript base substitution differences. Here, we investigated transcript sequence variation for short tandem repeats (STRs). We developed the first maximum-likelihood estimator (MLE) to infer RT error and RDD rates, taking next generation sequencing error rates into account. Using the MLE, we empirically evaluated RT error and RDD rates for STRs in a large-scale DNA and RNA replicated sequencing experiment conducted in a primate species. The RT error rates increased exponentially with STR length and were biased toward expansions. The RDD rates were approximately 1 order of magnitude lower than the RT error rates. The RT error rates estimated with the MLE from a primate data set were concordant with those estimated with an independent method, barcoded RNA sequencing, from a Caenorhabditis elegans data set. Our results have important implications for medical genomics, as STR allelic variation is associated with >40 diseases. STR nonallelic transcript variation can also contribute to disease phenotype. The MLE and empirical rates presented here can be used to evaluate the probability of disease-associated transcripts arising due to RDD. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
Error Detection in Mechanized Classification Systems
ERIC Educational Resources Information Center
Hoyle, W. G.
1976-01-01
When documentary material is indexed by a mechanized classification system, and the results judged by trained professionals, the number of documents in disagreement, after suitable adjustment, defines the error rate of the system. In a test case disagreement was 22 percent and, of this 22 percent, the computer correctly identified two-thirds of…
Flight test results of the strapdown ring laser gyro tetrad inertial navigation system
NASA Technical Reports Server (NTRS)
Carestia, R. A.; Hruby, R. J.; Bjorkman, W. S.
1983-01-01
A helicopter flight test program undertaken to evaluate the performance of Tetrad (a strapdown, laser gyro inertial navigation system) is described. The results of 34 flights show a mean final navigational velocity error of 5.06 knots, with a standard deviation of 3.84 knots; a corresponding mean final position error of 2.66 n. mi., with a standard deviation of 1.48 n. mi.; and a modeled mean position error growth rate for the 34 tests of 1.96 knots, with a standard deviation of 1.09 knots. No laser gyro or accelerometer failures were detected during the flight tests. Off-line parity residual studies used simulated failures with the prerecorded flight test and laboratory test data. The airborne Tetrad system's failure-detection logic, exercised during the tests, successfully demonstrated the detection of simulated "hard" failures and the system's ability to continue navigating successfully by removing the simulated faulted sensor from the computations. Tetrad's four ring laser gyros provided reliable and accurate angular rate sensing during the 4 years of the test program, and no sensor failures were detected during the evaluation of free inertial navigation performance.
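The parity-residual idea behind this kind of failure-detection logic can be sketched for four redundant single-axis gyros; the geometry matrix and detection threshold below are hypothetical, chosen only so that the example is self-contained.

```python
import numpy as np
from scipy.linalg import null_space

# Each row is a gyro input axis (unit vector) for an assumed tetrad geometry:
# three orthogonal axes plus one skewed axis providing redundancy.
s = 1.0 / np.sqrt(3.0)
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [s,   s,   s  ]])

V = null_space(H.T).T          # 1 x 4 parity projection: V @ H == 0

def parity_residual(gyro_rates):
    """Residual stays near the noise level when all four measurements are consistent."""
    return float((V @ np.asarray(gyro_rates, float))[0])

def failure_detected(gyro_rates, threshold=0.05):
    # A residual well above the noise floor flags a failed sensor; isolating
    # which sensor failed requires additional logic not shown here.
    return abs(parity_residual(gyro_rates)) > threshold
```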
Figueroa, Priscila I; Ziman, Alyssa; Wheeler, Christine; Gornbein, Jeffrey; Monson, Michael; Calhoun, Loni
2006-09-01
To detect miscollected (wrong blood in tube [WBIT]) samples, our institution requires a second independently drawn sample (check-type [CT]) on previously untyped, non-group O patients who are likely to require transfusion. During the 17-year period addressed by this report, 94 WBIT errors were detected: 57% by comparison with a historic blood type, 7% by the CT, and 35% by other means. The CT averted 5 potential ABO-incompatible transfusions. Our corrected WBIT error rate is 1 in 3,713 for verified samples tested between 2000 and 2003, the period for which actual number of CTs performed was available. The estimated rate of WBIT for the 17-year period is 1 in 2,262 samples. ABO-incompatible transfusions due to WBIT-type errors are avoided by comparison of current blood type results with a historic type, and the CT is an effective way to create a historic type.
Using a Calculated Pulse Rate with an Artificial Neural Network to Detect Irregular Interbeats.
Yeh, Bih-Chyun; Lin, Wen-Piao
2016-03-01
Heart rate is an important clinical measure that is often used in pathological diagnosis and prognosis. Valid detection of irregular heartbeats is crucial in the clinical practice. We propose an artificial neural network using the calculated pulse rate to detect irregular interbeats. The proposed system measures the calculated pulse rate to determine an "irregular interbeat on" or "irregular interbeat off" event. If an irregular interbeat is detected, the proposed system produces a danger warning, which is helpful for clinicians. If a non-irregular interbeat is detected, the proposed system displays the calculated pulse rate. We include a flow chart of the proposed software. In an experiment, we measure the calculated pulse rates and achieve an error percentage of < 3% in 20 participants with a wide age range. When we use the calculated pulse rates to detect irregular interbeats, we find such irregular interbeats in eight participants.
Wadehn, Federico; Carnal, David; Loeliger, Hans-Andrea
2015-08-01
Heart rate variability is one of the key parameters for assessing the health status of a subject's cardiovascular system. This paper presents a local model fitting algorithm used for finding single heart beats in photoplethysmogram recordings. The local fit of exponentially decaying cosines of frequencies within the physiological range is used to detect the presence of a heart beat. Using 42 subjects from the CapnoBase database, the average heart rate error was 0.16 BPM and the standard deviation of the absolute estimation error was 0.24 BPM.
Alachiotis, Nikolaos; Vogiatzi, Emmanouella; Pavlidis, Pavlos; Stamatakis, Alexandros
2013-01-01
Automated DNA sequencers generate chromatograms that contain raw sequencing data. They also generate data that translates the chromatograms into molecular sequences of A, C, G, T, or N (undetermined) characters. Since chromatogram translation programs frequently introduce errors, a manual inspection of the generated sequence data is required. As sequence numbers and lengths increase, visual inspection and manual correction of chromatograms and corresponding sequences on a per-peak and per-nucleotide basis becomes an error-prone, time-consuming, and tedious process. Here, we introduce ChromatoGate (CG), an open-source software that accelerates and partially automates the inspection of chromatograms and the detection of sequencing errors for bidirectional sequencing runs. To provide users full control over the error correction process, a fully automated error correction algorithm has not been implemented. Initially, the program scans a given multiple sequence alignment (MSA) for potential sequencing errors, assuming that each polymorphic site in the alignment may be attributed to a sequencing error with a certain probability. The guided MSA assembly procedure in ChromatoGate detects chromatogram peaks of all characters in an alignment that lead to polymorphic sites, given a user-defined threshold. The threshold value represents the sensitivity of the sequencing error detection mechanism. After this pre-filtering, the user only needs to inspect a small number of peaks in every chromatogram to correct sequencing errors. Finally, we show that correcting sequencing errors is important, because population genetic and phylogenetic inferences can be misled by MSAs with uncorrected mis-calls. Our experiments indicate that estimates of population mutation rates can be affected two- to three-fold by uncorrected errors. PMID:24688709
Fanning, Laura; Jones, Nick; Manias, Elizabeth
2016-04-01
The implementation of automated dispensing cabinets (ADCs) in healthcare facilities appears to be increasing, in particular within Australian hospital emergency departments (EDs). While the investment in ADCs is on the increase, no studies have specifically investigated the impacts of ADCs on medication selection and preparation error rates in EDs. Our aim was to assess the impact of ADCs on medication selection and preparation error rates in an ED of a tertiary teaching hospital. Pre intervention and post intervention study involving direct observations of nurses completing medication selection and preparation activities before and after the implementation of ADCs in the original and new emergency departments within a 377-bed tertiary teaching hospital in Australia. Medication selection and preparation error rates were calculated and compared between these two periods. Secondary end points included the impact on medication error type and severity. A total of 2087 medication selection and preparations were observed among 808 patients pre and post intervention. Implementation of ADCs in the new ED resulted in a 64.7% (1.96% versus 0.69%, respectively, P = 0.017) reduction in medication selection and preparation errors. All medication error types were reduced in the post intervention study period. There was an insignificant impact on medication error severity as all errors detected were categorised as minor. The implementation of ADCs could reduce medication selection and preparation errors and improve medication safety in an ED setting. © 2015 John Wiley & Sons, Ltd.
The Limits of Coding with Joint Constraints on Detected and Undetected Error Rates
NASA Technical Reports Server (NTRS)
Dolinar, Sam; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush
2008-01-01
We develop a remarkably tight upper bound on the performance of a parameterized family of bounded angle maximum-likelihood (BA-ML) incomplete decoders. The new bound for this class of incomplete decoders is calculated from the code's weight enumerator, and is an extension of Poltyrev-type bounds developed for complete ML decoders. This bound can also be applied to bound the average performance of random code ensembles in terms of an ensemble average weight enumerator. We also formulate conditions defining a parameterized family of optimal incomplete decoders, defined to minimize both the total codeword error probability and the undetected error probability for any fixed capability of the decoder to detect errors. We illustrate the gap between optimal and BA-ML incomplete decoding via simulation of a small code.
Andersen, Claus E; Nielsen, Søren Kynde; Lindegaard, Jacob Christian; Tanderup, Kari
2009-11-01
The purpose of this study is to present and evaluate a dose-verification protocol for pulsed dose-rate (PDR) brachytherapy based on in vivo time-resolved (1 s time resolution) fiber-coupled luminescence dosimetry. Five cervix cancer patients undergoing PDR brachytherapy (Varian GammaMed Plus with 192Ir) were monitored. The treatments comprised from 10 to 50 pulses (1 pulse/h) delivered by intracavitary/interstitial applicators (tandem-ring systems and/or needles). For each patient, one or two dosimetry probes were placed directly in or close to the tumor region using stainless steel or titanium needles. Each dosimeter probe consisted of a small aluminum oxide crystal attached to an optical fiber cable (1 mm outer diameter) that could guide radioluminescence (RL) and optically stimulated luminescence (OSL) from the crystal to special readout instrumentation. Positioning uncertainty and hypothetical dose-delivery errors (interchanged guide tubes or applicator movements from +/-5 to +/-15 mm) were simulated in software in order to assess the ability of the system to detect errors. For three of the patients, the authors found no significant differences (P>0.01) for comparisons between in vivo measurements and calculated reference values at the level of dose per dwell position, dose per applicator, or total dose per pulse. The standard deviations of the dose per pulse were less than 3%, indicating a stable dose delivery and a highly stable geometry of applicators and dosimeter probes during the treatments. For the two other patients, the authors noted significant deviations for three individual pulses and for one dosimeter probe. These deviations could have been due to applicator movement during the treatment and one incorrectly positioned dosimeter probe, respectively. Computer simulations showed that the likelihood of detecting a pair of interchanged guide tubes increased by a factor of 10 or more for the considered patients when going from integrating to time-resolved dose verification. The likelihood of detecting a +/-15 mm displacement error increased by a factor of 1.5 or more. In vivo fiber-coupled RL/OSL dosimetry based on detectors placed in standard brachytherapy needles was demonstrated. The time-resolved dose-rate measurements were found to provide a good way to visualize the progression and stability of PDR brachytherapy dose delivery, and time-resolved dose-rate measurements provided an increased sensitivity for detection of dose-delivery errors compared with time-integrated dosimetry.
Investigation of an Optimum Detection Scheme for a Star-Field Mapping System
NASA Technical Reports Server (NTRS)
Aldridge, M. D.; Credeur, L.
1970-01-01
An investigation was made to determine the optimum detection scheme for a star-field mapping system that uses coded detection resulting from starlight shining through specially arranged multiple slits of a reticle. The computer solution of equations derived from a theoretical model showed that the greatest probability of detection for a given star and background intensity occurred with the use of a single transparent slit. However, use of multiple slits improved the system's ability to reject the detection of undesirable lower intensity stars, but only by decreasing the probability of detection for lower intensity stars to be mapped. Also, it was found that the coding arrangement affected the root-mean-square star-position error and that detection is possible with error in the system's detected spin rate, though at a reduced probability.
Detection and Correction of Silent Data Corruption for Large-Scale High-Performance Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fiala, David J; Mueller, Frank; Engelmann, Christian
Faults have become the norm rather than the exception for high-end computing on clusters with 10s/100s of thousands of cores. Exacerbating this situation, some of these faults remain undetected, manifesting themselves as silent errors that corrupt memory while applications continue to operate and report incorrect results. This paper studies the potential for redundancy to both detect and correct soft errors in MPI message-passing applications. Our study investigates the challenges inherent to detecting soft errors within MPI applications while providing transparent MPI redundancy. By assuming a model wherein corruption in application data manifests itself by producing differing MPI message data between replicas, we study the protocols best suited for detecting and correcting MPI data that is the result of corruption. To experimentally validate our proposed detection and correction protocols, we introduce RedMPI, an MPI library which resides in the MPI profiling layer. RedMPI is capable of both online detection and correction of soft errors that occur in MPI applications without requiring any modifications to the application source by utilizing either double or triple redundancy. Our results indicate that our most efficient consistency protocol can successfully protect applications experiencing even high rates of silent data corruption with runtime overheads between 0% and 30% as compared to unprotected applications without redundancy. Using our fault injector within RedMPI, we observe that even a single soft error can have profound effects on running applications, causing a cascading pattern of corruption that in most cases spreads to all other processes. RedMPI's protection has been shown to successfully mitigate the effects of soft errors while allowing applications to complete with correct results even in the face of errors.
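A highly simplified illustration of replica comparison for silent-error detection is given below; it is not RedMPI's actual protocol, only the basic idea of flagging divergent message payloads between replicas.

```python
import hashlib
from typing import Optional

import numpy as np

def message_digest(payload: np.ndarray) -> str:
    """Hash the raw bytes of an outgoing message buffer."""
    return hashlib.sha256(payload.tobytes()).hexdigest()

def replicas_agree(payload_a: np.ndarray, payload_b: np.ndarray,
                   payload_c: Optional[np.ndarray] = None) -> bool:
    # With double redundancy a mismatch can only be detected; with triple
    # redundancy a majority vote could also identify the corrupted copy.
    digests = [message_digest(p) for p in (payload_a, payload_b, payload_c)
               if p is not None]
    return len(set(digests)) == 1  # True when all live replicas agree
```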
Using failure mode and effects analysis to improve the safety of neonatal parenteral nutrition.
Arenas Villafranca, Jose Javier; Gómez Sánchez, Araceli; Nieto Guindo, Miriam; Faus Felipe, Vicente
2014-07-15
Failure mode and effects analysis (FMEA) was used to identify potential errors and to enable the implementation of measures to improve the safety of neonatal parenteral nutrition (PN). FMEA was used to analyze the preparation and dispensing of neonatal PN from the perspective of the pharmacy service in a general hospital. A process diagram was drafted, illustrating the different phases of the neonatal PN process. Next, the failures that could occur in each of these phases were compiled and cataloged, and a questionnaire was developed in which respondents were asked to rate the following aspects of each error: incidence, detectability, and severity. The highest scoring failures were considered high risk and identified as priority areas for improvements to be made. The evaluation process detected a total of 82 possible failures. Among the phases with the highest number of possible errors were transcription of the medical order, formulation of the PN, and preparation of material for the formulation. After the classification of these 82 possible failures and of their relative importance, a checklist was developed to achieve greater control in the error-detection process. FMEA demonstrated that use of the checklist reduced the level of risk and improved the detectability of errors. FMEA was useful for detecting medication errors in the PN preparation process and enabling corrective measures to be taken. A checklist was developed to reduce errors in the most critical aspects of the process. Copyright © 2014 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
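The prioritization step of FMEA can be sketched as a risk priority number calculation over the three rated aspects (incidence, severity, detectability); the failure modes and scores below are invented for illustration.

```python
# Each tuple: (description, incidence 1-10, severity 1-10, detectability 1-10),
# where higher detectability scores mean the failure is harder to detect.
failure_modes = [
    ("Transcription error in the medical order", 6, 8, 5),
    ("Wrong electrolyte concentration in the formulation", 3, 9, 6),
    ("Mislabeled PN bag at dispensing", 2, 7, 3),
]

# Risk priority number = incidence x severity x detectability; the highest
# RPN values mark the steps where checklist controls matter most.
ranked = sorted(
    ((desc, inc * sev * det) for desc, inc, sev, det in failure_modes),
    key=lambda item: item[1],
    reverse=True,
)
for description, rpn in ranked:
    print(f"RPN {rpn:4d}  {description}")
```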
DOE Office of Scientific and Technical Information (OSTI.GOV)
Damato, Antonio L., E-mail: adamato@lroc.harvard.edu; Viswanathan, Akila N.; Don, Sarah M.
2014-10-15
Purpose: To investigate the use of a system using electromagnetic tracking (EMT), post-processing and an error-detection algorithm for detecting errors and resolving uncertainties in high-dose-rate brachytherapy catheter digitization for treatment planning. Methods: EMT was used to localize 15 catheters inserted into a phantom using a stepwise acquisition technique. Five distinct acquisition experiments were performed. Noise associated with the acquisition was calculated. The dwell location configuration was extracted from the EMT data. A CT scan of the phantom was performed, and five distinct catheter digitization sessions were performed. No a priori registration of the CT scan coordinate system with the EMT coordinate system was performed. CT-based digitization was automatically extracted from the brachytherapy plan DICOM files (CT), and rigid registration was performed between EMT and CT dwell positions. EMT registration error was characterized in terms of the mean and maximum distance between corresponding EMT and CT dwell positions per catheter. An algorithm for error detection and identification was presented. Three types of errors were systematically simulated: swap of two catheter numbers, partial swap of catheter number identification for parts of the catheters (mix), and catheter-tip shift. Error-detection sensitivity (number of simulated scenarios correctly identified as containing an error/number of simulated scenarios containing an error) and specificity (number of scenarios correctly identified as not containing errors/number of correct scenarios) were calculated. Catheter identification sensitivity (number of catheters correctly identified as erroneous across all scenarios/number of erroneous catheters across all scenarios) and specificity (number of catheters correctly identified as correct across all scenarios/number of correct catheters across all scenarios) were calculated. The mean detected and identified shift was calculated. Results: The maximum noise ±1 standard deviation associated with the EMT acquisitions was 1.0 ± 0.1 mm, and the mean noise was 0.6 ± 0.1 mm. Registration of all the EMT and CT dwell positions was associated with a mean catheter error of 0.6 ± 0.2 mm, a maximum catheter error of 0.9 ± 0.4 mm, a mean dwell error of 1.0 ± 0.3 mm, and a maximum dwell error of 1.3 ± 0.7 mm. Error detection and catheter identification sensitivity and specificity of 100% were observed for swap, mix and shift (≥2.6 mm for error detection; ≥2.7 mm for catheter identification) errors. A mean detected shift of 1.8 ± 0.4 mm and a mean identified shift of 1.9 ± 0.4 mm were observed. Conclusions: Registration of the EMT dwell positions to the CT dwell positions was possible with a residual mean error per catheter of 0.6 ± 0.2 mm and a maximum error for any dwell of 1.3 ± 0.7 mm. These low residual registration errors show that quality assurance of the general characteristics of the catheters and of possible errors affecting one specific dwell position is possible. The sensitivity and specificity of the catheter digitization verification algorithm was 100% for swap and mix errors and for shifts ≥2.6 mm. On average, shifts ≥1.8 mm were detected, and shifts ≥1.9 mm were detected and identified.
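The rigid registration between corresponding EMT and CT dwell positions can be sketched with the standard SVD-based (Kabsch) solution; this is a generic formulation under the assumption of known point correspondences, not the study's exact pipeline.

```python
import numpy as np

def rigid_register(src, dst):
    """Return R, t minimizing ||R @ src_i + t - dst_i|| over corresponding points."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    src_c, dst_c = src - src.mean(axis=0), dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    # Correct the sign so the solution is a rotation, not a reflection.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

def registration_errors(src, dst, R, t):
    """Mean and maximum residual distance per dwell after registration."""
    residuals = np.linalg.norm((R @ np.asarray(src, float).T).T + t - dst, axis=1)
    return residuals.mean(), residuals.max()
```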
Evidence of Selection against Complex Mitotic-Origin Aneuploidy during Preimplantation Development
McCoy, Rajiv C.; Demko, Zachary P.; Ryan, Allison; Banjevic, Milena; Hill, Matthew; Sigurjonsson, Styrmir; Rabinowitz, Matthew; Petrov, Dmitri A.
2015-01-01
Whole-chromosome imbalances affect over half of early human embryos and are the leading cause of pregnancy loss. While these errors frequently arise in oocyte meiosis, many such whole-chromosome abnormalities affecting cleavage-stage embryos are the result of chromosome missegregation occurring during the initial mitotic cell divisions. The first wave of zygotic genome activation at the 4–8 cell stage results in the arrest of a large proportion of embryos, the vast majority of which contain whole-chromosome abnormalities. Thus, the full spectrum of meiotic and mitotic errors can only be detected by sampling after the initial cell divisions, but prior to this selective filter. Here, we apply 24-chromosome preimplantation genetic screening (PGS) to 28,052 single-cell day-3 blastomere biopsies and 18,387 multi-cell day-5 trophectoderm biopsies from 6,366 in vitro fertilization (IVF) cycles. We precisely characterize the rates and patterns of whole-chromosome abnormalities at each developmental stage and distinguish errors of meiotic and mitotic origin without embryo disaggregation, based on informative chromosomal signatures. We show that mitotic errors frequently involve multiple chromosome losses that are not biased toward maternal or paternal homologs. This outcome is characteristic of spindle abnormalities and chaotic cell division detected in previous studies. In contrast to meiotic errors, our data also show that mitotic errors are not significantly associated with maternal age. PGS patients referred due to previous IVF failure had elevated rates of mitotic error, while patients referred due to recurrent pregnancy loss had elevated rates of meiotic error, controlling for maternal age. These results support the conclusion that mitotic error is the predominant mechanism contributing to pregnancy losses occurring prior to blastocyst formation. This high-resolution view of the full spectrum of whole-chromosome abnormalities affecting early embryos provides insight into the cytogenetic mechanisms underlying their formation and the consequences for human fertility. PMID:26491874
Feasibility of the capnogram to monitor ventilation rate during cardiopulmonary resuscitation.
Aramendi, Elisabete; Elola, Andoni; Alonso, Erik; Irusta, Unai; Daya, Mohamud; Russell, James K; Hubner, Pia; Sterz, Fritz
2017-01-01
The rates of chest compressions (CCs) and ventilations are both important metrics to monitor the quality of cardiopulmonary resuscitation (CPR). Capnography permits monitoring ventilation, but the CCs provided during CPR corrupt the capnogram and compromise the accuracy of automatic ventilation detectors. The aim of this study was to evaluate the feasibility of an automatic algorithm based on the capnogram to detect ventilations and provide feedback on ventilation rate during CPR, specifically addressing intervals where CCs are delivered. The dataset used to develop and test the algorithm contained in-hospital and out-of-hospital cardiac arrest episodes. The method relies on adaptive thresholding to detect ventilations in the first derivative of the capnogram. The performance of the detector was reported in terms of sensitivity (SE) and Positive Predictive Value (PPV). The overall performance was reported in terms of the rate error and errors in the hyperventilation alarms. Results were given separately for the intervals with CCs. A total of 83 episodes were considered, resulting in 4880 min and 46,740 ventilations (8741 during CCs). The method showed an overall SE/PPV above 99% and 97%, respectively, even in intervals with CCs. The error for the ventilation rate was below 1.8 min-1 in any group, and >99% of the ventilation alarms were correctly detected. A method to provide accurate feedback on ventilation rate using only the capnogram is proposed. Its accuracy was proven even in intervals where the capnography signal was severely corrupted by CCs. This algorithm could be integrated into monitor/defibrillators to provide reliable feedback on ventilation rate during CPR. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
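The derivative-plus-adaptive-threshold idea can be sketched as follows; the threshold rule, refractory period, and sampling rate are placeholders for the algorithm described in the paper.

```python
import numpy as np

def detect_ventilations(co2, fs=100.0, refractory_s=1.5, k=0.4):
    """Return sample indices of detected expiration onsets in a capnogram."""
    d = np.diff(co2) * fs                       # first derivative of the capnogram
    # Adaptive threshold tied to the strongest upstrokes observed in this segment.
    threshold = k * np.percentile(d[d > 0], 95) if np.any(d > 0) else np.inf
    onsets, last = [], -np.inf
    for i, slope in enumerate(d):
        # A steep CO2 upstroke marks the start of an expiration; the refractory
        # period suppresses re-triggering on compression-induced oscillations.
        if slope > threshold and (i - last) / fs > refractory_s:
            onsets.append(i)
            last = i
    return onsets

def ventilation_rate(onsets, duration_s):
    """Ventilations per minute over the analyzed interval."""
    return len(onsets) * 60.0 / duration_s
```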
Ultra-deep mutant spectrum profiling: improving sequencing accuracy using overlapping read pairs.
Chen-Harris, Haiyin; Borucki, Monica K; Torres, Clinton; Slezak, Tom R; Allen, Jonathan E
2013-02-12
High throughput sequencing is beginning to make a transformative impact in the area of viral evolution. Deep sequencing has the potential to reveal the mutant spectrum within a viral sample at high resolution, thus enabling the close examination of viral mutational dynamics both within- and between-hosts. The challenge however, is to accurately model the errors in the sequencing data and differentiate real viral mutations, particularly those that exist at low frequencies, from sequencing errors. We demonstrate that overlapping read pairs (ORP) -- generated by combining short fragment sequencing libraries and longer sequencing reads -- significantly reduce sequencing error rates and improve rare variant detection accuracy. Using this sequencing protocol and an error model optimized for variant detection, we are able to capture a large number of genetic mutations present within a viral population at ultra-low frequency levels (<0.05%). Our rare variant detection strategies have important implications beyond viral evolution and can be applied to any basic and clinical research area that requires the identification of rare mutations.
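A toy sketch of the overlapping-read-pair idea: where both reads of a short fragment cover the same base, a call is kept only if the reads agree, which suppresses independent sequencing errors. Real pipelines also weigh base qualities and model error rates explicitly; that is omitted here.

```python
def consensus_from_pair(read1: str, read2_revcomp: str) -> str:
    """Both inputs are assumed already aligned to the same fragment coordinates."""
    assert len(read1) == len(read2_revcomp)
    return "".join(
        b1 if b1 == b2 else "N"           # mask disagreements as ambiguous
        for b1, b2 in zip(read1, read2_revcomp)
    )

print(consensus_from_pair("ACGTTGCA", "ACGATGCA"))  # -> "ACGNTGCA"
```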
Judging the judges' performance in rhythmic gymnastics.
Flessas, Konstantinos; Mylonas, Dimitris; Panagiotaropoulou, Georgia; Tsopani, Despina; Korda, Alexandrea; Siettos, Constantinos; Di Cagno, Alessandra; Evdokimidis, Ioannis; Smyrnis, Nikolaos
2015-03-01
Rhythmic gymnastics (RG) is an aesthetic event balancing between art and sport that also has a performance rating system (Code of Points) given by the International Gymnastics Federation. It is one of the sports in which competition results greatly depend on the judges' evaluation. In the current study, we explored the judges' performance in a five-gymnast ensemble routine. An expert-novice paradigm (10 international-level, 10 national-level, and 10 novice-level judges) was implemented under a fully simulated procedure of judgment in a five-gymnast ensemble routine of RG using two videos of routines performed by the Greek national team of RG. Simultaneous recordings of two-dimensional eye movements were taken during the judgment procedure to assess the percentage of time spent by each judge viewing the videos and fixation performance of each judge when an error in gymnast performance had occurred. All judge level groups had very modest performance of error recognition on gymnasts' routines, and the best international judges reported approximately 40% of true errors. Novice judges spent significantly more time viewing the videos compared with national and international judges and spent significantly more time fixating detected errors than the other two groups. National judges were the only group that made efficient use of fixation to detect errors. The fact that international-level judges outperformed both other groups, while not relying on visual fixation to detect errors, suggests that these experienced judges probably make use of other cognitive strategies, increasing their overall error detection efficiency, which was, however, still far below optimum.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barbee, D; McCarthy, A; Galavis, P
Purpose: Errors found during initial physics plan checks frequently require replanning and reprinting, resulting in decreased departmental efficiency. Additionally, errors may be missed during physics checks, resulting in potential treatment errors or interruption. This work presents a process control created using the Eclipse Scripting API (ESAPI) enabling dosimetrists and physicists to detect potential errors in the Eclipse treatment planning system prior to performing any plan approvals or printing. Methods: Potential failure modes for five categories were generated based on available ESAPI (v11) patient object properties: Images, Contours, Plans, Beams, and Dose. An Eclipse script plugin (PlanCheck) was written in C# to check for the errors most frequently observed clinically in each of the categories. The PlanCheck algorithms were devised to check technical aspects of plans, such as deliverability (e.g. minimum EDW MUs), in addition to ensuring that policy and procedures relating to planning were being followed. The effect on clinical workflow efficiency was measured by tracking the plan document error rate and plan revision/retirement rates in the Aria database over monthly intervals. Results: The number of potential failure modes the PlanCheck script is currently capable of checking in each category is as follows: Images (6), Contours (7), Plans (8), Beams (17), and Dose (4). Prior to implementation of the PlanCheck plugin, the observed error rates in errored plan documents and revised/retired plans in the Aria database were 20% and 22%, respectively. Error rates were seen to decrease gradually over time as adoption of the script improved. Conclusion: A process control created using the Eclipse scripting API enabled plan checks to occur within the planning system, resulting in reduced error rates and improved efficiency. Future work includes: initiating full FMEA for the planning workflow, extending categories to include additional checks outside of ESAPI via Aria database queries, and eventual automated plan checks.
Online Error Reporting for Managing Quality Control Within Radiology.
Golnari, Pedram; Forsberg, Daniel; Rosipko, Beverly; Sunshine, Jeffrey L
2016-06-01
Information technology systems within health care, such as the picture archiving and communication system (PACS) in radiology, can have a positive impact on production but can also risk compromising quality. The widespread use of PACS has removed the previous feedback loop between radiologists and technologists. Instead of direct communication of quality discrepancies found for an examination, the radiologist submitted a paper-based quality-control report. A web-based issue-reporting tool can help restore some of the feedback loop and also provide possibilities for more detailed analysis of submitted errors. The purpose of this study was to evaluate the hypothesis that data from use of online error reporting software for quality control can focus our efforts within our department. For the 372,258 radiologic examinations conducted during the 6-month study period, 930 errors (390 exam protocol, 390 exam validation, and 150 exam technique) were submitted, corresponding to an error rate of 0.25%. Within the exam protocol category, technologist documentation had the highest number of submitted errors in ultrasonography (77 errors [44%]), while imaging protocol errors were the most frequent error subtype for the computed tomography modality (35 errors [18%]). Positioning and incorrect accession were the most frequent errors in the exam technique and exam validation error categories, respectively, for nearly all of the modalities. An error rate of less than 1% could signify a system with very high quality; however, a more likely explanation is that not all errors were detected or reported. Furthermore, staff reception of the error reporting system could also affect the reporting rate.
Cost-benefit analysis: newborn screening for inborn errors of metabolism in Lebanon.
Khneisser, I; Adib, S; Assaad, S; Megarbane, A; Karam, P
2015-12-01
Few countries in the Middle East-North Africa region have adopted national newborn screening for inborn errors of metabolism by tandem mass spectrometry (MS/MS). We aimed to evaluate the cost-benefit of newborn screening for such disorders in Lebanon, as a model for other developing countries in the region. Average costs of expected care for inborn errors of metabolism cases as a group, between ages 0 and 18, early and late diagnosed, were calculated from 2007 to 2013. The monetary value of early detection using MS/MS was compared with that of clinical "late detection", including cost of diagnosis and hospitalizations. During this period, 126000 newborns were screened. Incidence of detected cases was 1/1482, which can be explained by high consanguinity rates in Lebanon. A reduction by half of direct cost of care, reaching on average 31,631 USD per detected case was shown. This difference more than covers the expense of starting a newborn screening programme. Although this model does not take into consideration the indirect benefits of the better quality of life of those screened early, it can be argued that direct and indirect costs saved through early detection of these disorders are important enough to justify universal publicly-funded screening, especially in developing countries with high consanguinity rates, as shown through this data from Lebanon. © The Author(s) 2015.
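A back-of-the-envelope check using the figures reported above; it ignores programme start-up costs and the indirect benefits that the study notes separately.

```python
screened = 126_000          # newborns screened over 2007-2013
incidence = 1 / 1482        # detected cases per newborn
saving_per_case_usd = 31_631

detected_cases = screened * incidence                 # roughly 85 cases
direct_savings = detected_cases * saving_per_case_usd
print(f"~{detected_cases:.0f} cases, ~{direct_savings:,.0f} USD in direct care savings")
```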
NASA Technical Reports Server (NTRS)
Spence, Rodney L.
1993-01-01
The important principles of direct- and heterodyne-detection optical free-space communications are reviewed. Signal-to-noise ratio (SNR) and bit-error-rate (BER) expressions are derived for both the direct-detection and heterodyne-detection optical receivers. For the heterodyne system, performance degradation resulting from received-signal and local oscillator-beam misalignment and laser phase noise is analyzed. Determination of interfering background power from local and extended background sources is discussed. The BER performance of direct- and heterodyne-detection optical links in the presence of Rayleigh-distributed random pointing and tracking errors is described. Finally, several optical systems employing Nd:YAG, GaAs, and CO2 laser sources are evaluated and compared to assess their feasibility in providing high-data-rate (10- to 1000-Mbps) Mars-to-Earth communications. It is shown that the root mean square (rms) pointing and tracking accuracy is a critical factor in defining the system transmitting laser-power requirements and telescope size and that, for a given rms error, there is an optimum telescope aperture size that minimizes the required power. The results of the analysis conducted indicate that, barring the achievement of extremely small rms pointing and tracking errors (less than 0.2 microrad), the two most promising types of optical systems are those that use an Nd:YAG laser (lambda = 1.064 microns) with high-order pulse-position modulation (PPM) and direct detection, and those that use a CO2 laser (lambda = 10.6 microns) with phase-shift keying homodyne modulation and coherent detection. For example, for a PPM order of M = 64 and an rms pointing accuracy of 0.4 microrad, an Nd:YAG system can be used to implement a 100-Mbps Mars link with a 40-cm transmitting telescope, a 20-W laser, and a 10-m receiving photon bucket. Under the same conditions, a CO2 system would require 3-m transmitting and receiving telescopes and a 32-W laser to implement such a link. Other types of optical systems, such as semiconductor laser systems, are impractical in the presence of large rms pointing errors because of the high power requirements of the 100-Mbps Mars link, even when optimal-size telescopes are used.
Type I Error Inflation for Detecting DIF in the Presence of Impact
ERIC Educational Resources Information Center
DeMars, Christine E.
2010-01-01
In this brief explication, two challenges for using differential item functioning (DIF) measures when there are large group differences in true proficiency are illustrated. Each of these difficulties may lead to inflated Type I error rates, for very different reasons. One problem is that groups matched on observed score are not necessarily well…
Data-driven region-of-interest selection without inflating Type I error rate.
Brooks, Joseph L; Zoumpoulaki, Alexia; Bowman, Howard
2017-01-01
In ERP and other large multidimensional neuroscience data sets, researchers often select regions of interest (ROIs) for analysis. The method of ROI selection can critically affect the conclusions of a study by causing the researcher to miss effects in the data or to detect spurious effects. In practice, to avoid inflating Type I error rate (i.e., false positives), ROIs are often based on a priori hypotheses or independent information. However, this can be insensitive to experiment-specific variations in effect location (e.g., latency shifts) reducing power to detect effects. Data-driven ROI selection, in contrast, is nonindependent and uses the data under analysis to determine ROI positions. Therefore, it has potential to select ROIs based on experiment-specific information and increase power for detecting effects. However, data-driven methods have been criticized because they can substantially inflate Type I error rate. Here, we demonstrate, using simulations of simple ERP experiments, that data-driven ROI selection can indeed be more powerful than a priori hypotheses or independent information. Furthermore, we show that data-driven ROI selection using the aggregate grand average from trials (AGAT), despite being based on the data at hand, can be safely used for ROI selection under many circumstances. However, when there is a noise difference between conditions, using the AGAT can inflate Type I error and should be avoided. We identify critical assumptions for use of the AGAT and provide a basis for researchers to use, and reviewers to assess, data-driven methods of ROI localization in ERP and other studies. © 2016 Society for Psychophysiological Research.
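A schematic of AGAT-based ROI selection on simulated data is sketched below; the array shapes, window width, injected effect, and paired test are assumptions for illustration rather than the authors' simulation settings.

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
n_subj, n_time = 20, 300
cond_a = rng.normal(0, 1, (n_subj, n_time))
cond_b = rng.normal(0, 1, (n_subj, n_time))
cond_b[:, 140:170] += 0.8          # injected effect with an uncertain latency

# Aggregate grand average: pool both conditions before locating the ROI,
# so the selection does not favor either condition.
agat = (cond_a + cond_b).mean(axis=0) / 2.0
peak = int(np.argmax(np.abs(agat)))
roi = slice(max(peak - 15, 0), peak + 15)     # ROI centered on the AGAT peak

# The condition contrast is then tested only inside the selected window.
t, p = ttest_rel(cond_a[:, roi].mean(axis=1), cond_b[:, roi].mean(axis=1))
print(f"ROI samples {roi.start}-{roi.stop}, t = {t:.2f}, p = {p:.3f}")
```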
2012-01-01
Background Presented is the method “Detection and Outline Error Estimates” (DOEE) for assessing rater agreement in the delineation of multiple sclerosis (MS) lesions. The DOEE method divides operator or rater assessment into two parts: 1) Detection Error (DE) -- rater agreement in detecting the same regions to mark, and 2) Outline Error (OE) -- agreement of the raters in outlining of the same lesion. Methods DE, OE and Similarity Index (SI) values were calculated for two raters tested on a set of 17 fluid-attenuated inversion-recovery (FLAIR) images of patients with MS. DE, OE, and SI values were tested for dependence with mean total area (MTA) of the raters' Region of Interests (ROIs). Results When correlated with MTA, neither DE (ρ = .056, p=.83) nor the ratio of OE to MTA (ρ = .23, p=.37), referred to as Outline Error Rate (OER), exhibited significant correlation. In contrast, SI is found to be strongly correlated with MTA (ρ = .75, p < .001). Furthermore, DE and OER values can be used to model the variation in SI with MTA. Conclusions The DE and OER indices are proposed as a better method than SI for comparing rater agreement of ROIs, which also provide specific information for raters to improve their agreement. PMID:22812697
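For reference, the Similarity Index discussed above is the Dice overlap between two raters' lesion masks; a minimal sketch follows, while the DE/OE decomposition itself follows rules specific to the paper and is not reproduced here.

```python
import numpy as np

def similarity_index(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """SI = 2|A intersect B| / (|A| + |B|) for binary lesion masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```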
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nose, Takayuki, E-mail: nose-takayuki@nms.ac.jp; Chatani, Masashi; Otani, Yuki
Purpose: High-dose-rate (HDR) brachytherapy misdeliveries can occur at any institution, and they can cause disastrous results. Even a patient's death has been reported. Misdeliveries could be avoided with real-time verification methods. In 1996, we developed a modified C-arm fluoroscopic verification of the HDR Iridium 192 source position to prevent these misdeliveries. This method provided excellent image quality sufficient to detect errors, and it has been in clinical use at our institutions for 20 years. The purpose of the current study is to introduce the mechanisms and validity of our straightforward C-arm fluoroscopic verification method. Methods and Materials: Conventional X-ray fluoroscopic images are degraded by spurious signals and quantum noise from Iridium 192 photons, which make source verification impractical. To improve image quality, we quadrupled the C-arm fluoroscopic X-ray dose per pulse. The pulse rate was reduced by a factor of 4 to keep the average exposure compliant with Japanese medical regulations. The images were then displayed with quarter-frame rates. Results: Sufficient quality was obtained to enable observation of the source position relative to both the applicators and the anatomy. With this method, 2 errors were detected among 2031 treatment sessions for 370 patients within a 6-year period. Conclusions: With the use of a modified C-arm fluoroscopic verification method, treatment errors that were otherwise overlooked were detected in real time. This method should be given consideration for widespread use.
Nose, Takayuki; Chatani, Masashi; Otani, Yuki; Teshima, Teruki; Kumita, Shinichirou
2017-03-15
High-dose-rate (HDR) brachytherapy misdeliveries can occur at any institution, and they can cause disastrous results. Even a patient's death has been reported. Misdeliveries could be avoided with real-time verification methods. In 1996, we developed a modified C-arm fluoroscopic verification of an HDR Iridium 192 source position to prevent these misdeliveries. This method provided excellent image quality sufficient to detect errors, and it has been in clinical use at our institutions for 20 years. The purpose of the current study is to introduce the mechanisms and validity of our straightforward C-arm fluoroscopic verification method. Conventional X-ray fluoroscopic images are degraded by spurious signals and quantum noise from Iridium 192 photons, which make source verification impractical. To improve image quality, we quadrupled the C-arm fluoroscopic X-ray dose per pulse. The pulse rate was reduced by a factor of 4 to keep the average exposure compliant with Japanese medical regulations. The images were then displayed with quarter-frame rates. Sufficient quality was obtained to enable observation of the source position relative to both the applicators and the anatomy. With this method, 2 errors were detected among 2031 treatment sessions for 370 patients within a 6-year period. With the use of a modified C-arm fluoroscopic verification method, treatment errors that were otherwise overlooked were detected in real time. This method should be given consideration for widespread use. Copyright © 2016 Elsevier Inc. All rights reserved.
Prakash, Varuna; Koczmara, Christine; Savage, Pamela; Trip, Katherine; Stewart, Janice; McCurdie, Tara; Cafazzo, Joseph A; Trbovich, Patricia
2014-11-01
Nurses are frequently interrupted during medication verification and administration; however, few interventions exist to mitigate resulting errors, and the impact of these interventions on medication safety is poorly understood. The study objectives were to (A) assess the effects of interruptions on medication verification and administration errors, and (B) design and test the effectiveness of targeted interventions at reducing these errors. The study focused on medication verification and administration in an ambulatory chemotherapy setting. A simulation laboratory experiment was conducted to determine interruption-related error rates during specific medication verification and administration tasks. Interventions to reduce these errors were developed through a participatory design process, and their error reduction effectiveness was assessed through a postintervention experiment. Significantly more nurses committed medication errors when interrupted than when uninterrupted. With use of interventions when interrupted, significantly fewer nurses made errors in verifying medication volumes contained in syringes (16/18; 89% preintervention error rate vs 11/19; 58% postintervention error rate; p=0.038; Fisher's exact test) and programmed in ambulatory pumps (17/18; 94% preintervention vs 11/19; 58% postintervention; p=0.012). The rate of error commission significantly decreased with use of interventions when interrupted during intravenous push (16/18; 89% preintervention vs 6/19; 32% postintervention; p=0.017) and pump programming (7/18; 39% preintervention vs 1/19; 5% postintervention; p=0.017). No statistically significant differences were observed for other medication verification tasks. Interruptions can lead to medication verification and administration errors. Interventions were highly effective at reducing unanticipated errors of commission in medication administration tasks, but showed mixed effectiveness at reducing predictable errors of detection in medication verification tasks. These findings can be generalised and adapted to mitigate interruption-related errors in other settings where medication verification and administration are required. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Prakash, Varuna; Koczmara, Christine; Savage, Pamela; Trip, Katherine; Stewart, Janice; McCurdie, Tara; Cafazzo, Joseph A; Trbovich, Patricia
2014-01-01
Background Nurses are frequently interrupted during medication verification and administration; however, few interventions exist to mitigate resulting errors, and the impact of these interventions on medication safety is poorly understood. Objective The study objectives were to (A) assess the effects of interruptions on medication verification and administration errors, and (B) design and test the effectiveness of targeted interventions at reducing these errors. Methods The study focused on medication verification and administration in an ambulatory chemotherapy setting. A simulation laboratory experiment was conducted to determine interruption-related error rates during specific medication verification and administration tasks. Interventions to reduce these errors were developed through a participatory design process, and their error reduction effectiveness was assessed through a postintervention experiment. Results Significantly more nurses committed medication errors when interrupted than when uninterrupted. With use of interventions when interrupted, significantly fewer nurses made errors in verifying medication volumes contained in syringes (16/18; 89% preintervention error rate vs 11/19; 58% postintervention error rate; p=0.038; Fisher's exact test) and programmed in ambulatory pumps (17/18; 94% preintervention vs 11/19; 58% postintervention; p=0.012). The rate of error commission significantly decreased with use of interventions when interrupted during intravenous push (16/18; 89% preintervention vs 6/19; 32% postintervention; p=0.017) and pump programming (7/18; 39% preintervention vs 1/19; 5% postintervention; p=0.017). No statistically significant differences were observed for other medication verification tasks. Conclusions Interruptions can lead to medication verification and administration errors. Interventions were highly effective at reducing unanticipated errors of commission in medication administration tasks, but showed mixed effectiveness at reducing predictable errors of detection in medication verification tasks. These findings can be generalised and adapted to mitigate interruption-related errors in other settings where medication verification and administration are required. PMID:24906806
Pightling, Arthur W.; Petronella, Nicholas; Pagotto, Franco
2014-01-01
The wide availability of whole-genome sequencing (WGS) and an abundance of open-source software have made detection of single-nucleotide polymorphisms (SNPs) in bacterial genomes an increasingly accessible and effective tool for comparative analyses. Thus, ensuring that real nucleotide differences between genomes (i.e., true SNPs) are detected at high rates and that the influences of errors (such as false positive SNPs, ambiguously called sites, and gaps) are mitigated is of utmost importance. The choices researchers make regarding the generation and analysis of WGS data can greatly influence the accuracy of short-read sequence alignments and, therefore, the efficacy of such experiments. We studied the effects of some of these choices, including: i) depth of sequencing coverage, ii) choice of reference-guided short-read sequence assembler, iii) choice of reference genome, and iv) whether to perform read-quality filtering and trimming, on our ability to detect true SNPs and on the frequencies of errors. We performed benchmarking experiments, during which we assembled simulated and real Listeria monocytogenes strain 08-5578 short-read sequence datasets of varying quality with four commonly used assemblers (BWA, MOSAIK, Novoalign, and SMALT), using reference genomes of varying genetic distances, and with or without read pre-processing (i.e., quality filtering and trimming). We found that assemblies of at least 50-fold coverage provided the most accurate results. In addition, MOSAIK yielded the fewest errors when reads were aligned to a nearly identical reference genome, while using SMALT to align reads against a reference sequence that is ∼0.82% distant from 08-5578 at the nucleotide level resulted in the detection of the greatest numbers of true SNPs and the fewest errors. Finally, we show that whether read pre-processing improves SNP detection depends upon the choice of reference sequence and assembler. In total, this study demonstrates that researchers should test a variety of conditions to achieve optimal results. PMID:25144537
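The benchmarking workflow above ultimately reduces to comparing SNP calls against a known truth set. A minimal sketch of that comparison is given below, assuming called and true variants have already been reduced to hashable position/allele records (e.g., parsed from VCF files); the function name and record format are assumptions for illustration.

```python
def snp_benchmark(called, truth):
    """Compare called SNPs against a truth set; inputs are iterables of
    hashable records such as (chromosome, position, alt_allele) tuples."""
    called, truth = set(called), set(truth)
    tp = len(called & truth)      # true SNPs recovered
    fp = len(called - truth)      # false-positive calls
    fn = len(truth - called)      # true SNPs missed
    sensitivity = tp / (tp + fn) if truth else 0.0
    precision = tp / (tp + fp) if called else 0.0
    return {"TP": tp, "FP": fp, "FN": fn,
            "sensitivity": sensitivity, "precision": precision}

# e.g. benchmarking one assembler/reference combination against a simulated truth set
metrics = snp_benchmark(
    called=[("chr1", 1200, "A"), ("chr1", 3400, "T"), ("chr2", 150, "G")],
    truth=[("chr1", 1200, "A"), ("chr2", 150, "G"), ("chr2", 900, "C")],
)
```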
[The endpoint detection of cough signal in continuous speech].
Yang, Guoqing; Mo, Hongqiang; Li, Wen; Lian, Lianfang; Zheng, Zeguang
2010-06-01
Endpoint detection of cough signals in continuous speech was investigated in order to improve the efficiency and accuracy of manual recognition and computer-based automatic recognition. First, the short-time zero crossing rate (ZCR) is used to identify suspicious cough segments, and a short-time energy threshold is derived from the acoustic characteristics of coughs. Short-time energy is then combined with the short-time ZCR to implement endpoint detection of coughs in continuous speech. To evaluate the method, the actual number of coughs in each recording was first identified by two experienced doctors using a graphical user interface (GUI); the recordings were then analyzed by the automatic endpoint detection program in Matlab 7.0. Comparison of the two results showed that the error rate of undetected coughs was 2.18% and that 98.13% of noise, silence, and speech was removed. The approach to setting the short-time energy threshold is robust, and the endpoint detection program removes most speech and noise while maintaining a low error rate.
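To make the two features concrete, the sketch below computes short-time energy and zero crossing rate per frame and marks contiguous runs of frames exceeding an energy threshold as candidate cough segments. Frame sizes, the ZCR criterion and its polarity, and the thresholds are assumptions for illustration; the paper derives its energy threshold from the acoustic characteristics of coughs.

```python
import numpy as np

def frame_features(signal, frame_len=400, hop=200):
    """Short-time energy and zero crossing rate per frame (illustrative sizes
    for 16 kHz audio: 25 ms frames with 50% overlap)."""
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        x = signal[start:start + frame_len].astype(float)
        energy = np.sum(x ** 2)
        zcr = np.mean(np.abs(np.diff(np.sign(x))) > 0)
        frames.append((energy, zcr))
    return np.array(frames)

def detect_endpoints(features, energy_thresh, zcr_max):
    """Mark frames as candidate cough segments when energy is high and the
    ZCR stays below an assumed cough-like ceiling."""
    energy, zcr = features[:, 0], features[:, 1]
    active = (energy > energy_thresh) & (zcr < zcr_max)
    # return (start_frame, end_frame) pairs for contiguous active runs
    edges = np.diff(active.astype(int))
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    if active[0]:
        starts = np.r_[0, starts]
    if active[-1]:
        ends = np.r_[ends, len(active)]
    return list(zip(starts, ends))
```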
Reliable Channel-Adapted Error Correction: Bacon-Shor Code Recovery from Amplitude Damping
NASA Astrophysics Data System (ADS)
Piedrafita, Álvaro; Renes, Joseph M.
2017-12-01
We construct two simple error correction schemes adapted to amplitude damping noise for Bacon-Shor codes and investigate their prospects for fault-tolerant implementation. Both consist solely of Clifford gates and require far fewer qubits, relative to the standard method, to achieve exact correction to a desired order in the damping rate. The first, employing one-bit teleportation and single-qubit measurements, needs only one-fourth as many physical qubits, while the second, using just stabilizer measurements and Pauli corrections, needs only half. The improvements stem from the fact that damping events need only be detected, not corrected, and that effective phase errors arising due to undamped qubits occur at a lower rate than damping errors. For error correction that is itself subject to damping noise, we show that existing fault-tolerance methods can be employed for the latter scheme, while the former can be made to avoid potential catastrophic errors and can easily cope with damping faults in ancilla qubits.
Bayesian analysis of energy and count rate data for detection of low count rate radioactive sources.
Klumpp, John; Brandl, Alexander
2015-03-01
A particle counting and detection system is proposed that searches for elevated count rates in multiple energy regions simultaneously. The system analyzes time-interval data (e.g., time between counts), as this was shown to be a more sensitive technique for detecting low count rate sources compared to analyzing counts per unit interval (Luo et al. 2013). Two distinct versions of the detection system are developed. The first is intended for situations in which the sample is fixed and can be measured for an unlimited amount of time. The second version is intended to detect sources that are physically moving relative to the detector, such as a truck moving past a fixed roadside detector or a waste storage facility passing beneath an airborne detector. In both cases, the detection system is expected to be active indefinitely; i.e., it is an online detection system. Both versions of the multi-energy detection systems are compared to their respective gross count rate detection systems in terms of Type I and Type II error rates and sensitivity.
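The time-interval idea rests on a simple fact: under background alone, the time needed to accumulate a fixed number of counts follows a gamma (Erlang) distribution, so an improbably short accumulation time signals a source. The sketch below illustrates that single-energy, frequentist version of the idea under an assumed background rate; it is not the paper's Bayesian multi-energy system, and all names and numbers are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
background_cps = 5.0      # assumed background count rate (counts per second)
n_counts = 20             # evaluate the time spanned by the most recent 20 counts

# The span of n counts contains n-1 inter-arrival intervals, so under background
# it is Gamma(shape=n-1, scale=1/rate).  Alarm if the span is in the lower 0.1% tail.
alarm_span = stats.gamma.ppf(0.001, a=n_counts - 1, scale=1.0 / background_cps)

def time_interval_alarm(count_times):
    """Alarm when the last n_counts counts arrived faster than background predicts."""
    if len(count_times) < n_counts:
        return False
    return count_times[-1] - count_times[-n_counts] < alarm_span

# quick check: simulate background plus a source, 15 counts per second in total
arrivals = np.cumsum(rng.exponential(1.0 / 15.0, size=500))
print(any(time_interval_alarm(arrivals[: i + 1]) for i in range(len(arrivals))))
```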
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arnold, Anthony, E-mail: anthony.arnold@sesiahs.health.nsw.gov.a; Delaney, Geoff P.; Cassapi, Lynette
Purpose: Radiotherapy is a common treatment for cancer patients. Although the incidence of error is low, errors can be severe or affect significant numbers of patients. In addition, errors will often not manifest until long periods after treatment. This study describes the development of an incident reporting tool that allows categorical analysis and time trend reporting, covering the first 3 years of use. Methods and Materials: A radiotherapy-specific incident analysis system was established. Staff members were encouraged to report actual errors and near-miss events detected at the prescription, simulation, planning, or treatment phases of radiotherapy delivery. Trend reporting was reviewed monthly. Results: Reports were analyzed for the first 3 years of operation (May 2004-2007). A total of 688 reports were received during the study period. The actual error rate was 0.2% per treatment episode. During the study period, the actual error rates decreased significantly from 1% per year to 0.3% per year (p < 0.001), as did the total event report rates (p < 0.0001). There were 3.5 times as many near misses reported compared with actual errors. Conclusions: This system has allowed real-time analysis of events within a radiation oncology department and has contributed to a reduced error rate through a focus on learning and prevention from the near-miss reports. Plans are underway to develop this reporting tool for Australia and New Zealand.
Farwell, Lawrence A; Richardson, Drew C; Richardson, Graham M
2013-08-01
Brain fingerprinting detects concealed information stored in the brain by measuring brainwave responses. We compared P300 and P300-MERMER event-related brain potentials for error rate/accuracy and statistical confidence in four field/real-life studies. Seventy-six tests detected the presence or absence of information regarding (1) real-life events including felony crimes; (2) real crimes with substantial consequences (either a judicial outcome, i.e., evidence admitted in court, or a $100,000 reward for beating the test); (3) knowledge unique to FBI agents; and (4) knowledge unique to explosives (EOD/IED) experts. With both P300 and P300-MERMER, error rate was 0%: determinations were 100% accurate, no false negatives or false positives; also no indeterminates. Countermeasures had no effect. Median statistical confidence for determinations was 99.9% with P300-MERMER and 99.6% with P300. Brain fingerprinting methods and scientific standards for laboratory and field applications are discussed. Major differences in methods that produce different results are identified. Markedly different methods in other studies have produced over 10 times higher error rates and markedly lower statistical confidences than those of the present studies, our previous studies, and independent replications. Data support the hypothesis that accuracy, reliability, and validity depend on following the brain fingerprinting scientific standards outlined herein.
Joint Denoising/Compression of Image Contours via Shape Prior and Context Tree
NASA Astrophysics Data System (ADS)
Zheng, Amin; Cheung, Gene; Florencio, Dinei
2018-07-01
With the advent of depth sensing technologies, the extraction of object contours in images---a common and important pre-processing step for later higher-level computer vision tasks like object detection and human action recognition---has become easier. However, acquisition noise in captured depth images means that detected contours suffer from unavoidable errors. In this paper, we propose to jointly denoise and compress detected contours in an image for bandwidth-constrained transmission to a client, who can then carry out aforementioned application-specific tasks using the decoded contours as input. We first prove theoretically that in general a joint denoising / compression approach can outperform a separate two-stage approach that first denoises then encodes contours lossily. Adopting a joint approach, we first propose a burst error model that models typical errors encountered in an observed string y of directional edges. We then formulate a rate-constrained maximum a posteriori (MAP) problem that trades off the posterior probability p(x'|y) of an estimated string x' given y with its code rate R(x'). We design a dynamic programming (DP) algorithm that solves the posed problem optimally, and propose a compact context representation called total suffix tree (TST) that can reduce complexity of the algorithm dramatically. Experimental results show that our joint denoising / compression scheme outperformed a competing separate scheme in rate-distortion performance noticeably.
Terkola, R; Czejka, M; Bérubé, J
2017-08-01
Medication errors are a significant cause of morbidity and mortality especially with antineoplastic drugs, owing to their narrow therapeutic index. Gravimetric workflow software systems have the potential to reduce volumetric errors during intravenous antineoplastic drug preparation which may occur when verification is reliant on visual inspection. Our aim was to detect medication errors with possible critical therapeutic impact as determined by the rate of prevented medication errors in chemotherapy compounding after implementation of gravimetric measurement. A large-scale, retrospective analysis of data was carried out, related to medication errors identified during preparation of antineoplastic drugs in 10 pharmacy services ("centres") in five European countries following the introduction of an intravenous workflow software gravimetric system. Errors were defined as errors in dose volumes outside tolerance levels, identified during weighing stages of preparation of chemotherapy solutions which would not otherwise have been detected by conventional visual inspection. The gravimetric system detected that 7.89% of the 759 060 doses of antineoplastic drugs prepared at participating centres between July 2011 and October 2015 had error levels outside the accepted tolerance range set by individual centres, and prevented these doses from reaching patients. The proportion of antineoplastic preparations with deviations >10% ranged from 0.49% to 5.04% across sites, with a mean of 2.25%. The proportion of preparations with deviations >20% ranged from 0.21% to 1.27% across sites, with a mean of 0.71%. There was considerable variation in error levels for different antineoplastic agents. Introduction of a gravimetric preparation system for antineoplastic agents detected and prevented dosing errors which would not have been recognized with traditional methods and could have resulted in toxicity or suboptimal therapeutic outcomes for patients undergoing anticancer treatment. © 2017 The Authors. Journal of Clinical Pharmacy and Therapeutics Published by John Wiley & Sons Ltd.
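The error definition used here, a weighed dose whose implied volume falls outside a site-defined tolerance, reduces to a simple check, sketched below. The 5% tolerance, the function name, and the example numbers are assumptions for illustration; the participating centres in the study set their own limits.

```python
def check_dose(expected_volume_ml, measured_mass_g, density_g_per_ml,
               tolerance=0.05):
    """Flag a preparation whose gravimetrically inferred volume deviates from
    the prescribed volume by more than the assumed 5% tolerance."""
    measured_volume_ml = measured_mass_g / density_g_per_ml
    deviation = (measured_volume_ml - expected_volume_ml) / expected_volume_ml
    return abs(deviation) > tolerance, deviation

# e.g. a 20.0 mL dose of a solution with density 1.02 g/mL weighed at 19.0 g
flagged, dev = check_dose(20.0, 19.0, 1.02)
print(flagged, round(100 * dev, 1))   # True, about -6.9% deviation
```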
Chaos-on-a-chip secures data transmission in optical fiber links.
Argyris, Apostolos; Grivas, Evangellos; Hamacher, Michael; Bogris, Adonis; Syvridis, Dimitris
2010-03-01
Security in information exchange plays a central role in the deployment of modern communication systems. Besides algorithms, chaos is exploited as a real-time high-speed data encryption technique which enhances the security at the hardware level of optical networks. In this work, compact, fully controllable and stably operating monolithic photonic integrated circuits (PICs) that generate broadband chaotic optical signals are incorporated in chaos-encoded optical transmission systems. Data sequences with rates up to 2.5 Gb/s with small amplitudes are completely encrypted within these chaotic carriers. Only authorized counterparts, supplied with identical chaos generating PICs that are able to synchronize and reproduce the same carriers, can benefit from data exchange with bit-rates up to 2.5 Gb/s with error rates below 10^(-12). Eavesdroppers with access to the communication link have a 0.5 probability of correctly detecting each bit by direct signal detection, while eavesdroppers supplied with even slightly unmatched hardware receivers are restricted to data extraction error rates well above 10^(-3).
Narayan, Sreenath; Kalhan, Satish C.; Wilson, David L.
2012-01-01
Purpose To reduce swaps in fat-water separation methods, a particular issue on 7T small animal scanners due to field inhomogeneity, using image postprocessing innovations that detect and correct errors in the B0 field map. Materials and Methods Fat-water decompositions and B0 field maps were computed for images of mice acquired on a 7T Bruker BioSpec scanner, using a computationally efficient method for solving the Markov Random Field formulation of the multi-point Dixon model. The B0 field maps were processed with a novel hole-filling method, based on edge strength between regions, and a novel k-means method, based on field-map intensities, which were iteratively applied to automatically detect and reinitialize error regions in the B0 field maps. Errors were manually assessed in the B0 field maps and chemical parameter maps both before and after error correction. Results Partial swaps were found in 6% of images when processed with FLAWLESS. After REFINED correction, only 0.7% of images contained partial swaps, resulting in an 88% decrease in error rate. Complete swaps were not problematic. Conclusion Ex post facto error correction is a viable supplement to a priori techniques for producing globally smooth B0 field maps, without partial swaps. With our processing pipeline, it is possible to process image volumes rapidly, robustly, and almost automatically. PMID:23023815
Narayan, Sreenath; Kalhan, Satish C; Wilson, David L
2013-05-01
To reduce swaps in fat-water separation methods, a particular issue on 7 Tesla (T) small animal scanners due to field inhomogeneity, using image postprocessing innovations that detect and correct errors in the B0 field map. Fat-water decompositions and B0 field maps were computed for images of mice acquired on a 7T Bruker BioSpec scanner, using a computationally efficient method for solving the Markov Random Field formulation of the multi-point Dixon model. The B0 field maps were processed with a novel hole-filling method, based on edge strength between regions, and a novel k-means method, based on field-map intensities, which were iteratively applied to automatically detect and reinitialize error regions in the B0 field maps. Errors were manually assessed in the B0 field maps and chemical parameter maps both before and after error correction. Partial swaps were found in 6% of images when processed with FLAWLESS. After REFINED correction, only 0.7% of images contained partial swaps, resulting in an 88% decrease in error rate. Complete swaps were not problematic. Ex post facto error correction is a viable supplement to a priori techniques for producing globally smooth B0 field maps, without partial swaps. With our processing pipeline, it is possible to process image volumes rapidly, robustly, and almost automatically. Copyright © 2012 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Lohrmann, Carol A.
1990-03-01
Interoperability of commercial Land Mobile Radios (LMR) and the military's tactical LMR is highly desirable if the U.S. government is to respond effectively in a national emergency or in a joint military operation. This ability to talk securely and immediately across agency and military service boundaries is often overlooked. One way to ensure interoperability is to develop and promote Federal communication standards (FS). This thesis surveys one area of the proposed FS 1024 for LMRs; namely, the error detection and correction (EDAC) of the message indicator (MI) bits used for cryptographic synchronization. Several EDAC codes are examined (Hamming, Quadratic Residue, hard decision Golay and soft decision Golay), tested on three FORTRAN programmed channel simulations (INMARSAT, Gaussian and constant burst width), compared and analyzed (based on bit error rates and percent of error-free super-frame runs) so that a best code can be recommended. Out of the four codes under study, the soft decision Golay code (24,12) is evaluated to be the best. This finding is based on the code's ability to detect and correct errors as well as the relative ease of implementation of the algorithm.
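For readers unfamiliar with the block codes compared above, the sketch below shows the smallest member of the family studied: a systematic Hamming (7,4) encoder with a single-error-correcting decoder. It only illustrates how EDAC of a short bit field works in principle; it is not the FS 1024 MI-bit scheme, and the matrices are the textbook systematic form.

```python
import numpy as np

# Systematic (7,4) Hamming code: G = [I4 | P], H = [P^T | I3]
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(data4):
    """Map 4 data bits to a 7-bit codeword."""
    return (np.array(data4) @ G) % 2

def decode(recv7):
    """Correct any single-bit error and return the 4 data bits."""
    r = np.array(recv7).copy()
    syndrome = (H @ r) % 2
    if syndrome.any():
        # the syndrome equals the column of H at the error position
        err = int(np.where((H.T == syndrome).all(axis=1))[0][0])
        r[err] ^= 1
    return r[:4]

codeword = encode([1, 0, 1, 1])
codeword[2] ^= 1                        # inject a single-bit channel error
assert list(decode(codeword)) == [1, 0, 1, 1]
```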
Effect of Multiple Testing Adjustment in Differential Item Functioning Detection
ERIC Educational Resources Information Center
Kim, Jihye; Oshima, T. C.
2013-01-01
In a typical differential item functioning (DIF) analysis, a significance test is conducted for each item. As a test consists of multiple items, such multiple testing may increase the possibility of making a Type I error at least once. The goal of this study was to investigate how to control a Type I error rate and power using adjustment…
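Since the abstract is truncated, the sketch below simply illustrates two standard adjustments such a study would compare: Bonferroni control of the family-wise error rate and the Benjamini-Hochberg step-up procedure, applied to a vector of per-item DIF p-values. It is a generic illustration with made-up p-values, not the study's specific procedure.

```python
import numpy as np

def bonferroni_reject(pvals, alpha=0.05):
    """Family-wise control: reject only if p < alpha / m."""
    p = np.asarray(pvals, dtype=float)
    return p < alpha / p.size

def benjamini_hochberg_reject(pvals, alpha=0.05):
    """FDR control: reject the k smallest p-values, where k is the largest i
    with p_(i) <= i * alpha / m."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    below = p[order] <= alpha * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = int(np.max(np.nonzero(below)[0]))
        reject[order[: k + 1]] = True
    return reject

item_pvals = [0.001, 0.012, 0.030, 0.045, 0.210, 0.700]   # hypothetical per-item DIF p-values
print(bonferroni_reject(item_pvals), benjamini_hochberg_reject(item_pvals))
```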
A Decision Theoretic Approach to Evaluate Radiation Detection Algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nobles, Mallory A.; Sego, Landon H.; Cooley, Scott K.
2013-07-01
There are a variety of sensor systems deployed at U.S. border crossings and ports of entry that scan for illicit nuclear material. In this work, we develop a framework for comparing the performance of detection algorithms that interpret the output of these scans and determine when secondary screening is needed. We optimize each algorithm to minimize its risk, or expected loss. We measure an algorithm’s risk by considering its performance over a sample, the probability distribution of threat sources, and the consequence of detection errors. While it is common to optimize algorithms by fixing one error rate and minimizing another, our framework allows one to simultaneously consider multiple types of detection errors. Our framework is flexible and easily adapted to many different assumptions regarding the probability of a vehicle containing illicit material, and the relative consequences of false positive and false negative errors. Our methods can therefore inform decision makers of the algorithm family and parameter values which best reduce the threat from illicit nuclear material, given their understanding of the environment at any point in time. To illustrate the applicability of our methods, in this paper, we compare the risk from two families of detection algorithms and discuss the policy implications of our results.
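The risk being minimized can be written as an expected loss, risk(t) = P(threat)·C_FN·FNR(t) + P(no threat)·C_FP·FPR(t), for an alarm threshold t. The sketch below picks the threshold that minimizes this quantity on simulated detector scores; the prior, the costs, the score distributions, and all names are assumptions for illustration, not the study's models.

```python
import numpy as np

def expected_risk(threshold, scores_threat, scores_benign,
                  p_threat=1e-4, cost_fn=1e6, cost_fp=1e2):
    """Expected loss for the alarm rule 'score >= threshold'."""
    fnr = np.mean(scores_threat < threshold)      # missed threats
    fpr = np.mean(scores_benign >= threshold)     # unnecessary secondary screenings
    return p_threat * cost_fn * fnr + (1 - p_threat) * cost_fp * fpr

# choose the threshold with minimum expected loss over a candidate grid
rng = np.random.default_rng(0)
threat = rng.normal(3.0, 1.0, 10_000)             # hypothetical detector scores
benign = rng.normal(0.0, 1.0, 100_000)
grid = np.linspace(-2, 6, 200)
risks = [expected_risk(t, threat, benign) for t in grid]
best_threshold = grid[int(np.argmin(risks))]
```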
Improving patient safety through quality assurance.
Raab, Stephen S
2006-05-01
Anatomic pathology laboratories use several quality assurance tools to detect errors and to improve patient safety. To review some of the anatomic pathology laboratory patient safety quality assurance practices. Different standards and measures in anatomic pathology quality assurance and patient safety were reviewed. Frequency of anatomic pathology laboratory error, variability in the use of specific quality assurance practices, and use of data for error reduction initiatives. Anatomic pathology error frequencies vary according to the detection method used. Based on secondary review, a College of American Pathologists Q-Probes study showed that the mean laboratory error frequency was 6.7%. A College of American Pathologists Q-Tracks study measuring frozen section discrepancy found that laboratories improved the longer they monitored and shared data. There is a lack of standardization across laboratories even for governmentally mandated quality assurance practices, such as cytologic-histologic correlation. The National Institutes of Health funded a consortium of laboratories to benchmark laboratory error frequencies, perform root cause analysis, and design error reduction initiatives, using quality assurance data. Based on the cytologic-histologic correlation process, these laboratories found an aggregate nongynecologic error frequency of 10.8%. Based on gynecologic error data, the laboratory at my institution used Toyota production system processes to lower gynecologic error frequencies and to improve Papanicolaou test metrics. Laboratory quality assurance practices have been used to track error rates, and laboratories are starting to use these data for error reduction initiatives.
Comparison of three PCR-based assays for SNP genotyping in sugar beet
USDA-ARS?s Scientific Manuscript database
Background: PCR allelic discrimination technologies have broad applications in the detection of single nucleotide polymorphisms (SNPs) in genetics and genomics. The use of fluorescence-tagged probes is the leading method for targeted SNP detection, but assay costs and error rates could be improved t...
Unexpected Direction of Differential Item Functioning
ERIC Educational Resources Information Center
Park, Sangwook
2011-01-01
Many studies have been conducted to evaluate the performance of DIF detection methods, when two groups have different ability distributions. Such studies typically have demonstrated factors that are associated with inflation of Type I error rates in DIF detection, such as mean ability differences. However, no study has examined how the direction…
[Medical errors: inevitable but preventable].
Giard, R W
2001-10-27
Medical errors are increasingly reported in the lay press. Studies have shown dramatic error rates of 10 percent or even higher. From a methodological point of view, studying the frequency and causes of medical errors is far from simple. Clinical decisions on diagnostic or therapeutic interventions are always taken within a clinical context. Reviewing outcomes of interventions without taking into account both the intentions and the arguments for a particular action will limit the conclusions from a study on the rate and preventability of errors. The interpretation of the preventability of medical errors is fraught with difficulties and probably highly subjective. Blaming the doctor personally does not do justice to the actual situation and especially the organisational framework. Attention to and improvement of the organisational aspects of error are far more important than litigating against the person. To err is and will remain human, and if we want to reduce the incidence of faults we must be able to learn from our mistakes. That requires an open attitude towards medical mistakes, a continuous effort in their detection, a sound analysis and, where feasible, the institution of preventive measures.
Detection of IMRT delivery errors based on a simple constancy check of transit dose by using an EPID
NASA Astrophysics Data System (ADS)
Baek, Tae Seong; Chung, Eun Ji; Son, Jaeman; Yoon, Myonggeun
2015-11-01
Beam delivery errors during intensity modulated radiotherapy (IMRT) were detected based on a simple constancy check of the transit dose by using an electronic portal imaging device (EPID). Twenty-one IMRT plans were selected from various treatment sites, and the transit doses during treatment were measured by using an EPID. Transit doses were measured 11 times for each course of treatment, and the constancy check was based on gamma index (3%/3 mm) comparisons between a reference dose map (the first measured transit dose) and test dose maps (the following ten measured dose maps). In a simulation using an anthropomorphic phantom, the average passing rate of the tested transit dose was 100% for three representative treatment sites (head & neck, chest, and pelvis), indicating that IMRT was highly constant for normal beam delivery. The average passing rate of the transit dose for 1224 IMRT fields from 21 actual patients was 97.6% ± 2.5%, with the lower rate possibly being due to inaccuracies of patient positioning or anatomic changes. An EPID-based simple constancy check may provide information about IMRT beam delivery errors during treatment.
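The constancy check rests on the gamma index: for each point in the reference dose map, gamma is the minimum over the test map of the combined dose-difference and distance-to-agreement terms, and a point passes when gamma ≤ 1. The brute-force sketch below computes a global 3%/3 mm gamma passing rate for two small 2D dose maps; it omits low-dose thresholding and interpolation, and the function name and example maps are assumptions.

```python
import numpy as np

def gamma_pass_rate(ref, test, spacing_mm, dose_tol=0.03, dta_mm=3.0):
    """Brute-force global 2D gamma analysis (no low-dose cutoff, no interpolation).
    dose_tol is a fraction of the reference maximum; dta_mm is the
    distance-to-agreement criterion."""
    ny, nx = ref.shape
    yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    dose_norm = dose_tol * ref.max()
    gamma = np.empty_like(ref, dtype=float)
    for i in range(ny):
        for j in range(nx):
            dist2 = ((yy - i) ** 2 + (xx - j) ** 2) * spacing_mm ** 2
            dose2 = (test - ref[i, j]) ** 2
            gamma[i, j] = np.sqrt(np.min(dist2 / dta_mm ** 2 + dose2 / dose_norm ** 2))
    return 100.0 * np.mean(gamma <= 1.0)

reference = np.outer(np.hanning(40), np.hanning(40))        # stand-in reference dose map
test_map = reference * 1.02                                 # 2% uniform dose error
print(gamma_pass_rate(reference, test_map, spacing_mm=2.5)) # expected: 100.0
```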
Frank R. Thompson; Monica J. Schwalbach
1995-01-01
We report results of a point count survey of breeding birds on Hoosier National Forest in Indiana. We determined sample size requirements to detect differences in means and the effects of count duration and plot size on individual detection rates. Sample size requirements ranged from 100 to >1000 points with Type I and II error rates of <0.1 and 0.2. Sample...
Etzel, C J; Shete, S; Beasley, T M; Fernandez, J R; Allison, D B; Amos, C I
2003-01-01
Non-normality of the phenotypic distribution can affect power to detect quantitative trait loci in sib pair studies. Previously, we observed that Winsorizing the sib pair phenotypes increased the power of quantitative trait locus (QTL) detection for both Haseman-Elston (H-E) least-squares tests [Hum Hered 2002;53:59-67] and maximum likelihood-based variance components (MLVC) analysis [Behav Genet (in press)]. Winsorizing the phenotypes led to a slight increase in type I error in H-E tests and a slight decrease in type I error for MLVC analysis. Herein, we considered transforming the sib pair phenotypes using the Box-Cox family of transformations. Data were simulated for normal and non-normal (skewed and kurtic) distributions. Phenotypic values were replaced by Box-Cox transformed values. Twenty thousand replications were performed for three H-E tests of linkage and the likelihood ratio test (LRT), the Wald test and other robust versions based on the MLVC method. We calculated the relative nominal inflation rate as the ratio of observed empirical type I error divided by the set alpha level (5, 1 and 0.1% alpha levels). MLVC tests applied to non-normal data had inflated type I errors (rate ratio greater than 1.0), which were controlled best by Box-Cox transformation and to a lesser degree by Winsorizing. For example, for non-transformed, skewed phenotypes (derived from a chi2 distribution with 2 degrees of freedom), the rates of empirical type I error with respect to set alpha level=0.01 were 0.80, 4.35 and 7.33 for the original H-E test, LRT and Wald test, respectively. For the same alpha level=0.01, these rates were 1.12, 3.095 and 4.088 after Winsorizing and 0.723, 1.195 and 1.905 after Box-Cox transformation. Winsorizing reduced inflated error rates for the leptokurtic distribution (derived from a Laplace distribution with mean 0 and variance 8). Further, power (adjusted for empirical type I error) at the 0.01 alpha level ranged from 4.7 to 17.3% across all tests using the non-transformed, skewed phenotypes, from 7.5 to 20.1% after Winsorizing and from 12.6 to 33.2% after Box-Cox transformation. Likewise, power (adjusted for empirical type I error) using leptokurtic phenotypes at the 0.01 alpha level ranged from 4.4 to 12.5% across all tests with no transformation, from 7 to 19.2% after Winsorizing and from 4.5 to 13.8% after Box-Cox transformation. Thus the Box-Cox transformation apparently provided the best type I error control and maximal power among the procedures we considered for analyzing a non-normal, skewed distribution (chi2), while Winsorizing worked best for the non-normal, kurtic distribution (Laplace). We repeated the same simulations using a larger sample size (200 sib pairs) and found similar results. Copyright 2003 S. Karger AG, Basel
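Both transformations discussed here are available off the shelf. The sketch below applies 5%-per-tail Winsorizing and a maximum-likelihood Box-Cox transformation to a skewed (chi-square, 2 df) sample like the one simulated in the study, and prints the skewness before and after; the cut-offs and sample size are assumptions.

```python
import numpy as np
from scipy import stats
from scipy.stats.mstats import winsorize

rng = np.random.default_rng(0)
skewed = rng.chisquare(df=2, size=500)       # skewed phenotype, as simulated in the study

wins = np.asarray(winsorize(skewed, limits=(0.05, 0.05)))   # clip 5% in each tail (assumed cut-off)
boxcox_vals, lam = stats.boxcox(skewed)                     # ML estimate of lambda; data must be > 0

print("skewness raw / Winsorized / Box-Cox:",
      stats.skew(skewed), stats.skew(wins), stats.skew(boxcox_vals))
```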
Differentially coherent quadrature-quadrature phase shift keying (Q2PSK)
NASA Astrophysics Data System (ADS)
Saha, Debabrata; El-Ghandour, Osama
The quadrature-quadrature phase-shift-keying (Q2PSK) signaling scheme uses the vertices of a hypercube of dimension four. A generalized Q2PSK signaling format for differentially coherent detection at the receiver is considered. Performance in the presence of additive white Gaussian noise (AWGN) is analyzed. The symbol error rate is found to be approximately twice the symbol error rate in a quaternary DPSK system operating at the same Eb/N0. However, the bandwidth efficiency of differential Q2PSK is substantially higher than that of quaternary DPSK.
Fault detection and isolation in motion monitoring system.
Kim, Duk-Jin; Suk, Myoung Hoon; Prabhakaran, B
2012-01-01
Pervasive computing has become a very active research field. A wrist-worn device can trace human movement to record motion boundaries and can support studies of social-life patterns based on a person's localized visiting areas. Pervasive computing also supports patient monitoring: a daily monitoring system enables longitudinal studies of conditions such as Alzheimer's disease, Parkinson's disease, or obesity. Due to the nature of on-body wireless monitoring sensors, however, signal noise or faulty-sensor errors can be present at any time. Much prior work has addressed these problems only with large sensor deployments. In this paper, we present faulty sensor detection and isolation using only two on-body sensors. We investigate three different types of sensor errors: the SHORT error, the CONSTANT error, and the NOISY SENSOR error (see details in Section V). Our experimental results show that the success rates of isolating faulty signals average over 91.5% for fault type 1, over 92% for fault type 2, and over 99% for fault type 3, with a fault prior of 30% sensor errors.
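As a rough illustration of the three fault types named above, the sketch below applies rule-of-thumb checks to a window of sensor samples: a single spiked sample suggests a SHORT fault, a stuck (zero-variance) window suggests a CONSTANT fault, and variance far above a fault-free reference suggests a NOISY SENSOR fault. The thresholds and reference statistics are assumptions; the paper's detector is learned from data rather than hand-set.

```python
import numpy as np

def classify_fault(window, reference_std=1.0,
                   flat_tol=1e-6, spike_factor=5.0, noise_factor=4.0):
    """Rule-of-thumb screening for the three fault types named in the abstract."""
    w = np.asarray(window, dtype=float)
    med = np.median(w)
    outliers = np.abs(w - med) > spike_factor * reference_std
    if outliers.sum() == 1:
        return "SHORT"                          # exactly one spiked sample
    if w.std() < flat_tol:
        return "CONSTANT"                       # signal stuck at one value
    if w.std() > noise_factor * reference_std:
        return "NOISY SENSOR"                   # variance far above normal
    return "OK"

# e.g. a window with one large spike among otherwise ordinary samples
print(classify_fault([0.1, -0.2, 0.0, 9.5, 0.1, -0.1], reference_std=0.2))   # SHORT
```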
Error rates in forensic DNA analysis: definition, numbers, impact and communication.
Kloosterman, Ate; Sjerps, Marjan; Quak, Astrid
2014-09-01
Forensic DNA casework is currently regarded as one of the most important types of forensic evidence, and important decisions in intelligence and justice are based on it. However, errors occasionally occur and may have very serious consequences. In other domains, error rates have been defined and published. The forensic domain is lagging behind concerning this transparency for various reasons. In this paper we provide definitions and observed frequencies for different types of errors at the Human Biological Traces Department of the Netherlands Forensic Institute (NFI) over the years 2008-2012. Furthermore, we assess their actual and potential impact and describe how the NFI deals with the communication of these numbers to the legal justice system. We conclude that the observed relative frequency of quality failures is comparable to studies from clinical laboratories and genetic testing centres. Furthermore, this frequency is constant over the five-year study period. The most common causes of failures related to the laboratory process were contamination and human error. Most human errors could be corrected, whereas gross contamination in crime samples often resulted in irreversible consequences. Hence this type of contamination is identified as the most significant source of error. Of the known contamination incidents, most were detected by the NFI quality control system before the report was issued to the authorities, and thus did not lead to flawed decisions like false convictions. However in a very limited number of cases crucial errors were detected after the report was issued, sometimes with severe consequences. Many of these errors were made in the post-analytical phase. The error rates reported in this paper are useful for quality improvement and benchmarking, and contribute to an open research culture that promotes public trust. However, they are irrelevant in the context of a particular case. Here case-specific probabilities of undetected errors are needed. These should be reported, separately from the match probability, when requested by the court or when there are internal or external indications for error. It should also be made clear that there are various other issues to consider, like DNA transfer. Forensic statistical models, in particular Bayesian networks, may be useful to take the various uncertainties into account and demonstrate their effects on the evidential value of the forensic DNA results. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
A Robust Method to Detect Zero Velocity for Improved 3D Personal Navigation Using Inertial Sensors
Xu, Zhengyi; Wei, Jianming; Zhang, Bo; Yang, Weijun
2015-01-01
This paper proposes a robust zero velocity (ZV) detector algorithm to accurately calculate stationary periods in a gait cycle. The proposed algorithm adopts an effective gait cycle segmentation method and introduces a Bayesian network (BN) model based on the measurements of inertial sensors and kinesiology knowledge to infer the ZV period. During the detected ZV period, an Extended Kalman Filter (EKF) is used to estimate the error states and calibrate the position error. The experiments reveal that the removal rate of ZV false detections by the proposed method increases by 80% compared with the traditional method at high walking speeds. Furthermore, based on the detected ZV, the Personal Inertial Navigation System (PINS) algorithm aided by the EKF performs better, especially in the altitude estimate. PMID:25831086
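For contrast with the Bayesian-network detector described above, the "traditional" style of ZV detection can be as simple as thresholding inertial measurements over a sliding window, as sketched below: a sample is declared zero-velocity when the accelerometer magnitude stays near gravity and the gyroscope magnitude stays small. The window length and thresholds are assumed values for illustration only.

```python
import numpy as np

GRAVITY = 9.81  # m/s^2

def zero_velocity_mask(accel, gyro, win=15, acc_tol=0.5, gyro_tol=0.3):
    """Simple stance-phase detector for (n, 3) accelerometer and gyroscope arrays:
    a sample is ZV when, over a sliding window, the accelerometer magnitude stays
    within acc_tol of gravity and the gyroscope magnitude stays below gyro_tol."""
    acc_mag = np.linalg.norm(accel, axis=1)
    gyr_mag = np.linalg.norm(gyro, axis=1)
    n = len(acc_mag)
    mask = np.zeros(n, dtype=bool)
    half = win // 2
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        still_acc = np.all(np.abs(acc_mag[lo:hi] - GRAVITY) < acc_tol)
        still_gyr = np.all(gyr_mag[lo:hi] < gyro_tol)
        mask[i] = still_acc and still_gyr
    return mask
```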
Point counts from clustered populations: Lessons from an experiment with Hawaiian crows
Hayward, G.D.; Kepler, C.B.; Scott, J.M.
1991-01-01
We designed an experiment to identify factors contributing most to error in counts of Hawaiian Crow or Alala (Corvus hawaiiensis) groups that are detected aurally. Seven observers failed to detect calling Alala on 197 of 361 3-min point counts on four transects extending from cages with captive Alala. A detection curve describing the relation between frequency of flock detection and distance typified the distribution expected in transect or point counts. Failure to detect calling Alala was affected most by distance, observer, and Alala calling frequency. The number of individual Alala calling was not important in detection rate. Estimates of the number of Alala calling (flock size) were biased and imprecise: average difference between number of Alala calling and number heard was 3.24 (±0.277). Distance, observer, number of Alala calling, and Alala calling frequency all contributed to errors in estimates of group size (P < 0.0001). Multiple regression suggested that number of Alala calling contributed most to errors. These results suggest that well-designed point counts may be used to estimate the number of Alala flocks but cast doubt on attempts to estimate flock size when individuals are counted aurally.
An Evaluation of Commercial Pedometers for Monitoring Slow Walking Speed Populations.
Beevi, Femina H A; Miranda, Jorge; Pedersen, Christian F; Wagner, Stefan
2016-05-01
Pedometers are considered desirable devices for monitoring physical activity. Two population groups of interest include patients having undergone surgery in the lower extremities or who are otherwise weakened through disease, medical treatment, or surgery procedures, as well as the slow walking senior population. For these population groups, pedometers must be able to perform reliably and accurately at slow walking speeds. The objectives of this study were to evaluate the step count accuracy of three commercially available pedometers, the Yamax (Tokyo, Japan) Digi-Walker(®) SW-200 (YM), the Omron (Kyoto, Japan) HJ-720 (OM), and the Fitbit (San Francisco, CA) Zip (FB), at slow walking speeds, specifically at 1, 2, and 3 km/h, and to raise awareness of the necessity of focusing research on step-counting devices and algorithms for slow walking populations. Fourteen participants 29.93 ±4.93 years of age were requested to walk on a treadmill at the three specified speeds, in four trials of 100 steps each. The devices were worn by the participants on the waist belt. The pedometer counts were recorded, and the error percentage was calculated. The error rate of all three evaluated pedometers decreased with the increase of speed: at 1 km/h the error rates varied from 87.11% (YM) to 95.98% (FB), at 2 km/h the error rates varied from 17.27% (FB) to 46.46% (YM), and at 3 km/h the error rates varied from 22.46% (YM) to a slight overcount of 0.70% (FB). It was observed that all the evaluated devices have high error rates at 1 km/h and mixed error rates at 2 km/h, and at 3 km/h the error rates are the smallest of the three assessed speeds, with the OM and the FB having a slight overcount. These results show that research on pedometers' software and hardware should focus more on accurate step detection at slow walking speeds.
Farwell, Lawrence A.; Richardson, Drew C.; Richardson, Graham M.; Furedy, John J.
2014-01-01
A classification concealed information test (CIT) used the “brain fingerprinting” method of applying P300 event-related potential (ERP) in detecting information that is (1) acquired in real life and (2) unique to US Navy experts in military medicine. Military medicine experts and non-experts were asked to push buttons in response to three types of text stimuli. Targets contain known information relevant to military medicine, are identified to subjects as relevant, and require pushing one button. Subjects are told to push another button to all other stimuli. Probes contain concealed information relevant to military medicine, and are not identified to subjects. Irrelevants contain equally plausible, but incorrect/irrelevant information. Error rate was 0%. Median and mean statistical confidences for individual determinations were 99.9% with no indeterminates (results lacking sufficiently high statistical confidence to be classified). We compared error rate and statistical confidence for determinations of both information present and information absent produced by classification CIT (Is a probe ERP more similar to a target or to an irrelevant ERP?) vs. comparison CIT (Does a probe produce a larger ERP than an irrelevant?) using P300 plus the late negative component (LNP; together, P300-MERMER). Comparison CIT produced a significantly higher error rate (20%) and lower statistical confidences: mean 67%; information-absent mean was 28.9%, less than chance (50%). We compared analysis using P300 alone with the P300 + LNP. P300 alone produced the same 0% error rate but significantly lower statistical confidences. These findings add to the evidence that the brain fingerprinting methods as described here provide sufficient conditions to produce less than 1% error rate and greater than 95% median statistical confidence in a CIT on information obtained in the course of real life that is characteristic of individuals with specific training, expertise, or organizational affiliation. PMID:25565941
Eliminating ambiguity in digital signals
NASA Technical Reports Server (NTRS)
Weber, W. J., III
1979-01-01
In a multiamplitude minimum shift keying (MAMSK) transmission system, a method of differential encoding overcomes the ambiguity problem associated with advanced digital-transmission techniques with little or no penalty in transmission rate, error rate, or system complexity. The principle of the method is that, if the signal points are properly encoded and decoded, the bits are detected correctly regardless of phase ambiguities.
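The principle can be shown with a binary differential encoder: what is transmitted is the running XOR of the data, so a decoder that looks only at changes between successive detected bits recovers the data even when the whole stream is inverted by a phase ambiguity (only the first bit can be affected). This is a generic binary illustration of differential encoding, not the MAMSK modulator itself.

```python
def diff_encode(bits):
    """Transmit the running XOR of the data bits (reference bit assumed to be 0)."""
    out, state = [], 0
    for b in bits:
        state ^= b
        out.append(state)
    return out

def diff_decode(bits):
    """Recover data as the change between successive received bits."""
    out, prev = [], 0
    for b in bits:
        out.append(b ^ prev)
        prev = b
    return out

data = [1, 0, 1, 1, 0, 0, 1]
tx = diff_encode(data)
inverted = [b ^ 1 for b in tx]        # a 180-degree phase ambiguity flips every bit
print(diff_decode(tx))                # [1, 0, 1, 1, 0, 0, 1]
print(diff_decode(inverted))          # only the first bit differs from the data
```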
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, T; Kumaraswamy, L
Purpose: Detection of treatment delivery errors is important in radiation therapy. However, accurate quantification of delivery errors is also of great importance. This study aims to evaluate the 3DVH software’s ability to accurately quantify delivery errors. Methods: Three VMAT plans (prostate, H&N and brain) were randomly chosen for this study. First, we evaluated whether delivery errors could be detected by gamma evaluation. Conventional per-beam IMRT QA was performed with the ArcCHECK diode detector for the original plans and for the following modified plans: (1) induced dose difference errors up to ±4.0%, (2) control point (CP) deletion (3 to 10 CPs were deleted), and (3) gantry angle shift error (a uniform 3-degree shift). 2D and 3D gamma evaluation were performed for all plans through SNC Patient and 3DVH, respectively. Subsequently, we investigated the accuracy of 3DVH analysis for all cases. This part evaluated, using the Eclipse TPS plans as the standard, whether 3DVH can accurately model the changes in clinically relevant metrics caused by the delivery errors. Results: 2D evaluation seemed to be more sensitive to delivery errors. The average differences between Eclipse-predicted and 3DVH results for each pair of specific DVH constraints were within 2% for all three types of error-induced treatment plans, illustrating the fact that 3DVH is fairly accurate in quantifying the delivery errors. Another interesting observation was that even though the gamma pass rates for the error plans are high, the DVHs showed significant differences between the original plan and the error-induced plans in both Eclipse and 3DVH analysis. Conclusion: The 3DVH software is shown to accurately quantify the error in delivered dose based on clinically relevant DVH metrics, errors that a conventional gamma-based pre-treatment QA might not necessarily detect.
Rast, Philippe; Hofer, Scott M.
2014-01-01
We investigated the power to detect variances and covariances in rates of change in the context of existing longitudinal studies using linear bivariate growth curve models. Power was estimated by means of Monte Carlo simulations. Our findings show that typical longitudinal study designs have substantial power to detect both variances and covariances among rates of change in a variety of cognitive, physical functioning, and mental health outcomes. We performed simulations to investigate the interplay among number and spacing of occasions, total duration of the study, effect size, and error variance on power and required sample size. The relation of growth rate reliability (GRR) and effect size to the sample size required to achieve power ≥ .80 was non-linear, with rapidly decreasing sample sizes needed as GRR increases. The results presented here stand in contrast to previous simulation results and recommendations (Hertzog, Lindenberger, Ghisletta, & von Oertzen, 2006; Hertzog, von Oertzen, Ghisletta, & Lindenberger, 2008; von Oertzen, Ghisletta, & Lindenberger, 2010), which are limited due to confounds between study length and number of waves, confounds between error variance and GCR, and parameter values that are largely out of bounds of actual study values. Power to detect change is generally low in the early phases (i.e. first years) of longitudinal studies but can substantially increase if the design is optimized. We recommend additional assessments, including embedded intensive measurement designs, to improve power in the early phases of long-term longitudinal studies. PMID:24219544
Uemura, Kazuki; Hasegawa, Takashi; Tougou, Hiroki; Shuhei, Takahashi; Uchiyama, Yasushi
2015-01-01
We aimed to clarify postural control deficits in older adults with mild cognitive impairment (MCI) at high risk of falling by addressing the inhibitory process. This study involved 376 community-dwelling older adults with MCI. Participants were instructed to execute forward stepping on the side indicated by the central arrow while ignoring the 2 flanking arrows on each side (→→→→→, congruent, or →→←→→, incongruent). Initial weight transfer direction errors [anticipatory postural adjustment (APA) errors], step execution times, and divided phases (reaction, APA, and swing phases) were measured from vertical force data. Participants were categorized as fallers (n = 37) and non-fallers (n = 339) based on fall experiences in the last 12 months. There were no differences in the step execution times, swing phases, step error rates, and APA error rates between groups, but fallers had a significantly longer APA phase relative to non-fallers in trials of the incongruent condition with APA errors (p = 0.005). Fallers also had a longer reaction phase in trials with the correct APA, regardless of the condition (p = 0.01). Analyses of choice stepping with visual interference can detect prolonged postural preparation as a specific falling-associated deficit in older adults with MCI. © 2015 S. Karger AG, Basel.
Ruiz-Gutierrez, Viviana; Hooten, Melvin B.; Campbell Grant, Evan H.
2016-01-01
Biological monitoring programmes are increasingly relying upon large volumes of citizen-science data to improve the scope and spatial coverage of information, challenging the scientific community to develop design and model-based approaches to improve inference. Recent statistical models in ecology have been developed to accommodate false-negative errors, although current work points to false-positive errors as equally important sources of bias. This is of particular concern for the success of any monitoring programme given that rates as small as 3% could lead to the overestimation of the occurrence of rare events by as much as 50%, and even small false-positive rates can severely bias estimates of occurrence dynamics. We present an integrated, computationally efficient Bayesian hierarchical model to correct for false-positive and false-negative errors in detection/non-detection data. Our model combines independent, auxiliary data sources with field observations to improve the estimation of false-positive rates, when a subset of field observations cannot be validated a posteriori or assumed as perfect. We evaluated the performance of the model across a range of occurrence rates, false-positive and false-negative errors, and quantity of auxiliary data. The model performed well under all simulated scenarios, and we were able to identify critical auxiliary data characteristics which resulted in improved inference. We applied our false-positive model to a large-scale, citizen-science monitoring programme for anurans in the north-eastern United States, using auxiliary data from an experiment designed to estimate false-positive error rates. Not correcting for false-positive rates resulted in biased estimates of occupancy in 4 of the 10 anuran species we analysed, leading to an overestimation of the average number of occupied survey routes by as much as 70%. The framework we present for data collection and analysis is able to efficiently provide reliable inference for occurrence patterns using data from a citizen-science monitoring programme. However, our approach is applicable to data generated by any type of research and monitoring programme, independent of skill level or scale, when effort is placed on obtaining auxiliary information on false-positive rates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Hsin-Chen; Tan, Jun; Dolly, Steven
2015-02-15
Purpose: One of the most critical steps in radiation therapy treatment is accurate tumor and critical organ-at-risk (OAR) contouring. Both manual and automated contouring processes are prone to errors and to a large degree of inter- and intraobserver variability. These are often due to the limitations of imaging techniques in visualizing human anatomy as well as to inherent anatomical variability among individuals. Physicians/physicists have to reverify all the radiation therapy contours of every patient before using them for treatment planning, which is tedious, laborious, and still not an error-free process. In this study, the authors developed a general strategy based on novel geometric attribute distribution (GAD) models to automatically detect radiation therapy OAR contouring errors and facilitate the current clinical workflow. Methods: Considering the radiation therapy structures’ geometric attributes (centroid, volume, and shape), the spatial relationship of neighboring structures, as well as anatomical similarity of individual contours among patients, the authors established GAD models to characterize the interstructural centroid and volume variations, and the intrastructural shape variations of each individual structure. The GAD models are scalable and deformable, and constrained by their respective principal attribute variations calculated from training sets with verified OAR contours. A new iterative weighted GAD model-fitting algorithm was developed for contouring error detection. Receiver operating characteristic (ROC) analysis was employed in a unique way to optimize the model parameters to satisfy clinical requirements. A total of forty-four head-and-neck patient cases, each of which includes nine critical OAR contours, were utilized to demonstrate the proposed strategy. Twenty-nine out of these forty-four patient cases were utilized to train the inter- and intrastructural GAD models. These training data and the remaining fifteen testing data sets were separately employed to test the effectiveness of the proposed contouring error detection strategy. Results: An evaluation tool was implemented to illustrate how the proposed strategy automatically detects the radiation therapy contouring errors for a given patient and provides 3D graphical visualization of error detection results as well. The contouring error detection results were achieved with an average sensitivity of 0.954/0.906 and an average specificity of 0.901/0.909 on the centroid/volume related contouring errors of all the tested samples. As for the detection results on structural shape related contouring errors, an average sensitivity of 0.816 and an average specificity of 0.94 on all the tested samples were obtained. The promising results indicated the feasibility of the proposed strategy for the detection of contouring errors with low false detection rate. Conclusions: The proposed strategy can reliably identify contouring errors based upon inter- and intrastructural constraints derived from clinically approved contours. It holds great potential for improving the radiation therapy workflow. ROC and box plot analyses allow for analytical tuning of the system parameters to satisfy clinical requirements. Future work will focus on the improvement of strategy reliability by utilizing more training sets and additional geometric attribute constraints.
Schmidt, Maria A; Morgan, Robert
2008-10-01
To investigate bolus timing artifacts that impair depiction of renal arteries at contrast material-enhanced magnetic resonance (MR) angiography and to determine the effect of contrast agent infusion rates on artifact generation. Renal contrast-enhanced MR angiography was simulated for a variety of infusion schemes, assuming both correct and incorrect timing between data acquisition and contrast agent injection. In addition, the ethics committee approved the retrospective evaluation of clinical breath-hold renal contrast-enhanced MR angiographic studies obtained with automated detection of contrast agent arrival. Twenty-two studies were evaluated for their ability to depict the origin of renal arteries in patent vessels and for any signs of timing errors. Simulations showed that a completely artifactual stenosis or an artifactual overestimation of an existing stenosis at the renal artery origin can be caused by timing errors of the order of 5 seconds in examinations performed with contrast agent infusion rates compatible with or higher than those of hand injections. Lower infusion rates make the studies more likely to accurately depict the origin of the renal arteries. In approximately one-third of all clinical examinations, different contrast agent uptake rates were detected on the left and right sides of the body, and thus allowed us to confirm that it is often impossible to optimize depiction of both renal arteries. In three renal arteries, a signal void was found at the origin in a patent vessel, and delayed contrast agent arrival was confirmed. Computer simulations and clinical examinations showed that timing errors impair the accurate depiction of renal artery origins. (c) RSNA, 2008.
Comparison of IRT Likelihood Ratio Test and Logistic Regression DIF Detection Procedures
ERIC Educational Resources Information Center
Atar, Burcu; Kamata, Akihito
2011-01-01
The Type I error rates and the power of IRT likelihood ratio test and cumulative logit ordinal logistic regression procedures in detecting differential item functioning (DIF) for polytomously scored items were investigated in this Monte Carlo simulation study. For this purpose, 54 simulation conditions (combinations of 3 sample sizes, 2 sample…
AfterQC: automatic filtering, trimming, error removing and quality control for fastq data.
Chen, Shifu; Huang, Tanxiao; Zhou, Yanqing; Han, Yue; Xu, Mingyan; Gu, Jia
2017-03-14
Applications that require highly accurate sequencing data, particularly clinical applications, must contend with unavoidable sequencing errors. Several tools have been proposed to profile sequencing quality, but few of them can quantify or correct the sequencing errors. This unmet requirement motivated us to develop AfterQC, a tool with functions to profile sequencing errors and correct most of them, plus highly automated quality control and data filtering features. Unlike most tools, AfterQC analyses the overlapping of paired sequences for pair-end sequencing data. Based on overlapping analysis, AfterQC can detect and cut adapters, and it also provides a novel function to correct erroneous bases in the overlapping regions. Another new feature is the detection and visualisation of sequencing bubbles, which are commonly found on flowcell lanes and can cause sequencing errors. Besides normal per-cycle quality and base content plotting, AfterQC also provides features like polyX (a long sub-sequence of the same base X) filtering, automatic trimming and K-MER based strand bias profiling. For each single or pair of FastQ files, AfterQC filters out bad reads, detects and eliminates sequencer bubble effects, trims reads at the front and tail, detects sequencing errors and corrects some of them, and finally outputs clean data and generates HTML reports with interactive figures. AfterQC can run in batch mode with multiprocess support; it accepts a single FastQ file, a single pair of FastQ files (for pair-end sequencing), or a folder, in which case all included FastQ files are processed automatically. Based on overlapping analysis, AfterQC can estimate the sequencing error rate and profile the error transform distribution. The results of our error profiling tests show that the error distribution is highly platform dependent. Much more than just another new quality control (QC) tool, AfterQC is able to perform quality control, data filtering, error profiling and base correction automatically. Experimental results show that AfterQC can help to eliminate sequencing errors in pair-end sequencing data to provide much cleaner output, and consequently helps to reduce false-positive variants, especially low-frequency somatic mutations. While providing rich configurable options, AfterQC can detect and set all the options automatically and requires no arguments in most cases.
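The overlap-based correction idea can be sketched as follows. This is an illustrative toy, not AfterQC's implementation: the function names, the toy reads and the quality values are invented, and real tools handle reverse-complementing, indels and quality recalibration far more carefully.

```python
# Illustrative toy of overlap-based correction (not AfterQC's code): align a
# read pair, then resolve mismatches in the overlap by keeping the base with
# the higher quality score.
def find_overlap(r1, r2rc, min_len=8, max_mismatch=2):
    """Return (offset, mismatches) of r2rc aligned against r1, or None."""
    best = None
    for offset in range(len(r1) - min_len + 1):
        span = min(len(r1) - offset, len(r2rc))
        mism = sum(a != b for a, b in zip(r1[offset:offset + span], r2rc[:span]))
        if mism <= max_mismatch and (best is None or mism < best[1]):
            best = (offset, mism)
    return best

def correct_overlap(r1, q1, r2rc, q2rc, offset):
    """Within the overlap, replace the lower-quality base at each mismatch."""
    r1, r2rc = list(r1), list(r2rc)
    span = min(len(r1) - offset, len(r2rc))
    for i in range(span):
        if r1[offset + i] != r2rc[i]:
            if q1[offset + i] >= q2rc[i]:
                r2rc[i] = r1[offset + i]
            else:
                r1[offset + i] = r2rc[i]
    return "".join(r1), "".join(r2rc)

read1 = "ACGTACGTACGTTTGA"
qual1 = [30] * len(read1)
# Read 2 after reverse-complementing; it overlaps the last 12 bases of read 1
# but carries one sequencing error (T -> G at overlap index 8) with low quality.
read2_rc = "ACGTACGTGTGA"
qual2_rc = [30] * 8 + [10] + [30] * 3

offset, mismatches = find_overlap(read1, read2_rc)
fixed1, fixed2 = correct_overlap(read1, qual1, read2_rc, qual2_rc, offset)
print(offset, mismatches)   # 4 1
print(fixed2)               # ACGTACGTTTGA  (error corrected)
```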
Colour and spatial cueing in low-prevalence visual search.
Russell, Nicholas C C; Kunar, Melina A
2012-01-01
In visual search, 30-40% of targets with a prevalence rate of 2% are missed, compared to 7% of targets with a prevalence rate of 50% (Wolfe, Horowitz, & Kenner, 2005). This "low-prevalence" (LP) effect is thought to occur as participants are making motor errors, changing their response criteria, and/or quitting their search too soon. We investigate whether colour and spatial cues, known to improve visual search when the target has a high prevalence (HP), benefit search when the target is rare. Experiments 1 and 2 showed that although knowledge of the target's colour reduces miss errors overall, it does not eliminate the LP effect as more targets were missed at LP than at HP. Furthermore, detection of a rare target is significantly impaired if it appears in an unexpected colour-more so than if the prevalence of the target is high (Experiment 2). Experiment 3 showed that, if a rare target is exogenously cued, target detection is improved but still impaired relative to high-prevalence conditions. Furthermore, if the cue is absent or invalid, the percentage of missed targets increases. Participants were given the option to correct motor errors in all three experiments, which reduced but did not eliminate the LP effect. The results suggest that although valid colour and spatial cues improve target detection, participants still miss more targets at LP than at HP. Furthermore, invalid cues at LP are very costly in terms of miss errors. We discuss our findings in relation to current theories and applications of LP search.
NASA Technical Reports Server (NTRS)
Sun, Xiaoli; Davidson, Frederic; Field, Christopher
1990-01-01
A 50 Mbps direct detection optical communication system for use in an intersatellite link was constructed with an AlGaAs laser diode transmitter and a silicon avalanche photodiode photodetector. The system used a Q = 4 PPM format. The receiver consisted of a maximum likelihood PPM detector and a timing recovery subsystem. The PPM slot clock was recovered at the receiver by using a transition detector followed by a PLL. The PPM word clock was recovered by using a second PLL whose input was derived from the presence of back-to-back PPM pulses contained in the received random PPM pulse sequences. The system achieved a bit error rate of 0.000001 at less than 50 detected signal photons/information bit. The receiver was capable of acquiring and maintaining slot and word synchronization for received signal levels greater than 20 photons/information bit, at which the receiver bit error rate was about 0.01.
Detecting wrong notes in advance: neuronal correlates of error monitoring in pianists.
Ruiz, María Herrojo; Jabusch, Hans-Christian; Altenmüller, Eckart
2009-11-01
Music performance is an extremely rapid process with low incidence of errors even at the fast rates of production required. This is possible only due to the fast functioning of the self-monitoring system. Surprisingly, no specific data about error monitoring have been published in the music domain. Consequently, the present study investigated the electrophysiological correlates of executive control mechanisms, in particular error detection, during piano performance. Our target was to extend the previous research efforts on understanding of the human action-monitoring system by selecting a highly skilled multimodal task. Pianists had to retrieve memorized music pieces at a fast tempo in the presence or absence of auditory feedback. Our main interest was to study the interplay between auditory and sensorimotor information in the processes triggered by an erroneous action, considering only wrong pitches as errors. We found that around 70 ms prior to errors a negative component is elicited in the event-related potentials and is generated by the anterior cingulate cortex. Interestingly, this component was independent of the auditory feedback. However, the auditory information did modulate the processing of the errors after their execution, as reflected in a larger error positivity (Pe). Our data are interpreted within the context of feedforward models and the auditory-motor coupling.
Parallel pulse processing and data acquisition for high speed, low error flow cytometry
van den Engh, Gerrit J.; Stokdijk, Willem
1992-01-01
A digitally synchronized parallel pulse processing and data acquisition system for a flow cytometer has multiple parallel input channels with independent pulse digitization and FIFO storage buffer. A trigger circuit controls the pulse digitization on all channels. After an event has been stored in each FIFO, a bus controller moves the oldest entry from each FIFO buffer onto a common data bus. The trigger circuit generates an ID number for each FIFO entry, which is checked by an error detection circuit. The system has high speed and low error rate.
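A software analogue of the described synchronization scheme might look like the sketch below. It is illustrative only (the class and channel names are invented) and simply mimics the idea that each FIFO entry carries a trigger-generated ID which must agree across channels before an event record is assembled.

```python
# Software analogue (illustrative only) of the described hardware scheme: a
# trigger tags each event with an ID, every channel pushes its digitized pulse
# plus that ID into a FIFO, and the bus controller checks that the IDs popped
# from all FIFOs agree before assembling the event record.
from collections import deque

class Channel:
    def __init__(self, name):
        self.name = name
        self.fifo = deque()

    def store(self, event_id, pulse_height):
        self.fifo.append((event_id, pulse_height))

def read_event(channels):
    """Pop the oldest entry from every FIFO; flag an error if IDs disagree."""
    entries = [ch.fifo.popleft() for ch in channels]
    ids = {event_id for event_id, _ in entries}
    if len(ids) != 1:
        raise RuntimeError(f"channel desynchronization, IDs seen: {sorted(ids)}")
    return {ch.name: pulse for ch, (_, pulse) in zip(channels, entries)}

channels = [Channel("FSC"), Channel("SSC"), Channel("FL1")]
# Normal event: trigger ID 1 stored on every channel.
for ch, pulse in zip(channels, (410, 220, 35)):
    ch.store(1, pulse)
# Faulty event: one channel misses trigger ID 2 and stores ID 3 instead.
for ch, (eid, pulse) in zip(channels, ((2, 395), (3, 230), (2, 40))):
    ch.store(eid, pulse)

print(read_event(channels))        # {'FSC': 410, 'SSC': 220, 'FL1': 35}
try:
    read_event(channels)
except RuntimeError as exc:
    print("error detected:", exc)  # channel desynchronization flagged
```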
NASA Technical Reports Server (NTRS)
Dohi, Tomohiro; Nitta, Kazumasa; Ueda, Takashi
1993-01-01
This paper proposes a new type of coherent demodulator, the unique-word (UW)-reverse-modulation type demodulator, for burst signal controlled by voice operated transmitter (VOX) in mobile satellite communication channels. The demodulator has three individual circuits: a pre-detection signal combiner, a pre-detection UW detector, and a UW-reverse-modulation type demodulator. The pre-detection signal combiner combines signal sequences received by two antennas and improves bit energy-to-noise power density ratio (E(sub b)/N(sub 0)) 2.5 dB to yield 10(exp -3) average bit error rate (BER) when carrier power-to-multipath power ratio (CMR) is 15 dB. The pre-detection UW detector improves UW detection probability when the frequency offset is large. The UW-reverse-modulation type demodulator realizes a maximum pull-in frequency of 3.9 kHz, the pull-in time is 2.4 seconds and frequency error is less than 20 Hz. The performances of this demodulator are confirmed through computer simulations and its effect is clarified in real-time experiments at a bit rate of 16.8 kbps using a digital signal processor (DSP).
NASA Technical Reports Server (NTRS)
Noble, Viveca K.
1994-01-01
When data is transmitted through a noisy channel, errors are produced within the data rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2(exp 8)) with interleave depth of 5 as the outermost code, (7, 1/2) convolutional code as an inner code and CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The received signal-to-noise for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability length is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.
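For the innermost layer of the concatenated scheme, an (n, n-16) CRC can be sketched as below. The polynomial 0x1021 with an all-ones preset is the CCITT variant commonly associated with the CCSDS transfer-frame CRC, but the exact parameters should be verified against the standard before use; the frame contents here are placeholders.

```python
# Hedged sketch of a 16-parity-bit cyclic redundancy check appended to the
# information bits, as in the (n, n-16) code described above. Parameters are
# the common CCITT choice (poly 0x1021, all-ones preset); verify against the
# CCSDS recommendation before relying on them.
def crc16(data: bytes, poly: int = 0x1021, init: int = 0xFFFF) -> int:
    """Bitwise CRC over `data`, returning the 16 parity bits as an integer."""
    reg = init
    for byte in data:
        reg ^= byte << 8
        for _ in range(8):
            if reg & 0x8000:
                reg = ((reg << 1) ^ poly) & 0xFFFF
            else:
                reg = (reg << 1) & 0xFFFF
    return reg

frame = b"telemetry transfer frame payload"          # placeholder information bits
codeword = frame + crc16(frame).to_bytes(2, "big")   # append 16 parity bits

# Receiver side: recompute over the information bits and compare.
ok = crc16(codeword[:-2]) == int.from_bytes(codeword[-2:], "big")
corrupted = bytes([codeword[0] ^ 0x01]) + codeword[1:]
bad = crc16(corrupted[:-2]) == int.from_bytes(corrupted[-2:], "big")
print(ok, bad)   # True False -> single bit error detected
```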
Saito, Masahide; Sano, Naoki; Shibata, Yuki; Kuriyama, Kengo; Komiyama, Takafumi; Marino, Kan; Aoki, Shinichi; Ashizawa, Kazunari; Yoshizawa, Kazuya; Onishi, Hiroshi
2018-05-01
The purpose of this study was to compare the MLC error sensitivity of various measurement devices for VMAT pre-treatment quality assurance (QA). This study used four QA devices (Scandidos Delta4, PTW 2D-array, iRT systems IQM, and PTW Farmer chamber). Nine retrospective VMAT plans were used and nine MLC error plans were generated for all nine original VMAT plans. The IQM and Farmer chamber were evaluated using the cumulative signal difference between the baseline and error-induced measurements. In addition, to investigate the sensitivity of the Delta4 device and the 2D-array, global gamma analysis (1%/1 mm, 2%/2 mm, and 3%/3 mm) and dose difference (DD; 1%, 2%, and 3%) were used between the baseline and error-induced measurements. Some variation in MLC error sensitivity across the evaluation metrics and MLC error ranges was observed. For the two ionization devices, the sensitivity of the IQM was significantly better than that of the Farmer chamber (P < 0.01), while both devices had a good linear correlation between the cumulative signal difference and the magnitude of MLC errors. The pass rates decreased as the magnitude of the MLC error increased for both Delta4 and 2D-array. However, small MLC errors for small aperture sizes, such as for lung SBRT, could not be detected using the loosest gamma criteria (3%/3 mm). Our results indicate that DD could be more useful than gamma analysis for daily MLC QA, and that a large-area ionization chamber has a greater advantage for detecting systematic MLC error because of the large sensitive volume, while the other devices could not detect this error for some cases with a small range of MLC error. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
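For readers unfamiliar with the evaluation metrics above, the sketch below computes a 1D global gamma index and a simple dose-difference pass rate for a baseline profile versus one with a small positional shift and output error. It is illustrative only and is not the software used in the study; the profiles, voxel spacing and criteria values are assumptions.

```python
# Illustrative 1D global gamma and dose-difference calculation for comparing
# a measured profile against a baseline.
import numpy as np

def gamma_1d(ref_dose, eval_dose, positions, dose_crit=0.03, dta_mm=3.0):
    """Global gamma: dose criterion taken as a fraction of the reference maximum."""
    dd_norm = dose_crit * ref_dose.max()
    gammas = np.empty_like(ref_dose)
    for i, (x_r, d_r) in enumerate(zip(positions, ref_dose)):
        dist2 = ((positions - x_r) / dta_mm) ** 2
        dose2 = ((eval_dose - d_r) / dd_norm) ** 2
        gammas[i] = np.sqrt((dist2 + dose2).min())
    return gammas

x = np.arange(0.0, 100.0, 1.0)                       # positions in mm
baseline = np.exp(-((x - 50.0) / 15.0) ** 2)         # baseline profile
shifted = np.exp(-((x - 51.0) / 15.0) ** 2) * 1.01   # 1 mm shift + 1% output error

g = gamma_1d(baseline, shifted, x, dose_crit=0.03, dta_mm=3.0)
dd = np.abs(shifted - baseline) / baseline.max()
print(f"gamma (3%/3 mm) pass rate: {(g <= 1.0).mean():.3f}")
print(f"dose-difference (3%) pass rate: {(dd <= 0.03).mean():.3f}")
```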
Bayesian network models for error detection in radiotherapy plans
NASA Astrophysics Data System (ADS)
Kalet, Alan M.; Gennari, John H.; Ford, Eric C.; Phillips, Mark H.
2015-04-01
The purpose of this study is to design and develop a probabilistic network for detecting errors in radiotherapy plans for use at the time of initial plan verification. Our group has initiated a multi-pronged approach to reduce these errors. We report on our development of Bayesian models of radiotherapy plans. Bayesian networks consist of joint probability distributions that define the probability of one event, given some set of other known information. Using the networks, we find the probability of obtaining certain radiotherapy parameters, given a set of initial clinical information. A low probability in a propagated network then corresponds to potential errors to be flagged for investigation. To build our networks we first interviewed medical physicists and other domain experts to identify the relevant radiotherapy concepts and their associated interdependencies and to construct a network topology. Next, to populate the network’s conditional probability tables, we used the Hugin Expert software to learn parameter distributions from a subset of de-identified data derived from a radiation oncology based clinical information database system. These data represent 4990 unique prescription cases over a 5 year period. Under test case scenarios with approximately 1.5% introduced error rates, network performance produced areas under the ROC curve of 0.88, 0.98, and 0.89 for the lung, brain and female breast cancer error detection networks, respectively. Comparison of the brain network to human experts performance (AUC of 0.90 ± 0.01) shows the Bayes network model performs better than domain experts under the same test conditions. Our results demonstrate the feasibility and effectiveness of comprehensive probabilistic models as part of decision support systems for improved detection of errors in initial radiotherapy plan verification procedures.
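The flagging logic can be illustrated with a toy lookup: given some clinical information, the propagated probability of a planned parameter value is compared against a threshold, and low-probability combinations are flagged for review. The table below is invented for the example and is not the authors' Hugin-learned network.

```python
# Toy illustration of the flagging idea (not the authors' learned network):
# look up how probable a planned parameter is given clinical information and
# flag it when the probability falls below a threshold. Values are invented.
cpt_dose_given_site = {
    ("lung",   "60 Gy / 30 fx"): 0.55,
    ("lung",   "45 Gy / 15 fx"): 0.20,
    ("lung",   "20 Gy / 1 fx"):  0.01,   # unusual for this site -> suspicious
    ("breast", "50 Gy / 25 fx"): 0.60,
}

def check_plan(site, prescription, threshold=0.05):
    p = cpt_dose_given_site.get((site, prescription), 0.0)
    status = "FLAG for physics review" if p < threshold else "ok"
    return p, status

for plan in [("lung", "60 Gy / 30 fx"), ("lung", "20 Gy / 1 fx")]:
    p, status = check_plan(*plan)
    print(f"{plan}: P = {p:.2f} -> {status}")
```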
Controlling qubit drift by recycling error correction syndromes
NASA Astrophysics Data System (ADS)
Blume-Kohout, Robin
2015-03-01
Physical qubits are susceptible to systematic drift, above and beyond the stochastic Markovian noise that motivates quantum error correction. This parameter drift must be compensated - if it is ignored, error rates will rise to intolerable levels - but compensation requires knowing the parameters' current value, which appears to require halting experimental work to recalibrate (e.g. via quantum tomography). Fortunately, this is untrue. I show how to perform on-the-fly recalibration on the physical qubits in an error correcting code, using only information from the error correction syndromes. The algorithm for detecting and compensating drift is very simple - yet, remarkably, when used to compensate Brownian drift in the qubit Hamiltonian, it achieves a stabilized error rate very close to the theoretical lower bound. Against 1/f noise, it is less effective only because 1/f noise is (like white noise) dominated by high-frequency fluctuations that are uncompensatable. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE
Discrimination of plant-parasitic nematodes from complex soil communities using ecometagenetics.
Porazinska, Dorota L; Morgan, Matthew J; Gaspar, John M; Court, Leon N; Hardy, Christopher M; Hodda, Mike
2014-07-01
Many plant pathogens are microscopic, cryptic, and difficult to diagnose. The new approach of ecometagenetics, involving ultrasequencing, bioinformatics, and biostatistics, has the potential to improve diagnoses of plant pathogens such as nematodes from the complex mixtures found in many agricultural and biosecurity situations. We tested this approach on a gradient of complexity ranging from a few individuals from a few species of known nematode pathogens in a relatively defined substrate to a complex and poorly known suite of nematode pathogens in a complex forest soil, including its associated biota of unknown protists, fungi, and other microscopic eukaryotes. We added three known but contrasting species (Pratylenchus neglectus, the closely related P. thornei, and Heterodera avenae) to half the set of substrates, leaving the other half without them. We then tested whether all nematode pathogens (known and unknown, indigenous and experimentally added) were consistently detected as present or absent. We always detected the Pratylenchus spp. correctly and with the number of sequence reads proportional to the numbers added. However, a single cyst of H. avenae was only identified approximately half the time it was present. Other plant-parasitic nematodes and nematodes from other trophic groups were detected well, but other eukaryotes were detected less consistently. DNA sampling errors or informatic errors or both were involved in misidentification of H. avenae; however, the proportions of each varied in the different bioinformatic pipelines and with different parameters used. To a large extent, false-positive and false-negative errors were complementary: pipelines and parameters with the highest false-positive rates had the lowest false-negative rates and vice versa. Sources of error identified included assumptions in the bioinformatic pipelines, slight differences in primer regions, the number of sequence reads regarded as the minimum threshold for inclusion in analysis, and inaccessible DNA in resistant life stages. Identification of the sources of error allows us to suggest ways to improve identification using ecometagenetics.
An efficient system for reliably transmitting image and video data over low bit rate noisy channels
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.; Huang, Y. F.; Stevenson, Robert L.
1994-01-01
This research project is intended to develop an efficient system for reliably transmitting image and video data over low bit rate noisy channels. The basic ideas behind the proposed approach are the following: employ statistical-based image modeling to facilitate pre- and post-processing and error detection, use spare redundancy that the source compression did not remove to add robustness, and implement coded modulation to improve bandwidth efficiency and noise rejection. Over the last six months, progress has been made on various aspects of the project. Through our studies of the integrated system, a list-based iterative Trellis decoder has been developed. The decoder accepts feedback from a post-processor which can detect channel errors in the reconstructed image. The error detection is based on the Huber Markov random field image model for the compressed image. The compression scheme used here is that of JPEG (Joint Photographic Experts Group). Experiments were performed and the results are quite encouraging. The principal ideas here are extendable to other compression techniques. In addition, research was also performed on unequal error protection channel coding, subband vector quantization as a means of source coding, and post processing for reducing coding artifacts. Our studies on unequal error protection (UEP) coding for image transmission focused on examining the properties of the UEP capabilities of convolutional codes. The investigation of subband vector quantization employed a wavelet transform with special emphasis on exploiting interband redundancy. The outcome of this investigation included the development of three algorithms for subband vector quantization. The reduction of transform coding artifacts was studied with the aid of a non-Gaussian Markov random field model. This results in improved image decompression. These studies are summarized and the technical papers included in the appendices.
An experiment in software reliability
NASA Technical Reports Server (NTRS)
Dunham, J. R.; Pierce, J. L.
1986-01-01
The results of a software reliability experiment conducted in a controlled laboratory setting are reported. The experiment was undertaken to gather data on software failures and is one in a series of experiments being pursued by the Fault Tolerant Systems Branch of NASA Langley Research Center to find a means of credibly performing reliability evaluations of flight control software. The experiment tests a small sample of implementations of radar tracking software having ultra-reliability requirements and uses n-version programming for error detection, and repetitive run modeling for failure and fault rate estimation. The experiment results agree with those of Nagel and Skrivan in that the program error rates suggest an approximate log-linear pattern and the individual faults occurred with significantly different error rates. Additional analysis of the experimental data raises new questions concerning the phenomenon of interacting faults. This phenomenon may provide one explanation for software reliability decay.
Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection.
Meng, Jiahui; Zhao, Danfeng; Tian, Hai; Zhang, Liang
2018-01-15
To improve the performance of hard decision decoding for non-binary low-density parity-check (LDPC) codes and to reduce decoding complexity, a sum-of-magnitude hard decision decoding algorithm based on loop update detection is proposed, with the aim of supporting the reliability, stability and high transmission rates required for 5G mobile communication. The algorithm builds on the hard decision decoding algorithm (HDA) and uses soft information from the channel to calculate reliability, while the sum of the variable nodes' (VN) magnitudes is excluded when computing the reliability of the parity checks. At the same time, the reliability information of the variable nodes is considered and a loop update detection algorithm is introduced. The bits of the erroneous code word are flipped multiple times, searched in order of decreasing error probability, until the correct code word is found. Simulation results show that one of the improved schemes outperforms the weighted symbol flipping (WSF) algorithm by about 2.2 dB and 2.35 dB, respectively, under different hexadecimal numbers at a bit error rate (BER) of 10^-5 over an additive white Gaussian noise (AWGN) channel. Furthermore, the average number of decoding iterations is significantly reduced.
NASA Astrophysics Data System (ADS)
Zhang, Y. M.; Evans, J. R. G.; Yang, S. F.
2010-11-01
The authors have discovered a systematic, intelligent and potentially automatic method to detect errors in handbooks and stop their transmission using unrecognised relationships between materials properties. The scientific community relies on the veracity of scientific data in handbooks and databases, some of which have a long pedigree covering several decades. Although various outlier-detection procedures are employed to detect and, where appropriate, remove contaminated data, errors, which had not been discovered by established methods, were easily detected by our artificial neural network in tables of properties of the elements. We started using neural networks to discover unrecognised relationships between materials properties and quickly found that they were very good at finding inconsistencies in groups of data. They reveal variations from 10 to 900% in tables of property data for the elements and point out those that are most probably correct. Compared with the statistical method adopted by Ashby and co-workers [Proc. R. Soc. Lond. Ser. A 454 (1998) p. 1301, 1323], this method locates more inconsistencies and could be embedded in database software for automatic self-checking. We anticipate that our suggestion will be a starting point to deal with this basic problem that affects researchers in every field. The authors believe it may eventually moderate the current expectation that data field error rates will persist at between 1 and 5%.
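A hedged sketch of the general approach, using synthetic data in place of a real handbook table: train a small regressor to predict one property from the others and flag rows whose out-of-fold prediction error is far larger than typical. The network size, planted errors and ranking heuristic are arbitrary choices for illustration, not the authors' settings.

```python
# Sketch of neural-network-based inconsistency detection in a property table.
# Synthetic data stand in for a real handbook table of the elements.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n = 80
# Two correlated properties and a third that depends on them, with two
# planted transcription-style errors.
x1 = rng.uniform(1.0, 10.0, n)
x2 = 0.5 * x1 + rng.normal(0.0, 0.3, n)
y = 2.0 * x1 - 1.5 * x2 + rng.normal(0.0, 0.2, n)
y[[7, 41]] *= 3.0

X = np.column_stack([x1, x2])
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)

# Out-of-fold predictions, so a planted error cannot simply be memorized.
pred = cross_val_predict(model, X, y, cv=5)
residual = np.abs(y - pred)
ranked = np.argsort(residual)[::-1]
print("most suspect rows:", ranked[:5])   # the planted errors (7, 41) should rank first
```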
An investigation into soft error detection efficiency at operating system level.
Asghari, Seyyed Amir; Kaynak, Okyay; Taheri, Hassan
2014-01-01
Electronic equipment operating in harsh environments such as space is subjected to a range of threats. The most important of these is radiation that gives rise to permanent and transient errors on microelectronic components. The occurrence rate of transient errors is significantly more than permanent errors. The transient errors, or soft errors, emerge in two formats: control flow errors (CFEs) and data errors. Valuable research results have already appeared in literature at hardware and software levels for their alleviation. However, there is the basic assumption behind these works that the operating system is reliable and the focus is on other system levels. In this paper, we investigate the effects of soft errors on the operating system components and compare their vulnerability with that of application level components. Results show that soft errors in operating system components affect both operating system and application level components. Therefore, by providing endurance to operating system level components against soft errors, both operating system and application level components gain tolerance.
Effects of Listening Conditions, Error Types, and Ensemble Textures on Error Detection Skills
ERIC Educational Resources Information Center
Waggoner, Dori T.
2011-01-01
This study was designed with three main purposes: (a) to investigate the effects of two listening conditions on error detection accuracy, (b) to compare error detection responses for rhythm errors and pitch errors, and (c) to examine the influences of texture on error detection accuracy. Undergraduate music education students (N = 18) listened to…
MS-READ: Quantitative measurement of amino acid incorporation.
Mohler, Kyle; Aerni, Hans-Rudolf; Gassaway, Brandon; Ling, Jiqiang; Ibba, Michael; Rinehart, Jesse
2017-11-01
Ribosomal protein synthesis results in the genetically programmed incorporation of amino acids into a growing polypeptide chain. Faithful amino acid incorporation that accurately reflects the genetic code is critical to the structure and function of proteins as well as overall proteome integrity. Errors in protein synthesis are generally detrimental to cellular processes, yet emerging evidence suggests that proteome diversity generated through mistranslation may be beneficial under certain conditions. Cumulative translational error rates have been determined at the organismal level; however, codon-specific error rates and the spectrum of misincorporation errors from system to system remain largely unexplored. In particular, until recently technical challenges have limited the ability to detect and quantify comparatively rare amino acid misincorporation events, which occur orders of magnitude less frequently than canonical amino acid incorporation events. We now describe a technique for the quantitative analysis of amino acid incorporation that provides the sensitivity necessary to detect mistranslation events during translation of a single codon at frequencies as low as 1 in 10,000 for all 20 proteinogenic amino acids, as well as non-proteinogenic and modified amino acids. This article is part of a Special Issue entitled "Biochemistry of Synthetic Biology - Recent Developments", Guest Editors: Dr. Ilka Heinemann and Dr. Patrick O'Donoghue. Copyright © 2017 Elsevier B.V. All rights reserved.
WE-D-BRA-04: Online 3D EPID-Based Dose Verification for Optimum Patient Safety
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spreeuw, H; Rozendaal, R; Olaciregui-Ruiz, I
2015-06-15
Purpose: To develop an online 3D dose verification tool based on EPID transit dosimetry to ensure optimum patient safety in radiotherapy treatments. Methods: A new software package was developed which processes EPID portal images online using a back-projection algorithm for the 3D dose reconstruction. The package processes portal images faster than the acquisition rate of the portal imager (∼ 2.5 fps). After a portal image is acquired, the software searches for "hot spots" in the reconstructed 3D dose distribution. A hot spot is in this study defined as a 4 cm^3 cube where the average cumulative reconstructed dose exceeds the average total planned dose by at least 20% and 50 cGy. If a hot spot is detected, an alert is generated resulting in a linac halt. The software has been tested by irradiating an Alderson phantom after introducing various types of serious delivery errors. Results: In our first experiment the Alderson phantom was irradiated with two arcs from a 6 MV VMAT H&N treatment having a large leaf position error or a large monitor unit error. For both arcs and both errors the linac was halted before dose delivery was completed. When no error was introduced, the linac was not halted. The complete processing of a single portal frame, including hot spot detection, takes about 220 ms on a dual hexacore Intel Xeon X5650 CPU at 2.66 GHz. Conclusion: A prototype online 3D dose verification tool using portal imaging has been developed and successfully tested for various kinds of gross delivery errors. The detection of hot spots was proven to be effective for the timely detection of these errors. Current work is focused on hot spot detection criteria for various treatment sites and the introduction of a clinical pilot program with online verification of hypo-fractionated (lung) treatments.
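The hot-spot criterion stated above (a 4 cm^3 cube in which the mean reconstructed dose exceeds the mean planned dose by at least 20% and 50 cGy) can be sketched on a dose grid as follows. This is an illustration, not the authors' software; the voxel size, dose values and injected error are invented.

```python
# Illustrative hot-spot check: flag any cube of ~4 cm^3 where the mean
# reconstructed dose exceeds the mean planned dose by >= 20% and >= 50 cGy,
# mirroring the criterion described in the abstract.
import numpy as np
from scipy.ndimage import uniform_filter

def find_hot_spots(reconstructed, planned, voxel_mm=2.0, cube_cm3=4.0,
                   rel_excess=0.20, abs_excess_cgy=50.0):
    side_mm = cube_cm3 ** (1.0 / 3.0) * 10.0          # cube edge length in mm
    size = max(1, int(round(side_mm / voxel_mm)))     # edge length in voxels
    mean_rec = uniform_filter(reconstructed, size=size)
    mean_plan = uniform_filter(planned, size=size)
    excess = mean_rec - mean_plan
    return (excess >= abs_excess_cgy) & (excess >= rel_excess * mean_plan)

planned = np.full((50, 50, 50), 200.0)                # planned dose in cGy
reconstructed = planned.copy()
reconstructed[20:30, 20:30, 20:30] += 80.0            # injected delivery error

hot = find_hot_spots(reconstructed, planned)
print("hot spot detected:", hot.any())                # True -> would halt the linac
```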
Kamps-Hughes, Nick; McUsic, Andrew; Kurihara, Laurie; Harkins, Timothy T.; Pal, Prithwish; Ray, Claire
2018-01-01
The accurate detection of ultralow allele frequency variants in DNA samples is of interest in both research and medical settings, particularly in liquid biopsies where cancer mutational status is monitored from circulating DNA. Next-generation sequencing (NGS) technologies employing molecular barcoding have shown promise but significant sensitivity and specificity improvements are still needed to detect mutations in a majority of patients before the metastatic stage. To address this we present analytical validation data for ERASE-Seq (Elimination of Recurrent Artifacts and Stochastic Errors), a method for accurate and sensitive detection of ultralow frequency DNA variants in NGS data. ERASE-Seq differs from previous methods by creating a robust statistical framework to utilize technical replicates in conjunction with background error modeling, providing a 10 to 100-fold reduction in false positive rates compared to published molecular barcoding methods. ERASE-Seq was tested using spiked human DNA mixtures with clinically realistic DNA input quantities to detect SNVs and indels between 0.05% and 1% allele frequency, the range commonly found in liquid biopsy samples. Variants were detected with greater than 90% sensitivity and a false positive rate below 0.1 calls per 10,000 possible variants. The approach represents a significant performance improvement compared to molecular barcoding methods and does not require changing molecular reagents. PMID:29630678
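A simplified sketch of the statistical idea, not the ERASE-Seq implementation: a candidate variant is called only if it appears in every technical replicate and each replicate's count is improbable under a position-specific background error rate. The counts, depths, background rate and significance level below are invented for illustration.

```python
# Simplified replicate-plus-background-model variant call (illustrative only):
# require the alternate allele in both technical replicates, each at a count
# unlikely under the background error rate (one-sided binomial test).
from scipy.stats import binomtest

def call_variant(alt_counts, depths, background_rate, alpha=1e-3):
    """alt_counts/depths: one entry per technical replicate."""
    pvals = [
        binomtest(k, n, background_rate, alternative="greater").pvalue
        for k, n in zip(alt_counts, depths)
    ]
    return all(p < alpha for p in pvals), pvals

# Candidate at ~0.1% allele frequency, 10,000x depth, 0.02% background error.
print(call_variant(alt_counts=[12, 9], depths=[10000, 10000],
                   background_rate=0.0002))
# A stochastic error typically appears in only one replicate.
print(call_variant(alt_counts=[11, 1], depths=[10000, 10000],
                   background_rate=0.0002))
```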
Mumma, Joel M; Durso, Francis T; Ferguson, Ashley N; Gipson, Christina L; Casanova, Lisa; Erukunuakpor, Kimberly; Kraft, Colleen S; Walsh, Victoria L; Zimring, Craig; DuBose, Jennifer; Jacob, Jesse T
2018-03-05
Doffing protocols for personal protective equipment (PPE) are critical for keeping healthcare workers (HCWs) safe during care of patients with Ebola virus disease. We assessed the relationship between errors and self-contamination during doffing. Eleven HCWs experienced with doffing Ebola-level PPE participated in simulations in which HCWs donned PPE marked with surrogate viruses (ɸ6 and MS2), completed a clinical task, and were assessed for contamination after doffing. Simulations were video recorded, and a failure modes and effects analysis and fault tree analyses were performed to identify errors during doffing, quantify their risk (risk index), and predict contamination data. Fifty-one types of errors were identified, many having the potential to spread contamination. Hand hygiene and removing the powered air purifying respirator (PAPR) hood had the highest total risk indexes (111 and 70, respectively) and number of types of errors (9 and 13, respectively). ɸ6 was detected on 10% of scrubs and the fault tree predicted a 10.4% contamination rate, likely occurring when the PAPR hood inadvertently contacted scrubs during removal. MS2 was detected on 10% of hands, 20% of scrubs, and 70% of inner gloves and the predicted rates were 7.3%, 19.4%, 73.4%, respectively. Fault trees for MS2 and ɸ6 contamination suggested similar pathways. Ebola-level PPE can both protect and put HCWs at risk for self-contamination throughout the doffing process, even among experienced HCWs doffing with a trained observer. Human factors methodologies can identify error-prone steps, delineate the relationship between errors and self-contamination, and suggest remediation strategies.
A visual detection model for DCT coefficient quantization
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.; Watson, Andrew B.
1994-01-01
The discrete cosine transform (DCT) is widely used in image compression and is part of the JPEG and MPEG compression standards. The degree of compression and the amount of distortion in the decompressed image are controlled by the quantization of the transform coefficients. The standards do not specify how the DCT coefficients should be quantized. One approach is to set the quantization level for each coefficient so that the quantization error is near the threshold of visibility. Results from previous work are combined to form the current best detection model for DCT coefficient quantization noise. This model predicts sensitivity as a function of display parameters, enabling quantization matrices to be designed for display situations varying in luminance, veiling light, and spatial frequency related conditions (pixel size, viewing distance, and aspect ratio). It also allows arbitrary color space directions for the representation of color. A model-based method of optimizing the quantization matrix for an individual image was developed. The model described above provides visual thresholds for each DCT frequency. These thresholds are adjusted within each block for visual light adaptation and contrast masking. For a given quantization matrix, the DCT quantization errors are scaled by the adjusted thresholds to yield perceptual errors. These errors are pooled nonlinearly over the image to yield total perceptual error. With this model one may estimate the quantization matrix for a particular image that yields minimum bit rate for a given total perceptual error, or minimum perceptual error for a given bit rate. Custom matrices for a number of images show clear improvement over image-independent matrices. Custom matrices are compatible with the JPEG standard, which requires transmission of the quantization matrix.
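The basic operation the model is used to control, quantizing the DCT coefficients of an 8x8 block with per-frequency step sizes, can be sketched as follows. The step-size matrix here is an arbitrary placeholder rather than thresholds derived from the visual model.

```python
# Illustration of per-frequency DCT quantization for one 8x8 image block.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
block = rng.integers(0, 256, (8, 8)).astype(float) - 128.0   # one image block

# Per-frequency quantization steps: coarser for higher spatial frequencies.
i, j = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
qmatrix = 8.0 + 4.0 * (i + j)

coeffs = dctn(block, norm="ortho")
quantized = np.round(coeffs / qmatrix)                 # what gets transmitted
reconstructed = idctn(quantized * qmatrix, norm="ortho")

# The visual model would scale quant_error by per-frequency visibility
# thresholds and pool nonlinearly to get a single perceptual-error number;
# here we just report raw error statistics.
quant_error = coeffs - quantized * qmatrix
print("max coefficient error:", np.abs(quant_error).max())
print("RMS pixel error:", np.sqrt(((block - reconstructed) ** 2).mean()))
```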
Pasler, Marlies; Michel, Kilian; Marrazzo, Livia; Obenland, Michael; Pallotta, Stefania; Björnsgard, Mari; Lutterbach, Johannes
2017-09-01
The purpose of this study was to characterize a new single large-area ionization chamber, the integral quality monitor system (iRT, Germany), for online and real-time beam monitoring. Signal stability, monitor unit (MU) linearity and dose rate dependence were investigated for static and arc deliveries and compared to independent ionization chamber measurements. The dose verification capability of the transmission detector system was evaluated by comparing calculated and measured detector signals for 15 volumetric modulated arc therapy plans. The error detection sensitivity was tested by introducing MLC position and linac output errors. Deviations in dose distributions between the original and error-induced plans were compared in terms of detector signal deviation, dose-volume histogram (DVH) metrics and 2D γ-evaluation (2%/2 mm and 3%/3 mm). The detector signal is linearly dependent on linac output and shows negligible (<0.4%) dose rate dependence up to 460 MU min^-1. Signal stability is within 1% for cumulative detector output; substantial variations were observed for the segment-by-segment signal. Calculated versus measured cumulative signal deviations ranged from -0.16% to 2.25%. DVH, mean 2D γ-value and detector signal evaluations showed increasing deviations with regard to the respective reference with growing MLC and dose output errors; good correlation between DVH metrics and detector signal deviation was found (e.g. PTV Dmean: R^2 = 0.97). Positional MLC errors of 1 mm and errors in linac output of 2% were identified with the transmission detector system. The extensive tests performed in this investigation show that the new transmission detector provides a stable and sensitive cumulative signal output and is suitable for beam monitoring during patient treatment.
Increased instrument intelligence--can it reduce laboratory error?
Jekelis, Albert W
2005-01-01
Recent literature has focused on the reduction of laboratory errors and the potential impact on patient management. This study assessed the intelligent, automated preanalytical process-control abilities in newer generation analyzers as compared with older analyzers and the impact on error reduction. Three generations of immuno-chemistry analyzers were challenged with pooled human serum samples for a 3-week period. One of the three analyzers had an intelligent process of fluidics checks, including bubble detection. Bubbles can cause erroneous results due to incomplete sample aspiration. This variable was chosen because it is the most easily controlled sample defect that can be introduced. Traditionally, lab technicians have had to visually inspect each sample for the presence of bubbles. This is time consuming and introduces the possibility of human error. Instruments with bubble detection may be able to eliminate the human factor and reduce errors associated with the presence of bubbles. Specific samples were vortexed daily to introduce a visible quantity of bubbles, then immediately placed in the daily run. Errors were defined as a reported result greater than three standard deviations below the mean and associated with incomplete sample aspiration of the analyte on the individual analyzer. Three standard deviations represented the target limits of proficiency testing. The results of the assays were examined for accuracy and precision. Efficiency, measured as process throughput, was also measured to associate a cost factor and the potential impact of error detection on the overall process. The analyzers' performance stratified according to their level of internal process control. The older analyzers without bubble detection reported 23 erroneous results. The newest analyzer, with bubble detection, reported one specimen incorrectly. The precision and accuracy of the nonvortexed specimens were excellent and acceptable for all three analyzers. No errors were found in the nonvortexed specimens. There were no significant differences in overall process time for any of the analyzers when tests were arranged in an optimal configuration. The analyzer with advanced fluidic intelligence demonstrated the greatest ability to appropriately deal with an incomplete aspiration by not processing and reporting a result for the sample. This study suggests that preanalytical process-control capabilities could reduce errors. By association, it implies that similar intelligent process controls could favorably impact the error rate and, in the case of this instrument, do so without negatively impacting process throughput. Other improvements may be realized as a result of having an intelligent error-detection process, including further reduction in misreported results, fewer repeats, less operator intervention, and less reagent waste.
Detecting DIF in Polytomous Items Using MACS, IRT and Ordinal Logistic Regression
ERIC Educational Resources Information Center
Elosua, Paula; Wells, Craig
2013-01-01
The purpose of the present study was to compare the Type I error rate and power of two model-based procedures, the mean and covariance structure model (MACS) and the item response theory (IRT), and an observed-score based procedure, ordinal logistic regression, for detecting differential item functioning (DIF) in polytomous items. A simulation…
A Comparison of the Rasch Separate Calibration and Between-Fit Methods of Detecting Item Bias.
ERIC Educational Resources Information Center
Smith, Richard M.
1996-01-01
The separate calibration t-test approach of B. Wright and M. Stone (1979) and the common calibration between-fit approach of B. Wright, R. Mead, and R. Draba (1976) appeared to have similar Type I error rates and similar power to detect item bias within a Rasch framework. (SLD)
Identification of Carbon loss in the production of pilot-scale Carbon nanotube using gauze reactor
NASA Astrophysics Data System (ADS)
Wulan, P. P. D. K.; Purwanto, W. W.; Yeni, N.; Lestari, Y. D.
2018-03-01
Carbon loss of more than 65% was a major obstacle in Carbon Nanotube (CNT) production using a pilot-scale gauze reactor. The results showed that the initial carbon loss calculation was 27.64%. The carbon loss calculation was then refined with corrections for: errors in the product flow rate measurement, changes in the feed flow rate, the gas product composition measured by Gas Chromatography with Flame Ionization Detection (GC FID), and carbon particulates captured on glass fiber filters. The error in the product flow rate measurement, made with a soap-bubble flow meter, gives a carbon loss calculation error of about ±4.14%. Changes in the feed flow rate due to CNT growth in the reactor reduce the carbon loss by 4.97%. The detection of secondary hydrocarbons by GC FID during the CNT production process reduces the carbon loss by 5.14%. Particulates carried by the product stream are very few and correct the carbon loss by only about 0.05%. Taking all these factors into account, the carbon loss in this study is (17.21 ± 4.14)%. Assuming that 4.14% of the carbon loss is due to the product flow rate measurement error, the carbon loss is 13.07%. This means that more than 57% of the carbon loss in this study has been identified.
Errors in fluid therapy in medical wards.
Mousavi, Maryam; Khalili, Hossein; Dashti-Khavidaki, Simin
2012-04-01
Intravenous fluid therapy remains an essential part of patients' care during hospitalization. Only a few studies have focused on fluid therapy in hospitalized patients, and there is no consensus statement about fluid therapy in patients who are hospitalized in medical wards. The aim of the present study was to assess intravenous fluid therapy status and related errors in patients during the course of hospitalization in the infectious diseases wards of a referral teaching hospital. This study was conducted in the infectious diseases wards of Imam Khomeini Complex Hospital, Tehran, Iran. During a retrospective study, data related to intravenous fluid therapy were collected by two infectious diseases clinical pharmacists from 2008 to 2010. Intravenous fluid therapy information, including indication, type, volume and rate of fluid administration, was recorded for each patient. An internal protocol for intravenous fluid therapy was designed based on a literature review and available recommendations. The data related to patients' fluid therapy were compared with this protocol. The fluid therapy was considered appropriate if it was compatible with the protocol regarding indication of intravenous fluid therapy, type, electrolyte content and rate of fluid administration. Any mistake in the selection of fluid type, content, volume or rate of administration was considered an intravenous fluid therapy error. Five hundred and ninety-six medication errors were detected in the patients during the study period. The overall rate of fluid therapy errors was 1.3 per patient during hospitalization. Errors in the rate of fluid administration (29.8%), incorrect fluid volume calculation (26.5%) and incorrect type of fluid selection (24.6%) were the most common types of errors. Male sex, old age, baseline renal disease, diabetes co-morbidity, and hospitalization due to endocarditis, HIV infection or sepsis were predisposing factors for the occurrence of fluid therapy errors. Our results showed that intravenous fluid therapy errors occurred commonly in hospitalized patients, especially in medical wards. Improving health-care workers' knowledge of and attention to these errors is essential for preventing fluid therapy related medication errors.
Ground Validation Assessments of GPM Core Observatory Science Requirements
NASA Astrophysics Data System (ADS)
Petersen, Walt; Huffman, George; Kidd, Chris; Skofronick-Jackson, Gail
2017-04-01
NASA Global Precipitation Measurement (GPM) Mission science requirements define specific measurement error standards for retrieved precipitation parameters such as rain rate, raindrop size distribution, and falling snow detection on instantaneous temporal scales and spatial resolutions ranging from effective instrument fields of view [FOV] to grid scales of 50 km x 50 km. Quantitative evaluation of these requirements intrinsically relies on GPM precipitation retrieval algorithm performance in myriad precipitation regimes (and hence, assumptions related to physics) and on the quality of ground-validation (GV) data being used to assess the satellite products. We will review GPM GV products, their quality, and their application to assessing GPM science requirements, interleaving measurement and precipitation physical considerations applicable to the approaches used. Core GV data products used to assess GPM satellite products include 1) two minute and 30-minute rain gauge bias-adjusted radar rain rate products and precipitation types (rain/snow) adapted/modified from the NOAA/OU multi-radar multi-sensor (MRMS) product over the continental U.S.; 2) Polarimetric radar estimates of rain rate over the ocean collected using the K-Pol radar at Kwajalein Atoll in the Marshall Islands and the Middleton Island WSR-88D radar located in the Gulf of Alaska; and 3) Multi-regime, field campaign and site-specific disdrometer-measured rain/snow size distribution (DSD), phase and fallspeed information used to derive polarimetric radar-based DSD retrievals and snow water equivalent rates (SWER) for comparison to coincident GPM-estimated DSD and precipitation rates/types, respectively. Within the limits of GV-product uncertainty we demonstrate that the GPM Core satellite meets its basic mission science requirements for a variety of precipitation regimes. For the liquid phase, we find that GPM radar-based products are particularly successful in meeting bias and random error requirements associated with retrievals of rain rate and required +/- 0.5 millimeter error bounds for mass-weighted mean drop diameter. Version-04 (V4) GMI GPROF radiometer-based rain rate products exhibit reasonable agreement with GV, but do not completely meet mission science requirements over the continental U.S. for lighter rain rates (e.g., 1 mm/hr) due to excessive random error (~75%). Importantly, substantial corrections were made to the V4 GPROF algorithm and preliminary analysis of Version 5 (V5) rain products indicates more robust performance relative to GV. For the frozen phase and a modest GPM requirement to "demonstrate detection of snowfall", DPR products do successfully identify snowfall within the sensitivity and beam sampling limits of the DPR instrument (~12 dBZ lower limit; lowest clutter-free bins). Similarly, the GPROF algorithm successfully "detects" falling snow and delineates it from liquid precipitation. However, the GV approach to computing falling-snow "detection" statistics is intrinsically tied to GPROF Bayesian algorithm-based thresholds of precipitation "detection" and model analysis temperature, and is not sufficiently tied to SWER. Hence we will also discuss ongoing work to establish the lower threshold SWER for "detection" using combined GV radar, gauge and disdrometer-based case studies.
Characterizing a four-qubit planar lattice for arbitrary error detection
NASA Astrophysics Data System (ADS)
Chow, Jerry M.; Srinivasan, Srikanth J.; Magesan, Easwar; Córcoles, A. D.; Abraham, David W.; Gambetta, Jay M.; Steffen, Matthias
2015-05-01
Quantum error correction will be a necessary component towards realizing scalable quantum computers with physical qubits. Theoretically, it is possible to perform arbitrarily long computations if the error rate is below a threshold value. The two-dimensional surface code permits relatively high fault-tolerant thresholds at the ~1% level, and only requires a latticed network of qubits with nearest-neighbor interactions. Superconducting qubits have continued to steadily improve in coherence, gate, and readout fidelities, to become a leading candidate for implementation into larger quantum networks. Here we describe characterization experiments and calibration of a system of four superconducting qubits arranged in a planar lattice, amenable to the surface code. Insights into the particular qubit design and comparison between simulated parameters and experimentally determined parameters are given. Single- and two-qubit gate tune-up procedures are described and results for simultaneously benchmarking pairs of two-qubit gates are given. All controls are eventually used for an arbitrary error detection protocol described in separate work [Corcoles et al., Nature Communications, 6, 2015].
Jamali, Jamshid; Ayatollahi, Seyyed Mohammad Taghi; Jafari, Peyman
2017-01-01
Evaluating measurement equivalence (also known as differential item functioning (DIF)) is an important part of the process of validating psychometric questionnaires. This study aimed at evaluating the multiple indicators multiple causes (MIMIC) model for DIF detection when latent construct distribution is nonnormal and the focal group sample size is small. In this simulation-based study, Type I error rates and power of MIMIC model for detecting uniform-DIF were investigated under different combinations of reference to focal group sample size ratio, magnitude of the uniform-DIF effect, scale length, the number of response categories, and latent trait distribution. Moderate and high skewness in the latent trait distribution led to a decrease of 0.33% and 0.47% power of MIMIC model for detecting uniform-DIF, respectively. The findings indicated that, by increasing the scale length, the number of response categories and magnitude DIF improved the power of MIMIC model, by 3.47%, 4.83%, and 20.35%, respectively; it also decreased Type I error of MIMIC approach by 2.81%, 5.66%, and 0.04%, respectively. This study revealed that power of MIMIC model was at an acceptable level when latent trait distributions were skewed. However, empirical Type I error rate was slightly greater than nominal significance level. Consequently, the MIMIC was recommended for detection of uniform-DIF when latent construct distribution is nonnormal and the focal group sample size is small.
Jafari, Peyman
2017-01-01
Evaluating measurement equivalence (also known as differential item functioning (DIF)) is an important part of the process of validating psychometric questionnaires. This study aimed at evaluating the multiple indicators multiple causes (MIMIC) model for DIF detection when the latent construct distribution is nonnormal and the focal group sample size is small. In this simulation-based study, Type I error rates and power of the MIMIC model for detecting uniform-DIF were investigated under different combinations of reference-to-focal-group sample size ratio, magnitude of the uniform-DIF effect, scale length, number of response categories, and latent trait distribution. Moderate and high skewness in the latent trait distribution led to decreases of 0.33% and 0.47%, respectively, in the power of the MIMIC model for detecting uniform-DIF. The findings indicated that increasing the scale length, the number of response categories, and the magnitude of DIF improved the power of the MIMIC model by 3.47%, 4.83%, and 20.35%, respectively, and decreased the Type I error of the MIMIC approach by 2.81%, 5.66%, and 0.04%, respectively. This study revealed that the power of the MIMIC model was at an acceptable level when latent trait distributions were skewed. However, the empirical Type I error rate was slightly greater than the nominal significance level. Consequently, the MIMIC model is recommended for detection of uniform-DIF when the latent construct distribution is nonnormal and the focal group sample size is small. PMID:28713828
Evaluation of drug administration errors in a teaching hospital
2012-01-01
Background Medication errors can occur at any of the three steps of the medication use process: prescribing, dispensing and administration. We aimed to determine the incidence, type and clinical importance of drug administration errors and to identify risk factors. Methods Prospective study based on a disguised observation technique in four wards in a teaching hospital in Paris, France (800 beds). A pharmacist accompanied nurses and witnessed the preparation and administration of drugs to all patients during the three drug rounds on each of six days per ward. Main outcomes were number, type and clinical importance of errors and associated risk factors. Drug administration error rate was calculated with and without wrong time errors. The relationship between the occurrence of errors and potential risk factors was investigated using logistic regression models with random effects. Results Twenty-eight nurses caring for 108 patients were observed. Among 1501 opportunities for error, 415 administrations (430 errors) with one or more errors were detected (27.6%). There were 312 wrong time errors, ten simultaneously with another type of error, resulting in an error rate without wrong time errors of 7.5% (113/1501). The most frequently administered drugs were the cardiovascular drugs (425/1501, 28.3%). The highest risk of error in a drug administration was for dermatological drugs. No potentially life-threatening errors were witnessed and 6% of errors were classified as having a serious or significant impact on patients (mainly omission). In multivariate analysis, the occurrence of errors was associated with drug administration route, drug classification (ATC) and the number of patients under the nurse's care. Conclusion Medication administration errors are frequent. Identifying their determinants helps in designing targeted interventions. PMID:22409837
Evaluation of drug administration errors in a teaching hospital.
Berdot, Sarah; Sabatier, Brigitte; Gillaizeau, Florence; Caruba, Thibaut; Prognon, Patrice; Durieux, Pierre
2012-03-12
Medication errors can occur at any of the three steps of the medication use process: prescribing, dispensing and administration. We aimed to determine the incidence, type and clinical importance of drug administration errors and to identify risk factors. Prospective study based on a disguised observation technique in four wards in a teaching hospital in Paris, France (800 beds). A pharmacist accompanied nurses and witnessed the preparation and administration of drugs to all patients during the three drug rounds on each of six days per ward. Main outcomes were number, type and clinical importance of errors and associated risk factors. Drug administration error rate was calculated with and without wrong time errors. The relationship between the occurrence of errors and potential risk factors was investigated using logistic regression models with random effects. Twenty-eight nurses caring for 108 patients were observed. Among 1501 opportunities for error, 415 administrations (430 errors) with one or more errors were detected (27.6%). There were 312 wrong time errors, ten simultaneously with another type of error, resulting in an error rate without wrong time errors of 7.5% (113/1501). The most frequently administered drugs were the cardiovascular drugs (425/1501, 28.3%). The highest risk of error in a drug administration was for dermatological drugs. No potentially life-threatening errors were witnessed and 6% of errors were classified as having a serious or significant impact on patients (mainly omission). In multivariate analysis, the occurrence of errors was associated with drug administration route, drug classification (ATC) and the number of patients under the nurse's care. Medication administration errors are frequent. Identifying their determinants helps in designing targeted interventions.
A novel method for routine quality assurance of volumetric-modulated arc therapy.
Wang, Qingxin; Dai, Jianrong; Zhang, Ke
2013-10-01
Volumetric-modulated arc therapy (VMAT) is delivered through synchronized variation of gantry angle, dose rate, and multileaf collimator (MLC) leaf positions. The dynamic nature of the delivery challenges the parameter-setting accuracy of the linac control system. The purpose of this study was to develop a novel method for routine quality assurance (QA) of VMAT linacs. ArcCheck is a detector array with diodes distributed in a spiral pattern on a cylindrical surface. Utilizing its features, a QA plan was designed to strictly test all varying parameters during VMAT delivery on an Elekta Synergy linac. In this plan, there are 24 control points. The gantry rotates clockwise from 181° to 179°. The dose rate, gantry speed, and MLC positions cover the ranges commonly used in clinic. The two borders of the MLC-shaped field sit over two columns of ArcCheck diodes when the gantry rotates to the angle specified by each control point. The ratio of the dose rate at each of these diodes to that at the diode closest to the field center takes a well-defined value and is sensitive to the positioning error of the MLC leaf crossing the diode. Consequently, the positioning error can be determined from the ratio with the help of a relationship curve. The time when the gantry reaches the angle specified by each control point can be acquired from the virtual inclinometer that is a feature of ArcCheck. The gantry speed between two consecutive control points is then calculated. The aforementioned dose rate is calculated from an acm file that is generated during ArcCheck measurements. This file stores the data measured by each detector in 50 ms updates, with each update in a separate row. A computer program was written in MATLAB to process the data. The program output included the MLC positioning errors and the dose rate at each control point as well as the gantry speed between control points. To evaluate this method, the plan was delivered for four consecutive weeks. The actual dose rate and gantry speed were compared with those specified in the QA plan. Additionally, leaf positioning errors were intentionally introduced to investigate the sensitivity of this method. The relationship curves were established for detecting MLC positioning errors during VMAT delivery. For the four consecutive weeks measured, 98.4%, 94.9%, 89.2%, and 91.0% of the leaf positioning errors were within ± 0.5 mm, respectively. For the intentionally introduced leaf positioning systematic errors of -0.5 and +1 mm, the detected positioning errors of Y1 leaf 20 were -0.48 ± 0.14 and 1.02 ± 0.26 mm, respectively. The actual gantry speed and dose rate closely followed the values specified in the VMAT QA plan. This method can assess the accuracy of MLC positions and the dose rate at each control point, as well as the gantry speed between control points, at the same time. It is efficient and suitable for routine quality assurance of VMAT.
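The relationship-curve step described above (converting a measured diode dose-rate ratio into an MLC leaf positioning error) can be sketched as a simple table lookup with interpolation. The calibration values below are illustrative placeholders, not the published curve.

```python
import numpy as np

# Hedged sketch: invert an assumed calibration curve of diode dose-rate ratio
# versus known leaf offset to estimate the MLC positioning error of a leaf.

offset_mm = np.linspace(-2.0, 2.0, 41)          # known leaf offsets used for calibration
ratio_curve = 1.0 + 0.12 * offset_mm            # assumed monotonic ratio response

def leaf_position_error(measured_ratio):
    """Interpolate the calibration curve to estimate the leaf positioning error (mm)."""
    return float(np.interp(measured_ratio, ratio_curve, offset_mm))

if __name__ == "__main__":
    for r in (0.97, 1.00, 1.06):
        print(f"ratio {r:.2f} -> estimated error {leaf_position_error(r):+.2f} mm")
```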
Milekovic, Tomislav; Ball, Tonio; Schulze-Bonhage, Andreas; Aertsen, Ad; Mehring, Carsten
2013-01-01
Background Brain-machine interfaces (BMIs) can translate the neuronal activity underlying a user's movement intention into movements of an artificial effector. In spite of continuous improvements, errors in movement decoding are still a major problem of current BMI systems. If the difference between the decoded and intended movements becomes noticeable, it may lead to an execution error. Outcome errors, where subjects fail to reach a certain movement goal, are also present during online BMI operation. Detecting such errors can be beneficial for BMI operation: (i) errors can be corrected online after being detected and (ii) the adaptive BMI decoding algorithm can be updated to make fewer errors in the future. Methodology/Principal Findings Here, we show that error events can be detected from human electrocorticography (ECoG) during a continuous task with high precision, given a temporal tolerance of 300–400 milliseconds. We quantified the error detection accuracy and showed that, using only a small subset of 2×2 ECoG electrodes, 82% of detection information for outcome errors and 74% of detection information for execution errors available from all ECoG electrodes could be retained. Conclusions/Significance The error detection method presented here could be used to correct errors made during BMI operation or to adapt a BMI algorithm to make fewer errors in the future. Furthermore, our results indicate that a smaller ECoG implant could be used for error detection. Reducing the size of an ECoG electrode implant used for BMI decoding and error detection could significantly reduce the medical risk of implantation. PMID:23383315
An empirical comparison of several recent epistatic interaction detection methods.
Wang, Yue; Liu, Guimei; Feng, Mengling; Wong, Limsoon
2011-11-01
Many new methods have recently been proposed for detecting epistatic interactions in GWAS data. There is, however, no in-depth independent comparison of these methods yet. Five recent methods, TEAM, BOOST, SNPHarvester, SNPRuler and Screen and Clean (SC), are evaluated here in terms of power, type-1 error rate, scalability and completeness. In terms of power, TEAM performs best on data with main effect and BOOST performs best on data without main effect. In terms of type-1 error rate, TEAM and BOOST have higher type-1 error rates than SNPRuler and SNPHarvester. SC does not control type-1 error rate well. In terms of scalability, we tested the five methods using a dataset with 100 000 SNPs on a 64 bit Ubuntu system, with Intel (R) Xeon(R) CPU 2.66 GHz, 16 GB memory. TEAM takes ~36 days to finish and SNPRuler reports heap allocation problems. BOOST scales up to 100 000 SNPs and the cost is much lower than that of TEAM. SC and SNPHarvester are the most scalable. In terms of completeness, we study how frequently the pruning techniques employed by these methods incorrectly prune away the most significant epistatic interactions. We find that, on average, 20% of datasets without main effect and 60% of datasets with main effect are pruned incorrectly by BOOST, SNPRuler and SNPHarvester. The software for the five methods tested is available from the URLs below. TEAM: http://csbio.unc.edu/epistasis/download.php BOOST: http://ihome.ust.hk/~eeyang/papers.html. SNPHarvester: http://bioinformatics.ust.hk/SNPHarvester.html. SNPRuler: http://bioinformatics.ust.hk/SNPRuler.zip. Screen and Clean: http://wpicr.wpic.pitt.edu/WPICCompGen/. wangyue@nus.edu.sg.
Basavanhally, Ajay; Viswanath, Satish; Madabhushi, Anant
2015-01-01
Clinical trials increasingly employ medical imaging data in conjunction with supervised classifiers, where the latter require large amounts of training data to accurately model the system. Yet, a classifier selected at the start of the trial based on smaller and more accessible datasets may yield inaccurate and unstable classification performance. In this paper, we aim to address two common concerns in classifier selection for clinical trials: (1) predicting expected classifier performance for large datasets based on error rates calculated from smaller datasets and (2) the selection of appropriate classifiers based on expected performance for larger datasets. We present a framework for comparative evaluation of classifiers using only limited amounts of training data by using random repeated sampling (RRS) in conjunction with a cross-validation sampling strategy. Extrapolated error rates are subsequently validated via comparison with leave-one-out cross-validation performed on a larger dataset. The ability to predict error rates as dataset size increases is demonstrated on both synthetic data as well as three different computational imaging tasks: detecting cancerous image regions in prostate histopathology, differentiating high and low grade cancer in breast histopathology, and detecting cancerous metavoxels in prostate magnetic resonance spectroscopy. For each task, the relationships between 3 distinct classifiers (k-nearest neighbor, naive Bayes, Support Vector Machine) are explored. Further quantitative evaluation in terms of interquartile range (IQR) suggests that our approach consistently yields error rates with lower variability (mean IQRs of 0.0070, 0.0127, and 0.0140) than a traditional RRS approach (mean IQRs of 0.0297, 0.0779, and 0.305) that does not employ cross-validation sampling for all three datasets. PMID:25993029
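A minimal sketch of the extrapolation idea, assuming an inverse power-law learning curve rather than the authors' exact RRS-with-cross-validation framework: fit error rates measured at small training-set sizes and extrapolate to a larger dataset. The data points and the n=2000 target are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative sketch: model the classification error rate as an inverse
# power law of training-set size and extrapolate from small-sample estimates.

def learning_curve(n, a, b, c):
    return a * n ** (-b) + c          # error decays toward an asymptote c

# assumed example data: mean cross-validated error at each training-set size
n_train = np.array([25, 50, 100, 200, 400], dtype=float)
err = np.array([0.31, 0.26, 0.22, 0.20, 0.185])

params, _ = curve_fit(learning_curve, n_train, err, p0=(1.0, 0.5, 0.1), maxfev=10000)
print("fitted (a, b, c):", np.round(params, 3))
print("extrapolated error at n=2000:", round(learning_curve(2000.0, *params), 3))
```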
Di, Huige; Zhang, Zhanfei; Hua, Hangbo; Zhang, Jiaqi; Hua, Dengxin; Wang, Yufeng; He, Tingyao
2017-03-06
Accurate aerosol optical properties can be obtained via the high spectral resolution lidar (HSRL) technique, which employs a narrow spectral filter to suppress the Rayleigh or Mie scattering in lidar return signals. The ability of the filter to suppress Rayleigh or Mie scattering is critical for HSRL. At the same time, the rejection of the filter cannot be increased without limit. How to optimize the spectral discriminator and select an appropriate suppression rate for the signal is therefore an important question. The HSRL technique was thoroughly studied based on error propagation. Error analyses and sensitivity studies were carried out on the transmittance characteristics of the spectral discriminator. Moreover, two different spectroscopic methods for HSRL were described and compared: one is to suppress the Mie scattering; the other is to suppress the Rayleigh scattering. The corresponding HSRLs were simulated and analyzed. The results show that excessive suppression of Rayleigh scattering or Mie scattering in a high-spectral channel is not necessary if the transmittance of the spectral filter for molecular and aerosol scattering signals can be well characterized. When the ratio of the transmittance of the spectral filter for aerosol scattering to that for molecular scattering is less than 0.1 or greater than 10, the detection error does not change much with its value. This conclusion implies that we have more choices for the high-spectral discriminator in HSRL. Moreover, the detection errors of HSRL for the two spectroscopic methods vary greatly with the atmospheric backscattering ratio. To reduce the detection error, it is necessary to choose a reasonable spectroscopic method. The detection method of suppressing the Rayleigh signal and extracting the Mie signal achieves less error in a clear atmosphere, while the method of suppressing the Mie signal and extracting the Rayleigh signal achieves less error in a polluted atmosphere.
Seeing the Errors You Feel Enhances Locomotor Performance but Not Learning.
Roemmich, Ryan T; Long, Andrew W; Bastian, Amy J
2016-10-24
In human motor learning, it is thought that the more information we have about our errors, the faster we learn. Here, we show that additional error information can lead to improved motor performance without any concomitant improvement in learning. We studied split-belt treadmill walking that drives people to learn a new gait pattern using sensory prediction errors detected by proprioceptive feedback. When we also provided visual error feedback, participants acquired the new walking pattern far more rapidly and showed accelerated restoration of the normal walking pattern during washout. However, when the visual error feedback was removed during either learning or washout, errors reappeared with performance immediately returning to the level expected based on proprioceptive learning alone. These findings support a model with two mechanisms: a dual-rate adaptation process that learns invariantly from sensory prediction error detected by proprioception and a visual-feedback-dependent process that monitors learning and corrects residual errors but shows no learning itself. We show that our voluntary correction model accurately predicted behavior in multiple situations where visual feedback was used to change acquisition of new walking patterns while the underlying learning was unaffected. The computational and behavioral framework proposed here suggests that parallel learning and error correction systems allow us to rapidly satisfy task demands without necessarily committing to learning, as the relative permanence of learning may be inappropriate or inefficient when facing environments that are liable to change. Copyright © 2016 Elsevier Ltd. All rights reserved.
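The two-mechanism account can be sketched as a standard dual-rate state-space model plus a feedback-dependent correction that cancels visible error without updating the internal states. Retention and learning-rate parameters below are illustrative assumptions, not fitted values from the study.

```python
import numpy as np

# Minimal sketch: a dual-rate adaptation process driven by proprioceptive
# prediction error, plus a visual-feedback correction that removes residual
# error on a trial without contributing to learning.

A_f, B_f = 0.92, 0.10     # fast process: poor retention, fast learning
A_s, B_s = 0.996, 0.02    # slow process: good retention, slow learning
perturbation = 1.0
n_trials = 300

x_f = x_s = 0.0
errors_seen = []
for t in range(n_trials):
    adapted = x_f + x_s
    error = perturbation - adapted            # proprioceptive prediction error
    vision_on = t < 150                       # visual feedback available early on
    correction = error if vision_on else 0.0  # voluntary correction, no learning
    errors_seen.append(perturbation - (adapted + correction))
    x_f = A_f * x_f + B_f * error             # learning driven by prediction error only
    x_s = A_s * x_s + B_s * error

print("residual error with vision (trial 149): %.3f" % errors_seen[149])
print("residual error after vision removed (trial 150): %.3f" % errors_seen[150])
```

Consistent with the description above, performance looks fully compensated while visual feedback is present, and the error reappears at the level predicted by the underlying adaptation states as soon as the feedback is removed.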
Gravitational wave spectroscopy of binary neutron star merger remnants with mode stacking
NASA Astrophysics Data System (ADS)
Yang, Huan; Paschalidis, Vasileios; Yagi, Kent; Lehner, Luis; Pretorius, Frans; Yunes, Nicolás
2018-01-01
A binary neutron star coalescence event has recently been observed for the first time in gravitational waves, and many more detections are expected once current ground-based detectors begin operating at design sensitivity. As in the case of binary black holes, gravitational waves generated by binary neutron stars consist of inspiral, merger, and postmerger components. Detecting the latter is important because it encodes information about the nuclear equation of state in a regime that cannot be probed prior to merger. The postmerger signal, however, can only be expected to be measurable by current detectors for events closer than roughly ten megaparsecs, which given merger rate estimates implies a low probability of observation within the expected lifetime of these detectors. We carry out Monte Carlo simulations showing that the dominant postmerger signal (the ℓ=m=2 mode) from individual binary neutron star mergers may not have a good chance of observation even with the most sensitive future ground-based gravitational wave detectors proposed so far (the Einstein Telescope and Cosmic Explorer, for certain equations of state, assuming a full year of operation, the latest merger rates, and a detection threshold corresponding to a signal-to-noise ratio of 5). For this reason, we propose two methods that stack the postmerger signal from multiple binary neutron star observations to boost the postmerger detection probability. The first method follows a commonly used practice of multiplying the Bayes factors of individual events. The second method relies on an assumption that the mode phase can be determined from the inspiral waveform, so that coherent mode stacking of the data from different events becomes possible. We find that both methods significantly improve the chances of detecting the dominant postmerger signal, making a detection very likely after a year of observation with Cosmic Explorer for certain equations of state. We also show that in terms of detection, coherent stacking is more efficient in accumulating confidence for the presence of postmerger oscillations in a signal than the first method. Moreover, assuming the postmerger signal is detected with Cosmic Explorer via stacking, we estimate through a Fisher analysis that the peak frequency can be measured to a statistical error of ~4-20 Hz for certain equations of state. Such an error corresponds to a neutron star radius measurement to within ~15-56 m, a fractional relative error of ~4%, suggesting that systematic errors from theoretical modeling (≳100 m) may dominate the error budget.
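A hedged toy comparison of the two stacking strategies, using Gaussian-noise "events" rather than real detector data: incoherent stacking sums per-event power (loosely analogous to combining individual Bayes factors), while coherent stacking sums phase-aligned amplitudes. Event count, amplitude, and false-alarm probability are assumptions.

```python
import numpy as np

# Toy Monte Carlo: detection probability at a fixed false-alarm probability
# for coherent (phase-aligned amplitude sum) versus incoherent (power sum)
# stacking of N weak events, each modeled as a complex Gaussian measurement.

rng = np.random.default_rng(1)
N_events, amp, n_mc, fap = 20, 0.6, 100_000, 1e-3

def statistics(signal_amp):
    z = signal_amp + (rng.normal(size=(n_mc, N_events)) +
                      1j * rng.normal(size=(n_mc, N_events))) / np.sqrt(2)
    coherent = np.sum(z.real, axis=1) / np.sqrt(N_events)   # phase-aligned sum
    incoherent = np.sum(np.abs(z) ** 2, axis=1)             # per-event power sum
    return coherent, incoherent

coh_noise, inc_noise = statistics(0.0)
coh_sig, inc_sig = statistics(amp)
thr_coh = np.quantile(coh_noise, 1 - fap)
thr_inc = np.quantile(inc_noise, 1 - fap)
print("coherent stacking detection prob.:   %.3f" % np.mean(coh_sig > thr_coh))
print("incoherent stacking detection prob.: %.3f" % np.mean(inc_sig > thr_inc))
```

With these settings the coherent statistic typically yields the higher detection probability, in line with the qualitative conclusion of the abstract.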
On-board error correction improves IR earth sensor accuracy
NASA Astrophysics Data System (ADS)
Alex, T. K.; Kasturirangan, K.; Shrivastava, S. K.
1989-10-01
Infra-red earth sensors are used in satellites for attitude sensing. Their accuracy is limited by systematic and random errors. The sources of errors in a scanning infra-red earth sensor are analyzed in this paper. The systematic errors arising from seasonal variation of infra-red radiation, the oblate shape of the earth, the ambient temperature of the sensor, and changes in scan/spin rates have been analyzed. Simple relations are derived using least-squares curve fitting for on-board correction of these errors. Random errors arising from noise in the detector and amplifiers, instability of alignment and localized radiance anomalies are analyzed and possible correction methods are suggested. Sun and Moon interference on earth sensor performance has seriously affected a number of missions. The on-board processor detects Sun/Moon interference and corrects the errors on-board. It is possible to obtain an eightfold improvement in sensing accuracy, which is comparable with ground-based post facto attitude refinement.
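The least-squares correction idea can be sketched as fitting a small harmonic model to the systematic error as a function of a seasonal parameter and subtracting the fit on-board. The error model, noise level and coefficients below are assumptions for illustration, not the sensor's actual calibration.

```python
import numpy as np

# Illustrative sketch: least-squares fit of a seasonal (harmonic) model to a
# systematic attitude error, so the fitted relation can be subtracted on-board.

day = np.arange(0, 365, 5.0)
true_bias = 0.08 * np.sin(2 * np.pi * day / 365.25) + 0.02       # deg, assumed seasonal effect
measured = true_bias + np.random.default_rng(0).normal(0, 0.01, day.size)

# design matrix for a + b*sin + c*cos (linear in the coefficients)
X = np.column_stack([np.ones_like(day),
                     np.sin(2 * np.pi * day / 365.25),
                     np.cos(2 * np.pi * day / 365.25)])
coef, *_ = np.linalg.lstsq(X, measured, rcond=None)

corrected = measured - X @ coef
print("rms error before correction: %.4f deg" % np.sqrt(np.mean(measured ** 2)))
print("rms error after correction:  %.4f deg" % np.sqrt(np.mean(corrected ** 2)))
```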
Evaluation of a Web-based Error Reporting Surveillance System in a Large Iranian Hospital.
Askarian, Mehrdad; Ghoreishi, Mahboobeh; Akbari Haghighinejad, Hourvash; Palenik, Charles John; Ghodsi, Maryam
2017-08-01
Proper reporting of medical errors helps healthcare providers learn from adverse incidents and improve patient safety. A well-designed and functioning confidential reporting system is an essential component of this process. There are many error reporting methods; however, web-based systems are often preferred because they can provide comprehensive and more easily analyzed information. This study addresses the use of a web-based error reporting system. This interventional study involved the application of an in-house designed "voluntary web-based medical error reporting system." The system has been used since July 2014 in Nemazee Hospital, Shiraz University of Medical Sciences. The rate and severity of errors reported during the year prior to and the year after system launch were compared. The slope of the error report trend line was steep during the first 12 months (B = 105.727, P = 0.00). However, it slowed following launch of the web-based reporting system and was no longer statistically significant (B = 15.27, P = 0.81) by the end of the second year. Most recorded errors were no-harm laboratory types and were due to inattention. Usually, they were reported by nurses and other permanent employees. Most reported errors occurred during morning shifts. Using a standardized web-based error reporting system can be beneficial. This study reports on the performance of an in-house designed reporting system, which appeared to properly detect and analyze medical errors. The system also generated follow-up reports in a timely and accurate manner. Detection of near-miss errors could play a significant role in identifying areas of system defects.
Powerful Inference with the D-Statistic on Low-Coverage Whole-Genome Data
Soraggi, Samuele; Wiuf, Carsten; Albrechtsen, Anders
2017-01-01
The detection of ancient gene flow between human populations is an important issue in population genetics. A common tool for detecting ancient admixture events is the D-statistic. The D-statistic is based on the hypothesis of a genetic relationship that involves four populations, whose correctness is assessed by evaluating specific coincidences of alleles between the groups. When working with high-throughput sequencing data, calling genotypes accurately is not always possible; therefore, the D-statistic currently samples a single base from the reads of one individual per population. This implies ignoring much of the information in the data, an issue especially striking in the case of ancient genomes. We provide a significant improvement to overcome the problems of the D-statistic by considering all reads from multiple individuals in each population. We also apply type-specific error correction to combat the problems of sequencing errors, and show a way to correct for introgression from an external population that is not part of the supposed genetic relationship, and how this leads to an estimate of the admixture rate. We prove that the D-statistic is approximated by a standard normal distribution. Furthermore, we show that our method outperforms the traditional D-statistic in detecting admixtures. The power gain is most pronounced for low and medium sequencing depth (1–10×), and performances are as good as with perfectly called genotypes at a sequencing depth of 2×. We show the reliability of error correction in scenarios with simulated errors and ancient data, and correct for introgression in known scenarios to estimate the admixture rates. PMID:29196497
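For contrast with the multi-read extension described above, the sketch below computes the classic single-sample D-statistic from per-site ABBA/BABA indicators, with a block jackknife giving an approximate Z-score. The simulated site patterns and block count are assumptions.

```python
import numpy as np

# Hedged sketch of the classic ABBA-BABA D-statistic with a block-jackknife
# standard error; inputs are per-site 0/1 indicators of the two patterns.

def d_statistic(abba, baba, n_blocks=50):
    abba, baba = np.asarray(abba, float), np.asarray(baba, float)
    D = (abba.sum() - baba.sum()) / (abba.sum() + baba.sum())
    # leave-one-block-out estimates over contiguous genomic blocks
    blocks_a = np.array_split(abba, n_blocks)
    blocks_b = np.array_split(baba, n_blocks)
    d_loo = []
    for i in range(n_blocks):
        a = abba.sum() - blocks_a[i].sum()
        b = baba.sum() - blocks_b[i].sum()
        d_loo.append((a - b) / (a + b))
    d_loo = np.array(d_loo)
    se = np.sqrt((n_blocks - 1) / n_blocks * np.sum((d_loo - d_loo.mean()) ** 2))
    return D, D / se

rng = np.random.default_rng(0)
abba = rng.binomial(1, 0.012, 200_000)   # toy site patterns with a slight excess of ABBA
baba = rng.binomial(1, 0.010, 200_000)
D, Z = d_statistic(abba, baba)
print(f"D = {D:.4f}, Z = {Z:.2f}")
```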
Li, Qiuying; Pham, Hoang
2017-01-01
In this paper, we propose a software reliability model that considers not only error generation but also fault removal efficiency combined with testing coverage information, based on a nonhomogeneous Poisson process (NHPP). During the past four decades, many software reliability growth models (SRGMs) based on NHPP have been proposed to estimate software reliability measures, most of which share the following assumptions: 1) it is a common phenomenon that the fault detection rate changes during the testing phase; 2) as a result of imperfect debugging, fault removal is related to a fault re-introduction rate. However, there are few SRGMs in the literature that differentiate between fault detection and fault removal, i.e. they seldom consider imperfect fault removal efficiency. In the practical software development process, fault removal efficiency cannot always be perfect, i.e. detected failures might not be removed completely, the original faults might still exist, and new faults might be introduced in the meantime, which is referred to as the imperfect debugging phenomenon. In this study, a model aiming to incorporate the fault introduction rate, fault removal efficiency and testing coverage into software reliability evaluation is developed, using testing coverage to express the fault detection rate and using fault removal efficiency to account for fault repair. We compare the performance of the proposed model with several existing NHPP SRGMs using three sets of real failure data based on five criteria. The results show that the model gives better fitting and predictive performance.
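A minimal sketch of NHPP-based reliability growth fitting, using the classic Goel-Okumoto mean-value function m(t) = a(1 - e^{-bt}) as a stand-in; the proposed model additionally folds in testing coverage, fault introduction and removal efficiency, which are not reproduced here. The weekly failure counts are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative sketch: fit a Goel-Okumoto NHPP mean-value function to
# cumulative failure counts and report the estimated parameters.

def mean_value(t, a, b):
    return a * (1.0 - np.exp(-b * t))

t_weeks = np.arange(1, 13, dtype=float)
cum_failures = np.array([12, 21, 28, 34, 38, 42, 45, 47, 49, 50, 51, 52], float)

(a_hat, b_hat), _ = curve_fit(mean_value, t_weeks, cum_failures, p0=(60, 0.2))
pred = mean_value(t_weeks, a_hat, b_hat)
print(f"estimated total faults a = {a_hat:.1f}, detection rate b = {b_hat:.3f}/week")
print("mean squared fitting error: %.2f" % np.mean((pred - cum_failures) ** 2))
```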
NASA Astrophysics Data System (ADS)
Zhu, Yixiao; Jiang, Mingxuan; Ruan, Xiaoke; Chen, Zeyu; Li, Chenjia; Zhang, Fan
2018-05-01
We experimentally demonstrate 6.4 Tb/s wavelength division multiplexed (WDM) direct-detection transmission based on Nyquist twin-SSB modulation over 25 km SSMF with bit error rates (BERs) below the 20% hard-decision forward error correction (HD-FEC) threshold of 1.5 × 10⁻². The two sidebands of each channel are separately detected using a Kramers-Kronig receiver without MIMO equalization. We also carry out numerical simulations to evaluate the system robustness against I/Q amplitude imbalance, I/Q phase deviation and the extinction ratio of the modulator, respectively. Furthermore, we show in simulation that the requirement of a steep-edge optical filter can be relaxed if multi-input-multi-output (MIMO) equalization between the two sidebands is used.
Parallel pulse processing and data acquisition for high speed, low error flow cytometry
Engh, G.J. van den; Stokdijk, W.
1992-09-22
A digitally synchronized parallel pulse processing and data acquisition system for a flow cytometer has multiple parallel input channels with independent pulse digitization and FIFO storage buffer. A trigger circuit controls the pulse digitization on all channels. After an event has been stored in each FIFO, a bus controller moves the oldest entry from each FIFO buffer onto a common data bus. The trigger circuit generates an ID number for each FIFO entry, which is checked by an error detection circuit. The system has high speed and low error rate. 17 figs.
Parallel Processing of Broad-Band PPM Signals
NASA Technical Reports Server (NTRS)
Gray, Andrew; Kang, Edward; Lay, Norman; Vilnrotter, Victor; Srinivasan, Meera; Lee, Clement
2010-01-01
A parallel-processing algorithm and a hardware architecture to implement the algorithm have been devised for time-slot synchronization in the reception of pulse-position-modulated (PPM) optical or radio signals. As in the cases of some prior algorithms and architectures for parallel, discrete-time, digital processing of signals other than PPM, an incoming broadband signal is divided into multiple parallel narrower-band signals by means of sub-sampling and filtering. The number of parallel streams is chosen so that the frequency content of the narrower-band signals is low enough to enable processing by relatively low-speed complementary metal oxide semiconductor (CMOS) electronic circuitry. The algorithm and architecture are intended to satisfy requirements for time-varying time-slot synchronization and post-detection filtering, with correction of timing errors independent of estimation of timing errors. They are also intended to afford flexibility for dynamic reconfiguration and upgrading. The architecture is implemented in a reconfigurable CMOS processor in the form of a field-programmable gate array. The algorithm and its hardware implementation incorporate three separate time-varying filter banks for three distinct functions: correction of sub-sample timing errors, post-detection filtering, and post-detection estimation of timing errors. The design of the filter bank for correction of timing errors, the method of estimating timing errors, and the design of a feedback-loop filter are governed by a host of parameters, the most critical one, with regard to processing very broadband signals with CMOS hardware, being the number of parallel streams (equivalently, the rate-reduction parameter).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boehnke, E McKenzie; DeMarco, J; Steers, J
2016-06-15
Purpose: To examine both the IQM's sensitivity and false positive rate for varying MLC errors. By balancing these two characteristics, an optimal tolerance value can be derived. Methods: An unmodified SBRT liver IMRT plan containing 7 fields was randomly selected as a representative clinical case. The active MLC positions for all fields were perturbed randomly from a square distribution of varying width (±1 mm to ±5 mm). These unmodified and modified plans were measured multiple times each by the IQM (a large-area ion chamber mounted to a TrueBeam linac head). Measurements were analyzed relative to the initial, unmodified measurement. IQM readings are analyzed as a function of control points. In order to examine sensitivity to errors along a field's delivery, each measured field was divided into 5 groups of control points, and the maximum error in each group was recorded. Since the plans have known errors, we compared how well the IQM is able to differentiate between unmodified and error plans. ROC curves and logistic regression were used to analyze this, independent of thresholds. Results: A likelihood-ratio Chi-square test showed that the IQM could significantly predict whether a plan had MLC errors, with the exception of the beginning and ending control points. Upon further examination, we determined there was ramp-up occurring at the beginning of delivery. Once the linac AFC was tuned, the subsequent measurements (relative to a new baseline) showed significant (p <0.005) abilities to predict MLC errors. Using the area under the curve, we show the IQM's ability to detect errors increases with increasing MLC error (Spearman's Rho=0.8056, p<0.0001). The optimal IQM count thresholds from the ROC curves are ±3%, ±2%, and ±7% for the beginning, middle 3, and end segments, respectively. Conclusion: The IQM has proven to be able to detect not only MLC errors, but also differences in beam tuning (ramp-up). Partially supported by the Susan Scott Foundation.
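The ROC-based tolerance selection can be sketched as follows: given per-segment IQM count deviations for unmodified and perturbed deliveries, compute the AUC and pick the threshold that maximizes Youden's J. The deviation distributions below are simulated stand-ins, not the measured data.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hedged sketch: choose a deviation tolerance that balances sensitivity and
# false positive rate by maximizing Youden's J along the ROC curve.

rng = np.random.default_rng(3)
dev_ok = rng.normal(0.0, 1.0, 200)        # % deviation, unmodified plans (assumed)
dev_err = rng.normal(3.5, 1.5, 200)       # % deviation, plans with MLC errors (assumed)
scores = np.abs(np.concatenate([dev_ok, dev_err]))
labels = np.concatenate([np.zeros(200), np.ones(200)])

fpr, tpr, thresholds = roc_curve(labels, scores)
best = np.argmax(tpr - fpr)               # Youden's J statistic
print("AUC = %.3f" % roc_auc_score(labels, scores))
print("optimal tolerance = +/- %.1f%% (TPR %.2f, FPR %.2f)"
      % (thresholds[best], tpr[best], fpr[best]))
```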
Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty
NASA Astrophysics Data System (ADS)
Ballantyne, A. P.; Andres, R.; Houghton, R.; Stocker, B. D.; Wanninkhof, R.; Anderegg, W.; Cooper, L. A.; DeGrandpre, M.; Tans, P. P.; Miller, J. B.; Alden, C.; White, J. W. C.
2015-04-01
Over the last 5 decades monitoring systems have been developed to detect changes in the accumulation of carbon (C) in the atmosphere and ocean; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate errors and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ uncertainties of the atmospheric growth rate have decreased from 1.2 Pg C yr⁻¹ in the 1960s to 0.3 Pg C yr⁻¹ in the 2000s due to an expansion of the atmospheric observation network. The 2σ uncertainties in fossil fuel emissions have increased from 0.3 Pg C yr⁻¹ in the 1960s to almost 1.0 Pg C yr⁻¹ during the 2000s due to differences in national reporting errors and differences in energy inventories. Lastly, while land use emissions have remained fairly constant, their errors still remain high and thus their contribution to the uncertainty in global C uptake is not trivial. Currently, the absolute errors in fossil fuel emissions rival the total emissions from land use, highlighting the extent to which fossil fuels dominate the global C budget. Because errors in the atmospheric growth rate have decreased faster than errors in total emissions have increased, a ~20% reduction in the overall uncertainty of net global C uptake has occurred. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that terrestrial C uptake has increased and 97% confident that ocean C uptake has increased over the last 5 decades. Thus, it is clear that arguably one of the most vital ecosystem services currently provided by the biosphere is the continued removal of approximately half of atmospheric CO2 emissions from the atmosphere, although there are certain environmental costs associated with this service, such as the acidification of ocean waters.
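The budget bookkeeping behind these uncertainty statements can be sketched as a residual calculation with quadrature error propagation (assuming independent components); the illustrative values below are rounded, generic numbers in Pg C/yr, not the paper's estimates, and the temporal-correlation treatment described above is omitted.

```python
import numpy as np

# Simplified sketch: net land+ocean uptake as the residual of emissions minus
# atmospheric growth, with its uncertainty from quadrature error propagation.

fossil, sig_fossil = 9.0, 0.5        # fossil fuel and cement emissions (1-sigma, assumed)
landuse, sig_lu = 1.0, 0.5           # land-use change emissions (1-sigma, assumed)
atm_growth, sig_atm = 4.5, 0.15      # atmospheric CO2 growth rate (1-sigma, assumed)

net_uptake = fossil + landuse - atm_growth
sig_uptake = np.sqrt(sig_fossil**2 + sig_lu**2 + sig_atm**2)

print(f"net land+ocean uptake: {net_uptake:.1f} +/- {2*sig_uptake:.1f} Pg C/yr (2-sigma)")
```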
A soft kinetic data structure for lesion border detection.
Kockara, Sinan; Mete, Mutlu; Yip, Vincent; Lee, Brendan; Aydin, Kemal
2010-06-15
Medical imaging and image processing techniques, ranging from microscopic to macroscopic, have become one of the main components of diagnostic procedures to assist dermatologists in their medical decision-making processes. Computer-aided segmentation and border detection on dermoscopic images is one of the core components of diagnostic procedures and therapeutic interventions for skin cancer. Automated assessment tools for dermoscopic images have become an important research field mainly because of inter- and intra-observer variations in human interpretations. In this study, a novel approach, the graph spanner, for automatic border detection in dermoscopic images is proposed. In this approach, a proximity graph representation of dermoscopic images is presented in order to detect regions and borders in skin lesions. The graph spanner approach is examined on a set of 100 dermoscopic images whose manually drawn borders by a dermatologist are used as the ground truth. Error rates, false positives and false negatives, along with true positives and true negatives, are quantified by digitally comparing results with manually determined borders from a dermatologist. The results show that the highest precision and recall rates obtained to determine lesion boundaries are 100%. However, the accuracy of assessment averages 97.72% and the mean border error is 2.28% for the whole dataset.
Spatial Support Vector Regression to Detect Silent Errors in the Exascale Era
DOE Office of Scientific and Technical Information (OSTI.GOV)
Subasi, Omer; Di, Sheng; Bautista-Gomez, Leonardo
As the exascale era approaches, the increasing capacity of high-performance computing (HPC) systems with targeted power and energy budget goals introduces significant challenges in reliability. Silent data corruptions (SDCs) or silent errors are one of the major sources that corrupt the execution results of HPC applications without being detected. In this work, we explore a low-memory-overhead SDC detector, by leveraging epsilon-insensitive support vector machine regression, to detect SDCs that occur in HPC applications that can be characterized by an impact error bound. The key contributions are threefold. (1) Our design takes spatial features (i.e., neighbouring data values for each data point in a snapshot) into the training data, such that little memory overhead (less than 1%) is introduced. (2) We provide an in-depth study on the detection ability and performance with different parameters, and we optimize the detection range carefully. (3) Experiments with eight real-world HPC applications show that our detector can achieve a detection sensitivity (i.e., recall) of up to 99% yet suffer a false positive rate of less than 1% for most cases. Our detector incurs low performance overhead, 5% on average, for all benchmarks studied in the paper. Compared with other state-of-the-art techniques, our detector exhibits the best tradeoff considering detection ability and overheads.
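A hedged sketch in the spirit of the detector described above: train an epsilon-insensitive SVR to predict each interior grid value from its spatial neighbours on a clean snapshot, then flag points in a possibly corrupted snapshot whose residual exceeds an impact error bound. The 1-D field, injected corruption and bound are synthetic assumptions.

```python
import numpy as np
from sklearn.svm import SVR

# Hedged sketch: SVR on spatial-neighbour features, flagging points whose
# observed value departs from the prediction by more than an error bound.

rng = np.random.default_rng(0)
field = np.cumsum(rng.normal(0, 0.1, 512)) + 10.0        # smooth 1-D snapshot

# training pairs: (left, right) neighbours -> centre value, on clean data
X = np.column_stack([field[:-2], field[2:]])
y = field[1:-1]
model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, y)

corrupted = field.copy()
corrupted[200] += 5.0                                    # inject a silent data corruption
pred = model.predict(np.column_stack([corrupted[:-2], corrupted[2:]]))
residual = np.abs(corrupted[1:-1] - pred)

error_bound = 1.0                                        # assumed impact error bound
flagged = np.where(residual > error_bound)[0] + 1        # back to field indices
print("flagged indices:", flagged)
```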
Fresh Fuel Measurements With the Differential Die-Away Self-Interrogation Instrument
NASA Astrophysics Data System (ADS)
Trahan, Alexis C.; Belian, Anthony P.; Swinhoe, Martyn T.; Menlove, Howard O.; Flaska, Marek; Pozzi, Sara A.
2017-07-01
The purpose of the Next Generation Safeguards Initiative (NGSI)-Spent Fuel (SF) Project is to strengthen the technical toolkit of safeguards inspectors and/or other interested parties. The NGSI-SF team is working to achieve the following technical goals more easily and efficiently than in the past using nondestructive assay measurements of spent fuel assemblies: 1) verify the initial enrichment, burnup, and cooling time of facility declaration; 2) detect the diversion or replacement of pins; 3) estimate the plutonium mass; 4) estimate decay heat; and 5) determine the reactivity of spent fuel assemblies. The differential die-away self-interrogation (DDSI) instrument is one instrument that was assessed for years regarding its feasibility for robust, timely verification of spent fuel assemblies. The instrument was recently built and was tested using fresh fuel assemblies in a variety of configurations, including varying enrichment, neutron absorber content, and symmetry. The early die-away method, a multiplication determination method developed in simulation space, was successfully tested on the fresh fuel assembly data and determined multiplication with a root-mean-square (RMS) error of 2.9%. The experimental results were compared with MCNP simulations of the instrument as well. Low multiplication assemblies had agreement with an average RMS error of 0.2% in the singles count rate (i.e., total neutrons detected per second) and 3.4% in the doubles count rates (i.e., neutrons detected in coincidence per second). High-multiplication assemblies had agreement with an average RMS error of 4.1% in the singles and 13.3% in the doubles count rates.
Fresh Fuel Measurements With the Differential Die-Away Self-Interrogation Instrument
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trahan, Alexis C.; Belian, Anthony P.; Swinhoe, Martyn T.
The purpose of the Next Generation Safeguards Initiative (NGSI)-Spent Fuel (SF) Project is to strengthen the technical toolkit of safeguards inspectors and/or other interested parties. Thus the NGSI-SF team is working to achieve the following technical goals more easily and efficiently than in the past using nondestructive assay measurements of spent fuel assemblies: 1) verify the initial enrichment, burnup, and cooling time of facility declaration; 2) detect the diversion or replacement of pins; 3) estimate the plutonium mass; 4) estimate decay heat; and 5) determine the reactivity of spent fuel assemblies. The differential die-away self-interrogation (DDSI) instrument is one instrument that was assessed for years regarding its feasibility for robust, timely verification of spent fuel assemblies. The instrument was recently built and was tested using fresh fuel assemblies in a variety of configurations, including varying enrichment, neutron absorber content, and symmetry. The early die-away method, a multiplication determination method developed in simulation space, was successfully tested on the fresh fuel assembly data and determined multiplication with a root-mean-square (RMS) error of 2.9%. The experimental results were compared with MCNP simulations of the instrument as well. Low multiplication assemblies had agreement with an average RMS error of 0.2% in the singles count rate (i.e., total neutrons detected per second) and 3.4% in the doubles count rates (i.e., neutrons detected in coincidence per second). High-multiplication assemblies had agreement with an average RMS error of 4.1% in the singles and 13.3% in the doubles count rates.
Fresh Fuel Measurements With the Differential Die-Away Self-Interrogation Instrument
Trahan, Alexis C.; Belian, Anthony P.; Swinhoe, Martyn T.; ...
2017-01-05
The purpose of the Next Generation Safeguards Initiative (NGSI)-Spent Fuel (SF) Project is to strengthen the technical toolkit of safeguards inspectors and/or other interested parties. Thus the NGSI-SF team is working to achieve the following technical goals more easily and efficiently than in the past using nondestructive assay measurements of spent fuel assemblies: 1) verify the initial enrichment, burnup, and cooling time of facility declaration; 2) detect the diversion or replacement of pins; 3) estimate the plutonium mass; 4) estimate decay heat; and 5) determine the reactivity of spent fuel assemblies. The differential die-away self-interrogation (DDSI) instrument is one instrument that was assessed for years regarding its feasibility for robust, timely verification of spent fuel assemblies. The instrument was recently built and was tested using fresh fuel assemblies in a variety of configurations, including varying enrichment, neutron absorber content, and symmetry. The early die-away method, a multiplication determination method developed in simulation space, was successfully tested on the fresh fuel assembly data and determined multiplication with a root-mean-square (RMS) error of 2.9%. The experimental results were compared with MCNP simulations of the instrument as well. Low multiplication assemblies had agreement with an average RMS error of 0.2% in the singles count rate (i.e., total neutrons detected per second) and 3.4% in the doubles count rates (i.e., neutrons detected in coincidence per second). High-multiplication assemblies had agreement with an average RMS error of 4.1% in the singles and 13.3% in the doubles count rates.
Detection of pointing errors with CMOS-based camera in intersatellite optical communications
NASA Astrophysics Data System (ADS)
Yu, Si-yuan; Ma, Jing; Tan, Li-ying
2005-01-01
For very high data rates, intersatellite optical communications hold a potential performance edge over microwave communications. The acquisition and tracking problem is critical because of the narrow transmit beam. A single array detector in some systems performs both spatial acquisition and tracking functions to detect pointing errors, so both a wide field of view and a high update rate are required. Past systems tended to employ CCD-based cameras with complex readout arrangements, but the additional complexity reduces the applicability of the array-based tracking concept. With the development of CMOS arrays, CMOS-based cameras can employ the single-array-detector concept. The area-of-interest feature of a CMOS-based camera allows a PAT system to read out only a portion of the array. The maximum allowed frame rate increases as the size of the area of interest decreases under certain conditions. A commercially available CMOS camera with 105 fps @ 640×480 is employed in our PAT simulation system, in which only a subset of the pixels is actually used. Beam angles varying within the field of view can be detected after passing through a Cassegrain telescope and an optical focusing system. Spot pixel values (8 bits per pixel) read out from the CMOS sensor are transmitted to a DSP subsystem via an IEEE 1394 bus, and pointing errors can be computed with the centroid equation. It was shown in tests that: (1) 500 fps @ 100×100 is available in acquisition when the field of view is 1 mrad; (2) 3k fps @ 10×10 is available in tracking when the field of view is 0.1 mrad.
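The centroid computation mentioned above can be sketched directly: take the intensity-weighted mean pixel position within the area of interest and scale the offset from the boresight pixel by an assumed plate scale. The spot shape, noise and plate scale below are illustrative, not the instrument's parameters.

```python
import numpy as np

# Minimal sketch: intensity-weighted centroid of a spot in an area of interest,
# converted to an angular pointing error with an assumed plate scale.

def centroid(roi):
    """Intensity-weighted centroid (x, y) of a 2-D region of interest."""
    total = roi.sum()
    ys, xs = np.indices(roi.shape)
    return (xs * roi).sum() / total, (ys * roi).sum() / total

# synthetic 10x10 area of interest with a Gaussian spot offset from centre
ys, xs = np.indices((10, 10))
spot = np.exp(-(((xs - 5.7) ** 2) + ((ys - 4.2) ** 2)) / (2 * 1.5 ** 2))
spot += np.random.default_rng(0).normal(0, 0.01, spot.shape)

cx, cy = centroid(spot)
plate_scale = 2.0                              # assumed microrad per pixel
print(f"centroid: ({cx:.2f}, {cy:.2f}) px")
print(f"pointing error: ({(cx - 4.5) * plate_scale:.2f}, {(cy - 4.5) * plate_scale:.2f}) microrad")
```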
Researchers at the National Cancer Institute (NCI) developed a genetic assay for detecting transcription errors in RNA synthesis. This new assay extends the familiar concept of an Ames test, which monitors DNA damage and synthesis errors, to the previously inaccessible issue of RNA synthesis fidelity. The FDA requires genetic, DNA-focused tests for all drug approvals to assess the in vivo mutagenic and carcinogenic potential of a drug. The new assay opens an approach to monitoring the impact of treatments on the accuracy of RNA synthesis. Errors in transcription have been hypothesized to be a component of aging and age-related diseases. The National Cancer Institute (NCI) seeks licensing partners for the genetic assay.
Residents' numeric inputting error in computerized physician order entry prescription.
Wu, Xue; Wu, Changxu; Zhang, Kan; Wei, Dong
2016-04-01
A computerized physician order entry (CPOE) system with embedded clinical decision support (CDS) can significantly reduce certain types of prescription error. However, prescription errors still occur. Various factors, such as the numeric inputting methods in human-computer interaction (HCI), produce different error rates and types, but have received relatively little attention. This study aimed to examine the effects of numeric inputting methods and urgency levels on numeric inputting errors in prescriptions, as well as to categorize the types of errors. Thirty residents participated in four prescribing tasks in which two factors were manipulated: numeric inputting methods (numeric row in the main keyboard vs. numeric keypad) and urgency levels (urgent situation vs. non-urgent situation). Multiple aspects of participants' prescribing behavior were measured in sober prescribing situations. The results revealed that in urgent situations, participants were prone to make mistakes when using the numeric row in the main keyboard. With control of performance in the sober prescribing situation, the effects of the input methods disappeared, and urgency was found to play a significant role in the generalized linear model. Most errors were either omission or substitution types, but the proportions of transposition and intrusion error types were significantly higher than those of previous research. Among numbers 3, 8, and 9, which were the less common digits used in prescriptions, the error rate was higher, posing a great risk to patient safety. Urgency played a more important role in CPOE numeric typing error-making than typing skills and typing habits. Inputting with the numeric keypad is recommended, as it had lower error rates in urgent situations. An alternative design could consider increasing the sensitivity of the keys with lower frequency of occurrence and decimals. To improve the usability of CPOE, numeric keyboard design and error detection could benefit from the spatial incidence of errors found in this study. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Acea Nebril, B; Gómez Freijoso, C
1997-03-01
To determine the accuracy of bibliographic citations in Revista Española de Enfermedades Digestivas (REED) and compare it with other Spanish and international journals. We reviewed all 1995 volumes of the REED and randomly selected 100 references from these volumes. Nine citations of non-journal articles were excluded and the remaining 91 citations were carefully scrutinized. Each citation was compared with the original article for author names, article title, journal name, volume number, year of publication and pages. Some type of error was detected in 61.6% of the references, and on 3 occasions (3.3%) the errors impeded locating the original article. Errors were found in authors (37.3%), article title (16.4%), pages (6.6%), journal title (4.4%), volume (2.2%) and year (1%). A single error was found in 42 citations, 2 errors were found in 13, and 3 errors were found in 1. REED's rate of error in references is comparable to the rates of other Spanish and international journals. Authors should exercise more care in preparing bibliographies and should invest more effort in the verification of quoted references.
[Qualitative evaluation of blood products records in a hospital].
Lartigue, B; Catillon, E
2012-02-01
This study aimed at evaluating the qualitative performance of blood product traceability from paper and electronic medical records in a hospital. Quality of date/time documentation was assessed by detection, for 20 minutes or more, of chronological errors and inter-source inconsistencies, in a random sample of 168 blood products transfused during 2009. A receipt date/time was confirmed in 52% of paper records; a data entry error was identified in 25% of paper records and 21% of electronic records. A transfusion date/time was recorded in 93% of paper records, with a data entry error in 26% of paper records and 25% of electronic records. The patient medical record held at least one date/time error in 18% and 17% of cases, for receipt and transfusion respectively. Environmental factors (clinical setting, urgency, blood product category) did not contribute to data error rates. Although blood product traceability achieves good quantitative results, the quality of the recorded documentation is poor. In our study, data entry errors are similar in electronic and paper records, but the overall failure rate is lower in electronic records because omissions are controlled. Copyright © 2011 Elsevier Masson SAS. All rights reserved.
Fetal QRS detection and heart rate estimation: a wavelet-based approach.
Almeida, Rute; Gonçalves, Hernâni; Bernardes, João; Rocha, Ana Paula
2014-08-01
Fetal heart rate monitoring is used for pregnancy surveillance in obstetric units all over the world, but in spite of recent advances in analysis methods, there are still inherent technical limitations that bound its contribution to the improvement of perinatal indicators. In this work, a previously published wavelet-transform-based QRS detector, validated over standard electrocardiogram (ECG) databases, is adapted to fetal QRS detection over the abdominal fetal ECG. Maternal ECG waves were first located using the original detector, and afterwards a version with parameters adapted to fetal physiology was applied to detect fetal QRS, excluding signal singularities associated with maternal heartbeats. Single-lead (SL) based marks were combined into a single annotator with post-processing rules (SLR), from which fetal RR and fetal heart rate (FHR) measures can be computed. Data from PhysioNet with reference fetal QRS locations were considered for validation, with SLR outperforming SL, including ICA-based detections. The error in estimated FHR using SLR was lower than 20 bpm for more than 80% of the processed files. The median error in 1-min-based FHR estimation was 0.13 bpm, with a correlation between reference and estimated FHR of 0.48, which increased to 0.73 when considering only records for which estimated FHR > 110 bpm. This allows us to conclude that the proposed methodology is able to provide a clinically useful estimation of the FHR.
Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection
Meng, Jiahui; Zhao, Danfeng; Tian, Hai; Zhang, Liang
2018-01-01
In order to improve the performance of the hard-decision decoding algorithm for non-binary low-density parity-check (LDPC) codes and to reduce the decoding complexity, a sum-of-the-magnitude hard-decision decoding algorithm based on loop update detection is proposed. This will also help ensure the reliability, stability and high transmission rate of 5G mobile communication. The algorithm is based on the hard decision decoding algorithm (HDA) and uses the soft information from the channel to calculate the reliability, while the sum of the variable nodes' (VN) magnitude is excluded for computing the reliability of the parity checks. At the same time, the reliability information of the variable node is considered and the loop update detection algorithm is introduced. The bits corresponding to the erroneous code word are flipped multiple times, searched in order of the most likely error probability, to finally find the correct code word. Simulation results show that the performance of one of the improved schemes is better than that of the weighted symbol flipping (WSF) algorithm under different hexadecimal numbers by about 2.2 dB and 2.35 dB at a bit error rate (BER) of 10⁻⁵ over an additive white Gaussian noise (AWGN) channel, respectively. Furthermore, the average number of decoding iterations is significantly reduced. PMID:29342963
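A simplified illustration of hard-decision decoding by flipping, using a binary Gallager-style bit-flipping decoder on a toy parity-check matrix (the paper itself treats non-binary LDPC with weighted symbol flipping and loop update detection, which is not reproduced here).

```python
import numpy as np

# Toy bit-flipping decoder: flip the bits touching the most unsatisfied parity
# checks until the syndrome clears or an iteration limit is reached. H is an
# illustrative parity-check matrix, not a code from the paper.

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]], dtype=int)

def bit_flip_decode(r, H, max_iter=20):
    r = r.copy()
    for _ in range(max_iter):
        syndrome = H @ r % 2
        if not syndrome.any():
            return r, True                      # all parity checks satisfied
        unsat = syndrome @ H                    # unsatisfied checks per bit
        r[unsat == unsat.max()] ^= 1            # flip the most suspect bit(s)
    return r, False

codeword = np.zeros(6, dtype=int)               # the all-zero word is a codeword
received = codeword.copy()
received[2] ^= 1                                # single bit error on the channel
decoded, ok = bit_flip_decode(received, H)
print("decoded:", decoded, "success:", ok)
```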
NASA Astrophysics Data System (ADS)
Gourdji, S.; Yadav, V.; Karion, A.; Mueller, K. L.; Kort, E. A.; Conley, S.; Ryerson, T. B.; Nehrkorn, T.
2017-12-01
The ability of atmospheric inverse models to detect, spatially locate and quantify emissions from large point sources in urban domains needs improvement before inversions can be used reliably as carbon monitoring tools. In this study, we use the Aliso Canyon natural gas leak from October 2015 to February 2016 (near Los Angeles, CA) as a natural tracer experiment to assess inversion quality by comparison with published estimates of leak rates calculated using a mass balance approach (Conley et al., 2016). Fourteen dedicated flights were flown in horizontal transects downwind and throughout the duration of the leak to sample CH4 mole fractions and collect meteorological information for use in the mass-balance estimates. The same CH4 observational data were then used here in geostatistical inverse models with no prior assumptions about the leak location or emission rate, and with flux sensitivity matrices generated using the WRF-STILT atmospheric transport model. Transport model errors were assessed by comparing WRF-STILT wind speeds, wind direction and planetary boundary layer (PBL) height to those observed on the plane; the impact of these errors on the inversions, and the optimal inversion setup for reducing their influence, were also explored. WRF-STILT provides a reasonable simulation of true atmospheric conditions on most flight dates, given the complex terrain and known difficulties in simulating atmospheric transport under such conditions. Moreover, even large (>120°) errors in wind direction were found to be tolerable in terms of spatially locating the leak within a 5-km radius of the actual site. Errors in the WRF-STILT wind speed (>50%) and PBL height have more negative impacts on the inversions, with too-high wind speeds (typically corresponding to too-low PBL heights) resulting in overestimated leak rates, and vice versa. Coarser data averaging intervals and the use of observed wind speed errors in the model-data mismatch covariance matrix are shown to help reduce the influence of transport model errors, by averaging out compensating errors and de-weighting the influence of problematic observations. This study helps to enable the integration of aircraft measurements with other tower-based data in larger inverse models that can reliably detect, locate and quantify point source emissions in urban areas.
UAS Well Clear Recovery Against Non-Cooperative Intruders Using Vertical Maneuvers
NASA Technical Reports Server (NTRS)
Cone, Andrew C.; Thipphavong, David; Lee, Seung Man; Santiago, Confesor
2017-01-01
This paper documents a study that drove the development of a mathematical expression in the detect-and-avoid (DAA) minimum operational performance standards (MOPS) for unmanned aircraft systems (UAS). This equation describes the conditions under which vertical maneuver guidance should be provided during recovery of DAA well clear separation with a non-cooperative VFR aircraft. Although the original hypothesis was that vertical maneuvers for DAA well clear recovery should only be offered when sensor vertical rate errors are small, this paper suggests that UAS climb and descent performance should be considered, in addition to sensor errors for vertical position and vertical rate, when determining whether to offer vertical guidance. A fast-time simulation study involving 108,000 encounters between a UAS and a non-cooperative visual-flight-rules aircraft was conducted. Results are presented showing that, when vertical maneuver guidance for DAA well clear recovery was suppressed, the minimum vertical separation increased by roughly 50 feet (or horizontal separation by 500 to 800 feet). However, the percentage of encounters that had a risk of collision when performing vertical well clear recovery maneuvers was reduced as UAS vertical rate performance increased and sensor vertical rate errors decreased. A class of encounters is identified for which vertical-rate error had a large effect on the efficacy of horizontal maneuvers, owing to the difficulty of making the correct left/right turn decision: crossing conflicts with an intruder changing altitude. Overall, these results support logic that would allow vertical maneuvers when UAS vertical performance is sufficient to avoid the intruder, based on the intruder's estimated vertical position and vertical rate, as well as the vertical rate error of the UAS's sensor.
Nozari, Nazbanou; Dell, Gary S.; Schwartz, Myrna F.
2011-01-01
Despite the existence of speech errors, verbal communication is successful because speakers can detect (and correct) their errors. The standard theory of speech-error detection, the perceptual-loop account, posits that the comprehension system monitors production output for errors. Such a comprehension-based monitor, however, cannot explain the double dissociation between comprehension and error-detection ability observed in the aphasic patients. We propose a new theory of speech-error detection which is instead based on the production process itself. The theory borrows from studies of forced-choice-response tasks the notion that error detection is accomplished by monitoring response conflict via a frontal brain structure, such as the anterior cingulate cortex. We adapt this idea to the two-step model of word production, and test the model-derived predictions on a sample of aphasic patients. Our results show a strong correlation between patients’ error-detection ability and the model’s characterization of their production skills, and no significant correlation between error detection and comprehension measures, thus supporting a production-based monitor, generally, and the implemented conflict-based monitor in particular. The successful application of the conflict-based theory to error-detection in linguistic, as well as non-linguistic domains points to a domain-general monitoring system. PMID:21652015
NASA Astrophysics Data System (ADS)
Han, Xifeng; Zhou, Wen
2018-03-01
Optical vector radio-frequency (RF) signal generation based on optical carrier suppression (OCS) in one Mach-Zehnder modulator (MZM) can realize frequency doubling. In order to match the phase or amplitude of the recovered quadrature amplitude modulation (QAM) signal, phase or amplitude pre-coding is necessary at the transmitter side. The detected QAM signals usually have a non-uniform phase distribution after square-law detection at the photodiode because of the imperfect characteristics of the optical and electrical devices. We propose to use an optimal error-decision threshold for this non-uniform phase distribution to reduce the bit error rate (BER). By employing this scheme, the BER of a 16 Gbaud (32 Gbit/s) quadrature-phase-shift-keying (QPSK) millimeter-wave signal at 36 GHz is improved from 1 × 10^-3 to 1 × 10^-4 at -4.6 dBm input power into the photodiode.
An improved pi/4-QPSK with nonredundant error correction for satellite mobile broadcasting
NASA Technical Reports Server (NTRS)
Feher, Kamilo; Yang, Jiashi
1991-01-01
An improved pi/4-quadrature phase-shift keying (QPSK) receiver that incorporates a simple nonredundant error correction (NEC) structure is proposed for satellite and land-mobile digital broadcasting. The bit-error-rate (BER) performance of the pi/4-QPSK with NEC is analyzed and evaluated in a fast Rician fading and additive white Gaussian noise (AWGN) environment using computer simulation. It is demonstrated that with simple electronics the performance of a noncoherently detected pi/4-QPSK signal in both AWGN and fast Rician fading can be improved. When the K-factor (a ratio of average power of multipath signal to direct path power) of the Rician channel decreases, the improvement increases. An improvement of 1.2 dB could be obtained at a BER of 0.0001 in the AWGN channel. This performance gain is achieved without requiring any signal redundancy and additional bandwidth. Three types of noncoherent detection schemes of pi/4-QPSK with NEC structure, such as IF band differential detection, baseband differential detection, and FM discriminator, are discussed. It is concluded that the pi/4-QPSK with NEC is an attractive scheme for power-limited satellite land-mobile broadcasting systems.
ERIC Educational Resources Information Center
Hallin, Anna Eva; Reuterskiöld, Christina
2017-01-01
Purpose: The first aim of this study was to investigate if Swedish-speaking school-age children with language impairment (LI) show specific morphosyntactic vulnerabilities in error detection. The second aim was to investigate the effects of lexical frequency on error detection, an overlooked aspect of previous error detection studies. Method:…
Sada, Oumer; Melkie, Addisu; Shibeshi, Workineh
2015-09-16
Medication errors (MEs) are important problems in all hospitalized populations, especially in the intensive care unit (ICU). Little is known about the prevalence of medication prescribing errors in the ICUs of hospitals in Ethiopia. The aim of this study was to assess medication prescribing errors in the ICU of Tikur Anbessa Specialized Hospital using a retrospective cross-sectional analysis of patient cards and medication charts. A total of 220 patient charts were reviewed, covering 1311 patient-days and 882 prescription episodes. In all, 359 MEs were detected, a prevalence of 40 per 100 orders. Common prescribing errors were omission errors (154; 42.89%), wrong combination (101; 28.13%), wrong abbreviation (48; 13.37%), wrong dose (30; 8.36%), wrong frequency (18; 5.01%) and wrong indication (8; 2.23%). The present study shows that medication errors are common in the medical ICU of Tikur Anbessa Specialized Hospital. These results suggest future targets for prevention strategies to reduce the rate of medication error.
Identification of Matra Region and Overlapping Characters for OCR of Printed Bengali Scripts
NASA Astrophysics Data System (ADS)
Goswami, Subhra Sundar
One of the important reasons for the poor recognition rate of optical character recognition (OCR) systems is error in character segmentation. In the case of Bangla scripts, these errors occur for several reasons, including incorrect detection of the matra (headline), over-segmentation and under-segmentation. We propose a robust method for detecting the headline region. The existence of overlapping characters (in under-segmented parts) in scanned printed documents is a major problem in designing an effective character segmentation procedure for OCR systems. In this paper, a predictive algorithm is developed for effectively identifying overlapping characters and then selecting the cut-borders for segmentation. Our method can be successfully used to achieve a high recognition rate.
Analysis of the impact of error detection on computer performance
NASA Technical Reports Server (NTRS)
Shin, K. C.; Lee, Y. H.
1983-01-01
Conventionally, reliability analyses either assume that a fault/error is detected immediately following its occurrence, or neglect damages caused by latent errors. Though unrealistic, this assumption was imposed in order to avoid the difficulty of determining the respective probabilities that a fault induces an error and the error is then detected in a random amount of time after its occurrence. As a remedy for this problem a model is proposed to analyze the impact of error detection on computer performance under moderate assumptions. Error latency, the time interval between occurrence and the moment of detection, is used to measure the effectiveness of a detection mechanism. This model is used to: (1) predict the probability of producing an unreliable result, and (2) estimate the loss of computation due to fault and/or error.
Automatic-repeat-request error control schemes
NASA Technical Reports Server (NTRS)
Lin, S.; Costello, D. J., Jr.; Miller, M. J.
1983-01-01
Error detection incorporated with automatic-repeat-request (ARQ) is widely used for error control in data communication systems. This method of error control is simple and provides high system reliability. If a properly chosen code is used for error detection, virtually error-free data transmission can be attained. Various types of ARQ and hybrid ARQ schemes, and error detection using linear block codes are surveyed.
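To make the ARQ idea concrete, here is a minimal stop-and-wait sketch in which a CRC-32 checksum plays the role of the error-detecting block code and the noisy channel is simulated; the frame format, retry limit and choice of CRC-32 are illustrative assumptions, not details from the survey.

```python
# Minimal stop-and-wait ARQ sketch (assumption: CRC-32 as the error-detecting
# code and a simulated bit-flipping channel; real systems choose the code for
# the target undetected-error probability).
import random
import zlib

def send_over_noisy_channel(frame: bytes, bit_error_rate: float) -> bytes:
    out = bytearray(frame)
    for i in range(len(out) * 8):
        if random.random() < bit_error_rate:
            out[i // 8] ^= 1 << (i % 8)           # flip one bit
    return bytes(out)

def arq_transmit(payload: bytes, bit_error_rate=1e-3, max_retries=20) -> int:
    """Return the number of transmissions needed to deliver the payload."""
    frame = payload + zlib.crc32(payload).to_bytes(4, "big")     # append CRC
    for attempt in range(1, max_retries + 1):
        received = send_over_noisy_channel(frame, bit_error_rate)
        data, crc = received[:-4], received[-4:]
        if zlib.crc32(data).to_bytes(4, "big") == crc:
            return attempt            # receiver ACKs, transmission complete
        # CRC mismatch: receiver NAKs (or sender times out) -> retransmit
    raise RuntimeError("delivery failed after max_retries")

print(arq_transmit(b"error control via ARQ" * 10))
```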
Research on the output bit error rate of 2DPSK signal based on stochastic resonance theory
NASA Astrophysics Data System (ADS)
Yan, Daqin; Wang, Fuzhong; Wang, Shuo
2017-12-01
Binary differential phase-shift keying (2DPSK) signals are mainly used for high-speed data transmission. However, the bit error rate of a digital signal receiver is high in harsh channel environments. In view of this situation, a novel method based on stochastic resonance (SR) is proposed, aimed at reducing the bit error rate of 2DPSK signals received with coherent demodulation. According to SR theory, a nonlinear receiver model is established to receive 2DPSK signals under small signal-to-noise ratio (SNR) conditions (between -15 dB and 5 dB), and it is compared with the conventional demodulation method. The experimental results demonstrate that when the input SNR is in the range of -15 dB to 5 dB, the output bit error rate of the SR-based nonlinear system model declines significantly compared to the conventional model, by 86.15% when the input SNR equals -7 dB. Meanwhile, the peak value of the output signal spectrum is 4.25 times that of the conventional model. Consequently, the output signal of the system is more likely to be detected and the accuracy can be greatly improved.
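The overdamped bistable system commonly used in stochastic resonance studies can be sketched as below; the drive amplitude, noise level and simple averaging detector are illustrative assumptions and do not reproduce the paper's receiver model.

```python
# Toy Euler-Maruyama integration of the bistable SR system
#   dx/dt = a*x - b*x**3 + drive(t) + noise
import numpy as np

def bistable_sr(drive, noise_std, a=1.0, b=1.0, dt=0.01):
    x = np.zeros(len(drive))
    noise = noise_std * np.sqrt(dt) * np.random.randn(len(drive))
    for k in range(1, len(drive)):
        x[k] = x[k - 1] + dt * (a * x[k - 1] - b * x[k - 1] ** 3 + drive[k - 1]) + noise[k]
    return x

np.random.seed(0)
symbols = np.random.choice([-1.0, 1.0], size=40)       # antipodal symbol stream
s = np.repeat(0.3 * symbols, 500)                      # weak drive (low SNR)
x = bistable_sr(s, noise_std=0.8)
decisions = np.sign(x.reshape(40, 500).mean(axis=1))   # average inside each symbol
print("symbol error rate:", np.mean(decisions != symbols))
```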
LEA Detection and Tracking Method for Color-Independent Visual-MIMO
Kim, Jai-Eun; Kim, Ji-Won; Kim, Ki-Doo
2016-01-01
Communication performance in the color-independent visual-multiple input multiple output (visual-MIMO) technique is deteriorated by light emitting array (LEA) detection and tracking errors in the received image because the image sensor included in the camera must be used as the receiver in the visual-MIMO system. In this paper, in order to improve detection reliability, we first set up the color-space-based region of interest (ROI) in which an LEA is likely to be placed, and then use the Harris corner detection method. Next, we use Kalman filtering for robust tracking by predicting the most probable location of the LEA when the relative position between the camera and the LEA varies. In the last step of our proposed method, the perspective projection is used to correct the distorted image, which can improve the symbol decision accuracy. Finally, through numerical simulation, we show the possibility of robust detection and tracking of the LEA, which results in a symbol error rate (SER) performance improvement. PMID:27384563
LEA Detection and Tracking Method for Color-Independent Visual-MIMO.
Kim, Jai-Eun; Kim, Ji-Won; Kim, Ki-Doo
2016-07-02
Communication performance in the color-independent visual-multiple input multiple output (visual-MIMO) technique is deteriorated by light emitting array (LEA) detection and tracking errors in the received image because the image sensor included in the camera must be used as the receiver in the visual-MIMO system. In this paper, in order to improve detection reliability, we first set up the color-space-based region of interest (ROI) in which an LEA is likely to be placed, and then use the Harris corner detection method. Next, we use Kalman filtering for robust tracking by predicting the most probable location of the LEA when the relative position between the camera and the LEA varies. In the last step of our proposed method, the perspective projection is used to correct the distorted image, which can improve the symbol decision accuracy. Finally, through numerical simulation, we show the possibility of robust detection and tracking of the LEA, which results in a symbol error rate (SER) performance improvement.
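A minimal constant-velocity Kalman filter of the kind used for such tracking is sketched below; the Harris-corner detection and perspective-correction stages are not reproduced, and the measurements and noise matrices are simulated placeholders.

```python
# Constant-velocity Kalman filter tracking an LEA centroid in image
# coordinates (u, v); detections here are simulated with drift plus noise.
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],          # state: [u, v, du, dv]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)                   # process noise (assumed)
R = 4.0 * np.eye(2)                    # detection noise (assumed)

x = np.zeros(4)                        # initial state
P = 100.0 * np.eye(4)

def kalman_step(x, P, z):
    # predict the most probable LEA location, then correct with the detection
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

for t in range(20):                                   # simulated drifting detections
    z = np.array([10 + 2 * t, 5 + 1.5 * t]) + np.random.randn(2)
    x, P = kalman_step(x, P, z)
print("estimated position/velocity:", x.round(2))
```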
What errors do peer reviewers detect, and does training improve their ability to detect them?
Schroter, Sara; Black, Nick; Evans, Stephen; Godlee, Fiona; Osorio, Lyda; Smith, Richard
2008-10-01
To analyse data from a trial and report the frequencies with which major and minor errors are detected at a general medical journal, the types of errors missed and the impact of training on error detection. 607 peer reviewers at the BMJ were randomized to two intervention groups receiving different types of training (face-to-face training or a self-taught package) and a control group. Each reviewer was sent the same three test papers over the study period, each of which had nine major and five minor methodological errors inserted. BMJ peer reviewers. The quality of review, assessed using a validated instrument, and the number and type of errors detected before and after training. The number of major errors detected varied over the three papers. The interventions had small effects. At baseline (Paper 1) reviewers found an average of 2.58 of the nine major errors, with no notable difference between the groups. The mean number of errors reported was similar for the second and third papers, 2.71 and 3.0, respectively. Biased randomization was the error detected most frequently in all three papers, with over 60% of reviewers rejecting the papers identifying this error. Reviewers who did not reject the papers found fewer errors and the proportion finding biased randomization was less than 40% for each paper. Editors should not assume that reviewers will detect most major errors, particularly those concerned with the context of study. Short training packages have only a slight impact on improving error detection.
Adaptive Impact-Driven Detection of Silent Data Corruption for HPC Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Di, Sheng; Cappello, Franck
For exascale HPC applications, silent data corruption (SDC) is one of the most dangerous problems because there is no indication that there are errors during the execution. We propose an adaptive impact-driven method that can detect SDCs dynamically. The key contributions are threefold. (1) We carefully characterize 18 real-world HPC applications and discuss the runtime data features, as well as the impact of the SDCs on their execution results. (2) We propose an impact-driven detection model that does not blindly improve the prediction accuracy, but instead detects only influential SDCs to guarantee user-acceptable execution results. (3) Our solution can adapt to dynamic prediction errors based on local runtime data and can automatically tune detection ranges for guaranteeing low false alarms. Experiments show that our detector can detect 80-99.99% of SDCs with a false alarm rate less than 1% of iterations for most cases. The memory cost and detection overhead are reduced to 15% and 6.3%, respectively, for a large majority of applications.
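The general idea of value-prediction-based SDC detection with an adaptive range can be sketched as follows; the linear-extrapolation predictor, warm-up period and range-adaptation rule are illustrative assumptions, not the authors' exact detector.

```python
def detect_sdc(series, k=4.0, warmup=10):
    """Return (index, value) pairs flagged as suspected silent data corruption."""
    history_err = 1e-9            # running estimate of the normal prediction error
    flagged = []
    for i in range(2, len(series)):
        predicted = 2 * series[i - 1] - series[i - 2]     # linear extrapolation
        err = abs(series[i] - predicted)
        if i > warmup and err > k * max(history_err, 1e-9):
            flagged.append((i, series[i]))                # suspected SDC; do not adapt to it
        else:
            history_err = 0.9 * history_err + 0.1 * err   # adapt the detection range
    return flagged

# smooth simulated field values with one injected corruption at index 60
data = [0.01 * t * t for t in range(100)]
data[60] += 50.0
print(detect_sdc(data))   # flags index 60 (and its immediate neighbours, whose
                          # predictions are themselves polluted by the bad value)
```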
Amelogenin test: From forensics to quality control in clinical and biochemical genomics.
Francès, F; Portolés, O; González, J I; Coltell, O; Verdú, F; Castelló, A; Corella, D
2007-01-01
The increasing number of samples in biomedical genetic studies, and the number of centers participating in them, raise the risk of mistakes at the different sample-handling stages. We evaluated the usefulness of the amelogenin test for quality control in sample identification. The amelogenin test (frequently used in forensics) was performed on 1224 individuals participating in a biomedical study, and concordance between the sex recorded in the database and the amelogenin test result was estimated. Additional genetic systems for detecting sex errors were also developed. The overall concordance rate was 99.84% (1222/1224). Two samples showed a female amelogenin test outcome despite being coded as male in the database. In the first, a check of sex-specific biochemical and clinical profile data showed that the discrepancy was due to a codification error in the database. In the second, no apparent error was discovered after checking the database, because a correct male profile was found. False negatives in amelogenin male sex determination were ruled out by additional tests, and female sex was confirmed; a sample labeling error was revealed after a new DNA extraction. The amelogenin test is a useful quality control tool for detecting sex-identification errors in large genomic studies and can contribute to increasing their validity.
Lepoittevin, Camille; Frigerio, Jean-Marc; Garnier-Géré, Pauline; Salin, Franck; Cervera, María-Teresa; Vornam, Barbara; Harvengt, Luc; Plomion, Christophe
2010-01-01
Background There is considerable interest in the high-throughput discovery and genotyping of single nucleotide polymorphisms (SNPs) to accelerate genetic mapping and enable association studies. This study provides an assessment of EST-derived and resequencing-derived SNP quality in maritime pine (Pinus pinaster Ait.), a conifer characterized by a huge genome size (∼23.8 Gb/C). Methodology/Principal Findings A 384-SNP GoldenGate genotyping array was built from (i) 184 SNPs originally detected in a set of 40 re-sequenced candidate genes (in vitro SNPs), chosen on the basis of functionality scores, presence of neighboring polymorphisms, minor allele frequencies and linkage disequilibrium, and (ii) 200 SNPs screened from ESTs (in silico SNPs) selected based on the number of ESTs used for SNP detection, the SNP minor allele frequency and the quality of SNP flanking sequences. The global success rate of the assay was 66.9%, and a conversion rate (considering only polymorphic SNPs) of 51% was achieved. In vitro SNPs showed significantly higher genotyping-success and conversion rates than in silico SNPs (+11.5% and +18.5%, respectively). The reproducibility was 100%, and the genotyping error rate very low (0.54%, dropping down to 0.06% when removing four SNPs showing elevated error rates). Conclusions/Significance This study demonstrates that ESTs provide a resource for SNP identification in non-model species that requires no additional bench work and little bioinformatics analysis. However, the time and cost benefits of in silico SNPs are counterbalanced by a lower conversion rate than in vitro SNPs. This drawback is acceptable for population-based experiments, but could be dramatic in experiments involving samples from narrow genetic backgrounds. In addition, we showed that both the visual inspection of genotyping clusters and the estimation of a per-SNP error rate should help identify markers that are not suitable for the GoldenGate technology in species characterized by a large and complex genome. PMID:20543950
ERIC Educational Resources Information Center
Guler, Nese; Penfield, Randall D.
2009-01-01
In this study, we investigate the logistic regression (LR), Mantel-Haenszel (MH), and Breslow-Day (BD) procedures for the simultaneous detection of both uniform and nonuniform differential item functioning (DIF). A simulation study was used to assess and compare the Type I error rate and power of a combined decision rule (CDR), which assesses DIF…
A Canopy Density Model for Planar Orchard Target Detection Based on Ultrasonic Sensors
Li, Hanzhe; Zhai, Changyuan; Weckler, Paul; Wang, Ning; Yang, Shuo; Zhang, Bo
2016-01-01
Orchard target-oriented variable rate spraying is an effective method to reduce pesticide drift and excessive residues. To accomplish this task, characteristic information about the orchard targets is needed to control the liquid flow rate and airflow rate. One of the most important characteristics is the canopy density. In order to establish the canopy density model for a planar orchard target, which is indispensable for canopy density calculation, a target density detection testing system was developed based on an ultrasonic sensor. A time-domain energy analysis method was employed to analyze the ultrasonic signal. Orthogonal regression central composite experiments were designed and conducted using man-made canopies of known density with three or four layers of leaves. Two model equations were obtained, of which the model for the canopies with four layers was found to be the more reliable. A verification test was conducted with different numbers of layers at the same density values and detecting distances. The test results showed that the relative errors between model density values and actual values for five, four, three and two layers of leaves were acceptable, with maximum relative errors of 17.68%, 25.64%, 21.33% and 29.92%, respectively. The results also suggested that the four-layer model equation was applicable across different numbers of layers, with better applicability for layer counts close to four. PMID:28029132
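A sketch of the time-domain energy feature and a polynomial density model of the kind fitted from such central composite experiments is given below; the echo window, regression terms and calibration data are invented for illustration and are not the paper's model.

```python
import numpy as np

def echo_energy(samples, start, stop):
    """Time-domain energy of the ultrasonic echo inside a range gate."""
    window = np.asarray(samples[start:stop], dtype=float)
    return float(np.sum(window ** 2))

def fit_density_model(energies, distances, densities):
    """Least-squares fit: density ~ 1, E, d, E*d, E^2, d^2 (central-composite style)."""
    E, d = np.asarray(energies), np.asarray(distances)
    X = np.column_stack([np.ones_like(E), E, d, E * d, E ** 2, d ** 2])
    coeffs, *_ = np.linalg.lstsq(X, np.asarray(densities), rcond=None)
    return coeffs

def predict_density(coeffs, E, d):
    x = np.array([1.0, E, d, E * d, E ** 2, d ** 2])
    return float(x @ coeffs)

# toy calibration data: (echo energy, detecting distance) -> known canopy density
energies  = [0.8, 1.1, 1.6, 2.0, 2.4, 2.9, 3.3, 3.8]
distances = [0.5, 0.5, 1.0, 1.0, 1.5, 1.5, 2.0, 2.0]
densities = [0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
coeffs = fit_density_model(energies, distances, densities)
print(predict_density(coeffs, 1.8, 1.0))
```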
Decision Fusion with Channel Errors in Distributed Decode-Then-Fuse Sensor Networks
Yan, Yongsheng; Wang, Haiyan; Shen, Xiaohong; Zhong, Xionghu
2015-01-01
Decision fusion for distributed detection in sensor networks under non-ideal channels is investigated in this paper. Usually, the local decisions are transmitted to the fusion center (FC) and decoded, and a fusion rule is then applied to achieve a global decision. We propose an optimal likelihood ratio test (LRT)-based fusion rule to take the uncertainty of the decoded binary data due to modulation, reception mode and communication channel into account. The average bit error rate (BER) is employed to characterize such an uncertainty. Further, the detection performance is analyzed under both non-identical and identical local detection performance indices. In addition, the performance of the proposed method is compared with the existing optimal and suboptimal LRT fusion rules. The results show that the proposed fusion rule is more robust compared to these existing ones. PMID:26251908
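A likelihood-ratio fusion rule that folds the channel bit error rate into each decoded local decision can be sketched as below (a Chair-Varshney-style formulation); the detection, false-alarm and BER values are illustrative and the threshold choice is a placeholder.

```python
import math

def fused_llr(u, pd, pf, ber):
    """Log-likelihood ratio of the received decisions u (0/1) at the fusion center."""
    llr = 0.0
    for u_i, pd_i, pf_i, p_i in zip(u, pd, pf, ber):
        # probability of receiving u_i under each hypothesis, after the binary channel
        p1 = (1 - p_i) * pd_i + p_i * (1 - pd_i)   # P(u_i = 1 | H1)
        p0 = (1 - p_i) * pf_i + p_i * (1 - pf_i)   # P(u_i = 1 | H0)
        if u_i == 1:
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1 - p1) / (1 - p0))
    return llr

received = [1, 1, 0, 1]                 # decoded local decisions at the FC
pd  = [0.9, 0.8, 0.85, 0.7]             # local detection probabilities (assumed)
pf  = [0.1, 0.05, 0.1, 0.2]             # local false-alarm probabilities (assumed)
ber = [0.05, 0.2, 0.01, 0.1]            # average channel bit error rates (assumed)
threshold = 0.0                         # e.g. equal priors and symmetric costs
print("global decision:", int(fused_llr(received, pd, pf, ber) > threshold))
```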
Automated Identification of Abnormal Adult EEGs
López, S.; Suarez, G.; Jungreis, D.; Obeid, I.; Picone, J.
2016-01-01
The interpretation of electroencephalograms (EEGs) is a process that is still dependent on the subjective analysis of the examiners. Though interrater agreement on critical events such as seizures is high, it is much lower on subtler events (e.g., when there are benign variants). The process used by an expert to interpret an EEG is quite subjective and hard to replicate by machine. The performance of machine learning technology is far from human performance. We have been developing an interpretation system, AutoEEG, with a goal of exceeding human performance on this task. In this work, we are focusing on one of the early decisions made in this process – whether an EEG is normal or abnormal. We explore two baseline classification algorithms: k-Nearest Neighbor (kNN) and Random Forest Ensemble Learning (RF). A subset of the TUH EEG Corpus was used to evaluate performance. Principal Components Analysis (PCA) was used to reduce the dimensionality of the data. kNN achieved a 41.8% detection error rate while RF achieved an error rate of 31.7%. These error rates are significantly lower than those obtained by random guessing based on priors (49.5%). The majority of the errors were related to misclassification of normal EEGs. PMID:27195311
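The baseline pipeline described above (PCA for dimensionality reduction followed by kNN and Random Forest classifiers) can be sketched with scikit-learn as follows; synthetic features stand in for the EEG-derived features, and the TUH EEG Corpus itself is not used here.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 200))            # 1000 records, 200 synthetic features
y = (X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
pca = PCA(n_components=20).fit(X_tr)        # reduce dimensionality
Z_tr, Z_te = pca.transform(X_tr), pca.transform(X_te)

for name, clf in [("kNN", KNeighborsClassifier(n_neighbors=5)),
                  ("RF", RandomForestClassifier(n_estimators=200, random_state=0))]:
    clf.fit(Z_tr, y_tr)
    error_rate = 1.0 - clf.score(Z_te, y_te)
    print(f"{name} detection error rate: {error_rate:.3f}")
```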
Powerful Inference with the D-Statistic on Low-Coverage Whole-Genome Data.
Soraggi, Samuele; Wiuf, Carsten; Albrechtsen, Anders
2018-02-02
The detection of ancient gene flow between human populations is an important issue in population genetics. A common tool for detecting ancient admixture events is the D-statistic. The D-statistic is based on the hypothesis of a genetic relationship that involves four populations, whose correctness is assessed by evaluating specific coincidences of alleles between the groups. When working with high-throughput sequencing data, calling genotypes accurately is not always possible; therefore, the D-statistic currently samples a single base from the reads of one individual per population. This implies ignoring much of the information in the data, an issue especially striking in the case of ancient genomes. We provide a significant improvement to overcome the problems of the D-statistic by considering all reads from multiple individuals in each population. We also apply type-specific error correction to combat the problems of sequencing errors, and show a way to correct for introgression from an external population that is not part of the supposed genetic relationship, and how this leads to an estimate of the admixture rate. We prove that the D-statistic is approximated by a standard normal distribution. Furthermore, we show that our method outperforms the traditional D-statistic in detecting admixtures. The power gain is most pronounced for low and medium sequencing depth (1-10×), and performances are as good as with perfectly called genotypes at a sequencing depth of 2×. We show the reliability of error correction in scenarios with simulated errors and ancient data, and correct for introgression in known scenarios to estimate the admixture rates. Copyright © 2018 Soraggi et al.
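For orientation, the classical ABBA-BABA form of the D-statistic computed from derived-allele frequencies is sketched below; the paper's extensions (using all reads from multiple individuals, type-specific error correction and introgression correction) are not reproduced, and the frequencies are toy values.

```python
import numpy as np

def d_statistic(p1, p2, p3, p4):
    """p_i: per-site derived-allele frequencies for populations H1, H2, H3 and outgroup H4."""
    p1, p2, p3, p4 = map(np.asarray, (p1, p2, p3, p4))
    abba = (1 - p1) * p2 * p3 * (1 - p4)
    baba = p1 * (1 - p2) * p3 * (1 - p4)
    num, den = np.sum(abba - baba), np.sum(abba + baba)
    return num / den if den > 0 else 0.0

# toy frequencies over 5 sites (outgroup fixed ancestral, p4 = 0)
p1 = [0.0, 0.1, 0.0, 0.2, 0.0]
p2 = [0.5, 0.6, 0.4, 0.7, 0.3]   # H2 shares more derived alleles with H3 ...
p3 = [0.8, 0.9, 0.7, 0.8, 0.6]   # ... which pushes D above zero
p4 = [0.0] * 5
print("D =", round(d_statistic(p1, p2, p3, p4), 3))
```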
Mapping slope movements in Alpine environments using TerraSAR-X interferometric methods
NASA Astrophysics Data System (ADS)
Barboux, Chloé; Strozzi, Tazio; Delaloye, Reynald; Wegmüller, Urs; Collet, Claude
2015-11-01
Mapping slope movements in Alpine environments is an increasingly important task in the context of climate change and natural hazard management. We propose the detection, mapping and inventorying of slope movements using different interferometric methods based on TerraSAR-X satellite images. Differential SAR interferograms (DInSAR), Persistent Scatterer Interferometry (PSI), Short-Baseline Interferometry (SBAS) and a semi-automated texture image analysis are presented and compared in order to determine their contribution for the automatic detection and mapping of slope movements of various velocity rates encountered in Alpine environments. Investigations are conducted in a study region of about 6 km × 6 km located in the Western Swiss Alps using a unique large data set of 140 DInSAR scenes computed from 51 summer TerraSAR-X (TSX) acquisitions from 2008 to 2012. We found that PSI is able to precisely detect only points moving with velocities below 3.5 cm/yr in the LOS, with a root mean squared error of about 0.58 cm/yr compared to DGPS records. SBAS employed with 11 days summer interferograms increases the range of detectable movements to rates up to 35 cm/yr in the LOS with a root mean squared error of 6.36 cm/yr, but inaccurate measurements due to phase unwrapping are already possible for velocity rates larger than 20 cm/year. With the semi-automated texture image analysis the rough estimation of the velocity rates over an outlined moving zone is accurate for rates of "cm/day", "dm/month" and "cm/month", but due to the decorrelation of yearly TSX interferograms this method fails for the observation of slow movements in the "cm/yr" range.
Radiologic Errors in Patients With Lung Cancer
Forrest, John V.; Friedman, Paul J.
1981-01-01
Some 20 percent to 50 percent of detectable malignant lesions are missed or misdiagnosed at the time of their first radiologic appearance. These errors can result in delayed diagnosis and treatment, which may affect a patient's survival. Use of moderately high (130 to 150) kilovolt peak films, awareness of portions of the lung where lesions are often missed (such as lung apices and paramediastinal and hilar areas), careful comparison of current roentgenograms with those taken previously and the use of an independent second observer can help to minimize the rate of radiologic diagnostic errors in patients with lung cancer. PMID:7257363
Córcoles, A.D.; Magesan, Easwar; Srinivasan, Srikanth J.; Cross, Andrew W.; Steffen, M.; Gambetta, Jay M.; Chow, Jerry M.
2015-01-01
The ability to detect and deal with errors when manipulating quantum systems is a fundamental requirement for fault-tolerant quantum computing. Unlike classical bits that are subject to only digital bit-flip errors, quantum bits are susceptible to a much larger spectrum of errors, for which any complete quantum error-correcting code must account. Whilst classical bit-flip detection can be realized via a linear array of qubits, a general fault-tolerant quantum error-correcting code requires extending into a higher-dimensional lattice. Here we present a quantum error detection protocol on a two-by-two planar lattice of superconducting qubits. The protocol detects an arbitrary quantum error on an encoded two-qubit entangled state via quantum non-demolition parity measurements on another pair of error syndrome qubits. This result represents a building block towards larger lattices amenable to fault-tolerant quantum error correction architectures such as the surface code. PMID:25923200
Córcoles, A D; Magesan, Easwar; Srinivasan, Srikanth J; Cross, Andrew W; Steffen, M; Gambetta, Jay M; Chow, Jerry M
2015-04-29
The ability to detect and deal with errors when manipulating quantum systems is a fundamental requirement for fault-tolerant quantum computing. Unlike classical bits that are subject to only digital bit-flip errors, quantum bits are susceptible to a much larger spectrum of errors, for which any complete quantum error-correcting code must account. Whilst classical bit-flip detection can be realized via a linear array of qubits, a general fault-tolerant quantum error-correcting code requires extending into a higher-dimensional lattice. Here we present a quantum error detection protocol on a two-by-two planar lattice of superconducting qubits. The protocol detects an arbitrary quantum error on an encoded two-qubit entangled state via quantum non-demolition parity measurements on another pair of error syndrome qubits. This result represents a building block towards larger lattices amenable to fault-tolerant quantum error correction architectures such as the surface code.
The impact of using an intravenous workflow management system (IVWMS) on cost and patient safety.
Lin, Alex C; Deng, Yihong; Thaibah, Hilal; Hingl, John; Penm, Jonathan; Ivey, Marianne F; Thomas, Mark
2018-07-01
The aim of this study was to determine the financial costs associated with wasted and missing doses before and after the implementation of an intravenous workflow management system (IVWMS) and to quantify the number and rate of detected intravenous (IV) preparation errors. A retrospective analysis of the sample hospital information system database was conducted using three months of data before and after the implementation of an IVWMS (DoseEdge®), which uses barcode scanning and photographic technologies to track and verify each step of the preparation process. The financial impact associated with wasted and missing IV doses was determined by combining drug acquisition, labor, accessory, and disposal costs. The intercepted error reports and pharmacist-detected error reports were drawn from the IVWMS to quantify the number of errors by defined error categories. The total numbers of IV doses prepared before and after the implementation of the IVWMS were 110,963 and 101,765 doses, respectively. The adoption of the IVWMS significantly reduced the numbers of wasted and missing IV doses by 14,176 and 2268 doses, respectively (p < 0.001). The overall cost savings of using the system was $144,019 over 3 months. The total number of errors detected was 1160 (1.14%) after using the IVWMS. The implementation of the IVWMS facilitated workflow changes that led to a positive impact on cost and patient safety. The implementation of the IVWMS increased patient safety by enforcing standard operating procedures and bar code verifications. Published by Elsevier B.V.
Li, Qiuying; Pham, Hoang
2017-01-01
In this paper, we propose a software reliability model that considers not only error generation but also fault removal efficiency combined with testing coverage information, based on a nonhomogeneous Poisson process (NHPP). During the past four decades, many software reliability growth models (SRGMs) based on NHPP have been proposed to estimate software reliability measures, most of which share the following assumptions: 1) during the testing phase, the fault detection rate commonly changes over time; 2) as a result of imperfect debugging, fault removal is associated with a fault re-introduction rate. However, few SRGMs in the literature differentiate between fault detection and fault removal, i.e. they seldom consider imperfect fault removal efficiency. In practical software development, fault removal efficiency cannot always be perfect: detected failures might not be removed completely, the original faults might still exist, and new faults might be introduced in the process, which is referred to as the imperfect debugging phenomenon. In this study, a model aiming to incorporate the fault introduction rate, fault removal efficiency and testing coverage into software reliability evaluation is developed, using testing coverage to express the fault detection rate and using fault removal efficiency to account for fault repair. We compare the performance of the proposed model with several existing NHPP SRGMs using three sets of real failure data based on five criteria. The results show that the model gives better fitting and predictive performance. PMID:28750091
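A generic imperfect-debugging NHPP of the family this model belongs to can be sketched numerically as follows; the specific mean-value equation, coverage function and parameter values are illustrative assumptions rather than the paper's model.

```python
# Generic sketch: the mean number of detected faults m(t) follows
#   dm/dt = b(t) * [a(t) - p * m(t)],    a(t) = a0 + alpha * p * m(t),
# where p is the fault removal efficiency, alpha a fault introduction rate,
# and the detection rate b(t) = c'(t) / (1 - c(t)) is driven by a testing
# coverage curve c(t).  All values below are illustrative assumptions.
import numpy as np

def mean_value_function(t_end, a0=100.0, p=0.9, alpha=0.05, cov_rate=0.05, dt=0.01):
    """Euler integration of m(t); coverage modeled as c(t) = 1 - exp(-cov_rate*t)."""
    m, history = 0.0, []
    for k in range(int(t_end / dt)):
        t = k * dt
        c = 1.0 - np.exp(-cov_rate * t)
        b = (cov_rate * np.exp(-cov_rate * t)) / (1.0 - c)   # = cov_rate for this curve
        a_t = a0 + alpha * p * m                             # fault introduction
        m += dt * b * (a_t - p * m)
        history.append((t, m))
    return history

curve = mean_value_function(t_end=100.0)
print("expected faults detected by t=100:", round(curve[-1][1], 1))
```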
Integrated analysis of error detection and recovery
NASA Technical Reports Server (NTRS)
Shin, K. G.; Lee, Y. H.
1985-01-01
An integrated modeling and analysis of error detection and recovery is presented. When fault latency and/or error latency exist, the system may suffer from multiple faults or error propagations which seriously deteriorate the fault-tolerant capability. Several detection models that enable analysis of the effect of detection mechanisms on the subsequent error handling operations and the overall system reliability were developed. Following detection of the faulty unit and reconfiguration of the system, the contaminated processes or tasks have to be recovered. The strategies of error recovery employed depend on the detection mechanisms and the available redundancy. Several recovery methods including the rollback recovery are considered. The recovery overhead is evaluated as an index of the capabilities of the detection and reconfiguration mechanisms.
[Remote system of natural gas leakage based on multi-wavelength characteristics spectrum analysis].
Li, Jing; Lu, Xu-Tao; Yang, Ze-Hui
2014-05-01
In order to monitor natural gas pipeline leakage quickly and over a wide area, a remote methane concentration detection system was designed based on a static Fourier transform interferometer. The system uses infrared light, with the center wavelength calibrated to absorption peaks of methane molecules, to irradiate the tested area, and obtains interference fringes through a converging collimation system and an interference module. It then calculates the concentration-path-length product in the tested area using a multi-wavelength characteristic spectrum analysis algorithm and inverts the corresponding methane concentration. Based on the HITRAN spectral database, 1.65 μm was selected as the main characteristic absorption peak, and a 1.65 μm DFB laser was therefore used as the light source. To improve detection accuracy and stability without increasing the hardware configuration of the system, the absorbance ratio is solved using an auxiliary wavelength, and the concentration-path-length product of the measured gas is then obtained from the ratio of the multi-wavelength characteristics. The measurement error caused by external disturbance is reduced by this approach, which is similar to a differential measurement: errors are eliminated in the process of solving the multi-wavelength ratio, improving the accuracy and stability of the system. Because the infrared absorption spectrum of methane is fixed, the ratio of the absorbances at any two wavelengths is also constant; the error coefficients produced by the system are the same when it receives the same external interference, so the measurement noise of the system can be effectively reduced by the ratio method. Experiments tested a standard methane gas tank with a constant leak rate. Measurements from a PN1000 portable methane detector were used as reference data and compared with the system's measurements at distances of 100, 200 and 500 m. Experimental results show that the detected methane concentration was stable after a certain leakage time, and the concentration-path-length product reported by the system was stable. For a detection distance of 100 m, the error in the concentration-path-length product was less than 1.0%. The detection error increases with distance from the tested area; at 500 m it was less than 4.5%. In short, the detection error of the system is less than 5.0% once the gas leakage has stabilized, meeting the requirements of remote sensing of natural gas leakage in the field.
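The cancellation idea behind the multi-wavelength processing can be sketched with the Beer-Lambert law: a disturbance that scales the received intensity equally at both wavelengths drops out when the two absorbances are combined. The absorption coefficients below are placeholders, not HITRAN values, and the difference-based retrieval is a simplified stand-in for the paper's ratio calculation.

```python
import math

ALPHA_MAIN = 0.40   # absorption coefficient at the main peak (assumed units)
ALPHA_AUX  = 0.10   # absorption coefficient at the auxiliary wavelength (assumed)

def received_intensity(i0, alpha, cl, disturbance):
    # Beer-Lambert attenuation plus a common multiplicative disturbance
    return disturbance * i0 * math.exp(-alpha * cl)

def retrieve_cl(i_main, i_aux, i0_main, i0_aux):
    """CL from combining the two absorbances; the common disturbance cancels."""
    a_main = math.log(i0_main / i_main)      # = ALPHA_MAIN*CL - ln(disturbance)
    a_aux  = math.log(i0_aux / i_aux)        # = ALPHA_AUX *CL - ln(disturbance)
    return (a_main - a_aux) / (ALPHA_MAIN - ALPHA_AUX)

true_cl, g = 2.5, 0.6                        # g: common external disturbance
i_main = received_intensity(1.0, ALPHA_MAIN, true_cl, g)
i_aux  = received_intensity(1.0, ALPHA_AUX,  true_cl, g)
print("retrieved CL:", round(retrieve_cl(i_main, i_aux, 1.0, 1.0), 3))  # ~2.5
```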
An improved PCA method with application to boiler leak detection.
Sun, Xi; Marquez, Horacio J; Chen, Tongwen; Riaz, Muhammad
2005-07-01
Principal component analysis (PCA) is a popular fault detection technique. It has been widely used in process industries, especially in the chemical industry. In industrial applications, achieving a sensitive system capable of detecting incipient faults, which maintains the false alarm rate to a minimum, is a crucial issue. Although a lot of research has been focused on these issues for PCA-based fault detection and diagnosis methods, sensitivity of the fault detection scheme versus false alarm rate continues to be an important issue. In this paper, an improved PCA method is proposed to address this problem. In this method, a new data preprocessing scheme and a new fault detection scheme designed for Hotelling's T2 as well as the squared prediction error are developed. A dynamic PCA model is also developed for boiler leak detection. This new method is applied to boiler water/steam leak detection with real data from Syncrude Canada's utility plant in Fort McMurray, Canada. Our results demonstrate that the proposed method can effectively reduce false alarm rate, provide effective and correct leak alarms, and give early warning to operators.
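A minimal PCA monitoring sketch using Hotelling's T^2 and the squared prediction error (SPE) is shown below; the data are simulated, and the control limits are simple empirical percentiles rather than the improved preprocessing and detection scheme proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
X_train = rng.normal(size=(500, 10)) @ rng.normal(size=(10, 10))   # "normal" operating data

mu, sigma = X_train.mean(axis=0), X_train.std(axis=0)
Xs = (X_train - mu) / sigma
U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
k = 3                                            # retained principal components
P = Vt[:k].T                                     # loading matrix (10 x k)
lam = (S[:k] ** 2) / (len(Xs) - 1)               # variances of the retained scores

def t2_spe(x):
    xs = (x - mu) / sigma
    t = xs @ P                                   # scores in the PCA subspace
    t2 = float(np.sum(t ** 2 / lam))             # Hotelling's T^2
    resid = xs - t @ P.T                         # variation outside the model
    return t2, float(resid @ resid)              # (T^2, SPE)

train_stats = np.array([t2_spe(x) for x in X_train])
t2_limit, spe_limit = np.percentile(train_stats, 99, axis=0)   # empirical control limits

x_new = X_train[0].copy()
x_new[2] += 15 * sigma[2]                        # injected sensor fault
t2, spe = t2_spe(x_new)
print("fault detected:", t2 > t2_limit or spe > spe_limit)
```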
Vision-based method for detecting driver drowsiness and distraction in driver monitoring system
NASA Astrophysics Data System (ADS)
Jo, Jaeik; Lee, Sung Joo; Jung, Ho Gi; Park, Kang Ryoung; Kim, Jaihie
2011-12-01
Most driver-monitoring systems have attempted to detect either driver drowsiness or distraction, although both factors should be considered for accident prevention. Therefore, we propose a new driver-monitoring method considering both factors. We make the following contributions. First, if the driver is looking ahead, drowsiness detection is performed; otherwise, distraction detection is performed. Thus, the computational cost and eye-detection error can be reduced. Second, we propose a new eye-detection algorithm that combines adaptive boosting, adaptive template matching, and blob detection with eye validation, thereby reducing the eye-detection error and processing time significantly, which is hardly achievable using a single method. Third, to enhance eye-detection accuracy, eye validation is applied after initial eye detection, using a support vector machine based on appearance features obtained by principal component analysis (PCA) and linear discriminant analysis (LDA). Fourth, we propose a novel eye state-detection algorithm that combines appearance features obtained using PCA and LDA, with statistical features such as the sparseness and kurtosis of the histogram from the horizontal edge image of the eye. Experimental results showed that the detection accuracies of the eye region and eye states were 99 and 97%, respectively. Both driver drowsiness and distraction were detected with a success rate of 98%.
NASA Astrophysics Data System (ADS)
Tsai, Nan-Chyuan; Sue, Chung-Yang
2010-02-01
Owing to imposed but undesired accelerations such as quadrature error and cross-axis perturbation, the micro-machined gyroscope would not be unconditionally retained at resonant mode. Once the preset resonance is not sustained, the performance of the micro-gyroscope is accordingly degraded. In this article, a direct model reference adaptive control loop, integrated with a modified disturbance estimating observer (MDEO), is proposed to guarantee the resonant oscillations at drive mode and counterbalance the undesired disturbance mainly caused by quadrature error and cross-axis perturbation. The parameters of the controller are updated on-line from the dynamic error between the MDEO output and the expected response. In addition, Lyapunov stability theory is employed to examine the stability of the closed-loop control system. Finally, the ability of the system to detect and measure an exerted time-varying angular rate is verified by intensive numerical simulations.
Effect of Pointing Error on the BER Performance of an Optical CDMA FSO Link with SIK Receiver
NASA Astrophysics Data System (ADS)
Nazrul Islam, A. K. M.; Majumder, S. P.
2017-12-01
An analytical approach is presented for an optical code division multiple access (OCDMA) system over free space optical (FSO) channel considering the effect of pointing error between the transmitter and the receiver. Analysis is carried out with an optical sequence inverse keying (SIK) correlator receiver with intensity modulation and direct detection (IM/DD) to find the bit error rate (BER) with pointing error. The results are evaluated numerically in terms of signal-to-noise plus multi-access interference (MAI) ratio, BER and power penalty due to pointing error. It is noticed that the OCDMA FSO system is highly affected by pointing error with significant power penalty at a BER of 10^-6 and 10^-9. For example, penalty at BER 10^-9 is found to be 9 dB corresponding to normalized pointing error of 1.4 for 16 users with processing gain of 256 and is reduced to 6.9 dB when the processing gain is increased to 1,024.
Applications and error correction for adiabatic quantum optimization
NASA Astrophysics Data System (ADS)
Pudenz, Kristen
Adiabatic quantum optimization (AQO) is a fast-developing subfield of quantum information processing which holds great promise in the relatively near future. Here we develop an application, quantum anomaly detection, and an error correction code, Quantum Annealing Correction (QAC), for use with AQO. The motivation for the anomaly detection algorithm is the problematic nature of classical software verification and validation (V&V). The number of lines of code written for safety-critical applications such as cars and aircraft increases each year, and with it the cost of finding errors grows exponentially (the cost of overlooking errors, which can be measured in human safety, is arguably even higher). We approach the V&V problem by using a quantum machine learning algorithm to identify characteristics of software operations that are implemented outside of specifications, then define an AQO to return these anomalous operations as its result. Our error correction work is the first large-scale experimental demonstration of quantum error correcting codes. We develop QAC and apply it to USC's equipment, the first and second generation of commercially available D-Wave AQO processors. We first show comprehensive experimental results for the code's performance on antiferromagnetic chains, scaling the problem size up to 86 logical qubits (344 physical qubits) and recovering significant encoded success rates even when the unencoded success rates drop to almost nothing. A broader set of randomized benchmarking problems is then introduced, for which we observe similar behavior to the antiferromagnetic chain, specifically that the use of QAC is almost always advantageous for problems of sufficient size and difficulty. Along the way, we develop problem-specific optimizations for the code and gain insight into the various on-chip error mechanisms (most prominently thermal noise, since the hardware operates at finite temperature) and the ways QAC counteracts them. We finish by showing that the scheme is robust to qubit loss on-chip, a significant benefit when considering an implemented system.
NASA Astrophysics Data System (ADS)
Liu, Dachang; Wang, Zixiong; Liu, Jianguo; Tan, Jun; Yu, Lijuan; Mei, Haiping; Zhou, Yusong; Zhu, Ninghua
2017-10-01
The performance of a free-space optical communication system is highly affected by the atmospheric turbulence in terms of scintillation. An optical communication system based on intensity-modulation direct-detection was built with 1-km transmission distance to evaluate the bit error rate (BER) performance over real atmospheric turbulence. 2.5-, 5-, and 10-Gbps data rate transmissions were carried out, where error-free transmission could be achieved during over 37% of the 2.5-Gbps transmissions and over 43% of the 5-Gbps transmissions. In the rest of the transmissions, BER deteriorated as the refractive-index structure constant increased, while the two measured items have almost the same trend.
Sensitivity in error detection of patient specific QA tools for IMRT plans
NASA Astrophysics Data System (ADS)
Lat, S. Z.; Suriyapee, S.; Sanghangthum, T.
2016-03-01
The high complexity of dose calculation in treatment planning and the accurate delivery of IMRT plans demand a high-precision verification method. The purpose of this study is to investigate the error detection capability of patient-specific QA tools for IMRT plans. Two H&N and two prostate IMRT plans were studied with MapCHECK2 and portal dosimetry QA tools. Measurements were undertaken for the original plans and for modified plans with errors introduced. The intentional errors consisted of prescribed dose changes (±2 to ±6%) and position shifts in the X-axis and Y-axis (±1 to ±5 mm). After measurement, gamma pass rates between original and modified plans were compared. The average gamma pass rates for the original H&N and prostate plans were 98.3% and 100% for MapCHECK2 and 95.9% and 99.8% for portal dosimetry, respectively. In the H&N plan, MapCHECK2 could detect position shift errors starting from 3 mm while portal dosimetry could detect errors starting from 2 mm. Both devices showed similar sensitivity in detecting position shift errors in the prostate plan. For the H&N plan, MapCHECK2 could detect dose errors starting at ±4%, whereas portal dosimetry could detect them from ±2%. For the prostate plan, both devices could identify dose errors starting from ±4%. Sensitivity of error detection depends on the type of error and plan complexity.
A Mechanism for Error Detection in Speeded Response Time Tasks
ERIC Educational Resources Information Center
Holroyd, Clay B.; Yeung, Nick; Coles, Michael G. H.; Cohen, Jonathan D.
2005-01-01
The concept of error detection plays a central role in theories of executive control. In this article, the authors present a mechanism that can rapidly detect errors in speeded response time tasks. This error monitor assigns values to the output of cognitive processes involved in stimulus categorization and response generation and detects errors…
Procedural error monitoring and smart checklists
NASA Technical Reports Server (NTRS)
Palmer, Everett
1990-01-01
Human beings make and usually detect errors routinely. The same mental processes that allow humans to cope with novel problems can also lead to error. Bill Rouse has argued that errors are not inherently bad but their consequences may be. He proposes the development of error-tolerant systems that detect errors and take steps to prevent the consequences of the error from occurring. Research should be done on self and automatic detection of random and unanticipated errors. For self detection, displays should be developed that make the consequences of errors immediately apparent. For example, electronic map displays graphically show the consequences of horizontal flight plan entry errors. Vertical profile displays should be developed to make apparent vertical flight planning errors. Other concepts such as energy circles could also help the crew detect gross flight planning errors. For automatic detection, systems should be developed that can track pilot activity, infer pilot intent and inform the crew of potential errors before their consequences are realized. Systems that perform a reasonableness check on flight plan modifications by checking route length and magnitude of course changes are simple examples. Another example would be a system that checked the aircraft's planned altitude against a data base of world terrain elevations. Information is given in viewgraph form.
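The kind of reasonableness check described above can be sketched as follows; the route-length threshold, terrain clearance and waypoints are hypothetical placeholders, not part of any fielded system.

```python
import math

def great_circle_nm(a, b):
    """Approximate great-circle distance in nautical miles between (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    d = math.acos(min(1.0, math.sin(lat1) * math.sin(lat2) +
                      math.cos(lat1) * math.cos(lat2) * math.cos(lon2 - lon1)))
    return 3440.1 * d

def route_length_nm(waypoints):
    return sum(great_circle_nm(waypoints[i], waypoints[i + 1])
               for i in range(len(waypoints) - 1))

def check_modification(old_route, new_route, planned_alt_ft, terrain_elev_ft,
                       max_stretch=1.3, min_clearance_ft=1000):
    warnings = []
    if route_length_nm(new_route) > max_stretch * route_length_nm(old_route):
        warnings.append("route length increased substantially -- possible entry error")
    if planned_alt_ft < terrain_elev_ft + min_clearance_ft:
        warnings.append("planned altitude conflicts with terrain elevation database")
    return warnings

old = [(37.62, -122.38), (36.08, -115.15)]                       # illustrative route
new = [(37.62, -122.38), (47.45, -122.31), (36.08, -115.15)]     # suspicious detour
print(check_modification(old, new, planned_alt_ft=9000, terrain_elev_ft=8500))
```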
ERIC Educational Resources Information Center
Nozari, Nazbanou; Dell, Gary S.; Schwartz, Myrna F.
2011-01-01
Despite the existence of speech errors, verbal communication is successful because speakers can detect (and correct) their errors. The standard theory of speech-error detection, the perceptual-loop account, posits that the comprehension system monitors production output for errors. Such a comprehension-based monitor, however, cannot explain the…
Application of human reliability analysis to nursing errors in hospitals.
Inoue, Kayoko; Koizumi, Akio
2004-12-01
Adverse events in hospitals, such as in surgery, anesthesia, radiology, intensive care, internal medicine, and pharmacy, are of worldwide concern and it is important, therefore, to learn from such incidents. There are currently no appropriate tools based on state-of-the art models available for the analysis of large bodies of medical incident reports. In this study, a new model was developed to facilitate medical error analysis in combination with quantitative risk assessment. This model enables detection of the organizational factors that underlie medical errors, and the expedition of decision making in terms of necessary action. Furthermore, it determines medical tasks as module practices and uses a unique coding system to describe incidents. This coding system has seven vectors for error classification: patient category, working shift, module practice, linkage chain (error type, direct threat, and indirect threat), medication, severity, and potential hazard. Such mathematical formulation permitted us to derive two parameters: error rates for module practices and weights for the aforementioned seven elements. The error rate of each module practice was calculated by dividing the annual number of incident reports of each module practice by the annual number of the corresponding module practice. The weight of a given element was calculated by the summation of incident report error rates for an element of interest. This model was applied specifically to nursing practices in six hospitals over a year; 5,339 incident reports with a total of 63,294,144 module practices conducted were analyzed. Quality assurance (QA) of our model was introduced by checking the records of quantities of practices and reproducibility of analysis of medical incident reports. For both items, QA guaranteed legitimacy of our model. Error rates for all module practices were approximately of the order of 10^-4 in all hospitals. Three major organizational factors were found to underlie medical errors: "violation of rules" with a weight of 826 × 10^-4, "failure of labor management" with a weight of 661 × 10^-4, and "defects in the standardization of nursing practices" with a weight of 495 × 10^-4.
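Under the stated definitions, the two derived parameters can be computed as in the sketch below; the module practices, counts and element codes are invented for illustration.

```python
from collections import defaultdict

# annual incident-report counts and annual practice counts per module practice (invented)
incident_counts = {"IV administration": 420, "oral medication": 310}
practice_counts = {"IV administration": 1_200_000, "oral medication": 2_500_000}

# error rate of a module practice = incident reports / number of practices
error_rate = {m: incident_counts[m] / practice_counts[m] for m in incident_counts}
print({m: f"{r:.1e}" for m, r in error_rate.items()})    # of the order of 10^-4

# each incident report carries its module practice and its coded elements
reports = [
    {"module": "IV administration", "elements": ["violation of rules"]},
    {"module": "oral medication",   "elements": ["failure of labor management",
                                                 "violation of rules"]},
]
# weight of an element = sum of the error rates of the reports tagged with it
weights = defaultdict(float)
for rep in reports:
    for element in rep["elements"]:
        weights[element] += error_rate[rep["module"]]
print(dict(weights))
```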
RACER: Effective Race Detection Using AspectJ
NASA Technical Reports Server (NTRS)
Bodden, Eric; Havelund, Klaus
2008-01-01
Programming errors occur frequently in large software systems, and even more so if these systems are concurrent. In the past, researchers have developed specialized programs to aid programmers in detecting concurrent programming errors such as deadlocks, livelocks, starvation and data races. In this work we propose a language extension to the aspect-oriented programming language AspectJ, in the form of three new built-in pointcuts, lock(), unlock() and maybeShared(), which allow programmers to monitor program events where locks are granted or handed back, and where values are accessed that may be shared amongst multiple Java threads. We decide thread-locality using a static thread-local objects analysis developed by others. Using the three new primitive pointcuts, researchers can directly implement efficient monitoring algorithms to detect concurrent programming errors online. As an example, we present a new algorithm which we call RACER, an adaptation of the well-known ERASER algorithm to the memory model of Java. We implemented the new pointcuts as an extension to the AspectBench Compiler, implemented the RACER algorithm using this language extension and then applied the algorithm to the NASA K9 Rover Executive. Our experiments proved our implementation very effective. In the Rover Executive, RACER finds 70 data races. Only one of these races was previously known. We further applied the algorithm to two other multi-threaded programs written by computer science researchers, in which we found races as well.
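A toy lockset checker in the spirit of ERASER is sketched below purely for illustration; it is written in Python rather than as an AspectJ aspect and omits the initialization and sharing states that RACER and ERASER track.

```python
# Minimal lockset sketch: for each shared location, intersect the set of locks
# held at every access; an empty intersection signals a potential data race.
class LocksetChecker:
    def __init__(self):
        self.candidate_locks = {}      # location -> set of locks always held so far
        self.held = {}                 # thread -> set of currently held locks

    def lock(self, thread, lock_name):
        self.held.setdefault(thread, set()).add(lock_name)

    def unlock(self, thread, lock_name):
        self.held.get(thread, set()).discard(lock_name)

    def access(self, thread, location):
        held_now = self.held.get(thread, set())
        if location not in self.candidate_locks:
            self.candidate_locks[location] = set(held_now)
        else:
            self.candidate_locks[location] &= held_now   # refine the lockset
        if not self.candidate_locks[location]:
            print(f"potential race on {location} (accessed by {thread} with no common lock)")

c = LocksetChecker()
c.lock("T1", "m");  c.access("T1", "counter");  c.unlock("T1", "m")
c.access("T2", "counter")          # unsynchronized access -> race reported
```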
ERIC Educational Resources Information Center
Hidalgo, Mª Dolores; Gómez-Benito, Juana; Zumbo, Bruno D.
2014-01-01
The authors analyze the effectiveness of the R[superscript 2] and delta log odds ratio effect size measures when using logistic regression analysis to detect differential item functioning (DIF) in dichotomous items. A simulation study was carried out, and the Type I error rate and power estimates under conditions in which only statistical testing…
ERIC Educational Resources Information Center
Shih, Ching-Lin; Wang, Wen-Chung
2009-01-01
The multiple indicators, multiple causes (MIMIC) method with a pure short anchor was proposed to detect differential item functioning (DIF). A simulation study showed that the MIMIC method with an anchor of 1, 2, 4, or 10 DIF-free items yielded a well-controlled Type I error rate even when such tests contained as many as 40% DIF items. In general,…
Clover: Compiler directed lightweight soft error resilience
Liu, Qingrui; Lee, Dongyoon; Jung, Changhee; ...
2015-05-01
This paper presents Clover, a compiler-directed soft error detection and recovery scheme for lightweight soft error resilience. The compiler carefully generates soft error tolerant code based on idempotent processing without explicit checkpoints. During program execution, Clover relies on a small number of acoustic wave detectors deployed in the processor to identify soft errors by sensing the wave made by a particle strike. To cope with DUE (detected unrecoverable errors) caused by the sensing latency of error detection, Clover leverages a novel selective instruction duplication technique called tail-DMR (dual modular redundancy). Once a soft error is detected by either the sensor or the tail-DMR, Clover takes care of the error as in the case of exception handling. To recover from the error, Clover simply redirects program control to the beginning of the code region where the error is detected. Lastly, the experimental results demonstrate that the average runtime overhead is only 26%, which is a 75% reduction compared to that of the state-of-the-art soft error resilience technique.
Hayden, Randall T; Patterson, Donna J; Jay, Dennis W; Cross, Carl; Dotson, Pamela; Possel, Robert E; Srivastava, Deo Kumar; Mirro, Joseph; Shenep, Jerry L
2008-02-01
To assess the ability of a bar code-based electronic positive patient and specimen identification (EPPID) system to reduce identification errors in a pediatric hospital's clinical laboratory. An EPPID system was implemented at a pediatric oncology hospital to reduce errors in patient and laboratory specimen identification. The EPPID system included bar-code identifiers and handheld personal digital assistants supporting real-time order verification. System efficacy was measured in 3 consecutive 12-month time frames, corresponding to periods before, during, and immediately after full EPPID implementation. A significant reduction in the median percentage of mislabeled specimens was observed in the 3-year study period. A decline from 0.03% to 0.005% (P < .001) was observed in the 12 months after full system implementation. On the basis of the pre-intervention detected error rate, it was estimated that EPPID prevented at least 62 mislabeling events during its first year of operation. EPPID decreased the rate of misidentification of clinical laboratory samples. The diminution of errors observed in this study provides support for the development of national guidelines for the use of bar coding for laboratory specimens, paralleling recent recommendations for medication administration.
Families as Partners in Hospital Error and Adverse Event Surveillance
Khan, Alisa; Coffey, Maitreya; Litterer, Katherine P.; Baird, Jennifer D.; Furtak, Stephannie L.; Garcia, Briana M.; Ashland, Michele A.; Calaman, Sharon; Kuzma, Nicholas C.; O’Toole, Jennifer K.; Patel, Aarti; Rosenbluth, Glenn; Destino, Lauren A.; Everhart, Jennifer L.; Good, Brian P.; Hepps, Jennifer H.; Dalal, Anuj K.; Lipsitz, Stuart R.; Yoon, Catherine S.; Zigmont, Katherine R.; Srivastava, Rajendu; Starmer, Amy J.; Sectish, Theodore C.; Spector, Nancy D.; West, Daniel C.; Landrigan, Christopher P.
2017-01-01
IMPORTANCE Medical errors and adverse events (AEs) are common among hospitalized children. While clinician reports are the foundation of operational hospital safety surveillance and a key component of multifaceted research surveillance, patient and family reports are not routinely gathered. We hypothesized that a novel family-reporting mechanism would improve incident detection. OBJECTIVE To compare error and AE rates (1) gathered systematically with vs without family reporting, (2) reported by families vs clinicians, and (3) reported by families vs hospital incident reports. DESIGN, SETTING, AND PARTICIPANTS We conducted a prospective cohort study including the parents/caregivers of 989 hospitalized patients 17 years and younger (total 3902 patient-days) and their clinicians from December 2014 to July 2015 in 4 US pediatric centers. Clinician abstractors identified potential errors and AEs by reviewing medical records, hospital incident reports, and clinician reports as well as weekly and discharge Family Safety Interviews (FSIs). Two physicians reviewed and independently categorized all incidents, rating severity and preventability (agreement, 68%–90%; κ, 0.50–0.68). Discordant categorizations were reconciled. Rates were generated using Poisson regression estimated via generalized estimating equations to account for repeated measures on the same patient. MAIN OUTCOMES AND MEASURES Error and AE rates. RESULTS Overall, 746 parents/caregivers consented for the study. Of these, 717 completed FSIs. Their median (interquartile range) age was 32.5 (26–40) years; 380 (53.0%) were nonwhite, 566 (78.9%) were female, 603 (84.1%) were English speaking, and 380 (53.0%) had attended college. Of 717 parents/caregivers completing FSIs, 185 (25.8%) reported a total of 255 incidents, which were classified as 132 safety concerns (51.8%), 102 nonsafety-related quality concerns (40.0%), and 21 other concerns (8.2%). These included 22 preventable AEs (8.6%), 17 nonharmful medical errors (6.7%), and 11 nonpreventable AEs (4.3%) on the study unit. In total, 179 errors and 113 AEs were identified from all sources. Family reports included 8 otherwise unidentified AEs, including 7 preventable AEs. Error rates with family reporting (45.9 per 1000 patient-days) were 1.2-fold (95%CI, 1.1–1.2) higher than rates without family reporting (39.7 per 1000 patient-days). Adverse event rates with family reporting (28.7 per 1000 patient-days) were 1.1-fold (95%CI, 1.0–1.2; P=.006) higher than rates without (26.1 per 1000 patient-days). Families and clinicians reported similar rates of errors (10.0 vs 12.8 per 1000 patient-days; relative rate, 0.8; 95%CI, .5–1.2) and AEs (8.5 vs 6.2 per 1000 patient-days; relative rate, 1.4; 95%CI, 0.8–2.2). Family-reported error rates were 5.0-fold (95%CI, 1.9–13.0) higher and AE rates 2.9-fold (95% CI, 1.2–6.7) higher than hospital incident report rates. CONCLUSIONS AND RELEVANCE Families provide unique information about hospital safety and should be included in hospital safety surveillance in order to facilitate better design and assessment of interventions to improve safety. PMID:28241211
Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty
Ballantyne, A. P.; Andres, R.; Houghton, R.; ...
2015-04-30
Over the last 5 decades monitoring systems have been developed to detect changes in the accumulation of carbon (C) in the atmosphere and ocean; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate errors and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ uncertainties of the atmospheric growth rate have decreased from 1.2 Pg C yr⁻¹ in the 1960s to 0.3 Pg C yr⁻¹ in the 2000s due to an expansion of the atmospheric observation network. The 2σ uncertainties in fossil fuel emissions have increased from 0.3 Pg C yr⁻¹ in the 1960s to almost 1.0 Pg C yr⁻¹ during the 2000s due to differences in national reporting errors and differences in energy inventories. Lastly, while land use emissions have remained fairly constant, their errors still remain high and thus their contribution to global C uptake uncertainty is not trivial. Currently, the absolute errors in fossil fuel emissions rival the total emissions from land use, highlighting the extent to which fossil fuels dominate the global C budget. Because errors in the atmospheric growth rate have decreased faster than errors in total emissions have increased, a ~20% reduction in the overall uncertainty of net global C uptake has occurred. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that terrestrial C uptake has increased and 97% confident that ocean C uptake has increased over the last 5 decades. Thus, it is clear that arguably one of the most vital ecosystem services currently provided by the biosphere is the continued removal of approximately half of atmospheric CO₂ emissions from the atmosphere, although there are certain environmental costs associated with this service, such as the acidification of ocean waters.
Sayler, Elaine; Eldredge-Hindy, Harriet; Dinome, Jessie; Lockamy, Virginia; Harrison, Amy S
2015-01-01
The planning procedure for Valencia and Leipzig surface applicators (VLSAs) (Nucletron, Veenendaal, The Netherlands) differs substantially from CT-based planning; the unfamiliarity could lead to significant errors. This study applies failure modes and effects analysis (FMEA) to high-dose-rate (HDR) skin brachytherapy using VLSAs to ensure safety and quality. A multidisciplinary team created a protocol for HDR VLSA skin treatments and applied FMEA. Failure modes were identified and scored by severity, occurrence, and detectability. The clinical procedure was then revised to address high-scoring process nodes. Several key components were added to the protocol to minimize risk probability numbers. (1) Diagnosis, prescription, applicator selection, and setup are reviewed at weekly quality assurance rounds. Peer review reduces the likelihood of an inappropriate treatment regime. (2) A template for HDR skin treatments was established in the clinic's electronic medical record system to standardize treatment instructions. This reduces the chances of miscommunication between the physician and planner as well as increases the detectability of an error. (3) A screen check was implemented during the second check to increase detectability of an error. (4) To reduce error probability, the treatment plan worksheet was designed to display plan parameters in a format visually similar to the treatment console display, facilitating data entry and verification. (5) VLSAs are color coded and labeled to match the electronic medical record prescriptions, simplifying in-room selection and verification. Multidisciplinary planning and FMEA increased detectability and reduced error probability during VLSA HDR brachytherapy. This clinical model may be useful to institutions implementing similar procedures. Copyright © 2015 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
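For context, FMEA scoring multiplies severity, occurrence and detectability into a risk priority number (RPN) and ranks failure modes by it. Below is a minimal sketch of that scoring step; the failure-mode descriptions and 1-10 scores are hypothetical illustrations, not values from the study.

```python
# Minimal FMEA scoring sketch for an HDR skin-brachytherapy workflow.
# Failure modes and scores below are hypothetical, not taken from the study.

failure_modes = [
    # (description, severity, occurrence, detectability), each scored 1-10
    ("Wrong applicator size selected",            8, 3, 4),
    ("Prescription entered for wrong site",       9, 2, 3),
    ("Treatment parameters mistyped at console",  7, 4, 5),
    ("Applicator not flush with skin surface",    6, 5, 6),
]

def rpn(severity, occurrence, detectability):
    """Risk priority number: higher values flag failure modes needing mitigation."""
    return severity * occurrence * detectability

ranked = sorted(failure_modes, key=lambda fm: rpn(*fm[1:]), reverse=True)
for desc, s, o, d in ranked:
    print(f"RPN {rpn(s, o, d):3d}  S={s} O={o} D={d}  {desc}")
```

Mitigations such as peer review or a screen check would then be credited as lower occurrence or detectability scores, and the RPNs recomputed.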
2009-01-01
Background Increasing reports of carbapenem-resistant Acinetobacter baumannii infections are of serious concern. Reliable susceptibility testing results remain a critical issue for the clinical outcome. Automated systems are increasingly used for species identification and susceptibility testing. This study was organized to evaluate the accuracies of three widely used automated susceptibility testing methods for testing the imipenem susceptibilities of A. baumannii isolates, by comparing them to validated test methods. Methods A selection of 112 clinical isolates of A. baumannii collected between January 2003 and May 2006 were tested to confirm imipenem susceptibility results. Strains were tested against imipenem by the reference broth microdilution (BMD), disk diffusion (DD), Etest, BD Phoenix, MicroScan WalkAway and Vitek 2 automated systems. Data were analysed by comparing the results from each test method to those produced by the reference BMD test. Results MicroScan performed true identification of all A. baumannii strains, while Vitek 2 failed to identify one strain and Phoenix failed to identify two strains and misidentified two strains. Eighty-seven of the strains (78%) were resistant to imipenem by BMD. Etest, Vitek 2 and BD Phoenix produced acceptable error rates when tested against imipenem. Etest showed the best performance with only two minor errors (1.8%). Vitek 2 produced eight minor errors (7.2%). BD Phoenix produced three major errors (2.8%). DD produced two very major errors (1.8%) (slightly higher (0.3%) than the acceptable limit) and three major errors (2.7%). MicroScan showed the worst performance in susceptibility testing, with unacceptable error rates: 28 very major errors (25%) and 50 minor errors (44.6%). Conclusion Reporting errors for A. baumannii against imipenem do exist in susceptibility testing systems. We suggest that clinical laboratories using the MicroScan system for routine use should consider using a second, independent antimicrobial susceptibility testing method to validate imipenem susceptibility. Etest, wherever available, may be used as an easy method to confirm imipenem susceptibility. PMID:19291298
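The error categories follow the usual convention for comparing a test method against the reference BMD interpretation: a very major error is false susceptibility, a major error is false resistance, and a minor error is any disagreement involving an intermediate call. A small sketch of that bookkeeping, with hypothetical data and rates computed over all isolates tested (as the abstract appears to do):

```python
# Sketch: classify categorical agreement errors for an automated system versus
# the reference broth microdilution (BMD) result. Interpretations are
# 'S' (susceptible), 'I' (intermediate) or 'R' (resistant). Example data are
# hypothetical; denominators vary between guidelines, so check the standard used.

def classify(reference, test):
    if reference == test:
        return "agreement"
    if reference == "R" and test == "S":
        return "very major"          # false susceptibility
    if reference == "S" and test == "R":
        return "major"               # false resistance
    return "minor"                   # any disagreement involving 'I'

def error_rates(pairs):
    counts = {"agreement": 0, "very major": 0, "major": 0, "minor": 0}
    for ref, tst in pairs:
        counts[classify(ref, tst)] += 1
    n = len(pairs)
    return {k: round(100.0 * v / n, 1) for k, v in counts.items()}

results = [("R", "R"), ("R", "S"), ("S", "R"), ("R", "I"), ("S", "S")]
print(error_rates(results))
```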
NASA Astrophysics Data System (ADS)
Song, YoungJae; Sepulveda, Francisco
2017-02-01
Objective. Self-paced EEG-based BCIs (SP-BCIs) have traditionally been avoided due to two sources of uncertainty: (1) precisely when an intentional command is sent by the brain, i.e., the command onset detection problem, and (2) how different the intentional command is when compared to non-specific (or idle) states. Performance evaluation is also a problem and there are no suitable standard metrics available. In this paper we attempted to tackle these issues. Approach. Self-paced covert sound-production cognitive tasks (i.e., high-pitch and siren-like sounds) were used to distinguish between intentional commands (IC) and idle states. The IC states were chosen for their ease of execution and negligible overlap with common cognitive states. Band power and a digital wavelet transform were used for feature extraction, and the Davies-Bouldin index was used for feature selection. Classification was performed using linear discriminant analysis. Main results. Performance was evaluated under offline and simulated-online conditions. For the latter, a performance score called the true-false-positive (TFP) rate, ranging from 0 (poor) to 100 (perfect), was created to take into account both classification performance and onset timing errors. Averaging the results from the best performing IC task for all seven participants, a 77.7% true-positive (TP) rate was achieved in offline testing. For simulated-online analysis the best IC average TFP score was 76.67% (87.61% TP rate, 4.05% false-positive rate). Significance. Results were promising when compared to previous IC onset detection studies using motor imagery, in which the best TP rates were reported as 72.0% and 79.7%, and which, crucially, did not take timing errors into account. Moreover, based on our literature review, there is no previous covert sound-production onset detection system for SP-BCIs. Results showed that the proposed onset detection technique and TFP performance metric have good potential for use in SP-BCIs.
New Class of Quantum Error-Correcting Codes for a Bosonic Mode
NASA Astrophysics Data System (ADS)
Michael, Marios H.; Silveri, Matti; Brierley, R. T.; Albert, Victor V.; Salmilehto, Juha; Jiang, Liang; Girvin, S. M.
2016-07-01
We construct a new class of quantum error-correcting codes for a bosonic mode, which are advantageous for applications in quantum memories, communication, and scalable computation. These "binomial quantum codes" are formed from a finite superposition of Fock states weighted with binomial coefficients. The binomial codes can exactly correct errors that are polynomial up to a specific degree in bosonic creation and annihilation operators, including amplitude damping and displacement noise as well as boson addition and dephasing errors. For realistic continuous-time dissipative evolution, the codes can perform approximate quantum error correction to any given order in the time step between error detection measurements. We present an explicit approximate quantum error recovery operation based on projective measurements and unitary operations. The binomial codes are tailored for detecting boson loss and gain errors by means of measurements of the generalized number parity. We discuss optimization of the binomial codes and demonstrate that by relaxing the parity structure, codes with even lower unrecoverable error rates can be achieved. The binomial codes are related to existing two-mode bosonic codes, but offer the advantage of requiring only a single bosonic mode to correct amplitude damping as well as the ability to correct other errors. Our codes are similar in spirit to "cat codes" based on superpositions of the coherent states but offer several advantages such as smaller mean boson number, exact rather than approximate orthonormality of the code words, and an explicit unitary operation for repumping energy into the bosonic mode. The binomial quantum codes are realizable with current superconducting circuit technology, and they should prove useful in other quantum technologies, including bosonic quantum memories, photonic quantum communication, and optical-to-microwave up- and down-conversion.
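A numerical sketch of the code-word construction as I read it from the abstract and paper: Fock amplitudes proportional to square roots of binomial coefficients, placed on every (S+1)-th number state, with even and odd binomial indices forming the two logical words. The parameter choice below is an arbitrary small example, not a recommendation from the authors.

```python
# Sketch (my own construction from the stated definition): binomial code words
# as finite superpositions of Fock states spaced by S+1, weighted by square
# roots of binomial coefficients. Parameters N, S below are illustrative only.
import numpy as np
from math import comb

def binomial_code_words(N, S, dim):
    """Return |W_up>, |W_down> as real vectors in a Fock space truncated at `dim`."""
    w_up = np.zeros(dim)
    w_down = np.zeros(dim)
    for p in range(N + 2):                        # p = 0 .. N+1
        amp = np.sqrt(comb(N + 1, p) / 2**N)
        target = w_up if p % 2 == 0 else w_down   # even p -> |W_up>, odd p -> |W_down>
        target[p * (S + 1)] += amp
    return w_up, w_down

N, S = 2, 2                                       # small, hypothetical example
dim = (N + 1) * (S + 1) + 1                       # enough levels for the highest Fock state
up, down = binomial_code_words(N, S, dim)
print(np.dot(up, up), np.dot(down, down))         # both ~1: normalized
print(np.dot(up, down))                           # 0: disjoint Fock support, orthogonal
print("mean boson number of |W_up>:", np.dot(up**2, np.arange(dim)))
```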
DOE Office of Scientific and Technical Information (OSTI.GOV)
Almasi, Gheorghe; Blumrich, Matthias Augustin; Chen, Dong
Methods and apparatus perform fault isolation in multiple node computing systems using commutative error detection values (for example, checksums) to identify and to isolate faulty nodes. When information associated with a reproducible portion of a computer program is injected into a network by a node, a commutative error detection value is calculated. At intervals, node fault detection apparatus associated with the multiple node computer system retrieves commutative error detection values associated with the node and stores them in memory. When the computer program is executed again by the multiple node computer system, new commutative error detection values are created and stored in memory. The node fault detection apparatus identifies faulty nodes by comparing commutative error detection values associated with reproducible portions of the application program generated by a particular node from different runs of the application program. Differences in values indicate a possible faulty node.
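A toy sketch of the comparison idea: a commutative per-section check value is computed on each node and compared across two runs of the same reproducible section, and mismatches flag candidate faulty nodes. The XOR-of-CRC32 choice, the node/section names and the data are illustrative, not the patented apparatus.

```python
# Sketch of per-node, per-section check values compared across two runs.
import zlib

def section_checksum(payloads):
    """Commutative check value: XOR of per-message CRC32s, so the order in
    which messages were injected within a section does not matter."""
    value = 0
    for p in payloads:
        value ^= zlib.crc32(p)
    return value

def suspect_nodes(run_a, run_b):
    """run_a/run_b map (node_id, section_id) -> checksum; return mismatching nodes."""
    return sorted({node for (node, section), c in run_a.items()
                   if run_b.get((node, section)) != c})

run1 = {(0, "step1"): section_checksum([b"x", b"y"]),
        (1, "step1"): section_checksum([b"z"])}
run2 = {(0, "step1"): section_checksum([b"y", b"x"]),   # same data, different order: equal
        (1, "step1"): section_checksum([b"z!"])}        # corrupted output on node 1
print(suspect_nodes(run1, run2))                        # -> [1]
```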
Fine-Scale Event Location and Error Analysis in NET-VISA
NASA Astrophysics Data System (ADS)
Arora, N. S.; Russell, S.
2016-12-01
NET-VISA is a generative probabilistic model for the occurrence of seismic, hydroacoustic, and atmospheric events, and the propagation of energy from these events through various media and phases before being detected, or misdetected, by IMS stations. It is built on top of the basic station and arrival detection processing at the IDC, and is currently being tested in the IDC network processing pipelines. A key distinguishing feature of NET-VISA is that it is easy to incorporate prior scientific knowledge and historical data into the probabilistic model. The model accounts for both detections and misdetections when forming events, and this allows it to make more accurate event hypotheses. It has been continuously evaluated since 2012, and in each year it achieves roughly a 60% reduction in the number of missed events without increasing the false event rate as compared to the existing GA algorithm. More importantly, the model finds large numbers of events that have been confirmed by regional seismic bulletins but missed by the IDC analysts using the same data. In this work we focus on enhancements to the model to improve the location accuracy and error ellipses. We will present a new version of the model that focuses on the fine scale around the event location, and present error ellipses and analysis of recent important events.
Improving TCP Network Performance by Detecting and Reacting to Packet Reordering
NASA Technical Reports Server (NTRS)
Kruse, Hans; Ostermann, Shawn; Allman, Mark
2003-01-01
There are many factors governing the performance of TCP-based applications traversing satellite channels. The end-to-end performance of TCP is known to be degraded by the reordering, delay, noise and asymmetry inherent in geosynchronous systems. This result has been largely based on experiments that evaluate the performance of TCP in single-flow tests. While single-flow tests are useful for deriving information on the theoretical behavior of TCP and allow for easy diagnosis of problems, they do not represent a broad range of realistic situations and therefore cannot be used to authoritatively comment on performance issues. The experiments discussed in this report test TCP's performance in a more dynamic environment with competing traffic flows from hundreds of TCP connections running simultaneously across the satellite channel. Another aspect we investigate is TCP's reaction to bit errors on satellite channels. TCP interprets loss as a sign of network congestion. This causes TCP to reduce its transmission rate, leading to reduced performance when loss is due to corruption. We allowed the bit error rate on our satellite channel to vary widely and tested the performance of TCP as a function of these bit error rates. Our results show that the average performance of TCP on satellite channels is good even under conditions of loss as high as a bit error rate of 10⁻⁵.
NASA Technical Reports Server (NTRS)
Buechler, W.; Tucker, A. G.
1981-01-01
Several methods were employed to detect both the occurrence and source of errors in the operational software of the AN/SLQ-32, a large embedded real-time electronic warfare command and control system for the ROLM 1606 computer. The ROLM computer provides information about invalid addressing, improper use of privileged instructions, stack overflows, and unimplemented instructions. Additionally, software techniques were developed to detect invalid jumps, indices out of range, infinite loops, stack underflows, and field size errors. Finally, data are saved to provide information about the status of the system when an error is detected. This information includes I/O buffers, interrupt counts, stack contents, and recently passed locations. The various errors detected, techniques to assist in debugging problems, and segment simulation on a nontarget computer are discussed. These error detection techniques were a major factor in the success of finding the primary cause of error in 98% of over 500 system dumps.
Error Recovery in the Time-Triggered Paradigm with FTT-CAN.
Marques, Luis; Vasconcelos, Verónica; Pedreiras, Paulo; Almeida, Luís
2018-01-11
Data networks are naturally prone to interferences that can corrupt messages, leading to performance degradation or even to critical failure of the corresponding distributed system. To improve resilience of critical systems, time-triggered networks are frequently used, based on communication schedules defined at design-time. These networks offer prompt error detection, but slow error recovery that can only be compensated with bandwidth overprovisioning. On the contrary, the Flexible Time-Triggered (FTT) paradigm uses online traffic scheduling, which enables a compromise between error detection and recovery that can achieve timely recovery with a fraction of the needed bandwidth. This article presents a new method to recover transmission errors in a time-triggered Controller Area Network (CAN) network, based on the Flexible Time-Triggered paradigm, namely FTT-CAN. The method is based on using a server (traffic shaper) to regulate the retransmission of corrupted or omitted messages. We show how to design the server to simultaneously: (1) meet a predefined reliability goal, when considering worst case error recovery scenarios bounded probabilistically by a Poisson process that models the fault arrival rate; and, (2) limit the direct and indirect interference in the message set, preserving overall system schedulability. Extensive simulations with multiple scenarios, based on practical and randomly generated systems, show a reduction of two orders of magnitude in the average bandwidth taken by the proposed error recovery mechanism, when compared with traditional approaches available in the literature based on adding extra pre-defined transmission slots.
Error Recovery in the Time-Triggered Paradigm with FTT-CAN
Pedreiras, Paulo; Almeida, Luís
2018-01-01
Data networks are naturally prone to interferences that can corrupt messages, leading to performance degradation or even to critical failure of the corresponding distributed system. To improve resilience of critical systems, time-triggered networks are frequently used, based on communication schedules defined at design-time. These networks offer prompt error detection, but slow error recovery that can only be compensated with bandwidth overprovisioning. On the contrary, the Flexible Time-Triggered (FTT) paradigm uses online traffic scheduling, which enables a compromise between error detection and recovery that can achieve timely recovery with a fraction of the needed bandwidth. This article presents a new method to recover transmission errors in a time-triggered Controller Area Network (CAN) network, based on the Flexible Time-Triggered paradigm, namely FTT-CAN. The method is based on using a server (traffic shaper) to regulate the retransmission of corrupted or omitted messages. We show how to design the server to simultaneously: (1) meet a predefined reliability goal, when considering worst case error recovery scenarios bounded probabilistically by a Poisson process that models the fault arrival rate; and, (2) limit the direct and indirect interference in the message set, preserving overall system schedulability. Extensive simulations with multiple scenarios, based on practical and randomly generated systems, show a reduction of two orders of magnitude in the average bandwidth taken by the proposed error recovery mechanism, when compared with traditional approaches available in the literature based on adding extra pre-defined transmission slots. PMID:29324723
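A minimal sketch of how a Poisson-bounded retransmission budget of this kind can be derived (this is not the paper's server-design equations; the fault rate, recovery-window length and reliability target below are hypothetical):

```python
# Sketch: size an error-recovery server by choosing the smallest retransmission
# budget k per window such that the probability of more than k faults arriving
# in the window, under a Poisson fault-arrival model, stays below the allowed
# per-window failure probability.
from math import exp, factorial

def poisson_tail(k, mu):
    """P(X > k) for X ~ Poisson(mu)."""
    return 1.0 - sum(exp(-mu) * mu**i / factorial(i) for i in range(k + 1))

def server_budget(fault_rate_hz, window_s, max_failure_prob):
    mu = fault_rate_hz * window_s
    k = 0
    while poisson_tail(k, mu) > max_failure_prob:
        k += 1
    return k

# Hypothetical numbers: 30 faults/hour on the bus, 100 ms recovery window,
# per-window failure probability at most 1e-9.
print(server_budget(30 / 3600.0, 0.1, 1e-9))
```

The point of the comparison in the article is that this budget is far smaller than statically reserving a retransmission slot for every scheduled message.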
Sidi, Avner; Gravenstein, Nikolaus; Vasilopoulos, Terrie; Lampotang, Samsun
2017-06-02
We describe observed improvements in nontechnical or "higher-order" deficiencies and cognitive performance skills in an anesthesia residency cohort for a 1-year time interval. Our main objectives were to evaluate higher-order, cognitive performance and to demonstrate that simulation can effectively serve as an assessment of cognitive skills and can help detect "higher-order" deficiencies, which are not as well identified through more traditional assessment tools. We hypothesized that simulation can identify longitudinal changes in cognitive skills and that cognitive performance deficiencies can then be remediated over time. We used 50 scenarios evaluating 35 residents during 2 subsequent years, and 18 of those 35 residents were evaluated in both years (post graduate years 3 then 4) in the same or similar scenarios. Individual basic knowledge and cognitive performance during simulation-based scenarios were assessed using a 20- to 27-item scenario-specific checklist. Items were labeled as basic knowledge/technical (lower-order cognition) or advanced cognitive/nontechnical (higher-order cognition). Identical or similar scenarios were repeated annually by a subset of 18 residents during 2 successive academic years. For every scenario and item, we calculated group error scenario rate (frequency) and individual (resident) item success. Grouped individuals' success rates are calculated as mean (SD), and item success grade and group error rates are calculated and presented as proportions. For all analyses, α level is 0.05. Overall PGY4 residents' error rates were lower and success rates higher for the cognitive items compared with technical item performance in the operating room and resuscitation domains. In all 3 clinical domains, the cognitive error rate by PGY4 residents was fairly low (0.00-0.22) and the cognitive success rate by PGY4 residents was high (0.83-1.00) and significantly better compared with previous annual assessments (P < 0.05). Overall, there was an annual decrease in error rates for 2 years, primarily driven by decreases in cognitive errors. The most commonly observed cognitive error types remained anchoring, availability bias, premature closure, and confirmation bias. Simulation-based assessments can highlight cognitive performance areas of relative strength, weakness, and progress in a resident or resident cohort. We believe that they can therefore be used to inform curriculum development including activities that require higher-level cognitive processing.
Permanence analysis of a concatenated coding scheme for error control
NASA Technical Reports Server (NTRS)
Costello, D. J., Jr.; Lin, S.; Kasami, T.
1983-01-01
A concatenated coding scheme for error control in data communications is analyzed. In this scheme, the inner code is used for both error correction and detection, however, the outer code is used only for error detection. A retransmission is requested if the outer code detects the presence of errors after the inner code decoding. Probability of undetected error is derived and bounded. A particular example, proposed for the planetary program, is analyzed.
Probability of undetected error after decoding for a concatenated coding scheme
NASA Technical Reports Server (NTRS)
Costello, D. J., Jr.; Lin, S.
1984-01-01
A concatenated coding scheme for error control in data communications is analyzed. In this scheme, the inner code is used for both error correction and detection, however the outer code is used only for error detection. A retransmission is requested if the outer code detects the presence of errors after the inner code decoding. Probability of undetected error is derived and bounded. A particular example, proposed for NASA telecommand system is analyzed.
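The undetected-error analysis in these reports rests on the standard expression for a linear block code used purely for detection over a binary symmetric channel: P_ud = Σ_{i≥1} A_i ε^i (1−ε)^{n−i}, where A_i is the code's weight distribution and ε the crossover probability. A small worked sketch follows; the (7,4) Hamming weight distribution is only an illustrative stand-in for the codes analyzed in the reports.

```python
# Sketch: probability of undetected error for a linear block code with known
# weight distribution A_i over a binary symmetric channel with crossover eps.

def undetected_error_prob(weights, n, eps):
    return sum(a * eps**i * (1 - eps)**(n - i)
               for i, a in weights.items() if i > 0)

hamming_7_4 = {0: 1, 3: 7, 4: 7, 7: 1}   # weight distribution of the (7,4) Hamming code
for eps in (1e-2, 1e-3, 1e-4):
    print(eps, undetected_error_prob(hamming_7_4, 7, eps))
```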
Spüler, Martin; Rosenstiel, Wolfgang; Bogdan, Martin
2012-01-01
The goal of a Brain-Computer Interface (BCI) is to control a computer by pure brain activity. Recently, BCIs based on code-modulated visual evoked potentials (c-VEPs) have shown great potential to establish high-performance communication. In this paper we present a c-VEP BCI that uses online adaptation of the classifier to reduce calibration time and increase performance. We compare two different approaches for online adaptation of the system: an unsupervised method and a method that uses the detection of error-related potentials. Both approaches were tested in an online study, in which an average accuracy of 96% was achieved with adaptation based on error-related potentials. This accuracy corresponds to an average information transfer rate of 144 bit/min, which is the highest bitrate reported so far for a non-invasive BCI. In a free-spelling mode, the subjects were able to write with an average of 21.3 error-free letters per minute, which shows the feasibility of the BCI system in a normal-use scenario. In addition we show that a calibration of the BCI system solely based on the detection of error-related potentials is possible, without knowing the true class labels.
Currie detection limits in gamma-ray spectroscopy.
De Geer, Lars-Erik
2004-01-01
Currie hypothesis testing is applied to gamma-ray spectral data, where an optimum part of the peak is used and the background is considered well known from nearby channels. With this, the risk of making Type I errors is about 100 times lower than commonly assumed. A programme, PeakMaker, produces random peaks with given characteristics on the screen and calculations are done to facilitate a full use of Poisson statistics in spectrum analyses. SHORT TECHNICAL NOTE SUMMARY: The Currie decision limit concept applied to spectral data is reinterpreted, which gives better consistency between the selected error risk and the observed error rates. A PeakMaker program is described and the few-count problem is analyzed.
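A sketch in the spirit of the note: with a well-known background expectation b in the peak window, the Poisson-exact critical gross count is the smallest count whose false-alarm (Type I) probability does not exceed α, which can be compared with the common Gaussian approximation L_C ≈ 1.645·√b for a well-known background. The background value and α below are placeholders.

```python
# Sketch: Poisson-exact decision (critical) limit versus the Gaussian approximation.
from scipy.stats import poisson

def poisson_decision_limit(b, alpha=0.05):
    """Smallest gross count n_c with P(N >= n_c | background mean b) <= alpha."""
    n_c = 0
    while poisson.sf(n_c - 1, b) > alpha:   # sf(n-1) = P(N >= n) under background only
        n_c += 1
    return n_c, poisson.sf(n_c - 1, b)

b = 12.0                                    # hypothetical background counts in the window
n_c, attained = poisson_decision_limit(b)
print("critical gross counts:", n_c, "attained Type I risk:", attained)
print("Gaussian approximation of the net decision limit:", 1.645 * b**0.5)
```

For small counts the attained Type I risk of the integer threshold is typically well below the nominal α, which is the kind of discrepancy the note discusses.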
An effective hair detection algorithm for dermoscopic melanoma images of skin lesions
NASA Astrophysics Data System (ADS)
Chakraborti, Damayanti; Kaur, Ravneet; Umbaugh, Scott; LeAnder, Robert
2016-09-01
Dermoscopic images are obtained using the method of skin surface microscopy. Pigmented skin lesions are evaluated in terms of texture features such as color and structure. Artifacts, such as hairs, bubbles, black frames, ruler-marks, etc., create obstacles that prevent accurate detection of skin lesions by both clinicians and computer-aided diagnosis. In this article, we propose a new algorithm for the automated detection of hairs, using an adaptive, Canny edge-detection method, followed by morphological filtering and an arithmetic addition operation. The algorithm was applied to 50 dermoscopic melanoma images. In order to ascertain this method's relative detection accuracy, it was compared to the Razmjooy hair-detection method [1], using segmentation error (SE), true detection rate (TDR) and false positioning rate (FPR). The new method produced 6.57% SE, 96.28% TDR and 3.47% FPR, compared to 15.751% SE, 86.29% TDR and 11.74% FPR produced by the Razmjooy method [1]. Because of the 7.27-9.99% improvement in those parameters, we conclude that the new algorithm produces much better results for detecting thick, thin, dark and light hairs. The new method proposed here, shows an appreciable difference in the rate of detecting bubbles, as well.
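A sketch of the described Canny-plus-morphology pipeline, with my own (hypothetical) parameter choices for the thresholds and kernel size; the original algorithm's adaptive threshold rule and post-processing are not reproduced here.

```python
# Sketch: hair-mask extraction via adaptive Canny edges, morphological closing
# to join broken hair fragments, then arithmetic addition to suppress the hairs.
import cv2
import numpy as np

def hair_mask(bgr_image):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    med = np.median(gray)                       # one common "adaptive" heuristic
    lo, hi = int(max(0, 0.66 * med)), int(min(255, 1.33 * med))
    edges = cv2.Canny(gray, lo, hi)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
    return cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)

def remove_hairs(bgr_image):
    mask = hair_mask(bgr_image)
    # arithmetic addition brightens hair pixels; inpainting would be an alternative
    cleaned = cv2.add(bgr_image, cv2.cvtColor(mask, cv2.COLOR_GRAY2BGR))
    return mask, cleaned

# usage: mask, cleaned = remove_hairs(cv2.imread("lesion.jpg"))
```

Segmentation error, true detection rate and false positioning rate would then be computed by comparing the mask against hand-labeled hair pixels.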
Low-complexity R-peak detection for ambulatory fetal monitoring.
Rooijakkers, Michael J; Rabotti, Chiara; Oei, S Guid; Mischi, Massimo
2012-07-01
Non-invasive fetal health monitoring during pregnancy is becoming increasingly important because of the increasing number of high-risk pregnancies. Despite recent advances in signal-processing technology, which have enabled fetal monitoring during pregnancy using abdominal electrocardiogram (ECG) recordings, ubiquitous fetal health monitoring is still unfeasible due to the computational complexity of noise-robust solutions. In this paper, an ECG R-peak detection algorithm for ambulatory R-peak detection is proposed, as part of a fetal ECG detection algorithm. The proposed algorithm is optimized to reduce computational complexity, without reducing the R-peak detection performance compared to the existing R-peak detection schemes. Validation of the algorithm is performed on three manually annotated datasets. With a detection error rate of 0.23%, 1.32% and 9.42% on the MIT/BIH Arrhythmia and in-house maternal and fetal databases, respectively, the detection rate of the proposed algorithm is comparable to the best state-of-the-art algorithms, at a reduced computational complexity.
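For illustration, a generic low-complexity R-peak detector and the detection-error-rate metric can be sketched as follows. This is not the authors' algorithm; the filter band, refractory period, threshold and matching tolerance are assumptions.

```python
# Sketch: bandpass filter, squaring for energy emphasis, peak picking with a
# refractory period, and a simple detection-error-rate (DER) computation.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_r_peaks(ecg, fs):
    b, a = butter(2, [5 / (fs / 2), 25 / (fs / 2)], btype="band")
    energy = filtfilt(b, a, ecg) ** 2
    peaks, _ = find_peaks(energy,
                          distance=int(0.25 * fs),                      # ~240 bpm max
                          height=np.mean(energy) + 2 * np.std(energy))  # crude threshold
    return peaks

def detection_error_rate(detected, annotated, fs, tol_s=0.05):
    """(FN + FP) / total annotated beats, matching within tol_s seconds (approximate)."""
    tol = int(tol_s * fs)
    matched = sum(any(abs(d - a) <= tol for d in detected) for a in annotated)
    fn = len(annotated) - matched
    fp = len(detected) - matched
    return 100.0 * (fn + fp) / len(annotated)

# usage: peaks = detect_r_peaks(ecg, fs=360); der = detection_error_rate(peaks, ann, 360)
```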
Non-intrusive practitioner pupil detection for unmodified microscope oculars.
Fuhl, Wolfgang; Santini, Thiago; Reichert, Carsten; Claus, Daniel; Herkommer, Alois; Bahmani, Hamed; Rifai, Katharina; Wahl, Siegfried; Kasneci, Enkelejda
2016-12-01
Modern microsurgery is a long and complex task requiring the surgeon to handle multiple microscope controls while performing the surgery. Eye tracking provides an additional means of interaction for the surgeon that could be used to alleviate this situation, diminishing surgeon fatigue and surgery time, thus decreasing risks of infection and human error. In this paper, we introduce a novel algorithm for pupil detection tailored for eye images acquired through an unmodified microscope ocular. The proposed approach, the Hough transform, and six state-of-the-art pupil detection algorithms were evaluated on over 4000 hand-labeled images acquired from a digital operating microscope with an integrated non-intrusive monitoring system for the surgeon's eyes. Our results show that the proposed method reaches detection rates up to 71% for an error of ≈3% w.r.t. the input image diagonal; none of the state-of-the-art pupil detection algorithms performed satisfactorily. The algorithm and hand-labeled data set can be downloaded at: www.ti.uni-tuebingen.de/perception. Copyright © 2016 Elsevier Ltd. All rights reserved.
Robust crop and weed segmentation under uncontrolled outdoor illumination.
Jeon, Hong Y; Tian, Lei F; Zhu, Heping
2011-01-01
An image processing algorithm for detecting individual weeds was developed and evaluated. Weed detection processes included were normalized excessive green conversion, statistical threshold value estimation, adaptive image segmentation, median filter, morphological feature calculation and Artificial Neural Network (ANN). The developed algorithm was validated for its ability to identify and detect weeds and crop plants under uncontrolled outdoor illuminations. A machine vision implementing field robot captured field images under outdoor illuminations and the image processing algorithm automatically processed them without manual adjustment. The errors of the algorithm, when processing 666 field images, ranged from 2.1 to 2.9%. The ANN correctly detected 72.6% of crop plants from the identified plants, and considered the rest as weeds. However, the ANN identification rates for crop plants were improved up to 95.1% by addressing the error sources in the algorithm. The developed weed detection and image processing algorithm provides a novel method to identify plants against soil background under the uncontrolled outdoor illuminations, and to differentiate weeds from crop plants. Thus, the proposed new machine vision and processing algorithm may be useful for outdoor applications including plant specific direct applications (PSDA).
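A sketch of the vegetation-segmentation front end described above (normalized excess-green conversion followed by automatic thresholding and a median filter); the statistical threshold estimation and the ANN stage that separates crop plants from weeds are not reproduced, and Otsu is used here only as a stand-in thresholding rule.

```python
# Sketch: excess-green index plus Otsu thresholding to separate plants from soil.
import cv2
import numpy as np

def excess_green(bgr_image):
    b, g, r = cv2.split(bgr_image.astype(np.float32))
    total = b + g + r + 1e-6
    bn, gn, rn = b / total, g / total, r / total   # chromatic coordinates
    return 2 * gn - rn - bn                        # ExG, roughly in [-1, 2]

def vegetation_mask(bgr_image):
    exg = excess_green(bgr_image)
    exg8 = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(exg8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return cv2.medianBlur(mask, 5)                 # median filter, as in the pipeline

# usage: mask = vegetation_mask(cv2.imread("field.jpg"))
```

Morphological features of the connected components in the mask would then feed the ANN classifier.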
Modeling And Detecting Anomalies In Scada Systems
NASA Astrophysics Data System (ADS)
Svendsen, Nils; Wolthusen, Stephen
The detection of attacks and intrusions based on anomalies is hampered by the limits of specificity underlying the detection techniques. However, in the case of many critical infrastructure systems, domain-specific knowledge and models can impose constraints that potentially reduce error rates. At the same time, attackers can use their knowledge of system behavior to mask their manipulations, causing adverse effects to be observed only after a significant period of time. This paper describes elementary statistical techniques that can be applied to detect anomalies in critical infrastructure networks. A SCADA system employed in liquefied natural gas (LNG) production is used as a case study.
Performance Evaluation of Solar Blind NLOS Ultraviolet Communication Systems
2008-12-01
noise and signal count statistical distributions. Then we further link key system parameters such as path loss and communication bit error rate (BER... quantum noise limited photon-counting detection. These benefits can now begin to be realized based on technological advances in both miniaturized... multiplication gain of 10⁵~10⁷, high responsivity of 62 A/W, large detection area of a few cm², reasonable quantum efficiency of 15%, and low dark current
Renshaw, A A; Lezon, K M; Wilbur, D C
2001-04-25
Routine quality control rescreening often is used to calculate the false-negative rate (FNR) of gynecologic cytology. Theoretic analysis suggests that this is not appropriate, due to the high FNR of rescreening and the inability to actually measure it. The authors sought to determine the FNR of manual rescreening in a large, prospective, two-arm clinical trial using an analytic instrument in the evaluation. The results of the Autopap System Clinical Trial, encompassing 25,124 analyzed slides, were reviewed. The false-negative and false-positive rates at various thresholds were determined for routine primary screening, routine rescreening, Autopap primary screening, and Autopap rescreening by using a simple, standard methodology. The FNR of routine manual rescreening at the level of atypical squamous cells of undetermined significance (ASCUS) was 73%, more than 3 times the FNR of primary screening; 11 cases were detected. The FNR of Autopap rescreening was 34%; 80 cases were detected. Routine manual rescreening decreased the laboratory FNR by less than 1%; Autopap rescreening reduced the overall laboratory FNR by 5.7%. At the same time, the false-positive rate for Autopap screening was significantly less than that of routine manual screening at the ASCUS level (4.7% vs. 5.6%; P < 0.0001). Rescreening with the Autopap system remained more sensitive than manual rescreening at the low grade squamous intraepithelial lesions threshold (FNR of 58.8% vs. 100%, respectively), although the number of cases rescreened was low. Routine manual rescreening cannot be used to calculate the FNR of primary screening. Routine rescreening is an extremely ineffective method to detect error and thereby decrease a laboratory's FNR. The Autopap system is a much more effective way of detecting errors within a laboratory and reduces the laboratory's FNR by greater than 25%.
NASA Astrophysics Data System (ADS)
Bechet, P.; Mitran, R.; Munteanu, M.
2013-08-01
Non-contact methods for the assessment of vital signs are of great interest to specialists due to the benefits obtained in both medical and special applications, such as those for surveillance, monitoring, and search and rescue. This paper investigates the possibility of implementing a digital processing algorithm based on MUSIC (Multiple Signal Classification) parametric spectral estimation in order to reduce the observation time needed to accurately measure the heart rate. It demonstrates that, by properly dimensioning the signal subspace, the MUSIC algorithm can be optimized to accurately assess the heart rate during an 8-28 s time interval. The validation of the processing algorithm's performance was achieved by minimizing the mean error of the heart rate after performing simultaneous comparative measurements on several subjects. In order to calculate the error, the reference value of the heart rate was measured using a classic measurement system through direct contact.
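A minimal MUSIC pseudospectrum sketch over the heart-rate band, assuming a uniformly sampled vital-sign waveform; the covariance order, signal-subspace dimension and scan band are my choices, not the paper's dimensioning.

```python
# Sketch: estimate a covariance matrix from sliding windows of the waveform,
# split signal and noise subspaces, and scan the MUSIC pseudospectrum over a
# plausible heart-rate band (0.8-3 Hz, i.e. 48-180 bpm).
import numpy as np

def music_pseudospectrum(x, fs, order=20, signal_dim=2, freqs=None):
    if freqs is None:
        freqs = np.linspace(0.8, 3.0, 441)
    snaps = np.array([x[i:i + order] for i in range(len(x) - order)])
    R = snaps.T @ snaps / len(snaps)                # sample covariance, order x order
    eigval, eigvec = np.linalg.eigh(R)              # eigenvalues ascending
    noise = eigvec[:, : order - signal_dim]         # noise subspace
    n = np.arange(order)
    p = []
    for f in freqs:
        a = np.exp(2j * np.pi * f * n / fs)         # steering vector at frequency f
        p.append(1.0 / np.linalg.norm(noise.conj().T @ a) ** 2)
    return freqs, np.array(p)

# synthetic check: 1.2 Hz "heartbeat" buried in noise
fs = 50.0
t = np.arange(0, 20, 1 / fs)
x = np.sin(2 * np.pi * 1.2 * t) + 0.5 * np.random.randn(t.size)
f, p = music_pseudospectrum(x - x.mean(), fs)
print("estimated heart rate: %.1f bpm" % (60 * f[np.argmax(p)]))
```

Shortening the observation window while keeping the peak stable is essentially the trade-off the paper studies.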
Error analysis for fast scintillator-based inertial confinement fusion burn history measurements
NASA Astrophysics Data System (ADS)
Lerche, R. A.; Ognibene, T. J.
1999-01-01
Plastic scintillator material acts as a neutron-to-light converter in instruments that make inertial confinement fusion burn history measurements. Light output for a detected neutron in current instruments has a fast rise time (<20 ps) and a relatively long decay constant (1.2 ns). For a burst of neutrons whose duration is much shorter than the decay constant, instantaneous light output is approximately proportional to the integral of the neutron interaction rate with the scintillator material. Burn history is obtained by deconvolving the exponential decay from the recorded signal. The error in estimating signal amplitude for these integral measurements is calculated and compared with a direct measurement in which light output is linearly proportional to the interaction rate.
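The deconvolution step follows from the convolution model: if the recorded light is the interaction rate convolved with a single-exponential decay of constant τ, then the rate can be recovered as R(t) = dL/dt + L(t)/τ. A synthetic sketch under that assumption (all numbers illustrative):

```python
# Sketch: recover a short burn history from a signal convolved with a 1.2 ns
# exponential scintillator decay, using R(t) = dL/dt + L(t)/tau.
import numpy as np

tau = 1.2e-9                                         # 1.2 ns decay constant
dt = 5e-12                                           # 5 ps sampling (hypothetical)
t = np.arange(0, 10e-9, dt)
burn = np.exp(-0.5 * ((t - 2e-9) / 100e-12) ** 2)    # 100 ps wide Gaussian burn history

kernel = np.exp(-t / tau)
recorded = np.convolve(burn, kernel)[: t.size] * dt  # what the detector records

recovered = np.gradient(recorded, dt) + recorded / tau
peak_err = abs(t[np.argmax(recovered)] - t[np.argmax(burn)])
print("peak timing error: %.1f ps" % (peak_err / 1e-12))
```

In practice the derivative term amplifies recording noise, which is why the error analysis in the paper matters.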
Zhou, Shuntai; Jones, Corbin; Mieczkowski, Piotr
2015-01-01
ABSTRACT Validating the sampling depth and reducing sequencing errors are critical for studies of viral populations using next-generation sequencing (NGS). We previously described the use of Primer ID to tag each viral RNA template with a block of degenerate nucleotides in the cDNA primer. We now show that low-abundance Primer IDs (offspring Primer IDs) are generated due to PCR/sequencing errors. These artifactual Primer IDs can be removed using a cutoff model for the number of reads required to make a template consensus sequence. We have modeled the fraction of sequences lost due to Primer ID resampling. For a typical sequencing run, less than 10% of the raw reads are lost to offspring Primer ID filtering and resampling. The remaining raw reads are used to correct for PCR resampling and sequencing errors. We also demonstrate that Primer ID reveals bias intrinsic to PCR, especially at low template input or utilization. cDNA synthesis and PCR convert ca. 20% of RNA templates into recoverable sequences, and 30-fold sequence coverage recovers most of these template sequences. We have directly measured the residual error rate to be around 1 in 10,000 nucleotides. We use this error rate and the Poisson distribution to define the cutoff to identify preexisting drug resistance mutations at low abundance in an HIV-infected subject. Collectively, these studies show that >90% of the raw sequence reads can be used to validate template sampling depth and to dramatically reduce the error rate in assessing a genetically diverse viral population using NGS. IMPORTANCE Although next-generation sequencing (NGS) has revolutionized sequencing strategies, it suffers from serious limitations in defining sequence heterogeneity in a genetically diverse population, such as HIV-1 due to PCR resampling and PCR/sequencing errors. The Primer ID approach reveals the true sampling depth and greatly reduces errors. Knowing the sampling depth allows the construction of a model of how to maximize the recovery of sequences from input templates and to reduce resampling of the Primer ID so that appropriate multiplexing can be included in the experimental design. With the defined sampling depth and measured error rate, we are able to assign cutoffs for the accurate detection of minority variants in viral populations. This approach allows the power of NGS to be realized without having to guess about sampling depth or to ignore the problem of PCR resampling, while also being able to correct most of the errors in the data set. PMID:26041299
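The cutoff logic can be sketched as follows: given a measured residual error rate and a known number of template consensus sequences, the Poisson distribution gives the minimum variant count that is unlikely to be explained by errors alone. The error rate matches the figure quoted in the abstract; the template counts and significance level are hypothetical.

```python
# Sketch: Poisson-based count cutoff for calling minority variants above the
# residual error rate of the Primer ID consensus pipeline.
from math import exp, factorial

def poisson_sf(k, mu):
    """P(X >= k) for X ~ Poisson(mu)."""
    return 1.0 - sum(exp(-mu) * mu**i / factorial(i) for i in range(k))

def variant_cutoff(n_templates, error_rate=1e-4, alpha=1e-3):
    mu = n_templates * error_rate          # expected erroneous calls at one position
    c = 1
    while poisson_sf(c, mu) > alpha:
        c += 1
    return c

for n in (1000, 10000, 50000):
    print(n, "templates -> call variants seen at least", variant_cutoff(n), "times")
```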
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoffman, D; Dyer, B; Kumaran Nair, C
Purpose: The Integral Quality Monitor (IQM), developed by iRT Systems GmbH (Koblenz, Germany), is a large-area, linac-mounted ion chamber used to monitor photon fluence during patient treatment. Our previous work evaluated the change of the ion chamber's response to deviations from static 1×1 cm² and 10×10 cm² photon beams and other characteristics integral to use in external beam detection. The aim of this work is to simulate two external beam radiation delivery errors, quantify the detection of the simulated errors and evaluate the reduction in patient harm resulting from detection. Methods: Two well-documented radiation oncology delivery errors were selected for simulation. The first error was recreated by modifying a wedged whole-breast treatment, removing the physical wedge and calculating the planned dose with Pinnacle TPS (Philips Radiation Oncology Systems, Fitchburg, WI). The second error was recreated by modifying a static-gantry IMRT pharyngeal tonsil plan to be delivered in 3 unmodulated fractions. A radiation oncologist evaluated the dose for the simulated errors and predicted morbidity and mortality commensurate with the originally reported toxicity, indicating that the reported errors were approximately simulated. The ion chamber signal of unmodified treatments was compared to the simulated error signal and evaluated in Pinnacle TPS, again with radiation oncologist prediction of simulated patient harm. Results: Previous work established that transmission detector system measurements are stable within 0.5% standard deviation (SD). Errors causing a signal change greater than 20 SD (10%) were considered detected. The whole-breast and pharyngeal tonsil IMRT simulated errors increased the signal by 215% and 969%, respectively, indicating error detection after the first fraction and IMRT segment, respectively. Conclusion: The transmission detector system demonstrated utility in detecting clinically significant errors and reducing patient toxicity/harm in simulated external beam delivery. Future work will evaluate detection of other smaller-magnitude delivery errors.
Sample-size needs for forestry herbicide trials
S.M. Zedaker; T.G. Gregoire; James H. Miller
1994-01-01
Forest herbicide experiments are increasingly being designed to evaluate smaller treatment differences when comparing existing effective treatments, tank mix ratios, surfactants, and new low-rate products. The ability to detect small differences in efficacy is dependent upon the relationship among sample size, type I and II error probabilities, and the coefficients of...
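The underlying relationship can be sketched with the usual two-sample normal-approximation formula for sample size; the effect size and variability values below are placeholders, not figures from the trial data.

```python
# Sketch: per-treatment sample size to detect a difference `delta` given a
# standard deviation `sigma`, two-sided Type I error alpha, and power 1-beta.
from math import ceil
from statistics import NormalDist

def n_per_treatment(delta, sigma, alpha=0.05, power=0.80):
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) * sigma / delta) ** 2)

# e.g. detecting a 10-point efficacy difference with a 15-point standard deviation
print(n_per_treatment(delta=10.0, sigma=15.0), "plots per treatment")
```

Halving the detectable difference roughly quadruples the required sample size, which is the practical tension the paper addresses.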
Chong, Jo Woon; Dao, Duy K; Salehizadeh, S M A; McManus, David D; Darling, Chad E; Chon, Ki H; Mendelson, Yitzhak
2014-11-01
Motion and noise artifacts (MNA) are a serious obstacle in utilizing photoplethysmogram (PPG) signals for real-time monitoring of vital signs. We present a MNA detection method which can provide a clean vs. corrupted decision on each successive PPG segment. For motion artifact detection, we compute four time-domain parameters: (1) standard deviation of peak-to-peak intervals (2) standard deviation of peak-to-peak amplitudes (3) standard deviation of systolic and diastolic interval ratios, and (4) mean standard deviation of pulse shape. We have adopted a support vector machine (SVM) which takes these parameters from clean and corrupted PPG signals and builds a decision boundary to classify them. We apply several distinct features of the PPG data to enhance classification performance. The algorithm we developed was verified on PPG data segments recorded by simulation, laboratory-controlled and walking/stair-climbing experiments, respectively, and we compared several well-established MNA detection methods to our proposed algorithm. All compared detection algorithms were evaluated in terms of motion artifact detection accuracy, heart rate (HR) error, and oxygen saturation (SpO2) error. For laboratory controlled finger, forehead recorded PPG data and daily-activity movement data, our proposed algorithm gives 94.4, 93.4, and 93.7% accuracies, respectively. Significant reductions in HR and SpO2 errors (2.3 bpm and 2.7%) were noted when the artifacts that were identified by SVM-MNA were removed from the original signal than without (17.3 bpm and 5.4%). The accuracy and error values of our proposed method were significantly higher and lower, respectively, than all other detection methods. Another advantage of our method is its ability to provide highly accurate onset and offset detection times of MNAs. This capability is important for an automated approach to signal reconstruction of only those data points that need to be reconstructed, which is the subject of the companion paper to this article. Finally, our MNA detection algorithm is real-time realizable as the computational speed on the 7-s PPG data segment was found to be only 7 ms with a Matlab code.
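A rough sketch of the four time-domain features and the SVM decision stage; the peak picking, the systolic/diastolic surrogate and all thresholds are my simplifications, not the authors' exact definitions, and the training data are placeholders.

```python
# Sketch: per-segment MNA features (variability of peak intervals, peak
# amplitudes, a systolic/diastolic interval surrogate, and pulse-shape spread)
# fed to an SVM that labels segments clean (0) or corrupted (1).
import numpy as np
from scipy.signal import find_peaks
from sklearn.svm import SVC

def mna_features(ppg_segment, fs):
    x = np.asarray(ppg_segment, dtype=float)
    peaks, _ = find_peaks(x, distance=int(0.3 * fs))
    if len(peaks) < 3:
        return [0.0, 0.0, 0.0, 0.0]                  # too few beats to characterise
    pp_intervals = np.diff(peaks) / fs               # peak-to-peak intervals (s)
    pp_amplitudes = x[peaks]                         # peak amplitudes
    ratios, shape_stds = [], []
    for a, b in zip(peaks[:-1], peaks[1:]):
        trough = a + int(np.argmin(x[a:b]))          # minimum between the two peaks
        if b > trough > a:
            ratios.append((trough - a) / (b - trough))   # crude interval ratio per beat
        shape_stds.append(np.std(x[a:b]))            # spread of each pulse shape
    return [float(np.std(pp_intervals)),
            float(np.std(pp_amplitudes)),
            float(np.std(ratios)) if ratios else 0.0,
            float(np.mean(shape_stds))]

# Decision stage (X_train, y_train from labelled clean/corrupted segments):
# clf = SVC(kernel="rbf").fit(X_train, y_train)
# is_corrupted = clf.predict([mna_features(segment, fs=100)])
```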
Pavone, Enea Francesco; Tieri, Gaetano; Rizza, Giulia; Tidoni, Emmanuele; Grisoni, Luigi; Aglioti, Salvatore Maria
2016-01-13
Brain monitoring of errors in one's own and other's actions is crucial for a variety of processes, ranging from the fine-tuning of motor skill learning to important social functions, such as reading out and anticipating the intentions of others. Here, we combined immersive virtual reality and EEG recording to explore whether embodying the errors of an avatar by seeing it from a first-person perspective may activate the error monitoring system in the brain of an onlooker. We asked healthy participants to observe, from a first- or third-person perspective, an avatar performing a correct or an incorrect reach-to-grasp movement toward one of two virtual mugs placed on a table. At the end of each trial, participants reported verbally how much they embodied the avatar's arm. Ratings were maximal in first-person perspective, indicating that immersive virtual reality can be a powerful tool to induce embodiment of an artificial agent, even through mere visual perception and in the absence of any cross-modal boosting. Observation of erroneous grasping from a first-person perspective enhanced error-related negativity and medial-frontal theta power in the trials where human onlookers embodied the virtual character, hinting at the tight link between early, automatic coding of error detection and sense of embodiment. Error positivity was similar in 1PP and 3PP, suggesting that conscious coding of errors is similar for self and other. Thus, embodiment plays an important role in activating specific components of the action monitoring system when others' errors are coded as if they are one's own errors. Detecting errors in other's actions is crucial for social functions, such as reading out and anticipating the intentions of others. Using immersive virtual reality and EEG recording, we explored how the brain of an onlooker reacted to the errors of an avatar seen from a first-person perspective. We found that mere observation of erroneous actions enhances electrocortical markers of error detection in the trials where human onlookers embodied the virtual character. Thus, the cerebral system for action monitoring is maximally activated when others' errors are coded as if they are one's own errors. The results have important implications for understanding how the brain can control the external world and thus creating new brain-computer interfaces. Copyright © 2016 the authors 0270-6474/16/360268-12$15.00/0.
Effects of Contextual Sight-Singing and Aural Skills Training on Error-Detection Abilities.
ERIC Educational Resources Information Center
Sheldon, Deborah A.
1998-01-01
Examines the effects of contextual sight-singing and ear training on pitch and rhythm error detection abilities among undergraduate instrumental music education majors. Shows that additional training produced better error detection, particularly with rhythm errors and in one-part examples. Maintains that differences attributable to texture were…
Reliable estimation of orbit errors in spaceborne SAR interferometry. The network approach
NASA Astrophysics Data System (ADS)
Bähr, Hermann; Hanssen, Ramon F.
2012-12-01
An approach to improve orbital state vectors by orbit error estimates derived from residual phase patterns in synthetic aperture radar interferograms is presented. For individual interferograms, an error representation by two parameters is motivated: the baseline error in cross-range and the rate of change of the baseline error in range. For their estimation, two alternatives are proposed: a least squares approach that requires prior unwrapping and a less reliable gridsearch method handling the wrapped phase. In both cases, reliability is enhanced by mutual control of error estimates in an overdetermined network of linearly dependent interferometric combinations of images. Thus, systematic biases, e.g., due to unwrapping errors, can be detected and iteratively eliminated. Regularising the solution by a minimum-norm condition results in quasi-absolute orbit errors that refer to particular images. For the 31 images of a sample ENVISAT dataset, orbit corrections with a mutual consistency on the millimetre level have been inferred from 163 interferograms. The method itself qualifies by reliability and rigorous geometric modelling of the orbital error signal but does not consider interfering large scale deformation effects. However, a separation may be feasible in a combined processing with persistent scatterer approaches or by temporal filtering of the estimates.
Augmented burst-error correction for UNICON laser memory. [digital memory
NASA Technical Reports Server (NTRS)
Lim, R. S.
1974-01-01
A single-burst-error correction system is described for data stored in the UNICON laser memory. In the proposed system, a long fire code with code length n greater than 16,768 bits was used as an outer code to augment an existing inner shorter fire code for burst error corrections. The inner fire code is a (80,64) code shortened from the (630,614) code, and it is used to correct a single-burst-error on a per-word basis with burst length b less than or equal to 6. The outer code, with b less than or equal to 12, would be used to correct a single-burst-error on a per-page basis, where a page consists of 512 32-bit words. In the proposed system, the encoding and error detection processes are implemented by hardware. A minicomputer, currently used as a UNICON memory management processor, is used on a time-demanding basis for error correction. Based upon existing error statistics, this combination of an inner code and an outer code would enable the UNICON system to obtain a very low error rate in spite of flaws affecting the recorded data.
Imperfect pathogen detection from non-invasive skin swabs biases disease inference
DiRenzo, Graziella V.; Grant, Evan H. Campbell; Longo, Ana; Che-Castaldo, Christian; Zamudio, Kelly R.; Lips, Karen
2018-01-01
1. Conservation managers rely on accurate estimates of disease parameters, such as pathogen prevalence and infection intensity, to assess disease status of a host population. However, these disease metrics may be biased if low-level infection intensities are missed by sampling methods or laboratory diagnostic tests. These false negatives underestimate pathogen prevalence and overestimate mean infection intensity of infected individuals. 2. Our objectives were two-fold. First, we quantified false negative error rates of Batrachochytrium dendrobatidis on non-invasive skin swabs collected from an amphibian community in El Copé, Panama. We swabbed amphibians twice in sequence, and we used a recently developed hierarchical Bayesian estimator to assess disease status of the population. Second, we developed a novel hierarchical Bayesian model to simultaneously account for imperfect pathogen detection from field sampling and laboratory diagnostic testing. We evaluated the performance of the model using simulations and varying sampling design to quantify the magnitude of bias in estimates of pathogen prevalence and infection intensity. 3. We show that Bd detection probability from skin swabs was related to host infection intensity, where Bd infections < 10 zoospores have < 95% probability of being detected. If imperfect Bd detection was not considered, then Bd prevalence was underestimated by as much as 16%. In the Bd-amphibian system, this indicates a need to correct for imperfect pathogen detection caused by skin swabs in persisting host communities with low-level infections. More generally, our results have implications for study designs in other disease systems, particularly those with similar objectives, biology, and sampling decisions. 4. Uncertainty in pathogen detection is an inherent property of most sampling protocols and diagnostic tests, where the magnitude of bias depends on the study system, type of infection, and false negative error rates. Given that it may be difficult to know this information in advance, we advocate that the most cautious approach is to assume all errors are possible and to accommodate them by adjusting sampling designs. The modeling framework presented here improves the accuracy in estimating pathogen prevalence and infection intensity.
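A small simulation illustrates the direction of the bias the authors describe: when swab-level detection probability rises with infection intensity, naive prevalence from single swabs is biased low and the mean intensity of detected infections is biased high. The detection curve and load distribution below are hypothetical, not fitted to the El Copé data.

```python
# Sketch: bias in prevalence and mean infection intensity under
# intensity-dependent false negatives.
import numpy as np

rng = np.random.default_rng(1)
n_hosts = 2000
infected = rng.random(n_hosts) < 0.40                       # true prevalence 40%
# log-normal zoospore loads for infected hosts (many light infections)
load = np.where(infected, 10 ** rng.normal(0.8, 0.8, n_hosts), 0.0)

def p_detect(zoospores):
    """Logistic in log10 load: infections below ~10 zoospores are often missed."""
    p = np.zeros_like(zoospores)
    pos = zoospores > 0
    p[pos] = 1 / (1 + np.exp(-2.5 * (np.log10(zoospores[pos]) - 0.7)))
    return p

detected = rng.random(n_hosts) < p_detect(load)
print("true prevalence:               %.2f" % infected.mean())
print("naive (observed) prevalence:   %.2f" % detected.mean())
print("true mean load (infected):     %.1f" % load[infected].mean())
print("observed mean load (detected): %.1f" % load[detected].mean())
```

Repeated swabs per individual, as used in the study, are what allow the detection curve itself to be estimated and the bias corrected.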
Bledsoe, Sarah; Van Buskirk, Alex; Falconer, R James; Hollon, Andrew; Hoebing, Wendy; Jokic, Sladan
2018-02-01
The effectiveness of barcode-assisted medication preparation (BCMP) technology in detecting oral liquid dose preparation errors was evaluated. From June 1, 2013, through May 31, 2014, a total of 178,344 oral doses were processed at Children's Mercy, a 301-bed pediatric hospital, through an automated workflow management system. Doses containing errors detected by the system's barcode-scanning system or classified as rejected by the pharmacist were further reviewed. Errors intercepted by the barcode-scanning system were classified as (1) expired product, (2) incorrect drug, (3) incorrect concentration, and (4) technological error. Pharmacist-rejected doses were categorized into 6 categories based on the root cause of the preparation error: (1) expired product, (2) incorrect concentration, (3) incorrect drug, (4) incorrect volume, (5) preparation error, and (6) other. Of the 178,344 doses examined, 3,812 (2.1%) errors were detected by either the barcode-assisted scanning system (1.8%, n = 3,291) or a pharmacist (0.3%, n = 521). The 3,291 errors prevented by the barcode-assisted system were classified most commonly as technological error and incorrect drug, followed by incorrect concentration and expired product. Errors detected by pharmacists were also analyzed; these 521 errors were most often classified as incorrect volume, preparation error, expired product, other, incorrect drug, and incorrect concentration. BCMP technology detected errors in 1.8% of pediatric oral liquid medication doses prepared in an automated workflow management system, with errors most commonly attributed to technological problems or incorrect drugs. Pharmacists rejected an additional 0.3% of studied doses. Copyright © 2018 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
An audit on the reporting of critical results in a tertiary institute.
Rensburg, Megan A; Nutt, Louise; Zemlin, Annalise E; Erasmus, Rajiv T
2009-03-01
Critical result reporting is a requirement for accreditation by accreditation bodies worldwide. Accurate, prompt communication of results to the clinician by the laboratory is of extreme importance. Repeating of the critical result by the recipient has been used as a means to improve the accuracy of notification. Our objective was to assess the accuracy of notification of critical chemical pathology laboratory results telephoned out to clinicians/clinical areas. We hypothesize that read-back of telephoned critical laboratory results by the recipient may improve the accuracy of the notification. This was a prospective study, where all critical results telephoned by chemical pathologists and registrars at Tygerberg Hospital were monitored for one month. The recipient was required to repeat the result (patient name, folder number and test results). Any error, as well as the designation of the recipient was logged. Of 472 outgoing telephone calls, 51 errors were detected (error rate 10.8%). Most errors were made when recording the folder number (64.7%), with incorrect patient name being the lowest (5.9%). Calls to the clinicians had the highest error rate (20%), most of them being the omission of recording folder numbers. Our audit highlights the potential errors during the post-analytical phase of laboratory testing. The importance of critical result reporting is still poorly recognized in South Africa. Implementation of a uniform accredited practice for communication of critical results can reduce error and improve patient safety.
A concatenated coding scheme for error control
NASA Technical Reports Server (NTRS)
Lin, S.
1985-01-01
A concatenated coding scheme for error control in data communications was analyzed. The inner code is used for both error correction and detection; however, the outer code is used only for error detection. A retransmission is requested if either the inner code decoder fails to make a successful decoding or the outer code decoder detects the presence of errors after the inner code decoding. Probability of undetected error of the proposed scheme is derived. An efficient method for computing this probability is presented. Throughput efficiency of the proposed error control scheme incorporated with a selective-repeat ARQ retransmission strategy is analyzed.
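As a rough illustration of how the inner/outer division of labor interacts with a selective-repeat ARQ strategy, the following Monte Carlo sketch uses invented code parameters (not those analyzed in the report) and a binary symmetric channel; blocks whose error weight exceeds the inner code's correction power are simply assumed to be detected and to trigger retransmission of the frame.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model of a concatenated ARQ scheme with invented parameters: an
# (n_i, k_i) inner block code corrects up to t bit errors per block and is
# assumed to detect heavier error patterns; an outer check over the whole
# frame triggers a selective-repeat retransmission whenever any block could
# not be corrected.
n_i, k_i, t = 63, 45, 3          # assumed inner code length / dimension / power
blocks_per_frame = 32
p_bit = 1e-2                     # binary symmetric channel crossover probability
n_frames = 20000

accepted = 0
for _ in range(n_frames):
    errs = rng.binomial(n_i, p_bit, size=blocks_per_frame)
    # Inner decoder: <= t errors are corrected; heavier patterns are treated as
    # detected here, so undetected inner decoding errors are ignored in this toy.
    if np.all(errs <= t):
        accepted += 1            # outer detector finds no residual error

p_accept = accepted / n_frames
throughput = (k_i / n_i) * p_accept   # fraction of channel bits delivering data
print(f"frame acceptance probability: {p_accept:.3f}")
print(f"selective-repeat throughput : {throughput:.3f}")
```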
NASA Astrophysics Data System (ADS)
Celik, Cihangir
Advances in microelectronics have resulted in sub-micrometer electronic technologies, as predicted by Moore's law (1965), which states that the number of transistors in a given space doubles every two years. Most memory architectures available today have sub-micrometer transistor dimensions. The International Technology Roadmap for Semiconductors (ITRS), a continuation of Moore's law, predicts that Dynamic Random Access Memory (DRAM) will have an average half-pitch size of 50 nm and Microprocessor Units (MPU) will have an average gate length of 30 nm over the period 2008-2012. Decreases in the dimensions satisfy the producer and consumer requirements of low power consumption, more data storage for a given space, faster clock speed, and portability of integrated circuits (IC), particularly memories. On the other hand, these properties also lead to a higher susceptibility of IC designs to temperature, magnetic interference, power-supply and environmental noise, and radiation. Radiation can directly or indirectly affect device operation. When a single energetic particle strikes a sensitive node in a micro-electronic device, it can cause a permanent or transient malfunction in the device. This behavior is called a Single Event Effect (SEE). SEEs are mostly transient errors that generate an electric pulse which alters the state of a logic node in the memory device without having a permanent effect on the functionality of the device. This is called a Single Event Upset (SEU) or Soft Error. In contrast to SEUs, Single Event Latchup (SEL), Single Event Gate Rupture (SEGR), and Single Event Burnout (SEB) have permanent effects on device operation, and a system reset or recovery is needed to return to proper operation. The rate at which a device or system encounters soft errors is defined as the Soft Error Rate (SER). The semiconductor industry has been struggling with SEEs and is taking the necessary measures in order to continue to improve system designs in nano-scale technologies. Prevention of SEEs has been studied and applied in the semiconductor industry by including radiation protection precautions in the system architecture or by using corrective algorithms in the system operation. Decreasing the 10B content (20% of natural boron) in the natural boron of borophosphosilicate glass (BPSG) layers that are conventionally used in the fabrication of semiconductor devices was one of the major radiation protection approaches for the system architecture. Neutron interaction in the BPSG layer was the origin of the SEEs because of the 10B(n,alpha)7Li reaction products. Both of the particles produced are capable of ionization in the silicon substrate region, whose thickness is comparable to the ranges of these particles. Using the soft error phenomenon in exactly the opposite manner from the semiconductor industry can provide a new neutron detection system based on the SERs in semiconductor memories. By investigating the soft error mechanisms in available semiconductor memories and enhancing the soft error occurrences in these devices, one can convert memory-equipped intelligent systems into portable, power-efficient, direction-dependent neutron detectors. The Neutron Intercepting Silicon Chip (NISC) project aims to achieve this goal by introducing 10B-enriched BPSG layers into the semiconductor memory architectures.
This research addresses the development of a simulation tool, the NISC Soft Error Analysis Tool (NISCSAT), for soft error modeling and analysis in semiconductor memories to provide basic design considerations for the NISC. NISCSAT performs particle transport and calculates the soft error probabilities, or SER, depending on the energy depositions of the particles in a given memory node model of the NISC. Soft error measurements were performed with commercially available, off-the-shelf semiconductor memories and microprocessors to observe soft error variations with the neutron flux and memory supply voltage. Measurement results show that soft errors in the memories increase proportionally with the neutron flux, whereas they decrease with increasing supply voltage. NISC design considerations in this dissertation include the effects of device scaling, 10B content in the BPSG layer, incoming neutron energy, and the critical charge of the node. NISCSAT simulations were performed with various memory node models to account for these effects. Device scaling simulations showed that any further increase in the thickness of the BPSG layer beyond 2 μm causes self-shielding of the incoming neutrons by the BPSG layer and results in lower detection efficiencies. Moreover, if the BPSG layer is located more than 4 μm away from the depletion region in the node, there are no soft errors in the node because both reaction products have shorter ranges in silicon or any other possible node layers. Calculation results regarding the critical charge indicated that the mean charge deposition of the reaction products in the sensitive volume of the node is about 15 fC. It is evident that the NISC design should have a memory architecture with a critical charge of 15 fC or less to obtain higher detection efficiencies. Moreover, the sensitive volume should be placed in close proximity to the BPSG layers so that its location is within the range of the alpha and 7Li particles. Results showed that the distance between the BPSG layer and the sensitive volume should be less than 2 μm to increase the detection efficiency of the NISC. Incoming neutron energy was also investigated by simulation, and the results showed that the NISC neutron detection efficiency is governed by the neutron cross section of the 10B(n,alpha)7Li reaction; e.g., the ratio of the thermal (0.0253 eV) to fast (2 MeV) neutron detection efficiencies is approximately 8000:1. Environmental conditions and their effects on NISC performance were also studied in this research. Cosmic rays were modeled and simulated via NISCSAT to investigate the detection reliability of the NISC. Simulation results show that cosmic rays account for less than 2% of the soft errors for thermal neutron detection. On the other hand, fast neutron detection by the NISC, which already has poor efficiency due to the low neutron cross sections, becomes almost impossible at higher altitudes where the cosmic ray fluxes and their energies are higher. NISCSAT simulations regarding the dependence of NISC soft errors on temperature and electromagnetic fields show no significant effect on the NISC detection efficiency. Furthermore, the detection efficiency of the NISC decreases with both air humidity and the use of moderators, since the incoming neutrons scatter away before reaching the memory surface.
Liu, Mao Tong; Lim, Han Chuen
2014-09-22
When implementing O-band quantum key distribution on optical fiber transmission lines carrying C-band data traffic, noise photons that arise from spontaneous Raman scattering or insufficient filtering of the classical data channels could cause the quantum bit-error rate to exceed the security threshold. In this case, a photon heralding scheme may be used to reject the uncorrelated noise photons in order to restore the quantum bit-error rate to a low level. However, the secure key rate would suffer unless one uses a heralded photon source with sufficiently high heralding rate and heralding efficiency. In this work we demonstrate a heralded photon source that has a heralding efficiency that is as high as 74.5%. One disadvantage of a typical heralded photon source is that the long deadtime of the heralding detector results in a significant drop in the heralding rate. To counter this problem, we propose a passively spatial-multiplexed configuration at the heralding arm. Using two heralding detectors in this configuration, we obtain an increase in the heralding rate by 37% and a corresponding increase in the heralded photon detection rate by 16%. We transmit the O-band photons over 10 km of noisy optical fiber to observe the relation between quantum bit-error rate and noise-degraded second-order correlation function of the transmitted photons. The effects of afterpulsing when we shorten the deadtime of the heralding detectors are also observed and discussed.
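The deadtime argument can be made concrete with a toy Monte Carlo. The numbers below (herald rate, deadtime, 50/50 passive splitting) are illustrative assumptions rather than the experimental parameters; the sketch only shows why splitting the heralding arm across two detectors recovers clicks that a single dead detector would miss.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy Monte Carlo of heralding-detector deadtime (all numbers illustrative):
# herald photons arrive as a Poisson process, and a detector that clicks is
# dead for `deadtime` seconds.  With a passive 50/50 splitter feeding two
# detectors, each sees half of the arrivals, so fewer heralds are lost.
rate = 5e6            # herald photons per second at the heralding arm
deadtime = 1e-6       # detector deadtime (s)
T = 0.05              # simulated duration (s)

def clicks(arrivals, deadtime):
    """Count clicks of a non-paralyzable detector given sorted arrival times."""
    n, last = 0, -np.inf
    for t in arrivals:
        if t - last >= deadtime:
            n += 1
            last = t
    return n

arrivals = np.sort(rng.uniform(0, T, rng.poisson(rate * T)))

single = clicks(arrivals, deadtime)              # everything on one detector

route = rng.random(arrivals.size) < 0.5          # passive 50/50 routing
double = clicks(arrivals[route], deadtime) + clicks(arrivals[~route], deadtime)

print(f"heralding rate, 1 detector : {single / T / 1e6:.2f} Mcps")
print(f"heralding rate, 2 detectors: {double / T / 1e6:.2f} Mcps "
      f"(+{100 * (double - single) / single:.0f}%)")
```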
Form Overrides Meaning When Bilinguals Monitor for Errors
Ivanova, Iva; Ferreira, Victor S.; Gollan, Tamar H.
2016-01-01
Bilinguals rarely produce unintended language switches, which may in part be because switches are detected and corrected by an internal monitor. But are language switches easier or harder to detect than within-language semantic errors? To approximate internal monitoring, bilinguals listened (Experiment 1) or read aloud (Experiment 2) stories, and detected language switches (translation equivalents or semantically unrelated to expected words) and within-language errors (semantically related or unrelated to expected words). Bilinguals detected semantically related within-language errors most slowly and least accurately, language switches more quickly and accurately than within-language errors, and (in Experiment 2), translation equivalents as quickly and accurately as unrelated language switches. These results suggest that internal monitoring of form (which can detect mismatches in language membership) completes earlier than, and is independent of, monitoring of meaning. However, analysis of reading times prior to error detection revealed meaning violations to be more disruptive for processing than language violations. PMID:28649169
Detecting local diversity-dependence in diversification.
Xu, Liang; Etienne, Rampal S
2018-04-06
Whether there are ecological limits to species diversification is a hotly debated topic. Molecular phylogenies show slowdowns in lineage accumulation, suggesting that speciation rates decline with increasing diversity. A maximum-likelihood (ML) method to detect diversity-dependent (DD) diversification from phylogenetic branching times exists, but it assumes that diversity-dependence is a global phenomenon and therefore ignores that the underlying species interactions are mostly local, and not all species in the phylogeny co-occur locally. Here, we explore whether this ML method based on the nonspatial diversity-dependence model can detect local diversity-dependence, by applying it to phylogenies, simulated with a spatial stochastic model of local DD speciation, extinction, and dispersal between two local communities. We find that type I errors (falsely detecting diversity-dependence) are low, and the power to detect diversity-dependence is high when dispersal rates are not too low. Interestingly, when dispersal is high the power to detect diversity-dependence is even higher than in the nonspatial model. Moreover, estimates of intrinsic speciation rate, extinction rate, and ecological limit strongly depend on dispersal rate. We conclude that the nonspatial DD approach can be used to detect diversity-dependence in clades of species that live in not too disconnected areas, but parameter estimates must be interpreted cautiously. © 2018 The Author(s). Evolution published by Wiley Periodicals, Inc. on behalf of The Society for the Study of Evolution.
NASA Astrophysics Data System (ADS)
Ballari, D.; Castro, E.; Campozano, L.
2016-06-01
Precipitation monitoring is of utmost importance for water resource management. However, in regions of complex terrain such as Ecuador, the high spatio-temporal precipitation variability and the scarcity of rain gauges make it difficult to obtain accurate estimates of precipitation. Remotely sensed precipitation estimates, such as the Multi-satellite Precipitation Analysis TRMM, can cope with this problem after a validation process, which must be representative in space and time. In this work we validate monthly estimates from TRMM 3B43 satellite precipitation (0.25° x 0.25° resolution) using ground data from 14 rain gauges in Ecuador. The stations are located in the 3 most differentiated regions of the country: the Pacific coastal plains, the Andean highlands, and the Amazon rainforest. Time series of imagery and rain gauges between 1998 and 2010 were compared using statistical error metrics such as bias, root mean square error, and Pearson correlation, and with detection indexes such as probability of detection, equitable threat score, false alarm rate and frequency bias index. The results showed that precipitation seasonality is well represented and that TRMM 3B43 acceptably estimates the monthly precipitation in the three regions of the country. According to both the statistical error metrics and the detection indexes, the coastal and Amazon regions are estimated better quantitatively than the Andean highlands. Additionally, it was found that estimates are better for light precipitation rates. The present validation of TRMM 3B43 provides important results to support further studies on calibration and bias correction of precipitation in ungauged watershed basins.
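For reference, the statistical error metrics and detection indexes named above can be computed as in the hedged sketch below; the monthly series, the rain/no-rain threshold, and the bias and noise levels are invented, and the false-alarm measure shown is the false alarm ratio.

```python
import numpy as np
from scipy import stats

# Made-up pair of monthly series: sat = satellite estimate, obs = rain gauge.
rng = np.random.default_rng(3)
obs = rng.gamma(2.0, 60.0, 156)               # 13 years of monthly totals (mm)
sat = obs * rng.normal(1.05, 0.25, obs.size)  # noisy, slightly biased estimate

# Continuous error metrics.
bias = np.mean(sat - obs)
rmse = np.sqrt(np.mean((sat - obs) ** 2))
r, _ = stats.pearsonr(sat, obs)

# Categorical detection indexes against an arbitrary event threshold.
thr = 50.0
hits        = np.sum((sat >= thr) & (obs >= thr))
false_alarm = np.sum((sat >= thr) & (obs <  thr))
misses      = np.sum((sat <  thr) & (obs >= thr))
corr_neg    = np.sum((sat <  thr) & (obs <  thr))
n = hits + false_alarm + misses + corr_neg

pod = hits / (hits + misses)                      # probability of detection
far = false_alarm / (hits + false_alarm)          # false alarm ratio
fbi = (hits + false_alarm) / (hits + misses)      # frequency bias index
hits_random = (hits + misses) * (hits + false_alarm) / n
ets = (hits - hits_random) / (hits + misses + false_alarm - hits_random)

print(f"bias={bias:.1f} mm  RMSE={rmse:.1f} mm  r={r:.2f}")
print(f"POD={pod:.2f}  FAR={far:.2f}  FBI={fbi:.2f}  ETS={ets:.2f}")
```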
IMRT QA: Selecting gamma criteria based on error detection sensitivity.
Steers, Jennifer M; Fraass, Benedick A
2016-04-01
The gamma comparison is widely used to evaluate the agreement between measurements and treatment planning system calculations in patient-specific intensity modulated radiation therapy (IMRT) quality assurance (QA). However, recent publications have raised concerns about the lack of sensitivity when employing commonly used gamma criteria. Understanding the actual sensitivity of a wide range of different gamma criteria may allow the definition of more meaningful gamma criteria and tolerance limits in IMRT QA. We present a method that allows the quantitative determination of gamma criteria sensitivity to induced errors which can be applied to any unique combination of device, delivery technique, and software utilized in a specific clinic. A total of 21 DMLC IMRT QA measurements (ArcCHECK®, Sun Nuclear) were compared to QA plan calculations with induced errors. Three scenarios were studied: MU errors, multi-leaf collimator (MLC) errors, and the sensitivity of the gamma comparison to changes in penumbra width. Gamma comparisons were performed between measurements and error-induced calculations using a wide range of gamma criteria, resulting in a total of over 20 000 gamma comparisons. Gamma passing rates for each error class and case were graphed against error magnitude to create error curves in order to represent the range of missed errors in routine IMRT QA using 36 different gamma criteria. This study demonstrates that systematic errors and case-specific errors can be detected by the error curve analysis. Depending on the location of the error curve peak (e.g., not centered about zero), 3%/3 mm threshold = 10% at 90% pixels passing may miss errors as large as 15% MU errors and ±1 cm random MLC errors for some cases. As the dose threshold parameter was increased for a given %Diff/distance-to-agreement (DTA) setting, error sensitivity was increased by up to a factor of two for select cases. This increased sensitivity with increasing dose threshold was consistent across all studied combinations of %Diff/DTA. Criteria such as 2%/3 mm and 3%/2 mm with a 50% threshold at 90% pixels passing are shown to be more appropriately sensitive without being overly strict. However, a broadening of the penumbra by as much as 5 mm in the beam configuration was difficult to detect with commonly used criteria, as well as with the previously mentioned criteria utilizing a threshold of 50%. We have introduced the error curve method, an analysis technique which allows the quantitative determination of gamma criteria sensitivity to induced errors. The application of the error curve method using DMLC IMRT plans measured on the ArcCHECK® device demonstrated that large errors can potentially be missed in IMRT QA with commonly used gamma criteria (e.g., 3%/3 mm, threshold = 10%, 90% pixels passing). Additionally, increasing the dose threshold value can offer dramatic increases in error sensitivity. This approach may allow the selection of more meaningful gamma criteria for IMRT QA and is straightforward to apply to other combinations of devices and treatment techniques.
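A stripped-down version of the error curve idea can be sketched with a one-dimensional global gamma comparison; the profile, induced MU-type scaling errors, and criteria below are illustrative and not the ArcCHECK measurement workflow described in the study.

```python
import numpy as np

def gamma_pass_rate(meas, calc, x, dose_crit=0.03, dta_mm=3.0, low_dose_thr=0.10):
    """Toy 1-D global gamma analysis (not a clinical implementation).

    meas, calc   : measured and calculated dose profiles on the grid x (mm)
    dose_crit    : dose-difference criterion as a fraction of the max calc dose
    dta_mm       : distance-to-agreement criterion (mm)
    low_dose_thr : exclude measurement points below this fraction of max dose
    """
    d_ref = dose_crit * calc.max()
    gammas = []
    for xm, dm in zip(x, meas):
        if dm < low_dose_thr * calc.max():
            continue                      # below the dose threshold: excluded
        g2 = ((xm - x) / dta_mm) ** 2 + ((dm - calc) / d_ref) ** 2
        gammas.append(np.sqrt(g2.min()))
    return np.mean(np.asarray(gammas) <= 1.0)

# Example: a Gaussian "field" with induced MU-like output scaling errors.
x = np.linspace(-50, 50, 501)
calc = 2.0 * np.exp(-x ** 2 / (2 * 15.0 ** 2))        # planned profile (Gy)
for mu_err in (0.00, 0.03, 0.06, 0.10):
    meas = calc * (1 + mu_err)
    pr = gamma_pass_rate(meas, calc, x)
    print(f"induced output error {100 * mu_err:4.0f}%  ->  3%/3mm pass rate {100 * pr:5.1f}%")
```

Plotting pass rate against error magnitude in this way is what produces an error curve for a given set of criteria.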
Correcting for sequencing error in maximum likelihood phylogeny inference.
Kuhner, Mary K; McGill, James
2014-11-04
Accurate phylogenies are critical to taxonomy as well as studies of speciation processes and other evolutionary patterns. Accurate branch lengths in phylogenies are critical for dating and rate measurements. Such accuracy may be jeopardized by unacknowledged sequencing error. We use simulated data to test a correction for DNA sequencing error in maximum likelihood phylogeny inference. Over a wide range of data polymorphism and true error rate, we found that correcting for sequencing error improves recovery of the branch lengths, even if the assumed error rate is up to twice the true error rate. Low error rates have little effect on recovery of the topology. When error is high, correction improves topological inference; however, when error is extremely high, using an assumed error rate greater than the true error rate leads to poor recovery of both topology and branch lengths. The error correction approach tested here was proposed in 2004 but has not been widely used, perhaps because researchers do not want to commit to an estimate of the error rate. This study shows that correction with an approximate error rate is generally preferable to ignoring the issue. Copyright © 2014 Kuhner and McGill.
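The correction evaluated here amounts to softening the tip states of the likelihood calculation. A minimal sketch, assuming the simple model in which an observed base equals the true base with probability 1 - e and each alternative base with probability e/3, is:

```python
import numpy as np

BASES = "ACGT"

def tip_partial_likelihoods(observed_base, error_rate):
    """P(observed base | true base) for true base = A, C, G, T."""
    e = error_rate
    return np.array([1.0 - e if b == observed_base else e / 3.0 for b in BASES])

# Without correction an observed 'C' forces the tip state to be exactly C;
# with an assumed 1% error rate, a little likelihood is left for other bases.
print(tip_partial_likelihoods("C", 0.0))    # [0.   1.   0.   0.  ]
print(tip_partial_likelihoods("C", 0.01))   # [~0.0033  0.99  ~0.0033  ~0.0033]
```

These per-tip vectors replace the usual 0/1 indicator vectors at the leaves of the pruning-algorithm likelihood computation.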
Liu, Yang; Tong, Shoufeng; Chang, Shuai; Song, Yansong; Dong, Yan; Zhao, Xin; An, Zhe; Yu, Fuwan
2018-05-10
Optical phase-locked loops are an effective detection method in high-speed and long-distance laser communication. Although this method can detect weak signal light and maintain a small bit error rate, it is difficult to implement because accurately identifying the phase difference between the signal light and the local oscillator has always been a technical challenge. Thus, a series of studies was conducted to address this issue. First, a delayed exclusive-OR (XOR) gate phase detector with multi-level loop compound control is proposed. Then, a 50 ps delay line and relative signal-to-noise ratio control at 15 dB are produced through theoretical derivation and simulation. Thereafter, a phase discrimination module is designed on a 15 cm×5 cm printed circuit board. Finally, the experiment platform is built for verification. Experimental results show that the phase discrimination range is -1.1 to 1.1 GHz, and the gain is 0.82 mV/MHz. Three times the standard deviation, that is, 0.064 V, is observed between the test and theoretical values. The accuracy of phase detection is better than 0.07 V, which meets the design standards. A coherent carrier recovery test system is established, and the delayed XOR gate performs well in this system. When the communication rate is 5 Gbps, the system realizes a bit error rate of 1.55×10^-8 when the optical power of the signal is -40.4 dBm. When the communication rate is increased to 10 Gbps, the detection sensitivity drops to -39.5 dBm and still shows good performance in high-speed communications. This work provides a reference for future high-speed coherent homodyne detection in space. Ideas for the next phase of this study are presented at the end of this paper.
Assessment of the relative merits of a few methods to detect evolutionary trends.
Laurin, Michel
2010-12-01
Some of the most basic questions about the history of life concern evolutionary trends. These include determining whether or not metazoans have become more complex over time, whether or not body size tends to increase over time (the Cope-Depéret rule), or whether or not brain size has increased over time in various taxa, such as mammals and birds. Despite the proliferation of studies on such topics, assessment of the reliability of results in this field is hampered by the variability of techniques used and the lack of statistical validation of these methods. To solve this problem, simulations are performed using a variety of evolutionary models (gradual Brownian motion, speciational Brownian motion, and Ornstein-Uhlenbeck), with or without a drift of variable amplitude, with variable variance of tips, and with bounds placed close or far from the starting values and final means of simulated characters. These are used to assess the relative merits (power, Type I error rate, bias, and mean absolute value of error on slope estimate) of several statistical methods that have recently been used to assess the presence of evolutionary trends in comparative data. Results show widely divergent performance of the methods. The simple, nonphylogenetic regression (SR) and variance partitioning using phylogenetic eigenvector regression (PVR) with a broken stick selection procedure have greatly inflated Type I error rate (0.123-0.180 at a 0.05 threshold), which invalidates their use in this context. However, they have the greatest power. Most variants of Felsenstein's independent contrasts (FIC; five of which are presented) have adequate Type I error rate, although two have a slightly inflated Type I error rate with at least one of the two reference trees (0.064-0.090 error rate at a 0.05 threshold). The power of all contrast-based methods is always much lower than that of SR and PVR, except under Brownian motion with a strong trend and distant bounds. Mean absolute value of error on slope of all FIC methods is slightly higher than that of phylogenetic generalized least squares (PGLS), SR, and PVR. PGLS performs well, with low Type I error rate, low error on regression coefficient, and power comparable with some FIC methods. Four variants of skewness analysis are examined, and a new method to assess significance of results is presented. However, all have consistently low power, except in rare combinations of trees, trend strength, and distance between final means and bounds. Globally, the results clearly show that FIC-based methods and PGLS are globally better than nonphylogenetic methods and variance partitioning with PVR. FIC methods and PGLS are sensitive to the model of evolution (and, hence, to branch length errors). Our results suggest that regressing raw character contrasts against raw geological age contrasts yields a good combination of power and Type I error rate. New software to facilitate batch analysis is presented.
Follow-up of negative MRI-targeted prostate biopsies: when are we missing cancer?
Gold, Samuel A; Hale, Graham R; Bloom, Jonathan B; Smith, Clayton P; Rayn, Kareem N; Valera, Vladimir; Wood, Bradford J; Choyke, Peter L; Turkbey, Baris; Pinto, Peter A
2018-05-21
Multiparametric magnetic resonance imaging (mpMRI) has improved clinicians' ability to detect clinically significant prostate cancer (csPCa). Combining or fusing these images with the real-time imaging of transrectal ultrasound (TRUS) allows urologists to better sample lesions with a targeted biopsy (Tbx), leading to the detection of greater rates of csPCa and decreased rates of low-risk PCa. In this review, we evaluate the technical aspects of the mpMRI-guided Tbx procedure to identify possible sources of error and provide clinical context to a negative Tbx. A literature search was conducted on possible reasons for a false-negative Tbx. This includes discussion of false-positive mpMRI findings, termed "PCa mimics," that may incorrectly suggest a high likelihood of csPCa, as well as errors during Tbx resulting in inexact image fusion or biopsy needle placement. Despite the strong negative predictive value associated with Tbx, concerns of missed disease often remain, especially with MR-visible lesions. This raises questions about what to do next after a negative Tbx result. Potential sources of error can arise from each step in the targeted biopsy process, ranging from "PCa mimics" or technical errors during mpMRI acquisition, to failure to properly register MRI and TRUS images on a fusion biopsy platform, to technical or anatomic limits on needle placement accuracy. A better understanding of these potential pitfalls in the mpMRI-guided Tbx procedure will aid interpretation of a negative Tbx, identify areas for improving technical proficiency, and improve both physician understanding of negative Tbx and patient-management options.
Concurrent remote entanglement with quantum error correction against photon losses
NASA Astrophysics Data System (ADS)
Roy, Ananda; Stone, A. Douglas; Jiang, Liang
2016-09-01
Remote entanglement of distant, noninteracting quantum entities is a key primitive for quantum information processing. We present a protocol to remotely entangle two stationary qubits by first entangling them with propagating ancilla qubits and then performing a joint two-qubit measurement on the ancillas. Subsequently, single-qubit measurements are performed on each of the ancillas. We describe two continuous-variable implementations of the protocol using propagating microwave modes. The first implementation uses propagating Schrödinger cat states as the flying ancilla qubits, a joint photon-number-modulo-2 measurement of the propagating modes for the two-qubit measurement, and homodyne detections as the final single-qubit measurements. The presence of inefficiencies in realistic quantum systems limits the success rate of generating high-fidelity Bell states. This motivates us to propose a second continuous-variable implementation, where we use quantum error correction to suppress the decoherence due to photon loss to first order. To that end, we encode the ancilla qubits in superpositions of Schrödinger cat states of a given photon-number parity, use a joint photon-number-modulo-4 measurement as the two-qubit measurement, and homodyne detections as the final single-qubit measurements. We demonstrate the resilience of our quantum-error-correcting remote entanglement scheme to imperfections. Further, we describe a modification of our error-correcting scheme by incorporating additional individual photon-number-modulo-2 measurements of the ancilla modes to improve the success rate of generating high-fidelity Bell states. Our protocols can be straightforwardly implemented in state-of-the-art superconducting circuit-QED systems.
Experimental and environmental factors affect spurious detection of ecological thresholds
Daily, Jonathan P.; Hitt, Nathaniel P.; Smith, David; Snyder, Craig D.
2012-01-01
Threshold detection methods are increasingly popular for assessing nonlinear responses to environmental change, but their statistical performance remains poorly understood. We simulated linear change in stream benthic macroinvertebrate communities and evaluated the performance of commonly used threshold detection methods based on model fitting (piecewise quantile regression [PQR]), data partitioning (nonparametric change point analysis [NCPA]), and a hybrid approach (significant zero crossings [SiZer]). We demonstrated that false detection of ecological thresholds (type I errors) and inferences on threshold locations are influenced by sample size, rate of linear change, and frequency of observations across the environmental gradient (i.e., sample-environment distribution, SED). However, the relative importance of these factors varied among statistical methods and between inference types. False detection rates were influenced primarily by user-selected parameters for PQR (τ) and SiZer (bandwidth) and secondarily by sample size (for PQR) and SED (for SiZer). In contrast, the location of reported thresholds was influenced primarily by SED. Bootstrapped confidence intervals for NCPA threshold locations revealed strong correspondence to SED. We conclude that the choice of statistical methods for threshold detection should be matched to experimental and environmental constraints to minimize false detection rates and avoid spurious inferences regarding threshold location.
Roberts, Rachel M; Davis, Melissa C
2015-01-01
There is a need for an evidence-based approach to training professional psychologists in the administration and scoring of standardized tests such as the Wechsler Adult Intelligence Scale (WAIS), due to substantial evidence that these tasks are associated with numerous errors that have the potential to significantly impact clients' lives. Twenty-three post-graduate psychology students underwent training in using the WAIS-IV according to a best-practice teaching model that involved didactic teaching, independent study of the test manual, and in-class practice with teacher supervision and feedback. Video recordings and test protocols from a role-played test administration were analyzed for errors according to a comprehensive checklist with self, peer, and faculty member reviews. 91.3% of students were rated as having demonstrated competency in administration and scoring. All students were found to make errors, with substantially more errors being detected by the faculty member than by self or peers. Across all subtests, the most frequent errors related to failure to deliver standardized instructions verbatim from the manual. The failure of peer and self-reviews to detect the majority of the errors suggests that novice feedback (self or peers) may be ineffective in eliminating errors and that the use of more senior peers may be preferable. It is suggested that involving senior trainees, recent graduates and/or experienced practitioners in the training of post-graduate students may have benefits for both parties, promoting a peer-learning and continuous professional development approach to the development and maintenance of skills in psychological assessment.
Local concurrent error detection and correction in data structures using virtual backpointers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, C.C.J.; Chen, P.P.; Fuchs, W.K.
1989-11-01
A new technique, based on virtual backpointers, is presented in this paper for local concurrent error detection and correction in linked data structures. Two new data structures utilizing virtual backpointers, the Virtual Double-Linked List and the B-Tree with Virtual Backpointers, are described. For these structures, double errors within a fixed-size checking window can be detected in constant time, and single errors detected during forward moves can be corrected in constant time.
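As a loose illustration of the flavor of such checks (not the paper's actual virtual backpointer construction), the sketch below stores an extra check word per node of a doubly linked list, defined here as the XOR of the stored predecessor and successor ids, so that a corrupted pointer can be detected locally without traversing the whole structure.

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.next = None    # successor id (int); 0 is the null id
        self.prev = None    # predecessor id
        self.check = None   # check word: prev XOR next

def build(keys):
    """Build the list in an id -> Node table so pointers are plain integers."""
    table = {i + 1: Node(k) for i, k in enumerate(keys)}
    ids = [0] + list(table) + [0]          # 0 acts as the null sentinel id
    for j in range(1, len(ids) - 1):
        node = table[ids[j]]
        node.prev, node.next = ids[j - 1], ids[j + 1]
        node.check = node.prev ^ node.next
    return table

def check_node(table, nid):
    """Local, constant-time consistency check of one node."""
    node = table[nid]
    return node.check == (node.prev ^ node.next)

table = build(["a", "b", "c", "d"])
print(all(check_node(table, nid) for nid in table))   # True: structure intact
table[2].next = 4                                     # inject a pointer error
print(check_node(table, 2))                           # False: detected locally
```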
Daupin, Johanne; Atkinson, Suzanne; Bédard, Pascal; Pelchat, Véronique; Lebel, Denis; Bussières, Jean-François
2016-12-01
The medication-use system in hospitals is very complex. To improve the health professionals' awareness of the risks of errors related to the medication-use system, a simulation of medication errors was created. The main objective was to assess the medical, nursing and pharmacy staffs' ability to identify errors related to the medication-use system using a simulation. The secondary objective was to assess their level of satisfaction. This descriptive cross-sectional study was conducted in a 500-bed mother-and-child university hospital. A multidisciplinary group set up 30 situations and replicated a patient room and a care unit pharmacy. All hospital staff, including nurses, physicians, pharmacists and pharmacy technicians, was invited. Participants had to detect if a situation contained an error and fill out a response grid. They also answered a satisfaction survey. The simulation was held during 100 hours. A total of 230 professionals visited the simulation, 207 handed in a response grid and 136 answered the satisfaction survey. The participants' overall rate of correct answers was 67.5% ± 13.3% (4073/6036). Among the least detected errors were situations involving a Y-site infusion incompatibility, an oral syringe preparation and the patient's identification. Participants mainly considered the simulation as effective in identifying incorrect practices (132/136, 97.8%) and relevant to their practice (129/136, 95.6%). Most of them (114/136; 84.4%) intended to change their practices in view of their exposure to the simulation. We implemented a realistic medication-use system errors simulation in a mother-child hospital, with a wide audience. This simulation was an effective, relevant and innovative tool to raise the health care professionals' awareness of critical processes. © 2016 John Wiley & Sons, Ltd.
AdaBoost-based algorithm for network intrusion detection.
Hu, Weiming; Hu, Wei; Maybank, Steve
2008-04-01
Network intrusion detection aims at distinguishing the attacks on the Internet from normal use of the Internet. It is an indispensable part of the information security system. Due to the variety of network behaviors and the rapid development of attack fashions, it is necessary to develop fast machine-learning-based intrusion detection algorithms with high detection rates and low false-alarm rates. In this correspondence, we propose an intrusion detection algorithm based on the AdaBoost algorithm. In the algorithm, decision stumps are used as weak classifiers. The decision rules are provided for both categorical and continuous features. By combining the weak classifiers for continuous features and the weak classifiers for categorical features into a strong classifier, the relations between these two different types of features are handled naturally, without any forced conversions between continuous and categorical features. Adaptable initial weights and a simple strategy for avoiding overfitting are adopted to improve the performance of the algorithm. Experimental results show that our algorithm has low computational complexity and error rates, as compared with algorithms of higher computational complexity, as tested on the benchmark sample data.
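A hedged sketch of the general approach, boosted decision stumps applied to connection-style features, is shown below using scikit-learn on synthetic data; it does not reproduce the paper's handling of categorical features, its adaptable initial weights, or its overfitting strategy. (The default weak learner of AdaBoostClassifier is a depth-1 decision tree, i.e. a decision stump.)

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

# Synthetic "connection records" with two continuous features and one
# label-encoded categorical feature; the attack rule and label noise are invented.
rng = np.random.default_rng(0)
n = 4000
duration  = rng.exponential(2.0, n)
src_bytes = rng.exponential(500.0, n)
service   = rng.integers(0, 5, n)
attack = ((src_bytes > 900) & (service == 3)) | (duration > 9)
attack ^= rng.random(n) < 0.05                      # 5% label noise
X = np.column_stack([duration, src_bytes, service])
y = attack.astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
# AdaBoost with its default weak learner, a depth-1 decision tree (a stump).
clf = AdaBoostClassifier(n_estimators=100).fit(X_tr, y_tr)
y_hat = clf.predict(X_te)

detection_rate = np.mean(y_hat[y_te == 1] == 1)
false_alarm_rate = np.mean(y_hat[y_te == 0] == 1)
print(f"detection rate: {detection_rate:.3f}   false-alarm rate: {false_alarm_rate:.3f}")
```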
DOE Office of Scientific and Technical Information (OSTI.GOV)
Passarge, M; Fix, M K; Manser, P
Purpose: To create and test an accurate EPID-frame-based VMAT QA metric to detect gross dose errors in real time and to provide information about the source of error. Methods: A Swiss cheese model was created for an EPID-based real-time QA process. The system compares a treatment-plan-based reference set of EPID images with images acquired over each 2° gantry angle interval. The metric utilizes a sequence of independent, consecutively executed error detection methods: a masking technique that verifies in-field radiation delivery and ensures no out-of-field radiation; output normalization checks at two different stages; global image alignment to quantify rotation, scaling and translation; standard gamma evaluation (3%, 3 mm); and pixel intensity deviation checks including and excluding high dose gradient regions. Tolerances for each test were determined. For algorithm testing, twelve different types of errors were selected to modify the original plan. Corresponding predictions for each test case were generated, which included measurement-based noise. Each test case was run multiple times (with different noise per run) to assess the ability to detect introduced errors. Results: Averaged over five test runs, 99.1% of all plan variations that resulted in patient dose errors were detected within 2° and 100% within 4° (∼1% of patient dose delivery). Including cases that led to slightly modified but clinically equivalent plans, 91.5% were detected by the system within 2°. Based on the type of method that detected the error, determination of error sources was achieved. Conclusion: An EPID-based during-treatment error detection system for VMAT deliveries was successfully designed and tested. The system utilizes a sequence of methods to identify and prevent gross treatment delivery errors. The system was inspected for robustness with realistic noise variations, demonstrating that it has the potential to detect a large majority of errors in real time and indicate the error source. J. V. Siebers receives funding support from Varian Medical Systems.
Measurement-based analysis of error latency. [in computer operating system
NASA Technical Reports Server (NTRS)
Chillarege, Ram; Iyer, Ravishankar K.
1987-01-01
This paper demonstrates a practical methodology for the study of error latency under a real workload. The method is illustrated with sampled data on the physical memory activity, gathered by hardware instrumentation on a VAX 11/780 during the normal workload cycle of the installation. These data are used to simulate fault occurrence and to reconstruct the error discovery process in the system. The technique provides a means to study the system under different workloads and for multiple days. An approach to determine the percentage of undiscovered errors is also developed and a verification of the entire methodology is performed. This study finds that the mean error latency, in the memory containing the operating system, varies by a factor of 10 to 1 (in hours) between the low and high workloads. It is found that of all errors occurring within a day, 70 percent are detected in the same day, 82 percent within the following day, and 91 percent within the third day. The increase in failure rate due to latency is not so much a function of remaining errors but is dependent on whether or not there is a latent error.
Hasni, Nesrine; Ben Hamida, Emira; Ben Jeddou, Khouloud; Ben Hamida, Sarra; Ayadi, Imene; Ouahchi, Zeineb; Marrakchi, Zahra
2016-12-01
Iatrogenic medication risk is poorly evaluated in neonatology. Objective: To assess errors that occurred during the preparation and administration of injectable medicines in a neonatal unit, in order to implement corrective actions to reduce the occurrence of these errors. A prospective, observational study was performed in a neonatal unit over a period of one month. The practices of preparing and administering injectable medications were identified through a standardized data collection form. These practices were compared with the summary of product characteristics (RCP) of each product and the literature. One hundred preparations of 13 different drugs were observed, and 85 errors during the preparation and administration steps were detected. These errors comprised preparation errors in 59% of cases, such as changes to the dilution protocol (32%) and use of the wrong solvent (11%), and administration errors in 41% of cases, such as wrong timing of administration (18%) or omission of administration (9%). This study showed a high rate of errors during the preparation and administration of injectable drugs. In order to optimize the care of newborns and reduce the risk of medication errors, corrective actions were implemented through the establishment of a quality assurance system consisting of the development of injectable drug preparation procedures, the introduction of a labeling system, and staff training.
Rogel-Castillo, Cristian; Boulton, Roger; Opastpongkarn, Arunwong; Huang, Guangwei; Mitchell, Alyson E
2016-07-27
Concealed damage (CD) is defined as a brown discoloration of the kernel interior (nutmeat) that appears only after moderate to high heat treatment (e.g., blanching, drying, roasting, etc.). Raw almonds with CD have no visible defects before heat treatment. Currently, there are no screening methods available for detecting CD in raw almonds. Herein, the feasibility of using near-infrared (NIR) spectroscopy between 1125 and 2153 nm for the detection of CD in almonds is demonstrated. Almond kernels with CD have less NIR absorbance in the region related with oil, protein, and carbohydrates. With the use of partial least squares discriminant analysis (PLS-DA) and selection of specific wavelengths, three classification models were developed. The calibration models have false-positive and false-negative error rates ranging between 12.4 and 16.1% and between 10.6 and 17.2%, respectively. The percent error rates ranged between 8.2 and 9.2%. Second-derivative preprocessing of the selected wavelength resulted in the most robust predictive model.
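A minimal PLS-DA sketch in the spirit of the screening models, using synthetic spectra and an arbitrary absorbance-dip model of concealed damage (all numbers invented), might look like this:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
wavelengths = np.linspace(1125, 2153, 200)
n_sound, n_cd = 150, 150

def spectra(n, dip):
    base = np.exp(-((wavelengths - 1700) / 300.0) ** 2)
    band = np.exp(-((wavelengths - 1450) / 40.0) ** 2)   # assumed CD-sensitive band
    return base - dip * band + 0.02 * rng.standard_normal((n, wavelengths.size))

X = np.vstack([spectra(n_sound, 0.00), spectra(n_cd, 0.08)])
y = np.r_[np.zeros(n_sound), np.ones(n_cd)]              # 1 = concealed damage

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
y_hat = (pls.predict(X_te).ravel() > 0.5).astype(int)    # PLS-DA: threshold the score

false_pos = np.mean(y_hat[y_te == 0] == 1)
false_neg = np.mean(y_hat[y_te == 1] == 0)
print(f"false-positive rate: {false_pos:.3f}   false-negative rate: {false_neg:.3f}")
```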
Errors, error detection, error correction and hippocampal-region damage: data and theories.
MacKay, Donald G; Johnson, Laura W
2013-11-01
This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future test. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Huo, Ming-Xia; Li, Ying
2017-12-01
Quantum error correction is important to quantum information processing, allowing us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from knowledge of the error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g., the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on past error correction data. We find that, using these estimated error rates, the probability of error correction failure can be significantly reduced by a factor that increases with the code distance.
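A hedged sketch of the general idea, fitting a Gaussian process to batch-wise empirical error rates and extrapolating forward, is shown below; the drift model, batch sizes, and kernel are invented, and this is not the estimator used in the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(5)

# Simulated drifting physical error rate and batch-wise empirical estimates
# obtained from error correction data (syndrome statistics).
t = np.arange(60, dtype=float)                     # batch index (time)
true_rate = 0.006 + 0.003 * np.sin(t / 12.0)       # drifting per-cycle error rate
shots = 5000                                       # syndrome measurements per batch
observed = rng.binomial(shots, true_rate) / shots  # empirical rate per batch

kernel = 1.0 * RBF(length_scale=10.0) + WhiteKernel(noise_level=1e-6)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(t[:50, None], observed[:50])                # train on the first 50 batches

pred, std = gp.predict(t[50:, None], return_std=True)  # predict the next 10 batches
for ti, p, s, tr in zip(t[50:], pred, std, true_rate[50:]):
    print(f"t={ti:2.0f}  predicted={p:.4f} ± {s:.4f}   true={tr:.4f}")
```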
The NEEDS Data Base Management and Archival Mass Memory System
NASA Technical Reports Server (NTRS)
Bailey, G. A.; Bryant, S. B.; Thomas, D. T.; Wagnon, F. W.
1980-01-01
A Data Base Management System and an Archival Mass Memory System are being developed that will have a 10^12-bit on-line and a 10^13-bit off-line storage capacity. The integrated system will accept packetized data from the data staging area at 50 Mbps, create a comprehensive directory, provide for file management, record the data, perform error detection and correction, accept user requests, retrieve the requested data files and provide the data to multiple users at a combined rate of 50 Mbps. Stored and replicated data files will have a bit error rate of less than 10^-9 even after ten years of storage. The integrated system will be demonstrated to prove the technology late in 1981.
Multiple diagnosis based on photoplethysmography: hematocrit, SpO2, pulse, and respiration
NASA Astrophysics Data System (ADS)
Yoon, Gilwon; Lee, Jong Y.; Jeon, Kye Jin; Park, Kun-Kook; Yeo, Hyung S.; Hwang, Hyun T.; Kim, Hong S.; Hwang, In-Duk
2002-09-01
Photoplethysmography (PPG) measures pulsatile blood flow in real time and non-invasively. One of the widely known applications of PPG is the measurement of oxygen saturation in arterial blood (SpO2). In our work, using more wavelengths than those used in a pulse oximeter, an algorithm and instrument have been developed to measure hematocrit, oxygen saturation, pulse and respiratory rates simultaneously. To predict hematocrit, a dedicated algorithm based on scattering by red blood cells was developed, and a protocol for detecting outlier signals is used to increase accuracy and reliability. Digital filtering techniques are used to extract respiratory rate signals. Utilization of wavelengths under 1000 nm, a multi-wavelength LED array chip and digital-oriented electronics enables us to make a compact device. Our preliminary clinical trials show that the achieved percent error is ±8.2% for hematocrit when tested with 594 persons, R2 for SpO2 fitting is 0.99985 when tested with a Bi-Tek pulse oximeter simulator, and the SpO2 error for the in vivo test is ±2.5% over the range of 75~100%. The error of pulse rates is less than ±5%. We obtained a positive predictive value of 96% for respiratory rates in qualitative analysis.
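For the SpO2 part only, the standard ratio-of-ratios calculation can be sketched as below; the waveforms and the linear calibration constants are generic textbook-style placeholders, not the multi-wavelength algorithm or calibration of the instrument described here.

```python
import numpy as np

rng = np.random.default_rng(11)

# SpO2 from the "ratio of ratios" of the pulsatile (AC) and baseline (DC)
# components of the PPG at two wavelengths (all parameters illustrative).
fs = 100.0
t = np.arange(0, 10, 1 / fs)
pulse = 0.02 * np.sin(2 * np.pi * 1.2 * t)          # 72 bpm pulsatile component

red = 1.00 + 0.5 * pulse + 0.001 * rng.standard_normal(t.size)  # red PPG
ir  = 1.20 + 1.0 * pulse + 0.001 * rng.standard_normal(t.size)  # infrared PPG

def ac_dc(x):
    dc = np.mean(x)
    ac = np.percentile(x, 95) - np.percentile(x, 5)  # robust peak-to-peak estimate
    return ac, dc

ac_r, dc_r = ac_dc(red)
ac_ir, dc_ir = ac_dc(ir)
R = (ac_r / dc_r) / (ac_ir / dc_ir)                  # ratio of ratios
spo2 = 110.0 - 25.0 * R                              # generic linear calibration
print(f"R = {R:.2f}  ->  estimated SpO2 = {spo2:.1f}%")
```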
Chaves, Sandra; Gadanho, Mário; Tenreiro, Rogério; Cabrita, José
1999-01-01
Metronidazole susceptibility of 100 Helicobacter pylori strains was assessed by determining the inhibition zone diameters by disk diffusion test and the MICs by agar dilution and PDM Epsilometer test (E test). Linear regression analysis was performed, allowing the definition of significant linear relations, and revealed correlations of disk diffusion results with both E-test and agar dilution results (r2 = 0.88 and 0.81, respectively). No significant differences (P = 0.84) were found between MICs defined by E test and those defined by agar dilution, taken as a standard. Reproducibility comparison between E-test and disk diffusion tests showed that they are equivalent and with good precision. Two interpretative susceptibility schemes (with or without an intermediate class) were compared by an interpretative error rate analysis method. The susceptibility classification scheme that included the intermediate category was retained, and breakpoints were assessed for diffusion assay with 5-μg metronidazole disks. Strains with inhibition zone diameters less than 16 mm were defined as resistant (MIC > 8 μg/ml), those with zone diameters equal to or greater than 16 mm but less than 21 mm were considered intermediate (4 μg/ml < MIC ≤ 8 μg/ml), and those with zone diameters of 21 mm or greater were regarded as susceptible (MIC ≤ 4 μg/ml). Error rate analysis applied to this classification scheme showed occurrence frequencies of 1% for major errors and 7% for minor errors, when the results were compared to those obtained by agar dilution. No very major errors were detected, suggesting that disk diffusion might be a good alternative for determining the metronidazole sensitivity of H. pylori strains. PMID:10203543
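The interpretative error-rate analysis can be reproduced mechanically once the breakpoints are fixed; the sketch below applies the zone-diameter and MIC breakpoints quoted in the abstract to an invented set of 100 strains and tallies very major, major, and minor errors.

```python
import numpy as np

def class_by_zone(d_mm):
    # Breakpoints from the abstract: <16 mm resistant, 16-20 mm intermediate,
    # >=21 mm susceptible.
    return "R" if d_mm < 16 else ("I" if d_mm < 21 else "S")

def class_by_mic(mic_ug_ml):
    # MIC > 8 resistant, 4 < MIC <= 8 intermediate, <= 4 ug/ml susceptible.
    return "R" if mic_ug_ml > 8 else ("I" if mic_ug_ml > 4 else "S")

# Invented strain data: agar-dilution MICs and correlated zone diameters.
rng = np.random.default_rng(4)
mic = np.exp(rng.normal(1.0, 1.2, 100))
zone = np.clip(30 - 4.5 * np.log2(mic) + rng.normal(0, 2, 100), 6, 40)

very_major = major = minor = 0
for m, d in zip(mic, zone):
    ref, test = class_by_mic(m), class_by_zone(d)
    if ref == test:
        continue
    if ref == "R" and test == "S":
        very_major += 1      # false susceptibility: the most dangerous error
    elif ref == "S" and test == "R":
        major += 1           # false resistance
    else:
        minor += 1           # any disagreement involving the intermediate class

# With n = 100 strains the counts are directly percentages.
print(f"very major: {very_major}%   major: {major}%   minor: {minor}%")
```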
A Dual Frequency Carrier Phase Error Difference Checking Algorithm for the GNSS Compass.
Liu, Shuo; Zhang, Lei; Li, Jian
2016-11-24
The performance of the Global Navigation Satellite System (GNSS) compass is related to the quality of carrier phase measurement. How to process the carrier phase error properly is important for improving the GNSS compass accuracy. In this work, we propose a dual frequency carrier phase error difference checking algorithm for the GNSS compass. The algorithm aims at eliminating large carrier phase errors in dual frequency double-differenced carrier phase measurements according to the error difference between the two frequencies. The advantage of the proposed algorithm is that it does not need additional environment information and performs well on multiple large errors compared with previous research. The core of the proposed algorithm is removing the geometrical distance from the dual frequency carrier phase measurements, so that the carrier phase error is separated and detectable. We generate the Double Differenced Geometry-Free (DDGF) measurement according to the characteristic that the different frequency carrier phase measurements contain the same geometrical distance. Then, we propose the DDGF detection to detect a large carrier phase error difference between the two frequencies. The theoretical performance of the proposed DDGF detection is analyzed. An open sky test, a man-made multipath test and an urban vehicle test were carried out to evaluate the performance of the proposed algorithm. The results show that the proposed DDGF detection is able to detect large errors in dual frequency carrier phase measurements by checking the error difference between the two frequencies. After the DDGF detection, the accuracy of the baseline vector is improved in the GNSS compass.
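A simplified numerical sketch of the geometry-free idea is given below; it assumes the double-differenced integer ambiguities have already been removed and uses invented noise levels, so it only illustrates how the common geometric range cancels between the two frequencies and leaves a large injected phase error visible.

```python
import numpy as np

c = 299792458.0
lam1, lam2 = c / 1575.42e6, c / 1227.60e6           # GPS L1 / L2 wavelengths (m)

rng = np.random.default_rng(6)
n = 200
dd_range = 0.35 + 0.002 * rng.standard_normal(n)    # common DD geometric range (m)
noise1 = 0.003 * rng.standard_normal(n)             # L1 carrier error (m)
noise2 = 0.003 * rng.standard_normal(n)             # L2 carrier error (m)
noise1[50] += 0.12                                   # inject one large phase error

dd_phi1 = (dd_range + noise1) / lam1                 # DD carrier phases (cycles)
dd_phi2 = (dd_range + noise2) / lam2

# Geometry-free combination: the shared geometric range cancels, so only the
# (assumed removed) ambiguities and the measurement errors remain.
ddgf = lam1 * dd_phi1 - lam2 * dd_phi2

# A robust threshold on the residuals flags the epoch carrying the large error.
sigma = 1.4826 * np.median(np.abs(ddgf - np.median(ddgf)))
flagged = np.where(np.abs(ddgf - np.median(ddgf)) > 4 * sigma)[0]
print("epochs flagged by the DDGF check:", flagged)
```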
Heart rate detection from an electronic weighing scale.
González-Landaeta, R; Casas, O; Pallàs-Areny, R
2008-08-01
We propose a novel technique for beat-to-beat heart rate detection based on the ballistocardiographic (BCG) force signal from a subject standing on a common electronic weighing scale. The detection relies on sensing force variations related to the blood acceleration in the aorta, works even if wearing footwear and does not require any sensors attached to the body because it uses the load cells in the scale. We have devised an approach to estimate the sensitivity and frequency response of three commercial weighing scales to assess their capability to detect the BCG force signal. Static sensitivities ranged from 490 nV V(-1) N(-1) to 1670 nV V(-1) N(-1). The frequency response depended on the subject's mass but it was broad enough for heart rate estimation. We have designed an electronic pulse detection system based on off-the-shelf integrated circuits to sense heart-beat-related force variations of about 0.24 N. The signal-to-noise ratio of the main peaks of the force signal detected was higher than 30 dB. A Bland-Altman plot was used to compare the RR time intervals estimated from the ECG and BCG force signals for 17 volunteers. The error was +/-21 ms, which makes the proposed technique suitable for short-term monitoring of the heart rate.
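A minimal sketch of the signal-processing side (synthetic force signal, generic band-pass filter, peak picking) is given below; the waveform model and filter band are assumptions for illustration, not the authors' detection circuit or algorithm.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

rng = np.random.default_rng(8)

# Synthetic scale-type BCG: a ~0.24 N force transient at every heart beat plus
# respiration-like drift and sensor noise (all parameters invented).
fs = 250.0
t = np.arange(0, 30, 1 / fs)
rr_true = 0.85 + 0.05 * np.sin(2 * np.pi * 0.1 * np.arange(30))  # RR intervals (s)
beat_times = np.cumsum(rr_true)
beat_times = beat_times[beat_times < t[-1] - 0.2]

bcg = 0.05 * np.sin(2 * np.pi * 0.3 * t) + 0.01 * rng.standard_normal(t.size)
for bt in beat_times:
    idx = (t >= bt) & (t < bt + 0.1)
    bcg[idx] += 0.24 * np.hanning(idx.sum())         # heart-beat force transient

b, a = butter(2, [1.0, 20.0], btype="bandpass", fs=fs)  # remove drift and noise
filt = filtfilt(b, a, bcg)

peaks, _ = find_peaks(filt, height=0.1, distance=int(0.5 * fs))
rr_est = np.diff(t[peaks])
print(f"detected beats: {peaks.size},  mean RR = {rr_est.mean() * 1000:.0f} ms")
```

Comparing such RR estimates beat by beat against simultaneous ECG RR intervals is what the Bland-Altman analysis in the study quantifies.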
A concatenated coding scheme for error control
NASA Technical Reports Server (NTRS)
Kasami, T.; Fujiwara, T.; Lin, S.
1986-01-01
In this paper, a concatenated coding scheme for error control in data communications is presented and analyzed. In this scheme, the inner code is used for both error correction and detection; however, the outer code is used only for error detection. A retransmission is requested if either the inner code decoder fails to make a successful decoding or the outer code decoder detects the presence of errors after the inner code decoding. Probability of undetected error (or decoding error) of the proposed scheme is derived. An efficient method for computing this probability is presented. Throughput efficiency of the proposed error control scheme incorporated with a selective-repeat ARQ retransmission strategy is also analyzed. Three specific examples are presented. One of the examples is proposed for error control in the NASA Telecommand System.
Self-checking self-repairing computer nodes using the mirror processor
NASA Technical Reports Server (NTRS)
Tamir, Yuval
1992-01-01
Circuitry added to fault-tolerant systems for concurrent error detection usually reduces performance. Using a technique called micro rollback, it is possible to eliminate most of the performance penalty of concurrent error detection. Error detection is performed in parallel with intermodule communication, and erroneous state changes are later undone. The author reports on the design and implementation of a VLSI RISC microprocessor, called the Mirror Processor (MP), which is capable of micro rollback. In order to achieve concurrent error detection, two MP chips operate in lockstep, comparing external signals and a signature of internal signals every clock cycle. If a mismatch is detected, both processors roll back to the beginning of the cycle in which the error occurred. In some cases the erroneous state is corrected by copying a value from the fault-free processor to the faulty processor. The architecture, microarchitecture, and VLSI implementation of the MP are described, emphasizing its error-detection, error-recovery, and self-diagnosis capabilities.
Local concurrent error detection and correction in data structures using virtual backpointers
NASA Technical Reports Server (NTRS)
Li, C. C.; Chen, P. P.; Fuchs, W. K.
1987-01-01
A new technique, based on virtual backpointers, for local concurrent error detection and correction in linked data structures is presented. Two new data structures, the Virtual Double Linked List and the B-tree with Virtual Backpointers, are described. For these structures, double errors can be detected in O(1) time and errors detected during forward moves can be corrected in O(1) time. The application of a concurrent auditor process to data structure error detection and correction is analyzed, and an implementation is described, to determine the effect on mean time to failure of a multi-user shared database system. The implementation utilizes a Sequent shared-memory multiprocessor system operating on a shared database of Virtual Double Linked Lists.
Local concurrent error detection and correction in data structures using virtual backpointers
NASA Technical Reports Server (NTRS)
Li, Chung-Chi Jim; Chen, Paul Peichuan; Fuchs, W. Kent
1989-01-01
A new technique, based on virtual backpointers, for local concurrent error detection and correction in linked data structures is presented. Two new data structures, the Virtual Double Linked List and the B-tree with Virtual Backpointers, are described. For these structures, double errors can be detected in O(1) time and errors detected during forward moves can be corrected in O(1) time. The application of a concurrent auditor process to data structure error detection and correction is analyzed, and an implementation is described, to determine the effect on mean time to failure of a multi-user shared database system. The implementation utilizes a Sequent shared-memory multiprocessor system operating on a shared database of Virtual Double Linked Lists.
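A minimal sketch of detecting pointer corruption in O(1) during a forward move over a list that stores a virtual backpointer in each node. Here the virtual field is assumed to be the XOR of the predecessor and successor identifiers; this is one plausible instantiation for illustration and may differ from the exact encoding used in the papers above.

class Node:
    def __init__(self, nid):
        self.nid = nid        # node identifier (stands in for an address)
        self.next = None      # forward pointer (a Node or None)
        self.virtual = 0      # assumed: XOR of predecessor id and successor id

def build(ids):
    nodes = [Node(i) for i in ids]
    for k, n in enumerate(nodes):
        prev_id = nodes[k - 1].nid if k > 0 else 0
        next_id = nodes[k + 1].nid if k + 1 < len(nodes) else 0
        n.next = nodes[k + 1] if k + 1 < len(nodes) else None
        n.virtual = prev_id ^ next_id
    return nodes

def check_forward_move(prev_id, node):
    """O(1) consistency check while stepping forward from `node`."""
    next_id = node.next.nid if node.next else 0
    return (prev_id ^ next_id) == node.virtual

nodes = build([10, 20, 30, 40])
nodes[1].next = nodes[3]                      # inject a forward-pointer error
prev = 0
for n in nodes[:-1]:
    ok = check_forward_move(prev, n)
    print(n.nid, "consistent" if ok else "error detected")
    prev = n.nid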
Ferrer-Mileo, V; Guede-Fernandez, F; Fernandez-Chimeno, M; Ramos-Castro, J; Garcia-Gonzalez, M A
2015-08-01
This work compares several fiducial points to detect the arrival of a new pulse in a photoplethysmographic signal using the built-in camera of smartphones or a photoplethysmograph. Also, an optimization process for the signal preprocessing stage has been carried out. Finally, we characterize the error produced when we use the best cutoff frequencies and fiducial point for smartphones and for the photoplethysmograph, and examine whether the error of smartphones can reasonably be explained by variations in pulse transit time. The results have revealed that the peak of the first derivative and the minimum of the second derivative of the pulse wave have the lowest error. Moreover, for these points, high-pass filtering the signal between 0.1 and 0.8 Hz and low-pass filtering around 2.7 Hz or 3.5 Hz are the best cutoff frequencies. Finally, the error in smartphones is slightly higher than in a photoplethysmograph.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Novak, A; Nyflot, M; Sponseller, P
2014-06-01
Purpose: Radiation treatment planning involves a complex workflow that can make safety improvement efforts challenging. This study utilizes an incident reporting system to identify detection points of near-miss errors, in order to guide our departmental safety improvement efforts. Previous studies have examined where errors arise, but not where they are detected or their patterns. Methods: 1377 incidents were analyzed from a departmental near-miss error reporting system from 3/2012–10/2013. All incidents were prospectively reviewed weekly by a multi-disciplinary team, and assigned a near-miss severity score ranging from 0–4 reflecting potential harm (no harm to critical). A 98-step consensus workflow was used to determine origination and detection points of near-miss errors, categorized into 7 major steps (patient assessment/orders, simulation, contouring/treatment planning, pre-treatment plan checks, therapist/on-treatment review, post-treatment checks, and equipment issues). Categories were compared using ANOVA. Results: In the 7-step workflow, 23% of near-miss errors were detected within the same step in the workflow, while an additional 37% were detected by the next step in the workflow, and 23% were detected two steps downstream. Errors detected further from origination were more severe (p<.001; Figure 1). The most common source of near-miss errors was treatment planning/contouring, with 476 near misses (35%). Of those 476, only 72 (15%) were found before leaving treatment planning, 213 (45%) were found at physics plan checks, and 191 (40%) were caught at the therapist pre-treatment chart review or on portal imaging. Errors that passed through physics plan checks and were detected by therapists were more severe than other errors originating in contouring/treatment planning (1.81 vs 1.33, p<0.001). Conclusion: Errors caught by radiation treatment therapists tend to be more severe than errors caught earlier in the workflow, highlighting the importance of safety checks in dosimetry and physics. We are utilizing our findings to improve manual and automated checklists for dosimetry and physics.
Author Self-disclosure Compared with Pharmaceutical Company Reporting of Physician Payments.
Alhamoud, Hani A; Dudum, Ramzi; Young, Heather A; Choi, Brian G
2016-01-01
Industry manufacturers are required by the Sunshine Act to disclose payments to physicians. These data recently became publicly available, but some manufacturers prereleased their data since 2009. We tested the hypotheses that there would be discrepancies between manufacturers' and physicians' disclosures. The financial disclosures by authors of all 39 American College of Cardiology and American Heart Association guidelines between 2009 and 2012 were matched to the public disclosures of 15 pharmaceutical companies during that same period. Duplicate authors across guidelines were assessed independently. Per the guidelines, payments <$10,000 are modest and ≥$10,000 are significant. Agreement was determined using a κ statistic; Fisher's exact and Mann-Whitney tests were used to detect statistical significance. The overall agreement between author and company disclosure was poor (κ = 0.238). There was a significant difference in error rates of disclosure among companies and authors (P = .019). Of disclosures by authors, companies failed to match them with an error rate of 71.6%. Of disclosures by companies, authors failed to match them with an error rate of 54.7%. Our analysis shows a concerning level of disagreement between guideline authors' and pharmaceutical companies' disclosures. Without ability for physicians to challenge reports, it is unclear whether these discrepancies reflect undisclosed relationships with industry or errors in reporting, and caution should be advised in interpretation of data from the Sunshine Act. Copyright © 2016 Elsevier Inc. All rights reserved.
RATIR Follow-up of LIGO/Virgo Gravitational Wave Events
NASA Astrophysics Data System (ADS)
Golkhou, V. Zach; Butler, Nathaniel R.; Strausbaugh, Robert; Troja, Eleonora; Kutyrev, Alexander; Lee, William H.; Román-Zúñiga, Carlos G.; Watson, Alan M.
2018-04-01
We have recently witnessed the first multi-messenger detection of colliding neutron stars through gravitational waves (GWs) and electromagnetic (EM) waves (GW 170817) thanks to the joint efforts of LIGO/Virgo and Space/Ground-based telescopes. In this paper, we report on the RATIR follow-up observation strategies and show the results for the trigger G194575. This trigger is not of astrophysical interest; however, it is of great interest to the robust design of a follow-up engine to explore large sky-error regions. We discuss the development of an image-subtraction pipeline for the six-color, optical/NIR imaging camera RATIR. Considering a two-band (i and r) campaign in the fall of 2015, we find that the requirement of simultaneous detection in both bands leads to a factor ∼10 reduction in false alarm rate, which can be further reduced using additional bands. We also show that the performance of our proposed algorithm is robust to fluctuating observing conditions, maintaining a low false alarm rate with a modest decrease in system efficiency that can be overcome utilizing repeat visits. Expanding our pipeline to search for either optical or NIR detections (three or more bands), considering separately the optical riZ and NIR YJH bands, should result in a false alarm rate ≈1% and an efficiency ≈90%. RATIR’s simultaneous optical/NIR observations are expected to yield about one candidate transient in the vast 100 deg2 LIGO error region for prioritized follow-up with larger aperture telescopes.
Costas loop lock detection in the advanced receiver
NASA Technical Reports Server (NTRS)
Mileant, A.; Hinedi, S.
1989-01-01
The advanced receiver currently being developed uses a Costas digital loop to demodulate the subcarrier. Previous analyses of lock detector algorithms for Costas loops have ignored the effects of the inherent correlation between the samples of the phase-error process. Accounting for this correlation is necessary to achieve the desired lock-detection probability for a given false-alarm rate. Both analysis and simulations are used to quantify the effects of phase correlation on lock detection for the square-law and the absolute-value type detectors. Results are obtained which depict the lock-detection probability as a function of loop signal-to-noise ratio for a given false-alarm rate. The mathematical model and computer simulation show that the square-law detector experiences less degradation due to phase jitter than the absolute-value detector and that the degradation in detector signal-to-noise ratio is more pronounced for square-wave than for sine-wave signals.
Determination of vigabatrin in plasma by reversed-phase high-performance liquid chromatography.
Tsanaclis, L M; Wicks, J; Williams, J; Richens, A
1991-05-01
A method is described for the determination of vigabatrin in 50 microliters of plasma by isocratic high-performance liquid chromatography using fluorescence detection. The procedure involves protein precipitation with methanol followed by precolumn derivatisation with o-phthaldialdehyde reagent. Separation of the derivatised vigabatrin was achieved on a Microsorb C18 column using a mobile phase of 10 mM orthophosphoric acid:acetonitrile:methanol (6:3:1) at a flow rate of 2.0 ml/min. Assay time is 15 min and chromatograms show no interference from commonly coadministered anticonvulsant drugs. The total analytical error within the range of 0.85-85 micrograms/ml was found to be 7.6% with the within-replicates error of 2.76%. The minimum detection limit was 0.08 micrograms/ml and the minimum quantitation limit was 0.54 micrograms/ml.
Combinatorial pulse position modulation for power-efficient free-space laser communications
NASA Technical Reports Server (NTRS)
Budinger, James M.; Vanderaar, M.; Wagner, P.; Bibyk, Steven
1993-01-01
A new modulation technique called combinatorial pulse position modulation (CPPM) is presented as a power-efficient alternative to quaternary pulse position modulation (QPPM) for direct-detection, free-space laser communications. The special case of 16C4PPM is compared to QPPM in terms of data throughput and bit error rate (BER) performance for similar laser power and pulse duty cycle requirements. The increased throughput from CPPM enables the use of forward error corrective (FEC) encoding for a net decrease in the amount of laser power required for a given data throughput compared to uncoded QPPM. A specific, practical case of coded CPPM is shown to reduce the amount of power required to transmit and receive a given data sequence by at least 4.7 dB. Hardware techniques for maximum likelihood detection and symbol timing recovery are presented.
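A small calculation illustrating the throughput argument above: one 16C4PPM symbol carries log2 C(16,4) bits in 16 slots with 4 pulses, versus four QPPM symbols (2 bits each, one pulse per 4 slots) in the same 16 slots and with the same pulse duty cycle. Only the raw combinatorial throughput is computed here; the 4.7 dB figure quoted in the abstract additionally depends on the specific FEC code and detection scheme, which are not reproduced.

from math import comb, log2

bits_cppm = log2(comb(16, 4))   # one 16-slot symbol with 4 pulses: log2(1820) ~ 10.83 bits
bits_qppm = 4 * log2(4)         # four 4-slot QPPM symbols in 16 slots: 8 bits
print(f"16C4PPM: {bits_cppm:.2f} bits / 16 slots")
print(f"QPPM   : {bits_qppm:.0f} bits / 16 slots")
print(f"throughput gain: {bits_cppm / bits_qppm:.2f}x")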
Particle Swarm Optimization approach to defect detection in armour ceramics.
Kesharaju, Manasa; Nagarajah, Romesh
2017-03-01
In this research, various extracted features were used in the development of an automated ultrasonic sensor-based inspection system that enables defect classification in each ceramic component prior to despatch to the field. Classification is an important task, and the large number of irrelevant, redundant features commonly introduced into a dataset reduces a classifier's performance. Feature selection aims to reduce the dimensionality of the dataset while improving the performance of a classification system. In the context of a multi-criteria optimization problem (i.e., to minimize the classification error rate and reduce the number of features) such as the one discussed in this research, the literature suggests that evolutionary algorithms offer good results. Moreover, it is noted that Particle Swarm Optimization (PSO) has not been explored, especially in the field of classification of high-frequency ultrasonic signals. Hence, a binary-coded Particle Swarm Optimization (BPSO) technique is investigated for the implementation of feature subset selection and to optimize the classification error rate. In the proposed method, the population data are used as input to an Artificial Neural Network (ANN)-based classification system to obtain the error rate, as the ANN serves as an evaluator of the PSO fitness function. Copyright © 2016. Published by Elsevier B.V.
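A compact sketch of binary PSO for feature-subset selection, assuming a sigmoid transfer function for the binary position update. The fitness here is a synthetic stand-in (mismatches against a hypothetical "useful" feature mask plus a size penalty); in the paper the fitness is the error rate of an ANN classifier trained on the selected ultrasonic features.

import numpy as np

rng = np.random.default_rng(0)
n_features, n_particles, iters = 20, 15, 60
useful = rng.random(n_features) < 0.3            # hypothetical "truly useful" features

def fitness(mask):
    # Stand-in for the ANN classification error: mismatches with the useful
    # mask, plus a small penalty on the number of selected features.
    return np.sum(mask != useful) + 0.1 * mask.sum()

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

x = (rng.random((n_particles, n_features)) < 0.5).astype(int)
v = rng.normal(0.0, 1.0, (n_particles, n_features))
pbest = x.copy()
pbest_fit = np.array([fitness(p) for p in x])
gbest = pbest[pbest_fit.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(v.shape), rng.random(v.shape)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = (rng.random(v.shape) < sigmoid(v)).astype(int)   # binary position update
    fit = np.array([fitness(p) for p in x])
    improved = fit < pbest_fit
    pbest[improved] = x[improved]
    pbest_fit[improved] = fit[improved]
    gbest = pbest[pbest_fit.argmin()].copy()

print("selected feature indices:", np.flatnonzero(gbest))
print("best fitness:", pbest_fit.min())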
An Autonomous Self-Aware and Adaptive Fault Tolerant Routing Technique for Wireless Sensor Networks
Abba, Sani; Lee, Jeong-A
2015-01-01
We propose an autonomous self-aware and adaptive fault-tolerant routing technique (ASAART) for wireless sensor networks. We address the limitations of self-healing routing (SHR) and self-selective routing (SSR) techniques for routing sensor data. We also examine the integration of autonomic self-aware and adaptive fault detection and resiliency techniques for route formation and route repair to provide resilience to errors and failures. We achieved this by using a combined continuous and slotted prioritized transmission back-off delay to obtain local and global network state information, as well as multiple random functions for attaining faster routing convergence and reliable route repair despite transient and permanent node failure rates and efficient adaptation to instantaneous network topology changes. The results of simulations based on a comparison of the ASAART with the SHR and SSR protocols for five different simulated scenarios in the presence of transient and permanent node failure rates exhibit a greater resiliency to errors and failure and better routing performance in terms of the number of successfully delivered network packets, end-to-end delay, delivered MAC layer packets, packet error rate, as well as efficient energy conservation in a highly congested, faulty, and scalable sensor network. PMID:26295236
An Autonomous Self-Aware and Adaptive Fault Tolerant Routing Technique for Wireless Sensor Networks.
Abba, Sani; Lee, Jeong-A
2015-08-18
We propose an autonomous self-aware and adaptive fault-tolerant routing technique (ASAART) for wireless sensor networks. We address the limitations of self-healing routing (SHR) and self-selective routing (SSR) techniques for routing sensor data. We also examine the integration of autonomic self-aware and adaptive fault detection and resiliency techniques for route formation and route repair to provide resilience to errors and failures. We achieved this by using a combined continuous and slotted prioritized transmission back-off delay to obtain local and global network state information, as well as multiple random functions for attaining faster routing convergence and reliable route repair despite transient and permanent node failure rates and efficient adaptation to instantaneous network topology changes. The results of simulations based on a comparison of the ASAART with the SHR and SSR protocols for five different simulated scenarios in the presence of transient and permanent node failure rates exhibit a greater resiliency to errors and failure and better routing performance in terms of the number of successfully delivered network packets, end-to-end delay, delivered MAC layer packets, packet error rate, as well as efficient energy conservation in a highly congested, faulty, and scalable sensor network.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guiral, P.; Ribouton, J.; Jalade, P.
Purpose: High dose rate brachytherapy (HDR-BT) is widely used to treat gynecologic, anal, prostate, head, neck, and breast cancers. These treatments are typically administered in large doses per fraction (>5 Gy) and with high-gradient dose distributions, with serious consequences in case of a treatment delivery error (e.g., on dwell position and dwell time). Thus, quality assurance (QA) or quality control (QC) should be systematically and independently implemented. This paper describes the design and testing of a phantom and an instrumented gynecological applicator for pretreatment QA and in vivo QC, respectively. Methods: The authors have designed a HDR-BT phantom equipped with four GaN-based dosimeters. The authors have also instrumented a commercial multichannel HDR-BT gynecological applicator by rigid incorporation of four GaN-based dosimeters in four channels. Specific methods based on the four GaN dosimeter responses are proposed for accurate determination of dwell time and dwell position inside the phantom or applicator. The phantom and the applicator have been tested for routine HDR-BT QA over two different periods: 29 and 15 days, respectively. Measurements of dwell position and time are compared to the treatment plan. A modified position–time gamma index is used to monitor the quality of treatment delivery. Results: The HDR-BT phantom and the instrumented applicator have been used to determine more than 900 dwell positions over the different testing periods. The errors between the planned and measured dwell positions are 0.11 ± 0.70 mm (1σ) and 0.01 ± 0.42 mm (1σ), with the phantom and the applicator, respectively. The dwell time errors for these positions do not exhibit significant bias, with a standard deviation of less than 100 ms for both systems. The modified position–time gamma index sets a threshold, determining whether the treatment run passes or fails. The error detectability of their systems has been evaluated through tests on intentionally introduced error protocols. With a detection threshold of 0.7 mm, the error detection rate on dwell position is 22% at 0.5 mm, 96% at 1 mm, and 100% at and beyond 1.5 mm. On dwell time with a dwell time threshold of 0.1 s, it is 90% at 0.2 s and 100% at and beyond 0.3 s. Conclusions: The proposed HDR-BT phantom and instrumented applicator have been tested and their main characteristics have been evaluated. These systems perform unsupervised measurements and analysis without prior treatment plan information. They allow independent verification of dwell position and time with accuracy of measurements comparable with other similar systems reported in the literature.
An advanced SEU tolerant latch based on error detection
NASA Astrophysics Data System (ADS)
Xu, Hui; Zhu, Jianwei; Lu, Xiaoping; Li, Jingzhao
2018-05-01
This paper proposes a latch that can mitigate SEUs via an error detection circuit. The error detection circuit is hardened by a C-element and a stacked PMOS. In the hold state, a particle strike on the latch or on the error detection circuit may cause a faulty logic state of the circuit. The error detection circuit can detect the upset node in the latch, and the faulty output will be corrected. The upset node in the error detection circuit can be corrected by the C-element. The power dissipation and propagation delay of the proposed latch are analyzed by HSPICE simulations. The proposed latch consumes about 77.5% less energy and has 33.1% less propagation delay than the triple modular redundancy (TMR) latch. Simulation results demonstrate that the proposed latch can mitigate SEUs effectively. Project supported by the National Natural Science Foundation of China (Nos. 61404001, 61306046), the Anhui Province University Natural Science Research Major Project (No. KJ2014ZD12), the Huainan Science and Technology Program (No. 2013A4011), and the National Natural Science Foundation of China (No. 61371025).
[Detection and classification of medication errors at Joan XXIII University Hospital].
Jornet Montaña, S; Canadell Vilarrasa, L; Calabuig Muñoz, M; Riera Sendra, G; Vuelta Arce, M; Bardají Ruiz, A; Gallart Mora, M J
2004-01-01
Medication errors are multifactorial and multidisciplinary, and may originate in processes such as drug prescription, transcription, dispensation, preparation and administration. The goal of this work was to measure the incidence of detectable medication errors that arise within a unit dose drug distribution and control system, from drug prescription to drug administration, by means of an observational method confined to the Pharmacy Department, as well as a voluntary, anonymous report system. The acceptance of this voluntary report system's implementation was also assessed. A prospective descriptive study was conducted. Data collection was performed at the Pharmacy Department from a review of prescribed medical orders, a review of pharmaceutical transcriptions, a review of dispensed medication and a review of medication returned in unit dose medication carts. A voluntary, anonymous report system centralized in the Pharmacy Department was also set up to detect medication errors. Prescription errors were the most frequent (1.12%), closely followed by dispensation errors (1.04%). Transcription errors (0.42%) and administration errors (0.69%) had the lowest overall incidence. Voluntary report involved only 4.25% of all detected errors, whereas unit dose medication cart review contributed the most to error detection. Recognizing the incidence and types of medication errors that occur in a health-care setting allows us to analyze their causes and effect changes in different stages of the process in order to ensure maximal patient safety.
Gabriel, Rodney A; Kim, Helen; Sidney, Stephen; McCulloch, Charles E; Singh, Vineeta; Johnston, S Claiborne; Ko, Nerissa U; Achrol, Achal S; Zaroff, Jonathan G; Young, William L
2010-01-01
To evaluate whether increased neuroimaging use is associated with increased brain arteriovenous malformation (BAVM) detection, we examined detection rates in the Kaiser Permanente Medical Care Program of northern California between 1995 and 2004. We reviewed medical records, radiology reports, and administrative databases to identify BAVMs, intracranial aneurysms (IAs: subarachnoid hemorrhage [SAH] and unruptured aneurysms), and other vascular malformations (OVMs: dural fistulas, cavernous malformations, Vein of Galen malformations, and venous malformations). Poisson regression (with robust standard errors) was used to test for trend. Random-effects meta-analysis generated a pooled measure of BAVM detection rate from 6 studies. We identified 401 BAVMs (197 ruptured, 204 unruptured), 570 OVMs, and 2892 IAs (2079 SAHs and 813 unruptured IAs). Detection rates per 100 000 person-years were 1.4 (95% CI, 1.3 to 1.6) for BAVMs, 2.0 (95% CI, 1.8 to 2.3) for OVMs, and 10.3 (95% CI, 9.9 to 10.7) for IAs. Neuroimaging utilization increased 12% per year during the time period (P<0.001). Overall, rates increased for IAs (P<0.001), remained stable for OVMs (P=0.858), and decreased for BAVMs (P=0.001). Detection rates increased 15% per year for unruptured IAs (P<0.001), with no change in SAHs (P=0.903). However, rates decreased 7% per year for unruptured BAVMs (P=0.016) and 3% per year for ruptured BAVMs (P=0.005). Meta-analysis yielded a pooled BAVM detection rate of 1.3 (95% CI, 1.2 to 1.4) per 100 000 person-years, without heterogeneity between studies (P=0.25). Rates for BAVMs, OVMs, and IAs in this large, multiethnic population were similar to those in other series. During 1995 to 2004, a period of increasing neuroimaging utilization, we did not observe an increased rate of detection of unruptured BAVMs, despite increased detection of unruptured IAs.
Robust Crop and Weed Segmentation under Uncontrolled Outdoor Illumination
Jeon, Hong Y.; Tian, Lei F.; Zhu, Heping
2011-01-01
An image processing algorithm for detecting individual weeds was developed and evaluated. The weed detection processes included normalized excessive green conversion, statistical threshold value estimation, adaptive image segmentation, median filtering, morphological feature calculation and an Artificial Neural Network (ANN). The developed algorithm was validated for its ability to identify and detect weeds and crop plants under uncontrolled outdoor illumination. A field robot implementing machine vision captured field images under outdoor illumination, and the image processing algorithm processed them automatically without manual adjustment. The errors of the algorithm, when processing 666 field images, ranged from 2.1 to 2.9%. The ANN correctly detected 72.6% of crop plants from the identified plants, and considered the rest as weeds. However, the ANN identification rates for crop plants were improved up to 95.1% by addressing the error sources in the algorithm. The developed weed detection and image processing algorithm provides a novel method to identify plants against a soil background under uncontrolled outdoor illumination, and to differentiate weeds from crop plants. Thus, the proposed new machine vision and processing algorithm may be useful for outdoor applications including plant-specific direct applications (PSDA). PMID:22163954
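A minimal sketch of the vegetation-segmentation front end described above: the normalized excess-green index followed by a threshold. The fixed threshold and synthetic image stand in for the paper's statistical threshold estimation and adaptive segmentation; the median filter, morphological features, and ANN stages are omitted.

import numpy as np

def excess_green(rgb):
    """rgb: float array (H, W, 3) with values in [0, 1]; returns normalized ExG."""
    s = rgb.sum(axis=2, keepdims=True) + 1e-6        # avoid division by zero
    n = rgb / s
    return 2.0 * n[..., 1] - n[..., 0] - n[..., 2]   # 2g - r - b

def segment_vegetation(rgb, threshold=0.1):
    """Boolean vegetation mask; the fixed threshold is illustrative only."""
    return excess_green(rgb) > threshold

# Synthetic example: a green plant patch on brownish soil.
img = np.zeros((64, 64, 3))
img[:] = [0.45, 0.35, 0.25]                          # soil background
img[20:40, 20:40] = [0.20, 0.60, 0.20]               # plant patch
mask = segment_vegetation(img)
print("vegetation pixel fraction:", round(float(mask.mean()), 3))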
NASA Technical Reports Server (NTRS)
Liu, Jian; Kim, Junghwan; Kwatra, S. C.; Stevens, Grady H.
1991-01-01
Aspects of error performance of various power and bandwidth efficient modulations for the land mobile satellite systems (LMSS) were investigated under multipath fading and interferences by using Monte-Carlo simulation. A differential detection for 16QAM (quadrature amplitude modulation) was proposed to cope with Ricean fading and Doppler shift. Computer simulation results show that the performance of 16QAM with differential detection is as good as that of 16PSK with coherent detection and 3 dB better than that of 16PSK with differential detection, although it degrades by about 4.5 dB as compared to 16QAM with coherent detection under an additive white Gaussian noise (AWGN) channel. For the nonlinear channels, 16QAM with modified signal constellations is introduced and analyzed. The simulation results show that the modified 16QAM exhibits a gain of 2.5 dB over 16PSK under traveling-wave tube nonlinearity, and about 4 dB gain over 16PSK at the bit error rate of 10 exp -5 under AWGN. Computer simulation results for modified 16 QAM under cochannel interference and adjacent-channel interference are also presented.
The effect of monetary punishment on error evaluation in a Go/No-go task.
Maruo, Yuya; Sommer, Werner; Masaki, Hiroaki
2017-10-01
Little is known about the effects of the motivational significance of errors in Go/No-go tasks. We investigated the impact of monetary punishment on the error-related negativity (ERN) and error positivity (Pe) for both overt errors and partial errors, that is, no-go trials without overt responses but with covert muscle activity. We compared high and low punishment conditions, where errors were penalized with 50 or 5 yen, respectively, and a control condition without monetary consequences for errors. Because we hypothesized that the partial-error ERN might overlap with the no-go N2, we compared ERPs between correct rejections (i.e., successful no-go trials) and partial errors in no-go trials. We also expected that Pe amplitudes should increase with the severity of the penalty for errors. Mean error rates were significantly lower in the high punishment than in the control condition. Monetary punishment did not influence the overt-error ERN and partial-error ERN in no-go trials. The ERN in no-go trials did not differ between partial errors and overt errors; in addition, ERPs for correct rejections in no-go trials without partial errors were of the same size as in go trials. Therefore, the overt-error ERN and the partial-error ERN may share similar error monitoring processes. Monetary punishment increased Pe amplitudes for overt errors, suggesting enhanced error evaluation processes. For partial errors, an early Pe was observed, presumably representing inhibition processes. Interestingly, even partial errors elicited the Pe, suggesting that covert erroneous activities could be detected in Go/No-go tasks. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Signal processing methodologies for an acoustic fetal heart rate monitor
NASA Technical Reports Server (NTRS)
Pretlow, Robert A., III; Stoughton, John W.
1992-01-01
Research and development is presented of real time signal processing methodologies for the detection of fetal heart tones within a noise-contaminated signal from a passive acoustic sensor. A linear predictor algorithm is utilized for detection of the heart tone event and additional processing derives heart rate. The linear predictor is adaptively 'trained' in a least mean square error sense on generic fetal heart tones recorded from patients. A real time monitor system is described which outputs to a strip chart recorder for plotting the time history of the fetal heart rate. The system is validated in the context of the fetal nonstress test. Comparisons are made with ultrasonic nonstress tests on a series of patients. Comparative data provides favorable indications of the feasibility of the acoustic monitor for clinical use.
Optical communication with semiconductor laser diodes
NASA Technical Reports Server (NTRS)
Davidson, F.
1988-01-01
Slot timing recovery in a direct detection optical PPM communication system can be achieved by processing the photodetector waveform with a nonlinear device whose output forms the input to a phase-locked loop. The choice of a simple transition detector as the nonlinearity is shown to give satisfactory synchronization performance. The rms phase error of the recovered slot clock and the effect of slot timing jitter on the bit error probability were directly measured. The experimental system consisted of an AlGaAs laser diode (lambda = 834 nm) and a silicon avalanche photodiode (APD) photodetector and used Q=4 PPM signaling operated at a source data rate of 25 megabits/second. The mathematical model developed to characterize system performance is shown to be in good agreement with actual performance measurements. The use of the recovered slot clock in the receiver resulted in no degradation in receiver sensitivity compared to a system with perfect slot timing. The system achieved a bit error probability of 10 to the minus 6 power at received signal energies corresponding to an average of less than 60 detected photons per information bit.
Design of the Detector II: A CMOS Gate Array for the Study of Concurrent Error Detection Techniques.
1987-07-01
The gate array was designed for the study of concurrent error detection (CED) schemes and temporary failures. The circuit consists of six different adders with concurrent error detection schemes: simple duplication, duplication with functional dual implementation, duplication with different implementations, and two-rail encoding.
Danielson, Patrick; Yang, Limin; Jin, Suming; Homer, Collin G.; Napton, Darrell
2016-01-01
We developed a method that analyzes the quality of the cultivated cropland class mapped in the USA National Land Cover Database (NLCD) 2006. The method integrates multiple geospatial datasets and a Multi Index Integrated Change Analysis (MIICA) change detection method that captures spectral changes to identify the spatial distribution and magnitude of potential commission and omission errors for the cultivated cropland class in NLCD 2006. The majority of the commission and omission errors in NLCD 2006 are in areas where cultivated cropland is not the most dominant land cover type. The errors are primarily attributed to the less accurate training dataset derived from the National Agricultural Statistics Service Cropland Data Layer dataset. In contrast, error rates are low in areas where cultivated cropland is the dominant land cover. Agreement between model-identified commission errors and independently interpreted reference data was high (79%). Agreement was low (40%) for omission error comparison. The majority of the commission errors in the NLCD 2006 cultivated crops were confused with low-intensity developed classes, while the majority of omission errors were from herbaceous and shrub classes. Some errors were caused by inaccurate land cover change from misclassification in NLCD 2001 and the subsequent land cover post-classification process.
Horowitz-Kraus, Tzipi
2016-05-01
The error-detection mechanism aids in preventing error repetition during a given task. Electroencephalography demonstrates that error detection involves two event-related potential components: error-related and correct-response negativities (ERN and CRN, respectively). Dyslexia is characterized by slow, inaccurate reading. In particular, individuals with dyslexia have a less active error-detection mechanism during reading than typical readers. In the current study, we examined whether a reading training programme could improve the ability to recognize words automatically (lexical representations) in adults with dyslexia, thereby resulting in more efficient error detection during reading. Behavioural and electrophysiological measures were obtained using a lexical decision task before and after participants trained with the reading acceleration programme. ERN amplitudes were smaller in individuals with dyslexia than in typical readers before training but increased following training, as did behavioural reading scores. Differences between the pre-training and post-training ERN and CRN components were larger in individuals with dyslexia than in typical readers. Also, the error-detection mechanism as represented by the ERN/CRN complex might serve as a biomarker for dyslexia and be used to evaluate the effectiveness of reading intervention programmes. Copyright © 2016 John Wiley & Sons, Ltd.
Olson, Eric J.
2013-06-11
An apparatus, program product, and method that run an algorithm on a hardware based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to cause the hardware based processor to heat to a degree that increases the likelihood of hardware errors to manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run that may otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).
Avila, F J; Pensado, A; Esteva, C
1996-05-01
To evaluate the accuracy of bibliographic references in REVISTA ESPANOLA DE ANESTESIOLOGIA Y REANIMACION (REDAR) and compare it with other Spanish and international journals. One hundred references were selected at random from those published in REDAR during 1994. A citation was considered correct if there were no differences between it and the original article in any of six standard citation fields, and if it complied with REDAR citation style. A citation was considered incorrect if there were in fact differences or if REDAR style was not followed. Errors that interfered with direct access to the original were considered serious. Also considered serious was the omission of the first author. Some type of error was detected in 53.9% of the references. Twelve contained a serious error, which on 5 occasions impeded finding the original article and on 6 occasions made direct access difficult. The first author was missing in 1 citation. Errors were found, in order of decreasing frequency, in authors, article titles, journal title, volume, pages and year. A single error was found in 28 citations, 2 were found in 12, 3 were found in 2 and more than 3 were found in 1. REDAR's rate of error in references is comparable to the rates of other Spanish journals, but it is nearly double that of international journals in anesthesiology with higher impact factors (Anesthesiology, Canadian Journal of Anaesthesia). An effort must be made by authors and editors to remedy the situation.
A two-phase sampling design for increasing detections of rare species in occupancy surveys
Pacifici, Krishna; Dorazio, Robert M.; Dorazio, Michael J.
2012-01-01
1. Occupancy estimation is a commonly used tool in ecological studies owing to the ease with which data can be collected and the large spatial extent that can be covered. One major obstacle to using an occupancy-based approach is the complications associated with designing and implementing an efficient survey. These logistical challenges become magnified when working with rare species, when effort can be wasted in areas with no or very few individuals. 2. Here, we develop a two-phase sampling approach that mitigates these problems by using a design that places more effort in areas with a higher predicted probability of occurrence. We compare our new sampling design to traditional single-season occupancy estimation under a range of conditions and population characteristics. We develop an intuitive measure of predictive error to compare the two approaches and use simulations to assess the relative accuracy of each approach. 3. Our two-phase approach exhibited lower predictive error rates compared to the traditional single-season approach in highly spatially correlated environments. The difference was greatest when detection probability was high (0·75), regardless of the habitat or sample size. When the true occupancy rate was below 0·4 (0·05-0·4), we found that allocating 25% of the sample to the first phase resulted in the lowest error rates. 4. In the majority of scenarios, the two-phase approach showed lower error rates compared to the traditional single-season approach, suggesting our new approach is fairly robust to a broad range of conditions and design factors and merits use under a wide variety of settings. 5. Synthesis and applications. Conservation and management of rare species is a challenging task facing natural resource managers. It is critical for studies involving rare species to efficiently allocate effort and resources as they are usually of a finite nature. We believe our approach provides a framework for optimal allocation of effort while maximizing the information content of the data in an attempt to provide the highest conservation value per unit of effort.
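A simplified simulation of the two-phase idea: spend 25% of the survey budget on a random first phase, fit a crude occupancy predictor from an assumed-available habitat covariate, then target the remaining effort at the sites with the highest predicted occupancy. The single covariate, the logistic fit, and all numbers are illustrative; the paper's design and estimator are more general.

import numpy as np

rng = np.random.default_rng(1)
n_sites, budget, p_detect, n_visits = 500, 100, 0.75, 3

habitat = rng.normal(size=n_sites)                    # known site covariate
psi = 1.0 / (1.0 + np.exp(-(-2.0 + 1.5 * habitat)))   # true (low) occupancy probability
occupied = rng.random(n_sites) < psi

def survey(sites):
    """1 if the species is detected at least once in n_visits visits, else 0."""
    det = rng.random((len(sites), n_visits)) < p_detect
    return (occupied[sites] & det.any(axis=1)).astype(float)

def predict(beta, x):
    z = np.clip(beta[0] + beta[1] * x, -30, 30)
    return 1.0 / (1.0 + np.exp(-z))

# Phase 1: spend 25% of the budget on randomly chosen sites.
phase1 = rng.choice(n_sites, size=budget // 4, replace=False)
y1 = survey(phase1)

# Crude logistic predictor fit by gradient ascent on the phase-1 data.
beta = np.zeros(2)
for _ in range(500):
    resid = y1 - predict(beta, habitat[phase1])
    beta += 0.05 * np.array([resid.mean(), (resid * habitat[phase1]).mean()])

# Phase 2: the remaining 75% of the budget targets the highest-predicted unsurveyed sites.
remaining = np.setdiff1d(np.arange(n_sites), phase1)
order = np.argsort(-predict(beta, habitat[remaining]))
phase2 = remaining[order[: budget - len(phase1)]]
y2 = survey(phase2)

print("phase-1 (random) detection rate  :", round(float(y1.mean()), 3))
print("phase-2 (targeted) detection rate:", round(float(y2.mean()), 3))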
Conditional Outlier Detection for Clinical Alerting
Hauskrecht, Milos; Valko, Michal; Batal, Iyad; Clermont, Gilles; Visweswaran, Shyam; Cooper, Gregory F.
2010-01-01
We develop and evaluate a data-driven approach for detecting unusual (anomalous) patient-management actions using past patient cases stored in an electronic health record (EHR) system. Our hypothesis is that patient-management actions that are unusual with respect to past patients may be due to a potential error and that it is worthwhile to raise an alert if such a condition is encountered. We evaluate this hypothesis using data obtained from the electronic health records of 4,486 post-cardiac surgical patients. We base the evaluation on the opinions of a panel of experts. The results support that anomaly-based alerting can have reasonably low false alert rates and that stronger anomalies are correlated with higher alert rates. PMID:21346986
Conditional outlier detection for clinical alerting.
Hauskrecht, Milos; Valko, Michal; Batal, Iyad; Clermont, Gilles; Visweswaran, Shyam; Cooper, Gregory F
2010-11-13
We develop and evaluate a data-driven approach for detecting unusual (anomalous) patient-management actions using past patient cases stored in an electronic health record (EHR) system. Our hypothesis is that patient-management actions that are unusual with respect to past patients may be due to a potential error and that it is worthwhile to raise an alert if such a condition is encountered. We evaluate this hypothesis using data obtained from the electronic health records of 4,486 post-cardiac surgical patients. We base the evaluation on the opinions of a panel of experts. The results support that anomaly-based alerting can have reasonably low false alert rates and that stronger anomalies are correlated with higher alert rates.
IMRT QA: Selecting gamma criteria based on error detection sensitivity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steers, Jennifer M.; Fraass, Benedick A., E-mail: benedick.fraass@cshs.org
Purpose: The gamma comparison is widely used to evaluate the agreement between measurements and treatment planning system calculations in patient-specific intensity modulated radiation therapy (IMRT) quality assurance (QA). However, recent publications have raised concerns about the lack of sensitivity when employing commonly used gamma criteria. Understanding the actual sensitivity of a wide range of different gamma criteria may allow the definition of more meaningful gamma criteria and tolerance limits in IMRT QA. We present a method that allows the quantitative determination of gamma criteria sensitivity to induced errors which can be applied to any unique combination of device, delivery technique, and software utilized in a specific clinic. Methods: A total of 21 DMLC IMRT QA measurements (ArcCHECK®, Sun Nuclear) were compared to QA plan calculations with induced errors. Three scenarios were studied: MU errors, multi-leaf collimator (MLC) errors, and the sensitivity of the gamma comparison to changes in penumbra width. Gamma comparisons were performed between measurements and error-induced calculations using a wide range of gamma criteria, resulting in a total of over 20 000 gamma comparisons. Gamma passing rates for each error class and case were graphed against error magnitude to create error curves in order to represent the range of missed errors in routine IMRT QA using 36 different gamma criteria. Results: This study demonstrates that systematic errors and case-specific errors can be detected by the error curve analysis. Depending on the location of the error curve peak (e.g., not centered about zero), 3%/3 mm threshold = 10% at 90% pixels passing may miss errors as large as 15% MU errors and ±1 cm random MLC errors for some cases. As the dose threshold parameter was increased for a given %Diff/distance-to-agreement (DTA) setting, error sensitivity was increased by up to a factor of two for select cases. This increased sensitivity with increasing dose threshold was consistent across all studied combinations of %Diff/DTA. Criteria such as 2%/3 mm and 3%/2 mm with a 50% threshold at 90% pixels passing are shown to be more appropriately sensitive without being overly strict. However, a broadening of the penumbra by as much as 5 mm in the beam configuration was difficult to detect with commonly used criteria, as well as with the previously mentioned criteria utilizing a threshold of 50%. Conclusions: We have introduced the error curve method, an analysis technique which allows the quantitative determination of gamma criteria sensitivity to induced errors. The application of the error curve method using DMLC IMRT plans measured on the ArcCHECK® device demonstrated that large errors can potentially be missed in IMRT QA with commonly used gamma criteria (e.g., 3%/3 mm, threshold = 10%, 90% pixels passing). Additionally, increasing the dose threshold value can offer dramatic increases in error sensitivity. This approach may allow the selection of more meaningful gamma criteria for IMRT QA and is straightforward to apply to other combinations of devices and treatment techniques.
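A one-dimensional toy version of the error-curve idea: a fixed "measured" profile is compared against calculations with deliberately scaled output (an MU-like error), and the gamma passing rate is swept over error magnitude. The global-normalized 1D gamma below, the Gaussian test profile, and the criteria values are simplifications for illustration, not the paper's 2D/3D implementation.

import numpy as np

def gamma_pass_rate(measured, reference, dx, dose_crit, dta_crit, threshold=0.5):
    """Fraction of above-threshold measured points with gamma <= 1 (1D, global norm)."""
    x = np.arange(len(measured)) * dx
    d_norm = dose_crit * measured.max()           # global dose normalization
    gammas = []
    for xm, dm in zip(x, measured):
        if dm < threshold * measured.max():       # dose threshold (e.g., 50%)
            continue
        g2 = ((dm - reference) / d_norm) ** 2 + ((xm - x) / dta_crit) ** 2
        gammas.append(np.sqrt(g2.min()))
    return np.mean(np.array(gammas) <= 1.0)

x = np.linspace(-50, 50, 201)                     # mm, 0.5 mm spacing
measured = np.exp(-0.5 * (x / 15.0) ** 2)         # toy dose profile

for err in [0.00, 0.02, 0.05, 0.10, 0.15]:        # induced MU-like scaling errors
    calc = (1 + err) * measured
    rate = gamma_pass_rate(measured, calc, dx=0.5, dose_crit=0.03, dta_crit=3.0)
    print(f"{err:+.0%} MU error -> 3%/3 mm passing rate {rate:.0%}")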
Double ErrP Detection for Automatic Error Correction in an ERP-Based BCI Speller.
Cruz, Aniana; Pires, Gabriel; Nunes, Urbano J
2018-01-01
Brain-computer interface (BCI) is a useful device for people with severe motor disabilities. However, due to its low speed and low reliability, BCI still has a very limited application in daily real-world tasks. This paper proposes a P300-based BCI speller combined with a double error-related potential (ErrP) detection to automatically correct erroneous decisions. This novel approach introduces a second error detection to infer whether wrong automatic correction also elicits a second ErrP. Thus, two single-trial responses, instead of one, contribute to the final selection, improving the reliability of error detection. Moreover, to increase error detection, the evoked potential detected as target by the P300 classifier is combined with the evoked error potential at a feature-level. Discriminable error and positive potentials (response to correct feedback) were clearly identified. The proposed approach was tested on nine healthy participants and one tetraplegic participant. The online average accuracy for the first and second ErrPs were 88.4% and 84.8%, respectively. With automatic correction, we achieved an improvement around 5% achieving 89.9% in spelling accuracy for an effective 2.92 symbols/min. The proposed approach revealed that double ErrP detection can improve the reliability and speed of BCI systems.
The Bayesian Approach to Association
NASA Astrophysics Data System (ADS)
Arora, N. S.
2017-12-01
The Bayesian approach to association focuses mainly on quantifying the physics of the domain. In the case of seismic association, for instance, let X be the set of all significant events (above some threshold) and their attributes, such as location, time, and magnitude; Y1 be the set of detections that are caused by significant events and their attributes, such as seismic phase, arrival time, amplitude, etc.; Y2 be the set of detections that are not caused by significant events; and finally Y be the set of observed detections. We then define the joint distribution P(X, Y1, Y2, Y) = P(X) P(Y1 | X) P(Y2) I(Y = Y1 + Y2), where the last term simply states that Y1 and Y2 are a partitioning of Y. Given the above joint distribution, the inference problem is simply to find the X, Y1, and Y2 that maximize the posterior probability P(X, Y1, Y2 | Y), which reduces to maximizing P(X) P(Y1 | X) P(Y2) I(Y = Y1 + Y2). In this expression, P(X) captures our prior belief about event locations. P(Y1 | X) captures notions of travel time and residual error distributions as well as detection and mis-detection probabilities, while P(Y2) captures the false detection rate of our seismic network. The elegance of this approach is that all of the assumptions are stated clearly in the model for P(X), P(Y1 | X) and P(Y2). The implementation of the inference is merely a by-product of this model. In contrast, some other methods, such as GA, hide a number of assumptions in the implementation details of the inference - such as the so-called "driver cells." The other important aspect of this approach is that all seismic knowledge, including knowledge from other domains such as infrasound and hydroacoustics, can be included in the same model. So, we don't need to separately account for misdetections or merge seismic and infrasound events as a separate step. Finally, it should be noted that the objective of automatic association is to simplify the job of humans who are publishing seismic bulletins based on this output. The error metric for association should accordingly count errors such as missed events much higher than spurious events because the former require more work from humans. Furthermore, the error rate needs to be weighted higher during periods of high seismicity such as an aftershock sequence when the human effort tends to increase.
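A schematic scoring function for one candidate association under the decomposition P(X) P(Y1 | X) P(Y2). The uniform event prior, straight-line travel-time model, Gaussian residual scale, detection probability, and rate parameters are all illustrative stand-ins; a real model includes amplitudes, phases, many stations, and realistic travel-time tables.

import math

def travel_time(ev, st, velocity=8.0):
    # Crude straight-line travel time (distance units / velocity); a stand-in
    # for a real travel-time model.
    return math.dist((ev["x"], ev["y"]), (st["x"], st["y"])) / velocity

def log_score(events, assoc_dets, unassoc_dets, stations,
              event_rate, false_rate, sigma_t=1.5, p_detect=0.8):
    # log P(X): Poisson number of events (constant log k! terms omitted),
    # with an implicit uniform prior over the monitored region.
    lp = -event_rate + len(events) * math.log(event_rate)
    # log P(Y1 | X): each event-station pair is either detected, with a
    # Gaussian travel-time residual, or missed.
    for ev in events:
        for st in stations:
            det = assoc_dets.get((ev["id"], st["id"]))
            if det is None:
                lp += math.log(1 - p_detect)
            else:
                resid = det["arrival"] - (ev["time"] + travel_time(ev, st))
                lp += math.log(p_detect)
                lp += -0.5 * (resid / sigma_t) ** 2 - math.log(sigma_t * math.sqrt(2 * math.pi))
    # log P(Y2): false detections as a Poisson process over the time window.
    lp += -false_rate + len(unassoc_dets) * math.log(false_rate)
    return lp

event = {"id": 0, "time": 10.0, "x": 0.0, "y": 0.0}
station = {"id": "STA1", "x": 40.0, "y": 30.0}
detection = {"arrival": 10.0 + 50.0 / 8.0 + 0.4}   # small travel-time residual
print(log_score([event], {(0, "STA1"): detection}, unassoc_dets=[],
                stations=[station], event_rate=0.5, false_rate=2.0))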
NASA Astrophysics Data System (ADS)
Chang, Chun; Huang, Benxiong; Xu, Zhengguang; Li, Bin; Zhao, Nan
2018-02-01
Three soft-input-soft-output (SISO) detection methods for dual-polarized quadrature duobinary (DP-QDB), including maximum-logarithmic-maximum-a-posteriori-probability-algorithm (Max-log-MAP)-based detection, soft-output-Viterbi-algorithm (SOVA)-based detection, and a proposed SISO detection, which can all be combined with SISO decoding, are presented. The three detection methods are investigated at 128 Gb/s in five-channel wavelength-division-multiplexing uncoded and low-density-parity-check (LDPC) coded DP-QDB systems by simulations. Max-log-MAP-based detection needs the returning-to-initial-states (RTIS) process despite having the best performance. When the LDPC code with a code rate of 0.83 is used, the detecting-and-decoding scheme with the SISO detection does not need RTIS and has better bit error rate (BER) performance than the scheme with SOVA-based detection. The former can reduce the optical signal-to-noise ratio (OSNR) requirement (at BER=10-5) by 2.56 dB relative to the latter. The application of the SISO iterative detection in LDPC-coded DP-QDB systems makes a good trade-off between requirements on transmission efficiency, OSNR requirement, and transmission distance, compared with the other two SISO methods.
High-density force myography: A possible alternative for upper-limb prosthetic control.
Radmand, Ashkan; Scheme, Erik; Englehart, Kevin
2016-01-01
Several multiple degree-of-freedom upper-limb prostheses that have the promise of highly dexterous control have recently been developed. Inadequate controllability, however, has limited adoption of these devices. Introducing more robust control methods will likely result in higher acceptance rates. This work investigates the suitability of using high-density force myography (HD-FMG) for prosthetic control. HD-FMG uses a high-density array of pressure sensors to detect changes in the pressure patterns between the residual limb and socket caused by the contraction of the forearm muscles. In this work, HD-FMG outperforms the standard electromyography (EMG)-based system in detecting different wrist and hand gestures. With the arm in a fixed, static position, eight hand and wrist motions were classified with 0.33% error using the HD-FMG technique. Comparatively, classification errors in the range of 2.2%-11.3% have been reported in the literature for multichannel EMG-based approaches. As with EMG, position variation in HD-FMG can introduce classification error, but incorporating position variation into the training protocol reduces this effect. Channel reduction was also applied to the HD-FMG technique to decrease the dimensionality of the problem as well as the size of the sensorized area. We found that with informed, symmetric channel reduction, classification error could be decreased to 0.02%.
Finding intonational boundaries using acoustic cues related to the voice source
NASA Astrophysics Data System (ADS)
Choi, Jeung-Yoon; Hasegawa-Johnson, Mark; Cole, Jennifer
2005-10-01
Acoustic cues related to the voice source, including harmonic structure and spectral tilt, were examined for relevance to prosodic boundary detection. The measurements considered here comprise five categories: duration, pitch, harmonic structure, spectral tilt, and amplitude. Distributions of the measurements and statistical analysis show that the measurements may be used to differentiate between prosodic categories. Detection experiments on the Boston University Radio Speech Corpus show equal error detection rates around 70% for accent and boundary detection, using only the acoustic measurements described, without any lexical or syntactic information. Further investigation of the detection results shows that duration and amplitude measurements, and, to a lesser degree, pitch measurements, are useful for detecting accents, while all voice source measurements except pitch measurements are useful for boundary detection.
Register file soft error recovery
Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.
2013-10-15
Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.
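A behavioural sketch of the mirrored-register-file idea in the abstract above: reads pass through an error check, and a detected corruption triggers recovery that copies the value back from the mirror copy. Parity as the detection code and the inline restore (standing in for the inserted error recovery instruction) are illustrative assumptions; the abstract describes the mechanism only at the block level.

def parity(word):
    return bin(word & 0xFFFFFFFF).count("1") & 1

class MirroredRegFile:
    def __init__(self, n=32):
        self.primary = [0] * n
        self.check = [0] * n
        self.mirror = [0] * n

    def write(self, idx, value):
        self.primary[idx] = value
        self.check[idx] = parity(value)
        self.mirror[idx] = value            # second register file mirrors the first

    def read(self, idx):
        value = self.primary[idx]
        if parity(value) != self.check[idx]:    # corrupted data detected on read
            value = self.mirror[idx]            # recovery: restore from the mirror copy
            self.primary[idx] = value
            self.check[idx] = parity(value)
        return value

rf = MirroredRegFile()
rf.write(3, 0xDEADBEEF)
rf.primary[3] ^= 1 << 7                          # inject a single-bit soft error
print(hex(rf.read(3)))                           # recovered value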
NASA Technical Reports Server (NTRS)
Bernacki, Bruce E.; Mansuripur, M.
1992-01-01
A commonly used tracking method on pre-grooved magneto-optical (MO) media is the push-pull technique, and the astigmatic method is a popular focus-error detection approach. These two methods are analyzed using DIFFRACT, a general-purpose scalar diffraction modeling program, to observe the effects on the error signals due to focusing lens misalignment, Seidel aberrations, and optical crosstalk (feedthrough) between the focusing and tracking servos. Using the results of the astigmatic/push-pull system as a basis for comparison, a novel focus/track-error detection technique that utilizes a ring toric lens is evaluated as well as the obscuration method (focus error detection only).
Error detection and correction unit with built-in self-test capability for spacecraft applications
NASA Technical Reports Server (NTRS)
Timoc, Constantin
1990-01-01
The objective of this project was to research and develop a 32-bit single chip Error Detection and Correction unit capable of correcting all single bit errors and detecting all double bit errors in the memory systems of a spacecraft. We designed the 32-bit EDAC (Error Detection and Correction unit) based on a modified Hamming code and according to the design specifications and performance requirements. We constructed a laboratory prototype (breadboard) which was converted into a fault simulator. The correctness of the design was verified on the breadboard using an exhaustive set of test cases. A logic diagram of the EDAC was delivered to JPL Section 514 on 4 Oct. 1988.
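A generic software sketch of a Hamming SEC-DED code over 32 data bits (six position-coded check bits plus an overall parity bit), correcting any single-bit error and detecting any double-bit error. The exact parity-check matrix of the flight EDAC's modified Hamming code is not given in the abstract, so this textbook construction is illustrative only.

def encode(data32):
    # Positions 1..38; check bits sit at powers of two, data fills the rest.
    code = [0] * 39                       # index 0 unused
    data_bits = [(data32 >> i) & 1 for i in range(32)]
    k = 0
    for pos in range(1, 39):
        if pos & (pos - 1):               # not a power of two -> data position
            code[pos] = data_bits[k]
            k += 1
    for p in (1, 2, 4, 8, 16, 32):        # each check bit covers positions containing bit p
        code[p] = sum(code[i] for i in range(1, 39) if i & p) & 1
    overall = sum(code[1:]) & 1           # overall parity enables double-error detection
    return code, overall

def decode(code, overall):
    syndrome = 0
    for p in (1, 2, 4, 8, 16, 32):
        if sum(code[i] for i in range(1, 39) if i & p) & 1:
            syndrome |= p
    parity_ok = (sum(code[1:]) & 1) == overall
    if syndrome == 0 and parity_ok:
        status = "no error"
    elif syndrome != 0 and not parity_ok:
        code[syndrome] ^= 1               # correct the single flipped bit
        status = f"single error at position {syndrome}, corrected"
    elif syndrome == 0 and not parity_ok:
        status = "single error in the overall parity bit"
    else:
        status = "double error detected (uncorrectable)"
    data, k = 0, 0
    for pos in range(1, 39):
        if pos & (pos - 1):
            data |= code[pos] << k
            k += 1
    return data, status

code, overall = encode(0xCAFEBABE)
code[11] ^= 1                             # inject a single-bit error
data, status = decode(code, overall)
print(hex(data), "|", status)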
ERIC Educational Resources Information Center
Ari, Omer
2009-01-01
Fluency instruction has had limited effects on reading comprehension relative to reading rate and prosodic reading (Dowhower, 1987; Herman, 1985; National Institute of Child Health and Human Development, 2000a). More specific components (i.e., error detection) of comprehension may yield larger effects through exposure to a wider range of materials…
A colorimetric sensor array for identification of toxic gases below permissible exposure limits
Feng, Liang; Musto, Christopher J.; Kemling, Jonathan W.; Lim, Sung H.; Suslick, Kenneth S.
2010-01-01
A colorimetric sensor array has been developed for the rapid and sensitive detection of 20 toxic industrial chemicals (TICs) at their PELs (permissible exposure limits). The color changes in an array of chemically responsive nanoporous pigments provide facile identification of the TICs with an error rate below 0.7%. PMID:20221484
The Occurrence and the Success Rate of Self-Initiated Self-Repair
ERIC Educational Resources Information Center
Sato, Rintaro; Takatsuka, Shigenobu
2016-01-01
Errors naturally appear in spontaneous speeches and conversations. Particularly in a second or foreign language, it is only natural that mistakes happen as a part of the learning process. After an inappropriate expression is detected, it can be corrected. This act of correcting can be initiated either by the speaker (non-native speaker) or the…
Asynchronous RTK precise DGNSS positioning method for deriving a low-latency high-rate output
NASA Astrophysics Data System (ADS)
Liang, Zhang; Hanfeng, Lv; Dingjie, Wang; Yanqing, Hou; Jie, Wu
2015-07-01
Low-latency high-rate (1 Hz) precise real-time kinematic (RTK) positioning can be applied in high-speed scenarios such as aircraft automatic landing, precision agriculture and intelligent vehicles. The classic synchronous RTK (SRTK) precise differential GNSS (DGNSS) positioning technology, however, is not able to obtain a low-latency high-rate output for the rover receiver because of long data link transmission time delays (DLTTD) from the reference receiver. To overcome the long DLTTD, this paper proposes an asynchronous real-time kinematic (ARTK) method using asynchronous observations from two receivers. The asynchronous observation model (AOM) is developed based on undifferenced carrier phase observation equations of the two receivers at different epochs with a short baseline. The ephemeris error and atmospheric delay are the main potential error sources affecting positioning accuracy in this model, and they are analyzed theoretically. For a short DLTTD and during a period of quiet ionospheric activity, the main error sources degrading positioning accuracy are satellite orbital errors: the "inverted ephemeris error" and the integral of the satellite velocity error, both of which increase linearly with DLTTD. Cycle slips in the asynchronous double-differenced carrier phase are detected by the TurboEdit method and repaired by the additional ambiguity parameter method. The AOM can also handle the synchronous observation model (SOM) and achieve a precise positioning solution with synchronous observations, since the SOM is only a special case of the AOM. The proposed method not only reduces the cost of data collection and transmission, but can also support transferring the reference receiver's data over a mobile phone network data link. This method avoids the data synchronization process, apart from the ambiguity initialization step, which is very convenient for real-time navigation of vehicles. The static and kinematic experiment results show that this method achieves a 20 Hz or even higher output rate in real time. The ARTK positioning accuracy is better and more robust than the combined phase difference over time (PDOT) and SRTK method at high rates. The ARTK positioning accuracy is equivalent to the SRTK solution when the DLTTD is 0.5 s, and centimeter-level accuracy can be achieved even when the DLTTD is 15 s.
ERIC Educational Resources Information Center
Abedi, Razie; Latifi, Mehdi; Moinzadeh, Ahmad
2010-01-01
This study tries to answer some ever-existent questions in writing fields regarding approaching the most effective ways to give feedback to students' errors in writing by comparing the effect of error correction and error detection on the improvement of students' writing ability. In order to achieve this goal, 60 pre-intermediate English learners…
de Wet, C; Bowie, P
2009-04-01
A multi-method strategy has been proposed to understand and improve the safety of primary care. The trigger tool is a relatively new method that has shown promise in American and secondary healthcare settings. It involves the focused review of a random sample of patient records using a series of "triggers" that alert reviewers to potential errors and previously undetected adverse events. To develop and test a global trigger tool to detect errors and adverse events in primary-care records. Trigger tool development was informed by previous research and content validated by expert opinion. The tool was applied by trained reviewers who worked in pairs to conduct focused audits of 100 randomly selected electronic patient records in each of five urban general practices in central Scotland. Review of 500 records revealed 2251 consultations and 730 triggers. An adverse event was found in 47 records (9.4%), indicating that harm occurred at a rate of one event per 48 consultations. Of these, 27 were judged to be preventable (42%). A further 17 records (3.4%) contained evidence of a potential adverse event. Harm severity was low to moderate for most patients (82.9%). Error and harm rates were higher in those aged ≥60 years, and most were medication-related (59%). The trigger tool was successful in identifying undetected patient harm in primary-care records and may be the most reliable method for achieving this. However, the feasibility of its routine application is open to question. The tool may have greater utility as a research rather than an audit technique. Further testing in larger, representative study samples is required.
Tang, Tao; Chen, Sisi; Huang, Xuanlin; Yang, Tao; Qi, Bo
2018-01-01
High-performance position control can be improved by compensating disturbances in a gear-driven control system. This paper presents a mode-free disturbance observer (DOB) based on sensor fusion to reduce error-related disturbances in a gear-driven gimbal. The DOB uses the rate deviation to detect disturbances and implement a high-gain compensator. Compared with the angular position signal, the rate deviation between load and motor quickly reveals the disturbances existing in the gear-driven gimbal. Owing to the high bandwidth of the motor rate closed loop, an inverse model of the plant is not necessary to implement the DOB. Moreover, this DOB requires neither complex plant modeling nor additional sensors. Without rate sensors providing the angular rate, the rate deviation is easily obtained from encoders mounted on the motor side and the load side, respectively. Extensive experiments demonstrate the benefits of the proposed algorithm. PMID:29498643
Tang, Tao; Chen, Sisi; Huang, Xuanlin; Yang, Tao; Qi, Bo
2018-03-02
High-performance position control can be improved by compensating disturbances in a gear-driven control system. This paper presents a mode-free disturbance observer (DOB) based on sensor fusion to reduce error-related disturbances in a gear-driven gimbal. The DOB uses the rate deviation to detect disturbances and implement a high-gain compensator. Compared with the angular position signal, the rate deviation between load and motor quickly reveals the disturbances existing in the gear-driven gimbal. Owing to the high bandwidth of the motor rate closed loop, an inverse model of the plant is not necessary to implement the DOB. Moreover, this DOB requires neither complex plant modeling nor additional sensors. Without rate sensors providing the angular rate, the rate deviation is easily obtained from encoders mounted on the motor side and the load side, respectively. Extensive experiments demonstrate the benefits of the proposed algorithm.
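A minimal sketch of the idea described above: the rate deviation between motor-side and load-side encoders drives a high-gain correction. The gain, sampling period and encoder differentiation below are assumptions for illustration, not values from the paper.

```python
import numpy as np

def rate_from_encoder(counts, dt, counts_per_rad):
    """Numerically differentiate encoder counts to obtain an angular rate (rad/s)."""
    angle = np.asarray(counts, dtype=float) / counts_per_rad
    return np.gradient(angle, dt)

def disturbance_compensation(motor_counts, load_counts, dt, counts_per_rad, k_dob=50.0):
    """Hypothetical rate-deviation DOB: the deviation between motor and load rates
    is scaled by a high gain and fed back as a correction to the rate-loop command."""
    motor_rate = rate_from_encoder(motor_counts, dt, counts_per_rad)
    load_rate = rate_from_encoder(load_counts, dt, counts_per_rad)
    rate_deviation = motor_rate - load_rate   # reveals gear-train disturbances quickly
    return -k_dob * rate_deviation            # correction added to the rate command

# usage with synthetic encoder data (invented numbers)
t = np.arange(0.0, 1.0, 0.001)
motor = 10000 * t                                    # motor-side counts
load = 10000 * t - 5 * np.sin(2 * np.pi * 5 * t)     # load lags due to a periodic disturbance
u_corr = disturbance_compensation(motor, load, 0.001, counts_per_rad=10000 / (2 * np.pi))
```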
Error-Analysis for Correctness, Effectiveness, and Composing Procedure.
ERIC Educational Resources Information Center
Ewald, Helen Rothschild
The assumptions underpinning grammatical mistakes can often be detected by looking for patterns of errors in a student's work. Assumptions that negatively influence rhetorical effectiveness can similarly be detected through error analysis. On a smaller scale, error analysis can also reveal assumptions affecting rhetorical choice. Snags in the…
Modulation and coding for throughput-efficient optical free-space links
NASA Technical Reports Server (NTRS)
Georghiades, Costas N.
1993-01-01
Optical direct-detection systems are currently being considered for some high-speed inter-satellite links, where data rates of a few hundred megabits per second are envisioned under power and pulsewidth constraints. In this paper we investigate the capacity, cutoff-rate and error-probability performance of uncoded and trellis-coded systems for various modulation schemes and under various throughput and power constraints. Modulation schemes considered are on-off keying (OOK), pulse-position modulation (PPM), overlapping PPM (OPPM) and multi-pulse (combinatorial) PPM (MPPM).
Gpm Level 1 Science Requirements: Science and Performance Viewed from the Ground
NASA Technical Reports Server (NTRS)
Petersen, W.; Kirstetter, P.; Wolff, D.; Kidd, C.; Tokay, A.; Chandrasekar, V.; Grecu, M.; Huffman, G.; Jackson, G. S.
2016-01-01
GPM meets Level 1 science requirements for rain estimation based on the strong performance of its radar algorithms. Changes in the V5 GPROF algorithm should correct errors in V4 and will likely resolve GPROF performance issues relative to L1 requirements. L1 FOV snow detection is largely verified, but at an unknown SWE rate threshold (likely <0.5–1 mm/hr liquid equivalent). Work is ongoing to improve SWE rate estimation for both satellite and GV remote sensing.
Impact and quantification of the sources of error in DNA pooling designs.
Jawaid, A; Sham, P
2009-01-01
The analysis of genome wide variation offers the possibility of unravelling the genes involved in the pathogenesis of disease. Genome wide association studies are also particularly useful for identifying and validating targets for therapeutic intervention as well as for detecting markers for drug efficacy and side effects. The cost of such large-scale genetic association studies may be reduced substantially by the analysis of pooled DNA from multiple individuals. However, experimental errors inherent in pooling studies lead to a potential increase in the false positive rate and a loss in power compared to individual genotyping. Here we quantify various sources of experimental error using empirical data from typical pooling experiments and corresponding individual genotyping counts using two statistical methods. We provide analytical formulas for calculating these different errors in the absence of complete information, such as replicate pool formation, and for adjusting for the errors in the statistical analysis. We demonstrate that DNA pooling has the potential of estimating allele frequencies accurately, and adjusting the pooled allele frequency estimates for differential allelic amplification considerably improves accuracy. Estimates of the components of error show that differential allelic amplification is the most important contributor to the error variance in absolute allele frequency estimation, followed by allele frequency measurement and pool formation errors. Our results emphasise the importance of minimising experimental errors and obtaining correct error estimates in genetic association studies.
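The abstract notes that adjusting pooled allele-frequency estimates for differential allelic amplification considerably improves accuracy. A common way to apply such a correction (not necessarily the exact formulation used by Jawaid and Sham) is to estimate the amplification ratio from known heterozygotes and rescale the pooled signal; the sketch below uses invented numbers.

```python
def corrected_pool_frequency(signal_a, signal_b, k):
    """Estimate allele-A frequency in a DNA pool.

    signal_a, signal_b : raw peak intensities for alleles A and B measured in the pool
    k : differential amplification ratio (mean A/B signal ratio observed in known heterozygotes)
    """
    return (signal_a / k) / (signal_a / k + signal_b)

# k estimated from individually genotyped heterozygotes (illustrative value)
k = 1.3
# raw pooled signals (illustrative values): uncorrected estimate would be 0.65
print(corrected_pool_frequency(650.0, 350.0, k))   # corrected estimate of allele-A frequency
```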
Detecting Image Splicing Using Merged Features in Chroma Space
Liu, Guangjie; Dai, Yuewei
2014-01-01
Image splicing is an image editing method that copies part of an image and pastes it onto another image, and it is commonly followed by postprocessing such as local/global blurring, compression, and resizing. To detect this kind of forgery, the image rich models, a feature set used successfully in steganalysis, are first evaluated on the splicing image dataset, and the dominant submodel is selected as the first kind of feature. The selected feature and the DCT Markov features are then used together to detect splicing forgery in the chroma channel, which has proven effective for splicing detection. The experimental results indicate that the proposed method can detect splicing forgeries with a lower error rate than previous work. PMID:24574877
Detecting image splicing using merged features in chroma space.
Xu, Bo; Liu, Guangjie; Dai, Yuewei
2014-01-01
Image splicing is an image editing method that copies part of an image and pastes it onto another image, and it is commonly followed by postprocessing such as local/global blurring, compression, and resizing. To detect this kind of forgery, the image rich models, a feature set used successfully in steganalysis, are first evaluated on the splicing image dataset, and the dominant submodel is selected as the first kind of feature. The selected feature and the DCT Markov features are then used together to detect splicing forgery in the chroma channel, which has proven effective for splicing detection. The experimental results indicate that the proposed method can detect splicing forgeries with a lower error rate than previous work.
NASA Astrophysics Data System (ADS)
Benkler, Erik; Telle, Harald R.
2007-06-01
An improved phase-locked loop (PLL) for versatile synchronization of a sampling pulse train to an optical data stream is presented. It enables optical sampling of the true waveform of repetitive high bit-rate optical time division multiplexed (OTDM) data words such as pseudorandom bit sequences. Visualization of the true waveform can reveal details which cause systematic bit errors. Such errors cannot be inferred from eye diagrams and require word-synchronous sampling. The programmable direct-digital-synthesis circuit used in our novel PLL approach allows flexible adaptation to virtually any problem-specific synchronization scenario, including those required for waveform sampling, for jitter measurements by slope detection, and for classical eye diagrams. Phase comparison of the PLL is performed at the 10-GHz OTDM base clock rate, leading to a residual synchronization jitter of less than 70 fs.
Development and Assessment of a Medication Safety Measurement Program in a Long-Term Care Pharmacy.
Hertig, John B; Hultgren, Kyle E; Parks, Scott; Rondinelli, Rick
2016-02-01
Medication errors continue to be a major issue in the health care system, including in long-term care facilities. While many hospitals and health systems have developed methods to identify, track, and prevent these errors, long-term care facilities historically have not invested in these error-prevention strategies. The objective of this study was two-fold: 1) to develop a set of medication-safety process measures for dispensing in a long-term care pharmacy, and 2) to analyze the data from those measures to determine the relative safety of the process. The study was conducted at In Touch Pharmaceuticals in Valparaiso, Indiana. To assess the safety of the medication-use system, each step was documented using a comprehensive flowchart (process flow map) tool. Once completed and validated, the flowchart was used to complete a "failure modes and effects analysis" (FMEA) identifying ways a process may fail. Operational gaps found during the FMEA were used to identify points of measurement. The research identified a set of eight measures as potential areas of failure; data were then collected on each of these. More than 133,000 medication doses (opportunities for errors) were included in the study during the research time frame (April 1, 2014, through June 4, 2014). Overall, there was an approximate order-entry error rate of 15.26%, with intravenous errors at 0.37%. A total of 21 errors migrated through the entire medication-use system. These 21 errors in 133,000 opportunities resulted in a final-check error rate of 0.015%. A comprehensive medication-safety measurement program was designed and assessed. This study demonstrated the ability to detect medication errors in a long-term pharmacy setting, thereby making process improvements measurable. Future, larger, multi-site studies should be completed to test this measurement program.
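For readers checking the reported rates, a short back-of-the-envelope calculation using the figures quoted in the abstract reproduces the final-check error rate (the denominator is stated only as "more than 133,000", so the result is approximate).

```python
doses = 133_000                 # medication doses (opportunities for error) in the study window
errors_reaching_patient = 21    # errors that migrated through the entire medication-use system

final_check_error_rate = errors_reaching_patient / doses
print(f"{final_check_error_rate:.4%}")   # ≈ 0.0158%, consistent with the reported ~0.015%
```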
Correcting false memories: Errors must be noticed and replaced.
Mullet, Hillary G; Marsh, Elizabeth J
2016-04-01
Memory can be unreliable. For example, after reading The new baby stayed awake all night, people often misremember that the new baby cried all night (Brewer, 1977); similarly, after hearing bed, rest, and tired, people often falsely remember that sleep was on the list (Roediger & McDermott, 1995). In general, such false memories are difficult to correct, persisting despite warnings and additional study opportunities. We argue that errors must first be detected to be corrected; consistent with this argument, two experiments showed that false memories were nearly eliminated when conditions facilitated comparisons between participants' errors and corrective feedback (e.g., immediate trial-by-trial feedback that allowed direct comparisons between their responses and the correct information). However, knowledge that they had made an error was insufficient; unless the feedback message also contained the correct answer, the rate of false memories remained relatively constant. On the one hand, there is nothing special about correcting false memories: simply labeling an error as "wrong" is also insufficient for correcting other memory errors, including misremembered facts or mistranslations. However, unlike these other types of errors--which often benefit from the spacing afforded by delayed feedback--false memories require a special consideration: Learners may fail to notice their errors unless the correction conditions specifically highlight them.
The Relationship Between Work Commitment, Dynamic, and Medication Error.
Rezaiamin, Abdoolkarim; Pazokian, Marzieh; Zagheri Tafreshi, Mansoureh; Nasiri, Malihe
2017-05-01
The incidence of medication errors in the intensive care unit (ICU) can cause irreparable damage to ICU patients. Therefore, it seems necessary to find the causes of medication errors in this setting. Work commitment and work dynamic might affect the incidence of medication errors in the ICU. To assess this hypothesis, we performed a descriptive-analytical study of 117 nurses working in the ICUs of educational hospitals in Tehran. The Minick et al., Salyer et al., and Wakefield et al. scales were used for data gathering on work commitment, dynamic, and medication errors, respectively. Findings of the current study revealed that high work commitment in ICU nurses was linked to a lower number of medication errors, both intravenous and nonintravenous; the effects of confounding variables were controlled in detecting this relationship. In contrast, no significant association was found between work dynamic and the different types of medication errors. Although the study did not observe any relationship between work dynamic and the rate of medication errors, training nurses or nursing students to create a dynamic environment in hospitals can increase their interest in the profession and their job satisfaction. They must also have sufficient ability to work in a dynamic environment so that they do not become confused and distracted by frequent changes of orders, care plans, and procedures.
Sorensen, James P R; Baker, Andy; Cumberland, Susan A; Lapworth, Dan J; MacDonald, Alan M; Pedley, Steve; Taylor, Richard G; Ward, Jade S T
2018-05-01
We assess the use of fluorescent dissolved organic matter at excitation-emission wavelengths of 280 nm and 360 nm, termed tryptophan-like fluorescence (TLF), as an indicator of faecally contaminated drinking water. A significant logistic regression model was developed using TLF as a predictor of thermotolerant coliforms (TTCs) using data from groundwater- and surface water-derived drinking water sources in India, Malawi, South Africa and Zambia. A TLF threshold of 1.3 ppb dissolved tryptophan was selected to classify TTC contamination. Validation of the TLF threshold indicated a false-negative error rate of 15% and a false-positive error rate of 18%. The threshold was unsuccessful at classifying contaminated sources containing <10 TTC cfu per 100 mL, which we consider the current limit of detection. If only sources above this limit were classified, the false-negative error rate was very low at 4%. TLF intensity was very strongly correlated with TTC concentration (ρs = 0.80). A higher threshold of 6.9 ppb dissolved tryptophan is proposed to indicate heavily contaminated sources (≥100 TTC cfu per 100 mL). Current commercially available fluorimeters are easy to use, suitable for use online and in remote environments, require neither reagents nor consumables, and crucially provide an instantaneous reading. TLF measurements are not appreciably impaired by common interferents, such as pH, turbidity and temperature, within typical natural ranges. The technology is a viable option for the real-time screening of faecally contaminated drinking water globally. Copyright © 2017 Natural Environment Research Council (NERC), as represented by the British Geological Survey (BGS). Published by Elsevier B.V. All rights reserved.
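As an illustration of how a fixed TLF threshold can be scored against TTC reference counts, the sketch below computes false-negative and false-positive rates for paired measurements. The 1.3 ppb threshold comes from the abstract; the paired data are invented.

```python
import numpy as np

TLF_THRESHOLD = 1.3   # ppb dissolved tryptophan, contamination threshold quoted in the abstract

def classification_error_rates(tlf_ppb, ttc_cfu_per_100ml, threshold=TLF_THRESHOLD):
    """False-negative and false-positive rates of a TLF threshold against TTC counts."""
    tlf = np.asarray(tlf_ppb)
    ttc = np.asarray(ttc_cfu_per_100ml)
    predicted_contaminated = tlf >= threshold
    actually_contaminated = ttc > 0
    false_negative = np.mean(~predicted_contaminated[actually_contaminated])
    false_positive = np.mean(predicted_contaminated[~actually_contaminated])
    return false_negative, false_positive

# invented paired observations (TLF in ppb, TTC in cfu per 100 mL)
fn, fp = classification_error_rates([0.5, 2.1, 1.6, 0.9, 4.0], [0, 30, 0, 12, 250])
print(fn, fp)
```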
Performance Bounds on Two Concatenated, Interleaved Codes
NASA Technical Reports Server (NTRS)
Moision, Bruce; Dolinar, Samuel
2010-01-01
A method has been developed of computing bounds on the performance of a code comprised of two linear binary codes generated by two encoders serially concatenated through an interleaver. Originally intended for use in evaluating the performance of codes proposed for deep-space communication links, the method can also be used in evaluating the performance of short-block-length codes in other applications. The method applies, more specifically, to a communication system in which the following processes take place: At the transmitter, the original binary information that one seeks to transmit is first processed by an encoder into an outer code (Co) characterized by, among other things, a pair of numbers (n, k), where n (n > k) is the total number of code bits associated with k information bits and the remaining n − k bits are used for correcting or at least detecting errors. Next, the outer code is processed through either a block or a convolutional interleaver. In the block interleaver, the words of the outer code are processed in blocks of I words. In the convolutional interleaver, the interleaving operation is performed bit-wise in N rows with delays that are multiples of B bits. The output of the interleaver is processed through a second encoder to obtain an inner code (Ci) characterized by (ni, ki). The output of the inner code is transmitted over an additive white Gaussian noise channel characterized by a symbol signal-to-noise ratio (SNR) Es/No and a bit SNR Eb/No. At the receiver, an inner decoder generates estimates of bits. Depending on whether a block or a convolutional interleaver is used at the transmitter, the sequence of estimated bits is processed through a block or a convolutional de-interleaver, respectively, to obtain estimates of code words. The estimates of the code words are then processed through an outer decoder, which generates estimates of the original information along with flags indicating which estimates are presumed to be correct and which are found to be erroneous. From the perspective of the present method, the topic of major interest is the performance of the communication system as quantified in the word-error rate and the undetected-error rate as functions of the SNRs and the total latency of the interleaver and inner code. The method is embodied in equations that describe bounds on these functions. Throughout the derivation of the equations that embody the method, it is assumed that the decoder for the outer code corrects any error pattern of t or fewer errors, detects any error pattern of s or fewer errors, may detect some error patterns of more than s errors, and does not correct any patterns of more than t errors. Because a mathematically complete description of the equations that embody the method and of their derivation would greatly exceed the space available for this article, it must suffice to report that the derivation includes consideration of several complex issues, including relationships between latency and memory requirements for block and convolutional codes, burst error statistics, enumeration of error-event intersections, and effects of different interleaving depths. In a demonstration, the method was used to calculate bounds on the performance of several communication systems, each based on serial concatenation of a (63,56) expurgated Hamming code with a convolutional inner code through a convolutional interleaver. The bounds calculated by use of the method were compared with results of numerical simulations of the systems' performance to show the regions where the bounds are tight.
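While the full bounding machinery in the article is involved, the basic ingredient for a bounded-distance outer decoder that corrects up to t errors is the standard binomial word-error bound. A minimal sketch, assuming independent code-symbol errors with probability p (the role the interleaver is meant to approximate), is shown below; it is not the article's complete bound.

```python
from math import comb

def word_error_bound(n, t, p):
    """Upper bound on the word error rate of a bounded-distance decoder that corrects
    any pattern of t or fewer symbol errors in an (n, k) block code, assuming
    independent symbol errors with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(t + 1, n + 1))

# example: a length-63 code correcting t = 1 error, symbol error probability 1e-3
print(word_error_bound(63, 1, 1e-3))
```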
Design of analytical failure detection using secondary observers
NASA Technical Reports Server (NTRS)
Sisar, M.
1982-01-01
The problem of designing analytical failure-detection systems (FDS) for sensors and actuators, using observers, is addressed. The use of observers in FDS is related to the examination of the n-dimensional observer error vector which carries the necessary information on possible failures. The problem is that in practical systems, in which only some of the components of the state vector are measured, one has access only to the m-dimensional observer-output error vector, with m ≤ n. In order to cope with these cases, a secondary observer is synthesized to reconstruct the entire observer-error vector from the observer output error vector. This approach leads toward the design of highly sensitive and reliable FDS, with the possibility of obtaining a unique fingerprint for every possible failure. In order to keep the observer's (or Kalman filter) false-alarm rate under a certain specified value, it is necessary to have an acceptable matching between the observer (or Kalman filter) models and the system parameters. A previously developed adaptive observer algorithm is used to maintain the desired system-observer model matching, despite initial mismatching or system parameter variations. Conditions for convergence for the adaptive process are obtained, leading to a simple adaptive law (algorithm) with the possibility of an a priori choice of fixed adaptive gains. Simulation results show good tracking performance with small observer output errors, while accurate and fast parameter identification, in both deterministic and stochastic cases, is obtained.
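A minimal sketch of the underlying idea, under generic assumptions: a primary Luenberger observer produces an m-dimensional output error, and a secondary observer runs on the error dynamics to reconstruct the full n-dimensional observer error vector used for failure detection. The matrices and gains below are illustrative, not those of the report.

```python
import numpy as np

# illustrative system: x_dot = A x + B u, y = C x  (n = 2 states, m = 1 measurement)
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[2.0], [3.0]])    # primary observer gain
Ls = np.array([[1.5], [2.0]])   # secondary observer gain (reconstructs the error vector)

dt = 0.001
x = np.array([[1.0], [0.0]])    # true state
xh = np.zeros((2, 1))           # primary observer state estimate
eh = np.zeros((2, 1))           # secondary observer's estimate of the observer error e = x - xh

for _ in range(5000):
    u = np.array([[0.0]])
    y = C @ x
    y_err = y - C @ xh                                        # m-dimensional output error
    xh = xh + dt * (A @ xh + B @ u + L @ y_err)               # primary observer update
    # failure-free error dynamics: e_dot = (A - L C) e, observed through C e = y_err
    eh = eh + dt * ((A - L @ C) @ eh + Ls @ (y_err - C @ eh))  # secondary observer update
    x = x + dt * (A @ x + B @ u)                              # plant propagation

print(eh.ravel(), (x - xh).ravel())   # reconstructed vs. actual n-dimensional observer error
```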
Latent error detection: A golden two hours for detection.
Saward, Justin R E; Stanton, Neville A
2017-03-01
Undetected error in safety critical contexts generates a latent condition that can contribute to a future safety failure. The detection of latent errors post-task completion is observed in naval air engineers using a diary to record work-related latent error detection (LED) events. A systems view is combined with multi-process theories to explore sociotechnical factors associated with LED. Perception of cues in different environments facilitates successful LED, for which the deliberate review of past tasks within two hours of the error occurring and whilst remaining in the same or similar sociotechnical environment to that which the error occurred appears most effective. Identified ergonomic interventions offer potential mitigation for latent errors; particularly in simple everyday habitual tasks. It is thought safety critical organisations should look to engineer further resilience through the application of LED techniques that engage with system cues across the entire sociotechnical environment, rather than relying on consistent human performance. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
Error Detection/Correction in Collaborative Writing
ERIC Educational Resources Information Center
Pilotti, Maura; Chodorow, Martin
2009-01-01
In the present study, we examined error detection/correction during collaborative writing. Subjects were asked to identify and correct errors in two contexts: a passage written by the subject (familiar text) and a passage written by a person other than the subject (unfamiliar text). A computer program inserted errors in function words prior to the…
ERIC Educational Resources Information Center
Lu, Hui-Chuan; Chu, Yu-Hsin; Chang, Cheng-Yu
2013-01-01
Compared with English learners, Spanish learners have fewer resources for automatic error detection and revision and following the current integrative Computer Assisted Language Learning (CALL), we combined corpus-based approach and CALL to create the System of Error Detection and Revision Suggestion (SEDRS) for learning Spanish. Through…
Computer-Assisted Detection of 90% of EFL Student Errors
ERIC Educational Resources Information Center
Harvey-Scholes, Calum
2018-01-01
Software can facilitate English as a Foreign Language (EFL) students' self-correction of their free-form writing by detecting errors; this article examines the proportion of errors which software can detect. A corpus of 13,644 words of written English was created, comprising 90 compositions written by Spanish-speaking students at levels A2-B2…
Detection and avoidance of errors in computer software
NASA Technical Reports Server (NTRS)
Kinsler, Les
1989-01-01
The acceptance test errors of a computer software project were examined to determine whether the errors could have been detected or avoided in earlier phases of development. GROAGSS (Gamma Ray Observatory Attitude Ground Support System) was selected as the software project to be examined. The development of the software followed the standard Flight Dynamics Software Development methods. GROAGSS was developed between August 1985 and April 1989. The project is approximately 250,000 lines of code, of which approximately 43,000 lines are reused from previous projects. GROAGSS had a total of 1715 Change Report Forms (CRFs) submitted during the entire development and testing. These changes contained 936 errors. Of these 936 errors, 374 were found during acceptance testing. These acceptance test errors were first categorized into methods of avoidance, including: more clearly written requirements; detail review; code reading; structural unit testing; and functional system integration testing. The errors were later broken down in terms of effort to detect and correct, class of error, and probability that the prescribed detection method would be successful. These determinations were based on Software Engineering Laboratory (SEL) documents and interviews with the project programmers. A summary of the results of the categorizations is presented. The number of programming errors present at the beginning of acceptance testing can be significantly reduced. The results of the existing development methodology are examined for ways of improvement. A basis is provided for the definition of a new development/testing paradigm. Monitoring of the new scheme will objectively determine its effectiveness in avoiding and detecting errors.
Feuerstein, Marco; Reichl, Tobias; Vogel, Jakob; Traub, Joerg; Navab, Nassir
2009-06-01
Electromagnetic tracking is currently one of the most promising means of localizing flexible endoscopic instruments such as flexible laparoscopic ultrasound transducers. However, electromagnetic tracking is also susceptible to interference from ferromagnetic material, which distorts the magnetic field and leads to tracking errors. This paper presents new methods for real-time online detection and reduction of dynamic electromagnetic tracking errors when localizing a flexible laparoscopic ultrasound transducer. We use a hybrid tracking setup that combines optical tracking of the transducer shaft and electromagnetic tracking of the flexible transducer tip. A novel approach of modeling the poses of the transducer tip in relation to the transducer shaft allows us to reliably detect and significantly reduce electromagnetic tracking errors. For detecting errors of more than 5 mm, we achieved a sensitivity and specificity of 91% and 93%, respectively. An initial 3-D RMS error of 6.91 mm was reduced to 3.15 mm.
New double-byte error-correcting codes for memory systems
NASA Technical Reports Server (NTRS)
Feng, Gui-Liang; Wu, Xinen; Rao, T. R. N.
1996-01-01
Error-correcting or error-detecting codes have been used in the computer industry to increase reliability, reduce service costs, and maintain data integrity. The single-byte error-correcting and double-byte error-detecting (SbEC-DbED) codes have been successfully used in computer memory subsystems. There are many methods to construct double-byte error-correcting (DBEC) codes. In the present paper we construct a class of double-byte error-correcting codes, which are more efficient than those known to be optimum, and a decoding procedure for our codes is also considered.
Smartphone-based photoplethysmographic imaging for heart rate monitoring.
Alafeef, Maha
2017-07-01
The purpose of this study is to make use of visible-light reflected-mode photoplethysmographic (PPG) imaging for heart rate (HR) monitoring via smartphones. The system uses the built-in camera of the mobile phone to capture video from the subject's index fingertip. The video is processed, and the PPG signal resulting from the video stream processing is then used to calculate the subject's heart rate. Records from 19 subjects were used to evaluate the system's performance. The HR values obtained by the proposed method were compared with the actual HR. The results show an accuracy of 99.7% and a maximum absolute error of 0.4 beats/min, with most of the absolute errors lying in the range of 0.04-0.3 beats/min. Given these encouraging results, this type of HR measurement can be adopted with great benefit, especially for personal use or home-based care. The proposed method represents an efficient portable solution for accurate HR detection and recording.
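A minimal sketch of the kind of processing described: average the fingertip intensity per frame to form a PPG signal, band-pass filter it in the cardiac band, then count peaks. The frame rate, filter band and peak-picking parameters are assumptions, not the paper's settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def heart_rate_from_frames(frames, fps):
    """Estimate heart rate (beats/min) from a stack of fingertip video frames.

    frames : array of shape (n_frames, height, width), e.g. the red channel
    fps    : camera frame rate in Hz
    """
    ppg = frames.reshape(len(frames), -1).mean(axis=1)                   # mean intensity per frame
    b, a = butter(3, [0.7 / (fps / 2), 3.5 / (fps / 2)], btype="band")   # ~42-210 bpm band
    ppg_filtered = filtfilt(b, a, ppg - ppg.mean())
    peaks, _ = find_peaks(ppg_filtered, distance=int(0.3 * fps))         # at most ~200 bpm
    beat_intervals = np.diff(peaks) / fps                                # seconds between beats
    return 60.0 / beat_intervals.mean()

# usage with synthetic data: a 75 bpm pulse sampled at 30 fps for 10 s
fps, hr_true = 30, 75
t = np.arange(0, 10, 1 / fps)
synthetic = 100 + 2 * np.sin(2 * np.pi * hr_true / 60 * t)
frames = synthetic[:, None, None] * np.ones((1, 4, 4))
print(heart_rate_from_frames(frames, fps))   # ≈ 75
```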
[Interpreting change scores of the Behavioural Rating Scale for Geriatric Inpatients (GIP)].
Diesfeldt, H F A
2013-09-01
The Behavioural Rating Scale for Geriatric Inpatients (GIP) consists of fourteen, Rasch modelled subscales, each measuring different aspects of behavioural, cognitive and affective disturbances in elderly patients. Four additional measures are derived from the GIP: care dependency, apathy, cognition and affect. The objective of the study was to determine the reproducibility of the 18 measures. A convenience sample of 56 patients in psychogeriatric day care was assessed twice by the same observer (a professional caregiver). The median time interval between rating occasions was 45 days (interquartile range 34-58 days). Reproducibility was determined by calculating intraclass correlation coefficients (ICC agreement) for test-retest reliability. The minimal detectable difference (MDD) was calculated based on the standard error of measurement (SEM agreement). Test-retest reliability expressed by the ICCs varied from 0.57 (incoherent behaviour) to 0.93 (anxious behaviour). Standard errors of measurement varied from 0.28 (anxious behaviour) to 1.63 (care dependency). The results show how the GIP can be applied when interpreting individual change in psychogeriatric day care participants.
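The agreement statistics referred to above follow standard formulas; one common formulation (not necessarily the exact variance-components computation used in the paper) is SEM = SD × sqrt(1 − ICC) and MDD = 1.96 × sqrt(2) × SEM. A small sketch with invented numbers:

```python
import math

def sem_from_icc(sd, icc):
    """Standard error of measurement from the between-subject SD and test-retest ICC."""
    return sd * math.sqrt(1.0 - icc)

def minimal_detectable_difference(sem, z=1.96):
    """Smallest individual change exceeding measurement error at ~95% confidence."""
    return z * math.sqrt(2.0) * sem

sem = sem_from_icc(sd=1.1, icc=0.93)   # invented SD and ICC, for illustration only
print(sem, minimal_detectable_difference(sem))
```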
Direct characterization of quantum dynamics with noisy ancilla
Dumitrescu, Eugene F.; Humble, Travis S.
2015-11-23
We present methods for the direct characterization of quantum dynamics (DCQD) in which both the principal and ancilla systems undergo noisy processes. Using a concatenated error detection code, we discriminate between located and unlocated errors on the principal system in what amounts to filtering of ancilla noise. The example of composite noise involving amplitude damping and depolarizing channels is used to demonstrate the method, while we find the rate of noise filtering is more generally dependent on code distance. Furthermore, our results indicate the accuracy of quantum process characterization can be greatly improved while remaining within reach of current experimental capabilities.
NASA Technical Reports Server (NTRS)
Boer, M.; Hurley, K.; Pizzichini, G.; Gottardi, M.
1991-01-01
Exosat observations are presented for three gamma-ray-burst error boxes, one of which may be associated with an optical flash. No point sources were detected at the 3-sigma level. A comparison with Einstein data (Pizzichini et al., 1986) is made for the March 5b, 1979 source. The data are interpreted in the framework of neutron star models, and upper limits are derived for the neutron star surface temperatures, accretion rates, and surface densities of an accretion disk. Apart from the March 5b, 1979 source, consistency is found with each model.
Sensor fault detection and recovery in satellite attitude control
NASA Astrophysics Data System (ADS)
Nasrolahi, Seiied Saeed; Abdollahi, Farzaneh
2018-04-01
This paper proposes an integrated sensor fault detection and recovery scheme for the satellite attitude control system. By introducing a nonlinear observer, healthy sensor measurements are provided. Considering the attitude dynamics and kinematics, a novel observer is developed to detect faults in the angular rate as well as attitude sensors, individually or simultaneously. There is no limit on the type and configuration of attitude sensors. By designing a state-feedback-based control signal and using a Lyapunov stability criterion, the uniform ultimate boundedness of the tracking errors in the presence of sensor faults is guaranteed. Finally, simulation results are presented to illustrate the performance of the integrated scheme.
Detecting and Characterizing Semantic Inconsistencies in Ported Code
NASA Technical Reports Server (NTRS)
Ray, Baishakhi; Kim, Miryung; Person, Suzette; Rungta, Neha
2013-01-01
Adding similar features and bug fixes often requires porting program patches from reference implementations and adapting them to target implementations. Porting errors may result from faulty adaptations or inconsistent updates. This paper investigates (1) the types of porting errors found in practice, and (2) how to detect and characterize potential porting errors. Analyzing version histories, we define five categories of porting errors, including incorrect control- and data-flow, code redundancy, inconsistent identifier renamings, etc. Leveraging this categorization, we design a static control- and data-dependence analysis technique, SPA, to detect and characterize porting inconsistencies. Our evaluation on code from four open-source projects shows that SPA can detect porting inconsistencies with 65% to 73% precision and 90% recall, and identify inconsistency types with 58% to 63% precision and 92% to 100% recall. In a comparison with two existing error detection tools, SPA improves precision by 14 to 17 percentage points.
Wonnemann, Meinolf; Frömke, Cornelia; Koch, Armin
2015-01-01
We investigated the empirical type I error and empirical power of different evaluation strategies for bioequivalence trials with highly variable drugs (HVDs). The classical 'unscaled' crossover design with average bioequivalence evaluation, the Add-on concept of the Japanese guideline, and the current 'scaling' approach of the EMA were compared. Simulation studies were performed based on the assumption of a single-dose drug administration while varying the underlying intra-individual variability. Inclusion of Add-on subjects following the Japanese concept led to slight increases of the empirical α-error (≈7.5%). For the EMA approach we noted an unexpected, tremendous increase of the rejection rate at a geometric mean ratio of 1.25. Moreover, we detected error rates slightly above the pre-set limit of 5% even at the proposed 'scaled' bioequivalence limits. With the classical 'unscaled' approach and the Japanese guideline concept, the goal of reduced subject numbers in bioequivalence trials of HVDs cannot be achieved. On the other hand, widening the acceptance range comes at the price that quite a number of products will be accepted as bioequivalent that would not have been accepted in the past. A two-stage design with control of the global α therefore seems the better alternative.
NASA Astrophysics Data System (ADS)
Duan, Y.; Wilson, A. M.; Barros, A. P.
2014-10-01
A diagnostic analysis of the space-time structure of error in Quantitative Precipitation Estimates (QPE) from the Precipitation Radar (PR) on the Tropical Rainfall Measurement Mission (TRMM) satellite is presented here in preparation for the Integrated Precipitation and Hydrology Experiment (IPHEx) in 2014. IPHEx is the first NASA ground-validation field campaign after the launch of the Global Precipitation Measurement (GPM) satellite. In anticipation of GPM, a science-grade high-density raingauge network was deployed at mid to high elevations in the Southern Appalachian Mountains, USA since 2007. This network allows for direct comparison between ground-based measurements from raingauges and satellite-based QPE (specifically, PR 2A25 V7 using 5 years of data 2008-2013). Case studies were conducted to characterize the vertical profiles of reflectivity and rain rate retrievals associated with large discrepancies with respect to ground measurements. The spatial and temporal distribution of detection errors (false alarm, FA, and missed detection, MD) and magnitude errors (underestimation, UND, and overestimation, OVR) for stratiform and convective precipitation are examined in detail toward elucidating the physical basis of retrieval error. The diagnostic error analysis reveals that detection errors are linked to persistent stratiform light rainfall in the Southern Appalachians, which explains the high occurrence of FAs throughout the year, as well as the diurnal MD maximum at midday in the cold season (fall and winter), and especially in the inner region. Although UND dominates the magnitude error budget, underestimation of heavy rainfall conditions accounts for less than 20% of the total consistent with regional hydrometeorology. The 2A25 V7 product underestimates low level orographic enhancement of rainfall associated with fog, cap clouds and cloud to cloud feeder-seeder interactions over ridges, and overestimates light rainfall in the valleys by large amounts, though this behavior is strongly conditioned by the coarse spatial resolution (5 km) of the terrain topography mask used to remove ground clutter effects. Precipitation associated with small-scale systems (< 25 km2) and isolated deep convection tends to be underestimated, which we attribute to non-uniform beam-filling effects due to spatial averaging of reflectivity at the PR resolution. Mixed precipitation events (i.e., cold fronts and snow showers) fall into OVR or FA categories, but these are also the types of events for which observations from standard ground-based raingauge networks are more likely subject to measurement uncertainty, that is raingauge underestimation errors due to under-catch and precipitation phase. Overall, the space-time structure of the errors shows strong links among precipitation, envelope orography, landform (ridge-valley contrasts), and local hydrometeorological regime that is strongly modulated by the diurnal cycle, pointing to three major error causes that are inter-related: (1) representation of concurrent vertically and horizontally varying microphysics; (2) non uniform beam filling (NUBF) effects and ambiguity in the detection of bright band position; and (3) spatial resolution and ground clutter correction.
NASA Astrophysics Data System (ADS)
Duan, Y.; Wilson, A. M.; Barros, A. P.
2015-03-01
A diagnostic analysis of the space-time structure of error in quantitative precipitation estimates (QPEs) from the precipitation radar (PR) on the Tropical Rainfall Measurement Mission (TRMM) satellite is presented here in preparation for the Integrated Precipitation and Hydrology Experiment (IPHEx) in 2014. IPHEx is the first NASA ground-validation field campaign after the launch of the Global Precipitation Measurement (GPM) satellite. In anticipation of GPM, a science-grade high-density raingauge network was deployed at mid to high elevations in the southern Appalachian Mountains, USA, since 2007. This network allows for direct comparison between ground-based measurements from raingauges and satellite-based QPE (specifically, PR 2A25 Version 7 using 5 years of data 2008-2013). Case studies were conducted to characterize the vertical profiles of reflectivity and rain rate retrievals associated with large discrepancies with respect to ground measurements. The spatial and temporal distribution of detection errors (false alarm, FA; missed detection, MD) and magnitude errors (underestimation, UND; overestimation, OVR) for stratiform and convective precipitation are examined in detail toward elucidating the physical basis of retrieval error. The diagnostic error analysis reveals that detection errors are linked to persistent stratiform light rainfall in the southern Appalachians, which explains the high occurrence of FAs throughout the year, as well as the diurnal MD maximum at midday in the cold season (fall and winter) and especially in the inner region. Although UND dominates the error budget, underestimation of heavy rainfall conditions accounts for less than 20% of the total, consistent with regional hydrometeorology. The 2A25 V7 product underestimates low-level orographic enhancement of rainfall associated with fog, cap clouds and cloud to cloud feeder-seeder interactions over ridges, and overestimates light rainfall in the valleys by large amounts, though this behavior is strongly conditioned by the coarse spatial resolution (5 km) of the topography mask used to remove ground-clutter effects. Precipitation associated with small-scale systems (< 25 km2) and isolated deep convection tends to be underestimated, which we attribute to non-uniform beam-filling effects due to spatial averaging of reflectivity at the PR resolution. Mixed precipitation events (i.e., cold fronts and snow showers) fall into OVR or FA categories, but these are also the types of events for which observations from standard ground-based raingauge networks are more likely subject to measurement uncertainty, that is raingauge underestimation errors due to undercatch and precipitation phase. Overall, the space-time structure of the errors shows strong links among precipitation, envelope orography, landform (ridge-valley contrasts), and a local hydrometeorological regime that is strongly modulated by the diurnal cycle, pointing to three major error causes that are inter-related: (1) representation of concurrent vertically and horizontally varying microphysics; (2) non-uniform beam filling (NUBF) effects and ambiguity in the detection of bright band position; and (3) spatial resolution and ground-clutter correction.
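The detection/magnitude error taxonomy used in the two abstracts above (false alarm, missed detection, underestimation, overestimation) can be expressed compactly. The sketch below shows one way to assign a satellite-gauge pair of rain rates to these categories; the rain/no-rain threshold is an assumption, not the studies' value.

```python
def classify_qpe_error(satellite_mm_h, gauge_mm_h, rain_threshold=0.1):
    """Assign a satellite/gauge rain-rate pair to a detection or magnitude error category.

    rain_threshold (mm/h) separating 'rain' from 'no rain' is an assumed value.
    """
    sat_rain = satellite_mm_h >= rain_threshold
    gauge_rain = gauge_mm_h >= rain_threshold
    if sat_rain and not gauge_rain:
        return "FA"                    # false alarm
    if gauge_rain and not sat_rain:
        return "MD"                    # missed detection
    if not sat_rain and not gauge_rain:
        return "correct negative"
    return "UND" if satellite_mm_h < gauge_mm_h else "OVR"   # magnitude error on a hit

for pair in [(0.0, 2.5), (1.2, 0.0), (3.0, 8.0), (6.0, 4.0)]:
    print(pair, classify_qpe_error(*pair))
```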
Scalable video transmission over Rayleigh fading channels using LDPC codes
NASA Astrophysics Data System (ADS)
Bansal, Manu; Kondi, Lisimachos P.
2005-03-01
In this paper, we investigate the important problem of efficiently utilizing the available resources for video transmission over wireless channels while maintaining good decoded video quality and resilience to channel impairments. Our system consists of a video codec based on the 3-D set partitioning in hierarchical trees (3-D SPIHT) algorithm and employs two different schemes using low-density parity check (LDPC) codes for channel error protection. The first method uses the serial concatenation of a constant-rate LDPC code and rate-compatible punctured convolutional (RCPC) codes. Cyclic redundancy check (CRC) is used to detect transmission errors. In the other scheme, we use a product code structure consisting of a constant-rate LDPC/CRC code across the rows of the 'blocks' of source data and an erasure-correcting systematic Reed-Solomon (RS) code as the column code. In both schemes introduced here, we use fixed-length source packets protected with unequal forward error correction coding ensuring a strictly decreasing protection across the bitstream. A Rayleigh flat-fading channel with additive white Gaussian noise (AWGN) is modeled for the transmission. The rate-distortion optimization algorithm is developed and carried out for the selection of source coding and channel coding rates using Lagrangian optimization. The experimental results demonstrate the effectiveness of this system under different wireless channel conditions, and both the proposed methods (LDPC+RCPC/CRC and RS+LDPC/CRC) outperform more conventional schemes such as those employing RCPC/CRC.
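The rate-distortion optimization mentioned above is, at its core, a Lagrangian selection: for each candidate combination of source rate and channel-code rate, evaluate the expected distortion D and total rate R and pick the combination minimizing D + λR. A generic sketch follows; the candidate operating points and cost numbers are invented, not the paper's.

```python
def lagrangian_rate_allocation(candidates, lam):
    """Pick the candidate minimizing expected distortion + lambda * total rate.

    candidates : list of dicts with keys 'source_rate', 'channel_rate',
                 'total_rate' (bits per pixel on the channel) and
                 'expected_distortion' (e.g. MSE averaged over channel realizations).
    """
    return min(candidates, key=lambda c: c["expected_distortion"] + lam * c["total_rate"])

# invented candidate operating points
candidates = [
    {"source_rate": 0.25, "channel_rate": 1 / 2, "total_rate": 0.50, "expected_distortion": 95.0},
    {"source_rate": 0.25, "channel_rate": 2 / 3, "total_rate": 0.375, "expected_distortion": 140.0},
    {"source_rate": 0.50, "channel_rate": 1 / 2, "total_rate": 1.00, "expected_distortion": 60.0},
]
print(lagrangian_rate_allocation(candidates, lam=50.0))
```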
A Real-Time Wireless Sweat Rate Measurement System for Physical Activity Monitoring.
Brueck, Andrew; Iftekhar, Tashfin; Stannard, Alicja B; Yelamarthi, Kumar; Kaya, Tolga
2018-02-10
There has been significant research on the physiology of sweat in the past decade, with one of the main interests being the development of a real-time hydration monitor that utilizes sweat. The contents of sweat have been known for decades; sweat provides significant information on the physiological condition of the human body. However, it is important to know the sweat rate as well, as sweat rate alters the concentration of the sweat constituents and ultimately affects the accuracy of hydration detection. Towards this goal, a calorimetric flow-rate detection system was built and tested to determine sweat rate in real time. The proposed sweat rate monitoring system has been validated through both controlled lab experiments (syringe pump) and human trials. An Internet of Things (IoT) platform was embedded with the sensor, using a Simblee board and a Raspberry Pi. The overall prototype is capable of sending sweat rate information in real time to either a smartphone or directly to the cloud. Based on a proven theoretical concept, our overall system implementation features a pioneering device that can truly measure the rate of sweat in real time, which was tested and validated on human subjects. Our realization of the real-time sweat rate watch is capable of detecting sweat rates as low as 0.15 µL/min/cm², with an average error in accuracy of 18% compared to manual sweat rate readings.
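The calorimetric principle behind such a flow-rate sensor can be summarized as: with a known heater power and the temperature rise it produces in the passing fluid, the flow rate follows from Q = ṁ·c·ΔT. A hedged sketch with invented values; it does not reproduce the paper's actual sensor calibration.

```python
def sweat_flow_rate_ul_per_min(heater_power_w, delta_t_kelvin,
                               specific_heat=4184.0, density=1000.0):
    """Volume flow rate (µL/min) from a simple calorimetric balance Q = m_dot * c * dT.

    heater_power_w : electrical power dissipated in the micro-heater (W)
    delta_t_kelvin : temperature rise measured between upstream and downstream sensors (K)
    specific_heat, density : water-like properties assumed for sweat
    """
    mass_flow_kg_s = heater_power_w / (specific_heat * delta_t_kelvin)
    volume_flow_m3_s = mass_flow_kg_s / density
    return volume_flow_m3_s * 1e9 * 60.0   # m^3/s -> µL/min

print(sweat_flow_rate_ul_per_min(heater_power_w=0.5e-3, delta_t_kelvin=2.0))
```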
Human Factors Evaluation of Conflict Detection Tool for Terminal Area
NASA Technical Reports Server (NTRS)
Verma, Savita Arora; Tang, Huabin; Ballinger, Deborah; Chinn, Fay Cherie; Kozon, Thomas E.
2013-01-01
A conflict detection and resolution tool, Terminal-area Tactical Separation-Assured Flight Environment (T-TSAFE), is being developed to improve the timeliness and accuracy of alerts and to reduce the false alert rate observed with the currently deployed technology. The legacy system in use today, Conflict Alert, relies primarily on a dead-reckoning algorithm, whereas T-TSAFE uses intent information to augment dead reckoning. In previous experiments, T-TSAFE was found to reduce the rate of false alerts and increase the time between the alert to the controller and a loss of separation compared with the legacy system. In the present study, T-TSAFE was tested under two meteorological conditions: 1) all aircraft operated under an instrument flight regimen, and 2) some aircraft operated under mixed operating conditions. The tool was used to visually alert controllers to predicted losses of separation throughout the terminal airspace and to show compression errors on final approach. The performance of T-TSAFE on final approach was compared with Automated Terminal Proximity Alert (ATPA), a tool recently deployed by the FAA. Results show that controllers did not report differences in workload or situational awareness between the T-TSAFE and ATPA cones but did prefer T-TSAFE features over ATPA functionality. T-TSAFE will provide one tool that shows alerts in the data blocks and compression errors via cones on final approach, implementing all tactical conflict detection and alerting via one tool in TRACON airspace.
Architecture for an artificial immune system.
Hofmeyr, S A; Forrest, S
2000-01-01
An artificial immune system (ARTIS) is described which incorporates many properties of natural immune systems, including diversity, distributed computation, error tolerance, dynamic learning and adaptation, and self-monitoring. ARTIS is a general framework for a distributed adaptive system and could, in principle, be applied to many domains. In this paper, ARTIS is applied to computer security in the form of a network intrusion detection system called LISYS. LISYS is described and shown to be effective at detecting intrusions, while maintaining low false positive rates. Finally, similarities and differences between ARTIS and Holland's classifier systems are discussed.
Fazal, Irfan M; Ahmed, Nisar; Wang, Jian; Yang, Jeng-Yuan; Yan, Yan; Shamee, Bishara; Huang, Hao; Yue, Yang; Dolinar, Sam; Tur, Moshe; Willner, Alan E
2012-11-15
We demonstrate a 2 Tbit/s free-space data link using two orthogonal orbital angular momentum beams each carrying 25 different wavelength-division-multiplexing channels. We measure the performance for different modulation formats, including directly detected 40 Gbit/s nonreturn-to-zero (NRZ) differential phase-shift keying, 40 Gbit/s NRZ on-off keying, and coherently-detected 10 Gbaud NRZ quadrature phase-shift keying, and achieve low bit error rates with penalties less than 5 dB.
Simultaneous Control of Error Rates in fMRI Data Analysis
Kang, Hakmook; Blume, Jeffrey; Ombao, Hernando; Badre, David
2015-01-01
The key idea of statistical hypothesis testing is to fix, and thereby control, the Type I error (false positive) rate across samples of any size. Multiple comparisons inflate the global (family-wise) Type I error rate and the traditional solution to maintaining control of the error rate is to increase the local (comparison-wise) Type II error (false negative) rates. However, in the analysis of human brain imaging data, the number of comparisons is so large that this solution breaks down: the local Type II error rate ends up being so large that scientifically meaningful analysis is precluded. Here we propose a novel solution to this problem: allow the Type I error rate to converge to zero along with the Type II error rate. It works because when the Type I error rate per comparison is very small, the accumulated (global) Type I error rate is also small. This solution is achieved by employing the Likelihood paradigm, which uses likelihood ratios to measure the strength of evidence on a voxel-by-voxel basis. In this paper, we provide theoretical and empirical justification for a likelihood approach to the analysis of human brain imaging data. In addition, we present extensive simulations that show the likelihood approach is viable, leading to 'cleaner'-looking brain maps and operational superiority (lower average error rate). Finally, we include a case study on cognitive control related activation in the prefrontal cortex of the human brain. PMID:26272730
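A minimal illustration of the likelihood paradigm for a single voxel: compare the likelihood of the data under an assumed activation effect against the null of zero effect, and report the likelihood ratio rather than a p-value. The Gaussian model, effect size and sample values below are assumptions for the sketch, not the paper's analysis.

```python
import numpy as np
from scipy.stats import norm

def voxel_likelihood_ratio(samples, delta, sigma):
    """Likelihood ratio L(mean = delta) / L(mean = 0) for one voxel,
    assuming i.i.d. Gaussian observations with known sigma."""
    loglik_alt = norm.logpdf(samples, loc=delta, scale=sigma).sum()
    loglik_null = norm.logpdf(samples, loc=0.0, scale=sigma).sum()
    return np.exp(loglik_alt - loglik_null)

rng = np.random.default_rng(0)
active_voxel = rng.normal(0.5, 1.0, size=40)   # simulated % signal change, truly active
null_voxel = rng.normal(0.0, 1.0, size=40)     # truly inactive
print(voxel_likelihood_ratio(active_voxel, delta=0.5, sigma=1.0))   # large: evidence for activation
print(voxel_likelihood_ratio(null_voxel, delta=0.5, sigma=1.0))     # small: evidence for the null
```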
SU-E-J-15: Automatically Detect Patient Treatment Position and Orientation in KV Portal Images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qiu, J; Yang, D
2015-06-15
Purpose: In the course of radiation therapy, the complex information processing workflow can result in potential errors, such as incorrect or inaccurate patient setups. With automatic image checks and patient identification, such errors could be effectively reduced. For this purpose, we developed a simple and rapid image processing method to automatically detect the patient position and orientation in 2D portal images, allowing automatic checks of positions and orientations for patients' daily RT treatments. Methods: Based on the principle of portal image formation, a set of whole-body DRR images was reconstructed from multiple whole-body CT volume datasets and fused together to be used as the matching template. To identify the patient setup position and orientation shown in a 2D portal image, the portal image was preprocessed (contrast enhancement, down-sampling and couch table detection), then matched to the template image to identify the laterality (left or right), position, orientation and treatment site. Results: Five days' worth of clinically qualified portal images were gathered randomly and processed by the automatic detection and matching method without any additional information. The detection results were visually checked by physicists. 182 of the 200 kV portal images were detected correctly; the correct rate was 91%. Conclusion: The proposed method can detect patient setup and orientation quickly and automatically. It requires only the image intensity information in kV portal images. This method can be useful in the framework of Electronic Chart Check (ECCK) to reduce potential errors in the radiation therapy workflow and so improve patient safety. In addition, the auto-detection results, such as the patient treatment site position and patient orientation, could be used to guide subsequent image processing procedures, e.g. verification of patient daily setup accuracy. This work was partially supported by a research grant from Varian Medical Systems.
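The core operation described, matching a preprocessed portal image against a fused whole-body template to recover position and orientation, can be illustrated with a simple correlation search over a few candidate orientations. The sketch below uses synthetic data and a globally standardized correlation score (not true per-window NCC and not the clinical pipeline).

```python
import numpy as np
from scipy.signal import fftconvolve

def correlation_peak(template, image):
    """Location and value of a simplified correlation score of `image` inside `template`
    (both globally standardized rather than per-window normalized)."""
    t = (template - template.mean()) / template.std()
    i = (image - image.mean()) / image.std()
    corr = fftconvolve(t, i[::-1, ::-1], mode="valid") / i.size
    idx = np.unravel_index(np.argmax(corr), corr.shape)
    return idx, corr[idx]

def detect_position_and_orientation(template, portal):
    """Try the four 90-degree rotations of the portal image and keep the best match."""
    best = None
    for k in range(4):
        rotated = np.rot90(portal, k)
        if rotated.std() == 0:
            continue
        pos, score = correlation_peak(template, rotated)
        if best is None or score > best[2]:
            best = (k * 90, pos, score)
    return best   # (rotation applied in degrees, top-left position in template, score)

# synthetic example: a template with a bright block, and a rotated, cropped "portal image"
rng = np.random.default_rng(1)
template = rng.normal(0, 0.05, (128, 128))
template[40:80, 60:90] += 1.0
portal = np.rot90(template[30:95, 50:100], 1)
print(detect_position_and_orientation(template, portal))
```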
Using Brain Imaging for Lie Detection: Where Science, Law and Research Policy Collide.
Langleben, Daniel D; Moriarty, Jane Campbell
2013-05-01
Progress in the use of functional magnetic resonance imaging (fMRI) of the brain to evaluate deception and differentiate lying from truth-telling has created anticipation of a breakthrough in the search for technology-based methods of lie detection. In the last few years, litigants have attempted to introduce fMRI lie detection evidence in courts. This article weighs in on the interdisciplinary debate about the admissibility of such evidence, identifying the missing pieces of the scientific puzzle that need to be completed if fMRI-based lie detection is to meet the standards of either legal reliability or general acceptance. We believe that Daubert's "known error rate" is the key concept linking the legal and scientific standards. We posit that properly controlled clinical trials are the most convincing means to determine the error rates of fMRI-based lie detection and confirm or disprove the relevance of the promising laboratory research on this topic. This article explains the current state of the science and provides an analysis of the case law in which litigants have sought to introduce fMRI lie detection. Analyzing the myriad issues related to fMRI lie detection, the article identifies the key limitations of the current neuroimaging of deception science as expert evidence and explores the problems that arise from using scientific evidence before it is proven scientifically valid and reliable. We suggest that courts continue excluding fMRI lie detection evidence until this potentially useful form of forensic science meets the scientific standards currently required for adoption of a medical test or device. Given the multitude of stakeholders, the charged and controversial nature of this technology, and its potential societal impact, the goodwill and collaboration of several government agencies may be required to sponsor impartial and comprehensive clinical trials that will guide the development of forensic fMRI technology.
Convolution neural networks for real-time needle detection and localization in 2D ultrasound.
Mwikirize, Cosmas; Nosher, John L; Hacihaliloglu, Ilker
2018-05-01
We propose a framework for automatic and accurate detection of steeply inserted needles in 2D ultrasound data using convolution neural networks. We demonstrate its application in needle trajectory estimation and tip localization. Our approach consists of a unified network, comprising a fully convolutional network (FCN) and a fast region-based convolutional neural network (R-CNN). The FCN proposes candidate regions, which are then fed to a fast R-CNN for finer needle detection. We leverage a transfer learning paradigm, where the network weights are initialized by training with non-medical images, and fine-tuned with ex vivo ultrasound scans collected during insertion of a 17G epidural needle into freshly excised porcine and bovine tissue at depth settings up to 9 cm and [Formula: see text]-[Formula: see text] insertion angles. Needle detection results are used to accurately estimate needle trajectory from intensity invariant needle features and perform needle tip localization from an intensity search along the needle trajectory. Our needle detection model was trained and validated on 2500 ex vivo ultrasound scans. The detection system has a frame rate of 25 fps on a GPU and achieves 99.6% precision, 99.78% recall rate and an [Formula: see text] score of 0.99. Validation for needle localization was performed on 400 scans collected using a different imaging platform, over a bovine/porcine lumbosacral spine phantom. Shaft localization error of [Formula: see text], tip localization error of [Formula: see text] mm, and a total processing time of 0.58 s were achieved. The proposed method is fully automatic and provides robust needle localization results in challenging scanning conditions. The accurate and robust results coupled with real-time detection and sub-second total processing make the proposed method promising in applications for needle detection and localization during challenging minimally invasive ultrasound-guided procedures.
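For reference, the detection metrics quoted above are computed from true positives, false positives and false negatives as below; the specific score behind the record's "[Formula: see text]" placeholder is not recoverable here, so the common F1 score is shown, and the example counts are invented.

```python
def detection_metrics(true_positives, false_positives, false_negatives):
    """Precision, recall and F1 score from raw detection counts."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# invented counts chosen only for illustration
print(detection_metrics(true_positives=498, false_positives=2, false_negatives=1))
```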
Selecting Power-Efficient Signal Features for a Low-Power Fall Detector.
Wang, Changhong; Redmond, Stephen J; Lu, Wei; Stevens, Michael C; Lord, Stephen R; Lovell, Nigel H
2017-11-01
Falls are a serious threat to the health of older people. A wearable fall detector can automatically detect the occurrence of a fall and alert a caregiver or an emergency response service so they may deliver immediate assistance, improving the chances of recovering from fall-related injuries. One constraint of such a wearable technology is its limited battery life. Thus, minimization of power consumption is an important design concern, all the while maintaining satisfactory accuracy of the fall detection algorithms implemented on the wearable device. This paper proposes an approach for selecting power-efficient signal features such that the minimum desirable fall detection accuracy is assured. Using data collected in simulated falls, simulated activities of daily living, and real free-living trials, all using young volunteers, the proposed approach selects four features from a set of ten commonly used features, providing a power saving of 75.3%, while limiting the error rate of a binary classification decision tree fall detection algorithm to 7.1%.
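As a rough illustration of the trade-off described above, the following sketch greedily adds the cheapest features (by an assumed per-feature power cost) to a decision-tree classifier until a target error rate is met. It uses synthetic data and scikit-learn, and is not the paper's selection procedure.

```python
# Hedged sketch (not the paper's exact procedure): greedy selection of
# low-power features for a decision-tree fall detector, stopping once a
# target error rate is reached on held-out data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 10))                  # 10 candidate features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)   # synthetic "fall" labels
power = rng.uniform(1.0, 10.0, size=10)         # assumed per-feature power cost

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

selected, target_error = [], 0.07
for f in np.argsort(power):                     # try the cheapest features first
    selected.append(f)
    clf = DecisionTreeClassifier(random_state=0).fit(X_tr[:, selected], y_tr)
    error = 1.0 - clf.score(X_te[:, selected], y_te)
    if error <= target_error:
        break

print("features:", selected, "error: %.3f" % error,
      "total power:", power[selected].sum())
```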
ERIC Educational Resources Information Center
Finch, Holmes
2011-01-01
Methods of uniform differential item functioning (DIF) detection have been extensively studied in the complete data case. However, less work has been done examining the performance of these methods when missing item responses are present. Research that has been done in this regard appears to indicate that treating missing item responses as…
Näsholm, Erika; Rohlfing, Sarah; Sauer, James D.
2014-01-01
Closed Circuit Television (CCTV) operators are responsible for maintaining security in various applied settings. However, research has largely ignored human factors that may contribute to CCTV operator error. One important source of error is inattentional blindness – the failure to detect unexpected but clearly visible stimuli when attending to a scene. We compared inattentional blindness rates for experienced (84 infantry personnel) and naïve (87 civilians) operators in a CCTV monitoring task. The task-relevance of the unexpected stimulus and the length of the monitoring period were manipulated between participants. Inattentional blindness rates were measured using typical post-event questionnaires, and participants' real-time descriptions of the monitored event. Based on the post-event measure, 66% of the participants failed to detect salient, ongoing stimuli appearing in the spatial field of their attentional focus. The unexpected task-irrelevant stimulus was significantly more likely to go undetected (79%) than the unexpected task-relevant stimulus (55%). Prior task experience did not inoculate operators against inattentional blindness effects. Participants' real-time descriptions revealed similar patterns, ruling out inattentional amnesia accounts. PMID:24465932
Adaptive error correction codes for face identification
NASA Astrophysics Data System (ADS)
Hussein, Wafaa R.; Sellahewa, Harin; Jassim, Sabah A.
2012-06-01
Face recognition in uncontrolled environments is greatly affected by fuzziness of face feature vectors as a result of extreme variation in recording conditions (e.g. illumination, poses or expressions) in different sessions. Many techniques have been developed to deal with these variations, resulting in improved performance. This paper aims to model template fuzziness as errors and investigate the use of error detection/correction techniques for face recognition in uncontrolled environments. Error correction codes (ECC) have recently been used for biometric key generation but not on biometric templates. We have investigated error patterns in binary face feature vectors extracted from different image windows of differing sizes and for different recording conditions. By estimating statistical parameters for the intra-class and inter-class distributions of Hamming distances in each window, we encode with appropriate ECCs. The proposed approach is tested for binarised wavelet templates using two face databases: Extended Yale-B and Yale. We shall demonstrate that using different combinations of BCH-based ECCs for different blocks and different recording conditions leads to different accuracy rates, and that using ECCs results in significantly improved recognition results.
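The block-wise ECC selection hinges on intra-class versus inter-class Hamming distance statistics. A minimal sketch with simulated binary templates (not the Yale data) shows the kind of figures involved; the bit-flip rate and template length are assumed.

```python
# Minimal sketch (assumed data, not the paper's templates): estimate intra- and
# inter-class Hamming distance statistics for binary feature vectors, the kind
# of figures used to pick an ECC's error-correction capability per block.
import numpy as np

rng = np.random.default_rng(1)
n_bits = 256
reference = rng.integers(0, 2, n_bits)

# Same-subject samples: the reference corrupted by ~8% bit flips ("fuzziness").
intra = np.array([np.where(rng.random(n_bits) < 0.08, 1 - reference, reference)
                  for _ in range(50)])
# Different-subject samples: independent random templates.
inter = rng.integers(0, 2, (50, n_bits))

d_intra = (intra != reference).sum(axis=1)
d_inter = (inter != reference).sum(axis=1)
print("intra-class: mean %.1f, max %d" % (d_intra.mean(), d_intra.max()))
print("inter-class: mean %.1f, min %d" % (d_inter.mean(), d_inter.min()))
# A code correcting up to t errors per block should roughly satisfy
# max(intra distance) <= t < min(inter distance) for reliable verification.
```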
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fuangrod, T; Simpson, J; Greer, P
Purpose: A real-time patient treatment delivery verification system using EPID (Watchdog) has been developed as an advanced patient safety tool. In a pilot study, data were acquired for 119 prostate and head and neck (HN) IMRT patient deliveries to generate body-site specific action limits using statistical process control. The purpose of this study is to determine the sensitivity of Watchdog to detect clinically significant errors during treatment delivery. Methods: Watchdog utilizes a physics-based model to generate a series of predicted transit cine EPID images as a reference data set, and compares these in real-time to measured transit cine-EPID images acquired during treatment using chi comparison (4%, 4mm criteria) after the initial 2s of treatment to allow for dose ramp-up. Four study cases were used; dosimetric (monitor unit) errors in prostate (7 fields) and HN (9 fields) IMRT treatments of (5%, 7%, 10%) and positioning (systematic displacement) errors in the same treatments of (5mm, 7mm, 10mm). These errors were introduced by modifying the patient CT scan and re-calculating the predicted EPID data set. The error-embedded predicted EPID data sets were compared to the measured EPID data acquired during patient treatment. The treatment delivery percentage (measured from 2s) where Watchdog detected the error was determined. Results: Watchdog detected all simulated errors for all fields during delivery. The dosimetric errors were detected at average treatment delivery percentages of (4%, 0%, 0%) and (7%, 0%, 0%) for prostate and HN respectively. For patient positional errors, the average treatment delivery percentage was (52%, 43%, 25%) and (39%, 16%, 6%). Conclusion: These results suggest that Watchdog can detect significant dosimetric and positioning errors in prostate and HN IMRT treatments in real-time, allowing for treatment interruption. Displacements of the patient take longer to detect; however, an incorrect body site or a very large geographic miss will be detected rapidly.
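The per-frame comparison can be pictured with a much-simplified agreement check: flag pixels whose measured value deviates from the prediction by more than a dose tolerance. This is only illustrative; the chi comparison used by Watchdog also folds in the 4 mm distance-to-agreement criterion, which the sketch omits, and the images and thresholds here are synthetic.

```python
# Simplified illustration (not the Watchdog implementation): flag pixels where a
# measured EPID frame deviates from the predicted frame by more than a relative
# dose-difference tolerance. The real chi comparison also includes a
# distance-to-agreement criterion, omitted here for brevity.
import numpy as np

def frame_agreement(predicted, measured, dose_tol=0.04):
    """Return the fraction of pixels whose relative deviation exceeds dose_tol."""
    scale = predicted.max()
    deviation = np.abs(measured - predicted) / scale
    return float((deviation > dose_tol).mean())

rng = np.random.default_rng(2)
pred = rng.random((128, 128))
meas = pred * 1.05            # simulate a 5% monitor-unit (dosimetric) error
print("failing pixel fraction: %.2f" % frame_agreement(pred, meas))
```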
NASA Astrophysics Data System (ADS)
Smith, C. J.; Kim, B.; Zhang, Y.; Ng, T. N.; Beck, V.; Ganguli, A.; Saha, B.; Daniel, G.; Lee, J.; Whiting, G.; Meyyappan, M.; Schwartz, D. E.
2015-12-01
We will present our progress on the development of a wireless sensor network that will determine the source and rate of detected methane leaks. The targeted leak detection threshold is 2 g/min with a rate estimation error of 20% and localization error of 1 m within an outdoor area of 100 m2. The network itself is composed of low-cost, high-performance sensor nodes based on printed nanomaterials with expected sensitivity below 1 ppmv methane. High sensitivity to methane is achieved by modifying high surface-area-to-volume-ratio single-walled carbon nanotubes (SWNTs) with materials that adsorb methane molecules. Because the modified SWNTs are not perfectly selective to methane, the sensor nodes contain arrays of variously-modified SWNTs to build diversity of response towards gases with adsorption affinity. Methane selectivity is achieved through advanced pattern-matching algorithms of the array's ensemble response. The system is low power and designed to operate for a year on a single small battery. The SWNT sensing elements consume only microwatts. The largest power consumer is the wireless communication, which provides robust, real-time measurement data. Methane leak localization and rate estimation will be performed by machine-learning algorithms built with the aid of computational fluid dynamics simulations of gas plume formation. This sensor system can be broadly applied at gas wells, distribution systems, refineries, and other downstream facilities. It also can be utilized for industrial and residential safety applications, and adapted to other gases and gas combinations.
Generation and transmission of DPSK signals using a directly modulated passive feedback laser.
Karar, Abdullah S; Gao, Ying; Zhong, Kang Ping; Ke, Jian Hong; Cartledge, John C
2012-12-10
The generation of differential-phase-shift keying (DPSK) signals is demonstrated using a directly modulated passive feedback laser at 10.709-Gb/s, 14-Gb/s and 16-Gb/s. The quality of the DPSK signals is assessed using both noncoherent detection for a bit rate of 10.709-Gb/s and coherent detection with digital signal processing involving a look-up table pattern-dependent distortion compensator. Transmission over a passive link consisting of 100 km of single mode fiber at a bit rate of 10.709-Gb/s is achieved with a received optical power of -45 dBm at a bit-error-ratio of 3.8 × 10^-3 and a 49 dB loss margin.
The Watchdog Task: Concurrent error detection using assertions
NASA Technical Reports Server (NTRS)
Ersoz, A.; Andrews, D. M.; Mccluskey, E. J.
1985-01-01
The Watchdog Task, a software abstraction of the Watchdog-processor, is shown to be a powerful error detection tool with a great deal of flexibility and the advantages of watchdog techniques. A Watchdog Task system in Ada is presented; issues of recovery, latency, efficiency (communication) and preprocessing are discussed. Different applications, one of which is error detection on a single processor, are examined.
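A rough Python analogue of the idea (the paper presents it as an Ada task) is a concurrent thread that periodically checks assertions about the main computation's state; whether a short-lived fault is caught depends on the sampling period and scheduling.

```python
# Conceptual sketch of a software watchdog task: a separate thread concurrently
# checks an assertion about the worker's state and records violations.
import threading
import time

state = {"counter": 0, "done": False}
violations = []

def watchdog(period=0.001):
    """Concurrently check the assertion that the counter never decreases."""
    last = state["counter"]
    while not state["done"]:
        if state["counter"] < last:
            violations.append(("counter decreased to", state["counter"]))
        last = state["counter"]
        time.sleep(period)

def worker():
    for i in range(100):
        state["counter"] = i if i != 50 else -1   # inject one transient fault
        time.sleep(0.005)
    state["done"] = True

w = threading.Thread(target=watchdog)
w.start()
worker()
w.join()
# Detection of the short-lived fault depends on the watchdog's sampling rate.
print("violations detected:", violations)
```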
A Review of Research on Error Detection. Technical Report No. 540.
ERIC Educational Resources Information Center
Meyer, Linda A.
A review was conducted of the research on error detection studies completed with children, adolescents, and young adults to determine at what age children begin to detect errors in texts. The studies were grouped according to the subjects' ages. The focus of the review was on the following aspects of each study: the hypothesis that guided the…
Localized Glaucomatous Change Detection within the Proper Orthogonal Decomposition Framework
Balasubramanian, Madhusudhanan; Kriegman, David J.; Bowd, Christopher; Holst, Michael; Weinreb, Robert N.; Sample, Pamela A.; Zangwill, Linda M.
2012-01-01
Purpose. To detect localized glaucomatous structural changes using proper orthogonal decomposition (POD) framework with false-positive control that minimizes confirmatory follow-ups, and to compare the results to topographic change analysis (TCA). Methods. We included 167 participants (246 eyes) with ≥4 Heidelberg Retina Tomograph (HRT)-II exams from the Diagnostic Innovations in Glaucoma Study; 36 eyes progressed by stereo-photographs or visual fields. All other patient eyes (n = 210) were non-progressing. Specificities were evaluated using 21 normal eyes. Significance of change at each HRT superpixel between each follow-up and its nearest baseline (obtained using POD) was estimated using mixed-effects ANOVA. Locations with significant reduction in retinal height (red pixels) were determined using Bonferroni, Lehmann-Romano k-family-wise error rate (k-FWER), and Benjamini-Hochberg false discovery rate (FDR) type I error control procedures. Observed positive rate (OPR) in each follow-up was calculated as a ratio of number of red pixels within disk to disk size. Progression by POD was defined as one or more follow-ups with OPR greater than the anticipated false-positive rate. TCA was evaluated using the recently proposed liberal, moderate, and conservative progression criteria. Results. Sensitivity in progressors, specificity in normals, and specificity in non-progressors, respectively, were POD-Bonferroni = 100%, 0%, and 0%; POD k-FWER = 78%, 86%, and 43%; POD-FDR = 78%, 86%, and 43%; POD k-FWER with retinal height change ≥50 μm = 61%, 95%, and 60%; TCA-liberal = 86%, 62%, and 21%; TCA-moderate = 53%, 100%, and 70%; and TCA-conservative = 17%, 100%, and 84%. Conclusions. With a stronger control of type I errors, k-FWER in POD framework minimized confirmatory follow-ups while providing diagnostic accuracy comparable to TCA. Thus, POD with k-FWER shows promise to reduce the number of confirmatory follow-ups required for clinical care and studies evaluating new glaucoma treatments. (ClinicalTrials.gov number, NCT00221897.) PMID:22491406
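Of the error-control procedures compared above, the Benjamini-Hochberg step-up rule is the easiest to state compactly. The sketch below applies it to made-up per-superpixel p-values, not the HRT data.

```python
# Generic Benjamini-Hochberg FDR procedure of the kind compared in the study,
# applied to made-up per-superpixel p-values (not the HRT data).
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean mask of p-values rejected at FDR level q."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    thresholds = q * np.arange(1, m + 1) / m
    passed = p[order] <= thresholds
    k = np.max(np.where(passed)[0]) + 1 if passed.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True          # reject the k smallest p-values
    return reject

rng = np.random.default_rng(3)
pvals = np.concatenate([rng.uniform(0, 0.001, 20),    # "changed" superpixels
                        rng.uniform(0, 1, 480)])      # unchanged superpixels
print("rejections:", benjamini_hochberg(pvals).sum())
```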
Computer-aided field editing in DHS: the Turkey experiment.
1995-01-01
A study comparing field editing using a Notebook computer, computer-aided field editing (CAFE), with that done manually in the standard manner, during the 1993 Demographic and Health Survey (DHS) in Turkey, demonstrated that there was less missing data and a lower mean number of errors for teams using CAFE. Six of 13 teams used CAFE in the Turkey experiment; the computers were equipped with Integrated System for Survey Analysis (ISSA) software for editing the DHS questionnaires. The CAFE teams completed 2466 out of 8619 household questionnaires and 1886 out of 6649 individual questionnaires. The CAFE team editor entered data into the computer and marked any detected errors on the questionnaire; the errors were then corrected by the editor, in the field, based on other responses in the questionnaire, or on corrections made by the interviewer to whom the questionnaire was returned. Errors in questionnaires edited manually are not identified until they are sent to the survey office for data processing, when it is too late to ask for clarification from respondents. There was one area where the error rate was higher for CAFE teams; the CAFE editors paid less attention to errors presented as warnings only.
Impaired search for orientation but not color in hemi-spatial neglect.
Wilkinson, David; Ko, Philip; Milberg, William; McGlinchey, Regina
2008-01-01
Patients with hemi-spatial neglect have trouble finding targets defined by a conjunction of visual features. The problem is widely believed to stem from a high-level deficit in attentional deployment, which in turn has led to disagreement over whether the detection of basic features is also disrupted. If one assumes that the detection of salient visual features can be based on the output of spared 'preattentive' processes (Treisman and Gelade, 1980), then feature detection should remain intact. However, if one assumes that all forms of detection require at least a modicum of focused attention (Duncan and Humphreys, 1992), then all forms of search will be disrupted to some degree. Here we measured the detection of feature targets that were defined by either a unique color or orientation. Comparable detection rates were observed in non-neglected space, which indicated that both forms of search placed similar demands on attention. For either of the above accounts to be true, the two targets should therefore be detected with equal efficiency in the neglected field. We found that while the detection rate for color was normal in four of our five patients, all showed an increased reaction time and/or error rate for orientation. This result points to a selective deficit in orientation discrimination, and implies that neglect disrupts specific feature representations. That is, the effects of neglect on visual search are not only attentional but also perceptual.
NASA Technical Reports Server (NTRS)
Kirstettier, Pierre-Emmanual; Honh, Y.; Gourley, J. J.; Chen, S.; Flamig, Z.; Zhang, J.; Howard, K.; Schwaller, M.; Petersen, W.; Amitai, E.
2011-01-01
Characterization of the error associated with satellite rainfall estimates is a necessary component of deterministic and probabilistic frameworks involving space-borne passive and active microwave measurements for applications ranging from water budget studies to forecasting natural hazards related to extreme rainfall events. We focus here on the error structure of NASA's Tropical Rainfall Measurement Mission (TRMM) Precipitation Radar (PR) quantitative precipitation estimation (QPE) at the ground. The problem is addressed by comparison of PR QPEs with reference values derived from ground-based measurements using the NOAA/NSSL ground radar-based National Mosaic and QPE system (NMQ/Q2). A preliminary investigation of this subject has been carried out at the PR estimation scale (instantaneous and 5 km) using a three-month data sample in the southern part of the US. The primary contribution of this study is the presentation of the detailed steps required to derive a trustworthy reference rainfall dataset from Q2 at the PR pixel resolution. It relies on a bias correction and a radar quality index, both of which provide a basis to filter out the less trustworthy Q2 values. Several aspects of PR errors are revealed and quantified including sensitivity to the processing steps with the reference rainfall, comparisons of rainfall detectability and rainfall rate distributions, spatial representativeness of error, and separation of systematic biases and random errors. The methodology and framework developed herein apply more generally to rainfall rate estimates from other sensors onboard low-earth orbiting satellites, such as microwave imagers and dual-wavelength radars like those of the Global Precipitation Measurement (GPM) mission.
The fitness cost of mis-splicing is the main determinant of alternative splicing patterns.
Saudemont, Baptiste; Popa, Alexandra; Parmley, Joanna L; Rocher, Vincent; Blugeon, Corinne; Necsulea, Anamaria; Meyer, Eric; Duret, Laurent
2017-10-30
Most eukaryotic genes are subject to alternative splicing (AS), which may contribute to the production of protein variants or to the regulation of gene expression via nonsense-mediated messenger RNA (mRNA) decay (NMD). However, a fraction of splice variants might correspond to spurious transcripts and the question of the relative proportion of splicing errors to functional splice variants remains highly debated. We propose a test to quantify the fraction of AS events corresponding to errors. This test is based on the fact that the fitness cost of splicing errors increases with the number of introns in a gene and with expression level. We analyzed the transcriptome of the intron-rich eukaryote Paramecium tetraurelia. We show that in both normal and in NMD-deficient cells, AS rates strongly decrease with increasing expression level and with increasing number of introns. This relationship is observed for AS events that are detectable by NMD as well as for those that are not, which invalidates the hypothesis of a link with the regulation of gene expression. Our results show that in genes with a median expression level, 92-98% of observed splice variants correspond to errors. We observed the same patterns in human transcriptomes and we further show that AS rates correlate with the fitness cost of splicing errors. These observations indicate that genes under weaker selective pressure accumulate more maladaptive substitutions and are more prone to splicing errors. Thus, to a large extent, patterns of gene expression variants simply reflect the balance between selection, mutation, and drift.
Lee, Nam-Ju; Cho, Eunhee; Bakken, Suzanne
2010-03-01
The purposes of this study were to develop a taxonomy for detection of errors related to hypertension management and to apply the taxonomy to retrospectively analyze the documentation of nurses in Advanced Practice Nurse (APN) training. We developed the Hypertension Diagnosis and Management Error Taxonomy and applied it in a sample of adult patient encounters (N = 15,862) that were documented in a personal digital assistant-based clinical log by registered nurses in APN training. We used Structured Query Language (SQL) queries to retrieve hypertension-related data from the central database. The data were summarized using descriptive statistics. Blood pressure was documented in 77.5% (n = 12,297) of encounters; 21% had high blood pressure values. Missed diagnosis, incomplete diagnosis and misdiagnosis rates were 63.7%, 6.8% and 7.5% respectively. In terms of treatment, the omission rates were 17.9% for essential medications and 69.9% for essential patient teaching. Contraindicated anti-hypertensive medications were documented in 12% of encounters with co-occurring diagnoses of hypertension and asthma. The Hypertension Diagnosis and Management Error Taxonomy was useful for identifying errors based on documentation in a clinical log. The results provide an initial understanding of the nature of errors associated with hypertension diagnosis and management of nurses in APN training. The information gained from this study can contribute to educational interventions that promote APN competencies in identification and management of hypertension as well as overall patient safety and informatics competencies. Copyright © 2010 Korean Society of Nursing Science. All rights reserved.
Newman, Frederick L; McGrew, John; Deliberty, Richard N
2009-08-01
The current paper reports on the feasibility of using the HAPI-A, an instrument designed to assess a person's level of functioning in the community: (1) to help determine eligibility to receive behavioral health services, (2) to assign reimbursement case rates; and (3) to provide data for a service provider report card. A 3-year field study of the use of the instrument across an entire state mental health system explored the effectiveness of methods to enhance data accuracy, including annual training and a professional clinical record audit, and the ability of the test to detect differences in improvement rates within risk-adjusted groupings. The combination of training and auditing produced statistically significant, cumulative reductions in data errors across all 3 years of the field test. The HAPI-A also was sensitive in detecting differences among service providers in outcome improvements for six of six risk-adjusted groups rated at the moderate level of impairment and for five of six groups rated at the mild level of impairment, but was inconsistent in detecting outcome differences for persons rated at the severe level of impairment.
Abu-Almaalie, Zina; Ghassemlooy, Zabih; Bhatnagar, Manav R; Le-Minh, Hoa; Aslam, Nauman; Liaw, Shien-Kuei; Lee, It Ee
2016-11-20
Physical layer network coding (PNC) improves the throughput in wireless networks by enabling two nodes to exchange information using a minimum number of time slots. The PNC technique is proposed for two-way relay channel free space optical (TWR-FSO) communications with the aim of maximizing the utilization of network resources. The multipair TWR-FSO is considered in this paper, where a single antenna on each pair seeks to communicate via a common receiver aperture at the relay. Therefore, chip interleaving is adopted as a technique to separate the different transmitted signals at the relay node to perform PNC mapping. Accordingly, this scheme relies on the iterative multiuser technique for detection of users at the receiver. The bit error rate (BER) performance of the proposed system is examined under the combined influences of atmospheric loss, turbulence-induced channel fading, and pointing errors (PEs). By adopting the joint PNC mapping with interleaving and multiuser detection techniques, the BER results show that the proposed scheme can achieve a significant performance improvement against the degrading effects of turbulences and PEs. It is also demonstrated that a larger number of simultaneous users can be supported with this new scheme in establishing a communication link between multiple pairs of nodes in two time slots, thereby improving the channel capacity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Di, Sheng; Berrocal, Eduardo; Cappello, Franck
The silent data corruption (SDC) problem is attracting more and more attention because it is expected to have a great impact on exascale HPC applications. SDC faults are hazardous in that they pass unnoticed by hardware and can lead to wrong computation results. In this work, we formulate SDC detection as a runtime one-step-ahead prediction method, leveraging multiple linear prediction methods in order to improve the detection results. The contributions are twofold: (1) we propose an error feedback control model that can reduce the prediction errors for different linear prediction methods, and (2) we propose a spatial-data-based even-sampling method to minimize the detection overheads (including memory and computation cost). We implement our algorithms in the fault tolerance interface, a fault tolerance library with multiple checkpoint levels, such that users can conveniently protect their HPC applications against both SDC errors and fail-stop errors. We evaluate our approach by using large-scale traces from well-known, large-scale HPC applications, as well as by running those HPC applications on a real cluster environment. Experiments show that our error feedback control model can improve detection sensitivity by 34-189% for bit-flip memory errors injected with the bit positions in the range [20,30], without any degradation in detection accuracy. Furthermore, memory size can be reduced by 33% with our spatial-data even-sampling method, with only a slight and graceful degradation in the detection sensitivity.
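Stripped of the error-feedback model and spatial sampling, the core detector is a one-step-ahead prediction check. A minimal sketch with a single linear-extrapolation predictor and an assumed relative threshold follows; the paper's method is considerably more elaborate.

```python
# Minimal sketch of one-step-ahead SDC detection (the paper combines several
# linear predictors with an error-feedback model; this shows only the core idea):
# predict each new value by linear extrapolation and flag large deviations.
import numpy as np

def detect_sdc(series, theta=0.1):
    """Flag indices whose value deviates from the linear extrapolation of the
    two previous values by more than `theta` (relative to the value range)."""
    series = np.asarray(series, dtype=float)
    span = series.max() - series.min() or 1.0
    flagged = []
    for t in range(2, len(series)):
        predicted = 2 * series[t - 1] - series[t - 2]   # linear extrapolation
        if abs(series[t] - predicted) / span > theta:
            flagged.append(t)
    return flagged

x = np.sin(np.linspace(0, 4, 200))     # smooth "simulation" variable
x[120] += 0.8                          # injected bit-flip-like corruption
# A single spike also perturbs the next two predictions, so a few neighboring
# steps may be flagged as well.
print("flagged time steps:", detect_sdc(x))
```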
Effect of Detector Dead Time on the Performance of Optical Direct-Detection Communication Links
NASA Technical Reports Server (NTRS)
Chen, C.-C.
1988-01-01
Avalanche photodiodes (APDs) operating in the Geiger mode can provide a significantly improved single-photon detection sensitivity over conventional photodiodes. However, the quenching circuit required to remove the excess charge carriers after each photon event can introduce an undesirable dead time into the detection process. The effect of this detector dead time on the performance of a binary pulse-position-modulated (PPM) channel is studied by analyzing the error probability. It is shown that, when background noise is negligible, the performance of the detector with dead time is similar to that of a quantum-limited receiver. For systems with increasing background intensities, the error rate of the receiver starts to degrade rapidly with increasing dead time. The power penalty due to detector dead time is also evaluated and shown to depend critically on background intensity as well as dead time. Given the expected background strength in an optical channel, therefore, a constraint must be placed on the bandwidth of the receiver to limit the amount of power penalty due to detector dead time.
Effect of detector dead time on the performance of optical direct-detection communication links
NASA Astrophysics Data System (ADS)
Chen, C.-C.
1988-05-01
Avalanche photodiodes (APDs) operating in the Geiger mode can provide a significantly improved single-photon detection sensitivity over conventional photodiodes. However, the quenching circuit required to remove the excess charge carriers after each photon event can introduce an undesirable dead time into the detection process. The effect of this detector dead time on the performance of a binary pulse-position-modulated (PPM) channel is studied by analyzing the error probability. It is shown that, when background noise is negligible, the performance of the detector with dead time is similar to that of a quantum-limited receiver. For systems with increasing background intensities, the error rate of the receiver starts to degrade rapidly with increasing dead time. The power penalty due to detector dead time is also evaluated and shown to depend critically on background intensity as well as dead time. Given the expected background strength in an optical channel, therefore, a constraint must be placed on the bandwidth of the receiver to limit the amount of power penalty due to detector dead time.
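The quantum-limited reference point mentioned in both abstracts has a simple closed form for binary PPM with direct photon counting: with mean signal count Ks and no background, an error requires the signal slot to register zero photons, after which the tie is guessed at random, giving P_e = exp(-Ks)/2. The snippet below evaluates that baseline; it does not model dead time.

```python
# Reference point for the comparison above (not the dead-time model itself): the
# quantum-limited error probability of binary PPM with direct photon counting.
import math

def ppm2_quantum_limited_error(Ks):
    # Error only if the signal slot yields zero photons (prob. exp(-Ks)),
    # after which the receiver guesses between the two slots at random.
    return 0.5 * math.exp(-Ks)

for Ks in (1, 3, 5, 10):
    print("Ks = %2d photons -> P_e = %.2e" % (Ks, ppm2_quantum_limited_error(Ks)))
```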
A cascaded coding scheme for error control and its performance analysis
NASA Technical Reports Server (NTRS)
Lin, Shu; Kasami, Tadao; Fujiwara, Tohru; Takata, Toyoo
1986-01-01
A coding scheme is investigated for error control in data communication systems. The scheme is obtained by cascading two error correcting codes, called the inner and outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit error rate epsilon < 1/2. It is shown that if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit error rate. Various specific example schemes with inner codes ranging from high rates to very low rates and Reed-Solomon codes as outer codes are considered, and their error probabilities are evaluated. They all provide extremely high reliability even for very high bit error rates. Several example schemes are being considered by NASA for satellite and spacecraft downlink error control.
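The building block of such reliability estimates is the block error probability of a t-error-correcting code on a binary symmetric channel, which the following sketch evaluates for an assumed code length and correction capability (illustrative numbers, not the paper's example schemes).

```python
# Generic building block of such analyses (not the paper's exact expressions):
# the probability that a t-error-correcting block code of length n fails on a
# binary symmetric channel with crossover probability eps.
from math import comb

def block_error_prob(n, t, eps):
    """P(more than t of n bits are flipped) under independent bit errors."""
    return sum(comb(n, i) * eps**i * (1 - eps)**(n - i)
               for i in range(t + 1, n + 1))

# Example: a length-63, triple-error-correcting code at two channel error rates.
for eps in (1e-2, 1e-1):
    print("eps = %.0e -> block error prob = %.3e" % (eps, block_error_prob(63, 3, eps)))
```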
A real-time surface inspection system for precision steel balls based on machine vision
NASA Astrophysics Data System (ADS)
Chen, Yi-Ji; Tsai, Jhy-Cherng; Hsu, Ya-Chen
2016-07-01
Precision steel balls are one of the most fundamental components of motion and power transmission parts and they are widely used in industrial machinery and the automotive industry. As precision balls are crucial for the quality of these products, there is an urgent need to develop a fast and robust system for inspecting defects of precision steel balls. In this paper, a real-time system for inspecting surface defects of precision steel balls is developed based on machine vision. The developed system integrates a dual-lighting system, an unfolding mechanism and inspection algorithms for real-time signal processing and defect detection. The developed system is tested under feeding speeds of 4 pcs/s with a detection rate of 99.94% and an error rate of 0.10%. The minimum detectable surface flaw area is 0.01 mm², which meets the requirement for inspecting ISO grade 100 precision steel balls.
A simple and effective figure caption detection system for old-style documents
NASA Astrophysics Data System (ADS)
Liu, Zongyi; Zhou, Hanning
2011-01-01
Identifying figure captions has wide applications in producing high quality e-books such as kindle books or ipad books. In this paper, we present a rule-based system to detect horizontal figure captions in old-style documents. Our algorithm consists of three steps: (i) segment images into regions of different types such as text and figures, (ii) search the best caption region candidate based on heuristic rules such as region alignments and distances, and (iii) expand caption regions identified in step (ii) with its neighboring text-regions in order to correct oversegmentation errors. We test our algorithm using 81 images collected from old-style books, with each image containing at least one figure area. We show that the approach is able to correctly detect figure captions from images with different layouts, and we also measure its performances in terms of both precision rate and recall rate.
Repeat-aware modeling and correction of short read errors.
Yang, Xiao; Aluru, Srinivas; Dorman, Karin S
2011-02-15
High-throughput short read sequencing is revolutionizing genomics and systems biology research by enabling cost-effective deep coverage sequencing of genomes and transcriptomes. Error detection and correction are crucial to many short read sequencing applications including de novo genome sequencing, genome resequencing, and digital gene expression analysis. Short read error detection is typically carried out by counting the observed frequencies of kmers in reads and validating those with frequencies exceeding a threshold. In case of genomes with high repeat content, an erroneous kmer may be frequently observed if it has few nucleotide differences with valid kmers with multiple occurrences in the genome. Error detection and correction were mostly applied to genomes with low repeat content and this remains a challenging problem for genomes with high repeat content. We develop a statistical model and a computational method for error detection and correction in the presence of genomic repeats. We propose a method to infer genomic frequencies of kmers from their observed frequencies by analyzing the misread relationships among observed kmers. We also propose a method to estimate the threshold useful for validating kmers whose estimated genomic frequency exceeds the threshold. We demonstrate that superior error detection is achieved using these methods. Furthermore, we break away from the common assumption of uniformly distributed errors within a read, and provide a framework to model position-dependent error occurrence frequencies common to many short read platforms. Lastly, we achieve better error correction in genomes with high repeat content. The software is implemented in C++ and is freely available under GNU GPL3 license and Boost Software V1.0 license at "http://aluru-sun.ece.iastate.edu/doku.php?id=redeem". We introduce a statistical framework to model sequencing errors in next-generation reads, which led to promising results in detecting and correcting errors for genomes with high repeat content.
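The baseline that the paper improves upon, counting k-mers and flagging low-frequency ones as likely errors, can be sketched in a few lines; the reads, k, and threshold below are toy values chosen for illustration.

```python
# Toy version of the baseline approach described above: count k-mers in reads
# and flag those below a frequency threshold as likely sequencing errors.
# (The paper's method additionally models repeats and misread relationships.)
from collections import Counter

def kmer_counts(reads, k):
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

reads = ["ACGTACGTGG", "ACGTACGTGG", "ACGTACGAGG"]   # third read has one error
counts = kmer_counts(reads, k=5)
threshold = 2
suspect = [kmer for kmer, c in counts.items() if c < threshold]
print("suspect k-mers:", suspect)
```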
Jones, John W.
2015-01-01
The U.S. Geological Survey is developing new Landsat science products. One, named Dynamic Surface Water Extent (DSWE), is focused on the representation of ground surface inundation as detected in cloud-/shadow-/snow-free pixels for scenes collected over the U.S. and its territories. Characterization of DSWE uncertainty to facilitate its appropriate use in science and resource management is a primary objective. A unique evaluation dataset developed from data made publicly available through the Everglades Depth Estimation Network (EDEN) was used to evaluate one candidate DSWE algorithm that is relatively simple, requires no scene-based calibration data, and is intended to detect inundation in the presence of marshland vegetation. A conceptual model of expected algorithm performance in vegetated wetland environments was postulated, tested and revised. Agreement scores were calculated at the level of scenes and vegetation communities, vegetation index classes, water depths, and individual EDEN gage sites for a variety of temporal aggregations. Landsat Archive cloud cover attribution errors were documented. Cloud cover had some effect on model performance. Error rates increased with vegetation cover. Relatively low error rates for locations of little/no vegetation were unexpectedly dominated by omission errors due to variable substrates and mixed pixel effects. Examined discrepancies between satellite and in situ modeled inundation demonstrated the utility of such comparisons for EDEN database improvement. Importantly, there seems no trend or bias in candidate algorithm performance as a function of time or general hydrologic conditions, an important finding for long-term monitoring. The developed database and knowledge gained from this analysis will be used for improved evaluation of candidate DSWE algorithms as well as other measurements made on Everglades surface inundation, surface water heights and vegetation using radar, lidar and hyperspectral instruments. Although no other sites have such an extensive in situ network or long-term records, the broader applicability of this and other candidate DSWE algorithms is being evaluated in other wetlands using this work as a guide. Continued interaction among DSWE producers and potential users will help determine whether the measured accuracies are adequate for practical utility in resource management.
Memory Detection 2.0: The First Web-Based Memory Detection Test
Kleinberg, Bennett; Verschuere, Bruno
2015-01-01
There is accumulating evidence that reaction times (RTs) can be used to detect recognition of critical (e.g., crime) information. A limitation of this research base is its reliance upon small samples (average n = 24), and indications of publication bias. To advance RT-based memory detection, we report upon the development of the first web-based memory detection test. Participants in this research (Study1: n = 255; Study2: n = 262) tried to hide 2 high salient (birthday, country of origin) and 2 low salient (favourite colour, favourite animal) autobiographical details. RTs allowed to detect concealed autobiographical information, and this, as predicted, more successfully so than error rates, and for high salient than for low salient items. While much remains to be learned, memory detection 2.0 seems to offer an interesting new platform to efficiently and validly conduct RT-based memory detection research. PMID:25874966
45 CFR 98.100 - Error Rate Report.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 45 Public Welfare 1 2013-10-01 2013-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...
45 CFR 98.100 - Error Rate Report.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 45 Public Welfare 1 2014-10-01 2014-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare Department of Health and Human Services GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...
45 CFR 98.100 - Error Rate Report.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 45 Public Welfare 1 2012-10-01 2012-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...
45 CFR 98.100 - Error Rate Report.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 45 Public Welfare 1 2011-10-01 2011-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...
Using video recording to identify management errors in pediatric trauma resuscitation.
Oakley, Ed; Stocker, Sergio; Staubli, Georg; Young, Simon
2006-03-01
To determine the ability of video recording to identify management errors in trauma resuscitation and to compare this method with medical record review. The resuscitation of children who presented to the emergency department of the Royal Children's Hospital between February 19, 2001, and August 18, 2002, for whom the trauma team was activated was video recorded. The tapes were analyzed, and management was compared with Advanced Trauma Life Support guidelines. Deviations from these guidelines were recorded as errors. Fifty video recordings were analyzed independently by 2 reviewers. Medical record review was undertaken for a cohort of the most seriously injured patients, and errors were identified. The errors detected with the 2 methods were compared. Ninety resuscitations were video recorded and analyzed. An average of 5.9 errors per resuscitation was identified with this method (range: 1-12 errors). Twenty-five children (28%) had an injury severity score of >11; there was an average of 2.16 errors per patient in this group. Only 10 (20%) of these errors were detected in the medical record review. Medical record review detected an additional 8 errors that were not evident on the video recordings. Concordance between independent reviewers was high, with 93% agreement. Video recording is more effective than medical record review in detecting management errors in pediatric trauma resuscitation. Management errors in pediatric trauma resuscitation are common and often involve basic resuscitation principles. Resuscitation of the most seriously injured children was associated with fewer errors. Video recording is a useful adjunct to trauma resuscitation auditing.
Harrell-Williams, Leigh; Wolfe, Edward W
2014-01-01
Previous research has investigated the influence of sample size, model misspecification, test length, ability distribution offset, and generating model on the likelihood ratio difference test in applications of item response models. This study extended that research to the evaluation of dimensionality using the multidimensional random coefficients multinomial logit model (MRCMLM). Logistic regression analysis of simulated data reveal that sample size and test length have a large effect on the capacity of the LR difference test to correctly identify unidimensionality, with shorter tests and smaller sample sizes leading to smaller Type I error rates. Higher levels of simulated misfit resulted in fewer incorrect decisions than data with no or little misfit. However, Type I error rates indicate that the likelihood ratio difference test is not suitable under any of the simulated conditions for evaluating dimensionality in applications of the MRCMLM.
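The statistic under study is the standard likelihood ratio difference test between nested models, referenced against a chi-square distribution. The log-likelihood values below are invented solely to show the computation.

```python
# Generic form of the likelihood ratio difference test examined in the study,
# with made-up log-likelihoods for nested uni- and two-dimensional models.
from scipy.stats import chi2

def lr_difference_test(loglik_restricted, loglik_general, df_diff):
    """Return the LR statistic and its p-value under the chi-square reference."""
    lr = -2.0 * (loglik_restricted - loglik_general)
    return lr, chi2.sf(lr, df_diff)

lr, p = lr_difference_test(loglik_restricted=-5230.4,   # unidimensional fit
                           loglik_general=-5221.7,      # two-dimensional fit
                           df_diff=2)                    # extra parameters
print("LR = %.2f, p = %.4f" % (lr, p))
```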
Optical phase-locked loop (OPLL) for free-space laser communications with heterodyne detection
NASA Technical Reports Server (NTRS)
Win, Moe Z.; Chen, Chien-Chung; Scholtz, Robert A.
1991-01-01
Several advantages of coherent free-space optical communications are outlined. Theoretical analysis is formulated for an OPLL disturbed by shot noise, modulation noise, and frequency noise consisting of a white component, a 1/f component, and a 1/f-squared component. Each of the noise components is characterized by its associated power spectral density. It is shown that the effect of modulation depends only on the ratio of loop bandwidth and data rate, and is negligible for an OPLL with loop bandwidth smaller than one fourth the data rate. Total phase error variance as a function of loop bandwidth is displayed for several values of carrier signal to noise ratio. Optimal loop bandwidth is also calculated as a function of carrier signal to noise ratio. An OPLL experiment is performed, where it is shown that the measured phase error variance closely matches the theoretical predictions.
Interaction of hypertension and age in visual selective attention performance.
Madden, D J; Blumenthal, J A
1998-01-01
Previous research suggests that some aspects of cognitive performance decline as a joint function of age and hypertension. In this experiment, 51 unmedicated individuals with mild essential hypertension and 48 normotensive individuals, 18-78 years of age, performed a visual search task. The estimated time required to identify a display character and shift attention between display positions increased with age. This attention shift time did not differ significantly between hypertensive and normotensive participants, but regression analyses indicated some mediation of the age effect by blood pressure. For individuals less than 60 years of age, the error rate was greater for hypertensive than for normotensive participants. Although the present design could detect effects of only moderate to large size, the results suggest that effects of hypertension may be more evident in a relatively general measure of performance (mean error rate) than in the speed of shifting visual attention.
A goodness-of-fit test for capture-recapture model M(t) under closure
Stanley, T.R.; Burnham, K.P.
1999-01-01
A new, fully efficient goodness-of-fit test for the time-specific closed-population capture-recapture model M(t) is presented. This test is based on the residual distribution of the capture history data given the maximum likelihood parameter estimates under model M(t), is partitioned into informative components, and is based on chi-square statistics. Comparison of this test with Leslie's test (Leslie, 1958, Journal of Animal Ecology 27, 84-86) for model M(t), using Monte Carlo simulations, shows the new test generally outperforms Leslie's test. The new test is frequently computable when Leslie's test is not, has Type I error rates that are closer to nominal error rates than Leslie's test, and is sensitive to behavioral variation and heterogeneity in capture probabilities. Leslie's test is not sensitive to behavioral variation in capture probabilities but, when computable, has greater power to detect heterogeneity than the new test.
Simultaneous message framing and error detection
NASA Technical Reports Server (NTRS)
Frey, A. H., Jr.
1968-01-01
Circuitry simultaneously inserts message framing information and detects noise errors in binary code data transmissions. Separate message groups are framed without requiring both framing bits and error-checking bits, and predetermined message sequences are separated from other message sequences without being hampered by intervening noise.
Multi-bits error detection and fast recovery in RISC cores
NASA Astrophysics Data System (ADS)
Jing, Wang; Xing, Yang; Yuanfu, Zhao; Weigong, Zhang; Jiao, Shen; Keni, Qiu
2015-11-01
Particle-induced soft errors are a major threat to the reliability of microprocessors. Even worse, multi-bit upsets (MBUs) are becoming ever more frequent as the feature size of ICs rapidly shrinks. Several architecture-level mechanisms have been proposed to protect microprocessors from soft errors, such as dual and triple modular redundancy (DMR and TMR). However, most of them are inefficient against the growing number of multi-bit errors or cannot properly balance critical-path delay, area and power penalties. This paper proposes a novel architecture, self-recovery dual-pipeline (SRDP), to effectively provide soft error detection and recovery at low cost for general RISC structures. We focus on the following three aspects. First, an advanced DMR pipeline is devised to detect soft errors, especially MBUs. Second, SEU/MBU errors can be located by adding self-checking logic to the pipeline stage registers. Third, a recovery scheme is proposed with a recovery cost of 1 or 5 clock cycles. Our evaluation of a prototype implementation shows that SRDP can detect up to 100% of particle-induced soft errors and recover from nearly 95% of them; the remaining 5% enter a specific trap.
A comparison of acoustic monitoring methods for common anurans of the northeastern United States
Brauer, Corinne; Donovan, Therese; Mickey, Ruth M.; Katz, Jonathan; Mitchell, Brian R.
2016-01-01
Many anuran monitoring programs now include autonomous recording units (ARUs). These devices collect audio data for extended periods of time with little maintenance and at sites where traditional call surveys might be difficult. Additionally, computer software programs have grown increasingly accurate at automatically identifying the calls of species. However, increased automation may cause increased error. We collected 435 min of audio data with 2 types of ARUs at 10 wetland sites in Vermont and New York, USA, from 1 May to 1 July 2010. For each minute, we determined presence or absence of 4 anuran species (Hyla versicolor, Pseudacris crucifer, Anaxyrus americanus, and Lithobates clamitans) using 1) traditional human identification versus 2) computer-mediated identification with software package, Song Scope® (Wildlife Acoustics, Concord, MA). Detections were compared with a data set consisting of verified calls in order to quantify false positive, false negative, true positive, and true negative rates. Multinomial logistic regression analysis revealed a strong (P < 0.001) 3-way interaction between the ARU recorder type, identification method, and focal species, as well as a trend in the main effect of rain (P = 0.059). Overall, human surveyors had the lowest total error rate (<2%) compared with 18–31% total errors with automated methods. Total error rates varied by species, ranging from 4% for A. americanus to 26% for L. clamitans. The presence of rain may reduce false negative rates. For survey minutes where anurans were known to be calling, the odds of a false negative were increased when fewer individuals of the same species were calling.
Observer detection of image degradation caused by irreversible data compression processes
NASA Astrophysics Data System (ADS)
Chen, Ji; Flynn, Michael J.; Gross, Barry; Spizarny, David
1991-05-01
Irreversible data compression methods have been proposed to reduce the data storage and communication requirements of digital imaging systems. In general, the error produced by compression increases as an algorithm's compression ratio is increased. We have studied the relationship between compression ratios and the detection of induced error using radiologic observers. The nature of the errors was characterized by calculating the power spectrum of the difference image. In contrast with studies designed to test whether detected errors alter diagnostic decisions, this study was designed to test whether observers could detect the induced error. A paired-film observer study was designed to test whether induced errors were detected. The study was conducted with chest radiographs selected and ranked for subtle evidence of interstitial disease, pulmonary nodules, or pneumothoraces. Images were digitized at 86 microns (4K X 5K) and 2K X 2K regions were extracted. A full-frame discrete cosine transform method was used to compress images at ratios varying between 6:1 and 60:1. The decompressed images were reprinted next to the original images in a randomized order with a laser film printer. The use of a film digitizer and a film printer which can reproduce all of the contrast and detail in the original radiograph makes the results of this study insensitive to instrument performance and primarily dependent on radiographic image quality. The results of this study define conditions for which errors associated with irreversible compression cannot be detected by radiologic observers. The results indicate that an observer can detect the errors introduced by this compression algorithm for compression ratios of 10:1 (1.2 bits/pixel) or higher.
ECG Signal Analysis and Arrhythmia Detection using Wavelet Transform
NASA Astrophysics Data System (ADS)
Kaur, Inderbir; Rajni, Rajni; Marwaha, Anupma
2016-12-01
Electrocardiogram (ECG) is used to record the electrical activity of the heart. The ECG signal, being non-stationary in nature, makes the analysis and interpretation of the signal very difficult. Hence, accurate analysis of the ECG signal with a powerful tool like the discrete wavelet transform (DWT) becomes imperative. In this paper, the ECG signal is denoised to remove the artifacts and analyzed using the wavelet transform to detect the QRS complex and arrhythmia. This work is implemented in MATLAB for the MIT/BIH Arrhythmia database and yields a sensitivity of 99.85%, a positive predictivity of 99.92% and a detection error rate of 0.221% with the wavelet transform. It is also inferred that DWT outperforms the principal component analysis technique in the detection of the ECG signal.
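The reported figures follow the usual beat-detection definitions: sensitivity TP/(TP+FN), positive predictivity TP/(TP+FP), and detection error rate (FP+FN)/(TP+FN). The snippet below (in Python rather than the paper's MATLAB) computes them from assumed counts, not the MIT/BIH results.

```python
# The standard QRS-detection metrics behind the figures quoted above,
# computed here from assumed beat counts (not the paper's MIT/BIH results).
def qrs_detection_metrics(tp, fp, fn):
    sensitivity = tp / (tp + fn)            # fraction of true beats detected
    positive_predictivity = tp / (tp + fp)  # fraction of detections that are real
    detection_error_rate = (fp + fn) / (tp + fn)
    return sensitivity, positive_predictivity, detection_error_rate

se, pp, der = qrs_detection_metrics(tp=109300, fp=90, fn=160)
print("Se = %.2f%%, +P = %.2f%%, DER = %.3f%%" % (100 * se, 100 * pp, 100 * der))
```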
Accuracy and reliability of forensic latent fingerprint decisions
Ulery, Bradford T.; Hicklin, R. Austin; Buscaglia, JoAnn; Roberts, Maria Antonia
2011-01-01
The interpretation of forensic fingerprint evidence relies on the expertise of latent print examiners. The National Research Council of the National Academies and the legal and forensic sciences communities have called for research to measure the accuracy and reliability of latent print examiners’ decisions, a challenging and complex problem in need of systematic analysis. Our research is focused on the development of empirical approaches to studying this problem. Here, we report on the first large-scale study of the accuracy and reliability of latent print examiners’ decisions, in which 169 latent print examiners each compared approximately 100 pairs of latent and exemplar fingerprints from a pool of 744 pairs. The fingerprints were selected to include a range of attributes and quality encountered in forensic casework, and to be comparable to searches of an automated fingerprint identification system containing more than 58 million subjects. This study evaluated examiners on key decision points in the fingerprint examination process; procedures used operationally include additional safeguards designed to minimize errors. Five examiners made false positive errors for an overall false positive rate of 0.1%. Eighty-five percent of examiners made at least one false negative error for an overall false negative rate of 7.5%. Independent examination of the same comparisons by different participants (analogous to blind verification) was found to detect all false positive errors and the majority of false negative errors in this study. Examiners frequently differed on whether fingerprints were suitable for reaching a conclusion. PMID:21518906
NASA Astrophysics Data System (ADS)
Ryu, Y. H.; Hodzic, A.; Barré, J.; Descombes, G.; Minnis, P.
2017-12-01
Clouds play a key role in radiation and hence O3 photochemistry by modulating photolysis rates and light-dependent emissions of biogenic volatile organic compounds (BVOCs). It is not well known, however, how much of the bias in O3 predictions is caused by inaccurate cloud predictions. This study quantifies the errors in surface O3 predictions associated with clouds in summertime over CONUS using the Weather Research and Forecasting with Chemistry (WRF-Chem) model. Cloud fields used for photochemistry are corrected based on satellite cloud retrievals in sensitivity simulations. It is found that the WRF-Chem model is able to detect about 60% of clouds in the right locations and generally underpredicts cloud optical depths. The errors in hourly O3 due to the errors in cloud predictions can be up to 60 ppb. On average in summertime over CONUS, the errors in 8-h average O3 of 1-6 ppb are found to be attributable to those in cloud predictions under cloudy sky conditions. The contribution of changes in photolysis rates due to clouds is found to be larger (~80% on average) than that of light-dependent BVOC emissions. The effects of cloud corrections on O3 are about 2 times larger in VOC-limited than NOx-limited regimes, suggesting that the benefits of accurate cloud predictions would be greater in VOC-limited than NOx-limited regimes.
Accuracy and reliability of forensic latent fingerprint decisions.
Ulery, Bradford T; Hicklin, R Austin; Buscaglia, Joann; Roberts, Maria Antonia
2011-05-10
The interpretation of forensic fingerprint evidence relies on the expertise of latent print examiners. The National Research Council of the National Academies and the legal and forensic sciences communities have called for research to measure the accuracy and reliability of latent print examiners' decisions, a challenging and complex problem in need of systematic analysis. Our research is focused on the development of empirical approaches to studying this problem. Here, we report on the first large-scale study of the accuracy and reliability of latent print examiners' decisions, in which 169 latent print examiners each compared approximately 100 pairs of latent and exemplar fingerprints from a pool of 744 pairs. The fingerprints were selected to include a range of attributes and quality encountered in forensic casework, and to be comparable to searches of an automated fingerprint identification system containing more than 58 million subjects. This study evaluated examiners on key decision points in the fingerprint examination process; procedures used operationally include additional safeguards designed to minimize errors. Five examiners made false positive errors for an overall false positive rate of 0.1%. Eighty-five percent of examiners made at least one false negative error for an overall false negative rate of 7.5%. Independent examination of the same comparisons by different participants (analogous to blind verification) was found to detect all false positive errors and the majority of false negative errors in this study. Examiners frequently differed on whether fingerprints were suitable for reaching a conclusion.
An educational and audit tool to reduce prescribing error in intensive care.
Thomas, A N; Boxall, E M; Laha, S K; Day, A J; Grundy, D
2008-10-01
To reduce prescribing errors in an intensive care unit by providing prescriber education in tutorials, ward-based teaching and feedback in 3-monthly cycles with each new group of trainee medical staff. Prescribing audits were conducted three times in each 3-month cycle, once pretraining, once post-training and a final audit after 6 weeks. The audit information was fed back to prescribers with their correct prescribing rates, rates for individual error types and total error rates together with anonymised information about other prescribers' error rates. The percentage of prescriptions with errors decreased over each 3-month cycle (pretraining 25%, 19%, (one missing data point), post-training 23%, 6%, 11%, final audit 7%, 3%, 5% (p<0.0005)). The total number of prescriptions and error rates varied widely between trainees (data collection one; cycle two: range of prescriptions written: 1-61, median 18; error rate: 0-100%; median: 15%). Prescriber education and feedback reduce manual prescribing errors in intensive care.
A Six Sigma Trial For Reduction of Error Rates in Pathology Laboratory.
Tosuner, Zeynep; Gücin, Zühal; Kiran, Tuğçe; Büyükpinarbaşili, Nur; Turna, Seval; Taşkiran, Olcay; Arici, Dilek Sema
2016-01-01
A major target of quality assurance is the minimization of error rates in order to enhance patient safety. Six Sigma is a method targeting zero error (3.4 errors per million events) used in industry. The five main principles of Six Sigma are defining, measuring, analysis, improvement and control. Using this methodology, the causes of errors can be examined and process improvement strategies can be identified. The aim of our study was to evaluate the utility of Six Sigma methodology in error reduction in our pathology laboratory. The errors encountered between April 2014 and April 2015 were recorded by the pathology personnel. Error follow-up forms were examined by the quality control supervisor, administrative supervisor and the head of the department. Using Six Sigma methodology, the rate of errors was measured monthly and the distribution of errors at the preanalytic, analytic and postanalytical phases was analysed. Improvement strategies were reclaimed in the monthly intradepartmental meetings and the control of the units with high error rates was provided. Fifty-six (52.4%) of 107 recorded errors in total were at the pre-analytic phase. Forty-five errors (42%) were recorded as analytical and 6 errors (5.6%) as post-analytical. Two of the 45 errors were major irrevocable errors. The error rate was 6.8 per million in the first half of the year and 1.3 per million in the second half, decreasing by 79.77%. The Six Sigma trial in our pathology laboratory provided the reduction of the error rates mainly in the pre-analytic and analytic phases.
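The following is a minimal sketch of the standard defects-per-million-opportunities (DPMO) and sigma-level calculation that underlies the Six Sigma target quoted above (3.4 errors per million). The counts and the conventional 1.5-sigma shift are illustrative assumptions, not the laboratory's data.

```python
# Minimal sketch: converting error counts to DPMO and an approximate Six Sigma
# level. Counts are hypothetical; the 1.5-sigma shift is the usual industry offset.
from scipy.stats import norm

def dpmo(errors, opportunities):
    """Defects per million opportunities."""
    return 1e6 * errors / opportunities

def sigma_level(errors, opportunities, shift=1.5):
    """Approximate process sigma level from an observed error rate."""
    rate = errors / opportunities
    return norm.ppf(1.0 - rate) + shift

if __name__ == "__main__":
    errors, opportunities = 56, 8_000_000   # hypothetical pre-analytic counts
    print(f"DPMO: {dpmo(errors, opportunities):.1f}")
    print(f"Sigma level: {sigma_level(errors, opportunities):.2f}")
```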
Data Analysis & Statistical Methods for Command File Errors
NASA Technical Reports Server (NTRS)
Meshkat, Leila; Waggoner, Bruce; Bryant, Larry
2014-01-01
This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors, the number of files radiated, including the number commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We have assessed these data using different curve fitting and distribution fitting techniques, such as multiple regression analysis, and maximum likelihood estimation to see how much of the variability in the error rates can be explained with these. We have also used goodness of fit testing strategies and principal component analysis to further assess our data. Finally, we constructed a model of expected error rates based on the what these statistics bore out as critical drivers to the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as anticipate future error rates.
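As a rough illustration of one of the techniques named above (multiple regression of error counts on candidate drivers), the sketch below fits an ordinary least-squares model to synthetic data. The variable names and all values are placeholders, not JPL mission data.

```python
# Minimal sketch: multiple regression of command file error counts on the
# number of files radiated, workload and novelty. All data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 48                                     # hypothetical number of months
files = rng.integers(20, 200, size=n)      # files radiated per month
workload = rng.uniform(1, 5, size=n)       # subjective workload score
novelty = rng.uniform(0, 1, size=n)        # operational novelty score
errors = 0.02 * files + 1.5 * workload + 3.0 * novelty + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), files, workload, novelty])
coef, _, _, _ = np.linalg.lstsq(X, errors, rcond=None)
predicted = X @ coef
r2 = 1 - np.sum((errors - predicted) ** 2) / np.sum((errors - errors.mean()) ** 2)
print("coefficients (intercept, files, workload, novelty):", np.round(coef, 3))
print("R^2:", round(r2, 3))
```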
NASA Astrophysics Data System (ADS)
Gourdji, S. M.; Yadav, V.; Karion, A.; Mueller, K. L.; Conley, S.; Ryerson, T.; Nehrkorn, T.; Kort, E. A.
2018-04-01
Urban greenhouse gas (GHG) flux estimation with atmospheric measurements and modeling, i.e. the ‘top-down’ approach, can potentially support GHG emission reduction policies by assessing trends in surface fluxes and detecting anomalies from bottom-up inventories. Aircraft-collected GHG observations also have the potential to help quantify point-source emissions that may not be adequately sampled by fixed surface tower-based atmospheric observing systems. Here, we estimate CH4 emissions from a known point source, the Aliso Canyon natural gas leak in Los Angeles, CA from October 2015–February 2016, using atmospheric inverse models with airborne CH4 observations from twelve flights ≈4 km downwind of the leak and surface sensitivities from a mesoscale atmospheric transport model. This leak event has been well-quantified previously using various methods by the California Air Resources Board, thereby providing high confidence in the mass-balance leak rate estimates of (Conley et al 2016), used here for comparison to inversion results. Inversions with an optimal setup are shown to provide estimates of the leak magnitude, on average, within a third of the mass balance values, with remaining errors in estimated leak rates predominantly explained by modeled wind speed errors of up to 10 m s‑1, quantified by comparing airborne meteorological observations with modeled values along the flight track. An inversion setup using scaled observational wind speed errors in the model-data mismatch covariance matrix is shown to significantly reduce the influence of transport model errors on spatial patterns and estimated leak rates from the inversions. In sum, this study takes advantage of a natural tracer release experiment (i.e. the Aliso Canyon natural gas leak) to identify effective approaches for reducing the influence of transport model error on atmospheric inversions of point-source emissions, while suggesting future potential for integrating surface tower and aircraft atmospheric GHG observations in top-down urban emission monitoring systems.
Error detection and reduction in blood banking.
Motschman, T L; Moore, S B
1996-12-01
Error management plays a major role in facility process improvement efforts. By detecting and reducing errors, quality and, therefore, patient care improve. It begins with a strong organizational foundation of management attitude with clear, consistent employee direction and appropriate physical facilities. Clearly defined critical processes, critical activities, and SOPs act as the framework for operations as well as active quality monitoring. To ensure that personnel can detect and report errors, they must be trained in both operational duties and error management practices. Use of simulated/intentional errors and incorporation of error detection into competency assessment keeps employees practiced and confident, and diminishes fear of the unknown. Personnel can clearly see that errors are indeed used as opportunities for process improvement and not for punishment. The facility must have a clearly defined and consistently used definition for reportable errors. Reportable errors should include those errors with potentially harmful outcomes as well as those errors that are "upstream," and thus further away from the outcome. A well-written error report consists of who, what, when, where, why/how, and follow-up to the error. Before correction can occur, an investigation to determine the underlying cause of the error should be undertaken. Obviously, the best corrective action is prevention. Correction can occur at five different levels; however, only three of these levels are directed at prevention. Prevention requires a method to collect and analyze data concerning errors. In the authors' facility a functional error classification method and a quality system-based classification have been useful. An active method to search for problems uncovers them further upstream, before they can have disastrous outcomes. In the continual quest for improving processes, an error management program is itself a process that needs improvement, and we must strive to always close the circle of quality assurance. Ultimately, the goal of better patient care will be the reward.
Vuk, Tomislav; Barišić, Marijan; Očić, Tihomir; Mihaljević, Ivanka; Šarlija, Dorotea; Jukić, Irena
2012-01-01
Background. Continuous and efficient error management, including procedures from error detection to their resolution and prevention, is an important part of quality management in blood establishments. At the Croatian Institute of Transfusion Medicine (CITM), error management has been systematically performed since 2003. Materials and methods. Data derived from error management at the CITM during an 8-year period (2003–2010) formed the basis of this study. Throughout the study period, errors were reported to the Department of Quality Assurance. In addition to surveys and the necessary corrective activities, errors were analysed and classified according to the Medical Event Reporting System for Transfusion Medicine (MERS-TM). Results. During the study period, a total of 2,068 errors were recorded, including 1,778 (86.0%) in blood bank activities and 290 (14.0%) in blood transfusion services. As many as 1,744 (84.3%) errors were detected before issue of the product or service. Among the 324 errors identified upon release from the CITM, 163 (50.3%) errors were detected by customers and reported as complaints. In only five cases was an error detected after blood product transfusion however without any harmful consequences for the patients. All errors were, therefore, evaluated as “near miss” and “no harm” events. Fifty-two (2.5%) errors were evaluated as high-risk events. With regards to blood bank activities, the highest proportion of errors occurred in the processes of labelling (27.1%) and blood collection (23.7%). With regards to blood transfusion services, errors related to blood product issuing prevailed (24.5%). Conclusion. This study shows that comprehensive management of errors, including near miss errors, can generate data on the functioning of transfusion services, which is a precondition for implementation of efficient corrective and preventive actions that will ensure further improvement of the quality and safety of transfusion treatment. PMID:22395352
Transient Faults in Computer Systems
NASA Technical Reports Server (NTRS)
Masson, Gerald M.
1993-01-01
A powerful technique particularly appropriate for the detection of errors caused by transient faults in computer systems was developed. The technique can be implemented in either software or hardware; the research conducted thus far primarily considered software implementations. The error detection technique developed has the distinct advantage of having provably complete coverage of all errors caused by transient faults that affect the output produced by the execution of a program. In other words, the technique does not have to be tuned to a particular error model to enhance error coverage. Also, the correctness of the technique can be formally verified. The technique uses time and software redundancy. The foundation for an effective, low-overhead, software-based certification trail approach to real-time error detection resulting from transient fault phenomena was developed.
E/N effects on K0 values revealed by high precision measurements under low field conditions
NASA Astrophysics Data System (ADS)
Hauck, Brian C.; Siems, William F.; Harden, Charles S.; McHugh, Vincent M.; Hill, Herbert H.
2016-07-01
Ion mobility spectrometry (IMS) is used to detect chemical warfare agents, explosives, and narcotics. While IMS has a low rate of false positives, their occurrence causes the loss of time and money as the alarm is verified. Because numerous variables affect the reduced mobility (K0) of an ion, wide detection windows are required in order to ensure a low false negative response rate. Wide detection windows, however, reduce response selectivity, and interferents with similar K0 values may be mistaken for targeted compounds and trigger a false positive alarm. Detection windows could be narrowed if reference K0 values were accurately known for specific instrumental conditions. Unfortunately, there is a lack of confidence in the literature values due to discrepancies in the reported K0 values and their lack of reported error. This creates the need for the accurate control and measurement of each variable affecting ion mobility, as well as for a central accurate IMS database for reference and calibration. A new ion mobility spectrometer has been built that reduces the error of measurements affecting K0 by an order of magnitude less than ±0.2%. Precise measurements of ±0.002 cm2 V-1 s-1 or better have been produced and, as a result, an unexpected relationship between K0 and the electric field to number density ratio (E/N) has been discovered in which the K0 values of ions decreased as a function of E/N along a second degree polynomial trend line towards an apparent asymptote at approximately 4 Td.
Principal component analysis for the early detection of mastitis and lameness in dairy cows.
Miekley, Bettina; Traulsen, Imke; Krieter, Joachim
2013-08-01
This investigation analysed the applicability of principal component analysis (PCA), a latent variable method, for the early detection of mastitis and lameness. Data used were recorded on the Karkendamm dairy research farm between August 2008 and December 2010. For mastitis and lameness detection, data of 338 and 315 cows in their first 200 d in milk were analysed, respectively. Mastitis as well as lameness were specified according to veterinary treatments. Diseases were defined as disease blocks. The different definitions used (two for mastitis, three for lameness) varied solely in the sequence length of the blocks. Only the days before the treatment were included in the blocks. Milk electrical conductivity, milk yield and feeding patterns (feed intake, number of feeding visits and time at the trough) were used for recognition of mastitis. Pedometer activity and feeding patterns were utilised for lameness detection. To develop and verify the PCA model, the mastitis and the lameness datasets were divided into training and test datasets. PCA extracted uncorrelated principle components (PC) by linear transformations of the raw data so that the first few PCs captured most of the variations in the original dataset. For process monitoring and disease detection, these resulting PCs were applied to the Hotelling's T 2 chart and to the residual control chart. The results show that block sensitivity of mastitis detection ranged from 77·4 to 83·3%, whilst specificity was around 76·7%. The error rates were around 98·9%. For lameness detection, the block sensitivity ranged from 73·8 to 87·8% while the obtained specificities were between 54·8 and 61·9%. The error rates varied from 87·8 to 89·2%. In conclusion, PCA seems to be not yet transferable into practical usage. Results could probably be improved if different traits and more informative sensor data are included in the analysis.
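The sketch below shows the general PCA-with-control-charts idea described above (Hotelling's T2 plus a residual chart), applied to synthetic data. The traits, sample sizes and 99th-percentile control limits are assumptions for illustration, not the study's configuration.

```python
# A minimal sketch of PCA-based monitoring with Hotelling's T^2 and a residual
# (squared prediction error, SPE) chart. Data are synthetic stand-ins for the
# sensor traits; 99th-percentile control limits are a simple assumption.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
X_train = rng.normal(size=(300, 5))      # healthy-period observations
X_test = rng.normal(size=(50, 5))
X_test[:5] += 4.0                        # a few anomalous days

pca = PCA(n_components=2).fit(X_train)

def t2_and_spe(model, X):
    scores = model.transform(X)
    t2 = np.sum(scores ** 2 / model.explained_variance_, axis=1)   # Hotelling's T^2
    spe = np.sum((X - model.inverse_transform(scores)) ** 2, axis=1)
    return t2, spe

t2_train, spe_train = t2_and_spe(pca, X_train)
t2_lim, spe_lim = np.percentile(t2_train, 99), np.percentile(spe_train, 99)
t2, spe = t2_and_spe(pca, X_test)
print("flagged samples:", np.flatnonzero((t2 > t2_lim) | (spe > spe_lim)))
```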
InSAR Unwrapping Error Correction Based on Quasi-Accurate Detection of Gross Errors (QUAD)
NASA Astrophysics Data System (ADS)
Kang, Y.; Zhao, C. Y.; Zhang, Q.; Yang, C. S.
2018-04-01
Unwrapping error is a common error in InSAR processing that seriously degrades the accuracy of monitoring results. Based on the quasi-accurate detection (QUAD) method for gross errors, a method for the automatic correction of unwrapping errors is established in this paper. The method identifies and corrects unwrapping errors by establishing a functional model between the true errors and the interferograms. The basic principle and processing steps are presented, and the method is then compared with the L1-norm method on simulated data. Results show that both methods can effectively suppress unwrapping errors when the proportion of unwrapping errors is low, and that the two methods can complement each other when the proportion of unwrapping errors is relatively high. Finally, real SAR data are used to test the phase unwrapping error correction. Results show that the new method can correct phase unwrapping errors successfully in practical applications.
Kang, Sinkyu; Hong, Suk Young
2016-01-01
A minimum composite method was applied to produce a 15-day interval normalized difference vegetation index (NDVI) dataset from Moderate Resolution Imaging Spectroradiometer (MODIS) daily 250 m reflectance in the red and near-infrared bands. This dataset was applied to determine lake surface areas in Mongolia. A total of 73 lakes greater than 6.25 km2 in area were selected, and 28 of these lakes were used to evaluate detection errors. The minimum composite NDVI showed a better detection performance on lake water pixels than did the official MODIS 16-day 250 m NDVI based on a maximum composite method. The overall lake area detection performance based on the 15-day minimum composite NDVI showed -2.5% error relative to the Landsat-derived lake area for the 28 evaluated lakes. The errors increased with increases in the perimeter-to-area ratio but decreased with lake size over 10 km2. The lake area decreased by -9.3% at an annual rate of -53.7 km2 yr-1 during 2000 to 2011 for the 73 lakes. However, considerable spatial variations, such as slight-to-moderate lake area reductions in semi-arid regions and rapid lake area reductions in arid regions, were also detected. This study demonstrated the applicability of MODIS 250 m reflectance data for biweekly monitoring of lake area change and diagnosed considerable lake area reduction and its spatial variability in arid and semi-arid regions of Mongolia. Future studies are required to explain the reasons for lake area changes and their spatial variability. PMID:27007233
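A brief sketch of the minimum-composite idea is given below: compute daily NDVI from red and near-infrared reflectance, then take the minimum over a 15-day window so that low-NDVI water pixels are retained. The arrays are synthetic and the water threshold of 0.0 is an assumption, not the paper's calibrated value.

```python
# Minimal sketch: 15-day minimum-composite NDVI and a simple water mask.
import numpy as np

rng = np.random.default_rng(2)
days, height, width = 15, 4, 4
red = rng.uniform(0.02, 0.3, size=(days, height, width))
nir = rng.uniform(0.02, 0.5, size=(days, height, width))

ndvi_daily = (nir - red) / (nir + red)
ndvi_min_composite = ndvi_daily.min(axis=0)    # 15-day minimum composite
water_mask = ndvi_min_composite < 0.0          # hypothetical water threshold
print("water pixels detected:", int(water_mask.sum()))
```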
NASA Astrophysics Data System (ADS)
Chung, Hye Won; Guha, Saikat; Zheng, Lizhong
2017-07-01
We study the problem of designing optical receivers to discriminate between multiple coherent states using coherent processing receivers—i.e., one that uses arbitrary coherent feedback control and quantum-noise-limited direct detection—which was shown by Dolinar to achieve the minimum error probability in discriminating any two coherent states. We first derive and reinterpret Dolinar's binary-hypothesis minimum-probability-of-error receiver as the one that optimizes the information efficiency at each time instant, based on recursive Bayesian updates within the receiver. Using this viewpoint, we propose a natural generalization of Dolinar's receiver design to discriminate M coherent states, each of which could now be a codeword, i.e., a sequence of N coherent states, each drawn from a modulation alphabet. We analyze the channel capacity of the pure-loss optical channel with a general coherent-processing receiver in the low-photon number regime and compare it with the capacity achievable with direct detection and the Holevo limit (achieving the latter would require a quantum joint-detection receiver). We show compelling evidence that despite the optimal performance of Dolinar's receiver for the binary coherent-state hypothesis test (either in error probability or mutual information), the asymptotic communication rate achievable by such a coherent-processing receiver is only as good as direct detection. This suggests that in the infinitely long codeword limit, all potential benefits of coherent processing at the receiver can be obtained by designing a good code and direct detection, with no feedback within the receiver.
Kang, Sinkyu; Hong, Suk Young
2016-01-01
A minimum composite method was applied to produce a 15-day interval normalized difference vegetation index (NDVI) dataset from Moderate Resolution Imaging Spectroradiometer (MODIS) daily 250 m reflectance in the red and near-infrared bands. This dataset was applied to determine lake surface areas in Mongolia. A total of 73 lakes greater than 6.25 km2 in area were selected, and 28 of these lakes were used to evaluate detection errors. The minimum composite NDVI showed a better detection performance on lake water pixels than did the official MODIS 16-day 250 m NDVI based on a maximum composite method. The overall lake area detection performance based on the 15-day minimum composite NDVI showed -2.5% error relative to the Landsat-derived lake area for the 28 evaluated lakes. The errors increased with increases in the perimeter-to-area ratio but decreased with lake size over 10 km2. The lake area decreased by -9.3% at an annual rate of -53.7 km2 yr-1 during 2000 to 2011 for the 73 lakes. However, considerable spatial variations, such as slight-to-moderate lake area reductions in semi-arid regions and rapid lake area reductions in arid regions, were also detected. This study demonstrated the applicability of MODIS 250 m reflectance data for biweekly monitoring of lake area change and diagnosed considerable lake area reduction and its spatial variability in arid and semi-arid regions of Mongolia. Future studies are required to explain the reasons for lake area changes and their spatial variability.
Real-time heart rate measurement for multi-people using compressive tracking
NASA Astrophysics Data System (ADS)
Liu, Lingling; Zhao, Yuejin; Liu, Ming; Kong, Lingqin; Dong, Liquan; Ma, Feilong; Pang, Zongguang; Cai, Zhi; Zhang, Yachu; Hua, Peng; Yuan, Ruifeng
2017-09-01
The rise of aging population has created a demand for inexpensive, unobtrusive, automated health care solutions. Image PhotoPlethysmoGraphy(IPPG) aids in the development of these solutions by allowing for the extraction of physiological signals from video data. However, the main deficiencies of the recent IPPG methods are non-automated, non-real-time and susceptible to motion artifacts(MA). In this paper, a real-time heart rate(HR) detection method for multiple subjects simultaneously was proposed and realized using the open computer vision(openCV) library, which consists of getting multiple subjects' facial video automatically through a Webcam, detecting the region of interest (ROI) in the video, reducing the false detection rate by our improved Adaboost algorithm, reducing the MA by our improved compress tracking(CT) algorithm, wavelet noise-suppression algorithm for denoising and multi-threads for higher detection speed. For comparison, HR was measured simultaneously using a medical pulse oximetry device for every subject during all sessions. Experimental results on a data set of 30 subjects show that the max average absolute error of heart rate estimation is less than 8 beats per minute (BPM), and the processing speed of every frame has almost reached real-time: the experiments with video recordings of ten subjects under the condition of the pixel resolution of 600× 800 pixels show that the average HR detection time of 10 subjects was about 17 frames per second (fps).
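The core IPPG step can be sketched as below: estimate heart rate from a mean facial-ROI intensity trace by locating the dominant spectral peak in the usual heart-rate band (0.7-3 Hz). The trace is synthetic, and the real pipeline described above wraps face detection, the improved Adaboost and compressive tracking stages, and wavelet denoising around this step.

```python
# Minimal sketch: spectral-peak heart rate estimate from a synthetic ROI trace.
import numpy as np

rng = np.random.default_rng(6)
fs = 30.0                                   # camera frame rate (Hz), assumed
t = np.arange(0, 20, 1 / fs)                # 20 s of samples
true_hr_hz = 72 / 60.0
signal = 0.02 * np.sin(2 * np.pi * true_hr_hz * t) + rng.normal(0, 0.01, t.size)

spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
band = (freqs >= 0.7) & (freqs <= 3.0)      # plausible heart-rate band
hr_bpm = 60.0 * freqs[band][np.argmax(spectrum[band])]
print(f"estimated heart rate: {hr_bpm:.1f} BPM")
```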
Single event upset in avionics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taber, A.; Normand, E.
1993-04-01
Data from military/experimental flights and laboratory testing indicate that typical non radiation-hardened 64K and 256K static random access memories (SRAMs) can experience a significant soft upset rate at aircraft altitudes due to energetic neutrons created by cosmic ray interactions in the atmosphere. It is suggested that error detection and correction (EDAC) circuitry be considered for all avionics designs containing large amounts of semi-conductor memory.
TR-BREATH: Time-Reversal Breathing Rate Estimation and Detection.
Chen, Chen; Han, Yi; Chen, Yan; Lai, Hung-Quoc; Zhang, Feng; Wang, Beibei; Liu, K J Ray
2018-03-01
In this paper, we introduce TR-BREATH, a time-reversal (TR)-based contact-free breathing monitoring system. It is capable of breathing detection and multiperson breathing rate estimation within a short period of time using off-the-shelf WiFi devices. The proposed system exploits the channel state information (CSI) to capture the miniature variations in the environment caused by breathing. To magnify the CSI variations, TR-BREATH projects CSIs into the TR resonating strength (TRRS) feature space and analyzes the TRRS by the Root-MUSIC and affinity propagation algorithms. Extensive experiment results indoor demonstrate a perfect detection rate of breathing. With only 10 s of measurement, a mean accuracy of can be obtained for single-person breathing rate estimation under the non-line-of-sight (NLOS) scenario. Furthermore, it achieves a mean accuracy of in breathing rate estimation for a dozen people under the line-of-sight scenario and a mean accuracy of in breathing rate estimation of nine people under the NLOS scenario, both with 63 s of measurement. Moreover, TR-BREATH can estimate the number of people with an error around 1. We also demonstrate that TR-BREATH is robust against packet loss and motions. With the prevailing of WiFi, TR-BREATH can be applied for in-home and real-time breathing monitoring.
Is there any electrophysiological evidence for subliminal error processing?
Shalgi, Shani; Deouell, Leon Y
2013-08-29
The role of error awareness in executive control and modification of behavior is not fully understood. In line with many recent studies showing that conscious awareness is unnecessary for numerous high-level processes such as strategic adjustments and decision making, it was suggested that error detection can also take place unconsciously. The Error Negativity (Ne) component, long established as a robust error-related component that differentiates between correct responses and errors, was a fine candidate to test this notion: if an Ne is elicited also by errors which are not consciously detected, it would imply a subliminal process involved in error monitoring that does not necessarily lead to conscious awareness of the error. Indeed, for the past decade, the repeated finding of a similar Ne for errors which became aware and errors that did not achieve awareness, compared to the smaller negativity elicited by correct responses (Correct Response Negativity; CRN), has lent the Ne the prestigious status of an index of subliminal error processing. However, there were several notable exceptions to these findings. The study in the focus of this review (Shalgi and Deouell, 2012) sheds new light on both types of previous results. We found that error detection as reflected by the Ne is correlated with subjective awareness: when awareness (or more importantly lack thereof) is more strictly determined using the wagering paradigm, no Ne is elicited without awareness. This result effectively resolves the issue of why there are many conflicting findings regarding the Ne and error awareness. The average Ne amplitude appears to be influenced by individual criteria for error reporting and therefore, studies containing different mixtures of participants who are more confident of their own performance or less confident, or paradigms that either encourage or don't encourage reporting low confidence errors will show different results. Based on this evidence, it is no longer possible to unquestioningly uphold the notion that the amplitude of the Ne is unrelated to subjective awareness, and therefore, that errors are detected without conscious awareness.
Activity Tracking for Pilot Error Detection from Flight Data
NASA Technical Reports Server (NTRS)
Callantine, Todd J.; Ashford, Rose (Technical Monitor)
2002-01-01
This report presents an application of activity tracking for pilot error detection from flight data, and describes issues surrounding such an application. It first describes the Crew Activity Tracking System (CATS), in-flight data collected from the NASA Langley Boeing 757 Airborne Research Integrated Experiment System aircraft, and a model of B757 flight crew activities. It then presents an example of CATS detecting actual in-flight crew errors.
Mahfuz, Mohammad U; Makrakis, Dimitrios; Mouftah, Hussein T
2014-09-01
In this paper, a comprehensive analysis of the sampling-based optimum signal detection in ideal (i.e., free) diffusion-based concentration-encoded molecular communication (CEMC) system has been presented. A generalized amplitude-shift keying (ASK)-based CEMC system has been considered in diffusion-based noise and intersymbol interference (ISI) conditions. Information is encoded by modulating the amplitude of the transmission rate of information molecules at the TN. The critical issues involved in the sampling-based receiver thus developed are addressed in detail, and its performance in terms of the number of samples per symbol, communication range, and transmission data rate is evaluated. ISI produced by the residual molecules deteriorates the performance of the CEMC system significantly, which further deteriorates when the communication range and/or the transmission data rate increase(s). In addition, the performance of the optimum receiver depends on the receiver's ability to compute the ISI accurately, thus providing a trade-off between receiver complexity and achievable bit error rate (BER). Exact and approximate detection performances have been derived. Finally, it is found that the sampling-based signal detection scheme thus developed can be applied to both binary and multilevel (M-ary) ASK-based CEMC systems, although M-ary systems suffer more from higher BER.
A cascaded coding scheme for error control and its performance analysis
NASA Technical Reports Server (NTRS)
Lin, S.
1986-01-01
A coding scheme for error control in data communication systems is investigated. The scheme is obtained by cascading two error correcting codes, called the inner and the outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit error rate epsilon < 1/2. It is shown that, if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit error rate. Various specific example schemes with inner codes ranging from high rates to very low rates and Reed-Solomon outer codes are considered, and their error probabilities are evaluated. They all provide extremely high reliability even for very high bit error rates, say 0.1 to 0.01. Several example schemes are being considered by NASA for satellite and spacecraft downlink error control.
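A minimal sketch of the kind of calculation behind such analyses follows: the block error probability of a t-error-correcting code of length n on a binary symmetric channel with bit error rate epsilon. The (255, t = 16) parameters are an illustrative assumption, not the specific schemes evaluated in the paper.

```python
# Minimal sketch: probability that more than t errors occur in a block of n
# symbols on a binary symmetric channel, i.e. the decoder fails.
from scipy.stats import binom

def block_error_prob(n, t, eps):
    """P(more than t errors in a block of n) for channel error rate eps."""
    return binom.sf(t, n, eps)   # survival function: P(X > t)

for eps in (0.1, 0.01):
    print(f"eps={eps}: P(block error) = {block_error_prob(255, 16, eps):.3e}")
```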
Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty
NASA Astrophysics Data System (ADS)
Ballantyne, A. P.; Andres, R.; Houghton, R.; Stocker, B. D.; Wanninkhof, R.; Anderegg, W.; Cooper, L. A.; DeGrandpre, M.; Tans, P. P.; Miller, J. C.; Alden, C.; White, J. W. C.
2014-10-01
Over the last 5 decades monitoring systems have been developed to detect changes in the accumulation of C in the atmosphere, ocean, and land; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate error and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2 σ error of the atmospheric growth rate has decreased from 1.2 Pg C yr-1 in the 1960s to 0.3 Pg C yr-1 in the 2000s, leading to a ~20% reduction in the over-all uncertainty of net global C uptake by the biosphere. While fossil fuel emissions have increased by a factor of 4 over the last 5 decades, 2 σ errors in fossil fuel emissions due to national reporting errors and differences in energy reporting practices have increased from 0.3 Pg C yr-1 in the 1960s to almost 1.0 Pg C yr-1 during the 2000s. At the same time land use emissions have declined slightly over the last 5 decades, but their relative errors remain high. Notably, errors associated with fossil fuel emissions have come to dominate uncertainty in the global C budget and are now comparable to the total emissions from land use, thus efforts to reduce errors in fossil fuel emissions are necessary. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that C uptake has increased and 97% confident that C uptake by the terrestrial biosphere has increased over the last 5 decades. Although the persistence of future C sinks remains unknown and some ecosystem services may be compromised by this continued C uptake (e.g. ocean acidification), it is clear that arguably the greatest ecosystem service currently provided by the biosphere is the continued removal of approximately half of atmospheric CO2 emissions from the atmosphere.
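The sketch below shows only the simplest part of such an error audit: combining independent 2-sigma term errors in quadrature to obtain the uncertainty of the residual net C uptake (uptake = fossil fuel + land use - atmospheric growth). The fossil fuel and growth-rate magnitudes are taken loosely from the abstract, the land use value is hypothetical, and the paper's fuller framework for temporally correlated errors is not reproduced here.

```python
# Minimal sketch: quadrature combination of independent 2-sigma errors
# (Pg C yr^-1) to estimate the uncertainty of residual net C uptake.
import numpy as np

err_fossil_fuel = 1.0     # 2-sigma, 2000s, per abstract
err_atm_growth = 0.3      # 2-sigma, 2000s, per abstract
err_land_use = 0.8        # 2-sigma, hypothetical

err_uptake = np.sqrt(err_fossil_fuel**2 + err_atm_growth**2 + err_land_use**2)
print(f"2-sigma uncertainty of net uptake: {err_uptake:.2f} Pg C yr^-1")
```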
RF signal detection by a tunable optoelectronic oscillator based on a PS-FBG.
Shao, Yuchen; Han, Xiuyou; Li, Ming; Zhao, Mingshan
2018-03-15
Low-power radio frequency (RF) signal detection is highly desirable for many applications, ranging from wireless communication to radar systems. A tunable optoelectronic oscillator (OEO) based on a phase-shifted fiber Bragg grating for detecting low-power RF signals is proposed and experimentally demonstrated. When the frequency of the input RF signal is matched with the potential oscillation mode of the OEO, it is detected and amplified. The frequency of the RF signal under detection can be estimated simultaneously by scanning the wavelength of the laser source. The RF signals from 1.5 to 5 GHz as low as -91 dBm are detected with a gain of about 10 dB, and the frequency is estimated with an error of ±100 MHz. The performance of the OEO system for detecting an RF signal with different modulation rates is also investigated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
J Zwan, B; Central Coast Cancer Centre, Gosford, NSW; Colvill, E
2016-06-15
Purpose: The added complexity of the real-time adaptive multi-leaf collimator (MLC) tracking increases the likelihood of undetected MLC delivery errors. In this work we develop and test a system for real-time delivery verification and error detection for MLC tracking radiotherapy using an electronic portal imaging device (EPID). Methods: The delivery verification system relies on acquisition and real-time analysis of transit EPID image frames acquired at 8.41 fps. In-house software was developed to extract the MLC positions from each image frame. Three comparison metrics were used to verify the MLC positions in real time: (1) field size, (2) field location and (3) field shape. The delivery verification system was tested for 8 VMAT MLC tracking deliveries (4 prostate and 4 lung) where real patient target motion was reproduced using a Hexamotion motion stage and a Calypso system. Sensitivity and detection delay were quantified for various types of MLC and system errors. Results: For both the prostate and lung test deliveries the MLC-defined field size was measured with an accuracy of 1.25 cm2 (1 SD). The field location was measured with an accuracy of 0.6 mm and 0.8 mm (1 SD) for lung and prostate respectively. Field location errors (i.e. tracking in wrong direction) with a magnitude of 3 mm were detected within 0.4 s of occurrence in the X direction and 0.8 s in the Y direction. Systematic MLC gap errors were detected as small as 3 mm. The method was not found to be sensitive to random MLC errors and individual MLC calibration errors up to 5 mm. Conclusion: EPID imaging may be used for independent real-time verification of MLC trajectories during MLC tracking deliveries. Thresholds have been determined for error detection and the system has been shown to be sensitive to a range of delivery errors.
A Simple Exact Error Rate Analysis for DS-CDMA with Arbitrary Pulse Shape in Flat Nakagami Fading
NASA Astrophysics Data System (ADS)
Rahman, Mohammad Azizur; Sasaki, Shigenobu; Kikuchi, Hisakazu; Harada, Hiroshi; Kato, Shuzo
A simple exact error rate analysis is presented for random binary direct sequence code division multiple access (DS-CDMA) considering a general pulse shape and flat Nakagami fading channel. First, a simple model is developed for the multiple access interference (MAI). Based on this, a simple exact expression of the characteristic function (CF) of MAI is developed in a straightforward manner. Finally, an exact expression of error rate is obtained following the CF method of error rate analysis. The exact error rate so obtained can be much more easily evaluated than the only reliable approximate error rate expression currently available, which is based on the Improved Gaussian Approximation (IGA).
Survey of Anomaly Detection Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ng, B
This survey defines the problem of anomaly detection and provides an overview of existing methods. The methods are categorized into two general classes: generative and discriminative. A generative approach involves building a model that represents the joint distribution of the input features and the output labels of system behavior (e.g., normal or anomalous) and then applying the model to formulate a decision rule for detecting anomalies. On the other hand, a discriminative approach aims directly to find the decision rule, with the smallest error rate, that distinguishes between normal and anomalous behavior. For each approach, we give an overview of popular techniques and provide references to state-of-the-art applications.
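As a concrete illustration of the generative class described above, the sketch below fits a multivariate Gaussian to "normal" behavior and flags observations whose log-likelihood falls below a threshold. The data, dimensionality and percentile cutoff are all assumptions.

```python
# Minimal sketch: Gaussian generative anomaly detector with a likelihood cutoff.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(3)
normal_data = rng.normal(size=(500, 3))                     # normal behaviour
test_data = np.vstack([rng.normal(size=(20, 3)),
                       rng.normal(loc=5.0, size=(5, 3))])   # last 5 are anomalies

model = multivariate_normal(mean=normal_data.mean(axis=0),
                            cov=np.cov(normal_data, rowvar=False))
threshold = np.percentile(model.logpdf(normal_data), 1)     # 1st-percentile cutoff
anomalies = model.logpdf(test_data) < threshold
print("anomalous indices:", np.flatnonzero(anomalies))
```

A discriminative counterpart would instead train a classifier (e.g., logistic regression or an SVM) directly on labeled normal/anomalous examples to learn the decision rule.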
Detecting and Characterizing Semantic Inconsistencies in Ported Code
NASA Technical Reports Server (NTRS)
Ray, Baishakhi; Kim, Miryung; Person, Suzette J.; Rungta, Neha
2013-01-01
Adding similar features and bug fixes often requires porting program patches from reference implementations and adapting them to target implementations. Porting errors may result from faulty adaptations or inconsistent updates. This paper investigates (1) the types of porting errors found in practice, and (2) how to detect and characterize potential porting errors. Analyzing version histories, we define five categories of porting errors, including incorrect control- and data-flow, code redundancy, inconsistent identifier renamings, etc. Leveraging this categorization, we design a static control- and data-dependence analysis technique, SPA, to detect and characterize porting inconsistencies. Our evaluation on code from four open-source projects shows that SPA can detect porting inconsistencies with 65% to 73% precision and 90% recall, and identify inconsistency types with 58% to 63% precision and 92% to 100% recall. In a comparison with two existing error detection tools, SPA improves precision by 14 to 17 percentage points.
Detection of dust impacts by the Voyager planetary radio astronomy experiment
NASA Technical Reports Server (NTRS)
Evans, David R.
1993-01-01
The Planetary Radio Astronomy (PRA) instrument detected large numbers of dust particles during the Voyager 2 encounter with Neptune. The signatures of these impacts are analyzed in some detail. The major conclusions are described. PRA detects impacts from all over the spacecraft body, not just the PRA antennas. The signatures of individual impacts last substantially longer than was expected from complementary Plasma Wave Subsystem (PWS) data acquired by another Voyager experiment. The signatures of individual impacts demonstrate very rapid fluctuations in signal strength, so fast that the data are limited by the speed of response of the instrument. The PRA detects events at a rate consistently lower than does the Plasma Wave subsystem. Even so, the impact rate is so great near the inbound crossing of the ring plane that no reliable estimate of impact rate can be made for this period. The data are consistent with the presence of electrons accelerated by ions within an expanding plasma cloud from the point of impact. An ancillary conclusion is that the anomalous appearance of data acquired at 900 kHz appears to be due to an error in processing the PRA data prior to their delivery rather than due to overload of the PRA instrument.
Feng, Jianyuan; Turksoy, Kamuran; Samadi, Sediqeh; Hajizadeh, Iman; Littlejohn, Elizabeth; Cinar, Ali
2017-12-01
Supervision and control systems rely on signals from sensors to receive information to monitor the operation of a system and adjust manipulated variables to achieve the control objective. However, sensor performance is often limited by working conditions, and sensors may also be subjected to interference from other devices. Many different types of sensor errors such as outliers, missing values, drifts and corruption with noise may occur during process operation. A hybrid online sensor error detection and functional redundancy system is developed to detect errors in online signals, and replace erroneous or missing values detected with model-based estimates. The proposed hybrid system relies on two techniques, an outlier-robust Kalman filter (ORKF) and a locally-weighted partial least squares (LW-PLS) regression model, which leverage the advantages of automatic measurement error elimination with ORKF and data-driven prediction with LW-PLS. The system includes a nominal angle analysis (NAA) method to distinguish between signal faults and large changes in sensor values caused by real dynamic changes in process operation. The performance of the system is illustrated with clinical data from continuous glucose monitoring (CGM) sensors from people with type 1 diabetes. More than 50,000 CGM sensor errors were added to original CGM signals from 25 clinical experiments, and the performance of the error detection and functional redundancy algorithms was then analyzed. The results indicate that the proposed system can successfully detect most of the erroneous signals and substitute them with reasonable estimated values computed by the functional redundancy system.
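A much simpler relative of the idea above is sketched below (not the paper's ORKF/LW-PLS system): a scalar Kalman filter that flags measurements whose innovation exceeds a gate and replaces them with the model prediction, mimicking functional redundancy for a noisy sensor stream. All parameters and the synthetic glucose trace are assumptions.

```python
# Minimal sketch: scalar Kalman filter with innovation-based outlier rejection.
import numpy as np

def filter_with_outlier_rejection(measurements, q=0.1, r=4.0, gate=3.0):
    x, p = measurements[0], 1.0          # state estimate and its variance
    cleaned, flagged = [x], [False]
    for z in measurements[1:]:
        p = p + q                        # predict (random-walk model)
        innovation = z - x
        s = p + r                        # innovation variance
        if abs(innovation) > gate * np.sqrt(s):
            cleaned.append(x)            # treat as sensor error; keep prediction
            flagged.append(True)
            continue
        k = p / s                        # Kalman gain
        x = x + k * innovation
        p = (1 - k) * p
        cleaned.append(x)
        flagged.append(False)
    return np.array(cleaned), np.array(flagged)

rng = np.random.default_rng(5)
glucose = 120 + np.cumsum(rng.normal(0, 0.5, 60))   # synthetic CGM-like trace
glucose[25] += 60                                    # injected spike error
est, flags = filter_with_outlier_rejection(glucose)
print("flagged indices:", np.flatnonzero(flags))
```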
NASA Astrophysics Data System (ADS)
Xiao, Fei; Liu, Bo; Zhang, Lijia; Xin, Xiangjun; Zhang, Qi; Tian, Qinghua; Tian, Feng; Wang, Yongjun; Rao, Lan; Ullah, Rahat; Zhao, Feng; Li, Deng'ao
2018-02-01
A rate-adaptive multilevel coded modulation (RA-MLC) scheme based on fixed code length and a corresponding decoding scheme is proposed. The RA-MLC scheme combines multilevel coded modulation with binary linear block codes at the transmitter. Bit division, coding, optional interleaving, and modulation are carried out according to a preset rule, and the signal is then transmitted through a standard single-mode fiber span of 100 km. The receiver improves the accuracy of decoding by means of soft information passed through the different layers, which enhances the performance. Simulations are carried out in an intensity modulation-direct detection optical communication system using MATLAB®. Results show that the RA-MLC scheme can achieve a bit error rate of 1E-5 when the optical signal-to-noise ratio is 20.7 dB. It also reduced the number of decoders by 72% and realized 22 rate adaptations without significantly increasing the computing time. The coding gain is increased by 7.3 dB at BER=1E-3.
Effect of bar-code technology on the safety of medication administration.
Poon, Eric G; Keohane, Carol A; Yoon, Catherine S; Ditmore, Matthew; Bane, Anne; Levtzion-Korach, Osnat; Moniz, Thomas; Rothschild, Jeffrey M; Kachalia, Allen B; Hayes, Judy; Churchill, William W; Lipsitz, Stuart; Whittemore, Anthony D; Bates, David W; Gandhi, Tejal K
2010-05-06
Serious medication errors are common in hospitals and often occur during order transcription or administration of medication. To help prevent such errors, technology has been developed to verify medications by incorporating bar-code verification technology within an electronic medication-administration system (bar-code eMAR). We conducted a before-and-after, quasi-experimental study in an academic medical center that was implementing the bar-code eMAR. We assessed rates of errors in order transcription and medication administration on units before and after implementation of the bar-code eMAR. Errors that involved early or late administration of medications were classified as timing errors and all others as nontiming errors. Two clinicians reviewed the errors to determine their potential to harm patients and classified those that could be harmful as potential adverse drug events. We observed 14,041 medication administrations and reviewed 3082 order transcriptions. Observers noted 776 nontiming errors in medication administration on units that did not use the bar-code eMAR (an 11.5% error rate) versus 495 such errors on units that did use it (a 6.8% error rate)--a 41.4% relative reduction in errors (P<0.001). The rate of potential adverse drug events (other than those associated with timing errors) fell from 3.1% without the use of the bar-code eMAR to 1.6% with its use, representing a 50.8% relative reduction (P<0.001). The rate of timing errors in medication administration fell by 27.3% (P<0.001), but the rate of potential adverse drug events associated with timing errors did not change significantly. Transcription errors occurred at a rate of 6.1% on units that did not use the bar-code eMAR but were completely eliminated on units that did use it. Use of the bar-code eMAR substantially reduced the rate of errors in order transcription and in medication administration as well as potential adverse drug events, although it did not eliminate such errors. Our data show that the bar-code eMAR is an important intervention to improve medication safety. (ClinicalTrials.gov number, NCT00243373.) 2010 Massachusetts Medical Society
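The before-and-after comparison quoted above reduces to error rates, a relative reduction and a significance test; a minimal sketch of that arithmetic follows. The denominators below are hypothetical (back-calculated from the quoted percentages, not the study's exact observation counts), and the chi-square test is a generic stand-in for the study's statistical analysis.

```python
# Minimal sketch: error rates, relative reduction and a chi-square test
# for a before/after comparison of error counts.
from scipy.stats import chi2_contingency

errors_before, obs_before = 776, 6748     # hypothetical denominators
errors_after, obs_after = 495, 7279

rate_before = errors_before / obs_before
rate_after = errors_after / obs_after
relative_reduction = (rate_before - rate_after) / rate_before

table = [[errors_before, obs_before - errors_before],
         [errors_after, obs_after - errors_after]]
chi2, p_value, _, _ = chi2_contingency(table)

print(f"error rate before: {rate_before:.3f}, after: {rate_after:.3f}")
print(f"relative reduction: {relative_reduction:.1%}, p = {p_value:.2e}")
```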
Experimental results of 5-Gbps free-space coherent optical communications with adaptive optics
NASA Astrophysics Data System (ADS)
Chen, Mo; Liu, Chao; Rui, Daoman; Xian, Hao
2018-07-01
In a free-space optical communication system with fiber-optic components, the received signal beam must be coupled into a single-mode fiber (SMF) before being amplified and detected. Analysis of the impacts of tracking errors and wavefront distortion on SMF coupling shows that under relatively strong turbulence, compensating tracking errors alone is not enough, and the turbulence-induced wavefront aberration must also be corrected. Based on our previous study and design of an SMF coupling system with a 137-element continuous-surface deformable mirror AO unit, we perform an experiment on a 5-Gbps Free-space Coherent Optical Communication (FSCOC) system, in which the eye pattern and Bit-error Rate (BER) are displayed. The comparative results show that atmospheric turbulence is severely detrimental in FSCOC systems. The BER of coherent communication is under 10-6 with AO compensation, dropping significantly compared with the BER without AO correction.
Low target prevalence is a stubborn source of errors in visual search tasks
Wolfe, Jeremy M.; Horowitz, Todd S.; Van Wert, Michael J.; Kenner, Naomi M.; Place, Skyler S.; Kibbi, Nour
2009-01-01
In visual search tasks, observers look for targets in displays containing distractors. Likelihood that targets will be missed varies with target prevalence, the frequency with which targets are presented across trials. Miss error rates are much higher at low target prevalence (1–2%) than at high prevalence (50%). Unfortunately, low prevalence is characteristic of important search tasks like airport security and medical screening where miss errors are dangerous. A series of experiments show this prevalence effect is very robust. In signal detection terms, the prevalence effect can be explained as a criterion shift and not a change in sensitivity. Several efforts to induce observers to adopt a better criterion fail. However, a regime of brief retraining periods with high prevalence and full feedback allows observers to hold a good criterion during periods of low prevalence with no feedback. PMID:17999575
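The claim that the prevalence effect is a criterion shift rather than a sensitivity change rests on standard signal detection quantities; a minimal sketch of that computation is below. The hit and false-alarm rates are illustrative, not the paper's data.

```python
# Minimal sketch: d-prime and criterion c from hit and false-alarm rates.
from scipy.stats import norm

def dprime_and_criterion(hit_rate, fa_rate):
    z_h, z_f = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_h - z_f
    criterion = -0.5 * (z_h + z_f)
    return d_prime, criterion

# illustrative rates: low prevalence shifts the criterion far more than it
# changes sensitivity
for label, (h, f) in {"high prevalence": (0.93, 0.05),
                      "low prevalence": (0.64, 0.01)}.items():
    d, c = dprime_and_criterion(h, f)
    print(f"{label}: d'={d:.2f}, criterion c={c:.2f}")
```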
Development and implementation of a human accuracy program in patient foodservice.
Eden, S H; Wood, S M; Ptak, K M
1987-04-01
For many years, industry has utilized the concept of human error rates to monitor and minimize human errors in the production process. A consistent quality-controlled product increases consumer satisfaction and repeat purchase of product. Administrative dietitians have applied the concepts of using human error rates (the number of errors divided by the number of opportunities for error) at four hospitals, with a total bed capacity of 788, within a tertiary-care medical center. Human error rate was used to monitor and evaluate trayline employee performance and to evaluate layout and tasks of trayline stations, in addition to evaluating employees in patient service areas. Long-term employees initially opposed the error rate system with some hostility and resentment, while newer employees accepted the system. All employees now believe that the constant feedback given by supervisors enhances their self-esteem and productivity. Employee error rates are monitored daily and are used to counsel employees when necessary; they are also utilized during annual performance evaluation. Average daily error rates for a facility staffed by new employees decreased from 7% to an acceptable 3%. In a facility staffed by long-term employees, the error rate increased, reflecting improper error documentation. Patient satisfaction surveys reveal that satisfaction with tray accuracy increased from 88% to 92% in the facility staffed by long-term employees and has remained above the 90% standard in the facility staffed by new employees.
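The human error rate defined above (errors divided by opportunities for error) is simple enough to show directly; the daily counts in the sketch are hypothetical.

```python
# Minimal sketch: daily human error rate = errors / opportunities for error.
def human_error_rate(errors, opportunities):
    return errors / opportunities

daily_counts = [(12, 400), (9, 380), (15, 420)]   # (errors, opportunities), hypothetical
for day, (errs, opps) in enumerate(daily_counts, start=1):
    print(f"day {day}: error rate = {human_error_rate(errs, opps):.1%}")
```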
A Real-Time Wireless Sweat Rate Measurement System for Physical Activity Monitoring
Brueck, Andrew; Iftekhar, Tashfin; Stannard, Alicja B.; Kaya, Tolga
2018-01-01
There has been significant research on the physiology of sweat in the past decade, with one of the main interests being the development of a real-time hydration monitor that utilizes sweat. The contents of sweat have been known for decades; sweat provides significant information on the physiological condition of the human body. However, it is important to know the sweat rate as well, as sweat rate alters the concentration of the sweat constituents, and ultimately affects the accuracy of hydration detection. Towards this goal, a calorimetric based flow-rate detection system was built and tested to determine sweat rate in real time. The proposed sweat rate monitoring system has been validated through both controlled lab experiments (syringe pump) and human trials. An Internet of Things (IoT) platform was embedded, with the sensor using a Simblee board and Raspberry Pi. The overall prototype is capable of sending sweat rate information in real time to either a smartphone or directly to the cloud. Based on a proven theoretical concept, our overall system implementation features a pioneer device that can truly measure the rate of sweat in real time, which was tested and validated on human subjects. Our realization of the real-time sweat rate watch is capable of detecting sweat rates as low as 0.15 µL/min/cm2, with an average error in accuracy of 18% compared to manual sweat rate readings. PMID:29439398
Coding for reliable satellite communications
NASA Technical Reports Server (NTRS)
Gaarder, N. T.; Lin, S.
1986-01-01
This research project was set up to study various kinds of coding techniques for error control in satellite and space communications for NASA Goddard Space Flight Center. During the project period, researchers investigated the following areas: (1) decoding of Reed-Solomon codes in terms of dual basis; (2) concatenated and cascaded error control coding schemes for satellite and space communications; (3) use of hybrid coding schemes (error correction and detection incorporated with retransmission) to improve system reliability and throughput in satellite communications; (4) good codes for simultaneous error correction and error detection, and (5) error control techniques for ring and star networks.
Design and scheduling for periodic concurrent error detection and recovery in processor arrays
NASA Technical Reports Server (NTRS)
Wang, Yi-Min; Chung, Pi-Yu; Fuchs, W. Kent
1992-01-01
Periodic application of time-redundant error checking provides the trade-off between error detection latency and performance degradation. The goal is to achieve high error coverage while satisfying performance requirements. We derive the optimal scheduling of checking patterns in order to uniformly distribute the available checking capability and maximize the error coverage. Synchronous buffering designs using data forwarding and dynamic reconfiguration are described. Efficient single-cycle diagnosis is implemented by error pattern analysis and direct-mapped recovery cache. A rollback recovery scheme using start-up control for local recovery is also presented.
Yunus, Zabedah Md; Rahman, Salina Abdul; Choy, Yew Sing; Keng, Wee Teik; Ngu, Lock Hock
2016-09-01
The aim of this study was to determine the feasibility of performing newborn screening (NBS) of inborn errors of metabolism (IEMs) using tandem mass spectrometry (TMS) and the impact on its detection rate in Malaysia. During the study period between June 2006 and December 2008, 30,247 newborns from 11 major public hospitals in Malaysia were screened for 27 inborn errors of amino acid, organic acid and fatty acid metabolism by TMS. Dried blood spot (DBS) samples were collected between 24 h and 7 days with parental consent. Samples with abnormal results were repeated and the babies were recalled to confirm the diagnosis with follow-up testing. Cut-off values for amino acids and acylcarnitines were established. Eight newborns were confirmed to have IEM: two newborns with Maple syrup urine disease (MSUD), two with methylmalonic aciduria (MMA), one with ethylmalonic aciduria, two with argininosuccinic aciduria and one with isovaleric aciduria. Diagnosis was missed in two newborns. The detection rate of IEMs in this study was one in 2916 newborns. The sensitivity and specificity of TMS were 80% and 99%, respectively. IEMs are common in Malaysia. NBS of IEMs by TMS is a valuable preventive strategy, enabling the diagnosis and early treatment of IEM before the onset of symptoms with the aim of preventing mental retardation and physical handicap. A number of shortcomings warrant further solutions so that, in the near future, NBS for IEMs will become a standard of care for all babies in Malaysia, in tandem with the developed world.
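The screening performance measures quoted above can be sketched from a confusion matrix as below. The confirmed and missed case counts follow the abstract, but the false-positive and true-negative counts are hypothetical placeholders chosen only to illustrate the formulas, and the detection-rate definition here is one simple convention.

```python
# Minimal sketch: sensitivity, specificity and detection rate from a
# screening confusion matrix.
def screening_metrics(tp, fn, fp, tn):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    detection_rate = tp / (tp + fn + fp + tn)   # confirmed cases per newborn screened
    return sensitivity, specificity, detection_rate

tp, fn = 8, 2                 # confirmed and missed cases, per the abstract
fp, tn = 300, 29937           # hypothetical counts for ~30,247 screened newborns
sens, spec, rate = screening_metrics(tp, fn, fp, tn)
print(f"sensitivity={sens:.0%}, specificity={spec:.1%}, detection rate=1 in {1/rate:.0f}")
```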
Detecting trends in raptor counts: power and type I error rates of various statistical tests
Hatfield, J.S.; Gould, W.R.; Hoover, B.A.; Fuller, M.R.; Lindquist, E.L.
1996-01-01
We conducted simulations that estimated power and type I error rates of statistical tests for detecting trends in raptor population count data collected from a single monitoring site. Results of the simulations were used to help analyze count data of bald eagles (Haliaeetus leucocephalus) from 7 national forests in Michigan, Minnesota, and Wisconsin during 1980-1989. Seven statistical tests were evaluated, including simple linear regression on the log scale and linear regression with a permutation test. Using 1,000 replications each, we simulated n = 10 and n = 50 years of count data and trends ranging from -5 to 5% change/year. We evaluated the tests at 3 critical levels (alpha = 0.01, 0.05, and 0.10) for both upper- and lower-tailed tests. Exponential count data were simulated by adding sampling error with a coefficient of variation of 40% from either a log-normal or autocorrelated log-normal distribution. Not surprisingly, tests performed with 50 years of data were much more powerful than tests with 10 years of data. Positive autocorrelation inflated alpha-levels upward from their nominal levels, making the tests less conservative and more likely to reject the null hypothesis of no trend. Of the tests studied, Cox and Stuart's test and Pollard's test clearly had lower power than the others. Surprisingly, the linear regression t-test, Collins' linear regression permutation test, and the nonparametric Lehmann's and Mann's tests all had similar power in our simulations. Analyses of the count data suggested that bald eagles had increasing trends on at least 2 of the 7 national forests during 1980-1989.
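A stripped-down version of the kind of power simulation described above is sketched below: generate exponential-trend counts with log-normal sampling error (CV of about 40%), test for trend with linear regression on the log scale, and estimate power as the rejection rate over replications. The baseline count, replication number and two-sided test are assumptions; the paper's permutation and nonparametric tests are not reproduced.

```python
# Minimal sketch: power of a log-scale linear regression test for an
# exponential population trend with log-normal sampling error.
import numpy as np
from scipy.stats import linregress

def simulate_power(trend=0.05, n_years=10, cv=0.4, reps=1000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(np.log(1 + cv ** 2))        # log-normal sigma for given CV
    years = np.arange(n_years)
    rejections = 0
    for _ in range(reps):
        expected = 100 * (1 + trend) ** years   # exponential trend in counts
        counts = expected * rng.lognormal(mean=-sigma**2 / 2, sigma=sigma, size=n_years)
        result = linregress(years, np.log(counts))
        if result.pvalue < alpha:               # two-sided test for nonzero trend
            rejections += 1
    return rejections / reps

print("power (5%/yr increase, 10 yr):", simulate_power())
print("power (5%/yr increase, 50 yr):", simulate_power(n_years=50))
```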
Comparative analysis of methods for detecting interacting loci
2011-01-01
Background Interactions among genetic loci are believed to play an important role in disease risk. While many methods have been proposed for detecting such interactions, their relative performance remains largely unclear, mainly because different data sources, detection performance criteria, and experimental protocols were used in the papers introducing these methods and in subsequent studies. Moreover, there have been very few studies strictly focused on comparison of existing methods. Given the importance of detecting gene-gene and gene-environment interactions, a rigorous, comprehensive comparison of performance and limitations of available interaction detection methods is warranted. Results We report a comparison of eight representative methods, of which seven were specifically designed to detect interactions among single nucleotide polymorphisms (SNPs), with the last a popular main-effect testing method used as a baseline for performance evaluation. The selected methods, multifactor dimensionality reduction (MDR), full interaction model (FIM), information gain (IG), Bayesian epistasis association mapping (BEAM), SNP harvester (SH), maximum entropy conditional probability modeling (MECPM), logistic regression with an interaction term (LRIT), and logistic regression (LR) were compared on a large number of simulated data sets, each, consistent with complex disease models, embedding multiple sets of interacting SNPs, under different interaction models. The assessment criteria included several relevant detection power measures, family-wise type I error rate, and computational complexity. There are several important results from this study. First, while some SNPs in interactions with strong effects are successfully detected, most of the methods miss many interacting SNPs at an acceptable rate of false positives. In this study, the best-performing method was MECPM. Second, the statistical significance assessment criteria, used by some of the methods to control the type I error rate, are quite conservative, thereby limiting their power and making it difficult to fairly compare them. Third, as expected, power varies for different models and as a function of penetrance, minor allele frequency, linkage disequilibrium and marginal effects. Fourth, the analytical relationships between power and these factors are derived, aiding in the interpretation of the study results. Fifth, for these methods the magnitude of the main effect influences the power of the tests. Sixth, most methods can detect some ground-truth SNPs but have modest power to detect the whole set of interacting SNPs. Conclusion This comparison study provides new insights into the strengths and limitations of current methods for detecting interacting loci. This study, along with freely available simulation tools we provide, should help support development of improved methods. The simulation tools are available at: http://code.google.com/p/simulation-tool-bmc-ms9169818735220977/downloads/list. PMID:21729295
Comparative analysis of methods for detecting interacting loci.
Chen, Li; Yu, Guoqiang; Langefeld, Carl D; Miller, David J; Guy, Richard T; Raghuram, Jayaram; Yuan, Xiguo; Herrington, David M; Wang, Yue
2011-07-05
Interactions among genetic loci are believed to play an important role in disease risk. While many methods have been proposed for detecting such interactions, their relative performance remains largely unclear, mainly because different data sources, detection performance criteria, and experimental protocols were used in the papers introducing these methods and in subsequent studies. Moreover, there have been very few studies strictly focused on comparison of existing methods. Given the importance of detecting gene-gene and gene-environment interactions, a rigorous, comprehensive comparison of performance and limitations of available interaction detection methods is warranted. We report a comparison of eight representative methods, of which seven were specifically designed to detect interactions among single nucleotide polymorphisms (SNPs), with the last a popular main-effect testing method used as a baseline for performance evaluation. The selected methods, multifactor dimensionality reduction (MDR), full interaction model (FIM), information gain (IG), Bayesian epistasis association mapping (BEAM), SNP harvester (SH), maximum entropy conditional probability modeling (MECPM), logistic regression with an interaction term (LRIT), and logistic regression (LR) were compared on a large number of simulated data sets, each, consistent with complex disease models, embedding multiple sets of interacting SNPs, under different interaction models. The assessment criteria included several relevant detection power measures, family-wise type I error rate, and computational complexity. There are several important results from this study. First, while some SNPs in interactions with strong effects are successfully detected, most of the methods miss many interacting SNPs at an acceptable rate of false positives. In this study, the best-performing method was MECPM. Second, the statistical significance assessment criteria, used by some of the methods to control the type I error rate, are quite conservative, thereby limiting their power and making it difficult to fairly compare them. Third, as expected, power varies for different models and as a function of penetrance, minor allele frequency, linkage disequilibrium and marginal effects. Fourth, the analytical relationships between power and these factors are derived, aiding in the interpretation of the study results. Fifth, for these methods the magnitude of the main effect influences the power of the tests. Sixth, most methods can detect some ground-truth SNPs but have modest power to detect the whole set of interacting SNPs. This comparison study provides new insights into the strengths and limitations of current methods for detecting interacting loci. This study, along with freely available simulation tools we provide, should help support development of improved methods. The simulation tools are available at: http://code.google.com/p/simulation-tool-bmc-ms9169818735220977/downloads/list.
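Of the methods compared, logistic regression with an interaction term (LRIT) is the most compact to illustrate. The sketch below is illustrative only (the study's actual implementations and data generators are in the linked simulation tools): it tests a single SNP pair with a likelihood-ratio test comparing main-effects and interaction logistic models, and the genotype coding, effect sizes, and sample size are arbitrary choices for the example.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

def interaction_test(snp1, snp2, y):
    """Likelihood-ratio test for a pairwise SNP interaction: compares
    logistic models with and without the snp1*snp2 term."""
    X_main = sm.add_constant(np.column_stack([snp1, snp2]))
    X_full = sm.add_constant(np.column_stack([snp1, snp2, snp1 * snp2]))
    fit_main = sm.Logit(y, X_main).fit(disp=0)
    fit_full = sm.Logit(y, X_full).fit(disp=0)
    lr = 2.0 * (fit_full.llf - fit_main.llf)   # likelihood-ratio statistic
    return stats.chi2.sf(lr, df=1)             # p-value for 1 extra parameter

# toy example: 0/1/2 genotypes and a purely epistatic disease model
rng = np.random.default_rng(0)
g1 = rng.integers(0, 3, 2000)
g2 = rng.integers(0, 3, 2000)
logit = -1.0 + 0.8 * (g1 * g2)
y = (rng.random(2000) < 1.0 / (1.0 + np.exp(-logit))).astype(int)
print(interaction_test(g1, g2, y))
```

In a genome-wide setting this p-value would be corrected for the number of pairs tested (e.g., Bonferroni), which is one reason the family-wise error control discussed above can become quite conservative.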
Advanced Water Vapor Lidar Detection System
NASA Technical Reports Server (NTRS)
Elsayed-Ali, Hani
1998-01-01
In the present water vapor lidar system, the detected signal is sent over long cables to a waveform digitizer in a CAMAC crate. This has the disadvantage of transmitting analog signals over a relatively long distance, where they are subject to pickup noise, reducing the signal-to-noise ratio. Generally, errors in the measurement of water vapor with the DIAL method arise from both random and systematic sources. Systematic errors in DIAL measurements are caused by both atmospheric and instrumentation effects. Selecting an on-line alexandrite laser with a narrow linewidth, suitable intensity, and high spectral purity, and operating it at the center of the water vapor lines, minimizes the influence of the laser spectral distribution on the DIAL measurement and avoids system overloads. Random errors are caused by noise in the detected signal. Variability of the photon statistics in the lidar return signal, noise resulting from detector dark current, and noise in the background signal are the main sources of random error. This type of error can be minimized by maximizing the signal-to-noise ratio, which can be increased in several ways. One is to increase the transmitted laser energy, either by raising the pulse energy or the pulse repetition rate. Another is to use a detector system with higher quantum efficiency and lower noise. In addition, selecting a narrow-band optical filter that rejects most of the daytime background light while retaining high optical efficiency is important. Following acquisition of the lidar data, random errors in the DIAL measurement can be reduced by averaging, but averaging degrades the vertical and horizontal resolutions, so a trade-off is necessary between spatial resolution and measurement precision. Therefore, the main goal of this research effort is to increase the signal-to-noise ratio by a factor of 10 over the current system, using a newly evaluated, very low noise avalanche photodiode detector and constructing a 10 MHz waveform digitizer to replace the current CAMAC system.
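The random-error trade-off described above follows directly from Poisson counting statistics. The short calculation below is a back-of-the-envelope sketch (the photocount levels are assumed values, not measurements from this system): it shows how the shot-noise-limited SNR of a range bin, and hence the relative random error of the two-wavelength DIAL ratio, improves as the square root of the number of averaged laser shots.

```python
import numpy as np

def dial_random_error(n_signal, n_background, n_dark, n_shots=1):
    """Shot-noise-limited SNR of one range bin and the resulting relative
    random error of a two-wavelength DIAL ratio after averaging n_shots
    laser shots (all inputs in detected photocounts per shot)."""
    total = n_signal + n_background + n_dark
    snr_single = n_signal / np.sqrt(total)      # Poisson counting noise
    snr = snr_single * np.sqrt(n_shots)         # averaging gain ~ sqrt(N)
    # the DIAL ratio uses two channels, so their errors add in quadrature
    relative_error = np.sqrt(2.0) / snr
    return snr, relative_error

for shots in (1, 100, 10000):
    snr, err = dial_random_error(n_signal=50, n_background=20, n_dark=5,
                                 n_shots=shots)
    print(f"{shots:6d} shots: SNR = {snr:7.1f}, relative error = {err:.3%}")
```

The same square-root scaling is what forces the resolution-versus-precision trade-off: averaging more shots or larger range bins lowers the random error but coarsens the temporal and vertical resolution.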
Research of misalignment between dithered ring laser gyro angle rate input axis and dither axis
NASA Astrophysics Data System (ADS)
Li, Geng; Wu, Wenqi; FAN, Zhenfang; LU, Guangfeng; Hu, Shaomin; Luo, Hui; Long, Xingwu
2014-12-01
The strap-down inertial navigation system (SINS), especially a SINS built around dithered ring laser gyroscopes (DRLGs), provides high reliability and performance for moving vehicles. However, the mechanical dither used to eliminate the lock-in effect introduces vibration disturbances into the INS and leads to dither-coupling problems in the inertial measurement unit (IMU) gyroscope triad, which limits further application. Among the DRLG errors between the true and measured rotation rates, the one most frequently considered is the misalignment between the input reference axis, which is perpendicular to the mounting surface, and the gyro angular-rate input axis. The misalignment angle between the DRLG dither axis and the gyro angular-rate input axis, by contrast, is often ignored; it is amplified by the dither-coupling problem and can degrade performance, especially in high-accuracy SINS. To study this problem more clearly, the concept of misalignment between the DRLG dither axis and the gyro angle-rate input axis is investigated. Because this misalignment is of the order of 10^-3 rad or smaller, the best way to measure it is with the DRLG itself, using an angle exciter as an auxiliary. In this paper, the concept of dither-axis misalignment is first explained explicitly. The angle-exciter frequency is then introduced as a reference parameter: when the DRLG is mounted on the angle exciter at a given angle, the projections of the exciter rotation rate and of the mechanical dither rate onto the gyro input axis are both sensed by the DRLG. If the dither axis is misaligned with the gyro input axis, four major frequencies are detected: the angle-exciter frequency, the dither mechanical frequency, and their sum and difference frequencies. The amplitude spectrum of the DRLG output signal is then obtained with a LabVIEW program. If only the angle-exciter and dither mechanical frequencies are present, the misalignment may be too small to detect; otherwise, the amplitudes of the sum and difference frequencies reveal the misalignment angle between the gyro angle-rate input axis and the dither axis. Finally, related parameters such as the angle-exciter frequency and amplitude and the sample rate are calculated, and the results are analyzed. Simulation and experimental results demonstrate the effectiveness of the proposed method.
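The spectral signature used in the paper can be illustrated with a toy signal. The Python sketch below synthesizes a gyro output containing the angle-exciter and dither frequencies plus sum- and difference-frequency sidebands whose amplitude is taken to scale with the misalignment angle (the frequencies, amplitudes, and scaling are assumptions for illustration, not the paper's model), and then reads those components off the FFT amplitude spectrum, much as the LabVIEW analysis described above would.

```python
import numpy as np

fs, t_end = 10_000.0, 2.0                   # sample rate (Hz), record length (s)
t = np.arange(0.0, t_end, 1.0 / fs)
f_exc, f_dither = 5.0, 400.0                # exciter and dither frequencies (Hz)
misalign = 5e-4                             # assumed misalignment angle (rad)

# illustrative gyro output: exciter and dither terms plus sidebands whose
# amplitude is proportional to the assumed misalignment angle, plus noise
omega = (np.sin(2 * np.pi * f_exc * t)
         + 0.5 * np.sin(2 * np.pi * f_dither * t)
         + misalign * np.sin(2 * np.pi * (f_dither + f_exc) * t)
         + misalign * np.sin(2 * np.pi * (f_dither - f_exc) * t)
         + 1e-5 * np.random.default_rng(0).standard_normal(t.size))

spectrum = np.abs(np.fft.rfft(omega)) / (t.size / 2)   # single-sided amplitude
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)

for f in (f_exc, f_dither, f_dither - f_exc, f_dither + f_exc):
    k = np.argmin(np.abs(freqs - f))
    print(f"{f:7.1f} Hz  amplitude ~ {spectrum[k]:.2e}")
```

If the sideband amplitudes stay at the noise floor, the misalignment is too small to detect; otherwise their level indicates the misalignment angle, which is the decision logic stated in the abstract.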
NASA Astrophysics Data System (ADS)
Welcome, Menizibeya O.; Dane, Şenol; Mastorakis, Nikos E.; Pereverzev, Vladimir A.
2017-12-01
The term "metaplasticity" is a recent one, meaning plasticity of synaptic plasticity. Correspondingly, neurometaplasticity simply means plasticity of neuroplasticity, indicating that a previous plastic event determines the current plasticity of neurons. Emerging studies suggest that neurometaplasticity underlies many neural activities and neurobehavioral disorders. In our previous work, we indicated that glucoallostasis is essential for the control of plasticity in the neural networks that control error commission, detection, and correction. Here we review recent works suggesting that task precision depends on the modulatory effects of neuroplasticity on the neural networks of error commission, detection, and correction. Furthermore, we discuss neurometaplasticity and its role in error commission, detection, and correction.
On-orbit observations of single event upset in Harris HM-6508 1K RAMs, reissue A
NASA Astrophysics Data System (ADS)
Blake, J. B.; Mandel, R.
1987-02-01
The Harris HM-6508 1K x 1 RAMs are part of a subsystem of a satellite in a low, polar orbit. The memory module used in the subsystem containing the RAMs consists of three printed circuit cards, with each card containing eight 2K-byte memory hybrids, for a total of 48K bytes. Each memory hybrid contains 16 HM-6508 RAM chips. On a regular basis all but 256 bytes of the 48K bytes are examined for bit errors. Two different techniques were used for detecting bit errors. The first technique, a memory check sum, was capable of automatically detecting all single-bit and some double-bit errors occurring within a page of memory; a memory page consists of 256 bytes. Memory check sum tests are performed approximately every 90 minutes. To detect a multiple error or to determine the exact location of a bit error within a page, the entire contents of the memory are dumped and compared to the load file. Memory dumps are normally performed once a month, or immediately after the check sum routine detects an error. Once the exact location of the error is found, the correct value is reloaded into memory. After the memory is reloaded, the contents of the memory location in question are verified in order to determine whether the error was a soft error generated by an SEU or a hard error generated by a part failure or cosmic-ray-induced latchup.
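The two detection techniques can be mimicked in a few lines. The sketch below uses a simple 8-bit additive checksum per 256-byte page (the flight software's actual checksum algorithm is not specified in the abstract) together with a dump-versus-load-file comparison that pinpoints the flipped bit.

```python
def page_checksum(page: bytes) -> int:
    """Simple 8-bit additive checksum over one 256-byte page (illustrative;
    the actual flight algorithm is not given in the abstract)."""
    return sum(page) & 0xFF

def locate_bit_errors(dump: bytes, load_file: bytes):
    """Compare a full memory dump with the original load file and return
    (byte_offset, bit_mask) for every differing bit."""
    errors = []
    for offset, (a, b) in enumerate(zip(dump, load_file)):
        diff = a ^ b
        bit = 0
        while diff:
            if diff & 1:
                errors.append((offset, 1 << bit))
            diff >>= 1
            bit += 1
    return errors

# toy example: flip one bit in a 256-byte page and detect it
load = bytes(range(256))
upset = bytearray(load)
upset[42] ^= 0x10                                  # simulated single-event upset
assert page_checksum(load) != page_checksum(bytes(upset))
print(locate_bit_errors(bytes(upset), load))       # [(42, 16)]
```

An additive 8-bit checksum flags every single-bit flip within a page, but opposite flips of the same bit position in two bytes cancel, which is consistent with the "all single bit and some double bit errors" behavior described above.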
The influence of the structure and culture of medical group practices on prescription drug errors.
Kralewski, John E; Dowd, Bryan E; Heaton, Alan; Kaissi, Amer
2005-08-01
This project was designed to identify the magnitude of prescription drug errors in medical group practices and to explore the influence of the practice structure and culture on those error rates. Seventy-eight practices serving an upper Midwest managed care (Care Plus) plan during 2001 were included in the study. Using Care Plus claims data, prescription drug error rates were calculated at the enrollee level and then were aggregated to the group practice that each enrollee selected to provide and manage their care. Practice structure and culture data were obtained from surveys of the practices. Data were analyzed using multivariate regression. Both the culture and the structure of these group practices appear to influence prescription drug error rates. Seeing more patients per clinic hour, more prescriptions per patient, and being cared for in a rural clinic were all strongly associated with more errors. Conversely, having a case manager program is strongly related to fewer errors in all of our analyses. The culture of the practices clearly influences error rates, but the findings are mixed. Practices with cohesive cultures have lower error rates but, contrary to our hypothesis, cultures that value physician autonomy and individuality also have lower error rates than those with a more organizational orientation. Our study supports the contention that there are a substantial number of prescription drug errors in the ambulatory care sector. Even by the strictest definition, there were about 13 errors per 100 prescriptions for Care Plus patients in these group practices during 2001. Our study demonstrates that the structure of medical group practices influences prescription drug error rates. In some cases, this appears to be a direct relationship, such as the effects of having a case manager program on fewer drug errors, but in other cases the effect appears to be indirect through the improvement of drug prescribing practices. An important aspect of this study is that it provides insights into the relationships of the structure and culture of medical group practices and prescription drug errors and provides direction for future research. Research focused on the factors influencing the high error rates in rural areas and how the interaction of practice structural and cultural attributes influence error rates would add important insights into our findings. For medical practice directors, our data show that they should focus on patient care coordination to reduce errors.
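The analytic approach (enrollee-level error rates aggregated to the practice level, then regressed on structural and cultural measures) can be sketched as follows. The variable names and synthetic data below are illustrative stand-ins, not the study's Care Plus claims or survey items.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic practice-level data; the columns are hypothetical stand-ins for
# the structural and cultural measures described in the abstract.
rng = np.random.default_rng(1)
n = 78
practices = pd.DataFrame({
    "patients_per_hour": rng.normal(4.0, 1.0, n),
    "rx_per_patient":    rng.normal(1.6, 0.4, n),
    "rural":             rng.integers(0, 2, n),
    "case_manager":      rng.integers(0, 2, n),
    "cohesion_score":    rng.normal(4.0, 0.6, n),
})
practices["error_rate"] = (0.10
    + 0.01 * practices["patients_per_hour"]
    + 0.02 * practices["rx_per_patient"]
    + 0.03 * practices["rural"]
    - 0.03 * practices["case_manager"]
    - 0.01 * practices["cohesion_score"]
    + rng.normal(0.0, 0.01, n))                 # errors per prescription

# practice-level multivariate regression of error rates on structure/culture
model = smf.ols("error_rate ~ patients_per_hour + rx_per_patient + rural"
                " + case_manager + cohesion_score", data=practices).fit()
print(model.summary())
```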
Giglhuber, Katrin; Maurer, Stefanie; Zimmer, Claus; Meyer, Bernhard; Krieg, Sandro M
2017-02-01
In clinical practice, repetitive navigated transcranial magnetic stimulation (rTMS) is of particular interest for non-invasive mapping of cortical language areas. Yet, rTMS studies try to detect further cortical functions. Damage to the underlying network of visuospatial attention function can result in visual neglect-a severe neurological deficit and influencing factor for a significantly reduced functional outcome. This investigation aims to evaluate the use of rTMS for evoking visual neglect in healthy volunteers and the potential of specifically locating cortical areas that can be assigned for the function of visuospatial attention. Ten healthy, right-handed subjects underwent rTMS visual neglect mapping. Repetitive trains of 5 Hz and 10 pulses were applied to 52 pre-defined cortical spots on each hemisphere; each cortical spot was stimulated 10 times. Visuospatial attention was tested time-locked to rTMS pulses by a landmark task. Task pictures were displayed tachistoscopically for 50 ms. The subjects' performance was analyzed by video, and errors were referenced to cortical spots. We observed visual neglect-like deficits during the stimulation of both hemispheres. Errors were categorized into leftward, rightward, and no response errors. Rightward errors occurred significantly more often during stimulation of the right hemisphere than during stimulation of the left hemisphere (mean rightward error rate (ER) 1.6 ± 1.3 % vs. 1.0 ± 1.0 %, p = 0.0141). Within the left hemisphere, we observed predominantly leftward errors rather than rightward errors (mean leftward ER 2.0 ± 1.3 % vs. rightward ER 1.0 ± 1.0 %; p = 0.0005). Visual neglect can be elicited non-invasively by rTMS, and cortical areas eloquent for visuospatial attention can be detected. Yet, the correlation of this approach with clinical findings has to be shown in upcoming steps.
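The within-subject comparisons reported above (each volunteer is mapped on both hemispheres) correspond to paired tests on per-subject error rates. The sketch below uses hypothetical per-subject rightward error rates, since the individual values are not given in the abstract, and runs a paired t-test and a Wilcoxon signed-rank test as two reasonable choices for n = 10 subjects.

```python
import numpy as np
from scipy import stats

# Hypothetical per-subject rightward error rates (%) for right- vs.
# left-hemisphere stimulation; only the group means/SDs appear in the abstract.
right_hemi = np.array([1.9, 0.8, 3.1, 1.2, 0.5, 2.4, 1.0, 2.8, 1.4, 0.9])
left_hemi  = np.array([1.1, 0.6, 1.8, 0.9, 0.4, 1.5, 0.7, 1.6, 1.0, 0.4])

# paired, two-sided comparisons of error rates within subjects
t_stat, p_t = stats.ttest_rel(right_hemi, left_hemi)
w_stat, p_w = stats.wilcoxon(right_hemi, left_hemi)
print(f"paired t-test: t = {t_stat:.2f}, p = {p_t:.4f}")
print(f"Wilcoxon signed-rank: W = {w_stat:.1f}, p = {p_w:.4f}")
```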
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ellefson, S; Department of Human Oncology, University of Wisconsin, Madison, WI; Culberson, W
Purpose: Discrepancies in absolute dose values have been detected between the ViewRay treatment planning system and ArcCHECK readings when performing delivery quality assurance on the ViewRay system with the ArcCHECK-MR diode array (SunNuclear Corporation). In this work, we investigate whether these discrepancies are due to errors in the ViewRay planning and/or delivery system or due to errors in the ArcCHECK’s readings. Methods: Gamma analysis was performed on 19 ViewRay patient plans using the ArcCHECK. Frequency analysis on the dose differences was performed. To investigate whether discrepancies were due to measurement or delivery error, 10 diodes in low-gradient dose regions were chosen to compare with ion chamber measurements in a PMMA phantom with the same size and shape as the ArcCHECK, provided by SunNuclear. The diodes chosen all had significant discrepancies in absolute dose values compared to the ViewRay TPS. Absolute doses to PMMA were compared between the ViewRay TPS calculations, ArcCHECK measurements, and measurements in the PMMA phantom. Results: Three of the 19 patient plans had 3%/3mm gamma passing rates less than 95%, and ten of the 19 plans had 2%/2mm passing rates less than 95%. Frequency analysis implied a non-random error process. Out of the 10 diode locations measured, ion chamber measurements were all within 2.2% error relative to the TPS and had a mean error of 1.2%. ArcCHECK measurements ranged from 4.5% to over 15% error relative to the TPS and had a mean error of 8.0%. Conclusion: The ArcCHECK performs well for quality assurance on the ViewRay under most circumstances. However, under certain conditions the absolute dose readings are significantly higher compared to the planned doses. As the ion chamber measurements consistently agree with the TPS, it can be concluded that the discrepancies are due to ArcCHECK measurement error and not TPS or delivery system error. This work was funded by the Bhudatt Paliwal Professorship and the University of Wisconsin Medical Radiation Research Center.
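For readers unfamiliar with the 3%/3mm and 2%/2mm criteria quoted above, the sketch below implements a simplified global 1-D gamma analysis and passing-rate calculation on a toy dose profile; commercial QA software performs the full 3-D, interpolated version, so this is illustrative only.

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_meas, positions, dose_tol=0.03, dta_tol=3.0):
    """Simplified global 1-D gamma analysis: dose_tol is a fraction of the
    reference maximum, dta_tol is the distance-to-agreement in mm."""
    norm = dose_tol * dose_ref.max()
    gammas = []
    for xm, dm in zip(positions, dose_meas):
        dose_term = (dose_ref - dm) / norm
        dist_term = (positions - xm) / dta_tol
        gammas.append(np.sqrt(dose_term**2 + dist_term**2).min())
    gammas = np.array(gammas)
    return 100.0 * np.mean(gammas <= 1.0), gammas

# toy example: measured profile shifted by 2 mm and scaled up 5%
x = np.arange(0.0, 100.0, 1.0)                       # detector positions (mm)
ref = np.exp(-((x - 50.0) / 15.0) ** 2)              # reference (planned) profile
meas = 1.05 * np.exp(-((x - 52.0) / 15.0) ** 2)      # measured profile
rate, _ = gamma_pass_rate(ref, meas, x)
print(f"3%/3mm gamma passing rate: {rate:.1f}%")
```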
Technology research for strapdown inertial experiment and digital flight control and guidance
NASA Technical Reports Server (NTRS)
Carestia, R. A.; Cottrell, D. E.
1985-01-01
A helicopter flight-test program to evaluate the performance of Honeywell's Tetrad, a strapdown, laser-gyro inertial navigation system, is discussed. The results of 34 flights showed a mean final navigational velocity error of 5.06 knots, with a standard deviation of 3.84 knots; a corresponding mean final position error of 2.66 n.mi., with a standard deviation of 1.48 n.mi.; and a modeled mean-position-error growth rate for the 34 tests of 1.96 knots, with a standard deviation of 1.09 knots. Tetrad's four ring laser gyros provided reliable and accurate angular rate sensing during the test program, and no sensor failures were detected during the evaluation. Criteria suitable for investigating cockpit systems in rotorcraft were developed. These criteria led to the development of two basic simulators. The first was a standard simulator which could be used to obtain baseline information for studying pilot workload and interactions. The second was an advanced simulator which integrated the RODAAS developed by Honeywell. The second area also included surveying the aerospace industry to determine the level of use and impact of microcomputers and related components on avionics systems.
NASA Technical Reports Server (NTRS)
Palmer, Michael T.; Abbott, Kathy H.
1994-01-01
This study identifies improved methods to present system parameter information for detecting abnormal conditions and to identify system status. Two workstation experiments were conducted. The first experiment determined if including expected-value-range information in traditional parameter display formats affected subject performance. The second experiment determined if using a nontraditional parameter display format, which presented relative deviation from expected value, was better than traditional formats with expected-value ranges included. The inclusion of expected-value-range information onto traditional parameter formats was found to have essentially no effect. However, subjective results indicated support for including this information. The nontraditional column deviation parameter display format resulted in significantly fewer errors compared with traditional formats with expected-value-ranges included. In addition, error rates for the column deviation parameter display format remained stable as the scenario complexity increased, whereas error rates for the traditional parameter display formats with expected-value ranges increased. Subjective results also indicated that the subjects preferred this new format and thought that their performance was better with it. The column deviation parameter display format is recommended for display applications that require rapid recognition of out-of-tolerance conditions, especially for a large number of parameters.
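A column-deviation display reduces each raw parameter to its deviation relative to the expected-value range. The mapping below is one plausible scaling, not the paper's exact normalization (which is not given in this summary): values inside the range display as zero, and values outside display as the number of range-widths by which they exceed it.

```python
def column_deviation(value, expected_low, expected_high):
    """Map a raw parameter value to a signed deviation for a column-style
    display: 0.0 inside the expected range, otherwise the signed number of
    range-widths by which the value falls outside it (illustrative scaling)."""
    width = expected_high - expected_low
    if value < expected_low:
        return (value - expected_low) / width
    if value > expected_high:
        return (value - expected_high) / width
    return 0.0

# example: three hydraulic-pressure readings against a 2800-3200 psi range
for reading in (3000.0, 3350.0, 2500.0):
    print(reading, "->", round(column_deviation(reading, 2800.0, 3200.0), 2))
```

Displaying all parameters on a common normalized scale like this is what lets an out-of-tolerance column stand out regardless of the parameter's native units, which is consistent with the recognition advantage reported above.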