InSAR Unwrapping Error Correction Based on Quasi-Accurate Detection of Gross Errors (QUAD)
NASA Astrophysics Data System (ADS)
Kang, Y.; Zhao, C. Y.; Zhang, Q.; Yang, C. S.
2018-04-01
Unwrapping error is a common error in InSAR processing that can seriously degrade the accuracy of monitoring results. Based on a gross error correction method, quasi-accurate detection (QUAD), a method for automatic correction of unwrapping errors is established in this paper. The method identifies and corrects unwrapping errors by establishing a functional model between the true errors and the interferograms. The basic principle and processing steps are presented, and the method is then compared with the L1-norm method on simulated data. Results show that both methods can effectively suppress unwrapping errors when the proportion of unwrapping errors is low, and that the two methods complement each other when the proportion is relatively high. Finally, real SAR data are tested for phase unwrapping error correction. Results show that the new method corrects phase unwrapping errors successfully in practical application.
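Unwrapping errors enter the data as gross errors with a special structure: integer multiples of 2π. The following minimal sketch shows why they are detectable, using triplet loop closure of unwrapped interferograms to flag integer-cycle inconsistencies. It illustrates the error structure only and is not the QUAD algorithm itself; the function names are illustrative.

```python
import numpy as np

def triplet_closure_cycles(phi_ij, phi_jk, phi_ik):
    """Flag unwrapping errors in a triplet of unwrapped interferograms.

    For consistently unwrapped phases, the loop phi_ij + phi_jk - phi_ik
    should be close to zero; a nonzero integer number of 2*pi cycles
    indicates an unwrapping error somewhere in the loop.
    """
    closure = phi_ij + phi_jk - phi_ik          # per-pixel loop misclosure
    cycles = np.round(closure / (2 * np.pi))    # nearest integer cycle count
    return cycles.astype(int)                   # nonzero => suspect pixels

# Toy example: inject a one-cycle (2*pi) error into phi_ij at one pixel.
phi = np.zeros((3, 3))
phi_ij = phi.copy()
phi_ij[1, 1] += 2 * np.pi
print(triplet_closure_cycles(phi_ij, phi, phi))  # 1 at the corrupted pixel
```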
DOE Office of Scientific and Technical Information (OSTI.GOV)
Passarge, M; Fix, M K; Manser, P
Purpose: To create and test an accurate EPID-frame-based VMAT QA metric to detect gross dose errors in real-time and to provide information about the source of error. Methods: A Swiss cheese model was created for an EPID-based real-time QA process. The system compares a treatment-plan-based reference set of EPID images with images acquired over each 2° gantry angle interval. The metric utilizes a sequence of independent, consecutively executed error detection methods: a masking technique that verifies in-field radiation delivery and ensures no out-of-field radiation; output normalization checks at two different stages; global image alignment to quantify rotation, scaling and translation; standard gamma evaluation (3%, 3 mm); and pixel intensity deviation checks including and excluding high dose gradient regions. Tolerances for each test were determined. For algorithm testing, twelve different types of errors were selected to modify the original plan. Corresponding predictions for each test case were generated, which included measurement-based noise. Each test case was run multiple times (with different noise per run) to assess the ability to detect introduced errors. Results: Averaged over five test runs, 99.1% of all plan variations that resulted in patient dose errors were detected within 2° and 100% within 4° (∼1% of patient dose delivery). Including cases that led to slightly modified but clinically equivalent plans, 91.5% were detected by the system within 2°. Based on the type of method that detected the error, determination of error sources was achieved. Conclusion: An EPID-based during-treatment error detection system for VMAT deliveries was successfully designed and tested. The system utilizes a sequence of methods to identify and prevent gross treatment delivery errors. The system was inspected for robustness with realistic noise variations, demonstrating that it has the potential to detect a large majority of errors in real-time and indicate the error source. J. V. Siebers receives funding support from Varian Medical Systems.
Psychrometric Measurement of Leaf Water Potential: Lack of Error Attributable to Leaf Permeability.
Barrs, H D
1965-07-02
A report that low permeability could cause gross errors in psychrometric determinations of water potential in leaves has not been confirmed. No measurable error from this source could be detected for either of two types of thermocouple psychrometer tested on four species, each at four levels of water potential. No source of error other than tissue respiration could be demonstrated.
Procedural error monitoring and smart checklists
NASA Technical Reports Server (NTRS)
Palmer, Everett
1990-01-01
Human beings make and usually detect errors routinely. The same mental processes that allow humans to cope with novel problems can also lead to error. Bill Rouse has argued that errors are not inherently bad but their consequences may be. He proposes the development of error-tolerant systems that detect errors and take steps to prevent the consequences of the error from occurring. Research should be done on self and automatic detection of random and unanticipated errors. For self detection, displays should be developed that make the consequences of errors immediately apparent. For example, electronic map displays graphically show the consequences of horizontal flight plan entry errors. Vertical profile displays should be developed to make apparent vertical flight planning errors. Other concepts such as energy circles could also help the crew detect gross flight planning errors. For automatic detection, systems should be developed that can track pilot activity, infer pilot intent and inform the crew of potential errors before their consequences are realized. Systems that perform a reasonableness check on flight plan modifications by checking route length and magnitude of course changes are simple examples. Another example would be a system that checked the aircraft's planned altitude against a data base of world terrain elevations. Information is given in viewgraph form.
NASA Astrophysics Data System (ADS)
Lin, Tsungpo
Performance engineers face a major challenge in modeling and simulation of after-market power systems due to system degradation and measurement errors. Currently, most of the power generation industry utilizes deterministic data matching to calibrate models and cascade system degradation, which causes significant calibration uncertainty and thus risk in providing performance guarantees. In this research work, a maximum-likelihood based simultaneous data reconciliation and model calibration (SDRMC) is used for power system modeling and simulation. By replacing the current deterministic data matching with SDRMC, one can reduce the calibration uncertainty and mitigate the error propagation to the performance simulation. A modeling and simulation environment for a complex power system with certain degradation has been developed. In this environment multiple data sets are imported when carrying out simultaneous data reconciliation and model calibration. Calibration uncertainties are estimated through error analyses and propagated to the performance simulation using the principle of error propagation. System degradation is then quantified by performance comparison between the calibrated model and its expected new-and-clean status. To mitigate smearing effects caused by gross errors, gross error detection (GED) is carried out in two stages. The first stage is a screening stage, in which serious gross errors are eliminated in advance. The GED techniques used in the screening stage are based on multivariate data analysis (MDA), including multivariate data visualization and principal component analysis (PCA). Subtle gross errors are treated at the second stage, in which serial bias compensation or a robust M-estimator is engaged. To achieve better efficiency in the combined scheme of least-squares based data reconciliation and the GED technique based on hypothesis testing, the Levenberg-Marquardt (LM) algorithm is utilized as the optimizer. To reduce the computation time and stabilize the problem solving for a complex power system such as a combined cycle power plant, meta-modeling using response surface equations (RSE) and system/process decomposition are incorporated with the simultaneous scheme of SDRMC. The goal of this research work is to reduce the calibration uncertainties and, thus, the risks of providing performance guarantees arising from uncertainties in performance simulation.
DC-Compensated Current Transformer.
Ripka, Pavel; Draxler, Karel; Styblíková, Renata
2016-01-20
Instrument current transformers (CTs) measure AC currents. The DC component in the measured current can saturate the transformer and cause gross error. We use fluxgate detection and digital feedback compensation of the DC flux to suppress the overall error to 0.15%. This concept can be used not only for high-end CTs with a nanocrystalline core, but it also works for low-cost CTs with FeSi cores. The method described here allows simultaneous measurements of the DC current component.
A Gross Error Elimination Method for Point Cloud Data Based on Kd-Tree
NASA Astrophysics Data System (ADS)
Kang, Q.; Huang, G.; Yang, S.
2018-04-01
Point cloud data has become one of the most widely used data sources in the field of remote sensing. Key steps in point cloud pre-processing focus on gross error elimination and quality control. Owing to the volume of point cloud data, existing gross error elimination methods consume massive amounts of memory and time. This paper presents a new method that constructs a Kd-tree, searches it with a k-nearest-neighbor algorithm, and applies an appropriate threshold to judge whether a target point is an outlier. Experimental results show that the proposed algorithm removes gross errors from point cloud data while decreasing memory consumption and improving efficiency.
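The elimination rule described here (k-nearest-neighbor distances queried through a Kd-tree, compared against a threshold) can be sketched in a few lines. A minimal illustration, assuming a simple mean-plus-n-sigma cutoff, which the abstract does not specify; the function name and parameters are illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_gross_errors(points, k=8, n_sigma=3.0):
    """Flag points whose mean distance to their k nearest neighbors is
    anomalously large (a common Kd-tree outlier test; the threshold
    choice here is illustrative, not the paper's exact rule)."""
    tree = cKDTree(points)
    # Query k+1 neighbors because the nearest neighbor is the point itself.
    dists, _ = tree.query(points, k=k + 1)
    mean_d = dists[:, 1:].mean(axis=1)
    cutoff = mean_d.mean() + n_sigma * mean_d.std()
    return points[mean_d <= cutoff], points[mean_d > cutoff]

pts = np.random.rand(1000, 3)
pts = np.vstack([pts, [[5.0, 5.0, 5.0]]])    # one obvious gross error
clean, outliers = remove_gross_errors(pts)
print(len(clean), len(outliers))             # expect roughly 1000, 1
```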
A system for EPID-based real-time treatment delivery verification during dynamic IMRT treatment.
Fuangrod, Todsaporn; Woodruff, Henry C; van Uytven, Eric; McCurdy, Boyd M C; Kuncic, Zdenka; O'Connor, Daryl J; Greer, Peter B
2013-09-01
To design and develop a real-time electronic portal imaging device (EPID)-based delivery verification system for dynamic intensity modulated radiation therapy (IMRT) which enables detection of gross treatment delivery errors before delivery of substantial radiation to the patient. The system utilizes a comprehensive physics-based model to generate a series of predicted transit EPID image frames as a reference dataset and compares these to measured EPID frames acquired during treatment. The two datasets are compared using MLC aperture comparison and cumulative signal checking techniques. The system operation in real-time was simulated offline using previously acquired images for 19 IMRT patient deliveries with both frame-by-frame comparison and cumulative frame comparison. Simulated error case studies were used to demonstrate the system sensitivity and performance. The accuracy of the synchronization method was shown to agree within two control points, which corresponds to approximately 1% of the total MU to be delivered for dynamic IMRT. The system achieved mean real-time gamma results for frame-by-frame analysis of 86.6% and 89.0% for 3%, 3 mm and 4%, 4 mm criteria, respectively, and 97.9% and 98.6% for cumulative gamma analysis. The system can detect a 10% MU error using 3%, 3 mm criteria within approximately 10 s. The EPID-based real-time delivery verification system successfully detected simulated gross errors introduced into patient plan deliveries in near real-time (within 0.1 s). A real-time radiation delivery verification system for dynamic IMRT has been demonstrated that is designed to prevent major mistreatments in modern radiation therapy.
1977-06-10
characterize. To detect distortion related to phonemic perception, spectral distance measures seem most important. Since the pitch contour plays such an … only gross gain errors should be detected. In the case of waveform coders, the distortions are not so easily related to perception. Pitch … spectral distance measures and related Lp measures were studied in this project. Let V(ω), −π ≤ ω ≤ π, be the short-time power spectral envelope for a
Identifying model error in metabolic flux analysis - a generalized least squares approach.
Sokolenko, Stanislav; Quattrociocchi, Marco; Aucoin, Marc G
2016-09-13
The estimation of intracellular flux through traditional metabolic flux analysis (MFA) using an overdetermined system of equations is a well established practice in metabolic engineering. Despite the continued evolution of the methodology since its introduction, there has been little focus on validation and identification of poor model fit outside of identifying "gross measurement error". The growing complexity of metabolic models, which are increasingly generated from genome-level data, has necessitated robust validation that can directly assess model fit. In this work, MFA calculation is framed as a generalized least squares (GLS) problem, highlighting the applicability of the common t-test for model validation. To differentiate between measurement and model error, we simulate ideal flux profiles directly from the model, perturb them with estimated measurement error, and compare their validation to real data. Application of this strategy to an established Chinese Hamster Ovary (CHO) cell model shows how fluxes validated by traditional means may be largely non-significant due to a lack of model fit. With further simulation, we explore how t-test significance relates to calculation error and show that fluxes found to be non-significant have 2-4 fold larger error (if measurement uncertainty is in the 5-10 % range). The proposed validation method goes beyond traditional detection of "gross measurement error" to identify lack of fit between model and data. Although the focus of this work is on t-test validation and traditional MFA, the presented framework is readily applicable to other regression analysis methods and MFA formulations.
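Framing the calculation as generalized least squares makes the per-coefficient t-test immediate. A minimal generic GLS sketch under an assumed known error covariance; it abstracts the MFA stoichiometry into an arbitrary design matrix X and is not the authors' CHO model:

```python
import numpy as np
from scipy import stats

def gls_fit(X, y, Sigma):
    """Generalized least squares: beta = (X' S^-1 X)^-1 X' S^-1 y,
    with t-tests of each coefficient against beta_j = 0."""
    Si = np.linalg.inv(Sigma)
    cov_beta = np.linalg.inv(X.T @ Si @ X)
    beta = cov_beta @ X.T @ Si @ y
    resid = y - X @ beta
    dof = len(y) - X.shape[1]
    s2 = (resid.T @ Si @ resid) / dof        # scaled residual variance
    se = np.sqrt(np.diag(cov_beta) * s2)
    t = beta / se
    p = 2 * stats.t.sf(np.abs(t), dof)
    return beta, t, p

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))
y = X @ np.array([1.5, 0.0]) + rng.normal(scale=0.3, size=30)
beta, t, p = gls_fit(X, y, Sigma=0.09 * np.eye(30))
print(beta, p)   # second coefficient should come out non-significant
```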
A strategy for reducing gross errors in the generalized Born models of implicit solvation
Onufriev, Alexey V.; Sigalov, Grigori
2011-01-01
The “canonical” generalized Born (GB) formula [W. C. Still, A. Tempczyk, R. C. Hawley, and T. Hendrickson, J. Am. Chem. Soc. 112, 6127 (1990)] is known to provide accurate estimates for total electrostatic solvation energies ΔGel of biomolecules if the corresponding effective Born radii are accurate. Here we show that even if the effective Born radii are perfectly accurate, the canonical formula still exhibits a significant number of gross errors (errors larger than 2kBT relative to the numerical Poisson equation reference) in pairwise interactions between individual atomic charges. Analysis of exact analytical solutions of the Poisson equation (PE) for several idealized nonspherical geometries reveals two distinct spatial modes of the PE solution; these modes are also found in realistic biomolecular shapes. The canonical GB Green function misses one of the two modes seen in the exact PE solution, which explains the observed gross errors. To address the problem and reduce gross errors of the GB formalism, we have used exact PE solutions for idealized nonspherical geometries to suggest an alternative analytical Green function to replace the canonical GB formula. The proposed functional form is mathematically nearly as simple as the original, but depends not only on the effective Born radii but also on their gradients, which allows for better representation of details of nonspherical molecular shapes. In particular, the proposed functional form captures both modes of the PE solution seen in nonspherical geometries. Tests on realistic biomolecular structures ranging from small peptides to medium-size proteins show that the proposed functional form reduces gross pairwise errors in all cases, with the amount of reduction varying from more than an order of magnitude for small structures to a factor of 2 for the largest ones. PMID:21528947
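For orientation, the canonical pairwise GB expression discussed above is commonly written in the following standard form (included as background, not as the authors' corrected functional):

```latex
\Delta G_{\mathrm{el}} \approx -\frac{1}{2}\left(\frac{1}{\epsilon_{\mathrm{in}}}-\frac{1}{\epsilon_{\mathrm{out}}}\right)\sum_{i,j}\frac{q_i q_j}{f_{\mathrm{GB}}},
\qquad
f_{\mathrm{GB}} = \sqrt{r_{ij}^{2} + R_i R_j \exp\!\left(-\frac{r_{ij}^{2}}{4 R_i R_j}\right)}
```

Here the $q_i$ are atomic charges, $r_{ij}$ interatomic distances, and $R_i$ the effective Born radii. The correction proposed in the paper replaces $f_{\mathrm{GB}}$ with a Green function that also depends on the gradients of the $R_i$.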
Kevin Schaefer; Christopher R. Schwalm; Chris Williams; M. Altaf Arain; Alan Barr; Jing M. Chen; Kenneth J. Davis; Dimitre Dimitrov; Timothy W. Hilton; David Y. Hollinger; Elyn Humphreys; Benjamin Poulter; Brett M. Raczka; Andrew D. Richardson; Alok Sahoo; Peter Thornton; Rodrigo Vargas; Hans Verbeeck; Ryan Anderson; Ian Baker; T. Andrew Black; Paul Bolstad; Jiquan Chen; Peter S. Curtis; Ankur R. Desai; Michael Dietze; Danilo Dragoni; Christopher Gough; Robert F. Grant; Lianhong Gu; Atul Jain; Chris Kucharik; Beverly Law; Shuguang Liu; Erandathie Lokipitiya; Hank A. Margolis; Roser Matamala; J. Harry McCaughey; Russ Monson; J. William Munger; Walter Oechel; Changhui Peng; David T. Price; Dan Ricciuto; William J. Riley; Nigel Roulet; Hanqin Tian; Christina Tonitto; Margaret Torn; Ensheng Weng; Xiaolu Zhou
2012-01-01
Accurately simulating gross primary productivity (GPP) in terrestrial ecosystem models is critical because errors in simulated GPP propagate through the model to introduce additional errors in simulated biomass and other fluxes. We evaluated simulated, daily average GPP from 26 models against estimated GPP at 39 eddy covariance flux tower sites across the United States...
DOE Office of Scientific and Technical Information (OSTI.GOV)
none,
Oak Ridge Associated Universities (ORAU), under the Oak Ridge Institute for Science and Education (ORISE) contract, collected split surface water samples with Nuclear Fuel Services (NFS) representatives on June 12, 2013. Representatives from the U.S. Nuclear Regulatory Commission (NRC) and the Tennessee Department of Environment and Conservation were also in attendance. Samples were collected at four surface water stations, as required in the approved Request for Technical Assistance number 11-018. These stations included Nolichucky River upstream (NRU), Nolichucky River downstream (NRD), Martin Creek upstream (MCU), and Martin Creek downstream (MCD). Both ORAU and NFS performed gross alpha and gross beta analyses, and Table 1 presents the comparison of results using the duplicate error ratio (DER), also known as the normalized absolute difference. A DER ≤ 3 indicates at a 99% confidence interval that split sample results do not differ significantly when compared to their respective one standard deviation (sigma) uncertainty (ANSI N42.22). The NFS split sample report specifies a 95% confidence level for reported uncertainties (NFS 2013). Therefore, standard two sigma reporting values were divided by 1.96. In conclusion, most DER values were less than 3 and results are consistent with low (e.g., background) concentrations. The gross beta result for sample 5198W0014 was the exception. The ORAU gross beta result of 6.30 ± 0.65 pCi/L from location NRD is well above NFS's non-detected result of 1.56 ± 0.59 pCi/L. NFS's data package includes no detected result for any radionuclide at location NRD. At NRC's request, ORAU performed gamma spectroscopic analysis of sample 5198W0014 to identify analytes contributing to the relatively elevated gross beta results. This analysis identified detected amounts of naturally-occurring constituents, most notably Ac-228 from the thorium decay series, and does not suggest the presence of site-related contamination.
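The DER comparison used in Table 1 is straightforward to reproduce. A minimal sketch, treating the quoted uncertainties as one sigma for illustration (the report converts two-sigma values by dividing by 1.96 where needed):

```python
import numpy as np

def duplicate_error_ratio(x1, u1, x2, u2):
    """Duplicate error ratio (normalized absolute difference):
    DER = |x1 - x2| / sqrt(u1^2 + u2^2), with one-sigma uncertainties.
    DER <= 3 => results do not differ significantly (ANSI N42.22)."""
    return abs(x1 - x2) / np.hypot(u1, u2)

# ORAU vs NFS gross beta results for sample 5198W0014 (pCi/L)
der = duplicate_error_ratio(6.30, 0.65, 1.56, 0.59)
print(round(der, 1))  # ~5.4, well above 3 => significant disagreement
```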
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Z.; Pike, R.W.; Hertwig, T.A.
An effective approach for source reduction in chemical plants has been demonstrated using on-line optimization with flowsheeting (ASPEN PLUS) for process optimization and parameter estimation and the Tjoa-Biegler algorithm implemented in a mathematical programming language (GAMS/MINOS) for data reconciliation and gross error detection. Results for a Monsanto sulfuric acid plant with a Bailey distributed control system showed a 25% reduction in the sulfur dioxide emissions and a 17% improvement in the profit over the current operating conditions. Details of the methods used are described.
WE-D-BRA-04: Online 3D EPID-Based Dose Verification for Optimum Patient Safety
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spreeuw, H; Rozendaal, R; Olaciregui-Ruiz, I
2015-06-15
Purpose: To develop an online 3D dose verification tool based on EPID transit dosimetry to ensure optimum patient safety in radiotherapy treatments. Methods: A new software package was developed which processes EPID portal images online using a back-projection algorithm for the 3D dose reconstruction. The package processes portal images faster than the acquisition rate of the portal imager (∼2.5 fps). After a portal image is acquired, the software seeks “hot spots” in the reconstructed 3D dose distribution. A hot spot is in this study defined as a 4 cm³ cube where the average cumulative reconstructed dose exceeds the average total planned dose by at least 20% and 50 cGy. If a hot spot is detected, an alert is generated resulting in a linac halt. The software has been tested by irradiating an Alderson phantom after introducing various types of serious delivery errors. Results: In our first experiment the Alderson phantom was irradiated with two arcs from a 6 MV VMAT H&N treatment having a large leaf position error or a large monitor unit error. For both arcs and both errors the linac was halted before dose delivery was completed. When no error was introduced, the linac was not halted. The complete processing of a single portal frame, including hot spot detection, takes about 220 ms on a dual hexa-core Intel Xeon X5650 CPU at 2.66 GHz. Conclusion: A prototype online 3D dose verification tool using portal imaging has been developed and successfully tested for various kinds of gross delivery errors. The detection of hot spots was proven to be effective for the timely detection of these errors. Current work is focused on hot spot detection criteria for various treatment sites and the introduction of a clinical pilot program with online verification of hypo-fractionated (lung) treatments.
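The hot-spot criterion above (a 4 cm³ cube whose average cumulative reconstructed dose exceeds the average planned dose by at least 20% and 50 cGy) maps naturally onto a moving-average filter over the dose grid. A minimal sketch with an assumed 10 mm voxel grid; the package's actual implementation details are not given in the abstract:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def find_hot_spots(reconstructed, planned, voxel_mm=10.0,
                   rel_excess=0.20, abs_excess_cgy=50.0):
    """Scan a 3D dose grid (cGy) for 'hot spots': ~4 cm^3 cubes where
    the mean reconstructed dose exceeds the mean planned dose by both
    20% and 50 cGy (criteria from the abstract; voxel size assumed)."""
    # cube edge in voxels for ~4 cm^3 = 4000 mm^3 (edge ~15.9 mm)
    edge = max(1, int(round((4000.0 ** (1.0 / 3.0)) / voxel_mm)))
    mean_rec = uniform_filter(reconstructed, size=edge)
    mean_plan = uniform_filter(planned, size=edge)
    excess = mean_rec - mean_plan
    return (excess >= abs_excess_cgy) & (excess >= rel_excess * mean_plan)

planned = np.full((50, 50, 50), 200.0)            # uniform 200 cGy plan
recon = planned.copy()
recon[20:24, 20:24, 20:24] += 80.0                # simulated delivery error
print(find_hot_spots(recon, planned).any())       # True: alert -> halt linac
```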
NASA Astrophysics Data System (ADS)
Ji, S.; Yuan, X.
2016-06-01
A generic probabilistic model, based on the fundamental Bayes' rule and the Markov assumption, is introduced to integrate the process of mobile platform localization with optical sensors. Based on it, three relatively independent solutions, bundle adjustment, Kalman filtering and particle filtering, are deduced under different additional restrictions. We want to prove that, first, Kalman filtering may be a better initial-value supplier for bundle adjustment than traditional relative orientation in irregular strips and networks or when tie-point extraction fails. Second, in high-noise conditions, particle filtering can act as a bridge for gap binding when a large number of gross errors fails a Kalman filtering or a bundle adjustment. Third, both filtering methods, which help reduce error propagation and eliminate gross errors, safeguard a global and static bundle adjustment, which requires the strictest initial values and control conditions. The main innovation is the integrated processing of stochastic errors and gross errors in sensor observations, and the integration of the three most used solutions, bundle adjustment, Kalman filtering and particle filtering, into a generic probabilistic localization model. Tests in noisy and restricted situations are designed and examined to prove these claims.
NASA Astrophysics Data System (ADS)
Prószyński, Witold; Kwaśniak, Mieczysław
2016-12-01
The paper presents the results of investigating the effect of increasing observation correlations on the detectability and identifiability of a single gross error, the sensitivity of the outlier test, and the response-based measures of internal reliability of networks. To reduce the practically incomputable number of possible test options when considering all the non-diagonal elements of the correlation matrix as variables, its simplest representation was used: a matrix with all non-diagonal elements of equal value, termed uniform correlation. By raising the common correlation value incrementally, a sequence of matrix configurations was obtained corresponding to increasing levels of observation correlation. For each of the measures characterizing the above-mentioned features of network reliability, the effect is presented in diagram form as a function of the increasing level of observation correlations. The influence of observation correlations on the sensitivity of the w-test for correlated observations (Förstner 1983, Teunissen 2006) is investigated in comparison with the original Baarda's w-test designed for uncorrelated observations, to determine the character of the expected sensitivity degradation of the latter when used for correlated observations. The correlation effects obtained for the different reliability measures are mutually consistent to a satisfactory extent. As a by-product of the analyses, a simple formula valid for any arbitrary correlation matrix is proposed for transforming Baarda's w-test statistics into the w-test statistics for correlated observations.
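For reference, the w-test for correlated observations referred to above (Förstner 1983, Teunissen 2006) is conventionally written as follows; this is the textbook statistic, not the paper's proposed transformation formula:

```latex
w_i = \frac{\mathbf{c}_i^{\mathsf{T}}\,\mathbf{Q}_{ll}^{-1}\,\hat{\mathbf{v}}}
           {\sigma_0\sqrt{\mathbf{c}_i^{\mathsf{T}}\,\mathbf{Q}_{ll}^{-1}\,\mathbf{Q}_{\hat{v}\hat{v}}\,\mathbf{Q}_{ll}^{-1}\,\mathbf{c}_i}}
```

Here $\mathbf{c}_i$ is the unit vector selecting observation $i$, $\hat{\mathbf{v}}$ the residual vector, and $\mathbf{Q}_{ll}$, $\mathbf{Q}_{\hat{v}\hat{v}}$ the cofactor matrices of the observations and residuals. For uncorrelated observations it reduces to Baarda's original $w_i = \hat{v}_i / \big(\sigma_0\sqrt{(Q_{\hat{v}\hat{v}})_{ii}}\big)$.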
Spatial compression impairs prism adaptation in healthy individuals.
Scriven, Rachel J; Newport, Roger
2013-01-01
Neglect patients typically present with gross inattention to one side of space following damage to the contralateral hemisphere. While prism-adaptation (PA) is effective in ameliorating some neglect behaviors, the mechanisms involved and their relationship to neglect remain unclear. Recent studies have shown that conscious strategic control (SC) processes in PA may be impaired in neglect patients, who are also reported to show extraordinarily long aftereffects compared to healthy participants. Determining the underlying cause of these effects may be the key to understanding therapeutic benefits. Alternative accounts suggest that reduced SC might result from a failure to detect prism-induced reaching errors properly either because (a) the size of the error is underestimated in compressed visual space or (b) pathologically increased error-detection thresholds reduce the requirement for error correction. The purpose of this study was to model these two alternatives in healthy participants and to examine whether SC and subsequent aftereffects were abnormal compared to standard PA. Each participant completed three PA procedures within a MIRAGE mediated reality environment with direction errors recorded before, during and after adaptation. During PA, visual feedback of the reach could be compressed, perturbed by noise, or represented veridically. Compressed visual space significantly reduced SC and aftereffects compared to control and noise conditions. These results support recent observations in neglect patients, suggesting that a distortion of spatial representation may successfully model neglect and explain neglect performance while adapting to prisms.
Adaptive Trajectory Prediction Algorithm for Climbing Flights
NASA Technical Reports Server (NTRS)
Schultz, Charles Alexander; Thipphavong, David P.; Erzberger, Heinz
2012-01-01
Aircraft climb trajectories are difficult to predict, and large errors in these predictions reduce the potential operational benefits of some advanced features for NextGen. The algorithm described in this paper improves climb trajectory prediction accuracy by adjusting trajectory predictions based on observed track data. It utilizes rate-of-climb and airspeed measurements derived from position data to dynamically adjust the aircraft weight modeled for trajectory predictions. In simulations with weight uncertainty, the algorithm is able to adapt to within 3 percent of the actual gross weight within two minutes of the initial adaptation. The root-mean-square of altitude errors for five-minute predictions was reduced by 73 percent. Conflict detection performance also improved, with a 15 percent reduction in missed alerts and a 10 percent reduction in false alerts. In a simulation with climb speed capture intent and weight uncertainty, the algorithm improved climb trajectory prediction accuracy by up to 30 percent and conflict detection performance, reducing missed and false alerts by up to 10 percent.
Validating the Rett Syndrome Gross Motor Scale.
Downs, Jenny; Stahlhut, Michelle; Wong, Kingsley; Syhler, Birgit; Bisgaard, Anne-Marie; Jacoby, Peter; Leonard, Helen
2016-01-01
Rett syndrome is a pervasive neurodevelopmental disorder associated with a pathogenic mutation on the MECP2 gene. Impaired movement is a fundamental component and the Rett Syndrome Gross Motor Scale was developed to measure gross motor abilities in this population. The current study investigated the validity and reliability of the Rett Syndrome Gross Motor Scale. Video data showing gross motor abilities supplemented with parent report data was collected for 255 girls and women registered with the Australian Rett Syndrome Database, and the factor structure and relationships between motor scores, age and genotype were investigated. Clinical assessment scores for 38 girls and women with Rett syndrome who attended the Danish Center for Rett Syndrome were used to assess consistency of measurement. Principal components analysis enabled the calculation of three factor scores: Sitting, Standing and Walking, and Challenge. Motor scores were poorer with increasing age and those with the p.Arg133Cys, p.Arg294* or p.Arg306Cys mutation achieved higher scores than those with a large deletion. The repeatability of clinical assessment was excellent (intraclass correlation coefficient for total score 0.99, 95% CI 0.93-0.98). The standard error of measurement for the total score was 2 points, and we would be 95% confident that a change of 4 points on the 45-point scale would be greater than within-subject measurement error. The Rett Syndrome Gross Motor Scale could be an appropriate measure of gross motor skills in clinical practice and clinical trials.
Confusion—specimen mix-up in dermatopathology and measures to prevent and detect it
Weyers, Wolfgang
2014-01-01
Maintaining patient identity throughout the biopsy pathway is critical for the practice of dermatology and dermatopathology. From the biopsy procedure to the acquisition of the pathology report, a specimen may pass through the hands of more than twenty individuals in several workplaces. The risk of a mix-up is considerable and may account for more serious mistakes than diagnostic errors. To prevent specimen mix-up, work processes should be standardized and automated wherever possible, e.g., by strict order in the operating room and in the laboratory and by adoption of a bar code system to identify specimens and corresponding request forms. Mutual control of clinicians, technicians, histopathologists, and secretaries, both simultaneously and downstream, is essential to detect errors. The most vulnerable steps of the biopsy pathway, namely, labeling of specimens and request forms and accessioning of biopsy specimens in the laboratory, should be carried out by two persons simultaneously. In preceding work steps, clues must be provided that allow a mix-up to be detected later on, such as information about clinical diagnosis, biopsy technique, and biopsy site by the clinician, and a sketch of the specimen by the technician grossing it. Awareness of the danger of specimen mix-up is essential for preventing and detecting it. The awareness can be heightened by documentation of any error in the biopsy pathway. In case of suspicion, a mix-up of specimens from different patients can be confirmed by DNA analysis. PMID:24520511
Air and smear sample calculational tool for Fluor Hanford Radiological control
DOE Office of Scientific and Technical Information (OSTI.GOV)
BAUMANN, B.L.
2003-07-11
A spreadsheet calculation tool was developed to automate the calculations performed for determining the concentration of airborne radioactivity and smear counting as outlined in HNF-13536, Section 5.2.7, "Analyzing Air and Smear Samples". This document reports on the design and testing of the calculation tool. Radiological Control Technicians (RCTs) will save time and reduce handwritten and calculation errors by using an electronic form for documenting and calculating workplace air samples. Current expectations are that RCTs will perform an air sample and collect the filter, or perform a smear for surface contamination. RCTs will then survey the filter for gross alpha and beta/gamma radioactivity and, with the gross counts, utilize either a hand calculation method or a calculator to determine activity on the filter. The electronic form will allow the RCT, with a few keystrokes, to document the individual's name, payroll, gross counts, and instrument identifiers, and to produce an error-free record. This productivity gain is realized by the enhanced ability to perform mathematical calculations electronically (reducing errors) while simultaneously documenting the air sample.
Improving the quality of marine geophysical track line data: Along-track analysis
NASA Astrophysics Data System (ADS)
Chandler, Michael T.; Wessel, Paul
2008-02-01
We have examined 4918 track line geophysics cruises archived at the U.S. National Geophysical Data Center (NGDC) using comprehensive error checking methods. Each cruise was checked for observation outliers, excessive gradients, metadata consistency, and general agreement with satellite altimetry-derived gravity and predicted bathymetry grids. Thresholds for error checking were determined empirically through inspection of histograms for all geophysical values, gradients, and differences with gridded data sampled along ship tracks. Robust regression was used to detect systematic scale and offset errors found by comparing ship bathymetry and free-air anomalies to the corresponding values from global grids. We found many recurring error types in the NGDC archive, including poor navigation, inappropriately scaled or offset data, excessive gradients, and extended offsets in depth and gravity when compared to global grids. While ~5-10% of bathymetry and free-air gravity records fail our conservative tests, residual magnetic errors may exceed twice this proportion. These errors hinder the effective use of the data and may lead to mistakes in interpretation. To enable the removal of gross errors without over-writing original cruise data, we developed an errata system that concisely reports all errors encountered in a cruise. With such errata files, scientists may share cruise corrections, thereby preventing redundant processing. We have implemented these quality control methods in the modified MGD77 supplement to the Generic Mapping Tools software suite.
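The robust-regression check for systematic scale and offset errors can be sketched directly. A minimal illustration using a Huber M-estimator, with simulated ship and grid depths standing in for real cruise data; the exact estimator and thresholds used by the authors are not specified in the abstract:

```python
import numpy as np
import statsmodels.api as sm

# Compare ship depths to grid depths sampled along track: a robust fit
# of ship = a + b * grid downweights spikes; b far from 1 suggests a
# scale error, a far from 0 an offset error. Thresholds are illustrative.
rng = np.random.default_rng(1)
grid = rng.uniform(1000, 5000, 500)           # grid depth (m)
ship = 1.3 * grid + rng.normal(0, 20, 500)    # simulated scale error
ship[::50] += 2000                            # a few gross spikes

fit = sm.RLM(ship, sm.add_constant(grid), M=sm.robust.norms.HuberT()).fit()
offset, scale = fit.params
print(f"offset={offset:.1f} m, scale={scale:.3f}")  # scale ~1.3 flags error
```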
Bayesian analysis of energy and count rate data for detection of low count rate radioactive sources.
Klumpp, John; Brandl, Alexander
2015-03-01
A particle counting and detection system is proposed that searches for elevated count rates in multiple energy regions simultaneously. The system analyzes time-interval data (e.g., time between counts), as this was shown to be a more sensitive technique for detecting low count rate sources compared to analyzing counts per unit interval (Luo et al. 2013). Two distinct versions of the detection system are developed. The first is intended for situations in which the sample is fixed and can be measured for an unlimited amount of time. The second version is intended to detect sources that are physically moving relative to the detector, such as a truck moving past a fixed roadside detector or a waste storage facility under an airplane. In both cases, the detection system is expected to be active indefinitely; i.e., it is an online detection system. Both versions of the multi-energy detection systems are compared to their respective gross count rate detection systems in terms of Type I and Type II error rates and sensitivity.
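A minimal sketch of why time-interval data is informative: under a Poisson background, inter-arrival times are exponential, and each observed interval updates the posterior odds that an additional source is present. This is a generic sequential Bayes update, not the authors' full multi-energy system; rates and names are illustrative.

```python
import numpy as np

def posterior_source_present(intervals, bkg_rate, src_rate, prior=0.5):
    """Sequential Bayes update on exponential inter-arrival times.
    Hypotheses: total rate bkg_rate (background only) vs bkg_rate + src_rate."""
    lam0, lam1 = bkg_rate, bkg_rate + src_rate
    # log-likelihood of all intervals under the two hypotheses
    ll0 = np.sum(np.log(lam0) - lam0 * intervals)
    ll1 = np.sum(np.log(lam1) - lam1 * intervals)
    log_odds = np.log(prior / (1 - prior)) + ll1 - ll0
    return 1.0 / (1.0 + np.exp(-log_odds))

rng = np.random.default_rng(2)
t = rng.exponential(1.0 / 7.0, size=200)   # data actually at 7 counts/s
print(posterior_source_present(t, bkg_rate=5.0, src_rate=2.0))  # near 1
```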
The sensitivity of derived estimates to the measurement quality objectives for independent variables
Francis A. Roesch
2002-01-01
The effect of varying the allowed measurement error for individual tree variables upon county estimates of gross cubic-foot volume was examined. Measurement Quality Objectives (MQOs) for three forest tree variables (biological identity, diameter, and height) used in individual tree gross cubic-foot volume equations were varied from the current USDA Forest Service...
The Sensitivity of Derived Estimates to the Measurement Quality Objectives for Independent Variables
Francis A. Roesch
2005-01-01
The effect of varying the allowed measurement error for individual tree variables upon county estimates of gross cubic-foot volume was examined. Measurement Quality Objectives (MQOs) for three forest tree variables (biological identity, diameter, and height) used in individual tree gross cubic-foot volume equations were varied from the current USDA Forest Service...
ERIC Educational Resources Information Center
Barr, Helen M.; And Others
1990-01-01
Multiple regression analyses of data from 449 children indicated statistically significant relationships between moderate levels of prenatal alcohol exposure and increased errors, increased latency, and increased total time on the Wisconsin Fine Motor Steadiness Battery and poorer balance on the Gross Motor Scale. (RH)
DOE Office of Scientific and Technical Information (OSTI.GOV)
none,
Oak Ridge Associated Universities (ORAU), under the Oak Ridge Institute for Science and Education (ORISE) contract, collected split surface water samples with Nuclear Fuel Services (NFS) representatives on March 20, 2013. Representatives from the U.S. Nuclear Regulatory Commission and the Tennessee Department of Environment and Conservation were also in attendance. Samples were collected at four surface water stations, as required in the approved Request for Technical Assistance number 11-018. These stations included Nolichucky River upstream (NRU), Nolichucky River downstream (NRD), Martin Creek upstream (MCU), and Martin Creek downstream (MCD). Both ORAU and NFS performed gross alpha and gross beta analyses, and Table 1 presents the comparison of results using the duplicate error ratio (DER), also known as the normalized absolute difference. A DER ≤ 3 indicates that at a 99% confidence interval, split sample results do not differ significantly when compared to their respective one standard deviation (sigma) uncertainty (ANSI N42.22). The NFS split sample report does not specify the confidence level of reported uncertainties (NFS 2013). Therefore, standard two sigma reporting is assumed and uncertainty values were divided by 1.96. In conclusion, most DER values were less than 3 and results are consistent with low (e.g., background) concentrations. The gross beta result for sample 5198W0012 was the exception. The ORAU result of 9.23 ± 0.73 pCi/L from location MCD is well above NFS's result of -0.567 ± 0.63 pCi/L (non-detected). NFS's data package included a detected result for U-233/234, but no other uranium or plutonium detection, and nothing that would suggest the presence of beta-emitting radionuclides. The ORAU laboratory reanalyzed sample 5198W0012 using the remaining portion of the sample volume and a result of 11.3 ± 1.1 pCi/L was determined. As directed, the laboratory also counted the filtrate using gamma spectrometry analysis and identified only naturally occurring or ubiquitous man-made constituents, including beta emitters that are presumably responsible for the elevated gross beta values.
Jani, Shyam S; Low, Daniel A; Lamb, James M
2015-01-01
To develop an automated system that detects patient identification and positioning errors between 3-dimensional computed tomography (CT) setup images and kilovoltage CT planning images. Planning kilovoltage CT images were collected for head and neck (H&N), pelvis, and spine treatments with corresponding 3-dimensional cone beam CT and megavoltage CT setup images from TrueBeam and TomoTherapy units, respectively. Patient identification errors were simulated by registering setup and planning images from different patients. For positioning errors, setup and planning images were misaligned by 1 to 5 cm in the 6 anatomical directions for H&N and pelvis patients. Spinal misalignments were simulated by misaligning to adjacent vertebral bodies. Image pairs were assessed using commonly used image similarity metrics as well as custom-designed metrics. Linear discriminant analysis classification models were trained and tested on the imaging datasets, and misclassification error (MCE), sensitivity, and specificity parameters were estimated using 10-fold cross-validation. For patient identification, our workflow produced MCE estimates of 0.66%, 1.67%, and 0% for H&N, pelvis, and spine TomoTherapy images, respectively. Sensitivity and specificity ranged from 97.5% to 100%. MCEs of 3.5%, 2.3%, and 2.1% were obtained for TrueBeam images of the above sites, respectively, with sensitivity and specificity estimates between 95.4% and 97.7%. MCEs for 1-cm H&N/pelvis misalignments were 1.3%/5.1% and 9.1%/8.6% for TomoTherapy and TrueBeam images, respectively. Two-centimeter MCE estimates were 0.4%/1.6% and 3.1%/3.2%, respectively. MCEs for vertebral body misalignments were 4.8% and 3.6% for TomoTherapy and TrueBeam images, respectively. Patient identification and gross misalignment errors can be robustly and automatically detected using 3-dimensional setup images of different energies across 3 commonly treated anatomical sites. Copyright © 2015 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
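The classification step (linear discriminant analysis with 10-fold cross-validation over image-similarity features) follows a standard pattern. A minimal sketch on simulated feature vectors; the actual metrics and imaging datasets are those described in the abstract:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# X: image-similarity metrics per setup/planning image pair,
# y: 1 = wrong patient (or gross misalignment), 0 = correct match.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 1.0, (200, 4)),    # correct pairs
               rng.normal(2.5, 1.0, (200, 4))])   # error pairs
y = np.repeat([0, 1], 200)

clf = LinearDiscriminantAnalysis()
acc = cross_val_score(clf, X, y, cv=10)           # 10-fold cross-validation
print(f"misclassification error ~ {1 - acc.mean():.3f}")
```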
Prevention of gross setup errors in radiotherapy with an efficient automatic patient safety system.
Yan, Guanghua; Mittauer, Kathryn; Huang, Yin; Lu, Bo; Liu, Chihray; Li, Jonathan G
2013-11-04
Treatment of the wrong body part due to incorrect setup is among the leading types of errors in radiotherapy. The purpose of this paper is to report an efficient automatic patient safety system (PSS) to prevent gross setup errors. The system consists of a pair of charge-coupled device (CCD) cameras mounted in the treatment room, a single infrared reflective marker (IRRM) affixed to the patient or immobilization device, and a set of in-house developed software. Patients are CT scanned with a CT BB placed on their surface close to the intended treatment site. Coordinates of the CT BB relative to the treatment isocenter are used as the reference for tracking. The CT BB is replaced with an IRRM before treatment starts. The PSS evaluates setup accuracy by comparing the real-time IRRM position with the reference position. To automate the system workflow, the PSS synchronizes with the record-and-verify (R&V) system in real time and automatically loads the reference data for the patient under treatment. Special IRRMs, which can permanently stick to the patient's face mask or body mold throughout the course of treatment, were designed to minimize the therapist's workload. Accuracy of the system was examined on an anthropomorphic phantom with a designed end-to-end test. Its performance was also evaluated on head and neck as well as abdominal-pelvic patients using cone-beam CT (CBCT) as the standard. The PSS achieved a seamless clinical workflow by synchronizing with the R&V system. By permanently mounting specially designed IRRMs on patient immobilization devices, therapist intervention is eliminated or minimized. Overall results showed that the PSS has sufficient accuracy to catch gross setup errors greater than 1 cm in real time. An efficient automatic PSS with sufficient accuracy has been developed to prevent gross setup errors in radiotherapy. The system can be applied to all treatment sites for independent positioning verification. It can be an ideal complement to complex image-guidance systems due to its advantages of continuous tracking ability, no radiation dose, and fully automated clinical workflow.
Kaur, Maninderjit; M Srinivasan, Sudha; N Bhat, Anjana
2018-01-01
Children with Autism Spectrum Disorder (ASD) have basic motor impairments in balance, gait, and coordination as well as autism-specific impairments in praxis/motor planning and interpersonal synchrony. The majority of the current literature focuses on isolated motor behaviors or domains. Additionally, the relationship between cognition, symptom severity, and motor performance in ASD is unclear. We used a comprehensive set of measures to compare gross and fine motor, praxis/imitation, motor coordination, and interpersonal synchrony skills across three groups of children between 5 and 12 years of age: children with ASD with high IQ (HASD), children with ASD with low IQ (LASD), and typically developing (TD) children. We used the Bruininks-Oseretsky Test of Motor Proficiency and the Bilateral Motor Coordination subtest of the Sensory Integration and Praxis Tests to assess motor performance and praxis skills, respectively. Children were also examined while performing simple and complex rhythmic upper and lower limb actions on their own (solo context) and with a social partner (social context). Both ASD groups had lower gross and fine motor scores, greater praxis errors in total and within various error types, lower movement rates, greater movement variability, and weaker interpersonal synchrony compared to the TD group. In addition, the LASD group had lower gross motor scores and greater mirroring errors compared to the HASD group. Overall, a variety of motor impairments are present across the entire spectrum of children with ASD, regardless of their IQ scores. Both fine and gross motor performance significantly correlated with IQ but not with autism severity; however, praxis errors (mainly total, overflow, and rhythmicity) strongly correlated with autism severity and not IQ. Our study findings highlight the need for clinicians and therapists to include motor evaluations and interventions in the standard of care for children with ASD and for the broader autism community to recognize dyspraxia as an integral part of the definition of ASD. Copyright © 2017 Elsevier Ltd. All rights reserved.
Lamb, James M; Agazaryan, Nzhde; Low, Daniel A
2013-10-01
To determine whether kilovoltage x-ray projection radiation therapy setup images could be used to perform patient identification and detect gross errors in patient setup using a computer algorithm. Three patient cohorts treated using a commercially available image guided radiation therapy (IGRT) system that uses 2-dimensional to 3-dimensional (2D-3D) image registration were retrospectively analyzed: a group of 100 cranial radiation therapy patients, a group of 100 prostate cancer patients, and a group of 83 patients treated for spinal lesions. The setup images were acquired using fixed in-room kilovoltage imaging systems. In the prostate and cranial patient groups, localizations using image registration were performed between computed tomography (CT) simulation images from radiation therapy planning and setup x-ray images corresponding both to the same patient and to different patients. For the spinal patients, localizations were performed to the correct vertebral body, and to an adjacent vertebral body, using planning CTs and setup x-ray images from the same patient. An image similarity measure used by the IGRT system image registration algorithm was extracted from the IGRT system log files and evaluated as a discriminant for error detection. A threshold value of the similarity measure could be chosen to separate correct and incorrect patient matches and correct and incorrect vertebral body localizations with excellent accuracy for these patient cohorts. A 10-fold cross-validation using linear discriminant analysis yielded misclassification probabilities of 0.000, 0.0045, and 0.014 for the cranial, prostate, and spinal cases, respectively. An automated measure of the image similarity between x-ray setup images and corresponding planning CT images could be used to perform automated patient identification and detection of localization errors in radiation therapy treatments. Copyright © 2013 Elsevier Inc. All rights reserved.
1980-03-01
interpreting/smoothing data containing a significant percentage of gross errors, and thus is ideally suited for applications in automated image analysis where interpretation is based on the data provided by error-prone feature detectors. A major portion of the paper describes the application of
Gamma-ray detector guidance of breast cancer therapy
NASA Astrophysics Data System (ADS)
Ravi, Ananth
2009-12-01
Breast cancer is the most common form of cancer in women. Over 75% of breast cancer patients are eligible for breast conserving therapy. Breast conserving therapy involves a lumpectomy to excise the gross tumour, followed by adjuvant radiation therapy to eradicate residual microscopic disease. Recent advances in the understanding of breast cancer biology and recurrence have presented the opportunity to improve breast conserving therapy techniques. This thesis has explored the potential of gamma-ray detecting technology to improve guidance of both the surgical and adjuvant radiation therapy aspects of breast conserving therapy. The task of accurately excising the gross tumour during breast conserving surgery (BCS) is challenging, due to the limited guidance currently available to surgeons. Radioimmunoguided surgery (RIGS) has been investigated to determine its potential to delineate the gross tumour intraoperatively. The effects of varying a set of user-controllable parameters on the ability of RIGS to detect and delineate model breast tumours were determined. The parameters studied were radioisotope, blood activity concentration, collimator height and energy threshold. The most sensitive combination of parameters was determined to be an 111Indium-labelled radiopharmaceutical with a gamma-ray detecting probe collimated to a height of 5 mm and an energy threshold at the Compton backscatter peak. Using these parameters it was found that, for the breast tumour model used, the minimum tumour-to-background ratio required to delineate the tumour edge accurately was 5.2 ± 0.4 at a blood activity concentration of 5 kBq/ml. Permanent breast seed implantation (PBSI) is a form of accelerated partial breast irradiation that dramatically reduces the treatment burden of adjuvant radiation therapy on patients. Unfortunately, it is currently difficult to localize the implanted brachytherapy seeds, making it difficult to perform a correction in the event that seeds have been misplaced. One method to provide intraoperative seed localization is through the use of a gamma-camera system. Monte Carlo simulations were conducted of a Cadmium Zinc Telluride (CZT) gamma-camera system and a realistic model of a breast with 3 layers of seeds distributed according to the pre-implant treatment plan of a typical patient. The simulations showed that a gamma-camera was able to localize the seeds with a maximum error of 2.0 mm within 20 seconds. An experimental prototype was designed and constructed to validate these promising Monte Carlo results. Using a 64-pixel linear array CZT detector fitted with a custom-built brass collimator, images were acquired of a physical phantom similar to the model used in the Monte Carlo simulations. The experimental prototype was able to reliably detect the seeds within 30 seconds with a median localization error of 1 mm. The results from this thesis suggest that gamma-ray detecting technology may be able to provide significant improvements in guidance of breast cancer therapies and, thus, potentially improved therapeutic outcomes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jani, S; Low, D; Lamb, J
2015-06-15
Purpose: To develop a system that can automatically detect patient identification and positioning errors using 3D computed tomography (CT) setup images and kilovoltage CT (kVCT) planning images. Methods: Planning kVCT images were collected for head-and-neck (H&N), pelvis, and spine treatments with corresponding 3D cone-beam CT (CBCT) and megavoltage CT (MVCT) setup images from TrueBeam and TomoTherapy units, respectively. Patient identification errors were simulated by registering setup and planning images from different patients. Positioning errors were simulated by misaligning the setup image by 1 cm to 5 cm in the six anatomical directions for H&N and pelvis patients. Misalignments for spine treatments were simulated by registering the setup image to adjacent vertebral bodies on the planning kVCT. A body contour of the setup image was used as an initial mask for image comparison. Images were pre-processed by image filtering and air voxel thresholding, and image pairs were assessed using commonly-used image similarity metrics as well as custom-designed metrics. A linear discriminant analysis classifier was trained and tested on the datasets, and misclassification error (MCE), sensitivity, and specificity estimates were generated using 10-fold cross validation. Results: Our workflow produced MCE estimates of 0.7%, 1.7%, and 0% for H&N, pelvis, and spine TomoTherapy images, respectively. Sensitivities and specificities ranged from 98.0% to 100%. MCEs of 3.5%, 2.3%, and 2.1% were obtained for TrueBeam images of the above sites, respectively, with sensitivity and specificity estimates between 96.2% and 98.4%. MCEs for 1 cm H&N/pelvis misalignments were 1.3%/5.1% and 9.1%/8.6% for TomoTherapy and TrueBeam images, respectively. 2 cm MCE estimates were 0.4%/1.6% and 3.1%/3.2%, respectively. Vertebral misalignment MCEs were 4.8% and 4.9% for TomoTherapy and TrueBeam images, respectively. Conclusion: Patient identification and gross misalignment errors can be robustly and automatically detected using 3D setup images of two imaging modalities across three commonly-treated anatomical sites.
Error rates in forensic DNA analysis: definition, numbers, impact and communication.
Kloosterman, Ate; Sjerps, Marjan; Quak, Astrid
2014-09-01
Forensic DNA casework is currently regarded as one of the most important types of forensic evidence, and important decisions in intelligence and justice are based on it. However, errors occasionally occur and may have very serious consequences. In other domains, error rates have been defined and published. The forensic domain is lagging behind concerning this transparency for various reasons. In this paper we provide definitions and observed frequencies for different types of errors at the Human Biological Traces Department of the Netherlands Forensic Institute (NFI) over the years 2008-2012. Furthermore, we assess their actual and potential impact and describe how the NFI deals with the communication of these numbers to the legal justice system. We conclude that the observed relative frequency of quality failures is comparable to studies from clinical laboratories and genetic testing centres. Furthermore, this frequency is constant over the five-year study period. The most common causes of failures related to the laboratory process were contamination and human error. Most human errors could be corrected, whereas gross contamination in crime samples often resulted in irreversible consequences. Hence this type of contamination is identified as the most significant source of error. Of the known contamination incidents, most were detected by the NFI quality control system before the report was issued to the authorities, and thus did not lead to flawed decisions like false convictions. However in a very limited number of cases crucial errors were detected after the report was issued, sometimes with severe consequences. Many of these errors were made in the post-analytical phase. The error rates reported in this paper are useful for quality improvement and benchmarking, and contribute to an open research culture that promotes public trust. However, they are irrelevant in the context of a particular case. Here case-specific probabilities of undetected errors are needed. These should be reported, separately from the match probability, when requested by the court or when there are internal or external indications for error. It should also be made clear that there are various other issues to consider, like DNA transfer. Forensic statistical models, in particular Bayesian networks, may be useful to take the various uncertainties into account and demonstrate their effects on the evidential value of the forensic DNA results. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Error analysis on spinal motion measurement using skin mounted sensors.
Yang, Zhengyi; Ma, Heather Ting; Wang, Deming; Lee, Raymond
2008-01-01
Measurement errors of skin-mounted sensors in measuring the forward bending movement of the lumbar spine are investigated. In this investigation, radiographic images capturing the entire lumbar spine's position were acquired and used as a 'gold' standard. Seventeen young male volunteers (21 (SD 1) years old) agreed to participate in the study. Lightweight miniature sensors of an electromagnetic tracking system (Fastrak) were attached to the skin overlying the spinous processes of the lumbar spine. With the sensors attached, the subjects had lateral radiographs taken in two postures: neutral upright and full flexion. The ranges of motion of the lumbar spine were calculated from two sets of digitized data, the bony markers of the vertebral bodies and the sensors, and the results were compared. The differences between the two sets of results were then analyzed. The relative movement between sensor and vertebrae was decomposed into sensor sliding and tilting, from which a sliding error and a tilting error were introduced. The gross motion range of forward bending of the lumbar spine measured from bony markers of the vertebrae was 67.8 degrees (SD 10.6 degrees) and that from the sensors was 62.8 degrees (SD 12.8 degrees). The error and absolute error for the gross motion range were 5.0 degrees (SD 7.2 degrees) and 7.7 degrees (SD 3.9 degrees). The contributions of the sensors placed on S1 and L1 to the absolute error were 3.9 degrees (SD 2.9 degrees) and 4.4 degrees (SD 2.8 degrees), respectively.
A simplified gross thrust computing technique for an afterburning turbofan engine
NASA Technical Reports Server (NTRS)
Hamer, M. J.; Kurtenbach, F. J.
1978-01-01
A simplified gross thrust computing technique extended to the F100-PW-100 afterburning turbofan engine is described. The technique uses measured total and static pressures in the engine tailpipe and ambient static pressure to compute gross thrust. Empirically evaluated calibration factors account for three-dimensional effects, the effects of friction and mass transfer, and the effects of simplifying assumptions for solving the equations. Instrumentation requirements and the sensitivity of computed thrust to transducer errors are presented. NASA altitude facility tests on F100 engines (computed thrust versus measured thrust) are presented, and calibration factors obtained on one engine are shown to be applicable to the second engine by comparing the computed gross thrust. It is concluded that this thrust method is potentially suitable for flight test application and engine maintenance on production engines with a minimum amount of instrumentation.
Arnedillo-Sánchez, Inmaculada; Boyle, Bryan; Bossavit, Benoît
2017-01-01
MotorSense is a motion detection and tracking technology that can be implemented across a range of environments to assist in detecting delays in gross-motor skills development. The system utilises the motion tracking functionality of Microsoft's Kinect™. It features games that require children to perform graded gross-motor tasks matched with their chronological and developmental ages. This paper describes the rationale for MotorSense, provides an overview of the functionality of the system and illustrates sample activities.
Vast Portfolio Selection with Gross-exposure Constraints*
Fan, Jianqing; Zhang, Jingjin; Yu, Ke
2012-01-01
We introduce large portfolio selection using gross-exposure constraints. We show that with a gross-exposure constraint, the empirically selected optimal portfolios based on estimated covariance matrices have performance similar to the theoretical optimal ones, and there is no error accumulation effect from estimation of vast covariance matrices. This gives theoretical justification to the empirical results in Jagannathan and Ma (2003). We also show that the no-short-sale portfolio can be improved by allowing some short positions. The applications to portfolio selection, tracking, and improvement are also addressed. The utility of our new approach is illustrated by simulation and empirical studies on the 100 Fama-French industrial portfolios and on 600 stocks randomly selected from the Russell 3000. PMID:23293404
Qu, Zhenhong; Ghorbani, Rhonda P; Li, Hongyan; Hunter, Robert L; Hannah, Christina D
2007-03-01
Gross examination, encompassing description, dissection, and sampling, is a complex task and an essential component of surgical pathology. Because of the complexity of the task, standardized protocols to guide the gross examination often become a bulky manual that is difficult to use. This problem is further compounded by the high specimen volume and the biohazardous nature of the task. As a result, such a manual is often underused, leading to errors that are potentially harmful and time-consuming to correct, a common chronic problem affecting many pathology laboratories. To combat this problem, we have developed a simple method that incorporates the complex text and graphic information of a typical procedure manual and yet allows easy access to any intended instructive information in the manual. The method uses the Object-Linking-and-Embedding function of Microsoft Word (Microsoft, Redmond, WA) to establish hyperlinks among different contents, and then it uses touch screen technology to facilitate navigation through the manual on a computer screen installed at the cutting bench, with no need for a physical keyboard or a mouse. It takes less than 4 seconds to reach any intended information in the manual by 3 to 4 touches on the screen. A 3-year follow-up study shows that this method has increased use of the manual and has improved the quality of gross examination. The method is simple and can be easily tailored to different formats of instructive information, allowing flexible organization, easy access, and quick navigation. Increased compliance with instructive information reduces errors at the grossing bench and improves work efficiency.
Why Three Heads Are a Better Bet than Four: A Reply to Sun, Tweney, and Wang (2010)
ERIC Educational Resources Information Center
Hahn, Ulrike; Warren, Paul A.
2010-01-01
We (Hahn & Warren, 2009) recently proposed a new account of the systematic errors and biases that appear to be present in people's perception of randomly generated events. In a comment on that article, Sun, Tweney, and Wang (2010) critiqued our treatment of the gambler's fallacy. We had argued that this fallacy was less gross an error than it…
2007-04-25
the coders. Figure 11a shows the basic Analyzer screen before any specific template is selected. … eyes and corners of the mouth, and reductions in gesturing or other gross body movements like foot tapping. D-DIMS captures facial and gross body
Code of Federal Regulations, 2010 CFR
2010-10-01
... computing the amount of the loss for which the carrier will pay there will be deducted from the gross amount... discrepancy is due to defective scales or other shipper facilities, or to inaccurate weighing or other error...
Detect, correct, retract: How to manage incorrect structural models.
Wlodawer, Alexander; Dauter, Zbigniew; Porebski, Przemyslaw J; Minor, Wladek; Stanfield, Robyn; Jaskolski, Mariusz; Pozharski, Edwin; Weichenberger, Christian X; Rupp, Bernhard
2018-02-01
The massive technical and computational progress of biomolecular crystallography has generated some adverse side effects. Most crystal structure models, produced by crystallographers or well-trained structural biologists, constitute useful sources of information, but occasional extreme outliers remind us that the process of structure determination is not fail-safe. The occurrence of severe errors or gross misinterpretations raises fundamental questions: Why do such aberrations emerge in the first place? How did they evade the sophisticated validation procedures which often produce clear and dire warnings, and why were severe errors not noticed by the depositors themselves, their supervisors, referees and editors? Once detected, what can be done to either correct, improve or eliminate such models? How do incorrect models affect the underlying claims or biomedical hypotheses they were intended, but failed, to support? What is the long-range effect of the propagation of such errors? And finally, what mechanisms can be envisioned to restore the validity of the scientific record and, if necessary, retract publications that are clearly invalidated by the lack of experimental evidence? We suggest that cognitive bias and flawed epistemology are likely at the root of the problem. By using examples from the published literature and from public repositories such as the Protein Data Bank, we provide case summaries to guide correction or improvement of structural models. When strong claims are unsustainable because of a deficient crystallographic model, removal of such a model and even retraction of the affected publication are necessary to restore the integrity of the scientific record. © 2017 Federation of European Biochemical Societies.
NASA Astrophysics Data System (ADS)
Lannutti, E.; Lenzano, M. G.; Toth, C.; Lenzano, L.; Rivera, A.
2016-06-01
In this work, we assessed the feasibility of using optical flow to estimate the motion of a glacier. Former investigations of glacier change typically require repeated observations, often based on extensive field work. Since glaciers are usually located in geographically complex, hard-to-access areas, optical flow applied to imagery from deployed time-lapse sensors may provide an efficient solution with good spatial and temporal resolution for describing mass motion. Several studies in the computer vision and image processing communities have used this method to detect large displacements. Therefore, we carried out a test of the proposed Large Displacement Optical Flow method at the Viedma Glacier, located at the South Patagonia Icefield, Argentina. We collected monoscopic terrestrial time-lapse imagery, acquired by a calibrated camera every 24 hours from April 2014 until April 2015. A filter based on temporal correlation and RGB color discretization between the images was applied to minimize errors related to changes in lighting, shadows, clouds and snow. This selection allowed us to discard images that did not follow a sequence of similarity. Our results show a flow field in the direction of the glacier movement, with acceleration at the terminus. We analyzed the errors between image pairs, and the matching generally appears to be adequate, although some areas show random gross errors related to changes in lighting. The proposed technique allowed the determination of glacier motion during one year, providing accurate and reliable motion data for subsequent analysis.
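For readers who want to experiment with the general approach, the following is a minimal sketch of dense optical flow between two consecutive time-lapse frames. It uses OpenCV's Farneback algorithm as a generic stand-in for the Large Displacement Optical Flow method named above (which is not part of stock OpenCV); the file names and the outlier threshold are assumptions.

```python
# Dense optical flow between consecutive time-lapse frames (hedged sketch).
import cv2
import numpy as np

prev = cv2.imread("glacier_day001.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical files
curr = cv2.imread("glacier_day002.jpg", cv2.IMREAD_GRAYSCALE)

# One (dx, dy) displacement vector per pixel.
flow = cv2.calcOpticalFlowFarneback(
    prev, curr, None,
    pyr_scale=0.5, levels=5, winsize=31,
    iterations=3, poly_n=7, poly_sigma=1.5, flags=0)

magnitude, direction = cv2.cartToPolar(flow[..., 0], flow[..., 1])

# Crude screen for gross errors from lighting changes: flag vectors whose
# magnitude is far above the scene median before further analysis.
outliers = magnitude > 5.0 * np.median(magnitude)
print(f"median displacement: {np.median(magnitude):.2f} px, "
      f"flagged pixels: {outliers.mean():.1%}")
```

The median-based screen mirrors the abstract's observation that lighting changes produce localized random gross errors in the matching.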
1 λ × 1.44 Tb/s free-space IM-DD transmission employing OAM multiplexing and PDM.
Zhu, Yixiao; Zou, Kaiheng; Zheng, Zhennan; Zhang, Fan
2016-02-22
We report the experimental demonstration of a single-wavelength terabit free-space intensity modulation direct detection (IM-DD) system employing both orbital angular momentum (OAM) multiplexing and polarization division multiplexing (PDM). In our experiment, 12 OAM modes with two orthogonal polarization states are used to generate 24 channels for transmission. Each channel carries a 30 Gbaud Nyquist PAM-4 signal. Therefore an aggregate gross capacity record of 1.44 Tb/s (12 × 2 × 30 × 2 Gb/s) is achieved with a modulation efficiency of 48 bits/symbol. After 0.8 m free-space transmission, the bit error rates (BERs) of all the channels are below the 20% hard-decision forward error correction (HD-FEC) threshold of 1.5 × 10⁻². After applying the decision directed recursive least square (DD-RLS) based filter and post filter, the BERs of the two polarizations can be reduced from 5.3 × 10⁻³ and 7.3 × 10⁻³ to 2.2 × 10⁻³ and 3.4 × 10⁻³, respectively.
Mihm, F G; Feeley, T W; Jamieson, S W
1987-01-01
The thermal dye double indicator dilution technique for estimating lung water was compared with gravimetric analyses in nine human subjects who were organ donors. As observed in animal studies, the thermal dye measurement of extravascular thermal volume (EVTV) consistently overestimated gravimetric extravascular lung water (EVLW), the mean (SEM) difference being 3.43 (0.59) ml/kg. In eight of the nine subjects, EVTV minus 3.43 ml/kg would yield an estimate of EVLW ranging from 3.23 ml/kg under to 3.37 ml/kg over the actual EVLW value at the 95% confidence limits. Reproducibility, assessed with the standard error of the mean percentage, suggested that a 15% change in EVTV can be reliably detected with repeated measurements. One subject was excluded from analysis because the EVTV measurement grossly underestimated the actual EVLW. This error was associated with regional injury observed on gross examination of the lung. Experimental and clinical evidence suggest that the thermal dye measurement provides a reliable estimate of lung water in diffuse pulmonary oedema states. PMID:3616974
In vivo TLD dose measurements in catheter-based high-dose-rate brachytherapy.
Adlienė, Diana; Jakštas, Karolis; Urbonavičius, Benas Gabrielis
2015-07-01
Routine in vivo dosimetry is well established in external beam radiotherapy; however, in high-dose-rate (HDR) brachytherapy it is restricted mainly to the detection of gross errors, because measurements in the steep dose gradients near the radioactive source are complicated and carry high uncertainties. The results of in vivo dose measurements using TLD 100 mini rods and TLD 'pin worms' in catheter-based HDR brachytherapy are provided in this paper, along with their comparison with the corresponding dose values obtained using the calculation algorithm of the treatment planning system. The possibility of performing independent verification of treatment delivery in HDR brachytherapy using TLDs is discussed. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Lam, Simon C; Lui, Andrew K F; Lee, Linda Y K; Lee, Joseph K L; Wong, K F; Lee, Cathy N Y
2016-05-01
The use of N95 respirators prevents the spread of respiratory infectious agents, but leakage compromises their protection. Manufacturers recommend a user seal check to identify on-site gross leakage; however, no empirical evidence supports this practice. Therefore, this study aims to examine the validity of the user seal check for gross leakage detection in commonly used types of N95 respirators. A convenience sample of 638 nursing students was recruited. While participants wore 3 different designs of N95 respirators, namely 3M-1860s, 3M-1862, and Kimberly-Clark 46827, the standardized user seal check procedure was carried out to identify gross leakage. Leakage was then retested with a quantitative fit testing (QNFT) device during normal breathing and deep breathing exercises. Sensitivity, specificity, predictive values, and likelihood ratios were calculated accordingly. As indicated by QNFT, the prevalence of actual gross leakage was 31.0%-39.2% with the 3M respirators and 65.4%-65.8% with the Kimberly-Clark respirator. Sensitivity and specificity of the user seal check for identifying actual gross leakage were approximately 27.7% and 75.5% for 3M-1860s, 22.1% and 80.5% for 3M-1862, and 26.9% and 80.2% for Kimberly-Clark 46827, respectively. Likelihood ratios were close to 1 (range, 0.89-1.51) for all types of respirators. The results did not support the user seal check as a means of detecting actual gross leakage in the donning of N95 respirators. However, such a check might alert health care workers that donning a tight-fitting respirator should be performed carefully. Copyright © 2016 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Behnabian, Behzad; Mashhadi Hossainali, Masoud; Malekzadeh, Ahad
2018-02-01
The cross-validation technique is a popular method to assess and improve the quality of prediction by least squares collocation (LSC). We present a formula for direct estimation of the vector of cross-validation errors (CVEs) in LSC which is much faster than element-wise CVE computation. We show that a quadratic form of the CVEs follows a Chi-squared distribution. Furthermore, an a posteriori noise variance factor is derived from the quadratic form of the CVEs. In order to detect blunders in the observations, the estimated standardized CVE is proposed as the test statistic, which can be applied when noise variances are known or unknown. We use LSC together with the methods proposed in this research for interpolation of crustal subsidence in the northern coast of the Gulf of Mexico. The results show that after detecting and removing outliers, the root mean square (RMS) of the CVEs and the estimated noise standard deviation are reduced by about 51 and 59%, respectively. In addition, the RMS of the LSC prediction error at data points and the RMS of the estimated observation noise are decreased by 39 and 67%, respectively. However, the RMS of the LSC prediction error on a regular grid of interpolation points covering the area is only reduced by about 4%, which is a consequence of the sparse distribution of data points in this case study. The influence of gross errors on the LSC prediction results is also investigated by lower cutoff CVEs. It is indicated that after elimination of outliers, the RMS of this type of error is also reduced, by 19.5% for a 5 km radius of vicinity. We propose a method using standardized CVEs for classification of the dataset into three groups with presumed different noise variances. The noise variance components for each of the groups are estimated using the restricted maximum-likelihood method via the Fisher scoring technique. Finally, LSC assessment measures were computed for the estimated heterogeneous noise variance model and compared with those of the homogeneous model. The advantage of the proposed method is the reduction in estimated noise levels for the groups with fewer noisy data points.
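A minimal sketch of the direct-CVE idea follows, assuming a collocation/Gaussian-process-type predictor with signal and noise covariance matrices C_signal and C_noise. The identity used here, e_i = [Q⁻¹y]_i / [Q⁻¹]_ii with Q = C_signal + C_noise, is the standard fast leave-one-out formula and may differ in detail from the paper's derivation.

```python
# Direct leave-one-out cross-validation errors without n model refits (sketch).
import numpy as np

def cv_errors(C_signal, C_noise, y):
    """Vector of leave-one-out prediction errors in one linear solve."""
    Qinv = np.linalg.inv(C_signal + C_noise)
    return (Qinv @ y) / np.diag(Qinv)

def standardized_cv_errors(C_signal, C_noise, y):
    """Standardized CVEs, usable as blunder-detection test statistics as
    proposed in the abstract (each CVE divided by its standard deviation,
    which under the same identity is sqrt(1 / [Q^{-1}]_ii))."""
    Qinv = np.linalg.inv(C_signal + C_noise)
    return (Qinv @ y) / np.sqrt(np.diag(Qinv))
```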
Agiovlasitis, Stamatis; Motl, Robert W
2016-01-01
An equation for predicting the gross oxygen uptake (gross-VO2) during walking for persons with multiple sclerosis (MS) has been developed. Predictors included walking speed and total score from the 12-Item Multiple Sclerosis Walking Scale (MSWS-12). This study examined the validity of this prediction equation in another sample of persons with MS. Participants were 18 persons with MS with limited mobility problems (42 ± 13 years; 14 women). Participants completed the MSWS-12. Gross-VO2 was measured with open-circuit spirometry during treadmill walking at 2.0, 3.0, and 4.0 mph (0.89, 1.34, and 1.79 m·s⁻¹). Absolute percent error was small: 8.3 ± 6.1%, 8.0 ± 5.6%, and 12.2 ± 9.0% at 2.0, 3.0, and 4.0 mph, respectively. Actual gross-VO2 did not differ significantly from predicted gross-VO2 at 2.0 and 3.0 mph, but was significantly higher than predicted gross-VO2 at 4.0 mph (p < 0.001). Bland-Altman plots indicated nearly zero mean difference between actual and predicted gross-VO2 with modest 95% confidence intervals at 2.0 and 3.0 mph, but there was some underestimation at 4.0 mph. Speed and MSWS-12 score provide valid prediction of gross-VO2 during treadmill walking at slow and moderate speeds in ambulatory persons with MS. However, there is a possibility of small underestimation for walking at 4.0 mph.
Robustly Aligning a Shape Model and Its Application to Car Alignment of Unknown Pose.
Li, Yan; Gu, Leon; Kanade, Takeo
2011-09-01
Precisely localizing in an image a set of feature points that form the shape of an object, such as a car or a face, is called alignment. Previous shape alignment methods attempted to fit a whole shape model to the observed data, based on the assumption of Gaussian observation noise and the associated regularization process. However, such an approach, though able to deal with Gaussian noise in feature detection, turns out not to be robust or precise, because it is vulnerable to gross feature detection errors or outliers resulting from partial occlusions or spurious features from the background or neighboring objects. We address this problem by adopting a randomized hypothesis-and-test approach. First, a Bayesian inference algorithm is developed to generate a shape-and-pose hypothesis of the object from a partial shape, or a subset of feature points. For alignment, a large number of hypotheses are generated by randomly sampling subsets of feature points and then evaluated to find the one that minimizes the shape prediction error. This method of randomized subset-based matching can effectively handle outliers and recover the correct object shape. We apply this approach to a challenging data set of over 5,000 different-posed car images, spanning a wide variety of car types, lighting, background scenes, and partial occlusions. Experimental results demonstrate favorable improvements over previous methods in both accuracy and robustness.
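The randomized hypothesis-and-test idea can be sketched generically as below: repeatedly fit a similarity transform (Procrustes) to a random subset of detected feature points, and keep the hypothesis that best explains the full shape under a robust score. This is a RANSAC-style stand-in for illustration, not the paper's Bayesian inference step; all names are illustrative.

```python
# RANSAC-style subset matching for shape alignment (hedged sketch).
import numpy as np

def similarity_fit(src, dst):
    """Least-squares scale/rotation/translation mapping src -> dst (Nx2 arrays)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    s, d = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(d.T @ s)      # Kabsch-style rotation estimate
    R = U @ Vt
    scale = S.sum() / (s ** 2).sum()
    t = mu_d - scale * mu_s @ R.T
    return scale, R, t

def align(model, detections, n_iter=500, subset=3, rng=np.random.default_rng(0)):
    best_err, best = np.inf, None
    n = len(model)
    for _ in range(n_iter):
        idx = rng.choice(n, size=subset, replace=False)
        scale, R, t = similarity_fit(model[idx], detections[idx])
        pred = scale * model @ R.T + t
        # Median residual tolerates gross detection outliers.
        err = np.median(np.linalg.norm(pred - detections, axis=1))
        if err < best_err:
            best_err, best = err, (scale, R, t)
    return best, best_err
```

The median residual plays the role of the hypothesis-evaluation step: a hypothesis fitted from clean points scores well even when a minority of detections are gross outliers.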
Albuquerque, Plínio Luna de; Guerra, Miriam Queiroz de Farias; Lima, Marília de Carvalho; Eickmann, Sophie Helena
2017-05-24
To investigate the concurrent validity of the AIMS in relation to the gross motor subtest of the Bayley Scale III (Bayley-III/GM) in preterm infants. A total of 159 gross motor development assessments were performed with the AIMS and Bayley-III/GM. Linear regression was used to assess the correlation between AIMS and Bayley-III/GM scores. The intra-class correlation coefficient (ICC) and the Bland-Altman plot were used to analyze intra- and inter-rater reliability. The prevalence of delayed gross motor development was 20.8% according to the Bayley-III/GM, and 11.9% at the 5th percentile and 21.4% at the 10th percentile of the AIMS. Good correlation of AIMS with Bayley-III/GM scores and good intra- and inter-rater reliability were found in this study. The AIMS proved very capable of detecting delayed gross motor development in preterm infants when compared with the Bayley-III/GM. The 10th percentile of the AIMS provided the best combination of indicators, with greater specificity.
ERIC Educational Resources Information Center
Griswold, John S.
2009-01-01
"Spectacular error" sounds euphemistic compared to "devastating," "catastrophic," or "meltdown"--terms more commonly summoned to describe the credit crisis and ensuing global economic carnage. Whatever they are labeled, gross miscalculations on Wall Street are having a deleterious effect on college campuses across the country, with many…
Innovative procedure for the determination of gross-alpha/gross-beta activities in drinking water.
Wisser, S; Frenzel, E; Dittmer, M
2006-03-01
An alternative sample preparation method for the determination of gross-alpha/beta activity concentrations in drinking water is introduced in this paper. After freeze-drying of tap water samples, determination by liquid scintillation counting can be applied utilizing alpha/beta separation. It has been shown that there is no adsorption or loss of solid radionuclides during the freeze-drying procedure. However, the samples have to be measured quickly after preparation, since the ingrowth of daughter isotopes negatively affects the measurement. The limits of detection for gross-alpha and gross-beta activity are in the range of 25-210 mBq/l for a measurement time of only 8-9 h.
Lee, Jinhyung; Choi, Jae-Young
2016-04-05
The benefits of health information technology (IT) adoption have been reported in the literature, but whether health IT investment increases revenue generation remains an important research question. Texas hospital data obtained from the American Hospital Association (AHA) for 2007-2010 were used to investigate the association between health IT expenses and hospital revenue. A generalized estimating equation (GEE) with an independent error structure was used to model the data, controlling for clustering of errors within hospitals. We found that health IT expenses were significantly and positively associated with hospital revenue. Our model predicted that a 100% increase in health IT expenditure would result in an 8% increase in total revenue. The effect of health IT was more strongly associated with gross outpatient revenue than with gross inpatient revenue. Increased health IT expenses were associated with greater hospital revenue. Future research needs to confirm our findings with a national sample of hospitals.
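A hedged sketch of this kind of model is shown below, using statsmodels' GEE with an independence working correlation and clustering by hospital. The data frame, column names, and covariates are hypothetical; the paper's exact specification is not given in the abstract.

```python
# GEE with independence working correlation, clustered by hospital (sketch).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("texas_aha_2007_2010.csv")  # hypothetical file and columns

# Log-log specification, so the health IT coefficient reads as an elasticity:
# doubling IT spending multiplies revenue by 2**beta.
model = smf.gee(
    "np.log(total_revenue) ~ np.log(health_it_expense) + beds + C(year)",
    groups="hospital_id",
    data=df,
    cov_struct=sm.cov_struct.Independence(),
    family=sm.families.Gaussian(),
)
print(model.fit().summary())
```

Under such a log-log reading, the paper's headline estimate (a 100% increase in IT expense yielding an 8% revenue increase) would correspond to an elasticity of roughly log(1.08)/log(2) ≈ 0.11.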
Computer-aided head film analysis: the University of California San Francisco method.
Baumrind, S; Miller, D M
1980-07-01
Computer technology is already assuming an important role in the management of orthodontic practices. The next 10 years are likely to see expansion in computer usage into the areas of diagnosis, treatment planning, and treatment-record keeping. In the areas of diagnosis and treatment planning, one of the first problems to be attacked will be the automation of head film analysis. The problems of constructing computer-aided systems for this purpose are considered herein in the light of the authors' 10 years of experience in developing a similar system for research purposes. The need for building in methods for automatic detection and correction of gross errors is discussed and the authors' method for doing so is presented. The construction of a rudimentary machine-readable data base for research and clinical purposes is described.
Singh, Omkar; Sunkaria, Ramesh Kumar
2017-12-01
This paper presents a novel technique to identify heartbeats in multimodal data using electrocardiogram (ECG) and arterial blood pressure (ABP) signals. Multiple physiological signals such as ECG, ABP, and respiration are often recorded in parallel from the activity of the heart. These signals generally possess related information as they are generated by the same physical system; the ECG and ABP correspond to the same phenomenon of contraction and relaxation of the heart. Multiple signals acquired from various sensors are generally processed independently, thus discarding the information from other measurements. In the estimation of heart rate and heart rate variability, the R peaks are generally identified from the ECG signal. Efficient detection of R peaks in the ECG is a key component in the estimation of clinically relevant parameters from the ECG. However, when the signal is severely affected by undesired artifacts, this becomes a challenging task. Sometimes in the clinical environment, other physiological signals reflecting the cardiac activity, such as the ABP signal, are also acquired simultaneously. When such multimodal signals are available, the accuracy of R peak detection methods can be improved using sensor-fusion techniques. In the proposed method, the sample entropy (SampEn) is used as a metric for assessing the noise content in the physiological signal, and the R peaks in the ECG and the systolic peaks in the ABP signals are fused together to enhance the efficiency of heartbeat detection. The proposed method was evaluated on the 100 records from the Computing in Cardiology Challenge 2014 training data set. The performance parameters are sensitivity (Se) and positive predictivity (PPV). The unimodal R peak detector achieved: Se gross = 99.40%, PPV gross = 99.29%, Se average = 99.37%, PPV average = 99.29%. Similarly, the unimodal ABP delineator achieved Se gross = 99.93%, PPV gross = 99.99%, Se average = 99.93%, PPV average = 99.99%, whereas the proposed multimodal beat detector achieved Se gross = 99.65%, PPV gross = 99.91%, Se average = 99.68%, PPV average = 99.91%.
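Sample entropy, the signal-quality metric named above, can be sketched for short 1-D segments as follows. The parameters follow common conventions (m = 2, r = 0.2 × standard deviation); the paper's exact settings and its fusion rule are not given in the abstract.

```python
# Sample entropy (SampEn) for a 1-D signal segment (hedged sketch).
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()       # tolerance radius
    n = len(x)

    def count_matches(mm):
        # n - m templates of length mm, so counts at m and m + 1 are comparable.
        templates = np.array([x[i:i + mm] for i in range(n - m)])
        count = 0
        for i in range(len(templates) - 1):
            # Chebyshev distance from template i to all later templates.
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= r)
        return count

    b = count_matches(m)         # matches of length m
    a = count_matches(m + 1)     # matches of length m + 1
    return -np.log(a / b) if a > 0 and b > 0 else np.inf
```

A higher SampEn value suggests a noisier segment; a fusion scheme can then weight the cleaner of the ECG and ABP channels more heavily when picking beats.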
Reliable absolute analog code retrieval approach for 3D measurement
NASA Astrophysics Data System (ADS)
Yu, Shuang; Zhang, Jing; Yu, Xiaoyang; Sun, Xiaoming; Wu, Haibin; Chen, Deyun
2017-11-01
The wrapped phase of the phase-shifting approach can be unwrapped using Gray code, but both wrapped phase errors and Gray code decoding errors can result in period jump errors, which lead to gross measurement errors. Therefore, this paper presents a reliable absolute analog code retrieval approach. The combination of unequal-period Gray code and phase-shifting patterns at high frequencies is used to obtain the high-frequency absolute analog code, and at low frequencies the same unequal-period combination patterns are used to obtain the low-frequency absolute analog code. Next, the difference between the two absolute analog codes is employed to eliminate period jump errors, so that a reliable unwrapped result can be obtained. Error analysis was used to determine the applicable conditions, and the approach was verified both theoretically and experimentally. Theoretical analysis and experimental results demonstrate that the proposed approach can perform reliable analog code unwrapping.
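The period-jump correction step can be sketched as below, assuming per-pixel arrays for the high-frequency wrapped phase, the Gray-code fringe order, and a low-frequency absolute phase; the variable names and the simple rounding rule are illustrative, not the paper's exact formulation.

```python
# Period-jump correction using two absolute phase codes (hedged sketch).
import numpy as np

def correct_period_jumps(phi_wrapped_hi, k_gray, phi_abs_lo, ratio):
    """phi_wrapped_hi: wrapped phase in [-pi, pi); k_gray: fringe order decoded
    from Gray code; phi_abs_lo: low-frequency absolute phase; ratio: f_hi/f_lo."""
    phi_abs_hi = phi_wrapped_hi + 2 * np.pi * k_gray
    reference = phi_abs_lo * ratio          # expected high-frequency absolute phase
    # Integer number of periods by which the two codes disagree at each pixel:
    jump = np.round((phi_abs_hi - reference) / (2 * np.pi))
    return phi_abs_hi - 2 * np.pi * jump    # remove the detected period jumps
```

Pixels where the fringe order is off by an integer show up as a non-zero `jump` and are pulled back onto the low-frequency reference, which is the essence of using the difference between the two absolute codes.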
Alotaibi, Madawi; Long, Toby; Kennedy, Elizabeth; Bavishi, Siddhi
2014-01-01
The purpose of this study was to review published research on the use of the Gross Motor Function Measure (GMFM-88 and GMFM-66) as an outcome measure to determine whether these tools detect changes in gross motor function in children with cerebral palsy (CP) undergoing interventions. A comprehensive literature search was conducted using Medline and PubMed to identify studies published from January 2000 through January 2011 that reported the accuracy of the GMFM-88 and GMFM-66 in measuring changes over time in children with CP undergoing interventions. The keywords used for the search were "GMFM" and "CP". Two of the authors (M.A. and S.B.) reviewed the titles and abstracts found in the databases. The methodological quality of the studies was assessed using the Critical Review Form-Quantitative Studies. Of 62 papers initially identified, 21 studies fulfilled the inclusion criteria. These articles comprise three longitudinal studies, six randomized controlled trials, four repeated-measures designs, six pre-post test designs, one case series, and one non-randomized prospective study. The included studies were generally of moderate to high methodological quality. The studies included children from a wide age range of 10 months to 16 years. According to the National Health and Medical Research Council, the study designs were level II, III-2, III-3 and IV. The review suggests that the GMFM-88 and GMFM-66 are useful as outcome measures to detect changes in gross motor function in children with CP undergoing interventions. Implications for Rehabilitation: Accurate measurement of change in gross motor skill acquisition is important to determine the effectiveness of intervention programs in children with cerebral palsy (CP). The Gross Motor Function Measure (GMFM-88 and GMFM-66) are common tools used by rehabilitation specialists to measure gross motor function in children with CP. The GMFM appears to be an effective outcome tool for measuring change in gross motor function according to a small number of randomized controlled studies utilizing convenience samples.
Results from a new air pollution model were tested against data from the Southern California Air Quality Study (SCAQS) period of 26-29 August 1987. Gross errors for sulfate, sodium, light absorption, temperatures, surface solar radiation, sulfur dioxide gas, formaldehyde gas, and ...
21 CFR 58.130 - Conduct of a nonclinical laboratory study.
Code of Federal Regulations, 2010 CFR
2010-04-01
... specimen in a manner that precludes error in the recording and storage of data. (d) Records of gross... that specimen histopathologically. (e) All data generated during the conduct of a nonclinical laboratory study, except those that are generated by automated data collection systems, shall be recorded...
21 CFR 58.130 - Conduct of a nonclinical laboratory study.
Code of Federal Regulations, 2013 CFR
2013-04-01
... specimen in a manner that precludes error in the recording and storage of data. (d) Records of gross... that specimen histopathologically. (e) All data generated during the conduct of a nonclinical laboratory study, except those that are generated by automated data collection systems, shall be recorded...
21 CFR 58.130 - Conduct of a nonclinical laboratory study.
Code of Federal Regulations, 2011 CFR
2011-04-01
... specimen in a manner that precludes error in the recording and storage of data. (d) Records of gross... that specimen histopathologically. (e) All data generated during the conduct of a nonclinical laboratory study, except those that are generated by automated data collection systems, shall be recorded...
Mixture modeling of multi-component data sets with application to ion-probe zircon ages
NASA Astrophysics Data System (ADS)
Sambridge, M. S.; Compston, W.
1994-12-01
A method is presented for detecting multiple components in a population of analytical observations for zircon and other ages. The procedure uses an approach known as mixture modeling in order to estimate the most likely ages, proportions and number of distinct components in a given data set. Particular attention is paid to estimating errors in the estimated ages and proportions. At each stage of the procedure several alternative numerical approaches are suggested, each having its own advantages in terms of efficiency and accuracy. The methodology is tested on synthetic data sets simulating two or more mixed populations of zircon ages. In this case the true ages and proportions of each population are known and compare well with the results of the new procedure. Two examples are presented of its use with sets of SHRIMP 238U-206Pb zircon ages from Palaeozoic rocks. A published data set for altered zircons from bentonite at Meishucun, South China, previously treated as a single-component population after screening for gross alteration effects, can be resolved into two components by the new procedure and their ages, proportions and standard errors estimated. The older component, at 530 +/- 5 Ma (2 sigma), is our best current estimate for the age of the bentonite. Mixture modeling of a data set for unaltered zircons from a tonalite elsewhere defines the magmatic 238U-206Pb age at high precision (2 sigma +/- 1.5 Ma), but one-quarter of the 41 analyses detect hidden and significantly older cores.
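The general approach can be illustrated with a small EM algorithm for a mixture of normal components in which each analysis has its own measurement error. This is a hedged sketch of the idea described above; the paper's estimator and its error-estimation details may differ.

```python
# EM for a k-component age mixture with per-analysis sigmas (hedged sketch).
import numpy as np
from scipy.stats import norm

def em_age_mixture(ages, sigmas, k=2, n_iter=200, rng=np.random.default_rng(1)):
    mu = rng.choice(ages, size=k, replace=False)   # component ages (initial)
    w = np.full(k, 1.0 / k)                        # mixing proportions
    for _ in range(n_iter):
        # E-step: responsibility of component j for analysis i.
        dens = w * norm.pdf(ages[:, None], loc=mu, scale=sigmas[:, None])
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: inverse-variance weighted component means and proportions.
        prec = resp / sigmas[:, None] ** 2
        mu = (prec * ages[:, None]).sum(0) / prec.sum(0)
        w = resp.mean(0)
    return mu, w

# Synthetic two-population example (ages in Ma, hypothetical values):
rng = np.random.default_rng(2)
ages = np.concatenate([rng.normal(530, 5, 30), rng.normal(545, 5, 10)])
sigmas = np.full(40, 5.0)
print(em_age_mixture(ages, sigmas))   # recovers roughly (530, 545) and (0.75, 0.25)
```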
Aminiahidashti, Hamed; Hosseininejad, Seyed Mohammad; Montazer, Hosein; Bozorgi, Farzad; Goli Khatir, Iraj; Jahanian, Fateme; Raee, Behnaz
2014-01-01
Introduction: Spontaneous bacterial peritonitis (SBP), a monomicrobial infection of ascites fluid, is one of the most important causes of morbidity and mortality in cirrhotic patients. This study aimed to determine the diagnostic accuracy of ascites fluid color in the detection of SBP in cirrhotic cases referred to the emergency department. Methods: Cirrhotic patients referred to the ED for paracentesis of ascites fluid were enrolled. For all studied patients, the results of laboratory analysis and the gross appearance of the ascites fluid were registered and reviewed by two emergency medicine specialists. The sensitivity, specificity, positive and negative predictive values, and positive and negative likelihood ratios of the ascites fluid gross appearance in the detection of SBP were measured with 95% confidence intervals. Results: The project was performed on 80 cirrhotic patients with ascites (52.5% female). The mean age of the subjects was 56.25±12.21 years (range 35-81). Laboratory findings revealed SBP in 23 (29%) cases. Fifty-nine (73%) cases had a transparent ascites fluid appearance, of whom 17 (29%) suffered from SBP. Of the 21 (26%) cases with an opaque ascites appearance, 15 (71%) had SBP. The sensitivity and specificity of the ascites fluid appearance in the detection of SBP were 46.88% (CI: 30.87-63.55) and 87.50% (95% CI: 75.3-94.14), respectively. Conclusion: It seems that the gross appearance of ascites fluid has poor diagnostic accuracy in the detection of SBP and, considering its low sensitivity, it cannot be used as a good screening tool for this purpose. PMID:26495366
Natural radionuclides in waste water discharged from coal-fired power plants in Serbia.
Janković, Marija M; Todorović, Dragana J; Sarap, Nataša B; Krneta Nikolić, Jelena D; Rajačić, Milica M; Pantelić, Gordana K
2016-12-01
Investigation of the natural radioactivity levels in water around power plants, as well as in plants, coal, ash, slag and soil, together with assessment of the associated radiation hazard, is an emerging topic of interest. This paper focuses on the results of the radioactivity analysis of waste water samples from five coal-fired power plants in Serbia (Nikola Tesla A, Nikola Tesla B, Kolubara, Morava and Kostolac), which were analyzed in the period 2003-2015. River water samples taken upstream and downstream from the power plants, drain water and overflow water were analyzed. In the water samples, gamma spectrometry analysis was performed as well as determination of gross alpha and beta activity. The natural radionuclide 40K was detected by gamma spectrometry, while the concentrations of other radionuclides, 226Ra, 235U and 238U, were usually below the minimum detectable activity (MDA). 232Th and the artificial radionuclide 137Cs were not detected in these samples. Gross alpha and beta activities were determined by the α/β low-level proportional counter Thermo Eberline FHT 770 T. In the analyzed samples, gross alpha activity ranged from the MDA to 0.47 Bq L⁻¹, while the gross beta activity ranged from the MDA to 1.55 Bq L⁻¹.
Direction Dependent Effects In Widefield Wideband Full Stokes Radio Imaging
NASA Astrophysics Data System (ADS)
Jagannathan, Preshanth; Bhatnagar, Sanjay; Rau, Urvashi; Taylor, Russ
2015-01-01
Synthesis imaging in radio astronomy is affected by instrumental and atmospheric effects which introduce direction-dependent gains. The antenna power pattern varies as a function of both time and frequency. The broadband, time-varying nature of the antenna power pattern, when not corrected, leads to gross errors in full Stokes imaging and flux estimation. In this poster we explore the errors that arise in image deconvolution when the time and frequency dependence of the antenna power pattern is not accounted for. Simulations were conducted with the wideband full Stokes power pattern of the Very Large Array (VLA) antennas to demonstrate the level of errors arising from direction-dependent gains. Our estimate is that these errors will be significant in wideband full-polarization mosaic imaging as well, and algorithms to correct them will be crucial for many upcoming large-area surveys (e.g., VLASS).
Sitaraman, Shivakumar; Ham, Young S.; Gharibyan, Narek; ...
2017-03-27
Here, fuel assemblies in the spent fuel pool are stored by suspending them in two vertically stacked layers at the Atucha Unit 1 nuclear power plant (Atucha-I). This introduces the unique problem of verifying the presence of fuel in either layer without physically moving the fuel assemblies. Given that the facility uses both natural uranium and slightly enriched uranium at 0.85 wt% 235U and has been in operation since 1974, a wide range of burnups and cooling times can exist in any given pool. A gross defect detection tool, the spent fuel neutron counter (SFNC), has been used at the site to verify the presence of fuel up to burnups of 8000 MWd/t. At higher discharge burnups, the existing signal processing software of the tool was found to fail due to nonlinearity of the source term with burnup.
Welch, Alan H.
1995-01-01
Gross-beta activity has been used as an indicator of beta-emitting isotopes in water since at least the early 1950s. Originally designed for detection of radioactive releases from nuclear facilities and weapons tests, analysis of gross-beta activity is widely used in studies of naturally occurring radioactivity in ground water. Analyses of about 800 samples from 5 ground-water regions of the United States provide a basis for evaluating the utility of this measurement. The data suggest that measured gross-beta activities are due to (1) long-lived radionuclides in ground water, and (2) ingrowth of beta-emitting radionuclides during holding times between collection of samples and laboratory measurements. Although 40K and 228Ra appear to be the primary sources of beta activity in ground water, the sum of 40K plus 228Ra appears to be less than the measured gross-beta activity in most ground-water samples. The difference between the contribution from these radionuclides and gross-beta activity is most pronounced in ground water with gross-beta activities > 10 pCi/L, where these 2 radionuclides account for less than one-half the measured gross-beta activity. One exception is ground water from the Coastal Plain of New Jersey, where 40K plus 228Ra generally contribute most of the gross-beta activity. In contrast, 40K and 228Ra generally contribute most of the beta activity in ground water with gross-beta activities < 1 pCi/L. The gross-beta technique does not measure all beta activity in ground water. Although 3H contributes beta activity to some ground water, it is driven from the sample before counting and therefore is not detected by gross-beta measurements. Beta-emitting radionuclides with half-lives shorter than a few days can decay to low values between sampling and counting. Although little is known about concentrations of most short-lived beta-emitting radionuclides in environmental ground water (water unaffected by direct releases from nuclear facilities and weapons tests), their activities are expected to be low. Ingrowth of beta-emitting radionuclides during sample holding times can contribute to gross-beta activity, particularly in ground water with gross-beta activities > 10 pCi/L. Ingrowth of the beta-emitting progeny of 238U, specifically 234Pa and 234Th, contributes much of the measured gross-beta activity in ground water from 4 of the 5 areas studied. Consequently, gross-beta activity measurements commonly overestimate the abundance of beta-emitting radionuclides actually present in ground water. Differing sample holding times before analysis lead to differing amounts of ingrowth of the two progeny. Therefore, holding times can affect observed gross-beta measurements, particularly in ground water with 238U activities that are moderate to high compared with the activity of 40K plus 228Ra. Uncertainties associated with counting efficiencies for beta particles with different energies further complicate the interpretation of gross-beta measurements.
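The holding-time effect can be illustrated with the first-order ingrowth law for a single progeny. The sketch below assumes the progeny activity is zero at collection and uses the 24.1-day half-life of 234Th; the parent 238U activity is a hypothetical value chosen for illustration.

```python
# Ingrowth of a beta-emitting progeny toward equilibrium with its parent (sketch).
import numpy as np

def ingrowth_activity(parent_activity_pci_l, half_life_days, holding_days):
    """Progeny activity A(t) = A_parent * (1 - exp(-lambda * t)),
    assuming zero progeny activity at sample collection."""
    lam = np.log(2) / half_life_days
    return parent_activity_pci_l * (1 - np.exp(-lam * holding_days))

for days in (7, 30, 90):
    a = ingrowth_activity(10.0, 24.1, days)   # 10 pCi/L of 238U, hypothetical
    print(f"{days:3d} d holding time -> {a:.1f} pCi/L of 234Th beta activity")
```

Even a one-month holding time grows the 234Th beta activity to more than half the parent activity, which is why differing holding times translate into differing gross-beta results.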
Brain Research: The Necessity for Separating Sites, Actions and Functions.
ERIC Educational Resources Information Center
Meeker, Mary
Educators, as applied scientists, must work in partnership with investigative scientists who are researching brain functions in order to reach a better understanding of gifted students and students who are intelligent but do not learn. Improper understanding of brain functions can cause gross errors in educational placement. Until recently, the…
a Method of Generating dem from Dsm Based on Airborne Insar Data
NASA Astrophysics Data System (ADS)
Lu, W.; Zhang, J.; Xue, G.; Wang, C.
2018-04-01
Traditional terrestrial survey methods for acquiring DEMs cannot meet the requirement of acquiring large quantities of data in real time, but a DSM can be obtained quickly using dual-antenna synthetic aperture radar interferometry, and generating the DEM from that DSM is faster and more accurate. It is therefore important to derive DEMs from DSMs based on airborne InSAR data. This paper presents a method to generate DEMs from DSMs accurately. Two steps are applied to acquire an accurate DEM. First, when the DSM is generated by interferometry, unavoidable factors such as layover and shadow produce gross errors that degrade the data accuracy, so an adaptive threshold segmentation method is adopted to remove the gross errors, with the threshold selected according to the coherence of the interferometry. Second, the DEM is generated by a progressive triangulated irregular network densification filtering algorithm. Finally, experimental results are compared with existing high-precision DEM results. The results show that this method can effectively filter out buildings, vegetation and other objects to obtain a high-precision DEM.
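The first step can be sketched as masking DSM cells whose interferometric coherence falls below an adaptively chosen threshold, since layover and shadow produce gross height errors where coherence is low. Using Otsu's method for the threshold is an assumption for illustration; the paper only states that the threshold follows the coherence.

```python
# Coherence-based masking of gross errors in an InSAR DSM (hedged sketch).
import numpy as np
from skimage.filters import threshold_otsu

def mask_gross_errors(dsm, coherence):
    """Return the DSM with low-coherence cells flagged as invalid."""
    t = threshold_otsu(coherence)          # adaptive, data-driven threshold
    cleaned = dsm.copy()
    cleaned[coherence < t] = np.nan        # unreliable heights (layover/shadow)
    return cleaned, t
```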
NASA Astrophysics Data System (ADS)
Pan, X. G.; Wang, J. Q.; Zhou, H. Y.
2013-05-01
A variance component estimation (VCE) method based on a semi-parametric estimator with a data-depth weighting matrix is proposed, because coupled systematic model errors and gross errors exist in the multi-source heterogeneous measurement data of combined space- and ground-based TT&C (Telemetry, Tracking and Command) systems. The uncertain model error is estimated with the semi-parametric model, and outliers are suppressed with the data-depth weighting matrix. With the model error and outliers restrained, the VCE can be improved and used to estimate the weight matrix for observation data containing uncertain model errors or outliers. A simulation experiment was carried out under combined space-ground TT&C conditions. The results show that the new VCE based on model error compensation can determine rational weights for the multi-source heterogeneous data and restrain outliers.
Improved assessment of gross and net primary productivity of Canada's landmass
NASA Astrophysics Data System (ADS)
Gonsamo, Alemu; Chen, Jing M.; Price, David T.; Kurz, Werner A.; Liu, Jane; Boisvenue, Céline; Hember, Robbie A.; Wu, Chaoyang; Chang, Kuo-Hsien
2013-12-01
We assess Canada's gross primary productivity (GPP) and net primary productivity (NPP) using the boreal ecosystem productivity simulator (BEPS) at 250 m spatial resolution with improved input parameter and driver fields and improved phenology and nutrient release parameterization schemes. BEPS is a process-based two-leaf enzyme kinetic terrestrial ecosystem model designed to simulate energy, water, and carbon (C) fluxes using spatial data sets of meteorology, remotely sensed land surface variables, soil properties, and photosynthesis and respiration rate parameters. Two improved key land surface variables, leaf area index (LAI) and land cover type, are derived at 250 m from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor. For diagnostic error assessment, we use nine forest flux tower sites where all measured C flux, meteorology, and ancillary data sets are available. The errors due to input drivers and parameters are then independently corrected for Canada-wide GPP and NPP simulations. The optimized LAI use, for example, reduced the absolute bias in GPP from 20.7% to 1.1% for hourly BEPS simulations. Following the error diagnostics and corrections, daily GPP and NPP are simulated over Canada at 250 m spatial resolution, the highest-resolution simulation yet for the country or any other comparable region. Total NPP (GPP) for Canada's land area was 1.27 (2.68) Pg C for 2008, with forests contributing 1.02 (2.2) Pg C. The annual comparisons between measured and simulated GPP show that the mean differences are not statistically significant (p > 0.05, paired t test). The main BEPS simulation error sources are from the driver fields.
Raff, Lester J; Engel, George; Beck, Kenneth R; O'Brien, Andrea S; Bauer, Meagan E
2009-02-01
The elimination or reduction of medical errors has been a main focus of health care enterprises in the United States since the year 2000. Elimination of errors in patient and specimen identification is a key component of this focus and is the number one goal in the Joint Commission's 2008 National Patient Safety Goals Laboratory Services Program. To evaluate the effectiveness of using permanent inks to maintain specimen identity in sequentially submitted prostate needle biopsies. For a 12-month period, a grossing technician stained each prostate core with permanent ink developed for the inking of pathology specimens. A different color was used for each patient, with all the prostate cores from all vials for a particular patient inked with the same color. Five colors were used sequentially: green, blue, yellow, orange, and black. The ink was diluted with distilled water to a consistency that allowed application of a thin, uniform coating of ink along the edges of the prostate core. The time required to ink patient specimens comprising different numbers of vials and prostate biopsies was recorded. The number and type of inked specimen discrepancies were evaluated. The identified discrepancy rate for prostate biopsy patients was 0.13%. The discrepancy rate in terms of the total number of prostate blocks was 0.014%. Diluted inks adhered to biopsy contours throughout tissue processing. The tissue showed no untoward reactions to the inks. Inking did not affect staining (histochemical or immunohistochemical) or pathologic evaluation. On average, inking prostate needle biopsies increases grossing time by 20%. Inking of all prostate core biopsies with colored inks, in sequential order, is an aid in maintaining specimen identity. It is a simple and effective method of addressing Joint Commission patient safety goals by maintaining specimen identity during processing of similar types of gross specimens. This technique may be applicable in other specialty laboratories and high-volume laboratories, where many similar tissue specimens are processed.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-20
... procedures to follow to ensure that a fuel filter impending bypass condition due to gross fuel contamination... fuel filter impending bypass condition due to gross fuel contamination is detected in a timely manner... flight crew of a left engine fuel filter contamination and imminent bypass condition, which may indicate...
Interaction of Language Processing and Motor Skill in Children with Specific Language Impairment
ERIC Educational Resources Information Center
DiDonato Brumbach, Andrea C.; Goffman, Lisa
2014-01-01
Purpose: To examine how language production interacts with speech motor and gross and fine motor skill in children with specific language impairment (SLI). Method: Eleven children with SLI and 12 age-matched peers (4-6 years) produced structurally primed sentences containing particles and prepositions. Utterances were analyzed for errors and for…
Effects of Listening Conditions, Error Types, and Ensemble Textures on Error Detection Skills
ERIC Educational Resources Information Center
Waggoner, Dori T.
2011-01-01
This study was designed with three main purposes: (a) to investigate the effects of two listening conditions on error detection accuracy, (b) to compare error detection responses for rhythm errors and pitch errors, and (c) to examine the influences of texture on error detection accuracy. Undergraduate music education students (N = 18) listened to…
Anastasiou, Ioannis; Pournaras, Christos; Mitropoulos, Dionysios; Constantinides, Constantinos A.
2013-01-01
Macroscopic hematuria accounts for 4% to 20% of all urological visits. Renal artery aneurysms (RAAs) are detected in approximately 0.01%–1% of the general population, while intraparenchymal renal artery aneurysms (IPRAAs) are rarer still, detected in less than 10% of patients with RAAs. We present the case of a 58-year-old woman who came into the emergency room (ER) complaining of gross hematuria over the previous four days. Although the first urine sample in the ER was clear, after a cough episode severe gross hematuria began, leaving the patient hemodynamically unstable. Finally, a radical nephrectomy was performed, and an IPRAA was the final diagnosis. Hematuria that worsens with coughing can be attributed to a ruptured intraparenchymal renal artery aneurysm, which, although a rare entity, is a life-threatening medical emergency. PMID:23864981
Natural radioactivity of riverbank sediments of the Maritza and Tundja Rivers in Turkey.
Aytas, Sule; Yusan, Sabriye; Aslani, Mahmoud A A; Karali, Turgay; Turkozu, D Alkim; Gok, Cem; Erenturk, Sema; Gokce, Melis; Oguz, K Firat
2012-01-01
This article presents the first results on natural radionuclides in the Maritza and Tundja river sediments in the vicinity of Edirne city, Turkey. The aim of the article is to describe the natural radioactivity concentrations as a baseline for further studies and to obtain the distribution patterns of radioactivity in the trans-boundary river sediments of the Maritza and Tundja, which are shared by Turkey, Bulgaria and Greece. Sediment samples were collected during the period August 2007-April 2010. The riverbank sediment samples were first analyzed for their pH, organic matter content and soil texture. The gross alpha/beta and 238U, 232Th and 40K activity concentrations were then investigated in the collected sediment samples. The mean and standard error of the mean values of the gross alpha and gross beta activity concentrations were found to be 91 ± 11 and 410 ± 69 Bq/kg for the Maritza and 86 ± 11 and 583 ± 109 Bq/kg for the Tundja river sediments, respectively. Moreover, the mean and standard error of the mean values of the 238U, 232Th and 40K activity concentrations were determined as 219 ± 68, 128 ± 55, 298 ± 13 and as 186 ± 98, 121 ± 68, 222 ± 30 Bq/kg for the Maritza and Tundja Rivers, respectively. Absorbed dose rates (D) and annual effective dose equivalents have been calculated for each sampling point. The average absorbed dose rate and annual effective dose equivalent were found to be 191 and 169 nGy/h, and 2 and 2 mSv/y, for the Maritza and Tundja river sediments, respectively.
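The reported absorbed dose rates can be reproduced from the mean activity concentrations with the widely used UNSCEAR conversion coefficients (nGy/h per Bq/kg: 0.462 for 238U, 0.604 for 232Th, 0.0417 for 40K). The sketch below applies them to the values quoted above; converting to an annual effective dose additionally requires occupancy and dose-conversion factors not stated in the abstract.

```python
# Absorbed dose rate in air from sediment activity concentrations (sketch).
def absorbed_dose_rate(c_u238, c_th232, c_k40):
    """UNSCEAR-style conversion, activities in Bq/kg, result in nGy/h."""
    return 0.462 * c_u238 + 0.604 * c_th232 + 0.0417 * c_k40

# Mean activities from the abstract:
print(absorbed_dose_rate(219, 128, 298))   # ~191 nGy/h for Maritza, as reported
print(absorbed_dose_rate(186, 121, 222))   # ~168 nGy/h for Tundja, close to 169
```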
REPORT FOR COMMERCIAL GRADE NICKEL CHARACTERIZATION AND BENCHMARKING
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2012-12-20
Oak Ridge Associated Universities (ORAU), under the Oak Ridge Institute for Science and Education (ORISE) contract, has completed the collection, sample analysis, and review of analytical results to benchmark the concentrations of gross alpha-emitting radionuclides, gross beta-emitting radionuclides, and technetium-99 in commercial grade nickel. This report presents methods, change management, observations, and statistical analysis of materials procured from sellers representing nine countries on four continents. The data suggest there is a low probability of detecting alpha- and beta-emitting radionuclides in commercial nickel. Technetium-99 was not detected in any samples, thus suggesting it is not present in commercial nickel.
NASA Technical Reports Server (NTRS)
Snow, Frank; Harman, Richard; Garrick, Joseph
1988-01-01
The Gamma Ray Observatory (GRO) spacecraft needs highly accurate attitude knowledge to achieve its mission objectives. Utilizing the fixed-head star trackers (FHSTs) for observations and gyroscopes for attitude propagation, the discrete Kalman filter processes the attitude data to obtain an onboard accuracy of 86 arc seconds (3 sigma). A combination of linear analysis and simulations using the GRO Software Simulator (GROSS) is employed to investigate the Kalman filter for stability and for the effects of corrupted observations (misalignment, noise), incomplete dynamic modeling, and nonlinear errors on filter performance. In the simulations, onboard attitude is compared with true attitude, the sensitivity of attitude error to model errors is graphed, and a statistical analysis is performed on the residuals of the Kalman filter. In this paper, the modeling and sensor errors that degrade the Kalman filter solution beyond mission requirements are studied, and methods are offered to identify the source of these errors.
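A standard way to carry out the kind of residual analysis described above is a chi-square consistency test on the filter innovations. The following Python sketch is generic, with hypothetical names; it is not the GROSS simulator's implementation.

```python
import numpy as np

def innovation_gate(z, z_pred, H, P, R, gate=9.0):
    """Chi-square consistency check on a Kalman filter innovation.
    z: measurement; z_pred: predicted measurement H @ x_pred;
    P: predicted state covariance; R: measurement noise covariance.
    Returns the normalized innovation squared and an accept flag."""
    nu = z - z_pred                          # innovation (filter residual)
    S = H @ P @ H.T + R                      # innovation covariance
    d2 = float(nu @ np.linalg.solve(S, nu))  # squared Mahalanobis distance
    return d2, d2 <= gate
```

A sustained run of rejected measurements points to a systematic cause (tracker misalignment, mismodeled dynamics) rather than random noise.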
Milekovic, Tomislav; Ball, Tonio; Schulze-Bonhage, Andreas; Aertsen, Ad; Mehring, Carsten
2013-01-01
Background Brain-machine interfaces (BMIs) can translate the neuronal activity underlying a user's movement intention into movements of an artificial effector. In spite of continuous improvements, errors in movement decoding are still a major problem of current BMI systems. If the difference between the decoded and intended movements becomes noticeable, it may lead to an execution error. Outcome errors, where subjects fail to reach a certain movement goal, are also present during online BMI operation. Detecting such errors can be beneficial for BMI operation: (i) errors can be corrected online after being detected and (ii) the adaptive BMI decoding algorithm can be updated to make fewer errors in the future. Methodology/Principal Findings Here, we show that error events can be detected from human electrocorticography (ECoG) during a continuous task with high precision, given a temporal tolerance of 300–400 milliseconds. We quantified the error detection accuracy and showed that, using only a small subset of 2×2 ECoG electrodes, 82% of detection information for outcome errors and 74% of detection information for execution errors available from all ECoG electrodes could be retained. Conclusions/Significance The error detection method presented here could be used to correct errors made during BMI operation or to adapt a BMI algorithm to make fewer errors in the future. Furthermore, our results indicate that a smaller ECoG implant could be used for error detection. Reducing the size of an ECoG electrode implant used for BMI decoding and error detection could significantly reduce the medical risk of implantation. PMID:23383315
DOE Office of Scientific and Technical Information (OSTI.GOV)
David A. King, CHP, PMP
Oak Ridge Associated Universities (ORAU), under the Oak Ridge Institute for Science and Education (ORISE) contract, collected split surface water samples with Nuclear Fuel Services (NFS) representatives on August 22, 2012. Representatives from the U.S. Nuclear Regulatory Commission and Tennessee Department of Environment and Conservation were also in attendance. Samples were collected at four surface water stations, as required in the approved Request for Technical Assistance number 11-018. These stations included Nolichucky River upstream (NRU), Nolichucky River downstream (NRD), Martin Creek upstream (MCU), and Martin Creek downstream (MCD). Both ORAU and NFS performed gross alpha and gross beta analyses, and the results were compared using the duplicate error ratio (DER), also known as the normalized absolute difference. A DER ≤ 3 indicates that, at the 99% confidence level, split sample results do not differ significantly when compared to their respective one standard deviation (sigma) uncertainties. The NFS split sample report does not specify the confidence level of reported uncertainties. Therefore, standard two sigma reporting is assumed and uncertainty values were divided by 1.96. A comparison of split sample results, using the DER equation, indicates one set with a DER greater than 3. A DER of 3.1 is calculated for gross alpha results from ORAU sample 5198W0003 and NFS sample MCU-310212003. The ORAU result is 0.98 ± 0.30 pCi/L (value ± 2 sigma) compared to the NFS result of -0.08 ± 0.60 pCi/L. Relatively high DER values are not unexpected for low (e.g., background) analyte concentrations analyzed by separate laboratories, as is the case here. It is noted, however, that NFS uncertainties are at least twice the ORAU uncertainties, which contributes to the elevated DER value. Differences in ORAU and NFS minimum detectable activities are even more pronounced. The comparison of ORAU and NFS split samples produces reasonably consistent results for low (e.g., background) concentrations.
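As a rough illustration of the arithmetic behind the comparison above, the Python sketch below computes the DER from two results reported with 2-sigma uncertainties, dividing both by 1.96 as the report describes; the function name is hypothetical.

```python
import math

def duplicate_error_ratio(x1, u1_2sigma, x2, u2_2sigma):
    """Duplicate error ratio (normalized absolute difference) between
    split-sample results reported with 2-sigma uncertainties."""
    s1 = u1_2sigma / 1.96   # convert assumed 2-sigma to 1-sigma, per the report
    s2 = u2_2sigma / 1.96
    return abs(x1 - x2) / math.sqrt(s1**2 + s2**2)

# Gross alpha pair from the report:
# ORAU 0.98 +/- 0.30 pCi/L vs NFS -0.08 +/- 0.60 pCi/L
print(round(duplicate_error_ratio(0.98, 0.30, -0.08, 0.60), 1))  # -> 3.1
```

The worked pair reproduces the DER of 3.1 quoted in the report.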
Martini, Matheus C; Caserta, Leonardo C; Dos Santos, Marcia M A B; Barnabé, Ana C S; Durães-Carvalho, Ricardo; Padilla, Marina A; Simão, Raphael M; Rizotto, Laís S; Simas, Paulo V M; Bastos, Juliana C S; Cardoso, Tereza C; Felippe, Paulo A N; Ferreira, Helena L; Arns, Clarice W
2018-06-01
The detection of avian coronaviruses (AvCoV) in wild birds and the emergence of new AvCoV have increased in the past few years. In the present study, the pathogenicity of three AvCoV isolates was investigated in day-old chicks. One AvCoV isolated from a pigeon, which clustered with the Massachusetts vaccine serotype, and two AvCoV isolated from chickens, which grouped with a Brazilian genotype lineage, were used. Clinical signs, gross lesions, histopathological changes, ciliary activity, viral RNA detection, and serology were evaluated during 42 days post infection. All AvCoV isolates induced clinical signs, gross lesions in the trachea, moderate histopathological changes in the respiratory tract, and mild changes in other tissues. AvCoV isolated from the pigeon sample caused complete tracheal ciliostasis over a longer time span. Specific viral RNA was detected in all tissues, but the highest RNA loads were detected in the digestive tract (cloacal swabs and ileum). The highest antibody levels were also detected in the group infected with an isolate from the pigeon. These results confirm the pathogenicity of Brazilian variants, which can cause disease and induce gross lesions and histopathological changes in chickens. Our results suggest that non-Galliformes birds can also play a role in the ecology of AvCoV.
Hypervitaminosis D associated with a vitamin D dispensing error.
Jacobsen, Ryan B; Hronek, Brett W; Schmidt, Ginelle A; Schilling, Margo L
2011-10-01
To report a case of hypervitaminosis D resulting in hypercalcemia and acute kidney injury in a 70-year-old female who was prescribed a standard dose of vitamin D but given a toxic dose of vitamin D 50,000 IU (1.25 mg) daily resulting from a dispensing error. A 70-year-old female in her usual state of health was instructed to begin supplementation with vitamin D 1000 IU daily. Three months later she developed confusion, slurred speech, unstable gait, and increased fatigue. She was hospitalized for hypercalcemia and acute kidney injury secondary to hypervitaminosis D. All vitamin D supplementation was discontinued and 5 months after discharge, the patient's serum calcium and vitamin D concentrations, as well as renal function, had returned to baseline values. Upon review of the patient's records, it was discovered that she had been taking vitamin D 50,000 IU daily. There is an increased interest in vitamin D, resulting in more health care providers recommending--and patients taking--supplemental vitamin D. Hypervitaminosis D is rarely reported and generally only in the setting of gross excess of vitamin D. This report highlights a case of hypervitaminosis D in the setting of a prescribed standard dose of vitamin D that resulted in toxic ingestion of vitamin D 50,000 IU daily due to a dispensing error. As more and more people use vitamin D supplements, it is important to recognize that, while rare, hypervitaminosis D is a possibility and dosage conversion of vitamin D units can result in errors. Health care providers and patients should be educated on the advantages and risks associated with vitamin D supplementation and be informed of safety measures to avoid hypervitaminosis D. In addition, health care providers should understand dosage conversion regarding vitamin D and electronic prescribing and dispensing software should be designed to detect such errors.
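For illustration, the dosage conversion at the heart of this case can be checked in a few lines of Python. The helper name is hypothetical; the conversion factor (40 IU of vitamin D = 1 microgram) is the standard one.

```python
# 40 IU of vitamin D equals 1 microgram, i.e., 1 IU = 0.025 micrograms.
IU_TO_MICROGRAMS = 0.025

def vitamin_d_iu_to_mg(dose_iu):
    """Convert a vitamin D dose from international units to milligrams."""
    return dose_iu * IU_TO_MICROGRAMS / 1000.0

print(vitamin_d_iu_to_mg(1_000))   # prescribed dose: 0.025 mg
print(vitamin_d_iu_to_mg(50_000))  # dispensed dose: 1.25 mg, as in the case report
```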
NASA Technical Reports Server (NTRS)
Holt, James M.; Clanton, Stephen E.
1999-01-01
Results of the International Space Station (ISS) Node 2 Internal Active Thermal Control System (IATCS) gross leakage analysis are presented for evaluating total leakage flowrates and volume discharge caused by a gross leakage event (i.e. open boundary condition). A Systems Improved Numerical Differencing Analyzer and Fluid Integrator (SINDA/FLUINT) thermal hydraulic mathematical model (THMM) representing the Node 2 IATCS was developed to simulate system performance under steady-state nominal conditions as well as the transient flow effects resulting from an open line exposed to ambient. The objective of the analysis was to determine the adequacy of the leak detection software in limiting the quantity of fluid lost during a gross leakage event to within an acceptable level.
NASA Technical Reports Server (NTRS)
Holt, James M.; Clanton, Stephen E.
2001-01-01
Results of the International Space Station (ISS) Node 2 Internal Active Thermal Control System (IATCS) gross leakage analysis are presented for evaluating total leakage flow rates and volume discharge caused by a gross leakage event (i.e. open boundary condition). A Systems Improved Numerical Differencing Analyzer and Fluid Integrator (SINDA85/FLUINT) thermal hydraulic mathematical model (THMM) representing the Node 2 IATCS was developed to simulate system performance under steady-state nominal conditions as well as the transient flow effect resulting from an open line exposed to ambient. The objective of the analysis was to determine the adequacy of the leak detection software in limiting the quantity of fluid lost during a gross leakage event to within an acceptable level.
Occurrence and Nonoccurrence of Random Sequences: Comment on Hahn and Warren (2009)
ERIC Educational Resources Information Center
Sun, Yanlong; Tweney, Ryan D.; Wang, Hongbin
2010-01-01
On the basis of the statistical concept of waiting time and on computer simulations of the "probabilities of nonoccurrence" (p. 457) for random sequences, Hahn and Warren (2009) proposed that given people's experience of a finite data stream from the environment, the gambler's fallacy is not as gross an error as it might seem. We deal with two…
Can a sample of Landsat sensor scenes reliably estimate the global extent of tropical deforestation?
R. L. Czaplewski
2003-01-01
Tucker and Townshend (2000) conclude that wall-to-wall coverage is needed to avoid gross errors in estimations of deforestation rates because tropical deforestation is concentrated along roads and rivers. They specifically question the reliability of the 10% sample of Landsat sensor scenes used in the global remote sensing survey conducted by the Food and...
Harnessing Sparse and Low-Dimensional Structures for Robust Clustering of Imagery Data
ERIC Educational Resources Information Center
Rao, Shankar Ramamohan
2009-01-01
We propose a robust framework for clustering data. In practice, data obtained from real measurement devices can be incomplete, corrupted by gross errors, or not correspond to any assumed model. We show that, by properly harnessing the intrinsic low-dimensional structure of the data, these kinds of practical problems can be dealt with in a uniform…
NASA Technical Reports Server (NTRS)
Federhofer, J. A.
1974-01-01
Laboratory data verifying the pulse quaternary modulation (PQM) theoretical predictions are presented. The first laboratory PQM laser communication system was successfully fabricated, integrated, tested and demonstrated. System bit error rate tests were performed and, in general, indicated approximately a 2 dB degradation from the theoretically predicted results. These tests indicated that no gross errors were made in the initial theoretical analysis of PQM. The relative ease with which the entire PQM laboratory system was integrated and tested indicates that PQM is a viable candidate modulation scheme for an operational 400 Mbps baseband laser communication system.
Nozari, Nazbanou; Dell, Gary S.; Schwartz, Myrna F.
2011-01-01
Despite the existence of speech errors, verbal communication is successful because speakers can detect (and correct) their errors. The standard theory of speech-error detection, the perceptual-loop account, posits that the comprehension system monitors production output for errors. Such a comprehension-based monitor, however, cannot explain the double dissociation between comprehension and error-detection ability observed in aphasic patients. We propose a new theory of speech-error detection which is instead based on the production process itself. The theory borrows from studies of forced-choice-response tasks the notion that error detection is accomplished by monitoring response conflict via a frontal brain structure, such as the anterior cingulate cortex. We adapt this idea to the two-step model of word production, and test the model-derived predictions on a sample of aphasic patients. Our results show a strong correlation between patients' error-detection ability and the model's characterization of their production skills, and no significant correlation between error detection and comprehension measures, thus supporting a production-based monitor generally, and the implemented conflict-based monitor in particular. The successful application of the conflict-based theory to error detection in linguistic as well as non-linguistic domains points to a domain-general monitoring system. PMID:21652015
ERIC Educational Resources Information Center
Hallin, Anna Eva; Reuterskiöld, Christina
2017-01-01
Purpose: The first aim of this study was to investigate if Swedish-speaking school-age children with language impairment (LI) show specific morphosyntactic vulnerabilities in error detection. The second aim was to investigate the effects of lexical frequency on error detection, an overlooked aspect of previous error detection studies. Method:…
Identification of deficiencies in seasonal rainfall simulated by CMIP5 climate models
NASA Astrophysics Data System (ADS)
Dunning, Caroline M.; Allan, Richard P.; Black, Emily
2017-11-01
An objective technique for analysing seasonality, in terms of regime, progression and timing of the wet seasons, is applied in the evaluation of CMIP5 simulations across continental Africa. Atmosphere-only and coupled integrations capture the gross observed patterns of seasonal progression and give mean onset/cessation dates within 18 days of the observational dates for 11 of the 13 regions considered. Accurate representation of seasonality over central-southern Africa and West Africa (excluding the southern coastline) adds credence to future projected changes in seasonality here. However, coupled simulations exhibit timing biases over the Horn of Africa, with the long rains 20 days late on average. Although both sets of simulations detect biannual rainfall seasonal cycles for East and Central Africa, coupled simulations fail to capture the biannual regime over the southern West African coastline. This is linked with errors in the Gulf of Guinea sea surface temperature (SST) and deficient representation of the SST/rainfall relationship.
Analysis of the impact of error detection on computer performance
NASA Technical Reports Server (NTRS)
Shin, K. C.; Lee, Y. H.
1983-01-01
Conventionally, reliability analyses either assume that a fault/error is detected immediately following its occurrence, or neglect the damage caused by latent errors. Though unrealistic, this assumption was imposed in order to avoid the difficulty of determining the respective probabilities that a fault induces an error and that the error is then detected in a random amount of time after its occurrence. As a remedy for this problem, a model is proposed to analyze the impact of error detection on computer performance under moderate assumptions. Error latency, the time interval between the occurrence of an error and the moment of detection, is used to measure the effectiveness of a detection mechanism. This model is used to: (1) predict the probability of producing an unreliable result, and (2) estimate the loss of computation due to fault and/or error.
Automatic-repeat-request error control schemes
NASA Technical Reports Server (NTRS)
Lin, S.; Costello, D. J., Jr.; Miller, M. J.
1983-01-01
Error detection incorporated with automatic-repeat-request (ARQ) is widely used for error control in data communication systems. This method of error control is simple and provides high system reliability. If a properly chosen code is used for error detection, virtually error-free data transmission can be attained. Various types of ARQ and hybrid ARQ schemes, and error detection using linear block codes are surveyed.
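To make the ARQ idea concrete, here is a minimal Python sketch of detect-and-retransmit using a CRC, a cyclic (hence linear block) code commonly used for this purpose. The frame layout and function names are illustrative, not taken from the surveyed schemes.

```python
def crc16_ccitt(data: bytes, poly=0x1021, init=0xFFFF) -> int:
    """Bitwise CRC-16/CCITT over a byte string."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def send(frame: bytes) -> bytes:
    """Append the 16-bit check value to the frame."""
    return frame + crc16_ccitt(frame).to_bytes(2, "big")

def receive(codeword: bytes):
    """Recompute the CRC; on mismatch the ARQ layer requests retransmission."""
    frame, check = codeword[:-2], int.from_bytes(codeword[-2:], "big")
    return frame if crc16_ccitt(frame) == check else None

corrupted = bytearray(send(b"telemetry"))
corrupted[0] ^= 0x01                      # flip one bit in transit
assert receive(bytes(corrupted)) is None  # single-bit errors are always detected
```

Undetected errors remain possible only when the error pattern is itself a codeword, which is what makes a properly chosen code "virtually error-free" in practice.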
What errors do peer reviewers detect, and does training improve their ability to detect them?
Schroter, Sara; Black, Nick; Evans, Stephen; Godlee, Fiona; Osorio, Lyda; Smith, Richard
2008-10-01
To analyse data from a trial and report the frequencies with which major and minor errors are detected at a general medical journal, the types of errors missed and the impact of training on error detection. 607 peer reviewers at the BMJ were randomized to two intervention groups receiving different types of training (face-to-face training or a self-taught package) and a control group. Each reviewer was sent the same three test papers over the study period, each of which had nine major and five minor methodological errors inserted. Participants were BMJ peer reviewers. The outcomes were the quality of review, assessed using a validated instrument, and the number and type of errors detected before and after training. The number of major errors detected varied over the three papers. The interventions had small effects. At baseline (Paper 1) reviewers found an average of 2.58 of the nine major errors, with no notable difference between the groups. The mean number of errors reported was similar for the second and third papers, 2.71 and 3.0, respectively. Biased randomization was the error detected most frequently in all three papers, with over 60% of the reviewers who rejected the papers identifying this error. Reviewers who did not reject the papers found fewer errors, and the proportion finding biased randomization was less than 40% for each paper. Editors should not assume that reviewers will detect most major errors, particularly those concerned with the context of the study. Short training packages have only a slight impact on improving error detection.
Gross Motor Development in Children Aged 3-5 Years, United States 2012.
Kit, Brian K; Akinbami, Lara J; Isfahani, Neda Sarafrazi; Ulrich, Dale A
2017-07-01
Objective Gross motor development in early childhood is important in fostering greater interaction with the environment. The purpose of this study is to describe gross motor skills among US children aged 3-5 years using the Test of Gross Motor Development (TGMD-2). Methods We used 2012 NHANES National Youth Fitness Survey (NNYFS) data, which included TGMD-2 scores obtained according to an established protocol. Outcome measures included locomotor and object control raw and age-standardized scores. Means and standard errors were calculated for demographic and weight status with SUDAAN using sample weights to calculate nationally representative estimates, and survey design variables to account for the complex sampling methods. Results The sample included 339 children aged 3-5 years. As expected, locomotor and object control raw scores increased with age. Overall mean standardized scores for locomotor and object control were similar to the mean value previously determined using a normative sample. Girls had a higher mean locomotor, but not mean object control, standardized score than boys (p < 0.05). However, the mean locomotor standardized scores for both boys and girls fell into the range categorized as "average." There were no other differences by age, race/Hispanic origin, weight status, or income in either of the subtest standardized scores (p > 0.05). Conclusions In a nationally representative sample of US children aged 3-5 years, TGMD-2 mean locomotor and object control standardized scores were similar to the established mean. These results suggest that standardized gross motor development among young children generally did not differ by demographic or weight status.
A Dynamic Game on Network Topology for Counterinsurgency Applications
2015-03-26
scenario. This study creates a dynamic game on network topology to provide insight into the effectiveness of offensive targeting strategies determined by...focused upon the diffusion of thoughts and innovations throughout complex social networks. Coleman et al. (1966) and Ryan & Gross (1950) investigated...free networks make them extremely resilient against errors but very vulnerable to attack. Most interestingly, a determined attacker can remove well
Use of the One-Minute Preceptor as a Teaching Tool in the Gross Anatomy Laboratory
ERIC Educational Resources Information Center
Chan, Lap Ki; Wiseman, Jeffrey
2011-01-01
The one-minute preceptor (OMP) is a time-efficient technique used for teaching in busy clinical settings. It consists of five microskills: (1) get a commitment from the student, (2) probe for supporting evidence, (3) reinforce what was done right, (4) correct errors and fill in omissions, and (5) teach a general rule. It can also be used to…
26 CFR 1.669(c)-2A - Computation of the beneficiary's income and tax for a prior taxable year.
Code of Federal Regulations, 2010 CFR
2010-04-01
... either the exact method or the short-cut method shall be determined by reference to the information... shows a mathematical error on its face which resulted in the wrong amount of tax being paid for such... amounts in such gross income, shall be based upon the return after the correction of such mathematical...
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Silva, T; Ketcha, M; Siewerdsen, J H
Purpose: In image-guided spine surgery, mapping 3D preoperative images to 2D intraoperative images via 3D-2D registration can provide valuable assistance in target localization. However, the presence of surgical instrumentation, hardware implants, and soft-tissue resection/displacement causes mismatches in image content, confounding existing registration methods. Manual/semi-automatic methods to mask such extraneous content are time consuming, user-dependent, error prone, and disruptive to clinical workflow. We developed and evaluated 2 novel similarity metrics within a robust registration framework to overcome such challenges in target localization. Methods: An IRB-approved retrospective study in 19 spine surgery patients included 19 preoperative 3D CT images and 50 intraoperative mobile radiographs in cervical, thoracic, and lumbar spine regions. A neuroradiologist provided truth definition of vertebral positions in CT and radiography. 3D-2D registration was performed using the CMA-ES optimizer with 4 gradient-based image similarity metrics: (1) gradient information (GI); (2) gradient correlation (GC); (3) a novel variant referred to as gradient orientation (GO); and (4) a second variant referred to as truncated gradient correlation (TGC). Registration accuracy was evaluated in terms of the projection distance error (PDE) of the vertebral levels. Results: Conventional similarity metrics were susceptible to gross registration error and failure modes associated with the presence of surgical instrumentation: the median PDE and interquartile range was 33.0±43.6 mm for GI and 23.0±92.6 mm for GC. The robust metrics GO and TGC, on the other hand, demonstrated major improvement in PDE (7.6±9.4 mm and 8.1±18.1 mm, respectively) and elimination of gross failure modes. Conclusion: The proposed GO and TGC similarity measures improve registration accuracy and robustness to gross failure in the presence of strong image content mismatch. Such registration capability could offer valuable assistance in target localization without disruption of clinical workflow. G. Kleinszig and S. Vogt are employees of Siemens Healthcare.
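As context for the metrics compared above, the conventional gradient correlation (GC) baseline can be sketched as the mean normalized cross-correlation of the gradient images. This Python sketch follows the textbook definition; the authors' robust GO and TGC variants modify this baseline and are not reproduced here.

```python
import numpy as np

def ncc(a, b, eps=1e-9):
    """Zero-mean normalized cross-correlation of two arrays."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def gradient_correlation(fixed, moving):
    """Gradient correlation (GC): mean NCC of the row- and column-gradient
    images. Emphasizing edges makes the metric less sensitive to smooth
    intensity mismatch than raw-intensity correlation."""
    gy_f, gx_f = np.gradient(fixed.astype(float))
    gy_m, gx_m = np.gradient(moving.astype(float))
    return 0.5 * (ncc(gy_f, gy_m) + ncc(gx_f, gx_m))
```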
Córcoles, A.D.; Magesan, Easwar; Srinivasan, Srikanth J.; Cross, Andrew W.; Steffen, M.; Gambetta, Jay M.; Chow, Jerry M.
2015-01-01
The ability to detect and deal with errors when manipulating quantum systems is a fundamental requirement for fault-tolerant quantum computing. Unlike classical bits that are subject to only digital bit-flip errors, quantum bits are susceptible to a much larger spectrum of errors, for which any complete quantum error-correcting code must account. Whilst classical bit-flip detection can be realized via a linear array of qubits, a general fault-tolerant quantum error-correcting code requires extending into a higher-dimensional lattice. Here we present a quantum error detection protocol on a two-by-two planar lattice of superconducting qubits. The protocol detects an arbitrary quantum error on an encoded two-qubit entangled state via quantum non-demolition parity measurements on another pair of error syndrome qubits. This result represents a building block towards larger lattices amenable to fault-tolerant quantum error correction architectures such as the surface code. PMID:25923200
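The parity idea can be illustrated with a toy state-vector simulation in Python: on a two-qubit entangled state, a bit flip inverts the ZZ parity while a phase flip inverts the XX parity, so the pair of parities identifies the error type. This is only a numerical illustration of the expectation values, not the hardware protocol, which extracts the syndromes via ancilla qubits and non-demolition measurements.

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1, -1])
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # encoded two-qubit entangled state

def parity(state, op):
    """Expectation value of a two-qubit parity operator (here exactly +1 or -1)."""
    return float(np.real(state.conj() @ op @ state))

ZZ, XX = np.kron(Z, Z), np.kron(X, X)

for name, err in [("no error", np.eye(4)),
                  ("bit flip on qubit 0", np.kron(X, I)),
                  ("phase flip on qubit 0", np.kron(Z, I))]:
    s = err @ bell
    print(f"{name:22s} ZZ={parity(s, ZZ):+.0f} XX={parity(s, XX):+.0f}")
```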
Uric acid, an important screening tool to detect inborn errors of metabolism: a case series.
Jasinge, Eresha; Kularatnam, Grace Angeline Malarnangai; Dilanthi, Hewa Warawitage; Vidanapathirana, Dinesha Maduri; Jayasena, Kandana Liyanage Subhashinie Priyadarshika Kapilani Menike; Chandrasiri, Nambage Dona Priyani Dhammika; Indika, Neluwa Liyanage Ruwan; Ratnayake, Pyara Dilani; Gunasekara, Vindya Nandani; Fairbanks, Lynette Dianne; Stiburkova, Blanka
2017-09-06
Uric acid is the metabolic end product of purine metabolism in humans. Altered serum and urine uric acid levels (both above and below the reference ranges) are an indispensable marker for detecting rare inborn errors of metabolism. We describe the case scenarios of 4 Sri Lankan patients with abnormal uric acid levels in blood and urine. CASE 1: A one-and-a-half-year-old boy was investigated for haematuria and a calculus in the bladder. Xanthine crystals were seen on microscopic examination of the urine sediment. Low uric acid concentrations in serum and low urinary fractional excretion of uric acid, associated with high urinary excretion of xanthine and hypoxanthine, were compatible with xanthine oxidase deficiency. CASE 2: An 8-month-old boy presented with intractable seizures, feeding difficulties, screaming episodes, microcephaly, facial dysmorphism and severe neurodevelopmental delay. A low uric acid level in serum, low fractional excretion of uric acid and radiological findings were consistent with possible molybdenum cofactor deficiency. The diagnosis was confirmed by elevated levels of xanthine, hypoxanthine and sulfocysteine in urine. CASE 3: A 3-year-10-month-old boy presented with global developmental delay, failure to thrive, dystonia and self-destructive behaviour. High uric acid levels in serum, increased fractional excretion of uric acid and an absent hypoxanthine-guanine phosphoribosyltransferase enzyme level confirmed the diagnosis of Lesch-Nyhan syndrome. CASE 4: A 9-year-old boy was investigated for lower abdominal pain, gross haematuria and a right renal calculus. A low uric acid level in serum and increased fractional excretion of uric acid pointed towards hereditary renal hypouricaemia, which was confirmed by genetic studies. Abnormal uric acid levels in blood and urine are a valuable tool in screening for clinical conditions related to derangement of the nucleic acid metabolic pathway.
Integrated analysis of error detection and recovery
NASA Technical Reports Server (NTRS)
Shin, K. G.; Lee, Y. H.
1985-01-01
An integrated modeling and analysis of error detection and recovery is presented. When fault latency and/or error latency exist, the system may suffer from multiple faults or error propagation, which seriously degrades the fault-tolerant capability. Several detection models were developed that enable analysis of the effect of detection mechanisms on the subsequent error handling operations and the overall system reliability. Following detection of the faulty unit and reconfiguration of the system, the contaminated processes or tasks have to be recovered. The error recovery strategies employed depend on the detection mechanisms and the available redundancy. Several recovery methods, including rollback recovery, are considered. The recovery overhead is evaluated as an index of the capabilities of the detection and reconfiguration mechanisms.
Challenges of primate embryonic stem cell research.
Bavister, Barry D; Wolf, Don P; Brenner, Carol A
2005-01-01
Embryonic stem (ES) cells hold great promise for treating degenerative diseases, including diabetes, Parkinson's, Alzheimer's, neural degeneration, and cardiomyopathies. This research is controversial to some because producing ES cells requires destroying embryos, which generally means human embryos. However, some of the surplus human embryos available from in vitro fertilization (IVF) clinics may have a high rate of genetic errors and therefore would be unsuitable for ES cell research. Although gross chromosome errors can readily be detected in ES cells, other anomalies such as mitochondrial DNA defects may have gone unrecognized. An insurmountable problem is that there are no human ES cells derived from in vivo-produced embryos to provide normal comparative data. In contrast, some monkey ES cell lines have been produced using in vivo-generated, normal embryos obtained from fertile animals; these can represent a "gold standard" for primate ES cells. In this review, we argue a need for strong research programs using rhesus monkey ES cells, conducted in parallel with studies on human ES and adult stem cells, to derive the maximum information about the biology of normal stem cells and to produce technical protocols for their directed differentiation into safe and functional replacement cells, tissues, and organs. In contrast, ES cell research using only human cell lines is likely to be incomplete, which could hinder research progress, and delay or diminish the effective application of ES cell technology to the treatment of human diseases.
Sensitivity in error detection of patient specific QA tools for IMRT plans
NASA Astrophysics Data System (ADS)
Lat, S. Z.; Suriyapee, S.; Sanghangthum, T.
2016-03-01
The high complexity of dose calculation in treatment planning and the need for accurate delivery of IMRT plans demand a high-precision verification method. The purpose of this study is to investigate the error detection capability of patient specific QA tools for IMRT plans. Two H&N and two prostate IMRT plans were studied with the MapCHECK2 and portal dosimetry QA tools. Measurements were undertaken for the original plans and for modified plans with errors introduced. The intentional errors comprised prescribed dose changes (±2 to ±6%) and position shifts in the X-axis and Y-axis (±1 to ±5 mm). After measurement, gamma pass rates between original and modified plans were compared. The average gamma pass rates for the original H&N and prostate plans were 98.3% and 100% for MapCHECK2 and 95.9% and 99.8% for portal dosimetry, respectively. In the H&N plans, MapCHECK2 detected position shift errors starting from 3 mm while portal dosimetry detected errors starting from 2 mm. Both devices showed similar sensitivity in detecting position shift errors in the prostate plans. For the H&N plans, MapCHECK2 detected dose errors starting at ±4%, whereas portal dosimetry detected them from ±2%. For the prostate plans, both devices identified dose errors starting from ±4%. Sensitivity of error detection depends on the type of error and on plan complexity.
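For reference, the gamma evaluation used by both QA tools combines a dose-difference criterion with a distance-to-agreement criterion. A brute-force global gamma pass rate for small 2D dose grids might be sketched as follows; this is illustrative only, as clinical tools use optimized search and interpolation.

```python
import numpy as np

def gamma_pass_rate(ref, meas, spacing_mm, dose_tol=0.03, dist_mm=3.0):
    """Brute-force global gamma (3%, 3 mm by default) on 2D dose grids.
    Dose difference is normalized to the reference maximum. O(N^2):
    usable only for small grids."""
    ny, nx = ref.shape
    yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    dd_norm = dose_tol * ref.max()
    passed = 0
    for iy in range(ny):
        for ix in range(nx):
            # squared distance and dose difference to every measured point
            dist2 = ((yy - iy) ** 2 + (xx - ix) ** 2) * spacing_mm ** 2
            dose2 = (meas - ref[iy, ix]) ** 2
            gamma2 = dist2 / dist_mm ** 2 + dose2 / dd_norm ** 2
            passed += gamma2.min() <= 1.0   # point passes if any gamma <= 1
    return passed / (nx * ny)
```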
Tokamak plasma current disruption infrared control system
Kugel, H.W.; Ulrickson, M.
1984-04-16
This invention is directed to the diagnosis and detection of gross or macroinstabilities in a magnetically-confined fusion plasma device. Detection is performed in real time, and is prompt such that correction of the instability can be initiated in a timely fashion.
A Mechanism for Error Detection in Speeded Response Time Tasks
ERIC Educational Resources Information Center
Holroyd, Clay B.; Yeung, Nick; Coles, Michael G. H.; Cohen, Jonathan D.
2005-01-01
The concept of error detection plays a central role in theories of executive control. In this article, the authors present a mechanism that can rapidly detect errors in speeded response time tasks. This error monitor assigns values to the output of cognitive processes involved in stimulus categorization and response generation and detects errors…
ERIC Educational Resources Information Center
Nozari, Nazbanou; Dell, Gary S.; Schwartz, Myrna F.
2011-01-01
Despite the existence of speech errors, verbal communication is successful because speakers can detect (and correct) their errors. The standard theory of speech-error detection, the perceptual-loop account, posits that the comprehension system monitors production output for errors. Such a comprehension-based monitor, however, cannot explain the…
Clover: Compiler directed lightweight soft error resilience
Liu, Qingrui; Lee, Dongyoon; Jung, Changhee; ...
2015-05-01
This paper presents Clover, a compiler directed soft error detection and recovery scheme for lightweight soft error resilience. The compiler carefully generates soft error tolerant code based on idempotent processing without explicit checkpoints. During program execution, Clover relies on a small number of acoustic wave detectors deployed in the processor to identify soft errors by sensing the wave made by a particle strike. To cope with DUE (detected unrecoverable errors) caused by the sensing latency of error detection, Clover leverages a novel selective instruction duplication technique called tail-DMR (dual modular redundancy). Once a soft error is detected by either the sensor or the tail-DMR, Clover takes care of the error as in the case of exception handling. To recover from the error, Clover simply redirects program control to the beginning of the code region where the error is detected. The experimental results demonstrate that the average runtime overhead is only 26%, which is a 75% reduction compared to that of the state-of-the-art soft error resilience technique.
1982-09-01
... 7. Directing Work ... Differing Site Conditions ... 19. Inspector Improperly Stopping Work ... 8. Changes in Specs ... 20. Fraud, Latent Defects, or Gross Errors ... 9. Challenges ... Doyle, Peter G. "What the Contractor Expects of the Architect," Building Design and Construction, May 1978 ... 12. "Exam Required ...
General subspace learning with corrupted training data via graph embedding.
Bao, Bing-Kun; Liu, Guangcan; Hong, Richang; Yan, Shuicheng; Xu, Changsheng
2013-11-01
We address the following subspace learning problem: supposing we are given a set of labeled, corrupted training data points, how to learn the underlying subspace, which contains three components: an intrinsic subspace that captures certain desired properties of a data set, a penalty subspace that fits the undesired properties of the data, and an error container that models the gross corruptions possibly existing in the data. Given a set of data points, these three components can be learned by solving a nuclear norm regularized optimization problem, which is convex and can be efficiently solved in polynomial time. Using the method as a tool, we propose a new discriminant analysis (i.e., supervised subspace learning) algorithm called Corruptions Tolerant Discriminant Analysis (CTDA), in which the intrinsic subspace is used to capture the features with high within-class similarity, the penalty subspace takes the role of modeling the undesired features with high between-class similarity, and the error container takes charge of fitting the possible corruptions in the data. We show that CTDA can well handle the gross corruptions possibly existing in the training data, whereas previous linear discriminant analysis algorithms arguably fail in such a setting. Extensive experiments conducted on two benchmark human face data sets and one object recognition data set show that CTDA outperforms the related algorithms.
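Solvers for nuclear norm regularized problems of this kind are typically built from two proximal operators: singular value thresholding for the low-rank subspace terms and entrywise soft-thresholding for the sparse gross-error container. A minimal numpy sketch of both building blocks follows; this is a generic illustration, not the authors' solver.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of tau*||.||_*,
    i.e., argmin_X 0.5*||X - M||_F^2 + tau*||X||_*.
    Shrinking singular values yields the low-rank subspace estimate."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(M, lam):
    """Entrywise shrinkage: the proximal operator of lam*||.||_1.
    Keeps only large entries, which models sparse gross corruptions."""
    return np.sign(M) * np.maximum(np.abs(M) - lam, 0.0)
```

Alternating these two operators inside an ADMM- or proximal-gradient-style loop is the usual way such convex programs are solved in polynomial time.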
DOE Office of Scientific and Technical Information (OSTI.GOV)
King, David A.
Oak Ridge Associated Universities (ORAU), under the Oak Ridge Institute for Science and Education (ORISE) contract, collected split surface water samples with Nuclear Fuel Services (NFS) representatives on August 21, 2013. Representatives from the U.S. Nuclear Regulatory Commission (NRC) and the Tennessee Department of Environment and Conservation were also in attendance. Samples were collected at four surface water stations, as required in the approved Request for Technical Assistance number 11-018. These stations included Nolichucky River upstream (NRU), Nolichucky River downstream (NRD), Martin Creek upstream (MCU), and Martin Creek downstream (MCD). Both ORAU and NFS performed gross alpha and gross beta analyses, and the comparison of results using the duplicate error ratio (DER), also known as the normalized absolute difference, are tabulated. All DER values were less than 3 and results are consistent with low (e.g., background) concentrations.
Impact of Measurement Error on Synchrophasor Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yilu; Gracia, Jose R.; Ewing, Paul D.
2015-07-01
Phasor measurement units (PMUs), a type of synchrophasor, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is more likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.
Ko, Jooyeon; Kim, MinYoung
2013-03-01
The Gross Motor Function Measure (GMFM-88) is commonly used in the evaluation of gross motor function in children with cerebral palsy (CP). The relative reliability of GMFM-88 has been assessed in children with CP. However, little information is available regarding the absolute reliability or responsiveness of GMFM-88. The purpose of this study was to determine the absolute and relative reliability and the responsiveness of the GMFM-88 in evaluating gross motor function in children with CP. A clinical measurement design was used. Ten raters scored the GMFM-88 in 84 children (mean age=3.7 years, SD=1.9, range=10 months to 9 years 9 months) from video records across all Gross Motor Function Classification System (GMFCS) levels to establish interrater reliability. Two raters participated to assess intrarater reliability. Responsiveness was determined from 3 additional assessments after the baseline assessment. The interrater and intrarater intraclass correlation coefficients (ICCs) with 95% confidence intervals, standard error of measurement (SEM), smallest real difference (SRD), effect size (ES), and standardized response mean (SRM) were calculated. The relative reliability of the GMFM was excellent (ICCs=.952-1.000). The SEM and SRD for total score of the GMFM were acceptable (1.60 and 3.14, respectively). Additionally, the ES and SRM of the dimension goal scores increased gradually in the 3 follow-up assessments (GMFCS levels I and II: ES=0.5, 0.6, and 0.8 and SRM=1.3, 1.8, and 2.0; GMFCS levels III-V: ES=0.4, 0.7, and 0.9 and SRM=1.5, 1.7, and 2.0). Children over 10 years of age with CP were not included in this study, so the results should not be generalized to all children with CP. Both the reliability and the responsiveness of the GMFM-88 are reasonable for measuring gross motor function in children with CP.
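The absolute-reliability and responsiveness indices reported above follow conventional formulas: SEM = SD·√(1−ICC), SRD = 1.96·√2·SEM, ES = mean change / baseline SD, and SRM = mean change / SD of change. A small Python sketch, with a hypothetical function name:

```python
import numpy as np

def reliability_and_responsiveness(baseline, followup, icc):
    """Absolute reliability (SEM, SRD) and responsiveness (ES, SRM)
    for repeated scores, using the conventional formulas."""
    baseline = np.asarray(baseline, float)
    change = np.asarray(followup, float) - baseline
    sem = baseline.std(ddof=1) * np.sqrt(1.0 - icc)  # standard error of measurement
    srd = 1.96 * np.sqrt(2.0) * sem                  # smallest real difference
    es = change.mean() / baseline.std(ddof=1)        # effect size
    srm = change.mean() / change.std(ddof=1)         # standardized response mean
    return sem, srd, es, srm
```

A follow-up change larger than the SRD can then be read as exceeding measurement noise at the 95% level.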
DOE Office of Scientific and Technical Information (OSTI.GOV)
Almasi, Gheorghe; Blumrich, Matthias Augustin; Chen, Dong
Methods and apparatus perform fault isolation in multiple node computing systems using commutative error detection values (for example, checksums) to identify and to isolate faulty nodes. When information associated with a reproducible portion of a computer program is injected into a network by a node, a commutative error detection value is calculated. At intervals, node fault detection apparatus associated with the multiple node computer system retrieve commutative error detection values associated with the node and store them in memory. When the computer program is executed again by the multiple node computer system, new commutative error detection values are created and stored in memory. The node fault detection apparatus identifies faulty nodes by comparing commutative error detection values associated with reproducible portions of the application program generated by a particular node from different runs of the application program. Differences in values indicate a possible faulty node.
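A hedged sketch of the comparison step described above: the per-node detection value is made commutative (order-independent) so that packet reordering between runs does not change it, and nodes whose values differ across two runs of the same reproducible program phase are flagged. The names and log layout are hypothetical.

```python
import hashlib

def commutative_checksum(packets):
    """Order-independent detection value: XOR of per-packet digests,
    so network reordering between runs does not change the result."""
    acc = 0
    for p in packets:
        acc ^= int.from_bytes(hashlib.sha256(p).digest()[:8], "big")
    return acc

def suspect_nodes(run_a, run_b):
    """run_a / run_b map node ids to commutative_checksum(...) of the
    packets each node injected during the same reproducible phase.
    Any mismatch isolates the node as a suspect."""
    return [node for node in run_a if run_a[node] != run_b.get(node)]

run_a = {0: commutative_checksum([b"x", b"y"]), 1: commutative_checksum([b"z"])}
run_b = {0: commutative_checksum([b"y", b"x"]), 1: commutative_checksum([b"w"])}
print(suspect_nodes(run_a, run_b))  # -> [1]; node 0 matches despite reordering
```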
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gopan, O; Kalet, A; Smith, W
2016-06-15
Purpose: A standard tool for ensuring the quality of radiation therapy treatments is the initial physics plan review. However, little is known about its performance in practice. The goal of this study is to measure the effectiveness of physics plan review by introducing simulated errors into “mock” treatment plans and measuring the performance of plan review by physicists. Methods: We generated six mock treatment plans containing multiple errors. These errors were based on incident learning system data both within the department and internationally (SAFRON). These errors were scored for severity and frequency. Those with the highest scores were included in the simulations (13 errors total). Observer bias was minimized using a multiple co-correlated distractor approach. Eight physicists reviewed these plans for errors, with each physicist reviewing, on average, 3/6 plans. The confidence interval for the proportion of errors detected was computed using the Wilson score interval. Results: Simulated errors were detected in 65% of reviews [51–75%] (95% confidence interval [CI] in brackets). The following error scenarios had the highest detection rates: incorrect isocenter in DRRs/CBCT (91% [73–98%]) and a planned dose different from the prescribed dose (100% [61–100%]). Errors with low detection rates involved incorrect field parameters in the record and verify system (38% [18–61%]) and incorrect isocenter localization in the planning system (29% [8–64%]). Though pre-treatment QA failure was reliably identified (100%), less than 20% of participants reported the error that caused the failure. Conclusion: This is one of the first quantitative studies of error detection. Although physics plan review is a key safety measure and can identify some errors with high fidelity, other errors are more challenging to detect. This data will guide future work on standardization and automation. Creating new checks or improving existing ones (i.e., via automation) will help in detecting those errors with low detection rates.
Linear error analysis of slope-area discharge determinations
Kirby, W.H.
1987-01-01
The slope-area method can be used to calculate peak flood discharges when current-meter measurements are not possible. This calculation depends on several quantities, such as water-surface fall, that are subject to large measurement errors. Other critical quantities, such as Manning's n, are not even amenable to direct measurement but can only be estimated. Finally, scour and fill may cause gross discrepancies between the observed condition of the channel and the hydraulic conditions during the flood peak. The effects of these potential errors on the accuracy of the computed discharge have been estimated by statistical error analysis using a Taylor-series approximation of the discharge formula and the well-known formula for the variance of a sum of correlated random variates. The resultant error variance of the computed discharge is a weighted sum of covariances of the various observational errors. The weights depend on the hydraulic and geometric configuration of the channel. The mathematical analysis confirms the rule of thumb that relative errors in computed discharge increase rapidly when velocity heads exceed the water-surface fall, when the flow field is expanding and when lateral velocity variation (alpha) is large. It also confirms the extreme importance of accurately assessing the presence of scour or fill. © 1987.
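The first-order analysis described here reduces to propagating the input covariance through the gradient of the discharge formula, Var(Q) ≈ gᵀCg. A generic numpy sketch follows; the sensitivity vector g must come from differentiating the actual slope-area formula with respect to its inputs (fall, Manning's n, areas, and so on), which is not reproduced here.

```python
import numpy as np

def discharge_variance(grad_q, cov):
    """First-order (Taylor-series) error variance of a computed discharge:
    Var(Q) ~= g^T C g, where g holds the partial derivatives of the
    discharge formula with respect to each input and C is the covariance
    matrix of the input errors."""
    g = np.asarray(grad_q, float)
    return float(g @ np.asarray(cov, float) @ g)

# Hypothetical example: two correlated inputs with standard deviations
# 0.1 and 0.2, correlation 0.5, and sensitivities 3.0 and -1.5.
cov = np.array([[0.1**2, 0.5 * 0.1 * 0.2],
                [0.5 * 0.1 * 0.2, 0.2**2]])
print(discharge_variance([3.0, -1.5], cov))  # -> 0.09
```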
Tadayon, Saeid; Smith, C.F.
1994-01-01
Data were collected on physical properties and chemistry at 4 surface-water, 14 ground-water, and 4 bottom-sediment sites in the Rillito Creek basin, where artificial recharge of surface runoff is being considered. Concentrations of suspended sediment in streams generally increased with increases in streamflow and were higher during the summer. The surface water is a calcium bicarbonate type, and the ground water is a calcium-sodium bicarbonate type. Total trace elements in surface water that exceeded the U.S. Environmental Protection Agency primary maximum contaminant levels for drinking-water standards were barium, beryllium, cadmium, chromium, lead, mercury and nickel. Most unfiltered samples for suspended gross alpha as uranium, and unadjusted gross alpha plus gross beta, in surface water exceeded the U.S. Environmental Protection Agency and the State of Arizona drinking-water standards. Comparisons of trace-element concentrations in bottom sediment with those in soils of the western conterminous United States generally indicate similar concentrations for most of the trace elements, with the exceptions of scandium and tin. The maximum concentrations of total nitrite plus nitrate as nitrogen in three ground-water samples and of total lead in one ground-water sample exceeded the U.S. Environmental Protection Agency primary maximum contaminant levels for drinking-water standards. Seven organochlorine pesticides were detected in surface-water samples and nine in bottom-sediment samples. Three priority pollutants were detected in surface water, two were detected in ground water, and eleven were detected in bottom sediment. Low concentrations of oil and grease were detected in surface-water and bottom-sediment samples.
NASA Technical Reports Server (NTRS)
Buechler, W.; Tucker, A. G.
1981-01-01
Several methods were employed to detect both the occurrence and source of errors in the operational software of the AN/SLQ-32, a large embedded real-time electronic warfare command and control system for the ROLM 1606 computer. The ROLM computer provides information about invalid addressing, improper use of privileged instructions, stack overflows, and unimplemented instructions. Additionally, software techniques were developed to detect invalid jumps, indices out of range, infinite loops, stack underflows, and field size errors. Finally, data are saved to provide information about the status of the system when an error is detected. This information includes I/O buffers, interrupt counts, stack contents, and recently passed locations. The various errors detected, techniques to assist in debugging problems, and segment simulation on a nontarget computer are discussed. These error detection techniques were a major factor in the success of finding the primary cause of error in 98% of over 500 system dumps.
An investigation of reports of Controlled Flight Toward Terrain (CFTT)
NASA Technical Reports Server (NTRS)
Porter, R. F.; Loomis, J. P.
1981-01-01
Some 258 reports from more than 23,000 documents in the files of the Aviation Safety Reporting System (ASRS) were found to relate to the hazard of flight into terrain with no prior awareness by the crew of impending disaster. Examination of the reports indicates that human error was a causal factor in 64% of the incidents in which some threat of terrain conflict was experienced. Approximately two-thirds of the human errors were attributed to controllers, the most common discrepancy being a radar vector below the Minimum Vector Altitude (MVA). Errors by pilots were of a much more diverse nature and included a few instances of gross deviations from their assigned altitudes. The ground proximity warning system and the minimum safe altitude warning equipment were the initial recovery factor in some 18 serious incidents and were apparently the sole warning in six reported instances which otherwise would most probably have ended in disaster.
Performance analysis of a concatenated coding scheme for error control
NASA Technical Reports Server (NTRS)
Costello, D. J., Jr.; Lin, S.; Kasami, T.
1983-01-01
A concatenated coding scheme for error control in data communications is analyzed. In this scheme, the inner code is used for both error correction and detection, however, the outer code is used only for error detection. A retransmission is requested if the outer code detects the presence of errors after the inner code decoding. Probability of undetected error is derived and bounded. A particular example, proposed for the planetary program, is analyzed.
Probability of undetected error after decoding for a concatenated coding scheme
NASA Technical Reports Server (NTRS)
Costello, D. J., Jr.; Lin, S.
1984-01-01
A concatenated coding scheme for error control in data communications is analyzed. In this scheme, the inner code is used for both error correction and detection; however, the outer code is used only for error detection. A retransmission is requested if the outer code detects the presence of errors after the inner code decoding. The probability of undetected error is derived and bounded. A particular example, proposed for the NASA telecommand system, is analyzed.
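For a linear block code used purely for detection, the undetected-error probability on a binary symmetric channel follows from the code's weight distribution: P_ud = Σ_{i≥1} A_i p^i (1−p)^(n−i). The Python sketch below evaluates this sum, using the (7,4) Hamming code's weight distribution as a toy example; the bound analyzed in the paper is for the concatenated scheme, not this code.

```python
def p_undetected(nonzero_weights, p, n):
    """Probability that a binary symmetric channel with crossover
    probability p turns one codeword into another, i.e., that the error
    pattern is itself a nonzero codeword:
        P_ud = sum_i A_i * p**i * (1-p)**(n-i)."""
    return sum(A * p**i * (1 - p)**(n - i) for i, A in nonzero_weights.items())

# (7,4) Hamming code: A_3 = 7, A_4 = 7, A_7 = 1
print(p_undetected({3: 7, 4: 7, 7: 1}, p=1e-3, n=7))  # ~7e-9
```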
NASA Astrophysics Data System (ADS)
Hertwig, Denise; Burgin, Laura; Gan, Christopher; Hort, Matthew; Jones, Andrew; Shaw, Felicia; Witham, Claire; Zhang, Kathy
2015-12-01
Transboundary smoke haze caused by biomass burning frequently causes extreme air pollution episodes in maritime and continental Southeast Asia. With millions of people affected by this type of pollution every year, the task of introducing smoke-haze-related air quality forecasts is urgent. We investigate three severe haze episodes: June 2013 in Maritime SE Asia, induced by fires in central Sumatra, and March/April 2013 and 2014 on mainland SE Asia. Based on comparisons with surface measurements of PM10, we demonstrate that the combination of the Lagrangian dispersion model NAME with emissions derived from satellite-based active-fire detection provides reliable forecasts for the region. Contrasting two fire emission inventories shows that using algorithms to account for fire pixel obscuration by cloud or haze better captures the temporal variations and observed persistence of local pollution levels. Including up-to-date representations of fuel types in the area and using better conversion and emission factors is found to more accurately represent local concentration magnitudes, particularly for peat fires. With both emission inventories the overall spatial and temporal evolution of the haze events is captured qualitatively, with some error attributed to the resolution of the meteorological data driving the dispersion process. In order to arrive at a quantitative agreement with local PM10 levels, the simulation results need to be scaled. Considering the requirements of operational forecasts, we introduce a real-time bias correction technique to the modeling system to address systematic and random modeling errors, which successfully improves the results in terms of reduced normalized mean biases and fractional gross errors.
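The skill scores mentioned above (normalized mean bias and fractional gross error) and one plausible form of a multiplicative real-time correction can be sketched in a few lines of numpy; the paper's exact correction scheme may differ from this simple ratio estimator.

```python
import numpy as np

def nmb(model, obs):
    """Normalized mean bias of modeled vs observed concentrations."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    return (model - obs).sum() / obs.sum()

def fge(model, obs):
    """Fractional gross error: mean of 2|m - o| / (m + o)."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    return np.mean(2.0 * np.abs(model - obs) / (model + obs))

def bias_scale(model_recent, obs_recent):
    """One simple multiplicative correction: scale the forecast by the
    ratio of recent observed to modeled mean PM10 (an assumption here,
    not the published scheme)."""
    return np.mean(obs_recent) / np.mean(model_recent)
```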
Arrhythmia Evaluation in Wearable ECG Devices
Sadrawi, Muammar; Lin, Chien-Hung; Hsieh, Yita; Kuo, Chia-Chun; Chien, Jen Chien; Haraikawa, Koichi; Abbod, Maysam F.; Shieh, Jiann-Shing
2017-01-01
This study evaluates four databases from PhysioNet: the American Heart Association database (AHADB), Creighton University Ventricular Tachyarrhythmia database (CUDB), MIT-BIH Arrhythmia database (MITDB), and MIT-BIH Noise Stress Test database (NSTDB). The ANSI/AAMI EC57:2012 standard is used for the evaluation of the algorithms for supraventricular ectopic beat (SVEB), ventricular ectopic beat (VEB), atrial fibrillation (AF), and ventricular fibrillation (VF) via sensitivity, positive predictivity and false positive rate. Sample entropy, the fast Fourier transform (FFT), and a multilayer perceptron neural network with backpropagation training are selected for the integrated detection algorithms. The result for SVEB shows some improvement over a previous study that also utilized ANSI/AAMI EC57. Furthermore, the VEB gross evaluations of sensitivity and positive predictivity exceed 80%, except for the positive predictivity on the NSTDB database. For the AF gross evaluation on the MITDB database, the results show very good classification, except for the episode sensitivity. For the VF gross evaluation, the episode sensitivity and positive predictivity for the AHADB, MITDB, and CUDB exceed 80%, except for the MITDB episode positive predictivity, which is 75%. The achieved results show that the proposed integrated SVEB, VEB, AF, and VF detection algorithm classifies accurately according to ANSI/AAMI EC57:2012. In conclusion, the proposed integrated detection algorithm achieves good accuracy in comparison with previous studies. More advanced algorithms and hardware devices should be explored in future work on arrhythmia detection and evaluation. PMID:29068369
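Of the three ingredients named above, sample entropy is the least standard to implement; one common formulation is sketched below. Template-count conventions vary across implementations, so this should be read as illustrative rather than as the study's exact feature extractor.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy SampEn(m, r) of a 1-D signal; r is given as a
    fraction of the signal's standard deviation. Lower values indicate a
    more regular rhythm, higher values a more irregular one."""
    x = np.asarray(x, float)
    r *= x.std()

    def matches(mm):
        # count template pairs of length mm within Chebyshev distance r
        t = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        total = 0
        for i in range(len(t) - 1):
            dist = np.max(np.abs(t[i + 1:] - t[i]), axis=1)
            total += int(np.sum(dist <= r))
        return total

    B, A = matches(m), matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf
```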
Reducing errors benefits the field-based learning of a fundamental movement skill in children.
Capio, C M; Poolton, J M; Sit, C H P; Holmstrom, M; Masters, R S W
2013-03-01
Proficient fundamental movement skills (FMS) are believed to form the basis of more complex movement patterns in sports. This study examined the development of the FMS of overhand throwing in children through either an error-reduced (ER) or error-strewn (ES) training program. Students (n = 216), aged 8-12 years (M = 9.16, SD = 0.96), practiced overhand throwing in either a program that reduced errors during practice (ER) or one that was error-strewn (ES). The ER program reduced errors by incrementally raising task difficulty, while the ES program incrementally lowered task difficulty. Process-oriented assessment of throwing movement form (Test of Gross Motor Development-2) and product-oriented assessment of throwing accuracy (absolute error) were performed. Changes in performance were examined among children in the upper and lower quartiles of the pretest throwing accuracy scores. ER training participants showed greater gains in movement form and accuracy, and performed throwing more effectively with a concurrent secondary cognitive task. Movement form improved among girls, while throwing accuracy improved among children with low ability. Reducing performance errors in FMS training resulted in greater learning than a program that did not restrict errors. The reduced cognitive processing costs (effective dual-task performance) associated with such an approach suggest its potential benefits for children with developmental conditions. © 2011 John Wiley & Sons A/S.
Groundwater quality of the Gulf Coast aquifer system, Houston, Texas, 2010
Oden, Jeannette H.; Brown, Dexter W.; Oden, Timothy D.
2011-01-01
Gross alpha-particle and beta-particle activities for all 47 samples were analyzed at 72 hours after sample collection and again at 30 days after sample collection, allowing for the measurement of the activity of short-lived isotopes. Gross alpha-particle activities reported here were not adjusted for activity contributions by radon or uranium and are therefore conservatively high estimates when compared to the U.S. Environmental Protection Agency National Primary Drinking Water Regulation for adjusted gross alpha-particle activity. Gross alpha-particle activities in the samples ranged from R0.60 (the "R" denotes a nondetected result less than the sample-specific critical level) to 25.5 picocuries per liter at 30 days and from 2.58 to 39.7 picocuries per liter at 72 hours. Gross beta-particle activities ranged from 1.17 to 14.4 picocuries per liter at 30 days and from 1.97 to 4.4 picocuries per liter at 72 hours. Filtered uranium was detected in quantifiable amounts in all 47 wells sampled, with concentrations ranging from 0.03 to 42.7 micrograms per liter. One sample was analyzed for carbon-14, and the amount of modern atmospheric carbon was reported as 0.2 percent. Six source-water samples collected from municipal supply wells were analyzed for radium-226, and all concentrations were detectable (greater than their associated sample-specific critical level). Three source-water samples were analyzed for radon-222, and all concentrations were substantially greater than the associated sample-specific critical level.
Passarge, Michelle; Fix, Michael K; Manser, Peter; Stampanoni, Marco F M; Siebers, Jeffrey V
2017-04-01
To develop a robust and efficient process that detects relevant dose errors (dose errors of ≥5%) in external beam radiation therapy and directly indicates the origin of the error. The process is illustrated in the context of electronic portal imaging device (EPID)-based angle-resolved volumetric-modulated arc therapy (VMAT) quality assurance (QA), particularly as would be implemented in a real-time monitoring program. A Swiss cheese error detection (SCED) method was created as a paradigm for a cine EPID-based during-treatment QA. For VMAT, the method compares a treatment plan-based reference set of EPID images with images acquired over each 2° gantry angle interval. The process utilizes a sequence of independent consecutively executed error detection tests: an aperture check that verifies in-field radiation delivery and ensures no out-of-field radiation; output normalization checks at two different stages; global image alignment check to examine if rotation, scaling, and translation are within tolerances; pixel intensity check containing the standard gamma evaluation (3%, 3 mm) and pixel intensity deviation checks including and excluding high dose gradient regions. Tolerances for each check were determined. To test the SCED method, 12 different types of errors were selected to modify the original plan. A series of angle-resolved predicted EPID images were artificially generated for each test case, resulting in a sequence of precalculated frames for each modified treatment plan. The SCED method was applied multiple times for each test case to assess the ability to detect introduced plan variations. To compare the performance of the SCED process with that of a standard gamma analysis, both error detection methods were applied to the generated test cases with realistic noise variations. Averaged over ten test runs, 95.1% of all plan variations that resulted in relevant patient dose errors were detected within 2° and 100% within 14° (<4% of patient dose delivery). Including cases that led to slightly modified but clinically equivalent plans, 89.1% were detected by the SCED method within 2°. Based on the type of check that detected the error, determination of error sources was achieved. With noise ranging from no random noise to four times the established noise value, the averaged relevant dose error detection rate of the SCED method was between 94.0% and 95.8% and that of gamma between 82.8% and 89.8%. An EPID-frame-based error detection process for VMAT deliveries was successfully designed and tested via simulations. The SCED method was inspected for robustness with realistic noise variations, demonstrating that it has the potential to detect a large majority of relevant dose errors. Compared to a typical (3%, 3 mm) gamma analysis, the SCED method produced a higher detection rate for all introduced dose errors, identified errors in an earlier stage, displayed a higher robustness to noise variations, and indicated the error source. © 2017 American Association of Physicists in Medicine.
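The sequential "Swiss cheese" structure lends itself to a short sketch: independent checks run in a fixed order on each angular frame, and the first failing check both stops the pipeline and names the error source. The check internals and tolerances below are simplified placeholders, not the published ones, and assume the frames are numpy arrays.

    import numpy as np

    def aperture_check(ref, meas, tol=0.05):
        # no radiation outside the reference field (here: where ref is near zero);
        # assumes at least one out-of-field pixel exists in the reference
        outside = ref < 0.01 * ref.max()
        return meas[outside].max() <= tol * ref.max()

    def output_check(ref, meas, tol=0.03):
        # output normalization: total measured signal close to the reference total
        return abs(meas.sum() / ref.sum() - 1.0) <= tol

    def intensity_check(ref, meas, tol=0.03):
        # stand-in for the gamma and pixel-intensity-deviation checks
        return np.abs(meas - ref).max() <= tol * ref.max()

    CHECKS = [("aperture", aperture_check),
              ("output normalization", output_check),
              ("pixel intensity", intensity_check)]

    def sced_frame(ref, meas):
        for name, check in CHECKS:
            if not check(ref, meas):
                return False, name    # the failing check indicates the error source
        return True, None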
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoffman, D; Dyer, B; Kumaran Nair, C
Purpose: The Integral Quality Monitor (IQM), developed by iRT Systems GmbH (Koblenz, Germany), is a large-area, linac-mounted ion chamber used to monitor photon fluence during patient treatment. Our previous work evaluated the ion chamber's response to deviations from static 1×1 cm2 and 10×10 cm2 photon beams, along with other characteristics integral to its use in external beam error detection. The aim of this work is to simulate two external beam radiation delivery errors, quantify their detection, and evaluate the reduction in patient harm resulting from detection. Methods: Two well-documented radiation oncology delivery errors were selected for simulation. The first error was recreated by modifying a wedged whole-breast treatment, removing the physical wedge and calculating the planned dose with Pinnacle TPS (Philips Radiation Oncology Systems, Fitchburg, WI). The second error was recreated by modifying a static-gantry IMRT pharyngeal tonsil plan to be delivered in 3 unmodulated fractions. A radiation oncologist evaluated the dose for the simulated errors and predicted morbidity and mortality commensurate with the originally reported toxicity, indicating that the reported errors were approximately simulated. The ion chamber signal of the unmodified treatments was compared with the simulated-error signal and evaluated in Pinnacle TPS, again with radiation oncologist prediction of simulated patient harm. Results: Previous work established that transmission detector system measurements are stable within 0.5% standard deviation (SD). Errors causing a signal change greater than 20 SD (10%) were considered detected. The whole-breast and pharyngeal tonsil IMRT simulated errors increased the signal by 215% and 969%, respectively, indicating error detection after the first fraction and first IMRT segment, respectively. Conclusion: The transmission detector system demonstrated utility in detecting clinically significant errors and reducing patient toxicity/harm in simulated external beam delivery. Future work will evaluate detection of other smaller-magnitude delivery errors.
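Given the stability figure quoted above, the detection rule reduces to a single threshold comparison. A one-line sketch follows; the function and parameter names are hypothetical, with the 0.5% relative SD and the 20-SD criterion taken from the abstract.

    def iqm_error_detected(signal, expected, rel_sd=0.005, k_sd=20):
        # flag a delivery error when the relative signal change exceeds 20 SD (10%)
        return abs(signal - expected) / expected > k_sd * rel_sd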
Effects of Contextual Sight-Singing and Aural Skills Training on Error-Detection Abilities.
ERIC Educational Resources Information Center
Sheldon, Deborah A.
1998-01-01
Examines the effects of contextual sight-singing and ear training on pitch and rhythm error detection abilities among undergraduate instrumental music education majors. Shows that additional training produced better error detection, particularly with rhythm errors and in one-part examples. Maintains that differences attributable to texture were…
NASA Astrophysics Data System (ADS)
Adziz, Mohd Izwan Abdul; Siong, Khoo Kok
2018-04-01
Recently, the Long Term Storage Facility (LTSF) in Bukit Kledang, Perak, Malaysia, has been upgraded to a repository facility upon the completion of the decontamination and decommissioning (D&D) process. Thorium waste and contaminated material that may contain minor amounts of thorium hydroxide were disposed of in this facility. This study was conducted to determine the concentrations of gross alpha and gross beta radioactivity in soil samples collected around the repository facility. A total of 12 soil samples were collected, consisting of 10 samples from around the facility and 2 from a selected residential area near the facility. In addition, the respective dose rates were measured 5 cm and 1 m above the ground using a survey meter with Geiger-Muller (GM) and sodium iodide (NaI) detectors. Soil samples were collected using a hand auger and taken back to the laboratory for further analysis. Samples were cleaned, dried, pulverized, and sieved prior to analysis. Gross alpha and gross beta activity measurements were carried out using a gas flow proportional counter, the Canberra Series 5 XLB Automatic Low Background Alpha and Beta Counting System. The results show that the gross alpha and gross beta activity concentrations ranged from 1.55 to 5.34 Bq/g with a mean value of 3.47 ± 0.09 Bq/g and from 1.64 to 5.78 Bq/g with a mean value of 3.49 ± 0.09 Bq/g, respectively. These results can serve as additional terrestrial radioactivity baseline data for the Malaysian environment and as a baseline for detecting any future contamination, especially around the repository facility area.
Sanchez-Cabeza, J A; Pujol, L
1995-05-01
The radiological examination of water requires a rapid screening technique that permits the determination of the gross alpha and beta activities of each sample in order to decide if further radiological analyses are necessary. In this work, the use of a low background liquid scintillation system (Quantulus 1220) is proposed to simultaneously determine the gross activities in water samples. Liquid scintillation is compared with more conventional techniques used in most monitoring laboratories. In order to determine the best counting configuration of the system, pulse shape discrimination was optimized for 6 scintillant/vial combinations. The best counting configuration was obtained with the scintillation cocktail Optiphase Hisafe 3 in Zinsser low diffusion vials. The detection limits achieved were 0.012 Bq L-1 and 0.14 Bq L-1 for gross alpha and beta activity, respectively, after a 1:10 concentration process by simple evaporation and for a counting time of only 360 min. The proposed technique is rapid, gives spectral information, and is adequate to determine gross activities according to the World Health Organization (WHO) guideline values.
Smith, Maxwell L; Wilkerson, Trent; Grzybicki, Dana M; Raab, Stephen S
2012-09-01
Few reports have documented the effectiveness of Lean quality improvement in changing anatomic pathology patient safety. We used Lean methods of education; hoshin kanri goal setting and culture change; kaizen events; observation of work activities, hand-offs, and pathways; A3-problem solving, metric development, and measurement; and frontline work redesign in the accessioning and gross examination areas of an anatomic pathology laboratory. We compared the pre- and post-Lean implementation proportion of near-miss events and changes made in specific work processes. In the implementation phase, we documented 29 individual A3-root cause analyses. The pre- and postimplementation proportions of process- and operator-dependent near-miss events were 5.5 and 1.8 (P < .002) and 0.6 and 0.6, respectively. We conclude that through culture change and implementation of specific work process changes, Lean implementation may improve pathology patient safety.
Bledsoe, Sarah; Van Buskirk, Alex; Falconer, R James; Hollon, Andrew; Hoebing, Wendy; Jokic, Sladan
2018-02-01
This study assessed the effectiveness of barcode-assisted medication preparation (BCMP) technology in detecting oral liquid dose preparation errors. From June 1, 2013, through May 31, 2014, a total of 178,344 oral doses were processed at Children's Mercy, a 301-bed pediatric hospital, through an automated workflow management system. Doses containing errors detected by the system's barcode scanning system or rejected by the pharmacist were further reviewed. Errors intercepted by the barcode-scanning system were classified as (1) expired product, (2) incorrect drug, (3) incorrect concentration, and (4) technological error. Pharmacist-rejected doses were categorized into 6 categories based on the root cause of the preparation error: (1) expired product, (2) incorrect concentration, (3) incorrect drug, (4) incorrect volume, (5) preparation error, and (6) other. Of the 178,344 doses examined, 3,812 (2.1%) errors were detected by either the barcode-assisted scanning system (1.8%, n = 3,291) or a pharmacist (0.3%, n = 521). The 3,291 errors prevented by the barcode-assisted system were classified most commonly as technological error and incorrect drug, followed by incorrect concentration and expired product. The 521 errors detected by pharmacists were most often classified as incorrect volume, preparation error, expired product, other, incorrect drug, and incorrect concentration. BCMP technology detected errors in 1.8% of pediatric oral liquid medication doses prepared in an automated workflow management system, with errors most commonly attributed to technological problems or incorrect drugs. Pharmacists rejected an additional 0.3% of studied doses. Copyright © 2018 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
Douglas, Julie A.; Skol, Andrew D.; Boehnke, Michael
2002-01-01
Gene-mapping studies routinely rely on checking for Mendelian transmission of marker alleles in a pedigree, as a means of screening for genotyping errors and mutations, with the implicit assumption that, if a pedigree is consistent with Mendel’s laws of inheritance, then there are no genotyping errors. However, the occurrence of inheritance inconsistencies alone is an inadequate measure of the number of genotyping errors, since the rate of occurrence depends on the number and relationships of genotyped pedigree members, the type of errors, and the distribution of marker-allele frequencies. In this article, we calculate the expected probability of detection of a genotyping error or mutation as an inheritance inconsistency in nuclear-family data, as a function of both the number of genotyped parents and offspring and the marker-allele frequency distribution. Through computer simulation, we explore the sensitivity of our analytic calculations to the underlying error model. Under a random-allele–error model, we find that detection rates are 51%–77% for multiallelic markers and 13%–75% for biallelic markers; detection rates are generally lower when the error occurs in a parent than in an offspring, unless a large number of offspring are genotyped. Errors are especially difficult to detect for biallelic markers with equally frequent alleles, even when both parents are genotyped; in this case, the maximum detection rate is 34% for four-person nuclear families. Error detection in families in which parents are not genotyped is limited, even with multiallelic markers. Given these results, we recommend that additional error checking (e.g., on the basis of multipoint analysis) be performed, beyond routine checking for Mendelian consistency. Furthermore, our results permit assessment of the plausibility of an observed number of inheritance inconsistencies for a family, allowing the detection of likely pedigree—rather than genotyping—errors in the early stages of a genome scan. Such early assessments are valuable in either the targeting of families for resampling or discontinued genotyping. PMID:11791214
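The detection rates discussed above can be reproduced in spirit with a small Monte Carlo sketch for a biallelic marker under the random-allele error model. The family structure, allele frequency, and simulation size below are illustrative parameters, and the re-drawn allele may coincide with the original one, as in a random-allele model.

    import random

    def detection_rate(p=0.5, n_kids=2, n_sims=50000, seed=1):
        random.seed(seed)
        detected = 0
        for _ in range(n_sims):
            allele = lambda: random.random() < p            # two alleles as booleans
            dad = [allele(), allele()]
            mom = [allele(), allele()]
            kids = [[random.choice(dad), random.choice(mom)] for _ in range(n_kids)]
            # random-allele error: one allele of one genotyped person is re-drawn
            person = random.choice([dad, mom] + kids)
            person[random.randrange(2)] = allele()
            # Mendelian consistency: each child needs one allele from each parent
            consistent = all((k[0] in dad and k[1] in mom) or
                             (k[1] in dad and k[0] in mom) for k in kids)
            detected += not consistent
        return detected / n_sims

    print(detection_rate())   # an estimate in the ballpark of the rates quoted above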
A concatenated coding scheme for error control
NASA Technical Reports Server (NTRS)
Lin, S.
1985-01-01
A concatenated coding scheme for error control in data communications was analyzed. The inner code is used for both error correction and detection; however, the outer code is used only for error detection. A retransmission is requested if either the inner code decoder fails to make a successful decoding or the outer code decoder detects the presence of errors after the inner code decoding. The probability of undetected error of the proposed scheme is derived. An efficient method for computing this probability is presented. Throughput efficiency of the proposed error control scheme, incorporated with a selective-repeat ARQ retransmission strategy, is analyzed.
Form Overrides Meaning When Bilinguals Monitor for Errors
Ivanova, Iva; Ferreira, Victor S.; Gollan, Tamar H.
2016-01-01
Bilinguals rarely produce unintended language switches, which may in part be because switches are detected and corrected by an internal monitor. But are language switches easier or harder to detect than within-language semantic errors? To approximate internal monitoring, bilinguals listened (Experiment 1) or read aloud (Experiment 2) stories, and detected language switches (translation equivalents or semantically unrelated to expected words) and within-language errors (semantically related or unrelated to expected words). Bilinguals detected semantically related within-language errors most slowly and least accurately, language switches more quickly and accurately than within-language errors, and (in Experiment 2), translation equivalents as quickly and accurately as unrelated language switches. These results suggest that internal monitoring of form (which can detect mismatches in language membership) completes earlier than, and is independent of, monitoring of meaning. However, analysis of reading times prior to error detection revealed meaning violations to be more disruptive for processing than language violations. PMID:28649169
DOE Office of Scientific and Technical Information (OSTI.GOV)
Surovchak, Scott; Miller, Michele
The 2008 Long-Term Surveillance Plan [LTSP] for the Decommissioned Hallam Nuclear Power Facility, Hallam, Nebraska (http://www.lm.doe.gov/Hallam/Documents.aspx) requires groundwater monitoring once every 2 years. Seventeen monitoring wells at the Hallam site were sampled during this event as specified in the plan. Planned monitoring locations are shown in Attachment 1, Sampling and Analysis Work Order. Water levels were measured at all sampled wells and at two additional wells (6A and 6B) prior to the start of sampling. Additionally, water levels of each sampled well were measured at the beginning of sampling. See Attachment 2, Trip Report, for additional details. Sampling and analysis were conducted as specified in Sampling and Analysis Plan for U.S. Department of Energy Office of Legacy Management Sites (LMS/PRO/S04351, continually updated, http://energy.gov/lm/downloads/sampling-and-analysis-plan-us-department-energy-office-legacy-management-sites). Gross alpha and gross beta are the only parameters that were detected at statistically significant concentrations. Time/concentration graphs of the gross alpha and gross beta data are included in Attachment 3, Data Presentation. The gross alpha and gross beta activity concentrations observed are consistent with values previously observed and are attributed to naturally occurring radionuclides (e.g., uranium and uranium decay chain products) in the groundwater.
Spatial interpolation of solar global radiation
NASA Astrophysics Data System (ADS)
Lussana, C.; Uboldi, F.; Antoniazzi, C.
2010-09-01
Solar global radiation is defined as the radiant flux incident onto an area element of the terrestrial surface. Direct knowledge of it plays a crucial role in many applications, from agrometeorology to environmental meteorology. The ARPA Lombardia meteorological network includes about one hundred pyranometers, mostly distributed on the southern side of the Alps and in the centre of the Po Plain. A statistical interpolation method based on an implementation of Optimal Interpolation is applied to the hourly averages of the solar global radiation observations measured by the ARPA Lombardia network. The background field is obtained using SMARTS (The Simple Model of the Atmospheric Radiative Transfer of Sunshine, Gueymard, 2001). The model is initialised by assuming clear-sky conditions and takes into account the solar position and orography-related effects (shade and reflection). The interpolation of pyranometric observations introduces information about cloud presence and influence into the analysis fields. A particular effort is devoted to preventing observations affected by large errors of different kinds (representativity errors, systematic errors, gross errors) from entering the analysis procedure. The inclusion of direct cloud information from satellite observations is also planned.
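A compact sketch of the Optimal Interpolation update follows, with a Gaussian background-error covariance and a simple background check that screens out observations with gross innovations before they enter the analysis. The length scale, variances, and rejection factor are illustrative assumptions, and xb_obs is the background already evaluated at the observation locations.

    import numpy as np

    def gauss_cov(a, b, sigma_b=1.0, L=20.0):
        # Gaussian background-error covariance between two sets of 2D points (km)
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return sigma_b**2 * np.exp(-0.5 * (d / L) ** 2)

    def oi_analysis(xb_grid, grid_xy, xb_obs, obs, obs_xy, sigma_o=0.3):
        innov = obs - xb_obs                    # observation-minus-background
        HBHT = gauss_cov(obs_xy, obs_xy)
        R = sigma_o**2 * np.eye(len(obs))
        # background check: reject observations with implausibly large innovations
        ok = np.abs(innov) <= 4.0 * np.sqrt(np.diag(HBHT) + np.diag(R))
        BHT = gauss_cov(grid_xy, obs_xy[ok])
        w = np.linalg.solve(HBHT[np.ix_(ok, ok)] + R[np.ix_(ok, ok)], innov[ok])
        return xb_grid + BHT @ w                # analysis = background + weighted innovations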
Westbrook, Johanna I.; Li, Ling; Lehnbom, Elin C.; Baysari, Melissa T.; Braithwaite, Jeffrey; Burke, Rosemary; Conn, Chris; Day, Richard O.
2015-01-01
Objectives To (i) compare medication errors identified at audit and observation with medication incident reports; (ii) identify differences between two hospitals in incident report frequency and medication error rates; (iii) identify prescribing error detection rates by staff. Design Audit of 3291 patient records at two hospitals to identify prescribing errors and evidence of their detection by staff. Medication administration errors were identified from a direct observational study of 180 nurses administering 7451 medications. Severity of errors was classified. Those likely to lead to patient harm were categorized as ‘clinically important’. Setting Two major academic teaching hospitals in Sydney, Australia. Main Outcome Measures Rates of medication errors identified from audit and from direct observation were compared with reported medication incident reports. Results A total of 12 567 prescribing errors were identified at audit. Of these 1.2/1000 errors (95% CI: 0.6–1.8) had incident reports. Clinically important prescribing errors (n = 539) were detected by staff at a rate of 218.9/1000 (95% CI: 184.0–253.8), but only 13.0/1000 (95% CI: 3.4–22.5) were reported. 78.1% (n = 421) of clinically important prescribing errors were not detected. A total of 2043 drug administrations (27.4%; 95% CI: 26.4–28.4%) contained ≥1 errors; none had an incident report. Hospital A had a higher frequency of incident reports than Hospital B, but a lower rate of errors at audit. Conclusions Prescribing errors with the potential to cause harm frequently go undetected. Reported incidents do not reflect the profile of medication errors which occur in hospitals or the underlying rates. This demonstrates the inaccuracy of using incident frequency to compare patient risk or quality performance within or across hospitals. New approaches including data mining of electronic clinical information systems are required to support more effective medication error detection and mitigation. PMID:25583702
Local concurrent error detection and correction in data structures using virtual backpointers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, C.C.J.; Chen, P.P.; Fuchs, W.K.
1989-11-01
A new technique, based on virtual backpointers, is presented in this paper for local concurrent error detection and correction in linked data structures. Two new data structures utilizing virtual backpointers, the Virtual Double-Linked List and the B-tree with Virtual Backpointers, are described. For these structures, double errors within a fixed-size checking window can be detected in constant time, and single errors detected during forward moves can be corrected in constant time.
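The flavor of the technique can be illustrated on a doubly linked list. In the sketch below, the virtual backpointer is encoded as the XOR of the predecessor and successor identifiers; this specific encoding is an assumption chosen for illustration, and the paper's exact construction may differ.

    class Node:
        def __init__(self, nid):
            self.id, self.fwd, self.vbp = nid, 0, 0    # vbp = prev_id XOR next_id

    def build(ids):
        nodes = {i: Node(i) for i in ids}
        for prev_id, cur_id, next_id in zip([0] + ids[:-1], ids, ids[1:] + [0]):
            nodes[cur_id].fwd = next_id
            nodes[cur_id].vbp = prev_id ^ next_id
        return nodes

    def check_step(prev_id, cur):
        # local O(1) check during a forward move: the virtual backpointer,
        # combined with the forward pointer, must reproduce the predecessor
        return (cur.vbp ^ cur.fwd) == prev_id

    def correct_fwd(prev_id, cur):
        # under a single-error assumption, a corrupted forward pointer is
        # recomputed in O(1) from the intact virtual backpointer
        cur.fwd = cur.vbp ^ prev_id

    nodes = build([10, 20, 30])
    nodes[20].fwd ^= 4                        # inject a pointer error
    if not check_step(10, nodes[20]):
        correct_fwd(10, nodes[20])            # restores fwd = 30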
Kreilinger, Alex; Hiebel, Hannah; Müller-Putz, Gernot R
2016-03-01
This work aimed to find and evaluate a new method for detecting errors in continuous brain-computer interface (BCI) applications. Instead of classifying errors on a single-trial basis, the new method was based on multiple events (MEs) analysis to increase the accuracy of error detection. In a BCI-driven car game, based on motor imagery (MI), discrete events were triggered whenever subjects collided with coins and/or barriers. Coins counted as correct events, whereas barriers were errors. This new method, termed ME method, combined and averaged the classification results of single events (SEs) and determined the correctness of MI trials, which consisted of event sequences instead of SEs. The benefit of this method was evaluated in an offline simulation. In an online experiment, the new method was used to detect erroneous MI trials. Such MI trials were discarded and could be repeated by the users. We found that, even with low SE error potential (ErrP) detection rates, feasible accuracies can be achieved when combining MEs to distinguish erroneous from correct MI trials. Online, all subjects reached higher scores with error detection than without, at the cost of longer times needed for completing the game. Findings suggest that ErrP detection may become a reliable tool for monitoring continuous states in BCI applications when combining MEs. This paper demonstrates a novel technique for detecting errors in online continuous BCI applications, which yields promising results even with low single-trial detection rates.
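The gain from combining multiple events can be sketched analytically: with independent single-event ErrP decisions of accuracy p, a majority over k events is correct far more often than any single decision. Odd k and independence are simplifying assumptions in this illustration.

    from scipy.stats import binom

    def trial_accuracy(p_single, k):
        # probability that a majority of k independent single-event
        # classifications (each correct with probability p_single) is correct
        return 1.0 - binom.cdf(k // 2, k, p_single)

    for k in (1, 3, 5, 7):
        print(k, round(trial_accuracy(0.65, k), 3))
    # accuracy climbs from 0.65 at k = 1 to about 0.80 by k = 7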
Oftedal, O T; Eisert, R; Barrell, G K
2014-01-01
Mammalian milks may differ greatly in composition from cow milk, and these differences may affect the performance of analytical methods. High-fat, high-protein milks with a preponderance of oligosaccharides, such as those produced by many marine mammals, present a particular challenge. We compared the performance of several methods against reference procedures using Weddell seal (Leptonychotes weddellii) milk of highly varied composition (by reference methods: 27-63% water, 24-62% fat, 8-12% crude protein, 0.5-1.8% sugar). A microdrying step preparatory to carbon-hydrogen-nitrogen (CHN) gas analysis slightly underestimated water content and had a higher repeatability relative standard deviation (RSDr) than did reference oven drying at 100°C. Compared with a reference macro-Kjeldahl protein procedure, the CHN (or Dumas) combustion method had a somewhat higher RSDr (1.56 vs. 0.60%) but correlation between methods was high (0.992), means were not different (CHN: 17.2±0.46% dry matter basis; Kjeldahl 17.3±0.49% dry matter basis), there were no significant proportional or constant errors, and predictive performance was high. A carbon stoichiometric procedure based on CHN analysis failed to adequately predict fat (reference: Röse-Gottlieb method) or total sugar (reference: phenol-sulfuric acid method). Gross energy content, calculated from energetic factors and results from reference methods for fat, protein, and total sugar, accurately predicted gross energy as measured by bomb calorimetry. We conclude that the CHN (Dumas) combustion method and calculation of gross energy are acceptable analytical approaches for marine mammal milk, but fat and sugar require separate analysis by appropriate analytic methods and cannot be adequately estimated by carbon stoichiometry. Some other alternative methods (low-temperature drying for water determination; Bradford, Lowry, and biuret methods for protein; the Folch and the Bligh and Dyer methods for fat; and enzymatic and reducing sugar methods for total sugar) appear likely to produce substantial error in marine mammal milks. It is important that alternative analytical methods be properly validated against a reference method before being used, especially for mammalian milks that differ greatly from cow milk in analyte characteristics and concentrations. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Coordinating robot motion, sensing, and control in plans. LDRD project final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xavier, P.G.; Brown, R.G.; Watterberg, P.A.
1997-08-01
The goal of this project was to develop a framework for robotic planning and execution that provides a continuum of adaptability with respect to model incompleteness, model error, and sensing error. For example, dividing robot motion into gross-motion planning, fine-motion planning, and sensor-augmented control had yielded productive research and solutions to individual problems. Unfortunately, these techniques could only be combined by hand with ad hoc methods and were restricted to systems where all kinematics are completely modeled in planning. The original intent was to develop methods for understanding and autonomously synthesizing plans that coordinate motion, sensing, and control. The project considered this problem from several perspectives. Results included (1) theoretical methods to combine and extend gross-motion and fine-motion planning; (2) preliminary work in flexible-object manipulation and an implementable algorithm for planning shortest paths through obstacles for the free end of an anchored cable; (3) development and implementation of a fast swept-body distance algorithm; and (4) integration of Sandia's C-Space Toolkit geometry engine and SANDROS motion planner with improvements, which yielded a system practical for everyday motion planning, with path-segment planning at interactive speeds. Results (3) and (4) have either led to follow-on work or are being used in current projects, and (2) is expected to eventually be used as well.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Damato, AL; Bhagwat, MS; Buzurovic, I
Purpose: To investigate the use of a system using EM tracking, postprocessing, and error-detection algorithms for measuring brachytherapy catheter locations and for detecting errors and resolving uncertainties in treatment-planning catheter digitization. Methods: An EM tracker was used to localize 13 catheters in a clinical surface applicator (A) and 15 catheters inserted into a phantom (B). Two pairs of catheters in (B) crossed paths at a distance <2 mm, producing an undistinguishable catheter artifact in that location. EM data were post-processed for noise reduction and reformatted to provide the dwell location configuration. CT-based digitization was automatically extracted from the brachytherapy plan DICOM files (CT). EM dwell digitization error was characterized in terms of the average and maximum distance between corresponding EM and CT dwells per catheter. The error detection rate (detected errors / all errors) was calculated for 3 types of errors: swap of two catheter numbers; incorrect catheter number identification superior to the closest position between two catheters (mix); and catheter-tip shift. Results: The averages ± 1 standard deviation of the average and maximum registration error per catheter were 1.9±0.7 mm and 3.0±1.1 mm for (A) and 1.6±0.6 mm and 2.7±0.8 mm for (B). The error detection rate was 100% (A and B) for swap errors, mix errors, and shifts >4.5 mm (A) and >5.5 mm (B); errors were detected for shifts on average >2.0 mm (A) and >2.4 mm (B). Both mix errors associated with undistinguishable catheter artifacts were detected, and at least one of the involved catheters was identified. Conclusion: We demonstrated the use of an EM tracking system for localization of brachytherapy catheters, detection of digitization errors, and resolution of undistinguishable catheter artifacts. Automatic digitization may be possible with a registration between the imaging and the EM frames of reference. Research funded by the Kaye Family Award 2012.
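The comparison logic can be sketched briefly, assuming EM and CT dwell positions arrive as per-catheter arrays of 3D points with equal dwell counts: per-catheter registration error is summarized by mean and maximum distance, and swaps are flagged when a catheter's EM track sits closer to a different CT catheter than to its own. Function names are hypothetical.

    import numpy as np

    def catheter_error(em, ct):
        # em, ct: (n_dwells, 3) arrays of corresponding dwell positions in mm
        d = np.linalg.norm(em - ct, axis=1)
        return d.mean(), d.max()

    def flag_swaps(em_catheters, ct_catheters):
        # mean distance from each EM catheter to every CT catheter's dwells;
        # assumes matching dwell counts so distances are point-to-point
        cost = np.array([[np.linalg.norm(em - ct, axis=1).mean()
                          for ct in ct_catheters] for em in em_catheters])
        nearest = cost.argmin(axis=1)
        return [i for i, j in enumerate(nearest) if i != j]   # mismatched numbering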
46 CFR 182.480 - Flammable vapor detection systems.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 100 GROSS TONS) MACHINERY INSTALLATION Specific Machinery Requirements § 182.480 Flammable vapor... permit calibration in a vapor free atmosphere. (g) Electrical connections, wiring, and components for a...
Errors, error detection, error correction and hippocampal-region damage: data and theories.
MacKay, Donald G; Johnson, Laura W
2013-11-01
This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future test. Copyright © 2013 Elsevier Ltd. All rights reserved.
McClintock, Brett T.; Bailey, Larissa L.; Pollock, Kenneth H.; Simons, Theodore R.
2010-01-01
The recent surge in the development and application of species occurrence models has been associated with an acknowledgment among ecologists that species are detected imperfectly due to observation error. Standard models now allow unbiased estimation of occupancy probability when false negative detections occur, but this is conditional on no false positive detections and sufficient incorporation of explanatory variables for the false negative detection process. These assumptions are likely reasonable in many circumstances, but there is mounting evidence that false positive errors and detection probability heterogeneity may be much more prevalent in studies relying on auditory cues for species detection (e.g., songbird or calling amphibian surveys). We used field survey data from a simulated calling anuran system of known occupancy state to investigate the biases induced by these errors in dynamic models of species occurrence. Despite the participation of expert observers in simplified field conditions, both false positive errors and site detection probability heterogeneity were extensive for most species in the survey. We found that even low levels of false positive errors, constituting as little as 1% of all detections, can cause severe overestimation of site occupancy, colonization, and local extinction probabilities. Further, unmodeled detection probability heterogeneity induced substantial underestimation of occupancy and overestimation of colonization and local extinction probabilities. Completely spurious relationships between species occurrence and explanatory variables were also found. Such misleading inferences would likely have deleterious implications for conservation and management programs. We contend that all forms of observation error, including false positive errors and heterogeneous detection probabilities, must be incorporated into the estimation framework to facilitate reliable inferences about occupancy and its associated vital rate parameters.
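The direction of the false-positive-driven bias is easy to reproduce in a toy simulation: even a 1% per-survey false positive rate inflates a naive occupancy estimate based on sites with at least one detection. The parameters below are illustrative.

    import numpy as np

    rng = np.random.default_rng(7)
    sites, surveys = 2000, 5
    psi, p_true, p_false = 0.30, 0.5, 0.01   # occupancy, detection, false positive rates

    occupied = rng.random(sites) < psi
    p_detect = np.where(occupied[:, None], p_true, p_false)
    detections = rng.random((sites, surveys)) < p_detect

    naive_psi = detections.any(axis=1).mean()
    print(f"true psi = {psi}, naive estimate = {naive_psi:.3f}")
    # unoccupied sites register at least one false detection with probability
    # 1 - 0.99**5 (about 4.9%), pushing the estimate above the true occupancy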
PRESAGE: Protecting Structured Address Generation against Soft Errors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram
Modern computer scaling trends in pursuit of larger component counts and power efficiency have, unfortunately, led to less reliable hardware and consequently soft errors escaping into application data ("silent data corruptions"). Techniques to enhance system resilience hinge on the availability of efficient error detectors that have high detection rates, low false positive rates, and low computational overhead. Unfortunately, efficient detectors to detect faults during address generation (to index large arrays) have not been widely researched. We present a novel lightweight compiler-driven technique called PRESAGE for detecting bit-flips affecting structured address computations. A key insight underlying PRESAGE is that any address computation scheme that flows an already incurred error is better than a scheme that corrupts one particular array access but otherwise (falsely) appears to compute perfectly. Enabling the flow of errors allows one to situate detectors at loop exit points, and helps turn silent corruptions into easily detectable error situations. Our experiments using the PolyBench benchmark suite indicate that PRESAGE-based error detectors have a high error-detection rate while incurring low overheads.
PRESAGE: Protecting Structured Address Generation against Soft Errors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram
Modern computer scaling trends in pursuit of larger component counts and power efficiency have, unfortunately, led to less reliable hardware and consequently soft errors escaping into application data ("silent data corruptions"). Techniques to enhance system resilience hinge on the availability of efficient error detectors that have high detection rates, low false positive rates, and low computational overhead. Unfortunately, efficient detectors to detect faults during address generation have not been widely researched (especially in the context of indexing large arrays). We present a novel lightweight compiler-driven technique called PRESAGE for detecting bit-flips affecting structured address computations. A key insight underlying PRESAGE is that any address computation scheme that propagates an already incurred error is better than a scheme that corrupts one particular array access but otherwise (falsely) appears to compute perfectly. Ensuring the propagation of errors allows one to place detectors at loop exit points and helps turn silent corruptions into easily detectable error situations. Our experiments using the PolyBench benchmark suite indicate that PRESAGE-based error detectors have a high error-detection rate while incurring low overheads.
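The key insight lends itself to a small conceptual sketch: if addresses are generated by a recurrence, a bit-flip anywhere in the chain keeps flowing into every later address, so a single check at the loop exit suffices. This illustrates the idea only, not the PRESAGE compiler transformation itself, and all names below are hypothetical.

    def incremental_addresses(base, stride, n):
        # error-propagating scheme: each address is derived from the previous one,
        # so an earlier corruption flows into the final address
        addrs, addr = [], base
        for _ in range(n):
            addrs.append(addr)
            addr += stride
        return addrs, addr            # the final value doubles as a running signature

    def exit_detector(final_addr, base, stride, n):
        # single detector at the loop exit: recompute the expected final address
        return final_addr == base + n * stride

    addrs, final = incremental_addresses(0x1000, 8, 64)
    final ^= 0x40                     # simulate a soft error in address generation
    print(exit_detector(final, 0x1000, 8, 64))   # False -> corruption detected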
A Dual Frequency Carrier Phase Error Difference Checking Algorithm for the GNSS Compass.
Liu, Shuo; Zhang, Lei; Li, Jian
2016-11-24
The performance of the Global Navigation Satellite System (GNSS) compass is related to the quality of the carrier phase measurement. Properly handling carrier phase errors is important for improving GNSS compass accuracy. In this work, we propose a dual frequency carrier phase error difference checking algorithm for the GNSS compass. The algorithm aims at eliminating large carrier phase errors in dual frequency double-differenced carrier phase measurements according to the error difference between the two frequencies. The advantage of the proposed algorithm is that it does not need additional environment information and performs well against multiple large errors compared with previous research. The core of the proposed algorithm is removing the geometrical distance from the dual frequency carrier phase measurements, after which the carrier phase error is separated and detectable. We generate the Double Differenced Geometry-Free (DDGF) measurement according to the characteristic that carrier phase measurements on different frequencies contain the same geometrical distance. Then, we propose the DDGF detection to detect large carrier phase error differences between the two frequencies. The theoretical performance of the proposed DDGF detection is analyzed. An open-sky test, a man-made multipath test, and an urban vehicle test were carried out to evaluate the performance of the proposed algorithm. The results show that the proposed DDGF detection is able to detect large errors in dual frequency carrier phase measurements by checking the error difference between the two frequencies. After the DDGF detection, the accuracy of the baseline vector is improved in the GNSS compass.
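A sketch of the geometry-free check on double-differenced measurements follows. The wavelengths are the GPS L1/L2 values; the detection threshold, reference handling, and function names are illustrative assumptions rather than the paper's implementation.

    L1_WAVELENGTH = 0.19029   # m, GPS L1 (c / 1575.42 MHz)
    L2_WAVELENGTH = 0.24421   # m, GPS L2 (c / 1227.60 MHz)

    def ddgf(dd_phi_l1, dd_phi_l2):
        # double-differenced geometry-free combination (phases in cycles):
        # the common geometric distance cancels, leaving constant ambiguities
        # plus the difference of the carrier phase errors on the two frequencies
        return L1_WAVELENGTH * dd_phi_l1 - L2_WAVELENGTH * dd_phi_l2

    def large_error_detected(dd_phi_l1, dd_phi_l2, ddgf_ref, threshold_m=0.05):
        # a jump of the combination away from its reference value indicates a
        # large carrier phase error on at least one frequency
        return abs(ddgf(dd_phi_l1, dd_phi_l2) - ddgf_ref) > threshold_m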
A concatenated coding scheme for error control
NASA Technical Reports Server (NTRS)
Kasami, T.; Fujiwara, T.; Lin, S.
1986-01-01
In this paper, a concatenated coding scheme for error control in data communications is presented and analyzed. In this scheme, the inner code is used for both error correction and detection; however, the outer code is used only for error detection. A retransmission is requested if either the inner code decoder fails to make a successful decoding or the outer code decoder detects the presence of errors after the inner code decoding. Probability of undetected error (or decoding error) of the proposed scheme is derived. An efficient method for computing this probability is presented. Throughput efficiency of the proposed error control scheme incorporated with a selective-repeat ARQ retransmission strategy is also analyzed. Three specific examples are presented. One of the examples is proposed for error control in the NASA Telecommand System.
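The accept/retransmit logic of such a scheme can be sketched compactly. Below, the inner code is a toy (5,1) repetition code whose weak majorities count as detected decoding failures, and the detection-only outer code is stood in for by a CRC; both are placeholders for the actual codes analyzed in the paper.

    import zlib

    def inner_decode(chips):
        # (5,1) repetition inner code: strong majorities are corrected,
        # weak 3-2 votes are treated as a detected decoding failure
        bits = []
        for i in range(0, len(chips), 5):
            ones = sum(chips[i:i + 5])
            if ones in (2, 3):
                return None, False
            bits.append(1 if ones >= 4 else 0)
        return bits, True

    def receive(chips, crc):
        bits, ok = inner_decode(chips)
        if not ok:
            return "NAK"                   # inner decoder failure -> retransmit
        if zlib.crc32(bytes(bits)) != crc: # outer code: error detection only
            return "NAK"
        return "ACK"                       # accepted by both stages

    data = [1, 0, 1, 1, 0, 0, 1, 0]
    chips = [b for b in data for _ in range(5)]
    chips[3] ^= 1                          # single chip error: corrected by inner code
    print(receive(chips, zlib.crc32(bytes(data))))   # ACK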
Self-checking self-repairing computer nodes using the mirror processor
NASA Technical Reports Server (NTRS)
Tamir, Yuval
1992-01-01
Circuitry added to fault-tolerant systems for concurrent error detection usually reduces performance. Using a technique called micro rollback, it is possible to eliminate most of the performance penalty of concurrent error detection. Error detection is performed in parallel with intermodule communication, and erroneous state changes are later undone. The author reports on the design and implementation of a VLSI RISC microprocessor, called the Mirror Processor (MP), which is capable of micro rollback. In order to achieve concurrent error detection, two MP chips operate in lockstep, comparing external signals and a signature of internal signals every clock cycle. If a mismatch is detected, both processors roll back to the beginning of the cycle when the error occurred. In some cases the erroneous state is corrected by copying a value from the fault-free processor to the faulty processor. The architecture, microarchitecture, and VLSI implementation of the MP are described, emphasizing its error-detection, error-recovery, and self-diagnosis capabilities.
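A toy simulation of the lockstep-and-rollback idea: two cores execute identical operations, their states are compared every cycle, and a mismatch triggers a one-cycle rollback followed by re-execution. Real micro rollback buffers several cycles of state and can copy state between cores; the single-register "processor" and transient-fault assumption here are purely illustrative.

    class Core:
        def __init__(self):
            self.acc = 0
            self.saved = 0                   # one-deep state history for rollback

        def step(self, op):
            self.saved = self.acc            # checkpoint before the cycle
            self.acc += op

        def rollback(self):
            self.acc = self.saved            # undo the last cycle

    def lockstep_run(ops, inject_at=None):
        a, b = Core(), Core()
        for t, op in enumerate(ops):
            a.step(op); b.step(op)
            if t == inject_at:
                b.acc ^= 1                   # transient fault in one core
            if a.acc != b.acc:               # per-cycle comparison of state signatures
                a.rollback(); b.rollback()   # roll back to the start of the cycle
                a.step(op); b.step(op)       # re-execute; the transient fault is gone
        return a.acc == b.acc

    print(lockstep_run([3, 5, 7], inject_at=1))   # True: error detected and masked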
Differential detection in quadrature-quadrature phase shift keying (Q2PSK) systems
NASA Astrophysics Data System (ADS)
El-Ghandour, Osama M.; Saha, Debabrata
1991-05-01
A generalized quadrature-quadrature phase shift keying (Q2PSK) signaling format is considered for differential encoding and differential detection. Performance in the presence of additive white Gaussian noise (AWGN) is analyzed. Symbol error rate is found to be approximately twice the symbol error rate in a quaternary DPSK system operating at the same Eb/N0. However, the bandwidth efficiency of differential Q2PSK is substantially higher than that of quaternary DPSK. When the error is due to AWGN, the ratio of double error rate to single error rate can be very high, and the ratio may approach zero at high SNR. To improve error rate, differential detection through maximum-likelihood decoding based on multiple or N symbol observations is considered. If N and SNR are large this decoding gives a 3-dB advantage in error rate over conventional N = 2 differential detection, fully recovering the energy loss (as compared to coherent detection) if the observation is extended to a large number of symbol durations.
Local concurrent error detection and correction in data structures using virtual backpointers
NASA Technical Reports Server (NTRS)
Li, C. C.; Chen, P. P.; Fuchs, W. K.
1987-01-01
A new technique, based on virtual backpointers, for local concurrent error detection and correction in linked data structures is presented. Two new data structures, the Virtual Double Linked List, and the B-tree with Virtual Backpointers, are described. For these structures, double errors can be detected in O(1) time and errors detected during forward moves can be corrected in O(1) time. The application of a concurrent auditor process to data structure error detection and correction is analyzed, and an implementation is described, to determine the effect on mean time to failure of a multi-user shared database system. The implementation utilizes a Sequent shared memory multiprocessor system operating on a shared database of Virtual Double Linked Lists.
Local concurrent error detection and correction in data structures using virtual backpointers
NASA Technical Reports Server (NTRS)
Li, Chung-Chi Jim; Chen, Paul Peichuan; Fuchs, W. Kent
1989-01-01
A new technique, based on virtual backpointers, for local concurrent error detection and correction in linked data structures is presented. Two new data structures, the Virtual Double Linked List, and the B-tree with Virtual Backpointers, are described. For these structures, double errors can be detected in O(1) time and errors detected during forward moves can be corrected in O(1) time. The application of a concurrent auditor process to data structure error detection and correction is analyzed, and an implementation is described, to determine the effect on mean time to failure of a multi-user shared database system. The implementation utilizes a Sequent shared memory multiprocessor system operating on a shared database of Virtual Double Linked Lists.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Novak, A; Nyflot, M; Sponseller, P
2014-06-01
Purpose: Radiation treatment planning involves a complex workflow that can make safety improvement efforts challenging. This study utilizes an incident reporting system to identify detection points of near-miss errors, in order to guide our departmental safety improvement efforts. Previous studies have examined where errors arise, but not where they are detected or their patterns. Methods: 1377 incidents were analyzed from a departmental near-miss error reporting system from 3/2012-10/2013. All incidents were prospectively reviewed weekly by a multi-disciplinary team and assigned a near-miss severity score ranging from 0-4, reflecting potential harm (no harm to critical). A 98-step consensus workflow was used to determine origination and detection points of near-miss errors, categorized into 7 major steps (patient assessment/orders, simulation, contouring/treatment planning, pre-treatment plan checks, therapist/on-treatment review, post-treatment checks, and equipment issues). Categories were compared using ANOVA. Results: In the 7-step workflow, 23% of near-miss errors were detected within the same step in the workflow, an additional 37% were detected by the next step, and 23% were detected two steps downstream. Errors detected further from origination were more severe (p<.001; Figure 1). The most common source of near-miss errors was treatment planning/contouring, with 476 near misses (35%). Of those 476, only 72 (15%) were found before leaving treatment planning, 213 (45%) were found at physics plan checks, and 191 (40%) were caught at the therapist pre-treatment chart review or on portal imaging. Errors that passed through physics plan checks and were detected by therapists were more severe than other errors originating in contouring/treatment planning (1.81 vs 1.33, p<0.001). Conclusion: Errors caught by radiation therapists tend to be more severe than errors caught earlier in the workflow, highlighting the importance of safety checks in dosimetry and physics. We are utilizing our findings to improve manual and automated checklists for dosimetry and physics.
Error-Related Psychophysiology and Negative Affect
ERIC Educational Resources Information Center
Hajcak, G.; McDonald, N.; Simons, R.F.
2004-01-01
The error-related negativity (ERN/Ne) and error positivity (Pe) have been associated with error detection and response monitoring. More recently, heart rate (HR) and skin conductance (SC) have also been shown to be sensitive to the internal detection of errors. An enhanced ERN has consistently been observed in anxious subjects and there is some…
Simulating and Detecting Radiation-Induced Errors for Onboard Machine Learning
NASA Technical Reports Server (NTRS)
Wagstaff, Kiri L.; Bornstein, Benjamin; Granat, Robert; Tang, Benyang; Turmon, Michael
2009-01-01
Spacecraft processors and memory are subjected to high radiation doses and therefore employ radiation-hardened components. However, these components are orders of magnitude more expensive than typical desktop components, and they lag years behind in terms of speed and size. We have integrated algorithm-based fault tolerance (ABFT) methods into onboard data analysis algorithms to detect radiation-induced errors, which ultimately may permit the use of spacecraft memory that need not be fully hardened, reducing cost and increasing capability at the same time. We have also developed a lightweight software radiation simulator, BITFLIPS, that permits evaluation of error detection strategies in a controlled fashion, including the specification of the radiation rate and selective exposure of individual data structures. Using BITFLIPS, we evaluated our error detection methods when using a support vector machine to analyze data collected by the Mars Odyssey spacecraft. We found ABFT error detection for matrix multiplication is very successful, while error detection for Gaussian kernel computation still has room for improvement.
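Algorithm-based fault tolerance for matrix multiplication, the case reported as most successful above, is the textbook example: checksum rows and columns are appended before the multiply, and a single corrupted product element breaks one row and one column checksum, locating the error. A numpy sketch under those standard assumptions:

    import numpy as np

    def abft_matmul(A, B, corrupt=None):
        Ac = np.vstack([A, A.sum(axis=0)])                  # append column-checksum row
        Br = np.hstack([B, B.sum(axis=1, keepdims=True)])   # append row-checksum column
        C = Ac @ Br
        if corrupt is not None:                             # simulate a radiation-induced error
            C[corrupt] += 1000.0
        data = C[:-1, :-1]
        row_bad = ~np.isclose(C[:-1, -1], data.sum(axis=1))
        col_bad = ~np.isclose(C[-1, :-1], data.sum(axis=0))
        ok = not (row_bad.any() or col_bad.any())
        # a single corrupted data element is located at the crossing of the
        # failing row and column checksums
        return data, ok, None if ok else (row_bad.argmax(), col_bad.argmax())

    rng = np.random.default_rng(0)
    A, B = rng.random((4, 3)), rng.random((3, 5))
    _, ok, loc = abft_matmul(A, B, corrupt=(1, 2))
    print(ok, loc)   # False, (1, 2): the corrupted element is detected and located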
An advanced SEU tolerant latch based on error detection
NASA Astrophysics Data System (ADS)
Xu, Hui; Zhu, Jianwei; Lu, Xiaoping; Li, Jingzhao
2018-05-01
This paper proposes a latch that can mitigate SEUs via an error detection circuit. The error detection circuit is hardened by a C-element and a stacked PMOS. In the hold state, a particle strike on the latch or the error detection circuit may cause a faulty logic state of the circuit. The error detection circuit can detect the upset node in the latch, and the faulty output will be corrected. An upset node in the error detection circuit itself can be corrected by the C-element. The power dissipation and propagation delay of the proposed latch are analyzed by HSPICE simulations. The proposed latch consumes about 77.5% less energy and has 33.1% less propagation delay than the triple modular redundancy (TMR) latch. Simulation results demonstrate that the proposed latch can mitigate SEUs effectively. Project supported by the National Natural Science Foundation of China (Nos. 61404001, 61306046), the Anhui Province University Natural Science Research Major Project (No. KJ2014ZD12), the Huainan Science and Technology Program (No. 2013A4011), and the National Natural Science Foundation of China (No. 61371025).
[Detection and classification of medication errors at Joan XXIII University Hospital].
Jornet Montaña, S; Canadell Vilarrasa, L; Calabuig Muñoz, M; Riera Sendra, G; Vuelta Arce, M; Bardají Ruiz, A; Gallart Mora, M J
2004-01-01
Medication errors are multifactorial and multidisciplinary, and may originate in processes such as drug prescription, transcription, dispensation, preparation, and administration. The goal of this work was to measure the incidence of detectable medication errors that arise within a unit dose drug distribution and control system, from drug prescription to drug administration, by means of an observational method confined to the Pharmacy Department, as well as a voluntary, anonymous report system. The acceptance of this voluntary report system's implementation was also assessed. A prospective descriptive study was conducted. Data collection was performed at the Pharmacy Department from a review of prescribed medical orders, a review of pharmaceutical transcriptions, a review of dispensed medication, and a review of medication returned in unit dose medication carts. A voluntary, anonymous report system centralized in the Pharmacy Department was also set up to detect medication errors. Prescription errors were the most frequent (1.12%), closely followed by dispensation errors (1.04%). Transcription errors (0.42%) and administration errors (0.69%) had the lowest overall incidence. Voluntary reporting accounted for only 4.25% of all detected errors, whereas review of unit dose medication carts contributed the most to error detection. Recognizing the incidence and types of medication errors that occur in a health-care setting allows us to analyze their causes and effect changes in different stages of the process in order to ensure maximal patient safety.
NASA Astrophysics Data System (ADS)
Wibowo, Wahyu; Sinu, Elisabeth B.; Setiawan
2017-03-01
Because East Nusa Tenggara Province has recently formed new districts, the data collected across districts can become unbalanced. One consequence of ignoring this incompleteness is that the estimators become invalid, so the analysis of unbalanced panel data is crucial. The aim of this paper is to estimate Gross Regional Domestic Product in East Nusa Tenggara Province using an unbalanced panel data regression model with a two-way error component under the random effect model (REM) assumption. In this research, we employ Feasible Generalized Least Squares (FGLS) as the regression coefficient estimation method. Since the model variances are unknown, the ANOVA method is used to obtain the variance components needed to construct the variance-covariance matrix. The data used in this research are secondary data taken from the Central Bureau of Statistics of East Nusa Tenggara Province for 21 districts over the period 2004-2013. The predictors are the number of workers over 15 years old (X1), electrification ratios (X2), and local revenues (X3), while Gross Regional Domestic Product at constant 2000 prices is the response (Y). The FGLS estimation yields an R2 of 80.539%, and all chosen predictors significantly affect (α = 5%) Gross Regional Domestic Product in all districts of East Nusa Tenggara Province, with elasticities of 0.22986, 0.090476, and 0.14749, respectively.
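For concreteness, a one-way random-effects FGLS sketch on an unbalanced panel is shown below, using Swamy-Arora-style variance components and group-specific quasi-demeaning. The study itself uses a two-way error component model with ANOVA variance estimation, so this is a simplified stand-in; df is a pandas DataFrame and all argument names are hypothetical.

    import numpy as np
    import pandas as pd

    def random_effects_fgls(df, y, xs, group):
        g = df.groupby(group)
        cols = [y] + xs
        # sigma_e^2 from the within (fixed-effects) regression residuals
        within = df[cols] - g[cols].transform("mean")
        bw, *_ = np.linalg.lstsq(within[xs].to_numpy(), within[y].to_numpy(), rcond=None)
        n, k, G = len(df), len(xs), df[group].nunique()
        sig_e2 = ((within[y].to_numpy() - within[xs].to_numpy() @ bw) ** 2).sum() / (n - G - k)
        # sigma_u^2 from the between regression on group means
        means = g[cols].mean()
        Xb = np.column_stack([np.ones(G), means[xs].to_numpy()])
        bb, *_ = np.linalg.lstsq(Xb, means[y].to_numpy(), rcond=None)
        res_b = means[y].to_numpy() - Xb @ bb
        sig_u2 = max((res_b ** 2).sum() / (G - k - 1) - sig_e2 / g.size().mean(), 0.0)
        # group-specific quasi-demeaning handles the unbalancedness
        T_i = g[y].transform("size").to_numpy()
        theta = 1.0 - np.sqrt(sig_e2 / (T_i * sig_u2 + sig_e2))
        Z = df[cols].to_numpy() - theta[:, None] * g[cols].transform("mean").to_numpy()
        X = np.column_stack([1.0 - theta, Z[:, 1:]])
        beta, *_ = np.linalg.lstsq(X, Z[:, 0], rcond=None)
        return beta   # [intercept, slopes for xs...]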
Fault-tolerant quantum error detection.
Linke, Norbert M; Gutierrez, Mauricio; Landsman, Kevin A; Figgatt, Caroline; Debnath, Shantanu; Brown, Kenneth R; Monroe, Christopher
2017-10-01
Quantum computers will eventually reach a size at which quantum error correction becomes imperative. Quantum information can be protected from qubit imperfections and flawed control operations by encoding a single logical qubit in multiple physical qubits. This redundancy allows the extraction of error syndromes and the subsequent detection or correction of errors without destroying the logical state itself through direct measurement. We show the encoding and syndrome measurement of a fault-tolerantly prepared logical qubit via an error detection protocol on four physical qubits, represented by trapped atomic ions. This demonstrates the robustness of a logical qubit to imperfections in the very operations used to encode it. The advantage persists in the face of large added error rates and experimental calibration errors.
Design of the Detector II: A CMOS Gate Array for the Study of Concurrent Error Detection Techniques.
1987-07-01
detection schemes and temporary failures. The circuit consists of six different adders with concurrent error detection (CED) schemes. The error detection schemes are simple duplication, duplication with functional dual implementation, duplication with different implementations, and two-rail encoding.
Saunders, Kathryn J; Little, Julie-Anne; McClelland, Julie F; Jackson, A Jonathan
2010-06-01
To describe refractive status in children and young adults with cerebral palsy (CP) and relate refractive error to standardized measures of type and severity of CP impairment and to ocular dimensions. A population-based sample of 118 participants aged 4 to 23 years with CP (mean 11.64 +/- 4.06) and an age-appropriate control group (n = 128; age, 4-16 years; mean, 9.33 +/- 3.52) were recruited. Motor impairment was described with the Gross Motor Function Classification Scale (GMFCS), and subtype was allocated with the Surveillance of Cerebral Palsy in Europe (SCPE). Measures of refractive error were obtained from all participants and ocular biometry from a subgroup with CP. A significantly higher prevalence and magnitude of refractive error was found in the CP group compared to the control group. Axial length and spherical refractive error were strongly related. This relation did not improve with inclusion of corneal data. There was no relation between the presence or magnitude of spherical refractive errors in CP and the level of motor impairment, intellectual impairment, or the presence of communication difficulties. Higher spherical refractive errors were significantly associated with the nonspastic CP subtype. The presence and magnitude of astigmatism were greater when intellectual impairment was more severe, and astigmatic errors were explained by corneal dimensions. Conclusions. High refractive errors are common in CP, pointing to impairment of the emmetropization process. Biometric data support this. In contrast to other functional vision measures, spherical refractive error is unrelated to CP severity, but those with nonspastic CP tend to demonstrate the most extreme errors in refraction.
Data integrity systems for organ contours in radiation therapy planning.
Shah, Veeraj P; Lakshminarayanan, Pranav; Moore, Joseph; Tran, Phuoc T; Quon, Harry; Deville, Curtiland; McNutt, Todd R
2018-06-12
The purpose of this research is to develop effective data integrity models for contoured anatomy in a radiotherapy workflow for both real-time and retrospective analysis. Within this study, two classes of contour integrity models were developed: data-driven models and contiguousness models. The data-driven models aim to highlight contours which deviate from a gross set of contours from similar disease sites and encompass the following regions of interest (ROIs): bladder, femoral heads, spinal cord, and rectum. The contiguousness models, which individually analyze the geometry of contours to detect possible errors, are applied across many different ROIs and are divided into two metrics: Extent and Region Growing over volume. After analysis, we found that 70% of detected bladder contours were verified as suspicious. The spinal cord and rectum models verified that 73% and 80% of contours were suspicious, respectively. The contiguousness models were the most accurate models, and the Region Growing model was the most accurate submodel. 100% of the detected noncontiguous contours were verified as suspicious, but in the cases of spinal cord, femoral heads, bladder, and rectum, the Region Growing model detected an additional two to five suspicious contours that the Extent model failed to detect. When conducting a blind review to detect false negatives, it was found that all the data-driven models failed to detect all suspicious contours. The Region Growing contiguousness model produced zero false negatives in all regions of interest other than prostate. With regard to runtime, the contiguousness-via-extent model took an average of 0.2 s per contour. On the other hand, the region-growing method had a longer runtime, which was dependent on the number of voxels in the contour. Both contiguousness models have potential for real-time use in clinical radiotherapy, while the data-driven models are better suited for retrospective use. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
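The contiguousness check lends itself to a compact sketch: label the connected components of a binary contour mask and flag the contour as suspicious when more than one component appears. This is a hedged illustration of the general idea, not the authors' implementation; the function name and toy mask are invented.

```python
# Flag a noncontiguous ROI mask by counting connected components.
import numpy as np
from scipy import ndimage

def is_noncontiguous(mask: np.ndarray) -> bool:
    """Return True when a binary ROI mask has more than one connected component."""
    _, n_components = ndimage.label(mask)
    return n_components > 1

# Toy example: two separated blobs should be flagged.
mask = np.zeros((20, 20, 20), dtype=bool)
mask[2:5, 2:5, 2:5] = True
mask[12:15, 12:15, 12:15] = True
print(is_noncontiguous(mask))  # True
```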
Horowitz-Kraus, Tzipi
2016-05-01
The error-detection mechanism aids in preventing error repetition during a given task. Electroencephalography demonstrates that error detection involves two event-related potential components: error-related and correct-response negativities (ERN and CRN, respectively). Dyslexia is characterized by slow, inaccurate reading. In particular, individuals with dyslexia have a less active error-detection mechanism during reading than typical readers. In the current study, we examined whether a reading training programme could improve the ability to recognize words automatically (lexical representations) in adults with dyslexia, thereby resulting in more efficient error detection during reading. Behavioural and electrophysiological measures were obtained using a lexical decision task before and after participants trained with the reading acceleration programme. ERN amplitudes were smaller in individuals with dyslexia than in typical readers before training but increased following training, as did behavioural reading scores. Differences between the pre-training and post-training ERN and CRN components were larger in individuals with dyslexia than in typical readers. Also, the error-detection mechanism as represented by the ERN/CRN complex might serve as a biomarker for dyslexia and be used to evaluate the effectiveness of reading intervention programmes. Copyright © 2016 John Wiley & Sons, Ltd.
Olson, Eric J.
2013-06-11
An apparatus, program product, and method that run an algorithm on a hardware based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to cause the hardware based processor to heat to a degree that increases the likelihood of hardware errors to manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run that may otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).
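The compare-at-end idea can be sketched in a few lines: run a deterministic, compute-heavy kernel that loads (and heats) the processor, then compare a digest of its output against a golden reference; a mismatch anywhere in the run shows up in the final comparison. Everything here (kernel, sizes, iteration count) is an invented stand-in, and bit-identical floating-point reproducibility on the same machine is assumed.

```python
# Sketch: detect a hardware error by comparing a stress run to a golden run.
import hashlib
import numpy as np

def stress_kernel(n_iters: int, seed: int = 42) -> bytes:
    rng = np.random.default_rng(seed)
    a = rng.normal(size=(256, 256))
    for _ in range(n_iters):                 # repeated matmuls load the FPU
        a = np.tanh(a @ a.T / 256.0)
    return hashlib.sha256(a.tobytes()).digest()

reference = stress_kernel(200)               # golden run on known-good hardware
trial = stress_kernel(200)                   # run under thermal stress
print("hardware error detected" if trial != reference else "outputs match")
```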
Longitudinal motor development of "apparently normal" high-risk infants at 18 months, 3 and 5 years.
Goyen, Traci Anne; Lui, Kei
2002-12-01
Motor development appears to be more affected by premature birth than other developmental domains; however, few studies have specifically investigated the development of gross and fine motor skills in this population. To examine longitudinal motor development in a group of "apparently normal" high-risk infants. Developmental follow-up clinic in a perinatal centre. Longitudinal observational cohort study. Fifty-eight infants born at less than 29 weeks gestation and/or weighing less than 1000 g, without disabilities detected at 12 months. Longitudinal gross and fine motor skills at 18 months, 3 and 5 years using the Peabody Developmental Motor Scales. The HOME scale provided information on the home environment as a stimulus for development. A large proportion (54% at 18 months, 47% at 3 years and 64% at 5 years) of children continued to have fine motor deficits from 18 months to 5 years. The proportion of infants with gross motor deficits significantly increased over this period (14%, 33% and 81%, p<0.001), particularly for the 'micropreemies' (born <750 g). In multivariate analyses, gross motor development was positively influenced by the quality of the home environment. A large proportion of high-risk infants continued to have fine motor deficits, reflecting an underlying problem with fine motor skills. The proportion of infants with gross motor deficits significantly increased as test demands became more challenging. In addition, the development of gross and fine motor skills appears to be influenced differently by the home environment.
Combining forecast weights: Why and how?
NASA Astrophysics Data System (ADS)
Yin, Yip Chee; Kok-Haur, Ng; Hock-Eam, Lim
2012-09-01
This paper proposes a procedure called forecast weight averaging, a specific combination of forecast weights obtained from different methods of constructing forecast weights, for the purpose of improving the accuracy of pseudo out-of-sample forecasting. It is found that under certain specified conditions, forecast weight averaging can lower the mean squared forecast error obtained from model averaging. In addition, we show that in a linear and homoskedastic environment, this superior predictive ability of forecast weight averaging holds true irrespective of whether the coefficients are tested by the t statistic or the z statistic, provided the significance level is within the 10% range. By theoretical proofs and simulation study, we show that model averaging methods such as variance model averaging, simple model averaging and standard error model averaging each produce a mean squared forecast error larger than that of forecast weight averaging. Finally, this result also holds true, marginally, when applied to empirical business and economic data sets: the Gross Domestic Product (GDP) growth rate, Consumer Price Index (CPI) and Average Lending Rate (ALR) of Malaysia.
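A minimal sketch of the combining step: weights produced by several weighting schemes are themselves averaged before combining the candidate models' forecasts. The three schemes below are placeholders standing in for the paper's exact constructions, and all numbers are invented.

```python
# Forecast weight averaging: average the weight vectors, then combine forecasts.
import numpy as np

forecasts = np.array([2.1, 1.8, 2.4])          # candidate model forecasts
mse = np.array([0.9, 1.3, 1.1])                # in-sample MSEs of the models

w_inv_mse = (1 / mse) / (1 / mse).sum()        # variance-style weights
w_equal = np.full(3, 1 / 3)                    # simple (equal) weights
w_softmax = np.exp(-mse) / np.exp(-mse).sum()  # another placeholder scheme

w_avg = (w_inv_mse + w_equal + w_softmax) / 3  # forecast weight averaging
print(w_avg @ forecasts)                       # combined point forecast
```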
Hornewer, Nancy J.
2014-01-01
Recent studies have documented the presence of trace elements, organic compounds including polycyclic aromatic hydrocarbons, and radionuclides in sediment from the Colorado River delta and from sediment in some side canyons in Lake Powell, Utah and Arizona. The fate of many of these contaminants is of significant concern to the resource managers of the National Park Service Glen Canyon National Recreation Area because of potential health impacts to humans and aquatic and terrestrial species. In 2010, the U.S. Geological Survey began a sediment-core sampling and analysis program in the San Juan River and Escalante River deltas in Lake Powell, Utah, to help the National Park Service further document the presence or absence of contaminants in deltaic sediment. Three sediment cores were collected from the San Juan River delta in August 2010 and three sediment cores and an additional replicate core were collected from the Escalante River delta in September 2011. Sediment from the cores was subsampled and composited for analysis of major and trace elements. Fifty-five major and trace elements were analyzed in 116 subsamples and 7 composited samples for the San Juan River delta cores, and in 75 subsamples and 9 composited samples for the Escalante River delta cores. Six composited sediment samples from the San Juan River delta cores and eight from the Escalante River delta cores also were analyzed for 55 low-level organochlorine pesticides and polychlorinated biphenyls, 61 polycyclic aromatic hydrocarbon compounds, gross alpha and gross beta radionuclides, and sediment-particle size. Additionally, water samples were collected from the sediment-water interface overlying each of the three cores collected from the San Juan River and Escalante River deltas. Each water sample was analyzed for 57 major and trace elements. Most of the major and trace elements analyzed were detected at concentrations greater than reporting levels for the sediment-core subsamples and composited samples. Low-level organochlorine pesticides and polychlorinated biphenyls were not detected in any of the samples. Only one polycyclic aromatic hydrocarbon compound was detected at a concentration greater than the reporting level for one San Juan composited sample. Gross alpha and gross beta radionuclides were detected at concentrations greater than reporting levels for all samples. Most of the major and trace elements analyzed were detected at concentrations greater than reporting levels for water samples.
Westbrook, Johanna I; Li, Ling; Lehnbom, Elin C; Baysari, Melissa T; Braithwaite, Jeffrey; Burke, Rosemary; Conn, Chris; Day, Richard O
2015-02-01
To (i) compare medication errors identified at audit and observation with medication incident reports; (ii) identify differences between two hospitals in incident report frequency and medication error rates; (iii) identify prescribing error detection rates by staff. Audit of 3291 patient records at two hospitals to identify prescribing errors and evidence of their detection by staff. Medication administration errors were identified from a direct observational study of 180 nurses administering 7451 medications. Severity of errors was classified. Those likely to lead to patient harm were categorized as 'clinically important'. Two major academic teaching hospitals in Sydney, Australia. Rates of medication errors identified from audit and from direct observation were compared with reported medication incident reports. A total of 12 567 prescribing errors were identified at audit. Of these, 1.2/1000 errors (95% CI: 0.6-1.8) had incident reports. Clinically important prescribing errors (n = 539) were detected by staff at a rate of 218.9/1000 (95% CI: 184.0-253.8), but only 13.0/1000 (95% CI: 3.4-22.5) were reported. 78.1% (n = 421) of clinically important prescribing errors were not detected. A total of 2043 drug administrations (27.4%; 95% CI: 26.4-28.4%) contained ≥1 error; none had an incident report. Hospital A had a higher frequency of incident reports than Hospital B, but a lower rate of errors at audit. Prescribing errors with the potential to cause harm frequently go undetected. Reported incidents do not reflect the profile of medication errors which occur in hospitals or the underlying rates. This demonstrates the inaccuracy of using incident frequency to compare patient risk or quality performance within or across hospitals. New approaches including data mining of electronic clinical information systems are required to support more effective medication error detection and mitigation. © The Author 2015. Published by Oxford University Press in association with the International Society for Quality in Health Care.
Double ErrP Detection for Automatic Error Correction in an ERP-Based BCI Speller.
Cruz, Aniana; Pires, Gabriel; Nunes, Urbano J
2018-01-01
Brain-computer interface (BCI) is a useful device for people with severe motor disabilities. However, due to its low speed and low reliability, BCI still has very limited application in daily real-world tasks. This paper proposes a P300-based BCI speller combined with double error-related potential (ErrP) detection to automatically correct erroneous decisions. This novel approach introduces a second error detection to infer whether a wrong automatic correction also elicits a second ErrP. Thus, two single-trial responses, instead of one, contribute to the final selection, improving the reliability of error detection. Moreover, to increase error detection, the evoked potential detected as the target by the P300 classifier is combined with the evoked error potential at the feature level. Discriminable error and positive potentials (responses to correct feedback) were clearly identified. The proposed approach was tested on nine healthy participants and one tetraplegic participant. The online average accuracies for the first and second ErrPs were 88.4% and 84.8%, respectively. With automatic correction, we achieved an improvement of around 5%, reaching 89.9% spelling accuracy at an effective rate of 2.92 symbols/min. The proposed approach revealed that double ErrP detection can improve the reliability and speed of BCI systems.
The Hubble Space Telescope optical systems failure report
NASA Technical Reports Server (NTRS)
1990-01-01
The findings of the Hubble Space Telescope Optical Systems Board of Investigation are reported. The Board was formed to determine the cause of the flaw in the telescope, how it occurred, and why it was not detected before launch. The Board conducted its investigation to include interviews with personnel involved in the fabrication and test of the telescope, review of documentation, and analysis and test of the equipment used in the fabrication of the telescope's mirrors. The investigation proved that the primary mirror was made in the wrong shape (a 0.4-wave rms wavefront error at 632.8 nm). The primary mirror was manufactured by the Perkin-Elmer Corporation (Hughes Danbury Optical Systems, Inc.). The critical optics used as a template in shaping the mirror, the reflective null corrector (RNC), consisted of two small mirrors and a lens. This unit had been preserved by the manufacturer exactly as it was during the manufacture of the mirror. When the Board measured the RNC, the lens was incorrectly spaced from the mirrors. Calculations of the effect of such displacement on the primary mirror show that the measured amount, 1.3 mm, accounts in detail for the amount and character of the observed image blurring. No verification of the reflective null corrector's dimensions was carried out by Perkin-Elmer after the original assembly. There were, however, clear indications of the problem from auxiliary optical tests made at the time. A special optical unit called an inverse null corrector, designed to mimic the reflection from a perfect primary mirror, was built and used to align the apparatus; when so used, it clearly showed the error in the reflective null corrector. A second null corrector was used to measure the vertex radius of the finished primary mirror. It, too, clearly showed the error in the primary mirror. Both indicators of error were discounted at the time as being themselves flawed. The Perkin-Elmer plan for fabricating the primary mirror placed complete reliance on the reflective null corrector as the only test to be used in both manufacturing and verifying the mirror's surface with the required precision. This methodology should have alerted NASA management to the fragility of the process and the possibility of gross error. Such errors had been seen in other telescope programs, yet no independent tests were planned, although some simple tests to protect against major error were considered and rejected. During the critical time period, there was great concern about cost and schedule, which further inhibited consideration of independent tests.
Experimental investigation of observation error in anuran call surveys
McClintock, B.T.; Bailey, L.L.; Pollock, K.H.; Simons, T.R.
2010-01-01
Occupancy models that account for imperfect detection are often used to monitor anuran and songbird species occurrence. However, presence-absence data arising from auditory detections may be more prone to observation error (e.g., false-positive detections) than are sampling approaches utilizing physical captures or sightings of individuals. We conducted realistic, replicated field experiments using a remote broadcasting system to simulate simple anuran call surveys and to investigate potential factors affecting observation error in these studies. Distance, time, ambient noise, and observer abilities were the most important factors explaining false-negative detections. Distance and observer ability were the best overall predictors of false-positive errors, but ambient noise and competing species also affected error rates for some species. False-positive errors made up 5% of all positive detections, with individual observers exhibiting false-positive rates between 0.5% and 14%. Previous research suggests false-positive errors of these magnitudes would induce substantial positive biases in standard estimators of species occurrence, and we recommend practices to mitigate false positives when developing occupancy monitoring protocols that rely on auditory detections. These recommendations include additional observer training, limiting the number of target species, and establishing distance and ambient noise thresholds during surveys. © 2010 The Wildlife Society.
DeCrappeo, Nicole; DeLorenze, Elizabeth J.; Giguere, Andrew T; Pyke, David A.; Bottomley, Peter J.
2017-01-01
Aim: There is interest in determining how cheatgrass (Bromus tectorum L.) modifies N cycling in sagebrush (Artemisia tridentata Nutt.) soils of the western USA. Methods: To gain insight into the roles of fungi and bacteria in N cycling of cheatgrass-invaded and uninvaded sagebrush soils, the fungal protein synthesis inhibitor cycloheximide (CHX) and the bacteriocidal compound bronopol (BRO) were combined with a 15NH4+ isotope pool dilution approach. Results: CHX reduced gross N mineralization to the same rate in both sagebrush and cheatgrass soils, indicating a role for fungi in N mineralization in both soil types. In cheatgrass soils BRO completely inhibited gross N mineralization, whereas in sagebrush soils a BRO-resistant gross N mineralization rate was detected that was slower than CHX-sensitive gross N mineralization, suggesting that the microbial drivers of gross N mineralization were different in sagebrush and cheatgrass soils. Net N mineralization was stimulated to a higher rate in sagebrush than in cheatgrass soils by CHX, implying that a CHX-inhibited N sink was larger in the former than in the latter soils. Initial gross NH4+ consumption rates were reduced significantly by both CHX and BRO in both soil types, yet consumption rates recovered significantly between 24 and 48 h in CHX-treated sagebrush soils. The recovery of NH4+ consumption in sagebrush soils corresponded with an increase in the rate of net nitrification. Conclusions: These results suggest that cheatgrass invasion of sagebrush soils of the northern Great Basin reduces the capacity of the fungal N consumption sink, enhances the capacity of a CHX-resistant N sink and alters the contributions of bacteria and fungi to gross N mineralization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Damato, Antonio L., E-mail: adamato@lroc.harvard.edu; Viswanathan, Akila N.; Don, Sarah M.
2014-10-15
Purpose: To investigate the use of a system using electromagnetic tracking (EMT), post-processing and an error-detection algorithm for detecting errors and resolving uncertainties in high-dose-rate brachytherapy catheter digitization for treatment planning. Methods: EMT was used to localize 15 catheters inserted into a phantom using a stepwise acquisition technique. Five distinct acquisition experiments were performed. Noise associated with the acquisition was calculated. The dwell location configuration was extracted from the EMT data. A CT scan of the phantom was performed, and five distinct catheter digitization sessions were performed. No a priori registration of the CT scan coordinate system with the EMT coordinate system was performed. CT-based digitization was automatically extracted from the brachytherapy plan DICOM files (CT), and rigid registration was performed between EMT and CT dwell positions. EMT registration error was characterized in terms of the mean and maximum distance between corresponding EMT and CT dwell positions per catheter. An algorithm for error detection and identification was presented. Three types of errors were systematically simulated: swap of two catheter numbers, partial swap of catheter number identification for parts of the catheters (mix), and catheter-tip shift. Error-detection sensitivity (number of simulated scenarios correctly identified as containing an error/number of simulated scenarios containing an error) and specificity (number of scenarios correctly identified as not containing errors/number of correct scenarios) were calculated. Catheter identification sensitivity (number of catheters correctly identified as erroneous across all scenarios/number of erroneous catheters across all scenarios) and specificity (number of catheters correctly identified as correct across all scenarios/number of correct catheters across all scenarios) were calculated. The mean detected and identified shift was calculated. Results: The maximum noise ±1 standard deviation associated with the EMT acquisitions was 1.0 ± 0.1 mm, and the mean noise was 0.6 ± 0.1 mm. Registration of all the EMT and CT dwell positions was associated with a mean catheter error of 0.6 ± 0.2 mm, a maximum catheter error of 0.9 ± 0.4 mm, a mean dwell error of 1.0 ± 0.3 mm, and a maximum dwell error of 1.3 ± 0.7 mm. Error detection and catheter identification sensitivity and specificity of 100% were observed for swap, mix and shift (≥2.6 mm for error detection; ≥2.7 mm for catheter identification) errors. A mean detected shift of 1.8 ± 0.4 mm and a mean identified shift of 1.9 ± 0.4 mm were observed. Conclusions: Registration of the EMT dwell positions to the CT dwell positions was possible with a residual mean error per catheter of 0.6 ± 0.2 mm and a maximum error for any dwell of 1.3 ± 0.7 mm. These low residual registration errors show that quality assurance of the general characteristics of the catheters and of possible errors affecting one specific dwell position is possible. The sensitivity and specificity of the catheter digitization verification algorithm was 100% for swap and mix errors and for shifts ≥2.6 mm. On average, shifts ≥1.8 mm were detected, and shifts ≥1.9 mm were detected and identified.
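The rigid registration between EMT and CT dwell positions can be illustrated with a standard least-squares (Kabsch/Procrustes) alignment; the residual distance per dwell then serves as the error signal. This is a generic sketch on synthetic points, not the authors' code; all data are invented.

```python
# Least-squares rigid registration of EMT points onto CT points.
import numpy as np

def rigid_register(P, Q):
    """Find R, t minimizing ||R @ P_i + t - Q_i|| over corresponding points."""
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # proper rotation
    R = Vt.T @ D @ U.T
    t = Q.mean(axis=0) - R @ P.mean(axis=0)
    return R, t

rng = np.random.default_rng(1)
ct = rng.normal(size=(30, 3))                    # CT dwell positions (toy data)
true_R, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(true_R) < 0:                    # force a proper rotation
    true_R[:, 0] *= -1
emt = ct @ true_R.T + 0.5 + rng.normal(scale=0.01, size=ct.shape)

R, t = rigid_register(emt, ct)
residuals = np.linalg.norm(emt @ R.T + t - ct, axis=1)
print(residuals.mean(), residuals.max())         # large residuals would flag an error
```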
Register file soft error recovery
Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.
2013-10-15
Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.
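A toy software analogue of the claim: each write goes to both copies, each read is checked, and a detected corruption is repaired from the mirror. Parity stands in here for the patent's unspecified error detection circuitry, and all names are invented.

```python
# Mirrored register file with parity-based detection and mirror-based recovery.
class MirroredRegisterFile:
    def __init__(self, n_regs: int):
        self.primary = [0] * n_regs
        self.mirror = [0] * n_regs
        self.parity = [0] * n_regs          # parity of each primary entry

    @staticmethod
    def _parity(v: int) -> int:
        return bin(v).count("1") & 1

    def write(self, i: int, value: int):
        self.primary[i] = self.mirror[i] = value
        self.parity[i] = self._parity(value)

    def read(self, i: int) -> int:
        if self._parity(self.primary[i]) != self.parity[i]:  # corruption detected
            self.primary[i] = self.mirror[i]                 # recover from mirror
        return self.primary[i]

rf = MirroredRegisterFile(4)
rf.write(0, 0b1011)
rf.primary[0] ^= 0b0100        # simulate a soft error (single bit flip)
print(rf.read(0) == 0b1011)    # True: value recovered from the mirror
```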
NASA Technical Reports Server (NTRS)
Bernacki, Bruce E.; Mansuripur, M.
1992-01-01
A commonly used tracking method on pre-grooved magneto-optical (MO) media is the push-pull technique, and the astigmatic method is a popular focus-error detection approach. These two methods are analyzed using DIFFRACT, a general-purpose scalar diffraction modeling program, to observe the effects on the error signals due to focusing lens misalignment, Seidel aberrations, and optical crosstalk (feedthrough) between the focusing and tracking servos. Using the results of the astigmatic/push-pull system as a basis for comparison, a novel focus/track-error detection technique that utilizes a ring toric lens is evaluated as well as the obscuration method (focus error detection only).
Error detection and correction unit with built-in self-test capability for spacecraft applications
NASA Technical Reports Server (NTRS)
Timoc, Constantin
1990-01-01
The objective of this project was to research and develop a 32-bit single chip Error Detection and Correction unit capable of correcting all single bit errors and detecting all double bit errors in the memory systems of a spacecraft. We designed the 32-bit EDAC (Error Detection and Correction unit) based on a modified Hamming code and according to the design specifications and performance requirements. We constructed a laboratory prototype (breadboard) which was converted into a fault simulator. The correctness of the design was verified on the breadboard using an exhaustive set of test cases. A logic diagram of the EDAC was delivered to JPL Section 514 on 4 Oct. 1988.
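A minimal sketch of the modified-Hamming idea the abstract describes, scaled down to one data byte (the actual chip protects 32-bit words): four Hamming parity bits locate any single-bit error, and an overall parity bit distinguishes single errors (corrected) from double errors (detected only).

```python
# SECDED Hamming code for 8 data bits: positions 1..12, parity at 1,2,4,8,
# plus an overall parity bit stored at index 0.
DATA_POS = [3, 5, 6, 7, 9, 10, 11, 12]

def encode(byte: int) -> list[int]:
    code = [0] * 13
    for k, pos in enumerate(DATA_POS):
        code[pos] = (byte >> k) & 1
    for p in (1, 2, 4, 8):                # each parity bit covers positions i with i & p
        code[p] = sum(code[i] for i in range(1, 13) if i & p) & 1
    code[0] = sum(code[1:]) & 1           # overall parity enables double-error detection
    return code

def decode(code: list[int]):
    syndrome = 0
    for p in (1, 2, 4, 8):
        if sum(code[i] for i in range(1, 13) if i & p) & 1:
            syndrome |= p                 # failed checks spell out the error position
    overall = sum(code) & 1
    if syndrome and overall:              # single error: correct it in place
        code[syndrome] ^= 1
    elif syndrome and not overall:
        return None, "double error detected"
    byte = 0
    for k, pos in enumerate(DATA_POS):
        byte |= code[pos] << k
    return byte, "ok"

cw = encode(0xA5)
cw[6] ^= 1                                # single bit error
print(decode(cw))                         # (165, 'ok') after correction

cw = encode(0xA5)
cw[3] ^= 1
cw[10] ^= 1                               # two bit errors
print(decode(cw))                         # (None, 'double error detected')
```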
ERIC Educational Resources Information Center
Abedi, Razie; Latifi, Mehdi; Moinzadeh, Ahmad
2010-01-01
This study addresses long-standing questions in the field of writing about the most effective ways to give feedback on students' written errors, by comparing the effect of error correction and error detection on the improvement of students' writing ability. In order to achieve this goal, 60 pre-intermediate English learners…
Immunohistochemical characterization of neoplastic cells of breast origin.
Noriega, Mariadelasmercedes; Paesani, Fernando; Perazzo, Florencia; Lago, Néstor; Krupitzki, Hugo; Nieto, Silvana; Garcia, Alejandro; Avagnina, Alejandra; Elsner, Boris; Denninghoff, Valeria Cecilia
2012-06-22
After skin cancer, breast cancer is the most common malignancy in women. Tumors of unknown origin account for 5-15% of malignant neoplasms, with 1.5% being breast cancer. An immunohistochemical panel with conventional and newer markers, such as mammaglobin, was selected for the detection of neoplastic cells of breast origin. The specific objectives are: 1) to determine the sensitivity and specificity of the panel, with a special emphasis on the inclusion of the mammaglobin marker, and 2) to compare immunohistochemistry performed on whole tissue sections and on tissue microarrays. Twenty-nine metastatic breast tumors were included and assumed to be tumors of unknown origin. Another 48 biopsies of diverse tissues were selected as negative controls. Tissue microarrays were constructed. Immunohistochemistry for mammaglobin, gross cystic disease fluid protein-15, estrogen receptor, progesterone receptor and cytokeratin 7 was performed. Mammaglobin positive staining was observed in 10/29 cases, in 13/29 cases for gross cystic disease fluid protein-15, in 20/29 cases for estrogen receptor, in 9/29 cases for progesterone receptor, and in 25/29 cases for cytokeratin 7. Among the negative controls, mammaglobin was positive in 2/48, and gross cystic disease fluid protein-15 in 4/48. The inclusion of the mammaglobin (MAG) antibody in the immunohistochemical panel for the detection of tumors of unknown origin contributed to the detection of metastases of breast cancer. The diagnostic strategy with the highest positive predictive value (88%) included hormone receptors and mammaglobin in a serial manner.
Error-Analysis for Correctness, Effectiveness, and Composing Procedure.
ERIC Educational Resources Information Center
Ewald, Helen Rothschild
The assumptions underpinning grammatical mistakes can often be detected by looking for patterns of errors in a student's work. Assumptions that negatively influence rhetorical effectiveness can similarly be detected through error analysis. On a smaller scale, error analysis can also reveal assumptions affecting rhetorical choice. Snags in the…
TU-G-BRD-08: In-Vivo EPID Dosimetry: Quantifying the Detectability of Four Classes of Errors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ford, E; Phillips, M; Bojechko, C
Purpose: EPID dosimetry is an emerging method for treatment verification and QA. Given that the in-vivo EPID technique is in clinical use at some centers, we investigate the sensitivity and specificity for detecting different classes of errors. We assess the impact of these errors using dose volume histogram endpoints. Though data exist for EPID dosimetry performed pre-treatment, this is the first study quantifying its effectiveness when used during patient treatment (in-vivo). Methods: We analyzed 17 patients; EPID images of the exit dose were acquired and used to reconstruct the planar dose at isocenter. This dose was compared to the TPS dose using a 3%/3mm gamma criteria. To simulate errors, modifications were made to treatment plans using four possible classes of error: 1) patient misalignment, 2) changes in patient body habitus, 3) machine output changes and 4) MLC misalignments. Each error was applied with varying magnitudes. To assess the detectability of the error, the area under a ROC curve (AUC) was analyzed. The AUC was compared to changes in D99 of the PTV introduced by the simulated error. Results: For systematic changes in the MLC leaves, changes in the machine output and patient habitus, the AUC varied from 0.78–0.97 scaling with the magnitude of the error. The optimal gamma threshold as determined by the ROC curve varied between 84–92%. There was little diagnostic power in detecting random MLC leaf errors and patient shifts (AUC 0.52–0.74). Some errors with weak detectability had large changes in D99. Conclusion: These data demonstrate the ability of EPID-based in-vivo dosimetry in detecting variations in patient habitus and errors related to machine parameters such as systematic MLC misalignments and machine output changes. There was no correlation found between the detectability of the error using the gamma pass rate, ROC analysis and the impact on the dose volume histogram. Funded by grant R18HS022244 from AHRQ.
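A hedged sketch of the ROC analysis described: gamma pass rates from error-free and error-injected deliveries are scored as a binary detector and summarized by the AUC, with Youden's J picking a gamma threshold. All numbers below are invented placeholders, not the study's data.

```python
# ROC analysis of gamma pass rates as an error detector.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

gamma_ok = np.array([96.1, 94.8, 97.3, 95.5, 93.9])   # no-error deliveries
gamma_err = np.array([88.2, 91.0, 86.5, 90.4, 89.1])  # error-injected deliveries

labels = np.r_[np.zeros_like(gamma_ok), np.ones_like(gamma_err)]
scores = -np.r_[gamma_ok, gamma_err]       # lower pass rate => more suspicious

print("AUC:", roc_auc_score(labels, scores))
fpr, tpr, thresholds = roc_curve(labels, scores)
best = thresholds[np.argmax(tpr - fpr)]    # Youden's J picks the operating point
print("optimal gamma threshold:", -best)
```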
3D measurement using combined Gray code and dual-frequency phase-shifting approach
NASA Astrophysics Data System (ADS)
Yu, Shuang; Zhang, Jing; Yu, Xiaoyang; Sun, Xiaoming; Wu, Haibin; Liu, Xin
2018-04-01
The combined Gray code and phase-shifting approach is a commonly used 3D measurement technique. In this technique, an error equal to an integer multiple of the phase-shifted fringe period, i.e., a period jump error, often exists in the absolute analog code, which can lead to gross measurement errors. To overcome this problem, the present paper proposes 3D measurement using a combined Gray code and dual-frequency phase-shifting approach. Based on 3D measurement using the combined Gray code and phase-shifting approach, one set of low-frequency phase-shifted fringe patterns, whose period is an odd-numbered multiple of the original phase-shifted fringe period, is added. Thus, the absolute analog code measured value can be obtained by the combined Gray code and phase-shifting approach, and the low-frequency absolute analog code measured value can also be obtained from the added low-frequency phase-shifted fringe patterns. Then, the corrected absolute analog code measured value can be obtained by correcting the former with the latter, and the period jump errors can be eliminated, resulting in reliable analog code unwrapping. For the proposed approach, we established its measurement model, analyzed its measurement principle, expounded the mechanism of eliminating period jump errors by error analysis, and determined its applicable conditions. Theoretical analysis and experimental results show that the proposed approach can effectively eliminate period jump errors, reliably perform analog code unwrapping, and improve the measurement accuracy.
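The correction step can be sketched numerically: a coarse absolute phase recovered from the low-frequency fringes determines the fringe order of the wrapped high-frequency phase, which removes period jumps. The period ratio r and the noise level below are invented for illustration and do not come from the paper.

```python
# Dual-frequency correction: coarse absolute phase fixes the fringe order.
import numpy as np

r = 5                                    # low-frequency period = 5 x high-frequency period
true_phase = np.linspace(0, 40 * np.pi, 500)         # ground-truth absolute phase

phi_high = np.mod(true_phase, 2 * np.pi)             # wrapped high-frequency phase
coarse = true_phase / r + np.random.default_rng(2).normal(scale=0.1, size=500)
# 'coarse' plays the role of the noisy low-frequency absolute phase measurement

order = np.round((r * coarse - phi_high) / (2 * np.pi))  # fringe order, jump-free
unwrapped = phi_high + 2 * np.pi * order
print(np.max(np.abs(unwrapped - true_phase)))        # ~0: no period jumps remain
```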
Dahmcke, Christina M; Steven, Kenneth E; Larsen, Louise K; Poulsen, Asger L; Abdul-Al, Ahmad; Dahl, Christina; Guldberg, Per
2016-12-01
Retrospective studies have provided proof of principle that bladder cancer can be detected by testing for the presence of tumor DNA in urine. We have conducted a prospective blinded study to determine whether a urine-based DNA test can replace flexible cystoscopy in the initial assessment of gross hematuria. A total of 475 consecutive patients underwent standard urological examination including flexible cystoscopy and computed tomography urography, and provided urine samples immediately before (n=461) and after (n=444) cystoscopy. Urine cells were collected using a filtration device and tested for eight DNA mutation and methylation biomarkers. Clinical evaluation identified 99 (20.8%) patients with urothelial bladder tumors. With this result as a reference and based on the analysis of all urine samples, the DNA test had a sensitivity of 97.0%, a specificity of 76.9%, a positive predictive value of 52.5%, and a negative predictive value of 99.0%. In three patients with a positive urine-DNA test without clinical evidence of cancer, a tumor was detected at repeat cystoscopy within 16 mo. Our results suggest that urine-DNA testing can be used to identify a large subgroup of patients with gross hematuria in whom cystoscopy is not required. We tested the possibility of using a urine-based DNA test to check for bladder cancer in patients with visible blood in the urine. Our results show that the test efficiently detects bladder cancer and therefore may be used to greatly reduce the number of patients who would need to undergo cystoscopy. Copyright © 2016 European Association of Urology. Published by Elsevier B.V. All rights reserved.
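The reported predictive values are consistent with the stated sensitivity, specificity, and 20.8% tumor prevalence; the small check below (assuming those rounded figures) reproduces them.

```python
# Consistency check: derive PPV and NPV from sensitivity, specificity, prevalence.
n, prevalence = 475, 99 / 475
sens, spec = 0.970, 0.769

tp = sens * n * prevalence              # true positives
fn = n * prevalence - tp                # false negatives
tn = spec * n * (1 - prevalence)        # true negatives
fp = n * (1 - prevalence) - tn          # false positives

print("PPV:", tp / (tp + fp))           # ~0.525, matching the reported 52.5%
print("NPV:", tn / (tn + fn))           # ~0.990, matching the reported 99.0%
```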
Latent error detection: A golden two hours for detection.
Saward, Justin R E; Stanton, Neville A
2017-03-01
Undetected error in safety-critical contexts generates a latent condition that can contribute to a future safety failure. The detection of latent errors post-task completion was observed in naval air engineers using a diary to record work-related latent error detection (LED) events. A systems view is combined with multi-process theories to explore sociotechnical factors associated with LED. Perception of cues in different environments facilitates successful LED, for which the deliberate review of past tasks within two hours of the error occurring, whilst remaining in the same or a similar sociotechnical environment to that in which the error occurred, appears most effective. Identified ergonomic interventions offer potential mitigation for latent errors, particularly in simple everyday habitual tasks. It is thought that safety-critical organisations should look to engineer further resilience through the application of LED techniques that engage with system cues across the entire sociotechnical environment, rather than relying on consistent human performance. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
Error Detection/Correction in Collaborative Writing
ERIC Educational Resources Information Center
Pilotti, Maura; Chodorow, Martin
2009-01-01
In the present study, we examined error detection/correction during collaborative writing. Subjects were asked to identify and correct errors in two contexts: a passage written by the subject (familiar text) and a passage written by a person other than the subject (unfamiliar text). A computer program inserted errors in function words prior to the…
Gaining Insight Into Femtosecond-scale CMOS Effects using FPGAs
2015-03-24
…paths or detecting gross path delay faults, but for characterizing subtle aging effects there is a need to isolate very short paths and detect very… data using COTS FPGAs and novel self-test. Hardware experiments using a 28 nm FPGA demonstrate isolation of small sets of transistors, detection of… hold the static configuration data specifying the LUT function. A set of inverters drive the SRAM contents into a pass-gate multiplexor tree; we…
Krimmel, R.M.
1999-01-01
Net mass balance has been measured since 1958 at South Cascade Glacier using the 'direct method', i.e., area averages of snow gain and firn and ice loss at stakes. Analysis of cartographic vertical photography has allowed measurement of mass balance using the 'geodetic method' in 1970, 1975, 1977, 1979-80, and 1985-97. Water-equivalent change as measured by these nearly independent methods should give similar results. During 1970-97, the direct method shows a cumulative balance of about -15 m, and the geodetic method shows a cumulative balance of about -22 m. The deviation between the two methods is fairly consistent, suggesting no gross errors in either, but rather a cumulative systematic error. It is suspected that the cumulative error is in the direct method, because the geodetic method is based on a non-changing reference (the bedrock control), whereas the direct method is measured with reference only to the previous year's summer surface. Possible sources of mass loss that are missing from the direct method are basal melt, internal melt, and ablation on crevasse walls. Possible systematic measurement errors include under-estimation of the density of lost material, sinking stakes, or poorly represented areas.
ERIC Educational Resources Information Center
Lu, Hui-Chuan; Chu, Yu-Hsin; Chang, Cheng-Yu
2013-01-01
Compared with English learners, Spanish learners have fewer resources for automatic error detection and revision. Following the current integrative Computer Assisted Language Learning (CALL) approach, we combined a corpus-based approach with CALL to create the System of Error Detection and Revision Suggestion (SEDRS) for learning Spanish. Through…
Computer-Assisted Detection of 90% of EFL Student Errors
ERIC Educational Resources Information Center
Harvey-Scholes, Calum
2018-01-01
Software can facilitate English as a Foreign Language (EFL) students' self-correction of their free-form writing by detecting errors; this article examines the proportion of errors which software can detect. A corpus of 13,644 words of written English was created, comprising 90 compositions written by Spanish-speaking students at levels A2-B2…
Prediction of municipal solid waste generation using nonlinear autoregressive network.
Younes, Mohammad K; Nopiah, Z M; Basri, N E Ahmad; Basri, H; Abushammala, Mohammed F M; Maulud, K N A
2015-12-01
Most of the developing countries have solid waste management problems. Solid waste strategic planning requires accurate prediction of the quality and quantity of the generated waste. In developing countries, such as Malaysia, the solid waste generation rate is increasing rapidly, due to population growth and new consumption trends that characterize society. This paper proposes an artificial neural network (ANN) approach using feedforward nonlinear autoregressive network with exogenous inputs (NARX) to predict annual solid waste generation in relation to demographic and economic variables like population number, gross domestic product, electricity demand per capita and employment and unemployment numbers. In addition, variable selection procedures are also developed to select a significant explanatory variable. The model evaluation was performed using coefficient of determination (R(2)) and mean square error (MSE). The optimum model that produced the lowest testing MSE (2.46) and the highest R(2) (0.97) had three inputs (gross domestic product, population and employment), eight neurons and one lag in the hidden layer, and used Fletcher-Powell's conjugate gradient as the training algorithm.
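A hedged sketch of the NARX idea with the selected inputs: the network predicts this year's generation from last year's generation plus the exogenous drivers (GDP, population, employment), matching the paper's one-lag, eight-neuron choice. The data, scaling, and training algorithm here are stand-ins (scikit-learn's default Adam solver rather than Fletcher-Powell conjugate gradient).

```python
# One-lag NARX-style model: lagged output plus exogenous inputs into a small MLP.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(3)
years = 40
X_exog = rng.normal(size=(years, 3))                 # gdp, population, employment (synthetic)
waste = np.cumsum(0.5 + 0.2 * X_exog.sum(axis=1) + rng.normal(scale=0.1, size=years))

X = np.column_stack([waste[:-1], X_exog[1:]])        # lagged output + exogenous inputs
y = waste[1:]
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, y)

pred = model.predict(X)
print("MSE:", mean_squared_error(y, pred), "R2:", r2_score(y, pred))
```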
Reflective-impulsive style and conceptual tempo in a gross motor task.
Keller, J; Ripoll, H
2001-06-01
The reflective-impulsive construct refers to responses made slowly or quickly in a situation with high uncertainty. Children who are labeled "reflective" take a longer time to respond and make few errors, whereas "impulsive" children are fast and inaccurate. Although the validity of the test and the definition of the reflective-impulsive style are well accepted, whether such children respond quickly or slowly across all tasks is questioned. Some children do not fit the dichotomy; two other groups arise, the fast-accurate and the slow-inaccurate. The response styles of 86 boys, ages 5, 7, and 9 years, performing a gross motor task, i.e., hitting a ball with a racquet, were studied. Analysis indicated that the children slowest on the Matching Familiar Figures Test can be faster than the fastest ones on the motor task and remain more accurate. As the definition of the reflective-impulsive style is based on time, the reflective ones might better be viewed as children who can adapt their response time to the context and thus be more efficient at problem-solving.
Detection and avoidance of errors in computer software
NASA Technical Reports Server (NTRS)
Kinsler, Les
1989-01-01
The acceptance test errors of a computer software project were analyzed to determine whether the errors could have been detected or avoided in earlier phases of development. GROAGSS (Gamma Ray Observatory Attitude Ground Support System) was selected as the software project to be examined. The development of the software followed the standard Flight Dynamics Software Development methods. GROAGSS was developed between August 1985 and April 1989. The project comprises approximately 250,000 lines of code, of which approximately 43,000 lines are reused from previous projects. GROAGSS had a total of 1715 Change Report Forms (CRFs) submitted during the entire development and testing. These changes contained 936 errors. Of these 936 errors, 374 were found during acceptance testing. These acceptance test errors were first categorized by method of avoidance, including: more clearly written requirements; detailed review; code reading; structural unit testing; and functional system integration testing. The errors were then broken down in terms of effort to detect and correct, class of error, and probability that the prescribed detection method would be successful. These determinations were based on Software Engineering Laboratory (SEL) documents and interviews with the project programmers. A summary of the results of the categorizations is presented. The results suggest that the number of programming errors remaining at the beginning of acceptance testing can be significantly reduced. The results of the existing development methodology are examined for possible improvements. A basis is provided for the definition of a new development/testing paradigm. Monitoring of the new scheme will objectively determine its effectiveness at avoiding and detecting errors.
Feuerstein, Marco; Reichl, Tobias; Vogel, Jakob; Traub, Joerg; Navab, Nassir
2009-06-01
Electromagnetic tracking is currently one of the most promising means of localizing flexible endoscopic instruments such as flexible laparoscopic ultrasound transducers. However, electromagnetic tracking is also susceptible to interference from ferromagnetic material, which distorts the magnetic field and leads to tracking errors. This paper presents new methods for real-time online detection and reduction of dynamic electromagnetic tracking errors when localizing a flexible laparoscopic ultrasound transducer. We use a hybrid tracking setup to combine optical tracking of the transducer shaft and electromagnetic tracking of the flexible transducer tip. A novel approach of modeling the poses of the transducer tip in relation to the transducer shaft allows us to reliably detect and significantly reduce electromagnetic tracking errors. For detecting errors of more than 5 mm, we achieved a sensitivity and specificity of 91% and 93%, respectively. Initial 3-D rms error of 6.91 mm were reduced to 3.15 mm.
New double-byte error-correcting codes for memory systems
NASA Technical Reports Server (NTRS)
Feng, Gui-Liang; Wu, Xinen; Rao, T. R. N.
1996-01-01
Error-correcting or error-detecting codes have been used in the computer industry to increase reliability, reduce service costs, and maintain data integrity. The single-byte error-correcting and double-byte error-detecting (SbEC-DbED) codes have been successfully used in computer memory subsystems. There are many methods to construct double-byte error-correcting (DBEC) codes. In the present paper we construct a class of double-byte error-correcting codes, which are more efficient than those known to be optimum, and a decoding procedure for our codes is also considered.
Detecting and Characterizing Semantic Inconsistencies in Ported Code
NASA Technical Reports Server (NTRS)
Ray, Baishakhi; Kim, Miryung; Person, Suzette; Rungta, Neha
2013-01-01
Adding similar features and bug fixes often requires porting program patches from reference implementations and adapting them to target implementations. Porting errors may result from faulty adaptations or inconsistent updates. This paper investigates (1) the types of porting errors found in practice, and (2) how to detect and characterize potential porting errors. Analyzing version histories, we define five categories of porting errors, including incorrect control- and data-flow, code redundancy, inconsistent identifier renamings, etc. Leveraging this categorization, we design a static control- and data-dependence analysis technique, SPA, to detect and characterize porting inconsistencies. Our evaluation on code from four open-source projects shows that SPA can detect porting inconsistencies with 65% to 73% precision and 90% recall, and identify inconsistency types with 58% to 63% precision and 92% to 100% recall. In a comparison with two existing error detection tools, SPA improves precision by 14 to 17 percentage points.
Sawada, Kazuhiko; Sun, Xue-Zhi; Fukunishi, Katsuhiro; Kashima, Masatoshi; Sakata-Haga, Hiromi; Tokado, Hiroshi; Aoki, Ichio; Fukui, Yoshihiro
2009-09-01
The aim of this study was to spatio-temporally clarify gross structural changes in the forebrain of cynomolgus monkey fetuses using 7-tesla magnetic resonance imaging (MRI). T(1)-weighted coronal, horizontal, and sagittal MR slices of fixed left cerebral hemispheres were obtained from one male fetus at embryonic days (EDs) 70-150. The timetable for fetal sulcation by MRI was in good agreement with that by gross observations, with a lag time of 10-30 days. A difference in detectability of some sulci seemed to be associated with the length, depth, width, and location of the sulci. Furthermore, MRI clarified the embryonic days of the emergence of the callosal (ED 70) and circular (ED 90) sulci, which remained unpredictable under gross observations. Also made visible by the present MRI were subcortical structures of the forebrain such as the caudate nucleus, globus pallidus, putamen, major subdivisions of the thalamus, and hippocampal formation. Their adult-like features were formed by ED 100, corresponding to the onset of a signal enhancement in the gray matter, which reflects neuronal maturation. The results reveal a highly reproducible level of gross structural changes in the forebrain using a high spatial 7-tesla MRI. The present MRI study clarified some changes that are difficult to demonstrate nondestructively using only gross observations, for example, the development of cerebral sulci located on the deep portions of the cortex, as well as cortical and subcortical neuronal maturation.
Matsudate, Yoshihiro; Naruto, Takuya; Hayashi, Yumiko; Minami, Mitsuyoshi; Tohyama, Mikiko; Yokota, Kenji; Yamada, Daisuke; Imoto, Issei; Kubo, Yoshiaki
2017-06-01
Nevoid basal cell carcinoma syndrome (NBCCS) is an autosomal dominant disorder mainly caused by heterozygous mutations of PTCH1. In addition to characteristic clinical features, detection of a mutation in causative genes is reliable for the diagnosis of NBCCS; however, no mutations have been identified in some patients using conventional methods. To improve the method for the molecular diagnosis of NBCCS, we performed targeted exome sequencing (TES) analysis using a multi-gene panel, including PTCH1, PTCH2, SUFU, and other sonic hedgehog signaling pathway-related genes, based on next-generation sequencing (NGS) technology in 8 cases in whom possible causative mutations were not detected by previously performed conventional analysis and in 2 recent cases of NBCCS. Subsequent analysis of gross deletions within or around PTCH1 detected by TES was performed using chromosomal microarray (CMA). Through TES analysis, specific single-nucleotide variants or small indels of PTCH1 causing inferred amino acid changes were identified in 2 novel cases and 2 undiagnosed cases, whereas gross deletions within or around PTCH1, validated by CMA, were found in 3 undiagnosed cases. However, no mutations were detected even by TES in 3 cases. Among the 3 cases with gross deletions of PTCH1, deletions containing the entire PTCH1 and additional neighboring genes were detected in 2 cases, one of which exhibited atypical clinical features, such as severe mental retardation, likely associated with genes located within the 4.3-Mb deleted region. TES-based simultaneous evaluation of sequences and copy-number status in all targeted coding exons by NGS is likely to be more useful for the molecular diagnosis of NBCCS than conventional methods. CMA is recommended as a subsequent analysis for validation and detailed mapping of deleted regions, which may explain atypical clinical features of NBCCS cases. Copyright © 2017 Japanese Society for Investigative Dermatology. Published by Elsevier B.V. All rights reserved.
Bartholomay, Roy C.; Knobel, LeRoy L.; Tucker, Betty J.; Twining, Brian V.
2000-01-01
The U.S. Geological Survey, in response to a request from the U.S. Department of Energy's Pittsburgh Naval Reactors Office, Idaho Branch Office, sampled water from 13 wells during 1997-98 as part of a long-term project to monitor water quality of the Snake River Plain aquifer in the vicinity of the Naval Reactors Facility, Idaho National Engineering and Environmental Laboratory, Idaho. Water samples were analyzed for naturally occurring constituents and man-made contaminants. A total of 91 samples were collected from the 13 monitoring wells. The routine samples contained detectable concentrations of total cations, dissolved anions, and nitrite plus nitrate as nitrogen. Most of the samples also had detectable concentrations of gross alpha- and gross beta-particle radioactivity and tritium. Fourteen quality-assurance samples also were collected and analyzed; seven were field-blank samples and seven were replicate samples. Most of the field-blank samples contained less-than-detectable concentrations of target constituents; however, some blank samples did contain detectable concentrations of calcium, magnesium, barium, copper, manganese, nickel, zinc, nitrite plus nitrate, total organic halogens, tritium, and selected volatile organic compounds.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fuangrod, T; Simpson, J; Greer, P
Purpose: A real-time patient treatment delivery verification system using EPID (Watchdog) has been developed as an advanced patient safety tool. In a pilot study, data were acquired for 119 prostate and head and neck (HN) IMRT patient deliveries to generate body-site-specific action limits using statistical process control. The purpose of this study is to determine the sensitivity of Watchdog in detecting clinically significant errors during treatment delivery. Methods: Watchdog utilizes a physics-based model to generate a series of predicted transit cine EPID images as a reference data set, and compares these in real-time to measured transit cine EPID images acquired during treatment using chi comparison (4%, 4 mm criteria) after the initial 2 s of treatment to allow for dose ramp-up. Four study cases were used: dosimetric (monitor unit) errors in prostate (7 fields) and HN (9 fields) IMRT treatments of (5%, 7%, 10%) and positioning (systematic displacement) errors in the same treatments of (5 mm, 7 mm, 10 mm). These errors were introduced by modifying the patient CT scan and re-calculating the predicted EPID data set. The error-embedded predicted EPID data sets were compared to the measured EPID data acquired during patient treatment. The treatment delivery percentage (measured from 2 s) at which Watchdog detected the error was determined. Results: Watchdog detected all simulated errors for all fields during delivery. The dosimetric errors were detected at average treatment delivery percentages of (4%, 0%, 0%) and (7%, 0%, 0%) for prostate and HN, respectively. For patient positional errors, the average treatment delivery percentages were (52%, 43%, 25%) and (39%, 16%, 6%). Conclusion: These results suggest that Watchdog can detect significant dosimetric and positioning errors in prostate and HN IMRT treatments in real-time, allowing for treatment interruption. Displacements of the patient take longer to detect; however, an incorrect body site or a very large geographic miss will be detected rapidly.
The Watchdog Task: Concurrent error detection using assertions
NASA Technical Reports Server (NTRS)
Ersoz, A.; Andrews, D. M.; Mccluskey, E. J.
1985-01-01
The Watchdog Task, a software abstraction of the Watchdog-processor, is shown to be a powerful error detection tool with a great deal of flexibility and the advantages of watchdog techniques. A Watchdog Task system in Ada is presented; issues of recovery, latency, efficiency (communication) and preprocessing are discussed. Different applications, one of which is error detection on a single processor, are examined.
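As a loose illustration of the Watchdog Task concept (in Python rather than the paper's Ada), assuming a message channel between the tasks: the main computation posts assertions, and a concurrent watchdog task evaluates them and flags violations without halting the main task. All names here are invented.

```python
# Watchdog task: concurrent error detection by checking posted assertions.
import threading
import queue

checks: queue.Queue = queue.Queue()

def watchdog():
    while True:
        name, predicate = checks.get()
        if name == "stop":
            break
        if not predicate():                       # assertion violated
            print(f"watchdog: error detected in '{name}'")

def main_task():
    balance = 100
    balance -= 150                                # buggy update
    # post an assertion to the watchdog instead of checking inline
    checks.put(("balance non-negative", lambda b=balance: b >= 0))
    checks.put(("stop", None))

t = threading.Thread(target=watchdog)
t.start()
main_task()
t.join()
```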
A Review of Research on Error Detection. Technical Report No. 540.
ERIC Educational Resources Information Center
Meyer, Linda A.
A review was conducted of error detection studies completed with children, adolescents, and young adults to determine at what age children begin to detect errors in texts. The studies were grouped according to the subjects' ages. The focus of the review was on the following aspects of each study: the hypothesis that guided the…
Healthcare: affordable quality coverage for all.
Lee, Keat Jin
2009-06-01
The quality of medical care available in the United States is the best in the world. However, today's American healthcare delivery system is unacceptable. It is too expensive, disjointed, and wasteful. The amount spent on healthcare in the United States is sufficient to meet everyone's needs; the reason it does not is that the money is misspent. Healthcare makes up 16 percent of the gross domestic product, or $2.3 trillion, yet 46 million people are uninsured, the majority of people are underinsured, and even those with insurance suffer significant hassles in receiving healthcare. Medical errors occur at alarming rates. The lack of quality measures to define best practices leads to a wide variation of practices and costs. Fragmented healthcare leads to errors. The goal of this paper is to explore a set of 20 comprehensive steps to begin reform of healthcare in this country.
Huff, Mark J; Umanath, Sharda
2018-06-01
In 2 experiments, we assessed age-related suggestibility to additive and contradictory misinformation (i.e., the remembering of false details from an external source). After reading a fictional story, participants answered questions containing misleading details that were either additive (misleading details that supplemented an original event) or contradictory (errors that changed original details). On a final test, suggestibility was greater for additive than contradictory misinformation, and older adults endorsed fewer false contradictory details than younger adults. To mitigate suggestibility in Experiment 2, participants were warned about potential errors, instructed to detect errors, or instructed to detect errors after exposure to examples of additive and contradictory details. Again, suggestibility to additive misinformation was greater than to contradictory misinformation, and older adults endorsed less contradictory misinformation. Only after detection instructions with misinformation examples were younger adults able to reduce contradictory misinformation effects, and they reduced these effects to the level of older adults. Additive misinformation, however, was immune to all warning and detection instructions. Thus, older adults were less susceptible to contradictory misinformation errors, and younger adults could match this misinformation rate when warning/detection instructions were strong. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Comparing risk in conventional and organic dairy farming in the Netherlands: an empirical analysis.
Berentsen, P B M; Kovacs, K; van Asseldonk, M A P M
2012-07-01
This study was undertaken to contribute to the understanding of why most dairy farmers do not convert to organic farming. Therefore, the objective of this research was to assess and compare risks for conventional and organic farming in the Netherlands with respect to gross margin and the underlying price and production variables. To investigate the risk factors a farm accountancy database was used containing panel data from both conventional and organic representative Dutch dairy farms (2001-2007). Variables with regard to price and production risk were identified using a gross margin analysis scheme. Price risk variables were milk price and concentrate price. The main production risk variables were milk yield per cow, roughage yield per hectare, and veterinary costs per cow. To assess risk, an error component implicit detrending method was applied and the resulting detrended standard deviations were compared between conventional and organic farms. Results indicate that the risk included in the gross margin per cow is significantly higher in organic farming. This is caused by both higher price and production risks. Price risks are significantly higher in organic farming for both milk price and concentrate price. With regard to production risk, only milk yield per cow poses a significantly higher risk in organic farming. Copyright © 2012 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Di, Sheng; Berrocal, Eduardo; Cappello, Franck
The silent data corruption (SDC) problem is attracting increasing attention because it is expected to have a great impact on exascale HPC applications. SDC faults are hazardous in that they pass unnoticed by hardware and can lead to wrong computation results. In this work, we formulate SDC detection as a runtime one-step-ahead prediction method, leveraging multiple linear prediction methods in order to improve the detection results. The contributions are twofold: (1) we propose an error feedback control model that can reduce the prediction errors for different linear prediction methods, and (2) we propose a spatial-data-based even-sampling method to minimize the detection overheads (including memory and computation cost). We implement our algorithms in the fault tolerance interface, a fault tolerance library with multiple checkpoint levels, such that users can conveniently protect their HPC applications against both SDC errors and fail-stop errors. We evaluate our approach by using large-scale traces from well-known, large-scale HPC applications, as well as by running those HPC applications on a real cluster environment. Experiments show that our error feedback control model can improve detection sensitivity by 34-189% for bit-flip memory errors injected with the bit positions in the range [20,30], without any degradation in detection accuracy. Furthermore, memory size can be reduced by 33% with our spatial-data even-sampling method, with only a slight and graceful degradation in detection sensitivity.
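A minimal sketch of the one-step-ahead idea, assuming a single linear-extrapolation predictor and a running error estimate in place of the paper's multi-predictor, error-feedback scheme; `theta` and the window length are illustrative values.

```python
import numpy as np

def sdc_suspect(history, new_value, theta=4.0, window=10):
    """Flag `new_value` if it deviates from a one-step-ahead prediction
    by more than `theta` times a running error estimate."""
    if len(history) < 3:
        return False
    pred = 2.0 * history[-1] - history[-2]              # linear extrapolation
    running_err = np.abs(np.diff(history[-window:])).mean()
    return abs(new_value - pred) > theta * max(running_err, 1e-12)
```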
Repeat-aware modeling and correction of short read errors.
Yang, Xiao; Aluru, Srinivas; Dorman, Karin S
2011-02-15
High-throughput short read sequencing is revolutionizing genomics and systems biology research by enabling cost-effective deep coverage sequencing of genomes and transcriptomes. Error detection and correction are crucial to many short read sequencing applications including de novo genome sequencing, genome resequencing, and digital gene expression analysis. Short read error detection is typically carried out by counting the observed frequencies of kmers in reads and validating those with frequencies exceeding a threshold. In case of genomes with high repeat content, an erroneous kmer may be frequently observed if it has few nucleotide differences with valid kmers with multiple occurrences in the genome. Error detection and correction were mostly applied to genomes with low repeat content and this remains a challenging problem for genomes with high repeat content. We develop a statistical model and a computational method for error detection and correction in the presence of genomic repeats. We propose a method to infer genomic frequencies of kmers from their observed frequencies by analyzing the misread relationships among observed kmers. We also propose a method to estimate the threshold useful for validating kmers whose estimated genomic frequency exceeds the threshold. We demonstrate that superior error detection is achieved using these methods. Furthermore, we break away from the common assumption of uniformly distributed errors within a read, and provide a framework to model position-dependent error occurrence frequencies common to many short read platforms. Lastly, we achieve better error correction in genomes with high repeat content. The software is implemented in C++ and is freely available under GNU GPL3 license and Boost Software V1.0 license at "http://aluru-sun.ece.iastate.edu/doku.php?id = redeem". We introduce a statistical framework to model sequencing errors in next-generation reads, which led to promising results in detecting and correcting errors for genomes with high repeat content.
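The core counting step can be sketched as follows; the fixed cutoff stands in for the genomic-frequency-based threshold the paper estimates, and `k` and `threshold` are illustrative values only.

```python
from collections import Counter

def kmer_counts(reads, k=15):
    """Count every k-mer occurring in the reads."""
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

def suspect_kmers(counts, threshold=3):
    """k-mers seen fewer than `threshold` times are candidate errors;
    in repeat-rich genomes this naive cutoff is what the paper's
    inferred genomic frequencies are meant to replace."""
    return {kmer for kmer, c in counts.items() if c < threshold}
```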
Using video recording to identify management errors in pediatric trauma resuscitation.
Oakley, Ed; Stocker, Sergio; Staubli, Georg; Young, Simon
2006-03-01
To determine the ability of video recording to identify management errors in trauma resuscitation and to compare this method with medical record review. The resuscitation of children who presented to the emergency department of the Royal Children's Hospital between February 19, 2001, and August 18, 2002, for whom the trauma team was activated was video recorded. The tapes were analyzed, and management was compared with Advanced Trauma Life Support guidelines. Deviations from these guidelines were recorded as errors. Fifty video recordings were analyzed independently by 2 reviewers. Medical record review was undertaken for a cohort of the most seriously injured patients, and errors were identified. The errors detected with the 2 methods were compared. Ninety resuscitations were video recorded and analyzed. An average of 5.9 errors per resuscitation was identified with this method (range: 1-12 errors). Twenty-five children (28%) had an injury severity score of >11; there was an average of 2.16 errors per patient in this group. Only 10 (20%) of these errors were detected in the medical record review. Medical record review detected an additional 8 errors that were not evident on the video recordings. Concordance between independent reviewers was high, with 93% agreement. Video recording is more effective than medical record review in detecting management errors in pediatric trauma resuscitation. Management errors in pediatric trauma resuscitation are common and often involve basic resuscitation principles. Resuscitation of the most seriously injured children was associated with fewer errors. Video recording is a useful adjunct to trauma resuscitation auditing.
Simultaneous message framing and error detection
NASA Technical Reports Server (NTRS)
Frey, A. H., Jr.
1968-01-01
Circuitry simultaneously inserts message framing information and detects noise errors in binary code data transmissions. Separate message groups are framed without requiring both framing bits and error-checking bits, and predetermined message sequences are separated from other message sequences without being hampered by intervening noise.
Multi-bits error detection and fast recovery in RISC cores
NASA Astrophysics Data System (ADS)
Jing, Wang; Xing, Yang; Yuanfu, Zhao; Weigong, Zhang; Jiao, Shen; Keni, Qiu
2015-11-01
Particle-induced soft errors are a major threat to the reliability of microprocessors. Even worse, multi-bit upsets (MBUs) are ever more frequent due to the rapidly shrinking feature size of ICs. Several architecture-level mechanisms have been proposed to protect microprocessors from soft errors, such as dual and triple modular redundancy (DMR and TMR). However, most of them are inefficient against the growing number of multi-bit errors or cannot well balance critical-path delay, area and power penalties. This paper proposes a novel architecture, self-recovery dual-pipeline (SRDP), to effectively provide soft error detection and recovery at low cost for general RISC structures. We focus on the following three aspects. First, an advanced DMR pipeline is devised to detect soft errors, especially MBUs. Second, SEU/MBU errors can be located by enhancing the self-checking logic in pipeline stage registers. Third, a recovery scheme is proposed with a recovery cost of 1 or 5 clock cycles. Our evaluation of a prototype implementation shows that SRDP can detect up to 100% of particle-induced soft errors and recover from nearly 95%; the remaining 5% enter a specific trap.
Observer detection of image degradation caused by irreversible data compression processes
NASA Astrophysics Data System (ADS)
Chen, Ji; Flynn, Michael J.; Gross, Barry; Spizarny, David
1991-05-01
Irreversible data compression methods have been proposed to reduce the data storage and communication requirements of digital imaging systems. In general, the error produced by compression increases as an algorithm's compression ratio is increased. We have studied the relationship between compression ratios and the detection of induced error using radiologic observers. The nature of the errors was characterized by calculating the power spectrum of the difference image. In contrast with studies designed to test whether detected errors alter diagnostic decisions, this study was designed to test whether observers could detect the induced error. A paired-film observer study was designed to test whether induced errors were detected. The study was conducted with chest radiographs selected and ranked for subtle evidence of interstitial disease, pulmonary nodules, or pneumothoraces. Images were digitized at 86 microns (4K X 5K) and 2K X 2K regions were extracted. A full-frame discrete cosine transform method was used to compress images at ratios varying between 6:1 and 60:1. The decompressed images were reprinted next to the original images in a randomized order with a laser film printer. The use of a film digitizer and a film printer which can reproduce all of the contrast and detail in the original radiograph makes the results of this study insensitive to instrument performance and primarily dependent on radiographic image quality. The results of this study define conditions for which errors associated with irreversible compression cannot be detected by radiologic observers. The results indicate that an observer can detect the errors introduced by this compression algorithm for compression ratios of 10:1 (1.2 bits/pixel) or higher.
IoT for Real-Time Measurement of High-Throughput Liquid Dispensing in Laboratory Environments.
Shumate, Justin; Baillargeon, Pierre; Spicer, Timothy P; Scampavia, Louis
2018-04-01
Critical to maintaining quality control in high-throughput screening is the need for constant monitoring of liquid-dispensing fidelity. Traditional methods involve operator intervention with gravimetric analysis to monitor the gross accuracy of full plate dispenses, visual verification of contents, or dedicated weigh stations on screening platforms that introduce potential bottlenecks and increase the plate-processing cycle time. We present a unique solution using open-source hardware, software, and 3D printing to automate dispenser accuracy determination by providing real-time dispense weight measurements via a network-connected precision balance. This system uses an Arduino microcontroller to connect a precision balance to a local network. By integrating the precision balance as an Internet of Things (IoT) device, it gains the ability to provide real-time gravimetric summaries of dispensing, generate timely alerts when problems are detected, and capture historical dispensing data for future analysis. All collected data can then be accessed via a web interface for reviewing alerts and dispensing information in real time or remotely for timely intervention of dispense errors. The development of this system also leveraged 3D printing to rapidly prototype sensor brackets, mounting solutions, and component enclosures.
Space resection model calculation based on Random Sample Consensus algorithm
NASA Astrophysics Data System (ADS)
Liu, Xinzhu; Kang, Zhizhong
2016-03-01
Resection has long been one of the central problems in photogrammetry. It aims to recover the position and attitude of the camera at the moment of exposure. In some cases, however, the observations used in the calculation contain gross errors. This paper presents a robust algorithm that uses the RANSAC method with a DLT model, effectively avoiding the difficulty of determining initial values when using the collinearity equations. The results also show that our strategy can exclude gross errors and leads to an accurate and efficient way to obtain the elements of exterior orientation.
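The abstract does not include an implementation; below is a generic RANSAC loop of the kind it describes, with the DLT fit and the reprojection-error measure supplied by the caller as `fit` and `error` (both hypothetical callables).

```python
import random

def ransac(observations, fit, error, n_min, tol, n_iter=1000):
    """Keep the model with the largest consensus set; observations
    farther than `tol` from the best model are treated as gross errors
    and excluded from the final fit."""
    best_model, best_inliers = None, []
    for _ in range(n_iter):
        sample = random.sample(observations, n_min)
        model = fit(sample)                     # e.g., a DLT solution
        inliers = [o for o in observations if error(model, o) < tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = fit(inliers), inliers
    return best_model, best_inliers
```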
Error detection and reduction in blood banking.
Motschman, T L; Moore, S B
1996-12-01
Error management plays a major role in facility process improvement efforts. By detecting and reducing errors, quality and, therefore, patient care improve. It begins with a strong organizational foundation of management attitude, with clear, consistent employee direction and appropriate physical facilities. Clearly defined critical processes, critical activities, and SOPs act as the framework for operations as well as active quality monitoring. To ensure that personnel can detect and report errors, they must be trained in both operational duties and error management practices. Use of simulated/intentional errors and incorporation of error detection into competency assessment keeps employees practiced and confident, and diminishes fear of the unknown. Personnel can clearly see that errors are indeed used as opportunities for process improvement and not for punishment. The facility must have a clearly defined and consistently used definition for reportable errors. Reportable errors should include those errors with potentially harmful outcomes as well as those errors that are "upstream," and thus further away from the outcome. A well-written error report consists of who, what, when, where, why/how, and follow-up to the error. Before correction can occur, an investigation to determine the underlying cause of the error should be undertaken. Obviously, the best corrective action is prevention. Correction can occur at five different levels; however, only three of these levels are directed at prevention. Prevention requires a method to collect and analyze data concerning errors. In the authors' facility a functional error classification method and a quality system-based classification have been useful. An active method to search for problems uncovers them further upstream, before they can have disastrous outcomes. In the continual quest for improving processes, an error management program is itself a process that needs improvement, and we must strive to always close the circle of quality assurance. Ultimately, the goal of better patient care will be the reward.
Vuk, Tomislav; Barišić, Marijan; Očić, Tihomir; Mihaljević, Ivanka; Šarlija, Dorotea; Jukić, Irena
2012-01-01
Background. Continuous and efficient error management, including procedures from error detection to their resolution and prevention, is an important part of quality management in blood establishments. At the Croatian Institute of Transfusion Medicine (CITM), error management has been systematically performed since 2003. Materials and methods. Data derived from error management at the CITM during an 8-year period (2003–2010) formed the basis of this study. Throughout the study period, errors were reported to the Department of Quality Assurance. In addition to surveys and the necessary corrective activities, errors were analysed and classified according to the Medical Event Reporting System for Transfusion Medicine (MERS-TM). Results. During the study period, a total of 2,068 errors were recorded, including 1,778 (86.0%) in blood bank activities and 290 (14.0%) in blood transfusion services. As many as 1,744 (84.3%) errors were detected before issue of the product or service. Among the 324 errors identified upon release from the CITM, 163 (50.3%) errors were detected by customers and reported as complaints. In only five cases was an error detected after blood product transfusion, and none had harmful consequences for the patients. All errors were, therefore, evaluated as "near miss" and "no harm" events. Fifty-two (2.5%) errors were evaluated as high-risk events. With regards to blood bank activities, the highest proportion of errors occurred in the processes of labelling (27.1%) and blood collection (23.7%). With regards to blood transfusion services, errors related to blood product issuing prevailed (24.5%). Conclusion. This study shows that comprehensive management of errors, including near miss errors, can generate data on the functioning of transfusion services, which is a precondition for implementation of efficient corrective and preventive actions that will ensure further improvement of the quality and safety of transfusion treatment. PMID:22395352
Transient Faults in Computer Systems
NASA Technical Reports Server (NTRS)
Masson, Gerald M.
1993-01-01
A powerful technique particularly appropriate for the detection of errors caused by transient faults in computer systems was developed. The technique can be implemented in either software or hardware; the research conducted thus far primarily considered software implementations. The error detection technique developed has the distinct advantage of having provably complete coverage of all errors caused by transient faults that affect the output produced by the execution of a program. In other words, the technique does not have to be tuned to a particular error model to enhance error coverage. Also, the correctness of the technique can be formally verified. The technique uses time and software redundancy. The foundation for an effective, low-overhead, software-based certification trail approach to real-time error detection resulting from transient fault phenomena was developed.
Syndromic surveillance for health information system failures: a feasibility study.
Ong, Mei-Sing; Magrabi, Farah; Coiera, Enrico
2013-05-01
To explore the applicability of a syndromic surveillance method to the early detection of health information technology (HIT) system failures. A syndromic surveillance system was developed to monitor a laboratory information system at a tertiary hospital. Four indices were monitored: (1) total laboratory records being created; (2) total records with missing results; (3) average serum potassium results; and (4) total duplicated tests on a patient. The goal was to detect HIT system failures causing: data loss at the record level; data loss at the field level; erroneous data; and unintended duplication of data. Time-series models of the indices were constructed, and statistical process control charts were used to detect unexpected behaviors. The ability of the models to detect HIT system failures was evaluated using simulated failures, each lasting for 24 h, with error rates ranging from 1% to 35%. In detecting data loss at the record level, the model achieved a sensitivity of 0.26 when the simulated error rate was 1%, while maintaining a specificity of 0.98. Detection performance improved with increasing error rates, achieving a perfect sensitivity when the error rate was 35%. In the detection of missing results, erroneous serum potassium results and unintended repetition of tests, perfect sensitivity was attained when the error rate was as small as 5%. Decreasing the error rate to 1% resulted in a drop in sensitivity to 0.65-0.85. Syndromic surveillance methods can potentially be applied to monitor HIT systems, to facilitate the early detection of failures.
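As an illustration of the approach, the sketch below applies a basic Shewhart-style control limit to one monitored index (e.g., hourly counts of new laboratory records); the paper's actual detectors are built on time-series models, so the trailing-window mean and the `z = 3` limit here are simplifying assumptions.

```python
import numpy as np

def control_chart_alerts(series, window=168, z=3.0):
    """Return indices where a monitored value leaves the mean +/- z*sigma
    control limits estimated from a trailing window of past observations."""
    alerts = []
    for t in range(window, len(series)):
        hist = np.asarray(series[t - window:t], dtype=float)
        mu, sigma = hist.mean(), hist.std()
        if sigma > 0 and abs(series[t] - mu) > z * sigma:
            alerts.append(t)
    return alerts
```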
Is there any electrophysiological evidence for subliminal error processing?
Shalgi, Shani; Deouell, Leon Y
2013-08-29
The role of error awareness in executive control and modification of behavior is not fully understood. In line with many recent studies showing that conscious awareness is unnecessary for numerous high-level processes such as strategic adjustments and decision making, it was suggested that error detection can also take place unconsciously. The Error Negativity (Ne) component, long established as a robust error-related component that differentiates between correct responses and errors, was a fine candidate to test this notion: if an Ne is elicited also by errors which are not consciously detected, it would imply a subliminal process involved in error monitoring that does not necessarily lead to conscious awareness of the error. Indeed, for the past decade, the repeated finding of a similar Ne for errors which became aware and errors that did not achieve awareness, compared to the smaller negativity elicited by correct responses (Correct Response Negativity; CRN), has lent the Ne the prestigious status of an index of subliminal error processing. However, there were several notable exceptions to these findings. The study in the focus of this review (Shalgi and Deouell, 2012) sheds new light on both types of previous results. We found that error detection as reflected by the Ne is correlated with subjective awareness: when awareness (or more importantly lack thereof) is more strictly determined using the wagering paradigm, no Ne is elicited without awareness. This result effectively resolves the issue of why there are many conflicting findings regarding the Ne and error awareness. The average Ne amplitude appears to be influenced by individual criteria for error reporting and therefore, studies containing different mixtures of participants who are more confident of their own performance or less confident, or paradigms that either encourage or don't encourage reporting low confidence errors will show different results. Based on this evidence, it is no longer possible to unquestioningly uphold the notion that the amplitude of the Ne is unrelated to subjective awareness, and therefore, that errors are detected without conscious awareness.
Activity Tracking for Pilot Error Detection from Flight Data
NASA Technical Reports Server (NTRS)
Callantine, Todd J.; Ashford, Rose (Technical Monitor)
2002-01-01
This report presents an application of activity tracking for pilot error detection from flight data, and describes issues surrounding such an application. It first describes the Crew Activity Tracking System (CATS), in-flight data collected from the NASA Langley Boeing 757 Airborne Research Integrated Experiment System aircraft, and a model of B757 flight crew activities. It then presents an example of CATS detecting actual in-flight crew errors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
J Zwan, B; Central Coast Cancer Centre, Gosford, NSW; Colvill, E
2016-06-15
Purpose: The added complexity of real-time adaptive multi-leaf collimator (MLC) tracking increases the likelihood of undetected MLC delivery errors. In this work we develop and test a system for real-time delivery verification and error detection for MLC tracking radiotherapy using an electronic portal imaging device (EPID). Methods: The delivery verification system relies on acquisition and real-time analysis of transit EPID image frames acquired at 8.41 fps. In-house software was developed to extract the MLC positions from each image frame. Three comparison metrics were used to verify the MLC positions in real-time: (1) field size, (2) field location, and (3) field shape. The delivery verification system was tested for 8 VMAT MLC tracking deliveries (4 prostate and 4 lung) where real patient target motion was reproduced using a Hexamotion motion stage and a Calypso system. Sensitivity and detection delay were quantified for various types of MLC and system errors. Results: For both the prostate and lung test deliveries the MLC-defined field size was measured with an accuracy of 1.25 cm² (1 SD). The field location was measured with an accuracy of 0.6 mm and 0.8 mm (1 SD) for lung and prostate, respectively. Field location errors (i.e., tracking in the wrong direction) with a magnitude of 3 mm were detected within 0.4 s of occurrence in the X direction and 0.8 s in the Y direction. Systematic MLC gap errors were detected as small as 3 mm. The method was not found to be sensitive to random MLC errors or individual MLC calibration errors up to 5 mm. Conclusion: EPID imaging may be used for independent real-time verification of MLC trajectories during MLC tracking deliveries. Thresholds have been determined for error detection and the system has been shown to be sensitive to a range of delivery errors.
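The in-house analysis software is not published in the abstract; a simplified version of the three per-frame metrics might look like the following, where the intensity threshold and the pixel-unit tolerances are hypothetical.

```python
import numpy as np

def field_metrics(frame, threshold=0.5):
    """Reduce a transit EPID frame to three metrics: field size (in-field
    pixel count), field location (centroid of the in-field mask), and
    field shape (the binary mask itself)."""
    mask = frame > threshold * frame.max()
    ys, xs = np.nonzero(mask)
    centroid = (ys.mean(), xs.mean()) if mask.any() else (np.nan, np.nan)
    return mask.sum(), centroid, mask

def verify_frame(measured, predicted, size_tol=50, loc_tol=2.0, shape_tol=0.05):
    """Pass/fail check of one measured frame against its prediction."""
    m_size, m_loc, m_mask = field_metrics(measured)
    p_size, p_loc, p_mask = field_metrics(predicted)
    size_ok = abs(int(m_size) - int(p_size)) <= size_tol
    loc_ok = np.hypot(m_loc[0] - p_loc[0], m_loc[1] - p_loc[1]) <= loc_tol
    shape_ok = np.mean(m_mask ^ p_mask) <= shape_tol   # disagreeing pixel fraction
    return bool(size_ok and loc_ok and shape_ok)
```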
Validation of a Fish Short-term Reproduction Assay
The Fish Short-term Reproduction Assay is an in vivo assay conducted with fathead minnows and is designed to detect changes in spawning, gross morphology, histopathology, and specific biochemical endpoints that reflect disturbances in the hypothalamic-pituitary-gonadal (HPG) axis...
Prescribing Errors Involving Medication Dosage Forms
Lesar, Timothy S
2002-01-01
CONTEXT Prescribing errors involving medication dose formulations have been reported to occur frequently in hospitals. No systematic evaluations of the characteristics of errors related to medication dosage formulation have been performed. OBJECTIVE To quantify the characteristics, frequency, and potential adverse patient effects of prescribing errors involving medication dosage forms. DESIGN Evaluation of all detected medication prescribing errors involving or related to medication dosage forms in a 631-bed tertiary care teaching hospital. MAIN OUTCOME MEASURES Type, frequency, and potential for adverse effects of prescribing errors involving or related to medication dosage forms. RESULTS A total of 1,115 clinically significant prescribing errors involving medication dosage forms were detected during the 60-month study period. The annual number of detected errors increased throughout the study period. Detailed analysis of the 402 errors detected during the last 16 months of the study demonstrated the most common errors to be: failure to specify controlled release formulation (total of 280 cases; 69.7%) both when prescribing using the brand name (148 cases; 36.8%) and when prescribing using the generic name (132 cases; 32.8%); and prescribing controlled delivery formulations to be administered per tube (48 cases; 11.9%). The potential for adverse patient outcome was rated as potentially "fatal or severe" in 3 cases (0.7%), and "serious" in 49 cases (12.2%). Errors most commonly involved cardiovascular agents (208 cases; 51.7%). CONCLUSIONS Hospitalized patients are at risk for adverse outcomes due to prescribing errors related to inappropriate use of medication dosage forms. This information should be considered in the development of strategies to prevent adverse patient outcomes resulting from such errors. PMID:12213138
Detecting and Characterizing Semantic Inconsistencies in Ported Code
NASA Technical Reports Server (NTRS)
Ray, Baishakhi; Kim, Miryung; Person, Suzette J.; Rungta, Neha
2013-01-01
Adding similar features and bug fixes often requires porting program patches from reference implementations and adapting them to target implementations. Porting errors may result from faulty adaptations or inconsistent updates. This paper investigates (1) the types of porting errors found in practice, and (2) how to detect and characterize potential porting errors. Analyzing version histories, we define five categories of porting errors, including incorrect control- and data-flow, code redundancy, inconsistent identifier renamings, etc. Leveraging this categorization, we design a static control- and data-dependence analysis technique, SPA, to detect and characterize porting inconsistencies. Our evaluation on code from four open-source projects shows that SPA can detect porting inconsistencies with 65% to 73% precision and 90% recall, and identify inconsistency types with 58% to 63% precision and 92% to 100% recall. In a comparison with two existing error detection tools, SPA improves precision by 14 to 17 percentage points.
Statistical approaches to account for false-positive errors in environmental DNA samples.
Lahoz-Monfort, José J; Guillera-Arroita, Gurutzeta; Tingley, Reid
2016-05-01
Environmental DNA (eDNA) sampling is prone to both false-positive and false-negative errors. We review statistical methods to account for such errors in the analysis of eDNA data and use simulations to compare the performance of different modelling approaches. Our simulations illustrate that even low false-positive rates can produce biased estimates of occupancy and detectability. We further show that removing or classifying single PCR detections in an ad hoc manner under the suspicion that such records represent false positives, as sometimes advocated in the eDNA literature, also results in biased estimation of occupancy, detectability and false-positive rates. We advocate alternative approaches to account for false-positive errors that rely on prior information, or the collection of ancillary detection data at a subset of sites using a sampling method that is not prone to false-positive errors. We illustrate the advantages of these approaches over ad hoc classifications of detections and provide practical advice and code for fitting these models in maximum likelihood and Bayesian frameworks. Given the severe bias induced by false-negative and false-positive errors, the methods presented here should be more routinely adopted in eDNA studies. © 2015 John Wiley & Sons Ltd.
Feng, Jianyuan; Turksoy, Kamuran; Samadi, Sediqeh; Hajizadeh, Iman; Littlejohn, Elizabeth; Cinar, Ali
2017-12-01
Supervision and control systems rely on signals from sensors to receive information to monitor the operation of a system and adjust manipulated variables to achieve the control objective. However, sensor performance is often limited by working conditions, and sensors may also be subjected to interference by other devices. Many different types of sensor errors such as outliers, missing values, drifts and corruption with noise may occur during process operation. A hybrid online sensor error detection and functional redundancy system is developed to detect errors in online signals, and to replace erroneous or missing values with model-based estimates. The proposed hybrid system relies on two techniques, an outlier-robust Kalman filter (ORKF) and a locally-weighted partial least squares (LW-PLS) regression model, which leverage the advantages of automatic measurement error elimination with ORKF and data-driven prediction with LW-PLS. The system includes a nominal angle analysis (NAA) method to distinguish between signal faults and large changes in sensor values caused by real dynamic changes in process operation. The performance of the system is illustrated with clinical data from continuous glucose monitoring (CGM) sensors worn by people with type 1 diabetes. More than 50,000 CGM sensor errors were added to the original CGM signals from 25 clinical experiments, and the performance of the error detection and functional redundancy algorithms was analyzed. The results indicate that the proposed system can successfully detect most of the erroneous signals and substitute them with reasonable estimated values computed by the functional redundancy system.
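A minimal scalar analogue of the ORKF side of this system is sketched below: a random-walk Kalman filter that gates innovations and substitutes the model prediction for gated samples (functional redundancy). The noise parameters and the 3-sigma gate are illustrative assumptions, and the LW-PLS component is omitted.

```python
def robust_kalman(z, q=0.1, r=4.0, gate=3.0):
    """Scalar random-walk Kalman filter that gates outliers: innovations
    beyond `gate` standard deviations are flagged as sensor errors and
    the model prediction is kept in place of the measurement."""
    x, p = z[0], r
    estimates, flags = [], []
    for zk in z:
        p = p + q                          # predict (random-walk model)
        s = p + r                          # innovation variance
        bad = abs(zk - x) > gate * s ** 0.5
        if not bad:                        # update only with trusted samples
            k = p / s
            x, p = x + k * (zk - x), (1 - k) * p
        estimates.append(x)
        flags.append(bad)
    return estimates, flags
```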
Dietary lipid and gross energy affect protein utilization in the rare minnow Gobiocypris rarus
NASA Astrophysics Data System (ADS)
Wu, Benli; Xiong, Xiaoqin; Xie, Shouqi; Wang, Jianwei
2016-07-01
An 8-week feeding trial was conducted to determine the optimal dietary protein and energy, as well as the effects of the protein to energy ratio on growth, for the rare minnow ( Gobiocypris rarus), which are critical to nutrition standardization for model fish. Twenty-four diets were formulated to contain three gross energy (10, 12.5, 15 kJ/g), four protein (20%, 25%, 30%, 35%), and two lipid levels (3%, 6%). The results showed that optimal dietary E/P was 41.7-50 kJ/g for maximum growth in juvenile rare minnows at 6% dietary crude lipid. At 3% dietary lipid, specific growth rate (SGR) increased markedly when E/P decreased from 62.5 kJ/g to 35.7 kJ/g and gross energy was 12.5 kJ/g, and from 75 kJ/g to 42.9 kJ/g when gross energy was 15.0 kJ/g. The optimal gross energy was estimated at 12.5 kJ/g, and excess energy decreased food intake and growth. Dietary lipid exhibited an apparent protein-sparing effect. Optimal protein decreased from 35% to 25%-30% with an increase in dietary lipid from 3% to 6% without adversely affecting growth. Dietary lipid level affects the optimal dietary E/P ratio. In conclusion, the recommended dietary protein and energy for rare minnow are 20%-35% and 10-12.5 kJ/g, respectively.
Coding for reliable satellite communications
NASA Technical Reports Server (NTRS)
Gaarder, N. T.; Lin, S.
1986-01-01
This research project was set up to study various kinds of coding techniques for error control in satellite and space communications for NASA Goddard Space Flight Center. During the project period, researchers investigated the following areas: (1) decoding of Reed-Solomon codes in terms of dual basis; (2) concatenated and cascaded error control coding schemes for satellite and space communications; (3) use of hybrid coding schemes (error correction and detection incorporated with retransmission) to improve system reliability and throughput in satellite communications; (4) good codes for simultaneous error correction and error detection, and (5) error control techniques for ring and star networks.
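As a concrete example of the detection half of such hybrid (error detection plus retransmission) schemes, here is a bitwise CRC-16/CCITT check; a received frame whose recomputed CRC over the message plus its appended check bytes is nonzero would trigger a retransmission request. This is a standard textbook code, not one of the specific codes studied in the project.

```python
def crc16_ccitt(data: bytes, poly=0x1021, crc=0xFFFF):
    """Bitwise CRC-16/CCITT (init 0xFFFF, no reflection, no final XOR)."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            # Shift left; on overflow of the MSB, fold in the polynomial.
            crc = ((crc << 1) ^ poly) if crc & 0x8000 else crc << 1
            crc &= 0xFFFF
    return crc

# Receiver recomputes the CRC over message + check bytes; a nonzero
# residue indicates a detected error and triggers retransmission.
frame = b"telemetry"
assert crc16_ccitt(frame + crc16_ccitt(frame).to_bytes(2, "big")) == 0
```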
Design and scheduling for periodic concurrent error detection and recovery in processor arrays
NASA Technical Reports Server (NTRS)
Wang, Yi-Min; Chung, Pi-Yu; Fuchs, W. Kent
1992-01-01
Periodic application of time-redundant error checking provides the trade-off between error detection latency and performance degradation. The goal is to achieve high error coverage while satisfying performance requirements. We derive the optimal scheduling of checking patterns in order to uniformly distribute the available checking capability and maximize the error coverage. Synchronous buffering designs using data forwarding and dynamic reconfiguration are described. Efficient single-cycle diagnosis is implemented by error pattern analysis and direct-mapped recovery cache. A rollback recovery scheme using start-up control for local recovery is also presented.
NASA Astrophysics Data System (ADS)
Welcome, Menizibeya O.; Dane, Şenol; Mastorakis, Nikos E.; Pereverzev, Vladimir A.
2017-12-01
The term "metaplasticity" is a recent one, which means plasticity of synaptic plasticity. Correspondingly, neurometaplasticity simply means plasticity of neuroplasticity, indicating that a previous plastic event determines the current plasticity of neurons. Emerging studies suggest that neurometaplasticity underlie many neural activities and neurobehavioral disorders. In our previous work, we indicated that glucoallostasis is essential for the control of plasticity of the neural network that control error commission, detection and correction. Here we review recent works, which suggest that task precision depends on the modulatory effects of neuroplasticity on the neural networks of error commission, detection, and correction. Furthermore, we discuss neurometaplasticity and its role in error commission, detection, and correction.
On-orbit observations of single event upset in Harris HM-6508 1K RAMs, reissue A
NASA Astrophysics Data System (ADS)
Blake, J. B.; Mandel, R.
1987-02-01
The Harris HM-6508 1K x 1 RAMs are part of a subsystem of a satellite in a low, polar orbit. The memory module, used in the subsystem containing the RAMs, consists of three printed circuit cards, with each card containing eight 2K byte memory hybrids, for a total of 48K bytes. Each memory hybrid contains 16 HM-6508 RAM chips. On a regular basis all but 256 bytes of the 48K bytes are examined for bit errors. Two different techniques were used for detecting bit errors. The first technique, a memory check sum, was capable of automatically detecting all single bit and some double bit errors which occurred within a page of memory. A memory page consists of 256 bytes. Memory check sum tests are performed approximately every 90 minutes. To detect a multiple error or to determine the exact location of the bit error within the page the entire contents of the memory is dumped and compared to the load file. Memory dumps are normally performed once a month, or immediately after the check sum routine detects an error. Once the exact location of the error is found, the correct value is reloaded into memory. After the memory is reloaded, the contents of the memory location in question is verified in order to determine if the error was a soft error generated by an SEU or a hard error generated by a part failure or cosmic-ray induced latchup.
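A toy version of the check-sum-then-dump procedure, assuming a simple 8-bit additive page checksum (the flight algorithm is not specified in the abstract): such a sum catches every single-bit error in a page but can miss some multi-bit combinations, matching the reported "all single bit and some double bit errors" behavior.

```python
def page_checksum(page):
    """8-bit additive checksum over one 256-byte page (illustrative only)."""
    return sum(page) & 0xFF

def scrub(memory, load_file, page_size=256):
    """Check page checksums against the load file; on mismatch, compare
    byte-by-byte (the 'memory dump') and reload the correct values.
    `memory` is a bytearray, `load_file` the reference bytes."""
    repaired = []
    for base in range(0, len(memory), page_size):
        mem_pg = memory[base:base + page_size]
        ref_pg = load_file[base:base + page_size]
        if page_checksum(mem_pg) != page_checksum(ref_pg):
            for i, (m, ref) in enumerate(zip(mem_pg, ref_pg)):
                if m != ref:
                    memory[base + i] = ref      # reload correct value
                    repaired.append(base + i)
    return repaired
```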
Relationship auditing of the FMA ontology
Gu, Huanying (Helen); Wei, Duo; Mejino, Jose L.V.; Elhanan, Gai
2010-01-01
The Foundational Model of Anatomy (FMA) ontology is a domain reference ontology based on a disciplined modeling approach. Due to its large size, semantic complexity and manual data entry process, errors and inconsistencies are unavoidable and might remain within the FMA structure without detection. In this paper, we present computable methods to highlight candidate concepts for various relationship assignment errors. The process starts with locating structures formed by transitive structural relationships (part_of, tributary_of, branch_of) and examine their assignments in the context of the IS-A hierarchy. The algorithms were designed to detect five major categories of possible incorrect relationship assignments: circular, mutually exclusive, redundant, inconsistent, and missed entries. A domain expert reviewed samples of these presumptive errors to confirm the findings. Seven thousand and fifty-two presumptive errors were detected, the largest proportion related to part_of relationship assignments. The results highlight the fact that errors are unavoidable in complex ontologies and that well designed algorithms can help domain experts to focus on concepts with high likelihood of errors and maximize their effort to ensure consistency and reliability. In the future similar methods might be integrated with data entry processes to offer real-time error detection. PMID:19475727
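Of the five error categories, circular assignments are the most mechanical to detect; a DFS-based sketch over a transitive relationship such as part_of is shown below. The edge-map representation of the ontology is an assumption for illustration.

```python
def find_cycles(edges):
    """Detect circular relationship assignments in an ontology given as
    {concept: [related_concept, ...]} for one transitive relationship
    (e.g., part_of). Returns each cycle as a list of concepts."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color, cycles = {}, []

    def visit(node, path):
        color[node] = GRAY
        for nxt in edges.get(node, []):
            c = color.get(nxt, WHITE)
            if c == GRAY:                      # back edge: cycle found
                cycles.append(path[path.index(nxt):] + [nxt])
            elif c == WHITE:
                visit(nxt, path + [nxt])
        color[node] = BLACK

    for n in list(edges):
        if color.get(n, WHITE) == WHITE:
            visit(n, [n])
    return cycles
```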
Filter, Emily R; Gabril, Manal Y; Gomez, Jose A; Wang, Peter Z T; Chin, Joseph L; Izawa, Jonathan; Moussa, Madeleine
2017-08-01
The rate of incidental prostate adenocarcinoma (PCa) detection in radical cystoprostatectomy (RCP) varies widely, ranging from 15% to 54%. Such variability may be explained by institutional differences in prostate grossing protocols. Either partial or complete submission of the prostate gland in RCP may result in detection of clinically insignificant or significant incidental PCa. The aim of the study was to compare the clinical significance of PCa in RCP specimens in partial versus complete sampling. Seventy-two out of 158 RCP cases showed incidental PCa. The pathologic features, including Gleason score, margin status, extraprostatic extension (EPE), seminal vesicle invasion (SVI), PCa stage, and tumor volume, were assessed. The 72 cases were divided into partial (n = 21, 29.1%) and complete sampling (n = 51, 70.8%) groups. EPE was detected in 13/72 (18.1%) with 11/13 (84.6%) cases in the complete group. Positive margins were present in 11/72 (15.3%) with 9/11 (81.8%) in the complete group. SVI was detected in 4/72 (5.6%) with 3/4 (75.0%) in the complete group. Overall, 4/72 (5.6%) had a Gleason score >7, all of which were in the complete group. Our data suggest that complete sampling of the prostate may be the ideal approach to grossing RCP specimens, allowing for greater detection of clinically significant incidental PCa.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Hsin-Chen; Tan, Jun; Dolly, Steven
2015-02-15
Purpose: One of the most critical steps in radiation therapy treatment is accurate tumor and critical organ-at-risk (OAR) contouring. Both manual and automated contouring processes are prone to errors and to a large degree of inter- and intraobserver variability. These are often due to the limitations of imaging techniques in visualizing human anatomy as well as to inherent anatomical variability among individuals. Physicians/physicists have to reverify all the radiation therapy contours of every patient before using them for treatment planning, which is tedious, laborious, and still not an error-free process. In this study, the authors developed a general strategy based on novel geometric attribute distribution (GAD) models to automatically detect radiation therapy OAR contouring errors and facilitate the current clinical workflow. Methods: Considering the radiation therapy structures' geometric attributes (centroid, volume, and shape), the spatial relationship of neighboring structures, as well as anatomical similarity of individual contours among patients, the authors established GAD models to characterize the interstructural centroid and volume variations, and the intrastructural shape variations of each individual structure. The GAD models are scalable and deformable, and constrained by their respective principal attribute variations calculated from training sets with verified OAR contours. A new iterative weighted GAD model-fitting algorithm was developed for contouring error detection. Receiver operating characteristic (ROC) analysis was employed in a unique way to optimize the model parameters to satisfy clinical requirements. A total of forty-four head-and-neck patient cases, each of which includes nine critical OAR contours, were utilized to demonstrate the proposed strategy. Twenty-nine of these forty-four patient cases were utilized to train the inter- and intrastructural GAD models. These training data and the remaining fifteen testing data sets were separately employed to test the effectiveness of the proposed contouring error detection strategy. Results: An evaluation tool was implemented to illustrate how the proposed strategy automatically detects the radiation therapy contouring errors for a given patient and provides 3D graphical visualization of error detection results as well. The contouring error detection results were achieved with an average sensitivity of 0.954/0.906 and an average specificity of 0.901/0.909 on the centroid/volume-related contouring errors of all the tested samples. As for the detection results on structural shape-related contouring errors, an average sensitivity of 0.816 and an average specificity of 0.94 on all the tested samples were obtained. The promising results indicated the feasibility of the proposed strategy for the detection of contouring errors with a low false detection rate. Conclusions: The proposed strategy can reliably identify contouring errors based upon inter- and intrastructural constraints derived from clinically approved contours. It holds great potential for improving the radiation therapy workflow. ROC and box plot analyses allow for analytical tuning of the system parameters to satisfy clinical requirements. Future work will focus on improving the strategy's reliability by utilizing more training sets and additional geometric attribute constraints.
Annual INTEC Groundwater Monitoring Report for Group 5 - Snake River Plain Aquifer (2001)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roddy, Michael Scott
2002-02-01
This report describes the monitoring activities conducted and presents the results of groundwater sampling and water-level measurements from October 2000 to September 2001. Groundwater samples were initially collected from 41 wells from the Idaho Nuclear Technology and Engineering Center and the Central Facilities Area and analyzed for iodine-129, strontium-90, tritium, gross alpha, gross beta, technetium-99, uranium isotopes, plutonium isotopes, neptunium-237, americium-241, gamma spectrometry, and mercury. Samples from 41 wells were collected in April and May 2001. Additional sampling was conducted in August 2001 and included the two CFA production wells, the CFA point of compliance for the production wells, one well that was previously sampled, and five additional monitoring wells. Iodine-129 and strontium-90 were the only analytes above their respective maximum contaminant levels. Iodine-129 was detected just above its maximum contaminant level of 1 pCi/L at two of the Central Facilities Area landfill wells. Iodine-129 was detected in the CFA production wells at 0.35±0.083 pCi/L in CFA-1, but was below detectable activity in CFA-2. Strontium-90 was above its maximum contaminant level of 8 pCi/L in several wells near the Idaho Nuclear Technology and Engineering Center but was below its maximum contaminant level in the downgradient wells at the Central Facilities Area landfills. Sr-90 was not detected in the CFA production wells. Gross beta results generally mirrored the results for strontium-90 and technetium-99. Plutonium isotopes and neptunium-237 were not detected. Uranium-233/234 and uranium-238 isotopes were detected in all samples. Concentrations in background and site wells were similar and are within background limits for total uranium determined by the USGS, suggesting that the concentrations are background. Uranium-235/236 was detected in 11 samples, but all the detected concentrations were similar and near the minimum detectable activity. Americium-241 was detected at three locations near the minimum detectable activity of approximately 0.07 pCi/L. The gamma spectrometry results detected cesium-137 in three samples, potassium-40 at eight locations, and radium-226 at one location. Mercury was below its maximum contaminant level of 2 µg/L in all samples. Gamma spectrometry results for the CFA production wells did not detect any analytes. Water-level measurements were taken from wells in the Idaho Nuclear Technology and Engineering Center, Central Facilities Area, and the area south of Central Facilities Area to evaluate groundwater flow directions. Water-level measurements indicated groundwater flow to the south-southwest from the Idaho Nuclear Technology and Engineering Center.
Comparison of direct and heterodyne detection optical intersatellite communication links
NASA Technical Reports Server (NTRS)
Chen, C. C.; Gardner, C. S.
1987-01-01
The performance of direct and heterodyne detection optical intersatellite communication links is evaluated and compared. It is shown that the performance of optical links is very sensitive to the pointing and tracking errors at the transmitter and receiver. In the presence of random pointing and tracking errors, optimal antenna gains exist that will minimize the required transmitter power. In addition to limiting the antenna gains, random pointing and tracking errors also impose a power penalty in the link budget. This power penalty is between 1.6 and 3 dB for a direct detection QPPM link, and 3 to 5 dB for a heterodyne QFSK system. For the heterodyne systems, the carrier phase noise presents another major factor of performance degradation that must be considered. In contrast, the loss due to synchronization error is small. The link budgets for direct and heterodyne detection systems are evaluated. It is shown that, for systems with large pointing and tracking errors, the link budget is dominated by the spatial tracking error, and the direct detection system shows superior performance because it is less sensitive to the spatial tracking error. On the other hand, for systems with small pointing and tracking jitters, the antenna gains are in general limited by the launch cost, and suboptimal antenna gains are often used in practice. In that case, the heterodyne system has a slightly higher power margin because of higher receiver sensitivity.
[Effect of gross saponins of Tribulus terrestris on cardiocytes impaired by adriamycin].
Zhang, Shuang; Li, Hong; Xu, Hui; Yang, Shi-Jie
2010-01-01
This study observed the protective effect of gross saponins of Tribulus terrestris (GSTT) on cardiocytes injured by adriamycin (ADR) and explored its mechanism of action. Neonatal rat cardiocytes were cultured for 72 hours and divided into a normal control group, a model (ADR 2 mg x L(-1)) group, and GSTT (100, 30, and 10 mg x L(-1)) groups. The MTT colorimetric method was used to measure the cardiocyte survival rate; activities of CK, LDH, AST and SOD and contents of MDA and NO were measured, and apoptosis was detected with flow cytometry. The effect of GSTT on caspase-3 was detected by Western blotting. Compared with the control group, ADR increased the contents of CK, LDH, AST, MDA and NO and reduced the activity of SOD (P < 0.05, P < 0.01, P < 0.001). GSTT (100 and 30 mg x L(-1)) increased the number of surviving cells (P < 0.05, P < 0.001), decreased the contents of CK, LDH, AST, MDA and NO, and increased the activity of SOD (P < 0.05, P < 0.01, P < 0.001). GSTT (100 and 30 mg x L(-1)) also reduced cardiocyte apoptosis and the concentration of caspase-3. GSTT can protect cardiocytes injured by ADR, possibly through scavenging of oxygen free radicals.
TH-B-BRC-01: How to Identify and Resolve Potential Clinical Errors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Das, I.
2016-06-15
Radiation treatment consists of a chain of events influenced by the quality of machine operation, beam data commissioning, machine calibration, patient-specific data, simulation, treatment planning, imaging and treatment delivery. There is always a chance that the clinical medical physicist may make, or fail to detect, an error in one of these events that may impact the patient's treatment. In the clinical scenario, errors may be systematic and, without peer review, may have low detectability because they are not part of routine QA procedures. During treatment, there might be machine errors that need attention. External reviews of some of the treatment delivery components by independent reviewers, like IROC, can detect errors, but may not be timely. The goal of this session is to help junior clinical physicists identify potential errors as well as the approach of quality assurance to perform a root cause analysis to find and eliminate an error and to continually monitor for errors. A compilation of potential errors will be presented by examples of the thought process required to spot the error and determine the root cause. Examples may include unusual machine operation, erratic electrometer readings, consistently lower electron output, variation in photon output, body parts inadvertently left in the beam, unusual treatment plans, poor normalization, hot spots, etc. Awareness of the possibility and detection of error in any link of the treatment process chain will help improve the safe and accurate delivery of radiation to patients. Four experts will discuss how to identify errors in four areas of clinical treatment. D. Followill, NIH grant CA 180803.
Kurahashi, H; Inagaki, H; Ohye, T; Kogo, H; Tsutsumi, M; Kato, T; Tong, M; Emanuel, BS
2012-01-01
The constitutional t(11;22)(q23;q11) is the most common recurrent non-Robertsonian translocation in humans. The breakpoint sequences of both chromosomes are characterized by several hundred base pairs of palindromic AT-rich repeats (PATRRs). Similar PATRRs have also been identified at the breakpoints of other nonrecurrent translocations, suggesting that PATRR-mediated chromosomal translocation represents one of the universal pathways for gross chromosomal rearrangement in the human genome. We propose that PATRRs have the potential to form cruciform structures through intrastrand-base pairing in single-stranded DNA, creating a source of genomic instability and leading to translocations. Indeed, de novo examples of the t(11;22) are detected at a high frequency in sperm from normal healthy males. This review synthesizes recent data illustrating a novel paradigm for an apparent spermatogenesis-specific translocation mechanism. This observation has important implications pertaining to the predominantly paternal origin of de novo gross chromosomal rearrangements in humans. PMID:20507342
Detecting and correcting hard errors in a memory array
Kalamatianos, John; John, Johnsy Kanjirapallil; Gelinas, Robert; Sridharan, Vilas K.; Nevius, Phillip E.
2015-11-19
Hard errors in the memory array can be detected and corrected in real-time using reusable entries in an error status buffer. Data may be rewritten to a portion of a memory array and a register in response to a first error in data read from the portion of the memory array. The rewritten data may then be written from the register to an entry of an error status buffer in response to the rewritten data read from the register differing from the rewritten data read from the portion of the memory array.
The use of self checks and voting in software error detection - An empirical study
NASA Technical Reports Server (NTRS)
Leveson, Nancy G.; Cha, Stephen S.; Knight, John C.; Shimeall, Timothy J.
1990-01-01
The results of an empirical study of software error detection using self checks and N-version voting are presented. Working independently, each of 24 programmers first prepared a set of self checks using just the requirements specification of an aerospace application, and then each added self checks to an existing implementation of that specification. The modified programs were executed to measure the error-detection performance of the checks and to compare this with error detection using simple voting among multiple versions. The analysis of the checks revealed that there are great differences in the ability of individual programmers to design effective checks. It was found that some checks that might have been effective failed to detect an error because they were badly placed, and there were numerous instances of checks signaling nonexistent errors. In general, specification-based checks alone were not as effective as specification-based checks combined with code-based checks. Self checks made it possible to identify faults that had not been detected previously by voting 28 versions of the program over a million randomly generated inputs. This appeared to result from the fact that the self checks could examine the internal state of the executing program, whereas voting examines only final results of computations. If internal states had to be identical in N-version voting systems, then there would be no reason to write multiple versions.
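The two mechanisms can be contrasted in a toy sketch, with a fault seeded into one of three "versions" (the functions and the check are illustrative, not taken from the study):

```python
# Sketch: N-version voting examines only final outputs, while a
# specification-based self check can be applied to each version's own result.
from collections import Counter

def version_a(x): return x * x
def version_b(x): return x ** 2
def version_c(x): return x * x if x < 100 else x * x + 1   # seeded fault

def vote(x, versions=(version_a, version_b, version_c)):
    results = [v(x) for v in versions]
    winner, count = Counter(results).most_common(1)[0]
    return winner, count > len(versions) // 2

def self_check(x, result):
    root = round(result ** 0.5)       # a square must have an integer root
    return result >= 0 and root * root == result

x = 150
result, majority = vote(x)            # voting masks the minority fault
print(f"voted result={result}, majority={majority}")
for v in (version_a, version_b, version_c):   # the self check localizes it
    print(v.__name__, "passes self check:", self_check(x, v(x)))
```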
Hakkarainen, Elina; Pirilä, Silja; Kaartinen, Jukka; van der Meere, Jaap J
2013-06-01
This study evaluated the brain activation state during error making in youth with mild spastic cerebral palsy and a peer control group while carrying out a stimulus recognition task. The key question was whether patients were detecting their own errors and subsequently improving their performance in a future trial. Findings indicated that error responses of the group with cerebral palsy were associated with weak motor preparation, as indexed by the amplitude of the late contingent negative variation. However, patients were detecting their errors as indexed by the amplitude of the response-locked negativity and thus improved their performance in a future trial. Findings suggest that the consequence of error making on future performance is intact in a sample of youth with mild spastic cerebral palsy. Because the study group is small, the present findings need replication using a larger sample.
Cotter, Christopher; Turcotte, Julie Catherine; Crawford, Bruce; Sharp, Gregory; Mah'D, Mufeed
2015-01-01
This work aims at three goals: first, to define a set of statistical parameters and plan structures for a 3D pretreatment thoracic and prostate intensity‐modulated radiation therapy (IMRT) quality assurance (QA) protocol; secondly, to test if the 3D QA protocol is able to detect certain clinical errors; and third, to compare the 3D QA method with QA performed with single ion chamber and 2D gamma test in detecting those errors. The 3D QA protocol measurements were performed on 13 prostate and 25 thoracic IMRT patients using IBA's COMPASS system. For each treatment planning structure included in the protocol, the following statistical parameters were evaluated: average absolute dose difference (AADD), percent structure volume with absolute dose difference greater than 6% (ADD6), and 3D gamma test. To test the 3D QA protocol error sensitivity, two prostate and two thoracic step‐and‐shoot IMRT patients were investigated. Errors introduced to each of the treatment plans included energy switched from 6 MV to 10 MV, multileaf collimator (MLC) leaf errors, linac jaws errors, monitor unit (MU) errors, MLC and gantry angle errors, and detector shift errors. QA was performed on each plan using a single ion chamber and 2D array of ion chambers for 2D and 3D QA. Based on the measurements performed, we established a uniform set of tolerance levels to determine if QA passes for each IMRT treatment plan structure: maximum allowed AADD is 6%; maximum 4% of any structure volume can be with ADD6 greater than 6%, and maximum 4% of any structure volume may fail 3D gamma test with test parameters 3%/3 mm DTA. Out of the three QA methods tested the single ion chamber performed the worst by detecting 4 out of 18 introduced errors, 2D QA detected 11 out of 18 errors, and 3D QA detected 14 out of 18 errors. PACS number: 87.56.Fc PMID:26699299
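For illustration, a sketch of two of the per-structure statistics on synthetic dose grids (the array sizes, the 2% noise, and normalization to the local planned dose are assumptions of this sketch; only the 6% and 4% tolerances come from the text):

```python
# Sketch: AADD and ADD6 for one structure, comparing reconstructed dose
# against the planned dose inside the structure mask.
import numpy as np

rng = np.random.default_rng(2)
planned = rng.uniform(10, 70, size=(40, 40, 40))               # planned dose
measured = planned * (1 + rng.normal(0, 0.02, planned.shape))  # reconstructed
structure = np.zeros(planned.shape, dtype=bool)
structure[10:30, 10:30, 10:30] = True                          # structure mask

diff_pct = 100 * np.abs(measured - planned)[structure] / planned[structure]
aadd = diff_pct.mean()                # average absolute dose difference (%)
add6 = 100 * np.mean(diff_pct > 6.0)  # % of structure volume exceeding 6%

print(f"AADD = {aadd:.2f}% (tolerance 6%), ADD6 = {add6:.2f}% (tolerance 4%)")
```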
Coherent detection of position errors in inter-satellite laser communications
NASA Astrophysics Data System (ADS)
Xu, Nan; Liu, Liren; Liu, De'an; Sun, Jianfeng; Luan, Zhu
2007-09-01
Owing to its improved receiver sensitivity and wavelength selectivity, coherent detection has become an attractive alternative to direct detection in inter-satellite laser communications. A novel method for the coherent detection of position-error information is proposed. A coherent communication system generally consists of a receive telescope, a local oscillator, an optical hybrid, a photoelectric detector and an optical phase-locked loop (OPLL). Building on this system composition, the method adds a CCD and a computer as a position-error detector. The CCD captures the interference pattern while the transmission data from the transmitter laser are being detected. After the pattern is processed and analyzed by the computer, target position information is obtained from its characteristic parameters. The position errors serve as the control signal of the PAT subsystem, driving the receiver telescope to keep tracking the target. A theoretical derivation and analysis is presented. The approach extends to coherent laser range finders, in which object distance and position information can be obtained simultaneously.
Neural evidence for enhanced error detection in major depressive disorder.
Chiu, Pearl H; Deldin, Patricia J
2007-04-01
Anomalies in error processing have been implicated in the etiology and maintenance of major depressive disorder. In particular, depressed individuals exhibit heightened sensitivity to error-related information and negative environmental cues, along with reduced responsivity to positive reinforcers. The authors examined the neural activation associated with error processing in individuals diagnosed with and without major depression and the sensitivity of these processes to modulation by monetary task contingencies. The error-related negativity and error-related positivity components of the event-related potential were used to characterize error monitoring in individuals with major depressive disorder and the degree to which these processes are sensitive to modulation by monetary reinforcement. Nondepressed comparison subjects (N=17) and depressed individuals (N=18) performed a flanker task under two external motivation conditions (i.e., monetary reward for correct responses and monetary loss for incorrect responses) and a nonmonetary condition. After each response, accuracy feedback was provided. The error-related negativity component assessed the degree of anomaly in initial error detection, and the error positivity component indexed recognition of errors. Across all conditions, the depressed participants exhibited greater amplitude of the error-related negativity component, relative to the comparison subjects, and equivalent error positivity amplitude. In addition, the two groups showed differential modulation by task incentives in both components. These data implicate exaggerated early error-detection processes in the etiology and maintenance of major depressive disorder. Such processes may then recruit excessive neural and cognitive resources that manifest as symptoms of depression.
Sex differences in cerebral palsy incidence and functional ability: a total population study.
Chounti, A; Hägglund, G; Wagner, P; Westbom, L
2013-07-01
To describe gender difference in a total population of children with cerebral palsy (CP), related to subtype, gross and fine motor function, and to compare CP incidence trends in girls and boys. All 590 children with CP born in southern Sweden 1990-2005 were included. CP subtype was classified according to the Surveillance of Cerebral Palsy in Europe, gross motor function according to Gross Motor Function Classification System (GMFCS) and manual ability according to Manual Ability Classification System (MACS). Trends in CP incidence by birth year were analysed using Poisson regression modelling. There was a male predominance in all levels of GMFCS except level II, in all levels of MACS and in all CP subtypes except ataxic CP. There was no statistically significant difference between males and females regarding gross motor function or manual ability. The CP incidence trends in boys compared with girls did not change during the period 1990-2005. No equalization was detected in the incidence of CP between girls and boys during recent years in this total population. We could not confirm any consistent sex difference in motor function levels. Male sex is a risk factor for CP. ©2013 Foundation Acta Paediatrica. Published by John Wiley & Sons Ltd.
Measurement invariance of TGMD-3 in children with and without mental and behavioral disorders.
Magistro, Daniele; Piumatti, Giovanni; Carlevaro, Fabio; Sherar, Lauren B; Esliger, Dale W; Bardaglio, Giulia; Magno, Francesca; Zecca, Massimiliano; Musella, Giovanni
2018-05-24
This study evaluated whether the Test of Gross Motor Development 3 (TGMD-3) is a reliable tool to compare children with and without mental and behavioral disorders across gross motor skill domains. A total of 1,075 children (aged 3-11 years), 98 with mental and behavioral disorders and 977 without (typically developing), were included in the analyses. The TGMD-3 evaluates fundamental gross motor skills of children across two domains: locomotor skills and ball skills. Two independent testers simultaneously observed children's performances (agreement over 95%). Each child completed one practice and then two formal trials. Scores were recorded only during the two formal trials. Multigroup confirmatory factor analysis tested the assumption of TGMD-3 measurement invariance across disability groups. According to the magnitude of changes in root mean square error of approximation and comparative fit index between nested models, the assumption of measurement invariance across groups was valid. Loadings of the manifest indicators on locomotor and ball skills were significant (p < .001) in both groups. Item response theory analysis showed good reliability results across locomotor and the ball skills full latent traits. The present study confirmed the factorial structure of TGMD-3 and demonstrated its feasibility across normally developing children and children with mental and behavioral disorders. These findings provide new opportunities for understanding the effect of specific intervention strategies on this population. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Zhang, Wenzeng; Chen, Nian; Wang, Bin; Cao, Yipeng
2005-01-01
The rocket engine is a core component of aerospace transportation and propulsion systems, and its research and development is of great importance to national defense, aviation and aerospace. A novel vision sensor is developed for error detection in arc-length control and seam tracking during precise pulsed TIG welding of the extension of the rocket engine jet tube. The sensor offers high imaging quality, compactness and multiple functions. Its optical, mechanical and circuit designs are described in detail. Utilizing the mirror image of the tungsten electrode in the weld pool, a novel method is proposed to detect, from a single weld image, the arc length and the seam-tracking error of the tungsten electrode relative to the centerline of the joint seam. A calculation model is derived from the geometric relation between the tungsten electrode, the weld pool, the electrode's mirror image in the weld pool and the joint seam. Based on an analysis of the experimental results, a systematic-error correction method based on a linear function is developed to improve the detection precision of the arc length and the seam-tracking error. Experimental results show that the final precision of the system reaches 0.1 mm in detecting both the arc length and the seam-tracking error.
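The linear correction step can be sketched as follows (synthetic data; the 0.92 scale and 0.15 mm offset stand in for the sensor's systematic error):

```python
# Sketch: fit a linear function mapping detected arc length to the reference
# values, then remove the systematic error by applying the fit.
import numpy as np

rng = np.random.default_rng(3)
reference = np.linspace(0.5, 3.0, 12)                   # true arc lengths (mm)
detected = 0.92 * reference + 0.15 + rng.normal(0, 0.02, reference.size)

a, b = np.polyfit(detected, reference, deg=1)           # corrected = a*x + b
corrected = a * detected + b
print(f"max residual after correction: {np.abs(corrected - reference).max():.3f} mm")
```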
Is there any electrophysiological evidence for subliminal error processing?
Shalgi, Shani; Deouell, Leon Y.
2013-01-01
The role of error awareness in executive control and modification of behavior is not fully understood. In line with many recent studies showing that conscious awareness is unnecessary for numerous high-level processes such as strategic adjustments and decision making, it was suggested that error detection can also take place unconsciously. The Error Negativity (Ne) component, long established as a robust error-related component that differentiates between correct responses and errors, was a fine candidate to test this notion: if an Ne is elicited also by errors which are not consciously detected, it would imply a subliminal process involved in error monitoring that does not necessarily lead to conscious awareness of the error. Indeed, for the past decade, the repeated finding of a similar Ne for errors which became aware and errors that did not achieve awareness, compared to the smaller negativity elicited by correct responses (Correct Response Negativity; CRN), has lent the Ne the prestigious status of an index of subliminal error processing. However, there were several notable exceptions to these findings. The study in the focus of this review (Shalgi and Deouell, 2012) sheds new light on both types of previous results. We found that error detection as reflected by the Ne is correlated with subjective awareness: when awareness (or more importantly lack thereof) is more strictly determined using the wagering paradigm, no Ne is elicited without awareness. This result effectively resolves the issue of why there are many conflicting findings regarding the Ne and error awareness. The average Ne amplitude appears to be influenced by individual criteria for error reporting and therefore, studies containing different mixtures of participants who are more confident of their own performance or less confident, or paradigms that either encourage or don't encourage reporting low confidence errors will show different results. Based on this evidence, it is no longer possible to unquestioningly uphold the notion that the amplitude of the Ne is unrelated to subjective awareness, and therefore, that errors are detected without conscious awareness. PMID:24009548
Magnetic-field sensing with quantum error detection under the effect of energy relaxation
NASA Astrophysics Data System (ADS)
Matsuzaki, Yuichiro; Benjamin, Simon
2017-03-01
A solid state spin is an attractive system with which to realize an ultrasensitive magnetic field sensor. A spin superposition state will acquire a phase induced by the target field, and we can estimate the field strength from this phase. Recent studies have aimed at improving sensitivity through the use of quantum error correction (QEC) to detect and correct any bit-flip errors that may occur during the sensing period. Here we investigate the performance of a two-qubit sensor employing QEC and under the effect of energy relaxation. Surprisingly, we find that the standard QEC technique to detect and recover from an error does not improve the sensitivity compared with the single-qubit sensors. This is a consequence of the fact that the energy relaxation induces both a phase-flip and a bit-flip noise where the former noise cannot be distinguished from the relative phase induced from the target fields. However, we have found that we can improve the sensitivity if we adopt postselection to discard the state when error is detected. Even when quantum error detection is moderately noisy, and allowing for the cost of the postselection technique, we find that this two-qubit system shows an advantage in sensing over a single qubit in the same conditions.
Automatic detection of MLC relative position errors for VMAT using the EPID-based picket fence test
NASA Astrophysics Data System (ADS)
Christophides, Damianos; Davies, Alex; Fleckney, Mark
2016-12-01
Multi-leaf collimators (MLCs) ensure the accurate delivery of treatments requiring complex beam fluences like intensity modulated radiotherapy and volumetric modulated arc therapy. The purpose of this work is to automate the detection of MLC relative position errors ⩾0.5 mm using electronic portal imaging device-based picket fence tests and compare the results to the qualitative assessment currently in use. Picket fence tests with and without intentional MLC errors were measured weekly on three Varian linacs. The picket fence images analysed covered a time period ranging between 14-20 months depending on the linac. An algorithm was developed that calculated the MLC error for each leaf-pair present in the picket fence images. The baseline error distributions of each linac were characterised for an initial period of 6 months and compared with the intentional MLC errors using statistical metrics. The distributions of median and one-sample Kolmogorov-Smirnov test p-value exhibited no overlap between baseline and intentional errors and were used retrospectively to automatically detect MLC errors in routine clinical practice. Agreement was found between the MLC errors detected by the automatic method and the fault reports during clinical use, as well as interventions for MLC repair and calibration. In conclusion the method presented provides for full automation of MLC quality assurance, based on individual linac performance characteristics. The use of the automatic method has been shown to provide early warning for MLC errors that resulted in clinical downtime.
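A sketch of the per-leaf-pair decision rule, combining the median error with a one-sample KS test against the linac's baseline distribution (the 0.5 mm threshold mirrors the text; the baseline spread and alpha are illustrative):

```python
# Sketch: flag a leaf pair whose picket-fence errors depart from baseline,
# using the median error and a one-sample Kolmogorov-Smirnov p-value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
baseline = rng.normal(0.0, 0.05, size=(26, 500))  # ~6 months of per-leaf errors (mm)

def leaf_pair_flagged(test_errors, leaf_baseline, alpha=1e-3):
    mu, sigma = leaf_baseline.mean(), leaf_baseline.std(ddof=1)
    _, p = stats.kstest(test_errors, "norm", args=(mu, sigma))
    return abs(np.median(test_errors)) >= 0.5 or p < alpha

good = rng.normal(0.0, 0.05, 40)                  # nominal leaf pair
bad = rng.normal(0.6, 0.05, 40)                   # leaf pair offset by 0.6 mm
print(leaf_pair_flagged(good, baseline[0]))       # False
print(leaf_pair_flagged(bad, baseline[0]))        # True
```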
Gastrolobium spp. poisoning in sheep: A case report
USDA-ARS?s Scientific Manuscript database
This report describes the history and investigation of a suspected plant poisoning event in Western Australia where fifteen sheep died. One of the poisoned sheep was necropsied and gross and microscopic pathology of the poisoned sheep is described. Monofluoroacetate was detected in rumen contents ...
Caffeine enhances real-world language processing: evidence from a proofreading task.
Brunyé, Tad T; Mahoney, Caroline R; Rapp, David N; Ditman, Tali; Taylor, Holly A
2012-03-01
Caffeine has become the most prevalently consumed psychostimulant in the world, but its influences on daily real-world functioning are relatively unknown. The present work investigated the effects of caffeine (0 mg, 100 mg, 200 mg, 400 mg) on a commonplace language task that required readers to identify and correct 4 error types in extended discourse: simple local errors (misspelling 1- to 2-syllable words), complex local errors (misspelling 3- to 5-syllable words), simple global errors (incorrect homophones), and complex global errors (incorrect subject-verb agreement and verb tense). In 2 placebo-controlled, double-blind studies using repeated-measures designs, we found higher detection and repair rates for complex global errors, asymptoting at 200 mg in low consumers (Experiment 1) and peaking at 400 mg in high consumers (Experiment 2). In both cases, covariate analyses demonstrated that arousal state mediated the relationship between caffeine consumption and the detection and repair of complex global errors. Detection and repair rates for the other 3 error types were not affected by caffeine consumption. Taken together, we demonstrate that caffeine has differential effects on error detection and repair as a function of dose and error type, and this relationship is closely tied to caffeine's effects on subjective arousal state. These results support the notion that central nervous system stimulants may enhance global processing of language-based materials and suggest that such effects may originate in caffeine-related right hemisphere brain processes. Implications for understanding the relationships between caffeine consumption and real-world cognitive functioning are discussed. PsycINFO Database Record (c) 2012 APA, all rights reserved.
On the sensitivity of TG-119 and IROC credentialing to TPS commissioning errors.
McVicker, Drew; Yin, Fang-Fang; Adamson, Justus D
2016-01-08
We investigate the sensitivity of IMRT commissioning using the TG-119 C-shape phantom and credentialing with the IROC head and neck phantom to treatment planning system commissioning errors. We introduced errors into the various aspects of the commissioning process for a 6X photon energy modeled using the analytical anisotropic algorithm within a commercial treatment planning system. Errors were implemented into the various components of the dose calculation algorithm including primary photons, secondary photons, electron contamination, and MLC parameters. For each error we evaluated the probability that it could be committed unknowingly during the dose algorithm commissioning stage, and the probability of it being identified during the verification stage. The clinical impact of each commissioning error was evaluated using representative IMRT plans including low and intermediate risk prostate, head and neck, mesothelioma, and scalp; the sensitivity of the TG-119 and IROC phantoms was evaluated by comparing dosimetric changes to the dose planes where film measurements occur and change in point doses where dosimeter measurements occur. No commissioning errors were found to have both a low probability of detection and high clinical severity. When errors do occur, the IROC credentialing and TG 119 commissioning criteria are generally effective at detecting them; however, for the IROC phantom, OAR point-dose measurements are the most sensitive despite being currently excluded from IROC analysis. Point-dose measurements with an absolute dose constraint were the most effective at detecting errors, while film analysis using a gamma comparison and the IROC film distance to agreement criteria were less effective at detecting the specific commissioning errors implemented here.
Seismic Excitation of the Polar Motion, 1977-1993
NASA Technical Reports Server (NTRS)
Chao, Benjamin Fong; Gross, Richard S.; Han, Yan-Ben
1996-01-01
The mass redistribution in the earth as a result of earthquake faulting changes the earth's inertia tensor, and hence its rotation. Using the complete formulae developed by Chao and Gross (1987) based on the normal mode theory, we calculated the earthquake-induced polar motion excitation for the largest 11,015 earthquakes that occurred during 1977.0-1993.6. The seismic excitations in this period are found to be two orders of magnitude below the detection threshold even with today's high precision earth rotation measurements. However, it was calculated that an earthquake of only one tenth the size of the great 1960 Chile event, if it happened today, could be comfortably detected in polar motion observations. Furthermore, collectively these seismic excitations have a strong statistical tendency to nudge the pole towards approximately 140°E, away from the actually observed polar drift direction. This non-random behavior, similarly found in other earthquake-induced changes in earth rotation and low-degree gravitational field by Chao and Gross (1987), manifests some geodynamic behavior yet to be explored.
Gross beta determination in drinking water using scintillating fiber array detector.
Lv, Wen-Hui; Yi, Hong-Chang; Liu, Tong-Qing; Zeng, Zhi; Li, Jun-Li; Zhang, Hui; Ma, Hao
2018-04-04
A scintillating fiber array detector for gross beta counting is developed to monitor the real-time radioactivity in drinking water. The detector, placed in a stainless-steel tank, consists of 1096 scintillating fibers, both sides of which are connected to a photomultiplier tube. The detector parameters, including working voltage, background counting rate and stability, are tested, and the detection efficiency is calibrated using standard potassium chloride solution. Water samples are measured with the detector, and the results are consistent with those obtained by the evaporation method. The background counting rate of the detector is 38.131 ± 0.005 cps, and the detection efficiency for β particles is 0.37 ± 0.01 cps/(Bq/l). The minimum detectable activity concentration (MDAC) of this system can be less than 1.0 Bq/l for β particles in 120 min without pre-concentration. Copyright © 2018 Elsevier Ltd. All rights reserved.
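As a plausibility check, the quoted MDAC is consistent with the reported background rate and efficiency under the conventional Currie formulation of the detection limit (an assumption; the paper may use a different convention):

```latex
N_b = 38.131\,\mathrm{cps}\times 7200\,\mathrm{s} \approx 2.75\times 10^{5}\ \text{counts}\\
L_D \approx 2.71 + 4.65\sqrt{N_b} \approx 2.44\times 10^{3}\ \text{counts}\\
\mathrm{MDAC} \approx \frac{L_D}{t\,\varepsilon}
  = \frac{2440}{7200\,\mathrm{s}\times 0.37\,\mathrm{cps/(Bq/l)}}
  \approx 0.92\ \mathrm{Bq/l} < 1.0\ \mathrm{Bq/l}
```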
Seismic excitation of the polar motion, 1977-1993
NASA Astrophysics Data System (ADS)
Chao, Benjamin Fong; Gross, Richard S.; Han, Yan-Ben
1996-09-01
The mass redistribution in the earth as a result of earthquake faulting changes the earth's inertia tensor, and hence its rotation. Using the complete formulae developed by Chao and Gross (1987) based on the normal mode theory, we calculated the earthquake-induced polar motion excitation for the largest 11,015 earthquakes that occurred during 1977.0-1993.6. The seismic excitations in this period are found to be two orders of magnitude below the detection threshold even with today's high precision earth rotation measurements. However, it was calculated that an earthquake of only one tenth the size of the great 1960 Chile event, if it happened today, could be comfortably detected in polar motion observations. Furthermore, collectively these seismic excitations have a strong statistical tendency to nudge the pole towards ~140°E, away from the actually observed polar drift direction. This non-random behavior, similarly found in other earthquake-induced changes in earth rotation and low-degree gravitational field by Chao and Gross (1987), manifests some geodynamic behavior yet to be explored.
Efficient detection of dangling pointer error for C/C++ programs
NASA Astrophysics Data System (ADS)
Zhang, Wenzhe
2017-08-01
Dangling pointer errors are pervasive in C/C++ programs and very hard to detect. This paper introduces an efficient detector for dangling pointer errors in C/C++ programs. By selectively leaving some memory accesses unmonitored, our method reduces the memory-monitoring overhead and thus achieves better performance than previous methods. Experiments show that our method achieves an average speedup of 9% over a previous compiler-instrumentation-based method and of more than 50% over a previous page-protection-based method.
Method and apparatus for detecting timing errors in a system oscillator
Gliebe, Ronald J.; Kramer, William R.
1993-01-01
A method of detecting timing errors in a system oscillator for an electronic device, such as a power supply, includes the step of comparing a system oscillator signal with a delayed generated signal and generating a signal representative of the timing error when the system oscillator signal is not identical to the delayed signal. An LED indicates to an operator that a timing error has occurred. A hardware circuit implements the above-identified method.
Alachiotis, Nikolaos; Vogiatzi, Emmanouella; Pavlidis, Pavlos; Stamatakis, Alexandros
2013-01-01
Automated DNA sequencers generate chromatograms that contain raw sequencing data. They also generate data that translates the chromatograms into molecular sequences of A, C, G, T, or N (undetermined) characters. Since chromatogram translation programs frequently introduce errors, a manual inspection of the generated sequence data is required. As sequence numbers and lengths increase, visual inspection and manual correction of chromatograms and corresponding sequences on a per-peak and per-nucleotide basis becomes an error-prone, time-consuming, and tedious process. Here, we introduce ChromatoGate (CG), an open-source software that accelerates and partially automates the inspection of chromatograms and the detection of sequencing errors for bidirectional sequencing runs. To provide users full control over the error correction process, a fully automated error correction algorithm has not been implemented. Initially, the program scans a given multiple sequence alignment (MSA) for potential sequencing errors, assuming that each polymorphic site in the alignment may be attributed to a sequencing error with a certain probability. The guided MSA assembly procedure in ChromatoGate detects chromatogram peaks of all characters in an alignment that lead to polymorphic sites, given a user-defined threshold. The threshold value represents the sensitivity of the sequencing error detection mechanism. After this pre-filtering, the user only needs to inspect a small number of peaks in every chromatogram to correct sequencing errors. Finally, we show that correcting sequencing errors is important, because population genetic and phylogenetic inferences can be misled by MSAs with uncorrected mis-calls. Our experiments indicate that estimates of population mutation rates can be affected two- to three-fold by uncorrected errors. PMID:24688709
Detecting genotyping errors and describing black bear movement in northern Idaho
Michael K. Schwartz; Samuel A. Cushman; Kevin S. McKelvey; Jim Hayden; Cory Engkjer
2006-01-01
Non-invasive genetic sampling has become a favored tool to enumerate wildlife. Genetic errors, caused by poor quality samples, can lead to substantial biases in numerical estimates of individuals. We demonstrate how the computer program DROPOUT can detect amplification errors (false alleles and allelic dropout) in a black bear (Ursus americanus) dataset collected in...
Experimental Inoculation of Egyptian Fruit Bats (Rousettus aegyptiacus) with Ebola Virus
Paweska, Janusz T.; Storm, Nadia; Grobbelaar, Antoinette A.; Markotter, Wanda; Kemp, Alan; Jansen van Vuren, Petrus
2016-01-01
Colonized Egyptian fruit bats (Rousettus aegyptiacus), originating in South Africa, were inoculated subcutaneously with Ebola virus (EBOV). No overt signs of morbidity, mortality, or gross lesions were noted. Bats seroconverted by Day 10–16 post inoculation (p.i.), with the highest mean anti-EBOV IgG level on Day 28 p.i. EBOV RNA was detected in blood from one bat. In 16 other tissues tested, viral RNA distribution was limited and at very low levels. No seroconversion could be demonstrated in any of the control bats up to 28 days after in-contact exposure to subcutaneously-inoculated bats. The control bats were subsequently inoculated intraperitoneally, and intramuscularly with the same dose of EBOV. No mortality, morbidity or gross pathology was observed in these bats. Kinetics of immune response was similar to that in subcutaneously-inoculated bats. Viral RNA was more widely disseminated to multiple tissues and detectable in a higher proportion of individuals, but consistently at very low levels. Irrespective of the route of inoculation, no virus was isolated from tissues which tested positive for EBOV RNA. Viral RNA was not detected in oral, nasal, ocular, vaginal, penile and rectal swabs from any of the experimental groups. PMID:26805873
MO-FG-202-06: Improving the Performance of Gamma Analysis QA with Radiomics- Based Image Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wootton, L; Nyflot, M; Ford, E
2016-06-15
Purpose: The use of gamma analysis for IMRT quality assurance has well-known limitations. Traditionally, a simple thresholding technique is used to evaluate passing criteria. However, like any image, the gamma distribution is rich in information which thresholding mostly discards. We therefore propose a novel method of analyzing gamma images that uses quantitative image features borrowed from radiomics, with the goal of improving error detection. Methods: 368 gamma images were generated from 184 clinical IMRT beams. For each beam the dose to a phantom was measured with EPID dosimetry and compared to the TPS dose calculated with and without normally distributed (2 mm sigma) errors in MLC positions. The magnitude of 17 intensity-histogram and size-zone radiomic features was derived from each image. The features that differed most significantly between image sets were determined with ROC analysis. A linear machine-learning model was trained on these features to classify images as with or without errors, using 180 gamma images for training. The model was then applied to an independent validation set of 188 additional gamma distributions, half with and half without errors. Results: The most significant features for detecting errors were histogram kurtosis (p=0.007) and three size-zone metrics (p<1e-6 for each). The size-zone metrics detected clusters of high gamma-value pixels under mispositioned MLCs. The model applied to the validation set had an AUC of 0.8, compared to 0.56 for traditional gamma analysis with the decision threshold restricted to 98% or less. Conclusion: A radiomics-based image analysis method was developed that is more effective in detecting errors than traditional gamma analysis. Though the pilot study here considers only MLC position errors, radiomics-based methods for other error types are being developed, which may provide better error detection and useful information on the source of detected errors. This work was partially supported by a grant from the Agency for Healthcare Research and Quality, grant number R18 HS022244-01.
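A compact sketch of the two feature families named above, computed on a synthetic gamma map containing a stripe of failing pixels under a hypothetical mispositioned leaf (sizes and thresholds are illustrative):

```python
# Sketch: histogram kurtosis plus a simple size-zone feature (connected
# zones of high-gamma pixels) on a synthetic gamma map.
import numpy as np
from scipy import ndimage, stats

rng = np.random.default_rng(5)
gamma_map = np.clip(rng.normal(0.3, 0.15, (256, 256)), 0, None)
gamma_map[100:110, 40:200] = 1.4          # failing stripe under a shifted leaf

kurt = stats.kurtosis(gamma_map.ravel())
labels, nzones = ndimage.label(gamma_map > 1.0)
zone_sizes = ndimage.sum(gamma_map > 1.0, labels, index=range(1, nzones + 1))
largest = int(zone_sizes.max()) if nzones else 0

print(f"kurtosis={kurt:.2f}, zones={nzones}, largest zone={largest} px")
# A plain passing-rate threshold sees ~97.6% of pixels passing and may accept
# this map; the size-zone feature isolates the contiguous failure region.
```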
Corrections of clinical chemistry test results in a laboratory information system.
Wang, Sihe; Ho, Virginia
2004-08-01
The recently released reports by the Institute of Medicine, To Err Is Human and Patient Safety, have received national attention because of their focus on the problem of medical errors. Although a small number of studies have reported on errors in general clinical laboratories, there are, to our knowledge, no reported studies that focus on errors in pediatric clinical laboratory testing. This study aimed to characterize the errors that led to corrections of pediatric clinical chemistry results in the laboratory information system, Misys, and to provide initial data on the errors detected in pediatric clinical chemistry laboratories, in order to improve patient safety in pediatric health care. All clinical chemistry staff members were informed of the study and were requested to report in writing when a correction was made in the laboratory information system. Errors were detected either by the clinicians (the results did not fit the patients' clinical conditions) or by the laboratory technologists (the results were double-checked, and the worksheets were carefully examined twice a day). No incident that was discovered before or during the final validation was included. On each Monday of the study, we generated a report from Misys that listed all of the corrections made during the previous week. We then categorized the corrections according to the types and stages of the incidents that led to the corrections. A total of 187 incidents were detected during the 10-month study, representing a 0.26% error detection rate per requisition. The distribution of the detected incidents included 31 (17%) preanalytic incidents, 46 (25%) analytic incidents, and 110 (59%) postanalytic incidents. The errors related to noninterfaced tests accounted for 50% of the total incidents and for 37% of the affected tests and orderable panels, while the noninterfaced tests and panels accounted for 17% of the total test volume in our laboratory. This pilot study provided the rate and categories of errors detected in a pediatric clinical chemistry laboratory based on the corrections of results in the laboratory information system. A direct interface of the instruments to the laboratory information system had favorable effects on reducing laboratory errors.
Sampson, Maureen L; Gounden, Verena; van Deventer, Hendrik E; Remaley, Alan T
2016-02-01
The main drawback of the periodic analysis of quality control (QC) material is that test performance is not monitored in time periods between QC analyses, potentially leading to the reporting of faulty test results. The objective of this study was to develop a patient based QC procedure for the more timely detection of test errors. Results from a Chem-14 panel measured on the Beckman LX20 analyzer were used to develop the model. Each test result was predicted from the other 13 members of the panel by multiple regression, which resulted in correlation coefficients between the predicted and measured result of >0.7 for 8 of the 14 tests. A logistic regression model, which utilized the measured test result, the predicted test result, the day of the week and time of day, was then developed for predicting test errors. The output of the logistic regression was tallied by a daily CUSUM approach and used to predict test errors, with a fixed specificity of 90%. The mean average run length (ARL) before error detection by CUSUM-Logistic Regression (CSLR) was 20 with a mean sensitivity of 97%, which was considerably shorter than the mean ARL of 53 (sensitivity 87.5%) for a simple prediction model that only used the measured result for error detection. A CUSUM-Logistic Regression analysis of patient laboratory data can be an effective approach for the rapid and sensitive detection of clinical laboratory errors. Published by Elsevier Inc.
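A condensed sketch of the pipeline as described, with synthetic data standing in for the Chem-14 panel (the injected 1.5-unit bias and the CUSUM parameters k and h are illustrative choices, not the paper's values):

```python
# Sketch: predict one analyte from the other panel members, score candidate
# errors with logistic regression, and accumulate scores in a one-sided CUSUM.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n = 6000
panel = rng.normal(size=(n, 14))
true_val = 0.9 * panel[:, 1:4].sum(axis=1) + 0.3 * rng.normal(size=n)

error = np.zeros(n, dtype=bool)
error[n // 2:] = True                         # instrument bias period
measured = true_val + 1.5 * error             # the fault to be detected

others = panel[:, 1:]
reg = LinearRegression().fit(others[:n // 2], measured[:n // 2])  # error-free fit
pred = reg.predict(others)

feats = np.column_stack([measured, pred, measured - pred])
clf = LogisticRegression().fit(feats, error)  # in practice: historical data
score = clf.predict_proba(feats)[:, 1]

k, h, s = 0.5, 5.0, 0.0                       # reference value, decision limit
for i, z in enumerate(score[n // 2:]):        # stream the faulty period
    s = max(0.0, s + (z - k))
    if s > h:
        print(f"alarm after {i + 1} samples (run length)")
        break
```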
SU-E-T-392: Evaluation of Ion Chamber/film and Log File Based QA to Detect Delivery Errors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, C; Mason, B; Kirsner, S
2015-06-15
Purpose: Ion chamber and film (ICAF) measurement is a method used to verify patient dose prior to treatment. More recently, log-file-based QA has been shown to be an alternative to measurement-based QA. In this study, we delivered VMAT plans with and without errors to determine if ICAF and/or log-file-based QA was able to detect the errors. Methods: For two VMAT patients, the original treatment plan plus 7 additional plans with introduced delivery errors were generated and delivered. The erroneous plans had gantry, collimator, MLC, gantry and collimator, collimator and MLC, MLC and gantry, and gantry, collimator, and MLC errors. The gantry and collimator errors were off by 4° for one of the two arcs. The MLC error introduced was one in which the opening aperture didn't move throughout the delivery of the field. For each delivery, an ICAF measurement was made as well as a dose comparison based upon log files. Passing criteria for the plans were an ion chamber dose difference of less than 5% and at least 90% of film pixels passing the 3 mm/3% gamma analysis (GA). For log-file analysis, the criteria were 90% of voxels passing the 3 mm/3% 3D GA and beam parameters matching the plan. Results: The two original plans were delivered and passed both ICAF and log-file-based QA. Both ICAF and log-file QA met the dosimetry criteria on 4 of the 12 erroneous cases analyzed (2 cases were not analyzed). For the log-file analysis, all 12 erroneous plans triggered an alert for a mismatch between the delivery and the plan. The 8 plans that didn't meet criteria all had MLC errors. Conclusion: Our study demonstrates that log-file-based pre-treatment QA was able to detect small errors that may not be detected using ICAF, and that both methods were able to detect larger delivery errors.
How do Community Pharmacies Recover from E-prescription Errors?
Odukoya, Olufunmilola K.; Stone, Jamie A.; Chui, Michelle A.
2014-01-01
Background The use of e-prescribing is increasing annually, with over 788 million e-prescriptions received in US pharmacies in 2012. Approximately 9% of e-prescriptions have medication errors. Objective To describe the process used by community pharmacy staff to detect, explain, and correct e-prescription errors. Methods The error recovery conceptual framework was employed for data collection and analysis. 13 pharmacists and 14 technicians from five community pharmacies in Wisconsin participated in the study. A combination of data collection methods was utilized, including direct observations, interviews, and focus groups. The transcription and content analysis of recordings were guided by the three-step error recovery model. Results Most of the e-prescription errors were detected during the entering of information into the pharmacy system. These errors were detected by both pharmacists and technicians using a variety of strategies which included: (1) performing double checks of e-prescription information; (2) printing the e-prescription to paper and confirming the information on the computer screen with information from the paper printout; and (3) using colored pens to highlight important information. Strategies used for explaining errors included: (1) careful review of patients' medication history; (2) pharmacist consultation with patients; (3) consultation with another pharmacy team member; and (4) use of online resources. In order to correct e-prescription errors, participants made educated guesses of the prescriber's intent or contacted the prescriber via telephone or fax. When e-prescription errors were encountered in the community pharmacies, the primary goal of participants was to get the order right for patients by verifying the prescriber's intent. Conclusion Pharmacists and technicians play an important role in preventing e-prescription errors through the detection of errors and the verification of prescribers' intent. Future studies are needed to examine factors that facilitate or hinder recovery from e-prescription errors. PMID:24373898
Masked and unmasked error-related potentials during continuous control and feedback
NASA Astrophysics Data System (ADS)
Lopes Dias, Catarina; Sburlea, Andreea I.; Müller-Putz, Gernot R.
2018-06-01
The detection of error-related potentials (ErrPs) in tasks with discrete feedback is well established in the brain–computer interface (BCI) field. However, the decoding of ErrPs in tasks with continuous feedback is still in its early stages. Objective. We developed a task in which subjects have continuous control of a cursor's position by means of a joystick. The cursor's position was shown to the participants in two different modalities of continuous feedback: normal and jittered. The jittered feedback was created to mimic the instability that could exist if participants controlled the trajectory directly with brain signals. Approach. This paper studies the electroencephalographic (EEG)—measurable signatures caused by a loss of control over the cursor's trajectory, causing a target miss. Main results. In both feedback modalities, time-locked potentials revealed the typical frontal-central components of error-related potentials. Errors occurring during the jittered feedback (masked errors) were delayed in comparison to errors occurring during normal feedback (unmasked errors). Masked errors displayed lower peak amplitudes than unmasked errors. Time-locked classification analysis allowed a good distinction between correct and error classes (average Cohen's κ, average TPR = 81.8% and average TNR = 96.4%). Time-locked classification analysis between masked error and unmasked error classes revealed results at chance level (average Cohen's κ, average TPR = 60.9% and average TNR = 58.3%). Afterwards, we performed asynchronous detection of ErrPs, combining both masked and unmasked trials. The asynchronous detection of ErrPs in a simulated online scenario resulted in an average TNR of 84.0% and in an average TPR of 64.9%. Significance. The time-locked classification results suggest that the masked and unmasked errors were indistinguishable in terms of classification. The asynchronous classification results suggest that the feedback modality did not hinder the asynchronous detection of ErrPs.
MPI Runtime Error Detection with MUST: Advances in Deadlock Detection
Hilbrich, Tobias; Protze, Joachim; Schulz, Martin; ...
2013-01-01
The widely used Message Passing Interface (MPI) is complex and rich. As a result, application developers require automated tools to avoid and to detect MPI programming errors. We present the Marmot Umpire Scalable Tool (MUST) that detects such errors with significantly increased scalability. We present improvements to our graph-based deadlock detection approach for MPI, which cover future MPI extensions. Our enhancements also check complex MPI constructs that no previous graph-based detection approach handled correctly. Finally, we present optimizations for the processing of MPI operations that reduce runtime deadlock detection overheads. Existing approaches often require O(p) analysis time per MPI operation, for p processes. We empirically observe that our improvements lead to sub-linear or better analysis time per operation for a wide range of real world applications.
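For illustration, a canonical send-send cycle of the kind MUST's wait-for-graph analysis reports (a sketch using mpi4py, which is an assumption of this example rather than a tool from the paper; run with `mpirun -n 2 python deadlock.py`):

```python
# Sketch: both ranks block in a synchronous send, each waiting for the other's
# matching receive -- a cycle in the wait-for graph, i.e. a deadlock that a
# runtime checker such as MUST would flag.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
peer = 1 - rank

comm.ssend({"payload": rank}, dest=peer, tag=0)   # synchronous send: blocks
received = comm.recv(source=peer, tag=0)          # never reached
print(f"rank {rank} received {received}")
```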
Decoding of DBEC-TBED Reed-Solomon codes. [Double-Byte-Error-Correcting, Triple-Byte-Error-Detecting
NASA Technical Reports Server (NTRS)
Deng, Robert H.; Costello, Daniel J., Jr.
1987-01-01
A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple bit (or byte) per chip basis. For example, some 256 K bit DRAM's are organized in 32 K x 8 bit-bytes. Byte-oriented codes such as Reed-Solomon (RS) codes can provide efficient low overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. The paper presents a special decoding technique for double-byte-error-correcting, triple-byte-error-detecting RS codes which is capable of high-speed operation. This technique is designed to find the error locations and the error values directly from the syndrome without having to use the iterative algorithm to find the error locator polynomial.
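The error-control behavior (though not the paper's direct decoding technique) can be demonstrated with the third-party reedsolo package, whose decoder is the standard iterative kind the paper aims to replace; that this library and its return format fit is an assumption of this sketch. Five parity bytes give minimum distance 6, enough to correct two byte errors while still detecting three:

```python
# Sketch: a DBEC-TBED-like Reed-Solomon code over bytes using `reedsolo`.
from reedsolo import RSCodec, ReedSolomonError

rsc = RSCodec(5)                          # 5 parity bytes -> distance 6
codeword = bytearray(rsc.encode(b"memory word"))

codeword[0] ^= 0xFF                       # two byte errors: correctable
codeword[3] ^= 0xA5
decoded = rsc.decode(bytes(codeword))     # newer reedsolo returns a tuple
print(decoded[0] if isinstance(decoded, tuple) else decoded)

codeword[7] ^= 0x42                       # third byte error: detect-only
try:
    rsc.decode(bytes(codeword))
except ReedSolomonError:
    print("triple-byte error detected, not correctable")
```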
Adaboost multi-view face detection based on YCgCr skin color model
NASA Astrophysics Data System (ADS)
Lan, Qi; Xu, Zhiyong
2016-09-01
The traditional Adaboost face detection algorithm uses Haar-like features to train face classifiers, whose detection error rate is low within face regions. Against complex backgrounds, however, the classifiers easily misclassify background regions whose gray-level distribution resembles that of faces, so the false detection rate of the traditional Adaboost algorithm is high. Skin color, one of the most important features of a face, clusters well in the YCgCr color space, so a skin color model can quickly exclude non-face areas. Combining the advantages of the Adaboost algorithm and skin color detection, this paper therefore proposes an Adaboost face detection algorithm based on a YCgCr skin color model. Experiments show that, compared with the traditional algorithm, the proposed method significantly improves detection accuracy and reduces false detections.
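A numpy sketch of the pre-filtering stage; the Cg channel below is defined by analogy with BT.601 Cr/Cb, and both the scaling constant and the skin thresholds are illustrative assumptions rather than the paper's values:

```python
# Sketch: build a YCgCr skin mask to exclude non-face regions before running
# the Haar/Adaboost cascade only on candidate areas.
import numpy as np

def skin_mask(rgb):                        # rgb: HxWx3 uint8 array
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b               # BT.601 luma
    cr = 0.713 * (r - y) + 128                          # BT.601 chroma
    cg = 1.211 * (g - y) + 128                          # Cr-style scaling of G - Y
    return (100 < cg) & (cg < 130) & (135 < cr) & (cr < 175)  # assumed ranges

img = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)
print(f"candidate skin pixels: {skin_mask(img).mean():.1%}")
# The cascade classifier is then applied only inside the mask's regions.
```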
van Elk, Michiel; Bousardt, Roel; Bekkering, Harold; van Schie, Hein T
2012-01-01
Detecting errors in others' actions is of pivotal importance for joint action, competitive behavior and observational learning. Although many studies have focused on the neural mechanisms involved in detecting low-level errors, relatively little is known about error detection in everyday situations. The present study aimed to identify the functional and neural mechanisms whereby we understand the correctness of others' actions involving well-known objects (e.g. pouring coffee into a cup). Participants observed action sequences in which the correctness of the object grasped and the grip applied to a pair of objects were independently manipulated. Observation of object violations (e.g. grasping the empty cup instead of the coffee pot) resulted in a stronger P3-effect than observation of grip errors (e.g. grasping the coffee pot at the upper part instead of the handle), likely reflecting a reorienting response directing attention to the relevant location. Following the P3-effect, a parietal slow wave positivity was observed that persisted for grip errors, likely reflecting the detection of an incorrect hand-object interaction. These findings provide new insight into the functional significance of the neurophysiological markers associated with the observation of incorrect actions and suggest that the P3-effect and the subsequent parietal slow wave positivity may reflect the detection of errors at different levels in the action hierarchy. This study thereby elucidates the cognitive processes that support the detection of action violations in the selection of objects and grips.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wojahn, Christopher K.
2015-10-20
This HDL code (hereafter referred to as "software") implements circuitry in Xilinx Virtex-5QV Field Programmable Gate Array (FPGA) hardware. This software allows the device to self-check the consistency of its own configuration memory for radiation-induced errors. The software then provides the capability to correct any single-bit errors detected in the memory using the device's inherent circuitry, or reload corrupted memory frames when larger errors occur that cannot be corrected with the device's built-in error correction and detection scheme.
Current neurotoxicity and developmental neurotoxicity testing methods for hazard identification rely on in vivo neurobehavior, neurophysiological, and gross pathology of the nervous system. These measures may not be sensitive enough to detect small changes caused by realistic ex...
40 CFR 798.6050 - Functional observational battery.
Code of Federal Regulations, 2012 CFR
2012-07-01
... observational battery is a noninvasive procedure designed to detect gross functional deficits in young adults.... This battery of tests is not intended to provide a detailed evaluation of neurotoxicity. It is designed... such factors as the comparative metabolism of the chemical and species sensitivity to the toxic effects...
40 CFR 798.6050 - Functional observational battery.
Code of Federal Regulations, 2011 CFR
2011-07-01
... observational battery is a noninvasive procedure designed to detect gross functional deficits in young adults.... This battery of tests is not intended to provide a detailed evaluation of neurotoxicity. It is designed... such factors as the comparative metabolism of the chemical and species sensitivity to the toxic effects...
40 CFR 798.6050 - Functional observational battery.
Code of Federal Regulations, 2010 CFR
2010-07-01
... observational battery is a noninvasive procedure designed to detect gross functional deficits in young adults.... This battery of tests is not intended to provide a detailed evaluation of neurotoxicity. It is designed... such factors as the comparative metabolism of the chemical and species sensitivity to the toxic effects...
40 CFR 798.6050 - Functional observational battery.
Code of Federal Regulations, 2013 CFR
2013-07-01
... observational battery is a noninvasive procedure designed to detect gross functional deficits in young adults.... This battery of tests is not intended to provide a detailed evaluation of neurotoxicity. It is designed... such factors as the comparative metabolism of the chemical and species sensitivity to the toxic effects...
40 CFR 798.6050 - Functional observational battery.
Code of Federal Regulations, 2014 CFR
2014-07-01
... observational battery is a noninvasive procedure designed to detect gross functional deficits in young adults.... This battery of tests is not intended to provide a detailed evaluation of neurotoxicity. It is designed... such factors as the comparative metabolism of the chemical and species sensitivity to the toxic effects...
Detection and Correction of Silent Data Corruption for Large-Scale High-Performance Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fiala, David J; Mueller, Frank; Engelmann, Christian
Faults have become the norm rather than the exception for high-end computing on clusters with 10s/100s of thousands of cores. Exacerbating this situation, some of these faults remain undetected, manifesting themselves as silent errors that corrupt memory while applications continue to operate and report incorrect results. This paper studies the potential for redundancy to both detect and correct soft errors in MPI message-passing applications. Our study investigates the challenges inherent to detecting soft errors within MPI applications while providing transparent MPI redundancy. By assuming a model wherein corruption in application data manifests itself by producing differing MPI message data between replicas, we study the best-suited protocols for detecting and correcting MPI data that is the result of corruption. To experimentally validate our proposed detection and correction protocols, we introduce RedMPI, an MPI library which resides in the MPI profiling layer. RedMPI is capable of both online detection and correction of soft errors that occur in MPI applications without requiring any modifications to the application source, by utilizing either double or triple redundancy. Our results indicate that our most efficient consistency protocol can successfully protect applications experiencing even high rates of silent data corruption with runtime overheads between 0% and 30% as compared to unprotected applications without redundancy. Using our fault injector within RedMPI, we observe that even a single soft error can have profound effects on running applications, causing a cascading pattern of corruption that in most cases spreads to all other processes. RedMPI's protection has been shown to successfully mitigate the effects of soft errors while allowing applications to complete with correct results even in the face of errors.
ERIC Educational Resources Information Center
Sherwood, David E.
2010-01-01
According to closed-loop accounts of motor control, movement errors are detected by comparing sensory feedback to an acquired reference state. Differences between the reference state and the movement-produced feedback result in an error signal that serves as a basis for a correction. The main question addressed in the current study was how…
A numerical study of adaptive space and time discretisations for Gross-Pitaevskii equations.
Thalhammer, Mechthild; Abhau, Jochen
2012-08-15
As a basic principle, benefits of adaptive discretisations are an improved balance between required accuracy and efficiency as well as an enhancement of the reliability of numerical computations. In this work, the capacity of locally adaptive space and time discretisations for the numerical solution of low-dimensional nonlinear Schrödinger equations is investigated. The considered model equation is related to the time-dependent Gross-Pitaevskii equation arising in the description of Bose-Einstein condensates in dilute gases. The performance of the Fourier pseudo-spectral method constrained to uniform meshes versus the locally adaptive finite element method and of higher-order exponential operator splitting methods with variable time stepsizes is studied. Numerical experiments confirm that a local time stepsize control based on a posteriori local error estimators or embedded splitting pairs, respectively, is effective in different situations, with an enhancement either in efficiency or reliability. As expected, adaptive time-splitting schemes combined with fast Fourier transform techniques are favourable regarding accuracy and efficiency when applied to Gross-Pitaevskii equations with a defocusing nonlinearity and a mildly varying regular solution. However, the numerical solution of nonlinear Schrödinger equations in the semi-classical regime becomes a demanding task. Due to the highly oscillatory and nonlinear nature of the problem, the spatial mesh size and the time increments need to be of the size of the decisive parameter [Formula: see text], especially when it is desired to capture correctly the quantitative behaviour of the wave function itself. The required high resolution in space constricts the feasibility of numerical computations for both the Fourier pseudo-spectral and the finite element method. Nevertheless, for smaller parameter values, locally adaptive time discretisations make it feasible to choose time stepsizes small enough that the numerical approximation correctly captures the behaviour of the analytical solution. Further illustrations for Gross-Pitaevskii equations with a focusing nonlinearity or a sharp Gaussian as initial condition, respectively, complement the numerical study.
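The time-stepsize control mentioned above can be sketched generically: advance with an embedded pair of approximations, estimate the local error from their difference, and accept or shrink the step accordingly. A schematic Python sketch, where a scalar ODE with a cubic nonlinearity stands in for the discretised equation; the pair, coefficients and tolerance are illustrative, not the paper's schemes:

    def step_pair(y, h):
        # Hypothetical embedded pair: first-order and second-order (midpoint)
        # estimates for y' = -i*|y|^2*y, a cubic nonlinearity.
        f = lambda u: -1j * abs(u)**2 * u
        y_low = y + h * f(y)                         # order 1
        y_high = y + h * f(y + 0.5 * h * f(y))       # order 2
        return y_high, abs(y_high - y_low)           # value + local error estimate

    def integrate(y0, t_end, tol=1e-6, h=1e-2):
        y, t, p = y0, 0.0, 1                         # p = order of the lower method
        while t < t_end:
            h = min(h, t_end - t)
            y_new, err = step_pair(y, h)
            if err <= tol:                           # accept the step
                y, t = y_new, t + h
            # standard controller: rescale h toward the tolerance, with safety bounds
            h *= min(2.0, max(0.1, 0.9 * (tol / max(err, 1e-16))**(1 / (p + 1))))
        return y

    print(integrate(1.0 + 0j, 1.0))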
Tallot, Lucille; Diaz-Mataix, Lorenzo; Perry, Rosemarie E.; Wood, Kira; LeDoux, Joseph E.; Mouly, Anne-Marie; Sullivan, Regina M.; Doyère, Valérie
2017-01-01
The updating of a memory is triggered whenever it is reactivated and a mismatch from what is expected (i.e., prediction error) is detected, a process that can be unraveled through the memory's sensitivity to protein synthesis inhibitors (i.e., reconsolidation). As noted in previous studies, in Pavlovian threat/aversive conditioning in adult rats, prediction error detection and its associated protein synthesis-dependent reconsolidation can be triggered by reactivating the memory with the conditioned stimulus (CS), but without the unconditioned stimulus (US), or by presenting a CS–US pairing with a different CS–US interval than during the initial learning. Whether similar mechanisms underlie memory updating in the young is not known. Using similar paradigms with rapamycin (an mTORC1 inhibitor), we show that preweaning rats (PN18–20) do form a long-term memory of the CS–US interval, and detect a 10-sec versus 30-sec temporal prediction error. However, the resulting updating/reconsolidation processes become adult-like after adolescence (PN30–40). Our results thus show that while temporal prediction error detection exists in preweaning rats, specific infant-type mechanisms are at play for associative learning and memory. PMID:28202715
Bottoms, Hayden C; Eslick, Andrea N; Marsh, Elizabeth J
2010-08-01
Although contradictions with stored knowledge are common in daily life, people often fail to notice them. For example, in the Moses illusion, participants fail to notice errors in questions such as "How many animals of each kind did Moses take on the Ark?" despite later showing knowledge that the Biblical reference is to Noah, not Moses. We examined whether error prevalence affected participants' ability to detect distortions in questions, and whether this in turn had memorial consequences. Many of the errors were overlooked, but participants were better able to catch them when they were more common. More generally, the failure to detect errors had negative memorial consequences, increasing the likelihood that the errors were used to answer later general knowledge questions. Methodological implications of this finding are discussed, as it suggests that typical analyses likely underestimate the size of the Moses illusion. Overall, answering distorted questions can yield errors in the knowledge base; most importantly, prior knowledge does not protect against these negative memorial consequences.
Prevalence and pattern of prescription errors in a Nigerian kidney hospital.
Babatunde, Kehinde M; Akinbodewa, Akinwumi A; Akinboye, Ayodele O; Adejumo, Ademola O
2016-12-01
To determine (i) the prevalence and pattern of prescription errors in our Centre and (ii) appraise pharmacists' intervention and correction of identified prescription errors. A descriptive, single-blinded cross-sectional study. Kidney Care Centre is a public specialist hospital. The monthly patient load averages 60 General Out-patient cases and 17.4 in-patients. A total of 31 medical doctors (comprising 2 Consultant Nephrologists, 15 Medical Officers, and 14 House Officers), 40 nurses and 24 ward assistants participated in the study. One pharmacist runs the daily call schedule. Prescribers were blinded to the study. Prescriptions containing only galenicals were excluded. An error detection mechanism was set up to identify and correct prescription errors. Life-threatening prescriptions were discussed with the Quality Assurance Team of the Centre, who conveyed such errors to the prescriber without revealing the on-going study. The main outcome measures were the prevalence of prescription errors, the pattern of prescription errors, and pharmacists' intervention. A total of 2,660 prescriptions (75.0%) were found to have one form of error or the other: illegitimacy 1,388 (52.18%), omission 1,221 (45.90%), and wrong dose 51 (1.92%); no error of style was detected. Life-threatening errors were low (1.1-2.2%). Errors were found more commonly among junior doctors and non-medical doctors. Only 56 (1.6%) of the errors were detected and corrected during the process of dispensing. Prescription errors related to illegitimacy and omissions were highly prevalent. There is a need to improve the patient-to-healthcare giver ratio. A medication quality assurance unit is needed in our hospitals. No financial support was received by any of the authors for this study.
Online 3D EPID-based dose verification: Proof of concept.
Spreeuw, Hanno; Rozendaal, Roel; Olaciregui-Ruiz, Igor; González, Patrick; Mans, Anton; Mijnheer, Ben; van Herk, Marcel
2016-07-01
Delivery errors during radiotherapy may lead to medical harm and reduced life expectancy for patients. Such serious incidents can be avoided by performing dose verification online, i.e., while the patient is being irradiated, creating the possibility of halting the linac in case of a large overdosage or underdosage. The offline EPID-based 3D in vivo dosimetry system clinically employed at our institute is in principle suited for online treatment verification, provided the system is able to complete 3D dose reconstruction and verification within 420 ms, the present acquisition time of a single EPID frame. It is the aim of this study to show that our EPID-based dosimetry system can be made fast enough to achieve online 3D in vivo dose verification. The current dose verification system was sped up in two ways. First, a new software package was developed to perform all computations that are not dependent on portal image acquisition separately, thus removing the need for doing these calculations in real time. Second, the 3D dose reconstruction algorithm was sped up via a new, multithreaded implementation. Dose verification was implemented by comparing planned with reconstructed 3D dose distributions delivered to two regions in a patient: the target volume and the nontarget volume receiving at least 10 cGy. In both volumes, the mean dose is compared, while in the nontarget volume, the near-maximum dose (D2) is compared as well. The real-time dosimetry system was tested by irradiating an anthropomorphic phantom with three VMAT plans: a 6 MV head-and-neck treatment plan, a 10 MV rectum treatment plan, and a 10 MV prostate treatment plan. In all plans, two types of serious delivery errors were introduced. The functionality of automatically halting the linac was also implemented and tested. The precomputation time per treatment was ∼180 s/treatment arc, depending on gantry angle resolution. The complete processing of a single portal frame, including dose verification, took 266 ± 11 ms on a dual octocore Intel Xeon E5-2630 CPU running at 2.40 GHz. The introduced delivery errors were detected after 5-10 s irradiation time. A prototype online 3D dose verification tool using portal imaging has been developed and successfully tested for two different kinds of gross delivery errors. Thus, online 3D dose verification has been technologically achieved.
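The dose comparisons described in the abstract (mean dose in the target and nontarget volumes, near-maximum dose D2 in the nontarget volume) reduce to a few array reductions once the 3D dose grids and region masks exist. A minimal Python/NumPy sketch with a hypothetical 5% tolerance, not the clinical system's actual criteria:

    import numpy as np

    def verify(planned, reconstructed, target_mask, nontarget_mask, tol=0.05):
        """Compare planned vs reconstructed 3D dose arrays in the two regions
        named in the abstract. D2 is the dose received by the hottest 2% of
        voxels, i.e. the 98th percentile. Assumes nonzero planned dose in
        each region; tol is an illustrative relative tolerance."""
        checks = {
            "target mean":    (planned[target_mask].mean(),
                               reconstructed[target_mask].mean()),
            "nontarget mean": (planned[nontarget_mask].mean(),
                               reconstructed[nontarget_mask].mean()),
            "nontarget D2":   (np.percentile(planned[nontarget_mask], 98),
                               np.percentile(reconstructed[nontarget_mask], 98)),
        }
        return {name: abs(r - p) / p <= tol for name, (p, r) in checks.items()}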
Nikolic, Mark I; Sarter, Nadine B
2007-08-01
To examine operator strategies for diagnosing and recovering from errors and disturbances as well as the impact of automation design and time pressure on these processes. Considerable efforts have been directed at error prevention through training and design. However, because errors cannot be eliminated completely, their detection, diagnosis, and recovery must also be supported. Research has focused almost exclusively on error detection. Little is known about error diagnosis and recovery, especially in the context of event-driven tasks and domains. With a confederate pilot, 12 airline pilots flew a 1-hr simulator scenario that involved three challenging automation-related tasks and events that were likely to produce erroneous actions or assessments. Behavioral data were compared with a canonical path to examine pilots' error and disturbance management strategies. Debriefings were conducted to probe pilots' system knowledge. Pilots seldom followed the canonical path to cope with the scenario events. Detection of a disturbance was often delayed. Diagnostic episodes were rare because of pilots' knowledge gaps and time criticality. In many cases, generic inefficient recovery strategies were observed, and pilots relied on high levels of automation to manage the consequences of an error. Our findings describe and explain the nature and shortcomings of pilots' error management activities. They highlight the need for improved automation training and design to achieve more timely detection, accurate explanation, and effective recovery from errors and disturbances. Our findings can inform the design of tools and techniques that support disturbance management in various complex, event-driven environments.
He, Jianbo; Li, Jijie; Huang, Zhongwen; Zhao, Tuanjie; Xing, Guangnan; Gai, Junyi; Guan, Rongzhan
2015-01-01
Experimental error control is very important in quantitative trait locus (QTL) mapping. Although numerous statistical methods have been developed for QTL mapping, a QTL detection model based on an appropriate experimental design that emphasizes error control has not been developed. Lattice design is very suitable for experiments with large sample sizes, which is usually required for accurate mapping of quantitative traits. However, the lack of a QTL mapping method based on lattice design meant that the arithmetic mean or adjusted mean of each line of observations in the lattice design had to be used as the response variable, resulting in low QTL detection power. As an improvement, we developed a QTL mapping method termed composite interval mapping based on lattice design (CIMLD). In the lattice design, experimental errors are decomposed into random errors and block-within-replication errors. Four levels of block-within-replication errors were simulated to show the power of QTL detection under different error controls. The simulation results showed that the arithmetic mean method, which is equivalent to a method under randomized complete block design (RCBD), was very sensitive to the size of the block variance: as block variance increased, the power of QTL detection decreased from 51.3% to 9.4%. In contrast to the RCBD method, the power of CIMLD and the adjusted mean method did not change for different block variances. The CIMLD method showed 1.2- to 7.6-fold higher power of QTL detection than the arithmetic or adjusted mean methods. Our proposed method was applied to real soybean (Glycine max) data as an example, and 10 QTLs for biomass were identified that explained 65.87% of the phenotypic variation, while only three and two QTLs were identified by the arithmetic and adjusted mean methods, respectively.
Tridandapani, Srini; Ramamurthy, Senthil; Provenzale, James; Obuchowski, Nancy A; Evanoff, Michael G; Bhatti, Pamela
2014-08-01
To evaluate whether the presence of facial photographs obtained at the point-of-care of portable radiography leads to increased detection of wrong-patient errors. In this institutional review board-approved study, 166 radiograph-photograph combinations were obtained from 30 patients. Consecutive radiographs from the same patients resulted in 83 unique pairs (ie, a new radiograph and a prior, comparison radiograph) for interpretation. To simulate wrong-patient errors, mismatched pairs were generated by pairing radiographs from different patients chosen randomly from the sample. Ninety radiologists each interpreted a unique randomly chosen set of 10 radiographic pairs, containing up to 10% mismatches (ie, error pairs). Radiologists were randomly assigned to interpret radiographs with or without photographs. The number of mismatches was identified, and interpretation times were recorded. Ninety radiologists with 21 ± 10 (mean ± standard deviation) years of experience were recruited to participate in this observer study. With the introduction of photographs, the proportion of errors detected increased from 31% (9 of 29) to 77% (23 of 30; P = .006). The odds ratio for detection of error with photographs to detection without photographs was 7.3 (95% confidence interval: 2.29-23.18). Observer qualifications, training, or practice in cardiothoracic radiology did not influence sensitivity for error detection. There was no significant difference in interpretation time between studies without photographs and those with photographs (60 ± 22 vs. 61 ± 25 seconds; P = .77). In this observer study, facial photographs obtained simultaneously with portable chest radiographs increased the identification of wrong-patient errors without a substantial increase in interpretation time. This technique offers a potential means to increase patient safety through correct patient identification. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.
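The reported odds ratio follows directly from the 2x2 table given in the abstract (23 of 30 errors detected with photographs, 9 of 29 without); a short Python check using the standard log-odds confidence interval:

    import math

    # Detected / missed counts taken from the abstract above.
    a, b = 23, 30 - 23      # with photographs
    c, d = 9, 29 - 9        # without photographs

    OR = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo, hi = (math.exp(math.log(OR) + s * 1.96 * se) for s in (-1, 1))
    print(f"OR = {OR:.1f}, 95% CI {lo:.2f}-{hi:.2f}")
    # -> OR = 7.3, CI ~2.30-23.2, matching the reported 7.3 (2.29-23.18) to rounding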
NASA Astrophysics Data System (ADS)
Chida, Y.; Takagawa, T.
2017-12-01
The observation data of GPS buoys installed offshore of Japan are used for monitoring not only waves but also tsunamis. The real-time data were successfully used to upgrade the tsunami warnings just after the 2011 Tohoku earthquake. Huge tsunamis can be easily detected because the signal-to-noise ratio is high enough, but moderate tsunamis are not. GPS data sometimes include error waveforms resembling tsunamis because the positioning accuracy changes with the number and positions of the GPS satellites. Distinguishing true tsunami waveforms from pseudo-tsunami ones is therefore important for tsunami detection. In this research, a method to reduce misdetections of tsunamis in the observation data of GPS buoys and to increase the efficiency of tsunami detection was developed. Firstly, the error waveforms were extracted using the indexes of position dilution of precision, reliability of GPS satellite positioning, and the number of satellites used for calculation. Then, the output from this procedure was used for the Continuous Wavelet Transform (CWT) to analyze the time-frequency characteristics of error waveforms and real tsunami waveforms. We found that the error waveforms tended to appear when the accuracy of GPS buoy positioning was low. By extracting these waveforms, it was possible to remove about 43% of the error waveforms without reducing the tsunami detection rate. Moreover, we found that the power spectra obtained from the error waveforms and real tsunamis had similar amplitudes in the long-period band (4-65 minutes); on the other hand, the short-period amplitude (< 1 minute) of the error waveforms was significantly larger than that of the real tsunami waveforms. By thresholding the short-period component, further extraction of error waveforms became possible without a significant reduction of the tsunami detection rate.
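The final thresholding step amounts to comparing band-limited power of a candidate waveform. The sketch below uses a plain FFT band-power ratio rather than the CWT of the study, as a simplified Python illustration; the alert threshold would have to be tuned to the buoy data:

    import numpy as np

    def short_period_ratio(eta, fs):
        """Ratio of power in the short-period band (< 1 minute) to power in
        the long-period band (4-65 minutes) of a sea-surface record eta
        sampled at fs Hz. Per the abstract, error waveforms show much larger
        short-period power than real tsunamis."""
        f = np.fft.rfftfreq(len(eta), d=1 / fs)
        p = np.abs(np.fft.rfft(eta - eta.mean()))**2
        short = p[f > 1 / 60].sum()
        long_ = p[(f >= 1 / (65 * 60)) & (f <= 1 / (4 * 60))].sum()
        return short / (long_ + 1e-12)

    # flag as a pseudo-tsunami when the ratio exceeds a tuned threshold (hypothetical)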
Detection of layup errors in prepreg laminates using shear ultrasonic waves
NASA Astrophysics Data System (ADS)
Hsu, David K.; Fischer, Brent A.
1996-11-01
The highly anisotropic elastic properties of the plies in a composite laminate manufactured from unidirectional prepregs interact strongly with the polarization direction of shear ultrasonic waves propagating through its thickness. The received signals in a 'crossed polarizer' transmission configuration are particularly sensitive to ply orientation and layup sequence in a laminate. Such measurements can therefore serve as an NDE tool for detecting layup errors. For example, it was recently shown experimentally that the sensitivity for detecting the presence of misoriented plies is better than one ply out of a 48-ply laminate of graphite epoxy. A physical model based on the decomposition and recombination of the shear polarization vector has been constructed and used in the interpretation and prediction of test results. Since errors should be detected early in the manufacturing process, this work also addresses the inspection of 'green' composite laminates using electromagnetic acoustic transducers (EMAT). Preliminary results for ply error detection obtained with EMAT probes are described.
Moriano, Javier; Rodríguez, Francisco Javier; Martín, Pedro; Jiménez, Jose Antonio; Vuksanovic, Branislav
2016-01-01
In recent years, Secondary Substations (SSs) are being provided with equipment that allows their full management. This is particularly useful not only for monitoring and planning purposes but also for detecting erroneous measurements, which could negatively affect the performance of the SS. On the other hand, load forecasting is extremely important since it helps electricity companies make crucial decisions regarding purchasing and generating electric power, load switching, and infrastructure development. In this regard, Short Term Load Forecasting (STLF) allows the electric power load to be predicted over an interval ranging from one hour to one week. However, important issues concerning error detection by employing STLF have not been specifically addressed until now. This paper proposes a novel STLF-based approach to the detection of gain and offset errors introduced by the measurement equipment. The implemented system has been tested against real power load data provided by electricity suppliers. Different gain and offset error levels are successfully detected. PMID:26771613
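Although the paper's detector is built on an STLF model, the gain/offset logic itself can be illustrated by regressing measured load on forecast load: a slope away from 1 suggests a gain error and an intercept away from 0 an offset error. A Python sketch with illustrative thresholds, not the paper's algorithm:

    import numpy as np

    def estimate_gain_offset(measured, forecast):
        """Regress measured load on the short-term forecast over a window.
        For an error-free meter the slope should be ~1 and the intercept ~0;
        a deviating slope suggests a gain error, a deviating intercept an
        offset error. Thresholds below are hypothetical."""
        gain, offset = np.polyfit(forecast, measured, 1)
        flags = []
        if abs(gain - 1.0) > 0.05:
            flags.append(f"possible gain error (slope {gain:.3f})")
        if abs(offset) > 0.05 * forecast.mean():
            flags.append(f"possible offset error (intercept {offset:.1f})")
        return flags or ["no error detected"]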
Metacognition and proofreading: the roles of aging, motivation, and interest.
Hargis, Mary B; Yue, Carole L; Kerr, Tyson; Ikeda, Kenji; Murayama, Kou; Castel, Alan D
2017-03-01
The current study examined younger and older adults' error detection accuracy, prediction calibration, and postdiction calibration on a proofreading task, to determine if age-related differences would be present in this type of common error detection task. Participants were given text passages, and were first asked to predict the percentage of errors they would detect in the passage. They then read the passage and circled errors (which varied in complexity and locality), and made postdictions regarding their performance, before repeating this with another passage and answering a comprehension test of both passages. There were no age-related differences in error detection accuracy, text comprehension, or metacognitive calibration, though participants in both age groups were overconfident overall in their metacognitive judgments. Both groups gave similar ratings of motivation to complete the task. The older adults rated the passages as more interesting than younger adults did, although this level of interest did not appear to influence error-detection performance. The age equivalence in both proofreading ability and calibration suggests that the ability to proofread text passages and the associated metacognitive monitoring used in judging one's own performance are maintained in aging. These age-related similarities persisted when younger adults completed the proofreading tasks on a computer screen, rather than with paper and pencil. The findings provide novel insights regarding the influence that cognitive aging may have on metacognitive accuracy and text processing in an everyday task.
An extension of the receiver operating characteristic curve and AUC-optimal classification.
Takenouchi, Takashi; Komori, Osamu; Eguchi, Shinto
2012-10-01
While most proposed methods for solving classification problems focus on minimization of the classification error rate, we are interested in the receiver operating characteristic (ROC) curve, which provides more information about classification performance than the error rate does. The area under the ROC curve (AUC) is a natural measure for overall assessment of a classifier based on the ROC curve. We discuss a class of concave functions for AUC maximization in which a boosting-type algorithm including RankBoost is considered, and the Bayesian risk consistency and the lower bound of the optimum function are discussed. A procedure derived by maximizing a specific optimum function has high robustness, based on gross error sensitivity. Additionally, we focus on the partial AUC, which is the partial area under the ROC curve. For example, in medical screening, a high true-positive rate to the fixed lower false-positive rate is preferable and thus the partial AUC corresponding to lower false-positive rates is much more important than the remaining AUC. We extend the class of concave optimum functions for partial AUC optimality with the boosting algorithm. We investigated the validity of the proposed method through several experiments with data sets in the UCI repository.
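Empirical AUC and a normalised partial AUC over a low false-positive-rate region can be computed directly from classifier scores. A small NumPy sketch; the pairwise formulation is O(m·n), fine for illustration, and the fpr_max value is an assumption:

    import numpy as np

    def auc(scores_pos, scores_neg):
        """Empirical AUC: probability that a random positive outscores a
        random negative (ties counted half)."""
        diff = scores_pos[:, None] - scores_neg[None, :]
        return (diff > 0).mean() + 0.5 * (diff == 0).mean()

    def partial_auc(scores_pos, scores_neg, fpr_max=0.1):
        """Partial AUC restricted to false-positive rates below fpr_max,
        normalised to [0, 1] -- the low-FPR region emphasised in the
        abstract for medical screening. Uses only the k highest-scoring
        negatives, those inside the FPR budget."""
        neg_sorted = np.sort(scores_neg)[::-1]
        k = max(1, int(np.ceil(fpr_max * len(neg_sorted))))
        hardest = neg_sorted[:k]
        return (scores_pos[:, None] > hardest[None, :]).mean()

    pos = np.random.normal(1, 1, 500)
    neg = np.random.normal(0, 1, 500)
    print(auc(pos, neg), partial_auc(pos, neg))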
Model assessment using a multi-metric ranking technique
NASA Astrophysics Data System (ADS)
Fitzpatrick, P. J.; Lau, Y.; Alaka, G.; Marks, F.
2017-12-01
Validation comparisons of multiple models present challenges when skill levels are similar, especially in regimes dominated by the climatological mean. Assessing skill separation requires advanced validation metrics and the identification of adeptness in extreme events, while maintaining simplicity for management decisions. Flexibility for operations is also an asset. This work postulates a weighted tally and consolidation technique which ranks results by multiple types of metrics. Variables include absolute error, bias, acceptable absolute error percentages, outlier metrics, model efficiency, Pearson correlation, Kendall's Tau, reliability index, multiplicative gross error, and root mean squared differences. Other metrics, such as root mean square difference and rank correlation, were also explored but removed when the information was discovered to be generally duplicative of other metrics. While equal weights are applied, the weights could be altered depending on preferred metrics. Two examples are shown comparing ocean models' currents and tropical cyclone products, including experimental products. The importance of using magnitude and direction for tropical cyclone track forecasts instead of distance, along-track, and cross-track errors is discussed. Tropical cyclone intensity and structure prediction are also assessed. Vector correlations are not included in the ranking process, but were found useful in an independent context, and will be briefly reported.
Mapping DNA polymerase errors by single-molecule sequencing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, David F.; Lu, Jenny; Chang, Seungwoo
Genomic integrity is compromised by DNA polymerase replication errors, which occur in a sequence-dependent manner across the genome. Accurate and complete quantification of a DNA polymerase's error spectrum is challenging because errors are rare and difficult to detect. We report a high-throughput sequencing assay to map in vitro DNA replication errors at the single-molecule level. Unlike previous methods, our assay is able to rapidly detect a large number of polymerase errors at base resolution over any template substrate without quantification bias. To overcome the high error rate of high-throughput sequencing, our assay uses a barcoding strategy in which each replication product is tagged with a unique nucleotide sequence before amplification. This allows multiple sequencing reads of the same product to be compared so that sequencing errors can be found and removed. We demonstrate the ability of our assay to characterize the average error rate, error hotspots and lesion bypass fidelity of several DNA polymerases.
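Computationally, the barcoding strategy reduces to grouping reads by barcode and taking a per-position majority consensus, so that sequencing errors (which differ between reads) are outvoted while true replication errors (shared by all reads of a product) survive. A toy Python sketch over pre-aligned, equal-length reads:

    from collections import Counter

    def consensus(reads):
        """Collapse multiple sequencing reads of the same barcoded
        replication product into a consensus sequence; positions without a
        clear majority are masked with 'N'."""
        out = []
        for column in zip(*reads):
            base, n = Counter(column).most_common(1)[0]
            out.append(base if n > len(column) // 2 else "N")
        return "".join(out)

    # The lone sequencing error (C in read 3) is outvoted; a true polymerase
    # error would appear in every read of the product and survive.
    print(consensus(["ACGTA", "ACGTA", "ACCTA"]))   # -> "ACGTA"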
Synchronization Design and Error Analysis of Near-Infrared Cameras in Surgical Navigation.
Cai, Ken; Yang, Rongqian; Chen, Huazhou; Huang, Yizhou; Wen, Xiaoyan; Huang, Wenhua; Ou, Shanxing
2016-01-01
The accuracy of optical tracking systems is important to scientists. With the improvements reported in this regard, such systems have been applied to an increasing number of operations. To further enhance the accuracy of these systems and to reduce the effect of synchronization and visual field errors, this study introduces a field-programmable gate array (FPGA)-based synchronization control method, a method for measuring synchronous errors, and an error distribution map in the field of view. Synchronization control maximizes the parallel processing capability of the FPGA, and synchronous error measurement can effectively detect the errors caused by synchronization in an optical tracking system. The distribution of positioning errors across the field of view can be detected through the aforementioned error distribution map. Therefore, doctors can perform surgeries in areas with few positioning errors, and the accuracy of optical tracking systems is considerably improved. The system is analyzed and validated in this study through experiments that involve the proposed methods, which can eliminate positioning errors attributed to asynchronous cameras and different fields of view.
Evidence for aversive withdrawal response to own errors.
Hochman, Eldad Yitzhak; Milman, Valery; Tal, Liron
2017-10-01
A recent model suggests that error detection gives rise to defensive motivation, prompting protective behavior. Models of active avoidance behavior predict that it should grow larger with threat imminence and avoidance. We hypothesized that in a task requiring left or right key strikes, error detection would drive an avoidance reflex manifested by rapid withdrawal of an erring finger, growing larger with threat imminence and avoidance. In experiment 1, three groups differing by error-related threat imminence and avoidance performed a flanker task requiring left or right strikes on force-sensitive keys. As predicted, errors were followed by rapid force release, growing faster with threat imminence and the opportunity to evade threat. In experiment 2, we established a link between error key release time (KRT) and the subjective sense of inner threat. In a simultaneous multiple regression analysis of three error-related compensatory mechanisms (error KRT, flanker effect, error correction RT), only error KRT was significantly associated with increased compulsive checking tendencies. We propose that error response withdrawal reflects an error-withdrawal reflex. Copyright © 2017 Elsevier B.V. All rights reserved.
Current neurotoxicity and developmental neurotoxicity testing methods for hazard identification rely on in vivo neurobehavior, neurophysiological, and gross pathology of the nervous system. These measures may not be sensitive enough to detect small changes caused by realistic ex...
Validation Relaxation: A Quality Assurance Strategy for Electronic Data Collection.
Kenny, Avi; Gordon, Nicholas; Griffiths, Thomas; Kraemer, John D; Siedner, Mark J
2017-08-18
The use of mobile devices for data collection in developing world settings is becoming increasingly common and may offer advantages in data collection quality and efficiency relative to paper-based methods. However, mobile data collection systems can hamper many standard quality assurance techniques due to the lack of a hardcopy backup of data. Consequently, mobile health data collection platforms have the potential to generate datasets that appear valid, but are susceptible to unidentified database design flaws, areas of miscomprehension by enumerators, and data recording errors. We describe the design and evaluation of a strategy for estimating data error rates and assessing enumerator performance during electronic data collection, which we term "validation relaxation." Validation relaxation involves the intentional omission of data validation features for select questions to allow for data recording errors to be committed, detected, and monitored. We analyzed data collected during a cluster sample population survey in rural Liberia using an electronic data collection system (Open Data Kit). We first developed a classification scheme for types of detectable errors and validation alterations required to detect them. We then implemented the following validation relaxation techniques to enable data error conduct and detection: intentional redundancy, removal of "required" constraint, and illogical response combinations. This allowed for up to 11 identifiable errors to be made per survey. The error rate was defined as the total number of errors committed divided by the number of potential errors. We summarized crude error rates and estimated changes in error rates over time for both individuals and the entire program using logistic regression. The aggregate error rate was 1.60% (125/7817). Error rates did not differ significantly between enumerators (P=.51), but decreased for the cohort with increasing days of application use, from 2.3% at survey start (95% CI 1.8%-2.8%) to 0.6% at day 45 (95% CI 0.3%-0.9%; OR=0.969; P<.001). The highest error rate (84/618, 13.6%) occurred for an intentional redundancy question for a birthdate field, which was repeated in separate sections of the survey. We found low error rates (0.0% to 3.1%) for all other possible errors. A strategy of removing validation rules on electronic data capture platforms can be used to create a set of detectable data errors, which can subsequently be used to assess group and individual enumerator error rates, their trends over time, and categories of data collection that require further training or additional quality control measures. This strategy may be particularly useful for identifying individual enumerators or systematic data errors that are responsive to enumerator training and is best applied to questions for which errors cannot be prevented through training or software design alone. Validation relaxation should be considered as a component of a holistic data quality assurance strategy. ©Avi Kenny, Nicholas Gordon, Thomas Griffiths, John D Kraemer, Mark J Siedner. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 18.08.2017.
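The bookkeeping behind the reported rates is straightforward: errors committed divided by errors possible, overall and per enumerator, with up to 11 possible errors per survey. A minimal Python sketch with invented field names:

    def error_rates(surveys):
        """surveys: list of dicts with hypothetical keys 'enumerator',
        'errors_committed', 'errors_possible' (<= 11 per survey here).
        Returns the aggregate rate and a per-enumerator breakdown."""
        committed = sum(s["errors_committed"] for s in surveys)
        possible = sum(s["errors_possible"] for s in surveys)
        by_enum = {}
        for s in surveys:
            e = by_enum.setdefault(s["enumerator"], [0, 0])
            e[0] += s["errors_committed"]
            e[1] += s["errors_possible"]
        return committed / possible, {k: c / p for k, (c, p) in by_enum.items()}

    demo = [{"enumerator": "A", "errors_committed": 1, "errors_possible": 11},
            {"enumerator": "B", "errors_committed": 0, "errors_possible": 11}]
    print(error_rates(demo))   # -> (0.0454..., {'A': 0.0909..., 'B': 0.0})

The study's trend over days of application use would then be fit on top of these counts, e.g. with a logistic regression of committed/possible against time.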
Error Detection and Correction in Spelling.
ERIC Educational Resources Information Center
Lydiatt, Steve
1984-01-01
Teachers can discover students' means of dealing with spelling as a problem through investigations of their error detection and correction skills. Approaches for measuring sensitivity and bias are described, as are means of developing appropriate instructional activities. (CL)
A median filter approach for correcting errors in a vector field
NASA Technical Reports Server (NTRS)
Schultz, H.
1985-01-01
Techniques are presented for detecting and correcting errors in a vector field. These methods employ median filters which are frequently used in image processing to enhance edges and remove noise. A detailed example is given for wind field maps produced by a spaceborne scatterometer. The error detection and replacement algorithm was tested with simulation data from the NASA Scatterometer (NSCAT) project.
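A common concrete form of this idea: compare each vector component with the median of its 3x3 neighbourhood and replace it when it deviates by more than a few local median absolute deviations. A schematic NumPy version of the detect-and-replace step, not the paper's exact filter:

    import numpy as np

    def median_filter_vectors(u, v, threshold=2.0):
        """Flag and replace spurious vectors in a 2D field (u, v). A vector
        component is replaced by its 3x3 neighbourhood median when it
        deviates from that median by more than `threshold` times the local
        median absolute deviation. Threshold is illustrative."""
        u_f, v_f = u.copy(), v.copy()
        for comp, out in ((u, u_f), (v, v_f)):
            padded = np.pad(comp, 1, mode="edge")
            for i in range(comp.shape[0]):
                for j in range(comp.shape[1]):
                    hood = padded[i:i + 3, j:j + 3].ravel()
                    med = np.median(hood)
                    scale = np.median(np.abs(hood - med)) + 1e-9
                    if abs(comp[i, j] - med) > threshold * scale:
                        out[i, j] = med   # replace the outlier vector component
        return u_f, v_f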
Ding, Yi; Peng, Kai; Yu, Miao; Lu, Lei; Zhao, Kun
2017-08-01
The performance of the two selected spatial frequency phase unwrapping methods is limited by a phase error bound beyond which errors will occur in the fringe order, leading to a significant error in the recovered absolute phase map. In this paper, we propose a method to detect and correct the wrong fringe orders. Two constraints are introduced during the fringe order determination of the two selected spatial frequency phase unwrapping methods. A strategy to detect and correct the wrong fringe orders is also described. Compared with existing methods, we do not need to estimate a threshold associated with absolute phase values to determine the fringe order error, which makes the method more reliable and avoids the search procedure in detecting and correcting successive fringe order errors. The effectiveness of the proposed method is validated by the experimental results.
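For context, in classical two-frequency temporal unwrapping the fringe order comes from rounding a scaled phase difference, and order errors arise when noise pushes that quantity across the rounding boundary — the failure mode the proposed constraints target. A hypothetical Python sketch of the baseline computation and one simple residual check, not the paper's two constraints:

    import numpy as np

    def unwrap_two_freq(phi_low, phi_high, ratio):
        """Classical two-frequency unwrapping: the low-frequency phase map
        (assumed already absolute) scales up to predict the fringe order k
        at the high frequency (ratio = f_high / f_low)."""
        k = np.round((ratio * phi_low - phi_high) / (2 * np.pi))
        return phi_high + 2 * np.pi * k            # absolute phase map

    def suspicious(phi_low, phi_high, ratio, tol=0.25):
        """Illustrative consistency check: before rounding, the fringe-order
        quantity should sit near an integer; pixels far from one are at
        risk of a +/-1 order error."""
        r = (ratio * phi_low - phi_high) / (2 * np.pi)
        return np.abs(r - np.round(r)) > tol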
Online Deviation Detection for Medical Processes
Christov, Stefan C.; Avrunin, George S.; Clarke, Lori A.
2014-01-01
Human errors are a major concern in many medical processes. To help address this problem, we are investigating an approach for automatically detecting when performers of a medical process deviate from the acceptable ways of performing that process as specified by a detailed process model. Such deviations could represent errors and, thus, detecting and reporting deviations as they occur could help catch errors before harm is done. In this paper, we identify important issues related to the feasibility of the proposed approach and empirically evaluate the approach for two medical procedures, chemotherapy and blood transfusion. For the evaluation, we use the process models to generate sample process executions that we then seed with synthetic errors. The process models describe the coordination of activities of different process performers in normal, as well as in exceptional situations. The evaluation results suggest that the proposed approach could be applied in clinical settings to help catch errors before harm is done. PMID:25954343
Extraction and Analysis of Display Data
NASA Technical Reports Server (NTRS)
Land, Chris; Moye, Kathryn
2008-01-01
The Display Audit Suite is an integrated package of software tools that partly automates the detection of Portable Computer System (PCS) display errors. [PCS is a laptop computer used onboard the International Space Station (ISS).] The need for automation stems from the large quantity of PCS displays (6,000+, with 1,000,000+ lines of command and telemetry data). The Display Audit Suite includes data-extraction tools, automatic error detection tools, and database tools for generating analysis spread sheets. These spread sheets allow engineers to more easily identify many different kinds of possible errors. The Suite supports over 40 independent analyses and complements formal testing by being comprehensive (all displays can be checked) and by revealing errors that are difficult to detect via test. In addition, the Suite can be run early in the development cycle to find and correct errors in advance of testing.
Using failure mode and effects analysis to improve the safety of neonatal parenteral nutrition.
Arenas Villafranca, Jose Javier; Gómez Sánchez, Araceli; Nieto Guindo, Miriam; Faus Felipe, Vicente
2014-07-15
Failure mode and effects analysis (FMEA) was used to identify potential errors and to enable the implementation of measures to improve the safety of neonatal parenteral nutrition (PN). FMEA was used to analyze the preparation and dispensing of neonatal PN from the perspective of the pharmacy service in a general hospital. A process diagram was drafted, illustrating the different phases of the neonatal PN process. Next, the failures that could occur in each of these phases were compiled and cataloged, and a questionnaire was developed in which respondents were asked to rate the following aspects of each error: incidence, detectability, and severity. The highest scoring failures were considered high risk and identified as priority areas for improvements to be made. The evaluation process detected a total of 82 possible failures. Among the phases with the highest number of possible errors were transcription of the medical order, formulation of the PN, and preparation of material for the formulation. After the classification of these 82 possible failures and of their relative importance, a checklist was developed to achieve greater control in the error-detection process. FMEA demonstrated that use of the checklist reduced the level of risk and improved the detectability of errors. FMEA was useful for detecting medication errors in the PN preparation process and enabling corrective measures to be taken. A checklist was developed to reduce errors in the most critical aspects of the process. Copyright © 2014 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
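The prioritization step of an FMEA is conventionally a risk priority number: the product of the occurrence (incidence), severity, and detectability scores, with the highest-scoring failure modes addressed first. A small Python illustration; the failure modes and 1-10 scores below are invented, not taken from the study:

    # (failure mode, occurrence, severity, detectability-difficulty), all 1-10
    failures = [
        ("transcription of medical order", 7, 9, 6),
        ("wrong additive volume in PN bag", 4, 8, 5),
        ("label/patient mismatch",          2, 10, 3),
    ]

    ranked = sorted(failures, key=lambda f: f[1] * f[2] * f[3], reverse=True)
    for name, occ, sev, det in ranked:
        print(f"RPN {occ * sev * det:4d}  {name}")   # highest RPN = first priority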
A floor-map-aided WiFi/pseudo-odometry integration algorithm for an indoor positioning system.
Wang, Jian; Hu, Andong; Liu, Chunyan; Li, Xin
2015-03-24
This paper proposes a scheme for indoor positioning by fusing floor map, WiFi and smartphone sensor data to provide meter-level positioning without additional infrastructure. A topology-constrained K nearest neighbor (KNN) algorithm based on a floor map layout provides the coordinates required to integrate WiFi data with pseudo-odometry (P-O) measurements simulated using a pedestrian dead reckoning (PDR) approach. One method of further improving the positioning accuracy is to use a more effective multi-threshold step detection algorithm, as proposed by the authors. The "go and back" phenomenon caused by incorrect matching of the reference points (RPs) of a WiFi algorithm is eliminated using an adaptive fading-factor-based extended Kalman filter (EKF), taking WiFi positioning coordinates, P-O measurements and fused heading angles as observations. The "cross-wall" problem is solved based on the development of a floor-map-aided particle filter algorithm by weighting the particles, thereby also eliminating the gross-error effects originating from WiFi or P-O measurements. The performance observed in a field experiment performed on the fourth floor of the School of Environmental Science and Spatial Informatics (SESSI) building on the China University of Mining and Technology (CUMT) campus confirms that the proposed scheme can reliably achieve meter-level positioning.
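The fingerprinting core of such a system is a weighted KNN over an RSS database, optionally restricted to reference points reachable under the floor-map topology. A schematic NumPy sketch in which the array layouts and the reachability mask are assumptions:

    import numpy as np

    def knn_position(rss, fingerprints, positions, k=3, reachable=None):
        """Weighted K-nearest-neighbour fingerprinting. `fingerprints` is an
        (n_refpoints x n_accesspoints) RSS database and `positions` the
        matching coordinates. `reachable` is an optional boolean mask
        standing in for the topology constraint: only reference points
        reachable from the previous fix per the floor map are considered."""
        cand = np.arange(len(fingerprints)) if reachable is None \
            else np.flatnonzero(reachable)
        d = np.linalg.norm(fingerprints[cand] - rss, axis=1)
        order = np.argsort(d)[:k]
        nearest, w = cand[order], 1.0 / (d[order] + 1e-6)   # inverse-distance weights
        return (positions[nearest] * w[:, None]).sum(0) / w.sum()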
Multi-segmental movements as a function of experience in karate.
Zago, Matteo; Codari, Marina; Iaia, F Marcello; Sforza, Chiarella
2017-08-01
Karate is a martial art that partly depends on subjective scoring of complex movements. Principal component analysis (PCA)-based methods can identify the fundamental synergies (principal movements) of the motor system, providing a quantitative global analysis of technique. In this study, we aimed (i) to describe the fundamental multi-joint synergies of a karate performance, under the hypothesis that these are skill-dependent, and (ii) to estimate each karateka's experience level, expressed as years of practice. A motion capture system recorded traditional karate techniques of 10 professional and amateur karateka. At any time point, the 3D-coordinates of body markers produced posture vectors that were normalised, concatenated from all karateka and submitted to a first PCA. Five principal movements described both gross movement synergies and individual differences. A second PCA followed by linear regression estimated the years of practice using principal movements (eigenpostures and weighting curves) and centre of mass kinematics (error: 3.71 years; R2 = 0.91, P ≪ 0.001). Principal movements and eigenpostures varied among different karateka and as functions of experience. This approach provides a framework to develop visual tools for the analysis of motor synergies in karate, allowing detection of the multi-joint motor patterns that should be restored after an injury, or specifically trained to increase performance.
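Computationally, the first-stage PCA described here is an SVD of the centred matrix of concatenated posture vectors: the leading right singular vectors are the eigenpostures and the frame-by-frame projections the weighting curves. A compact NumPy sketch of that step, under the assumed (frames x 3·markers) layout:

    import numpy as np

    def principal_movements(postures, n_components=5):
        """PCA of normalised, concatenated posture vectors. Returns the
        eigenpostures (movement synergies), the time-varying weighting
        curves, and the variance explained by each component."""
        X = postures - postures.mean(axis=0)
        U, S, Vt = np.linalg.svd(X, full_matrices=False)
        eigenpostures = Vt[:n_components]
        weights = X @ eigenpostures.T              # weighting curves over time
        explained = (S[:n_components]**2) / (S**2).sum()
        return eigenpostures, weights, explained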
Quantum-state anomaly detection for arbitrary errors using a machine-learning technique
NASA Astrophysics Data System (ADS)
Hara, Satoshi; Ono, Takafumi; Okamoto, Ryo; Washio, Takashi; Takeuchi, Shigeki
2016-10-01
The accurate detection of small deviations in given density matrices is important for quantum information processing, which is a difficult task because of the intrinsic fluctuation in density matrices reconstructed using a limited number of experiments. We previously proposed a method for decoherence error detection using a machine-learning technique [S. Hara, T. Ono, R. Okamoto, T. Washio, and S. Takeuchi, Phys. Rev. A 89, 022104 (2014), 10.1103/PhysRevA.89.022104]. However, the previous method is not valid when the errors are just changes in phase. Here, we propose a method that is valid for arbitrary errors in density matrices. The performance of the proposed method is verified using both numerical simulation data and real experimental data.
System of error detection in the manufacture of garments using artificial vision
NASA Astrophysics Data System (ADS)
Moreno, J. J.; Aguila, A.; Partida, E.; Martinez, C. L.; Morales, O.; Tejeida, R.
2017-12-01
A computer vision system is implemented to detect errors in the cutting stage within the manufacturing process of garments in the textile industry. It provides a solution for errors within the process that cannot easily be detected by any employee, and it significantly increases the speed of quality review. In the textile industry, as in many others, quality control of manufactured products is required, and over the years this has been carried out manually by means of visual inspection by employees. For this reason, the objective of this project is to design a quality control system using computer vision to identify errors in the cutting stage within the garment manufacturing process, increasing the productivity of textile processes by reducing costs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kertzscher, Gustavo, E-mail: guke@dtu.dk; Andersen, Claus E., E-mail: clan@dtu.dk; Tanderup, Kari, E-mail: karitand@rm.dk
Purpose: This study presents an adaptive error detection algorithm (AEDA) for real-time in vivo point dosimetry during high dose rate (HDR) or pulsed dose rate (PDR) brachytherapy (BT) where the error identification, in contrast to existing approaches, does not depend on an a priori reconstruction of the dosimeter position. Instead, the treatment is judged based on dose rate comparisons between measurements and calculations of the most viable dosimeter position provided by the AEDA in a data driven approach. As a result, the AEDA compensates for false error cases related to systematic effects of the dosimeter position reconstruction. Given its nearly exclusive dependence on stable dosimeter positioning, the AEDA allows for a substantially simplified and time efficient real-time in vivo BT dosimetry implementation. Methods: In the event of a measured potential treatment error, the AEDA proposes the most viable dosimeter position out of alternatives to the original reconstruction by means of a data driven matching procedure between dose rate distributions. If measured dose rates do not differ significantly from the most viable alternative, the initial error indication may be attributed to a mispositioned or misreconstructed dosimeter (false error). However, if the error declaration persists, no viable dosimeter position can be found to explain the error, hence the discrepancy is more likely to originate from a misplaced or misreconstructed source applicator or from erroneously connected source guide tubes (true error). Results: The AEDA applied on two in vivo dosimetry implementations for pulsed dose rate BT demonstrated that the AEDA correctly described effects responsible for initial error indications. The AEDA was able to correctly identify the major part of all permutations of simulated guide tube swap errors and simulated shifts of individual needles from the original reconstruction. Unidentified errors corresponded to scenarios where the dosimeter position was sufficiently symmetric with respect to error and no-error source position constellations. The AEDA was able to correctly identify all false errors represented by mispositioned dosimeters, contrary to an error detection algorithm relying on the original reconstruction. Conclusions: The study demonstrates that the AEDA error identification during HDR/PDR BT relies on a stable dosimeter position rather than on an accurate dosimeter reconstruction, and the AEDA's capacity to distinguish between true and false error scenarios. The study further shows that the AEDA can offer guidance in decision making in the event of potential errors detected with real-time in vivo point dosimetry.
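The data-driven matching step can be pictured as scoring precalculated dose-rate profiles for candidate dosimeter positions against the measurement and keeping the best match; if even the best candidate disagrees beyond tolerance, the discrepancy cannot be explained by dosimeter position and is more likely a true error. A hypothetical Python sketch in which the scoring metric and data layout are assumptions, not the published algorithm:

    import numpy as np

    def most_viable_position(measured, candidates):
        """measured: dose-rate time series from the in vivo dosimeter.
        candidates: dict mapping a candidate dosimeter position to the
        dose rates precalculated for that position (hypothetical layout).
        Returns the best-matching position and its mismatch score."""
        scored = {pos: np.abs(np.log(measured / expected)).mean()
                  for pos, expected in candidates.items()}
        best = min(scored, key=scored.get)
        return best, scored[best]   # score above tolerance -> likely true error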
Improved astigmatic focus error detection method
NASA Technical Reports Server (NTRS)
Bernacki, Bruce E.
1992-01-01
All easy-to-implement focus- and track-error detection methods presently used in magneto-optical (MO) disk drives using pre-grooved media suffer from a side effect known as feedthrough. Feedthrough is the unwanted focus error signal (FES) produced when the optical head is seeking a new track, and light refracted from the pre-grooved disk produces an erroneous FES. Some focus and track-error detection methods are more resistant to feedthrough, but tend to be complicated and/or difficult to keep in alignment as a result of environmental insults. The astigmatic focus/push-pull tracking method is an elegant, easy-to-align focus- and track-error detection method. Unfortunately, it is also highly susceptible to feedthrough when astigmatism is present, with the worst effects caused by astigmatism oriented such that the tangential and sagittal foci are at 45 deg to the track direction. This disclosure outlines a method to nearly completely eliminate the worst-case form of feedthrough due to astigmatism oriented 45 deg to the track direction. Feedthrough due to other primary aberrations is not improved, but performance is identical to the unimproved astigmatic method.
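For reference, the astigmatic method derives its signals from a quadrant photodetector: defocus elongates the spot along one diagonal, so the focus error signal is the normalized difference between the diagonal quadrant pairs, while the push-pull track error comes from the left-right imbalance. A minimal Python expression of those standard definitions (quadrant labeling conventions vary between heads):

    def astigmatic_signals(A, B, C, D):
        """A..D are the four quadrant photodetector currents, labeled
        clockwise, with A and C on one diagonal."""
        total = A + B + C + D
        fes = ((A + C) - (B + D)) / total   # focus error signal (diagonal pairs)
        tes = ((A + B) - (C + D)) / total   # push-pull track error (assumes A, B
                                            # on one side of the track)
        return fes, tes

It is exactly the feedthrough described above — groove-diffracted light plus residual astigmatism — that contaminates the fes term during track seeks.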
Zhang, Hongliang; Chen, Gang; Hu, Jianlin; Chen, Shu-Hua; Wiedinmyer, Christine; Kleeman, Michael; Ying, Qi
2014-03-01
The performance of the Weather Research and Forecasting (WRF)/Community Multi-scale Air Quality (CMAQ) system in the eastern United States is analyzed based on results from a seven-year modeling study with a 4-km spatial resolution. For 2-m temperature, the monthly averaged mean bias (MB) and gross error (GE) values are generally within the recommended performance criteria, although temperature is over-predicted with MB values up to 2 K. Water vapor at 2-m is well-predicted, but significant biases (>2 g kg⁻¹) were observed in wintertime. Predictions for wind speed are satisfactory but biased towards over-prediction with 0
van Elk, Michiel; Bousardt, Roel; Bekkering, Harold; van Schie, Hein T.
2012-01-01
Detecting errors in others' actions is of pivotal importance for joint action, competitive behavior and observational learning. Although many studies have focused on the neural mechanisms involved in detecting low-level errors, relatively little is known about error detection in everyday situations. The present study aimed to identify the functional and neural mechanisms whereby we understand the correctness of others' actions involving well-known objects (e.g. pouring coffee in a cup). Participants observed action sequences in which the correctness of the object grasped and the grip applied to a pair of objects were independently manipulated. Observation of object violations (e.g. grasping the empty cup instead of the coffee pot) resulted in a stronger P3-effect than observation of grip errors (e.g. grasping the coffee pot at the upper part instead of the handle), likely reflecting a reorienting response directing attention to the relevant location. Following the P3-effect, a parietal slow wave positivity was observed that persisted for grip errors, likely reflecting the detection of an incorrect hand-object interaction. These findings provide new insight into the functional significance of the neurophysiological markers associated with the observation of incorrect actions and suggest that the P3-effect and the subsequent parietal slow wave positivity may reflect the detection of errors at different levels in the action hierarchy. Thereby this study elucidates the cognitive processes that support the detection of action violations in the selection of objects and grips. PMID:22606261
Method for Real-Time Model Based Structural Anomaly Detection
NASA Technical Reports Server (NTRS)
Urnes, James M., Sr. (Inventor); Smith, Timothy A. (Inventor); Reichenbach, Eric Y. (Inventor)
2015-01-01
A system and methods for real-time model based vehicle structural anomaly detection are disclosed. A real-time measurement corresponding to a location on a vehicle structure during an operation of the vehicle is received, and the real-time measurement is compared to expected operation data for the location to provide a modeling error signal. The statistical significance of the modeling error signal is calculated to provide an error significance, and the persistence of the error significance is determined. A structural anomaly is indicated if the persistence exceeds a persistence threshold value.
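The claimed detection logic — residual significance plus a persistence requirement — can be sketched compactly; the thresholds below are illustrative, not those of the patent:

    class AnomalyMonitor:
        """Sketch of the scheme's logic: compare a real-time measurement
        with model-expected data, assess the statistical significance of
        the residual, and declare an anomaly only when the significance
        persists. sigma is the assumed residual standard deviation."""

        def __init__(self, sigma, z_thresh=3.0, persistence=10):
            self.sigma = sigma
            self.z_thresh = z_thresh
            self.persistence = persistence
            self.run = 0   # consecutive significant residuals seen so far

        def update(self, measured, expected):
            z = abs(measured - expected) / self.sigma      # error significance
            self.run = self.run + 1 if z > self.z_thresh else 0
            return self.run >= self.persistence            # anomaly indicated

The persistence test is what separates a structural anomaly from a transient sensor glitch: a single outlying sample resets nothing downstream, while a sustained run of significant residuals trips the indication.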
Modeling of a bubble-memory organization with self-checking translators to achieve high reliability.
NASA Technical Reports Server (NTRS)
Bouricius, W. G.; Carter, W. C.; Hsieh, E. P.; Wadia, A. B.; Jessep, D. C., Jr.
1973-01-01
Study of the design and modeling of a highly reliable bubble-memory system that has the capabilities of: (1) correcting a single 16-adjacent-bit group error resulting from failures in a single basic storage module (BSM), and (2) detecting with a probability greater than 0.99 any double errors resulting from failures in BSMs. The results of the study justify the design philosophy adopted of employing memory data encoding and a translator to correct single group errors and detect double group errors to enhance the overall system reliability.
Seeing the invisible: direct visualization of therapeutic radiation beams using air scintillation.
Fahimian, Benjamin; Ceballos, Andrew; Türkcan, Silvan; Kapp, Daniel S; Pratx, Guillem
2014-01-01
To assess whether air scintillation produced during standard radiation treatments can be visualized and used to monitor a beam in a nonperturbing manner. Air scintillation is caused by the excitation of nitrogen gas by ionizing radiation. This weak emission occurs predominantly in the 300-430 nm range. An electron-multiplication charge-coupled device camera, outfitted with an f/0.95 lens, was used to capture air scintillation produced by kilovoltage photon beams and megavoltage electron beams used in radiation therapy. The treatment rooms were prepared to block background light and a short-pass filter was utilized to block light above 440 nm. Air scintillation from an orthovoltage unit (50 kVp, 30 mA) was visualized with a relatively short exposure time (10 s) and showed an inverse falloff (r² = 0.89). Electron beams were also imaged. For a fixed exposure time (100 s), air scintillation was proportional to dose rate (r² = 0.9998). As energy increased, the divergence of the electron beam decreased and the penumbra improved. By irradiating a transparent phantom, the authors also showed that Cherenkov luminescence did not interfere with the detection of air scintillation. In a final illustration of the capabilities of this new technique, the authors visualized air scintillation produced during a total skin irradiation treatment. Air scintillation can be measured to monitor a radiation beam in an inexpensive and nonperturbing manner. This physical phenomenon could be useful for dosimetry of therapeutic radiation beams or for online detection of gross errors during fractionated treatments.
Song, Jie; Li, Mei; Zagaja, Gregory P; Taxy, Jerome B; Shalhav, Arieh L; Al-Ahmadie, Hikmat A
2010-11-01
To evaluate the accuracy of frozen section (FS) assessment of pelvic lymph nodes (PLNs) during radical prostatectomy (RP) in a large contemporary cohort; and to analyse the contribution of FS to surgical decision making in this setting. During a 4-year period at a single institution, RPs with PLN dissection (PLND) were reviewed. The number and size of the PLNs, and the size of metastases were measured. FS was performed on 349 bilateral PLNDs. Overall, 28 (8%) cases were positive for metastasis, 11 of which were detected by FS (39%). The 17 false negatives, all of which contained metastases smaller than 5 mm, were due to failure to identify and freeze the positive PLNs (11), failure to section at the level of the metastatic tumour (four), or interpretative error (two). The sensitivity was not affected by the number of sampled nodes. The size of metastasis was the determining factor for the accuracy of FS, with metastases of ≥ 5 mm having a sensitivity of 100%, and metastases of < 5 mm having a sensitivity of 10%. Among the 11 true positives, RP was aborted in eight cases and continued in three. During the same period, 261 PLNDs were performed without FS, and 18 (6.9%) had metastases. FS is highly accurate in detecting large, grossly evident metastases, but performs poorly on micrometastases. It is recommended that a two-step approach be applied to routine FS, starting with a careful gross examination followed by FS of only grossly suspicious PLNs. © 2010 THE AUTHORS. JOURNAL COMPILATION © 2010 BJU INTERNATIONAL.
NASA Technical Reports Server (NTRS)
Campbell, J. W. (Editor)
1981-01-01
The detection of anthropogenic disturbances in the Earth's ozone layer was studied. Two topics were addressed: (1) the level at which a trend in total ozone is detectable by existing data sources; and (2) the empirical evidence for predicting depletion in total ozone. Error sources are identified. The predictability of climatological series, whether empirical models can be trusted, and how errors in the Dobson total ozone data impact trend detectability are discussed.
Improved Conflict Detection for Reducing Operational Errors in Air Traffic Control
NASA Technical Reports Server (NTRS)
Paielli, Russell A.; Erzberger, Heinz
2003-01-01
An operational error is an incident in which an air traffic controller allows the separation between two aircraft to fall below the minimum separation standard. The rates of such errors in the US have increased significantly over the past few years. This paper proposes new detection methods that can help correct this trend by improving on the performance of Conflict Alert, the existing software in the Host Computer System that is intended to detect and warn controllers of imminent conflicts. In addition to the usual trajectory based on the flight plan, a "dead-reckoning" trajectory (current velocity projection) is also generated for each aircraft and checked for conflicts. Filters for reducing common types of false alerts were implemented. The new detection methods were tested in three different ways. First, a simple flightpath command language was developed to generate precisely controlled encounters for the purpose of testing the detection software. Second, written reports and tracking data were obtained for actual operational errors that occurred in the field, and these were "replayed" to test the new detection algorithms. Finally, the detection methods were used to shadow live traffic, and performance was analysed, particularly with regard to the false-alert rate. The results indicate that the new detection methods can provide timely warnings of imminent conflicts more consistently than Conflict Alert.
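As a rough illustration of the dead-reckoning idea, the sketch below projects two aircraft along their current velocities and checks separation at fixed time steps. The separation minima, the look-ahead window, and the flat-earth coordinate treatment are simplifying assumptions for illustration; the operational system works on radar track data and is far more elaborate.

```python
import numpy as np

# Illustrative en-route separation minima: 5 NM horizontal, 1000 ft vertical.
H_SEP_NM, V_SEP_FT = 5.0, 1000.0

def dead_reckoning_conflict(p1, v1, p2, v2, lookahead_s=300, step_s=10):
    """Project both aircraft along their current velocities and return the
    first time (s) at which horizontal AND vertical separation are both lost.
    p = (x_nm, y_nm, alt_ft); v = (vx_nm_per_s, vy_nm_per_s, vz_ft_per_s)."""
    p1, v1, p2, v2 = map(np.asarray, (p1, v1, p2, v2))
    for t in np.arange(0, lookahead_s + step_s, step_s):
        d = (p1 + v1 * t) - (p2 + v2 * t)
        if np.hypot(d[0], d[1]) < H_SEP_NM and abs(d[2]) < V_SEP_FT:
            return t
    return None  # no predicted loss of separation within the look-ahead window
```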
Error-Detecting Identification Codes for Algebra Students.
ERIC Educational Resources Information Center
Sutherland, David C.
1990-01-01
Discusses common error-detecting identification codes using linear algebra terminology to provide an interesting application of algebra. Presents examples from the International Standard Book Number, the Universal Product Code, bank identification numbers, and the ZIP code bar code. (YP)
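Two of the codes mentioned can be shown in a few lines. The Python sketch below computes the UPC-A check digit and validates an ISBN-10; both are standard published schemes, and the sample UPC is a commonly cited textbook example.

```python
def upc_check_digit(digits11):
    """UPC-A: 3 x (sum of odd positions, 1-indexed) + (sum of even positions);
    the check digit brings the grand total to a multiple of 10."""
    odd = sum(digits11[0::2])
    even = sum(digits11[1::2])
    return (-(3 * odd + even)) % 10

def isbn10_is_valid(digits10):
    """ISBN-10: the weighted sum 10*d1 + 9*d2 + ... + 1*d10 must be 0 mod 11
    (the final 'digit' may be 10, printed as 'X')."""
    return sum(w * d for w, d in zip(range(10, 0, -1), digits10)) % 11 == 0

assert upc_check_digit([0, 3, 6, 0, 0, 0, 2, 9, 1, 4, 5]) == 2  # classic UPC example
```

Both schemes detect any single-digit error, and the ISBN weighting also catches any transposition of two digits, which is the property linear algebra courses usually highlight.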
Bayesian inversions of a dynamic vegetation model at four European grassland sites
NASA Astrophysics Data System (ADS)
Minet, J.; Laloy, E.; Tychon, B.; Francois, L.
2015-05-01
Eddy covariance data from four European grassland sites are used to probabilistically invert the CARAIB (CARbon Assimilation In the Biosphere) dynamic vegetation model (DVM) with 10 unknown parameters, using the DREAM(ZS) (DiffeRential Evolution Adaptive Metropolis) Markov chain Monte Carlo (MCMC) sampler. We focus on comparing model inversions, considering both homoscedastic and heteroscedastic eddy covariance residual errors, with variances either fixed a priori or jointly inferred together with the model parameters. Agreements between measured and simulated data during calibration are comparable with previous studies, with root mean square errors (RMSEs) of simulated daily gross primary productivity (GPP), ecosystem respiration (RECO) and evapotranspiration (ET) ranging from 1.73 to 2.19 g C m-2 day-1, 1.04 to 1.56 g C m-2 day-1 and 0.50 to 1.28 mm day-1, respectively. For the calibration period, using a homoscedastic eddy covariance residual error model resulted in a better agreement between measured and modelled data than using a heteroscedastic residual error model. However, a model validation experiment showed that CARAIB models calibrated considering heteroscedastic residual errors perform better. Posterior parameter distributions derived from using a heteroscedastic model of the residuals thus appear to be more robust. This is the case even though the classical linear heteroscedastic error model assumed herein did not fully remove heteroscedasticity of the GPP residuals. Despite the fact that the calibrated model is generally capable of fitting the data within measurement errors, systematic biases in the model simulations are observed. These are likely due to model inadequacies such as shortcomings in the photosynthesis modelling. Besides the residual error treatment, differences between model parameter posterior distributions among the four grassland sites are also investigated. It is shown that the marginal distributions of the specific leaf area and characteristic mortality time parameters can be explained by site-specific ecophysiological characteristics.
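A minimal sketch of the two residual-error models being compared may help. Assuming, as the classical linear heteroscedastic model does, that the residual standard deviation grows linearly with the simulated value, the Gaussian log-likelihood evaluated inside an MCMC sampler such as DREAM(ZS) could look like the following; the parameter names `a` and `b` are illustrative, not the paper's notation.

```python
import numpy as np

def gaussian_loglik(y_obs, y_sim, a, b, heteroscedastic=True):
    """Log-likelihood of residuals under a Gaussian error model.
    Heteroscedastic case: sigma_t = a + b * |y_sim_t| (classical linear form);
    homoscedastic case: constant sigma = a."""
    if heteroscedastic:
        sigma = a + b * np.abs(y_sim)
    else:
        sigma = np.full_like(y_sim, a, dtype=float)
    res = y_obs - y_sim
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2) - 0.5 * (res / sigma)**2)
```

Jointly inferring `a` and `b` with the model parameters corresponds to the "variances jointly inferred" variant described above.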
NASA Astrophysics Data System (ADS)
Sayed, Ahmed R. M. Al; Isa, Zaidi
2015-09-01
Many scholars have shown interest in the relationship between energy consumption (EC), gross domestic product (GDP) and emissions. The main objective of this study is to investigate the relationship between GDP, EC and CO2 within a multivariate model, using the panel data method for the Asian countries Korea, Malaysia, Japan and China with annual data covering the period 1960 to 2010. The main finding is that more than 86% and 78% of the variation in CO2 can be explained by EC and GDP in the cross-section model and the period model, respectively. Consequently, CO2 emissions should be considered by policy makers as an important factor in energy consumption and gross domestic product.
Error control for reliable digital data transmission and storage systems
NASA Technical Reports Server (NTRS)
Costello, D. J., Jr.; Deng, R. H.
1985-01-01
A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple bit (or byte) per chip basis. For example, some 256K-bit DRAMs are organized in 32Kx8 bit-bytes. Byte oriented codes such as Reed Solomon (RS) codes can provide efficient low overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. In this paper we present some special decoding techniques for extended single- and double-error-correcting RS codes which are capable of high speed operation. These techniques are designed to find the error locations and the error values directly from the syndrome without having to use the iterative algorithm to find the error locator polynomial. Two codes are considered: (1) a d_min = 4 single-byte-error-correcting (SBEC), double-byte-error-detecting (DBED) RS code; and (2) a d_min = 6 double-byte-error-correcting (DBEC), triple-byte-error-detecting (TBED) RS code.
AfterQC: automatic filtering, trimming, error removing and quality control for fastq data.
Chen, Shifu; Huang, Tanxiao; Zhou, Yanqing; Han, Yue; Xu, Mingyan; Gu, Jia
2017-03-14
Clinical applications and other applications requiring highly accurate sequencing data must contend with unavoidable sequencing errors. Several tools have been proposed to profile sequencing quality, but few of them can quantify or correct sequencing errors. This unmet requirement motivated us to develop AfterQC, a tool with functions to profile sequencing errors and correct most of them, plus highly automated quality control and data filtering features. Unlike most tools, AfterQC analyses the overlapping of paired sequences for pair-end sequencing data. Based on overlapping analysis, AfterQC can detect and cut adapters, and furthermore it provides a novel function to correct wrong bases in the overlapping regions. Another new feature is to detect and visualise sequencing bubbles, which are commonly found on the flowcell lanes and may cause sequencing errors. Besides normal per-cycle quality and base content plotting, AfterQC also provides features like polyX (a long sub-sequence of the same base X) filtering, automatic trimming and k-mer based strand bias profiling. For each single or pair of FastQ files, AfterQC filters out bad reads, detects and eliminates the sequencer's bubble effects, trims reads at front and tail, detects the sequencing errors and corrects part of them, and finally outputs clean data and generates HTML reports with interactive figures. AfterQC can run in batch mode with multiprocess support; it can run with a single FastQ file, a single pair of FastQ files (for pair-end sequencing), or a folder of FastQ files to be processed automatically. Based on overlapping analysis, AfterQC can estimate the sequencing error rate and profile the error transform distribution. The results of our error profiling tests show that the error distribution is highly platform dependent. Much more than just another quality control (QC) tool, AfterQC is able to perform quality control, data filtering, error profiling and base correction automatically. Experimental results show that AfterQC can help to eliminate sequencing errors in pair-end sequencing data to provide much cleaner outputs, and consequently helps to reduce false-positive variants, especially low-frequency somatic mutations. While providing rich configurable options, AfterQC can detect and set all the options automatically and requires no arguments in most cases.
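The overlap-based correction can be sketched as follows. This is an illustrative reconstruction of the idea, not AfterQC's actual code: given a pair of aligned, overlapping reads, a mismatch inside the overlap is resolved in favor of the base with the higher quality score. The function name, the pre-computed overlap offset, and the input conventions are assumptions made for the sketch.

```python
def correct_overlap(read1, qual1, read2rc, qual2, ovl_start):
    """Given read1 and the reverse-complement of read2 aligned so that
    read2rc[i] should equal read1[ovl_start + i] inside the overlap,
    replace each mismatched base with the higher-quality call.
    qual1/qual2 are per-base Phred scores as integer lists."""
    r1, r2 = list(read1), list(read2rc)
    for i in range(len(r2)):
        j = ovl_start + i
        if j >= len(r1):
            break
        if r1[j] != r2[i]:                  # a sequencing error in one of the reads
            if qual1[j] >= qual2[i]:
                r2[i] = r1[j]               # trust the higher-quality base
            else:
                r1[j] = r2[i]
    return "".join(r1), "".join(r2)
```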
Terkola, R; Czejka, M; Bérubé, J
2017-08-01
Medication errors are a significant cause of morbidity and mortality, especially with antineoplastic drugs, owing to their narrow therapeutic index. Gravimetric workflow software systems have the potential to reduce volumetric errors during intravenous antineoplastic drug preparation which may occur when verification relies on visual inspection. Our aim was to detect medication errors with possible critical therapeutic impact, as determined by the rate of prevented medication errors in chemotherapy compounding after implementation of gravimetric measurement. A large-scale, retrospective analysis was carried out of data related to medication errors identified during preparation of antineoplastic drugs in 10 pharmacy services ("centres") in five European countries following the introduction of an intravenous workflow software gravimetric system. Errors were defined as dose volumes outside tolerance levels, identified during the weighing stages of preparation of chemotherapy solutions, which would not otherwise have been detected by conventional visual inspection. The gravimetric system detected that 7.89% of the 759 060 doses of antineoplastic drugs prepared at participating centres between July 2011 and October 2015 had error levels outside the accepted tolerance range set by individual centres, and prevented these doses from reaching patients. The proportion of antineoplastic preparations with deviations >10% ranged from 0.49% to 5.04% across sites, with a mean of 2.25%. The proportion of preparations with deviations >20% ranged from 0.21% to 1.27% across sites, with a mean of 0.71%. There was considerable variation in error levels for different antineoplastic agents. Introduction of a gravimetric preparation system for antineoplastic agents detected and prevented dosing errors which would not have been recognized with traditional methods and could have resulted in toxicity or suboptimal therapeutic outcomes for patients undergoing anticancer treatment. © 2017 The Authors. Journal of Clinical Pharmacy and Therapeutics Published by John Wiley & Sons Ltd.
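The core gravimetric check is a simple percent-deviation test against centre-specific tolerances. A minimal sketch follows, with illustrative warning and stop thresholds chosen only to mirror the >10% and >20% deviation bands reported above; real systems use each centre's own configured limits.

```python
def check_dose(expected_g, measured_g, warn_pct=10.0, stop_pct=20.0):
    """Flag a compounding step whose weighed mass deviates from the
    expected mass by more than the centre's tolerance."""
    deviation_pct = 100.0 * (measured_g - expected_g) / expected_g
    if abs(deviation_pct) > stop_pct:
        return deviation_pct, "stop: correct the volume before proceeding"
    if abs(deviation_pct) > warn_pct:
        return deviation_pct, "warning: verify the measured volume"
    return deviation_pct, "within tolerance"

dev, verdict = check_dose(expected_g=12.50, measured_g=11.10)  # -11.2% -> warning
```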
Truths, errors, and lies around "reflex sympathetic dystrophy" and "complex regional pain syndrome".
Ochoa, J L
1999-10-01
The shifting paradigm of reflex sympathetic dystrophy-sympathetically maintained pains-complex regional pain syndrome is characterized by vestigial truths and understandable errors, but also unjustifiable lies. It is true that patients with organically based neuropathic pain harbor unquestionable and physiologically demonstrable evidence of nerve fiber dysfunction leading to a predictable clinical profile with stereotyped temporal evolution. In turn, patients with psychogenic pseudoneuropathy, sustained by conversion-somatization-malingering, not only lack physiological evidence of structural nerve fiber disease but display a characteristically atypical, half-subjective, psychophysical sensory-motor profile. The objective vasomotor signs may have any variety of neurogenic, vasogenic, and psychogenic origins. Neurological differential diagnosis of "neuropathic pain" versus pseudoneuropathy is straightforward provided that stringent requirements of neurological semeiology are not bypassed. Embarrassing conceptual errors explain the assumption that there exists a clinically relevant "sympathetically maintained pain" status. Errors include historical misinterpretation of vasomotor signs in symptomatic body parts, and misconstruing symptomatic relief after "diagnostic" sympathetic blocks, due to lack of consideration of the placebo effect which explains the outcome. It is a lie that sympatholysis may specifically cure patients with unqualified "reflex sympathetic dystrophy." This was already stated by the father of sympathectomy, René Leriche, more than half a century ago. As extrapolated from observations in animals with gross experimental nerve injury, adducing hypothetical, untestable, secondary central neuron sensitization to explain psychophysical sensory-motor complaints displayed by patients with blatantly absent nerve fiber injury is not an error, but a lie. While conceptual errors are not only forgivable, but natural to inexact medical science, lies, particularly when entrepreneurially inspired, are condemnable and call for peer intervention.
Teerawattananon, Kanlaya; Myint, Chaw-Yin; Wongkittirux, Kwanjai; Teerawattananon, Yot; Chinkulkitnivat, Bunyong; Orprayoon, Surapong; Kusakul, Suwat; Tengtrisorn, Supaporn; Jenchitr, Watanee
2014-01-01
As part of the development of a system for the screening of refractive error in Thai children, this study describes the accuracy and feasibility of establishing a program conducted by teachers. To assess the accuracy and feasibility of screening by teachers, a cross-sectional descriptive and analytical study was conducted in 17 schools in four provinces representing the four geographic regions of Thailand. A two-stage cluster sampling was employed to compare the detection rate of refractive error among eligible students between trained teachers and health professionals. Serial focus group discussions were held for teachers and parents in order to understand their attitude towards refractive error screening at schools and the potential success factors and barriers. The detection rate of refractive error screening by teachers among pre-primary school children is relatively low (21%) for mild visual impairment but higher for moderate visual impairment (44%). The detection rate for primary school children is high for both levels of visual impairment (52% for mild and 74% for moderate). The focus group discussions reveal that both teachers and parents would benefit from further education regarding refractive errors and that the vast majority of teachers are willing to conduct a school-based screening program. Refractive error screening by health professionals in pre-primary and primary school children is not currently implemented in Thailand due to resource limitations. However, evidence suggests that a refractive error screening program conducted in schools by teachers is reasonable and feasible, because the detection and treatment of refractive error in very young generations is important and the screening program can be implemented and conducted with relatively low costs.
A concatenated coding scheme for error control
NASA Technical Reports Server (NTRS)
Lin, S.
1985-01-01
A concatenated coding scheme for error control in data communications is analyzed. The inner code is used for both error correction and detection; however, the outer code is used only for error detection. A retransmission is requested if the outer code detects the presence of errors after the inner code decoding. The probability of undetected error of the above error control scheme is derived and upper bounded. Two specific examples are analyzed. In the first example, the inner code is a distance-4 shortened Hamming code with generator polynomial (X+1)(X^6+X+1) = X^7+X^6+X^2+1 and the outer code is a distance-4 shortened Hamming code with generator polynomial (X+1)(X^15+X^14+X^13+X^12+X^4+X^3+X^2+X+1) = X^16+X^12+X^5+1, which is the X.25 standard for packet-switched data networks. This example is proposed for error control on NASA telecommand links. In the second example, the inner code is the same as that in the first example but the outer code is a shortened Reed-Solomon code with symbols from GF(2^8) and generator polynomial (X+1)(X+alpha), where alpha is a primitive element in GF(2^8).
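The outer-code generator in the first example is the familiar X.25 CRC-16 polynomial. As an illustration, the sketch below performs plain polynomial long division by g(X) = X^16 + X^12 + X^5 + 1 to show how appending the remainder makes errors detectable; note this is a bare remainder computation, and the real X.25 frame-check sequence additionally specifies register preset and inversion conventions that are omitted here.

```python
def crc16_remainder(data: bytes, poly: int = 0x1021) -> int:
    """Bitwise long division of the message (times X^16) by
    g(X) = X^16 + X^12 + X^5 + 1; 0x1021 holds g's coefficients below X^16."""
    reg = 0
    for byte in data:
        reg ^= byte << 8
        for _ in range(8):
            if reg & 0x8000:
                reg = ((reg << 1) ^ poly) & 0xFFFF
            else:
                reg = (reg << 1) & 0xFFFF
    return reg

msg = b"error control"
crc = crc16_remainder(msg)
# Appending the remainder makes the whole codeword divide evenly by g(X),
# so any burst of 16 or fewer corrupted bits leaves a nonzero remainder:
assert crc16_remainder(msg + crc.to_bytes(2, "big")) == 0
```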
Ruiz, María Herrojo; Strübing, Felix; Jabusch, Hans-Christian; Altenmüller, Eckart
2011-04-15
Skilled performance requires the ability to monitor ongoing behavior, detect errors in advance and modify the performance accordingly. The acquisition of fast predictive mechanisms might be possible due to the extensive training characterizing expert performance. Recent EEG studies on piano performance reported a negative event-related potential (ERP) triggered in the ACC 70 ms before performance errors (pitch errors due to incorrect keypresses). This ERP component, termed pre-error related negativity (pre-ERN), was assumed to reflect processes of error detection in advance. However, some questions remained to be addressed: (i) Does the electrophysiological marker prior to errors reflect an error signal itself, or is it related instead to the implementation of control mechanisms? (ii) Does the posterior frontomedial cortex (pFMC, including the ACC) interact with other brain regions to implement control adjustments following motor prediction of an upcoming error? (iii) Can we gain insight into the electrophysiological correlates of error prediction and control by assessing the local neuronal synchronization and phase interaction among neuronal populations? (iv) Finally, are error detection and control mechanisms defective in pianists with musician's dystonia (MD), a focal task-specific dystonia resulting from dysfunction of the basal ganglia-thalamic-frontal circuits? Consequently, we investigated the EEG oscillatory and phase synchronization correlates of error detection and control during piano performances in healthy pianists and in a group of pianists with MD. In healthy pianists, the main outcomes were increased pre-error theta and beta band oscillations over the pFMC and 13-15 Hz phase synchronization between the pFMC and the right lateral prefrontal cortex, which predicted corrective mechanisms. In MD patients, the pattern of phase synchronization appeared in a different frequency band (6-8 Hz) and correlated with the severity of the disorder. The present findings shed new light on the neural mechanisms which might implement motor prediction by means of forward control processes, as they function in healthy pianists and in their altered form in patients with MD. Copyright © 2010 Elsevier Inc. All rights reserved.
A two-factor error model for quantitative steganalysis
NASA Astrophysics Data System (ADS)
Böhme, Rainer; Ker, Andrew D.
2006-02-01
Quantitative steganalysis refers to the exercise not only of detecting the presence of hidden stego messages in carrier objects, but also of estimating the secret message length. This problem is well studied, with many detectors proposed but only a sparse analysis of errors in the estimators. A deep understanding of the error model, however, is a fundamental requirement for the assessment and comparison of different detection methods. This paper presents a rationale for a two-factor model for sources of error in quantitative steganalysis, and shows evidence from a dedicated large-scale nested experimental set-up with a total of more than 200 million attacks. Apart from general findings about the distribution functions found in both classes of errors, their respective weight is determined, and implications for statistical hypothesis tests in benchmarking scenarios or regression analyses are demonstrated. The results are based on a rigorous comparison of five different detection methods under many different external conditions, such as size of the carrier, previous JPEG compression, and colour channel selection. We include analyses demonstrating the effects of local variance and cover saturation on the different sources of error, as well as presenting the case for a relative bias model for between-image error.
Threshold detection in an on-off binary communications channel with atmospheric scintillation
NASA Technical Reports Server (NTRS)
Webb, W. E.; Marino, J. T., Jr.
1974-01-01
The optimum detection threshold in an on-off binary optical communications system operating in the presence of atmospheric turbulence was investigated, assuming a Poisson detection process and log-normal scintillation. The dependence of the probability of bit error on log-amplitude variance and received signal strength was analyzed and semi-empirical relationships to predict the optimum detection threshold derived. On the basis of this analysis a piecewise linear model for an adaptive threshold detection system is presented. Bit error probabilities for non-optimum threshold detection systems were also investigated.
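With Poisson photocounts, the error probability at a given count threshold is one tail probability per symbol. The sketch below, which ignores the log-normal scintillation fading for simplicity, scans integer thresholds for the minimum-BER operating point; the mean count values are illustrative.

```python
from scipy.stats import poisson

def bit_error_prob(threshold, n_on, n_off):
    """On-off keying with Poisson photocounts: decide '1' when the count
    is >= threshold. Bits equally likely; n_on and n_off are the mean
    counts for signal-plus-background and background-only slots."""
    p_miss = poisson.cdf(threshold - 1, n_on)    # '1' sent, count below threshold
    p_false = poisson.sf(threshold - 1, n_off)   # '0' sent, count at/above threshold
    return 0.5 * (p_miss + p_false)

# Scan integer thresholds for the minimum-BER operating point:
n_on, n_off = 50.0, 5.0
best_threshold = min(range(1, 60), key=lambda t: bit_error_prob(t, n_on, n_off))
```

Averaging this error probability over a log-normal distribution of the signal mean would reproduce the scintillation dependence the paper analyzes.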
Threshold detection in an on-off binary communications channel with atmospheric scintillation
NASA Technical Reports Server (NTRS)
Webb, W. E.
1975-01-01
The optimum detection threshold in an on-off binary optical communications system operating in the presence of atmospheric turbulence was investigated, assuming a Poisson detection process and log-normal scintillation. The dependence of the probability of bit error on log-amplitude variance and received signal strength was analyzed and semi-empirical relationships to predict the optimum detection threshold derived. On the basis of this analysis a piecewise linear model for an adaptive threshold detection system is presented. The bit error probabilities for nonoptimum threshold detection systems were also investigated.
Detecting Silent Data Corruption for Extreme-Scale Applications through Data Mining
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bautista-Gomez, Leonardo; Cappello, Franck
Supercomputers allow scientists to study natural phenomena by means of computer simulations. Next-generation machines are expected to have more components and, at the same time, consume several times less energy per operation. These trends are pushing supercomputer construction to the limits of miniaturization and energy-saving strategies. Consequently, the number of soft errors is expected to increase dramatically in the coming years. While mechanisms are in place to correct or at least detect some soft errors, a significant percentage of those errors pass unnoticed by the hardware. Such silent errors are extremely damaging because they can make applications silently produce wrong results. In this work we propose a technique that leverages certain properties of high-performance computing applications in order to detect silent errors at the application level. Our technique detects corruption solely based on the behavior of the application datasets and is completely application-agnostic. We propose multiple corruption detectors, and we couple them to work together in a fashion transparent to the user. We demonstrate that this strategy can detect the majority of the corruptions, while incurring negligible overhead. We show that with the help of these detectors, applications can have up to 80% of coverage against data corruption.
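One simple instance of a behavior-based detector is a point-wise smoothness check between consecutive time steps. The sketch below is only a schematic stand-in for the paper's detectors, which learn their prediction intervals from the datasets' own behavior; the bounds here are illustrative constants.

```python
import numpy as np

def detect_silent_corruption(prev, curr, rel_bound=0.01, abs_bound=1e-9):
    """Point-wise plausibility check for time-stepped simulation data:
    flag grid points whose new value jumps further from the previous
    time step than a smoothness bound allows."""
    allowed = rel_bound * np.abs(prev) + abs_bound
    suspects = np.flatnonzero(np.abs(curr - prev) > allowed)
    return suspects  # indices of possibly corrupted values

field_t0 = np.linspace(0.0, 1.0, 1000)
field_t1 = field_t0 + 1e-4
field_t1[337] += 0.5          # inject a single silent bit-flip-like corruption
print(detect_silent_corruption(field_t0, field_t1))  # -> [337]
```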
Increased instrument intelligence--can it reduce laboratory error?
Jekelis, Albert W
2005-01-01
Recent literature has focused on the reduction of laboratory errors and the potential impact on patient management. This study assessed the intelligent, automated preanalytical process-control abilities in newer generation analyzers as compared with older analyzers and the impact on error reduction. Three generations of immuno-chemistry analyzers were challenged with pooled human serum samples for a 3-week period. One of the three analyzers had an intelligent process of fluidics checks, including bubble detection. Bubbles can cause erroneous results due to incomplete sample aspiration. This variable was chosen because it is the most easily controlled sample defect that can be introduced. Traditionally, lab technicians have had to visually inspect each sample for the presence of bubbles. This is time consuming and introduces the possibility of human error. Instruments with bubble detection may be able to eliminate the human factor and reduce errors associated with the presence of bubbles. Specific samples were vortexed daily to introduce a visible quantity of bubbles, then immediately placed in the daily run. Errors were defined as a reported result greater than three standard deviations below the mean and associated with incomplete sample aspiration of the analyte on the individual analyzer. Three standard deviations represented the target limits of proficiency testing. The results of the assays were examined for accuracy and precision. Efficiency, measured as process throughput, was also measured to associate a cost factor and the potential impact of error detection on the overall process. The analyzers' performance stratified according to their level of internal process control. The older analyzers without bubble detection reported 23 erred results. The newest analyzer with bubble detection reported one specimen incorrectly. The precision and accuracy of the nonvortexed specimens were excellent and acceptable for all three analyzers. No errors were found in the nonvortexed specimens. There were no significant differences in overall process time for any of the analyzers when tests were arranged in an optimal configuration. The analyzer with advanced fluidic intelligence demonstrated the greatest ability to appropriately deal with an incomplete aspiration by not processing and reporting a result for the sample. This study suggests that preanalytical process-control capabilities could reduce errors. By association, it implies that similar intelligent process controls could favorably impact the error rate and, in the case of this instrument, do so without negatively impacting process throughput. Other improvements may be realized as a result of having an intelligent error-detection process, including further reduction in misreported results, fewer repeats, less operator intervention, and less reagent waste.
Ecological footprint model using the support vector machine technique.
Ma, Haibo; Chang, Wenjuan; Cui, Guangbai
2012-01-01
The per capita ecological footprint (EF) is one of the most widely recognized measures of environmental sustainability. It aims to quantify the Earth's biological resources required to support human activity. In this paper, we summarize relevant previous literature and present five factors that influence per capita EF: national gross domestic product (GDP), urbanization (independent of economic development), distribution of income (measured by the Gini coefficient), export dependence (measured by the percentage of exports to total GDP), and service intensity (measured by the percentage of services to total GDP). A new ecological footprint model based on a support vector machine (SVM), a machine-learning method built on the structural risk minimization principle from statistical learning theory, was constructed to calculate the per capita EF of 24 nations using data from 123 nations. The calculation accuracy was measured by average absolute error and average relative error, which were 0.004883 and 0.351078%, respectively. Our results demonstrate that the EF model based on SVM has good calculation performance.
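A minimal sketch of such an SVM regression, using scikit-learn's SVR in place of whatever implementation the authors used, is shown below. The random data, train/test split, and hyperparameters are placeholders; only the five-feature structure follows the paper.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Stand-in feature matrix: [GDP, urbanization, Gini, export share, service share]
X_train, y_train = rng.random((99, 5)), rng.random(99)   # 99 training nations
X_test, y_test = rng.random((24, 5)), rng.random(24)     # 24 evaluated nations

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(X_train, y_train)
pred = model.predict(X_test)

avg_abs_err = np.mean(np.abs(pred - y_test))                       # average absolute error
avg_rel_err = 100.0 * np.mean(np.abs((pred - y_test) / y_test))    # average relative error, %
```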
Automatic knee cartilage delineation using inheritable segmentation
NASA Astrophysics Data System (ADS)
Dries, Sebastian P. M.; Pekar, Vladimir; Bystrov, Daniel; Heese, Harald S.; Blaffert, Thomas; Bos, Clemens; van Muiswinkel, Arianne M. C.
2008-03-01
We present a fully automatic method for segmentation of knee joint cartilage from fat-suppressed MRI. The method first applies 3-D model-based segmentation technology, which allows the femur, patella, and tibia to be reliably segmented by iterative adaptation of the model according to image gradients. Thin plate spline interpolation is used in the next step to position deformable cartilage models for each of the three bones with reference to the segmented bone models. After initialization, the cartilage models are fine-adjusted by automatic iterative adaptation to image data based on gray value gradients. The method has been validated on a collection of 8 (3 left, 5 right) fat-suppressed datasets and demonstrated a sensitivity of 83 ± 6% compared with manual segmentation on a per-voxel basis as the primary endpoint. Gross cartilage volume measurement yielded an average error of 9 ± 7% as the secondary endpoint. Because cartilage is a thin structure, even small deviations in distance result in large errors on a per-voxel basis, rendering the primary endpoint a hard criterion.
URANS simulations of the tip-leakage cavitating flow with verification and validation procedures
NASA Astrophysics Data System (ADS)
Cheng, Huai-yu; Long, Xin-ping; Liang, Yun-zhi; Long, Yun; Ji, Bin
2018-04-01
In the present paper, the Vortex Identified Zwart-Gerber-Belamri (VIZGB) cavitation model coupled with the SST-CC turbulence model is used to investigate the unsteady tip-leakage cavitating flow induced by a NACA0009 hydrofoil. A qualitative comparison between the numerical and experimental results is made. In order to quantitatively evaluate the reliability of the numerical data, the verification and validation (V&V) procedures are used in the present paper. Errors of numerical results are estimated with seven error estimators based on the Richardson extrapolation method. It is shown that though a strict validation cannot be achieved, a reasonable prediction of the gross characteristics of the tip-leakage cavitating flow can be obtained. Based on the numerical results, the influence of the cavitation on the tip-leakage vortex (TLV) is discussed, which indicates that the cavitation accelerates the fusion of the TLV and the tip-separation vortex (TSV). Moreover, the trajectory of the TLV, when the cavitation occurs, is close to the side wall.
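Richardson extrapolation underlies several of the error estimators used in such V&V procedures. A minimal sketch for three solutions on systematically refined grids is given below; the sample values and the constant refinement ratio are illustrative.

```python
import numpy as np

def richardson_error(f1, f2, f3, r=2.0):
    """Observed order of accuracy and discretization-error estimate from
    solutions on three systematically refined grids (f1 finest, f3 coarsest,
    constant refinement ratio r). One of the classic V&V estimators."""
    p = np.log(abs(f3 - f2) / abs(f2 - f1)) / np.log(r)   # observed order
    f_exact = f1 + (f1 - f2) / (r**p - 1.0)               # Richardson-extrapolated value
    err_fine = f_exact - f1                               # error estimate on the finest grid
    return p, err_fine, f_exact

p, err, f_rex = richardson_error(0.9713, 0.9688, 0.9610)  # illustrative grid values
```

Comparing the estimated error band against the difference between simulation and experiment is what decides whether "validation" in the formal sense is achieved.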
The growth pattern and fuel life cycle analysis of the electricity consumption of Hong Kong.
To, W M; Lai, T M; Lo, W C; Lam, K H; Chung, W L
2012-06-01
As the consumption of electricity increases, air pollutants from power generation increase. In metropolises such as Hong Kong and other Asian cities, the surge in electricity consumption has been phenomenal over the past decades. This paper presents a historical review of electricity consumption, population, and change in economic structure in Hong Kong. It is hypothesized that the growth of electricity consumption and the change in gross domestic product can be modeled by 4-parameter logistic functions. The accuracy of the functions was assessed by Pearson's correlation coefficient, mean absolute percent error, and root mean squared percent error. The paper also applies the life cycle approach to determine carbon dioxide, methane, nitrous oxide, sulfur dioxide, and nitrogen oxide emissions for the electricity consumption of Hong Kong. Monte Carlo simulations were applied to determine the confidence intervals of pollutant emissions. The implications of importing more nuclear power are discussed. Copyright © 2012 Elsevier Ltd. All rights reserved.
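The 4-parameter logistic hypothesis is easy to state concretely. A sketch fitting such a curve with SciPy is shown below on synthetic data; the parameter names and the stand-in series are illustrative, not the Hong Kong figures.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic4(t, lower, upper, t_mid, slope):
    """4-parameter logistic growth curve."""
    return lower + (upper - lower) / (1.0 + np.exp(-slope * (t - t_mid)))

# Synthetic consumption-like series saturating over time (NOT Hong Kong data):
t = np.arange(1970, 2011).astype(float)
rng = np.random.default_rng(1)
y = logistic4(t, 5.0, 45.0, 1990.0, 0.15) + rng.normal(0, 0.5, t.size)

popt, _ = curve_fit(logistic4, t, y, p0=[y.min(), y.max(), t.mean(), 0.1])
mape = 100.0 * np.mean(np.abs((y - logistic4(t, *popt)) / y))  # mean absolute percent error
```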
Location precision analysis of stereo thermal anti-sniper detection system
NASA Astrophysics Data System (ADS)
He, Yuqing; Lu, Ya; Zhang, Xiaoyan; Jin, Weiqi
2012-06-01
Anti-sniper detection devices are an urgent requirement in modern warfare, and the precision of an anti-sniper detection system is especially important. This paper analyzes the location precision of an anti-sniper detection system based on dual thermal imaging. Two error sources are discussed: the digital quantization effects of the cameras, and the error in estimating the bullet-trajectory coordinates from the infrared images during image matching. The error-analysis formula is derived from the stereovision model and the cameras' digital quantization effects, which yields the relationship between detection accuracy and the system parameters. The analysis in this paper provides the theoretical basis for error compensation algorithms intended to improve the accuracy of 3D reconstruction of the bullet trajectory in anti-sniper detection devices.
Entanglement-enhanced Neyman-Pearson target detection using quantum illumination
NASA Astrophysics Data System (ADS)
Zhuang, Quntao; Zhang, Zheshen; Shapiro, Jeffrey H.
2017-08-01
Quantum illumination (QI) provides entanglement-based target detection, in an entanglement-breaking environment, whose performance is significantly better than that of optimum classical-illumination target detection. QI's performance advantage was established in a Bayesian setting with the target presumed equally likely to be absent or present and error probability employed as the performance metric. Radar theory, however, eschews that Bayesian approach, preferring the Neyman-Pearson performance criterion to avoid the difficulties of accurately assigning prior probabilities to target absence and presence and appropriate costs to false-alarm and miss errors. We have recently reported an architecture, based on sum-frequency generation (SFG) and feedforward (FF) processing, for minimum error-probability QI target detection with arbitrary prior probabilities for target absence and presence. In this paper, we use our results for FF-SFG reception to determine the receiver operating characteristic (detection probability versus false-alarm probability) for optimum QI target detection under the Neyman-Pearson criterion.
Flavour and identification threshold detection overview of Slovak adepts for certified testing.
Vietoris, VladimIr; Barborova, Petra; Jancovicova, Jana; Eliasova, Lucia; Karvaj, Marian
2016-07-01
During the certification process for sensory assessors run by the Slovak certification body, we obtained results for basic taste thresholds and lifestyle habits. 500 adults with a food industry background were screened during the experiment. For analysis of basic and non-basic tastes, we used the standardized procedure of ISO 8586-1:1993. In the flavour test experiment, the 26-35 y.o. group produced the lowest error ratio (1.438), while the 56+ y.o. group produced the highest (2.0). The average error for women was 1.510, compared with 1.477 for men. People with allergies had an average error ratio of 1.437, compared with 1.511 for people without allergies. Non-smokers produced fewer errors (1.484) than smokers (1.576). Another flavour threshold identification test detected differences among age groups (values increased with age). The highest error rates occurred in metallic taste, made by men (24%) and women (22%) alike. Men made more errors in salty taste (19%) than women (10%). The analysis detected differences between the allergic/non-allergic and smoker/non-smoker groups.
Dixon, V L; Bzoch, K R; Habal, M B
1979-07-01
A comparison is made of the preoperative and postoperative speech evaluations of 15 selected subjects who had pharyngeal flap operations combined with palatal pushback. Postoperatively, 13 of the 15 patients (86 percent) showed no abnormal nasal emission and no evidence of significant hypernasality during word production. Gross substitution errors were also corrected by the surgical repair. While the number of patients is small, this study indicates equal effectiveness of the surgical technique described, regardless of sex, medical diagnosis, whether the procedure was primary or secondary, or the amount of postoperative time, provided there is good function of the muscles of the soft palate.
Influence of incident angle on the decoding in laser polarization encoding guidance
NASA Astrophysics Data System (ADS)
Zhou, Muchun; Chen, Yanru; Zhao, Qi; Xin, Yu; Wen, Hongyuan
2009-07-01
Dynamic detection of polarization states is very important for laser polarization-coded guidance systems. In this paper, a dynamic polarization decoding and detection system for laser polarization-coded guidance was designed. The detection process for normally incident polarized light is analyzed with the Jones matrix formalism, showing that the system can effectively detect changes in polarization. The influence of non-normally incident light on the performance of the decoding and detection system is also studied. The analysis shows that changes in the incidence angle degrade the measurement results; this effect is mainly caused by second-order birefringence and the polarization-sensitivity effects generated in the phase retarder and the polarizing beam-splitter prism. Using the Fresnel formulas, the decoding errors for linearly, elliptically and circularly polarized light entering the detector at different incidence angles are calculated; the results show that the decoding errors increase with the incidence angle. The decoding errors depend on the geometric parameters and material refractive indices of the wave plate and the polarizing beam-splitter prism, and can be reduced by using a thin, low-order wave plate. Simulations of polarized-light detection at different incidence angles confirmed these conclusions.
Annotation of Korean Learner Corpora for Particle Error Detection
ERIC Educational Resources Information Center
Lee, Sun-Hee; Jang, Seok Bae; Seo, Sang-Kyu
2009-01-01
In this study, we focus on particle errors and discuss an annotation scheme for Korean learner corpora that can be used to extract heuristic patterns of particle errors efficiently. We investigate different properties of particle errors so that they can be later used to identify learner errors automatically, and we provide resourceful annotation…
Estimating forestland area change from inventory data
Paul Van Deusen; Francis Roesch; Thomas Wigley
2013-01-01
Simple methods for estimating the proportion of land changing from forest to nonforest are developed. Variance estimators are derived to facilitate significance tests. A power analysis indicates that 400 inventory plots are required to reliably detect small changes in net or gross forest loss. This is an important result because forest certification programs may...
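For intuition about where a figure like 400 plots comes from, a normal-approximation sample-size calculation for detecting a shift in a small change proportion is sketched below. The proportions and test setup are illustrative assumptions, not the paper's exact derivation.

```python
from math import sqrt
from scipy.stats import norm

def plots_needed(p0, p1, alpha=0.05, power=0.8):
    """Normal-approximation sample size for a one-sample, two-sided test
    that the proportion of plots changing from forest to nonforest is p1
    rather than p0."""
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    n = (z_a * sqrt(p0 * (1 - p0)) + z_b * sqrt(p1 * (1 - p1)))**2 / (p1 - p0)**2
    return int(n) + 1

n = plots_needed(0.02, 0.04)   # detecting a doubling of a 2% change proportion -> ~483 plots
```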
46 CFR 118.400 - Where required.
Code of Federal Regulations, 2010 CFR
2010-10-01
Shipping COAST... Extinguishing and Detecting Systems § 118.400 Where required. (a) The following spaces must be equipped with a... unoccupied space with a gross volume of not more than 170 cubic meters (6,000 cubic feet); (2) A pre...
Chaichoun, Kridsada; Wiriyarat, Withawat; Phonaknguen, Rassmeepen; Sariya, Ladawan; Taowan, Nam-aoy; Chakritbudsabong, Warunya; Chaisilp, Natnapat; Eiam-ampai, Krirat; Phuttavatana, Pilaipan; Ratanakorn, Parntep
2013-09-01
This investigation detailed the clinical disease and the gross and histologic lesions in juvenile openbill storks (Anastomus oscitans) intranasally inoculated with an avian influenza virus, A/chicken/Thailand/vsmu-3 (H5N1), which is highly pathogenic for chickens. High morbidity and mortality were observed in openbill storks inoculated with HPAI H5N1 virus. Gross lesions in infected birds were congestion and brain hemorrhage (10/20), pericardial effusions, pericarditis and focal necrosis of the cardiac muscle (2/20), pulmonary edema and pulmonary necrosis, serosanguineous fluid in the bronchi (16/20), liver congestion (6/20), bursitis (5/20), subcutaneous hemorrhages (2/20) and pinpoint proventriculus hemorrhage (2/20). Real-time RT-PCR demonstrated the presence of viral RNA in organs associated with the lesions: brain, trachea, lungs, liver, spleen and intestines. Consistent with the viral genome detection, virus was also isolated from these vital organs. Antibodies to influenza virus, detected with a hemagglutination inhibition test, were found only in the openbill storks that died 8 days post-inoculation.
Judging the judges' performance in rhythmic gymnastics.
Flessas, Konstantinos; Mylonas, Dimitris; Panagiotaropoulou, Georgia; Tsopani, Despina; Korda, Alexandrea; Siettos, Constantinos; Di Cagno, Alessandra; Evdokimidis, Ioannis; Smyrnis, Nikolaos
2015-03-01
Rhythmic gymnastics (RG) is an aesthetic event balancing between art and sport that also has a performance rating system (Code of Points) given by the International Gymnastics Federation. It is one of the sports in which competition results greatly depend on the judges' evaluation. In the current study, we explored the judges' performance in a five-gymnast ensemble routine. An expert-novice paradigm (10 international-level, 10 national-level, and 10 novice-level judges) was implemented under a fully simulated judging procedure for a five-gymnast ensemble routine of RG, using two videos of routines performed by the Greek national team. Simultaneous recordings of two-dimensional eye movements were taken during the judging procedure to assess the percentage of time spent by each judge viewing the videos and the fixation performance of each judge when an error in gymnast performance had occurred. All judge-level groups showed very modest error-recognition performance on the gymnasts' routines; even the best international judges reported approximately 40% of the true errors. Novice judges spent significantly more time viewing the videos compared with national and international judges and spent significantly more time fixating detected errors than the other two groups. National judges were the only group that made efficient use of fixation to detect errors. The fact that international-level judges outperformed both other groups, while not relying on visual fixation to detect errors, suggests that these experienced judges probably make use of other cognitive strategies, increasing their overall error detection efficiency, which was, however, still far below optimum.
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
Oak Ridge Associated Universities (ORAU), under the Oak Ridge Institute for Science and Education (ORISE) contract, collected split surface water samples with Nuclear Fuel Services (NFS) representatives on November 15, 2012. Representatives from the U.S. Nuclear Regulatory Commission and Tennessee Department of Environment and Conservation were also in attendance. Samples were collected at four surface water stations, as required in the approved Request for Technical Assistance number 11-018. These stations included Nolichucky River upstream (NRU), Nolichucky River downstream (NRD), Martin Creek upstream (MCU), and Martin Creek downstream (MCD). Both ORAU and NFS performed gross alpha and gross beta analyses, and the results are compared using the duplicate error ratio (DER), also known as the normalized absolute difference. A DER ≤ 3 indicates that, at a 99% confidence interval, split sample results do not differ significantly when compared to their respective one standard deviation (sigma) uncertainty (ANSI N42.22). The NFS split sample report does not specify the confidence level of reported uncertainties (NFS 2012). Therefore, standard two sigma reporting is assumed and uncertainty values were divided by 1.96. In conclusion, all DER values were less than 3 and results are consistent with low (e.g., background) concentrations.
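The DER computation itself is a one-liner once the uncertainties are put on a common one-sigma footing, as the report describes. A minimal sketch with illustrative gross-alpha values:

```python
def duplicate_error_ratio(a, ua_2sigma, b, ub_2sigma):
    """Normalized absolute difference between split-sample results a and b.
    Reported uncertainties are assumed to be 2-sigma, so each is divided by
    1.96 before combining; DER <= 3 means no significant difference at
    ~99% confidence (ANSI N42.22)."""
    ua, ub = ua_2sigma / 1.96, ub_2sigma / 1.96
    return abs(a - b) / (ua**2 + ub**2) ** 0.5

der = duplicate_error_ratio(2.1, 0.8, 1.6, 0.7)  # illustrative split pair (pCi/L)
```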
Quantitative evaluation of patient-specific quality assurance using online dosimetry system
NASA Astrophysics Data System (ADS)
Jung, Jae-Yong; Shin, Young-Ju; Sohn, Seung-Chang; Min, Jung-Whan; Kim, Yon-Lae; Kim, Dong-Su; Choe, Bo-Young; Suh, Tae-Suk
2018-01-01
In this study, we investigated the clinical performance of an online dosimetry system (Mobius FX system, MFX) by 1) dosimetric plan verification using gamma passing rates and dose volume metrics and 2) evaluation of error-detection capability with deliberately introduced machine errors. Eighteen volumetric modulated arc therapy (VMAT) plans were studied. To evaluate the clinical performance of the MFX, we used gamma analysis and dose volume histogram (DVH) analysis. In addition, to evaluate the error-detection capability, we used gamma analysis and DVH analysis utilizing three types of deliberately introduced errors (Type 1: gantry angle-independent multi-leaf collimator (MLC) error, Type 2: gantry angle-dependent MLC error, and Type 3: gantry angle error). In the dosimetric verification comparison of the physical dosimetry system (Delta4PT) and the online dosimetry system (MFX), the gamma passing rates of the two dosimetry systems showed very good agreement with the treatment planning system (TPS) calculation. For the average dose difference between the TPS calculation and the MFX measurement, most of the dose metrics showed good agreement within a tolerance of 3%. In the error-detection comparison of the Delta4PT and the MFX, the gamma passing rates of the two dosimetry systems did not meet the 90% acceptance criterion when the magnitude of error exceeded 2 mm and 1.5°, respectively, for error plans of Types 1, 2, and 3. For delivery with all error types, the average dose difference of the PTV due to error magnitude showed good agreement between the calculated TPS and the measured MFX within 1%. Overall, the results of the online dosimetry system showed very good agreement with those of the physical dosimetry system. Our results suggest that a log file-based online dosimetry system is a very suitable verification tool for accurate and efficient clinical routines for patient-specific quality assurance (QA).
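Gamma analysis, the workhorse comparison used above, reduces in one dimension to a small computation. The sketch below implements a simplified global 1-D gamma with 3%/3 mm criteria on illustrative profiles; clinical tools operate on 2-D or 3-D dose grids with sub-grid interpolation.

```python
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dd_pct=3.0, dta_mm=3.0):
    """Simplified global 1-D gamma index: for each evaluated point, the
    minimum over reference points of sqrt((dose diff / DD)^2 +
    (distance / DTA)^2). Gamma <= 1 counts as a pass."""
    dd = dd_pct / 100.0 * d_ref.max()           # global dose-difference criterion
    gamma = np.empty_like(d_eval)
    for i, (x, d) in enumerate(zip(x_eval, d_eval)):
        g2 = ((d - d_ref) / dd) ** 2 + ((x - x_ref) / dta_mm) ** 2
        gamma[i] = np.sqrt(g2.min())
    return 100.0 * np.mean(gamma <= 1.0)        # gamma passing rate in percent

x = np.linspace(0, 100, 201)                    # positions in mm
ref = np.exp(-((x - 50) / 20) ** 2)             # illustrative reference profile
meas = np.exp(-((x - 51) / 20) ** 2)            # measurement shifted by 1 mm
passing_rate = gamma_1d(x, ref, x, meas)
```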
Tomography of a displacement photon counter for discrimination of single-rail optical qubits
NASA Astrophysics Data System (ADS)
Izumi, Shuro; Neergaard-Nielsen, Jonas S.; Andersen, Ulrik L.
2018-04-01
We investigate the performance of a detection strategy composed of a displacement operation and a photon counter, which is known as a beneficial tool in optical coherent communications, for the quantum state discrimination of the two superpositions of vacuum and single-photon states corresponding to the σ̂_x eigenstates in the single-rail encoding of photonic qubits. We experimentally characterize the detection strategy in the vacuum-single-photon two-dimensional space using quantum detector tomography and evaluate the achievable discrimination error probability from the reconstructed measurement operators. We furthermore derive the minimum error rate obtainable with Gaussian transformations and homodyne detection. Our proof-of-principle experiment shows that the proposed scheme can achieve a discrimination error surpassing homodyne detection.
Kermani, Bahram G
2016-07-01
Crystal Genetics, Inc. is an early-stage genetic test company, focused on achieving the highest possible clinical-grade accuracy and comprehensiveness for detecting germline (e.g., in hereditary cancer) and somatic (e.g., in early cancer detection) mutations. Crystal's mission is to significantly improve the health status of the population by providing high-accuracy, comprehensive, flexible and affordable genetic tests, primarily in cancer. Crystal's philosophy is that when it comes to detecting mutations that are strongly correlated with life-threatening diseases, the detection accuracy of every single mutation counts: a single false-positive error could cause severe anxiety for the patient. And, more importantly, a single false-negative error could potentially cost the patient's life. Crystal's objective is to eliminate both of these error types.
System reliability and recovery.
DOT National Transportation Integrated Search
1971-06-01
The paper exhibits a variety of reliability techniques applicable to future ATC data processing systems. Presently envisioned schemes for error detection, error interrupt and error analysis are considered, along with methods of retry, reconfiguration...
Updating expected action outcome in the medial frontal cortex involves an evaluation of error type.
Maier, Martin E; Steinhauser, Marco
2013-10-02
Forming expectations about the outcome of an action is an important prerequisite for action control and reinforcement learning in the human brain. The medial frontal cortex (MFC) has been shown to play an important role in the representation of outcome expectations, particularly when an update of expected outcome becomes necessary because an error is detected. However, error detection alone is not always sufficient to compute expected outcome because errors can occur in various ways and different types of errors may be associated with different outcomes. In the present study, we therefore investigate whether updating expected outcome in the human MFC is based on an evaluation of error type. Our approach was to consider an electrophysiological correlate of MFC activity on errors, the error-related negativity (Ne/ERN), in a task in which two types of errors could occur. Because the two error types were associated with different amounts of monetary loss, updating expected outcomes on error trials required an evaluation of error type. Our data revealed a pattern of Ne/ERN amplitudes that closely mirrored the amount of monetary loss associated with each error type, suggesting that outcome expectations are updated based on an evaluation of error type. We propose that this is achieved by a proactive evaluation process that anticipates error types by continuously monitoring error sources or by dynamically representing possible response-outcome relations.
Differential Characteristics Based Iterative Multiuser Detection for Wireless Sensor Networks
Chen, Xiaoguang; Jiang, Xu; Wu, Zhilu; Zhuang, Shufeng
2017-01-01
High throughput, low latency and reliable communication have always been key goals for wireless sensor networks (WSNs) in various applications. Multiuser detection is widely used to suppress the adverse effect of multiple access interference in WSNs. In this paper, a novel multiuser detection method based on differential characteristics is proposed to suppress multiple access interference. The proposed iterative receive method consists of three stages. First, a differential characteristics function is presented based on the optimal multiuser detection decision function; then, on the basis of the differential characteristics, a preliminary threshold detection is utilized to find potentially wrongly received bits; after that, an error bit corrector is employed to correct the wrong bits. In order to further lower the bit error ratio (BER), the differential characteristics calculation, threshold detection and error bit correction process described above are iteratively executed. Simulation results show that after only a few iterations the proposed multiuser detection method achieves satisfactory BER performance. Moreover, the BER and near-far resistance performance are much better than those of traditional suboptimal multiuser detection methods. Furthermore, the proposed iterative multiuser detection method also has a large system capacity. PMID:28212328
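The detect-then-correct loop can be illustrated, very loosely, with a least-squares bit-flip corrector for synchronous CDMA. This is a generic stand-in: the paper's differential-characteristics function is not reproduced here, and the signal model, spreading-code matrix, and stopping rule are assumptions made only for the sketch.

```python
import numpy as np

def iterative_mud(y, S, n_iter=3):
    """Illustrative iterative detector for synchronous CDMA (y = S b + n,
    S holds spreading codes as columns, b in {-1,+1}). Bits whose sign flip
    lowers the least-squares metric ||y - S b||^2 are treated as potentially
    wrong and corrected, mimicking a detect-then-correct loop."""
    b = np.sign(S.T @ y)                        # stage 0: conventional detector
    for _ in range(n_iter):
        changed = False
        for k in range(b.size):                 # try correcting one bit at a time
            b_try = b.copy()
            b_try[k] = -b_try[k]
            if np.sum((y - S @ b_try) ** 2) < np.sum((y - S @ b) ** 2):
                b, changed = b_try, True
        if not changed:
            break                               # converged: no bit improves the metric
    return b

rng = np.random.default_rng(0)
S = rng.choice([-1.0, 1.0], size=(32, 4)) / np.sqrt(32)  # 4 users, length-32 codes
b_true = np.array([1.0, -1.0, 1.0, 1.0])
y = S @ b_true + 0.3 * rng.standard_normal(32)
b_hat = iterative_mud(y, S)
```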
SU-F-T-310: Does a Head-Mounted Ionization Chamber Detect IMRT Errors?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wegener, S; Herzog, B; Sauer, O
2016-06-15
Purpose: The conventional plan verification strategy is to deliver a plan to a QA phantom before the first treatment. Monitoring each fraction of the patient treatment in real-time would improve patient safety. We evaluated how well a new detector, the IQM (iRT Systems, Germany), is capable of detecting errors we induced into IMRT plans of three different treatment regions. Results were compared to an established phantom. Methods: Clinical plans of a brain, prostate and head-and-neck patient were modified in the Pinnacle planning system, such that they resulted in either several percent lower prescribed doses to the target volume or several percent higher doses to relevant organs at risk. Unaltered plans were measured on three days, modified plans once, each with the IQM at an Elekta Synergy with an Agility MLC. All plans were also measured with the ArcCHECK with the cavity plug and a PTW semiflex 31010 ionization chamber inserted. Measurements were evaluated with SNC patient software. Results: Repeated IQM measurements of the original plans were reproducible, such that a 1% deviation from the mean as warning and 3% as action level as suggested by the manufacturer seemed reasonable. The IQM detected most of the simulated errors including wrong energy, a faulty leaf, wrong trial exported and a 2 mm shift of one leaf bank. Detection limits were reached for two plans - a 2 mm field position error and a leaf bank offset combined with an MU change. ArcCHECK evaluation according to our current standards also left undetected errors. Ionization chamber evaluation alone would leave most errors undetected. Conclusion: The IQM detected most errors and performed as well as currently established phantoms, with the advantage that it can be used throughout the whole treatment. A drawback is that it does not indicate the source of the error.
Comparison of algorithms for automatic border detection of melanoma in dermoscopy images
NASA Astrophysics Data System (ADS)
Srinivasa Raghavan, Sowmya; Kaur, Ravneet; LeAnder, Robert
2016-09-01
Melanoma is one of the most rapidly accelerating cancers in the world [1]. Early diagnosis is critical to an effective cure. We propose a new algorithm for more accurately detecting melanoma borders in dermoscopy images. Proper border detection requires eliminating occlusions like hair and bubbles by processing the original image. The preprocessing step involves transforming the RGB image to the CIE L*u*v* color space, in order to decouple brightness from color information, then increasing contrast using contrast-limited adaptive histogram equalization (CLAHE), followed by artifact removal using a Gaussian filter. After preprocessing, the Chan-Vese technique segments the preprocessed images to create a lesion mask, which undergoes a morphological closing operation. Next, the largest central blob in the lesion is detected, after which the blob is dilated to generate an image output mask. Finally, the automatically generated mask is compared to the manual mask by calculating the XOR error [3]. Our border detection algorithm was developed using training and test sets of 30 and 20 images, respectively. This detection method was compared to the SRM method [4] by calculating the average XOR error for each of the two algorithms. The average error for test images was 0.10 using the new algorithm and 0.99 using the SRM method. In comparing the average error values produced by the two algorithms, it is evident that the average XOR error for our technique is lower than that of the SRM method, implying that the new algorithm detects melanoma borders more accurately than the SRM algorithm.
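The XOR error used for evaluation is straightforward to compute from binary masks. A minimal sketch, normalizing the disagreement area by the manual lesion area (one common form of this metric; the cited papers may define the normalization slightly differently):

```python
import numpy as np

def xor_error(auto_mask, manual_mask):
    """Border-detection error as the area of disagreement between the
    automatic and manual lesion masks, normalized by the manual lesion area."""
    auto = np.asarray(auto_mask, dtype=bool)
    manual = np.asarray(manual_mask, dtype=bool)
    return np.logical_xor(auto, manual).sum() / manual.sum()

auto = np.zeros((64, 64), dtype=bool); auto[10:40, 10:40] = True
manual = np.zeros((64, 64), dtype=bool); manual[12:42, 12:42] = True
print(xor_error(auto, manual))  # fraction of manual lesion area in disagreement
```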
‘Why should I care?’ Challenging free will attenuates neural reaction to errors
Pourtois, Gilles; Brass, Marcel
2015-01-01
Whether human beings have free will has been a philosophical question for centuries. The debate about free will has recently entered the public arena through mass media and newspaper articles commenting on scientific findings that leave little to no room for free will. Previous research has shown that encouraging such a deterministic perspective influences behavior, namely by promoting cursory and antisocial behavior. Here we propose that such behavioral changes may, at least partly, stem from a more basic neurocognitive process related to response monitoring, namely a reduced error detection mechanism. Our results show that the error-related negativity, a neural marker of error detection, was reduced in individuals led to disbelieve in free will. This finding shows that reducing the belief in free will has a specific impact on error detection mechanisms. More generally, it suggests that abstract beliefs about intentional control can influence basic and automatic processes related to action control. PMID:24795441
Michael, Claire W; Naik, Kalyani; McVicker, Michael
2013-05-01
We developed a value stream map (VSM) of the Papanicolaou test procedure to identify opportunities to reduce waste and errors, created a new VSM, and implemented a new process emphasizing Lean tools. Preimplementation data revealed the following: (1) processing time (PT) for 1,140 samples averaged 54 hours; (2) 27 accessioning errors were detected on review of 357 random requisitions (7.6%); (3) 5 of the 20,060 tests had labeling errors that had gone undetected in the processing stage. Four were detected later during specimen processing but 1 reached the reporting stage. Postimplementation data were as follows: (1) PT for 1,355 samples averaged 31 hours; (2) 17 accessioning errors were detected on review of 385 random requisitions (4.4%); and (3) no labeling errors were undetected. Our results demonstrate that implementation of Lean methods, such as first-in first-out processes and minimizing batch size by staff actively participating in the improvement process, allows for higher quality, greater patient safety, and improved efficiency.
Identifying medication error chains from critical incident reports: a new analytic approach.
Huckels-Baumgart, Saskia; Manser, Tanja
2014-10-01
Research into the distribution of medication errors usually focuses on isolated stages within the medication use process. Our study aimed to provide a novel process-oriented approach to medication incident analysis focusing on medication error chains. Our study was conducted across a 900-bed teaching hospital in Switzerland. All 1,591 medication errors reported from 2009 to 2012 were categorized using the NCC MERP Medication Error Index and the WHO Classification for Patient Safety Methodology. In order to identify medication error chains, each reported medication incident was allocated to the relevant stage of the hospital medication use process. Only 25.8% of the reported medication errors were detected before they propagated through the medication use process. The majority of medication errors (74.2%) formed an error chain encompassing two or more stages. The most frequent error chain comprised preparation up to and including medication administration (45.2%). "Non-consideration of documentation/prescribing" during drug preparation was the most frequent contributor to "wrong dose" during medication administration. Medication error chains provide important insights for detecting and stopping medication errors before they reach the patient. Existing and new safety barriers need to be extended to interrupt error chains and to improve patient safety. © 2014, The American College of Clinical Pharmacology.
2012-01-01
Background: To investigate geometric and dosimetric accuracy of frame-less image-guided radiosurgery (IG-RS) for brain metastases. Methods and materials: Single fraction IG-RS was practiced in 72 patients with 98 brain metastases. Patient positioning and immobilization used either double- (n = 71) or single-layer (n = 27) thermoplastic masks. Pre-treatment set-up errors (n = 98) were evaluated with cone-beam CT (CBCT) based image-guidance (IG) and were corrected in six degrees of freedom without an action level. CBCT imaging after treatment measured intra-fractional errors (n = 64). Pre- and post-treatment errors were simulated in the treatment planning system and target coverage and dose conformity were evaluated. Three scenarios of 0 mm, 1 mm and 2 mm GTV-to-PTV (gross tumor volume, planning target volume) safety margins (SM) were simulated. Results: Errors prior to IG were 3.9 mm ± 1.7 mm (3D vector) and the maximum rotational error was 1.7° ± 0.8° on average. The post-treatment 3D error was 0.9 mm ± 0.6 mm. No differences between double- and single-layer masks were observed. Intra-fractional errors were significantly correlated with the total treatment time, with 0.7 mm ± 0.5 mm and 1.2 mm ± 0.7 mm for treatment times ≤23 minutes and >23 minutes (p < 0.01), respectively. Simulation of RS without image-guidance reduced target coverage and conformity to 75% ± 19% and 60% ± 25% of planned values. Each 3D set-up error of 1 mm decreased target coverage and dose conformity by 6% and 10% on average, respectively, with a large inter-patient variability. Pre-treatment correction of translations only, but not rotations, did not affect target coverage and conformity. Post-treatment errors reduced target coverage by >5% in 14% of the patients. A 1 mm safety margin fully compensated intra-fractional patient motion. Conclusions: IG-RS with online correction of translational errors achieves high geometric and dosimetric accuracy. Intra-fractional errors decrease target coverage and conformity unless compensated with appropriate safety margins. PMID:22531060
Chambert, Thierry A.; Waddle, J. Hardin; Miller, David A.W.; Walls, Susan; Nichols, James D.
2018-01-01
The development and use of automated species-detection technologies, such as acoustic recorders, for monitoring wildlife are rapidly expanding. Automated classification algorithms provide a cost- and time-effective means to process information-rich data, but often at the cost of additional detection errors. Appropriate methods are necessary to analyse such data while dealing with the different types of detection errors. We developed a hierarchical modelling framework for estimating species occupancy from automated species-detection data. We explore design and optimization of data post-processing procedures to account for detection errors and generate accurate estimates. Our proposed method accounts for both imperfect detection and false positive errors and utilizes information about both occurrence and abundance of detections to improve estimation. Using simulations, we show that our method provides much more accurate estimates than models ignoring the abundance of detections. The same findings are reached when we apply the methods to two real datasets on North American frogs surveyed with acoustic recorders. When false positives occur, estimator accuracy can be improved when a subset of detections produced by the classification algorithm is post-validated by a human observer. We use simulations to investigate the relationship between accuracy and effort spent on post-validation, and find that very accurate occupancy estimates can be obtained with as little as 1% of data being validated. Automated monitoring of wildlife provides opportunities and challenges. Our methods for analysing automated species-detection data help to meet key challenges unique to these data and will prove useful for many wildlife monitoring programs.
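A toy simulation illustrates why the abstract's framework must treat false positives explicitly: a naive estimator that calls a site "occupied" after any detection is biased upward. All rates and site counts below are arbitrary assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sites, n_visits = 1000, 5
psi_true = 0.4    # true occupancy probability
p_detect = 0.6    # true-positive detection probability per visit
p_false = 0.05    # false-positive probability per visit at empty sites

occupied = rng.random(n_sites) < psi_true
detect_prob = np.where(occupied, p_detect, p_false)
detections = rng.random((n_sites, n_visits)) < detect_prob[:, None]

# Naive estimator: a site is "occupied" if it was ever detected.
naive_psi = detections.any(axis=1).mean()
print(f"true occupancy: {psi_true:.2f}, naive estimate: {naive_psi:.2f}")
# With p_false > 0 the naive estimate exceeds psi_true, which is why the
# hierarchical model accounts for both error types explicitly.
```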
Steward, Christine D.; Stocker, Sheila A.; Swenson, Jana M.; O’Hara, Caroline M.; Edwards, Jonathan R.; Gaynes, Robert P.; McGowan, John E.; Tenover, Fred C.
1999-01-01
Fluoroquinolone resistance appears to be increasing in many species of bacteria, particularly in those causing nosocomial infections. However, the accuracy of some antimicrobial susceptibility testing methods for detecting fluoroquinolone resistance remains uncertain. Therefore, we compared the accuracy of the results of agar dilution, disk diffusion, MicroScan Walk Away Neg Combo 15 conventional panels, and Vitek GNS-F7 cards to the accuracy of the results of the broth microdilution reference method for detection of ciprofloxacin and ofloxacin resistance in 195 clinical isolates of the family Enterobacteriaceae collected from six U.S. hospitals for a national surveillance project (Project ICARE [Intensive Care Antimicrobial Resistance Epidemiology]). For ciprofloxacin, very major error rates were 0% (disk diffusion and MicroScan), 0.9% (agar dilution), and 2.7% (Vitek), while major error rates ranged from 0% (agar dilution) to 3.7% (MicroScan and Vitek). Minor error rates ranged from 12.3% (agar dilution) to 20.5% (MicroScan). For ofloxacin, no very major errors were observed, and major errors were noted only with MicroScan (3.7% major error rate). Minor error rates ranged from 8.2% (agar dilution) to 18.5% (Vitek). Minor errors for all methods were substantially reduced when results with MICs within ±1 dilution of the broth microdilution reference MIC were excluded from analysis. However, the high number of minor errors by all test systems remains a concern. PMID:9986809
Patient identification errors: the detective in the laboratory.
Salinas, Maria; López-Garrigós, Maite; Lillo, Rosa; Gutiérrez, Mercedes; Lugo, Javier; Leiva-Salinas, Carlos
2013-11-01
The eradication of errors in patient identification is one of the main goals for safety improvement. As the clinical laboratory is involved in 70% of clinical decisions, laboratory safety is crucial to patient safety. We studied the number of Laboratory Information System (LIS) demographic data errors registered in our laboratory during one year. The laboratory attends a variety of inpatients and outpatients. The demographic data of outpatients are registered in the LIS when they present to the laboratory front desk. Requests from the primary care centers (PCC) are made electronically by the general practitioner. A manual step is always done at the PCC to reconcile the patient identification number in the electronic request with the one in the LIS. Manual registration is done through hospital information system demographic data capture when the patient's medical record number is registered in the LIS. The laboratory report is always sent out electronically to the patient's electronic medical record. Each day, all demographic data in the LIS are manually compared to the request forms to detect potential errors. Fewer errors were committed when the electronic order was used. There was great error variability between PCCs when using the electronic order. LIS demographic data manual registration errors depended on patient origin and test requesting method. Even when using the electronic approach, errors were detected. There was great variability between PCCs even when using this electronic modality; this suggests that the number of errors is still dependent on the personnel in charge of the technology. © 2013.
Formal Verification of Safety Buffers for State-Based Conflict Detection and Resolution
NASA Technical Reports Server (NTRS)
Herencia-Zapana, Heber; Jeannin, Jean-Baptiste; Munoz, Cesar A.
2010-01-01
The information provided by global positioning systems is never totally exact, and there are always errors when measuring the position and velocity of moving objects such as aircraft. This paper studies the effects of these errors on the actual separation of aircraft in the context of state-based conflict detection and resolution. Assuming that the state information is uncertain but that bounds on the errors are known, this paper provides an analytical definition of a safety buffer and sufficient conditions under which this buffer guarantees that actual conflicts are detected and solved. The results are presented as theorems, which were formally proven using a mechanical theorem prover.
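A minimal numeric sketch of the buffer idea, under the assumption that each reported position can be off by at most eps: padding the required separation D by 2*eps makes the conservative test unable to miss a true conflict. This simple additive buffer is my illustration of the concept, not the paper's formal definition.

```python
import numpy as np

def conflict_detected(pos_a, pos_b, D, eps):
    # Measured (reported) distance between the two aircraft.
    measured = np.linalg.norm(np.asarray(pos_a) - np.asarray(pos_b))
    # The actual distance lies within measured +/- 2*eps, so comparing
    # against D + 2*eps can never miss a true loss of separation.
    return measured < D + 2 * eps

# Actual separation 4.9 (a true conflict for D = 5), but each position
# measurement errs by up to eps = 0.2, so the measured distance may be
# as large as 5.3; the buffered test still flags it.
print(conflict_detected([0.0, 0.0], [5.3, 0.0], D=5.0, eps=0.2))  # True
```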
Prevention of medication errors: detection and audit.
Montesi, Germana; Lechi, Alessandro
2009-06-01
1. Medication errors have important implications for patient safety, and their identification is a main target in improving clinical practice, in order to prevent adverse events. 2. Error detection is the first crucial step. Approaches to this are likely to differ between research and routine care, and the most suitable must be chosen according to the setting. 3. The major methods for detecting medication errors and associated adverse drug-related events are chart review, computerized monitoring, administrative databases, claims data, direct observation, incident reporting, and patient monitoring. All of these methods have both advantages and limitations. 4. Reporting discloses medication errors, can trigger warnings, and encourages the diffusion of a culture of safe practice. Combining and comparing data from various sources increases the reliability of the system. 5. Error prevention can be planned by means of retroactive and proactive tools, such as audit and Failure Mode, Effect, and Criticality Analysis (FMECA). Audit is also an educational activity, which promotes high-quality care; it should be carried out regularly. In an audit cycle we can compare what is actually done against reference standards and put in place corrective actions to improve the performance of individuals and systems. 6. Patient safety must be the first aim in every setting, in order to build safer systems, learning from errors and reducing the human and fiscal costs.
Systems and methods for data quality control and cleansing
Wenzel, Michael; Boettcher, Andrew; Drees, Kirk; Kummer, James
2016-05-31
A method for detecting and cleansing suspect building automation system data is shown and described. The method includes using processing electronics to automatically determine which of a plurality of error detectors and which of a plurality of data cleansers to use with building automation system data. The method further includes using processing electronics to automatically detect errors in the data and cleanse the data using a subset of the error detectors and a subset of the cleansers.
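A hedged sketch of the detect-then-cleanse idea from the abstract above: a subset of error detectors flags suspect samples, and a matching cleanser repairs them. The detector and cleanser choices below (a median/MAD spike detector, a missing-value check, linear interpolation) are hypothetical illustrations, not the patent's methods.

```python
import numpy as np

def detect_spikes(x, z=5.0):
    # Robust spike detector: flag points far from the median in MAD units.
    med = np.nanmedian(x)
    mad = np.nanmedian(np.abs(x - med)) + 1e-9
    return np.abs(x - med) > z * mad

def detect_missing(x):
    return np.isnan(x)

def cleanse_interpolate(x, bad):
    # Cleanser: replace flagged samples by linear interpolation over good ones.
    idx = np.arange(x.size)
    good = ~bad
    return np.where(bad, np.interp(idx, idx[good], x[good]), x)

# Hypothetical temperature trace from a building sensor, with one dropout
# and one spike.
data = np.array([20.1, 20.3, np.nan, 20.2, 95.0, 20.4])
bad = detect_spikes(data) | detect_missing(data)
print(cleanse_interpolate(data, bad))  # [20.1 20.3 20.25 20.2 20.3 20.4]
```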
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, T; Kumaraswamy, L
Purpose: Detection of treatment delivery errors is important in radiation therapy. However, accurate quantification of delivery errors is also of great importance. This study aims to evaluate the 3DVH software's ability to accurately quantify delivery errors. Methods: Three VMAT plans (prostate, H&N and brain) were randomly chosen for this study. First, we evaluated whether delivery errors could be detected by gamma evaluation. Conventional per-beam IMRT QA was performed with the ArcCHECK diode detector for the original plans and for the following modified plans: (1) induced dose difference errors up to ±4.0%, (2) control point (CP) deletion (3 to 10 CPs were deleted) and (3) gantry angle shift errors (3 degree uniform shift). 2D and 3D gamma evaluations were performed for all plans through SNC Patient and 3DVH, respectively. Subsequently, we investigated the accuracy of 3DVH analysis for all cases. This part evaluated, using the Eclipse TPS plans as the standard, whether 3DVH can accurately model the changes in clinically relevant metrics caused by the delivery errors. Results: 2D evaluation seemed to be more sensitive to delivery errors. The average differences between Eclipse-predicted and 3DVH results for each pair of specific DVH constraints were within 2% for all three types of error-induced treatment plans, illustrating that 3DVH is fairly accurate in quantifying the delivery errors. Another interesting observation was that even though the gamma pass rates for the error plans were high, the DVHs showed significant differences between the original plan and error-induced plans in both Eclipse and 3DVH analysis. Conclusion: The 3DVH software is shown to accurately quantify the error in delivered dose based on clinically relevant DVH metrics, where a conventional gamma-based pre-treatment QA might not necessarily detect it.
Error field detection in DIII-D by magnetic steering of locked modes
Shiraki, Daisuke; La Haye, Robert J.; Logan, Nikolas C.; ...
2014-02-20
Optimal correction coil currents for the n = 1 intrinsic error field of the DIII-D tokamak are inferred by applying a rotating external magnetic perturbation to steer the phase of a saturated locked mode with poloidal/toroidal mode number m/n = 2/1. The error field is detected non-disruptively in a single discharge, based on the toroidal torque balance of the resonant surface, which is assumed to be dominated by the balance of resonant electromagnetic torques. This is equivalent to the island being locked at all times to the resonant 2/1 component of the total of the applied and intrinsic error fields, such that the deviation of the locked mode phase from the applied field phase depends on the existing error field. The optimal set of correction coil currents is determined to be those currents which best cancel the torque from the error field, based on fitting of the torque balance model. The toroidal electromagnetic torques are calculated from experimental data using a simplified approach incorporating realistic DIII-D geometry, and including the effect of the plasma response on island torque balance based on the ideal plasma response to external fields. This method of error field detection is demonstrated in DIII-D discharges, and the results are compared with those based on the onset of low-density locked modes in ohmic plasmas. Furthermore, this magnetic steering technique presents an efficient approach to error field detection and is a promising method for ITER, particularly during initial operation when the lack of auxiliary heating systems makes established techniques based on rotation or plasma amplification unsuitable.
Dehghan, Ashraf; Abumasoudi, Rouhollah Sheikh; Ehsanpour, Soheila
2016-01-01
Background: Infertility and errors in the process of its treatment have a negative impact on infertile couples. The present study aimed to identify and assess the common errors in the reception process by applying the "failure modes and effects analysis" (FMEA) approach. Materials and Methods: In this descriptive cross-sectional study, the admission process of the fertility and infertility center of Isfahan was selected for evaluation of its errors based on the team members' decision. At first, the admission process was charted through observations and interviews with employees, holding multiple panels, and using the FMEA worksheet, which has been used in many studies all over the world, including in Iran. Its validity was evaluated through content and face validity, and its reliability was evaluated through review and confirmation of the obtained information by the FMEA team. Eventually, possible errors, their causes, and three indicators - severity of effect, probability of occurrence, and probability of detection - were determined and corrective actions were proposed. Data analysis used the risk priority number (RPN), which is calculated by multiplying the severity of effect, probability of occurrence, and probability of detection. Results: Twenty-five errors with RPN ≥ 125 were detected in the admission process, of which six had high priority in terms of severity and occurrence probability and were identified as high-risk errors. Conclusions: The team-oriented FMEA method could be useful for the assessment of errors and also for reducing the occurrence probability of errors. PMID:28194208
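A worked example of the RPN arithmetic defined above: RPN = severity × occurrence × detection, each typically scored on a 1-10 scale, with the study's RPN ≥ 125 cut-off for high-risk modes. The failure modes and scores below are hypothetical illustrations, not the study's actual findings.

```python
failure_modes = [
    # (description, severity, occurrence, detection), each scored 1-10
    ("patient file mismatched at reception", 8, 5, 4),
    ("incomplete insurance information",     4, 6, 3),
    ("appointment entered for wrong date",   6, 4, 5),
]

for name, sev, occ, det in failure_modes:
    rpn = sev * occ * det
    flag = "HIGH RISK" if rpn >= 125 else "ok"   # study threshold RPN >= 125
    print(f"{name:40s} RPN={rpn:4d} {flag}")
```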
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gonzalez, P; Olaciregui-Ruiz, I; Mijnheer, B
2016-06-15
Purpose: To investigate the sensitivity of an EPID-based 3D dose verification system to detect delivery errors in VMAT treatments. Methods: For this study 41 EPID-reconstructed 3D in vivo dose distributions of 15 different VMAT plans (H&N, lung, prostate and rectum) were selected. To simulate the effect of delivery errors, their TPS plans were modified by: 1) scaling of the monitor units by ±3% and ±6% and 2) systematic shifting of leaf bank positions by ±1 mm, ±2 mm and ±5 mm. The 3D in vivo dose distributions were then compared to the unmodified and modified treatment plans. To determine the detectability of the various delivery errors, we made use of a receiver operating characteristic (ROC) methodology. True positive and false positive rates were calculated as a function of the γ-parameters γmean, γ1% (near-maximum γ) and the PTV dose parameter ΔD50 (i.e. D50(EPID)-D50(TPS)). The ROC curve is constructed by plotting the true positive rate vs. the false positive rate. The area under the ROC curve (AUC) then serves as a measure of the performance of the EPID dosimetry system in detecting a particular error; an ideal system has AUC=1. Results: The AUC ranges for the machine output errors and systematic leaf position errors were [0.64 - 0.93] and [0.48 - 0.92] respectively using γmean, [0.57 - 0.79] and [0.46 - 0.85] using γ1% and [0.61 - 0.77] and [0.48 - 0.62] using ΔD50. Conclusion: For the verification of VMAT deliveries, the parameter γmean is the best discriminator for the detection of systematic leaf position errors and monitor unit scaling errors. Compared to γmean and γ1%, the parameter ΔD50 performs worse as a discriminator in all cases.
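A minimal sketch of the ROC/AUC procedure described: score error-free and error-induced deliveries with a metric (e.g., γmean), sweep a decision threshold to obtain true- and false-positive rates, and integrate the curve. The score distributions below are synthetic stand-ins, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
scores_ok = rng.normal(0.35, 0.08, 200)   # metric for unmodified plans
scores_err = rng.normal(0.55, 0.12, 200)  # metric for error-induced plans

thresholds = np.linspace(0.0, 1.2, 200)
tpr = [(scores_err > t).mean() for t in thresholds]  # true-positive rate
fpr = [(scores_ok > t).mean() for t in thresholds]   # false-positive rate

# AUC via trapezoidal integration over the (sorted) ROC points.
order = np.argsort(fpr)
auc = np.trapz(np.array(tpr)[order], np.array(fpr)[order])
print(f"AUC = {auc:.3f}")  # 1.0 = perfect discrimination, 0.5 = chance
```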
A Case for Soft Error Detection and Correction in Computational Chemistry.
van Dam, Hubertus J J; Vishnu, Abhinav; de Jong, Wibe A
2013-09-10
High performance computing platforms are expected to deliver 10^18 floating-point operations per second by the year 2022 through the deployment of millions of cores. Even if every core is highly reliable, the sheer number of them means that the mean time between failures will become so short that most application runs will suffer at least one fault. In particular, soft errors caused by intermittent incorrect behavior of the hardware are a concern, as they lead to silent data corruption. In this paper we investigate the impact of soft errors on optimization algorithms using Hartree-Fock as a particular example. Optimization algorithms iteratively reduce the error in the initial guess to reach the intended solution. Therefore they may intuitively appear to be resilient to soft errors. Our results show that this is true for soft errors of small magnitudes but not for large errors. We suggest error detection and correction mechanisms for different classes of data structures. The results obtained with these mechanisms indicate that we can correct more than 95% of the soft errors at moderate increases in the computational cost.
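One checksum-style scheme in the spirit of the detection/correction mechanisms mentioned, though not necessarily the paper's exact mechanism: keep row checksums and a snapshot of a dense matrix, re-verify the checksums each iteration, and restore any corrupted row. Everything below is an illustrative assumption.

```python
import numpy as np

def protect(matrix):
    # Snapshot plus per-row checksums taken at a known-good point.
    return matrix.copy(), matrix.sum(axis=1)

def scan_and_repair(matrix, snapshot, checksums, tol=1e-9):
    # A row whose sum no longer matches its checksum was silently corrupted.
    bad_rows = np.where(np.abs(matrix.sum(axis=1) - checksums) > tol)[0]
    for r in bad_rows:
        matrix[r] = snapshot[r]        # restore the corrupted row
    return len(bad_rows)

fock = np.random.default_rng(1).normal(size=(8, 8))
snap, sums = protect(fock)
fock[3, 2] += 1e3                      # simulated bit-flip: silent corruption
print("rows repaired:", scan_and_repair(fock, snap, sums))  # -> 1
```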
Technology and medication errors: impact in nursing homes.
Baril, Chantal; Gascon, Viviane; St-Pierre, Liette; Lagacé, Denis
2014-01-01
The purpose of this paper is to study a medication distribution technology's (MDT) impact on medication errors reported in public nursing homes in Québec Province. The work was carried out in six nursing homes (800 patients). Medication error data were collected from nursing staff through a voluntary reporting process before and after MDT was implemented. The errors were analysed using total errors, medication error type, severity, and patient consequences. A statistical analysis verified whether there was a significant difference between the variables before and after introducing MDT. The results show that the MDT detected medication errors. The authors' analysis also indicates that errors are detected more rapidly, resulting in less severe consequences for patients. MDT is a step towards safer and more efficient medication processes. Our findings should convince healthcare administrators to implement technology such as electronic prescribers or bar code medication administration systems to improve medication processes and to provide better healthcare to patients. Few studies have been carried out in long-term healthcare facilities such as nursing homes. The authors' study extends what is known about MDT's impact on medication errors in nursing homes.
Real-time line-width measurements: a new feature for reticle inspection systems
NASA Astrophysics Data System (ADS)
Eran, Yair; Greenberg, Gad; Joseph, Amnon; Lustig, Cornel; Mizrahi, Eyal
1997-07-01
The significance of line-width control in mask production has become greater as defect sizes have shrunk. Two conventional methods are used for controlling line-width dimensions in the manufacture of masks for sub-micron devices: critical dimension (CD) measurement and the detection of edge defects. Achieving reliable and accurate control of line-width errors is one of the most challenging tasks in mask production. Neither of the two methods cited above guarantees the detection of line-width errors with good sensitivity over the whole mask area. This stems from the fact that CD measurement provides only statistical data on the mask features, whereas the edge defect detection method checks defects on each edge by itself and does not supply information on the combined result of error detection on two adjacent edges. For example, a combination of a small edge defect together with a CD non-uniformity, which are both within the allowed tolerance, may yield a significant line-width error that will not be detected using the conventional methods (see figure 1). A new approach for the detection of line-width errors which overcomes this difficulty is presented. Based on this approach, a new sensitive line-width error detector was developed and added to Orbot's RT-8000 die-to-database reticle inspection system. This innovative detector operates continuously during the mask inspection process and scans (inspects) the entire area of the reticle for line-width errors. The detection is based on a comparison of measured line widths taken on both the design database and the scanned image of the reticle. In section 2, the motivation for developing this new detector is presented. The section covers an analysis of various defect types which are difficult to detect using conventional edge detection methods or, alternatively, CD measurements. In section 3, the basic concept of the new approach is introduced together with a description of the new detector and its characteristics. In section 4, the calibration process that took place in order to achieve reliable and repeatable line-width measurements is presented. A description of the experiments conducted to evaluate the sensitivity of the new detector is given in section 5, followed by a report of the results of this evaluation. The conclusions are presented in section 6.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sayed, Ahmed R. M. Al; Isa, Zaidi
2015-09-25
Many scholars have shown interest in the relationship between energy consumption (EC), gross domestic product (GDP) and emissions. The main objective of this study is to investigate the relationship between GDP, EC and CO2 within a multivariate model, using the panel data method for the Asian countries Korea, Malaysia, Japan and China with annual data for the period 1960 to 2010. The main finding shows that more than 86% and 78% of CO2 can be explained by EC and GDP in the cross-section model and the period model, respectively. As a result, CO2 emissions should be considered as an important factor in energy consumption and gross domestic product by policy makers.
A novel color vision test for detection of diabetic macular edema.
Shin, Young Joo; Park, Kyu Hyung; Hwang, Jeong-Min; Wee, Won Ryang; Lee, Jin Hak; Lee, In Bum; Hyon, Joon Young
2014-01-02
To determine the sensitivity of the Seoul National University (SNU) computerized color vision test for detecting diabetic macular edema. From May to September 2003, a total of 73 eyes of 73 patients with diabetes mellitus were examined using the SNU computerized color vision test and optical coherence tomography (OCT). Color deficiency was quantified as the total error score on the SNU test and as error scores for each of four color quadrants corresponding to yellows (Q1), greens (Q2), blues (Q3), and reds (Q4). SNU error scores were assessed as a function of OCT foveal thickness and total macular volume (TMV). The error scores in Q1, Q2, Q3, and Q4 measured by the SNU color vision test increased with foveal thickness (P < 0.05), whereas they were not correlated with TMV. Total error scores, the summation of Q1 and Q3, the summation of Q2 and Q4, and blue-yellow (B-Y) error scores were significantly correlated with foveal thickness (P < 0.05), but not with TMV. The observed correlation between SNU color test error scores and foveal thickness indicates that the SNU test may be useful for detection and monitoring of diabetic macular edema.
Experimental investigation of false positive errors in auditory species occurrence surveys
Miller, David A.W.; Weir, Linda A.; McClintock, Brett T.; Grant, Evan H. Campbell; Bailey, Larissa L.; Simons, Theodore R.
2012-01-01
False positive errors are a significant component of many ecological data sets, which in combination with false negative errors, can lead to severe biases in conclusions about ecological systems. We present results of a field experiment where observers recorded observations for known combinations of electronically broadcast calling anurans under conditions mimicking field surveys to determine species occurrence. Our objectives were to characterize false positive error probabilities for auditory methods based on a large number of observers, to determine if targeted instruction could be used to reduce false positive error rates, and to establish useful predictors of among-observer and among-species differences in error rates. We recruited 31 observers, ranging in abilities from novice to expert, that recorded detections for 12 species during 180 calling trials (66,960 total observations). All observers made multiple false positive errors and on average 8.1% of recorded detections in the experiment were false positive errors. Additional instruction had only minor effects on error rates. After instruction, false positive error probabilities decreased by 16% for treatment individuals compared to controls with broad confidence interval overlap of 0 (95% CI: -46 to 30%). This coincided with an increase in false negative errors due to the treatment (26%; -3 to 61%). Differences among observers in false positive and in false negative error rates were best predicted by scores from an online test and a self-assessment of observer ability completed prior to the field experiment. In contrast, years of experience conducting call surveys was a weak predictor of error rates. False positive errors were also more common for species that were played more frequently, but were not related to the dominant spectral frequency of the call. Our results corroborate other work that demonstrates false positives are a significant component of species occurrence data collected by auditory methods. Instructing observers to only report detections they are completely certain are correct is not sufficient to eliminate errors. As a result, analytical methods that account for false positive errors will be needed, and independent testing of observer ability is a useful predictor for among-observer variation in observation error rates.
Chen, Xiaowen; Zhao, Luqian; Qin, Hongran; Zhao, Meijia; Zhou, Yirui; Yang, Shuqiang; Su, Xu; Xu, Xiaohua
2014-05-01
The aim of this work was to develop a method to provide rapid results for humans with internal radioactive contamination. The authors hypothesized that valuable information could be obtained from gas proportional counter techniques by rapidly screening urine samples from potentially exposed individuals. Recommended gross alpha and beta activity screening methods generally employ gas proportional counting techniques. Based on International Standards Organization (ISO) methods, improvements were made in the evaporation process to develop a method providing rapid results, adequate sensitivity, and minimal sample preparation and operator intervention for humans with internal radioactive contamination. The method described in an American National Standards Institute publication was used to calibrate the gas proportional counter, and urine samples from patients with or without radionuclide treatment were measured to validate the method. By improving the evaporation process, the time required to perform the assay was reduced dramatically. Compared with the reference data, the results of the validation samples were very satisfactory with respect to gross-alpha and gross-beta activities. The gas flow proportional counting method described here has the potential for radioactivity monitoring in the body. This method is easy, efficient, and fast, and its application is of great utility in determining whether a sample should be analyzed by a more complicated method, for example radiochemical and/or γ-spectroscopy. In the future, it may be used commonly in medical examination and nuclear emergency treatment.
Bayesian inversions of a dynamic vegetation model in four European grassland sites
NASA Astrophysics Data System (ADS)
Minet, J.; Laloy, E.; Tychon, B.; François, L.
2015-01-01
Eddy covariance data from four European grassland sites are used to probabilistically invert the CARAIB dynamic vegetation model (DVM) with ten unknown parameters, using the DREAM(ZS) Markov chain Monte Carlo (MCMC) sampler. We compare model inversions considering both homoscedastic and heteroscedastic eddy covariance residual errors, with variances either fixed a priori or jointly inferred with the model parameters. Agreement between measured and simulated data during calibration is comparable with previous studies, with root-mean-square errors (RMSE) of simulated daily gross primary productivity (GPP), ecosystem respiration (RECO) and evapotranspiration (ET) ranging from 1.73 to 2.19 g C m-2 day-1, 1.04 to 1.56 g C m-2 day-1, and 0.50 to 1.28 mm day-1, respectively. In validation, mismatches between measured and simulated data are larger, but still with Nash-Sutcliffe efficiency scores above 0.5 for three out of the four sites. Although measurement errors associated with eddy covariance data are known to be heteroscedastic, we showed that assuming a classical linear heteroscedastic model of the residual errors in the inversion does not fully remove heteroscedasticity. Since the employed heteroscedastic error model allows for larger deviations between simulated and measured data as the magnitude of the measured data increases, this error model expectedly leads to poorer data fitting compared to inversions considering a constant variance of the residual errors. Furthermore, sampling the residual error variances along with the model parameters results in overall similar model parameter posterior distributions as those obtained by fixing these variances beforehand, while slightly improving model performance. Despite the fact that the calibrated model is generally capable of fitting the data within measurement errors, systematic biases in the model simulations are observed. These are likely due to model inadequacies such as shortcomings in the photosynthesis modelling. Besides model behaviour, differences between model parameter posterior distributions among the four grassland sites are also investigated. It is shown that the marginal distributions of the specific leaf area and characteristic mortality time parameters can be explained by site-specific ecophysiological characteristics. Lastly, the possibility of finding a common set of parameters among the four experimental sites is discussed.
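A sketch of the kind of linear heteroscedastic residual-error model mentioned above, where the residual standard deviation grows with the simulated magnitude, sigma_i = a + b*|y_sim_i|. The coefficients and flux values are illustrative assumptions, not the study's calibrated values.

```python
import numpy as np

def log_likelihood(y_obs, y_sim, a=0.1, b=0.1):
    # Heteroscedastic Gaussian likelihood: variance scales with magnitude.
    sigma = a + b * np.abs(y_sim)
    resid = y_obs - y_sim
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                  - 0.5 * (resid / sigma) ** 2)

y_sim = np.array([1.0, 3.0, 6.0])   # e.g., daily GPP in g C m-2 day-1
y_obs = np.array([1.2, 2.7, 6.8])
print(f"log-likelihood: {log_likelihood(y_obs, y_sim):.2f}")
# With b > 0, larger fluxes tolerate larger misfits, which is why this
# error model fits high-magnitude data more loosely than a constant
# variance would.
```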
NASA Technical Reports Server (NTRS)
Weinstein, Bernice
1999-01-01
A strategy for detecting control law calculation errors in critical flight control computers during laboratory validation testing is presented. This paper addresses Part I of the detection strategy which involves the use of modeling of the aircraft control laws and the design of Kalman filters to predict the correct control commands. Part II of the strategy which involves the use of the predicted control commands to detect control command errors is presented in the companion paper.
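In the spirit of the two-part strategy above, a minimal scalar Kalman-filter sketch: predict the expected control command from a model and flag a calculation error when the computed command departs from the prediction. The dynamics, noise levels, and 5-sigma flag threshold are assumptions for illustration, not the paper's design.

```python
import numpy as np

def kalman_monitor(commands, phi=0.95, q=0.01, r=0.04, n_sigma=5.0):
    x, p = commands[0], 1.0          # state estimate and its variance
    flags = []
    for z in commands[1:]:
        x_pred, p_pred = phi * x, phi * p * phi + q   # predict step
        innov = z - x_pred                            # innovation
        s = p_pred + r                                # innovation variance
        flags.append(abs(innov) > n_sigma * np.sqrt(s))
        k = p_pred / s                                # update step
        x, p = x_pred + k * innov, (1 - k) * p_pred
    return flags

cmds = [1.0, 0.97, 0.95, 0.92, 2.6, 0.88]  # 2.6 mimics a calculation error
print(kalman_monitor(np.array(cmds)))       # only the spike is flagged
```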
Software error data collection and categorization
NASA Technical Reports Server (NTRS)
Ostrand, T. J.; Weyuker, E. J.
1982-01-01
Software errors detected during development of an interactive special purpose editor system were studied. This product was followed during nine months of coding, unit testing, function testing, and system testing. A new error categorization scheme was developed.
Improved Snow Mapping Accuracy with Revised MODIS Snow Algorithm
NASA Technical Reports Server (NTRS)
Riggs, George; Hall, Dorothy K.
2012-01-01
The MODIS snow cover products have been used in over 225 published studies. From those reports, and our ongoing analysis, we have learned about the accuracy and errors in the snow products. Revisions have been made in the algorithms to improve the accuracy of snow cover detection in Collection 6 (C6), the next processing/reprocessing of the MODIS data archive planned to start in September 2012. Our objective in the C6 revision of the MODIS snow-cover algorithms and products is to maximize the capability to detect snow cover while minimizing snow detection errors of commission and omission. While the basic snow detection algorithm will not change, new screens will be applied to alleviate snow detection commission and omission errors, and only the fractional snow cover (FSC) will be output (the binary snow cover area (SCA) map will no longer be included).
Errors in Bibliographic Citations: A Continuing Problem.
ERIC Educational Resources Information Center
Sweetland, James H.
1989-01-01
Summarizes studies examining citation errors and illustrates errors resulting from a lack of standardization, misunderstanding of foreign languages, failure to examine the document cited, and general lack of training in citation norms. It is argued that the failure to detect and correct citation errors is due to diffusion of responsibility in the…
Periodic Application of Concurrent Error Detection in Processor Array Architectures. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Chen, Paul Peichuan
1993-01-01
Processor arrays can provide an attractive architecture for some applications. Featuring modularity, regular interconnection and high parallelism, such arrays are well-suited for VLSI/WSI implementations, and applications with high computational requirements, such as real-time signal processing. Preserving the integrity of results can be of paramount importance for certain applications. In these cases, fault tolerance should be used to ensure reliable delivery of a system's service. One aspect of fault tolerance is the detection of errors caused by faults. Concurrent error detection (CED) techniques offer the advantage that transient and intermittent faults may be detected with greater probability than with off-line diagnostic tests. Applying time-redundant CED techniques can reduce hardware redundancy costs. However, most time-redundant CED techniques degrade a system's performance.
Error Detection Processes during Observational Learning
ERIC Educational Resources Information Center
Badets, Arnaud; Blandin, Yannick; Wright, David L.; Shea, Charles H.
2006-01-01
The purpose of this experiment was to determine whether a faded knowledge of results (KR) frequency during observation of a model's performance enhanced error detection capabilities. During the observation phase, participants observed a model performing a timing task and received KR about the model's performance on each trial or on one of two…
Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M
2018-04-01
A rough estimate indicated that the use of samples of size no larger than ten is not uncommon in biomedical research, and that many such studies are limited to strong effects due to sample sizes smaller than six. For data collected from biomedical experiments it is also often unknown whether the mathematical requirements incorporated in the sample comparison methods are satisfied. Computer-simulated experiments were used to examine the performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, distribution of studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. A sample size of 9 and the t-test method with p = 5% ensured an error smaller than 5% even for weak effects. For sample sizes 6-8 the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3-5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is granted by the standard error of the mean method. The increase of sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes and the method of their comparison should be justified by the objective of the experiment. Copyright © 2018 Elsevier B.V. All rights reserved.
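A simulation in the spirit of the abstract's computer experiments: estimate the Type I and Type II error rates of a two-sample t-test as a function of sample size. The effect size, trial count, and alpha level below are illustrative choices, not the study's exact settings.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def error_rates(n, effect=1.0, sd=1.0, trials=5000, alpha=0.05):
    type1 = type2 = 0
    for _ in range(trials):
        # Null case: no true effect, so a "detection" is a Type I error.
        a, b = rng.normal(0, sd, n), rng.normal(0, sd, n)
        if stats.ttest_ind(a, b).pvalue < alpha:
            type1 += 1
        # Alternative case: a true (weak) effect missed is a Type II error.
        a, b = rng.normal(0, sd, n), rng.normal(effect, sd, n)
        if stats.ttest_ind(a, b).pvalue >= alpha:
            type2 += 1
    return type1 / trials, type2 / trials

for n in (3, 6, 9):
    t1, t2 = error_rates(n)
    print(f"n={n}: Type I ~ {t1:.2f}, Type II ~ {t2:.2f}")
```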
Bisignano, A; Wells, D; Harton, G; Munné, S
2011-12-01
Diagnosis of embryos for chromosome abnormalities, i.e. aneuploidy screening, has been invigorated by the introduction of microarray-based testing methods allowing analysis of 24 chromosomes in one test. Recent data have been suggestive of increased implantation and pregnancy rates following microarray testing. Preimplantation genetic diagnosis for infertility aims to test for gross chromosome changes with the hope that identification and transfer of normal embryos will improve IVF outcomes. Testing by some methods, specifically single-nucleotide polymorphism (SNP) microarrays, allow for more information and potential insight into parental origin of aneuploidy and uniparental disomy. The usefulness and validity of reporting this information is flawed. Numerous papers have shown that the majority of meiotic errors occur in the egg, while mitotic errors in the embryo affect parental chromosomes at random. Potential mistakes made in assigning an error as meiotic or mitotic may lead to erroneous reporting of results with medical consequences. This study's data suggest that the bioinformatic cleaning used to 'fix' the miscalls that plague single-cell whole-genome amplification provides little improvement in the quality of useful data. Based on the information available, SNP-based aneuploidy screening suffers from a number of serious issues that must be resolved. Copyright © 2011 Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.
Li, Qi; Melton, Kristin; Lingren, Todd; Kirkendall, Eric S; Hall, Eric; Zhai, Haijun; Ni, Yizhao; Kaiser, Megan; Stoutenborough, Laura; Solti, Imre
2014-01-01
Although electronic health records (EHRs) have the potential to provide a foundation for quality and safety algorithms, few studies have measured their impact on automated adverse event (AE) and medical error (ME) detection within the neonatal intensive care unit (NICU) environment. This paper presents two phenotyping AE and ME detection algorithms (i.e., IV infiltrations, narcotic medication oversedation and dosing errors) and describes manual annotation of airway management and medication/fluid AEs from NICU EHRs. From 753 NICU patient EHRs from 2011, we developed two automatic AE/ME detection algorithms, and manually annotated 11 classes of AEs in 3263 clinical notes. Performance of the automatic AE/ME detection algorithms was compared to trigger tool and voluntary incident reporting results. AEs in clinical notes were double annotated and consensus achieved under neonatologist supervision. Sensitivity, positive predictive value (PPV), and specificity are reported. Twelve severe IV infiltrates were detected. The algorithm identified one more infiltrate than the trigger tool and eight more than incident reporting. One narcotic oversedation was detected demonstrating 100% agreement with the trigger tool. Additionally, 17 narcotic medication MEs were detected, an increase of 16 cases over voluntary incident reporting. Automated AE/ME detection algorithms provide higher sensitivity and PPV than currently used trigger tools or voluntary incident-reporting systems, including identification of potential dosing and frequency errors that current methods are unequipped to detect. Published by the BMJ Publishing Group Limited.
Permanent-File-Validation Utility Computer Program
NASA Technical Reports Server (NTRS)
Derry, Stephen D.
1988-01-01
Errors in files are detected and corrected during operation. The Permanent File Validation (PFVAL) utility computer program provides CDC CYBER NOS sites with a mechanism to verify the integrity of the permanent file base. It locates and identifies permanent file errors in the Mass Storage Table (MST) and Track Reservation Table (TRT), in permanent file catalog entries (PFC's) in permit sectors, and in disk sector linkage. All detected errors are written to a listing file and to the system and job day files. The program operates by reading system tables, catalog tracks, permit sectors, and disk linkage bytes to validate expected and actual file linkages. It has been used extensively to identify and locate errors in permanent files and enable online correction, reducing computer-system downtime.
Pilot interaction with automated airborne decision making systems
NASA Technical Reports Server (NTRS)
Hammer, John M.; Wan, C. Yoon; Vasandani, Vijay
1987-01-01
The current research is focused on the detection of human error and protection from its consequences. A program for monitoring pilot error by comparing pilot actions to a script was described. It dealt primarily with routine errors (slips) that occurred during checklist activity. The model to which operator actions were compared was a script. Current research is an extension along these two dimensions. The ORS fault detection aid uses a sophisticated device model rather than a script. The newer initiative, the model-based and constraint-based warning system, uses an even more sophisticated device model and is intended to prevent all types of error, not just slips or bad decisions.
ERIC Educational Resources Information Center
Gauderat-Bagault, Laurence; Lehalle, Henri
Children, ages 5 to 8 years (n=71), were required to listen to a partly incorrect sequence of tape-recorded French number words from 1 to 100 and detect the errors. Children (from several schools near Montpellier, France) were from preschool, grade 1, and grade 2. Results show that wrong syntactic rules were better detected than omissions, whereas…
Saito, Masahide; Sano, Naoki; Shibata, Yuki; Kuriyama, Kengo; Komiyama, Takafumi; Marino, Kan; Aoki, Shinichi; Ashizawa, Kazunari; Yoshizawa, Kazuya; Onishi, Hiroshi
2018-05-01
The purpose of this study was to compare the MLC error sensitivity of various measurement devices for VMAT pre-treatment quality assurance (QA). This study used four QA devices (ScandiDos Delta4, PTW 2D-array, iRT Systems IQM, and PTW Farmer chamber). Nine retrospective VMAT plans were used and nine MLC error plans were generated for all nine original VMAT plans. The IQM and Farmer chamber were evaluated using the cumulative signal difference between the baseline and error-induced measurements. In addition, to investigate the sensitivity of the Delta4 device and the 2D-array, global gamma analysis (1%/1 mm, 2%/2 mm, and 3%/3 mm) and dose difference (DD; 1%, 2%, and 3%) were used between the baseline and error-induced measurements. Some deviations of the MLC error sensitivity across the evaluation metrics and MLC error ranges were observed. For the two ionization devices, the sensitivity of the IQM was significantly better than that of the Farmer chamber (P < 0.01), while both devices had a good linear correlation between the cumulative signal difference and the magnitude of MLC errors. The pass rates decreased as the magnitude of the MLC error increased for both the Delta4 and the 2D-array. However, small MLC errors for small aperture sizes, such as for lung SBRT, could not be detected using the loosest gamma criteria (3%/3 mm). Our results indicate that DD could be more useful than gamma analysis for daily MLC QA, and that a large-area ionization chamber has a greater advantage for detecting systematic MLC errors because of its large sensitive volume, while the other devices could not detect this error for some cases with a small range of MLC error. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
SU-E-T-484: In Vivo Dosimetry Tolerances in External Beam Fast Neutron Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Young, L; Gopan, O
Purpose: Optically stimulated luminescence (OSL) dosimetry with Landauer Al2O3:C nanodots was developed at our institution as a passive in vivo dosimetry (IVD) system for patients treated with fast neutron therapy. The purpose of this study was to establish clinically relevant tolerance limits for detecting treatment errors requiring further investigation. Methods: Tolerance levels were estimated by conducting a series of IVD expected dose calculations for square field sizes ranging between 2.8 and 28.8 cm. For each field size evaluated, doses were calculated for open and internal wedged fields with angles of 30°, 45°, or 60°. Theoretical errors were computed for variations of incorrect beam configurations. Dose errors, defined as the percent difference from the expected dose calculation, were measured with groups of three nanodots placed in a 30 x 30 cm solid water phantom at beam isocenter (150 cm SAD, 1.7 cm Dmax). The tolerances were applied to IVD patient measurements. Results: The overall accuracy of the nanodot measurements is 2-3% for open fields. Measurement errors agreed with calculated errors to within 3%. Theoretical estimates of dosimetric errors showed that IVD measurements with OSL nanodots will detect the absence of an internal wedge or a wrong wedge angle. Incorrect nanodot placement on a wedged field is more likely to be caught if the offset is in the direction of the "toe" of the wedge, where the dose difference is about 12%. Errors caused by an incorrect flattening filter size produced a 2% measurement error that is not detectable by IVD measurement alone. Conclusion: IVD with nanodots will detect treatment errors associated with the incorrect implementation of the internal wedge. The results of this study will streamline the physicists' investigations in determining the root cause of an IVD reading that is outside normally accepted tolerances.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McVicker, A; Oldham, M; Yin, F
2014-06-15
Purpose: To test the ability of the TG-119 commissioning process and RPC credentialing to detect errors in the commissioning process for a commercial Treatment Planning System (TPS). Methods: We introduced commissioning errors into the commissioning process for the Anisotropic Analytical Algorithm (AAA) within the Eclipse TPS. We included errors in the Dosimetric Leaf Gap (DLG), electron contamination, flattening filter material, and beam profile measurement with an inappropriately large Farmer chamber (simulated using sliding-window smoothing of profiles). We then evaluated the clinical impact of these errors on clinical intensity modulated radiation therapy (IMRT) plans (head and neck, low and intermediate risk prostate, mesothelioma, and scalp) by looking at PTV D99 and mean and max OAR dose. Finally, for errors with substantial clinical impact we determined the sensitivity of the RPC IMRT film analysis at the midpoint between PTV and OAR using a 4 mm distance-to-agreement metric, and of a 7% TLD dose comparison. We also determined the sensitivity of the three dose planes of the TG-119 C-shape IMRT phantom using gamma criteria of 3%/3 mm. Results: The largest clinical impact came from large changes in the DLG, with a change of 1 mm resulting in up to a 5% change in the primary PTV D99. This resulted in discrepancies in the RPC TLDs in the PTVs and OARs of 7.1% and 13.6% respectively, which would have resulted in detection. While use of an incorrect flattening filter caused only subtle errors (<1%) in clinical plans, the effect was most pronounced for the RPC TLDs in the OARs (>6%). Conclusion: The AAA commissioning process within the Eclipse TPS is surprisingly robust to user error. When errors do occur, the RPC and TG-119 commissioning credentialing criteria are effective at detecting them; however, OAR TLDs are the most sensitive despite the RPC currently excluding them from analysis.
Zhou, Peng; Li, Dongmei; Zhao, Li; Li, Haitao; Zhao, Feng; Zheng, Yuanlai; Fang, Hongda; Lou, Quansheng; Cai, Weixu
2018-06-01
To understand the impact of the Fukushima nuclear accident (FNA), 137Cs, 134Cs, 90Sr, and gross beta were analyzed in the northeast South China Sea (NSCS), the Luzon Strait (LS) and its adjacent areas. 137Cs, 90Sr, and gross beta values in the NSCS were similar to those prior to the FNA. 90Sr and 137Cs in the LS and its adjacent areas were consistent with those in the NSCS. The high 137Cs peak values occurred at a depth of 150 m, whereas the high 90Sr peak values occurred at a depth of 0.5 m. The 137Cs and gross beta mean values in Cruise I were higher than those in Cruise II, whereas the 90Sr mean value showed the reverse. 134Cs in all seawater samples was below the minimum detectable activity. The past and present data since the 1970s suggest that 137Cs and 90Sr in the study areas still originate from global fallout and that the FNA influence is negligible. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Long, Junsheng
1994-01-01
This thesis studies a forward recovery strategy using checkpointing and optimistic execution in parallel and distributed systems. The approach uses replicated tasks executing on different processors for forward recovery and checkpoint comparison for error detection. To reduce overall redundancy, this approach employs lower static redundancy in the common error-free situation to detect errors than the standard N-Modular Redundancy (NMR) scheme uses to mask errors. For the rare occurrence of an error, this approach uses some extra redundancy for recovery. To reduce the run-time recovery overhead, look-ahead processes are used to advance computation speculatively and a rollback process is used to produce a diagnosis for correct look-ahead processes without rollback of the whole system. Both analytical and experimental evaluation have shown that this strategy can provide a nearly error-free execution time, even under faults, with a lower average redundancy than NMR.
A data-driven modeling approach to stochastic computation for low-energy biomedical devices.
Lee, Kyong Ho; Jang, Kuk Jin; Shoeb, Ali; Verma, Naveen
2011-01-01
Low-power devices that can detect clinically relevant correlations in physiologically complex patient signals can enable systems capable of closed-loop response (e.g., controlled actuation of therapeutic stimulators, continuous recording of disease states, etc.). In ultra-low-power platforms, however, hardware error sources are becoming increasingly limiting. In this paper, we present how data-driven methods, which allow us to accurately model physiological signals, also allow us to effectively model and overcome prominent hardware error sources with nearly no additional overhead. Two applications, EEG-based seizure detection and ECG-based arrhythmia-beat classification, are synthesized to a logic-gate implementation, and two prominent error sources are introduced: (1) SRAM bit-cell errors and (2) logic-gate switching errors ('stuck-at' faults). Using patient data from the CHB-MIT and MIT-BIH databases, performance similar to error-free hardware is achieved even for very high fault rates (up to 0.5 for SRAMs and 7 × 10⁻² for logic) that cause computational bit error rates as high as 50%.
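The SRAM fault model described above can be mimicked in simulation by flipping stored bits at a given rate. The sketch below is a generic illustration of such fault injection, not the authors' synthesis flow; all sizes and rates are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def inject_bit_errors(words, fault_rate, bits=16):
    """Flip each stored bit independently with probability fault_rate,
    emulating SRAM bit-cell failures in an aggressively scaled memory."""
    flips = rng.random((words.size, bits)) < fault_rate
    masks = (flips * (1 << np.arange(bits))).sum(axis=1).astype(words.dtype)
    return words ^ masks

weights = rng.integers(0, 2**16, size=1000, dtype=np.uint16)
corrupted = inject_bit_errors(weights, fault_rate=0.05)
print(f"fraction of corrupted words: {np.mean(weights != corrupted):.2%}")
```

A data-driven classifier retrained (or regularized) against such corrupted parameters is then evaluated on the same patient data to check whether accuracy survives the injected faults.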
The use of γ-rays analysis by HPGe detector to assess the gross alpha and beta activities in waters.
Casagrande, M F S; Bonotto, D M
2018-07-01
This paper describes an alternative method for evaluating gross alpha and beta radioactivity in waters by γ-ray analysis performed with a hyper-pure germanium (HPGe) detector. Several gamma emissions related to α and β− decays were used to provide activity concentration data for natural radionuclides commonly present in waters, such as 40K and those belonging to the 238U and 232Th decay series. The most suitable gamma emissions related to β− decays were 214Bi (1120.29 keV, 238U series) and 208Tl (583.19 keV, 232Th series), as the activity concentration equation yielded values compatible with those generated by the formula taking the detection efficiency into account. The absence of isolated and intense γ-ray peaks associated with α decays limited the choice to 226Ra (186.21 keV, 238U series) and 224Ra (240.99 keV, 232Th series). In these cases, appropriate correction factors involving the absolute intensities and specific activities were adopted to avoid interference from other γ-ray energies. The critical level of detection across the 186-1461 keV energy region corresponded to 0.010, 0.023, 0.038, 0.086, and 0.042 Bq/L for 226Ra, 224Ra, 208Tl, 214Bi, and 40K, respectively. This is much lower than the WHO guideline reference values for gross alpha (0.5 Bq/L) and beta (1.0 Bq/L) in waters. The applicability of the method was checked by analyzing groundwater samples from different aquifer systems in the Brazilian states of São Paulo, Minas Gerais and Mato Grosso do Sul. The waters exhibit very different chemical compositions, and the samples with the highest radioactivity levels were those associated with lithotypes possessing enhanced uranium and thorium levels. The technique allowed the 40K contribution to be directly discarded from the gross beta activity, as potassium is an essential element for humans. Copyright © 2018 Elsevier Ltd. All rights reserved.
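The underlying conversion from a net peak area to an activity concentration takes the usual form A = N / (ε · Iγ · t · V). The sketch below illustrates it with invented numbers: the counts, efficiency, and counting time are hypothetical, and the 214Bi emission intensity used should be checked against a nuclide library.

```python
def activity_concentration(net_counts, efficiency, gamma_intensity,
                           live_time_s, volume_L):
    """Activity concentration (Bq/L) from a net gamma peak area:
    A = N / (eps * I_gamma * t * V)."""
    return net_counts / (efficiency * gamma_intensity * live_time_s * volume_L)

# Hypothetical 214Bi 1120.29 keV peak from a 1 L water sample, 1 day count.
A = activity_concentration(net_counts=420, efficiency=0.012,
                           gamma_intensity=0.149,  # assumed 214Bi branching
                           live_time_s=86400, volume_L=1.0)
print(f"{A:.3f} Bq/L")
```

The radium peaks additionally need the correction factors mentioned in the abstract to remove contributions from interfering γ-ray energies.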
NASA Astrophysics Data System (ADS)
Robichaud, A.; Ménard, R.
2013-05-01
We present multi-year objective analyses (OA) on a high spatio-temporal resolution (15 or 21 km, every hour) for the warm season period (1 May-31 October) for ground-level ozone (2002-2012) and for fine particulate matter (diameter less than 2.5 microns (PM2.5)) (2004-2012). The OA used here combines the Canadian Air Quality forecast suite with US and Canadian surface air quality monitoring sites. The analysis is based on an optimal interpolation with capabilities for adaptive error statistics for ozone and PM2.5 and an explicit bias correction scheme for the PM2.5 analyses. The estimation of error statistics has been computed using a modified version of the Hollingsworth-Lönnberg (H-L) method. Various quality controls (gross error check, sudden jump test and background check) have been applied to the observations to remove outliers. An additional quality control is applied to check the consistency of the error statistics estimation model at each observing station and for each hour. The error statistics are further tuned "on the fly" using a χ2 (chi-square) diagnostic, a procedure which verifies significantly better than the untuned analysis. Successful cross-validation experiments were performed with an OA set-up using 90% of observations to build the objective analysis and with the remainder left out as an independent set of data for verification purposes. Furthermore, comparisons with other external sources of information (global models and PM2.5 satellite-derived surface measurements) show reasonable agreement. The multi-year analyses obtained provide relatively high precision with an absolute yearly averaged systematic error of less than 0.6 ppbv (parts per billion by volume) and 0.7 μg m-3 (micrograms per cubic meter) for ozone and PM2.5 respectively and a random error generally less than 9 ppbv for ozone and under 12 μg m-3 for PM2.5. In this paper, we focus on two applications: (1) presenting long-term averages of the objective analysis and analysis increments as a form of summer climatology and (2) analyzing long-term (decadal) trends and inter-annual fluctuations using OA outputs. Our results show that high percentiles of ozone and PM2.5 are both following a decreasing trend overall in North America, with the eastern part of the United States (US) presenting the largest decrease, likely due to more effective pollution controls. Some locations, however, exhibited an increasing trend in mean ozone and PM2.5, such as the northwestern part of North America (northwest US and Alberta). The low percentiles are generally rising for ozone, which may be linked to increasing emissions from emerging countries and the resulting pollution brought by intercontinental transport. After removing the decadal trend, we demonstrate that the inter-annual fluctuations of the high percentiles are significantly correlated with temperature fluctuations for ozone and precipitation fluctuations for PM2.5. We also show that there was a moderately significant correlation between the inter-annual fluctuations of the high percentiles of ozone and PM2.5 and economic indices such as the industrial Dow Jones and/or the US gross domestic product growth rate.
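The χ2 diagnostic mentioned above checks that the innovations (observation-minus-background differences) are statistically consistent with the assumed error covariances. A minimal sketch follows, with synthetic, consistent innovations; HBHt and R stand for the background and observation error covariances mapped to observation space, and all numbers are invented.

```python
import numpy as np

def chi2_ratio(innov, HBHt, R):
    """J = d^T (HBH^T + R)^-1 d divided by the number of observations p;
    a value far from 1 signals mis-specified error statistics and can be
    used to rescale the assumed variances 'on the fly'."""
    S = HBHt + R
    return innov @ np.linalg.solve(S, innov) / innov.size

rng = np.random.default_rng(0)
p = 200
HBHt = 4.0 * np.eye(p)   # background error in observation space
R = 9.0 * np.eye(p)      # observation error covariance
d = rng.multivariate_normal(np.zeros(p), HBHt + R)  # consistent innovations
print(f"J/p = {chi2_ratio(d, HBHt, R):.2f} (near 1 when statistics are consistent)")
```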
Daverio, Marco; Fino, Giuliana; Luca, Brugnaro; Zaggia, Cristina; Pettenazzo, Andrea; Parpaiola, Antonella; Lago, Paola; Amigoni, Angela
2015-12-01
Errors are estimated to occur with an incidence of 3.7-16.6% in hospitalized patients. The application of systems for the detection of adverse events is becoming a widespread reality in healthcare. Incident reporting (IR) and failure mode and effects analysis (FMEA) are strategies widely used to detect errors, but no studies have combined them in the setting of a pediatric intensive care unit (PICU). The aim of our study was to describe the trend of IR in a PICU and evaluate the effect of FMEA application on the number and severity of the errors detected. With this prospective observational study, we evaluated the frequency of IRs documented in standard IR forms completed from January 2009 to December 2012 in the PICU of the Woman's and Child's Health Department of Padova. On the basis of their severity, errors were classified as: without outcome (55%), with minor outcome (16%), with moderate outcome (10%), and with major outcome (3%); 16% of reported incidents were 'near misses'. We compared the data before and after the introduction of FMEA. Sixty-nine errors were registered, 59 (86%) concerning drug therapy (83% during prescription). Compared to 2009-2010, in 2011-2012 we noted an increase in reported errors (43 vs 26) with a reduction in their severity (21% vs 8% 'near misses' and 65% vs 38% errors with no outcome). With the introduction of FMEA, we obtained increased awareness in error reporting. Application of these systems will improve the quality of healthcare services. © 2015 John Wiley & Sons Ltd.
2012-01-01
Background Presented is the method “Detection and Outline Error Estimates” (DOEE) for assessing rater agreement in the delineation of multiple sclerosis (MS) lesions. The DOEE method divides operator or rater assessment into two parts: 1) Detection Error (DE) -- rater agreement in detecting the same regions to mark, and 2) Outline Error (OE) -- agreement of the raters in outlining of the same lesion. Methods DE, OE and Similarity Index (SI) values were calculated for two raters tested on a set of 17 fluid-attenuated inversion-recovery (FLAIR) images of patients with MS. DE, OE, and SI values were tested for dependence with mean total area (MTA) of the raters' Region of Interests (ROIs). Results When correlated with MTA, neither DE (ρ = .056, p=.83) nor the ratio of OE to MTA (ρ = .23, p=.37), referred to as Outline Error Rate (OER), exhibited significant correlation. In contrast, SI is found to be strongly correlated with MTA (ρ = .75, p < .001). Furthermore, DE and OER values can be used to model the variation in SI with MTA. Conclusions The DE and OER indices are proposed as a better method than SI for comparing rater agreement of ROIs, which also provide specific information for raters to improve their agreement. PMID:22812697
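To make the DE/OE/SI split concrete, a minimal sketch on binary lesion masks follows (it requires NumPy and SciPy). The connected-component treatment of "lesions" and the toy masks are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def doee(mask_a, mask_b):
    """Split rater disagreement into Detection Error (regions one rater
    marked that the other missed entirely) and Outline Error (the remaining
    symmetric difference over jointly detected lesions); also returns the
    Similarity Index SI = 2|A&B| / (|A| + |B|)."""
    labels_a, _ = ndimage.label(mask_a)
    labels_b, _ = ndimage.label(mask_b)
    de = 0
    for lab in range(1, labels_a.max() + 1):
        region = labels_a == lab
        if not (region & mask_b).any():   # lesion missed by rater B entirely
            de += region.sum()
    for lab in range(1, labels_b.max() + 1):
        region = labels_b == lab
        if not (region & mask_a).any():   # lesion missed by rater A entirely
            de += region.sum()
    oe = np.logical_xor(mask_a, mask_b).sum() - de
    si = 2 * np.logical_and(mask_a, mask_b).sum() / (mask_a.sum() + mask_b.sum())
    return de, oe, si

a = np.zeros((20, 20), bool); a[2:6, 2:6] = True; a[10:14, 10:14] = True
b = np.zeros((20, 20), bool); b[3:7, 2:6] = True   # overlaps the first lesion only
print(doee(a, b))  # -> (16, 8, 0.5): one missed lesion, outline mismatch, SI
```

The Outline Error Rate from the abstract is then simply OE divided by the raters' mean total area.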
Acoustic Evidence for Phonologically Mismatched Speech Errors
ERIC Educational Resources Information Center
Gormley, Andrea
2015-01-01
Speech errors are generally said to accommodate to their new phonological context. This accommodation has been validated by several transcription studies. The transcription methodology is not the best choice for detecting errors at this level, however, as this type of error can be difficult to perceive. This paper presents an acoustic analysis of…
Fostering the Intelligent Novice: Learning from Errors with Metacognitive Tutoring
ERIC Educational Resources Information Center
Mathan, Santosh A.; Koedinger, Kenneth R.
2005-01-01
This article explores 2 important aspects of metacognition: (a) how students monitor their ongoing performance to detect and correct errors and (b) how students reflect on those errors to learn from them. Although many instructional theories have advocated providing students with immediate feedback on errors, some researchers have argued that…
Measurement Error and Equating Error in Power Analysis
ERIC Educational Resources Information Center
Phillips, Gary W.; Jiang, Tao
2016-01-01
Power analysis is a fundamental prerequisite for conducting scientific research. Without power analysis the researcher has no way of knowing whether the sample size is large enough to detect the effect he or she is looking for. This paper demonstrates how psychometric factors such as measurement error and equating error affect the power of…
The Neural Basis of Error Detection: Conflict Monitoring and the Error-Related Negativity
ERIC Educational Resources Information Center
Yeung, Nick; Botvinick, Matthew M.; Cohen, Jonathan D.
2004-01-01
According to a recent theory, anterior cingulate cortex is sensitive to response conflict, the coactivation of mutually incompatible responses. The present research develops this theory to provide a new account of the error-related negativity (ERN), a scalp potential observed following errors. Connectionist simulations of response conflict in an…
Near Misses in Financial Trading: Skills for Capturing and Averting Error.
Leaver, Meghan; Griffiths, Alex; Reader, Tom
2018-05-01
The aims of this study were (a) to determine whether near-miss incidents in financial trading contain information on the operator skills and systems that detect and prevent near misses and the patterns and trends revealed by these data and (b) to explore if particular operator skills and systems are found as important for avoiding particular types of error on the trading floor. In this study, we examine a cohort of near-miss incidents collected from a financial trading organization using the Financial Incident Analysis System and report on the nontechnical skills and systems that are used to detect and prevent error in this domain. One thousand near-miss incidents are analyzed using distribution, mean, chi-square, and associative analysis to describe the data; reliability is provided. Slips/lapses (52%) and human-computer interface problems (21%) often occur alone and are the main contributors to error causation, whereas the prevention of error is largely a result of teamwork (65%) and situation awareness (46%) skills. No matter the cause of error, situation awareness and teamwork skills are used most often to detect and prevent the error. Situation awareness and teamwork skills appear universally important as a "last line" of defense for capturing error, and data from incident-monitoring systems can be analyzed in a fashion more consistent with a "Safety-II" approach. This research provides data for ameliorating risk within financial trading organizations, with implications for future risk management programs and regulation.
Field evaluation of distance-estimation error during wetland-dependent bird surveys
Nadeau, Christopher P.; Conway, Courtney J.
2012-01-01
Context: The most common methods to estimate detection probability during avian point-count surveys involve recording a distance between the survey point and individual birds detected during the survey period. Accurately measuring or estimating distance is an important assumption of these methods; however, this assumption is rarely tested in the context of aural avian point-count surveys. Aims: We expand on recent bird-simulation studies to document the error associated with estimating distance to calling birds in a wetland ecosystem. Methods: We used two approaches to estimate the error associated with five surveyors' distance estimates between the survey point and calling birds, and to determine the factors that affect a surveyor's ability to estimate distance. Key results: We observed biased and imprecise distance estimates when estimating distance to simulated birds in a point-count scenario (x̄error = -9 m, s.d.error = 47 m) and when estimating distances to real birds during field trials (x̄error = 39 m, s.d.error = 79 m). The amount of bias and precision in distance estimates differed among surveyors; surveyors with more training and experience were less biased and more precise when estimating distance to both real and simulated birds. Three environmental factors were important in explaining the error associated with distance estimates: the measured distance from the bird to the surveyor, the volume of the call, and the species of bird. Surveyors tended to make large overestimations to birds close to the survey point, which is an especially serious error in distance sampling. Conclusions: Our results suggest that distance-estimation error is prevalent, but surveyor training may be the easiest way to reduce distance-estimation error. Implications: The present study has demonstrated how relatively simple field trials can be used to estimate the error associated with distance estimates used to estimate detection probability during avian point-count surveys. Evaluating distance-estimation errors will allow investigators to better evaluate the accuracy of avian density and trend estimates. Moreover, investigators who evaluate distance-estimation errors could employ recently developed models to incorporate distance-estimation error into analyses. We encourage further development of such models, including the inclusion of such models into distance-analysis software.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parker, S
2015-06-15
Purpose: To evaluate the ability of statistical process control methods to detect systematic errors when using a two dimensional (2D) detector array for routine electron beam energy verification. Methods: Electron beam energy constancy was measured using an aluminum wedge and a 2D diode array on four linear accelerators. Process control limits were established. Measurements were recorded in control charts and compared with both calculated process control limits and TG-142 recommended specification limits. The data was tested for normality, process capability and process acceptability. Additional measurements were recorded while systematic errors were intentionally introduced. Systematic errors included shifts in the alignment of the wedge, incorrect orientation of the wedge, and incorrect array calibration. Results: Control limits calculated for each beam were smaller than the recommended specification limits. Process capability and process acceptability ratios were greater than one in all cases. All data was normally distributed. Shifts in the alignment of the wedge were most apparent for low energies. The smallest shift (0.5 mm) was detectable using process control limits in some cases, while the largest shift (2 mm) was detectable using specification limits in only one case. The wedge orientation tested did not affect the measurements as this did not affect the thickness of aluminum over the detectors of interest. Array calibration dependence varied with energy and selected array calibration. 6 MeV was the least sensitive to array calibration selection while 16 MeV was the most sensitive. Conclusion: Statistical process control methods demonstrated that the data distribution was normally distributed, the process was capable of meeting specifications, and that the process was centered within the specification limits. Though not all systematic errors were distinguishable from random errors, process control limits increased the ability to detect systematic errors using routine measurement of electron beam energy constancy.
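A minimal sketch of the statistical process control calculations involved follows: individuals-chart control limits and the capability ratios Cp/Cpk against specification limits. The data, metric, and tolerance values are invented for illustration.

```python
import numpy as np

def individuals_limits(x):
    """Individuals (I-MR) control limits: mean +/- 2.66 * average moving
    range, the usual 3-sigma estimate for an individuals chart."""
    mr = np.abs(np.diff(x)).mean()
    center = x.mean()
    return center - 2.66 * mr, center, center + 2.66 * mr

def capability(x, lsl, usl):
    """Cp and Cpk against specification (e.g., tolerance) limits."""
    mu, sigma = x.mean(), x.std(ddof=1)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

rng = np.random.default_rng(3)
energy_metric = rng.normal(1.00, 0.004, size=30)   # hypothetical wedge ratio
lcl, cl, ucl = individuals_limits(energy_metric)
print(f"LCL={lcl:.3f} CL={cl:.3f} UCL={ucl:.3f}")
print("Cp=%.2f Cpk=%.2f" % capability(energy_metric, 0.98, 1.02))
```

Control limits tighter than the specification limits, with Cp and Cpk above one, reproduce the qualitative situation the abstract reports.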
NASA Astrophysics Data System (ADS)
Poluyan, A. Y.; Fugarov, D. D.; Purchina, O. A.; Nesterchuk, V. V.; Smirnova, O. V.; Petrenkova, S. B.
2018-05-01
To date, problems associated with the detection of errors in digital equipment (DE) systems for the automation of explosive objects of the oil and gas complex are extremely pressing. This problem is especially acute for facilities where a loss of DE accuracy will inevitably lead to man-made disasters and substantial material damage; at such facilities, diagnostics of the accuracy of DE operation is one of the main elements of the industrial safety management system. In this work, the problem of selecting the optimal variant of the error detection system according to a validation criterion is solved. Known methods for solving such problems have exponential complexity, so, to reduce the time needed to solve the problem, the validation criterion is implemented as an adaptive bionic algorithm. Bionic algorithms (BA) have proven effective in solving optimization problems. The advantages of bionic search include adaptability, learning ability, parallelism, and the ability to build hybrid systems by combining algorithms [1].
Garbe, James C.; Vrba, Lukas; Sputova, Klara; ...
2014-10-29
Telomerase reactivation and immortalization are critical for human carcinoma progression. However, little is known about the mechanisms controlling this crucial step, due in part to the paucity of experimentally tractable model systems that can examine human epithelial cell immortalization as it might occur in vivo. We achieved efficient non-clonal immortalization of normal human mammary epithelial cells (HMEC) by directly targeting the 2 main senescence barriers encountered by cultured HMEC. The stress-associated stasis barrier was bypassed using shRNA to p16INK4; replicative senescence due to critically shortened telomeres was bypassed in post-stasis HMEC by c-MYC transduction. Thus, 2 pathologically relevant oncogenic agents are sufficient to immortally transform normal HMEC. The resultant non-clonal immortalized lines exhibited normal karyotypes. Most human carcinomas contain genomically unstable cells, with widespread instability first observed in vivo in pre-malignant stages; in vitro, instability is seen as finite cells with critically shortened telomeres approach replicative senescence. Our results support our hypotheses that: (1) telomere-dysfunction induced genomic instability in pre-malignant finite cells may generate the errors required for telomerase reactivation and immortalization, as well as many additional “passenger” errors carried forward into resulting carcinomas; (2) genomic instability during cancer progression is needed to generate errors that overcome tumor suppressive barriers, but not required per se; bypassing the senescence barriers by direct targeting eliminated a need for genomic errors to generate immortalization. Achieving efficient HMEC immortalization, in the absence of “passenger” genomic errors, should facilitate examination of telomerase regulation during human carcinoma progression, and exploration of agents that could prevent immortalization.
Mino-León, Dolores; Reyes-Morales, Hortensia; Jasso, Luis; Douvoba, Svetlana Vladislavovna
2012-06-01
Inappropriate prescription is a relevant problem in primary health care settings in Mexico, with potentially harmful consequences for patients. To evaluate the effectiveness of incorporating a pharmacist into the primary care health team to reduce prescription errors for patients with diabetes and/or hypertension. One Family Medicine Clinic of the Mexican Institute of Social Security in Mexico City. A "pharmacotherapy intervention" provided by pharmacists through a quasi-experimental (before-after) design was carried out. Physicians who allowed access to their diabetic and/or hypertensive patients' medical records and prescriptions were included in the study. Prescription errors were classified as "filling", "clinical" or "both". Descriptive analysis, identification of potential drug-drug interactions (pD-DI), and comparison of the proportion of patients with prescriptions with errors detected "before" and "after" the intervention were performed. The main outcome was the decrease in the proportion of patients who received prescriptions with errors after the intervention. Pharmacists detected at least one type of error in 79 out of 160 patients. Errors were "clinical", "both" and "filling" in 47, 21 and 11 of these patients' prescriptions, respectively. The predominant errors were, in the subgroup of prescriptions with "clinical" errors, pD-DI; in the subgroup with "both" errors, lack of information on dosing interval and pD-DI; and in the "filling" subgroup, lack of information on dosing interval. The pD-DI caused 50 % of the errors detected, of which 19 % were of major severity. The impact of the correction of errors post-intervention was observed in 19 % of patients who had erroneous prescriptions before the intervention of the pharmacist (49.3 % to 30.3 %, p < 0.05). The impact of the intervention was relevant from a clinical point of view for the public health services in Mexico. The implementation of early warning systems for the most widely prescribed drugs is an alternative for reducing prescription errors and consequently the risks they may cause.
2013-08-01
[Fragmentary excerpt: the MFE and GFV error estimates ∑(e,ψ) are often similar in size; as a gross measure of the effect of geometric projection and of the use of quadrature, Tables 1 and 2 of the source report the quantities of interest MFE ∑(e,ψ) and GFV ∑(e,ψ) for coarse and fine forward solutions, with the adjoint data components ψu and ψp constant everywhere and ψξ = 0.]
NASA Astrophysics Data System (ADS)
He, Xiaojun; Ma, Haotong; Luo, Chuanxin
2016-10-01
The optical multi-aperture imaging system is an effective way to enlarge the aperture and increase the resolution of a telescope optical system; the difficulty lies in detecting and correcting the co-phase error. This paper presents a method based on the stochastic parallel gradient descent (SPGD) algorithm to correct the co-phase error. Compared with current methods, the SPGD method avoids explicitly detecting the co-phase error. This paper analyzes the influence of piston error and tilt error on image quality for a double-aperture imaging system, introduces the basic principle of the SPGD algorithm, and discusses the influence of the algorithm's key parameters (the gain coefficient and the disturbance amplitude) on error control performance. The results show that SPGD can efficiently correct the co-phase error. The convergence speed of the SPGD algorithm improves as the gain coefficient and disturbance amplitude increase, but the stability of the algorithm is reduced; an adaptive gain coefficient can solve this problem appropriately. These results provide a theoretical reference for co-phase error correction in multi-aperture imaging systems.
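The core SPGD update is easy to state: dither all control channels at once with a random ±amplitude pattern, measure the two-sided change in the image metric, and step along the dither weighted by that change. A toy sketch follows; the separable metric is invented and stands in for a sharpness metric measured from the combined image.

```python
import numpy as np

rng = np.random.default_rng(0)

def metric(phase):
    """Toy image-quality metric: peaks when piston/tilt errors vanish."""
    return np.exp(-np.sum(phase ** 2))

def spgd(phase0, gain=0.5, amp=0.05, iters=500):
    """Stochastic parallel gradient descent: perturb all actuators at once,
    use the two-sided metric change to estimate the ascent direction."""
    u = phase0.copy()
    for _ in range(iters):
        delta = amp * rng.choice([-1.0, 1.0], size=u.size)  # Bernoulli dither
        dJ = metric(u + delta) - metric(u - delta)
        u += gain * dJ * delta   # step along the dither, scaled by the change
    return u

u0 = rng.normal(0, 0.5, size=3)   # piston/tip/tilt of a second aperture
print("initial metric:", metric(u0), "-> final:", metric(spgd(u0)))
```

The gain and amp parameters play exactly the roles discussed above: larger values speed convergence but make the iteration noisier.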
Narayan, Sreenath; Kalhan, Satish C.; Wilson, David L.
2012-01-01
Purpose To reduce swaps in fat-water separation methods, a particular issue on 7T small animal scanners due to field inhomogeneity, using image postprocessing innovations that detect and correct errors in the B0 field map. Materials and Methods Fat-water decompositions and B0 field maps were computed for images of mice acquired on a 7T Bruker BioSpec scanner, using a computationally efficient method for solving the Markov Random Field formulation of the multi-point Dixon model. The B0 field maps were processed with a novel hole-filling method, based on edge strength between regions, and a novel k-means method, based on field-map intensities, which were iteratively applied to automatically detect and reinitialize error regions in the B0 field maps. Errors were manually assessed in the B0 field maps and chemical parameter maps both before and after error correction. Results Partial swaps were found in 6% of images when processed with FLAWLESS. After REFINED correction, only 0.7% of images contained partial swaps, resulting in an 88% decrease in error rate. Complete swaps were not problematic. Conclusion Ex post facto error correction is a viable supplement to a priori techniques for producing globally smooth B0 field maps, without partial swaps. With our processing pipeline, it is possible to process image volumes rapidly, robustly, and almost automatically. PMID:23023815
Gauvin, Hanna S; De Baene, Wouter; Brass, Marcel; Hartsuiker, Robert J
2016-02-01
To minimize the number of errors in speech, and thereby facilitate communication, speech is monitored before articulation. It is, however, unclear at which level during speech production monitoring takes place, and what mechanisms are used to detect and correct errors. The present study investigated whether internal verbal monitoring takes place through the speech perception system, as proposed by perception-based theories of speech monitoring, or whether mechanisms independent of perception are applied, as proposed by production-based theories of speech monitoring. With the use of fMRI during a tongue twister task we observed that error detection in internal speech during noise-masked overt speech production and error detection in speech perception both recruit the same neural network, which includes pre-supplementary motor area (pre-SMA), dorsal anterior cingulate cortex (dACC), anterior insula (AI), and inferior frontal gyrus (IFG). Although production and perception recruit similar areas, as proposed by perception-based accounts, we did not find activation in superior temporal areas (which are typically associated with speech perception) during internal speech monitoring in speech production as hypothesized by these accounts. On the contrary, results are highly compatible with a domain general approach to speech monitoring, by which internal speech monitoring takes place through detection of conflict between response options, which is subsequently resolved by a domain general executive center (e.g., the ACC). Copyright © 2015 Elsevier Inc. All rights reserved.
The detection error of thermal test low-frequency cable based on M sequence correlation algorithm
NASA Astrophysics Data System (ADS)
Wu, Dongliang; Ge, Zheyang; Tong, Xin; Du, Chunlin
2018-04-01
The low accuracy and low efficiency of off-line detection of thermal-test low-frequency cable faults can be addressed by designing a cable fault detection system based on an FPGA that exports an M-sequence code (linear feedback shift register sequence) as the pulse signal source. The design principle of the SSTDR (spread spectrum time-domain reflectometry) reflection method and the hardware setup for on-line monitoring are discussed in this paper. Test data show that the detection error increases with the fault location along the thermal-test low-frequency cable.
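A minimal sketch of the SSTDR idea follows: generate an M-sequence with an LFSR, form the line signal as the incident code plus an attenuated, delayed echo, and locate the echo from the circular correlation peak. The polynomial, sampling rate, and propagation velocity below are illustrative assumptions, not the paper's hardware parameters.

```python
import numpy as np

def m_sequence(nbits=7, taps=(7, 6)):
    """Maximal-length sequence from a Fibonacci LFSR (here x^7 + x^6 + 1),
    mapped to +/-1 chips; period 2^nbits - 1."""
    state = [1] * nbits
    seq = []
    for _ in range(2 ** nbits - 1):
        seq.append(state[-1])
        feedback = state[taps[0] - 1] ^ state[taps[1] - 1]
        state = [feedback] + state[:-1]
    return np.array(seq) * 2.0 - 1.0

def locate_fault(delay_samples=37, reflect=-0.4, fs=100e6, vp=2.0e8):
    """SSTDR sketch: correlate (incident + echo) against the reference code;
    the echo's correlation peak gives the round-trip delay, hence distance."""
    code = m_sequence()
    line = code + reflect * np.roll(code, delay_samples)
    corr = np.fft.ifft(np.fft.fft(line) * np.conj(np.fft.fft(code))).real
    lag = np.argsort(np.abs(corr))[-2]   # strongest peak after the incident one
    return lag, lag / fs * vp / 2.0      # one-way distance in metres

lag, dist = locate_fault()
print(f"echo at lag {lag} samples -> fault at about {dist:.1f} m")
```

The sharp, nearly flat off-peak autocorrelation of an M-sequence is what makes the echo peak easy to isolate even on a live, noisy line.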
Using EHR Data to Detect Prescribing Errors in Rapidly Discontinued Medication Orders.
Burlison, Jonathan D; McDaniel, Robert B; Baker, Donald K; Hasan, Murad; Robertson, Jennifer J; Howard, Scott C; Hoffman, James M
2018-01-01
Previous research developed a new method for locating prescribing errors in rapidly discontinued electronic medication orders. Although effective, the prospective design of that research hinders its feasibility for regular use. Our objectives were to assess a method to retrospectively detect prescribing errors, to characterize the identified errors, and to identify potential improvement opportunities. Electronically submitted medication orders from 28 randomly selected days that were discontinued within 120 minutes of submission were reviewed and categorized as most likely errors, nonerrors, or not enough information to determine status. Identified errors were evaluated by amount of time elapsed from original submission to discontinuation, error type, staff position, and potential clinical significance. Pearson's chi-square test was used to compare rates of errors across prescriber types. In all, 147 errors were identified in 305 medication orders. The method was most effective for orders that were discontinued within 90 minutes. Duplicate orders were most common; physicians in training had the highest error rate ( p < 0.001), and 24 errors were potentially clinically significant. None of the errors were voluntarily reported. It is possible to identify prescribing errors in rapidly discontinued medication orders by using retrospective methods that do not require interrupting prescribers to discuss order details. Future research could validate our methods in different clinical settings. Regular use of this measure could help determine the causes of prescribing errors, track performance, and identify and evaluate interventions to improve prescribing systems and processes. Schattauer GmbH Stuttgart.
NASA Astrophysics Data System (ADS)
Cong, Wang; Xu, Lingdi; Li, Ang
2017-10-01
Large aspheric surfaces, which deviate from a spherical form, are widely used in various optical systems. Compared with spherical surfaces, large aspheric surfaces have many advantages: they improve image quality, correct aberrations, expand the field of view, increase the effective distance, and make the optical system compact and lightweight. With the rapid development of space optics in particular, space sensors require higher resolution and larger viewing angles, so aspheric surfaces are becoming essential components of such optical systems. After coarse grinding of an aspheric surface, the surface profile error is about tens of microns [1]. To achieve the final surface-accuracy requirement, the aspheric surface must be modified quickly, and high-precision testing is the basis of rapid convergence of the surface error. There are many methods for aspheric surface testing [2]: geometric ray detection, the Hartmann test, the Ronchi test, the knife-edge method, direct profilometry, and interferometry, but all of them have disadvantages [6]. In recent years, measurement of aspheric surfaces has become one of the important factors restricting the development of aspheric surface processing. A two-meter-aperture industrial coordinate measuring machine (CMM) is available, but it has drawbacks such as large detection error and low repeatability in measuring aspheric surfaces during coarse grinding, which seriously affect the convergence efficiency of aspheric mirror processing. To solve these problems, this paper presents an effective error control, calibration and removal method based on real-time monitoring of the calibration mirror position and other means of error control, together with probe correction and a measurement-mode selection method used to plan the distribution of measurement points. Verified on real engineering examples, this method improves the nominal measurement accuracy of the industrial-grade coordinate system from a PV value of 7 microns to 4 microns, which effectively improves the grinding efficiency of aspheric mirrors and verifies the correctness of the method. The paper also investigates the error detection and operation control method, the error calibration of the CMM, and the random error calibration of the CMM.
Improving patient safety through quality assurance.
Raab, Stephen S
2006-05-01
Anatomic pathology laboratories use several quality assurance tools to detect errors and to improve patient safety. To review some of the anatomic pathology laboratory patient safety quality assurance practices. Different standards and measures in anatomic pathology quality assurance and patient safety were reviewed. Frequency of anatomic pathology laboratory error, variability in the use of specific quality assurance practices, and use of data for error reduction initiatives. Anatomic pathology error frequencies vary according to the detection method used. Based on secondary review, a College of American Pathologists Q-Probes study showed that the mean laboratory error frequency was 6.7%. A College of American Pathologists Q-Tracks study measuring frozen section discrepancy found that laboratories improved the longer they monitored and shared data. There is a lack of standardization across laboratories even for governmentally mandated quality assurance practices, such as cytologic-histologic correlation. The National Institutes of Health funded a consortium of laboratories to benchmark laboratory error frequencies, perform root cause analysis, and design error reduction initiatives, using quality assurance data. Based on the cytologic-histologic correlation process, these laboratories found an aggregate nongynecologic error frequency of 10.8%. Based on gynecologic error data, the laboratory at my institution used Toyota production system processes to lower gynecologic error frequencies and to improve Papanicolaou test metrics. Laboratory quality assurance practices have been used to track error rates, and laboratories are starting to use these data for error reduction initiatives.
Acharya, Kamal R.; Dhand, Navneet K.; Whittington, Richard J.; Plain, Karren M.
2017-01-01
Johne’s disease is a chronic debilitating enteropathy of ruminants caused by Mycobacterium avium subspecies paratuberculosis (MAP). Current abattoir surveillance programs detect disease via examination of gross lesions and confirmation by histopathological and/or tissue culture, which is time-consuming and has relatively low sensitivity. This study aimed to investigate whether a high-throughput quantitative PCR (qPCR) test is a viable alternative for tissue testing. Intestine and mesenteric lymph nodes were sourced from sheep experimentally infected with MAP and the DNA extracted using a protocol developed for tissues, comprised enzymatic digestion of the tissue homogenate, chemical and mechanical lysis, and magnetic bead-based DNA purification. The extracted DNA was tested by adapting a previously validated qPCR for fecal samples, and the results were compared with culture and histopathology results of the corresponding tissues. The MAP tissue qPCR confirmed infection in the majority of sheep with gross lesions on postmortem (37/38). Likewise, almost all tissue culture (61/64) or histopathology (52/58) positives were detected with good to moderate agreement (Cohen’s kappa statistic) and no significant difference to the reference tests (McNemar’s Chi-square test). Higher MAP DNA quantities corresponded to animals with more severe histopathology (odds ratio: 1.82; 95% confidence interval: 1.60, 2.07). Culture-independent strain typing on tissue DNA was successfully performed. This MAP tissue qPCR method had a sensitivity equivalent to the reference tests and is thus a viable replacement for gross- and histopathological examination of tissue samples in abattoirs. In addition, the test could be validated for testing tissue samples intended for human consumption. PMID:29312970
DOE Office of Scientific and Technical Information (OSTI.GOV)
T. R. Saffle; R. G. Mitchell; R. B. Evans
The results of the various monitoring programs for 1998 indicated that radioactivity from the DOE's Idaho National Engineering and Environmental Laboratory (INEEL) operations could generally not be distinguished from worldwide fallout and natural radioactivity in the region surrounding the INEEL. Although some radioactive materials were discharged during INEEL operations, concentrations in the offsite environment and doses to the surrounding population were far less than state of Idaho and federal health protection guidelines. Gross alpha and gross beta measurements, used as a screening technique for air filters, were investigated by making statistical comparisons between onsite or boundary location concentrations and the distant community group concentrations. Gross alpha activities were generally higher at distant locations than at boundary and onsite locations. Air samples were also analyzed for specific radionuclides. Some human-made radionuclides were detected at offsite locations, but most were near the minimum detectable concentration and their presence was attributable to natural sources, worldwide fallout, and statistical variations in the analytical results rather than to INEEL operations. Low concentrations of 137Cs were found in muscle tissue and liver of some game animals and sheep. These levels were mostly consistent with background concentrations measured in animals sampled onsite and offsite in recent years. Ionizing radiation measured simultaneously at the INEEL boundary and distant locations using environmental dosimeters was similar and showed only background levels. The maximum potential population dose from submersion, ingestion, inhalation, and deposition to the approximately 121,500 people residing within an 80-km (50-mi) radius of the geographical center of the INEEL was estimated to be 0.08 person-rem (8 × 10⁻⁴ person-Sv) using the MDIFF air dispersion model. This population dose is less than 0.0002 percent of the estimated 43,700 person-rem (437 person-Sv) population dose from background radioactivity.
Online 3D EPID-based dose verification: Proof of concept
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spreeuw, Hanno; Rozendaal, Roel, E-mail: r.rozenda
Purpose: Delivery errors during radiotherapy may lead to medical harm and reduced life expectancy for patients. Such serious incidents can be avoided by performing dose verification online, i.e., while the patient is being irradiated, creating the possibility of halting the linac in case of a large overdosage or underdosage. The offline EPID-based 3D in vivo dosimetry system clinically employed at our institute is in principle suited for online treatment verification, provided the system is able to complete 3D dose reconstruction and verification within 420 ms, the present acquisition time of a single EPID frame. It is the aim of this study to show that our EPID-based dosimetry system can be made fast enough to achieve online 3D in vivo dose verification. Methods: The current dose verification system was sped up in two ways. First, a new software package was developed to perform all computations that are not dependent on portal image acquisition separately, thus removing the need for doing these calculations in real time. Second, the 3D dose reconstruction algorithm was sped up via a new, multithreaded implementation. Dose verification was implemented by comparing planned with reconstructed 3D dose distributions delivered to two regions in a patient: the target volume and the nontarget volume receiving at least 10 cGy. In both volumes, the mean dose is compared, while in the nontarget volume, the near-maximum dose (D2) is compared as well. The real-time dosimetry system was tested by irradiating an anthropomorphic phantom with three VMAT plans: a 6 MV head-and-neck treatment plan, a 10 MV rectum treatment plan, and a 10 MV prostate treatment plan. In all plans, two types of serious delivery errors were introduced. The functionality of automatically halting the linac was also implemented and tested. Results: The precomputation time per treatment was ∼180 s/treatment arc, depending on gantry angle resolution. The complete processing of a single portal frame, including dose verification, took 266 ± 11 ms on a dual octocore Intel Xeon E5-2630 CPU running at 2.40 GHz. The introduced delivery errors were detected after 5–10 s irradiation time. Conclusions: A prototype online 3D dose verification tool using portal imaging has been developed and successfully tested for two different kinds of gross delivery errors. Thus, online 3D dose verification has been technologically achieved.
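A schematic of the per-frame verification logic might look like the sketch below. This is not the authors' code: the region definitions, the 10% tolerance, and the shortcut of treating D2 as the 98th percentile are all assumptions made for illustration.

```python
import numpy as np

def d2(dose, mask):
    """Near-maximum dose, approximated here as the 98th percentile."""
    return np.percentile(dose[mask], 98)

def verify_frame(cum_recon, cum_plan, target, nontarget, tol=0.10):
    """Per-frame check in the spirit of the paper: compare mean dose in the
    target and nontarget volumes, and D2 in the nontarget volume, against
    the plan accumulated to the same point; flag deviations beyond tol."""
    checks = {
        "target_mean": (cum_recon[target].mean(), cum_plan[target].mean()),
        "nontarget_mean": (cum_recon[nontarget].mean(), cum_plan[nontarget].mean()),
        "nontarget_D2": (d2(cum_recon, nontarget), d2(cum_plan, nontarget)),
    }
    return {k: abs(r - p) / max(p, 1e-9) > tol for k, (r, p) in checks.items()}

# Toy volumes standing in for the 3D reconstruction after one EPID frame.
rng = np.random.default_rng(7)
plan = rng.uniform(0.5, 2.0, size=(16, 16, 16))
recon = plan * 1.25                                  # gross 25% overdosage
target = np.zeros_like(plan, bool); target[4:8, 4:8, 4:8] = True
print(verify_frame(recon, plan, target, ~target))    # any True -> halt candidate
```

In the real system such a check must finish within the 420 ms frame budget, which is why the expensive plan-dependent work is precomputed.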
A Floor-Map-Aided WiFi/Pseudo-Odometry Integration Algorithm for an Indoor Positioning System
Wang, Jian; Hu, Andong; Liu, Chunyan; Li, Xin
2015-01-01
This paper proposes a scheme for indoor positioning by fusing floor map, WiFi and smartphone sensor data to provide meter-level positioning without additional infrastructure. A topology-constrained K nearest neighbor (KNN) algorithm based on a floor map layout provides the coordinates required to integrate WiFi data with pseudo-odometry (P-O) measurements simulated using a pedestrian dead reckoning (PDR) approach. One method of further improving the positioning accuracy is to use a more effective multi-threshold step detection algorithm, as proposed by the authors. The “go and back” phenomenon caused by incorrect matching of the reference points (RPs) of a WiFi algorithm is eliminated using an adaptive fading-factor-based extended Kalman filter (EKF), taking WiFi positioning coordinates, P-O measurements and fused heading angles as observations. The “cross-wall” problem is solved based on the development of a floor-map-aided particle filter algorithm by weighting the particles, thereby also eliminating the gross-error effects originating from WiFi or P-O measurements. The performance observed in a field experiment performed on the fourth floor of the School of Environmental Science and Spatial Informatics (SESSI) building on the China University of Mining and Technology (CUMT) campus confirms that the proposed scheme can reliably achieve meter-level positioning. PMID:25811224
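As a minimal illustration of the WiFi fingerprinting step alone (without the floor-map topology constraint, the P-O/EKF fusion, or the particle filter described above), a weighted KNN position estimate can be sketched as follows; the radio map and all numbers are synthetic.

```python
import numpy as np

def knn_position(rss, fingerprints, coords, k=4):
    """Weighted K-nearest-neighbour WiFi fingerprinting: average the
    coordinates of the k reference points whose stored RSS vectors are
    closest (Euclidean) to the observed scan."""
    d = np.linalg.norm(fingerprints - rss, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-6)                # closer RPs weigh more
    return (coords[idx] * w[:, None]).sum(0) / w.sum()

# Hypothetical radio map: 50 reference points, 6 visible access points.
rng = np.random.default_rng(2)
coords = rng.uniform(0, 30, size=(50, 2))           # metres
fingerprints = rng.normal(-70, 8, size=(50, 6))     # dBm per AP
scan = fingerprints[17] + rng.normal(0, 2, size=6)  # noisy scan near RP 17
print("true:", coords[17], "estimated:", knn_position(scan, fingerprints, coords))
```

Constraining which reference points are reachable from the previous position (the paper's topology constraint) is what suppresses the "go and back" mismatches that a plain KNN produces.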
Shahzad, Mohsin; Yousaf, Sairah; Waryah, Yar M; Gul, Hadia; Kausar, Tasleem; Tariq, Nabeela; Mahmood, Umair; Ali, Muhammad; Khan, Muzammil A; Waryah, Ali M; Shaikh, Rehan S; Riazuddin, Saima; Ahmed, Zubair M
2017-03-07
Nonsyndromic oculocutaneous albinism (nsOCA) is clinically characterized by the loss of pigmentation in the skin, hair, and iris. OCA is amongst the most common causes of vision impairment in children. To date, pathogenic variants in six genes have been identified in individuals with nsOCA. Here, we determined the identities, frequencies, and clinical consequences of OCA alleles in 94 previously unreported Pakistani families. A combination of Sanger and exome sequencing revealed 38 alleles, including 22 novel variants, segregating with the nsOCA phenotype in 80 families. Variants of the TYR and OCA2 genes were the most common cause of nsOCA, occurring in 43 and 30 families, respectively. The 22 novel variants include nine missense, four splice-site, two nonsense, one insertion, and six gross-deletion variants. In vitro studies revealed retention of OCA proteins harboring novel missense alleles in the endoplasmic reticulum (ER) of transfected cells. Exon-trapping assays with constructs containing splice-site alleles revealed errors in splicing. As eight alleles account for approximately 56% (95% CI: 46.52-65.24%) of nsOCA cases, primarily enrolled from the Punjab province of Pakistan, hierarchical variant-screening strategies would provide feasible and cost-efficient genetic tests for OCA in families of similar origin. Thus, we developed tetra-primer ARMS assays for rapid, reliable, reproducible and economical screening of most of these common alleles.
Bayesian network models for error detection in radiotherapy plans
NASA Astrophysics Data System (ADS)
Kalet, Alan M.; Gennari, John H.; Ford, Eric C.; Phillips, Mark H.
2015-04-01
The purpose of this study is to design and develop a probabilistic network for detecting errors in radiotherapy plans for use at the time of initial plan verification. Our group has initiated a multi-pronged approach to reduce these errors. We report on our development of Bayesian models of radiotherapy plans. Bayesian networks consist of joint probability distributions that define the probability of one event, given some set of other known information. Using the networks, we find the probability of obtaining certain radiotherapy parameters, given a set of initial clinical information. A low probability in a propagated network then corresponds to potential errors to be flagged for investigation. To build our networks we first interviewed medical physicists and other domain experts to identify the relevant radiotherapy concepts and their associated interdependencies and to construct a network topology. Next, to populate the network’s conditional probability tables, we used the Hugin Expert software to learn parameter distributions from a subset of de-identified data derived from a radiation oncology based clinical information database system. These data represent 4990 unique prescription cases over a 5 year period. Under test case scenarios with approximately 1.5% introduced error rates, network performance produced areas under the ROC curve of 0.88, 0.98, and 0.89 for the lung, brain and female breast cancer error detection networks, respectively. Comparison of the brain network to human experts performance (AUC of 0.90 ± 0.01) shows the Bayes network model performs better than domain experts under the same test conditions. Our results demonstrate the feasibility and effectiveness of comprehensive probabilistic models as part of decision support systems for improved detection of errors in initial radiotherapy plan verification procedures.
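The flagging principle described above, scoring how probable a plan parameter is given clinical context and alerting on low-probability combinations, can be sketched with a hand-built conditional probability table. The numbers below are invented for illustration, not learned from the 4990-case database used in the study.

```python
# Invented conditional probability table P(prescribed dose | treatment site).
P_DOSE_GIVEN_SITE = {
    ("lung", 60.0): 0.55, ("lung", 45.0): 0.30, ("lung", 8.0): 0.01,
    ("brain", 60.0): 0.40, ("brain", 30.0): 0.35, ("brain", 8.0): 0.02,
}

def flag_if_unlikely(site, prescribed_dose_gy, threshold=0.05):
    """Return True when P(dose | site) falls below the alert threshold,
    i.e., the prescription deserves human review during plan verification."""
    p = P_DOSE_GIVEN_SITE.get((site, prescribed_dose_gy), 0.0)
    return p < threshold

print(flag_if_unlikely("lung", 60.0))  # False: a common prescription
print(flag_if_unlikely("lung", 8.0))   # True: improbable, flag for review
```

A full Bayesian network generalizes this table to many interdependent parameters, propagating the known clinical information before scoring each plan value.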
Quantifying seining detection probability for fishes of Great Plains sand‐bed rivers
Mollenhauer, Robert; Logue, Daniel R.; Brewer, Shannon K.
2018-01-01
Species detection error (i.e., imperfect and variable detection probability) is an essential consideration when investigators map distributions and interpret habitat associations. When fish detection error that is due to highly variable instream environments needs to be addressed, sand‐bed streams of the Great Plains represent a unique challenge. We quantified seining detection probability for diminutive Great Plains fishes across a range of sampling conditions in two sand‐bed rivers in Oklahoma. Imperfect detection resulted in underestimates of species occurrence using naïve estimates, particularly for less common fishes. Seining detection probability also varied among fishes and across sampling conditions. We observed a quadratic relationship between water depth and detection probability, in which the exact nature of the relationship was species‐specific and dependent on water clarity. Similarly, the direction of the relationship between water clarity and detection probability was species‐specific and dependent on differences in water depth. The relationship between water temperature and detection probability was also species dependent, where both the magnitude and direction of the relationship varied among fishes. We showed how ignoring detection error confounded an underlying relationship between species occurrence and water depth. Despite imperfect and heterogeneous detection, our results support that determining species absence can be accomplished with two to six spatially replicated seine hauls per 200‐m reach under average sampling conditions; however, required effort would be higher under certain conditions. Detection probability was low for the Arkansas River Shiner Notropis girardi, which is federally listed as threatened, and more than 10 seine hauls per 200‐m reach would be required to assess presence across sampling conditions. Our model allows scientists to estimate sampling effort to confidently assess species occurrence, which maximizes the use of available resources. Increased implementation of approaches that consider detection error promote ecological advancements and conservation and management decisions that are better informed.
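The sampling-effort statement above maps onto a simple calculation: assuming independent replicate hauls with per-haul detection probability p, the number of hauls needed to detect a present species at least once with confidence C is n = ceil(ln(1 − C) / ln(1 − p)). A sketch with hypothetical detection probabilities:

```python
import math

def hauls_needed(p_detect, confidence=0.95):
    """Replicate seine hauls needed so that a present species is detected
    at least once with the stated confidence, assuming independent hauls:
    n = ceil(ln(1 - confidence) / ln(1 - p_detect))."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - p_detect))

for p in (0.6, 0.4, 0.25):   # hypothetical per-haul detection probabilities
    print(f"p = {p:.2f}: {hauls_needed(p)} hauls per 200-m reach")
```

With these illustrative values the required effort ranges from roughly 4 to 11 hauls, consistent with the pattern the abstract reports for common versus hard-to-detect species.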
Principles of gross alpha and beta radioactivity detection in water.
Semkow, T M; Parekh, P P
2001-11-01
A simultaneous detection of gross alpha and beta radioactivity was studied using gas proportional counting. This measurement is part of a method mandated by the US Environmental Protection Agency to screen for alpha and beta radioactivity in drinking water. Responses of a gas proportional detector to alpha and beta particles from several radionuclides were determined in drop and electroplated geometries. It is shown that, while the alpha radioactivity can be measured accurately in the presence of beta radioactivity, the opposite is not typically true due to alpha-to-beta crosstalk. The crosstalk, originating from the emission of conversion and Auger electrons as well as x rays, is shown to depend primarily on the particular alpha-decay scheme, while the dependence on alpha energy is comparatively small. It was measured at 28-35% for 241Am, 22-24% for 230Th, and 4.9-6.5% for 239Pu. For 210Po, a crosstalk of 1.2-1.6% was observed, mostly due to energy retardation. A method of reducing the crosstalk to a <3% level is proposed by absorbing the atomic electrons in a 6.2 mg cm⁻² Al absorber, at the same time decreasing the beta efficiency by 16-31%.
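The crosstalk correction amounts to a small linear unfolding: the alpha channel sees only alpha counts, while the beta channel sees true betas plus a crosstalk fraction of the detected alphas. A sketch follows; the efficiencies, crosstalk fraction, and count rates are invented calibration numbers, not values from the paper.

```python
import numpy as np

def unfold_activities(c_alpha, c_beta, eff_a, eff_b, crosstalk):
    """Correct gross beta for alpha-to-beta crosstalk by solving
        c_alpha = eff_a * A
        c_beta  = eff_b * B + crosstalk * eff_a * A
    for the alpha (A) and beta (B) activities; count rates are net cps."""
    M = np.array([[eff_a, 0.0],
                  [crosstalk * eff_a, eff_b]])
    return np.linalg.solve(M, np.array([c_alpha, c_beta]))

A, B = unfold_activities(c_alpha=1.20, c_beta=3.50,
                         eff_a=0.35, eff_b=0.45, crosstalk=0.25)
print(f"alpha: {A:.2f} Bq, beta: {B:.2f} Bq")
```

Because the crosstalk fraction depends on the (usually unknown) alpha-decay scheme of the sample, the paper's absorber approach, which suppresses the crosstalk itself, is the more robust fix.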
Rahman, Mizanur; Islam, Shariful; Masuduzzaman, Md; Alam, Mahabub; Chawdhury, Mohammad Nizam Uddin; Ferdous, Jinnat; Islam, Md Nurul; Hassan, Mohammad Mahmudul; Hossain, Mohammad Alamgir; Islam, Ariful
2018-04-01
The Asian house shrew (Suncus murinus), a widely distributed small mammal in the South Asian region, can carry helminths of zoonotic importance. The aim of the study was to determine the prevalence and diversity of gastrointestinal (GI) helminths in free-ranging Asian house shrews (S. murinus) in Bangladesh. A total of 86 Asian house shrews were captured from forest areas and other habitats of Bangladesh in 2015. Gross examination of the whole GI tract was performed for helminth detection, and coproscopy was done for identification of specific eggs or larvae. The overall prevalence of GI helminths was 77.9% (67/86), with six species comprising nematodes (3), cestodes (2), and trematodes (1). Of the detected helminths, the dominant parasitic group was from the genus Hymenolepis spp. (59%), followed by Strongyloides spp. (17%), Capillaria spp. (10%), Physaloptera spp. (3%), and Echinostoma spp. (3%). The findings show that the presence of potentially zoonotic parasites (Hymenolepis spp. and Capillaria spp.) in the Asian house shrew is ubiquitous in all types of habitat (forest land, cropland and dwellings) in Bangladesh. Therefore, further investigation is crucial to examine their role in the transmission of human helminthiasis.
Errors in radiation oncology: A study in pathways and dosimetric impact
Drzymala, Robert E.; Purdy, James A.; Michalski, Jeff
2005-01-01
As complexity for treating patients increases, so does the risk of error. Some publications have suggested that record and verify (R&V) systems may contribute to propagating errors. Direct data transfer has the potential to eliminate most, but not all, errors. And although the dosimetric consequences may be obvious in some cases, a detailed study does not exist. In this effort, we examined potential errors in terms of scenarios, pathways of occurrence, and dosimetry. Our goal was to prioritize error prevention according to likelihood of event and dosimetric impact. For conventional photon treatments, we investigated errors of incorrect source‐to‐surface distance (SSD), energy, omitted wedge (physical, dynamic, or universal) or compensating filter, incorrect wedge or compensating filter orientation, improper rotational rate for arc therapy, and geometrical misses due to incorrect gantry, collimator or table angle, reversed field settings, and setup errors. For electron beam therapy, errors investigated included incorrect energy and incorrect SSD, along with geometric misses. For special procedures we examined errors for total body irradiation (TBI; incorrect field size, dose rate, treatment distance) and LINAC radiosurgery (incorrect collimation setting, incorrect rotational parameters). Likelihood of error was determined and subsequently rated according to our history of detecting such errors. Dosimetric evaluation was conducted by using dosimetric data, treatment plans, or measurements. We found geometric misses to have the highest error probability. They most often occurred due to improper setup via coordinate shift errors or incorrect field shaping. The dosimetric impact is unique for each case and depends on the proportion of fields in error and the volume mistreated. These errors were short‐lived due to rapid detection via port films. The most significant dosimetric error was related to a reversed wedge direction. This may occur due to an incorrect collimator angle or wedge orientation. For parallel‐opposed 60° wedge fields, this error could be as high as 80% to a point off‐axis. Other examples of dosimetric impact included the following: SSD, ~2%/cm for photons or electrons; photon energy (6 MV vs. 18 MV), on average 16% depending on depth; electron energy, ~0.5 cm of depth coverage per MeV (mega‐electron volt). Of these examples, incorrect distances were most likely but rapidly detected by in vivo dosimetry. Errors were categorized by occurrence rate, methods and timing of detection, longevity, and dosimetric impact. Solutions were devised according to these criteria. To date, no one has studied the dosimetric impact of global errors in radiation oncology. Although there is heightened awareness that with increased use of ancillary devices and automation there must be a parallel increase in quality check systems and processes, errors do and will continue to occur. This study has helped us identify and prioritize potential errors in our clinic according to frequency and dosimetric impact. For example, to reduce the use of an incorrect wedge direction, our clinic employs off‐axis in vivo dosimetry. To avoid a treatment distance setup error, we use both vertical table settings and optical distance indicator (ODI) values to properly set up fields. As R&V systems become more automated, more accurate and efficient data transfer will occur. This will require further analysis. Finally, we have begun examining potential intensity‐modulated radiation therapy (IMRT) errors according to the same criteria.
A fault-tolerant information processing concept for space vehicles.
NASA Technical Reports Server (NTRS)
Hopkins, A. L., Jr.
1971-01-01
A distributed fault-tolerant information processing system is proposed, comprising a central multiprocessor, dedicated local processors, and multiplexed input-output buses connecting them together. The processors in the multiprocessor are duplicated for error detection, which is felt to be less expensive than using coded redundancy of comparable effectiveness. Error recovery is made possible by a triplicated scratchpad memory in each processor. The main multiprocessor memory uses replicated memory for error detection and correction. Local processors use any of three conventional redundancy techniques: voting, duplex pairs with backup, and duplex pairs in independent subsystems.
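Of the three conventional redundancy techniques listed, voting is the simplest to illustrate. A minimal sketch of majority voting over triplicated module outputs (generic, not the 1971 design):

    from collections import Counter

    def vote(*replica_outputs):
        """Majority vote over replicated module outputs; flags a dissenting unit."""
        tally = Counter(replica_outputs)
        value, count = tally.most_common(1)[0]
        if count < 2:
            raise RuntimeError("no majority: multiple faulty replicas")
        faulty = [i for i, v in enumerate(replica_outputs) if v != value]
        return value, faulty

    # One replica returns a corrupted result; the voter masks the fault.
    print(vote(0x5A3C, 0x5A3C, 0x1A3C))   # (23100, [2])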
Photorespiration and carbon limitation determine productivity in temperate seagrasses.
Buapet, Pimchanok; Rasmusson, Lina M; Gullström, Martin; Björk, Mats
2013-01-01
The gross primary productivity of two seagrasses, Zostera marina and Ruppia maritima, and one green macroalga, Ulva intestinalis, was assessed in laboratory and field experiments to determine whether the photorespiratory pathway operates at a substantial level in these macrophytes and to what extent it is enhanced by naturally occurring shifts in dissolved inorganic carbon (DIC) and O2 in dense vegetation. To achieve these conditions in laboratory experiments, seawater was incubated with U. intestinalis in light to obtain a range of higher pH and O2 levels and lower DIC levels. Gross photosynthetic O2 evolution was then measured in this pretreated seawater (pH 7.8-9.8; high to low DIC:O2 ratio) at both natural and low O2 concentrations (adjusted by N2 bubbling). The presence of photorespiration was indicated by a lower gross O2 evolution rate under natural O2 conditions than when O2 was reduced. In all three macrophytes, gross photosynthetic rates were negatively affected by higher pH and lower DIC. However, while both seagrasses exhibited significant photorespiratory activity at increasing pH values, the macroalga U. intestinalis exhibited no such activity. Rates of seagrass photosynthesis were then assessed in seawater collected from the natural habitats (i.e., shallow bays characterized by high macrophyte cover and by low DIC and high pH during daytime) and compared with open baymouth water conditions (where seawater DIC is in equilibrium with air, with normal DIC and pH). The gross photosynthetic rates of both seagrasses were significantly higher when incubated in the baymouth water, indicating that these grasses can be significantly carbon limited in shallow bays. Photorespiration was also detected in both seagrasses under shallow bay water conditions. Our findings indicate that natural carbon limitations caused by high community photosynthesis can enhance photorespiration and cause a significant decline in seagrass primary production in shallow waters.
Pokupec, Rajko; Mrazovac, Danijela; Popović-Suić, Smiljka; Mrazovac, Visnja; Kordić, Rajko; Petricek, Igor
2013-04-01
Early detection of a refractive error and its correction are extremely important for the prevention of amblyopia (poor vision). The gold standard in the detection of refractive errors is retinoscopy, a method in which the pupils are dilated in order to exclude accommodation, resulting in a more accurate measurement of the refractive error. An automatic computer refractometer is also in use. The study included 30 patients, 15 boys and 15 girls, aged 4-16. The first examination was conducted with the refractometer on narrow pupils. Retinoscopy, followed by another examination with the refractometer, was performed on pupils dilated with mydriatic drops administered 3 times. The results obtained with the three methods were compared. They indicate that on narrow pupils the autorefractometer revealed an increased dioptric value in nearsightedness (myopia), i.e., minus overcorrection, whereas findings obtained with retinoscopy and the autorefractometer in mydriasis (cycloplegia) were much more accurate. The results were statistically processed, which confirmed the differences between the obtained measurements. These findings are consistent with the results of studies conducted by other authors. Automatic refractometry on narrow pupils has proven to be a screening method for the detection of refractive errors in children; however, the exact value of the refractive error is obtained only in mydriasis, with retinoscopy or an automatic refractometer on dilated pupils.
Visual Scanning: Comparisons Between Student and Instructor Pilots. Final Report.
ERIC Educational Resources Information Center
DeMaio, Joseph; And Others
The performance of instructor pilots and student pilots was compared in two visual scanning tasks. In the first task both groups were shown slides of T-37 instrument displays in which errors were to be detected. Instructor pilots detected errors faster and with greater accuracy than student pilots, thus providing evidence for the validity of the…
Code of Federal Regulations, 2011 CFR
2011-04-01
... to the nearest field office of the Board. That office inspects the applications to detect errors and..., the claimant executes a registration and claim for unemployment insurance benefits (Form UI-3). In... openings, detecting errors and omissions, and noting items requiring investigation. The claim is then...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gopan, O; Novak, A; Zeng, J
Purpose: Physics pre-treatment plan review is crucial to safe radiation oncology treatments. Studies show that most errors originate in treatment planning, which underscores the importance of the physics plan review. As a QA measure, the physics review is of fundamental importance and is central to the profession of medical physics; however, little is known about its effectiveness, and more hard data are needed. The purpose of this study was to quantify the effectiveness of physics review with the goal of improving it. Methods: This study analyzed 315 “potentially serious” near-miss incidents within an institutional incident learning system collected over a two-year period. 139 of these originated prior to physics review and were found at the review or after. Incidents were classified as events that: 1) were detected by physics review, 2) could have been detected (but were not), and 3) could not have been detected. Category 1 and 2 events were classified by which specific check (within physics review) detected or could have detected the event. Results: Of the 139 analyzed events, 73/139 (53%) were detected or could have been detected by the physics review, although 42/73 (58%) were not actually detected. 45/73 (62%) errors originated in treatment planning, making physics review the first step in the workflow that could detect the error. Two specific physics checks were particularly effective (combined effectiveness of >20%): verifying DRRs (8/73) and verifying isocenter (7/73). Software-based plan checking systems were evaluated and found to have a potential effectiveness of 40%. Given current data structures, software implementations of some tests, such as the isocenter verification check, would be challenging. Conclusion: Physics plan review is a key safety measure and can detect the majority of reported events. However, a majority of events that potentially could have been detected were NOT detected in this study, indicating the need to improve the performance of physics review.
[Remote system of natural gas leakage based on multi-wavelength characteristics spectrum analysis].
Li, Jing; Lu, Xu-Tao; Yang, Ze-Hui
2014-05-01
To enable rapid, wide-area monitoring of natural gas pipeline leakage, a remote detection system for methane concentration was designed based on a static Fourier transform interferometer. The system uses infrared light, whose center wavelength is calibrated to an absorption peak of the methane molecule, to irradiate the tested area, and obtains interference fringes through a converging collimation system and an interference module. The system then calculates the concentration-path-length product in the tested area with a multi-wavelength characteristic spectrum analysis algorithm and inverts the corresponding methane concentration. Based on the HITRAN spectral database, the 1.65 μm line was selected as the main characteristic absorption peak, so a 1.65 μm DFB laser was used as the light source. To improve detection accuracy and stability without increasing the hardware configuration of the system, the absorbance ratio is solved with the help of an auxiliary wavelength, and the concentration-path-length product of the measured gas is then obtained by the method of calculating proportions of multi-wavelength characteristics. This approach resembles a differential measurement: measurement errors from external disturbances are largely canceled in the process of solving the multi-wavelength ratio, which improves the accuracy and stability of the system. Because the infrared absorption spectrum of methane is fixed, the ratio of the absorbances at any two wavelengths is also constant; the error coefficients produced by the system are the same under the same external interference, so the measurement noise of the system can be effectively reduced by the ratio method. Experiments tested a standard methane gas tank with a constant leak rate. Using the readings of a PN1000 portable methane detector as the reference data, the system's measurements were compared at distances of 100, 200, and 500 m. The results show that the detected methane concentration stabilized after a period of steady leakage, as did the concentration-path-length product. At a detection distance of 100 m, the error of the concentration-path-length product was less than 1.0%. The detection error increased correspondingly with distance; at 500 m it was less than 4.5%. In short, the detection error of the system is less than 5.0% once the gas leakage is stable, meeting the requirements for remote sensing of natural gas leakage in the field.
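The multi-wavelength cancellation follows from the Beer-Lambert law: a disturbance that attenuates both channels equally drops out when the two apparent absorbances are combined. A minimal sketch of one such formulation, with hypothetical absorption cross sections:

    import numpy as np

    # Hypothetical methane absorption cross sections at the main and auxiliary
    # wavelengths (arbitrary units per unit of concentration * path length).
    SIGMA_MAIN, SIGMA_AUX = 1.00, 0.35

    def concentration_path_length(i0_main, i_main, i0_aux, i_aux):
        """Estimate CL from incident/transmitted intensities at two wavelengths.

        Beer-Lambert: I = I0 * g * exp(-sigma * CL), where g is a common
        disturbance factor (dust, vibration); g cancels when the two apparent
        absorbances are subtracted.
        """
        a_main = np.log(i0_main / i_main)   # includes the unknown -ln(g) term
        a_aux = np.log(i0_aux / i_aux)      # includes the same -ln(g) term
        return (a_main - a_aux) / (SIGMA_MAIN - SIGMA_AUX)

    # A disturbance g attenuating both channels equally does not bias CL:
    g, cl_true = 0.7, 2.0
    i_main = 1.0 * g * np.exp(-SIGMA_MAIN * cl_true)
    i_aux = 1.0 * g * np.exp(-SIGMA_AUX * cl_true)
    print(concentration_path_length(1.0, i_main, 1.0, i_aux))  # ~2.0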
Runtime Verification in Context : Can Optimizing Error Detection Improve Fault Diagnosis
NASA Technical Reports Server (NTRS)
Dwyer, Matthew B.; Purandare, Rahul; Person, Suzette
2010-01-01
Runtime verification has primarily been developed and evaluated as a means of enriching the software testing process. While many researchers have pointed to its potential applicability in online approaches to software fault tolerance, there has been a dearth of work exploring the details of how that might be accomplished. In this paper, we describe how a component-oriented approach to software health management exposes the connections between program execution, error detection, fault diagnosis, and recovery. We identify both research challenges and opportunities in exploiting those connections. Specifically, we describe how recent approaches to reducing the overhead of runtime monitoring aimed at error detection might be adapted to reduce the overhead and improve the effectiveness of fault diagnosis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Able, CM; Baydush, AH; Nguyen, C
Purpose: To determine the effectiveness of SPC analysis for a model predictive maintenance process that uses accelerator-generated parameter and performance data contained in trajectory log files. Methods: Each trajectory file is decoded and a total of 131 axis positions are recorded (collimator jaw position, gantry angle, each MLC, etc.). This raw data is processed, and either axis positions are extracted at critical points during the delivery or positional change over time is used to determine axis velocity. The focus of our analysis is the accuracy, reproducibility, and fidelity of each axis. A reference positional trace of the gantry and each MLC is used as a motion baseline for cross-correlation (CC) analysis. A total of 494 parameters (482 MLC related) were analyzed using Individual and Moving Range (I/MR) charts. The chart limits were calculated using a hybrid technique that included the use of the standard 3σ limits and parameter/system specifications. Synthetic errors/changes were introduced to determine the initial effectiveness of I/MR charts in detecting relevant changes in operating parameters. The magnitude of the synthetic errors/changes was based on TG-142 and published analyses of VMAT delivery accuracy. Results: All errors introduced were detected. Synthetic positional errors of 2 mm for the collimator jaw and MLC carriage exceeded the chart limits. Gantry speed and each MLC speed are analyzed at two different points in the delivery. A simulated gantry speed error (0.2 deg/sec) and MLC speed error (0.1 cm/sec) exceeded the speed chart limits. A gantry position error of 0.2 deg was detected by the CC maximum value charts. An MLC position error of 0.1 cm was detected by the CC maximum value location charts for every MLC. Conclusion: SPC I/MR evaluation of trajectory log file parameters may be effective in providing an early warning of performance degradation or component failure for medical accelerator systems.
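For reference, I/MR chart limits are conventionally derived from the average moving range. A minimal sketch of the standard 3-sigma limits (the hybrid specification-based limits in the study extend these):

    import numpy as np

    def imr_limits(x):
        """Standard I/MR control-chart limits for an individuals series x."""
        x = np.asarray(x, dtype=float)
        mr = np.abs(np.diff(x))          # moving ranges of successive points
        mr_bar = mr.mean()
        # d2 = 1.128 for subgroups of size 2, so 3*sigma ~= 2.66 * mean MR.
        i_lcl = x.mean() - 2.66 * mr_bar
        i_ucl = x.mean() + 2.66 * mr_bar
        mr_ucl = 3.267 * mr_bar          # D4 = 3.267 for subgroups of size 2
        return (i_lcl, i_ucl), (0.0, mr_ucl)

    # Example: jaw positions (cm) extracted from successive trajectory logs.
    jaw = [5.001, 4.999, 5.002, 5.000, 4.998, 5.001, 5.003, 4.999]
    print(imr_limits(jaw))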
Di, Huige; Zhang, Zhanfei; Hua, Hangbo; Zhang, Jiaqi; Hua, Dengxin; Wang, Yufeng; He, Tingyao
2017-03-06
Accurate aerosol optical properties can be obtained via the high spectral resolution lidar (HSRL) technique, which employs a narrow spectral filter to suppress the Rayleigh or Mie scattering in lidar return signals. The ability of the filter to suppress Rayleigh or Mie scattering is critical for HSRL; at the same time, the rejection of the filter cannot be increased without limit. How to optimize the spectral discriminator and select an appropriate suppression rate for the signal is therefore an important question. The HSRL technique was thoroughly studied based on error propagation. Error analyses and sensitivity studies were carried out on the transmittance characteristics of the spectral discriminator. Moreover, two different spectroscopic methods for HSRL were described and compared: one suppresses the Mie scattering; the other suppresses the Rayleigh scattering. The corresponding HSRLs were simulated and analyzed. The results show that excessive suppression of Rayleigh scattering or Mie scattering in a high-spectral channel is not necessary if the transmittance of the spectral filter for molecular and aerosol scattering signals can be well characterized. When the ratio of the filter transmittance for aerosol scattering to that for molecular scattering is less than 0.1 or greater than 10, the detection error does not change much with its value. This conclusion implies that there are more choices for the high-spectral discriminator in HSRL. Moreover, the detection errors of HSRL for the two spectroscopic methods vary greatly with the atmospheric backscattering ratio. To reduce the detection error, it is necessary to choose a reasonable spectroscopic method. The method of suppressing the Rayleigh signal and extracting the Mie signal achieves a smaller error in a clear atmosphere, while the method of suppressing the Mie signal and extracting the Rayleigh signal achieves a smaller error in a polluted atmosphere.
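The key point, that perfect suppression is unnecessary once the filter transmittances are characterized, can be seen from the two-channel inversion. A minimal sketch assuming photon-counting channels with Poisson noise and illustrative transmittances:

    import numpy as np

    def invert_hsrl(s_total, s_filtered, t_mol, t_aer):
        """Solve for molecular (M) and aerosol (A) signals from two channels.

        Channel model under the characterized-filter assumption:
            s_total    = M + A
            s_filtered = t_mol * M + t_aer * A
        """
        k = np.array([[1.0, 1.0], [t_mol, t_aer]])
        m, a = np.linalg.solve(k, np.array([s_total, s_filtered]))
        # First-order error propagation of Poisson noise through the inverse.
        kinv = np.linalg.inv(k)
        var = kinv**2 @ np.array([s_total, s_filtered])  # count variances
        return (m, a), np.sqrt(var)

    # t_aer = 0 (perfect suppression) is not required, only t_mol != t_aer:
    print(invert_hsrl(1e5, 3.5e4, t_mol=0.4, t_aer=0.1))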
Bultena, Sybrine; Danielmeier, Claudia; Bekkering, Harold; Lemhöfer, Kristin
2017-01-01
Humans monitor their behavior to optimize performance, which presumably relies on stable representations of correct responses. During second language (L2) learning, however, stable representations have yet to be formed while knowledge of the first language (L1) can interfere with learning, which in some cases results in persistent errors. In order to examine how correct L2 representations are stabilized, this study examined performance monitoring in the learning process of second language learners for a feature that conflicts with their first language. Using EEG, we investigated if L2 learners in a feedback-guided word gender assignment task showed signs of error detection in the form of an error-related negativity (ERN) before and after receiving feedback, and how feedback is processed. The results indicated that initially, response-locked negativities for correct (CRN) and incorrect (ERN) responses were of similar size, showing a lack of internal error detection when L2 representations are unstable. As behavioral performance improved following feedback, the ERN became larger than the CRN, pointing to the first signs of successful error detection. Additionally, we observed a second negativity following the ERN/CRN components, the amplitude of which followed a similar pattern as the previous negativities. Feedback-locked data indicated robust FRN and P300 effects in response to negative feedback across different rounds, demonstrating that feedback remained important in order to update memory representations during learning. We thus show that initially, L2 representations may often not be stable enough to warrant successful error monitoring, but can be stabilized through repeated feedback, which means that the brain is able to overcome L1 interference, and can learn to detect errors internally after a short training session. The results contribute a different perspective to the discussion on changes in ERN and FRN components in relation to learning, by extending the investigation of these effects to the language learning domain. Furthermore, these findings provide a further characterization of the online learning process of L2 learners.
NASA Astrophysics Data System (ADS)
Zhou, H.; Luo, Z.; Li, Q.; Zhong, B.
2016-12-01
Monthly gravity field models can be used to derive information about mass variation within the Earth system, i.e., the relationship between mass variations in the oceans, land hydrology, and ice sheets. For more than ten years, GRACE has provided valuable information for recovering monthly gravity field models. In this study, a new time series of GRACE monthly solutions, truncated to degree and order 60, is computed by a modified dynamic approach. Compared with the traditional dynamic approach, the major difference of our modified approach is the way the nuisance parameters are processed. These parameters are mainly used to absorb low-frequency errors in the KBRR data. One way is to remove the nuisance parameters before estimating the geopotential coefficients, called the Pure Predetermined Strategy (PPS). The other way is to determine the nuisance parameters and geopotential coefficients simultaneously, called the Pure Simultaneous Strategy (PSS). It is convenient to detect gross errors with PPS, but there is also obvious signal loss compared with the solutions derived from PSS. After comparing the practical calculation formulas of PPS and PSS, we created the Filter Predetermined Strategy (FPS), which combines the advantages of PPS and PSS efficiently. With FPS, a new monthly gravity field model entitled HUST-Grace2016s was developed. Comparisons of geoid degree powers and mass change signals in the Amazon basin, Greenland, and the Antarctic demonstrate that our model is comparable with other published models, e.g., the CSR RL05, JPL RL05, and GFZ RL05 models. Acknowledgements: This work is supported by the China Postdoctoral Science Foundation (Grant No. 2016M592337), the National Natural Science Foundation of China (Grant Nos. 41131067, 41504014), and the Open Research Fund Program of the State Key Laboratory of Geodesy and Earth's Dynamics (Grant No. SKLGED2015-1-3-E).
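Whether nuisance parameters are pre-eliminated or co-estimated can be made concrete with partitioned normal equations: eliminating them by a Schur complement yields the same geopotential estimates as simultaneous estimation. A minimal generic least-squares sketch (not the authors' FPS filtering itself):

    import numpy as np

    def reduce_nuisance(A_g, A_n, y):
        """Eliminate nuisance params n from least squares [A_g A_n][g; n] ~ y.

        Returns the reduced normal matrix and right-hand side for the
        geopotential parameters g; solving them equals simultaneous
        estimation of g and n.
        """
        N_gg, N_gn, N_nn = A_g.T @ A_g, A_g.T @ A_n, A_n.T @ A_n
        b_g, b_n = A_g.T @ y, A_n.T @ y
        S = N_gg - N_gn @ np.linalg.solve(N_nn, N_gn.T)   # Schur complement
        b = b_g - N_gn @ np.linalg.solve(N_nn, b_n)
        return S, b

    rng = np.random.default_rng(0)
    A_g, A_n = rng.normal(size=(100, 5)), rng.normal(size=(100, 3))
    y = rng.normal(size=100)
    g_hat = np.linalg.solve(*reduce_nuisance(A_g, A_n, y))
    print(g_hat)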
Impact of rapeseed cropping on the soil carbon balance
NASA Astrophysics Data System (ADS)
Moffat, Antje Maria; Herbst, Mathias; Huth, Vytas; Andres, Monique; Augustin, Jürgen
2015-04-01
Winter oilseed rape is the dominant biofuel crop in the young moraine landscape of Northern Germany. Since, by law, the cultivation of biofuel crops must be sustainable compared with fossil fuels, detailed knowledge about their greenhouse gas (GHG) balance is necessary. The soil carbon balance is one of the key contributors to the total GHG balance and is also very important for the assessment of soil fertility. However, knowledge about the impact of different management practices on the soil carbon balance has so far been very limited. Therefore, we investigated the carbon fluxes of winter oilseed rape at field plots near Dedelow/Uckermark in NE Germany with different treatments of fertilization (mineral versus organic) and tillage (no-till and mulch-till versus ploughing). The dynamics of the carbon fluxes are mainly driven by the current climatic conditions, but the overall response depends strongly on the ecosystem state (with its physiological and microbiological properties), which is affected by management. To capture the full carbon flux dynamics as well as the impact of the different management practices, two different approaches were used: the eddy covariance technique, to obtain continuous fluxes throughout the year, and the manual chamber technique, to detect flux differences between specific management practices. The manual chamber measurements were conducted four-weekly as all-day campaigns using a flow-through non-steady-state closed chamber system. The fluxes in between campaigns were gap-filled based on functional relationships with soil and air temperature (for ecosystem respiration) and photosynthetically active radiation (for gross primary production). All results presented refer to the cropping season 2012-2013. The combination of the two measurement techniques allows the evaluation of chamber fluxes, including an independent estimate of the error on the overall balances. Despite the considerable errors, there are significant differences in the soil carbon balance between the tillage and fertilization treatments, ranging from net losses to net gains in the soil carbon stock.
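The gap-filling relies on two standard functional relationships, an exponential (Q10) temperature response for ecosystem respiration and a light response for gross primary production. A minimal sketch with illustrative parameter values, not those fitted at Dedelow:

    def ecosystem_respiration(t_soil, r_ref=2.0, q10=2.5, t_ref=10.0):
        """Reco (umol CO2 m-2 s-1) from temperature via a Q10 response."""
        return r_ref * q10 ** ((t_soil - t_ref) / 10.0)

    def gross_primary_production(par, alpha=0.03, gpp_max=25.0):
        """GPP from photosynthetically active radiation (rectangular hyperbola)."""
        return (alpha * par * gpp_max) / (alpha * par + gpp_max)

    # Net ecosystem exchange for a gap hour: NEE = Reco - GPP.
    print(ecosystem_respiration(15.0) - gross_primary_production(800.0))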
Renal Parenchymal Area Growth Curves for Children 0 to 10 Months Old.
Fischer, Katherine; Li, Chunming; Wang, Huixuan; Song, Yihua; Furth, Susan; Tasian, Gregory E
2016-04-01
Low renal parenchymal area, which is the gross area of the kidney in maximal longitudinal length minus the area of the collecting system, has been associated with increased risk of end stage renal disease during childhood in boys with posterior urethral valves. To our knowledge normal values do not exist. We aimed to increase the clinical usefulness of this measure by defining normal renal parenchymal area during infancy. In a cross-sectional study of children with prenatally detected mild unilateral hydronephrosis who were evaluated between 2000 and 2012 we measured the renal parenchymal area of normal kidney(s) opposite the kidney with mild hydronephrosis. Measurement was done with ultrasound from birth to post-gestational age 10 months. We used the LMS method to construct unilateral, bilateral, side and gender stratified normalized centile curves. We determined the z-score and the centile of a total renal parenchymal area of 12.4 cm(2) at post-gestational age 1 to 2 weeks, which has been associated with an increased risk of kidney failure before age 18 years in boys with posterior urethral valves. A total of 975 normal kidneys of children 0 to 10 months old were used to create renal parenchymal area centile curves. At the 97th centile for unilateral and single stratified curves the estimated margin of error was 4.4% to 8.8%. For bilateral and double stratified curves the estimated margin of error at the 97th centile was 6.6% to 13.2%. Total renal parenchymal area less than 12.4 cm(2) at post-gestational age 1 to 2 weeks had a z-score of -1.96 and fell at the 3rd percentile. These normal renal parenchymal area curves may be used to track kidney growth in infants and identify those at risk for chronic kidney disease progression. Copyright © 2016 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
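Centile curves built with the LMS method encode, at each age, a Box-Cox power (L), a median (M), and a coefficient of variation (S), so a measurement converts to a z-score as below; the LMS values used are hypothetical, not the fitted curves:

    from math import log

    def lms_zscore(y, L, M, S):
        """Z-score of measurement y given LMS parameters at the child's age."""
        if L == 0:                                  # Box-Cox limit case
            return log(y / M) / S
        return ((y / M) ** L - 1.0) / (L * S)

    # E.g., total RPA of 12.4 cm^2 against hypothetical LMS values at 1-2 weeks:
    print(lms_zscore(12.4, L=0.5, M=16.0, S=0.12))   # ~ -2, near the 3rd centile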
A Decision Theoretic Approach to Evaluate Radiation Detection Algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nobles, Mallory A.; Sego, Landon H.; Cooley, Scott K.
2013-07-01
There are a variety of sensor systems deployed at U.S. border crossings and ports of entry that scan for illicit nuclear material. In this work, we develop a framework for comparing the performance of detection algorithms that interpret the output of these scans and determine when secondary screening is needed. We optimize each algorithm to minimize its risk, or expected loss. We measure an algorithm's risk by considering its performance over a sample, the probability distribution of threat sources, and the consequence of detection errors. While it is common to optimize algorithms by fixing one error rate and minimizing another, our framework allows one to simultaneously consider multiple types of detection errors. Our framework is flexible and easily adapted to many different assumptions regarding the probability of a vehicle containing illicit material and the relative consequences of false positive and false negative errors. Our methods can therefore inform decision makers of the algorithm family and parameter values which best reduce the threat from illicit nuclear material, given their understanding of the environment at any point in time. To illustrate the applicability of our methods, in this paper we compare the risk from two families of detection algorithms and discuss the policy implications of our results.
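A minimal sketch of the risk comparison, with hypothetical Gaussian score distributions, a hypothetical threat prevalence, and illustrative loss values:

    import numpy as np
    from scipy.stats import norm

    def expected_risk(threshold, p_threat, c_fn=100.0, c_fp=1.0,
                      benign=norm(0, 1), threat=norm(3, 1)):
        """Risk = expected loss of an alarm threshold over both error types."""
        p_fp = benign.sf(threshold)      # benign vehicle flagged
        p_fn = threat.cdf(threshold)     # threat vehicle missed
        return (1 - p_threat) * c_fp * p_fp + p_threat * c_fn * p_fn

    thresholds = np.linspace(-2, 5, 701)
    risks = [expected_risk(t, p_threat=1e-3) for t in thresholds]
    print(thresholds[int(np.argmin(risks))])   # risk-minimizing threshold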
29 CFR 779.259 - What is included in annual gross volume.
Code of Federal Regulations, 2013 CFR
2013-07-01
... whole. The computation of the annual gross volume of sales or business of the enterprise is made... Coverage Annual Gross Volume of Sales Made Or Business Done § 779.259 What is included in annual gross volume. (a) The annual gross volume of sales made or business done of an enterprise consists of its gross...
26 CFR 1.832-1 - Gross income.
Code of Federal Regulations, 2010 CFR
2010-04-01
... deposit premiums received, but not assessments, shall be excluded from gross income. Gross income does not... TAXES Other Insurance Companies § 1.832-1 Gross income. (a) Gross income as defined in section 832(b)(1...
A multi points ultrasonic detection method for material flow of belt conveyor
NASA Astrophysics Data System (ADS)
Zhang, Li; He, Rongjun
2018-03-01
Single-point ultrasonic ranging suffers large detection errors when used for material flow detection on a belt conveyor where the coal is unevenly distributed or large. To address this, a material flow detection method for belt conveyors was designed based on multi-point ultrasonic counter ranging technology. The method calculates the approximate cross-sectional area of the material by locating multiple points on the surfaces of the material and the belt, and then obtains the material flow from the running speed of the belt conveyor. Test results show that the method has a smaller detection error than single-point ultrasonic ranging under the condition of large coal with uneven distribution.
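A minimal sketch of the geometry: with ranging points spaced across the belt, the material height profile integrates to an approximate cross-sectional area, and flow follows from belt speed. The sensor layout and readings are illustrative:

    import numpy as np

    def material_flow(to_material_m, to_belt_m, x_positions_m, belt_speed_mps):
        """Volumetric flow (m^3/s) from a transverse profile of range readings."""
        x = np.asarray(x_positions_m, float)
        # Material depth at each point: empty-belt range minus material range.
        depth = np.asarray(to_belt_m, float) - np.asarray(to_material_m, float)
        depth = np.clip(depth, 0.0, None)            # ignore spurious negatives
        # Trapezoidal rule gives the approximate cross-sectional area (m^2).
        area = np.sum((depth[:-1] + depth[1:]) / 2.0 * np.diff(x))
        return area * belt_speed_mps

    x = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]               # points across a 1 m belt
    to_material = [0.50, 0.38, 0.30, 0.33, 0.41, 0.50]
    to_belt = [0.50] * 6
    print(material_flow(to_material, to_belt, x, belt_speed_mps=2.5))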
Commers, Tessa; Swindells, Susan; Sayles, Harlan; Gross, Alan E; Devetten, Marcel; Sandkovsky, Uriel
2014-01-01
Errors in prescribing antiretroviral therapy (ART) often occur with the hospitalization of HIV-infected patients. The rapid identification and prevention of errors may reduce patient harm and healthcare-associated costs. A retrospective review of hospitalized HIV-infected patients was carried out between 1 January 2009 and 31 December 2011. Errors were documented as omission, underdose, overdose, duplicate therapy, incorrect scheduling and/or incorrect therapy. The time to error correction was recorded. Relative risks (RRs) were computed to evaluate patient characteristics and error rates. A total of 289 medication errors were identified in 146/416 admissions (35%). The most common was drug omission (69%). At an error rate of 31%, nucleoside reverse transcriptase inhibitors were associated with an increased risk of error when compared with protease inhibitors (RR 1.32; 95% CI 1.04-1.69) and co-formulated drugs (RR 1.59; 95% CI 1.19-2.09). Of the errors, 31% were corrected within the first 24 h, but over half (55%) were never remedied. Admissions with an omission error were 7.4 times more likely to have all errors corrected within 24 h than were admissions without an omission. Drug interactions with ART were detected on 51 occasions. For the study population (n = 177), an increased risk of admission error was observed for black (43%) compared with white (28%) individuals (RR 1.53; 95% CI 1.16-2.03) but no significant differences were observed between white patients and other minorities or between men and women. Errors in inpatient ART were common, and the majority were never detected. The most common errors involved omission of medication, and nucleoside reverse transcriptase inhibitors had the highest rate of prescribing error. Interventions to prevent and correct errors are urgently needed.
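As a reference for the relative risk figures quoted, a minimal sketch of a 2x2 relative risk with its 95% CI by the usual log-normal approximation; the counts are illustrative, chosen only to mirror the 43% versus 28% admission error rates:

    from math import exp, log, sqrt

    def relative_risk(a, b, c, d):
        """RR of error for group 1 (a errors, a+b admissions) vs group 2 (c, c+d)."""
        rr = (a / (a + b)) / (c / (c + d))
        se = sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))   # SE of log(RR)
        lo, hi = exp(log(rr) - 1.96 * se), exp(log(rr) + 1.96 * se)
        return rr, (lo, hi)

    print(relative_risk(43, 57, 28, 72))   # RR ~1.54 with its 95% CI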
Component Analysis of Errors on PERSIANN Precipitation Estimates over Urmia Lake Basin, IRAN
NASA Astrophysics Data System (ADS)
Ghajarnia, N.; Daneshkar Arasteh, P.; Liaghat, A. M.; Araghinejad, S.
2016-12-01
In this study, the PERSIANN daily dataset is evaluated from 2000 to 2011 over 69 pixels covering the Urmia Lake basin in northwest Iran. Different analytical approaches and indexes are used to examine PERSIANN's precision in the detection and estimation of rainfall rate. The residuals are decomposed into Hit, Miss, and FA (false alarm) estimation biases, while a continuous decomposition into systematic and random error components is also analyzed seasonally and categorically. A new interpretation of estimation accuracy, named "reliability on PERSIANN estimations", is introduced, and the seasonal behavior of the existing categorical/statistical measures and error components is analyzed over different rainfall rate categories. This study yields new insights into the nature of PERSIANN errors over the Urmia Lake basin, a semi-arid region in the Middle East, including the following:
- The analyzed contingency table indexes indicate better detection precision during spring and fall.
- A relatively constant level of error is generally observed among different categories. The range of precipitation estimates at different rainfall rate categories is nearly invariant, a sign of the existence of systematic error.
- A low level of reliability is observed in PERSIANN estimations at different categories, mostly associated with a high level of FA error. However, as the rate of precipitation increases, the ability and precision of PERSIANN in rainfall detection also increase.
- The systematic and random error decomposition in this area shows that PERSIANN has more difficulty in modeling the system and pattern of rainfall than bias due to rainfall uncertainties. The level of systematic error also increases considerably in heavier rainfalls.
It is also important to note that PERSIANN's error characteristics vary from season to season with the conditions and rainfall patterns of each season, which shows the necessity of a seasonally differentiated approach for the calibration of this product. Overall, we believe that the error component analyses performed in this study can substantially help further local studies on post-calibration and bias reduction of PERSIANN estimations.
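A minimal sketch of the residual decomposition used, following the common hit/miss/false-alarm partition of satellite-minus-gauge differences; the arrays and threshold are illustrative:

    import numpy as np

    def decompose_bias(sat, gauge, thresh=0.1):
        """Split total bias (sat - gauge) into hit, miss, and false-alarm parts."""
        sat, gauge = np.asarray(sat, float), np.asarray(gauge, float)
        hit = (sat >= thresh) & (gauge >= thresh)
        miss = (sat < thresh) & (gauge >= thresh)
        fa = (sat >= thresh) & (gauge < thresh)
        hit_bias = np.sum(sat[hit] - gauge[hit])
        miss_bias = -np.sum(gauge[miss])     # rain observed but not estimated
        fa_bias = np.sum(sat[fa])            # rain estimated but not observed
        return hit_bias, miss_bias, fa_bias  # sums to ~total bias

    sat = [0.0, 2.1, 0.6, 0.0, 5.0]
    gauge = [0.3, 1.5, 0.0, 0.0, 6.2]
    print(decompose_bias(sat, gauge))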
Danelichen, Victor H M; Biudes, Marcelo S; Velasque, Maísa C S; Machado, Nadja G; Gomes, Raphael S R; Vourlitis, George L; Nogueira, José S
2015-09-01
The acceleration of anthropogenic activity has increased the atmospheric carbon concentration, which causes changes in regional climate. Gross Primary Production (GPP) is an important variable in global carbon cycle studies, since it defines the rate of atmospheric carbon extraction by terrestrial ecosystems. The objective of this study was to estimate the GPP of the Amazon-Cerrado Transitional Forest with the Vegetation Photosynthesis Model (VPM), using local meteorological data and remote sensing data from MODIS and Landsat 5 TM reflectance from 2005 to 2008. The GPP was estimated using the Normalized Difference Vegetation Index (NDVI) and the Enhanced Vegetation Index (EVI) calculated from MODIS and Landsat 5 TM images. The GPP estimates were compared with measurements from a flux tower by eddy covariance. The GPP measured at the tower was consistently higher during the wet season, with an increasing trend from 2005 to 2008. The GPP estimated by VPM showed the same increasing trend as the measured GPP and had high correlation, a high Willmott's coefficient, and low error metrics in comparison with the measured GPP. These results indicate the high potential of Landsat 5 TM images for estimating the GPP of the Amazon-Cerrado Transitional Forest with VPM.
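In the VPM structure, GPP is a light-use efficiency, down-regulated by temperature and water scalars, multiplied by the light absorbed by chlorophyll. A minimal sketch with illustrative parameters (the common simplification FAPAR_chl ~ EVI is an assumption here):

    def vpm_gpp(par, evi, t_air, lswi, e0=0.05,
                t_min=0.0, t_opt=25.0, t_max=45.0, lswi_max=0.6):
        """GPP per the VPM structure: GPP = e0 * Ts * Ws * FAPAR_chl * PAR."""
        t_s = ((t_air - t_min) * (t_air - t_max)) / (
            (t_air - t_min) * (t_air - t_max) - (t_air - t_opt) ** 2)
        w_s = (1.0 + lswi) / (1.0 + lswi_max)   # water scalar from LSWI
        fapar_chl = evi                          # assumed FAPAR_chl ~ EVI
        return e0 * t_s * w_s * fapar_chl * par

    print(vpm_gpp(par=45.0, evi=0.55, t_air=27.0, lswi=0.3))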
HARMONY: a server for the assessment of protein structures
Pugalenthi, G.; Shameer, K.; Srinivasan, N.; Sowdhamini, R.
2006-01-01
Protein structure validation is an important step in computational modeling and structure determination. Stereochemical assessment of protein structures examines internal parameters such as bond lengths and Ramachandran (φ,ψ) angles. Gross structure prediction methods, such as inverse folding procedures, and structure determination, especially at low resolution, can sometimes give rise to models that are incorrect due to assignment of misfolds or mistracing of electron density maps. Such errors are not reflected as strain in internal parameters. HARMONY is a procedure that examines the compatibility between the sequence and the structure of a protein by assigning scores to individual residues and their amino acid exchange patterns after considering their local environments. Local environments are described by the backbone conformation, solvent accessibility, and hydrogen bonding patterns. We are now providing HARMONY through a web server such that users can submit their protein structure files and, if required, the alignment of homologous sequences. Scores are mapped onto the structure for subsequent examination, which is also useful for recognizing regions of possible local errors in protein structures. The HARMONY server is accessible online.
Consistent lattice Boltzmann methods for incompressible axisymmetric flows
NASA Astrophysics Data System (ADS)
Zhang, Liangqi; Yang, Shiliang; Zeng, Zhong; Yin, Linmao; Zhao, Ya; Chew, Jia Wei
2016-08-01
In this work, consistent lattice Boltzmann (LB) methods for incompressible axisymmetric flows are developed based on two efficient axisymmetric LB models available in the literature. In accord with their respective original models, the proposed axisymmetric models evolve within the framework of the standard LB method and the source terms contain no gradient calculations. Moreover, the incompressibility conditions are realized with the Hermite expansion, thus the compressibility errors arising in the existing models are expected to be reduced by the proposed incompressible models. In addition, an extra relaxation parameter is added to the Bhatnagar-Gross-Krook collision operator to suppress the effect of the ghost variable and thus the numerical stability of the present models is significantly improved. Theoretical analyses, based on the Chapman-Enskog expansion and the equivalent moment system, are performed to derive the macroscopic equations from the LB models and the resulting truncation terms (i.e., the compressibility errors) are investigated. In addition, numerical validations are carried out based on four well-acknowledged benchmark tests and the accuracy and applicability of the proposed incompressible axisymmetric LB models are verified.
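For orientation, the Bhatnagar-Gross-Krook (BGK) collision that the authors extend with an extra relaxation parameter relaxes the distributions toward a local equilibrium at a single rate. A minimal standard D2Q9 collision-step sketch (the generic LB method, not the proposed axisymmetric models):

    import numpy as np

    # D2Q9 lattice: discrete velocities and weights.
    C = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
    W = np.array([4/9] + [1/9]*4 + [1/36]*4)

    def bgk_collide(f, tau):
        """One BGK collision: f <- f - (f - f_eq) / tau at each lattice node."""
        rho = f.sum(axis=-1)                        # density
        u = (f @ C) / rho[..., None]                # macroscopic velocity
        cu = u @ C.T                                # c_i . u for each direction
        usq = (u * u).sum(axis=-1)[..., None]
        feq = rho[..., None] * W * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)
        return f - (f - feq) / tau

    f0 = np.tile(W, (16, 16, 1))                    # uniform fluid at rest
    print(bgk_collide(f0, tau=0.8).shape)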
Effect of patient setup errors on simultaneously integrated boost head and neck IMRT treatment plans
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siebers, Jeffrey V.; Keall, Paul J.; Wu Qiuwen
2005-10-01
Purpose: The purpose of this study is to determine dose delivery errors that could result from random and systematic setup errors for head-and-neck patients treated using the simultaneous integrated boost (SIB)-intensity-modulated radiation therapy (IMRT) technique. Methods and Materials: Twenty-four patients who participated in an intramural Phase I/II parotid-sparing IMRT dose-escalation protocol using the SIB treatment technique had their dose distributions reevaluated to assess the impact of random and systematic setup errors. The dosimetric effect of random setup error was simulated by convolving the two-dimensional fluence distribution of each beam with the random setup error probability density distribution. Random setup errors of σ = 1, 3, and 5 mm were simulated. Systematic setup errors were simulated by randomly shifting the patient isocenter along each of the three Cartesian axes, with each shift selected from a normal distribution. Systematic setup error distributions with Σ = 1.5 and 3.0 mm along each axis were simulated. Combined systematic and random setup errors were simulated for Σ = σ = 1.5 and 3.0 mm along each axis. For each dose calculation, the gross tumor volume (GTV) dose received by 98% of the volume (D98), clinical target volume (CTV) D90, nodes D90, cord D2, and parotid D50 and parotid mean dose were evaluated with respect to the plan used for treatment, both for the structure dose and for an effective planning target volume (PTV) with a 3-mm margin. Results: Simultaneous integrated boost-IMRT head-and-neck treatment plans were found to be less sensitive to random setup errors than to systematic setup errors. For random-only errors, dose errors exceeded 3% only when the random setup error σ exceeded 3 mm. Simulated systematic setup errors with Σ = 1.5 mm resulted in approximately 10% of plans having more than a 3% dose error, whereas Σ = 3.0 mm resulted in half of the plans having more than a 3% dose error and 28% having a 5% dose error. Combined random and systematic errors with Σ = σ = 3.0 mm resulted in more than 50% of plans having at least a 3% dose error and 38% of the plans having at least a 5% dose error. Evaluation with respect to a 3-mm expanded PTV reduced the observed dose deviations greater than 5% for the Σ = σ = 3.0 mm simulations to 5.4% of the plans simulated. Conclusions: Head-and-neck SIB-IMRT dosimetric accuracy would benefit from methods to reduce patient systematic setup errors. When GTV, CTV, or nodal volumes are used for dose evaluation, plans simulated including the effects of random and systematic errors deviate substantially from the nominal plan. The use of PTVs for dose evaluation in the nominal plan improves agreement with evaluated GTV, CTV, and nodal dose values under simulated setup errors. PTV concepts should be used for SIB-IMRT head-and-neck squamous cell carcinoma patients, although the size of the margins may be less than those used with three-dimensional conformal radiation therapy.
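The random-error simulation described, convolving each beam's two-dimensional fluence with the setup-error probability density, has a direct small-scale analogue. A sketch assuming a Gaussian density and a toy fluence map:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def blur_fluence(fluence, sigma_mm, pixel_mm=1.0):
        """Convolve a 2D fluence map with a Gaussian random setup-error PDF."""
        return gaussian_filter(fluence, sigma=sigma_mm / pixel_mm, mode="nearest")

    # Toy 60x60 mm field with sharp edges; sigma = 3 mm random setup error.
    fluence = np.zeros((100, 100))
    fluence[20:80, 20:80] = 1.0
    blurred = blur_fluence(fluence, sigma_mm=3.0)
    print(blurred[50, 18:23].round(3))   # penumbra broadened at the field edge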
A novel approach for pilot error detection using Dynamic Bayesian Networks.
Saada, Mohamad; Meng, Qinggang; Huang, Tingwen
2014-06-01
In the last decade, Dynamic Bayesian Networks (DBNs) have become one of the most attractive probabilistic modelling extensions of Bayesian Networks (BNs) for working under uncertainty from a temporal perspective. Despite this popularity, not many researchers have attempted to study the use of these networks in anomaly detection or the implications of data anomalies for the outcome of such models. An abnormal change in the modelled environment's data at a given time will cause a trailing chain effect on the data of all related environment variables in the current and consecutive time slices. Although this effect fades with time, it can still adversely affect the outcome of such models. In this paper we propose an algorithm for pilot error detection, using DBNs as the modelling framework for learning and detecting anomalous data. We base our experiments on the actions of an aircraft pilot, and a flight simulator was created for running the experiments. The proposed anomaly detection algorithm has achieved good results in detecting pilot errors and their effects on the whole system.
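The core idea, scoring new observations by their likelihood under a learned temporal model and flagging low-likelihood data as anomalous, can be sketched with a simple discrete Markov chain standing in for the full DBN; all data here are illustrative:

    import numpy as np

    def fit_transitions(seqs, n_states):
        """Estimate a transition matrix from sequences of discretized actions."""
        counts = np.ones((n_states, n_states))        # Laplace smoothing
        for s in seqs:
            for a, b in zip(s[:-1], s[1:]):
                counts[a, b] += 1
        return counts / counts.sum(axis=1, keepdims=True)

    def anomaly_score(seq, T):
        """Negative mean log-likelihood of the observed transitions."""
        return -np.mean([np.log(T[a, b]) for a, b in zip(seq[:-1], seq[1:])])

    normal = [[0, 1, 2, 3, 0, 1, 2, 3] * 4] * 20      # routine action cycles
    T = fit_transitions(normal, n_states=4)
    print(anomaly_score([0, 1, 2, 3, 0, 1], T))       # low: consistent behavior
    print(anomaly_score([0, 3, 1, 0, 2, 2], T))       # high: anomalous behavior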
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bojechko, Casey; Phillps, Mark; Kalet, Alan
Purpose: Complex treatments in radiation therapy require robust verification in order to prevent errors that can adversely affect the patient. For this purpose, the authors estimate the effectiveness of detecting errors with a “defense in depth” system composed of electronic portal imaging device (EPID) based dosimetry and a software-based system composed of rules-based and Bayesian network verifications. Methods: The authors analyzed incidents with a high potential severity score, scored as a 3 or 4 on a 4-point scale, recorded in an in-house voluntary incident reporting system, collected from February 2012 to August 2014. The incidents were categorized into different failure modes. The detectability, defined as the number of incidents that are detectable divided by the total number of incidents, was calculated for each failure mode. Results: In total, 343 incidents were used in this study. Of the incidents, 67% were related to photon external beam therapy (EBRT). The majority of the EBRT incidents were related to patient positioning, and only a small number of these could be detected by EPID dosimetry when performed prior to treatment (6%). A large fraction could be detected by in vivo dosimetry performed during the first fraction (74%). Rules-based and Bayesian network verifications were found to be complementary to EPID dosimetry, able to detect errors related to patient prescriptions and documentation, and errors unrelated to photon EBRT. Combining all of the verification steps together, 91% of all EBRT incidents could be detected. Conclusions: This study shows that the defense in depth system is potentially able to detect a large majority of incidents. The most effective EPID-based dosimetry verification is in vivo measurement during the first fraction, complemented by rules-based and Bayesian network plan checking.
Augmenting intracortical brain-machine interface with neurally driven error detectors
NASA Astrophysics Data System (ADS)
Even-Chen, Nir; Stavisky, Sergey D.; Kao, Jonathan C.; Ryu, Stephen I.; Shenoy, Krishna V.
2017-12-01
Objective. Making mistakes is inevitable, but identifying them allows us to correct or adapt our behavior to improve future performance. Current brain-machine interfaces (BMIs) make errors that need to be explicitly corrected by the user, thereby consuming time and thus hindering performance. We hypothesized that neural correlates of the user perceiving the mistake could be used by the BMI to automatically correct errors. However, it was unknown whether intracortical outcome error signals were present in the premotor and primary motor cortices, brain regions successfully used for intracortical BMIs. Approach. We report here for the first time a putative outcome error signal in spiking activity within these cortices when rhesus macaques performed an intracortical BMI computer cursor task. Main results. We decoded BMI trial outcomes shortly after and even before a trial ended with 96% and 84% accuracy, respectively. This led us to develop and implement in real-time a first-of-its-kind intracortical BMI error ‘detect-and-act’ system that attempts to automatically ‘undo’ or ‘prevent’ mistakes. The detect-and-act system works independently and in parallel to a kinematic BMI decoder. In a challenging task that resulted in substantial errors, this approach improved the performance of a BMI employing two variants of the ubiquitous Kalman velocity filter, including a state-of-the-art decoder (ReFIT-KF). Significance. Detecting errors in real-time from the same brain regions that are commonly used to control BMIs should improve the clinical viability of BMIs aimed at restoring motor function to people with paralysis.
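The outcome decoding reported (classifying trial success versus error from windows of spiking activity) can be sketched as a supervised classifier on binned spike counts; the data below are synthetic, and the classifier choice is an assumption, since the paper's decoder details are not reproduced here:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n_trials, n_units = 200, 50
    outcome = rng.integers(0, 2, n_trials)            # 0 = success, 1 = error
    # Synthetic spike counts: error trials shift some units' firing rates.
    rates = 5.0 + 0.8 * np.outer(outcome, rng.normal(size=n_units))
    counts = rng.poisson(np.clip(rates, 0.1, None))

    clf = LogisticRegression(max_iter=1000).fit(counts[:150], outcome[:150])
    print(clf.score(counts[150:], outcome[150:]))     # held-out accuracy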
Error image aware content restoration
NASA Astrophysics Data System (ADS)
Choi, Sungwoo; Lee, Moonsik; Jung, Byunghee
2015-12-01
As the resolution of TV has significantly increased, content consumers have become increasingly sensitive to the subtlest defects in TV content. This rising standard of quality demanded by consumers has posed a new challenge in today's context, where the tape-based process has transitioned to the file-based process: the transition necessitated digitalizing old archives, a process which inevitably produces errors such as disordered pixel blocks, scattered white noise, or totally missing pixels. Unsurprisingly, detecting and fixing such errors requires a substantial amount of time and human labor to meet the standard demanded by today's consumers. In this paper, we introduce a novel, automated error restoration algorithm which can be applied to different types of classic errors by utilizing adjacent images while preserving the undamaged parts of an error image as much as possible. We tested our method on error images detected by our quality check system in the KBS (Korean Broadcasting System) video archive. We are also implementing the algorithm as a plugin for a well-known NLE (non-linear editing) system, which is a familiar tool for quality control agents.
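A minimal sketch of restoring flagged pixels from temporally adjacent frames by a per-pixel median of neighbors; the paper's algorithm, including how error regions are detected and motion is handled, is more elaborate:

    import numpy as np

    def restore_from_neighbors(frames, t, error_mask):
        """Replace masked pixels of frame t with the median of adjacent frames."""
        frame = frames[t].copy()
        neighbors = np.stack([frames[t - 1], frames[t + 1]])
        frame[error_mask] = np.median(neighbors, axis=0)[error_mask]
        return frame

    # Frame 1 carries a disordered pixel block; frames 0 and 2 are clean.
    frames = np.stack([np.full((4, 4), v, dtype=float) for v in (10, 99, 12)])
    mask = np.zeros((4, 4), dtype=bool)
    mask[1:3, 1:3] = True
    print(restore_from_neighbors(frames, 1, mask))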
Choose and choose again: appearance-reality errors, pragmatics and logical ability.
Deák, Gedeon O; Enright, Brian
2006-05-01
In the Appearance/Reality (AR) task some 3- and 4-year-old children make perseverative errors: they choose the same word for the appearance and the function of a deceptive object. Are these errors specific to the AR task, or signs of a general question-answering problem? Preschoolers completed five tasks: AR; simple successive forced-choice question pairs (QP); flexible naming of objects (FN); working memory (WM) span; and indeterminacy detection (ID). AR errors correlated with QP errors. Insensitivity to indeterminacy predicted perseveration in both tasks. Neither WM span nor flexible naming predicted other measures. Age predicted sensitivity to indeterminacy. These findings suggest that AR tests measure a pragmatic understanding; specifically, different questions about a topic usually call for different answers. This understanding is related to the ability to detect indeterminacy of each question in a series. AR errors are unrelated to the ability to represent an object as belonging to multiple categories, to working memory span, or to inhibiting previously activated words.
Generalized site occupancy models allowing for false positive and false negative errors
Royle, J. Andrew; Link, W.A.
2006-01-01
Site occupancy models have been developed that allow for imperfect species detection or “false negative” observations. Such models have become widely adopted in surveys of many taxa. The most fundamental assumption underlying these models is that “false positive” errors are not possible. That is, one cannot detect a species where it does not occur. However, such errors are possible in many sampling situations for a number of reasons, and even low false positive error rates can induce extreme bias in estimates of site occupancy when they are not accounted for. In this paper, we develop a model for site occupancy that allows for both false negative and false positive error rates. This model can be represented as a two-component finite mixture model and can be easily fitted using freely available software. We provide an analysis of avian survey data using the proposed model and present results of a brief simulation study evaluating the performance of the maximum-likelihood estimator and the naive estimator in the presence of false positive errors.
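A minimal sketch of the two-component mixture likelihood, with occupancy probability psi, true-positive detection probability p11, and false-positive probability p10, fitted by maximum likelihood on simulated data (parameter names are illustrative, not the paper's notation):

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import binom

    def neg_log_lik(params, y, J):
        """Occupancy likelihood with false positives (two-part mixture)."""
        psi, p11, p10 = 1 / (1 + np.exp(-params))     # logit -> probability
        lik = psi * binom.pmf(y, J, p11) + (1 - psi) * binom.pmf(y, J, p10)
        return -np.sum(np.log(lik))

    rng = np.random.default_rng(2)
    J, n_sites = 5, 300
    occupied = rng.random(n_sites) < 0.6
    y = rng.binomial(J, np.where(occupied, 0.7, 0.05))  # detections per site
    # Start with p11 above p10 to fix the labeling of the mixture components.
    fit = minimize(neg_log_lik, x0=np.array([0.0, 1.0, -2.0]), args=(y, J))
    print(1 / (1 + np.exp(-fit.x)))                    # psi, p11, p10 estimates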
Tests for detecting overdispersion in models with measurement error in covariates.
Yang, Yingsi; Wong, Man Yu
2015-11-30
Measurement error in covariates can affect the accuracy in count data modeling and analysis. In overdispersion identification, the true mean-variance relationship can be obscured under the influence of measurement error in covariates. In this paper, we propose three tests for detecting overdispersion when covariates are measured with error: a modified score test and two score tests based on the proposed approximate likelihood and quasi-likelihood, respectively. The proposed approximate likelihood is derived under the classical measurement error model, and the resulting approximate maximum likelihood estimator is shown to have superior efficiency. Simulation results also show that the score test based on approximate likelihood outperforms the test based on quasi-likelihood and other alternatives in terms of empirical power. By analyzing a real dataset containing the health-related quality-of-life measurements of a particular group of patients, we demonstrate the importance of the proposed methods by showing that the analyses with and without measurement error correction yield significantly different results. Copyright © 2015 John Wiley & Sons, Ltd.
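For reference, the classical score test for Poisson overdispersion, without the measurement-error correction this paper develops, compares squared residuals with the Poisson variance; a minimal sketch (the fitted means are taken as given here):

    import numpy as np
    from scipy.stats import norm

    def overdispersion_score_test(y, mu):
        """One-sided score test of Var(y)=mu against Var(y)=mu(1+alpha*mu)."""
        y, mu = np.asarray(y, float), np.asarray(mu, float)
        t = np.sum((y - mu) ** 2 - y) / np.sqrt(2.0 * np.sum(mu ** 2))
        return t, norm.sf(t)      # statistic and p-value (asymptotically N(0,1))

    rng = np.random.default_rng(3)
    mu = np.full(500, 4.0)
    y_nb = rng.negative_binomial(n=4, p=4 / (4 + mu))  # overdispersed counts
    print(overdispersion_score_test(y_nb, mu))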
van der Heijden, R T; Heijnen, J J; Hellinga, C; Romein, B; Luyben, K C
1994-01-05
Measurements provide the basis for process monitoring and control as well as for model development and validation. Systematic approaches to increase the accuracy and credibility of the empirical data set are therefore of great value. In (bio)chemical conversions, linear conservation relations, such as the balance equations for charge, enthalpy, and/or chemical elements, can be employed to relate conversion rates. In a practical situation, some of these rates will be measured (in effect, be calculated directly from primary measurements of, e.g., concentrations and flow rates), while others can or cannot be calculated from the measured ones. When certain measured rates can also be calculated from other measured rates, the set of equations is redundant, and the accuracy and credibility of the measured rates can be improved by balancing and gross error diagnosis, respectively. The balanced conversion rates are more accurate and form a consistent set of data, which is more suitable for further application (e.g., to calculate nonmeasured rates) than the raw measurements. Such an approach has drawn attention in previous studies. The current study deals mainly with the problem of mathematically classifying the conversion rates into balanceable and calculable rates, given the subset of measured rates. The significance of this problem is illustrated with some examples. It is shown that a simple matrix equation can be derived that contains the vector of measured conversion rates and the redundancy matrix R. Matrix R plays a predominant role in the classification problem. In supplementary articles, the significance of the redundancy matrix R for an improved gross error diagnosis approach will be shown. In addition, efficient equations have been derived to calculate the balanceable and/or calculable rates. The method is completely based on matrix algebra (principally different from the graph-theoretical approach), and it is easily implemented in a computer program. (c) 1994 John Wiley & Sons, Inc.
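A minimal sketch of the central object, the redundancy matrix R, for a toy conservation matrix: nonzero columns of R mark balanceable measured rates, and the residual R times the measured-rate vector is the basis for gross error diagnosis. The example matrix is illustrative, not from the paper:

    import numpy as np

    def redundancy_matrix(E, measured):
        """R relates measured rates: R @ x_measured ~= 0 if data are consistent.

        E: conservation matrix (constraints x rates); measured: column indices.
        """
        unmeasured = [j for j in range(E.shape[1]) if j not in measured]
        Em, Eu = E[:, measured], E[:, unmeasured]
        # Project the measured columns onto the complement of range(Eu).
        R = (np.eye(E.shape[0]) - Eu @ np.linalg.pinv(Eu)) @ Em
        return R

    # Toy two-element balance over 4 conversion rates; rates 0-2 are measured.
    E = np.array([[1.0, -1.0, -1.0, 0.0],
                  [0.0,  0.2, -0.1, -1.0]])
    R = redundancy_matrix(E, measured=[0, 1, 2])
    # A measured rate is balanceable iff its column of R is nonzero:
    print(np.any(np.abs(R) > 1e-12, axis=0))
    # A residual far from 0 signals a gross error in the measurements:
    print(R @ np.array([10.0, 6.0, 4.0]))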
Spatial heterogeneity of type I error for local cluster detection tests
2014-01-01
Background Like power, the type I error of cluster detection tests (CDTs) should be spatially assessed. Indeed, both the type I error and the power of CDTs have a spatial component, as CDTs both detect and locate clusters. In the case of type I error, the spatial distribution of wrongly detected clusters (WDCs) can be particularly affected by edge effect. This simulation study aims to describe the spatial distribution of WDCs and to confirm and quantify the presence of edge effect. Methods A simulation of 40 000 datasets was performed under the null hypothesis of risk homogeneity. The simulation design used realistic parameters from survey data on birth defects and, in particular, two baseline risks. The simulated datasets were analyzed using Kulldorff’s spatial scan as a commonly used test whose behavior is otherwise well known. To describe the spatial distribution of type I error, we defined the participation rate for each spatial unit of the region. We used this indicator in a new statistical test proposed to confirm, as well as quantify, the edge effect. Results The predefined type I error of 5% was respected for both baseline risks. Results showed a strong edge effect in participation rates, with a descending gradient from center to edge, and WDCs more often centrally situated. Conclusions In routine analysis of real data, clusters on the edge of the region should be carefully considered, as they rarely occur when there is no cluster. Further work is needed to combine results from power studies with this work in order to optimize CDT performance.
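The participation rate can be sketched as a simple count over null simulations (hypothetical cluster output; the study itself ran Kulldorff's scan on 40 000 simulated datasets):

    import numpy as np

    def participation_rates(wdc_members, n_units):
        """Fraction of false-positive simulations in which each unit is in the WDC.

        wdc_members: list of index arrays, one per null simulation that
        (wrongly) detected a significant cluster.
        """
        counts = np.zeros(n_units)
        for members in wdc_members:
            counts[np.asarray(members)] += 1
        return counts / max(len(wdc_members), 1)

    # Toy region of 10 units; three null simulations each produced a WDC.
    print(participation_rates([[0, 1], [1, 2], [9]], n_units=10))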
Altitude deviations: Breakdowns of an error-tolerant system
NASA Technical Reports Server (NTRS)
Palmer, Everett A.; Hutchins, Edwin L.; Ritter, Richard D.; Vancleemput, Inge
1993-01-01
Pilot reports of aviation incidents to the Aviation Safety Reporting System (ASRS) provide a window on the problems occurring in today's airline cockpits. The narratives of 10 pilot reports of errors made in the automation-assisted altitude-change task are used to illustrate some of the issues of pilots interacting with automatic systems. These narratives are then used to construct a description of the cockpit as an information processing system. The analysis concentrates on the error-tolerant properties of the system and on how breakdowns can occasionally occur. An error-tolerant system can detect and correct its internal processing errors. The cockpit system consists of two or three pilots supported by autoflight, flight-management, and alerting systems. These humans and machines have distributed access to clearance information and perform redundant processing of information. Errors can be detected as deviations from either expected behavior or as deviations from expected information. Breakdowns in this system can occur when the checking and cross-checking tasks that give the system its error-tolerant properties are not performed because of distractions or other task demands. Recommendations based on the analysis for improving the error tolerance of the cockpit system are given.
The next organizational challenge: finding and addressing diagnostic error.
Graber, Mark L; Trowbridge, Robert; Myers, Jennifer S; Umscheid, Craig A; Strull, William; Kanter, Michael H
2014-03-01
Although health care organizations (HCOs) are intensely focused on improving the safety of health care, efforts to date have almost exclusively targeted treatment-related issues. The literature confirms that the approaches HCOs use to identify adverse medical events are not effective in finding diagnostic errors, so the initial challenge is to identify cases of diagnostic error. WHY HEALTH CARE ORGANIZATIONS NEED TO GET INVOLVED: HCOs are preoccupied with many quality- and safety-related operational and clinical issues, including performance measures. The case for paying attention to diagnostic errors, however, is based on the following four points: (1) diagnostic errors are common and harmful, (2) high-quality health care requires high-quality diagnosis, (3) diagnostic errors are costly, and (4) HCOs are well positioned to lead the way in reducing diagnostic error. FINDING DIAGNOSTIC ERRORS: Current approaches to identifying diagnostic errors, such as occurrence screens, incident reports, autopsy, and peer review, were not designed to detect diagnostic issues (or problems of omission in general) and/or rely on voluntary reporting. The realization that the existing tools are inadequate has spurred efforts to identify novel tools that could be used to discover diagnostic errors or breakdowns in the diagnostic process that are associated with errors. New approaches--Maine Medical Center's case-finding of diagnostic errors by facilitating direct reports from physicians and Kaiser Permanente's electronic health record-based reports that detect process breakdowns in the follow-up of abnormal findings--are described in case studies. By raising awareness and implementing targeted programs that address diagnostic error, HCOs may begin to play an important role in addressing the problem of diagnostic error.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weismann, J.; Young, C.; Masciulli, S.
2007-07-01
Lowry Air Force Base (Lowry) was closed in September 1994 as part of the Base Realignment and Closure (BRAC) program, and the base was transferred to the Lowry Redevelopment Authority in 1995. As part of the due diligence activities conducted by the Air Force, a series of remedial investigations were conducted across the base. A closed waste landfill, designated Operable Unit 2 (OU 2), was initially assessed in a 1990 Remedial Investigation (RI; [1]). A Supplemental Remedial Investigation was conducted in 1995 [2] and additional studies were conducted in a 1998 Focused Feasibility Study [3]. The three studies indicated that gross alpha, gross beta, and uranium concentrations were consistently above regulatory standards and that there were detections of other radionuclides at low concentrations. Results from previous investigations at OU 2 have shown elevated gross alpha, gross beta, and uranium concentrations in groundwater, surface water, and sediments. The US Air Force has sought to understand the provenance of these radionuclides in order to determine whether they could be due to leachates from buried radioactive materials within the landfill or whether they are naturally occurring. The Air Force and regulators agreed to use a one-year monitoring and sampling program to seek to explain the origins of the radionuclides. Over the course of the one-year program, dissolved uranium levels greater than the 30 {mu}g/L Maximum Contaminant Level (MCL) were consistently found in both up-gradient and down-gradient wells at OU 2. Elevated gross alpha and gross beta measurements that were observed during prior investigations and confirmed during the LTM were found to correlate with high dissolved uranium content in groundwater. If gross alpha values are corrected to exclude uranium and radon contributions in accordance with US EPA guidance, then the 15 pCi/L gross alpha level is not exceeded. The large dataset also allowed development of gross alpha to total uranium correlation factors, so that gross alpha action levels can be applied to future long-term landfill monitoring to track radiological conditions at lower cost. Ratios of isotopic uranium results were calculated to test whether the elevated uranium displayed signatures indicative of military use. The results of all ratio testing strongly support the conclusion that the uranium found in groundwater, surface water, and sediment at OU 2 is naturally occurring and has not undergone anthropogenic enrichment or processing. U-234:U-238 ratios also show that a disequilibrium state, i.e., a ratio greater than 1, exists throughout OU 2, which is indicative of long-term aqueous transport in aged aquifers. These results all support the conclusion that the elevated uranium observed at OU 2 is due to the high concentrations in the regional watershed. Based on the results of this monitoring program, we concluded that the elevated uranium concentrations measured in OU 2 groundwater, surface water, and sediment are due to the naturally occurring uranium content of the regional watershed and are not the result of waste burials in the former landfill. Several lines of evidence indicate that natural uranium was concentrated beneath OU 2 in the geologic past and that the higher uranium concentrations in down-gradient wells are the result of geochemical processes, not of uranium ore disposal. These results therefore provide the data necessary to support radiological closure of OU 2. (authors)
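The uranium-excluded screening step mentioned above can be illustrated with a short calculation. This is a hedged sketch: the ~0.67 pCi/ug specific activity of natural uranium and the example concentrations are illustrative assumptions, not values from the report.

```python
# Minimal sketch of the uranium-excluded ("adjusted") gross alpha screening
# described above. The 0.67 pCi/ug conversion for natural uranium and the
# example well values are illustrative assumptions.
U_PCI_PER_UG = 0.67          # approx. activity of natural uranium per microgram
MCL_ADJ_GROSS_ALPHA = 15.0   # pCi/L, EPA adjusted gross alpha limit

def adjusted_gross_alpha(gross_alpha_pci_l, uranium_ug_l):
    """Subtract the uranium contribution from a gross alpha measurement."""
    return gross_alpha_pci_l - uranium_ug_l * U_PCI_PER_UG

# Hypothetical well: 40 pCi/L gross alpha driven by 45 ug/L dissolved uranium.
adj = adjusted_gross_alpha(40.0, 45.0)
print(f"adjusted gross alpha = {adj:.1f} pCi/L, "
      f"exceeds limit: {adj > MCL_ADJ_GROSS_ALPHA}")
# -> 9.9 pCi/L, exceeds limit: False (the uranium-corrected value passes)
```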
Ultra-deep mutant spectrum profiling: improving sequencing accuracy using overlapping read pairs.
Chen-Harris, Haiyin; Borucki, Monica K; Torres, Clinton; Slezak, Tom R; Allen, Jonathan E
2013-02-12
High throughput sequencing is beginning to make a transformative impact in the area of viral evolution. Deep sequencing has the potential to reveal the mutant spectrum within a viral sample at high resolution, thus enabling the close examination of viral mutational dynamics both within and between hosts. The challenge, however, is to accurately model the errors in the sequencing data and differentiate real viral mutations, particularly those that exist at low frequencies, from sequencing errors. We demonstrate that overlapping read pairs (ORPs), generated by combining short-fragment sequencing libraries with longer sequencing reads, significantly reduce sequencing error rates and improve rare variant detection accuracy. Using this sequencing protocol and an error model optimized for variant detection, we are able to capture a large number of genetic mutations present within a viral population at ultra-low frequency levels (<0.05%). Our rare variant detection strategies have important implications beyond viral evolution and can be applied to any basic and clinical research area that requires the identification of rare mutations.
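The core of the ORP idea can be sketched in a few lines: where both reads of a pair cover the same base, the call is kept only if the reads agree, so independent sequencing errors are masked rather than propagated into variant calls. This is a deliberately simplified illustration (a real pipeline would reverse-complement the reverse read, use quality scores, and handle partial overlaps), not the authors' implementation.

```python
# Toy consensus over a fully overlapping, same-orientation read pair:
# concordant bases are kept, discordant bases are masked as "N" since a
# disagreement most likely reflects a sequencing error in one read.
def merge_overlap(read1, read2):
    return "".join(b1 if b1 == b2 else "N" for b1, b2 in zip(read1, read2))

print(merge_overlap("ACGTA", "ACGCA"))  # -> ACGNA
```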
Decoy-state quantum key distribution with more than three types of photon intensity pulses
NASA Astrophysics Data System (ADS)
Chau, H. F.
2018-04-01
The decoy-state method closes source security loopholes in quantum key distribution (QKD) using a laser source. In this method, accurate estimates of the detection rates of vacuum and single-photon events, plus the error rate of single-photon events, are needed to give a good enough lower bound on the secret key rate. Nonetheless, the current estimation method for these detection and error rates, which uses three types of photon intensities, is accurate only to about 1% relative error. Here I report an experimentally feasible way that greatly improves these estimates and hence increases the one-way key rate of the BB84 QKD protocol with unbiased basis selection by at least 20% on average in realistic settings. The major tricks are the use of more than three types of photon intensities plus the fact that estimating bounds on the above detection and error rates is numerically stable, even though these bounds are related to the inversion of a high-condition-number matrix.
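The estimation step can be sketched as a small linear-algebra problem: the observed gain at each intensity is a Poisson mixture of the photon-number yields, so with more intensities than unknowns the truncated system can be solved for the vacuum and single-photon yields. The sketch below uses an assumed set of five intensities and hypothetical yields, and a plain least-squares solve in place of the paper's rigorous bounding; note the large condition number the abstract alludes to.

```python
# Toy version of the decoy-state estimation: gains Q_i = sum_n P(n|mu_i) * Y_n
# form a linear system in the yields Y_n. Truncating at n_max and solving by
# least squares estimates Y_0 and Y_1. Illustrative only; the paper derives
# rigorous bounds, which this naive inversion does not.
import numpy as np
from math import exp, factorial

mus = np.array([0.1, 0.2, 0.35, 0.5, 0.7])     # five intensities (assumed)
true_Y = np.array([1e-5, 0.01, 0.02, 0.03])    # hypothetical yields Y_0..Y_3
n_max = len(true_Y) - 1

# Poisson photon-number weights P(n | mu_i) for n = 0..n_max
A = np.array([[exp(-mu) * mu**n / factorial(n) for n in range(n_max + 1)]
              for mu in mus])
Q = A @ true_Y                                  # noiseless simulated gains

Y_est, *_ = np.linalg.lstsq(A, Q, rcond=None)
print("condition number:", np.linalg.cond(A))   # typically large, as noted above
print("Y0, Y1 estimates:", Y_est[:2])
```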
Effect of nonideal square-law detection on static calibration in noise-injection radiometers
NASA Technical Reports Server (NTRS)
Hearn, C. P.
1984-01-01
The effect of nonideal square-law detection on the static calibration for a class of Dicke radiometers is examined. It is shown that fourth-order curvature in the detection characteristic adds a nonlinear term to the linear calibration relationship normally ascribed to noise-injection, balanced Dicke radiometers. The minimum error, based on an optimum straight-line fit to the calibration curve, is derived in terms of the power series coefficients describing the input-output characteristics of the detector. These coefficients can be determined by simple measurements, and detection nonlinearity is, therefore, quantitatively related to radiometric measurement error.
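The idea translates directly into a small numerical experiment: model the detector transfer characteristic as a power series, fit the best straight line over the operating range, and read off the residual as the calibration error. The coefficients below are illustrative assumptions, and an ordinary least-squares line stands in for the optimum straight-line fit derived in the paper.

```python
# Sketch of the calibration-error idea: a quadratic term in the detector's
# power characteristic (standing in for fourth-order curvature in voltage)
# bends the otherwise linear calibration curve; the worst residual of the
# best-fit line quantifies the resulting measurement error.
import numpy as np

a1, a2 = 1.0, 0.02                 # assumed detector power-series coefficients
p = np.linspace(0.0, 5.0, 200)     # normalized input noise power
v = a1 * p + a2 * p**2             # nonideal detector output

slope, intercept = np.polyfit(p, v, 1)      # least-squares straight-line fit
residual = v - (slope * p + intercept)
print(f"max calibration error: {np.max(np.abs(residual)):.4f} (output units)")
```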
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takahashi, R; Kamima, T; Tachibana, H
2016-06-15
Purpose: To investigate the effect of using trajectory log files from the linear accelerator for Clarkson-based independent dose verification of IMRT and VMAT plans. Methods: A CT-based independent dose verification software package (Simple MU Analysis: SMU, Triangle Products, Japan) with a Clarkson-based algorithm was modified to calculate dose using the trajectory log files. Eclipse, with the three techniques of step-and-shoot (SS), sliding window (SW), and RapidArc (RA), was used as the treatment planning system (TPS). In this study, clinically approved IMRT and VMAT plans for prostate and head and neck (HN) at two institutions were retrospectively analyzed to assess the dose deviation between the DICOM-RT plan (PL) and the trajectory log file (TJ). An additional analysis was performed to evaluate the MLC error detection capability of SMU when the trajectory log files were modified by adding systematic errors (0.2, 0.5, 1.0 mm) and random errors (5, 10, 30 mm) to the actual MLC positions. Results: The dose deviations for prostate and HN at the two sites were 0.0% and 0.0% in SS, 0.1±0.0% and 0.1±0.1% in SW, and 0.6±0.5% and 0.7±0.9% in RA, respectively. The MLC error detection tests showed that the HN IMRT plans were the most sensitive: a 0.2 mm systematic error produced a 0.7% dose deviation on average. The random MLC errors did not affect the dose deviation. Conclusion: The use of trajectory log files, which contain the actual MLC positions, gantry angles, etc., should make independent verification more effective. The tolerance level for the secondary check using the trajectory file may be similar to that of the verification using the DICOM-RT plan file. In terms of the resolution of MLC positional error detection, the secondary check could detect MLC position errors corresponding to the treatment sites and techniques. This research is partially supported by the Japan Agency for Medical Research and Development (AMED).
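The error-injection part of the study can be sketched as follows; the array layout and function name are assumptions for illustration, not the SMU interface.

```python
# Sketch of the error-injection test: perturb logged MLC leaf positions with
# a systematic offset or random noise before recomputing dose, to probe what
# magnitude of error the secondary check can flag.
import numpy as np

rng = np.random.default_rng(0)

def inject_mlc_errors(leaf_positions_mm, systematic_mm=0.0, random_mm=0.0):
    """Return a perturbed copy of an (n_control_points, n_leaves) array."""
    noise = (rng.normal(0.0, random_mm, size=leaf_positions_mm.shape)
             if random_mm else 0.0)
    return leaf_positions_mm + systematic_mm + noise

log = np.zeros((3, 4))                            # toy trajectory-log leaf bank
print(inject_mlc_errors(log, systematic_mm=0.2))  # 0.2 mm systematic shift
print(inject_mlc_errors(log, random_mm=5.0))      # 5 mm random perturbation
```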
Subthreshold muscle twitches dissociate oscillatory neural signatures of conflicts from errors.
Cohen, Michael X; van Gaal, Simon
2014-02-01
We investigated the neural systems underlying conflict detection and error monitoring during rapid online error correction/monitoring. We combined data from four separate cognitive tasks and 64 subjects in which EEG and EMG (muscle activity from the thumb used to respond) were recorded. In typical neuroscience experiments, behavioral responses are classified as "error" or "correct"; however, closer inspection of our data revealed that correct responses were often accompanied by "partial errors": a muscle twitch of the incorrect hand ("mixed correct trials," ~13% of the trials). We found that these muscle twitches dissociated conflicts from errors in time-frequency domain analyses of the EEG data. In particular, both mixed-correct trials and full error trials were associated with enhanced theta-band power (4-9 Hz) compared to correct trials. However, full errors were additionally associated with power and frontal-parietal synchrony in the delta band. Single-trial robust multiple regression analyses revealed a significant modulation of theta power as a function of partial error correction time, thus linking trial-to-trial fluctuations in power to conflict. Furthermore, single-trial correlation analyses revealed a qualitative dissociation between conflict and error processing, such that mixed correct trials were associated with positive theta-RT correlations whereas full error trials were associated with negative delta-RT correlations. These findings shed new light on the local and global network mechanisms of conflict monitoring and error detection, and their relationship to online action adjustment.
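The band-power measure behind these results is standard and easy to reproduce: band-pass the EEG, take the analytic-signal envelope, and average its squared magnitude. A minimal sketch follows (not the authors' pipeline; the sampling rate and the delta-band edges are assumptions).

```python
# Band power via band-pass filtering plus Hilbert envelope, for the theta
# (4-9 Hz) and delta (assumed 1-4 Hz) bands discussed above.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 512  # sampling rate in Hz (assumed)

def band_power(eeg, low, high):
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    envelope = np.abs(hilbert(filtfilt(b, a, eeg)))
    return np.mean(envelope**2)

t = np.arange(0, 2.0, 1 / fs)
trial = np.sin(2 * np.pi * 6 * t) + 0.5 * np.random.randn(t.size)  # 6 Hz + noise
print("theta power:", band_power(trial, 4, 9))
print("delta power:", band_power(trial, 1, 4))
```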
Symbolic Analysis of Concurrent Programs with Polymorphism
NASA Technical Reports Server (NTRS)
Rungta, Neha Shyam
2010-01-01
The current trend of multi-core and multi-processor computing is causing a paradigm shift from inherently sequential to highly concurrent and parallel applications. Certain thread interleavings, data input values, or combinations of both often cause errors in the system. Systematic verification techniques such as explicit state model checking and symbolic execution are extensively used to detect errors in such systems [7, 9]. Explicit state model checking enumerates possible thread schedules and input data values of a program in order to check for errors [3, 9]. To partially mitigate the state space explosion from data input values, symbolic execution techniques substitute data input values with symbolic values [5, 7, 6]. Explicit state model checking and symbolic execution techniques used in conjunction with exhaustive search techniques such as depth-first search are unable to detect errors in medium to large-sized concurrent programs because the number of behaviors caused by data and thread non-determinism is extremely large. We present an overview of abstraction-guided symbolic execution for concurrent programs that detects errors manifested by a combination of thread schedules and data values [8]. The technique generates a set of key program locations relevant in testing the reachability of the target locations. The symbolic execution is then guided along these locations in an attempt to generate a feasible execution path to the error state. This allows the execution to focus in parts of the behavior space more likely to contain an error.
Registration of 2D to 3D joint images using phase-based mutual information
NASA Astrophysics Data System (ADS)
Dalvi, Rupin; Abugharbieh, Rafeef; Pickering, Mark; Scarvell, Jennie; Smith, Paul
2007-03-01
Registration of two-dimensional to three-dimensional orthopaedic medical image data has important applications, particularly in the area of image-guided surgery and sports medicine. Fluoroscopy to computed tomography (CT) registration is an important case, wherein digitally reconstructed radiographs derived from the CT data are registered to the fluoroscopy data. Traditional registration metrics such as intensity-based mutual information (MI) typically work well but often suffer from gross misregistration errors when the image to be registered contains a partial view of the anatomy visible in the target image. Phase-based MI provides a robust alternative similarity measure which, in addition to possessing the general robustness and noise immunity that MI provides, also employs local phase information in the registration process, which makes it less susceptible to the aforementioned errors. In this paper, we propose using the complex wavelet transform for computing image phase information and incorporating it into a phase-based MI measure for image registration. Tests on a CT volume and six fluoroscopy images of the knee are presented. The femur and the tibia in the CT volume were individually registered to the fluoroscopy images using intensity-based MI, gradient-based MI, and phase-based MI. Errors in the coordinates of fiducials present in the bone structures were used to assess the accuracy of the different registration schemes. Quantitative results demonstrate that the performance of intensity-based MI was the worst. Gradient-based MI performed slightly better, while phase-based MI results were the best, consistently producing the lowest errors.
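A minimal sketch of a phase-based MI measure is shown below. For brevity it extracts local phase with a 1-D analytic signal along image rows, a crude stand-in for the complex wavelet transform proposed in the paper, and then computes MI over the joint phase histogram.

```python
# Phase-based mutual information sketch: compare images via the MI of their
# local phase maps rather than raw intensities. Simplified illustration.
import numpy as np
from scipy.signal import hilbert

def local_phase(img):
    # Analytic-signal phase along rows; a real system would use a 2-D
    # complex wavelet transform as in the paper.
    return np.angle(hilbert(img.astype(float), axis=1))

def mutual_information(x, y, bins=32):
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

a = np.random.rand(64, 64)
b = np.roll(a, 2, axis=1) + 0.05 * np.random.rand(64, 64)  # slightly shifted copy
print("phase MI:", mutual_information(local_phase(a), local_phase(b)))
```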
Yu, Manzhu; Yang, Chaowei
2016-01-01
Dust storms are devastating natural disasters that cost billions of dollars and many human lives every year. Using the Non-Hydrostatic Mesoscale Dust Model (NMM-dust), this research studies how different spatiotemporal resolutions of two input parameters (soil moisture and greenness vegetation fraction) impact the sensitivity and accuracy of a dust model. Experiments are conducted by simulating dust concentration during July 1-7, 2014, for the target area covering part of Arizona and California (31, 37, -118, -112), with a resolution of ~3 km. Using ground-based and satellite observations, this research validates the temporal evolution and spatial distribution of dust storm output from the NMM-dust, and quantifies model error using four evaluation metrics (mean bias error, root mean square error, correlation coefficient, and fractional gross error). Results showed that the default configuration of NMM-dust (with a low spatiotemporal resolution of both input parameters) generates an overestimation of Aerosol Optical Depth (AOD). Although it is able to qualitatively reproduce the temporal trend of the dust event, the default configuration of NMM-dust cannot fully capture its actual spatial distribution. Adjusting the spatiotemporal resolution of the soil moisture and vegetation cover datasets showed that the model is sensitive to both parameters. Increasing the spatiotemporal resolution of soil moisture effectively reduces the model's overestimation of AOD, while increasing the spatiotemporal resolution of vegetation cover changes the spatial distribution of the reproduced dust storm. Adjusting both parameters enables NMM-dust to capture the spatial distribution of dust storms and to reproduce more accurate dust concentrations.
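The four evaluation metrics named above have standard definitions, sketched below (the paper's exact normalizations, e.g. for fractional gross error, may differ slightly; the AOD values are hypothetical).

```python
# Common forms of the four model-evaluation metrics used above.
import numpy as np

def metrics(model, obs):
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    mbe  = np.mean(model - obs)                           # mean bias error
    rmse = np.sqrt(np.mean((model - obs) ** 2))           # root mean square error
    r    = np.corrcoef(model, obs)[0, 1]                  # correlation coefficient
    fge  = 2.0 * np.mean(np.abs((model - obs) / (model + obs)))  # fractional gross error
    return mbe, rmse, r, fge

aod_model = [0.45, 0.60, 0.52, 0.70]   # hypothetical modeled AOD
aod_obs   = [0.30, 0.42, 0.40, 0.55]   # hypothetical observed AOD
print(metrics(aod_model, aod_obs))
```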
Monitoring robot actions for error detection and recovery
NASA Technical Reports Server (NTRS)
Gini, M.; Smith, R.
1987-01-01
Reliability is a serious problem in computer-controlled robot systems. Although robots serve successfully in relatively simple applications such as painting and spot welding, their potential in areas such as automated assembly is hampered by programming problems. A program for assembling parts may be logically correct, execute correctly on a simulator, and even execute correctly on a robot most of the time, yet still fail unexpectedly in the face of real-world uncertainties. Recovery from such errors is far more complicated than recovery from simple controller errors, since even expected errors can often manifest themselves in unexpected ways. Here, a novel approach is presented for improving robot reliability. Instead of anticipating errors, the researchers use knowledge-based programming techniques so that the robot can autonomously exploit knowledge about its task and environment to detect and recover from failures. A preliminary experiment with a system they designed and constructed is described.
Text familiarity, word frequency, and sentential constraints in error detection.
Pilotti, Maura; Chodorow, Martin; Schauss, Frances
2009-12-01
The present study examines whether the frequency of an error-bearing word and its predictability, arising from sentential constraints and text familiarity, either independently or jointly, would impair error detection by making proofreading driven by top-down processes. Prior to a proofreading task, participants were asked to read, copy, memorize, or paraphrase sentences, half of which contained errors. These tasks represented a continuum of progressively more demanding and time-consuming activities, which were thought to lead to comparable increases in text familiarity and thus predictability. Proofreading times were unaffected by whether the sentences had been encountered earlier. Proofreading was slower and less accurate for high-frequency words and for highly constrained sentences. Prior memorization produced divergent effects on accuracy depending on sentential constraints. The latter finding suggested that a substantial level of predictability, such as that produced by memorizing highly constrained sentences, can increase the probability of overlooking errors.
NASA Astrophysics Data System (ADS)
Lohrmann, Carol A.
1990-03-01
Interoperability of commercial Land Mobile Radios (LMR) and the military's tactical LMR is highly desirable if the U.S. government is to respond effectively in a national emergency or in a joint military operation. This ability to talk securely and immediately across agency and military service boundaries is often overlooked. One way to ensure interoperability is to develop and promote Federal communication standards (FS). This thesis surveys one area of the proposed FS 1024 for LMRs, namely the error detection and correction (EDAC) of the message indicator (MI) bits used for cryptographic synchronization. Several EDAC codes are examined (Hamming, Quadratic Residue, hard-decision Golay, and soft-decision Golay), tested on three FORTRAN-programmed channel simulations (INMARSAT, Gaussian, and constant burst width), and compared and analyzed (based on bit error rates and the percentage of error-free super-frame runs) so that the best code can be recommended. Of the four codes under study, the soft-decision Golay code (24,12) is evaluated to be the best. This finding is based on the code's ability to detect and correct errors as well as the relative ease of implementing the algorithm.
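The kind of Monte Carlo comparison the thesis performs can be illustrated with a compact single-error-correcting code. The sketch below uses a Hamming(7,4) syndrome decoder, one of the code families examined, on a memoryless binary symmetric channel as a simple stand-in for the three channel simulations; the crossover probability is an assumption.

```python
# Hamming(7,4) encode/decode and a coded-vs-uncoded BER estimate on a
# binary symmetric channel. Illustrative stand-in for the thesis's
# code/channel comparisons.
import numpy as np

rng = np.random.default_rng(1)

H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])   # column j holds the binary index j+1

def encode(d):                          # d: 4 data bits
    c = np.zeros(7, dtype=int)
    c[[2, 4, 5, 6]] = d                 # data at positions 3, 5, 6, 7
    c[0] = c[2] ^ c[4] ^ c[6]           # parity at position 1
    c[1] = c[2] ^ c[5] ^ c[6]           # parity at position 2
    c[3] = c[4] ^ c[5] ^ c[6]           # parity at position 4
    return c

def decode(r):
    s = (H @ r) % 2
    pos = s[0] + 2 * s[1] + 4 * s[2]    # syndrome = 1-indexed error position
    if pos:
        r = r.copy()
        r[pos - 1] ^= 1                 # correct the single-bit error
    return r[[2, 4, 5, 6]]

p, trials, errs = 0.01, 20000, 0        # BSC crossover probability (assumed)
for _ in range(trials):
    d = rng.integers(0, 2, 4)
    r = encode(d) ^ (rng.random(7) < p).astype(int)
    errs += np.count_nonzero(decode(r) != d)
print("coded BER ~", errs / (4 * trials), "vs raw", p)
```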
An approach to develop an algorithm to detect the climbing height in radial-axial ring rolling
NASA Astrophysics Data System (ADS)
Husmann, Simon; Hohmann, Magnus; Kuhlenkötter, Bernd
2017-10-01
Radial-axial ring rolling is the most widely used forming process for producing seamless rings, which are applied in various industries such as the energy sector, aerospace technology, and the automotive industry. Because forming takes place simultaneously in two opposite rolling gaps and ring rolling is a bulk forming process, different errors can occur during the rolling process. Ring climbing is one of the most frequent process errors, leading to a distortion of the ring's cross section and a deformation of the ring's geometry. The conventional sensors of a radial-axial rolling machine cannot detect this error. It is therefore a common strategy to roll a slightly larger ring, so that randomly occurring process errors can be compensated afterwards by removing the additional material. The LPS installed an image processing system at the radial rolling gap of its ring rolling machine to enable the recognition and measurement of climbing rings and thereby reduce the additional material. This paper presents the algorithm that enables the image processing system to detect a climbing ring and ensures comparably reliable results for the measurement of the rings' climbing height.
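The measurement step can be sketched simply: in a binarized side-view image of the radial rolling gap, the climbing height follows from the topmost ring pixel relative to a reference row calibrated to the nominal ring top. The sketch below is a toy stand-in for the LPS image-processing system, with assumed names and values.

```python
# Toy climbing-height measurement on a binarized side-view image: find the
# uppermost foreground row and compare it to a calibrated reference row.
import numpy as np

def climbing_height_px(binary_img, reference_row):
    """Pixels by which the ring's upper edge rises above the reference row."""
    rows = np.where(binary_img.any(axis=1))[0]
    if rows.size == 0:
        return 0
    return max(0, reference_row - rows.min())

img = np.zeros((100, 200), dtype=bool)
img[40:90, 50:150] = True                               # toy ring cross-section
print(climbing_height_px(img, reference_row=55), "px")  # ring climbed 15 px
```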
Using medication list--problem list mismatches as markers of potential error.
Carpenter, James D.; Gorman, Paul N.
2002-01-01
The goal of this project was to specify and develop an algorithm that checks for drug and problem list mismatches in an electronic medical record (EMR). The algorithm is based on the premise that a patient's problem list and medication list should agree, and that a mismatch may indicate a medication error. Successful development of this algorithm could mean detection of some errors, such as medication orders entered into the wrong patient record or drug therapy omissions, that are not otherwise detected via automated means. Additionally, mismatches may identify opportunities to improve problem list integrity. To assess the concept's feasibility, this study compared medications listed in a pharmacy information system with findings in an online nursing adult admission assessment, serving as a proxy for the problem list. Where drug and problem list mismatches were discovered, examination of the patient record confirmed the mismatch and identified any potential causes. Evaluation of the algorithm in diabetes treatment indicates that it successfully detects both potential medication errors and opportunities to improve problem list completeness. This algorithm, once fully developed and deployed, could prove a valuable way to improve the patient problem list and could decrease the risk of medication error. PMID:12463796
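The premise maps naturally onto a small mismatch check: look up each medication's indications, flag drugs whose indications intersect no listed problem, and flag problems not covered by any drug. The sketch below uses a tiny hypothetical drug-indication table in place of a real knowledge base; it is an illustration of the premise, not the authors' algorithm.

```python
# Mismatch check between a medication list and a problem list, using a
# hypothetical drug-indication mapping.
DRUG_INDICATIONS = {
    "metformin": {"diabetes mellitus"},
    "lisinopril": {"hypertension", "heart failure"},
    "insulin glargine": {"diabetes mellitus"},
}

def find_mismatches(med_list, problem_list):
    problems = set(problem_list)
    # Drugs with no indication matching any listed problem.
    unmatched_drugs = [d for d in med_list
                       if not DRUG_INDICATIONS.get(d, set()) & problems]
    # Problems not covered by any listed drug's indications.
    treated = set().union(*(DRUG_INDICATIONS.get(d, set()) for d in med_list))
    unmatched_problems = [p for p in problems if p not in treated]
    return unmatched_drugs, unmatched_problems

print(find_mismatches(["metformin", "lisinopril"], ["hypertension"]))
# metformin matches no problem -> possible wrong-chart order, or
# "diabetes mellitus" is missing from the problem list
```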
ERIC Educational Resources Information Center
Zamora, Ángela; Suárez, José Manuel; Ardura, Diego
2018-01-01
The authors' objective was to study the role of error detection and retroactive self-regulation as determinants of performance in secondary education students. A total of 198 students participated in the quasiexperimental study, which involved a control group and two experimental groups. This enabled the authors to analyze the effects of both…