Science.gov

Sample records for additional error due

  1. Sensitivity analysis of geometric errors in additive manufacturing medical models.

    PubMed

    Pinto, Jose Miguel; Arrieta, Cristobal; Andia, Marcelo E; Uribe, Sergio; Ramos-Grez, Jorge; Vargas, Alex; Irarrazaval, Pablo; Tejos, Cristian

    2015-03-01

    Additive manufacturing (AM) models are used in medical applications for surgical planning, prosthesis design and teaching. For these applications, the accuracy of the AM models is essential. Unfortunately, this accuracy is compromised due to errors introduced by each of the building steps: image acquisition, segmentation, triangulation, printing and infiltration. However, the contribution of each step to the final error remains unclear. We performed a sensitivity analysis comparing errors obtained from a reference with those obtained by modifying parameters of each building step. Our analysis considered global indexes to evaluate the overall error, and local indexes to show how this error is distributed along the surface of the AM models. Our results show that the standard building process tends to overestimate the AM models, i.e., models are larger than the original structures. They also show that the triangulation resolution and the segmentation threshold are critical factors, and that the errors are concentrated at regions with high curvatures. Errors could be reduced by choosing better triangulation and printing resolutions, but there is an important need for modifying some of the standard building processes, particularly the segmentation algorithms. PMID:25649961

  2. Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan

    2013-01-01

    The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as non-constant variance resulting from systematic errors leaking into random errors, and the lack of prediction capability. Therefore, the multiplicative error model is a better choice.
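
    As a rough illustration of the two model forms compared above (the exact parameterization used by the authors may differ), additive and multiplicative error models are commonly written as follows, with Y the measurement, X the reference value, A, α, β the systematic parameters and ε the random error:

    ```latex
    % Additive model: errors enter on the original scale
    Y = X + A + \varepsilon , \qquad \varepsilon \sim N(0,\sigma^2)

    % Multiplicative model: errors scale with the magnitude of X and
    % become additive after a log transform
    Y = \alpha X^{\beta} e^{\varepsilon}
    \;\Longleftrightarrow\;
    \ln Y = \ln\alpha + \beta \ln X + \varepsilon
    ```

    Under the additive form the random-error variance must be constant across the full range of daily precipitation, which is exactly the weakness noted in the abstract; the log-transformed multiplicative form lets the random error stay proportional to the rain rate.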

  3. Analysis of error-prone survival data under additive hazards models: measurement error effects and adjustments.

    PubMed

    Yan, Ying; Yi, Grace Y

    2016-07-01

    Covariate measurement error occurs commonly in survival analysis. Under the proportional hazards model, measurement error effects have been well studied, and various inference methods have been developed to correct for error effects under such a model. In contrast, error-contaminated survival data under the additive hazards model have received relatively less attention. In this paper, we investigate this problem by exploring measurement error effects on parameter estimation and on the change of the hazard function. New insights into measurement error effects are revealed, in contrast to the well-documented results for the Cox proportional hazards model. We propose a class of bias correction estimators that embraces certain existing estimators as special cases. In addition, we exploit the regression calibration method to reduce measurement error effects. Theoretical results for the developed methods are established, and numerical assessments are conducted to illustrate the finite sample performance of our methods. PMID:26328545
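
    For readers unfamiliar with the distinction, the additive hazards model referred to here is conventionally written as below (generic notation, not necessarily the paper's); covariates shift the hazard additively rather than multiplicatively as in the Cox model, which is why measurement error propagates differently:

    ```latex
    % Additive hazards model (Aalen / Lin-Ying type)
    \lambda(t \mid Z) = \lambda_0(t) + \beta^{\top} Z(t)

    % Cox proportional hazards model, for comparison
    \lambda(t \mid Z) = \lambda_0(t)\, \exp\{\beta^{\top} Z(t)\}
    ```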

  4. Evidence Report: Risk of Performance Errors Due to Training Deficiencies

    NASA Technical Reports Server (NTRS)

    Barshi, Immanuel

    2012-01-01

    The Risk of Performance Errors Due to Training Deficiencies is identified by the National Aeronautics and Space Administration (NASA) Human Research Program (HRP) as a recognized risk to human health and performance in space. The HRP Program Requirements Document (PRD) defines these risks. This Evidence Report provides a summary of the evidence that has been used to identify and characterize this risk. Given that training content, timing, intervals, and delivery methods must support crew task performance, and given that training paradigms will be different for long-duration missions with increased crew autonomy, there is a risk that operators will lack the skills or knowledge necessary to complete critical tasks, resulting in flight and ground crew errors and inefficiencies, failed mission and program objectives, and an increase in crew injuries.

  5. Correlation Attenuation Due to Measurement Error: A New Approach Using the Bootstrap Procedure

    ERIC Educational Resources Information Center

    Padilla, Miguel A.; Veprinsky, Anna

    2012-01-01

    Issues with correlation attenuation due to measurement error are well documented. More than a century ago, Spearman proposed a correction for attenuation. However, this correction has seen very little use since it can potentially inflate the true correlation beyond one. In addition, very little confidence interval (CI) research has been done for…
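
    A minimal sketch of the classical correction and a case-resampling bootstrap CI for it (the data and reliabilities below are illustrative assumptions, not the authors' procedure):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def corrected_r(x, y, rel_x, rel_y):
        """Spearman's disattenuation: r_true = r_xy / sqrt(rel_x * rel_y)."""
        r_xy = np.corrcoef(x, y)[0, 1]
        return r_xy / np.sqrt(rel_x * rel_y)

    def bootstrap_ci(x, y, rel_x, rel_y, n_boot=2000, alpha=0.05):
        """Percentile bootstrap CI for the corrected correlation."""
        n = len(x)
        stats = np.empty(n_boot)
        for b in range(n_boot):
            idx = rng.integers(0, n, n)          # resample cases with replacement
            stats[b] = corrected_r(x[idx], y[idx], rel_x, rel_y)
        return tuple(np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)]))

    # toy data: noisy measurements of two correlated traits
    truth = rng.normal(size=200)
    x = truth + rng.normal(scale=0.8, size=200)
    y = 0.6 * truth + rng.normal(scale=0.8, size=200)
    print(corrected_r(x, y, rel_x=0.7, rel_y=0.7))
    print(bootstrap_ci(x, y, rel_x=0.7, rel_y=0.7))
    ```

    Note that the corrected estimate can exceed 1 when the assumed reliabilities are low relative to the observed correlation, which is the inflation problem the abstract refers to; the bootstrap distribution makes that instability visible.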

  6. Evidence Report: Risk of Performance Errors Due to Training Deficiencies

    NASA Technical Reports Server (NTRS)

    Barshi, Immanuel; Dempsey, Donna L.

    2016-01-01

    Substantial evidence supports the claim that inadequate training leads to performance errors. Barshi and Loukopoulos (2012) demonstrate that even a task as carefully developed and refined over many years as operating an aircraft can be significantly improved by a systematic analysis, followed by improved procedures and improved training (see also Loukopoulos, Dismukes, & Barshi, 2009a). Unfortunately, such a systematic analysis of training needs rarely occurs during the preliminary design phase, when modifications are most feasible. Training is often seen as a way to compensate for deficiencies in task and system design, which in turn increases the training load. As a result, task performance often suffers, and with it, the operators suffer and so does the mission. On the other hand, effective training can indeed compensate for such design deficiencies, and can even go beyond to compensate for failures of our imagination to anticipate all that might be needed when we send our crew members to go where no one else has gone before. Much of the research literature on training is motivated by current training practices aimed at current training needs. Although there is some experience with operations in extreme environments on Earth, there is no experience with long-duration space missions where crews must practice semi-autonomous operations, where ground support must accommodate significant communication delays, and where so little is known about the environment. Thus, we must develop robust methodologies and tools to prepare our crews for the unknown. The research necessary to support such an endeavor does not currently exist, but existing research does reveal general challenges that are relevant to long-duration, high-autonomy missions. The evidence presented here describes issues related to the risk of performance errors due to training deficiencies. Contributing factors regarding training deficiencies may pertain to organizational process and training programs for

  7. A concatenated coded modulation scheme for error control (addition 2)

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1988-01-01

    A concatenated coded modulation scheme for error control in data communications is described. The scheme is achieved by concatenating a Reed-Solomon outer code and a bandwidth efficient block inner code for M-ary PSK modulation. Error performance of the scheme is analyzed for an AWGN channel. It is shown that extremely high reliability can be attained by using a simple M-ary PSK modulation inner code and a relatively powerful Reed-Solomon outer code. Furthermore, if an inner code of high effective rate is used, the bandwidth expansion required by the scheme due to coding will be greatly reduced. The proposed scheme is particularly effective for high-speed satellite communications for large file transfer where high reliability is required. This paper also presents a simple method for constructing block codes for M-ary PSK modulation. Some short M-ary PSK codes with good minimum squared Euclidean distance are constructed. These codes have trellis structure and hence can be decoded with a soft-decision Viterbi decoding algorithm. Furthermore, some of these codes are phase invariant under multiples of 45 deg rotation.

  8. Compensation of overlay errors due to mask bending and non-flatness for EUV masks

    NASA Astrophysics Data System (ADS)

    Chandhok, Manish; Goyal, Sanjay; Carson, Steven; Park, Seh-Jin; Zhang, Guojing; Myers, Alan M.; Leeson, Michael L.; Kamna, Marilyn; Martinez, Fabian C.; Stivers, Alan R.; Lorusso, Gian F.; Hermans, Jan; Hendrickx, Eric; Govindjee, Sanjay; Brandstetter, Gerd; Laursen, Tod

    2009-03-01

    EUV blank non-flatness results in both out-of-plane distortion (OPD) and in-plane distortion (IPD) [3-5]. Even for extremely flat masks (~50 nm peak to valley (PV)), the overlay error is estimated to be greater than the allocation in the overlay budget. In addition, due to multilayer and other thin film induced stresses, EUV masks have severe bow (~1 µm PV). Since there is no electrostatic chuck to flatten the mask during the e-beam write step, EUV masks are written in a bent state that can result in ~15 nm of overlay error. In this article we present the use of physically-based models of mask bending and non-flatness induced overlay errors to correct pattern placement of EUV masks during the e-beam write step, in a process we refer to as E-beam Writer based Overlay error Correction (EWOC). This work could result in less restrictive tolerances for the mask blank non-flatness specs, which in turn would result in fewer blank defects.

  9. Systematic errors in two-dimensional digital image correlation due to lens distortion

    NASA Astrophysics Data System (ADS)

    Pan, Bing; Yu, Liping; Wu, Dafang; Tang, Liqun

    2013-02-01

    Lens distortion is practically unavoidable in a real optical imaging system; it causes non-uniform geometric distortion in the recorded images and gives rise to additional errors in the displacement and strain results measured by two-dimensional digital image correlation (2D-DIC). In this work, the systematic errors in the displacement and strain results measured by 2D-DIC due to lens distortion are investigated theoretically using the radial lens distortion model and experimentally through easy-to-implement rigid-body, in-plane translation tests. Theoretical analysis shows that the displacement and strain errors at an interrogated image point are not only in linear proportion to the distortion coefficient of the camera lens used, but also depend on the point's distance from the distortion center and on its displacement magnitude. To eliminate the systematic errors caused by lens distortion, a simple linear least-squares algorithm is proposed to estimate the distortion coefficient from the distorted displacement results of rigid-body, in-plane translation tests, which can be used to correct the distorted displacement fields to obtain unbiased displacement and strain fields. Experimental results verify the correctness of the theoretical derivation and the effectiveness of the proposed lens distortion correction method.
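
    An illustrative sketch of the mechanism (assumed first-order radial distortion model and toy parameters, not the authors' code): a uniform in-plane translation becomes a position-dependent apparent displacement once both the reference and deformed coordinates pass through the distortion.

    ```python
    import numpy as np

    k1 = 1e-7                            # assumed radial distortion coefficient [1/px^2]
    center = np.array([512.0, 512.0])    # assumed distortion center [px]

    def distort(p):
        """Map ideal image points to distorted locations: r_d = r * (1 + k1 * r^2)."""
        d = p - center
        r2 = np.sum(d**2, axis=-1, keepdims=True)
        return center + d * (1.0 + k1 * r2)

    # rigid-body translation applied to a grid of interrogation points
    u_true = np.array([5.0, 0.0])                        # imposed translation [px]
    xs, ys = np.meshgrid(np.arange(0, 1024, 128.0), np.arange(0, 1024, 128.0))
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1)

    u_meas = distort(pts + u_true) - distort(pts)        # what 2D-DIC would measure
    err = u_meas - u_true                                # systematic displacement error
    print("max |error| [px]:", np.abs(err).max())        # grows away from the center
    ```

    Consistent with the abstract, the error scales with k1, with distance from the distortion center and with the imposed displacement, which is why a rigid-body translation test is enough to estimate the coefficient.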

  10. Dimensional errors in LIGA-produced metal structures due to thermal expansion and swelling of PMMA.

    SciTech Connect

    Kistler, Bruce L.; Dryden, Andrew S.; Crowell, Jeffrey A.W.; Griffiths, Stewart K.

    2004-04-01

    Numerical methods are used to examine dimensional errors in metal structures microfabricated by the LIGA process. These errors result from elastic displacements of the PMMA mold during electrodeposition and arise from thermal expansion of the PMMA when electroforming is performed at elevated temperatures and from PMMA swelling due to absorption of water from aqueous electrolytes. Both numerical solutions and simple analytical approximations describing PMMA displacements for idealized linear and axisymmetric geometries are presented and discussed. We find that such displacements result in tapered metal structures having sidewall slopes up to 14 µm per millimeter of height for linear structures bounded by large areas of PMMA. Tapers for curved structures are of similar magnitude, but these structures are additionally skewed from the vertical. Potential remedies for reducing dimensional errors are also discussed. Here we find that auxiliary moat-like features patterned into the PMMA surrounding mold cavities can reduce taper by an order of magnitude or more. Such moats dramatically reduce tapers for all structures, but increase skew for curved structures when the radius of curvature is comparable to the structure height.

  11. Compensation of modeling errors due to unknown domain boundary in diffuse optical tomography.

    PubMed

    Mozumder, Meghdoot; Tarvainen, Tanja; Kaipio, Jari P; Arridge, Simon R; Kolehmainen, Ville

    2014-08-01

    Diffuse optical tomography is a highly unstable problem with respect to modeling and measurement errors. During clinical measurements, the body shape is not always known, and an approximate model domain has to be employed. The use of an incorrect model domain can, however, lead to significant artifacts in the reconstructed images. Recently, the Bayesian approximation error theory has been proposed to handle model-based errors. In this work, the feasibility of the Bayesian approximation error approach to compensate for modeling errors due to unknown body shape is investigated. The approach is tested with simulations. The results show that the Bayesian approximation error method can be used to reduce artifacts in reconstructed images due to unknown domain shape. PMID:25121542

  12. Drug-induced Telogen Effluvium in a Pediatric Patient due to Error of Transcription

    PubMed Central

    Feldstein, Stephanie; Awasthi, Smita

    2015-01-01

    “Errors of transcription” are rarely reported, but may cause significant adverse effects in patients. Here, the authors report the case of a 15-year-old Burmese girl presenting with telogen effluvium after being dispensed the wrong medication due to a pharmacy auto-complete error. PMID:26346096

  13. Estimation of radiation risk in presence of classical additive and Berkson multiplicative errors in exposure doses.

    PubMed

    Masiuk, S V; Shklyar, S V; Kukush, A G; Carroll, R J; Kovgan, L N; Likhtarov, I A

    2016-07-01

    In this paper, the influence of measurement errors in exposure doses in a regression model with binary response is studied. Recently, it has been recognized that uncertainty in exposure dose is characterized by errors of two types: classical additive errors and Berkson multiplicative errors. The combination of classical additive and Berkson multiplicative errors has not been considered in the literature previously. In a simulation study based on data from radio-epidemiological research of thyroid cancer in Ukraine caused by the Chornobyl accident, it is shown that ignoring measurement errors in doses leads to overestimation of background prevalence and underestimation of excess relative risk. In this work, several methods to reduce these biases are proposed. They are a new regression calibration, an additive version of efficient SIMEX, and novel corrected score methods. PMID:26795191
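
    Schematically, the two error types differ in what is conditioned on what; the notation below is illustrative only (D_tr = true dose, D_ms = measured or assigned dose) and is not necessarily the authors' parameterization of the combined model:

    ```latex
    % Classical additive error: the measurement scatters around the truth
    D_{ms} = D_{tr} + \varepsilon_{cl}, \qquad E[\varepsilon_{cl} \mid D_{tr}] = 0

    % Berkson multiplicative error: the truth scatters around the assigned dose
    D_{tr} = D_{ms}\, e^{\varepsilon_{B}}, \qquad E[e^{\varepsilon_{B}} \mid D_{ms}] = 1
    ```

    The setting studied in the paper combines both components in the same dose estimate, which is what biases the naive (error-ignoring) risk regression in the directions described above.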

  14. Eddy-covariance flux errors due to biases in gas concentration measurements: origins, quantification and correction

    NASA Astrophysics Data System (ADS)

    Fratini, G.; McDermitt, D. K.; Papale, D.

    2013-08-01

    Errors in gas concentration measurements by infrared gas analysers can occur during eddy-covariance campaigns, associated with actual or apparent instrumental drifts or with biases due to thermal expansion, dirt contamination, aging of components or errors in field operations. If occurring on long time scales (hours to days), these errors are normally ignored during flux computation, under the assumption that errors in mean gas concentrations do not affect the estimation of turbulent fluctuations and, hence, of covariances. By analysing instrument theory of operation, and using numerical simulations and field data, we show that this is not the case for instruments with curvilinear calibrations; we further show that, if not appropriately accounted for, concentration biases can lead to roughly proportional systematic flux errors, where the fractional errors in fluxes are about 30-40% of the fractional errors in concentrations. We quantify these errors and characterize their dependency on the main determinants. We then propose a correction procedure that largely - potentially completely - eliminates these errors. The correction, to be applied during flux computation, is based on knowledge of instrument calibration curves and on field or laboratory calibration data. Finally, we demonstrate the occurrence of such errors and validate the correction procedure by means of a field experiment, and accordingly provide recommendations for in situ operations. The correction described in this paper will soon be available in the EddyPro software (www.licor.com/eddypro).

  15. Placement error due to charging in EBL: experimental verification of a new correction model

    NASA Astrophysics Data System (ADS)

    Babin, Sergey; Borisov, Sergey; Kimura, Yasuki; Kono, Kenji; Militsin, Vladimir; Yamamoto, Ryuuji

    2012-06-01

    Placement error due to charging in electron beam lithography (EBL) has been identified as the most important factor limiting placement accuracy in EBL, which is especially important in the fabrication of masks for double patterning. Published results from a few major companies demonstrated that the placement errors due to charging are far larger than 10 nm. Here, we will describe the results of predicting the charging placement error based on a significantly improved physical model. Specially designed patterns were used to characterize the details of the charging placement error. Reference marks were exposed before the exposure of the test pattern, during the exposure, and after the exposure was completed. The experimental results were used to calibrate the parameters of the physical model. Furthermore, the DISPLACE software was used to predict the placement error maps for other experiments. The results of the measurements and simulations are presented in this paper. The results produced by the software were in good agreement with the experimental measurements. When the amplitude and the direction of the placement error due to charging are predicted, they can be easily corrected using readily available software for mask data preparation, or directly in EBL writers.

  16. The Additional Error of Inertial Sensors Induced by Hypersonic Flight Conditions.

    PubMed

    Karachun, Volodimir; Mel'nick, Viktorij; Korobiichuk, Igor; Nowicki, Michał; Szewczyk, Roman; Kobzar, Svitlana

    2016-01-01

    The emergence of hypersonic technology poses a new challenge for inertial navigation sensors, widely used in the aerospace industry. The main problems are: extremely high temperatures, vibration of the fuselage, penetrating acoustic radiation and shock N-waves. The nature of the additional errors of the gyroscopic inertial sensor with hydrostatic suspension components under operating conditions generated by forced precession of the movable part of the suspension due to diffraction phenomena in acoustic fields is explained. The cause of the disturbing moments in the form of the Coriolis inertia forces during the transition of the suspension surface into the category of impedance is revealed. The boundaries of occurrence of the features on the resonance wave match are described. The values of the "false" angular velocity as a result of the elastic-stress state of suspension in the acoustic fields are determined. PMID:26927122

  17. The Additional Error of Inertial Sensors Induced by Hypersonic Flight Conditions

    PubMed Central

    Karachun, Volodimir; Mel’nick, Viktorij; Korobiichuk, Igor; Nowicki, Michał; Szewczyk, Roman; Kobzar, Svitlana

    2016-01-01

    The emergence of hypersonic technology poses a new challenge for inertial navigation sensors, widely used in the aerospace industry. The main problems are: extremely high temperatures, vibration of the fuselage, penetrating acoustic radiation and shock N-waves. The nature of the additional errors of the gyroscopic inertial sensor with hydrostatic suspension components under operating conditions generated by forced precession of the movable part of the suspension due to diffraction phenomena in acoustic fields is explained. The cause of the disturbing moments in the form of the Coriolis inertia forces during the transition of the suspension surface into the category of impedance is revealed. The boundaries of occurrence of the features on the resonance wave match are described. The values of the “false” angular velocity as a result of the elastic-stress state of suspension in the acoustic fields are determined. PMID:26927122

  18. Accelerated Nucleation Due to Trace Additives: A Fluctuating Coverage Model.

    PubMed

    Poon, Geoffrey G; Peters, Baron

    2016-03-01

    We develop a theory to account for variable coverage of trace additives that lower the interfacial free energy for nucleation. The free energy landscape is based on classical nucleation theory and a statistical mechanical model for Langmuir adsorption. Dynamics are modeled by diffusion-controlled attachment and detachment of solutes and adsorbing additives. We compare the mechanism and kinetics from a mean-field model, a projection of the dynamics and free energy surface onto nucleus size, and a full two-dimensional calculation using Kramers-Langer-Berezhkovskii-Szabo theory. The fluctuating coverage model predicts rates more accurately than mean-field models of the same process primarily because it more accurately estimates the potential of mean force along the size coordinate. PMID:26485064

  19. Efficiency degradation due to tracking errors for point focusing solar collectors

    NASA Technical Reports Server (NTRS)

    Hughes, R. O.

    1978-01-01

    An important parameter in the design of point focusing solar collectors is the intercept factor, which is a measure of efficiency and of energy available for use in the receiver. Using statistical methods, an expression for the expected value of the intercept factor is derived for various configurations and control law implementations. The analysis assumes that a radially symmetric flux distribution (not necessarily Gaussian) is generated at the focal plane due to the sun's finite image and various reflector errors. The time-varying tracking errors are assumed to be uniformly distributed within the threshold limits, which allows the expected value to be calculated.
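
    A sketch of the expectation being described, in illustrative notation (phi(ε) is the intercept factor for pointing error ε, I the radially symmetric focal-plane flux, A_rec the receiver aperture, and ε_t the tracking-threshold half-width assumed by the uniform error model; the paper's own formulation may differ in detail):

    ```latex
    \phi(\epsilon) =
      \frac{\displaystyle \int_{A_{rec}} I(\mathbf{x};\epsilon)\, dA}
           {\displaystyle \int_{A_{\infty}} I(\mathbf{x};\epsilon)\, dA},
    \qquad
    E[\phi] = \int_{-\epsilon_t}^{\epsilon_t} \phi(\epsilon)\, \frac{d\epsilon}{2\epsilon_t}
    ```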

  20. Clinical biochemistry laboratory rejection rates due to various types of preanalytical errors

    PubMed Central

    Atay, Aysenur; Demir, Leyla; Cuhadar, Serap; Saglam, Gulcan; Unal, Hulya; Aksun, Saliha; Arslan, Banu; Ozkan, Asuman; Sutcu, Recep

    2014-01-01

    Introduction: Preanalytical errors, occurring anywhere in the process from test request to admission of the specimen to the laboratory, cause the rejection of samples. The aim of this study was to better explain the reasons for rejected samples, with regard to their rates in certain test groups in our laboratory. Materials and methods: This preliminary study was designed around the samples rejected over a one-year period, based on the rates and types of inappropriateness. Test requests and blood samples of clinical chemistry, immunoassay, hematology, glycated hemoglobin, coagulation and erythrocyte sedimentation rate test units were evaluated. Types of inappropriateness were evaluated as follows: improperly labelled samples, hemolysed specimens, clotted specimens, insufficient specimen volume and total request errors. Results: A total of 5,183,582 test requests from 1,035,743 blood collection tubes were considered. The total rejection rate was 0.65%. The rejection rate of the coagulation group was significantly higher (2.28%) than that of the other test groups (P < 0.001), including an insufficient-specimen-volume error rate of 1.38%. Rejection rates for hemolysis, clotted specimens and insufficient sample volume were found to be 8%, 24% and 34%, respectively. Total request errors, particularly unintelligible requests, were 32% of the total for inpatients. Conclusions: The errors were especially attributable to unintelligible or inappropriate test requests, improperly labelled samples for inpatients and blood-drawing errors, especially insufficient specimen volume, in the coagulation test group. Further studies should be performed after corrective and preventive actions to detect a possible decrease in sample rejection. PMID:25351356

  1. Incorporating uncertainty of distribution parameters due to sampling errors in flood-damage-reduction project evaluation

    NASA Astrophysics Data System (ADS)

    Su, Hsin-Ting; Tung, Yeou-Koung

    2013-03-01

    Epistemic uncertainty is a result of knowledge deficiency about the system. Sampling error exists when limited amounts of hydrologic data are used to estimate a T-year event quantile. Both the natural randomness of hydrologic data and the sampling error in design quantile estimation contribute to the uncertainty in flood damage estimation. This paper presents a framework for evaluating a flood-damage-mitigation project in which both the hydrologic randomness and the epistemic uncertainty due to sampling error are considered in flood damage estimation. Different risk-based decision-making criteria are used to evaluate project merits based on the mean, standard deviation, and probability distribution of the project net benefits. The results show that the uncertainty of the project net benefits is quite significant. Ignoring the data sampling error will underestimate the potential risk of each project. It can be clearly shown that adding data to existing sample observations leads to improved quality of information, enhanced reliability of the estimators, and reduced sampling error and uncertainty in the project net benefits. Through the proposed framework, the proper length of the extended record for risk reduction can be determined to achieve the required level of acceptable risk.

  2. Systematic errors in conductimetric instrumentation due to bubble adhesions on the electrodes: An experimental assessment

    NASA Astrophysics Data System (ADS)

    Neelakantaswamy, P. S.; Rajaratnam, A.; Kisdnasamy, S.; Das, N. P.

    1985-02-01

    Systematic errors in conductimetric measurements are often encountered due to partial screening of interelectrode current paths resulting from adhesion of bubbles on the electrode surfaces of the cell. A method of assessing this error quantitatively by a simulated electrolytic tank technique is proposed here. The experimental setup simulates the bubble-curtain effect in the electrolytic tank by means of a pair of electrodes partially covered by a monolayer of small polystyrene-foam spheres representing the bubble adhesions. By varying the number of spheres stuck on the electrode surface, the fractional area covered by the bubbles is controlled; and by measuring the interelectrode impedance, the systematic error is determined as a function of the fractional area covered by the simulated bubbles. A theoretical model that depicts the interelectrode resistance and, hence, the systematic error caused by bubble adhesions is developed by considering the random dispersal of bubbles on the electrodes. Relevant computed results are compared with the measured impedance data obtained from the electrolytic tank experiment. Results due to other models are also presented and discussed. A time-domain measurement on the simulated cell to study the capacitive effects of the bubble curtain is also explained.

  3. Errors in polarization measurements due to static retardation in photoelastic modulators

    SciTech Connect

    Modine, F.A.; Jellison, G.E., Jr.

    1993-03-01

    A mathematical description of photoelastic polarization modulators is developed for the general case in which the modulator exhibits a static retardation that is not colinear with the dynamic retardation of the modulator. Simplifying approximations are introduced which are appropriate to practical use of the modulators in polarization measurements. Measurement errors due to the modulator static retardation along with procedures for their elimination are described for reflection ellipsometers, linear dichrometers, and polarimeters.

  4. Influence of Additive and Multiplicative Structure and Direction of Comparison on the Reversal Error

    ERIC Educational Resources Information Center

    González-Calero, José Antonio; Arnau, David; Laserna-Belenguer, Belén

    2015-01-01

    An empirical study has been carried out to evaluate the potential of word order matching and static comparison as explanatory models of reversal error. Data was collected from 214 undergraduate students who translated a set of additive and multiplicative comparisons expressed in Spanish into algebraic language. In these multiplicative comparisons…

  5. A study of GPS measurement errors due to noise and multipath interference for CGADS

    NASA Technical Reports Server (NTRS)

    Axelrad, Penina; MacDoran, Peter F.; Comp, Christopher J.

    1996-01-01

    This report describes a study performed by the Colorado Center for Astrodynamics Research (CCAR) on GPS measurement errors in the Codeless GPS Attitude Determination System (CGADS) due to noise and multipath interference. Preliminary simulation models of the CGADS receiver and orbital multipath are described. The standard FFT algorithm for processing the codeless data is described, and two alternative algorithms - an auto-regressive/least squares (AR-LS) method, and a combined adaptive notch filter/least squares (ANF-ALS) method - are also presented. Effects of system noise, quantization, baseband frequency selection, and Doppler rates on the accuracy of phase estimates with each of the processing methods are shown. Typical electrical phase errors for the AR-LS method are 0.2 degrees, compared to 0.3 and 0.5 degrees for the FFT and ANF-ALS algorithms, respectively. Doppler rate was found to have the largest effect on the performance.

  6. Estimating Random Errors Due to Shot Noise in Backscatter Lidar Observations

    NASA Technical Reports Server (NTRS)

    Liu, Zhaoyan; Hunt, William; Vaughan, Mark A.; Hostetler, Chris A.; McGill, Matthew J.; Powell, Kathy; Winker, David M.; Hu, Yongxiang

    2006-01-01

    In this paper, we discuss the estimation of random errors due to shot noise in backscatter lidar observations that use either photomultiplier tube (PMT) or avalanche photodiode (APD) detectors. The statistical characteristics of photodetection are reviewed, and photon count distributions of solar background signals and laser backscatter signals are examined using airborne lidar observations at 532 nm acquired with a photon-counting-mode APD. Both distributions appear to be Poisson, indicating that the arrival at the photodetector of photons for these signals is a Poisson stochastic process. For Poisson-distributed signals, a proportional, one-to-one relationship is known to exist between the mean of a distribution and its variance. Although the multiplied photocurrent no longer follows a strict Poisson distribution in analog-mode APD and PMT detectors, the proportionality still exists between the mean and the variance of the multiplied photocurrent. We make use of this relationship by introducing the noise scale factor (NSF), which quantifies the constant of proportionality that exists between the root-mean-square of the random noise in a measurement and the square root of the mean signal. Using the NSF to estimate random errors in lidar measurements due to shot noise provides a significant advantage over conventional error estimation techniques, in that with the NSF uncertainties can be reliably calculated from/for a single data sample. Methods for evaluating the NSF are presented. Algorithms to compute the NSF are developed for the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) lidar and tested using data from the Lidar In-space Technology Experiment (LITE).
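
    A minimal sketch of the noise-scale-factor idea described above (gain, counts and array sizes are illustrative assumptions, not CALIPSO values): for a scaled Poisson-like signal the shot-noise standard deviation is proportional to the square root of the mean signal, so a single profile can carry its own random-error estimate.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # simulate a detector whose output is G times a Poisson photon count
    G = 2.5                                   # assumed effective gain [counts/photon]
    mean_photons = np.linspace(5, 500, 50)    # mean photon arrivals per range bin
    signal = G * rng.poisson(np.broadcast_to(mean_photons, (400, 50)))

    # empirical check of proportionality: std(signal) ~ NSF * sqrt(mean(signal))
    nsf_est = np.mean(signal.std(axis=0) / np.sqrt(signal.mean(axis=0)))
    print("estimated NSF:", nsf_est, " (expected sqrt(G) =", np.sqrt(G), ")")

    # usage: per-bin shot-noise error estimate for a single measured profile
    profile = signal[0]
    sigma_shot = nsf_est * np.sqrt(profile)
    ```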

  7. Evaluation of Astrometry Errors due to the Optical Surface Distortions in Adaptive Optics Systems and Science Instruments

    NASA Astrophysics Data System (ADS)

    Ellerbroek, Brent; Herriot, Glen; Suzuki, Ryuji; Schoeck, Matthias

    2013-12-01

    The objectives for high precision astrometry on ELTs will be challenging, with requirements in the range from 10 to 50 micro-arc-seconds for some instruments and science cases. Reducing and correctly calibrating the systematic and quasi-static errors introduced by optical surface distortions will be an important part of meeting these goals. In a recently submitted paper, we described an analytical Fourier domain model for evaluating these effects as the sum of three terms: (i) under-sampling errors, due to measuring the effects of static surface distortions using a finite number of discrete reference sources; (ii) unknown beam wander across the static surface distortions due to line-of-sight jitter or boresighting errors, and (iii) quasi-static errors due to slowly varying surface distortions. In this paper, we apply these methods to evaluating this term in the astrometry error budgets for the TMT Infrared Imaging Spectrograph (IRIS) and the facility AO system, NFIRAOS. The inputs to this exercise include the original top-down allocations for this error term, the original optical surface specifications for IRIS and NFIRAOS as derived earlier on the basis of wavefront error requirements, our assessment of the feasible density and positioning accuracy for an array of calibration sources, and the expected beam wander due to tip/tilt jitter and bore-sighting errors between NFIRAOS and IRIS. The astrometry error computed for these initial parameters was considerably larger than the top-down allocation due to the contributions from the NFIRAOS double-pane entrance window, which is close to the system's input focal plane. The error can be reduced to fall within the allocation by defining tighter, but still feasible, specifications for these elements. We also evaluated the astrometry errors due to quasi-static drift of the figures of the NFIRAOS deformable mirrors, and determined that they are acceptable for wavefront distortions of up to about 30 nm RMS.

  8. Assessment of Error in Aerosol Optical Depth Measured by AERONET Due to Aerosol Forward Scattering

    NASA Technical Reports Server (NTRS)

    Sinyuk, Alexander; Holben, Brent N.; Smirnov, Alexander; Eck, Thomas F.; Slustsker, Ilya; Schafer, Joel S.; Giles, David M.; Sorokin, Michail

    2013-01-01

    We present an analysis of the effect of aerosol forward scattering on the accuracy of aerosol optical depth (AOD) measured by CIMEL Sun photometers. The effect is quantified in terms of AOD and solar zenith angle using radiative transfer modeling. The analysis is based on aerosol size distributions derived from multi-year climatologies of AERONET aerosol retrievals. The study shows that the modeled error is lower than AOD calibration uncertainty (0.01) for the vast majority of AERONET level 2 observations, 99.53%. Only 0.47% of the AERONET database corresponding mostly to dust aerosol with high AOD and low solar elevations has larger biases. We also show that observations with extreme reductions in direct solar irradiance do not contribute to level 2 AOD due to low Sun photometer digital counts below a quality control cutoff threshold.

  9. Global Vision Impairment and Blindness Due to Uncorrected Refractive Error, 1990-2010.

    PubMed

    Naidoo, Kovin S; Leasher, Janet; Bourne, Rupert R; Flaxman, Seth R; Jonas, Jost B; Keeffe, Jill; Limburg, Hans; Pesudovs, Konrad; Price, Holly; White, Richard A; Wong, Tien Y; Taylor, Hugh R; Resnikoff, Serge

    2016-03-01

    The purpose of this systematic review was to estimate worldwide the number of people with moderate and severe visual impairment (MSVI; presenting visual acuity <6/18, ≥3/60) or blindness (presenting visual acuity <3/60) due to uncorrected refractive error (URE), to estimate trends in prevalence from 1990 to 2010, and to analyze regional differences. The review focuses on uncorrected refractive error which is now the most common cause of avoidable visual impairment globally. The systematic review of 14,908 relevant manuscripts from 1990 to 2010 using Medline, Embase, and WHOLIS yielded 243 high-quality, population-based cross-sectional studies which informed a meta-analysis of trends by region. The results showed that in 2010, 6.8 million (95% confidence interval [CI]: 4.7-8.8 million) people were blind (7.9% increase from 1990) and 101.2 million (95% CI: 87.88-125.5 million) vision impaired due to URE (15% increase since 1990), while the global population increased by 30% (1990-2010). The all-age age-standardized prevalence of URE blindness decreased 33% from 0.2% (95% CI: 0.1-0.2%) in 1990 to 0.1% (95% CI: 0.1-0.1%) in 2010, whereas the prevalence of URE MSVI decreased 25% from 2.1% (95% CI: 1.6-2.4%) in 1990 to 1.5% (95% CI: 1.3-1.9%) in 2010. In 2010, URE contributed 20.9% (95% CI: 15.2-25.9%) of all blindness and 52.9% (95% CI: 47.2-57.3%) of all MSVI worldwide. The contribution of URE to all MSVI ranged from 44.2 to 48.1% in all regions except in South Asia which was at 65.4% (95% CI: 62-72%). We conclude that in 2010, uncorrected refractive error continues as the leading cause of vision impairment and the second leading cause of blindness worldwide, affecting a total of 108 million people or 1 in 90 persons. PMID:26905537

  10. Responsibility for reporting patient death due to hospital error in Japan when an error occurred at a referring institution.

    PubMed

    Maeda, Shoichi; Starkey, Jay; Kamishiraki, Etsuko; Ikeda, Noriaki

    2013-12-01

    In Japan, physicians are required to report unexpected health care-associated patient deaths to the police. Patients needing to be transferred to another institution often have complex medical problems. If a medical error occurs, it may be either at the final or the referring institution. Some fear that liability will fall on the final institution regardless of where the error occurred, or that the referring facility may oppose such reporting, leading to a failure to report to the police or to recommend an autopsy. Little is known about the actual opinions of physicians and risk managers in this regard. The authors sent standardised, self-administered questionnaires to all hospitals in Japan that participate in the national general residency program. Most physicians and risk managers in Japan indicated that they would report a patient's death to the police where the patient has been transferred. Of those who indicated they would not report to the police, the majority still indicated they would recommend an autopsy. PMID:24597392

  11. A table of integrals of the error function. II - Additions and corrections.

    NASA Technical Reports Server (NTRS)

    Geller, M.; Ng, E. W.

    1971-01-01

    Integrals of products of error functions with other functions are presented, taking into account a combination of the error function with powers, a combination of the error function with exponentials and powers, a combination of the error function with exponentials of more complicated arguments, definite integrals from Laplace transforms, and a combination of the error function with trigonometric functions. Other integrals considered include a combination of the error function with logarithms and powers, a combination of two error functions, and a combination of the error function with other special functions.
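
    Two standard results of the kind catalogued in such tables, given here only as illustrations of the combinations listed above (an error function with a power, and with a Gaussian exponential):

    ```latex
    \int_{0}^{\infty} \operatorname{erfc}(x)\, dx = \frac{1}{\sqrt{\pi}},
    \qquad
    \int_{0}^{\infty} \operatorname{erf}(bx)\, e^{-a^{2}x^{2}}\, dx
      = \frac{1}{a\sqrt{\pi}}\, \arctan\frac{b}{a}, \quad a > 0 .
    ```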

  12. Noncompliance pattern due to medication errors at a Teaching Hospital in Srikot, India

    PubMed Central

    Thakur, Heenopama; Thawani, Vijay; Raina, Rangeel Singh; Kothiyal, Gitanjali; Chakarabarty, Mrinmoy

    2013-01-01

    Objective: To study the medication errors leading to noncompliance in a tertiary care teaching hospital. Materials and Methods: This study was conducted in a tertiary care hospital of a teaching institution in Srikot, Garhwal, Uttarakhand, to analyze the medication errors in 500 indoor prescriptions from the medicine, surgery, obstetrics and gynecology, pediatrics and ENT departments over five months and in 100 outdoor patients of the medicine department. Results: The medication error rate was found to be 22.4% for indoor patients and 11.4% for outdoor patients, as against the standard acceptable error rate of 3%. Maximum errors were observed in the indoor prescriptions of the surgery department, accounting for 44 errors, followed by medicine (32) and gynecology (25) in the 500 cases studied, leading to faulty administration of medicines. Conclusion: Many medication errors were noted which go against the practice of rational therapeutics. Such studies can be directed to usher in the rational use of medicines for increasing compliance and therapeutic benefits. PMID:23833376

  13. Errors in Expected Human Losses Due to Incorrect Seismic Hazard Estimates

    NASA Astrophysics Data System (ADS)

    Wyss, M.; Nekrasova, A.; Kossobokov, V. G.

    2011-12-01

    The probability of strong ground motion is presented in seismic hazard maps, in which peak ground accelerations (PGA) with 10% probability of exceedance in 50 years are shown by color codes. It has become evident that these maps do not correctly give the seismic hazard. On the seismic hazard map of Japan, the epicenters of the recent large earthquakes are located in regions of relatively low hazard. The errors of the GSHAP maps have been measured by the difference between observed and expected intensities due to large earthquakes. Here, we estimate how the errors in seismic hazard estimates propagate into errors in estimating the potential fatalities and affected population. We calculated the numbers of fatalities that would have to be expected in the regions of the nine earthquakes with more than 1,000 fatalities during the last 10 years with relatively reliable estimates of fatalities, assuming an earthquake magnitude that generates, as a maximum intensity, the value given by the GSHAP maps. This value is the number of fatalities to be exceeded with a probability of 10% during 50 years. In most regions of devastating earthquakes, there are no instruments to measure ground accelerations. Therefore, we converted the PGA expected as a likely maximum based on the GSHAP maps to intensity. The magnitude of the earthquake that would cause the intensity expected by GSHAP as a likely maximum was calculated by M(GSHAP) = (I0 + 1.5)/1.5. The numbers of fatalities expected for earthquakes with M(GSHAP) were calculated using the loss estimating program QLARM. We calibrated this tool for each case by calculating the theoretical damage and numbers of fatalities (Festim) for the disastrous test earthquakes, generating a match with the observed numbers of fatalities (Fobs = Festim) by adjusting the attenuation relationship within the bounds of commonly observed laws. Calculating the numbers of fatalities expected for the earthquakes with M(GSHAP) will thus yield results that
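
    As a simple check of the conversion quoted above, a hazard map that admits a maximum intensity of I0 = VIII corresponds to

    ```latex
    M(\mathrm{GSHAP}) = \frac{I_0 + 1.5}{1.5} = \frac{8 + 1.5}{1.5} \approx 6.3 ,
    ```

    so the fatality estimate for such a region would be driven by a magnitude-6.3 scenario event.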

  14. Evaluation of counting error due to colony masking in bioaerosol sampling.

    PubMed

    Chang, C W; Hwang, Y H; Grinshpun, S A; Macher, J M; Willeke, K

    1994-10-01

    Colony counting error due to indistinguishable colony overlap (i.e., masking) was evaluated theoretically and experimentally. A theoretical model to predict colony masking was used to determine colony counting efficiency by Monte Carlo computer simulation of microorganism collection and development into CFU. The computer simulation was verified experimentally by collecting aerosolized Bacillus subtilis spores and examining micro- and macroscopic colonies. Colony counting efficiency decreased (i) with increasing density of collected culturable microorganisms, (ii) with increasing colony size, and (iii) with decreasing ability of an observation system to distinguish adjacent colonies as separate units. Counting efficiency for 2-mm colonies, at optimal resolution, decreased from 98 to 85% when colony density increased from 1 to 10 microorganisms cm-2, in contrast to an efficiency decrease from 90 to 45% for 5-mm colonies. No statistically significant difference (alpha = 0.05) between experimental and theoretical results was found when colony shape was used to estimate the number of individual colonies in a CFU. Experimental colony counts were 1.2 times simulation estimates when colony shape was not considered, because of nonuniformity of actual colony size and the better discrimination ability of the human eye relative to the model. Colony surface densities associated with high counting accuracy were compared with recommended upper plate count limits and found to depend on colony size and an observation system's ability to identify overlapped colonies. Correction factors were developed to estimate the actual number of collected microorganisms from observed colony counts.(ABSTRACT TRUNCATED AT 250 WORDS) PMID:7986046
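
    An illustrative Monte Carlo sketch of colony masking (assumed plate geometry and merge rule, not the authors' model): colonies whose centers fall closer together than one colony diameter are counted as a single CFU, so the counting efficiency drops with organism density and colony size.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def counting_efficiency(density_cm2, colony_diam_mm, plate_diam_mm=90.0, n_rep=50):
        """Mean fraction of deposited organisms that remain distinguishable as CFU."""
        plate_r = plate_diam_mm / 2.0
        area_cm2 = np.pi * (plate_r / 10.0) ** 2
        n_org = max(1, int(round(density_cm2 * area_cm2)))   # organisms deposited
        effs = []
        for _ in range(n_rep):
            # uniform random positions on the plate
            r = plate_r * np.sqrt(rng.random(n_org))
            th = 2 * np.pi * rng.random(n_org)
            pts = np.column_stack([r * np.cos(th), r * np.sin(th)])
            # merge overlapping colonies: count connected clusters as single CFU
            dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
            adj = dist < colony_diam_mm
            visited = np.zeros(n_org, bool)
            clusters = 0
            for i in range(n_org):
                if not visited[i]:
                    stack = [i]
                    visited[i] = True
                    while stack:
                        j = stack.pop()
                        for k in np.where(adj[j] & ~visited)[0]:
                            visited[k] = True
                            stack.append(k)
                    clusters += 1
            effs.append(clusters / n_org)
        return float(np.mean(effs))

    print(counting_efficiency(1.0, 2.0))    # small, sparse colonies: high efficiency
    print(counting_efficiency(10.0, 5.0))   # dense, large colonies: efficiency drops
    ```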

  15. Pointing and tracking errors due to localized deformation in inter-satellite laser communication links.

    PubMed

    Tan, Liying; Yang, Yuqiang; Ma, Jing; Yu, Jianjie

    2008-08-18

    Instead of Zernike polynomials, an ellipse Gaussian model is proposed to represent localized wave-front deformation in researching pointing and tracking errors in inter-satellite laser communication links, which can simplify the calculation. It is shown that both pointing and tracking errors depend on the center deepness h, the radii a and b, and the distance d of the Gaussian distortion, and change regularly as they increase. The maximum peak values of pointing and tracking errors always appear around h = 0.2λ. The influence of localized deformation is up to 0.7 µrad for pointing error, and 0.5 µrad for tracking error. To reduce the impact of localized deformation on pointing and tracking errors, a machining precision of the optical devices better than 0.2λ is proposed. The principle of choosing optical devices with localized deformation is presented, and a method that adjusts the pointing direction to compensate pointing and tracking errors is given. We hope the results can be used in the design of inter-satellite lasercom systems. PMID:18711575

  16. Standard addition/absorption detection microfluidic system for salt error-free nitrite determination.

    PubMed

    Ahn, Jae-Hoon; Jo, Kyoung Ho; Hahn, Jong Hoon

    2015-07-30

    A continuous-flow microfluidic chip-based standard addition/absorption detection system has been developed for accurate determination of nitrite in water of varying salinity. The absorption detection of nitrite is made via color development using the Griess reaction. We have found that the yield of the reaction is significantly affected by salinity (e.g., −12% error for a 30‰ NaCl, 50.0 µg L⁻¹ N-NO₂⁻ solution). The microchip has been designed to perform standard addition, color development, and absorbance detection in sequence. To effectively block stray light, the microchip made from black poly(dimethylsiloxane) is placed on top of a compact housing that accommodates a light-emitting diode, a photomultiplier tube, and an interference filter, where the light source and the detector are optically isolated. An 80-mm liquid-core waveguide mounted externally on the chip has been employed as the absorption detection flow cell. These optical designs secure a wide linear response range (up to 500 µg L⁻¹ N-NO₂⁻) and a low detection limit (0.12 µg L⁻¹ N-NO₂⁻ = 8.6 nM N-NO₂⁻, S/N = 3). From determination of nitrite in standard samples and real samples collected from an estuary, it has been demonstrated that our microfluidic system is highly accurate (<1% RSD, n = 3) and precise (<1% RSD, n = 3). PMID:26320643
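
    A minimal sketch of the standard-addition principle the chip exploits (toy absorbance readings, not the authors' calibration): spiking the sample with known amounts of analyte and extrapolating to zero signal cancels matrix effects such as the salinity-dependent yield of the Griess reaction.

    ```python
    import numpy as np

    # absorbance measured for the unspiked sample and for three standard additions
    added_conc = np.array([0.0, 50.0, 100.0, 150.0])     # added N-NO2- [ug/L]
    absorbance = np.array([0.120, 0.215, 0.310, 0.405])  # illustrative readings

    # linear fit A = m * c_added + b; |x-intercept| = b/m is the sample concentration
    m, b = np.polyfit(added_conc, absorbance, 1)
    c_sample = b / m                     # assumes negligible dilution by the spikes
    print(f"sample concentration ~ {c_sample:.1f} ug/L N-NO2-")
    ```

    Because the sensitivity m is measured in the sample's own matrix, the result is insensitive to a salinity-dependent change of that sensitivity, which is the "salt error" the title refers to.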

  17. On velocity errors due to irrotational forces in the Navier-Stokes momentum balance

    NASA Astrophysics Data System (ADS)

    Linke, A.; Merdon, C.

    2016-05-01

    This contribution studies the influence of the pressure on the velocity error in finite element discretisations of the Navier-Stokes equations. Four simple benchmark problems that are all close to real-world applications convey that the pressure can be comparably large and is not to be underestimated. In fact, the velocity error can be arbitrarily large in such situations. Only pressure-robust mixed finite element methods, whose velocity error is pressure-independent, can avoid this influence. Indeed, the presented examples show that the pressure-dependent component in velocity error estimates for classical mixed finite element methods is sharp. In consequence, classical mixed finite element methods are not able to simulate some classes of real-world flows, even in cases where dominant convection and turbulence do not play a role.

  18. Strip antenna figure errors due to support truss member length imperfections

    NASA Technical Reports Server (NTRS)

    Greschik, Gyula; Mikulas, Martin M.; Helms, Richard G.; Freeland, Robert E.

    2004-01-01

    The dependence of strip antenna steady-state geometric errors on member length uncertainties in the supporting truss beam is studied with the Monte Carlo analysis of a representative truss design. The results, presented in a format streamlined for practical use, can guide the specification for hardware fabrication of required error tolerances (for structural properties as well as member lengths), or they can aid the prediction of antenna performance if component statistics are available.

  19. Tolerance analysis of optical telescopes using coherent addition of wavefront errors

    NASA Technical Reports Server (NTRS)

    Davenport, J. W.

    1982-01-01

    A near diffraction-limited telescope requires that tolerance analysis be done on the basis of system wavefront error. One method of analyzing the wavefront error is to represent the wavefront error function in terms of its Zernike polynomial expansion. A Ramsey-Korsch ray trace package, a computer program that simulates the tracing of rays through an optical telescope system, was expanded to include the Zernike polynomial expansion up through the fifth-order spherical term. An option to determine a 3-dimensional plot of the wavefront error function was also included in the Ramsey-Korsch package. Several simulation runs were analyzed to determine the particular set of coefficients in the Zernike expansion that are affected by various errors such as tilt, decenter and despace. A 3-dimensional plot of each error up through the fifth-order spherical term was also included in the study. Tolerance analysis data are presented.

  20. Error propagation in hydrodynamics of lowland rivers due to uncertainty in vegetation roughness parameterization

    NASA Astrophysics Data System (ADS)

    Straatsma, Menno

    2010-05-01

    Accurate water level prediction for the design discharge of large rivers is of main importance for the flood safety of large embanked areas in The Netherlands. Within a larger framework of uncertainty assessment, this report focuses on the effect of uncertainty in roughness parameterization in a 2D hydrodynamic model. Two key elements are considered in this roughness parameterization. Firstly, the manually classified ecotope map that provides base data for roughness classes, and secondly, the lookup table that translates roughness classes to vegetation structural characteristics. The aim is to quantify the effects of these two error sources on the following hydrodynamic aspects: 1. the discharge distribution at the bifurcation points within the river Rhine; 2. peak water levels at a stationary discharge of 16000 m3/s. To assess the effect of the first error source, new realisations of ecotope maps were made based on the current ecotope map and an error matrix of the classification. Using these realisations of the ecotope maps, twelve successful model runs were carried out of the Rhine distributaries at design discharge. The classification error leads to a standard deviation of the water levels per river kilometer of 0.08, 0.05 and 0.10 m for the Upper Rhine-Waal, Pannerdensch Kanaal-Nederrijn-Lek and the IJssel river, respectively. The maximum range in water levels is 0.40, 0.40 and 0.57 m for these river sections, respectively. The largest effects are found in the IJssel river and the Pannerdensch Kanaal. For the second error source, the accuracy of the values in the lookup table, a compilation of 445 field measurements of vegetation structure was made. For each of the vegetation types, the minimum, 25-percentile, median, 75-percentile and maximum of vegetation height and density were computed. These five values were subsequently put in the lookup table that was used for the hydrodynamic model. The interquartile range in vegetation height and

  1. Reducing On-Board Computer Propagation Errors Due to Omitted Geopotential Terms by Judicious Selection of Uploaded State Vector

    NASA Technical Reports Server (NTRS)

    Greatorex, Scott (Editor); Beckman, Mark

    1996-01-01

    Several future, and some current missions, use an on-board computer (OBC) force model that is very limited. The OBC geopotential force model typically includes only the J(2), J(3), J(4), C(2,2) and S(2,2) terms to model non-spherical Earth gravitational effects. The Tropical Rainfall Measuring Mission (TRMM), Wide-field Infrared Explorer (WIRE), Transition Region and Coronal Explorer (TRACE), Submillimeter Wave Astronomy Satellite (SWAS), and X-ray Timing Explorer (XTE) all plan to use this geopotential force model on-board. The Solar, Anomalous, and Magnetospheric Particle Explorer (SAMPEX) is already flying this geopotential force model. Past analysis has shown that one of the leading sources of error in the OBC propagated ephemeris is the omission of the higher order geopotential terms. However, these same analyses have shown a wide range of accuracies for the OBC ephemerides. Analysis was performed using EUVE state vectors that showed the EUVE four-day OBC propagated ephemerides varied in accuracy from 200 m to 45 km depending on the initial vector used to start the propagation. The vectors used in the study were from a single EUVE orbit at one minute intervals in the ephemeris. Since each vector propagated practically the same path as the others, the differences seen had to be due to differences in the initial state vector only. An algorithm was developed that will optimize the epoch of the uploaded state vector. Proper selection can reduce the previous errors of anywhere from 200 m to 45 km to generally less than one km over four days of propagation. This would enable flight projects to minimize state vector uploads to the spacecraft. Additionally, this method is superior to other methods in that no additional orbit estimates need be done. The definitive ephemeris generated on the ground can be used as long as the proper epoch is chosen. This algorithm can be easily coded in software that would pick the epoch within a specified time range that would

  2. QUANTIFYING UNCERTAINTY DUE TO RANDOM ERRORS FOR MOMENT ANALYSES OF BREAKTHROUGH CURVES

    EPA Science Inventory

    The uncertainty in moments calculated from breakthrough curves (BTCs) is investigated as a function of random measurement errors in the data used to define the BTCs. The method presented assumes moments are calculated by numerical integration using the trapezoidal rule, and is t...

  3. Reducing Bias and Error in the Correlation Coefficient Due to Nonnormality

    ERIC Educational Resources Information Center

    Bishara, Anthony J.; Hittner, James B.

    2015-01-01

    It is more common for educational and psychological data to be nonnormal than to be approximately normal. This tendency may lead to bias and error in point estimates of the Pearson correlation coefficient. In a series of Monte Carlo simulations, the Pearson correlation was examined under conditions of normal and nonnormal data, and it was compared…
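
    A small Monte Carlo sketch in Python along these lines (the skewing transform and sample size are illustrative choices, not the conditions of the study; note that transforming one variable also changes the population correlation itself):

        import numpy as np

        rng = np.random.default_rng(42)
        rho, n, n_sims = 0.5, 30, 5000
        cov = np.array([[1.0, rho], [rho, 1.0]])

        r_normal, r_skewed = [], []
        for _ in range(n_sims):
            x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
            r_normal.append(np.corrcoef(x, y)[0, 1])
            r_skewed.append(np.corrcoef(np.exp(x), y)[0, 1])   # lognormal-ized x

        print("mean r, normal data:", round(float(np.mean(r_normal)), 3))
        print("mean r, skewed data:", round(float(np.mean(r_skewed)), 3))  # attenuated relative to rho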

  4. Using kriging to bound satellite ranging errors due to the ionosphere

    NASA Astrophysics Data System (ADS)

    Blanch, Juan

    The Global Positioning System (GPS) has the potential to become the primary navigational aid for civilian aircraft, thanks to satellite based augmentation systems (SBAS). SBAS systems, including the United States' Wide Area Augmentation System (WAAS), provide corrections and hard bounds on the user errors. The ionosphere is the largest and least predictable source of error. The only ionospheric information available to WAAS is a set of range delay measurements taken at reference stations. From this data, the master station must compute a real time estimate of the ionospheric delay and a hard error bound valid for any user. The variability of the ionospheric behavior has caused the confidence bounds corresponding to the ionosphere to be very large in WAAS. These ranging bounds translate into conservative bounds on user position error. These position error bounds (called protection levels) have values of 30 to 50 meters. Since these values fluctuate near the maximum tolerable limit, WAAS is not always available. In order to increase the availability of WAAS, we must decrease the confidence bounds corresponding to ionospheric uncertainty while maintaining integrity. In this work, I present an ionospheric estimation algorithm based on kriging. I first introduce a simple model of the Vertical Ionospheric Delay that captures both the deterministic behavior and the random behavior of the ionosphere. Under this model, the kriging method is optimal. More importantly, kriging provides an estimation variance that can be translated into an error bound. However, this method must be modified for three reasons: first, the state of the ionosphere is unknown and can only be estimated through real-time measurements; second, because of bandwidth constraints, the user cannot receive all the measurements; and third, there is noise in the measurements. I will show how these three obstacles can be overcome. The algorithm presented here provides a reduction in the error bound corresponding
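
    A compact Python sketch of ordinary kriging, the estimator named above, showing both the estimate and the kriging variance that can be turned into an error bound (the covariance model, pierce-point geometry and delay values are all hypothetical):

        import numpy as np

        def exp_cov(h, sill=1.0, corr_len_km=1500.0):
            """Exponential covariance model for vertical ionospheric delay (illustrative)."""
            return sill * np.exp(-h / corr_len_km)

        def ordinary_krige(xy_obs, z_obs, xy0, cov=exp_cov):
            """Ordinary kriging estimate and kriging variance at location xy0."""
            n = len(z_obs)
            d = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
            C = cov(d)
            c0 = cov(np.linalg.norm(xy_obs - xy0, axis=-1))
            # Augmented system enforcing the unbiasedness constraint sum(weights) = 1.
            A = np.block([[C, np.ones((n, 1))], [np.ones((1, n)), np.zeros((1, 1))]])
            b = np.concatenate([c0, [1.0]])
            sol = np.linalg.solve(A, b)
            w, lam = sol[:n], sol[n]
            return w @ z_obs, cov(0.0) - w @ c0 - lam   # estimate, kriging variance

        # Hypothetical ionospheric pierce points (km, local plane) and vertical delays (m).
        rng = np.random.default_rng(3)
        xy_obs = rng.uniform(-2000.0, 2000.0, size=(20, 2))
        z_obs = 3.0 + 0.5 * np.sin(xy_obs[:, 0] / 800.0) + rng.normal(0.0, 0.1, 20)
        est, var = ordinary_krige(xy_obs, z_obs, np.array([100.0, -250.0]))
        print(f"delay estimate {est:.2f} m, kriging standard deviation {np.sqrt(var):.2f} m")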

  5. Sinusoidal Siemens star spatial frequency response measurement errors due to misidentified target centers

    SciTech Connect

    Birch, Gabriel Carisle; Griffin, John Clark

    2015-07-23

    Numerous methods are available to measure the spatial frequency response (SFR) of an optical system. A recent change to the ISO 12233 photography resolution standard includes a sinusoidal Siemens star test target. We take the sinusoidal Siemens star proposed by the ISO 12233 standard, measure system SFR, and perform an analysis of errors induced by incorrectly identifying the center of a test target. We show a closed-form solution for the radial profile intensity measurement given an incorrectly determined center and describe how this error reduces the measured SFR of the system. As a result, using the closed-form solution, we propose a two-step process by which test target centers are corrected and the measured SFR is restored to the nominal, correctly centered values.

  6. Sinusoidal Siemens star spatial frequency response measurement errors due to misidentified target centers

    DOE PAGESBeta

    Birch, Gabriel Carisle; Griffin, John Clark

    2015-07-23

    Numerous methods are available to measure the spatial frequency response (SFR) of an optical system. A recent change to the ISO 12233 photography resolution standard includes a sinusoidal Siemens star test target. We take the sinusoidal Siemens star proposed by the ISO 12233 standard, measure system SFR, and perform an analysis of errors induced by incorrectly identifying the center of a test target. We show a closed-form solution for the radial profile intensity measurement given an incorrectly determined center and describe how this error reduces the measured SFR of the system. As a result, using the closed-form solution, we propose a two-step process by which test target centers are corrected and the measured SFR is restored to the nominal, correctly centered values.

  7. Space telemetry degradation due to Manchester data asymmetry induced carrier tracking phase error

    NASA Technical Reports Server (NTRS)

    Nguyen, Tien M.

    1991-01-01

    The deleterious effects that Manchester (or bi-phase) data asymmetry has on the performance of phase-modulated residual carrier communication systems are analyzed. Expressions for the power spectral density of an asymmetric Manchester data stream, the interference-to-carrier signal power ratio (I/C), and the error probability performance are derived. Since data asymmetry can cause undesired spectral components at the carrier frequency, the I/C ratio is given as a function of both the data asymmetry and the telemetry modulation index. Also presented are the sensitivities of the asymmetry-induced carrier tracking loop phase error and of the system bit-error rate to various parameters of the models.
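
    A simple numerical illustration in Python of how data asymmetry puts power at the carrier frequency: the baseband Manchester waveform acquires a nonzero mean roughly equal to the asymmetry (the asymmetry model used here, lengthening every high half-symbol, is one plausible convention and may differ from the paper's):

        import numpy as np
        from scipy import signal

        rng = np.random.default_rng(7)
        n_bits, sps = 4000, 40                # bits and samples per bit
        asym = 0.05                           # 5 % data asymmetry
        shift = int(round(asym * sps / 2))

        bits = rng.integers(0, 2, n_bits)
        wave = np.empty(n_bits * sps)
        for k, b in enumerate(bits):
            # Manchester: '1' -> high/low, '0' -> low/high; asymmetry lengthens the
            # high half-symbol and shortens the low half-symbol by the same amount.
            hi, lo = np.ones(sps // 2 + shift), -np.ones(sps // 2 - shift)
            wave[k * sps:(k + 1) * sps] = np.concatenate([hi, lo] if b else [lo, hi])

        print("mean (maps to power at the carrier):", wave.mean())    # ~ asym
        f, psd = signal.welch(wave, fs=sps, nperseg=4096)              # frequencies in cycles/bit
        print("PSD at f = 0 relative to the spectral peak:", psd[0] / psd.max())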

  8. Relativistic positioning: errors due to uncertainties in the satellite world lines

    NASA Astrophysics Data System (ADS)

    Puchades, Neus; Sáez, Diego

    2014-07-01

    Global navigation satellite systems use appropriate satellite constellations to get the coordinates of a user—close to Earth—in an almost inertial reference system. We have simulated both GPS and GALILEO constellations. Uncertainties in the satellite world lines lead to dominant positioning errors. In this paper, a detailed analysis of these errors is developed inside a large region surrounding Earth. This analysis is performed in the framework of the so-called relativistic positioning systems. Our study is based on the Jacobian (J) of the transformation giving the emission coordinates in terms of the inertial ones. Around points of vanishing J, positioning errors are too large. We show that, for any 4-tuple of satellites, the points with J=0 are located at distances, D, from the Earth centre greater than about 2R/3, where R is the radius of the satellite orbits, which are assumed to be circular. Our results strongly suggest that, for D-distances greater than 2R/3 and smaller than 10^5 km, a rather good positioning may be achieved by using appropriate satellite 4-tuples without J=0 points located in the user vicinity. The way to find these 4-tuples is discussed for arbitrary users with D < 10^5 km and, then, preliminary considerations about satellite navigation at D < 10^5 km are presented. Future work on the subject of space navigation—based on appropriate simulations—is in progress.

  9. Anemia Causes Hypoglycemia in ICU Patients Due to Error in Single-Channel Glucometers: Methods of Reducing Patient Risk

    PubMed Central

    Pidcoke, Heather F.; Wade, Charles E.; Mann, Elizabeth A.; Salinas, Jose; Cohee, Brian M.; Holcomb, John B.; Wolf, Steven E.

    2014-01-01

    OBJECTIVE Intensive insulin therapy (IIT) in the critically ill reduces mortality but carries the risk of increased hypoglycemia. Point-of-care (POC) blood glucose analysis is standard; however, anemia causes falsely high values and potentially masks hypoglycemia. Permissive anemia is routinely practiced in most intensive care units (ICUs). We hypothesized that POC glucometer error due to anemia is prevalent, can be mathematically corrected, and correction uncovers occult hypoglycemia during IIT. DESIGN The study has both retrospective and prospective phases. We reviewed data to verify the presence of systematic error, determine the source of error, and establish the prevalence of anemia. We confirmed our findings by reproducing the error in an in-vitro model. Prospective data were used to develop a correction formula validated by the Monte Carlo method. Correction was implemented in a burn ICU and results evaluated after nine months. SETTING Burn and trauma ICUs at a single research institution. PATIENTS/SUBJECTS Samples for in-vitro studies were taken from healthy volunteers. Samples for formula development were from critically ill patients on IIT. INTERVENTIONS Insulin doses were calculated based on predicted serum glucose values from corrected POC glucometer measurements. MEASUREMENTS Time-matched POC glucose, laboratory glucose, and hematocrit values. MAIN RESULTS We previously found that anemia (HCT<34%) produces systematic error in glucometer measurements. The error was correctible with a mathematical formula developed and validated using prospectively collected data. Error of uncorrected POC glucose ranged from 19% to 29% (p<0.001), improving to ≤5% after mathematical correction of prospective data. Comparison of data pairs before and after correction formula implementation demonstrated a 78% decrease in the incidence of hypoglycemia in critically ill and anemic patients treated with insulin and tight glucose control (p<0.001). CONCLUSIONS A mathematical
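
    The correction formula itself is not reproduced in the record above; the following Python sketch only illustrates, with synthetic data, how a hematocrit-dependent correction of POC readings toward laboratory glucose could be fit and applied (all coefficients, variable ranges and the model form are hypothetical):

        import numpy as np

        rng = np.random.default_rng(11)
        hct = rng.uniform(18.0, 45.0, 200)                       # hematocrit, %
        lab = rng.uniform(60.0, 220.0, 200)                      # laboratory glucose, mg/dL
        poc = lab * (1.0 + 0.006 * (34.0 - hct)) + rng.normal(0.0, 4.0, 200)  # synthetic anemia bias

        # Fit lab ~ b0 + b1*poc + b2*poc*(34 - hct) by ordinary least squares.
        X = np.column_stack([np.ones_like(poc), poc, poc * (34.0 - hct)])
        beta, *_ = np.linalg.lstsq(X, lab, rcond=None)

        def corrected(poc_value, hct_value):
            """Hypothetical corrected serum glucose from a POC reading and hematocrit."""
            return beta @ np.array([1.0, poc_value, poc_value * (34.0 - hct_value)])

        print("predicted lab glucose for POC = 70 mg/dL, HCT = 22%:", round(corrected(70.0, 22.0), 1))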

  10. Seasonal GPS Positioning Errors due to Water Content Variations in Atmosphere

    NASA Astrophysics Data System (ADS)

    Tian, Y.

    2013-12-01

    Non-tectonic signals, e.g. seasonal variations and common-mode errors (CME), remain in Global Positioning System (GPS) positioning results derived using state-of-the-art software and models, which blurs the detection of transient events. Previous studies have shown that there are also seasonal variations in GPS positioning accuracy, i.e., the scatter of GPS positions in the summer is larger than in the winter for some regional networks. In this work, a consistent reprocessing of historical data for global GPS stations is carried out to confirm the existence of such variations and to characterize their spatial pattern at the global scale. It is found that GPS stations in the Northern Hemisphere have larger positioning errors in the summer than in the winter, whereas Southern Hemisphere stations have larger errors in the winter than in the summer. Results for several typical stations are shown in Fig. 1. After excluding several possible origins of this phenomenon, it is found that the variation of precipitable water vapor (PWV) content in the atmosphere is highly correlated with this kind of seasonal GPS positioning error (Fig. 2). Although it cannot yet be confirmed that GPS positioning accuracy would improve if the PWV effect were removed thoroughly for rainy days during GPS data processing, it is most likely that this phenomenon is caused by the water vapor content in the troposphere. Solving this problem would enhance our ability to detect weak transient signals that are blurred in continuous GPS positions. Fig. 1 Position time series with CME removed for BJFS (left), HRAO (middle), and WTZR (right). BJFS and WTZR are located in the Northern Hemisphere, where positions are much more scattered in the summer; the situation is reversed at HRAO, which is located in the Southern Hemisphere. Fig. 2 GPS positioning errors (represented here by the one-way postfit residuals (OWPR) from the GAMIT solution

  11. Signal distortion due to beam-pointing error in a chopper modulated laser system.

    PubMed

    Eklund, H

    1978-01-15

    The detector output has been studied for a long-distance system with a chopped cw laser as transmitter source. It is shown experimentally that the pulse distortion of the detected signal is dependent on the beam-pointing error. Parameters reflecting the pulse distortion are defined. The beam deviation in 1-D is found to be strongly related to these parameters. The result is in agreement with a theoretical model based upon the Fresnel diffraction theory. Possible applications in beam-tracking systems, communications systems, and atmospheric studies are discussed. PMID:20174398

  12. Errors in short circuit measurements due to spectral mismatch between sunlight and solar simulators

    NASA Technical Reports Server (NTRS)

    Curtis, H. B.

    1976-01-01

    Errors in short circuit current measurement were calculated for a variety of spectral mismatch conditions. The differences in spectral irradiance between terrestrial sunlight and three types of solar simulator were studied, as well as the differences in spectral response between three types of reference solar cells and various test cells. The simulators considered were a short arc xenon lamp AM0 sunlight simulator, an ordinary quartz halogen lamp, and an ELH-type quartz halogen lamp. The three types of solar cells studied were a silicon cell, a cadmium sulfide cell and a gallium arsenide cell.
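
    A minimal Python sketch of the standard spectral mismatch factor that quantifies this kind of short-circuit current error (conventions for defining the factor vary, and the spectra and spectral responses below are smooth stand-ins, not measured data):

        import numpy as np

        def mismatch_factor(wl, e_sun, e_sim, sr_test, sr_ref):
            """Ratio of the test cell's measured to true short-circuit current when the
            simulator level is set with the reference cell (one common convention)."""
            i = lambda e, sr: np.trapz(e * sr, wl)
            return (i(e_sim, sr_test) * i(e_sun, sr_ref)) / (i(e_sun, sr_test) * i(e_sim, sr_ref))

        # Illustrative smooth spectra and responses on a 350-1100 nm grid (not real data).
        wl = np.linspace(350.0, 1100.0, 400)
        e_sun = np.exp(-((wl - 650.0) / 300.0) ** 2)        # stand-in for sunlight
        e_sim = np.exp(-((wl - 800.0) / 250.0) ** 2)        # stand-in for an ELH-type lamp
        sr_si = np.clip((wl - 350.0) / 750.0, 0.0, 1.0)     # stand-in silicon reference cell
        sr_gaas = np.where(wl < 880.0, np.clip((wl - 350.0) / 500.0, 0.0, 1.0), 0.0)  # stand-in GaAs test cell

        m = mismatch_factor(wl, e_sun, e_sim, sr_gaas, sr_si)
        print(f"mismatch factor M = {m:.3f} -> short-circuit current error ~ {100 * (m - 1):.1f}%")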

  13. Soft-error generation due to heavy-ion tracks in bipolar integrated circuits

    NASA Technical Reports Server (NTRS)

    Zoutendyk, J. A.

    1984-01-01

    Both bipolar and MOS integrated circuits have been empirically demonstrated to be susceptible to single-particle soft-error generation, commonly referred to as single-event upset (SEU), which is manifested in a bit-flip in a latch-circuit construction. Here, the intrinsic characteristics of SEU in bipolar (static) RAMs are demonstrated through results obtained from the modeling of this effect using computer circuit-simulation techniques. It is shown that as the dimensions of the devices decrease, the critical charge required to cause SEU decreases in proportion to the device cross-section. The overall results of the simulations are applicable to most integrated circuit designs.

  14. Mitigation of Angle Tracking Errors Due to Color Dependent Centroid Shifts in SIM-Lite

    NASA Technical Reports Server (NTRS)

    Nemati, Bijan; An, Xin; Goullioud, Renaud; Shao, Michael; Shen, Tsae-Pyng; Wehmeier, Udo J.; Weilert, Mark A.; Wang, Xu; Werne, Thomas A.; Wu, Janet P.; Zhai, Chengxing

    2010-01-01

    The SIM-Lite astrometric interferometer will search for Earth-size planets in the habitable zones of nearby stars. In this search the interferometer will monitor the astrometric position of candidate stars relative to nearby reference stars over the course of a 5 year mission. The elemental measurement is the angle between a target star and a reference star. This is a two-step process, in which the interferometer will each time need to use its controllable optics to align the starlight in the two arms with each other and with the metrology beams. The sensor for this alignment is an angle tracking CCD camera. Various constraints in the design of the camera subject it to systematic alignment errors when observing a star of one spectrum compared with a star of a different spectrum. This effect is called a Color Dependent Centroid Shift (CDCS) and has been studied extensively with SIM-Lite's SCDU testbed. Here we describe results from the simulation and testing of this error in the SCDU testbed, as well as effective ways that it can be reduced to acceptable levels.

  15. Error in Dasibi flight measurements of atmospheric ozone due to instrument wall-loss

    NASA Technical Reports Server (NTRS)

    Ainsworth, J. E.; Hagemeyer, J. R.; Reed, E. I.

    1981-01-01

    Theory suggests that in laminar flow the percent loss of a trace constituent to the walls of a measuring instrument varies as P^(-2/3), where P is the total gas pressure. Preliminary laboratory ozone wall-loss measurements confirm this P^(-2/3) dependence. Accurate assessment of wall-loss is thus of particular importance for those balloon-borne instruments utilizing laminar flow at ambient pressure, since the ambient pressure decreases by a factor of 350 during ascent to 40 km. Measurements and extrapolations made for a Dasibi ozone monitor modified for balloon flight indicate that the wall-loss error at 40 km was between 6 and 30 percent and that the wall-loss error in the derived total ozone column-content for the region from the surface to 40 km altitude was between 2 and 10 percent. At 1000 mb, turbulence caused an order of magnitude increase in the Dasibi wall-loss.
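
    A one-line check of that scaling (the surface loss value is illustrative, not the flight data):

        # Wall loss scaling as P**(-2/3): a factor-of-350 drop in pressure from the
        # surface (~1000 mb) to ~40 km multiplies the fractional wall loss by 350**(2/3).
        surface_loss_percent = 0.1            # assumed (illustrative) loss at 1000 mb
        scale = 350 ** (2.0 / 3.0)            # ~ 49.7
        print(f"scale factor {scale:.1f}, implied loss near 40 km ~ {surface_loss_percent * scale:.1f}%")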

  16. Inhalation errors due to device switch in patients with chronic obstructive pulmonary disease and asthma: critical health and economic issues

    PubMed Central

    Roggeri, Alessandro; Micheletto, Claudio; Roggeri, Daniela Paola

    2016-01-01

    Background Different inhalation devices are characterized by different techniques of use. The untrained switching of devices in chronic obstructive pulmonary disease (COPD) and asthma patients may be associated with inadequate inhalation technique and, consequently, could lead to a reduction in adherence to treatment and limit control of the disease. The aim of this analysis was to estimate the potential economic impact related to errors in inhalation in patients switching device without adequate training. Methods An Italian real-practice study conducted in patients affected by COPD and asthma has shown an increase in health care resource consumption associated with misuse of inhalers. Particularly, significantly higher rates of hospitalizations, emergency room (ER) visits, and pharmacological treatments (steroids and antimicrobials) were observed. In this analysis, those differences in resource consumption were monetized considering the Italian National Health Service (INHS) perspective. Results Comparing a hypothetical cohort of 100 COPD patients with at least a critical error in inhalation vs 100 COPD patients without errors in inhalation, a yearly excess of 11.5 hospitalizations, 13 ER visits, 19.5 antimicrobial courses, and 47 corticosteroid courses for the first population was revealed. In the same way, considering 100 asthma patients with at least a critical error in inhalation vs 100 asthma patients without errors in inhalation, the first population is associated with a yearly excess of 19 hospitalizations, 26.5 ER visits, 4.5 antimicrobial courses, and 21.5 corticosteroid courses. These differences in resource consumption could be associated with an increase in health care expenditure for INHS, due to inhalation errors, of €23,444/yr in COPD and €44,104/yr in asthma for the considered cohorts of 100 patients. Conclusion This evaluation highlights that misuse of inhaler devices, due to inadequate training or nonconsented switch of inhaled medications

  17. Dipole estimation errors due to not incorporating anisotropic conductivities in realistic head models for EEG source analysis

    NASA Astrophysics Data System (ADS)

    Hallez, Hans; Staelens, Steven; Lemahieu, Ignace

    2009-10-01

    EEG source analysis is a valuable tool for brain functionality research and for diagnosing neurological disorders, such as epilepsy. It requires a geometrical representation of the human head or a head model, which is often modeled as an isotropic conductor. However, it is known that some brain tissues, such as the skull or white matter, have an anisotropic conductivity. Many studies reported that the anisotropic conductivities have an influence on the calculated electrode potentials. However, few studies have assessed the influence of anisotropic conductivities on the dipole estimations. In this study, we want to determine the dipole estimation errors due to not taking into account the anisotropic conductivities of the skull and/or brain tissues. Therefore, head models are constructed with the same geometry, but with an anisotropically conducting skull and/or brain tissue compartment. These head models are used in simulation studies where the dipole location and orientation error is calculated due to neglecting anisotropic conductivities of the skull and brain tissue. Results show that not taking into account the anisotropic conductivities of the skull yields a dipole location error between 2 and 25 mm, with an average of 10 mm. When the anisotropic conductivities of the brain tissues are neglected, the dipole location error ranges between 0 and 5 mm. In this case, the average dipole location error was 2.3 mm. In all simulations, the dipole orientation error was smaller than 10°. We can conclude that the anisotropic conductivities of the skull have to be incorporated to improve the accuracy of EEG source analysis. The results of the simulation, as presented here, also suggest that incorporation of the anisotropic conductivities of brain tissues is not necessary. However, more studies are needed to confirm these suggestions.

  18. Quantifying Errors in Jet Noise Research Due to Microphone Support Reflection

    NASA Technical Reports Server (NTRS)

    Nallasamy, Nambi; Bridges, James

    2002-01-01

    The reflection coefficient of a microphone support structure used in jet noise testing is documented through tests performed in the anechoic AeroAcoustic Propulsion Laboratory. The tests involve the acquisition of acoustic data from a microphone mounted in the support structure while noise is generated from a known broadband source. The ratio of reflected signal amplitude to the original signal amplitude is determined by performing an auto-correlation function on the data. The documentation of the reflection coefficients is one component of the validation of jet noise data acquired using the given microphone support structure. Finally, two forms of acoustic material were applied to the microphone support structure to determine their effectiveness in reducing reflections which give rise to bias errors in the microphone measurements.
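
    A small Python sketch of the autocorrelation approach described above, using a synthetic broadband signal plus one delayed, attenuated reflection (delay, level and sample rate are hypothetical):

        import numpy as np
        from scipy import signal

        rng = np.random.default_rng(5)
        fs, dur = 50_000, 0.5                       # sample rate (Hz), duration (s)
        n = int(fs * dur)
        direct = rng.normal(0, 1, n)                # broadband source at the microphone

        # Hypothetical reflection: 20 % amplitude, arriving 2 ms after the direct path.
        r_true, delay = 0.20, int(0.002 * fs)
        measured = direct.copy()
        measured[delay:] += r_true * direct[:-delay]

        # Normalised autocorrelation; the secondary peak height estimates |r|
        # (strictly r / (1 + r**2), a negligible bias for small r).
        ac = signal.correlate(measured, measured, mode="full", method="fft")[n - 1:]
        ac /= ac[0]
        lag = np.argmax(ac[delay - 10:delay + 10]) + delay - 10
        print(f"estimated reflection coefficient ~ {ac[lag]:.3f} at lag {1000 * lag / fs:.2f} ms")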

  19. Prevalence of visual impairment due to uncorrected refractive error: Results from Delhi-Rapid Assessment of Visual Impairment Study

    PubMed Central

    Senjam, Suraj Singh; Vashist, Praveen; Gupta, Noopur; Malhotra, Sumit; Misra, Vasundhara; Bhardwaj, Amit; Gupta, Vivek

    2016-01-01

    Aim: To estimate the prevalence of visual impairment (VI) due to uncorrected refractive error (URE) and to assess the barriers to utilization of services in the adult urban population of Delhi. Materials and Methods: A population-based rapid assessment of VI was conducted among people aged 40 years and above in 24 randomly selected clusters of East Delhi district. Presenting visual acuity (PVA) was assessed in each eye using Snellen's E chart. Pinhole examination was done if PVA was <20/60 in either eye, and ocular examination was performed to ascertain the cause of VI. Barriers to utilization of services for refractive error were recorded with questionnaires. Results: Of 2421 individuals enumerated, 2331 (96%) individuals were examined. Females constituted 50.7% of those examined. The mean age of all examined subjects was 51.32 ± 10.5 years (standard deviation). VI in either eye due to URE was present in 275 individuals (11.8%, 95% confidence interval [CI]: 10.5–13.1). URE was identified as the most common cause (53.4%) of VI. The overall prevalence of VI due to URE in the study population was 6.1% (95% CI: 5.1–7.0). Older individuals as well as females were more likely to have VI due to URE (odds ratio [OR] = 12.3; P < 0.001 and OR = 1.5; P < 0.02). Lack of felt need was the most common reported barrier (31.5%). Conclusions: The prevalence of VI due to URE among the urban adult population of Delhi is still high despite the availability of abundant eye care facilities. The majority of reported barriers are related to human behavior and attitude toward the refractive error. Understanding these aspects will help in planning appropriate strategies to eliminate VI due to URE. PMID:27380979
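
    For reference, a simple normal-approximation interval reproduces the reported prevalence figures (the study's actual analysis may have accounted for the cluster sampling design):

        import math

        cases, n = 275, 2331                      # VI in either eye due to URE, of those examined
        p = cases / n
        se = math.sqrt(p * (1 - p) / n)
        lo, hi = p - 1.96 * se, p + 1.96 * se
        print(f"prevalence {100 * p:.1f}% (95% CI {100 * lo:.1f}-{100 * hi:.1f}%)")
        # -> prevalence 11.8% (95% CI 10.5-13.1%), matching the reported interval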

  20. Ticks in the wrong boxes: assessing error in blanket-drag studies due to occasional sampling

    PubMed Central

    2013-01-01

    Background The risk posed by ticks as vectors of disease is typically assessed by blanket-drag sampling of host-seeking individuals. Comparisons of peak abundance between plots – either in order to establish their relative risk or to identify environmental correlates – are often carried out by sampling on one or two occasions during the period of assumed peak tick activity. Methods This paper simulates this practice by ‘re-sampling’ from model datasets derived from an empirical field study. Re-sample dates for each plot are guided by either the previous year’s peak at the plot, or the previous year’s peak at a similar, nearby plot. Results from single, double and three-weekly sampling regimes are compared. Results Sampling on single dates within a two-month window of assumed peak activity has the potential to introduce profound errors; sampling on two dates (double sampling) offers greater precision, but three-weekly sampling is the least biased. Conclusions The common practice of sampling for the abundance of host-seeking ticks on single dates in each plot-year should be strenuously avoided; it is recommended that field acarologists employ regular sampling throughout the year at intervals no greater than three weeks, for a variety of epidemiological studies. PMID:24321224

  1. A Framework for Dealing With Uncertainty due to Model Structure Error

    NASA Astrophysics Data System (ADS)

    van der Keur, P.; Refsgaard, J.; van der Sluijs, J.; Brown, J.

    2004-12-01

    Although uncertainty about structures of environmental models (conceptual uncertainty) has often been recognised to be the main source of uncertainty in model predictions, it is rarely considered in environmental modelling. Rather, formal uncertainty analyses have traditionally focused on model parameters and input data as the principal sources of uncertainty in model predictions. The traditional approach to model uncertainty analysis, which considers only a single conceptual model, fails to adequately sample the relevant space of plausible models. As such, it is prone to modelling bias and underestimation of model uncertainty. In this paper we review a range of strategies for assessing structural uncertainties. The existing strategies fall into two categories depending on whether field data are available for the variable of interest. Most research attention has until now been devoted to situations where model structure uncertainties can be assessed directly on the basis of field data. This corresponds to a situation of `interpolation'. However, in many cases environmental models are used for `extrapolation' beyond the situation and the field data available for calibration. A framework is presented for assessing the predictive uncertainties of environmental models used for extrapolation. The key elements are the use of alternative conceptual models, the assessment of their pedigree, and expert elicitation of whether the sample of conceptual models adequately represents the space of plausible models. Keywords: model error, model structure, conceptual uncertainty, scenario analysis, pedigree

  2. Determination of stores pointing error due to wing flexibility under flight load

    NASA Technical Reports Server (NTRS)

    Lokos, William A.; Bahm, Catherine M.; Heinle, Robert A.

    1995-01-01

    The in-flight elastic wing twist of a fighter-type aircraft was studied to provide for an improved on-board real-time computed prediction of pointing variations of three wing store stations. This is an important capability to correct sensor pod alignment variation or to establish initial conditions of iron bombs or smart weapons prior to release. The original algorithm was based upon coarse measurements. The electro-optical Flight Deflection Measurement System measured the deformed wing shape in flight under maneuver loads to provide a higher resolution database from which an improved twist prediction algorithm could be developed. The FDMS produced excellent repeatable data. In addition, a NASTRAN finite-element analysis was performed to provide additional elastic deformation data. The FDMS data combined with the NASTRAN analysis indicated that an improved prediction algorithm could be derived by using a different set of aircraft parameters, namely normal acceleration, stores configuration, Mach number, and gross weight.

  3. Subspace electrode selection methodology for EEG multiple source localization error reduction due to uncertain conductivity values.

    PubMed

    Crevecoeur, Guillaume; Yitembe, Bertrand; Dupre, Luc; Van Keer, Roger

    2013-01-01

    This paper proposes a modification of the subspace correlation cost function and the Recursively Applied and Projected Multiple Signal Classification (RAP-MUSIC) method for electroencephalography (EEG) source analysis in epilepsy. This enables the reconstruction of neural source locations and orientations that are less degraded by the uncertain knowledge of the head conductivity values. An extended linear forward model is used in the subspace correlation cost function that incorporates the sensitivity of the EEG potentials to the uncertain conductivity value parameter. More specifically, the principal vector of the subspace correlation function is used to provide relevant information for solving the EEG inverse problem. A simulation study is carried out on a simplified spherical head model with uncertain skull to soft tissue conductivity ratio. Results show an improvement in the reconstruction accuracy of source parameters compared to the traditional methodology when using conductivity ratio values that are different from the actual conductivity ratio. PMID:24111154

  4. Prediction of DVH parameter changes due to setup errors for breast cancer treatment based on 2D portal dosimetry

    SciTech Connect

    Nijsten, S. M. J. J. G.; Elmpt, W. J. C. van; Mijnheer, B. J.; Minken, A. W. H.; Persoon, L. C. G. G.; Lambin, P.; Dekker, A. L. A. J.

    2009-01-15

    Electronic portal imaging devices (EPIDs) are increasingly used for portal dosimetry applications. In our department, EPIDs are clinically used for two-dimensional (2D) transit dosimetry. Predicted and measured portal dose images are compared to detect dose delivery errors caused for instance by setup errors or organ motion. The aim of this work is to develop a model to predict dose-volume histogram (DVH) changes due to setup errors during breast cancer treatment using 2D transit dosimetry. First, correlations between DVH parameter changes and 2D gamma parameters are investigated for different simulated setup errors, which are described by a binomial logistic regression model. The model calculates the probability that a DVH parameter changes more than a specific tolerance level and uses several gamma evaluation parameters for the planning target volume (PTV) projection in the EPID plane as input. Second, the predictive model is applied to clinically measured portal images. Predicted DVH parameter changes are compared to calculated DVH parameter changes using the measured setup error resulting from a dosimetric registration procedure. Statistical accuracy is investigated by using receiver operating characteristic (ROC) curves and values for the area under the curve (AUC), sensitivity, specificity, positive and negative predictive values. Changes in the mean PTV dose larger than 5%, and changes in V90 and V95 larger than 10% are accurately predicted based on a set of 2D gamma parameters. Most pronounced changes in the three DVH parameters are found for setup errors in the lateral-medial direction. AUC, sensitivity, specificity, and negative predictive values were between 85% and 100% while the positive predictive values were lower but still higher than 54%. Clinical predictive value is decreased due to the occurrence of patient rotations or breast deformations during treatment, but the overall reliability of the predictive model remains high. Based on our
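
    A minimal Python sketch of the modelling step described above, fitting a binomial logistic regression on gamma-evaluation summary parameters and scoring it with ROC/AUC (the data and coefficients are synthetic stand-ins, not the clinical data of the study):

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        # Synthetic stand-in data: two 2D gamma-evaluation summary parameters per
        # fraction, and a label marking whether the mean PTV dose changed by > 5%.
        rng = np.random.default_rng(2)
        n = 500
        gamma_mean = rng.normal(0.4, 0.15, n)
        gamma_fail_frac = np.clip(rng.normal(0.05, 0.04, n), 0, 1)
        logit = -4.0 + 3.0 * gamma_mean + 25.0 * gamma_fail_frac
        label = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

        X = np.column_stack([gamma_mean, gamma_fail_frac])
        model = LogisticRegression(max_iter=1000).fit(X, label)
        prob = model.predict_proba(X)[:, 1]       # P(DVH change exceeds tolerance)
        print("AUC:", round(roc_auc_score(label, prob), 3))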

  5. Quantification of the error induced on Langmuir probe determined electron temperature and density due to an RF plasma potential

    NASA Astrophysics Data System (ADS)

    Kafle, Nischal; Donovan, David; Martin, Elijah

    2015-11-01

    An RF plasma potential can significantly affect the IV characteristic of a Langmuir probe if not properly compensated. A substantial research effort in the low temperature plasma community has been carried out to determine this effect and how to achieve the required compensation for accurate measurements. However, quantification of the error induced in the extracted electron temperature and density from an uncompensated Langmuir probe due to an RF plasma potential has not been explored. The research presented is the first attempt to quantify this error in terms of RF plasma potential magnitude, electron temperature, and electron density. The Langmuir probe IV characteristic was simulated using empirical formulas fitted to the Laframboise simulation results. The RF-affected IV characteristic was simulated by adding a sinusoidal variation to the plasma potential and computing the time average numerically. The error induced in the electron temperature and density was determined by fitting the RF-affected IV characteristic to the empirical formulas representing the standard Laframboise simulation results. Experimental results indicating the accuracy of this quantification will be presented.
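
    The following Python sketch reproduces the idea with a deliberately simplified IV model rather than the Laframboise-based empirical formulas used in the work: the IV characteristic is time-averaged over a sinusoidal plasma potential and an apparent electron temperature is then fitted, showing the induced error:

        import numpy as np

        def iv(V, Vp, Te=3.0, I_es=1.0, I_is=0.05):
            """Simplified Langmuir IV model (ion saturation plus electron
            retardation/saturation); not the Laframboise-based fits of the study."""
            return -I_is + I_es * np.exp(np.minimum(V - Vp, 0.0) / Te)

        V = np.linspace(-20.0, 10.0, 600)
        Vp0, Te_true, A = 0.0, 3.0, 6.0        # plasma potential (V), Te (eV), RF amplitude (V)

        phases = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
        I_rf = np.mean([iv(V, Vp0 + A * np.sin(p)) for p in phases], axis=0)  # time average
        I_dc = iv(V, Vp0)

        def apparent_te(V, I, I_is=0.05, lo=-10.0, hi=0.0):
            """Apparent Te from the slope of ln(electron current) in a fixed window."""
            m = (V >= lo) & (V <= hi)
            return 1.0 / np.polyfit(V[m], np.log(I[m] + I_is), 1)[0]

        print(f"true Te {Te_true:.2f} eV, fitted without RF {apparent_te(V, I_dc):.2f} eV, "
              f"fitted with RF {apparent_te(V, I_rf):.2f} eV")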

  6. ANALYSIS OF DISTRIBUTION FEEDER LOSSES DUE TO ADDITION OF DISTRIBUTED PHOTOVOLTAIC GENERATORS

    SciTech Connect

    Tuffner, Francis K.; Singh, Ruchi

    2011-08-09

    Distributed generators (DG) are small scale power supplying sources owned by customers or utilities and scattered throughout the power system distribution network. Distributed generation can be both renewable and non-renewable. Addition of distributed generation is primarily to increase feeder capacity and to provide peak load reduction. However, this addition comes with several impacts on the distribution feeder. Several studies have shown that addition of DG leads to reduction of feeder loss. However, most of these studies have considered lumped load and distributed load models to analyze the effects on system losses, where the dynamic variation of load due to seasonal changes is ignored. It is very important for utilities to minimize the losses under all scenarios to decrease revenue losses, promote efficient asset utilization, and therefore, increase feeder capacity. This paper will investigate an IEEE 13-node feeder populated with photovoltaic generators on detailed residential houses with water heater, Heating Ventilation and Air conditioning (HVAC) units, lights, and other plug and convenience loads. An analysis of losses for different power system components, such as transformers, underground and overhead lines, and triplex lines, will be performed. The analysis will utilize different seasons and different solar penetration levels (15%, 30%).

  7. 46 CFR Exhibit No. 1 to Subpart Q... - Application for Refund or Waiver of Freight Charges Due to Tariff or Quoting Error

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 9 2011-10-01 2011-10-01 false Application for Refund or Waiver of Freight Charges Due to Tariff or Quoting Error No. Exhibit No. 1 to Subpart Q of Part 502 Shipping FEDERAL MARITIME... of Freight Charges Due to Tariff or Quoting Error Federal Maritime Commission Special Docket...

  8. 46 CFR Exhibit No. 1 to Subpart Q... - Application for Refund or Waiver of Freight Charges Due to Tariff or Quoting Error

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 9 2010-10-01 2010-10-01 false Application for Refund or Waiver of Freight Charges Due to Tariff or Quoting Error No. Exhibit No. 1 to Subpart Q of Part 502 Shipping FEDERAL MARITIME... of Freight Charges Due to Tariff or Quoting Error Federal Maritime Commission Special Docket...

  9. Rigorous model for registration error due to EUV reticle non-flatness and a proposed disposition and compensation technique

    NASA Astrophysics Data System (ADS)

    Lieberman, Barry

    2007-03-01

    The non-telecentricity of EUV lithography exposure systems translates into a very severe specification for EUV mask flatness that is roughly 10 times tighter than the typical current specification for masks used in 193 nm wavelength exposure systems. The mask contribution to the error budget for pattern placement dictates these specifications. EUV mask blank suppliers must meet this specification while simultaneously meeting the even more challenging specification for defect density. This paper suggests a process flow and correction methodology that could conceivably relax the flatness specification. The proposal does require that the proposed method of clamping the mask using an electrostatic chuck be accurate and reproducible. However, this is also a requirement of the current approach. In addition, this proposal requires the incorporation of an electrostatic chuck into a mask-shop metrology tool that precisely replicates the behavior of the chuck found in the EUV exposure tool.

  10. Phantom Effects in School Composition Research: Consequences of Failure to Control Biases Due to Measurement Error in Traditional Multilevel Models

    ERIC Educational Resources Information Center

    Televantou, Ioulia; Marsh, Herbert W.; Kyriakides, Leonidas; Nagengast, Benjamin; Fletcher, John; Malmberg, Lars-Erik

    2015-01-01

    The main objective of this study was to quantify the impact of failing to account for measurement error on school compositional effects. Multilevel structural equation models were incorporated to control for measurement error and/or sampling error. Study 1, a large sample of English primary students in Years 1 and 4, revealed a significantly…

  11. Superior bit error rate and jitter due to improved switching field distribution in exchange spring magnetic recording media

    NASA Astrophysics Data System (ADS)

    Suess, D.; Fuger, M.; Abert, C.; Bruckner, F.; Vogler, C.

    2016-06-01

    We report two effects that lead to a significant reduction of the switching field distribution in exchange spring media. The first effect relies on a subtle mechanism of the interplay between exchange coupling between soft and hard layers and anisotropy that allows significant reduction of the switching field distribution in exchange spring media. This effect reduces the switching field distribution by about 30% compared to single-phase media. A second effect is that due to the improved thermal stability of exchange spring media over single-phase media, the jitter due to thermal fluctuation is significantly smaller for exchange spring media than for single-phase media. The influence of this overall improved switching field distribution on the transition jitter in granular recording and the bit error rate in bit-patterned magnetic recording is discussed. The transition jitter in granular recording for a distribution of K_hard values of 3% in the hard layer, taking into account thermal fluctuations during recording, is estimated to be a = 0.78 nm, which is similar to the best reported calculated jitter in optimized heat-assisted recording media.

  12. Superior bit error rate and jitter due to improved switching field distribution in exchange spring magnetic recording media.

    PubMed

    Suess, D; Fuger, M; Abert, C; Bruckner, F; Vogler, C

    2016-01-01

    We report two effects that lead to a significant reduction of the switching field distribution in exchange spring media. The first effect relies on a subtle mechanism of the interplay between exchange coupling between soft and hard layers and anisotropy that allows significant reduction of the switching field distribution in exchange spring media. This effect reduces the switching field distribution by about 30% compared to single-phase media. A second effect is that due to the improved thermal stability of exchange spring media over single-phase media, the jitter due to thermal fluctuation is significantly smaller for exchange spring media than for single-phase media. The influence of this overall improved switching field distribution on the transition jitter in granular recording and the bit error rate in bit-patterned magnetic recording is discussed. The transition jitter in granular recording for a distribution of K_hard values of 3% in the hard layer, taking into account thermal fluctuations during recording, is estimated to be a = 0.78 nm, which is similar to the best reported calculated jitter in optimized heat-assisted recording media. PMID:27245287

  13. Superior bit error rate and jitter due to improved switching field distribution in exchange spring magnetic recording media

    PubMed Central

    Suess, D.; Fuger, M.; Abert, C.; Bruckner, F.; Vogler, C.

    2016-01-01

    We report two effects that lead to a significant reduction of the switching field distribution in exchange spring media. The first effect relies on a subtle mechanism of the interplay between exchange coupling between soft and hard layers and anisotropy that allows significant reduction of the switching field distribution in exchange spring media. This effect reduces the switching field distribution by about 30% compared to single-phase media. A second effect is that due to the improved thermal stability of exchange spring media over single-phase media, the jitter due to thermal fluctuation is significantly smaller for exchange spring media than for single-phase media. The influence of this overall improved switching field distribution on the transition jitter in granular recording and the bit error rate in bit-patterned magnetic recording is discussed. The transition jitter in granular recording for a distribution of K_hard values of 3% in the hard layer, taking into account thermal fluctuations during recording, is estimated to be a = 0.78 nm, which is similar to the best reported calculated jitter in optimized heat-assisted recording media. PMID:27245287

  14. Kalman filter application to mitigate the errors in the trajectory simulations due to the lunar gravitational model uncertainty

    NASA Astrophysics Data System (ADS)

    Gonçalves, L. D.; Rocco, E. M.; de Moraes, R. V.; Kuga, H. K.

    2015-10-01

    This paper simulates part of the orbital trajectory of the Lunar Prospector mission to analyze the relevance of using a Kalman filter to estimate the trajectory. The study considers the disturbance due to the lunar gravitational potential using one of the most recent models, the LP100K model, which is based on spherical harmonics and considers degree and order up to 100. In order to simplify the expression of the gravitational potential and, consequently, to reduce the computational effort required in the simulation, lower values of degree and order are used in some cases. An analysis is then made of the error introduced into the simulations when such values of degree and order are used to propagate the spacecraft trajectory and control. This analysis was done using the standard deviation that characterizes the uncertainty for each of the values of degree and order used in the LP100K model for the satellite orbit. With knowledge of the uncertainty of the adopted gravity model, lunar orbital trajectory simulations may be carried out considering these uncertainty values. Furthermore, a Kalman filter was also used, which considers the sensor uncertainty that defines the satellite position at each step of the simulation and the model uncertainty, represented by the characteristic variance of the truncated gravity model. This procedure thus represents an effort to bring the results obtained using lower values of degree and order of the spherical harmonics closer to the results that would be attained if the maximum accuracy of the LP100K model were adopted. A comparison is also made between the error in the satellite position when the Kalman filter is used and when it is not. The data for the comparison were obtained from the standard deviation in the velocity increment of the space vehicle.
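
    A drastically simplified, one-dimensional Python sketch of the idea: the truncated-gravity propagation carries an unmodelled drift, the filter's process noise represents the truncation uncertainty, and position fixes keep the estimate bounded (all numbers are illustrative, not Lunar Prospector values):

        import numpy as np

        rng = np.random.default_rng(9)
        n_steps = 200
        truth = np.cumsum(np.full(n_steps, 1.0))     # idealised along-track position (km)

        model_bias = 0.02      # km/step drift from the truncated gravity model (unknown to the filter)
        sigma_model = 0.03     # assumed process (model-truncation) std per step, km
        sigma_meas = 0.5       # position-fix (sensor) std, km

        x, P, est = 0.0, 1.0, []
        for k in range(n_steps):
            # Propagate with the (biased) truncated model and inflate the covariance.
            x = x + 1.0 + model_bias
            P = P + sigma_model ** 2
            # Measurement update with a noisy position fix.
            z = truth[k] + rng.normal(0.0, sigma_meas)
            K = P / (P + sigma_meas ** 2)
            x = x + K * (z - x)
            P = (1.0 - K) * P
            est.append(x)

        err = np.abs(np.array(est) - truth)
        print(f"mean |error| with the filter: {err.mean():.3f} km "
              f"(unfiltered drift alone would reach ~{model_bias * n_steps:.1f} km)")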

  15. Additives

    NASA Technical Reports Server (NTRS)

    Smalheer, C. V.

    1973-01-01

    The chemistry of lubricant additives is discussed to show what the additives are chemically and what functions they perform in the lubrication of various kinds of equipment. Current theories regarding the mode of action of lubricant additives are presented. The additive groups discussed include the following: (1) detergents and dispersants, (2) corrosion inhibitors, (3) antioxidants, (4) viscosity index improvers, (5) pour point depressants, and (6) antifouling agents.

  16. Error in Airspeed Measurement Due to the Static-Pressure Field Ahead of an Airplane at Transonic Speeds

    NASA Technical Reports Server (NTRS)

    O'Bryan, Thomas C; Danforth, Edward C B; Johnston, J Ford

    1955-01-01

    The magnitude and variation of the static-pressure error for various distances ahead of sharp-nose bodies and open-nose air inlets and for a distance of 1 chord ahead of the wing tip of a swept wing are defined by a combination of experiment and theory. The mechanism of the error is discussed in some detail to show the contributing factors that make up the error. The information presented provides a useful means for choosing a proper location for measurement of static pressure for most purposes.

  17. Bounds on least-squares four-parameter sine-fit errors due to harmonic distortion and noise

    SciTech Connect

    Deyst, J.P.; Souders, T.M.; Solomon, O.M.

    1994-03-01

    Least-squares sine-fit algorithms are used extensively in signal processing applications. The parameter estimates produced by such algorithms are subject to both random and systematic errors when the record of input samples consists of a fundamental sine wave corrupted by harmonic distortion or noise. The errors occur because, in general, such sine-fits will incorporate a portion of the harmonic distortion or noise into their estimate of the fundamental. Bounds are developed for these errors for least-squares four-parameter (amplitude, frequency, phase, and offset) sine-fit algorithms. The errors are functions of the number of periods in the record, the number of samples in the record, the harmonic order, and fundamental and harmonic amplitudes and phases. The bounds do not apply to cases in which harmonic components become aliased.
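
    A short Python sketch showing how a four-parameter least-squares sine fit absorbs part of a harmonic into its estimate of the fundamental (the signal parameters and the 2% third harmonic are illustrative):

        import numpy as np
        from scipy.optimize import least_squares

        # Record: a fundamental plus a 2 % third-harmonic component (values illustrative).
        fs, n = 1000.0, 1000
        t = np.arange(n) / fs
        f0, a0, phi0, dc0 = 12.7, 1.0, 0.3, 0.1
        x = (a0 * np.sin(2 * np.pi * f0 * t + phi0) + dc0
             + 0.02 * np.sin(2 * np.pi * 3 * f0 * t + 1.1))

        def resid(p):
            a, f, phi, dc = p
            return a * np.sin(2 * np.pi * f * t + phi) + dc - x

        fit = least_squares(resid, x0=[0.9, 12.5, 0.0, 0.0])
        a, f, phi, dc = fit.x
        # The residual parameter errors come from the harmonic being partly absorbed into the fit.
        print(f"amplitude error {abs(a - a0):.2e}, frequency error {abs(f - f0):.2e} Hz")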

  18. Errors in the determination of the solar constant by the Langley method due to the presence of volcanic aerosol

    SciTech Connect

    Schotland, R.M.; Hartman, J.E.

    1989-02-01

    The accuracy in the determination of the solar constant by means of the Langley method is strongly influenced by the spatial inhomogeneities of the atmospheric aerosol. Volcanoes frequently inject aerosol into the upper troposphere and lower stratosphere. This paper evaluates the solar constant error that would occur if observations had been taken throughout the plume of El Chichon observed by NASA aircraft in the fall of 1982 and the spring of 1983. A lidar method is suggested to minimize this error. 15 refs.
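
    A minimal Python sketch of a Langley extrapolation and of how an optical depth that drifts during the measurement series (as inside an inhomogeneous volcanic plume) biases the retrieved exo-atmospheric signal (all values are synthetic):

        import numpy as np

        rng = np.random.default_rng(4)
        tau, v0_true = 0.15, 1.0
        m = np.linspace(1.2, 5.0, 25)                       # air masses over a morning series
        v = v0_true * np.exp(-tau * m) * (1 + rng.normal(0, 0.002, m.size))

        # Langley regression: ln(V) vs air mass; the intercept at m = 0 gives V0.
        slope, intercept = np.polyfit(m, np.log(v), 1)
        print(f"retrieved optical depth {-slope:.3f}, V0 {np.exp(intercept):.4f}")

        # A spatially inhomogeneous plume can be mimicked by an optical depth that
        # drifts during the series, which biases the extrapolated V0:
        v_plume = v0_true * np.exp(-(tau + 0.02 * (m - m.min())) * m)
        print("V0 with a drifting plume:", round(np.exp(np.polyfit(m, np.log(v_plume), 1)[1]), 4))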

  19. Anomalous yield reduction in direct-drive DT implosions due to 3He addition

    SciTech Connect

    Herrmann, Hans W; Langenbrunner, James R; Mack, Joseph M; Cooley, James H; Wilson, Douglas C; Evans, Scott C; Sedillo, Tom J; Kyrala, George A; Caldwell, Stephen E; Young, Carlton A; Nobile, Arthur; Wermer, Joseph R; Paglieri, Stephen N; Mcevoy, Aaron M; Kim, Yong Ho; Batha, Steven H; Horsfield, Colin J; Drew, Dave; Garbett, Warren; Rubery, Michael; Glebov, Vladimir Yu; Roberts, Samuel; Frenje, Johan A

    2008-01-01

    Glass capsules were imploded in direct drive on the OMEGA laser [T. R. Boehly et al., Opt. Commun. 133, 495, 1997] to look for anomalous degradation in deuterium/tritium (DT) yield (i.e., beyond what is predicted) and changes in reaction history with 3He addition. Such anomalies have previously been reported for D/3He plasmas, but had not yet been investigated for DT/3He. Anomalies such as these provide fertile ground for furthering our physics understanding of ICF implosions and capsule performance. A relatively short laser pulse (600 ps) was used to provide some degree of temporal separation between shock and compression yield components for analysis. Anomalous degradation in the compression component of yield was observed, consistent with the 'factor of two' degradation previously reported by MIT at a 50% 3He atom fraction in D2 using plastic capsules [Rygg et al., Phys. Plasmas 13, 052702 (2006)]. However, clean calculations (i.e., no fuel-shell mixing) predict the shock component of yield quite well, contrary to the result reported by MIT, but consistent with LANL results in D2/3He [Wilson et al., J. Phys.: Conf. Series 112, 022015 (2008)]. X-ray imaging suggests less-than-predicted compression of capsules containing 3He. Leading candidate explanations are a poorly understood Equation-of-State (EOS) for gas mixtures, and unanticipated particle pressure variation with increasing 3He addition.

  20. Quantification of LiDAR measurement uncertainty through propagation of errors due to sensor sub-systems and terrain morphology

    NASA Astrophysics Data System (ADS)

    Goulden, T.; Hopkinson, C.

    2013-12-01

    The quantification of LiDAR sensor measurement uncertainty is important for evaluating the quality of derived DEM products, compiling risk assessment of management decisions based from LiDAR information, and enhancing LiDAR mission planning capabilities. Current quality assurance estimates of LiDAR measurement uncertainty are limited to post-survey empirical assessments or vendor estimates from commercial literature. Empirical evidence can provide valuable information for the performance of the sensor in validated areas; however, it cannot characterize the spatial distribution of measurement uncertainty throughout the extensive coverage of typical LiDAR surveys. Vendor advertised error estimates are often restricted to strict and optimal survey conditions, resulting in idealized values. Numerical modeling of individual pulse uncertainty provides an alternative method for estimating LiDAR measurement uncertainty. LiDAR measurement uncertainty is theoretically assumed to fall into three distinct categories, 1) sensor sub-system errors, 2) terrain influences, and 3) vegetative influences. This research details the procedures for numerical modeling of measurement uncertainty from the sensor sub-system (GPS, IMU, laser scanner, laser ranger) and terrain influences. Results show that errors tend to increase as the laser scan angle, altitude or laser beam incidence angle increase. An experimental survey over a flat and paved runway site, performed with an Optech ALTM 3100 sensor, showed an increase in modeled vertical errors of 5 cm, at a nadir scan orientation, to 8 cm at scan edges; for an aircraft altitude of 1200 m and half scan angle of 15°. In a survey with the same sensor, at a highly sloped glacial basin site absent of vegetation, modeled vertical errors reached over 2 m. Validation of error models within the glacial environment, over three separate flight lines, respectively showed 100%, 85%, and 75% of elevation residuals fell below error predictions. Future
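
    A much-reduced Python stand-in for this kind of error propagation, projecting assumed range and scan-angle uncertainties (plus a crude terrain-slope term) into the vertical coordinate; it is not the full sensor sub-system model and does not reproduce the reported magnitudes:

        import numpy as np

        def vertical_sigma(range_m, scan_deg, sigma_range=0.02, sigma_angle_deg=0.005,
                           slope_deg=0.0, footprint_m=0.25):
            """First-order propagation of range and scan-angle uncertainty into the
            vertical coordinate, plus a crude terrain-slope term (hypothetical values)."""
            th = np.radians(scan_deg)
            dz_range = np.cos(th) * sigma_range                             # range error projected onto z
            dz_angle = range_m * np.sin(th) * np.radians(sigma_angle_deg)   # pointing error projected onto z
            dz_slope = footprint_m * np.tan(np.radians(slope_deg))          # horizontal error leaking into z on slopes
            return np.sqrt(dz_range ** 2 + dz_angle ** 2 + dz_slope ** 2)

        for scan in (0.0, 15.0):
            print(f"scan {scan:4.1f} deg -> sigma_z ~ {vertical_sigma(1200.0, scan):.3f} m")
        print("steep terrain (30 deg slope):",
              round(float(vertical_sigma(1200.0, 15.0, slope_deg=30.0)), 3), "m")

    The qualitative behaviour, vertical uncertainty growing with scan angle, altitude and terrain slope, matches the trends described above.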

  1. Mechanism of wiggling enhancement due to HBr gas addition during amorphous carbon etching

    NASA Astrophysics Data System (ADS)

    Kofuji, Naoyuki; Ishimura, Hiroaki; Kobayashi, Hitoshi; Une, Satoshi

    2015-06-01

    The effect of gas chemistry during etching of an amorphous carbon layer (ACL) on wiggling has been investigated, focusing especially on the changes in residual stress. Although the HBr gas addition reduces critical dimension loss, it enhances the surface stress and therefore increases wiggling. Attenuated total reflectance Fourier transform infrared spectroscopy revealed that the increase in surface stress was caused by hydrogenation of the ACL surface with hydrogen radicals. Three-dimensional (3D) nonlinear finite element method analysis confirmed that the increase in surface stress is large enough to cause the wiggling. These results also suggest that etching with hydrogen compound gases using an ACL mask has high potential to cause the wiggling.

  2. EFFECT ON 105KW NORTH WALL DUE TO ADDITION OF FILTRATION SYSTEM

    SciTech Connect

    CHO CS

    2010-03-08

    CHPRC D&D Projects is adding three filtration systems on two 1-ft concrete pads adjacent to the north side of the existing KW Basin building. This analysis is prepared to provide a qualitative assessment based on a review of the design information available for the 105KW basin substructure. In the proposed heating, ventilation and air conditioning (HVAC) filtration pad designs, a 2-ft gap will be maintained between the pads and the north end of the existing 105KW Basin building. Filtration Skids No. 2 and No. 3 share one pad. It is conservative to evaluate the No. 2 and No. 3 skid pad for the wall assessment. Figure 1 shows the plan layout of the 105KW basin site and the location of the pads for the filtration system or HVAC skids. Figure 2 shows the cross-section elevation view of the pad. The concrete pad Drawing H-1-91482 directs the replacement of the existing 8-inch concrete pad with two new 1-ft thick pads. The existing 8-inch pad is separated from the 105KW basin superstructure by an expansion joint of only half an inch. The concrete pad Drawing H-1-91482 shows the gap between the new proposed pads and the north wall and any overflow pits and sumps is 2 ft. The following analysis demonstrates that the newly added filtration units and their pads do not exceed the structural capacity of the existing wall. The calculation shows that the total bending moment on the north wall due to the newly added filtration units and pads, including seismic load, is 82.636 ft-kip/ft and is within the capacity of the wall, which is 139.0 ft-kip/ft.

  3. Estimation of site occupancy error due to statistical noise for the ratio ALCHEMI method[Atom Location by Channeling Enhanced Microanalysis

    SciTech Connect

    Hao, Y.L.; Yang, R.; Cui, Y.Y.; Li, D.

    1999-11-19

    The ALCHEMI (acronym for atom location by channeling enhanced microanalysis) method has been widely used to determine crystallographic site distributions of substitutional species within a host crystal. However, the error of site occupancy cannot be easily determined for the ratio ALCHEMI method. The purpose of this paper is to present a detailed treatment of error due to statistical noise for the ratio ALCHEMI method, with specific reference to the site occupancy of alloying elements in TiAl. The formulae for calculating the site occupancy of alloying elements in an ordered phase derived by Spence and Taftoe and by Taftoe and Spence are first expressed in different forms. Then the path of error propagation in the calculation is described and the maximum error of site occupancies caused by statistical noise is estimated. Finally, the authors present experimental measurements of site occupancy made with representative elements that were known to occupy exclusively either the Ti or the Al sublattice sites in TiAl in order to test the reliability of the error-analysis method they described. The error due to delocalization interaction for the planar ALCHEMI method will also be discussed for the case of TiAl.

  4. Aerosol size distribution retrievals from sunphotometer measurements: Theoretical evaluation of errors due to circumsolar and related effects

    NASA Astrophysics Data System (ADS)

    Kocifaj, Miroslav; Gueymard, Christian A.

    2012-05-01

    The uncertainty in particle size distribution retrievals is analyzed theoretically and numerically when using aerosol optical depth (AOD) data affected by three distinct error-inducing effects. Specifically, circumsolar radiation (CS), optical mass (OM), and solar disk's brightness distribution (BD) effects are taken into consideration here. Because of these effects, the theoretical AOD is affected by an error, ∂AOD, that consequently translates into errors in the determined (apparent) particle size distribution (PSD). Through comparison of the apparent and the true size distributions, the relative error, ∂PSD, is calculated here as a function of particle radius for various instrument's fields of view (aperture) and solar zenith angles. It is shown that, in general, the CS effect overestimates the number of submicron-sized particles, and that the significance of this effect increases with the aperture. In case of maritime aerosols, the CS effect may also lead to an underestimation of the number concentration of large micron-sized particles. The BD and OM effects become important, and possibly predominant, when AOD is low. Assuming large particles dominate in the atmosphere, the BD effect tends to underestimate the concentration of the smallest aerosol particles. In general, the PSD(apparent)/PSD(true) ratio is affected by the CS effect equally over all particle sizes. The relative errors in PSD are typically smaller than 40-60%, but can exceptionally exceed 100%, which means that the apparent PSD may then be twice as large as the true PSD. This extreme situation typically occurs with maritime aerosols under elevated humidity conditions. Recent instruments tend to be designed with smaller apertures than ever before, which lower the CS-induced errors to an acceptable level in most cases.

  5. There Goes the Neighborhood Effect: Bias Due to Non-Differential Measurement Error in the Construction of Neighborhood Contextual Measures

    PubMed Central

    Mooney, Stephen J.; Richards, Catherine A.; Rundle, Andrew G.

    2015-01-01

    BACKGROUND Multilevel studies of neighborhood impacts on health frequently aggregate individual-level data to create contextual measures. For example, percent of residents living in poverty and median household income are both aggregations of Census data on individual-level household income. Because household income is sensitive and complex, it is likely to be reported with error. METHODS To assess the impact of such error on effect estimates for neighborhood contextual factors, we conducted simulation studies to relate neighborhood measures derived from Census data to individual body mass index, varying the extent of non-differential misclassification/measurement error in the underlying Census data. We then explored how the form of the variables chosen for the neighborhood measure and outcome, the modeling technique used, the size and number of neighborhoods, and the categorization of neighborhoods relate to the magnitude of bias. RESULTS For neighborhood contextual variables expressed as percentages (e.g. % of residents living in poverty), non-differential misclassification in the underlying individual-level Census data always biases the parameter estimate for the neighborhood variable away from the null. However, estimates of differences between quantiles of neighborhoods using such contextual variables are unbiased. Aggregation of the same underlying individual-level Census income data into a continuous variable, such as median household income, also introduces bias into the regression parameter. Such bias is non-negligible if the sampled groups are small. CONCLUSIONS Decisions regarding the construction and analysis of neighborhood contextual measures substantially alter the impact on study validity of measurement error in the data used to construct the contextual measure. PMID:24815303
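
    A stripped-down simulation of the percentage-based case can be sketched as follows; the neighborhood sizes, misclassification rate, outcome model, and effect size are arbitrary illustrations rather than the values used in the study, but the sketch reproduces the qualitative result that non-differential misclassification inflates the slope for a percentage-type contextual measure.

```python
import numpy as np

# Hedged sketch: non-differential misclassification of an individual-level binary trait
# (e.g. living in poverty) before aggregation to a neighborhood percentage, and its effect
# on the regression slope for the aggregated measure. All parameters are illustrative.
rng = np.random.default_rng(0)
n_hoods, n_per_hood, flip = 200, 500, 0.10          # flip = misclassification probability
true_prev = rng.uniform(0.05, 0.45, n_hoods)        # underlying neighborhood poverty prevalence

slope_true, slopes_obs = 2.0, []
for _ in range(200):
    indiv = rng.random((n_hoods, n_per_hood)) < true_prev[:, None]    # true individual status
    noisy = np.where(rng.random(indiv.shape) < flip, ~indiv, indiv)   # flip 10% of responses
    frac_true = indiv.mean(axis=1)
    frac_obs = noisy.mean(axis=1)
    outcome = 25 + slope_true * frac_true + rng.normal(0, 1, n_hoods) # e.g. mean BMI per hood
    slopes_obs.append(np.polyfit(frac_obs, outcome, 1)[0])

print(f"true slope = {slope_true:.2f}, mean estimated slope = {np.mean(slopes_obs):.2f}")
# The estimated slope exceeds the true slope: bias away from the null, as reported above.
```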

  6. 78 FR 14834 - Major Portion Prices and Due Date for Additional Royalty Payments on Indian Gas Production in...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-07

    ... Office of Natural Resources Revenue Major Portion Prices and Due Date for Additional Royalty Payments on... Secretary, Office of Natural Resources Revenue (ONRR), Interior. ACTION: Notice. SUMMARY: Final regulations.... Gregory J. Gould, Director, Office of Natural Resources Revenue. BILLING CODE 4310-T2-P...

  7. 76 FR 13431 - Major Portion Prices and Due Date for Additional Royalty Payments on Indian Gas Production in...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-11

    ... Indian Leases'' (64 FR 43506). The gas valuation regulations apply to all gas production from Indian... 30 CFR, chapter XII (75 FR 61051), effective October 1, 2010.) If additional royalties are due based... Indian Gas Production in Designated Areas Not Associated With an Index Zone AGENCY: Office of...

  8. Localization Errors in MR Spectroscopic Imaging due to the Drift of the Main Magnetic Field and their Correction

    PubMed Central

    Tal, Assaf; Gonen, Oded

    2012-01-01

    PURPOSE To analyze the effect of B0 field drift on multivoxel MR spectroscopic imaging and to propose an approach for its correction. THEORY AND METHODS It is shown, both theoretically and in a phantom, that for ~30-minute acquisitions a linear B0 drift (~0.1 ppm/hour) will cause localization errors that can reach several voxels (centimeters) in the slower varying phase encoding directions. An efficient and unbiased estimator is proposed for tracking the drift by interleaving short (~T2*), non-localized acquisitions on the non-suppressed water each TR, as shown in 10 volunteers at 1.5 and 3 T. RESULTS The drift is shown to be predominantly linear in both the phantom and the volunteers at both fields. The localization errors are observed and quantified in the phantom. The unbiased estimator is shown to reliably track the instantaneous frequency in vivo despite only using a small portion of the FID. CONCLUSION Contrary to single-voxel MR spectroscopy, where it leads to line broadening, field drift can lead to localization errors in the longer chemical shift imaging experiments. Fortunately, this drift can be obtained at a negligible cost to sequence timing, and corrected for in post-processing. PMID:23165750
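
    The drift-tracking idea can be sketched in a few lines: estimate an instantaneous frequency from each short water FID (here from the mean phase increment between samples) and fit a line to frequency versus time. The drift rate, TR, FID length, and noise level below are illustrative, and this is not the authors' estimator.

```python
import numpy as np

# Hedged sketch: tracking a roughly linear B0 drift from short, non-localized water FIDs
# acquired each TR, then fitting frequency vs. time. All parameters are illustrative.
rng = np.random.default_rng(1)
drift_hz_per_s = 12.8 / 3600                   # ~0.1 ppm/hour at ~3 T, expressed in Hz/s
dwell, n_pts, tr, n_tr = 0.5e-3, 64, 2.0, 900  # 64-point FID each TR, 2 s TR, ~30-min scan

t_fid, freqs = np.arange(n_pts) * dwell, []
for k in range(n_tr):
    f_inst = drift_hz_per_s * k * tr                                   # off-resonance this TR
    fid = np.exp(2j * np.pi * f_inst * t_fid) * np.exp(-t_fid / 0.05)  # decaying water signal
    fid += 0.01 * (rng.normal(size=n_pts) + 1j * rng.normal(size=n_pts))
    dphi = np.angle(np.sum(fid[1:] * np.conj(fid[:-1])))               # mean phase increment
    freqs.append(dphi / (2 * np.pi * dwell))                           # instantaneous frequency (Hz)

slope, _ = np.polyfit(np.arange(n_tr) * tr, freqs, 1)
print(f"estimated drift = {slope * 3600:.1f} Hz/hour (simulated {drift_hz_per_s * 3600:.1f} Hz/hour)")
```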

  9. Error in Radar-Derived Soil Moisture due to Roughness Parameterization: An Analysis Based on Synthetical Surface Profiles

    PubMed Central

    Lievens, Hans; Vernieuwe, Hilde; Álvarez-Mozos, Jesús; De Baets, Bernard; Verhoest, Niko E.C.

    2009-01-01

    In the past decades, many studies on soil moisture retrieval from SAR demonstrated a poor correlation between the top layer soil moisture content and observed backscatter coefficients, which mainly has been attributed to difficulties involved in the parameterization of surface roughness. The present paper describes a theoretical study, performed on synthetical surface profiles, which investigates how errors on roughness parameters are introduced by standard measurement techniques, and how they will propagate through the commonly used Integral Equation Model (IEM) into a corresponding soil moisture retrieval error for some of the currently most used SAR configurations. Key aspects influencing the error on the roughness parameterization and consequently on soil moisture retrieval are: the length of the surface profile, the number of profile measurements, the horizontal and vertical accuracy of profile measurements and the removal of trends along profiles. Moreover, it is found that soil moisture retrieval with C-band configuration generally is less sensitive to inaccuracies in roughness parameterization than retrieval with L-band configuration. PMID:22399956

  10. Optics for five-dimensional measurement for correction of vertical displacement error due to attitude of floating body in superconducting magnetic levitation system

    NASA Astrophysics Data System (ADS)

    Shiota, Fuyuhiko; Morokuma, Tadashi

    2006-09-01

    An improved optical system for five-dimensional measurement has been developed for the correction of vertical displacement error due to the attitude change of a superconducting floating body that shows five degrees of freedom besides a vertical displacement of 10 mm. The available solid angle for the optical measurement is extremely limited because of the cryogenic laser interferometer sharing the optical window of a vacuum chamber in addition to the basic structure of the cryogenic vessel for liquid helium. The aim of the design was to develop a more practical as well as better optical system compared with the prototype system. Various artifices were built into this optical system and the result shows a satisfactory performance and easy operation overcoming the extremely severe spatial difficulty in the levitation system. Although the system described here is specifically designed for our magnetic levitation system, the concept and each artifice will be applicable to the optical measurement system for an object in a high-vacuum chamber and/or cryogenic vessel where the available solid angle for an optical path is extremely limited.

  11. Predicting wafer-level IP error due to particle-induced EUVL reticle distortion during exposure chucking

    NASA Astrophysics Data System (ADS)

    Ramaswamy, Vasu; Mikkelson, Andrew; Engelstad, Roxann; Lovell, Edward

    2005-11-01

    The mechanical distortion of an EUVL mask from mounting in an exposure tool can be a significant source of wafer-level image placement error. In particular, the presence of debris lodged between the reticle and chuck can cause the mask to experience out-of-plane distortion and in-plane distortion. A thorough understanding of the response of the reticle/particle/chuck system during electrostatic chucking is necessary to predict the resulting effects of such particle contamination on image placement accuracy. In this research, finite element modeling is employed to simulate this response for typical clamping conditions.

  12. Estimation of Cyclic Error Due to Scattering in the Internal OPD Metrology of the Space Interferometry Mission

    NASA Technical Reports Server (NTRS)

    Tang, Hong; Zhao, Feng

    2005-01-01

    A common-path laser heterodyne interferometer capable of measuring the internal optical path difference (OPD) with accuracy of the order of 10 pm was demonstrated at JPL. To achieve this accuracy, the relative power received by the detector that is contributed by the scattering of light at the optical surfaces should be less than -97 dB. A method has been developed to estimate the cyclic error caused by the scattering of the optical surfaces. The result of the analysis is presented.

  13. Managing Uncertainty Due to a Fundamental Error Source Arising from Scatterer Distribution Complexity in Radar Remote Sensing of Precipitation

    NASA Technical Reports Server (NTRS)

    Smith, Eric A.; Kuo, Kwo-Sen; Meneghini, Robert; Mugnai, Alberto

    2007-01-01

    The assumption that cloud and rain drops are spatially distributed according to a Poisson distribution within a scattering volume probed by a radar being used to estimate precipitation has represented bedrock theory in establishing 'rules of the game' for pulse averaging--the process needed to beat down noise to an acceptable level in the measurement of radar reflectivity factor. Relatively recent observations of 'realistic' spatial distributions of hydrometeor scatterers in a cloudy atmosphere motivate a renewed examination of the consequences of using a too simplified assumption underlying volume scattering--particularly in regards to the standard pulse averaging rule. Our investigation addresses two extremes, simple to complex, in the degree of complexity allowed for the underlying scatterer distribution. It is demonstrated that as the spatial distribution ranges from Poisson (a narrow distribution) to multi-fractal (much broader distribution), uncertainty in a measurement increases if the rule for pulse averaging goes unchanged from its Poisson distribution reference count. [A bounded cascade is used for the multi-fractal distribution, a regularly observed distribution vis-a-vis cloud liquid water content.] The resultant measurement uncertainty leads to a fundamental source of error in the estimation of rain rate from radar measurements, one that has been disregarded since the early 1950s when radar sets first began to be used for rainfall measuring. It is shown how this source of error can be 'managed'--under the assumption that a number of data analysis experiments would be carried out, experiments involving pulse-by-pulse measurements obtained from a radar set modified to output individual pulses of reflectivity factor. For practical applications, a new parameter called normalized k-sample intensity invariance is developed to enable defining the required pulse average count according to a preferred degree of uncertainty.
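
    The core point - that the same pulse-averaging rule yields a larger residual uncertainty when scatterer counts are drawn from a broader-than-Poisson distribution - can be illustrated with a crude Monte Carlo. The sketch below uses a lognormal distribution merely as a stand-in for the bounded-cascade/multifractal case and ignores spatial correlation, so it is indicative only.

```python
import numpy as np

# Hedged sketch: relative uncertainty of a pulse-averaged reflectivity estimate when per-pulse
# scatterer counts are Poisson vs. a broader (lognormal stand-in) distribution. Illustrative only.
rng = np.random.default_rng(2)
n_pulses, n_trials, mean_count = 32, 20000, 100

poisson = rng.poisson(mean_count, size=(n_trials, n_pulses))
sigma_ln = 0.8                                          # much broader variability than Poisson
lognorm = rng.lognormal(np.log(mean_count) - sigma_ln**2 / 2, sigma_ln,
                        size=(n_trials, n_pulses))

for name, counts in [("Poisson", poisson), ("broad (lognormal stand-in)", lognorm)]:
    avg = counts.mean(axis=1)                           # the 32-pulse average
    print(f"{name:27s}: relative uncertainty of the average = {avg.std() / avg.mean():.3f}")
```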

  14. Performance analysis for time-frequency MUSIC algorithm in presence of both additive noise and array calibration errors

    NASA Astrophysics Data System (ADS)

    Khodja, Mohamed; Belouchrani, Adel; Abed-Meraim, Karim

    2012-12-01

    This article deals with the application of Spatial Time-Frequency Distribution (STFD) to the direction finding problem using the Multiple Signal Classification (MUSIC) algorithm. A comparative performance analysis is performed for the method under consideration with respect to that using the data covariance matrix when the received array signals are subject to calibration errors in a non-stationary environment. A unified analytical expression of the Direction Of Arrival (DOA) estimation error is derived for both methods. Numerical results show the effect of the parameters intervening in the derived expression on the algorithm performance. It is particularly observed that for low Signal to Noise Ratio (SNR) and high Signal to sensor Perturbation Ratio (SPR) the STFD method gives better performance, while for high SNR and for the same SPR both methods give similar performance.

  15. Extended FDD-WT method based on correcting the errors due to non-synchronous sensing of sensors

    NASA Astrophysics Data System (ADS)

    Tarinejad, Reza; Damadipour, Majid

    2016-05-01

    In this research, a combinational non-parametric method called frequency domain decomposition-wavelet transform (FDD-WT), recently presented by the authors, is extended to correct the errors resulting from asynchronous sensing of sensors, in order to extend the application of the algorithm to different kinds of structures, especially large structures. The analysis process is therefore based on time-frequency domain decomposition and is performed with emphasis on correcting time delays between sensors. Time delay estimation (TDE) methods were investigated for their efficiency and accuracy on noisy environmental records, and the Phase Transform - β (PHAT-β) technique was selected as an appropriate method to modify the operation of the traditional FDD-WT in order to achieve exact results. In this paper, a theoretical example (a 3DOF system) is provided to indicate the effects of non-synchronous sensing of the sensors on the modal parameters; moreover, the Pacoima dam, subjected to the 13 January 2001 earthquake excitation, was selected as a case study. The modal parameters of the dam obtained from the extended FDD-WT method were compared with the output of the classical signal processing method referred to as the 4-Spectral method, as well as with other literature on the dynamic characteristics of Pacoima dam. The comparison indicates that the values are correct and reliable.
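
    A minimal sketch of a PHAT-β weighted generalized cross-correlation delay estimator, in the spirit of the technique named above, is given below; the weighting exponent, sampling rate, and test signals are illustrative, and this is not the authors' implementation.

```python
import numpy as np

# Hedged sketch: PHAT-beta weighted generalized cross-correlation for estimating the time
# delay between two sensor records (beta = 1 recovers the classical PHAT weighting).
def gcc_phat_beta(x, y, fs, beta=1.0):
    n = len(x) + len(y)
    X, Y = np.fft.rfft(x, n=n), np.fft.rfft(y, n=n)
    cross = np.conj(X) * Y
    weight = np.maximum(np.abs(cross) ** beta, 1e-12)          # avoid division by zero
    cc = np.fft.irfft(cross / weight, n=n)
    max_lag = n // 2
    cc = np.concatenate((cc[-max_lag:], cc[:max_lag + 1]))     # reorder to lags -max..+max
    return (np.argmax(np.abs(cc)) - max_lag) / fs              # delay of y relative to x, in s

# Illustrative usage: y is x delayed by 25 samples plus noise.
rng = np.random.default_rng(3)
fs, delay = 1000.0, 25
x = rng.normal(size=4096)
y = np.roll(x, delay) + 0.1 * rng.normal(size=4096)
print(f"estimated delay = {gcc_phat_beta(x, y, fs) * 1e3:.1f} ms (true {delay / fs * 1e3:.1f} ms)")
```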

  16. Edge profile analysis of Joint European Torus (JET) Thomson scattering data: Quantifying the systematic error due to edge localised mode synchronisation.

    PubMed

    Leyland, M J; Beurskens, M N A; Flanagan, J C; Frassinetti, L; Gibson, K J; Kempenaars, M; Maslov, M; Scannell, R

    2016-01-01

    The Joint European Torus (JET) high resolution Thomson scattering (HRTS) system measures radial electron temperature and density profiles. One of the key capabilities of this diagnostic is measuring the steep pressure gradient, termed the pedestal, at the edge of JET plasmas. The pedestal is susceptible to limiting instabilities, such as Edge Localised Modes (ELMs), characterised by a periodic collapse of the steep gradient region. A common method to extract the pedestal width, gradient, and height, used on numerous machines, is by performing a modified hyperbolic tangent (mtanh) fit to overlaid profiles selected from the same region of the ELM cycle. This process of overlaying profiles, termed ELM synchronisation, maximises the number of data points defining the pedestal region for a given phase of the ELM cycle. When fitting to HRTS profiles, it is necessary to incorporate the diagnostic radial instrument function, particularly important when considering the pedestal width. A deconvolved fit is determined by a forward convolution method requiring knowledge of only the instrument function and profiles. The systematic error due to the deconvolution technique incorporated into the JET pedestal fitting tool has been documented by Frassinetti et al. [Rev. Sci. Instrum. 83, 013506 (2012)]. This paper seeks to understand and quantify the systematic error introduced to the pedestal width due to ELM synchronisation. Synthetic profiles, generated with error bars and point-to-point variation characteristic of real HRTS profiles, are used to evaluate the deviation from the underlying pedestal width. We find on JET that the ELM synchronisation systematic error is negligible in comparison to the statistical error when assuming ten overlaid profiles (typical for a pre-ELM fit to HRTS profiles). This confirms that fitting a mtanh to ELM synchronised profiles is a robust and practical technique for extracting the pedestal structure. PMID:26827321
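
    One common parameterization of the modified hyperbolic tangent pedestal shape, and a least-squares fit to a synthetic profile, can be sketched as follows. The particular mtanh form, the synthetic data, and the initial guesses are illustrative; the sketch omits the instrument-function deconvolution and is not the JET pedestal fitting tool.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hedged sketch: fitting a modified hyperbolic tangent (mtanh) pedestal shape to a synthetic
# edge profile. All numbers are illustrative and no instrument-function convolution is applied.
def mtanh(x, s):
    x = np.clip(x, -40.0, 40.0)                       # keep exp() well behaved during fitting
    return np.tanh(x) + s * x / (1.0 + np.exp(-2.0 * x))

def pedestal(r, height, offset, r0, width, s):
    return offset + 0.5 * height * (mtanh((r0 - r) / (0.5 * width), s) + 1.0)

rng = np.random.default_rng(4)
r = np.linspace(3.70, 3.90, 80)                       # major radius (m), illustrative
data = pedestal(r, 4.0, 0.2, 3.82, 0.03, 0.05) + rng.normal(0, 0.1, r.size)

popt, _ = curve_fit(pedestal, r, data, p0=[3.0, 0.1, 3.80, 0.05, 0.0])
print("fitted height, offset, position, width, core slope:", np.round(popt, 3))
```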

  17. Functional mapping of left parietal areas involved in simple addition and multiplication. A single-case study of qualitative analysis of errors.

    PubMed

    Della Puppa, Alessandro; De Pellegrin, Serena; Salillas, Elena; Grego, Alberto; Lazzarini, Anna; Vallesi, Antonino; Saladini, Marina; Semenza, Carlo

    2015-09-01

    All electrostimulation studies on arithmetic have so far solely reported general errors. Nonetheless, a classification of the errors during stimulation can inform us about underlying arithmetic processes. The present electrostimulation study was performed in a case of left parietal glioma. The patient's erroneous responses suggested that calculation was mainly applied for addition and a combination of retrieval and calculation was mainly applied for multiplication. The findings of the present single-case study encourage follow-up with further data collection using the same paradigm. PMID:24646158

  18. Statistical model of the range-dependent error in radar-rainfall estimates due to the vertical profile of reflectivity

    NASA Astrophysics Data System (ADS)

    Krajewski, Witold F.; Vignal, Bertrand; Seo, Bong-Chul; Villarini, Gabriele

    2011-05-01

    The authors developed an approach for deriving a statistical model of range-dependent error (RDE) in radar-rainfall estimates by parameterizing the structure of the non-uniform vertical profile of radar reflectivity (VPR). The proposed parameterization of the mean VPR and its expected variations are characterized by several climatological parameters that describe dominant atmospheric conditions related to vertical reflectivity variation. We have used four years of radar volume scan data from the Tulsa weather radar WSR-88D (Oklahoma) to illustrate this approach and have estimated the model parameters by minimizing the sum of the squared differences between the modeled and observed VPR influences that were computed using radar data. We evaluated the mean and standard deviation of the modeled RDE against rain gauge data from the Oklahoma Mesonet network. No rain gauge data were used in the model development. The authors used the three lowest antenna elevation angles to demonstrate the model performance for cold (November-April) and warm (May-October) seasons. The RDE derived from the parameterized models shows very good agreement with the observed differences between radar and rain gauge estimates of rainfall. For the third elevation angle and cold season, there are 82% and 42% improvements for the RDE and its standard deviation with respect to the no-VPR case. The results of this study indicate that VPR is a key factor in the characterization of the radar range-dependent bias, and the proposed models can be used to represent the radar RDE in the absence of rain gauge data.

  19. Dosimetric impact of geometric errors due to respiratory motion prediction on dynamic multileaf collimator-based four-dimensional radiation delivery

    SciTech Connect

    Vedam, S.; Docef, A.; Fix, M.; Murphy, M.; Keall, P.

    2005-06-15

    The synchronization of dynamic multileaf collimator (DMLC) response with respiratory motion is critical to ensure the accuracy of DMLC-based four dimensional (4D) radiation delivery. In practice, however, a finite time delay (response time) between the acquisition of tumor position and multileaf collimator response necessitates predictive models of respiratory tumor motion to synchronize radiation delivery. Predicting a complex process such as respiratory motion introduces geometric errors, which have been reported in several publications. However, the dosimetric effect of such errors on 4D radiation delivery has not yet been investigated. Thus, our aim in this work was to quantify the dosimetric effects of geometric error due to prediction under several different conditions. Conformal and intensity modulated radiation therapy (IMRT) plans for a lung patient were generated for anterior-posterior/posterior-anterior (AP/PA) beam arrangements at 6 and 18 MV energies to provide planned dose distributions. Respiratory motion data was obtained from 60 diaphragm-motion fluoroscopy recordings from five patients. A linear adaptive filter was employed to predict the tumor position. The geometric error of prediction was defined as the absolute difference between predicted and actual positions at each diaphragm position. Distributions of geometric error of prediction were obtained for all of the respiratory motion data. Planned dose distributions were then convolved with distributions for the geometric error of prediction to obtain convolved dose distributions. The dosimetric effect of such geometric errors was determined as a function of several variables: response time (0-0.6 s), beam energy (6/18 MV), treatment delivery (3D/4D), treatment type (conformal/IMRT), beam direction (AP/PA), and breathing training type (free breathing/audio instruction/visual feedback). Dose difference and distance-to-agreement analysis was employed to quantify results. Based on our data, the
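
    The convolution step described above can be illustrated in one dimension: a planned dose profile is blurred by the probability distribution of the geometric prediction error. The profile, grid, and Gaussian error spread below are arbitrary illustrations, not the clinical plans or the measured prediction-error distributions used in the study.

```python
import numpy as np

# Hedged sketch: convolving a planned 1D dose profile with a distribution of geometric
# prediction errors to obtain a blurred ("delivered") profile. All numbers are illustrative.
dx = 0.1                                              # cm per grid point
x = np.arange(-5.0, 5.0 + dx, dx)
planned = np.where(np.abs(x) <= 2.0, 1.0, 0.0)        # idealized flat field, 4 cm wide

sigma = 0.3                                           # cm, spread of the prediction error
kernel = np.exp(-0.5 * (x / sigma) ** 2)
kernel /= kernel.sum()                                # normalize so integral dose is conserved

delivered = np.convolve(planned, kernel, mode="same")
edge = np.argmin(np.abs(x - 2.0))
print(f"relative dose at the field edge: planned {planned[edge]:.2f}, blurred {delivered[edge]:.2f}")
```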

  20. Meta-analysis of gene–environment-wide association scans accounting for education level identifies additional loci for refractive error

    PubMed Central

    Fan, Qiao; Verhoeven, Virginie J. M.; Wojciechowski, Robert; Barathi, Veluchamy A.; Hysi, Pirro G.; Guggenheim, Jeremy A.; Höhn, René; Vitart, Veronique; Khawaja, Anthony P.; Yamashiro, Kenji; Hosseini, S Mohsen; Lehtimäki, Terho; Lu, Yi; Haller, Toomas; Xie, Jing; Delcourt, Cécile; Pirastu, Mario; Wedenoja, Juho; Gharahkhani, Puya; Venturini, Cristina; Miyake, Masahiro; Hewitt, Alex W.; Guo, Xiaobo; Mazur, Johanna; Huffman, Jenifer E.; Williams, Katie M.; Polasek, Ozren; Campbell, Harry; Rudan, Igor; Vatavuk, Zoran; Wilson, James F.; Joshi, Peter K.; McMahon, George; St Pourcain, Beate; Evans, David M.; Simpson, Claire L.; Schwantes-An, Tae-Hwi; Igo, Robert P.; Mirshahi, Alireza; Cougnard-Gregoire, Audrey; Bellenguez, Céline; Blettner, Maria; Raitakari, Olli; Kähönen, Mika; Seppala, Ilkka; Zeller, Tanja; Meitinger, Thomas; Ried, Janina S.; Gieger, Christian; Portas, Laura; van Leeuwen, Elisabeth M.; Amin, Najaf; Uitterlinden, André G.; Rivadeneira, Fernando; Hofman, Albert; Vingerling, Johannes R.; Wang, Ya Xing; Wang, Xu; Tai-Hui Boh, Eileen; Ikram, M. Kamran; Sabanayagam, Charumathi; Gupta, Preeti; Tan, Vincent; Zhou, Lei; Ho, Candice E. H.; Lim, Wan'e; Beuerman, Roger W.; Siantar, Rosalynn; Tai, E-Shyong; Vithana, Eranga; Mihailov, Evelin; Khor, Chiea-Chuen; Hayward, Caroline; Luben, Robert N.; Foster, Paul J.; Klein, Barbara E. K.; Klein, Ronald; Wong, Hoi-Suen; Mitchell, Paul; Metspalu, Andres; Aung, Tin; Young, Terri L.; He, Mingguang; Pärssinen, Olavi; van Duijn, Cornelia M.; Jin Wang, Jie; Williams, Cathy; Jonas, Jost B.; Teo, Yik-Ying; Mackey, David A.; Oexle, Konrad; Yoshimura, Nagahisa; Paterson, Andrew D.; Pfeiffer, Norbert; Wong, Tien-Yin; Baird, Paul N.; Stambolian, Dwight; Wilson, Joan E. Bailey; Cheng, Ching-Yu; Hammond, Christopher J.; Klaver, Caroline C. W.; Saw, Seang-Mei; Rahi, Jugnoo S.; Korobelnik, Jean-François; Kemp, John P.; Timpson, Nicholas J.; Smith, George Davey; Craig, Jamie E.; Burdon, Kathryn P.; Fogarty, Rhys D.; Iyengar, Sudha K.; Chew, Emily; Janmahasatian, Sarayut; Martin, Nicholas G.; MacGregor, Stuart; Xu, Liang; Schache, Maria; Nangia, Vinay; Panda-Jonas, Songhomitra; Wright, Alan F.; Fondran, Jeremy R.; Lass, Jonathan H.; Feng, Sheng; Zhao, Jing Hua; Khaw, Kay-Tee; Wareham, Nick J.; Rantanen, Taina; Kaprio, Jaakko; Pang, Chi Pui; Chen, Li Jia; Tam, Pancy O.; Jhanji, Vishal; Young, Alvin L.; Döring, Angela; Raffel, Leslie J.; Cotch, Mary-Frances; Li, Xiaohui; Yip, Shea Ping; Yap, Maurice K.H.; Biino, Ginevra; Vaccargiu, Simona; Fossarello, Maurizio; Fleck, Brian; Yazar, Seyhan; Tideman, Jan Willem L.; Tedja, Milly; Deangelis, Margaret M.; Morrison, Margaux; Farrer, Lindsay; Zhou, Xiangtian; Chen, Wei; Mizuki, Nobuhisa; Meguro, Akira; Mäkelä, Kari Matti

    2016-01-01

    Myopia is the most common human eye disorder and it results from complex genetic and environmental causes. The rapidly increasing prevalence of myopia poses a major public health challenge. Here, the CREAM consortium performs a joint meta-analysis to test single-nucleotide polymorphism (SNP) main effects and SNP × education interaction effects on refractive error in 40,036 adults from 25 studies of European ancestry and 10,315 adults from 9 studies of Asian ancestry. In European ancestry individuals, we identify six novel loci (FAM150B-ACP1, LINC00340, FBN1, DIS3L-MAP2K1, ARID2-SNAT1 and SLC14A2) associated with refractive error. In Asian populations, three genome-wide significant loci AREG, GABRR1 and PDE10A also exhibit strong interactions with education (P<8.5 × 10(-5)), whereas the interactions are less evident in Europeans. The discovery of these loci represents an important advance in understanding how gene and environment interactions contribute to the heterogeneity of myopia. PMID:27020472

  1. Meta-analysis of gene-environment-wide association scans accounting for education level identifies additional loci for refractive error.

    PubMed

    Fan, Qiao; Verhoeven, Virginie J M; Wojciechowski, Robert; Barathi, Veluchamy A; Hysi, Pirro G; Guggenheim, Jeremy A; Höhn, René; Vitart, Veronique; Khawaja, Anthony P; Yamashiro, Kenji; Hosseini, S Mohsen; Lehtimäki, Terho; Lu, Yi; Haller, Toomas; Xie, Jing; Delcourt, Cécile; Pirastu, Mario; Wedenoja, Juho; Gharahkhani, Puya; Venturini, Cristina; Miyake, Masahiro; Hewitt, Alex W; Guo, Xiaobo; Mazur, Johanna; Huffman, Jenifer E; Williams, Katie M; Polasek, Ozren; Campbell, Harry; Rudan, Igor; Vatavuk, Zoran; Wilson, James F; Joshi, Peter K; McMahon, George; St Pourcain, Beate; Evans, David M; Simpson, Claire L; Schwantes-An, Tae-Hwi; Igo, Robert P; Mirshahi, Alireza; Cougnard-Gregoire, Audrey; Bellenguez, Céline; Blettner, Maria; Raitakari, Olli; Kähönen, Mika; Seppala, Ilkka; Zeller, Tanja; Meitinger, Thomas; Ried, Janina S; Gieger, Christian; Portas, Laura; van Leeuwen, Elisabeth M; Amin, Najaf; Uitterlinden, André G; Rivadeneira, Fernando; Hofman, Albert; Vingerling, Johannes R; Wang, Ya Xing; Wang, Xu; Tai-Hui Boh, Eileen; Ikram, M Kamran; Sabanayagam, Charumathi; Gupta, Preeti; Tan, Vincent; Zhou, Lei; Ho, Candice E H; Lim, Wan'e; Beuerman, Roger W; Siantar, Rosalynn; Tai, E-Shyong; Vithana, Eranga; Mihailov, Evelin; Khor, Chiea-Chuen; Hayward, Caroline; Luben, Robert N; Foster, Paul J; Klein, Barbara E K; Klein, Ronald; Wong, Hoi-Suen; Mitchell, Paul; Metspalu, Andres; Aung, Tin; Young, Terri L; He, Mingguang; Pärssinen, Olavi; van Duijn, Cornelia M; Jin Wang, Jie; Williams, Cathy; Jonas, Jost B; Teo, Yik-Ying; Mackey, David A; Oexle, Konrad; Yoshimura, Nagahisa; Paterson, Andrew D; Pfeiffer, Norbert; Wong, Tien-Yin; Baird, Paul N; Stambolian, Dwight; Wilson, Joan E Bailey; Cheng, Ching-Yu; Hammond, Christopher J; Klaver, Caroline C W; Saw, Seang-Mei; Rahi, Jugnoo S; Korobelnik, Jean-François; Kemp, John P; Timpson, Nicholas J; Smith, George Davey; Craig, Jamie E; Burdon, Kathryn P; Fogarty, Rhys D; Iyengar, Sudha K; Chew, Emily; Janmahasatian, Sarayut; Martin, Nicholas G; MacGregor, Stuart; Xu, Liang; Schache, Maria; Nangia, Vinay; Panda-Jonas, Songhomitra; Wright, Alan F; Fondran, Jeremy R; Lass, Jonathan H; Feng, Sheng; Zhao, Jing Hua; Khaw, Kay-Tee; Wareham, Nick J; Rantanen, Taina; Kaprio, Jaakko; Pang, Chi Pui; Chen, Li Jia; Tam, Pancy O; Jhanji, Vishal; Young, Alvin L; Döring, Angela; Raffel, Leslie J; Cotch, Mary-Frances; Li, Xiaohui; Yip, Shea Ping; Yap, Maurice K H; Biino, Ginevra; Vaccargiu, Simona; Fossarello, Maurizio; Fleck, Brian; Yazar, Seyhan; Tideman, Jan Willem L; Tedja, Milly; Deangelis, Margaret M; Morrison, Margaux; Farrer, Lindsay; Zhou, Xiangtian; Chen, Wei; Mizuki, Nobuhisa; Meguro, Akira; Mäkelä, Kari Matti

    2016-01-01

    Myopia is the most common human eye disorder and it results from complex genetic and environmental causes. The rapidly increasing prevalence of myopia poses a major public health challenge. Here, the CREAM consortium performs a joint meta-analysis to test single-nucleotide polymorphism (SNP) main effects and SNP × education interaction effects on refractive error in 40,036 adults from 25 studies of European ancestry and 10,315 adults from 9 studies of Asian ancestry. In European ancestry individuals, we identify six novel loci (FAM150B-ACP1, LINC00340, FBN1, DIS3L-MAP2K1, ARID2-SNAT1 and SLC14A2) associated with refractive error. In Asian populations, three genome-wide significant loci AREG, GABRR1 and PDE10A also exhibit strong interactions with education (P<8.5 × 10(-5)), whereas the interactions are less evident in Europeans. The discovery of these loci represents an important advance in understanding how gene and environment interactions contribute to the heterogeneity of myopia. PMID:27020472

  2. One-dimensional analysis of unsteady flows due to supercritical heat addition in high speed condensing steam

    NASA Astrophysics Data System (ADS)

    Malek, N. A.; Hasini, H.; Yusoff, M. Z.

    2013-06-01

    Unsteadiness in supersonic flow in nozzles can be generated by the release of heat due to spontaneous condensation. The heat released is termed "supercritical" and may be responsible for turbine blade failure in turbine cascades as it causes a supersonic flow to decelerate. When the Mach number is reduced to unity, the flow can no longer sustain the additional heat and becomes unstable. This paper aims to numerically investigate the unsteadiness caused by supercritical heat addition in one-dimensional condensing flows. The governing equations for mass, momentum and energy, coupled with the equations describing the wetness fraction and droplet growth, are integrated and solved iteratively to reveal the final solution. Comparison is made with well-established experimental and numerical solutions from previous researchers, which show similar phenomena.

  3. Systematic errors of an optical encryption system due to the discrete values of a spatial light modulator

    NASA Astrophysics Data System (ADS)

    Monaghan, David S.

    2009-02-01

    An optical implementation of the amplitude encoded double random phase encryption/decryption technique is implemented, and both numerical and experimental results are presented. In particular, we examine the effect of quantization in the decryption process due to the discrete values and quantized levels, which a spatial light modulator (SLM) can physically display. To do this, we characterize a transmissive SLM using Jones matrices and then map a complex image to the physically achievable levels of the SLM using the pseudorandom encoding technique. We present both numerical and experimental results that quantify the performance of the system.

  4. Bacterial Cooperation Causes Systematic Errors in Pathogen Risk Assessment due to the Failure of the Independent Action Hypothesis.

    PubMed

    Cornforth, Daniel M; Matthews, Andrew; Brown, Sam P; Raymond, Ben

    2015-04-01

    The Independent Action Hypothesis (IAH) states that pathogenic individuals (cells, spores, virus particles etc.) behave independently of each other, so that each has an independent probability of causing systemic infection or death. The IAH is not just of basic scientific interest; it forms the basis of our current estimates of infectious disease risk in humans. Despite the important role of the IAH in managing disease interventions for food and water-borne pathogens, experimental support for the IAH in bacterial pathogens is indirect at best. Moreover since the IAH was first proposed, cooperative behaviors have been discovered in a wide range of microorganisms, including many pathogens. A fundamental principle of cooperation is that the fitness of individuals is affected by the presence and behaviors of others, which is contrary to the assumption of independent action. In this paper, we test the IAH in Bacillus thuringiensis (B.t), a widely occurring insect pathogen that releases toxins that benefit others in the inoculum, infecting the diamondback moth, Plutella xylostella. By experimentally separating B.t. spores from their toxins, we demonstrate that the IAH fails because there is an interaction between toxin and spore effects on mortality, where the toxin effect is synergistic and cannot be accommodated by independence assumptions. Finally, we show that applying recommended IAH dose-response models to high dose data leads to systematic overestimation of mortality risks at low doses, due to the presence of synergistic pathogen interactions. Our results show that cooperative secretions can easily invalidate the IAH, and that such mechanistic details should be incorporated into pathogen risk analysis. PMID:25909384

  5. Bacterial Cooperation Causes Systematic Errors in Pathogen Risk Assessment due to the Failure of the Independent Action Hypothesis

    PubMed Central

    Cornforth, Daniel M.; Matthews, Andrew; Brown, Sam P.; Raymond, Ben

    2015-01-01

    The Independent Action Hypothesis (IAH) states that pathogenic individuals (cells, spores, virus particles etc.) behave independently of each other, so that each has an independent probability of causing systemic infection or death. The IAH is not just of basic scientific interest; it forms the basis of our current estimates of infectious disease risk in humans. Despite the important role of the IAH in managing disease interventions for food and water-borne pathogens, experimental support for the IAH in bacterial pathogens is indirect at best. Moreover since the IAH was first proposed, cooperative behaviors have been discovered in a wide range of microorganisms, including many pathogens. A fundamental principle of cooperation is that the fitness of individuals is affected by the presence and behaviors of others, which is contrary to the assumption of independent action. In this paper, we test the IAH in Bacillus thuringiensis (B.t), a widely occurring insect pathogen that releases toxins that benefit others in the inoculum, infecting the diamondback moth, Plutella xylostella. By experimentally separating B.t. spores from their toxins, we demonstrate that the IAH fails because there is an interaction between toxin and spore effects on mortality, where the toxin effect is synergistic and cannot be accommodated by independence assumptions. Finally, we show that applying recommended IAH dose-response models to high dose data leads to systematic overestimation of mortality risks at low doses, due to the presence of synergistic pathogen interactions. Our results show that cooperative secretions can easily invalidate the IAH, and that such mechanistic details should be incorporated into pathogen risk analysis. PMID:25909384

  6. The Effect of Additional Dead Space on Respiratory Exchange Ratio and Carbon Dioxide Production Due to Training

    PubMed Central

    Smolka, Lukasz; Borkowski, Jacek; Zaton, Marek

    2014-01-01

    The purpose of the study was to investigate the effects of implementing additional respiratory dead space during cycloergometry-based aerobic training. The primary outcome measures were respiratory exchange ratio (RER) and carbon dioxide production (VCO2). Two groups of young healthy males, Experimental (Exp, n = 15) and Control (Con, n = 15), participated in this study. The training consisted of 12 sessions, performed twice a week for 6 weeks. A single training session consisted of continuous, constant-rate exercise on a cycle ergometer at 60% of VO2max which was maintained for 30 minutes. Subjects in the Exp group were breathing through additional respiratory dead space (1200 ml), while subjects in the Con group were breathing without additional dead space. A pre-test and two post-training incremental exercise tests were performed for the detection of gas exchange variables. In all training sessions, pCO2 was higher and blood pH was lower in the Exp group (p < 0.001), ensuring respiratory acidosis. The 12-session training program resulted in a significant increase in performance time in both groups (from 17 min 29 s ± 1 min 31 s to 18 min 47 s ± 1 min 37 s in Exp, p = 0.02, and from 17 min 20 s ± 1 min 18 s to 18 min 45 s ± 1 min 44 s in Con, p = 0.02), but did not reveal a significant difference in RER and VCO2 in either post-training test, performed at rest and during submaximal workload. We interpret the lack of difference in post-training values of RER and VCO2 between groups as an absence of inhibition of glycolysis and glycogenolysis during exercise with additional dead space. Key Points The purpose of the study was to investigate the effects of implementing additional respiratory dead space during cycloergometry-based aerobic training on respiratory exchange ratio and carbon dioxide production. In all training sessions, respiratory acidosis was attained by the experimental group only. No significant difference in RER and VCO2 was found between the experimental and control groups due to the training. The lack of

  7. Skylab water balance error analysis

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1977-01-01

    Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and little can be gained from improvement in analytical accuracy. In addition, a propagation of error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.
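
    The propagation-of-error step mentioned above amounts to summing the per-term variances and twice the pairwise covariances of the water-balance equation. The sketch below illustrates that bookkeeping with invented variance values chosen only to mimic the qualitative finding (body-mass term dominant, covariances under 10%); they are not the Skylab numbers.

```python
import numpy as np

# Hedged sketch: variance budget for a net water balance of the form
#   balance = intake - losses - body-mass-change term,
# where var(balance) = sum of term variances + 2 * sum of pairwise covariances.
# All numbers are illustrative placeholders, not the Skylab values.
variances = {
    "body mass change": 0.160,
    "water intake": 0.010,
    "urine output": 0.008,
    "evaporative loss": 0.012,
}
cov_sum = 0.004            # sum of all pairwise covariance terms (illustrative)

total_var = sum(variances.values()) + 2.0 * cov_sum
for term, v in variances.items():
    print(f"{term:18s}: {100.0 * v / total_var:5.1f}% of total variance")
print(f"covariance terms  : {100.0 * 2.0 * cov_sum / total_var:5.1f}% of total variance")
print(f"standard deviation of the balance = {np.sqrt(total_var):.3f}")
```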

  8. Decrease in corneal damage due to benzalkonium chloride by the addition of sericin into timolol maleate eye drops.

    PubMed

    Nagai, Noriaki; Ito, Yoshimasa; Okamoto, Norio; Shimomura, Yoshikazu

    2013-01-01

    We investigated the protective effects of sericin on corneal damage due to benzalkonium chloride (BAC) used as a preservative in commercially available timolol maleate eye drops using rat debrided corneal epithelium and a human cornea epithelial cell line (HCE-T). Corneal wounds were monitored using a fundus camera TRC-50X equipped with a digital camera; eye drops were instilled into the rat eyes five times a day after corneal epithelial abrasion. The viability of HCE-T cells was calculated by TetraColor One; and Escherichia coli (ATCC 8739) were used to measure antimicrobial activity. The reducing effects on transcorneal penetration and intraocular pressure (IOP) of the eye drops were determined using rabbits. The corneal wound healing rate and rate constants (kH) as well as cell viability were higher following treatment with 0.005% BAC solution containing 0.1% sericin than in the case of treatment with BAC solution alone; the antimicrobial activity was approximately the same for BAC solutions with and without sericin. In addition, the kH for rat eyes instilled with commercially available timolol maleate eye drops containing 0.1% sericin was significantly higher than that of eyes instilled with timolol maleate eye drops without sericin, and the addition of sericin did not affect the corneal penetration or IOP reducing effect of commercially available timolol maleate eye drops. A preservative system comprising BAC and sericin may provide effective therapy for glaucoma patients requiring long-term anti-glaucoma agents. PMID:23470443

  9. Medication Errors

    MedlinePlus

    ... to reduce the risk of medication errors to industry and others at FDA. Additionally, DMEPA prospectively reviews ... List of Abbreviations Regulations and Guidances Guidance for Industry: Safety Considerations for Product Design to Minimize Medication ...

  10. Too generous to a fault? Is reliable earthquake safety a lost art? Errors in expected human losses due to incorrect seismic hazard estimates

    NASA Astrophysics Data System (ADS)

    Bela, James

    2014-11-01

    "One is well advised, when traveling to a new territory, to take a good map and then to check the map with the actual territory during the journey." In just such a reality check, Global Seismic Hazard Assessment Program (GSHAP) maps (prepared using PSHA) portrayed a "low seismic hazard," which was then also assumed to be the "risk to which the populations were exposed." But time-after-time-after-time the actual earthquakes that occurred were not only "surprises" (many times larger than those implied on the maps), but they were often near the maximum potential size (Maximum Credible Earthquake or MCE) that geologically could occur. Given these "errors in expected human losses due to incorrect seismic hazard estimates" revealed globally in these past performances of the GSHAP maps (> 700,000 deaths 2001-2011), we need to ask not only: "Is reliable earthquake safety a lost art?" but also: "Who and what were the `Raiders of the Lost Art?' "

  11. Remediating Common Math Errors.

    ERIC Educational Resources Information Center

    Wagner, Rudolph F.

    1981-01-01

    Explanations and remediation suggestions for five types of mathematics errors due either to perceptual or cognitive difficulties are given. Error types include directionality problems, mirror writing, visually misperceived signs, diagnosed directionality problems, and mixed process errors. (CL)

  12. 38 CFR 3.361 - Benefits under 38 U.S.C. 1151(a) for additional disability or death due to hospital care, medical...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    .... 1151(a) for additional disability or death due to hospital care, medical or surgical treatment.... 1151(a) for additional disability or death due to hospital care, medical or surgical treatment..., VA compares the veteran's condition immediately before the beginning of the hospital care, medical...

  13. SU-E-J-164: Estimation of DVH Variation for PTV Due to Interfraction Organ Motion in Prostate VMAT Using Gaussian Error Function

    SciTech Connect

    Lewis, C; Jiang, R; Chow, J

    2015-06-15

    Purpose: We developed a method to predict the change of the DVH for the PTV due to interfraction organ motion in prostate VMAT without repeating the CT scan and treatment planning. The method is based on a pre-calculated patient database with DVH curves of the PTV modelled by the Gaussian error function (GEF). Methods: For a group of 30 patients with different prostate sizes, their VMAT plans were recalculated by shifting their PTVs by 1 cm in 10 increments along the anterior-posterior, left-right and superior-inferior directions. The DVH curve of the PTV in each replan was then fitted by the GEF to determine parameters describing the shape of the curve. Information on how these parameters vary with the DVH change due to prostate motion for different prostate sizes was analyzed and stored in a database by a program written in MATLAB. Results: To predict a new DVH for the PTV due to prostate interfraction motion, the prostate size and shift distance with direction were input to the program. Parameters modelling the DVH for the PTV were determined based on the pre-calculated patient dataset. From the new parameters, DVH curves of PTVs with and without considering the prostate motion were plotted for comparison. The program was verified with different prostate cases involving interfraction prostate shifts and replans. Conclusion: Variation of the DVH for the PTV in prostate VMAT can be predicted using a pre-calculated patient database with DVH curve fitting. The computing time is short because a CT rescan and replan are not required. This quick DVH estimation can help radiation staff to determine whether the changed PTV coverage due to a prostate shift is tolerable for the treatment. However, it should be noted that the program can only consider prostate interfraction motions along three axes, and is restricted to prostate VMAT plans using the same plan script in the treatment planning system.
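
    The Gaussian-error-function modelling of a cumulative DVH can be sketched as follows; the functional form, dose levels, and synthetic noise are illustrative and do not reproduce the pre-calculated patient database or the MATLAB program described above.

```python
import numpy as np
from scipy.special import erf
from scipy.optimize import curve_fit

# Hedged sketch: modelling a cumulative PTV DVH with a Gaussian error function (GEF) and
# fitting its shape parameters to a noisy synthetic curve. All numbers are illustrative.
def dvh_gef(dose, d50, sigma):
    """Volume (%) receiving at least `dose`, for an approximately Gaussian dose distribution."""
    return 50.0 * (1.0 - erf((dose - d50) / (np.sqrt(2.0) * sigma)))

rng = np.random.default_rng(5)
dose = np.linspace(60.0, 85.0, 60)                     # Gy
volume = dvh_gef(dose, d50=78.0, sigma=1.5) + rng.normal(0, 0.5, dose.size)

popt, _ = curve_fit(dvh_gef, dose, volume, p0=[75.0, 2.0])
print(f"fitted D50 = {popt[0]:.2f} Gy, sigma = {popt[1]:.2f} Gy")
```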

  14. Common Ion Effects In Zeoponic Substrates: Dissolution And Cation Exchange Variations Due to Additions of Calcite, Dolomite and Wollastonite

    NASA Technical Reports Server (NTRS)

    Beiersdorfer, R. E.; Ming, D. W.; Galindo, C., Jr.

    2003-01-01

    A clinoptilolite-rich tuff-hydroxyapatite mixture (zeoponic substrate) has the potential to serve as a synthetic soil additive for plant growth. Essential plant macro-nutrients such as calcium, phosphorus, magnesium, ammonium and potassium are released into solution via dissolution of the hydroxyapatite and cation exchange on zeolite charged sites. Plant growth experiments resulting in low yield for wheat have been attributed to a Ca deficiency caused by a high degree of cation exchange by the zeolite. Batch-equilibration experiments were performed in order to determine if the Ca deficiency can be remedied by the addition of a second Ca-bearing, soluble mineral such as calcite, dolomite or wollastonite. Variations in the amount of calcite, dolomite or wollastonite resulted in systematic changes in the concentrations of Ca and P. The addition of calcite, dolomite or wollastonite to the zeoponic substrate resulted in an exponential decrease in the phosphorus concentration in solution. The exponential rate of decay was greatest for calcite (5.60 per wt.%), intermediate for wollastonite (2.85 per wt.%) and least for dolomite (1.58 per wt.%). Additions of the three minerals resulted in linear increases in the calcium concentration in solution. The rate of increase was greatest for calcite (3.64), intermediate for wollastonite (2.41) and least for dolomite (0.61). The observed changes in P and Ca concentration are consistent with the solubilities of calcite, dolomite and wollastonite and with changes expected from a common ion effect with Ca. Keywords: zeolite, zeoponics, common-ion effect, clinoptilolite, hydroxyapatite
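
    The reported functional forms - an exponential decrease of dissolved P and a linear increase of dissolved Ca with the amount of added mineral - can be recovered with simple curve fits. The concentration values below are synthetic illustrations generated from the calcite rate constants quoted above, not the experimental data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hedged sketch: fitting an exponential decay to P concentration and a line to Ca concentration
# as functions of the wt.% of added mineral. The "data" are synthetic illustrations.
wt_pct = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])            # wt.% calcite added
p_conc = 12.0 * np.exp(-5.60 * wt_pct) + 0.3                  # mg/L, illustrative
ca_conc = 5.0 + 3.64 * wt_pct                                 # mg/L, illustrative

def exp_model(x, a, k, c):
    return a * np.exp(-k * x) + c

p_fit, _ = curve_fit(exp_model, wt_pct, p_conc, p0=[10.0, 1.0, 0.0])
ca_slope, _ = np.polyfit(wt_pct, ca_conc, 1)

print(f"P decay rate = {p_fit[1]:.2f} per wt.% (abstract: 5.60 for calcite)")
print(f"Ca slope     = {ca_slope:.2f} per wt.%  (abstract: 3.64 for calcite)")
```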

  15. Correction for ‘artificial’ electron disequilibrium due to cone-beam CT density errors: implications for on-line adaptive stereotactic body radiation therapy of lung

    NASA Astrophysics Data System (ADS)

    Disher, Brandon; Hajdok, George; Wang, An; Craig, Jeff; Gaede, Stewart; Battista, Jerry J.

    2013-06-01

    Cone-beam computed tomography (CBCT) has rapidly become a clinically useful imaging modality for image-guided radiation therapy. Unfortunately, CBCT images of the thorax are susceptible to artefacts due to scattered photons, beam hardening, lag in data acquisition, and respiratory motion during a slow scan. These limitations cause dose errors when CBCT image data are used directly in dose computations for on-line, dose adaptive radiation therapy (DART). The purpose of this work is to assess the magnitude of errors in CBCT numbers (HU), and determine the resultant effects on derived tissue density and computed dose accuracy for stereotactic body radiation therapy (SBRT) of lung cancer. Planning CT (PCT) images of three lung patients were acquired using a Philips multi-slice helical CT simulator, while CBCT images were obtained with a Varian On-Board Imaging system. To account for erroneous CBCT data, three practical correction techniques were tested: (1) conversion of CBCT numbers to electron density using phantoms, (2) replacement of individual CBCT pixel values with bulk CT numbers, averaged from PCT images for tissue regions, and (3) limited replacement of CBCT lung pixel values (LCT) likely to produce artificial lateral electron disequilibrium. For each corrected CBCT data set, lung SBRT dose distributions were computed for a 6 MV volume modulated arc therapy (VMAT) technique within the Philips Pinnacle treatment planning system. The reference prescription dose was set such that 95% of the planning target volume (PTV) received at least 54 Gy (i.e. D95). Further, we used the relative depth dose factor as an a priori index to predict the effects of incorrect low tissue density on computed lung dose in regions of severe electron disequilibrium. CT number profiles from co-registered CBCT and PCT patient lung images revealed many reduced lung pixel values in CBCT data, with some pixels corresponding to vacuum (-1000 HU). Similarly, CBCT data in a plastic lung
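
    The first correction technique listed above - mapping CBCT numbers to electron density with a phantom-derived calibration - can be sketched as a piecewise-linear lookup. The calibration points below are generic illustrative values, not the published curve, and the function is only a schematic of that step.

```python
import numpy as np

# Hedged sketch: converting CT numbers (HU) to relative electron density with a piecewise
# linear calibration curve measured on a density phantom. Calibration points are illustrative.
hu_cal = np.array([-1000.0, -700.0, 0.0, 200.0, 1000.0])       # phantom CT numbers (HU)
red_cal = np.array([0.001, 0.30, 1.00, 1.10, 1.60])            # relative electron densities

def hu_to_density(hu):
    """Clip out-of-range CT numbers, then interpolate on the calibration curve."""
    hu = np.clip(hu, hu_cal[0], hu_cal[-1])
    return np.interp(hu, hu_cal, red_cal)

# Erroneously low CBCT lung pixels (near -1000 HU) map to near-zero density, which is what
# produces the artificial lateral electron disequilibrium discussed above.
print(hu_to_density(np.array([-1000.0, -850.0, -300.0, 40.0])))
```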

  16. Correction for 'artificial' electron disequilibrium due to cone-beam CT density errors: implications for on-line adaptive stereotactic body radiation therapy of lung.

    PubMed

    Disher, Brandon; Hajdok, George; Wang, An; Craig, Jeff; Gaede, Stewart; Battista, Jerry J

    2013-06-21

    Cone-beam computed tomography (CBCT) has rapidly become a clinically useful imaging modality for image-guided radiation therapy. Unfortunately, CBCT images of the thorax are susceptible to artefacts due to scattered photons, beam hardening, lag in data acquisition, and respiratory motion during a slow scan. These limitations cause dose errors when CBCT image data are used directly in dose computations for on-line, dose adaptive radiation therapy (DART). The purpose of this work is to assess the magnitude of errors in CBCT numbers (HU), and determine the resultant effects on derived tissue density and computed dose accuracy for stereotactic body radiation therapy (SBRT) of lung cancer. Planning CT (PCT) images of three lung patients were acquired using a Philips multi-slice helical CT simulator, while CBCT images were obtained with a Varian On-Board Imaging system. To account for erroneous CBCT data, three practical correction techniques were tested: (1) conversion of CBCT numbers to electron density using phantoms, (2) replacement of individual CBCT pixel values with bulk CT numbers, averaged from PCT images for tissue regions, and (3) limited replacement of CBCT lung pixel values (LCT) likely to produce artificial lateral electron disequilibrium. For each corrected CBCT data set, lung SBRT dose distributions were computed for a 6 MV volume modulated arc therapy (VMAT) technique within the Philips Pinnacle treatment planning system. The reference prescription dose was set such that 95% of the planning target volume (PTV) received at least 54 Gy (i.e. D95). Further, we used the relative depth dose factor as an a priori index to predict the effects of incorrect low tissue density on computed lung dose in regions of severe electron disequilibrium. CT number profiles from co-registered CBCT and PCT patient lung images revealed many reduced lung pixel values in CBCT data, with some pixels corresponding to vacuum (-1000 HU). Similarly, CBCT data in a plastic lung

  17. Decrease in Corneal Damage due to Benzalkonium Chloride by the Addition of Mannitol into Timolol Maleate Eye Drops.

    PubMed

    Nagai, Noriaki; Yoshioka, Chiaki; Tanino, Tadatoshi; Ito, Yoshimasa; Okamoto, Norio; Shimomura, Yoshikazu

    2015-01-01

    We investigated the protective effects of mannitol on corneal damage caused by benzalkonium chloride (BAC), which is used as a preservative in commercially available timolol maleate eye drops, using rat debrided corneal epithelium and a human cornea epithelial cell line (HCE-T). Corneal wounds were monitored using a fundus camera TRC-50X equipped with a digital camera; eye drops were instilled into rat eyes five times a day after corneal epithelial abrasion. The viability of HCE-T cells was calculated by TetraColor One; and Escherichia coli (ATCC 8739) were used to measure antimicrobial activity. The reducing effects on transcorneal penetration and intraocular pressure (IOP) of the eye drops were determined using rabbits. The corneal wound healing rate and rate constant (kH), as well as cell viability, were higher following treatment with 0.005% BAC solution containing 0.5% mannitol than in the case of BAC solution alone; the antimicrobial activity was approximately the same for BAC solutions with and without mannitol. In addition, the kH for rat eyes instilled with commercially available timolol maleate eye drops containing 0.5% mannitol was significantly higher than that for eyes instilled with timolol maleate eye drops without mannitol, and the addition of mannitol did not affect the corneal penetration or IOP reducing effect of the timolol maleate eye drops. A preservative system comprising BAC and mannitol may provide effective therapy for glaucoma patients requiring long-term treatment with anti-glaucoma agents. PMID:26136174

  18. Addressing Loss of Efficiency Due to Misclassification Error in Enriched Clinical Trials for the Evaluation of Targeted Therapies Based on the Cox Proportional Hazards Model

    PubMed Central

    Tsai, Chen-An; Lee, Kuan-Ting; Liu, Jen-pei

    2016-01-01

    A key feature of precision medicine is that it takes individual variability at the genetic or molecular level into account in determining the best treatment for patients diagnosed with diseases detected by recently developed novel biotechnologies. The enrichment design is an efficient design that enrolls only the patients testing positive for specific molecular targets and randomly assigns them to the targeted treatment or the concurrent control. However, there is no diagnostic device with perfect accuracy and precision for detecting molecular targets. In particular, the positive predictive value (PPV) can be quite low for rare diseases with low prevalence. Under the enrichment design, some patients testing positive for specific molecular targets may not have the molecular targets. The efficacy of the targeted therapy may be underestimated in the patients that actually do have the molecular targets. To address the loss of efficiency due to misclassification error, we apply the discrete mixture modeling for time-to-event data proposed by Eng and Hanlon [8] to develop an inferential procedure, based on the Cox proportional hazards model, for the effect of the targeted treatment in the true-positive patients with the molecular targets. Our proposed procedure incorporates both the inaccuracy of diagnostic devices and the uncertainty of estimated accuracy measures. We employed the expectation-maximization algorithm in conjunction with the bootstrap technique for estimation of the hazard ratio and its estimated variance. We report the results of simulation studies which empirically investigated the performance of the proposed method. Our proposed method is illustrated by a numerical example. PMID:27120450

  19. An additional child case of an aldosterone-producing adenoma with an atypical presentation of peripheral paralysis due to hypokalemia.

    PubMed

    Dinleyici, E C; Dogruel, N; Acikalin, M F; Tokar, B; Oztelcan, B; Ilhan, H

    2007-11-01

    Aldosterone-producing adenoma, which is characterized by hypertension, hypokalemia, and elevated aldosterone levels with suppressed plasma renin activity, is a rare condition during childhood and is also potentially curable. To the best of our knowledge, nearly 25 cases of childhood aldosterone-secreting adenoma have been reported in the literature to date. Here we describe a 13-yr-old girl with primary hyperaldosteronism secondary to aldosterone-secreting adenoma. The patient was admitted to our hospital with the neuromuscular complaints of muscle weakness and inability to walk due to hypokalemia. She had been misdiagnosed as having hypokalemic periodic paralysis 2 months before admission and her symptoms had radically improved with potassium supplementation. However, her blood pressure levels had increased and her symptoms reappeared 2 days prior to being observed during hospitalization in our institution. Laboratory examinations revealed hypokalemia (2.1 mEq/l), and increased serum aldosterone levels with suppressed plasma renin activity. Abdominal ultrasonography and abdominal magnetic resonance imaging revealed left adrenal mass. Laparoscopic adrenalectomy was performed and histopathological examinations showed benign adrenal adenoma. Serum aldosterone levels and blood pressure levels returned to normal after surgical intervention. This case demonstrates the importance of a systemic evaluation including blood pressure monitorization of children with hypokalemia as intermittent hypertension episodes may be seen; cases without hypertension may be misdiagnosed as rheumatological or neurological disorders such as hypokalemic periodic paralysis, as in our case. PMID:18075291

  20. Radio metric errors due to mismatch and offset between a DSN antenna beam and the beam of a troposphere calibration instrument

    NASA Technical Reports Server (NTRS)

    Linfield, R. P.; Wilcox, J. Z.

    1993-01-01

    Two components of the error of a troposphere calibration measurement were quantified by theoretical calculations. The first component is a beam mismatch error, which occurs when the calibration instrument senses a conical volume different from the cylindrical volume sampled by a Deep Space Network (DSN) antenna. The second component is a beam offset error, which occurs if the calibration instrument is not mounted on the axis of the DSN antenna. These two error sources were calculated for both delay (e.g., VLBI) and delay rate (e.g., Doppler) measurements. The beam mismatch error for both delay and delay rate drops rapidly as the beamwidth of the troposphere calibration instrument (e.g., a water vapor radiometer or an infrared Fourier transform spectrometer) is reduced. At a 10-deg elevation angle, the instantaneous beam mismatch error is 1.0 mm for a 6-deg beamwidth and 0.09 mm for a 0.5-deg beam (these are the full angular widths of a circular beam with uniform gain out to a sharp cutoff). Time averaging for 60-100 sec will reduce these errors by factors of 1.2-2.2. At a 20-deg elevation angle, the lower limit for current Doppler observations, the beam-mismatch delay rate error is an Allan standard deviation over 100 sec of 1.1 x 10^-14 with a 4-deg beam and 1.3 x 10^-15 for a 0.5-deg beam. A 50-m beam offset would result in a fairly modest (compared to other expected error sources) delay error (less than or equal to 0.3 mm for 60-sec integrations at any elevation angle greater than or equal to 6 deg). However, the same offset would cause a large error in delay rate measurements (e.g., an Allan standard deviation of 1.2 x 10^-14 over 100 sec at a 20-deg elevation angle), which would dominate over other known error sources if the beamwidth is 2 deg or smaller. An on-axis location is essential for accurate troposphere calibration of delay rate measurements. A half-power beamwidth (for a beam with a tapered gain profile) of 1.2 deg or smaller is

  1. Correcting for bias in relative risk estimates due to exposure measurement error: a case study of occupational exposure to antineoplastics in pharmacists.

    PubMed Central

    Spiegelman, D; Valanis, B

    1998-01-01

    OBJECTIVES: This paper describes 2 statistical methods designed to correct for bias from exposure measurement error in point and interval estimates of relative risk. METHODS: The first method takes the usual point and interval estimates of the log relative risk obtained from logistic regression and corrects them for nondifferential measurement error using an exposure measurement error model estimated from validation data. The second, likelihood-based method fits an arbitrary measurement error model suitable for the data at hand and then derives the model for the outcome of interest. RESULTS: Data from Valanis and colleagues' study of the health effects of antineoplastics exposure among hospital pharmacists were used to estimate the prevalence ratio of fever in the previous 3 months from this exposure. For an interdecile increase in weekly number of drugs mixed, the prevalence ratio, adjusted for confounding, changed from 1.06 to 1.17 (95% confidence interval [CI] = 1.04, 1.26) after correction for exposure measurement error. CONCLUSIONS: Exposure measurement error is often an important source of bias in public health research. Methods are available to correct such biases. PMID:9518972
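    As a concrete illustration of the first correction strategy described above, the sketch below applies a simple regression-calibration style adjustment: the naive log relative risk from logistic regression is rescaled by an attenuation factor estimated from a validation substudy in which both the true and the error-prone exposure are observed. The data, the error model, and the neglect of the uncertainty in the attenuation factor are all simplifying assumptions for illustration; they are not taken from the paper.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# --- main study: outcome y, error-prone exposure w (illustrative data) ---
n = 5000
x = rng.normal(0.0, 1.0, n)                  # true exposure (unobserved in practice)
w = x + rng.normal(0.0, 0.8, n)              # measured exposure with classical error
p = 1.0 / (1.0 + np.exp(-(-2.0 + 0.5 * x)))  # true log odds ratio per unit exposure = 0.5
y = rng.binomial(1, p)

naive = sm.Logit(y, sm.add_constant(w)).fit(disp=0)
beta_naive, se_naive = naive.params[1], naive.bse[1]

# --- validation substudy: both x and w observed; estimate the attenuation factor ---
m = 500
xv = rng.normal(0.0, 1.0, m)
wv = xv + rng.normal(0.0, 0.8, m)
lam = sm.OLS(xv, sm.add_constant(wv)).fit().params[1]   # regression-calibration slope

# Correct the log relative risk and (approximately) its standard error.
# The uncertainty in lam itself is ignored here for brevity.
beta_corr, se_corr = beta_naive / lam, se_naive / lam
print(f"naive beta = {beta_naive:.3f}, corrected beta = {beta_corr:.3f}")
print(f"95% CI corrected: ({beta_corr - 1.96*se_corr:.3f}, {beta_corr + 1.96*se_corr:.3f})")
```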

  2. SU-E-P-13: Quantifying the Geometric Error Due to Irregular Motion in Four-Dimensional Computed Tomography (4DCT)

    SciTech Connect

    Sawant, A

    2015-06-15

    Purpose: Respiratory correlated 4DCT images are generated under the assumption of a regular breathing cycle. This study evaluates the error in 4DCT-based target position estimation in the presence of irregular respiratory motion. Methods: A custom-made programmable externally- and internally-deformable lung motion phantom was placed inside the CT bore. An abdominal pressure belt was placed around the phantom to mimic clinical 4DCT acquisition, and the motion platform was programmed with a sinusoidal (±10mm, 10 cycles per minute) motion trace and 7 motion traces recorded from lung cancer patients. The same setup and motion trajectories were repeated in the linac room and kV fluoroscopic images were acquired using the on-board imager. Positions of 4 internal markers segmented from the 4DCT volumes were overlaid upon the motion trajectories derived from the fluoroscopic time series to calculate the difference between estimated (4DCT) and “actual” (kV fluoro) positions. Results: With a sinusoidal trace, absolute errors of the 4DCT estimated marker positions vary between 0.78mm and 5.4mm and RMS errors are between 0.38mm and 1.7mm. With irregular patient traces, absolute errors of the 4DCT estimated marker positions increased significantly by 100 to 200 percent, while the corresponding RMS error values have much smaller changes. Significant mismatches were frequently found at peak-inhale or peak-exhale phase. Conclusion: As expected, under conditions of well-behaved, periodic sinusoidal motion, the 4DCT yielded much better estimation of marker positions. When an actual patient trace is used, 4DCT-derived positions showed significant mismatches with the fluoroscopic trajectories, indicating the potential for geometric and therefore dosimetric errors in the presence of cycle-to-cycle respiratory variations.
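    The error metrics reported above reduce to straightforward array arithmetic. The sketch below, with made-up trajectories standing in for the 4DCT-estimated and fluoroscopy-derived marker positions, shows one plausible way to compute the absolute 3D error per time point and the per-axis RMS error.

```python
import numpy as np

# Hypothetical marker trajectories (mm), one row per time point, columns = x, y, z.
# est_4dct stands in for phase-sorted 4DCT positions, actual for kV fluoroscopy.
rng = np.random.default_rng(2)
actual = rng.normal(0.0, 5.0, size=(200, 3))
est_4dct = actual + rng.normal(0.5, 1.0, size=(200, 3))   # illustrative mismatch

diff = est_4dct - actual
abs_err = np.linalg.norm(diff, axis=1)          # 3D absolute error per time point
rms_err = np.sqrt(np.mean(diff ** 2, axis=0))   # per-axis RMS error

print(f"max absolute 3D error: {abs_err.max():.2f} mm")
print("per-axis RMS error (mm):", np.round(rms_err, 2))
```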

  3. Microstructural evolution and intermetallic formation in Al-8wt% Si-0.8wt% Fe alloy due to grain refiner and modifier additions

    NASA Astrophysics Data System (ADS)

    Hassani, Amir; Ranjbar, Khalil; Sami, Sattar

    2012-08-01

    An alloy of Al-8wt% Si-0.8wt% Fe was cast in a metallic die, and its microstructural changes due to Ti-B refiner and Sr modifier additions were studied. Apart from usual refinement and modification of the microstructure, some mutual influences of the additives took place, and no mutual poisoning effects by these additives, in combined form, were observed. It was noticed that the dimensions of the iron-rich intermetallics were influenced by the additives, causing them to become larger. The needle-shaped intermetallics that were obtained from refiner addition became thicker and longer when adding the modifier. It was also found that α-Al and eutectic silicon phases preferentially nucleate on different types of intermetallic compounds. The higher the iron content of the intermetallic compounds, the greater the changes in their dimensions. Formation of shrinkage porosities was also observed.

  4. Measuring uncertainty in dose delivered to the cochlea due to setup error during external beam treatment of patients with cancer of the head and neck

    SciTech Connect

    Yan, M.; Lovelock, D.; Hunt, M.; Mechalakos, J.; Hu, Y.; Pham, H.; Jackson, A.

    2013-12-15

    Purpose: To use Cone Beam CT scans obtained just prior to treatments of head and neck cancer patients to measure the setup error and cumulative dose uncertainty of the cochlea. Methods: Data from 10 head and neck patients with 10 planning CTs and 52 Cone Beam CTs taken at time of treatment were used in this study. Patients were treated with conventional fractionation using an IMRT dose painting technique, most with 33 fractions. Weekly radiographic imaging was used to correct the patient setup. The authors used rigid registration of the planning CT and Cone Beam CT scans to find the translational and rotational setup errors, and the spatial setup errors of the cochlea. The planning CT was rotated and translated such that the cochlea positions match those seen in the cone beam scans, cochlea doses were recalculated and fractional doses accumulated. Uncertainties in the positions and cumulative doses of the cochlea were calculated with and without setup adjustments from radiographic imaging. Results: The mean setup error of the cochlea was 0.04 ± 0.33 or 0.06 ± 0.43 cm for RL, 0.09 ± 0.27 or 0.07 ± 0.48 cm for AP, and 0.00 ± 0.21 or −0.24 ± 0.45 cm for SI with and without radiographic imaging, respectively. Setup with radiographic imaging reduced the standard deviation of the setup error by roughly 1–2 mm. The uncertainty of the cochlea dose depends on the treatment plan and the relative positions of the cochlea and target volumes. Combining results for the left and right cochlea, the authors found the accumulated uncertainty of the cochlea dose per fraction was 4.82 (0.39–16.8) cGy, or 10.1 (0.8–32.4) cGy, with and without radiographic imaging, respectively; the percentage uncertainties relative to the planned doses were 4.32% (0.28%–9.06%) and 10.2% (0.7%–63.6%), respectively. Conclusions: Patient setup error introduces uncertainty in the position of the cochlea during radiation treatment. With the assistance of radiographic imaging during setup

  5. Relativistic regimes in which Compton scattering doubly differential cross sections obtained from impulse approximation are accurate due to cancelation of errors

    NASA Astrophysics Data System (ADS)

    Lajohn, L. A.; Pratt, R. H.

    2015-05-01

    There is no simple parameter that can be used to predict when impulse approximation (IA) can yield accurate Compton scattering doubly differential cross sections (DDCS) in relativistic regimes. When Z is low, a small value of the parameter ⟨p⟩/q (where ⟨p⟩ is the average initial electron momentum and q is the momentum transfer) suffices. For small Z the photon electron kinematic contribution described in relativistic S-matrix (SM) theory reduces to an expression, Xrel, which is present in the relativistic impulse approximation (RIA) formula for Compton DDCS. When Z is high, the S-Matrix photon electron kinematics no longer reduces to Xrel, and this along with the error characterized by the magnitude of ⟨p⟩/q contribute to the RIA error Δ. We demonstrate and illustrate in the form of contour plots that there are regimes of incident photon energy ωi and scattering angle θ in which the two types of errors at least partially cancel. Our calculations show that when θ is about 65° for Uranium K-shell scattering, Δ is less than 1% over an ωi range of 300 to 900 keV.

  6. Analytical Calculation of Errors in Time and Value Perception Due to a Subjective Time Accumulator: A Mechanistic Model and the Generation of Weber's Law.

    PubMed

    Namboodiri, Vijay Mohan K; Mihalas, Stefan; Hussain Shuler, Marshall G

    2016-01-01

    It has been previously shown (Namboodiri, Mihalas, Marton, & Hussain Shuler, 2014) that an evolutionary theory of decision making and time perception is capable of explaining numerous behavioral observations regarding how humans and animals decide between differently delayed rewards of differing magnitudes and how they perceive time. An implementation of this theory using a stochastic drift-diffusion accumulator model (Namboodiri, Mihalas, & Hussain Shuler, 2014a) showed that errors in time perception and decision making approximately obey Weber's law for a range of parameters. However, prior calculations did not have a clear mechanistic underpinning. Further, these calculations were only approximate, with the range of parameters being limited. In this letter, we provide a full analytical treatment of such an accumulator model, along with a mechanistic implementation, to calculate the expression of these errors for the entirety of the parameter space. In our mechanistic model, Weber's law results from synaptic facilitation and depression within the feedback synapses of the accumulator. Our theory also makes the prediction that the steepness of temporal discounting can be affected by requiring the precise timing of temporal intervals. Thus, by presenting exact quantitative calculations, this work provides falsifiable predictions for future experimental testing. PMID:26599714

  7. A novel error detection due to joint CRC aided denoise-and-forward network coding for two-way relay channels.

    PubMed

    Cheng, Yulun; Yang, Longxiang

    2014-01-01

    In wireless two-way (TW) relay channels, denoise-and-forward (DNF) network coding (NC) is a promising technique to achieve spectral efficiency. However, unsuccessful detection at relay severely deteriorates the diversity gain, as well as end-to-end pairwise error probability (PEP). To handle this issue, a novel joint cyclic redundancy code (CRC) check method (JCRC) is proposed in this paper by exploiting the property of two NC combined CRC codewords. Firstly, the detection probability bounds of the proposed method are derived to prove its efficiency in evaluating the reliability of NC signals. On the basis of that, three JCRC aided TW DNF NC schemes are proposed, and the corresponding PEP performances are also derived. Numerical results reveal that JCRC aided TW DNF NC has similar PEP comparing with the separate CRC one, while the complexity is reduced to half. Besides, it demonstrates that the proposed schemes outperform the conventional one with log-likelihood ratio threshold. PMID:25247205
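    The abstract does not spell out the JCRC construction, but the property it appears to rely on is the linearity of CRC codes over GF(2): the bitwise XOR of two valid CRC codewords (the network-coded combination formed at the relay) is itself a valid codeword, so a single CRC check on the combined word can screen both constituent packets. The sketch below demonstrates that property with a small illustrative CRC-4 generator polynomial; it is not the authors' scheme.

```python
def crc_remainder(bits, poly):
    """Remainder of bits (list of 0/1, MSB first) times x^r, divided by poly over GF(2)."""
    bits = bits[:] + [0] * (len(poly) - 1)      # append r zero bits
    for i in range(len(bits) - len(poly) + 1):
        if bits[i]:
            for j, p in enumerate(poly):
                bits[i + j] ^= p
    return bits[-(len(poly) - 1):]

def crc_encode(msg, poly):
    """Systematic CRC codeword: message followed by its CRC bits."""
    return msg + crc_remainder(msg, poly)

def crc_check(word, poly):
    return not any(crc_remainder(word, poly))

POLY = [1, 0, 0, 1, 1]                 # x^4 + x + 1 (illustrative CRC-4 generator)
a = crc_encode([1, 0, 1, 1, 0, 0, 1, 0], POLY)
b = crc_encode([0, 1, 1, 0, 1, 0, 1, 1], POLY)
xored = [x ^ y for x, y in zip(a, b)]  # network-coded (XOR) combination at the relay

print(crc_check(a, POLY), crc_check(b, POLY), crc_check(xored, POLY))   # True True True
corrupted = xored[:]
corrupted[3] ^= 1                      # a detection error at the relay
print(crc_check(corrupted, POLY))      # False
```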

  8. Telemetry degradation due to a CW RFI induced carrier tracking error for the block IV receiving system with maximum likelihood convolution decoding

    NASA Technical Reports Server (NTRS)

    Sue, M. K.

    1981-01-01

    Models to characterize the behavior of the Deep Space Network (DSN) Receiving System in the presence of a radio frequency interference (RFI) are considered. A simple method to evaluate the telemetry degradation due to the presence of a CW RFI near the carrier frequency for the DSN Block 4 Receiving System using the maximum likelihood convolutional decoding assembly is presented. Analytical and experimental results are given.

  9. Human error in aviation operations

    NASA Technical Reports Server (NTRS)

    Nagel, David C.

    1988-01-01

    The role of human error in commercial and general aviation accidents and the techniques used to evaluate it are reviewed from a human-factors perspective. Topics addressed include the general decline in accidents per million departures since the 1960s, the increase in the proportion of accidents due to human error, methods for studying error, theoretical error models, and the design of error-resistant systems. Consideration is given to information acquisition and processing errors, visually guided flight, disorientation, instrument-assisted guidance, communication errors, decision errors, debiasing, and action errors.

  10. Pre- and post-experimental manipulation assessments confirm the increase in number of birds due to the addition of nest boxes

    PubMed Central

    Cuatianquiz Lima, Cecilia

    2016-01-01

    Secondary cavity nesting (SCN) birds breed in holes that they do not excavate themselves. This is possible where there are large trees whose size and age permit the digging of holes by primary excavators and only rarely happens in forest plantations, where we expected a deficit of both breeding holes and SCN species. We assessed whether the availability of tree cavities influenced the number of SCNs in two temperate forest types, and evaluated the change in number of SCNs after adding nest boxes. First, we counted all cavities within each of our 25-m radius sampling points in mature and young forest plots during 2009. We then added nest boxes at standardised locations during 2010 and 2011 and conducted fortnightly bird counts (January–October 2009–2011). In 2011 we added two extra plots of each forest type, where we also conducted bird counts. Prior to adding nest boxes, counts revealed more SCNs in mature than in young forest. Following the addition of nest boxes, the number of SCNs increased significantly in the points with nest boxes in both types of forest. Counts in 2011 confirmed the increase in number of birds due to the addition of nest boxes. Given the likely benefits associated with a richer bird community we propose that, as is routinely done in some countries, forest management programs preserve old tree stumps and add nest boxes to forest plantations in order to increase bird numbers and bird community diversity. PMID:26998410

  11. Refractive Errors

    MedlinePlus

    ... and lens of your eye helps you focus. Refractive errors are vision problems that happen when the ... cornea, or aging of the lens. Four common refractive errors are Myopia, or nearsightedness - clear vision close ...

  12. Variation in mechanical behavior due to different build directions of Titanium6Aluminum4Vanadium fabricated by electron beam additive manufacturing technology

    NASA Astrophysics Data System (ADS)

    Roy, Lalit

    Titanium has always been a metal of great interest since its discovery, especially for critical applications, because of its excellent mechanical properties such as light weight (almost half that of steel), low density (4.4 gm/cc) and high strength (almost similar to steel). It creates a stable and adherent oxide layer on its surface upon exposure to air or water, which gives it great resistance to corrosion and has made it a great choice for structures in severely corrosive environments and sea water. Its non-allergenic property has made it suitable for biomedical applications such as manufacturing implants. Having a very high melting temperature, it has very good potential for high temperature applications. But high production and processing costs have limited its application. Ti6Al4V is the most used titanium alloy, for which it has acquired the title of `workhorse' of the Ti family. Additive Layer Manufacturing (ALM) has brought revolution in manufacturing industries. Today, this additive manufacturing has developed into several methods and formed a family. This method fabricates a product by adding layer after layer as per the geometry given as input into the system. Though the concept was initially developed to fabricate prototypes and tooling, its highly economic aspects, i.e., very little waste material, less machining, comparatively lower production lead times, and the obviation of machine tools, have drawn attention for its further development towards mass production. Electron Beam Melting (EBM) is the latest addition to the ALM family, developed by Arcam AB, located in Sweden. The electron beam that is used as heat source melts metal powder to form layers. For this thesis work, three different types of specimens have been fabricated using the EBM system. These specimens differ in the direction of layer addition. Mechanical properties such as ultimate tensile strength, elastic modulus and yield strength have been measured and compared with standard data

  13. Error detection method

    DOEpatents

    Olson, Eric J.

    2013-06-11

    An apparatus, program product, and method that run an algorithm on a hardware based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to cause the hardware based processor to heat to a degree that increases the likelihood of hardware errors to manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run that may otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).
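    A minimal sketch of the general idea in this abstract, under the assumption of a deterministic floating-point workload: run a compute-heavy routine that keeps the processor busy (and hot), hash its output, and compare against the digest from an earlier trusted run on the same machine; any discrepancy flags a hardware error somewhere during the run. The workload and digest comparison below are illustrative, not the patented implementation.

```python
import hashlib
import numpy as np

def stress_run(seed, iters=200):
    """Deterministic, compute-heavy workload; any hardware fault during the run
    perturbs the result and therefore the digest."""
    rng = np.random.default_rng(seed)
    a = rng.random((256, 256))
    for _ in range(iters):
        a = a @ a                       # repeated matrix multiplies stress the FPU
        a /= np.linalg.norm(a)          # keep values finite and deterministic
    return hashlib.sha256(a.tobytes()).hexdigest()

reference = stress_run(seed=42)   # digest recorded from an earlier run on the same machine
candidate = stress_run(seed=42)   # run on the hardware under test, while hot

if candidate != reference:
    print("hardware error detected: outputs differ")
else:
    print("no error observed in this run")
```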

  14. Dose error analysis for a scanned proton beam delivery system

    NASA Astrophysics Data System (ADS)

    Coutrakon, G.; Wang, N.; Miller, D. W.; Yang, Y.

    2010-12-01

    All particle beam scanning systems are subject to dose delivery errors due to errors in position, energy and intensity of the delivered beam. In addition, finite scan speeds, beam spill non-uniformities, and delays in detector, detector electronics and magnet responses will all contribute errors in delivery. In this paper, we present dose errors for an 8 × 10 × 8 cm3 target of uniform water equivalent density with 8 cm spread out Bragg peak and a prescribed dose of 2 Gy. Lower doses are also analyzed and presented later in the paper. Beam energy errors and errors due to limitations of scanning system hardware have been included in the analysis. By using Gaussian shaped pencil beams derived from measurements in the research room of the James M Slater Proton Treatment and Research Center at Loma Linda, CA and executing treatment simulations multiple times, statistical dose errors have been calculated in each 2.5 mm cubic voxel in the target. These errors were calculated by delivering multiple treatments to the same volume and calculating the rms variation in delivered dose at each voxel in the target. The variations in dose were the result of random beam delivery errors such as proton energy, spot position and intensity fluctuations. The results show that with reasonable assumptions of random beam delivery errors, the spot scanning technique yielded an rms dose error in each voxel less than 2% or 3% of the 2 Gy prescribed dose. These calculated errors are within acceptable clinical limits for radiation therapy.
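    A toy, one-dimensional version of the repeated-delivery analysis described above is sketched below: Gaussian pencil-beam spots are delivered many times with random spot-position and intensity errors, and the rms dose deviation from the nominal plan is accumulated per voxel. The beam width, error magnitudes, and normalisation are invented for illustration and are not the paper's beam model.

```python
import numpy as np

rng = np.random.default_rng(3)

voxels = np.linspace(0, 80, 33)              # 1-D target, 2.5 mm voxel spacing (mm)
spots = np.arange(5, 80, 5)                  # planned spot centers (mm)
sigma = 6.0                                  # pencil-beam width (mm), illustrative
prescribed = 2.0                             # Gy

def deliver(pos_sd=0.5, int_sd=0.02):
    """One simulated delivery with random spot-position and intensity errors."""
    dose = np.zeros_like(voxels)
    for s in spots:
        center = s + rng.normal(0.0, pos_sd)
        weight = 1.0 + rng.normal(0.0, int_sd)
        dose += weight * np.exp(-0.5 * ((voxels - center) / sigma) ** 2)
    return dose

nominal = sum(np.exp(-0.5 * ((voxels - s) / sigma) ** 2) for s in spots)
scale = prescribed / nominal.mean()          # crude normalisation to about 2 Gy

deliveries = np.array([deliver() for _ in range(500)]) * scale
rms_error = np.sqrt(np.mean((deliveries - nominal * scale) ** 2, axis=0))
print(f"max per-voxel rms error: {rms_error.max():.3f} Gy "
      f"({100 * rms_error.max() / prescribed:.1f}% of prescription)")
```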

  15. Medication Errors

    MedlinePlus

    Medicines cure infectious diseases, prevent problems from chronic diseases, and ease pain. But medicines can also cause harmful reactions if not used ... You can help prevent errors by Knowing your medicines. Keep a list of the names of your ...

  16. 38 CFR 3.361 - Benefits under 38 U.S.C. 1151(a) for additional disability or death due to hospital care, medical...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ..., examination, training and rehabilitation services, or compensated work therapy program. 3.361 Section 3.361..., examination, training and rehabilitation services, or compensated work therapy program. (a) Claims subject to... § 3.358. (2) Compensated Work Therapy. With respect to claims alleging disability or death due...

  17. Suitability of live yeast addition to alleviate the adverse effects due to the restriction of the time of access to feed in sheep fed only pasture.

    PubMed

    Pérez-Ruchel, A; Repetto, J L; Cajarville, C

    2013-12-01

    The effect of yeast addition on intake and digestive utilization of pasture was studied in ovines under restricted time of access to forage. Eighteen wethers housed in metabolic cages and fed fresh forage (predominantly Lotus corniculatus) were randomly assigned to three treatments: forage available all day (AD); forage available only 6 h/day (R) and forage available only 6 h/day plus live Saccharomyces cerevisiae yeast (RY). Feed intake and digestibility, feeding behaviour, kinetics of passage, ruminal pH and ammonia concentration, nitrogen balance and microbial nitrogen synthesis (MNS) were determined in vivo, and ruminal liquor activity of animals was evaluated in vitro. Restricted animals consumed less than those fed all day but achieved more than 75% of the intake and spent less time ruminating (p = 0.014). Although animals without restriction consumed more feed, they had a lower rate of passage (p = 0.030). The addition of yeast affected neither intake nor feeding behaviour, but increased digestibility. Organic matter digestibility tended to increase by 11% with yeast addition (p = 0.051), mainly by a rise in NDF (27%, p = 0.032) and ADF digestibility (37%, p = 0.051). Ingested and retained N were lower in restricted animals, as was MNS (p ≤ 0.045). The use of yeasts did not significantly change the N balance or MNS, but retained N tended to be higher in supplemented animals (p = 0.090). Neither ruminal pH nor ammonia concentrations were affected by the restriction, but restricted animals had a lower ruminal activity, evidenced by a lower volume of gas (p = 0.020). The addition of yeast overcame this limitation, as noted by a higher volume of gas from inocula of supplemented animals (p = 0.015). Yeast addition emerged as a useful tool to improve digestibility of forage cell walls in ovines under restricted time of access to forage. PMID:23020124

  18. Ferrite Formation Dynamics and Microstructure Due to Inclusion Engineering in Low-Alloy Steels by Ti2O3 and TiN Addition

    NASA Astrophysics Data System (ADS)

    Mu, Wangzhong; Shibata, Hiroyuki; Hedström, Peter; Jönsson, Pär Göran; Nakajima, Keiji

    2016-08-01

    The dynamics of intragranular ferrite (IGF) formation in inclusion engineered steels with either Ti2O3 or TiN addition were investigated using in situ high temperature confocal laser scanning microscopy. Furthermore, the chemical composition of the inclusions and the final microstructure after continuous cooling transformation were investigated using electron probe microanalysis and electron backscatter diffraction, respectively. It was found that there is a significant effect of the chemical composition of the inclusions, the cooling rate, and the prior austenite grain size on the phase fractions and the starting temperatures of IGF and grain boundary ferrite (GBF). The fraction of IGF is larger in the steel with Ti2O3 addition compared to the steel with TiN addition after the same thermal cycle has been imposed. The reason for this difference is the higher potency of the TiOx phase as nucleation sites for IGF formation compared to the TiN phase, which was supported by calculations using classical nucleation theory. The IGF fraction increases with increasing prior austenite grain size, while the fraction of IGF in both steels was the highest for the intermediate cooling rate of 70 °C/min, since competing phase transformations were avoided; the structure of the IGF was, however, refined with increasing cooling rate. Finally, regarding the starting temperatures of IGF and GBF, they decrease with increasing cooling rate and the starting temperature of GBF decreases with increasing grain size, while the starting temperature of IGF remains constant irrespective of grain size.

  19. Ferrite Formation Dynamics and Microstructure Due to Inclusion Engineering in Low-Alloy Steels by Ti2O3 and TiN Addition

    NASA Astrophysics Data System (ADS)

    Mu, Wangzhong; Shibata, Hiroyuki; Hedström, Peter; Jönsson, Pär Göran; Nakajima, Keiji

    2016-03-01

    The dynamics of intragranular ferrite (IGF) formation in inclusion engineered steels with either Ti2O3 or TiN addition were investigated using in situ high temperature confocal laser scanning microscopy. Furthermore, the chemical composition of the inclusions and the final microstructure after continuous cooling transformation were investigated using electron probe microanalysis and electron backscatter diffraction, respectively. It was found that there is a significant effect of the chemical composition of the inclusions, the cooling rate, and the prior austenite grain size on the phase fractions and the starting temperatures of IGF and grain boundary ferrite (GBF). The fraction of IGF is larger in the steel with Ti2O3 addition compared to the steel with TiN addition after the same thermal cycle has been imposed. The reason for this difference is the higher potency of the TiOx phase as nucleation sites for IGF formation compared to the TiN phase, which was supported by calculations using classical nucleation theory. The IGF fraction increases with increasing prior austenite grain size, while the fraction of IGF in both steels was the highest for the intermediate cooling rate of 70 °C/min, since competing phase transformations were avoided; the structure of the IGF was, however, refined with increasing cooling rate. Finally, regarding the starting temperatures of IGF and GBF, they decrease with increasing cooling rate and the starting temperature of GBF decreases with increasing grain size, while the starting temperature of IGF remains constant irrespective of grain size.

  20. Observing the Coupling of the Toroidal Plasma Rotation Due to m/n = 2/1 and m/n = 3/2 Neoclassical Tearing Modes by Uncorrected n = 2 Error Field in DIII-D

    NASA Astrophysics Data System (ADS)

    Okabayashi, M.; Tobias, B. J.; Strait, E. J.; La Haye, R. J.; Paz-Soldan, C.; Shiraki, D.; Hanson, J. M.

    2014-10-01

    Injection of electromagnetic torque by tearing mode rotation control feedback can sustain rotation of the 2/1 NTM, avoiding mode locking for several seconds after the mode appearance. This feedback process optimizes the phasing of the rotating applied n = 1 field relative to the mode, hence preventing the locking and simultaneously compensating the n = 1 error field (EF). In high beta discharges, the large amplitude sustained 2/1 NTM reduces the local toroidal rotation to near zero at the q = 3/2 surface as well as at the q = 2 surface, implying that the angular momentum is coupled between the two rational surfaces. The mode at the q = 3/2 surface is identified as an m/n = 3/2 mode. The mode is presumably affected by the n = 2 EF as well as the remaining uncorrected n = 1 EF. A possible process of sustained NTM with velocity shear due to the Er buildup by large size magnetic islands will also be discussed. Work supported by the US Department of Energy under DE-AC02-09CH11466, DE-FC02-04ER54698, DE-AC05-00OR22725, and DE-FG02-04ER54761.

  1. Uncorrected refractive errors

    PubMed Central

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error; of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, results in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship. PMID:22944755

  2. Uncorrected refractive errors.

    PubMed

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error; of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, results in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship. PMID:22944755

  3. Reduced Cardiac Contractile Force Due to Sympathovagal Dysfunction Mediates the Additive Hypotensive Effects of Limited-Access Regimens of Ethanol and Clonidine in Spontaneously Hypertensive Rats

    PubMed Central

    El-Mas, Mahmoud M.

    2010-01-01

    Our previous attempts to investigate the long-term hemodynamic interaction between ethanol and clonidine in telemetered spontaneously hypertensive rats (SHRs) were hampered by the lack of a sustained hypotensive response to continuous clonidine exposure. This limitation was circumvented when we adopted a limited-access clonidine (8:30 AM–4:30 PM) paradigm in a recent study. The latter paradigm was employed here to evaluate the ethanol-clonidine interaction and possible roles of myocardial function and autonomic control in this interaction. Changes in blood pressure (BP), heart rate, maximum rate of rise in BP wave (+dP/dtmax), and spectral cardiovascular autonomic profiles were measured by radiotelemetry in pair-fed SHRs receiving clonidine (150 μg/kg/day), ethanol [2.5% (w/v)], or their combination during the day for 12 weeks. Ethanol or clonidine elicited long-term decreases in BP, and their combination caused additive hypotensive response. Significant reductions in +dP/dtmax were observed upon concurrent treatment with ethanol and clonidine, in contrast to no effect for individual treatment. In addition, the combined treatment increased the high-frequency (HF) spectral band of interbeat interval (IBI-HFnu, 0.75–3 Hz) and decreased low-frequency (IBI-LFnu, 0.2–0.75 Hz) bands and IBILF/HF ratios. Clonidine-evoked reductions in plasma and urine norepinephrine and BP-LF spectral power (measure of vasomotor sympathetic tone) were not affected by ethanol. In conclusion, concurrent treatment with ethanol and clonidine shifts the sympathovagal balance toward parasympathetic dominance and elicits exaggerated hypotension as a result of a reduction in cardiac contractile force. PMID:20864507

  4. Estimation of Model Error Variances During Data Assimilation

    NASA Technical Reports Server (NTRS)

    Dee, Dick

    2003-01-01

    Data assimilation is all about understanding the error characteristics of the data and models that are used in the assimilation process. Reliable error estimates are needed to implement observational quality control, bias correction of observations and model fields, and intelligent data selection. Meaningful covariance specifications are obviously required for the analysis as well, since the impact of any single observation strongly depends on the assumed structure of the background errors. Operational atmospheric data assimilation systems still rely primarily on climatological background error covariances. To obtain error estimates that reflect both the character of the flow and the current state of the observing system, it is necessary to solve three problems: (1) how to account for the short-term evolution of errors in the initial conditions; (2) how to estimate the additional component of error caused by model defects; and (3) how to compute the error reduction in the analysis due to observational information. Various approaches are now available that provide approximate solutions to the first and third of these problems. However, the useful accuracy of these solutions very much depends on the size and character of the model errors and the ability to account for them. Model errors represent the real-world forcing of the error evolution in a data assimilation system. Clearly, meaningful model error estimates and/or statistics must be based on information external to the model itself. The most obvious information source is observational, and since the volume of available geophysical data is growing rapidly, there is some hope that a purely statistical approach to model error estimation can be viable. This requires that the observation errors themselves are well understood and quantifiable. We will discuss some of these challenges and present a new sequential scheme for estimating model error variances from observations in the context of an atmospheric data

  5. Examining food additives and spices for their anti-oxidant ability to counteract oxidative damage due to chronic exposure to free radicals from environmental pollutants

    NASA Astrophysics Data System (ADS)

    Martinez, Raul A., III

    The main objective of this work was to examine food additives and spices (from the Apiaceae family) to determine their antioxidant properties to counteract oxidative stress (damage) caused by environmental pollutants. Environmental pollutants generate reactive oxygen species and reactive nitrogen species. Star anise essential oil showed lower antioxidant activity than extracts using DPPH scavenging. Dill Seed -- Anethum Graveolens: the monoterpene components of dill were shown to activate the enzyme glutathione-S-transferase, which helped attach the antioxidant molecule glutathione to oxidized molecules that would otherwise do damage in the body. The antioxidant activity of extracts of dill was comparable with ascorbic acid, alpha-tocopherol, and quercetin in in-vitro systems. Black Cumin -- Nigella Sativa: antioxidant activity was evaluated using the 1,1-diphenyl-2-picrylhydrazyl (DPPH) radical scavenging method. Positive correlations were found between the total phenolic content in the black cumin extracts and their antioxidant activities. Caraway -- Carum Carvi: the antioxidant activity was evaluated by the scavenging effects of 1,1'-diphenyl-2-picrylhydrazyl (DPPH). Caraway showed strong antioxidant activity. Cumin -- Cuminum Cyminum: the major polyphenolics were extracted and separated by HPTLC. The antioxidant activity of the cumin extract was tested on 1,1'-diphenyl-2-picrylhydrazyl (DPPH) free radical scavenging. Coriander -- Coriandrum Sativum: the antioxidant and free-radical-scavenging properties of the seeds were studied, and it was also investigated whether the administration of seeds curtails oxidative stress. Coriander seed powder not only inhibited the process of peroxidative damage, but also significantly reactivated the antioxidant enzymes and antioxidant levels. The seeds also showed scavenging activity against superoxides and hydroxyl radicals. The total polyphenolic content of the seeds was found to be 12.2 gallic acid equivalents (GAE)/g while the total flavonoid content

  6. Examining food additives and spices for their anti-oxidant ability to counteract oxidative damage due to chronic exposure to free radicals from environmental pollutants

    NASA Astrophysics Data System (ADS)

    Martinez, Raul A., III

    The main objective of this work was to examine food additives and spices (from the Apiaceae family) to determine their antioxidant properties to counteract oxidative stress (damage) caused by environmental pollutants. Environmental pollutants generate reactive oxygen species and reactive nitrogen species. Star anise essential oil showed lower antioxidant activity than extracts using DPPH scavenging. Dill Seed -- Anethum Graveolens: the monoterpene components of dill were shown to activate the enzyme glutathione-S-transferase, which helped attach the antioxidant molecule glutathione to oxidized molecules that would otherwise do damage in the body. The antioxidant activity of extracts of dill was comparable with ascorbic acid, alpha-tocopherol, and quercetin in in-vitro systems. Black Cumin -- Nigella Sativa: antioxidant activity was evaluated using the 1,1-diphenyl-2-picrylhydrazyl (DPPH) radical scavenging method. Positive correlations were found between the total phenolic content in the black cumin extracts and their antioxidant activities. Caraway -- Carum Carvi: the antioxidant activity was evaluated by the scavenging effects of 1,1'-diphenyl-2-picrylhydrazyl (DPPH). Caraway showed strong antioxidant activity. Cumin -- Cuminum Cyminum: the major polyphenolics were extracted and separated by HPTLC. The antioxidant activity of the cumin extract was tested on 1,1'-diphenyl-2-picrylhydrazyl (DPPH) free radical scavenging. Coriander -- Coriandrum Sativum: the antioxidant and free-radical-scavenging properties of the seeds were studied, and it was also investigated whether the administration of seeds curtails oxidative stress. Coriander seed powder not only inhibited the process of peroxidative damage, but also significantly reactivated the antioxidant enzymes and antioxidant levels. The seeds also showed scavenging activity against superoxides and hydroxyl radicals. The total polyphenolic content of the seeds was found to be 12.2 gallic acid equivalents (GAE)/g while the total flavonoid content

  7. Medical error and related factors during internship and residency.

    PubMed

    Ahmadipour, Habibeh; Nahid, Mortazavi

    2015-01-01

    It is difficult to determine the real incidence of medical errors due to the lack of a precise definition of errors, as well as the failure to report them under certain circumstances. We carried out a cross- sectional study in Kerman University of Medical Sciences, Iran in 2013. The participants were selected through the census method. The data were collected using a self-administered questionnaire, which consisted of questions on the participants' demographic data and questions on the medical errors committed. The data were analysed by SPSS 19. It was found that 270 participants had committed medical errors. There was no significant difference in the frequency of errors committed by interns and residents. In the case of residents, the most common error was misdiagnosis and in that of interns, errors related to history-taking and physical examination. Considering that medical errors are common in the clinical setting, the education system should train interns and residents to prevent the occurrence of errors. In addition, the system should develop a positive attitude among them so that they can deal better with medical errors. PMID:26592783

  8. Triangulation Error Analysis for the Barium Ion Cloud Experiment. M.S. Thesis - North Carolina State Univ.

    NASA Technical Reports Server (NTRS)

    Long, S. A. T.

    1973-01-01

    The triangulation method developed specifically for the Barium Ion Cloud Project is discussed. Expressions for the four displacement errors, the three slope errors, and the curvature error in the triangulation solution due to a probable error in the lines-of-sight from the observation stations to points on the cloud are derived. The triangulation method is then used to determine the effect of the following on these different errors in the solution: the number and location of the stations, the observation duration, east-west cloud drift, the number of input data points, and the addition of extra cameras to one of the stations. The pointing displacement errors and the pointing slope errors are compared. The displacement errors in the solution due to a probable error in the position of a moving station, plus the weighting factors for the data from the moving station, are also determined.

  9. Error Modeling of Multi-baseline Optical Truss. Part II; Application to SIM Metrology Truss Field Dependent Error

    NASA Technical Reports Server (NTRS)

    Zhang, Liwei Dennis; Milman, Mark; Korechoff, Robert

    2004-01-01

    The current design of the Space Interferometry Mission (SIM) employs a 19 laser-metrology-beam system (also called L19 external metrology truss) to monitor changes of distances between the fiducials of the flight system's multiple baselines. The function of the external metrology truss is to aid in the determination of the time-variations of the interferometer baseline. The largest contributor to truss error occurs in SIM wide-angle observations when the articulation of the siderostat mirrors (in order to gather starlight from different sky coordinates) brings to light systematic errors due to offsets at levels of instrument components (which include corner cube retro-reflectors, etc.). This error is labeled external metrology wide-angle field-dependent error. Physics-based model of field-dependent error at single metrology gauge level is developed and linearly propagated to errors in interferometer delay. In this manner delay error sensitivity to various error parameters or their combination can be studied using eigenvalue/eigenvector analysis. Also validation of physics-based field-dependent model on SIM testbed lends support to the present approach. As a first example, dihedral error model is developed for the corner cubes (CC) attached to the siderostat mirrors. Then the delay errors due to this effect can be characterized using the eigenvectors of composite CC dihedral error. The essence of the linear error model is contained in an error-mapping matrix. A corresponding Zernike component matrix approach is developed in parallel, first for convenience of describing the RMS of errors across the field-of-regard (FOR), and second for convenience of combining with additional models. Average and worst case residual errors are computed when various orders of field-dependent terms are removed from the delay error. Results of the residual errors are important in arriving at external metrology system component requirements. Double CCs with ideally co-incident vertices

  10. Dose error analysis for a scanned proton beam delivery system.

    PubMed

    Coutrakon, G; Wang, N; Miller, D W; Yang, Y

    2010-12-01

    All particle beam scanning systems are subject to dose delivery errors due to errors in position, energy and intensity of the delivered beam. In addition, finite scan speeds, beam spill non-uniformities, and delays in detector, detector electronics and magnet responses will all contribute errors in delivery. In this paper, we present dose errors for an 8 × 10 × 8 cm(3) target of uniform water equivalent density with 8 cm spread out Bragg peak and a prescribed dose of 2 Gy. Lower doses are also analyzed and presented later in the paper. Beam energy errors and errors due to limitations of scanning system hardware have been included in the analysis. By using Gaussian shaped pencil beams derived from measurements in the research room of the James M Slater Proton Treatment and Research Center at Loma Linda, CA and executing treatment simulations multiple times, statistical dose errors have been calculated in each 2.5 mm cubic voxel in the target. These errors were calculated by delivering multiple treatments to the same volume and calculating the rms variation in delivered dose at each voxel in the target. The variations in dose were the result of random beam delivery errors such as proton energy, spot position and intensity fluctuations. The results show that with reasonable assumptions of random beam delivery errors, the spot scanning technique yielded an rms dose error in each voxel less than 2% or 3% of the 2 Gy prescribed dose. These calculated errors are within acceptable clinical limits for radiation therapy. PMID:21076200

  11. Error analysis in laparoscopic surgery

    NASA Astrophysics Data System (ADS)

    Gantert, Walter A.; Tendick, Frank; Bhoyrul, Sunil; Tyrrell, Dana; Fujino, Yukio; Rangel, Shawn; Patti, Marco G.; Way, Lawrence W.

    1998-06-01

    Iatrogenic complications in laparoscopic surgery, as in any field, stem from human error. In recent years, cognitive psychologists have developed theories for understanding and analyzing human error, and the application of these principles has decreased error rates in the aviation and nuclear power industries. The purpose of this study was to apply error analysis to laparoscopic surgery and evaluate its potential for preventing complications. Our approach is based on James Reason's framework using a classification of errors according to three performance levels: at the skill-based performance level, slips are caused by attention failures, and lapses result from memory failures. Rule-based mistakes constitute the second level. Knowledge-based mistakes occur at the highest performance level and are caused by shortcomings in conscious processing. These errors committed by the performer 'at the sharp end' occur in typical situations which oftentimes are brought about by already built-in latent system failures. We present a series of case studies in laparoscopic surgery in which errors are classified and the influence of intrinsic failures and extrinsic system flaws is evaluated. Most serious technical errors in lap surgery stem from a rule-based or knowledge-based mistake triggered by cognitive underspecification due to incomplete or illusory visual input information. Error analysis in laparoscopic surgery should be able to improve human performance, and it should detect and help eliminate system flaws. Complication rates in laparoscopic surgery due to technical errors can thus be considerably reduced.

  12. Compact disk error measurements

    NASA Technical Reports Server (NTRS)

    Howe, D.; Harriman, K.; Tehranchi, B.

    1993-01-01

    The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard decision (i.e., 1-bit error flags) and soft decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross-Interleaved Reed-Solomon Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.

  13. Discretization vs. Rounding Error in Euler's Method

    ERIC Educational Resources Information Center

    Borges, Carlos F.

    2011-01-01

    Euler's method for solving initial value problems is an excellent vehicle for observing the relationship between discretization error and rounding error in numerical computation. Reductions in stepsize, in order to decrease discretization error, necessarily increase the number of steps and so introduce additional rounding error. The problem is…
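    The trade-off described in this abstract can be reproduced in a few lines. In the sketch below, Euler's method is applied to y' = y, y(0) = 1 on [0, 1] in single precision (chosen to make the rounding floor visible): the error first shrinks with the stepsize, then stops improving once the rounding error accumulated over the growing number of steps dominates the discretization error.

```python
import numpy as np

def euler(h, dtype=np.float32):
    """Euler's method for y' = y, y(0) = 1 on [0, 1], carried out in the given precision."""
    n = int(round(1.0 / h))
    y = dtype(1.0)
    h = dtype(h)
    for _ in range(n):
        y = y + h * y
    return float(y)

exact = np.exp(1.0)
for h in [1e-1, 1e-2, 1e-3, 1e-4, 1e-5, 1e-6]:
    err = abs(euler(h) - exact)
    print(f"h = {h:.0e}   steps = {int(1/h):>7}   |error| = {err:.2e}")
```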

  14. Robustness and modeling error characterization

    NASA Technical Reports Server (NTRS)

    Lehtomaki, N. A.; Castanon, D. A.; Sandell, N. R., Jr.; Levy, B. C.; Athans, M.; Stein, G.

    1984-01-01

    The results on robustness theory presented here are extensions of those given in Lehtomaki et al., (1981). The basic innovation in these new results is that they utilize minimal additional information about the structure of the modeling error, as well as its magnitude, to assess the robustness of feedback systems for which robustness tests based on the magnitude of modeling error alone are inconclusive.

  15. Smoothing error pitfalls

    NASA Astrophysics Data System (ADS)

    von Clarmann, T.

    2014-09-01

    The difference due to the content of a priori information between a constrained retrieval and the true atmospheric state is usually represented by a diagnostic quantity called smoothing error. In this paper it is shown that, regardless of the usefulness of the smoothing error as a diagnostic tool in its own right, the concept of the smoothing error as a component of the retrieval error budget is questionable because it is not compliant with Gaussian error propagation. The reason for this is that the smoothing error does not represent the expected deviation of the retrieval from the true state but the expected deviation of the retrieval from the atmospheric state sampled on an arbitrary grid, which is itself a smoothed representation of the true state; in other words, to characterize the full loss of information with respect to the true atmosphere, the effect of the representation of the atmospheric state on a finite grid also needs to be considered. The idea of a sufficiently fine sampling of this reference atmospheric state is problematic because atmospheric variability occurs on all scales, implying that there is no limit beyond which the sampling is fine enough. Even the idealization of infinitesimally fine sampling of the reference state does not help, because the smoothing error is applied to quantities which are only defined in a statistical sense, which implies that a finite volume of sufficient spatial extent is needed to meaningfully discuss temperature or concentration. Smoothing differences, however, which play a role when measurements are compared, are still a useful quantity if the covariance matrix involved has been evaluated on the comparison grid rather than resulting from interpolation and if the averaging kernel matrices have been evaluated on a grid fine enough to capture all atmospheric variations that the instruments are sensitive to. This is, under the assumptions stated, because the undefined component of the smoothing error, which is the

  16. [Diagnostic Errors in Medicine].

    PubMed

    Buser, Claudia; Bankova, Andriyana

    2015-12-01

    The recognition of diagnostic errors in everyday practice can help improve patient safety. The most common diagnostic errors are the cognitive errors, followed by system-related errors and no fault errors. The cognitive errors often result from mental shortcuts, known as heuristics. The rate of cognitive errors can be reduced by a better understanding of heuristics and the use of checklists. The autopsy as a retrospective quality assessment of clinical diagnosis has a crucial role in learning from diagnostic errors. Diagnostic errors occur more often in primary care in comparison to hospital settings. On the other hand, the inpatient errors are more severe than the outpatient errors. PMID:26649954

  17. Modular error embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Ettinger, J. Mark

    1999-01-01

    A method of embedding auxiliary information into the digital representation of host data containing noise in the low-order bits. The method applies to digital data representing analog signals, for example digital images. The method reduces the error introduced by other methods that replace the low-order bits with auxiliary information. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user through use of a digital key. The modular error embedding method includes a process to permute the order in which the host data values are processed. The method doubles the amount of auxiliary information that can be added to host data values, in comparison with bit-replacement methods for high bit-rate coding. The invention preserves human perception of the meaning and content of the host data, permitting the addition of auxiliary data in the amount of 50% or greater of the original host data.
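    The patented modular scheme is not detailed in the abstract; the sketch below shows only the baseline idea it improves upon: auxiliary bits are hidden in the least-significant bits of host samples, with the processing order permuted by a numeric key so that only a key holder can recover them. The key, array sizes, and plain bit-replacement embedding are illustrative assumptions.

```python
import numpy as np

def embed(host, aux_bits, key):
    """Hide aux_bits in the least-significant bits of host (uint8), in a keyed order."""
    out = host.copy().ravel()
    order = np.random.default_rng(key).permutation(out.size)[: len(aux_bits)]
    out[order] = (out[order] & 0xFE) | np.asarray(aux_bits, dtype=np.uint8)
    return out.reshape(host.shape)

def extract(stego, n_bits, key):
    """Recover the hidden bits using the same keyed ordering."""
    order = np.random.default_rng(key).permutation(stego.size)[:n_bits]
    return (stego.ravel()[order] & 1).tolist()

host = np.random.default_rng(0).integers(0, 256, size=(64, 64), dtype=np.uint8)
message = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed(host, message, key=1234)
assert extract(stego, len(message), key=1234) == message
print("max per-sample change:", int(np.max(np.abs(stego.astype(int) - host.astype(int)))))
```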

  18. Statistical errors in Monte Carlo estimates of systematic errors

    NASA Astrophysics Data System (ADS)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k^2. The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.
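    A toy comparison of the two methods described above, under the assumption of a purely linear response: the observable depends linearly on three systematic parameters plus the statistical noise of a single MC run. The unisim estimate shifts one parameter at a time by one standard deviation; the multisim estimate takes the spread over many runs with all parameters drawn from their assumed distributions. Both are compared with the exact combined systematic error; note that both estimates also carry the MC statistical noise, which is the effect analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

coeff = np.array([0.8, -0.5, 0.3])   # sensitivity of the observable to each systematic
sigma = np.array([1.0, 1.0, 1.0])    # assumed 1-sigma size of each systematic
mc_noise = 0.2                       # statistical noise of a single MC run

def mc_run(params):
    """Toy MC: observable = linear response to systematics + statistical fluctuation."""
    return coeff @ params + rng.normal(0.0, mc_noise)

nominal = mc_run(np.zeros(3))

# Unisim: one MC run per parameter, shifted by +1 sigma.
shifts = [mc_run(sigma[i] * np.eye(3)[i]) - nominal for i in range(3)]
unisim_total = np.sqrt(np.sum(np.square(shifts)))

# Multisim: many MC runs with all parameters drawn from their distributions.
multis = [mc_run(rng.normal(0.0, sigma)) for _ in range(200)]
multisim_total = np.std(multis)

exact = np.sqrt(np.sum((coeff * sigma) ** 2))
print(f"exact combined systematic: {exact:.3f}")
print(f"unisim estimate:           {unisim_total:.3f}")
print(f"multisim estimate:         {multisim_total:.3f}")
```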

  19. Exploring the impact of forcing error characteristics on physically based snow simulations within a global sensitivity analysis framework

    NASA Astrophysics Data System (ADS)

    Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.

    2014-12-01

    Physically based models provide insights into key hydrologic processes, but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology. Here we employ global sensitivity analysis to explore how different error types (i.e., bias, random errors), different error distributions, and different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use Sobol' global sensitivity analysis, which is typically used for model parameters, but adapted here for testing model sensitivity to co-existing errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 520 000 Monte Carlo simulations across four sites and four different scenarios. Model outputs were generally (1) more sensitive to forcing biases than random errors, (2) less sensitive to forcing error distributions, and (3) sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a significant impact depending on forcing error magnitudes. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.
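    A minimal sketch of the kind of Sobol' analysis described, using the pick-freeze (Saltelli-style) estimator of first-order indices on a made-up stand-in for the snow model; the model function, the three forcing errors, and the sample size are assumptions for illustration only.

      import numpy as np

      rng = np.random.default_rng(1)

      def toy_snow_model(x):
          # stand-in for the physically based snow model: output responds strongly to a precipitation
          # bias (x0), weakly to a temperature random error (x1) and a radiation bias (x2)
          return 3.0 * x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.8 * x[:, 2]

      n, d = 20_000, 3
      A = rng.normal(size=(n, d))                 # two independent sample matrices of forcing errors
      B = rng.normal(size=(n, d))
      yA, yB = toy_snow_model(A), toy_snow_model(B)
      var_y = yA.var()

      # first-order Sobol' indices via the pick-freeze estimator:
      # replace one column of B with the matching column of A and correlate
      for i in range(d):
          AB = B.copy()
          AB[:, i] = A[:, i]
          S_i = np.mean(yA * (toy_snow_model(AB) - yB)) / var_y
          print(f"forcing error {i}: first-order sensitivity index ~ {S_i:.2f}")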

  20. Sun compass error model

    NASA Technical Reports Server (NTRS)

    Blucker, T. J.; Ferry, W. W.

    1971-01-01

    An error model is described for the Apollo 15 sun compass, a contingency navigational device. Field test data are presented along with significant results of the test. The errors reported include a random error resulting from tilt in leveling the sun compass, a random error because of observer sighting inaccuracies, a bias error because of mean tilt in compass leveling, a bias error in the sun compass itself, and a bias error because the device is leveled to the local terrain slope.
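    The abstract lists the error components but not their magnitudes; a toy Monte Carlo like the one below (with invented 1-sigma and bias values in degrees) shows how such a model separates the random and bias contributions to the azimuth error.

      import numpy as np

      rng = np.random.default_rng(2)
      n = 100_000

      # hypothetical 1-sigma and bias magnitudes in degrees (not the report's values)
      tilt_random  = rng.normal(scale=0.5, size=n)   # random error from tilt in levelling the compass
      sight_random = rng.normal(scale=0.3, size=n)   # random error from observer sighting inaccuracy
      tilt_bias, device_bias, terrain_bias = 0.2, 0.1, 0.15

      azimuth_error = tilt_random + sight_random + tilt_bias + device_bias + terrain_bias
      print("total bias (mean) :", azimuth_error.mean())   # sum of the three bias terms
      print("random part (std) :", azimuth_error.std())    # combination of the two random terms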

  1. Interpolation Errors in Spectrum Analyzers

    NASA Technical Reports Server (NTRS)

    Martin, J. L.

    1996-01-01

    To obtain the proper measurement amplitude with a spectrum analyzer, the correct frequency-dependent transducer factor must be added to the voltage measured by the transducer. This report examines how entering transducer factors into a spectrum analyzer can cause significant errors in field amplitude due to the misunderstanding of the analyzer's interpolation methods. It also discusses how to reduce these errors to obtain a more accurate field amplitude reading.
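    The report's specific cases are not reproduced here, but the following sketch shows the class of error involved: if transducer factors tabulated at log-spaced frequencies are interpolated linearly in frequency rather than linearly in log-frequency, the factor applied at an intermediate frequency, and hence the reported field amplitude, can be off by several dB. The table values and measurement frequency are invented.

      import numpy as np

      # transducer (e.g. antenna) factor in dB, tabulated at sparse, log-spaced frequencies
      f_tab = np.array([1e6, 1e7, 1e8])        # Hz
      af_tab = np.array([10.0, 22.0, 35.0])    # dB
      f_meas = 3e7                             # frequency of the measured signal

      # interpolation linear in frequency (one plausible analyzer default)
      af_lin = np.interp(f_meas, f_tab, af_tab)
      # interpolation linear in log-frequency (what log-spaced tables usually assume)
      af_log = np.interp(np.log10(f_meas), np.log10(f_tab), af_tab)

      print(af_lin, af_log, af_lin - af_log)   # the difference is the amplitude error in dB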

  2. Sources of Error in UV Radiation Measurements

    PubMed Central

    Larason, Thomas C.; Cromer, Christopher L.

    2001-01-01

    Increasing commercial, scientific, and technical applications involving ultraviolet (UV) radiation have led to the demand for improved understanding of the performance of instrumentation used to measure this radiation. There has been an effort by manufacturers of UV measuring devices (meters) to produce simple, optically filtered sensor systems to accomplish the varied measurement needs. We address common sources of measurement errors using these meters. The uncertainty in the calibration of the instrument depends on the response of the UV meter to the spectrum of the sources used and its similarity to the spectrum of the quantity to be measured. In addition, large errors can occur due to out-of-band, non-linear, and non-ideal geometric or spatial response of the UV meters. Finally, in many applications, how well the response of the UV meter approximates the presumed action spectrum needs to be understood for optimal use of the meters.

  3. Radar error statistics for the space shuttle

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1979-01-01

    Radar error statistics for C-band and S-band radars, recommended for use with the ground-tracking programs that process space shuttle tracking data, are presented. The statistics are divided into two parts: bias error statistics, using the subscript B, and high frequency error statistics, using the subscript q. Bias errors may be slowly varying to constant. High frequency random errors (noise) are rapidly varying and may or may not be correlated from sample to sample. Bias errors were mainly due to hardware defects and to errors in correction for atmospheric refraction effects. High frequency noise was mainly due to hardware and to atmospheric scintillation. Three types of atmospheric scintillation were identified: horizontal, vertical, and line of sight. This was the first time that horizontal and line of sight scintillations were identified.

  4. Food additives.

    PubMed

    Berglund, F

    1978-01-01

    The use of additives to food fulfils many purposes, as shown by the index issued by the Codex Committee on Food Additives: Acids, bases and salts; Preservatives; Antioxidants and antioxidant synergists; Anticaking agents; Colours; Emulsifiers; Thickening agents; Flour-treatment agents; Extraction solvents; Carrier solvents; Flavours (synthetic); Flavour enhancers; Non-nutritive sweeteners; Processing aids; Enzyme preparations. Many additives occur naturally in foods, but this does not exclude toxicity at higher levels. Some food additives are nutrients, or even essential nutrients, e.g. NaCl. Examples are known of food additives causing toxicity in man even when used according to regulations, e.g. cobalt in beer. In other instances, poisoning has been due to carry-over, e.g. by nitrate in cheese whey - when used for artificial feed for infants. Poisonings also occur as the result of the permitted substance being added at too high levels, by accident or carelessness, e.g. nitrite in fish. Finally, there are examples of hypersensitivity to food additives, e.g. to tartrazine and other food colours. The toxicological evaluation, based on animal feeding studies, may be complicated by impurities, e.g. orthotoluene-sulfonamide in saccharin; by transformation or disappearance of the additive in food processing or storage, e.g. bisulfite in raisins; by reaction products with food constituents, e.g. formation of ethylurethane from diethyl pyrocarbonate; by metabolic transformation products, e.g. formation in the gut of cyclohexylamine from cyclamate. Metabolic end products may differ in experimental animals and in man: guanylic acid and inosinic acid are metabolized to allantoin in the rat but to uric acid in man. The magnitude of the safety margin in man of the Acceptable Daily Intake (ADI) is not identical to the "safety factor" used when calculating the ADI. The symptoms of Chinese Restaurant Syndrome, although not hazardous, furthermore illustrate that the whole ADI

  5. Errors in laparoscopic surgery: what surgeons should know.

    PubMed

    Galleano, R; Franceschi, A; Ciciliot, M; Falchero, F; Cuschieri, A

    2011-04-01

    Some two decades after its introduction, minimal access surgery (MAS) is still evolving. Undoubtedly, its significant uptake worldwide is due to its clinical benefits to patient outcome. These benefits include reduced traumatic insult, reduced pain, earlier return of bowel function, decreased disability, shorter hospitalization and better cosmetic results. Nonetheless, complications due to the laparoscopic approach are not rare, as documented by several studies on task-specific or procedure-related MAS morbidity. In all these instances, error analysis research has demonstrated that an understanding of the underlying causes of these complications requires a comprehensive approach addressing the entire system related to the procedure for identification and characterization of the errors ultimately responsible for the morbidity. The present review covers the definition, taxonomy and incidence of errors in medicine, with special reference to MAS. In addition, possible root causes of adverse events in laparoscopy are explored and existing methods to study errors are reviewed. Finally, specific areas requiring further human factors research to enhance the safety of patients undergoing laparoscopic operations are identified. The hope is that awareness of causes and mechanisms of errors may reduce the incidence of errors in clinical practice for the final benefit of patients. PMID:21593712

  6. Hybrid Models for Trajectory Error Modelling in Urban Environments

    NASA Astrophysics Data System (ADS)

    Angelatsa, E.; Parés, M. E.; Colomina, I.

    2016-06-01

    This paper tackles the first step of any strategy aiming to improve the trajectory of terrestrial mobile mapping systems in urban environments. We present an approach to model the error of terrestrial mobile mapping trajectories, combining deterministic and stochastic models. Because of the specific characteristics of urban environments, the deterministic component is modelled with non-continuous functions composed of linear shifts, drifts or polynomial functions. In addition, we introduce a stochastic error component to model the residual noise of the trajectory error function. The first step in error modelling is to determine the actual trajectory error values for several representative environments. To determine the trajectory errors as accurately as possible, (almost) error-free reference trajectories are estimated using non-semantic features extracted from a sequence of images collected with the terrestrial mobile mapping system, together with a full set of ground control points. Once these references are estimated, they are used to determine the actual errors in the terrestrial mobile mapping trajectory. Rigorous analysis of these data sets allows us to characterize the errors of a terrestrial mobile mapping system for a wide range of environments. This information will be of great use in future campaigns to improve the results of 3D point cloud generation. The proposed approach has been evaluated using real data originating from a mobile mapping campaign over an urban and controlled area of Dortmund (Germany), with harmful GNSS conditions. The mobile mapping system, which includes two laser scanners and two cameras, was mounted on a van and driven over the controlled area for around three hours. The results show the suitability of decomposing the trajectory error into non-continuous deterministic and stochastic components.
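    A minimal sketch of the proposed decomposition, assuming a one-dimensional error series and known segment boundaries: the deterministic part is fitted as an independent shift-plus-drift (linear) function per segment, and whatever remains is treated as the stochastic component. The synthetic error series and the segment split are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(3)
      t = np.linspace(0.0, 600.0, 601)                       # time in seconds

      # synthetic "actual" trajectory error: a shift + drift that changes after a GNSS outage at t = 300 s
      true_err = np.where(t < 300, 0.05 + 0.002 * t, -0.30 + 0.001 * t)
      obs_err = true_err + rng.normal(scale=0.02, size=t.size)   # plus residual noise

      # deterministic component: an independent shift + drift (degree-1 polynomial) per segment
      det = np.empty_like(obs_err)
      for seg in (t < 300, t >= 300):
          det[seg] = np.polyval(np.polyfit(t[seg], obs_err[seg], deg=1), t[seg])

      # stochastic component: whatever the deterministic model does not explain
      residual = obs_err - det
      print("residual (stochastic) noise std:", residual.std())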

  7. Robust characterization of leakage errors

    NASA Astrophysics Data System (ADS)

    Wallman, Joel J.; Barnhill, Marie; Emerson, Joseph

    2016-04-01

    Leakage errors arise when the quantum state leaks out of some subspace of interest, for example, the two-level subspace of a multi-level system defining a computational ‘qubit’, the logical code space of a quantum error-correcting code, or a decoherence-free subspace. Leakage errors pose a distinct challenge to quantum control relative to the more well-studied decoherence errors and can be a limiting factor to achieving fault-tolerant quantum computation. Here we present a scalable and robust randomized benchmarking protocol for quickly estimating the leakage rate due to an arbitrary Markovian noise process on a larger system. We illustrate the reliability of the protocol through numerical simulations.
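    The published protocol's estimators are not reproduced here; the sketch below only illustrates the generic ingredient such randomized-benchmarking-style protocols rely on: fit the probability of remaining in the computational subspace versus sequence length to an exponential decay and read a leakage rate off the decay parameter. The data, model, and parameter names are assumptions.

      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(4)

      # synthetic data: probability of remaining in the qubit subspace versus sequence length m
      leak_per_gate = 0.002
      m = np.arange(1, 201, 10)
      p_subspace = (1 - leak_per_gate) ** m + rng.normal(scale=0.005, size=m.size)

      # fit p(m) = A + B * lam**m; for this no-seepage toy the leakage rate per gate is ~ 1 - lam
      def decay(m, A, B, lam):
          return A + B * lam ** m

      (A, B, lam), _ = curve_fit(decay, m, p_subspace, p0=(0.0, 1.0, 0.99))
      print("fitted decay parameter:", lam, "-> leakage rate per gate ~", 1 - lam)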

  8. Error-Based Design Space Windowing

    NASA Technical Reports Server (NTRS)

    Papila, Melih; Papila, Nilay U.; Shyy, Wei; Haftka, Raphael T.; Fitz-Coy, Norman

    2002-01-01

    Windowing of the design space is considered in order to reduce the bias errors due to low-order polynomial response surfaces (RS). Standard design space windowing (DSW) selects a region of interest by setting a requirement on the response level and checks it using global RS predictions over the design space. This approach, however, is vulnerable because RS modeling errors may lead to zooming in on the wrong region. The approach is modified by introducing an eigenvalue error measure based on a point-to-point mean squared error criterion. Two examples are presented to demonstrate the benefit of the error-based DSW.

  9. Understanding human management of automation errors

    PubMed Central

    McBride, Sara E.; Rogers, Wendy A.; Fisk, Arthur D.

    2013-01-01

    Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance. PMID:25383042

  10. Reducing medical errors through barcoding at the point of care.

    PubMed

    Nichols, James H; Bartholomew, Cathy; Brunton, Mary; Cintron, Carlos; Elliott, Sheila; McGirr, Joan; Morsi, Deborah; Scott, Sue; Seipel, Joseph; Sinha, Daisy

    2004-01-01

    Medical errors are a major concern in health care today. Errors in point-of-care testing (POCT) are particularly problematic because the test is conducted by clinical operators at the site of patient care and immediate medical action is taken on the results prior to review by the laboratory. The Performance Improvement Program at Baystate Health System, Springfield, Massachusetts, noted a number of identification errors occurring with glucose and blood gas POCT devices. Incorrect patient account numbers that were attached to POCT results prevented the results from being transmitted to the patient's medical record and appropriately billed. In the worst case, they could lead to results being transferred to the wrong patient's chart and inappropriate medical treatment. Our first action was to lock-out operators who repeatedly made identification errors (3-Strike Rule), requiring operators to be counseled and retrained after their third error. The 3-Strike Rule significantly decreased our glucose meter errors (p = 0.014) but did not have an impact on the rate of our blood gas errors (p = 0.378). Neither device approached our ultimate goal of zero tolerance. A Failure Mode and Effects Analysis (FMEA) was conducted to determine the various processes that could lead to an identification error. A primary source of system failure was the manual entry of 14 digits for each test, five numbers for operator and nine numbers for patient account identification. Patient barcoding was implemented to automate the data entry process, and after an initial familiarization period, resulted in significant improvements in error rates for both the glucose (p = 0.0007) and blood gas devices (p = 0.048). Despite the improvements, error rates with barcoding still did not achieve zero errors. Operators continued to utilize manual data entry when the barcode scan was unsuccessful or unavailable, and some patients were found to have incorrect patient account numbers due to hospital transfer

  11. Beta systems error analysis

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The atmospheric backscatter coefficient, beta, measured with an airborne CO Laser Doppler Velocimeter (LDV) system operating in a continuous wave, focused mode is discussed. The Single Particle Mode (SPM) algorithm was developed from concept through analysis of an extensive amount of data obtained with the system on board a NASA aircraft. The SPM algorithm is intended to be employed in situations where one particle at a time appears in the sensitive volume of the LDV. In addition to giving the backscatter coefficient, the SPM algorithm also produces as intermediate results the aerosol density and the aerosol backscatter cross section distribution. A second method, which measures only the atmospheric backscatter coefficient, is called the Volume Mode (VM) and was simultaneously employed. The results of these two methods differed by slightly less than an order of magnitude. The measurement uncertainties or other errors in the results of the two methods are examined.

  12. Medical Error and Moral Luck.

    PubMed

    Hubbeling, Dieneke

    2016-09-01

    This paper addresses the concept of moral luck. Moral luck is discussed in the context of medical error, especially an error of omission that occurs frequently, but only rarely has adverse consequences. As an example, a failure to compare the label on a syringe with the drug chart results in the wrong medication being administered and the patient dies. However, this error may have previously occurred many times with no tragic consequences. Discussions on moral luck can highlight conflicting intuitions. Should perpetrators receive a harsher punishment because of an adverse outcome, or should they be dealt with in the same way as colleagues who have acted similarly, but with no adverse effects? An additional element to the discussion, specifically with medical errors, is that according to the evidence currently available, punishing individual practitioners does not seem to be effective in preventing future errors. The following discussion, using relevant philosophical and empirical evidence, posits a possible solution for the moral luck conundrum in the context of medical error: namely, making a distinction between the duty to make amends and assigning blame. Blame should be assigned on the basis of actual behavior, while the duty to make amends is dependent on the outcome. PMID:26662613

  13. Field error lottery

    SciTech Connect

    Elliott, C.J.; McVey, B.; Quimby, D.C.

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 µm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  14. Error Properties of Argos Satellite Telemetry Locations Using Least Squares and Kalman Filtering

    PubMed Central

    Boyd, Janice D.; Brightsmith, Donald J.

    2013-01-01

    Study of animal movements is key for understanding their ecology and facilitating their conservation. The Argos satellite system is a valuable tool for tracking species which move long distances, inhabit remote areas, and are otherwise difficult to track with traditional VHF telemetry and are not suitable for GPS systems. Previous research has raised doubts about the magnitude of position errors quoted by the satellite service provider CLS. In addition, no peer-reviewed publications have evaluated the usefulness of the CLS-supplied error ellipses or the accuracy of the new Kalman filtering (KF) processing method. Using transmitters hung from towers and trees in southeastern Peru, we show the Argos error ellipses generally contain <25% of the true locations and therefore do not adequately describe the true location errors. We also find that KF processing does not significantly increase location accuracy. The errors for both LS and KF processing methods were found to be lognormally distributed, which has important repercussions for error calculation, statistical analysis, and data interpretation. In brief, “good” positions (location codes 3, 2, 1, A) are accurate to about 2 km, while 0 and B locations are accurate to about 5–10 km. However, due to the lognormal distribution of the errors, larger outliers are to be expected in all location codes and need to be accounted for in the user’s data processing. We evaluate five different empirical error estimates and find that 68% lognormal error ellipses provided the most useful error estimates. Longitude errors are larger than latitude errors by a factor of 2 to 3, supporting the use of elliptical rather than circular error estimates. Numerous studies over the past 15 years have also found fault with the CLS-claimed error estimates, yet CLS has failed to correct their misleading information. We hope this will be reversed in the near future. PMID:23690980
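    As a small illustration of why the lognormal finding matters for data processing, the snippet below (with simulated error distances, not the study's data) shows how strongly the mean is pulled above the median by the heavy tail and how an empirical 68% quantile can serve as an error radius.

      import numpy as np

      rng = np.random.default_rng(5)
      # simulated location errors in km, lognormally distributed as reported for Argos fixes
      err_km = rng.lognormal(mean=np.log(2.0), sigma=0.8, size=1000)

      print("median error (km) :", np.median(err_km))
      print("mean error (km)   :", err_km.mean())            # inflated by occasional large outliers
      print("68% error radius  :", np.quantile(err_km, 0.68))
      print("largest outlier   :", err_km.max())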

  15. Inborn errors of metabolism

    MedlinePlus

    Metabolism - inborn errors of ... Bodamer OA. Approach to inborn errors of metabolism. In: Goldman L, Schafer AI, eds. Goldman's Cecil Medicine . 25th ed. Philadelphia, PA: Elsevier Saunders; 2015:chap 205. Rezvani I, Rezvani G. An ...

  16. Programming Errors in APL.

    ERIC Educational Resources Information Center

    Kearsley, Greg P.

    This paper discusses and provides some preliminary data on errors in APL programming. Data were obtained by analyzing listings of 148 complete and partial APL sessions collected from student terminal rooms at the University of Alberta. Frequencies of errors for the various error messages are tabulated. The data, however, are limited because they…

  17. Evaluating the prevalence and impact of examiner errors on the Wechsler scales of intelligence: A meta-analysis.

    PubMed

    Styck, Kara M; Walsh, Shana M

    2016-01-01

    The purpose of the present investigation was to conduct a meta-analysis of the literature on examiner errors for the Wechsler scales of intelligence. Results indicate that a mean of 99.7% of protocols contained at least 1 examiner error when studies that included a failure to record examinee responses as an error were combined and a mean of 41.2% of protocols contained at least 1 examiner error when studies that ignored errors of omission were combined. Furthermore, graduate student examiners were significantly more likely to make at least 1 error on Wechsler intelligence test protocols than psychologists. However, psychologists made significantly more errors per protocol than graduate student examiners regardless of the inclusion or exclusion of failure to record examinee responses as errors. On average, 73.1% of Full-Scale IQ (FSIQ) scores changed as a result of examiner errors, whereas 15.8%-77.3% of scores on the Verbal Comprehension Index (VCI), Perceptual Reasoning Index (PRI), Working Memory Index (WMI), and Processing Speed Index changed as a result of examiner errors. In addition, results suggest that examiners tend to overestimate FSIQ scores and underestimate VCI scores. However, no strong pattern emerged for the PRI and WMI. It can be concluded that examiner errors occur frequently and impact index and FSIQ scores. Consequently, current estimates for the standard error of measurement of popular IQ tests may not adequately capture the variance due to the examiner. PMID:26011479

  18. Exploring the impact of forcing error characteristics on physically based snow simulations within a global sensitivity analysis framework

    NASA Astrophysics Data System (ADS)

    Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.

    2015-07-01

    Physically based models provide insights into key hydrologic processes but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology and which error characteristics are most important. Here we employ global sensitivity analysis to explore how (1) different error types (i.e., bias, random errors), (2) different error probability distributions, and (3) different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use the Sobol' global sensitivity analysis, which is typically used for model parameters but adapted here for testing model sensitivity to coexisting errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 840 000 Monte Carlo simulations across four sites and five different scenarios. Model outputs were (1) consistently more sensitive to forcing biases than random errors, (2) generally less sensitive to forcing error distributions, and (3) critically sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes found in areas with drifting snow, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a more dominant impact when precipitation uncertainty was due solely to gauge undercatch. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.

  19. Issues in automatic OCR error classification

    SciTech Connect

    Esakov, J.; Lopresti, D.P.; Sandberg, J.S.; Zhou, J.

    1994-12-31

    In this paper we present the surprising result that OCR errors are not always uniformly distributed across a page. Under certain circumstances, 30% or more of the errors incurred can be attributed to a single, avoidable phenomenon in the scanning process. This observation has important ramifications for work that explicitly or implicitly assumes a uniform error distribution. In addition, our experiments show that not just the quantity but also the nature of the errors is affected. This could have an impact on strategies used for post-process error correction. Results such as these can be obtained only by analyzing large quantities of data in a controlled way. To this end, we also describe our algorithm for classifying OCR errors. This is based on a well-known dynamic programming approach for determining string edit distance which we have extended to handle the types of character segmentation errors inherent to OCR.
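    The paper's exact cost model is not given in the abstract, so the following sketch only shows the standard dynamic-programming edit distance extended, as described, with unit-cost split (one true character read as two) and merge (two true characters read as one) operations to capture segmentation errors; the costs and example strings are assumptions.

      def ocr_edit_distance(truth, ocr):
          # DP over (i, j) = (chars of truth consumed, chars of OCR output consumed);
          # unit costs for substitution, insertion, deletion, split (1 -> 2) and merge (2 -> 1)
          n, m = len(truth), len(ocr)
          INF = float("inf")
          d = [[INF] * (m + 1) for _ in range(n + 1)]
          d[0][0] = 0
          for i in range(n + 1):
              for j in range(m + 1):
                  if i < n and j < m:
                      cost = 0 if truth[i] == ocr[j] else 1               # match / substitution
                      d[i + 1][j + 1] = min(d[i + 1][j + 1], d[i][j] + cost)
                  if i < n:
                      d[i + 1][j] = min(d[i + 1][j], d[i][j] + 1)         # deletion
                  if j < m:
                      d[i][j + 1] = min(d[i][j + 1], d[i][j] + 1)         # insertion
                  if i < n and j + 1 < m:
                      d[i + 1][j + 2] = min(d[i + 1][j + 2], d[i][j] + 1) # split: 'm' read as 'rn'
                  if i + 1 < n and j < m:
                      d[i + 2][j + 1] = min(d[i + 2][j + 1], d[i][j] + 1) # merge: 'cl' read as 'd'
          return d[n][m]

      print(ocr_edit_distance("modern", "rnodem"))   # split + merge: distance 2 instead of 6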

  20. Refractive errors in children.

    PubMed

    Tongue, A C

    1987-12-01

    Optical correction of refractive errors in infants and young children is indicated when the refractive errors are sufficiently large to cause unilateral or bilateral amblyopia, if they are impairing the child's ability to function normally, or if the child has accommodative strabismus. Screening for refractive errors is important and should be performed as part of the annual physical examination in all verbal children. Screening for significant refractive errors in preverbal children is more difficult; however, the red reflex test of Bruckner is useful for the detection of anisometropic refractive errors. The photorefraction test, which is an adaptation of Bruckner's red reflex test, may prove to be a useful screening device for detecting bilateral as well as unilateral refractive errors. Objective testing as well as subjective testing enables ophthalmologists to prescribe proper optical correction for refractive errors for infants and children of any age. PMID:3317238

  1. Error-prone signalling.

    PubMed

    Johnstone, R A; Grafen, A

    1992-06-22

    The handicap principle of Zahavi is potentially of great importance to the study of biological communication. Existing models of the handicap principle, however, make the unrealistic assumption that communication is error free. It seems possible, therefore, that Zahavi's arguments do not apply to real signalling systems, in which some degree of error is inevitable. Here, we present a general evolutionarily stable strategy (ESS) model of the handicap principle which incorporates perceptual error. We show that, for a wide range of error functions, error-prone signalling systems must be honest at equilibrium. Perceptual error is thus unlikely to threaten the validity of the handicap principle. Our model represents a step towards greater realism, and also opens up new possibilities for biological signalling theory. Concurrent displays, direct perception of quality, and the evolution of 'amplifiers' and 'attenuators' are all probable features of real signalling systems, yet handicap models based on the assumption of error-free communication cannot accommodate these possibilities. PMID:1354361

  2. Clover: Compiler directed lightweight soft error resilience

    SciTech Connect

    Liu, Qingrui; Lee, Dongyoon; Jung, Changhee; Tiwari, Devesh

    2015-05-01

    This paper presents Clover, a compiler directed soft error detection and recovery scheme for lightweight soft error resilience. The compiler carefully generates soft error tolerant code based on idempotent processing without explicit checkpoint. During program execution, Clover relies on a small number of acoustic wave detectors deployed in the processor to identify soft errors by sensing the wave made by a particle strike. To cope with DUE (detected unrecoverable errors) caused by the sensing latency of error detection, Clover leverages a novel selective instruction duplication technique called tail-DMR (dual modular redundancy). Once a soft error is detected by either the sensor or the tail-DMR, Clover takes care of the error as in the case of exception handling. To recover from the error, Clover simply redirects program control to the beginning of the code region where the error is detected. Lastly, the experiment results demonstrate that the average runtime overhead is only 26%, which is a 75% reduction compared to that of the state-of-the-art soft error resilience technique.

  3. Clover: Compiler directed lightweight soft error resilience

    DOE PAGES Beta

    Liu, Qingrui; Lee, Dongyoon; Jung, Changhee; Tiwari, Devesh

    2015-05-01

    This paper presents Clover, a compiler directed soft error detection and recovery scheme for lightweight soft error resilience. The compiler carefully generates soft error tolerant code based on idempotent processing without explicit checkpoint. During program execution, Clover relies on a small number of acoustic wave detectors deployed in the processor to identify soft errors by sensing the wave made by a particle strike. To cope with DUE (detected unrecoverable errors) caused by the sensing latency of error detection, Clover leverages a novel selective instruction duplication technique called tail-DMR (dual modular redundancy). Once a soft error is detected by either the sensor or the tail-DMR, Clover takes care of the error as in the case of exception handling. To recover from the error, Clover simply redirects program control to the beginning of the code region where the error is detected. Lastly, the experiment results demonstrate that the average runtime overhead is only 26%, which is a 75% reduction compared to that of the state-of-the-art soft error resilience technique.

  4. Error analysis using organizational simulation.

    PubMed Central

    Fridsma, D. B.

    2000-01-01

    Organizational simulations have been used by project organizations in civil and aerospace industries to identify work processes and organizational structures that are likely to fail under certain conditions. Using a simulation system based on Galbraith's information-processing theory and Simon's notion of bounded-rationality, we retrospectively modeled a chemotherapy administration error that occurred in a hospital setting. Our simulation suggested that when there is a high rate of unexpected events, the oncology fellow was differentially backlogged with work when compared with other organizational members. Alternative scenarios suggested that providing more knowledge resources to the oncology fellow improved her performance more effectively than adding additional staff to the organization. Although it is not possible to know whether this might have prevented the error, organizational simulation may be an effective tool to prospectively evaluate organizational "weak links", and explore alternative scenarios to correct potential organizational problems before they generate errors. PMID:11079885

  5. Aircraft system modeling error and control error

    NASA Technical Reports Server (NTRS)

    Kulkarni, Nilesh V. (Inventor); Kaneshige, John T. (Inventor); Krishnakumar, Kalmanje S. (Inventor); Burken, John J. (Inventor)

    2012-01-01

    A method for modeling error-driven adaptive control of an aircraft. Normal aircraft plant dynamics is modeled, using an original plant description in which a controller responds to a tracking error e(k) to drive the component to a normal reference value according to an asymptote curve. Where the system senses that (1) at least one aircraft plant component is experiencing an excursion and (2) the return of this component value toward its reference value is not proceeding according to the expected controller characteristics, neural network (NN) modeling of aircraft plant operation may be changed. However, if (1) is satisfied but the error component is returning toward its reference value according to expected controller characteristics, the NN will continue to model operation of the aircraft plant according to an original description.

  6. Measurement errors induced by axis tilt of biplates in dual-rotating compensator Mueller matrix ellipsometers

    NASA Astrophysics Data System (ADS)

    Gu, Honggang; Zhang, Chuanwei; Jiang, Hao; Chen, Xiuguo; Li, Weiqi; Liu, Shiyuan

    2015-06-01

    Dual-rotating compensator Mueller matrix ellipsometer (DRC-MME) has been designed and applied as a powerful tool for the characterization of thin films and nanostructures. The compensators are indispensable optical components and their performances affect the precision and accuracy of DRC-MME significantly. Biplates made of birefringent crystals are commonly used compensators in the DRC-MME, and their optical axes invariably have tilt errors due to imperfect fabrication and improper installation in practice. The axis tilt error between the rotation axis and the light beam will lead to a continuous vibration in the retardance of the rotating biplate, which further results in significant measurement errors in the Mueller matrix. In this paper, we propose a simple but valid formula for the retardance calculation under arbitrary tilt angle and azimuth angle to analyze the axis tilt errors in biplates. We further study the relations between the measurement errors in the Mueller matrix and the biplate axis tilt through simulations and experiments. We find that the axis tilt errors mainly affect the cross-talk from linear polarization to circular polarization and vice versa. In addition, the measurement errors in the Mueller matrix increase at an accelerating rate with the axis tilt errors in biplates, and the optimal retardance for reducing these errors is about 80°. This work can be expected to provide some guidance for the selection, installation and commissioning of the biplate compensator in DRC-MME design.

  7. Effects of Structural Errors on Parameter Estimates

    NASA Technical Reports Server (NTRS)

    Hadaegh, F. Y.; Bekey, G. A.

    1987-01-01

    This paper introduces the concept of near equivalence in probability between different parameters or mathematical models of a physical system. It is one in a series of papers, each of which establishes a different part of a rigorous theory of mathematical modeling based on the concepts of structural error, identifiability, and equivalence. This installment focuses on the effects of additive structural errors on the degree of bias in parameter estimates.

  8. The Error in Total Error Reduction

    PubMed Central

    Witnauer, James E.; Urcelay, Gonzalo P.; Miller, Ralph R.

    2013-01-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modelling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. PMID:23891930
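    A toy contrast of the two learning rules (not the paper's full model comparison): with a total-error (Rescorla-Wagner-style) update the cues in a compound share the outcome, whereas with a local-error update each cue is trained toward the outcome independently. The parameter values and training design below are invented.

      import numpy as np

      def train(trials, rule, alpha=0.2, lam=1.0, n_cues=2):
          # toy associative learning over (present_cues, outcome) trials
          V = np.zeros(n_cues)
          for cues, outcome in trials:
              x = np.zeros(n_cues)
              x[list(cues)] = 1.0
              if rule == "TER":
                  error = outcome * lam - V @ x    # one total error shared by all present cues
              else:                                # "LER"
                  error = outcome * lam - V        # a separate local error for each cue
              V += alpha * error * x
          return V

      trials = [({0, 1}, 1.0)] * 50                # a two-cue compound always followed by the outcome
      print("TER:", train(trials, "TER"))          # cues share the outcome; each settles near 0.5
      print("LER:", train(trials, "LER"))          # each cue is trained toward 1.0 independently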

  9. Flux Sampling Errors for Aircraft and Towers

    NASA Technical Reports Server (NTRS)

    Mahrt, Larry

    1998-01-01

    Various errors and influences leading to differences between tower- and aircraft-measured fluxes are surveyed. This survey is motivated by reports in the literature that aircraft fluxes are sometimes smaller than tower-measured fluxes. Both tower and aircraft flux errors are larger with surface heterogeneity due to several independent effects. Surface heterogeneity may cause tower flux errors to increase with decreasing wind speed. Techniques to assess flux sampling error are reviewed. Such error estimates suffer various degrees of inapplicability in real geophysical time series due to nonstationarity of tower time series (or inhomogeneity of aircraft data). A new measure for nonstationarity is developed that eliminates assumptions on the form of the nonstationarity inherent in previous methods. When this nonstationarity measure becomes large, the surface energy imbalance increases sharply. Finally, strategies for obtaining adequate flux sampling using repeated aircraft passes and grid patterns are outlined.

  10. Food additives

    MedlinePlus

    Food additives are substances that become part of a food product when they are added during the processing or making of that food. "Direct" food additives are often added during processing to: Add nutrients ...

  11. Error coding simulations

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1993-01-01

    There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal-to-noise ratio needed for a given desired bit error rate. The use of concatenated coding, e.g. inner convolutional code and outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
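    As an illustration of the error-detection piece, here is a bitwise 16-bit CRC in the CCITT style (polynomial 0x1021, all-ones preset) commonly associated with the CCSDS recommendation; the exact preset and final-XOR conventions of the applicable CCSDS book should be checked before relying on this sketch.

      def crc16_ccitt(data: bytes, poly: int = 0x1021, init: int = 0xFFFF) -> int:
          # bit-serial CRC-16: shift each message bit through the feedback polynomial
          crc = init
          for byte in data:
              crc ^= byte << 8
              for _ in range(8):
                  crc = ((crc << 1) ^ poly) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
          return crc

      frame = b"CCSDS transfer frame payload"
      crc = crc16_ccitt(frame)
      corrupted = frame[:5] + b"X" + frame[6:]
      print(hex(crc))                              # appended to the frame by the sender
      print(crc16_ccitt(corrupted) != crc)         # receiver recomputes the CRC and flags the error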

  12. Model Error Budgets

    NASA Technical Reports Server (NTRS)

    Briggs, Hugh C.

    2008-01-01

    An error budget is a commonly used tool in design of complex aerospace systems. It represents system performance requirements in terms of allowable errors and flows these down through a hierarchical structure to lower assemblies and components. The requirements may simply be 'allocated' based upon heuristics or experience, or they may be designed through use of physics-based models. This paper presents a basis for developing an error budget for models of the system, as opposed to the system itself. The need for model error budgets arises when system models are a principal design agent, as is increasingly common for poorly testable high performance space systems.
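    A minimal sketch of the flow-down idea, assuming independent error sources that combine by root-sum-square; the budget items and numbers below are hypothetical.

      import numpy as np

      # hypothetical pointing-error budget (arcseconds) flowed down to independent contributors
      allocations = {
          "structural/thermal distortion": 1.0,
          "sensor noise":                  0.8,
          "control residual":              0.6,
          "alignment knowledge":           0.5,
      }
      requirement = 1.8

      rss = np.sqrt(sum(v ** 2 for v in allocations.values()))   # independent errors combine by RSS
      print(f"RSS of allocations = {rss:.2f} arcsec against a {requirement} arcsec requirement")
      print(f"margin = {requirement - rss:.2f} arcsec")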

  13. Error coding simulations

    NASA Astrophysics Data System (ADS)

    Noble, Viveca K.

    1993-11-01

    There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal-to-noise ratio needed for a given desired bit error rate. The use of concatenated coding, e.g. inner convolutional code and outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.

  14. Food additives

    PubMed Central

    Spencer, Michael

    1974-01-01

    Food additives are discussed from the food technology point of view. The reasons for their use are summarized: (1) to protect food from chemical and microbiological attack; (2) to even out seasonal supplies; (3) to improve their eating quality; (4) to improve their nutritional value. The various types of food additives are considered, e.g. colours, flavours, emulsifiers, bread and flour additives, preservatives, and nutritional additives. The paper concludes with consideration of those circumstances in which the use of additives is (a) justified and (b) unjustified. PMID:4467857

  15. Analysis of Medication Error Reports

    SciTech Connect

    Whitney, Paul D.; Young, Jonathan; Santell, John; Hicks, Rodney; Posse, Christian; Fecht, Barbara A.

    2004-11-15

    In medicine, as in many areas of research, technological innovation and the shift from paper based information to electronic records has created a climate of ever increasing availability of raw data. There has been, however, a corresponding lag in our abilities to analyze this overwhelming mass of data, and classic forms of statistical analysis may not allow researchers to interact with data in the most productive way. This is true in the emerging area of patient safety improvement. Traditionally, a majority of the analysis of error and incident reports has been carried out based on an approach of data comparison, and starts with a specific question which needs to be answered. Newer data analysis tools have been developed which allow the researcher to not only ask specific questions but also to “mine” data: approach an area of interest without preconceived questions, and explore the information dynamically, allowing questions to be formulated based on patterns brought up by the data itself. Since 1991, United States Pharmacopeia (USP) has been collecting data on medication errors through voluntary reporting programs. USP’s MEDMARXsm reporting program is the largest national medication error database and currently contains well over 600,000 records. Traditionally, USP has conducted an annual quantitative analysis of data derived from “pick-lists” (i.e., items selected from a list of items) without an in-depth analysis of free-text fields. In this paper, the application of text analysis and data analysis tools used by Battelle to analyze the medication error reports already analyzed in the traditional way by USP is described. New insights and findings were revealed including the value of language normalization and the distribution of error incidents by day of the week. The motivation for this effort is to gain additional insight into the nature of medication errors to support improvements in medication safety.

  16. Everyday Scale Errors

    ERIC Educational Resources Information Center

    Ware, Elizabeth A.; Uttal, David H.; DeLoache, Judy S.

    2010-01-01

    Young children occasionally make "scale errors"--they attempt to fit their bodies into extremely small objects or attempt to fit a larger object into another, tiny, object. For example, a child might try to sit in a dollhouse-sized chair or try to stuff a large doll into it. Scale error research was originally motivated by parents' and…

  17. Medical error and disclosure.

    PubMed

    White, Andrew A; Gallagher, Thomas H

    2013-01-01

    Errors occur commonly in healthcare and can cause significant harm to patients. Most errors arise from a combination of individual, system, and communication failures. Neurologists may be involved in harmful errors in any practice setting and should familiarize themselves with tools to prevent, report, and examine errors. Although physicians, patients, and ethicists endorse candid disclosure of harmful medical errors to patients, many physicians express uncertainty about how to approach these conversations. A growing body of research indicates physicians often fail to meet patient expectations for timely and open disclosure. Patients desire information about the error, an apology, and a plan for preventing recurrence of the error. To meet these expectations, physicians should participate in event investigations and plan thoroughly for each disclosure conversation, preferably with a disclosure coach. Physicians should also anticipate and attend to the ongoing medical and emotional needs of the patient. A cultural change towards greater transparency following medical errors is in motion. Substantial progress is still required, but neurologists can further this movement by promoting policies and environments conducive to open reporting, respectful disclosure to patients, and support for the healthcare workers involved. PMID:24182370

  18. Measurement error in geometric morphometrics.

    PubMed

    Fruciano, Carmelo

    2016-06-01

    Geometric morphometrics, a set of methods for the statistical analysis of shape once saluted as a revolutionary advancement in the analysis of morphology, is now mature and routinely used in ecology and evolution. However, a factor often disregarded in empirical studies is the presence and extent of measurement error. This is potentially a very serious issue because random measurement error can inflate the amount of variance and, since many statistical analyses are based on the amount of "explained" relative to "residual" variance, can result in loss of statistical power. On the other hand, systematic bias can affect statistical analyses by biasing the results (i.e. variation due to bias is incorporated in the analysis and treated as biologically meaningful variation). Here, I briefly review common sources of error in geometric morphometrics. I then review the most commonly used methods to measure and account for both random and non-random measurement error, providing a worked example using a real dataset. PMID:27038025

  19. Image pre-filtering for measurement error reduction in digital image correlation

    NASA Astrophysics Data System (ADS)

    Zhou, Yihao; Sun, Chen; Song, Yuntao; Chen, Jubing

    2015-02-01

    In digital image correlation, the sub-pixel intensity interpolation causes a systematic error in the measured displacements. The error increases toward high-frequency component of the speckle pattern. In practice, a captured image is usually corrupted by additive white noise. The noise introduces additional energy in the high frequencies and therefore raises the systematic error. Meanwhile, the noise also elevates the random error which increases with the noise power. In order to reduce the systematic error and the random error of the measurements, we apply a pre-filtering to the images prior to the correlation so that the high-frequency contents are suppressed. Two spatial-domain filters (binomial and Gaussian) and two frequency-domain filters (Butterworth and Wiener) are tested on speckle images undergoing both simulated and real-world translations. By evaluating the errors of the various combinations of speckle patterns, interpolators, noise levels, and filter configurations, we come to the following conclusions. All the four filters are able to reduce the systematic error. Meanwhile, the random error can also be reduced if the signal power is mainly distributed around DC. For high-frequency speckle patterns, the low-pass filters (binomial, Gaussian and Butterworth) slightly increase the random error and Butterworth filter produces the lowest random error among them. By using Wiener filter with over-estimated noise power, the random error can be reduced but the resultant systematic error is higher than that of low-pass filters. In general, Butterworth filter is recommended for error reduction due to its flexibility of passband selection and maximal preservation of the allowed frequencies. Binomial filter enables efficient implementation and thus becomes a good option if computational cost is a critical issue. While used together with pre-filtering, B-spline interpolator produces lower systematic error than bicubic interpolator and similar level of the random
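    The sketch below illustrates the pre-filtering step itself (not the full DIC pipeline): a noisy speckle image is low-pass filtered with the separable [1 2 1]/4 binomial kernel and with a Gaussian before being handed to the correlation and sub-pixel interpolation stage; the image, noise level and filter widths are invented.

      import numpy as np
      from scipy.ndimage import convolve, gaussian_filter

      rng = np.random.default_rng(6)

      # synthetic speckle pattern corrupted by additive white noise
      speckle = gaussian_filter(rng.normal(size=(256, 256)), sigma=1.5)
      noisy = speckle + rng.normal(scale=0.05, size=speckle.shape)

      # binomial pre-filter: separable [1 2 1] / 4 kernel
      b = np.array([1.0, 2.0, 1.0]) / 4.0
      binomial = convolve(convolve(noisy, b[None, :]), b[:, None])

      # Gaussian pre-filter
      gauss = gaussian_filter(noisy, sigma=0.8)

      # the pre-filtered images (not the raw noisy ones) are then passed to the subset
      # matching and sub-pixel interpolation stage of the DIC algorithm
      for name, img in (("none", noisy), ("binomial", binomial), ("gaussian", gauss)):
          print(name, "residual high-frequency energy:", np.var(img - gaussian_filter(img, sigma=1.0)))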

  20. Errors Disrupt Subsequent Early Attentional Processes

    PubMed Central

    Van der Borght, Liesbet; Schevernels, Hanne; Burle, Boris; Notebaert, Wim

    2016-01-01

    It has been demonstrated that target detection is impaired following an error in an unrelated flanker task. These findings support the idea that the occurrence or processing of unexpected error-like events interfere with subsequent information processing. In the present study, we investigated the effect of errors on early visual ERP components. We therefore combined a flanker task and a visual discrimination task. Additionally, the intertrial interval between both tasks was manipulated in order to investigate the duration of these negative after-effects. The results of the visual discrimination task indicated that the amplitude of the N1 component, which is related to endogenous attention, was significantly decreased following an error, irrespective of the intertrial interval. Additionally, P3 amplitude was attenuated after an erroneous trial, but only in the long-interval condition. These results indicate that low-level attentional processes are impaired after errors. PMID:27050303

  1. Model Error Estimation for the CPTEC Eta Model

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; daSilva, Arlindo

    1999-01-01

    Statistical data assimilation systems require the specification of forecast and observation error statistics. Forecast error is due to model imperfections and to differences between the initial condition and the actual state of the atmosphere. Practical four-dimensional variational (4D-Var) methods try to fit the forecast state to the observations and assume that the model error is negligible. Here, with a number of simplifying assumptions, a framework is developed for isolating the model error given the forecast error at two lead times. Two definitions are proposed for the Talagrand ratio tau, the fraction of the forecast error due to model error rather than initial condition error. Data from the CPTEC Eta Model running operationally over South America are used to calculate forecast error statistics and lower bounds for tau.

  2. A Discriminant Function Approach to Adjust for Processing and Measurement Error When a Biomarker is Assayed in Pooled Samples.

    PubMed

    Lyles, Robert H; Van Domelen, Dane; Mitchell, Emily M; Schisterman, Enrique F

    2015-11-01

    Pooling biological specimens prior to performing expensive laboratory assays has been shown to be a cost effective approach for estimating parameters of interest. In addition to requiring specialized statistical techniques, however, the pooling of samples can introduce assay errors due to processing, possibly in addition to measurement error that may be present when the assay is applied to individual samples. Failure to account for these sources of error can result in biased parameter estimates and ultimately faulty inference. Prior research addressing biomarker mean and variance estimation advocates hybrid designs consisting of individual as well as pooled samples to account for measurement and processing (or pooling) error. We consider adapting this approach to the problem of estimating a covariate-adjusted odds ratio (OR) relating a binary outcome to a continuous exposure or biomarker level assessed in pools. In particular, we explore the applicability of a discriminant function-based analysis that assumes normal residual, processing, and measurement errors. A potential advantage of this method is that maximum likelihood estimation of the desired adjusted log OR is straightforward and computationally convenient. Moreover, in the absence of measurement and processing error, the method yields an efficient unbiased estimator for the parameter of interest assuming normal residual errors. We illustrate the approach using real data from an ancillary study of the Collaborative Perinatal Project, and we use simulations to demonstrate the ability of the proposed estimators to alleviate bias due to measurement and processing error. PMID:26593934
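    Not the authors' estimator, and ignoring pooling for brevity, the toy below uses the normal-theory discriminant-function relation (log OR approximately equals the difference in group means divided by the pooled variance) to show how additive measurement or processing error attenuates a naive estimate and how subtracting a known error variance restores it; all values are simulated assumptions.

      import numpy as np

      rng = np.random.default_rng(7)
      n = 50_000

      # simulated normal biomarker: cases (y = 1) shifted relative to controls (y = 0)
      mu0, mu1, sigma = 0.0, 0.5, 1.0
      y = rng.integers(0, 2, size=n)
      x = rng.normal(loc=np.where(y == 1, mu1, mu0), scale=sigma)

      def disc_log_or(x, y, error_var=0.0):
          # discriminant-function estimate of the log OR under normality:
          # (difference in group means) / (pooled variance minus any known error variance)
          s2 = (x[y == 1].var(ddof=1) + x[y == 0].var(ddof=1)) / 2 - error_var
          return (x[y == 1].mean() - x[y == 0].mean()) / s2

      print("no error      :", disc_log_or(x, y))                       # ~ (mu1 - mu0) / sigma**2 = 0.5
      x_err = x + rng.normal(scale=0.7, size=n)                         # additive measurement/processing error
      print("naive, with ME:", disc_log_or(x_err, y))                   # attenuated toward zero
      print("corrected     :", disc_log_or(x_err, y, error_var=0.49))   # subtract the known error variance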

  3. Factors Affecting Blood Glucose Monitoring: Sources of Errors in Measurement

    PubMed Central

    Ginsberg, Barry H.

    2009-01-01

    Glucose monitoring has become an integral part of diabetes care but has some limitations in accuracy. Accuracy may be limited by strip manufacturing variances, strip storage, and aging. Errors may also arise from environmental limitations such as temperature or altitude, or from patient factors such as improper coding, incorrect hand washing, altered hematocrit, or naturally occurring interfering substances. Finally, exogenous interfering substances may contribute errors to the system evaluation of blood glucose. In this review, I discuss the measurement of error in blood glucose, the sources of error, their mechanisms, and potential solutions to improve accuracy in the hands of the patient. I also discuss the clinical measurement of system accuracy, methods of judging the suitability of clinical trials, and finally some methods of overcoming the inaccuracies. I have included comments in the appropriate sections about additional information or education that manufacturers could provide today. Areas that require additional work are discussed in the final section. PMID:20144340

  4. Negligence, genuine error, and litigation.

    PubMed

    Sohn, David H

    2013-01-01

    Not all medical injuries are the result of negligence. In fact, most medical injuries are the result either of the inherent risk in the practice of medicine or of system errors, which cannot be prevented simply through fear of disciplinary action. This paper will discuss the differences between adverse events, negligence, and system errors; the current medical malpractice tort system in the United States; and review current and future solutions, including medical malpractice reform, alternative dispute resolution, health courts, and no-fault compensation systems. The current political environment favors investigation of non-cap tort reform remedies; investment in more rational oversight systems, such as health courts or no-fault systems, may reap both quantitative and qualitative benefits for a less costly and safer health system. PMID:23426783

  5. Techniques for containing error propagation in compression/decompression schemes

    NASA Technical Reports Server (NTRS)

    Kobler, Ben

    1991-01-01

    Data compression has the potential for increasing the risk of data loss. It can also cause bit error propagation, resulting in catastrophic failures. There are a number of approaches possible for containing error propagation due to data compression: (1) data retransmission; (2) data interpolation; (3) error containment; and (4) error correction. The most fruitful techniques will be ones where error containment and error correction are integrated with data compression to provide optimal performance for both. The error containment characteristics of existing compression schemes should be analyzed for their behavior under different data and error conditions. The error tolerance requirements of different data sets need to be understood, so guidelines can then be developed for matching error requirements to suitable compression algorithms.
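    One common containment strategy consistent with item (3) above, sketched here under the assumption of a block-oriented format (the block size and checksum choice are illustrative, not from the record): compress the data in independent blocks and attach a per-block checksum, so that a bit error corrupts at most one block instead of propagating through the whole stream.

```python
import zlib

BLOCK_SIZE = 4096  # assumption: fixed-size blocks chosen for illustration

def compress_blocks(data: bytes):
    """Compress data block by block; each block carries its own CRC32."""
    blocks = []
    for i in range(0, len(data), BLOCK_SIZE):
        chunk = data[i:i + BLOCK_SIZE]
        comp = zlib.compress(chunk)
        blocks.append((zlib.crc32(comp), comp))
    return blocks

def decompress_blocks(blocks):
    """Decompress, skipping (containing) any block whose checksum fails."""
    out, bad = [], 0
    for crc, comp in blocks:
        if zlib.crc32(comp) != crc:
            bad += 1                              # error contained to this block
            out.append(b"\x00" * BLOCK_SIZE)      # placeholder for the lost block
            continue
        out.append(zlib.decompress(comp))
    return b"".join(out), bad

if __name__ == "__main__":
    payload = bytes(range(256)) * 64
    blocks = compress_blocks(payload)
    # Flip one bit in one compressed block to simulate a transmission error.
    crc, comp = blocks[1]
    blocks[1] = (crc, bytes([comp[0] ^ 0x01]) + comp[1:])
    recovered, bad = decompress_blocks(blocks)
    print(f"blocks damaged: {bad}, bytes recovered: {len(recovered)}")
```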

  6. Insulin use: preventable errors.

    PubMed

    2014-01-01

    Insulin is vital for patients with type 1 diabetes and useful for certain patients with type 2 diabetes. The serious consequences of insulin-related medication errors are overdose, resulting in severe hypoglycaemia, causing seizures, coma and even death; or underdose, resulting in hyperglycaemia and sometimes ketoacidosis. Errors associated with the preparation and administration of insulin are often reported, both outside and inside the hospital setting. These errors are preventable. By analysing reports from organisations devoted to medication error prevention and from poison control centres, as well as a few studies and detailed case reports of medication errors, various types of error associated with insulin use have been identified, especially in the hospital setting. Generally, patients know more about the practicalities of their insulin treatment than healthcare professionals with intermittent involvement. Medication errors involving insulin can occur at each step of the medication-use process: prescribing, data entry, preparation, dispensing and administration. When prescribing insulin, wrong-dose errors have been caused by the use of abbreviations, especially "U" instead of the word "units" (often resulting in a 10-fold overdose because the "U" is read as a zero), or by failing to write the drug's name correctly or in full. In electronic prescribing, the sheer number of insulin products is a source of confusion and, ultimately, wrong-dose errors, and often overdose. Prescribing, dispensing or administration software is rarely compatible with insulin prescriptions in which the dose is adjusted on the basis of the patient's subsequent capillary blood glucose readings, and can therefore generate errors. When preparing and dispensing insulin, a tuberculin syringe is sometimes used instead of an insulin syringe, leading to overdose. Other errors arise from confusion created by similar packaging, between different insulin products or between insulin and other

  7. Error Prevention Aid

    NASA Technical Reports Server (NTRS)

    1987-01-01

    In a complex computer environment there is ample opportunity for error, a mistake by a programmer, or a software-induced undesirable side effect. In insurance, errors can cost a company heavily, so protection against inadvertent change is a must for the efficient firm. The data processing center at Transport Life Insurance Company has taken a step to guard against accidental changes by adopting a software package called EQNINT (Equations Interpreter Program). EQNINT cross checks the basic formulas in a program against the formulas that make up the major production system. EQNINT assures that formulas are coded correctly and helps catch errors before they affect customer service or profitability.

  8. The energetics of error-growth and the predictability analysis in precipitation prediction

    NASA Astrophysics Data System (ADS)

    Luo, Yu; Zhang, Lifeng; Zhang, Yun

    2012-02-01

    Sensitivity simulations are conducted in AREM (Advanced Regional Eta-Coordinate numerical heavy-rain prediction Model) for a torrential precipitation event in June 2008 over South China to investigate the effect of initial uncertainty on precipitation predictability. It is found that the strong initial-condition sensitivity of precipitation prediction can be attributed to the upscale evolution of error growth. However, different modalities of error growth are observed in the lower and upper layers. Compared with the lower layer, significant error growth in the upper layer appears over both the convective area and the upper-level jet stream. This indicates that error growth depends both on moist convection due to convective instability and on the wind shear associated with dynamic instability. Since the heavy-rainfall process can be described as a series of energy conversions, the advection term and latent heating serve as significant energy sources. Moreover, the dominant source terms of error-energy growth are nonlinear advection (ADVT) and the difference in latent heating (DLHT), with the latter being largely responsible for the rapid error growth in the initial stage. In this sense, the occurrence of precipitation and the error growth share the same energy source, which implies the inherent predictability of heavy rainfall. In addition, a decomposition of ADVT further indicates that the flow-dependent error growth is closely related to atmospheric instability. Thus a system growing from an unstable flow regime has its own intrinsic predictability.

  9. Voice Onset Time in Consonant Cluster Errors: Can Phonetic Accommodation Differentiate Cognitive from Motor Errors?

    ERIC Educational Resources Information Center

    Pouplier, Marianne; Marin, Stefania; Waltl, Susanne

    2014-01-01

    Purpose: Phonetic accommodation in speech errors has traditionally been used to identify the processing level at which an error has occurred. Recent studies have challenged the view that noncanonical productions may solely be due to phonetic, not phonological, processing irregularities, as previously assumed. The authors of the present study…

  10. Classification and reduction of pilot error

    NASA Technical Reports Server (NTRS)

    Rogers, W. H.; Logan, A. L.; Boley, G. D.

    1989-01-01

    Human error is a primary or contributing factor in about two-thirds of commercial aviation accidents worldwide. With the ultimate goal of reducing pilot error accidents, this contract effort is aimed at understanding the factors underlying error events and reducing the probability of certain types of errors by modifying underlying factors such as flight deck design and procedures. A review of the literature relevant to error classification was conducted. Classification includes categorizing types of errors, the information processing mechanisms and factors underlying them, and identifying factor-mechanism-error relationships. The classification scheme developed by Jens Rasmussen was adopted because it provided a comprehensive yet basic error classification shell or structure that could easily accommodate addition of details on domain-specific factors. For these purposes, factors specific to the aviation environment were incorporated. Hypotheses concerning the relationship of a small number of underlying factors, information processing mechanisms, and error types identified in the classification scheme were formulated. ASRS data were reviewed and a simulation experiment was performed to evaluate and quantify the hypotheses.

  11. Facts about Refractive Errors

    MedlinePlus

    ... the lens can cause refractive errors. What is refraction? Refraction is the bending of light as it passes ... rays entering the eye, causing a more precise refraction or focus. In many cases, contact lenses provide ...

  12. Errors in prenatal diagnosis.

    PubMed

    Anumba, Dilly O C

    2013-08-01

    Prenatal screening and diagnosis are integral to antenatal care worldwide. Prospective parents are offered screening for common fetal chromosomal and structural congenital malformations. In most developed countries, prenatal screening is routinely offered in a package that includes ultrasound scan of the fetus and the assay in maternal blood of biochemical markers of aneuploidy. Mistakes can arise at any point of the care pathway for fetal screening and diagnosis, and may involve individual or corporate systemic or latent errors. Special clinical circumstances, such as maternal size, fetal position, and multiple pregnancy, contribute to the complexities of prenatal diagnosis and to the chance of error. Clinical interventions may lead to adverse outcomes not caused by operator error. In this review I discuss the scope of the errors in prenatal diagnosis, and highlight strategies for their prevention and diagnosis, as well as identify areas for further research and study to enhance patient safety. PMID:23725900

  13. Error mode prediction.

    PubMed

    Hollnagel, E; Kaarstad, M; Lee, H C

    1999-11-01

    The study of accidents ('human errors') has been dominated by efforts to develop 'error' taxonomies and 'error' models that enable the retrospective identification of likely causes. In the field of Human Reliability Analysis (HRA) there is, however, a significant practical need for methods that can predict the occurrence of erroneous actions--qualitatively and quantitatively. The present experiment tested an approach for qualitative performance prediction based on the Cognitive Reliability and Error Analysis Method (CREAM). Predictions of possible erroneous actions were made for operators using different types of alarm systems. The data were collected as part of a large-scale experiment using professional nuclear power plant operators in a full scope simulator. The analysis showed that the predictions were correct in more than 70% of the cases, and also that the coverage of the predictions depended critically on the comprehensiveness of the preceding task analysis. PMID:10582035

  14. Pronominal Case-Errors

    ERIC Educational Resources Information Center

    Kaper, Willem

    1976-01-01

    Contradicts a previous assertion by C. Tanz that children commit substitution errors usually using objective pronoun forms for nominative ones. Examples from Dutch and German provide evidence that substitutions are made in both directions. (CHK)

  15. Estimating Bias Error Distributions

    NASA Technical Reports Server (NTRS)

    Liu, Tian-Shu; Finley, Tom D.

    2001-01-01

    This paper formulates the general methodology for estimating the bias error distribution of a device in a measuring domain from less accurate measurements when a minimal number of standard values (typically two values) are available. A new perspective is that the bias error distribution can be found as a solution of an intrinsic functional equation in a domain. Based on this theory, the scaling- and translation-based methods for determining the bias error distribution are developed. These methods are virtually applicable to any device as long as the bias error distribution of the device can be sufficiently described by a power series (a polynomial) or a Fourier series in a domain. These methods have been validated through computational simulations and laboratory calibration experiments for a number of different devices.

  16. Slowing after Observed Error Transfers across Tasks

    PubMed Central

    Wang, Lijun; Pan, Weigang; Tan, Jinfeng; Liu, Congcong; Chen, Antao

    2016-01-01

    . Moreover, the PES effect appears across tasksets with distinct stimuli and response rules in the context of observed errors, reflecting a generic process. Additionally, the slowing effect and improved accuracy in the post-observed error trial do not occur together, suggesting that they are independent behavioral adjustments in the context of observed errors. PMID:26934579

  17. Error-Compensated Telescope

    NASA Technical Reports Server (NTRS)

    Meinel, Aden B.; Meinel, Marjorie P.; Stacy, John E.

    1989-01-01

    Proposed reflecting telescope includes large, low-precision primary mirror stage and small, precise correcting mirror. Correcting mirror machined under computer control to compensate for error in primary mirror. Correcting mirror machined by diamond cutting tool. Computer analyzes interferometric measurements of primary mirror to determine shape of surface of correcting mirror needed to compensate for errors in wave front reflected from primary mirror and commands position and movement of cutting tool accordingly.

  18. Thermodynamics of Error Correction

    NASA Astrophysics Data System (ADS)

    Sartori, Pablo; Pigolotti, Simone

    2015-10-01

    Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  19. Experimental investigation of false positive errors in auditory species occurrence surveys

    USGS Publications Warehouse

    Miller, David A.W.; Weir, Linda A.; McClintock, Brett T.; Grant, Evan H. Campbell; Bailey, Larissa L.; Simons, Theodore R.

    2012-01-01

    False positive errors are a significant component of many ecological data sets, which, in combination with false negative errors, can lead to severe biases in conclusions about ecological systems. We present results of a field experiment where observers recorded observations for known combinations of electronically broadcast calling anurans under conditions mimicking field surveys to determine species occurrence. Our objectives were to characterize false positive error probabilities for auditory methods based on a large number of observers, to determine if targeted instruction could be used to reduce false positive error rates, and to establish useful predictors of among-observer and among-species differences in error rates. We recruited 31 observers, ranging in abilities from novice to expert, who recorded detections for 12 species during 180 calling trials (66,960 total observations). All observers made multiple false positive errors and on average 8.1% of recorded detections in the experiment were false positive errors. Additional instruction had only minor effects on error rates. After instruction, false positive error probabilities decreased by 16% for treatment individuals compared to controls, with a broad confidence interval overlapping 0 (95% CI: -46 to 30%). This coincided with an increase in false negative errors due to the treatment (26%; -3 to 61%). Differences among observers in false positive and in false negative error rates were best predicted by scores from an online test and a self-assessment of observer ability completed prior to the field experiment. In contrast, years of experience conducting call surveys was a weak predictor of error rates. False positive errors were also more common for species that were played more frequently, but were not related to the dominant spectral frequency of the call. Our results corroborate other work that demonstrates false positives are a significant component of species occurrence data collected by auditory

  20. Statistics of the residual refraction errors in laser ranging data

    NASA Technical Reports Server (NTRS)

    Gardner, C. S.

    1977-01-01

    A theoretical model for the range error covariance was derived by assuming that the residual refraction errors are due entirely to errors in the meteorological data which are used to calculate the atmospheric correction. The properties of the covariance function are illustrated by evaluating the theoretical model for the special case of a dense network of weather stations uniformly distributed within a circle.

  1. Loading errors in cone-plate rheometry

    NASA Astrophysics Data System (ADS)

    Davies, A. J.

    2015-12-01

    Errors arising from the under- and overfilling of cone-plate geometries have been investigated for combinations of smooth and micro-roughened cone-plate geometries. We observed experimentally that 0.1 ml deviations in the loading volume, such as can occur due to subjective filling or evaporation, will proportionally change the measured viscosity by 2-3%. We also give a simple method to avoid these errors during routine measurements.

  2. Patient cueing, a type of diagnostic error

    PubMed Central

    2016-01-01

    Diagnostic failure can be due to a variety of psychological errors on the part of the diagnostician. An erroneous diagnosis rendered by previous clinicians can lead a diagnostician to the wrong diagnosis. This report describes the case of a patient who misdiagnosed herself and then led an emergency room physician and subsequent treating physicians to the wrong diagnosis. This mechanism of diagnostic error can be called patient cueing. PMID:27284538

  3. Antenna pointing systematic error model derivations

    NASA Technical Reports Server (NTRS)

    Guiar, C. N.; Lansing, F. L.; Riggs, R.

    1987-01-01

    The pointing model used to represent and correct systematic errors for the Deep Space Network (DSN) antennas is presented. Analytical expressions are given in both azimuth-elevation (az-el) and hour angle-declination (ha-dec) mounts for RF axis collimation error, encoder offset, nonorthogonality of axes, axis plane tilt, and structural flexure due to gravity loading. While the residual pointing errors (rms) after correction appear to be within the ten percent of the half-power beamwidth criterion commonly set for good pointing accuracy, the DSN has embarked on an extensive pointing improvement and modeling program aiming toward an order of magnitude higher pointing precision.
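    The general flavor of such a systematic pointing model can be sketched as a linear least-squares fit of a few conventional terms (encoder offset, gravitational flexure, axis tilt); the term forms and coefficient values below follow common astronomical pointing-model conventions and are illustrative stand-ins, not the exact DSN expressions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic elevation pointing residuals (arcsec) on an az-el mount.
az = rng.uniform(0, 2 * np.pi, 200)
el = rng.uniform(np.radians(10), np.radians(80), 200)

# "True" systematic coefficients used to generate the data (arcsec):
# encoder offset, gravitational flexure, and two axis-tilt components.
true_b = np.array([30.0, -12.0, 5.0, -8.0])
A = np.column_stack([np.ones_like(el),   # elevation encoder offset
                     np.cos(el),         # gravitational flexure
                     np.cos(az),         # azimuth-axis tilt (E-W component)
                     np.sin(az)])        # azimuth-axis tilt (N-S component)
d_el = A @ true_b + rng.normal(0, 2.0, el.size)   # plus 2 arcsec random noise

# Least-squares fit of the systematic pointing model.
b_hat, *_ = np.linalg.lstsq(A, d_el, rcond=None)
residual_rms = np.sqrt(np.mean((d_el - A @ b_hat) ** 2))
print("fitted coefficients:", np.round(b_hat, 1))
print(f"post-fit residual rms: {residual_rms:.1f} arcsec")
```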

  4. Low-dimensional Representation of Error Covariance

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; Cohn, Stephen E.; Todling, Ricardo; Marchesin, Dan

    2000-01-01

    Ensemble and reduced-rank approaches to prediction and assimilation rely on low-dimensional approximations of the estimation error covariances. Here stability properties of the forecast/analysis cycle for linear, time-independent systems are used to identify factors that cause the steady-state analysis error covariance to admit a low-dimensional representation. A useful measure of forecast/analysis cycle stability is the bound matrix, a function of the dynamics, observation operator and assimilation method. Upper and lower estimates for the steady-state analysis error covariance matrix eigenvalues are derived from the bound matrix. The estimates generalize to time-dependent systems. If much of the steady-state analysis error variance is due to a few dominant modes, the leading eigenvectors of the bound matrix approximate those of the steady-state analysis error covariance matrix. The analytical results are illustrated in two numerical examples where the Kalman filter is carried to steady state. The first example uses the dynamics of a generalized advection equation exhibiting nonmodal transient growth. Failure to observe growing modes leads to increased steady-state analysis error variances. Leading eigenvectors of the steady-state analysis error covariance matrix are well approximated by leading eigenvectors of the bound matrix. The second example uses the dynamics of a damped baroclinic wave model. The leading eigenvectors of a lowest-order approximation of the bound matrix are shown to approximate well the leading eigenvectors of the steady-state analysis error covariance matrix.
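    The qualitative result can be reproduced numerically by iterating the forecast/analysis covariance cycle of a linear Kalman filter to steady state and examining the spectrum of the analysis error covariance; the dynamics, observation operator and noise covariances below are illustrative stand-ins, not the models used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 5

# Illustrative linear, time-independent system: a mildly non-normal dynamics
# matrix M, a sparse observation operator H, and model/observation noise.
M = 0.95 * np.eye(n) + 0.05 * np.triu(rng.normal(size=(n, n)), k=1)
H = np.zeros((p, n))
H[np.arange(p), np.arange(p) * (n // p)] = 1.0
Q = 0.01 * np.eye(n)   # model error covariance
R = 0.10 * np.eye(p)   # observation error covariance

# Iterate the forecast/analysis covariance cycle to steady state.
Pa = np.eye(n)
for _ in range(500):
    Pf = M @ Pa @ M.T + Q                              # forecast step
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)     # Kalman gain
    Pa = (np.eye(n) - K @ H) @ Pf                      # analysis step
    Pa = 0.5 * (Pa + Pa.T)                             # keep symmetric numerically

# Low-dimensional structure: how much variance the leading modes explain.
w = np.linalg.eigh(Pa)[0][::-1]
explained = np.cumsum(w) / w.sum()
print("variance explained by leading 3 modes:", np.round(explained[:3], 3))
```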

  5. Errata: Papers in Error Analysis.

    ERIC Educational Resources Information Center

    Svartvik, Jan, Ed.

    Papers presented at the symposium of error analysis in Lund, Sweden, in September 1972, approach error analysis specifically in its relation to foreign language teaching and second language learning. Error analysis is defined as having three major aspects: (1) the description of the errors, (2) the explanation of errors by means of contrastive…

  6. A Bayesian Measurement Error Model for Misaligned Radiographic Data

    DOE PAGESBeta

    Lennox, Kristin P.; Glascoe, Lee G.

    2013-09-06

    An understanding of the inherent variability in micro-computed tomography (micro-CT) data is essential to tasks such as statistical process control and the validation of radiographic simulation tools. The data present unique challenges to variability analysis due to the relatively low resolution of radiographs, and also due to minor variations from run to run which can result in misalignment or magnification changes between repeated measurements of a sample. Positioning changes artificially inflate the variability of the data in ways that mask true physical phenomena. We present a novel Bayesian nonparametric regression model that incorporates both additive and multiplicative measurement error in addition to heteroscedasticity to address this problem. We also use this model to assess the effects of sample thickness and sample position on measurement variability for an aluminum specimen. Supplementary materials for this article are available online.

  7. A Bayesian Measurement Error Model for Misaligned Radiographic Data

    SciTech Connect

    Lennox, Kristin P.; Glascoe, Lee G.

    2013-09-06

    An understanding of the inherent variability in micro-computed tomography (micro-CT) data is essential to tasks such as statistical process control and the validation of radiographic simulation tools. The data present unique challenges to variability analysis due to the relatively low resolution of radiographs, and also due to minor variations from run to run which can result in misalignment or magnification changes between repeated measurements of a sample. Positioning changes artificially inflate the variability of the data in ways that mask true physical phenomena. We present a novel Bayesian nonparametric regression model that incorporates both additive and multiplicative measurement error in addition to heteroscedasticity to address this problem. We also use this model to assess the effects of sample thickness and sample position on measurement variability for an aluminum specimen. Supplementary materials for this article are available online.
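    As an illustration of the error structure named in these records (and not the authors' full Bayesian nonparametric model), the sketch below simulates measurements whose noise combines a multiplicative component, an additive component, and heteroscedasticity, and shows how the observed spread grows with signal level.

```python
import numpy as np

rng = np.random.default_rng(42)

def measure(signal, sigma_mult=0.03, sigma_add=0.5, hetero=0.02):
    """Toy measurement model (illustrative, not the paper's):
    y = (1 + m) * signal + a, where m is multiplicative error, a is additive
    error, and the additive scale grows with signal level (heteroscedasticity)."""
    m = rng.normal(0.0, sigma_mult, signal.shape)
    a = rng.normal(0.0, sigma_add + hetero * signal, signal.shape)
    return (1.0 + m) * signal + a

levels = np.array([10.0, 50.0, 100.0, 200.0])
reps = np.stack([measure(np.full(5000, s)) for s in levels])
print("signal level :", levels)
print("observed std :", np.round(reps.std(axis=1), 2))
```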

  8. Potlining Additives

    SciTech Connect

    Rudolf Keller

    2004-08-10

    In this project, a concept to improve the performance of aluminum production cells by introducing potlining additives was examined and tested. Boron oxide was added to cathode blocks, and titanium was dissolved in the metal pool; this resulted in the formation of titanium diboride and caused the molten aluminum to wet the carbonaceous cathode surface. Such wetting reportedly leads to operational improvements and extended cell life. In addition, boron oxide suppresses cyanide formation. This final report presents and discusses the results of this project. Substantial economic benefits for the practical implementation of the technology are projected, especially for modern cells with graphitized blocks. For example, with an energy savings of about 5% and an increase in pot life from 1500 to 2500 days, a cost savings of $ 0.023 per pound of aluminum produced is projected for a 200 kA pot.

  9. Phosphazene additives

    SciTech Connect

    Harrup, Mason K; Rollins, Harry W

    2013-11-26

    An additive comprising a phosphazene compound that has at least two reactive functional groups and at least one capping functional group bonded to phosphorus atoms of the phosphazene compound. One of the at least two reactive functional groups is configured to react with cellulose and the other of the at least two reactive functional groups is configured to react with a resin, such as an amine resin or a polycarboxylic acid resin. The at least one capping functional group is selected from the group consisting of a short chain ether group, an alkoxy group, or an aryloxy group. Also disclosed are an additive-resin admixture, a method of treating a wood product, and a wood product.

  10. Modeling human response errors in synthetic flight simulator domain

    NASA Technical Reports Server (NTRS)

    Ntuen, Celestine A.

    1992-01-01

    This paper presents a control theoretic approach to modeling human response errors (HRE) in the flight simulation domain. The human pilot is modeled as a supervisor of a highly automated system. The synthesis uses the theory of optimal control pilot modeling for integrating the pilot's observation error and the error due to the simulation model (experimental error). Methods for solving the HRE problem are suggested. The models will be verified experimentally in a flight-quality handling simulation.

  11. Addition of the Neurokinin-1-Receptor Antagonist (RA) Aprepitant to a 5-Hydroxytryptamine-RA and Dexamethasone in the Prophylaxis of Nausea and Vomiting Due to Radiation Therapy With Concomitant Cisplatin

    SciTech Connect

    Jahn, Franziska; Jahn, Patrick; Sieker, Frank; Vordermark, Dirk; Jordan, Karin

    2015-08-01

    Purpose: To assess, in a prospective, observational study, the safety and efficacy of the addition of the neurokinin-1-receptor antagonist (NK1-RA) aprepitant to concomitant radiochemotherapy, for the prophylaxis of radiation therapy–induced nausea and vomiting. Patients and Methods: This prospective observational study compared the antiemetic efficacy of an NK1-RA (aprepitant), a 5-hydroxytryptamine-RA, and dexamethasone (aprepitant regimen) versus a 5-hydroxytryptamine-RA and dexamethasone (control regimen) in patients receiving concomitant radiochemotherapy with cisplatin at the Department of Radiation Oncology, University Hospital Halle (Saale), Germany. The primary endpoint was complete response in the overall phase, defined as no vomiting and no use of rescue therapy in this period. Results: Fifty-nine patients treated with concomitant radiochemotherapy with cisplatin were included in this study. Thirty-one patients received the aprepitant regimen and 29 the control regimen. The overall complete response rates for cycles 1 and 2 were 75.9% and 64.5% for the aprepitant group and 60.7% and 54.2% for the control group, respectively. Although a 15.2% absolute difference was reached in cycle 1, a statistical significance was not detected (P=.22). Furthermore maximum nausea was 1.58 ± 1.91 in the control group and 0.73 ± 1.79 in the aprepitant group (P=.084); for the head-and-neck subset, 2.23 ± 2.13 in the control group and 0.64 ± 1.77 in the aprepitant group, respectively (P=.03). Conclusion: This is the first study of an NK1-RA–containing antiemetic prophylaxis regimen in patients receiving concomitant radiochemotherapy. Although the primary endpoint was not obtained, the absolute difference of 10% in efficacy was reached, which is defined as clinically meaningful for patients by international guidelines groups. Randomized phase 3 studies are necessary to further define the potential role of an NK1-RA in this setting.

  12. Performance of focused error control codes

    NASA Astrophysics Data System (ADS)

    Alajaji, Fady; Fuja, Thomas

    1994-02-01

    Consider an additive noise channel with inputs and outputs in the field GF(q), where q > 2; every time a symbol is transmitted over such a channel, there are q - 1 different errors that can occur, corresponding to the q - 1 non-zero elements that the channel can add to the transmitted symbol. In many data communication/storage systems, there are some errors that occur much more frequently than others; however, traditional error correcting codes - designed with respect to the Hamming metric - treat each of these q - 1 errors the same. Fuja and Heegard have designed a class of codes, called focused error control codes, that offer different levels of protection against common and uncommon errors; the idea is to define the level of protection in a way based not only on the number of errors, but the kind as well. In this paper, the performance of these codes is analyzed with respect to idealized 'skewed' channels as well as realistic non-binary modulation schemes. It is shown that focused codes, used in conjunction with PSK and QAM signaling, can provide more than 1.0 dB of additional coding gain when compared with Reed-Solomon codes for small blocklengths.

  13. Error modes in implicit Monte Carlo

    SciTech Connect

    Martin, William Russell,; Brown, F. B.

    2001-01-01

    The Implicit Monte Carlo (IMC) method of Fleck and Cummings [1] has been used for years to analyze radiative transfer problems, such as those encountered in stellar atmospheres or inertial confinement fusion. Larsen and Mercier [2] have shown that the IMC method violates a maximum principle that is satisfied by the exact solution to the radiative transfer equation. Except for [2] and related papers regarding the maximum principle, there have been no other published results regarding the analysis of errors or convergence properties for the IMC method. This work presents an exact error analysis for the IMC method by using the analytical solutions for infinite medium geometry (0-D) to determine closed form expressions for the errors. The goal is to gain insight regarding the errors inherent in the IMC method by relating the exact 0-D errors to multi-dimensional geometry. Additional work (not described herein) has shown that adding a leakage term (i.e., a 'buckling' term) to the 0-D equations has relatively little effect on the IMC errors analyzed in this paper, so that the 0-D errors should provide useful guidance for the errors observed in multi-dimensional simulations.

  14. A Fully Bayesian Approach to Improved Calibration and Prediction of Groundwater Models With Structure Error

    NASA Astrophysics Data System (ADS)

    Xu, T.; Valocchi, A. J.

    2014-12-01

    Effective water resource management typically relies on numerical models to analyse groundwater flow and solute transport processes. These models are usually subject to model structure error due to simplification and/or misrepresentation of the real system. As a result, the model outputs may systematically deviate from measurements, thus violating a key assumption for traditional regression-based calibration and uncertainty analysis. On the other hand, model structure error induced bias can be described statistically in an inductive, data-driven way based on historical model-to-measurement misfit. We adopt a fully Bayesian approach that integrates a Gaussian process error model to account for model structure error to the calibration, prediction and uncertainty analysis of groundwater models. The posterior distributions of parameters of the groundwater model and the Gaussian process error model are jointly inferred using DREAM, an efficient Markov chain Monte Carlo sampler. We test the usefulness of the fully Bayesian approach towards a synthetic case study of surface-ground water interaction under changing pumping conditions. We first illustrate through this example that traditional least squares regression without accounting for model structure error yields biased parameter estimates due to parameter compensation as well as biased predictions. In contrast, the Bayesian approach gives less biased parameter estimates. Moreover, the integration of a Gaussian process error model significantly reduces predictive bias and leads to prediction intervals that are more consistent with observations. The results highlight the importance of explicit treatment of model structure error especially in circumstances where subsequent decision-making and risk analysis require accurate prediction and uncertainty quantification. In addition, the data-driven error modelling approach is capable of extracting more information from observation data than using a groundwater model alone.
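    A minimal sketch of this kind of error model is a Gaussian process discrepancy added to the simulator output and marginalized in the likelihood; the toy simulator, squared-exponential kernel, and noise level below are placeholders, and the paper's actual DREAM-based joint inference is not reproduced here.

```python
import numpy as np

def sq_exp_kernel(x, sigma2, ell):
    """Squared-exponential covariance between 1-D input locations."""
    d = x[:, None] - x[None, :]
    return sigma2 * np.exp(-0.5 * (d / ell) ** 2)

def log_likelihood(y_obs, y_model, x, sigma2_gp, ell, sigma2_noise):
    """Gaussian log-likelihood with a GP model-discrepancy term:
    y_obs = y_model(theta) + delta(x) + eps, delta ~ GP(0, k), eps ~ N(0, sigma2_noise).
    Marginalising delta gives y_obs ~ N(y_model, K + sigma2_noise * I)."""
    K = sq_exp_kernel(x, sigma2_gp, ell) + sigma2_noise * np.eye(len(x))
    r = y_obs - y_model
    _, logdet = np.linalg.slogdet(K)
    return -0.5 * (r @ np.linalg.solve(K, r) + logdet + len(x) * np.log(2 * np.pi))

# Toy usage: a structurally deficient "simulator" compared against data
# generated from a system with a smooth discrepancy.
rng = np.random.default_rng(3)
x = np.linspace(0.0, 10.0, 30)
truth = 2.0 * x + 1.5 * np.sin(x)     # real system
simulator = 2.0 * x                   # model with structure error
y_obs = truth + rng.normal(0.0, 0.2, x.size)
print("logL without discrepancy term:",
      round(log_likelihood(y_obs, simulator, x, 1e-9, 1.0, 0.04), 1))
print("logL with GP discrepancy term:",
      round(log_likelihood(y_obs, simulator, x, 2.0, 2.0, 0.04), 1))
```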

  15. Experimental Quantum Error Detection

    PubMed Central

    Jin, Xian-Min; Yi, Zhen-Huan; Yang, Bin; Zhou, Fei; Yang, Tao; Peng, Cheng-Zhi

    2012-01-01

    Faithful transmission of quantum information is a crucial ingredient in quantum communication networks. To overcome the unavoidable decoherence in a noisy channel, to date, many efforts have been made to transmit one state by consuming large numbers of time-synchronized ancilla states. However, such huge demands of quantum resources are hard to meet with current technology and this restricts practical applications. Here we experimentally demonstrate quantum error detection, an economical approach to reliably protecting a qubit against bit-flip errors. Arbitrary unknown polarization states of single photons and entangled photons are converted into time bins deterministically via a modified Franson interferometer. Noise arising in both 10 m and 0.8 km fiber, which induces associated errors on the reference frame of time bins, is filtered when photons are detected. The demonstrated resource efficiency and state independence make this protocol a promising candidate for implementing a real-world quantum communication network. PMID:22953047

  16. Measurement error revisited

    NASA Astrophysics Data System (ADS)

    Henderson, Robert K.

    1999-12-01

    It is widely accepted in the electronics industry that measurement gauge error variation should be no larger than 10% of the related specification window. In a previous paper, 'What Amount of Measurement Error is Too Much?', the author used a framework from the process industries to evaluate the impact of measurement error variation in terms of both customer and supplier risk (i.e., Non-conformance and Yield Loss). Application of this framework in its simplest form suggested that in many circumstances the 10% criterion might be more stringent than is reasonably necessary. This paper reviews the framework and results of the earlier work, then examines some of the possible extensions to this framework suggested in that paper, including variance component models and sampling plans applicable in the photomask and semiconductor businesses. The potential impact of imperfect process control practices will be examined as well.
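    The 10% rule compares gauge variation with the specification window; one conventional formulation is the precision-to-tolerance ratio sketched below, where the 6-sigma gauge spread and the example numbers are generic illustrations rather than values from the paper.

```python
def precision_to_tolerance(sigma_gauge: float, lsl: float, usl: float) -> float:
    """P/T ratio: fraction of the spec window consumed by gauge variation,
    using the conventional 6-sigma spread for the gauge."""
    return 6.0 * sigma_gauge / (usl - lsl)

# Example: a linewidth spec of 100 +/- 10 nm and a gauge standard deviation of 1.2 nm.
ratio = precision_to_tolerance(sigma_gauge=1.2, lsl=90.0, usl=110.0)
print(f"P/T = {ratio:.1%} -> {'meets' if ratio <= 0.10 else 'exceeds'} the 10% criterion")
```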

  17. Surprise beyond prediction error

    PubMed Central

    Chumbley, Justin R; Burke, Christopher J; Stephan, Klaas E; Friston, Karl J; Tobler, Philippe N; Fehr, Ernst

    2014-01-01

    Surprise drives learning. Various neural “prediction error” signals are believed to underpin surprise-based reinforcement learning. Here, we report a surprise signal that reflects reinforcement learning but is neither un/signed reward prediction error (RPE) nor un/signed state prediction error (SPE). To exclude these alternatives, we measured surprise responses in the absence of RPE and accounted for a host of potential SPE confounds. This new surprise signal was evident in ventral striatum, primary sensory cortex, frontal poles, and amygdala. We interpret these findings via a normative model of surprise. PMID:24700400

  18. Evolution of error diffusion

    NASA Astrophysics Data System (ADS)

    Knox, Keith T.

    1999-10-01

    As we approach the new millennium, error diffusion is approaching the 25th anniversary of its invention. Because of its exceptionally high image quality, it continues to be a popular choice among digital halftoning algorithms. Over the last 24 years, many attempts have been made to modify and improve the algorithm--to eliminate unwanted textures and to extend it to printing media and color. Some of these modifications have been very successful and are in use today. This paper will review the history of the algorithm and its modifications. Three watershed events in the development of error diffusion will be described, together with the lessons learned along the way.
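    For readers unfamiliar with the algorithm itself, the sketch below implements the classic Floyd-Steinberg variant of error diffusion, one of the versions covered by such reviews: each pixel is thresholded and its quantization error is distributed onto not-yet-processed neighbors.

```python
import numpy as np

def floyd_steinberg(gray: np.ndarray) -> np.ndarray:
    """Binary halftone of a grayscale image in [0, 1] by error diffusion
    with the classic Floyd-Steinberg weights (7, 3, 5, 1)/16."""
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            # Push the quantization error onto unprocessed neighbors.
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:
                img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:
                img[y + 1, x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                img[y + 1, x + 1] += err * 1 / 16
    return out

# A smooth ramp halftones to a pattern whose local dot density tracks the input.
ramp = np.tile(np.linspace(0, 1, 64), (16, 1))
halftone = floyd_steinberg(ramp)
print("mean input:", ramp.mean().round(3), " mean output:", halftone.mean().round(3))
```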

  19. Evolution of error diffusion

    NASA Astrophysics Data System (ADS)

    Knox, Keith T.

    1998-12-01

    As we approach the new millennium, error diffusion is approaching the 25th anniversary of its invention. Because of its exceptionally high image quality, it continues to be a popular choice among digital halftoning algorithms. Over the last 24 years, many attempts have been made to modify and improve the algorithm - to eliminate unwanted textures and to extend it to printing media and color. Some of these modifications have been very successful and are in use today. This paper will review the history of the algorithm and its modifications. Three watershed events in the development of error diffusion will be described, together with the lessons learned along the way.

  20. Error Free Software

    NASA Technical Reports Server (NTRS)

    1985-01-01

    A mathematical theory for development of "higher order" software to catch computer mistakes resulted from a Johnson Space Center contract for Apollo spacecraft navigation. Two women who were involved in the project formed Higher Order Software, Inc. to develop and market the system of error analysis and correction. They designed software which is logically error-free, which, in one instance, was found to increase productivity by 600%. USE.IT defines its objectives using AXES -- a user can write in English and the system converts to computer languages. It is employed by several large corporations.

  1. Effect of Numerical Error on Gravity Field Estimation for GRACE and Future Gravity Missions

    NASA Astrophysics Data System (ADS)

    McCullough, Christopher; Bettadpur, Srinivas

    2015-04-01

    In recent decades, gravity field determination from low Earth orbiting satellites, such as the Gravity Recovery and Climate Experiment (GRACE), has become increasingly more effective due to the incorporation of high accuracy measurement devices. Since instrumentation quality will only increase in the near future and the gravity field determination process is computationally and numerically intensive, numerical error from the use of double precision arithmetic will eventually become a prominent error source. While using double-extended or quadruple precision arithmetic will reduce these errors, the numerical limitations of current orbit determination algorithms and processes must be accurately identified and quantified in order to adequately inform the science data processing techniques of future gravity missions. The most obvious numerical limitation in the orbit determination process is evident in the comparison of measured observables with computed values, derived from mathematical models relating the satellites' numerically integrated state to the observable. Significant error in the computed trajectory will corrupt this comparison and induce error in the least squares solution of the gravitational field. In addition, errors in the numerically computed trajectory propagate into the evaluation of the mathematical measurement model's partial derivatives. These errors amalgamate in turn with numerical error from the computation of the state transition matrix, computed using the variational equations of motion, in the least squares mapping matrix. Finally, the solution of the linearized least squares system, computed using a QR factorization, is also susceptible to numerical error. Certain interesting combinations of each of these numerical errors are examined in the framework of GRACE gravity field determination to analyze and quantify their effects on gravity field recovery.
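    A small numerical illustration of the final point (not a GRACE computation): solving the same ill-conditioned least-squares problem by QR factorization in single and double precision shows how arithmetic precision alone limits the recoverable parameters.

```python
import numpy as np

rng = np.random.default_rng(7)

# Ill-conditioned design matrix (Vandermonde-like), as a stand-in for the
# poorly conditioned systems that arise in gravity field estimation.
t = np.linspace(0, 1, 200)
A = np.vander(t, 12, increasing=True)
x_true = rng.normal(size=12)
b = A @ x_true

def lstsq_qr(A, b, dtype):
    """Solve the least-squares problem via QR in the requested precision."""
    Q, R = np.linalg.qr(A.astype(dtype))
    return np.linalg.solve(R, Q.T @ b.astype(dtype))

for dtype in (np.float32, np.float64):
    x_hat = lstsq_qr(A, b, dtype)
    err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
    print(f"{np.dtype(dtype).name:8s} relative parameter error: {err:.1e}")
```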

  2. Assessment of Aliasing Errors in Low-Degree Coefficients Inferred from GPS Data

    PubMed Central

    Wei, Na; Fang, Rongxin

    2016-01-01

    With sparse and uneven site distribution, Global Positioning System (GPS) data is just barely able to infer low-degree coefficients in the surface mass field. The unresolved higher-degree coefficients turn out to introduce aliasing errors into the estimates of low-degree coefficients. To reduce the aliasing errors, the optimal truncation degree should be employed. Using surface displacements simulated from loading models, we theoretically prove that the optimal truncation degree should be degree 6–7 for a GPS inversion and degree 20 for combining GPS and Ocean Bottom Pressure (OBP) with no additional regularization. The optimal truncation degree should be decreased to degree 4–5 for real GPS data. Additionally, we prove that a Scaled Sensitivity Matrix (SSM) approach can be used to quantify the aliasing errors due to any one or any combination of unresolved higher degrees, which is beneficial to identify the major error source from among all the unresolved higher degrees. Results show that the unresolved higher degrees lower than degree 20 are the major error source for global inversion. We also theoretically prove that the SSM approach can be used to mitigate the aliasing errors in a GPS inversion, if the neglected higher degrees are well known from other sources. PMID:27187392

  3. Assessment of Aliasing Errors in Low-Degree Coefficients Inferred from GPS Data.

    PubMed

    Wei, Na; Fang, Rongxin

    2016-01-01

    With sparse and uneven site distribution, Global Positioning System (GPS) data is just barely able to infer low-degree coefficients in the surface mass field. The unresolved higher-degree coefficients turn out to introduce aliasing errors into the estimates of low-degree coefficients. To reduce the aliasing errors, the optimal truncation degree should be employed. Using surface displacements simulated from loading models, we theoretically prove that the optimal truncation degree should be degree 6-7 for a GPS inversion and degree 20 for combining GPS and Ocean Bottom Pressure (OBP) with no additional regularization. The optimal truncation degree should be decreased to degree 4-5 for real GPS data. Additionally, we prove that a Scaled Sensitivity Matrix (SSM) approach can be used to quantify the aliasing errors due to any one or any combination of unresolved higher degrees, which is beneficial to identify the major error source from among all the unresolved higher degrees. Results show that the unresolved higher degrees lower than degree 20 are the major error source for global inversion. We also theoretically prove that the SSM approach can be used to mitigate the aliasing errors in a GPS inversion, if the neglected higher degrees are well known from other sources. PMID:27187392
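    As a toy illustration of the aliasing mechanism described in these records (one-dimensional Legendre degrees standing in for spherical-harmonic degrees, unrelated to the papers' data), fitting only low-degree terms at a sparse, uneven set of points lets unresolved higher-degree signal leak into the low-degree estimates.

```python
import numpy as np

rng = np.random.default_rng(11)

# "Truth": coefficients up to degree 20 in a Legendre basis on [-1, 1].
deg_full, deg_trunc = 20, 5
c_true = rng.normal(0, 1, deg_full + 1) / (1 + np.arange(deg_full + 1))

# Sparse, uneven sampling sites (stand-in for an uneven station network).
x = np.sort(rng.uniform(-1, 1, 40))
y = np.polynomial.legendre.legval(x, c_true)

# Truncated inversion: estimate only degrees 0..deg_trunc.
A = np.polynomial.legendre.legvander(x, deg_trunc)
c_hat, *_ = np.linalg.lstsq(A, y, rcond=None)

# The misfit between estimated and true low-degree coefficients is the
# aliasing error caused by the unresolved degrees 6..20.
alias_err = c_hat - c_true[:deg_trunc + 1]
print("low-degree aliasing error:", np.round(alias_err, 3))
```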

  4. Help prevent hospital errors

    MedlinePlus


  5. Orwell's Instructive Errors

    ERIC Educational Resources Information Center

    Julian, Liam

    2009-01-01

    In this article, the author talks about George Orwell, his instructive errors, and the manner in which Orwell pierced worthless theory, faced facts and defended decency (with fluctuating success), and largely ignored the tradition of accumulated wisdom that has rendered him a timeless teacher--one whose inadvertent lessons, while infrequently…

  6. Challenge and Error: Critical Events and Attention-Related Errors

    ERIC Educational Resources Information Center

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ↔ attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  7. NLO error propagation exercise: statistical results

    SciTech Connect

    Pack, D.J.; Downing, D.J.

    1985-09-01

    Error propagation is the extrapolation and cumulation of uncertainty (variance) about total amounts of special nuclear material, for example, uranium or 235U, that are present in a defined location at a given time. The uncertainty results from the inevitable inexactness of individual measurements of weight, uranium concentration, 235U enrichment, etc. The extrapolated and cumulated uncertainty leads directly to quantified limits of error on inventory differences (LEIDs) for such material. The NLO error propagation exercise was planned as a field demonstration of the utilization of statistical error propagation methodology at the Feed Materials Production Center in Fernald, Ohio from April 1 to July 1, 1983 in a single material balance area formed specially for the exercise. Major elements of the error propagation methodology were: variance approximation by Taylor Series expansion; variance cumulation by uncorrelated primary error sources as suggested by Jaech; random effects ANOVA model estimation of variance effects (systematic error); provision for inclusion of process variance in addition to measurement variance; and exclusion of static material. The methodology was applied to material balance area transactions from the indicated time period through a FORTRAN computer code developed specifically for this purpose on the NLO HP-3000 computer. This paper contains a complete description of the error propagation methodology and a full summary of the numerical results of applying the methodology in the field demonstration. The error propagation LEIDs did encompass the actual uranium and 235U inventory differences. Further, one can see that error propagation actually provides guidance for reducing inventory differences and LEIDs in future time periods.
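    The core step, first-order (Taylor series) variance propagation, can be sketched for a single line item whose 235U mass is the product of net weight, uranium concentration, and enrichment; the numerical values are illustrative and not taken from the exercise.

```python
import numpy as np

# Illustrative measurements for one item: net weight (kg), uranium mass
# fraction, and 235U enrichment, each with a relative standard deviation.
W, rsd_W = 250.0, 0.001
C, rsd_C = 0.85, 0.004
E, rsd_E = 0.0072, 0.003

m235 = W * C * E   # 235U mass (kg)

# First-order Taylor propagation: for a pure product of independent
# measurements, the relative variances add.
rel_var = rsd_W**2 + rsd_C**2 + rsd_E**2
sigma_m235 = m235 * np.sqrt(rel_var)
print(f"235U mass = {m235:.4f} kg, 1-sigma = {sigma_m235:.5f} kg "
      f"({np.sqrt(rel_var):.2%} relative)")
```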

  8. Bond additivity corrections for quantum chemistry methods

    SciTech Connect

    C. F. Melius; M. D. Allendorf

    1999-04-01

    In the 1980's, the authors developed a bond-additivity correction procedure for quantum chemical calculations called BAC-MP4, which has proven reliable in calculating the thermochemical properties of molecular species, including radicals as well as stable closed-shell species. New Bond Additivity Correction (BAC) methods have been developed for the G2 method, BAC-G2, as well as for a hybrid DFT/MP2 method, BAC-Hybrid. These BAC methods use a new form of BAC corrections, involving atomic, molecular, and bond-wise additive terms. These terms enable one to treat positive and negative ions as well as neutrals. The BAC-G2 method reduces errors in the G2 method due to nearest-neighbor bonds. The parameters within the BAC-G2 method only depend on atom types. Thus the BAC-G2 method can be used to determine the parameters needed by BAC methods involving lower levels of theory, such as BAC-Hybrid and BAC-MP4. The BAC-Hybrid method should scale well for large molecules. The BAC-Hybrid method uses the differences between the DFT and MP2 as an indicator of the method's accuracy, while the BAC-G2 method uses its internal methods (G1 and G2MP2) to provide an indicator of its accuracy. Indications of the average error as well as worst cases are provided for each of the BAC methods.

  9. Airplane wing vibrations due to atmospheric turbulence

    NASA Technical Reports Server (NTRS)

    Pastel, R. L.; Caruthers, J. E.; Frost, W.

    1981-01-01

    The magnitude of error introduced due to wing vibration when measuring atmospheric turbulence with a wind probe mounted at the wing tip was studied. It was also determined whether accelerometers mounted on the wing tip are needed to correct this error. A spectrum analysis approach is used to determine the error. Estimates of the B-57 wing characteristics are used to simulate the airplane wing, and von Karman's cross spectrum function is used to simulate atmospheric turbulence. It was found that wing vibration introduces large errors in measured spectra of turbulence in the frequency range close to the natural frequencies of the wing.

  10. [The error, source of learning].

    PubMed

    Joyeux, Stéphanie; Bohic, Valérie

    2016-05-01

    The error itself is not recognised as a fault. It is the intentionality which differentiates between an error and a fault. An error is unintentional while a fault is a failure to respect known rules. The risk of error is omnipresent in health institutions. Public authorities have therefore set out a series of measures to reduce this risk. PMID:27155272

  11. Imagery of Errors in Typing

    ERIC Educational Resources Information Center

    Rieger, Martina; Martinez, Fanny; Wenke, Dorit

    2011-01-01

    Using a typing task we investigated whether insufficient imagination of errors and error corrections is related to duration differences between execution and imagination. In Experiment 1 spontaneous error imagination was investigated, whereas in Experiment 2 participants were specifically instructed to imagine errors. Further, in Experiment 2 we…

  12. Neural Correlates of Reach Errors

    PubMed Central

    Hashambhoy, Yasmin; Rane, Tushar; Shadmehr, Reza

    2005-01-01

    Reach errors may be broadly classified into errors arising from unpredictable changes in target location, called target errors, and errors arising from miscalibration of internal models, called execution errors. Execution errors may be caused by miscalibration of dynamics (e.g., when a force field alters limb dynamics) or by miscalibration of kinematics (e.g., when prisms alter visual feedback). While all types of errors lead to similar online corrections, we found that the motor system showed strong trial-by-trial adaptation in response to random execution errors but not in response to random target errors. We used fMRI and a compatible robot to study brain regions involved in processing each kind of error. Both kinematic and dynamic execution errors activated regions along the central and the post-central sulci and in lobules V, VI, and VIII of the cerebellum, making these areas possible sites of plastic changes in internal models for reaching. Only activity related to kinematic errors extended into parietal area 5. These results are inconsistent with the idea that kinematics and dynamics of reaching are computed in separate neural entities. In contrast, only target errors caused increased activity in the striatum and the posterior superior parietal lobule. The cerebellum and motor cortex were as strongly activated as with execution errors. These findings indicate a neural and behavioral dissociation between errors that lead to switching of behavioral goals, and errors that lead to adaptation of internal models of limb dynamics and kinematics. PMID:16251440

  13. Automatic Error Analysis Using Intervals

    ERIC Educational Resources Information Center

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
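    A concrete version of the comparison can be sketched with a minimal interval type: evaluating a formula on intervals yields a guaranteed error bound that can be set against first-order propagation. The formula and uncertainties below are arbitrary examples, not those used in the article.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o):
        return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        p = (self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi)
        return Interval(min(p), max(p))

def interval(value, err):
    """Interval centred on a measured value with half-width err."""
    return Interval(value - err, value + err)

# Example formula z = x*y + x - y, with x = 2.0 +/- 0.05 and y = 3.0 +/- 0.02.
x, dx = 2.0, 0.05
y, dy = 3.0, 0.02
z_int = interval(x, dx) * interval(y, dy) + interval(x, dx) - interval(y, dy)

# First-order error propagation for comparison: dz = |dz/dx| dx + |dz/dy| dy.
dz = abs(y + 1) * dx + abs(x - 1) * dy
print(f"interval bound : [{z_int.lo:.3f}, {z_int.hi:.3f}]")
print(f"linear estimate: {x*y + x - y:.3f} +/- {dz:.3f}")
```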

  14. The Insufficiency of Error Analysis

    ERIC Educational Resources Information Center

    Hammarberg, B.

    1974-01-01

    The position here is that error analysis is inadequate, particularly from the language-teaching point of view. Non-errors must be considered in specifying the learner's current command of the language, its limits, and his learning tasks. A cyclic procedure of elicitation and analysis, to secure evidence of errors and non-errors, is outlined.…

  15. Control by model error estimation

    NASA Technical Reports Server (NTRS)

    Likins, P. W.; Skelton, R. E.

    1976-01-01

    Modern control theory relies upon the fidelity of the mathematical model of the system. Truncated modes, external disturbances, and parameter errors in linear system models are corrected by augmenting the original system of equations with an 'error system' which is designed to approximate the effects of such model errors. A Chebyshev error system is developed for application to the Large Space Telescope (LST).

  16. Mismatch-mediated error prone repair at the Immunoglobulin genes

    PubMed Central

    Chahwan, Richard; Edelmann, Winfried; Scharff, Matthew D; Roa, Sergio

    2011-01-01

    The generation of effective antibodies depends upon somatic hypermutation (SHM) and class-switch recombination (CSR) of antibody genes by activation induced cytidine deaminase (AID) and the subsequent recruitment of error prone base excision and mismatch repair. While AID initiates and is required for SHM, more than half of the base changes that accumulate in V regions are not due to the direct deamination of dC to dU by AID, but rather arise through the recruitment of the mismatch repair complex (MMR) to the U:G mismatch created by AID and the subsequent perversion of mismatch repair from a high fidelity process to one that is very error prone. In addition, the generation of double-strand breaks (DSBs) is essential during CSR, and the resolution of AID-generated mismatches by MMR to promote such DSBs is critical for the efficiency of the process. While a great deal has been learned about how AID and MMR cause hypermutations and DSBs, it is still unclear how the error prone aspect of these processes is largely restricted to antibody genes. The use of knockout models and mice expressing mismatch repair proteins with separation-of-function point mutations has been decisive in gaining a better understanding of the roles of each of the major MMR proteins and providing further insight into how mutation and repair are coordinated. Here, we review the cascade of MMR factors and repair signals that are diverted from their canonical error free role and hijacked by B cells to promote genetic diversification of the Ig locus. This error prone process involves AID as the inducer of enzymatically-mediated DNA mismatches, and a plethora of downstream MMR factors acting as sensors, adaptors and effectors of a complex and tightly regulated process, much of which is not yet well understood. PMID:22100214

  17. Manson's triple error.

    PubMed

    Delaporte, F

    2008-09-01

    The author discusses the significance, implications and limitations of Manson's work. How did Patrick Manson resolve some of the major problems raised by the filarial worm life cycle? The Amoy physician showed that circulating embryos could only leave the blood via the percutaneous route, thereby requiring a bloodsucking insect. The discovery of a new autonomous, airborne, active host undoubtedly had a considerable impact on the history of parasitology, but the way in which Manson formulated and solved the problem of the transfer of filarial worms from the body of the mosquito to man resulted in failure. This article shows how the epistemological transformation operated by Manson was indissociably related to a series of errors and how a major breakthrough can be the result of a series of false proposals and, consequently, that the history of truth often involves a history of error. PMID:18814729

  18. Error-Free Software

    NASA Technical Reports Server (NTRS)

    1989-01-01

001 is an integrated tool suite for automatically developing ultra reliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer aided software engineering product in the industry to concentrate on automatically supporting the development of an ultrareliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.

  19. A Review of the Literature on Computational Errors With Whole Numbers. Mathematics Education Diagnostic and Instructional Centre (MEDIC).

    ERIC Educational Resources Information Center

    Burrows, J. K.

    Research on error patterns associated with whole number computation is reviewed. Details of the results of some of the individual studies cited are given in the appendices. In Appendix A, 33 addition errors, 27 subtraction errors, 41 multiplication errors, and 41 division errors are identified, and the frequency of these errors made by 352…

  20. Effects of various experimental parameters on errors in triangulation solution of elongated object in space. [barium ion cloud

    NASA Technical Reports Server (NTRS)

    Long, S. A. T.

    1975-01-01

    The effects of various experimental parameters on the displacement errors in the triangulation solution of an elongated object in space due to pointing uncertainties in the lines of sight have been determined. These parameters were the number and location of observation stations, the object's location in latitude and longitude, and the spacing of the input data points on the azimuth-elevation image traces. The displacement errors due to uncertainties in the coordinates of a moving station have been determined as functions of the number and location of the stations. The effects of incorporating the input data from additional cameras at one of the stations were also investigated.

  1. Error-correction coding

    NASA Technical Reports Server (NTRS)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.

  2. TOA/FOA geolocation error analysis.

    SciTech Connect

    Mason, John Jeffrey

    2008-08-01

    This paper describes how confidence intervals can be calculated for radiofrequency emitter position estimates based on time-of-arrival and frequency-of-arrival measurements taken at several satellites. These confidence intervals take the form of 50th and 95th percentile circles and ellipses to convey horizontal error and linear intervals to give vertical error. We consider both cases where an assumed altitude is and is not used. Analysis of velocity errors is also considered. We derive confidence intervals for horizontal velocity magnitude and direction including the case where the emitter velocity is assumed to be purely horizontal, i.e., parallel to the ellipsoid. Additionally, we derive an algorithm that we use to combine multiple position fixes to reduce location error. The algorithm uses all available data, after more than one location estimate for an emitter has been made, in a mathematically optimal way.
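    The report's own combination algorithm is not reproduced here; as a hedged sketch of the underlying idea, the snippet below fuses several independent position fixes by inverse-covariance (minimum-variance) weighting, assuming each fix comes with a covariance matrix from the TOA/FOA solution.

```python
# Minimum-variance fusion of independent position fixes (illustrative only).
import numpy as np

def fuse_position_fixes(fixes, covariances):
    """fixes: list of position vectors; covariances: matching covariance matrices."""
    info = sum(np.linalg.inv(P) for P in covariances)                 # total information
    weighted = sum(np.linalg.inv(P) @ x for x, P in zip(fixes, covariances))
    x_hat = np.linalg.solve(info, weighted)                           # fused position
    P_hat = np.linalg.inv(info)                                       # fused covariance
    return x_hat, P_hat
```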

  3. Bayesian Error Estimation Functionals

    NASA Astrophysics Data System (ADS)

    Jacobsen, Karsten W.

The challenge of approximating the exchange-correlation functional in Density Functional Theory (DFT) has led to the development of numerous different approximations of varying accuracy on different calculated properties. There is therefore a need for reliable estimation of prediction errors within the different approximation schemes to DFT. The Bayesian Error Estimation Functionals (BEEF) have been developed with this in mind. The functionals are constructed by fitting to experimental and high-quality computational databases for molecules and solids including chemisorption and van der Waals systems. This leads to reasonably accurate general-purpose functionals with particular focus on surface science. The fitting procedure involves considerations on how to combine different types of data, and applies Tikhonov regularization and bootstrap cross validation. The methodology has been applied to construct GGA and metaGGA functionals with and without inclusion of long-ranged van der Waals contributions. The error estimation is made possible by the generation of not only a single functional but through the construction of a probability distribution of functionals represented by a functional ensemble. The use of the functional ensemble is illustrated on compound heat of formation and by investigations of the reliability of calculated catalytic ammonia synthesis rates.
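    A minimal sketch of how a functional ensemble yields an error bar (the actual BEEF construction, with Tikhonov regularization and bootstrap cross-validation, is not reproduced): each ensemble member predicts the same observable, and the spread of those predictions is reported as the uncertainty.

```python
# Ensemble-based error estimate: spread of predictions over a functional ensemble.
import numpy as np

def ensemble_error_estimate(predictions):
    """predictions: the same observable computed with each ensemble functional."""
    predictions = np.asarray(predictions, dtype=float)
    best = predictions.mean()           # central value
    sigma = predictions.std(ddof=1)     # ensemble spread used as the error bar
    return best, sigma
```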

  4. Human Error In Complex Systems

    NASA Technical Reports Server (NTRS)

    Morris, Nancy M.; Rouse, William B.

    1991-01-01

Report presents results of research aimed at understanding causes of human error in such complex systems as aircraft, nuclear powerplants, and chemical processing plants. Research considered both slips (errors of action) and mistakes (errors of intention), and the influence of workload on them. Results indicated that humans respond to conditions in which errors are expected by attempting to reduce the incidence of errors, and that adaptation to conditions is a potent influence on human behavior in discretionary situations.

  5. Locked modes and magnetic field errors in MST

    SciTech Connect

    Almagri, A.F.; Assadi, S.; Prager, S.C.; Sarff, J.S.; Kerst, D.W.

    1992-06-01

    In the MST reversed field pinch magnetic oscillations become stationary (locked) in the lab frame as a result of a process involving interactions between the modes, sawteeth, and field errors. Several helical modes become phase locked to each other to form a rotating localized disturbance, the disturbance locks to an impulsive field error generated at a sawtooth crash, the error fields grow monotonically after locking (perhaps due to an unstable interaction between the modes and field error), and over the tens of milliseconds of growth confinement degrades and the discharge eventually terminates. Field error control has been partially successful in eliminating locking.

  6. Horizon sensor errors calculated by computer models compared with errors measured in orbit

    NASA Technical Reports Server (NTRS)

    Ward, K. A.; Hogan, R.; Andary, J.

    1982-01-01

    Using a computer program to model the earth's horizon and to duplicate the signal processing procedure employed by the ESA (Earth Sensor Assembly), errors due to radiance variation have been computed for a particular time of the year. Errors actually occurring in flight at the same time of year are inferred from integrated rate gyro data for a satellite of the TIROS series of NASA weather satellites (NOAA-A). The predicted performance is compared with actual flight history.

  7. Horizon Sensor Errors Calculated By Computer Models Compared With Errors Measured In Orbit

    NASA Astrophysics Data System (ADS)

    Ward, Kenneth A.; Hogan, Roger; Andary, James

    1982-06-01

Using a computer program to model the earth's horizon and to duplicate the signal processing procedure employed by the ESA (Earth Sensor Assembly), errors due to radiance variation have been computed for a particular time of the year. Errors actually occurring in flight at the same time of year are inferred from integrated rate gyro data for a satellite of the TIROS series of NASA weather satellites (NOAA-7). The predicted performance is compared with actual flight history.

  8. Errors of fourier chemical-shift imaging and their corrections

    NASA Astrophysics Data System (ADS)

    Wang, Zhiyue; Bolinger, Lizann; Subramanian, V. Harihara; Leigh, John S.

From a finite and discrete Fourier transform point of view, we discuss the sources of localization errors in Fourier chemical-shift imaging, and demonstrate them explicitly by computer simulations for simple cases. Errors arise from intravoxel dephasing and intravoxel asymmetry. The spectral leakage due to intravoxel dephasing is roughly 6-8% from one voxel to one of its nearest neighbors. Neighbors further away are influenced less significantly. The loss of localization due to the intravoxel asymmetry effect is also severe. Fortunately, these errors can be corrected under certain conditions. The method for correcting the errors by postprocessing the data is described.

  9. Evaluating a medical error taxonomy.

    PubMed Central

    Brixey, Juliana; Johnson, Todd R.; Zhang, Jiajie

    2002-01-01

    Healthcare has been slow in using human factors principles to reduce medical errors. The Center for Devices and Radiological Health (CDRH) recognizes that a lack of attention to human factors during product development may lead to errors that have the potential for patient injury, or even death. In response to the need for reducing medication errors, the National Coordinating Council for Medication Errors Reporting and Prevention (NCC MERP) released the NCC MERP taxonomy that provides a standard language for reporting medication errors. This project maps the NCC MERP taxonomy of medication error to MedWatch medical errors involving infusion pumps. Of particular interest are human factors associated with medical device errors. The NCC MERP taxonomy of medication errors is limited in mapping information from MEDWATCH because of the focus on the medical device and the format of reporting. PMID:12463789

  10. Improved modelling of tool tracking errors by modelling dependent marker errors.

    PubMed

    Thompson, Stephen; Penney, Graeme; Dasgupta, Prokar; Hawkes, David

    2013-02-01

    Accurate understanding of equipment tracking error is essential for decision making in image guided surgery. For tools tracked using markers attached to a rigid body, existing error estimation methods use the assumption that the individual marker errors are independent random variables. This assumption is not valid for all tracking systems. This paper presents a method to estimate a more accurate tracking error function, consisting of a systematic and random component. The proposed method does not require detailed knowledge of the tracking system physics. Results from a pointer calibration are used to demonstrate that the proposed method provides a better match to observed results than the existing state of the art. A simulation of the pointer calibration process is then used to show that existing methods can underestimate the pointer calibration error by a factor of two. A further simulation of laparoscopic camera tracking is used to show that existing methods cannot model important variations in system performance due to the angular arrangement of the tracking markers. By arranging the markers such that the systematic errors are nearly identical for all markers, the rotational component of the tracking error can be reduced, resulting in a significant reduction in target tracking errors. PMID:22961298
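    To make the error structure concrete, the following sketch simulates marker measurements under the assumption discussed above: each marker carries a fixed (systematic) offset plus independent random noise, rather than purely independent random errors; the offsets and noise level are hypothetical inputs.

```python
# Simulate marker positions with a systematic offset per marker plus random noise.
import numpy as np

rng = np.random.default_rng(0)

def simulate_marker_measurements(true_positions, systematic_offsets, sigma_random, n_samples):
    """true_positions, systematic_offsets: (m, 3) arrays; returns (n_samples, m, 3)."""
    noise = rng.normal(0.0, sigma_random, size=(n_samples,) + true_positions.shape)
    return true_positions + systematic_offsets + noise
```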

  11. Understanding error generation in fused deposition modeling

    NASA Astrophysics Data System (ADS)

    Bochmann, Lennart; Bayley, Cindy; Helu, Moneer; Transchel, Robert; Wegener, Konrad; Dornfeld, David

    2015-03-01

    Additive manufacturing offers completely new possibilities for the manufacturing of parts. The advantages of flexibility and convenience of additive manufacturing have had a significant impact on many industries, and optimizing part quality is crucial for expanding its utilization. This research aims to determine the sources of imprecision in fused deposition modeling (FDM). Process errors in terms of surface quality, accuracy and precision are identified and quantified, and an error-budget approach is used to characterize errors of the machine tool. It was determined that accuracy and precision in the y direction (0.08-0.30 mm) are generally greater than in the x direction (0.12-0.62 mm) and the z direction (0.21-0.57 mm). Furthermore, accuracy and precision tend to decrease at increasing axis positions. The results of this work can be used to identify possible process improvements in the design and control of FDM technology.

  12. Analysis of discretization errors in LES

    NASA Technical Reports Server (NTRS)

    Ghosal, Sandip

    1995-01-01

All numerical simulations of turbulence (DNS or LES) involve some discretization errors. The integrity of such simulations therefore depends on our ability to quantify and control such errors. In the classical literature on analysis of errors in partial differential equations, one typically studies simple linear equations (such as the wave equation or Laplace's equation). The qualitative insight gained from studying such simple situations is then used to design numerical methods for more complex problems such as the Navier-Stokes equations. Though such an approach may seem reasonable as a first approximation, it should be recognized that strongly nonlinear problems, such as turbulence, have a feature that is absent in linear problems. This feature is the simultaneous presence of a continuum of space and time scales. Thus, in an analysis of errors in the one dimensional wave equation, one may, without loss of generality, rescale the equations so that the dependent variable is always of order unity. This is not possible in the turbulence problem since the amplitudes of the Fourier modes of the velocity field have a continuous distribution. The objective of the present research is to provide some quantitative measures of numerical errors in such situations. Though the focus of this work is LES, the methods introduced here can be just as easily applied to DNS. Errors due to discretization of the time-variable are neglected for the purpose of this analysis.

  13. Evaluating operating system vulnerability to memory errors.

    SciTech Connect

    Ferreira, Kurt Brian; Bridges, Patrick G.; Pedretti, Kevin Thomas Tauke; Mueller, Frank; Fiala, David; Brightwell, Ronald Brian

    2012-05-01

    Reliability is of great concern to the scalability of extreme-scale systems. Of particular concern are soft errors in main memory, which are a leading cause of failures on current systems and are predicted to be the leading cause on future systems. While great effort has gone into designing algorithms and applications that can continue to make progress in the presence of these errors without restarting, the most critical software running on a node, the operating system (OS), is currently left relatively unprotected. OS resiliency is of particular importance because, though this software typically represents a small footprint of a compute node's physical memory, recent studies show more memory errors in this region of memory than the remainder of the system. In this paper, we investigate the soft error vulnerability of two operating systems used in current and future high-performance computing systems: Kitten, the lightweight kernel developed at Sandia National Laboratories, and CLE, a high-performance Linux-based operating system developed by Cray. For each of these platforms, we outline major structures and subsystems that are vulnerable to soft errors and describe methods that could be used to reconstruct damaged state. Our results show the Kitten lightweight operating system may be an easier target to harden against memory errors due to its smaller memory footprint, largely deterministic state, and simpler system structure.

  14. Bond additivity corrections for quantum chemistry methods

    SciTech Connect

    Melius, C.F.; Allendorf, M.D.

    2000-03-23

New bond additivity correction (BAC) methods have been developed for the G2 method, BAC-G2, as well as for a hybrid density functional theory (DFT) Moller-Plesset (MP2) method, BAC-hybrid. These BAC methods use a new form of BAC corrections, involving atomic, molecular, and bond-wise additive terms. These terms enable one to treat positive and negative ions as well as neutrals. The BAC-G2 method reduces errors in the G2 method due to nearest-neighbor bonds. The parameters within the BAC-G2 method only depend on atom types. Thus the BAC-G2 method can be used to determine the parameters needed by BAC methods involving lower levels of theory, such as BAC-hybrid and BAC-MP4. The BAC-hybrid method is expected to scale well for large molecules. The BAC-hybrid method uses the differences between the DFT and MP2 predictions as an indication of the method's accuracy, whereas the BAC-G2 method uses its internal methods (G1 and G2MP2) to accomplish this. A statistical analysis of the error in each of the methods is presented on the basis of calculations performed for large sets (more than 120) of molecules.
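    Schematically (the published BAC parameters are not reproduced here), a correction of this form adds atomic, molecular, and bond-wise terms to a raw electronic-structure energy; the dictionaries of correction values below are placeholders.

```python
# BAC-style correction: raw energy plus atomic, bond-wise and molecular terms.
def bac_corrected_energy(e_raw, atoms, bonds, atom_corr, bond_corr, mol_corr=0.0):
    """atoms: element symbols; bonds: (elem_a, elem_b) pairs; *_corr: placeholder tables."""
    e_atomic = sum(atom_corr.get(a, 0.0) for a in atoms)
    e_bonds = sum(bond_corr.get(tuple(sorted(b)), 0.0) for b in bonds)
    return e_raw + e_atomic + e_bonds + mol_corr
```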

  15. Error analysis and data reduction for interferometric surface measurements

    NASA Astrophysics Data System (ADS)

    Zhou, Ping

    High-precision optical systems are generally tested using interferometry, since it often is the only way to achieve the desired measurement precision and accuracy. Interferometers can generally measure a surface to an accuracy of one hundredth of a wave. In order to achieve an accuracy to the next order of magnitude, one thousandth of a wave, each error source in the measurement must be characterized and calibrated. Errors in interferometric measurements are classified into random errors and systematic errors. An approach to estimate random errors in the measurement is provided, based on the variation in the data. Systematic errors, such as retrace error, imaging distortion, and error due to diffraction effects, are also studied in this dissertation. Methods to estimate the first order geometric error and errors due to diffraction effects are presented. Interferometer phase modulation transfer function (MTF) is another intrinsic error. The phase MTF of an infrared interferometer is measured with a phase Siemens star, and a Wiener filter is designed to recover the middle spatial frequency information. Map registration is required when there are two maps tested in different systems and one of these two maps needs to be subtracted from the other. Incorrect mapping causes wavefront errors. A smoothing filter method is presented which can reduce the sensitivity to registration error and improve the overall measurement accuracy. Interferometric optical testing with computer-generated holograms (CGH) is widely used for measuring aspheric surfaces. The accuracy of the drawn pattern on a hologram decides the accuracy of the measurement. Uncertainties in the CGH manufacturing process introduce errors in holograms and then the generated wavefront. An optimal design of the CGH is provided which can reduce the sensitivity to fabrication errors and give good diffraction efficiency for both chrome-on-glass and phase etched CGHs.

  16. Study for compensation of unexpected image placement error caused by VSB mask writer deflector

    NASA Astrophysics Data System (ADS)

    Lee, Hyun-joo; Choi, Min-kyu; Moon, Seong-yong; Cho, Han-Ku; Doh, Jonggul; Ahn, Jinho

    2012-11-01

The Electron Optical System (EOS) is designed for an electron-beam machine employing a vector-scanned variable shaped beam (VSB) with a deflector. Most VSB systems utilize a multi-stage deflection architecture to obtain high precision and high-speed deflection at the same time. Many companies use VSB mask writers and have considerable experience with Image Placement (IP) errors arising from a contaminated EOS deflector; most VSB mask writer users already encounter this error. To keep older VSB mask writers in use, we introduce a method to compensate for the unexpected IP error of the VSB mask writer. There are two ways to mitigate this error due to a contaminated deflector: one is the use of a 2nd-stage grid correction in addition to the original stage grid, and the other is the use of an uncontaminated area of the deflector. According to the results of this paper, 30% of the IP error can be reduced by the 2nd-stage grid correction and the change of deflection area in the deflector. This is an effective method to reduce the deflector error in the VSB mask writer, and it can be one of the solutions for long-term photomask production.

  17. The calculation of moment uncertainties from velocity distribution functions with random errors

    NASA Astrophysics Data System (ADS)

    Gershman, Daniel J.; Dorelli, John C.; F.-Viñas, Adolfo; Pollock, Craig J.

    2015-08-01

    Instrumentation that detects individual plasma particles is susceptible to random counting errors. These errors propagate into the calculations of moments of measured particle velocity distribution functions. Although rules of thumb exist for the effects of random errors on the calculation of lower order moments (e.g., density, velocity, and temperature) of Maxwell-Boltzmann distributions, they do not generally apply to nonthermal distributions or to higher-order moments. To date, such errors have only been estimated using brute force Monte Carlo techniques, i.e., repeated (~50) samplings of distribution functions. Here we present a mathematical formalism for analytically obtaining uncertainty estimates of plasma moments due to random errors either measured in situ by instruments or synthesized by particle simulations. Our uncertainty estimates precisely match the statistical variation of simulated plasma moments and carry the computational cost equivalent of only ~15 Monte Carlo samplings. In addition, we provide the means to calculate a covariance matrix that can be reported along with typical plasma moments. This matrix enables the propagation of statistical errors into arbitrary coordinate systems or functions of plasma moments without the need to reanalyze full distribution functions. Our methodology, which is applied to electron data from Plasma Electron and Current Experiment on the Cluster spacecraft as an example, is relevant to both existing and future data sets and requires only instrument-measured counts and phase space densities reported for a set of calibrated energy-angle targets.
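    The full formalism is in the paper; as a simplified sketch of the idea, when a moment is a weighted sum of counts, Poisson counting statistics give its variance analytically, with no Monte Carlo resampling. The weights below stand in for the instrument-dependent conversion factors.

```python
# Analytic uncertainty of a zeroth moment (density) built from Poisson counts.
import numpy as np

def density_and_uncertainty(counts, weights):
    """counts: counts per energy-angle bin; weights: assumed conversion weights."""
    counts = np.asarray(counts, dtype=float)
    weights = np.asarray(weights, dtype=float)
    n = np.sum(weights * counts)            # moment estimate
    var_n = np.sum(weights ** 2 * counts)   # Var[c_i] = c_i for Poisson counts
    return n, np.sqrt(var_n)
```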

  18. Counting OCR errors in typeset text

    NASA Astrophysics Data System (ADS)

    Sandberg, Jonathan S.

    1995-03-01

Frequently, object recognition accuracy is a key component in the performance analysis of pattern matching systems. In the past three years, the results of numerous excellent and rigorous studies of OCR system typeset-character accuracy (henceforth OCR accuracy) have been published, encouraging performance comparisons between a variety of OCR products and technologies. These published figures are important; OCR vendor advertisements in the popular trade magazines lead readers to believe that published OCR accuracy figures affect market share in the lucrative OCR market. Curiously, a detailed review of many of these OCR error occurrence counting results reveals that they are not reproducible as published and they are not strictly comparable due to larger variances in the counts than would be expected by the sampling variance. Naturally, since OCR accuracy is based on a ratio of the number of OCR errors over the size of the text searched for errors, imprecise OCR error accounting leads to similar imprecision in OCR accuracy. Some published papers use informal, non-automatic, or intuitively correct OCR error accounting. Still other published results present OCR error accounting methods based on string matching algorithms such as dynamic programming using Levenshtein (edit) distance but omit critical implementation details (such as the existence of suspect markers in the OCR generated output or the weights used in the dynamic programming minimization procedure). The problem with not specifically revealing the accounting method is that the numbers of errors found by different methods differ significantly. This paper identifies the basic accounting methods used to measure OCR errors in typeset text and offers an evaluation and comparison of the various accounting methods.
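    One of the accounting methods named above can be sketched directly; the snippet below counts OCR errors as the unweighted Levenshtein (edit) distance between reference and recognized text. Published figures will differ when suspect markers or non-unit weights are used.

```python
# Unweighted Levenshtein distance between reference text and OCR output.
def levenshtein(ref, hyp):
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        cur = [i]
        for j, h in enumerate(hyp, start=1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (r != h)))   # substitution
        prev = cur
    return prev[-1]

# e.g. character error rate = levenshtein(ref, hyp) / len(ref)
```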

  19. Proper handling of random errors and distortions in astronomical data analysis

    NASA Astrophysics Data System (ADS)

Cardiel, Nicolas; Gorgas, Javier; Gallego, Jesús; Serrano, Angel; Zamorano, Jaime; Garcia-Vargas, Maria-Luisa; Gomez-Cambronero, Pedro; Filgueira, Jose M.

    2002-12-01

    The aim of a data reduction process is to minimize the influence of data acquisition imperfections on the estimation of the desired astronomical quantity. For this purpose, one must perform appropriate manipulations with data and calibration frames. In addition, random-error frames (computed from first principles: expected statistical distribution of photo-electrons, detector gain, readout-noise, etc.), corresponding to the raw-data frames, can also be properly reduced. This parallel treatment of data and errors guarantees the correct propagation of random errors due to the arithmetic manipulations throughout the reduction procedure. However, due to the unavoidable fact that the information collected by detectors is physically sampled, this approach collides with a major problem: errors are correlated when applying image manipulations involving non-integer pixel shifts of data. Since this is actually the case for many common reduction steps (wavelength calibration into a linear scale, image rectification when correcting for geometric distortions,...), we discuss the benefits of considering the data reduction as the full characterization of the raw-data frames, but avoiding, as far as possible, the arithmetic manipulation of that data until the final measure of the image properties with a scientific meaning for the astronomer. For this reason, it is essential that the software tools employed for the analysis of the data perform their work using that characterization. In that sense, the real reduction of the data should be performed during the analysis, and not before, in order to guarantee the proper treatment of errors.
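    A minimal sketch of the parallel data/error treatment described above, assuming uncorrelated pixels: each data frame carries a variance frame that is updated alongside every arithmetic step (resampling steps, which correlate errors, are exactly what the authors recommend postponing).

```python
# Propagate per-pixel variance frames through simple arithmetic reduction steps.
import numpy as np

def subtract_frames(data_a, var_a, data_b, var_b):
    return data_a - data_b, var_a + var_b          # independent errors assumed

def scale_frame(data, var, k):
    return k * data, (k ** 2) * var                # variance scales with k^2
```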

  20. Neutron multiplication error in TRU waste measurements

    SciTech Connect

    Veilleux, John; Stanfield, Sean B; Wachter, Joe; Ceo, Bob

    2009-01-01

    Total Measurement Uncertainty (TMU) in neutron assays of transuranic waste (TRU) are comprised of several components including counting statistics, matrix and source distribution, calibration inaccuracy, background effects, and neutron multiplication error. While a minor component for low plutonium masses, neutron multiplication error is often the major contributor to the TMU for items containing more than 140 g of weapons grade plutonium. Neutron multiplication arises when neutrons from spontaneous fission and other nuclear events induce fissions in other fissile isotopes in the waste, thereby multiplying the overall coincidence neutron response in passive neutron measurements. Since passive neutron counters cannot differentiate between spontaneous and induced fission neutrons, multiplication can lead to positive bias in the measurements. Although neutron multiplication can only result in a positive bias, it has, for the purpose of mathematical simplicity, generally been treated as an error that can lead to either a positive or negative result in the TMU. While the factors that contribute to neutron multiplication include the total mass of fissile nuclides, the presence of moderating material in the matrix, the concentration and geometry of the fissile sources, and other factors; measurement uncertainty is generally determined as a function of the fissile mass in most TMU software calculations because this is the only quantity determined by the passive neutron measurement. Neutron multiplication error has a particularly pernicious consequence for TRU waste analysis because the measured Fissile Gram Equivalent (FGE) plus twice the TMU error must be less than 200 for TRU waste packaged in 55-gal drums and less than 325 for boxed waste. For this reason, large errors due to neutron multiplication can lead to increased rejections of TRU waste containers. This report will attempt to better define the error term due to neutron multiplication and arrive at values that are

  1. Speech Errors, Error Correction, and the Construction of Discourse.

    ERIC Educational Resources Information Center

    Linde, Charlotte

Speech errors have been used in the construction of production models of the phonological and semantic components of language, and for a model of interactional processes. Errors also provide insight into how speakers plan discourse and syntactic structure. Different types of discourse exhibit different types of error. The present data are taken…

  2. Magnetic nanoparticle thermometer: an investigation of minimum error transmission path and AC bias error.

    PubMed

    Du, Zhongzhou; Su, Rijian; Liu, Wenzhong; Huang, Zhixing

    2015-01-01

The signal transmission module of a magnetic nanoparticle thermometer (MNPT) was established in this study to analyze the error sources introduced during the signal flow in the hardware system. The underlying error sources that significantly affected the precision of the MNPT were determined through mathematical modeling and simulation. A transfer module path with the minimum error in the hardware system was then proposed through the analysis of the variations of the system error caused by the significant error sources when the signal flowed through the signal transmission module. In addition, a system parameter, named the signal-to-AC bias ratio (i.e., the ratio between the signal and AC bias), was identified as a direct determinant of the precision of the measured temperature. The temperature error was below 0.1 K when the signal-to-AC bias ratio was higher than 80 dB, and other system errors were not considered. The temperature error was below 0.1 K in the experiments with a commercial magnetic fluid (Sample SOR-10, Ocean Nanotechnology, Springdale, AR, USA) when the hardware system of the MNPT was designed with the aforementioned method. PMID:25875188

  3. Magnetic Nanoparticle Thermometer: An Investigation of Minimum Error Transmission Path and AC Bias Error

    PubMed Central

    Du, Zhongzhou; Su, Rijian; Liu, Wenzhong; Huang, Zhixing

    2015-01-01

The signal transmission module of a magnetic nanoparticle thermometer (MNPT) was established in this study to analyze the error sources introduced during the signal flow in the hardware system. The underlying error sources that significantly affected the precision of the MNPT were determined through mathematical modeling and simulation. A transfer module path with the minimum error in the hardware system was then proposed through the analysis of the variations of the system error caused by the significant error sources when the signal flowed through the signal transmission module. In addition, a system parameter, named the signal-to-AC bias ratio (i.e., the ratio between the signal and AC bias), was identified as a direct determinant of the precision of the measured temperature. The temperature error was below 0.1 K when the signal-to-AC bias ratio was higher than 80 dB, and other system errors were not considered. The temperature error was below 0.1 K in the experiments with a commercial magnetic fluid (Sample SOR-10, Ocean Nanotechnology, Springdale, AR, USA) when the hardware system of the MNPT was designed with the aforementioned method. PMID:25875188

  4. Addressing medical errors in hand surgery.

    PubMed

    Johnson, Shepard P; Adkinson, Joshua M; Chung, Kevin C

    2014-09-01

    Influential think tanks such as the Institute of Medicine have raised awareness about the implications of medical errors. In response, organizations, medical societies, and hospitals have initiated programs to decrease the incidence and prevent adverse effects of these errors. Surgeons deal with the direct implications of adverse events involving patients. In addition to managing the physical consequences, they are confronted with ethical and social issues when caring for a harmed patient. Although there is considerable effort to implement system-wide changes, there is little guidance for hand surgeons on how to address medical errors. Admitting an error by a physician is difficult, but a transparent environment where patients are notified of errors and offered consolation and compensation is essential to maintain physician-patient trust. Furthermore, equipping hand surgeons with a guide for addressing medical errors will help identify system failures, provide learning points for safety improvement, decrease litigation against physicians, and demonstrate a commitment to ethical and compassionate medical care. PMID:25154576

  5. Medication Errors: New EU Good Practice Guide on Risk Minimisation and Error Prevention.

    PubMed

    Goedecke, Thomas; Ord, Kathryn; Newbould, Victoria; Brosch, Sabine; Arlett, Peter

    2016-06-01

    A medication error is an unintended failure in the drug treatment process that leads to, or has the potential to lead to, harm to the patient. Reducing the risk of medication errors is a shared responsibility between patients, healthcare professionals, regulators and the pharmaceutical industry at all levels of healthcare delivery. In 2015, the EU regulatory network released a two-part good practice guide on medication errors to support both the pharmaceutical industry and regulators in the implementation of the changes introduced with the EU pharmacovigilance legislation. These changes included a modification of the 'adverse reaction' definition to include events associated with medication errors, and the requirement for national competent authorities responsible for pharmacovigilance in EU Member States to collaborate and exchange information on medication errors resulting in harm with national patient safety organisations. To facilitate reporting and learning from medication errors, a clear distinction has been made in the guidance between medication errors resulting in adverse reactions, medication errors without harm, intercepted medication errors and potential errors. This distinction is supported by an enhanced MedDRA(®) terminology that allows for coding all stages of the medication use process where the error occurred in addition to any clinical consequences. To better understand the causes and contributing factors, individual case safety reports involving an error should be followed-up with the primary reporter to gather information relevant for the conduct of root cause analysis where this may be appropriate. Such reports should also be summarised in periodic safety update reports and addressed in risk management plans. Any risk minimisation and prevention strategy for medication errors should consider all stages of a medicinal product's life-cycle, particularly the main sources and types of medication errors during product development. This article

  6. Standard Errors for Matrix Correlations.

    ERIC Educational Resources Information Center

    Ogasawara, Haruhiko

    1999-01-01

    Derives the asymptotic standard errors and intercorrelations for several matrix correlations assuming multivariate normality for manifest variables and derives the asymptotic standard errors of the matrix correlations for two factor-loading matrices. (SLD)

  7. State and model error estimation for distributed parameter systems. [in large space structure control

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.

    1979-01-01

    In-flight estimation of large structure model errors in order to detect inevitable deficiencies in large structure controller/estimator models is discussed. Such an estimation process is particularly applicable in the area of shape control system design required to maintain a prescribed static structural shape and, in addition, suppress dynamic disturbances due to the vehicle vibrational modes. The paper outlines a solution to the problem of static shape estimation where the vehicle shape must be reconstructed from a set of measurements discretely located throughout the structure. The estimation process is based on the principle of least-squares that inherently contains the definition and explicit computation of model error estimates that are optimal in some sense. Consequently, a solution is provided for the problem of estimation of static model errors (e.g., external loads). A generalized formulation applicable to distributed parameters systems is first worked out and then applied to a one-dimensional beam-like structural configuration.

  8. Effect of channel errors on delta modulation transmission

    NASA Technical Reports Server (NTRS)

    Rosenberg, W. J.

    1973-01-01

We have considered the response of a variable step size delta modulator communication system to errors caused by a noisy channel. For the particular adaptive delta modulation scheme proposed by Song, Garodnick, and Schilling (1971), we have a simple analytic formulation of the output error propagation due to a single channel error. It is shown that single channel errors cause a change in the amplitude and dc level of the output, but do not otherwise affect the shape of the output waveform. At low channel error rates, these effects do not cause any degradation in audio transmission. Higher channel error rates cause overflow or saturation of the step size register. We present relationships between channel error rate, register size, and the probability of register overflow.

  9. Uncertainty quantification and error analysis

    SciTech Connect

    Higdon, Dave M; Anderson, Mark C; Habib, Salman; Klein, Richard; Berliner, Mark; Covey, Curt; Ghattas, Omar; Graziani, Carlo; Seager, Mark; Sefcik, Joseph; Stark, Philip

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  10. Grammatical Errors and Communication Breakdown.

    ERIC Educational Resources Information Center

    Tomiyama, Machiko

    This study investigated the relationship between grammatical errors and communication breakdown by examining native speakers' ability to correct grammatical errors. The assumption was that communication breakdown exists to a certain degree if a native speaker cannot correct the error or if the correction distorts the information intended to be…

  11. Errors inducing radiation overdoses.

    PubMed

    Grammaticos, Philip C

    2013-01-01

There is no doubt that equipment that emits radiation and is used for therapeutic purposes should be checked often for the possibility of administering radiation overdoses to patients. Technologists, radiation safety officers, radiologists, medical physicists, healthcare providers and administrators should take proper care on this issue. "We must be beneficial and not harmful to the patients", according to the Hippocratic doctrine. Cases of radiation overdose are often reported; a series of such cases has recently been described, and the doctors who were responsible received heavy punishments. It is much better to prevent than to treat an error or a disease. A Personal Smart Card or Score Card has been suggested for every patient undergoing therapeutic and/or diagnostic procedures that use radiation. Taxonomy may also help. PMID:24251304

  12. Medical device error.

    PubMed

    Goodman, Gerald R

    2002-12-01

    This article discusses principal concepts for the analysis, classification, and reporting of problems involving medical device technology. We define a medical device in regulatory terminology and define and discuss concepts and terminology used to distinguish the causes and sources of medical device problems. Database classification systems for medical device failure tracking are presented, as are sources of information on medical device failures. The importance of near-accident reporting is discussed to alert users that reported medical device errors are typically limited to those that have caused an injury or death. This can represent only a fraction of the true number of device problems. This article concludes with a summary of the most frequently reported medical device failures by technology type, clinical application, and clinical setting. PMID:12400632

  13. Subaperture test of wavefront error of large telescopes: error sources and stitching performance simulations

    NASA Astrophysics Data System (ADS)

    Chen, Shanyong; Li, Shengyi; Wang, Guilin

    2014-11-01

The wavefront error of large telescopes must be measured to check the system quality and also to estimate the misalignment of the telescope optics, including the primary, the secondary and so on. It is usually measured with a focal-plane interferometer and an autocollimator flat (ACF) of the same aperture as the telescope. However, this is challenging for meter-class telescopes due to the high cost and technological challenges of producing the large ACF. A subaperture test with a smaller ACF is hence proposed in combination with advanced stitching algorithms. Major error sources include the surface error of the ACF, misalignment of the ACF and measurement noise. Different error sources have different impacts on the wavefront error. Basically, the surface error of the ACF behaves like a systematic error, and the astigmatism will be accumulated and enlarged if the azimuth of the subapertures remains fixed. It is difficult to accurately calibrate the ACF because it suffers considerable deformation induced by gravity or mechanical clamping force. Therefore a self-calibrated stitching algorithm is employed to separate the ACF surface error from the subaperture wavefront error. We suggest the ACF be rotated around the optical axis of the telescope for the subaperture test. The algorithm is also able to correct the subaperture tip-tilt based on the overlapping consistency. Since all subaperture measurements are obtained in the same imaging plane, the lateral shift of the subapertures is always known and the real overlapping points can be recognized in this plane. Therefore the lateral positioning error of subapertures has no impact on the stitched wavefront. In contrast, the angular positioning error changes the azimuth of the ACF and finally changes the systematic error. We propose an angularly uneven layout of subapertures to minimize the stitching error, which differs from common practice. Finally, measurement noise can never be corrected but can be suppressed by means of averaging and

  14. Rapid mapping of volumetric machine errors using distance measurements

    SciTech Connect

    Krulewich, D.A.

    1998-04-01

This paper describes a relatively inexpensive, fast, and easy to execute approach to mapping the volumetric errors of a machine tool, coordinate measuring machine, or robot. An error map is used to characterize a machine or to improve its accuracy by compensating for the systematic errors. The method consists of three steps: (1) modeling the relationship between volumetric error and the current state of the machine; (2) acquiring error data based on distance measurements throughout the work volume; and (3) fitting the error model using the nonlinear equation for the distance. The error model is formulated from the kinematic relationship among the six degrees of freedom of error on each moving axis. Expressing each parametric error as a function of position, these are combined to predict the error between the functional point and the workpiece, also as a function of position. A series of distances between several fixed base locations and various functional points in the work volume is measured using a Laser Ball Bar (LBB). Each measured distance is a nonlinear function dependent on the commanded location of the machine, the machine error, and the location of the base locations. Using the error model, the nonlinear equation is solved, producing a fit for the error model. Also note that, given approximate distances between each pair of base locations, the exact base locations in the machine coordinate system are determined during the nonlinear fitting procedure. Furthermore, with the use of more than three base locations, bias error in the measuring instrument can be removed. The volumetric errors of a three-axis commercial machining center have been mapped using this procedure. In this study, only errors associated with the nominal position of the machine were considered. Other errors such as thermally induced and load-induced errors were not considered, although the mathematical model has the ability to account for these errors. Due to the proprietary nature of the projects we are
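    Conceptually (this is not the LBB analysis code), step (3) amounts to a nonlinear least-squares fit of the error-model parameters to the measured base-to-point distances; `predicted_point` below is a hypothetical kinematic model mapping commanded position and error parameters to the actual functional-point location.

```python
# Fit volumetric error-model parameters to measured distances (illustrative sketch).
import numpy as np
from scipy.optimize import least_squares

def residuals(params, commanded, bases, measured_dist, predicted_point):
    res = []
    for x_cmd, base, d in zip(commanded, bases, measured_dist):
        p = predicted_point(x_cmd, params)          # point location per error model
        res.append(np.linalg.norm(p - base) - d)    # mismatch with measured distance
    return np.asarray(res)

# fit = least_squares(residuals, x0=np.zeros(n_params),
#                     args=(commanded, bases, measured_dist, predicted_point))
```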

  15. Error Threshold of Fully Random Eigen Model

    NASA Astrophysics Data System (ADS)

    Li, Duo-Fang; Cao, Tian-Guang; Geng, Jin-Peng; Qiao, Li-Hua; Gu, Jian-Zhong; Zhan, Yong

    2015-01-01

    Species evolution is essentially a random process of interaction between biological populations and their environments. As a result, some physical parameters in evolution models are subject to statistical fluctuations. In this work, two important parameters in the Eigen model, the fitness and mutation rate, are treated as Gaussian distributed random variables simultaneously to examine the property of the error threshold. Numerical simulation results show that the error threshold in the fully random model appears as a crossover region instead of a phase transition point, and as the fluctuation strength increases the crossover region becomes smoother and smoother. Furthermore, it is shown that the randomization of the mutation rate plays a dominant role in changing the error threshold in the fully random model, which is consistent with the existing experimental data. The implication of the threshold change due to the randomization for antiviral strategies is discussed.
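    As a toy illustration only (not the authors' model), the iteration below follows the master-sequence fraction in a single-peak quasispecies model, redrawing the master fitness and the per-site copying fidelity from Gaussian distributions each generation and neglecting back mutation.

```python
# Single-peak Eigen model with fluctuating fitness and mutation rate (toy sketch).
import numpy as np

rng = np.random.default_rng(1)

def master_fraction(L, A0_mean, q_mean, sigma_A, sigma_q, steps=2000):
    x = 0.5                                            # master-sequence fraction
    for _ in range(steps):
        A0 = max(rng.normal(A0_mean, sigma_A), 1e-6)   # fluctuating master fitness
        q = np.clip(rng.normal(q_mean, sigma_q), 0.0, 1.0)
        Q = q ** L                                     # probability of an error-free copy
        mean_fitness = A0 * x + 1.0 * (1.0 - x)        # mutant fitness set to 1
        x = min(max(A0 * Q * x / mean_fitness, 0.0), 1.0)
    return x
```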

  16. A Bayesian approach to model structural error and input variability in groundwater modeling

    NASA Astrophysics Data System (ADS)

    Xu, T.; Valocchi, A. J.; Lin, Y. F. F.; Liang, F.

    2015-12-01

    Effective water resource management typically relies on numerical models to analyze groundwater flow and solute transport processes. Model structural error (due to simplification and/or misrepresentation of the "true" environmental system) and input forcing variability (which commonly arises since some inputs are uncontrolled or estimated with high uncertainty) are ubiquitous in groundwater models. Calibration that overlooks errors in model structure and input data can lead to biased parameter estimates and compromised predictions. We present a fully Bayesian approach for a complete assessment of uncertainty for spatially distributed groundwater models. The approach explicitly recognizes stochastic input and uses data-driven error models based on nonparametric kernel methods to account for model structural error. We employ exploratory data analysis to assist in specifying informative prior for error models to improve identifiability. The inference is facilitated by an efficient sampling algorithm based on DREAM-ZS and a parameter subspace multiple-try strategy to reduce the required number of forward simulations of the groundwater model. We demonstrate the Bayesian approach through a synthetic case study of surface-ground water interaction under changing pumping conditions. It is found that explicit treatment of errors in model structure and input data (groundwater pumping rate) has substantial impact on the posterior distribution of groundwater model parameters. Using error models reduces predictive bias caused by parameter compensation. In addition, input variability increases parametric and predictive uncertainty. The Bayesian approach allows for a comparison among the contributions from various error sources, which could inform future model improvement and data collection efforts on how to best direct resources towards reducing predictive uncertainty.

  17. Identification of Error Patterns in Terminal-Area ATC Communications

    NASA Technical Reports Server (NTRS)

    Quinn, Cheryl; Walter, Kim E.; Rosekind, Mark R. (Technical Monitor)

    1997-01-01

Advancing air traffic management technologies have enabled a greater number of aircraft to use the same airspace more effectively. As aircraft separations are reduced and final approaches are more finely timed, there is less room for error. The present study examined 122 terminal-area, loss-of-separation and procedure violation incidents reported to the Aviation Safety Reporting System (ASRS) by air traffic controllers. Narrative descriptions of the incidents were coded for type of violation, contributing factors, recovery strategies, and consequences. Usually multiple errors occurred prior to the violation. Error sequences were analyzed and common patterns of errors were identified. In half of the incidents, errors were noticed in time to correct mistakes. In almost 43% of these, additional errors were committed during the recovery attempt. This analysis shows that redundancies in the present air traffic control system may not be sufficient to support large increases in traffic density. Error prevention and design considerations for air traffic management systems are discussed.

  18. Structured error recovery for code-word-stabilized quantum codes

    SciTech Connect

    Li Yunfan; Dumer, Ilya; Grassl, Markus; Pryadko, Leonid P.

    2010-05-15

Code-word-stabilized (CWS) codes are, in general, nonadditive quantum codes that can correct errors by an exhaustive search of different error patterns, similar to the way that we decode classical nonlinear codes. For an n-qubit quantum code correcting errors on up to t qubits, this brute-force approach consecutively tests different errors of weight t or less and employs a separate n-qubit measurement in each test. In this article, we suggest an error grouping technique that allows one to simultaneously test large groups of errors in a single measurement. This structured error recovery technique exponentially reduces the number of measurements by about 3^t times. While it still leaves exponentially many measurements for a generic CWS code, the technique is equivalent to syndrome-based recovery for the special case of additive CWS codes.

  19. Fixed-point error analysis of Winograd Fourier transform algorithms

    NASA Technical Reports Server (NTRS)

    Patterson, R. W.; Mcclellan, J. H.

    1978-01-01

    The quantization error introduced by the Winograd Fourier transform algorithm (WFTA) when implemented in fixed-point arithmetic is studied and compared with that of the fast Fourier transform (FFT). The effect of ordering the computational modules and the relative contributions of data quantization error and coefficient quantization error are determined. In addition, the quantization error introduced by the Good-Winograd (GW) algorithm, which uses Good's prime-factor decomposition for the discrete Fourier transform (DFT) together with Winograd's short length DFT algorithms, is studied. Error introduced by the WFTA is, in all cases, worse than that of the FFT. In general, the WFTA requires one or two more bits for data representation to give an error similar to that of the FFT. Error introduced by the GW algorithm is approximately the same as that of the FFT.

  20. Register file soft error recovery

    DOEpatents

    Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.

    2013-10-15

    Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.

  1. Rapid mapping of volumetric errors

    SciTech Connect

    Krulewich, D.; Hale, L.; Yordy, D.

    1995-09-13

    This paper describes a relatively inexpensive, fast, and easy to execute approach to mapping the volumetric errors of a machine tool, coordinate measuring machine, or robot. An error map is used to characterize a machine or to improve its accuracy by compensating for the systematic errors. The method consists of three steps: (1) modeling the relationship between the volumetric error and the current state of the machine; (2) acquiring error data based on length measurements throughout the work volume; and (3) optimizing the model to the particular machine.

  2. Correction of an active space telescope mirror using a gradient approach and an additional deformable mirror

    NASA Astrophysics Data System (ADS)

    Allen, Matthew R.; Kim, Jae Jun; Agrawal, Brij N.

    2015-09-01

    High development cost is a challenge for space telescopes and imaging satellites. One of the primary reasons for this high cost is the development of the primary mirror, which must meet diffraction limit surface figure requirements. Recent efforts to develop lower cost, lightweight, replicable primary mirrors include development of silicon carbide actuated hybrid mirrors and carbon fiber mirrors. The silicon carbide actuated hybrid mirrors at the Naval Postgraduate School do not meet the surface quality required for an optical telescope due to high spatial frequency residual surface errors. A technique under investigation at the Naval Postgraduate School is to correct the residual surface figure error using a deformable mirror in the optical path. We present a closed loop feedback gradient controller to actively control a SMT active segment and an additional deformable mirror to reduce residual wavefront error. The simulations and experimental results show that the gradient controller reduces the residual wavefront error more than an integral controller.
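    The controller itself is described in the paper; as a hedged sketch of a gradient-type step, assume a linear influence matrix M mapping actuator commands to wavefront change, so that the residual r = r0 + M u and the cost J = ||r||^2 can be driven down along -dJ/du.

```python
# One gradient-descent step on wavefront error for a deformable mirror (sketch).
import numpy as np

def gradient_correction_step(r, M, u, gain=0.3):
    grad = 2.0 * M.T @ r              # dJ/du with J = ||r||^2 and r = r0 + M u
    u_new = u - gain * grad           # updated actuator commands
    r_new = r - M @ (gain * grad)     # predicted residual wavefront after the step
    return u_new, r_new
```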

  3. New Gear Transmission Error Measurement System Designed

    NASA Technical Reports Server (NTRS)

    Oswald, Fred B.

    2001-01-01

The prime source of vibration and noise in a gear system is the transmission error between the meshing gears. Transmission error is caused by manufacturing inaccuracy, mounting errors, and elastic deflections under load. Gear designers often attempt to compensate for transmission error by modifying gear teeth. This is done traditionally by a rough "rule of thumb" or more recently under the guidance of an analytical code. In order for a designer to have confidence in a code, the code must be validated through experiment. NASA Glenn Research Center contracted with the Design Unit of the University of Newcastle in England for a system to measure the transmission error of spur and helical test gears in the NASA Gear Noise Rig. The new system measures transmission error optically by means of light beams directed by lenses and prisms through gratings mounted on the gear shafts. The amount of light that passes through both gratings is directly proportional to the transmission error of the gears. A photodetector circuit converts the light to an analog electrical signal. To increase accuracy and reduce "noise" due to transverse vibration, there are parallel light paths at the top and bottom of the gears. The two signals are subtracted via differential amplifiers in the electronics package. The output of the system is 40 mV/µm, giving a resolution in the time domain of better than 0.1 µm, and discrimination in the frequency domain of better than 0.01 µm. The new system will be used to validate gear analytical codes and to investigate mechanisms that produce vibration and noise in parallel axis gears.

  4. Social aspects of clinical errors.

    PubMed

    Richman, Joel; Mason, Tom; Mason-Whitehead, Elizabeth; McIntosh, Annette; Mercer, Dave

    2009-08-01

    Clinical errors, whether committed by doctors, nurses or other professions allied to healthcare, remain a sensitive issue requiring open debate and policy formulation in order to reduce them. The literature suggests that the issues underpinning errors made by healthcare professionals involve concerns about patient safety, professional disclosure, apology, litigation, compensation, processes of recording and policy development to enhance quality service. Anecdotally, we are aware of narratives of minor errors, which may well have been covered up and remain officially undisclosed whilst the major errors resulting in damage and death to patients alarm both professionals and public with resultant litigation and compensation. This paper attempts to unravel some of these issues by highlighting the historical nature of clinical errors and drawing parallels to contemporary times by outlining the 'compensation culture'. We then provide an overview of what constitutes a clinical error and review the healthcare professional strategies for managing such errors. PMID:19201405

  5. Diagnostic Errors Study Findings

    MedlinePlus

    Resource: ahrq.gov/professionals/quality-patient-safety/quality-resources/tools/office-testing-toolkit/

  6. Gender Bender: Gender Errors in L2 Pronoun Production

    ERIC Educational Resources Information Center

    Anton-Mendez, Ines

    2010-01-01

    To address questions about information processing at the message level, pronoun errors of second language (L2) speakers of English were studied. Some L2 pronoun errors--"he/she" confusions by Spanish speakers of L2 English--could be due to differences in the informational requirements of the speakers' two languages, providing a window into the…

  7. Characterizing Radar Raingauge Errors for NWP Assimilation

    NASA Astrophysics Data System (ADS)

    Dance, S.; Seed, A.

    2012-04-01

    The statistical characterisation of errors in quantitative precipitation estimates (QPE) is needed when generating QPE ensembles, combining multiple radars into a single mosaic, and when assimilating QPE into numerical weather prediction (NWP) models. The first step in the analysis was to characterise the errors at pixel resolution (1 km) as a function of radar specification, geographical location under the radar, and meteorology using data from 18 radars and 1500 rain gauges over a two-year period. The probability distribution of the radar - rain gauge residuals was evaluated and, as expected, the log-Normal distribution was found to fit the data better than the Normal distribution. Therefore the subsequent analysis was performed on the residuals expressed as decibels. The impact of beam width on the estimation errors was evaluated by comparing the errors from a one-degree S band radar (S1) with a two-degree S band radar (S2) for the same location (Brisbane) and time period. The standard deviation of the errors was found to increase by 0.2 dB per km for the S2 radar while the standard deviation for the S1 radar was constant out to the maximum range of 150 km. When data from all the S1 radars over the two years were pooled and compared with the S2 radars, the standard deviation of the errors for the S1 radars increased by 0.1 dB per km compared with 0.25 dB per km for the S2 radars. The mean of the errors was found to vary significantly with range for all radars, with underestimation at close range (< 30 km) and at far range (> 100 km). We think that this points to artefacts in the data due to clutter suppression at close range and overshooting the echo tops at the far range. The spatial distribution of the errors as a function of the altitude and roughness of the topography was investigated using the data from the S1 and S2 radars in Brisbane, but no relationship was found although there is clearly structure in the field. We also attempted to quantify the
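
    A minimal sketch of the residual analysis described above, using synthetic numbers rather than the Australian radar and gauge data: residuals are expressed in decibels and their spread is summarized in range bins. The error-growth numbers in the synthetic model are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
ranges_km = rng.uniform(10.0, 150.0, 2000)          # gauge distance from radar
gauge = rng.gamma(shape=2.0, scale=3.0, size=2000)  # gauge rain rates (mm/h)
# Synthetic radar estimates whose multiplicative error grows with range.
log_err = rng.normal(0.0, 0.02 + 0.002 * ranges_km)
radar = gauge * 10.0 ** log_err

residual_db = 10.0 * np.log10(radar / gauge)        # residuals in decibels

# Standard deviation of the dB residuals in 20 km range bins.
edges = np.arange(0, 161, 20)
for lo, hi in zip(edges[:-1], edges[1:]):
    sel = (ranges_km >= lo) & (ranges_km < hi)
    if sel.any():
        print(f"{lo:3d}-{hi:3d} km: std = {residual_db[sel].std():.2f} dB")
```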

  8. Reducing errors in the GRACE gravity solutions using regularization

    NASA Astrophysics Data System (ADS)

    Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.

    2012-09-01

    The nature of the gravity field inverse problem amplifies the noise in the GRACE data, which creeps into the mid and high degree and order harmonic coefficients of the Earth's monthly gravity fields provided by GRACE. Due to the use of imperfect background models and data noise, these errors are manifested as north-south striping in the monthly global maps of equivalent water heights. In order to reduce these errors, this study investigates the use of the L-curve method with Tikhonov regularization. The L-curve is a popular aid for determining a suitable value of the regularization parameter when solving linear discrete ill-posed problems using Tikhonov regularization. However, the computational effort required to determine the L-curve is prohibitively high for a large-scale problem like GRACE. This study implements a parameter-choice method using Lanczos bidiagonalization, which is a computationally inexpensive approximation to the L-curve. Lanczos bidiagonalization is implemented with orthogonal transformation in a parallel computing environment and, for computing the regularization parameter, projects the large estimation problem onto a problem about two orders of magnitude smaller. Errors in the GRACE solution time series have certain characteristics that vary depending on the ground track coverage of the solutions. These errors increase with increasing degree and order. In addition, certain resonant and near-resonant harmonic coefficients have higher errors as compared with the other coefficients. Using the knowledge of these characteristics, this study designs a regularization matrix that provides a constraint on the geopotential coefficients as a function of their degree and order. This regularization matrix is then used to compute the appropriate regularization parameter for each monthly solution. A 7-year time series of the candidate regularized solutions (Mar 2003-Feb 2010) shows markedly reduced error stripes compared with the unconstrained GRACE release 4
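
    For readers unfamiliar with the technique, the toy sketch below shows Tikhonov regularization with a penalty that grows with "degree", loosely mirroring the degree/order-dependent constraint described above. It is a generic illustration with invented dimensions, not GRACE processing, and the simple grid search over alpha stands in for the Lanczos-bidiagonalization L-curve machinery.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 200, 50
A = rng.normal(size=(m, n)) / np.sqrt(m)          # ill-conditioned forward model
x_true = np.exp(-np.arange(n) / 10.0)             # smooth "true" coefficients
b = A @ x_true + rng.normal(scale=0.05, size=m)   # noisy observations

degree = np.arange(1, n + 1, dtype=float)
L = np.diag(degree)                                # heavier penalty at high degree

def tikhonov(alpha):
    """Minimize ||A x - b||^2 + alpha^2 ||L x||^2."""
    return np.linalg.solve(A.T @ A + alpha**2 * (L.T @ L), A.T @ b)

for alpha in (0.0, 0.01, 0.1, 1.0):                # crude stand-in for an L-curve search
    x = tikhonov(alpha)
    print(f"alpha={alpha:5.2f}  error vs truth = {np.linalg.norm(x - x_true):.3f}")
```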

  9. Experimental quantum error correction with high fidelity

    NASA Astrophysics Data System (ADS)

    Zhang, Jingfu; Gangloff, Dorian; Moussa, Osama; Laflamme, Raymond

    2011-09-01

    More than ten years ago a first step toward quantum error correction (QEC) was implemented [Phys. Rev. Lett. 81, 2152 (1998)]. The work showed there was sufficient control in nuclear magnetic resonance to implement QEC, and demonstrated that the error rate changed from ε to ~ε². In the current work we reproduce a similar experiment using control techniques that have since been developed, such as the pulses generated by the gradient ascent pulse engineering algorithm. We show that the fidelity of the QEC gate sequence and the comparative advantage of QEC are appreciably improved. This advantage is maintained despite the errors introduced by the additional operations needed to protect the quantum states.

  10. Experimental quantum error correction with high fidelity

    SciTech Connect

    Zhang Jingfu; Gangloff, Dorian; Moussa, Osama; Laflamme, Raymond

    2011-09-15

    More than ten years ago a first step toward quantum error correction (QEC) was implemented [Phys. Rev. Lett. 81, 2152 (1998)]. The work showed there was sufficient control in nuclear magnetic resonance to implement QEC, and demonstrated that the error rate changed from ε to ~ε². In the current work we reproduce a similar experiment using control techniques that have since been developed, such as the pulses generated by the gradient ascent pulse engineering algorithm. We show that the fidelity of the QEC gate sequence and the comparative advantage of QEC are appreciably improved. This advantage is maintained despite the errors introduced by the additional operations needed to protect the quantum states.

  11. Improved astigmatic focus error detection method

    NASA Technical Reports Server (NTRS)

    Bernacki, Bruce E.

    1992-01-01

    All easy-to-implement focus- and track-error detection methods presently used in magneto-optical (MO) disk drives using pre-grooved media suffer from a side effect known as feedthrough. Feedthrough is the unwanted focus error signal (FES) produced when the optical head is seeking a new track, and light refracted from the pre-grooved disk produces an erroneous FES. Some focus and track-error detection methods are more resistant to feedthrough, but tend to be complicated and/or difficult to keep in alignment as a result of environmental insults. The astigmatic focus/push-pull tracking method is an elegant, easy-to-align focus- and track-error detection method. Unfortunately, it is also highly susceptible to feedthrough when astigmatism is present, with the worst effects caused by astigmatism oriented such that the tangential and sagittal foci are at 45 deg to the track direction. This disclosure outlines a method to nearly completely eliminate the worst-case form of feedthrough due to astigmatism oriented 45 deg to the track direction. Feedthrough due to other primary aberrations is not improved, but performance is identical to the unimproved astigmatic method.
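
    For context, the sketch below computes the textbook normalized focus-error and push-pull tracking-error signals from quadrant-detector intensities; the quadrant labeling and the example numbers are illustrative, and this is not the improved method of the disclosure.

```python
def focus_error_signal(a, b, c, d):
    """Astigmatic FES: compare the diagonal quadrant pairs of the detector.
    Zero for a circular (in-focus) spot, nonzero when defocus makes the
    astigmatic spot elliptical."""
    return ((a + c) - (b + d)) / (a + b + c + d)

def push_pull_tracking_error(a, b, c, d):
    """Push-pull TES: difference between the two detector halves along the
    track direction (labeling of the halves is illustrative)."""
    return ((a + b) - (c + d)) / (a + b + c + d)

# Slightly defocused spot: diagonal quadrants brighter than the other pair.
print(focus_error_signal(0.30, 0.22, 0.30, 0.22))        # nonzero -> defocus
print(push_pull_tracking_error(0.25, 0.25, 0.25, 0.25))  # on-track -> 0.0
```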

  12. Scaling prediction errors to reward variability benefits error-driven learning in humans

    PubMed Central

    Schultz, Wolfram

    2015-01-01

    Effective error-driven learning requires individuals to adapt learning to environmental reward variability. The adaptive mechanism may involve decays in learning rate across subsequent trials, as shown previously, and rescaling of reward prediction errors. The present study investigated the influence of prediction error scaling and, in particular, the consequences for learning performance. Participants explicitly predicted reward magnitudes that were drawn from different probability distributions with specific standard deviations. By fitting the data with reinforcement learning models, we found scaling of prediction errors, in addition to the learning rate decay shown previously. Importantly, the prediction error scaling was closely related to learning performance, defined as accuracy in predicting the mean of reward distributions, across individual participants. In addition, participants who scaled prediction errors relative to standard deviation also presented with more similar performance for different standard deviations, indicating that increases in standard deviation did not substantially decrease “adapters'” accuracy in predicting the means of reward distributions. However, exaggerated scaling beyond the standard deviation resulted in impaired performance. Thus efficient adaptation makes learning more robust to changing variability. PMID:26180123
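
    A toy delta-rule learner along the lines described above: prediction errors are divided by the standard deviation of the reward distribution and the learning rate decays over trials. Parameter values and the exact scaling rule are illustrative, not the fitted model from the study.

```python
import numpy as np

def learn(rewards, sigma, scale_errors=True, alpha0=0.5, decay=0.02):
    """Estimate the mean reward with a delta rule, optionally rescaling the
    prediction error by the reward standard deviation."""
    v = 0.0
    for t, r in enumerate(rewards):
        delta = r - v                               # reward prediction error
        if scale_errors:
            delta /= sigma                          # adapt to reward variability
        v += (alpha0 / (1.0 + decay * t)) * delta   # decaying learning rate
    return v

rng = np.random.default_rng(2)
for sigma in (1.0, 5.0):
    rewards = rng.normal(10.0, sigma, size=200)
    print(f"sd={sigma}: scaled -> {learn(rewards, sigma):.2f}, "
          f"unscaled -> {learn(rewards, sigma, scale_errors=False):.2f}")
```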

  13. Quantum Error Correction with Biased Noise

    NASA Astrophysics Data System (ADS)

    Brooks, Peter

    Quantum computing offers powerful new techniques for speeding up the calculation of many classically intractable problems. Quantum algorithms can allow for the efficient simulation of physical systems, with applications to basic research, chemical modeling, and drug discovery; other algorithms have important implications for cryptography and internet security. At the same time, building a quantum computer is a daunting task, requiring the coherent manipulation of systems with many quantum degrees of freedom while preventing environmental noise from interacting too strongly with the system. Fortunately, we know that, under reasonable assumptions, we can use the techniques of quantum error correction and fault tolerance to achieve an arbitrary reduction in the noise level. In this thesis, we look at how additional information about the structure of noise, or "noise bias," can improve or alter the performance of techniques in quantum error correction and fault tolerance. In Chapter 2, we explore the possibility of designing certain quantum gates to be extremely robust with respect to errors in their operation. This naturally leads to structured noise where certain gates can be implemented in a protected manner, allowing the user to focus their protection on the noisier unprotected operations. In Chapter 3, we examine how to tailor error-correcting codes and fault-tolerant quantum circuits in the presence of dephasing biased noise, where dephasing errors are far more common than bit-flip errors. By using an appropriately asymmetric code, we demonstrate the ability to improve the amount of error reduction and decrease the physical resources required for error correction. In Chapter 4, we analyze a variety of protocols for distilling magic states, which enable universal quantum computation, in the presence of faulty Clifford operations. Here again there is a hierarchy of noise levels, with a fixed error rate for faulty gates, and a second rate for errors in the distilled

  14. The 13 errors.

    PubMed

    Flower, J

    1998-01-01

    The reality is that most change efforts fail. McKinsey & Company carried out a fascinating research project on change to "crack the code" on creating and managing change in large organizations. One of the questions they asked--and answered--is why most organizations fail in their efforts to manage change. They found that 80 percent of these failures could be traced to 13 common errors. They are: (1) No winning strategy; (2) failure to make a compelling and urgent case for change; (3) failure to distinguish between decision-driven and behavior-dependent change; (4) over-reliance on structure and systems to change behavior; (5) lack of skills and resources; (6) failure to experiment; (7) leaders' inability or unwillingness to confront how they and their roles must change; (8) failure to mobilize and engage pivotal groups; (9) failure to understand and shape the informal organization; (10) inability to integrate and align all the initiatives; (11) no performance focus; (12) excessively open-ended process; and (13) failure to make the whole process transparent and meaningful to individuals. PMID:10351717

  15. Contour Error Map Algorithm

    NASA Technical Reports Server (NTRS)

    Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John

    2005-01-01

    The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.
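
    As a simple illustration of comparing the two binary grids D(i,j;n) and d(i,j;n) at one time step, the sketch below scores cell-by-cell agreement between a forecast grid and a perturbed "observed" grid. This is only an agreement score on synthetic grids, not the CEM algorithm or its sea-breeze boundary and image-erosion subalgorithms.

```python
import numpy as np

rng = np.random.default_rng(3)
D = (rng.random((40, 40)) > 0.5).astype(int)   # forecast: 1 = onshore, 0 = offshore
d = D.copy()                                   # synthetic "observed" grid
flip = rng.random(D.shape) < 0.1               # disagree in ~10% of cells
d[flip] = 1 - d[flip]

agreement = float((D == d).mean())             # fraction of matching cells
onshore_bias = float(D.mean() - d.mean())      # forecast minus observed onshore fraction
print(f"cell agreement: {agreement:.2f}, onshore-fraction bias: {onshore_bias:+.3f}")
```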

  16. Error analysis for encoding a qubit in an oscillator

    SciTech Connect

    Glancy, S.; Knill, E.

    2006-01-15

    In Phys. Rev. A 64, 012310 (2001), Gottesman, Kitaev, and Preskill described a method to encode a qubit in the continuous Hilbert space of an oscillator's position and momentum variables. This encoding provides a natural error-correction scheme that can correct errors due to small shifts of the position or momentum wave functions (i.e., use of the displacement operator). We present bounds on the size of correctable shift errors when both qubit and ancilla states may contain errors. We then use these bounds to constrain the quality of input qubit and ancilla states.
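
    For orientation (a standard property of the ideal square-lattice code, restated here rather than the specific bounds derived in the work above), small displacement errors are correctable whenever both quadrature shifts stay within half a stabilizer lattice spacing:

$$ |\Delta q| < \tfrac{\sqrt{\pi}}{2}, \qquad |\Delta p| < \tfrac{\sqrt{\pi}}{2} . $$

    Larger shifts are decoded to the wrong code state, which is why errors in the ancilla states used for syndrome extraction effectively tighten these bounds in practice.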

  17. Error correction for IFSAR

    DOEpatents

    Doerry, Armin W.; Bickel, Douglas L.

    2002-01-01

    IFSAR images of a target scene are generated by compensating for variations in vertical separation between collection surfaces defined for each IFSAR antenna by adjusting the baseline projection during image generation. In addition, height information from all antennas is processed before processing range and azimuth information in a normal fashion to create the IFSAR image.

  18. Estimating errors in least-squares fitting

    NASA Technical Reports Server (NTRS)

    Richter, P. H.

    1995-01-01

    While least-squares fitting procedures are commonly used in data analysis and are extensively discussed in the literature devoted to this subject, the proper assessment of errors resulting from such fits has received relatively little attention. The present work considers statistical errors in the fitted parameters, as well as in the values of the fitted function itself, resulting from random errors in the data. Expressions are derived for the standard error of the fit, as a function of the independent variable, for the general nonlinear and linear fitting problems. Additionally, closed-form expressions are derived for some examples commonly encountered in the scientific and engineering fields, namely ordinary polynomial and Gaussian fitting functions. These results have direct application to the assessment of antenna gain and system temperature characteristics, in addition to a broad range of problems in data analysis. The effects of the nature of the data and the choice of fitting function on the ability to accurately model the system under study are discussed, and some general rules are deduced to assist workers intent on maximizing the amount of information obtained from a given set of measurements.
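
    As background for the kind of expressions involved (these are the standard linearized results, not formulas quoted from the report), for a fit $y_i \approx f(x_i;\boldsymbol\beta)$ with independent errors of variance $\sigma^2$ and Jacobian $J_{ij} = \partial f(x_i;\boldsymbol\beta)/\partial\beta_j$:

$$
\operatorname{Cov}(\hat{\boldsymbol\beta}) \approx \sigma^2 \left(J^{\mathsf T} J\right)^{-1},
\qquad
\sigma_f^2(x) \approx \nabla_{\boldsymbol\beta} f(x;\hat{\boldsymbol\beta})^{\mathsf T}\,
\operatorname{Cov}(\hat{\boldsymbol\beta})\,
\nabla_{\boldsymbol\beta} f(x;\hat{\boldsymbol\beta}),
$$

    with $\sigma^2$ estimated from the residual sum of squares divided by $N-p$ when it is not known a priori. The second expression gives the standard error of the fitted function as a function of the independent variable.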

  19. Knowledge of error-correcting protocol helps in individual eavesdropping

    NASA Astrophysics Data System (ADS)

    Horoshko, D. B.

    2007-06-01

    The quantum key distribution protocol BB84, combined with the repetition protocol for error correction, is analyzed from the viewpoint of security against individual eavesdropping empowered by quantum memory. We show that mere knowledge of the error-correction protocol changes the optimal attack and provides the eavesdropper with additional information about the generated key.

  20. Grammatical Errors Produced by English Majors: The Translation Task

    ERIC Educational Resources Information Center

    Mohaghegh, Hamid; Zarandi, Fatemeh Mahmoudi; Shariati, Mohammad

    2011-01-01

    This study investigated the frequency of the grammatical errors related to the four categories of preposition, relative pronoun, article, and tense using the translation task. In addition, the frequencies of these grammatical errors in different categories and in each category were examined. The quantitative component of the study further looked…

  1. Errors in Viking Lander Atmospheric Profiles Discovered Using MOLA Topography

    NASA Technical Reports Server (NTRS)

    Withers, Paul; Lorenz, R. D.; Neumann, G. A.

    2002-01-01

    Each Viking lander measured a topographic profile during entry. Comparing to MOLA (Mars Orbiter Laser Altimeter), we find a vertical error of 1-2 km in the Viking trajectory. This introduces a systematic error of 10-20% in the Viking densities and pressures at a given altitude. Additional information is contained in the original extended abstract.

  2. Error-related potentials during continuous feedback: using EEG to detect errors of different type and severity

    PubMed Central

    Spüler, Martin; Niethammer, Christian

    2015-01-01

    When a person recognizes an error during a task, an error-related potential (ErrP) can be measured as response. It has been shown that ErrPs can be automatically detected in tasks with time-discrete feedback, which is widely applied in the field of Brain-Computer Interfaces (BCIs) for error correction or adaptation. However, there are only a few studies that concentrate on ErrPs during continuous feedback. With this study, we wanted to answer three different questions: (i) Can ErrPs be measured in electroencephalography (EEG) recordings during a task with continuous cursor control? (ii) Can ErrPs be classified using machine learning methods and is it possible to discriminate errors of different origins? (iii) Can we use EEG to detect the severity of an error? To answer these questions, we recorded EEG data from 10 subjects during a video game task and investigated two different types of error (execution error, due to inaccurate feedback; outcome error, due to not achieving the goal of an action). We analyzed the recorded data to show that during the same task, different kinds of error produce different ErrP waveforms and have a different spectral response. This allows us to detect and discriminate errors of different origin in an event-locked manner. By utilizing the error-related spectral response, we show that also a continuous, asynchronous detection of errors is possible. Although the detection of error severity based on EEG was one goal of this study, we did not find any significant influence of the severity on the EEG. PMID:25859204

  3. Error-related potentials during continuous feedback: using EEG to detect errors of different type and severity.

    PubMed

    Spüler, Martin; Niethammer, Christian

    2015-01-01

    When a person recognizes an error during a task, an error-related potential (ErrP) can be measured as response. It has been shown that ErrPs can be automatically detected in tasks with time-discrete feedback, which is widely applied in the field of Brain-Computer Interfaces (BCIs) for error correction or adaptation. However, there are only a few studies that concentrate on ErrPs during continuous feedback. With this study, we wanted to answer three different questions: (i) Can ErrPs be measured in electroencephalography (EEG) recordings during a task with continuous cursor control? (ii) Can ErrPs be classified using machine learning methods and is it possible to discriminate errors of different origins? (iii) Can we use EEG to detect the severity of an error? To answer these questions, we recorded EEG data from 10 subjects during a video game task and investigated two different types of error (execution error, due to inaccurate feedback; outcome error, due to not achieving the goal of an action). We analyzed the recorded data to show that during the same task, different kinds of error produce different ErrP waveforms and have a different spectral response. This allows us to detect and discriminate errors of different origin in an event-locked manner. By utilizing the error-related spectral response, we show that also a continuous, asynchronous detection of errors is possible. Although the detection of error severity based on EEG was one goal of this study, we did not find any significant influence of the severity on the EEG. PMID:25859204

  4. Random errors in egocentric networks.

    PubMed

    Almquist, Zack W

    2012-10-01

    The systematic errors that are induced by a combination of human memory limitations and common survey design and implementation have long been studied in the context of egocentric networks. Despite this, little if any work exists in the area of random error analysis on these same networks; this paper offers a perspective on the effects of random errors on egonet analysis, as well as on the effects of using egonet measures as independent predictors in linear models. We explore the effects of false-positive and false-negative error in egocentric networks on both standard network measures and on linear models through simulation analysis on a ground-truth egocentric network sample based on facebook-friendships. Results show that 5-20% error rates, which are consistent with error rates known to occur in ego network data, can cause serious misestimation of network properties and regression parameters. PMID:23878412
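
    A toy version of the kind of perturbation experiment described above (not the paper's simulation design, which uses a Facebook-based ground-truth sample): an ego's true ties are corrupted with false negatives and false positives at matched rates, and the effect on observed network size is tabulated.

```python
import numpy as np

rng = np.random.default_rng(4)
true_ties = rng.random(150) < 0.3                 # ego's true ties among 150 alters

def observed_degree(ties, fn_rate, fp_rate):
    obs = ties.copy()
    obs[ties & (rng.random(ties.size) < fn_rate)] = False   # missed true ties
    obs[~ties & (rng.random(ties.size) < fp_rate)] = True   # spurious ties
    return int(obs.sum())

print("true degree:", int(true_ties.sum()))
for rate in (0.05, 0.10, 0.20):
    degs = [observed_degree(true_ties, rate, rate) for _ in range(1000)]
    print(f"error rate {rate:.2f}: mean observed degree {np.mean(degs):.1f}")
```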

  5. Random errors in egocentric networks

    PubMed Central

    Almquist, Zack W.

    2013-01-01

    The systematic errors that are induced by a combination of human memory limitations and common survey design and implementation have long been studied in the context of egocentric networks. Despite this, little if any work exists in the area of random error analysis on these same networks; this paper offers a perspective on the effects of random errors on egonet analysis, as well as the effects of using egonet measures as independent predictors in linear models. We explore the effects of false-positive and false-negative error in egocentric networks on both standard network measures and on linear models through simulation analysis on a ground truth egocentric network sample based on facebook-friendships. Results show that 5–20% error rates, which are consistent with error rates known to occur in ego network data, can cause serious misestimation of network properties and regression parameters. PMID:23878412

  6. Dopamine reward prediction error coding

    PubMed Central

    Schultz, Wolfram

    2016-01-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards—an evolutionarily beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware. PMID:27069377

  7. [Error factors in spirometry].

    PubMed

    Quadrelli, S A; Montiel, G C; Roncoroni, A J

    1994-01-01

    Spirometry is the most frequently used method to estimate pulmonary function in the clinical laboratory. It is important to comply with technical requisites so that the measured values approximate the true values, and to interpret the results adequately. Recommendations are made to establish: (1) quality control; (2) a definition of abnormality; (3) a classification of the type and degree of the change from normal; and (4) a definition of reversibility. In relation to quality control, several criteria are pointed out, such as end-of-test, back-extrapolation, and extrapolated volume, in order to delineate the most common errors. Daily calibration is advised, and inspection of the graphical records of the test is mandatory. The limitations of the common use of 80% of predicted values to establish abnormality are stressed, and the reasons for employing 95% confidence limits are detailed. It is important to select the reference-value equations carefully (in view of the differences in predicted values), and it is advisable to validate the selection against normal values from the local population. In relation to classifying the defect as restrictive or obstructive, the limitations of vital capacity (VC) for establishing restriction when obstruction is also present are described, as are the limitations of the maximal mid-expiratory flow 25-75 (FMF 25-75) as an isolated marker of obstruction. Finally, the qualities of the forced expiratory volume in 1 s (VEF1) and the difficulties with other indicators (CVF, FMF 25-75, VEF1/CVF) for estimating reversibility after bronchodilators are evaluated, and the different methods used to express reversibility (% of change from the initial value, absolute change, or % of predicted) are discussed. To be valuable, clinical spirometric studies should be performed with the same technical rigour as other, more complex studies. PMID:7990690
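
    The three ways of expressing bronchodilator response mentioned at the end of the abstract differ only in the denominator; the short example below spells this out with made-up FEV1 (VEF1) values.

```python
# Illustrative values only (litres).
fev1_pre, fev1_post, fev1_pred = 2.10, 2.45, 3.50

abs_change = fev1_post - fev1_pre
pct_of_initial = 100.0 * abs_change / fev1_pre
pct_of_predicted = 100.0 * abs_change / fev1_pred

print(f"absolute change      : {abs_change:.2f} L")
print(f"% of initial value   : {pct_of_initial:.1f} %")
print(f"% of predicted value : {pct_of_predicted:.1f} %")
```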

  8. Medical errors; causes, consequences, emotional response and resulting behavioral change

    PubMed Central

    Bari, Attia; Khan, Rehan Ahmed; Rathore, Ahsan Waheed

    2016-01-01

    Objective: To determine the causes of medical errors, the emotional and behavioral response of pediatric medicine residents to their medical errors and to determine their behavior change affecting their future training. Methods: One hundred thirty postgraduate residents were included in the study. Residents were asked to complete questionnaire about their errors and responses to their errors in three domains: emotional response, learning behavior and disclosure of the error. The names of the participants were kept confidential. Data was analyzed using SPSS version 20. Results: A total of 130 residents were included. Majority 128(98.5%) of these described some form of error. Serious errors that occurred were 24(19%), 63(48%) minor, 24(19%) near misses,2(2%) never encountered an error and 17(12%) did not mention type of error but mentioned causes and consequences. Only 73(57%) residents disclosed medical errors to their senior physician but disclosure to patient’s family was negligible 15(11%). Fatigue due to long duty hours 85(65%), inadequate experience 66(52%), inadequate supervision 58(48%) and complex case 58(45%) were common causes of medical errors. Negative emotions were common and were significantly associated with lack of knowledge (p=0.001), missing warning signs (p=<0.001), not seeking advice (p=0.003) and procedural complications (p=0.001). Medical errors had significant impact on resident’s behavior; 119(93%) residents became more careful, increased advice seeking from seniors 109(86%) and 109(86%) started paying more attention to details. Intrinsic causes of errors were significantly associated with increased information seeking behavior and vigilance (p=0.003) and (p=0.01) respectively. Conclusion: Medical errors committed by residents have inadequate disclosure to senior physicians and result in negative emotions but there was positive change in their behavior, which resulted in improvement in their future training and patient care. PMID:27375682

  9. A QUANTITATIVE MODEL OF ERROR ACCUMULATION DURING PCR AMPLIFICATION

    PubMed Central

    Pienaar, E; Theron, M; Nelson, M; Viljoen, HJ

    2006-01-01

    The amplification of target DNA by the polymerase chain reaction (PCR) produces copies which may contain errors. Two sources of errors are associated with the PCR process: (1) editing errors that occur during DNA polymerase-catalyzed enzymatic copying and (2) errors due to DNA thermal damage. In this study a quantitative model of error frequencies is proposed and the role of reaction conditions is investigated. The errors which are ascribed to the polymerase depend on the efficiency of its editing function as well as the reaction conditions; specifically the temperature and the dNTP pool composition. Thermally induced errors stem mostly from three sources: A+G depurination, oxidative damage of guanine to 8-oxoG and cytosine deamination to uracil. The post-PCR modifications of sequences are primarily due to exposure of nucleic acids to elevated temperatures, especially if the DNA is in a single-stranded form. The proposed quantitative model predicts the accumulation of errors over the course of a PCR cycle. Thermal damage contributes significantly to the total errors; therefore consideration must be given to thermal management of the PCR process. PMID:16412692
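
    A crude back-of-the-envelope accumulation in the spirit of the model described above, treating expected errors per copy as the sum of a per-duplication polymerase term and a per-cycle thermal-damage term. The rates, amplicon length, and efficiency are placeholders, and the actual model tracks the lineage structure of the amplification far more carefully.

```python
POL_ERR = 1e-5        # misincorporations per base per duplication (placeholder)
THERMAL_ERR = 2e-6    # thermally induced lesions per base per cycle (placeholder)
AMPLICON = 1000       # target length in bases
EFFICIENCY = 0.9      # fraction of strands duplicated in each cycle

expected_errors = 0.0
for cycle in range(1, 31):
    expected_errors += EFFICIENCY * POL_ERR * AMPLICON   # copying errors this cycle
    expected_errors += THERMAL_ERR * AMPLICON            # thermal damage this cycle
    if cycle % 10 == 0:
        print(f"cycle {cycle:2d}: ~{expected_errors:.3f} expected errors per copy")
```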

  10. Teratogenic inborn errors of metabolism.

    PubMed Central

    Leonard, J. V.

    1986-01-01

    Most children with inborn errors of metabolism are born healthy without malformations as the fetus is protected by the metabolic activity of the placenta. However, certain inborn errors of the fetus have teratogenic effects although the mechanisms responsible for the malformations are not generally understood. Inborn errors in the mother may also be teratogenic. The adverse effects of these may be reduced by improved metabolic control of the biochemical disorder. PMID:3540927

  11. Systematic errors for a Mueller matrix dual rotating compensator ellipsometer.

    PubMed

    Broch, Laurent; En Naciri, Aotmane; Johann, Luc

    2008-06-01

    The characterization of anisotropic materials and complex systems by ellipsometry has pushed the design of instruments to require the measurement of the full reflection Mueller matrix of the sample with great precision. Therefore Mueller matrix ellipsometers have emerged over the past twenty years. The values of some coefficients of the matrix can be very small, and errors due to noise or systematic errors can distort the analysis. We present a detailed characterization of the systematic errors for a Mueller matrix ellipsometer in the dual-rotating-compensator configuration. Starting from a general formalism, we derive explicit first-order expressions for the errors on all the coefficients of the Mueller matrix of the sample. The errors caused by inaccuracy of the azimuthal arrangement of the optical components and the residual ellipticity introduced by imperfect optical elements are shown. A new method based on a four-zone averaging measurement is proposed to cancel the systematic errors. PMID:18545594

  12. Tomographic Errors From Wavefront Healing

    NASA Astrophysics Data System (ADS)

    Malcolm, A. E.; Trampert, J.

    2008-12-01

    Despite recent advances in full-waveform modeling ray theory is still, for good reasons, the preferred method in global tomography. It is well known that ray theory is most accurate for anomalies that are large compared to the wavelength. Exactly what errors result from the failure of this assumption is less well understood, in spite of the fact that anomalies found in the Earth from ray-based tomography methods are often outside the regime in which ray theory is known to be valid. Using the spectral element method, we have computed exact delay times and compared them to ray-theoretical traveltimes for two classic anomalies, one large and disk-shaped near the core mantle boundary, and the other a plume-like structure extending throughout the mantle. Wavefront healing is apparent in the traveltime anomalies generated by these structures; its effects are strongly asymmetric between P and S arrivals due to wavelength differences and source directionality. Simple computations in two dimensions allow us to develop the intuition necessary to understand how diffractions around the anomalies explain these results. When inverting the exact travel time anomalies with ray theory we expect wavefront healing to have a strong influence on the resulting structures. We anticipate that the asymmetry will be of particular importance in anomalies in the bulk velocity structure.

  13. Gross error detection and stage efficiency estimation in a separation process

    SciTech Connect

    Serth, R.W.; Srikanth, B. . Dept. of Chemical and Natural Gas Engineering); Maronga, S.J. . Dept. of Chemical and Process Engineering)

    1993-10-01

    Accurate process models are required for optimization and control in chemical plants and petroleum refineries. These models involve various equipment parameters, such as stage efficiencies in distillation columns, the values of which must be determined by fitting the models to process data. Since the data contain random and systematic measurement errors, some of which may be large (gross errors), they must be reconciled to obtain reliable estimates of equipment parameters. The problem thus involves parameter estimation coupled with gross error detection and data reconciliation. MacDonald and Howat (1988) studied the above problem for a single-stage flash distillation process. Their analysis was based on the definition of stage efficiency due to Hausen, which has some significant disadvantages in this context, as discussed below. In addition, they considered only data sets which contained no gross errors. The purpose of this article is to extend the above work by considering alternative definitions of stage efficiency and efficiency estimation in the presence of gross errors.

  14. Error Sensitivity to Environmental Noise in Quantum Circuits for Chemical State Preparation.

    PubMed

    Sawaya, Nicolas P D; Smelyanskiy, Mikhail; McClean, Jarrod R; Aspuru-Guzik, Alán

    2016-07-12

    Calculating molecular energies is likely to be one of the first useful applications to achieve quantum supremacy, performing faster on a quantum than a classical computer. However, if future quantum devices are to produce accurate calculations, errors due to environmental noise and algorithmic approximations need to be characterized and reduced. In this study, we use the high performance qHiPSTER software to investigate the effects of environmental noise on the preparation of quantum chemistry states. We simulated 18 16-qubit quantum circuits under environmental noise, each corresponding to a unitary coupled cluster state preparation of a different molecule or molecular configuration. Additionally, we analyze the nature of simple gate errors in noise-free circuits of up to 40 qubits. We find that, in most cases, the Jordan-Wigner (JW) encoding produces smaller errors under a noisy environment as compared to the Bravyi-Kitaev (BK) encoding. For the JW encoding, pure dephasing noise is shown to produce substantially smaller errors than pure relaxation noise of the same magnitude. We report error trends in both molecular energy and electron particle number within a unitary coupled cluster state preparation scheme, against changes in nuclear charge, bond length, number of electrons, noise types, and noise magnitude. These trends may prove to be useful in making algorithmic and hardware-related choices for quantum simulation of molecular energies. PMID:27254482

  15. Confidence limits and their errors

    SciTech Connect

    Rajendran Raja

    2002-03-22

    Confidence limits are commonplace in physics analysis. Great care must be taken in their calculation and use, especially in cases of limited statistics. We introduce the concept of statistical errors of confidence limits and argue that not only should limits be calculated but also their errors, in order to represent the results of the analysis to the fullest. We show that comparison of two different limits from two different experiments becomes easier when their errors are also quoted. Use of errors of confidence limits will lead to abatement of the debate on which method is best suited to calculate confidence limits.

  16. Compensating For GPS Ephemeris Error

    NASA Technical Reports Server (NTRS)

    Wu, Jiun-Tsong

    1992-01-01

    Method of computing position of user station receiving signals from Global Positioning System (GPS) of navigational satellites compensates for most of GPS ephemeris error. Present method enables user station to reduce error in its computed position substantially. User station must have access to two or more reference stations at precisely known positions several hundred kilometers apart and must be in neighborhood of reference stations. Based on fact that when GPS data used to compute baseline between reference station and user station, vector error in computed baseline is proportional to ephemeris error and length of baseline.
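
    A commonly quoted rule of thumb for differential positioning (stated here for context; the Tech Brief itself gives only the proportionality) is

$$ \delta b \;\approx\; \frac{b}{\rho}\,\delta r, $$

    where $b$ is the baseline length, $\rho \approx 2\times10^{4}\ \mathrm{km}$ is the receiver-to-satellite range, and $\delta r$ is the broadcast ephemeris error. For a baseline of a few hundred kilometers this attenuates a meter-level ephemeris error to the centimeter level, which is why nearby reference stations at precisely known positions allow most of the error to be removed.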

  17. Retransmission error control with memory

    NASA Technical Reports Server (NTRS)

    Sindhu, P. S.

    1977-01-01

    In this paper, an error control technique that is a basic improvement over automatic repeat request (ARQ) is presented. Erroneously received blocks in an ARQ system are used for error control. The technique is termed ARQ-with-memory (MRQ). The general MRQ system is described, and simple upper and lower bounds are derived on the throughput achievable by MRQ. The performance of MRQ with respect to throughput, message delay and probability of error is compared to that of ARQ by simulating both systems using error data from a VHF satellite channel being operated in the ALOHA packet broadcasting mode.
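
    To make the idea concrete, the sketch below combines three erroneous copies of a block by per-bit majority vote, one simple way of exploiting retransmission memory. It is a generic illustration, not the MRQ decision rule analyzed in the paper, and the channel is a plain binary symmetric channel.

```python
import numpy as np

rng = np.random.default_rng(5)
block = rng.integers(0, 2, 256)                       # transmitted block
p = 0.05                                              # channel bit-error probability

def receive(b):
    return b ^ (rng.random(b.size) < p)               # flip each bit with prob. p

copies = [receive(block) for _ in range(3)]           # three erroneous receptions
combined = (np.sum(copies, axis=0) >= 2).astype(int)  # per-bit majority vote

print("bit errors in one copy  :", int((copies[0] != block).sum()))
print("bit errors after combine:", int((combined != block).sum()))
```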

  18. Medication Errors in Outpatient Pediatrics.

    PubMed

    Berrier, Kyla

    2016-01-01

    Medication errors may occur during parental administration of prescription and over-the-counter medications in the outpatient pediatric setting. Misinterpretation of medication labels and dosing errors are two types of errors in medication administration. Health literacy may play an important role in parents' ability to safely manage their child's medication regimen. There are several proposed strategies for decreasing these medication administration errors, including using standardized dosing instruments, using strictly metric units for medication dosing, and providing parents and caregivers with picture-based dosing instructions. Pediatric healthcare providers should be aware of these strategies and seek to implement many of them into their practices. PMID:27537086

  19. Physical examination. Frequently observed errors.

    PubMed

    Wiener, S; Nathanson, M

    1976-08-16

    A method allowing for direct observation of intern and resident physicians while interviewing and examining patients has been in use on our medical wards for the last five years. A large number of errors in the performance of the medical examination by young physicians were noted and a classification of these errors into those of technique, omission, detection, interpretation, and recording was made. An approach to detection and correction of each of these kinds of errors is presented, as well as a discussion of possible reasons for the occurrence of these errors in physician performance. PMID:947266

  20. A theory of human error

    NASA Technical Reports Server (NTRS)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1980-01-01

    Human error, a significant contributing factor in a very high proportion of civil transport, general aviation, and rotorcraft accidents, is investigated. Correction of the sources of human error requires that one attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation operations is presented. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  1. Roundoff error in long-term planetary orbit integrations

    NASA Astrophysics Data System (ADS)

    Quinn, T.; Tremaine, S.

    1990-03-01

    Possible sources of roundoff error in solar system integrations are studied. It is suggested that, when floating-point arithmetic is optimal, the majority of roundoff error arises from two sources: the approximate representation of the numerical coefficients used in multistep integration formulas and the additions required to evaluate these formulas. An algorithm to remove these two sources of error in computers with optimal arithmetic is presented. It is shown that the corrections result in a substantial reduction in the energy error at the cost of less than a factor of 2 increase in computing time.
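
    One standard remedy for the second source, the roundoff committed in the additions themselves, is compensated (Kahan) summation, sketched below. This is shown as background on the technique; the paper's actual corrections are tailored to the coefficients and additions of multistep integration formulas.

```python
def kahan_sum(values):
    """Compensated summation: carry forward the low-order bits that a plain
    floating-point addition would discard."""
    total = 0.0
    carry = 0.0
    for v in values:
        y = v - carry
        t = total + y
        carry = (t - total) - y
        total = t
    return total

# Many tiny increments added to a value near 1.0: plain summation loses them
# entirely, compensated summation keeps their contribution.
values = [1.0] + [1e-16] * 1_000_000
print(sum(values))        # 1.0 (increments vanish below the rounding threshold)
print(kahan_sum(values))  # ~1.0000000001
```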

  2. Approaches to relativistic positioning around Earth and error estimations

    NASA Astrophysics Data System (ADS)

    Puchades, Neus; Sáez, Diego

    2016-01-01

    In the context of relativistic positioning, the coordinates of a given user may be calculated by using suitable information broadcast by a 4-tuple of satellites. Our 4-tuples belong to the Galileo constellation. Recently, we estimated the positioning errors due to uncertainties in the satellite world lines (U-errors). A distribution of U-errors was obtained, at various times, in a set of points covering a large region surrounding Earth. Here, the positioning errors associated to the simplifying assumption that photons move in Minkowski space-time (S-errors) are estimated and compared with the U-errors. Both errors have been calculated for the same points and times to make comparisons possible. For a certain realistic modeling of the world line uncertainties, the estimated S-errors have proved to be smaller than the U-errors, which shows that the approach based on the assumption that the Earth's gravitational field produces negligible effects on photons may be used in a large region surrounding Earth. The applicability of this approach - which simplifies numerical calculations - to positioning problems, and the usefulness of our S-error maps, are pointed out. A better approach, based on the assumption that photons move in the Schwarzschild space-time governed by an idealized Earth, is also analyzed. More accurate descriptions of photon propagation involving non symmetric space-time structures are not necessary for ordinary positioning and spacecraft navigation around Earth.

  3. A posteriori error estimator and error control for contact problems

    NASA Astrophysics Data System (ADS)

    Weiss, Alexander; Wohlmuth, Barbara I.

    2009-09-01

    In this paper, we consider two error estimators for one-body contact problems. The first error estimator is defined in terms of H(div)-conforming stress approximations and equilibrated fluxes while the second is a standard edge-based residual error estimator without any modification with respect to the contact. We show reliability and efficiency for both estimators. Moreover, the error is bounded by the first estimator with a constant one plus a higher order data oscillation term plus a term arising from the contact that is shown numerically to be of higher order. The second estimator is used in a control-based AFEM refinement strategy, and the decay of the error in the energy is shown. Several numerical tests demonstrate the performance of both estimators.

  4. Spatial frequency domain error budget

    SciTech Connect

    Hauschildt, H; Krulewich, D

    1998-08-27

    The aim of this paper is to describe a methodology for designing and characterizing machines used to manufacture or inspect parts with spatial-frequency-based specifications. At Lawrence Livermore National Laboratory, one of our responsibilities is to design or select the appropriate machine tools to produce advanced optical and weapons systems. Recently, many of the component tolerances for these systems have been specified in terms of the spatial frequency content of residual errors on the surface. We typically use an error budget as a sensitivity analysis tool to ensure that the parts manufactured by a machine will meet the specified component tolerances. Error budgets provide the formalism whereby we account for all sources of uncertainty in a process, and sum them to arrive at a net prediction of how "precisely" a manufactured component can meet a target specification. Using the error budget, we are able to minimize risk during initial stages by ensuring that the machine will produce components that meet specifications before the machine is actually built or purchased. However, the current error budgeting procedure provides no formal mechanism for designing machines that can produce parts with spatial-frequency-based specifications. The output from the current error budgeting procedure is a single number estimating the net worst case or RMS error on the work piece. This procedure has limited ability to differentiate between low spatial frequency form errors versus high frequency surface finish errors. Therefore the current error budgeting procedure can lead us to reject a machine that is adequate or accept a machine that is inadequate. This paper will describe a new error budgeting methodology to aid in the design and characterization of machines used to manufacture or inspect parts with spatial-frequency-based specifications. The output from this new procedure is the continuous spatial frequency content of errors that result on a machined part. If the machine

  5. Error coding simulations in C

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1994-01-01

    When data is transmitted through a noisy channel, errors are produced within the data, rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes the CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2^8) with an interleave depth of 5 as the outermost code, a (7, 1/2) convolutional code as an inner code, and the CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The received signal-to-noise ratio required for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability length is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.
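
    As a small aside on the interleaving step mentioned above, the sketch below shows plain block interleaving and how it spreads a burst of channel errors across codewords; the depth and sizes are illustrative, and this is not the simulator's CCSDS implementation.

```python
DEPTH, LENGTH = 5, 10                        # interleave depth x codeword length

def interleave(symbols):
    """Write DEPTH codewords row-wise, read the table out column-wise."""
    rows = [symbols[i * LENGTH:(i + 1) * LENGTH] for i in range(DEPTH)]
    return [rows[r][c] for c in range(LENGTH) for r in range(DEPTH)]

def deinterleave(symbols):
    """Inverse: write column-wise, read row-wise."""
    cols = [symbols[c * DEPTH:(c + 1) * DEPTH] for c in range(LENGTH)]
    return [cols[c][r] for r in range(DEPTH) for c in range(LENGTH)]

data = list(range(DEPTH * LENGTH))
assert deinterleave(interleave(data)) == data
# A burst of DEPTH consecutive channel errors hits DEPTH adjacent interleaved
# symbols, which deinterleaving distributes as one error per codeword, within
# the correction capability of each Reed-Solomon codeword.
```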

  6. Error coding simulations in C

    NASA Astrophysics Data System (ADS)

    Noble, Viveca K.

    1994-10-01

    When data is transmitted through a noisy channel, errors are produced within the data, rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes the CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2^8) with an interleave depth of 5 as the outermost code, a (7, 1/2) convolutional code as an inner code, and the CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The received signal-to-noise ratio required for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability length is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.

  7. A pharmacometric case study regarding the sensitivity of structural model parameter estimation to error in patient reported dosing times.

    PubMed

    Knights, Jonathan; Rohatagi, Shashank

    2015-12-01

    Although there is a body of literature focused on minimizing the effect of dosing inaccuracies on pharmacokinetic (PK) parameter estimation, most of the work centers on missing doses. No attempt has been made to specifically characterize the effect of error in reported dosing times. Additionally, existing work has largely dealt with cases in which the compound of interest is dosed at an interval no less than its terminal half-life. This work provides a case study investigating how error in patient reported dosing times might affect the accuracy of structural model parameter estimation under sparse sampling conditions when the dosing interval is less than the terminal half-life of the compound, and the underlying kinetics are monoexponential. Additional effects due to noncompliance with dosing events are not explored and it is assumed that the structural model and reasonable initial estimates of the model parameters are known. Under the conditions of our simulations, with structural model CV % ranging from ~20 to 60 %, parameter estimation inaccuracy derived from error in reported dosing times was largely controlled around 10 % on average. Given that no observed dosing was included in the design and sparse sampling was utilized, we believe these error results represent a practical ceiling given the variability and parameter estimates for the one-compartment model. The findings suggest additional investigations may be of interest and are noteworthy given the inability of current PK software platforms to accommodate error in dosing times. PMID:26209956
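
    The toy simulation below illustrates the setting rather than the study's pharmacometric workflow: a one-compartment model with first-order absorption, a dosing interval shorter than the terminal half-life, and sparse observations evaluated at the reported versus the actual dosing times. All parameter values, the error magnitude, and the sampling times are arbitrary placeholders.

```python
import numpy as np

KA, CL, V = 1.5, 2.0, 20.0            # 1/h, L/h, L (placeholders)
KE = CL / V                           # elimination rate constant (t1/2 ~ 6.9 h)
DOSE, TAU, NDOSE = 100.0, 4.0, 6      # mg, h (interval < half-life), number of doses

def conc(t_obs, dose_times):
    """Superposition of one-compartment, first-order-absorption profiles."""
    c = np.zeros_like(t_obs, dtype=float)
    for td in dose_times:
        dt = np.clip(t_obs - td, 0.0, None)          # no contribution before dosing
        c += DOSE * KA / (V * (KA - KE)) * (np.exp(-KE * dt) - np.exp(-KA * dt))
    return c

nominal = np.arange(NDOSE) * TAU                     # reported dosing times
rng = np.random.default_rng(6)
actual = np.clip(nominal + rng.normal(0.0, 0.5, NDOSE), 0.0, None)  # ~30 min error

t_obs = np.array([22.0, 23.5])                       # sparse sampling times (h)
print("conc using reported times:", conc(t_obs, nominal).round(3))
print("conc using actual times  :", conc(t_obs, actual).round(3))
```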

  8. Investigation of Measurement Errors in Doppler Global Velocimetry

    NASA Technical Reports Server (NTRS)

    Meyers, James F.; Lee, Joseph W.

    1999-01-01

    While the initial development phase of Doppler Global Velocimetry (DGV) has been successfully completed, there remains a critical next phase to be conducted, namely the determination of an error budget to provide quantitative bounds for measurements obtained by this technology. This paper describes a laboratory investigation that consisted of a detailed interrogation of potential error sources to determine their contribution to the overall DGV error budget. A few sources of error were obvious; e.g., iodine vapor absorption lines, optical systems, and camera characteristics. However, additional non-obvious sources were discovered; e.g., laser frequency and single-frequency stability, media scattering characteristics, and interference fringes. This paper describes each identified error source, its effect on the overall error budget, and where possible, corrective procedures to reduce or eliminate its effect.

  9. Automatic pronunciation error detection in non-native speech: the case of vowel errors in Dutch.

    PubMed

    van Doremalen, Joost; Cucchiarini, Catia; Strik, Helmer

    2013-08-01

    This research is aimed at analyzing and improving automatic pronunciation error detection in a second language. Dutch vowels spoken by adult non-native learners of Dutch are used as a test case. A first study on Dutch pronunciation by L2 learners with different L1s revealed that vowel pronunciation errors are relatively frequent and often concern subtle acoustic differences between the realization and the target sound. In a second study automatic pronunciation error detection experiments were conducted to compare existing measures to a metric that takes account of the error patterns observed to capture relevant acoustic differences. The results of the two studies do indeed show that error patterns bear information that can be usefully employed in weighted automatic measures of pronunciation quality. In addition, it appears that combining such a weighted metric with existing measures improves the equal error rate by 6.1 percentage points from 0.297, for the Goodness of Pronunciation (GOP) algorithm, to 0.236. PMID:23927130

  10. Explaining Errors in Children's Questions

    ERIC Educational Resources Information Center

    Rowland, Caroline F.

    2007-01-01

    The ability to explain the occurrence of errors in children's speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that,…

  11. Dyslexia and Oral Reading Errors

    ERIC Educational Resources Information Center

    Singleton, Chris

    2005-01-01

    Thomson was the first of very few researchers to have studied oral reading errors as a means of addressing the question: Are dyslexic readers different to other readers? Using the Neale Analysis of Reading Ability and Goodman's taxonomy of oral reading errors, Thomson concluded that dyslexic readers are different, but he found that they do not…

  12. Children's Scale Errors with Tools

    ERIC Educational Resources Information Center

    Casler, Krista; Eshleman, Angelica; Greene, Kimberly; Terziyan, Treysi

    2011-01-01

    Children sometimes make "scale errors," attempting to interact with tiny object replicas as though they were full size. Here, we demonstrate that instrumental tools provide special insight into the origins of scale errors and, moreover, into the broader nature of children's purpose-guided reasoning and behavior with objects. In Study 1, 1.5- to…

  13. Human Error: A Concept Analysis

    NASA Technical Reports Server (NTRS)

    Hansen, Frederick D.

    2007-01-01

    Human error is the subject of research in almost every industry and profession of our times. This term is part of our daily language and intuitively understood by most people; however, it would be premature to assume that everyone's understanding of human error is the same. For example, human error is used to describe the outcome or consequence of human action, the causal factor of an accident, deliberate violations, and the actual action taken by a human being. As a result, researchers rarely agree on either a specific definition of human error or how to prevent it. The purpose of this article is to explore the specific concept of human error using Concept Analysis as described by Walker and Avant (1995). The concept of human error is examined as currently used in the literature of a variety of industries and professions. Defining attributes and examples of model, borderline, and contrary cases are described. The antecedents and consequences of human error are also discussed and a definition of human error is offered.

  14. Dual Processing and Diagnostic Errors

    ERIC Educational Resources Information Center

    Norman, Geoff

    2009-01-01

    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical,…

  15. Measurement Errors in Organizational Surveys.

    ERIC Educational Resources Information Center

    Dutka, Solomon; Frankel, Lester R.

    1993-01-01

    Describes three classes of measurement techniques: (1) interviewing methods; (2) record retrieval procedures; and (3) observation methods. Discusses primary reasons for measurement error. Concludes that, although measurement error can be defined and controlled for, there are other design factors that also must be considered. (CFR)

  16. Barriers to Medical Error Reporting

    PubMed Central

    Poorolajal, Jalal; Rezaie, Shirin; Aghighi, Negar

    2015-01-01

    Background: This study was conducted to explore the prevalence of medical error underreporting and associated barriers. Methods: This cross-sectional study was performed from September to December 2012. Five hospitals, affiliated with Hamadan University of Medical Sciences, in Hamedan, Iran were investigated. A self-administered questionnaire was used for data collection. Participants consisted of physicians, nurses, midwives, residents, interns, and staff of the radiology and laboratory departments. Results: Overall, 50.26% of subjects had committed but not reported medical errors. The main reasons mentioned for underreporting were lack of an effective medical error reporting system (60.0%), lack of a proper reporting form (51.8%), lack of peer support for a person who has committed an error (56.0%), and lack of personal attention to the importance of medical errors (62.9%). The rate of committing medical errors was higher in men (71.4%), age of 40–50 years (67.6%), less-experienced personnel (58.7%), educational level of MSc (87.5%), and staff of the radiology department (88.9%). Conclusions: This study outlined the main barriers to reporting medical errors and associated factors that may be helpful for healthcare organizations in improving medical error reporting as an essential component for patient safety enhancement. PMID:26605018

  17. Operational Interventions to Maintenance Error

    NASA Technical Reports Server (NTRS)

    Kanki, Barbara G.; Walter, Diane; Dulchinos, VIcki

    1997-01-01

    A significant proportion of aviation accidents and incidents are known to be tied to human error. However, research of flight operational errors has shown that so-called pilot error often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the "team" concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: 1) to develop human factors interventions which are directly supported by reliable human error data, and 2) to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.

  18. Error suppression and correction for quantum annealing

    NASA Astrophysics Data System (ADS)

    Lidar, Daniel

    While adiabatic quantum computing and quantum annealing enjoy a certain degree of inherent robustness against excitations and control errors, there is no escaping the need for error correction or suppression. In this talk I will give an overview of our work on the development of such error correction and suppression methods. We have experimentally tested one such method combining encoding, energy penalties and decoding, on a D-Wave Two processor, with encouraging results. Mean field theory shows that this can be explained in terms of a softening of the closing of the gap due to the energy penalty, resulting in protection against excitations that occur near the quantum critical point. Decoding recovers population from excited states and enhances the success probability of quantum annealing. Moreover, we have demonstrated that using repetition codes with increasing code distance can lower the effective temperature of the annealer. References: K.L. Pudenz, T. Albash, D.A. Lidar, ``Error corrected quantum annealing with hundreds of qubits'', Nature Commun. 5, 3243 (2014). K.L. Pudenz, T. Albash, D.A. Lidar, ``Quantum annealing correction for random Ising problems'', Phys. Rev. A. 91, 042302 (2015). S. Matsuura, H. Nishimori, T. Albash, D.A. Lidar, ``Mean Field Analysis of Quantum Annealing Correction''. arXiv:1510.07709. W. Vinci et al., in preparation.

  19. Non-Abelian quantum error correction

    NASA Astrophysics Data System (ADS)

    Feng, Weibo

    A quantum computer is a proposed device which would be capable of initializing, coherently manipulating, and measuring quantum states with sufficient accuracy to carry out new kinds of computations. In the standard scenario, a quantum computer is built out of quantum bits, or qubits, two-level quantum systems which replace the ordinary classical bits of a classical computer. Quantum computation is then carried out by applying quantum gates, the quantum equivalent of Boolean logic gates, to these qubits. The most fundamental barrier to building a quantum computer is the inevitable errors which occur when carrying out quantum gates and the loss of quantum coherence of the qubits due to their coupling to the environment (decoherence). Remarkably, it has been shown that in a quantum computer such errors and decoherence can be actively fought using what is known as quantum error correction. A closely related proposal for fighting errors and decoherence in a quantum computer is to build the computer out of so-called topologically ordered states of matter. These are states of matter which allow for the storage and manipulation of quantum states with a built in protection from error and decoherence. The excitations of these states are non-Abelian anyons, particle-like excitations which satisfy non-Abelian statistics, meaning that when two excitations are interchanged the result is not the usual +1 and -1 associated with identical Bosons or Fermions, but rather a unitary operation which acts on a multidimensional Hilbert space. It is therefore possible to envision computing with these anyons by braiding their world-lines in 2+1-dimensional spacetime. In this Dissertation we present explicit procedures for a scheme which lives at the intersection of these two approaches. In this scheme we envision a functioning ``conventional" quantum computer consisting of an array of qubits and the ability to carry out quantum gates on these qubits. We then give explicit quantum circuits

  20. Atmospheric Pressure Error of GRACE in Antarctic Ice Mass Change

    NASA Astrophysics Data System (ADS)

    Kim, B.; Eom, J.; Seo, K. W.

    2014-12-01

    As GRACE has observed time-varying gravity for longer than a decade, long-term mass changes have emerged. In particular, linear trends and accelerated patterns in Antarctica have been reported and have received attention for projections of sea level rise. The cause of accelerated ice mass loss in Antarctica is not known, since its amplitude is not significantly larger than the ice mass change associated with natural climate variations. In this study, we consider another uncertainty in Antarctic ice mass loss acceleration, due to unmodeled atmospheric pressure fields. We first compare the GRACE AOD product with in-situ atmospheric pressure data from the SCAR READER project. GRACE AOD (ECMWF) shows a spurious jump near the Transantarctic Mountains, which is due to the regular model updates of ECMWF. In addition, GRACE AOD shows smaller variations than in-situ observations in coastal areas. This is possibly due to the lower resolution of GRACE AOD, such that the relatively stable ocean bottom pressure associated with the inverted barometer effect suppresses the variations of atmospheric pressure near the coast. On the other hand, GRACE AOD closely depicts in-situ observations far from the oceans. This is probably because the GRACE AOD model (ECMWF) is assimilated with in-situ observations. However, the in-situ observational sites in the interior of Antarctica are sparse, and thus the reliability of GRACE AOD for most regions of Antarctica is still uncertain. To examine this, we cross-validate three different reanalyses: ERA Interim, NCEP DOE and MERRA. Residual atmospheric pressure fields, taken as a measure of atmospheric pressure errors (NCEP DOE minus ERA Interim, or MERRA minus ERA Interim), show long-term changes, and the estimated uncertainty in the acceleration of Antarctic ice mass change is about 9 Gton/yr^2 from 2003 to 2012. This result implies that atmospheric surface pressure errors likely hinder an accurate estimate of the ice mass loss acceleration in Antarctica.

  1. Reducing latent errors, drift errors, and stakeholder dissonance.

    PubMed

    Samaras, George M

    2012-01-01

    Healthcare information technology (HIT) is being offered as a transformer of modern healthcare delivery systems. Some believe that it has the potential to improve patient safety, increase the effectiveness of healthcare delivery, and generate significant cost savings. In other industrial sectors, information technology has dramatically influenced quality and profitability - sometimes for the better and sometimes not. Quality improvement efforts in healthcare delivery have not yet produced the dramatic results obtained in other industrial sectors. This may be because previously successful quality improvement experts do not possess the requisite domain knowledge (clinical experience and expertise). It also appears related to a continuing misconception regarding the origins and meaning of work errors in healthcare delivery. The focus here is on system use errors rather than individual user errors. System use errors originate in both the development and the deployment of technology. Not recognizing stakeholders and their conflicting needs, wants, and desires (NWDs) may lead to stakeholder dissonance. Mistakes translating stakeholder NWDs into development or deployment requirements may lead to latent errors. Mistakes translating requirements into specifications may lead to drift errors. At the sharp end, workers encounter system use errors or, recognizing the risk, expend extensive and unanticipated resources to avoid them. PMID:22317001

  2. A theory of human error

    NASA Technical Reports Server (NTRS)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1981-01-01

    Human errors tend to be treated in terms of clinical and anecdotal descriptions, from which remedial measures are difficult to derive. Correction of the sources of human error requires an attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A comprehensive analytical theory of the cause-effect relationships governing propagation of human error is indispensable to a reconstruction of the underlying and contributing causes. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation, maritime, automotive, and process control operations is highlighted. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  3. Onorbit IMU alignment error budget

    NASA Technical Reports Server (NTRS)

    Corson, R. W.

    1980-01-01

    The Star Tracker, Crew Optical Alignment Sight (COAS), and Inertial Measurement Unit (IMU) form a complex navigation system with a multitude of error sources. A complete list of the system errors is presented. The errors were combined in a rational way to yield an estimate of the IMU alignment accuracy for STS-1. The expected standard deviation in the IMU alignment error for STS-1 type alignments was determined to be 72 arc seconds per axis for star tracker alignments and 188 arc seconds per axis for COAS alignments. These estimates are based on current knowledge of the star tracker, COAS, IMU, and navigation base error specifications, and were partially verified by preliminary Monte Carlo analysis.

  4. Error diffusion with a more symmetric error distribution

    NASA Astrophysics Data System (ADS)

    Fan, Zhigang

    1994-05-01

    In this paper a new error diffusion algorithm is presented that effectively eliminates the `worm' artifacts appearing in the standard methods. The new algorithm processes each scanline of the image in two passes, a forward pass followed by a backward one. This enables the error made at one pixel to be propagated to all the `future' pixels. A much more symmetric error distribution is achieved than that of the standard methods. The frequency response of the noise shaping filter associated with the new algorithm is mirror-symmetric in magnitude.
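
    The two-pass idea lends itself to a short sketch. The following Python fragment is a minimal illustration under stated assumptions: the 50/50 error split and the decaying backward carry are chosen for clarity and are not the filter taps published in the paper.

    import numpy as np

    def two_pass_error_diffusion(img, threshold=128.0):
        """Halftone a grayscale image with a forward then backward scanline sweep."""
        work = img.astype(float)
        out = np.zeros(work.shape, dtype=np.uint8)
        rows, cols = work.shape
        for y in range(rows):
            held = np.zeros(cols)  # error withheld for the backward pass
            # Forward pass: quantize and push half of each pixel's error to its right neighbour.
            for x in range(cols):
                val = work[y, x]
                q = 255.0 if val >= threshold else 0.0
                err = val - q
                out[y, x] = int(q)
                if x + 1 < cols:
                    work[y, x + 1] += 0.5 * err
                held[x] = 0.5 * err
            # Backward pass: let the withheld error decay leftwards and fall onto the next
            # scanline, so a pixel's error also reaches "future" pixels to its lower left.
            if y + 1 < rows:
                carry = 0.0
                for x in range(cols - 1, -1, -1):
                    carry = 0.5 * carry + held[x]
                    work[y + 1, x] += 0.5 * carry
        return out

    halftone = two_pass_error_diffusion(np.random.default_rng(0).random((64, 64)) * 255.0)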

  5. Relationships between GPS-signal propagation errors and EISCAT observations

    NASA Astrophysics Data System (ADS)

    Jakowski, N.; Sardon, E.; Engler, E.; Jungstand, A.; Klähn, D.

    1996-12-01

    When travelling through the ionosphere the signals of space-based radio navigation systems such as the Global Positioning System (GPS) are subject to modifications in amplitude, phase and polarization. In particular, phase changes due to refraction lead to propagation errors of up to 50 m for single-frequency GPS users. If both the L1 and the L2 frequencies transmitted by the GPS satellites are measured, first-order range error contributions of the ionosphere can be determined and removed by difference methods. The ionospheric contribution is proportional to the total electron content (TEC) along the ray path between satellite and receiver. Using about ten European GPS receiving stations of the International GPS Service for Geodynamics (IGS), the TEC over Europe is estimated within the geographic ranges -20° ≤ λ ≤ 40°E and 32.5° ≤ φ ≤ 70°N in longitude and latitude, respectively. The derived TEC maps over Europe contribute to the study of horizontal coupling and transport processes during significant ionospheric events. Due to their comprehensive information about the high-latitude ionosphere, EISCAT observations may help to study the influence of ionospheric phenomena upon propagation errors in GPS navigation systems. Since there are still some accuracy limiting problems to be solved in TEC determination using GPS, data comparison of TEC with vertical electron density profiles derived from EISCAT observations is valuable to enhance the accuracy of propagation-error estimations. This is evident both for absolute TEC calibration as well as for the conversion of ray-path-related observations to vertical TEC. The combination of EISCAT data and GPS-derived TEC data enables a better understanding of large-scale ionospheric processes. Acknowledgements. This work has been supported by the UK Particle-Physics and Astronomy Research Council. The assistance of the director and staff of the EISCAT Scientific Association, the staff of the Norsk Polarinstitutt

  6. Video Error Concealment Using Fidelity Tracking

    NASA Astrophysics Data System (ADS)

    Yoneyama, Akio; Takishima, Yasuhiro; Nakajima, Yasuyuki; Hatori, Yoshinori

    We propose a method to prevent the degradation of decoded MPEG pictures caused by video transmission over error-prone networks. In this paper, we focus on error concealment performed at the decoder without using any backchannels. Though there have been various approaches to this problem, they generally focus on minimizing the degradation measured frame by frame. Although this frame-level approach is effective in evaluating individual frame quality, in terms of human perception the most noticeable artifact is the spatio-temporal discontinuity of image features in the decoded video. We propose a novel error concealment algorithm combining i) a spatio-temporal error recovery function with low processing cost, ii) an MB-based image fidelity tracking scheme, and iii) an adaptive post-filter using the fidelity information. It is demonstrated by experimental results that the proposed algorithm can significantly reduce the subjective degradation of corrupted MPEG video quality with about 30% additional decoding processing power.

  7. Quantum rms error and Heisenberg's error-disturbance relation

    NASA Astrophysics Data System (ADS)

    Busch, Paul

    2014-09-01

    Reports on experiments recently performed in Vienna [Erhard et al, Nature Phys. 8, 185 (2012)] and Toronto [Rozema et al, Phys. Rev. Lett. 109, 100404 (2012)] include claims of a violation of Heisenberg's error-disturbance relation. In contrast, a Heisenberg-type tradeoff relation for joint measurements of position and momentum has been formulated and proven in [Phys. Rev. Lett. 111, 160405 (2013)]. Here I show how the apparent conflict is resolved by a careful consideration of the quantum generalization of the notion of root-mean-square error. The claim of a violation of Heisenberg's principle is untenable as it is based on a historically wrong attribution of an incorrect relation to Heisenberg, which is in fact trivially violated. We review a new general trade-off relation for the necessary errors in approximate joint measurements of incompatible qubit observables that is in the spirit of Heisenberg's intuitions. The experiments mentioned may directly be used to test this new error inequality.

  8. Error analysis of large aperture static interference imaging spectrometer

    NASA Astrophysics Data System (ADS)

    Li, Fan; Zhang, Guo

    2015-12-01

    The Large Aperture Static Interference Imaging Spectrometer (LASIS) is a new type of spectrometer with a light structure, high spectral linearity, high luminous flux and a wide spectral range, which overcomes the contradiction between high flux and high stability and is therefore valuable in scientific studies and applications. However, because its imaging style differs from that of traditional imaging spectrometers, the LASIS imaging process follows different error laws and, correspondingly, its data processing is complicated. In order to improve the accuracy of spectrum detection and to serve quantitative analysis and monitoring of topographical surface features, the error laws of LASIS imaging need to be characterized. In this paper, the LASIS errors are classified as interferogram error, radiometric correction error and spectral inversion error, and each type of error is analyzed and studied. Finally, a case study of Yaogan-14 is presented, in which the interferogram error of the time- and space-combined modulation LASIS is examined and analyzed, as well as the errors from the radiometric correction and spectral inversion processes.

  9. Evaluating concentration estimation errors in ELISA microarray experiments

    SciTech Connect

    Daly, Don S.; White, Amanda M.; Varnum, Susan M.; Anderson, Kevin K.; Zangar, Richard C.

    2005-01-26

    Enzyme-linked immunosorbent assay (ELISA) is a standard immunoassay to predict a protein concentration in a sample. Deploying ELISA in a microarray format permits simultaneous prediction of the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Evaluating prediction error is critical to interpreting biological significance and improving the ELISA microarray process. Evaluating prediction error must be automated to realize a reliable high-throughput ELISA microarray system. Methods: In this paper, we present a statistical method based on propagation of error to evaluate prediction errors in the ELISA microarray process. Although propagation of error is central to this method, it is effective only when comparable data are available. Therefore, we briefly discuss the roles of experimental design, data screening, normalization and statistical diagnostics when evaluating ELISA microarray prediction errors. We use an ELISA microarray investigation of breast cancer biomarkers to illustrate the evaluation of prediction errors. The illustration begins with a description of the design and resulting data, followed by a brief discussion of data screening and normalization. In our illustration, we fit a standard curve to the screened and normalized data, review the modeling diagnostics, and apply propagation of error.
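
    As a hedged illustration of the propagation-of-error step, the Python sketch below fits a simplified log-linear standard curve (a production analysis would more likely use a four-parameter logistic) and propagates both the curve-fit covariance and the signal variance into the predicted concentration; all data values and the curve form are invented for the example.

    import numpy as np

    # Hypothetical ELISA standard curve: signal ~ a + b * log(concentration).
    conc_std = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0, 100.0])
    rng = np.random.default_rng(1)
    signal_std = 0.2 + 0.5 * np.log(conc_std) + rng.normal(scale=0.02, size=conc_std.size)

    # Fit the curve and estimate the parameter covariance matrix.
    X = np.column_stack([np.ones_like(conc_std), np.log(conc_std)])
    beta, res, *_ = np.linalg.lstsq(X, signal_std, rcond=None)
    sigma2 = float(res[0]) / (len(signal_std) - 2)           # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)                    # Var[(a, b)]

    # Predict an unknown concentration from its measured signal and propagate the error.
    y_obs, var_y = 1.8, 0.02 ** 2
    a, b = beta
    c_hat = np.exp((y_obs - a) / b)
    dc_da = -c_hat / b                       # partial derivatives for the delta method
    dc_db = -c_hat * (y_obs - a) / b ** 2
    dc_dy = c_hat / b
    g = np.array([dc_da, dc_db])
    var_c = g @ cov @ g + dc_dy ** 2 * var_y
    print(f"concentration estimate {c_hat:.1f} +/- {np.sqrt(var_c):.1f}")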

  10. Error compensation for thermally induced errors on a machine tool

    SciTech Connect

    Krulewich, D.A.

    1996-11-08

    Heat flow from internal and external sources and the environment creates machine deformations, resulting in positioning errors between the tool and workpiece. There is no industrially accepted method for thermal error compensation. A simple model has been selected that linearly relates discrete temperature measurements to the deflection. The biggest problem is how to locate the temperature sensors and how many temperature sensors are required. This research develops a method to determine the number and location of temperature measurements.
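
    The linear compensation model described above can be sketched in a few lines. The Python fragment below uses entirely synthetic sensor data, fits deflection against candidate temperature sensors by least squares, and reports the residual left after subtracting the predicted thermal error; it illustrates the model form only, not the sensor-placement method developed in the report.

    import numpy as np

    # Hypothetical data: rows are machine states, columns are candidate temperature sensors.
    rng = np.random.default_rng(0)
    temps = 20.0 + rng.normal(size=(200, 8))                             # deg C readings
    true_coeff = np.array([4.0, 0.0, -2.5, 0.0, 1.0, 0.0, 0.0, 0.0])     # um per deg C
    deflection = temps @ true_coeff + rng.normal(scale=0.5, size=200)    # measured error, um

    # Linear thermal-error model: deflection ~ c0 + sum_i c_i * T_i
    X = np.column_stack([np.ones(len(temps)), temps])
    coeff, *_ = np.linalg.lstsq(X, deflection, rcond=None)

    # Compensation: subtract the predicted thermal deflection from the commanded position.
    residual_rms = np.sqrt(np.mean((deflection - X @ coeff) ** 2))
    print(f"residual after compensation: {residual_rms:.2f} um")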

  11. Accurate identification and compensation of geometric errors of 5-axis CNC machine tools using double ball bar

    NASA Astrophysics Data System (ADS)

    Lasemi, Ali; Xue, Deyi; Gu, Peihua

    2016-05-01

    Five-axis CNC machine tools are widely used in manufacturing of parts with free-form surfaces. Geometric errors of machine tools have significant effects on the quality of manufactured parts. This research focuses on development of a new method to accurately identify geometric errors of 5-axis CNC machines, especially the errors due to rotary axes, using the magnetic double ball bar. A theoretical model for identification of geometric errors is provided. In this model, both position-independent errors and position-dependent errors are considered as the error sources. This model is simplified by identification and removal of the correlated and insignificant error sources of the machine. Insignificant error sources are identified using the sensitivity analysis technique. Simulation results reveal that the simplified error identification model can result in more accurate estimations of the error parameters. Experiments on a 5-axis CNC machine tool also demonstrate significant reduction in the volumetric error after error compensation.

  12. Human error mitigation initiative (HEMI) : summary report.

    SciTech Connect

    Stevens, Susan M.; Ramos, M. Victoria; Wenner, Caren A.; Brannon, Nathan Gregory

    2004-11-01

    Despite continuing efforts to apply existing hazard analysis methods and comply with requirements, human errors persist across the nuclear weapons complex. Due to a number of factors, current retroactive and proactive methods to understand and minimize human error are highly subjective, inconsistent in numerous dimensions, and difficult to characterize as thorough. The proposed alternative method begins with leveraging historical data to understand what the systemic issues are and where resources need to be brought to bear proactively to minimize the risk of future occurrences. An illustrative analysis was performed using existing incident databases specific to Pantex weapons operations, indicating systemic issues associated with operating procedures that undergo notably less development rigor relative to other task elements such as tooling and process flow. Recommended future steps to improve the objectivity, consistency, and thoroughness of hazard analysis and mitigation were delineated.

  13. Errors, error detection, error correction and hippocampal-region damage: data and theories.

    PubMed

    MacKay, Donald G; Johnson, Laura W

    2013-11-01

    This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future test. PMID:23999403

  14. Stochastic Models of Human Errors

    NASA Technical Reports Server (NTRS)

    Elshamy, Maged; Elliott, Dawn M. (Technical Monitor)

    2002-01-01

    Humans play an important role in the overall reliability of engineering systems. Accidents and system failures are often traced to human errors. Therefore, in order to have a meaningful system risk analysis, the reliability of the human element must be taken into consideration. Describing the human error process by mathematical models is a key to analyzing contributing factors. Therefore, the objective of this research effort is to establish stochastic models, substantiated by a sound theoretical foundation, to address the occurrence of human errors in the processing of the Space Shuttle.

  15. Interindividual Differences in Mid-Adolescents in Error Monitoring and Post-Error Adjustment

    PubMed Central

    Rodehacke, Sarah; Mennigen, Eva; Müller, Kathrin U.; Ripke, Stephan; Jacob, Mark J.; Hübner, Thomas; Schmidt, Dirk H. K.; Goschke, Thomas; Smolka, Michael N.

    2014-01-01

    A number of studies have concluded that cognitive control is not fully established until late adolescence. The precise differences in brain function between adults and adolescents with respect to cognitive control, however, remain unclear. To address this issue, we conducted a study in which 185 adolescents (mean age (SD) 14.6 (0.3) years) and 28 adults (mean age (SD) 25.2 (6.3) years) performed a single task that included both a stimulus-response (S-R) interference component and a task-switching component. Behavioural responses (i.e. reaction time, RT; error rate, ER) and brain activity during correct, error and post-error trials, detected by functional magnetic resonance imaging (fMRI), were measured. Behaviourally, RT and ER were significantly higher in incongruent than in congruent trials and in switch than in repeat trials. The two groups did not differ in RT during correct trials, but adolescents had a significantly higher ER than adults. In line with similar RTs, brain responses during correct trials did not differ between groups, indicating that adolescents and adults engage the same cognitive control network to successfully overcome S-R interference or task switches. Interestingly, adolescents with stronger brain activation in the bilateral insulae during error trials and in fronto-parietal regions of the cognitive control network during post-error trials did have lower ERs. This indicates that those mid-adolescents who commit fewer errors are better at monitoring their performance, and after detecting errors are more capable of flexibly allocating further cognitive control resources. Although we did not detect a convincing neural correlate of the observed behavioural differences between adolescents and adults, the revealed interindividual differences in adolescents might at least in part be due to brain development. PMID:24558455

  16. Addressing the use of phylogenetics for identification of sequences in error in the SWGDAM mitochondrial DNA database.

    PubMed

    Budowle, Bruce; Polanskey, Deborah; Allard, Marc W; Chakraborty, Ranajit

    2004-11-01

    The SWGDAM mtDNA database is a publicly available reference source that is used for estimating the rarity of an evidence mtDNA profile. Because of the current processes for generating population data, it is unlikely that population databases are error free. The majority of the errors are due to human error and are transcriptional in nature. Phylogenetic analysis of data sets can identify some potential errors, and coupled with a review of the sequence data or alignment sheets can be a very useful tool. Seven sequences with errors have been identified by phylogenetic analysis. In addition, two samples were inadvertently modified when placed in the SWGDAM database. The corrected sequences are provided so that users can modify appropriately the current iteration of the SWGDAM database. From a practical perspective, upper bound estimates of the percentage of matching profiles obtained from a database search containing an incorrect sequence and those of a database containing the corrected sequence are not substantially different. Community wide access and review has enabled identification of errors in the SWGDAM data set and will continue to do so. The result of public accessibility is that the quality of the SWGDAM forensic dataset is always improving. PMID:15568698

  17. Exploring Discretization Error in Simulation-Based Aerodynamic Databases

    NASA Technical Reports Server (NTRS)

    Aftosmis, Michael J.; Nemec, Marian

    2010-01-01

    This work examines the level of discretization error in simulation-based aerodynamic databases and introduces strategies for error control. Simulations are performed using a parallel, multi-level Euler solver on embedded-boundary Cartesian meshes. Discretization errors in user-selected outputs are estimated using the method of adjoint-weighted residuals, and we use adaptive mesh refinement to reduce these errors to specified tolerances. Using this framework, we examine the behavior of discretization error throughout a token database computed for a NACA 0012 airfoil consisting of 120 cases. We compare the cost and accuracy of two approaches for aerodynamic database generation. In the first approach, mesh adaptation is used to compute all cases in the database to a prescribed level of accuracy. The second approach conducts all simulations using the same computational mesh without adaptation. We quantitatively assess the error landscape and computational costs in both databases. This investigation highlights sensitivities of the database under a variety of conditions. The presence of transonic shocks or the stiffness in the governing equations near the incompressible limit are shown to dramatically increase discretization error, requiring additional mesh resolution to control. Results show that such pathologies lead to error levels that vary by over a factor of 40 when using a fixed mesh throughout the database. Alternatively, controlling this sensitivity through mesh adaptation leads to mesh sizes which span two orders of magnitude. We propose strategies to minimize simulation cost in sensitive regions and discuss the role of error-estimation in database quality.

  18. Error-resilient method for robust video transmissions

    NASA Astrophysics Data System (ADS)

    Choi, Dong-Hwan; Lim, Tae-Gyun; Lee, Sang-Hak; Hwang, Chan-Sik

    2003-06-01

    In this paper we address the problems of video transmission in error-prone environments. A novel error-resilient method is proposed that uses a data embedding scheme for header parameters in video coding standards, such as MPEG-2 and H.263. When data losses beyond header errors must also be handled, the video decoder conceals visual degradation as well as possible, employing an error concealment method based on an affine transform. Header information is very important because syntax elements, tables, and decoding processes all depend on the values of the header information. Therefore, transmission errors in header information can result in serious visual degradation of the output video and also cause an abnormal decoding process. In the proposed method, the header parameters are embedded into the least significant bits (LSB) of the quantized DCT coefficients. Then, when errors occur in the header field of the compressed bitstream, the decoder can accurately recover the corrupted header parameters if the embedded information is extracted correctly. The error concealment technique employed in this paper uses motion estimation considering actual motions, such as rotation, magnification, reduction, and parallel motion, in moving pictures. Experimental results show that the proposed error-resilient method can effectively reconstruct the original video sequence without any additional bits or modifications to the video coding standard, and the error concealment method can produce a higher PSNR value and better subjective video quality, estimating the motion of lost data more accurately.
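
    A toy version of the header-embedding step might look like the Python sketch below, which hides header bits in the least significant bits of non-zero quantized DCT coefficients and reads them back at the decoder. Bitstream validity, rate control, and the affine-transform concealment are ignored, and the example block and header field are invented.

    import numpy as np

    def embed_header_bits(coeffs, bits):
        """Hide header bits in the LSBs of non-zero quantized DCT coefficients (toy sketch)."""
        out = coeffs.copy()
        idx = np.flatnonzero(out)                # skip zeros so run-length structure survives
        assert len(bits) <= len(idx), "not enough coefficients to carry the header"
        for bit, i in zip(bits, idx):
            sign = -1 if out.flat[i] < 0 else 1
            mag = (abs(int(out.flat[i])) & ~1) | bit   # overwrite the least significant bit
            if mag == 0:
                mag = 2                          # keep the coefficient non-zero; LSB is still 0
            out.flat[i] = sign * mag
        return out

    def extract_header_bits(coeffs, n_bits):
        idx = np.flatnonzero(coeffs)
        return [int(abs(int(coeffs.flat[i])) & 1) for i in idx[:n_bits]]

    header = [1, 0, 1, 1, 0, 1]                  # e.g. an invented 6-bit header field
    block = np.array([[-26, 3, 0, 1], [2, -1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0]])
    print(extract_header_bits(embed_header_bits(block, header), len(header)))   # -> header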

  19. Static Detection of Disassembly Errors

    SciTech Connect

    Krishnamoorthy, Nithya; Debray, Saumya; Fligg, Alan K

    2009-10-13

    Static disassembly is a crucial first step in reverse engineering executable files, and there is a considerable body of work in reverse-engineering of binaries, as well as areas such as semantics-based security analysis, that assumes that the input executable has been correctly disassembled. However, disassembly errors, e.g., arising from binary obfuscations, can render this assumption invalid. This work describes a machine-learning-based approach, using decision trees, for statically identifying possible errors in a static disassembly; such potential errors may then be examined more closely, e.g., using dynamic analyses. Experimental results using a variety of input executables indicate that our approach performs well, correctly identifying most disassembly errors with relatively few false positives.

  20. Orbital and Geodetic Error Analysis

    NASA Technical Reports Server (NTRS)

    Felsentreger, T.; Maresca, P.; Estes, R.

    1985-01-01

    Results that previously required several runs are now determined in a more computer-efficient manner. The GEODYN computations are performed only once and stored on tape; ERODYN then performs the matrix partitioning and linear algebra required for each individual error-analysis run.

  1. Prospective errors determine motor learning.

    PubMed

    Takiyama, Ken; Hirashima, Masaya; Nozaki, Daichi

    2015-01-01

    Diverse features of motor learning have been reported by numerous studies, but no single theoretical framework concurrently accounts for these features. Here, we propose a model for motor learning to explain these features in a unified way by extending a motor primitive framework. The model assumes that the recruitment pattern of motor primitives is determined by the predicted movement error of an upcoming movement (prospective error). To validate this idea, we perform a behavioural experiment to examine the model's novel prediction: after experiencing an environment in which the movement error is more easily predictable, subsequent motor learning should become faster. The experimental results support our prediction, suggesting that the prospective error might be encoded in the motor primitives. Furthermore, we demonstrate that this model has a strong explanatory power to reproduce a wide variety of motor-learning-related phenomena that have been separately explained by different computational models. PMID:25635628

  2. Human errors and measurement uncertainty

    NASA Astrophysics Data System (ADS)

    Kuselman, Ilya; Pennecchi, Francesca

    2015-04-01

    Evaluating the residual risk of human errors in a measurement and testing laboratory, remaining after the error reduction by the laboratory quality system, and quantifying the consequences of this risk for the quality of the measurement/test results are discussed based on expert judgments and Monte Carlo simulations. A procedure for evaluation of the contribution of the residual risk to the measurement uncertainty budget is proposed. Examples are provided using earlier published sets of expert judgments on human errors in pH measurement of groundwater, elemental analysis of geological samples by inductively coupled plasma mass spectrometry, and multi-residue analysis of pesticides in fruits and vegetables. The human error contribution to the measurement uncertainty budget in the examples was not negligible, yet also not dominant. This was assessed as a good risk management result.

  3. Quantum error correction beyond qubits

    NASA Astrophysics Data System (ADS)

    Aoki, Takao; Takahashi, Go; Kajiya, Tadashi; Yoshikawa, Jun-Ichi; Braunstein, Samuel L.; van Loock, Peter; Furusawa, Akira

    2009-08-01

    Quantum computation and communication rely on the ability to manipulate quantum states robustly and with high fidelity. To protect fragile quantum-superposition states from corruption through so-called decoherence noise, some form of error correction is needed. Therefore, the discovery of quantum error correction (QEC) was a key step to turn the field of quantum information from an academic curiosity into a developing technology. Here, we present an experimental implementation of a QEC code for quantum information encoded in continuous variables, based on entanglement among nine optical beams. This nine-wave-packet adaptation of Shor's original nine-qubit scheme enables, at least in principle, full quantum error correction against an arbitrary single-beam error.

  4. Reflection error correction of gas turbine blade temperature

    NASA Astrophysics Data System (ADS)

    Kipngetich, Ketui Daniel; Feng, Chi; Gao, Shan

    2016-03-01

    Accurate measurement of gas turbine blades' temperature is one of the greatest challenges encountered in gas turbine temperature measurements. Within an enclosed gas turbine environment with surfaces of varying temperature and low emissivities, a new challenge is introduced into the use of radiation thermometers due to the problem of reflection error. A method for correcting this error has been proposed and demonstrated in this work through computer simulation and experiment. The method assumed that the emissivities of all surfaces exchanging thermal radiation are known. Simulations were carried out considering targets with low and high emissivities of 0.3 and 0.8, respectively, while experimental measurements were carried out on blades with an emissivity of 0.76. Simulated results showed the possibility of achieving an error of less than 1%, while the experimental results corrected the error to 1.1%. It was thus concluded that the method is appropriate for correcting the reflection error commonly encountered in temperature measurement of gas turbine blades.
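
    A broadband (Stefan-Boltzmann) sketch of the correction idea is given below; the paper works with band-limited radiance and the actual surface-to-surface exchange inside the engine, so both the function and the example numbers are illustrative assumptions, with the apparent temperature taken as the blackbody brightness temperature reported by the thermometer.

    SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

    def correct_reflection_error(apparent_temp, surround_temp, emissivity):
        """Remove reflected surroundings radiation from a radiation-thermometer reading."""
        measured_radiance = SIGMA * apparent_temp ** 4               # blackbody-equivalent signal
        reflected = (1.0 - emissivity) * SIGMA * surround_temp ** 4  # radiation from hot surroundings
        emitted = measured_radiance - reflected                      # the blade's own emission
        return (emitted / (emissivity * SIGMA)) ** 0.25

    # Example: a blade with emissivity 0.76 surrounded by 1300 K combustor surfaces.
    print(correct_reflection_error(apparent_temp=1150.0, surround_temp=1300.0, emissivity=0.76))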

  5. Algorithmic Error Correction of Impedance Measuring Sensors

    PubMed Central

    Starostenko, Oleg; Alarcon-Aquino, Vicente; Hernandez, Wilmar; Sergiyenko, Oleg; Tyrsa, Vira

    2009-01-01

    This paper describes novel design concepts and some advanced techniques proposed for increasing the accuracy of low cost impedance measuring devices without reduction of operational speed. The proposed structural method for algorithmic error correction and iterating correction method provide linearization of transfer functions of the measuring sensor and signal conditioning converter, which contribute the principal additive and relative measurement errors. Some measuring systems have been implemented in order to estimate in practice the performance of the proposed methods. Particularly, a measuring system for analysis of C-V, G-V characteristics has been designed and constructed. It has been tested during technological process control of charge-coupled device CCD manufacturing. The obtained results are discussed in order to define a reasonable range of applied methods, their utility, and performance. PMID:22303177

  6. Detecting Soft Errors in Stencil based Computations

    SciTech Connect

    Sharma, V.; Gopalkrishnan, G.; Bronevetsky, G.

    2015-05-06

    Given the growing emphasis on system resilience, it is important to develop software-level error detectors that help trap hardware-level faults with reasonable accuracy while minimizing false alarms as well as the performance overhead introduced. We present a technique that approaches this idea by taking stencil computations as our target, and synthesizing detectors based on machine learning. In particular, we employ linear regression to generate computationally inexpensive models which form the basis for error detection. Our technique has been incorporated into a new open-source library called SORREL. In addition to reporting encouraging experimental results, we demonstrate techniques that help reduce the size of training data. We also discuss the efficacy of various detectors synthesized, as well as our future plans.
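
    A minimal sketch of the regression-based detection idea (not SORREL's actual interface or features) is shown below: a linear model learns to predict each interior point of a one-dimensional heat-equation update from its three-point stencil, and points whose residual greatly exceeds the training residual spread are flagged as suspect.

    import numpy as np

    def stencil_features(u_prev):
        """Three-point stencil of the previous step plus a bias column."""
        return np.column_stack([u_prev[:-2], u_prev[1:-1], u_prev[2:], np.ones(u_prev.size - 2)])

    def fit_detector(u_prev, u_next):
        """Fit a linear predictor of the updated interior points (training step)."""
        X, y = stencil_features(u_prev), u_next[1:-1]
        coeff, *_ = np.linalg.lstsq(X, y, rcond=None)
        return coeff, np.std(y - X @ coeff)

    def flag_soft_errors(u_prev, u_next, coeff, sigma, k=6.0, floor=1e-8):
        """Return grid indices whose residual is anomalously large."""
        resid = np.abs(u_next[1:-1] - stencil_features(u_prev) @ coeff)
        return np.flatnonzero(resid > k * sigma + floor) + 1

    # Train on one clean explicit heat-equation step, then inject and detect a corruption.
    n, alpha = 256, 0.2
    u = np.sin(np.linspace(0.0, np.pi, n))
    u_new = u.copy()
    u_new[1:-1] = u[1:-1] + alpha * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    coeff, sigma = fit_detector(u, u_new)

    u_corrupt = u_new.copy()
    u_corrupt[100] += 0.5                        # simulated silent data corruption
    print(flag_soft_errors(u, u_corrupt, coeff, sigma))   # -> [100]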

  7. Continuous quantum error correction through local operations

    SciTech Connect

    Mascarenhas, Eduardo; Franca Santos, Marcelo; Marques, Breno; Terra Cunha, Marcelo

    2010-09-15

    We propose local strategies to protect global quantum information. The protocols, which are quantum error-correcting codes for dissipative systems, are based on environment measurements, direct feedback control, and simple encoding of the logical qubits into physical qutrits whose decaying transitions are indistinguishable and equally probable. The simple addition of one extra level in the description of the subsystems allows for local actions to fully and deterministically protect global resources such as entanglement. We present codes for both quantum jump and quantum state diffusion measurement strategies and test them against several sources of inefficiency. The use of qutrits in information protocols suggests further characterization of qutrit-qutrit disentanglement dynamics, which we also give together with simple local environment measurement schemes able to prevent distillability sudden death and even enhance entanglement in situations in which our feedback error correction is not possible.

  8. Error image aware content restoration

    NASA Astrophysics Data System (ADS)

    Choi, Sungwoo; Lee, Moonsik; Jung, Byunghee

    2015-12-01

    As the resolution of TV has significantly increased, content consumers have become increasingly sensitive to the subtlest defects in TV content. This rising standard in quality demanded by consumers has posed a new challenge in today's context where the tape-based process has transitioned to the file-based process: the transition necessitated digitalizing old archives, a process which inevitably produces errors such as disordered pixel blocks, scattered white noise, or totally missing pixels. Unsurprisingly, detecting and fixing such errors require a substantial amount of time and human labor to meet the standard demanded by today's consumers. In this paper, we introduce a novel, automated error restoration algorithm which can be applied to different types of classic errors by utilizing adjacent images while preserving the undamaged parts of an error image as much as possible. We tested our method on error images detected by our quality check system in the KBS (Korean Broadcasting System) video archive. We are also implementing the algorithm as a plugin for a well-known NLE (non-linear editing system), which is a familiar tool for quality control agents.

  9. Observations of TOPEX/Poseidon Orbit Errors Due to Gravitational and Tidal Modeling Errors Using the Global Positioning System

    NASA Technical Reports Server (NTRS)

    Haines, B.; Christensen, E.; Guinn, J.; Norman, R.; Marshall, J.

    1995-01-01

    Satellite altimetry must measure variations in ocean topography with cm-level accuracy. The TOPEX/Poseidon mission is designed to do this by measuring the radial component of the orbit with an accuracy of 13 cm or better RMS. Recent advances, however, have improved this accuracy by about an order of magnitude.

  10. Error Detection, Factorization and Correction for Multi-View Scene Reconstruction from Aerial Imagery

    SciTech Connect

    Hess-Flores, Mauricio

    2011-11-10

    Scene reconstruction from video sequences has become a prominent computer vision research area in recent years, due to its large number of applications in fields such as security, robotics and virtual reality. Despite recent progress in this field, there are still a number of issues that manifest as incomplete, incorrect or computationally-expensive reconstructions. The engine behind achieving reconstruction is the matching of features between images, where common conditions such as occlusions, lighting changes and texture-less regions can all affect matching accuracy. Subsequent processes that rely on matching accuracy, such as camera parameter estimation, structure computation and non-linear parameter optimization, are also vulnerable to additional sources of error, such as degeneracies and mathematical instability. Detection and correction of errors, along with robustness in parameter solvers, are a must in order to achieve a very accurate final scene reconstruction. However, error detection is in general difficult due to the lack of ground-truth information about the given scene, such as the absolute position of scene points or GPS/IMU coordinates for the camera(s) viewing the scene. In this dissertation, methods are presented for the detection, factorization and correction of error sources present in all stages of a scene reconstruction pipeline from video, in the absence of ground-truth knowledge. Two main applications are discussed. The first set of algorithms derive total structural error measurements after an initial scene structure computation and factorize errors into those related to the underlying feature matching process and those related to camera parameter estimation. A brute-force local correction of inaccurate feature matches is presented, as well as an improved conditioning scheme for non-linear parameter optimization which applies weights on input parameters in proportion to estimated camera parameter errors. Another application is in

  11. Sensitivity of a laser-driven-grating linac to grating errors

    SciTech Connect

    Kroll, N.M.

    1982-04-01

    The effect of grating errors on transverse beam stability is analyzed. We characterize grating errors by random groove displacements and find that transverse displacements due to such errors approach limiting values of the same order as the grating displacements themselves. It therefore appears that transverse stability requirements will not impose unusually stringent precision requirements on the grating structure.

  12. Correcting Errors in Catchment-Scale Satellite Rainfall Accumulation Using Microwave Satellite Soil Moisture Products

    NASA Astrophysics Data System (ADS)

    Ryu, D.; Crow, W. T.

    2011-12-01

    Streamflow forecasting in poorly gauged or ungauged catchments is very difficult, mainly due to the absence of input forcing data for forecasting models. This challenge poses a threat to human safety and industry in areas where proper warning systems are not available. Currently, a number of studies are in progress to calibrate streamflow models without relying on ground observations, as an effort to construct streamflow forecasting systems in ungauged catchments. Also, recent advances in satellite altimetry and innovative applications of optical remote sensing have enabled mapping of streamflow rate and flood extent in remote areas. In addition, remotely sensed hydrological variables such as real-time satellite precipitation data, microwave soil moisture retrievals, and surface thermal infrared observations have great potential to be used as direct inputs or signature information to run forecasting models. In this work, we evaluate a real-time satellite precipitation product, TRMM 3B42RT, and correct errors in the product using microwave satellite soil moisture products over 240 catchments in Australia. The error correction is made by analyzing the difference between the output soil moisture of a simple model forced by the TRMM product and the satellite retrievals of soil moisture. The real-time satellite precipitation products before and after the error correction are compared with the daily gauge-interpolated precipitation data produced by the Australian Bureau of Meteorology. The error correction improves the overall accuracy of the catchment-scale satellite precipitation, especially the root mean squared error (RMSE), correlation, and false alarm ratio (FAR); however, only a marginal improvement is observed in the probability of detection (POD). It is shown that the efficiency of the error correction is affected by the surface vegetation density and the annual precipitation of the catchments.
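
    The correction logic can be caricatured in a few lines of Python. The sketch below uses a one-parameter antecedent-precipitation-index store and a fixed nudging gain, which are assumptions for illustration rather than the model or rescaling actually applied to the TRMM 3B42RT and microwave soil moisture products; the retrieval series is assumed to be pre-rescaled into the store's units.

    import numpy as np

    def correct_rainfall(p_sat, sm_retrieval, loss=0.9, gain=0.3):
        """Nudge satellite rainfall using soil-moisture analysis increments (toy sketch)."""
        api = 0.0                                            # antecedent-precipitation-index store
        p_corr = np.empty(p_sat.size)
        for t in range(p_sat.size):
            api = loss * api + p_sat[t]                      # store forced by satellite rainfall
            increment = gain * (sm_retrieval[t] - api)       # disagreement with the retrieval
            p_corr[t] = max(p_sat[t] + increment, 0.0)       # corrected (non-negative) rainfall
            api += increment                                 # analysis update of the store
        return p_corr

    p_sat = np.array([0.0, 12.0, 0.0, 0.0, 5.0, 0.0, 0.0])
    sm_obs = np.array([1.0, 15.0, 13.0, 11.0, 14.0, 12.0, 10.0])
    print(correct_rainfall(p_sat, sm_obs))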

  13. Disclosing harmful medical errors to patients: tackling three tough cases.

    PubMed

    Gallagher, Thomas H; Bell, Sigall K; Smith, Kelly M; Mello, Michelle M; McDonald, Timothy B

    2009-09-01

    A gap exists between recommendations to disclose errors to patients and current practice. This gap may reflect important, yet unanswered questions about implementing disclosure principles. We explore some of these unanswered questions by presenting three real cases that pose challenging disclosure dilemmas. The first case involves a pancreas transplant that failed due to the pancreas graft being discarded, an error that was not disclosed partly because the family did not ask clarifying questions. Relying on patient or family questions to determine the content of disclosure is problematic. We propose a standard of materiality that can help clinicians to decide what information to disclose. The second case involves a fatal diagnostic error that the patient's widower was unaware had happened. The error was not disclosed out of concern that disclosure would cause the widower more harm than good. This case highlights how institutions can overlook patients' and families' needs following errors and emphasizes that benevolent deception has little role in disclosure. Institutions should consider whether involving neutral third parties could make disclosures more patient centered. The third case presents an intraoperative cardiac arrest due to a large air embolism where uncertainty around the clinical event was high and complicated the disclosure. Uncertainty is common to many medical errors but should not deter open conversations with patients and families about what is and is not known about the event. Continued discussion within the medical profession about applying disclosure principles to real-world cases can help to better meet patients' and families' needs following medical errors. PMID:19736193

  14. Error localization in RHIC by fitting difference orbits

    SciTech Connect

    Liu C.; Minty, M.; Ptitsyn, V.

    2012-05-20

    The presence of realistic errors in an accelerator or in the model used to describe the accelerator is such that a measurement of the beam trajectory may deviate from prediction. Comparison of measurements to model can be used to detect such errors. To do so, the initial conditions (phase space parameters at any point) must be determined, which can be achieved by fitting the difference orbit against the model prediction using only a few beam position measurements. Using these initial conditions, the fitted orbit can be propagated along the beam line based on the optics model. Measurement and model will agree up to the point of an error. The error source can be better localized by additionally fitting the difference orbit using downstream BPMs and back-propagating the solution. If one dominating error source exists in the machine, the fitted orbit will deviate from the difference orbit at the same point.
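
    A toy version of the fit-and-propagate idea is sketched below in Python: a two-dimensional (position, angle) state is carried through an invented lattice of drifts and thin quadrupoles, the initial conditions are fitted from the first few BPM readings, and the point where the fitted orbit departs from the measurement localizes an unmodelled kick. The lattice, kick, and BPM layout are assumptions, not RHIC optics.

    import numpy as np

    def drift(length):
        """2x2 transfer matrix of a field-free drift."""
        return np.array([[1.0, length], [0.0, 1.0]])

    def thin_quad(k):
        """2x2 transfer matrix of a thin-lens quadrupole of integrated strength k."""
        return np.array([[1.0, 0.0], [-k, 1.0]])

    # Invented beam line with a BPM after every element.
    elements = [drift(2.0), thin_quad(0.5), drift(3.0), thin_quad(-0.5),
                drift(2.0), thin_quad(0.5), drift(3.0)]
    maps_to_bpm, M = [], np.eye(2)
    for el in elements:
        M = el @ M
        maps_to_bpm.append(M.copy())

    # Simulated difference orbit with an unmodelled kick after the fourth element.
    state, measured = np.array([1e-3, 0.0]), []
    for i, el in enumerate(elements):
        state = el @ state
        measured.append(state[0])
        if i == 3:
            state = state + np.array([0.0, 2e-4])   # the hidden error source
    measured = np.array(measured)

    # Fit (x0, x0') from the first three BPMs, propagate with the optics model, compare.
    A = np.array([m[0, :] for m in maps_to_bpm[:3]])        # x_bpm = M11*x0 + M12*x0'
    init, *_ = np.linalg.lstsq(A, measured[:3], rcond=None)
    predicted = np.array([m[0, :] @ init for m in maps_to_bpm])
    print(np.abs(measured - predicted))   # deviation appears only downstream of the kick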

  15. First-order approximation error analysis of Risley-prism-based beam directing system.

    PubMed

    Zhao, Yanyan; Yuan, Yan

    2014-12-01

    To improve the performance of a Risley-prism system for optical detection and measuring applications, it is necessary to be able to determine the direction of the outgoing beam with high accuracy. In previous works, error sources and their impact on the performance of the Risley-prism system have been analyzed, but the accuracy of the numerical approximations was not high. Moreover, pointing-error analyses of the Risley-prism system have provided results only for the case in which the component errors, prism orientation errors, and assembly errors are known. In this work, a prototype Risley-prism system was designed. The first-order approximations of the error analysis were derived and compared with the exact results. The directing errors of a Risley-prism system associated with wedge-angle errors, prism mounting errors, and bearing assembly errors were analyzed based on the exact formula and the first-order approximation. The comparisons indicated that our first-order approximation is accurate. In addition, the combined errors produced by the wedge-angle errors and the mounting errors of the two prisms together were derived and, in both cases, were shown to be the sum of the errors caused by the first and second prisms separately. Based on these results, the system error of our prototype was estimated. The derived formulas can be applied to evaluate the beam directing errors of any Risley-prism beam directing system with a similar configuration. PMID:25607958
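
    The following sketch illustrates the kind of exact-versus-first-order comparison described, for the simplest case of a single thin wedge prism at normal incidence; the refractive index and wedge angles are assumed values, and the paper's full two-prism formulas are not reproduced here.

```python
# Exact beam deviation through a single thin wedge prism (normal incidence on
# the first face) versus the first-order approximation delta ~ (n - 1) * alpha.
import numpy as np

n = 1.517                                              # assumed refractive index (BK7-like)
alpha = np.deg2rad(np.array([1.0, 5.0, 10.0, 15.0]))   # assumed wedge angles

# Exact: the ray meets the exit face at the wedge angle; Snell's law gives the
# refracted angle, and the deviation is the difference.
theta_out = np.arcsin(np.clip(n * np.sin(alpha), -1.0, 1.0))
delta_exact = theta_out - alpha

# First-order (small-angle) approximation
delta_approx = (n - 1.0) * alpha

for a, de, da in zip(np.rad2deg(alpha), np.rad2deg(delta_exact), np.rad2deg(delta_approx)):
    print(f"wedge {a:5.1f} deg: exact {de:6.3f} deg, first-order {da:6.3f} deg")
```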

  16. Influence of litho patterning on DSA placement errors

    NASA Astrophysics Data System (ADS)

    Wuister, Sander; Druzhinina, Tamara; Ambesi, Davide; Laenens, Bart; Yi, Linda He; Finders, Jo

    2014-03-01

    Directed self-assembly (DSA) of block copolymers is currently being investigated as a shrinking technique complementary to lithography. One of the critical issues with this technique is that DSA induces placement error. In this paper, the relation between the confinement imposed by lithography and the placement error induced by DSA is studied. Both 193i and EUV pre-patterns are created using a simple algorithm to confine two contact holes formed by DSA on a pitch of 45 nm. Full physical numerical simulations were used to compare the impact of the confinement on DSA-related placement error, pitch variations due to pattern variations, and phase separation defects.

  17. Trans-dimensional inversion of microtremor array dispersion data with hierarchical autoregressive error models

    NASA Astrophysics Data System (ADS)

    Dettmer, Jan; Molnar, Sheri; Steininger, Gavin; Dosso, Stan E.; Cassidy, John F.

    2012-02-01

    This paper applies a general trans-dimensional Bayesian inference methodology and hierarchical autoregressive data-error models to the inversion of microtremor array dispersion data for shear wave velocity (vs) structure. This approach accounts for the limited knowledge of the optimal earth model parametrization (e.g. the number of layers in the vs profile) and of the data-error statistics in the resulting vs parameter uncertainty estimates. The assumed earth model parametrization influences estimates of parameter values and uncertainties due to different parametrizations leading to different ranges of data predictions. The support of the data for a particular model is often non-unique and several parametrizations may be supported. A trans-dimensional formulation accounts for this non-uniqueness by including a model-indexing parameter as an unknown so that groups of models (identified by the indexing parameter) are considered in the results. The earth model is parametrized in terms of a partition model with interfaces given over a depth-range of interest. In this work, the number of interfaces (layers) in the partition model represents the trans-dimensional model indexing. In addition, serial data-error correlations are addressed by augmenting the geophysical forward model with a hierarchical autoregressive error model that can account for a wide range of error processes with a small number of parameters. Hence, the limited knowledge about the true statistical distribution of data errors is also accounted for in the earth model parameter estimates, resulting in more realistic uncertainties and parameter values. Hierarchical autoregressive error models do not rely on point estimates of the model vector to estimate data-error statistics, and have no requirement for computing the inverse or determinant of a data-error covariance matrix. This approach is particularly useful for trans-dimensional inverse problems, as point estimates may not be representative of the

  18. Error Reduction for Weigh-In-Motion

    SciTech Connect

    Hively, Lee M; Abercrombie, Robert K; Scudiere, Matthew B; Sheldon, Frederick T

    2009-01-01

    Federal and State agencies need certifiable vehicle weights for various applications, such as highway inspections, border security, check points, and port entries. ORNL weigh-in-motion (WIM) technology was previously unable to provide certifiable weights, due to natural oscillations, such as vehicle bouncing and rocking. Recent ORNL work demonstrated a novel filter to remove these oscillations. This work shows further filtering improvements to enable certifiable weight measurements (error < 0.1%) for a higher traffic volume with less effort (elimination of redundant weighing).

  19. Error Reduction in Weigh-In-Motion

    2007-09-21

    Federal and State agencies need certifiable vehicle weights for various applications, such as highway inspections, border security, check points, and port entries. ORNL weigh-in-motion (WIM) technology was previously unable to provide certifiable weights, due to natural oscillations, such as vehicle bouncing and rocking. Recent ORNL work demonstrated a novel filter to remove these oscillations. This work shows further filtering improvements to enable certifiable weight measurements (error < 0.1%) for a higher traffic volume with less effort (elimination of redundant weighing).

  20. Measurement error in biomarkers: sources, assessment, and impact on studies.

    PubMed

    White, Emily

    2011-01-01

    Measurement error in a biomarker refers to the error of a biomarker measure applied in a specific way to a specific population, versus the true (etiologic) exposure. In epidemiologic studies, this error includes not only laboratory error, but also errors (variations) introduced during specimen collection and storage, and due to day-to-day, month-to-month, and year-to-year within-subject variability of the biomarker. Validity and reliability studies that aim to assess the degree of biomarker error for use of a specific biomarker in epidemiologic studies must be properly designed to measure all of these sources of error. Validity studies compare the biomarker to be used in an epidemiologic study to a perfect measure in a group of subjects. The parameters used to quantify the error in a binary marker are sensitivity and specificity. For continuous biomarkers, the parameters used are bias (the mean difference between the biomarker and the true exposure) and the validity coefficient (correlation of the biomarker with the true exposure). Often a perfect measure of the exposure is not available, so reliability (repeatability) studies are conducted. These are analysed using kappa for binary biomarkers and the intraclass correlation coefficient for continuous biomarkers. Equations are given which use these parameters from validity or reliability studies to estimate the impact of nondifferential biomarker measurement error on the risk ratio in an epidemiologic study that will use the biomarker. Under nondifferential error, the attenuation of the risk ratio is towards the null and is often quite substantial, even for reasonably accurate biomarker measures. Differential biomarker error between cases and controls can bias the risk ratio in any direction and completely invalidate an epidemiologic study. PMID:22997860
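
    One widely cited textbook relation of the type described, shown here as a hedged sketch rather than the article's exact equations: under nondifferential classical error, the observed risk ratio is roughly the true risk ratio raised to the square of the validity coefficient (or to the reliability coefficient when only a repeatability study is available).

```python
# Rough illustration of attenuation toward the null under nondifferential
# measurement error of a continuous biomarker: observed log RR ~ rho**2 * true log RR,
# where rho is the validity coefficient.  Generic approximation, not the
# article's exact equations.
import math

def attenuated_rr(true_rr, validity_coefficient):
    return math.exp(validity_coefficient ** 2 * math.log(true_rr))

for rho in (1.0, 0.9, 0.7, 0.5):
    print(f"rho = {rho:.1f}: true RR 2.0 observed as approximately {attenuated_rr(2.0, rho):.2f}")
```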

  1. Errors and Their Mitigation at the Kirchhoff-Law-Johnson-Noise Secure Key Exchange

    PubMed Central

    Saez, Yessica; Kish, Laszlo B.

    2013-01-01

    A method to quantify the error probability at the Kirchhoff-law-Johnson-noise (KLJN) secure key exchange is introduced. The types of errors due to statistical inaccuracies in noise voltage measurements are classified and the error probability is calculated. The most interesting finding is that the error probability decays exponentially with the duration of the time window of a single bit exchange. The results indicate that the error probabilities of the exchanged bits can be made so small that error correction algorithms are not required. The results are demonstrated with practical considerations. PMID:24303033
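
    A toy Monte Carlo, not the paper's derivation, showing the qualitative behavior described: if a bit is decided from the mean-square noise voltage over a finite window, the misclassification probability falls off rapidly as the window lengthens. The noise levels, threshold, and sample counts are arbitrary assumptions.

```python
# Toy model: a bit is decided by comparing the measured mean-square noise
# voltage over a finite window against a threshold halfway between the "low"
# and "high" variance levels; the error probability shrinks as the window grows.
import numpy as np

rng = np.random.default_rng(1)
sigma_low, sigma_high = 1.0, 1.5            # assumed noise levels for bit 0 / bit 1
threshold = 0.5 * (sigma_low**2 + sigma_high**2)

def error_probability(n_samples, trials=20000):
    errs = 0
    for _ in range(trials):
        bit = int(rng.integers(0, 2))
        sigma = sigma_high if bit else sigma_low
        v = rng.normal(0.0, sigma, n_samples)
        decided = int(np.mean(v**2) > threshold)
        errs += (decided != bit)
    return errs / trials

for n in (10, 50, 200, 800):
    print(f"{n:4d} samples per bit: error probability ~ {error_probability(n):.4f}")
```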

  2. Refractive Error, Axial Length, and Relative Peripheral Refractive Error before and after the Onset of Myopia

    PubMed Central

    Mutti, Donald O.; Hayes, John R.; Mitchell, G. Lynn; Jones, Lisa A.; Moeschberger, Melvin L.; Cotter, Susan A.; Kleinstein, Robert N.; Manny, Ruth E.; Twelker, J. Daniel; Zadnik, Karla

    2009-01-01

    year after onset, whereas axial length and myopic refractive error continued to elongate and to progress, respectively, although at slower rates compared with the rate at onset. Conclusions A more negative refractive error, longer axial length, and more hyperopic relative peripheral refractive error in addition to faster rates of change in these variables may be useful for predicting the onset of myopia, but only within a span of 2 to 4 years before onset. Becoming myopic does not appear to be characterized by a consistent rate of increase in refractive error and expansion of the globe. Acceleration in myopia progression, axial elongation, and peripheral hyperopia in the year prior to onset followed by relatively slower, more stable rates of change after onset suggests that more than one factor may influence ocular expansion during myopia onset and progression. PMID:17525178

  3. On the Chemical Basis of Trotter-Suzuki Errors in Quantum Chemistry Simulation

    NASA Astrophysics Data System (ADS)

    Babbush, Ryan; McClean, Jarrod; Wecker, Dave; Aspuru-Guzik, Alán; Wiebe, Nathan

    2015-03-01

    Although the simulation of quantum chemistry is one of the most anticipated applications of quantum computing, the scaling of known upper bounds on the complexity of these algorithms is daunting. Prior work has bounded errors due to Trotterization in terms of the norm of the error operator and analyzed scaling with respect to the number of spin-orbitals. However, we find that these error bounds can be loose by up to sixteen orders of magnitude for some molecules. Furthermore, numerical results for small systems fail to reveal any clear correlation between ground state error and number of spin-orbitals. We instead argue that chemical properties, such as the maximum nuclear charge in a molecule and the filling fraction of orbitals, can be decisive for determining the cost of a quantum simulation. Our analysis motivates several strategies to use classical processing to further reduce the required Trotter step size and to estimate the necessary number of steps, without requiring additional quantum resources. Finally, we demonstrate improved methods for state preparation techniques which are asymptotically superior to proposals in the simulation literature.

  4. Chemical basis of Trotter-Suzuki errors in quantum chemistry simulation

    NASA Astrophysics Data System (ADS)

    Babbush, Ryan; McClean, Jarrod; Wecker, Dave; Aspuru-Guzik, Alán; Wiebe, Nathan

    2015-02-01

    Although the simulation of quantum chemistry is one of the most anticipated applications of quantum computing, the scaling of known upper bounds on the complexity of these algorithms is daunting. Prior work has bounded errors due to discretization of the time evolution (known as "Trotterization") in terms of the norm of the error operator and analyzed scaling with respect to the number of spin orbitals. However, we find that these error bounds can be loose by up to 16 orders of magnitude for some molecules. Furthermore, numerical results for small systems fail to reveal any clear correlation between ground-state error and number of spin orbitals. We instead argue that chemical properties, such as the maximum nuclear charge in a molecule and the filling fraction of orbitals, can be decisive for determining the cost of a quantum simulation. Our analysis motivates several strategies to use classical processing to further reduce the required Trotter step size and estimate the necessary number of steps, without requiring additional quantum resources. Finally, we demonstrate improved methods for state preparation techniques which are asymptotically superior to proposals in the simulation literature.

  5. Impacts of double-ended beam-pointing error on system performance

    NASA Astrophysics Data System (ADS)

    Horkin, Phil R.

    2000-05-01

    Optical intersatellite links have been investigated for many years but to date have enjoyed few spaceborne applications. The literature is rich in articles describing system issues such as jitter and pointing effects, but this author believes that the simplifications generally made lead to significant errors. Simplifications made, for example, because of the complexity of joint distribution functions are easily overcome with widely available computer tools. Satellite-based data transport systems must offer Quality of Service (QoS) parameters similar to those of fiber-based transport. The movement to packet-based protocols adds constraints not often considered in past papers. BER may no longer be the dominant concern; packet loss, misdelivery, or severely corrupted packets can easily dominate the error budgets. The aggregation of static and dynamic pointing errors on both ends of such a link dramatically reduces the QoS. The approach described in this paper provides the terminal designer with a methodology to analytically balance the impacts of these error sources against implementation solutions.

  6. Simulation of Radar Rainfall Fields: A Random Error Model

    NASA Astrophysics Data System (ADS)

    Aghakouchak, A.; Habib, E.; Bardossy, A.

    2008-12-01

    Precipitation is a major input to hydrological and meteorological models. It is believed that uncertainties in the input data propagate into modeled hydrologic processes. Stochastically generated rainfall data are used as input to hydrological and meteorological models to assess model uncertainties and climate variability in water resources systems. The superposition of random errors from different sources is one of the main factors in the uncertainty of radar estimates. One way to express these uncertainties is to stochastically generate random error fields and impose them on radar measurements in order to obtain an ensemble of radar rainfall estimates. In the method introduced here, the random error consists of two components: a purely random error and an error dependent on the indicator variable. Parameters of the error model are estimated using a heteroscedastic maximum likelihood approach in order to account for variance heterogeneity in radar rainfall error estimates. When reflectivity values are considered, the exponent and multiplicative factor of the Z-R relationship are estimated simultaneously with the model parameters. The presented model performs better than previous approaches, which generally leave heteroscedasticity in the error fields, and thus in the radar ensemble, unaccounted for.

  7. Errors in imaging the pregnant patient with acute abdomen.

    PubMed

    Casciani, Emanuele; De Vincentiis, Chiara; Mazzei, Maria Antonietta; Masselli, Gabriele; Guerrini, Susanna; Polettini, Elisabetta; Pinto, Antonio; Gualdi, Gianfranco

    2015-10-01

    Pregnant women with an acute abdomen present a critical issue due to the necessity for an immediate diagnosis and treatment; in fact, a diagnostic delay could worsen the outcome for both the mother and the fetus. There is evidence that emergencies during pregnancy are subject to mismanagement; however, the percentage of errors in the diagnosis of emergencies in pregnancy has not been studied in depth. The purpose of this article is to review the most common imaging error emergencies. The topics covered are divided into gynecological and non-gynecological entities and, for each pathology, possible errors have been dealt with in the diagnostic pathway, the possible technical errors in the exam execution, and finally the possible errors in the interpretation of the images. These last two entities are often connected owing to a substandard examination, which can cause errors in the interpretation. Consequently, the systemization of errors reduces the possibility of reoccurrences in the future by providing a valid approach in helping to learn from these errors. PMID:26194813

  8. The Pupillary Orienting Response Predicts Adaptive Behavioral Adjustment after Errors

    PubMed Central

    Murphy, Peter R.; van Moort, Marianne L.; Nieuwenhuis, Sander

    2016-01-01

    Reaction time (RT) is commonly observed to slow down after an error. This post-error slowing (PES) has been thought to arise from the strategic adoption of a more cautious response mode following deployment of cognitive control. Recently, an alternative account has suggested that PES results from interference due to an error-evoked orienting response. We investigated whether error-related orienting may in fact be a pre-cursor to adaptive post-error behavioral adjustment when the orienting response resolves before subsequent trial onset. We measured pupil dilation, a prototypical measure of autonomic orienting, during performance of a choice RT task with long inter-stimulus intervals, and found that the trial-by-trial magnitude of the error-evoked pupil response positively predicted both PES magnitude and the likelihood that the following response would be correct. These combined findings suggest that the magnitude of the error-related orienting response predicts an adaptive change of response strategy following errors, and thereby promote a reconciliation of the orienting and adaptive control accounts of PES. PMID:27010472

  9. Error Detection Processes in Problem Solving.

    ERIC Educational Resources Information Center

    Allwood, Carl Martin

    1984-01-01

    Describes a study which analyzed problem solvers' error detection processes by instructing subjects to think aloud when solving statistical problems. Effects of evaluative episodes on error detection, detection of different error types, error detection processes per se, and relationship of error detection behavior to problem-solving proficiency…

  10. Error-associated behaviors and error rates for robotic geology

    NASA Technical Reports Server (NTRS)

    Anderson, Robert C.; Thomas, Geb; Wagner, Jacob; Glasgow, Justin

    2004-01-01

    This study explores human error as a function of the decision-making process. One of many models for human decision-making is Rasmussen's decision ladder [9]. The decision ladder identifies the multiple tasks and states of knowledge involved in decision-making. The tasks and states of knowledge can be classified by the level of cognitive effort required to make the decision, leading to the skill, rule, and knowledge taxonomy (Rasmussen, 1987). Skill based decisions require the least cognitive effort and knowledge based decisions require the greatest cognitive effort. Errors can occur at any of the cognitive levels.

  11. Application of advanced shearing techniques to the calibration of autocollimators with small angle generators and investigation of error sources.

    PubMed

    Yandayan, T; Geckeler, R D; Aksulu, M; Akgoz, S A; Ozgur, B

    2016-05-01

    The application of advanced error-separating shearing techniques to the precise calibration of autocollimators with Small Angle Generators (SAGs) was carried out for the first time. The experimental realization was achieved using the High Precision Small Angle Generator (HPSAG) of TUBITAK UME under classical dimensional metrology laboratory environmental conditions. The standard uncertainty value of 5 mas (24.2 nrad) reached by classical calibration method was improved to the level of 1.38 mas (6.7 nrad). Shearing techniques, which offer a unique opportunity to separate the errors of devices without recourse to any external standard, were first adapted by Physikalisch-Technische Bundesanstalt (PTB) to the calibration of autocollimators with angle encoders. It has been demonstrated experimentally in a clean room environment using the primary angle standard of PTB (WMT 220). The application of the technique to a different type of angle measurement system extends the range of the shearing technique further and reveals other advantages. For example, the angular scales of the SAGs are based on linear measurement systems (e.g., capacitive nanosensors for the HPSAG). Therefore, SAGs show different systematic errors when compared to angle encoders. In addition to the error-separation of HPSAG and the autocollimator, detailed investigations on error sources were carried out. Apart from determination of the systematic errors of the capacitive sensor used in the HPSAG, it was also demonstrated that the shearing method enables the unique opportunity to characterize other error sources such as errors due to temperature drift in long term measurements. This proves that the shearing technique is a very powerful method for investigating angle measuring systems, for their improvement, and for specifying precautions to be taken during the measurements. PMID:27250375

  12. Statistical Error Analysis for Digital Recursive Filters

    NASA Astrophysics Data System (ADS)

    Wu, Kevin Chi-Rung

    The study of arithmetic roundoff error has attracted many researchers to investigate how the signal-to-noise ratio (SNR) is affected by algorithmic parameters, especially since VLSI (very large scale integration) technologies have become more promising for digital signal processing. Typically, digital signal processing, with or without matrix inversion, involves tradeoffs between speed and processor cost. Hence, the problems of area-time-efficient matrix computation and roundoff error behavior analysis play an important role in this dissertation. A newly developed non-Cholesky square-root matrix is discussed which precludes arithmetic roundoff error over some interesting operations, such as complex-valued matrix inversion, together with its SNR analysis and error propagation effects. A non-CORDIC parallelism approach for complex-valued matrices is presented to increase speed at the cost of a moderate increase in processor resources. The lattice filter is also examined in such a way that one can understand the SNR behavior under different input conditions in the joint process system. A pipelining technique is demonstrated to show the possibility of a high-speed lattice filter without matrix inversion. The floating-point arithmetic models used in this study focus on effective methodologies that have been proved to be reliable and feasible. With the models in hand, we study the roundoff error behavior based on some statistical assumptions. Results are demonstrated by carrying out simulations to show the feasibility of SNR analysis. We observe that the non-Cholesky square-root matrix has the advantage of saving time of O(n^3) as well as a reduced realization cost. It is apparent that for a Kalman filter the register size increases significantly if a pole of the system matrix moves closer to the edge of the unit circle. By comparing roundoff error effects due to floating-point and fixed-point arithmetics, we

  13. Multicenter Assessment of Gram Stain Error Rates.

    PubMed

    Samuel, Linoj P; Balada-Llasat, Joan-Miquel; Harrington, Amanda; Cavagnolo, Robert

    2016-06-01

    Gram stains remain the cornerstone of diagnostic testing in the microbiology laboratory for the guidance of empirical treatment prior to availability of culture results. Incorrectly interpreted Gram stains may adversely impact patient care, and yet there are no comprehensive studies that have evaluated the reliability of the technique and there are no established standards for performance. In this study, clinical microbiology laboratories at four major tertiary medical care centers evaluated Gram stain error rates across all nonblood specimen types by using standardized criteria. The study focused on several factors that primarily contribute to errors in the process, including poor specimen quality, smear preparation, and interpretation of the smears. The number of specimens during the evaluation period ranged from 976 to 1,864 specimens per site, and there were a total of 6,115 specimens. Gram stain results were discrepant from culture for 5% of all specimens. Fifty-eight percent of discrepant results were specimens with no organisms reported on Gram stain but significant growth on culture, while 42% of discrepant results had reported organisms on Gram stain that were not recovered in culture. Upon review of available slides, 24% (63/263) of discrepant results were due to reader error, which varied significantly based on site (9% to 45%). The Gram stain error rate also varied between sites, ranging from 0.4% to 2.7%. The data demonstrate a significant variability between laboratories in Gram stain performance and affirm the need for ongoing quality assessment by laboratories. Standardized monitoring of Gram stains is an essential quality control tool for laboratories and is necessary for the establishment of a quality benchmark across laboratories. PMID:26888900

  14. [Errors in surgery. Strategies to improve surgical safety].

    PubMed

    Arenas-Márquez, Humberto; Anaya-Prado, Roberto

    2008-01-01

    Surgery is an extreme experience for both patient and surgeon. The patient has to be rescued from something so serious that it may justify the surgeon violating his/her integrity in order to resolve the problem. Nevertheless, both physician and patient recognize that the procedure has some risks. Medical errors are the 8th leading cause of death in the U.S., and malpractice can be documented in >50% of the legal prosecutions in Mexico. Of special interest is the specialty of general surgery, where legal responsibility can be confirmed in >80% of the cases. Interest in mortality attributed to medical errors has existed since the 19th century, clearly identifying the lack of knowledge, abilities, and poor surgical and diagnostic judgment as the cause of errors. Currently, poor organization, lack of teamwork, and physician/patient-related factors are recognized as the causes of medical errors. Human error is unavoidable, and health care systems and surgeons should adopt a culture of error analysis openly, inquisitively, and permanently. Errors should be regarded as an opportunity to learn that health care should be patient centered and not surgeon centered. In this review, we analyze the causes of complications and errors that can develop during routine surgery. Additionally, we propose measures that will allow improvements in the safety of surgical patients. PMID:18778549

  15. Laser tracker error determination using a network measurement

    NASA Astrophysics Data System (ADS)

    Hughes, Ben; Forbes, Alistair; Lewis, Andrew; Sun, Wenjuan; Veal, Dan; Nasr, Karim

    2011-04-01

    We report on a fast, easily implemented method to determine all the geometrical alignment errors of a laser tracker, to high precision. The technique requires no specialist equipment and can be performed in less than an hour. The technique is based on the determination of parameters of a geometric model of the laser tracker, using measurements of a set of fixed target locations, from multiple locations of the tracker. After fitting of the model parameters to the observed data, the model can be used to perform error correction of the raw laser tracker data or to derive correction parameters in the format of the tracker manufacturer's internal error map. In addition to determination of the model parameters, the method also determines the uncertainties and correlations associated with the parameters. We have tested the technique on a commercial laser tracker in the following way. We disabled the tracker's internal error compensation, and used a five-position, fifteen-target network to estimate all the geometric errors of the instrument. Using the error map generated from this network test, the tracker was able to pass a full performance validation test, conducted according to a recognized specification standard (ASME B89.4.19-2006). We conclude that the error correction determined from the network test is as effective as the manufacturer's own error correction methodologies.

  16. Coded modulation with unequal error protection

    NASA Astrophysics Data System (ADS)

    Wei, Lee-Fang

    1993-10-01

    It is always desirable to maintain communications in difficult situations, even though fewer messages can get across. This paper provides such capabilities for one-way broadcast media, such as the envisioned terrestrial broadcasting of digital high-definition television signals. In this television broadcasting, the data from video source encoders are not equally important. It is desirable that the important data be recovered by each receiver even under poor receiving conditions. Two approaches for providing such unequal error protection to different classes of data are presented. Power-efficient and bandwidth-efficient coded modulation is used in both approaches. The first approach is based on novel signal constellations with nonuniformly spaced signal points. The second uses time division multiplexing of different conventional coded modulation schemes. Both approaches can provide error protection for the important data to an extent that can hardly be achieved using conventional coded modulation with equal error protection. For modest amounts of important data, the first approach has, additionally, the potential of providing immunity from impulse noise through simple bit or signal-point interleaving.

  17. Photocephalometry: errors of projection and landmark location.

    PubMed

    Phillips, C; Greer, J; Vig, P; Matteson, S

    1984-09-01

    A method called photocephalometry was recently described for the possible soft-tissue evaluation of orthognathic surgery patients by the superimposition of coordinated cephalographs and photographs. A grid analysis was performed to determine the accuracy of the superimposition method. In addition, the reliability of landmark identification was analyzed by the method error of Baumrind and Frantz, using three replicates of twelve patients' photographs. Comparison of twenty-one grid intervals showed that the magnification of the photographic image for any given grid plane is not correlated to that of the radiographic image. Accurate comparisons between soft- and hard-tissue anatomy by simply superimposing the images are not feasible because of the difference in the enlargement factors between the photographs and x-ray films. As was noted by Baumrind and Frantz, a wide range exists in the variability of estimating the location of landmarks. Sixty-six percent of the lateral photographic landmarks and 57% of the frontal landmarks had absolute mean errors for all twelve patients that were less than or equal to 2.0 mm. In general, the envelope of error for most landmarks was not circular. Although the photocephalometric apparatus as described by Hohl and colleagues does not yield the desired quantitative correlation between hard and soft tissues, valuable quantitative information on soft tissue can be easily obtained with the standardization and replication possible with the camera setup and enlarged photographs. PMID:6591803

  18. Quantum computations: algorithms and error correction

    NASA Astrophysics Data System (ADS)

    Kitaev, A. Yu

    1997-12-01

    Contents: §0. Introduction. §1. Abelian problem on the stabilizer. §2. Classical models of computations: 2.1. Boolean schemes and sequences of operations; 2.2. Reversible computations. §3. Quantum formalism: 3.1. Basic notions and notation; 3.2. Transformations of mixed states; 3.3. Accuracy. §4. Quantum models of computations: 4.1. Definitions and basic properties; 4.2. Construction of various operators from the elements of a basis; 4.3. Generalized quantum control and universal schemes. §5. Measurement operators. §6. Polynomial quantum algorithm for the stabilizer problem. §7. Computations with perturbations: the choice of a model. §8. Quantum codes (definitions and general properties): 8.1. Basic notions and ideas; 8.2. One-to-one codes; 8.3. Many-to-one codes. §9. Symplectic (additive) codes: 9.1. Algebraic preparation; 9.2. The basic construction; 9.3. Error correction procedure; 9.4. Torus codes. §10. Error correction in the computation process: general principles: 10.1. Definitions and results; 10.2. Proofs. §11. Error correction: concrete procedures: 11.1. The symplecto-classical case; 11.2. The case of a complete basis. Bibliography.

  19. Underlying Cause(s) of Letter Perseveration Errors

    PubMed Central

    Fischer-Baum, Simon; Rapp, Brenda

    2011-01-01

    Perseverations, the inappropriate intrusion of elements from a previous response into a current response, are commonly observed in individuals with acquired deficits. This study specifically investigates the contribution of failure-to-activate and failure-to-inhibit deficit(s) in the generation of letter perseveration errors in acquired dysgraphia. We provide evidence from the performance of 12 dysgraphic individuals indicating that a failure to activate graphemes for a target word gives rise to letter perseveration errors. In addition, we provide evidence that, in some individuals, a failure-to-inhibit deficit may also contribute to the production of perseveration errors. PMID:22178232

  20. Shuttle orbit IMU alignment. Single-precision computation error

    NASA Technical Reports Server (NTRS)

    Mcclain, C. R.

    1980-01-01

    The source of computational error in the inertial measurement unit (IMU) on-orbit alignment software was investigated. Simulation runs were made on the IBM 360/70 computer with the IMU orbit alignment software coded in HAL/S. The results indicate that for small IMU misalignment angles (less than 600 arc seconds), single-precision computations in combination with the arc cosine method of eigenrotation angle extraction introduce an additional misalignment error of up to 230 arc seconds per axis. Use of the arc sine method, however, produced negligible misalignment error. As a result of this study, the arc sine method was recommended for use in the IMU on-orbit alignment software.
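
    The precision loss can be reproduced with a small numerical experiment; the sketch below is an assumed setup (not the Shuttle HAL/S code) that extracts a small rotation angle from a single-precision rotation matrix using both the arc cosine of the trace and an arc sine of the skew-symmetric part.

```python
# For a small rotation, cos(theta) is extremely close to 1, so the arc-cosine
# of the trace loses accuracy in single precision, while an arc-sine extraction
# from the skew-symmetric part keeps nearly full precision.
import numpy as np

ARCSEC = np.pi / (180.0 * 3600.0)
theta = 300.0 * ARCSEC                       # assumed small misalignment angle
axis = np.array([1.0, 2.0, 2.0]) / 3.0       # arbitrary unit rotation axis

# Rodrigues' formula for the rotation matrix, then cast to single precision
K = np.array([[0.0, -axis[2], axis[1]],
              [axis[2], 0.0, -axis[0]],
              [-axis[1], axis[0], 0.0]])
R32 = (np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)).astype(np.float32)

# Arc-cosine method: theta = acos((trace(R) - 1) / 2)
theta_acos = np.arccos((np.trace(R32) - np.float32(1)) / np.float32(2))

# Arc-sine method: sin(theta) = |skew-symmetric part| / 2
skew = np.array([R32[2, 1] - R32[1, 2], R32[0, 2] - R32[2, 0], R32[1, 0] - R32[0, 1]])
theta_asin = np.arcsin(np.linalg.norm(skew) / 2.0)

print(f"arc cosine method error: {abs(theta_acos - theta) / ARCSEC:10.4f} arcsec")
print(f"arc sine  method error: {abs(theta_asin - theta) / ARCSEC:10.4f} arcsec")
```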

  1. Spacecraft and propulsion technician error

    NASA Astrophysics Data System (ADS)

    Schultz, Daniel Clyde

    Commercial aviation and commercial space similarly launch, fly, and land passenger vehicles. Unlike aviation, the U.S. government has not established maintenance policies for commercial space. This study conducted a mixed methods review of 610 U.S. space launches from 1984 through 2011, which included 31 failures. An analysis of the failure causal factors showed that human error accounted for 76% of those failures, which included workmanship error accounting for 29% of the failures. With the imminent future of commercial space travel, the increased potential for the loss of human life demands that changes be made to the standardized procedures, training, and certification to reduce human error and failure rates. Several recommendations were made by this study to the FAA's Office of Commercial Space Transportation, space launch vehicle operators, and maintenance technician schools in an effort to increase the safety of the space transportation passengers.

  2. Error Field Correction in ITER

    SciTech Connect

    Park, Jong-kyu; Boozer, Allen H.; Menard, Jonathan E.; Schaffer, Michael J.

    2008-05-22

    A new method for correcting magnetic field errors in the ITER tokamak is developed using the Ideal Perturbed Equilibrium Code (IPEC). The dominant external magnetic field for driving islands is shown to be localized to the outboard midplane for three ITER equilibria that represent the projected range of operational scenarios. The coupling matrices between the poloidal harmonics of the external magnetic perturbations and the resonant fields on the rational surfaces that drive islands are combined for different equilibria and used to determine an ordered list of the dominant errors in the external magnetic field. It is found that efficient and robust error field correction is possible with a fixed setting of the correction currents relative to the currents in the main coils across the range of ITER operating scenarios that was considered.

  3. Synthetic aperture interferometry: error analysis

    SciTech Connect

    Biswas, Amiya; Coupland, Jeremy

    2010-07-10

    Synthetic aperture interferometry (SAI) is a novel way of testing aspherics and has a potential for in-process measurement of aspherics [Appl. Opt. 42, 701 (2003); doi:10.1364/AO.42.000701]. A method to measure steep aspherics using the SAI technique has been previously reported [Appl. Opt. 47, 1705 (2008); doi:10.1364/AO.47.001705]. Here we investigate the computation of surface form using the SAI technique in different configurations and discuss the computational errors. A two-pass measurement strategy is proposed to reduce the computational errors, and a detailed investigation is carried out to determine the effect of alignment errors on the measurement process.

  4. A Simple Approximation for the Symbol Error Rate of Triangular Quadrature Amplitude Modulation

    NASA Astrophysics Data System (ADS)

    Duy, Tran Trung; Kong, Hyung Yun

    In this paper, we consider the error performance of the regular triangular quadrature amplitude modulation (TQAM). In particular, using an accurate exponential bound of the complementary error function, we derive a simple approximation for the average symbol error rate (SER) of TQAM over Additive White Gaussian Noise (AWGN) and fading channels. The accuracy of our approach is verified by some simulation results.
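
    For context, a commonly used two-term exponential approximation of the Gaussian Q-function (and hence of the complementary error function) is compared with the exact value below; this is only an example of the kind of bound referred to, not the authors' expression or the TQAM SER formula itself.

```python
# Chiani-style two-term exponential approximation of the Gaussian Q-function,
# compared with the exact value from scipy; shown only to illustrate the type
# of exponential bound of erfc used in SER approximations.
import numpy as np
from scipy.special import erfc

def q_exact(x):
    return 0.5 * erfc(x / np.sqrt(2.0))

def q_approx(x):
    return (1.0 / 12.0) * np.exp(-x**2 / 2.0) + 0.25 * np.exp(-2.0 * x**2 / 3.0)

for x in (1.0, 2.0, 3.0, 4.0):
    print(f"x = {x:.1f}: Q exact {q_exact(x):.3e}, exponential approximation {q_approx(x):.3e}")
```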

  5. Reward positivity: Reward prediction error or salience prediction error?

    PubMed

    Heydari, Sepideh; Holroyd, Clay B

    2016-08-01

    The reward positivity is a component of the human ERP elicited by feedback stimuli in trial-and-error learning and guessing tasks. A prominent theory holds that the reward positivity reflects a reward prediction error signal that is sensitive to outcome valence, being larger for unexpected positive events relative to unexpected negative events (Holroyd & Coles, 2002). Although the theory has found substantial empirical support, most of these studies have utilized either monetary or performance feedback to test the hypothesis. However, in apparent contradiction to the theory, a recent study found that unexpected physical punishments also elicit the reward positivity (Talmi, Atkinson, & El-Deredy, 2013). The authors of this report argued that the reward positivity reflects a salience prediction error rather than a reward prediction error. To investigate this finding further, in the present study participants navigated a virtual T maze and received feedback on each trial under two conditions. In a reward condition, the feedback indicated that they would either receive a monetary reward or not and in a punishment condition the feedback indicated that they would receive a small shock or not. We found that the feedback stimuli elicited a typical reward positivity in the reward condition and an apparently delayed reward positivity in the punishment condition. Importantly, this signal was more positive to the stimuli that predicted the omission of a possible punishment relative to stimuli that predicted a forthcoming punishment, which is inconsistent with the salience hypothesis. PMID:27184070

  6. 20 Tips to Help Prevent Medical Errors

    MedlinePlus

    20 Tips to Help Prevent Medical Errors: Patient Fact Sheet. Medical errors can occur anywhere in the health care ...

  7. Ligation errors in DNA computing.

    PubMed

    Aoi, Y; Yoshinobu, T; Tanizawa, K; Kinoshita, K; Iwasaki, H

    1999-10-01

    DNA computing is a novel method of computing proposed by Adleman (1994), in which the data is encoded in the sequences of oligonucleotides. Massively parallel reactions between oligonucleotides are expected to make it possible to solve huge problems. In this study, reliability of the ligation process employed in the DNA computing is tested by estimating the error rate at which wrong oligonucleotides are ligated. Ligation of wrong oligonucleotides would result in a wrong answer in the DNA computing. The dependence of the error rate on the number of mismatches between oligonucleotides and on the combination of bases is investigated. PMID:10636043

  8. Reduced error signalling in medication-naive children with ADHD: associations with behavioural variability and post-error adaptations

    PubMed Central

    Plessen, Kerstin J.; Allen, Elena A.; Eichele, Heike; van Wageningen, Heidi; Høvik, Marie Farstad; Sørensen, Lin; Worren, Marius Kalsås; Hugdahl, Kenneth; Eichele, Tom

    2016-01-01

    Background We examined the blood-oxygen level–dependent (BOLD) activation in brain regions that signal errors and their association with intraindividual behavioural variability and adaptation to errors in children with attention-deficit/hyperactivity disorder (ADHD). Methods We acquired functional MRI data during a Flanker task in medication-naive children with ADHD and healthy controls aged 8–12 years and analyzed the data using independent component analysis. For components corresponding to performance monitoring networks, we compared activations across groups and conditions and correlated them with reaction times (RT). Additionally, we analyzed post-error adaptations in behaviour and motor component activations. Results We included 25 children with ADHD and 29 controls in our analysis. Children with ADHD displayed reduced activation to errors in cingulo-opercular regions and higher RT variability, but no differences of interference control. Larger BOLD amplitude to error trials significantly predicted reduced RT variability across all participants. Neither group showed evidence of post-error response slowing; however, post-error adaptation in motor networks was significantly reduced in children with ADHD. This adaptation was inversely related to activation of the right-lateralized ventral attention network (VAN) on error trials and to task-driven connectivity between the cingulo-opercular system and the VAN. Limitations Our study was limited by the modest sample size and imperfect matching across groups. Conclusion Our findings show a deficit in cingulo-opercular activation in children with ADHD that could relate to reduced signalling for errors. Moreover, the reduced orienting of the VAN signal may mediate deficient post-error motor adaptions. Pinpointing general performance monitoring problems to specific brain regions and operations in error processing may help to guide the targets of future treatments for ADHD. PMID:26441332

  9. Automatic-repeat-request error control schemes

    NASA Technical Reports Server (NTRS)

    Lin, S.; Costello, D. J., Jr.; Miller, M. J.

    1983-01-01

    Error detection incorporated with automatic-repeat-request (ARQ) is widely used for error control in data communication systems. This method of error control is simple and provides high system reliability. If a properly chosen code is used for error detection, virtually error-free data transmission can be attained. Various types of ARQ and hybrid ARQ schemes, and error detection using linear block codes are surveyed.

  10. Evaluating the Effect of Global Positioning System (GPS) Satellite Clock Error via GPS Simulation

    NASA Astrophysics Data System (ADS)

    Sathyamoorthy, Dinesh; Shafii, Shalini; Amin, Zainal Fitry M.; Jusoh, Asmariah; Zainun Ali, Siti

    2016-06-01

    This study is aimed at evaluating the effect of Global Positioning System (GPS) satellite clock error using GPS simulation. Two test conditions are used. Case 1: all the GPS satellites have clock errors within the normal range of 0 to 7 ns, corresponding to a pseudorange error range of 0 to 2.1 m. Case 2: one GPS satellite suffers from critical failure, resulting in a clock error corresponding to a pseudorange error of up to 1 km. It is found that an increase of GPS satellite clock error causes an increase of average positional error due to the increase of pseudorange error in the GPS satellite signals, which results in increasing error in the coordinates computed by the GPS receiver. Varying average positional error patterns are observed for each of the readings. This is because the GPS satellite constellation is dynamic, causing varying GPS satellite geometry over location and time, resulting in GPS accuracy being location/time dependent. For Case 1, in general, the highest average positional error values are observed for readings with the highest PDOP values, while the lowest average positional error values are observed for readings with the lowest PDOP values. For Case 2, no correlation is observed between the average positional error values and PDOP, indicating that the error generated is random.
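
    The pseudorange figures quoted above follow directly from multiplying the clock error by the speed of light, as in this back-of-envelope check (the 3336 ns case is an assumed value chosen to correspond to roughly 1 km):

```python
# Back-of-envelope check: a satellite clock error dt maps to a pseudorange
# error of roughly c * dt.
C = 299_792_458.0                     # speed of light, m/s

for dt_ns in (1, 7, 3336):            # 7 ns ~ normal upper bound; ~3.3 us ~ 1 km
    print(f"clock error {dt_ns:5d} ns  ->  pseudorange error {C * dt_ns * 1e-9:9.2f} m")
```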

  11. Intensive Treatment with Ultrasound Visual Feedback for Speech Sound Errors in Childhood Apraxia.

    PubMed

    Preston, Jonathan L; Leece, Megan C; Maas, Edwin

    2016-01-01

    Ultrasound imaging is an adjunct to traditional speech therapy that has shown to be beneficial in the remediation of speech sound errors. Ultrasound biofeedback can be utilized during therapy to provide clients with additional knowledge about their tongue shapes when attempting to produce sounds that are erroneous. The additional feedback may assist children with childhood apraxia of speech (CAS) in stabilizing motor patterns, thereby facilitating more consistent and accurate productions of sounds and syllables. However, due to its specialized nature, ultrasound visual feedback is a technology that is not widely available to clients. Short-term intensive treatment programs are one option that can be utilized to expand access to ultrasound biofeedback. Schema-based motor learning theory suggests that short-term intensive treatment programs (massed practice) may assist children in acquiring more accurate motor patterns. In this case series, three participants ages 10-14 years diagnosed with CAS attended 16 h of speech therapy over a 2-week period to address residual speech sound errors. Two participants had distortions on rhotic sounds, while the third participant demonstrated lateralization of sibilant sounds. During therapy, cues were provided to assist participants in obtaining a tongue shape that facilitated a correct production of the erred sound. Additional practice without ultrasound was also included. Results suggested that all participants showed signs of acquisition of sounds in error. Generalization and retention results were mixed. One participant showed generalization and retention of sounds that were treated; one showed generalization but limited retention; and the third showed no evidence of generalization or retention. Individual characteristics that may facilitate generalization are discussed. Short-term intensive treatment programs using ultrasound biofeedback may result in the acquisition of more accurate motor patterns and improved articulation of

  12. Intensive Treatment with Ultrasound Visual Feedback for Speech Sound Errors in Childhood Apraxia

    PubMed Central

    Preston, Jonathan L.; Leece, Megan C.; Maas, Edwin

    2016-01-01

    Ultrasound imaging is an adjunct to traditional speech therapy that has shown to be beneficial in the remediation of speech sound errors. Ultrasound biofeedback can be utilized during therapy to provide clients with additional knowledge about their tongue shapes when attempting to produce sounds that are erroneous. The additional feedback may assist children with childhood apraxia of speech (CAS) in stabilizing motor patterns, thereby facilitating more consistent and accurate productions of sounds and syllables. However, due to its specialized nature, ultrasound visual feedback is a technology that is not widely available to clients. Short-term intensive treatment programs are one option that can be utilized to expand access to ultrasound biofeedback. Schema-based motor learning theory suggests that short-term intensive treatment programs (massed practice) may assist children in acquiring more accurate motor patterns. In this case series, three participants ages 10–14 years diagnosed with CAS attended 16 h of speech therapy over a 2-week period to address residual speech sound errors. Two participants had distortions on rhotic sounds, while the third participant demonstrated lateralization of sibilant sounds. During therapy, cues were provided to assist participants in obtaining a tongue shape that facilitated a correct production of the erred sound. Additional practice without ultrasound was also included. Results suggested that all participants showed signs of acquisition of sounds in error. Generalization and retention results were mixed. One participant showed generalization and retention of sounds that were treated; one showed generalization but limited retention; and the third showed no evidence of generalization or retention. Individual characteristics that may facilitate generalization are discussed. Short-term intensive treatment programs using ultrasound biofeedback may result in the acquisition of more accurate motor patterns and improved articulation

  13. Does naming accuracy improve through self-monitoring of errors?

    PubMed

    Schwartz, Myrna F; Middleton, Erica L; Brecher, Adelyn; Gagliardi, Maureen; Garvey, Kelly

    2016-04-01

    This study examined spontaneous self-monitoring of picture naming in people with aphasia. Of primary interest was whether spontaneous detection or repair of an error constitutes an error signal or other feedback that tunes the production system to the desired outcome. In other words, do acts of monitoring cause adaptive change in the language system? A second possibility, not incompatible with the first, is that monitoring is indicative of an item's representational strength, and strength is a causal factor in language change. Twelve PWA performed a 615-item naming test twice, in separate sessions, without extrinsic feedback. At each timepoint, we scored the first complete response for accuracy and error type and the remainder of the trial for verbalizations consistent with detection (e.g., "no, not that") and successful repair (i.e., correction). Data analysis centered on: (a) how often an item that was misnamed at one timepoint changed to correct at the other timepoint, as a function of monitoring; and (b) how monitoring impacted change scores in the Forward (Time 1 to Time 2) compared to Backward (Time 2 to Time 1) direction. The Strength hypothesis predicts significant effects of monitoring in both directions. The Learning hypothesis predicts greater effects in the Forward direction. These predictions were evaluated for three types of errors--Semantic errors, Phonological errors, and Fragments--using mixed-effects regression modeling with crossed random effects. Support for the Strength hypothesis was found for all three error types. Support for the Learning hypothesis was found for Semantic errors. All effects were due to error repair, not error detection. We discuss the theoretical and clinical implications of these novel findings. PMID:26863091

  14. Which forcing data errors matter most when modeling seasonal snowpacks?

    NASA Astrophysics Data System (ADS)

    Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.

    2014-12-01

    High quality forcing data are critical when modeling seasonal snowpacks and snowmelt, but their quality is often compromised due to measurement errors or deficiencies in gridded data products (e.g., spatio-temporal interpolation, empirical parameterizations, or numerical weather model outputs). To assess the relative impact of errors in different meteorological forcings, many studies have conducted sensitivity analyses where errors (e.g., bias) are imposed on one forcing at a time and changes in model output are compared. Although straightforward, this approach only considers simplistic error structures and cannot quantify interactions in different meteorological forcing errors (i.e., it assumes a linear system). Here we employ the Sobol' method of global sensitivity analysis, which allows us to test how co-existing errors in six meteorological forcings (i.e., air temperature, precipitation, wind speed, humidity, incoming shortwave and longwave radiation) impact specific modeled snow variables (i.e., peak snow water equivalent, snowmelt rates, and snow disappearance timing). Using the Sobol' framework across a large number of realizations (>100000 simulations annually at each site), we test how (1) the type (e.g., bias vs. random errors), (2) distribution (e.g., uniform vs. normal), and (3) magnitude (e.g., instrument uncertainty vs. field uncertainty) of forcing errors impact key outputs from a physically based snow model (the Utah Energy Balance). We also assess the role of climate by conducting the analysis at sites in maritime, intermountain, continental, and tundra snow zones. For all outputs considered, results show that (1) biases in forcing data are more important than random errors, (2) the choice of error distribution can enhance the importance of specific forcings, and (3) the level of uncertainty considered dictates the relative importance of forcings. While the relative importance of forcings varied with snow variable and climate, the results broadly
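
    A minimal sketch of a Sobol' analysis of co-existing forcing errors, assuming the SALib package and a made-up surrogate response in place of the Utah Energy Balance model; the forcing names, bounds, and response are illustrative only.

```python
# Minimal Sobol' sensitivity sketch using SALib (toy surrogate, not the snow
# model): six forcing perturbations feed an assumed response so that
# first-order and total-order indices can be compared.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 6,
    "names": ["T_air", "precip", "wind", "humidity", "SW_down", "LW_down"],
    "bounds": [[-3, 3], [-0.5, 0.5], [-2, 2], [-0.2, 0.2], [-100, 100], [-50, 50]],
}

X = saltelli.sample(problem, 1024)

# Toy response standing in for a peak-SWE metric: precipitation and air
# temperature dominate, with a small interaction term.
Y = 2.0 * X[:, 1] - 0.8 * X[:, 0] + 0.01 * X[:, 4] + 0.5 * X[:, 0] * X[:, 1]

Si = sobol.analyze(problem, Y)
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name:9s} first-order {s1:6.3f}  total {st:6.3f}")
```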

  15. Sensitivity Studies of the Radar-Rainfall Error Models

    NASA Astrophysics Data System (ADS)

    Villarini, G.; Krajewski, W. F.; Ciach, G. J.

    2007-12-01

    It is well acknowledged that there are large uncertainties associated with the operational quantitative precipitation estimates produced by the U.S. national network of WSR-88D radars. These errors are due to the measurement principles, parameter estimation, and not fully understood physical processes. Comprehensive quantitative evaluation of these uncertainties is still at an early stage. The authors proposed an empirically-based model in which the relation between true rainfall (RA) and radar-rainfall (RR) could be described as the product of a deterministic distortion function and a random component. However, how different values of the parameters in the radar-rainfall algorithms used to create these products impact the model results still remains an open question. In this study, the authors investigate the effects of different exponents in the Z-R relation (Marshall- Palmer, NEXRAD, and tropical) and of an anomalous propagation (AP) removal algorithm. Additionally, they generalize the model to describe the radar-rainfall uncertainties in the additive form. This approach is fully empirically based and rain gauge estimates are considered as an approximation of the true rainfall. The proposed results are based on a large sample (six years) of data from the Oklahoma City radar (KTLX) and processed through the Hydro-NEXRAD software system. The radar data are complemented with the corresponding rain gauge observations from the Oklahoma Mesonet, and the Agricultural Research Service Micronet.
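
    The two decompositions mentioned above can be sketched as follows, with an assumed power-law distortion function and arbitrary error parameters standing in for the fitted model; the example is illustrative, not the authors' calibration.

```python
# Sketch of the two error decompositions: radar-rainfall RR expressed either as
#   multiplicative:  RR = h(RA) * eps,  with eps log-normally distributed
#   additive:        RR = h(RA) + eta,  with eta normally distributed
import numpy as np

rng = np.random.default_rng(42)
true_rain = rng.gamma(shape=2.0, scale=5.0, size=1000)     # stand-in for "true" rainfall RA

def distortion(ra, a=0.9, b=1.05):
    """Assumed deterministic distortion function h(RA) = a * RA**b."""
    return a * ra ** b

# Multiplicative random component with unit median
rr_mult = distortion(true_rain) * rng.lognormal(mean=0.0, sigma=0.3, size=true_rain.size)

# Additive random component with zero mean
rr_add = distortion(true_rain) + rng.normal(loc=0.0, scale=2.0, size=true_rain.size)

# One practical difference: the additive form can produce unphysical negatives.
print("multiplicative model: negative values =", int(np.sum(rr_mult < 0)))
print("additive model:       negative values =", int(np.sum(rr_add < 0)))
```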

  16. Having Fun with Error Analysis

    ERIC Educational Resources Information Center

    Siegel, Peter

    2007-01-01

    We present a fun activity that can be used to introduce students to error analysis: the M&M game. Students are told to estimate the number of individual candies plus uncertainty in a bag of M&M's. The winner is the group whose estimate brackets the actual number with the smallest uncertainty. The exercise produces enthusiastic discussions and…

  17. Multichannel error correction code decoder

    NASA Technical Reports Server (NTRS)

    Wagner, Paul K.; Ivancic, William D.

    1993-01-01

    A brief overview of a processing satellite for a mesh very-small-aperture (VSAT) communications network is provided. The multichannel error correction code (ECC) decoder system, the uplink signal generation and link simulation equipment, and the time-shared decoder are described. The testing is discussed. Applications of the time-shared decoder are recommended.

  18. Reduced discretization error in HZETRN

    SciTech Connect

    Slaba, Tony C.; Blattnig, Steve R.; Tweed, John

    2013-02-01

    The deterministic particle transport code HZETRN is an efficient analysis tool for studying the effects of space radiation on humans, electronics, and shielding materials. In a previous work, numerical methods in the code were reviewed, and new methods were developed that further improved efficiency and reduced overall discretization error. It was also shown that the remaining discretization error could be attributed to low energy light ions (A < 4) with residual ranges smaller than the physical step-size taken by the code. Accurately resolving the spectrum of low energy light particles is important in assessing risk associated with astronaut radiation exposure. In this work, modifications to the light particle transport formalism are presented that accurately resolve the spectrum of low energy light ion target fragments. The modified formalism is shown to significantly reduce overall discretization error and allows a physical approximation to be removed. For typical step-sizes and energy grids used in HZETRN, discretization errors for the revised light particle transport algorithms are shown to be less than 4% for aluminum and water shielding thicknesses as large as 100 g/cm² exposed to both solar particle event and galactic cosmic ray environments.

  19. Theory of Test Translation Error

    ERIC Educational Resources Information Center

    Solano-Flores, Guillermo; Backhoff, Eduardo; Contreras-Nino, Luis Angel

    2009-01-01

    In this article, we present a theory of test translation whose intent is to provide the conceptual foundation for effective, systematic work in the process of test translation and test translation review. According to the theory, translation error is multidimensional; it is not simply the consequence of defective translation but an inevitable fact…

  20. RM2: rms error comparisons

    NASA Technical Reports Server (NTRS)

    Rice, R. F.

    1976-01-01

    The root-mean-square error performance measure is used to compare the relative performance of several widely known source coding algorithms with the RM2 image data compression system. The results demonstrate that RM2 has a uniformly significant performance advantage.
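
    For reference, the figure of merit used above is simply the root-mean-square difference between the original and reconstructed images; the sketch below assumes 8-bit image arrays and is not part of the RM2 system itself.

        import numpy as np

        def rms_error(original, reconstructed):
            # Pixel-wise root-mean-square error between two images of equal shape.
            diff = original.astype(float) - reconstructed.astype(float)
            return float(np.sqrt(np.mean(diff ** 2)))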

  1. What Is a Reading Error?

    ERIC Educational Resources Information Center

    Labov, William; Baker, Bettina

    2010-01-01

    Early efforts to apply knowledge of dialect differences to reading stressed the importance of the distinction between differences in pronunciation and mistakes in reading. This study develops a method of estimating the probability that a given oral reading that deviates from the text is a true reading error by observing the semantic impact of the…

  2. Amplify Errors to Minimize Them

    ERIC Educational Resources Information Center

    Stewart, Maria Shine

    2009-01-01

    In this article, the author offers her experience of modeling mistakes and writing spontaneously in the computer classroom to get students' attention and elicit their editorial response. She describes how she taught her class about major sentence errors--comma splices, run-ons, and fragments--through her Sentence Meditation exercise, a rendition…

  3. Typical errors of ESP users

    NASA Astrophysics Data System (ADS)

    Eremina, Svetlana V.; Korneva, Anna A.

    2004-07-01

    The paper presents an analysis of the errors made by ESP (English for specific purposes) users that are considered typical. They occur as a result of misuse of the resources of English grammar and tend to resist correction. Their origin and places of occurrence are also discussed.

  4. Cascade Error Projection Learning Algorithm

    NASA Technical Reports Server (NTRS)

    Duong, T. A.; Stubberud, A. R.; Daud, T.

    1995-01-01

    A detailed mathematical analysis is presented for a new learning algorithm termed cascade error projection (CEP) and a general learning framework. This framework can be used to obtain the cascade correlation learning algorithm by choosing a particular set of parameters.

  5. Input/output error analyzer

    NASA Technical Reports Server (NTRS)

    Vaughan, E. T.

    1977-01-01

    Program aids in equipment assessment. Independent assembly-language utility program is designed to operate under level 27 or 31 of EXEC 8 Operating System. It scans user-selected portions of system log file, whether located on tape or mass storage, and searches for and processes I/O error (type 6) entries.

  6. A brief history of error.

    PubMed

    Murray, Andrew W

    2011-10-01

    The spindle checkpoint monitors chromosome alignment on the mitotic and meiotic spindle. When the checkpoint detects errors, it arrests progress of the cell cycle while it attempts to correct the mistakes. This perspective will present a brief history summarizing what we know about the checkpoint, and a list of questions we must answer before we understand it. PMID:21968991

  7. Verification of the Forecast Errors Based on Ensemble Spread

    NASA Astrophysics Data System (ADS)

    Vannitsem, S.; Van Schaeybroeck, B.

    2014-12-01

    The use of ensemble prediction systems allows for an uncertainty estimation of the forecast. Most end users do not require all the information contained in an ensemble and prefer the use of a single uncertainty measure. This measure is the ensemble spread, which serves to forecast the forecast error. It is, however, unclear how the quality of these error forecasts can best be assessed on the basis of spread and forecast error only. The spread-error verification is intricate for two reasons: first, for each probabilistic forecast only one verifying observation is available, and second, the spread is not meant to provide an exact prediction for the error. Despite these facts, several advances were recently made, all based on traditional deterministic verification of the error forecast. In particular, Grimit and Mass (2007) and Hopson (2014) considered in detail the strengths and weaknesses of the spread-error correlation, while Christensen et al. (2014) developed a proper-score extension of the mean squared error. However, due to the strong variance of the error given a certain spread, the error forecast should preferably be considered as probabilistic in nature. In the present work, different probabilistic error models are proposed depending on the spread-error metrics used. Most of these models allow for the discrimination of a perfect forecast from an imperfect one, independent of the underlying ensemble distribution. The new spread-error scores are tested on the ensemble prediction system of the European Centre for Medium-Range Weather Forecasts (ECMWF) over Europe and Africa. References: Christensen, H. M., Moroz, I. M. and Palmer, T. N., 2014, Evaluation of ensemble forecast uncertainty using a new proper score: application to medium-range and seasonal forecasts. In press, Quarterly Journal of the Royal Meteorological Society. Grimit, E. P., and C. F. Mass, 2007: Measuring the ensemble spread-error relationship with a probabilistic approach: Stochastic ensemble results. Mon. Wea. Rev., 135, 203
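
    The basic spread-error diagnostic discussed above can be sketched as follows; the synthetic ensemble, the use of the ensemble-mean absolute error, and all parameter values are assumptions for illustration only.

        import numpy as np

        rng = np.random.default_rng(1)
        n_cases, n_members = 500, 50
        truth = rng.normal(0.0, 1.0, n_cases)
        true_sigma = rng.uniform(0.5, 2.0, n_cases)               # case-dependent uncertainty
        ensemble = truth[:, None] + rng.normal(0.0, true_sigma[:, None], (n_cases, n_members))
        obs = truth + rng.normal(0.0, true_sigma)

        spread = ensemble.std(axis=1, ddof=1)                     # ensemble spread per forecast case
        error = np.abs(ensemble.mean(axis=1) - obs)               # error of the ensemble mean

        print("spread-error correlation:", np.corrcoef(spread, error)[0, 1])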

  8. Toward a cognitive taxonomy of medical errors.

    PubMed Central

    Zhang, Jiajie; Patel, Vimla L.; Johnson, Todd R.; Shortliffe, Edward H.

    2002-01-01

    One critical step in addressing and resolving the problems associated with human errors is the development of a cognitive taxonomy of such errors. In the case of errors, such a taxonomy may be developed (1) to categorize all types of errors along cognitive dimensions, (2) to associate each type of error with a specific underlying cognitive mechanism, (3) to explain why, and even predict when and where, a specific error will occur, and (4) to generate intervention strategies for each type of error. Based on Reason's (1992) definition of human errors and Norman's (1986) cognitive theory of human action, we have developed a preliminary action-based cognitive taxonomy of errors that largely satisfies these four criteria in the domain of medicine. We discuss initial steps for applying this taxonomy to develop an online medical error reporting system that not only categorizes errors but also identifies problems and generates solutions. PMID:12463962

  9. Density Estimation Framework for Model Error Assessment

    NASA Astrophysics Data System (ADS)

    Sargsyan, K.; Liu, Z.; Najm, H. N.; Safta, C.; VanBloemenWaanders, B.; Michelsen, H. A.; Bambha, R.

    2014-12-01

    In this work we highlight the importance of model error assessment in physical model calibration studies. Conventional calibration methods often assume the model is perfect and account for data noise only. Consequently, the estimated parameters typically have biased values that implicitly compensate for model deficiencies. Moreover, improving the amount and the quality of data may not improve the parameter estimates since the model discrepancy is not accounted for. In state-of-the-art methods model discrepancy is explicitly accounted for by enhancing the physical model with a synthetic statistical additive term, which allows appropriate parameter estimates. However, these statistical additive terms do not increase the predictive capability of the model because they are tuned for particular output observables and may even violate physical constraints. We introduce a framework in which model errors are captured by allowing variability in specific model components and parameterizations for the purpose of achieving meaningful predictions that are both consistent with the data spread and appropriately disambiguate model and data errors. Here we cast model parameters as random variables, embedding the calibration problem within a density estimation framework. Further, we calibrate for the parameters of the joint input density. The likelihood function for the associated inverse problem is degenerate, therefore we use Approximate Bayesian Computation (ABC) to build prediction-constraining likelihoods and illustrate the strengths of the method on synthetic cases. We also apply the ABC-enhanced density estimation to the TransCom 3 CO2 intercomparison study (Gurney, K. R., et al., Tellus, 55B, pp. 555-579, 2003) and calibrate 15 transport models for regional carbon sources and sinks given atmospheric CO2 concentration measurements.
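
    As a toy illustration of the ABC idea invoked above, the rejection sampler below accepts parameter draws whose simulated summary statistics fall close to the observed ones; the Gaussian toy model, prior, summaries, and tolerance are all assumptions for the sketch and are unrelated to the TransCom application.

        import numpy as np

        rng = np.random.default_rng(2)
        data = rng.normal(1.5, 0.5, 50)                  # "observed" data (synthetic)
        summary_obs = np.array([data.mean(), data.std()])

        accepted = []
        for _ in range(20000):
            theta = rng.uniform(-5.0, 5.0)               # draw a parameter from the prior
            sim = rng.normal(theta, 0.5, data.size)      # simulate data given theta
            summary_sim = np.array([sim.mean(), sim.std()])
            if np.linalg.norm(summary_sim - summary_obs) < 0.2:   # accept if summaries are close
                accepted.append(theta)

        print("approximate posterior mean:", np.mean(accepted))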

  10. Factors that influence the generation of autobiographical memory conjunction errors.

    PubMed

    Devitt, Aleea L; Monk-Fromont, Edwin; Schacter, Daniel L; Addis, Donna Rose

    2016-01-01

    The constructive nature of memory is generally adaptive, allowing us to efficiently store, process and learn from life events, and simulate future scenarios to prepare ourselves for what may come. However, the cost of a flexibly constructive memory system is the occasional conjunction error, whereby the components of an event are authentic, but the combination of those components is false. Using a novel recombination paradigm, it was demonstrated that details from one autobiographical memory (AM) may be incorrectly incorporated into another, forming AM conjunction errors that elude typical reality monitoring checks. The factors that contribute to the creation of these conjunction errors were examined across two experiments. Conjunction errors were more likely to occur when the corresponding details were partially rather than fully recombined, likely due to increased plausibility and ease of simulation of partially recombined scenarios. Brief periods of imagination increased conjunction error rates, in line with the imagination inflation effect. Subjective ratings suggest that this inflation is due to similarity of phenomenological experience between conjunction and authentic memories, consistent with a source monitoring perspective. Moreover, objective scoring of memory content indicates that increased perceptual detail may be particularly important for the formation of AM conjunction errors. PMID:25611492

  11. Sampling Errors in Satellite-derived Infrared Sea Surface Temperatures

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Minnett, P. J.

    2014-12-01

    Sea Surface Temperature (SST) measured from satellites has been playing a crucial role in understanding geophysical phenomena. Generating SST Climate Data Records (CDRs) is considered to be the application that imposes the most stringent requirements on data accuracy. For infrared SSTs, sampling uncertainties caused by cloud presence and persistence generate errors. In addition, for sensors with narrow swaths, the swath gap will act as another sampling error source. This study is concerned with quantifying and understanding such sampling errors, which are important for SST CDR generation and for a wide range of satellite SST users. In order to quantify these errors, a reference Level 4 SST field (Multi-scale Ultra-high Resolution SST) is sampled by using realistic swath and cloud masks of the Moderate Resolution Imaging Spectroradiometer (MODIS) and the Advanced Along Track Scanning Radiometer (AATSR). Global and regional SST uncertainties are studied by assessing the sampling error at different temporal and spatial resolutions (7 spatial resolutions from 4 kilometers to 5.0° at the equator and 5 temporal resolutions from daily to monthly). Global annual and seasonal mean sampling errors are large in the high latitude regions, especially the Arctic, and have geographical distributions that are most likely related to stratus cloud occurrence and persistence. The region between 30°N and 30°S has smaller errors compared to higher latitudes, except for the Tropical Instability Wave area, where persistent negative errors are found. Important differences in sampling errors are also found between the broad and narrow swath scan patterns and between day and night fields. This is the first time that realistic magnitudes of the sampling errors have been quantified. Future improvement in the accuracy of SST products will benefit from this quantification.
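
    The core of the sampling-error calculation described above can be sketched as degrading a complete reference field with an observation mask and comparing the masked and full averages; the synthetic field and random mask below are placeholders for the Level 4 SST field and the MODIS/AATSR swath and cloud masks.

        import numpy as np

        rng = np.random.default_rng(3)
        reference_sst = 15.0 + 2.0 * rng.standard_normal((180, 360))   # stand-in for the Level 4 field
        observed = rng.random((180, 360)) > 0.4                        # True where the sensor saw the surface

        sampling_error = reference_sst[observed].mean() - reference_sst.mean()
        print("sampling error of the area mean (K):", sampling_error)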

  12. Prospective, multidisciplinary recording of perioperative errors in cerebrovascular surgery: is error in the eye of the beholder?

    PubMed

    Michalak, Suzanne M; Rolston, John D; Lawton, Michael T

    2016-06-01

    OBJECT Surgery requires careful coordination of multiple team members, each playing a vital role in mitigating errors. Previous studies have focused on eliciting errors from only the attending surgeon, likely missing events observed by other team members. METHODS Surveys were administered to the attending surgeon, resident surgeon, anesthesiologist, and nursing staff immediately following each of 31 cerebrovascular surgeries; participants were instructed to record any deviation from optimal course (DOC). DOCs were categorized and sorted by reporter and perioperative timing, then correlated with delays and outcome measures. RESULTS Errors were recorded in 93.5% of the 31 cases surveyed. The number of errors recorded per case ranged from 0 to 8, with an average of 3.1 ± 2.1 errors (± SD). Overall, technical errors were most common (24.5%), followed by communication (22.4%), management/judgment (16.0%), and equipment (11.7%). The resident surgeon reported the most errors (52.1%), followed by the circulating nurse (31.9%), the attending surgeon (26.6%), and the anesthesiologist (14.9%). The attending and resident surgeons were most likely to report technical errors (52% and 30.6%, respectively), while anesthesiologists and circulating nurses mostly reported anesthesia errors (36%) and communication errors (50%), respectively. The overlap in reported errors was 20.3%. If this study had used only the surveys completed by the attending surgeon, as in prior studies, 72% of equipment errors, 90% of anesthesia and communication errors, and 100% of nursing errors would have been missed. In addition, it would have been concluded that errors occurred in only 45.2% of cases (rather than 93.5%) and that errors resulting in a delay occurred in 3.2% of cases instead of the 74.2% calculated using data from 4 team members. Compiled results from all team members yielded significant correlations between technical DOCs and prolonged hospital stays and reported and actual delays (p = 0

  13. FRamework Assessing Notorious Contributing Influences for Error (FRANCIE): Perspective on Taxonomy Development to Support Error Reporting and Analysis

    SciTech Connect

    Lon N. Haney; David I. Gertman

    2003-04-01

    Beginning in the 1980s, a primary focus of human reliability analysis was the estimation of human error probabilities. However, detailed qualitative modeling with comprehensive representation of contextual variables was often lacking. This was likely due to the lack of comprehensive error and performance shaping factor taxonomies, and the limited data available on observed error rates and their relationship to specific contextual variables. In the mid-1990s, Boeing, America West Airlines, NASA Ames Research Center, and INEEL partnered in a NASA-sponsored Advanced Concepts grant to: assess the state of the art in human error analysis, identify future needs for human error analysis, and develop an approach addressing these needs. Identified needs included a method to identify and prioritize task and contextual characteristics affecting human reliability, as well as comprehensive taxonomies to support detailed qualitative modeling and to structure meaningful data collection efforts across domains. A result was the development of the FRamework Assessing Notorious Contributing Influences for Error (FRANCIE) with a taxonomy for airline maintenance tasks. The assignment of performance shaping factors to generic errors by experts proved to be valuable to qualitative modeling. Performance shaping factors and error types from such detailed approaches can be used to structure error reporting schemes. In a recent NASA Advanced Human Support Technology grant, FRANCIE was refined, and two new taxonomies for use on space missions were developed. The development, sharing, and use of error taxonomies, and the refinement of approaches for increased fidelity of qualitative modeling, is offered as a means to help direct useful data collection strategies.

  14. Error Rate Comparison during Polymerase Chain Reaction by DNA Polymerase.

    PubMed

    McInerney, Peter; Adams, Paul; Hadi, Masood Z

    2014-01-01

    As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for 6 DNA polymerases commonly used in PCR applications, including 3 polymerases typically used for cloning applications requiring high fidelity. Error rate measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. Of the six enzymes included in the study (Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase), we find the lowest error rates with Pfu, Phusion, and Pwo polymerases. Error rates are comparable for these 3 enzymes and are >10x lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the 3 high fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition. PMID:25197572

  15. Sensitivity of actively damped structures to imperfections and modeling errors

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.; Kapania, Rakesh K.

    1989-01-01

    The sensitivity of actively damped response of structures with respect to errors in the structural modeling is studied. Two ways of representing errors are considered. The first approach assumes errors in the form of spatial variations (or imperfections) in the assumed mass and stiffness properties of the structures. The second approach assumes errors due to such factors as unknown joint stiffnesses, discretization errors, and nonlinearities. These errors are represented here as discrepancies between experimental and analytical mode shapes and frequencies. The actively damped system considered here is a direct-rate feedback regulator based on a number of colocated velocity sensors and force actuators. The response of the controlled structure is characterized by the eigenvalues of the closed-loop system. The effects of the modeling errors are thus presented as the sensitivity of the eigenvalues of the closed-loop system. Results are presented for two examples: (1) a three-span simply supported beam controlled by three sensors and actuators, and (2) a laboratory structure consisting of a cruciform beam supported by cables.

  16. Error Rate Comparison during Polymerase Chain Reaction by DNA Polymerase

    PubMed Central

    McInerney, Peter; Adams, Paul; Hadi, Masood Z.

    2014-01-01

    As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for 6 DNA polymerases commonly used in PCR applications, including 3 polymerases typically used for cloning applications requiring high fidelity. Error rate measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. Of the six enzymes included in the study (Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase), we find the lowest error rates with Pfu, Phusion, and Pwo polymerases. Error rates are comparable for these 3 enzymes and are >10x lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the 3 high fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition. PMID:25197572

  17. A-posteriori error estimation for second order mechanical systems

    NASA Astrophysics Data System (ADS)

    Ruiner, Thomas; Fehr, Jörg; Haasdonk, Bernard; Eberhard, Peter

    2012-06-01

    One important issue for the simulation of flexible multibody systems is the reduction of the flexible bodies' degrees of freedom. As far as safety questions are concerned, knowledge about the error introduced by the reduction of the flexible degrees of freedom is helpful and very important. In this work, an a-posteriori error estimator for linear first order systems is extended for error estimation of mechanical second order systems. Due to the special second order structure of mechanical systems, an improvement of the a-posteriori error estimator is achieved. A major advantage of the a-posteriori error estimator is that the estimator is independent of the used reduction technique. Therefore, it can be used for moment-matching based, Gramian matrices based or modal based model reduction techniques. The capability of the proposed technique is demonstrated by the a-posteriori error estimation of a mechanical system, and a sensitivity analysis of the parameters involved in the error estimation process is conducted.

  18. An investigation of error characteristics and coding performance

    NASA Technical Reports Server (NTRS)

    Ebel, William J.; Ingels, Frank M.

    1992-01-01

    The performance of forward error correcting coding schemes on errors anticipated for the Earth Observation System (EOS) Ku-band downlink is studied. The EOS transmits picture frame data to the ground via the Tracking and Data Relay Satellite System (TDRSS) to a ground-based receiver at White Sands. Due to unintentional RF interference from other systems operating in the Ku band, the noise at the receiver is non-Gaussian, which may result in non-random errors output by the demodulator. That is, the downlink channel cannot be modeled by a simple memoryless Gaussian-noise channel. From previous experience, it is believed that these errors are bursty. The research proceeded by developing a computer-based simulation, called Communication Link Error ANalysis (CLEAN), to model the downlink errors, forward error correcting schemes, and interleavers used with TDRSS. To date, the bulk of CLEAN has been written, documented, debugged, and verified. The procedures for utilizing CLEAN to investigate code performance were established and are discussed.
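
    Bursty (non-random) error behavior of the kind described above is often represented with a two-state channel model; the sketch below uses a Gilbert-Elliott style good/bad state machine with illustrative transition probabilities and per-state error rates, and is not the CLEAN simulation itself.

        import numpy as np

        rng = np.random.default_rng(4)
        p_good_to_bad, p_bad_to_good = 0.001, 0.05       # state transition probabilities per bit
        p_err_good, p_err_bad = 1e-6, 0.1                # bit error probability in each state

        n_bits = 100_000
        errors = np.zeros(n_bits, dtype=bool)
        state_bad = False
        for i in range(n_bits):
            errors[i] = rng.random() < (p_err_bad if state_bad else p_err_good)
            if state_bad:
                state_bad = rng.random() >= p_bad_to_good    # stay in the burst unless we recover
            else:
                state_bad = rng.random() < p_good_to_bad

        print("overall bit error rate:", errors.mean())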

  19. Slope Error Measurement Tool for Solar Parabolic Trough Collectors: Preprint

    SciTech Connect

    Stynes, J. K.; Ihas, B.

    2012-04-01

    The National Renewable Energy Laboratory (NREL) has developed an optical measurement tool for parabolic solar collectors that measures the combined errors due to absorber misalignment and reflector slope error. The combined absorber alignment and reflector slope errors are measured using a digital camera to photograph the reflected image of the absorber in the collector. Previous work using the image of the reflection of the absorber finds the reflector slope errors from the reflection of the absorber and an independent measurement of the absorber location. The accuracy of the reflector slope error measurement is thus dependent on the accuracy of the absorber location measurement. By measuring the combined reflector-absorber errors, the uncertainty in the absorber location measurement is eliminated. The related performance merit, the intercept factor, depends on the combined effects of the absorber alignment and reflector slope errors. Measuring the combined effect provides a simpler measurement and a more accurate input to the intercept factor estimate. The minimal equipment and setup required for this measurement technique make it ideal for field measurements.

  20. Novel view synthesis with residual error feedback for FTV

    NASA Astrophysics Data System (ADS)

    Furihata, Hisayoshi; Yendo, Tomohiro; Panahpour Tehrani, Mehrdad; Fujii, Toshiaki; Tanimoto, Masayuki

    2010-02-01

    The availability of multi-view images of a scene makes possible new and exciting applications, including Free-viewpoint TV (FTV). FTV allows us to change viewpoint freely in a 3D world, where the virtual viewpoint images are synthesized by Depth-Image-Based Rendering (DIBR). In this paper, we propose a new method of DIBR using multi-view images acquired in a linear camera arrangement. The proposed method improves virtual viewpoint images by predicting the residual errors. For virtual viewpoint image synthesis, it is necessary to estimate depth maps from the multi-view images. Several algorithms to estimate depth maps have been proposed, but it is difficult to estimate accurate depth maps, and as a result the rendered virtual viewpoint images contain errors due to the depth errors. Therefore, our proposed method takes those depth errors into account and improves the quality of the rendered virtual viewpoint images. In the proposed method, the virtual images at each camera position are generated using the real images from the other cameras. Then, the residual errors can be calculated between the generated images and the real images acquired by the actual cameras. The residual errors are processed and fed back to predict the residual errors that would occur in virtual viewpoint images generated by the conventional method. In the experiments, PSNR could be improved by a few decibels compared with the conventional method.
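
    The PSNR figure quoted above is computed from the mean squared error between the rendered virtual view and the real view captured by the actual camera; the sketch below assumes 8-bit images and generic array inputs.

        import numpy as np

        def psnr(reference, rendered, max_value=255.0):
            # Peak signal-to-noise ratio in decibels for two images of equal shape.
            mse = np.mean((reference.astype(float) - rendered.astype(float)) ** 2)
            if mse == 0:
                return float("inf")
            return 10.0 * np.log10(max_value ** 2 / mse)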

  1. Parotitis due to anaerobic bacteria.

    PubMed

    Matlow, A; Korentager, R; Keystone, E; Bohnen, J

    1988-01-01

    Although Staphylococcus aureus remains the pathogen most commonly implicated in acute suppurative parotitis, the pathogenic role of gram-negative facultative anaerobic bacteria and strict anaerobic organisms in this disease is becoming increasingly recognized. This report describes a case of parotitis due to Bacteroides disiens in an elderly woman with Sjögren's syndrome. Literature reports on seven additional cases of suppurative parotitis due to anaerobic bacteria are reviewed. Initial therapy of acute suppurative parotitis should include coverage for S. aureus and, in a very ill patient, coverage of gram-negative facultative organisms with antibiotics such as cloxacillin and an aminoglycoside. A failure to respond clinically to such a regimen or isolation of anaerobic bacteria should lead to the consideration of the addition of clindamycin or penicillin. PMID:3287567

  2. Systematic parameter errors in inspiraling neutron star binaries.

    PubMed

    Favata, Marc

    2014-03-14

    The coalescence of two neutron stars is an important gravitational wave source for LIGO and other detectors. Numerous studies have considered the precision with which binary parameters (masses, spins, Love numbers) can be measured. Here I consider the accuracy with which these parameters can be determined in the presence of systematic errors due to waveform approximations. These approximations include truncation of the post-Newtonian (PN) series and neglect of neutron star (NS) spin, tidal deformation, or orbital eccentricity. All of these effects can yield systematic errors that exceed statistical errors for plausible parameter values. In particular, neglecting spin, eccentricity, or high-order PN terms causes a significant bias in the NS Love number. Tidal effects will not be measurable with PN inspiral waveforms if these systematic errors are not controlled. PMID:24679276

  3. Addressee Errors in ATC Communications: The Call Sign Problem

    NASA Technical Reports Server (NTRS)

    Monan, W. P.

    1983-01-01

    Communication errors involving aircraft call signs were portrayed in reports of 462 hazardous incidents voluntarily submitted to the ASRS during an approximate four-year period. These errors resulted in confusion, disorder, and uncoordinated traffic conditions and produced the following types of operational anomalies: altitude deviations, wrong-way headings, aborted takeoffs, go-arounds, runway incursions, missed crossing altitude restrictions, descents toward high terrain, and traffic conflicts in flight and on the ground. Analysis of the report set resulted in identification of five categories of errors involving call signs: (1) faulty radio usage techniques, (2) call sign loss or smearing due to frequency congestion, (3) confusion resulting from similar sounding call signs, (4) airmen misses of call signs leading to failures to acknowledge or readback, and (5) controller failures regarding confirmation of acknowledgements or readbacks. These error categories are described in detail and several associated hazard-mitigating measures that might be taken are considered.

  4. Systematic Parameter Errors in Inspiraling Neutron Star Binaries

    NASA Astrophysics Data System (ADS)

    Favata, Marc

    2014-03-01

    The coalescence of two neutron stars is an important gravitational wave source for LIGO and other detectors. Numerous studies have considered the precision with which binary parameters (masses, spins, Love numbers) can be measured. Here I consider the accuracy with which these parameters can be determined in the presence of systematic errors due to waveform approximations. These approximations include truncation of the post-Newtonian (PN) series and neglect of neutron star (NS) spin, tidal deformation, or orbital eccentricity. All of these effects can yield systematic errors that exceed statistical errors for plausible parameter values. In particular, neglecting spin, eccentricity, or high-order PN terms causes a significant bias in the NS Love number. Tidal effects will not be measurable with PN inspiral waveforms if these systematic errors are not controlled.

  5. Error resiliency of distributed video coding in wireless video communication

    NASA Astrophysics Data System (ADS)

    Ye, Shuiming; Ouaret, Mourad; Dufaux, Frederic; Ansorge, Michael; Ebrahimi, Touradj

    2008-08-01

    Distributed Video Coding (DVC) is a new paradigm in video coding, based on the Slepian-Wolf and Wyner-Ziv theorems. DVC offers a number of potential advantages: flexible partitioning of the complexity between the encoder and decoder, robustness to channel errors due to intrinsic joint source-channel coding, codec independent scalability, and multi-view coding without communications between the cameras. In this paper, we evaluate the performance of DVC in an error-prone wireless communication environment. We also present a hybrid spatial and temporal error concealment approach for DVC. Finally, we perform a comparison with a state-of-the-art AVC/H.264 video coding scheme in the presence of transmission errors.

  6. Transfer Error and Correction Approach in Mobile Network

    NASA Astrophysics Data System (ADS)

    Xiao-kai, Wu; Yong-jin, Shi; Da-jin, Chen; Bing-he, Ma; Qi-li, Zhou

    With the development of information technology and social progress, the demand for information has become increasingly diverse: people want to be able to communicate easily, quickly, and flexibly via voice, data, images, video, and other means, wherever and whenever they choose. Because visual information is direct and vivid, image and video transmission has also received widespread attention. With the emergence of third-generation mobile communication systems and the rapid development of IP networks, video communication is becoming a main service of wireless communications. However, real wireless and IP channels introduce errors, such as errors generated by multipath fading on wireless channels and packet loss on IP networks. Due to channel bandwidth limitations, video data must be heavily compressed before transmission, and the compressed data are very sensitive to transmission errors, which can cause a serious decline in image quality.

  7. Classifying and Predicting Errors of Inpatient Medication Reconciliation

    PubMed Central

    Pippins, Jennifer R.; Gandhi, Tejal K.; Hamann, Claus; Ndumele, Chima D.; Labonville, Stephanie A.; Diedrichsen, Ellen K.; Carty, Marcy G.; Karson, Andrew S.; Bhan, Ishir; Coley, Christopher M.; Liang, Catherine L.; Turchin, Alexander; McCarthy, Patricia C.

    2008-01-01

    Background Failure to reconcile medications across transitions in care is an important source of potential harm to patients. Little is known about the predictors of unintentional medication discrepancies and how, when, and where they occur. Objective To determine the reasons, timing, and predictors of potentially harmful medication discrepancies. Design Prospective observational study. Patients Admitted general medical patients. Measurements Study pharmacists took gold-standard medication histories and compared them with medical teams’ medication histories, admission and discharge orders. Blinded teams of physicians adjudicated all unexplained discrepancies using a modification of an existing typology. The main outcome was the number of potentially harmful unintentional medication discrepancies per patient (potential adverse drug events or PADEs). Results Among 180 patients, 2066 medication discrepancies were identified, and 257 (12%) were unintentional and had potential for harm (1.4 per patient). Of these, 186 (72%) were due to errors taking the preadmission medication history, while 68 (26%) were due to errors reconciling the medication history with discharge orders. Most PADEs occurred at discharge (75%). In multivariable analyses, low patient understanding of preadmission medications, number of medication changes from preadmission to discharge, and medication history taken by an intern were associated with PADEs. Conclusions Unintentional medication discrepancies are common and more often due to errors taking an accurate medication history than errors reconciling this history with patient orders. Focusing on accurate medication histories, on potential medication errors at discharge, and on identifying high-risk patients for more intensive interventions may improve medication safety during and after hospitalization. PMID:18563493

  8. Space Saving Statistics: An Introduction to Constant Error, Variable Error, and Absolute Error.

    ERIC Educational Resources Information Center

    Guth, David

    1990-01-01

    Article discusses research on orientation and mobility (O&M) for individuals with visual impairments, examining constant, variable, and absolute error (descriptive statistics that quantify fundamentally different characteristics of distributions of spatially directed behavior). It illustrates the statistics with examples, noting their application…
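
    For concreteness, the three statistics named in the title can be computed from a set of signed errors as follows; the sample values and units are illustrative only.

        import numpy as np

        errors = np.array([-0.3, 0.5, 0.1, -0.2, 0.4])   # signed errors (e.g., meters short of or beyond a target)

        constant_error = errors.mean()                   # CE: systematic bias
        variable_error = errors.std(ddof=1)              # VE: inconsistency around the performer's own mean
        absolute_error = np.abs(errors).mean()           # AE: overall magnitude of error

        print(constant_error, variable_error, absolute_error)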

  9. Error analysis for retrieval of Venus' IR surface emissivity from VIRTIS/VEX measurements

    NASA Astrophysics Data System (ADS)

    Kappel, David; Haus, Rainer; Arnold, Gabriele

    2015-08-01

    Venus' surface emissivity data in the infrared can serve to explore the planet's geology. The only global data with high spectral, spatial, and temporal resolution and coverage at present is supplied by nightside emission measurements acquired by the Visible and InfraRed Thermal Imaging Spectrometer VIRTIS-M-IR (1.0 - 5.1 μm) aboard ESA's Venus Express. A radiative transfer simulation and a retrieval algorithm can be used to determine surface emissivity in the nightside spectral transparency windows located at 1.02, 1.10, and 1.18 μm. To obtain satisfactory fits to measured spectra, the retrieval pipeline also determines auxiliary parameters describing cloud properties from a certain spectral range. But spectral information content is limited, and emissivity is difficult to retrieve due to strong interferences from other parameters. Based on a selection of representative synthetic VIRTIS-M-IR spectra in the range 1.0 - 2.3 μm, this paper investigates emissivity retrieval errors that can be caused by interferences of atmospheric and surface parameters, by measurement noise, and by a priori data, and which retrieval pipeline leads to minimal errors. Retrieval of emissivity from a single spectrum is shown to fail due to extremely large errors, although the fits to the reference spectra are very good. Neglecting geologic activity, it is suggested to apply a multi-spectrum retrieval technique to retrieve emissivity relative to an initial value as a parameter that is common to several measured spectra that cover the same surface bin. Retrieved emissivity maps of targets with limited extension (a few thousand km) are then additively renormalized to remove spatially large scale deviations from the true emissivity map that are due to spatially slowly varying interfering parameters. Corresponding multi-spectrum retrieval errors are estimated by a statistical scaling of the single-spectrum retrieval errors and are listed for 25 measurement repetitions. For the best of the

  10. Enthalpy difference between conformations of normal alkanes: effects of basis set and chain length on intramolecular basis set superposition error

    NASA Astrophysics Data System (ADS)

    Balabin, Roman M.

    2011-03-01

    The quantum chemistry of conformation equilibrium is a field where great accuracy (better than 100 cal mol⁻¹) is needed because the energy difference between molecular conformers rarely exceeds 1000-3000 cal mol⁻¹. The conformation equilibrium of straight-chain (normal) alkanes is of particular interest and importance for modern chemistry. In this paper, an extra error source for high-quality ab initio (first principles) and DFT calculations of the conformation equilibrium of normal alkanes, namely the intramolecular basis set superposition error (BSSE), is discussed. In contrast to out-of-plane vibrations in benzene molecules, diffuse functions on carbon and hydrogen atoms were found to greatly reduce the relative BSSE of n-alkanes. The corrections due to the intramolecular BSSE were found to be almost identical for the MP2, MP4, and CCSD(T) levels of theory. Their cancelation is expected when CCSD(T)/CBS (CBS, complete basis set) energies are evaluated by addition schemes. For larger normal alkanes (N > 12), the magnitude of the BSSE correction was found to be up to three times larger than the relative stability of the conformer; in this case, the basis set superposition error led to a two orders of magnitude difference in conformer abundance. No error cancelation due to the basis set superposition was found. A comparison with amino acid, peptide, and protein data was provided.

  11. Least Squares Evaluations for Form and Profile Errors of Ellipse Using Coordinate Data

    NASA Astrophysics Data System (ADS)

    Liu, Fei; Xu, Guanghua; Liang, Lin; Zhang, Qing; Liu, Dan

    2016-04-01

    To improve the measurement and evaluation of form error of an elliptic section, an evaluation method based on least squares fitting is investigated to analyze the form and profile errors of an ellipse using coordinate data. Two error indicators for defining ellipticity are discussed, namely the form error and the profile error, and the difference between the two is considered as the main parameter for evaluating the machining quality of surface and profile. Because the form error and the profile error rely on different evaluation benchmarks, the major axis and the foci rather than the centre of an ellipse are used as the evaluation benchmarks and can accurately evaluate a tolerance range with the separated form error and profile error of the workpiece. Additionally, an evaluation program based on the LS model is developed to extract the form error and the profile error of the elliptic section, which is well suited for separating the two errors by a standard program. Finally, the evaluation method for the form and profile errors of the ellipse is applied to the measurement of the skirt line of the piston, and results indicate the effectiveness of the evaluation. This approach provides new evaluation indicators for the measurement of form and profile errors of an ellipse, is found to have better accuracy, and can thus be used to address the difficulty of the measurement and evaluation of the piston in industrial production.
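
    A minimal least-squares reference fit of this kind can be sketched as an algebraic fit of the general conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 to the coordinate data; this generic fit is an assumption for illustration and is not the authors' exact evaluation model.

        import numpy as np

        def fit_conic(x, y):
            # Algebraic least-squares conic fit: the coefficient vector is the right
            # singular vector associated with the smallest singular value.
            D = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
            _, _, vt = np.linalg.svd(D)
            return vt[-1]

        # Example: noisy points on an ellipse with semi-axes 5 and 3.
        rng = np.random.default_rng(5)
        t = np.linspace(0.0, 2.0 * np.pi, 200)
        x = 5.0 * np.cos(t) + rng.normal(0.0, 0.02, t.size)
        y = 3.0 * np.sin(t) + rng.normal(0.0, 0.02, t.size)
        print(fit_conic(x, y))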

  12. Neuromotor Noise Is Malleable by Amplifying Perceived Errors.

    PubMed

    Hasson, Christopher J; Zhang, Zhaoran; Abe, Masaki O; Sternad, Dagmar

    2016-08-01

    Variability in motor performance results from the interplay of error correction and neuromotor noise. This study examined whether visual amplification of error, previously shown to improve performance, affects not only error correction, but also neuromotor noise, typically regarded as inaccessible to intervention. Seven groups of healthy individuals, with six participants in each group, practiced a virtual throwing task for three days until reaching a performance plateau. Over three more days of practice, six of the groups received different magnitudes of visual error amplification; three of these groups also had noise added. An additional control group was not subjected to any manipulations for all six practice days. The results showed that the control group did not improve further after the first three practice days, but the error amplification groups continued to decrease their error under the manipulations. Analysis of the temporal structure of participants' corrective actions based on stochastic learning models revealed that these performance gains were attained by reducing neuromotor noise and, to a considerably lesser degree, by increasing the size of corrective actions. Based on these results, error amplification presents a promising intervention to improve motor function by decreasing neuromotor noise after performance has reached an asymptote. These results are relevant for patients with neurological disorders and the elderly. More fundamentally, these results suggest that neuromotor noise may be accessible to practice interventions. PMID:27490197

  13. Neuromotor Noise Is Malleable by Amplifying Perceived Errors

    PubMed Central

    Zhang, Zhaoran; Abe, Masaki O.; Sternad, Dagmar

    2016-01-01

    Variability in motor performance results from the interplay of error correction and neuromotor noise. This study examined whether visual amplification of error, previously shown to improve performance, affects not only error correction, but also neuromotor noise, typically regarded as inaccessible to intervention. Seven groups of healthy individuals, with six participants in each group, practiced a virtual throwing task for three days until reaching a performance plateau. Over three more days of practice, six of the groups received different magnitudes of visual error amplification; three of these groups also had noise added. An additional control group was not subjected to any manipulations for all six practice days. The results showed that the control group did not improve further after the first three practice days, but the error amplification groups continued to decrease their error under the manipulations. Analysis of the temporal structure of participants’ corrective actions based on stochastic learning models revealed that these performance gains were attained by reducing neuromotor noise and, to a considerably lesser degree, by increasing the size of corrective actions. Based on these results, error amplification presents a promising intervention to improve motor function by decreasing neuromotor noise after performance has reached an asymptote. These results are relevant for patients with neurological disorders and the elderly. More fundamentally, these results suggest that neuromotor noise may be accessible to practice interventions. PMID:27490197

  14. Error control coding for multi-frequency modulation

    NASA Astrophysics Data System (ADS)

    Ives, Robert W.

    1990-06-01

    Multi-frequency modulation (MFM) has been developed at NPS using both quadrature-phase-shift-keyed (QPSK) and quadrature-amplitude-modulated (QAM) signals with good bit error performance at reasonable signal-to-noise ratios. Improved performance can be achieved by the introduction of error control coding. This report documents a FORTRAN simulation of the implementation of error control coding into an MFM communication link with additive white Gaussian noise. Four Reed-Solomon codes were incorporated, two for 16-QAM and two for 32-QAM modulation schemes. The error control codes used were modified from the conventional Reed-Solomon codes in that one information symbol was sacrificed to parity in order to use a simplified decoding algorithm which requires no iteration and enhances error detection capability. Bit error rates as a function of SNR and Eb/N0 were analyzed, and bit error performance was weighed against reduction in information rate to determine the value of the codes.

  15. Correcting false memories: Errors must be noticed and replaced.

    PubMed

    Mullet, Hillary G; Marsh, Elizabeth J

    2016-04-01

    Memory can be unreliable. For example, after reading The new baby stayed awake all night, people often misremember that the new baby cried all night (Brewer, 1977); similarly, after hearing bed, rest, and tired, people often falsely remember that sleep was on the list (Roediger & McDermott, 1995). In general, such false memories are difficult to correct, persisting despite warnings and additional study opportunities. We argue that errors must first be detected to be corrected; consistent with this argument, two experiments showed that false memories were nearly eliminated when conditions facilitated comparisons between participants' errors and corrective feedback (e.g., immediate trial-by-trial feedback that allowed direct comparisons between their responses and the correct information). However, knowledge that they had made an error was insufficient; unless the feedback message also contained the correct answer, the rate of false memories remained relatively constant. On the one hand, there is nothing special about correcting false memories: simply labeling an error as "wrong" is also insufficient for correcting other memory errors, including misremembered facts or mistranslations. However, unlike these other types of errors--which often benefit from the spacing afforded by delayed feedback--false memories require a special consideration: Learners may fail to notice their errors unless the correction conditions specifically highlight them. PMID:26576564

  16. Measuring worst-case errors in a robot workcell

    SciTech Connect

    Simon, R.W.; Brost, R.C.; Kholwadwala, D.K.

    1997-10-01

    Errors in model parameters, sensing, and control are inevitably present in real robot systems. These errors must be considered in order to automatically plan robust solutions to many manipulation tasks. Lozano-Perez, Mason, and Taylor proposed a formal method for synthesizing robust actions in the presence of uncertainty; this method has been extended by several subsequent researchers. All of these results presume the existence of worst-case error bounds that describe the maximum possible deviation between the robot's model of the world and reality. This paper examines the problem of measuring these error bounds for a real robot workcell. These measurements are difficult, because of the desire to completely contain all possible deviations while avoiding bounds that are overly conservative. The authors present a detailed description of a series of experiments that characterize and quantify the possible errors in visual sensing and motion control for a robot workcell equipped with standard industrial robot hardware. In addition to providing a means for measuring these specific errors, these experiments shed light on the general problem of measuring worst-case errors.

  17. Toward improved statistical treatments of wind power forecast errors

    NASA Astrophysics Data System (ADS)

    Hart, E.; Jacobson, M. Z.

    2011-12-01

    The ability of renewable resources to reliably supply electric power demand is of considerable interest in the context of growing renewable portfolio standards and the potential for future carbon markets. Toward this end, a number of probabilistic models have been applied to the problem of grid integration of intermittent renewables, such as wind power. Most of these models rely on simple Markov or autoregressive models of wind forecast errors. While these models generally capture the bulk statistics of wind forecast errors, they often fail to reproduce accurate ramp rate distributions and do not accurately describe extreme forecast error events, both of which are of considerable interest to those seeking to comment on system reliability. The problem often lies in characterizing and reproducing not only the magnitude of wind forecast errors, but also the timing or phase errors (i.e., when a front passes over a wind farm). Here we compare time series wind power data produced using different forecast error models to determine the best approach for capturing errors in both magnitude and phase. Additionally, new metrics are presented to characterize forecast quality with respect to both considerations.
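
    The simple autoregressive error models referred to above can be sketched as a first-order AR process; the persistence and innovation parameters below are illustrative, and the ramp-rate statistic is shown only to indicate the kind of property such models often fail to reproduce.

        import numpy as np

        rng = np.random.default_rng(6)
        n_hours, phi, sigma = 1000, 0.9, 0.05     # persistence and innovation std (fraction of capacity)
        err = np.zeros(n_hours)
        for t in range(1, n_hours):
            err[t] = phi * err[t - 1] + rng.normal(0.0, sigma)

        ramp = np.diff(err)                       # hour-to-hour changes implied by the model
        print("error std:", err.std(), "ramp std:", ramp.std())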

  18. Medical Errors: Tips to Help Prevent Them

    MedlinePlus

    Medical errors are one of the nation's ... single most important way you can help to prevent errors is to be an active member of ...

  19. Error-Related Psychophysiology and Negative Affect

    ERIC Educational Resources Information Center

    Hajcak, G.; McDonald, N.; Simons, R.F.

    2004-01-01

    The error-related negativity (ERN/Ne) and error positivity (Pe) have been associated with error detection and response monitoring. More recently, heart rate (HR) and skin conductance (SC) have also been shown to be sensitive to the internal detection of errors. An enhanced ERN has consistently been observed in anxious subjects and there is some…

  20. Characterization of errors in a coupled snow hydrology-microwave emission model

    USGS Publications Warehouse

    Andreadis, K.M.; Liang, D.; Tsang, L.; Lettenmaier, D.P.; Josberger, E.G.

    2008-01-01

    Traditional approaches to the direct estimation of snow properties from passive microwave remote sensing have been plagued by limitations such as the tendency of estimates to saturate for moderately deep snowpacks and the effects of mixed land cover within remotely sensed pixels. An alternative approach is to assimilate satellite microwave emission observations directly, which requires embedding an accurate microwave emissions model into a hydrologic prediction scheme, as well as quantitative information of model and observation errors. In this study a coupled snow hydrology [Variable Infiltration Capacity (VIC)] and microwave emission [Dense Media Radiative Transfer (DMRT)] model are evaluated using multiscale brightness temperature (TB) measurements from the Cold Land Processes Experiment (CLPX). The ability of VIC to reproduce snowpack properties is shown with the use of snow pit measurements, while TB model predictions are evaluated through comparison with Ground-Based Microwave Radiometer (GBMR), aircraft [Polarimetric Scanning Radiometer (PSR)], and satellite [Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E)] TB measurements. Limitations of the model at the point scale were not as evident when comparing areal estimates. The coupled model was able to reproduce the TB spatial patterns observed by PSR in two of three sites. However, this was mostly due to the presence of relatively dense forest cover. An interesting result occurs when examining the spatial scaling behavior of the higher-resolution errors; the satellite-scale error is well approximated by the mode of the (spatial) histogram of errors at the smaller scale. In addition, TB prediction errors were almost invariant when aggregated to the satellite scale, while forest-cover fractions greater than 30% had a significant effect on TB predictions. © 2008 American Meteorological Society.

  1. Analyzing the errors of DFT approximations for compressed water systems

    SciTech Connect

    Alfè, D.; Bartók, A. P.; Csányi, G.; Gillan, M. J.

    2014-07-07

    We report an extensive study of the errors of density functional theory (DFT) approximations for compressed water systems. The approximations studied are based on the widely used PBE and BLYP exchange-correlation functionals, and we characterize their errors before and after correction for 1- and 2-body errors, the corrections being performed using the methods of Gaussian approximation potentials. The errors of the uncorrected and corrected approximations are investigated for two related types of water system: first, the compressed liquid at temperature 420 K and density 1.245 g/cm³ where the experimental pressure is 15 kilobars; second, thermal samples of compressed water clusters from the trimer to the 27-mer. For the liquid, we report four first-principles molecular dynamics simulations, two generated with the uncorrected PBE and BLYP approximations and a further two with their 1- and 2-body corrected counterparts. The errors of the simulations are characterized by comparing with experimental data for the pressure, with neutron-diffraction data for the three radial distribution functions, and with quantum Monte Carlo (QMC) benchmarks for the energies of sets of configurations of the liquid in periodic boundary conditions. The DFT errors of the configuration samples of compressed water clusters are computed using QMC benchmarks. We find that the 2-body and beyond-2-body errors in the liquid are closely related to similar errors exhibited by the clusters. For both the liquid and the clusters, beyond-2-body errors of DFT make a substantial contribution to the overall errors, so that correction for 1- and 2-body errors does not suffice to give a satisfactory description. For BLYP, a recent representation of 3-body energies due to Medders, Babin, and Paesani [J. Chem. Theory Comput. 9, 1103 (2013)] gives a reasonably good way of correcting for beyond-2-body errors, after which the remaining errors are typically 0.5 mE_h ≃ 15 meV/monomer for the liquid

  2. QuorUM: An Error Corrector for Illumina Reads

    PubMed Central

    Marçais, Guillaume; Yorke, James A.; Zimin, Aleksey

    2015-01-01

    Motivation Illumina sequencing data can provide high coverage of a genome by relatively short (most often 100 bp to 150 bp) reads at a low cost. Even with a low (advertised 1%) error rate, 100× coverage Illumina data on average has an error in some read at every base in the genome. These errors make handling the data more complicated because they result in a large number of low-count erroneous k-mers in the reads. However, there is enough information in the reads to correct most of the sequencing errors, thus making subsequent use of the data (e.g. for mapping or assembly) easier. Here we use the term “error correction” to denote the reduction in errors due to both changes in individual bases and trimming of unusable sequence. We developed error correction software called QuorUM. QuorUM is mainly aimed at error-correcting Illumina reads for subsequent assembly. It is designed around the novel idea of minimizing the number of distinct erroneous k-mers in the output reads while preserving as many true k-mers as possible, and we introduce a composite statistic π that measures how successful we are at achieving this dual goal. We evaluate the performance of QuorUM by correcting actual Illumina reads from genomes for which a reference assembly is available. Results We produce trimmed and error-corrected reads that result in assemblies with longer contigs and fewer errors. We compared QuorUM against several published error correctors and found that it is the best performer in most metrics we use. QuorUM is efficiently implemented, making use of current multi-core computing architectures, and it is suitable for large data sets (1 billion bases checked and corrected per day per core). We also demonstrate that a third-party assembler (SOAPdenovo) benefits significantly from using QuorUM error-corrected reads. QuorUM error-corrected reads result in a factor of 1.1 to 4 improvement in N50 contig size compared to using the original reads with SOAPdenovo for the data sets investigated
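
    To make the k-mer idea concrete: sequencing errors tend to generate k-mers that occur only once or twice across the read set, while true k-mers are seen roughly as often as the coverage. The toy sketch below is not QuorUM's actual algorithm; the threshold and helper names are illustrative only. It simply trims a read at the first k-mer whose count looks untrustworthy.

        # Toy illustration (not QuorUM): flag k-mers whose count falls below a
        # threshold as likely erroneous, and trim a read at the first such k-mer.
        from collections import Counter

        def kmer_counts(reads, k):
            """Count all k-mers occurring in a collection of reads."""
            counts = Counter()
            for read in reads:
                for i in range(len(read) - k + 1):
                    counts[read[i:i + k]] += 1
            return counts

        def trim_read(read, counts, k, min_count=2):
            """Return the longest prefix whose k-mers all look trusted."""
            for i in range(len(read) - k + 1):
                if counts[read[i:i + k]] < min_count:
                    return read[:i + k - 1]   # cut before the suspect k-mer ends
            return read

        reads = ["ACGTACGTAC", "ACGTACGTAC", "ACGTACCTAC"]  # third read has one error
        counts = kmer_counts(reads, k=4)
        print([trim_read(r, counts, k=4) for r in reads])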

  3. Papilledema Due to Mirtazapine

    PubMed Central

    Ceylan, Mehmet Emin; Evrensel, Alper; Cömert, Gökçe

    2016-01-01

    Background: Mirtazapine is a tetracyclic antidepressant that enhances both noradrenergic and serotonergic transmission. The most common cause of papilledema is increased intracranial pressure due to a brain tumor. It may also occur as a result of idiopathic intracranial hypertension (IIH, pseudotumor cerebri). Moreover, papilledema may also develop due to retinitis, vasculitis, Graves’ disease, hypertension, leukemia, lymphoma, diabetes mellitus and radiation. Case Report: In this article, a patient who developed papilledema while under treatment with mirtazapine (30 mg/day) for two years and recovered after termination of mirtazapine treatment was discussed to draw the attention of clinicians to this side effect of mirtazapine. Conclusion: Idiopathic intracranial hypertension and papilledema due to psychotropic drugs have been reported in the literature. Mirtazapine may rarely cause peripheral edema. However, papilledema due to mirtazapine has not been previously reported. Although papilledema is a very rare side effect of antidepressant treatment, fundoscopic examinations of patients must be performed regularly. PMID:27308085

  4. ERROR ANALYSIS OF COMPOSITE SHOCK INTERACTION PROBLEMS.

    SciTech Connect

    Lee, T.; Mu, Y.; Zhao, M.; Glimm, J.; Li, X.; Ye, K.

    2004-07-26

    We propose statistical models of uncertainty and error in numerical solutions. To represent errors efficiently in shock physics simulations we propose a composition law. The law allows us to estimate errors in the solutions of composite problems in terms of the errors from simpler ones as discussed in a previous paper. In this paper, we conduct a detailed analysis of the errors. One of our goals is to understand the relative magnitude of the input uncertainty vs. the errors created within the numerical solution. In more detail, we wish to understand the contribution of each wave interaction to the errors observed at the end of the simulation.

  5. Entropic error-disturbance relations

    NASA Astrophysics Data System (ADS)

    Coles, Patrick; Furrer, Fabian

    2014-03-01

    We derive an entropic error-disturbance relation for a sequential measurement scenario as originally considered by Heisenberg, and we discuss how our relation could be tested using existing experimental setups. Our relation is valid for discrete observables, such as spin, as well as continuous observables, such as position and momentum. The novel aspect of our relation compared to earlier versions is its clear operational interpretation and the quantification of error and disturbance using entropic quantities. This directly relates the measurement uncertainty, a fundamental property of quantum mechanics, to information-theoretic limitations and offers potential applications in, for instance, quantum cryptography. PC is funded by National Research Foundation Singapore and Ministry of Education Tier 3 Grant ``Random numbers from quantum processes'' (MOE2012-T3-1-009). FF is funded by Japan Society for the Promotion of Science, KAKENHI grant No. 24-02793.

  6. Robot learning and error correction

    NASA Technical Reports Server (NTRS)

    Friedman, L.

    1977-01-01

    A model of robot learning is described that associates previously unknown perceptions with the sensed known consequences of robot actions. For these actions, both the categories of outcomes and the corresponding sensory patterns are incorporated in a knowledge base by the system designer. Thus the robot is able to predict the outcome of an action and compare the expectation with the experience. New knowledge about what to expect in the world may then be incorporated by the robot in a pre-existing structure, whether it detects accordance or discrepancy between a predicted consequence and experience. Errors committed during plan execution are detected by the same type of comparison process, and learning may be applied to avoid such errors.

  7. Effects of Setup Errors and Shape Changes on Breast Radiotherapy

    SciTech Connect

    Mourik, Anke van; Kranen, Simon van; Hollander, Suzanne den; Sonke, Jan-Jakob; Herk, Marcel van; Vliet-Vroegindeweij, Corine van

    2011-04-01

    Purpose: The purpose of the present study was to quantify the robustness of the dose distributions from three whole-breast radiotherapy (RT) techniques involving different levels of intensity modulation against whole patient setup inaccuracies and breast shape changes. Methods and Materials: For 19 patients (one computed tomography scan and five cone beam computed tomography scans each), three treatment plans were made (wedge, simple intensity-modulated RT [IMRT], and full IMRT). For each treatment plan, four dose distributions were calculated. The first dose distribution was the original plan. The other three included the effects of patient setup errors (rigid displacement of the bony anatomy) or breast errors (e.g., rotations and shape changes of the breast with respect to the bony anatomy), or both, and were obtained through deformable image registration and dose accumulation. Subsequently, the effects of the plan type and error sources on target volume coverage, mean lung dose, and excess dose were determined. Results: Systematic errors of 1-2 mm and random errors of 2-3 mm (standard deviation) were observed for both patient- and breast-related errors. Planning techniques involving glancing fields (wedge and simple IMRT) were primarily affected by patient errors (≈6% loss of coverage near the dorsal field edge and ≈2% near the skin). In contrast, plan deterioration due to breast errors was primarily observed in planning techniques without glancing fields (full IMRT, ≈2% loss of coverage near the dorsal field edge and ≈4% near the skin). Conclusion: The influences of patient and breast errors on the dose distributions are comparable in magnitude for whole breast RT plans, including glancing open fields, rendering simple IMRT the preferred technique. Dose distributions from planning techniques without glancing open fields were more seriously affected by shape changes of the breast, demanding specific attention in partial breast

  8. EC: an efficient error correction algorithm for short reads

    PubMed Central

    2015-01-01

    Background In highly parallel next-generation sequencing (NGS) techniques, millions to billions of short reads are produced from a genomic sequence in a single run. Due to the limitations of NGS technologies, there can be errors in the reads. The error rate of the reads can be reduced with trimming and by correcting the erroneous bases of the reads. This helps to achieve high-quality data, and the computational complexity of many biological applications is greatly reduced if the reads are first corrected. We have developed a novel error correction algorithm called EC and compared it with four other state-of-the-art algorithms using both real and simulated sequencing reads. Results We have done extensive and rigorous experiments that reveal that EC is indeed an effective, scalable, and efficient error correction tool. The real reads that we employed in our performance evaluation are Illumina-generated short reads of various lengths. The six experimental datasets we utilized are taken from the Sequence Read Archive (SRA) at NCBI. The simulated reads are obtained by picking substrings from random positions of reference genomes. To introduce errors, some of the bases of the simulated reads are changed to other bases with some probabilities. Conclusions Error correction is a vital problem in biology, especially for NGS data. In this paper we present a novel algorithm, called Error Corrector (EC), for correcting substitution errors in biological sequencing reads. We plan to investigate the possibility of employing the techniques introduced in this research paper to handle insertion and deletion errors also. Software availability The implementation is freely available for non-commercial purposes. It can be downloaded from: http://engr.uconn.edu/~rajasek/EC.zip. PMID:26678663
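
    The simulated-read procedure described above (substrings sampled from random genome positions, plus random base substitutions) is straightforward to reproduce. A minimal sketch follows, assuming a uniform per-base substitution rate; the function name and parameter values are illustrative only.

        # Sketch of substitution-error read simulation (illustrative defaults).
        import random

        def simulate_reads(genome, read_len, n_reads, error_rate=0.01, seed=42):
            """Sample substrings from random positions and inject substitution errors."""
            rng = random.Random(seed)
            bases = "ACGT"
            reads = []
            for _ in range(n_reads):
                start = rng.randrange(len(genome) - read_len + 1)
                read = list(genome[start:start + read_len])
                for i, b in enumerate(read):
                    if rng.random() < error_rate:
                        read[i] = rng.choice([x for x in bases if x != b])  # substitute
                reads.append("".join(read))
            return reads

        g_rng = random.Random(7)
        genome = "".join(g_rng.choice("ACGT") for _ in range(1000))
        print(simulate_reads(genome, read_len=100, n_reads=3)[0])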

  9. Systematic Errors of the Fsu Global Spectral Model

    NASA Astrophysics Data System (ADS)

    Surgi, Naomi

    Three 20-day winter forecasts have been carried out using the Florida State University Global Spectral Model to examine the systematic errors of the model. Most GCMs and global forecast models exhibit the same kind of error patterns even though the model formulations vary somewhat between them. Some of the dominant errors are a breakdown of the trade winds in the low latitudes, an over-prediction of the subtropical jets accompanied by an upward and poleward shift of the jets, an error in the mean sea-level pressure with over-intensification of the quasi-stationary oceanic lows and continental highs, and a warming of the tropical mid and upper troposphere. In this study, a number of sensitivity experiments have been performed for which orography, model physics and initialization are considered as possible causes of these errors. A parameterization of the vertical distribution of momentum due to the sub-grid scale orography has been implemented in the model to address the model deficiencies associated with orographic forcing. This scheme incorporates the effects of moisture on the wave-induced stress. The parameterization of gravity wave drag is shown to substantially reduce the large-scale wind and height errors in regions of direct forcing and well downstream of the mountainous regions. Also, a parameterization of the heat and moisture transport associated with shallow convection is found to have a positive impact on the errors, particularly in the tropics. This is accomplished by the increase of moisture supply from the subtropics into the deep tropics and a subsequent enhancement of the secondary circulations. A dynamic relaxation was carried out to examine the impact of the long-wave errors on the shorter waves. By constraining the long-wave error, improvement is shown for wavenumbers 5-7 on medium to extended range time intervals. Thus, improved predictability of the transient flow is expected by applying this initialization procedure.

  10. Human error in aviation operations

    NASA Technical Reports Server (NTRS)

    Billings, C. E.; Lauber, J. K.; Cooper, G. E.

    1974-01-01

    This report is a brief description of research being undertaken by the National Aeronautics and Space Administration. The project is designed to seek out factors in the aviation system which contribute to human error, and to search for ways of minimizing the potential threat posed by these factors. The philosophy and assumptions underlying the study are discussed, together with an outline of the research plan.

  11. Clinical review: Medication errors in critical care

    PubMed Central

    Moyen, Eric; Camiré, Eric; Stelfox, Henry Thomas

    2008-01-01

    Medication errors in critical care are frequent, serious, and predictable. Critically ill patients are prescribed twice as many medications as patients outside of the intensive care unit (ICU) and nearly all will suffer a potentially life-threatening error at some point during their stay. The aim of this article is to provide a basic review of medication errors in the ICU, identify risk factors for medication errors, and suggest strategies to prevent errors and manage their consequences. PMID:18373883

  12. Seeing Your Error Alters My Pointing: Observing Systematic Pointing Errors Induces Sensori-Motor After-Effects

    PubMed Central

    Ronchi, Roberta; Revol, Patrice; Katayama, Masahiro; Rossetti, Yves; Farnè, Alessandro

    2011-01-01

    During the procedure of prism adaptation, subjects execute pointing movements to visual targets under a lateral optical displacement: as a consequence of the discrepancy between visual and proprioceptive inputs, their visuo-motor activity is characterized by pointing errors. The perception of such final errors triggers error-correction processes that eventually result in sensori-motor compensation, opposite to the prismatic displacement (i.e., after-effects). Here we tested whether the mere observation of erroneous pointing movements, similar to those executed during prism adaptation, is sufficient to produce adaptation-like after-effects. Neurotypical participants observed, from a first-person perspective, the examiner's arm making incorrect pointing movements that systematically overshot the visual targets' location to the right, thus simulating a rightward optical deviation. Three classical after-effect measures (proprioceptive, visual and visual-proprioceptive shift) were recorded before and after first-person-perspective observation of pointing errors. Results showed that mere visual exposure to an arm that systematically points to the right side of a target (i.e., without error correction) produces a leftward after-effect, which mostly affects the observer's proprioceptive estimation of her body midline. In addition, being exposed to such a constant visual error induced in the observer the illusion of “feeling” the seen movement. These findings indicate that it is possible to elicit sensori-motor after-effects by mere observation of movement errors. PMID:21731649

  13. Error control in the GCF: An information-theoretic model for error analysis and coding

    NASA Technical Reports Server (NTRS)

    Adeyemi, O.

    1974-01-01

    The structure of data-transmission errors within the Ground Communications Facility is analyzed in order to provide error control (both forward error correction and feedback retransmission) for improved communication. Emphasis is placed on constructing a theoretical model of errors and obtaining from it all the relevant statistics for error control. No specific coding strategy is analyzed, but references to the significance of certain error pattern distributions, as predicted by the model, to error correction are made.

  14. An approach for clustering gene expression data with error information

    PubMed Central

    Tjaden, Brian

    2006-01-01

    Background Clustering of gene expression patterns is a well-studied technique for elucidating trends across large numbers of transcripts and for identifying likely co-regulated genes. Even the best clustering methods, however, are unlikely to provide meaningful results if too much of the data is unreliable. With the maturation of microarray technology, a wealth of research on statistical analysis of gene expression data has encouraged researchers to consider error and uncertainty in their microarray experiments, so that experiments are being performed increasingly with repeat spots per gene per chip and with repeat experiments. One of the challenges is to incorporate the measurement error information into downstream analyses of gene expression data, such as traditional clustering techniques. Results In this study, a clustering approach is presented which incorporates both gene expression values and error information about the expression measurements. Using repeat expression measurements, the error of each gene expression measurement in each experiment condition is estimated, and this measurement error information is incorporated directly into the clustering algorithm. The algorithm, CORE (Clustering Of Repeat Expression data), is presented and its performance is validated using statistical measures. By using error information about gene expression measurements, the clustering approach is less sensitive to noise in the underlying data and it is able to achieve more accurate clusterings. Results are described for both synthetic expression data as well as real gene expression data from Escherichia coli and Saccharomyces cerevisiae. Conclusion The additional information provided by replicate gene expression measurements is a valuable asset in effective clustering. Gene expression profiles with high errors, as determined from repeat measurements, may be unreliable and may associate with different clusters, whereas gene expression profiles with low errors can be
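
    One simple way to fold measurement-error information into a clustering distance — not necessarily CORE's exact formulation — is to down-weight conditions whose repeat measurements show high variance, so that noisy measurements contribute less to the distance between two expression profiles. A minimal sketch with invented numbers:

        # Illustrative variance-weighted distance between two expression profiles.
        import numpy as np

        def error_weighted_distance(x, y, var_x, var_y):
            """Weight each condition's squared difference by the measurement variances."""
            x, y, var_x, var_y = map(np.asarray, (x, y, var_x, var_y))
            return np.sqrt(np.sum((x - y) ** 2 / (var_x + var_y)))

        gene_a = [1.2, 0.4, -0.8]
        gene_b = [1.0, 0.9, -0.7]
        var_a = [0.05, 0.40, 0.05]   # estimated from repeat measurements
        var_b = [0.05, 0.35, 0.05]
        print(error_weighted_distance(gene_a, gene_b, var_a, var_b))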

  15. Systematic Error Estimation for Chemical Reaction Energies.

    PubMed

    Simm, Gregor N; Reiher, Markus

    2016-06-14

    For a theoretical understanding of the reactivity of complex chemical systems, accurate relative energies between intermediates and transition states are required. Despite its popularity, density functional theory (DFT) often fails to provide sufficiently accurate data, especially for molecules containing transition metals. Due to the huge number of intermediates that need to be studied for all but the simplest chemical processes, DFT is, to date, the only method that is computationally feasible. Here, we present a Bayesian framework for DFT that allows for error estimation of calculated properties. Since the optimal choice of parameters in present-day density functionals is strongly system dependent, we advocate for a system-focused reparameterization. While, at first sight, this approach conflicts with the first-principles character of DFT that should make it, in principle, system independent, we deliberately introduce system dependence to be able to assign a stochastically meaningful error to the system-dependent parametrization, which makes it nonarbitrary. By reparameterizing a functional that was derived on a sound physical basis to a chemical system of interest, we obtain a functional that yields reliable confidence intervals for reaction energies. We demonstrate our approach on the example of catalytic nitrogen fixation. PMID:27159007

  16. Robust, Error-Tolerant Photometric Projector Compensation.

    PubMed

    Grundhöfer, Anselm; Iwai, Daisuke

    2015-12-01

    We propose a novel error tolerant optimization approach to generate a high-quality photometric compensated projection. The application of a non-linear color mapping function does not require radiometric pre-calibration of cameras or projectors. This characteristic improves the compensation quality compared with related linear methods if this approach is used with devices that apply complex color processing, such as single-chip digital light processing projectors. Our approach consists of a sparse sampling of the projector's color gamut and non-linear scattered data interpolation to generate the per-pixel mapping from the projector to camera colors in real time. To avoid out-of-gamut artifacts, the input image's luminance is automatically adjusted locally in an optional offline optimization step that maximizes the achievable contrast while preserving smooth input gradients without significant clipping errors. To minimize the appearance of color artifacts at high-frequency reflectance changes of the surface due to usually unavoidable slight projector vibrations and movement (drift), we show that a drift measurement and analysis step, when combined with per-pixel compensation image optimization, significantly decreases the visibility of such artifacts. PMID:26390454
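
    As a generic illustration of the sparse-sampling-plus-scattered-data-interpolation idea (not the authors' implementation; the device response below is invented), one can fit a smooth projector-to-camera color mapping from a few hundred sampled colors:

        # Generic illustration: scattered-data interpolation of a projector->camera
        # color mapping from a sparse sample of the projector gamut (synthetic data).
        import numpy as np
        from scipy.interpolate import RBFInterpolator

        rng = np.random.default_rng(1)

        # Sparse samples of the projector gamut (RGB in [0, 1]) ...
        proj_samples = rng.uniform(0.0, 1.0, size=(200, 3))
        # ... and the colors the camera observed for each projected sample
        # (a made-up non-linear device response plus noise).
        cam_samples = proj_samples ** 1.8 * 0.9 + rng.normal(0.0, 0.01, size=(200, 3))

        # Fit a smooth non-linear mapping from projector colors to camera colors.
        color_map = RBFInterpolator(proj_samples, cam_samples, smoothing=1e-3)

        # Predict the camera response for arbitrary projector colors.
        query = np.array([[0.2, 0.5, 0.8], [0.9, 0.1, 0.3]])
        print(color_map(query))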

  17. Pulse Shaping Entangling Gates and Error Suppression

    NASA Astrophysics Data System (ADS)

    Hucul, D.; Hayes, D.; Clark, S. M.; Debnath, S.; Quraishi, Q.; Monroe, C.

    2011-05-01

    Control of spin-dependent forces is important for generating entanglement and realizing quantum simulations in trapped ion systems. Here we propose and implement a composite pulse sequence based on the Molmer-Sorensen gate to decrease gate infidelity due to frequency and timing errors. The composite pulse sequence uses an optical frequency comb to drive Raman transitions simultaneously detuned from trapped ion transverse motional red and blue sideband frequencies. The spin-dependent force displaces the ions in phase space, and the resulting spin-dependent geometric phase depends on the detuning. Voltage noise on the rf electrodes changes the detuning between the trapped ions' motional frequency and the laser, decreasing the fidelity of the gate. The composite pulse sequence consists of successive pulse trains from counter-propagating frequency combs with phase control of the microwave beatnote of the lasers to passively suppress detuning errors. We present the theory and experimental data with one and two ions where a gate is performed with a composite pulse sequence. This work was supported by the U.S. ARO, IARPA, the DARPA OLE program, the MURI program; the NSF PIF Program; the NSF Physics Frontier Center at JQI; the European Commission AQUTE program; and the IC postdoc program administered by the NGA.

  18. TU-C-BRE-08: IMRT QA: Selecting Meaningful Gamma Criteria Based On Error Detection Sensitivity

    SciTech Connect

    Steers, J; Fraass, B

    2014-06-15

    Purpose: To develop a strategy for defining meaningful tolerance limits and studying the sensitivity of IMRT QA gamma criteria by inducing known errors in QA plans. Methods: IMRT QA measurements (ArcCHECK, Sun Nuclear) were compared to QA plan calculations with induced errors. Many (>24) gamma comparisons between data and calculations were performed for each of several kinds of cases and classes of induced error types with varying magnitudes (e.g. MU errors ranging from -10% to +10%), resulting in over 3,000 comparisons. Gamma passing rates for each error class and case were graphed against error magnitude to create error curves in order to represent the range of missed errors in routine IMRT QA using various gamma criteria. Results: This study demonstrates that random, case-specific, and systematic errors can be detected by the error curve analysis. Depending on the location of the peak of the error curve (e.g., not centered about zero), 3%/3mm threshold=10% criteria may miss MU errors of up to 10% and random MLC errors of up to 5 mm. Additionally, using larger dose thresholds for specific devices may increase error sensitivity (for the same X%/Ymm criteria) by up to a factor of two. This analysis will allow clinics to select more meaningful gamma criteria based on QA device, treatment techniques, and acceptable error tolerances. Conclusion: We propose a strategy for selecting gamma parameters based on the sensitivity of gamma criteria and individual QA devices to induced calculation errors in QA plans. Our data suggest large errors may be missed using conventional gamma criteria and that using stricter criteria with an increased dose threshold may reduce the range of missed errors. This approach allows quantification of gamma criteria sensitivity and is straightforward to apply to other combinations of devices and treatment techniques.
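
    The error-curve idea — recompute the gamma passing rate while sweeping an induced error's magnitude — can be prototyped in one dimension. The sketch below uses synthetic profiles and a global 3%/3 mm criterion with a 10% dose threshold; it is not the study's ArcCHECK workflow, and it shows only a single point on such a curve, for a +5% MU-like scaling error.

        # Simplified 1-D gamma passing rate for a measured profile vs. a calculation
        # containing an induced MU-like scaling error (illustration only).
        import numpy as np

        def gamma_pass_rate(dose_meas, dose_calc, spacing_mm, dose_pct=3.0,
                            dta_mm=3.0, threshold_pct=10.0):
            """Fraction of measured points (above the dose threshold) with gamma <= 1."""
            x = np.arange(len(dose_calc)) * spacing_mm
            d_max = dose_meas.max()
            tol_dose = dose_pct / 100.0 * d_max            # global dose criterion
            passing, evaluated = 0, 0
            for i, d_m in enumerate(dose_meas):
                if d_m < threshold_pct / 100.0 * d_max:    # skip low-dose points
                    continue
                evaluated += 1
                gamma_sq = ((x - x[i]) / dta_mm) ** 2 + ((dose_calc - d_m) / tol_dose) ** 2
                passing += gamma_sq.min() <= 1.0
            return passing / evaluated if evaluated else float("nan")

        # Synthetic Gaussian "field" profile; induce a +5% scaling error in the calculation.
        x = np.linspace(-50, 50, 201)                       # 0.5 mm spacing
        measured = 100.0 * np.exp(-x**2 / (2 * 15.0**2))
        calculated = 1.05 * measured
        print(f"pass rate with +5% error: {gamma_pass_rate(measured, calculated, 0.5):.2%}")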

  19. YIELD EDITOR: SOFTWARE FOR REMOVING ERRORS FROM CROP YIELD MAPS

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Yield maps are a key component of precision agriculture, due to their usefulness in both development and evaluation of precision management strategies. The value of these yield maps can be compromised by the fact that raw yield maps contain a variety of inherent errors. Researchers have reported t...

  20. Taking the Error Term of the Factor Model into Account: The Factor Score Predictor Interval

    ERIC Educational Resources Information Center

    Beauducel, Andre

    2013-01-01

    The problem of factor score indeterminacy implies that the factor and the error scores cannot be completely disentangled in the factor model. It is therefore proposed to compute Harman's factor score predictor that contains an additive combination of factor and error variance. This additive combination is discussed in the framework of classical…