Science.gov

Sample records for additional error due

  1. Sensitivity analysis of geometric errors in additive manufacturing medical models.

    PubMed

    Pinto, Jose Miguel; Arrieta, Cristobal; Andia, Marcelo E; Uribe, Sergio; Ramos-Grez, Jorge; Vargas, Alex; Irarrazaval, Pablo; Tejos, Cristian

    2015-03-01

    Additive manufacturing (AM) models are used in medical applications for surgical planning, prosthesis design and teaching. For these applications, the accuracy of the AM models is essential. Unfortunately, this accuracy is compromised due to errors introduced by each of the building steps: image acquisition, segmentation, triangulation, printing and infiltration. However, the contribution of each step to the final error remains unclear. We performed a sensitivity analysis comparing errors obtained from a reference with those obtained modifying parameters of each building step. Our analysis considered global indexes to evaluate the overall error, and local indexes to show how this error is distributed along the surface of the AM models. Our results show that the standard building process tends to overestimate the AM models, i.e. models are larger than the original structures. They also show that the triangulation resolution and the segmentation threshold are critical factors, and that the errors are concentrated at regions with high curvatures. Errors could be reduced choosing better triangulation and printing resolutions, but there is an important need for modifying some of the standard building processes, particularly the segmentation algorithms.

  2. Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan

    2013-01-01

    The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as nonconstant variance resulting from systematic errors leaking into random errors, and the lack of prediction capability. Therefore, the multiplicative error model is a better choice.
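
    A minimal numerical sketch of the two error models contrasted above, assuming a synthetic gamma-distributed "truth" and illustrative coefficients (none of these values come from the paper): the multiplicative model is additive in log space, so its systematic part can be recovered by a linear fit of log(observation) on log(truth), and its random part then has roughly constant spread.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    truth = rng.gamma(shape=0.5, scale=10.0, size=5000)   # synthetic "true" daily rain (mm/day)

    # Additive model:        Y = A + B*X + eps,       eps ~ N(0, sigma^2)
    # Multiplicative model:  Y = A * X**B * exp(eps)  (i.e., additive in log space)
    eps = rng.normal(0.0, 0.4, size=truth.size)
    obs = 1.2 * truth**0.9 * np.exp(eps)                  # data generated multiplicatively

    # Fit the multiplicative model by ordinary least squares in log space.
    mask = truth > 0.1                                    # avoid taking log of near-zero rain
    B, logA = np.polyfit(np.log(truth[mask]), np.log(obs[mask]), 1)
    resid = np.log(obs[mask]) - (logA + B * np.log(truth[mask]))
    print(f"A ~ {np.exp(logA):.2f}, B ~ {B:.2f}, random-error std ~ {resid.std():.2f}")
    ```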

  3. Analysis of error-prone survival data under additive hazards models: measurement error effects and adjustments.

    PubMed

    Yan, Ying; Yi, Grace Y

    2016-07-01

    Covariate measurement error occurs commonly in survival analysis. Under the proportional hazards model, measurement error effects have been well studied, and various inference methods have been developed to correct for error effects under such a model. In contrast, error-contaminated survival data under the additive hazards model have received relatively less attention. In this paper, we investigate this problem by exploring measurement error effects on parameter estimation and the change of the hazard function. New insights of measurement error effects are revealed, as opposed to well-documented results for the Cox proportional hazards model. We propose a class of bias correction estimators that embraces certain existing estimators as special cases. In addition, we exploit the regression calibration method to reduce measurement error effects. Theoretical results for the developed methods are established, and numerical assessments are conducted to illustrate the finite sample performance of our methods.
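
    The abstract above mentions regression calibration; here is a minimal sketch of the classical regression-calibration substitution under additive, non-differential measurement error. The variances, sample size, and variable names are illustrative assumptions, not the authors' notation.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 10_000
    x = rng.normal(0.0, 1.0, n)           # true covariate (unobserved)
    w = x + rng.normal(0.0, 0.5, n)       # error-prone surrogate, W = X + U

    # Classical regression calibration: replace W by E[X | W] before fitting the
    # hazards model. With var(U) known (e.g., from replicate measurements),
    #   E[X | W] = mean(W) + lambda * (W - mean(W)),   lambda = var(X) / var(W).
    var_u = 0.5 ** 2                       # assumed-known measurement-error variance
    lam = (w.var() - var_u) / w.var()      # estimate var(X) as var(W) - var(U)
    x_cal = w.mean() + lam * (w - w.mean())
    print(f"reliability ratio lambda ~ {lam:.2f}")
    # x_cal would then stand in for the covariate in the additive hazards regression.
    ```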

  4. On quaternary DPSK error rates due to noise and interferences

    NASA Astrophysics Data System (ADS)

    Lye, K. M.; Tjhung, T. T.

    A method for computing the error rates of a quaternary, differentially encoded and detected, phase shift keyed (DPSK) system with Gaussian noise, intersymbol and adjacent channel interferences is presented. In the calculations, intersymbol effects due to the band-limiting IF filter were assumed to have come only from immediately adjacent symbols. Similarly, only immediately adjacent channels were assumed to have contributed toward interchannel interferences. Noise effects were handled by using a probability density formula for corrupted phase differences derived recently by Paula (1981). An experimental system was set up, and error rates measured to verify the analytical results. From the results, optimum receiver bandwidth and channel separation for quaternary DPSK systems can be determined.

  5. Evidence Report: Risk of Performance Errors Due to Training Deficiencies

    NASA Technical Reports Server (NTRS)

    Barshi, Immanuel

    2012-01-01

    The Risk of Performance Errors Due to Training Deficiencies is identified by the National Aeronautics and Space Administration (NASA) Human Research Program (HRP) as a recognized risk to human health and performance in space. The HRP Program Requirements Document (PRD) defines these risks. This Evidence Report provides a summary of the evidence that has been used to identify and characterize this risk. Given that training content, timing, intervals, and delivery methods must support crew task performance, and given that training paradigms will be different for long-duration missions with increased crew autonomy, there is a risk that operators will lack the skills or knowledge necessary to complete critical tasks, resulting in flight and ground crew errors and inefficiencies, failed mission and program objectives, and an increase in crew injuries.

  6. Errors in scatterometer-radiometer wind measurement due to rain

    NASA Technical Reports Server (NTRS)

    Moore, R. K.; Chaudhry, A. H.; Birrer, I. J.

    1983-01-01

    The behavior of radiometer corrections for the scatterometer is investigated by simulating simple situations using footprint sizes comparable with those used in the SEASAT-1 experiment and also actual footprints and rain rates from a hurricane observed by the SEASAT-1 system. The effects on correction due to attenuation and wind speed gradients are examined independently and jointly. It is shown that the error in the wind-speed estimate can be as large as 200% at higher wind speeds. The worst error occurs when the scatterometer footprint overlaps two or more radiometer footprints and the attenuation in the scatterometer footprint differs greatly from those in parts of the radiometer footprints. This problem could be overcome by using a true radiometer-scatterometer system having identical coincident footprints comparable in size with typical rain cells.

  7. [Medication error due to drug packaging: a case report].

    PubMed

    Ginestet, Hélène; Breton, David; Spadoni, Sophie; Jandard, Vincent; Paillet, Michel; Bohand, Xavier

    2009-12-01

    The occurrence of medication errors is a public health concern in hospitals, and drug packaging is one of their important causes. The authors report a medication error associated with an erroneous interpretation of drug packaging information. This error was detected during the pharmaceutical review of the medical prescription. The nursing staff in charge of drug administration must therefore be particularly aware of this risk. The potential clinical significance of this type of medication error may be important.

  8. Geodetic secular velocity errors due to interannual surface loading deformation

    NASA Astrophysics Data System (ADS)

    Santamaría-Gómez, Alvaro; Mémin, Anthony

    2015-08-01

    Geodetic vertical velocities derived from data as short as 3 yr are often assumed to be representative of linear deformation over past decades to millennia. We use two decades of surface loading deformation predictions due to variations of atmospheric, oceanic and continental water mass to assess the effect on secular velocities estimated from short time-series. The interannual deformation is time-correlated at most locations over the globe, with the level of correlation depending mostly on the chosen continental water model. Using the most conservative loading model and 5-yr-long time-series, we found median vertical velocity errors of 0.5 mm yr-1 over the continents (0.3 mm yr-1 globally), exceeding 1 mm yr-1 in regions around the southern Tropic. Horizontal velocity errors were seven times smaller. Unless an accurate loading model is available, a decade of continuous data is required in these regions to mitigate the impact of the interannual loading deformation on secular velocities.
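
    A small Monte Carlo sketch of the effect described above: fit a straight line to a short vertical position series that also contains a time-correlated interannual loading signal, and look at the scatter of the recovered "secular" velocity. The signal amplitude, period, and noise level are invented for illustration and are not the paper's loading model.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    def velocity_error_std(years, n_trials=2000, amp_mm=3.0, period_yr=4.0):
        """Std dev of the spurious trend fitted to white noise plus an interannual sinusoid."""
        t = np.arange(0.0, years, 1.0 / 52.0)                 # weekly position solutions (yr)
        errs = []
        for _ in range(n_trials):
            loading = amp_mm * np.sin(2 * np.pi * t / period_yr + rng.uniform(0, 2 * np.pi))
            noise = rng.normal(0.0, 2.0, t.size)              # white measurement noise (mm)
            errs.append(np.polyfit(t, loading + noise, 1)[0])  # true secular rate is zero
        return np.std(errs)

    for yrs in (3, 5, 10):
        print(f"{yrs:2d}-yr series: velocity error std ~ {velocity_error_std(yrs):.2f} mm/yr")
    ```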

  9. Uncertainty in vertically integrated liquid water content due to radar reflectivity observation error

    NASA Technical Reports Server (NTRS)

    French, Mark N.; Andrieu, Herve; Krajewski, Witold F.

    1995-01-01

    Radar reflectivity is used to estimate meteorological quantities such as rainfall rate, liquid water content, and the related quantity, vertically integrated liquid (VIL) water content. The estimation of any of these quantities depends on several assumptions related to the characteristics of the physical processes controlling the occurrence and character of water in the atmosphere. Additionally, there are many sources of error associated with radar observations, such as those due to bright band, hail, and drop size distribution approximations. This work addresses one error of interest, the radar reflectivity observation error; other error sources are assumed to be corrected or negligible. The result is a relationship between the uncertainty in VIL water content and radar reflectivity measurement error. An example application illustrates the estimation of VIL uncertainty from typical radar reflectivity observations and indicates that the coefficient of variation in VIL is much larger than the coefficient of variation in radar reflectivity.

  10. Correlation Attenuation Due to Measurement Error: A New Approach Using the Bootstrap Procedure

    ERIC Educational Resources Information Center

    Padilla, Miguel A.; Veprinsky, Anna

    2012-01-01

    Issues with correlation attenuation due to measurement error are well documented. More than a century ago, Spearman proposed a correction for attenuation. However, this correction has seen very little use since it can potentially inflate the true correlation beyond one. In addition, very little confidence interval (CI) research has been done for…
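
    Although the record above is truncated, the two ingredients it names are standard; the following is a minimal sketch, assuming made-up scores and reliabilities, of Spearman's disattenuation formula combined with a percentile bootstrap confidence interval. Note that the corrected value can exceed 1, the problem the abstract mentions.

    ```python
    import numpy as np

    def disattenuated_r(x, y, rel_x, rel_y):
        """Spearman's correction for attenuation: r_true ~= r_xy / sqrt(rel_x * rel_y)."""
        return np.corrcoef(x, y)[0, 1] / np.sqrt(rel_x * rel_y)

    rng = np.random.default_rng(2)
    n = 300
    x = rng.normal(size=n)
    y = 0.5 * x + rng.normal(size=n)      # toy observed scores
    rel_x, rel_y = 0.8, 0.7               # assumed test reliabilities

    # Percentile bootstrap confidence interval for the corrected correlation.
    boot = []
    for _ in range(2000):
        idx = rng.integers(0, n, n)
        boot.append(disattenuated_r(x[idx], y[idx], rel_x, rel_y))
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"corrected r = {disattenuated_r(x, y, rel_x, rel_y):.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
    ```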

  11. Evidence Report: Risk of Performance Errors Due to Training Deficiencies

    NASA Technical Reports Server (NTRS)

    Barshi, Immanuel; Dempsey, Donna L.

    2016-01-01

    Substantial evidence supports the claim that inadequate training leads to performance errors. Barshi and Loukopoulos (2012) demonstrate that even a task as carefully developed and refined over many years as operating an aircraft can be significantly improved by a systematic analysis, followed by improved procedures and improved training (see also Loukopoulos, Dismukes, & Barshi, 2009a). Unfortunately, such a systematic analysis of training needs rarely occurs during the preliminary design phase, when modifications are most feasible. Training is often seen as a way to compensate for deficiencies in task and system design, which in turn increases the training load. As a result, task performance often suffers, and with it, the operators suffer and so does the mission. On the other hand, effective training can indeed compensate for such design deficiencies, and can even go beyond to compensate for failures of our imagination to anticipate all that might be needed when we send our crew members to go where no one else has gone before. Much of the research literature on training is motivated by current training practices aimed at current training needs. Although there is some experience with operations in extreme environments on Earth, there is no experience with long-duration space missions where crews must practice semi-autonomous operations, where ground support must accommodate significant communication delays, and where so little is known about the environment. Thus, we must develop robust methodologies and tools to prepare our crews for the unknown. The research necessary to support such an endeavor does not currently exist, but existing research does reveal general challenges that are relevant to long-duration, high-autonomy missions. The evidence presented here describes issues related to the risk of performance errors due to training deficiencies. Contributing factors regarding training deficiencies may pertain to organizational process and training programs for

  12. Phase errors due to speckles in laser fringe projection.

    PubMed

    Rosendahl, Sara; Hällstig, Emil; Gren, Per; Sjödahl, Mikael

    2010-04-10

    When measuring a three-dimensional shape with triangulation and projected interference fringes it is of interest to reduce speckle contrast without destroying the coherence of the projected light. A moving aperture is used to suppress the speckles and thereby reduce the phase error in the fringe image. It is shown that the phase error depends linearly on the ratio between the speckle contrast and the modulation of the fringes. In this investigation the spatial carrier method was used to extract the phase, where the phase error also depends on filtering the Fourier spectrum. An analytical expression for the phase error is derived. Both the speckle reduction and the theoretical expressions for the phase error are verified by simulations and experiments. It was concluded that a movement of the aperture by three aperture diameters during exposure of the image reduces the speckle contrast and hence the phase error by 60%. In the experiments, a phase error of 0.2 rad was obtained.

  13. Reverse attenuation in interaction terms due to covariate measurement error.

    PubMed

    Muff, Stefanie; Keller, Lukas F

    2015-11-01

    Covariate measurement error may cause biases in parameters of regression coefficients in generalized linear models. The influence of measurement error on interaction parameters has, however, only rarely been investigated in depth, and if so, attenuation effects were reported. In this paper, we show that also reverse attenuation of interaction effects may emerge, namely when heteroscedastic measurement error or sampling variances of a mismeasured covariate are present, which are not unrealistic scenarios in practice. Theoretical findings are illustrated with simulations. A Bayesian approach employing integrated nested Laplace approximations is suggested to model the heteroscedastic measurement error and covariate variances, and an application shows that the method is able to reveal approximately correct parameter estimates.

  14. Error Due to Wing Bending in Single-Camera Photogrammetric Technique

    NASA Technical Reports Server (NTRS)

    Burner, Alpheus W., Jr.; Barrows, Danny A.

    2005-01-01

    The error due to wing bending introduced into single-camera photogrammetric computations used for the determination of wing twist or control surface angular deformation is described. It is shown that the error due to wing bending when determining main wing element-induced twist is typically less than 0.05 deg at the wing tip and may not warrant additional correction. It is also shown that the angular error in control surface deformation due to bending can be as large as 1 deg or more if the control surface is at a large deflection angle compared to the main wing element. A correction procedure suitable for control surface measurements is presented. Simulations of the error based on typical wind tunnel measurement geometry, and results from a controlled experimental test in the test section of the National Transonic Facility (NTF) are presented to confirm the validity of the method used for correction of control surface photogrammetric deformation data. An example of a leading edge (LE) slat measurement is presented to illustrate the error due to wing bending and its correction.

  15. Treatable newborn and infant seizures due to inborn errors of metabolism.

    PubMed

    Campistol, Jaume; Plecko, Barbara

    2015-09-01

    About 25% of seizures in the neonatal period have causes other than asphyxia, ischaemia or intracranial bleeding. Among these are primary genetic epileptic encephalopathies with sometimes poor prognosis and high mortality. In addition, some forms of neonatal infant seizures are due to inborn errors of metabolism that do not respond to common AEDs, but are amenable to specific treatment. In this situation, early recognition can allow seizure control and will prevent neurological deterioration and long-term sequelae. We review the group of inborn errors of metabolism that lead to newborn/infant seizures and epilepsy, of which the treatment with cofactors is very different to that used in typical epilepsy management.

  16. Tune shifts due to systematic errors in bend magnets

    SciTech Connect

    Douglas, D.

    1983-12-01

    The presence of systematic error multipoles in bend magnets, persistent currents at low magnet excitation, and saturation effects at high magnet excitation may all lead to tune shifts which could prove detrimental to the operation of the SSC. It is the purpose of this note to report estimates of the magnitude of these tune shifts and the corrector strengths required to circumvent them.

  17. Erasing Errors due to Alignment Ambiguity When Estimating Positive Selection

    PubMed Central

    Redelings, Benjamin

    2014-01-01

    Current estimates of diversifying positive selection rely on first having an accurate multiple sequence alignment. Simulation studies have shown that under biologically plausible conditions, relying on a single estimate of the alignment from commonly used alignment software can lead to unacceptably high false-positive rates in detecting diversifying positive selection. We present a novel statistical method that eliminates excess false positives resulting from alignment error by jointly estimating the degree of positive selection and the alignment under an evolutionary model. Our model treats both substitutions and insertions/deletions as sequence changes on a tree and allows site heterogeneity in the substitution process. We conduct inference starting from unaligned sequence data by integrating over all alignments. This approach naturally accounts for ambiguous alignments without requiring ambiguously aligned sites to be identified and removed prior to analysis. We take a Bayesian approach and conduct inference using Markov chain Monte Carlo to integrate over all alignments on a fixed evolutionary tree topology. We introduce a Bayesian version of the branch-site test and assess the evidence for positive selection using Bayes factors. We compare two models of differing dimensionality using a simple alternative to reversible-jump methods. We also describe a more accurate method of estimating the Bayes factor using Rao-Blackwellization. We then show using simulated data that jointly estimating the alignment and the presence of positive selection solves the problem with excessive false positives from erroneous alignments and has nearly the same power to detect positive selection as when the true alignment is known. We also show that samples taken from the posterior alignment distribution using the software BAli-Phy have substantially lower alignment error compared with MUSCLE, MAFFT, PRANK, and FSA alignments. PMID:24866534

  18. Compensation of overlay errors due to mask bending and non-flatness for EUV masks

    NASA Astrophysics Data System (ADS)

    Chandhok, Manish; Goyal, Sanjay; Carson, Steven; Park, Seh-Jin; Zhang, Guojing; Myers, Alan M.; Leeson, Michael L.; Kamna, Marilyn; Martinez, Fabian C.; Stivers, Alan R.; Lorusso, Gian F.; Hermans, Jan; Hendrickx, Eric; Govindjee, Sanjay; Brandstetter, Gerd; Laursen, Tod

    2009-03-01

    EUV blank non-flatness results in both out-of-plane distortion (OPD) and in-plane distortion (IPD) [3-5]. Even for extremely flat masks (~50 nm peak to valley (PV)), the overlay error is estimated to be greater than the allocation in the overlay budget. In addition, due to multilayer and other thin film induced stresses, EUV masks have severe bow (~1 um PV). Since there is no electrostatic chuck to flatten the mask during the e-beam write step, EUV masks are written in a bent state that can result in ~15 nm of overlay error. In this article we present the use of physically-based models of mask bending and non-flatness induced overlay errors, to compensate for pattern placement of EUV masks during the e-beam write step in a process we refer to as E-beam Writer based Overlay error Correction (EWOC). This work could result in less restrictive tolerances for the mask blank non-flatness specs, which in turn would result in fewer blank defects.

  19. Systematic errors in two-dimensional digital image correlation due to lens distortion

    NASA Astrophysics Data System (ADS)

    Pan, Bing; Yu, Liping; Wu, Dafang; Tang, Liqun

    2013-02-01

    Lens distortion practically presents in a real optical imaging system causing non-uniform geometric distortion in the recorded images, and gives rise to additional errors in the displacement and strain results measured by two-dimensional digital image correlation (2D-DIC). In this work, the systematic errors in the displacement and strain results measured by 2D-DIC due to lens distortion are investigated theoretically using the radial lens distortion model and experimentally through easy-to-implement rigid body, in-plane translation tests. Theoretical analysis shows that the displacement and strain errors at an interrogated image point are not only in linear proportion to the distortion coefficient of the camera lens used, but also depend on its distance relative to distortion center and its magnitude of displacement. To eliminate the systematic errors caused by lens distortion, a simple linear least-squares algorithm is proposed to estimate the distortion coefficient from the distorted displacement results of rigid body, in-plane translation tests, which can be used to correct the distorted displacement fields to obtain unbiased displacement and strain fields. Experimental results verify the correctness of the theoretical derivation and the effectiveness of the proposed lens distortion correction method.
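
    A sketch of the estimation idea above under a first-order radial distortion model (the coordinates, coefficient, and translation are synthetic, not from the paper): the systematic error in the displacements measured during a rigid in-plane translation is exactly linear in the distortion coefficient, so the coefficient can be recovered by ordinary least squares.

    ```python
    import numpy as np

    def distort(pts, k1):
        """First-order radial distortion about the image center: p_d = p * (1 + k1 * r^2)."""
        r2 = (pts ** 2).sum(axis=1, keepdims=True)
        return pts * (1.0 + k1 * r2)

    rng = np.random.default_rng(3)
    pts = rng.uniform(-1.0, 1.0, size=(500, 2))       # normalized sensor coordinates
    k1_true, shift = 5e-3, np.array([0.08, 0.02])     # distortion coefficient, rigid translation

    measured_disp = distort(pts + shift, k1_true) - distort(pts, k1_true)
    error = measured_disp - shift                     # systematic displacement error field

    # The error is linear in k1: error = k1 * [(p + u) * |p + u|^2 - p * |p|^2].
    basis = ((pts + shift) * ((pts + shift) ** 2).sum(axis=1, keepdims=True)
             - pts * (pts ** 2).sum(axis=1, keepdims=True))
    k1_hat = np.linalg.lstsq(basis.reshape(-1, 1), error.reshape(-1), rcond=None)[0][0]
    print(f"estimated k1 = {k1_hat:.2e} (true value {k1_true:.2e})")
    ```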

  20. Correction of non-additive errors in variational and ensemble data assimilation using image registration

    NASA Astrophysics Data System (ADS)

    Landelius, Tomas; Bojarova, Jelena; Gustafsson, Nils; Lindskog, Magnus

    2013-04-01

    It is hard to forecast the position of localized weather phenomena such as clouds, precipitation, and fronts. Moreover, cloudy areas are important since this is where most of the active weather occurs. Position errors, also known as phase or alignment or displacement errors, can have several causes; timing errors, deficient model physics, inadequate model resolution, etc. Furthermore, position errors have been shown to be non-additive and non-Gaussian, which violates the error model most data assimilation methods rely on. Remote sensing data contain coherent information on the weather development in time and space. By comparing structures in radar or satellite images with the forecast model state it is possible to get information about position errors. We use an image registration (optical flow) method to find a transformation, in terms of a displacement field, that aligns the model state with the corresponding remote sensing data. In particular, we surmise that assimilation of radiances in cloudy areas will benefit from a better aligned first guess. Analysis perturbations should become smaller and be easier to handle by the linearizations in the observation operator. In the variational setting the displacement field is used as a mapping function to obtain a new, better aligned, first guess from the old one by means of interpolation (warping). To reduce the effect of imbalances, the aligned first guess is not used as is. Instead it is used for generation of pseudo observations that are assimilated in a first step to get an aligned and balanced first guess. This step reduces the non-additive errors due to mis-alignment and is followed by a second step with a standard variational assimilation to compensate for the remaining additive errors. In ensemble data assimilation a displacement field is estimated for each ensemble member and is used as a distance measure. In areas where a member has a smaller displacement (smaller position error) than the control it is given

  1. All your data are always missing: incorporating bias due to measurement error into the potential outcomes framework.

    PubMed

    Edwards, Jessie K; Cole, Stephen R; Westreich, Daniel

    2015-08-01

    Epidemiologists often use the potential outcomes framework to cast causal inference as a missing data problem. Here, we demonstrate how bias due to measurement error can be described in terms of potential outcomes and considered in concert with bias from other sources. In addition, we illustrate how acknowledging the uncertainty that arises due to measurement error increases the amount of missing information in causal inference. We use a simple example to show that estimating the average treatment effect requires the investigator to perform a series of hidden imputations based on strong assumptions.

  2. Pointing-error simulations of the DSS-13 antenna due to wind disturbances

    NASA Technical Reports Server (NTRS)

    Gawronski, W.; Bienkiewicz, B.; Hill, R. E.

    1992-01-01

    Accurate spacecraft tracking by the NASA Deep Space Network (DSN) antennas must be assured during changing weather conditions. Wind disturbances are the main source of tracking errors. The development of a wind-force model and simulations of wind-induced pointing errors of DSN antennas are presented. The antenna model includes the antenna structure, the elevation and azimuth servos, and the tracking controller. Simulation results show that pointing errors due to wind gusts are of the same order as errors due to static wind pressure and that these errors (similar to those of static wind pressure) satisfy the velocity quadratic law. The presented methodology is used for wind-disturbance estimation and for the design of an antenna controller with wind-disturbance rejection properties.

  3. Sensitivity of intersegmental angles of the spinal column to errors due to marker misplacement.

    PubMed

    Rouhani, Hossein; Mahallati, Sara; Preuss, Richard; Masani, Kei; Popovic, Milos R

    2015-07-01

    The ranges of angular motion measured using multisegmented spinal column models are typically small, meaning that minor experimental errors can potentially affect the reliability of these measures. This study aimed to investigate the sensitivity of the 3D intersegmental angles, measured using a multisegmented spinal column model, to errors due to marker misplacement. Eleven healthy subjects performed trunk bending in five directions. Six cameras recorded the trajectory of 22 markers, representing seven spinal column segments. Misplacement error for each marker was modeled as a Gaussian function with a standard deviation of 6 mm, and constrained to a maximum value of 12 mm in each coordinate across the skin. The sensitivity of 3D intersegmental angles to these marker misplacement errors, added to the measured data, was evaluated. The errors in sagittal plane motions resulting from marker misplacement were small (RMS error less than 3.2 deg and relative error in the angular range less than 15%) during the five trunk bending directions. The errors in the frontal and transverse plane motions, induced by marker misplacement, however, were large (RMS error up to 10.2 deg and relative error in the range up to 58%), especially during trunk bending in the anterior, anterior-left, and anterior-right directions, and were often comparable in size to the intersubject variability for those motions. The induced errors in the frontal and transverse plane motions tended to be the greatest at the intersegmental levels in the lower lumbar region. These observations call into question the reliability of angle measures in the frontal and transverse planes, particularly in the lower lumbar region during trunk bending in the anterior direction, and argue against interpreting these measures for clinical evaluation and decision-making.

  4. The Additional Error of Inertial Sensors Induced by Hypersonic Flight Conditions

    PubMed Central

    Karachun, Volodimir; Mel’nick, Viktorij; Korobiichuk, Igor; Nowicki, Michał; Szewczyk, Roman; Kobzar, Svitlana

    2016-01-01

    The emergence of hypersonic technology poses a new challenge for inertial navigation sensors, which are widely used in the aerospace industry. The main problems are: extremely high temperatures, vibration of the fuselage, penetrating acoustic radiation and shock N-waves. The nature of the additional errors of the gyroscopic inertial sensor with hydrostatic suspension components under operating conditions generated by forced precession of the movable part of the suspension due to diffraction phenomena in acoustic fields is explained. The cause of the disturbing moments in the form of the Coriolis inertia forces during the transition of the suspension surface into the category of impedance is revealed. The boundaries of occurrence of the features on the resonance wave match are described. The values of the “false” angular velocity as a result of the elastic-stress state of suspension in the acoustic fields are determined. PMID:26927122

  5. The Additional Error of Inertial Sensors Induced by Hypersonic Flight Conditions.

    PubMed

    Karachun, Volodimir; Mel'nick, Viktorij; Korobiichuk, Igor; Nowicki, Michał; Szewczyk, Roman; Kobzar, Svitlana

    2016-02-26

    The emergence of hypersonic technology poses a new challenge for inertial navigation sensors, which are widely used in the aerospace industry. The main problems are: extremely high temperatures, vibration of the fuselage, penetrating acoustic radiation and shock N-waves. The nature of the additional errors of the gyroscopic inertial sensor with hydrostatic suspension components under operating conditions generated by forced precession of the movable part of the suspension due to diffraction phenomena in acoustic fields is explained. The cause of the disturbing moments in the form of the Coriolis inertia forces during the transition of the suspension surface into the category of impedance is revealed. The boundaries of occurrence of the features on the resonance wave match are described. The values of the "false" angular velocity as a result of the elastic-stress state of suspension in the acoustic fields are determined.

  6. Dynamic modelling and estimation of the error due to asynchronism in a redundant asynchronous multiprocessor system

    NASA Technical Reports Server (NTRS)

    Huynh, Loc C.; Duval, R. W.

    1986-01-01

    The use of Redundant Asynchronous Multiprocessor Systems to achieve ultrareliable Fault Tolerant Control Systems shows great promise. Development has been hampered by the inability to determine whether differences in the outputs of redundant CPUs are due to failures or to accrued error built up by slight differences in CPU clock intervals. This study derives an analytical dynamic model of the difference between redundant CPUs due to differences in their clock intervals and uses this model with on-line parameter identification to identify those differences. By accurately tracking errors due to asynchronism, the methodology generates an error signal with the effect of asynchronism removed, and this signal may be used to detect and isolate actual system failures.
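
    A toy sketch of the modeling idea, assuming two redundant integrators whose clock intervals differ by a small fraction delta (all values invented): the output divergence grows roughly linearly with step count, so delta can be identified by least squares and the asynchronism removed before comparing outputs for failure detection.

    ```python
    import numpy as np

    T, delta_true, rate = 0.01, 2e-5, 3.0            # nominal step, clock mismatch, signal rate
    steps = np.arange(1, 5001)
    out_a = rate * steps * T                         # CPU A integrates with interval T
    out_b = rate * steps * T * (1.0 + delta_true)    # CPU B integrates with interval T*(1+delta)
    diff = out_b - out_a                             # observed divergence between redundant outputs

    # diff ~= rate * T * delta * k, so delta follows from a least-squares fit on step count k.
    delta_hat = np.linalg.lstsq((rate * T * steps)[:, None], diff, rcond=None)[0][0]
    residual = diff - rate * T * steps * delta_hat   # asynchronism-free comparison signal
    print(f"identified clock mismatch ~ {delta_hat:.1e}; residual max {abs(residual).max():.2e}")
    ```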

  7. Error in trapped-ion quantum gates due to spontaneous photon scattering

    NASA Astrophysics Data System (ADS)

    Ozeri, R.; Langer, C.; Jost, J. D.; Blakestad, R. B.; Britton, J.; Chiaverini, J.; Hume, D.; Itano, W. M.; Knill, E.; Leibfried, D.; Reichle, R.; Seidelin, S.; Wesenberg, J. H.; Wineland, D. J.

    2006-05-01

    Quantum bits that are encoded into hyperfine states of trapped ions are a promising system for Quantum Information Processing (QIP). Quantum gates performed on trapped ions use laser induced stimulated Raman transitions. The spontaneous scattering of photons therefore sets a fundamental limit to the gate fidelity. Here we present a calculation that explores these limits. Errors are shown to arise from two sources. The first is due to spin relaxation (spontaneous Raman photon-scattering events) and the second due to the momentum-recoil that is imparted to the trapped ions in the scattering process. It is shown that the gate error due to spontaneous photon scattering can be reduced to very small values with the use of high laser power. It is further shown that error levels required for fault-tolerant QIP are within reach of experimentally realistic laser parameters.

  8. On Round-Off Error of Floating-Point Addition with Guard Digits,

    DTIC Science & Technology

    Some recent computers, such as those in the IBM 360 series, use radix 16 and single precision with a guard digit in floating-point addition. In this paper, a bound on the round-off error for floating-point addition in single precision with guard digits is derived. A comparison with double-precision addition is made. (Author)

  9. Scene identification probabilities for evaluating radiation flux errors due to scene misidentification

    NASA Technical Reports Server (NTRS)

    Manalo, Natividad D.; Smith, G. L.

    1991-01-01

    The scene identification probabilities (Pij) are fundamentally important in evaluations of the top-of-the-atmosphere (TOA) radiation-flux errors due to the scene misidentification. In this paper, the scene identification error probabilities were empirically derived from data collected in 1985 by the Earth Radiation Budget Experiment (ERBE) scanning radiometer when the ERBE satellite and the NOAA-9 spacecraft were rotated so as to scan alongside during brief periods in January and August 1985. Radiation-flux error computations utilizing these probabilities were performed, using orbit specifications for the ERBE, the Cloud and Earth's Radiant Energy System (CERES), and the SCARAB missions for a scene that was identified as partly cloudy over ocean. Typical values of the standard deviation of the random shortwave error were in the order of 1.5-5 W/sq m, but could reach values as high as 18.0 W/sq m as computed from NOAA-9.

  10. The Correction for Attenuation Due to Measurement Error: Clarifying Concepts and Creating Confidence Sets

    ERIC Educational Resources Information Center

    Charles, Eric P.

    2005-01-01

    The correction for attenuation due to measurement error (CAME) has received many historical criticisms, most of which can be traced to the limited ability to use CAME inferentially. Past attempts to determine confidence intervals for CAME are summarized and their limitations discussed. The author suggests that inference requires confidence sets…

  11. Efficiency degradation due to tracking errors for point focusing solar collectors

    NASA Technical Reports Server (NTRS)

    Hughes, R. O.

    1978-01-01

    An important parameter in the design of point focusing solar collectors is the intercept factor, which is a measure of efficiency and of energy available for use in the receiver. Using statistical methods, an expression for the expected value of the intercept factor is derived for various configurations and control law implementations. The analysis assumes that a radially symmetric flux distribution (not necessarily Gaussian) is generated at the focal plane due to the sun's finite image and various reflector errors. The time-varying tracking errors are assumed to be uniformly distributed within the threshold limits, which allows the expected value to be calculated.
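
    A Monte Carlo sketch of the expected-value calculation described above, using a Gaussian stand-in for the radially symmetric focal-plane flux, a circular receiver aperture, and pointing errors uniformly distributed within a threshold. All dimensions are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    sigma_spot = 0.01     # std dev of the focal-plane flux spot (m): sun image + reflector errors
    r_aperture = 0.025    # receiver aperture radius (m)
    threshold = 0.015     # tracking-error threshold expressed at the focal plane (m)

    def intercept_factor(offset, n=20_000):
        """Fraction of a Gaussian flux spot centered at `offset` that lands inside the aperture."""
        pts = rng.normal(0.0, sigma_spot, size=(n, 2)) + offset
        return np.mean((pts ** 2).sum(axis=1) < r_aperture ** 2)

    # Expected intercept factor over tracking errors uniform in [-threshold, threshold] per axis.
    offsets = rng.uniform(-threshold, threshold, size=(200, 2))
    phi = np.mean([intercept_factor(o) for o in offsets])
    print(f"expected intercept factor ~ {phi:.3f}")
    ```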

  12. Intermittent gear rattle due to interactions between forcing and manufacturing errors

    NASA Astrophysics Data System (ADS)

    Ottewill, James R.; Neild, Simon A.; Wilson, R. Eddie

    2009-04-01

    The interaction between eccentricity and an external forcing fluctuation in gear rattle response is investigated experimentally. The experimental rig consists of a 1:1 ratio steel spur gear pair, the input gear being controlled in displacement and the output gear being under no load. Gear transmission errors recorded using high accuracy encoders are presented. Large variations in backlash oscillation amplitude are observed as the relative phase of the input forcing and the sinusoidal static transmission error due to eccentricity is varied. A simplified mathematical model incorporating eccentricity is developed. It is compared with experimental findings for three different gear eccentricity alignments by way of plots relating backlash oscillation amplitude to forcing amplitude and phase relative to eccentricity sinusoid. It is shown that eccentricity does not fully account for the experimentally observed large variations in amplitude. Through analysis of the experimental data, it is suggested that further tooth profiling errors may explain the discrepancies.

  13. Influence of Additive and Multiplicative Structure and Direction of Comparison on the Reversal Error

    ERIC Educational Resources Information Center

    González-Calero, José Antonio; Arnau, David; Laserna-Belenguer, Belén

    2015-01-01

    An empirical study has been carried out to evaluate the potential of word order matching and static comparison as explanatory models of reversal error. Data was collected from 214 undergraduate students who translated a set of additive and multiplicative comparisons expressed in Spanish into algebraic language. In these multiplicative comparisons…

  14. A study of GPS measurement errors due to noise and multipath interference for CGADS

    NASA Technical Reports Server (NTRS)

    Axelrad, Penina; MacDoran, Peter F.; Comp, Christopher J.

    1996-01-01

    This report describes a study performed by the Colorado Center for Astrodynamics Research (CCAR) on GPS measurement errors in the Codeless GPS Attitude Determination System (CGADS) due to noise and multipath interference. Preliminary simulation models of the CGADS receiver and orbital multipath are described. The standard FFT algorithm for processing the codeless data is described, and two alternative algorithms, an auto-regressive/least squares (AR-LS) method and a combined adaptive notch filter/least squares (ANF-ALS) method, are also presented. Effects of system noise, quantization, baseband frequency selection, and Doppler rates on the accuracy of phase estimates with each of the processing methods are shown. Typical electrical phase errors for the AR-LS method are 0.2 degrees, compared to 0.3 and 0.5 degrees for the FFT and ANF-ALS algorithms, respectively. Doppler rate was found to have the largest effect on the performance.

  15. A posteriori compensation of the systematic error due to polynomial interpolation in digital image correlation

    NASA Astrophysics Data System (ADS)

    Baldi, Antonio; Bertolino, Filippo

    2013-10-01

    It is well known that displacement components estimated using digital image correlation are affected by a systematic error due to the polynomial interpolation required by the numerical algorithm. The magnitude of bias depends on the characteristics of the speckle pattern (i.e., the frequency content of the image), on the fractional part of displacements and on the type of polynomial used for intensity interpolation. In the literature, B-Spline polynomials are reported to introduce the smallest errors, whereas bilinear and cubic interpolants generally give the worst results. However, the small bias of B-Spline polynomials is partially counterbalanced by a somewhat larger execution time. We will try to improve the accuracy of lower order polynomials by a posteriori correcting their results so as to obtain a faster and more accurate analysis.

  16. Estimating random errors due to shot noise in backscatter lidar observations

    NASA Astrophysics Data System (ADS)

    Liu, Zhaoyan; Hunt, William; Vaughan, Mark; Hostetler, Chris; McGill, Matthew; Powell, Kathleen; Winker, David; Hu, Yongxiang

    2006-06-01

    We discuss the estimation of random errors due to shot noise in backscatter lidar observations that use either photomultiplier tube (PMT) or avalanche photodiode (APD) detectors. The statistical characteristics of photodetection are reviewed, and photon count distributions of solar background signals and laser backscatter signals are examined using airborne lidar observations at 532 nm made with a photon-counting mode APD. Both distributions appear to be Poisson, indicating that the arrival at the photodetector of photons for these signals is a Poisson stochastic process. For Poisson-distributed signals, a proportional, one-to-one relationship is known to exist between the mean of a distribution and its variance. Although the multiplied photocurrent no longer follows a strict Poisson distribution in analog-mode APD and PMT detectors, the proportionality still exists between the mean and the variance of the multiplied photocurrent. We make use of this relationship by introducing the noise scale factor (NSF), which quantifies the constant of proportionality that exists between the root mean square of the random noise in a measurement and the square root of the mean signal. Using the NSF to estimate random errors in lidar measurements due to shot noise provides a significant advantage over the conventional error estimation techniques, in that with the NSF, uncertainties can be reliably calculated from or for a single data sample. Methods for evaluating the NSF are presented. Algorithms to compute the NSF are developed for the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations lidar and tested using data from the Lidar In-space Technology Experiment.
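
    A small sketch of the NSF idea with synthetic Poisson counts (the detector gain and count rates are invented): estimate the NSF once from a steady record, then attach a shot-noise error bar to a single sample via sigma ≈ NSF · sqrt(signal).

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    gain = 7.5                                              # detector gain (output units per photon)
    background = gain * rng.poisson(lam=40.0, size=5000)    # steady background record

    # NSF = RMS(noise) / sqrt(mean signal); for an ideal scaled Poisson process it equals sqrt(gain).
    nsf = background.std() / np.sqrt(background.mean())

    single_sample = gain * rng.poisson(lam=400.0)           # one backscatter measurement
    sigma_sample = nsf * np.sqrt(single_sample)             # shot-noise error bar for that sample
    print(f"NSF ~ {nsf:.2f}; sample = {single_sample:.0f}, estimated sigma = {sigma_sample:.1f}")
    ```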

  17. Estimating random errors due to shot noise in backscatter lidar observations.

    PubMed

    Liu, Zhaoyan; Hunt, William; Vaughan, Mark; Hostetler, Chris; McGill, Matthew; Powell, Kathleen; Winker, David; Hu, Yongxiang

    2006-06-20

    We discuss the estimation of random errors due to shot noise in backscatter lidar observations that use either photomultiplier tube (PMT) or avalanche photodiode (APD) detectors. The statistical characteristics of photodetection are reviewed, and photon count distributions of solar background signals and laser backscatter signals are examined using airborne lidar observations at 532 nm made with a photon-counting mode APD. Both distributions appear to be Poisson, indicating that the arrival at the photodetector of photons for these signals is a Poisson stochastic process. For Poisson-distributed signals, a proportional, one-to-one relationship is known to exist between the mean of a distribution and its variance. Although the multiplied photocurrent no longer follows a strict Poisson distribution in analog-mode APD and PMT detectors, the proportionality still exists between the mean and the variance of the multiplied photocurrent. We make use of this relationship by introducing the noise scale factor (NSF), which quantifies the constant of proportionality that exists between the root mean square of the random noise in a measurement and the square root of the mean signal. Using the NSF to estimate random errors in lidar measurements due to shot noise provides a significant advantage over the conventional error estimation techniques, in that with the NSF, uncertainties can be reliably calculated from or for a single data sample. Methods for evaluating the NSF are presented. Algorithms to compute the NSF are developed for the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations lidar and tested using data from the Lidar In-space Technology Experiment.

  18. Estimating Random Errors Due to Shot Noise in Backscatter Lidar Observations

    NASA Technical Reports Server (NTRS)

    Liu, Zhaoyan; Hunt, William; Vaughan, Mark A.; Hostetler, Chris A.; McGill, Matthew J.; Powell, Kathy; Winker, David M.; Hu, Yongxiang

    2006-01-01

    In this paper, we discuss the estimation of random errors due to shot noise in backscatter lidar observations that use either photomultiplier tube (PMT) or avalanche photodiode (APD) detectors. The statistical characteristics of photodetection are reviewed, and photon count distributions of solar background signals and laser backscatter signals are examined using airborne lidar observations at 532 nm made with a photon-counting mode APD. Both distributions appear to be Poisson, indicating that the arrival at the photodetector of photons for these signals is a Poisson stochastic process. For Poisson-distributed signals, a proportional, one-to-one relationship is known to exist between the mean of a distribution and its variance. Although the multiplied photocurrent no longer follows a strict Poisson distribution in analog-mode APD and PMT detectors, the proportionality still exists between the mean and the variance of the multiplied photocurrent. We make use of this relationship by introducing the noise scale factor (NSF), which quantifies the constant of proportionality that exists between the root-mean-square of the random noise in a measurement and the square root of the mean signal. Using the NSF to estimate random errors in lidar measurements due to shot noise provides a significant advantage over the conventional error estimation techniques, in that with the NSF uncertainties can be reliably calculated from or for a single data sample. Methods for evaluating the NSF are presented. Algorithms to compute the NSF are developed for the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) lidar and tested using data from the Lidar In-space Technology Experiment (LITE).

  19. Uncertainty of permeability and specific storage due to experimental error during data acquisition for pulse-transient technique

    NASA Astrophysics Data System (ADS)

    Song, I.; Rathbun, A. P.; Saffer, D. M.

    2011-12-01

    Transient fluid flow through rock is governed by two hydraulic properties: permeability (k) and the specific storage (Ss), which are often determined by the pulse-transient technique when k is extremely low (e.g. k < 10^-19 m^2). The basic test system is composed of a pressure-confined rock sample connected to two closed reservoirs at its upstream and downstream ends. A pulse of pressure at the upstream boundary drives transient flow through the sample to the downstream end. The rock properties, k and Ss, can be determined by time-based recording of only one variable, the pressure change in each reservoir. Experimental error during data acquisition propagates through the data reduction process, leading to uncertainty in experimental results. In addition, unlike steady-state systems, the pressure-time curves are influenced by the compressive storage of the reservoirs and both the dimensions and properties of the sample. Thus, uncertainty in k and Ss may arise from errors in measurement of sample dimension, fluid pressure, or reservoir storages. In this study, the uncertainty in sample dimension is considered to be negligible, and reasonable error ranges in pressure and system storage measurements are considered. We first calculated the pressure errors induced by differences between assumed or experimentally measured values of k and Ss and their true values. Based on this result, the sensitivity coefficient (∂k/∂P and ∂Ss/∂P) is theoretically about 10 in percentage terms, i.e. a 1% error in the pulse pressure, averaged over a test cycle, produces roughly 10% uncertainty in k and Ss. The sensitivity coefficient may become larger when the ratio of sample storage to upstream reservoir storage is extremely small. We also examined the sensitivity of the resulting values of k and Ss to experimental error in measuring the storage capacity of the system reservoirs. Because the reservoirs are typically small for tight rock samples and irregular in shape due to the combination of tubing

  20. Generalized additive models and Lucilia sericata growth: assessing confidence intervals and error rates in forensic entomology.

    PubMed

    Tarone, Aaron M; Foran, David R

    2008-07-01

    Forensic entomologists use blow fly development to estimate a postmortem interval. Although accurate, fly age estimates can be imprecise for older developmental stages and no standard means of assigning confidence intervals exists. Presented here is a method for modeling growth of the forensically important blow fly Lucilia sericata, using generalized additive models (GAMs). Eighteen GAMs were created to predict the extent of juvenile fly development, encompassing developmental stage, length, weight, strain, and temperature data, collected from 2559 individuals. All measures were informative, explaining up to 92.6% of the deviance in the data, though strain and temperature exerted negligible influences. Predictions made with an independent data set allowed for a subsequent examination of error. Estimates using length and developmental stage were within 5% of true development percent during the feeding portion of the larval life cycle, while predictions for postfeeding third instars were less precise, but within expected error.
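
    An illustrative sketch, not the authors' code, of fitting a GAM to juvenile development data; it assumes the pygam package and uses invented toy measurements in place of the 2559 real specimens.

    ```python
    import numpy as np
    from pygam import LinearGAM, s   # assumes the pygam package is installed

    # Toy data: larval length (mm), weight (mg), and percent of development completed.
    rng = np.random.default_rng(6)
    length = rng.uniform(2.0, 20.0, 400)
    weight = 0.002 * length ** 2.6 + rng.normal(0.0, 0.5, 400)
    pct_dev = np.clip(5.0 * length + rng.normal(0.0, 6.0, 400), 0.0, 100.0)

    # Smooth terms for length and weight; strain and temperature could enter as further terms.
    X = np.column_stack([length, weight])
    gam = LinearGAM(s(0) + s(1)).fit(X, pct_dev)

    X_new = np.array([[12.0, 1.6]])
    print("predicted % development:", gam.predict(X_new))
    print("95% prediction interval:", gam.prediction_intervals(X_new, width=0.95))
    ```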

  1. Measurement error in mobile source air pollution exposure estimates due to residential mobility during pregnancy.

    PubMed

    Pennington, Audrey Flak; Strickland, Matthew J; Klein, Mitchel; Zhai, Xinxin; Russell, Armistead G; Hansen, Craig; Darrow, Lyndsey A

    2016-12-14

    Prenatal air pollution exposure is frequently estimated using maternal residential location at the time of delivery as a proxy for residence during pregnancy. We describe residential mobility during pregnancy among 19,951 children from the Kaiser Air Pollution and Pediatric Asthma Study, quantify measurement error in spatially resolved estimates of prenatal exposure to mobile source fine particulate matter (PM2.5) due to ignoring this mobility, and simulate the impact of this error on estimates of epidemiologic associations. Two exposure estimates were compared, one calculated using complete residential histories during pregnancy (weighted average based on time spent at each address) and the second calculated using only residence at birth. Estimates were computed using annual averages of primary PM2.5 from traffic emissions modeled using a Research LINE-source dispersion model for near-surface releases (RLINE) at 250 m resolution. In this cohort, 18.6% of children were born to mothers who moved at least once during pregnancy. Mobile source PM2.5 exposure estimates calculated using complete residential histories during pregnancy and only residence at birth were highly correlated (rS > 0.9). Simulations indicated that ignoring residential mobility resulted in modest bias of epidemiologic associations toward the null, but varied by maternal characteristics and prenatal exposure windows of interest (ranging from -2% to -10% bias). Journal of Exposure Science and Environmental Epidemiology advance online publication, 14 December 2016; doi:10.1038/jes.2016.66.
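
    A minimal sketch of the two exposure metrics being compared, with invented addresses and concentrations: a duration-weighted average over all residences held during pregnancy versus the concentration at the birth address alone.

    ```python
    import pandas as pd

    # Hypothetical residential history for one pregnancy (addresses and PM2.5 values made up).
    residences = pd.DataFrame({
        "address":   ["A", "B"],
        "days":      [120, 160],      # days spent at each address during pregnancy
        "pm25_ugm3": [1.8, 0.9],      # modeled traffic-related PM2.5 at each address
    })

    weighted = (residences["pm25_ugm3"] * residences["days"]).sum() / residences["days"].sum()
    birth_only = residences["pm25_ugm3"].iloc[-1]   # last address = residence at delivery
    print(f"mobility-weighted exposure: {weighted:.2f}, birth-address exposure: {birth_only:.2f}")
    ```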

  2. Evaluation of linear registration algorithms for brain SPECT and the errors due to hypoperfusion lesions.

    PubMed

    Radau, P E; Slomka, P J; Julin, P; Svensson, L; Wahlund, L O

    2001-08-01

    The semiquantitative analysis of perfusion single-photon emission computed tomography (SPECT) images requires a reproducible, objective method. Automated spatial standardization (registration) of images is a prerequisite to this goal. A source of registration error is the presence of hypoperfusion defects, which was evaluated in this study with simulated lesions. The brain perfusion images measured by 99mTc-HMPAO SPECT from 21 patients with probable Alzheimer's disease and 35 control subjects were retrospectively analyzed. An automatic segmentation method was developed to remove external activity. Three registration methods, robust least squares, normalized mutual information (NMI), and count difference were implemented and the effects of simulated defects were compared. The tested registration methods required segmentation of the cerebrum from external activity, and the automatic and manual methods differed by a three-dimensional displacement of 1.4 ± 1.1 mm. NMI registration proved to be least adversely affected by simulated defects, with a 3 mm average displacement caused by severe defects. The error in quantifying the patient-template parietal ratio due to misregistration was 2.0% for large defects (70% hypoperfusion) and 0.5% for smaller defects (85% hypoperfusion).

  3. Computational methods to compute wavefront error due to aero-optic effects

    NASA Astrophysics Data System (ADS)

    Genberg, Victor; Michels, Gregory; Doyle, Keith; Bury, Mark; Sebastian, Thomas

    2013-09-01

    Aero-optic effects can have deleterious effects on high performance airborne optical sensors that must view through turbulent flow fields created by the aerodynamic effects of windows and domes. Evaluating aero-optic effects early in the program during the design stages allows mitigation strategies and optical system design trades to be performed to optimize system performance. This necessitates a computationally efficient means to evaluate the impact of aero-optic effects such that the resulting dynamic pointing errors and wavefront distortions due to the spatially and temporally varying flow field can be minimized or corrected. To this end, an aero-optic analysis capability was developed within the commercial software SigFit that couples CFD results with optical design tools. SigFit reads the CFD generated density profile using the CGNS file format. OPD maps are then created by converting the three-dimensional density field into an index of refraction field and then integrating along specified paths to compute OPD errors across the optical field. The OPD maps may be evaluated directly against system requirements or imported into commercial optical design software including Zemax® and Code V® for a more detailed assessment of the impact on optical performance from which design trades may be performed.
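
    A compact sketch of the OPD computation outlined above: convert density to refractive index with the Gladstone-Dale relation (K ≈ 2.27e-4 m^3/kg for air at visible wavelengths) and integrate the index difference along each line of sight. The density field here is a synthetic placeholder rather than CFD/CGNS output.

    ```python
    import numpy as np

    K_GD = 2.27e-4                    # Gladstone-Dale constant for air, visible light (m^3/kg)
    dz = 0.002                        # integration step along the line of sight (m)
    nx, ny, nz = 64, 64, 200          # aperture samples (x, y) by path samples (z)

    rng = np.random.default_rng(7)
    rho_ref = 1.0                                             # freestream density (kg/m^3)
    rho = rho_ref + 0.01 * rng.standard_normal((nx, ny, nz))  # synthetic density fluctuations

    n_field = 1.0 + K_GD * rho        # Gladstone-Dale: n = 1 + K * rho
    n_ref = 1.0 + K_GD * rho_ref
    opd = ((n_field - n_ref) * dz).sum(axis=2)                # integrate (n - n_ref) along z per ray
    print(f"OPD rms over the aperture: {opd.std():.2e} m")
    ```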

  4. Assessment of Error in Aerosol Optical Depth Measured by AERONET Due to Aerosol Forward Scattering

    NASA Technical Reports Server (NTRS)

    Sinyuk, Alexander; Holben, Brent N.; Smirnov, Alexander; Eck, Thomas F.; Slustsker, Ilya; Schafer, Joel S.; Giles, David M.; Sorokin, Michail

    2013-01-01

    We present an analysis of the effect of aerosol forward scattering on the accuracy of aerosol optical depth (AOD) measured by CIMEL Sun photometers. The effect is quantified in terms of AOD and solar zenith angle using radiative transfer modeling. The analysis is based on aerosol size distributions derived from multi-year climatologies of AERONET aerosol retrievals. The study shows that the modeled error is lower than AOD calibration uncertainty (0.01) for the vast majority of AERONET level 2 observations, 99.53%. Only 0.47% of the AERONET database corresponding mostly to dust aerosol with high AOD and low solar elevations has larger biases. We also show that observations with extreme reductions in direct solar irradiance do not contribute to level 2 AOD due to low Sun photometer digital counts below a quality control cutoff threshold.

  5. Assessment of error in aerosol optical depth measured by AERONET due to aerosol forward scattering

    NASA Astrophysics Data System (ADS)

    Sinyuk, Alexander; Holben, Brent N.; Smirnov, Alexander; Eck, Thomas F.; Slutsker, Ilya; Schafer, Joel S.; Giles, David M.; Sorokin, Mikhail

    2012-12-01

    We present an analysis of the effect of aerosol forward scattering on the accuracy of aerosol optical depth (AOD) measured by CIMEL Sun photometers. The effect is quantified in terms of AOD and solar zenith angle using radiative transfer modeling. The analysis is based on aerosol size distributions derived from multi-year climatologies of AERONET aerosol retrievals. The study shows that the modeled error is lower than AOD calibration uncertainty (0.01) for the vast majority of AERONET level 2 observations, ∼99.53%. Only ∼0.47% of the AERONET database corresponding mostly to dust aerosol with high AOD and low solar elevations has larger biases. We also show that observations with extreme reductions in direct solar irradiance do not contribute to level 2 AOD due to low Sun photometer digital counts below a quality control cutoff threshold.

  6. Global Vision Impairment and Blindness Due to Uncorrected Refractive Error, 1990-2010.

    PubMed

    Naidoo, Kovin S; Leasher, Janet; Bourne, Rupert R; Flaxman, Seth R; Jonas, Jost B; Keeffe, Jill; Limburg, Hans; Pesudovs, Konrad; Price, Holly; White, Richard A; Wong, Tien Y; Taylor, Hugh R; Resnikoff, Serge

    2016-03-01

    The purpose of this systematic review was to estimate worldwide the number of people with moderate and severe visual impairment (MSVI; presenting visual acuity <6/18, ≥3/60) or blindness (presenting visual acuity <3/60) due to uncorrected refractive error (URE), to estimate trends in prevalence from 1990 to 2010, and to analyze regional differences. The review focuses on uncorrected refractive error which is now the most common cause of avoidable visual impairment globally. The systematic review of 14,908 relevant manuscripts from 1990 to 2010 using Medline, Embase, and WHOLIS yielded 243 high-quality, population-based cross-sectional studies which informed a meta-analysis of trends by region. The results showed that in 2010, 6.8 million (95% confidence interval [CI]: 4.7-8.8 million) people were blind (7.9% increase from 1990) and 101.2 million (95% CI: 87.88-125.5 million) vision impaired due to URE (15% increase since 1990), while the global population increased by 30% (1990-2010). The all-age age-standardized prevalence of URE blindness decreased 33% from 0.2% (95% CI: 0.1-0.2%) in 1990 to 0.1% (95% CI: 0.1-0.1%) in 2010, whereas the prevalence of URE MSVI decreased 25% from 2.1% (95% CI: 1.6-2.4%) in 1990 to 1.5% (95% CI: 1.3-1.9%) in 2010. In 2010, URE contributed 20.9% (95% CI: 15.2-25.9%) of all blindness and 52.9% (95% CI: 47.2-57.3%) of all MSVI worldwide. The contribution of URE to all MSVI ranged from 44.2 to 48.1% in all regions except in South Asia which was at 65.4% (95% CI: 62-72%). We conclude that in 2010, uncorrected refractive error continues as the leading cause of vision impairment and the second leading cause of blindness worldwide, affecting a total of 108 million people or 1 in 90 persons.

  7. Far-field errors due to random noise in cylindrical near-field measurements

    NASA Astrophysics Data System (ADS)

    Romeu, Jordi; Jofre, Luis; Cardama, Angel

    1992-01-01

    A full characterization of the far-field noise obtained from cylindrical near- to far-field transformation, for a white Gaussian, space stationary, near-field noise is derived. A possible source for such noise is the receiver additive noise. The noise characterization is done by obtaining the autocorrelation of the far-field noise, which is shown to be easily computed during the transformation process. Even for this simple case, the far-field noise has complex behavior dependent on the measurement probe. Once the statistical properties of the far-field noise are determined, it is possible to compute upper and lower bounds for the radiation pattern for a given probability. These bounds define a strip within the radiation pattern with the desired probability. This may be used as part of a complete near-field error analysis of a particular cylindrical near-field facility.

  8. Bit Error Rate Performance Limitations Due to Raman Amplifier Induced Crosstalk in a WDM Transmission System

    NASA Astrophysics Data System (ADS)

    Tithi, F. H.; Majumder, S. P.

    2017-03-01

    Analysis is carried out for a single-span wavelength division multiplexing (WDM) transmission system with distributed Raman amplification to find the effect of amplifier-induced crosstalk on the bit error rate (BER) for different system parameters. The results are evaluated in terms of the crosstalk power induced in a WDM channel due to Raman amplification, the optical signal-to-crosstalk ratio (OSCR), and the BER at any distance for different pump powers and numbers of WDM channels. The results show that the WDM system suffers a power penalty due to crosstalk, which is significant at higher pump power, larger channel separation, and a larger number of WDM channels. It is noticed that at a BER of 10^-9, the power penalty is 8.7 dB and 10.5 dB for a length of 180 km with N=32 and N=64 WDM channels, respectively, when the pump power is 20 mW, and the penalty grows at higher pump power. Analytical results are validated by simulation.
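
    For context, a hedged sketch of the textbook Gaussian-noise relation between the Q factor and BER that such penalty analyses build on; the paper's full crosstalk model is more detailed, and the values below are illustrative only.

      import numpy as np
      from scipy.special import erfc

      def ber_from_q(q):
          """Gaussian-noise approximation to the bit error rate for a given Q factor."""
          return 0.5 * erfc(q / np.sqrt(2.0))

      for q in (6.0, 7.0):
          print(f"Q = {q}: BER ~ {ber_from_q(q):.1e}")
      # Q ~ 6 corresponds to BER ~ 1e-9; the crosstalk power penalty quoted above is the extra
      # received power needed to restore Q (and hence the BER) to its crosstalk-free value.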

  9. A table of integrals of the error function. II - Additions and corrections.

    NASA Technical Reports Server (NTRS)

    Geller, M.; Ng, E. W.

    1971-01-01

    Integrals of products of error functions with other functions are presented, taking into account a combination of the error function with powers, a combination of the error function with exponentials and powers, a combination of the error function with exponentials of more complicated arguments, definite integrals from Laplace transforms, and a combination of the error function with trigonometric functions. Other integrals considered include a combination of the error function with logarithms and powers, a combination of two error functions, and a combination of the error function with other special functions.
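
    As a quick illustration of how tabulated results of this kind can be checked numerically (this example is a standard identity, not an entry quoted from the paper), the integral of the complementary error function over [0, infinity) equals 1/sqrt(pi).

      import numpy as np
      from scipy.special import erfc
      from scipy.integrate import quad

      value, abserr = quad(erfc, 0.0, np.inf)
      print(value, 1.0 / np.sqrt(np.pi))   # both ~ 0.5641895835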

  10. Errors of five-day mean surface wind and temperature conditions due to inadequate sampling

    NASA Technical Reports Server (NTRS)

    Legler, David M.

    1991-01-01

    Surface meteorological reports of wind components, wind speed, air temperature, and sea-surface temperature from buoys located in equatorial and midlatitude regions are used in a simulation of random sampling to determine errors of the calculated means due to inadequate sampling. Subsampling the data with several different sample sizes leads to estimates of the accuracy of the subsampled means. The number N of random observations needed to compute mean winds with chosen accuracies of 0.5 (N sub 0.5) and 1.0 (N sub 1.0) m/s and mean air and sea surface temperatures with chosen accuracies of 0.1 (N sub 0.1) and 0.2 (N sub 0.2) C was calculated for each 5-day and 30-day period in the buoy datasets. Mean values of N for the various accuracies and datasets are given. A second-order polynomial relation is established between N and the variability of the data record. This relationship demonstrates that for the same accuracy, N increases as the variability of the data record increases. The relationship is also independent of the data source. Volunteer-observing ship data do not satisfy the recommended minimum number of observations for obtaining 0.5 m/s and 0.2 C accuracy for most locations. The effect of having remotely sensed data is discussed.
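
    A simplified illustration of the sampling-error idea, assuming independent observations: the number of samples N needed for the standard error of the mean to fall below a chosen accuracy eps scales as (sigma/eps)^2. The paper fits an empirical second-order polynomial in the record's variability rather than using this idealized formula, but the qualitative behavior, N growing with variability, is the same. The sigma values below are hypothetical.

      import math

      def n_required(sigma, eps):
          """Samples needed so the standard error of the mean drops below eps (independent data)."""
          return math.ceil((sigma / eps) ** 2)

      print(n_required(sigma=2.0, eps=0.5))   # wind component accuracy of 0.5 m/s
      print(n_required(sigma=0.4, eps=0.1))   # temperature accuracy of 0.1 C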

  11. Evaluation of counting error due to colony masking in bioaerosol sampling.

    PubMed

    Chang, C W; Hwang, Y H; Grinshpun, S A; Macher, J M; Willeke, K

    1994-10-01

    Colony counting error due to indistinguishable colony overlap (i.e., masking) was evaluated theoretically and experimentally. A theoretical model to predict colony masking was used to determine colony counting efficiency by Monte Carlo computer simulation of microorganism collection and development into CFU. The computer simulation was verified experimentally by collecting aerosolized Bacillus subtilis spores and examining micro- and macroscopic colonies. Colony counting efficiency decreased (i) with increasing density of collected culturable microorganisms, (ii) with increasing colony size, and (iii) with decreasing ability of an observation system to distinguish adjacent colonies as separate units. Counting efficiency for 2-mm colonies, at optimal resolution, decreased from 98 to 85% when colony density increased from 1 to 10 microorganisms cm-2, in contrast to an efficiency decrease from 90 to 45% for 5-mm colonies. No statistically significant difference (alpha = 0.05) between experimental and theoretical results was found when colony shape was used to estimate the number of individual colonies in a CFU. Experimental colony counts were 1.2 times simulation estimates when colony shape was not considered, because of nonuniformity of actual colony size and the better discrimination ability of the human eye relative to the model. Colony surface densities associated with high counting accuracy were compared with recommended upper plate count limits and found to depend on colony size and an observation system's ability to identify overlapped colonies. Correction factors were developed to estimate the actual number of collected microorganisms from observed colony counts.(ABSTRACT TRUNCATED AT 250 WORDS)
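
    A minimal Monte Carlo sketch in the spirit of the masking model described above (the paper's model and observation criteria are more detailed): colony centers are scattered uniformly over a plate, colonies whose centers lie closer than one colony diameter are merged, and counting efficiency is the fraction still counted as separate units. The merge rule and all parameter values are simplifying assumptions; the sketch only reproduces the trend that efficiency drops as colony density and size increase.

      import numpy as np

      def counting_efficiency(density_cm2, colony_diam_mm, plate_diam_mm=90.0, trials=20, seed=0):
          """Fraction of collected microorganisms still countable as separate colonies."""
          rng = np.random.default_rng(seed)
          r_plate = plate_diam_mm / 2.0
          area_cm2 = np.pi * (r_plate / 10.0) ** 2
          n_true = max(1, int(density_cm2 * area_cm2))
          effs = []
          for _ in range(trials):
              # uniform random colony centers on the plate
              theta = rng.uniform(0.0, 2.0 * np.pi, n_true)
              rad = r_plate * np.sqrt(rng.uniform(0.0, 1.0, n_true))
              pts = np.column_stack([rad * np.cos(theta), rad * np.sin(theta)])
              # merge colonies whose centers are closer than one colony diameter (union-find)
              parent = list(range(n_true))
              def find(i):
                  while parent[i] != i:
                      parent[i] = parent[parent[i]]
                      i = parent[i]
                  return i
              d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
              for i, j in np.argwhere(np.triu(d < colony_diam_mm, k=1)):
                  parent[find(i)] = find(j)
              effs.append(len({find(i) for i in range(n_true)}) / n_true)
          return float(np.mean(effs))

      print(counting_efficiency(10.0, 2.0))   # 2-mm colonies
      print(counting_efficiency(10.0, 5.0))   # 5-mm colonies: masking is much heavier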

  12. Pedigree error due to extra-pair reproduction substantially biases estimates of inbreeding depression.

    PubMed

    Reid, Jane M; Keller, Lukas F; Marr, Amy B; Nietlisbach, Pirmin; Sardell, Rebecca J; Arcese, Peter

    2014-03-01

    Understanding the evolutionary dynamics of inbreeding and inbreeding depression requires unbiased estimation of inbreeding depression across diverse mating systems. However, studies estimating inbreeding depression often measure inbreeding with error, for example, based on pedigree data derived from observed parental behavior that ignore paternity error stemming from multiple mating. Such paternity error causes error in estimated coefficients of inbreeding (f) and reproductive success and could bias estimates of inbreeding depression. We used complete "apparent" pedigree data compiled from observed parental behavior and analogous "actual" pedigree data comprising genetic parentage to quantify effects of paternity error stemming from extra-pair reproduction on estimates of f, reproductive success, and inbreeding depression in free-living song sparrows (Melospiza melodia). Paternity error caused widespread error in estimates of f and male reproductive success, causing inbreeding depression in male and female annual and lifetime reproductive success and juvenile male survival to be substantially underestimated. Conversely, inbreeding depression in adult male survival tended to be overestimated when paternity error was ignored. Pedigree error stemming from extra-pair reproduction therefore caused substantial and divergent bias in estimates of inbreeding depression that could bias tests of evolutionary theories regarding inbreeding and inbreeding depression and their links to variation in mating system.

  13. Manifest variable path analysis: potentially serious and misleading consequences due to uncorrected measurement error.

    PubMed

    Cole, David A; Preacher, Kristopher J

    2014-06-01

    Despite clear evidence that manifest variable path analysis requires highly reliable measures, path analyses with fallible measures are commonplace even in premier journals. Using fallible measures in path analysis can cause several serious problems: (a) As measurement error pervades a given data set, many path coefficients may be either over- or underestimated. (b) Extensive measurement error diminishes power and can prevent invalid models from being rejected. (c) Even a little measurement error can cause valid models to appear invalid. (d) Differential measurement error in various parts of a model can change the substantive conclusions that derive from path analysis. (e) All of these problems become increasingly serious and intractable as models become more complex. Methods to prevent and correct these problems are reviewed. The conclusion is that researchers should use more reliable measures (or correct for measurement error in the measures they do use), obtain multiple measures for use in latent variable modeling, and test simpler models containing fewer variables.
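
    The bivariate special case of the problem described above can be written down directly: measurement error attenuates an observed correlation by the square root of the product of the two reliabilities, and the classical disattenuation formula undoes it. Multi-variable path models need the full latent-variable treatment the authors recommend, but the mechanism is the same. The numbers below are hypothetical.

      def disattenuated_r(r_observed, reliability_x, reliability_y):
          """Classical correction for attenuation of a correlation due to measurement error."""
          return r_observed / (reliability_x * reliability_y) ** 0.5

      # e.g. an observed r = 0.30 between two scales with reliabilities 0.70 and 0.75
      print(disattenuated_r(0.30, 0.70, 0.75))   # ~0.41: the true association is understated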

  14. Pointing and tracking errors due to localized deformation in inter-satellite laser communication links.

    PubMed

    Tan, Liying; Yang, Yuqiang; Ma, Jing; Yu, Jianjie

    2008-08-18

    Instead of Zernike polynomials, an ellipse Gaussian model is proposed to represent localized wave-front deformation when studying pointing and tracking errors in inter-satellite laser communication links, which simplifies the calculation. It is shown that both pointing and tracking errors depend on the center depth h, the radii a and b, and the distance d of the Gaussian distortion, and change regularly as these increase. The maximum peak values of pointing and tracking errors always appear around h=0.2lambda. The influence of localized deformation is up to 0.7 microrad for pointing error and 0.5 microrad for tracking error. To reduce the impact of localized deformation on pointing and tracking errors, a machining precision of the optical devices better than 0.2lambda is proposed. The principle for choosing optical devices with localized deformation is presented, and a method that adjusts the pointing direction to compensate for pointing and tracking errors is given. We hope the results can be used in the design of inter-satellite lasercom systems.

  15. Standard addition/absorption detection microfluidic system for salt error-free nitrite determination.

    PubMed

    Ahn, Jae-Hoon; Jo, Kyoung Ho; Hahn, Jong Hoon

    2015-07-30

    A continuous-flow microfluidic chip-based standard addition/absorption detection system has been developed for accurate determination of nitrite in water of varying salinity. The absorption detection of nitrite is made via color development using the Griess reaction. We have found the yield of the reaction is significantly affected by salinity (e.g., -12% error for 30‰ NaCl, 50.0 μg L(-1)N-NO2(-) solution). The microchip has been designed to perform standard addition, color development, and absorbance detection in sequence. To effectively block stray light, the microchip made from black poly(dimethylsiloxane) is placed on the top of a compact housing that accommodates a light-emitting diode, a photomultiplier tube, and an interference filter, where the light source and the detector are optically isolated. An 80-mm liquid-core waveguide mounted on the chip externally has been employed as the absorption detection flow cell. These designs for optics secure a wide linear response range (up to 500 μg L(-1)N-NO2(-)) and a low detection limit (0.12 μg L(-1)N-NO2(-) = 8.6 nM N-NO2(-), S/N = 3). From determination of nitrite in standard samples and real samples collected from an estuary, it has been demonstrated that our microfluidic system is highly accurate (<1% RSD, n = 3) and precise (<1% RSD, n = 3).
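
    A generic sketch of the standard-addition calculation that underlies this approach (not the instrument's own software): absorbance is regressed on the added analyte concentration and extrapolated to the x-intercept, whose magnitude is the concentration already present in the sample. Because the calibration is built in the sample's own matrix, a salinity-dependent change in the Griess reaction yield cancels out. The values below are hypothetical.

      import numpy as np

      added = np.array([0.0, 50.0, 100.0, 150.0])          # ug/L N-NO2- added to aliquots
      absorbance = np.array([0.105, 0.210, 0.312, 0.418])  # measured after color development

      slope, intercept = np.polyfit(added, absorbance, 1)
      c_unknown = intercept / slope   # magnitude of the x-intercept = in-sample concentration
      print(f"nitrite in sample ~ {c_unknown:.1f} ug/L")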

  16. Reducing On-Board Computer Propagation Errors Due to Omitted Geopotential Terms by Judicious Selection of Uploaded State Vector

    NASA Technical Reports Server (NTRS)

    Greatorex, Scott (Editor); Beckman, Mark

    1996-01-01

    Several future, and some current, missions use an on-board computer (OBC) force model that is very limited. The OBC geopotential force model typically includes only the J(2), J(3), J(4), C(2,2) and S(2,2) terms to model non-spherical Earth gravitational effects. The Tropical Rainfall Measuring Mission (TRMM), Wide-field Infrared Explorer (WIRE), Transition Region and Coronal Explorer (TRACE), Submillimeter Wave Astronomy Satellite (SWAS), and X-ray Timing Explorer (XTE) all plan to use this geopotential force model on-board. The Solar, Anomalous, and Magnetospheric Particle Explorer (SAMPEX) is already flying this geopotential force model. Past analysis has shown that one of the leading sources of error in the OBC propagated ephemeris is the omission of the higher order geopotential terms. However, these same analyses have shown a wide range of accuracies for the OBC ephemerides. Analysis was performed using EUVE state vectors that showed the EUVE four day OBC propagated ephemerides varied in accuracy from 200 m. to 45 km. depending on the initial vector used to start the propagation. The vectors used in the study were from a single EUVE orbit at one minute intervals in the ephemeris. Since each vector propagated practically the same path as the others, the differences seen had to be due to differences in the initial state vector only. An algorithm was developed that will optimize the epoch of the uploaded state vector. Proper selection can reduce the previous errors of anywhere from 200 m. to 45 km. to generally less than one km. over four days of propagation. This would enable flight projects to minimize state vector uploads to the spacecraft. Additionally, this method is superior to other methods in that no additional orbit estimates need be done. The definitive ephemeris generated on the ground can be used as long as the proper epoch is chosen. This algorithm can be easily coded in software that would pick the epoch within a specified time range that would

  17. Constrained localized-warping-reduced registration errors due to lesions in functional neuroimages

    NASA Astrophysics Data System (ADS)

    Radau, Perry E.; Slomka, Piotr J.; Julin, Per; Svensson, Leif; Wahlund, Lars-Olof

    2001-07-01

    The constrained, localized warping (CLW) algorithm was developed to minimize the registration errors caused by hypoperfusion lesions. SPECT brain perfusion images from 21 Alzheimer patients and 35 controls were analyzed. CLW automatically determines homologous landmarks on patient and template images. CLW was constrained by anatomy and where lesions were probable. CLW was compared with 3rd-degree, polynomial warping (AIR 3.0). Accuracy was assessed by correlation, overlap, and variance. 16 lesion types were simulated, repeated with 5 images. The errors in defect volume and intensity after registration were estimated by comparing the images resulting from warping transforms calculated when the defects were or were not present. Registration accuracy of normal studies was very similar between CLW and polynomial warping methods, and showed marked improvement over linear registration. The lesions had minimal effect on the CLW algorithm accuracy, with small errors in volume (> -4%) and intensity (< +2%). The accuracy improvement compared with not warping was nearly constant regardless of defect: +1.5% overlap and +0.001 correlation. Polynomial warping caused larger errors in defect volume (< -10%) and intensity (> +2.5%) for most defects. CLW is recommended because it caused small errors in defect estimation and improved the registration accuracy in all cases.

  18. Irradiance measurement errors due to the assumption of a Lambertian reference panel

    NASA Technical Reports Server (NTRS)

    Kimes, D. S.; Kirchner, J. A.

    1982-01-01

    A technique is presented for determining the error in diurnal irradiance measurements that results from the non-Lambertian behavior of a reference panel under various irradiance conditions. Spectral biconical reflectance factors of a spray-painted barium sulfate panel, along with simulated sky radiance data for clear and hazy skies at six solar zenith angles, were used to calculate the estimated panel irradiances and true irradiances for a nadir-looking sensor in two wavelength bands. The inherent errors in total spectral irradiance (0.68 microns) for a clear sky were 0.60, 6.0, 13.0, and 27.0% for solar zenith angles of 0, 45, 60, and 75 deg, respectively. The technique can be used to characterize the error of a specific panel used in field measurements, and thus eliminate any ambiguity of the effects of the type, preparation, and aging of the paint.

  19. Compensation of errors due to incident beam drift in a 3 DOF measurement system for linear guide motion.

    PubMed

    Hu, Pengcheng; Mao, Shuai; Tan, Jiu-Bin

    2015-11-02

    A measurement system with three degrees of freedom (3 DOF) that compensates for errors caused by incident beam drift is proposed. The system's measurement model (i.e. its mathematical foundation) is analyzed, and a measurement module (i.e. the designed orientation measurement unit) is developed and adopted to measure simultaneously straightness errors and the incident beam direction; thus, the errors due to incident beam drift can be compensated. The experimental results show that the proposed system has a deviation of 1 μm in the range of 200 mm for distance measurements, and a deviation of 1.3 μm in the range of 2 mm for straightness error measurements.

  20. QUANTIFYING UNCERTAINTY DUE TO RANDOM ERRORS FOR MOMENT ANALYSES OF BREAKTHROUGH CURVES

    EPA Science Inventory

    The uncertainty in moments calculated from breakthrough curves (BTCs) is investigated as a function of random measurement errors in the data used to define the BTCs. The method presented assumes moments are calculated by numerical integration using the trapezoidal rule, and is t...
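
    A minimal sketch consistent with the description above (the abstract is truncated, so the details here are assumptions): temporal moments of a breakthrough curve are computed with the trapezoidal rule, and the uncertainty due to random measurement error is propagated by Monte Carlo perturbation of the concentration data.

      import numpy as np

      def trapz(y, x):
          """Trapezoidal-rule integral of y(x)."""
          return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

      def moments(t, c):
          m0 = trapz(c, t)                        # zeroth moment (area)
          m1 = trapz(t * c, t) / m0               # mean arrival time
          m2 = trapz((t - m1) ** 2 * c, t) / m0   # variance about the mean
          return m0, m1, m2

      rng = np.random.default_rng(1)
      t = np.linspace(0.0, 50.0, 201)
      c_true = np.exp(-0.5 * ((t - 20.0) / 4.0) ** 2)   # synthetic breakthrough curve
      sigma = 0.02                                      # assumed random measurement error

      runs = [moments(t, np.clip(c_true + rng.normal(0.0, sigma, t.size), 0.0, None))
              for _ in range(1000)]
      m1_samples = np.array([r[1] for r in runs])
      print("mean arrival time:", m1_samples.mean(), "+/-", m1_samples.std())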

  1. Reducing Bias and Error in the Correlation Coefficient Due to Nonnormality

    ERIC Educational Resources Information Center

    Bishara, Anthony J.; Hittner, James B.

    2015-01-01

    It is more common for educational and psychological data to be nonnormal than to be approximately normal. This tendency may lead to bias and error in point estimates of the Pearson correlation coefficient. In a series of Monte Carlo simulations, the Pearson correlation was examined under conditions of normal and nonnormal data, and it was compared…
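
    A small Monte Carlo in the spirit of the study above (not its actual design): bivariate normal data are drawn with a latent correlation of 0.5, one variable is skewed by exponentiation, and the average sample Pearson r for the skewed data falls below the latent value.

      import numpy as np

      rng = np.random.default_rng(42)
      rho, n, reps = 0.5, 30, 5000
      cov = np.array([[1.0, rho], [rho, 1.0]])

      rs_normal, rs_skewed = [], []
      for _ in range(reps):
          x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
          rs_normal.append(np.corrcoef(x, y)[0, 1])
          rs_skewed.append(np.corrcoef(np.exp(x), y)[0, 1])   # lognormal (skewed) x, normal y

      print("normal data: mean r =", np.mean(rs_normal))   # close to 0.5
      print("skewed data: mean r =", np.mean(rs_skewed))   # noticeably below 0.5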

  2. Distorted orbit due to field errors and particle trajectories in combined undulator and axial magnetic field

    SciTech Connect

    Papadichev, V.A.

    1995-12-31

    Undulator and solenoid field errors cause electron trajectory deviation from the ideal orbit. Even small errors can result in a large lower frequency excursion from the undulator axis of a distorted orbit and of betatron oscillations performed now around it, especially near resonant conditions. Numerical calculation of a trajectory step by step requires large computing time and treats only particular cases, thus lacking generality. Theoretical treatment is traditionally based on random distribution of field errors, which allows a rather general approach, but is not convenient for practical purposes. In contrast, analytical treatment shows explicitly how distorted orbit and betatron oscillation amplitude depend on field parameters and errors and indicates how to eliminate these distortions. An analytical solution of the equations of motion can be found by expanding field errors and distorted orbit in Fourier series as was done earlier for the simplest case of a plane undulator without axial magnetic field. The same method is applied now to the more general case of combined generalized undulator and axial magnetic fields. The undulator field is a superposition of the fields of two plane undulators with mutually orthogonal fields and an arbitrary axial shift of the second undulator relative to the first. Beam space-charge forces and external linear focusing are taken into account. The particle trajectory is a superposition of ideal and distorted orbits with cyclotron gyration and slow drift gyration in the axial magnetic field caused by a balance of focusing and defocusing forces. The amplitudes of these gyrations depend on transverse coordinate and velocity at injection and can nearly double the total deviation of an electron from the undulator axis even after an adiabatic undulator entry. If the wavenumber of any Fourier harmonic is close to the wavenumbers of cyclotron or drift gyrations, a resonant increase of orbit distortion occurs.

  3. Sinusoidal Siemens star spatial frequency response measurement errors due to misidentified target centers

    SciTech Connect

    Birch, Gabriel Carisle; Griffin, John Clark

    2015-07-23

    Numerous methods are available to measure the spatial frequency response (SFR) of an optical system. A recent change to the ISO 12233 photography resolution standard includes a sinusoidal Siemens star test target. We take the sinusoidal Siemens star proposed by the ISO 12233 standard, measure system SFR, and perform an analysis of errors induced by incorrectly identifying the center of a test target. We show a closed-form solution for the radial profile intensity measurement given an incorrectly determined center and describe how this error reduces the measured SFR of the system. As a result, using the closed-form solution, we propose a two-step process by which test target centers are corrected and the measured SFR is restored to the nominal, correctly centered values.

  4. Sinusoidal Siemens star spatial frequency response measurement errors due to misidentified target centers

    DOE PAGES

    Birch, Gabriel Carisle; Griffin, John Clark

    2015-07-23

    Numerous methods are available to measure the spatial frequency response (SFR) of an optical system. A recent change to the ISO 12233 photography resolution standard includes a sinusoidal Siemens star test target. We take the sinusoidal Siemens star proposed by the ISO 12233 standard, measure system SFR, and perform an analysis of errors induced by incorrectly identifying the center of a test target. We show a closed-form solution for the radial profile intensity measurement given an incorrectly determined center and describe how this error reduces the measured SFR of the system. As a result, using the closed-form solution, we propose a two-step process by which test target centers are corrected and the measured SFR is restored to the nominal, correctly centered values.

  5. Sinusoidal Siemens star spatial frequency response measurement errors due to misidentified target centers

    NASA Astrophysics Data System (ADS)

    Birch, Gabriel C.; Griffin, John C.

    2015-07-01

    Numerous methods are available to measure the spatial frequency response (SFR) of an optical system. A recent change to the ISO 12233 photography resolution standard includes a sinusoidal Siemens star test target. We take the sinusoidal Siemens star proposed by the ISO 12233 standard, measure system SFR, and perform an analysis of errors induced by incorrectly identifying the center of a test target. We show a closed-form solution for the radial profile intensity measurement given an incorrectly determined center and describe how this error reduces the measured SFR of the system. Using the closed-form solution, we propose a two-step process by which test target centers are corrected and the measured SFR is restored to the nominal, correctly centered values.

  6. Variability of modal behavior in terms of critical speeds of a gear pair due to manufacturing errors and shaft misalignments

    NASA Astrophysics Data System (ADS)

    Driot, N.; Perret-Liaudet, J.

    2006-05-01

    This paper deals with the variability of the dynamic behavior induced by the transmission error of a mass-production gear pair. This variability originates from manufacturing errors. The tolerances associated with shaft misalignment errors and with gear tooth profile and lead errors are treated as independent random geometric parameters. A procedure based on Taguchi's method is used to treat the tolerances statistically. The efficiency of this methodology is demonstrated by considering a simple dynamic model of a single spur gear pair. The predicted variations in dynamic behavior due to tolerances are verified by comparison with results obtained using Monte Carlo simulations. The analyzed parameters are, first, the static transmission error and the time-average mesh stiffness. As a consequence of the variability of the mesh stiffness, statistical variations of natural frequencies are observed for critical modes (high-energy modes). The related critical speed ranges are also given. Finally, the variations of the high-energy mode shapes are also observed.

  7. Errors in confocal fluorescence ratiometric imaging microscopy due to chromatic aberration.

    PubMed

    Lin, Yuxiang; Gmitro, Arthur F

    2011-01-01

    Confocal fluorescence ratiometric imaging is an optical technique used to measure a variety of important biological parameters. A small amount of chromatic aberration in the microscope system can introduce a variation in the signal ratio dependent on the fluorophore concentration gradient along the optical axis and lead to bias in the measurement. We present a theoretical model of this effect. Experimental results and simulations clearly demonstrate that this error can be significant and should not be ignored.

  8. Spectral radiance errors in remote sensing ground studies due to nearby objects

    NASA Technical Reports Server (NTRS)

    Kimes, D. S.; Kirchner, J. A.; Newcomb, W. W.

    1983-01-01

    Attention is given to the error that can occur in all radiometric measurements owing to the presence of nearby objects. When a researcher positions himself on the side of a target point opposite the sun, his body gives rise to two erroneous effects. First, it blocks a portion of the incoming diffuse sky radiance to the target point, and second, it reflects incoming diffuse and direct solar irradiance and ground exitance onto the target point. It is noted that the same phenomenon occurs for any nearby object, whether it be a field truck, a building structure, or a row of trees. This error deriving from nearby objects is often not recognized by researchers or is considered insignificant with no knowledge of its magnitude. The approach taken here is to mathematically model the radiant transfers that take place between the global irradiance, panel, or scene and the object and to report the magnitude of this error for various solar zenith angles, wavelengths, size and distances of objects (steradian blockage), and spectral reflectances of the scene and object. The scene, object, and panel are assumed to be Lambertian, and the object is always located on the side of the target point opposite the sun.

  9. Spectral noise due to sampling errors in Fourier-transform spectroscopy.

    PubMed

    Palchetti, L; Lastrucci, D

    2001-07-01

    An assessment is made of the spectral noise in Fourier-transform spectroscopy caused by sampling errors in the interferogram acquisition. Numerical evaluations are performed in the case of the REFIR (radiation explorer in the far infrared) instrument developed for the measurement of the long-wavelength Earth emissions from satellite platforms. In this case the slow response of a room-temperature pyroelectric detector, the relatively short acquisition time, the broadband operation, and the wish for a relaxed requirement of the mirror drive accuracy make sampling error an important issue. Different sampling methods can be considered for reduction of the spectral noise induced by sampling errors. The effects of different sampling methods are quantified and discussed for the selection of the most-suitable option for this instrument. We find that only sampling methods that introduce some compensation (either analog or digital) of the frequency dependence of amplitude and phase components of the acquisition-system responsivity provide satisfactory results. In particular, the equal time sampling followed by a digital filter and numerical resampling has been examined minutely with a simulation model used to perform sensitivity tests of the main parameters that characterize the procedure.

  10. A model study of potential sampling errors due to data scatter around expendable bathythermograph transects in the tropical Pacific

    NASA Technical Reports Server (NTRS)

    Mcphaden, Michael J.; Busalacchi, Antonio J.; Picaut, Joel; Raymond, Gary

    1988-01-01

    A linear multiple vertical-mode model described by McPhaden et al. (1988) is used to examine potential errors due to data scatter around expendable bathythermograph (XBT) transects in the tropical Pacific. Two methods of sampling are compared. In the first, the model was sampled along approximately straight lines of grid points corresponding to the mean positions of XBT tracks in the eastern, central, and western Pacific; in the second, the model was sampled again at the dates and locations of actual XBT casts for 1979-1983. The model indicates that the data scattered zonally around XBT transects in general can lead to about 2 dyn cm error in dynamic height in composite sections of XBT data. Errors larger than 2 dyn cm occurred in regions where XBT sample spacing in the zonal direction was insufficient to resolve Rossby wave variations in the model.

  11. Mitigation of Angle Tracking Errors Due to Color Dependent Centroid Shifts in SIM-Lite

    NASA Technical Reports Server (NTRS)

    Nemati, Bijan; An, Xin; Goullioud, Renaud; Shao, Michael; Shen, Tsae-Pyng; Wehmeier, Udo J.; Weilert, Mark A.; Wang, Xu; Werne, Thomas A.; Wu, Janet P.; Zhai, Chengxing

    2010-01-01

    The SIM-Lite astrometric interferometer will search for Earth-size planets in the habitable zones of nearby stars. In this search the interferometer will monitor the astrometric position of candidate stars relative to nearby reference stars over the course of a 5 year mission. The elemental measurement is the angle between a target star and a reference star. This is a two-step process, in which the interferometer will each time need to use its controllable optics to align the starlight in the two arms with each other and with the metrology beams. The sensor for this alignment is an angle tracking CCD camera. Various constraints in the design of the camera subject it to systematic alignment errors when observing a star of one spectrum compared with a star of a different spectrum. This effect is called a Color Dependent Centroid Shift (CDCS) and has been studied extensively with SIM-Lite's SCDU testbed. Here we describe results from the simulation and testing of this error in the SCDU testbed, as well as effective ways that it can be reduced to acceptable levels.

  12. Dosage uniformity problems which occur due to technological errors in extemporaneously prepared suppositories in hospitals and pharmacies.

    PubMed

    Kalmár, Eva; Lasher, Jason Richard; Tarry, Thomas Dean; Myers, Andrea; Szakonyi, Gerda; Dombi, György; Baki, Gabriella; Alexander, Kenneth S

    2014-09-01

    The availability of suppositories in Hungary, especially in clinical pharmacy practice, is usually provided by extemporaneous preparations. Due to the known advantages of rectal drug administration, its benefits are frequently utilized in pediatrics. However, errors during the extemporaneous manufacturing process can lead to non-homogenous drug distribution within the dosage units. To determine the root cause of these errors and provide corrective actions, we studied suppository samples prepared with exactly known errors using both cerimetric titration and HPLC technique. Our results show that the most frequent technological error occurs when the pharmacist fails to use the correct displacement factor in the calculations which could lead to a 4.6% increase/decrease in the assay in individual dosage units. The second most important source of error can occur when the molding excess is calculated solely for the suppository base. This can further dilute the final suppository drug concentration causing the assay to be as low as 80%. As a conclusion we emphasize that the application of predetermined displacement factors in calculations for the formulation of suppositories is highly important, which enables the pharmacist to produce a final product containing exactly the determined dose of an active substance despite the different densities of the components.
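
    A hedged worked example of the displacement-factor calculation identified above as the main error source (the numbers are illustrative, not taken from the study): the mass of base displaced by the drug is the drug mass divided by its displacement value, so omitting the factor calls for too much base and dilutes the dose.

      def base_per_suppository(mold_capacity_g, drug_dose_g, displacement_value):
          """Mass of base per mold once the volume displaced by the drug is accounted for."""
          return mold_capacity_g - drug_dose_g / displacement_value

      print(base_per_suppository(2.0, 0.3, 1.5))   # 1.80 g of base per suppository
      print(base_per_suppository(2.0, 0.3, 1.0))   # 1.70 g if the factor is (wrongly) taken as 1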

  13. Quantifying Errors in Jet Noise Research Due to Microphone Support Reflection

    NASA Technical Reports Server (NTRS)

    Nallasamy, Nambi; Bridges, James

    2002-01-01

    The reflection coefficient of a microphone support structure used in jet noise testing is documented through tests performed in the anechoic AeroAcoustic Propulsion Laboratory. The tests involve the acquisition of acoustic data from a microphone mounted in the support structure while noise is generated from a known broadband source. The ratio of reflected signal amplitude to the original signal amplitude is determined by performing an auto-correlation on the data. The documentation of the reflection coefficients is one component of the validation of jet noise data acquired using the given microphone support structure. Finally, two forms of acoustic material were applied to the microphone support structure to determine their effectiveness in reducing reflections which give rise to bias errors in the microphone measurements.

  14. Doing the Right Thing: A Qualitative Investigation of Retractions Due to Unintentional Error.

    PubMed

    Hosseini, Mohammad; Hilhorst, Medard; de Beaufort, Inez; Fanelli, Daniele

    2017-03-20

    Retractions solicited by authors following the discovery of an unintentional error-what we henceforth call a "self-retraction"-are a new phenomenon of growing importance, about which very little is known. Here we present results of a small qualitative study aimed at gaining preliminary insights about circumstances, motivations and beliefs that accompanied the experience of a self-retraction. We identified retraction notes that unambiguously reported an honest error and that had been published between the years 2010 and 2015. We limited our sample to retractions with at least one co-author based in the Netherlands, Belgium, United Kingdom, Germany or a Scandinavian country, and we invited these authors to a semi-structured interview. Fourteen authors accepted our invitation. Contrary to our initial assumptions, most of our interviewees had not originally intended to retract their paper. They had contacted the journal to request a correction and the decision to retract had been made by journal editors. All interviewees reported that having to retract their own publication made them concerned for their scientific reputation and career, often causing considerable stress and anxiety. Interviewees also encountered difficulties in communicating with the journal and recalled other procedural issues that had unnecessarily slowed down the process of self-retraction. Intriguingly, however, all interviewees reported how, contrary to their own expectations, the self-retraction had brought no damage to their reputation and in some cases had actually improved it. We also examined the ethical motivations that interviewees ascribed, retrospectively, to their actions and found that such motivations included a combination of moral and prudential (i.e. pragmatic) considerations. These preliminary results suggest that scientists would welcome innovations to facilitate the process of self-retraction.

  15. Prevalence of visual impairment due to uncorrected refractive error: Results from Delhi-Rapid Assessment of Visual Impairment Study

    PubMed Central

    Senjam, Suraj Singh; Vashist, Praveen; Gupta, Noopur; Malhotra, Sumit; Misra, Vasundhara; Bhardwaj, Amit; Gupta, Vivek

    2016-01-01

    Aim: To estimate the prevalence of visual impairment (VI) due to uncorrected refractive error (URE) and to assess the barriers to utilization of services in the adult urban population of Delhi. Materials and Methods: A population-based rapid assessment of VI was conducted among people aged 40 years and above in 24 randomly selected clusters of East Delhi district. Presenting visual acuity (PVA) was assessed in each eye using Snellen's E chart. Pinhole examination was done if PVA was <20/60 in either eye and ocular examination to ascertain the cause of VI. Barriers to utilization of services for refractive error were recorded with questionnaires. Results: Of 2421 individuals enumerated, 2331 (96%) individuals were examined. Females were 50.7% among them. The mean age of all examined subjects was 51.32 ± 10.5 years (standard deviation). VI in either eye due to URE was present in 275 individuals (11.8%, 95% confidence interval [CI]: 10.5–13.1). URE was identified as the most common cause (53.4%) of VI. The overall prevalence of VI due to URE in the study population was 6.1% (95% CI: 5.1–7.0). The elder population as well as females were more likely to have VI due to URE (odds ratio [OR] = 12.3; P < 0.001 and OR = 1.5; P < 0.02). Lack of felt need was the most common reported barrier (31.5%). Conclusions: The prevalence of VI due to URE among the urban adult population of Delhi is still high despite the availability of abundant eye care facilities. The majority of reported barriers are related to human behavior and attitude toward the refractive error. Understanding these aspects will help in planning appropriate strategies to eliminate VI due to URE. PMID:27380979

  16. Initialization shock in decadal hindcasts due to errors in wind stress over the tropical Pacific

    NASA Astrophysics Data System (ADS)

    Pohlmann, Holger; Kröger, Jürgen; Greatbatch, Richard J.; Müller, Wolfgang A.

    2016-12-01

    Low prediction skill in the tropical Pacific is a common problem in decadal prediction systems, especially for lead years 2-5 which, in many systems, is lower than in uninitialized experiments. On the other hand, the tropical Pacific is of almost worldwide climate relevance through its teleconnections with other tropical and extratropical regions and also of importance for global mean temperature. Understanding the causes of the reduced prediction skill is thus of major interest for decadal climate predictions. We look into the problem of reduced prediction skill by analyzing the Max Planck Institute Earth System Model (MPI-ESM) decadal hindcasts for the fifth phase of the Climate Model Intercomparison Project and performing a sensitivity experiment in which hindcasts are initialized from a model run forced only by surface wind stress. In both systems, sea surface temperature variability in the tropical Pacific is successfully initialized, but most skill is lost at lead years 2-5. Utilizing the sensitivity experiment enables us to pin down the reason for the reduced prediction skill in MPI-ESM to errors in wind stress used for the initialization. A spurious trend in the wind stress forcing displaces the equatorial thermocline in MPI-ESM unrealistically. When the climate model is then switched into its forecast mode, the recovery process triggers artificial El Niño and La Niña events at the surface. Our results demonstrate the importance of realistic wind stress products for the initialization of decadal predictions.

  17. A Framework for Dealing With Uncertainty due to Model Structure Error

    NASA Astrophysics Data System (ADS)

    van der Keur, P.; Refsgaard, J.; van der Sluijs, J.; Brown, J.

    2004-12-01

    Although uncertainty about structures of environmental models (conceptual uncertainty) has been recognised often to be the main source of uncertainty in model predictions, it is rarely considered in environmental modelling. Rather, formal uncertainty analyses have traditionally focused on model parameters and input data as the principal source of uncertainty in model predictions. The traditional approach to model uncertainty analysis that considers only a single conceptual model, fails to adequately sample the relevant space of plausible models. As such, it is prone to modelling bias and underestimation of model uncertainty. In this paper we review a range of strategies for assessing structural uncertainties. The existing strategies fall into two categories depending on whether field data are available for the variable of interest. Most research attention has until now been devoted to situations, where model structure uncertainties can be assessed directly on the basis of field data. This corresponds to a situation of `interpolation'. However, in many cases environmental models are used for `extrapolation' beyond the situation and the field data available for calibration. A framework is presented for assessing the predictive uncertainties of environmental models used for extrapolation. The key elements are the use of alternative conceptual models and assessment of their pedigree and the adequacy of the samples of conceptual models to represent the space of plausible models by expert elicitation. Keywords: model error, model structure, conceptual uncertainty, scenario analysis, pedigree

  18. Investigating treatment dose error due to beam attenuation by a carbon fiber tabletop.

    PubMed

    Myint, W Kenji; Niedbala, Malgorzata; Wilkins, David; Gerig, Lee H

    2006-08-24

    Carbon fiber is commonly used in radiation therapy for treatment tabletops and various immobilization and support devices, partially because it is generally perceived to be almost radiotransparent to high-energy photons. To avoid exposure to normal tissue during modern radiation therapy, one must deliver the radiation from all gantry angles; hence, beams often transit the couch proximal to the patient. The effects of the beam attenuation by the support structure of the couch are often neglected in the planning process. In this study, we investigate the attenuation of 6-MV and 18-MV photon beams by a Medtec (Orange City, IA) carbon fiber couch. We have determined that neglecting the attenuation of oblique treatment fields by the carbon fiber couch can result in localized dose reduction from 4% to 16%, depending on energy, field size, and geometry. Further, we investigate the ability of a commercial treatment-planning system (Theraplan Plus v3.8) to account for the attenuation by the treatment couch. Results show that incorporating the carbon fiber couch in the patient model reduces the dose error to less than 2%. The variation in dose reduction as a function of longitudinal couch position was also measured. In the triangular strut region of the couch, the attenuation varied +/- 0.5% following the periodic nature of the support structure. Based on these findings, we propose the routine incorporation of the treatment tabletop into patient treatment planning dose calculations.
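
    A back-of-the-envelope sketch of why oblique fields see more attenuation (not the paper's measured data): exponential attenuation through a flat panel, with the path length growing as 1/cos(theta) of the beam obliquity. The effective attenuation coefficient and thickness below are assumed, illustrative values only.

      import math

      def transmission(mu_per_cm, thickness_cm, obliquity_deg):
          """Fraction of the beam transmitted through a flat panel at a given obliquity."""
          path_cm = thickness_cm / math.cos(math.radians(obliquity_deg))
          return math.exp(-mu_per_cm * path_cm)

      mu, t = 0.07, 0.6   # assumed effective values for a carbon fiber shell at megavoltage energies
      for angle in (0, 30, 60):
          print(angle, f"{(1.0 - transmission(mu, t, angle)) * 100:.1f}% attenuation")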

  19. Additional Enhancement of Electric Field in Surface-Enhanced Raman Scattering due to Fresnel Mechanism

    NASA Astrophysics Data System (ADS)

    Jayawardhana, Sasani; Rosa, Lorenzo; Juodkazis, Saulius; Stoddart, Paul R.

    2013-08-01

    Surface-enhanced Raman scattering (SERS) is attracting increasing interest for chemical sensing, surface science research and as an intriguing challenge in nanoscale plasmonic engineering. Several studies have shown that SERS intensities are increased when metal island film substrates are excited through a transparent base material, rather than directly through air. However, to our knowledge, the origin of this additional enhancement has never been satisfactorily explained. In this paper, finite difference time domain modeling is presented to show that the electric field intensity at the dielectric interface between metal particles is higher for ``far-side'' excitation than ``near-side''. This is reasonably consistent with the observed enhancement for silver islands on SiO2. The modeling results are supported by a simple analytical model based on Fresnel reflection at the interface, which suggests that the additional SERS signal is caused by near-field enhancement of the electric field due to the phase shift at the dielectric interface.
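
    A simple normal-incidence Fresnel estimate consistent with the analytical argument above (treating the substrate as a bare glass/air interface is an oversimplification of the island film, but it shows the sign of the effect): the field driving the particles scales as |1 + r|, and the amplitude reflection coefficient r changes sign with the direction of incidence.

      def interface_field_factor(n_incident, n_transmitted):
          """Field at the interface, relative to the incident field, at normal incidence."""
          r = (n_incident - n_transmitted) / (n_incident + n_transmitted)  # amplitude coefficient
          return abs(1.0 + r)

      n_air, n_glass = 1.0, 1.46   # fused silica in the visible (approximate)
      print("near-side (air -> glass):", interface_field_factor(n_air, n_glass))   # < 1
      print("far-side  (glass -> air):", interface_field_factor(n_glass, n_air))   # > 1
      # SERS scales roughly as |E|^4, so even this modest field ratio compounds quickly.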

  20. Determination of stores pointing error due to wing flexibility under flight load

    NASA Technical Reports Server (NTRS)

    Lokos, William A.; Bahm, Catherine M.; Heinle, Robert A.

    1995-01-01

    The in-flight elastic wing twist of a fighter-type aircraft was studied to provide for an improved on-board real-time computed prediction of pointing variations of three wing store stations. This is an important capability to correct sensor pod alignment variation or to establish initial conditions of iron bombs or smart weapons prior to release. The original algorithm was based upon coarse measurements. The electro-optical Flight Deflection Measurement System measured the deformed wing shape in flight under maneuver loads to provide a higher resolution database from which an improved twist prediction algorithm could be developed. The FDMS produced excellent repeatable data. In addition, a NASTRAN finite-element analysis was performed to provide additional elastic deformation data. The FDMS data combined with the NASTRAN analysis indicated that an improved prediction algorithm could be derived by using a different set of aircraft parameters, namely normal acceleration, stores configuration, Mach number, and gross weight.

  1. Estimating Errors in Satellite Retrievals of Bio-Optical Properties due to Incorrect Aerosol Model Selection

    DTIC Science & Technology

    2011-01-01

    as Martha’s Vineyard or Venice. This is due to a large amount of cloud coverage during the year, as well as the AERONET-OC station being unavailable...can be used to produce a good result for nLw(412) for day 176. This is an instance where the MODIS image has sporadic cloud coverage, as well as haze...1989). [10] Gordon, H. R., Brown, J. W. and Evans, R. H., "Exact Rayleigh scattering calculations for use with the Nimbus -7 Coastal Zone Color

  2. Thermal-induced phase-shift error of a fiber-optic gyroscope due to fiber tail length asymmetry.

    PubMed

    Zhang, Yunhao; Zhang, Yonggang; Gao, Zhongxing

    2017-01-10

    As a high-precision angular sensor, the fiber-optic gyroscope (FOG) is highly sensitive to disturbances in the environmental temperature. Research on the thermal-induced error of the FOG is therefore important for improving its robustness and reliability in practical applications. In this paper, the thermal-induced nonreciprocal phase-shift error of the FOG due to asymmetric fiber tail length is discussed in detail, based on temperature diffusion theory. Theoretical analysis shows that the increase of the thermal-induced nonreciprocal phase shift of the FOG is proportional to the asymmetric tail length. Moreover, experiments with temperature ranging from -40°C to 60°C are performed to confirm the analysis. The analysis and experimental results indicate that the asymmetry of the fiber coil due to imperfect winding and the assembly process may be compensated by adjusting the fiber tail length, which can reduce the thermal-induced phase-shift error and further improve the adaptability of the FOG to a changing ambient temperature.

  3. Subspace electrode selection methodology for EEG multiple source localization error reduction due to uncertain conductivity values.

    PubMed

    Crevecoeur, Guillaume; Yitembe, Bertrand; Dupre, Luc; Van Keer, Roger

    2013-01-01

    This paper proposes a modification of the subspace correlation cost function and the Recursively Applied and Projected Multiple Signal Classification (RAP-MUSIC) method for electroencephalography (EEG) source analysis in epilepsy. This enables the reconstruction of neural source locations and orientations that are less degraded by uncertain knowledge of the head conductivity values. An extended linear forward model is used in the subspace correlation cost function that incorporates the sensitivity of the EEG potentials to the uncertain conductivity value parameter. More specifically, the principal vector of the subspace correlation function is used to provide relevant information for solving the EEG inverse problem. A simulation study is carried out on a simplified spherical head model with an uncertain skull-to-soft-tissue conductivity ratio. Results show an improvement in the reconstruction accuracy of source parameters compared to the traditional methodology when using conductivity ratio values that are different from the actual conductivity ratio.

  4. ANALYSIS OF DISTRIBUTION FEEDER LOSSES DUE TO ADDITION OF DISTRIBUTED PHOTOVOLTAIC GENERATORS

    SciTech Connect

    Tuffner, Francis K.; Singh, Ruchi

    2011-08-09

    Distributed generators (DG) are small scale power supplying sources owned by customers or utilities and scattered throughout the power system distribution network. Distributed generation can be both renewable and non-renewable. Addition of distributed generation is primarily to increase feeder capacity and to provide peak load reduction. However, this addition comes with several impacts on the distribution feeder. Several studies have shown that addition of DG leads to reduction of feeder loss. However, most of these studies have considered lumped load and distributed load models to analyze the effects on system losses, where the dynamic variation of load due to seasonal changes is ignored. It is very important for utilities to minimize the losses under all scenarios to decrease revenue losses, promote efficient asset utilization, and therefore, increase feeder capacity. This paper will investigate an IEEE 13-node feeder populated with photovoltaic generators on detailed residential houses with water heater, Heating Ventilation and Air conditioning (HVAC) units, lights, and other plug and convenience loads. An analysis of losses for different power system components, such as transformers, underground and overhead lines, and triplex lines, will be performed. The analysis will utilize different seasons and different solar penetration levels (15%, 30%).

  5. Assessment of measurement error due to sampling perspective in the space-based Doppler lidar wind profiler

    NASA Technical Reports Server (NTRS)

    Houston, S. H.; Emmitt, G. D.

    1986-01-01

    A Multipair Algorithm (MPA) has been developed to minimize the contribution of the sampling error in the simulated Doppler lidar wind profiler measurements (due to angular and spatial separation between shots in a shot pair) to the total measurement uncertainty. Idealized wind fields are used as input to the profiling model, and radial wind estimates are passed through the MPA to yield a wind measurement for 300 x 300 sq km areas. The derived divergence fields illustrate the gradient patterns that are particular to the Doppler lidar sampling strategy and perspective.

  6. Additional Enhancement of Electric Field in Surface-Enhanced Raman Scattering due to Fresnel Mechanism

    PubMed Central

    Jayawardhana, Sasani; Rosa, Lorenzo; Juodkazis, Saulius; Stoddart, Paul R.

    2013-01-01

    Surface-enhanced Raman scattering (SERS) is attracting increasing interest for chemical sensing, surface science research and as an intriguing challenge in nanoscale plasmonic engineering. Several studies have shown that SERS intensities are increased when metal island film substrates are excited through a transparent base material, rather than directly through air. However, to our knowledge, the origin of this additional enhancement has never been satisfactorily explained. In this paper, finite difference time domain modeling is presented to show that the electric field intensity at the dielectric interface between metal particles is higher for “far-side” excitation than “near-side”. This is reasonably consistent with the observed enhancement for silver islands on SiO2. The modeling results are supported by a simple analytical model based on Fresnel reflection at the interface, which suggests that the additional SERS signal is caused by near-field enhancement of the electric field due to the phase shift at the dielectric interface. PMID:23903714

  7. Error Analysis

    NASA Astrophysics Data System (ADS)

    Scherer, Philipp O. J.

    Input data as well as the results of elementary operations have to be represented by machine numbers, the subset of real numbers which is used by the arithmetic unit of today's computers. Generally this generates rounding errors. This kind of numerical error can be avoided in principle by using arbitrary-precision arithmetic or symbolic algebra programs. But this is impractical in many cases due to the increase in computing time and memory requirements. Results from more complex operations like square roots or trigonometric functions can have even larger errors since series expansions have to be truncated and iterations accumulate the errors of the individual steps. In addition, the precision of input data from an experiment is limited. In this chapter we study the influence of numerical errors on the uncertainties of the calculated results and the stability of simple algorithms.
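
    Two small illustrations of the rounding-error behavior discussed above: 0.1 has no exact binary floating-point representation, and subtracting nearly equal numbers (catastrophic cancellation) loses most significant digits.

      import math

      total = sum(0.1 for _ in range(10))
      print(total == 1.0)    # False
      print(total - 1.0)     # ~ -1.1e-16 of accumulated rounding error

      # catastrophic cancellation versus a mathematically equivalent, stable formulation
      x = 1e-8
      print(1.0 - math.cos(x))               # 0.0: all significant digits cancel
      print(2.0 * math.sin(x / 2.0) ** 2)    # ~5e-17, the correct small value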

  8. Exposure estimation errors to nitrogen oxides on a population scale due to daytime activity away from home.

    PubMed

    Shafran-Nathan, Rakefet; Yuval; Levy, Ilan; Broday, David M

    2017-02-15

    Accurate estimation of exposure to air pollution is necessary for assessing the impact of air pollution on public health. Most environmental epidemiology studies assign the home address exposure to the study subjects. Here, we quantify the exposure estimation error at the population scale due to assigning it solely at the place of residence. A cohort of most schoolchildren in Israel (~950,000), age 6-18, and a representative cohort of Israeli adults (~380,000), age 24-65, were used. For each subject the home and the work or school addresses were geocoded. Together, these two microenvironments account for the locations at which people are present during most of the weekdays. For each subject, we estimated ambient nitrogen oxide concentrations at the home and work or school addresses using two air quality models: a stationary land use regression model and a dynamic dispersion-like model. On average, accounting for the subjects' work or school address as well as for the daily pollutant variation reduced the estimation error of exposure to ambient NOx/NO2 by 5-10 ppb, since daytime concentrations at work/school and at home can differ significantly. These results were consistent regardless of which air quality model was used, and even for subjects who work or study close to their home. Yet, due to their usually short commute, assigning schoolchildren's exposure solely at their residential address seems to be a reasonable approximation. In contrast, since adults commute for longer distances, assigning exposure of adults only at the residential place has a lower correlation with the daily weighted exposure, resulting in larger exposure estimation errors. We show that exposure misclassification can result from not accounting for the subjects' time-location trajectories through the spatiotemporally varying pollutant concentrations field.

  9. Calculation of stochastic broadening in real space due to noise and field errors in the DIII-D tokamak

    NASA Astrophysics Data System (ADS)

    Brodsky, Lisa; Punjabi, Alkesh; Ali, Halima

    2008-11-01

    The equilibrium EFIT data for the DIII-D shot 115467 at 3000 ms is used to construct the equilibrium generating function for magnetic field line trajectories in the DIII-D tokamak in natural canonical coordinates. A canonical transformation is used to construct an area-preserving map for field line trajectories in the natural canonical coordinates in the DIII-D. Maps in natural canonical coordinates have the advantage that natural canonical coordinates can be inverted to calculate real space coordinates (R,Z,φ), and there is no problem in crossing the separatrix. This is not possible for magnetic coordinates. This map is applied to calculate stochastic broadening due to magnetic noise and field errors in the DIII-D. Mode numbers for noise + field errors are (m,n)=(3,1), (4,1), (6,2), (7,2), (8,2), (9,3), (10,3), (11,3), (12,3). The common amplitude δ is varied from 0.8×10⁻⁵ to 2.0×10⁻⁵. Preliminary results suggest that the width of the stochastic layer due to noise and field errors in the DIII-D varies from about 7 to 16 cm near the X-point, and that about 0.6 to 3% of the poloidal flux is lost from inside the ideal separatrix. This work is supported by US Department of Energy grants DE-FG02-07ER54937, DE-FG02-01ER54624 and DE-FG02-04ER54793.

  10. Phantom Effects in School Composition Research: Consequences of Failure to Control Biases Due to Measurement Error in Traditional Multilevel Models

    ERIC Educational Resources Information Center

    Televantou, Ioulia; Marsh, Herbert W.; Kyriakides, Leonidas; Nagengast, Benjamin; Fletcher, John; Malmberg, Lars-Erik

    2015-01-01

    The main objective of this study was to quantify the impact of failing to account for measurement error on school compositional effects. Multilevel structural equation models were incorporated to control for measurement error and/or sampling error. Study 1, a large sample of English primary students in Years 1 and 4, revealed a significantly…

  11. Superior bit error rate and jitter due to improved switching field distribution in exchange spring magnetic recording media

    PubMed Central

    Suess, D.; Fuger, M.; Abert, C.; Bruckner, F.; Vogler, C.

    2016-01-01

    We report two effects that lead to a significant reduction of the switching field distribution in exchange spring media. The first effect relies on a subtle mechanism of the interplay between exchange coupling between soft and hard layers and anisotropy that allows significant reduction of the switching field distribution in exchange spring media. This effect reduces the switching field distribution by about 30% compared to single-phase media. A second effect is that due to the improved thermal stability of exchange spring media over single-phase media, the jitter due to thermal fluctuation is significantly smaller for exchange spring media than for single-phase media. The influence of this overall improved switching field distribution on the transition jitter in granular recording and the bit error rate in bit-patterned magnetic recording is discussed. The transition jitter in granular recording for a distribution of Khard values of 3% in the hard layer, taking into account thermal fluctuations during recording, is estimated to be a = 0.78 nm, which is similar to the best reported calculated jitter in optimized heat-assisted recording media. PMID:27245287

  12. Superior bit error rate and jitter due to improved switching field distribution in exchange spring magnetic recording media.

    PubMed

    Suess, D; Fuger, M; Abert, C; Bruckner, F; Vogler, C

    2016-06-01

    We report two effects that lead to a significant reduction of the switching field distribution in exchange spring media. The first effect relies on a subtle mechanism of the interplay between exchange coupling between soft and hard layers and anisotropy that allows significant reduction of the switching field distribution in exchange spring media. This effect reduces the switching field distribution by about 30% compared to single-phase media. A second effect is that due to the improved thermal stability of exchange spring media over single-phase media, the jitter due to thermal fluctuation is significantly smaller for exchange spring media than for single-phase media. The influence of this overall improved switching field distribution on the transition jitter in granular recording and the bit error rate in bit-patterned magnetic recording is discussed. The transition jitter in granular recording for a distribution of Khard values of 3% in the hard layer, taking into account thermal fluctuations during recording, is estimated to be a = 0.78 nm, which is similar to the best reported calculated jitter in optimized heat-assisted recording media.

  13. Accelerated Degradation Due to Weakened Adhesion from Li-TFSI Additives in Perovskite Solar Cells.

    PubMed

    Lee, Inhwa; Yun, Jae Hoon; Son, Hae Jung; Kim, Taek-Soo

    2017-03-01

    Reliable integration of organometallic halide perovskite in photovoltaic devices is critically limited by its low stability in humid environments. Furthermore, additives to increase the mobility in the hole transport material (HTM) have deliquescence and hygroscopic properties, which attract water molecules and result in accelerated degradation of the perovskite devices. In this study, a double cantilever beam (DCB) test is used to investigate the effects of additives in the HTM layer on the perovskite layer through neatly delaminating the interface between the perovskite and HTM layers. Using the DCB test, the bottom surface of the HTM layers is directly observed, and it is found that the additives are accumulated at the bottom along the thickness (i.e., through-plane direction) of the films. It is also found that the additives significantly decrease the adhesion at the interface between the perovskite and HTM layers by more than 60% through hardening the HTM films. Finally, the adhesion-based degradation mechanism of perovskite devices according to the existence of additives is proposed for humid environments.

  14. Additives

    NASA Technical Reports Server (NTRS)

    Smalheer, C. V.

    1973-01-01

    The chemistry of lubricant additives is discussed to show what the additives are chemically and what functions they perform in the lubrication of various kinds of equipment. Current theories regarding the mode of action of lubricant additives are presented. The additive groups discussed include the following: (1) detergents and dispersants, (2) corrosion inhibitors, (3) antioxidants, (4) viscosity index improvers, (5) pour point depressants, and (6) antifouling agents.

  15. On the Changes in Lift of Hydrofoils Due to Surface Injections of Polymer Additives

    DTIC Science & Technology

    1978-02-01

    …the injection of polymer solutions on to their surfaces show that the lift can either decrease or increase depending on the polymer, injection…different boundary layer effects, and these, in turn, are identical with the changes likely to be produced by polymer injections. No explicit…symmetric hydrofoils due to the injection of polymer solutions on to their surfaces show that the lift can either decrease or increase depending on

  16. Anomalous yield reduction in direct-drive DT implosions due to 3He addition

    SciTech Connect

    Herrmann, Hans W; Langenbrunner, James R; Mack, Joseph M; Cooley, James H; Wilson, Douglas C; Evans, Scott C; Sedillo, Tom J; Kyrala, George A; Caldwell, Stephen E; Young, Carlton A; Nobile, Arthur; Wermer, Joseph R; Paglieri, Stephen N; Mcevoy, Aaron M; Kim, Yong Ho; Batha, Steven H; Horsfield, Colin J; Drew, Dave; Garbett, Warren; Rubery, Michael; Glebov, Vladimir Yu; Roberts, Samuel; Frenje, Johan A

    2008-01-01

    Glass capsules were imploded in direct drive on the OMEGA laser [T. R. Boehly et al., Opt. Commun. 133, 495 (1997)] to look for anomalous degradation in deuterium/tritium (DT) yield (i.e., beyond what is predicted) and changes in reaction history with {sup 3}He addition. Such anomalies have previously been reported for D/{sup 3}He plasmas, but had not yet been investigated for DT/{sup 3}He. Anomalies such as these provide fertile ground for furthering our physics understanding of ICF implosions and capsule performance. A relatively short laser pulse (600 ps) was used to provide some degree of temporal separation between shock and compression yield components for analysis. Anomalous degradation in the compression component of yield was observed, consistent with the 'factor of two' degradation previously reported by MIT at a 50% {sup 3}He atom fraction in D{sub 2} using plastic capsules [Rygg et al., Phys. Plasmas 13, 052702 (2006)]. However, clean calculations (i.e., no fuel-shell mixing) predict the shock component of yield quite well, contrary to the result reported by MIT, but consistent with LANL results in D{sub 2}/{sup 3}He [Wilson et al., J. Phys.: Conf. Series 112, 022015 (2008)]. X-ray imaging suggests less-than-predicted compression of capsules containing {sup 3}He. Leading candidate explanations are a poorly understood equation of state (EOS) for gas mixtures, and unanticipated particle pressure variation with increasing {sup 3}He addition.

  17. Mechanism of wiggling enhancement due to HBr gas addition during amorphous carbon etching

    NASA Astrophysics Data System (ADS)

    Kofuji, Naoyuki; Ishimura, Hiroaki; Kobayashi, Hitoshi; Une, Satoshi

    2015-06-01

    The effect of gas chemistry during etching of an amorphous carbon layer (ACL) on wiggling has been investigated, focusing especially on the changes in residual stress. Although HBr gas addition reduces critical dimension loss, it enhances the surface stress and therefore increases wiggling. Attenuated total reflectance Fourier transform infrared spectroscopy revealed that the increase in surface stress was caused by hydrogenation of the ACL surface with hydrogen radicals. Three-dimensional (3D) nonlinear finite element method analysis confirmed that the increase in surface stress is large enough to cause the wiggling. These results also suggest that etching with hydrogen compound gases using an ACL mask has a high potential to cause wiggling.

  18. EFFECT ON 105KW NORTH WALL DUE TO ADDITION OF FILTRATION SYSTEM

    SciTech Connect

    CHO CS

    2010-03-08

    CHPRC D&D Projects is adding three filtration systems on two 1-ft concrete pads adjacent to the north side of the existing KW Basin building. This analysis is prepared to provide a qualitative assessment based on the review of design information available for the 105KW Basin substructure. In the proposed heating, ventilation and air conditioning (HVAC) filtration pad designs, a 2 ft gap will be maintained between the pads and the north end of the existing 105KW Basin building. Filtration Skids No.2 and No.3 share one pad. It is conservative to evaluate the No.2 and No.3 skid pad for the wall assessment. Figure 1 shows the plan layout of the 105KW basin site and the location of the pads for the filtration system or HVAC skids. Figure 2 shows the cross-section elevation view of the pad. The concrete pad Drawing H-1-91482 directs the replacement of the existing 8-inch concrete pad with two new 1-ft thick pads. The existing 8-inch pad is separated from the 105KW basin superstructure by an expansion joint of only half an inch. The concrete pad Drawing H-1-91482 shows that the gap between the new proposed pads and the north wall and any overflow pits and sumps is 2 ft. The following analysis demonstrates that the newly added filtration units and their pads do not exceed the structural capacity of the existing wall. The calculation shows that the total bending moment on the north wall due to the newly added filtration units and pads, including seismic load, is 82.636 ft-kip/ft and is within the capacity of the wall, which is 139.0 ft-kip/ft.

  19. Harmonic Resonance in Power Transmission Systems due to the Addition of Shunt Capacitors

    NASA Astrophysics Data System (ADS)

    Patil, Hardik U.

    Shunt capacitors are often added in transmission networks at suitable locations to improve the voltage profile. In this thesis, the transmission system in Arizona is considered as a test bed. Many shunt capacitors already exist in the Arizona transmission system and more are planned to be added. Addition of these shunt capacitors may create resonance conditions in response to harmonic voltages and currents. Such resonance, if it occurs, may create problematic issues in the system. It is the main objective of this thesis to identify potential problematic effects that could occur after placing new shunt capacitors at selected buses in the Arizona network. Part of the objective is to create a systematic plan for avoidance of resonance issues. For this study, a method of capacitance scan is proposed. The bus admittance matrix is used as a model of the networked transmission system. The calculations on the admittance matrix were done using Matlab. The test bed is the actual transmission system in Arizona; however, for proprietary reasons, bus names are masked in the thesis copy intended for the public domain. The admittance matrix was obtained from data using the PowerWorld Simulator after equivalencing the 2016 summer peak load (planning case). The full Western Electricity Coordinating Council (WECC) system data were used. The equivalencing procedure retains only the Arizona portion of the WECC. The capacitor scan results for single capacitor placement and multiple capacitor placement cases are presented. Problematic cases are identified in the form of 'forbidden responses'. The harmonic voltage impact of known sources of harmonics, mainly large-scale HVDC sources, is also presented. Specific key results indicated by the study include: (1) The forbidden zones obtained as per the IEEE 519 standard indicate that bus 10 is the most problematic bus. (2) The forbidden zones also indicate that switching values for the switched shunt capacitor (if used) at bus 3 should be
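
    The idea of such a scan can be sketched on a toy network: build the bus admittance matrix as a function of harmonic order and watch the driving-point impedance at the capacitor bus for a resonance peak. The two-bus values below are invented; the thesis worked with the equivalenced Arizona/WECC system in PowerWorld and Matlab.

```python
# Minimal harmonic scan of a toy 2-bus system with a shunt capacitor at bus 2.
import numpy as np

x_source = 0.05   # source (Thevenin) reactance behind bus 1, per unit at h = 1
x_line = 0.10     # line reactance between bus 1 and bus 2, per unit at h = 1
b_cap = 0.25      # shunt capacitor susceptance at bus 2 (the quantity scanned in practice)

def z_driving_point(h):
    """Driving-point impedance magnitude at bus 2 for harmonic order h."""
    y_src = 1.0 / (1j * h * x_source)
    y_line = 1.0 / (1j * h * x_line)
    y_cap = 1j * h * b_cap
    Y = np.array([[y_src + y_line, -y_line],
                  [-y_line,         y_line + y_cap]])
    return abs(np.linalg.inv(Y)[1, 1])

for h in range(1, 21):
    z = z_driving_point(h)
    print(f"h={h:2d}  |Z22|={z:7.2f}  " + "#" * int(min(z, 40)))
```

    For these numbers the parallel resonance sits near the fifth harmonic; a real scan repeats this for a range of capacitor sizes and flags combinations whose resonant order coincides with a harmonic that is actually injected.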

  20. Errors in the determination of the solar constant by the Langley method due to the presence of volcanic aerosol

    SciTech Connect

    Schotland, R.M.; Hartman, J.E.

    1989-02-01

    The accuracy in the determination of the solar constant by means of the Langley method is strongly influenced by the spatial inhomogeneities of the atmospheric aerosol. Volcanos frequently inject aerosol into the upper troposphere and lower stratosphere. This paper evaluates the solar constant error that would occur if observations had been taken throughout the plume of El Chichon observed by NASA aircraft in the fall of 1982 and the spring of 1983. A lidar method is suggested to minimize this error. 15 refs.
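
    The mechanism can be sketched with a toy Langley extrapolation (synthetic numbers, not the El Chichon data): ln I is fitted against airmass and extrapolated to zero airmass, so an optical depth that drifts during the observing period aliases directly into the retrieved intercept.

```python
# Langley extrapolation with a constant versus a drifting aerosol optical depth.
import numpy as np

I0_true = 1.00                          # "true" top-of-atmosphere signal (arbitrary units)
airmass = np.linspace(5.0, 1.5, 15)     # morning sequence of airmasses

def langley_intercept(tau):
    """Fit ln I = ln I0 - tau*m and return the extrapolated I0."""
    lnI = np.log(I0_true) - tau * airmass
    slope, intercept = np.polyfit(airmass, lnI, 1)
    return np.exp(intercept)

# Clean, horizontally homogeneous atmosphere: constant tau recovers I0.
print("constant tau :", round(langley_intercept(np.full_like(airmass, 0.12)), 4))

# Inhomogeneous volcanic plume: tau drifts from 0.20 to 0.12 during the series,
# which aliases into the intercept and biases the retrieved "solar constant".
print("drifting tau :", round(langley_intercept(np.linspace(0.20, 0.12, airmass.size)), 4))
```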

  1. Quantification of LiDAR measurement uncertainty through propagation of errors due to sensor sub-systems and terrain morphology

    NASA Astrophysics Data System (ADS)

    Goulden, T.; Hopkinson, C.

    2013-12-01

    The quantification of LiDAR sensor measurement uncertainty is important for evaluating the quality of derived DEM products, compiling risk assessments of management decisions based on LiDAR information, and enhancing LiDAR mission planning capabilities. Current quality assurance estimates of LiDAR measurement uncertainty are limited to post-survey empirical assessments or vendor estimates from commercial literature. Empirical evidence can provide valuable information on the performance of the sensor in validated areas; however, it cannot characterize the spatial distribution of measurement uncertainty throughout the extensive coverage of typical LiDAR surveys. Vendor-advertised error estimates are often restricted to strict and optimal survey conditions, resulting in idealized values. Numerical modeling of individual pulse uncertainty provides an alternative method for estimating LiDAR measurement uncertainty. LiDAR measurement uncertainty is theoretically assumed to fall into three distinct categories: 1) sensor sub-system errors, 2) terrain influences, and 3) vegetative influences. This research details the procedures for numerical modeling of measurement uncertainty from the sensor sub-system (GPS, IMU, laser scanner, laser ranger) and terrain influences. Results show that errors tend to increase as the laser scan angle, altitude, or laser beam incidence angle increases. An experimental survey over a flat and paved runway site, performed with an Optech ALTM 3100 sensor, showed an increase in modeled vertical errors from 5 cm, at a nadir scan orientation, to 8 cm at scan edges, for an aircraft altitude of 1200 m and half scan angle of 15°. In a survey with the same sensor, at a highly sloped glacial basin site free of vegetation, modeled vertical errors reached over 2 m. Validation of error models within the glacial environment, over three separate flight lines, respectively showed 100%, 85%, and 75% of elevation residuals fell below error predictions. Future
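
    The reported growth of vertical error with scan angle can be illustrated with a first-order propagation sketch for a flat target (the sensor uncertainties below are assumed round numbers, not the ALTM 3100 error budget used in the study):

```python
# First-order propagation of ranging and pointing uncertainty into vertical error
# for a flat surface: z = R*cos(theta).
import math

def vertical_sigma(altitude_m, scan_angle_deg, sigma_range_m=0.03, sigma_angle_deg=0.005):
    theta = math.radians(scan_angle_deg)
    sigma_theta = math.radians(sigma_angle_deg)
    slant_range = altitude_m / math.cos(theta)       # range to a flat surface below
    # dz/dR = cos(theta), dz/dtheta = -R*sin(theta)
    return math.hypot(math.cos(theta) * sigma_range_m,
                      slant_range * math.sin(theta) * sigma_theta)

for angle in (0, 5, 10, 15):
    print(f"scan angle {angle:2d} deg -> sigma_z = {vertical_sigma(1200, angle) * 100:.1f} cm")
```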

  2. Systematic errors in digital volume correlation due to the self-heating effect of a laboratory x-ray CT scanner

    NASA Astrophysics Data System (ADS)

    Wang, B.; Pan, B.; Tao, R.; Lubineau, G.

    2017-04-01

    The use of digital volume correlation (DVC) in combination with a laboratory x-ray computed tomography (CT) scanner for full-field internal 3D deformation measurement of opaque materials has flourished in recent years. During x-ray tomographic imaging, the heat generated by the x-ray tube changes the imaging geometry of the x-ray scanner, and further introduces noticeable errors in DVC measurements. In this work, to provide practical guidance for high-accuracy DVC measurement, the errors in displacements and strains measured by DVC due to the self-heating effect of a commercially available x-ray scanner were experimentally investigated. The errors were characterized by performing simple rescan tests with different scan durations. The results indicate that the maximum strain errors associated with the self-heating of the x-ray scanner exceed 400 µε. Possible approaches for minimizing or correcting these displacement and strain errors are discussed. Finally, a series of translation and uniaxial compression tests were performed, in which strain errors were detected and then removed using a pre-established artificial dilatational strain-time curve. Experimental results demonstrate the efficacy and accuracy of the proposed strain error correction approach.

  3. The potential for regional-scale bias in top-down CO2 flux estimates due to atmospheric transport errors

    NASA Astrophysics Data System (ADS)

    Miller, S. M.; Fung, I.; Liu, J.; Hayek, M. N.; Andrews, A. E.

    2014-09-01

    Estimates of CO2 fluxes that are based on atmospheric data rely upon a meteorological model to simulate atmospheric CO2 transport. These models provide a quantitative link between surface fluxes of CO2 and atmospheric measurements taken downwind. Therefore, any errors in the meteorological model can propagate into atmospheric CO2 transport and ultimately bias the estimated CO2 fluxes. These errors, however, have traditionally been difficult to characterize. To examine the effects of CO2 transport errors on estimated CO2 fluxes, we use a global meteorological model-data assimilation system known as "CAM-LETKF" to quantify two aspects of the transport errors: error variances (standard deviations) and temporal error correlations. Furthermore, we develop two case studies. In the first case study, we examine the extent to which CO2 transport uncertainties can bias CO2 flux estimates. In particular, we use a common flux estimate known as CarbonTracker to discover the minimum hypothetical bias that can be detected above the CO2 transport uncertainties. In the second case study, we then investigate which meteorological conditions may contribute to month-long biases in modeled atmospheric transport. We estimate 6-hourly CO2 transport uncertainties in the model surface layer that range from 0.15 to 9.6 ppm (standard deviation), depending on location, and we estimate an average error decorrelation time of ∼2.3 days at existing CO2 observation sites. As a consequence of these uncertainties, we find that CarbonTracker CO2 fluxes would need to be biased by at least 29%, on average, before that bias would be detectable at existing non-marine atmospheric CO2 observation sites. Furthermore, we find that persistent, bias-type errors in atmospheric transport are associated with consistently low net radiation and low-energy boundary layer conditions. The meteorological model is not necessarily more uncertain in these conditions. Rather, the extent to which meteorological uncertainties

  4. Errors in the estimation of approximate entropy and other recurrence-plot-derived indices due to the finite resolution of RR time series.

    PubMed

    García-González, Miguel A; Fernández-Chimeno, Mireya; Ramos-Castro, Juan

    2009-02-01

    An analysis of the errors due to the finite resolution of RR time series in the estimation of the approximate entropy (ApEn) is described. The quantification errors in the discrete RR time series produce considerable errors in the ApEn estimation (bias and variance) when the signal variability or the sampling frequency is low. Similar errors can be found in indices related to the quantification of recurrence plots. An easy way to calculate a figure of merit [the signal to resolution of the neighborhood ratio (SRN)] is proposed in order to predict when the bias in the indices could be high. When SRN is close to an integer value n, the bias is higher than when near n - 1/2 or n + 1/2. Moreover, if SRN is close to an integer value, the lower this value, the greater the bias is.
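
    The quantization effect itself is easy to reproduce with a standard ApEn implementation and a coarsely rounded RR series (the series and resolution values below are synthetic, not the paper's data):

```python
# Effect of RR quantization on approximate entropy (standard Pincus definition).
import numpy as np

def apen(x, m=2, r_frac=0.2):
    x = np.asarray(x, dtype=float)
    n = len(x)
    r = r_frac * np.std(x)

    def phi(mm):
        emb = np.array([x[i:i + mm] for i in range(n - mm + 1)])       # delay vectors
        dists = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        c = np.mean(dists <= r, axis=1)                                # neighbour fractions
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

rng = np.random.default_rng(0)
rr = 800 + 30 * np.sin(np.linspace(0, 20, 300)) + 10 * rng.standard_normal(300)  # ms

for resolution in (1, 4, 8):            # quantization step in ms (i.e. 1/sampling frequency)
    quantized = np.round(rr / resolution) * resolution
    print(f"resolution {resolution} ms -> ApEn = {apen(quantized):.3f}")
```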

  5. Error in Radar-Derived Soil Moisture due to Roughness Parameterization: An Analysis Based on Synthetical Surface Profiles

    PubMed Central

    Lievens, Hans; Vernieuwe, Hilde; Álvarez-Mozos, Jesús; De Baets, Bernard; Verhoest, Niko E.C.

    2009-01-01

    In the past decades, many studies on soil moisture retrieval from SAR demonstrated a poor correlation between the top layer soil moisture content and observed backscatter coefficients, which mainly has been attributed to difficulties involved in the parameterization of surface roughness. The present paper describes a theoretical study, performed on synthetical surface profiles, which investigates how errors on roughness parameters are introduced by standard measurement techniques, and how they will propagate through the commonly used Integral Equation Model (IEM) into a corresponding soil moisture retrieval error for some of the currently most used SAR configurations. Key aspects influencing the error on the roughness parameterization and consequently on soil moisture retrieval are: the length of the surface profile, the number of profile measurements, the horizontal and vertical accuracy of profile measurements and the removal of trends along profiles. Moreover, it is found that soil moisture retrieval with C-band configuration generally is less sensitive to inaccuracies in roughness parameterization than retrieval with L-band configuration. PMID:22399956

  6. Diagnosis of vertical velocities with the QG omega equation: an examination of the errors due to sampling strategy

    NASA Astrophysics Data System (ADS)

    Allen, J. T.; Smeed, D. A.; Nurser, A. J. G.; Zhang, J. W.; Rixen, M.

    2001-02-01

    Vertical motion at the mesoscale plays a key role in ocean circulation, ocean-atmosphere interaction, and hence climate. It is not yet possible to make direct Eulerian measurements of vertical velocities less than 1000 m day⁻¹. However, by assuming quasi-geostrophic (QG) balance, vertical velocities O(10 m day⁻¹) can be diagnosed from the geostrophic velocity field and suitable boundary conditions. Significant errors in the accuracy of this diagnosis arise from the necessary compromise between spatial resolution and synopticity of a hydrographic survey. This problem has been addressed by sampling the output of a numerical ocean model to simulate typical oceanographic surveys of mesoscale fronts. The balance between the number of observations and the synopticity of observations affects the apparent flow and in particular the diagnosed vertical motion. A combination of effects can typically lead to errors of 85% in the estimation of net vertical heat flux. An analytical two-layer model is used to understand components of this error and indicate the key parameters for the design of mesoscale sampling.

  7. Optics for five-dimensional measurement for correction of vertical displacement error due to attitude of floating body in superconducting magnetic levitation system

    SciTech Connect

    Shiota, Fuyuhiko; Morokuma, Tadashi

    2006-09-15

    An improved optical system for five-dimensional measurement has been developed for the correction of vertical displacement error due to the attitude change of a superconducting floating body that shows five degrees of freedom besides a vertical displacement of 10 mm. The available solid angle for the optical measurement is extremely limited because of the cryogenic laser interferometer sharing the optical window of a vacuum chamber in addition to the basic structure of the cryogenic vessel for liquid helium. The aim of the design was to develop a more practical as well as better optical system compared with the prototype system. Various artifices were built into this optical system and the result shows a satisfactory performance and easy operation overcoming the extremely severe spatial difficulty in the levitation system. Although the system described here is specifically designed for our magnetic levitation system, the concept and each artifice will be applicable to the optical measurement system for an object in a high-vacuum chamber and/or cryogenic vessel where the available solid angle for an optical path is extremely limited.

  8. Effect of water quality improvement on the remediation of river sediment due to the addition of calcium nitrate.

    PubMed

    Liu, Xiaoning; Tao, Yi; Zhou, Kuiyu; Zhang, Qiqi; Chen, Guangyao; Zhang, Xihui

    2017-01-01

    In situ sediment remediation techniques are commonly used to control the release of pollutants from sediment. Addition of calcium nitrate to sediment has been applied to control the release of phosphorus from sediments. In this study, laboratory experiments were conducted to investigate the effect of water quality improvement on the remediation of river sediment with the addition of calcium nitrate. The results demonstrated that the redox potential of sediments increased from -282 mV to -130 mV after 28 days of calcium nitrate treatment. The acid volatile sulphide in the sediments significantly decreased (by 54.9% to 57.1%), whereas the total organic carbon decreased by 9.7% to 10.2%. However, the effect of water quality improvement on these changes was not significant. Due to the addition of calcium nitrate, low phosphate concentrations in the water column and in the sediment interstitial water were observed, indicating that the calcium nitrate was beneficial to controlling the release of phosphorus from river sediment. The decrease in phosphorus release could be attributed to the fixation of iron-phosphorus and calcium-phosphorus due to the addition of calcium nitrate. The addition of calcium nitrate to sediment caused the oxidation of sulphide to sulphate, hence resulting in high nitrate and sulphate concentrations in the water column, and high interstitial nitrate and sulphate concentrations in the sediment. The results also showed that only the water quality improvement had a significant effect on the interstitial nitrate and sulphate concentrations in the sediment.

  9. Estimation of Cyclic Error Due to Scattering in the Internal OPD Metrology of the Space Interferometry Mission

    NASA Technical Reports Server (NTRS)

    Tang, Hong; Zhao, Feng

    2005-01-01

    A common-path laser heterodyne interferometer capable of measuring the internal optical path difference (OPD) with accuracy of the order of 10 pm was demonstrated at JPL. To achieve this accuracy, the relative power received by the detector that is contributed by the scattering of light at the optical surfaces should be less than -97 dB. A method has been developed to estimate the cyclic error caused by the scattering of the optical surfaces. The result of the analysis is presented.

  10. Managing Uncertainty Due to a Fundamental Error Source Arising from Scatterer Distribution Complexity in Radar Remote Sensing of Precipitation

    NASA Technical Reports Server (NTRS)

    Smith, Eric A.; Kuo, Kwo-Sen; Meneghini, Robert; Mugnai, Alberto

    2007-01-01

    The assumption that cloud and rain drops are spatially distributed according to a Poisson distribution within a scattering volume probed by a radar being used to estimate precipitation has represented bedrock theory in establishing 'rules of the game' for pulse averaging--the process needed to beat down noise to an acceptable level in the measurement of radar reflectivity factor. Relatively recent observations of 'realistic' spatial distributions of hydrometeor scatterers in a cloudy atmosphere motivate a renewed examination of the consequences of using an over-simplified assumption underlying volume scattering--particularly in regard to the standard pulse averaging rule. Our investigation addresses two extremes, from simple to complex, in the complexity allowed for the underlying scatterer distribution. It is demonstrated that as the spatial distribution ranges from Poisson (a narrow distribution) to multi-fractal (a much broader distribution), uncertainty in a measurement increases if the rule for pulse averaging goes unchanged from its Poisson distribution reference count. [A bounded cascade is used for the multi-fractal distribution, a regularly observed distribution vis-a-vis cloud liquid water content.] The resultant measurement uncertainty leads to a fundamental source of error in the estimation of rain rate from radar measurements, one that has been disregarded since the early 1950s when radar sets first began to be used for rainfall measurement. It is shown how this source of error can be 'managed'--under the assumption that a number of data analysis experiments would be carried out, experiments involving pulse-by-pulse measurements obtained from a radar set modified to output individual pulses of reflectivity factor. For practical applications, a new parameter called normalized k-sample intensity invariance is developed to enable defining the required pulse-average count according to a preferred degree of uncertainty.
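
    The central point can be reproduced with a toy Monte Carlo: the same n-pulse averaging rule leaves a much larger residual uncertainty when the per-pulse scatterer population is drawn from a broad distribution rather than a Poisson one. A lognormal is used below purely as a stand-in for the broader (multi-fractal) case; it is not the authors' bounded-cascade model, and the normalized k-sample intensity invariance parameter is not implemented here.

```python
# Residual uncertainty of an n-pulse average: Poisson versus a broad distribution
# with the same mean scatterer count.
import numpy as np

rng = np.random.default_rng(1)
mean_count = 100           # assumed mean scatterer count per pulse volume
n_pulses = 64              # pulses averaged per reflectivity estimate
n_trials = 20000

poisson = rng.poisson(mean_count, size=(n_trials, n_pulses))

sigma_ln = 0.8             # broad case: same mean, much larger spread
mu_ln = np.log(mean_count) - 0.5 * sigma_ln**2
broad = rng.lognormal(mu_ln, sigma_ln, size=(n_trials, n_pulses))

for name, sample in (("Poisson", poisson), ("broad  ", broad)):
    est = sample.mean(axis=1)                  # n-pulse averages
    print(f"{name}: relative uncertainty of the {n_pulses}-pulse average = "
          f"{est.std() / est.mean():.3f}")
```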

  11. A Low-Cost Environmental Monitoring System: How to Prevent Systematic Errors in the Design Phase through the Combined Use of Additive Manufacturing and Thermographic Techniques.

    PubMed

    Salamone, Francesco; Danza, Ludovico; Meroni, Italo; Pollastro, Maria Cristina

    2017-04-11

    nEMoS (nano Environmental Monitoring System) is a 3D-printed device built following the Do-It-Yourself (DIY) approach. It can be connected to the web and it can be used to assess indoor environmental quality (IEQ). It is built using some low-cost sensors connected to an Arduino microcontroller board. The device is assembled in a small-sized case and both thermohygrometric sensors used to measure the air temperature and relative humidity, and the globe thermometer used to measure the radiant temperature, can be subject to thermal effects due to overheating of some nearby components. A thermographic analysis was made to rule out this possibility. The paper shows how the pervasive technique of additive manufacturing can be combined with the more traditional thermographic techniques to redesign the case and to verify the accuracy of the optimized system in order to prevent instrumental systematic errors in terms of the difference between experimental and actual values of the above-mentioned environmental parameters.

  12. Performance analysis for time-frequency MUSIC algorithm in presence of both additive noise and array calibration errors

    NASA Astrophysics Data System (ADS)

    Khodja, Mohamed; Belouchrani, Adel; Abed-Meraim, Karim

    2012-12-01

    This article deals with the application of Spatial Time-Frequency Distribution (STFD) to the direction finding problem using the Multiple Signal Classification (MUSIC) algorithm. A comparative performance analysis is performed for the method under consideration with respect to that using the data covariance matrix when the received array signals are subject to calibration errors in a non-stationary environment. A unified analytical expression of the Direction Of Arrival (DOA) estimation error is derived for both methods. Numerical results show the effect of the parameters intervening in the derived expression on the algorithm performance. It is particularly observed that for low Signal to Noise Ratio (SNR) and high Signal to sensor Perturbation Ratio (SPR) the STFD method gives better performance, while for high SNR and for the same SPR both methods give similar performance.
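
    For orientation, a textbook covariance-based MUSIC sketch is given below, with a small random gain/phase perturbation standing in for the calibration error; this is not the STFD-based variant analysed in the article, and all array parameters are assumed.

```python
# Covariance-based MUSIC for a uniform linear array with a simple calibration-error model.
import numpy as np

rng = np.random.default_rng(2)
n_sensors, n_snapshots, d = 8, 400, 0.5          # d = element spacing in wavelengths
doas_deg = np.array([-12.0, 20.0])
snr_db = 5.0

def steering(theta_deg):
    k = np.arange(n_sensors)
    return np.exp(2j * np.pi * d * k * np.sin(np.radians(theta_deg)))

A = np.column_stack([steering(t) for t in doas_deg])
S = (rng.standard_normal((2, n_snapshots)) + 1j * rng.standard_normal((2, n_snapshots))) / np.sqrt(2)
noise_power = 10 ** (-snr_db / 10)
N = np.sqrt(noise_power / 2) * (rng.standard_normal((n_sensors, n_snapshots))
                                + 1j * rng.standard_normal((n_sensors, n_snapshots)))

# Calibration error: small random gain and phase offsets per sensor (assumed model).
cal = np.diag((1 + 0.05 * rng.standard_normal(n_sensors))
              * np.exp(1j * 0.05 * rng.standard_normal(n_sensors)))
X = cal @ (A @ S) + N

R = X @ X.conj().T / n_snapshots
eigvals, eigvecs = np.linalg.eigh(R)              # eigenvalues in ascending order
En = eigvecs[:, : n_sensors - len(doas_deg)]      # noise subspace

grid = np.linspace(-90, 90, 1801)
spec = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2 for t in grid])

# Crude peak picking: the two largest local maxima of the pseudospectrum.
is_peak = (spec[1:-1] > spec[:-2]) & (spec[1:-1] > spec[2:])
peak_idx = np.where(is_peak)[0] + 1
top2 = peak_idx[np.argsort(spec[peak_idx])[-2:]]
print("estimated DOAs (deg):", np.sort(np.round(grid[top2], 1)))
```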

  13. Edge profile analysis of Joint European Torus (JET) Thomson scattering data: Quantifying the systematic error due to edge localised mode synchronisation.

    PubMed

    Leyland, M J; Beurskens, M N A; Flanagan, J C; Frassinetti, L; Gibson, K J; Kempenaars, M; Maslov, M; Scannell, R

    2016-01-01

    The Joint European Torus (JET) high resolution Thomson scattering (HRTS) system measures radial electron temperature and density profiles. One of the key capabilities of this diagnostic is measuring the steep pressure gradient, termed the pedestal, at the edge of JET plasmas. The pedestal is susceptible to limiting instabilities, such as Edge Localised Modes (ELMs), characterised by a periodic collapse of the steep gradient region. A common method to extract the pedestal width, gradient, and height, used on numerous machines, is by performing a modified hyperbolic tangent (mtanh) fit to overlaid profiles selected from the same region of the ELM cycle. This process of overlaying profiles, termed ELM synchronisation, maximises the number of data points defining the pedestal region for a given phase of the ELM cycle. When fitting to HRTS profiles, it is necessary to incorporate the diagnostic radial instrument function, particularly important when considering the pedestal width. A deconvolved fit is determined by a forward convolution method requiring knowledge of only the instrument function and profiles. The systematic error due to the deconvolution technique incorporated into the JET pedestal fitting tool has been documented by Frassinetti et al. [Rev. Sci. Instrum. 83, 013506 (2012)]. This paper seeks to understand and quantify the systematic error introduced to the pedestal width due to ELM synchronisation. Synthetic profiles, generated with error bars and point-to-point variation characteristic of real HRTS profiles, are used to evaluate the deviation from the underlying pedestal width. We find on JET that the ELM synchronisation systematic error is negligible in comparison to the statistical error when assuming ten overlaid profiles (typical for a pre-ELM fit to HRTS profiles). This confirms that fitting a mtanh to ELM synchronised profiles is a robust and practical technique for extracting the pedestal structure.
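
    One commonly used mtanh parameterization, fitted to a synthetic edge profile, is sketched below; this is an illustration of the functional form only, with no instrument-function deconvolution or ELM synchronisation, and it is not the JET pedestal fitting tool itself.

```python
# Fit of a modified hyperbolic tangent (mtanh) pedestal shape to a synthetic profile.
import numpy as np
from scipy.optimize import curve_fit

def mtanh_profile(r, height, offset, position, width, core_slope):
    """Pedestal: tanh step of given height and width plus a linear core-side term."""
    x = (position - r) / (2.0 * width)
    mtanh = ((1.0 + core_slope * x) * np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))
    return offset + 0.5 * height * (1.0 + mtanh)

rng = np.random.default_rng(3)
r = np.linspace(3.70, 3.90, 60)              # major radius (m), assumed range
true = (1.2, 0.05, 3.82, 0.012, 0.08)        # invented height, offset, position, width, slope
data = mtanh_profile(r, *true) + 0.03 * rng.standard_normal(r.size)

popt, pcov = curve_fit(mtanh_profile, r, data, p0=(1.0, 0.0, 3.80, 0.02, 0.0),
                       bounds=([0, -1, 3.70, 0.001, -1], [5, 1, 3.90, 0.1, 1]))
perr = np.sqrt(np.diag(pcov))
print(f"pedestal width  = {popt[3] * 1000:.1f} +/- {perr[3] * 1000:.1f} mm")
print(f"pedestal height = {popt[0]:.2f} +/- {perr[0]:.2f}")
```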

  14. Extended FDD-WT method based on correcting the errors due to non-synchronous sensing of sensors

    NASA Astrophysics Data System (ADS)

    Tarinejad, Reza; Damadipour, Majid

    2016-05-01

    In this research, a combined non-parametric method called frequency domain decomposition-wavelet transform (FDD-WT), recently presented by the authors, is extended to correct the errors resulting from asynchronous sensing of sensors, in order to broaden the application of the algorithm to different kinds of structures, especially very large ones. The analysis process is therefore based on time-frequency domain decomposition and is performed with emphasis on correcting time delays between sensors. Time delay estimation (TDE) methods are investigated for their efficiency and accuracy with noisy environmental records, and the Phase Transform - β (PHAT-β) technique was selected as an appropriate method to modify the operation of the traditional FDD-WT in order to achieve accurate results. In this paper, a theoretical example (3DOF system) has been provided in order to indicate the effects of non-synchronous sensing of the sensors on the modal parameters; moreover, the Pacoima dam, subjected to the 13 Jan 2001 earthquake excitation, was selected as a case study. The modal parameters of the dam obtained from the extended FDD-WT method were compared with the output of the classical signal processing method, which is referred to as the 4-Spectral method, as well as other literature relating to the dynamic characteristics of Pacoima dam. Comparison of the results indicates that the values are correct and reliable.
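
    The PHAT-weighted time delay estimation step can be illustrated generically as below (two synthetic noisy records with a known integer sample lag; this is a plain GCC-PHAT sketch, not the extended FDD-WT pipeline, and all signal parameters are invented):

```python
# GCC-PHAT time-delay estimate between two noisy records with a known sample lag.
import numpy as np

rng = np.random.default_rng(4)
fs = 200.0                       # sampling rate (Hz), assumed
n, true_delay = 4096, 7          # record length and delay in samples

source = rng.standard_normal(n + true_delay)
x1 = source[true_delay:] + 0.3 * rng.standard_normal(n)   # reference sensor
x2 = source[:n] + 0.3 * rng.standard_normal(n)            # sensor lagging x1 by true_delay

def gcc_phat(sig, ref, max_lag=50):
    nfft = 2 * max(len(sig), len(ref))
    S = np.fft.rfft(sig, nfft) * np.conj(np.fft.rfft(ref, nfft))
    S /= np.maximum(np.abs(S), 1e-12)                      # PHAT weighting: keep phase only
    cc = np.fft.irfft(S, nfft)
    cc = np.concatenate((cc[-max_lag:], cc[:max_lag + 1])) # lags -max_lag .. +max_lag
    return int(np.argmax(cc)) - max_lag

delay = gcc_phat(x2, x1)
print(f"estimated lag of x2 behind x1: {delay} samples ({delay / fs * 1000:.1f} ms)")
```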

  15. Quantification and correction of the error due to limited PIV resolution on the accuracy of non-intrusive spatial pressure measurement using a DNS channel flow database

    NASA Astrophysics Data System (ADS)

    Liu, Xiaofeng; Siddle-Mitchell, Seth

    2016-11-01

    The effect of the subgrid-scale (SGS) stress due to limited PIV resolution on pressure measurement accuracy is quantified using data from a direct numerical simulation database of turbulent channel flow (JHTDB). A series of 2000 consecutive realizations of sample block data with 512x512x49 grid nodal points were selected and spatially filtered with coarse 17x17x17 and fine 5x5x5 box averaging, respectively, giving rise to corresponding PIV resolutions of roughly 62.6 and 18.4 times the viscous length scale. Comparison of the reconstructed pressure at different levels of pressure gradient approximation with the filtered pressure shows that the neglect of the viscous term leads to a small but noticeable change in the reconstructed pressure, especially in regions near the channel walls. In contrast, the neglect of the SGS stress results in a more significant increase in both the bias and the random errors, indicating that the SGS term must be accounted for in PIV pressure measurement. Correction using similarity SGS modeling reduces the random error due to the omission of SGS stress from 114.5% of the filtered pressure r.m.s. fluctuation to 89.1% for the coarse PIV resolution, and from 66.5% to 35.9% for the fine PIV resolution, respectively, confirming the benefit of the error compensation method and the positive influence of increasing PIV resolution on pressure measurement accuracy.

  16. Meta-analysis of gene-environment-wide association scans accounting for education level identifies additional loci for refractive error.

    PubMed

    Fan, Qiao; Verhoeven, Virginie J M; Wojciechowski, Robert; Barathi, Veluchamy A; Hysi, Pirro G; Guggenheim, Jeremy A; Höhn, René; Vitart, Veronique; Khawaja, Anthony P; Yamashiro, Kenji; Hosseini, S Mohsen; Lehtimäki, Terho; Lu, Yi; Haller, Toomas; Xie, Jing; Delcourt, Cécile; Pirastu, Mario; Wedenoja, Juho; Gharahkhani, Puya; Venturini, Cristina; Miyake, Masahiro; Hewitt, Alex W; Guo, Xiaobo; Mazur, Johanna; Huffman, Jenifer E; Williams, Katie M; Polasek, Ozren; Campbell, Harry; Rudan, Igor; Vatavuk, Zoran; Wilson, James F; Joshi, Peter K; McMahon, George; St Pourcain, Beate; Evans, David M; Simpson, Claire L; Schwantes-An, Tae-Hwi; Igo, Robert P; Mirshahi, Alireza; Cougnard-Gregoire, Audrey; Bellenguez, Céline; Blettner, Maria; Raitakari, Olli; Kähönen, Mika; Seppala, Ilkka; Zeller, Tanja; Meitinger, Thomas; Ried, Janina S; Gieger, Christian; Portas, Laura; van Leeuwen, Elisabeth M; Amin, Najaf; Uitterlinden, André G; Rivadeneira, Fernando; Hofman, Albert; Vingerling, Johannes R; Wang, Ya Xing; Wang, Xu; Tai-Hui Boh, Eileen; Ikram, M Kamran; Sabanayagam, Charumathi; Gupta, Preeti; Tan, Vincent; Zhou, Lei; Ho, Candice E H; Lim, Wan'e; Beuerman, Roger W; Siantar, Rosalynn; Tai, E-Shyong; Vithana, Eranga; Mihailov, Evelin; Khor, Chiea-Chuen; Hayward, Caroline; Luben, Robert N; Foster, Paul J; Klein, Barbara E K; Klein, Ronald; Wong, Hoi-Suen; Mitchell, Paul; Metspalu, Andres; Aung, Tin; Young, Terri L; He, Mingguang; Pärssinen, Olavi; van Duijn, Cornelia M; Jin Wang, Jie; Williams, Cathy; Jonas, Jost B; Teo, Yik-Ying; Mackey, David A; Oexle, Konrad; Yoshimura, Nagahisa; Paterson, Andrew D; Pfeiffer, Norbert; Wong, Tien-Yin; Baird, Paul N; Stambolian, Dwight; Wilson, Joan E Bailey; Cheng, Ching-Yu; Hammond, Christopher J; Klaver, Caroline C W; Saw, Seang-Mei; Rahi, Jugnoo S; Korobelnik, Jean-François; Kemp, John P; Timpson, Nicholas J; Smith, George Davey; Craig, Jamie E; Burdon, Kathryn P; Fogarty, Rhys D; Iyengar, Sudha K; Chew, Emily; Janmahasatian, Sarayut; Martin, Nicholas G; MacGregor, Stuart; Xu, Liang; Schache, Maria; Nangia, Vinay; Panda-Jonas, Songhomitra; Wright, Alan F; Fondran, Jeremy R; Lass, Jonathan H; Feng, Sheng; Zhao, Jing Hua; Khaw, Kay-Tee; Wareham, Nick J; Rantanen, Taina; Kaprio, Jaakko; Pang, Chi Pui; Chen, Li Jia; Tam, Pancy O; Jhanji, Vishal; Young, Alvin L; Döring, Angela; Raffel, Leslie J; Cotch, Mary-Frances; Li, Xiaohui; Yip, Shea Ping; Yap, Maurice K H; Biino, Ginevra; Vaccargiu, Simona; Fossarello, Maurizio; Fleck, Brian; Yazar, Seyhan; Tideman, Jan Willem L; Tedja, Milly; Deangelis, Margaret M; Morrison, Margaux; Farrer, Lindsay; Zhou, Xiangtian; Chen, Wei; Mizuki, Nobuhisa; Meguro, Akira; Mäkelä, Kari Matti

    2016-03-29

    Myopia is the most common human eye disorder and it results from complex genetic and environmental causes. The rapidly increasing prevalence of myopia poses a major public health challenge. Here, the CREAM consortium performs a joint meta-analysis to test single-nucleotide polymorphism (SNP) main effects and SNP × education interaction effects on refractive error in 40,036 adults from 25 studies of European ancestry and 10,315 adults from 9 studies of Asian ancestry. In European ancestry individuals, we identify six novel loci (FAM150B-ACP1, LINC00340, FBN1, DIS3L-MAP2K1, ARID2-SNAT1 and SLC14A2) associated with refractive error. In Asian populations, three genome-wide significant loci AREG, GABRR1 and PDE10A also exhibit strong interactions with education (P<8.5 × 10−5), whereas the interactions are less evident in Europeans. The discovery of these loci represents an important advance in understanding how gene and environment interactions contribute to the heterogeneity of myopia.

  17. Meta-analysis of gene–environment-wide association scans accounting for education level identifies additional loci for refractive error

    PubMed Central

    Fan, Qiao; Verhoeven, Virginie J. M.; Wojciechowski, Robert; Barathi, Veluchamy A.; Hysi, Pirro G.; Guggenheim, Jeremy A.; Höhn, René; Vitart, Veronique; Khawaja, Anthony P.; Yamashiro, Kenji; Hosseini, S Mohsen; Lehtimäki, Terho; Lu, Yi; Haller, Toomas; Xie, Jing; Delcourt, Cécile; Pirastu, Mario; Wedenoja, Juho; Gharahkhani, Puya; Venturini, Cristina; Miyake, Masahiro; Hewitt, Alex W.; Guo, Xiaobo; Mazur, Johanna; Huffman, Jenifer E.; Williams, Katie M.; Polasek, Ozren; Campbell, Harry; Rudan, Igor; Vatavuk, Zoran; Wilson, James F.; Joshi, Peter K.; McMahon, George; St Pourcain, Beate; Evans, David M.; Simpson, Claire L.; Schwantes-An, Tae-Hwi; Igo, Robert P.; Mirshahi, Alireza; Cougnard-Gregoire, Audrey; Bellenguez, Céline; Blettner, Maria; Raitakari, Olli; Kähönen, Mika; Seppala, Ilkka; Zeller, Tanja; Meitinger, Thomas; Ried, Janina S.; Gieger, Christian; Portas, Laura; van Leeuwen, Elisabeth M.; Amin, Najaf; Uitterlinden, André G.; Rivadeneira, Fernando; Hofman, Albert; Vingerling, Johannes R.; Wang, Ya Xing; Wang, Xu; Tai-Hui Boh, Eileen; Ikram, M. Kamran; Sabanayagam, Charumathi; Gupta, Preeti; Tan, Vincent; Zhou, Lei; Ho, Candice E. H.; Lim, Wan'e; Beuerman, Roger W.; Siantar, Rosalynn; Tai, E-Shyong; Vithana, Eranga; Mihailov, Evelin; Khor, Chiea-Chuen; Hayward, Caroline; Luben, Robert N.; Foster, Paul J.; Klein, Barbara E. K.; Klein, Ronald; Wong, Hoi-Suen; Mitchell, Paul; Metspalu, Andres; Aung, Tin; Young, Terri L.; He, Mingguang; Pärssinen, Olavi; van Duijn, Cornelia M.; Jin Wang, Jie; Williams, Cathy; Jonas, Jost B.; Teo, Yik-Ying; Mackey, David A.; Oexle, Konrad; Yoshimura, Nagahisa; Paterson, Andrew D.; Pfeiffer, Norbert; Wong, Tien-Yin; Baird, Paul N.; Stambolian, Dwight; Wilson, Joan E. Bailey; Cheng, Ching-Yu; Hammond, Christopher J.; Klaver, Caroline C. W.; Saw, Seang-Mei; Rahi, Jugnoo S.; Korobelnik, Jean-François; Kemp, John P.; Timpson, Nicholas J.; Smith, George Davey; Craig, Jamie E.; Burdon, Kathryn P.; Fogarty, Rhys D.; Iyengar, Sudha K.; Chew, Emily; Janmahasatian, Sarayut; Martin, Nicholas G.; MacGregor, Stuart; Xu, Liang; Schache, Maria; Nangia, Vinay; Panda-Jonas, Songhomitra; Wright, Alan F.; Fondran, Jeremy R.; Lass, Jonathan H.; Feng, Sheng; Zhao, Jing Hua; Khaw, Kay-Tee; Wareham, Nick J.; Rantanen, Taina; Kaprio, Jaakko; Pang, Chi Pui; Chen, Li Jia; Tam, Pancy O.; Jhanji, Vishal; Young, Alvin L.; Döring, Angela; Raffel, Leslie J.; Cotch, Mary-Frances; Li, Xiaohui; Yip, Shea Ping; Yap, Maurice K.H.; Biino, Ginevra; Vaccargiu, Simona; Fossarello, Maurizio; Fleck, Brian; Yazar, Seyhan; Tideman, Jan Willem L.; Tedja, Milly; Deangelis, Margaret M.; Morrison, Margaux; Farrer, Lindsay; Zhou, Xiangtian; Chen, Wei; Mizuki, Nobuhisa; Meguro, Akira; Mäkelä, Kari Matti

    2016-01-01

    Myopia is the most common human eye disorder and it results from complex genetic and environmental causes. The rapidly increasing prevalence of myopia poses a major public health challenge. Here, the CREAM consortium performs a joint meta-analysis to test single-nucleotide polymorphism (SNP) main effects and SNP × education interaction effects on refractive error in 40,036 adults from 25 studies of European ancestry and 10,315 adults from 9 studies of Asian ancestry. In European ancestry individuals, we identify six novel loci (FAM150B-ACP1, LINC00340, FBN1, DIS3L-MAP2K1, ARID2-SNAT1 and SLC14A2) associated with refractive error. In Asian populations, three genome-wide significant loci AREG, GABRR1 and PDE10A also exhibit strong interactions with education (P<8.5 × 10−5), whereas the interactions are less evident in Europeans. The discovery of these loci represents an important advance in understanding how gene and environment interactions contribute to the heterogeneity of myopia. PMID:27020472

  18. The Effect of Additional Dead Space on Respiratory Exchange Ratio and Carbon Dioxide Production Due to Training

    PubMed Central

    Smolka, Lukasz; Borkowski, Jacek; Zaton, Marek

    2014-01-01

    The purpose of the study was to investigate the effects of implementing additional respiratory dead space during cycloergometry-based aerobic training. The primary outcome measures were respiratory exchange ratio (RER) and carbon dioxide production (VCO2). Two groups of young healthy males, Experimental (Exp, n = 15) and Control (Con, n = 15), participated in this study. The training consisted of 12 sessions, performed twice a week for 6 weeks. A single training session consisted of continuous, constant-rate exercise on a cycle ergometer at 60% of VO2max, which was maintained for 30 minutes. Subjects in the Exp group were breathing through additional respiratory dead space (1200 ml), while subjects in the Con group were breathing without additional dead space. Pre-test and two post-training incremental exercise tests were performed for the detection of gas exchange variables. In all training sessions, pCO2 was higher and blood pH was lower in the Exp group (p < 0.001), ensuring respiratory acidosis. A 12-session training program resulted in a significant increase in performance time in both groups (from 17”29 ± 1”31 to 18”47 ± 1”37 in Exp; p=0.02 and from 17”20 ± 1”18 to 18”45 ± 1”44 in Con; p = 0.02), but did not reveal a significant difference in RER and VCO2 in both post-training tests, performed at rest and during submaximal workload. We interpret the lack of difference in post-training values of RER and VCO2 between groups as an absence of inhibition in glycolysis and glycogenolysis during exercise with additional dead space. Key Points The purpose of the study was to investigate the effects of implementing additional respiratory dead space during cycloergometry-based aerobic training on respiratory exchange ratio and carbon dioxide production. In all training sessions, respiratory acidosis was attained by the experimental group only. No significant difference in RER and VCO2 between the experimental and control groups due to the training. The lack of

  19. The effect of additional dead space on respiratory exchange ratio and carbon dioxide production due to training.

    PubMed

    Smolka, Lukasz; Borkowski, Jacek; Zaton, Marek

    2014-01-01

    The purpose of the study was to investigate the effects of implementing additional respiratory dead space during cycloergometry-based aerobic training. The primary outcome measures were respiratory exchange ratio (RER) and carbon dioxide production (VCO2). Two groups of young healthy males, Experimental (Exp, n = 15) and Control (Con, n = 15), participated in this study. The training consisted of 12 sessions, performed twice a week for 6 weeks. A single training session consisted of continuous, constant-rate exercise on a cycle ergometer at 60% of VO2max, which was maintained for 30 minutes. Subjects in the Exp group were breathing through additional respiratory dead space (1200 ml), while subjects in the Con group were breathing without additional dead space. Pre-test and two post-training incremental exercise tests were performed for the detection of gas exchange variables. In all training sessions, pCO2 was higher and blood pH was lower in the Exp group (p < 0.001), ensuring respiratory acidosis. A 12-session training program resulted in a significant increase in performance time in both groups (from 17"29 ± 1"31 to 18"47 ± 1"37 in Exp; p=0.02 and from 17"20 ± 1"18 to 18"45 ± 1"44 in Con; p = 0.02), but did not reveal a significant difference in RER and VCO2 in both post-training tests, performed at rest and during submaximal workload. We interpret the lack of difference in post-training values of RER and VCO2 between groups as an absence of inhibition in glycolysis and glycogenolysis during exercise with additional dead space. Key Points: The purpose of the study was to investigate the effects of implementing additional respiratory dead space during cycloergometry-based aerobic training on respiratory exchange ratio and carbon dioxide production. In all training sessions, respiratory acidosis was attained by the experimental group only. No significant difference in RER and VCO2 between the experimental and control groups due to the training. The lack of difference in post

  20. Refractive error blindness.

    PubMed Central

    Dandona, R.; Dandona, L.

    2001-01-01

    Recent data suggest that a large number of people are blind in different parts of the world due to high refractive error because they are not using appropriate refractive correction. Refractive error as a cause of blindness has been recognized only recently with the increasing use of presenting visual acuity for defining blindness. In addition to blindness due to naturally occurring high refractive error, inadequate refractive correction of aphakia after cataract surgery is also a significant cause of blindness in developing countries. Blindness due to refractive error in any population suggests that eye care services in general in that population are inadequate since treatment of refractive error is perhaps the simplest and most effective form of eye care. Strategies such as vision screening programmes need to be implemented on a large scale to detect individuals suffering from refractive error blindness. Sufficient numbers of personnel to perform reasonable quality refraction need to be trained in developing countries. Also adequate infrastructure has to be developed in underserved areas of the world to facilitate the logistics of providing affordable reasonable-quality spectacles to individuals suffering from refractive error blindness. Long-term success in reducing refractive error blindness worldwide will require attention to these issues within the context of comprehensive approaches to reduce all causes of avoidable blindness. PMID:11285669

  1. Decrease in corneal damage due to benzalkonium chloride by the addition of sericin into timolol maleate eye drops.

    PubMed

    Nagai, Noriaki; Ito, Yoshimasa; Okamoto, Norio; Shimomura, Yoshikazu

    2013-01-01

    We investigated the protective effects of sericin on corneal damage due to benzalkonium chloride (BAC) used as a preservative in commercially available timolol maleate eye drops using rat debrided corneal epithelium and a human cornea epithelial cell line (HCE-T). Corneal wounds were monitored using a fundus camera TRC-50X equipped with a digital camera; eye drops were instilled into the rat eyes five times a day after corneal epithelial abrasion. The viability of HCE-T cells was calculated by TetraColor One; and Escherichia coli (ATCC 8739) were used to measure antimicrobial activity. The reducing effects on transcorneal penetration and intraocular pressure (IOP) of the eye drops were determined using rabbits. The corneal wound healing rate and rate constants (kH) as well as cell viability were higher following treatment with 0.005% BAC solution containing 0.1% sericin than in the case of treatment with BAC solution alone; the antimicrobial activity was approximately the same for BAC solutions with and without sericin. In addition, the kH for rat eyes instilled with commercially available timolol maleate eye drops containing 0.1% sericin was significantly higher than that of eyes instilled with timolol maleate eye drops without sericin, and the addition of sericin did not affect the corneal penetration or IOP reducing effect of commercially available timolol maleate eye drops. A preservative system comprising BAC and sericin may provide effective therapy for glaucoma patients requiring long-term anti-glaucoma agents.

  2. Bacterial Cooperation Causes Systematic Errors in Pathogen Risk Assessment due to the Failure of the Independent Action Hypothesis

    PubMed Central

    Cornforth, Daniel M.; Matthews, Andrew; Brown, Sam P.; Raymond, Ben

    2015-01-01

    The Independent Action Hypothesis (IAH) states that pathogenic individuals (cells, spores, virus particles etc.) behave independently of each other, so that each has an independent probability of causing systemic infection or death. The IAH is not just of basic scientific interest; it forms the basis of our current estimates of infectious disease risk in humans. Despite the important role of the IAH in managing disease interventions for food and water-borne pathogens, experimental support for the IAH in bacterial pathogens is indirect at best. Moreover since the IAH was first proposed, cooperative behaviors have been discovered in a wide range of microorganisms, including many pathogens. A fundamental principle of cooperation is that the fitness of individuals is affected by the presence and behaviors of others, which is contrary to the assumption of independent action. In this paper, we test the IAH in Bacillus thuringiensis (B.t), a widely occurring insect pathogen that releases toxins that benefit others in the inoculum, infecting the diamondback moth, Plutella xylostella. By experimentally separating B.t. spores from their toxins, we demonstrate that the IAH fails because there is an interaction between toxin and spore effects on mortality, where the toxin effect is synergistic and cannot be accommodated by independence assumptions. Finally, we show that applying recommended IAH dose-response models to high dose data leads to systematic overestimation of mortality risks at low doses, due to the presence of synergistic pathogen interactions. Our results show that cooperative secretions can easily invalidate the IAH, and that such mechanistic details should be incorporated into pathogen risk analysis. PMID:25909384

  3. Aberrant Expression of TIMP-2 and PBEF Genes in the Placentae of Cloned Mice Due to Epigenetic Reprogramming Error

    PubMed Central

    Kim, Hong Rye; Lee, Jae Eun; Oqani, Reza Kheirkhahi; Kim, So Yeon; Wakayama, Teruhiko; Li, Chong; Sa, Su Jin; Woo, Je Seok; Jin, Dong Il

    2016-01-01

    Cloned mice derived from somatic or ES cells show placental overgrowth (placentomegaly) at term. We had previously analyzed cloned and normal mouse placentae by using two-dimensional gel electrophoresis and mass spectrometry to identify differential protein expression patterns. Cloned placentae showed upregulation of tissue inhibitor of metalloproteinase-2 (TIMP-2), which is involved in extracellular matrix degradation and tissue remodeling, and downregulation of pre-B cell colony enhancing factor 1 (PBEF), which inhibits apoptosis and induces spontaneous labor. Here, we used Western blotting to further analyze the protein expression levels of TIMP-2 and PBEF in cloned placentae derived from cumulus cells, TSA-treated cumulus cells, intracytoplasmic sperm injection (ICSI), and natural mating (NM control). Cloned and TSA-treated cloned placentae had higher expression levels of TIMP-2 compared with NM control and ICSI-derived placentae, and there was a positive association between TIMP-2 expression and the placental weight of cloned mouse concepti. Conversely, PBEF protein expression was significantly lower in cloned and ICSI placentae compared to NM controls. To examine whether the observed differences were due to abnormal gene expression caused by faulty epigenetic reprogramming in clones, we investigated DNA methylation and histone modification in the promoter regions of the genes encoding TIMP-2 and PBEF. Sodium bisulfite sequencing did not reveal any difference in DNA methylation between cloned and NM control placentae. However, ChIP assays revealed that the level of H3-K9/K14 acetylation at the TIMP-2 locus was higher in cloned placentae than in NM controls, whereas acetylation of the PBEF promoter was lower in cloned and ICSI placenta versus NM controls. These results suggest that cloned placentae appear to suffer from failure of histone modification-based reprogramming in these (and potentially other) developmentally important genes, leading to aberrant

  4. Estimating Parallax Error Due to Orbital Motion for HST/WFC3 Spatial Scan Observations of 19 Long-period Classical Cepheids

    NASA Astrophysics Data System (ADS)

    Anderson, Richard I.; Casertano, Stefano; Riess, Adam G.

    2017-01-01

    We employ the Hubble Space Telescope's Wide Field Camera 3 (HST/WFC3) in spatial scanning mode to measure 30-40 μas parallaxes of 19 classical Cepheids in the Milky Way with the aim of improving the calibration of the cosmic distance scale (Riess et al. 2014; Casertano et al. 2016). The measured parallaxes are an order of magnitude more precise than parallaxes from the first Gaia data release and thus also provide important cross-checks for Gaia data processing. Here we present our work aimed at estimating the parallax error due to orbital motion caused by undetected companion stars (Anderson et al. 2016). We have secured more than 1600 high-precision radial velocity (RV) measurements of the 19 long-period (Ppuls > 9d) Cepheids in our sample using ground-based telescopes on both hemispheres to investigate the presence of spectroscopic companions. We model the RV variability together with orbital motion using a grid of input orbital periods, Porb. We determine upper limits on the (unsigned) projected parallax error induced by hypothetical companions using the orbital configuration upper limits determined by modeling RV data. We thus show that our HST/WFC3 parallax measurements are subject to an error of less than 2% in parallax (i.e., typically less than ±7 μas) for 16 stars in the sample, and < 4% for two Cepheids with fewer RV observations. For YZ Carinae, however, we correct the previously published orbital solution and show that the astrometric model must take into account orbital motion to avoid significant (approx. ±100 μas) parallax error. We have further investigated long-timescale (Porb > 10 yr) orbital motion using literature data and RV templates based on our new data. We thus discover new evidence for RV signals due to long-term orbital motion for 4 Cepheids and critically assess putative evidence for spectroscopic binarity previously reported based on data of much lower quality. We caution that astrometric measurements of binaries with Porb on

  5. Skylab water balance error analysis

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1977-01-01

    Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and little can be gained from improvement in analytical accuracy. In addition, a propagation of error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.
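
    For illustration of the propagation-of-error bookkeeping described above (the numbers below are invented, not the Skylab values), the total water-balance variance can be written as the sum of per-term variances plus covariance terms, which makes it easy to see how a single term such as body mass change can dominate the total error:

        import math

        # Hypothetical standard deviations (kg/day) for water-balance terms;
        # illustrative only, not the Skylab estimates.
        sigma = {
            "body_mass_change": 0.45,   # assumed dominant error source
            "water_intake":     0.08,
            "urine_output":     0.05,
            "evaporative_loss": 0.10,
        }
        # Assumed covariance between two interacting terms (illustrative).
        cov_terms = {("water_intake", "urine_output"): 0.001}

        var_independent = sum(s ** 2 for s in sigma.values())
        var_covariance = 2.0 * sum(cov_terms.values())
        var_total = var_independent + var_covariance

        for name, s in sigma.items():
            print(f"{name:18s} {100.0 * s ** 2 / var_total:5.1f}% of total variance")
        print(f"{'covariance terms':18s} {100.0 * var_covariance / var_total:5.1f}% of total variance")
        print(f"total sigma = {math.sqrt(var_total):.3f} kg/day")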

  6. Too generous to a fault? Is reliable earthquake safety a lost art? Errors in expected human losses due to incorrect seismic hazard estimates

    NASA Astrophysics Data System (ADS)

    Bela, James

    2014-11-01

    "One is well advised, when traveling to a new territory, to take a good map and then to check the map with the actual territory during the journey." In just such a reality check, Global Seismic Hazard Assessment Program (GSHAP) maps (prepared using PSHA) portrayed a "low seismic hazard," which was then also assumed to be the "risk to which the populations were exposed." But time-after-time-after-time the actual earthquakes that occurred were not only "surprises" (many times larger than those implied on the maps), but they were often near the maximum potential size (Maximum Credible Earthquake or MCE) that geologically could occur. Given these "errors in expected human losses due to incorrect seismic hazard estimates" revealed globally in these past performances of the GSHAP maps (> 700,000 deaths 2001-2011), we need to ask not only: "Is reliable earthquake safety a lost art?" but also: "Who and what were the `Raiders of the Lost Art?' "

  7. SU-E-T-385: Evaluation of DVH Change for PTV Due to Patient Weight Loss in Prostate VMAT Using Gaussian Error Function

    SciTech Connect

    Viraganathan, H; Jiang, R; Chow, J

    2015-06-15

    Purpose: We proposed a method to predict the change of dose-volume histogram (DVH) for PTV due to patient weight loss in prostate volumetric modulated arc therapy (VMAT). This method is based on a pre-calculated patient dataset and DVH curve fitting using the Gaussian error function (GEF). Methods: Pre-calculated dose-volume data from patients having weight loss in prostate VMAT were employed to predict the change of PTV coverage due to reduced depth in external contour. The effect of patient weight loss in treatment was described by a prostate dose-volume factor (PDVF), which was evaluated by the prostate PTV. Along with the PDVF, the GEF was used to fit the DVH curve for the PTV. To predict a new DVH due to weight loss, parameters from the GEF describing the shape of the DVH curve were determined. Since the parameters were related to the PDVF as per the specific reduced depth, we could first predict the PDVF at a reduced depth based on the prostate size from the pre-calculated dataset. Then parameters of the GEF could be determined from the PDVF to plot the new DVH for the PTV corresponding to the reduced depth. Results: A MATLAB program was built based on the patient dataset with different prostate sizes. We input the prostate size and reduced depth of the patient into the program. The program then calculated the PDVF and DVH for the PTV considering the patient weight loss. The program was verified by different patient cases with various reduced depths. Conclusion: Our method can estimate the change of DVH for the PTV due to patient weight loss quickly without CT rescan and replan. This would help the radiation staff to predict the change of PTV coverage when the patient’s external contour is reduced in prostate VMAT.

  8. SU-E-J-164: Estimation of DVH Variation for PTV Due to Interfraction Organ Motion in Prostate VMAT Using Gaussian Error Function

    SciTech Connect

    Lewis, C; Jiang, R; Chow, J

    2015-06-15

    Purpose: We developed a method to predict the change of DVH for PTV due to interfraction organ motion in prostate VMAT without repeating the CT scan and treatment planning. The method is based on a pre-calculated patient database with DVH curves of PTV modelled by the Gaussian error function (GEF). Methods: For a group of 30 patients with different prostate sizes, their VMAT plans were recalculated by shifting their PTVs 1 cm with 10 increments in the anterior-posterior, left-right and superior-inferior directions. The DVH curve of PTV in each replan was then fitted by the GEF to determine parameters describing the shape of the curve. Information of parameters, varying with the DVH change due to prostate motion for different prostate sizes, was analyzed and stored in a database of a program written in MATLAB. Results: To predict a new DVH for PTV due to prostate interfraction motion, prostate size and shift distance with direction were input to the program. Parameters modelling the DVH for PTV were determined based on the pre-calculated patient dataset. From the new parameters, DVH curves of PTVs with and without considering the prostate motion were plotted for comparison. The program was verified with different prostate cases involving interfraction prostate shifts and replans. Conclusion: Variation of DVH for PTV in prostate VMAT can be predicted using a pre-calculated patient database with DVH curve fitting. The computing time is fast because CT rescan and replan are not required. This quick DVH estimation can help radiation staff to determine if the changed PTV coverage due to prostate shift is tolerable in the treatment. However, it should be noted that the program can only consider prostate interfraction motions along three axes, and is restricted to prostate VMAT plans using the same plan script in the treatment planning system.
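
    As a rough illustration of the curve-fitting step (not the authors' code), a cumulative DVH can be modeled with a Gaussian error function and fitted for two shape parameters; the particular form V(D) = 0.5*erfc((D - D50)/(sqrt(2)*sigma)) and all numbers below are assumptions made for this sketch:

        import numpy as np
        from scipy.special import erfc
        from scipy.optimize import curve_fit

        def dvh_gef(dose, d50, sigma):
            """Cumulative DVH: fraction of the PTV receiving at least `dose`,
            modeled here with a Gaussian error function."""
            return 0.5 * erfc((dose - d50) / (np.sqrt(2.0) * sigma))

        # Synthetic DVH samples for a PTV (dose in Gy, fractional volume); illustrative only.
        rng = np.random.default_rng(0)
        dose = np.linspace(60.0, 82.0, 23)
        volume = dvh_gef(dose, 76.0, 1.5) + rng.normal(0.0, 0.01, dose.size)

        popt, _ = curve_fit(dvh_gef, dose, volume, p0=[75.0, 2.0])
        print(f"fitted shape parameters: D50 = {popt[0]:.2f} Gy, sigma = {popt[1]:.2f} Gy")

    Tabulating the fitted parameters against prostate size and shift then lets a new DVH be generated by lookup instead of a full replan, which is the spirit of the database approach described above.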

  9. Estimating errors in cloud amount and cloud optical thickness due to limited spatial sampling using a satellite imager as a proxy for nadir-view sensors

    NASA Astrophysics Data System (ADS)

    Liu, Yinghui

    2015-07-01

    Cloud climatologies from space-based active sensors have been used in climate and other studies without their uncertainties specified. This study quantifies the errors in monthly mean cloud amount and optical thickness due to the limited spatial sampling of space-based active sensors. Nadir-view observations from a satellite imager, the Moderate Resolution Imaging Spectroradiometer (MODIS), serve as a proxy for those active sensors and observations within 10° of the sensor's nadir view serve as truth for data from 2003 to 2013 in the Arctic. June-July monthly mean cloud amount and liquid water and ice cloud optical thickness from MODIS for both observations are calculated and compared. Results show that errors increase with decreasing sample numbers for monthly means in cloud amount and cloud optical thickness. The root-mean-square error of monthly mean cloud amount from nadir-view observations increases with lower latitudes, with 0.7% (1.4%) at 80°N and 4.2% (11.2%) at 60°N using data from 2003 to 2013 (from 2012). For a 100 km resolution Equal-Area Scalable Earth Grid (EASE-Grid) cell of 1000 sample numbers, the absolute differences in these two monthly mean cloud amounts are less than 6.5% (9.0%, 11.5%) with an 80 (90, 95)% chance; such differences decrease to 4.0% (5.0%, 6.5%) with 5000 sample numbers. For a 100 km resolution EASE-Grid of 1000 sample numbers, the absolute differences in these two monthly mean cloud optical thicknesses are less than 2.7 (3.8) with a 90% chance for liquid water cloud (ice cloud); such differences decrease to 1.3 (1.0) for 5000 sample numbers. The uncertainties in monthly mean cloud amount and optical thickness estimated in this study may provide useful information for applying cloud climatologies from active sensors in climate studies and suggest the need for future spaceborne active sensors with a wide swath.
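
    A toy calculation of this kind of sampling-error estimate (purely synthetic fields, not MODIS data): daily cloud-amount maps are averaged once using full coverage and once using only a narrow nadir strip per day, and the disagreement of the two monthly means is summarized as an RMSE:

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic daily cloud-amount fields (day, y, x) for one month; values in [0, 1].
        days, ny, nx = 30, 50, 50
        cloud = np.clip(rng.normal(0.7, 0.2, size=(days, ny, nx)), 0.0, 1.0)

        # "Truth": monthly mean using every pixel of every day.
        truth = cloud.mean(axis=0)

        # Nadir-only proxy: each day only a narrow strip of columns is sampled.
        nadir_width = 3
        total = np.zeros((ny, nx))
        samples = np.zeros((ny, nx))
        for d in range(days):
            c0 = rng.integers(0, nx - nadir_width)          # that day's ground-track position
            strip = slice(c0, c0 + nadir_width)
            total[:, strip] += cloud[d][:, strip]
            samples[:, strip] += 1

        nadir_mean = np.where(samples > 0, total / np.maximum(samples, 1), np.nan)
        rmse = np.sqrt(np.nanmean((nadir_mean - truth) ** 2))
        print(f"RMSE of nadir-only monthly mean cloud amount: {rmse:.3f}")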

  10. Common Ion Effects In Zeoponic Substrates: Dissolution And Cation Exchange Variations Due to Additions of Calcite, Dolomite and Wollastonite

    NASA Technical Reports Server (NTRS)

    Beiersdorfer, R. E.; Ming, D. W.; Galindo, C., Jr.

    2003-01-01

    A clinoptilolite-rich tuff-hydroxyapatite mixture (zeoponic substrate) has the potential to serve as a synthetic soil-additive for plant growth. Essential plant macro-nutrients such as calcium, phosphorus, magnesium, ammonium and potassium are released into solution via dissolution of the hydroxyapatite and cation exchange on zeolite charged sites. Plant growth experiments resulting in low yield for wheat have been attributed to a Ca deficiency caused by a high degree of cation exchange by the zeolite. Batch-equilibration experiments were performed in order to determine if the Ca deficiency can be remedied by the addition of a second Ca-bearing, soluble, mineral such as calcite, dolomite or wollastonite. Variations in the amount of calcite, dolomite or wollastonite resulted in systematic changes in the concentrations of Ca and P. The addition of calcite, dolomite or wollastonite to the zeoponic substrate resulted in an exponential decrease in the phosphorus concentration in solution. The exponential rate of decay was greatest for calcite (5.60 (wt.%)(-1)), intermediate for wollastonite (2.85 (wt.%)(-1)) and least for dolomite (1.58 (wt.%)(-1)). Additions of the three minerals resulted in linear increases in the calcium concentration in solution. The rate of increase was greatest for calcite (3.64), intermediate for wollastonite (2.41) and least for dolomite (0.61). The observed changes in P and Ca concentration are consistent with the solubilities of calcite, dolomite and wollastonite and with changes expected from a common ion effect with Ca. Keywords: zeolite, zeoponics, common-ion effect, clinoptilolite, hydroxyapatite

  11. Decrease in Corneal Damage due to Benzalkonium Chloride by the Addition of Mannitol into Timolol Maleate Eye Drops.

    PubMed

    Nagai, Noriaki; Yoshioka, Chiaki; Tanino, Tadatoshi; Ito, Yoshimasa; Okamoto, Norio; Shimomura, Yoshikazu

    2015-01-01

    We investigated the protective effects of mannitol on corneal damage caused by benzalkonium chloride (BAC), which is used as a preservative in commercially available timolol maleate eye drops, using rat debrided corneal epithelium and a human cornea epithelial cell line (HCE-T). Corneal wounds were monitored using a fundus camera TRC-50X equipped with a digital camera; eye drops were instilled into rat eyes five times a day after corneal epithelial abrasion. The viability of HCE-T cells was calculated by TetraColor One, and Escherichia coli (ATCC 8739) were used to measure antimicrobial activity. The reducing effects on transcorneal penetration and intraocular pressure (IOP) of the eye drops were determined using rabbits. The corneal wound healing rate and rate constant (kH), as well as cell viability, were higher following treatment with 0.005% BAC solution containing 0.5% mannitol than with BAC solution alone; the antimicrobial activity was approximately the same for BAC solutions with and without mannitol. In addition, the kH for rat eyes instilled with commercially available timolol maleate eye drops containing 0.5% mannitol was significantly higher than that for eyes instilled with timolol maleate eye drops without mannitol, and the addition of mannitol did not affect the corneal penetration or IOP reducing effect of the timolol maleate eye drops. A preservative system comprising BAC and mannitol may provide effective therapy for glaucoma patients requiring long-term treatment with anti-glaucoma agents.

  12. Additive effects due to biochar and endophyte application enable soybean to enhance nutrient uptake and modulate nutritional parameters

    PubMed Central

    Waqas, Muhammad; Kim, Yoon-Ha; Khan, Abdul Latif; Shahzad, Raheem; Asaf, Sajjad; Hamayun, Muhammad; Kang, Sang-Mo; Khan, Muhammad Aaqil; Lee, In-Jung

    2017-01-01

    We studied the effects of hardwood-derived biochar (BC) and the phytohormone-producing endophyte Galactomyces geotrichum WLL1 in soybean (Glycine max (L.) Merr.) with respect to basic, macro- and micronutrient uptakes and assimilations, and their subsequent effects on the regulation of functional amino acids, isoflavones, fatty acid composition, total sugar contents, total phenolic contents, and 1,1-diphenyl-2-picrylhydrazyl (DPPH)-scavenging activity. The assimilation of basic nutrients such as nitrogen was up-regulated, leaving carbon, oxygen, and hydrogen unaffected in BC+G. geotrichum-treated soybean plants. In comparison, the uptakes of macro- and micronutrients fluctuated in the individual or co-application of BC and G. geotrichum in soybean plant organs and rhizospheric substrate. Moreover, the same attribute was recorded for the regulation of functional amino acids, isoflavones, fatty acid composition, total sugar contents, total phenolic contents, and DPPH-scavenging activity. Collectively, these results showed that BC+G. geotrichum-treated soybean yielded better results than did the plants treated with individual applications. It was concluded that BC is an additional nutriment source and that the G. geotrichum acts as a plant biostimulating source and the effects of both are additive towards plant growth promotion. Strategies involving the incorporation of BC and endophytic symbiosis may help achieve eco-friendly agricultural production, thus reducing the excessive use of chemical agents. PMID:28124840

  13. Error probability performance of unbalanced QPSK receivers

    NASA Technical Reports Server (NTRS)

    Simon, M. K.

    1978-01-01

    A simple technique for calculating the error probability performance and associated noisy reference loss of practical unbalanced QPSK receivers is presented. The approach is based on expanding the error probability conditioned on the loop phase error in a power series in the loop phase error and then, keeping only the first few terms of this series, averaging this conditional error probability over the probability density function of the loop phase error. Doing so results in an expression for the average error probability which is in the form of a leading term representing the ideal (perfect synchronization references) performance plus a term proportional to the mean-squared crosstalk. Thus, the additional error probability due to noisy synchronization references occurs as an additive term proportional to the mean-squared phase jitter directly associated with the receiver's tracking loop. Similar arguments are advanced to give closed-form results for the noisy reference loss itself.
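
    A minimal numeric sketch of the averaging idea described above (not the paper's unbalanced QPSK expressions; a BPSK-style conditional error probability is used as a stand-in, and the SNR and jitter values are arbitrary): the conditional error probability is expanded about zero phase error, the leading terms are kept, and the truncated series is compared with a direct numerical average over a Gaussian phase-jitter density:

        import numpy as np
        from scipy.stats import norm
        from scipy.integrate import quad

        def p_cond(phi, snr):
            """Error probability conditioned on carrier phase error phi (BPSK-style stand-in)."""
            return norm.sf(np.sqrt(2.0 * snr) * np.cos(phi))

        snr = 10.0 ** (7.0 / 10.0)      # 7 dB symbol SNR, arbitrary
        sigma_phi = 0.1                 # rms loop phase jitter in radians, arbitrary

        # Direct average of the conditional error probability over the phase-error density.
        exact, _ = quad(lambda phi: p_cond(phi, snr) * norm.pdf(phi, scale=sigma_phi),
                        -np.pi, np.pi)

        # Truncated power series: P(E) ~ P(E|0) + 0.5 * P''(E|0) * sigma_phi**2,
        # with the second derivative taken by finite differences.
        h = 1e-3
        d2 = (p_cond(h, snr) - 2.0 * p_cond(0.0, snr) + p_cond(-h, snr)) / h ** 2
        series = p_cond(0.0, snr) + 0.5 * d2 * sigma_phi ** 2

        print(f"numerically averaged P(E): {exact:.3e}")
        print(f"truncated-series P(E):     {series:.3e}")

    The second term plays the role of the additive degradation proportional to the mean-squared phase jitter; the gap between the two numbers indicates how much the higher-order terms matter at this jitter level.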

  14. Correction for 'artificial' electron disequilibrium due to cone-beam CT density errors: implications for on-line adaptive stereotactic body radiation therapy of lung.

    PubMed

    Disher, Brandon; Hajdok, George; Wang, An; Craig, Jeff; Gaede, Stewart; Battista, Jerry J

    2013-06-21

    Cone-beam computed tomography (CBCT) has rapidly become a clinically useful imaging modality for image-guided radiation therapy. Unfortunately, CBCT images of the thorax are susceptible to artefacts due to scattered photons, beam hardening, lag in data acquisition, and respiratory motion during a slow scan. These limitations cause dose errors when CBCT image data are used directly in dose computations for on-line, dose adaptive radiation therapy (DART). The purpose of this work is to assess the magnitude of errors in CBCT numbers (HU), and determine the resultant effects on derived tissue density and computed dose accuracy for stereotactic body radiation therapy (SBRT) of lung cancer. Planning CT (PCT) images of three lung patients were acquired using a Philips multi-slice helical CT simulator, while CBCT images were obtained with a Varian On-Board Imaging system. To account for erroneous CBCT data, three practical correction techniques were tested: (1) conversion of CBCT numbers to electron density using phantoms, (2) replacement of individual CBCT pixel values with bulk CT numbers, averaged from PCT images for tissue regions, and (3) limited replacement of CBCT lung pixels values (LCT) likely to produce artificial lateral electron disequilibrium. For each corrected CBCT data set, lung SBRT dose distributions were computed for a 6 MV volume modulated arc therapy (VMAT) technique within the Philips Pinnacle treatment planning system. The reference prescription dose was set such that 95% of the planning target volume (PTV) received at least 54 Gy (i.e. D95). Further, we used the relative depth dose factor as an a priori index to predict the effects of incorrect low tissue density on computed lung dose in regions of severe electron disequilibrium. CT number profiles from co-registered CBCT and PCT patient lung images revealed many reduced lung pixel values in CBCT data, with some pixels corresponding to vacuum (-1000 HU). Similarly, CBCT data in a plastic lung
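
    Correction technique (1) above is, at its core, a phantom-derived mapping from CT number to density. A minimal sketch of such a lookup is given below; the breakpoints are invented for illustration and are not a clinical calibration curve:

        import numpy as np

        # Illustrative phantom calibration points: (CT number in HU, relative electron density).
        # Real curves are scanner-, energy- and protocol-specific.
        hu_points  = np.array([-1000.0, -700.0, 0.0, 1000.0, 3000.0])
        red_points = np.array([  0.001,   0.30, 1.00,  1.50,   2.50])

        def hu_to_relative_electron_density(hu):
            """Piecewise-linear lookup from CT number to relative electron density."""
            return np.interp(hu, hu_points, red_points)

        # Example CBCT lung/soft-tissue pixel values (HU), illustrative.
        cbct_pixels = np.array([-950.0, -820.0, -400.0, 20.0, 60.0])
        print(hu_to_relative_electron_density(cbct_pixels))

    With CBCT the lookup is only as good as the input CT numbers, which is why the scatter and motion artefacts discussed above (and the bulk or limited pixel replacement strategies) matter before any such conversion is applied.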

  15. Correction for ‘artificial’ electron disequilibrium due to cone-beam CT density errors: implications for on-line adaptive stereotactic body radiation therapy of lung

    NASA Astrophysics Data System (ADS)

    Disher, Brandon; Hajdok, George; Wang, An; Craig, Jeff; Gaede, Stewart; Battista, Jerry J.

    2013-06-01

    Cone-beam computed tomography (CBCT) has rapidly become a clinically useful imaging modality for image-guided radiation therapy. Unfortunately, CBCT images of the thorax are susceptible to artefacts due to scattered photons, beam hardening, lag in data acquisition, and respiratory motion during a slow scan. These limitations cause dose errors when CBCT image data are used directly in dose computations for on-line, dose adaptive radiation therapy (DART). The purpose of this work is to assess the magnitude of errors in CBCT numbers (HU), and determine the resultant effects on derived tissue density and computed dose accuracy for stereotactic body radiation therapy (SBRT) of lung cancer. Planning CT (PCT) images of three lung patients were acquired using a Philips multi-slice helical CT simulator, while CBCT images were obtained with a Varian On-Board Imaging system. To account for erroneous CBCT data, three practical correction techniques were tested: (1) conversion of CBCT numbers to electron density using phantoms, (2) replacement of individual CBCT pixel values with bulk CT numbers, averaged from PCT images for tissue regions, and (3) limited replacement of CBCT lung pixels values (LCT) likely to produce artificial lateral electron disequilibrium. For each corrected CBCT data set, lung SBRT dose distributions were computed for a 6 MV volume modulated arc therapy (VMAT) technique within the Philips Pinnacle treatment planning system. The reference prescription dose was set such that 95% of the planning target volume (PTV) received at least 54 Gy (i.e. D95). Further, we used the relative depth dose factor as an a priori index to predict the effects of incorrect low tissue density on computed lung dose in regions of severe electron disequilibrium. CT number profiles from co-registered CBCT and PCT patient lung images revealed many reduced lung pixel values in CBCT data, with some pixels corresponding to vacuum (-1000 HU). Similarly, CBCT data in a plastic lung

  16. Radio metric errors due to mismatch and offset between a DSN antenna beam and the beam of a troposphere calibration instrument

    NASA Technical Reports Server (NTRS)

    Linfield, R. P.; Wilcox, J. Z.

    1993-01-01

    Two components of the error of a troposphere calibration measurement were quantified by theoretical calculations. The first component is a beam mismatch error, which occurs when the calibration instrument senses a conical volume different from the cylindrical volume sampled by a Deep Space Network (DSN) antenna. The second component is a beam offset error, which occurs if the calibration instrument is not mounted on the axis of the DSN antenna. These two error sources were calculated for both delay (e.g., VLBI) and delay rate (e.g., Doppler) measurements. The beam mismatch error for both delay and delay rate drops rapidly as the beamwidth of the troposphere calibration instrument (e.g., a water vapor radiometer or an infrared Fourier transform spectrometer) is reduced. At a 10-deg elevation angle, the instantaneous beam mismatch error is 1.0 mm for a 6-deg beamwidth and 0.09 mm for a 0.5-deg beam (these are the full angular widths of a circular beam with uniform gain out to a sharp cutoff). Time averaging for 60-100 sec will reduce these errors by factors of 1.2-2.2. At a 20-deg elevation angle, the lower limit for current Doppler observations, the beam-mismatch delay rate error is an Allan standard deviation over 100 sec of 1.1 x 10(exp -14) with a 4-deg beam and 1.3 x 10(exp -l5) for a 0.5-deg beam. A 50-m beam offset would result in a fairly modest (compared to other expected error sources) delay error (less than or equal to 0.3 mm for 60-sec integrations at any elevation angle is greater than or equal to 6 deg). However, the same offset would cause a large error in delay rate measurements (e.g., an Allan standard deviation of 1.2 x 10(exp -14) over 100 sec at a 20-deg elevation angle), which would dominate over other known error sources if the beamwidth is 2 deg or smaller. An on-axis location is essential for accurate troposphere calibration of delay rate measurements. A half-power beamwidth (for a beam with a tapered gain profile) of 1.2 deg or smaller is

  17. SU-E-P-13: Quantifying the Geometric Error Due to Irregular Motion in Four-Dimensional Computed Tomography (4DCT)

    SciTech Connect

    Sawant, A

    2015-06-15

    Purpose: Respiratory correlated 4DCT images are generated under the assumption of a regular breathing cycle. This study evaluates the error in 4DCT-based target position estimation in the presence of irregular respiratory motion. Methods: A custom-made programmable externally- and internally-deformable lung motion phantom was placed inside the CT bore. An abdominal pressure belt was placed around the phantom to mimic clinical 4DCT acquisition, and the motion platform was programmed with a sinusoidal (±10mm, 10 cycles per minute) motion trace and 7 motion traces recorded from lung cancer patients. The same setup and motion trajectories were repeated in the linac room and kV fluoroscopic images were acquired using the on-board imager. Positions of 4 internal markers segmented from the 4DCT volumes were overlaid upon the motion trajectories derived from the fluoroscopic time series to calculate the difference between estimated (4DCT) and “actual” (kV fluoro) positions. Results: With a sinusoidal trace, absolute errors of the 4DCT estimated marker positions vary between 0.78mm and 5.4mm and RMS errors are between 0.38mm and 1.7mm. With irregular patient traces, absolute errors of the 4DCT estimated marker positions increased significantly by 100 to 200 percent, while the corresponding RMS error values have much smaller changes. Significant mismatches were frequently found at peak-inhale or peak-exhale phase. Conclusion: As expected, under conditions of well-behaved, periodic sinusoidal motion, the 4DCT yielded much better estimation of marker positions. When an actual patient trace is used, 4DCT-derived positions showed significant mismatches with the fluoroscopic trajectories, indicating the potential for geometric and therefore dosimetric errors in the presence of cycle-to-cycle respiratory variations.

  18. Anemia Causes Hypoglycemia in Intensive Care Unit Patients Due to Error in Single-Channel Glucometers: Methods of Reducing Patient Risk

    DTIC Science & Technology

    2010-01-01

    hematocrit, low oxygen tension, acetaminophen, uric acid, ascorbic acid, maltose, galactose, xylose, lactose, operator inexperience, age of strips, heat... Biomedical, Waltham, MA) that corrects for the effects of anemia, low oxygen tension, acetaminophen, uric acid, ascorbic acid, maltose, galactose, xylose, and... maltose, galactose, xylose, lactose) and low oxygen tension were eliminated as significant contributors to error by comparison of glucometer results

  19. Measuring uncertainty in dose delivered to the cochlea due to setup error during external beam treatment of patients with cancer of the head and neck

    SciTech Connect

    Yan, M.; Lovelock, D.; Hunt, M.; Mechalakos, J.; Hu, Y.; Pham, H.; Jackson, A.

    2013-12-15

    Purpose: To use Cone Beam CT scans obtained just prior to treatments of head and neck cancer patients to measure the setup error and cumulative dose uncertainty of the cochlea. Methods: Data from 10 head and neck patients with 10 planning CTs and 52 Cone Beam CTs taken at time of treatment were used in this study. Patients were treated with conventional fractionation using an IMRT dose painting technique, most with 33 fractions. Weekly radiographic imaging was used to correct the patient setup. The authors used rigid registration of the planning CT and Cone Beam CT scans to find the translational and rotational setup errors, and the spatial setup errors of the cochlea. The planning CT was rotated and translated such that the cochlea positions match those seen in the cone beam scans, cochlea doses were recalculated and fractional doses accumulated. Uncertainties in the positions and cumulative doses of the cochlea were calculated with and without setup adjustments from radiographic imaging. Results: The mean setup error of the cochlea was 0.04 ± 0.33 or 0.06 ± 0.43 cm for RL, 0.09 ± 0.27 or 0.07 ± 0.48 cm for AP, and 0.00 ± 0.21 or −0.24 ± 0.45 cm for SI with and without radiographic imaging, respectively. Setup with radiographic imaging reduced the standard deviation of the setup error by roughly 1–2 mm. The uncertainty of the cochlea dose depends on the treatment plan and the relative positions of the cochlea and target volumes. Combining results for the left and right cochlea, the authors found the accumulated uncertainty of the cochlea dose per fraction was 4.82 (0.39–16.8) cGy, or 10.1 (0.8–32.4) cGy, with and without radiographic imaging, respectively; the percentage uncertainties relative to the planned doses were 4.32% (0.28%–9.06%) and 10.2% (0.7%–63.6%), respectively. Conclusions: Patient setup error introduces uncertainty in the position of the cochlea during radiation treatment. With the assistance of radiographic imaging during setup

  20. Measuring uncertainty in dose delivered to the cochlea due to setup error during external beam treatment of patients with cancer of the head and neck

    PubMed Central

    Yan, M.; Lovelock, D.; Hunt, M.; Mechalakos, J.; Hu, Y.; Pham, H.; Jackson, A.

    2013-01-01

    Purpose: To use Cone Beam CT scans obtained just prior to treatments of head and neck cancer patients to measure the setup error and cumulative dose uncertainty of the cochlea. Methods: Data from 10 head and neck patients with 10 planning CTs and 52 Cone Beam CTs taken at time of treatment were used in this study. Patients were treated with conventional fractionation using an IMRT dose painting technique, most with 33 fractions. Weekly radiographic imaging was used to correct the patient setup. The authors used rigid registration of the planning CT and Cone Beam CT scans to find the translational and rotational setup errors, and the spatial setup errors of the cochlea. The planning CT was rotated and translated such that the cochlea positions match those seen in the cone beam scans, cochlea doses were recalculated and fractional doses accumulated. Uncertainties in the positions and cumulative doses of the cochlea were calculated with and without setup adjustments from radiographic imaging. Results: The mean setup error of the cochlea was 0.04 ± 0.33 or 0.06 ± 0.43 cm for RL, 0.09 ± 0.27 or 0.07 ± 0.48 cm for AP, and 0.00 ± 0.21 or −0.24 ± 0.45 cm for SI with and without radiographic imaging, respectively. Setup with radiographic imaging reduced the standard deviation of the setup error by roughly 1–2 mm. The uncertainty of the cochlea dose depends on the treatment plan and the relative positions of the cochlea and target volumes. Combining results for the left and right cochlea, the authors found the accumulated uncertainty of the cochlea dose per fraction was 4.82 (0.39–16.8) cGy, or 10.1 (0.8–32.4) cGy, with and without radiographic imaging, respectively; the percentage uncertainties relative to the planned doses were 4.32% (0.28%–9.06%) and 10.2% (0.7%–63.6%), respectively. Conclusions: Patient setup error introduces uncertainty in the position of the cochlea during radiation treatment. With the assistance of radiographic imaging during setup

  1. Relativistic regimes in which Compton scattering doubly differential cross sections obtained from impulse approximation are accurate due to cancelation of errors

    NASA Astrophysics Data System (ADS)

    Lajohn, L. A.; Pratt, R. H.

    2015-05-01

    There is no simple parameter that can be used to predict when impulse approximation (IA) can yield accurate Compton scattering doubly differential cross sections (DDCS) in relativistic regimes. When Z is low, a small value of the parameter ⟨p⟩/q (where ⟨p⟩ is the average initial electron momentum and q is the momentum transfer) suffices. For small Z the photon electron kinematic contribution described in relativistic S-matrix (SM) theory reduces to an expression, Xrel, which is present in the relativistic impulse approximation (RIA) formula for Compton DDCS. When Z is high, the S-matrix photon electron kinematics no longer reduce to Xrel, and this, along with the error characterized by the magnitude of ⟨p⟩/q, contributes to the RIA error Δ. We demonstrate and illustrate in the form of contour plots that there are regimes of incident photon energy ωi and scattering angle θ in which the two types of errors at least partially cancel. Our calculations show that when θ is about 65° for Uranium K-shell scattering, Δ is less than 1% over an ωi range of 300 to 900 keV.

  2. A Comparison of the Hybrid and EnSRF Analysis Schemes in the Presence of Model Errors due to Unresolved Scales

    DTIC Science & Technology

    2009-10-01

    variance matrix; and P^b is the background-error covariance. As in Wang et al. (2007a), P^b H^T and H P^b H^T are formed by H P^b H^T = (1 - α)(ρ_ps×ps ∘ H P^e H^T) + α(f H B H^T) (2) and P^b H^T = (1 - α)(ρ_n×ps ∘ P^e H^T) + α(f B H^T) (3), where P^e H^T and H P^e H^T are calculated from the K ETKF (Wang et al. 2007a) ensemble-forecast... P^e H^T and H P^e H^T in Eqs. (2) and (3). Therefore, the relative weight of the ETKF perturbation and the random perturbation in the background ensemble

  3. Measurement of Dihedral Angle Errors of a Large-Aperture Space Retroreflector: Separation of the Effect of Sag Due to Gravity

    NASA Astrophysics Data System (ADS)

    Minato, Atsushi; Sugimoto, Nobuo; Bleier, Zvi; Hunter, George C.; Paul, James

    1995-08-01

    A 50-cm diameter hollow cube-corner retroreflector for a space application was tested with a 60-cm interferometer. To separate the effect of sag due to gravity, the interferograms were taken at six rotational positions, with the retroreflector mounted horizontally. The zero-gravity dihedral angles and the effect of sag were estimated from the interferograms.

  4. Growth enhancement of Picea abies trees under long-term, low-dose N addition is due to morphological more than to physiological changes.

    PubMed

    Krause, Kim; Cherubini, Paolo; Bugmann, Harald; Schleppi, Patrick

    2012-12-01

    Human activities have drastically increased nitrogen (N) inputs into natural and near-natural terrestrial ecosystems such that critical loads are now being exceeded in many regions of the world. This implies that these ecosystems are shifting from natural N limitation to eutrophication or even N saturation. This process is expected to modify the growth of forests and thus, along with management, to affect their carbon (C) sequestration. However, knowledge of the physiological mechanisms underlying tree response to N inputs, especially in the long term, is still lacking. In this study, we used tree-ring patterns and a dual stable isotope approach (δ(13)C and δ(18)O) to investigate tree growth responses and the underlying physiological reactions in a long-term, low-dose N addition experiment (+23 kg N ha(-1) a(-1)). This experiment has been conducted for 14 years in a mountain Picea abies (L.) Karst. forest in Alptal, Switzerland, using a paired-catchment design. Tree stem C sequestration increased by ∼22%, with an N use efficiency (NUE) of ca. 8 kg additional C in tree stems per kg of N added. Neither earlywood nor latewood δ(13)C values changed significantly compared with the control, indicating that the intrinsic water use efficiency (WUE(i)) (A/g(s)) did not change due to N addition. Further, the isotopic signal of δ(18)O in early- and latewood showed no significant response to the treatment, indicating that neither stomatal conductance nor leaf-level photosynthesis changed significantly. Foliar analyses showed that needle N concentration significantly increased in the fourth to seventh treatment year, accompanied by increased dry mass and area per needle, and by increased tree height growth. Later, N concentration and height growth returned to nearly background values, while dry mass and area per needle remained high. Our results support the hypothesis that enhanced stem growth caused by N addition is mainly due to an increased leaf area index (LAI

  5. Pre- and post-experimental manipulation assessments confirm the increase in number of birds due to the addition of nest boxes.

    PubMed

    Cuatianquiz Lima, Cecilia; Macías Garcia, Constantino

    2016-01-01

    Secondary cavity nesting (SCN) birds breed in holes that they do not excavate themselves. This is possible where there are large trees whose size and age permit the digging of holes by primary excavators and only rarely happens in forest plantations, where we expected a deficit of both breeding holes and SCN species. We assessed whether the availability of tree cavities influenced the number of SCNs in two temperate forest types, and evaluated the change in number of SCNs after adding nest boxes. First, we counted all cavities within each of our 25-m radius sampling points in mature and young forest plots during 2009. We then added nest boxes at standardised locations during 2010 and 2011 and conducted fortnightly bird counts (January-October 2009-2011). In 2011 we added two extra plots of each forest type, where we also conducted bird counts. Prior to adding nest boxes, counts revealed more SCNs in mature than in young forest. Following the addition of nest boxes, the number of SCNs increased significantly in the points with nest boxes in both types of forest. Counts in 2011 confirmed the increase in number of birds due to the addition of nest boxes. Given the likely benefits associated with a richer bird community we propose that, as is routinely done in some countries, forest management programs preserve old tree stumps and add nest boxes to forest plantations in order to increase bird numbers and bird community diversity.

  6. Pre- and post-experimental manipulation assessments confirm the increase in number of birds due to the addition of nest boxes

    PubMed Central

    Cuatianquiz Lima, Cecilia

    2016-01-01

    Secondary cavity nesting (SCN) birds breed in holes that they do not excavate themselves. This is possible where there are large trees whose size and age permit the digging of holes by primary excavators and only rarely happens in forest plantations, where we expected a deficit of both breeding holes and SCN species. We assessed whether the availability of tree cavities influenced the number of SCNs in two temperate forest types, and evaluated the change in number of SCNs after adding nest boxes. First, we counted all cavities within each of our 25-m radius sampling points in mature and young forest plots during 2009. We then added nest boxes at standardised locations during 2010 and 2011 and conducted fortnightly bird counts (January–October 2009–2011). In 2011 we added two extra plots of each forest type, where we also conducted bird counts. Prior to adding nest boxes, counts revealed more SCNs in mature than in young forest. Following the addition of nest boxes, the number of SCNs increased significantly in the points with nest boxes in both types of forest. Counts in 2011 confirmed the increase in number of birds due to the addition of nest boxes. Given the likely benefits associated with a richer bird community we propose that, as is routinely done in some countries, forest management programs preserve old tree stumps and add nest boxes to forest plantations in order to increase bird numbers and bird community diversity. PMID:26998410

  7. Variation in mechanical behavior due to different build directions of Titanium6Aluminum4Vanadium fabricated by electron beam additive manufacturing technology

    NASA Astrophysics Data System (ADS)

    Roy, Lalit

    Titanium has always been a metal of great interest since its discovery, especially for critical applications, because of its excellent mechanical properties such as light weight (almost half that of steel), low density (4.4 gm/cc) and high strength (almost similar to steel). It creates a stable and adherent oxide layer on its surface upon exposure to air or water, which gives it great resistance to corrosion and has made it a good choice for structures in severely corrosive environments and sea water. Its non-allergenic property has made it suitable for biomedical applications such as manufacturing implants. Having a very high melting temperature, it has very good potential for high temperature applications. However, high production and processing costs have limited its application. Ti6Al4V is the most used titanium alloy, for which it has acquired the title of `workhorse' of the Ti family. Additive Layer Manufacturing (ALM) has brought a revolution in manufacturing industries. Today, additive manufacturing has developed into several methods and formed a family. This method fabricates a product by adding layer after layer as per the geometry given as input into the system. Though the concept was initially developed to fabricate prototypes and tooling, its highly economic aspects, i.e., very little waste material, less machining, comparatively lower production lead time, and the obviation of machine tools, have drawn attention for its further development towards mass production. Electron Beam Melting (EBM) is the latest addition to the ALM family, developed by Arcam AB, located in Sweden. The electron beam that is used as the heat source melts metal powder to form layers. For this thesis work, three different types of specimens have been fabricated using the EBM system. These specimens differ in the direction of layer addition. Mechanical properties such as ultimate tensile strength, elastic modulus and yield strength have been measured and compared with standard data

  8. Five-Year-Olds’ Systematic Errors in Second-Order False Belief Tasks Are Due to First-Order Theory of Mind Strategy Selection: A Computational Modeling Study

    PubMed Central

    Arslan, Burcu; Taatgen, Niels A.; Verbrugge, Rineke

    2017-01-01

    The focus of studies on second-order false belief reasoning generally was on investigating the roles of executive functions and language with correlational studies. Different from those studies, we focus on the question how 5-year-olds select and revise reasoning strategies in second-order false belief tasks by constructing two computational cognitive models of this process: an instance-based learning model and a reinforcement learning model. Unlike the reinforcement learning model, the instance-based learning model predicted that children who fail second-order false belief tasks would give answers based on first-order theory of mind (ToM) reasoning as opposed to zero-order reasoning. This prediction was confirmed with an empirical study that we conducted with 72 5- to 6-year-old children. The results showed that 17% of the answers were correct and 83% of the answers were wrong. In line with our prediction, 65% of the wrong answers were based on a first-order ToM strategy, while only 29% of them were based on a zero-order strategy (the remaining 6% of subjects did not provide any answer). Based on our instance-based learning model, we propose that when children get feedback “Wrong,” they explicitly revise their strategy to a higher level instead of implicitly selecting one of the available ToM strategies. Moreover, we predict that children’s failures are due to lack of experience and that with exposure to second-order false belief reasoning, children can revise their wrong first-order reasoning strategy to a correct second-order reasoning strategy. PMID:28293206

  9. ALTIMETER ERRORS,

    DTIC Science & Technology

    CIVIL AVIATION, *ALTIMETERS, FLIGHT INSTRUMENTS, RELIABILITY, ERRORS, PERFORMANCE (ENGINEERING), BAROMETERS, BAROMETRIC PRESSURE, ATMOSPHERIC TEMPERATURE, ALTITUDE, CORRECTIONS, AVIATION SAFETY, USSR.

  10. Empathy and error processing.

    PubMed

    Larson, Michael J; Fair, Joseph E; Good, Daniel A; Baldwin, Scott A

    2010-05-01

    Recent research suggests a relationship between empathy and error processing. Error processing is an evaluative control function that can be measured using post-error response time slowing and the error-related negativity (ERN) and post-error positivity (Pe) components of the event-related potential (ERP). Thirty healthy participants completed two measures of empathy, the Interpersonal Reactivity Index (IRI) and the Empathy Quotient (EQ), and a modified Stroop task. Post-error slowing was associated with increased empathic personal distress on the IRI. ERN amplitude was related to overall empathy score on the EQ and the fantasy subscale of the IRI. The Pe and measures of empathy were not related. Results remained consistent when negative affect was controlled via partial correlation, with an additional relationship between ERN amplitude and empathic concern on the IRI. Findings support a connection between empathy and error processing mechanisms.

  11. A Simple Error Formula for the Lunar Ephemeris of Regiomontanus

    NASA Astrophysics Data System (ADS)

    Brosche, P.; Kokott, W.

    The errors of the lunar ephemeris of Regiomontanus are a function mainly of lunar age. This is because the "variation" in the modern theory has no counterpart in Ptolemaic theory. In addition to this sinusoidal error constituent (with zero point at syzygies and an amplitude of +/-0°.66 at the first and last quarters), there is a constant error due to longitude inconsistencies and a random part of +/-0°.5 from various sources.
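
    Assuming the sinusoidal constituent varies as sin 2D with lunar elongation D (zero at the syzygies, extrema at the quarters), the composite error budget described above can be written as:

        % error in ecliptic longitude as a function of elongation D (assumed functional form)
        \Delta\lambda(D) \;\approx\; c \;+\; 0.66^{\circ}\,\sin 2D \;+\; \varepsilon ,
        \qquad \varepsilon \sim \pm 0.5^{\circ}\ \text{(random)},

    where c is the constant offset arising from the longitude inconsistencies.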

  12. A hardware error estimate for floating-point computations

    NASA Astrophysics Data System (ADS)

    Lang, Tomás; Bruguera, Javier D.

    2008-08-01

    We propose a hardware-computed estimate of the roundoff error in floating-point computations. The estimate is computed concurrently with the execution of the program and gives an estimation of the accuracy of the result. The intention is to have a qualitative indication when the accuracy of the result is low. We aim for a simple implementation and a negligible effect on the execution of the program. Large errors due to roundoff occur in some computations, producing inaccurate results. However, usually these large errors occur only for some values of the data, so that the result is accurate in most executions. As a consequence, the computation of an estimate of the error during execution would allow the use of algorithms that produce accurate results most of the time. In contrast, if an error estimate is not available, the solution is to perform an error analysis. However, this analysis is complex or impossible in some cases, and it produces a worst-case error bound. The proposed approach is to keep with each value an estimate of its error, which is computed when the value is produced. This error is the sum of a propagated error, due to the errors of the operands, plus the generated error due to roundoff during the operation. Since roundoff errors are signed values (when rounding to nearest is used), the computation of the error allows for compensation when errors are of different sign. However, since the error estimate is of finite precision, it suffers from similar accuracy problems as any floating-point computation. Moreover, it is not an error bound. Ideally, the estimate should be large when the error is large and small when the error is small. Since this cannot be achieved always with an inexact estimate, we aim at assuring the first property always, and the second most of the time. As a minimum, we aim to produce a qualitative indication of the error. To indicate the accuracy of the value, the most appropriate type of error is the relative error. However
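
    Although the proposal is a hardware mechanism, the bookkeeping can be sketched in software: each value carries an estimate equal to the error propagated from its operands plus the roundoff generated by the operation itself. The sketch below is a simplified illustration (it uses a magnitude-style bound for the generated part rather than the signed, compensating estimate discussed above):

        from dataclasses import dataclass

        EPS = 2.0 ** -53   # unit roundoff of IEEE binary64, used as the generated-error scale

        @dataclass
        class Est:
            """A value paired with an estimate of its accumulated roundoff error."""
            x: float   # computed value
            e: float   # estimated error (propagated + generated)

            def __add__(self, other):
                s = self.x + other.x
                return Est(s, self.e + other.e + abs(s) * EPS)

            def __mul__(self, other):
                p = self.x * other.x
                # first-order propagation d(xy) = y*dx + x*dy, plus this operation's roundoff
                return Est(p, other.x * self.e + self.x * other.e + abs(p) * EPS)

        a = Est(1.0 / 3.0, (1.0 / 3.0) * EPS)   # seed value with its own representation error
        b = Est(3.0, 0.0)
        c = a * b + Est(-1.0, 0.0)              # ideally exactly zero
        print(f"value = {c.x}, error estimate = {c.e:.2e}")
        print(f"relative estimate = {abs(c.e) / max(abs(c.x), EPS):.2f}  (large => low confidence)")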

  13. Dose error analysis for a scanned proton beam delivery system

    NASA Astrophysics Data System (ADS)

    Coutrakon, G.; Wang, N.; Miller, D. W.; Yang, Y.

    2010-12-01

    All particle beam scanning systems are subject to dose delivery errors due to errors in position, energy and intensity of the delivered beam. In addition, finite scan speeds, beam spill non-uniformities, and delays in detector, detector electronics and magnet responses will all contribute errors in delivery. In this paper, we present dose errors for an 8 × 10 × 8 cm3 target of uniform water equivalent density with 8 cm spread out Bragg peak and a prescribed dose of 2 Gy. Lower doses are also analyzed and presented later in the paper. Beam energy errors and errors due to limitations of scanning system hardware have been included in the analysis. By using Gaussian shaped pencil beams derived from measurements in the research room of the James M Slater Proton Treatment and Research Center at Loma Linda, CA and executing treatment simulations multiple times, statistical dose errors have been calculated in each 2.5 mm cubic voxel in the target. These errors were calculated by delivering multiple treatments to the same volume and calculating the rms variation in delivered dose at each voxel in the target. The variations in dose were the result of random beam delivery errors such as proton energy, spot position and intensity fluctuations. The results show that with reasonable assumptions of random beam delivery errors, the spot scanning technique yielded an rms dose error in each voxel less than 2% or 3% of the 2 Gy prescribed dose. These calculated errors are within acceptable clinical limits for radiation therapy.
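
    A toy version of the repeated-delivery strategy described above (a 2D grid, invented spot spacing, beam width and error magnitudes, nothing derived from the measured Loma Linda pencil beams): Gaussian spots are delivered many times with random position and intensity perturbations, and the per-voxel rms deviation from the unperturbed delivery is reported:

        import numpy as np

        rng = np.random.default_rng(1)

        # 2D stand-in for the target volume: a voxel grid covered by a raster of Gaussian spots.
        x = np.linspace(-40.0, 40.0, 33)               # mm
        y = np.linspace(-50.0, 50.0, 41)
        X, Y = np.meshgrid(x, y)
        spot_x = np.arange(-35.0, 36.0, 5.0)
        spot_y = np.arange(-45.0, 46.0, 5.0)
        sigma = 5.0                                    # pencil-beam sigma (mm), assumed

        def deliver(pos_err_mm=0.0, intensity_err=0.0):
            dose = np.zeros_like(X)
            for sx in spot_x:
                for sy in spot_y:
                    dx = sx + rng.normal(0.0, pos_err_mm)      # random spot-position error
                    dy = sy + rng.normal(0.0, pos_err_mm)
                    w = 1.0 + rng.normal(0.0, intensity_err)   # random spot-intensity error
                    dose += w * np.exp(-((X - dx) ** 2 + (Y - dy) ** 2) / (2.0 * sigma ** 2))
            return dose

        nominal = deliver()                                          # error-free reference
        runs = np.stack([deliver(1.0, 0.02) for _ in range(20)])     # repeated noisy deliveries
        rms = np.sqrt(np.mean((runs - nominal) ** 2, axis=0))

        print(f"max per-voxel rms error: {100.0 * rms.max() / nominal.max():.2f}% of max dose")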

  14. Error detection method

    DOEpatents

    Olson, Eric J.

    2013-06-11

    An apparatus, program product, and method that run an algorithm on a hardware based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to cause the hardware based processor to heat to a degree that increases the likelihood of hardware errors to manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run that may otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).
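
    Purely as an illustration of the compare-the-outputs idea (this is not the patented implementation, and the workload below is arbitrary): a deterministic, compute-heavy routine keeps the processor busy and hot, its result is hashed, and the digest is compared with a reference captured on hardware known to be good; any mismatch flags a suspected hardware error somewhere in the run:

        import hashlib
        import struct

        def stress_workload(n=200_000):
            """Deterministic floating-point loop intended to keep the processor busy (and warm)."""
            acc = 0.0
            x = 1.000000001
            for i in range(1, n):
                x = (x * 1.0000001) % 10.0
                acc += x * i
            return acc

        def output_digest(value):
            return hashlib.sha256(struct.pack("<d", value)).hexdigest()

        # In practice the reference digest would be captured once on known-good hardware.
        REFERENCE_DIGEST = output_digest(stress_workload())

        for run in range(3):
            ok = output_digest(stress_workload()) == REFERENCE_DIGEST
            print(f"run {run}: {'ok' if ok else 'HARDWARE ERROR suspected'}")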

  15. Software error detection

    NASA Technical Reports Server (NTRS)

    Buechler, W.; Tucker, A. G.

    1981-01-01

    Several methods were employed to detect both the occurrence and source of errors in the operational software of the AN/SLQ-32, a large embedded real-time electronic warfare command and control system for the ROLM 1606 computer. The ROLM computer provides information about invalid addressing, improper use of privileged instructions, stack overflows, and unimplemented instructions. Additionally, software techniques were developed to detect invalid jumps, indices out of range, infinite loops, stack underflows, and field size errors. Finally, data are saved to provide information about the status of the system when an error is detected. This information includes I/O buffers, interrupt counts, stack contents, and recently passed locations. The various errors detected, techniques to assist in debugging problems, and segment simulation on a nontarget computer are discussed. These error detection techniques were a major factor in the success of finding the primary cause of error in 98% of over 500 system dumps.

  16. Error Analysis in Mathematics Education.

    ERIC Educational Resources Information Center

    Rittner, Max

    1982-01-01

    The article reviews the development of mathematics error analysis as a means of diagnosing students' cognitive reasoning. Errors specific to addition, subtraction, multiplication, and division are described, and suggestions for remediation are provided. (CL)

  17. Medication Errors

    MedlinePlus


  18. Suitability of live yeast addition to alleviate the adverse effects due to the restriction of the time of access to feed in sheep fed only pasture.

    PubMed

    Pérez-Ruchel, A; Repetto, J L; Cajarville, C

    2013-12-01

    The effect of yeast addition on intake and digestive utilization of pasture was studied in ovines under restricted time of access to forage. Eighteen wethers housed in metabolic cages and fed fresh forage (predominantly Lotus corniculatus) were randomly assigned to three treatments: forage available all day (AD); forage available only 6 h/day (R) and forage available only 6 h/day plus live Saccharomyces cerevisiae yeast (RY). Feed intake and digestibility, feeding behaviour, kinetics of passage, ruminal pH and ammonia concentration, nitrogen balance and microbial nitrogen synthesis (MNS) were determined in vivo, and ruminal liquor activity of animals was evaluated in vitro. Restricted animals consumed less than those fed all day but achieved more than 75% of the intake and spent less time ruminating (p = 0.014). Although animals without restriction consumed more feed, they had a lower rate of passage (p = 0.030). The addition of yeast did affect neither intake nor feeding behaviour, but increased digestibility. Organic matter digestibility tended to increase 11% by yeast addition (p = 0.051), mainly by a rise in NDF (27%, p = 0.032) and ADF digestibility (37%, p = 0.051). Ingested and retained N was lower in restricted animals, as MNS (p ≤ 0.045). The use of yeasts did not significantly change the N balance or MNS, but retained N tended to be higher in supplemented animals (p = 0.090). Neither ruminal pH nor ammonia concentrations were affected by the restriction, but restricted animals had a lower ruminal activity evidenced by a lower volume of gas (p = 0.020). The addition of yeast overcame this limitation, noted by a higher volume of gas of inocula from supplemented animals (p = 0.015). Yeast addition emerged as a useful tool to improve digestibility of forage cell walls in ovines under restricted time of access to forage.

  19. Ferrite Formation Dynamics and Microstructure Due to Inclusion Engineering in Low-Alloy Steels by Ti2O3 and TiN Addition

    NASA Astrophysics Data System (ADS)

    Mu, Wangzhong; Shibata, Hiroyuki; Hedström, Peter; Jönsson, Pär Göran; Nakajima, Keiji

    2016-08-01

    The dynamics of intragranular ferrite (IGF) formation in inclusion engineered steels with either Ti2O3 or TiN addition were investigated using in situ high temperature confocal laser scanning microscopy. Furthermore, the chemical composition of the inclusions and the final microstructure after continuous cooling transformation were investigated using electron probe microanalysis and electron backscatter diffraction, respectively. It was found that there is a significant effect of the chemical composition of the inclusions, the cooling rate, and the prior austenite grain size on the phase fractions and the starting temperatures of IGF and grain boundary ferrite (GBF). The fraction of IGF is larger in the steel with Ti2O3 addition compared to the steel with TiN addition after the same thermal cycle has been imposed. The reason for this difference is the higher potency of the TiOx phase as nucleation sites for IGF formation compared to the TiN phase, which was supported by calculations using classical nucleation theory. The IGF fraction increases with increasing prior austenite grain size, while the fraction of IGF in both steels was the highest for the intermediate cooling rate of 70 °C/min, since competing phase transformations were avoided; the structure of the IGF was, however, refined with increasing cooling rate. Finally, the starting temperatures of IGF and GBF decrease with increasing cooling rate, and the starting temperature of GBF decreases with increasing grain size, while the starting temperature of IGF remains constant irrespective of grain size.

  20. Addition and correction: the NF-kappa B-like DNA binding activity observed in Dictyostelium nuclear extracts is due to the GBF transcription factor.

    PubMed

    Traincard, F; Ponte, E; Pun, J; Coukell, B; Veron, M

    2001-10-01

    We have previously reported that an NF-kappa B transduction pathway was likely to be present in the cellular slime mold Dictyostelium discoideum. This conclusion was based on several observations, including the detection of developmentally regulated DNA binding proteins in Dictyostelium nuclear extracts that bound to bona fide kappa B sequences. We have now performed additional experiments which demonstrate that the protein responsible for this NF-kappa B-like DNA binding activity is the Dictyostelium GBF (G box regulatory element binding factor) transcription factor. This result, along with the fact that no sequence with significant similarity to components of the mammalian NF-kappa B pathway can be found in the Dictyostelium genome, now almost entirely sequenced, led us to reconsider our previous conclusion on the occurrence of an NF-kappa B signal transduction pathway in Dictyostelium.

  1. Effect of reaction pH and CuSO4 addition on the formation of catechinone due to oxidation of (+)-catechin.

    PubMed

    Matsubara, T; Wataoka, I; Urakawa, H; Yasunaga, H

    2013-08-01

    A hair dyeing technique that is milder and safer for the human body is desired. The oxidation product of (+)-catechin, catechinone, was developed as a safer dyestuff for hair colouring in this context. Preparing catechinone by chemical oxidation is a practical route, and the objective of this study was to clarify the effects of solution pH and of the presence or absence of Cu(2+) on the formation rate and yield of catechinone, in order to improve the efficiency of dye formation. Catechinone formation was monitored by ultraviolet-visible spectroscopy. Catechinone was prepared chemically from (+)-catechin in aqueous solution with introduced O2 gas over a pH range of 7.1-11.7. The rate and amount of dye formation increase with increasing pH. Dissociation of the hydroxyl group of the catechol part of (+)-catechin is significant for the oxidation of (+)-catechin and promotes dye production, because the deprotonated (+)-catechin is more reactive with O2. The production of catechinone is accelerated by the addition of CuSO4, and the production rate reaches its maximum at pH = 8.8. (+)-Catechin - Cu(2+) complexes are formed, and their formation promotes the oxidation of the catechol part of (+)-catechin at pH ≤ 8.8. At pH > 8.8, however, the complex becomes too stable for the oxidation reaction to proceed.

  2. Modeling the glucose sensor error.

    PubMed

    Facchinetti, Andrea; Del Favero, Simone; Sparacino, Giovanni; Castle, Jessica R; Ward, W Kenneth; Cobelli, Claudio

    2014-03-01

    Continuous glucose monitoring (CGM) sensors are portable devices, employed in the treatment of diabetes, able to measure glucose concentration in the interstitium almost continuously for several days. However, CGM sensors are not as accurate as standard blood glucose (BG) meters. Studies comparing CGM versus BG demonstrated that CGM is affected by distortion due to diffusion processes and by time-varying systematic under/overestimations due to calibrations and sensor drifts. In addition, measurement noise is also present in CGM data. A reliable model of the different components of CGM inaccuracy with respect to BG (briefly, "sensor error") is important in several applications, e.g., design of optimal digital filters for denoising of CGM data, real-time glucose prediction, insulin dosing, and artificial pancreas control algorithms. The aim of this paper is to propose an approach to describe CGM sensor error by exploiting n multiple simultaneous CGM recordings. The sensor error description includes a model of the blood-to-interstitial glucose diffusion process, a linear time-varying model to account for calibration and sensor drift over time, and an autoregressive model to describe the additive measurement noise. Model orders and parameters are identified from the n simultaneous CGM sensor recordings and BG references. While the model is applicable to any CGM sensor, here, it is used on a database of 36 datasets of type 1 diabetic adults in which n = 4 Dexcom SEVEN Plus CGM time series and frequent BG references were available simultaneously. Results demonstrate that multiple simultaneous sensor data and proper modeling allow dissecting the sensor error into its different components, distinguishing those related to physiology from those related to technology.
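
    A minimal numerical sketch of the three-part error description the abstract outlines: a first-order blood-to-interstitium diffusion lag, a linear time-varying calibration gain and offset, and AR(1) additive noise. The function name, parameter values, and model orders below are illustrative assumptions, not the identified values or the exact parameterization of the paper.

      import numpy as np

      def simulate_cgm(bg, dt=5.0, tau=10.0, a0=1.0, a1=1e-4, b0=0.0, b1=5e-4,
                       ar_coef=0.8, noise_sd=2.0, seed=0):
          """Toy CGM error model: diffusion lag + slowly drifting gain/offset + AR(1) noise.
          bg is a reference blood glucose series (mg/dL) sampled every dt minutes."""
          rng = np.random.default_rng(seed)
          n = len(bg)
          ig = np.empty(n)                       # interstitial glucose (first-order diffusion)
          ig[0] = bg[0]
          for k in range(1, n):
              ig[k] = ig[k - 1] + dt / tau * (bg[k - 1] - ig[k - 1])
          t = np.arange(n) * dt
          gain, offset = a0 + a1 * t, b0 + b1 * t    # linear time-varying calibration error
          noise = np.empty(n)                        # autoregressive additive measurement noise
          noise[0] = rng.normal(0.0, noise_sd)
          for k in range(1, n):
              noise[k] = ar_coef * noise[k - 1] + rng.normal(0.0, noise_sd)
          return gain * ig + offset + noise

      bg = 120 + 40 * np.sin(np.linspace(0, 6 * np.pi, 864))   # three days at 5-min sampling
      cgm = simulate_cgm(bg)                                    # one synthetic sensor trace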

  3. Short-term salivary acetaldehyde increase due to direct exposure to alcoholic beverages as an additional cancer risk factor beyond ethanol metabolism

    PubMed Central

    2011-01-01

    Background An increasing body of evidence now implicates acetaldehyde as a major underlying factor for the carcinogenicity of alcoholic beverages and especially for oesophageal and oral cancer. Acetaldehyde associated with alcohol consumption is regarded as 'carcinogenic to humans' (IARC Group 1), with sufficient evidence available for the oesophagus, head and neck as sites of carcinogenicity. At present, research into the mechanistic aspects of acetaldehyde-related oral cancer has been focused on salivary acetaldehyde that is formed either from ethanol metabolism in the epithelia or from microbial oxidation of ethanol by the oral microflora. This study was conducted to evaluate the role of the acetaldehyde that is found as a component of alcoholic beverages as an additional factor in the aetiology of oral cancer. Methods Salivary acetaldehyde levels were determined in the context of sensory analysis of different alcoholic beverages (beer, cider, wine, sherry, vodka, calvados, grape marc spirit, tequila, cherry spirit), without swallowing, to exclude systemic ethanol metabolism. Results The rinsing of the mouth for 30 seconds with an alcoholic beverage is able to increase salivary acetaldehyde above levels previously judged to be carcinogenic in vitro, with levels up to 1000 μM in cases of beverages with extreme acetaldehyde content. In general, the highest salivary acetaldehyde concentration was found in all cases in the saliva 30 sec after using the beverages (average 353 μM). The average concentration then decreased at the 2-min (156 μM), 5-min (76 μM) and 10-min (40 μM) sampling points. The salivary acetaldehyde concentration depends primarily on the direct ingestion of acetaldehyde contained in the beverages at the 30-sec sampling, while the influence of the metabolic formation from ethanol becomes the major factor at the 2-min sampling point. Conclusions This study offers a plausible mechanism to explain the increased risk for oral cancer associated with

  4. Error-related electrocorticographic activity in humans during continuous movements

    NASA Astrophysics Data System (ADS)

    Milekovic, Tomislav; Ball, Tonio; Schulze-Bonhage, Andreas; Aertsen, Ad; Mehring, Carsten

    2012-04-01

    Brain-machine interface (BMI) devices make errors in decoding. Detecting these errors online from neuronal activity can improve BMI performance by modifying the decoding algorithm and by correcting the errors made. Here, we study the neuronal correlates of two different types of errors which can both be employed in BMI: (i) the execution error, due to inaccurate decoding of the subjects’ movement intention; (ii) the outcome error, due to not achieving the goal of the movement. We demonstrate that, in electrocorticographic (ECoG) recordings from the surface of the human brain, strong error-related neural responses (ERNRs) for both types of errors can be observed. ERNRs were present in the low and high frequency components of the ECoG signals, with both signal components carrying partially independent information. Moreover, the observed ERNRs can be used to discriminate between error types, with high accuracy (≥83%) obtained already from single electrode signals. We found ERNRs in multiple cortical areas, including motor and somatosensory cortex. As the motor cortex is the primary target area for recording control signals for a BMI, an adaptive motor BMI utilizing these error signals may not require additional electrode implants in other brain areas.

  5. Examining food additives and spices for their anti-oxidant ability to counteract oxidative damage due to chronic exposure to free radicals from environmental pollutants

    NASA Astrophysics Data System (ADS)

    Martinez, Raul A., III

    The main objective of this work was to examine food additives and spices (from the Apiaceae family) to determine their antioxidant properties and their ability to counteract oxidative stress (damage) caused by environmental pollutants. Environmental pollutants generate reactive oxygen species and reactive nitrogen species. Star anise essential oil showed lower antioxidant activity than extracts in DPPH scavenging assays. Dill seed -- Anethum graveolens: the monoterpene components of dill were shown to activate the enzyme glutathione-S-transferase, which helped attach the antioxidant molecule glutathione to oxidized molecules that would otherwise cause damage in the body. The antioxidant activity of dill extracts was comparable with that of ascorbic acid, alpha-tocopherol, and quercetin in in-vitro systems. Black cumin -- Nigella sativa: evaluated by the 1,1-diphenyl-2-picrylhydrazyl (DPPH) radical scavenging method; positive correlations were found between the total phenolic content of the black cumin extracts and their antioxidant activities. Caraway -- Carum carvi: the antioxidant activity was evaluated by the scavenging of 1,1'-diphenyl-2-picrylhydrazyl (DPPH); caraway showed strong antioxidant activity. Cumin -- Cuminum cyminum: the major polyphenolics were extracted and separated by HPTLC, and the antioxidant activity of the cumin extract was tested by 1,1'-diphenyl-2-picrylhydrazyl (DPPH) free radical scavenging. Coriander -- Coriandrum sativum: the antioxidant and free-radical-scavenging properties of the seeds were studied, and it was also investigated whether administration of the seeds curtails oxidative stress. Coriander seed powder not only inhibited peroxidative damage but also significantly reactivated the antioxidant enzymes and antioxidant levels. The seeds also showed scavenging activity against superoxide and hydroxyl radicals. The total polyphenolic content of the seeds was found to be 12.2 gallic acid equivalents (GAE)/g while the total flavonoid content

  6. Uncorrected refractive errors.

    PubMed

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of whom 670 million are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low- and middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting the educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  7. Estimation of Model Error Variances During Data Assimilation

    NASA Technical Reports Server (NTRS)

    Dee, Dick

    2003-01-01

    Data assimilation is all about understanding the error characteristics of the data and models that are used in the assimilation process. Reliable error estimates are needed to implement observational quality control, bias correction of observations and model fields, and intelligent data selection. Meaningful covariance specifications are obviously required for the analysis as well, since the impact of any single observation strongly depends on the assumed structure of the background errors. Operational atmospheric data assimilation systems still rely primarily on climatological background error covariances. To obtain error estimates that reflect both the character of the flow and the current state of the observing system, it is necessary to solve three problems: (1) how to account for the short-term evolution of errors in the initial conditions; (2) how to estimate the additional component of error caused by model defects; and (3) how to compute the error reduction in the analysis due to observational information. Various approaches are now available that provide approximate solutions to the first and third of these problems. However, the useful accuracy of these solutions very much depends on the size and character of the model errors and the ability to account for them. Model errors represent the real-world forcing of the error evolution in a data assimilation system. Clearly, meaningful model error estimates and/or statistics must be based on information external to the model itself. The most obvious information source is observational, and since the volume of available geophysical data is growing rapidly, there is some hope that a purely statistical approach to model error estimation can be viable. This requires that the observation errors themselves are well understood and quantifiable. We will discuss some of these challenges and present a new sequential scheme for estimating model error variances from observations in the context of an atmospheric data

  8. Triangulation Error Analysis for the Barium Ion Cloud Experiment. M.S. Thesis - North Carolina State Univ.

    NASA Technical Reports Server (NTRS)

    Long, S. A. T.

    1973-01-01

    The triangulation method developed specifically for the Barium Ion Cloud Project is discussed. Expressions for the four displacement errors, the three slope errors, and the curvature error in the triangulation solution due to a probable error in the lines-of-sight from the observation stations to points on the cloud are derived. The triangulation method is then used to determine the effect of the following on these different errors in the solution: the number and location of the stations, the observation duration, east-west cloud drift, the number of input data points, and the addition of extra cameras to one of the stations. The pointing displacement errors and the pointing slope errors are compared. The displacement errors in the solution due to a probable error in the position of a moving station, plus the weighting factors for the data from the moving station, are also determined.

  9. Smoothing error pitfalls

    NASA Astrophysics Data System (ADS)

    von Clarmann, T.

    2014-04-01

    The difference due to the content of a priori information between a constrained retrieval and the true atmospheric state is usually represented by the so-called smoothing error. In this paper it is shown that the concept of the smoothing error is questionable because it is not compliant with Gaussian error propagation. The reason for this is that the smoothing error does not represent the expected deviation of the retrieval from the true state but the expected deviation of the retrieval from the atmospheric state sampled on an arbitrary grid, which is itself a smoothed representation of the true state. The idea of a sufficiently fine sampling of this reference atmospheric state is untenable because atmospheric variability occurs on all scales, implying that there is no limit beyond which the sampling is fine enough. Even the idealization of infinitesimally fine sampling of the reference state does not help because the smoothing error is applied to quantities which are only defined in a statistical sense, which implies that a finite volume of sufficient spatial extent is needed to meaningfully talk about temperature or concentration. Smoothing differences, however, which play a role when measurements are compared, are still a useful quantity if the involved a priori covariance matrix has been evaluated on the comparison grid rather than resulting from interpolation. This is because the undefined component of the smoothing error, which is the effect of smoothing implied by the finite grid on which the measurements are compared, cancels out when the difference is calculated.
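
    For reference, the smoothing term under discussion is conventionally written in Rodgers-style optimal-estimation notation (an assumption here, since the abstract itself quotes no formula):

      \hat{x} - x = (\mathbf{A} - \mathbf{I})\,(x - x_a) + \text{(noise and forward-model terms)},
      \qquad
      \mathbf{S}_s = (\mathbf{A} - \mathbf{I})\,\mathbf{S}_a\,(\mathbf{A} - \mathbf{I})^{\mathsf{T}},

    where A is the averaging kernel matrix, x_a the a priori state and S_a the a priori covariance. The paper's objection is that x and S_a are only defined once the true state has been represented on some finite grid.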

  10. Medical error and related factors during internship and residency.

    PubMed

    Ahmadipour, Habibeh; Nahid, Mortazavi

    2015-01-01

    It is difficult to determine the real incidence of medical errors due to the lack of a precise definition of errors, as well as the failure to report them under certain circumstances. We carried out a cross-sectional study in Kerman University of Medical Sciences, Iran in 2013. The participants were selected through the census method. The data were collected using a self-administered questionnaire, which consisted of questions on the participants' demographic data and questions on the medical errors committed. The data were analysed with SPSS 19. It was found that 270 participants had committed medical errors. There was no significant difference in the frequency of errors committed by interns and residents. For residents, the most common error was misdiagnosis; for interns, the most common errors related to history-taking and physical examination. Considering that medical errors are common in the clinical setting, the education system should train interns and residents to prevent the occurrence of errors. In addition, the system should develop a positive attitude among them so that they can deal better with medical errors.

  11. Error Modeling of Multi-baseline Optical Truss. Part II; Application to SIM Metrology Truss Field Dependent Error

    NASA Technical Reports Server (NTRS)

    Zhang, Liwei Dennis; Milman, Mark; Korechoff, Robert

    2004-01-01

    The current design of the Space Interferometry Mission (SIM) employs a 19 laser-metrology-beam system (also called the L19 external metrology truss) to monitor changes of distances between the fiducials of the flight system's multiple baselines. The function of the external metrology truss is to aid in the determination of the time-variations of the interferometer baseline. The largest contributor to truss error occurs in SIM wide-angle observations, when the articulation of the siderostat mirrors (in order to gather starlight from different sky coordinates) brings to light systematic errors due to offsets at the level of instrument components (which include corner cube retro-reflectors, etc.). This error is labeled external metrology wide-angle field-dependent error. A physics-based model of the field-dependent error at the single-metrology-gauge level is developed and linearly propagated to errors in interferometer delay. In this manner, delay error sensitivity to various error parameters or their combination can be studied using eigenvalue/eigenvector analysis. Validation of the physics-based field-dependent model on the SIM testbed also lends support to the present approach. As a first example, a dihedral error model is developed for the corner cubes (CC) attached to the siderostat mirrors. The delay errors due to this effect can then be characterized using the eigenvectors of the composite CC dihedral error. The essence of the linear error model is contained in an error-mapping matrix. A corresponding Zernike component matrix approach is developed in parallel, first for convenience of describing the RMS of errors across the field-of-regard (FOR), and second for convenience of combining with additional models. Average and worst case residual errors are computed when various orders of field-dependent terms are removed from the delay error. Results of the residual errors are important in arriving at external metrology system component requirements. Double CCs with ideally co-incident vertices

  12. Quantum Error Correction

    NASA Astrophysics Data System (ADS)

    Lidar, Daniel A.; Brun, Todd A.

    2013-09-01

    Prologue; Preface; Part I. Background: 1. Introduction to decoherence and noise in open quantum systems Daniel Lidar and Todd Brun; 2. Introduction to quantum error correction Dave Bacon; 3. Introduction to decoherence-free subspaces and noiseless subsystems Daniel Lidar; 4. Introduction to quantum dynamical decoupling Lorenza Viola; 5. Introduction to quantum fault tolerance Panos Aliferis; Part II. Generalized Approaches to Quantum Error Correction: 6. Operator quantum error correction David Kribs and David Poulin; 7. Entanglement-assisted quantum error-correcting codes Todd Brun and Min-Hsiu Hsieh; 8. Continuous-time quantum error correction Ognyan Oreshkov; Part III. Advanced Quantum Codes: 9. Quantum convolutional codes Mark Wilde; 10. Non-additive quantum codes Markus Grassl and Martin Rötteler; 11. Iterative quantum coding systems David Poulin; 12. Algebraic quantum coding theory Andreas Klappenecker; 13. Optimization-based quantum error correction Andrew Fletcher; Part IV. Advanced Dynamical Decoupling: 14. High order dynamical decoupling Zhen-Yu Wang and Ren-Bao Liu; 15. Combinatorial approaches to dynamical decoupling Martin Rötteler and Pawel Wocjan; Part V. Alternative Quantum Computation Approaches: 16. Holonomic quantum computation Paolo Zanardi; 17. Fault tolerance for holonomic quantum computation Ognyan Oreshkov, Todd Brun and Daniel Lidar; 18. Fault tolerant measurement-based quantum computing Debbie Leung; Part VI. Topological Methods: 19. Topological codes Héctor Bombín; 20. Fault tolerant topological cluster state quantum computing Austin Fowler and Kovid Goyal; Part VII. Applications and Implementations: 21. Experimental quantum error correction Dave Bacon; 22. Experimental dynamical decoupling Lorenza Viola; 23. Architectures Jacob Taylor; 24. Error correction in quantum communication Mark Wilde; Part VIII. Critical Evaluation of Fault Tolerance: 25. Hamiltonian methods in QEC and fault tolerance Eduardo Novais, Eduardo Mucciolo and

  13. Discretization errors in particle tracking

    NASA Astrophysics Data System (ADS)

    Carmon, G.; Mamman, N.; Feingold, M.

    2007-03-01

    High precision video tracking of microscopic particles is limited by systematic and random errors. Systematic errors are partly due to the discretization process both in position and in intensity. We study the behavior of such errors in a simple tracking algorithm designed for the case of symmetric particles. This symmetry algorithm uses interpolation to estimate the value of the intensity at arbitrary points in the image plane. We show that the discretization error is composed of two parts: (1) the error due to the discretization of the intensity, bD, and (2) that due to interpolation, bI. While bD behaves asymptotically like N^-1, where N is the number of intensity gray levels, bI is small when using cubic spline interpolation.
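
    A hedged illustration of the intensity-discretization effect, not the paper's symmetry algorithm: quantize a synthetic Gaussian spot to N gray levels and measure the bias of a plain centroid estimate. The spot size, window, and estimator are assumptions chosen so that the quantization contribution dominates; for very large N the residual bias is set by other effects.

      import numpy as np

      def centroid_bias(n_levels, true_x=0.3, size=31, sigma=2.0):
          """|bias| of a plain centroid estimate after quantizing a Gaussian spot to n_levels."""
          x = np.arange(size) - size // 2
          xx, yy = np.meshgrid(x, x)
          img = np.exp(-((xx - true_x) ** 2 + yy ** 2) / (2 * sigma ** 2))
          q = np.floor(img * (n_levels - 1)) / (n_levels - 1)   # intensity discretization
          est_x = (q * xx).sum() / q.sum()                      # centroid along x
          return abs(est_x - true_x)

      for n in (16, 64, 256, 1024):
          print(n, centroid_bias(n))   # the quantization part of the bias shrinks as n grows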

  14. Discretization vs. Rounding Error in Euler's Method

    ERIC Educational Resources Information Center

    Borges, Carlos F.

    2011-01-01

    Euler's method for solving initial value problems is an excellent vehicle for observing the relationship between discretization error and rounding error in numerical computation. Reductions in stepsize, in order to decrease discretization error, necessarily increase the number of steps and so introduce additional rounding error. The problem is…
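
    A minimal sketch of the trade-off, assuming the model problem y' = y with y(0) = 1 and single-precision arithmetic so that the rounding component becomes visible at modest step counts:

      import numpy as np

      def euler_final(h, dtype=np.float32):
          """Euler's method for y' = y, y(0) = 1, integrated to t = 1 in the given precision."""
          n = int(round(1.0 / h))
          y = dtype(1.0)
          step = dtype(1.0) + dtype(h)
          for _ in range(n):
              y = y * step            # each step adds truncation error and a rounding error
          return float(y)

      exact = np.e
      for h in (1e-1, 1e-2, 1e-3, 1e-4, 1e-5):
          print(f"h={h:.0e}  |error|={abs(euler_final(h) - exact):.3e}")
      # In single precision the total error stops shrinking (and eventually grows) as h decreases,
      # because the accumulated rounding error of the extra steps overtakes the discretization error.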

  15. Smoothing error pitfalls

    NASA Astrophysics Data System (ADS)

    von Clarmann, T.

    2014-09-01

    The difference due to the content of a priori information between a constrained retrieval and the true atmospheric state is usually represented by a diagnostic quantity called smoothing error. In this paper it is shown that, regardless of the usefulness of the smoothing error as a diagnostic tool in its own right, the concept of the smoothing error as a component of the retrieval error budget is questionable because it is not compliant with Gaussian error propagation. The reason for this is that the smoothing error does not represent the expected deviation of the retrieval from the true state but the expected deviation of the retrieval from the atmospheric state sampled on an arbitrary grid, which is itself a smoothed representation of the true state; in other words, to characterize the full loss of information with respect to the true atmosphere, the effect of the representation of the atmospheric state on a finite grid also needs to be considered. The idea of a sufficiently fine sampling of this reference atmospheric state is problematic because atmospheric variability occurs on all scales, implying that there is no limit beyond which the sampling is fine enough. Even the idealization of infinitesimally fine sampling of the reference state does not help, because the smoothing error is applied to quantities which are only defined in a statistical sense, which implies that a finite volume of sufficient spatial extent is needed to meaningfully discuss temperature or concentration. Smoothing differences, however, which play a role when measurements are compared, are still a useful quantity if the covariance matrix involved has been evaluated on the comparison grid rather than resulting from interpolation and if the averaging kernel matrices have been evaluated on a grid fine enough to capture all atmospheric variations that the instruments are sensitive to. This is, under the assumptions stated, because the undefined component of the smoothing error, which is the

  16. Modular error embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Ettinger, J. Mark

    1999-01-01

    A method of embedding auxiliary information into the digital representation of host data containing noise in the low-order bits. The method applies to digital data representing analog signals, for example digital images. The method reduces the error introduced by other methods that replace the low-order bits with auxiliary information. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user through use of a digital key. The modular error embedding method includes a process to permute the order in which the host data values are processed. The method doubles the amount of auxiliary information that can be added to host data values, in comparison with bit-replacement methods for high bit-rate coding. The invention preserves human perception of the meaning and content of the host data, permitting the addition of auxiliary data in the amount of 50% or greater of the original host data.

  17. Interpolation Errors in Spectrum Analyzers

    NASA Technical Reports Server (NTRS)

    Martin, J. L.

    1996-01-01

    To obtain the proper measurement amplitude with a spectrum analyzer, the correct frequency-dependent transducer factor must be added to the voltage measured by the transducer. This report examines how entering transducer factors into a spectrum analyzer can cause significant errors in field amplitude due to the misunderstanding of the analyzer's interpolation methods. It also discusses how to reduce these errors to obtain a more accurate field amplitude reading.

  18. Radar error statistics for the space shuttle

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1979-01-01

    C-band and S-band radar error statistics recommended for use with the ground tracking programs that process space shuttle tracking data are presented. The statistics are divided into two parts: bias error statistics, using the subscript B, and high frequency error statistics, using the subscript q. Bias errors may be slowly varying to constant. High frequency random errors (noise) are rapidly varying and may or may not be correlated from sample to sample. Bias errors were mainly due to hardware defects and to errors in correction for atmospheric refraction effects. High frequency noise was mainly due to hardware and to atmospheric scintillation. Three types of atmospheric scintillation were identified: horizontal, vertical, and line of sight. This was the first time that horizontal and line-of-sight scintillations were identified.

  19. The metabolism of 3alpha, 7alpha, 12alpha-trihydroxy-5beta-cholestan-26-oic acid in two siblings with cholestasis due to intrahepatic bile duct anomalies. An apparent inborn error of cholic acid synthesis.

    PubMed Central

    Hanson, R F; Isenberg, J N; Williams, G C; Hachey, D; Szczepanik, P; Klein, P D; Sharp, H L

    1975-01-01

    Studies were carried out in a family in which two children with cholestasis due to intrahepatic bile duct anomalies were shown to have increased amounts of the cholic acid precursor, 3alpha, 7alpha, 12alpha-trihydroxy-5beta-cholestan-26-oic acid (THCA). The metabolism of THCA was studied in one of these patients after an intravenous injection of (3H)THCA, and the increased amounts of THCA in this condition were found to be due to a metabolic defect in the conversion of this compound into cholic acid. A small amount of (3H)cholic acid was also identified after (3H)THCA administration, confirming that this metabolic defect was incomplete. Varanic acid (3alpha, 7alpha, 12alpha, 24xi-tetrahydroxy-5beta-cholestan-26-oic acid), a metabolite of THCA, could not be identified in either of these patients. By assuming that this compound would be conjugated and excreted if the metabolic block occurred after the formation of varanic acid, the defect in these patients appears to be due to a deficiency of a 24-hydroxylating enzyme system required to convert THCA into varanic acid. This condition appears to be transmitted in an autosomal recessive fashion, because the two affected patients were of opposite sex, and neither a normal sibling nor the two parents have increased amounts of THCA in their bile. PMID:1159074

  20. Hybrid Models for Trajectory Error Modelling in Urban Environments

    NASA Astrophysics Data System (ADS)

    Angelatsa, E.; Parés, M. E.; Colomina, I.

    2016-06-01

    This paper tackles the first step of any strategy aiming to improve the trajectory of terrestrial mobile mapping systems in urban environments. We present an approach to model the error of terrestrial mobile mapping trajectories, combining deterministic and stochastic models. Due to the specific urban environment, the deterministic component is modelled with non-continuous functions composed of linear shifts, drifts or polynomial functions. In addition, we introduce a stochastic error component to model the residual noise of the trajectory error function. The first step in error modelling is to determine the actual trajectory error values for several representative environments. In order to determine the trajectory errors as accurately as possible, (almost) error-free reference trajectories should be estimated using non-semantic features extracted from a sequence of images collected with the terrestrial mobile mapping system and from a full set of ground control points. Once the references are estimated, they are used to determine the actual errors in the terrestrial mobile mapping trajectory. The rigorous analysis of these data sets allows us to characterize the errors of a terrestrial mobile mapping system for a wide range of environments. This information will be of great use in future campaigns to improve the results of 3D point cloud generation. The proposed approach has been evaluated using real data. The data originate from a mobile mapping campaign over an urban and controlled area of Dortmund (Germany), with harmful GNSS conditions. The mobile mapping system, which includes two laser scanners and two cameras, was mounted on a van and driven over a controlled area for around three hours. The results show the suitability of decomposing the trajectory error into non-continuous deterministic and stochastic components.
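
    A minimal sketch of the decomposition idea under stated assumptions: the deterministic component is taken as a per-segment linear shift-plus-drift (one form of the non-continuous functions mentioned above), and the residual is treated as the stochastic component. The segment boundaries, units, and synthetic error series are illustrative, not the Dortmund data.

      import numpy as np

      def decompose_error(t, err, breakpoints):
          """Piecewise-linear (non-continuous) deterministic drift + residual stochastic noise."""
          drift = np.empty_like(err)
          edges = [t[0], *breakpoints, t[-1] + 1e-9]
          for lo, hi in zip(edges[:-1], edges[1:]):
              seg = (t >= lo) & (t < hi)
              coeffs = np.polyfit(t[seg], err[seg], deg=1)   # shift + drift for this segment
              drift[seg] = np.polyval(coeffs, t[seg])
          residual = err - drift
          return drift, residual

      rng = np.random.default_rng(1)
      t = np.linspace(0.0, 600.0, 601)                        # seconds along the trajectory
      true_drift = np.where(t < 300, 0.02 * t, 10.0 - 0.01 * t)
      err = true_drift + rng.normal(0.0, 0.5, t.size)         # synthetic trajectory error (m)
      drift, residual = decompose_error(t, err, breakpoints=[300.0])
      print(residual.std())                                   # scale of the stochastic component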

  1. Analysis of offset error for segmented micro-structure optical element based on optical diffraction theory

    NASA Astrophysics Data System (ADS)

    Su, Jinyan; Wu, Shibin; Yang, Wei; Wang, Lihua

    2016-10-01

    Micro-structure optical elements are increasingly applied in modern optical systems due to characteristics such as light weight, ease of replication, high diffraction efficiency and many design variables. The Fresnel lens is a typical micro-structure optical element, so in this paper we take the Fresnel lens as the basis of our study. An analytic solution for the Point Spread Function (PSF) of a segmented Fresnel lens is derived based on the theory of optical diffraction, and a mathematical simulation model is established. We then take a segmented Fresnel lens with five sub-mirrors as an example. In order to analyze the influence of different offset errors on the system's far-field image quality, we obtain the analytic solution for the PSF of the system under different offset errors using the Fourier transform. The results show that translation errors along the X, Y and Z axes and tilt errors around the X and Y axes introduce phase errors which affect the imaging quality of the system. The translation errors along the X, Y and Z axes are linearly related to the corresponding phase errors, and the tilt errors around the X and Y axes are related to the corresponding phase errors through trigonometric functions. In addition, the standard deviations of the translation errors along the X and Y axes have a quadratic nonlinear relationship with the system's Strehl ratio. Finally, the tolerances of the different offset errors are obtained according to the Strehl criterion.
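
    A hedged sketch of the general simulation approach, not the paper's model: build a segmented pupil, apply a per-segment piston phase offset, and obtain the far-field PSF by Fourier transform, with the Strehl ratio quantifying the image-quality loss. The strip-segment geometry, offset values, and normalization are assumptions.

      import numpy as np

      def segmented_psf(n=256, n_segments=5, piston_waves=None):
          """Far-field PSF (via FFT) of a circular pupil split into vertical strips,
          each strip carrying a piston phase offset expressed in waves."""
          piston = np.zeros(n_segments) if piston_waves is None else np.asarray(piston_waves)
          y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
          pupil = (x**2 + y**2 <= 1.0).astype(float)
          seg = np.clip(((x + 1.0) / 2.0 * n_segments).astype(int), 0, n_segments - 1)
          field = pupil * np.exp(1j * 2.0 * np.pi * piston[seg])   # offset -> phase error
          psf = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
          return psf / psf.sum()

      psf_ref = segmented_psf()
      psf_err = segmented_psf(piston_waves=[0.0, 0.05, -0.02, 0.03, 0.0])
      strehl = psf_err.max() / psf_ref.max()    # degradation of the on-axis peak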

  2. Robust characterization of leakage errors

    NASA Astrophysics Data System (ADS)

    Wallman, Joel J.; Barnhill, Marie; Emerson, Joseph

    2016-04-01

    Leakage errors arise when the quantum state leaks out of some subspace of interest, for example, the two-level subspace of a multi-level system defining a computational ‘qubit’, the logical code space of a quantum error-correcting code, or a decoherence-free subspace. Leakage errors pose a distinct challenge to quantum control relative to the more well-studied decoherence errors and can be a limiting factor to achieving fault-tolerant quantum computation. Here we present a scalable and robust randomized benchmarking protocol for quickly estimating the leakage rate due to an arbitrary Markovian noise process on a larger system. We illustrate the reliability of the protocol through numerical simulations.

  3. Errors in general practice: development of an error classification and pilot study of a method for detecting errors

    PubMed Central

    Rubin, G; George, A; Chinn, D; Richardson, C

    2003-01-01

    Objective: To describe a classification of errors and to assess the feasibility and acceptability of a method for recording staff reported errors in general practice. Design: An iterative process in a pilot practice was used to develop a classification of errors. This was incorporated in an anonymous self-report form which was then used to collect information on errors during June 2002. The acceptability of the reporting process was assessed using a self-completion questionnaire. Setting: UK general practice. Participants: Ten general practices in the North East of England. Main outcome measures: Classification of errors, frequency of errors, error rates per 1000 appointments, acceptability of the process to participants. Results: 101 events were used to create an initial error classification. This contained six categories: prescriptions, communication, appointments, equipment, clinical care, and "other" errors. Subsequently, 940 errors were recorded in a single 2 week period from 10 practices, providing additional information. 42% (397/940) were related to prescriptions, although only 6% (22/397) of these were medication errors. Communication errors accounted for 30% (282/940) of errors and clinical errors 3% (24/940). The overall error rate was 75.6/1000 appointments (95% CI 71 to 80). The method of error reporting was found to be acceptable by 68% (36/53) of respondents with only 8% (4/53) finding the process threatening. Conclusion: We have developed a classification of errors and described a practical and acceptable method for reporting them that can be used as part of the process of risk management. Errors are common and, although all have the potential to lead to an adverse event, most are administrative. PMID:14645760

  4. Error-Based Design Space Windowing

    NASA Technical Reports Server (NTRS)

    Papila, Melih; Papila, Nilay U.; Shyy, Wei; Haftka, Raphael T.; Fitz-Coy, Norman

    2002-01-01

    Windowing of design space is considered in order to reduce the bias errors due to low-order polynomial response surfaces (RS). Standard design space windowing (DSW) defines a region of interest by setting a requirement on the response level and checks it against global RS predictions over the design space. This approach, however, is vulnerable because RS modeling errors may lead to zooming in on the wrong region. The approach is modified by introducing an eigenvalue error measure based on a point-to-point mean squared error criterion. Two examples are presented to demonstrate the benefit of the error-based DSW.

  5. Antenna motion errors in bistatic SAR imagery

    NASA Astrophysics Data System (ADS)

    Wang, Ling; Yazıcı, Birsen; Cagri Yanik, H.

    2015-06-01

    Antenna trajectory or motion errors are pervasive in synthetic aperture radar (SAR) imaging. Motion errors typically result in smearing and positioning errors in SAR images. Understanding the relationship between the trajectory errors and position errors in reconstructed images is essential in forming focused SAR images. Existing studies on the effect of antenna motion errors are limited to certain geometries, trajectory error models or monostatic SAR configuration. In this paper, we present an analysis of position errors in bistatic SAR imagery due to antenna motion errors. Bistatic SAR imagery is becoming increasingly important in the context of passive imaging and multi-sensor imaging. Our analysis provides an explicit quantitative relationship between the trajectory errors and the positioning errors in bistatic SAR images. The analysis is applicable to arbitrary trajectory errors and arbitrary imaging geometries including wide apertures and large scenes. We present extensive numerical simulations to validate the analysis and to illustrate the results in commonly used bistatic configurations and certain trajectory error models.

  6. Performance Metrics, Error Modeling, and Uncertainty Quantification

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Nearing, Grey S.; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Tang, Ling

    2016-01-01

    A common set of statistical metrics has been used to summarize the performance of models or measurements: the most widely used ones being bias, mean square error, and linear correlation coefficient. They assume linear, additive, Gaussian errors, and they are interdependent, incomplete, and incapable of directly quantifying uncertainty. The authors demonstrate that these metrics can be directly derived from the parameters of the simple linear error model. Since a correct error model captures the full error information, it is argued that the specification of a parametric error model should be an alternative to the metrics-based approach. The error-modeling methodology is applicable to both linear and nonlinear errors, while the metrics are only meaningful for linear errors. In addition, the error model expresses the error structure more naturally, and directly quantifies uncertainty. This argument is further explained by highlighting the intrinsic connections between the performance metrics, the error model, and the joint distribution between the data and the reference.
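
    A small sketch of the argument, assuming the simple linear error model y = a + b x + e with independent noise e: once (a, b, var(e)) are estimated, bias, mean square error, and the correlation coefficient follow algebraically. The data and parameter values below are synthetic, not taken from the paper.

      import numpy as np

      def linear_error_model(x, y):
          """Fit y = a + b*x + e and recover bias, MSE and correlation from (a, b, var_e)."""
          b, a = np.polyfit(x, y, 1)
          e = y - (a + b * x)
          var_x, var_e = x.var(), e.var()
          bias = a + (b - 1.0) * x.mean()
          mse = bias**2 + (b - 1.0) ** 2 * var_x + var_e
          corr = b * np.sqrt(var_x) / np.sqrt(b**2 * var_x + var_e)
          return {"a": a, "b": b, "var_e": var_e, "bias": bias, "mse": mse, "corr": corr}

      rng = np.random.default_rng(0)
      x = rng.gamma(2.0, 5.0, 10_000)                 # "truth" (reference)
      y = 0.5 + 0.9 * x + rng.normal(0, 2, x.size)    # measurement with a linear, additive error
      m = linear_error_model(x, y)
      # m["bias"], m["mse"], m["corr"] approximately match np.mean(y - x),
      # np.mean((y - x)**2), and np.corrcoef(x, y)[0, 1], respectively.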

  7. Understanding human management of automation errors

    PubMed Central

    McBride, Sara E.; Rogers, Wendy A.; Fisk, Arthur D.

    2013-01-01

    Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance. PMID:25383042

  8. Errors in CT colonography.

    PubMed

    Trilisky, Igor; Ward, Emily; Dachman, Abraham H

    2015-10-01

    CT colonography (CTC) is a colorectal cancer screening modality which is becoming more widely implemented and has shown polyp detection rates comparable to those of optical colonoscopy. CTC has the potential to improve population screening rates due to its minimal invasiveness, no sedation requirement, potential for reduced cathartic examination, faster patient throughput, and cost-effectiveness. Proper implementation of a CTC screening program requires careful attention to numerous factors, including patient preparation prior to the examination, the technical aspects of image acquisition, and post-processing of the acquired data. A CTC workstation with dedicated software is required with integrated CTC-specific display features. Many workstations include computer-aided detection software which is designed to decrease errors of detection by detecting and displaying polyp-candidates to the reader for evaluation. There are several pitfalls which may result in false-negative and false-positive reader interpretation. We present an overview of the potential errors in CTC and a systematic approach to avoid them.

  9. Understanding the Nature of Medication Errors in an ICU with a Computerized Physician Order Entry System

    PubMed Central

    Cho, Insook; Park, Hyeok; Choi, Youn Jeong; Hwang, Mi Heui; Bates, David W.

    2014-01-01

    Objectives We investigated incidence rates to understand the nature of medication errors potentially introduced by utilizing a computerized physician order entry (CPOE) system in the three clinical phases of the medication process: prescription, administration, and documentation. Methods Overt observations and chart reviews were employed at two surgical intensive care units of a 950-bed tertiary teaching hospital. Ten categories of high-risk drugs prescribed over a four-month period were noted and reviewed. Error definition and classifications were adapted from previous studies for use in the present research. Incidences of medication errors in the three phases of the medication process were analyzed. In addition, nurses' responses to prescription errors were also assessed. Results Of the 534 prescriptions issued, 286 (53.6%) included at least one error. The proportion of errors was 19.0% (58) of the 306 drug administrations, of which two-thirds were verbal orders classified as errors due to incorrectly entered prescriptions. Documentation errors occurred in 205 (82.7%) of 248 correctly performed administrations. When tracking incorrectly entered prescriptions, 93% of the errors were intercepted by nurses, but two-thirds of them were recorded as prescribed rather than administered. Conclusion The number of errors occurring at each phase of the medication process was relatively high, despite long experience with a CPOE system. The main causes of administration errors and documentation errors were prescription errors and verbal order processes. To reduce these errors, hospital-level and unit-level efforts toward a better system are needed. PMID:25526059

  10. Error Properties of Argos Satellite Telemetry Locations Using Least Squares and Kalman Filtering

    PubMed Central

    Boyd, Janice D.; Brightsmith, Donald J.

    2013-01-01

    Study of animal movements is key for understanding their ecology and facilitating their conservation. The Argos satellite system is a valuable tool for tracking species which move long distances, inhabit remote areas, and are otherwise difficult to track with traditional VHF telemetry and are not suitable for GPS systems. Previous research has raised doubts about the magnitude of position errors quoted by the satellite service provider CLS. In addition, no peer-reviewed publications have evaluated the usefulness of the CLS supplied error ellipses nor the accuracy of the new Kalman filtering (KF) processing method. Using transmitters hung from towers and trees in southeastern Peru, we show the Argos error ellipses generally contain <25% of the true locations and therefore do not adequately describe the true location errors. We also find that KF processing does not significantly increase location accuracy. The errors for both LS and KF processing methods were found to be lognormally distributed, which has important repercussions for error calculation, statistical analysis, and data interpretation. In brief, “good” positions (location codes 3, 2, 1, A) are accurate to about 2 km, while 0 and B locations are accurate to about 5–10 km. However, due to the lognormal distribution of the errors, larger outliers are to be expected in all location codes and need to be accounted for in the user’s data processing. We evaluate five different empirical error estimates and find that 68% lognormal error ellipses provided the most useful error estimates. Longitude errors are larger than latitude errors by a factor of 2 to 3, supporting the use of elliptical error ellipses. Numerous studies over the past 15 years have also found fault with the CLS-claimed error estimates yet CLS has failed to correct their misleading information. We hope this will be reversed in the near future. PMID:23690980

  11. Error properties of Argos satellite telemetry locations using least squares and Kalman filtering.

    PubMed

    Boyd, Janice D; Brightsmith, Donald J

    2013-01-01

    Study of animal movements is key for understanding their ecology and facilitating their conservation. The Argos satellite system is a valuable tool for tracking species which move long distances, inhabit remote areas, and are otherwise difficult to track with traditional VHF telemetry and are not suitable for GPS systems. Previous research has raised doubts about the magnitude of position errors quoted by the satellite service provider CLS. In addition, no peer-reviewed publications have evaluated the usefulness of the CLS supplied error ellipses nor the accuracy of the new Kalman filtering (KF) processing method. Using transmitters hung from towers and trees in southeastern Peru, we show the Argos error ellipses generally contain <25% of the true locations and therefore do not adequately describe the true location errors. We also find that KF processing does not significantly increase location accuracy. The errors for both LS and KF processing methods were found to be lognormally distributed, which has important repercussions for error calculation, statistical analysis, and data interpretation. In brief, "good" positions (location codes 3, 2, 1, A) are accurate to about 2 km, while 0 and B locations are accurate to about 5-10 km. However, due to the lognormal distribution of the errors, larger outliers are to be expected in all location codes and need to be accounted for in the user's data processing. We evaluate five different empirical error estimates and find that 68% lognormal error ellipses provided the most useful error estimates. Longitude errors are larger than latitude errors by a factor of 2 to 3, supporting the use of elliptical error ellipses. Numerous studies over the past 15 years have also found fault with the CLS-claimed error estimates yet CLS has failed to correct their misleading information. We hope this will be reversed in the near future.
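
    A small synthetic illustration, not the Peru transmitter data, of why the lognormal finding matters when summarizing location errors: a Gaussian-style mean-plus-one-standard-deviation summary is distorted by the heavy tail, whereas fitting a lognormal and reporting its 68th percentile gives a more representative radius.

      import numpy as np

      rng = np.random.default_rng(42)
      err_km = rng.lognormal(mean=np.log(2.0), sigma=0.8, size=5000)   # synthetic location errors

      # Gaussian-style "1 sigma" summary, inflated by the heavy lognormal tail
      naive_68 = err_km.mean() + err_km.std()

      # Lognormal fit: 68th percentile of the fitted distribution (z(0.68) ~= 0.468)
      mu, sigma = np.log(err_km).mean(), np.log(err_km).std()
      lognormal_68 = np.exp(mu + 0.468 * sigma)

      print(naive_68, lognormal_68, np.percentile(err_km, 68))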

  12. Field error lottery

    SciTech Connect

    Elliott, C.J.; McVey, B.; Quimby, D.C.

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  13. Mean-square error due to gradiometer field measuring devices.

    PubMed

    Hatsell, C P

    1991-06-01

    Gradiometers use spatial common mode magnetic field rejection to reduce interference from distant sources. They also introduce distortion that can be severe, rendering experimental data difficult to interpret. Attempts to recover the measured magnetic field from the gradiometer output will be plagued by the nonexistence of a spatial function for deconvolution (except for first-order gradiometers), and by the high-pass nature of the spatial transform that emphasizes high spatial frequency noise. Goals of a design for a facility for measuring biomagnetic fields should be an effective shielded room and a field detector employing a first-order gradiometer.

  14. How social is error observation? The neural mechanisms underlying the observation of human and machine errors

    PubMed Central

    Deschrijver, Eliane; Brass, Marcel

    2014-01-01

    Recently, it has been shown that the medial prefrontal cortex (MPFC) is involved in error execution as well as error observation. Based on this finding, it has been argued that recognizing each other’s mistakes might rely on motor simulation. In the current functional magnetic resonance imaging (fMRI) study, we directly tested this hypothesis by investigating whether medial prefrontal activity in error observation is restricted to situations that enable simulation. To this aim, we compared brain activity related to the observation of errors that can be simulated (human errors) with brain activity related to errors that cannot be simulated (machine errors). We show that medial prefrontal activity is not only restricted to the observation of human errors but also occurs when observing errors of a machine. In addition, our data indicate that the MPFC reflects a domain general mechanism of monitoring violations of expectancies. PMID:23314011

  15. Group representations, error bases and quantum codes

    SciTech Connect

    Knill, E

    1996-01-01

    This report continues the discussion of unitary error bases and quantum codes. Nice error bases are characterized in terms of the existence of certain characters in a group. A general construction for error bases which are non-abelian over the center is given. The method for obtaining codes due to Calderbank et al. is generalized and expressed purely in representation theoretic terms. The significance of the inertia subgroup both for constructing codes and obtaining the set of transversally implementable operations is demonstrated.

  16. ERROR CORRECTION IN HIGH SPEED ARITHMETIC,

    DTIC Science & Technology

    The errors due to a faulty high speed multiplier are shown to be iterative in nature. These errors are analyzed in various aspects. The arithmetic coding technique is suggested for the improvement of high speed multiplier reliability. Through a number theoretic investigation, a large class of arithmetic codes for single iterative error correction are developed. The codes are shown to have near-optimal rates and to render a simple decoding method. The implementation of these codes seems highly practical. (Author)
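
    A minimal sketch of an AN (arithmetic) code residue check of the kind such reports build on; the check constant and the injected error below are arbitrary assumptions, and practical designs choose the constant so that iterative errors can be corrected, not merely detected.

      A = 29   # check constant; valid codewords are exact multiples of A

      def encode(n: int) -> int:
          return A * n

      def is_valid(v: int) -> bool:
          """Residue check: an error-free AN-coded value is divisible by A."""
          return v % A == 0

      x, y = encode(13), encode(7)
      s = x + y                              # sums of AN-coded operands remain multiples of A
      faulty = s + (1 << 5)                  # inject a single arithmetic (weight-one) error
      print(is_valid(s), is_valid(faulty))   # True, False: the residue check flags the error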

  17. Inborn errors of metabolism

    MedlinePlus

    Metabolism - inborn errors of ... Bodamer OA. Approach to inborn errors of metabolism. In: Goldman L, Schafer AI, eds. Goldman's Cecil Medicine . 25th ed. Philadelphia, PA: Elsevier Saunders; 2015:chap 205. Rezvani I, Rezvani G. An ...

  18. Exploring the impact of forcing error characteristics on physically based snow simulations within a global sensitivity analysis framework

    NASA Astrophysics Data System (ADS)

    Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.

    2015-07-01

    Physically based models provide insights into key hydrologic processes but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology and which error characteristics are most important. Here we employ global sensitivity analysis to explore how (1) different error types (i.e., bias, random errors), (2) different error probability distributions, and (3) different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use the Sobol' global sensitivity analysis, which is typically used for model parameters but adapted here for testing model sensitivity to coexisting errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 840 000 Monte Carlo simulations across four sites and five different scenarios. Model outputs were (1) consistently more sensitive to forcing biases than random errors, (2) generally less sensitive to forcing error distributions, and (3) critically sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes found in areas with drifting snow, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a more dominant impact when precipitation uncertainty was due solely to gauge undercatch. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.
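
    A hedged sketch of the pick-freeze (Saltelli-style) estimator behind such a Sobol' analysis, applied to a toy stand-in for a snow model rather than the Utah Energy Balance model; the input ranges and the toy response are assumptions. In this toy case the precipitation-bias input dominates the variance, mirroring the qualitative finding above.

      import numpy as np

      def toy_swe(x):
          """Toy stand-in for a snow-model output (e.g. peak SWE) driven by forcing errors."""
          precip_bias, temp_bias, rand_err_sd = x
          return 300.0 * (1.0 + precip_bias) - 25.0 * temp_bias + 10.0 * rand_err_sd**2

      def first_order_sobol(func, n_inputs=3, n=20000, seed=1):
          """Pick-freeze estimate of first-order Sobol indices, inputs ~ U(-0.5, 0.5)."""
          rng = np.random.default_rng(seed)
          A = rng.uniform(-0.5, 0.5, (n, n_inputs))
          B = rng.uniform(-0.5, 0.5, (n, n_inputs))
          fA = np.apply_along_axis(func, 1, A)
          fB = np.apply_along_axis(func, 1, B)
          total_var = fA.var()
          indices = []
          for i in range(n_inputs):
              BAi = B.copy()
              BAi[:, i] = A[:, i]                    # keep input i, resample all the others
              fBAi = np.apply_along_axis(func, 1, BAi)
              indices.append(np.mean(fA * (fBAi - fB)) / total_var)
          return indices

      print(first_order_sobol(toy_swe))   # precipitation bias dominates this toy output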

  19. Programming Errors in APL.

    ERIC Educational Resources Information Center

    Kearsley, Greg P.

    This paper discusses and provides some preliminary data on errors in APL programming. Data were obtained by analyzing listings of 148 complete and partial APL sessions collected from student terminal rooms at the University of Alberta. Frequencies of errors for the various error messages are tabulated. The data, however, are limited because they…

  20. Compounding errors in 2 dogs receiving anticonvulsants.

    PubMed

    McConkey, Sandra E; Walker, Susan; Adams, Cathy

    2012-04-01

    Two cases that involve drug compounding errors are described. One dog exhibited increased seizure activity due to a compounded, flavored phenobarbital solution that deteriorated before the expiration date provided by the compounder. The other dog developed clinical signs of hyperkalemia and bromine toxicity following a 5-fold compounding error in the concentration of potassium bromide (KBr).

  1. Clover: Compiler directed lightweight soft error resilience

    SciTech Connect

    Liu, Qingrui; Lee, Dongyoon; Jung, Changhee; Tiwari, Devesh

    2015-05-01

    This paper presents Clover, a compiler directed soft error detection and recovery scheme for lightweight soft error resilience. The compiler carefully generates soft error tolerant code based on idempotent processing without explicit checkpointing. During program execution, Clover relies on a small number of acoustic wave detectors deployed in the processor to identify soft errors by sensing the wave made by a particle strike. To cope with DUE (detected unrecoverable errors) caused by the sensing latency of error detection, Clover leverages a novel selective instruction duplication technique called tail-DMR (dual modular redundancy). Once a soft error is detected by either the sensor or the tail-DMR, Clover takes care of the error as in the case of exception handling. To recover from the error, Clover simply redirects program control to the beginning of the code region where the error is detected. Lastly, the experimental results demonstrate that the average runtime overhead is only 26%, which is a 75% reduction compared to that of the state-of-the-art soft error resilience technique.

  2. Clover: Compiler directed lightweight soft error resilience

    DOE PAGES

    Liu, Qingrui; Lee, Dongyoon; Jung, Changhee; ...

    2015-05-01

    This paper presents Clover, a compiler directed soft error detection and recovery scheme for lightweight soft error resilience. The compiler carefully generates soft error tolerant code based on idempotent processing without explicit checkpoints. During program execution, Clover relies on a small number of acoustic wave detectors deployed in the processor to identify soft errors by sensing the wave made by a particle strike. To cope with DUE (detected unrecoverable errors) caused by the sensing latency of error detection, Clover leverages a novel selective instruction duplication technique called tail-DMR (dual modular redundancy). Once a soft error is detected by either the sensor or the tail-DMR, Clover takes care of the error as in the case of exception handling. To recover from the error, Clover simply redirects program control to the beginning of the code region where the error is detected. Lastly, the experimental results demonstrate that the average runtime overhead is only 26%, which is a 75% reduction compared to that of the state-of-the-art soft error resilience technique.

  3. Food additives

    PubMed Central

    Spencer, Michael

    1974-01-01

    Food additives are discussed from the food technology point of view. The reasons for their use are summarized: (1) to protect food from chemical and microbiological attack; (2) to even out seasonal supplies; (3) to improve their eating quality; (4) to improve their nutritional value. The various types of food additives are considered, e.g. colours, flavours, emulsifiers, bread and flour additives, preservatives, and nutritional additives. The paper concludes with consideration of those circumstances in which the use of additives is (a) justified and (b) unjustified. PMID:4467857

  4. Error analysis using organizational simulation.

    PubMed Central

    Fridsma, D. B.

    2000-01-01

    Organizational simulations have been used by project organizations in civil and aerospace industries to identify work processes and organizational structures that are likely to fail under certain conditions. Using a simulation system based on Galbraith's information-processing theory and Simon's notion of bounded rationality, we retrospectively modeled a chemotherapy administration error that occurred in a hospital setting. Our simulation suggested that when the rate of unexpected events was high, the oncology fellow was differentially backlogged with work compared with other organizational members. Alternative scenarios suggested that providing more knowledge resources to the oncology fellow improved her performance more effectively than adding staff to the organization. Although it is not possible to know whether this might have prevented the error, organizational simulation may be an effective tool to prospectively evaluate organizational "weak links" and explore alternative scenarios to correct potential organizational problems before they generate errors. PMID:11079885

  5. MOPITT V6 & V7 Processing Error

    Atmospheric Science Data Center

    2017-03-27

    Due to an error in the reported PMC (pressure modulated cell) pressure following the February 2016 calibration ...

  6. Aircraft system modeling error and control error

    NASA Technical Reports Server (NTRS)

    Kulkarni, Nilesh V. (Inventor); Kaneshige, John T. (Inventor); Krishnakumar, Kalmanje S. (Inventor); Burken, John J. (Inventor)

    2012-01-01

    A method for modeling error-driven adaptive control of an aircraft. Normal aircraft plant dynamics is modeled, using an original plant description in which a controller responds to a tracking error e(k) to drive the component to a normal reference value according to an asymptote curve. Where the system senses that (1) at least one aircraft plant component is experiencing an excursion and (2) the return of this component value toward its reference value is not proceeding according to the expected controller characteristics, neural network (NN) modeling of aircraft plant operation may be changed. However, if (1) is satisfied but the error component is returning toward its reference value according to expected controller characteristics, the NN will continue to model operation of the aircraft plant according to an original description.

  7. Error Analysis: Past, Present, and Future

    ERIC Educational Resources Information Center

    McCloskey, George

    2017-01-01

    This commentary will take an historical perspective on the Kaufman Test of Educational Achievement (KTEA) error analysis, discussing where it started, where it is today, and where it may be headed in the future. In addition, the commentary will compare and contrast the KTEA error analysis procedures that are rooted in psychometric methodology and…

  8. Flux Sampling Errors for Aircraft and Towers

    NASA Technical Reports Server (NTRS)

    Mahrt, Larry

    1998-01-01

    Various errors and influences leading to differences between tower- and aircraft-measured fluxes are surveyed. This survey is motivated by reports in the literature that aircraft fluxes are sometimes smaller than tower-measured fluxes. Both tower and aircraft flux errors are larger with surface heterogeneity due to several independent effects. Surface heterogeneity may cause tower flux errors to increase with decreasing wind speed. Techniques to assess flux sampling error are reviewed. Such error estimates suffer various degrees of inapplicability in real geophysical time series due to nonstationarity of tower time series (or inhomogeneity of aircraft data). A new measure for nonstationarity is developed that eliminates assumptions on the form of the nonstationarity inherent in previous methods. When this nonstationarity measure becomes large, the surface energy imbalance increases sharply. Finally, strategies for obtaining adequate flux sampling using repeated aircraft passes and grid patterns are outlined.

  9. Error latency measurements in symbolic architectures

    NASA Technical Reports Server (NTRS)

    Young, L. T.; Iyer, R. K.

    1991-01-01

    Error latency, the time that elapses between the occurrence of an error and its detection, has a significant effect on reliability. In computer systems, failure rates can be elevated during a burst of system activity due to increased detection of latent errors. A hybrid monitoring environment is developed to measure the error latency distribution of errors occurring in main memory. The objective of this study is to develop a methodology for gauging the dependability of individual data categories within a real-time application. The hybrid monitoring technique is novel in that it selects and categorizes a specific subset of the available blocks of memory to monitor. The precise times of reads and writes are collected, so no actual faults need be injected. Unlike previous monitoring studies that rely on a periodic sampling approach or on statistical approximation, this new approach permits continuous monitoring of referencing activity and precise measurement of error latency.

  10. Studies of Error Sources in Geodetic VLBI

    NASA Technical Reports Server (NTRS)

    Rogers, A. E. E.; Niell, A. E.; Corey, B. E.

    1996-01-01

    Achieving the goal of millimeter uncertainty in three dimensional geodetic positioning on a global scale requires significant improvement in the precision and accuracy of both random and systematic error sources. For this investigation we proposed to study errors due to instrumentation in Very Long Baseline Interferometry (VLBI) and due to the atmosphere. After the inception of this work we expanded the scope to include assessment of error sources in GPS measurements, especially as they affect the vertical component of site position and the measurement of water vapor in the atmosphere. The atmospheric correction improvements described below are of benefit to both GPS and VLBI.

  11. Model Error Budgets

    NASA Technical Reports Server (NTRS)

    Briggs, Hugh C.

    2008-01-01

    An error budget is a commonly used tool in design of complex aerospace systems. It represents system performance requirements in terms of allowable errors and flows these down through a hierarchical structure to lower assemblies and components. The requirements may simply be 'allocated' based upon heuristics or experience, or they may be designed through use of physics-based models. This paper presents a basis for developing an error budget for models of the system, as opposed to the system itself. The need for model error budgets arises when system models are a principal design agent, as is increasingly common for poorly testable high performance space systems.

  12. Error coding simulations

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1993-01-01

    There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal-to-noise ratio needed for a certain desired bit error rate. The use of concatenated coding, e.g. inner convolutional code and outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
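
    As a concrete illustration of the error-detection side, the sketch below implements a bitwise CRC-16 of the CCITT family (generator polynomial 0x1021) and shows it flagging a single corrupted bit. The initial register value, bit ordering, and frame contents are assumptions for illustration rather than a faithful reproduction of the CCSDS-recommended parameters.

      def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
          # Bitwise CRC-16 with generator polynomial 0x1021, processed MSB-first.
          for byte in data:
              crc ^= byte << 8
              for _ in range(8):
                  crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
          return crc

      frame = bytearray(b"telemetry frame payload")
      check = crc16_ccitt(bytes(frame))

      frame[5] ^= 0x04                                   # simulate a single bit flip (e.g., due to RFI)
      print(hex(check), hex(crc16_ccitt(bytes(frame))))  # mismatch flags the corrupted frame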

  13. Food additives

    MedlinePlus

    ... or natural. Natural food additives include: Herbs or spices to add flavor to foods Vinegar for pickling ... Certain colors improve the appearance of foods. Many spices, as well as natural and man-made flavors, ...

  14. Image pre-filtering for measurement error reduction in digital image correlation

    NASA Astrophysics Data System (ADS)

    Zhou, Yihao; Sun, Chen; Song, Yuntao; Chen, Jubing

    2015-02-01

    In digital image correlation, the sub-pixel intensity interpolation causes a systematic error in the measured displacements. The error increases toward the high-frequency components of the speckle pattern. In practice, a captured image is usually corrupted by additive white noise. The noise introduces additional energy in the high frequencies and therefore raises the systematic error. Meanwhile, the noise also elevates the random error, which increases with the noise power. In order to reduce the systematic error and the random error of the measurements, we apply pre-filtering to the images prior to correlation so that the high-frequency contents are suppressed. Two spatial-domain filters (binomial and Gaussian) and two frequency-domain filters (Butterworth and Wiener) are tested on speckle images undergoing both simulated and real-world translations. By evaluating the errors of the various combinations of speckle patterns, interpolators, noise levels, and filter configurations, we come to the following conclusions. All four filters are able to reduce the systematic error. Meanwhile, the random error can also be reduced if the signal power is mainly distributed around DC. For high-frequency speckle patterns, the low-pass filters (binomial, Gaussian and Butterworth) slightly increase the random error, and the Butterworth filter produces the lowest random error among them. By using the Wiener filter with over-estimated noise power, the random error can be reduced, but the resultant systematic error is higher than that of the low-pass filters. In general, the Butterworth filter is recommended for error reduction due to its flexibility of passband selection and maximal preservation of the allowed frequencies. The binomial filter enables efficient implementation and thus becomes a good option if computational cost is a critical issue. When used together with pre-filtering, the B-spline interpolator produces lower systematic error than the bicubic interpolator and a similar level of the random
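
    A short sketch of the pre-filtering step itself (not of the full correlation pipeline) is given below. The speckle image, cutoff frequency, and filter order are placeholder values; the filtered arrays would then be handed to whatever sub-pixel correlation routine is in use.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def butterworth_lowpass(img, cutoff=0.25, order=4):
          # Frequency-domain Butterworth low-pass filter.
          # cutoff is the -3 dB frequency expressed as a fraction of the Nyquist frequency.
          ny, nx = img.shape
          fy = np.fft.fftfreq(ny)[:, None]      # cycles per pixel
          fx = np.fft.fftfreq(nx)[None, :]
          r = np.hypot(fx, fy) / 0.5            # radial frequency normalized by Nyquist
          H = 1.0 / (1.0 + (r / cutoff) ** (2 * order))
          return np.real(np.fft.ifft2(np.fft.fft2(img) * H))

      def binomial_filter(img):
          # 3x3 binomial (approximately Gaussian) smoothing via separable [1, 2, 1]/4 kernels.
          k = np.array([1.0, 2.0, 1.0]) / 4.0
          tmp = np.apply_along_axis(lambda row: np.convolve(row, k, mode="same"), 1, img)
          return np.apply_along_axis(lambda col: np.convolve(col, k, mode="same"), 0, tmp)

      # hypothetical noisy speckle image; in practice this is the captured reference/deformed pair
      speckle = np.random.default_rng(0).random((128, 128))
      pre_gaussian = gaussian_filter(speckle, sigma=1.0)   # spatial-domain option
      pre_butter = butterworth_lowpass(speckle)            # frequency-domain option
      # pre_gaussian / pre_butter would then be passed to the DIC correlation step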

  15. Addressing Medical Errors in Hand Surgery

    PubMed Central

    Johnson, Shepard P.; Adkinson, Joshua M.; Chung, Kevin C.

    2014-01-01

    Influential think tanks such as the Institute of Medicine have raised awareness about the implications of medical errors. In response, organizations, medical societies, and institutions have initiated programs to decrease the incidence and effects of these errors. Surgeons deal with the direct implications of adverse events involving patients. In addition to managing the physical consequences, they are confronted with ethical and social issues when caring for a harmed patient. Although there is considerable effort to implement system-wide changes, there is little guidance for hand surgeons on how to address medical errors. Admitting an error is difficult, but a transparent environment where patients are notified of errors and offered consolation and compensation is essential to maintain trust. Further, equipping hand surgeons with a guide for addressing medical errors will promote compassionate patient interaction, help identify system failures, provide learning points for safety improvement, and demonstrate a commitment to ethically responsible medical care. PMID:25154576

  16. Extending the Error Correction Capability of Linear Codes,

    DTIC Science & Technology

    be made to tolerate and correct up to (k-1) bit failures. Thus if the classical error correction bounds are assumed, a linear transmission code used...in digital circuitry is under-utilized. For example, the single-error-correction, double-error-detection Hamming code could be used to correct up to...two bit failures with some additional error correction circuitry. A simple algorithm for correcting these extra errors in linear codes is presented. (Author)

  17. An Empirical Study for Impacts of Measurement Errors on EHR based Association Studies

    PubMed Central

    Duan, Rui; Cao, Ming; Wu, Yonghui; Huang, Jing; Denny, Joshua C; Xu, Hua; Chen, Yong

    2016-01-01

    Over the last decade, Electronic Health Records (EHR) systems have been increasingly implemented at US hospitals. Despite their great potential, the complex and uneven nature of clinical documentation and data quality brings additional challenges for analyzing EHR data. A critical challenge is information bias due to measurement errors in the outcome and covariates. We conducted empirical studies to quantify the impacts of the information bias on association studies. Specifically, we designed our simulation studies based on the characteristics of the Electronic Medical Records and Genomics (eMERGE) Network. Through simulation studies, we quantified the loss of power due to misclassifications in case ascertainment and measurement errors in covariate status extraction, with respect to different levels of misclassification rates, disease prevalence, and covariate frequencies. These empirical findings can help investigators better understand the potential power loss due to misclassification and measurement errors under a variety of conditions in EHR based association studies. PMID:28269935

  18. Error Processing in Huntington's Disease

    PubMed Central

    Andrich, Jürgen; Gold, Ralf; Falkenstein, Michael

    2006-01-01

    Background Huntington's disease (HD) is a genetic disorder expressed by a degeneration of the basal ganglia, inter alia accompanied by dopaminergic alterations. These dopaminergic alterations are related to genetic factors, i.e., CAG-repeat expansion. The error(-related) negativity (Ne/ERN), a cognitive event-related potential related to performance monitoring, is generated in the anterior cingulate cortex (ACC) and supposed to depend on the dopaminergic system. The Ne is reduced in Parkinson's Disease (PD). Due to a dopaminergic deficit in HD, a reduction of the Ne is also likely. Furthermore, it is assumed that movement dysfunction emerges as a consequence of dysfunctional error-feedback processing. Since dopaminergic alterations are related to the CAG-repeat, a Ne reduction may furthermore also be related to the genetic disease load. Methodology/Principal Findings We assessed the error negativity (Ne) in a speeded reaction task under consideration of the underlying genetic abnormalities. HD patients showed a specific reduction in the Ne, which suggests impaired error processing in these patients. Furthermore, the Ne was closely related to CAG-repeat expansion. Conclusions/Significance The reduction of the Ne is likely to be an effect of the dopaminergic pathology. The result resembles findings in Parkinson's Disease. As such, the Ne might be a measure for the integrity of striatal dopaminergic output function. The relation to the CAG-repeat expansion indicates that the Ne could serve as a gene-associated “cognitive” biomarker in HD. PMID:17183717

  19. A Discriminant Function Approach to Adjust for Processing and Measurement Error When a Biomarker is Assayed in Pooled Samples.

    PubMed

    Lyles, Robert H; Van Domelen, Dane; Mitchell, Emily M; Schisterman, Enrique F

    2015-11-18

    Pooling biological specimens prior to performing expensive laboratory assays has been shown to be a cost effective approach for estimating parameters of interest. In addition to requiring specialized statistical techniques, however, the pooling of samples can introduce assay errors due to processing, possibly in addition to measurement error that may be present when the assay is applied to individual samples. Failure to account for these sources of error can result in biased parameter estimates and ultimately faulty inference. Prior research addressing biomarker mean and variance estimation advocates hybrid designs consisting of individual as well as pooled samples to account for measurement and processing (or pooling) error. We consider adapting this approach to the problem of estimating a covariate-adjusted odds ratio (OR) relating a binary outcome to a continuous exposure or biomarker level assessed in pools. In particular, we explore the applicability of a discriminant function-based analysis that assumes normal residual, processing, and measurement errors. A potential advantage of this method is that maximum likelihood estimation of the desired adjusted log OR is straightforward and computationally convenient. Moreover, in the absence of measurement and processing error, the method yields an efficient unbiased estimator for the parameter of interest assuming normal residual errors. We illustrate the approach using real data from an ancillary study of the Collaborative Perinatal Project, and we use simulations to demonstrate the ability of the proposed estimators to alleviate bias due to measurement and processing error.
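
    To make the two error sources concrete, here is a small simulation of pooled assays carrying both processing (pooling) error and measurement error; the biomarker distribution, pool size, and error magnitudes are invented, and this is not the discriminant-function estimator itself, which additionally models a binary outcome and covariates.

      import numpy as np

      rng = np.random.default_rng(42)

      n, pool_size = 6000, 3
      x = rng.lognormal(mean=1.0, sigma=0.5, size=n)      # true individual biomarker levels

      pools = x.reshape(-1, pool_size).mean(axis=1)        # physically pooled specimens

      sigma_p, sigma_m = 0.5, 0.2                          # assumed processing and measurement error SDs
      obs_pooled = pools + rng.normal(0, sigma_p, pools.size) + rng.normal(0, sigma_m, pools.size)
      obs_individual = x[:100] + rng.normal(0, sigma_m, 100)  # individual assays carry no processing error

      # ignoring both error sources tends to inflate the individual-level variance recovered from pools
      print("true individual-level variance    :", round(x.var(), 2))
      print("naive estimate from pooled assays :", round(pool_size * obs_pooled.var(), 2))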

  20. Twenty Questions about Student Errors.

    ERIC Educational Resources Information Center

    Fisher, Kathleen M.; Lipson, Joseph Isaac

    1986-01-01

    Discusses the value of studying errors made by students in the process of learning science. Addresses 20 research questions dealing with student learning errors. Attempts to characterize errors made by students and clarify some terms used in error research. (TW)

  1. Model Error Estimation for the CPTEC Eta Model

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; daSilva, Arlindo

    1999-01-01

    Statistical data assimilation systems require the specification of forecast and observation error statistics. Forecast error is due to model imperfections and differences between the initial condition and the actual state of the atmosphere. Practical four-dimensional variational (4D-Var) methods try to fit the forecast state to the observations and assume that the model error is negligible. Here, with a number of simplifying assumptions, a framework is developed for isolating the model error given the forecast error at two lead-times. Two definitions are proposed for the Talagrand ratio tau, the fraction of the forecast error due to model error rather than initial condition error. Data from the CPTEC Eta Model running operationally over South America are used to calculate forecast error statistics and lower bounds for tau.

  2. Action errors, error management, and learning in organizations.

    PubMed

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  3. Teacher-Induced Errors.

    ERIC Educational Resources Information Center

    Richmond, Kent C.

    Students of English as a second language (ESL) often come to the classroom with little or no experience in writing in any language and with inaccurate assumptions about writing. Rather than correct these assumptions, teachers often seem to unwittingly reinforce them, actually inducing errors into their students' work. Teacher-induced errors occur…

  4. A priori discretization error metrics for distributed hydrologic modeling applications

    NASA Astrophysics Data System (ADS)

    Liu, Hongli; Tolson, Bryan A.; Craig, James R.; Shafii, Mahyar

    2016-12-01

    Watershed spatial discretization is an important step in developing a distributed hydrologic model. A key difficulty in the spatial discretization process is maintaining a balance between the aggregation-induced information loss and the increase in computational burden caused by the inclusion of additional computational units. Objective identification of an appropriate discretization scheme still remains a challenge, in part because of the lack of quantitative measures for assessing discretization quality, particularly prior to simulation. This study proposes a priori discretization error metrics to quantify the information loss of any candidate discretization scheme without having to run and calibrate a hydrologic model. These error metrics are applicable to multi-variable and multi-site discretization evaluation and provide directly interpretable information to the hydrologic modeler about discretization quality. The first metric, a subbasin error metric, quantifies the routing information loss from discretization, and the second, a hydrological response unit (HRU) error metric, improves upon existing a priori metrics by quantifying the information loss due to changes in land cover or soil type property aggregation. The metrics are straightforward to understand and easy to recode. Informed by the error metrics, a two-step discretization decision-making approach is proposed with the advantage of reducing extreme errors and meeting the user-specified discretization error targets. The metrics and decision-making approach are applied to the discretization of the Grand River watershed in Ontario, Canada. Results show that information loss increases as discretization gets coarser. Moreover, results help to explain the modeling difficulties associated with smaller upstream subbasins since the worst discretization errors and highest error variability appear in smaller upstream areas instead of larger downstream drainage areas. Hydrologic modeling experiments under

  5. Process Recovery after CaO Addition Due to Granule Formation in a CSTR Co-Digester-A Tool to Influence the Composition of the Microbial Community and Stabilize the Process?

    PubMed

    Liebrich, Marietta; Kleyböcker, Anne; Kasina, Monika; Miethling-Graff, Rona; Kassahun, Andrea; Würdemann, Hilke

    2016-03-17

    The composition, structure and function of granules formed during process recovery with calcium oxide in a laboratory-scale fermenter fed with sewage sludge and rapeseed oil were studied. In the course of over-acidification and successful process recovery, only minor changes were observed in the bacterial community of the digestate, while granules appeared during recovery. Fluorescence microscopic analysis of the granules showed a close spatial relationship between calcium and oil and/or long chain fatty acids. This finding further substantiated the hypothesis that calcium precipitated with carbon of organic origin and reduced the negative effects of overloading with oil. Furthermore, the enrichment of phosphate minerals in the granules was shown, and molecular biological analyses detected polyphosphate-accumulating organisms as well as methanogenic archaea in the core. Organisms related to Methanoculleus receptaculi were detected in the inner zones of a granule, whereas they were present in the digestate only after process recovery. This finding indicated more favorable microhabitats inside the granules that supported process recovery. Thus, the granule formation triggered by calcium oxide addition served as a tool to influence the composition of the microbial community and to stabilize the process after overloading with oil.

  6. Process Recovery after CaO Addition Due to Granule Formation in a CSTR Co-Digester—A Tool to Influence the Composition of the Microbial Community and Stabilize the Process?

    PubMed Central

    Liebrich, Marietta; Kleyböcker, Anne; Kasina, Monika; Miethling-Graff, Rona; Kassahun, Andrea; Würdemann, Hilke

    2016-01-01

    The composition, structure and function of granules formed during process recovery with calcium oxide in a laboratory-scale fermenter fed with sewage sludge and rapeseed oil were studied. In the course of over-acidification and successful process recovery, only minor changes were observed in the bacterial community of the digestate, while granules appeared during recovery. Fluorescence microscopic analysis of the granules showed a close spatial relationship between calcium and oil and/or long chain fatty acids. This finding further substantiated the hypothesis that calcium precipitated with carbon of organic origin and reduced the negative effects of overloading with oil. Furthermore, the enrichment of phosphate minerals in the granules was shown, and molecular biological analyses detected polyphosphate-accumulating organisms as well as methanogenic archaea in the core. Organisms related to Methanoculleus receptaculi were detected in the inner zones of a granule, whereas they were present in the digestate only after process recovery. This finding indicated more favorable microhabitats inside the granules that supported process recovery. Thus, the granule formation triggered by calcium oxide addition served as a tool to influence the composition of the microbial community and to stabilize the process after overloading with oil. PMID:27681911

  7. Negligence, genuine error, and litigation

    PubMed Central

    Sohn, David H

    2013-01-01

    Not all medical injuries are the result of negligence. In fact, most medical injuries are the result either of the inherent risk in the practice of medicine or of system errors, which cannot be prevented simply through fear of disciplinary action. This paper will discuss the differences between adverse events, negligence, and system errors; the current medical malpractice tort system in the United States; and review current and future solutions, including medical malpractice reform, alternative dispute resolution, health courts, and no-fault compensation systems. The current political environment favors investigation of non-cap tort reform remedies; investment into more rational oversight systems, such as health courts or no-fault systems, may reap both quantitative and qualitative benefits for a less costly and safer health system. PMID:23426783

  8. Addition of the Neurokinin-1-Receptor Antagonist (RA) Aprepitant to a 5-Hydroxytryptamine-RA and Dexamethasone in the Prophylaxis of Nausea and Vomiting Due to Radiation Therapy With Concomitant Cisplatin

    SciTech Connect

    Jahn, Franziska; Jahn, Patrick; Sieker, Frank; Vordermark, Dirk; Jordan, Karin

    2015-08-01

    Purpose: To assess, in a prospective, observational study, the safety and efficacy of the addition of the neurokinin-1-receptor antagonist (NK1-RA) aprepitant to concomitant radiochemotherapy, for the prophylaxis of radiation therapy–induced nausea and vomiting. Patients and Methods: This prospective observational study compared the antiemetic efficacy of an NK1-RA (aprepitant), a 5-hydroxytryptamine-RA, and dexamethasone (aprepitant regimen) versus a 5-hydroxytryptamine-RA and dexamethasone (control regimen) in patients receiving concomitant radiochemotherapy with cisplatin at the Department of Radiation Oncology, University Hospital Halle (Saale), Germany. The primary endpoint was complete response in the overall phase, defined as no vomiting and no use of rescue therapy in this period. Results: Fifty-nine patients treated with concomitant radiochemotherapy with cisplatin were included in this study. Thirty-one patients received the aprepitant regimen and 29 the control regimen. The overall complete response rates for cycles 1 and 2 were 75.9% and 64.5% for the aprepitant group and 60.7% and 54.2% for the control group, respectively. Although a 15.2% absolute difference was reached in cycle 1, statistical significance was not detected (P=.22). Furthermore, maximum nausea was 1.58 ± 1.91 in the control group and 0.73 ± 1.79 in the aprepitant group (P=.084); for the head-and-neck subset, it was 2.23 ± 2.13 in the control group and 0.64 ± 1.77 in the aprepitant group, respectively (P=.03). Conclusion: This is the first study of an NK1-RA–containing antiemetic prophylaxis regimen in patients receiving concomitant radiochemotherapy. Although the primary endpoint was not obtained, the absolute difference of 10% in efficacy was reached, which is defined as clinically meaningful for patients by international guidelines groups. Randomized phase 3 studies are necessary to further define the potential role of an NK1-RA in this setting.

  9. Voice Onset Time in Consonant Cluster Errors: Can Phonetic Accommodation Differentiate Cognitive from Motor Errors?

    ERIC Educational Resources Information Center

    Pouplier, Marianne; Marin, Stefania; Waltl, Susanne

    2014-01-01

    Purpose: Phonetic accommodation in speech errors has traditionally been used to identify the processing level at which an error has occurred. Recent studies have challenged the view that noncanonical productions may solely be due to phonetic, not phonological, processing irregularities, as previously assumed. The authors of the present study…

  10. Prescribing Errors Involving Medication Dosage Forms

    PubMed Central

    Lesar, Timothy S

    2002-01-01

    CONTEXT Prescribing errors involving medication dose formulations have been reported to occur frequently in hospitals. No systematic evaluations of the characteristics of errors related to medication dosage formulation have been performed. OBJECTIVE To quantify the characteristics, frequency, and potential adverse patient effects of prescribing errors involving medication dosage forms. DESIGN Evaluation of all detected medication prescribing errors involving or related to medication dosage forms in a 631-bed tertiary care teaching hospital. MAIN OUTCOME MEASURES Type, frequency, and potential for adverse effects of prescribing errors involving or related to medication dosage forms. RESULTS A total of 1,115 clinically significant prescribing errors involving medication dosage forms were detected during the 60-month study period. The annual number of detected errors increased throughout the study period. Detailed analysis of the 402 errors detected during the last 16 months of the study demonstrated the most common errors to be: failure to specify controlled release formulation (total of 280 cases; 69.7%) both when prescribing using the brand name (148 cases; 36.8%) and when prescribing using the generic name (132 cases; 32.8%); and prescribing controlled delivery formulations to be administered per tube (48 cases; 11.9%). The potential for adverse patient outcome was rated as potentially “fatal or severe” in 3 cases (0.7%), and “serious” in 49 cases (12.2%). Errors most commonly involved cardiovascular agents (208 cases; 51.7%). CONCLUSIONS Hospitalized patients are at risk for adverse outcomes due to prescribing errors related to inappropriate use of medication dosage forms. This information should be considered in the development of strategies to prevent adverse patient outcomes resulting from such errors. PMID:12213138

  11. Potlining Additives

    SciTech Connect

    Rudolf Keller

    2004-08-10

    In this project, a concept to improve the performance of aluminum production cells by introducing potlining additives was examined and tested. Boron oxide was added to cathode blocks, and titanium was dissolved in the metal pool; this resulted in the formation of titanium diboride and caused the molten aluminum to wet the carbonaceous cathode surface. Such wetting reportedly leads to operational improvements and extended cell life. In addition, boron oxide suppresses cyanide formation. This final report presents and discusses the results of this project. Substantial economic benefits for the practical implementation of the technology are projected, especially for modern cells with graphitized blocks. For example, with an energy savings of about 5% and an increase in pot life from 1500 to 2500 days, a cost savings of $ 0.023 per pound of aluminum produced is projected for a 200 kA pot.

  12. Phosphazene additives

    DOEpatents

    Harrup, Mason K; Rollins, Harry W

    2013-11-26

    An additive comprising a phosphazene compound that has at least two reactive functional groups and at least one capping functional group bonded to phosphorus atoms of the phosphazene compound. One of the at least two reactive functional groups is configured to react with cellulose and the other of the at least two reactive functional groups is configured to react with a resin, such as an amine resin or a polycarboxylic acid resin. The at least one capping functional group is selected from the group consisting of a short chain ether group, an alkoxy group, or an aryloxy group. Also disclosed are an additive-resin admixture, a method of treating a wood product, and a wood product.

  13. Error Prevention Aid

    NASA Technical Reports Server (NTRS)

    1987-01-01

    In a complex computer environment there is ample opportunity for error, a mistake by a programmer, or a software-induced undesirable side effect. In insurance, errors can cost a company heavily, so protection against inadvertent change is a must for the efficient firm. The data processing center at Transport Life Insurance Company has taken a step to guard against accidental changes by adopting a software package called EQNINT (Equations Interpreter Program). EQNINT cross checks the basic formulas in a program against the formulas that make up the major production system. EQNINT assures that formulas are coded correctly and helps catch errors before they affect the customer service or its profitability.

  14. Classification and reduction of pilot error

    NASA Technical Reports Server (NTRS)

    Rogers, W. H.; Logan, A. L.; Boley, G. D.

    1989-01-01

    Human error is a primary or contributing factor in about two-thirds of commercial aviation accidents worldwide. With the ultimate goal of reducing pilot error accidents, this contract effort is aimed at understanding the factors underlying error events and reducing the probability of certain types of errors by modifying underlying factors such as flight deck design and procedures. A review of the literature relevant to error classification was conducted. Classification includes categorizing types of errors, the information processing mechanisms and factors underlying them, and identifying factor-mechanism-error relationships. The classification scheme developed by Jens Rasmussen was adopted because it provided a comprehensive yet basic error classification shell or structure that could easily accommodate addition of details on domain-specific factors. For these purposes, factors specific to the aviation environment were incorporated. Hypotheses concerning the relationship of a small number of underlying factors, information processing mechanisms, and error types identified in the classification scheme were formulated. ASRS data were reviewed and a simulation experiment was performed to evaluate and quantify the hypotheses.

  15. Slowing after Observed Error Transfers across Tasks

    PubMed Central

    Wang, Lijun; Pan, Weigang; Tan, Jinfeng; Liu, Congcong; Chen, Antao

    2016-01-01

    . Moreover, the PES effect appears across tasksets with distinct stimuli and response rules in the context of observed errors, reflecting a generic process. Additionally, the slowing effect and improved accuracy in the post-observed error trial do not occur together, suggesting that they are independent behavioral adjustments in the context of observed errors. PMID:26934579

  16. Experimental investigation of false positive errors in auditory species occurrence surveys

    USGS Publications Warehouse

    Miller, David A.W.; Weir, Linda A.; McClintock, Brett T.; Grant, Evan H. Campbell; Bailey, Larissa L.; Simons, Theodore R.

    2012-01-01

    False positive errors are a significant component of many ecological data sets, which, in combination with false negative errors, can lead to severe biases in conclusions about ecological systems. We present results of a field experiment where observers recorded observations for known combinations of electronically broadcast calling anurans under conditions mimicking field surveys to determine species occurrence. Our objectives were to characterize false positive error probabilities for auditory methods based on a large number of observers, to determine if targeted instruction could be used to reduce false positive error rates, and to establish useful predictors of among-observer and among-species differences in error rates. We recruited 31 observers, ranging in abilities from novice to expert, who recorded detections for 12 species during 180 calling trials (66,960 total observations). All observers made multiple false positive errors and on average 8.1% of recorded detections in the experiment were false positive errors. Additional instruction had only minor effects on error rates. After instruction, false positive error probabilities decreased by 16% for treatment individuals compared to controls with broad confidence interval overlap of 0 (95% CI: -46 to 30%). This coincided with an increase in false negative errors due to the treatment (26%; -3 to 61%). Differences among observers in false positive and in false negative error rates were best predicted by scores from an online test and a self-assessment of observer ability completed prior to the field experiment. In contrast, years of experience conducting call surveys was a weak predictor of error rates. False positive errors were also more common for species that were played more frequently, but were not related to the dominant spectral frequency of the call. Our results corroborate other work that demonstrates false positives are a significant component of species occurrence data collected by auditory
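
    The practical consequence of such error rates can be illustrated with a small repeat-visit survey simulation; the occupancy and detection probabilities below are assumptions, the per-visit false positive probability is set near the 8.1% figure reported above, and the naive estimator shown makes no correction for either error type.

      import numpy as np

      rng = np.random.default_rng(7)

      n_sites, n_visits = 200, 5
      psi, p_det, p_fp = 0.4, 0.6, 0.08    # assumed occupancy, detection, and false positive probabilities

      occupied = rng.random(n_sites) < psi
      detections = np.zeros((n_sites, n_visits), dtype=bool)
      for v in range(n_visits):
          true_det = occupied & (rng.random(n_sites) < p_det)    # detections at occupied sites
          false_det = ~occupied & (rng.random(n_sites) < p_fp)   # false positives at unoccupied sites
          detections[:, v] = true_det | false_det

      naive = detections.any(axis=1).mean()   # "occupied" = at least one recorded detection
      print(f"true occupancy {psi:.2f}   naive estimate {naive:.2f}")   # biased upward by false positives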

  17. Statistics of the residual refraction errors in laser ranging data

    NASA Technical Reports Server (NTRS)

    Gardner, C. S.

    1977-01-01

    A theoretical model for the range error covariance was derived by assuming that the residual refraction errors are due entirely to errors in the meteorological data which are used to calculate the atmospheric correction. The properties of the covariance function are illustrated by evaluating the theoretical model for the special case of a dense network of weather stations uniformly distributed within a circle.

  18. Estimating Bias Error Distributions

    NASA Technical Reports Server (NTRS)

    Liu, Tian-Shu; Finley, Tom D.

    2001-01-01

    This paper formulates the general methodology for estimating the bias error distribution of a device in a measuring domain from less accurate measurements when a minimal number of standard values (typically two values) are available. A new perspective is that the bias error distribution can be found as a solution of an intrinsic functional equation in a domain. Based on this theory, the scaling- and translation-based methods for determining the bias error distribution are developed. These methods are virtually applicable to any device as long as the bias error distribution of the device can be sufficiently described by a power series (a polynomial) or a Fourier series in a domain. These methods have been validated through computational simulations and laboratory calibration experiments for a number of different devices.
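
    The premise that a device's bias error can be described by a low-order power series is easy to sketch numerically; the bias curve, noise level, and use of many reference points below are illustrative assumptions, whereas the scaling- and translation-based methods of the paper are designed to work from only a minimal number of standard values.

      import numpy as np

      rng = np.random.default_rng(3)

      def true_bias(x):
          # hypothetical smooth bias of a device as a function of the measured quantity
          return 0.5 + 0.02 * x - 1.5e-4 * x ** 2

      x_ref = np.linspace(0.0, 100.0, 25)                                     # reference standard values
      readings = x_ref + true_bias(x_ref) + rng.normal(0, 0.05, x_ref.size)   # device output

      # represent the bias error distribution by a quadratic power series fitted to the residuals
      coeffs = np.polynomial.polynomial.polyfit(x_ref, readings - x_ref, deg=2)
      bias_fit = np.polynomial.polynomial.polyval(x_ref, coeffs)

      print("max |fitted bias - true bias| =", float(np.max(np.abs(bias_fit - true_bias(x_ref)))))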

  19. Diagnostic errors in interactive telepathology.

    PubMed

    Stauch, G; Schweppe, K W; Kayser, K

    2000-01-01

    Telepathology (TP) as a service in pathology at a distance is now widely used. It is integrated in the daily workflow of numerous pathologists. Meanwhile, 15 departments of pathology in Germany are using the telepathology technique for frozen section service; however, a commonly recognised quality standard in diagnostic accuracy is still missing. In a first step, the working group Aurich uses a TP system for frozen section service in order to analyse the frequency and sources of errors in TP frozen section diagnoses, to evaluate the quality of frozen section slides, and to identify the important components of image quality and their influence on diagnostic accuracy. The authors point to the necessity of an optimal training program for all participants in this service in order to reduce the risk of diagnostic errors. In addition, there is a need for optimal cooperation of all partners involved in the TP service.

  20. The surveillance error grid.

    PubMed

    Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris

    2014-07-01

    Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to

  1. Thermodynamics of Error Correction

    NASA Astrophysics Data System (ADS)

    Sartori, Pablo; Pigolotti, Simone

    2015-10-01

    Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  2. Patient cueing, a type of diagnostic error

    PubMed Central

    2016-01-01

    Diagnostic failure can be due to a variety of psychological errors on the part of the diagnostician. An erroneous diagnosis rendered by previous clinicians can lead a diagnostician to the wrong diagnosis. This report is the case of a patient who misdiagnosed herself and then led an emergency room physician and subsequent treating physicians to the wrong diagnosis. This mechanism of diagnostic error can be called patient cueing. PMID:27284538

  3. Phase and amplitude errors in FM radars

    NASA Astrophysics Data System (ADS)

    Griffiths, Hugh D.

    The constraints on phase and amplitude errors are determined for various types of FM radar by calculating the range sidelobe levels on the point target response due to the phase and amplitude modulation of the target echo. It is shown that under certain circumstances the constraints on phase linearity appropriate for conventional pulse compression radars are unnecessarily stringent, and quite large phase errors can be tolerated provided the relative delay of the local oscillator with respect to the target echo is small compared with the periodicity of the phase error characteristic. The constraints on amplitude flatness, however, are severe under almost all circumstances.
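
    The effect of a periodic phase error on the point target response can be reproduced numerically. In the sketch below the beat frequency, sweep length, and ripple parameters are arbitrary choices for an FMCW-style illustration, and the small-ripple rule of thumb (paired sidelobes near 20*log10(a/2) relative to the main lobe) is quoted only as an approximation.

      import numpy as np

      fs, T = 1.0e6, 1.0e-3                  # sample rate and sweep duration (assumed)
      t = np.arange(int(fs * T)) / fs
      f_beat = 50e3                          # beat frequency of a single point target

      a, f_err = 0.05, 5e3                   # sinusoidal phase error: amplitude (rad) and periodicity
      clean = np.cos(2 * np.pi * f_beat * t)
      ripple = np.cos(2 * np.pi * f_beat * t + a * np.sin(2 * np.pi * f_err * t))

      def range_profile(x):
          # windowed FFT magnitude in dB, normalized to the main-lobe peak
          X = np.abs(np.fft.rfft(x * np.hanning(x.size)))
          return 20 * np.log10(X / X.max() + 1e-12)

      k_side = int((f_beat + f_err) * T)     # FFT bin of the first paired sidelobe
      print("with phase ripple   : %.1f dB" % range_profile(ripple)[k_side])
      print("without phase ripple: %.1f dB" % range_profile(clean)[k_side])
      # small-ripple approximation: sidelobe level ~ 20*log10(a/2) ~ -32 dB for a = 0.05 rad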

  4. Antenna pointing systematic error model derivations

    NASA Technical Reports Server (NTRS)

    Guiar, C. N.; Lansing, F. L.; Riggs, R.

    1987-01-01

    The pointing model used to represent and correct systematic errors for the Deep Space Network (DSN) antennas is presented. Analytical expressions are given in both azimuth-elevation (az-el) and hour angle-declination (ha-dec) mounts for RF axis collimation error, encoder offset, nonorthogonality of axes, axis plane tilt, and structural flexure due to gravity loading. While the residual pointing errors (rms) after correction appear to be within the ten percent of the half-power beamwidth criterion commonly set for good pointing accuracy, the DSN has embarked on an extensive pointing improvement and modeling program aiming toward an order of magnitude higher pointing precision.

  5. Preventable Errors in Organ Transplantation: An Emerging Patient Safety Issue?

    PubMed Central

    Ison, Michael G.; Holl, Jane L.; Ladner, Daniela

    2012-01-01

    Several widely publicized errors in transplantation, including a death due to ABO incompatibility, two HIV transmissions, and two HCV transmissions, have raised concerns about medical errors in organ transplantation. The root cause analysis of each of these events revealed preventable failures in the systems and processes of care as the underlying causes. In each event, no standardized system or redundant process was in place to mitigate the failures that led to the error. Additional system and process vulnerabilities, such as poor clinician communication and erroneous data transcription and transmission, were also identified. Organ transplantation, because it is highly complex, often stresses the systems and processes of care and, therefore, offers a unique opportunity to proactively identify vulnerabilities and potential failures. Initial steps have been taken to understand such issues through the OPTN/UNOS Operations and Safety Committee, the Disease Transmission Advisory Committee (DTAC), and the current A2ALL ancillary Safety Study. However, to effectively improve patient safety in organ transplantation, the development of a process for reporting of preventable errors that affords protection and the support of empiric research are critical. Further, the transplant community needs to embrace the implementation of evidence-based system and process improvements that will mitigate existing safety vulnerabilities. PMID:22703471

  6. Interactions of timing and prediction error learning.

    PubMed

    Kirkpatrick, Kimberly

    2014-01-01

    Timing and prediction error learning have historically been treated as independent processes, but growing evidence has indicated that they are not orthogonal. Timing emerges at the earliest time point when conditioned responses are observed, and temporal variables modulate prediction error learning in both simple conditioning and cue competition paradigms. In addition, prediction errors, through changes in reward magnitude or value, alter the timing of behavior. Thus, there appears to be a bi-directional interaction between timing and prediction error learning. Modern theories have attempted to integrate the two processes with mixed success. A neurocomputational approach to theory development is espoused, which draws on neurobiological evidence to guide and constrain computational model development. Heuristics for future model development are presented with the goal of sparking new approaches to theory development in the timing and prediction error fields.

  7. A Bayesian Measurment Error Model for Misaligned Radiographic Data

    SciTech Connect

    Lennox, Kristin P.; Glascoe, Lee G.

    2013-09-06

    An understanding of the inherent variability in micro-computed tomography (micro-CT) data is essential to tasks such as statistical process control and the validation of radiographic simulation tools. The data present unique challenges to variability analysis due to the relatively low resolution of radiographs, and also due to minor variations from run to run which can result in misalignment or magnification changes between repeated measurements of a sample. Positioning changes artificially inflate the variability of the data in ways that mask true physical phenomena. We present a novel Bayesian nonparametric regression model that incorporates both additive and multiplicative measurement error in addition to heteroscedasticity to address this problem. We also use this model to assess the effects of sample thickness and sample position on measurement variability for an aluminum specimen. Supplementary materials for this article are available online.

  8. A Bayesian Measurment Error Model for Misaligned Radiographic Data

    DOE PAGES

    Lennox, Kristin P.; Glascoe, Lee G.

    2013-09-06

    An understanding of the inherent variability in micro-computed tomography (micro-CT) data is essential to tasks such as statistical process control and the validation of radiographic simulation tools. The data present unique challenges to variability analysis due to the relatively low resolution of radiographs, and also due to minor variations from run to run which can result in misalignment or magnification changes between repeated measurements of a sample. Positioning changes artificially inflate the variability of the data in ways that mask true physical phenomena. We present a novel Bayesian nonparametric regression model that incorporates both additive and multiplicative measurement error in addition to heteroscedasticity to address this problem. We also use this model to assess the effects of sample thickness and sample position on measurement variability for an aluminum specimen. Supplementary materials for this article are available online.
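
    A toy simulation of the assumed error structure (multiplicative and additive errors with heteroscedastic noise) is sketched below; the attenuation curve and error magnitudes are invented, and this illustrates only the data-generating assumption, not the Bayesian nonparametric regression used to fit it.

      import numpy as np

      rng = np.random.default_rng(11)

      x = np.linspace(0.0, 1.0, 200)
      truth = np.exp(-3.0 * x)               # hypothetical smooth attenuation profile

      sigma_mult, sigma_add, n_rep = 0.03, 0.01, 50
      reps = np.empty((n_rep, x.size))
      for r in range(n_rep):
          mult = rng.normal(0.0, sigma_mult, x.size)            # e.g., gain or magnification variation
          add = rng.normal(0.0, sigma_add * (0.5 + truth))      # additive noise, heteroscedastic by design
          reps[r] = truth * (1.0 + mult) + add                  # y = f(x)(1 + multiplicative) + additive

      # the spread across repeated "measurements" grows with the signal where multiplicative error dominates
      print(np.round(reps.std(axis=0)[[0, 100, 199]], 4))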

  9. Low-dimensional Representation of Error Covariance

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; Cohn, Stephen E.; Todling, Ricardo; Marchesin, Dan

    2000-01-01

    Ensemble and reduced-rank approaches to prediction and assimilation rely on low-dimensional approximations of the estimation error covariances. Here stability properties of the forecast/analysis cycle for linear, time-independent systems are used to identify factors that cause the steady-state analysis error covariance to admit a low-dimensional representation. A useful measure of forecast/analysis cycle stability is the bound matrix, a function of the dynamics, observation operator and assimilation method. Upper and lower estimates for the steady-state analysis error covariance matrix eigenvalues are derived from the bound matrix. The estimates generalize to time-dependent systems. If much of the steady-state analysis error variance is due to a few dominant modes, the leading eigenvectors of the bound matrix approximate those of the steady-state analysis error covariance matrix. The analytical results are illustrated in two numerical examples where the Kalman filter is carried to steady state. The first example uses the dynamics of a generalized advection equation exhibiting nonmodal transient growth. Failure to observe growing modes leads to increased steady-state analysis error variances. Leading eigenvectors of the steady-state analysis error covariance matrix are well approximated by leading eigenvectors of the bound matrix. The second example uses the dynamics of a damped baroclinic wave model. The leading eigenvectors of a lowest-order approximation of the bound matrix are shown to approximate well the leading eigenvectors of the steady-state analysis error covariance matrix.
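
    The idea that a few dominant modes can represent most of the analysis error variance is easy to illustrate with a synthetic covariance matrix; the dimension and eigenvalue spectrum below are arbitrary, and the bound-matrix machinery of the paper is not reproduced.

      import numpy as np

      rng = np.random.default_rng(5)

      n, k = 200, 3
      U, _ = np.linalg.qr(rng.normal(size=(n, n)))               # random orthonormal basis
      eigvals = np.concatenate([[10.0, 5.0, 2.0], 0.01 * np.ones(n - k)])
      P = (U * eigvals) @ U.T                                    # covariance dominated by 3 modes

      w, V = np.linalg.eigh(P)                                   # eigenvalues returned in ascending order
      w, V = w[::-1], V[:, ::-1]

      explained = np.cumsum(w) / np.sum(w)
      P_k = (V[:, :k] * w[:k]) @ V[:, :k].T                      # rank-k (low-dimensional) representation

      print("variance captured by leading %d modes: %.2f" % (k, explained[k - 1]))
      print("relative Frobenius error of rank-%d truncation: %.3f"
            % (k, np.linalg.norm(P - P_k) / np.linalg.norm(P)))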

  10. Quantifying soil CO2 respiration measurement error across instruments

    NASA Astrophysics Data System (ADS)

    Creelman, C. A.; Nickerson, N. R.; Risk, D. A.

    2010-12-01

    A variety of instrumental methodologies have been developed in an attempt to accurately measure the rate of soil CO2 respiration. Among the most commonly used are the static and dynamic chamber systems. The degree to which these methods misread or perturb the soil CO2 signal, however, is poorly understood. One source of error in particular is the introduction of lateral diffusion due to the disturbance of the steady-state CO2 concentrations. The addition of soil collars to the chamber system attempts to address this perturbation, but may induce additional errors from the increased physical disturbance. Using a numerical 3D soil-atmosphere diffusion model, we are undertaking a comprehensive comparative study of existing static and dynamic chambers, as well as a solid-state CTFD probe. Specifically, we are examining the 3D diffusion errors associated with each method and opportunities for correction. In this study, the impact of collar length, chamber geometry, chamber mixing and diffusion parameters on the magnitude of lateral diffusion around the instrument is quantified in order to provide insight into obtaining more accurate soil respiration estimates. Results suggest that while each method can approximate the true flux rate under idealized conditions, the associated errors can be of a high magnitude and may vary substantially in their sensitivity to these parameters. In some cases, factors such as the collar length and chamber exchange rate used are coupled in their effect on accuracy. Due to the widespread use of these instruments, it is critical that the nature of their biases and inaccuracies be understood in order to inform future development, ensure the accuracy of current measurements and to facilitate inter-comparison between existing datasets.

  11. A Fully Bayesian Approach to Improved Calibration and Prediction of Groundwater Models With Structure Error

    NASA Astrophysics Data System (ADS)

    Xu, T.; Valocchi, A. J.

    2014-12-01

    Effective water resource management typically relies on numerical models to analyse groundwater flow and solute transport processes. These models are usually subject to model structure error due to simplification and/or misrepresentation of the real system. As a result, the model outputs may systematically deviate from measurements, thus violating a key assumption for traditional regression-based calibration and uncertainty analysis. On the other hand, the bias induced by model structure error can be described statistically in an inductive, data-driven way based on historical model-to-measurement misfit. We adopt a fully Bayesian approach that integrates a Gaussian process error model, which accounts for model structure error, into the calibration, prediction and uncertainty analysis of groundwater models. The posterior distributions of parameters of the groundwater model and the Gaussian process error model are jointly inferred using DREAM, an efficient Markov chain Monte Carlo sampler. We test the usefulness of the fully Bayesian approach on a synthetic case study of surface-groundwater interaction under changing pumping conditions. We first illustrate through this example that traditional least squares regression without accounting for model structure error yields biased parameter estimates due to parameter compensation as well as biased predictions. In contrast, the Bayesian approach gives less biased parameter estimates. Moreover, the integration of a Gaussian process error model significantly reduces predictive bias and leads to prediction intervals that are more consistent with observations. The results highlight the importance of explicit treatment of model structure error, especially in circumstances where subsequent decision-making and risk analysis require accurate prediction and uncertainty quantification. In addition, the data-driven error modelling approach is capable of extracting more information from observation data than using a groundwater model alone.
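
    As a rough illustration of the idea (not the paper's DREAM-based implementation), the sketch below calibrates a deliberately misspecified one-parameter model against synthetic data while a squared-exponential Gaussian process term absorbs the structural bias; a plain Metropolis sampler stands in for DREAM, and all functions and values are hypothetical.

```python
# Minimal sketch (assumptions: a toy 1-parameter "simulator" standing in for
# the groundwater model, a squared-exponential GP for structural bias, and a
# plain Metropolis sampler instead of DREAM).
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 40)
truth = 2.0 * x + 1.5 * np.sin(x)           # real system (has a "structural" term)
data = truth + rng.normal(0.0, 0.3, x.size)

def simulator(theta):
    return theta * x                         # misspecified model: misses the sine term

def log_like(theta, amp, length, sigma):
    resid = data - simulator(theta)
    # GP covariance of the structural-error term plus measurement noise
    d = x[:, None] - x[None, :]
    C = amp**2 * np.exp(-0.5 * (d / length)**2) + sigma**2 * np.eye(x.size)
    sign, logdet = np.linalg.slogdet(C)
    return -0.5 * (resid @ np.linalg.solve(C, resid) + logdet)

# Metropolis over (theta, amp, length, sigma), flat priors on positive values
params = np.array([1.0, 1.0, 1.0, 0.5])
cur = log_like(*params)
chain = []
for _ in range(5000):
    prop = params + rng.normal(0.0, 0.05, 4)
    if np.all(prop[1:] > 0):
        new = log_like(*prop)
        if np.log(rng.uniform()) < new - cur:
            params, cur = prop, new
    chain.append(params.copy())

chain = np.array(chain[2500:])
print("posterior mean of theta with GP error model:", chain[:, 0].mean())
```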

  12. Modeling human response errors in synthetic flight simulator domain

    NASA Technical Reports Server (NTRS)

    Ntuen, Celestine A.

    1992-01-01

    This paper presents a control theoretic approach to modeling human response errors (HRE) in the flight simulation domain. The human pilot is modeled as a supervisor of a highly automated system. The synthesis uses the theory of optimal control pilot modeling for integrating the pilot's observation error and the error due to the simulation model (experimental error). Methods for solving the HRE problem are suggested. Experimental verification of the models will be tested in a flight quality handling simulation.

  13. Error monitoring in musicians

    PubMed Central

    Maidhof, Clemens

    2013-01-01

    To err is human, and hence even professional musicians make errors occasionally during their performances. This paper summarizes recent work investigating error monitoring in musicians, i.e., the processes and their neural correlates associated with the monitoring of ongoing actions and the detection of deviations from intended sounds. Electroencephalography (EEG) studies reported an early component of the event-related potential (ERP) occurring before the onsets of pitch errors. This component, which can be altered in musicians with focal dystonia, likely reflects processes of error detection and/or error compensation, i.e., attempts to cancel the undesired sensory consequence (a wrong tone) a musician is about to perceive. Thus, auditory feedback seems not to be a prerequisite for error detection, consistent with previous behavioral results. In contrast, when auditory feedback is externally manipulated and thus unexpected, motor performance can be severely distorted, although not all feedback alterations result in performance impairments. Recent studies investigating the neural correlates of feedback processing showed that unexpected feedback elicits an ERP component after note onsets, which shows larger amplitudes during music performance than during mere perception of the same musical sequences. Hence, these results stress the role of motor actions for the processing of auditory information. Furthermore, recent methodological advances like the combination of 3D motion capture techniques with EEG will be discussed. Such combinations of different measures can potentially help to disentangle the roles of different feedback types such as proprioceptive and auditory feedback, and in general to derive at a better understanding of the complex interactions between the motor and auditory domain during error monitoring. Finally, outstanding questions and future directions in this context will be discussed. PMID:23898255

  14. Errata: Papers in Error Analysis.

    ERIC Educational Resources Information Center

    Svartvik, Jan, Ed.

    Papers presented at the symposium of error analysis in Lund, Sweden, in September 1972, approach error analysis specifically in its relation to foreign language teaching and second language learning. Error analysis is defined as having three major aspects: (1) the description of the errors, (2) the explanation of errors by means of contrastive…

  15. Soft Error Vulnerability of Iterative Linear Algebra Methods

    SciTech Connect

    Bronevetsky, G; de Supinski, B

    2008-01-19

    Devices are increasingly vulnerable to soft errors as their feature sizes shrink. Previously, soft error rates were significant primarily in space and high-atmospheric computing. Modern architectures now use features so small at sufficiently low voltages that soft errors are becoming important even at terrestrial altitudes. Due to their large number of components, supercomputers are particularly susceptible to soft errors. Since many large scale parallel scientific applications use iterative linear algebra methods, the soft error vulnerability of these methods constitutes a large fraction of the applications' overall vulnerability. Many users consider these methods invulnerable to most soft errors since they converge from an imprecise solution to a precise one. However, we show in this paper that iterative methods are vulnerable to soft errors, exhibiting both silent data corruptions and poor ability to detect errors. Further, we evaluate a variety of soft error detection and tolerance techniques, including checkpointing, linear matrix encodings, and residual tracking techniques.
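
    One of the detection techniques mentioned, residual tracking, can be sketched as follows. The sketch makes several assumptions not taken from the paper: a conjugate gradient solver on a random SPD system, an injected bit-flip-like perturbation, and a simple threshold on the gap between the recursive and the explicitly recomputed residual.

```python
# Minimal sketch (assumption: residual tracking means periodically recomputing
# b - A x and comparing it with CG's recursive residual; a large gap flags a
# possible silent data corruption).
import numpy as np

rng = np.random.default_rng(2)
n = 200
G = rng.normal(size=(n, n))
A = G @ G.T + 0.5 * np.eye(n)        # symmetric positive definite test matrix
b = rng.normal(size=n)

x = np.zeros(n)
r = b - A @ x
p = r.copy()
rs = r @ r
for k in range(1, 201):
    Ap = A @ p
    alpha = rs / (p @ Ap)
    x += alpha * p
    r -= alpha * Ap                  # recursive residual: cheap, but drifts
    rs_new = r @ r                   # silently if a bit flip corrupts x

    if k == 100:                     # inject a "soft error" into the solution
        x[10] += 1.0

    if k % 20 == 0:                  # residual tracking: explicit recomputation
        gap = np.linalg.norm((b - A @ x) - r) / np.linalg.norm(b)
        if gap > 1e-8:
            print(f"iteration {k}: possible silent error, gap = {gap:.2e}")

    p = r + (rs_new / rs) * p
    rs = rs_new
```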

  16. Bond additivity corrections for quantum chemistry methods

    SciTech Connect

    C. F. Melius; M. D. Allendorf

    1999-04-01

    In the 1980's, the authors developed a bond-additivity correction procedure for quantum chemical calculations called BAC-MP4, which has proven reliable in calculating the thermochemical properties of molecular species, including radicals as well as stable closed-shell species. New Bond Additivity Correction (BAC) methods have been developed for the G2 method, BAC-G2, as well as for a hybrid DFT/MP2 method, BAC-Hybrid. These BAC methods use a new form of BAC corrections, involving atomic, molecular, and bond-wise additive terms. These terms enable one to treat positive and negative ions as well as neutrals. The BAC-G2 method reduces errors in the G2 method due to nearest-neighbor bonds. The parameters within the BAC-G2 method only depend on atom types. Thus the BAC-G2 method can be used to determine the parameters needed by BAC methods involving lower levels of theory, such as BAC-Hybrid and BAC-MP4. The BAC-Hybrid method should scale well for large molecules. The BAC-Hybrid method uses the differences between the DFT and MP2 as an indicator of the method's accuracy, while the BAC-G2 method uses its internal methods (G1 and G2MP2) to provide an indicator of its accuracy. Indications of the average error as well as worst cases are provided for each of the BAC methods.
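
    The additive form of such corrections can be illustrated with a toy calculation; the atom and bond parameters below are invented for illustration only, not actual BAC-MP4 or BAC-G2 values.

```python
# Minimal sketch (assumptions: illustrative correction parameters and molecule;
# real BAC parameters come from fits to reference thermochemical data).
# Corrected value = raw electronic-structure value + atomic + bond-wise terms.
ATOM_CORR = {"C": 0.30, "H": 0.05, "O": 0.45}             # kcal/mol, hypothetical
BOND_CORR = {("C", "H"): -0.8, ("C", "O"): -1.5, ("H", "O"): -1.1}

def bac_correction(atoms, bonds):
    """Sum of atomic and bond-additive corrections for one molecule."""
    corr = sum(ATOM_CORR[a] for a in atoms)
    corr += sum(BOND_CORR[tuple(sorted(b))] for b in bonds)
    return corr

# Methanol: one C, one O, four H; bonds: three C-H, one C-O, one O-H
atoms = ["C", "O", "H", "H", "H", "H"]
bonds = [("C", "H")] * 3 + [("C", "O"), ("O", "H")]
raw_heat_of_formation = -47.2                             # hypothetical raw value
print("corrected:", raw_heat_of_formation + bac_correction(atoms, bonds))
```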

  17. Error analysis of compensation cutting technique for wavefront error of KH2PO4 crystal.

    PubMed

    Tie, Guipeng; Dai, Yifan; Guan, Chaoliang; Zhu, Dengchao; Song, Bing

    2013-09-20

    Considering the wavefront error of KH(2)PO(4) (KDP) crystal is difficult to control through face fly cutting process because of surface shape deformation during vacuum suction, an error compensation technique based on a spiral turning method is put forward. An in situ measurement device is applied to measure the deformed surface shape after vacuum suction, and the initial surface figure error, which is obtained off-line, is added to the in situ surface shape to obtain the final surface figure to be compensated. Then a three-axis servo technique is utilized to cut the final surface shape. In traditional cutting processes, in addition to common error sources such as the error in the straightness of guide ways, spindle rotation error, and error caused by ambient environment variance, three other errors, the in situ measurement error, position deviation error, and servo-following error, are the main sources affecting compensation accuracy. This paper discusses the effect of these three errors on compensation accuracy and provides strategies to improve the final surface quality. Experimental verification was carried out on one piece of KDP crystal with the size of Φ270 mm×11 mm. After one compensation process, the peak-to-valley value of the transmitted wavefront error dropped from 1.9λ (λ=632.8 nm) to approximately 1/3λ, and the mid-spatial-frequency error does not become worse when the frequency of the cutting tool trajectory is controlled by use of a low-pass filter.

  18. A posteriori error estimates for Maxwell equations

    NASA Astrophysics Data System (ADS)

    Schoeberl, Joachim

    2008-06-01

    Maxwell equations are posed as variational boundary value problems in the function space H(operatorname{curl}) and are discretized by Nedelec finite elements. In Beck et al., 2000, a residual type a posteriori error estimator was proposed and analyzed under certain conditions onto the domain. In the present paper, we prove the reliability of that error estimator on Lipschitz domains. The key is to establish new error estimates for the commuting quasi-interpolation operators recently introduced in J. Schoeberl, Commuting quasi-interpolation operators for mixed finite elements. Similar estimates are required for additive Schwarz preconditioning. To incorporate boundary conditions, we establish a new extension result.

  19. Reduction of Orifice-Induced Pressure Errors

    NASA Technical Reports Server (NTRS)

    Plentovich, Elizabeth B.; Gloss, Blair B.; Eves, John W.; Stack, John P.

    1987-01-01

    Use of porous-plug orifice reduces or eliminates errors, induced by orifice itself, in measuring static pressure on airfoil surface in wind-tunnel experiments. Piece of sintered metal press-fitted into static-pressure orifice so it matches surface contour of model. Porous material reduces orifice-induced pressure error associated with conventional orifice of same or smaller diameter. Also reduces or eliminates additional errors in pressure measurement caused by orifice imperfections. Provides more accurate measurements in regions with very thin boundary layers.

  20. The effects of error augmentation on learning to walk on a narrow balance beam.

    PubMed

    Domingo, Antoinette; Ferris, Daniel P

    2010-10-01

    Error augmentation during training has been proposed as a means to facilitate motor learning due to the human nervous system's reliance on performance errors to shape motor commands. We studied the effects of error augmentation on short-term learning of walking on a balance beam to determine whether it had beneficial effects on motor performance. Four groups of able-bodied subjects walked on a treadmill-mounted balance beam (2.5-cm wide) before and after 30 min of training. During training, two groups walked on the beam with a destabilization device that augmented error (Medium and High Destabilization groups). A third group walked on a narrower beam (1.27-cm) to augment error (Narrow). The fourth group practiced walking on the 2.5-cm balance beam (Wide). Subjects in the Wide group had significantly greater improvements after training than the error augmentation groups. The High Destabilization group had significantly less performance gains than the Narrow group in spite of similar failures per minute during training. In a follow-up experiment, a fifth group of subjects (Assisted) practiced with a device that greatly reduced catastrophic errors (i.e., stepping off the beam) but maintained similar pelvic movement variability. Performance gains were significantly greater in the Wide group than the Assisted group, indicating that catastrophic errors were important for short-term learning. We conclude that increasing errors during practice via destabilization and a narrower balance beam did not improve short-term learning of beam walking. In addition, the presence of qualitatively catastrophic errors seems to improve short-term learning of walking balance.

  1. Generalized phase-shifting algorithms: error analysis and minimization of noise propagation.

    PubMed

    Ayubi, Gastón A; Perciante, César D; Di Martino, J Matías; Flores, Jorge L; Ferrari, José A

    2016-02-20

    Phase shifting is a technique for phase retrieval that requires a series of intensity measurements with certain phase steps. The purpose of the present work is threefold: first we present a new method for generating general phase-shifting algorithms with arbitrarily spaced phase steps. Second, we study the conditions for which the phase-retrieval error due to phase-shift miscalibration can be minimized. Third, we study the phase extraction from interferograms with additive random noise, and deduce the conditions to be satisfied for minimizing the phase-retrieval error. Algorithms with unevenly spaced phase steps are discussed under linear phase-shift errors and additive Gaussian noise, and simulations are presented.
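
    A generalized (arbitrarily spaced) phase-shifting retrieval reduces to a small least-squares problem per pixel. The sketch below is a minimal single-pixel illustration with made-up phase steps and additive Gaussian noise, not the specific algorithms derived in the paper.

```python
# Minimal sketch (assumptions: single-pixel intensities, unevenly spaced phase
# steps, additive Gaussian noise; least-squares retrieval of the phase).
import numpy as np

rng = np.random.default_rng(3)
phi_true = 1.1                                  # unknown phase at this pixel
A, B = 2.0, 1.0                                 # background and modulation
deltas = np.array([0.0, 0.9, 2.1, 3.3, 4.9])    # unevenly spaced phase steps
I = A + B * np.cos(phi_true + deltas) + rng.normal(0.0, 0.01, deltas.size)

# I_k = A + (B cos phi) cos(delta_k) - (B sin phi) sin(delta_k)
D = np.column_stack([np.ones_like(deltas), np.cos(deltas), -np.sin(deltas)])
a0, c, s = np.linalg.lstsq(D, I, rcond=None)[0]
phi_est = np.arctan2(s, c)
print(f"retrieved phase {phi_est:.4f} rad (true {phi_true:.4f})")
```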

  2. Assessment of Aliasing Errors in Low-Degree Coefficients Inferred from GPS Data.

    PubMed

    Wei, Na; Fang, Rongxin

    2016-05-11

    With sparse and uneven site distribution, Global Positioning System (GPS) data is just barely able to infer low-degree coefficients in the surface mass field. The unresolved higher-degree coefficients turn out to introduce aliasing errors into the estimates of low-degree coefficients. To reduce the aliasing errors, the optimal truncation degree should be employed. Using surface displacements simulated from loading models, we theoretically prove that the optimal truncation degree should be degree 6-7 for a GPS inversion and degree 20 for combining GPS and Ocean Bottom Pressure (OBP) with no additional regularization. The optimal truncation degree should be decreased to degree 4-5 for real GPS data. Additionally, we prove that a Scaled Sensitivity Matrix (SSM) approach can be used to quantify the aliasing errors due to any one or any combination of unresolved higher degrees, which is beneficial to identify the major error source from among all the unresolved higher degrees. Results show that the unresolved higher degrees lower than degree 20 are the major error source for global inversion. We also theoretically prove that the SSM approach can be used to mitigate the aliasing errors in a GPS inversion, if the neglected higher degrees are well known from other sources.
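
    The aliasing mechanism can be mimicked with a generic truncated linear inversion. In the sketch below, generic random sensitivity matrices stand in for the actual GPS loading kernels; the error in the retained coefficients is exactly the projection of the unresolved part onto the resolved basis, which is the quantity a scaled-sensitivity-matrix style analysis tracks.

```python
# Minimal sketch (assumptions: a generic linear inversion stands in for the
# GPS load inversion; "low degrees" are the first few basis functions and the
# aliasing of unresolved higher degrees is predicted analytically).
import numpy as np

rng = np.random.default_rng(4)
m, n_low, n_high = 60, 5, 25                   # observations, resolved, unresolved
G_low = rng.normal(size=(m, n_low))            # sensitivity to low degrees
G_high = rng.normal(size=(m, n_high))          # sensitivity to higher degrees
x_low = rng.normal(size=n_low)
x_high = 0.2 * rng.normal(size=n_high)         # smaller but nonzero signal

d = G_low @ x_low + G_high @ x_high            # synthetic displacements

# Truncated inversion: solve for the low degrees only
x_hat = np.linalg.lstsq(G_low, d, rcond=None)[0]

# Aliasing predicted by projecting the unresolved degrees onto the resolved ones
S = np.linalg.pinv(G_low) @ G_high
print("actual estimation error :", x_hat - x_low)
print("predicted aliasing error:", S @ x_high)
```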

  3. Assessment of Aliasing Errors in Low-Degree Coefficients Inferred from GPS Data

    PubMed Central

    Wei, Na; Fang, Rongxin

    2016-01-01

    With sparse and uneven site distribution, Global Positioning System (GPS) data is just barely able to infer low-degree coefficients in the surface mass field. The unresolved higher-degree coefficients turn out to introduce aliasing errors into the estimates of low-degree coefficients. To reduce the aliasing errors, the optimal truncation degree should be employed. Using surface displacements simulated from loading models, we theoretically prove that the optimal truncation degree should be degree 6–7 for a GPS inversion and degree 20 for combining GPS and Ocean Bottom Pressure (OBP) with no additional regularization. The optimal truncation degree should be decreased to degree 4–5 for real GPS data. Additionally, we prove that a Scaled Sensitivity Matrix (SSM) approach can be used to quantify the aliasing errors due to any one or any combination of unresolved higher degrees, which is beneficial to identify the major error source from among all the unresolved higher degrees. Results show that the unresolved higher degrees lower than degree 20 are the major error source for global inversion. We also theoretically prove that the SSM approach can be used to mitigate the aliasing errors in a GPS inversion, if the neglected higher degrees are well known from other sources. PMID:27187392

  4. Error Sensitivity Model.

    DTIC Science & Technology

    1980-04-01

    Philosophy: The Positioning/Error Model has been defined in three distinct phases: I - Error Sensitivity Model; II - Operational Positioning Model; III - ...

  5. Error Free Software

    NASA Technical Reports Server (NTRS)

    1985-01-01

    A mathematical theory for development of "higher order" software to catch computer mistakes resulted from a Johnson Space Center contract for Apollo spacecraft navigation. Two women who were involved in the project formed Higher Order Software, Inc. to develop and market the system of error analysis and correction. They designed software which is logically error-free, which, in one instance, was found to increase productivity by 600%. USE.IT defines its objectives using AXES -- a user can write in English and the system converts to computer languages. It is employed by several large corporations.

  6. Evaluation and error apportionment of an ensemble of atmospheric chemistry transport modeling systems: multivariable temporal and spatial breakdown

    NASA Astrophysics Data System (ADS)

    Solazzo, Efisio; Bianconi, Roberto; Hogrefe, Christian; Curci, Gabriele; Tuccella, Paolo; Alyuz, Ummugulsum; Balzarini, Alessandra; Baró, Rocío; Bellasio, Roberto; Bieser, Johannes; Brandt, Jørgen; Christensen, Jesper H.; Colette, Augistin; Francis, Xavier; Fraser, Andrea; Garcia Vivanco, Marta; Jiménez-Guerrero, Pedro; Im, Ulas; Manders, Astrid; Nopmongcol, Uarporn; Kitwiroon, Nutthida; Pirovano, Guido; Pozzoli, Luca; Prank, Marje; Sokhi, Ranjeet S.; Unal, Alper; Yarwood, Greg; Galmarini, Stefano

    2017-02-01

    development perspective. This will require evaluation methods that are able to frame the impact on error of processes, conditions, and fluxes at the surface. For example, error due to emission and boundary conditions is dominant for primary species (CO, particulate matter (PM)), while errors due to meteorology and chemistry are most relevant to secondary species, such as ozone. Some further aspects emerged whose interpretation requires additional consideration, such as the uniformity of the synoptic error being region- and model-independent, observed for several pollutants; the source of unexplained variance for the diurnal component; and the type of error caused by deposition and at which scale.

  7. Orwell's Instructive Errors

    ERIC Educational Resources Information Center

    Julian, Liam

    2009-01-01

    In this article, the author talks about George Orwell, his instructive errors, and the manner in which Orwell pierced worthless theory, faced facts and defended decency (with fluctuating success), and largely ignored the tradition of accumulated wisdom that has rendered him a timeless teacher--one whose inadvertent lessons, while infrequently…

  8. NLO error propagation exercise: statistical results

    SciTech Connect

    Pack, D.J.; Downing, D.J.

    1985-09-01

    Error propagation is the extrapolation and cumulation of uncertainty (variance) above total amounts of special nuclear material, for example, uranium or /sup 235/U, that are present in a defined location at a given time. The uncertainty results from the inevitable inexactness of individual measurements of weight, uranium concentration, /sup 235/U enrichment, etc. The extrapolated and cumulated uncertainty leads directly to quantified limits of error on inventory differences (LEIDs) for such material. The NLO error propagation exercise was planned as a field demonstration of the utilization of statistical error propagation methodology at the Feed Materials Production Center in Fernald, Ohio from April 1 to July 1, 1983 in a single material balance area formed specially for the exercise. Major elements of the error propagation methodology were: variance approximation by Taylor Series expansion; variance cumulation by uncorrelated primary error sources as suggested by Jaech; random effects ANOVA model estimation of variance effects (systematic error); provision for inclusion of process variance in addition to measurement variance; and exclusion of static material. The methodology was applied to material balance area transactions from the indicated time period through a FORTRAN computer code developed specifically for this purpose on the NLO HP-3000 computer. This paper contains a complete description of the error propagation methodology and a full summary of the numerical results of applying the methodology in the field demonstration. The error propagation LEIDs did encompass the actual uranium and /sup 235/U inventory differences. Further, one can see that error propagation actually provides guidance for reducing inventory differences and LEIDs in future time periods.
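
    The core of the variance-propagation step is a first-order Taylor expansion of the measurement equation. The sketch below applies it to a hypothetical product-form measurement model with illustrative numbers, and then cumulates uncorrelated item variances as the exercise did; none of the values are taken from the report.

```python
# Minimal sketch (assumptions: a single product-form measurement model,
# uncorrelated error sources, illustrative numbers; first-order Taylor-series
# variance propagation of the kind used in LEID calculations).
import numpy as np

# item 235U mass = net weight * U concentration * 235U enrichment
w, c, e = 120.0, 0.85, 0.0072              # kg, fraction, fraction
s_w, s_c, s_e = 0.3, 0.004, 0.00005        # standard deviations of each factor

m = w * c * e

# First-order Taylor expansion of f(w, c, e) = w*c*e about the measured values:
# var(m) ~ (df/dw)^2 var(w) + (df/dc)^2 var(c) + (df/de)^2 var(e)
var_m = (c * e * s_w) ** 2 + (w * e * s_c) ** 2 + (w * c * s_e) ** 2
print(f"mass = {m:.4f} kg, propagated sigma = {np.sqrt(var_m):.4f} kg")

# Cumulation over uncorrelated items: variances add
items = [(120.0, 0.3), (95.0, 0.25), (140.0, 0.35)]   # (mass, sigma) per item
total_sigma = np.sqrt(sum(s ** 2 for _, s in items))
print(f"limit of error on the summed inventory ~ 2*sigma = {2 * total_sigma:.3f}")
```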

  9. Report of the Subpanel on Error Characterization and Error Budgets

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The state of knowledge of both user positioning requirements and error models of current and proposed satellite systems is reviewed. In particular the error analysis models for LANDSAT D are described. Recommendations are given concerning the geometric error model for the thematic mapper; interactive user involvement in system error budgeting and modeling and verification on real data sets; and the identification of a strawman mission for modeling key error sources.

  10. Bond additivity corrections for quantum chemistry methods

    SciTech Connect

    Melius, C.F.; Allendorf, M.D.

    2000-03-23

    New bond additivity correction (BAC) methods have been developed for the G2 method, BAC-G2, as well as for a hybrid density functional theory (DFT) Moller-Plesset (MP)2 method, BAC-hybrid. These BAC methods use a new form of BAC corrections, involving atomic, molecular, and bond-wise additive terms. These terms enable one to treat positive and negative ions as well as neutrals. The BAC-G2 method reduces errors in the G2 method due to nearest-neighbor bonds. The parameters within the BAC-G2 method only depend on atom types. Thus the BAC-G2 method can be used to determine the parameters needed by BAC methods involving lower levels of theory, such as BAC-hybrid and BAC-MP4. The BAC-hybrid method is expected to scale well for large molecules. The BAC-hybrid method uses the differences between the DFT and MP2 predictions as an indication of the method's accuracy, whereas the BAC-G2 method uses its internal methods (G1 and G2MP2) to accomplish this. A statistical analysis of the error in each of the methods is presented on the basis of calculations performed for large sets (more than 120) of molecules.

  11. A Review of the Literature on Computational Errors With Whole Numbers. Mathematics Education Diagnostic and Instructional Centre (MEDIC).

    ERIC Educational Resources Information Center

    Burrows, J. K.

    Research on error patterns associated with whole number computation is reviewed. Details of the results of some of the individual studies cited are given in the appendices. In Appendix A, 33 addition errors, 27 subtraction errors, 41 multiplication errors, and 41 division errors are identified, and the frequency of these errors made by 352…

  12. Target Uncertainty Mediates Sensorimotor Error Correction

    PubMed Central

    Acerbi, Luigi; Vijayakumar, Sethu; Wolpert, Daniel M.

    2017-01-01

    Human movements are prone to errors that arise from inaccuracies in both our perceptual processing and execution of motor commands. We can reduce such errors by both improving our estimates of the state of the world and through online error correction of the ongoing action. Two prominent frameworks that explain how humans solve these problems are Bayesian estimation and stochastic optimal feedback control. Here we examine the interaction between estimation and control by asking if uncertainty in estimates affects how subjects correct for errors that may arise during the movement. Unbeknownst to participants, we randomly shifted the visual feedback of their finger position as they reached to indicate the center of mass of an object. Even though participants were given ample time to compensate for this perturbation, they only fully corrected for the induced error on trials with low uncertainty about center of mass, with correction only partial in trials involving more uncertainty. The analysis of subjects’ scores revealed that participants corrected for errors just enough to avoid significant decrease in their overall scores, in agreement with the minimal intervention principle of optimal feedback control. We explain this behavior with a term in the loss function that accounts for the additional effort of adjusting one’s response. By suggesting that subjects’ decision uncertainty, as reflected in their posterior distribution, is a major factor in determining how their sensorimotor system responds to error, our findings support theoretical models in which the decision making and control processes are fully integrated. PMID:28129323

  13. Target Uncertainty Mediates Sensorimotor Error Correction.

    PubMed

    Acerbi, Luigi; Vijayakumar, Sethu; Wolpert, Daniel M

    2017-01-01

    Human movements are prone to errors that arise from inaccuracies in both our perceptual processing and execution of motor commands. We can reduce such errors by both improving our estimates of the state of the world and through online error correction of the ongoing action. Two prominent frameworks that explain how humans solve these problems are Bayesian estimation and stochastic optimal feedback control. Here we examine the interaction between estimation and control by asking if uncertainty in estimates affects how subjects correct for errors that may arise during the movement. Unbeknownst to participants, we randomly shifted the visual feedback of their finger position as they reached to indicate the center of mass of an object. Even though participants were given ample time to compensate for this perturbation, they only fully corrected for the induced error on trials with low uncertainty about center of mass, with correction only partial in trials involving more uncertainty. The analysis of subjects' scores revealed that participants corrected for errors just enough to avoid significant decrease in their overall scores, in agreement with the minimal intervention principle of optimal feedback control. We explain this behavior with a term in the loss function that accounts for the additional effort of adjusting one's response. By suggesting that subjects' decision uncertainty, as reflected in their posterior distribution, is a major factor in determining how their sensorimotor system responds to error, our findings support theoretical models in which the decision making and control processes are fully integrated.

  14. Automatic Error Analysis Using Intervals

    ERIC Educational Resources Information Center

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
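
    The interval approach can be sketched in a few lines of code. The tiny interval type below is a stand-in for INTLAB, uses naive (possibly loose) enclosures, and the measured values are invented.

```python
# Minimal sketch (assumption: a hand-rolled interval type instead of INTLAB;
# enclosures are deliberately kept simple).
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float
    def __add__(self, o): return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o): return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(p), max(p))
    def width(self): return self.hi - self.lo

# Measured quantities with their uncertainties as intervals
R = Interval(99.8, 100.2)     # resistance, ohms
I = Interval(0.495, 0.505)    # current, amps

P = I * I * R                 # power dissipated, P = I^2 R
print(f"P in [{P.lo:.3f}, {P.hi:.3f}] W, worst-case half-width {P.width()/2:.3f} W")
```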

  15. Control by model error estimation

    NASA Technical Reports Server (NTRS)

    Likins, P. W.; Skelton, R. E.

    1976-01-01

    Modern control theory relies upon the fidelity of the mathematical model of the system. Truncated modes, external disturbances, and parameter errors in linear system models are corrected by augmenting to the original system of equations an 'error system' which is designed to approximate the effects of such model errors. A Chebyshev error system is developed for application to the Large Space Telescope (LST).

  16. Imagery of Errors in Typing

    ERIC Educational Resources Information Center

    Rieger, Martina; Martinez, Fanny; Wenke, Dorit

    2011-01-01

    Using a typing task we investigated whether insufficient imagination of errors and error corrections is related to duration differences between execution and imagination. In Experiment 1 spontaneous error imagination was investigated, whereas in Experiment 2 participants were specifically instructed to imagine errors. Further, in Experiment 2 we…

  17. Speech Errors across the Lifespan

    ERIC Educational Resources Information Center

    Vousden, Janet I.; Maylor, Elizabeth A.

    2006-01-01

    Dell, Burger, and Svec (1997) proposed that the proportion of speech errors classified as anticipations (e.g., "moot and mouth") can be predicted solely from the overall error rate, such that the greater the error rate, the lower the anticipatory proportion (AP) of errors. We report a study examining whether this effect applies to changes in error…

  18. Interpolation Errors in Thermistor Calibration Equations

    NASA Astrophysics Data System (ADS)

    White, D. R.

    2017-04-01

    Thermistors are widely used temperature sensors capable of measurement uncertainties approaching those of standard platinum resistance thermometers. However, the extreme nonlinearity of thermistors means that complicated calibration equations are required to minimize the effects of interpolation errors and achieve low uncertainties. This study investigates the magnitude of interpolation errors as a function of temperature range and the number of terms in the calibration equation. Approximation theory is used to derive an expression for the interpolation error and indicates that the temperature range and the number of terms in the calibration equation are the key influence variables. Numerical experiments based on published resistance-temperature data confirm these conclusions and additionally give guidelines on the maximum and minimum interpolation error likely to occur for a given temperature range and number of terms in the calibration equation.
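
    A minimal numerical experiment in the same spirit, with a Steinhart-Hart-type relation (illustrative coefficients) acting as the "true" thermistor and a polynomial in ln R for 1/T as the calibration equation, shows how the interpolation error shrinks as terms are added. The coefficients and ranges are assumptions, not the paper's data.

```python
# Minimal sketch (assumptions: illustrative Steinhart-Hart-type coefficients;
# calibration equation 1/T = a0 + a1*ln(R) + a2*ln(R)^2 + ..., least-squares
# fitted to a few calibration points).
import numpy as np

A, B, C = 1.129e-3, 2.341e-4, 8.775e-8               # illustrative coefficients
R = np.logspace(np.log10(2e3), np.log10(30e3), 400)  # ohms, roughly 0-60 C
lnR = np.log(R)
T_true = 1.0 / (A + B * lnR + C * lnR**3)            # kelvin

cal_idx = np.linspace(0, R.size - 1, 8).astype(int)  # 8 calibration points

for n_terms in (2, 3, 4):
    coeffs = np.polyfit(lnR[cal_idx], 1.0 / T_true[cal_idx], n_terms - 1)
    T_fit = 1.0 / np.polyval(coeffs, lnR)
    err_mK = np.max(np.abs(T_fit - T_true)) * 1000.0
    print(f"{n_terms}-term calibration equation: max interpolation error "
          f"~ {err_mK:.2f} mK")
```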

  19. TOA/FOA geolocation error analysis.

    SciTech Connect

    Mason, John Jeffrey

    2008-08-01

    This paper describes how confidence intervals can be calculated for radiofrequency emitter position estimates based on time-of-arrival and frequency-of-arrival measurements taken at several satellites. These confidence intervals take the form of 50th and 95th percentile circles and ellipses to convey horizontal error and linear intervals to give vertical error. We consider both cases where an assumed altitude is and is not used. Analysis of velocity errors is also considered. We derive confidence intervals for horizontal velocity magnitude and direction including the case where the emitter velocity is assumed to be purely horizontal, i.e., parallel to the ellipsoid. Additionally, we derive an algorithm that we use to combine multiple position fixes to reduce location error. The algorithm uses all available data, after more than one location estimate for an emitter has been made, in a mathematically optimal way.
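
    A standard way to combine independent fixes "using all available data" is inverse-covariance (BLUE) fusion. The sketch below uses that textbook rule with invented fixes and covariances, and may well differ from the report's exact algorithm.

```python
# Minimal sketch (assumption: optimal combination taken here as the standard
# inverse-covariance fusion of independent position fixes).
import numpy as np

# Three independent horizontal position fixes (east, north in km) and their
# 2x2 error covariances
fixes = [np.array([10.2, -3.1]), np.array([9.6, -2.7]), np.array([10.5, -3.4])]
covs = [np.diag([4.0, 1.0]),
        np.diag([1.0, 4.0]),
        np.array([[2.0, 0.5], [0.5, 2.0]])]

info = sum(np.linalg.inv(C) for C in covs)           # summed information matrices
x_comb = np.linalg.solve(info, sum(np.linalg.inv(C) @ x
                                   for C, x in zip(covs, fixes)))
cov_comb = np.linalg.inv(info)

# 95th-percentile error ellipse semi-axes (chi-square quantile, 2 dof = 5.991)
axes95 = np.sqrt(5.991 * np.linalg.eigvalsh(cov_comb))
print("combined fix:", x_comb)
print("95% error-ellipse semi-axes (km):", axes95)
```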

  20. Antenna trajectory error analysis in backprojection-based SAR images

    NASA Astrophysics Data System (ADS)

    Wang, Ling; Yazıcı, Birsen; Yanik, H. Cagri

    2014-06-01

    We present an analysis of the positioning errors in Backprojection (BP)-based Synthetic Aperture Radar (SAR) images due to antenna trajectory errors for a monostatic SAR traversing a straight linear trajectory. Our analysis is developed using microlocal analysis, which can provide an explicit quantitative relationship between the trajectory error and the positioning error in BP-based SAR images. The analysis is applicable to arbitrary trajectory errors in the antenna and can be extended to arbitrary imaging geometries. We present numerical simulations to demonstrate our analysis.

  1. Locked modes and magnetic field errors in MST

    SciTech Connect

    Almagri, A.F.; Assadi, S.; Prager, S.C.; Sarff, J.S.; Kerst, D.W.

    1992-06-01

    In the MST reversed field pinch magnetic oscillations become stationary (locked) in the lab frame as a result of a process involving interactions between the modes, sawteeth, and field errors. Several helical modes become phase locked to each other to form a rotating localized disturbance, the disturbance locks to an impulsive field error generated at a sawtooth crash, the error fields grow monotonically after locking (perhaps due to an unstable interaction between the modes and field error), and over the tens of milliseconds of growth confinement degrades and the discharge eventually terminates. Field error control has been partially successful in eliminating locking.

  2. Hyponatremia: management errors.

    PubMed

    Seo, Jang Won; Park, Tae Jin

    2006-11-01

    Rapid correction of hyponatremia is frequently associated with increased morbidity and mortality. Therefore, it is important to estimate the proper volume and type of infusate required to increase the serum sodium concentration predictably. The major common management errors during the treatment of hyponatremia are inadequate investigation, treatment with fluid restriction for diuretic-induced hyponatremia and treatment with fluid restriction plus intravenous isotonic saline simultaneously. We present two cases of management errors. One is about the problem of rapid correction of hyponatremia in a patient with sepsis and acute renal failure during continuous renal replacement therapy in the intensive care unit. The other is the case of hypothyroidism in which hyponatremia was aggravated by intravenous infusion of dextrose water and isotonic saline infusion was erroneously used to increase serum sodium concentration.

  3. Error-Free Software

    NASA Technical Reports Server (NTRS)

    1989-01-01

    001 is an integrated tool suited for automatically developing ultra reliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer aided software engineering product in the industry to concentrate on automatically supporting the development of an ultrareliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.

  4. Error-correction coding

    NASA Technical Reports Server (NTRS)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.
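
    For orientation, the overall rate of such a concatenated scheme is simply the product of the outer and inner code rates; the inner rate below is an assumed value, since the study only identifies candidate inner codes.

```python
# Minimal sketch (assumptions: RS(255,223) outer code as in the report and a
# hypothetical rate-1/2 inner modulation/block code).
outer_n, outer_k = 255, 223          # Reed-Solomon outer code (symbols)
inner_rate = 1 / 2                   # assumed inner code rate

outer_rate = outer_k / outer_n
overall_rate = outer_rate * inner_rate
overhead = 1 / overall_rate - 1      # extra channel bits per information bit

print(f"outer rate {outer_rate:.3f}, overall rate {overall_rate:.3f}, "
      f"{overhead:.2f} redundant channel bits per information bit")
```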

  5. Surface temperature measurement errors

    SciTech Connect

    Keltner, N.R.; Beck, J.V.

    1983-05-01

    Mathematical models are developed for the response of surface mounted thermocouples on a thick wall. These models account for the significant causes of errors in both the transient and steady-state response to changes in the wall temperature. In many cases, closed form analytical expressions are given for the response. The cases for which analytical expressions are not obtained can be easily evaluated on a programmable calculator or a small computer.

  6. Bayesian Error Estimation Functionals

    NASA Astrophysics Data System (ADS)

    Jacobsen, Karsten W.

    The challenge of approximating the exchange-correlation functional in Density Functional Theory (DFT) has led to the development of numerous different approximations of varying accuracy on different calculated properties. There is therefore a need for reliable estimation of prediction errors within the different approximation schemes to DFT. The Bayesian Error Estimation Functionals (BEEF) have been developed with this in mind. The functionals are constructed by fitting to experimental and high-quality computational databases for molecules and solids including chemisorption and van der Waals systems. This leads to reasonably accurate general-purpose functionals with particular focus on surface science. The fitting procedure involves considerations on how to combine different types of data, and applies Tikhonov regularization and bootstrap cross validation. The methodology has been applied to construct GGA and metaGGA functionals with and without inclusion of long-ranged van der Waals contributions. The error estimation is made possible by the generation of not only a single functional but through the construction of a probability distribution of functionals represented by a functional ensemble. The use of the functional ensemble is illustrated on compound heat of formation and by investigations of the reliability of calculated catalytic ammonia synthesis rates.
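
    The ensemble idea can be sketched independently of DFT: the toy below fits a Tikhonov-regularized linear model to bootstrap resamples of synthetic data and uses the spread of the resulting ensemble predictions as the error estimate. It only illustrates the construction in spirit; it is not the BEEF fitting procedure, and all data are invented.

```python
# Minimal sketch (assumptions: ridge/Tikhonov regression on toy data stands in
# for the exchange-correlation fit; ensemble spread used as the error estimate).
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(80, 10))                    # "descriptors" of training systems
w_true = rng.normal(size=10)
y = X @ w_true + rng.normal(0.0, 0.5, 80)        # "reference" energies

lam = 1.0                                        # Tikhonov regularization strength
def ridge_fit(Xb, yb):
    return np.linalg.solve(Xb.T @ Xb + lam * np.eye(Xb.shape[1]), Xb.T @ yb)

# Bootstrap ensemble of fitted parameter vectors ("ensemble of functionals")
ensemble = []
for _ in range(200):
    idx = rng.integers(0, X.shape[0], X.shape[0])
    ensemble.append(ridge_fit(X[idx], y[idx]))
ensemble = np.array(ensemble)

x_new = rng.normal(size=10)                      # a new system to predict
preds = ensemble @ x_new
print(f"prediction {preds.mean():.3f} +/- {preds.std():.3f} (ensemble error estimate)")
```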

  7. Precise accounting of bit errors in floating-point computations

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.

    2009-08-01

    Floating-point computation generates errors at the bit level through four processes, namely, overflow, underflow, truncation, and rounding. Overflow and underflow can be detected electronically, and represent systematic errors that are not of interest in this study. Truncation occurs during shifting toward the least-significant bit (herein called right-shifting), and rounding error occurs at the least significant bit. Such errors are not easy to track precisely using published means. Statistical error propagation theory typically yields conservative estimates that are grossly inadequate for deep computational cascades. Forward error analysis theory developed for image and signal processing or matrix operations can yield a more realistic typical case, but the error of the estimate tends to be high in relationship to the estimated error. In this paper, we discuss emerging technology for forward error analysis, which allows an algorithm designer to precisely estimate the output error of a given operation within a computational cascade, under a prespecified set of constraints on input error and computational precision. This technique, called bit accounting, precisely tracks the number of rounding and truncation errors in each bit position of interest to the algorithm designer. Because all errors associated with specific bit positions are tracked, and because integer addition only is involved in error estimation, the error of the estimate is zero. The technique of bit accounting is evaluated for its utility in image and signal processing. Complexity analysis emphasizes the relationship between the work and space estimates of the algorithm being analyzed, and its error estimation algorithm. Because of the significant overhead involved in error representation, it is shown that bit accounting is less useful for real-time error estimation, but is well suited to analysis in support of algorithm design.
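
    A much-simplified illustration of counting truncation events in a fixed-point multiply-and-shift cascade follows; it is a toy, not the paper's per-bit-position accounting, and all constants are invented.

```python
# Minimal sketch (assumption: a toy fixed-point accumulation; we count how
# often right-shifting drops nonzero bits, i.e., truncation events, and track
# the accumulated error exactly).
import random

random.seed(0)
FRAC_BITS = 12                                  # fixed-point fractional bits

def to_fixed(x):  return int(round(x * (1 << FRAC_BITS)))
def to_float(q):  return q / (1 << FRAC_BITS)

truncations = 0
acc = 0
exact = 0.0
c = to_fixed(0.37)                              # fixed-point constant
for _ in range(1000):
    q = to_fixed(random.uniform(0.0, 1.0))
    # Multiply, then right-shift back to scale; truncation occurs whenever
    # nonzero bits are shifted out.
    prod = q * c
    if prod & ((1 << FRAC_BITS) - 1):
        truncations += 1
    acc += prod >> FRAC_BITS
    exact += to_float(q) * to_float(c)          # same operands, no truncation

print(f"truncation events : {truncations} of 1000 multiplies")
print(f"accumulated error : {exact - to_float(acc):.6f}")
```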

  8. Horizon sensor errors calculated by computer models compared with errors measured in orbit

    NASA Technical Reports Server (NTRS)

    Ward, K. A.; Hogan, R.; Andary, J.

    1982-01-01

    Using a computer program to model the earth's horizon and to duplicate the signal processing procedure employed by the ESA (Earth Sensor Assembly), errors due to radiance variation have been computed for a particular time of the year. Errors actually occurring in flight at the same time of year are inferred from integrated rate gyro data for a satellite of the TIROS series of NASA weather satellites (NOAA-A). The predicted performance is compared with actual flight history.

  9. Analysis of discretization errors in LES

    NASA Technical Reports Server (NTRS)

    Ghosal, Sandip

    1995-01-01

    All numerical simulations of turbulence (DNS or LES) involve some discretization errors. The integrity of such simulations therefore depends on our ability to quantify and control such errors. In the classical literature on analysis of errors in partial differential equations, one typically studies simple linear equations (such as the wave equation or Laplace's equation). The qualitative insight gained from studying such simple situations is then used to design numerical methods for more complex problems such as the Navier-Stokes equations. Though such an approach may seem reasonable as a first approximation, it should be recognized that strongly nonlinear problems, such as turbulence, have a feature that is absent in linear problems. This feature is the simultaneous presence of a continuum of space and time scales. Thus, in an analysis of errors in the one dimensional wave equation, one may, without loss of generality, rescale the equations so that the dependent variable is always of order unity. This is not possible in the turbulence problem since the amplitudes of the Fourier modes of the velocity field have a continuous distribution. The objective of the present research is to provide some quantitative measures of numerical errors in such situations. Though the focus of this work is LES, the methods introduced here can be just as easily applied to DNS. Errors due to discretization of the time-variable are neglected for the purpose of this analysis.
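
    One concrete, quantitative measure of discretization error across a continuum of scales is the modified wavenumber of a finite-difference scheme. The sketch below evaluates it for the second-order central difference on a uniform grid; it is a generic example, not taken from the paper.

```python
# Minimal sketch (assumption: second-order central difference on a uniform,
# periodic grid; the modified wavenumber shows how the derivative error grows
# with wavenumber, i.e., differs across scales).
import numpy as np

N = 64
h = 2 * np.pi / N
k = np.arange(1, N // 2)                 # resolved wavenumbers on the grid

k_modified = np.sin(k * h) / h           # effective wavenumber of (u[j+1]-u[j-1])/(2h)
rel_error = np.abs(k_modified - k) / k

for frac in (0.25, 0.5, 0.9):
    i = int(frac * (N // 2)) - 1
    print(f"k = {k[i]:2d} ({frac:.0%} of Nyquist): "
          f"relative derivative error = {rel_error[i]:.3f}")
```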

  10. Evaluating operating system vulnerability to memory errors.

    SciTech Connect

    Ferreira, Kurt Brian; Bridges, Patrick G.; Pedretti, Kevin Thomas Tauke; Mueller, Frank; Fiala, David; Brightwell, Ronald Brian

    2012-05-01

    Reliability is of great concern to the scalability of extreme-scale systems. Of particular concern are soft errors in main memory, which are a leading cause of failures on current systems and are predicted to be the leading cause on future systems. While great effort has gone into designing algorithms and applications that can continue to make progress in the presence of these errors without restarting, the most critical software running on a node, the operating system (OS), is currently left relatively unprotected. OS resiliency is of particular importance because, though this software typically represents a small footprint of a compute node's physical memory, recent studies show more memory errors in this region of memory than the remainder of the system. In this paper, we investigate the soft error vulnerability of two operating systems used in current and future high-performance computing systems: Kitten, the lightweight kernel developed at Sandia National Laboratories, and CLE, a high-performance Linux-based operating system developed by Cray. For each of these platforms, we outline major structures and subsystems that are vulnerable to soft errors and describe methods that could be used to reconstruct damaged state. Our results show the Kitten lightweight operating system may be an easier target to harden against memory errors due to its smaller memory footprint, largely deterministic state, and simpler system structure.

  11. Transient design of landfill liquid addition systems.

    PubMed

    Jain, Pradeep; Townsend, Timothy G; Tolaymat, Thabet M

    2014-09-01

    This study presents the development of design charts that can be used to estimate lateral and vertical spacing of liquids addition devices (e.g., vertical well, horizontal trenches) and the operating duration needed for transient operating conditions (conditions until steady-state operating conditions are achieved). These design charts should be used in conjunction with steady-state design charts published earlier by Jain et al. (2010a, 2010b). The data suggest that the liquids addition system operating time can be significantly reduced by utilizing moderately closer spacing between liquids addition devices than the spacing needed for steady-state conditions. These design charts can be used by designers to readily estimate achievable flow rate and lateral and vertical extents of the zone of impact from liquid addition devices, and analyze the sensitivity of various input variables (e.g., hydraulic conductivity, anisotropy, well radius, screen length) to the design. The applicability of the design charts, which are developed based on simulations of a continuously operated system, was also evaluated for the design of a system that would be operated intermittently (e.g., systems only operated during facility operating hours). The design charts somewhat underestimate the flow rate achieved and overestimate the lateral extent of the zone of impact over an operating duration for an intermittently operated system. The associated estimation errors would be smaller than the margin of error associated with measurement of other key design inputs such as waste properties (e.g., hydraulic conductivity) and wider variation of these properties at a given site due to the heterogeneous nature of waste.

  12. Study for compensation of unexpected image placement error caused by VSB mask writer deflector

    NASA Astrophysics Data System (ADS)

    Lee, Hyun-joo; Choi, Min-kyu; Moon, Seong-yong; Cho, Han-Ku; Doh, Jonggul; Ahn, Jinho

    2012-11-01

    The Electron Optical System (EOS) is designed for an electron beam machine employing a vector-scanned variable shaped beam (VSB) with a deflector. Most VSB systems utilize a multi-stage deflection architecture to obtain high precision and high-speed deflection at the same time. Many companies use VSB mask writers and have extensive experience with Image Placement (IP) errors arising from a contaminated EOS deflector; most VSB mask writer users already encounter this error. To keep older VSB mask writers in use, we introduce a method for compensating the unexpected IP error from the VSB mask writer. There are two ways to mitigate this error due to the contaminated deflector: the first is a 2nd-stage grid correction in addition to the original stage grid, and the second is the use of an uncontaminated area of the deflector. According to the results of this paper, 30% of the IP error can be reduced by the 2nd-stage grid correction and the change of the deflection area in the deflector. This is an effective way to reduce the deflector error in the VSB mask writer, and it can be one solution for the long-term production of photomasks.

  13. Understanding error generation in fused deposition modeling

    NASA Astrophysics Data System (ADS)

    Bochmann, Lennart; Bayley, Cindy; Helu, Moneer; Transchel, Robert; Wegener, Konrad; Dornfeld, David

    2015-03-01

    Additive manufacturing offers completely new possibilities for the manufacturing of parts. The advantages of flexibility and convenience of additive manufacturing have had a significant impact on many industries, and optimizing part quality is crucial for expanding its utilization. This research aims to determine the sources of imprecision in fused deposition modeling (FDM). Process errors in terms of surface quality, accuracy and precision are identified and quantified, and an error-budget approach is used to characterize errors of the machine tool. It was determined that accuracy and precision in the y direction (0.08-0.30 mm) are generally greater than in the x direction (0.12-0.62 mm) and the z direction (0.21-0.57 mm). Furthermore, accuracy and precision tend to decrease at increasing axis positions. The results of this work can be used to identify possible process improvements in the design and control of FDM technology.

  14. Counting OCR errors in typeset text

    NASA Astrophysics Data System (ADS)

    Sandberg, Jonathan S.

    1995-03-01

    Frequently, object recognition accuracy is a key component in the performance analysis of pattern matching systems. In the past three years, the results of numerous excellent and rigorous studies of OCR system typeset-character accuracy (henceforth OCR accuracy) have been published, encouraging performance comparisons between a variety of OCR products and technologies. These published figures are important; OCR vendor advertisements in the popular trade magazines lead readers to believe that published OCR accuracy figures affect market share in the lucrative OCR market. Curiously, a detailed review of many of these OCR error occurrence counting results reveals that they are not reproducible as published and they are not strictly comparable due to larger variances in the counts than would be expected by the sampling variance. Naturally, since OCR accuracy is based on a ratio of the number of OCR errors over the size of the text searched for errors, imprecise OCR error accounting leads to similar imprecision in OCR accuracy. Some published papers use informal, non-automatic, or intuitively correct OCR error accounting. Still other published results present OCR error accounting methods based on string matching algorithms such as dynamic programming using Levenshtein (edit) distance but omit critical implementation details (such as the existence of suspect markers in the OCR generated output or the weights used in the dynamic programming minimization procedure). The problem with not specifically revealing the accounting method is that the numbers of errors found by different methods differ significantly. This paper identifies the basic accounting methods used to measure OCR errors in typeset text and offers an evaluation and comparison of the various accounting methods.
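
    A bare-bones version of the dynamic-programming accounting discussed above, with unit weights and no handling of suspect markers (precisely the implementation details the paper notes can change the counts); the example strings are invented.

```python
# Minimal sketch (assumptions: unit weights for insertions, deletions and
# substitutions; real OCR accounting methods differ in weights and in how
# suspect markers are treated).
def count_ocr_errors(truth: str, ocr: str) -> int:
    """Levenshtein distance between ground truth and OCR output."""
    prev = list(range(len(ocr) + 1))
    for i, t in enumerate(truth, start=1):
        cur = [i]
        for j, o in enumerate(ocr, start=1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (t != o)))     # substitution / match
        prev = cur
    return prev[-1]

truth = "Frequently object recognition accuracy is a key component."
ocr   = "Frequcntly obiect recognition accuracy is a kev cornponent."
errors = count_ocr_errors(truth, ocr)
print(f"{errors} errors in {len(truth)} characters -> "
      f"accuracy {100 * (1 - errors / len(truth)):.1f}%")
```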

  15. Neutron multiplication error in TRU waste measurements

    SciTech Connect

    Veilleux, John; Stanfield, Sean B; Wachter, Joe; Ceo, Bob

    2009-01-01

    Total Measurement Uncertainty (TMU) in neutron assays of transuranic waste (TRU) are comprised of several components including counting statistics, matrix and source distribution, calibration inaccuracy, background effects, and neutron multiplication error. While a minor component for low plutonium masses, neutron multiplication error is often the major contributor to the TMU for items containing more than 140 g of weapons grade plutonium. Neutron multiplication arises when neutrons from spontaneous fission and other nuclear events induce fissions in other fissile isotopes in the waste, thereby multiplying the overall coincidence neutron response in passive neutron measurements. Since passive neutron counters cannot differentiate between spontaneous and induced fission neutrons, multiplication can lead to positive bias in the measurements. Although neutron multiplication can only result in a positive bias, it has, for the purpose of mathematical simplicity, generally been treated as an error that can lead to either a positive or negative result in the TMU. While the factors that contribute to neutron multiplication include the total mass of fissile nuclides, the presence of moderating material in the matrix, the concentration and geometry of the fissile sources, and other factors; measurement uncertainty is generally determined as a function of the fissile mass in most TMU software calculations because this is the only quantity determined by the passive neutron measurement. Neutron multiplication error has a particularly pernicious consequence for TRU waste analysis because the measured Fissile Gram Equivalent (FGE) plus twice the TMU error must be less than 200 for TRU waste packaged in 55-gal drums and less than 325 for boxed waste. For this reason, large errors due to neutron multiplication can lead to increased rejections of TRU waste containers. This report will attempt to better define the error term due to neutron multiplication and arrive at values that are

  16. (Errors in statistical tests)3.

    PubMed

    Phillips, Carl V; MacLehose, Richard F; Kaufman, Jay S

    2008-07-14

    departure from uniformity, not just its test statistics. We found variation in digit frequencies in the additional data and describe the distinctive pattern of these results. Furthermore, we found that the combined data diverge unambiguously from a uniform distribution. The explanation for this divergence seems unlikely to be that suggested by the previous authors: errors in calculations and transcription.

  17. Wavefront error sensing

    NASA Technical Reports Server (NTRS)

    Tubbs, Eldred F.

    1986-01-01

    A two-step approach to wavefront sensing for the Large Deployable Reflector (LDR) was examined as part of an effort to define wavefront-sensing requirements and to determine particular areas for more detailed study. A Hartmann test for coarse alignment, particularly segment tilt, seems feasible if LDR can operate at 5 microns or less. The direct measurement of the point spread function in the diffraction limited region may be a way to determine piston error, but this can only be answered by a detailed software model of the optical system. The question of suitable astronomical sources for either test must also be addressed.

  18. Detecting Errors in Programs

    DTIC Science & Technology

    1979-02-01

    Detecting Errors in Programs, Lloyd D. Fosdick. ... from a finite set of tests [35,36]. Recently Howden [37] presented a result showing that for a particular class of Lindenmayer grammars it was possible ... [37] Howden, W.E.: Lindenmayer grammars and symbolic testing. Information Processing Letters 7(1) (Jan. 1978), 36-39.

  19. Magnetic Nanoparticle Thermometer: An Investigation of Minimum Error Transmission Path and AC Bias Error

    PubMed Central

    Du, Zhongzhou; Su, Rijian; Liu, Wenzhong; Huang, Zhixing

    2015-01-01

    The signal transmission module of a magnetic nanoparticle thermometer (MNPT) was established in this study to analyze the error sources introduced during the signal flow in the hardware system. The underlying error sources that significantly affected the precision of the MNPT were determined through mathematical modeling and simulation. A transfer module path with the minimum error in the hardware system was then proposed through the analysis of the variations of the system error caused by the significant error sources when the signal flowed through the signal transmission module. In addition, a system parameter, named the signal-to-AC bias ratio (i.e., the ratio between the signal and AC bias), was identified as a direct determinant of the precision of the measured temperature. The temperature error was below 0.1 K when the signal-to-AC bias ratio was higher than 80 dB, and other system errors were not considered. The temperature error was below 0.1 K in the experiments with a commercial magnetic fluid (Sample SOR-10, Ocean Nanotechnology, Springdale, AR, USA) when the hardware system of the MNPT was designed with the aforementioned method. PMID:25875188

  20. Magnetic nanoparticle thermometer: an investigation of minimum error transmission path and AC bias error.

    PubMed

    Du, Zhongzhou; Su, Rijian; Liu, Wenzhong; Huang, Zhixing

    2015-04-14

    The signal transmission module of a magnetic nanoparticle thermometer (MNPT) was established in this study to analyze the error sources introduced during the signal flow in the hardware system. The underlying error sources that significantly affected the precision of the MNPT were determined through mathematical modeling and simulation. A transfer module path with the minimum error in the hardware system was then proposed through the analysis of the variations of the system error caused by the significant error sources when the signal flowed through the signal transmission module. In addition, a system parameter, named the signal-to-AC bias ratio (i.e., the ratio between the signal and AC bias), was identified as a direct determinant of the precision of the measured temperature. The temperature error was below 0.1 K when the signal-to-AC bias ratio was higher than 80 dB, and other system errors were not considered. The temperature error was below 0.1 K in the experiments with a commercial magnetic fluid (Sample SOR-10, Ocean Nanotechnology, Springdale, AR, USA) when the hardware system of the MNPT was designed with the aforementioned method.

  1. Power Measurement Errors on a Utility Aircraft

    NASA Technical Reports Server (NTRS)

    Bousman, William G.

    2002-01-01

    Extensive flight test data obtained from two recent performance tests of a UH 60A aircraft are reviewed. A power difference is calculated from the power balance equation and is used to examine power measurement errors. It is shown that the baseline measurement errors are highly non-Gaussian in their frequency distribution and are therefore influenced by additional, unquantified variables. Linear regression is used to examine the influence of other variables and it is shown that a substantial portion of the variance depends upon measurements of atmospheric parameters. Correcting for temperature dependence, although reducing the variance in the measurement errors, still leaves unquantified effects. Examination of the power difference over individual test runs indicates significant errors from drift, although it is unclear how these may be corrected. In an idealized case, where the drift is correctable, it is shown that the power measurement errors are significantly reduced and the error distribution is Gaussian. A new flight test program is recommended that will quantify the thermal environment for all torque measurements on the UH 60. Subsequently, the torque measurement systems will be recalibrated based on the measured thermal environment and a new power measurement assessment performed.

  2. Speech Errors, Error Correction, and the Construction of Discourse.

    ERIC Educational Resources Information Center

    Linde, Charlotte

    Speech errors have been used in the construction of production models of the phonological and semantic components of language, and for a model of interactional processes. Errors also provide insight into how speakers plan discourse and syntactic structure. Different types of discourse exhibit different types of error. The present data are taken…

  3. Analysis of the impact of error detection on computer performance

    NASA Technical Reports Server (NTRS)

    Shin, K. C.; Lee, Y. H.

    1983-01-01

    Conventionally, reliability analyses either assume that a fault/error is detected immediately following its occurrence, or neglect damages caused by latent errors. Though unrealistic, this assumption was imposed in order to avoid the difficulty of determining the respective probabilities that a fault induces an error and the error is then detected in a random amount of time after its occurrence. As a remedy for this problem a model is proposed to analyze the impact of error detection on computer performance under moderate assumptions. Error latency, the time interval between occurrence and the moment of detection, is used to measure the effectiveness of a detection mechanism. This model is used to: (1) predict the probability of producing an unreliable result, and (2) estimate the loss of computation due to fault and/or error.

  4. Errors in particle tracking velocimetry with high-speed cameras.

    PubMed

    Feng, Yan; Goree, J; Liu, Bin

    2011-05-01

    Velocity errors in particle tracking velocimetry (PTV) are studied. When using high-speed video cameras, the velocity error may increase at a high camera frame rate. This increase in velocity error is due to particle-position uncertainty, which is one of the two sources of velocity errors studied here. The other source of error is particle acceleration, which has the opposite trend of diminishing at higher frame rates. Both kinds of errors can propagate into quantities calculated from velocity, such as the kinetic temperature of particles or correlation functions. As demonstrated in a dusty plasma experiment, the kinetic temperature of particles has no unique value when measured using PTV, but depends on the sampling time interval or frame rate. It is also shown that an artifact appears in an autocorrelation function computed from particle positions and velocities, and it becomes more severe when a small sampling-time interval is used. Schemes to reduce these errors are demonstrated.
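
    To make the two competing error sources concrete, the sketch below considers a two-point finite-difference velocity estimate v = (x2 - x1)/dt and assumes independent Gaussian position errors; it is an illustrative model, not the paper's analysis.

```python
import math

def velocity_error_terms(sigma_x: float, accel: float, dt: float):
    """Two competing error terms for a finite-difference velocity estimate
    v = (x2 - x1) / dt.

    sigma_x : rms particle-position uncertainty (assumed independent between frames)
    accel   : magnitude of the particle acceleration
    dt      : sampling time interval (inverse of the frame rate)
    """
    position_term = math.sqrt(2.0) * sigma_x / dt  # grows as the frame rate increases
    acceleration_term = 0.5 * abs(accel) * dt      # shrinks as the frame rate increases
    return position_term, acceleration_term

# Sweeping dt shows the trade-off between the two error sources.
for dt in (0.001, 0.01, 0.1):
    print(dt, velocity_error_terms(sigma_x=1e-4, accel=0.5, dt=dt))
```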

  5. Impacts of frequency increment errors on frequency diverse array beampattern

    NASA Astrophysics Data System (ADS)

    Gao, Kuandong; Chen, Hui; Shao, Huaizong; Cai, Jingye; Wang, Wen-Qin

    2015-12-01

    Different from conventional phased array, which provides only angle-dependent beampattern, frequency diverse array (FDA) employs a small frequency increment across the antenna elements and thus results in a range angle-dependent beampattern. However, due to imperfect electronic devices, it is difficult to ensure accurate frequency increments, and consequently, the array performance will be degraded by unavoidable frequency increment errors. In this paper, we investigate the impacts of frequency increment errors on FDA beampattern. We derive the beampattern errors caused by deterministic frequency increment errors. For stochastic frequency increment errors, the corresponding upper and lower bounds of FDA beampattern error are derived. They are verified by numerical results. Furthermore, the statistical characteristics of FDA beampattern with random frequency increment errors, which obey Gaussian distribution and uniform distribution, are also investigated.

  6. Drug Administration Errors in Hospital Inpatients: A Systematic Review

    PubMed Central

    Berdot, Sarah; Gillaizeau, Florence; Caruba, Thibaut; Prognon, Patrice; Durieux, Pierre; Sabatier, Brigitte

    2013-01-01

    Context Drug administration in the hospital setting is the last barrier before a possible error reaches the patient. Objectives We aimed to analyze the prevalence and nature of administration error rate detected by the observation method. Data Sources Embase, MEDLINE, Cochrane Library from 1966 to December 2011 and reference lists of included studies. Study Selection Observational studies, cross-sectional studies, before-and-after studies, and randomized controlled trials that measured the rate of administration errors in inpatients were included. Data Extraction Two reviewers (senior pharmacists) independently identified studies for inclusion. One reviewer extracted the data; the second reviewer checked the data. The main outcome was the error rate calculated as being the number of errors without wrong time errors divided by the Total Opportunity for Errors (TOE, sum of the total number of doses ordered plus the unordered doses given), and multiplied by 100. For studies that reported it, clinical impact was reclassified into four categories from fatal to minor or no impact. Due to a large heterogeneity, results were expressed as median values (interquartile range, IQR), according to their study design. Results Among 2088 studies, a total of 52 reported TOE. Most of the studies were cross-sectional studies (N=46). The median error rate without wrong time errors for the cross-sectional studies using TOE was 10.5% [IQR: 7.3%-21.7%]. No fatal error was observed and most errors were classified as minor in the 18 studies in which clinical impact was analyzed. We did not find any evidence of publication bias. Conclusions Administration errors are frequent among inpatients. The median error rate without wrong time errors for the cross-sectional studies using TOE was about 10%. A standardization of administration error rate using the same denominator (TOE), numerator and types of errors is essential for further publications. PMID:23818992
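
    The review's error-rate definition is simple arithmetic; a minimal sketch (function and variable names are mine, not the authors'):

```python
def administration_error_rate(errors_excluding_wrong_time: int,
                              doses_ordered: int,
                              unordered_doses_given: int) -> float:
    """Error rate (%) = errors without wrong-time errors / TOE * 100,
    where TOE = total doses ordered + unordered doses given."""
    toe = doses_ordered + unordered_doses_given
    return 100.0 * errors_excluding_wrong_time / toe

# Example: 105 non-wrong-time errors over 1000 opportunities gives 10.5%,
# the median rate reported for the cross-sectional studies.
print(administration_error_rate(105, 980, 20))  # 10.5
```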

  7. Inborn Errors in Immunity

    PubMed Central

    Lionakis, M.S.; Hajishengallis, G.

    2015-01-01

    In recent years, the study of genetic defects arising from inborn errors in immunity has resulted in the discovery of new genes involved in the function of the immune system and in the elucidation of the roles of known genes whose importance was previously unappreciated. With the recent explosion in the field of genomics and the increasing number of genetic defects identified, the study of naturally occurring mutations has become a powerful tool for gaining mechanistic insight into the functions of the human immune system. In this concise perspective, we discuss emerging evidence that inborn errors in immunity constitute real-life models that are indispensable both for the in-depth understanding of human biology and for obtaining critical insights into common diseases, such as those affecting oral health. In the field of oral mucosal immunity, through the study of patients with select gene disruptions, the interleukin-17 (IL-17) pathway has emerged as a critical element in oral immune surveillance and susceptibility to inflammatory disease, with disruptions in the IL-17 axis now strongly linked to mucosal fungal susceptibility, whereas overactivation of the same pathways is linked to inflammatory periodontitis. PMID:25900229

  8. Prospective issues for error detection.

    PubMed

    Blavier, Adélaïde; Rouy, Emmanuelle; Nyssen, Anne-Sophie; de Keyser, Véronique

    2005-06-10

    From the literature on error detection, the authors select several concepts relating error detection mechanisms and prospective memory features. They emphasize the central role of intention in the classification of the errors into slips/lapses/mistakes, in the error handling process and in the usual distinction between action-based and outcome-based detection. Intention is again a core concept in their investigation of prospective memory theory, where they point out the contribution of intention retrievals, intention persistence and output monitoring in the individual's possibilities for detecting their errors. The involvement of the frontal lobes in prospective memory and in error detection is also analysed. From the chronology of a prospective memory task, the authors finally suggest a model for error detection also accounting for neural mechanisms highlighted by studies on error-related brain activity.

  9. Emergency department discharge prescription errors in an academic medical center

    PubMed Central

    Belanger, April; Devine, Lauren T.; Lane, Aaron; Condren, Michelle E.

    2017-01-01

    This study described discharge prescription medication errors written for emergency department patients. This study used content analysis in a cross-sectional design to systematically categorize prescription errors found in a report of 1000 discharge prescriptions submitted in the electronic medical record in February 2015. Two pharmacy team members reviewed the discharge prescription list for errors. Open-ended data were coded by an additional rater for agreement on coding categories. Coding was based upon majority rule. Descriptive statistics were used to address the study objective. Categories evaluated were patient age, provider type, drug class, and type and time of error. The discharge prescription error rate out of 1000 prescriptions was 13.4%, with “incomplete or inadequate prescription” being the most commonly detected error (58.2%). The adult and pediatric error rates were 11.7% and 22.7%, respectively. The antibiotics reviewed had the highest number of errors. The highest within-class error rates were with antianginal medications, antiparasitic medications, antacids, appetite stimulants, and probiotics. Emergency medicine residents wrote the highest percentage of prescriptions (46.7%) and had an error rate of 9.2%. Residents of other specialties wrote 340 prescriptions and had an error rate of 20.9%. Errors occurred most often between 10:00 am and 6:00 pm.

  10. Error Sources in Asteroid Astrometry

    NASA Technical Reports Server (NTRS)

    Owen, William M., Jr.

    2000-01-01

    Asteroid astrometry, like any other scientific measurement process, is subject to both random and systematic errors, not all of which are under the observer's control. To design an astrometric observing program or to improve an existing one requires knowledge of the various sources of error, how different errors affect one's results, and how various errors may be minimized by careful observation or data reduction techniques.

  11. Rapid mapping of volumetric machine errors using distance measurements

    SciTech Connect

    Krulewich, D.A.

    1998-04-01

    This paper describes a relatively inexpensive, fast, and easy-to-execute approach to mapping the volumetric errors of a machine tool, coordinate measuring machine, or robot. An error map is used to characterize a machine or to improve its accuracy by compensating for the systematic errors. The method consists of three steps: (1) modeling the relationship between volumetric error and the current state of the machine; (2) acquiring error data based on distance measurements throughout the work volume; and (3) fitting the error model using the nonlinear equation for the distance. The error model is formulated from the kinematic relationship among the six degrees of freedom of error on each moving axis. Each parametric error is expressed as a function of position, and these are combined to predict the error between the functional point and the workpiece, also as a function of position. A series of distances between several fixed base locations and various functional points in the work volume is measured using a Laser Ball Bar (LBB). Each measured distance is a nonlinear function of the commanded location of the machine, the machine error, and the base locations. Using the error model, the nonlinear equation is solved, producing a fit for the error model. Note also that, given approximate distances between each pair of base locations, the exact base locations in the machine coordinate system are determined during the nonlinear fitting procedure. Furthermore, with the use of more than three base locations, bias error in the measuring instrument can be removed. The volumetric errors of a three-axis commercial machining center have been mapped using this procedure. In this study, only errors associated with the nominal position of the machine were considered. Other errors, such as thermally induced and load-induced errors, were not considered, although the mathematical model has the ability to account for them. Due to the proprietary nature of the projects we are
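
    As a rough illustration of step (3), the sketch below fits a deliberately simplified error model (a single scale error per axis, which is an assumption made here, not the paper's full kinematic model) to LBB distance measurements with a generic nonlinear least-squares routine:

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(params, commanded, measured_dist, base):
    """Difference between modeled and measured base-to-point distances.

    params        : (sx, sy, sz) hypothetical per-axis scale errors
    commanded     : (N, 3) commanded machine positions
    measured_dist : (N,) distances measured with the laser ball bar
    base          : (3,) approximate base location in machine coordinates
    """
    actual = commanded * (1.0 + np.asarray(params))   # apply the toy error model
    modeled = np.linalg.norm(actual - base, axis=1)
    return modeled - measured_dist

# fit = least_squares(residuals, x0=np.zeros(3),
#                     args=(commanded, measured_dist, base))
# fit.x then holds the estimated scale errors.
```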

  12. The effect of the ancilla verification on the quantum error correction

    NASA Astrophysics Data System (ADS)

    Abu-Nada, Ali

    Communication is the prototypical application of error-correction methods. To communicate, a sender needs to convey information to a receiver over a noisy "communication channel." Such a channel can be thought of as a means of transmitting an information-carrying physical system from one place to another. During transmission, the physical system is subject to disturbances (noise) that can adversely affect the information carried. To use a communication channel, the sender needs to encode the information to be transmitted in the physical system. After transmission, the receiver decodes the information. Quantum error correction is used in quantum computing to protect quantum information from errors due to decoherence and other quantum noise. Quantum error correction is essential if one is to achieve fault-tolerant quantum computation that can deal with noise on stored quantum information as well as with faulty quantum gates, faulty quantum preparation, and faulty measurements. In this dissertation, we look at how additional information about the structure of the quantum circuit and noise can improve or alter the performance of techniques in quantum error correction. Chapters 1 and 2 are an introduction to quantum computation, quantum error correction codes, and fault-tolerant quantum computing. These chapters are written to be useful for students at the graduate and advanced undergraduate level; they will also be useful to researchers in other fields who would like to understand how quantum error correction and fault-tolerant quantum computing are possible. In chapter 3, we present numerical simulation results comparing the logical error rates for the fault-tolerant [[7, 1, 3]]

  13. A Bayesian approach to model structural error and input variability in groundwater modeling

    NASA Astrophysics Data System (ADS)

    Xu, T.; Valocchi, A. J.; Lin, Y. F. F.; Liang, F.

    2015-12-01

    Effective water resource management typically relies on numerical models to analyze groundwater flow and solute transport processes. Model structural error (due to simplification and/or misrepresentation of the "true" environmental system) and input forcing variability (which commonly arises since some inputs are uncontrolled or estimated with high uncertainty) are ubiquitous in groundwater models. Calibration that overlooks errors in model structure and input data can lead to biased parameter estimates and compromised predictions. We present a fully Bayesian approach for a complete assessment of uncertainty for spatially distributed groundwater models. The approach explicitly recognizes stochastic input and uses data-driven error models based on nonparametric kernel methods to account for model structural error. We employ exploratory data analysis to assist in specifying informative prior for error models to improve identifiability. The inference is facilitated by an efficient sampling algorithm based on DREAM-ZS and a parameter subspace multiple-try strategy to reduce the required number of forward simulations of the groundwater model. We demonstrate the Bayesian approach through a synthetic case study of surface-ground water interaction under changing pumping conditions. It is found that explicit treatment of errors in model structure and input data (groundwater pumping rate) has substantial impact on the posterior distribution of groundwater model parameters. Using error models reduces predictive bias caused by parameter compensation. In addition, input variability increases parametric and predictive uncertainty. The Bayesian approach allows for a comparison among the contributions from various error sources, which could inform future model improvement and data collection efforts on how to best direct resources towards reducing predictive uncertainty.

  14. Error Patterns in Problem Solving.

    ERIC Educational Resources Information Center

    Babbitt, Beatrice C.

    Although many common problem-solving errors within the realm of school mathematics have been previously identified, a compilation of such errors is not readily available within learning disabilities textbooks, mathematics education texts, or teacher's manuals for school mathematics texts. Using data on error frequencies drawn from both the Fourth…

  15. Measurement Error. For Good Measure....

    ERIC Educational Resources Information Center

    Johnson, Stephen; Dulaney, Chuck; Banks, Karen

    No test, however well designed, can measure a student's true achievement because numerous factors interfere with the ability to measure achievement. These factors are sources of measurement error, and the goal in creating tests is to have as little measurement error as possible. Error can result from the test design, factors related to individual…

  16. Uncertainty quantification and error analysis

    SciTech Connect

    Higdon, Dave M; Anderson, Mark C; Habib, Salman; Klein, Richard; Berliner, Mark; Covey, Curt; Ghattas, Omar; Graziani, Carlo; Seager, Mark; Sefcik, Joseph; Stark, Philip

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  17. Feature Referenced Error Correction Apparatus.

    DTIC Science & Technology

    A feature referenced error correction apparatus utilizing the multiple images of the interstage level image format to compensate for positional...images and by the generation of an error correction signal in response to the sub-frame registration errors. (Author)

  18. Error correction in adders using systematic subcodes.

    NASA Technical Reports Server (NTRS)

    Rao, T. R. N.

    1972-01-01

    A generalized theory is presented for the construction of a systematic subcode for a given AN code in such a way that error control properties of the AN code are preserved in this new code. The 'systematic weight' and 'systematic distance' functions in this new code depend not only on its number representation system but also on its addition structure. Finally, to illustrate this theory, a simple error-correcting adder organization using a systematic subcode of a 29N code is sketched in some detail.

  19. Error types and error positions in neglect dyslexia: comparative analyses in neglect patients and healthy controls.

    PubMed

    Weinzierl, Christiane; Kerkhoff, Georg; van Eimeren, Lucia; Keller, Ingo; Stenneken, Prisca

    2012-10-01

    Unilateral spatial neglect frequently involves a lateralised reading disorder, neglect dyslexia (ND). Reading of single words in ND is characterised by left-sided omissions and substitutions of letters. However, it is unclear whether the distribution of error types and positions within a word shows a unique pattern of ND when directly compared to healthy controls. This question has been difficult to answer so far, given the usually low number of reading errors in healthy controls. Therefore, the present study compared single word reading of 18 patients with left-sided neglect, due to right-hemisphere stroke, and 11 age-matched healthy controls, and adjusted individual task difficulty (by varying stimulus presentation times in participants) in order to reach approximately equal error rates between neglect patients and controls. Results showed that, while both omission and substitution errors were frequently produced in neglect patients and controls, only omissions appeared neglect-specific when task difficulty was adapted between groups. Analyses of individual letter positions within words revealed that the spatial distribution of reading errors in the neglect dyslexic patients followed an almost linear increase from the end to the beginning of the word (right-to-left-gradient). Both, the gradient in error positions and the predominance of omission errors presented a neglect-specific pattern. Consistent with current models of visual word processing, these findings suggest that ND reflects sublexical, visuospatial attentional mechanisms in letter string encoding.

  20. Structured error recovery for code-word-stabilized quantum codes

    SciTech Connect

    Li Yunfan; Dumer, Ilya; Grassl, Markus; Pryadko, Leonid P.

    2010-05-15

    Code-word-stabilized (CWS) codes are, in general, nonadditive quantum codes that can correct errors by an exhaustive search of different error patterns, similar to the way that we decode classical nonlinear codes. For an n-qubit quantum code correcting errors on up to t qubits, this brute-force approach consecutively tests different errors of weight t or less and employs a separate n-qubit measurement in each test. In this article, we suggest an error grouping technique that allows one to simultaneously test large groups of errors in a single measurement. This structured error recovery technique exponentially reduces the number of measurements by about 3^t times. While it still leaves exponentially many measurements for a generic CWS code, the technique is equivalent to syndrome-based recovery for the special case of additive CWS codes.
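
    To put the measurement counts in perspective: the number of Pauli error patterns of weight at most t on n qubits is the sum over k <= t of C(n, k)*3^k, which is roughly how many separate measurements the brute-force approach needs, and the grouping technique cuts this by about a factor of 3^t. The counting sketch below is standard combinatorics, not code from the article:

```python
from math import comb

def pauli_errors_up_to_weight(n: int, t: int) -> int:
    """Number of n-qubit Pauli error patterns of weight <= t
    (3 single-qubit errors X, Y, Z per affected qubit; includes weight 0)."""
    return sum(comb(n, k) * 3**k for k in range(t + 1))

n, t = 10, 2
brute_force = pauli_errors_up_to_weight(n, t)
print(brute_force)            # 436 candidate error patterns
print(brute_force // 3**t)    # ~48: rough count after the ~3^t grouping reduction
```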

  1. Structured error recovery for code-word-stabilized quantum codes

    NASA Astrophysics Data System (ADS)

    Li, Yunfan; Dumer, Ilya; Grassl, Markus; Pryadko, Leonid P.

    2010-05-01

    Code-word-stabilized (CWS) codes are, in general, nonadditive quantum codes that can correct errors by an exhaustive search of different error patterns, similar to the way that we decode classical nonlinear codes. For an n-qubit quantum code correcting errors on up to t qubits, this brute-force approach consecutively tests different errors of weight t or less and employs a separate n-qubit measurement in each test. In this article, we suggest an error grouping technique that allows one to simultaneously test large groups of errors in a single measurement. This structured error recovery technique exponentially reduces the number of measurements by about 3^t times. While it still leaves exponentially many measurements for a generic CWS code, the technique is equivalent to syndrome-based recovery for the special case of additive CWS codes.

  2. Identification of Error Patterns in Terminal-Area ATC Communications

    NASA Technical Reports Server (NTRS)

    Quinn, Cheryl; Walter, Kim E.; Rosekind, Mark R. (Technical Monitor)

    1997-01-01

    Advancing air traffic management technologies have enabled a greater number of aircraft to use the same airspace more effectively. As aircraft separations are reduced and final approaches are more finely timed, there is less room for error. The present study examined 122 terminal-area, loss-of-separation and procedure violation incidents reported to the Aviation Safety Reporting System (ASRS) by air traffic controllers. Narrative description codes were used for the incidents for type of violation, contributing factors, recovery strategies, and consequences. Usually multiple errors occurred prior to the violation. Error sequences were analyzed and common patterns of errors were identified. In half of the incidents, errors were noticed in time to correct mistakes. Of these, almost 43% committed additional errors during the recovery attempt. This analysis shows that redundancies in the present air traffic control system may not be sufficient to support large increases in traffic density. Error prevention and design considerations for air traffic management systems are discussed.

  3. On typographical errors.

    PubMed

    Hamilton, J W

    1993-09-01

    In his overall assessment of parapraxes in 1901, Freud included typographical mistakes but did not elaborate on or study this subject nor did he have anything to say about it in his later writings. This paper lists textual errors from a variety of current literary sources and explores the dynamic importance of their execution and the failure to make necessary corrections during the editorial process. While there has been a deemphasis of the role of unconscious determinants in the genesis of all slips as a result of recent findings in cognitive psychology, the examples offered suggest that, with respect to motivation, lapses in compulsivity contribute to their original commission while thematic compliance and voyeuristic issues are important in their not being discovered prior to publication.

  4. Gender Bender: Gender Errors in L2 Pronoun Production

    ERIC Educational Resources Information Center

    Anton-Mendez, Ines

    2010-01-01

    To address questions about information processing at the message level, pronoun errors of second language (L2) speakers of English were studied. Some L2 pronoun errors--"he/she" confusions by Spanish speakers of L2 English--could be due to differences in the informational requirements of the speakers' two languages, providing a window into the…

  5. New Gear Transmission Error Measurement System Designed

    NASA Technical Reports Server (NTRS)

    Oswald, Fred B.

    2001-01-01

    The prime source of vibration and noise in a gear system is the transmission error between the meshing gears. Transmission error is caused by manufacturing inaccuracy, mounting errors, and elastic deflections under load. Gear designers often attempt to compensate for transmission error by modifying gear teeth. This is done traditionally by a rough "rule of thumb" or more recently under the guidance of an analytical code. In order for a designer to have confidence in a code, the code must be validated through experiment. NASA Glenn Research Center contracted with the Design Unit of the University of Newcastle in England for a system to measure the transmission error of spur and helical test gears in the NASA Gear Noise Rig. The new system measures transmission error optically by means of light beams directed by lenses and prisms through gratings mounted on the gear shafts. The amount of light that passes through both gratings is directly proportional to the transmission error of the gears. A photodetector circuit converts the light to an analog electrical signal. To increase accuracy and reduce "noise" due to transverse vibration, there are parallel light paths at the top and bottom of the gears. The two signals are subtracted via differential amplifiers in the electronics package. The output of the system is 40 mV/mm, giving a resolution in the time domain of better than 0.1 mm, and discrimination in the frequency domain of better than 0.01 mm. The new system will be used to validate gear analytical codes and to investigate mechanisms that produce vibration and noise in parallel axis gears.

  6. Reducing errors in the GRACE gravity solutions using regularization

    NASA Astrophysics Data System (ADS)

    Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.

    2012-09-01

    The nature of the gravity field inverse problem amplifies the noise in the GRACE data, which creeps into the mid and high degree and order harmonic coefficients of the Earth's monthly gravity fields provided by GRACE. Due to the use of imperfect background models and data noise, these errors are manifested as north-south striping in the monthly global maps of equivalent water heights. In order to reduce these errors, this study investigates the use of the L-curve method with Tikhonov regularization. The L-curve is a popular aid for determining a suitable value of the regularization parameter when solving linear discrete ill-posed problems using Tikhonov regularization. However, the computational effort required to determine the L-curve is prohibitively high for a large-scale problem like GRACE. This study implements a parameter-choice method using Lanczos bidiagonalization, which is a computationally inexpensive approximation to the L-curve. Lanczos bidiagonalization is implemented with orthogonal transformation in a parallel computing environment and projects a large estimation problem onto a problem about 2 orders of magnitude smaller for computing the regularization parameter. Errors in the GRACE solution time series have certain characteristics that vary depending on the ground track coverage of the solutions. These errors increase with increasing degree and order. In addition, certain resonant and near-resonant harmonic coefficients have higher errors as compared with the other coefficients. Using the knowledge of these characteristics, this study designs a regularization matrix that provides a constraint on the geopotential coefficients as a function of their degree and order. This regularization matrix is then used to compute the appropriate regularization parameter for each monthly solution. A 7-year time-series of the candidate regularized solutions (Mar 2003-Feb 2010) shows markedly reduced error stripes compared with the unconstrained GRACE release 4
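
    For readers unfamiliar with the machinery, a dense toy version of Tikhonov regularization with a degree-and-order-dependent regularization matrix is sketched below; the actual GRACE estimation problem is vastly larger and is solved with Lanczos bidiagonalization in a parallel environment, which this sketch does not attempt to reproduce:

```python
import numpy as np

def tikhonov_solve(A, b, L, alpha):
    """Solve min ||A x - b||^2 + alpha^2 ||L x||^2 via the normal equations.

    A     : design matrix relating geopotential coefficients to observations
    L     : regularization matrix (e.g. a diagonal weight that grows with the
            degree and order of each coefficient)
    alpha : regularization parameter (chosen via the L-curve in the paper)
    """
    lhs = A.T @ A + alpha**2 * (L.T @ L)
    rhs = A.T @ b
    return np.linalg.solve(lhs, rhs)
```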

  7. Rapid mapping of volumetric errors

    SciTech Connect

    Krulewich, D.; Hale, L.; Yordy, D.

    1995-09-13

    This paper describes a relatively inexpensive, fast, and easy to execute approach to mapping the volumetric errors of a machine tool, coordinate measuring machine, or robot. An error map is used to characterize a machine or to improve its accuracy by compensating for the systematic errors. The method consists of three steps: (1) modeling the relationship between the volumetric error and the current state of the machine; (2) acquiring error data based on length measurements throughout the work volume; and (3) optimizing the model to the particular machine.

  8. Register file soft error recovery

    DOEpatents

    Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.

    2013-10-15

    Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.

  9. Towards error-free interaction.

    PubMed

    Tsoneva, Tsvetomira; Bieger, Jordi; Garcia-Molina, Gary

    2010-01-01

    Human-machine interaction (HMI) relies on pattern recognition algorithms that are not perfect. To improve the performance and usability of these systems we can utilize the neural mechanisms in the human brain dealing with error awareness. This study aims at designing a practical error detection algorithm using electroencephalogram signals that can be integrated in an HMI system. Thus, real-time operation, customization, and operation convenience are important. We address these requirements in an experimental framework simulating machine errors. Our results confirm the presence of brain potentials related to processing of machine errors. These are used to implement an error detection algorithm emphasizing the differences in error processing on a per subject basis. The proposed algorithm uses the individual best bipolar combination of electrode sites and requires short calibration. The single-trial error detection performance on six subjects, characterized by the area under the ROC curve ranges from 0.75 to 0.98.

  10. The economics of health care quality and medical errors.

    PubMed

    Andel, Charles; Davidow, Stephen L; Hollander, Mark; Moreno, David A

    2012-01-01

    Hospitals have been looking for ways to improve quality and operational efficiency and cut costs for nearly three decades, using a variety of quality improvement strategies. However, based on recent reports, approximately 200,000 Americans die from preventable medical errors including facility-acquired conditions and millions may experience errors. In 2008, medical errors cost the United States $19.5 billion. About 87 percent or $17 billion were directly associated with additional medical cost, including: ancillary services, prescription drug services, and inpatient and outpatient care, according to a study sponsored by the Society for Actuaries and conducted by Milliman in 2010. Additional costs of $1.4 billion were attributed to increased mortality rates with $1.1 billion or 10 million days of lost productivity from missed work based on short-term disability claims. The authors estimate that the economic impact is much higher, perhaps nearly $1 trillion annually when quality-adjusted life years (QALYs) are applied to those that die. Using the Institute of Medicine's (IOM) estimate of 98,000 deaths due to preventable medical errors annually in its 1998 report, To Err Is Human, and an average of ten lost years of life at $75,000 to $100,000 per year, there is a loss of $73.5 billion to $98 billion in QALYs for those deaths--conservatively. These numbers are much greater than those we cite from studies that explore the direct costs of medical errors. And if the estimate of a recent Health Affairs article is correct-preventable death being ten times the IOM estimate-the cost is $735 billion to $980 billion. Quality care is less expensive care. It is better, more efficient, and by definition, less wasteful. It is the right care, at the right time, every time. It should mean that far fewer patients are harmed or injured. Obviously, quality care is not being delivered consistently throughout U.S. hospitals. Whatever the measure, poor quality is costing payers and
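
    The QALY figures quoted above follow from straightforward arithmetic on the stated assumptions, as the following check shows:

```python
deaths = 98_000                     # IOM estimate of annual deaths from preventable errors
years_lost = 10                     # average years of life lost per death
value_per_year = (75_000, 100_000)  # assumed value range per quality-adjusted life year

low, high = (deaths * years_lost * v for v in value_per_year)
print(f"${low / 1e9:.1f}B to ${high / 1e9:.1f}B")  # $73.5B to $98.0B
```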

  11. Estimating standard errors in feature network models.

    PubMed

    Frank, Laurence E; Heiser, Willem J

    2007-05-01

    Feature network models are graphical structures that represent proximity data in a discrete space while using the same formalism that is the basis of least squares methods employed in multidimensional scaling. Existing methods to derive a network model from empirical data only give the best-fitting network and yield no standard errors for the parameter estimates. The additivity properties of networks make it possible to consider the model as a univariate (multiple) linear regression problem with positivity restrictions on the parameters. In the present study, both theoretical and empirical standard errors are obtained for the constrained regression parameters of a network model with known features. The performance of both types of standard error is evaluated using Monte Carlo techniques.
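
    The abstract does not spell out the Monte Carlo procedure; one common way to obtain empirical standard errors for positivity-constrained regression weights is a residual bootstrap, sketched below under that assumption:

```python
import numpy as np
from scipy.optimize import nnls

def bootstrap_se(X, y, n_boot=1000, seed=0):
    """Empirical standard errors for non-negative least-squares weights,
    estimated by resampling residuals (one possible Monte Carlo scheme)."""
    rng = np.random.default_rng(seed)
    beta, _ = nnls(X, y)                 # constrained (non-negative) fit
    resid = y - X @ beta
    draws = []
    for _ in range(n_boot):
        y_star = X @ beta + rng.choice(resid, size=y.size, replace=True)
        b_star, _ = nnls(X, y_star)
        draws.append(b_star)
    return np.std(draws, axis=0)
```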

  12. On Time/Space Aggregation of Fine-Scale Error Estimates (Invited)

    NASA Astrophysics Data System (ADS)

    Huffman, G. J.

    2013-12-01

    Estimating errors inherent in fine time/space-scale satellite precipitation data sets is still an on-going problem and a key area of active research. Complicating features of these data sets include the intrinsic intermittency of the precipitation in space and time and the resulting highly skewed distribution of precipitation rates. Additional issues arise from the subsampling errors that satellites introduce, the errors due to retrieval algorithms, and the correlated error that retrieval and merger algorithms sometimes introduce. Several interesting approaches have been developed recently that appear to make progress on these long-standing issues. At the same time, the monthly averages over 2.5°x2.5° grid boxes in the Global Precipitation Climatology Project (GPCP) Satellite-Gauge (SG) precipitation data set follow a very simple sampling-based error model (Huffman 1997) with coefficients that are set using coincident surface and GPCP SG data. This presentation outlines the unsolved problem of how to aggregate the fine-scale errors (discussed above) to an arbitrary time/space averaging volume for practical use in applications, reducing in the limit to simple Gaussian expressions at the monthly 2.5°x2.5° scale. Scatter diagrams with different time/space averaging show that the relationship between the satellite and validation data improves due to the reduction in random error. One of the key, and highly non-linear, issues is that fine-scale estimates tend to have large numbers of cases with points near the axes on the scatter diagram (one of the values is exactly or nearly zero, while the other value is higher). Averaging 'pulls' the points away from the axes and towards the 1:1 line, which usually happens for higher precipitation rates before lower rates. Given this qualitative observation of how aggregation affects error, we observe that existing aggregation rules, such as the Steiner et al. (2003) power law, only depend on the aggregated precipitation rate

  13. Quantum Error Correction with Biased Noise

    NASA Astrophysics Data System (ADS)

    Brooks, Peter

    Quantum computing offers powerful new techniques for speeding up the calculation of many classically intractable problems. Quantum algorithms can allow for the efficient simulation of physical systems, with applications to basic research, chemical modeling, and drug discovery; other algorithms have important implications for cryptography and internet security. At the same time, building a quantum computer is a daunting task, requiring the coherent manipulation of systems with many quantum degrees of freedom while preventing environmental noise from interacting too strongly with the system. Fortunately, we know that, under reasonable assumptions, we can use the techniques of quantum error correction and fault tolerance to achieve an arbitrary reduction in the noise level. In this thesis, we look at how additional information about the structure of noise, or "noise bias," can improve or alter the performance of techniques in quantum error correction and fault tolerance. In Chapter 2, we explore the possibility of designing certain quantum gates to be extremely robust with respect to errors in their operation. This naturally leads to structured noise where certain gates can be implemented in a protected manner, allowing the user to focus their protection on the noisier unprotected operations. In Chapter 3, we examine how to tailor error-correcting codes and fault-tolerant quantum circuits in the presence of dephasing biased noise, where dephasing errors are far more common than bit-flip errors. By using an appropriately asymmetric code, we demonstrate the ability to improve the amount of error reduction and decrease the physical resources required for error correction. In Chapter 4, we analyze a variety of protocols for distilling magic states, which enable universal quantum computation, in the presence of faulty Clifford operations. Here again there is a hierarchy of noise levels, with a fixed error rate for faulty gates, and a second rate for errors in the distilled

  14. Cosmic ray-induced soft errors in static MOS memory cells

    NASA Technical Reports Server (NTRS)

    Sivo, L. L.; Peden, J. C.; Brettschneider, M.; Price, W.; Pentecost, P.

    1979-01-01

    Previous analytical models were extended to predict cosmic ray-induced soft error rates in static MOS memory devices. The effect is due to ionization and can be introduced by high energy, heavy ion components of the galactic environment. The results indicate that the sensitivity of memory cells is directly related to the density of the particular MOS technology which determines the node capacitance values. Hence, CMOS is less sensitive than e.g., PMOS. In addition, static MOS memory cells are less sensitive than dynamic ones due to differences in the mechanisms of storing bits. The flip-flop of a static cell is inherently stable against cosmic ray-induced bit flips. Predicted error rates on a CMOS RAM and a PMOS shift register are in general agreement with previous spacecraft flight data.

  15. Improved Error Thresholds for Measurement-Free Error Correction

    NASA Astrophysics Data System (ADS)

    Crow, Daniel; Joynt, Robert; Saffman, M.

    2016-09-01

    Motivated by limitations and capabilities of neutral atom qubits, we examine whether measurement-free error correction can produce practical error thresholds. We show that this can be achieved by extracting redundant syndrome information, giving our procedure extra fault tolerance and eliminating the need for ancilla verification. The procedure is particularly favorable when multiqubit gates are available for the correction step. Simulations of the bit-flip, Bacon-Shor, and Steane codes indicate that coherent error correction can produce threshold error rates that are on the order of 10-3 to 10-4—comparable with or better than measurement-based values, and much better than previous results for other coherent error correction schemes. This indicates that coherent error correction is worthy of serious consideration for achieving protected logical qubits.

  16. Improved Error Thresholds for Measurement-Free Error Correction.

    PubMed

    Crow, Daniel; Joynt, Robert; Saffman, M

    2016-09-23

    Motivated by limitations and capabilities of neutral atom qubits, we examine whether measurement-free error correction can produce practical error thresholds. We show that this can be achieved by extracting redundant syndrome information, giving our procedure extra fault tolerance and eliminating the need for ancilla verification. The procedure is particularly favorable when multiqubit gates are available for the correction step. Simulations of the bit-flip, Bacon-Shor, and Steane codes indicate that coherent error correction can produce threshold error rates that are on the order of 10^{-3} to 10^{-4}-comparable with or better than measurement-based values, and much better than previous results for other coherent error correction schemes. This indicates that coherent error correction is worthy of serious consideration for achieving protected logical qubits.

  17. Contour Error Map Algorithm

    NASA Technical Reports Server (NTRS)

    Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John

    2005-01-01

    The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.
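
    A minimal sketch of the kind of binarization step described above; the 90-degree threshold, the shore-normal direction, and the function name are illustrative assumptions, not the algorithm's actual rule:

```python
import numpy as np

def binarize_onshore(wind_toward_deg, onshore_deg):
    """Return 1 where the wind blows within 90 degrees of the onshore
    (sea-to-land) direction, else 0.  wind_toward_deg is the direction the
    wind blows toward, in degrees."""
    diff = np.abs((wind_toward_deg - onshore_deg + 180.0) % 360.0 - 180.0)
    return (diff < 90.0).astype(int)

# Example on a tiny 2x2 grid with an onshore direction of 270 degrees (westward).
winds = np.array([[260.0, 100.0], [300.0, 80.0]])
print(binarize_onshore(winds, 270.0))  # [[1 0]
                                       #  [1 0]]
```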

  18. Comparison of analytical error and sampling error for contaminated soil.

    PubMed

    Gustavsson, Björn; Luthbom, Karin; Lagerkvist, Anders

    2006-11-16

    Investigation of soil from contaminated sites requires several sample handling steps that, most likely, will induce uncertainties in the sample. The theory of sampling describes seven sampling errors that can be calculated, estimated or discussed in order to get an idea of the size of the sampling uncertainties. With the aim of comparing the size of the analytical error to the total sampling error, these seven errors were applied, estimated and discussed for a case study of a contaminated site. The manageable errors were summarized, showing a range of three orders of magnitude between the examples. The comparisons show that the quotient between the total sampling error and the analytical error is larger than 20 in most calculation examples. Exceptions were samples taken in hot spots, where some components of the total sampling error get small and the analytical error gets large in comparison. Low concentration of contaminant, small extracted sample size and large particles in the sample contribute to the extent of uncertainty.

  19. Spatial sampling errors for a satellite-borne scanning radiometer

    NASA Technical Reports Server (NTRS)

    Manalo, Natividad D.; Smith, G. L.

    1991-01-01

    The Clouds and Earth's Radiant Energy System (CERES) scanning radiometer is planned as the Earth radiation budget instrument for the Earth Observation System, to be flown in the late 1990's. In order to minimize the spatial sampling errors of the measurements, it is necessary to select design parameters for the instrument such that the resulting point spread function will minimize spatial sampling errors. These errors are described as aliasing and blurring errors. Aliasing errors are due to presence in the measurements of spatial frequencies beyond the Nyquist frequency, and blurring errors are due to attenuation of frequencies below the Nyquist frequency. The design parameters include pixel shape and dimensions, sampling rate, scan period, and time constants of the measurements. For a satellite-borne scanning radiometer, the pixel footprint grows quickly at large nadir angles. The aliasing errors thus decrease with increasing scan angle, but the blurring errors grow quickly. The best design minimizes the sum of these two errors over a range of scan angles. Results of a parameter study are presented, showing effects of data rates, pixel dimensions, spacecraft altitude, and distance from the spacecraft track.

  20. Grammatical Errors Produced by English Majors: The Translation Task

    ERIC Educational Resources Information Center

    Mohaghegh, Hamid; Zarandi, Fatemeh Mahmoudi; Shariati, Mohammad

    2011-01-01

    This study investigated the frequency of the grammatical errors related to the four categories of preposition, relative pronoun, article, and tense using the translation task. In addition, the frequencies of these grammatical errors in different categories and in each category were examined. The quantitative component of the study further looked…

  1. Error Sensitivity to Environmental Noise in Quantum Circuits for Chemical State Preparation.

    PubMed

    Sawaya, Nicolas P D; Smelyanskiy, Mikhail; McClean, Jarrod R; Aspuru-Guzik, Alán

    2016-07-12

    Calculating molecular energies is likely to be one of the first useful applications to achieve quantum supremacy, performing faster on a quantum than a classical computer. However, if future quantum devices are to produce accurate calculations, errors due to environmental noise and algorithmic approximations need to be characterized and reduced. In this study, we use the high performance qHiPSTER software to investigate the effects of environmental noise on the preparation of quantum chemistry states. We simulated 18 16-qubit quantum circuits under environmental noise, each corresponding to a unitary coupled cluster state preparation of a different molecule or molecular configuration. Additionally, we analyze the nature of simple gate errors in noise-free circuits of up to 40 qubits. We find that, in most cases, the Jordan-Wigner (JW) encoding produces smaller errors under a noisy environment as compared to the Bravyi-Kitaev (BK) encoding. For the JW encoding, pure dephasing noise is shown to produce substantially smaller errors than pure relaxation noise of the same magnitude. We report error trends in both molecular energy and electron particle number within a unitary coupled cluster state preparation scheme, against changes in nuclear charge, bond length, number of electrons, noise types, and noise magnitude. These trends may prove to be useful in making algorithmic and hardware-related choices for quantum simulation of molecular energies.

  2. Working Memory Capacity Predicts Selection and Identification Errors in Visual Search.

    PubMed

    Peltier, Chad; Becker, Mark W

    2016-11-17

    As public safety relies on the ability of professionals, such as radiologists and baggage screeners, to detect rare targets, it could be useful to identify predictors of visual search performance. Schwark, Sandry, and Dolgov found that working memory capacity (WMC) predicts hit rate and reaction time in low prevalence searches. This link was attributed to higher WMC individuals exhibiting a higher quitting threshold and increasing the probability of finding the target before terminating search in low prevalence search. These conclusions were limited based on the methods; without eye tracking, the researchers could not differentiate between an increase in accuracy due to fewer identification errors (failing to identify a fixated target), selection errors (failing to fixate a target), or a combination of both. Here, we measure WMC and correlate it with reaction time and accuracy in a visual search task. We replicate the finding that WMC predicts reaction time and hit rate. However, our analysis shows that it does so through both a reduction in selection and identification errors. The correlation between WMC and selection errors is attributable to increased quitting thresholds in those with high WMC. The correlation between WMC and identification errors is less clear, though potentially attributable to increased item inspection times in those with higher WMC. In addition, unlike Schwark and coworkers, we find that these WMC effects are fairly consistent across prevalence rates rather than being specific to low-prevalence searches.

  3. Estimating errors in least-squares fitting

    NASA Technical Reports Server (NTRS)

    Richter, P. H.

    1995-01-01

    While least-squares fitting procedures are commonly used in data analysis and are extensively discussed in the literature devoted to this subject, the proper assessment of errors resulting from such fits has received relatively little attention. The present work considers statistical errors in the fitted parameters, as well as in the values of the fitted function itself, resulting from random errors in the data. Expressions are derived for the standard error of the fit, as a function of the independent variable, for the general nonlinear and linear fitting problems. Additionally, closed-form expressions are derived for some examples commonly encountered in the scientific and engineering fields, namely ordinary polynomial and Gaussian fitting functions. These results have direct application to the assessment of the antenna gain and system temperature characteristics, in addition to a broad range of problems in data analysis. The effects of the nature of the data and the choice of fitting function on the ability to accurately model the system under study are discussed, and some general rules are deduced to assist workers intent on maximizing the amount of information obtained from a given set of measurements.
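
    As a small numerical illustration of standard errors on fitted parameters and on the fitted function itself, the sketch below uses the parameter covariance matrix from an ordinary polynomial least-squares fit; it reflects generic practice rather than the report's derivations:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 + 0.5 * x + rng.normal(0.0, 0.2, x.size)   # synthetic data with known noise

coeffs, cov = np.polyfit(x, y, deg=1, cov=True)     # fit and parameter covariance
param_se = np.sqrt(np.diag(cov))                    # standard errors of slope, intercept

# Standard error of the fitted function at each x: sqrt(diag(X cov X^T)).
X = np.vander(x, 2)                                 # design matrix [x, 1]
fit_se = np.sqrt(np.einsum('ij,jk,ik->i', X, cov, X))
print(coeffs, param_se, fit_se.max())
```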

  4. Systematic errors for a Mueller matrix dual rotating compensator ellipsometer.

    PubMed

    Broch, Laurent; En Naciri, Aotmane; Johann, Luc

    2008-06-09

    The characterization of anisotropic materials and complex systems by ellipsometry has pushed the design of instruments to require the measurement of the full reflection Mueller matrix of the sample with great precision. Therefore, Mueller matrix ellipsometers have emerged over the past twenty years. The values of some coefficients of the matrix can be very small, and errors due to noise or systematic errors can induce distorted analyses. We present a detailed characterization of the systematic errors for a Mueller Matrix Ellipsometer in the dual-rotating compensator configuration. Starting from a general formalism, we derive explicit first-order expressions for the errors on all the coefficients of the Mueller matrix of the sample. The errors caused by inaccuracy of the azimuthal arrangement of the optical components and residual ellipticity introduced by imperfect optical elements are shown. A new method based on a four-zone averaging measurement is proposed to eliminate the systematic errors.

  5. Quantum Error Correction

    DTIC Science & Technology

    2005-07-06

    Only fragments of the scanned report text survive. Subject terms: Quantum Information Science, Quantum Algorithms, Quantum Cryptography. The fragments mention the construction of many families of quantum MDS codes, a 2005 preprint on separable codes over alphabets of arbitrary size presented at the ERATO Conference on Quantum Information Science (Tokyo, Japan, 2005), and an argument invoking the Chinese remainder theorem.

  6. Dopamine reward prediction error coding.

    PubMed

    Schultz, Wolfram

    2016-03-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards-an evolutionary beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware.
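
    The prediction-error definition above maps directly onto the textbook Rescorla-Wagner/temporal-difference update used in reinforcement-learning models of dopamine; the sketch below shows that standard form, which is not specific to this review:

```python
def rescorla_wagner_update(predicted: float, received: float, alpha: float = 0.1) -> float:
    """One learning step driven by the reward prediction error
    delta = received - predicted (positive, zero, or negative)."""
    delta = received - predicted
    return predicted + alpha * delta

value = 0.0
for reward in [1.0, 1.0, 1.0, 0.0]:
    value = rescorla_wagner_update(value, reward)
    print(round(value, 3))  # the prediction drifts toward the delivered rewards
```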

  7. Dopamine reward prediction error coding

    PubMed Central

    Schultz, Wolfram

    2016-01-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards—an evolutionary beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware. PMID:27069377

  8. Error correction for IFSAR

    DOEpatents

    Doerry, Armin W.; Bickel, Douglas L.

    2002-01-01

    IFSAR images of a target scene are generated by compensating for variations in vertical separation between collection surfaces defined for each IFSAR antenna by adjusting the baseline projection during image generation. In addition, height information from all antennas is processed before processing range and azimuth information in a normal fashion to create the IFSAR image.

  9. Processor register error correction management

    SciTech Connect

    Bose, Pradip; Cher, Chen-Yong; Gupta, Meeta S.

    2016-12-27

    Processor register protection management is disclosed. In embodiments, a method of processor register protection management can include determining a sensitive logical register for executable code generated by a compiler, generating an error-correction table identifying the sensitive logical register, and storing the error-correction table in a memory accessible by a processor. The processor can be configured to generate a duplicate register of the sensitive logical register identified by the error-correction table.

  10. Measurement process error determination and control

    SciTech Connect

    Everhart, J.

    1992-01-01

    Traditional production processes have required repeated inspection activities to assure product quality. A typical production process follows this pattern: production makes the product; production inspects the product; Quality Control (QC) inspects the product to ensure production inspected properly; QC then inspects the product on a different gage to check the performance of the production gage; and QC often inspects on a different day to determine environmental effects. All of these costly inspection activities are due to a lack of confidence in the initial production measurement. The Process Measurement Assurance Program (PMAP) is a method of determining and controlling measurement error in design, development, and production. It is a preventive rather than an appraisal method that determines, improves, and controls the error in the measurement process, including measurement equipment, environment, procedure, and personnel. PMAP expands the concept of the Measurement Assurance Program developed in the 1960s by the National Bureau of Standards (NBS), today known as the National Institute of Standards and Technology (NIST). PMAP bridges the gap between the metrology laboratory and the production environment by introducing standards (or certified parts) into the production process. These certified control standards are then measured as part of the production process. A control system examines the measurement results of the control standards before, during, and after the manufacturing and measuring of the product. The results of the PMAP control charts determine the random uncertainty and the systematic error (bias from the standard) of the measurement process. The combination of these uncertainties determines the margin of error of the measurement process. The total measurement process error is determined by combining the margin of error and the uncertainty in the control standard.
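
    As a rough illustration of how the uncertainty components named above might be combined, the sketch below root-sum-squares the random uncertainty and systematic bias into a margin of error and then folds in the control-standard uncertainty. The root-sum-square rule, the coverage factor k, and all numerical values are assumptions made for illustration; the abstract does not specify the combination formula.

        import math

        random_uncertainty   = 0.8e-3  # from control charts (arbitrary units)
        systematic_bias      = 0.5e-3  # offset from the certified standard
        standard_uncertainty = 0.3e-3  # uncertainty of the control standard itself

        k = 2  # assumed coverage factor for an expanded (~95%) margin of error
        margin_of_error = k * math.hypot(random_uncertainty, systematic_bias)
        total_process_error = math.hypot(margin_of_error, standard_uncertainty)
        print(f"margin of error: {margin_of_error:.2e}")
        print(f"total measurement process error: {total_process_error:.2e}")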

  11. Measurement process error determination and control

    SciTech Connect

    Everhart, J.

    1992-11-01

    Traditional production processes have required repeated inspection activities to assure product quality. A typical production process follows this pattern: production makes the product; production inspects the product; Quality Control (QC) inspects the product to ensure production inspected properly; QC then inspects the product on a different gage to check the performance of the production gage; and QC often inspects on a different day to determine environmental effects. All of these costly inspection activities are due to a lack of confidence in the initial production measurement. The Process Measurement Assurance Program (PMAP) is a method of determining and controlling measurement error in design, development, and production. It is a preventive rather than an appraisal method that determines, improves, and controls the error in the measurement process, including measurement equipment, environment, procedure, and personnel. PMAP expands the concept of the Measurement Assurance Program developed in the 1960s by the National Bureau of Standards (NBS), today known as the National Institute of Standards and Technology (NIST). PMAP bridges the gap between the metrology laboratory and the production environment by introducing standards (or certified parts) into the production process. These certified control standards are then measured as part of the production process. A control system examines the measurement results of the control standards before, during, and after the manufacturing and measuring of the product. The results of the PMAP control charts determine the random uncertainty and the systematic error (bias from the standard) of the measurement process. The combination of these uncertainties determines the margin of error of the measurement process. The total measurement process error is determined by combining the margin of error and the uncertainty in the control standard.

  12. Quality assessment of speckle patterns for DIC by consideration of both systematic errors and random errors

    NASA Astrophysics Data System (ADS)

    Su, Yong; Zhang, Qingchuan; Xu, Xiaohai; Gao, Zeren

    2016-11-01

    The performance of digital image correlation (DIC) is significantly influenced by the quality of the speckle patterns. It is therefore crucial to have a valid and practical method for assessing speckle pattern quality. However, existing assessment methods either lack a solid theoretical foundation or fail to consider the errors due to interpolation. In this work, we propose to assess the quality of speckle patterns by estimating the root mean square error (RMSE) of DIC, which is the square root of the sum of the squared systematic error and the squared random error. Two performance evaluation parameters, the maximum and the quadratic mean of the RMSE, are proposed to characterize the total error. An efficient algorithm is developed to estimate these parameters, and its correctness is verified by numerical experiments on both one-dimensional signals and actual speckle images. The influences of the correlation criterion, shape function order, and sub-pixel registration algorithm are briefly discussed. Compared to existing methods, the method presented in this paper is more valid because it considers both measurement accuracy and precision.
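
    A minimal sketch of the two evaluation parameters defined above, assuming synthetic per-subset systematic and random errors (the paper's actual estimation algorithm is not reproduced here):

        import numpy as np

        rng = np.random.default_rng(0)
        systematic = rng.normal(0.0, 0.01, size=1000)  # per-subset bias (pixels), synthetic
        random_err = rng.normal(0.0, 0.02, size=1000)  # per-subset random error (pixels), synthetic

        rmse = np.sqrt(systematic**2 + random_err**2)  # total error per subset
        max_rmse = rmse.max()                          # worst-case parameter
        quad_mean_rmse = np.sqrt(np.mean(rmse**2))     # quadratic-mean parameter
        print(max_rmse, quad_mean_rmse)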

  13. Error compensation in a pointing system based on Risley prisms.

    PubMed

    Bravo-Medina, Beethoven; Strojnik, Marija; Garcia-Torales, Guillermo; Torres-Ortega, Hector; Estrada-Marmolejo, Ruben; Beltrán-González, Anuar; Flores, Jorge L

    2017-03-10

    Risley prisms are widely used for beam pointing in several optical systems. An exact closed-form solution of the inverse problem does not exist, so it must be solved numerically. However, the errors introduced by misalignment are usually greater than the approximation errors. We present a novel method to compensate for alignment errors in pointing systems based on Risley prisms. The prism model we use is based on the paraxial approximation with an additional vector that compensates for typical alignment errors. Simulation and experimental results show that an improvement in pointing accuracy is achievable even in comparison with exact ray-tracing methods.
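
    The paraxial (thin-prism) model mentioned above can be sketched as follows: each prism deviates the beam by roughly (n - 1) times its wedge angle in the direction set by its rotation, and a fixed vector absorbs residual alignment error. The refractive index, wedge angle, and error vector below are illustrative assumptions, not values from the paper.

        import numpy as np

        n, alpha = 1.517, np.deg2rad(2.0)   # assumed refractive index and wedge angle
        delta = (n - 1.0) * alpha           # small-angle deviation of a single prism

        def pointing(theta1, theta2, err=np.zeros(2)):
            """Small-angle deflection (x, y) for prism rotation angles theta1, theta2."""
            d1 = delta * np.array([np.cos(theta1), np.sin(theta1)])
            d2 = delta * np.array([np.cos(theta2), np.sin(theta2)])
            return d1 + d2 + err            # err: calibrated alignment-compensation vector

        print(pointing(np.deg2rad(30), np.deg2rad(120)))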

  14. A pharmacometric case study regarding the sensitivity of structural model parameter estimation to error in patient reported dosing times.

    PubMed

    Knights, Jonathan; Rohatagi, Shashank

    2015-12-01

    Although there is a body of literature focused on minimizing the effect of dosing inaccuracies on pharmacokinetic (PK) parameter estimation, most of the work centers on missing doses. No attempt has been made to specifically characterize the effect of error in reported dosing times. Additionally, existing work has largely dealt with cases in which the compound of interest is dosed at an interval no less than its terminal half-life. This work provides a case study investigating how error in patient reported dosing times might affect the accuracy of structural model parameter estimation under sparse sampling conditions when the dosing interval is less than the terminal half-life of the compound, and the underlying kinetics are monoexponential. Additional effects due to noncompliance with dosing events are not explored and it is assumed that the structural model and reasonable initial estimates of the model parameters are known. Under the conditions of our simulations, with structural model CV % ranging from ~20 to 60 %, parameter estimation inaccuracy derived from error in reported dosing times was largely controlled around 10 % on average. Given that no observed dosing was included in the design and sparse sampling was utilized, we believe these error results represent a practical ceiling given the variability and parameter estimates for the one-compartment model. The findings suggest additional investigations may be of interest and are noteworthy given the inability of current PK software platforms to accommodate error in dosing times.
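
    A rough sketch of the scenario described above (one-compartment, monoexponential kinetics dosed at an interval shorter than the half-life, with error in the reported dosing times) is given below. All parameter values, the IV-bolus superposition model, and the sampling times are illustrative assumptions, not the study's design.

        import numpy as np

        dose, V, k = 100.0, 50.0, 0.05         # mg, L, 1/h (half-life ~13.9 h > 12 h interval)
        true_times = np.arange(0, 96, 12.0)    # dosing every 12 h
        rng = np.random.default_rng(1)
        reported_times = true_times + rng.normal(0, 1.0, true_times.size)  # ~1 h reporting error

        def conc(t, dose_times):
            dt = t[:, None] - dose_times[None, :]
            return np.where(dt >= 0, (dose / V) * np.exp(-k * dt), 0.0).sum(axis=1)

        samples = np.array([10.0, 47.0, 95.0])  # sparse sampling times (h)
        print(conc(samples, true_times))        # concentrations at the true dose times
        print(conc(samples, reported_times))    # what a model fit to reported times would see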

  15. Compensating For GPS Ephemeris Error

    NASA Technical Reports Server (NTRS)

    Wu, Jiun-Tsong

    1992-01-01

    Method of computing position of user station receiving signals from Global Positioning System (GPS) of navigational satellites compensates for most of GPS ephemeris error. Present method enables user station to reduce error in its computed position substantially. User station must have access to two or more reference stations at precisely known positions several hundred kilometers apart and must be in neighborhood of reference stations. Method based on fact that when GPS data are used to compute baseline between reference station and user station, vector error in computed baseline is proportional to ephemeris error and to length of baseline.
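
    A back-of-envelope sketch of the stated proportionality follows; the inverse scaling with satellite range and all numerical values are illustrative assumptions, not figures from the brief.

        ephemeris_error_m  = 5.0     # assumed ephemeris error along a worst-case direction
        satellite_range_m  = 2.0e7   # approximate GPS satellite-to-user range
        baseline_m         = 3.0e5   # a few hundred kilometers between stations

        baseline_error_m = baseline_m * ephemeris_error_m / satellite_range_m
        print(f"approximate baseline error: {baseline_error_m * 100:.1f} cm")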

  16. A theory of human error

    NASA Technical Reports Server (NTRS)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1980-01-01

    Human error, a significant contributing factor in a very high proportion of civil transport, general aviation, and rotorcraft accidents, is investigated. Correction of the sources of human error requires that one attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks relevant to aviation operations is presented. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing the propagation of human error.

  17. Confidence limits and their errors

    SciTech Connect

    Rajendran Raja

    2002-03-22

    Confidence limits are commonplace in physics analysis. Great care must be taken in their calculation and use, especially in cases of limited statistics. We introduce the concept of statistical errors of confidence limits and argue that not only should limits be calculated but also their errors, in order to represent the results of the analysis to the fullest. We show that comparison of two different limits from two different experiments becomes easier when their errors are also quoted. Use of errors of confidence limits will lead to abatement of the debate on which method is best suited to calculate confidence limits.

  18. Measurement Error and Equating Error in Power Analysis

    ERIC Educational Resources Information Center

    Phillips, Gary W.; Jiang, Tao

    2016-01-01

    Power analysis is a fundamental prerequisite for conducting scientific research. Without power analysis the researcher has no way of knowing whether the sample size is large enough to detect the effect he or she is looking for. This paper demonstrates how psychometric factors such as measurement error and equating error affect the power of…

  19. Correcting numerical integration errors caused by small aliasing errors

    SciTech Connect

    Smallwood, D.O.

    1997-11-01

    Small sampling errors can have a large effect on numerically integrated waveforms. An example is the integration of acceleration to compute velocity and displacement waveforms. These large integration errors complicate checking the suitability of the acceleration waveform for reproduction on shakers. For waveforms typically used for shaker reproduction, the errors become significant when the frequency content of the waveform spans a large frequency range. It is shown that these errors are essentially independent of the numerical integration method used, and are caused by small aliasing errors from the frequency components near the Nyquist frequency. A method to repair the integrated waveforms is presented. The method involves using a model of the acceleration error, and fitting this model to the acceleration, velocity, and displacement waveforms to force the waveforms to fit the assumed initial and final values. The correction is then subtracted from the acceleration before integration. The method is effective where the errors are isolated to a small section of the time history. It is shown that the common method to repair these errors using a high pass filter is sometimes ineffective for this class of problem.
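
    The sketch below illustrates the general idea of repairing integrated waveforms, not the paper's specific error model: a small constant-plus-ramp correction is subtracted from a toy acceleration record so that the integrated velocity and displacement return to assumed zero end values.

        import numpy as np

        fs = 2048.0
        t = np.arange(0.0, 2.0, 1.0 / fs)
        accel = np.sin(2 * np.pi * 5 * t) * np.exp(-t) + 1e-3  # toy record with a small error

        def integrate(x, dt):
            # cumulative trapezoidal integration starting from zero
            return np.concatenate(([0.0], np.cumsum((x[1:] + x[:-1]) * 0.5 * dt)))

        dt = 1.0 / fs
        vel = integrate(accel, dt)
        disp = integrate(vel, dt)

        # Choose a correction a_c(t) = c0 + c1*t whose single and double integrals
        # cancel the end-point errors in velocity and displacement.
        T = t[-1]
        A = np.array([[T, T**2 / 2], [T**2 / 2, T**3 / 6]])
        c0, c1 = np.linalg.solve(A, [vel[-1], disp[-1]])
        accel_fixed = accel - (c0 + c1 * t)
        vel_fixed = integrate(accel_fixed, dt)
        disp_fixed = integrate(vel_fixed, dt)
        print(vel_fixed[-1], disp_fixed[-1])   # both close to zero after correction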

  20. Anxiety and Error Monitoring: Increased Error Sensitivity or Altered Expectations?

    ERIC Educational Resources Information Center

    Compton, Rebecca J.; Carp, Joshua; Chaddock, Laura; Fineman, Stephanie L.; Quandt, Lorna C.; Ratliff, Jeffrey B.

    2007-01-01

    This study tested the prediction that the error-related negativity (ERN), a physiological measure of error monitoring, would be enhanced in anxious individuals, particularly in conditions with threatening cues. Participants made gender judgments about faces whose expressions were either happy, angry, or neutral. Replicating prior studies, midline…

  1. Error studies for SNS Linac. Part 1: Transverse errors

    SciTech Connect

    Crandall, K.R.

    1998-12-31

    The SNS linac consists of a radio-frequency quadrupole (RFQ), a drift-tube linac (DTL), a coupled-cavity drift-tube linac (CCDTL) and a coupled-cavity linac (CCL). The RFQ and DTL are operated at 402.5 MHz; the CCDTL and CCL are operated at 805 MHz. Between the RFQ and DTL is a medium-energy beam-transport system (MEBT). This error study is concerned with the DTL, CCDTL and CCL, and each will be analyzed separately. In fact, the CCL is divided into two sections, and each of these will be analyzed separately. The types of errors considered here are those that affect the transverse characteristics of the beam. The errors that cause the beam center to be displaced from the linac axis are quad displacements and quad tilts. The errors that cause mismatches are quad gradient errors and quad rotations (roll).

  2. Peeling Away Timing Error in NetFlow Data

    NASA Astrophysics Data System (ADS)

    Trammell, Brian; Tellenbach, Bernhard; Schatzmann, Dominik; Burkhart, Martin

    In this paper, we characterize, quantify, and correct timing errors introduced into network flow data by collection and export via Cisco NetFlow version 9. We find that while some of these sources of error (clock skew, export delay) are generally implementation-dependent and known in the literature, there is an additional cyclic error of up to one second that is inherent to the design of the export protocol. We present a method for correcting this cyclic error in the presence of clock skew and export delay. In an evaluation using traffic with known timing collected from a national-scale network, we show that this method can successfully correct the cyclic error. However, there can also be other implementation-specific errors for which insufficient information remains for correction. On the routers we have deployed in our network, this limits the accuracy to about 70ms, reinforcing the point that implementation matters when conducting research on network measurement data.

  3. Disarming smiles: irrelevant happy faces slow post-error responses.

    PubMed

    Gupta, Rashmi; Deák, Gedeon O

    2015-11-01

    When we make errors, we tend to experience a negative emotional state. In addition, if our errors are witnessed by other people, we might expect those observers to respond negatively. However, little is known about how implicit social feedback like facial expressions influences error processing. We explored this using the cognitive control phenomenon of post-error slowing: the tendency to slow the response immediately following an error. Adult participants performed a difficult perceptual task: estimating which of two lines (horizontal or vertical) was longer. The background showed an irrelevant distractor face with a happy, sad, or neutral expression. Participants slowed after errors only when the subsequent distractor face was happy, but not when the subsequent distractor was sad or neutral nor when a happy face followed a correct response. This suggests that information about others' affect, even non-interactive, task-irrelevant information, has performance- and valence-dependent effects on adaptive cognitive control.

  4. Error begat error: design error analysis and prevention in social infrastructure projects.

    PubMed

    Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M

    2012-09-01

    Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research addressing error causation in construction projects, design errors remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospitals, education, and law-and-order buildings). A systemic model of error causation is proposed and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in combination to prevent design errors from occurring and so ensure that safety and project performance are improved.

  5. MEMS IMU Error Mitigation Using Rotation Modulation Technique.

    PubMed

    Du, Shuang; Sun, Wei; Gao, Yang

    2016-11-29

    Micro-electro-mechanical-systems (MEMS) inertial measurement unit (IMU) outputs are corrupted by significant sensor errors. The navigation errors of a MEMS-based inertial navigation system therefore accumulate very quickly over time. This requires aiding from other sensors such as Global Navigation Satellite Systems (GNSS). However, a significant challenge remains during GNSS outages, which typically occur in urban canyons. This paper proposes a rotary inertial navigation system (INS) to mitigate navigation errors caused by MEMS inertial sensor errors when external aiding information is not available. A rotary INS is an inertial navigator in which the IMU is installed on a rotation platform. Application of proper rotation schemes can effectively cancel and reduce sensor errors. A rotary INS has the potential to significantly increase the time period over which an INS can bridge GNSS outages, making it possible for a MEMS IMU to maintain autonomous navigation performance for longer when there is no external aiding. In this research, several IMU rotation schemes (rotation about the X-, Y- and Z-axes) are analyzed to mitigate the navigation errors caused by MEMS IMU sensor errors. As the IMU rotation induces additional sensor errors, a calibration process is proposed to remove the induced errors. Tests are further conducted with two MEMS IMUs installed on a tri-axial rotation table to verify the error mitigation by IMU rotations.
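
    To illustrate the rotation-modulation idea described above, the toy sketch below resolves a constant sensor-frame bias into the navigation frame while the IMU rotates about the Z-axis; the horizontal bias components average toward zero over whole rotations. The bias values and rotation rate are illustrative assumptions, not parameters from the paper.

        import numpy as np

        bias_sensor = np.array([0.01, -0.005, 0.002])  # constant bias in the sensor frame
        omega = np.deg2rad(6.0)                        # assumed rotation rate about Z (rad/s)
        t = np.arange(0.0, 600.0, 0.1)                 # ten full rotations

        def Rz(a):
            return np.array([[np.cos(a), -np.sin(a), 0.0],
                             [np.sin(a),  np.cos(a), 0.0],
                             [0.0,        0.0,       1.0]])

        bias_nav = np.array([Rz(omega * ti) @ bias_sensor for ti in t])
        print("mean navigation-frame bias:", bias_nav.mean(axis=0))
        # The X and Y components average near zero; the Z component is not modulated,
        # which is one reason multi-axis rotation schemes are compared in the paper.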

  6. MEMS IMU Error Mitigation Using Rotation Modulation Technique

    PubMed Central

    Du, Shuang; Sun, Wei; Gao, Yang

    2016-01-01

    Micro-electro-mechanical-systems (MEMS) inertial measurement unit (IMU) outputs are corrupted by significant sensor errors. The navigation errors of a MEMS-based inertial navigation system therefore accumulate very quickly over time. This requires aiding from other sensors such as Global Navigation Satellite Systems (GNSS). However, a significant challenge remains during GNSS outages, which typically occur in urban canyons. This paper proposes a rotary inertial navigation system (INS) to mitigate navigation errors caused by MEMS inertial sensor errors when external aiding information is not available. A rotary INS is an inertial navigator in which the IMU is installed on a rotation platform. Application of proper rotation schemes can effectively cancel and reduce sensor errors. A rotary INS has the potential to significantly increase the time period over which an INS can bridge GNSS outages, making it possible for a MEMS IMU to maintain autonomous navigation performance for longer when there is no external aiding. In this research, several IMU rotation schemes (rotation about the X-, Y- and Z-axes) are analyzed to mitigate the navigation errors caused by MEMS IMU sensor errors. As the IMU rotation induces additional sensor errors, a calibration process is proposed to remove the induced errors. Tests are further conducted with two MEMS IMUs installed on a tri-axial rotation table to verify the error mitigation by IMU rotations. PMID:27916852

  7. Automatic pronunciation error detection in non-native speech: the case of vowel errors in Dutch.

    PubMed

    van Doremalen, Joost; Cucchiarini, Catia; Strik, Helmer

    2013-08-01

    This research is aimed at analyzing and improving automatic pronunciation error detection in a second language. Dutch vowels spoken by adult non-native learners of Dutch are used as a test case. A first study on Dutch pronunciation by L2 learners with different L1s revealed that vowel pronunciation errors are relatively frequent and often concern subtle acoustic differences between the realization and the target sound. In a second study automatic pronunciation error detection experiments were conducted to compare existing measures to a metric that takes account of the error patterns observed to capture relevant acoustic differences. The results of the two studies do indeed show that error patterns bear information that can be usefully employed in weighted automatic measures of pronunciation quality. In addition, it appears that combining such a weighted metric with existing measures improves the equal error rate by 6.1 percentage points from 0.297, for the Goodness of Pronunciation (GOP) algorithm, to 0.236.

  8. Role of memory errors in quantum repeaters

    NASA Astrophysics Data System (ADS)

    Hartmann, L.; Kraus, B.; Briegel, H.-J.; Dür, W.

    2007-03-01

    We investigate the influence of memory errors in the quantum repeater scheme for long-range quantum communication. We show that the communication distance is limited in standard operation mode due to memory errors resulting from unavoidable waiting times for classical signals. We show how to overcome these limitations by (i) improving local memory and (ii) introducing two operational modes of the quantum repeater. In both operational modes, the repeater is run blindly, i.e., without waiting for classical signals to arrive. In the first scheme, entanglement purification protocols based on one-way classical communication are used, allowing communication over arbitrary distances. However, the error thresholds for noise in local control operations are very stringent. The second scheme makes use of entanglement purification protocols with two-way classical communication and inherits the favorable error thresholds of the repeater run in standard mode. One can increase the possible communication distance by an order of magnitude with reasonable overhead in physical resources. We outline the architecture of a quantum repeater that can possibly ensure intercontinental quantum communication.

  9. Occurrence of Medication Errors and Comparison of Manual and Computerized Prescription Systems in Public Sector Hospitals in Lahore, Pakistan

    PubMed Central

    Riaz, Muhammad Kashif; Hashmi, Furqan Khurshid; Bukhari, Nadeem Irfan; Riaz, Mohammad; Hussain, Khalid

    2014-01-01

    The knowledge of medication errors is an essential prerequisite for better healthcare delivery. The present study investigated prescribing errors in prescriptions from outpatient departments (OPDs) and emergency wards of two public sector hospitals in Lahore, Pakistan. A manual prescription system was followed in Hospital A. Hospital B was running a semi-computerised prescription system in the OPD and a fully computerised prescription system in the emergency ward. A total of 510 prescriptions from both departments of these two hospitals were evaluated for patient characteristics, demographics and medication errors. The data was analysed using a chi square test for comparison of errors between both the hospitals. The medical departments in OPDs of both hospitals were the highest prescribers at 45%–60%. The age group receiving the most treatment in emergency wards of both the hospitals was 21–30 years (21%–24%). A trend of omitting patient addresses and diagnoses was observed in almost all prescriptions from both of the hospitals. Nevertheless, patient information such as name, age, gender and legibility of the prescriber’s signature were found in almost 100% of the electronic-prescriptions. In addition, no prescribing error was found pertaining to drug concentrations, quantity and rate of administration in e-prescriptions. The total prescribing errors in the OPD and emergency ward of Hospital A were found to be 44% and 60%, respectively. In hospital B, the OPD had 39% medication errors and the emergency department had 73.5% errors; this unexpected difference between the emergency ward and OPD of hospital B was mainly due to the inclusion of 69.4% omissions of route of administration in the prescriptions. The incidence of prescription overdose was approximately 7%–19% in the manual system and approximately 8% in semi and fully electronic system. The omission of information and incomplete information are contributors of prescribing errors in both manual and

  10. Occurrence of medication errors and comparison of manual and computerized prescription systems in public sector hospitals in Lahore, Pakistan.

    PubMed

    Riaz, Muhammad Kashif; Hashmi, Furqan Khurshid; Bukhari, Nadeem Irfan; Riaz, Mohammad; Hussain, Khalid

    2014-01-01

    The knowledge of medication errors is an essential prerequisite for better healthcare delivery. The present study investigated prescribing errors in prescriptions from outpatient departments (OPDs) and emergency wards of two public sector hospitals in Lahore, Pakistan. A manual prescription system was followed in Hospital A. Hospital B was running a semi-computerised prescription system in the OPD and a fully computerised prescription system in the emergency ward. A total of 510 prescriptions from both departments of these two hospitals were evaluated for patient characteristics, demographics and medication errors. The data was analysed using a chi square test for comparison of errors between both the hospitals. The medical departments in OPDs of both hospitals were the highest prescribers at 45%-60%. The age group receiving the most treatment in emergency wards of both the hospitals was 21-30 years (21%-24%). A trend of omitting patient addresses and diagnoses was observed in almost all prescriptions from both of the hospitals. Nevertheless, patient information such as name, age, gender and legibility of the prescriber's signature were found in almost 100% of the electronic-prescriptions. In addition, no prescribing error was found pertaining to drug concentrations, quantity and rate of administration in e-prescriptions. The total prescribing errors in the OPD and emergency ward of Hospital A were found to be 44% and 60%, respectively. In hospital B, the OPD had 39% medication errors and the emergency department had 73.5% errors; this unexpected difference between the emergency ward and OPD of hospital B was mainly due to the inclusion of 69.4% omissions of route of administration in the prescriptions. The incidence of prescription overdose was approximately 7%-19% in the manual system and approximately 8% in semi and fully electronic system. The omission of information and incomplete information are contributors of prescribing errors in both manual and electronic

  11. Children's Scale Errors with Tools

    ERIC Educational Resources Information Center

    Casler, Krista; Eshleman, Angelica; Greene, Kimberly; Terziyan, Treysi

    2011-01-01

    Children sometimes make "scale errors," attempting to interact with tiny object replicas as though they were full size. Here, we demonstrate that instrumental tools provide special insight into the origins of scale errors and, moreover, into the broader nature of children's purpose-guided reasoning and behavior with objects. In Study 1, 1.5- to…

  12. Dual Processing and Diagnostic Errors

    ERIC Educational Resources Information Center

    Norman, Geoff

    2009-01-01

    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical,…

  13. Explaining Errors in Children's Questions

    ERIC Educational Resources Information Center

    Rowland, Caroline F.

    2007-01-01

    The ability to explain the occurrence of errors in children's speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that,…

  14. Error Estimates for Mixed Methods.

    DTIC Science & Technology

    1979-03-01

    This paper presents abstract error estimates for mixed methods for the approximate solution of elliptic boundary value problems. These estimates are then applied to obtain quasi-optimal error estimates in the usual Sobolev norms for four examples: three mixed methods for the biharmonic problem and a mixed method for 2nd order elliptic problems. (Author)

  15. Operational Interventions to Maintenance Error

    NASA Technical Reports Server (NTRS)

    Kanki, Barbara G.; Walter, Diane; Dulchinos, VIcki

    1997-01-01

    A significant proportion of aviation accidents and incidents are known to be tied to human error. However, research of flight operational errors has shown that so-called pilot error often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the 'team' concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: 1) to develop human factors interventions which are directly supported by reliable human error data, and 2) to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.

  16. Error Correction, Revision, and Learning

    ERIC Educational Resources Information Center

    Truscott, John; Hsu, Angela Yi-ping

    2008-01-01

    Previous research has shown that corrective feedback on an assignment helps learners reduce their errors on that assignment during the revision process. Does this finding constitute evidence that learning resulted from the feedback? Differing answers play an important role in the ongoing debate over the effectiveness of error correction,…

  17. Human Error: A Concept Analysis

    NASA Technical Reports Server (NTRS)

    Hansen, Frederick D.

    2007-01-01

    Human error is the subject of research in almost every industry and profession of our times. This term is part of our daily language and intuitively understood by most people; however, it would be premature to assume that everyone's understanding of human error is the same. For example, human error is used to describe the outcome or consequence of human action, the causal factor of an accident, deliberate violations, and the actual action taken by a human being. As a result, researchers rarely agree on either a specific definition or how to prevent human error. The purpose of this article is to explore the specific concept of human error using Concept Analysis as described by Walker and Avant (1995). The concept of human error is examined as currently used in the literature of a variety of industries and professions. Defining attributes and examples of model, borderline, and contrary cases are described. The antecedents and consequences of human error are also discussed, and a definition of human error is offered.

  18. Twenty questions about student errors

    NASA Astrophysics Data System (ADS)

    Fisher, Kathleen M.; Lipson, Joseph Isaac

    Errors in science learning (errors in expression of organized, purposeful thought within the domain of science) provide a window through which glimpses of mental functioning can be obtained. Errors are valuable and normal occurrences in the process of learning science. A student can use his/her errors to develop a deeper understanding of a concept as long as the error can be recognized and appropriate, informative feedback can be obtained. A safe, non-threatening, and nonpunitive environment which encourages dialogue helps students to express their conceptions and to risk making errors. Pedagogical methods that systematically address common student errors produce significant gains in student learning. Just as the nature-nurture interaction is integral to the development of living things, so the individual-environment interaction is basic to thought processes. At a minimum, four systems interact: (1) the individual problem solver (who has a worldview, relatively stable cognitive characteristics, relatively malleable mental states and conditions, and aims or intentions), (2) the task to be performed (including the relative importance and nature of the task), (3) the knowledge domain in which the task is contained, and (4) the environment (including orienting conditions and the social and physical context). Several basic assumptions underlie research on errors and alternative conceptions. Among these are: knowledge and thought involve active, constructive processes; there are many ways to acquire, organize, store, retrieve, and think about a given concept or event; and understanding is achieved by successive approximations. Application of these ideas will require a fundamental change in how science is taught.

  19. Relationships between GPS-signal propagation errors and EISCAT observations

    NASA Astrophysics Data System (ADS)

    Jakowski, N.; Sardon, E.; Engler, E.; Jungstand, A.; Klähn, D.

    1996-12-01

    When travelling through the ionosphere, the signals of space-based radio navigation systems such as the Global Positioning System (GPS) are subject to modifications in amplitude, phase and polarization. In particular, phase changes due to refraction lead to propagation errors of up to 50 m for single-frequency GPS users. If both the L1 and the L2 frequencies transmitted by the GPS satellites are measured, first-order range error contributions of the ionosphere can be determined and removed by difference methods. The ionospheric contribution is proportional to the total electron content (TEC) along the ray path between satellite and receiver. Using about ten European GPS receiving stations of the International GPS Service for Geodynamics (IGS), the TEC over Europe is estimated within the geographic range -20° to 40°E in longitude and 32.5° to 70°N in latitude. The derived TEC maps over Europe contribute to the study of horizontal coupling and transport processes during significant ionospheric events. Due to their comprehensive information about the high-latitude ionosphere, EISCAT observations may help to study the influence of ionospheric phenomena upon propagation errors in GPS navigation systems. Since there are still some accuracy-limiting problems to be solved in TEC determination using GPS, comparison of TEC with vertical electron density profiles derived from EISCAT observations is valuable to enhance the accuracy of propagation-error estimations. This is evident both for absolute TEC calibration as well as for the conversion of ray-path-related observations to vertical TEC. The combination of EISCAT data and GPS-derived TEC data enables a better understanding of large-scale ionospheric processes. Acknowledgements. This work has been supported by the UK Particle-Physics and Astronomy Research Council. The assistance of the director and staff of the EISCAT Scientific Association, the staff of the Norsk Polarinstitutt

  20. Onorbit IMU alignment error budget

    NASA Technical Reports Server (NTRS)

    Corson, R. W.

    1980-01-01

    Errors from the Star Tracker, Crew Optical Alignment Sight (COAS), and Inertial Measurement Unit (IMU), which form a complex navigation system with a multitude of error sources, were combined. A complete list of the system errors is presented. The errors were combined in a rational way to yield an estimate of the IMU alignment accuracy for STS-1. The expected standard deviation in the IMU alignment error for STS-1 type alignments was determined to be 72 arc seconds per axis for star tracker alignments and 188 arc seconds per axis for COAS alignments. These estimates are based on current knowledge of the star tracker, COAS, IMU, and navigation base error specifications, and were partially verified by preliminary Monte Carlo analysis.
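
    A root-sum-square combination is one rational way to build such a per-axis budget; the sketch below uses that rule with placeholder component names and values, not the actual STS-1 budget entries.

        import math

        components_arcsec = {          # placeholder error sources (1-sigma, arc seconds)
            "star tracker measurement": 40.0,
            "navigation base alignment": 35.0,
            "IMU resolver / readout":    30.0,
            "thermal distortion":        25.0,
        }
        total = math.sqrt(sum(v**2 for v in components_arcsec.values()))
        print(f"combined 1-sigma alignment error per axis: {total:.0f} arcsec")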

  1. Angle interferometer cross axis errors

    SciTech Connect

    Bryan, J.B.; Carter, D.L.; Thompson, S.L.

    1994-01-01

    Angle interferometers are commonly used to measure surface plate flatness. An error can exist when the centerline of the double corner cube mirror assembly is not square to the surface plate and the guide bar for the mirror sled is curved. Typical errors can be one to two microns per meter. A similar error can exist in the calibration of rotary tables when the centerline of the double corner cube mirror assembly is not square to the axes of rotation of the angle calibrator and the calibrator axis is not parallel to the rotary table axis. Commercial double corner cube assemblies typically have non-parallelism errors of ten milliradians between their centerlines and their sides and similar values for non-squareness between their centerlines and end surfaces. The authors have developed a simple method for measuring these errors and correcting them by remachining the reference surfaces.

  2. Angle interferometer cross axis errors

    NASA Astrophysics Data System (ADS)

    Bryan, J. B.; Carter, D. L.; Thompson, S. L.

    1994-01-01

    Angle interferometers are commonly used to measure surface plate flatness. An error can exist when the centerline of the double corner cube mirror assembly is not square to the surface plate and the guide bar for the mirror sled is curved. Typical errors can be one to two microns per meter. A similar error can exist in the calibration of rotary tables when the centerline of the double corner cube mirror assembly is not square to the axes of rotation of the angle calibrator and the calibrator axis is not parallel to the rotary table axis. Commercial double corner cube assemblies typically have non-parallelism errors of ten milliradians between their centerlines and their sides and similar values for non-squareness between their centerlines and end surfaces. The authors have developed a simple method for measuring these errors and correcting them.

  3. A theory of human error

    NASA Technical Reports Server (NTRS)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1981-01-01

    Human errors tend to be treated in terms of clinical and anecdotal descriptions, from which remedial measures are difficult to derive. Correction of the sources of human error requires an attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A comprehensive analytical theory of the cause-effect relationships governing propagation of human error is indispensable to a reconstruction of the underlying and contributing causes. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation, maritime, automotive, and process control operations is highlighted. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  4. Sources of error in the retracted scientific literature.

    PubMed

    Casadevall, Arturo; Steen, R Grant; Fang, Ferric C

    2014-09-01

    Retraction of flawed articles is an important mechanism for correction of the scientific literature. We recently reported that the majority of retractions are associated with scientific misconduct. In the current study, we focused on the subset of retractions for which no misconduct was identified, in order to identify the major causes of error. Analysis of the retraction notices for 423 articles indexed in PubMed revealed that the most common causes of error-related retraction are laboratory errors, analytical errors, and irreproducible results. The most common laboratory errors are contamination and problems relating to molecular biology procedures (e.g., sequencing, cloning). Retractions due to contamination were more common in the past, whereas analytical errors are now increasing in frequency. A number of publications that have not been retracted despite being shown to contain significant errors suggest that barriers to retraction may impede correction of the literature. In particular, few cases of retraction due to cell line contamination were found despite recognition that this problem has affected numerous publications. An understanding of the errors leading to retraction can guide practices to improve laboratory research and the integrity of the scientific literature. Perhaps most important, our analysis has identified major problems in the mechanisms used to rectify the scientific literature and suggests a need for action by the scientific community to adopt protocols that ensure the integrity of the publication process.

  5. Phylogenomics of Lophotrochozoa with Consideration of Systematic Error.

    PubMed

    Kocot, Kevin M; Struck, Torsten H; Merkel, Julia; Waits, Damien S; Todt, Christiane; Brannock, Pamela M; Weese, David A; Cannon, Johanna T; Moroz, Leonid L; Lieb, Bernhard; Halanych, Kenneth M

    2017-03-01

    Phylogenomic studies have improved understanding of deep metazoan phylogeny and show promise for resolving incongruences among analyses based on limited numbers of loci. One region of the animal tree that has been especially difficult to resolve, even with phylogenomic approaches, is relationships within Lophotrochozoa (the animal clade that includes molluscs, annelids, and flatworms among others). Lack of resolution in phylogenomic analyses could be due to insufficient phylogenetic signal, limitations in taxon and/or gene sampling, or systematic error. Here, we investigated why lophotrochozoan phylogeny has been such a difficult question to answer by identifying and reducing sources of systematic error. We supplemented existing data with 32 new transcriptomes spanning the diversity of Lophotrochozoa and constructed a new set of Lophotrochozoa-specific core orthologs. Of these, 638 orthologous groups (OGs) passed strict screening for paralogy using a tree-based approach. In order to reduce possible sources of systematic error, we calculated branch-length heterogeneity, evolutionary rate, percent missing data, compositional bias, and saturation for each OG and analyzed increasingly stricter subsets of only the most stringent (best) OGs for these five variables. Principal component analysis of the values for each factor examined for each OG revealed that compositional heterogeneity and average patristic distance contributed most to the variance observed along the first principal component while branch-length heterogeneity and, to a lesser extent, saturation contributed most to the variance observed along the second. Missing data did not strongly contribute to either. Additional sensitivity analyses examined effects of removing taxa with heterogeneous branch lengths, large amounts of missing data, and compositional heterogeneity. Although our analyses do not unambiguously resolve lophotrochozoan phylogeny, we advance the field by reducing the list of viable hypotheses
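
    The principal component step described above can be sketched as follows, with each orthologous group (OG) represented by the five screening metrics; the values here are random stand-ins rather than the study's data, and the PCA is computed with a plain SVD.

        import numpy as np

        rng = np.random.default_rng(42)
        metrics = ["branch-length heterogeneity", "evolutionary rate",
                   "percent missing data", "compositional bias", "saturation"]
        X = rng.normal(size=(638, 5))               # 638 OGs x 5 metrics (synthetic)

        Xc = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each metric
        U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
        explained = S**2 / np.sum(S**2)
        for pc in range(2):
            print(f"PC{pc + 1} ({explained[pc]:.1%} of variance)")
            for name, loading in zip(metrics, Vt[pc]):
                print(f"  {name:30s} {loading:+.2f}")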

  6. Concurrent remote entanglement with quantum error correction against photon losses

    NASA Astrophysics Data System (ADS)

    Roy, Ananda; Stone, A. Douglas; Jiang, Liang

    2016-09-01

    Remote entanglement of distant, noninteracting quantum entities is a key primitive for quantum information processing. We present a protocol to remotely entangle two stationary qubits by first entangling them with propagating ancilla qubits and then performing a joint two-qubit measurement on the ancillas. Subsequently, single-qubit measurements are performed on each of the ancillas. We describe two continuous variable implementations of the protocol using propagating microwave modes. The first implementation uses propagating Schrödinger cat states as the flying ancilla qubits, a joint-photon-number-modulo-2 measurement of the propagating modes for the two-qubit measurement, and homodyne detections as the final single-qubit measurements. The presence of inefficiencies in realistic quantum systems limits the success rate of generating high fidelity Bell states. This motivates us to propose a second continuous variable implementation, where we use quantum error correction to suppress the decoherence due to photon loss to first order. To that end, we encode the ancilla qubits in superpositions of Schrödinger cat states of a given photon-number parity, use a joint-photon-number-modulo-4 measurement as the two-qubit measurement, and homodyne detections as the final single-qubit measurements. We demonstrate the resilience of our quantum-error-correcting remote entanglement scheme to imperfections. Further, we describe a modification of our error-correcting scheme by incorporating additional individual photon-number-modulo-2 measurements of the ancilla modes to improve the success rate of generating high-fidelity Bell states. Our protocols can be straightforwardly implemented in state-of-the-art superconducting circuit-QED systems.

  7. Accurate identification and compensation of geometric errors of 5-axis CNC machine tools using double ball bar

    NASA Astrophysics Data System (ADS)

    Lasemi, Ali; Xue, Deyi; Gu, Peihua

    2016-05-01

    Five-axis CNC machine tools are widely used in manufacturing of parts with free-form surfaces. Geometric errors of machine tools have significant effects on the quality of manufactured parts. This research focuses on development of a new method to accurately identify geometric errors of 5-axis CNC machines, especially the errors due to rotary axes, using the magnetic double ball bar. A theoretical model for identification of geometric errors is provided. In this model, both position-independent errors and position-dependent errors are considered as the error sources. This model is simplified by identification and removal of the correlated and insignificant error sources of the machine. Insignificant error sources are identified using the sensitivity analysis technique. Simulation results reveal that the simplified error identification model can result in more accurate estimations of the error parameters. Experiments on a 5-axis CNC machine tool also demonstrate significant reduction in the volumetric error after error compensation.

  8. Influence of satellite geometry, range, clock, and altimeter errors on two-satellite GPS navigation

    NASA Astrophysics Data System (ADS)

    Bridges, Philip D.

    Flight tests were conducted at Yuma Proving Grounds, Yuma, AZ, to determine the performance of a navigation system capable of using only two GPS satellites. The effects of satellite geometry, range error, and altimeter error on the horizontal position solution were analyzed for time- and altitude-aided GPS navigation (two satellites + altimeter + clock). The east and north position errors were expressed as functions of satellite range error, altimeter error, and the east and north Dilution of Precision. The equations for the Dilution of Precision were derived as functions of satellite azimuth and elevation angles for the two-satellite case. The expressions for the position error were then used to analyze the flight test data. The results showed the correlation between satellite geometry and position error, the increase in range error due to clock drift, and the impact of range and altimeter error on the east and north position error.

  9. Evaluating concentration estimation errors in ELISA microarray experiments

    SciTech Connect

    Daly, Don S.; White, Amanda M.; Varnum, Susan M.; Anderson, Kevin K.; Zangar, Richard C.

    2005-01-26

    Enzyme-linked immunosorbent assay (ELISA) is a standard immunoassay to predict a protein concentration in a sample. Deploying ELISA in a microarray format permits simultaneous prediction of the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Evaluating prediction error is critical to interpreting biological significance and improving the ELISA microarray process. Evaluating prediction error must be automated to realize a reliable high-throughput ELISA microarray system. Methods: In this paper, we present a statistical method based on propagation of error to evaluate prediction errors in the ELISA microarray process. Although propagation of error is central to this method, it is effective only when comparable data are available. Therefore, we briefly discuss the roles of experimental design, data screening, normalization and statistical diagnostics when evaluating ELISA microarray prediction errors. We use an ELISA microarray investigation of breast cancer biomarkers to illustrate the evaluation of prediction errors. The illustration begins with a description of the design and resulting data, followed by a brief discussion of data screening and normalization. In our illustration, we fit a standard curve to the screened and normalized data, review the modeling diagnostics, and apply propagation of error.
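
    A simplified propagation-of-error sketch for a calibration curve is shown below. A straight-line standard curve and the delta method are simplifying assumptions (real ELISA standard curves are usually nonlinear), and all numbers are synthetic.

        import numpy as np

        rng = np.random.default_rng(7)
        conc = np.array([0.5, 1, 2, 4, 8, 16, 32], dtype=float)     # standard concentrations
        signal = 2.0 + 0.9 * conc + rng.normal(0, 0.3, conc.size)   # measured intensities

        # Least-squares fit of y = a + b*x and the parameter covariance matrix.
        A = np.vstack([np.ones_like(conc), conc]).T
        coef, res, *_ = np.linalg.lstsq(A, signal, rcond=None)
        a, b = coef
        sigma2 = res[0] / (conc.size - 2)
        cov = sigma2 * np.linalg.inv(A.T @ A)

        # Predict an unknown concentration and propagate error with the delta method.
        y0, sigma_y0 = 10.0, 0.3
        x_hat = (y0 - a) / b
        grad = np.array([-1.0 / b, -(y0 - a) / b**2])    # d x_hat / d(a, b)
        var_x = grad @ cov @ grad + (sigma_y0 / b) ** 2  # curve-fit plus sample-signal error
        print(f"predicted concentration: {x_hat:.2f} +/- {np.sqrt(var_x):.2f}")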

  10. Error analysis of large aperture static interference imaging spectrometer

    NASA Astrophysics Data System (ADS)

    Li, Fan; Zhang, Guo

    2015-12-01

    The Large Aperture Static Interference Imaging Spectrometer (LASIS) is a new type of spectrometer with a light structure, high spectral linearity, high luminous flux, and a wide spectral range. It overcomes the contradiction between high flux and high stability, which makes it valuable in scientific studies and applications. However, because its imaging principle differs from that of traditional imaging spectrometers, the errors in the LASIS imaging process follow different laws, and its data processing is correspondingly complicated. To improve the accuracy of spectrum detection and support quantitative analysis and monitoring of surface features, the error behavior of LASIS imaging must be understood. In this paper, LASIS errors are classified as interferogram errors, radiometric correction errors, and spectral inversion errors, and each type is analyzed. Finally, a case study of Yaogan-14 is presented, in which the interferogram error of a LASIS with combined temporal and spatial modulation is examined experimentally, together with the errors arising from radiometric correction and spectral inversion.

  11. Advancing the research agenda for diagnostic error reduction.

    PubMed

    Zwaan, Laura; Schiff, Gordon D; Singh, Hardeep

    2013-10-01

    Diagnostic errors remain an underemphasised and understudied area of patient safety research. We briefly summarise the methods that have been used to conduct research on epidemiology, contributing factors and interventions related to diagnostic error and outline directions for future research. Research methods that have studied epidemiology of diagnostic error provide some estimate on diagnostic error rates. However, there appears to be a large variability in the reported rates due to the heterogeneity of definitions and study methods used. Thus, future methods should focus on obtaining more precise estimates in different settings of care. This would lay the foundation for measuring error rates over time to evaluate improvements. Research methods have studied contributing factors for diagnostic error in both naturalistic and experimental settings. Both approaches have revealed important and complementary information. Newer conceptual models from outside healthcare are needed to advance the depth and rigour of analysis of systems and cognitive insights of causes of error. While the literature has suggested many potentially fruitful interventions for reducing diagnostic errors, most have not been systematically evaluated and/or widely implemented in practice. Research is needed to study promising intervention areas such as enhanced patient involvement in diagnosis, improving diagnosis through the use of electronic tools and identification and reduction of specific diagnostic process 'pitfalls' (eg, failure to conduct appropriate diagnostic evaluation of a breast lump after a 'normal' mammogram). The last decade of research on diagnostic error has made promising steps and laid a foundation for more rigorous methods to advance the field.

  12. Human error mitigation initiative (HEMI) : summary report.

    SciTech Connect

    Stevens, Susan M.; Ramos, M. Victoria; Wenner, Caren A.; Brannon, Nathan Gregory

    2004-11-01

    Despite continuing efforts to apply existing hazard analysis methods and comply with requirements, human errors persist across the nuclear weapons complex. Due to a number of factors, current retroactive and proactive methods to understand and minimize human error are highly subjective, inconsistent in numerous dimensions, and are cumbersome to characterize as thorough. An alternative and proposed method begins with leveraging historical data to understand what the systemic issues are and where resources need to be brought to bear proactively to minimize the risk of future occurrences. An illustrative analysis was performed using existing incident databases specific to Pantex weapons operations indicating systemic issues associated with operating procedures that undergo notably less development rigor relative to other task elements such as tooling and process flow. Future recommended steps to improve the objectivity, consistency, and thoroughness of hazard analysis and mitigation were delineated.

  13. Errors, error detection, error correction and hippocampal-region damage: data and theories.

    PubMed

    MacKay, Donald G; Johnson, Laura W

    2013-11-01

    This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future tests.

  14. Observations of TOPEX/Poseidon Orbit Errors Due to Gravitational and Tidal Modeling Errors Using the Global Positioning System

    NASA Technical Reports Server (NTRS)

    Haines, B.; Christensen, E.; Guinn, J.; Norman, R.; Marshall, J.

    1995-01-01

    Satellite altimetry must measure variations in ocean topography with cm-level accuracy. The TOPEX/Poseidon mission is designed to do this by measuring the radial component of the orbit with an accuracy of 13 cm or better RMS. Recent advances, however, have improved this accuracy by about an order of magnitude.

  15. Mars gravitational field estimation error

    NASA Technical Reports Server (NTRS)

    Compton, H. R.; Daniels, E. F.

    1972-01-01

    The error covariance matrices associated with a weighted least-squares differential correction process have been analyzed for accuracy in determining the gravitational coefficients through degree and order five in the Mars gravitational potential function. The results are presented in terms of standard deviations for the assumed estimated parameters. The covariance matrices were calculated by assuming Doppler tracking data from a Mars orbiter, a priori statistics for the estimated parameters, and model error uncertainties for tracking-station locations, the Mars ephemeris, the astronomical unit, the Mars gravitational constant (G sub M), and the gravitational coefficients of degrees six and seven. Model errors were treated by using the concept of consider parameters.

  16. Stochastic Models of Human Errors

    NASA Technical Reports Server (NTRS)

    Elshamy, Maged; Elliott, Dawn M. (Technical Monitor)

    2002-01-01

    Humans play an important role in the overall reliability of engineering systems. More often than not, accidents and system failures are traced to human errors. Therefore, in order to have a meaningful system risk analysis, the reliability of the human element must be taken into consideration. Describing the human error process by mathematical models is a key to analyzing contributing factors. Therefore, the objective of this research effort is to establish stochastic models, substantiated by a sound theoretical foundation, to address the occurrence of human errors in the processing of the Space Shuttle.

  17. Error bounds in cascading regressions

    USGS Publications Warehouse

    Karlinger, M.R.; Troutman, B.M.

    1985-01-01

    Cascading regressions is a technique for predicting a value of a dependent variable when no paired measurements exist to perform a standard regression analysis. Biases in coefficients of a cascaded-regression line as well as error variance of points about the line are functions of the correlation coefficient between dependent and independent variables. Although this correlation cannot be computed because of the lack of paired data, bounds can be placed on errors through the required properties of the correlation coefficient. The potential mean-squared error of a cascaded-regression prediction can be large, as illustrated through an example using geomorphologic data. © 1985 Plenum Publishing Corporation.
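
    The situation described above can be illustrated with a small numerical sketch. The following Python fragment is a hedged, illustrative example rather than the authors' procedure: it fits y as a function of x from one data set and z as a function of y from another, then composes the two fits to predict z from x, the case in which no paired (x, z) measurements exist. All variable names and data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two separate data sets; no (x, z) pairs are ever observed.
n = 200
x = rng.normal(size=n)
y = 0.8 * x + rng.normal(scale=0.5, size=n)        # data set 1: (x, y) pairs
y2 = rng.normal(size=n)
z = 1.5 * y2 + rng.normal(scale=0.7, size=n)       # data set 2: (y, z) pairs

# Fit the two simple regressions independently (ordinary least squares).
b_yx = np.polyfit(x, y, 1)      # y ~ b_yx[0] * x + b_yx[1]
b_zy = np.polyfit(y2, z, 1)     # z ~ b_zy[0] * y + b_zy[1]

def cascaded_predict(x_new):
    """Predict z from x by passing x through both fitted lines."""
    y_hat = np.polyval(b_yx, x_new)
    return np.polyval(b_zy, y_hat)

# The cascaded slope is the product of the two slopes. How biased it is relative
# to a (hypothetical) direct regression of z on x depends on the unobservable
# correlation between x and z, which is why only bounds on the error can be stated.
print("cascaded slope:", b_yx[0] * b_zy[0])
print("prediction at x = 1.0:", cascaded_predict(1.0))
```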

  18. Exploring Discretization Error in Simulation-Based Aerodynamic Databases

    NASA Technical Reports Server (NTRS)

    Aftosmis, Michael J.; Nemec, Marian

    2010-01-01

    This work examines the level of discretization error in simulation-based aerodynamic databases and introduces strategies for error control. Simulations are performed using a parallel, multi-level Euler solver on embedded-boundary Cartesian meshes. Discretization errors in user-selected outputs are estimated using the method of adjoint-weighted residuals, and we use adaptive mesh refinement to reduce these errors to specified tolerances. Using this framework, we examine the behavior of discretization error throughout a token database computed for a NACA 0012 airfoil consisting of 120 cases. We compare the cost and accuracy of two approaches for aerodynamic database generation. In the first approach, mesh adaptation is used to compute all cases in the database to a prescribed level of accuracy. The second approach conducts all simulations using the same computational mesh without adaptation. We quantitatively assess the error landscape and computational costs in both databases. This investigation highlights sensitivities of the database under a variety of conditions. The presence of transonic shocks or the stiffness in the governing equations near the incompressible limit is shown to dramatically increase discretization error, requiring additional mesh resolution to control. Results show that such pathologies lead to error levels that vary by over a factor of 40 when using a fixed mesh throughout the database. Alternatively, controlling this sensitivity through mesh adaptation leads to mesh sizes which span two orders of magnitude. We propose strategies to minimize simulation cost in sensitive regions and discuss the role of error estimation in database quality.

  19. Error Analysis and Propagation in Metabolomics Data Analysis.

    PubMed

    Moseley, Hunter N B

    2013-01-01

    Error analysis plays a fundamental role in describing the uncertainty in experimental results. It has several fundamental uses in metabolomics including experimental design, quality control of experiments, the selection of appropriate statistical methods, and the determination of uncertainty in results. Furthermore, the importance of error analysis has grown with the increasing number, complexity, and heterogeneity of measurements characteristic of 'omics research. The increase in data complexity is particularly problematic for metabolomics, which has more heterogeneity than other omics technologies due to the much wider range of molecular entities detected and measured. This review introduces the fundamental concepts of error analysis as they apply to a wide range of metabolomics experimental designs and it discusses current methodologies for determining the propagation of uncertainty in appropriate metabolomics data analysis. These methodologies include analytical derivation and approximation techniques, Monte Carlo error analysis, and error analysis in metabolic inverse problems. Current limitations of each methodology with respect to metabolomics data analysis are also discussed.
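
    As a concrete illustration of the Monte Carlo approach mentioned above, the short Python sketch below propagates assumed measurement uncertainties through a simple derived quantity (a log-ratio of two metabolite intensities). The numbers and the choice of derived quantity are hypothetical and are not taken from the review.

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed measured intensities and standard uncertainties (illustrative values only).
a_mean, a_sd = 1500.0, 90.0     # metabolite A
b_mean, b_sd = 800.0, 60.0      # metabolite B

# Monte Carlo error propagation: sample the inputs, recompute the derived quantity.
n_draws = 100_000
a = rng.normal(a_mean, a_sd, n_draws)
b = rng.normal(b_mean, b_sd, n_draws)
log_ratio = np.log2(a / b)

print(f"log2(A/B) = {log_ratio.mean():.3f} +/- {log_ratio.std(ddof=1):.3f}")
```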

  20. Reflection error correction of gas turbine blade temperature

    NASA Astrophysics Data System (ADS)

    Kipngetich, Ketui Daniel; Feng, Chi; Gao, Shan

    2016-03-01

    Accurate measurement of gas turbine blade temperature is one of the greatest challenges encountered in gas turbine temperature measurements. Within an enclosed gas turbine environment with surfaces of varying temperature and low emissivities, a new challenge is introduced into the use of radiation thermometers due to the problem of reflection error. A method for correcting this error has been proposed and demonstrated in this work through computer simulation and experiment. The method assumes that the emissivities of all surfaces exchanging thermal radiation are known. Simulations were carried out considering targets with low and high emissivities of 0.3 and 0.8 respectively, while experimental measurements were carried out on blades with an emissivity of 0.76. Simulated results showed the possibility of achieving errors of less than 1%, while the experimental results corrected the error to 1.1%. It was thus concluded that the method is appropriate for correcting the reflection error commonly encountered in temperature measurement of gas turbine blades.
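
    The correction principle can be sketched with a simplified gray-body model: the measured signal is treated as radiation emitted by the blade plus radiation reflected from hotter surroundings, and the balance is inverted to recover the blade temperature. This is an illustrative simplification using the Stefan-Boltzmann law with assumed emissivities and temperatures, not the spectral method used in the paper.

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def corrected_temperature(w_measured, eps_target, t_surround):
    """Recover the target temperature from the measured total flux.

    Gray, opaque target assumed: W_meas = eps*sigma*T^4 + (1 - eps)*sigma*T_sur^4.
    """
    emitted = w_measured - (1.0 - eps_target) * SIGMA * t_surround ** 4
    return (emitted / (eps_target * SIGMA)) ** 0.25

# Illustrative numbers: blade at 1100 K, emissivity 0.76, hotter casing at 1300 K.
t_true, eps, t_sur = 1100.0, 0.76, 1300.0
w_meas = eps * SIGMA * t_true ** 4 + (1 - eps) * SIGMA * t_sur ** 4

t_apparent = (w_meas / SIGMA) ** 0.25            # naive reading that ignores reflection
t_corrected = corrected_temperature(w_meas, eps, t_sur)
print(f"apparent {t_apparent:.1f} K, corrected {t_corrected:.1f} K, true {t_true:.1f} K")
```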

  1. Error Detection, Factorization and Correction for Multi-View Scene Reconstruction from Aerial Imagery

    SciTech Connect

    Hess-Flores, Mauricio

    2011-11-10

    Scene reconstruction from video sequences has become a prominent computer vision research area in recent years, due to its large number of applications in fields such as security, robotics and virtual reality. Despite recent progress in this field, there are still a number of issues that manifest as incomplete, incorrect or computationally-expensive reconstructions. The engine behind achieving reconstruction is the matching of features between images, where common conditions such as occlusions, lighting changes and texture-less regions can all affect matching accuracy. Subsequent processes that rely on matching accuracy, such as camera parameter estimation, structure computation and non-linear parameter optimization, are also vulnerable to additional sources of error, such as degeneracies and mathematical instability. Detection and correction of errors, along with robustness in parameter solvers, are a must in order to achieve a very accurate final scene reconstruction. However, error detection is in general difficult due to the lack of ground-truth information about the given scene, such as the absolute position of scene points or GPS/IMU coordinates for the camera(s) viewing the scene. In this dissertation, methods are presented for the detection, factorization and correction of error sources present in all stages of a scene reconstruction pipeline from video, in the absence of ground-truth knowledge. Two main applications are discussed. The first set of algorithms derive total structural error measurements after an initial scene structure computation and factorize errors into those related to the underlying feature matching process and those related to camera parameter estimation. A brute-force local correction of inaccurate feature matches is presented, as well as an improved conditioning scheme for non-linear parameter optimization which applies weights on input parameters in proportion to estimated camera parameter errors. Another application is in

  2. Aging transition by random errors

    PubMed Central

    Sun, Zhongkui; Ma, Ning; Xu, Wei

    2017-01-01

    In this paper, the effects of random errors on oscillating behaviors have been studied theoretically and numerically in a prototypical coupled nonlinear oscillator. Two kinds of noise have been employed to represent the measurement errors associated with the parameter specifying the distance from a Hopf bifurcation in the Stuart-Landau model. It has been demonstrated that when the random errors are uniform random noise, the change of the noise intensity can effectively increase the robustness of the system. When the random errors are normal random noise, increasing the variance can also enhance the robustness of the system, provided that the probability of aging transition reaches a certain threshold. The opposite conclusion is obtained when the probability is less than the threshold. These findings provide an alternative candidate for controlling the critical value of the aging transition in coupled oscillator systems, which in practice are composed of active and inactive oscillators. PMID:28198430

  3. Static Detection of Disassembly Errors

    SciTech Connect

    Krishnamoorthy, Nithya; Debray, Saumya; Fligg, Alan K

    2009-10-13

    Static disassembly is a crucial first step in reverse engineering executable files, and there is a considerable body of work in reverse-engineering of binaries, as well as areas such as semantics-based security analysis, that assumes that the input executable has been correctly disassembled. However, disassembly errors, e.g., arising from binary obfuscations, can render this assumption invalid. This work describes a machine-learning-based approach, using decision trees, for statically identifying possible errors in a static disassembly; such potential errors may then be examined more closely, e.g., using dynamic analyses. Experimental results using a variety of input executables indicate that our approach performs well, correctly identifying most disassembly errors with relatively few false positives.
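
    A minimal sketch of the underlying idea, training a decision tree on per-instruction features and flagging likely disassembly errors, is given below. The features and labels are synthetic and purely illustrative; the paper's actual feature set and training data are not reproduced here.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)

# Hypothetical per-instruction features: opcode rarity rank, instruction length,
# whether control flow lands on a known instruction, and a register-use score.
n = 500
X = np.column_stack([
    rng.integers(0, 200, n),        # opcode rarity rank
    rng.integers(1, 16, n),         # instruction length in bytes
    rng.integers(0, 2, n),          # control flow reaches a valid target (0/1)
    rng.random(n),                  # register-use plausibility score
])
# Synthetic labels: rare opcodes with implausible control flow are "disassembly errors".
y = ((X[:, 0] > 150) & (X[:, 2] == 0)).astype(int)

clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
# Flagged instructions would then be examined more closely, e.g. with dynamic analysis.
```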

  4. Prospective errors determine motor learning

    PubMed Central

    Takiyama, Ken; Hirashima, Masaya; Nozaki, Daichi

    2015-01-01

    Diverse features of motor learning have been reported by numerous studies, but no single theoretical framework concurrently accounts for these features. Here, we propose a model for motor learning to explain these features in a unified way by extending a motor primitive framework. The model assumes that the recruitment pattern of motor primitives is determined by the predicted movement error of an upcoming movement (prospective error). To validate this idea, we perform a behavioural experiment to examine the model’s novel prediction: after experiencing an environment in which the movement error is more easily predictable, subsequent motor learning should become faster. The experimental results support our prediction, suggesting that the prospective error might be encoded in the motor primitives. Furthermore, we demonstrate that this model has a strong explanatory power to reproduce a wide variety of motor-learning-related phenomena that have been separately explained by different computational models. PMID:25635628

  5. Aging transition by random errors

    NASA Astrophysics Data System (ADS)

    Sun, Zhongkui; Ma, Ning; Xu, Wei

    2017-02-01

    In this paper, the effects of random errors on oscillating behaviors have been studied theoretically and numerically in a prototypical coupled nonlinear oscillator. Two kinds of noise have been employed to represent the measurement errors associated with the parameter specifying the distance from a Hopf bifurcation in the Stuart-Landau model. It has been demonstrated that when the random errors are uniform random noise, the change of the noise intensity can effectively increase the robustness of the system. When the random errors are normal random noise, increasing the variance can also enhance the robustness of the system, provided that the probability of aging transition reaches a certain threshold. The opposite conclusion is obtained when the probability is less than the threshold. These findings provide an alternative candidate for controlling the critical value of the aging transition in coupled oscillator systems, which in practice are composed of active and inactive oscillators.

  6. Error resilient video coding using virtual reference picture

    NASA Astrophysics Data System (ADS)

    Zhang, Guanjun; Stevenson, Robert L.

    2005-03-01

    Due to widely used motion-compensated prediction coding, errors propagate along the decoded video sequence and may result in severe quality degradation. Various methods have been reported to address this problem based on the common idea of diversifying prediction references. In this paper, we present an alternative way of concealing reference picture errors. A generated virtual picture is used as a reference instead of an actual sequence picture in the temporal prediction. The virtual reference picture is generated so as to filter out damaged parts of previously decoded pictures, so that the decoder can still get a clean reference picture in case of errors. Coding efficiency is affected because the virtual reference is less correlated with the currently encoded picture. Simulations on an H.264 codec have shown a quality improvement of the proposed method over intra-coded macroblock refresh. The method can be used with any motion-compensated video codec to combat channel errors.

  7. Algorithmic Error Correction of Impedance Measuring Sensors

    PubMed Central

    Starostenko, Oleg; Alarcon-Aquino, Vicente; Hernandez, Wilmar; Sergiyenko, Oleg; Tyrsa, Vira

    2009-01-01

    This paper describes novel design concepts and some advanced techniques proposed for increasing the accuracy of low-cost impedance measuring devices without reduction of operational speed. The proposed structural method for algorithmic error correction and the iterative correction method provide linearization of the transfer functions of the measuring sensor and signal conditioning converter, which contribute the principal additive and relative measurement errors. Some measuring systems have been implemented in order to estimate in practice the performance of the proposed methods. In particular, a measuring system for the analysis of C-V and G-V characteristics has been designed and constructed. It has been tested during technological process control of charge-coupled device (CCD) manufacturing. The obtained results are discussed in order to define a reasonable range of application of the methods, their utility, and performance. PMID:22303177

  8. Detecting Soft Errors in Stencil based Computations

    SciTech Connect

    Sharma, V.; Gopalkrishnan, G.; Bronevetsky, G.

    2015-05-06

    Given the growing emphasis on system resilience, it is important to develop software-level error detectors that help trap hardware-level faults with reasonable accuracy while minimizing false alarms as well as the performance overhead introduced. We present a technique that approaches this idea by taking stencil computations as our target, and synthesizing detectors based on machine learning. In particular, we employ linear regression to generate computationally inexpensive models which form the basis for error detection. Our technique has been incorporated into a new open-source library called SORREL. In addition to reporting encouraging experimental results, we demonstrate techniques that help reduce the size of training data. We also discuss the efficacy of various detectors synthesized, as well as our future plans.
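
    The core idea, fitting an inexpensive linear model that predicts each stencil point from its neighbors and flagging large residuals as possible silent data corruptions, can be sketched as follows. This is a generic illustration of the technique and not the SORREL implementation.

```python
import numpy as np

# A smooth 1-D field, e.g. one time step of a heat-equation state (illustrative data).
x = np.linspace(0.0, 1.0, 400)
u = np.sin(2 * np.pi * x)

# Train a linear detector: predict u[i] from its two neighbors.
A = np.column_stack([u[:-2], u[2:], np.ones(len(u) - 2)])
coef, *_ = np.linalg.lstsq(A, u[1:-1], rcond=None)

# Inject a bit-flip-like corruption and detect it through the prediction residual.
u_bad = u.copy()
u_bad[200] += 0.5
pred = np.column_stack([u_bad[:-2], u_bad[2:], np.ones(len(u_bad) - 2)]) @ coef
residual = np.abs(u_bad[1:-1] - pred)

# Threshold well above the clean-data residuals (near machine precision here).
threshold = max(10 * np.median(residual), 1e-6 * np.abs(u).max())
print("flagged indices:", np.where(residual > threshold)[0] + 1)
# The corrupted index (200) and its immediate neighbors are flagged for closer inspection.
```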

  9. ISA accelerometer onboard the Mercury Planetary Orbiter: error budget

    NASA Astrophysics Data System (ADS)

    Iafolla, Valerio; Lucchesi, David M.; Nozzoli, Sergio; Santoli, Francesco

    2007-03-01

    We have estimated a preliminary error budget for the Italian Spring Accelerometer (ISA) that will be allocated onboard the Mercury Planetary Orbiter (MPO) of the European Space Agency (ESA) space mission to Mercury named BepiColombo. The role of the accelerometer is to remove from the list of unknowns the non-gravitational accelerations that perturb the gravitational trajectory followed by the MPO in the strong radiation environment that characterises the orbit of Mercury around the Sun. Such a role is of fundamental importance in the context of the very ambitious goals of the Radio Science Experiments (RSE) of the BepiColombo mission. We have subdivided the errors on the accelerometer measurements into two main families: (i) the pseudo-sinusoidal errors and (ii) the random errors. The former are characterised by a periodic behaviour with the frequency of the satellite mean anomaly and its higher order harmonic components, i.e., they are deterministic errors. The latter are characterised by an unknown frequency distribution, and we assumed a noise-like spectrum for them, i.e., they are stochastic errors. Among the pseudo-sinusoidal errors, the main contribution is due to the effects of the gravity gradients and the inertial forces, while among the random-like errors the main disturbing effect is due to the MPO centre-of-mass displacements produced by the onboard High Gain Antenna (HGA) movements and by the fuel consumption and sloshing. Also subtle, and important to consider, are the random errors produced by the MPO attitude corrections necessary to guarantee the nadir pointing of the spacecraft. We have therefore formulated the ISA error budget and the requirements for the satellite in order to guarantee an orbit reconstruction for the MPO spacecraft with an along-track accuracy of about 1 m over the orbital period of the satellite around Mercury in such a way as to satisfy the RSE requirements.

  10. Model of glucose sensor error components: identification and assessment for new Dexcom G4 generation devices.

    PubMed

    Facchinetti, Andrea; Del Favero, Simone; Sparacino, Giovanni; Cobelli, Claudio

    2015-12-01

    It is clinically well-established that minimally invasive subcutaneous continuous glucose monitoring (CGM) sensors can significantly improve diabetes treatment. However, CGM readings are still not as reliable as those provided by standard fingerprick blood glucose (BG) meters. In addition to unavoidable random measurement noise, other components of sensor error are distortions due to the blood-to-interstitial glucose kinetics and systematic under-/overestimations associated with the sensor calibration process. A quantitative assessment of these components, and the ability to simulate them with precision, is of paramount importance in the design of CGM-based applications, e.g., the artificial pancreas (AP), and in their in silico testing. In the present paper, we identify and assess a model of sensor error for two sensors, i.e., the G4 Platinum (G4P) and the advanced G4 for artificial pancreas studies (G4AP), both belonging to the recently presented "fourth" generation of Dexcom CGM sensors but different in their data processing. Results are also compared with those obtained by a sensor belonging to the previous, "third," generation by the same manufacturer, the SEVEN Plus (7P). For each sensor, the error model is derived from 12-h CGM recordings of two sensors used simultaneously and BG samples collected in parallel every 15 ± 5 min. Thanks to technological innovations, G4P outperforms 7P, with an average mean absolute relative difference (MARD) of 11.1% versus 14.2%, respectively, and a lowering of about 30% in the error of each component. Thanks to more sophisticated data processing algorithms, G4AP proved more reliable than G4P, with a MARD of 10.0% and a further decrease of about 20% in the error due to blood-to-interstitial glucose kinetics.

  11. Prevention of wrong-site and wrong-patient surgical errors.

    PubMed

    2013-01-01

    Surgical errors recorded between 2002 and 2008 in a US medical liability insurance database have been analysed. Twenty-five wrong-patient procedures were recorded, resulting in 5 serious adverse events: three unnecessary prostatectomies were performed after prostate biopsy samples were mislabelled; vitrectomy was performed on the wrong patient in an ophthalmology department after confusion between two patients with identical names; and a child scheduled for adenoidectomy received a tympanic drain. There were also 107 wrong-site procedures, with one death resulting from implantation of a pleural drain on the wrong side. Another 38 patients experienced significant harm: 5 patients had surgery on the wrong vertebrae; 4 had chest tubes placed on the wrong side; 4 underwent vascular surgery at the wrong site; and 4 underwent resection of the wrong segment of the intestine. In addition, there were: 4 organ resection errors; 6 wrong-site or wrong-sided limb surgeries; 2 wrong-sided ovariectomies; 2 wrong-sided eye operations; 2 wrong-sided craniotomies; 2 wrong-sided ureteric procedures; 1 wrong-sided maxillofacial operation; and 2 radiation therapy field errors. Most errors were due to poor communication, incorrect diagnosis, or failure to implement a final set of preoperative checks. Other studies conducted in the United Kingdom and the United States have provided similar results, while data are lacking in France. The World Health Organization Surgical Safety Checklist is an effective way of preventing such errors but its adoption by healthcare professionals is variable. In practice, surgical errors involving the wrong patient or wrong body site are preventable. Final pre-operative checks must be applied methodically and systematically. This includes asking the patient to confirm his/her identity and the intended site of the operation. Healthcare staff must be aware of these measures.

  12. Correcting the optimal resampling-based error rate by estimating the error rate of wrapper algorithms.

    PubMed

    Bernau, Christoph; Augustin, Thomas; Boulesteix, Anne-Laure

    2013-09-01

    High-dimensional binary classification tasks, for example, the classification of microarray samples into normal and cancer tissues, usually involve a tuning parameter. By reporting the performance of the best tuning parameter value only, over-optimistic prediction errors are obtained. For correcting this tuning bias, we develop a new method which is based on a decomposition of the unconditional error rate involving the tuning procedure, that is, we estimate the error rate of wrapper algorithms as introduced in the context of internal cross-validation (ICV) by Varma and Simon (2006, BMC Bioinformatics 7, 91). Our subsampling-based estimator can be written as a weighted mean of the errors obtained using the different tuning parameter values, and thus can be interpreted as a smooth version of ICV, which is the standard approach for avoiding tuning bias. In contrast to ICV, our method guarantees intuitive bounds for the corrected error. Additionally, we suggest using bias correction methods also to address the conceptually similar method selection bias that results from the optimal choice of the classification method itself when evaluating several methods successively. We demonstrate the performance of our method on microarray and simulated data and compare it to ICV. This study suggests that our approach yields competitive estimates at a much lower computational price.
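
    The flavor of a subsampling-based estimator built as a weighted mean of the error rates observed for the different tuning parameter values can be sketched as follows. The specific weighting used here (the frequency with which each tuning value is selected across subsampling iterations) is an illustrative simplification, not the exact estimator derived in the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=200, n_features=50, n_informative=5, random_state=3)

tuning_grid = [0.01, 0.1, 1.0, 10.0]        # regularization strengths C
n_subsamples = 50
errors = {c: [] for c in tuning_grid}
selected = []

for b in range(n_subsamples):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=b)
    errs = {}
    for c in tuning_grid:
        model = LogisticRegression(C=c, max_iter=2000).fit(X_tr, y_tr)
        errs[c] = 1.0 - model.score(X_te, y_te)
        errors[c].append(errs[c])
    selected.append(min(errs, key=errs.get))    # tuning value that "won" this split

# Over-optimistic estimate: always report the best error per split (tuning bias).
naive = np.mean([min(errors[c][b] for c in tuning_grid) for b in range(n_subsamples)])
# Weighted-mean estimate: weight each tuning value by how often it was selected.
weights = {c: selected.count(c) / n_subsamples for c in tuning_grid}
corrected = sum(weights[c] * np.mean(errors[c]) for c in tuning_grid)

print(f"over-optimistic estimate: {naive:.3f}, weighted-mean estimate: {corrected:.3f}")
```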

  13. Empirical Error Analysis of GPS RO Atmospheric Profiles

    NASA Astrophysics Data System (ADS)

    Scherllin-Pirscher, B.; Steiner, A. K.; Foelsche, U.; Kirchengast, G.; Kuo, Y.

    2010-12-01

    In the upper troposphere and lower stratosphere (UTLS) region the radio occultation (RO) technique provides accurate profiles of atmospheric parameters. These profiles can be used in operational meteorology (i.e., numerical weather prediction), atmospheric and climate research. We present results of an empirical error analysis of GPS RO data retrieved at UCAR and at WEGC and compare data characteristics of CHAMP, GRACE-A, and Formosat-3/COSMIC. Retrieved atmospheric profiles of bending angle, refractivity, dry pressure, dry geopotential height, and dry temperature are compared to reference profiles extracted from ECMWF analysis fields. This statistical error characterization yields a combined (RO observational plus ECMWF model) error. We restrict our analysis to the years 2007 to 2009 due to known ECMWF deficiencies prior to 2007 (e.g., deficiencies in the representation of the austral polar vortex or the weak representation of tropopause height variability). The GPS RO observational error is determined by subtracting the estimated ECMWF error from the combined error in terms of variances. Our results indicate that the estimated ECMWF error and the GPS RO observational error are approximately of the same order of magnitude. Differences between different satellites are small below 35 km. The GPS RO observational error features latitudinal and seasonal variations, which are most pronounced at stratospheric altitudes at high latitudes. We present simplified models for the observational error, which depend on a few parameters only (Steiner and Kirchengast, JGR 110, D15307, 2005). These global error models are derived from fitting simple analytical functions to the GPS RO observational error. From the lower troposphere up to the tropopause, the model error decreases closely proportional to an inverse height law. Within a core "tropopause region" of the upper troposphere/lower stratosphere the model error is constant and above this region it increases exponentially with
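
    A schematic version of such a simplified observational-error model, decaying with an inverse height law in the troposphere, constant across a tropopause core region, and growing exponentially above it, is sketched below. The parameter values are placeholders rather than the fitted values from the study.

```python
import numpy as np

def ro_observational_error(z_km, s0=0.35, z_bottom=10.0, z_top=20.0, h_scale=15.0):
    """Piecewise simplified error model: ~1/z below, constant inside, exponential above.

    s0, z_bottom, z_top and h_scale are illustrative placeholders.
    """
    z = np.asarray(z_km, dtype=float)
    err = np.empty_like(z)
    below = z < z_bottom
    inside = (z >= z_bottom) & (z <= z_top)
    above = z > z_top
    err[below] = s0 * z_bottom / np.maximum(z[below], 1e-3)    # inverse height law
    err[inside] = s0                                           # constant core region
    err[above] = s0 * np.exp((z[above] - z_top) / h_scale)     # exponential increase
    return err

print(ro_observational_error([5.0, 15.0, 30.0, 40.0]))
```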

  14. Quantum error correction for beginners.

    PubMed

    Devitt, Simon J; Munro, William J; Nemoto, Kae

    2013-07-01

    Quantum error correction (QEC) and fault-tolerant quantum computation represent one of the most vital theoretical aspects of quantum information processing. It was well known from the early developments of this exciting field that the fragility of coherent quantum systems would be a catastrophic obstacle to the development of large-scale quantum computers. The introduction of quantum error correction in 1995 showed that active techniques could be employed to mitigate this fatal problem. However, quantum error correction and fault-tolerant computation is now a much larger field and many new codes, techniques, and methodologies have been developed to implement error correction for large-scale quantum algorithms. In response, we have attempted to summarize the basic aspects of quantum error correction and fault-tolerance, not as a detailed guide, but rather as a basic introduction. The development in this area has been so pronounced that many in the field of quantum information, specifically researchers who are new to quantum information or people focused on the many other important issues in quantum computation, have found it difficult to keep up with the general formalisms and methodologies employed in this area. Rather than introducing these concepts from a rigorous mathematical and computer science framework, we instead examine error correction and fault-tolerance largely through detailed examples, which are more relevant to experimentalists today and in the near future.

  15. Error image aware content restoration

    NASA Astrophysics Data System (ADS)

    Choi, Sungwoo; Lee, Moonsik; Jung, Byunghee

    2015-12-01

    As the resolution of TV has significantly increased, content consumers have become increasingly sensitive to the subtlest defects in TV content. This rising standard in quality demanded by consumers has posed a new challenge in today's context, where the tape-based process has transitioned to the file-based process: the transition necessitated digitizing old archives, a process which inevitably produces errors such as disordered pixel blocks, scattered white noise, or totally missing pixels. Unsurprisingly, detecting and fixing such errors require a substantial amount of time and human labor to meet the standard demanded by today's consumers. In this paper, we introduce a novel, automated error restoration algorithm which can be applied to different types of classic errors by utilizing adjacent images while preserving the undamaged parts of an error image as much as possible. We tested our method on error images detected by our quality check system in the KBS (Korean Broadcasting System) video archive. We are also implementing the algorithm as a plugin for a well-known NLE (non-linear editing system), which is a familiar tool for quality control agents.

  16. Dominant modes via model error

    NASA Technical Reports Server (NTRS)

    Yousuff, A.; Breida, M.

    1992-01-01

    Obtaining a reduced model of a stable mechanical system with proportional damping is considered. Such systems can be conveniently represented in modal coordinates. Two popular schemes, the modal cost analysis and the balancing method, offer simple means of identifying dominant modes for retention in the reduced model. The dominance is measured via the modal costs in the case of modal cost analysis and via the singular values of the Gramian-product in the case of balancing. Though these measures do not exactly reflect the more appropriate model error, which is the H2 norm of the output-error between the full and the reduced models, they do lead to simple computations. Normally, the model error is computed after the reduced model is obtained, since it is believed that, in general, the model error cannot be easily computed a priori. The authors point out that the model error can also be calculated a priori, just as easily as the above measures. Hence, the model error itself can be used to determine the dominant modes. Moreover, the simplicity of the computations does not presume any special properties of the system, such as small damping, orthogonal symmetry, etc.

  17. Truncation Error Analysis on Reconstruction of Signal From Unsymmetrical Local Average Sampling.

    PubMed

    Pang, Yanwei; Song, Zhanjie; Li, Xuelong; Pan, Jing

    2015-10-01

    The classical Shannon sampling theorem is suitable for reconstructing a band-limited signal from its sampled values taken at regular instants with equal steps by using the well-known sinc function. However, due to the inertia of the measurement apparatus, it is impossible to measure the value of a signal precisely at such discrete times. In practice, only unsymmetrical local averages of the signal near the regular instants can be measured and used as the inputs for a signal reconstruction method. In addition, when implemented in hardware, the traditional sinc function cannot be directly used for signal reconstruction. We propose using the Taylor expansion of the sinc function to reconstruct a signal sampled from unsymmetrical local averages and give the upper bound of the reconstruction error (i.e., truncation error). The convergence of the reconstruction method is also presented.

  18. A Tunable, Software-based DRAM Error Detection and Correction Library for HPC

    SciTech Connect

    Fiala, David J; Ferreira, Kurt Brian; Mueller, Frank; Engelmann, Christian

    2012-01-01

    Proposed exascale systems will present a number of considerable resiliency challenges. In particular, DRAM soft-errors, or bit-flips, are expected to greatly increase due to the increased memory density of these systems. Current hardware-based fault-tolerance methods will be unsuitable for addressing the expected soft error frequency rate. As a result, additional software will be needed to address this challenge. In this paper we introduce LIBSDC, a tunable, transparent silent data corruption detection and correction library for HPC applications. LIBSDC provides comprehensive SDC protection for program memory by implementing on-demand page integrity verification. Experimental benchmarks with Mantevo HPCCG show that once tuned, LIBSDC is able to achieve SDC protection with 50% overhead of resources, less than the 100% needed for double modular redundancy.

  19. Developing control charts to review and monitor medication errors.

    PubMed

    Ciminera, J L; Lease, M P

    1992-03-01

    There is a need to monitor reported medication errors in a hospital setting. Because the quantity of errors varies due to external reporting, quantifying the data is extremely difficult. Typically, these errors are reviewed using classification systems that often have wide variations in the numbers per class per month. The authors recommend the use of control charts to review historical data and to monitor future data. The procedure they have adopted is a modification of schemes using absolute (i.e., positive) values of successive differences to estimate the standard deviation when only single incidence values are available in time rather than sample averages, and when many successive differences may be zero.
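
    A standard realization of this idea is an individuals chart whose control limits come from the mean absolute successive difference (the moving range); for a moving range of two observations, dividing its mean by the constant 1.128 estimates the standard deviation. The sketch below is a generic illustration of that scheme with made-up monthly counts, not the authors' exact modification.

```python
import numpy as np

# Hypothetical monthly counts of reported medication errors in one class.
counts = np.array([12, 15, 9, 14, 11, 18, 13, 10, 16, 12, 14, 30])

baseline = counts[:-1]                         # historical months used to set the limits
moving_range = np.abs(np.diff(baseline))
sigma_hat = moving_range.mean() / 1.128        # d2 constant for subgroups of size 2
center = baseline.mean()

ucl = center + 3 * sigma_hat
lcl = max(center - 3 * sigma_hat, 0.0)

latest = counts[-1]
print(f"center {center:.1f}, LCL {lcl:.1f}, UCL {ucl:.1f}")
print("latest month signals out-of-control:", latest > ucl or latest < lcl)
```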

  20. Errors and Their Mitigation at the Kirchhoff-Law-Johnson-Noise Secure Key Exchange

    PubMed Central

    Saez, Yessica; Kish, Laszlo B.

    2013-01-01

    A method to quantify the error probability at the Kirchhoff-law-Johnson-noise (KLJN) secure key exchange is introduced. The types of errors due to statistical inaccuracies in noise voltage measurements are classified and the error probability is calculated. The most interesting finding is that the error probability decays exponentially with the duration of the time window of single bit exchange. The results indicate that it is feasible to achieve error probabilities for the exchanged bits so small that error correction algorithms are not required. The results are demonstrated with practical considerations. PMID:24303033

  1. Spaceborne estimate of atmospheric CO2 column by use of the differential absorption method: error analysis.

    PubMed

    Dufour, Emmanuel; Bréon, François-Marie

    2003-06-20

    For better knowledge of the carbon cycle, there is a need for spaceborne measurements of atmospheric CO2 concentration. Because the gradients are relatively small, the accuracy requirements are better than 1%. We analyze the feasibility of a CO2-weighted-column estimate, using the differential absorption technique, from high-resolution spectroscopic measurements in the 1.6- and 2-microm CO2 absorption bands. Several sources of uncertainty that can be neglected for other gases with less stringent accuracy requirements need to be assessed. We attempt a quantification of errors due to radiometric noise, uncertainties in temperature, humidity and surface pressure, spectroscopic coefficients, and atmospheric scattering. Atmospheric scattering is the major source of error [5 parts per million (ppm) for a subvisual cirrus cloud with an assumed optical thickness of 0.03], and additional research is needed to properly assess the accuracy of correction methods. Spectroscopic data are currently a major source of uncertainty but can be improved with specific ground-based sunphotometry measurements. The other sources of error amount to several ppm, which is less than, but close to, the accuracy requirements. Fortunately, these errors are mostly random and will therefore be reduced by proper averaging.

  2. Chemical basis of Trotter-Suzuki errors in quantum chemistry simulation

    NASA Astrophysics Data System (ADS)

    Babbush, Ryan; McClean, Jarrod; Wecker, Dave; Aspuru-Guzik, Alán; Wiebe, Nathan

    2015-02-01

    Although the simulation of quantum chemistry is one of the most anticipated applications of quantum computing, the scaling of known upper bounds on the complexity of these algorithms is daunting. Prior work has bounded errors due to discretization of the time evolution (known as "Trotterization") in terms of the norm of the error operator and analyzed scaling with respect to the number of spin orbitals. However, we find that these error bounds can be loose by up to 16 orders of magnitude for some molecules. Furthermore, numerical results for small systems fail to reveal any clear correlation between ground-state error and number of spin orbitals. We instead argue that chemical properties, such as the maximum nuclear charge in a molecule and the filling fraction of orbitals, can be decisive for determining the cost of a quantum simulation. Our analysis motivates several strategies to use classical processing to further reduce the required Trotter step size and estimate the necessary number of steps, without requiring additional quantum resources. Finally, we demonstrate improved methods for state preparation techniques which are asymptotically superior to proposals in the simulation literature.

  3. On the Chemical Basis of Trotter-Suzuki Errors in Quantum Chemistry Simulation

    NASA Astrophysics Data System (ADS)

    Babbush, Ryan; McClean, Jarrod; Wecker, Dave; Aspuru-Guzik, Alán; Wiebe, Nathan

    2015-03-01

    Although the simulation of quantum chemistry is one of the most anticipated applications of quantum computing, the scaling of known upper bounds on the complexity of these algorithms is daunting. Prior work has bounded errors due to Trotterization in terms of the norm of the error operator and analyzed scaling with respect to the number of spin-orbitals. However, we find that these error bounds can be loose by up to sixteen orders of magnitude for some molecules. Furthermore, numerical results for small systems fail to reveal any clear correlation between ground state error and number of spin-orbitals. We instead argue that chemical properties, such as the maximum nuclear charge in a molecule and the filling fraction of orbitals, can be decisive for determining the cost of a quantum simulation. Our analysis motivates several strategies to use classical processing to further reduce the required Trotter step size and to estimate the necessary number of steps, without requiring additional quantum resources. Finally, we demonstrate improved methods for state preparation techniques which are asymptotically superior to proposals in the simulation literature.

  4. Error localization in RHIC by fitting difference orbits

    SciTech Connect

    Liu C.; Minty, M.; Ptitsyn, V.

    2012-05-20

    The presence of realistic errors in an accelerator or in the model used to describe the accelerator is such that a measurement of the beam trajectory may deviate from prediction. Comparison of measurements to model can be used to detect such errors. To do so, the initial conditions (phase space parameters at any point) must be determined, which can be achieved by fitting the difference orbit against the model prediction using only a few beam position measurements. Using these initial conditions, the fitted orbit can be propagated along the beam line based on the optics model. Measurement and model will agree up to the point of an error. The error source can be better localized by additionally fitting the difference orbit using downstream BPMs and back-propagating the solution. If one dominating error source exists in the machine, the fitted orbit will deviate from the difference orbit at the same point.

  5. Error Reduction for Weigh-In-Motion

    SciTech Connect

    Hively, Lee M; Abercrombie, Robert K; Scudiere, Matthew B; Sheldon, Frederick T

    2009-01-01

    Federal and State agencies need certifiable vehicle weights for various applications, such as highway inspections, border security, check points, and port entries. ORNL weigh-in-motion (WIM) technology was previously unable to provide certifiable weights, due to natural oscillations, such as vehicle bouncing and rocking. Recent ORNL work demonstrated a novel filter to remove these oscillations. This work shows further filtering improvements to enable certifiable weight measurements (error < 0.1%) for a higher traffic volume with less effort (elimination of redundant weighing).
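
    The general idea, low-pass filtering the axle-load signal to suppress bounce and rocking oscillations before averaging, can be illustrated with the hedged sketch below. It uses a generic Butterworth filter on a simulated signal; it is not the specific ORNL filter described in the report, and all frequencies and amplitudes are invented.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 1000.0                          # sampling rate in Hz (assumed)
t = np.arange(0.0, 0.8, 1.0 / fs)    # time the axle spends on the weigh pad

true_weight = 8000.0                                   # static axle load, kg
bounce = 400.0 * np.sin(2 * np.pi * 3.0 * t)           # ~3 Hz body bounce
rock = 150.0 * np.sin(2 * np.pi * 11.0 * t)            # faster rocking component
noise = np.random.default_rng(0).normal(0.0, 20.0, t.size)
signal = true_weight + bounce + rock + noise

# Zero-phase low-pass filter with a cutoff well below the oscillation frequencies.
sos = butter(4, 1.0, btype="low", fs=fs, output="sos")
filtered = sosfiltfilt(sos, signal)

print(f"raw mean error:      {signal.mean() - true_weight:+.1f} kg")
print(f"filtered mean error: {filtered.mean() - true_weight:+.1f} kg")
```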

  6. Error-associated behaviors and error rates for robotic geology

    NASA Technical Reports Server (NTRS)

    Anderson, Robert C.; Thomas, Geb; Wagner, Jacob; Glasgow, Justin

    2004-01-01

    This study explores human error as a function of the decision-making process. One of many models for human decision-making is Rasmussen's decision ladder [9]. The decision ladder identifies the multiple tasks and states of knowledge involved in decision-making. The tasks and states of knowledge can be classified by the level of cognitive effort required to make the decision, leading to the skill, rule, and knowledge taxonomy (Rasmussen, 1987). Skill based decisions require the least cognitive effort and knowledge based decisions require the greatest cognitive effort. Errors can occur at any of the cognitive levels.

  7. The Use of Error Analysis to Assess Resident Performance

    PubMed Central

    D’Angelo, Anne-Lise D.; Law, Katherine E.; Cohen, Elaine R.; Greenberg, Jacob A.; Kwan, Calvin; Greenberg, Caprice; Wiegmann, Douglas A.; Pugh, Carla M.

    2015-01-01

    Background The aim of this study is to assess the validity of a human factors error assessment method for evaluating resident performance during a simulated operative procedure. Methods Seven PGY4-5 residents had 30 minutes to complete a simulated laparoscopic ventral hernia (LVH) repair on Day 1 of a national, advanced laparoscopic course. Faculty provided immediate feedback on operative errors and residents participated in a final product analysis of their repairs. Residents then received didactic and hands-on training regarding several advanced laparoscopic procedures during a lecture session and animate lab. On Day 2, residents performed a nonequivalent LVH repair using a simulator. Three investigators reviewed and coded videos of the repairs using previously developed human error classification systems. Results Residents committed 121 total errors on Day 1 compared to 146 on Day 2. One of seven residents successfully completed the LVH repair on Day 1 compared to all seven residents on Day 2 (p=.001). The majority of errors (85%) committed on Day 2 were technical and occurred during the last two steps of the procedure. There were significant differences in error type (p<.001) and level (p=.019) from Day 1 to Day 2. The proportion of omission errors decreased from Day 1 (33%) to Day 2 (14%). In addition, there were more technical and commission errors on Day 2. Conclusion The error assessment tool was successful in categorizing performance errors, supporting known-groups validity evidence. Evaluating resident performance through error classification has great potential in facilitating our understanding of operative readiness. PMID:26003910

  8. A Method for Correcting Typographical Errors in Subject Headings in OCLC Records. Research Report.

    ERIC Educational Resources Information Center

    O'Neill, Edward T.; Aluri, Rao

    The error-correcting algorithm described was constructed to examine subject headings in online catalog records for common errors such as omission, addition, substitution, and transposition errors, and to make needed changes. Essentially, the algorithm searches the authority file for a record whose primary key exactly matches the test key. If an…

  9. Quantifying the impact of material-model error on macroscale quantities-of-interest using multiscale a posteriori error-estimation techniques

    SciTech Connect

    Brown, Judith A.; Bishop, Joseph E.

    2016-07-20

    An a posteriori error-estimation framework is introduced to quantify and reduce modeling errors resulting from approximating complex mesoscale material behavior with a simpler macroscale model. Such errors may be prevalent when modeling welds and additively manufactured structures, where spatial variations and material textures may be present in the microstructure. We consider a case where a <100> fiber texture develops in the longitudinal scanning direction of a weld. Transversely isotropic elastic properties are obtained through homogenization of a microstructural model with this texture and are considered the reference weld properties within the error-estimation framework. Conversely, isotropic elastic properties are considered approximate weld properties since they contain no representation of texture. Errors introduced by using isotropic material properties to represent a weld are assessed through a quantified error bound in the elastic regime. Lastly, an adaptive error reduction scheme is used to determine the optimal spatial variation of the isotropic weld properties to reduce the error bound.

  10. Laser tracker error determination using a network measurement

    NASA Astrophysics Data System (ADS)

    Hughes, Ben; Forbes, Alistair; Lewis, Andrew; Sun, Wenjuan; Veal, Dan; Nasr, Karim

    2011-04-01

    We report on a fast, easily implemented method to determine all the geometrical alignment errors of a laser tracker, to high precision. The technique requires no specialist equipment and can be performed in less than an hour. The technique is based on the determination of parameters of a geometric model of the laser tracker, using measurements of a set of fixed target locations, from multiple locations of the tracker. After fitting of the model parameters to the observed data, the model can be used to perform error correction of the raw laser tracker data or to derive correction parameters in the format of the tracker manufacturer's internal error map. In addition to determination of the model parameters, the method also determines the uncertainties and correlations associated with the parameters. We have tested the technique on a commercial laser tracker in the following way. We disabled the tracker's internal error compensation, and used a five-position, fifteen-target network to estimate all the geometric errors of the instrument. Using the error map generated from this network test, the tracker was able to pass a full performance validation test, conducted according to a recognized specification standard (ASME B89.4.19-2006). We conclude that the error correction determined from the network test is as effective as the manufacturer's own error correction methodologies.

  11. Surface errors in the course of machining precision optics

    NASA Astrophysics Data System (ADS)

    Biskup, H.; Haberl, A.; Rascher, R.

    2015-08-01

    Precision optical components are usually machined by grinding and polishing in several steps with increasing accuracy. Spherical surfaces are finished in a last step with large tools to smooth the surface. The requested surface accuracy of non-spherical surfaces can only be achieved with tools in point contact with the surface. So-called mid-frequency errors (MSFE) can accumulate with zonal processes. This work examines the formation of surface errors from grinding to polishing by analyzing the surfaces at each machining step with non-contact interferometric methods. The errors on the surface can be distinguished as described in DIN 4760, whereby 2nd to 3rd order errors are the so-called MSFE. By appropriate filtering of the measured data, error frequencies can be suppressed so that only defined spatial frequencies are shown in the surface plot. It can be observed that some frequencies may already be formed in early machining steps such as grinding and main polishing. Additionally, it is known that MSFE can be produced by the process itself and other side effects. Besides a description of surface errors based on the limits of measurement technologies, different formation mechanisms for selected spatial frequencies are presented. A correction may only be possible with tools that have a lateral size below the wavelength of the error structure. The presented considerations may be used to develop proposals for handling surface errors.

  12. Shuttle orbit IMU alignment. Single-precision computation error

    NASA Technical Reports Server (NTRS)

    Mcclain, C. R.

    1980-01-01

    The source of computational error in the inertial measurement unit (IMU) on-orbit alignment software was investigated. Simulation runs were made on the IBM 360/70 computer with the IMU orbit alignment software coded in HAL/S. The results indicate that for small IMU misalignment angles (less than 600 arc seconds), single precision computations in combination with the arc cosine method of eigen rotation angle extraction introduce an additional misalignment error of up to 230 arc seconds per axis. Use of the arc sine method, however, produced negligible misalignment error. As a result of this study, the arc sine method was recommended for use in the IMU on-orbit alignment software.
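
    The numerical effect described, loss of precision when a small rotation angle is extracted with the arc cosine in single precision compared with the arc sine, can be reproduced with a short sketch. This is a generic illustration of the conditioning issue, not the Shuttle HAL/S code.

```python
import numpy as np

ARCSEC = np.pi / (180.0 * 3600.0)    # one arc second in radians

for arcsec in (50.0, 100.0, 300.0, 600.0):
    theta_true = arcsec * ARCSEC

    # Single-precision values that an alignment computation might carry.
    cos32 = np.float32(np.cos(theta_true))
    sin32 = np.float32(np.sin(theta_true))

    err_acos = (np.arccos(cos32) - theta_true) / ARCSEC   # ill-conditioned near zero
    err_asin = (np.arcsin(sin32) - theta_true) / ARCSEC   # well-conditioned near zero
    print(f"{arcsec:6.0f} arcsec: arccos error {err_acos:+8.2f}, "
          f"arcsin error {err_asin:+8.2f} arcsec")

# Rounding cos(theta) to float32 near 1.0 destroys most of the information about theta;
# for the smallest angles the arc cosine can be off by tens of arc seconds, while the
# arc sine recovers the angle essentially exactly.
```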

  13. An estimation error bound for pixelated sensing

    NASA Astrophysics Data System (ADS)

    Kreucher, Chris; Bell, Kristine

    2016-05-01

    This paper considers the ubiquitous problem of estimating the state (e.g., position) of an object based on a series of noisy measurements. The standard approach is to formulate this problem as one of measuring the state (or a function of the state) corrupted by additive Gaussian noise. This model assumes both (i) the sensor provides a measurement of the true target (or, alternatively, a separate signal processing step has eliminated false alarms), and (ii) the error source in the measurement is accurately described by a Gaussian model. In reality, however, sensor measurements are often formed on a grid of pixels - e.g., Ground Moving Target Indication (GMTI) measurements are formed for a discrete set of (angle, range, velocity) voxels, and EO imagery is made on (x, y) grids. When a target is present in a pixel, therefore, uncertainty is not Gaussian (instead it is a boxcar function) and unbiased estimation is not generally possible, as the location of the target within the pixel defines the bias of the estimator. It turns out that this small modification to the measurement model makes traditional bounding approaches inapplicable. This paper discusses pixelated sensing in more detail and derives the minimum mean squared error (MMSE) bound for estimation in the pixelated scenario. We then use this error calculation to investigate the utility of using non-thresholded measurements.
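
    The within-pixel (boxcar) uncertainty described above can be made concrete with a short Monte Carlo check: if the target is uniformly distributed inside a pixel of width Delta and the estimator reports the pixel center, the mean squared error approaches the uniform-distribution value Delta^2/12. This is an illustrative sketch, not the bound derivation from the paper.

```python
import numpy as np

rng = np.random.default_rng(123)

pixel_width = 2.0            # e.g. meters of ground range per pixel (illustrative)
n_trials = 200_000

# True target position is continuous; the sensor only reports which pixel fired.
true_pos = rng.uniform(0.0, 100.0, n_trials)
pixel_index = np.floor(true_pos / pixel_width)
estimate = (pixel_index + 0.5) * pixel_width     # report the pixel center

mse = np.mean((estimate - true_pos) ** 2)
print(f"empirical MSE: {mse:.4f}   theoretical pixel_width**2 / 12: {pixel_width**2 / 12:.4f}")
```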

  14. POSITION ERROR IN STATION-KEEPING SATELLITE

    DTIC Science & Technology

    of an error in satellite orientation and the sun being in a plane other than the equatorial plane may result in errors in position determination. The nature of the errors involved is described and their magnitudes estimated.

  15. Orbit IMU alignment: Error analysis

    NASA Technical Reports Server (NTRS)

    Corson, R. W.

    1980-01-01

    A comprehensive accuracy analysis of orbit inertial measurement unit (IMU) alignments using the shuttle star trackers was completed and the results are presented. Monte Carlo techniques were used in a computer simulation of the IMU alignment hardware and software systems to: (1) determine the expected Space Transportation System 1 Flight (STS-1) manual mode IMU alignment accuracy; (2) investigate the accuracy of alignments in later shuttle flights when the automatic mode of star acquisition may be used; and (3) verify that an analytical model previously used for estimating the alignment error is a valid model. The analysis results do not differ significantly from expectations. The standard deviation in the IMU alignment error for STS-1 alignments was determined to be 68 arc seconds per axis. This corresponds to a 99.7% probability that the magnitude of the total alignment error is less than 258 arc seconds.

  16. Sensation seeking and error processing.

    PubMed

    Zheng, Ya; Sheng, Wenbin; Xu, Jing; Zhang, Yuanyuan

    2014-09-01

    Sensation seeking is defined by a strong need for varied, novel, complex, and intense stimulation, and a willingness to take risks for such experience. Several theories propose that the insensitivity to negative consequences incurred by risks is one of the hallmarks of sensation-seeking behaviors. In this study, we investigated the time course of error processing in sensation seeking by recording event-related potentials (ERPs) while high and low sensation seekers performed an Eriksen flanker task. Whereas there were no group differences in ERPs to correct trials, sensation seeking was associated with a blunted error-related negativity (ERN), which was female-specific. Further, different subdimensions of sensation seeking were related to ERN amplitude differently. These findings indicate that the relationship between sensation seeking and error processing is sex-specific.

  17. Error Field Correction in ITER

    SciTech Connect

    Park, Jong-kyu; Boozer, Allen H.; Menard, Jonathan E.; Schaffer, Michael J.

    2008-05-22

    A new method for correcting magnetic field errors in the ITER tokamak is developed using the Ideal Perturbed Equilibrium Code (IPEC). The dominant external magnetic field for driving islands is shown to be localized to the outboard midplane for three ITER equilibria that represent the projected range of operational scenarios. The coupling matrices between the poloidal harmonics of the external magnetic perturbations and the resonant fields on the rational surfaces that drive islands are combined for different equilibria and used to determine an ordered list of the dominant errors in the external magnetic field. It is found that efficient and robust error field correction is possible with a fixed setting of the correction currents relative to the currents in the main coils across the range of ITER operating scenarios that was considered.

  18. Constraint checking during error recovery

    NASA Technical Reports Server (NTRS)

    Lutz, Robyn R.; Wong, Johnny S. K.

    1993-01-01

    The system-level software onboard a spacecraft is responsible for recovery from communication, power, thermal, and computer-health anomalies that may occur. The recovery must occur without disrupting any critical scientific or engineering activity that is executing at the time of the error. Thus, the error-recovery software may have to execute concurrently with the ongoing acquisition of scientific data or with spacecraft maneuvers. This work provides a technique by which the rules that constrain the concurrent execution of these processes can be modeled in a graph. An algorithm is described that uses this model to validate that the constraints hold for all concurrent executions of the error-recovery software with the software that controls the science and engineering activities of the spacecraft. The results are applicable to a variety of control systems with critical constraints on the timing and ordering of the events they control.

  19. Reduced error signalling in medication-naive children with ADHD: associations with behavioural variability and post-error adaptations

    PubMed Central

    Plessen, Kerstin J.; Allen, Elena A.; Eichele, Heike; van Wageningen, Heidi; Høvik, Marie Farstad; Sørensen, Lin; Worren, Marius Kalsås; Hugdahl, Kenneth; Eichele, Tom

    2016-01-01

    Background We examined the blood-oxygen level–dependent (BOLD) activation in brain regions that signal errors and their association with intraindividual behavioural variability and adaptation to errors in children with attention-deficit/hyperactivity disorder (ADHD). Methods We acquired functional MRI data during a Flanker task in medication-naive children with ADHD and healthy controls aged 8–12 years and analyzed the data using independent component analysis. For components corresponding to performance monitoring networks, we compared activations across groups and conditions and correlated them with reaction times (RT). Additionally, we analyzed post-error adaptations in behaviour and motor component activations. Results We included 25 children with ADHD and 29 controls in our analysis. Children with ADHD displayed reduced activation to errors in cingulo-opercular regions and higher RT variability, but no differences in interference control. Larger BOLD amplitude to error trials significantly predicted reduced RT variability across all participants. Neither group showed evidence of post-error response slowing; however, post-error adaptation in motor networks was significantly reduced in children with ADHD. This adaptation was inversely related to activation of the right-lateralized ventral attention network (VAN) on error trials and to task-driven connectivity between the cingulo-opercular system and the VAN. Limitations Our study was limited by the modest sample size and imperfect matching across groups. Conclusion Our findings show a deficit in cingulo-opercular activation in children with ADHD that could relate to reduced signalling for errors. Moreover, the reduced orienting of the VAN signal may mediate deficient post-error motor adaptations. Pinpointing general performance monitoring problems to specific brain regions and operations in error processing may help to guide the targets of future treatments for ADHD. PMID:26441332

  20. Medication errors: definitions and classification.

    PubMed

    Aronson, Jeffrey K

    2009-06-01

    1. To understand medication errors and to identify preventive strategies, we need to classify them and define the terms that describe them. 2. The four main approaches to defining technical terms consider etymology, usage, previous definitions, and the Ramsey-Lewis method (based on an understanding of theory and practice). 3. A medication error is 'a failure in the treatment process that leads to, or has the potential to lead to, harm to the patient'. 4. Prescribing faults, a subset of medication errors, should be distinguished from prescription errors. A prescribing fault is 'a failure in the prescribing [decision-making] process that leads to, or has the potential to lead to, harm to the patient'. The converse of this, 'balanced prescribing' is 'the use of a medicine that is appropriate to the patient's condition and, within the limits created by the uncertainty that attends therapeutic decisions, in a dosage regimen that optimizes the balance of benefit to harm'. This excludes all forms of prescribing faults, such as irrational, inappropriate, and ineffective prescribing, underprescribing and overprescribing. 5. A prescription error is 'a failure in the prescription writing process that results in a wrong instruction about one or more of the normal features of a prescription'. The 'normal features' include the identity of the recipient, the identity of the drug, the formulation, dose, route, timing, frequency, and duration of administration. 6. Medication errors can be classified, invoking psychological theory, as knowledge-based mistakes, rule-based mistakes, action-based slips, and memory-based lapses. This classification informs preventive strategies.

  1. Applications of integrated human error identification techniques on the chemical cylinder change task.

    PubMed

    Cheng, Ching-Min; Hwang, Sheue-Ling

    2015-03-01

    This paper outlines the human error identification (HEI) techniques that currently exist to assess latent human errors. Many formal error identification techniques have existed for years, but few have been validated to cover latent human error analysis in different domains. This study considers many possible error modes and influential factors, including external error modes, internal error modes, psychological error mechanisms, and performance shaping factors, and integrates several execution procedures and frameworks of HEI techniques. The case study in this research was the operational process of changing chemical cylinders in a factory. In addition, the integrated HEI method was used to assess the operational processes and the system's reliability. It was concluded that the integrated method is a valuable aid to develop much safer operational processes and can be used to predict human error rates on critical tasks in the plant.

  2. Automatic-repeat-request error control schemes

    NASA Technical Reports Server (NTRS)

    Lin, S.; Costello, D. J., Jr.; Miller, M. J.

    1983-01-01

    Error detection incorporated with automatic-repeat-request (ARQ) is widely used for error control in data communication systems. This method of error control is simple and provides high system reliability. If a properly chosen code is used for error detection, virtually error-free data transmission can be attained. Various types of ARQ and hybrid ARQ schemes, and error detection using linear block codes are surveyed.
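
    A minimal stop-and-wait ARQ sketch, assuming a binary symmetric channel and CRC-32 for error detection (both illustrative choices, not taken from the survey): corrupted frames fail the CRC check and are retransmitted, so delivered frames are virtually error free.

    import random
    import zlib

    def send_over_noisy_channel(frame: bytes, bit_error_rate: float) -> bytes:
        """Flip each bit independently with probability bit_error_rate."""
        out = bytearray(frame)
        for i in range(len(out)):
            for b in range(8):
                if random.random() < bit_error_rate:
                    out[i] ^= 1 << b
        return bytes(out)

    def arq_transmit(payload: bytes, bit_error_rate: float = 1e-3, max_tries: int = 50) -> int:
        """Return the number of transmissions needed to deliver the payload."""
        frame = payload + zlib.crc32(payload).to_bytes(4, "big")   # append detection code
        for attempt in range(1, max_tries + 1):
            received = send_over_noisy_channel(frame, bit_error_rate)
            data, crc = received[:-4], received[-4:]
            if zlib.crc32(data).to_bytes(4, "big") == crc:         # ACK: accept the frame
                return attempt
            # NAK: the sender retransmits the same frame
        raise RuntimeError("delivery failed within max_tries")

    random.seed(0)
    tries = [arq_transmit(b"telemetry frame %d" % i) for i in range(100)]
    print("average transmissions per frame:", sum(tries) / len(tries))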

  3. Geographically correlated errors observed from a laser-based short-arc technique

    NASA Astrophysics Data System (ADS)

    Bonnefond, P.; Exertier, P.; Barlier, F.

    1999-07-01

    The laser-based short-arc technique has been developed in order to avoid local errors which affect the dynamical orbit computation, such as those due to mismodeling in the geopotential. It is based on a geometric method and consists in fitting short arcs (about 4000 km), issued from a global orbit, with satellite laser ranging tracking measurements from a ground station network. Ninety-two TOPEX/Poseidon (T/P) cycles of laser-based short-arc orbits have then been compared to JGM-2 and JGM-3 T/P orbits computed by the Precise Orbit Determination (POD) teams (Service d'Orbitographie Doris/Centre National d'Etudes Spatiales and Goddard Space Flight Center/NASA) over two areas: (1) the Mediterranean area and (2) a part of the Pacific (including California and Hawaii) called hereafter the U.S. area. Geographically correlated orbit errors in these areas are clearly evidenced: for example, -2.6 cm and +0.7 cm for the Mediterranean and U.S. areas, respectively, relative to JGM-3 orbits. However, geographically correlated errors (GCE), which are commonly linked to errors in the gravity model, can also be due to systematic errors in the reference frame and/or to biases in the tracking measurements. Although the short-arc technique is very sensitive to such error sources, our analysis demonstrates that the induced geographical systematic effects are at the level of 1-2 cm on the radial orbit component. Results are also compared with those obtained with the GPS-based reduced dynamic technique. The time-dependent part of GCE has also been studied. Over 6 years of T/P data, coherent signals in the radial component of T/P Precise Orbit Ephemeris (POE) are clearly evidenced with a time period of about 6 months. In addition, the impact of time-varying error sources coming from the reference frame and the tracking data accuracy has been analyzed, showing a possible linear trend of about 0.5-1 mm/yr in the radial component of T/P POE.

  4. Due process traditionalism.

    PubMed

    Sunstein, Cass R

    2008-06-01

    In important cases, the Supreme Court has limited the scope of "substantive due process" by reference to tradition, but it has yet to explain why it has done so. Due process traditionalism might be defended in several distinctive ways. The most ambitious defense draws on a set of ideas associated with Edmund Burke and Friedrich Hayek, who suggested that traditions have special credentials by virtue of their acceptance by many minds. But this defense runs into three problems. Those who have participated in a tradition may not have accepted any relevant proposition; they might suffer from a systematic bias; and they might have joined a cascade. An alternative defense sees due process traditionalism as a second-best substitute for two preferable alternatives: a purely procedural approach to the Due Process Clause, and an approach that gives legislatures the benefit of every reasonable doubt. But it is not clear that in these domains, the first-best approaches are especially attractive; and even if they are, the second-best may be an unacceptably crude substitute. The most plausible defense of due process traditionalism operates on rule-consequentialist grounds, with the suggestion that even if traditions are not great, they are often good, and judges do best if they defer to traditions rather than attempting to specify the content of "liberty" on their own. But the rule-consequentialist defense depends on controversial and probably false assumptions about the likely goodness of traditions and the institutional incapacities of judges.

  5. Management of human error by design

    NASA Technical Reports Server (NTRS)

    Wiener, Earl

    1988-01-01

    Design-induced errors and error prevention as well as the concept of lines of defense against human error are discussed. The concept of human error prevention, whose main focus has been on hardware, is extended to other features of the human-machine interface vulnerable to design-induced errors. In particular, it is pointed out that human factors and human error prevention should be part of the process of transport certification. Also, the concept of error tolerant systems is considered as a last line of defense against error.

  6. Verification of the Forecast Errors Based on Ensemble Spread

    NASA Astrophysics Data System (ADS)

    Vannitsem, S.; Van Schaeybroeck, B.

    2014-12-01

    The use of ensemble prediction systems allows for an uncertainty estimation of the forecast. Most end users do not require all the information contained in an ensemble and prefer the use of a single uncertainty measure. This measure is the ensemble spread, which serves to forecast the forecast error. It is, however, unclear how the quality of these forecasts can best be assessed based on spread and forecast error only. The spread-error verification is intricate for two reasons: first, for each probabilistic forecast only a single verifying observation is available, and second, the spread is not meant to provide an exact prediction for the error. Despite these difficulties, several advances were recently made, all based on traditional deterministic verification of the error forecast. In particular, Grimit and Mass (2007) and Hopson (2014) considered in detail the strengths and weaknesses of the spread-error correlation, while Christensen et al (2014) developed a proper-score extension of the mean squared error. However, due to the strong variance of the error given a certain spread, the error forecast should preferably be considered as probabilistic in nature. In the present work, different probabilistic error models are proposed depending on the spread-error metrics used. Most of these models allow for the discrimination of a perfect forecast from an imperfect one, independent of the underlying ensemble distribution. The new spread-error scores are tested on the ensemble prediction system of the European Centre for Medium-Range Weather Forecasts (ECMWF) over Europe and Africa. References: Christensen, H. M., Moroz, I. M. and Palmer, T. N., 2014, Evaluation of ensemble forecast uncertainty using a new proper score: application to medium-range and seasonal forecasts. In press, Quarterly Journal of the Royal Meteorological Society. Grimit, E. P., and C. F. Mass, 2007: Measuring the ensemble spread-error relationship with a probabilistic approach: Stochastic ensemble results. Mon. Wea. Rev., 135, 203
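
    A small synthetic illustration of the difficulty noted above, under assumed distributions: even when the spread is statistically perfect (the realized error is drawn exactly from the spread-implied distribution), each case supplies only one verifying error, so the spread versus absolute-error correlation stays well below one.

    import numpy as np

    # Assumptions: a case-dependent spread drawn from a gamma distribution and an
    # ensemble-mean error drawn from a Gaussian whose standard deviation equals
    # that spread (i.e., a perfectly reliable ensemble).
    rng = np.random.default_rng(2)
    n_cases = 5000
    spread = rng.gamma(shape=4.0, scale=0.5, size=n_cases)
    error = rng.normal(0.0, spread)                  # one verifying error per case

    corr = np.corrcoef(spread, np.abs(error))[0, 1]
    print("spread vs |error| correlation:", round(float(corr), 3))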

  7. Factors that influence the generation of autobiographical memory conjunction errors.

    PubMed

    Devitt, Aleea L; Monk-Fromont, Edwin; Schacter, Daniel L; Addis, Donna Rose

    2016-01-01

    The constructive nature of memory is generally adaptive, allowing us to efficiently store, process and learn from life events, and simulate future scenarios to prepare ourselves for what may come. However, the cost of a flexibly constructive memory system is the occasional conjunction error, whereby the components of an event are authentic, but the combination of those components is false. Using a novel recombination paradigm, it was demonstrated that details from one autobiographical memory (AM) may be incorrectly incorporated into another, forming AM conjunction errors that elude typical reality monitoring checks. The factors that contribute to the creation of these conjunction errors were examined across two experiments. Conjunction errors were more likely to occur when the corresponding details were partially rather than fully recombined, likely due to increased plausibility and ease of simulation of partially recombined scenarios. Brief periods of imagination increased conjunction error rates, in line with the imagination inflation effect. Subjective ratings suggest that this inflation is due to similarity of phenomenological experience between conjunction and authentic memories, consistent with a source monitoring perspective. Moreover, objective scoring of memory content indicates that increased perceptual detail may be particularly important for the formation of AM conjunction errors.

  8. Modelling non-Gaussianity of background and observational errors by the Maximum Entropy method

    NASA Astrophysics Data System (ADS)

    Pires, Carlos; Talagrand, Olivier; Bocquet, Marc

    2010-05-01

    The Best Linear Unbiased Estimator (BLUE) has widely been used in atmospheric-oceanic data assimilation. However, when data errors have non-Gaussian pdfs, the BLUE differs from the absolute Minimum Variance Unbiased Estimator (MVUE), minimizing the mean square analysis error. The non-Gaussianity of errors can be due to the statistical skewness and positiveness of some physical observables (e.g. moisture, chemical species) or due to the nonlinearity of the data assimilation models and observation operators acting on Gaussian errors. Non-Gaussianity of assimilated data errors can be justified from a priori hypotheses or inferred from statistical diagnostics of innovations (observation minus background). Following this rationale, we compute measures of innovation non-Gaussianity, namely its skewness and kurtosis, relating them to: a) the non-Gaussianity of the individual errors themselves, b) the correlation between nonlinear functions of errors, and c) the heteroscedasticity of errors within diagnostic samples. Those relationships impose bounds on the skewness and kurtosis of errors that are critically dependent on the error variances, thus leading to a necessary tuning of error variances in order to accomplish consistency with innovations. We evaluate the sub-optimality of the BLUE as compared to the MVUE, in terms of excess of error variance, under the presence of non-Gaussian errors. The error pdfs are obtained by the maximum entropy method constrained by error moments up to fourth order, from which the Bayesian probability density function and the MVUE are computed. The impact is higher for skewed extreme innovations and grows on average with the skewness of data errors, especially if those skewnesses have the same sign. Application has been performed to the quality-accepted ECMWF innovations of brightness temperatures of a set of High Resolution Infrared Sounder channels. In this context, the MVUE has led in some extreme cases to a potential reduction of 20-60% error
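
    A small sketch of the innovation diagnostics mentioned above, computed on a synthetic skewed sample (the distribution and sample size are illustrative assumptions; in practice the innovations would come from the assimilation system).

    import numpy as np
    from scipy.stats import skew, kurtosis

    # Synthetic, zero-mean but skewed "innovations" standing in for
    # observation-minus-background statistics.
    rng = np.random.default_rng(3)
    innovations = rng.gamma(shape=2.0, scale=1.0, size=10_000) - 2.0

    print("skewness        :", float(skew(innovations)))
    print("excess kurtosis :", float(kurtosis(innovations)))   # 0 for a Gaussian sample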

  9. FRamework Assessing Notorious Contributing Influences for Error (FRANCIE): Perspective on Taxonomy Development to Support Error Reporting and Analysis

    SciTech Connect

    Lon N. Haney; David I. Gertman

    2003-04-01

    Beginning in the 1980s a primary focus of human reliability analysis was estimation of human error probabilities. However, detailed qualitative modeling with comprehensive representation of contextual variables often was lacking. This was likely due to the lack of comprehensive error and performance shaping factor taxonomies, and the limited data available on observed error rates and their relationship to specific contextual variables. In the mid 90s Boeing, America West Airlines, NASA Ames Research Center and INEEL partnered in a NASA sponsored Advanced Concepts grant to: assess the state of the art in human error analysis, identify future needs for human error analysis, and develop an approach addressing these needs. Identified needs included the need for a method to identify and prioritize task and contextual characteristics affecting human reliability. Other needs identified included developing comprehensive taxonomies to support detailed qualitative modeling and to structure meaningful data collection efforts across domains. A result was the development of the FRamework Assessing Notorious Contributing Influences for Error (FRANCIE) with a taxonomy for airline maintenance tasks. The assignment of performance shaping factors to generic errors by experts proved to be valuable to qualitative modeling. Performance shaping factors and error types from such detailed approaches can be used to structure error reporting schemes. In a recent NASA Advanced Human Support Technology grant FRANCIE was refined, and two new taxonomies for use on space missions were developed. The development, sharing, and use of error taxonomies, and the refinement of approaches for increased fidelity of qualitative modeling is offered as a means to help direct useful data collection strategies.

  10. Multichannel error correction code decoder

    NASA Technical Reports Server (NTRS)

    Wagner, Paul K.; Ivancic, William D.

    1993-01-01

    A brief overview of a processing satellite for a mesh very-small-aperture (VSAT) communications network is provided. The multichannel error correction code (ECC) decoder system, the uplink signal generation and link simulation equipment, and the time-shared decoder are described. The testing is discussed. Applications of the time-shared decoder are recommended.

  11. Typical errors of ESP users

    NASA Astrophysics Data System (ADS)

    Eremina, Svetlana V.; Korneva, Anna A.

    2004-07-01

    The paper presents an analysis of the errors made by ESP (English for specific purposes) users that can be considered typical. These errors result from misuse of the resources of English grammar and tend to resist correction. Their origin and places of occurrence are also discussed.

  12. Error Analysis and Remedial Teaching.

    ERIC Educational Resources Information Center

    Corder, S. Pit

    The purpose of this paper is to analyze the role of error analysis in specifying and planning remedial treatment in second language learning. Part 1 discusses situations that demand remedial action. This is a quantitative assessment that requires measurement of the varying degrees of disparity between the learner's knowledge and the demands of the…

  13. Sampling Errors of Variance Components.

    ERIC Educational Resources Information Center

    Sanders, Piet F.

    A study on sampling errors of variance components was conducted within the framework of generalizability theory by P. L. Smith (1978). The study used an intuitive approach for solving the problem of how to allocate the number of conditions to different facets in order to produce the most stable estimate of the universe score variance. Optimization…

  14. The error of our ways

    NASA Astrophysics Data System (ADS)

    Swartz, Clifford E.

    1999-10-01

    In Victorian literature it was usually some poor female who came to see the error of her ways. How prescient of her! How I wish that all writers of manuscripts for The Physics Teacher would come to similar recognition of this centerpiece of measurement. For, Brothers and Sisters, we all err.

  15. Amplify Errors to Minimize Them

    ERIC Educational Resources Information Center

    Stewart, Maria Shine

    2009-01-01

    In this article, the author offers her experience of modeling mistakes and writing spontaneously in the computer classroom to get students' attention and elicit their editorial response. She describes how she taught her class about major sentence errors--comma splices, run-ons, and fragments--through her Sentence Meditation exercise, a rendition…

  16. Having Fun with Error Analysis

    ERIC Educational Resources Information Center

    Siegel, Peter

    2007-01-01

    We present a fun activity that can be used to introduce students to error analysis: the M&M game. Students are told to estimate the number of individual candies plus uncertainty in a bag of M&M's. The winner is the group whose estimate brackets the actual number with the smallest uncertainty. The exercise produces enthusiastic discussions and…

  17. RM2: rms error comparisons

    NASA Technical Reports Server (NTRS)

    Rice, R. F.

    1976-01-01

    The root-mean-square error performance measure is used to compare the relative performance of several widely known source coding algorithms with the RM2 image data compression system. The results demonstrate that RM2 has a uniformly significant performance advantage.

  18. The Zero Product Principle Error.

    ERIC Educational Resources Information Center

    Padula, Janice

    1996-01-01

    Argues that the challenge for teachers of algebra in Australia is to find ways of making the structural aspects of algebra accessible to a greater percentage of students. Uses the zero product principle to provide an example of a common student error grounded in the difficulty of understanding the structure of algebra. (DDR)

  19. Competing Criteria for Error Gravity.

    ERIC Educational Resources Information Center

    Hughes, Arthur; Lascaratou, Chryssoula

    1982-01-01

    Presents study in which native-speaker teachers of English, Greek teachers of English, and English native-speakers who were not teachers judged seriousness of errors made by Greek-speaking students of English in their last year of high school. Results show native English speakers were more lenient than Greek teachers, and three groups differed in…

  20. What Is a Reading Error?

    ERIC Educational Resources Information Center

    Labov, William; Baker, Bettina

    2010-01-01

    Early efforts to apply knowledge of dialect differences to reading stressed the importance of the distinction between differences in pronunciation and mistakes in reading. This study develops a method of estimating the probability that a given oral reading that deviates from the text is a true reading error by observing the semantic impact of the…

  1. ISMP Medication Error Report Analysis.

    PubMed

    2013-10-01

    These medication errors have occurred in health care facilities at least once. They will happen again-perhaps where you work. Through education and alertness of personnel and procedural safeguards, they can be avoided. You should consider publishing accounts of errors in your newsletters and/or presenting them at your inservice training programs. Your assistance is required to continue this feature. The reports described here were received through the Institute for Safe Medication Practices (ISMP) Medication Errors Reporting Program. Any reports published by ISMP will be anonymous. Comments are also invited; the writers' names will be published if desired. ISMP may be contacted at the address shown below. Errors, close calls, or hazardous conditions may be reported directly to ISMP through the ISMP Web site (www.ismp.org), by calling 800-FAIL-SAFE, or via e-mail at ismpinfo@ismp.org. ISMP guarantees the confidentiality and security of the information received and respects reporters' wishes as to the level of detail included in publications.

  2. ISMP Medication Error Report Analysis.

    PubMed

    2014-01-01

    These medication errors have occurred in health care facilities at least once. They will happen again-perhaps where you work. Through education and alertness of personnel and procedural safeguards, they can be avoided. You should consider publishing accounts of errors in your newsletters and/or presenting them at your inservice training programs. Your assistance is required to continue this feature. The reports described here were received through the Institute for Safe Medication Practices (ISMP) Medication Errors Reporting Program. Any reports published by ISMP will be anonymous. Comments are also invited; the writers' names will be published if desired. ISMP may be contacted at the address shown below. Errors, close calls, or hazardous conditions may be reported directly to ISMP through the ISMP Web site (www.ismp.org), by calling 800-FAIL-SAFE, or via e-mail at ismpinfo@ismp.org. ISMP guarantees the confidentiality and security of the information received and respects reporters' wishes as to the level of detail included in publications.

  3. ISMP Medication Error Report Analysis.

    PubMed

    2013-05-01

    These medication errors have occurred in health care facilities at least once. They will happen again-perhaps where you work. Through education and alertness of personnel and procedural safeguards, they can be avoided. You should consider publishing accounts of errors in your newsletters and/or presenting them at your inservice training programs. Your assistance is required to continue this feature. The reports described here were received through the Institute for Safe Medication Practices (ISMP) Medication Errors Reporting Program. Any reports published by ISMP will be anonymous. Comments are also invited; the writers' names will be published if desired. ISMP may be contacted at the address shown below. Errors, close calls, or hazardous conditions may be reported directly to ISMP through the ISMP Web site (www.ismp.org), by calling 800-FAIL-SAFE, or via e-mail at ismpinfo@ismp.org. ISMP guarantees the confidentiality and security of the information received and respects reporters' wishes as to the level of detail included in publications.

  4. ISMP Medication Error Report Analysis.

    PubMed

    2013-12-01

    These medication errors have occurred in health care facilities at least once. They will happen again-perhaps where you work. Through education and alertness of personnel and procedural safeguards, they can be avoided. You should consider publishing accounts of errors in your newsletters and/or presenting them at your inservice training programs. Your assistance is required to continue this feature. The reports described here were received through the Institute for Safe Medication Practices (ISMP) Medication Errors Reporting Program. Any reports published by ISMP will be anonymous. Comments are also invited; the writers' names will be published if desired. ISMP may be contacted at the address shown below. Errors, close calls, or hazardous conditions may be reported directly to ISMP through the ISMP Web site (www.ismp.org), by calling 800-FAIL-SAFE, or via e-mail at ismpinfo@ismp.org. ISMP guarantees the confidentiality and security of the information received and respects reporters' wishes as to the level of detail included in publications.

  5. ISMP Medication Error Report Analysis.

    PubMed

    2013-11-01

    These medication errors have occurred in health care facilities at least once. They will happen again-perhaps where you work. Through education and alertness of personnel and procedural safeguards, they can be avoided. You should consider publishing accounts of errors in your newsletters and/or presenting them at your inservice training programs. Your assistance is required to continue this feature. The reports described here were received through the Institute for Safe Medication Practices (ISMP) Medication Errors Reporting Program. Any reports published by ISMP will be anonymous. Comments are also invited; the writers' names will be published if desired. ISMP may be contacted at the address shown below. Errors, close calls, or hazardous conditions may be reported directly to ISMP through the ISMP Web site (www.ismp.org), by calling 800-FAIL-SAFE, or via e-mail at ismpinfo@ismp.org. ISMP guarantees the confidentiality and security of the information received and respects reporters' wishes as to the level of detail included in publications.

  6. ISMP Medication error report analysis.

    PubMed

    2013-04-01

    These medication errors have occurred in health care facilities at least once. They will happen again-perhaps where you work. Through education and alertness of personnel and procedural safeguards, they can be avoided. You should consider publishing accounts of errors in your newsletters and/or presenting them at your inservice training programs. Your assistance is required to continue this feature. The reports described here were received through the Institute for Safe Medication Practices (ISMP) Medication Errors Reporting Program. Any reports published by ISMP will be anonymous. Comments are also invited; the writers' names will be published if desired. ISMP may be contacted at the address shown below. Errors, close calls, or hazardous conditions may be reported directly to ISMP through the ISMP Web site (www.ismp.org), by calling 800-FAIL-SAFE, or via e-mail at ismpinfo@ismp.org. ISMP guarantees the confidentiality and security of the information received and respects reporters' wishes as to the level of detail included in publications.

  7. ISMP Medication Error Report Analysis.

    PubMed

    2013-06-01

    These medication errors have occurred in health care facilities at least once. They will happen again-perhaps where you work. Through education and alertness of personnel and procedural safeguards, they can be avoided. You should consider publishing accounts of errors in your newsletters and/or presenting them at your inservice training programs. Your assistance is required to continue this feature. The reports described here were received through the Institute for Safe Medication Practices (ISMP) Medication Errors Reporting Program. Any reports published by ISMP will be anonymous. Comments are also invited; the writers' names will be published if desired. ISMP may be contacted at the address shown below. Errors, close calls, or hazardous conditions may be reported directly to ISMP through the ISMP Web site (www.ismp.org), by calling 800-FAIL-SAFE, or via e-mail at ismpinfo@ismp.org. ISMP guarantees the confidentiality and security of the information received and respects reporters' wishes as to the level of detail included in publications.

  8. ISMP Medication Error Report Analysis.

    PubMed

    2013-01-01

    These medication errors have occurred in health care facilities at least once. They will happen again-perhaps where you work. Through education and alertness of personnel and procedural safeguards, they can be avoided. You should consider publishing accounts of errors in your newsletters and/or presenting them at your inservice training programs. Your assistance is required to continue this feature. The reports described here were received through the Institute for Safe Medication Practices (ISMP) Medication Errors Reporting Program. Any reports published by ISMP will be anonymous. Comments are also invited; the writers' names will be published if desired. ISMP may be contacted at the address shown below. Errors, close calls, or hazardous conditions may be reported directly to ISMP through the ISMP Web site (www.ismp.org), by calling 800-FAIL-SAFE, or via e-mail at ismpinfo@ismp.org. ISMP guarantees the confidentiality and security of the information received and respects reporters' wishes as to the level of detail included in publications.

  9. ISMP Medication Error Report Analysis.

    PubMed

    2013-02-01

    These medication errors have occurred in health care facilities at least once. They will happen again-perhaps where you work. Through education and alertness of personnel and procedural safeguards, they can be avoided. You should consider publishing accounts of errors in your newsletters and/or presenting them at your inservice training programs. Your assistance is required to continue this feature. The reports described here were received through the Institute for Safe Medication Practices (ISMP) Medication Errors Reporting Program. Any reports published by ISMP will be anonymous. Comments are also invited; the writers' names will be published if desired. ISMP may be contacted at the address shown below. Errors, close calls, or hazardous conditions may be reported directly to ISMP through the ISMP Web site (www.ismp.org), by calling 800-FAIL-SAFE, or via e-mail at ismpinfo@ismp.org. ISMP guarantees the confidentiality and security of the information received and respects reporters' wishes as to the level of detail included in publications.

  10. ISMP Medication Error Report Analysis.

    PubMed

    2013-03-01

    These medication errors have occurred in health care facilities at least once. They will happen again-perhaps where you work. Through education and alertness of personnel and procedural safeguards, they can be avoided. You should consider publishing accounts of errors in your newsletters and/or presenting them at your inservice training programs. Your assistance is required to continue this feature. The reports described here were received through the Institute for Safe Medication Practices (ISMP) Medication Errors Reporting Program. Any reports published by ISMP will be anonymous. Comments are also invited; the writers' names will be published if desired. ISMP may be contacted at the address shown below. Errors, close calls, or hazardous conditions may be reported directly to ISMP through the ISMP Web site (www.ismp.org), by calling 800-FAIL-SAFE, or via e-mail at ismpinfo@ismp.org. ISMP guarantees the confidentiality and security of the information received and respects reporters' wishes as to the level of detail included in publications.

  11. ISMP Medication Error Report Analysis.

    PubMed

    2013-09-01

    These medication errors have occurred in health care facilities at least once. They will happen again-perhaps where you work. Through education and alertness of personnel and procedural safeguards, they can be avoided. You should consider publishing accounts of errors in your newsletters and/or presenting them at your inservice training programs. Your assistance is required to continue this feature. The reports described here were received through the Institute for Safe Medication Practices (ISMP) Medication Errors Reporting Program. Any reports published by ISMP will be anonymous. Comments are also invited; the writers' names will be published if desired. ISMP may be contacted at the address shown below. Errors, close calls, or hazardous conditions may be reported directly to ISMP through the ISMP Web site (www.ismp.org), by calling 800-FAIL-SAFE, or via e-mail at ismpinfo@ismp.org. ISMP guarantees the confidentiality and security of the information received and respects reporters' wishes as to the level of detail included in publications.

  12. ISMP Medication Error Report Analysis.

    PubMed

    2013-07-01

    These medication errors have occurred in health care facilities at least once. They will happen again-perhaps where you work. Through education and alertness of personnel and procedural safeguards, they can be avoided. You should consider publishing accounts of errors in your newsletters and/or presenting them at your inservice training programs. Your assistance is required to continue this feature. The reports described here were received through the Institute for Safe Medication Practices (ISMP) Medication Errors Reporting Program. Any reports published by ISMP will be anonymous. Comments are also invited; the writers' names will be published if desired. ISMP may be contacted at the address shown below. Errors, close calls, or hazardous conditions may be reported directly to ISMP through the ISMP Web site (www.ismp.org), by calling 800-FAIL-SAFE, or via e-mail at ismpinfo@ismp.org. ISMP guarantees the confidentiality and security of the information received and respects reporters' wishes as to the level of detail included in publications.

  13. Reduced discretization error in HZETRN

    SciTech Connect

    Slaba, Tony C.; Blattnig, Steve R.; Tweed, John

    2013-02-01

    The deterministic particle transport code HZETRN is an efficient analysis tool for studying the effects of space radiation on humans, electronics, and shielding materials. In a previous work, numerical methods in the code were reviewed, and new methods were developed that further improved efficiency and reduced overall discretization error. It was also shown that the remaining discretization error could be attributed to low energy light ions (A < 4) with residual ranges smaller than the physical step-size taken by the code. Accurately resolving the spectrum of low energy light particles is important in assessing risk associated with astronaut radiation exposure. In this work, modifications to the light particle transport formalism are presented that accurately resolve the spectrum of low energy light ion target fragments. The modified formalism is shown to significantly reduce overall discretization error and allows a physical approximation to be removed. For typical step-sizes and energy grids used in HZETRN, discretization errors for the revised light particle transport algorithms are shown to be less than 4% for aluminum and water shielding thicknesses as large as 100 g/cm² exposed to both solar particle event and galactic cosmic ray environments.

  14. The Errors of Our Ways

    ERIC Educational Resources Information Center

    Kane, Michael

    2011-01-01

    Errors don't exist in our data, but they serve a vital function. Reality is complicated, but our models need to be simple in order to be manageable. We assume that attributes are invariant over some conditions of observation, and once we do that we need some way of accounting for the variability in observed scores over these conditions of…

  15. THE SIGNIFICANCE OF LEARNER'S ERRORS.

    ERIC Educational Resources Information Center

    CORDER, S.P.

    ERRORS (NOT MISTAKES) MADE IN BOTH SECOND LANGUAGE LEARNING AND CHILD LANGUAGE ACQUISITION PROVIDE EVIDENCE THAT A LEARNER USES A DEFINITE SYSTEM OF LANGUAGE AT EVERY POINT IN HIS DEVELOPMENT. THIS SYSTEM, OR "BUILT-IN SYLLABUS," MAY YIELD A MORE EFFICIENT SEQUENCE THAN THE INSTRUCTOR-GENERATED SEQUENCE BECAUSE IT IS MORE MEANINGFUL TO THE…

  16. Theory of Test Translation Error

    ERIC Educational Resources Information Center

    Solano-Flores, Guillermo; Backhoff, Eduardo; Contreras-Nino, Luis Angel

    2009-01-01

    In this article, we present a theory of test translation whose intent is to provide the conceptual foundation for effective, systematic work in the process of test translation and test translation review. According to the theory, translation error is multidimensional; it is not simply the consequence of defective translation but an inevitable fact…

  17. Incorporating measurement error in n = 1 psychological autoregressive modeling.

    PubMed

    Schuurman, Noémi K; Houtveen, Jan H; Hamaker, Ellen L

    2015-01-01

    Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive parameters. We discuss two models that take measurement error into account: An autoregressive model with a white noise term (AR+WN), and an autoregressive moving average (ARMA) model. In a simulation study we compare the parameter recovery performance of these models, and compare this performance for both a Bayesian and frequentist approach. We find that overall, the AR+WN model performs better. Furthermore, we find that for realistic (i.e., small) sample sizes, psychological research would benefit from a Bayesian approach in fitting these models. Finally, we illustrate the effect of disregarding measurement error in an AR(1) model by means of an empirical application on mood data in women. We find that, depending on the person, approximately 30-50% of the total variance was due to measurement error, and that disregarding this measurement error results in a substantial underestimation of the autoregressive parameters.
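
    A minimal simulation of the attenuation effect described above, under assumed parameter values: a series generated from an AR(1) process is contaminated with white measurement noise, and the lag-1 autocorrelation of the noisy series underestimates the true autoregressive parameter, which is the bias the AR+WN and ARMA models are meant to correct.

    import numpy as np

    rng = np.random.default_rng(4)
    phi, n = 0.6, 5000                        # true AR(1) parameter, series length (assumed)
    innov_sd, meas_sd = 1.0, 1.0              # process and measurement noise SDs (assumed)

    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(0.0, innov_sd)
    y = x + rng.normal(0.0, meas_sd, n)       # observed series with measurement error

    def lag1_autocorr(series):
        s = series - series.mean()
        return float(np.sum(s[1:] * s[:-1]) / np.sum(s * s))

    print("true phi                  :", phi)
    print("lag-1 autocorr of true x  :", round(lag1_autocorr(x), 3))
    print("lag-1 autocorr of noisy y :", round(lag1_autocorr(y), 3))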

  18. Slope Error Measurement Tool for Solar Parabolic Trough Collectors: Preprint

    SciTech Connect

    Stynes, J. K.; Ihas, B.

    2012-04-01

    The National Renewable Energy Laboratory (NREL) has developed an optical measurement tool for parabolic solar collectors that measures the combined errors due to absorber misalignment and reflector slope error. The combined absorber alignment and reflector slope errors are measured using a digital camera to photograph the reflected image of the absorber in the collector. Previous work using the image of the reflection of the absorber finds the reflector slope errors from the reflection of the absorber and an independent measurement of the absorber location. The accuracy of the reflector slope error measurement is thus dependent on the accuracy of the absorber location measurement. By measuring the combined reflector-absorber errors, the uncertainty in the absorber location measurement is eliminated. The related performance merit, the intercept factor, depends on the combined effects of the absorber alignment and reflector slope errors. Measuring the combined effect provides a simpler measurement and a more accurate input to the intercept factor estimate. The minimal equipment and setup required for this measurement technique make it ideal for field measurements.

  19. Sampling Errors in Satellite-derived Infrared Sea Surface Temperatures

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Minnett, P. J.

    2014-12-01

    Sea Surface Temperature (SST) measured from satellites has been playing a crucial role in understanding geophysical phenomena. Generating SST Climate Data Records (CDRs) is considered to be the application that imposes the most stringent requirements on data accuracy. For infrared SSTs, sampling uncertainties caused by cloud presence and persistence generate errors. In addition, for sensors with narrow swaths, the swath gap will act as another sampling error source. This study is concerned with quantifying and understanding such sampling errors, which are important for SST CDR generation and for a wide range of satellite SST users. In order to quantify these errors, a reference Level 4 SST field (Multi-scale Ultra-high Resolution SST) is sampled by using realistic swath and cloud masks of the Moderate Resolution Imaging Spectroradiometer (MODIS) and the Advanced Along Track Scanning Radiometer (AATSR). Global and regional SST uncertainties are studied by assessing the sampling error at different temporal and spatial resolutions (7 spatial resolutions from 4 kilometers to 5.0° at the equator and 5 temporal resolutions from daily to monthly). Global annual and seasonal mean sampling errors are large in the high latitude regions, especially the Arctic, and have geographical distributions that are most likely related to stratus cloud occurrence and persistence. The region between 30°N and 30°S has smaller errors compared to higher latitudes, except for the Tropical Instability Wave area, where persistent negative errors are found. Important differences in sampling errors are also found between the broad and narrow swath scan patterns and between day and night fields. This is the first time that realistic magnitudes of the sampling errors have been quantified. Future improvement in the accuracy of SST products will benefit from this quantification.

  20. Prospective, multidisciplinary recording of perioperative errors in cerebrovascular surgery: is error in the eye of the beholder?

    PubMed

    Michalak, Suzanne M; Rolston, John D; Lawton, Michael T

    2016-06-01

    OBJECT Surgery requires careful coordination of multiple team members, each playing a vital role in mitigating errors. Previous studies have focused on eliciting errors from only the attending surgeon, likely missing events observed by other team members. METHODS Surveys were administered to the attending surgeon, resident surgeon, anesthesiologist, and nursing staff immediately following each of 31 cerebrovascular surgeries; participants were instructed to record any deviation from optimal course (DOC). DOCs were categorized and sorted by reporter and perioperative timing, then correlated with delays and outcome measures. RESULTS Errors were recorded in 93.5% of the 31 cases surveyed. The number of errors recorded per case ranged from 0 to 8, with an average of 3.1 ± 2.1 errors (± SD). Overall, technical errors were most common (24.5%), followed by communication (22.4%), management/judgment (16.0%), and equipment (11.7%). The resident surgeon reported the most errors (52.1%), followed by the circulating nurse (31.9%), the attending surgeon (26.6%), and the anesthesiologist (14.9%). The attending and resident surgeons were most likely to report technical errors (52% and 30.6%, respectively), while anesthesiologists and circulating nurses mostly reported anesthesia errors (36%) and communication errors (50%), respectively. The overlap in reported errors was 20.3%. If this study had used only the surveys completed by the attending surgeon, as in prior studies, 72% of equipment errors, 90% of anesthesia and communication errors, and 100% of nursing errors would have been missed. In addition, it would have been concluded that errors occurred in only 45.2% of cases (rather than 93.5%) and that errors resulting in a delay occurred in 3.2% of cases instead of the 74.2% calculated using data from 4 team members. Compiled results from all team members yielded significant correlations between technical DOCs and prolonged hospital stays and reported and actual delays (p = 0

  1. Toward a cognitive taxonomy of medical errors.

    PubMed

    Zhang, Jiajie; Patel, Vimla L; Johnson, Todd R; Shortliffe, Edward H

    2002-01-01

    One critical step in addressing and resolving the problems associated with human errors is the development of a cognitive taxonomy of such errors. In the case of errors, such a taxonomy may be developed (1) to categorize all types of errors along cognitive dimensions, (2) to associate each type of error with a specific underlying cognitive mechanism, (3) to explain why, and even predict when and where, a specific error will occur, and (4) to generate intervention strategies for each type of error. Based on Reason's (1992) definition of human errors and Norman's (1986) cognitive theory of human action, we have developed a preliminary action-based cognitive taxonomy of errors that largely satisfies these four criteria in the domain of medicine. We discuss initial steps for applying this taxonomy to develop an online medical error reporting system that not only categorizes errors but also identifies problems and generates solutions.

  2. Error and its meaning in forensic science.

    PubMed

    Christensen, Angi M; Crowder, Christian M; Ousley, Stephen D; Houck, Max M

    2014-01-01

    The discussion of "error" has gained momentum in forensic science in the wake of the Daubert guidelines and has intensified with the National Academy of Sciences' Report. Error has many different meanings, and too often, forensic practitioners themselves as well as the courts misunderstand scientific error and statistical error rates, often confusing them with practitioner error (or mistakes). Here, we present an overview of these concepts as they pertain to forensic science applications, discussing the difference between practitioner error (including mistakes), instrument error, statistical error, and method error. We urge forensic practitioners to ensure that potential sources of error and method limitations are understood and clearly communicated and advocate that the legal community be informed regarding the differences between interobserver errors, uncertainty, variation, and mistakes.

  3. Enthalpy difference between conformations of normal alkanes: effects of basis set and chain length on intramolecular basis set superposition error

    NASA Astrophysics Data System (ADS)

    Balabin, Roman M.

    2011-03-01

    The quantum chemistry of conformation equilibrium is a field where great accuracy (better than 100 cal mol⁻¹) is needed because the energy difference between molecular conformers rarely exceeds 1000-3000 cal mol⁻¹. The conformation equilibrium of straight-chain (normal) alkanes is of particular interest and importance for modern chemistry. In this paper, an extra error source for high-quality ab initio (first principles) and DFT calculations of the conformation equilibrium of normal alkanes, namely the intramolecular basis set superposition error (BSSE), is discussed. In contrast to out-of-plane vibrations in benzene molecules, diffuse functions on carbon and hydrogen atoms were found to greatly reduce the relative BSSE of n-alkanes. The corrections due to the intramolecular BSSE were found to be almost identical for the MP2, MP4, and CCSD(T) levels of theory. Their cancelation is expected when CCSD(T)/CBS (CBS, complete basis set) energies are evaluated by addition schemes. For larger normal alkanes (N > 12), the magnitude of the BSSE correction was found to be up to three times larger than the relative stability of the conformer; in this case, the basis set superposition error led to a two orders of magnitude difference in conformer abundance. No error cancelation due to the basis set superposition was found. A comparison with amino acid, peptide, and protein data was provided.

  4. Transfer Error and Correction Approach in Mobile Network

    NASA Astrophysics Data System (ADS)

    Xiao-kai, Wu; Yong-jin, Shi; Da-jin, Chen; Bing-he, Ma; Qi-li, Zhou

    With the development of information technology and social progress, the demand for information has become increasingly diverse: people want to be able to communicate easily, quickly, and flexibly via voice, data, images, video, and other means, wherever and whenever they choose. Because visual information is direct and vivid, image and video transmission has received widespread attention. Although third-generation mobile communication systems and IP networks have emerged and developed rapidly, making video communication a major service of wireless communications, real wireless and IP channels introduce errors, such as errors caused by multipath fading in the wireless channel and packet loss in the IP network. Because of channel bandwidth limitations, video data must be heavily compressed before transmission, and the compressed bitstream is very sensitive to transmission errors, so channel errors can cause a serious decline in image quality.

  5. Error resiliency of distributed video coding in wireless video communication

    NASA Astrophysics Data System (ADS)

    Ye, Shuiming; Ouaret, Mourad; Dufaux, Frederic; Ansorge, Michael; Ebrahimi, Touradj

    2008-08-01

    Distributed Video Coding (DVC) is a new paradigm in video coding, based on the Slepian-Wolf and Wyner-Ziv theorems. DVC offers a number of potential advantages: flexible partitioning of the complexity between the encoder and decoder, robustness to channel errors due to intrinsic joint source-channel coding, codec independent scalability, and multi-view coding without communications between the cameras. In this paper, we evaluate the performance of DVC in an error-prone wireless communication environment. We also present a hybrid spatial and temporal error concealment approach for DVC. Finally, we perform a comparison with a state-of-the-art AVC/H.264 video coding scheme in the presence of transmission errors.

  6. Addressee Errors in ATC Communications: The Call Sign Problem

    NASA Technical Reports Server (NTRS)

    Monan, W. P.

    1983-01-01

    Communication errors involving aircraft call signs were portrayed in reports of 462 hazardous incidents voluntarily submitted to the ASRS during an approximate four-year period. These errors resulted in confusion, disorder, and uncoordinated traffic conditions and produced the following types of operational anomalies: altitude deviations, wrong-way headings, aborted takeoffs, go-arounds, runway incursions, missed crossing altitude restrictions, descents toward high terrain, and traffic conflicts in flight and on the ground. Analysis of the report set resulted in identification of five categories of errors involving call signs: (1) faulty radio usage techniques, (2) call sign loss or smearing due to frequency congestion, (3) confusion resulting from similar sounding call signs, (4) airmen misses of call signs leading to failures to acknowledge or read back, and (5) controller failures regarding confirmation of acknowledgements or readbacks. These error categories are described in detail and several associated hazard mitigating measures that might be taken are considered.

  7. Sensitivity to Error Fields in NSTX High Beta Plasmas

    SciTech Connect

    Park, Jong-Kyu; Menard, Jonathan E.; Gerhardt, Stefan P.; Buttery, Richard J.; Sabbagh, Steve A.; Bell, Steve E.; LeBlanc, Benoit P.

    2011-11-07

    It was found that error field threshold decreases for high β in NSTX, although the density correlation in conventional threshold scaling implies the threshold would increase since higher β plasmas in our study have higher plasma density. This greater sensitivity to error field in higher β plasmas is due to error field amplification by plasmas. When the effect of amplification is included with ideal plasma response calculations, the conventional density correlation can be restored and threshold scaling becomes more consistent with low β plasmas. However, it was also found that the threshold can be significantly changed depending on plasma rotation. When plasma rotation was reduced by non-resonant magnetic braking, the further increase of sensitivity to error field was observed.

  8. Characterization of errors in a coupled snow hydrology-microwave emission model

    USGS Publications Warehouse

    Andreadis, K.M.; Liang, D.; Tsang, L.; Lettenmaier, D.P.; Josberger, E.G.

    2008-01-01

    Traditional approaches to the direct estimation of snow properties from passive microwave remote sensing have been plagued by limitations such as the tendency of estimates to saturate for moderately deep snowpacks and the effects of mixed land cover within remotely sensed pixels. An alternative approach is to assimilate satellite microwave emission observations directly, which requires embedding an accurate microwave emissions model into a hydrologic prediction scheme, as well as quantitative information of model and observation errors. In this study a coupled snow hydrology [Variable Infiltration Capacity (VIC)] and microwave emission [Dense Media Radiative Transfer (DMRT)] model are evaluated using multiscale brightness temperature (TB) measurements from the Cold Land Processes Experiment (CLPX). The ability of VIC to reproduce snowpack properties is shown with the use of snow pit measurements, while TB model predictions are evaluated through comparison with Ground-Based Microwave Radiometer (GBMR), aircraft [Polarimetric Scanning Radiometer (PSR)], and satellite [Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E)] TB measurements. Limitations of the model at the point scale were not as evident when comparing areal estimates. The coupled model was able to reproduce the TB spatial patterns observed by PSR in two of three sites. However, this was mostly due to the presence of relatively dense forest cover. An interesting result occurs when examining the spatial scaling behavior of the higher-resolution errors; the satellite-scale error is well approximated by the mode of the (spatial) histogram of errors at the smaller scale. In addition, TB prediction errors were almost invariant when aggregated to the satellite scale, while forest-cover fractions greater than 30% had a significant effect on TB predictions. © 2008 American Meteorological Society.

  9. Least squares evaluations for form and profile errors of ellipse using coordinate data

    NASA Astrophysics Data System (ADS)

    Liu, Fei; Xu, Guanghua; Liang, Lin; Zhang, Qing; Liu, Dan

    2016-09-01

    To improve the measurement and evaluation of form error in an elliptic section, an evaluation method based on least squares fitting is investigated to analyze the form and profile errors of an ellipse using coordinate data. Two error indicators for defining ellipticity are discussed, namely the form error and the profile error, and the difference between the two is treated as the main parameter for evaluating the machining quality of the surface and profile. Because the form error and the profile error rely on different evaluation benchmarks, the major axis and the foci, rather than the centre of the ellipse, are used as the evaluation benchmarks, allowing a tolerance range to be evaluated accurately with the form error and the profile error of the workpiece separated. Additionally, an evaluation program based on the LS model is developed to extract the form error and the profile error of the elliptic section, and it is well suited to separating the two errors within a standard program. Finally, the evaluation method for the form and profile errors of the ellipse is applied to the measurement of the skirt line of a piston, and the results indicate the effectiveness of the evaluation. This approach provides new evaluation indicators for measuring the form and profile errors of an ellipse, offers better accuracy, and can thus be used to address the difficulty of measuring and evaluating pistons in industrial production.
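    A minimal sketch of a focus-based least-squares fit, in the spirit of using the foci and major-axis length rather than the centre as the benchmark, is given below. The coordinate data, noise level, and peak-to-valley indicator are hypothetical illustrations rather than the indicators defined in the paper; only numpy and scipy are assumed.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical measured coordinates of a nominally elliptic section
# (e.g. a piston skirt profile), with a little measurement noise added.
rng = np.random.default_rng(1)
theta = np.linspace(0.0, 2.0 * np.pi, 180, endpoint=False)
a_true, b_true = 50.0, 30.0  # mm, semi-axes of the nominal ellipse
x = a_true * np.cos(theta) + rng.normal(0.0, 0.02, theta.size)
y = b_true * np.sin(theta) + rng.normal(0.0, 0.02, theta.size)

def residuals(p, x, y):
    """Deviation of |P-F1| + |P-F2| from the major-axis length 2a."""
    f1x, f1y, f2x, f2y, two_a = p
    d = np.hypot(x - f1x, y - f1y) + np.hypot(x - f2x, y - f2y)
    return d - two_a

# Initial guess: foci on the x-axis near the nominal focal distance.
c0 = np.sqrt(a_true**2 - b_true**2)
p0 = np.array([-c0, 0.0, c0, 0.0, 2.0 * a_true])
fit = least_squares(residuals, p0, args=(x, y))

# A simple peak-to-valley indicator of how far the measured points deviate
# from the best-fit ellipse (a crude stand-in for a form-error value).
r = residuals(fit.x, x, y)
print(f"peak-to-valley deviation: {r.max() - r.min():.4f} mm")
```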

  10. An Additive Manufacturing Test Artifact

    PubMed Central

    Moylan, Shawn; Slotwinski, John; Cooke, April; Jurrens, Kevin; Donmez, M Alkan

    2014-01-01

    A test artifact, intended for standardization, is proposed for the purpose of evaluating the performance of additive manufacturing (AM) systems. A thorough analysis of previously proposed AM test artifacts as well as experience with machining test artifacts has inspired the design of the proposed test artifact. This new artifact is designed to provide a characterization of the capabilities and limitations of an AM system, as well as to allow system improvement by linking specific errors measured in the test artifact to specific sources in the AM system. The proposed test artifact has been built in multiple materials using multiple AM technologies. The results of several of the builds are discussed, demonstrating how the measurement results can be used to characterize and improve a specific AM system. PMID:26601039

  11. An Additive Manufacturing Test Artifact.

    PubMed

    Moylan, Shawn; Slotwinski, John; Cooke, April; Jurrens, Kevin; Donmez, M Alkan

    2014-01-01

    A test artifact, intended for standardization, is proposed for the purpose of evaluating the performance of additive manufacturing (AM) systems. A thorough analysis of previously proposed AM test artifacts as well as experience with machining test artifacts has inspired the design of the proposed test artifact. This new artifact is designed to provide a characterization of the capabilities and limitations of an AM system, as well as to allow system improvement by linking specific errors measured in the test artifact to specific sources in the AM system. The proposed test artifact has been built in multiple materials using multiple AM technologies. The results of several of the builds are discussed, demonstrating how the measurement results can be used to characterize and improve a specific AM system.

  12. Enhanced orbit determination filter sensitivity analysis: Error budget development

    NASA Technical Reports Server (NTRS)

    Estefan, J. A.; Burkhart, P. D.

    1994-01-01

    An error budget analysis is presented that quantifies the effects of different error sources in the orbit determination process when the recently developed enhanced orbit determination filter is used to reduce radio metric data. The enhanced filter strategy differs from more traditional filtering methods in that nearly all of the principal ground system calibration errors affecting the data are represented as filter parameters. Error budget computations were performed for a Mars Observer interplanetary cruise scenario for three cases: one in which only X-band (8.4-GHz) Doppler data were used to determine the spacecraft's orbit, one in which X-band ranging data were used exclusively, and a combined case in which the ranging data were used in addition to the Doppler data. In all three cases, the filter model was assumed to be a correct representation of the physical world. Random nongravitational accelerations were found to be the largest source of error contributing to the individual error budgets. Other significant contributors, depending on the data strategy used, were solar-radiation pressure coefficient uncertainty, random earth-orientation calibration errors, and Deep Space Network (DSN) station location uncertainty.
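    The basic mechanics of an error budget, attributing portions of the total error to individual sources, can be illustrated with a small sketch. The source names, magnitudes, and root-sum-square combination below are hypothetical illustrations (the combination assumes independent error sources) and are not taken from the study.

```python
import math

# Hypothetical per-source contributions to a position-error budget (km).
contributions = {
    "random nongravitational accelerations": 1.8,
    "solar-radiation pressure coefficient": 0.9,
    "earth-orientation calibration": 0.6,
    "DSN station locations": 0.4,
}

# Root-sum-square combination, valid only if the error sources are independent.
total = math.sqrt(sum(v**2 for v in contributions.values()))

for name, value in sorted(contributions.items(), key=lambda kv: -kv[1]):
    share = 100.0 * value**2 / total**2
    print(f"{name:40s} {value:5.2f} km  ({share:4.1f}% of variance)")
print(f"{'root-sum-square total':40s} {total:5.2f} km")
```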

  13. Toward improved statistical treatments of wind power forecast errors

    NASA Astrophysics Data System (ADS)

    Hart, E.; Jacobson, M. Z.

    2011-12-01

    The ability of renewable resources to reliably supply electric power demand is of considerable interest in the context of growing renewable portfolio standards and the potential for future carbon markets. Toward this end, a number of probabilistic models have been applied to the problem of grid integration of intermittent renewables, such as wind power. Most of these models rely on simple Markov or autoregressive models of wind forecast errors. While these models generally capture the bulk statistics of wind forecast errors, they often fail to reproduce accurate ramp rate distributions and do not accurately describe extreme forecast error events, both of which are of considerable interest to those assessing system reliability. The problem often lies in characterizing and reproducing not only the magnitude of wind forecast errors, but also the timing or phase errors (i.e., when a front passes over a wind farm). Here we compare time series of wind power data produced using different forecast error models to determine the best approach for capturing errors in both magnitude and phase. Additionally, new metrics are presented to characterize forecast quality with respect to both considerations.
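    As a concrete reference point for the "simple autoregressive models" mentioned above, the sketch below generates a synthetic forecast-error series with an AR(1) model and then inspects the ramp and extreme-error statistics that such models often get wrong. The parameter values are hypothetical, and this is not one of the models compared in the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical AR(1) model of hourly wind power forecast errors,
# e_t = phi * e_{t-1} + w_t, expressed as a fraction of rated capacity.
phi, sigma_w, n_hours = 0.8, 0.05, 24 * 365
errors = np.zeros(n_hours)
for t in range(1, n_hours):
    errors[t] = phi * errors[t - 1] + rng.normal(0.0, sigma_w)

# Bulk statistics are easy to match with such a model...
print("std of forecast error:", errors.std())

# ...but ramp-rate and extreme-error behaviour must be checked separately,
# which is the weakness the abstract points to.
ramps = np.diff(errors)
print("99.9th percentile |ramp|:", np.quantile(np.abs(ramps), 0.999))
print("max absolute error:", np.abs(errors).max())
```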

  14. Correcting false memories: Errors must be noticed and replaced.

    PubMed

    Mullet, Hillary G; Marsh, Elizabeth J

    2016-04-01

    Memory can be unreliable. For example, after reading The new baby stayed awake all night, people often misremember that the new baby cried all night (Brewer, 1977); similarly, after hearing bed, rest, and tired, people often falsely remember that sleep was on the list (Roediger & McDermott, 1995). In general, such false memories are difficult to correct, persisting despite warnings and additional study opportunities. We argue that errors must first be detected to be corrected; consistent with this argument, two experiments showed that false memories were nearly eliminated when conditions facilitated comparisons between participants' errors and corrective feedback (e.g., immediate trial-by-trial feedback that allowed direct comparisons between their responses and the correct information). However, knowledge that they had made an error was insufficient; unless the feedback message also contained the correct answer, the rate of false memories remained relatively constant. On the one hand, there is nothing special about correcting false memories: simply labeling an error as "wrong" is also insufficient for correcting other memory errors, including misremembered facts or mistranslations. However, unlike these other types of errors--which often benefit from the spacing afforded by delayed feedback--false memories require a special consideration: Learners may fail to notice their errors unless the correction conditions specifically highlight them.

  15. Measuring worst-case errors in a robot workcell

    SciTech Connect

    Simon, R.W.; Brost, R.C.; Kholwadwala, D.K.

    1997-10-01

    Errors in model parameters, sensing, and control are inevitably present in real robot systems. These errors must be considered in order to automatically plan robust solutions to many manipulation tasks. Lozano-Perez, Mason, and Taylor proposed a formal method for synthesizing robust actions in the presence of uncertainty; this method has been extended by several subsequent researchers. All of these results presume the existence of worst-case error bounds that describe the maximum possible deviation between the robot's model of the world and reality. This paper examines the problem of measuring these error bounds for a real robot workcell. These measurements are difficult, because of the desire to completely contain all possible deviations while avoiding bounds that are overly conservative. The authors present a detailed description of a series of experiments that characterize and quantify the possible errors in visual sensing and motion control for a robot workcell equipped with standard industrial robot hardware. In addition to providing a means for measuring these specific errors, these experiments shed light on the general problem of measuring worst-case errors.
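    As a hedged illustration of the tension described above between containing all possible deviations and avoiding overly conservative bounds, the sketch below derives a naive empirical bound from hypothetical repeated measurements. The data distribution and safety margin are invented for illustration and do not reflect the paper's experimental procedure.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical repeated measurements of the deviation (mm) between the
# commanded and the visually sensed position of a part in the workcell.
deviations = rng.normal(loc=0.05, scale=0.12, size=500)

# A naive worst-case bound: the largest observed absolute deviation.
observed_bound = np.abs(deviations).max()

# In practice, a finite sample may miss rare deviations, so a margin is
# applied; choosing it is exactly the containment-vs-conservatism trade-off.
margin = 1.2
worst_case_bound = margin * observed_bound
print(f"observed max |deviation|: {observed_bound:.3f} mm")
print(f"assumed worst-case bound: {worst_case_bound:.3f} mm")
```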

  16. Neuromotor Noise Is Malleable by Amplifying Perceived Errors

    PubMed Central

    Zhang, Zhaoran; Abe, Masaki O.; Sternad, Dagmar

    2016-01-01

    Variability in motor performance results from the interplay of error correction and neuromotor noise. This study examined whether visual amplification of error, previously shown to improve performance, affects not only error correction, but also neuromotor noise, typically regarded as inaccessible to intervention. Seven groups of healthy individuals, with six participants in each group, practiced a virtual throwing task for three days until reaching a performance plateau. Over three more days of practice, six of the groups received different magnitudes of visual error amplification; three of these groups also had noise added. An additional control group was not subjected to any manipulations for all six practice days. The results showed that the control group did not improve further after the first three practice days, but the error amplification groups continued to decrease their error under the manipulations. Analysis of the temporal structure of participants’ corrective actions based on stochastic learning models revealed that these performance gains were attained by reducing neuromotor noise and, to a considerably lesser degree, by increasing the size of corrective actions. Based on these results, error amplification presents a promising intervention to improve motor function by decreasing neuromotor noise after performance has reached an asymptote. These results are relevant for patients with neurological disorders and the elderly. More fundamentally, these results suggest that neuromotor noise may be accessible to practice interventions. PMID:27490197
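    The interplay of error correction and neuromotor noise described above is often formalized with a simple trial-to-trial update rule of the form e_{n+1} = (1 - B) e_n + noise. The sketch below simulates such a rule under hypothetical parameters to show how the correction gain and the noise level separately shape variability; it is a generic stochastic learning model, not the specific model fit in the study.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_errors(gain, noise_sd, n_trials=1000):
    """Trial-to-trial errors under e_{n+1} = (1 - gain) * e_n + noise."""
    e = np.zeros(n_trials)
    for n in range(1, n_trials):
        e[n] = (1.0 - gain) * e[n - 1] + rng.normal(0.0, noise_sd)
    return e

# Compare larger corrective actions (higher gain) with reduced noise:
print("baseline           std:", simulate_errors(gain=0.2, noise_sd=1.0).std())
print("larger corrections std:", simulate_errors(gain=0.5, noise_sd=1.0).std())
print("reduced noise      std:", simulate_errors(gain=0.2, noise_sd=0.7).std())
```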

  17. Analyzing the errors of DFT approximations for compressed water systems

    SciTech Connect

    Alfè, D.; Bartók, A. P.; Csányi, G.; Gillan, M. J.

    2014-07-07

    We report an extensive study of the errors of density functional theory (DFT) approximations for compressed water systems. The approximations studied are based on the widely used PBE and BLYP exchange-correlation functionals, and we characterize their errors before and after correction for 1- and 2-body errors, the corrections being performed using the methods of Gaussian approximation potentials. The errors of the uncorrected and corrected approximations are investigated for two related types of water system: first, the compressed liquid at temperature 420 K and density 1.245 g/cm³ where the experimental pressure is 15 kilobars; second, thermal samples of compressed water clusters from the trimer to the 27-mer. For the liquid, we report four first-principles molecular dynamics simulations, two generated with the uncorrected PBE and BLYP approximations and a further two with their 1- and 2-body corrected counterparts. The errors of the simulations are characterized by comparing with experimental data for the pressure, with neutron-diffraction data for the three radial distribution functions, and with quantum Monte Carlo (QMC) benchmarks for the energies of sets of configurations of the liquid in periodic boundary conditions. The DFT errors of the configuration samples of compressed water clusters are computed using QMC benchmarks. We find that the 2-body and beyond-2-body errors in the liquid are closely related to similar errors exhibited by the clusters. For both the liquid and the clusters, beyond-2-body errors of DFT make a substantial contribution to the overall errors, so that correction for 1- and 2-body errors does not suffice to give a satisfactory description. For BLYP, a recent representation of 3-body energies due to Medders, Babin, and Paesani [J. Chem. Theory Comput. 9, 1103 (2013)] gives a reasonably good way of correcting for beyond-2-body errors, after which the remaining errors are typically 0.5 mE_h ≃ 15 meV/monomer for the liquid
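    The 1-body, 2-body, and beyond-2-body errors referred to above come from a many-body expansion of the cluster energy; the error at each order is the difference between the DFT and benchmark values of the corresponding term. The sketch below spells out that decomposition for a generic energy function; the toy energy and monomer labels are placeholders, not DFT or QMC values.

```python
from itertools import combinations

def many_body_terms(monomers, energy):
    """Split energy(monomers) into 1-body, 2-body and beyond-2-body parts."""
    e1 = sum(energy([m]) for m in monomers)
    e2 = sum(
        energy([a, b]) - energy([a]) - energy([b])
        for a, b in combinations(monomers, 2)
    )
    beyond2 = energy(monomers) - e1 - e2
    return e1, e2, beyond2

def toy_energy(cluster):
    """Placeholder cluster energy: monomer and pair terms plus a small
    3-body term so the beyond-2-body piece is nonzero (arbitrary units)."""
    e = -1.0 * len(cluster)
    e += -0.1 * len(list(combinations(cluster, 2)))
    e += 0.01 * len(list(combinations(cluster, 3)))
    return e

e1, e2, beyond2 = many_body_terms(["w1", "w2", "w3", "w4"], toy_energy)
print(f"1-body: {e1:.2f}  2-body: {e2:.2f}  beyond-2-body: {beyond2:.2f}")
```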

  18. QuorUM: An Error Corrector for Illumina Reads

    PubMed Central

    Marçais, Guillaume; Yorke, James A.; Zimin, Aleksey

    2015-01-01

    Motivation: Illumina sequencing data can provide high coverage of a genome by relatively short (most often 100 bp to 150 bp) reads at a low cost. Even with a low (advertised 1%) error rate, 100× coverage Illumina data on average contain an error in some read at every base in the genome. These errors make handling the data more complicated because they result in a large number of low-count erroneous k-mers in the reads. However, there is enough information in the reads to correct most of the sequencing errors, thus making subsequent use of the data (e.g., for mapping or assembly) easier. Here we use the term “error correction” to denote the reduction in errors due to both changes in individual bases and trimming of unusable sequence. We developed an error correction software called QuorUM. QuorUM is mainly aimed at error correcting Illumina reads for subsequent assembly. It is designed around the novel idea of minimizing the number of distinct erroneous k-mers in the output reads while preserving the most true k-mers, and we introduce a composite statistic π that measures how successful we are at achieving this dual goal. We evaluate the performance of QuorUM by correcting actual Illumina reads from genomes for which a reference assembly is available. Results: We produce trimmed and error-corrected reads that result in assemblies with longer contigs and fewer errors. We compared QuorUM against several published error correctors and found that it is the best performer in most of the metrics we use. QuorUM is efficiently implemented, making use of current multi-core computing architectures, and it is suitable for large data sets (1 billion bases checked and corrected per day per core). We also demonstrate that a third-party assembler (SOAPdenovo) benefits significantly from using QuorUM error-corrected reads. QuorUM error-corrected reads result in a factor of 1.1 to 4 improvement in N50 contig size compared to using the original reads with SOAPdenovo for the data sets investigated.
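    The idea of treating low-count k-mers as likely sequencing errors, which underlies the goal of minimizing distinct erroneous k-mers, can be illustrated with a toy counter. The reads and count threshold below are fabricated for illustration; this is not QuorUM's algorithm.

```python
from collections import Counter

def kmer_counts(reads, k):
    """Count every k-mer occurring in a collection of reads."""
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

# Toy reads: high-coverage copies of one sequence plus one read with a
# single-base error, which introduces several never-seen-again k-mers.
reads = ["ACGTACGTAC"] * 20 + ["ACGTACCTAC"]
counts = kmer_counts(reads, k=5)

# k-mers seen only rarely are likely to contain sequencing errors.
threshold = 2
erroneous = {km for km, c in counts.items() if c < threshold}
print("distinct k-mers:", len(counts), " likely erroneous:", len(erroneous))
```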

  19. Reducing Error in Mail Surveys. ERIC Digest.

    ERIC Educational Resources Information Center

    Cui, Weiwei

    This Digest describes four types of errors in mail surveys and summarizes the ways they can be reduced. Any one of these sources of error can make survey results unacceptable. Sampling error is examined through inferential statistics applied to sample survey results. In general, increasing sample size will decrease sampling error when simple…

  20. Error Correction in Oral Classroom English Teaching

    ERIC Educational Resources Information Center

    Jing, Huang; Xiaodong, Hao; Yu, Liu

    2016-01-01

    Errors are inevitable in the process of language learning for Chinese students. Should we ignore the errors students make in learning English? As with other questions, different people hold different opinions. All teachers agree that errors students make in written English are not allowed. For the errors students make in oral…