Sample records for additional error due

  1. Sensitivity analysis of geometric errors in additive manufacturing medical models.

    PubMed

    Pinto, Jose Miguel; Arrieta, Cristobal; Andia, Marcelo E; Uribe, Sergio; Ramos-Grez, Jorge; Vargas, Alex; Irarrazaval, Pablo; Tejos, Cristian

    2015-03-01

    Additive manufacturing (AM) models are used in medical applications for surgical planning, prosthesis design and teaching. For these applications, the accuracy of the AM models is essential. Unfortunately, this accuracy is compromised due to errors introduced by each of the building steps: image acquisition, segmentation, triangulation, printing and infiltration. However, the contribution of each step to the final error remains unclear. We performed a sensitivity analysis comparing errors obtained from a reference with those obtained modifying parameters of each building step. Our analysis considered global indexes to evaluate the overall error, and local indexes to show how this error is distributed along the surface of the AM models. Our results show that the standard building process tends to overestimate the AM models, i.e., models are larger than the original structures. They also show that the triangulation resolution and the segmentation threshold are critical factors, and that the errors are concentrated at regions with high curvatures. Errors could be reduced by choosing better triangulation and printing resolutions, but there is an important need for modifying some of the standard building processes, particularly the segmentation algorithms.
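
    As a concrete illustration of the global and local error indexes mentioned above, the following sketch compares a reference anatomy with an AM model when both are available as point clouds. It is only a minimal example under assumed inputs: the spherical test geometry, the 0.2 mm overestimation and the use of nearest-neighbour distances as the local index are illustrative choices, not the authors' actual pipeline.

    ```python
    # Minimal sketch: global and local geometric-error indexes between a reference
    # surface and an AM model surface, both represented as point clouds.
    import numpy as np
    from scipy.spatial import cKDTree

    def surface_errors(reference_pts, model_pts):
        """Nearest-neighbour distance from each reference point to the model surface."""
        tree = cKDTree(model_pts)
        local_error, _ = tree.query(reference_pts)   # local index: per-point deviation
        global_error = local_error.mean()            # global index: mean absolute deviation
        return global_error, local_error

    # Toy geometry: a spherical "reference" and a slightly inflated "model"
    rng = np.random.default_rng(0)
    pts = rng.normal(size=(5000, 3))
    pts /= np.linalg.norm(pts, axis=1, keepdims=True)
    reference = 10.0 * pts            # radius 10 mm
    model = 10.2 * pts                # model overestimates the structure by 0.2 mm

    g, local = surface_errors(reference, model)
    print(f"global error {g:.3f} mm, max local error {local.max():.3f} mm")
    ```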

  2. Straightness error evaluation of additional constraints

    NASA Astrophysics Data System (ADS)

    Pei, Ling; Wang, Shenghuai; Liu, Yong

    2011-05-01

    The new generation of the Dimensional and Geometrical Product Specifications (GPS) and Verification standard system is based on both mathematical structure and metrology, and determining whether a product is conforming must be adapted to modern digital measuring instruments. When a geometric tolerance specification carries an additional constraint, for example straightness with an additional form requirement, the feature must also satisfy that additional form requirement within the tolerance zone. How closely the geometrical specification matches the functional specification determines the correctness of the measurement results. A methodology is adopted that evaluates the various forms, including the ideal features, the extracted features and their combinations, under an additional form constraint on straightness within the tolerance zone, so that correct acceptance decisions can be made for products. The results show that different combinations of these forms affect the acceptance decision on product qualification, and that appropriate matching of forms can satisfy the additional form requirements for product features.

  4. Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan

    2013-01-01

    The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as non-constant variance resulting from systematic errors leaking into random errors, and the lack of prediction capability. Therefore, the multiplicative error model is a better choice.
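
    A small synthetic experiment makes the contrast between the two error models concrete. The rain-rate distribution, error magnitudes and thresholds below are invented for illustration and are not the values used in the letter; the sketch only shows why residual variance is non-constant under the additive model when errors actually scale with rain rate.

    ```python
    # Illustrative only: synthetic daily rain with errors that scale with rain rate,
    # fitted with an additive model (Y = X + e) and a multiplicative, log-linear
    # model (ln Y = a + b ln X + e).
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.gamma(shape=0.8, scale=8.0, size=20000)        # "true" daily rain (mm/day)
    Y = X * np.exp(rng.normal(-0.1, 0.4, size=X.size))     # estimate with multiplicative error

    wet = (X > 0.1) & (Y > 0.1)                            # keep rainy days for the log fit
    x, y = X[wet], Y[wet]

    # Additive model: systematic part = mean bias, random part = residual spread
    add_bias = np.mean(y - x)
    add_sigma = np.std(y - x - add_bias)

    # Multiplicative model: regress ln Y on ln X
    b, a = np.polyfit(np.log(x), np.log(y), 1)
    mult_sigma = np.std(np.log(y) - (a + b * np.log(x)))

    print(f"additive: bias {add_bias:.2f} mm/day, residual std {add_sigma:.2f} mm/day")
    print(f"multiplicative: a {a:.2f}, b {b:.2f}, residual std {mult_sigma:.2f} (log units)")

    # Residual spread by rain-rate bin: strongly non-constant for the additive model,
    # roughly flat in log space for the multiplicative model.
    for lo, hi in [(0.1, 5.0), (5.0, 20.0), (20.0, 100.0)]:
        m = (x >= lo) & (x < hi)
        print(f"{lo:5.1f}-{hi:5.1f} mm/day: additive std {np.std(y[m] - x[m]):6.2f}, "
              f"log std {np.std(np.log(y[m]) - np.log(x[m])):.2f}")
    ```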

  5. Surface tension increment due to solute addition

    NASA Astrophysics Data System (ADS)

    Hsin, Wei Lun; Sheng, Yu-Jane; Lin, Shi-Yow; Tsao, Heng-Kwong

    2004-03-01

    Addition of solute into solvent may lead to an increase in surface tension, such as salt in water and water in alcohol, due to solute depletion at the interface. The repulsion of the solute from the interface may originate from electrostatic forces or solute-solvent attraction. On the basis of the square-well model for the interface-solute interaction, we derive the surface tension increment Δγ by both canonical and grand-canonical routes (Gibbs adsorption isotherm) for a spherical droplet. The surface tension increases linearly with the bulk concentration of the solute c_b and the interaction range λ. The theoretical results are consistent with those obtained by experiments and Monte Carlo simulations up to concentrations of a few molar. For weak repulsion, the increment is internal energy driven. When the repulsion is large enough, the surface tension increment is entropy driven and approaches the asymptotic limit, Δγ ≃ c_b k_B T λ, due to the nearly complete depletion of the solute at the interface. Our result may shed some light on the surface tension increment for electrolyte solutions with concentration above 0.2 M.

  6. Evidence Report: Risk of Performance Errors Due to Training Deficiencies

    NASA Technical Reports Server (NTRS)

    Barshi, Immanuel

    2012-01-01

    The Risk of Performance Errors Due to Training Deficiencies is identified by the National Aeronautics and Space Administration (NASA) Human Research Program (HRP) as a recognized risk to human health and performance in space. The HRP Program Requirements Document (PRD) defines these risks. This Evidence Report provides a summary of the evidence that has been used to identify and characterize this risk. Given that training content, timing, intervals, and delivery methods must support crew task performance, and given that training paradigms will be different for long-duration missions with increased crew autonomy, there is a risk that operators will lack the skills or knowledge necessary to complete critical tasks, resulting in flight and ground crew errors and inefficiencies, failed mission and program objectives, and an increase in crew injuries.

  7. Geodetic secular velocity errors due to interannual surface loading deformation

    NASA Astrophysics Data System (ADS)

    Santamaría-Gómez, Alvaro; Mémin, Anthony

    2015-08-01

    Geodetic vertical velocities derived from data as short as 3 yr are often assumed to be representative of linear deformation over past decades to millennia. We use two decades of surface loading deformation predictions due to variations of atmospheric, oceanic and continental water mass to assess the effect on secular velocities estimated from short time-series. The interannual deformation is time-correlated at most locations over the globe, with the level of correlation depending mostly on the chosen continental water model. Using the most conservative loading model and 5-yr-long time-series, we found median vertical velocity errors of 0.5 mm/yr over the continents (0.3 mm/yr globally), exceeding 1 mm/yr in regions around the southern Tropic. Horizontal velocity errors were seven times smaller. Unless an accurate loading model is available, a decade of continuous data is required in these regions to mitigate the impact of the interannual loading deformation on secular velocities.
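
    The mechanism can be illustrated with a toy simulation: a time-correlated interannual loading signal, fitted with a straight line over a short window, produces a spurious velocity. The AR(1) parameters and the 3 mm signal amplitude below are invented for illustration and are not values derived from the loading models used in the study.

    ```python
    # Illustrative toy model: an AR(1) "interannual loading" signal fitted with a
    # straight line over 5-yr windows gives spurious velocities; parameters invented.
    import numpy as np

    rng = np.random.default_rng(2)
    dt = 1.0 / 52.0                    # weekly position solutions, in years
    n_weeks = 20 * 52                  # two decades of loading predictions
    phi = 0.98                         # week-to-week correlation (long interannual memory)
    sigma = 3.0                        # mm, stationary std of the vertical loading signal

    def ar1_series(n):
        x = np.zeros(n)
        innov = rng.normal(0.0, sigma * np.sqrt(1.0 - phi**2), size=n)
        for k in range(1, n):
            x[k] = phi * x[k - 1] + innov[k]
        return x

    window = int(5 / dt)               # 5-yr windows
    t = np.arange(window) * dt
    velocity_errors = []
    for _ in range(500):
        y = ar1_series(n_weeks)
        start = rng.integers(0, n_weeks - window)
        velocity_errors.append(np.polyfit(t, y[start:start + window], 1)[0])  # mm/yr

    print(f"median |velocity error| from 5-yr series: "
          f"{np.median(np.abs(velocity_errors)):.2f} mm/yr")
    ```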

  8. Correlation Attenuation Due to Measurement Error: A New Approach Using the Bootstrap Procedure

    ERIC Educational Resources Information Center

    Padilla, Miguel A.; Veprinsky, Anna

    2012-01-01

    Issues with correlation attenuation due to measurement error are well documented. More than a century ago, Spearman proposed a correction for attenuation. However, this correction has seen very little use since it can potentially inflate the true correlation beyond one. In addition, very little confidence interval (CI) research has been done for…
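
    For context, Spearman's correction divides the observed correlation by the square root of the product of the two reliabilities, and a case-resampling bootstrap gives one possible confidence interval for the corrected value. This is only a generic sketch; the reliabilities, sample data and resampling scheme below are assumptions and do not reproduce the article's specific procedure.

    ```python
    # Generic sketch of Spearman's disattenuation and a case-resampling bootstrap CI.
    import numpy as np

    def disattenuated_r(x, y, rel_x, rel_y):
        """Spearman's correction: r_true ~ r_xy / sqrt(rel_x * rel_y)."""
        r_xy = np.corrcoef(x, y)[0, 1]
        return r_xy / np.sqrt(rel_x * rel_y)

    def bootstrap_ci(x, y, rel_x, rel_y, n_boot=2000, alpha=0.05, seed=0):
        rng = np.random.default_rng(seed)
        n = len(x)
        stats = []
        for _ in range(n_boot):
            idx = rng.integers(0, n, n)                  # resample cases with replacement
            stats.append(disattenuated_r(x[idx], y[idx], rel_x, rel_y))
        return np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])

    # Toy data: latent correlation 0.6, both measures attenuated by error (reliability ~0.67)
    rng = np.random.default_rng(3)
    n = 500
    t1 = rng.normal(size=n)
    t2 = 0.6 * t1 + 0.8 * rng.normal(size=n)
    x = t1 + rng.normal(scale=0.7, size=n)
    y = t2 + rng.normal(scale=0.7, size=n)

    print("observed r:", np.corrcoef(x, y)[0, 1])
    print("corrected r:", disattenuated_r(x, y, 0.67, 0.67))
    print("bootstrap 95% CI:", bootstrap_ci(x, y, 0.67, 0.67))
    ```

    With sampling noise or imperfect reliability estimates the corrected value can exceed 1, which is the inflation problem the abstract refers to.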

  9. Evidence Report: Risk of Performance Errors Due to Training Deficiencies

    NASA Technical Reports Server (NTRS)

    Barshi, Immanuel; Dempsey, Donna L.

    2016-01-01

    Substantial evidence supports the claim that inadequate training leads to performance errors. Barshi and Loukopoulos (2012) demonstrate that even a task as carefully developed and refined over many years as operating an aircraft can be significantly improved by a systematic analysis, followed by improved procedures and improved training (see also Loukopoulos, Dismukes, & Barshi, 2009a). Unfortunately, such a systematic analysis of training needs rarely occurs during the preliminary design phase, when modifications are most feasible. Training is often seen as a way to compensate for deficiencies in task and system design, which in turn increases the training load. As a result, task performance often suffers, and with it, the operators suffer and so does the mission. On the other hand, effective training can indeed compensate for such design deficiencies, and can even go beyond to compensate for failures of our imagination to anticipate all that might be needed when we send our crew members to go where no one else has gone before. Much of the research literature on training is motivated by current training practices aimed at current training needs. Although there is some experience with operations in extreme environments on Earth, there is no experience with long-duration space missions where crews must practice semi-autonomous operations, where ground support must accommodate significant communication delays, and where so little is known about the environment. Thus, we must develop robust methodologies and tools to prepare our crews for the unknown. The research necessary to support such an endeavor does not currently exist, but existing research does reveal general challenges that are relevant to long-duration, high-autonomy missions. The evidence presented here describes issues related to the risk of performance errors due to training deficiencies. Contributing factors regarding training deficiencies may pertain to organizational process and training programs for

  10. Error Due to Wing Bending in Single-Camera Photogrammetric Technique

    NASA Technical Reports Server (NTRS)

    Burner, Alpheus W., Jr.; Barrows, Danny A.

    2005-01-01

    The error due to wing bending introduced into single-camera photogrammetric computations used for the determination of wing twist or control surface angular deformation is described. It is shown that the error due to wing bending when determining main wing element-induced twist is typically less than 0.05 deg at the wing tip and may not warrant additional correction. It is also shown that the angular error in control surface deformation due to bending can be as large as 1 deg or more if the control surface is at a large deflection angle compared to the main wing element. A correction procedure suitable for control surface measurements is presented. Simulations of the error based on typical wind tunnel measurement geometry, and results from a controlled experimental test in the test section of the National Transonic Facility (NTF) are presented to confirm the validity of the method used for correction of control surface photogrammetric deformation data. An example of a leading edge (LE) slat measurement is presented to illustrate the error due to wing bending and its correction.

  11. Treatable newborn and infant seizures due to inborn errors of metabolism.

    PubMed

    Campistol, Jaume; Plecko, Barbara

    2015-09-01

    About 25% of seizures in the neonatal period have causes other than asphyxia, ischaemia or intracranial bleeding. Among these are primary genetic epileptic encephalopathies with sometimes poor prognosis and high mortality. In addition, some forms of neonatal infant seizures are due to inborn errors of metabolism that do not respond to common AEDs, but are amenable to specific treatment. In this situation, early recognition can allow seizure control and will prevent neurological deterioration and long-term sequelae. We review the group of inborn errors of metabolism that lead to newborn/infant seizures and epilepsy, for which treatment with cofactors differs greatly from typical epilepsy management.

  12. Reflector sidelobe degradation due to random surface errors

    NASA Technical Reports Server (NTRS)

    Ling, H.; Lo, Y. T.; Rahmat-Samii, Y.

    1986-01-01

    It is well known that the sidelobe structure of a reflector antenna is highly susceptible to random surface errors, and that in most applications it is not adequate to investigate only the average behavior of the antenna. In this study, an attempt is made to determine the probability distribution of the sidelobe level of a reflector antenna subject to some random surface errors. Specifically, the random pattern function is considered and its sidelobe level studied using the level-upcrossing theory. Both the degradation of the maximum sidelobe and the degradation of the sidelobe region with respect to an International Radio Consultative Committee (CCIR) sidelobe envelope are obtained. The theoretical results are found in excellent agreement with those obtained by Monte Carlo simulations. Finally, some useful tolerance charts are presented.
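
    The flavour of the problem can be reproduced with a crude Monte Carlo sketch in place of the paper's level-upcrossing analysis: a uniformly illuminated aperture is perturbed by random phase errors and the peak sidelobe level is collected over many realizations. The one-dimensional aperture, the error statistics and the conversion from surface error to phase are simplifying assumptions for illustration only.

    ```python
    # Crude Monte Carlo sketch (not the level-upcrossing analysis): peak sidelobe
    # level of a uniformly illuminated 1-D aperture with random phase errors.
    import numpy as np

    rng = np.random.default_rng(4)
    N = 200                              # aperture samples
    d = 0.5                              # sample spacing in wavelengths
    u = np.linspace(-1, 1, 2001)         # sin(theta)
    steering = np.exp(2j * np.pi * d * np.outer(np.arange(N), u))

    def peak_sidelobe_db(phase_rms_rad):
        phase = rng.normal(0.0, phase_rms_rad, N)
        pattern = np.abs(np.exp(1j * phase) @ steering) ** 2
        pattern /= pattern.max()
        mainlobe = np.abs(u) < 2.0 / (N * d)             # crude main-lobe exclusion
        return 10 * np.log10(pattern[~mainlobe].max())

    # A reflector surface error eps gives roughly a 4*pi*eps/lambda phase error
    for eps_rms in [0.0, 0.01, 0.03]:                    # rms surface error in wavelengths
        sll = [peak_sidelobe_db(4 * np.pi * eps_rms) for _ in range(200)]
        print(f"eps_rms = {eps_rms:.2f} lambda: mean peak SLL {np.mean(sll):6.1f} dB, "
              f"95th percentile {np.percentile(sll, 95):6.1f} dB")
    ```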

  13. Dimensional errors in LIGA-produced metal structures due to thermal expansion and swelling of PMMA.

    SciTech Connect

    Kistler, Bruce L.; Dryden, Andrew S.; Crowell, Jeffrey A.W.; Griffiths, Stewart K.

    2004-04-01

    Numerical methods are used to examine dimensional errors in metal structures microfabricated by the LIGA process. These errors result from elastic displacements of the PMMA mold during electrodeposition and arise from thermal expansion of the PMMA when electroforming is performed at elevated temperatures and from PMMA swelling due to absorption of water from aqueous electrolytes. Both numerical solutions and simple analytical approximations describing PMMA displacements for idealized linear and axisymmetric geometries are presented and discussed. We find that such displacements result in tapered metal structures having sidewall slopes up to 14 µm per millimeter of height for linear structures bounded by large areas of PMMA. Tapers for curved structures are of similar magnitude, but these structures are additionally skewed from the vertical. Potential remedies for reducing dimensional errors are also discussed. Here we find that auxiliary moat-like features patterned into the PMMA surrounding mold cavities can reduce taper by an order of magnitude or more. Such moats dramatically reduce tapers for all structures, but increase skew for curved structures when the radius of curvature is comparable to the structure height.

  14. Compensation of modeling errors due to unknown domain boundary in diffuse optical tomography.

    PubMed

    Mozumder, Meghdoot; Tarvainen, Tanja; Kaipio, Jari P; Arridge, Simon R; Kolehmainen, Ville

    2014-08-01

    Diffuse optical tomography is a highly unstable problem with respect to modeling and measurement errors. During clinical measurements, the body shape is not always known, and an approximate model domain has to be employed. The use of an incorrect model domain can, however, lead to significant artifacts in the reconstructed images. Recently, the Bayesian approximation error theory has been proposed to handle model-based errors. In this work, the feasibility of the Bayesian approximation error approach to compensate for modeling errors due to unknown body shape is investigated. The approach is tested with simulations. The results show that the Bayesian approximation error method can be used to reduce artifacts in reconstructed images due to unknown domain shape.
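
    The mechanics of the Bayesian approximation error approach can be sketched on a linear Gaussian toy problem, even though diffuse optical tomography itself is nonlinear: the discrepancy between an "accurate" and an "approximate" forward model is sampled over the prior, and its mean and covariance are folded into the noise model. The random matrices and noise levels below are purely illustrative.

    ```python
    # Hedged toy illustration of the Bayesian approximation error idea on a linear
    # Gaussian problem (DOT itself is nonlinear; this only sketches the mechanics).
    import numpy as np

    rng = np.random.default_rng(5)
    n, m = 40, 30
    A_acc = rng.normal(size=(m, n))                 # "accurate" forward model (true domain)
    A_app = A_acc + 0.15 * rng.normal(size=(m, n))  # "approximate" model (wrong domain shape)

    prior_cov = np.eye(n)                           # prior over the unknown x
    noise_std = 0.01                                # measurement noise level

    # 1) Sample the approximation error eps = A_acc x - A_app x over the prior
    samples = rng.multivariate_normal(np.zeros(n), prior_cov, size=2000)
    eps = samples @ (A_acc - A_app).T
    eps_mean = eps.mean(axis=0)
    eps_cov = np.cov(eps, rowvar=False)

    # 2) Simulate data with the accurate model, reconstruct with the approximate one
    x_true = rng.multivariate_normal(np.zeros(n), prior_cov)
    y = A_acc @ x_true + noise_std * rng.normal(size=m)

    def map_estimate(noise_cov, data_offset=0.0):
        # Gaussian MAP: x = (A' C^-1 A + P^-1)^-1 A' C^-1 (y - offset)
        Ci = np.linalg.inv(noise_cov)
        H = A_app.T @ Ci @ A_app + np.linalg.inv(prior_cov)
        return np.linalg.solve(H, A_app.T @ Ci @ (y - data_offset))

    x_naive = map_estimate(noise_std**2 * np.eye(m))
    x_aec = map_estimate(noise_std**2 * np.eye(m) + eps_cov, data_offset=eps_mean)

    print("error, conventional noise model:  ", np.linalg.norm(x_naive - x_true))
    print("error, approximation error model: ", np.linalg.norm(x_aec - x_true))
    ```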

  15. Eddy-covariance flux errors due to biases in gas concentration measurements: origins, quantification and correction

    NASA Astrophysics Data System (ADS)

    Fratini, G.; McDermitt, D. K.; Papale, D.

    2013-08-01

    Errors in gas concentration measurements by infrared gas analysers can occur during eddy-covariance campaigns, associated with actual or apparent instrumental drifts or with biases due to thermal expansion, dirt contamination, aging of components or errors in field operations. If occurring on long time scales (hours to days), these errors are normally ignored during flux computation, under the assumption that errors in mean gas concentrations do not affect the estimation of turbulent fluctuations and, hence, of covariances. By analysing instrument theory of operation, and using numerical simulations and field data, we show that this is not the case for instruments with curvilinear calibrations; we further show that, if not appropriately accounted for, concentration biases can lead to roughly proportional systematic flux errors, where the fractional errors in fluxes are about 30-40% of the fractional errors in concentrations. We quantify these errors and characterize their dependency on the main determinants. We then propose a correction procedure that largely - potentially completely - eliminates these errors. The correction, to be applied during flux computation, is based on knowledge of instrument calibration curves and on field or laboratory calibration data. Finally, we demonstrate the occurrence of such errors and validate the correction procedure by means of a field experiment, and accordingly provide recommendations for in situ operations. The correction described in this paper will soon be available in the EddyPro software (www.licor.com/eddypro).

  16. Estimation of radiation risk in presence of classical additive and Berkson multiplicative errors in exposure doses.

    PubMed

    Masiuk, S V; Shklyar, S V; Kukush, A G; Carroll, R J; Kovgan, L N; Likhtarov, I A

    2016-07-01

    In this paper, the influence of measurement errors in exposure doses in a regression model with binary response is studied. Recently, it has been recognized that uncertainty in exposure dose is characterized by errors of two types: classical additive errors and Berkson multiplicative errors. The combination of classical additive and Berkson multiplicative errors has not been considered in the literature previously. In a simulation study based on data from radio-epidemiological research of thyroid cancer in Ukraine caused by the Chornobyl accident, it is shown that ignoring measurement errors in doses leads to overestimation of background prevalence and underestimation of excess relative risk. In this work, several methods to reduce these biases are proposed: a new regression calibration, an additive version of efficient SIMEX, and novel corrected score methods.
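
    The direction of the biases described above can be reproduced with a simple simulation in which an assigned dose carries a classical additive error in what is recorded and a Berkson multiplicative error in what was truly received. All parameter values are invented, and the naive fit below is only the uncorrected analysis; the paper's regression calibration, SIMEX and corrected-score estimators are not implemented here.

    ```python
    # Hedged simulation sketch (parameter values invented, not the Ukrainian data):
    # ignoring both error types attenuates the dose effect in a binary-response fit
    # and inflates the background term.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(6)
    n = 50000
    Q = rng.gamma(2.0, 0.5, n)                  # assigned (group-level) dose, Gy
    D_true = Q * rng.lognormal(-0.125, 0.5, n)  # Berkson multiplicative: true dose scatters around Q
    X_obs = Q + rng.normal(0.0, 0.3, n)         # classical additive error in the recorded dose

    beta0, beta1 = -3.0, 1.2                    # "background" and dose effect on the logit scale
    p = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * D_true)))
    y = rng.binomial(1, p)

    naive = sm.Logit(y, sm.add_constant(X_obs)).fit(disp=0)
    oracle = sm.Logit(y, sm.add_constant(D_true)).fit(disp=0)
    print("naive (ignores errors):", naive.params)   # intercept too high, slope too low
    print("oracle (true doses):   ", oracle.params)
    ```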

  17. Model of Head-Positioning Error Due to Rotational Vibration of Hard Disk Drives

    NASA Astrophysics Data System (ADS)

    Matsuda, Yasuhiro; Yamaguchi, Takashi; Saegusa, Shozo; Shimizu, Toshihiko; Hamaguchi, Tetsuya

    An analytical model of head-positioning error due to rotational vibration of a hard disk drive is proposed. The model takes into account the rotational vibration of the base plate caused by the reaction force of the head-positioning actuator, the relationship between the rotational vibration and head-track offset, and the sensitivity function of track-following feedback control. Error calculated by the model agrees well with measured error. It is thus concluded that this model can predict the data transfer performance of a disk drive in read mode.

  18. Dynamic modelling and estimation of the error due to asynchronism in a redundant asynchronous multiprocessor system

    NASA Technical Reports Server (NTRS)

    Huynh, Loc C.; Duval, R. W.

    1986-01-01

    The use of redundant asynchronous multiprocessor systems to achieve ultrareliable fault-tolerant control systems shows great promise. Development has been hampered by the inability to determine whether differences in the outputs of redundant CPUs are due to failures or to accrued error built up by slight differences in CPU clock intervals. This study derives an analytical dynamic model of the difference between redundant CPUs due to differences in their clock intervals and uses this model with on-line parameter identification to identify those clock-interval differences. Because this methodology accurately tracks errors due to asynchronism, it can generate an error signal with the effect of asynchronism removed, and this signal may be used to detect and isolate actual system failures.

  19. The Additional Error of Inertial Sensors Induced by Hypersonic Flight Conditions.

    PubMed

    Karachun, Volodimir; Mel'nick, Viktorij; Korobiichuk, Igor; Nowicki, Michał; Szewczyk, Roman; Kobzar, Svitlana

    2016-01-01

    The emergence of hypersonic technology poses a new challenge for inertial navigation sensors, which are widely used in the aerospace industry. The main problems are extremely high temperatures, vibration of the fuselage, penetrating acoustic radiation and shock N-waves. The nature of the additional errors of the gyroscopic inertial sensor with hydrostatic suspension components under operating conditions generated by forced precession of the movable part of the suspension due to diffraction phenomena in acoustic fields is explained. The cause of the disturbing moments in the form of the Coriolis inertia forces during the transition of the suspension surface into the category of impedance is revealed. The boundaries of occurrence of the features on the resonance wave match are described. The values of the "false" angular velocity as a result of the elastic-stress state of the suspension in the acoustic fields are determined. PMID:26927122

  2. Scene identification probabilities for evaluating radiation flux errors due to scene misidentification

    NASA Technical Reports Server (NTRS)

    Manalo, Natividad D.; Smith, G. L.

    1991-01-01

    The scene identification probabilities (Pij) are fundamentally important in evaluations of the top-of-the-atmosphere (TOA) radiation-flux errors due to the scene misidentification. In this paper, the scene identification error probabilities were empirically derived from data collected in 1985 by the Earth Radiation Budget Experiment (ERBE) scanning radiometer when the ERBE satellite and the NOAA-9 spacecraft were rotated so as to scan alongside during brief periods in January and August 1985. Radiation-flux error computations utilizing these probabilities were performed, using orbit specifications for the ERBE, the Cloud and Earth's Radiant Energy System (CERES), and the SCARAB missions for a scene that was identified as partly cloudy over ocean. Typical values of the standard deviation of the random shortwave error were in the order of 1.5-5 W/sq m, but could reach values as high as 18.0 W/sq m as computed from NOAA-9.

  3. Optimized Pulse Parameters for Reducing Quantitation Errors Due to Saturation Factor Changes in Magnetic Resonance Spectroscopy

    NASA Astrophysics Data System (ADS)

    Galbán, Craig J.; Spencer, Richard G. S.

    2002-06-01

    We present an analysis of the effects of chemical exchange and changes in T1 on metabolite quantitation for heart, skeletal muscle, and brain using the one-pulse experiment for a sample which is subject to temporal variation. We use an optimization algorithm to calculate interpulse delay times, TRs, and flip angles, θ, resulting in maximal root-mean-squared signal-to-noise per unit time (S/N) for all exchanging species under 5 and 10% constraints on quantitation errors. The optimization yields TR and θ pairs giving signal-to-noise per unit time close or superior to typical literature values. Additional simulations were performed to demonstrate explicitly the dependence of the quantitation errors on pulse parameters and variations in the properties of the sample, such as may occur after an intervention. We find that (i) correction for partial saturation in accordance with the usual analysis neglecting variations in metabolite concentrations and rate constants may readily result in quantitation errors of 15% or more; the exact degree of error depends upon the details of the system under consideration; (ii) if T1's vary as well, significantly larger quantitation errors may occur; and (iii) optimal values of pulse parameters may minimize errors in quantitation with minimal S/N loss.

  4. Efficiency degradation due to tracking errors for point focusing solar collectors

    NASA Technical Reports Server (NTRS)

    Hughes, R. O.

    1978-01-01

    An important parameter in the design of point focusing solar collectors is the intercept factor, which is a measure of efficiency and of the energy available for use in the receiver. Using statistical methods, an expression for the expected value of the intercept factor is derived for various configurations and control law implementations. The analysis assumes that a radially symmetric flux distribution (not necessarily Gaussian) is generated at the focal plane due to the sun's finite image and various reflector errors. The time-varying tracking errors are assumed to be uniformly distributed within the threshold limits, which allows the expected value to be calculated.
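
    A Monte Carlo version of the expected intercept factor is easy to sketch: a Gaussian focal-plane flux spot (one admissible case of the radially symmetric distribution assumed in the paper), a circular receiver, and tracking errors drawn uniformly within the threshold limits. The spot size, receiver radius and threshold below are illustrative numbers only.

    ```python
    # Illustrative sketch: expected intercept factor with a Gaussian flux spot and
    # tracking errors uniform within a dead band (all values are made up).
    import numpy as np

    def intercept_factor(offset_r, spot_sigma, receiver_radius, n=200000, seed=0):
        """Fraction of a Gaussian flux spot, displaced by offset_r, falling inside the receiver."""
        rng = np.random.default_rng(seed)
        pts = rng.normal(0.0, spot_sigma, size=(n, 2))     # sample the flux distribution
        pts[:, 0] += offset_r                              # displace the spot by the tracking error
        return np.mean(np.hypot(pts[:, 0], pts[:, 1]) <= receiver_radius)

    sigma = 0.02      # flux spot std dev (m), set by the sun's finite image and reflector errors
    R = 0.05          # receiver radius (m)
    threshold = 0.03  # tracking dead band: offsets uniform in [-threshold, threshold] (m)

    offsets = np.random.default_rng(7).uniform(-threshold, threshold, 200)
    expected_phi = np.mean([intercept_factor(o, sigma, R) for o in offsets])
    print(f"perfect tracking: {intercept_factor(0.0, sigma, R):.3f}, "
          f"expected with tracking error: {expected_phi:.3f}")
    ```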

  5. Systematic errors in conductimetric instrumentation due to bubble adhesions on the electrodes: An experimental assessment

    NASA Astrophysics Data System (ADS)

    Neelakantaswamy, P. S.; Rajaratnam, A.; Kisdnasamy, S.; Das, N. P.

    1985-02-01

    Systematic errors in conductimetric measurements are often encountered due to partial screening of interelectrode current paths resulting from adhesion of bubbles on the electrode surfaces of the cell. A method of assessing this error quantitatively by a simulated electrolytic tank technique is proposed here. The experimental setup simulates the bubble-curtain effect in the electrolytic tank by means of a pair of electrodes partially covered by a monolayer of small polystyrene-foam spheres representing the bubble adhesions. By varying the number of spheres stuck on the electrode surface, the fractional area covered by the bubbles is controlled; and by measuring the interelectrode impedance, the systematic error is determined as a function of the fractional area covered by the simulated bubbles. A theoretical model which depicts the interelectrode resistance and, hence, the systematic error caused by bubble adhesions is calculated by considering the random dispersal of bubbles on the electrodes. Relevant computed results are compared with the measured impedance data obtained from the electrolytic tank experiment. Results due to other models are also presented and discussed. A time-domain measurement on the simulated cell to study the capacitive effects of the bubble curtain is also explained.

  6. Reduction of undulator radiation and FEL small gain due to wiggler errors

    SciTech Connect

    Friedman, A.

    1991-01-01

    A deterministic approach is taken to study the effect of errors in the wiggler magnet field on the spontaneous emission and the gain of Free Electron Lasers. A 3D formulation is used to derive the reduction in spontaneous emission due to changes in the time of flight of the electrons. A generalization of Madey's theorem to 3D is then used to calculate the reduction in the FEL small gain. 6 refs.

  7. Dose calculation errors due to inaccurate representation of heterogeneity correction obtained from computerized tomography.

    PubMed

    Williams, Greg; Tobler, Matthew; Gaffney, David; Moeller, John; Leavitt, Dennis D

    2002-01-01

    Computerized tomography (CT) is used routinely in evaluating radiation therapy isodose plans. With the introduction of 3D algorithms such as the voxel raytrace, which determines inhomogeneity corrections from actual CT Hounsfield numbers, caution must be used when evaluating isodose calculations. Artifacts from contrast media and dental work, radiopaque markers placed by the treatment planner, and changing bowel and rectal air patterns all have the potential to introduce error into the calculation due to inaccurate assessment of high or low density. Radiopaque markers such as x-spot BB's or solder wire are placed externally on the patient. Barium contrast media introduced at the time of simulation may be necessary to visualize specific anatomical structures on the CT images. While these localization and visualization tools may be necessary, it is important to understand the effects they may introduce in the planning process. Other problems encountered are patient specific and out of the control of the treatment planner. These include high- and low-density streaking caused by dental work, which produces computational errors due to overestimation, and small bowel and rectal air, the patterns of which change on a daily basis and may result in underestimation of structure density. It is important for each treatment planner to have an understanding of how this potentially tainted CT information may be applied in dose calculations and the possible effects it may have. At our institution, the voxel raytrace calculation is automatically forced any time couch angle is introduced. Errors in the calculation from the above mentioned situations may be introduced if a heterogeneity correction is applied. Examples of potential calculation errors and the magnitude of each will be discussed. The methods used to minimize these errors and the possible solutions will also be evaluated.

  8. Clark's nutcracker spatial memory: many errors might not be due to forgetting

    PubMed

    Bednekoff; Balda

    1997-09-01

    Clark's nutcrackers, Nucifraga columbiana, rely upon cached seeds for both winter survival and breeding. Laboratory studies have confirmed that nutcrackers use spatial memory to recover their caches. In the laboratory, however, nutcrackers seem to perform less accurately than they do in nature. Two lines of evidence indicate that nutcrackers make 'errors' in the laboratory that are not due to failures of memory. First, when digging in sand-filled cups, nutcrackers were 89% accurate when they plunged their bills directly into the middle of cups but only 21% accurate when they swept their bills across the cups. Second, nutcrackers were more accurate when the cost of probing was increased by covering sand-filled cups with either petri dishes or heavy glass bowls. Birds recovered caches in order of increasing costs. As costs increased, nutcrackers made somewhat fewer errors nearer to cache sites before recovering the caches and dramatically fewer errors further away from cache sites or near cache sites after recovering the caches. Some errors may be a form of environmental sampling. We conclude that the impressive achievements documented by previous studies are underestimates of the spatial memory abilities of Clark's nutcrackers. Copyright 1997 The Association for the Study of Animal Behaviour.

  9. Influence of Additive and Multiplicative Structure and Direction of Comparison on the Reversal Error

    ERIC Educational Resources Information Center

    González-Calero, José Antonio; Arnau, David; Laserna-Belenguer, Belén

    2015-01-01

    An empirical study has been carried out to evaluate the potential of word order matching and static comparison as explanatory models of reversal error. Data was collected from 214 undergraduate students who translated a set of additive and multiplicative comparisons expressed in Spanish into algebraic language. In these multiplicative comparisons…

  10. A study of GPS measurement errors due to noise and multipath interference for CGADS

    NASA Technical Reports Server (NTRS)

    Axelrad, Penina; MacDoran, Peter F.; Comp, Christopher J.

    1996-01-01

    This report describes a study performed by the Colorado Center for Astrodynamics Research (CCAR) on GPS measurement errors in the Codeless GPS Attitude Determination System (CGADS) due to noise and multipath interference. Preliminary simulation models of the CGADS receiver and orbital multipath are described. The standard FFT algorithm for processing the codeless data is described, and two alternative algorithms - an auto-regressive/least squares (AR-LS) method and a combined adaptive notch filter/least squares (ANF-ALS) method - are also presented. Effects of system noise, quantization, baseband frequency selection, and Doppler rates on the accuracy of phase estimates with each of the processing methods are shown. Typical electrical phase errors for the AR-LS method are 0.2 degrees, compared to 0.3 and 0.5 degrees for the FFT and ANF-ALS algorithms, respectively. Doppler rate was found to have the largest effect on the performance.

  11. Extending the impulse response in order to reduce errors due to impulse noise and signal fading

    NASA Technical Reports Server (NTRS)

    Webb, Joseph A.; Rolls, Andrew J.; Sirisena, H. R.

    1988-01-01

    A finite impulse response (FIR) digital smearing filter was designed to produce maximum intersymbol interference and maximum extension of the impulse response of the signal in a noiseless binary channel. A matched FIR desmearing filter at the receiver then reduced the intersymbol interference to zero. Signal fades were simulated by means of 100 percent signal blockage in the channel. Smearing and desmearing filters of length 256, 512, and 1024 were used for these simulations. Results indicate that impulse response extension by means of bit smearing appears to be a useful technique for correcting errors due to impulse noise or signal fading in a binary channel.

  12. Evaluation and mitigation of potential errors in radiochromic film dosimetry due to film curvature at scanning.

    PubMed

    Palmer, Antony L; Bradley, David A; Nisbet, Andrew

    2015-03-08

    This work considers a previously overlooked uncertainty present in film dosimetry which results from moderate curvature of films during the scanning process. Small film samples are particularly susceptible to film curling which may be undetected or deemed insignificant. In this study, we consider test cases with controlled induced curvature of film and with film raised horizontally above the scanner plate. We also evaluate the difference in scans of a film irradiated with a typical brachytherapy dose distribution with the film naturally curved and with the film held flat on the scanner. Typical naturally occurring curvature of film at scanning, giving rise to a maximum height 1 to 2 mm above the scan plane, may introduce dose errors of 1% to 4%, and considerably reduce gamma evaluation passing rates when comparing film-measured doses with treatment planning system-calculated dose distributions, a common application of film dosimetry in radiotherapy. The use of a triple-channel dosimetry algorithm appeared to mitigate the error due to film curvature compared to conventional single-channel film dosimetry. The change in pixel value and calibrated reported dose with film curling or height above the scanner plate may be due to variations in illumination characteristics, optical disturbances, or a Callier-type effect. There is a clear requirement for physically flat films at scanning to avoid the introduction of a substantial error source in film dosimetry. Particularly for small film samples, a compression glass plate above the film is recommended to ensure flat-film scanning. This effect has been overlooked to date in the literature.

  13. Estimating Random Errors Due to Shot Noise in Backscatter Lidar Observations

    NASA Technical Reports Server (NTRS)

    Liu, Zhaoyan; Hunt, William; Vaughan, Mark A.; Hostetler, Chris A.; McGill, Matthew J.; Powell, Kathy; Winker, David M.; Hu, Yongxiang

    2006-01-01

    In this paper, we discuss the estimation of random errors due to shot noise in backscatter lidar observations that use either photomultiplier tube (PMT) or avalanche photodiode (APD) detectors. The statistical characteristics of photodetection are reviewed, and photon count distributions of solar background signals and laser backscatter signals are examined using airborne lidar observations at 532 nm using a photon-counting mode APD. Both distributions appear to be Poisson, indicating that the arrival at the photodetector of photons for these signals is a Poisson stochastic process. For Poisson-distributed signals, a proportional, one-to-one relationship is known to exist between the mean of a distribution and its variance. Although the multiplied photocurrent no longer follows a strict Poisson distribution in analog-mode APD and PMT detectors, the proportionality still exists between the mean and the variance of the multiplied photocurrent. We make use of this relationship by introducing the noise scale factor (NSF), which quantifies the constant of proportionality that exists between the root-mean-square of the random noise in a measurement and the square root of the mean signal. Using the NSF to estimate random errors in lidar measurements due to shot noise provides a significant advantage over the conventional error estimation techniques, in that with the NSF uncertainties can be reliably calculated from/for a single data sample. Methods for evaluating the NSF are presented. Algorithms to compute the NSF are developed for the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) lidar and tested using data from the Lidar In-space Technology Experiment (LITE).
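
    The NSF relationship can be demonstrated with an idealized scaled-Poisson detector: the factor estimated from a background-only region predicts the shot-noise error of single backscatter samples at any signal level. The gain and photon rates below are invented, and a real analog-mode APD or PMT would add excess noise that the NSF simply absorbs as part of the proportionality constant.

    ```python
    # Illustrative sketch of the noise scale factor (NSF): for shot-noise-limited
    # signals, RMS noise = NSF * sqrt(mean signal).
    import numpy as np

    rng = np.random.default_rng(8)
    gain = 40.0                                   # detector gain (arbitrary units per photon)
    bg_photons = 100.0                            # mean solar background photons per bin
    sig_photons = np.linspace(10, 1000, 50)       # mean laser backscatter photons per bin

    # Estimate the NSF from a background-only region (many samples of the same mean)
    background = gain * rng.poisson(bg_photons, 20000)
    nsf = background.std() / np.sqrt(background.mean())   # ~ sqrt(gain) for an ideal detector

    # Predict the shot-noise error of single backscatter measurements and check it
    predicted = nsf * np.sqrt(gain * (sig_photons + bg_photons))
    observed = np.array([(gain * rng.poisson(s + bg_photons, 5000)).std()
                         for s in sig_photons])
    print("max relative difference between predicted and observed noise:",
          np.max(np.abs(predicted - observed) / observed))
    ```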

  14. Estimating random errors due to shot noise in backscatter lidar observations.

    PubMed

    Liu, Zhaoyan; Hunt, William; Vaughan, Mark; Hostetler, Chris; McGill, Matthew; Powell, Kathleen; Winker, David; Hu, Yongxiang

    2006-06-20

    We discuss the estimation of random errors due to shot noise in backscatter lidar observations that use either photomultiplier tube (PMT) or avalanche photodiode (APD) detectors. The statistical characteristics of photodetection are reviewed, and photon count distributions of solar background signals and laser backscatter signals are examined using airborne lidar observations at 532 nm using a photon-counting mode APD. Both distributions appear to be Poisson, indicating that the arrival at the photodetector of photons for these signals is a Poisson stochastic process. For Poisson- distributed signals, a proportional, one-to-one relationship is known to exist between the mean of a distribution and its variance. Although the multiplied photocurrent no longer follows a strict Poisson distribution in analog-mode APD and PMT detectors, the proportionality still exists between the mean and the variance of the multiplied photocurrent. We make use of this relationship by introducing the noise scale factor (NSF), which quantifies the constant of proportionality that exists between the root mean square of the random noise in a measurement and the square root of the mean signal. Using the NSF to estimate random errors in lidar measurements due to shot noise provides a significant advantage over the conventional error estimation techniques, in that with the NSF, uncertainties can be reliably calculated from or for a single data sample. Methods for evaluating the NSF are presented. Algorithms to compute the NSF are developed for the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations lidar and tested using data from the Lidar In-space Technology Experiment.

  15. 38 CFR 3.361 - Benefits under 38 U.S.C. 1151(a) for additional disability or death due to hospital care, medical...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ..., error in judgment, or similar instance of fault on VA's part in furnishing hospital care, medical or.... 1151(a) for additional disability or death due to hospital care, medical or surgical treatment.... 1151(a) for additional disability or death due to hospital care, medical or surgical...

  16. 38 CFR 3.361 - Benefits under 38 U.S.C. 1151(a) for additional disability or death due to hospital care, medical...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ..., error in judgment, or similar instance of fault on VA's part in furnishing hospital care, medical or.... 1151(a) for additional disability or death due to hospital care, medical or surgical treatment.... 1151(a) for additional disability or death due to hospital care, medical or surgical...

  17. Computational methods to compute wavefront error due to aero-optic effects

    NASA Astrophysics Data System (ADS)

    Genberg, Victor; Michels, Gregory; Doyle, Keith; Bury, Mark; Sebastian, Thomas

    2013-09-01

    Aero-optic effects can have deleterious effects on high performance airborne optical sensors that must view through turbulent flow fields created by the aerodynamic effects of windows and domes. Evaluating aero-optic effects early in the program during the design stages allows mitigation strategies and optical system design trades to be performed to optimize system performance. This necessitates a computationally efficient means to evaluate the impact of aero-optic effects such that the resulting dynamic pointing errors and wavefront distortions due to the spatially and temporally varying flow field can be minimized or corrected. To this end, an aero-optic analysis capability was developed within the commercial software SigFit that couples CFD results with optical design tools. SigFit reads the CFD generated density profile using the CGNS file format. OPD maps are then created by converting the three-dimensional density field into an index of refraction field and then integrating along specified paths to compute OPD errors across the optical field. The OPD maps may be evaluated directly against system requirements or imported into commercial optical design software including Zemax® and Code V® for a more detailed assessment of the impact on optical performance from which design trades may be performed.
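
    The OPD computation described above can be sketched directly from its definition: convert density to refractive index with the Gladstone-Dale relation and integrate the index perturbation along the line of sight for each aperture point. The synthetic density field, grid dimensions and Gladstone-Dale constant below are illustrative assumptions, not SigFit's implementation or a CGNS reader.

    ```python
    # Illustrative sketch (not SigFit): Gladstone-Dale conversion of a synthetic CFD
    # density field and integration along the line of sight to form an OPD map.
    import numpy as np

    K_GD = 2.27e-4        # Gladstone-Dale constant for air, m^3/kg (approximate, visible band)
    rho_ref = 1.0         # freestream reference density, kg/m^3

    # Hypothetical density field rho[x, y, z] on a regular grid over the aperture,
    # with z along the optical line of sight (physical sample spacing dz)
    nx, ny, nz, dz = 64, 64, 200, 1e-3
    x = np.linspace(-1.0, 1.0, nx)[:, None, None]
    y = np.linspace(-1.0, 1.0, ny)[None, :, None]
    z = np.linspace(0.0, 1.0, nz)[None, None, :]
    rho = rho_ref + 0.05 * np.exp(-((x - 0.3) ** 2 + y ** 2) / 0.1) * np.exp(-3.0 * z)

    n_field = 1.0 + K_GD * rho                                        # index of refraction
    opd = np.trapz(n_field - (1.0 + K_GD * rho_ref), dx=dz, axis=2)   # OPD map, metres

    print(f"peak-to-valley OPD: {(opd.max() - opd.min()) * 1e9:.1f} nm, "
          f"rms OPD: {opd.std() * 1e9:.1f} nm")
    ```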

  18. Generalized additive models and Lucilia sericata growth: assessing confidence intervals and error rates in forensic entomology.

    PubMed

    Tarone, Aaron M; Foran, David R

    2008-07-01

    Forensic entomologists use blow fly development to estimate a postmortem interval. Although accurate, fly age estimates can be imprecise for older developmental stages and no standard means of assigning confidence intervals exists. Presented here is a method for modeling growth of the forensically important blow fly Lucilia sericata, using generalized additive models (GAMs). Eighteen GAMs were created to predict the extent of juvenile fly development, encompassing developmental stage, length, weight, strain, and temperature data, collected from 2559 individuals. All measures were informative, explaining up to 92.6% of the deviance in the data, though strain and temperature exerted negligible influences. Predictions made with an independent data set allowed for a subsequent examination of error. Estimates using length and developmental stage were within 5% of true development percent during the feeding portion of the larval life cycle, while predictions for postfeeding third instars were less precise, but within expected error.
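
    A minimal GAM of this kind can be sketched with the pygam package (one possible tool; the authors' eighteen models are not reproduced here): smooth terms for the continuous predictors and a factor term for developmental stage. The toy data, column layout and coefficients below are invented for illustration.

    ```python
    # Illustrative sketch with pygam: smooths for length and weight, a factor term
    # for developmental stage; data and column layout are invented.
    import numpy as np
    from pygam import LinearGAM, s, f

    rng = np.random.default_rng(9)
    n = 500
    length = rng.uniform(2.0, 17.0, n)             # larval length, mm
    weight = rng.uniform(1.0, 80.0, n)             # larval weight, mg
    stage = rng.integers(0, 5, n)                  # coded developmental stage
    dev_pct = 5.0 * stage + 1.5 * length + 0.1 * weight + rng.normal(0.0, 3.0, n)

    X = np.column_stack([length, weight, stage])
    gam = LinearGAM(s(0) + s(1) + f(2)).fit(X, dev_pct)

    pred = gam.predict(X[:5])
    intervals = gam.prediction_intervals(X[:5], width=0.95)   # one way to attach intervals
    print(pred)
    print(intervals)
    ```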

  19. Approximation error method can reduce artifacts due to scalp blood flow in optical brain activation imaging

    NASA Astrophysics Data System (ADS)

    Heiskala, Juha; Kolehmainen, Ville; Tarvainen, Tanja; Kaipio, Jari. P.; Arridge, Simon R.

    2012-09-01

    Diffuse optical tomography can image the hemodynamic response to an activation in the human brain by measuring changes in optical absorption of near-infrared light. Since optodes placed on the scalp are used, the measurements are very sensitive to changes in optical attenuation in the scalp, making optical brain activation imaging susceptible to artifacts due to effects of systemic circulation and local circulation of the scalp. We propose to use the Bayesian approximation error approach to reduce these artifacts. The feasibility of the approach is evaluated using simulated brain activations. When a localized cortical activation occurs simultaneously with changes in the scalp blood flow, these changes can mask the cortical activity causing spurious artifacts. We show that the proposed approach is able to recover from these artifacts even when the nominal tissue properties are not well known.

  20. Global Vision Impairment and Blindness Due to Uncorrected Refractive Error, 1990-2010.

    PubMed

    Naidoo, Kovin S; Leasher, Janet; Bourne, Rupert R; Flaxman, Seth R; Jonas, Jost B; Keeffe, Jill; Limburg, Hans; Pesudovs, Konrad; Price, Holly; White, Richard A; Wong, Tien Y; Taylor, Hugh R; Resnikoff, Serge

    2016-03-01

    The purpose of this systematic review was to estimate worldwide the number of people with moderate and severe visual impairment (MSVI; presenting visual acuity <6/18, ≥3/60) or blindness (presenting visual acuity <3/60) due to uncorrected refractive error (URE), to estimate trends in prevalence from 1990 to 2010, and to analyze regional differences. The review focuses on uncorrected refractive error which is now the most common cause of avoidable visual impairment globally. The systematic review of 14,908 relevant manuscripts from 1990 to 2010 using Medline, Embase, and WHOLIS yielded 243 high-quality, population-based cross-sectional studies which informed a meta-analysis of trends by region. The results showed that in 2010, 6.8 million (95% confidence interval [CI]: 4.7-8.8 million) people were blind (7.9% increase from 1990) and 101.2 million (95% CI: 87.88-125.5 million) vision impaired due to URE (15% increase since 1990), while the global population increased by 30% (1990-2010). The all-age age-standardized prevalence of URE blindness decreased 33% from 0.2% (95% CI: 0.1-0.2%) in 1990 to 0.1% (95% CI: 0.1-0.1%) in 2010, whereas the prevalence of URE MSVI decreased 25% from 2.1% (95% CI: 1.6-2.4%) in 1990 to 1.5% (95% CI: 1.3-1.9%) in 2010. In 2010, URE contributed 20.9% (95% CI: 15.2-25.9%) of all blindness and 52.9% (95% CI: 47.2-57.3%) of all MSVI worldwide. The contribution of URE to all MSVI ranged from 44.2 to 48.1% in all regions except in South Asia which was at 65.4% (95% CI: 62-72%). We conclude that in 2010, uncorrected refractive error continues as the leading cause of vision impairment and the second leading cause of blindness worldwide, affecting a total of 108 million people or 1 in 90 persons.

  1. Responsibility for reporting patient death due to hospital error in Japan when an error occurred at a referring institution.

    PubMed

    Maeda, Shoichi; Starkey, Jay; Kamishiraki, Etsuko; Ikeda, Noriaki

    2013-12-01

    In Japan, physicians are required to report unexpected health care-associated patient deaths to the police. Patients needing to be transferred to another institution often have complex medical problems. If a medical error occurs, it may be either at the final or the referring institution. Some fear that liability will fall on the final institution regardless of where the error occurred or that the referring facility may oppose such reporting, leading to a failure to report to police or to recommend an autopsy. Little is known about the actual opinions of physicians and risk managers in this regard. The authors sent standardised, self-administered questionnaires to all hospitals in Japan that participate in the national general residency program. Most physicians and risk managers in Japan indicated that they would report a patient's death to the police where the patient has been transferred. Of those who indicated they would not report to the police, the majority still indicated they would recommend an autopsy. PMID:24597392

  2. A table of integrals of the error function. II - Additions and corrections.

    NASA Technical Reports Server (NTRS)

    Geller, M.; Ng, E. W.

    1971-01-01

    Integrals of products of error functions with other functions are presented, taking into account a combination of the error function with powers, a combination of the error function with exponentials and powers, a combination of the error function with exponentials of more complicated arguments, definite integrals from Laplace transforms, and a combination of the error function with trigonometric functions. Other integrals considered include a combination of the error function with logarithms and powers, a combination of two error functions, and a combination of the error function with other special functions.

  3. Errors in Expected Human Losses Due to Incorrect Seismic Hazard Estimates

    NASA Astrophysics Data System (ADS)

    Wyss, M.; Nekrasova, A.; Kossobokov, V. G.

    2011-12-01

    The probability of strong ground motion is presented in seismic hazard maps, in which peak ground accelerations (PGA) with 10% probability of exceedance in 50 years are shown by color codes. It has become evident that these maps do not correctly give the seismic hazard. On the seismic hazard map of Japan, the epicenters of the recent large earthquakes are located in the regions of relatively low hazard. The errors of the GSHAP maps have been measured by the difference between observed and expected intensities due to large earthquakes. Here, we estimate how the errors in seismic hazard estimates propagate into errors in estimating the potential fatalities and affected population. We calculated the numbers of fatalities that would have to be expected in the regions of the nine earthquakes with more than 1,000 fatalities during the last 10 years with relatively reliable estimates of fatalities, assuming a magnitude which generates as a maximum intensity the one given by the GSHAP maps. This value is the number of fatalities to be exceeded with probability of 10% during 50 years. In most regions of devastating earthquakes, there are no instruments to measure ground accelerations. Therefore, we converted the PGA expected as a likely maximum based on the GSHAP maps to intensity. The magnitude of the earthquake that would cause the intensity expected by GSHAP as a likely maximum was calculated by M(GSHAP) = (I_0 + 1.5)/1.5. The numbers of fatalities expected for earthquakes with M(GSHAP) were calculated using the loss estimating program QLARM. We calibrated this tool for each case by calculating the theoretical damage and numbers of fatalities (F_estim) for the disastrous test earthquakes, generating a match with the observed numbers of fatalities (F_obs = F_estim) by adjusting the attenuation relationship within the bounds of commonly observed laws. Calculating the numbers of fatalities expected for the earthquakes with M(GSHAP) will thus yield results that

  4. Errors of five-day mean surface wind and temperature conditions due to inadequate sampling

    NASA Technical Reports Server (NTRS)

    Legler, David M.

    1991-01-01

    Surface meteorological reports of wind components, wind speed, air temperature, and sea-surface temperature from buoys located in equatorial and midlatitude regions are used in a simulation of random sampling to determine errors of the calculated means due to inadequate sampling. Subsampling the data with several different sample sizes leads to estimates of the accuracy of the subsampled means. The number N of random observations needed to compute mean winds with chosen accuracies of 0.5 (N_0.5) and 1.0 (N_1.0) m/s and mean air and sea surface temperatures with chosen accuracies of 0.1 (N_0.1) and 0.2 (N_0.2) C were calculated for each 5-day and 30-day period in the buoy datasets. Mean values of N for the various accuracies and datasets are given. A second-order polynomial relation is established between N and the variability of the data record. This relationship demonstrates that for the same accuracy, N increases as the variability of the data record increases. The relationship is also independent of the data source. Volunteer-observing ship data do not satisfy the recommended minimum number of observations for obtaining 0.5 m/s and 0.2 C accuracy for most locations. The effect of having remotely sensed data is discussed.
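
    The idealized sampling relation behind these numbers is the standard error of a mean: if observations were independent, the count needed for a chosen accuracy would scale with the square of the record's variability. The study itself estimates N by random subsampling and finds a second-order polynomial relation, so the sketch below is only the independent-sample starting point, with made-up variability values.

    ```python
    # Illustrative sketch: observations needed so that the standard error of the
    # 5-day mean meets a chosen accuracy, assuming independent reports.
    import numpy as np

    def n_required(std_dev, accuracy, confidence_z=1.0):
        """Observations needed so that confidence_z * s / sqrt(N) <= accuracy."""
        return int(np.ceil((confidence_z * std_dev / accuracy) ** 2))

    for s in [1.0, 2.0, 4.0]:                 # std dev of a wind component over 5 days (m/s)
        print(f"s = {s} m/s: N_0.5 = {n_required(s, 0.5)}, N_1.0 = {n_required(s, 1.0)}")
    for s in [0.2, 0.5]:                      # std dev of SST over 5 days (deg C)
        print(f"s = {s} C:  N_0.1 = {n_required(s, 0.1)}, N_0.2 = {n_required(s, 0.2)}")
    ```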

  5. Theory of point-spread function artifacts due to structured mid-spatial frequency surface errors.

    PubMed

    Tamkin, John M; Dallas, William J; Milster, Tom D

    2010-09-01

    Optical design and tolerancing of aspheric or free-form surfaces require attention to surface form, structured surface errors, and nonstructured errors. We describe structured surface error profiles and effects on the image point-spread function using harmonic (Fourier) decomposition. Surface errors over the beam footprint map onto the pupil, where multiple structured surface frequencies mix to create sum and difference diffraction orders in the image plane at each field point. Difference frequencies widen the central lobe of the point-spread function and summation frequencies create ghost images.
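
    The ghost-order picture can be reproduced numerically: a weak sinusoidal surface ripple across the pupil acts as a phase grating, placing ghost images beside the PSF core, and two ripple frequencies mix into weaker sum- and difference-frequency orders. The wavelength, ripple amplitude and frequencies below are invented, and the scalar FFT model is a simplification of the paper's treatment.

    ```python
    # Numerical sketch: sinusoidal surface ripples across the pupil create ghost
    # diffraction orders beside the PSF core; two frequencies also mix into weaker
    # sum- and difference-frequency orders. All values are illustrative.
    import numpy as np

    N = 512
    x = np.linspace(-1.0, 1.0, N)
    X, Y = np.meshgrid(x, x)                      # X varies along columns
    pupil = (X**2 + Y**2 <= 1.0).astype(float)

    wavelength = 0.5e-6                           # m
    amp = 20e-9                                   # 20 nm surface ripple amplitude
    f1, f2 = 7, 11                                # ripple cycles per unit pupil coordinate
    surface = amp * (np.sin(2 * np.pi * f1 * X) + np.sin(2 * np.pi * f2 * X))
    phase = 4 * np.pi * surface / wavelength      # reflective surface: OPD = 2 * surface error

    field = pupil * np.exp(1j * phase)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    psf /= psf.max()

    # Ghost orders land at pixel offsets of roughly 2*f from the core along the ripple direction
    for label, k in [("f1", 2 * f1), ("f2", 2 * f2),
                     ("f2-f1", 2 * (f2 - f1)), ("f1+f2", 2 * (f1 + f2))]:
        print(f"{label:>6} order, offset {k:3d} px: relative intensity "
              f"{psf[N // 2, N // 2 + k]:.2e}")
    ```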

  6. Pedigree error due to extra-pair reproduction substantially biases estimates of inbreeding depression.

    PubMed

    Reid, Jane M; Keller, Lukas F; Marr, Amy B; Nietlisbach, Pirmin; Sardell, Rebecca J; Arcese, Peter

    2014-03-01

    Understanding the evolutionary dynamics of inbreeding and inbreeding depression requires unbiased estimation of inbreeding depression across diverse mating systems. However, studies estimating inbreeding depression often measure inbreeding with error, for example, based on pedigree data derived from observed parental behavior that ignore paternity error stemming from multiple mating. Such paternity error causes error in estimated coefficients of inbreeding (f) and reproductive success and could bias estimates of inbreeding depression. We used complete "apparent" pedigree data compiled from observed parental behavior and analogous "actual" pedigree data comprising genetic parentage to quantify effects of paternity error stemming from extra-pair reproduction on estimates of f, reproductive success, and inbreeding depression in free-living song sparrows (Melospiza melodia). Paternity error caused widespread error in estimates of f and male reproductive success, causing inbreeding depression in male and female annual and lifetime reproductive success and juvenile male survival to be substantially underestimated. Conversely, inbreeding depression in adult male survival tended to be overestimated when paternity error was ignored. Pedigree error stemming from extra-pair reproduction therefore caused substantial and divergent bias in estimates of inbreeding depression that could bias tests of evolutionary theories regarding inbreeding and inbreeding depression and their links to variation in mating system.

  7. Compensation of modelling errors due to unknown domain boundary in electrical impedance tomography.

    PubMed

    Nissinen, Antti; Kolehmainen, Ville Petteri; Kaipio, Jari P

    2011-02-01

    Electrical impedance tomography is a highly unstable problem with respect to measurement and modeling errors. This instability is especially severe when absolute imaging is considered. With clinical measurements, accurate knowledge about the body shape is usually not available, and therefore an approximate model domain has to be used in the computational model. It has earlier been shown that large reconstruction artefacts result if the geometry of the model domain is incorrect. In this paper, we adapt the so-called approximation error approach to compensate for the modeling errors caused by inaccurately known body shape. This approach has previously been shown to be applicable to a variety of modeling errors, such as coarse discretization in the numerical approximation of the forward model and domain truncation. We evaluate the approach with a simulated example of thorax imaging, and also with experimental data from a laboratory setting, with absolute imaging considered in both cases. We show that the related modeling errors can be efficiently compensated for by the approximation error approach. We also show that recovery from simultaneous discretization related errors is feasible, allowing the use of computationally efficient reduced order models.

  8. Pointing and tracking errors due to localized deformation in inter-satellite laser communication links.

    PubMed

    Tan, Liying; Yang, Yuqiang; Ma, Jing; Yu, Jianjie

    2008-08-18

    Instead of Zernike polynomials, an ellipse Gaussian model is proposed to represent localized wave-front deformation in researching pointing and tracking errors in inter-satellite laser communication links, which can simplify the calculation. It is shown that both pointing and tracking errors depend on the center depth h, the radii a and b, and the distance d of the Gaussian distortion and change regularly as they increase. The maximum peak values of pointing and tracking errors always appear around h = 0.2λ. The influence of localized deformation is up to 0.7 μrad for pointing error, and 0.5 μrad for tracking error. To reduce the impact of localized deformation on pointing and tracking errors, a machining precision of the optical devices better than 0.2λ is proposed. The principle of choosing the optical devices with localized deformation is presented, and the method that adjusts the pointing direction to compensate for pointing and tracking errors is given. We hope the results can be used in the design of inter-satellite lasercom systems.

  9. Standard addition/absorption detection microfluidic system for salt error-free nitrite determination.

    PubMed

    Ahn, Jae-Hoon; Jo, Kyoung Ho; Hahn, Jong Hoon

    2015-07-30

    A continuous-flow microfluidic chip-based standard addition/absorption detection system has been developed for accurate determination of nitrite in water of varying salinity. The absorption detection of nitrite is made via color development using the Griess reaction. We have found the yield of the reaction is significantly affected by salinity (e.g., -12% error for 30‰ NaCl, 50.0 μg L(-1)N-NO2(-) solution). The microchip has been designed to perform standard addition, color development, and absorbance detection in sequence. To effectively block stray light, the microchip made from black poly(dimethylsiloxane) is placed on the top of a compact housing that accommodates a light-emitting diode, a photomultiplier tube, and an interference filter, where the light source and the detector are optically isolated. An 80-mm liquid-core waveguide mounted on the chip externally has been employed as the absorption detection flow cell. These designs for optics secure a wide linear response range (up to 500 μg L(-1)N-NO2(-)) and a low detection limit (0.12 μg L(-1)N-NO2(-) = 8.6 nM N-NO2(-), S/N = 3). From determination of nitrite in standard samples and real samples collected from an estuary, it has been demonstrated that our microfluidic system is highly accurate (<1% RSD, n = 3) and precise (<1% RSD, n = 3). PMID:26320643
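
    The standard-addition step that the chip performs in flow can be summarized in a short calculation: fit absorbance against the added nitrite concentration and read the unknown off the x-intercept. The numbers below are illustrative, not measurements from the paper; the value of the method is that calibration happens in the sample's own (saline) matrix, so the salinity-dependent Griess yield cancels.

```python
# Standard-addition calculation in miniature: absorbance vs. added nitrite is
# fitted with a line, and the unknown sample concentration is the magnitude of
# the x-intercept. All numbers are illustrative, not data from the paper.
import numpy as np

added = np.array([0.0, 50.0, 100.0, 150.0])          # added N-NO2- (ug/L)
absorbance = np.array([0.062, 0.118, 0.171, 0.226])  # measured absorbance (a.u.)

slope, intercept = np.polyfit(added, absorbance, 1)
sample_conc = intercept / slope                      # |x-intercept| of the fitted line
print(f"estimated nitrite in the sample: {sample_conc:.1f} ug/L N-NO2-")
```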

  10. Reducing On-Board Computer Propagation Errors Due to Omitted Geopotential Terms by Judicious Selection of Uploaded State Vector

    NASA Technical Reports Server (NTRS)

    Greatorex, Scott (Editor); Beckman, Mark

    1996-01-01

    Several future, and some current missions, use an on-board computer (OBC) force model that is very limited. The OBC geopotential force model typically includes only the J(2), J(3), J(4), C(2,2) and S(2,2) terms to model non-spherical Earth gravitational effects. The Tropical Rainfall Measuring Mission (TRMM), Wide-field Infrared Explorer (WIRE), Transition Region and Coronal Explorer (TRACE), Submillimeter Wave Astronomy Satellite (SWAS), and X-ray Timing Explorer (XTE) all plan to use this geopotential force model on-board. The Solar, Anomalous, and Magnetospheric Particle Explorer (SAMPEX) is already flying this geopotential force model. Past analysis has shown that one of the leading sources of error in the OBC propagated ephemeris is the omission of the higher order geopotential terms. However, these same analyses have shown a wide range of accuracies for the OBC ephemerides. Analysis was performed using EUVE state vectors that showed the EUVE four day OBC propagated ephemerides varied in accuracy from 200 m to 45 km depending on the initial vector used to start the propagation. The vectors used in the study were from a single EUVE orbit at one minute intervals in the ephemeris. Since each vector propagated practically the same path as the others, the differences seen had to be due to differences in the initial state vector only. An algorithm was developed that will optimize the epoch of the uploaded state vector. Proper selection can reduce the previous errors of anywhere from 200 m to 45 km to generally less than one km over four days of propagation. This would enable flight projects to minimize state vector uploads to the spacecraft. Additionally, this method is superior to other methods in that no additional orbit estimates need be done. The definitive ephemeris generated on the ground can be used as long as the proper epoch is chosen. This algorithm can be easily coded in software that would pick the epoch within a specified time range that would

  11. Uncertainties in interpretation of data from turbulent boundary layers due to measurement errors

    NASA Astrophysics Data System (ADS)

    Vinuesa, Ricardo; Nagib, Hassan

    2011-11-01

    Composite expansions based on log law and power law were used to generate synthetic velocity profiles of ZPG turbulent boundary layers in the range 800 ≤ Reθ ≤ 8.6 × 10^5. Several artificial errors were then added to the velocity profiles to simulate dispersion in velocity measurements, error in determining probe position and uncertainty in measured skin friction. The effects of the simulated errors were studied by extracting log-law and power-law parameters from all these pseudo-experimental profiles, regardless of their original overlap region description. Various techniques were used, including the diagnostic functions (Ξ and Γ) and direct fits to logarithmic and power laws, to establish a measure of the deviations in the overlap region. The differences between extracted parameters and their expected values are compared for each case, with different magnitudes of error, to reveal when the pseudo-experimental profile leads to ambiguous conclusions; i.e., when parameters extracted for log law and power law are associated with similar levels of deviations. This ambiguity was observed up to Reθ = 16,000 for a 3% dispersion in the velocity measurements and Reθ = 2,000 when the skin friction was overestimated by only 2%. With respect to the error in the probe position, an uncertainty of 400 μm made even the highest Re profile ambiguous. The results from the present study are valid for air flow at atmospheric conditions.
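
    The skin-friction part of this sensitivity study is easy to reproduce in miniature: build an exact log-law profile, scale the friction velocity by the assumed 2% error, and refit the log-law constants. The sketch below does only that; the κ and B values are illustrative, and the probe-position and scatter errors studied in the paper are not simulated.

```python
# Refit log-law constants after overestimating the friction velocity by 2%.
import numpy as np

kappa_true, B_true = 0.384, 4.17
y_plus = np.logspace(2.0, 3.5, 40)                 # overlap-region points
u_plus = np.log(y_plus) / kappa_true + B_true      # exact log law

u_tau_factor = 1.02                                # friction velocity overestimated by 2%
y_plus_meas = y_plus * u_tau_factor                # y+ scales with u_tau
u_plus_meas = u_plus / u_tau_factor                # u+ scales with 1/u_tau

slope, intercept = np.polyfit(np.log(y_plus_meas), u_plus_meas, 1)
print(f"extracted kappa = {1.0 / slope:.3f}  (true {kappa_true})")
print(f"extracted B     = {intercept:.2f}   (true {B_true})")
```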

  12. Review of Aircraft Altitude Errors Due to Static-Pressure Source and Description of Nose-Boom Installations for Aerodynamic Compensation of Error

    NASA Technical Reports Server (NTRS)

    Gracey, William; Ritchie, Virgil S.

    1959-01-01

    A brief review of airplane altitude errors due to typical pressure installations at the fuselage nose, the wing tip, and the vertical fins is presented. A static-pressure tube designed to compensate for the position errors of fuselage-nose installations in the subsonic speed range is described. This type of tube has an ogival nose shape with the static-pressure orifices located in the low-pressure region near the tip. The results of wind-tunnel tests of these compensated tubes at two distances ahead of a model of an aircraft showed the position errors to be compensated to within 1/2 percent of the static pressure through a Mach number range up to about 1.0. This accuracy of sensing free-stream static pressure was extended up to a Mach number of about 1.15 by use of an orifice arrangement for producing approximate free-stream pressures at supersonic speeds and induced pressures for compensation of error at subsonic speeds.

  13. Compensation of errors due to incident beam drift in a 3 DOF measurement system for linear guide motion.

    PubMed

    Hu, Pengcheng; Mao, Shuai; Tan, Jiu-Bin

    2015-11-01

    A measurement system with three degrees of freedom (3 DOF) that compensates for errors caused by incident beam drift is proposed. The system's measurement model (i.e. its mathematical foundation) is analyzed, and a measurement module (i.e. the designed orientation measurement unit) is developed and adopted to measure simultaneously straightness errors and the incident beam direction; thus, the errors due to incident beam drift can be compensated. The experimental results show that the proposed system has a deviation of 1 μm in the range of 200 mm for distance measurements, and a deviation of 1.3 μm in the range of 2 mm for straightness error measurements.

  14. QUANTIFYING UNCERTAINTY DUE TO RANDOM ERRORS FOR MOMENT ANALYSES OF BREAKTHROUGH CURVES

    EPA Science Inventory

    The uncertainty in moments calculated from breakthrough curves (BTCs) is investigated as a function of random measurement errors in the data used to define the BTCs. The method presented assumes moments are calculated by numerical integration using the trapezoidal rule, and is t...
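
    A minimal version of the moment calculation whose uncertainty the study propagates is shown below: temporal moments of a breakthrough curve computed by the trapezoidal rule, once on a clean synthetic curve and once with additive random measurement noise. The curve shape and noise level are illustrative only.

```python
# Moment analysis of a breakthrough curve (BTC) by the trapezoidal rule, on a
# synthetic curve with and without additive random measurement error.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 50.0, 201)                        # time (h)
c = np.exp(-((t - 20.0) ** 2) / (2.0 * 4.0 ** 2))      # synthetic BTC (arbitrary units)
c_noisy = c + rng.normal(0.0, 0.02, size=t.size)       # random measurement error

def trapz(y, x):
    """Trapezoidal-rule integral, written out to match the method in the study."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def moments(t, c):
    m0 = trapz(c, t)                           # zeroth moment (area / mass)
    m1 = trapz(t * c, t) / m0                  # mean arrival time
    m2 = trapz((t - m1) ** 2 * c, t) / m0      # variance about the mean
    return m0, m1, m2

print("noise-free:", tuple(round(m, 3) for m in moments(t, c)))
print("with noise:", tuple(round(m, 3) for m in moments(t, c_noisy)))
```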

  15. Reducing Bias and Error in the Correlation Coefficient Due to Nonnormality

    ERIC Educational Resources Information Center

    Bishara, Anthony J.; Hittner, James B.

    2015-01-01

    It is more common for educational and psychological data to be nonnormal than to be approximately normal. This tendency may lead to bias and error in point estimates of the Pearson correlation coefficient. In a series of Monte Carlo simulations, the Pearson correlation was examined under conditions of normal and nonnormal data, and it was compared…
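
    A compact Monte Carlo along the lines described above: draw bivariate samples with a nominal population correlation, apply a skewing (lognormal) transform to make the data nonnormal, and compare the average sample Pearson r with the normal-data case. The sample size, correlation, and number of replicates are arbitrary illustrative choices.

```python
# Monte Carlo comparison of the mean sample Pearson r for normal data and for
# the same data pushed through a skewing (lognormal) transform.
import numpy as np

rng = np.random.default_rng(2)
rho, n, reps = 0.5, 20, 5000
cov = np.array([[1.0, rho], [rho, 1.0]])

def mean_sample_r(transform):
    rs = []
    for _ in range(reps):
        xy = rng.multivariate_normal([0.0, 0.0], cov, size=n)
        x, y = transform(xy[:, 0]), transform(xy[:, 1])
        rs.append(np.corrcoef(x, y)[0, 1])
    return float(np.mean(rs))

print("normal data:    mean sample r =", round(mean_sample_r(lambda v: v), 3))
print("lognormal data: mean sample r =", round(mean_sample_r(np.exp), 3))
```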

  16. Using kriging to bound satellite ranging errors due to the ionosphere

    NASA Astrophysics Data System (ADS)

    Blanch, Juan

    The Global Positioning System (GPS) has the potential to become the primary navigational aid for civilian aircraft, thanks to satellite based augmentation systems (SBAS). SBAS systems, including the United States' Wide Area Augmentation System (WAAS), provide corrections and hard bounds on the user errors. The ionosphere is the largest and least predictable source of error. The only ionospheric information available to WAAS is a set of range delay measurements taken at reference stations. From this data, the master station must compute a real time estimate of the ionospheric delay and a hard error bound valid for any user. The variability of the ionospheric behavior has caused the confidence bounds corresponding to the ionosphere to be very large in WAAS. These ranging bounds translate into conservative bounds on user position error. These position error bounds (called protection levels) have values of 30 to 50 meters. Since these values fluctuate near the maximum tolerable limit, WAAS is not always available. In order to increase the availability of WAAS, we must decrease the confidence bounds corresponding to ionospheric uncertainty while maintaining integrity. In this work, I present an ionospheric estimation algorithm based on kriging. I first introduce a simple model of the Vertical Ionospheric Delay that captures both the deterministic behavior and the random behavior of the ionosphere. Under this model, the kriging method is optimal. More importantly, kriging provides an estimation variance that can be translated into an error bound. However, this method must be modified for three reasons: first, the state of the ionosphere is unknown and can only be estimated through real time measurements; second, because of bandwidth constraints, the user cannot receive all the measurements; and third, there is noise in the measurements. I will show how these three obstacles can be overcome. The algorithm presented here provides a reduction in the error bound corresponding
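
    The estimation step at the heart of this approach is ordinary kriging, which returns both a value and a variance at the query point. The sketch below is a textbook ordinary-kriging solve on a handful of synthetic pierce-point delays with an exponential covariance model; it is not the WAAS algorithm itself, and the covariance parameters are placeholders.

```python
# Plain ordinary kriging: estimate a vertical ionospheric delay at a query
# point and return the kriging variance that could be inflated into a bound.
import numpy as np

def ordinary_kriging(xy_obs, z_obs, xy_new, sill=1.0, corr_len=800.0):
    """Ordinary kriging with an exponential covariance model (illustrative)."""
    def cov(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return sill * np.exp(-d / corr_len)

    n = len(z_obs)
    lhs = np.zeros((n + 1, n + 1))
    lhs[:n, :n] = cov(xy_obs, xy_obs)
    lhs[:n, n] = 1.0
    lhs[n, :n] = 1.0                      # unbiasedness constraint
    k = cov(xy_obs, xy_new[None, :])[:, 0]
    sol = np.linalg.solve(lhs, np.append(k, 1.0))
    w, mu = sol[:n], sol[n]
    return w @ z_obs, sill - w @ k - mu   # estimate and kriging variance

# Synthetic ionospheric pierce points (km) and vertical delays (m).
xy_obs = np.array([[0.0, 0.0], [500.0, 100.0], [200.0, 600.0], [800.0, 700.0]])
z_obs = np.array([3.2, 3.8, 2.9, 4.1])
est, var = ordinary_kriging(xy_obs, z_obs, np.array([400.0, 400.0]))
print(f"estimated delay: {est:.2f} m, kriging standard deviation: {np.sqrt(var):.2f} m")
```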

  17. Simulation of Dose to Surrounding Normal Structures in Tangential Breast Radiotherapy Due to Setup Error

    SciTech Connect

    Prabhakar, Ramachandran; Rath, Goura K.; Julka, Pramod K.; Ganesh, Tharmar; Haresh, K.P.; Joshi, Rakesh C.; Senthamizhchelvan, S.; Thulkar, Sanjay; Pant, G.S.

    2008-04-01

    Setup error plays a significant role in the final treatment outcome in radiotherapy. The effect of setup error on the planning target volume (PTV) and surrounding critical structures has been studied and the maximum allowed tolerance in setup error with minimal complications to the surrounding critical structure and acceptable tumor control probability is determined. Twelve patients were selected for this study after breast conservation surgery, wherein 8 patients were right-sided and 4 were left-sided breast. Tangential fields were placed on the 3-dimensional-computed tomography (3D-CT) dataset by isocentric technique and the dose to the PTV, ipsilateral lung (IL), contralateral lung (CLL), contralateral breast (CLB), heart, and liver were then computed from dose-volume histograms (DVHs). The planning isocenter was shifted for 3 and 10 mm in all 3 directions (X, Y, Z) to simulate the setup error encountered during treatment. Dosimetric studies were performed for each patient for PTV according to ICRU 50 guidelines: mean doses to PTV, IL, CLL, heart, CLB, liver, and percentage of lung volume that received a dose of 20 Gy or more (V20); percentage of heart volume that received a dose of 30 Gy or more (V30); and volume of liver that received a dose of 50 Gy or more (V50) were calculated for all of the above-mentioned isocenter shifts and compared to the results with zero isocenter shift. Simulation of different isocenter shifts in all 3 directions showed that the isocentric shifts along the posterior direction had a very significant effect on the dose to the heart, IL, CLL, and CLB, which was followed by the lateral direction. The setup error in isocenter should be strictly kept below 3 mm. The study shows that isocenter verification in the case of tangential fields should be performed to reduce future complications to adjacent normal tissues.

  18. Simulation of dose to surrounding normal structures in tangential breast radiotherapy due to setup error.

    PubMed

    Prabhakar, Ramachandran; Rath, Goura K; Julka, Pramod K; Ganesh, Tharmar; Haresh, K P; Joshi, Rakesh C; Senthamizhchelvan, S; Thulkar, Sanjay; Pant, G S

    2008-01-01

    Setup error plays a significant role in the final treatment outcome in radiotherapy. The effect of setup error on the planning target volume (PTV) and surrounding critical structures has been studied and the maximum allowed tolerance in setup error with minimal complications to the surrounding critical structure and acceptable tumor control probability is determined. Twelve patients were selected for this study after breast conservation surgery, wherein 8 patients were right-sided and 4 were left-sided breast. Tangential fields were placed on the 3-dimensional-computed tomography (3D-CT) dataset by isocentric technique and the dose to the PTV, ipsilateral lung (IL), contralateral lung (CLL), contralateral breast (CLB), heart, and liver were then computed from dose-volume histograms (DVHs). The planning isocenter was shifted for 3 and 10 mm in all 3 directions (X, Y, Z) to simulate the setup error encountered during treatment. Dosimetric studies were performed for each patient for PTV according to ICRU 50 guidelines: mean doses to PTV, IL, CLL, heart, CLB, liver, and percentage of lung volume that received a dose of 20 Gy or more (V20); percentage of heart volume that received a dose of 30 Gy or more (V30); and volume of liver that received a dose of 50 Gy or more (V50) were calculated for all of the above-mentioned isocenter shifts and compared to the results with zero isocenter shift. Simulation of different isocenter shifts in all 3 directions showed that the isocentric shifts along the posterior direction had a very significant effect on the dose to the heart, IL, CLL, and CLB, which was followed by the lateral direction. The setup error in isocenter should be strictly kept below 3 mm. The study shows that isocenter verification in the case of tangential fields should be performed to reduce future complications to adjacent normal tissues. PMID:18262128

  19. Distorted orbit due to field errors and particle trajectories in combined undulator and axial magnetic field

    SciTech Connect

    Papadichev, V.A.

    1995-12-31

    Undulator and solenoid field errors cause electron trajectory deviation from the ideal orbit. Even small errors can result in a large lower frequency excursion from the undulator axis of a distorted orbit and of betatron oscillations performed now around it, especially near resonant conditions. Numerical calculation of a trajectory step by step requires large computing time and treats only particular cases, thus lacking generality. Theoretical treatment is traditionally based on random distribution of field errors, which allows a rather general approach, but is not convenient for practical purposes. In contrast, analytical treatment shows explicitly how distorted orbit and betatron oscillation amplitude depend on field parameters and errors and indicates how to eliminate these distortions. An analytical solution of the equations of motion can be found by expanding field errors and distorted orbit in Fourier series as was done earlier for the simplest case of a plane undulator without axial magnetic field. The same method is applied now to the more general case of combined generalized undulator and axial magnetic fields. The undulator field is a superposition of the fields of two plane undulators with mutually orthogonal fields and an arbitrary axial shift of the second undulator relative to the first. Beam space-charge forces and external linear focusing are taken into account. The particle trajectory is a superposition of ideal and distorted orbits with cyclotron gyration and slow drift gyration in the axial magnetic field caused by a balance of focusing and defocusing forces. The amplitudes of these gyrations depend on transverse coordinate and velocity at injection and can nearly double the total deviation of an electron from the undulator axis even after an adiabatic undulator entry. If the wavenumber of any Fourier harmonic is close to the wavenumbers of cyclotron or drift gyrations, a resonant increase of orbit distortion occurs.

  20. Sinusoidal Siemens star spatial frequency response measurement errors due to misidentified target centers

    SciTech Connect

    Birch, Gabriel Carisle; Griffin, John Clark

    2015-07-23

    Numerous methods are available to measure the spatial frequency response (SFR) of an optical system. A recent change to the ISO 12233 photography resolution standard includes a sinusoidal Siemens star test target. We take the sinusoidal Siemens star proposed by the ISO 12233 standard, measure system SFR, and perform an analysis of errors induced by incorrectly identifying the center of a test target. We show a closed-form solution for the radial profile intensity measurement given an incorrectly determined center and describe how this error reduces the measured SFR of the system. As a result, using the closed-form solution, we propose a two-step process by which test target centers are corrected and the measured SFR is restored to the nominal, correctly centered values.
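
    The effect described here can be demonstrated in a few lines of code: sample a sinusoidal Siemens star around a circle drawn about the true center and about a slightly offset center, and compare the modulation recovered at the star's nominal harmonic. The star frequency, radii, and 2-pixel center error below are illustrative, and this is a simplified stand-in for the paper's closed-form treatment.

```python
# Modulation recovered from a sinusoidal Siemens star when the profile circle
# is drawn about a misidentified center; values are illustrative only.
import numpy as np

CYCLES = 36                                   # spoke frequency of the sinusoidal star

def star(x, y):
    return 0.5 * (1.0 + np.cos(CYCLES * np.arctan2(y, x)))

def measured_modulation(radius_px, center_error_px, n_samples=8192):
    """Amplitude at the star's nominal harmonic, sampled about an assumed center."""
    phi = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    x = radius_px * np.cos(phi) - center_error_px   # circle about the shifted center
    y = radius_px * np.sin(phi)
    profile = star(x, y)
    amp = 2.0 / n_samples * np.abs(np.sum(profile * np.exp(-1j * CYCLES * phi)))
    return amp / 0.5                                # 1.0 when perfectly centered

for r in (50.0, 100.0, 200.0):
    print(f"radius {r:5.0f} px: centered {measured_modulation(r, 0.0):.3f}, "
          f"2 px center error {measured_modulation(r, 2.0):.3f}")
```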

  1. Sinusoidal Siemens star spatial frequency response measurement errors due to misidentified target centers

    DOE PAGES

    Birch, Gabriel Carisle; Griffin, John Clark

    2015-07-23

    Numerous methods are available to measure the spatial frequency response (SFR) of an optical system. A recent change to the ISO 12233 photography resolution standard includes a sinusoidal Siemens star test target. We take the sinusoidal Siemens star proposed by the ISO 12233 standard, measure system SFR, and perform an analysis of errors induced by incorrectly identifying the center of a test target. We show a closed-form solution for the radial profile intensity measurement given an incorrectly determined center and describe how this error reduces the measured SFR of the system. As a result, using the closed-form solution, we propose a two-step process by which test target centers are corrected and the measured SFR is restored to the nominal, correctly centered values.

  2. A quantification of errors in surface albedo due to common assumptions

    NASA Technical Reports Server (NTRS)

    Arduini, Robert F.; Suttles, J. T.

    1990-01-01

    A study comparing the performance of three approaches to estimating the spectral albedo of a typical land surface is presented. The most accurate albedo estimates under all atmospheric situations are those for which the scattering properties of the atmosphere can be used. Simply utilizing the direct-to-total ratio as a weight between direct and Lambertian albedos reduced the errors in broadband albedo to less than one percent for almost all simulated atmospheric conditions.

  3. Relativistic positioning: errors due to uncertainties in the satellite world lines

    NASA Astrophysics Data System (ADS)

    Puchades, Neus; Sáez, Diego

    2014-07-01

    Global navigation satellite systems use appropriate satellite constellations to get the coordinates of a user—close to Earth—in an almost inertial reference system. We have simulated both GPS and GALILEO constellations. Uncertainties in the satellite world lines lead to dominant positioning errors. In this paper, a detailed analysis of these errors is developed inside a great region surrounding Earth. This analysis is performed in the framework of the so-called relativistic positioning systems. Our study is based on the Jacobian (J) of the transformation giving the emission coordinates in terms of the inertial ones. Around points of vanishing J, positioning errors are too large. We show that, for any 4-tuple of satellites, the points with J = 0 are located at distances, D, from the Earth centre greater than about 2R/3, where R is the radius of the satellite orbits, which are assumed to be circular. Our results strongly suggest that, for D-distances greater than 2R/3 and smaller than 10^5 km, a rather good positioning may be achieved by using appropriate satellite 4-tuples without J = 0 points located in the user vicinity. The way to find these 4-tuples is discussed for arbitrary users with D < 10^5 km and, then, preliminary considerations about satellite navigation at D < 10^5 km are presented. Future work on the subject of space navigation—based on appropriate simulations—is in progress.

  4. Seasonal GPS Positioning Errors due to Water Content Variations in Atmosphere

    NASA Astrophysics Data System (ADS)

    Tian, Y.

    2013-12-01

    Non-tectonic signals, e.g. seasonal variations and common-mode errors (CME), remain in Global Positioning System (GPS) positioning results derived using state-of-the-art software and models, and they blur the detection of transient events. Previous studies have shown that there are also seasonal variations in GPS positioning accuracy, i.e., the scatter of GPS positions in the summer is larger than that in the winter for some regional networks. In this work, a consistent reprocessing of historical data for global GPS stations is done to confirm the existence of such variations and to characterize their spatial pattern at the global scale. It is found that GPS stations in the northern hemisphere have larger positioning errors in the summer than in the winter; conversely, southern-hemisphere stations have larger errors in the winter than in the summer. Results for several typical stations are shown in Fig. 1. After excluding several possible origins of this phenomenon, it is found that the variation of precipitable water vapor (PWV) content in the atmosphere is highly correlated with this kind of seasonal positioning error (Fig. 2). Although it cannot yet be demonstrated that GPS positioning accuracy would improve if the PWV effect were fully removed on rainy days during data processing, it is most likely that this phenomenon is caused by the water vapor content in the troposphere. Solving this problem would enhance our ability to detect weak transient signals that are blurred in continuous GPS positions. Fig. 1 Position time series with CME removed for BJFS (left), HRAO (middle), and WTZR (right). BJFS and WTZR are located in the northern hemisphere, where positions are more scattered in the summer; the opposite holds at HRAO, which is located in the southern hemisphere. Fig. 2 GPS positioning errors (represented here by the one-way postfit residuals (OWPR) from GAMIT solution

  5. Soft-error generation due to heavy-ion tracks in bipolar integrated circuits

    NASA Technical Reports Server (NTRS)

    Zoutendyk, J. A.

    1984-01-01

    Both bipolar and MOS integrated circuits have been empirically demonstrated to be susceptible to single-particle soft-error generation, commonly referred to as single-event upset (SEU), which is manifested in a bit-flip in a latch-circuit construction. Here, the intrinsic characteristics of SEU in bipolar (static) RAM's are demonstrated through results obtained from the modeling of this effect using computer circuit-simulation techniques. It is shown that as the dimensions of the devices decrease, the critical charge required to cause SEU decreases in proportion to the device cross-section. The overall results of the simulations are applicable to most integrated circuit designs.

  6. Errors in short circuit measurements due to spectral mismatch between sunlight and solar simulators

    NASA Technical Reports Server (NTRS)

    Curtis, H. B.

    1976-01-01

    Errors in short circuit current measurement were calculated for a variety of spectral mismatch conditions. The differences in spectral irradiance between terrestrial sunlight and three types of solar simulator were studied, as well as the differences in spectral response between three types of reference solar cells and various test cells. The simulators considered were a short arc xenon lamp AMO sunlight simulator, an ordinary quartz halogen lamp, and an ELH-type quartz halogen lamp. Three types of solar cells studied were a silicon cell, a cadmium sulfide cell and a gallium arsenide cell.
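
    The error being quantified here is usually expressed through the spectral mismatch factor M: the measured short-circuit current is off by roughly (M - 1) when the simulator spectrum and the test/reference cell responses differ. The sketch below evaluates that standard factor with crude Gaussian stand-ins for the spectra and responses; it reproduces the structure of the calculation, not the paper's data.

```python
# Spectral mismatch factor M for a simulator/reference-cell/test-cell
# combination; the relative error in measured short-circuit current is ~(M - 1).
# Spectra and responses are crude Gaussian stand-ins, not measured curves.
import numpy as np

wl = np.linspace(300.0, 1200.0, 901)                  # wavelength grid (nm)

def gaussian(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

E_sun = gaussian(550.0, 300.0)       # terrestrial sunlight (stand-in)
E_sim = gaussian(480.0, 200.0)       # xenon-like simulator spectrum (stand-in)
S_ref = gaussian(800.0, 250.0)       # silicon reference-cell response (stand-in)
S_test = gaussian(650.0, 150.0)      # GaAs-like test-cell response (stand-in)

def integral(a, b):
    return float(np.sum(a * b) * (wl[1] - wl[0]))

M = (integral(E_sim, S_test) / integral(E_sim, S_ref)) * \
    (integral(E_sun, S_ref) / integral(E_sun, S_test))
print(f"mismatch factor M = {M:.3f} -> short-circuit current error ~ {100.0 * (M - 1.0):+.1f}%")
```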

  7. Signal distortion due to beam-pointing error in a chopper modulated laser system.

    PubMed

    Eklund, H

    1978-01-15

    The detector output has been studied for a long-distance system with a chopped cw laser as transmitter source. It is shown experimentally that the pulse distortion of the detected signal is dependent on the beam-pointing error. Parameters reflecting the pulse distortion are defined. The beam deviation in 1-D is found to be strongly related to these parameters. The result is in agreement with a theoretical model based upon the Fresnel diffraction theory. Possible applications in beam-tracking systems, communications systems, and atmospheric studies are discussed. PMID:20174398

  8. [Error structure and additivity of individual tree biomass model for four natural conifer species in Northeast China].

    PubMed

    Dong, Li-hu; Li, Feng-ri; Song, Yu-wen

    2015-03-01

    Based on the biomass data of 276 sampling trees of Pinus koraiensis, Abies nephrolepis, Picea koraiensis and Larix gmelinii, the mono-element and dual-element additive system of biomass equations for the four conifer species was developed. The model error structure (additive vs. multiplicative) of the allometric equation was evaluated using the likelihood analysis, while nonlinear seemingly unrelated regression was used to estimate the parameters in the additive system of biomass equations. The results indicated that the assumption of multiplicative error structure was strongly supported for the biomass equations of total and tree components for the four conifer species. Thus, the additive system of log-transformed biomass equations was developed. The adjusted coefficient of determination (Ra²) of the additive system of biomass equations for the four conifer species was 0.85-0.99, the mean relative error was between -7.7% and 5.5%, and the mean absolute relative error was less than 30.5%. Adding total tree height in the additive systems of biomass equations could significantly improve model fitting performance and predicting precision, and the biomass equations of total, aboveground and stem were better than biomass equations of root, branch, foliage and crown. The precision of each biomass equation in the additive system varied from 77.0% to 99.7% with a mean value of 92.3% that would be suitable for predicting the biomass of the four natural conifer species.
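
    The model form referred to above is the log-transformed allometric equation with multiplicative error, fitted on the log scale. The sketch below fits ln(B) = a + b·ln(D) + c·ln(H) by ordinary least squares on synthetic data; the seemingly-unrelated-regression step that ties the component equations together into an additive system is not reproduced.

```python
# Log-transformed allometric biomass model with multiplicative error, fitted by
# least squares on the log scale. Data are synthetic.
import numpy as np

rng = np.random.default_rng(3)
D = rng.uniform(8.0, 45.0, 120)                      # diameter at breast height (cm)
H = 1.3 + 25.0 * (1.0 - np.exp(-0.06 * D))           # tree height (m), synthetic
B = np.exp(-2.0 + 2.3 * np.log(D) + 0.4 * np.log(H)
           + rng.normal(0.0, 0.1, D.size))           # biomass (kg), multiplicative error

X = np.column_stack([np.ones_like(D), np.log(D), np.log(H)])
(a, b, c), *_ = np.linalg.lstsq(X, np.log(B), rcond=None)
print(f"ln(B) = {a:.2f} + {b:.2f} ln(D) + {c:.2f} ln(H)")
```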

  9. Error in Dasibi flight measurements of atmospheric ozone due to instrument wall-loss

    NASA Technical Reports Server (NTRS)

    Ainsworth, J. E.; Hagemeyer, J. R.; Reed, E. I.

    1981-01-01

    Theory suggests that in laminar flow the percent loss of a trace constituent to the walls of a measuring instrument varies as P to the -2/3, where P is the total gas pressure. Preliminary laboratory ozone wall-loss measurements confirm this P to the -2/3 dependence. Accurate assessment of wall-loss is thus of particular importance for those balloon-borne instruments utilizing laminar flow at ambient pressure, since the ambient pressure decreases by a factor of 350 during ascent to 40 km. Measurements and extrapolations made for a Dasibi ozone monitor modified for balloon flight indicate that the wall-loss error at 40 km was between 6 and 30 percent and that the wall-loss error in the derived total ozone column-content for the region from the surface to 40 km altitude was between 2 and 10 percent. At 1000 mb, turbulence caused an order of magnitude increase in the Dasibi wall-loss.

  10. Mitigation of Angle Tracking Errors Due to Color Dependent Centroid Shifts in SIM-Lite

    NASA Technical Reports Server (NTRS)

    Nemati, Bijan; An, Xin; Goullioud, Renaud; Shao, Michael; Shen, Tsae-Pyng; Wehmeier, Udo J.; Weilert, Mark A.; Wang, Xu; Werne, Thomas A.; Wu, Janet P.; Zhai, Chengxing

    2010-01-01

    The SIM-Lite astrometric interferometer will search for Earth-size planets in the habitable zones of nearby stars. In this search the interferometer will monitor the astrometric position of candidate stars relative to nearby reference stars over the course of a 5 year mission. The elemental measurement is the angle between a target star and a reference star. This is a two-step process, in which the interferometer will each time need to use its controllable optics to align the starlight in the two arms with each other and with the metrology beams. The sensor for this alignment is an angle tracking CCD camera. Various constraints in the design of the camera subject it to systematic alignment errors when observing a star of one spectrum compared with a star of a different spectrum. This effect is called a Color Dependent Centroid Shift (CDCS) and has been studied extensively with SIM-Lite's SCDU testbed. Here we describe results from the simulation and testing of this error in the SCDU testbed, as well as effective ways that it can be reduced to acceptable levels.

  11. Modeling nonlinear errors in surface electromyography due to baseline noise: a new methodology.

    PubMed

    Law, Laura Frey; Krishnan, Chandramouli; Avin, Keith

    2011-01-01

    The surface electromyographic (EMG) signal is often contaminated by some degree of baseline noise. It is customary for scientists to subtract baseline noise from the measured EMG signal prior to further analyses based on the assumption that baseline noise adds linearly to the observed EMG signal. The stochastic nature of both the baseline and EMG signal, however, may invalidate this assumption. Alternately, "true" EMG signals may be either minimally or nonlinearly affected by baseline noise. This information is particularly relevant at low contraction intensities when signal-to-noise ratios (SNR) may be lowest. Thus, the purpose of this simulation study was to investigate the influence of varying levels of baseline noise (approximately 2-40% maximum EMG amplitude) on mean EMG burst amplitude and to assess the best means to account for signal noise. The simulations indicated baseline noise had minimal effects on mean EMG activity for maximum contractions, but increased nonlinearly with increasing noise levels and decreasing signal amplitudes. Thus, the simple baseline noise subtraction resulted in substantial error when estimating mean activity during low intensity EMG bursts. Conversely, correcting EMG signal as a nonlinear function of both baseline and measured signal amplitude provided highly accurate estimates of EMG amplitude. This novel nonlinear error modeling approach has potential implications for EMG signal processing, particularly when assessing co-activation of antagonist muscles or small amplitude contractions where the SNR can be low.
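
    The core observation can be reproduced with a toy simulation: if the underlying EMG and the baseline noise are independent, their powers add, so subtracting the noise amplitude linearly over-corrects at low contraction intensities. The quadrature correction below is a generic power-based illustration of a nonlinear correction, not the specific model fitted in the paper; all amplitudes are arbitrary.

```python
# Toy simulation: the "true" EMG and baseline noise are independent, so their
# powers (not amplitudes) add. Linear subtraction over-corrects at low levels;
# a quadrature (power-based) correction recovers the true amplitude.
import numpy as np

rng = np.random.default_rng(4)
n = 200_000
noise_rms = 0.05                                   # baseline noise amplitude (a.u.)

for true_rms in (0.05, 0.10, 0.50, 1.00):          # low to high contraction levels
    emg = rng.normal(0.0, true_rms, n)
    baseline = rng.normal(0.0, noise_rms, n)
    measured_rms = np.std(emg + baseline)
    linear = measured_rms - noise_rms              # naive baseline subtraction
    quadrature = np.sqrt(max(measured_rms**2 - noise_rms**2, 0.0))
    print(f"true {true_rms:.2f}  linear {linear:.3f}  quadrature {quadrature:.3f}")
```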

  12. Inhalation errors due to device switch in patients with chronic obstructive pulmonary disease and asthma: critical health and economic issues

    PubMed Central

    Roggeri, Alessandro; Micheletto, Claudio; Roggeri, Daniela Paola

    2016-01-01

    Background Different inhalation devices are characterized by different techniques of use. The untrained switching of device in chronic obstructive pulmonary disease (COPD) and asthma patients may be associated with inadequate inhalation technique and, consequently, could lead to a reduction in adherence to treatment and limit control of the disease. The aim of this analysis was to estimate the potential economic impact related to errors in inhalation in patients switching device without adequate training. Methods An Italian real-practice study conducted in patients affected by COPD and asthma has shown an increase in health care resource consumption associated with misuse of inhalers. Particularly, significantly higher rates of hospitalizations, emergency room visits (ER), and pharmacological treatments (steroids and antimicrobials) were observed. In this analysis, those differences in resource consumption were monetized considering the Italian National Health Service (INHS) perspective. Results Comparing a hypothetical cohort of 100 COPD patients with at least a critical error in inhalation vs 100 COPD patients without errors in inhalation, a yearly excess of 11.5 hospitalizations, 13 ER visits, 19.5 antimicrobial courses, and 47 corticosteroid courses for the first population were revealed. In the same way, considering 100 asthma patients with at least a critical error in inhalation vs 100 asthma patients without errors in inhalation, the first population is associated with a yearly excess of 19 hospitalizations, 26.5 ER visits, 4.5 antimicrobial courses, and 21.5 corticosteroid courses. These differences in resource consumption could be associated with an increase in health care expenditure for INHS, due to inhalation errors, of €23,444/yr in COPD and €44,104/yr in asthma for the considered cohorts of 100 patients. Conclusion This evaluation highlights that misuse of inhaler devices, due to inadequate training or nonconsented switch of inhaled medications

  13. Prevalence of visual impairment due to uncorrected refractive error: Results from Delhi-Rapid Assessment of Visual Impairment Study

    PubMed Central

    Senjam, Suraj Singh; Vashist, Praveen; Gupta, Noopur; Malhotra, Sumit; Misra, Vasundhara; Bhardwaj, Amit; Gupta, Vivek

    2016-01-01

    Aim: To estimate the prevalence of visual impairment (VI) due to uncorrected refractive error (URE) and to assess the barriers to utilization of services in the adult urban population of Delhi. Materials and Methods: A population-based rapid assessment of VI was conducted among people aged 40 years and above in 24 randomly selected clusters of East Delhi district. Presenting visual acuity (PVA) was assessed in each eye using Snellen's E chart. Pinhole examination was done if PVA was <20/60 in either eye and ocular examination to ascertain the cause of VI. Barriers to utilization of services for refractive error were recorded with questionnaires. Results: Of 2421 individuals enumerated, 2331 (96%) individuals were examined. Females were 50.7% among them. The mean age of all examined subjects was 51.32 ± 10.5 years (standard deviation). VI in either eye due to URE was present in 275 individuals (11.8%, 95% confidence interval [CI]: 10.5–13.1). URE was identified as the most common cause (53.4%) of VI. The overall prevalence of VI due to URE in the study population was 6.1% (95% CI: 5.1–7.0). The elder population as well as females were more likely to have VI due to URE (odds ratio [OR] = 12.3; P < 0.001 and OR = 1.5; P < 0.02). Lack of felt need was the most common reported barrier (31.5%). Conclusions: The prevalence of VI due to URE among the urban adult population of Delhi is still high despite the availability of abundant eye care facilities. The majority of reported barriers are related to human behavior and attitude toward the refractive error. Understanding these aspects will help in planning appropriate strategies to eliminate VI due to URE. PMID:27380979

  14. A Measuring System with an Additional Channel for Eliminating the Dynamic Error

    NASA Astrophysics Data System (ADS)

    Dichev, Dimitar; Koev, Hristofor; Louda, Petr

    2014-03-01

    This article presents a measuring system for determining the parameters of vessels. The system has high measurement accuracy when operating in both static and dynamic mode. It is designed on a gyro-free principle for plotting a vertical. High accuracy of measurement is achieved by using a simplified design of the mechanical module as well as by minimizing the instrumental error. A new solution for improving the measurement accuracy in dynamic mode is offered. The approach presented is based on a method where the dynamic error is eliminated in real time, unlike the existing measurement methods and tools where stabilization of the vertical in the inertial space is used. The results obtained from the theoretical experiments, which have been performed on the basis of the developed mathematical model, demonstrate the effectiveness of the suggested measurement approach.

  15. Prediction of DVH parameter changes due to setup errors for breast cancer treatment based on 2D portal dosimetry

    SciTech Connect

    Nijsten, S. M. J. J. G.; Elmpt, W. J. C. van; Mijnheer, B. J.; Minken, A. W. H.; Persoon, L. C. G. G.; Lambin, P.; Dekker, A. L. A. J.

    2009-01-15

    Electronic portal imaging devices (EPIDs) are increasingly used for portal dosimetry applications. In our department, EPIDs are clinically used for two-dimensional (2D) transit dosimetry. Predicted and measured portal dose images are compared to detect dose delivery errors caused for instance by setup errors or organ motion. The aim of this work is to develop a model to predict dose-volume histogram (DVH) changes due to setup errors during breast cancer treatment using 2D transit dosimetry. First, correlations between DVH parameter changes and 2D gamma parameters are investigated for different simulated setup errors, which are described by a binomial logistic regression model. The model calculates the probability that a DVH parameter changes more than a specific tolerance level and uses several gamma evaluation parameters for the planning target volume (PTV) projection in the EPID plane as input. Second, the predictive model is applied to clinically measured portal images. Predicted DVH parameter changes are compared to calculated DVH parameter changes using the measured setup error resulting from a dosimetric registration procedure. Statistical accuracy is investigated by using receiver operating characteristic (ROC) curves and values for the area under the curve (AUC), sensitivity, specificity, positive and negative predictive values. Changes in the mean PTV dose larger than 5%, and changes in V90 and V95 larger than 10% are accurately predicted based on a set of 2D gamma parameters. Most pronounced changes in the three DVH parameters are found for setup errors in the lateral-medial direction. AUC, sensitivity, specificity, and negative predictive values were between 85% and 100% while the positive predictive values were lower but still higher than 54%. Clinical predictive value is decreased due to the occurrence of patient rotations or breast deformations during treatment, but the overall reliability of the predictive model remains high. Based on our
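
    The statistical core of this work is a binomial logistic regression from gamma-evaluation summaries to the probability that a DVH parameter changes beyond tolerance. The sketch below fits such a model on synthetic data with plain gradient ascent, using two stand-in predictors (mean gamma and gamma fail fraction) rather than the paper's actual feature set.

```python
# Binomial logistic regression from gamma-evaluation summaries to the
# probability of a DVH parameter change; data and predictors are synthetic.
import numpy as np

rng = np.random.default_rng(5)
n = 400
mean_gamma = rng.uniform(0.2, 1.5, n)
fail_frac = np.clip(rng.normal(0.3 * mean_gamma, 0.05), 0.0, 1.0)
logit_true = -4.0 + 3.0 * mean_gamma + 4.0 * fail_frac
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit_true))   # "DVH change > tolerance"

X = np.column_stack([np.ones(n), mean_gamma, fail_frac])
beta = np.zeros(3)
for _ in range(5000):                                    # plain gradient ascent
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    beta += 0.05 * X.T @ (y - p) / n

p_new = 1.0 / (1.0 + np.exp(-np.array([1.0, 1.2, 0.4]) @ beta))
print("fitted coefficients:", np.round(beta, 2))
print(f"P(change > tolerance | mean gamma = 1.2, fail fraction = 0.4) = {p_new:.2f}")
```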

  16. The signal in total-body plethysmography: errors due to adiabatic-isothermic difference.

    PubMed

    Chaui-Berlinck, J G; Bicudo, J E

    1998-09-01

    Total-body plethysmography is a technique often employed in comparative physiology studies because it avoids excessive handling of the animals. The pressure signal obtained is generated by an increase in internal energy of the gas phase of the system. Currently, this increase in internal energy is ascribed to heating (and water vapour saturation) of the inspired gas. The standard equation for computing tidal-volume implies that only temperature and saturation differences can be responsible for generating the ventilation signal. In this study, we were able to demonstrate that the difference between the external process of the thoracic expansion, which is adiabatic, and the internal process of it, which is isothermic, is an important factor of internal energy change in the total-body plethysmography method. In other words, organic tissues transfer heat to the entering gas but also to the present gas, in a way that keeps internal expansion an isothermic process. This extra amount of energy was never taken into account before. Therefore, experiments using such a technique to measure tidal-volume should be done using isothermic chambers. Moreover, due to uncertainties of the complementary measurements (ambient and lung temperatures, ambient water vapour saturation) needed to compute tidal-volume using total-body plethysmography, a minimal temperature difference about 15 degrees C between body and ambient should exist to keep uncertainties in tidal-volume values below 5%. However, this limit is not absolute, because it varies as a function of humidity and degree of uncertainty of the complementary measurements.

  17. Error Analysis

    NASA Astrophysics Data System (ADS)

    Scherer, Philipp O. J.

    Input data as well as the results of elementary operations have to be represented by machine numbers, the subset of real numbers which is used by the arithmetic unit of today's computers. Generally this generates rounding errors. This kind of numerical error can be avoided in principle by using arbitrary precision arithmetics or symbolic algebra programs. But this is impractical in many cases due to the increase in computing time and memory requirements. Results from more complex operations like square roots or trigonometric functions can have even larger errors since series expansions have to be truncated and iterations accumulate the errors of the individual steps. In addition, the precision of input data from an experiment is limited. In this chapter we study the influence of numerical errors on the uncertainties of the calculated results and the stability of simple algorithms.
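
    A minimal demonstration of the rounding-error accumulation discussed here: repeatedly adding 0.1, which has no exact binary representation, drifts away from the mathematically exact result, while a compensated (Kahan) summation stays at machine precision. This example is an illustration added here, not taken from the chapter.

```python
# Summing 0.1 (not exactly representable in binary) a million times: naive
# accumulation drifts from the exact value 100000, while Kahan (compensated)
# summation stays at machine precision.
def kahan_sum(values):
    total, carry = 0.0, 0.0
    for v in values:
        y = v - carry
        t = total + y
        carry = (t - total) - y      # low-order bits lost in the addition
        total = t
    return total

n = 1_000_000
naive = sum(0.1 for _ in range(n))
compensated = kahan_sum(0.1 for _ in range(n))
print(f"naive sum:       {naive:.10f}")
print(f"compensated sum: {compensated:.10f}")
print("exact result:    100000.0000000000")
```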

  18. Systematic errors in respiratory gating due to intrafraction deformations of the liver

    SciTech Connect

    Siebenthal, Martin von; Szekely, Gabor; Lomax, Antony J.; Cattin, Philippe C.

    2007-09-15

    This article shows the limitations of respiratory gating due to intrafraction deformations of the right liver lobe. The variability of organ shape and motion over tens of minutes was taken into account for this evaluation, which closes the gap between short-term analysis of a few regular cycles, as it is possible with 4DCT, and long-term analysis of interfraction motion. Time resolved MR volumes (4D MR sequences) were reconstructed for 12 volunteers and subsequent non-rigid registration provided estimates of the 3D trajectories of points within the liver over time. The full motion during free breathing and its distribution over the liver were quantified and respiratory gating was simulated to determine the gating accuracy for different gating signals, duty cycles, and different intervals between patient setup and treatment. Gating effectively compensated for the respiratory motion within short sequences (3 min), but deformations, mainly in the anterior inferior part (Couinaud segments IVb and V), led to systematic deviations from the setup position of more than 5 mm in 7 of 12 subjects after 20 min. We conclude that measurements over a few breathing cycles should not be used as a proof of accurate reproducibility of motion, not even within the same fraction, if it is longer than a few minutes. Although the diaphragm shows the largest magnitude of motion, it should not be used to assess the gating accuracy over the entire liver because the reproducibility is typically much more limited in inferior parts. Simple gating signals, such as the trajectory of skin motion, can detect the exhalation phase, but do not allow for an absolute localization of the complete liver over longer periods because the drift of these signals does not necessarily correlate with the internal drift.

  19. n-Alkane isodesmic reaction energy errors in density functional theory are due to electron correlation effects.

    PubMed

    Grimme, Stefan

    2010-10-15

    The isodesmic reaction energies of n-alkanes to ethane, which have so far been known to give systematic errors in standard DFT calculations, are successfully reproduced by SCS-MP2 and dispersion-corrected double-hybrid functionals. The failure of conventional DFT is not due to the lack of long-range exchange interactions but results from an inaccurate account of medium-range electron correlation that is attractive for 1,3-interactions (proto-branching). Highly accurate CCSD(T)/CBS data are provided that are recommended in thermochemical benchmarks.

  20. Dispersion sensitivity of the eight inch advanced ramjet munitions technology projectile due to wind and minor thrust errors

    NASA Astrophysics Data System (ADS)

    Poole, S. R.

    1984-09-01

    Advanced Ramjet Munitions Technology (ARMT) is an ongoing DARPA project to research ramjet munitions. The ARMT eight inch projectile uses ramjet thrust for a boosted trajectory, but operates on a thrust drag balance concept to create a pseudovacuum trajectory during powered flight. The trajectory was analyzed using an IBM-370 computer simulation for three and five degrees of freedom. Work was also done to adapt the Ballistics Research Laboratories six degrees of freedom program to the IBM system. Projectile aerodynamic and mass properties were obtained from the Norden Systems Wind Tunnel Data. Dispersion from the vacuum trajectory due to wind prior to ramjet burnout proved minor. Dispersion due to constant thrust errors under 5% was within a 600 radius at terminal guidance over a range of 33 miles.

  1. ANALYSIS OF DISTRIBUTION FEEDER LOSSES DUE TO ADDITION OF DISTRIBUTED PHOTOVOLTAIC GENERATORS

    SciTech Connect

    Tuffner, Francis K.; Singh, Ruchi

    2011-08-09

    Distributed generators (DG) are small scale power supplying sources owned by customers or utilities and scattered throughout the power system distribution network. Distributed generation can be both renewable and non-renewable. Addition of distributed generation is primarily to increase feeder capacity and to provide peak load reduction. However, this addition comes with several impacts on the distribution feeder. Several studies have shown that addition of DG leads to reduction of feeder loss. However, most of these studies have considered lumped load and distributed load models to analyze the effects on system losses, where the dynamic variation of load due to seasonal changes is ignored. It is very important for utilities to minimize the losses under all scenarios to decrease revenue losses, promote efficient asset utilization, and therefore, increase feeder capacity. This paper will investigate an IEEE 13-node feeder populated with photovoltaic generators on detailed residential houses with water heater, Heating Ventilation and Air conditioning (HVAC) units, lights, and other plug and convenience loads. An analysis of losses for different power system components, such as transformers, underground and overhead lines, and triplex lines, will be performed. The analysis will utilize different seasons and different solar penetration levels (15%, 30%).

  2. 46 CFR Exhibit No. 1 to Subpart Q... - Application for Refund or Waiver of Freight Charges Due to Tariff or Quoting Error

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... to Tariff or Quoting Error No. Exhibit No. 1 to Subpart Q of Part 502 Shipping FEDERAL MARITIME... of Freight Charges Due to Tariff or Quoting Error Federal Maritime Commission Special Docket No... on the date the intended rate would have become effective and ending on the day before the...

  3. Phantom Effects in School Composition Research: Consequences of Failure to Control Biases Due to Measurement Error in Traditional Multilevel Models

    ERIC Educational Resources Information Center

    Televantou, Ioulia; Marsh, Herbert W.; Kyriakides, Leonidas; Nagengast, Benjamin; Fletcher, John; Malmberg, Lars-Erik

    2015-01-01

    The main objective of this study was to quantify the impact of failing to account for measurement error on school compositional effects. Multilevel structural equation models were incorporated to control for measurement error and/or sampling error. Study 1, a large sample of English primary students in Years 1 and 4, revealed a significantly…

  4. Superior bit error rate and jitter due to improved switching field distribution in exchange spring magnetic recording media.

    PubMed

    Suess, D; Fuger, M; Abert, C; Bruckner, F; Vogler, C

    2016-06-01

    We report two effects that lead to a significant reduction of the switching field distribution in exchange spring media. The first effect relies on a subtle mechanism of the interplay between exchange coupling between soft and hard layers and anisotropy that allows significant reduction of the switching field distribution in exchange spring media. This effect reduces the switching field distribution by about 30% compared to single-phase media. A second effect is that due to the improved thermal stability of exchange spring media over single-phase media, the jitter due to thermal fluctuation is significantly smaller for exchange spring media than for single-phase media. The influence of this overall improved switching field distribution on the transition jitter in granular recording and the bit error rate in bit-patterned magnetic recording is discussed. The transition jitter in granular recording for a distribution of Khard values of 3% in the hard layer, taking into account thermal fluctuations during recording, is estimated to be a = 0.78 nm, which is similar to the best reported calculated jitter in optimized heat-assisted recording media.

  5. Superior bit error rate and jitter due to improved switching field distribution in exchange spring magnetic recording media

    PubMed Central

    Suess, D.; Fuger, M.; Abert, C.; Bruckner, F.; Vogler, C.

    2016-01-01

    We report two effects that lead to a significant reduction of the switching field distribution in exchange spring media. The first effect relies on a subtle mechanism of the interplay between exchange coupling between soft and hard layers and anisotropy that allows significant reduction of the switching field distribution in exchange spring media. This effect reduces the switching field distribution by about 30% compared to single-phase media. A second effect is that due to the improved thermal stability of exchange spring media over single-phase media, the jitter due to thermal fluctuation is significantly smaller for exchange spring media than for single-phase media. The influence of this overall improved switching field distribution on the transition jitter in granular recording and the bit error rate in bit-patterned magnetic recording is discussed. The transition jitter in granular recording for a distribution of Khard values of 3% in the hard layer, taking into account thermal fluctuations during recording, is estimated to be a = 0.78 nm, which is similar to the best reported calculated jitter in optimized heat-assisted recording media. PMID:27245287

  6. Error in Airspeed Measurement Due to the Static-Pressure Field Ahead of an Airplane at Transonic Speeds

    NASA Technical Reports Server (NTRS)

    O'Bryan, Thomas C; Danforth, Edward C B; Johnston, J Ford

    1955-01-01

    The magnitude and variation of the static-pressure error for various distances ahead of sharp-nose bodies and open-nose air inlets and for a distance of 1 chord ahead of the wing tip of a swept wing are defined by a combination of experiment and theory. The mechanism of the error is discussed in some detail to show the contributing factors that make up the error. The information presented provides a useful means for choosing a proper location for measurement of static pressure for most purposes.

  7. Errors in the determination of the solar constant by the Langley method due to the presence of volcanic aerosol

    SciTech Connect

    Schotland, R.M.; Hartman, J.E.

    1989-02-01

    The accuracy in the determination of the solar constant by means of the Langley method is strongly influenced by the spatial inhomogeneities of the atmospheric aerosol. Volcanos frequently inject aerosol into the upper troposphere and lower stratosphere. This paper evaluates the solar constant error that would occur if observations had been taken throughout the plume of El Chichon observed by NASA aircraft in the fall of 1982 and the spring of 1983. A lidar method is suggested to minimize this error. 15 refs.
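
    The Langley extrapolation itself is straightforward to illustrate: plotting the logarithm of the measured signal against relative airmass and extrapolating to zero airmass recovers the exo-atmospheric signal, and any drift in aerosol optical depth during the observation series (for example a volcanic plume moving overhead) biases the intercept. A minimal sketch in Python with invented numbers (not the authors' analysis) follows:

        import numpy as np

        # Langley plot: ln V = ln V0 - tau * m; the intercept at airmass m = 0 gives V0.
        m = np.linspace(2.0, 6.0, 20)              # relative airmass during the series
        V0_true, tau0 = 1.00, 0.20                 # invented exo-atmospheric signal and optical depth

        # Homogeneous atmosphere: constant tau gives an unbiased intercept
        slope_h, icpt_h = np.polyfit(m, np.log(V0_true * np.exp(-tau0 * m)), 1)

        # Inhomogeneous aerosol: tau drifts during the series (e.g. a plume moving overhead)
        tau_drift = tau0 + 0.03 * (m - m.min())    # assumed linear drift, illustrative only
        slope_i, icpt_i = np.polyfit(m, np.log(V0_true * np.exp(-tau_drift * m)), 1)

        print("V0 retrieved, homogeneous tau : %.3f" % np.exp(icpt_h))
        print("V0 retrieved, drifting tau    : %.3f" % np.exp(icpt_i))   # biased estimate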

  8. Additives

    NASA Technical Reports Server (NTRS)

    Smalheer, C. V.

    1973-01-01

    The chemistry of lubricant additives is discussed to show what the additives are chemically and what functions they perform in the lubrication of various kinds of equipment. Current theories regarding the mode of action of lubricant additives are presented. The additive groups discussed include the following: (1) detergents and dispersants, (2) corrosion inhibitors, (3) antioxidants, (4) viscosity index improvers, (5) pour point depressants, and (6) antifouling agents.

  9. Systematic Errors that are Due to the Monochromatic-Equivalent Radiative Transfer Approximation in Thermal Emission Problems.

    PubMed

    Turner, D S

    2000-11-01

    An underlying assumption of data assimilation models is that the radiative transfer model used by them can simulate observed radiances with zero bias and small error. For practical reasons a fast parameterized radiative transfer model is used instead of a highly accurate line-by-line model. These fast models usually replace the spectral integration of the product of the transmittance and the Planck function with a monochromatic equivalent, namely, the product of a spectrally averaged transmittance and a spectrally averaged Planck function. The error of using this equivalent form is commonly assumed to be negligible. However, this error is not necessarily negligible and introduces a systematic height-dependent bias to the assimilation scheme. Although the bias could be corrected by a separate bias correction scheme, it is more effective to correct its source, the fast radiative transfer model. I examine the magnitude of error when the monochromatic-equivalent approach is used and demonstrate how a fast parameterized radiative model with Planck-weighted mean transmittances can effectively reduce if not eliminate these errors at source. I focus on channel 12 of the High-Resolution Infrared Radiation Sounder onboard the National Oceanic and Atmospheric Administration (NOAA)-14 satellite that, among all the channels of this instrument, displays the largest error.
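
    The error discussed here is the difference between the band integral of the product transmittance times Planck function and the product of the two band means, i.e. the spectral covariance of the two quantities within the channel. A minimal numerical sketch (Python, with an invented channel and transmittance spectrum, not the HIRS channel-12 response) follows:

        import numpy as np

        h, c, k = 6.62607015e-34, 2.99792458e8, 1.380649e-23   # SI constants

        def planck(wn_cm, T):
            """Planck radiance at wavenumber wn_cm (cm^-1) and temperature T (K)."""
            nu = wn_cm * 100.0 * c
            return 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))

        # Invented channel and transmittance structure (not a real instrument response)
        wn = np.linspace(1480.0, 1540.0, 200)                   # cm^-1
        tau = 0.5 + 0.4 * np.sin(2.0 * np.pi * (wn - wn[0]) / 6.0)
        B = planck(wn, 250.0)

        R_true = (tau * B).mean()            # band mean of the product (proper integration)
        R_mono = tau.mean() * B.mean()       # monochromatic-equivalent approximation
        # The difference equals the spectral covariance of tau and B within the band.
        print("relative error of the monochromatic equivalent: %.2e" % ((R_mono - R_true) / R_true))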

  10. Quantification of LiDAR measurement uncertainty through propagation of errors due to sensor sub-systems and terrain morphology

    NASA Astrophysics Data System (ADS)

    Goulden, T.; Hopkinson, C.

    2013-12-01

    The quantification of LiDAR sensor measurement uncertainty is important for evaluating the quality of derived DEM products, compiling risk assessment of management decisions based from LiDAR information, and enhancing LiDAR mission planning capabilities. Current quality assurance estimates of LiDAR measurement uncertainty are limited to post-survey empirical assessments or vendor estimates from commercial literature. Empirical evidence can provide valuable information for the performance of the sensor in validated areas; however, it cannot characterize the spatial distribution of measurement uncertainty throughout the extensive coverage of typical LiDAR surveys. Vendor advertised error estimates are often restricted to strict and optimal survey conditions, resulting in idealized values. Numerical modeling of individual pulse uncertainty provides an alternative method for estimating LiDAR measurement uncertainty. LiDAR measurement uncertainty is theoretically assumed to fall into three distinct categories, 1) sensor sub-system errors, 2) terrain influences, and 3) vegetative influences. This research details the procedures for numerical modeling of measurement uncertainty from the sensor sub-system (GPS, IMU, laser scanner, laser ranger) and terrain influences. Results show that errors tend to increase as the laser scan angle, altitude or laser beam incidence angle increase. An experimental survey over a flat and paved runway site, performed with an Optech ALTM 3100 sensor, showed an increase in modeled vertical errors of 5 cm, at a nadir scan orientation, to 8 cm at scan edges; for an aircraft altitude of 1200 m and half scan angle of 15°. In a survey with the same sensor, at a highly sloped glacial basin site absent of vegetation, modeled vertical errors reached over 2 m. Validation of error models within the glacial environment, over three separate flight lines, respectively showed 100%, 85%, and 75% of elevation residuals fell below error predictions. Future
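
    A much simplified version of the sensor sub-system propagation can be written down for flat terrain, where the vertical coordinate is z = R cos(theta) and the ranging and scan-angle uncertainties propagate independently. The sketch below uses assumed accuracy figures, not the Optech ALTM 3100 specification, and reproduces only the qualitative trend of growing error with scan angle:

        import numpy as np

        # Simplified propagation of ranging and scan-angle uncertainty into the vertical
        # coordinate z = R * cos(theta) over flat terrain (assumed accuracy figures).
        def vertical_sigma(altitude_m, scan_angle_deg, sigma_range_m=0.03, sigma_angle_deg=0.005):
            theta = np.radians(scan_angle_deg)
            R = altitude_m / np.cos(theta)                               # slant range to the ground
            dz_range = np.cos(theta) * sigma_range_m                     # |dz/dR| * sigma_R
            dz_angle = R * np.sin(theta) * np.radians(sigma_angle_deg)   # |dz/dtheta| * sigma_theta
            return np.hypot(dz_range, dz_angle)

        for angle in (0.0, 7.5, 15.0):                                   # nadir to scan edge
            print("altitude 1200 m, scan angle %4.1f deg: sigma_z = %.3f m"
                  % (angle, vertical_sigma(1200.0, angle)))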

  11. Anomalous yield reduction in direct-drive DT implosions due to 3He addition

    SciTech Connect

    Herrmann, Hans W; Langenbrunner, James R; Mack, Joseph M; Cooley, James H; Wilson, Douglas C; Evans, Scott C; Sedillo, Tom J; Kyrala, George A; Caldwell, Stephen E; Young, Carlton A; Nobile, Arthur; Wermer, Joseph R; Paglieri, Stephen N; Mcevoy, Aaron M; Kim, Yong Ho; Batha, Steven H; Horsfield, Colin J; Drew, Dave; Garbett, Warren; Rubery, Michael; Glebov, Vladimir Yu; Roberts, Samuel; Frenje, Johan A

    2008-01-01

Glass capsules were imploded in direct drive on the OMEGA laser [T. R. Boehly et al., Opt. Commun. 133, 495 (1997)] to look for anomalous degradation in deuterium/tritium (DT) yield (i.e., beyond what is predicted) and changes in reaction history with {sup 3}He addition. Such anomalies have previously been reported for D/{sup 3}He plasmas, but had not yet been investigated for DT/{sup 3}He. Anomalies such as these provide fertile ground for furthering our physics understanding of ICF implosions and capsule performance. A relatively short laser pulse (600 ps) was used to provide some degree of temporal separation between shock and compression yield components for analysis. Anomalous degradation in the compression component of yield was observed, consistent with the 'factor of two' degradation previously reported by MIT at a 50% {sup 3}He atom fraction in D{sub 2} using plastic capsules [Rygg et al., Phys. Plasmas 13, 052702 (2006)]. However, clean calculations (i.e., no fuel-shell mixing) predict the shock component of yield quite well, contrary to the result reported by MIT, but consistent with LANL results in D{sub 2}/{sup 3}He [Wilson et al., J. Phys.: Conf. Series 112, 022015 (2008)]. X-ray imaging suggests less-than-predicted compression of capsules containing {sup 3}He. Leading candidate explanations are a poorly understood equation of state (EOS) for gas mixtures, and unanticipated particle pressure variation with increasing {sup 3}He addition.

  12. Mechanism of wiggling enhancement due to HBr gas addition during amorphous carbon etching

    NASA Astrophysics Data System (ADS)

    Kofuji, Naoyuki; Ishimura, Hiroaki; Kobayashi, Hitoshi; Une, Satoshi

    2015-06-01

    The effect of gas chemistry during etching of an amorphous carbon layer (ACL) on wiggling has been investigated, focusing especially on the changes in residual stress. Although the HBr gas addition reduces critical dimension loss, it enhances the surface stress and therefore increases wiggling. Attenuated total reflectance Fourier transform infrared spectroscopy revealed that the increase in surface stress was caused by hydrogenation of the ACL surface with hydrogen radicals. Three-dimensional (3D) nonlinear finite element method analysis confirmed that the increase in surface stress is large enough to cause the wiggling. These results also suggest that etching with hydrogen compound gases using an ACL mask has high potential to cause the wiggling.

  13. EFFECT ON 105KW NORTH WALL DUE TO ADDITION OF FILTRATION SYSTEM

    SciTech Connect

    CHO CS

    2010-03-08

CHPRC D&D Projects is adding three filtration systems on two 1-ft concrete pads adjacent to the north side of the existing KW Basin building. This analysis is prepared to provide a qualitative assessment based on the review of design information available for the 105KW basin substructure. In the proposed heating, ventilation and air conditioning (HVAC) filtration pad designs, a 2 ft gap will be maintained between the pads and the north end of the existing 105KW Basin building. Filtration Skids No. 2 and No. 3 share one pad. It is conservative to evaluate the No. 2 and No. 3 skid pad for the wall assessment. Figure 1 shows the plan layout of the 105KW basin site and the location of the pads for the filtration system or HVAC skids. Figure 2 shows the cross-section elevation view of the pad. The concrete pad Drawing H-1-91482 directs the replacement of the existing 8-inch concrete pad with two new 1-ft thick pads. The existing 8-inch pad is separated from the 105KW basin superstructure by an expansion joint of only half an inch. The concrete pad Drawing H-1-91482 shows that the gap between the new proposed pads and the north wall and any overflow pits and sumps is 2 ft. The following analysis demonstrates that the newly added filtration units and their pads do not exceed the structural capacity of the existing wall. The calculation shows that the total bending moment on the north wall due to the newly added filtration units and pads, including seismic load, is 82.636 ft-kip/ft, which is within the wall capacity of 139.0 ft-kip/ft.

  14. Accounting for error due to misclassification of exposures in case-control studies of gene-environment interaction.

    PubMed

    Zhang, Li; Mukherjee, Bhramar; Ghosh, Malay; Gruber, Stephen; Moreno, Victor

    2008-07-10

    We consider analysis of data from an unmatched case-control study design with a binary genetic factor and a binary environmental exposure when both genetic and environmental exposures could be potentially misclassified. We devise an estimation strategy that corrects for misclassification errors and also exploits the gene-environment independence assumption. The proposed corrected point estimates and confidence intervals for misclassified data reduce back to standard analytical forms as the misclassification error rates go to zero. We illustrate the methods by simulating unmatched case-control data sets under varying levels of disease-exposure association and with different degrees of misclassification. A real data set on a case-control study of colorectal cancer where a validation subsample is available for assessing genotyping error is used to illustrate our methods.

  15. Theory of modulation transfer function artifacts due to mid-spatial-frequency errors and its application to optical tolerancing.

    PubMed

    Tamkin, John M; Milster, Tom D; Dallas, William

    2010-09-01

    Aspheric and free-form surfaces are powerful surface forms that allow designers to achieve better performance with fewer lenses and smaller packages. Unlike spheres, these surfaces are fabricated with processes that leave a signature, or "structure," that is primarily in the mid-spatial-frequency region. These structured surface errors create ripples in the modulation transfer function (MTF) profile. Using Fourier techniques with generalized functions, the drop in MTF is derived and shown to exhibit a nonlinear relationship with the peak-to-valley height of the structured surface error.

  16. Harmonic Resonance in Power Transmission Systems due to the Addition of Shunt Capacitors

    NASA Astrophysics Data System (ADS)

    Patil, Hardik U.

Shunt capacitors are often added in transmission networks at suitable locations to improve the voltage profile. In this thesis, the transmission system in Arizona is considered as a test bed. Many shunt capacitors already exist in the Arizona transmission system and more are planned to be added. Addition of these shunt capacitors may create resonance conditions in response to harmonic voltages and currents. Such resonance, if it occurs, may create problematic issues in the system. It is the main objective of this thesis to identify potential problematic effects that could occur after placing new shunt capacitors at selected buses in the Arizona network. Part of the objective is to create a systematic plan for avoidance of resonance issues. For this study, a method of capacitance scan is proposed. The bus admittance matrix is used as a model of the networked transmission system. The calculations on the admittance matrix were done using Matlab. The test bed is the actual transmission system in Arizona; however, for proprietary reasons, bus names are masked in the thesis copy intended for the public domain. The admittance matrix was obtained from data using the PowerWorld Simulator after equivalencing the 2016 summer peak load (planning case). The full Western Electricity Coordinating Council (WECC) system data were used. The equivalencing procedure retains only the Arizona portion of the WECC. The capacitor scan results for single capacitor placement and multiple capacitor placement cases are presented. Problematic cases are identified in the form of 'forbidden responses'. The harmonic voltage impact of known sources of harmonics, mainly large scale HVDC sources, is also presented. Specific key results indicated for the study include: (1) The forbidden zones obtained as per the IEEE 519 standard indicate bus 10 to be the most problematic bus. (2) The forbidden zones also indicate that switching values for the switched shunt capacitor (if used) at bus 3 should be
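
    The simplest form of the screening idea, reduced to a single bus, uses the standard approximation that the parallel resonance between the system short-circuit reactance and a shunt capacitor falls near harmonic order h_r = sqrt(S_sc / Q_c). The full thesis method scans the networked bus admittance matrix; the sketch below (Python, assumed numbers) only illustrates the single-bus check:

        import numpy as np

        # Single-bus screening: parallel resonance between the system short-circuit
        # reactance and a shunt capacitor falls near h_r = sqrt(S_sc / Q_c).
        S_sc = 1500.0                                        # assumed short-circuit MVA at the bus
        characteristic = np.array([5.0, 7.0, 11.0, 13.0])    # typical low-order harmonics of concern

        for Q_c in (25.0, 50.0, 100.0, 150.0):               # candidate capacitor sizes (Mvar)
            h_r = np.sqrt(S_sc / Q_c)
            warn = "  <-- near a characteristic harmonic" if np.abs(characteristic - h_r).min() < 0.5 else ""
            print("Q_c = %6.1f Mvar -> parallel resonance near h = %5.2f%s" % (Q_c, h_r, warn))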

  17. Error in Radar-Derived Soil Moisture due to Roughness Parameterization: An Analysis Based on Synthetical Surface Profiles.

    PubMed

    Lievens, Hans; Vernieuwe, Hilde; Alvarez-Mozos, Jesús; De Baets, Bernard; Verhoest, Niko E C

    2009-01-01

    In the past decades, many studies on soil moisture retrieval from SAR demonstrated a poor correlation between the top layer soil moisture content and observed backscatter coefficients, which mainly has been attributed to difficulties involved in the parameterization of surface roughness. The present paper describes a theoretical study, performed on synthetical surface profiles, which investigates how errors on roughness parameters are introduced by standard measurement techniques, and how they will propagate through the commonly used Integral Equation Model (IEM) into a corresponding soil moisture retrieval error for some of the currently most used SAR configurations. Key aspects influencing the error on the roughness parameterization and consequently on soil moisture retrieval are: the length of the surface profile, the number of profile measurements, the horizontal and vertical accuracy of profile measurements and the removal of trends along profiles. Moreover, it is found that soil moisture retrieval with C-band configuration generally is less sensitive to inaccuracies in roughness parameterization than retrieval with L-band configuration.

  18. Optics for five-dimensional measurement for correction of vertical displacement error due to attitude of floating body in superconducting magnetic levitation system

    SciTech Connect

    Shiota, Fuyuhiko; Morokuma, Tadashi

    2006-09-15

    An improved optical system for five-dimensional measurement has been developed for the correction of vertical displacement error due to the attitude change of a superconducting floating body that shows five degrees of freedom besides a vertical displacement of 10 mm. The available solid angle for the optical measurement is extremely limited because of the cryogenic laser interferometer sharing the optical window of a vacuum chamber in addition to the basic structure of the cryogenic vessel for liquid helium. The aim of the design was to develop a more practical as well as better optical system compared with the prototype system. Various artifices were built into this optical system and the result shows a satisfactory performance and easy operation overcoming the extremely severe spatial difficulty in the levitation system. Although the system described here is specifically designed for our magnetic levitation system, the concept and each artifice will be applicable to the optical measurement system for an object in a high-vacuum chamber and/or cryogenic vessel where the available solid angle for an optical path is extremely limited.

  19. 77 FR 14041 - Major Portion Prices and Due Date for Additional Royalty Payments on Indian Gas Production in...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-08

    ... Office of Natural Resources Revenue Major Portion Prices and Due Date for Additional Royalty Payments on... leases, published August 10, 1999, require the Office of Natural Resources Revenue (ONRR) to determine... . Mailing address: Office of Natural Resources Revenue, Western Audit and Compliance Management, Team B,...

  20. Estimation of Cyclic Error Due to Scattering in the Internal OPD Metrology of the Space Interferometry Mission

    NASA Technical Reports Server (NTRS)

    Tang, Hong; Zhao, Feng

    2005-01-01

    A common-path laser heterodyne interferometer capable of measuring the internal optical path difference (OPD) with accuracy of the order of 10 pm was demonstrated at JPL. To achieve this accuracy, the relative power received by the detector that is contributed by the scattering of light at the optical surfaces should be less than -97 dB. A method has been developed to estimate the cyclic error caused by the scattering of the optical surfaces. The result of the analysis is presented.

  1. Managing Uncertainty Due to a Fundamental Error Source Arising from Scatterer Distribution Complexity in Radar Remote Sensing of Precipitation

    NASA Technical Reports Server (NTRS)

    Smith, Eric A.; Kuo, Kwo-Sen; Meneghini, Robert; Mugnai, Alberto

    2007-01-01

The assumption that cloud and rain drops are spatially distributed according to a Poisson distribution within a scattering volume probed by a radar being used to estimate precipitation has represented bedrock theory in establishing 'rules of the game' for pulse averaging--the process needed to beat down noise to an acceptable level in the measurement of radar reflectivity factor. Relatively recent observations of 'realistic' spatial distributions of hydrometeor scatterers in a cloudy atmosphere motivate a renewed examination of the consequences of using too simplified an assumption underlying volume scattering--particularly in regard to the standard pulse averaging rule. Our investigation addresses two extremes, simple to complex, in the degree of complexity allowed for the underlying scatterer distribution. It is demonstrated that as the spatial distribution ranges from Poisson (a narrow distribution) to multi-fractal (much broader distribution), uncertainty in a measurement increases if the rule for pulse averaging goes unchanged from its Poisson distribution reference count. [A bounded cascade is used for the multi-fractal distribution, a regularly observed distribution vis-a-vis cloud liquid water content.] The resultant measurement uncertainty leads to a fundamental source of error in the estimation of rain rate from radar measurements, one that has been disregarded since the early 1950s when radar sets first began to be used for rainfall measuring. It is shown how this source of error can be 'managed'--under the assumption that a number of data analysis experiments would be carried out, experiments involving pulse-by-pulse measurements obtained from a radar set modified to output individual pulses of reflectivity factor. For practical applications, a new parameter called normalized k-sample intensity invariance is developed to enable defining the required pulse average count according to a preferred degree of uncertainty.
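
    The effect can be illustrated with a small Monte Carlo experiment: draw the per-pulse scatterer count from a narrow (Poisson) and from an over-dispersed distribution with the same mean, average a fixed number of pulses, and compare the residual relative uncertainty. A negative binomial is used below as a simple stand-in for the broader bounded-cascade/multifractal case discussed in the record, and Rayleigh fading of the coherent echo is ignored; this is an illustrative sketch, not the authors' analysis:

        import numpy as np

        rng = np.random.default_rng(0)
        mean_n, pulses, trials = 100.0, 64, 20000        # assumed mean count, pulses averaged, trials

        def relative_sd(counts):
            """Relative standard deviation of the K-pulse average."""
            avg = counts.reshape(trials, pulses).mean(axis=1)
            return avg.std() / avg.mean()

        # Narrow (Poisson) scatterer-count distribution
        narrow = rng.poisson(mean_n, size=trials * pulses).astype(float)

        # Over-dispersed stand-in for the broader (bounded-cascade) distribution
        r = 2.0                                          # small r -> strong over-dispersion
        broad = rng.negative_binomial(r, r / (r + mean_n), size=trials * pulses).astype(float)

        print("relative uncertainty after %d pulses, Poisson counts      : %.4f" % (pulses, relative_sd(narrow)))
        print("relative uncertainty after %d pulses, over-dispersed case : %.4f" % (pulses, relative_sd(broad)))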

  2. Performance analysis for time-frequency MUSIC algorithm in presence of both additive noise and array calibration errors

    NASA Astrophysics Data System (ADS)

    Khodja, Mohamed; Belouchrani, Adel; Abed-Meraim, Karim

    2012-12-01

This article deals with the application of Spatial Time-Frequency Distribution (STFD) to the direction finding problem using the Multiple Signal Classification (MUSIC) algorithm. A comparative performance analysis is performed for the method under consideration with respect to that using the data covariance matrix when the received array signals are subject to calibration errors in a non-stationary environment. A unified analytical expression of the Direction Of Arrival (DOA) estimation error is derived for both methods. Numerical results show the effect of the parameters intervening in the derived expression on the algorithm performance. It is particularly observed that for low Signal to Noise Ratio (SNR) and high Signal to sensor Perturbation Ratio (SPR) the STFD method gives better performance, while for high SNR and for the same SPR both methods give similar performance.
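
    For readers unfamiliar with the estimator, the sketch below shows conventional covariance-based MUSIC for a uniform linear array; the STFD variant analysed in the article replaces the sample covariance with a spatial time-frequency distribution matrix, but the subspace projection step is the same. Calibration errors could be emulated by perturbing the steering vectors. All parameters are assumed:

        import numpy as np

        rng = np.random.default_rng(1)
        M, d, snapshots = 8, 0.5, 400                 # sensors, spacing (wavelengths), snapshots
        true_doas = np.radians([-20.0, 25.0])         # assumed source directions

        def steering(theta):
            return np.exp(2j * np.pi * d * np.arange(M)[:, None] * np.sin(theta))

        A = steering(true_doas)                                            # M x 2 steering matrix
        S = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
        N = 0.1 * (rng.standard_normal((M, snapshots)) + 1j * rng.standard_normal((M, snapshots)))
        X = A @ S + N                                                      # array snapshots
        R = X @ X.conj().T / snapshots                                     # sample covariance

        w, V = np.linalg.eigh(R)                                           # ascending eigenvalues
        En = V[:, : M - 2]                                                 # noise subspace (2 sources)

        grid = np.radians(np.linspace(-90.0, 90.0, 721))
        P = 1.0 / np.sum(np.abs(En.conj().T @ steering(grid)) ** 2, axis=0)   # MUSIC pseudospectrum

        peaks = np.where((P[1:-1] > P[:-2]) & (P[1:-1] > P[2:]))[0] + 1       # local maxima
        best = peaks[np.argsort(P[peaks])[-2:]]
        print("estimated DOAs (deg):", np.sort(np.degrees(grid[best])))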

  3. Dosimetric impact of geometric errors due to respiratory motion prediction on dynamic multileaf collimator-based four-dimensional radiation delivery

    SciTech Connect

    Vedam, S.; Docef, A.; Fix, M.; Murphy, M.; Keall, P.

    2005-06-15

    The synchronization of dynamic multileaf collimator (DMLC) response with respiratory motion is critical to ensure the accuracy of DMLC-based four dimensional (4D) radiation delivery. In practice, however, a finite time delay (response time) between the acquisition of tumor position and multileaf collimator response necessitates predictive models of respiratory tumor motion to synchronize radiation delivery. Predicting a complex process such as respiratory motion introduces geometric errors, which have been reported in several publications. However, the dosimetric effect of such errors on 4D radiation delivery has not yet been investigated. Thus, our aim in this work was to quantify the dosimetric effects of geometric error due to prediction under several different conditions. Conformal and intensity modulated radiation therapy (IMRT) plans for a lung patient were generated for anterior-posterior/posterior-anterior (AP/PA) beam arrangements at 6 and 18 MV energies to provide planned dose distributions. Respiratory motion data was obtained from 60 diaphragm-motion fluoroscopy recordings from five patients. A linear adaptive filter was employed to predict the tumor position. The geometric error of prediction was defined as the absolute difference between predicted and actual positions at each diaphragm position. Distributions of geometric error of prediction were obtained for all of the respiratory motion data. Planned dose distributions were then convolved with distributions for the geometric error of prediction to obtain convolved dose distributions. The dosimetric effect of such geometric errors was determined as a function of several variables: response time (0-0.6 s), beam energy (6/18 MV), treatment delivery (3D/4D), treatment type (conformal/IMRT), beam direction (AP/PA), and breathing training type (free breathing/audio instruction/visual feedback). Dose difference and distance-to-agreement analysis was employed to quantify results. Based on our data, the
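
    The central operation described here, convolving a planned dose distribution with the distribution of geometric prediction errors, can be illustrated in one dimension. The profile, error widths and metric below are invented and serve only to show how target coverage degrades as the prediction error grows:

        import numpy as np

        x = np.arange(-60.0, 60.0, 0.5)                  # position along the motion axis (mm)
        # Idealized planned profile: ~100% dose over a 40 mm target with smooth penumbras
        planned = 100.0 / (1.0 + np.exp(-(x + 20.0) / 2.0)) / (1.0 + np.exp((x - 20.0) / 2.0))

        def blurred(sigma_mm):
            """Planned profile convolved with a zero-mean Gaussian prediction-error PDF."""
            xx = np.arange(-15.0, 15.5, 0.5)             # kernel support (mm), odd length
            kernel = np.exp(-0.5 * (xx / sigma_mm) ** 2)
            return np.convolve(planned, kernel / kernel.sum(), mode="same")

        target = np.abs(x) <= 18.0                       # points well inside the target
        for sigma in (1.0, 3.0, 5.0):                    # widths of the prediction-error PDF (mm)
            print("sigma = %.0f mm: minimum target dose = %.1f%%" % (sigma, blurred(sigma)[target].min()))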

  4. Refractive error blindness.

    PubMed Central

    Dandona, R.; Dandona, L.

    2001-01-01

    Recent data suggest that a large number of people are blind in different parts of the world due to high refractive error because they are not using appropriate refractive correction. Refractive error as a cause of blindness has been recognized only recently with the increasing use of presenting visual acuity for defining blindness. In addition to blindness due to naturally occurring high refractive error, inadequate refractive correction of aphakia after cataract surgery is also a significant cause of blindness in developing countries. Blindness due to refractive error in any population suggests that eye care services in general in that population are inadequate since treatment of refractive error is perhaps the simplest and most effective form of eye care. Strategies such as vision screening programmes need to be implemented on a large scale to detect individuals suffering from refractive error blindness. Sufficient numbers of personnel to perform reasonable quality refraction need to be trained in developing countries. Also adequate infrastructure has to be developed in underserved areas of the world to facilitate the logistics of providing affordable reasonable-quality spectacles to individuals suffering from refractive error blindness. Long-term success in reducing refractive error blindness worldwide will require attention to these issues within the context of comprehensive approaches to reduce all causes of avoidable blindness. PMID:11285669

  5. Meta-analysis of gene–environment-wide association scans accounting for education level identifies additional loci for refractive error

    PubMed Central

    Fan, Qiao; Verhoeven, Virginie J. M.; Wojciechowski, Robert; Barathi, Veluchamy A.; Hysi, Pirro G.; Guggenheim, Jeremy A.; Höhn, René; Vitart, Veronique; Khawaja, Anthony P.; Yamashiro, Kenji; Hosseini, S Mohsen; Lehtimäki, Terho; Lu, Yi; Haller, Toomas; Xie, Jing; Delcourt, Cécile; Pirastu, Mario; Wedenoja, Juho; Gharahkhani, Puya; Venturini, Cristina; Miyake, Masahiro; Hewitt, Alex W.; Guo, Xiaobo; Mazur, Johanna; Huffman, Jenifer E.; Williams, Katie M.; Polasek, Ozren; Campbell, Harry; Rudan, Igor; Vatavuk, Zoran; Wilson, James F.; Joshi, Peter K.; McMahon, George; St Pourcain, Beate; Evans, David M.; Simpson, Claire L.; Schwantes-An, Tae-Hwi; Igo, Robert P.; Mirshahi, Alireza; Cougnard-Gregoire, Audrey; Bellenguez, Céline; Blettner, Maria; Raitakari, Olli; Kähönen, Mika; Seppala, Ilkka; Zeller, Tanja; Meitinger, Thomas; Ried, Janina S.; Gieger, Christian; Portas, Laura; van Leeuwen, Elisabeth M.; Amin, Najaf; Uitterlinden, André G.; Rivadeneira, Fernando; Hofman, Albert; Vingerling, Johannes R.; Wang, Ya Xing; Wang, Xu; Tai-Hui Boh, Eileen; Ikram, M. Kamran; Sabanayagam, Charumathi; Gupta, Preeti; Tan, Vincent; Zhou, Lei; Ho, Candice E. H.; Lim, Wan'e; Beuerman, Roger W.; Siantar, Rosalynn; Tai, E-Shyong; Vithana, Eranga; Mihailov, Evelin; Khor, Chiea-Chuen; Hayward, Caroline; Luben, Robert N.; Foster, Paul J.; Klein, Barbara E. K.; Klein, Ronald; Wong, Hoi-Suen; Mitchell, Paul; Metspalu, Andres; Aung, Tin; Young, Terri L.; He, Mingguang; Pärssinen, Olavi; van Duijn, Cornelia M.; Jin Wang, Jie; Williams, Cathy; Jonas, Jost B.; Teo, Yik-Ying; Mackey, David A.; Oexle, Konrad; Yoshimura, Nagahisa; Paterson, Andrew D.; Pfeiffer, Norbert; Wong, Tien-Yin; Baird, Paul N.; Stambolian, Dwight; Wilson, Joan E. Bailey; Cheng, Ching-Yu; Hammond, Christopher J.; Klaver, Caroline C. W.; Saw, Seang-Mei; Rahi, Jugnoo S.; Korobelnik, Jean-François; Kemp, John P.; Timpson, Nicholas J.; Smith, George Davey; Craig, Jamie E.; Burdon, Kathryn P.; Fogarty, Rhys D.; Iyengar, Sudha K.; Chew, Emily; Janmahasatian, Sarayut; Martin, Nicholas G.; MacGregor, Stuart; Xu, Liang; Schache, Maria; Nangia, Vinay; Panda-Jonas, Songhomitra; Wright, Alan F.; Fondran, Jeremy R.; Lass, Jonathan H.; Feng, Sheng; Zhao, Jing Hua; Khaw, Kay-Tee; Wareham, Nick J.; Rantanen, Taina; Kaprio, Jaakko; Pang, Chi Pui; Chen, Li Jia; Tam, Pancy O.; Jhanji, Vishal; Young, Alvin L.; Döring, Angela; Raffel, Leslie J.; Cotch, Mary-Frances; Li, Xiaohui; Yip, Shea Ping; Yap, Maurice K.H.; Biino, Ginevra; Vaccargiu, Simona; Fossarello, Maurizio; Fleck, Brian; Yazar, Seyhan; Tideman, Jan Willem L.; Tedja, Milly; Deangelis, Margaret M.; Morrison, Margaux; Farrer, Lindsay; Zhou, Xiangtian; Chen, Wei; Mizuki, Nobuhisa; Meguro, Akira; Mäkelä, Kari Matti

    2016-01-01

    Myopia is the most common human eye disorder and it results from complex genetic and environmental causes. The rapidly increasing prevalence of myopia poses a major public health challenge. Here, the CREAM consortium performs a joint meta-analysis to test single-nucleotide polymorphism (SNP) main effects and SNP × education interaction effects on refractive error in 40,036 adults from 25 studies of European ancestry and 10,315 adults from 9 studies of Asian ancestry. In European ancestry individuals, we identify six novel loci (FAM150B-ACP1, LINC00340, FBN1, DIS3L-MAP2K1, ARID2-SNAT1 and SLC14A2) associated with refractive error. In Asian populations, three genome-wide significant loci AREG, GABRR1 and PDE10A also exhibit strong interactions with education (P<8.5 × 10−5), whereas the interactions are less evident in Europeans. The discovery of these loci represents an important advance in understanding how gene and environment interactions contribute to the heterogeneity of myopia. PMID:27020472

  6. Meta-analysis of gene-environment-wide association scans accounting for education level identifies additional loci for refractive error.

    PubMed

    Fan, Qiao; Verhoeven, Virginie J M; Wojciechowski, Robert; Barathi, Veluchamy A; Hysi, Pirro G; Guggenheim, Jeremy A; Höhn, René; Vitart, Veronique; Khawaja, Anthony P; Yamashiro, Kenji; Hosseini, S Mohsen; Lehtimäki, Terho; Lu, Yi; Haller, Toomas; Xie, Jing; Delcourt, Cécile; Pirastu, Mario; Wedenoja, Juho; Gharahkhani, Puya; Venturini, Cristina; Miyake, Masahiro; Hewitt, Alex W; Guo, Xiaobo; Mazur, Johanna; Huffman, Jenifer E; Williams, Katie M; Polasek, Ozren; Campbell, Harry; Rudan, Igor; Vatavuk, Zoran; Wilson, James F; Joshi, Peter K; McMahon, George; St Pourcain, Beate; Evans, David M; Simpson, Claire L; Schwantes-An, Tae-Hwi; Igo, Robert P; Mirshahi, Alireza; Cougnard-Gregoire, Audrey; Bellenguez, Céline; Blettner, Maria; Raitakari, Olli; Kähönen, Mika; Seppala, Ilkka; Zeller, Tanja; Meitinger, Thomas; Ried, Janina S; Gieger, Christian; Portas, Laura; van Leeuwen, Elisabeth M; Amin, Najaf; Uitterlinden, André G; Rivadeneira, Fernando; Hofman, Albert; Vingerling, Johannes R; Wang, Ya Xing; Wang, Xu; Tai-Hui Boh, Eileen; Ikram, M Kamran; Sabanayagam, Charumathi; Gupta, Preeti; Tan, Vincent; Zhou, Lei; Ho, Candice E H; Lim, Wan'e; Beuerman, Roger W; Siantar, Rosalynn; Tai, E-Shyong; Vithana, Eranga; Mihailov, Evelin; Khor, Chiea-Chuen; Hayward, Caroline; Luben, Robert N; Foster, Paul J; Klein, Barbara E K; Klein, Ronald; Wong, Hoi-Suen; Mitchell, Paul; Metspalu, Andres; Aung, Tin; Young, Terri L; He, Mingguang; Pärssinen, Olavi; van Duijn, Cornelia M; Jin Wang, Jie; Williams, Cathy; Jonas, Jost B; Teo, Yik-Ying; Mackey, David A; Oexle, Konrad; Yoshimura, Nagahisa; Paterson, Andrew D; Pfeiffer, Norbert; Wong, Tien-Yin; Baird, Paul N; Stambolian, Dwight; Wilson, Joan E Bailey; Cheng, Ching-Yu; Hammond, Christopher J; Klaver, Caroline C W; Saw, Seang-Mei; Rahi, Jugnoo S; Korobelnik, Jean-François; Kemp, John P; Timpson, Nicholas J; Smith, George Davey; Craig, Jamie E; Burdon, Kathryn P; Fogarty, Rhys D; Iyengar, Sudha K; Chew, Emily; Janmahasatian, Sarayut; Martin, Nicholas G; MacGregor, Stuart; Xu, Liang; Schache, Maria; Nangia, Vinay; Panda-Jonas, Songhomitra; Wright, Alan F; Fondran, Jeremy R; Lass, Jonathan H; Feng, Sheng; Zhao, Jing Hua; Khaw, Kay-Tee; Wareham, Nick J; Rantanen, Taina; Kaprio, Jaakko; Pang, Chi Pui; Chen, Li Jia; Tam, Pancy O; Jhanji, Vishal; Young, Alvin L; Döring, Angela; Raffel, Leslie J; Cotch, Mary-Frances; Li, Xiaohui; Yip, Shea Ping; Yap, Maurice K H; Biino, Ginevra; Vaccargiu, Simona; Fossarello, Maurizio; Fleck, Brian; Yazar, Seyhan; Tideman, Jan Willem L; Tedja, Milly; Deangelis, Margaret M; Morrison, Margaux; Farrer, Lindsay; Zhou, Xiangtian; Chen, Wei; Mizuki, Nobuhisa; Meguro, Akira; Mäkelä, Kari Matti

    2016-03-29

    Myopia is the most common human eye disorder and it results from complex genetic and environmental causes. The rapidly increasing prevalence of myopia poses a major public health challenge. Here, the CREAM consortium performs a joint meta-analysis to test single-nucleotide polymorphism (SNP) main effects and SNP × education interaction effects on refractive error in 40,036 adults from 25 studies of European ancestry and 10,315 adults from 9 studies of Asian ancestry. In European ancestry individuals, we identify six novel loci (FAM150B-ACP1, LINC00340, FBN1, DIS3L-MAP2K1, ARID2-SNAT1 and SLC14A2) associated with refractive error. In Asian populations, three genome-wide significant loci AREG, GABRR1 and PDE10A also exhibit strong interactions with education (P<8.5 × 10(-5)), whereas the interactions are less evident in Europeans. The discovery of these loci represents an important advance in understanding how gene and environment interactions contribute to the heterogeneity of myopia.

  7. Bacterial Cooperation Causes Systematic Errors in Pathogen Risk Assessment due to the Failure of the Independent Action Hypothesis.

    PubMed

    Cornforth, Daniel M; Matthews, Andrew; Brown, Sam P; Raymond, Ben

    2015-04-01

    The Independent Action Hypothesis (IAH) states that pathogenic individuals (cells, spores, virus particles etc.) behave independently of each other, so that each has an independent probability of causing systemic infection or death. The IAH is not just of basic scientific interest; it forms the basis of our current estimates of infectious disease risk in humans. Despite the important role of the IAH in managing disease interventions for food and water-borne pathogens, experimental support for the IAH in bacterial pathogens is indirect at best. Moreover since the IAH was first proposed, cooperative behaviors have been discovered in a wide range of microorganisms, including many pathogens. A fundamental principle of cooperation is that the fitness of individuals is affected by the presence and behaviors of others, which is contrary to the assumption of independent action. In this paper, we test the IAH in Bacillus thuringiensis (B.t), a widely occurring insect pathogen that releases toxins that benefit others in the inoculum, infecting the diamondback moth, Plutella xylostella. By experimentally separating B.t. spores from their toxins, we demonstrate that the IAH fails because there is an interaction between toxin and spore effects on mortality, where the toxin effect is synergistic and cannot be accommodated by independence assumptions. Finally, we show that applying recommended IAH dose-response models to high dose data leads to systematic overestimation of mortality risks at low doses, due to the presence of synergistic pathogen interactions. Our results show that cooperative secretions can easily invalidate the IAH, and that such mechanistic details should be incorporated into pathogen risk analysis.
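
    The quantitative point about low-dose extrapolation can be reproduced with a toy model: under the IAH the dose-response is P(d) = 1 - (1 - p)^d, approximately 1 - exp(-p d), whereas a synergistic response is steeper, so fitting the single IAH parameter to high-dose observations generated by a synergistic model overestimates risk at low doses. The synergy model and all parameters below are invented for illustration and are not taken from the study:

        import numpy as np

        def p_iah(d, p):
            """Independent action: each of d cells kills with probability p."""
            return 1.0 - np.exp(-p * d)                  # ~ 1 - (1 - p)**d for small p

        def p_synergy(d, d50=1.0e4, k=2.0):
            """Invented cooperative (synergistic) dose-response for illustration."""
            return d**k / (d**k + d50**k)

        doses_high = np.array([3.0e4, 1.0e5, 3.0e5])     # high-dose calibration points
        observed = p_synergy(doses_high)                 # what a high-dose experiment would see

        # Fit the single IAH parameter to the high-dose data (log-survival least squares)
        p_fit = np.mean(-np.log(1.0 - observed) / doses_high)

        for dose in (1.0e2, 1.0e3, 3.0e3):
            print("dose %8.0f: synergy model %.4f vs IAH extrapolation %.4f"
                  % (dose, p_synergy(dose), p_iah(dose, p_fit)))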

  8. Bacterial Cooperation Causes Systematic Errors in Pathogen Risk Assessment due to the Failure of the Independent Action Hypothesis.

    PubMed

    Cornforth, Daniel M; Matthews, Andrew; Brown, Sam P; Raymond, Ben

    2015-04-01

    The Independent Action Hypothesis (IAH) states that pathogenic individuals (cells, spores, virus particles etc.) behave independently of each other, so that each has an independent probability of causing systemic infection or death. The IAH is not just of basic scientific interest; it forms the basis of our current estimates of infectious disease risk in humans. Despite the important role of the IAH in managing disease interventions for food and water-borne pathogens, experimental support for the IAH in bacterial pathogens is indirect at best. Moreover since the IAH was first proposed, cooperative behaviors have been discovered in a wide range of microorganisms, including many pathogens. A fundamental principle of cooperation is that the fitness of individuals is affected by the presence and behaviors of others, which is contrary to the assumption of independent action. In this paper, we test the IAH in Bacillus thuringiensis (B.t), a widely occurring insect pathogen that releases toxins that benefit others in the inoculum, infecting the diamondback moth, Plutella xylostella. By experimentally separating B.t. spores from their toxins, we demonstrate that the IAH fails because there is an interaction between toxin and spore effects on mortality, where the toxin effect is synergistic and cannot be accommodated by independence assumptions. Finally, we show that applying recommended IAH dose-response models to high dose data leads to systematic overestimation of mortality risks at low doses, due to the presence of synergistic pathogen interactions. Our results show that cooperative secretions can easily invalidate the IAH, and that such mechanistic details should be incorporated into pathogen risk analysis. PMID:25909384

  9. Skylab water balance error analysis

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1977-01-01

Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and little can be gained from improvement in analytical accuracy. In addition, a propagation of error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.
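
    The propagation-of-error statement can be made concrete with a toy calculation: the variance of the balance is the sum of the term variances plus twice the covariance terms, and the share of each term shows which measurement dominates. The standard deviations below are hypothetical, chosen only to be consistent with the qualitative findings quoted above, not the Skylab values:

        import numpy as np

        # Var(balance) = sum of term variances + 2 * (covariance terms); all values hypothetical.
        sd = {                                    # assumed per-term standard deviations (g/day)
            "intake": 30.0,
            "urine": 25.0,
            "fecal water": 10.0,
            "evaporative loss": 40.0,
            "body-mass change": 120.0,
        }
        cov_terms = 500.0                         # assumed total covariance contribution (g^2/day^2)

        var_total = sum(s**2 for s in sd.values()) + 2.0 * cov_terms
        print("total balance SD: %.1f g/day" % np.sqrt(var_total))
        for name, s in sd.items():
            print("  %-18s %5.1f%% of total variance" % (name, 100.0 * s**2 / var_total))
        print("  %-18s %5.1f%% of total variance" % ("covariances", 100.0 * 2.0 * cov_terms / var_total))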

  10. The Effect of Additional Dead Space on Respiratory Exchange Ratio and Carbon Dioxide Production Due to Training

    PubMed Central

    Smolka, Lukasz; Borkowski, Jacek; Zaton, Marek

    2014-01-01

    The purpose of the study was to investigate the effects of implementing additional respiratory dead space during cycloergometry-based aerobic training. The primary outcome measures were respiratory exchange ratio (RER) and carbon dioxide production (VCO2). Two groups of young healthy males: Experimental (Exp, n = 15) and Control (Con, n = 15), participated in this study. The training consisted of 12 sessions, performed twice a week for 6 weeks. A single training session consisted of continuous, constant-rate exercise on a cycle ergometer at 60% of VO2max which was maintained for 30 minutes. Subjects in Exp group were breathing through additional respiratory dead space (1200ml), while subjects in Con group were breathing without additional dead space. Pre-test and two post-training incremental exercise tests were performed for the detection of gas exchange variables. In all training sessions, pCO2 was higher and blood pH was lower in the Exp group (p < 0.001) ensuring respiratory acidosis. A 12-session training program resulted in significant increase in performance time in both groups (from 17”29 ± 1”31 to 18”47 ± 1”37 in Exp; p=0.02 and from 17”20 ± 1”18 to 18”45 ± 1”44 in Con; p = 0.02), but has not revealed a significant difference in RER and VCO2 in both post-training tests, performed at rest and during submaximal workload. We interpret the lack of difference in post-training values of RER and VCO2 between groups as an absence of inhibition in glycolysis and glycogenolysis during exercise with additional dead space. Key Points The purpose of the study was to investigate the effects of implementing additional respiratory dead space during cycloergometry-based aerobic training on respiratory exchange ratio and carbon dioxide production. In all training sessions, respiratory acidosis was gained by experimental group only. No significant difference in RER and VCO2 between experimental and control group due to the trainings. The lack of

  11. The effect of additional dead space on respiratory exchange ratio and carbon dioxide production due to training.

    PubMed

    Smolka, Lukasz; Borkowski, Jacek; Zaton, Marek

    2014-01-01

The purpose of the study was to investigate the effects of implementing additional respiratory dead space during cycloergometry-based aerobic training. The primary outcome measures were respiratory exchange ratio (RER) and carbon dioxide production (VCO2). Two groups of young healthy males: Experimental (Exp, n = 15) and Control (Con, n = 15), participated in this study. The training consisted of 12 sessions, performed twice a week for 6 weeks. A single training session consisted of continuous, constant-rate exercise on a cycle ergometer at 60% of VO2max which was maintained for 30 minutes. Subjects in Exp group were breathing through additional respiratory dead space (1200ml), while subjects in Con group were breathing without additional dead space. Pre-test and two post-training incremental exercise tests were performed for the detection of gas exchange variables. In all training sessions, pCO2 was higher and blood pH was lower in the Exp group (p < 0.001) ensuring respiratory acidosis. A 12-session training program resulted in significant increase in performance time in both groups (from 17"29 ± 1"31 to 18"47 ± 1"37 in Exp; p=0.02 and from 17"20 ± 1"18 to 18"45 ± 1"44 in Con; p = 0.02), but has not revealed a significant difference in RER and VCO2 in both post-training tests, performed at rest and during submaximal workload. We interpret the lack of difference in post-training values of RER and VCO2 between groups as an absence of inhibition in glycolysis and glycogenolysis during exercise with additional dead space. Key Points The purpose of the study was to investigate the effects of implementing additional respiratory dead space during cycloergometry-based aerobic training on respiratory exchange ratio and carbon dioxide production. In all training sessions, respiratory acidosis was gained by experimental group only. No significant difference in RER and VCO2 between experimental and control group due to the trainings. The lack of difference in post

  12. Medication Errors

    MedlinePlus

... to reduce the risk of medication errors to industry and others at FDA. Additionally, DMEPA prospectively reviews ...

  13. Too generous to a fault? Is reliable earthquake safety a lost art? Errors in expected human losses due to incorrect seismic hazard estimates

    NASA Astrophysics Data System (ADS)

    Bela, James

    2014-11-01

    "One is well advised, when traveling to a new territory, to take a good map and then to check the map with the actual territory during the journey." In just such a reality check, Global Seismic Hazard Assessment Program (GSHAP) maps (prepared using PSHA) portrayed a "low seismic hazard," which was then also assumed to be the "risk to which the populations were exposed." But time-after-time-after-time the actual earthquakes that occurred were not only "surprises" (many times larger than those implied on the maps), but they were often near the maximum potential size (Maximum Credible Earthquake or MCE) that geologically could occur. Given these "errors in expected human losses due to incorrect seismic hazard estimates" revealed globally in these past performances of the GSHAP maps (> 700,000 deaths 2001-2011), we need to ask not only: "Is reliable earthquake safety a lost art?" but also: "Who and what were the `Raiders of the Lost Art?' "

  14. 38 CFR 3.361 - Benefits under 38 U.S.C. 1151(a) for additional disability or death due to hospital care, medical...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    .... 1151(a) for additional disability or death due to hospital care, medical or surgical treatment.... 1151(a) for additional disability or death due to hospital care, medical or surgical treatment..., VA compares the veteran's condition immediately before the beginning of the hospital care, medical...

  15. 38 CFR 3.361 - Benefits under 38 U.S.C. 1151(a) for additional disability or death due to hospital care, medical...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    .... 1151(a) for additional disability or death due to hospital care, medical or surgical treatment.... 1151(a) for additional disability or death due to hospital care, medical or surgical treatment..., VA compares the veteran's condition immediately before the beginning of the hospital care, medical...

  16. SU-E-J-164: Estimation of DVH Variation for PTV Due to Interfraction Organ Motion in Prostate VMAT Using Gaussian Error Function

    SciTech Connect

    Lewis, C; Jiang, R; Chow, J

    2015-06-15

    Purpose: We developed a method to predict the change of DVH for PTV due to interfraction organ motion in prostate VMAT without repeating the CT scan and treatment planning. The method is based on a pre-calculated patient database with DVH curves of PTV modelled by the Gaussian error function (GEF). Methods: For a group of 30 patients with different prostate sizes, their VMAT plans were recalculated by shifting their PTVs 1 cm with 10 increments in the anterior-posterior, left-right and superior-inferior directions. The DVH curve of PTV in each replan was then fitted by the GEF to determine parameters describing the shape of curve. Information of parameters, varying with the DVH change due to prostate motion for different prostate sizes, was analyzed and stored in a database of a program written by MATLAB. Results: To predict a new DVH for PTV due to prostate interfraction motion, prostate size and shift distance with direction were input to the program. Parameters modelling the DVH for PTV were determined based on the pre-calculated patient dataset. From the new parameters, DVH curves of PTVs with and without considering the prostate motion were plotted for comparison. The program was verified with different prostate cases involving interfraction prostate shifts and replans. Conclusion: Variation of DVH for PTV in prostate VMAT can be predicted using a pre-calculated patient database with DVH curve fitting. The computing time is fast because CT rescan and replan are not required. This quick DVH estimation can help radiation staff to determine if the changed PTV coverage due to prostate shift is tolerable in the treatment. However, it should be noted that the program can only consider prostate interfraction motions along three axes, and is restricted to prostate VMAT plan using the same plan script in the treatment planning system.
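
    The curve-fitting step described in the Methods can be sketched directly: a cumulative DVH for the PTV is modelled with the Gaussian error function and its two parameters (a 50% dose point and a width) are fitted to the DVH samples. The synthetic DVH below is invented and this is not the authors' MATLAB program, only a minimal illustration of the fit:

        import numpy as np
        from scipy.special import erf
        from scipy.optimize import curve_fit

        def dvh_gef(D, D50, sigma):
            """Cumulative DVH modelled with the Gaussian error function (% volume)."""
            return 50.0 * (1.0 - erf((D - D50) / (np.sqrt(2.0) * sigma)))

        # Synthetic "measured" DVH points for a hypothetical prostate VMAT plan
        dose = np.linspace(60.0, 86.0, 27)                       # Gy
        rng = np.random.default_rng(3)
        volume = dvh_gef(dose, 78.0, 1.5) + rng.normal(0.0, 0.5, dose.size)

        (D50_fit, sigma_fit), _ = curve_fit(dvh_gef, dose, volume, p0=(75.0, 2.0))
        print("fitted D50 = %.2f Gy, sigma = %.2f Gy" % (D50_fit, sigma_fit))
        # A tabulated database of (D50, sigma) versus prostate size and shift would then
        # let a new DVH be reconstructed without re-planning, as described above.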

  17. Unforced errors and error reduction in tennis

    PubMed Central

    Brody, H

    2006-01-01

    Only at the highest level of tennis is the number of winners comparable to the number of unforced errors. As the average player loses many more points due to unforced errors than due to winners by an opponent, if the rate of unforced errors can be reduced, it should lead to an increase in points won. This article shows how players can improve their game by understanding and applying the laws of physics to reduce the number of unforced errors. PMID:16632568

  18. 76 FR 13431 - Major Portion Prices and Due Date for Additional Royalty Payments on Indian Gas Production in...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-11

    ... Natural Resources Revenue (ONRR) to determine major portion prices and notify industry by publishing the prices in the Federal Register. The regulations also require ONRR to publish a due date for industry to... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF THE...

  19. Prenatal detection of mosaic trisomy 1q due to an unbalanced translocation in one fetus of a twin pregnancy following in vitro fertilization: a postzygotic error.

    PubMed

    Zeng, Shemin; Patil, Shivanand R; Yankowitz, Jerome

    2003-08-01

    Complete or mosaic trisomy for all of chromosome 1q has been seen rarely in a recognized pregnancy. A patient presented with twins following in vitro fertilization (IVF). Ultrasound showed twin A to have a diaphragmatic hernia, thick nuchal fold, and subtle intracranial abnormalities. Twin B appeared normal and a thick dividing membrane was seen. Amniocentesis of twin A showed a male karyotype with mosaic trisomy 1q in 57% of cells resulting from a translocation between chromosomes Yq12 and 1q12. Parental karyotypes were normal. The twins were delivered at 33 weeks. Twin A died at 1 hr of life. Autopsy confirmed the left diaphragmatic hernia and hypoplastic lungs. Autopsy also revealed a partial cleft palate, syndactyly of the second and third toes bilaterally, external deviation of the left 5th toe, and contractures of the index fingers bilaterally. A recent report documented formation of a chimera resulting from embryo amalgamation after IVF. Given the rarity of the cytogenetic findings in our case, we sought to determine if the mosaicism was a result of chimera formation related to the IVF. Thirteen polymorphic loci throughout the genome, in addition to four on 1q and four on 1p, were amplified by PCR. Only two alleles were observed at each of these loci in twin A, one paternal and the other maternal. We present further clinical findings of this case with a rare cytogenetic abnormality that appears to have originated from a postzygotic mitotic error and not embryo amalgamation.

  20. Common Ion Effects In Zeoponic Substrates: Dissolution And Cation Exchange Variations Due to Additions of Calcite, Dolomite and Wollastonite

    NASA Technical Reports Server (NTRS)

    Beiersdorfer, R. E.; Ming, D. W.; Galindo, C., Jr.

    2003-01-01

A clinoptilolite-rich tuff-hydroxyapatite mixture (zeoponic substrate) has the potential to serve as a synthetic soil additive for plant growth. Essential plant macro-nutrients such as calcium, phosphorus, magnesium, ammonium and potassium are released into solution via dissolution of the hydroxyapatite and cation exchange on zeolite charged sites. Plant growth experiments resulting in low yield for wheat have been attributed to a Ca deficiency caused by a high degree of cation exchange by the zeolite. Batch-equilibration experiments were performed in order to determine if the Ca deficiency can be remedied by the addition of a second Ca-bearing, soluble mineral such as calcite, dolomite or wollastonite. Variations in the amount of calcite, dolomite or wollastonite resulted in systematic changes in the concentrations of Ca and P. The addition of calcite, dolomite or wollastonite to the zeoponic substrate resulted in an exponential decrease in the phosphorus concentration in solution. The exponential rate of decay was greatest for calcite (5.60 wt.%⁻¹), intermediate for wollastonite (2.85 wt.%⁻¹) and least for dolomite (1.58 wt.%⁻¹). Additions of the three minerals resulted in linear increases in the calcium concentration in solution. The rate of increase was greatest for calcite (3.64), intermediate for wollastonite (2.41) and least for dolomite (0.61). The observed changes in P and Ca concentration are consistent with the solubilities of calcite, dolomite and wollastonite and with changes expected from a common ion effect with Ca. Keywords: zeolite, zeoponics, common-ion effect, clinoptilolite, hydroxyapatite

  1. Correction for ‘artificial’ electron disequilibrium due to cone-beam CT density errors: implications for on-line adaptive stereotactic body radiation therapy of lung

    NASA Astrophysics Data System (ADS)

    Disher, Brandon; Hajdok, George; Wang, An; Craig, Jeff; Gaede, Stewart; Battista, Jerry J.

    2013-06-01

Cone-beam computed tomography (CBCT) has rapidly become a clinically useful imaging modality for image-guided radiation therapy. Unfortunately, CBCT images of the thorax are susceptible to artefacts due to scattered photons, beam hardening, lag in data acquisition, and respiratory motion during a slow scan. These limitations cause dose errors when CBCT image data are used directly in dose computations for on-line, dose adaptive radiation therapy (DART). The purpose of this work is to assess the magnitude of errors in CBCT numbers (HU), and determine the resultant effects on derived tissue density and computed dose accuracy for stereotactic body radiation therapy (SBRT) of lung cancer. Planning CT (PCT) images of three lung patients were acquired using a Philips multi-slice helical CT simulator, while CBCT images were obtained with a Varian On-Board Imaging system. To account for erroneous CBCT data, three practical correction techniques were tested: (1) conversion of CBCT numbers to electron density using phantoms, (2) replacement of individual CBCT pixel values with bulk CT numbers, averaged from PCT images for tissue regions, and (3) limited replacement of CBCT lung pixel values (LCT) likely to produce artificial lateral electron disequilibrium. For each corrected CBCT data set, lung SBRT dose distributions were computed for a 6 MV volume modulated arc therapy (VMAT) technique within the Philips Pinnacle treatment planning system. The reference prescription dose was set such that 95% of the planning target volume (PTV) received at least 54 Gy (i.e. D95). Further, we used the relative depth dose factor as an a priori index to predict the effects of incorrect low tissue density on computed lung dose in regions of severe electron disequilibrium. CT number profiles from co-registered CBCT and PCT patient lung images revealed many reduced lung pixel values in CBCT data, with some pixels corresponding to vacuum (-1000 HU). Similarly, CBCT data in a plastic lung
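
    Two of the simpler corrections listed above can be sketched generically: mapping CBCT numbers to density through a phantom-derived calibration curve, and overriding pixels inside a segmented region with a bulk value taken from the planning CT. The calibration points, threshold and bulk value below are assumed for illustration and are not the values used in the study:

        import numpy as np

        # (1) Phantom-derived calibration: CBCT number -> relative electron density
        hu_cal = np.array([-1000.0, -700.0, 0.0, 200.0, 1000.0])     # assumed calibration points
        rho_cal = np.array([0.001, 0.30, 1.00, 1.10, 1.60])

        def cbct_to_density(hu):
            return np.interp(hu, hu_cal, rho_cal)                    # piecewise-linear mapping

        # (2) Bulk override: replace densities inside a segmented region with a PCT-averaged value
        def bulk_override(density, region_mask, bulk_value):
            out = density.copy()
            out[region_mask] = bulk_value
            return out

        cbct_row = np.array([-950.0, -820.0, -760.0, -680.0, -400.0, 30.0])  # example lung pixels
        rho = cbct_to_density(cbct_row)
        lung = cbct_row < -500.0                                     # crude lung threshold
        print("calibrated densities:", np.round(rho, 3))
        print("after bulk override :", np.round(bulk_override(rho, lung, 0.26), 3))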

  2. Calculation of enhanced slowing and cooling due to the addition of a traveling wave to an intense optical standing wave

    NASA Astrophysics Data System (ADS)

    Gottesman, D.; Mervis, J.; Prentiss, M.; Bigelow, N. P.

    1992-07-01

    We investigate the force on a two-level atom interacting with intense monochromatic laser fields which are combinations of standing and traveling waves. We present a continued-fraction solution to the optical Bloch equations. Using this solution to calculate the force on an atom, we have examined the slowing and cooling of a thermal Na atomic beam. We find that the addition of a traveling wave to an intense standing wave can significantly improve the slowing rate and simultaneously decrease the final velocity of the cooled beam.

  3. Growth and parameters of microflora in intestinal and faecal samples of piglets due to application of a phytogenic feed additive.

    PubMed

    Muhl, A; Liebert, F

    2007-10-01

    A commercial phytogenic feed additive (PFA), containing the fructopolysaccharide inulin, an essential oil mix (carvacrol, thymol), chestnut meal (tannins) and cellulose powder as carrier substance, was examined for effects on growth and faecal and intestinal microflora of piglets. Two experiments (35 days) were conducted, each with 40 male castrated weaned piglets. In experiment 1, graded levels of the PFA were supplied (A1: control; B1: 0.05% PFA; C1: 0.1% PFA; D1: 0.15% PFA) in diets based on wheat, barley, soybean meal and fish meal with lysine as the limiting amino acid. In experiment 2, a similar diet with 0.1% of the PFA (A2: control; B2: 0.1% PFA; C2: +0.35% lysine; D2: 0.1% PFA + 0.35% lysine) and lysine supplementation was utilized. During experiment 1, no significant effect of the PFA on growth, feed intake and feed conversion rate was observed (p > 0.05). Lysine supplementation in experiment 2 improved growth performance significantly, but no significant effect of the PFA was detected. Microbial counts in faeces (aerobes, Gram negatives, anaerobes and lactobacilli) during the first and fifth week did not indicate any significant PFA effect (p > 0.05). In addition, microflora in intestinal samples was not significantly modified by supplementing the PFA (p > 0.05). Lysine supplementation indicated lysine as limiting amino acid in the basal diet, but did not influence the microbial counts in faeces and small intestine respectively.

  4. Decrease in Corneal Damage due to Benzalkonium Chloride by the Addition of Mannitol into Timolol Maleate Eye Drops.

    PubMed

    Nagai, Noriaki; Yoshioka, Chiaki; Tanino, Tadatoshi; Ito, Yoshimasa; Okamoto, Norio; Shimomura, Yoshikazu

    2015-01-01

We investigated the protective effects of mannitol on corneal damage caused by benzalkonium chloride (BAC), which is used as a preservative in commercially available timolol maleate eye drops, using rat debrided corneal epithelium and a human cornea epithelial cell line (HCE-T). Corneal wounds were monitored using a fundus camera TRC-50X equipped with a digital camera; eye drops were instilled into rat eyes five times a day after corneal epithelial abrasion. The viability of HCE-T cells was calculated by TetraColor One, and Escherichia coli (ATCC 8739) were used to measure antimicrobial activity. The reducing effects on transcorneal penetration and intraocular pressure (IOP) of the eye drops were determined using rabbits. The corneal wound healing rate and rate constant (kH), as well as cell viability, were higher following treatment with 0.005% BAC solution containing 0.5% mannitol than in the case of BAC solution alone; the antimicrobial activity was approximately the same for BAC solutions with and without mannitol. In addition, the kH for rat eyes instilled with commercially available timolol maleate eye drops containing 0.5% mannitol was significantly higher than that for eyes instilled with timolol maleate eye drops without mannitol, and the addition of mannitol did not affect the corneal penetration or IOP reducing effect of the timolol maleate eye drops. A preservative system comprising BAC and mannitol may provide effective therapy for glaucoma patients requiring long-term treatment with anti-glaucoma agents.

  5. Anomalous yield reduction in direct-drive deuterium/tritium implosions due to ³He addition

    SciTech Connect

    Herrmann, H. W.; Langenbrunner, J. R.; Mack, J. M.; Cooley, J. H.; Wilson, D. C.; Evans, S. C.; Sedillo, T. J.; Kyrala, G. A.; Caldwell, S. E.; Young, C. S.; Nobile, A.; Wermer, J.; Paglieri, S.; McEvoy, A. M.; Kim, Y.; Batha, S. H.; Horsfield, C. J.; Drew, D.; Garbett, W.; Rubery, M.

    2009-05-15

    Glass capsules were imploded in direct drive on the OMEGA laser [Boehly et al., Opt. Commun. 133, 495 (1997)] to look for anomalous degradation in deuterium/tritium (DT) yield and changes in reaction history with ³He addition. Such anomalies have previously been reported for D/³He plasmas but had not yet been investigated for DT/³He. Anomalies such as these provide fertile ground for furthering our physics understanding of inertial confinement fusion implosions and capsule performance. Anomalous degradation in the compression component of yield was observed, consistent with the 'factor of 2' degradation previously reported by Massachusetts Institute of Technology (MIT) at a 50% ³He atom fraction in D₂ using plastic capsules [Rygg, Phys. Plasmas 13, 052702 (2006)]. However, clean calculations (i.e., no fuel-shell mixing) predict the shock component of yield quite well, contrary to the result reported by MIT but consistent with Los Alamos National Laboratory results in D₂/³He [Wilson et al., J. Phys.: Conf. Ser. 112, 022015 (2008)]. X-ray imaging suggests less-than-predicted compression of capsules containing ³He. Leading candidate explanations are poorly understood equation of state for gas mixtures and unanticipated particle pressure variation with increasing ³He addition.

  6. Additional Radiative Cooling of the Mesopause Region due to Small-Scale Temperature Fluctuations Associated with Gravity Waves

    NASA Astrophysics Data System (ADS)

    Kutepov, A.; Feofilov, A.; Medvedev, A.; Pauldrach, A.; Hartogh, P.

    2008-05-01

    We address a previously unknown effect of the radiative cooling of the mesosphere and lower thermosphere (MLT) produced by small-scale irregular temperature fluctuations (ITFs) associated with gravity waves. These disturbances are not resolved by present GCMs, but they alter the radiative transfer and the heating/cooling rates significantly. We apply a statistical model of gravity waves superimposed on large-scale temperature profiles, and perform direct calculations of the radiative cooling/heating in the MLT in the IR bands of CO2, O3 and H2O molecules taking into account the breakdown of the local thermodynamic equilibrium (non-LTE). We found that in the periods of strong wave activity the subgrid ITFs can cause an additional cooling up to 3 K/day near the mesopause. The effect is produced mainly by the fundamental 15 μm band of the main CO2 isotope. We derived a simple expression for the correction to mean (resolved by GCMs) temperature profiles using the variance of the temperature perturbations to account for the additional cooling effect. The suggested parameterization can be applied in GCMs in conjunction with existing gravity wave drag parameterizations.
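
    The variance-based correction mentioned above follows the generic pattern that, for a nonlinear cooling-rate function Q(T), unresolved fluctuations shift the mean cooling by roughly 0.5·Q''(T̄)·σ². The Python sketch below only illustrates this generic second-order moment expansion with a toy Boltzmann-like nonlinearity; it is not the authors' derived expression or their parameterization.

```python
# Generic illustration (not the authors' parameterization): for a nonlinear cooling-rate
# function Q(T), unresolved temperature fluctuations shift the mean cooling by roughly
# 0.5 * Q''(Tbar) * var(T), the kind of variance-based correction described above.
import numpy as np

def q(temp):
    # Toy cooling-rate dependence with a Boltzmann-like nonlinearity (arbitrary units);
    # 960 K is roughly the 15-micron CO2 transition energy divided by k_B.
    return np.exp(-960.0 / temp)

rng = np.random.default_rng(2)
tbar, sigma = 190.0, 10.0                      # mesopause-like mean temperature and fluctuation, K
samples = tbar + sigma * rng.standard_normal(200_000)

exact = q(samples).mean()
q2 = q(tbar + 1.0) - 2.0 * q(tbar) + q(tbar - 1.0)      # finite-difference Q''(Tbar)
second_order = q(tbar) + 0.5 * sigma**2 * q2
print(f"Q(Tbar) = {q(tbar):.4e}, <Q(T)> = {exact:.4e}, 2nd-order estimate = {second_order:.4e}")
```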

  7. Addressing Loss of Efficiency Due to Misclassification Error in Enriched Clinical Trials for the Evaluation of Targeted Therapies Based on the Cox Proportional Hazards Model

    PubMed Central

    Tsai, Chen-An; Lee, Kuan-Ting; Liu, Jen-pei

    2016-01-01

    A key feature of precision medicine is that it takes individual variability at the genetic or molecular level into account in determining the best treatment for patients diagnosed with diseases detected by recently developed novel biotechnologies. The enrichment design is an efficient design that enrolls only the patients testing positive for specific molecular targets and randomly assigns them to the targeted treatment or the concurrent control. However, there is no diagnostic device with perfect accuracy and precision for detecting molecular targets. In particular, the positive predictive value (PPV) can be quite low for rare diseases with low prevalence. Under the enrichment design, some patients testing positive for specific molecular targets may not have the molecular targets. As a result, the efficacy of the targeted therapy may be underestimated for the patients who actually do have the molecular targets. To address the loss of efficiency due to misclassification error, we apply the discrete mixture modeling for time-to-event data proposed by Eng and Hanlon [8] to develop an inferential procedure, based on the Cox proportional hazards model, for the effect of the targeted treatment in the true-positive patients with the molecular targets. Our proposed procedure incorporates both the inaccuracy of diagnostic devices and the uncertainty of estimated accuracy measures. We employed the expectation-maximization algorithm in conjunction with the bootstrap technique to estimate the hazard ratio and its variance. We report the results of simulation studies which empirically investigated the performance of the proposed method. Our proposed method is illustrated by a numerical example. PMID:27120450
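
    For orientation, the bootstrap step (estimating the spread of the hazard ratio) can be sketched as below with the lifelines package on a plain Cox model; the data frame, column names and sample data are hypothetical, and the authors' EM step for the latent true-positive/false-positive mixture is deliberately omitted.

```python
# Minimal sketch: bootstrap variability of a Cox hazard ratio (plain model only; the
# paper's EM-based mixture handling of misclassified patients is not reproduced here).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter  # assumes the lifelines package is installed

rng = np.random.default_rng(0)
n = 400
treat = rng.integers(0, 2, n)                                 # hypothetical randomized arm
time = rng.exponential(scale=np.where(treat, 14.0, 10.0))     # longer survival on treatment
event = (rng.random(n) < 0.8).astype(int)                     # ~20% censoring
df = pd.DataFrame({"time": time, "event": event, "treat": treat})

def log_hr(data: pd.DataFrame) -> float:
    """Fit a Cox proportional hazards model and return the log hazard ratio for 'treat'."""
    cph = CoxPHFitter()
    cph.fit(data, duration_col="time", event_col="event")
    return cph.params_["treat"]

point = log_hr(df)
boot = [log_hr(df.sample(n=len(df), replace=True).reset_index(drop=True)) for _ in range(200)]
print(f"HR = {np.exp(point):.2f}, bootstrap SE of log HR = {np.std(boot, ddof=1):.3f}")
```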

  8. Measurement error in frequency measured using wavelength meter due to residual moisture in interferometer and a simple method to avoid it

    NASA Astrophysics Data System (ADS)

    Hashiguchi, Koji; Abe, Hisashi

    2016-11-01

    We have experimentally evaluated the accuracy of the frequency measured using a commonly used wavelength meter in the near-infrared region, which was calibrated in the visible region. An error of approximately 50 MHz was observed in the frequency measurement using the wavelength meter in the near-infrared region although the accuracy specified in the catalogue was 20 MHz. This error was attributable to residual moisture inside the Fizeau interferometer of the wavelength meter. A simple method to avoid the error is proposed.

  9. Radio metric errors due to mismatch and offset between a DSN antenna beam and the beam of a troposphere calibration instrument

    NASA Technical Reports Server (NTRS)

    Linfield, R. P.; Wilcox, J. Z.

    1993-01-01

    Two components of the error of a troposphere calibration measurement were quantified by theoretical calculations. The first component is a beam mismatch error, which occurs when the calibration instrument senses a conical volume different from the cylindrical volume sampled by a Deep Space Network (DSN) antenna. The second component is a beam offset error, which occurs if the calibration instrument is not mounted on the axis of the DSN antenna. These two error sources were calculated for both delay (e.g., VLBI) and delay rate (e.g., Doppler) measurements. The beam mismatch error for both delay and delay rate drops rapidly as the beamwidth of the troposphere calibration instrument (e.g., a water vapor radiometer or an infrared Fourier transform spectrometer) is reduced. At a 10-deg elevation angle, the instantaneous beam mismatch error is 1.0 mm for a 6-deg beamwidth and 0.09 mm for a 0.5-deg beam (these are the full angular widths of a circular beam with uniform gain out to a sharp cutoff). Time averaging for 60-100 sec will reduce these errors by factors of 1.2-2.2. At a 20-deg elevation angle, the lower limit for current Doppler observations, the beam-mismatch delay rate error is an Allan standard deviation over 100 sec of 1.1 × 10⁻¹⁴ with a 4-deg beam and 1.3 × 10⁻¹⁵ for a 0.5-deg beam. A 50-m beam offset would result in a fairly modest (compared to other expected error sources) delay error (less than or equal to 0.3 mm for 60-sec integrations at any elevation angle greater than or equal to 6 deg). However, the same offset would cause a large error in delay rate measurements (e.g., an Allan standard deviation of 1.2 × 10⁻¹⁴ over 100 sec at a 20-deg elevation angle), which would dominate over other known error sources if the beamwidth is 2 deg or smaller. An on-axis location is essential for accurate troposphere calibration of delay rate measurements. A half-power beamwidth (for a beam with a tapered gain profile) of 1.2 deg or smaller is
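
    The Allan standard deviation quoted above is the standard two-sample statistic; a minimal sketch of how it would be computed from a sampled delay-rate residual series (synthetic white noise here, not the report's modeled troposphere errors):

```python
# Sketch: non-overlapping Allan standard deviation of a (simulated) delay-rate residual
# series, the statistic quoted in the abstract (standard definition; synthetic data only).
import numpy as np

def allan_deviation(y: np.ndarray, dt: float, tau: float) -> float:
    """Non-overlapping Allan deviation of samples y (spacing dt) at averaging time tau."""
    m = int(round(tau / dt))                 # samples per averaging bin
    nbins = len(y) // m
    ybar = y[: nbins * m].reshape(nbins, m).mean(axis=1)
    return np.sqrt(0.5 * np.mean(np.diff(ybar) ** 2))

rng = np.random.default_rng(1)
dt = 1.0                                      # s
residual = 1e-14 * rng.standard_normal(3600)  # one hour of white delay-rate noise
print(f"sigma_y(100 s) = {allan_deviation(residual, dt, 100.0):.2e}")
```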

  10. Changes in the nanoparticle aggregation rate due to the additional effect of electrostatic and magnetic forces on mass transport coefficients.

    PubMed

    Rosická, Dana; Sembera, Jan

    2013-01-01

    The need may arise to be able to simulate the migration of groundwater nanoparticles through the ground. Transportation velocities of nanoparticles are different from the velocity of water and depend on many processes that occur during migration. Unstable nanoparticles, such as zero-valent iron nanoparticles, are especially slowed down by aggregation between them. Aggregation occurs when attractive forces outweigh repulsive forces between the particles. In the case of iron nanoparticles that are used for remediation, magnetic forces between particles contribute to the attractive forces and the nanoparticles aggregate rapidly. This paper describes the addition of attractive magnetic forces and repulsive electrostatic forces between particles (by 'particle', we mean both single nanoparticles and created aggregates) into a basic, commonly used model of aggregation. This model is built on the flow of particles in the proximity of an observed particle, which gives the rate of aggregation of the observed particle. By using a limit distance that has been described in our previous work, the flow of particles around one particle is observed at larger spacings between the particles. Attractive magnetic forces between particles draw the particles into closer proximity and result in aggregation. This model fits more closely the rapid aggregation which occurs between magnetic nanoparticles.

  11. Correcting for bias in relative risk estimates due to exposure measurement error: a case study of occupational exposure to antineoplastics in pharmacists.

    PubMed Central

    Spiegelman, D; Valanis, B

    1998-01-01

    OBJECTIVES: This paper describes 2 statistical methods designed to correct for bias from exposure measurement error in point and interval estimates of relative risk. METHODS: The first method takes the usual point and interval estimates of the log relative risk obtained from logistic regression and corrects them for nondifferential measurement error using an exposure measurement error model estimated from validation data. The second, likelihood-based method fits an arbitrary measurement error model suitable for the data at hand and then derives the model for the outcome of interest. RESULTS: Data from Valanis and colleagues' study of the health effects of antineoplastics exposure among hospital pharmacists were used to estimate the prevalence ratio of fever in the previous 3 months from this exposure. For an interdecile increase in weekly number of drugs mixed, the prevalence ratio, adjusted for confounding, changed from 1.06 to 1.17 (95% confidence interval [CI] = 1.04, 1.26) after correction for exposure measurement error. CONCLUSIONS: Exposure measurement error is often an important source of bias in public health research. Methods are available to correct such biases. PMID:9518972
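
    As a generic illustration of the first (correction-factor) approach, and not the paper's exact estimator or data, a regression-calibration style correction divides the naive log relative risk by an attenuation factor estimated from the validation study and propagates both uncertainties by the delta method:

```python
# Sketch of a regression-calibration style correction of a log relative risk
# (illustrative formulas and hypothetical numbers; not the paper's estimator or data).
import numpy as np

beta_naive, se_naive = np.log(1.06), 0.025      # hypothetical uncorrected log RR and its SE
lam, se_lam = 0.40, 0.05                        # hypothetical attenuation factor from validation data

beta_corr = beta_naive / lam                    # corrected log relative risk
# Delta-method variance, treating beta_naive and lambda as independent estimates
var_corr = (se_naive / lam) ** 2 + (beta_naive ** 2) * (se_lam ** 2) / lam ** 4
lo, hi = beta_corr - 1.96 * np.sqrt(var_corr), beta_corr + 1.96 * np.sqrt(var_corr)
print(f"corrected RR = {np.exp(beta_corr):.2f} (95% CI {np.exp(lo):.2f}-{np.exp(hi):.2f})")
```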

  12. SU-E-P-13: Quantifying the Geometric Error Due to Irregular Motion in Four-Dimensional Computed Tomography (4DCT)

    SciTech Connect

    Sawant, A

    2015-06-15

    Purpose: Respiratory correlated 4DCT images are generated under the assumption of a regular breathing cycle. This study evaluates the error in 4DCT-based target position estimation in the presence of irregular respiratory motion. Methods: A custom-made programmable externally- and internally-deformable lung motion phantom was placed inside the CT bore. An abdominal pressure belt was placed around the phantom to mimic clinical 4DCT acquisition, and the motion platform was programmed with a sinusoidal (±10 mm, 10 cycles per minute) motion trace and 7 motion traces recorded from lung cancer patients. The same setup and motion trajectories were repeated in the linac room and kV fluoroscopic images were acquired using the on-board imager. Positions of 4 internal markers segmented from the 4DCT volumes were overlaid upon the motion trajectories derived from the fluoroscopic time series to calculate the difference between estimated (4DCT) and “actual” (kV fluoro) positions. Results: With a sinusoidal trace, absolute errors of the 4DCT-estimated marker positions vary between 0.78 mm and 5.4 mm, and RMS errors range from 0.38 mm to 1.7 mm. With irregular patient traces, absolute errors of the 4DCT-estimated marker positions increased significantly, by 100 to 200 percent, while the corresponding RMS error values showed much smaller changes. Significant mismatches were frequently found at peak-inhale or peak-exhale phase. Conclusion: As expected, under conditions of well-behaved, periodic sinusoidal motion, the 4DCT yielded much better estimation of marker positions. When an actual patient trace was used, 4DCT-derived positions showed significant mismatches with the fluoroscopic trajectories, indicating the potential for geometric and therefore dosimetric errors in the presence of cycle-to-cycle respiratory variations.
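
    The reported error metrics reduce to simple comparisons once the 4DCT-estimated and fluoroscopy-derived marker positions are on a common time base; a minimal sketch with synthetic traces standing in for the phantom data:

```python
# Sketch: maximum absolute and RMS difference between 4DCT-estimated and
# fluoroscopy-derived marker positions (synthetic traces; the phantom data are not public).
import numpy as np

t = np.linspace(0.0, 60.0, 600)                      # 60 s sampled at 10 Hz
fluoro = 10.0 * np.sin(2 * np.pi * t / 6.0)          # "actual" SI position, mm
est_4dct = 10.0 * np.sin(2 * np.pi * t / 6.0 + 0.1)  # 4DCT-derived estimate with a phase error

diff = est_4dct - fluoro
print(f"max |error| = {np.max(np.abs(diff)):.2f} mm, RMS error = {np.sqrt(np.mean(diff**2)):.2f} mm")
```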

  13. Errors in spectroscopic measurements of SO₂ due to nonexponential absorption of laser radiation, with application to the remote monitoring of atmospheric pollutants

    SciTech Connect

    Brassington, D.J.; Moncrieff, T.M.; Felton, R.C.; Jolliffe, B.W.; Marx, B.R.; Rowley, W.R.C.; Woods, P.T.

    1984-02-01

    Methods of measuring the concentration of atmospheric pollutants by laser absorption spectroscopy, such as differential absorption lidar (DIAL) and integrated long-path techniques, all rely on the validity of Beer's exponential absorption law. It is shown here that departures from this law occur if the probing laser has a bandwidth larger than the wavelength scale of structure in the absorption spectrum of the pollutant. A comprehensive experimental and theoretical treatment of the errors resulting from these departures is presented for the particular case of SO₂ monitoring at approximately 300 nm. It is shown that the largest error occurs where the initial calibration measurement of absorption cross section is made at low pressure, in which case errors in excess of 5% in the cross section could occur for laser bandwidths >0.01 nm. Atmospheric measurements by DIAL or long-path methods are in most cases affected less, because pressure broadening smears the spectral structure, but when measuring high concentrations errors can exceed 5%.
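
    The departure from Beer's law is easy to reproduce with a toy model in which two halves of the laser line see different absorption cross sections: the band-averaged transmission is then a sum of exponentials, and the apparent cross section drifts with column density. The numbers below are illustrative, not the SO₂ values of the paper:

```python
# Toy model of non-exponential absorption for a laser broader than the spectral structure.
# Two spectral sub-components with different cross sections; numbers are illustrative only.
import numpy as np

sigma = np.array([3.0e-19, 1.0e-19])    # cm^2, cross sections seen by the two halves of the laser line
weight = np.array([0.5, 0.5])           # fraction of laser power in each sub-component
path = 1.0e3                            # cm

for density in (1e13, 1e14, 1e15):      # molecules per cm^3
    transmission = np.sum(weight * np.exp(-sigma * density * path))
    sigma_eff = -np.log(transmission) / (density * path)   # apparent cross section
    print(f"N = {density:.0e}: T = {transmission:.3f}, effective sigma = {sigma_eff:.2e} cm^2")
```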

  14. Measuring uncertainty in dose delivered to the cochlea due to setup error during external beam treatment of patients with cancer of the head and neck

    SciTech Connect

    Yan, M.; Lovelock, D.; Hunt, M.; Mechalakos, J.; Hu, Y.; Pham, H.; Jackson, A.

    2013-12-15

    Purpose: To use Cone Beam CT scans obtained just prior to treatments of head and neck cancer patients to measure the setup error and cumulative dose uncertainty of the cochlea. Methods: Data from 10 head and neck patients with 10 planning CTs and 52 Cone Beam CTs taken at time of treatment were used in this study. Patients were treated with conventional fractionation using an IMRT dose painting technique, most with 33 fractions. Weekly radiographic imaging was used to correct the patient setup. The authors used rigid registration of the planning CT and Cone Beam CT scans to find the translational and rotational setup errors, and the spatial setup errors of the cochlea. The planning CT was rotated and translated such that the cochlea positions match those seen in the cone beam scans, cochlea doses were recalculated and fractional doses accumulated. Uncertainties in the positions and cumulative doses of the cochlea were calculated with and without setup adjustments from radiographic imaging. Results: The mean setup error of the cochlea was 0.04 ± 0.33 or 0.06 ± 0.43 cm for RL, 0.09 ± 0.27 or 0.07 ± 0.48 cm for AP, and 0.00 ± 0.21 or −0.24 ± 0.45 cm for SI with and without radiographic imaging, respectively. Setup with radiographic imaging reduced the standard deviation of the setup error by roughly 1–2 mm. The uncertainty of the cochlea dose depends on the treatment plan and the relative positions of the cochlea and target volumes. Combining results for the left and right cochlea, the authors found the accumulated uncertainty of the cochlea dose per fraction was 4.82 (0.39–16.8) cGy, or 10.1 (0.8–32.4) cGy, with and without radiographic imaging, respectively; the percentage uncertainties relative to the planned doses were 4.32% (0.28%–9.06%) and 10.2% (0.7%–63.6%), respectively. Conclusions: Patient setup error introduces uncertainty in the position of the cochlea during radiation treatment. With the assistance of radiographic imaging during setup

  15. Microstructural evolution and intermetallic formation in Al-8wt% Si-0.8wt% Fe alloy due to grain refiner and modifier additions

    NASA Astrophysics Data System (ADS)

    Hassani, Amir; Ranjbar, Khalil; Sami, Sattar

    2012-08-01

    An alloy of Al-8wt% Si-0.8wt% Fe was cast in a metallic die, and its microstructural changes due to Ti-B refiner and Sr modifier additions were studied. Apart from the usual refinement and modification of the microstructure, some mutual influences of the additives took place, and no mutual poisoning effects by these additives, in combined form, were observed. It was noticed that the dimensions of the iron-rich intermetallics were influenced by the additives, causing them to become larger. The needle-shaped intermetallics obtained from refiner addition became thicker and longer when the modifier was added. It was also found that α-Al and eutectic silicon phases preferentially nucleate on different types of intermetallic compounds. The higher the iron content of the intermetallic compounds, the greater the change in their dimensions. Formation of shrinkage porosities was also observed.

  16. Relativistic regimes in which Compton scattering doubly differential cross sections obtained from impulse approximation are accurate due to cancelation of errors

    NASA Astrophysics Data System (ADS)

    Lajohn, L. A.; Pratt, R. H.

    2015-05-01

    There is no simple parameter that can be used to predict when impulse approximation (IA) can yield accurate Compton scattering doubly differential cross sections (DDCS) in relativistic regimes. When Z is low, a small value of the parameter ⟨p⟩/q (where ⟨p⟩ is the average initial electron momentum and q is the momentum transfer) suffices. For small Z the photon-electron kinematic contribution described in relativistic S-matrix (SM) theory reduces to an expression, Xrel, which is present in the relativistic impulse approximation (RIA) formula for Compton DDCS. When Z is high, the S-matrix photon-electron kinematics no longer reduces to Xrel, and this, along with the error characterized by the magnitude of ⟨p⟩/q, contributes to the RIA error Δ. We demonstrate and illustrate in the form of contour plots that there are regimes of incident photon energy ωi and scattering angle θ in which the two types of errors at least partially cancel. Our calculations show that when θ is about 65° for Uranium K-shell scattering, Δ is less than 1% over an ωi range of 300 to 900 keV.

  17. False-positive myeloperoxidase binding activity due to DNA/anti-DNA antibody complexes: a source for analytical error in serologic evaluation of anti-neutrophil cytoplasmic autoantibodies.

    PubMed

    Jethwa, H S; Nachman, P H; Falk, R J; Jennette, J C

    2000-09-01

    Anti-myeloperoxidase antibodies (anti-MPO) are a major type of anti-neutrophil cytoplasmic antibody (ANCA). While evaluating anti-MPO monoclonal antibodies from SCG/Kj mice, we observed several hybridomas that appeared to react with both MPO and DNA. Sera from some patients with systemic lupus erythematosus (SLE) also react with MPO and DNA. We hypothesized that the MPO binding activity is a false-positive result due to the binding of DNA, contained within the antigen binding site of anti-DNA antibodies, to the cationic MPO. Antibodies from tissue culture supernatants from 'dual reactive' hybridomas were purified under high-salt conditions (3 M NaCl) to remove any antigen bound to antibody. The MPO and DNA binding activity were measured by ELISA. The MPO binding activity was completely abrogated while the DNA binding activity remained. The MPO binding activity was restored, in a dose-dependent manner, by the addition of increasing amount of calf-thymus DNA (CT-DNA) to the purified antibody. Sera from six patients with SLE that reacted with both MPO and DNA were treated with DNase and showed a decrease in MPO binding activity compared with untreated samples. MPO binding activity was observed when CT-DNA was added to sera from SLE patients that initially reacted with DNA but not with MPO. These results suggest that the DNA contained within the antigen binding site of anti-DNA antibodies could bind to the highly cationic MPO used as substrate antigen in immunoassays, resulting in a false-positive test.

  18. Analytical Calculation of Errors in Time and Value Perception Due to a Subjective Time Accumulator: A Mechanistic Model and the Generation of Weber's Law.

    PubMed

    Namboodiri, Vijay Mohan K; Mihalas, Stefan; Hussain Shuler, Marshall G

    2016-01-01

    It has been previously shown (Namboodiri, Mihalas, Marton, & Hussain Shuler, 2014) that an evolutionary theory of decision making and time perception is capable of explaining numerous behavioral observations regarding how humans and animals decide between differently delayed rewards of differing magnitudes and how they perceive time. An implementation of this theory using a stochastic drift-diffusion accumulator model (Namboodiri, Mihalas, & Hussain Shuler, 2014a) showed that errors in time perception and decision making approximately obey Weber's law for a range of parameters. However, prior calculations did not have a clear mechanistic underpinning. Further, these calculations were only approximate, with the range of parameters being limited. In this letter, we provide a full analytical treatment of such an accumulator model, along with a mechanistic implementation, to calculate the expression of these errors for the entirety of the parameter space. In our mechanistic model, Weber's law results from synaptic facilitation and depression within the feedback synapses of the accumulator. Our theory also makes the prediction that the steepness of temporal discounting can be affected by requiring the precise timing of temporal intervals. Thus, by presenting exact quantitative calculations, this work provides falsifiable predictions for future experimental testing.
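
    As a baseline illustration only (a plain diffusion-to-bound accumulator, not the letter's facilitation/depression mechanism), the sketch below shows how the timing error statistics are read off from first-passage times; note that this plain accumulator gives a coefficient of variation that falls as the timed interval grows, which is precisely why the authors invoke synaptic facilitation and depression in the feedback synapses to recover Weber's law:

```python
# Baseline sketch: first-passage statistics of a plain drift-diffusion accumulator.
# This is NOT the authors' mechanism; it only shows how the timing error statistics
# (mean, SD, and the Weber fraction SD/mean) are computed from threshold crossings.
import numpy as np

def first_passage_times(drift, noise, threshold, dt=2e-3, horizon=6.0, n_trials=1000, seed=0):
    """Threshold-crossing times of a drift-diffusion accumulator, simulated in bulk."""
    rng = np.random.default_rng(seed)
    n_steps = int(horizon / dt)
    increments = drift * dt + noise * np.sqrt(dt) * rng.standard_normal((n_trials, n_steps))
    paths = np.cumsum(increments, axis=1)
    crossed = paths >= threshold
    assert crossed.any(axis=1).all(), "increase the simulation horizon"
    return (crossed.argmax(axis=1) + 1) * dt

for threshold in (0.5, 1.0, 2.0):               # three different timed intervals
    t = first_passage_times(drift=1.0, noise=0.3, threshold=threshold)
    print(f"threshold {threshold}: mean {t.mean():.3f} s, SD {t.std():.3f} s, CV {t.std()/t.mean():.3f}")
```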

  19. A novel error detection due to joint CRC aided denoise-and-forward network coding for two-way relay channels.

    PubMed

    Cheng, Yulun; Yang, Longxiang

    2014-01-01

    In wireless two-way (TW) relay channels, denoise-and-forward (DNF) network coding (NC) is a promising technique to achieve spectral efficiency. However, unsuccessful detection at the relay severely degrades the diversity gain, as well as the end-to-end pairwise error probability (PEP). To handle this issue, a novel joint cyclic redundancy code (CRC) check method (JCRC) is proposed in this paper by exploiting the property of two NC-combined CRC codewords. Firstly, the detection probability bounds of the proposed method are derived to prove its efficiency in evaluating the reliability of NC signals. On that basis, three JCRC-aided TW DNF NC schemes are proposed, and the corresponding PEP performances are also derived. Numerical results reveal that JCRC-aided TW DNF NC has a similar PEP compared with the separate-CRC approach, while the complexity is halved. In addition, the results demonstrate that the proposed schemes outperform the conventional log-likelihood-ratio-threshold scheme.
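
    The joint check rests on the fact that a CRC defined by pure polynomial division (zero initial register, no final XOR) is linear over GF(2), so the CRC of the XOR of two equal-length packets equals the XOR of their CRCs. The sketch below demonstrates that property with a simplified CRC-8; it is not the authors' exact JCRC construction:

```python
# Demonstration of the linearity property exploited by joint CRC checking:
# for a CRC computed by pure polynomial division (zero init, no final XOR),
# crc(a XOR b) == crc(a) XOR crc(b) for equal-length messages.
# Simplified illustration only, not the paper's exact JCRC scheme.

def crc8(data: bytes, poly: int = 0x07) -> int:
    reg = 0
    for byte in data:
        reg ^= byte
        for _ in range(8):
            reg = ((reg << 1) ^ poly) & 0xFF if reg & 0x80 else (reg << 1) & 0xFF
    return reg

a = b"packet from terminal A"
b = b"packet from terminal B"                       # same length as a
xor_packet = bytes(x ^ y for x, y in zip(a, b))     # the combined (network-coded) packet

assert len(a) == len(b)
assert crc8(xor_packet) == crc8(a) ^ crc8(b)
print("joint CRC check passed:", hex(crc8(xor_packet)))
```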

  20. Human error in aviation operations

    NASA Technical Reports Server (NTRS)

    Nagel, David C.

    1988-01-01

    The role of human error in commercial and general aviation accidents and the techniques used to evaluate it are reviewed from a human-factors perspective. Topics addressed include the general decline in accidents per million departures since the 1960s, the increase in the proportion of accidents due to human error, methods for studying error, theoretical error models, and the design of error-resistant systems. Consideration is given to information acquisition and processing errors, visually guided flight, disorientation, instrument-assisted guidance, communication errors, decision errors, debiasing, and action errors.

  1. Large-scale compensation of errors in pairwise-additive empirical force fields: comparison of AMBER intermolecular terms with rigorous DFT-SAPT calculations.

    PubMed

    Zgarbová, Marie; Otyepka, Michal; Sponer, Jirí; Hobza, Pavel; Jurecka, Petr

    2010-09-21

    The intermolecular interaction energy components for several molecular complexes were calculated using force fields available in the AMBER suite of programs and compared with Density Functional Theory-Symmetry Adapted Perturbation Theory (DFT-SAPT) values. The extent to which such comparison is meaningful is discussed. The comparability is shown to depend strongly on the intermolecular distance, which means that comparisons made at one distance only are of limited value. At large distances the Coulombic and van der Waals 1/r⁶ empirical terms correspond fairly well with the DFT-SAPT electrostatics and dispersion terms, respectively. At the onset of electronic overlap the empirical values deviate from the reference values considerably. However, the errors in the force fields tend to cancel out in a systematic manner at equilibrium distances. Thus, the overall performance of the force fields displays errors an order of magnitude smaller than those of the individual interaction energy components. The repulsive 1/r¹² component of the van der Waals expression seems to be responsible for a significant part of the deviation of the force field results from the reference values. We suggest that further improvement of the force fields for intermolecular interactions would require replacement of the nonphysical 1/r¹² term by an exponential function. Dispersion anisotropy and its effects are discussed. Our analysis is intended to show that although comparing the empirical and non-empirical interaction energy components is in general problematic, it might bring insights useful for the construction of new force fields. Our results are relevant to often performed force-field-based interaction energy decompositions.
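
    To make the last point concrete, the sketch below evaluates a 12-6 Lennard-Jones wall against a Buckingham-type exp-6 form of the kind the authors suggest; all parameters are arbitrary illustrative values, not AMBER parameters or DFT-SAPT results:

```python
# Illustrative comparison of a 12-6 Lennard-Jones potential with an exp-6 (Buckingham-type)
# form; the parameters are arbitrary and only show how the repulsive walls differ in shape.
import numpy as np

eps, sig = 0.5, 3.4                        # kJ/mol and Angstrom, illustrative values
A, B, C = 3.5e5, 3.6, 4 * eps * sig**6     # exp-6 parameters, also illustrative

def lj(r):
    return 4 * eps * ((sig / r) ** 12 - (sig / r) ** 6)

def buckingham(r):
    return A * np.exp(-B * r) - C / r ** 6

for r in (3.0, 3.4, 3.8, 4.5, 6.0):
    print(f"r = {r:3.1f} A   LJ = {lj(r):8.3f}   exp-6 = {buckingham(r):8.3f} kJ/mol")
```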

  2. Growth enhancement of Picea abies trees under long-term, low-dose N addition is due to morphological more than to physiological changes.

    PubMed

    Krause, Kim; Cherubini, Paolo; Bugmann, Harald; Schleppi, Patrick

    2012-12-01

    Human activities have drastically increased nitrogen (N) inputs into natural and near-natural terrestrial ecosystems such that critical loads are now being exceeded in many regions of the world. This implies that these ecosystems are shifting from natural N limitation to eutrophication or even N saturation. This process is expected to modify the growth of forests and thus, along with management, to affect their carbon (C) sequestration. However, knowledge of the physiological mechanisms underlying tree response to N inputs, especially in the long term, is still lacking. In this study, we used tree-ring patterns and a dual stable isotope approach (δ¹³C and δ¹⁸O) to investigate tree growth responses and the underlying physiological reactions in a long-term, low-dose N addition experiment (+23 kg N ha⁻¹ a⁻¹). This experiment has been conducted for 14 years in a mountain Picea abies (L.) Karst. forest in Alptal, Switzerland, using a paired-catchment design. Tree stem C sequestration increased by ∼22%, with an N use efficiency (NUE) of ca. 8 kg additional C in tree stems per kg of N added. Neither earlywood nor latewood δ¹³C values changed significantly compared with the control, indicating that the intrinsic water use efficiency (WUEi) (A/gs) did not change due to N addition. Further, the isotopic signal of δ¹⁸O in early- and latewood showed no significant response to the treatment, indicating that neither stomatal conductance nor leaf-level photosynthesis changed significantly. Foliar analyses showed that needle N concentration significantly increased in the fourth to seventh treatment year, accompanied by increased dry mass and area per needle, and by increased tree height growth. Later, N concentration and height growth returned to nearly background values, while dry mass and area per needle remained high. Our results support the hypothesis that enhanced stem growth caused by N addition is mainly due to an increased leaf area index (LAI

  3. Pre- and post-experimental manipulation assessments confirm the increase in number of birds due to the addition of nest boxes.

    PubMed

    Cuatianquiz Lima, Cecilia; Macías Garcia, Constantino

    2016-01-01

    Secondary cavity nesting (SCN) birds breed in holes that they do not excavate themselves. This is possible where there are large trees whose size and age permit the digging of holes by primary excavators and only rarely happens in forest plantations, where we expected a deficit of both breeding holes and SCN species. We assessed whether the availability of tree cavities influenced the number of SCNs in two temperate forest types, and evaluated the change in number of SCNs after adding nest boxes. First, we counted all cavities within each of our 25-m radius sampling points in mature and young forest plots during 2009. We then added nest boxes at standardised locations during 2010 and 2011 and conducted fortnightly bird counts (January-October 2009-2011). In 2011 we added two extra plots of each forest type, where we also conducted bird counts. Prior to adding nest boxes, counts revealed more SCNs in mature than in young forest. Following the addition of nest boxes, the number of SCNs increased significantly in the points with nest boxes in both types of forest. Counts in 2011 confirmed the increase in number of birds due to the addition of nest boxes. Given the likely benefits associated with a richer bird community we propose that, as is routinely done in some countries, forest management programs preserve old tree stumps and add nest boxes to forest plantations in order to increase bird numbers and bird community diversity. PMID:26998410

  4. Pre- and post-experimental manipulation assessments confirm the increase in number of birds due to the addition of nest boxes

    PubMed Central

    Cuatianquiz Lima, Cecilia

    2016-01-01

    Secondary cavity nesting (SCN) birds breed in holes that they do not excavate themselves. This is possible where there are large trees whose size and age permit the digging of holes by primary excavators and only rarely happens in forest plantations, where we expected a deficit of both breeding holes and SCN species. We assessed whether the availability of tree cavities influenced the number of SCNs in two temperate forest types, and evaluated the change in number of SCNs after adding nest boxes. First, we counted all cavities within each of our 25-m radius sampling points in mature and young forest plots during 2009. We then added nest boxes at standardised locations during 2010 and 2011 and conducted fortnightly bird counts (January–October 2009–2011). In 2011 we added two extra plots of each forest type, where we also conducted bird counts. Prior to adding nest boxes, counts revealed more SCNs in mature than in young forest. Following the addition of nest boxes, the number of SCNs increased significantly in the points with nest boxes in both types of forest. Counts in 2011 confirmed the increase in number of birds due to the addition of nest boxes. Given the likely benefits associated with a richer bird community we propose that, as is routinely done in some countries, forest management programs preserve old tree stumps and add nest boxes to forest plantations in order to increase bird numbers and bird community diversity. PMID:26998410

  5. Are differences in genomic data sets due to true biological variants or errors in genome assembly: an example from two chloroplast genomes.

    PubMed

    Wu, Zhiqiang; Tembrock, Luke R; Ge, Song

    2015-01-01

    DNA sequencing has been revolutionized by the development of high-throughput sequencing technologies. Plummeting costs and the massive throughput capacities of second and third generation sequencing platforms have transformed many fields of biological research. Concurrently, new data processing pipelines made rapid de novo genome assemblies possible. However, high quality data are critically important for all investigations in the genomic era. We used chloroplast genomes of one Oryza species (O. australiensis) to compare differences in sequence quality: one genome (GU592209) was obtained through Illumina sequencing and reference-guided assembly and the other genome (KJ830774) was obtained via target enrichment libraries and shotgun sequencing. Based on the whole genome alignment, GU592209 was more similar to the reference genome (O. sativa: AY522330) with 99.2% sequence identity (SI value) compared with the 98.8% SI value of the KJ830774 genome; whereas the opposite result was obtained when the SI values in coding and noncoding regions of GU592209 and KJ830774 were compared. Additionally, the junctions of the two single copies and the repeat copies in the chloroplast genome exhibited differences. Phylogenetic analyses were conducted using these sequences, and the different data sets yielded dissimilar topologies: the phylogenetic placements of the two individuals were remarkably different based on whole genome sequencing or SNP data and insertions and deletions (indels) data. Thus, we concluded that the genomic composition of GU592209 was heterogeneous in coding and non-coding regions. These findings should impel biologists to carefully consider the quality of sequencing and assembly when working with next-generation data.
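
    The sequence identity (SI) values quoted above are simple ratios over a pairwise alignment (identical aligned positions divided by compared positions); a toy sketch, noting that gap-handling conventions differ between alignment tools:

```python
# Toy calculation of percent sequence identity from two aligned sequences.
# Columns where either sequence has a gap are ignored here, which is only one of
# several common conventions used by whole-genome alignment tools.
def percent_identity(aln1: str, aln2: str) -> float:
    pairs = [(a, b) for a, b in zip(aln1, aln2) if a != "-" and b != "-"]
    matches = sum(a == b for a, b in pairs)
    return 100.0 * matches / len(pairs)

print(percent_identity("ATGCT-GACTA", "ATGATCGACTA"))   # hypothetical aligned fragments
```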

  6. Refractive Errors

    MedlinePlus

    ... and lens of your eye helps you focus. Refractive errors are vision problems that happen when the shape ... cornea, or aging of the lens. Four common refractive errors are Myopia, or nearsightedness - clear vision close up ...

  7. Variation in mechanical behavior due to different build directions of Titanium6Aluminum4Vanadium fabricated by electron beam additive manufacturing technology

    NASA Astrophysics Data System (ADS)

    Roy, Lalit

    Titanium has always been a metal of great interest since its discovery, especially for critical applications, because of its excellent mechanical properties such as light weight (almost half that of steel), low density (4.4 g/cc) and high strength (comparable to steel). It forms a stable and adherent oxide layer on its surface upon exposure to air or water, which gives it great resistance to corrosion and has made it a good choice for structures in severely corrosive environments and in sea water. Its non-allergenic property has made it suitable for biomedical applications such as manufacturing implants. Having a very high melting temperature, it also has good potential for high-temperature applications. However, high production and processing costs have limited its application. Ti6Al4V is the most used titanium alloy, for which it has acquired the title of 'workhorse' of the Ti family. Additive Layer Manufacturing (ALM) has brought a revolution to manufacturing industries. This additive manufacturing approach has since developed into several methods and formed a family. It fabricates a product by adding layer after layer according to the geometry given as input to the system. Although the concept was initially developed for fabricating prototypes and tooling, its economic advantages, i.e., very little waste material, less machining, comparatively short production lead times and the obviation of machine tools, have drawn attention to its further development towards mass production. Electron Beam Melting (EBM) is the latest addition to the ALM family, developed by Arcam AB, located in Sweden. The electron beam that is used as the heat source melts metal powder to form layers. For this thesis work, three different types of specimens were fabricated using the EBM system. These specimens differ in the direction of layer addition. Mechanical properties such as ultimate tensile strength, elastic modulus and yield strength have been measured and compared with standard data

  8. 38 CFR 3.361 - Benefits under 38 U.S.C. 1151(a) for additional disability or death due to hospital care, medical...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS ADJUDICATION Pension, Compensation, and Dependency and Indemnity Compensation Ratings for Special Purposes § 3.361 Benefits under 38 U.S.C..., error in judgment, or similar instance of fault on VA's part in furnishing hospital care, medical...

  9. Error detection method

    DOEpatents

    Olson, Eric J.

    2013-06-11

    An apparatus, program product, and method that run an algorithm on a hardware based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to cause the hardware based processor to heat to a degree that increases the likelihood of hardware errors to manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run that may otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).
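
    A much-reduced illustration of the compare-against-reference idea in this abstract (not the patented implementation): run a deterministic, compute-heavy kernel repeatedly to load the processor and compare each run's checksum with a known-good value, flagging any mismatch as a possible hardware error.

```python
# Reduced illustration of output-comparison hardware error detection (not the patented method):
# a deterministic integer stress kernel is run repeatedly and its checksum compared to a reference.
import hashlib
import numpy as np

def stress_kernel(n: int = 256, reps: int = 10) -> str:
    """Deterministic integer matrix workload; returns a checksum of the result."""
    a = (np.arange(n * n, dtype=np.uint64) * np.uint64(2654435761)).reshape(n, n)
    acc = a.copy()
    for _ in range(reps):
        acc = (acc @ a) + np.uint64(1)   # uint64 arithmetic wraps modulo 2**64, so runs are bit-reproducible
    return hashlib.sha256(acc.tobytes()).hexdigest()

reference = stress_kernel()               # ideally computed once on known-good hardware
for run in range(5):
    if stress_kernel() != reference:
        print(f"run {run}: checksum mismatch -> possible hardware error")
        break
else:
    print("all runs matched the reference checksum")
```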

  10. Software error detection

    NASA Technical Reports Server (NTRS)

    Buechler, W.; Tucker, A. G.

    1981-01-01

    Several methods were employed to detect both the occurrence and source of errors in the operational software of the AN/SLQ-32, a large embedded real-time electronic warfare command and control system running on the ROLM 1606 computer. The ROLM computer provides information about invalid addressing, improper use of privileged instructions, stack overflows, and unimplemented instructions. Additionally, software techniques were developed to detect invalid jumps, indices out of range, infinite loops, stack underflows, and field size errors. Finally, data are saved to provide information about the status of the system when an error is detected. This information includes I/O buffers, interrupt counts, stack contents, and recently passed locations. The various errors detected, techniques to assist in debugging problems, and segment simulation on a nontarget computer are discussed. These error detection techniques were a major factor in the success of finding the primary cause of error in 98% of over 500 system dumps.

  11. Medication Errors

    MedlinePlus

    Medicines cure infectious diseases, prevent problems from chronic diseases, and ease pain. But medicines can also cause harmful reactions if not used ... You can help prevent errors by Knowing your medicines. Keep a list of the names of your ...

  12. Perceptual learning eases crowding by reducing recognition errors but not position errors.

    PubMed

    Xiong, Ying-Zi; Yu, Cong; Zhang, Jun-Yun

    2015-08-01

    When an observer reports a letter flanked by additional letters in the visual periphery, the response errors (the crowding effect) may result from failure to recognize the target letter (recognition errors), from mislocating a correctly recognized target letter at a flanker location (target misplacement errors), or from reporting a flanker as the target letter (flanker substitution errors). Crowding can be reduced through perceptual learning. However, it is not known how perceptual learning operates to reduce crowding. In this study we trained observers with a partial-report task (Experiment 1), in which they reported the central target letter of a three-letter string presented in the visual periphery, or a whole-report task (Experiment 2), in which they reported all three letters in order. We then assessed the impact of training on recognition of both unflanked and flanked targets, with particular attention to how perceptual learning affected the types of errors. Our results show that training improved target recognition but not single-letter recognition, indicating that training indeed affected crowding. However, training did not reduce target misplacement errors or flanker substitution errors. This dissociation between target recognition and flanker substitution errors supports the view that flanker substitution may be more likely a by-product (due to response bias), rather than a cause, of crowding. Moreover, the dissociation is not consistent with hypothesized mechanisms of crowding that would predict reduced positional errors.

  13. Error-related electrocorticographic activity in humans during continuous movements.

    PubMed

    Milekovic, Tomislav; Ball, Tonio; Schulze-Bonhage, Andreas; Aertsen, Ad; Mehring, Carsten

    2012-04-01

    Brain-machine interface (BMI) devices make errors in decoding. Detecting these errors online from neuronal activity can improve BMI performance by modifying the decoding algorithm and by correcting the errors made. Here, we study the neuronal correlates of two different types of errors which can both be employed in BMI: (i) the execution error, due to inaccurate decoding of the subjects' movement intention; (ii) the outcome error, due to not achieving the goal of the movement. We demonstrate that, in electrocorticographic (ECoG) recordings from the surface of the human brain, strong error-related neural responses (ERNRs) for both types of errors can be observed. ERNRs were present in the low and high frequency components of the ECoG signals, with both signal components carrying partially independent information. Moreover, the observed ERNRs can be used to discriminate between error types, with high accuracy (≥83%) obtained already from single electrode signals. We found ERNRs in multiple cortical areas, including motor and somatosensory cortex. As the motor cortex is the primary target area for recording control signals for a BMI, an adaptive motor BMI utilizing these error signals may not require additional electrode implants in other brain areas.

  14. Ferrite Formation Dynamics and Microstructure Due to Inclusion Engineering in Low-Alloy Steels by Ti2O3 and TiN Addition

    NASA Astrophysics Data System (ADS)

    Mu, Wangzhong; Shibata, Hiroyuki; Hedström, Peter; Jönsson, Pär Göran; Nakajima, Keiji

    2016-08-01

    The dynamics of intragranular ferrite (IGF) formation in inclusion engineered steels with either Ti2O3 or TiN addition were investigated using in situ high temperature confocal laser scanning microscopy. Furthermore, the chemical composition of the inclusions and the final microstructure after continuous cooling transformation were investigated using electron probe microanalysis and electron backscatter diffraction, respectively. It was found that there is a significant effect of the chemical composition of the inclusions, the cooling rate, and the prior austenite grain size on the phase fractions and the starting temperatures of IGF and grain boundary ferrite (GBF). The fraction of IGF is larger in the steel with Ti2O3 addition compared to the steel with TiN addition after the same thermal cycle has been imposed. The reason for this difference is the higher potency of the TiOx phase as nucleation sites for IGF formation compared to the TiN phase, which was supported by calculations using classical nucleation theory. The IGF fraction increases with increasing prior austenite grain size, and in both steels it was highest for the intermediate cooling rate of 70 °C/min, at which competing phase transformations were avoided; the structure of the IGF was, however, refined with increasing cooling rate. Finally, the starting temperatures of IGF and GBF decrease with increasing cooling rate; the starting temperature of GBF also decreases with increasing grain size, while the starting temperature of IGF remains constant irrespective of grain size.

  15. Uncorrected refractive errors.

    PubMed

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low- and middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting the educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship. PMID:22944755

  16. The test and treatment methods of benign paroxysmal positional vertigo and an addition to the management of vertigo due to the superior vestibular canal (BPPV-SC).

    PubMed

    Rahko, T

    2002-10-01

    A review of the tests and treatment manoeuvres for benign paroxysmal positional vertigo of the posterior, horizontal and superior vestibular canals is presented. Additionally, a new way to test and treat positional vertigo of the superior vestibular canal is presented. In a prospective study, 57 out of 305 patients' visits are reported. They had residual symptoms and dizziness after the test and the treatment of benign paroxysmal positional vertigo of the horizontal canal (BPPV-HC) and posterior canal (PC). They were tested with a new test and treated with a new manoeuvre for superior canal benign paroxysmal positional vertigo (BPPV-SC). Results for vertigo in 53 patients were good; motion sickness and acrophobia disappeared. Reactive neck tension to BPPV was relieved. Older people were numerous among patients and their quality of life (QOL) improved.

  17. Short-term salivary acetaldehyde increase due to direct exposure to alcoholic beverages as an additional cancer risk factor beyond ethanol metabolism

    PubMed Central

    2011-01-01

    Background An increasing body of evidence now implicates acetaldehyde as a major underlying factor for the carcinogenicity of alcoholic beverages and especially for oesophageal and oral cancer. Acetaldehyde associated with alcohol consumption is regarded as 'carcinogenic to humans' (IARC Group 1), with sufficient evidence available for the oesophagus, head and neck as sites of carcinogenicity. At present, research into the mechanistic aspects of acetaldehyde-related oral cancer has been focused on salivary acetaldehyde that is formed either from ethanol metabolism in the epithelia or from microbial oxidation of ethanol by the oral microflora. This study was conducted to evaluate the role of the acetaldehyde that is found as a component of alcoholic beverages as an additional factor in the aetiology of oral cancer. Methods Salivary acetaldehyde levels were determined in the context of sensory analysis of different alcoholic beverages (beer, cider, wine, sherry, vodka, calvados, grape marc spirit, tequila, cherry spirit), without swallowing, to exclude systemic ethanol metabolism. Results The rinsing of the mouth for 30 seconds with an alcoholic beverage is able to increase salivary acetaldehyde above levels previously judged to be carcinogenic in vitro, with levels up to 1000 μM in cases of beverages with extreme acetaldehyde content. In general, the highest salivary acetaldehyde concentration was found in all cases in the saliva 30 sec after using the beverages (average 353 μM). The average concentration then decreased at the 2-min (156 μM), 5-min (76 μM) and 10-min (40 μM) sampling points. The salivary acetaldehyde concentration depends primarily on the direct ingestion of acetaldehyde contained in the beverages at the 30-sec sampling, while the influence of the metabolic formation from ethanol becomes the major factor at the 2-min sampling point. Conclusions This study offers a plausible mechanism to explain the increased risk for oral cancer associated with

  18. Examining food additives and spices for their anti-oxidant ability to counteract oxidative damage due to chronic exposure to free radicals from environmental pollutants

    NASA Astrophysics Data System (ADS)

    Martinez, Raul A., III

    The main objective of this work was to examine food additives and spices (from the Apiaceae family) to determine their antioxidant properties to counteract oxidative stress (damage) caused by environmental pollutants. Environmental pollutants generate reactive oxygen species and reactive nitrogen species. Star anise essential oil showed lower antioxidant activity than the extracts in DPPH scavenging assays. Dill seed -- Anethum graveolens: the monoterpene components of dill were shown to activate the enzyme glutathione-S-transferase, which helps attach the antioxidant molecule glutathione to oxidized molecules that would otherwise do damage in the body. The antioxidant activity of extracts of dill was comparable with ascorbic acid, alpha-tocopherol, and quercetin in in-vitro systems. Black cumin -- Nigella sativa: the extracts were evaluated using the 1,1-diphenyl-2-picrylhydrazyl (DPPH) radical scavenging assay. Positive correlations were found between the total phenolic content in the black cumin extracts and their antioxidant activities. Caraway -- Carum carvi: the antioxidant activity was evaluated by the scavenging of DPPH; caraway showed strong antioxidant activity. Cumin -- Cuminum cyminum: the major polyphenolics were extracted and separated by HPTLC, and the antioxidant activity of the cumin extract was tested by DPPH free-radical scavenging. Coriander -- Coriandrum sativum: the antioxidant and free-radical-scavenging properties of the seeds were studied, and it was also investigated whether the administration of the seeds curtails oxidative stress. Coriander seed powder not only inhibited the process of peroxidative damage, but also significantly reactivated the antioxidant enzymes and antioxidant levels. The seeds also showed scavenging activity against superoxides and hydroxyl radicals. The total polyphenolic content of the seeds was found to be 12.2 gallic acid equivalents (GAE)/g while the total flavonoid content

  20. Medical error and related factors during internship and residency.

    PubMed

    Ahmadipour, Habibeh; Nahid, Mortazavi

    2015-01-01

    It is difficult to determine the real incidence of medical errors due to the lack of a precise definition of errors, as well as the failure to report them under certain circumstances. We carried out a cross-sectional study at Kerman University of Medical Sciences, Iran, in 2013. The participants were selected through the census method. The data were collected using a self-administered questionnaire, which consisted of questions on the participants' demographic data and questions on the medical errors committed. The data were analysed using SPSS 19. It was found that 270 participants had committed medical errors. There was no significant difference in the frequency of errors committed by interns and residents. In the case of residents, the most common error was misdiagnosis, and in that of interns, errors related to history-taking and physical examination. Considering that medical errors are common in the clinical setting, the education system should train interns and residents to prevent the occurrence of errors. In addition, the system should develop a positive attitude among them so that they can deal better with medical errors.

  1. Triangulation Error Analysis for the Barium Ion Cloud Experiment. M.S. Thesis - North Carolina State Univ.

    NASA Technical Reports Server (NTRS)

    Long, S. A. T.

    1973-01-01

    The triangulation method developed specifically for the Barium Ion Cloud Project is discussed. Expressions for the four displacement errors, the three slope errors, and the curvature error in the triangulation solution due to a probable error in the lines-of-sight from the observation stations to points on the cloud are derived. The triangulation method is then used to determine the effect of the following on these different errors in the solution: the number and location of the stations, the observation duration, east-west cloud drift, the number of input data points, and the addition of extra cameras to one of the stations. The pointing displacement errors and the pointing slope errors are compared. The displacement errors in the solution due to a probable error in the position of a moving station, plus the weighting factors for the data from the moving station, are also determined.

  2. Error Modeling of Multi-baseline Optical Truss. Part II; Application to SIM Metrology Truss Field Dependent Error

    NASA Technical Reports Server (NTRS)

    Zhang, Liwei Dennis; Milman, Mark; Korechoff, Robert

    2004-01-01

    The current design of the Space Interferometry Mission (SIM) employs a 19 laser-metrology-beam system (also called the L19 external metrology truss) to monitor changes of distances between the fiducials of the flight system's multiple baselines. The function of the external metrology truss is to aid in the determination of the time-variations of the interferometer baseline. The largest contributor to truss error occurs in SIM wide-angle observations when the articulation of the siderostat mirrors (in order to gather starlight from different sky coordinates) brings to light systematic errors due to offsets at the level of instrument components (which include corner cube retro-reflectors, etc.). This error is labeled external metrology wide-angle field-dependent error. A physics-based model of the field-dependent error at the single metrology gauge level is developed and linearly propagated to errors in interferometer delay. In this manner, delay error sensitivity to various error parameters or their combinations can be studied using eigenvalue/eigenvector analysis. Validation of the physics-based field-dependent model on the SIM testbed also lends support to the present approach. As a first example, a dihedral error model is developed for the corner cubes (CC) attached to the siderostat mirrors. The delay errors due to this effect can then be characterized using the eigenvectors of the composite CC dihedral error. The essence of the linear error model is contained in an error-mapping matrix. A corresponding Zernike component matrix approach is developed in parallel, first for convenience of describing the RMS of errors across the field-of-regard (FOR), and second for convenience of combining with additional models. Average and worst case residual errors are computed when various orders of field-dependent terms are removed from the delay error. Results of the residual errors are important in arriving at external metrology system component requirements. Double CCs with ideally co-incident vertices
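
    The propagation step described above -- mapping component-level error parameters to delay errors through a linear error-mapping matrix and then ranking parameter combinations by eigenvalue/eigenvector analysis -- can be illustrated with a generic numerical sketch. This is not the SIM model itself; the matrix, its dimensions, and the parameter interpretation below are hypothetical stand-ins.

```python
import numpy as np

# Hypothetical linear error model: delay errors d (sampled over the field of regard)
# are a linear map of component error parameters p (e.g., corner-cube dihedral errors),
#   d = M @ p,
# where M stands in for a physics-based error-mapping matrix (Jacobian).
rng = np.random.default_rng(0)
n_samples, n_params = 200, 6                       # field samples x error parameters
M = rng.normal(size=(n_samples, n_params))

def rms_delay_error(M, p):
    """RMS delay error across the field for a given parameter vector p."""
    d = M @ p
    return np.sqrt(np.mean(d**2))

# Eigen-analysis of M^T M / n ranks unit-norm parameter combinations by RMS impact.
evals, evecs = np.linalg.eigh(M.T @ M / n_samples)
worst = evecs[:, -1]        # most damaging parameter combination
best = evecs[:, 0]          # least damaging parameter combination
print("RMS delay error, worst direction:", rms_delay_error(M, worst))
print("RMS delay error, best direction :", rms_delay_error(M, best))
```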

  3. Quantum Error Correction

    NASA Astrophysics Data System (ADS)

    Lidar, Daniel A.; Brun, Todd A.

    2013-09-01

    Prologue; Preface; Part I. Background: 1. Introduction to decoherence and noise in open quantum systems Daniel Lidar and Todd Brun; 2. Introduction to quantum error correction Dave Bacon; 3. Introduction to decoherence-free subspaces and noiseless subsystems Daniel Lidar; 4. Introduction to quantum dynamical decoupling Lorenza Viola; 5. Introduction to quantum fault tolerance Panos Aliferis; Part II. Generalized Approaches to Quantum Error Correction: 6. Operator quantum error correction David Kribs and David Poulin; 7. Entanglement-assisted quantum error-correcting codes Todd Brun and Min-Hsiu Hsieh; 8. Continuous-time quantum error correction Ognyan Oreshkov; Part III. Advanced Quantum Codes: 9. Quantum convolutional codes Mark Wilde; 10. Non-additive quantum codes Markus Grassl and Martin Rötteler; 11. Iterative quantum coding systems David Poulin; 12. Algebraic quantum coding theory Andreas Klappenecker; 13. Optimization-based quantum error correction Andrew Fletcher; Part IV. Advanced Dynamical Decoupling: 14. High order dynamical decoupling Zhen-Yu Wang and Ren-Bao Liu; 15. Combinatorial approaches to dynamical decoupling Martin Rötteler and Pawel Wocjan; Part V. Alternative Quantum Computation Approaches: 16. Holonomic quantum computation Paolo Zanardi; 17. Fault tolerance for holonomic quantum computation Ognyan Oreshkov, Todd Brun and Daniel Lidar; 18. Fault tolerant measurement-based quantum computing Debbie Leung; Part VI. Topological Methods: 19. Topological codes Héctor Bombín; 20. Fault tolerant topological cluster state quantum computing Austin Fowler and Kovid Goyal; Part VII. Applications and Implementations: 21. Experimental quantum error correction Dave Bacon; 22. Experimental dynamical decoupling Lorenza Viola; 23. Architectures Jacob Taylor; 24. Error correction in quantum communication Mark Wilde; Part VIII. Critical Evaluation of Fault Tolerance: 25. Hamiltonian methods in QEC and fault tolerance Eduardo Novais, Eduardo Mucciolo and

  4. Errors in clinical laboratories or errors in laboratory medicine?

    PubMed

    Plebani, Mario

    2006-01-01

    Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes

  5. Compensation of optode sensitivity and position errors in diffuse optical tomography using the approximation error approach.

    PubMed

    Mozumder, Meghdoot; Tarvainen, Tanja; Arridge, Simon R; Kaipio, Jari; Kolehmainen, Ville

    2013-01-01

    Diffuse optical tomography is highly sensitive to measurement and modeling errors. Errors in the source and detector coupling and positions can cause significant artifacts in the reconstructed images. Recently the approximation error theory has been proposed to handle modeling errors. In this article, we investigate the feasibility of the approximation error approach to compensate for modeling errors due to inaccurately known optode locations and coupling coefficients. The approach is evaluated with simulations. The results show that the approximation error method can be used to recover from artifacts in reconstructed images due to optode coupling and position errors.
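
    A minimal linear-Gaussian sketch of the approximation error approach follows: the statistics of the modeling error are estimated from samples of the difference between an accurate and an approximate forward model, and are then folded into the observation noise model used for inversion. The forward models, dimensions, and noise levels below are invented stand-ins, not the diffuse optical tomography model of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_meas, n_pix = 40, 20

# "Accurate" and "approximate" linear forward models; their mismatch stands in for
# modeling error due to, e.g., inaccurately known optode positions and couplings.
A_acc = rng.normal(size=(n_meas, n_pix))
A_apx = A_acc + 0.05 * rng.normal(size=(n_meas, n_pix))

# 1) Estimate approximation-error statistics from prior samples of the unknown x.
prior_std = 1.0
X = prior_std * rng.normal(size=(n_pix, 500))
E = (A_acc - A_apx) @ X                  # samples of the approximation error
e_mean = E.mean(axis=1)
Gamma_e = np.cov(E)                      # approximation-error covariance

# 2) Simulate data with the accurate model plus measurement noise.
x_true = prior_std * rng.normal(size=n_pix)
noise_std = 0.01
y = A_acc @ x_true + noise_std * rng.normal(size=n_meas)

# 3) MAP estimates with the approximate model: conventional vs. enhanced noise model.
Gamma_n = noise_std**2 * np.eye(n_meas)
Gamma_x = prior_std**2 * np.eye(n_pix)

def map_estimate(y, A, Gamma_obs, offset=0.0):
    # Linear-Gaussian MAP: (A^T G^-1 A + Gx^-1)^-1 A^T G^-1 (y - offset)
    Gi = np.linalg.inv(Gamma_obs)
    H = A.T @ Gi @ A + np.linalg.inv(Gamma_x)
    return np.linalg.solve(H, A.T @ Gi @ (y - offset))

x_conv = map_estimate(y, A_apx, Gamma_n)                    # ignores modeling error
x_aem = map_estimate(y, A_apx, Gamma_n + Gamma_e, e_mean)   # approximation error model
print("reconstruction error, conventional       :", np.linalg.norm(x_conv - x_true))
print("reconstruction error, approx.-error model:", np.linalg.norm(x_aem - x_true))
```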

  6. Compact disk error measurements

    NASA Technical Reports Server (NTRS)

    Howe, D.; Harriman, K.; Tehranchi, B.

    1993-01-01

    The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard decision (i.e., 1-bit error flags) and soft decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross-Interleaved Reed-Solomon Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.
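
    A minimal sketch of the first objective -- collecting error-burst and good-data-gap statistics from a per-byte error-flag stream -- is shown below, assuming the read channel supplies one error flag per byte. The flag stream here is synthetic.

```python
from itertools import groupby
import random

random.seed(0)
# Synthetic per-byte error flags (1 = byte in error), standing in for read-channel output.
flags = [1 if random.random() < 0.02 else 0 for _ in range(100_000)]

# Run-length encode the flag stream: bursts are runs of 1s, gaps are runs of 0s.
bursts, gaps = [], []
for value, run in groupby(flags):
    length = sum(1 for _ in run)
    (bursts if value == 1 else gaps).append(length)

def histogram(runs):
    h = {}
    for length in runs:
        h[length] = h.get(length, 0) + 1
    return dict(sorted(h.items()))

print("burst-length histogram:", histogram(bursts))
print("longest good-data gap :", max(gaps))
```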

  7. Discretization vs. Rounding Error in Euler's Method

    ERIC Educational Resources Information Center

    Borges, Carlos F.

    2011-01-01

    Euler's method for solving initial value problems is an excellent vehicle for observing the relationship between discretization error and rounding error in numerical computation. Reductions in stepsize, in order to decrease discretization error, necessarily increase the number of steps and so introduce additional rounding error. The problem is…
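
    The trade-off described above can be reproduced with a short experiment: shrinking the stepsize reduces discretization error but increases the number of steps and hence the accumulated rounding error. The test problem y' = y, y(0) = 1 below is a standard choice (not necessarily the article's), and single precision is used to make rounding error visible.

```python
import numpy as np

def euler(f, y0, t_end, n_steps, dtype=np.float32):
    """Fixed-step Euler method carried out in the given floating-point precision."""
    h = dtype(t_end) / dtype(n_steps)
    y = dtype(y0)
    for _ in range(n_steps):
        y = dtype(y + h * dtype(f(y)))
    return y

exact = np.exp(1.0)  # y' = y, y(0) = 1  =>  y(1) = e
for n in [10, 100, 1_000, 10_000, 100_000, 1_000_000]:
    y = euler(lambda y: y, 1.0, 1.0, n)
    print(f"n = {n:>9d}   global error = {abs(float(y) - exact):.3e}")
# The error first shrinks roughly like 1/n (discretization error), then stalls or
# grows as single-precision rounding error from the many extra steps takes over.
```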

  8. Cirrus cloud retrieval using infrared sounding data: Multilevel cloud errors

    NASA Technical Reports Server (NTRS)

    Baum, Bryan A.; Wielicki, Bruce A.

    1994-01-01

    In this study we perform an error analysis for cloud-top pressure retrieval using the High-Resolution Infrared Radiometric Sounder (HIRS/2) 15-micron CO2 channels for the two-layer case of transmissive cirrus overlying an overcast, opaque stratiform cloud. This analysis includes standard deviation and bias error due to instrument noise and the presence of two cloud layers, the lower of which is opaque. Instantaneous cloud pressure retrieval errors are determined for a range of cloud amounts (0.1-1.0) and cloud-top pressures (850-250 mb). Large cloud-top pressure retrieval errors are found to occur when a lower opaque layer is present underneath an upper transmissive cloud layer in the satellite field of view (FOV). Errors tend to increase with decreasing upper-cloud effective cloud amount and with decreasing cloud height (increasing pressure). Errors in retrieved upper-cloud pressure result in corresponding errors in derived effective cloud amount. For the case in which a HIRS FOV has two distinct cloud layers, the difference between the retrieved and actual cloud-top pressure is positive in all cases, meaning that the retrieved upper-cloud height is lower than the actual upper-cloud height. In addition, errors in retrieved cloud pressure are found to depend upon the lapse rate between the low-level cloud top and the surface. We examined which sounder channel combinations would minimize the total errors in derived cirrus cloud height caused by instrument noise and by the presence of a lower-level cloud. We find that while the sounding channels that peak between 700 and 1000 mb minimize random errors, the sounding channels that peak at 300-500 mb minimize bias errors. For a cloud climatology, the bias errors are most critical.

  9. [Diagnostic Errors in Medicine].

    PubMed

    Buser, Claudia; Bankova, Andriyana

    2015-12-01

    The recognition of diagnostic errors in everyday practice can help improve patient safety. The most common diagnostic errors are the cognitive errors, followed by system-related errors and no fault errors. The cognitive errors often result from mental shortcuts, known as heuristics. The rate of cognitive errors can be reduced by a better understanding of heuristics and the use of checklists. The autopsy as a retrospective quality assessment of clinical diagnosis has a crucial role in learning from diagnostic errors. Diagnostic errors occur more often in primary care in comparison to hospital settings. On the other hand, the inpatient errors are more severe than the outpatient errors.

  10. [Diagnostic Errors in Medicine].

    PubMed

    Buser, Claudia; Bankova, Andriyana

    2015-12-01

    The recognition of diagnostic errors in everyday practice can help improve patient safety. The most common diagnostic errors are the cognitive errors, followed by system-related errors and no fault errors. The cognitive errors often result from mental shortcuts, known as heuristics. The rate of cognitive errors can be reduced by a better understanding of heuristics and the use of checklists. The autopsy as a retrospective quality assessment of clinical diagnosis has a crucial role in learning from diagnostic errors. Diagnostic errors occur more often in primary care in comparison to hospital settings. On the other hand, the inpatient errors are more severe than the outpatient errors. PMID:26649954

  11. Sepsis: Medical errors in Poland.

    PubMed

    Rorat, Marta; Jurek, Tomasz

    2016-01-01

    Health, safety and medical errors are currently the subject of worldwide discussion. The authors analysed medico-legal opinions trying to determine types of medical errors and their impact on the course of sepsis. The authors carried out a retrospective analysis of 66 medico-legal opinions issued by the Wroclaw Department of Forensic Medicine between 2004 and 2013 (at the request of the prosecutor or court) in cases examined for medical errors. Medical errors were confirmed in 55 of the 66 medico-legal opinions. The age of victims varied from 2 weeks to 68 years; 49 patients died. The analysis revealed medical errors committed by 113 health-care workers: 98 physicians, 8 nurses and 8 emergency medical dispatchers. In 33 cases, an error was made before hospitalisation. Hospital errors occurred in 35 victims. Diagnostic errors were discovered in 50 patients, including 46 cases of sepsis being incorrectly recognised and insufficient diagnoses in 37 cases. Therapeutic errors occurred in 37 victims, organisational errors in 9 and technical errors in 2. In addition to sepsis, 8 patients also had a severe concomitant disease and 8 had a chronic disease. In 45 cases, the authors observed glaring errors, which could incur criminal liability. There is an urgent need to introduce a system for reporting and analysing medical errors in Poland. The development and popularisation of standards for identifying and treating sepsis across basic medical professions is essential to improve patient safety and survival rates. Procedures should be introduced to prevent health-care workers from administering incorrect treatment in cases.

  12. [Medical device use errors].

    PubMed

    Friesdorf, Wolfgang; Marsolek, Ingo

    2008-01-01

    Medical devices define our everyday patient treatment processes. But despite their beneficial effect, every use can also lead to harm. Use errors are thus often explained by human failure. But human errors can never be completely eliminated, especially in complex work processes like those in medicine, which often involve time pressure. Therefore we need error-tolerant work systems in which potential problems are identified and solved as early as possible. In this context human engineering uses the TOP principle: technological before organisational and then person-related solutions. But especially in everyday medical work we realise that error-prone usability concepts can often only be counterbalanced by organisational or person-related measures. Thus human failure is pre-programmed. In addition, many medical work places represent a somewhat chaotic accumulation of individual devices with totally different user interaction concepts. There is not only a lack of holistic work place concepts, but of holistic process and system concepts as well. However, this can only be achieved through the co-operation of producers, healthcare providers and clinical users, by systematically analyzing and iteratively optimizing the underlying treatment processes from both a technological and organizational perspective. What we need is a joint platform like medilab V of the TU Berlin, in which the entire medical treatment chain can be simulated in order to discuss, experiment and model--a key to a safe and efficient healthcare system of the future. PMID:19213452

  13. Tropical errors and convection

    NASA Astrophysics Data System (ADS)

    Bechtold, P.; Bauer, P.; Engelen, R. J.

    2012-12-01

    Tropical convection is analysed in the ECMWF Integrated Forecast System (IFS) through tropical errors and their evolution during the last decade as a function of model resolution and model changes. As the characterization of these errors is particularly difficult over tropical oceans due to sparse in situ upper-air data, more weight compared to the middle latitudes is given in the analysis to the underlying forecast model. Therefore, special attention is paid to available near-surface observations and to comparison with analyses from other centres. There is a systematic lack of low-level wind convergence in the Intertropical Convergence Zone (ITCZ) in the IFS, leading to a spindown of the Hadley cell. Critical areas with strong cross-equatorial flow and large wind errors are the Indian Ocean, with large interannual variations in forecast errors, and the East Pacific, with persistent systematic errors that have evolved little during the last decade. The analysis quality in the East Pacific is affected by observation errors inherent to the atmospheric motion vector wind product. The model's tropical climate and its variability and teleconnections are also evaluated, with a particular focus on the Madden-Julian Oscillation (MJO) during the Year of Tropical Convection (YOTC). The model is shown to reproduce the observed tropical large-scale wave spectra and teleconnections, but overestimates the precipitation during the South-East Asian summer monsoon. The recent improvements in tropical precipitation, convectively coupled wave and MJO predictability are shown to be strongly related to improvements in the convection parameterization that realistically represents the convection sensitivity to environmental moisture, and the large-scale forcing due to the use of strong entrainment and a variable adjustment time-scale. There is however a remaining slight moistening tendency and low-level wind imbalance in the model that is responsible for the Asian Monsoon bias and for too

  14. Modular error embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Ettinger, J. Mark

    1999-01-01

    A method of embedding auxiliary information into the digital representation of host data containing noise in the low-order bits. The method applies to digital data representing analog signals, for example digital images. The method reduces the error introduced by other methods that replace the low-order bits with auxiliary information. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user through use of a digital key. The modular error embedding method includes a process to permute the order in which the host data values are processed. The method doubles the amount of auxiliary information that can be added to host data values, in comparison with bit-replacement methods for high bit-rate coding. The invention preserves human perception of the meaning and content of the host data, permitting the addition of auxiliary data in the amount of 50% or greater of the original host data.

  15. Sun compass error model

    NASA Technical Reports Server (NTRS)

    Blucker, T. J.; Ferry, W. W.

    1971-01-01

    An error model is described for the Apollo 15 sun compass, a contingency navigational device. Field test data are presented along with significant results of the test. The errors reported include a random error resulting from tilt in leveling the sun compass, a random error because of observer sighting inaccuracies, a bias error because of mean tilt in compass leveling, a bias error in the sun compass itself, and a bias error because the device is leveled to the local terrain slope.

  16. Interpolation Errors in Spectrum Analyzers

    NASA Technical Reports Server (NTRS)

    Martin, J. L.

    1996-01-01

    To obtain the proper measurement amplitude with a spectrum analyzer, the correct frequency-dependent transducer factor must be added to the voltage measured by the transducer. This report examines how entering transducer factors into a spectrum analyzer can cause significant errors in field amplitude due to the misunderstanding of the analyzer's interpolation methods. It also discusses how to reduce these errors to obtain a more accurate field amplitude reading.
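
    A toy example of the effect described: if a transducer factor entered at a few frequencies is interpolated by the analyzer differently than the user assumes (say, linearly in frequency rather than linearly in log-frequency), the correction applied between the entered points is wrong and the displayed field amplitude is off by the difference. The factor values below are made up for illustration.

```python
import numpy as np

# Transducer factor (dB) entered at two frequencies; the true factor is assumed to
# vary linearly with log-frequency between them (a common antenna-factor behaviour).
f1, f2 = 30e6, 300e6          # Hz
k1, k2 = 10.0, 22.0           # dB

f = 100e6                     # measurement frequency between the entered points

# Interpolation linear in frequency (one possible analyzer behaviour):
k_linear = k1 + (k2 - k1) * (f - f1) / (f2 - f1)
# Interpolation linear in log-frequency (what the user might expect):
k_log = k1 + (k2 - k1) * np.log10(f / f1) / np.log10(f2 / f1)

print(f"factor, linear in f      : {k_linear:.2f} dB")
print(f"factor, linear in log(f) : {k_log:.2f} dB")
print(f"field amplitude error from the mismatch: {abs(k_linear - k_log):.2f} dB")
```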

  17. Error in radiology.

    PubMed

    Goddard, P; Leslie, A; Jones, A; Wakeley, C; Kabala, J

    2001-10-01

    The level of error in radiology has been tabulated from articles on error and on "double reporting" or "double reading". The level of error varies depending on the radiological investigation, but the range is 2-20% for clinically significant or major error. The greatest reduction in error rates will come from changes in systems.

  18. Radar error statistics for the space shuttle

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1979-01-01

    Radar error statistics for C-band and S-band radars recommended for use with the ground tracking programs to process space shuttle tracking data are presented. The statistics are divided into two parts: bias error statistics, using the subscript B, and high frequency error statistics, using the subscript q. Bias errors may be slowly varying to constant. High frequency random errors (noise) are rapidly varying and may or may not be correlated from sample to sample. Bias errors were mainly due to hardware defects and to errors in correction for atmospheric refraction effects. High frequency noise was mainly due to hardware and due to atmospheric scintillation. Three types of atmospheric scintillation were identified: horizontal, vertical, and line of sight. This was the first time that horizontal and line of sight scintillations were identified.

  19. Hybrid Models for Trajectory Error Modelling in Urban Environments

    NASA Astrophysics Data System (ADS)

    Angelatsa, E.; Parés, M. E.; Colomina, I.

    2016-06-01

    This paper tackles the first step of any strategy aiming to improve the trajectory of terrestrial mobile mapping systems in urban environments. We present an approach to model the error of terrestrial mobile mapping trajectories, combining deterministic and stochastic models. Due to the specific urban environment, the deterministic component will be modelled with non-continuous functions composed of linear shifts, drifts or polynomial functions. In addition, we will introduce a stochastic error component for modelling the residual noise of the trajectory error function. The first step of error modelling requires knowing the actual trajectory error values for several representative environments. In order to determine the trajectory errors as accurately as possible, (almost) error-free reference trajectories should be estimated using non-semantic features extracted from a sequence of images collected with the terrestrial mobile mapping system and from a full set of ground control points. Once the references are estimated, they will be used to determine the actual errors in the terrestrial mobile mapping trajectory. The rigorous analysis of these data sets will allow us to characterize the errors of a terrestrial mobile mapping system for a wide range of environments. This information will be of great use in future campaigns to improve the results of the 3D point cloud generation. The proposed approach has been evaluated using real data. The data originate from a mobile mapping campaign over an urban and controlled area of Dortmund (Germany), with harmful GNSS conditions. The mobile mapping system, which includes two laser scanners and two cameras, was mounted on a van and driven over a controlled area for around three hours. The results show the suitability of decomposing the trajectory error into non-continuous deterministic and stochastic components.
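
    A minimal sketch of the proposed decomposition, under the assumption that segment boundaries (e.g., GNSS outage intervals) are known: the trajectory error series is split into segments, a shift-plus-drift line is fitted per segment as the deterministic component, and the residual is characterized as the stochastic component. The data and boundaries here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(0.0, 300.0, 1.0)                        # time in seconds

# Synthetic trajectory error: piecewise shift + drift (deterministic) plus noise.
segments = [(0, 100), (100, 220), (220, 300)]         # assumed segment boundaries
true_err = np.zeros_like(t)
for (a, b), shift, drift in zip(segments, [0.05, -0.10, 0.02], [0.002, -0.001, 0.004]):
    m = (t >= a) & (t < b)
    true_err[m] = shift + drift * (t[m] - a)
observed = true_err + 0.01 * rng.normal(size=t.size)  # residual stochastic noise

# Fit the deterministic component per segment, then estimate the residual noise.
deterministic = np.zeros_like(t)
for a, b in segments:
    m = (t >= a) & (t < b)
    drift, shift = np.polyfit(t[m] - a, observed[m], 1)   # slope, intercept
    deterministic[m] = shift + drift * (t[m] - a)

residual = observed - deterministic
print("estimated residual (stochastic) noise std:", residual.std())
```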

  20. Robust characterization of leakage errors

    NASA Astrophysics Data System (ADS)

    Wallman, Joel J.; Barnhill, Marie; Emerson, Joseph

    2016-04-01

    Leakage errors arise when the quantum state leaks out of some subspace of interest, for example, the two-level subspace of a multi-level system defining a computational ‘qubit’, the logical code space of a quantum error-correcting code, or a decoherence-free subspace. Leakage errors pose a distinct challenge to quantum control relative to the more well-studied decoherence errors and can be a limiting factor to achieving fault-tolerant quantum computation. Here we present a scalable and robust randomized benchmarking protocol for quickly estimating the leakage rate due to an arbitrary Markovian noise process on a larger system. We illustrate the reliability of the protocol through numerical simulations.
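
    A simplified illustration (not the randomized benchmarking protocol of the paper) of how a per-gate leakage rate can be estimated from the decay of the population remaining in the subspace of interest as a function of sequence length:

```python
import numpy as np

rng = np.random.default_rng(3)
p_leak = 0.002  # assumed per-gate leakage probability (ground truth for the toy model)

def survival(m, shots=2000):
    """Fraction of shots still inside the computational subspace after m gates."""
    stayed = (rng.random((shots, m)) > p_leak).all(axis=1)
    return stayed.mean()

lengths = np.array([1, 5, 10, 20, 50, 100, 200])
pops = np.array([survival(m) for m in lengths])

# Fit pop(m) ~ r**m via a linear fit of log(population) against sequence length.
slope = np.polyfit(lengths, np.log(pops), 1)[0]
r = np.exp(slope)
print(f"estimated leakage rate per gate: {1 - r:.4f}  (true value {p_leak})")
```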

  1. Understanding human management of automation errors

    PubMed Central

    McBride, Sara E.; Rogers, Wendy A.; Fisk, Arthur D.

    2013-01-01

    Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance. PMID:25383042

  2. Motor and non-motor error and the influence of error magnitude on brain activity.

    PubMed

    Nadig, Karin Graziella; Jäncke, Lutz; Lüchinger, Roger; Lutz, Kai

    2010-04-01

    It has been shown that frontal cortical areas increase their activity during error perception and error processing. However, it is not yet clear whether perception of motor errors is processed in the same frontal areas as perception of errors in cognitive tasks. It is also unclear whether brain activity level is influenced by the magnitude of error. For this purpose, we conducted a study in which subjects were confronted with motor and non-motor errors, and had them perform a sensorimotor transformation task in which they were likely to commit motor errors of different magnitudes (internal errors). In addition to the internally committed motor errors, non-motor errors (external errors) were added to the feedback in some trials. We found that activity in the anterior insula, inferior frontal gyrus (IFG), cerebellum, precuneus, and posterior medial frontal cortex (pMFC) correlated positively with the magnitude of external errors. The middle frontal gyrus (MFG) and the pMFC cortex correlated positively with the magnitude of the total error fed back to subjects (internal plus external). No significant positive correlation between internal error and brain activity could be detected. These results indicate that motor errors have a differential effect on brain activity compared with non-motor errors.

  3. Field error lottery

    NASA Astrophysics Data System (ADS)

    James Elliott, C.; McVey, Brian D.; Quimby, David C.

    1991-07-01

    The level of field errors in a free electron laser (FEL) is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is use of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond convenient mechanical tolerances of ± 25 μm, and amelioration of these may occur by a procedure using direct measurement of the magnetic fields at assembly time.

  4. Field error lottery

    NASA Astrophysics Data System (ADS)

    Elliott, C. James; McVey, Brian D.; Quimby, David C.

    1990-11-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement, and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time.

  5. Field error lottery

    SciTech Connect

    Elliott, C.J.; McVey, B.; Quimby, D.C.

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  6. Medical Error and Moral Luck.

    PubMed

    Hubbeling, Dieneke

    2016-09-01

    This paper addresses the concept of moral luck. Moral luck is discussed in the context of medical error, especially an error of omission that occurs frequently, but only rarely has adverse consequences. As an example, a failure to compare the label on a syringe with the drug chart results in the wrong medication being administered and the patient dies. However, this error may have previously occurred many times with no tragic consequences. Discussions on moral luck can highlight conflicting intuitions. Should perpetrators receive a harsher punishment because of an adverse outcome, or should they be dealt with in the same way as colleagues who have acted similarly, but with no adverse effects? An additional element to the discussion, specifically with medical errors, is that according to the evidence currently available, punishing individual practitioners does not seem to be effective in preventing future errors. The following discussion, using relevant philosophical and empirical evidence, posits a possible solution for the moral luck conundrum in the context of medical error: namely, making a distinction between the duty to make amends and assigning blame. Blame should be assigned on the basis of actual behavior, while the duty to make amends is dependent on the outcome. PMID:26662613

  7. Beta systems error analysis

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The atmospheric backscatter coefficient, beta, measured with an airborne CO Laser Doppler Velocimeter (LDV) system operating in a continuous wave, focused mode is discussed. The Single Particle Mode (SPM) algorithm was developed from concept through analysis of an extensive amount of data obtained with the system on board a NASA aircraft. The SPM algorithm is intended to be employed in situations where one particle at a time appears in the sensitive volume of the LDV. In addition to giving the backscatter coefficient, the SPM algorithm also produces as intermediate results the aerosol density and the aerosol backscatter cross section distribution. A second method, which measures only the atmospheric backscatter coefficient, is called the Volume Mode (VM) and was simultaneously employed. The results of these two methods differed by slightly less than an order of magnitude. The measurement uncertainties or other errors in the results of the two methods are examined.

  8. Error properties of Argos satellite telemetry locations using least squares and Kalman filtering.

    PubMed

    Boyd, Janice D; Brightsmith, Donald J

    2013-01-01

    Study of animal movements is key for understanding their ecology and facilitating their conservation. The Argos satellite system is a valuable tool for tracking species which move long distances, inhabit remote areas, and are otherwise difficult to track with traditional VHF telemetry and are not suitable for GPS systems. Previous research has raised doubts about the magnitude of position errors quoted by the satellite service provider CLS. In addition, no peer-reviewed publications have evaluated the usefulness of the CLS supplied error ellipses nor the accuracy of the new Kalman filtering (KF) processing method. Using transmitters hung from towers and trees in southeastern Peru, we show the Argos error ellipses generally contain <25% of the true locations and therefore do not adequately describe the true location errors. We also find that KF processing does not significantly increase location accuracy. The errors for both LS and KF processing methods were found to be lognormally distributed, which has important repercussions for error calculation, statistical analysis, and data interpretation. In brief, "good" positions (location codes 3, 2, 1, A) are accurate to about 2 km, while 0 and B locations are accurate to about 5-10 km. However, due to the lognormal distribution of the errors, larger outliers are to be expected in all location codes and need to be accounted for in the user's data processing. We evaluate five different empirical error estimates and find that 68% lognormal error ellipses provided the most useful error estimates. Longitude errors are larger than latitude errors by a factor of 2 to 3, supporting the use of elliptical error ellipses. Numerous studies over the past 15 years have also found fault with the CLS-claimed error estimates yet CLS has failed to correct their misleading information. We hope this will be reversed in the near future.
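
    One practical consequence of lognormally distributed location errors can be shown with a short sketch: the 68th percentile of the error distribution (the basis of the 68% error estimates mentioned above) is a more robust accuracy summary than the mean plus one standard deviation, which is inflated by the long tail. The distribution parameters below are invented, not fitted to the Argos data.

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical lognormal location errors (km) for a single Argos location class.
errors_km = rng.lognormal(mean=np.log(1.5), sigma=0.9, size=100_000)

p68 = np.percentile(errors_km, 68)
print(f"mean + 1 sd     : {errors_km.mean() + errors_km.std():.2f} km")
print(f"68th percentile : {p68:.2f} km")
print("fraction of fixes with error > 3 x median:",
      round(float((errors_km > 3 * np.median(errors_km)).mean()), 3))
```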

  9. Understanding the Nature of Medication Errors in an ICU with a Computerized Physician Order Entry System

    PubMed Central

    Cho, Insook; Park, Hyeok; Choi, Youn Jeong; Hwang, Mi Heui; Bates, David W.

    2014-01-01

    Objectives We investigated incidence rates to understand the nature of medication errors potentially introduced by utilizing a computerized physician order entry (CPOE) system in the three clinical phases of the medication process: prescription, administration, and documentation. Methods Overt observations and chart reviews were employed at two surgical intensive care units of a 950-bed tertiary teaching hospital. Ten categories of high-risk drugs prescribed over a four-month period were noted and reviewed. Error definition and classifications were adapted from previous studies for use in the present research. Incidences of medication errors in the three phases of the medication process were analyzed. In addition, nurses' responses to prescription errors were also assessed. Results Of the 534 prescriptions issued, 286 (53.6%) included at least one error. The proportion of errors was 19.0% (58) of the 306 drug administrations, of which two-thirds were verbal orders classified as errors due to incorrectly entered prescriptions. Documentation errors occurred in 205 (82.7%) of 248 correctly performed administrations. When tracking incorrectly entered prescriptions, 93% of the errors were intercepted by nurses, but two-thirds of them were recorded as prescribed rather than administered. Conclusion The number of errors occurring at each phase of the medication process was relatively high, despite long experience with a CPOE system. The main causes of administration errors and documentation errors were prescription errors and verbal order processes. To reduce these errors, hospital-level and unit-level efforts toward a better system are needed. PMID:25526059

  10. Error Properties of Argos Satellite Telemetry Locations Using Least Squares and Kalman Filtering

    PubMed Central

    Boyd, Janice D.; Brightsmith, Donald J.

    2013-01-01

    Study of animal movements is key for understanding their ecology and facilitating their conservation. The Argos satellite system is a valuable tool for tracking species which move long distances, inhabit remote areas, and are otherwise difficult to track with traditional VHF telemetry and are not suitable for GPS systems. Previous research has raised doubts about the magnitude of position errors quoted by the satellite service provider CLS. In addition, no peer-reviewed publications have evaluated the usefulness of the CLS supplied error ellipses nor the accuracy of the new Kalman filtering (KF) processing method. Using transmitters hung from towers and trees in southeastern Peru, we show the Argos error ellipses generally contain <25% of the true locations and therefore do not adequately describe the true location errors. We also find that KF processing does not significantly increase location accuracy. The errors for both LS and KF processing methods were found to be lognormally distributed, which has important repercussions for error calculation, statistical analysis, and data interpretation. In brief, “good” positions (location codes 3, 2, 1, A) are accurate to about 2 km, while 0 and B locations are accurate to about 5–10 km. However, due to the lognormal distribution of the errors, larger outliers are to be expected in all location codes and need to be accounted for in the user’s data processing. We evaluate five different empirical error estimates and find that 68% lognormal error ellipses provided the most useful error estimates. Longitude errors are larger than latitude errors by a factor of 2 to 3, supporting the use of elliptical error ellipses. Numerous studies over the past 15 years have also found fault with the CLS-claimed error estimates yet CLS has failed to correct their misleading information. We hope this will be reversed in the near future. PMID:23690980

  11. Accepting error to make less error.

    PubMed

    Einhorn, H J

    1986-01-01

    In this article I argue that the clinical and statistical approaches rest on different assumptions about the nature of random error and the appropriate level of accuracy to be expected in prediction. To examine this, a case is made for each approach. The clinical approach is characterized as being deterministic, causal, and less concerned with prediction than with diagnosis and treatment. The statistical approach accepts error as inevitable and in so doing makes less error in prediction. This is illustrated using examples from probability learning and equal weighting in linear models. Thereafter, a decision analysis of the two approaches is proposed. Of particular importance are the errors that characterize each approach: myths, magic, and illusions of control in the clinical; lost opportunities and illusions of the lack of control in the statistical. Each approach represents a gamble with corresponding risks and benefits.

  12. Inborn errors of metabolism

    MedlinePlus

    Metabolism - inborn errors of ... Bodamer OA. Approach to inborn errors of metabolism. In: Goldman L, Schafer AI, eds. Goldman's Cecil Medicine . 25th ed. Philadelphia, PA: Elsevier Saunders; 2015:chap 205. Rezvani I, Rezvani G. An ...

  13. Decoding and synchronization of error correcting codes

    NASA Astrophysics Data System (ADS)

    Madkour, S. A.

    1983-01-01

    Decoding devices for hard quantization and soft decision error correcting codes are discussed. A Meggitt decoder for Reed-Solomon polynomial codes was implemented and tested. It uses 49 TTL logic ICs. A maximum binary frequency of 30 Mbit/sec is demonstrated. A soft decision decoding approach was applied to hard decision decoding, using the principles of threshold decoding. Simulation results indicate that the proposed scheme achieves satisfactory performance using only a small number of parity checks. The combined correction of substitution and synchronization errors is analyzed. The algorithm presented shows the capability of convolutional codes to correct synchronization errors as well as independent additive errors without any additional redundancy.

  14. The effect of divided attention on inhibiting the gravity error.

    PubMed

    Hood, Bruce M; Wilson, Alice; Dyson, Sally

    2006-05-01

    Children who could overcome the gravity error on Hood's (1995) tubes task were tested in a condition where they had to monitor two falling balls. This condition significantly impaired search performance with the majority of mistakes being gravity errors. In a second experiment, the effect of monitoring two balls was compared in the tubes task and a spatial transposition task not involving gravity. Again, monitoring two objects produced impaired search performance in the gravity task but not in the spatial transposition task. These findings support the view that divided attention disrupts the ability to exercise inhibitory control over the gravity error and that the performance drop on this task is not due to the additional task demands incurred by divided attention.

  15. Drug Errors in Anaesthesiology

    PubMed Central

    Jain, Rajnish Kumar; Katiyar, Sarika

    2009-01-01

    Summary Medication errors are a leading cause of morbidity and mortality in hospitalized patients. The incidence of these drug errors during anaesthesia is not certain. They impose a considerable financial burden to health care systems apart from the patient losses. Common causes of these errors and their prevention is discussed. PMID:20640103

  16. Error-compensation measurements on polarization qubits

    NASA Astrophysics Data System (ADS)

    Hou, Zhibo; Zhu, Huangjun; Xiang, Guo-Yong; Li, Chuan-Feng; Guo, Guang-Can

    2016-06-01

    Systematic errors are inevitable in most measurements performed in real life because of imperfect measurement devices. Reducing systematic errors is crucial to ensuring the accuracy and reliability of measurement results. To this end, delicate error-compensation design is often necessary in addition to device calibration to reduce the dependence of the systematic error on the imperfection of the devices. The art of error-compensation design is well appreciated in nuclear magnetic resonance systems through the use of composite pulses. In contrast, there are few works on reducing systematic errors in quantum optical systems. Here we propose an error-compensation design to reduce the systematic error in projective measurements on a polarization qubit. It can reduce the systematic error to the second order of the phase errors of both the half-wave plate (HWP) and the quarter-wave plate (QWP) as well as the angle error of the HWP. This technique is then applied to experiments on quantum state tomography on polarization qubits, leading to a 20-fold reduction in the systematic error. Our study may find applications in high-precision tasks in polarization optics and quantum optics.
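
    The role of the working point in error-compensation design can be seen in a short Jones-calculus example: the sensitivity of a projective polarization measurement to a wave-plate angle error is first order in the error at some settings and only second order at others. This is generic polarization optics for illustration, not the specific HWP/QWP compensation scheme constructed in the paper.

```python
import numpy as np

def hwp(theta):
    """Jones matrix of an ideal half-wave plate with fast axis at angle theta."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[c, s], [s, -c]])

H = np.array([1.0, 0.0])  # horizontal input state

def prob_H(theta):
    """Probability of passing a horizontal polarizer after the HWP."""
    out = hwp(theta) @ H
    return abs(out[0]) ** 2

for theta0, label in [(0.0, "theta0 = 0 deg   "), (np.pi / 8, "theta0 = 22.5 deg")]:
    for delta in [1e-3, 2e-3]:
        err = prob_H(theta0 + delta) - prob_H(theta0)
        print(f"{label}  angle error {delta:.0e} rad -> probability error {err:+.2e}")
# At 22.5 deg the probability error scales linearly with the angle error;
# at 0 deg it scales quadratically, i.e. that working point is already "compensated".
```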

  17. Exploring the impact of forcing error characteristics on physically based snow simulations within a global sensitivity analysis framework

    NASA Astrophysics Data System (ADS)

    Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.

    2015-07-01

    Physically based models provide insights into key hydrologic processes but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology and which error characteristics are most important. Here we employ global sensitivity analysis to explore how (1) different error types (i.e., bias, random errors), (2) different error probability distributions, and (3) different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use the Sobol' global sensitivity analysis, which is typically used for model parameters but adapted here for testing model sensitivity to coexisting errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 840 000 Monte Carlo simulations across four sites and five different scenarios. Model outputs were (1) consistently more sensitive to forcing biases than random errors, (2) generally less sensitive to forcing error distributions, and (3) critically sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes found in areas with drifting snow, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a more dominant impact when precipitation uncertainty was due solely to gauge undercatch. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.
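
    A condensed sketch of this experimental design treats coexisting forcing errors (here a bias and a random-error magnitude for precipitation and temperature) as the factors of a Sobol' sensitivity analysis and estimates first-order indices with the Saltelli A/B/AB_i sampling scheme. The toy degree-day snow model and error ranges below are invented stand-ins for the Utah Energy Balance model setup.

```python
import numpy as np

rng = np.random.default_rng(5)
days = 180
precip = rng.gamma(0.4, 8.0, size=days)   # baseline daily precipitation (mm)
temp = rng.normal(-2.0, 6.0, size=days)   # baseline daily air temperature (deg C)
eps_p = rng.normal(size=days)             # fixed noise shapes so the model is a
eps_t = rng.normal(size=days)             # deterministic function of its four factors

def peak_swe(p_bias, p_noise, t_bias, t_noise, ddf=3.0):
    """Peak snow water equivalent from a degree-day model with perturbed forcings."""
    p = precip * (1 + p_bias) + p_noise * eps_p
    tt = temp + t_bias + t_noise * eps_t
    swe, peak = 0.0, 0.0
    for pi, ti in zip(p, tt):
        swe += max(pi, 0.0) if ti < 0 else 0.0     # snowfall on below-freezing days
        swe = max(swe - ddf * max(ti, 0.0), 0.0)   # degree-day melt
        peak = max(peak, swe)
    return peak

# Factor ranges: precip bias, precip noise std, temp bias, temp noise std
names = ["precip bias", "precip noise", "temp bias", "temp noise"]
lo = np.array([-0.3, 0.0, -2.0, 0.0])
hi = np.array([0.3, 2.0, 2.0, 2.0])
k, N = 4, 512

sample = lambda n: lo + (hi - lo) * rng.random((n, k))
run = lambda X: np.array([peak_swe(*row) for row in X])

# Saltelli-style first-order Sobol' indices: S_i = mean(fB * (fABi - fA)) / Var(f)
A, B = sample(N), sample(N)
fA, fB = run(A), run(B)
var = np.var(np.concatenate([fA, fB]))
for i, name in enumerate(names):
    ABi = A.copy()
    ABi[:, i] = B[:, i]
    Si = np.mean(fB * (run(ABi) - fA)) / var
    print(f"first-order index, {name:>12s}: {Si:.2f}")
```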

  18. Issues in automatic OCR error classification

    SciTech Connect

    Esakov, J.; Lopresti, D.P.; Sandberg, J.S.; Zhou, J.

    1994-12-31

    In this paper we present the surprising result that OCR errors are not always uniformly distributed across a page. Under certain circumstances, 30% or more of the errors incurred can be attributed to a single, avoidable phenomenon in the scanning process. This observation has important ramifications for work that explicitly or implicitly assumes a uniform error distribution. In addition, our experiments show that not just the quantity but also the nature of the errors is affected. This could have an impact on strategies used for post-process error correction. Results such as these can be obtained only by analyzing large quantities of data in a controlled way. To this end, we also describe our algorithm for classifying OCR errors. This is based on a well-known dynamic programming approach for determining string edit distance which we have extended to handle the types of character segmentation errors inherent to OCR.
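
    The classification algorithm described -- a dynamic-programming string edit distance extended to handle OCR character-segmentation errors -- can be sketched as follows. Besides substitution, insertion, and deletion, the recurrence allows a 2-to-1 merge (two reference characters read as one) and a 1-to-2 split (one read as two). The costs and operation labels are illustrative choices, not the paper's.

```python
def ocr_edit_ops(ref: str, out: str):
    """Edit distance with split/merge ops; returns (cost, list of non-match op labels)."""
    n, m = len(ref), len(out)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    back = [[None] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0
    for i in range(n + 1):
        for j in range(m + 1):
            if D[i][j] == INF:
                continue
            def relax(ni, nj, cost, op):
                if D[i][j] + cost < D[ni][nj]:
                    D[ni][nj] = D[i][j] + cost
                    back[ni][nj] = (i, j, op)
            if i < n and j < m:
                op = "match" if ref[i] == out[j] else "substitution"
                relax(i + 1, j + 1, 0 if op == "match" else 1, op)
            if i < n:
                relax(i + 1, j, 1, "deletion")
            if j < m:
                relax(i, j + 1, 1, "insertion")
            if i + 2 <= n and j < m:
                relax(i + 2, j + 1, 1.5, "merge (2->1)")   # e.g. 'rn' read as 'm'
            if i < n and j + 2 <= m:
                relax(i + 1, j + 2, 1.5, "split (1->2)")   # e.g. 'm' read as 'rn'
    # Backtrace the chosen operations.
    ops, i, j = [], n, m
    while (i, j) != (0, 0):
        pi, pj, op = back[i][j]
        if op != "match":
            ops.append(op)
        i, j = pi, pj
    return D[n][m], list(reversed(ops))

print(ocr_edit_ops("modern", "modem"))  # 'rn' read as 'm' classifies as a merge error
```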

  19. Compounding errors in 2 dogs receiving anticonvulsants

    PubMed Central

    McConkey, Sandra E.; Walker, Susan; Adams, Cathy

    2012-01-01

    Two cases that involve drug compounding errors are described. One dog exhibited increased seizure activity due to a compounded, flavored phenobarbital solution that deteriorated before the expiration date provided by the compounder. The other dog developed clinical signs of hyperkalemia and bromine toxicity following a 5-fold compounding error in the concentration of potassium bromide (KBr). PMID:23024385

  20. [Medical errors in obstetrics].

    PubMed

    Marek, Z

    1984-08-01

    Errors in medicine may fall into 3 main categories: 1) medical errors made only by physicians, 2) technical errors made by physicians and other health care specialists, and 3) organizational errors associated with mismanagement of medical facilities. This classification of medical errors, as well as the definition and treatment of them, fully applies to obstetrics. However, the difference between obstetrics and other fields of medicine stems from the fact that an obstetrician usually deals with healthy women. Conversely, professional risk in obstetrics is very high, as errors and malpractice can lead to very serious complications. Observations show that the most frequent obstetrical errors occur in induced abortions, diagnosis of pregnancy, selection of optimal delivery techniques, treatment of hemorrhages, and other complications. Therefore, the obstetrician should be prepared to use intensive care procedures similar to those used for resuscitation.

  1. Clover: Compiler directed lightweight soft error resilience

    DOE PAGES

    Liu, Qingrui; Lee, Dongyoon; Jung, Changhee; Tiwari, Devesh

    2015-05-01

    This paper presents Clover, a compiler directed soft error detection and recovery scheme for lightweight soft error resilience. The compiler carefully generates soft error tolerant code based on idem-potent processing without explicit checkpoint. During program execution, Clover relies on a small number of acoustic wave detectors deployed in the processor to identify soft errors by sensing the wave made by a particle strike. To cope with DUE (detected unrecoverable errors) caused by the sensing latency of error detection, Clover leverages a novel selective instruction duplication technique called tail-DMR (dual modular redundancy). Once a soft error is detected by either the sensor or the tail-DMR, Clover takes care of the error as in the case of exception handling. To recover from the error, Clover simply redirects program control to the beginning of the code region where the error is detected. Lastly, the experiment results demonstrate that the average runtime overhead is only 26%, which is a 75% reduction compared to that of the state-of-the-art soft error resilience technique.

  2. Clover: Compiler directed lightweight soft error resilience

    SciTech Connect

    Liu, Qingrui; Lee, Dongyoon; Jung, Changhee; Tiwari, Devesh

    2015-05-01

    This paper presents Clover, a compiler directed soft error detection and recovery scheme for lightweight soft error resilience. The compiler carefully generates soft error tolerant code based on idem-potent processing without explicit checkpoint. During program execution, Clover relies on a small number of acoustic wave detectors deployed in the processor to identify soft errors by sensing the wave made by a particle strike. To cope with DUE (detected unrecoverable errors) caused by the sensing latency of error detection, Clover leverages a novel selective instruction duplication technique called tail-DMR (dual modular redundancy). Once a soft error is detected by either the sensor or the tail-DMR, Clover takes care of the error as in the case of exception handling. To recover from the error, Clover simply redirects program control to the beginning of the code region where the error is detected. Lastly, the experiment results demonstrate that the average runtime overhead is only 26%, which is a 75% reduction compared to that of the state-of-the-art soft error resilience technique.

  3. Aircraft system modeling error and control error

    NASA Technical Reports Server (NTRS)

    Kulkarni, Nilesh V. (Inventor); Kaneshige, John T. (Inventor); Krishnakumar, Kalmanje S. (Inventor); Burken, John J. (Inventor)

    2012-01-01

    A method for modeling error-driven adaptive control of an aircraft. Normal aircraft plant dynamics is modeled, using an original plant description in which a controller responds to a tracking error e(k) to drive the component to a normal reference value according to an asymptote curve. Where the system senses that (1) at least one aircraft plant component is experiencing an excursion and (2) the return of this component value toward its reference value is not proceeding according to the expected controller characteristics, neural network (NN) modeling of aircraft plant operation may be changed. However, if (1) is satisfied but the error component is returning toward its reference value according to expected controller characteristics, the NN will continue to model operation of the aircraft plant according to an original description.

  4. The Error in Total Error Reduction

    PubMed Central

    Witnauer, James E.; Urcelay, Gonzalo P.; Miller, Ralph R.

    2013-01-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modelling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. PMID:23891930
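
    The contrast between the two update rules can be made concrete with a short simulation: under total error reduction (the Rescorla-Wagner family) every present cue is updated against the compound prediction error, while under local error reduction each cue is updated against its own prediction error. The cues, learning rate, and outcome schedule below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(6)
n_trials, alpha = 200, 0.1

# Two cues (A and B) are always presented together; the outcome follows with p = 0.8.
outcomes = (rng.random(n_trials) < 0.8).astype(float)

w_ter = np.zeros(2)  # total error reduction (Rescorla-Wagner style)
w_ler = np.zeros(2)  # local error reduction
for o in outcomes:
    # TER: each present cue is updated against the *compound* prediction error.
    w_ter += alpha * (o - w_ter.sum())
    # LER: each present cue is updated against its *own* prediction error.
    w_ler += alpha * (o - w_ler)

print("TER weights (share the outcome, each ~0.4):", np.round(w_ter, 2))
print("LER weights (each converges to ~0.8)      :", np.round(w_ler, 2))
```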

  5. Error latency measurements in symbolic architectures

    NASA Technical Reports Server (NTRS)

    Young, L. T.; Iyer, R. K.

    1991-01-01

    Error latency, the time that elapses between the occurrence of an error and its detection, has a significant effect on reliability. In computer systems, failure rates can be elevated during a burst of system activity due to increased detection of latent errors. A hybrid monitoring environment is developed to measure the error latency distribution of errors occurring in main memory. The objective of this study is to develop a methodology for gauging the dependability of individual data categories within a real-time application. The hybrid monitoring technique is novel in that it selects and categorizes a specific subset of the available blocks of memory to monitor. The precise times of reads and writes are collected, so no actual faults need be injected. Unlike previous monitoring studies that rely on a periodic sampling approach or on statistical approximation, this new approach permits continuous monitoring of referencing activity and precise measurement of error latency.

  6. Flux Sampling Errors for Aircraft and Towers

    NASA Technical Reports Server (NTRS)

    Mahrt, Larry

    1998-01-01

    Various errors and influences leading to differences between tower- and aircraft-measured fluxes are surveyed. This survey is motivated by reports in the literature that aircraft fluxes are sometimes smaller than tower-measured fluxes. Both tower and aircraft flux errors are larger with surface heterogeneity due to several independent effects. Surface heterogeneity may cause tower flux errors to increase with decreasing wind speed. Techniques to assess flux sampling error are reviewed. Such error estimates suffer various degrees of inapplicability in real geophysical time series due to nonstationarity of tower time series (or inhomogeneity of aircraft data). A new measure for nonstationarity is developed that eliminates assumptions on the form of the nonstationarity inherent in previous methods. When this nonstationarity measure becomes large, the surface energy imbalance increases sharply. Finally, strategies for obtaining adequate flux sampling using repeated aircraft passes and grid patterns are outlined.

  7. Model Error Budgets

    NASA Technical Reports Server (NTRS)

    Briggs, Hugh C.

    2008-01-01

    An error budget is a commonly used tool in the design of complex aerospace systems. It represents system performance requirements in terms of allowable errors and flows these down through a hierarchical structure to lower assemblies and components. The requirements may simply be 'allocated' based upon heuristics or experience, or they may be designed through use of physics-based models. This paper presents a basis for developing an error budget for models of the system, as opposed to the system itself. The need for model error budgets arises when system models are a principal design agent, as is increasingly common for poorly testable, high-performance space systems.

  8. Studies of Error Sources in Geodetic VLBI

    NASA Technical Reports Server (NTRS)

    Rogers, A. E. E.; Niell, A. E.; Corey, B. E.

    1996-01-01

    Achieving the goal of millimeter uncertainty in three-dimensional geodetic positioning on a global scale requires significant improvement in the precision and accuracy of both random and systematic error sources. For this investigation we proposed to study errors due to instrumentation in Very Long Baseline Interferometry (VLBI) and due to the atmosphere. After the inception of this work we expanded the scope to include assessment of error sources in GPS measurements, especially as they affect the vertical component of site position and the measurement of water vapor in the atmosphere. The atmospheric correction improvements described below are of benefit to both GPS and VLBI.

  9. Motion estimation performance models with application to hardware error tolerance

    NASA Astrophysics Data System (ADS)

    Cheong, Hye-Yeon; Ortega, Antonio

    2007-01-01

    The progress of VLSI technology towards deep sub-micron feature sizes, e.g., sub-100 nanometer technology, has created a growing impact of hardware defects and fabrication process variability, which lead to reductions in yield rate. To address these problems, a new approach, system-level error tolerance (ET), has been recently introduced. Considering that a significant percentage of the entire chip production is discarded due to minor imperfections, this approach is based on accepting imperfect chips that introduce imperceptible/acceptable system-level degradation; this leads to increases in overall effective yield. In this paper, we investigate the impact of hardware faults on the video compression performance, with a focus on the motion estimation (ME) process. More specifically, we provide an analytical formulation of the impact of single and multiple stuck-at-faults within ME computation. We further present a model for estimating the system-level performance degradation due to such faults, which can be used for the error tolerance based decision strategy of accepting a given faulty chip. We also show how different faults and ME search algorithms compare in terms of error tolerance and define the characteristics of search algorithms that lead to increased error tolerance. Finally, we show that different hardware architectures performing the same metric computation have different error tolerance characteristics and we present the optimal ME hardware architecture in terms of error tolerance. While we focus on ME hardware, our work could also be applied to systems (e.g., classifiers, matching pursuits, vector quantization) where a selection is made among several alternatives (e.g., class label, basis function, quantization codeword) based on which choice minimizes an additive metric of interest.
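
    The final sentence suggests a simple way to picture the fault model: a stuck-at fault in the datapath that evaluates one candidate can change which candidate minimizes the additive metric, and hence which motion vector is selected. The sketch below is a toy illustration under that assumption; it does not reproduce the paper's analytical formulation, and the block values and fault location are made up.

        # Toy illustration: a stuck-at-1 fault in the SAD datapath that evaluates one
        # candidate can flip the argmin and change the selected motion vector.

        def sad(block_a, block_b):
            """Sum of absolute differences between two equally sized blocks."""
            return sum(abs(a - b) for a, b in zip(block_a, block_b))

        def stuck_at_1(value, bit):
            """Force one output bit of the metric to 1 (stuck-at-1 fault)."""
            return value | (1 << bit)

        if __name__ == "__main__":
            current = [10, 12, 11, 9]
            candidates = {(0, 0): [10, 12, 11, 9],    # true best match, SAD = 0
                          (1, 0): [11, 13, 12, 10]}   # SAD = 4
            metrics = {mv: sad(current, blk) for mv, blk in candidates.items()}
            # Assume the faulty unit only processes candidate (0, 0).
            faulty = dict(metrics)
            faulty[(0, 0)] = stuck_at_1(metrics[(0, 0)], bit=3)
            print("fault-free choice:", min(metrics, key=metrics.get))  # (0, 0)
            print("faulty choice:   ", min(faulty, key=faulty.get))     # (1, 0)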

  10. Analysis of Medication Error Reports

    SciTech Connect

    Whitney, Paul D.; Young, Jonathan; Santell, John; Hicks, Rodney; Posse, Christian; Fecht, Barbara A.

    2004-11-15

    In medicine, as in many areas of research, technological innovation and the shift from paper based information to electronic records has created a climate of ever increasing availability of raw data. There has been, however, a corresponding lag in our abilities to analyze this overwhelming mass of data, and classic forms of statistical analysis may not allow researchers to interact with data in the most productive way. This is true in the emerging area of patient safety improvement. Traditionally, a majority of the analysis of error and incident reports has been carried out based on an approach of data comparison, and starts with a specific question which needs to be answered. Newer data analysis tools have been developed which allow the researcher to not only ask specific questions but also to “mine” data: approach an area of interest without preconceived questions, and explore the information dynamically, allowing questions to be formulated based on patterns brought up by the data itself. Since 1991, United States Pharmacopeia (USP) has been collecting data on medication errors through voluntary reporting programs. USP’s MEDMARXsm reporting program is the largest national medication error database and currently contains well over 600,000 records. Traditionally, USP has conducted an annual quantitative analysis of data derived from “pick-lists” (i.e., items selected from a list of items) without an in-depth analysis of free-text fields. In this paper, the application of text analysis and data analysis tools used by Battelle to analyze the medication error reports already analyzed in the traditional way by USP is described. New insights and findings were revealed including the value of language normalization and the distribution of error incidents by day of the week. The motivation for this effort is to gain additional insight into the nature of medication errors to support improvements in medication safety.

  11. Addressing medical errors in hand surgery.

    PubMed

    Johnson, Shepard P; Adkinson, Joshua M; Chung, Kevin C

    2014-09-01

    Influential think tanks such as the Institute of Medicine have raised awareness about the implications of medical errors. In response, organizations, medical societies, and hospitals have initiated programs to decrease the incidence and prevent adverse effects of these errors. Surgeons deal with the direct implications of adverse events involving patients. In addition to managing the physical consequences, they are confronted with ethical and social issues when caring for a harmed patient. Although there is considerable effort to implement system-wide changes, there is little guidance for hand surgeons on how to address medical errors. Admitting an error by a physician is difficult, but a transparent environment where patients are notified of errors and offered consolation and compensation is essential to maintain physician-patient trust. Furthermore, equipping hand surgeons with a guide for addressing medical errors will help identify system failures, provide learning points for safety improvement, decrease litigation against physicians, and demonstrate a commitment to ethical and compassionate medical care.

  12. Measurement error in geometric morphometrics.

    PubMed

    Fruciano, Carmelo

    2016-06-01

    Geometric morphometrics, a set of methods for the statistical analysis of shape once saluted as a revolutionary advancement in the analysis of morphology, is now mature and routinely used in ecology and evolution. However, a factor often disregarded in empirical studies is the presence and the extent of measurement error. This is potentially a very serious issue because random measurement error can inflate the amount of variance and, since many statistical analyses are based on the amount of "explained" relative to "residual" variance, can result in loss of statistical power. On the other hand, systematic bias can affect statistical analyses by biasing the results (i.e. variation due to bias is incorporated in the analysis and treated as biologically-meaningful variation). Here, I briefly review common sources of error in geometric morphometrics. I then review the most commonly used methods to measure and account for both random and non-random measurement error, providing a worked example using a real dataset.
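
    A common way to quantify random measurement error is to measure each specimen more than once and estimate repeatability, the share of total variance attributable to among-individual variation rather than to measurement noise. The simulation below is a generic single-trait illustration of that idea, with made-up variance components; it is not the multivariate worked example of the paper.

        # Repeatability from repeated measurements: among-individual variance as a
        # fraction of total variance. Illustrative single-trait simulation.
        import random

        def simulate(n_ind=100, n_rep=2, sd_bio=1.0, sd_err=0.5, seed=1):
            rng = random.Random(seed)
            data = []
            for _ in range(n_ind):
                true_value = rng.gauss(0.0, sd_bio)           # biological variation
                reps = [true_value + rng.gauss(0.0, sd_err)    # measurement error
                        for _ in range(n_rep)]
                data.append(reps)
            return data

        def repeatability(data):
            """Among-individual variance divided by total variance, estimated from
            the variance of individual means and the within-individual variance."""
            n_rep = len(data[0])
            means = [sum(r) / n_rep for r in data]
            grand = sum(means) / len(means)
            var_among = sum((m - grand) ** 2 for m in means) / (len(means) - 1)
            within = [x - sum(r) / n_rep for r in data for x in r]
            var_within = sum(w ** 2 for w in within) / (len(within) - len(data))
            var_bio = var_among - var_within / n_rep
            return var_bio / (var_bio + var_within)

        if __name__ == "__main__":
            print("repeatability: %.2f" % repeatability(simulate()))
            # With sd_bio = 1.0 and sd_err = 0.5 the expected value is roughly
            # 1.0 / (1.0 + 0.25) = 0.8; larger measurement error lowers it.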

  13. Preventing errors in laterality.

    PubMed

    Landau, Elliot; Hirschorn, David; Koutras, Iakovos; Malek, Alexander; Demissie, Seleshie

    2015-04-01

    An error in laterality is the reporting of a finding that is present on the right side as on the left or vice versa. While different medical and surgical specialties have implemented protocols to help prevent such errors, very few studies have been published that describe these errors in radiology reports and ways to prevent them. We devised a system that allows the radiologist to view reports in a separate window, displayed in a simple font and with all terms of laterality highlighted in separate colors. This allows the radiologist to correlate all detected laterality terms of the report with the images open in PACS and correct them before the report is finalized. The system was monitored each time an error in laterality was detected. The system detected 32 errors in laterality over a 7-month period (rate of 0.0007%), with CT having the highest error detection rate of all modalities. Significantly more errors were detected in male patients than in female patients. In conclusion, our study demonstrated that with our system, laterality errors can be detected and corrected prior to finalizing reports.

  14. Everyday Scale Errors

    ERIC Educational Resources Information Center

    Ware, Elizabeth A.; Uttal, David H.; DeLoache, Judy S.

    2010-01-01

    Young children occasionally make "scale errors"--they attempt to fit their bodies into extremely small objects or attempt to fit a larger object into another, tiny, object. For example, a child might try to sit in a dollhouse-sized chair or try to stuff a large doll into it. Scale error research was originally motivated by parents' and…

  15. Action errors, error management, and learning in organizations.

    PubMed

    Frese, Michael; Keith, Nina

    2015-01-01

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  16. Food additives

    PubMed Central

    Spencer, Michael

    1974-01-01

    Food additives are discussed from the food technology point of view. The reasons for their use are summarized: (1) to protect food from chemical and microbiological attack; (2) to even out seasonal supplies; (3) to improve their eating quality; (4) to improve their nutritional value. The various types of food additives are considered, e.g. colours, flavours, emulsifiers, bread and flour additives, preservatives, and nutritional additives. The paper concludes with consideration of those circumstances in which the use of additives is (a) justified and (b) unjustified. PMID:4467857

  17. Errors Disrupt Subsequent Early Attentional Processes

    PubMed Central

    Van der Borght, Liesbet; Schevernels, Hanne; Burle, Boris; Notebaert, Wim

    2016-01-01

    It has been demonstrated that target detection is impaired following an error in an unrelated flanker task. These findings support the idea that the occurrence or processing of unexpected error-like events interfere with subsequent information processing. In the present study, we investigated the effect of errors on early visual ERP components. We therefore combined a flanker task and a visual discrimination task. Additionally, the intertrial interval between both tasks was manipulated in order to investigate the duration of these negative after-effects. The results of the visual discrimination task indicated that the amplitude of the N1 component, which is related to endogenous attention, was significantly decreased following an error, irrespective of the intertrial interval. Additionally, P3 amplitude was attenuated after an erroneous trial, but only in the long-interval condition. These results indicate that low-level attentional processes are impaired after errors. PMID:27050303

  18. Proofreading for word errors.

    PubMed

    Pilotti, Maura; Chodorow, Martin; Agpawa, Ian; Krajniak, Marta; Mahamane, Salif

    2012-04-01

    Proofreading (i.e., reading text for the purpose of detecting and correcting typographical errors) is viewed as a component of the activity of revising text and thus is a necessary (albeit not sufficient) procedural step for enhancing the quality of a written product. The purpose of the present research was to test competing accounts of word-error detection which predict factors that may influence reading and proofreading differently. Word errors, which change a word into another word (e.g., from --> form), were selected for examination because they are unlikely to be detected by automatic spell-checking functions. Consequently, their detection still rests mostly in the hands of the human proofreader. Findings highlighted the weaknesses of existing accounts of proofreading and identified factors, such as length and frequency of the error in the English language relative to frequency of the correct word, which might play a key role in detection of word errors.

  19. Model Error Estimation for the CPTEC Eta Model

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; daSilva, Arlindo

    1999-01-01

    Statistical data assimilation systems require the specification of forecast and observation error statistics. Forecast error is due to model imperfections and differences between the initial condition and the actual state of the atmosphere. Practical four-dimensional variational (4D-Var) methods try to fit the forecast state to the observations and assume that the model error is negligible. Here, with a number of simplifying assumptions, a framework is developed for isolating the model error given the forecast error at two lead-times. Two definitions are proposed for the Talagrand ratio tau, the fraction of the forecast error due to model error rather than initial condition error. Data from the CPTEC Eta Model running operationally over South America are used to calculate forecast error statistics and lower bounds for tau.
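
    The abstract does not reproduce the two proposed definitions of tau, so the sketch below only illustrates the general idea under one simple reading, namely that tau is the ratio of model-error variance to total forecast-error variance; the decomposition and sample values are assumptions made for illustration.

        # Illustrative Talagrand-ratio style calculation: fraction of forecast-error
        # variance attributed to model error. The decomposition used here is an
        # assumption for illustration, not the paper's two definitions.

        def variance(xs):
            mean = sum(xs) / len(xs)
            return sum((x - mean) ** 2 for x in xs) / (len(xs) - 1)

        def talagrand_ratio(model_errors, total_forecast_errors):
            """tau = var(model error) / var(total forecast error)."""
            return variance(model_errors) / variance(total_forecast_errors)

        if __name__ == "__main__":
            model_err = [0.3, -0.2, 0.4, -0.1, 0.2]        # hypothetical samples
            total_err = [0.9, -0.7, 1.1, -0.5, 0.6]
            print("tau = %.2f" % talagrand_ratio(model_err, total_err))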

  20. A Discriminant Function Approach to Adjust for Processing and Measurement Error When a Biomarker is Assayed in Pooled Samples

    PubMed Central

    Lyles, Robert H.; Van Domelen, Dane; Mitchell, Emily M.; Schisterman, Enrique F.

    2015-01-01

    Pooling biological specimens prior to performing expensive laboratory assays has been shown to be a cost effective approach for estimating parameters of interest. In addition to requiring specialized statistical techniques, however, the pooling of samples can introduce assay errors due to processing, possibly in addition to measurement error that may be present when the assay is applied to individual samples. Failure to account for these sources of error can result in biased parameter estimates and ultimately faulty inference. Prior research addressing biomarker mean and variance estimation advocates hybrid designs consisting of individual as well as pooled samples to account for measurement and processing (or pooling) error. We consider adapting this approach to the problem of estimating a covariate-adjusted odds ratio (OR) relating a binary outcome to a continuous exposure or biomarker level assessed in pools. In particular, we explore the applicability of a discriminant function-based analysis that assumes normal residual, processing, and measurement errors. A potential advantage of this method is that maximum likelihood estimation of the desired adjusted log OR is straightforward and computationally convenient. Moreover, in the absence of measurement and processing error, the method yields an efficient unbiased estimator for the parameter of interest assuming normal residual errors. We illustrate the approach using real data from an ancillary study of the Collaborative Perinatal Project, and we use simulations to demonstrate the ability of the proposed estimators to alleviate bias due to measurement and processing error. PMID:26593934
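
    As a rough illustration of the discriminant-function idea (not the authors' estimator), the sketch below assumes the biomarker is normal within each outcome group with common variance, so the log odds ratio per unit of biomarker equals (mu1 - mu0) / sigma^2, and backs out sigma^2 from pool-level variability using known processing and measurement error variances. The pool size, error variances, and data are hypothetical, and the covariate adjustment and maximum likelihood machinery of the paper are omitted.

        # Simplified discriminant-function log OR from pooled biomarker measurements.
        # Assumptions (for illustration only): X | Y=y ~ Normal(mu_y, sigma2), equal
        # pool size k within outcome groups, and KNOWN processing (sigma2_p) and
        # measurement (sigma2_m) error variances added to each pooled observation,
        # so Var(pooled value) = sigma2 / k + sigma2_p + sigma2_m.

        def mean(xs):
            return sum(xs) / len(xs)

        def var(xs):
            m = mean(xs)
            return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

        def log_or_from_pools(pools_y1, pools_y0, k, sigma2_p, sigma2_m):
            mu1, mu0 = mean(pools_y1), mean(pools_y0)
            # Average the two groups' pool-level variances, then remove the error
            # contributions to recover the individual-level variance sigma2.
            v_obs = (var(pools_y1) + var(pools_y0)) / 2.0
            sigma2 = k * (v_obs - sigma2_p - sigma2_m)
            return (mu1 - mu0) / sigma2

        if __name__ == "__main__":
            cases = [2.4, 1.8, 2.9, 2.1, 2.6]        # hypothetical case-pool values
            controls = [1.2, 1.9, 1.5, 1.0, 1.6]     # hypothetical control-pool values
            print("illustrative log OR estimate: %.2f"
                  % log_or_from_pools(cases, controls, k=2, sigma2_p=0.01, sigma2_m=0.01))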

  1. A Discriminant Function Approach to Adjust for Processing and Measurement Error When a Biomarker is Assayed in Pooled Samples.

    PubMed

    Lyles, Robert H; Van Domelen, Dane; Mitchell, Emily M; Schisterman, Enrique F

    2015-11-01

    Pooling biological specimens prior to performing expensive laboratory assays has been shown to be a cost effective approach for estimating parameters of interest. In addition to requiring specialized statistical techniques, however, the pooling of samples can introduce assay errors due to processing, possibly in addition to measurement error that may be present when the assay is applied to individual samples. Failure to account for these sources of error can result in biased parameter estimates and ultimately faulty inference. Prior research addressing biomarker mean and variance estimation advocates hybrid designs consisting of individual as well as pooled samples to account for measurement and processing (or pooling) error. We consider adapting this approach to the problem of estimating a covariate-adjusted odds ratio (OR) relating a binary outcome to a continuous exposure or biomarker level assessed in pools. In particular, we explore the applicability of a discriminant function-based analysis that assumes normal residual, processing, and measurement errors. A potential advantage of this method is that maximum likelihood estimation of the desired adjusted log OR is straightforward and computationally convenient. Moreover, in the absence of measurement and processing error, the method yields an efficient unbiased estimator for the parameter of interest assuming normal residual errors. We illustrate the approach using real data from an ancillary study of the Collaborative Perinatal Project, and we use simulations to demonstrate the ability of the proposed estimators to alleviate bias due to measurement and processing error. PMID:26593934

  2. Characterizing the SWOT discharge error budget on the Sacramento River, CA

    NASA Astrophysics Data System (ADS)

    Yoon, Y.; Durand, M. T.; Minear, J. T.; Smith, L.; Merry, C. J.

    2013-12-01

    significant degradation due to direct error in the discharge estimates. As the width error increases past 20%, the discharge error budget is dominated by the width error. The above two experiments are performed based on AirSWOT scenarios. In addition, we explore the sensitivity of the algorithm to the SWOT scenarios.

  3. Sequencing error correction without a reference genome

    PubMed Central

    2013-01-01

    Background: Next (second) generation sequencing is an increasingly important tool for many areas of molecular biology; however, care must be taken when interpreting its output. Even a low error rate can cause a large number of errors due to the high number of nucleotides being sequenced. Distinguishing sequencing errors from true biological variants is a challenging task. For organisms without a reference genome this task is even more challenging. Results: We have developed a method for the correction of sequencing errors in data from the Illumina Solexa sequencing platforms. It does not require a reference genome and is of relevance for microRNA studies, unsequenced genomes, variant detection in ultra-deep sequencing and even for RNA-Seq studies of organisms with sequenced genomes where RNA editing is being considered. Conclusions: The derived error model is novel in that it allows different error probabilities for each position along the read, in conjunction with different error rates depending on the particular nucleotides involved in the substitution, and does not force these effects to behave in a multiplicative manner. The model provides error rates which capture the complex effects and interactions of the three main known causes of sequencing error associated with the Illumina platforms. PMID:24350580
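
    To give a flavor of a position- and substitution-specific error model, the sketch below estimates a separate rate for each (position, reference base, observed base) combination from mismatch counts, rather than forcing a multiplicative structure. It is a generic illustration under those assumptions, not the published model or its fitting procedure.

        # Position- and substitution-specific sequencing error rates from counts.
        # Generic illustration: rate[pos][(ref, obs)] = mismatches / coverage at pos.
        from collections import defaultdict

        def estimate_error_rates(alignments, read_length):
            """alignments: iterable of (position, ref_base, observed_base) tuples."""
            mismatch = [defaultdict(int) for _ in range(read_length)]
            coverage = [0] * read_length
            for pos, ref, obs in alignments:
                coverage[pos] += 1
                if obs != ref:
                    mismatch[pos][(ref, obs)] += 1
            rates = []
            for pos in range(read_length):
                if coverage[pos] == 0:
                    rates.append({})
                else:
                    rates.append({sub: n / coverage[pos]
                                  for sub, n in mismatch[pos].items()})
            return rates

        if __name__ == "__main__":
            toy = [(0, "A", "A"), (0, "A", "G"), (1, "C", "C"), (1, "C", "T"),
                   (1, "C", "C"), (2, "G", "G")]          # hypothetical alignment calls
            for pos, r in enumerate(estimate_error_rates(toy, read_length=3)):
                print(pos, r)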

  4. Errors in neuroradiology.

    PubMed

    Caranci, Ferdinando; Tedeschi, Enrico; Leone, Giuseppe; Reginelli, Alfonso; Gatta, Gianluca; Pinto, Antonio; Squillaci, Ettore; Briganti, Francesco; Brunese, Luca

    2015-09-01

    Approximately 4% of radiologic interpretations in daily practice contain errors, and discrepancies occur in 2-20% of reports. Fortunately, most of them are minor-degree errors or, if serious, are found and corrected with sufficient promptness; obviously, diagnostic errors become critical when misinterpretation or misidentification significantly delays medical or surgical treatment. Errors can be summarized into four main categories: observer errors, errors in interpretation, failure to suggest the next appropriate procedure, and failure to communicate in a timely and clinically appropriate manner. The misdiagnosis/misinterpretation rate rises in the emergency setting and in the early stages of the learning curve, as in residency. Para-physiological and pathological pitfalls in neuroradiology include calcification and brain stones, pseudofractures, enlargement of subarachnoid or epidural spaces, ventricular system abnormalities, vascular system abnormalities, intracranial lesions or pseudolesions, and finally neuroradiological emergencies. In order to minimize the possibility of error, it is important to be aware of various presentations of pathology, obtain clinical information, know current practice guidelines, review after interpreting a diagnostic study, suggest follow-up studies when appropriate, and communicate significant abnormal findings appropriately and in a timely fashion directly with the treatment team.

  5. Negligence, genuine error, and litigation.

    PubMed

    Sohn, David H

    2013-01-01

    Not all medical injuries are the result of negligence. In fact, most medical injuries are the result either of the inherent risk in the practice of medicine, or due to system errors, which cannot be prevented simply through fear of disciplinary action. This paper will discuss the differences between adverse events, negligence, and system errors; the current medical malpractice tort system in the United States; and review current and future solutions, including medical malpractice reform, alternative dispute resolution, health courts, and no-fault compensation systems. The current political environment favors investigation of non-cap tort reform remedies; investment into more rational oversight systems, such as health courts or no-fault systems may reap both quantitative and qualitative benefits for a less costly and safer health system. PMID:23426783

  6. Error Prevention Aid

    NASA Technical Reports Server (NTRS)

    1987-01-01

    In a complex computer environment there is ample opportunity for error, a mistake by a programmer, or a software-induced undesirable side effect. In insurance, errors can cost a company heavily, so protection against inadvertent change is a must for the efficient firm. The data processing center at Transport Life Insurance Company has taken a step to guard against accidental changes by adopting a software package called EQNINT (Equations Interpreter Program). EQNINT cross checks the basic formulas in a program against the formulas that make up the major production system. EQNINT assures that formulas are coded correctly and helps catch errors before they affect the customer service or its profitability.

  7. Classification and reduction of pilot error

    NASA Technical Reports Server (NTRS)

    Rogers, W. H.; Logan, A. L.; Boley, G. D.

    1989-01-01

    Human error is a primary or contributing factor in about two-thirds of commercial aviation accidents worldwide. With the ultimate goal of reducing pilot error accidents, this contract effort is aimed at understanding the factors underlying error events and reducing the probability of certain types of errors by modifying underlying factors such as flight deck design and procedures. A review of the literature relevant to error classification was conducted. Classification includes categorizing types of errors, the information processing mechanisms and factors underlying them, and identifying factor-mechanism-error relationships. The classification scheme developed by Jens Rasmussen was adopted because it provided a comprehensive yet basic error classification shell or structure that could easily accommodate addition of details on domain-specific factors. For these purposes, factors specific to the aviation environment were incorporated. Hypotheses concerning the relationship of a small number of underlying factors, information processing mechanisms, and error types identified in the classification scheme were formulated. ASRS data were reviewed and a simulation experiment was performed to evaluate and quantify the hypotheses.

  8. Estimating Bias Error Distributions

    NASA Technical Reports Server (NTRS)

    Liu, Tian-Shu; Finley, Tom D.

    2001-01-01

    This paper formulates the general methodology for estimating the bias error distribution of a device in a measuring domain from less accurate measurements when a minimal number of standard values (typically two values) are available. A new perspective is that the bias error distribution can be found as a solution of an intrinsic functional equation in a domain. Based on this theory, the scaling- and translation-based methods for determining the bias error distribution are developed. These methods are virtually applicable to any device as long as the bias error distribution of the device can be sufficiently described by a power series (a polynomial) or a Fourier series in a domain. These methods have been validated through computational simulations and laboratory calibration experiments for a number of different devices.

  9. Slowing after Observed Error Transfers across Tasks

    PubMed Central

    Wang, Lijun; Pan, Weigang; Tan, Jinfeng; Liu, Congcong; Chen, Antao

    2016-01-01

    . Moreover, the PES effect appears across tasksets with distinct stimuli and response rules in the context of observed errors, reflecting a generic process. Additionally, the slowing effect and improved accuracy in the post-observed error trial do not occur together, suggesting that they are independent behavioral adjustments in the context of observed errors. PMID:26934579

  10. The surveillance error grid.

    PubMed

    Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris

    2014-07-01

    Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to

  11. Alcohol and error processing.

    PubMed

    Holroyd, Clay B; Yeung, Nick

    2003-08-01

    A recent study indicates that alcohol consumption reduces the amplitude of the error-related negativity (ERN), a negative deflection in the electroencephalogram associated with error commission. Here, we explore possible mechanisms underlying this result in the context of two recent theories about the neural system that produces the ERN - one based on principles of reinforcement learning and the other based on response conflict monitoring.

  12. Medication Errors: New EU Good Practice Guide on Risk Minimisation and Error Prevention.

    PubMed

    Goedecke, Thomas; Ord, Kathryn; Newbould, Victoria; Brosch, Sabine; Arlett, Peter

    2016-06-01

    A medication error is an unintended failure in the drug treatment process that leads to, or has the potential to lead to, harm to the patient. Reducing the risk of medication errors is a shared responsibility between patients, healthcare professionals, regulators and the pharmaceutical industry at all levels of healthcare delivery. In 2015, the EU regulatory network released a two-part good practice guide on medication errors to support both the pharmaceutical industry and regulators in the implementation of the changes introduced with the EU pharmacovigilance legislation. These changes included a modification of the 'adverse reaction' definition to include events associated with medication errors, and the requirement for national competent authorities responsible for pharmacovigilance in EU Member States to collaborate and exchange information on medication errors resulting in harm with national patient safety organisations. To facilitate reporting and learning from medication errors, a clear distinction has been made in the guidance between medication errors resulting in adverse reactions, medication errors without harm, intercepted medication errors and potential errors. This distinction is supported by an enhanced MedDRA(®) terminology that allows for coding all stages of the medication use process where the error occurred in addition to any clinical consequences. To better understand the causes and contributing factors, individual case safety reports involving an error should be followed-up with the primary reporter to gather information relevant for the conduct of root cause analysis where this may be appropriate. Such reports should also be summarised in periodic safety update reports and addressed in risk management plans. Any risk minimisation and prevention strategy for medication errors should consider all stages of a medicinal product's life-cycle, particularly the main sources and types of medication errors during product development. This article

  13. Thermodynamics of Error Correction

    NASA Astrophysics Data System (ADS)

    Sartori, Pablo; Pigolotti, Simone

    2015-10-01

    Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  14. Experimental investigation of false positive errors in auditory species occurrence surveys

    USGS Publications Warehouse

    Miller, David A.W.; Weir, Linda A.; McClintock, Brett T.; Grant, Evan H. Campbell; Bailey, Larissa L.; Simons, Theodore R.

    2012-01-01

    False positive errors are a significant component of many ecological data sets, which in combination with false negative errors, can lead to severe biases in conclusions about ecological systems. We present results of a field experiment where observers recorded observations for known combinations of electronically broadcast calling anurans under conditions mimicking field surveys to determine species occurrence. Our objectives were to characterize false positive error probabilities for auditory methods based on a large number of observers, to determine if targeted instruction could be used to reduce false positive error rates, and to establish useful predictors of among-observer and among-species differences in error rates. We recruited 31 observers, ranging in abilities from novice to expert, that recorded detections for 12 species during 180 calling trials (66,960 total observations). All observers made multiple false positive errors and on average 8.1% of recorded detections in the experiment were false positive errors. Additional instruction had only minor effects on error rates. After instruction, false positive error probabilities decreased by 16% for treatment individuals compared to controls with broad confidence interval overlap of 0 (95% CI: -46 to 30%). This coincided with an increase in false negative errors due to the treatment (26%; -3 to 61%). Differences among observers in false positive and in false negative error rates were best predicted by scores from an online test and a self-assessment of observer ability completed prior to the field experiment. In contrast, years of experience conducting call surveys was a weak predictor of error rates. False positive errors were also more common for species that were played more frequently, but were not related to the dominant spectral frequency of the call. Our results corroborate other work that demonstrates false positives are a significant component of species occurrence data collected by auditory

  15. Experimental investigation of false positive errors in auditory species occurrence surveys.

    PubMed

    Miller, David A W; Weir, Linda A; Mcclintock, Brett T; Grant, Evan H Campbell; Bailey, Larissa L; Simons, Theodore R

    2012-07-01

    False positive errors are a significant component of many ecological data sets, which in combination with false negative errors, can lead to severe biases in conclusions about ecological systems. We present results of a field experiment where observers recorded observations for known combinations of electronically broadcast calling anurans under conditions mimicking field surveys to determine species occurrence. Our objectives were to characterize false positive error probabilities for auditory methods based on a large number of observers, to determine if targeted instruction could be used to reduce false positive error rates, and to establish useful predictors of among-observer and among-species differences in error rates. We recruited 31 observers, ranging in abilities from novice to expert, who recorded detections for 12 species during 180 calling trials (66,960 total observations). All observers made multiple false positive errors, and on average 8.1% of recorded detections in the experiment were false positive errors. Additional instruction had only minor effects on error rates. After instruction, false positive error probabilities decreased by 16% for treatment individuals compared to controls with broad confidence interval overlap of 0 (95% CI: -46 to 30%). This coincided with an increase in false negative errors due to the treatment (26%; -3 to 61%). Differences among observers in false positive and in false negative error rates were best predicted by scores from an online test and a self-assessment of observer ability completed prior to the field experiment. In contrast, years of experience conducting call surveys was a weak predictor of error rates. False positive errors were also more common for species that were played more frequently but were not related to the dominant spectral frequency of the call. Our results corroborate other work that demonstrates false positives are a significant component of species occurrence data collected by auditory methods

  16. Statistics of the residual refraction errors in laser ranging data

    NASA Technical Reports Server (NTRS)

    Gardner, C. S.

    1977-01-01

    A theoretical model for the range error covariance was derived by assuming that the residual refraction errors are due entirely to errors in the meteorological data which are used to calculate the atmospheric correction. The properties of the covariance function are illustrated by evaluating the theoretical model for the special case of a dense network of weather stations uniformly distributed within a circle.

  17. Antenna pointing systematic error model derivations

    NASA Technical Reports Server (NTRS)

    Guiar, C. N.; Lansing, F. L.; Riggs, R.

    1987-01-01

    The pointing model used to represent and correct systematic errors for the Deep Space Network (DSN) antennas is presented. Analytical expressions are given in both azimuth-elevation (az-el) and hour angle-declination (ha-dec) mounts for RF axis collimation error, encoder offset, nonorthogonality of axes, axis plane tilt, and structural flexure due to gravity loading. While the residual pointing errors (rms) after correction appear to be within the ten percent of the half-power beamwidth criterion commonly set for good pointing accuracy, the DSN has embarked on an extensive pointing improvement and modeling program aimed toward an order of magnitude higher pointing precision.
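
    The analytical expressions themselves are given in the report; as a hedged illustration of the general form such az-el models take, the sketch below combines a few typical terms (encoder offsets, collimation, axis non-orthogonality, azimuth-axis tilt, and gravity flexure). The term set and coefficient values are generic textbook-style assumptions, not the DSN model.

        # Generic az-el systematic pointing-error model (illustrative term set only).
        import math

        def pointing_correction(az_deg, el_deg, p):
            """Return (d_az, d_el) in degrees for a dictionary p of coefficients:
            az_off, el_off   encoder offsets
            collim           RF-axis collimation error
            nonorth          non-orthogonality of the az and el axes
            tilt_n, tilt_e   azimuth-axis tilt components (north, east)
            flex             gravitational flexure coefficient
            """
            az, el = math.radians(az_deg), math.radians(el_deg)
            d_az = (p["az_off"]
                    + p["collim"] / math.cos(el)
                    + p["nonorth"] * math.tan(el)
                    + (p["tilt_n"] * math.sin(az) - p["tilt_e"] * math.cos(az)) * math.tan(el))
            d_el = (p["el_off"]
                    + p["tilt_n"] * math.cos(az) + p["tilt_e"] * math.sin(az)
                    + p["flex"] * math.cos(el))
            return d_az, d_el

        if __name__ == "__main__":
            coeffs = {"az_off": 0.002, "el_off": -0.001, "collim": 0.001,
                      "nonorth": 0.0005, "tilt_n": 0.0008, "tilt_e": -0.0003,
                      "flex": 0.003}                       # hypothetical values (deg)
            print(pointing_correction(az_deg=120.0, el_deg=45.0, p=coeffs))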

  18. Interactions of timing and prediction error learning.

    PubMed

    Kirkpatrick, Kimberly

    2014-01-01

    Timing and prediction error learning have historically been treated as independent processes, but growing evidence has indicated that they are not orthogonal. Timing emerges at the earliest time point when conditioned responses are observed, and temporal variables modulate prediction error learning in both simple conditioning and cue competition paradigms. In addition, prediction errors, through changes in reward magnitude or value, alter timing of behavior. Thus, there appears to be a bi-directional interaction between timing and prediction error learning. Modern theories have attempted to integrate the two processes with mixed success. A neurocomputational approach to theory development is espoused, which draws on neurobiological evidence to guide and constrain computational model development. Heuristics for future model development are presented with the goal of sparking new approaches to theory development in the timing and prediction error fields.

  19. Interactions of timing and prediction error learning.

    PubMed

    Kirkpatrick, Kimberly

    2014-01-01

    Timing and prediction error learning have historically been treated as independent processes, but growing evidence has indicated that they are not orthogonal. Timing emerges at the earliest time point when conditioned responses are observed, and temporal variables modulate prediction error learning in both simple conditioning and cue competition paradigms. In addition, prediction errors, through changes in reward magnitude or value, alter timing of behavior. Thus, there appears to be a bi-directional interaction between timing and prediction error learning. Modern theories have attempted to integrate the two processes with mixed success. A neurocomputational approach to theory development is espoused, which draws on neurobiological evidence to guide and constrain computational model development. Heuristics for future model development are presented with the goal of sparking new approaches to theory development in the timing and prediction error fields. PMID:23962670

  20. Error monitoring in musicians.

    PubMed

    Maidhof, Clemens

    2013-01-01

    To err is human, and hence even professional musicians make errors occasionally during their performances. This paper summarizes recent work investigating error monitoring in musicians, i.e., the processes and their neural correlates associated with the monitoring of ongoing actions and the detection of deviations from intended sounds. Electroencephalography (EEG) studies reported an early component of the event-related potential (ERP) occurring before the onsets of pitch errors. This component, which can be altered in musicians with focal dystonia, likely reflects processes of error detection and/or error compensation, i.e., attempts to cancel the undesired sensory consequence (a wrong tone) a musician is about to perceive. Thus, auditory feedback seems not to be a prerequisite for error detection, consistent with previous behavioral results. In contrast, when auditory feedback is externally manipulated and thus unexpected, motor performance can be severely distorted, although not all feedback alterations result in performance impairments. Recent studies investigating the neural correlates of feedback processing showed that unexpected feedback elicits an ERP component after note onsets, which shows larger amplitudes during music performance than during mere perception of the same musical sequences. Hence, these results stress the role of motor actions for the processing of auditory information. Furthermore, recent methodological advances like the combination of 3D motion capture techniques with EEG will be discussed. Such combinations of different measures can potentially help to disentangle the roles of different feedback types such as proprioceptive and auditory feedback, and, in general, to arrive at a better understanding of the complex interactions between the motor and auditory domain during error monitoring. Finally, outstanding questions and future directions in this context will be discussed. PMID:23898255

  1. Error reduction when prescribing neonatal parenteral nutrition.

    PubMed

    Brown, Cynthia L; Garrison, Nancy A; Hutchison, Alastair A

    2007-08-01

    A neonatal intensive care unit audit of 204 parenteral nutrition (PN) orders revealed a 27.9% PN prescribing error rate, with errors by pediatric residents exceeding those by neonatal nurse practitioners (NNPs) (39% versus 16%; P < 0.001). Our objective was to reduce the PN prescribing error rate by implementing an ordering improvement process. An interactive computerized PN worksheet, used voluntarily, was introduced and its impact analyzed in a retrospective cross-sectional study. A time management study was performed. Analysis of 480 PN orders revealed that the PN prescribing error rate was 11.7%, with no difference in error rates between pediatric residents and NNPs (12.3% versus 10.5%). Use of the interactive computerized PN worksheet was associated with a reduction in the prescribing error rate from 14.5 to 6.8% for all PN orders (P = 0.016) and from 29.3 to 9.6% for peripheral PN orders (P = 0.002). All 12 errors that occurred in the 177 PN prescriptions completed using the computerized PN worksheet were due to avoidable data entry or transcription mistakes. The time management study led to system improvements in PN ordering. We recommend that an interactive computerized PN worksheet be used to prescribe peripheral PN and thus reduce errors.

  2. Low-dimensional Representation of Error Covariance

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; Cohn, Stephen E.; Todling, Ricardo; Marchesin, Dan

    2000-01-01

    Ensemble and reduced-rank approaches to prediction and assimilation rely on low-dimensional approximations of the estimation error covariances. Here stability properties of the forecast/analysis cycle for linear, time-independent systems are used to identify factors that cause the steady-state analysis error covariance to admit a low-dimensional representation. A useful measure of forecast/analysis cycle stability is the bound matrix, a function of the dynamics, observation operator and assimilation method. Upper and lower estimates for the steady-state analysis error covariance matrix eigenvalues are derived from the bound matrix. The estimates generalize to time-dependent systems. If much of the steady-state analysis error variance is due to a few dominant modes, the leading eigenvectors of the bound matrix approximate those of the steady-state analysis error covariance matrix. The analytical results are illustrated in two numerical examples where the Kalman filter is carried to steady state. The first example uses the dynamics of a generalized advection equation exhibiting nonmodal transient growth. Failure to observe growing modes leads to increased steady-state analysis error variances. Leading eigenvectors of the steady-state analysis error covariance matrix are well approximated by leading eigenvectors of the bound matrix. The second example uses the dynamics of a damped baroclinic wave model. The leading eigenvectors of a lowest-order approximation of the bound matrix are shown to approximate well the leading eigenvectors of the steady-state analysis error covariance matrix.
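
    The role of the leading eigenvectors can be illustrated by iterating the Kalman filter for a small linear, time-invariant system until the analysis error covariance settles and then examining its eigen-decomposition. The sketch below uses made-up system matrices and is not the advection or baroclinic wave example of the paper.

        # Steady-state analysis error covariance of a Kalman filter for a small
        # linear time-invariant system, and its leading eigenvectors.
        import numpy as np

        def steady_state_analysis_cov(A, H, Q, R, n_iter=500):
            """Iterate the forecast/analysis cycle a fixed number of times
            (enough to converge for this small example)."""
            n = A.shape[0]
            Pa = np.eye(n)
            for _ in range(n_iter):
                Pf = A @ Pa @ A.T + Q                     # forecast error covariance
                S = H @ Pf @ H.T + R
                K = Pf @ H.T @ np.linalg.inv(S)           # Kalman gain
                Pa = (np.eye(n) - K @ H) @ Pf             # analysis error covariance
            return Pa

        if __name__ == "__main__":
            A = np.array([[0.9, 0.2, 0.0],
                          [0.0, 0.8, 0.3],
                          [0.0, 0.0, 0.7]])               # hypothetical dynamics
            H = np.array([[1.0, 0.0, 0.0]])               # observe first component only
            Q = 0.05 * np.eye(3)
            R = np.array([[0.1]])
            Pa = steady_state_analysis_cov(A, H, Q, R)
            w, v = np.linalg.eigh(Pa)
            print("eigenvalues:", w[::-1])                # dominant modes first
            print("leading eigenvector:", v[:, -1])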

  3. Physician's error: medical or legal concept?

    PubMed

    Mujovic-Zornic, Hajrija M

    2010-06-01

    This article deals with the common term covering the different physician's errors that often happen in the daily practice of health care. The author begins with the term medical malpractice, defined broadly as the practice of unjustified acts or failures to act on the part of a physician or other health care professionals, which results in harm to the patient. It is a common term that includes many types of medical errors, especially physician's errors. The author also discusses the concept of physician's error in particular, which is no longer understood in the traditional way only as a classic error of doing something manually wrong without the necessary skills (medical concept), but as an error that violates the patient's basic rights and has a final legal consequence (legal concept). In every case the essential element of liability is to establish this error as a breach of the physician's duty. The first point to note is that the standard of procedure and the standard of due care against which the physician will be judged is not going to be that of the ordinary reasonable man who enjoys no medical expertise. The court's decision should give the final answer and legal qualification in each concrete case. The author's conclusion is that higher protection of human rights in the area of health equally demands a broader concept of physician's error, with the accent on its legal subject matter.

  4. Preventable errors in organ transplantation: an emerging patient safety issue?

    PubMed

    Ison, M G; Holl, J L; Ladner, D

    2012-09-01

    Several widely publicized errors in transplantation including a death due to ABO incompatibility, two HIV transmissions and two hepatitis C virus (HCV) transmissions have raised concerns about medical errors in organ transplantation. The root cause analysis of each of these events revealed preventable failures in the systems and processes of care as the underlying causes. In each event, no standardized system or redundant process was in place to mitigate the failures that led to the error. Additional system and process vulnerabilities such as poor clinician communication, erroneous data transcription and transmission were also identified. Organ transplantation, because it is highly complex, often stresses the systems and processes of care and, therefore, offers a unique opportunity to proactively identify vulnerabilities and potential failures. Initial steps have been taken to understand such issues through the OPTN/UNOS Operations and Safety Committee, the OPTN/UNOS Disease Transmission Advisory Committee (DTAC) and the current A2ALL ancillary Safety Study. However, to effectively improve patient safety in organ transplantation, the development of a process for reporting of preventable errors that affords protection and the support of empiric research is critical. Further, the transplant community needs to embrace the implementation of evidence-based system and process improvements that will mitigate existing safety vulnerabilities.

  5. Errata: Papers in Error Analysis.

    ERIC Educational Resources Information Center

    Svartvik, Jan, Ed.

    Papers presented at the symposium of error analysis in Lund, Sweden, in September 1972, approach error analysis specifically in its relation to foreign language teaching and second language learning. Error analysis is defined as having three major aspects: (1) the description of the errors, (2) the explanation of errors by means of contrastive…

  6. Process Recovery after CaO Addition Due to Granule Formation in a CSTR Co-Digester—A Tool to Influence the Composition of the Microbial Community and Stabilize the Process?

    PubMed Central

    Liebrich, Marietta; Kleyböcker, Anne; Kasina, Monika; Miethling-Graff, Rona; Kassahun, Andrea; Würdemann, Hilke

    2016-01-01

    The composition, structure and function of granules formed during process recovery with calcium oxide in a laboratory-scale fermenter fed with sewage sludge and rapeseed oil were studied. In the course of over-acidification and successful process recovery, only minor changes were observed in the bacterial community of the digestate, while granules appeared during recovery. Fluorescence microscopic analysis of the granules showed a close spatial relationship between calcium and oil and/or long chain fatty acids. This finding further substantiated the hypothesis that calcium precipitated with carbon of organic origin and reduced the negative effects of overloading with oil. Furthermore, the enrichment of phosphate minerals in the granules was shown, and molecular biological analyses detected polyphosphate-accumulating organisms as well as methanogenic archaea in the core. Organisms related to Methanoculleus receptaculi were detected in the inner zones of a granule, whereas they were present in the digestate only after process recovery. This finding indicated more favorable microhabitats inside the granules that supported process recovery. Thus, the granule formation triggered by calcium oxide addition served as a tool to influence the composition of the microbial community and to stabilize the process after overloading with oil.

  7. Process Recovery after CaO Addition Due to Granule Formation in a CSTR Co-Digester-A Tool to Influence the Composition of the Microbial Community and Stabilize the Process?

    PubMed

    Liebrich, Marietta; Kleyböcker, Anne; Kasina, Monika; Miethling-Graff, Rona; Kassahun, Andrea; Würdemann, Hilke

    2016-03-17

    The composition, structure and function of granules formed during process recovery with calcium oxide in a laboratory-scale fermenter fed with sewage sludge and rapeseed oil were studied. In the course of over-acidification and successful process recovery, only minor changes were observed in the bacterial community of the digestate, while granules appeared during recovery. Fluorescence microscopic analysis of the granules showed a close spatial relationship between calcium and oil and/or long chain fatty acids. This finding further substantiated the hypothesis that calcium precipitated with carbon of organic origin and reduced the negative effects of overloading with oil. Furthermore, the enrichment of phosphate minerals in the granules was shown, and molecular biological analyses detected polyphosphate-accumulating organisms as well as methanogenic archaea in the core. Organisms related to Methanoculleus receptaculi were detected in the inner zones of a granule, whereas they were present in the digestate only after process recovery. This finding indicated more favorable microhabitats inside the granules that supported process recovery. Thus, the granule formation triggered by calcium oxide addition served as a tool to influence the composition of the microbial community and to stabilize the process after overloading with oil.

  8. Process Recovery after CaO Addition Due to Granule Formation in a CSTR Co-Digester-A Tool to Influence the Composition of the Microbial Community and Stabilize the Process?

    PubMed

    Liebrich, Marietta; Kleyböcker, Anne; Kasina, Monika; Miethling-Graff, Rona; Kassahun, Andrea; Würdemann, Hilke

    2016-01-01

    The composition, structure and function of granules formed during process recovery with calcium oxide in a laboratory-scale fermenter fed with sewage sludge and rapeseed oil were studied. In the course of over-acidification and successful process recovery, only minor changes were observed in the bacterial community of the digestate, while granules appeared during recovery. Fluorescence microscopic analysis of the granules showed a close spatial relationship between calcium and oil and/or long chain fatty acids. This finding further substantiated the hypothesis that calcium precipitated with carbon of organic origin and reduced the negative effects of overloading with oil. Furthermore, the enrichment of phosphate minerals in the granules was shown, and molecular biological analyses detected polyphosphate-accumulating organisms as well as methanogenic archaea in the core. Organisms related to Methanoculleus receptaculi were detected in the inner zones of a granule, whereas they were present in the digestate only after process recovery. This finding indicated more favorable microhabitats inside the granules that supported process recovery. Thus, the granule formation triggered by calcium oxide addition served as a tool to influence the composition of the microbial community and to stabilize the process after overloading with oil. PMID:27681911

  9. Process Recovery after CaO Addition Due to Granule Formation in a CSTR Co-Digester—A Tool to Influence the Composition of the Microbial Community and Stabilize the Process?

    PubMed Central

    Liebrich, Marietta; Kleyböcker, Anne; Kasina, Monika; Miethling-Graff, Rona; Kassahun, Andrea; Würdemann, Hilke

    2016-01-01

    The composition, structure and function of granules formed during process recovery with calcium oxide in a laboratory-scale fermenter fed with sewage sludge and rapeseed oil were studied. In the course of over-acidification and successful process recovery, only minor changes were observed in the bacterial community of the digestate, while granules appeared during recovery. Fluorescence microscopic analysis of the granules showed a close spatial relationship between calcium and oil and/or long chain fatty acids. This finding further substantiated the hypothesis that calcium precipitated with carbon of organic origin and reduced the negative effects of overloading with oil. Furthermore, the enrichment of phosphate minerals in the granules was shown, and molecular biological analyses detected polyphosphate-accumulating organisms as well as methanogenic archaea in the core. Organisms related to Methanoculleus receptaculi were detected in the inner zones of a granule, whereas they were present in the digestate only after process recovery. This finding indicated more favorable microhabitats inside the granules that supported process recovery. Thus, the granule formation triggered by calcium oxide addition served as a tool to influence the composition of the microbial community and to stabilize the process after overloading with oil. PMID:27681911

  10. Navigation and meteorological error equations for some aerodynamic parameters

    NASA Technical Reports Server (NTRS)

    Krikorian, M. J.; Rice, J.; Mitchell, P.

    1979-01-01

    Mathematical equations for the analysis of the errors that are expected in a set of postflight aerodynamic parameters are presented. The errors are due to inaccuracies in the Shuttle best estimate trajectory and in the meteorological data obtained in support of the flights. The error analysis shows how the parameter vector, Z, and its associated error covariance matrix, C sub Z, are calculated from a given state vector, X, and its associated covariance matrix, C sub X.
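    As an illustrative sketch of the covariance mapping described above, the Python fragment below propagates a state covariance C sub X into a parameter covariance C sub Z through a numerically estimated Jacobian. The parameter function (dynamic pressure from density and airspeed) is a hypothetical stand-in, not the report's aerodynamic equations.

      import numpy as np

      def propagate_covariance(f, x, C_x, eps=1e-6):
          """First-order propagation C_z = J C_x J^T, with J estimated by finite differences."""
          x = np.asarray(x, dtype=float)
          z0 = np.atleast_1d(f(x))
          J = np.zeros((z0.size, x.size))
          for j in range(x.size):
              dx = np.zeros_like(x)
              dx[j] = eps
              J[:, j] = (np.atleast_1d(f(x + dx)) - z0) / eps
          return z0, J @ C_x @ J.T

      # Toy example: dynamic pressure q = 0.5 * rho * V**2 from a state [rho, V]
      f = lambda s: 0.5 * s[0] * s[1] ** 2
      x = np.array([1.2, 100.0])                # air density, airspeed (assumed values)
      C_x = np.diag([0.01 ** 2, 2.0 ** 2])      # assumed state variances
      z, C_z = propagate_covariance(f, x, C_x)
      print(z, np.sqrt(C_z[0, 0]))              # parameter and its 1-sigma error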

  11. Modeling human response errors in synthetic flight simulator domain

    NASA Technical Reports Server (NTRS)

    Ntuen, Celestine A.

    1992-01-01

    This paper presents a control theoretic approach to modeling human response errors (HRE) in the flight simulation domain. The human pilot is modeled as a supervisor of a highly automated system. The synthesis uses the theory of optimal control pilot modeling for integrating the pilot's observation error and the error due to the simulation model (experimental error). Methods for solving the HRE problem are suggested. Experimental verification of the models will be tested in a flight quality handling simulation.

  12. Computation of Standard Errors

    PubMed Central

    Dowd, Bryan E; Greene, William H; Norton, Edward C

    2014-01-01

    Objectives We discuss the problem of computing the standard errors of functions involving estimated parameters and provide the relevant computer code for three different computational approaches using two popular computer packages. Study Design We show how to compute the standard errors of several functions of interest: the predicted value of the dependent variable for a particular subject, and the effect of a change in an explanatory variable on the predicted value of the dependent variable for an individual subject and average effect for a sample of subjects. Empirical Application Using a publicly available dataset, we explain three different methods of computing standard errors: the delta method, Krinsky–Robb, and bootstrapping. We provide computer code for Stata 12 and LIMDEP 10/NLOGIT 5. Conclusions In most applications, choice of the computational method for standard errors of functions of estimated parameters is a matter of convenience. However, when computing standard errors of the sample average of functions that involve both estimated parameters and nonstochastic explanatory variables, it is important to consider the sources of variation in the function's values. PMID:24800304
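    A minimal Python sketch of the three approaches named in the abstract (delta method, Krinsky-Robb, bootstrap), applied to a simple function of an estimated mean. The paper's own code targets Stata and LIMDEP, so everything here is an illustrative stand-in rather than the authors' implementation.

      import numpy as np
      rng = np.random.default_rng(0)

      x = rng.normal(loc=2.0, scale=1.0, size=500)      # simulated data
      mu_hat = x.mean()
      se_mu = x.std(ddof=1) / np.sqrt(x.size)

      # Function of the estimated parameter: g(mu) = exp(mu)
      # Delta method: se(g) ~ |g'(mu)| * se(mu), and here g'(mu) = exp(mu)
      se_delta = np.exp(mu_hat) * se_mu

      # Krinsky-Robb: draw parameters from their estimated sampling distribution
      draws = rng.normal(mu_hat, se_mu, size=5000)
      se_kr = np.exp(draws).std(ddof=1)

      # Bootstrap: resample the raw data and recompute the function each time
      boot = [np.exp(rng.choice(x, size=x.size, replace=True).mean()) for _ in range(2000)]
      se_boot = np.std(boot, ddof=1)

      print(se_delta, se_kr, se_boot)   # the three estimates should be close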

  13. A Bayesian Measurement Error Model for Misaligned Radiographic Data

    SciTech Connect

    Lennox, Kristin P.; Glascoe, Lee G.

    2013-09-06

    An understanding of the inherent variability in micro-computed tomography (micro-CT) data is essential to tasks such as statistical process control and the validation of radiographic simulation tools. The data present unique challenges to variability analysis due to the relatively low resolution of radiographs, and also due to minor variations from run to run which can result in misalignment or magnification changes between repeated measurements of a sample. Positioning changes artificially inflate the variability of the data in ways that mask true physical phenomena. We present a novel Bayesian nonparametric regression model that incorporates both additive and multiplicative measurement error in addition to heteroscedasticity to address this problem. We also use this model to assess the effects of sample thickness and sample position on measurement variability for an aluminum specimen. Supplementary materials for this article are available online.

  14. A Bayesian Measurement Error Model for Misaligned Radiographic Data

    DOE PAGES

    Lennox, Kristin P.; Glascoe, Lee G.

    2013-09-06

    An understanding of the inherent variability in micro-computed tomography (micro-CT) data is essential to tasks such as statistical process control and the validation of radiographic simulation tools. The data present unique challenges to variability analysis due to the relatively low resolution of radiographs, and also due to minor variations from run to run which can result in misalignment or magnification changes between repeated measurements of a sample. Positioning changes artificially inflate the variability of the data in ways that mask true physical phenomena. We present a novel Bayesian nonparametric regression model that incorporates both additive and multiplicative measurement error in addition to heteroscedasticity to address this problem. We also use this model to assess the effects of sample thickness and sample position on measurement variability for an aluminum specimen. Supplementary materials for this article are available online.

  15. Soft Error Vulnerability of Iterative Linear Algebra Methods

    SciTech Connect

    Bronevetsky, G; de Supinski, B

    2008-01-19

    Devices are increasingly vulnerable to soft errors as their feature sizes shrink. Previously, soft error rates were significant primarily in space and high-atmospheric computing. Modern architectures now use features so small at sufficiently low voltages that soft errors are becoming important even at terrestrial altitudes. Due to their large number of components, supercomputers are particularly susceptible to soft errors. Since many large scale parallel scientific applications use iterative linear algebra methods, the soft error vulnerability of these methods constitutes a large fraction of the applications' overall vulnerability. Many users consider these methods invulnerable to most soft errors since they converge from an imprecise solution to a precise one. However, we show in this paper that iterative methods are vulnerable to soft errors, exhibiting both silent data corruptions and poor ability to detect errors. Further, we evaluate a variety of soft error detection and tolerance techniques, including checkpointing, linear matrix encodings, and residual tracking techniques.
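    A toy illustration of the vulnerability and of residual tracking as a detector, assuming a single bit flip injected into one element of a Jacobi iteration. This is only a sketch of the general idea, not the paper's experimental setup or fault model.

      import struct
      import numpy as np

      def flip_bit(value, bit):
          """Flip one bit in the IEEE-754 representation of a float (simulated soft error)."""
          (as_int,) = struct.unpack("<Q", struct.pack("<d", value))
          (flipped,) = struct.unpack("<d", struct.pack("<Q", as_int ^ (1 << bit)))
          return flipped

      rng = np.random.default_rng(1)
      A = np.diag(np.full(50, 4.0)) + np.diag(np.full(49, -1.0), 1) + np.diag(np.full(49, -1.0), -1)
      b = rng.normal(size=50)

      x = np.zeros(50)
      D, R = np.diag(A), A - np.diag(np.diag(A))
      for k in range(200):
          x = (b - R @ x) / D                 # Jacobi sweep
          if k == 100:                        # inject a single upset in one element
              x[10] = flip_bit(x[10], 52)     # flip the lowest exponent bit
          res = np.linalg.norm(b - A @ x)
          if k in (99, 100, 101, 199):
              print(k, res)                   # residual tracking exposes the upset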

  16. A Fully Bayesian Approach to Improved Calibration and Prediction of Groundwater Models With Structure Error

    NASA Astrophysics Data System (ADS)

    Xu, T.; Valocchi, A. J.

    2014-12-01

    Effective water resource management typically relies on numerical models to analyse groundwater flow and solute transport processes. These models are usually subject to model structure error due to simplification and/or misrepresentation of the real system. As a result, the model outputs may systematically deviate from measurements, thus violating a key assumption for traditional regression-based calibration and uncertainty analysis. On the other hand, model structure error induced bias can be described statistically in an inductive, data-driven way based on historical model-to-measurement misfit. We adopt a fully Bayesian approach that integrates a Gaussian process error model to account for model structure error to the calibration, prediction and uncertainty analysis of groundwater models. The posterior distributions of parameters of the groundwater model and the Gaussian process error model are jointly inferred using DREAM, an efficient Markov chain Monte Carlo sampler. We test the usefulness of the fully Bayesian approach towards a synthetic case study of surface-ground water interaction under changing pumping conditions. We first illustrate through this example that traditional least squares regression without accounting for model structure error yields biased parameter estimates due to parameter compensation as well as biased predictions. In contrast, the Bayesian approach gives less biased parameter estimates. Moreover, the integration of a Gaussian process error model significantly reduces predictive bias and leads to prediction intervals that are more consistent with observations. The results highlight the importance of explicit treatment of model structure error especially in circumstances where subsequent decision-making and risk analysis require accurate prediction and uncertainty quantification. In addition, the data-driven error modelling approach is capable of extracting more information from observation data than using a groundwater model alone.

  17. Addition of the Neurokinin-1-Receptor Antagonist (RA) Aprepitant to a 5-Hydroxytryptamine-RA and Dexamethasone in the Prophylaxis of Nausea and Vomiting Due to Radiation Therapy With Concomitant Cisplatin

    SciTech Connect

    Jahn, Franziska; Jahn, Patrick; Sieker, Frank; Vordermark, Dirk; Jordan, Karin

    2015-08-01

    Purpose: To assess, in a prospective, observational study, the safety and efficacy of the addition of the neurokinin-1-receptor antagonist (NK1-RA) aprepitant to concomitant radiochemotherapy, for the prophylaxis of radiation therapy–induced nausea and vomiting. Patients and Methods: This prospective observational study compared the antiemetic efficacy of an NK1-RA (aprepitant), a 5-hydroxytryptamine-RA, and dexamethasone (aprepitant regimen) versus a 5-hydroxytryptamine-RA and dexamethasone (control regimen) in patients receiving concomitant radiochemotherapy with cisplatin at the Department of Radiation Oncology, University Hospital Halle (Saale), Germany. The primary endpoint was complete response in the overall phase, defined as no vomiting and no use of rescue therapy in this period. Results: Fifty-nine patients treated with concomitant radiochemotherapy with cisplatin were included in this study. Thirty-one patients received the aprepitant regimen and 29 the control regimen. The overall complete response rates for cycles 1 and 2 were 75.9% and 64.5% for the aprepitant group and 60.7% and 54.2% for the control group, respectively. Although a 15.2% absolute difference was reached in cycle 1, a statistical significance was not detected (P=.22). Furthermore maximum nausea was 1.58 ± 1.91 in the control group and 0.73 ± 1.79 in the aprepitant group (P=.084); for the head-and-neck subset, 2.23 ± 2.13 in the control group and 0.64 ± 1.77 in the aprepitant group, respectively (P=.03). Conclusion: This is the first study of an NK1-RA–containing antiemetic prophylaxis regimen in patients receiving concomitant radiochemotherapy. Although the primary endpoint was not obtained, the absolute difference of 10% in efficacy was reached, which is defined as clinically meaningful for patients by international guidelines groups. Randomized phase 3 studies are necessary to further define the potential role of an NK1-RA in this setting.

  18. Reduction of Orifice-Induced Pressure Errors

    NASA Technical Reports Server (NTRS)

    Plentovich, Elizabeth B.; Gloss, Blair B.; Eves, John W.; Stack, John P.

    1987-01-01

    Use of porous-plug orifice reduces or eliminates errors, induced by orifice itself, in measuring static pressure on airfoil surface in wind-tunnel experiments. Piece of sintered metal press-fitted into static-pressure orifice so it matches surface contour of model. Porous material reduces orifice-induced pressure error associated with conventional orifice of same or smaller diameter. Also reduces or eliminates additional errors in pressure measurement caused by orifice imperfections. Provides more accurate measurements in regions with very thin boundary layers.

  19. Dialogues on prediction errors.

    PubMed

    Niv, Yael; Schoenbaum, Geoffrey

    2008-07-01

    The recognition that computational ideas from reinforcement learning are relevant to the study of neural circuits has taken the cognitive neuroscience community by storm. A central tenet of these models is that discrepancies between actual and expected outcomes can be used for learning. Neural correlates of such prediction-error signals have been observed now in midbrain dopaminergic neurons, striatum, amygdala and even prefrontal cortex, and models incorporating prediction errors have been invoked to explain complex phenomena such as the transition from goal-directed to habitual behavior. Yet, like any revolution, the fast-paced progress has left an uneven understanding in its wake. Here, we provide answers to ten simple questions about prediction errors, with the aim of exposing both the strengths and the limitations of this active area of neuroscience research.

  20. Experimental Quantum Error Detection

    PubMed Central

    Jin, Xian-Min; Yi, Zhen-Huan; Yang, Bin; Zhou, Fei; Yang, Tao; Peng, Cheng-Zhi

    2012-01-01

    Faithful transmission of quantum information is a crucial ingredient in quantum communication networks. To overcome the unavoidable decoherence in a noisy channel, to date, many efforts have been made to transmit one state by consuming large numbers of time-synchronized ancilla states. However, such huge demands of quantum resources are hard to meet with current technology and this restricts practical applications. Here we experimentally demonstrate quantum error detection, an economical approach to reliably protecting a qubit against bit-flip errors. Arbitrary unknown polarization states of single photons and entangled photons are converted into time bins deterministically via a modified Franson interferometer. Noise arising in both 10 m and 0.8 km fiber, which induces associated errors on the reference frame of time bins, is filtered when photons are detected. The demonstrated resource efficiency and state independence make this protocol a promising candidate for implementing a real-world quantum communication network. PMID:22953047

  1. Error Free Software

    NASA Technical Reports Server (NTRS)

    1985-01-01

    A mathematical theory for development of "higher order" software to catch computer mistakes resulted from a Johnson Space Center contract for Apollo spacecraft navigation. Two women who were involved in the project formed Higher Order Software, Inc. to develop and market the system of error analysis and correction. They designed software which is logically error-free, which, in one instance, was found to increase productivity by 600%. USE.IT defines its objectives using AXES -- a user can write in English and the system converts to computer languages. It is employed by several large corporations.

  2. Phosphazene additives

    DOEpatents

    Harrup, Mason K; Rollins, Harry W

    2013-11-26

    An additive comprising a phosphazene compound that has at least two reactive functional groups and at least one capping functional group bonded to phosphorus atoms of the phosphazene compound. One of the at least two reactive functional groups is configured to react with cellulose and the other of the at least two reactive functional groups is configured to react with a resin, such as an amine resin of a polycarboxylic acid resin. The at least one capping functional group is selected from the group consisting of a short chain ether group, an alkoxy group, or an aryloxy group. Also disclosed are an additive-resin admixture, a method of treating a wood product, and a wood product.

  3. Potlining Additives

    SciTech Connect

    Rudolf Keller

    2004-08-10

    In this project, a concept to improve the performance of aluminum production cells by introducing potlining additives was examined and tested. Boron oxide was added to cathode blocks, and titanium was dissolved in the metal pool; this resulted in the formation of titanium diboride and caused the molten aluminum to wet the carbonaceous cathode surface. Such wetting reportedly leads to operational improvements and extended cell life. In addition, boron oxide suppresses cyanide formation. This final report presents and discusses the results of this project. Substantial economic benefits for the practical implementation of the technology are projected, especially for modern cells with graphitized blocks. For example, with an energy savings of about 5% and an increase in pot life from 1500 to 2500 days, a cost savings of $ 0.023 per pound of aluminum produced is projected for a 200 kA pot.

  4. Effect of Numerical Error on Gravity Field Estimation for GRACE and Future Gravity Missions

    NASA Astrophysics Data System (ADS)

    McCullough, Christopher; Bettadpur, Srinivas

    2015-04-01

    In recent decades, gravity field determination from low Earth orbiting satellites, such as the Gravity Recovery and Climate Experiment (GRACE), has become increasingly more effective due to the incorporation of high accuracy measurement devices. Since instrumentation quality will only increase in the near future and the gravity field determination process is computationally and numerically intensive, numerical error from the use of double precision arithmetic will eventually become a prominent error source. While using double-extended or quadruple precision arithmetic will reduce these errors, the numerical limitations of current orbit determination algorithms and processes must be accurately identified and quantified in order to adequately inform the science data processing techniques of future gravity missions. The most obvious numerical limitation in the orbit determination process is evident in the comparison of measured observables with computed values, derived from mathematical models relating the satellites' numerically integrated state to the observable. Significant error in the computed trajectory will corrupt this comparison and induce error in the least squares solution of the gravitational field. In addition, errors in the numerically computed trajectory propagate into the evaluation of the mathematical measurement model's partial derivatives. These errors amalgamate in turn with numerical error from the computation of the state transition matrix, computed using the variational equations of motion, in the least squares mapping matrix. Finally, the solution of the linearized least squares system, computed using a QR factorization, is also susceptible to numerical error. Certain interesting combinations of each of these numerical errors are examined in the framework of GRACE gravity field determination to analyze and quantify their effects on gravity field recovery.
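    A small sketch of how round-off accumulates in long running sums and how compensated (Kahan) summation recovers most of the loss. A naive single-precision accumulation stands in for the much longer arithmetic chains of orbit integration and least-squares accumulation; it is an illustration of the precision issue, not the GRACE processing chain.

      import numpy as np

      rng = np.random.default_rng(2)
      terms = rng.uniform(0.0, 1.0, size=200_000).astype(np.float32)
      reference = float(np.sum(terms.astype(np.float64)))   # higher-precision reference

      naive = np.float32(0.0)
      comp = np.float32(0.0)
      carry = np.float32(0.0)
      for t in terms:
          naive += t                    # plain running sum in working precision
          y = t - carry                 # Kahan compensated summation
          s = comp + y
          carry = (s - comp) - y
          comp = s

      print(abs(float(naive) - reference))   # round-off grows with the number of terms
      print(abs(float(comp) - reference))    # compensation recovers most of the loss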

  5. Estimating genotype error rates from high-coverage next-generation sequence data.

    PubMed

    Wall, Jeffrey D; Tang, Ling Fung; Zerbe, Brandon; Kvale, Mark N; Kwok, Pui-Yan; Schaefer, Catherine; Risch, Neil

    2014-11-01

    Exome and whole-genome sequencing studies are becoming increasingly common, but little is known about the accuracy of the genotype calls made by the commonly used platforms. Here we use replicate high-coverage sequencing of blood and saliva DNA samples from four European-American individuals to estimate lower bounds on the error rates of Complete Genomics and Illumina HiSeq whole-genome and whole-exome sequencing. Error rates for nonreference genotype calls range from 0.1% to 0.6%, depending on the platform and the depth of coverage. Additionally, we found (1) no difference in the error profiles or rates between blood and saliva samples; (2) Complete Genomics sequences had substantially higher error rates than Illumina sequences had; (3) error rates were higher (up to 6%) for rare or unique variants; (4) error rates generally declined with genotype quality (GQ) score, but in a nonlinear fashion for the Illumina data, likely due to loss of specificity of GQ scores greater than 60; and (5) error rates increased with increasing depth of coverage for the Illumina data. These findings, especially (3)-(5), suggest that caution should be taken in interpreting the results of next-generation sequencing-based association studies, and even more so in clinical application of this technology in the absence of validation by other more robust sequencing or genotyping methods.
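    A minimal sketch of estimating a lower bound on the genotype error rate from replicate concordance, using simulated calls with an assumed 0.2% per-call error rate rather than real sequencing data.

      import numpy as np

      # Toy replicate comparison: genotype calls coded 0/1/2 copies of the non-reference
      # allele for the same individual sequenced twice (hypothetical error rate of 0.2%).
      rng = np.random.default_rng(3)
      truth = rng.choice([0, 1, 2], size=100_000, p=[0.90, 0.08, 0.02])

      def add_errors(calls, rate):
          noisy = calls.copy()
          hit = rng.random(calls.size) < rate
          noisy[hit] = (noisy[hit] + 1) % 3     # a miscall shifts the genotype
          return noisy

      rep1, rep2 = add_errors(truth, 0.002), add_errors(truth, 0.002)
      nonref = (rep1 > 0) | (rep2 > 0)          # restrict to non-reference calls
      discordant = nonref & (rep1 != rep2)

      # Replicate discordance bounds the per-call error rate from below, because errors
      # that hit both replicates in the same way are invisible to the comparison.
      print(discordant.sum() / nonref.sum())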

  6. Assessment of Aliasing Errors in Low-Degree Coefficients Inferred from GPS Data.

    PubMed

    Wei, Na; Fang, Rongxin

    2016-01-01

    With sparse and uneven site distribution, Global Positioning System (GPS) data is just barely able to infer low-degree coefficients in the surface mass field. The unresolved higher-degree coefficients turn out to introduce aliasing errors into the estimates of low-degree coefficients. To reduce the aliasing errors, the optimal truncation degree should be employed. Using surface displacements simulated from loading models, we theoretically prove that the optimal truncation degree should be degree 6-7 for a GPS inversion and degree 20 for combining GPS and Ocean Bottom Pressure (OBP) with no additional regularization. The optimal truncation degree should be decreased to degree 4-5 for real GPS data. Additionally, we prove that a Scaled Sensitivity Matrix (SSM) approach can be used to quantify the aliasing errors due to any one or any combination of unresolved higher degrees, which is beneficial to identify the major error source from among all the unresolved higher degrees. Results show that the unresolved higher degrees lower than degree 20 are the major error source for global inversion. We also theoretically prove that the SSM approach can be used to mitigate the aliasing errors in a GPS inversion, if the neglected higher degrees are well known from other sources. PMID:27187392

  7. Assessment of Aliasing Errors in Low-Degree Coefficients Inferred from GPS Data

    PubMed Central

    Wei, Na; Fang, Rongxin

    2016-01-01

    With sparse and uneven site distribution, Global Positioning System (GPS) data is just barely able to infer low-degree coefficients in the surface mass field. The unresolved higher-degree coefficients turn out to introduce aliasing errors into the estimates of low-degree coefficients. To reduce the aliasing errors, the optimal truncation degree should be employed. Using surface displacements simulated from loading models, we theoretically prove that the optimal truncation degree should be degree 6–7 for a GPS inversion and degree 20 for combining GPS and Ocean Bottom Pressure (OBP) with no additional regularization. The optimal truncation degree should be decreased to degree 4–5 for real GPS data. Additionally, we prove that a Scaled Sensitivity Matrix (SSM) approach can be used to quantify the aliasing errors due to any one or any combination of unresolved higher degrees, which is beneficial to identify the major error source from among all the unresolved higher degrees. Results show that the unresolved higher degrees lower than degree 20 are the major error source for global inversion. We also theoretically prove that the SSM approach can be used to mitigate the aliasing errors in a GPS inversion, if the neglected higher degrees are well known from other sources. PMID:27187392

  8. Assessment of Aliasing Errors in Low-Degree Coefficients Inferred from GPS Data.

    PubMed

    Wei, Na; Fang, Rongxin

    2016-01-01

    With sparse and uneven site distribution, Global Positioning System (GPS) data is just barely able to infer low-degree coefficients in the surface mass field. The unresolved higher-degree coefficients turn out to introduce aliasing errors into the estimates of low-degree coefficients. To reduce the aliasing errors, the optimal truncation degree should be employed. Using surface displacements simulated from loading models, we theoretically prove that the optimal truncation degree should be degree 6-7 for a GPS inversion and degree 20 for combining GPS and Ocean Bottom Pressure (OBP) with no additional regularization. The optimal truncation degree should be decreased to degree 4-5 for real GPS data. Additionally, we prove that a Scaled Sensitivity Matrix (SSM) approach can be used to quantify the aliasing errors due to any one or any combination of unresolved higher degrees, which is beneficial to identify the major error source from among all the unresolved higher degrees. Results show that the unresolved higher degrees lower than degree 20 are the major error source for global inversion. We also theoretically prove that the SSM approach can be used to mitigate the aliasing errors in a GPS inversion, if the neglected higher degrees are well known from other sources.

  9. Orwell's Instructive Errors

    ERIC Educational Resources Information Center

    Julian, Liam

    2009-01-01

    In this article, the author talks about George Orwell, his instructive errors, and the manner in which Orwell pierced worthless theory, faced facts and defended decency (with fluctuating success), and largely ignored the tradition of accumulated wisdom that has rendered him a timeless teacher--one whose inadvertent lessons, while infrequently…

  10. Help prevent hospital errors

    MedlinePlus

  11. Challenge and Error: Critical Events and Attention-Related Errors

    ERIC Educational Resources Information Center

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ↔ attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  12. Airplane wing vibrations due to atmospheric turbulence

    NASA Technical Reports Server (NTRS)

    Pastel, R. L.; Caruthers, J. E.; Frost, W.

    1981-01-01

    The magnitude of error introduced due to wing vibration when measuring atmospheric turbulence with a wind probe mounted at the wing tip was studied. It was also determined whether accelerometers mounted on the wing tip are needed to correct this error. A spectrum analysis approach is used to determine the error. Estimates of the B-57 wing characteristics are used to simulate the airplane wing, and von Karman's cross spectrum function is used to simulate atmospheric turbulence. It was found that wing vibration introduces large errors in the measured spectra of turbulence in the frequency range close to the natural frequencies of the wing.

  13. Inborn Errors of Metabolism.

    PubMed

    Ezgu, Fatih

    2016-01-01

    Inborn errors of metabolism are single gene disorders resulting from defects in the biochemical pathways of the body. Although these disorders are individually rare, collectively they account for a significant portion of childhood disability and deaths. Most of the disorders are inherited as autosomal recessive, although autosomal dominant and X-linked disorders also occur. The clinical signs and symptoms arise from the accumulation of the toxic substrate, deficiency of the product, or both. Depending on the residual activity of the deficient enzyme, the clinical picture may first appear anywhere from the newborn period up until adulthood. Hundreds of disorders have been described to date, and there is considerable clinical overlap between certain inborn errors. As a result, the definitive diagnosis of inborn errors depends on enzyme assays or genetic tests. Especially in recent years, significant advances have been made in the biochemical and genetic diagnosis of inborn errors. Techniques such as tandem mass spectrometry and gas chromatography for biochemical diagnosis, and microarrays and next-generation sequencing for genetic diagnosis, have enabled rapid and accurate diagnosis. These diagnostic advances have also enabled newborn screening and prenatal diagnosis. Parallel to the development of diagnostic methods, significant progress has also been made in treatment. Treatment approaches such as special diets, enzyme replacement therapy, substrate inhibition, and organ transplantation have been widely used. With the help of the preclinical and clinical research carried out on inborn errors, better diagnostic methods and better treatment approaches will very likely become available.

  14. Clinical errors in cognitive-behavior therapy.

    PubMed

    Kim, Eun Ha; Hollon, Steven D; Olatunji, Bunmi O

    2016-09-01

    Although cognitive-behavioral therapy (CBT) has been shown to be highly effective for a wide range of disorders, many patients do not benefit. The failure to fully benefit from CBT may be due to a wide range of factors, one of which includes "clinical errors" that often occur during the therapeutic process. We briefly note 4 such clinical errors including neglecting to conduct a detailed functional analysis of the presenting problem(s), not adequately engaging the patient in developing a case formulation for the purposes of treatment planning, getting wrapped up in simply examining beliefs without behavioral tests, and not holding patients accountable for fear of rupturing the therapeutic alliance. We then discuss the context in which these clinical errors may occur during CBT and highlight alternative approaches. Being mindful of these and other potential clinical errors during CBT may facilitate better treatment outcomes. PMID:27505455

  15. Clinical errors in cognitive-behavior therapy.

    PubMed

    Kim, Eun Ha; Hollon, Steven D; Olatunji, Bunmi O

    2016-09-01

    Although cognitive-behavioral therapy (CBT) has been shown to be highly effective for a wide range of disorders, many patients do not benefit. The failure to fully benefit from CBT may be due to a wide range of factors, one of which includes "clinical errors" that often occur during the therapeutic process. We briefly note 4 such clinical errors including neglecting to conduct a detailed functional analysis of the presenting problem(s), not adequately engaging the patient in developing a case formulation for the purposes of treatment planning, getting wrapped up in simply examining beliefs without behavioral tests, and not holding patients accountable for fear of rupturing the therapeutic alliance. We then discuss the context in which these clinical errors may occur during CBT and highlight alternative approaches. Being mindful of these and other potential clinical errors during CBT may facilitate better treatment outcomes.

  16. Bond additivity corrections for quantum chemistry methods

    SciTech Connect

    C. F. Melius; M. D. Allendorf

    1999-04-01

    In the 1980's, the authors developed a bond-additivity correction procedure for quantum chemical calculations called BAC-MP4, which has proven reliable in calculating the thermochemical properties of molecular species, including radicals as well as stable closed-shell species. New Bond Additivity Correction (BAC) methods have been developed for the G2 method, BAC-G2, as well as for a hybrid DFT/MP2 method, BAC-Hybrid. These BAC methods use a new form of BAC corrections, involving atomic, molecular, and bond-wise additive terms. These terms enable one to treat positive and negative ions as well as neutrals. The BAC-G2 method reduces errors in the G2 method due to nearest-neighbor bonds. The parameters within the BAC-G2 method only depend on atom types. Thus the BAC-G2 method can be used to determine the parameters needed by BAC methods involving lower levels of theory, such as BAC-Hybrid and BAC-MP4. The BAC-Hybrid method should scale well for large molecules. The BAC-Hybrid method uses the differences between the DFT and MP2 as an indicator of the method's accuracy, while the BAC-G2 method uses its internal methods (G1 and G2MP2) to provide an indicator of its accuracy. Indications of the average error as well as worst cases are provided for each of the BAC methods.

  17. Speech Errors across the Lifespan

    ERIC Educational Resources Information Center

    Vousden, Janet I.; Maylor, Elizabeth A.

    2006-01-01

    Dell, Burger, and Svec (1997) proposed that the proportion of speech errors classified as anticipations (e.g., "moot and mouth") can be predicted solely from the overall error rate, such that the greater the error rate, the lower the anticipatory proportion (AP) of errors. We report a study examining whether this effect applies to changes in error…

  18. Control by model error estimation

    NASA Technical Reports Server (NTRS)

    Likins, P. W.; Skelton, R. E.

    1976-01-01

    Modern control theory relies upon the fidelity of the mathematical model of the system. Truncated modes, external disturbances, and parameter errors in linear system models are corrected by augmenting to the original system of equations an 'error system' which is designed to approximate the effects of such model errors. A Chebyshev error system is developed for application to the Large Space Telescope (LST).

  19. Marking Errors: A Simple Strategy

    ERIC Educational Resources Information Center

    Timmons, Theresa Cullen

    1987-01-01

    Indicates that using highlighters to mark errors produced a 76% class improvement in removing comma errors and a 95.5% improvement in removing apostrophe errors. Outlines two teaching procedures, to be followed before introducing this tool to the class, that enable students to remove errors at this effective rate. (JD)

  20. Automatic Error Analysis Using Intervals

    ERIC Educational Resources Information Center

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
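    A minimal interval-arithmetic sketch of the idea: carrying measured quantities as closed intervals propagates reading and setting uncertainties automatically, with no derivative-based error formulas. The tiny Interval class and the measurement values are hypothetical, not the article's implementation (which uses INTLAB).

      class Interval:
          """Minimal closed-interval arithmetic for automatic error propagation."""
          def __init__(self, lo, hi):
              self.lo, self.hi = min(lo, hi), max(lo, hi)
          def __add__(self, other):
              return Interval(self.lo + other.lo, self.hi + other.hi)
          def __sub__(self, other):
              return Interval(self.lo - other.hi, self.hi - other.lo)
          def __mul__(self, other):
              p = [self.lo * other.lo, self.lo * other.hi, self.hi * other.lo, self.hi * other.hi]
              return Interval(min(p), max(p))
          def __repr__(self):
              return f"[{self.lo:.6g}, {self.hi:.6g}]"

      # A measured length and width, each with a stated uncertainty, give an area whose
      # bounds follow directly from the interval operations.
      length = Interval(10.0 - 0.1, 10.0 + 0.1)
      width = Interval(4.0 - 0.05, 4.0 + 0.05)
      print(length * width)      # encloses every area consistent with the inputs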

  1. Neural Correlates of Reach Errors

    PubMed Central

    Hashambhoy, Yasmin; Rane, Tushar; Shadmehr, Reza

    2005-01-01

    Reach errors may be broadly classified into errors arising from unpredictable changes in target location, called target errors, and errors arising from miscalibration of internal models, called execution errors. Execution errors may be caused by miscalibration of dynamics (e.g., when a force field alters limb dynamics) or by miscalibration of kinematics (e.g., when prisms alter visual feedback). While all types of errors lead to similar online corrections, we found that the motor system showed strong trial-by-trial adaptation in response to random execution errors but not in response to random target errors. We used fMRI and a compatible robot to study brain regions involved in processing each kind of error. Both kinematic and dynamic execution errors activated regions along the central and the post-central sulci and in lobules V, VI, and VIII of the cerebellum, making these areas possible sites of plastic changes in internal models for reaching. Only activity related to kinematic errors extended into parietal area 5. These results are inconsistent with the idea that kinematics and dynamics of reaching are computed in separate neural entities. In contrast, only target errors caused increased activity in the striatum and the posterior superior parietal lobule. The cerebellum and motor cortex were as strongly activated as with execution errors. These findings indicate a neural and behavioral dissociation between errors that lead to switching of behavioral goals, and errors that lead to adaptation of internal models of limb dynamics and kinematics. PMID:16251440

  2. The Insufficiency of Error Analysis

    ERIC Educational Resources Information Center

    Hammarberg, B.

    1974-01-01

    The position here is that error analysis is inadequate, particularly from the language-teaching point of view. Non-errors must be considered in specifying the learner's current command of the language, its limits, and his learning tasks. A cyclic procedure of elicitation and analysis, to secure evidence of errors and non-errors, is outlined.…

  3. Mismatch-mediated error prone repair at the immunoglobulin genes.

    PubMed

    Chahwan, Richard; Edelmann, Winfried; Scharff, Matthew D; Roa, Sergio

    2011-12-01

    The generation of effective antibodies depends upon somatic hypermutation (SHM) and class-switch recombination (CSR) of antibody genes by activation induced cytidine deaminase (AID) and the subsequent recruitment of error prone base excision and mismatch repair. While AID initiates and is required for SHM, more than half of the base changes that accumulate in V regions are not due to the direct deamination of dC to dU by AID, but rather arise through the recruitment of the mismatch repair complex (MMR) to the U:G mismatch created by AID and the subsequent perversion of mismatch repair from a high fidelity process to one that is very error prone. In addition, the generation of double-strand breaks (DSBs) is essential during CSR, and the resolution of AID-generated mismatches by MMR to promote such DSBs is critical for the efficiency of the process. While a great deal has been learned about how AID and MMR cause hypermutations and DSBs, it is still unclear how the error prone aspect of these processes is largely restricted to antibody genes. The use of knockout models and mice expressing mismatch repair proteins with separation-of-function point mutations has been decisive in gaining a better understanding of the roles of each of the major MMR proteins and providing further insight into how mutation and repair are coordinated. Here, we review the cascade of MMR factors and repair signals that are diverted from their canonical error free role and hijacked by B cells to promote genetic diversification of the Ig locus. This error prone process involves AID as the inducer of enzymatically-mediated DNA mismatches, and a plethora of downstream MMR factors acting as sensors, adaptors and effectors of a complex and tightly regulated process, much of which is not yet well understood.

  4. A Simple Approach to Experimental Errors

    ERIC Educational Resources Information Center

    Phillips, M. D.

    1972-01-01

    Classifies experimental error into two main groups: systematic error (instrument, personal, inherent, and variational errors) and random errors (reading and setting errors) and presents mathematical treatments for the determination of random errors. (PR)
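    For the random-error treatment mentioned above, a short worked example of the usual quadrature rule for independent errors, with hypothetical mass and volume readings:

      import math

      # Independent random errors add in quadrature: for a product or quotient,
      # relative errors combine; for a sum or difference, absolute errors combine.
      m, dm = 12.40, 0.05        # mass and its reading error (hypothetical values)
      v, dv = 3.10, 0.02         # volume and its setting error (hypothetical values)

      rho = m / v
      drho = rho * math.sqrt((dm / m) ** 2 + (dv / v) ** 2)
      print(f"density = {rho:.3f} +/- {drho:.3f}")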

  5. Manson's triple error.

    PubMed

    Delaporte, F

    2008-09-01

    The author discusses the significance, implications and limitations of Manson's work. How did Patrick Manson resolve some of the major problems raised by the filarial worm life cycle? The Amoy physician showed that circulating embryos could only leave the blood via the percutaneous route, thereby requiring a bloodsucking insect. The discovery of a new autonomous, airborne, active host undoubtedly had a considerable impact on the history of parasitology, but the way in which Manson formulated and solved the problem of the transfer of filarial worms from the body of the mosquito to man resulted in failure. This article shows how the epistemological transformation operated by Manson was indissociably related to a series of errors and how a major breakthrough can be the result of a series of false proposals and, consequently, that the history of truth often involves a history of error. PMID:18814729

  6. Error-Free Software

    NASA Technical Reports Server (NTRS)

    1989-01-01

    001 is an integrated tool suited for automatically developing ultra reliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer aided software engineering product in the industry to concentrate on automatically supporting the development of an ultrareliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.

  7. Effects of various experimental parameters on errors in triangulation solution of elongated object in space. [barium ion cloud

    NASA Technical Reports Server (NTRS)

    Long, S. A. T.

    1975-01-01

    The effects of various experimental parameters on the displacement errors in the triangulation solution of an elongated object in space due to pointing uncertainties in the lines of sight have been determined. These parameters were the number and location of observation stations, the object's location in latitude and longitude, and the spacing of the input data points on the azimuth-elevation image traces. The displacement errors due to uncertainties in the coordinates of a moving station have been determined as functions of the number and location of the stations. The effects of incorporating the input data from additional cameras at one of the stations were also investigated.

  8. TOA/FOA geolocation error analysis.

    SciTech Connect

    Mason, John Jeffrey

    2008-08-01

    This paper describes how confidence intervals can be calculated for radiofrequency emitter position estimates based on time-of-arrival and frequency-of-arrival measurements taken at several satellites. These confidence intervals take the form of 50th and 95th percentile circles and ellipses to convey horizontal error and linear intervals to give vertical error. We consider both cases where an assumed altitude is and is not used. Analysis of velocity errors is also considered. We derive confidence intervals for horizontal velocity magnitude and direction including the case where the emitter velocity is assumed to be purely horizontal, i.e., parallel to the ellipsoid. Additionally, we derive an algorithm that we use to combine multiple position fixes to reduce location error. The algorithm uses all available data, after more than one location estimate for an emitter has been made, in a mathematically optimal way.
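    A sketch of one standard way to combine multiple position fixes, inverse-covariance (minimum-variance) weighting. The paper derives its own combination algorithm, so this stands in only for the general idea; the fixes and error ellipses below are hypothetical.

      import numpy as np

      def fuse_fixes(positions, covariances):
          """Combine independent position estimates by inverse-covariance weighting
          (the minimum-variance linear combination)."""
          W = [np.linalg.inv(C) for C in covariances]
          C_fused = np.linalg.inv(sum(W))
          x_fused = C_fused @ sum(Wi @ np.asarray(p) for Wi, p in zip(W, positions))
          return x_fused, C_fused

      # Two hypothetical horizontal fixes (east, north in km) with different error ellipses
      fixes = [np.array([10.2, 4.9]), np.array([9.7, 5.3])]
      covs = [np.diag([0.5 ** 2, 1.5 ** 2]), np.diag([1.0 ** 2, 0.4 ** 2])]
      x, C = fuse_fixes(fixes, covs)
      print(x, np.sqrt(np.diag(C)))     # fused fix and its per-axis 1-sigma errors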

  9. Conditional Density Estimation in Measurement Error Problems.

    PubMed

    Wang, Xiao-Feng; Ye, Deping

    2015-01-01

    This paper is motivated by a wide range of background correction problems in gene array data analysis, where the raw gene expression intensities are measured with error. Estimating a conditional density function from the contaminated expression data is a key aspect of statistical inference and visualization in these studies. We propose re-weighted deconvolution kernel methods to estimate the conditional density function in an additive error model, when the error distribution is known as well as when it is unknown. Theoretical properties of the proposed estimators are investigated with respect to the mean absolute error from a "double asymptotic" view. Practical rules are developed for the selection of smoothing-parameters. Simulated examples and an application to an Illumina bead microarray study are presented to illustrate the viability of the methods. PMID:25284902

  10. Errors and mistakes in breast ultrasound diagnostics.

    PubMed

    Jakubowski, Wiesław; Dobruch-Sobczak, Katarzyna; Migda, Bartosz

    2012-09-01

    Sonomammography is often the first additional examination performed in the diagnostics of breast diseases. The development of ultrasound imaging techniques, particularly the introduction of high frequency transducers, matrix transducers, harmonic imaging and finally, elastography, influenced the improvement of breast disease diagnostics. Nevertheless, as in every imaging method, there are errors and mistakes resulting from the technical limitations of the method, breast anatomy (fibrous remodeling), insufficient sensitivity and, in particular, specificity. Errors in breast ultrasound diagnostics can be divided into those that are impossible to avoid and those that can potentially be reduced. In this article the most frequently made errors in ultrasound have been presented, including the ones caused by the presence of artifacts resulting from volumetric averaging in the near and far field, artifacts in cysts or in dilated lactiferous ducts (reverberations, comet tail artifacts, lateral beam artifacts), improper setting of general enhancement or time gain curve or range. Errors dependent on the examiner, resulting in the wrong BIRADS-usg classification, are divided into negative and positive errors. The sources of these errors have been listed. The methods of minimization of the number of errors made have been discussed, including the ones related to the appropriate examination technique, taking into account data from case history and the use of the greatest possible number of additional options such as: harmonic imaging, color and power Doppler and elastography. In the article examples of errors resulting from the technical conditions of the method have been presented, as well as those dependent on the examiner, which are related to the great diversity and variation of ultrasound images of pathological breast lesions.

  11. A comedy of errors: the bad knee.

    PubMed

    Cameron, Hugh U

    2005-06-01

    A review of 241 consecutive total knee revisions has been carried out. Other than loosening, wear, and stiffness, a more common reason at present is an unsatisfactory result due to minor errors of tibial and femoral placement. Currently, about 16% of femoral components required derotation during revision.

  12. Current Views of Error and Editing.

    ERIC Educational Resources Information Center

    Hull, Glynda

    1987-01-01

    Inexperienced writers, including both basic writers and learning disabled, commit errors that often follow a discernible pattern due to applying erroneous or incomplete rules. Techniques for teaching editing skills are described, including textual analyses of students' writing, interviews with students, structuring the editing task, and providing…

  13. Horizon sensor errors calculated by computer models compared with errors measured in orbit

    NASA Technical Reports Server (NTRS)

    Ward, K. A.; Hogan, R.; Andary, J.

    1982-01-01

    Using a computer program to model the earth's horizon and to duplicate the signal processing procedure employed by the ESA (Earth Sensor Assembly), errors due to radiance variation have been computed for a particular time of the year. Errors actually occurring in flight at the same time of year are inferred from integrated rate gyro data for a satellite of the TIROS series of NASA weather satellites (NOAA-A). The predicted performance is compared with actual flight history.

  14. Evaluating operating system vulnerability to memory errors.

    SciTech Connect

    Ferreira, Kurt Brian; Bridges, Patrick G.; Pedretti, Kevin Thomas Tauke; Mueller, Frank; Fiala, David; Brightwell, Ronald Brian

    2012-05-01

    Reliability is of great concern to the scalability of extreme-scale systems. Of particular concern are soft errors in main memory, which are a leading cause of failures on current systems and are predicted to be the leading cause on future systems. While great effort has gone into designing algorithms and applications that can continue to make progress in the presence of these errors without restarting, the most critical software running on a node, the operating system (OS), is currently left relatively unprotected. OS resiliency is of particular importance because, though this software typically represents a small footprint of a compute node's physical memory, recent studies show more memory errors in this region of memory than the remainder of the system. In this paper, we investigate the soft error vulnerability of two operating systems used in current and future high-performance computing systems: Kitten, the lightweight kernel developed at Sandia National Laboratories, and CLE, a high-performance Linux-based operating system developed by Cray. For each of these platforms, we outline major structures and subsystems that are vulnerable to soft errors and describe methods that could be used to reconstruct damaged state. Our results show the Kitten lightweight operating system may be an easier target to harden against memory errors due to its smaller memory footprint, largely deterministic state, and simpler system structure.

  15. Analysis of discretization errors in LES

    NASA Technical Reports Server (NTRS)

    Ghosal, Sandip

    1995-01-01

    All numerical simulations of turbulence (DNS or LES) involve some discretization errors. The integrity of such simulations therefore depends on our ability to quantify and control such errors. In the classical literature on analysis of errors in partial differential equations, one typically studies simple linear equations (such as the wave equation or Laplace's equation). The qualitative insight gained from studying such simple situations is then used to design numerical methods for more complex problems such as the Navier-Stokes equations. Though such an approach may seem reasonable as a first approximation, it should be recognized that strongly nonlinear problems, such as turbulence, have a feature that is absent in linear problems. This feature is the simultaneous presence of a continuum of space and time scales. Thus, in an analysis of errors in the one dimensional wave equation, one may, without loss of generality, rescale the equations so that the dependent variable is always of order unity. This is not possible in the turbulence problem since the amplitudes of the Fourier modes of the velocity field have a continuous distribution. The objective of the present research is to provide some quantitative measures of numerical errors in such situations. Though the focus of this work is LES, the methods introduced here can be just as easily applied to DNS. Errors due to discretization of the time-variable are neglected for the purpose of this analysis.

  16. Understanding error generation in fused deposition modeling

    NASA Astrophysics Data System (ADS)

    Bochmann, Lennart; Bayley, Cindy; Helu, Moneer; Transchel, Robert; Wegener, Konrad; Dornfeld, David

    2015-03-01

    Additive manufacturing offers completely new possibilities for the manufacturing of parts. The advantages of flexibility and convenience of additive manufacturing have had a significant impact on many industries, and optimizing part quality is crucial for expanding its utilization. This research aims to determine the sources of imprecision in fused deposition modeling (FDM). Process errors in terms of surface quality, accuracy and precision are identified and quantified, and an error-budget approach is used to characterize errors of the machine tool. It was determined that accuracy and precision in the y direction (0.08-0.30 mm) are generally greater than in the x direction (0.12-0.62 mm) and the z direction (0.21-0.57 mm). Furthermore, accuracy and precision tend to decrease at increasing axis positions. The results of this work can be used to identify possible process improvements in the design and control of FDM technology.
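    A small sketch separating accuracy (bias against the commanded dimension) from precision (repeatability) for repeated measurements of one printed feature. The numbers are hypothetical, not the study's data.

      import numpy as np

      nominal = 20.00                                   # commanded dimension, mm
      measured = np.array([20.18, 20.22, 20.15, 20.25, 20.20, 20.17])  # hypothetical repeats

      accuracy = measured.mean() - nominal              # mean error (bias)
      precision = measured.std(ddof=1)                  # spread across repeated builds
      print(f"bias = {accuracy:.3f} mm, repeatability (1 sigma) = {precision:.3f} mm")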

  17. Attentional capture by irrelevant transients leads to perceptual errors in a competitive change detection task.

    PubMed

    Schneider, Daniel; Beste, Christian; Wascher, Edmund

    2012-01-01

    Theories on visual change detection imply that attention is a necessary but not sufficient prerequisite for aware perception. Misguidance of attention due to salient irrelevant distractors can therefore lead to severe deficits in change detection. The present study investigates the mechanisms behind such perceptual errors and their relation to error processing on higher cognitive levels. Participants had to detect a luminance change that occasionally occurred simultaneously with an irrelevant orientation change in the opposite hemi-field (conflict condition). By analyzing event-related potentials in the EEG separately in those error prone conflict trials for correct and erroneous change detection, we demonstrate that only correct change detection was associated with the allocation of attention to the relevant luminance change. Erroneous change detection was associated with an initial capture of attention toward the irrelevant orientation change in the N1 time window and a lack of subsequent target selection processes (N2pc). Errors were additionally accompanied by an increase of the fronto-central N2 and a kind of error negativity (Ne or ERN), which, however, peaked prior to the response. These results suggest that a strong perceptual conflict by salient distractors can disrupt the further processing of relevant information and thus affect its aware perception. Yet, it does not impair higher cognitive processes for conflict and error detection, indicating that these processes are independent from awareness.

  18. The calculation of moment uncertainties from velocity distribution functions with random errors

    NASA Astrophysics Data System (ADS)

    Gershman, Daniel J.; Dorelli, John C.; F.-Viñas, Adolfo; Pollock, Craig J.

    2015-08-01

    Instrumentation that detects individual plasma particles is susceptible to random counting errors. These errors propagate into the calculations of moments of measured particle velocity distribution functions. Although rules of thumb exist for the effects of random errors on the calculation of lower order moments (e.g., density, velocity, and temperature) of Maxwell-Boltzmann distributions, they do not generally apply to nonthermal distributions or to higher-order moments. To date, such errors have only been estimated using brute force Monte Carlo techniques, i.e., repeated (~50) samplings of distribution functions. Here we present a mathematical formalism for analytically obtaining uncertainty estimates of plasma moments due to random errors either measured in situ by instruments or synthesized by particle simulations. Our uncertainty estimates precisely match the statistical variation of simulated plasma moments and carry the computational cost equivalent of only ~15 Monte Carlo samplings. In addition, we provide the means to calculate a covariance matrix that can be reported along with typical plasma moments. This matrix enables the propagation of statistical errors into arbitrary coordinate systems or functions of plasma moments without the need to reanalyze full distribution functions. Our methodology, which is applied to electron data from Plasma Electron and Current Experiment on the Cluster spacecraft as an example, is relevant to both existing and future data sets and requires only instrument-measured counts and phase space densities reported for a set of calibrated energy-angle targets.
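    A simplified sketch of the analytic propagation idea: with Poisson counting errors, any moment built as a weighted sum of counts has a closed-form variance, so repeated Monte Carlo resampling of the distribution function is unnecessary. The weights and units below are illustrative only, not the instrument response or the paper's full covariance formalism.

      import numpy as np

      # Counts in velocity-space bins carry Poisson noise: var(c_i) = c_i. A moment of the
      # form M = sum_i w_i * c_i therefore has variance sum_i w_i**2 * c_i analytically.
      rng = np.random.default_rng(4)
      v = np.linspace(-5.0, 5.0, 64)                    # bin-centre speeds (arbitrary units)
      expected = 200.0 * np.exp(-v ** 2)                # expected counts per bin
      counts = rng.poisson(expected).astype(float)

      w_density = np.ones_like(v)                       # zeroth-moment weights
      density = np.sum(w_density * counts)
      sigma_density = np.sqrt(np.sum(w_density ** 2 * counts))

      w_first = v                                       # first-moment weights (unnormalised)
      first_moment = np.sum(w_first * counts)
      sigma_first = np.sqrt(np.sum(w_first ** 2 * counts))
      print(density, sigma_density, first_moment, sigma_first)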

  19. Counting OCR errors in typeset text

    NASA Astrophysics Data System (ADS)

    Sandberg, Jonathan S.

    1995-03-01

    Frequently object recognition accuracy is a key component in the performance analysis of pattern matching systems. In the past three years, the results of numerous excellent and rigorous studies of OCR system typeset-character accuracy (henceforth OCR accuracy) have been published, encouraging performance comparisons between a variety of OCR products and technologies. These published figures are important; OCR vendor advertisements in the popular trade magazines lead readers to believe that published OCR accuracy figures affect market share in the lucrative OCR market. Curiously, a detailed review of many of these OCR error occurrence counting results reveals that they are not reproducible as published and they are not strictly comparable due to larger variances in the counts than would be expected by the sampling variance. Naturally, since OCR accuracy is based on a ratio of the number of OCR errors over the size of the text searched for errors, imprecise OCR error accounting leads to similar imprecision in OCR accuracy. Some published papers use informal, non-automatic, or intuitively correct OCR error accounting. Still other published results present OCR error accounting methods based on string matching algorithms such as dynamic programming using Levenshtein (edit) distance but omit critical implementation details (such as the existence of suspect markers in the OCR generated output or the weights used in the dynamic programming minimization procedure). The problem with not specifically revealing the accounting method is that the numbers of errors found by different methods are significantly different. This paper identifies the basic accounting methods used to measure OCR errors in typeset text and offers an evaluation and comparison of the various accounting methods.
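    A minimal sketch of edit-distance-based error counting with unit weights. As the abstract notes, different weight choices and the handling of suspect markers change the count, so this is only one of the accounting methods discussed, not the paper's reference implementation.

      def count_ocr_errors(reference, ocr_output):
          """Levenshtein alignment with unit costs: each substitution, insertion and
          deletion counts as one error."""
          m, n = len(reference), len(ocr_output)
          d = [[0] * (n + 1) for _ in range(m + 1)]
          for i in range(m + 1):
              d[i][0] = i
          for j in range(n + 1):
              d[0][j] = j
          for i in range(1, m + 1):
              for j in range(1, n + 1):
                  cost = 0 if reference[i - 1] == ocr_output[j - 1] else 1
                  d[i][j] = min(d[i - 1][j] + 1,        # deletion
                                d[i][j - 1] + 1,        # insertion
                                d[i - 1][j - 1] + cost) # substitution
          return d[m][n]

      errors = count_ocr_errors("The quick brown fox", "The qu1ck brown f0x,")
      print(errors, 1.0 - errors / len("The quick brown fox"))   # error count, accuracy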

  20. Proper handling of random errors and distortions in astronomical data analysis

    NASA Astrophysics Data System (ADS)

    Cardiel, Nicolas; Gorgas, Javier; Gallego, Jesús; Serrano, Angel; Zamorano, Jaime; Garcia-Vargas, Maria-Luisa; Gomez-Cambronero, Pedro; Filgueira, Jose M.

    2002-12-01

    The aim of a data reduction process is to minimize the influence of data acquisition imperfections on the estimation of the desired astronomical quantity. For this purpose, one must perform appropriate manipulations with data and calibration frames. In addition, random-error frames (computed from first principles: expected statistical distribution of photo-electrons, detector gain, readout-noise, etc.), corresponding to the raw-data frames, can also be properly reduced. This parallel treatment of data and errors guarantees the correct propagation of random errors due to the arithmetic manipulations throughout the reduction procedure. However, due to the unavoidable fact that the information collected by detectors is physically sampled, this approach collides with a major problem: errors are correlated when applying image manipulations involving non-integer pixel shifts of data. Since this is actually the case for many common reduction steps (wavelength calibration into a linear scale, image rectification when correcting for geometric distortions,...), we discuss the benefits of considering the data reduction as the full characterization of the raw-data frames, but avoiding, as far as possible, the arithmetic manipulation of that data until the final measure of the image properties with a scientific meaning for the astronomer. For this reason, it is essential that the software tools employed for the analysis of the data perform their work using that characterization. In that sense, the real reduction of the data should be performed during the analysis, and not before, in order to guarantee the proper treatment of errors.
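
    A minimal sketch of the parallel data/error bookkeeping for two simple reduction steps is given below; the frames and noise values are hypothetical, and the sketch deliberately stops before any resampling step, where the pixel-to-pixel correlations discussed above would invalidate this independent-error propagation.

        import numpy as np

        # Hypothetical raw-data frame and its random-error (1-sigma) frame,
        # e.g. from photon statistics, gain and readout noise.
        data = np.array([[120.0, 98.0], [101.0, 87.0]])
        err  = np.array([[ 11.0,  9.9], [ 10.0,  9.3]])

        dark, dark_err = 5.0, 0.5                              # dark level and its uncertainty
        flat, flat_err = np.full_like(data, 1.02), np.full_like(data, 0.01)

        # Step 1: dark subtraction -- errors add in quadrature.
        d1 = data - dark
        e1 = np.sqrt(err**2 + dark_err**2)

        # Step 2: flat-field division -- relative errors add in quadrature.
        d2 = d1 / flat
        e2 = np.abs(d2) * np.sqrt((e1 / d1)**2 + (flat_err / flat)**2)

        print(d2)
        print(e2)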

  1. Neutron multiplication error in TRU waste measurements

    SciTech Connect

    Veilleux, John; Stanfield, Sean B; Wachter, Joe; Ceo, Bob

    2009-01-01

    Total Measurement Uncertainty (TMU) in neutron assays of transuranic waste (TRU) comprises several components including counting statistics, matrix and source distribution, calibration inaccuracy, background effects, and neutron multiplication error. While a minor component for low plutonium masses, neutron multiplication error is often the major contributor to the TMU for items containing more than 140 g of weapons grade plutonium. Neutron multiplication arises when neutrons from spontaneous fission and other nuclear events induce fissions in other fissile isotopes in the waste, thereby multiplying the overall coincidence neutron response in passive neutron measurements. Since passive neutron counters cannot differentiate between spontaneous and induced fission neutrons, multiplication can lead to positive bias in the measurements. Although neutron multiplication can only result in a positive bias, it has, for the purpose of mathematical simplicity, generally been treated as an error that can lead to either a positive or negative result in the TMU. While the factors that contribute to neutron multiplication include the total mass of fissile nuclides, the presence of moderating material in the matrix, the concentration and geometry of the fissile sources, and other factors, measurement uncertainty is generally determined as a function of the fissile mass in most TMU software calculations because this is the only quantity determined by the passive neutron measurement. Neutron multiplication error has a particularly pernicious consequence for TRU waste analysis because the measured Fissile Gram Equivalent (FGE) plus twice the TMU error must be less than 200 for TRU waste packaged in 55-gal drums and less than 325 for boxed waste. For this reason, large errors due to neutron multiplication can lead to increased rejections of TRU waste containers. This report will attempt to better define the error term due to neutron multiplication and arrive at values that are
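
    The acceptance criterion quoted above can be expressed directly as a check; the masses in the example are hypothetical.

        def tru_waste_acceptable(fge_grams, tmu_grams, container="drum"):
            """Acceptance check quoted in the abstract: FGE + 2*TMU must stay below
            200 FGE for 55-gal drums and below 325 FGE for boxed waste."""
            limit = 200.0 if container == "drum" else 325.0
            return fge_grams + 2.0 * tmu_grams < limit

        print(tru_waste_acceptable(150.0, 20.0, "drum"))   # 150 + 40 = 190 < 200 -> True
        print(tru_waste_acceptable(150.0, 30.0, "drum"))   # 150 + 60 = 210 >= 200 -> False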

  2. Magnetic nanoparticle thermometer: an investigation of minimum error transmission path and AC bias error.

    PubMed

    Du, Zhongzhou; Su, Rijian; Liu, Wenzhong; Huang, Zhixing

    2015-04-14

    The signal transmission module of a magnetic nanoparticle thermometer (MNPT) was established in this study to analyze the error sources introduced during the signal flow in the hardware system. The underlying error sources that significantly affected the precision of the MNPT were determined through mathematical modeling and simulation. A transfer module path with the minimum error in the hardware system was then proposed through the analysis of the variations of the system error caused by the significant error sources when the signal flowed through the signal transmission module. In addition, a system parameter, named the signal-to-AC bias ratio (i.e., the ratio between the signal and AC bias), was identified as a direct determinant of the precision of the measured temperature. The temperature error was below 0.1 K when the signal-to-AC bias ratio was higher than 80 dB, and other system errors were not considered. The temperature error was below 0.1 K in the experiments with a commercial magnetic fluid (Sample SOR-10, Ocean Nanotechnology, Springdale, AR, USA) when the hardware system of the MNPT was designed with the aforementioned method.
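
    A small sketch of the signal-to-AC bias ratio check follows. It assumes the ratio is formed from amplitude quantities and therefore uses the 20·log10 dB convention, which is an assumption rather than something stated in the abstract; the amplitudes are hypothetical.

        import math

        def signal_to_ac_bias_db(signal_amplitude, ac_bias_amplitude):
            # Assumes amplitude quantities, hence the 20*log10 dB convention.
            return 20.0 * math.log10(signal_amplitude / ac_bias_amplitude)

        ratio_db = signal_to_ac_bias_db(1.0, 5e-5)   # hypothetical amplitudes
        print(f"{ratio_db:.1f} dB  (>= 80 dB target: {ratio_db >= 80.0})")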

  3. Magnetic Nanoparticle Thermometer: An Investigation of Minimum Error Transmission Path and AC Bias Error

    PubMed Central

    Du, Zhongzhou; Su, Rijian; Liu, Wenzhong; Huang, Zhixing

    2015-01-01

    The signal transmission module of a magnetic nanoparticle thermometer (MNPT) was established in this study to analyze the error sources introduced during the signal flow in the hardware system. The underlying error sources that significantly affected the precision of the MNPT were determined through mathematical modeling and simulation. A transfer module path with the minimum error in the hardware system was then proposed through the analysis of the variations of the system error caused by the significant error sources when the signal flowed through the signal transmission module. In addition, a system parameter, named the signal-to-AC bias ratio (i.e., the ratio between the signal and AC bias), was identified as a direct determinant of the precision of the measured temperature. The temperature error was below 0.1 K when the signal-to-AC bias ratio was higher than 80 dB, and other system errors were not considered. The temperature error was below 0.1 K in the experiments with a commercial magnetic fluid (Sample SOR-10, Ocean Nanotechnology, Springdale, AR, USA) when the hardware system of the MNPT was designed with the aforementioned method. PMID:25875188

  4. Power Measurement Errors on a Utility Aircraft

    NASA Technical Reports Server (NTRS)

    Bousman, William G.

    2002-01-01

    Extensive flight test data obtained from two recent performance tests of a UH-60A aircraft are reviewed. A power difference is calculated from the power balance equation and is used to examine power measurement errors. It is shown that the baseline measurement errors are highly non-Gaussian in their frequency distribution and are therefore influenced by additional, unquantified variables. Linear regression is used to examine the influence of other variables and it is shown that a substantial portion of the variance depends upon measurements of atmospheric parameters. Correcting for temperature dependence, although reducing the variance in the measurement errors, still leaves unquantified effects. Examination of the power difference over individual test runs indicates significant errors from drift, although it is unclear how these may be corrected. In an idealized case, where the drift is correctable, it is shown that the power measurement errors are significantly reduced and the error distribution is Gaussian. A new flight test program is recommended that will quantify the thermal environment for all torque measurements on the UH-60. Subsequently, the torque measurement systems will be recalibrated based on the measured thermal environment and a new power measurement assessment performed.
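
    The regression step described above can be sketched as an ordinary least-squares fit of the power difference against atmospheric variables; the data below are synthetic and the variable choices are illustrative only, not the flight-test measurements.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 200
        temperature = rng.uniform(-5.0, 35.0, n)      # synthetic outside-air temperature, deg C
        pressure    = rng.uniform(85.0, 102.0, n)     # synthetic static pressure, kPa
        # Synthetic power difference with a temperature dependence plus noise.
        power_diff  = 0.8 * temperature - 0.1 * pressure + rng.normal(0.0, 5.0, n)

        # Ordinary least squares: power_diff ~ b0 + b1*temperature + b2*pressure
        X = np.column_stack([np.ones(n), temperature, pressure])
        beta, residuals, _, _ = np.linalg.lstsq(X, power_diff, rcond=None)
        explained = 1.0 - residuals[0] / np.sum((power_diff - power_diff.mean())**2)
        print("coefficients:", beta)
        print("fraction of variance explained:", explained)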

  5. Characteristics of medication errors with parenteral cytotoxic drugs.

    PubMed

    Fyhr, A; Akselsson, R

    2012-09-01

    Errors involving cytotoxic drugs have the potential of being fatal and should therefore be prevented. The objective of this article is to identify the characteristics of medication errors involving parenteral cytotoxic drugs in Sweden. A total of 60 cases reported to the national error reporting systems from 1996 to 2008 were reviewed. Classification was made to identify cytotoxic drugs involved, type of error, where the error occurred, error detection mechanism, and consequences for the patient. The most commonly involved cytotoxic drugs were fluorouracil, carboplatin, cytarabine and doxorubicin. The platinum-containing drugs often caused serious consequences for the patients. The most common error types were too-high doses (45%), followed by the wrong drug (30%). Twenty-five of the medication errors (42%) occurred when doctors were prescribing. All of the preparations were delivered to the patient, causing temporary or life-threatening harm. Another 25 of the medication errors (42%) started with preparation at the pharmacies. The remaining 10 medication errors (16%) were due to errors during preparation by nurses (5/60) and administration by nurses to the wrong patient (5/60). It is of utmost importance to minimise the potential for errors in the prescribing stage. The identification of drugs and patients should also be improved.

  6. Information systems and human error in the lab.

    PubMed

    Bissell, Michael G

    2004-01-01

    Health system costs in clinical laboratories are incurred daily due to human error. Indeed, a major impetus for automating clinical laboratories has always been the opportunity it presents to simultaneously reduce cost and improve quality of operations by decreasing human error. But merely automating these processes is not enough. To the extent that introduction of these systems results in operators having less practice in dealing with unexpected events or becoming deskilled in problem-solving, new kinds of error will likely appear. Clinical laboratories could potentially benefit by integrating findings on human error from modern behavioral science into their operations. Fully understanding human error requires a deep understanding of human information processing and cognition. Predicting and preventing negative consequences requires application of this understanding to laboratory operations. Although the occurrence of a particular error at a particular instant cannot be absolutely prevented, human error rates can be reduced. The following principles are key: an understanding of the process of learning in relation to error; understanding the origin of errors, since this knowledge can be used to reduce their occurrence; optimal systems should be forgiving to the operator by absorbing errors, at least for a time; although much is known by industrial psychologists about how to write operating procedures and instructions in ways that reduce the probability of error, this expertise is hardly ever put to use in the laboratory; and a feedback mechanism must be designed into the system that enables the operator to recognize in real time that an error has occurred.

  7. Medication Errors: New EU Good Practice Guide on Risk Minimisation and Error Prevention.

    PubMed

    Goedecke, Thomas; Ord, Kathryn; Newbould, Victoria; Brosch, Sabine; Arlett, Peter

    2016-06-01

    A medication error is an unintended failure in the drug treatment process that leads to, or has the potential to lead to, harm to the patient. Reducing the risk of medication errors is a shared responsibility between patients, healthcare professionals, regulators and the pharmaceutical industry at all levels of healthcare delivery. In 2015, the EU regulatory network released a two-part good practice guide on medication errors to support both the pharmaceutical industry and regulators in the implementation of the changes introduced with the EU pharmacovigilance legislation. These changes included a modification of the 'adverse reaction' definition to include events associated with medication errors, and the requirement for national competent authorities responsible for pharmacovigilance in EU Member States to collaborate and exchange information on medication errors resulting in harm with national patient safety organisations. To facilitate reporting and learning from medication errors, a clear distinction has been made in the guidance between medication errors resulting in adverse reactions, medication errors without harm, intercepted medication errors and potential errors. This distinction is supported by an enhanced MedDRA® terminology that allows for coding all stages of the medication use process where the error occurred, in addition to any clinical consequences. To better understand the causes and contributing factors, individual case safety reports involving an error should be followed up with the primary reporter to gather information relevant for the conduct of root cause analysis where this may be appropriate. Such reports should also be summarised in periodic safety update reports and addressed in risk management plans. Any risk minimisation and prevention strategy for medication errors should consider all stages of a medicinal product's life-cycle, particularly the main sources and types of medication errors during product development. This article

  8. [Dealing with errors in medicine].

    PubMed

    Schoenenberger, R A; Perruchoud, A P

    1998-12-24

    Iatrogenic disease is probably, more commonly than assumed, the consequence of errors and mistakes committed by physicians and other medical personnel. Traditionally, strategies to prevent errors in medicine focus on inspection and rely on the professional ethos of health care personnel. The increasingly complex nature of medical practice and the multitude of interventions that each patient receives increase the likelihood of error. More efficient approaches to dealing with errors have been developed. The methods include routine identification of errors (critical incident reporting), systematic monitoring of multiple-step processes in medical practice, system analysis, and system redesign. A search for underlying causes of errors (rather than distal causes) will enable organizations to collectively learn without denying the inevitable occurrence of human error. Errors and mistakes may become precious chances to increase the quality of medical care.

  9. Analysis of the impact of error detection on computer performance

    NASA Technical Reports Server (NTRS)

    Shin, K. C.; Lee, Y. H.

    1983-01-01

    Conventionally, reliability analyses either assume that a fault/error is detected immediately following its occurrence, or neglect damages caused by latent errors. Though unrealistic, this assumption was imposed in order to avoid the difficulty of determining the respective probabilities that a fault induces an error and the error is then detected in a random amount of time after its occurrence. As a remedy for this problem a model is proposed to analyze the impact of error detection on computer performance under moderate assumptions. Error latency, the time interval between occurrence and the moment of detection, is used to measure the effectiveness of a detection mechanism. This model is used to: (1) predict the probability of producing an unreliable result, and (2) estimate the loss of computation due to fault and/or error.

  10. Characterization of the error budget of Alba-NOM

    NASA Astrophysics Data System (ADS)

    Nicolas, Josep; Martínez, Juan Carlos

    2013-05-01

    The Alba-NOM instrument is a high accuracy scanning machine capable of measuring the slope profile of long mirrors with resolution below the nanometer scale and for a wide range of curvatures. We present the characterization of different sources of errors that limit the uncertainty of the instrument. We have investigated three main contributions to the uncertainty of the measurements: errors introduced by the scanning system and the pentaprism, errors due to environmental conditions, and optical errors of the autocollimator. These sources of error have been investigated by measuring the corresponding motion errors with a high accuracy differential interferometer and by simulating their impact on the measurements by means of ray-tracing. Optical error contributions have been extracted from the analysis of redundant measurements of test surfaces. The methods and results are presented, as well as an example of application that has benefited from the achieved accuracy.

  11. A Refined Algorithm On The Estimation Of Residual Motion Errors In Airborne SAR Images

    NASA Astrophysics Data System (ADS)

    Zhong, Xuelian; Xiang, Maosheng; Yue, Huanyin; Guo, Huadong

    2010-10-01

    Due to the lack of accuracy in the navigation system, residual motion errors (RMEs) frequently appear in the airborne SAR image. For very high resolution SAR imaging and repeat-pass SAR interferometry, the residual motion errors must be estimated and compensated. We previously proposed an algorithm to estimate the residual motion errors for an individual SAR image. It exploits point-like targets distributed along the azimuth direction, and not only corrects the phase, but also improves the azimuth focusing. But the required point targets are selected by hand, which is time- and labor-consuming. In addition, the algorithm is sensitive to noise. In this paper, a refined algorithm is proposed aiming at these two shortcomings. With real X-band airborne SAR data, the feasibility and accuracy of the refined algorithm are demonstrated.

  12. Error Sources in Asteroid Astrometry

    NASA Technical Reports Server (NTRS)

    Owen, William M., Jr.

    2000-01-01

    Asteroid astrometry, like any other scientific measurement process, is subject to both random and systematic errors, not all of which are under the observer's control. To design an astrometric observing program or to improve an existing one requires knowledge of the various sources of error, how different errors affect one's results, and how various errors may be minimized by careful observation or data reduction techniques.

  13. Reducing nurse medicine administration errors.

    PubMed

    Ofosu, Rose; Jarrett, Patricia

    Errors in administering medicines are common and can compromise the safety of patients. This review discusses the causes of drug administration error in hospitals by student and registered nurses, and the practical measures educators and hospitals can take to improve nurses' knowledge and skills in medicines management, and reduce drug errors.

  14. Error Bounds for Interpolative Approximations.

    ERIC Educational Resources Information Center

    Gal-Ezer, J.; Zwas, G.

    1990-01-01

    Elementary error estimation in the approximation of functions by polynomials as a computational assignment, error-bounding functions and error bounds, and the choice of interpolation points are discussed. Precalculus and computer instruction are used on some of the calculations. (KR)
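
    For reference, the standard bound underlying such assignments is the Lagrange interpolation remainder, stated here for a function f that is (n+1)-times continuously differentiable on [a, b] with interpolation nodes x_0, ..., x_n (a standard result, not specific to the article):

        f(x) - p_n(x) = \frac{f^{(n+1)}(\xi_x)}{(n+1)!} \prod_{i=0}^{n} (x - x_i), \qquad \xi_x \in (a, b),

        |f(x) - p_n(x)| \le \frac{\max_{t \in [a,b]} |f^{(n+1)}(t)|}{(n+1)!} \; \max_{x \in [a,b]} \prod_{i=0}^{n} |x - x_i|.

    The second factor is the quantity that a careful choice of interpolation points (for example, Chebyshev rather than equally spaced nodes) can minimize.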

  15. Uncertainty quantification and error analysis

    SciTech Connect

    Higdon, Dave M; Anderson, Mark C; Habib, Salman; Klein, Richard; Berliner, Mark; Covey, Curt; Ghattas, Omar; Graziani, Carlo; Seager, Mark; Sefcik, Joseph; Stark, Philip

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  16. Subaperture test of wavefront error of large telescopes: error sources and stitching performance simulations

    NASA Astrophysics Data System (ADS)

    Chen, Shanyong; Li, Shengyi; Wang, Guilin

    2014-11-01

    The wavefront error of large telescopes needs to be measured to check the system quality and also to estimate the misalignment of the telescope optics, including the primary, the secondary and so on. It is usually realized by a focal plane interferometer and an autocollimator flat (ACF) of the same aperture as the telescope. However, it is challenging for meter-class telescopes due to high cost and technological challenges in producing the large ACF. Subaperture test with a smaller ACF is hence proposed in combination with advanced stitching algorithms. Major error sources include the surface error of the ACF, misalignment of the ACF and measurement noise. Different error sources have different impacts on the wavefront error. Basically the surface error of the ACF behaves like systematic error, and the astigmatism will be accumulated and enlarged if the azimuth of subapertures remains fixed. It is difficult to accurately calibrate the ACF because it suffers considerable deformation induced by gravity or mechanical clamping force. Therefore a self-calibrated stitching algorithm is employed to separate the ACF surface error from the subaperture wavefront error. We suggest the ACF be rotated around the optical axis of the telescope for the subaperture test. The algorithm is also able to correct the subaperture tip-tilt based on the overlapping consistency. Since all subaperture measurements are obtained in the same imaging plane, lateral shift of the subapertures is always known and the real overlapping points can be recognized in this plane. Therefore lateral positioning error of subapertures has no impact on the stitched wavefront. In contrast, the angular positioning error changes the azimuth of the ACF and finally changes the systematic error. We propose an angularly uneven layout of subapertures to minimize the stitching error, which is contrary to what one might expect. At last, measurement noise can never be corrected but can be suppressed by means of averaging and

  17. Rapid mapping of volumetric machine errors using distance measurements

    SciTech Connect

    Krulewich, D.A.

    1998-04-01

    This paper describes a relatively inexpensive, fast, and easy-to-execute approach to mapping the volumetric errors of a machine tool, coordinate measuring machine, or robot. An error map is used to characterize a machine or to improve its accuracy by compensating for the systematic errors. The method consists of three steps: (1) modeling the relationship between volumetric error and the current state of the machine; (2) acquiring error data based on distance measurements throughout the work volume; and (3) fitting the error model using the nonlinear equation for the distance. The error model is formulated from the kinematic relationship among the six degrees of freedom of error on each moving axis. Expressing each parametric error as a function of position, each is combined to predict the error between the functional point and workpiece, also as a function of position. A series of distances between several fixed base locations and various functional points in the work volume is measured using a Laser Ball Bar (LBB). Each measured distance is a nonlinear function dependent on the commanded location of the machine, the machine error, and the location of the base locations. Using the error model, the nonlinear equation is solved, producing a fit for the error model. Also note that, given approximate distances between each pair of base locations, the exact base locations in the machine coordinate system are determined during the nonlinear fitting procedure. Furthermore, with the use of more than three base locations, bias error in the measuring instrument can be removed. The volumetric errors of a three-axis commercial machining center have been mapped using this procedure. In this study, only errors associated with the nominal position of the machine were considered. Other errors such as thermally induced and load-induced errors were not considered, although the mathematical model has the ability to account for these errors. Due to the proprietary nature of the projects we are
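
    A much-reduced sketch of step (3) follows: it fits a hypothetical linear position-error model to simulated base-to-functional-point distance measurements using nonlinear least squares. Unlike the procedure described above, the base locations are treated as known and the six-degree-of-freedom kinematic model is collapsed to four illustrative coefficients, so this shows only the shape of the fitting problem, not the LBB analysis itself.

        import numpy as np
        from scipy.optimize import least_squares

        # Reduced 2-D sketch: the "machine" reaches commanded points with a small
        # position-dependent error, and distances to fixed base points are measured.
        bases = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])        # known base locations (m)
        cmd = np.array([[x, y] for x in np.linspace(0.1, 0.9, 5)
                               for y in np.linspace(0.1, 0.9, 5)])    # commanded points (m)

        def position_error(p, coeff):
            """Hypothetical linear error model: each coefficient scales with position."""
            a, b, c, d = coeff
            return np.column_stack([a * p[:, 0] + b * p[:, 1],
                                    c * p[:, 0] + d * p[:, 1]])

        # Synthesize "measured" distances from a known true error model plus noise.
        true_coeff = np.array([50e-6, -20e-6, 10e-6, 40e-6])
        actual = cmd + position_error(cmd, true_coeff)
        rng = np.random.default_rng(0)
        measured = np.linalg.norm(actual[:, None, :] - bases[None, :, :], axis=2)
        measured += rng.normal(0.0, 1e-6, measured.shape)              # 1 um measurement noise

        def residuals(coeff):
            pred_pos = cmd + position_error(cmd, coeff)
            pred = np.linalg.norm(pred_pos[:, None, :] - bases[None, :, :], axis=2)
            return (pred - measured).ravel()

        fit = least_squares(residuals, x0=np.zeros(4))
        print("recovered coefficients:", fit.x)
        print("true coefficients:     ", true_coeff)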

  18. Determination of molar IR absorptivities and their errors

    NASA Astrophysics Data System (ADS)

    Staat, H.; Korte, E. H.

    1984-03-01

    Molar absorptivities of band maxima of acetonitrile, n-heptane, benzene, and toluene were determined from difference spectra. The statistical and most important systematic errors are given. Recently, we studied statistical and systematic errors occurring in the determination of IR absorptivities ɛ of liquids (ref. 1). Considerable systematic errors are caused by reflection losses at the outer and inner surfaces of the cell windows. It was shown that these are compensated for if the ratio of two transmittance spectra (T₁, T₂) due to different sample thicknesses (d₁, d₂) is used: in such a case the Bouguer–Lambert–Beer law leads to ɛ = log₁₀(T₁/T₂)/(c·Δd), with Δd = d₂ − d₁ (d₂ > d₁), where c denotes the concentration. The reliability of the absorptivities derived in this way is mainly affected by the statistical error comprising the standard deviations of the transmittance measurements, as well as by the systematic errors from multiple-beam interference within the cell (the fringes do not compensate for each other because of their different periods) and from the finite slit width. Experimental conditions can be chosen so that errors from beam convergence, polarization, temperature variations, and thermal emission are negligible. The influences on the transmittance measurement by drift, unwanted radiation, reliability of wavenumber reading, and non-linearity of the detector system are not considered. The molar absorptivities of band maxima of acetonitrile, n-heptane, benzene, and toluene have been determined using equation (1) and are listed in the Table. The values of Δd employed were on the order of 10 μm to 40 μm; therefore, the strongest bands could not be evaluated. The statistical error was calculated from ? and the systematic error due to finite spectral slit width (s) from ? with the band half-width 2γ. The deviation of the cell from planoparallel shape has been taken into account quantitatively; this differs from the method used previously (ref. 1). If the cell is wedge shaped so that its thickness

  19. Errors inducing radiation overdoses.

    PubMed

    Grammaticos, Philip C

    2013-01-01

    There is no doubt that equipment emitting radiation and used for therapeutic purposes should be checked often for possibly administering radiation overdoses to patients. Technologists, radiation safety officers, radiologists, medical physicists, healthcare providers and administration should pay proper attention to this issue. "We must be beneficial and not harmful to the patients", according to the Hippocratic doctrine. Cases of radiation overdose are often reported, and a series of such cases has recently been described. Doctors who were responsible received heavy punishments. It is much better to prevent than to treat an error or a disease. A Personal Smart Card or Score Card has been suggested for every patient undergoing therapeutic and/or diagnostic procedures by the use of radiation. Taxonomy may also help. PMID:24251304

  20. Measurement error in human dental mensuration.

    PubMed

    Kieser, J A; Groeneveld, H T; McKee, J; Cameron, N

    1990-01-01

    The reliability of human odontometric data was evaluated in a sample of 60 teeth. Three observers, using their own instruments and the same definition of the mesiodistal and buccolingual dimensions, were asked to repeat their measurements after 2 months. Precision, or repeatability, was analysed by means of Pearsonian correlation coefficients and mean absolute error values. Accuracy, or the absence of bias, was evaluated by means of Bland-Altman procedures and attendant Student t-tests, and also by an ANOVA procedure. The present investigation suggests that odontometric data have a high interobserver error component. Mesiodistal dimensions show greater imprecision and bias than buccolingual measurements. The results of the ANOVA suggest that bias is the result of interobserver error and is not due to the time between repeated measurements.

  1. Error suppression and error correction in adiabatic quantum computation: non-equilibrium dynamics

    NASA Astrophysics Data System (ADS)

    Sarovar, Mohan; Young, Kevin C.

    2013-12-01

    While adiabatic quantum computing (AQC) has some robustness to noise and decoherence, it is widely believed that encoding, error suppression and error correction will be required to scale AQC to large problem sizes. Previous works have established at least two different techniques for error suppression in AQC. In this paper we derive a model for describing the dynamics of encoded AQC and show that previous constructions for error suppression can be unified with this dynamical model. In addition, the model clarifies the mechanisms of error suppression and allows the identification of its weaknesses. In the second half of the paper, we utilize our description of non-equilibrium dynamics in encoded AQC to construct methods for error correction in AQC by cooling local degrees of freedom (qubits). While this is shown to be possible in principle, we also identify the key challenge to this approach: the requirement of high-weight Hamiltonians. Finally, we use our dynamical model to perform a simplified thermal stability analysis of concatenated-stabilizer-code encoded many-body systems for AQC or quantum memories. This work is a companion paper to ‘Error suppression and error correction in adiabatic quantum computation: techniques and challenges (2013 Phys. Rev. X 3 041013)’, which provides a quantum information perspective on the techniques and limitations of error suppression and correction in AQC. In this paper we couch the same results within a dynamical framework, which allows for a detailed analysis of the non-equilibrium dynamics of error suppression and correction in encoded AQC.

  2. A Bayesian approach to model structural error and input variability in groundwater modeling

    NASA Astrophysics Data System (ADS)

    Xu, T.; Valocchi, A. J.; Lin, Y. F. F.; Liang, F.

    2015-12-01

    Effective water resource management typically relies on numerical models to analyze groundwater flow and solute transport processes. Model structural error (due to simplification and/or misrepresentation of the "true" environmental system) and input forcing variability (which commonly arises since some inputs are uncontrolled or estimated with high uncertainty) are ubiquitous in groundwater models. Calibration that overlooks errors in model structure and input data can lead to biased parameter estimates and compromised predictions. We present a fully Bayesian approach for a complete assessment of uncertainty for spatially distributed groundwater models. The approach explicitly recognizes stochastic input and uses data-driven error models based on nonparametric kernel methods to account for model structural error. We employ exploratory data analysis to assist in specifying informative prior for error models to improve identifiability. The inference is facilitated by an efficient sampling algorithm based on DREAM-ZS and a parameter subspace multiple-try strategy to reduce the required number of forward simulations of the groundwater model. We demonstrate the Bayesian approach through a synthetic case study of surface-ground water interaction under changing pumping conditions. It is found that explicit treatment of errors in model structure and input data (groundwater pumping rate) has substantial impact on the posterior distribution of groundwater model parameters. Using error models reduces predictive bias caused by parameter compensation. In addition, input variability increases parametric and predictive uncertainty. The Bayesian approach allows for a comparison among the contributions from various error sources, which could inform future model improvement and data collection efforts on how to best direct resources towards reducing predictive uncertainty.

  3. Role of cognition in generating and mitigating clinical errors.

    PubMed

    Patel, Vimla L; Kannampallil, Thomas G; Shortliffe, Edward H

    2015-07-01

    Given the complexities of current clinical practice environments, strategies to reduce clinical error must appreciate that error detection and recovery are integral to the function of complex cognitive systems. In this review, while acknowledging that error elimination is an attractive notion, we use evidence to show that enhancing error detection and improving error recovery are also important goals. We further show how departures from clinical protocols or guidelines can yield innovative and appropriate solutions to unusual problems. This review addresses cognitive approaches to the study of human error and its recovery process, highlighting their implications in promoting patient safety and quality. In addition, we discuss methods for enhancing error recognition, and promoting suitable responses, through external cognitive support and virtual reality simulations for the training of clinicians. PMID:25935928

  4. Fixed-point error analysis of Winograd Fourier transform algorithms

    NASA Technical Reports Server (NTRS)

    Patterson, R. W.; Mcclellan, J. H.

    1978-01-01

    The quantization error introduced by the Winograd Fourier transform algorithm (WFTA) when implemented in fixed-point arithmetic is studied and compared with that of the fast Fourier transform (FFT). The effect of ordering the computational modules and the relative contributions of data quantization error and coefficient quantization error are determined. In addition, the quantization error introduced by the Good-Winograd (GW) algorithm, which uses Good's prime-factor decomposition for the discrete Fourier transform (DFT) together with Winograd's short length DFT algorithms, is studied. Error introduced by the WFTA is, in all cases, worse than that of the FFT. In general, the WFTA requires one or two more bits for data representation to give an error similar to that of the FFT. Error introduced by the GW algorithm is approximately the same as that of the FFT.
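
    The role of data-representation bits can be illustrated with a small experiment that quantizes only the input data and then transforms it in floating point. This does not reproduce the fixed-point arithmetic inside the WFTA or FFT analysed above; it only shows the roughly factor-of-two reduction in output error per additional fractional bit that makes the "one or two more bits" comparison meaningful.

        import numpy as np

        def quantize(x, frac_bits):
            """Round to a fixed-point grid with the given number of fractional bits."""
            scale = 2.0 ** frac_bits
            return np.round(x * scale) / scale

        rng = np.random.default_rng(0)
        n = 1024
        x = rng.uniform(-0.5, 0.5, n)          # test signal scaled to avoid overflow
        X_ref = np.fft.fft(x)                   # double-precision reference

        for bits in (8, 10, 12, 14):
            Xq = np.fft.fft(quantize(x, bits))  # only the input data are quantized here
            rms = np.sqrt(np.mean(np.abs(Xq - X_ref) ** 2))
            print(f"{bits:2d} fractional bits: RMS output error = {rms:.3e}")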

  5. Identification of Error Patterns in Terminal-Area ATC Communications

    NASA Technical Reports Server (NTRS)

    Quinn, Cheryl; Walter, Kim E.; Rosekind, Mark R. (Technical Monitor)

    1997-01-01

    Advancing air traffic management technologies have enabled a greater number of aircraft to use the same airspace more effectively. As aircraft separations are reduced and final approaches are more finely timed, there is less room for error. The present study examined 122 terminal-area, loss-of-separation and procedure violation incidents reported to the Aviation Safety Reporting System (ASRS) by air traffic controllers. Narrative description codes were assigned to the incidents for type of violation, contributing factors, recovery strategies, and consequences. Usually multiple errors occurred prior to the violation. Error sequences were analyzed and common patterns of errors were identified. In half of the incidents, errors were noticed in time to correct mistakes; in almost 43% of these, additional errors were committed during the recovery attempt. This analysis shows that redundancies in the present air traffic control system may not be sufficient to support large increases in traffic density. Error prevention and design considerations for air traffic management systems are discussed.

  6. Register file soft error recovery

    SciTech Connect

    Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.

    2013-10-15

    Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.

  7. Rapid mapping of volumetric errors

    SciTech Connect

    Krulewich, D.; Hale, L.; Yordy, D.

    1995-09-13

    This paper describes a relatively inexpensive, fast, and easy to execute approach to mapping the volumetric errors of a machine tool, coordinate measuring machine, or robot. An error map is used to characterize a machine or to improve its accuracy by compensating for the systematic errors. The method consists of three steps: (1) modeling the relationship between the volumetric error and the current state of the machine; (2) acquiring error data based on length measurements throughout the work volume; and (3) optimizing the model to the particular machine.

  8. New Gear Transmission Error Measurement System Designed

    NASA Technical Reports Server (NTRS)

    Oswald, Fred B.

    2001-01-01

    The prime source of vibration and noise in a gear system is the transmission error between the meshing gears. Transmission error is caused by manufacturing inaccuracy, mounting errors, and elastic deflections under load. Gear designers often attempt to compensate for transmission error by modifying gear teeth. This is done traditionally by a rough "rule of thumb" or more recently under the guidance of an analytical code. In order for a designer to have confidence in a code, the code must be validated through experiment. NASA Glenn Research Center contracted with the Design Unit of the University of Newcastle in England for a system to measure the transmission error of spur and helical test gears in the NASA Gear Noise Rig. The new system measures transmission error optically by means of light beams directed by lenses and prisms through gratings mounted on the gear shafts. The amount of light that passes through both gratings is directly proportional to the transmission error of the gears. A photodetector circuit converts the light to an analog electrical signal. To increase accuracy and reduce "noise" due to transverse vibration, there are parallel light paths at the top and bottom of the gears. The two signals are subtracted via differential amplifiers in the electronics package. The output of the system is 40 mV/mm, giving a resolution in the time domain of better than 0.1 mm, and discrimination in the frequency domain of better than 0.01 mm. The new system will be used to validate gear analytical codes and to investigate mechanisms that produce vibration and noise in parallel axis gears.

  9. Systematic error analysis of rotating coil using computer simulation

    SciTech Connect

    Li, Wei-chuan; Coles, M.

    1993-04-01

    This report describes a study of the systematic and random measurement uncertainties of magnetic multipoles which are due to construction errors, rotational speed variation, and electronic noise in a digitally bucked tangential coil assembly with dipole bucking windings. The sensitivities of the systematic multipole uncertainty to construction errors are estimated analytically and using a computer simulation program.

  10. Gender Bender: Gender Errors in L2 Pronoun Production

    ERIC Educational Resources Information Center

    Anton-Mendez, Ines

    2010-01-01

    To address questions about information processing at the message level, pronoun errors of second language (L2) speakers of English were studied. Some L2 pronoun errors--"he/she" confusions by Spanish speakers of L2 English--could be due to differences in the informational requirements of the speakers' two languages, providing a window into the…

  11. Reducing errors in the GRACE gravity solutions using regularization

    NASA Astrophysics Data System (ADS)

    Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.

    2012-09-01

    The nature of the gravity field inverse problem amplifies the noise in the GRACE data, which creeps into the mid and high degree and order harmonic coefficients of the Earth's monthly gravity fields provided by GRACE. Due to the use of imperfect background models and data noise, these errors are manifested as north-south striping in the monthly global maps of equivalent water heights. In order to reduce these errors, this study investigates the use of the L-curve method with Tikhonov regularization. The L-curve is a popular aid for determining a suitable value of the regularization parameter when solving linear discrete ill-posed problems using Tikhonov regularization. However, the computational effort required to determine the L-curve is prohibitively high for a large-scale problem like GRACE. This study implements a parameter-choice method using Lanczos bidiagonalization, which is a computationally inexpensive approximation to the L-curve. Lanczos bidiagonalization is implemented with orthogonal transformation in a parallel computing environment and projects the large estimation problem onto a problem about two orders of magnitude smaller for computing the regularization parameter. Errors in the GRACE solution time series have certain characteristics that vary depending on the ground track coverage of the solutions. These errors increase with increasing degree and order. In addition, certain resonant and near-resonant harmonic coefficients have higher errors as compared with the other coefficients. Using the knowledge of these characteristics, this study designs a regularization matrix that provides a constraint on the geopotential coefficients as a function of degree and order. This regularization matrix is then used to compute the appropriate regularization parameter for each monthly solution. A 7-year time series of the candidate regularized solutions (Mar 2003-Feb 2010) shows markedly reduced error stripes compared with the unconstrained GRACE release 4
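
    The underlying Tikhonov/L-curve idea can be sketched on a toy ill-conditioned problem. The sketch below forms the full regularized normal equations and picks the regularization parameter at the point of maximum curvature of the log-log L-curve; it does not use the Lanczos bidiagonalization projection that makes the approach affordable for a problem of GRACE's size, and all matrices and noise levels are synthetic.

        import numpy as np

        # Small ill-conditioned toy problem standing in for the gravity inversion.
        rng = np.random.default_rng(0)
        n = 40
        U, _ = np.linalg.qr(rng.normal(size=(n, n)))
        V, _ = np.linalg.qr(rng.normal(size=(n, n)))
        s = np.logspace(0, -8, n)                     # rapidly decaying singular values
        A = U @ np.diag(s) @ V.T
        x_true = V[:, :5] @ rng.normal(size=5)        # smooth "true" solution
        b = A @ x_true + rng.normal(scale=1e-4, size=n)

        lams = np.logspace(-10, 0, 60)
        res_norm, sol_norm, xs = [], [], []
        for lam in lams:
            # Tikhonov: minimize ||Ax - b||^2 + lam^2 ||x||^2
            x = np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)
            xs.append(x)
            res_norm.append(np.log(np.linalg.norm(A @ x - b)))
            sol_norm.append(np.log(np.linalg.norm(x)))

        # Discrete curvature of the L-curve (log-log plot); the corner maximizes it.
        r, e = np.array(res_norm), np.array(sol_norm)
        dr, de = np.gradient(r), np.gradient(e)
        d2r, d2e = np.gradient(dr), np.gradient(de)
        curvature = (dr * d2e - de * d2r) / (dr**2 + de**2) ** 1.5
        best = 1 + np.argmax(np.abs(curvature[1:-1]))
        print("corner lambda:", lams[best])
        print("error vs truth:", np.linalg.norm(xs[best] - x_true))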

  12. Diagnostic Errors Study Findings

    MedlinePlus

    ahrq.gov/professionals/quality-patient-safety/quality-resources/tools/office-testing-toolkit/

  13. Experimental quantum error correction with high fidelity

    SciTech Connect

    Zhang Jingfu; Gangloff, Dorian; Moussa, Osama; Laflamme, Raymond

    2011-09-15

    More than ten years ago a first step toward quantum error correction (QEC) was implemented [Phys. Rev. Lett. 81, 2152 (1998)]. The work showed there was sufficient control in nuclear magnetic resonance to implement QEC, and demonstrated that the error rate changed from ε to ~ε². In the current work we reproduce a similar experiment using control techniques that have since been developed, such as the pulses generated by the gradient ascent pulse engineering algorithm. We show that the fidelity of the QEC gate sequence and the comparative advantage of QEC are appreciably improved. This advantage is maintained despite the errors introduced by the additional operations needed to protect the quantum states.

  14. Improved astigmatic focus error detection method

    NASA Technical Reports Server (NTRS)

    Bernacki, Bruce E.

    1992-01-01

    All easy-to-implement focus- and track-error detection methods presently used in magneto-optical (MO) disk drives using pre-grooved media suffer from a side effect known as feedthrough. Feedthrough is the unwanted focus error signal (FES) produced when the optical head is seeking a new track, and light refracted from the pre-grooved disk produces an erroneous FES. Some focus and track-error detection methods are more resistant to feedthrough, but tend to be complicated and/or difficult to keep in alignment as a result of environmental insults. The astigmatic focus/push-pull tracking method is an elegant, easy-to-align focus- and track-error detection method. Unfortunately, it is also highly susceptible to feedthrough when astigmatism is present, with the worst effects caused by astigmatism oriented such that the tangential and sagittal foci are at 45 deg to the track direction. This disclosure outlines a method to nearly completely eliminate the worst-case form of feedthrough due to astigmatism oriented 45 deg to the track direction. Feedthrough due to other primary aberrations is not improved, but performance is identical to the unimproved astigmatic method.

  15. Quantum Error Correction with Biased Noise

    NASA Astrophysics Data System (ADS)

    Brooks, Peter

    Quantum computing offers powerful new techniques for speeding up the calculation of many classically intractable problems. Quantum algorithms can allow for the efficient simulation of physical systems, with applications to basic research, chemical modeling, and drug discovery; other algorithms have important implications for cryptography and internet security. At the same time, building a quantum computer is a daunting task, requiring the coherent manipulation of systems with many quantum degrees of freedom while preventing environmental noise from interacting too strongly with the system. Fortunately, we know that, under reasonable assumptions, we can use the techniques of quantum error correction and fault tolerance to achieve an arbitrary reduction in the noise level. In this thesis, we look at how additional information about the structure of noise, or "noise bias," can improve or alter the performance of techniques in quantum error correction and fault tolerance. In Chapter 2, we explore the possibility of designing certain quantum gates to be extremely robust with respect to errors in their operation. This naturally leads to structured noise where certain gates can be implemented in a protected manner, allowing the user to focus their protection on the noisier unprotected operations. In Chapter 3, we examine how to tailor error-correcting codes and fault-tolerant quantum circuits in the presence of dephasing biased noise, where dephasing errors are far more common than bit-flip errors. By using an appropriately asymmetric code, we demonstrate the ability to improve the amount of error reduction and decrease the physical resources required for error correction. In Chapter 4, we analyze a variety of protocols for distilling magic states, which enable universal quantum computation, in the presence of faulty Clifford operations. Here again there is a hierarchy of noise levels, with a fixed error rate for faulty gates, and a second rate for errors in the distilled

  16. Error modelling on burned area products

    NASA Astrophysics Data System (ADS)

    Padilla, M.; Chuvieco, E.

    2012-12-01

    In the last decade multiple efforts have been undertaken to map burned areas (BA) at a global scale. Global BA projects usually include a validation phase, which aims to assess product quality. Errors are commonly measured in these validation exercises, but they frequently do not tackle error sources, which hampers the use of BA products as input to earth system models. In this study we present a method to assess the relationships between commission and omission errors on one side and landscape and burned-patch characteristics on the other. Errors were extracted by comparing global BA results and Landsat BA perimeters. Selected factors to explain the error distribution were related to landscape characteristics and the quality of input data. The former included BA spatial properties, tree cover (from the MODIS Vegetation Continuous Fields product), and the land cover type (Globcover 2005). The latter were the number of cloud-free observations, the confidence level of the BA algorithm and the sub-pixel proportion of true BA. The relationship between explanatory variables and errors was estimated using Generalized Additive Models. This analysis was undertaken to assess global BA products within the framework of the fire_cci project (www.esa-fire-cci.org). This project is part of the European Space Agency's Climate Change Initiative, which aims to generate long-term global products of Essential Climate Variables (ECV). The fire_cci project aims to generate time series of global BA, merging data from three sensors: MERIS, (A)ATSR and VEGETATION. The error characterization exercise presented in this paper was based on MERIS BA results from 2005 in four study sites (Australia, Brazil, Canada and Kazakhstan). Results show that errors are more frequent on pixels partially burned, and tend to decrease for high and low tree cover (when areas have either 0 or 100%), as well as when the product confidence level is high. Detected burned pixels surrounded by other burned pixels were found less

  17. Scaling prediction errors to reward variability benefits error-driven learning in humans

    PubMed Central

    Schultz, Wolfram

    2015-01-01

    Effective error-driven learning requires individuals to adapt learning to environmental reward variability. The adaptive mechanism may involve decays in learning rate across subsequent trials, as shown previously, and rescaling of reward prediction errors. The present study investigated the influence of prediction error scaling and, in particular, the consequences for learning performance. Participants explicitly predicted reward magnitudes that were drawn from different probability distributions with specific standard deviations. By fitting the data with reinforcement learning models, we found scaling of prediction errors, in addition to the learning rate decay shown previously. Importantly, the prediction error scaling was closely related to learning performance, defined as accuracy in predicting the mean of reward distributions, across individual participants. In addition, participants who scaled prediction errors relative to standard deviation also presented with more similar performance for different standard deviations, indicating that increases in standard deviation did not substantially decrease “adapters'” accuracy in predicting the means of reward distributions. However, exaggerated scaling beyond the standard deviation resulted in impaired performance. Thus efficient adaptation makes learning more robust to changing variability. PMID:26180123
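
    A minimal delta-rule learner with the two adaptive mechanisms mentioned above (a decaying learning rate and prediction errors rescaled by a running estimate of reward spread) might look like the sketch below; the parameter values and the spread estimator are hypothetical, not the fitted models of the study.

        import numpy as np

        def run_learner(rewards, alpha0=0.6, decay=0.02, scale_by_sd=True):
            """Delta-rule learner with a decaying learning rate and (optionally)
            prediction errors rescaled by a running estimate of reward spread."""
            value, sd_est = 0.0, 1.0
            estimates = []
            for t, r in enumerate(rewards):
                alpha = alpha0 / (1.0 + decay * t)        # learning-rate decay
                pe = r - value                            # reward prediction error
                sd_est += 0.1 * (abs(pe) - sd_est)        # crude running spread estimate
                if scale_by_sd:
                    pe = pe / sd_est                      # scaled (normalized) prediction error
                value += alpha * pe
                estimates.append(value)
            return np.array(estimates)

        rng = np.random.default_rng(0)
        for sd in (2.0, 10.0):                            # two reward distributions
            rewards = rng.normal(50.0, sd, 200)
            v = run_learner(rewards)
            print(f"sd={sd:4.1f}: final estimate {v[-1]:6.2f} (true mean 50)")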

  18. The economics of health care quality and medical errors.

    PubMed

    Andel, Charles; Davidow, Stephen L; Hollander, Mark; Moreno, David A

    2012-01-01

    Hospitals have been looking for ways to improve quality and operational efficiency and cut costs for nearly three decades, using a variety of quality improvement strategies. However, based on recent reports, approximately 200,000 Americans die from preventable medical errors, including facility-acquired conditions, and millions may experience errors. In 2008, medical errors cost the United States $19.5 billion. About 87 percent, or $17 billion, was directly associated with additional medical cost, including ancillary services, prescription drug services, and inpatient and outpatient care, according to a study sponsored by the Society of Actuaries and conducted by Milliman in 2010. Additional costs of $1.4 billion were attributed to increased mortality rates, with $1.1 billion or 10 million days of lost productivity from missed work based on short-term disability claims. The authors estimate that the economic impact is much higher, perhaps nearly $1 trillion annually when quality-adjusted life years (QALYs) are applied to those that die. Using the Institute of Medicine's (IOM) estimate of 98,000 deaths due to preventable medical errors annually in its 1998 report, To Err Is Human, and an average of ten lost years of life at $75,000 to $100,000 per year, there is a loss of $73.5 billion to $98 billion in QALYs for those deaths, conservatively. These numbers are much greater than those we cite from studies that explore the direct costs of medical errors. And if the estimate of a recent Health Affairs article is correct, with preventable deaths being ten times the IOM estimate, the cost is $735 billion to $980 billion. Quality care is less expensive care. It is better, more efficient, and by definition, less wasteful. It is the right care, at the right time, every time. It should mean that far fewer patients are harmed or injured. Obviously, quality care is not being delivered consistently throughout U.S. hospitals. Whatever the measure, poor quality is costing payers and

  19. Improved Error Thresholds for Measurement-Free Error Correction

    NASA Astrophysics Data System (ADS)

    Crow, Daniel; Joynt, Robert; Saffman, M.

    2016-09-01

    Motivated by limitations and capabilities of neutral atom qubits, we examine whether measurement-free error correction can produce practical error thresholds. We show that this can be achieved by extracting redundant syndrome information, giving our procedure extra fault tolerance and eliminating the need for ancilla verification. The procedure is particularly favorable when multiqubit gates are available for the correction step. Simulations of the bit-flip, Bacon-Shor, and Steane codes indicate that coherent error correction can produce threshold error rates that are on the order of 10-3 to 10-4—comparable with or better than measurement-based values, and much better than previous results for other coherent error correction schemes. This indicates that coherent error correction is worthy of serious consideration for achieving protected logical qubits.

  20. Contour Error Map Algorithm

    NASA Technical Reports Server (NTRS)

    Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John

    2005-01-01

    The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.
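
    A toy sketch of the preprocessing described above, binarizing gridded wind direction into onshore/offshore fields D(i,j;n) and d(i,j;n) and applying an optional erosion step, is shown below; the coastline orientation, grids, and comparison metric are hypothetical, and the sea-breeze boundary identification subalgorithm is not included.

        import numpy as np
        from scipy.ndimage import binary_erosion

        # Hypothetical gridded wind-direction fields (degrees) on a small i,j grid
        # for one 5-minute time step; 1 = onshore, 0 = offshore after binarization.
        rng = np.random.default_rng(3)
        forecast_dir = rng.uniform(0.0, 360.0, (20, 20))
        observed_dir = forecast_dir + rng.normal(0.0, 25.0, (20, 20))

        def binarize_onshore(direction_deg, onshore_min=45.0, onshore_max=225.0):
            """Toy onshore/offshore rule for an illustrative coastline orientation."""
            return ((direction_deg >= onshore_min) & (direction_deg <= onshore_max)).astype(int)

        D = binarize_onshore(forecast_dir)      # forecast input to CEM
        d = binarize_onshore(observed_dir)      # gridded observation input to CEM

        # Optional erosion step, analogous to the one used to suppress
        # small-scale (e.g. river-breeze) features before comparison.
        D_eroded = binary_erosion(D, structure=np.ones((3, 3))).astype(int)
        d_eroded = binary_erosion(d, structure=np.ones((3, 3))).astype(int)

        agreement = np.mean(D_eroded == d_eroded)
        print("fraction of grid cells in agreement:", agreement)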

  1. An Introduction to Error Analysis for Quantitative Chemistry

    ERIC Educational Resources Information Center

    Neman, R. L.

    1972-01-01

    Describes two formulas for calculating errors due to instrument limitations which are usually found in gravimetric volumetric analysis and indicates their possible applications to other fields of science. (CC)

  2. Cosmic ray-induced soft errors in static MOS memory cells

    NASA Technical Reports Server (NTRS)

    Sivo, L. L.; Peden, J. C.; Brettschneider, M.; Price, W.; Pentecost, P.

    1979-01-01

    Previous analytical models were extended to predict cosmic ray-induced soft error rates in static MOS memory devices. The effect is due to ionization and can be introduced by high energy, heavy ion components of the galactic environment. The results indicate that the sensitivity of memory cells is directly related to the density of the particular MOS technology which determines the node capacitance values. Hence, CMOS is less sensitive than e.g., PMOS. In addition, static MOS memory cells are less sensitive than dynamic ones due to differences in the mechanisms of storing bits. The flip-flop of a static cell is inherently stable against cosmic ray-induced bit flips. Predicted error rates on a CMOS RAM and a PMOS shift register are in general agreement with previous spacecraft flight data.

  3. Estimating errors in least-squares fitting

    NASA Technical Reports Server (NTRS)

    Richter, P. H.

    1995-01-01

    While least-squares fitting procedures are commonly used in data analysis and are extensively discussed in the literature devoted to this subject, the proper assessment of errors resulting from such fits has received relatively little attention. The present work considers statistical errors in the fitted parameters, as well as in the values of the fitted function itself, resulting from random errors in the data. Expressions are derived for the standard error of the fit, as a function of the independent variable, for the general nonlinear and linear fitting problems. Additionally, closed-form expressions are derived for some examples commonly encountered in the scientific and engineering fields, namely ordinary polynomial and Gaussian fitting functions. These results have direct application to the assessment of the antenna gain and system temperature characteristics, in addition to a broad range of problems in data analysis. The effects of the nature of the data and the choice of fitting function on the ability to accurately model the system under study are discussed, and some general rules are deduced to assist workers intent on maximizing the amount of information obtained from a given set of measurements.
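
    As a concrete illustration of the standard error of a fit, the sketch below applies the textbook linear least-squares formulas (coefficient covariance s^2 (X^T X)^-1 and the resulting standard error of the fitted function at a query point). It is not reproduced from the cited report, and the data are synthetic.

```python
import numpy as np

def fit_with_errors(x, y, degree=1):
    """Ordinary least-squares polynomial fit with standard errors.

    Returns the coefficients, their covariance, and a function giving the
    standard error of the fitted curve at any query point (textbook
    formulas; not taken from the cited report).
    """
    X = np.vander(x, degree + 1)                  # design matrix
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    dof = len(x) - (degree + 1)
    s2 = resid @ resid / dof                      # residual variance estimate
    cov = s2 * np.linalg.inv(X.T @ X)             # covariance of the coefficients

    def se_fit(xq):
        Xq = np.vander(np.atleast_1d(xq).astype(float), degree + 1)
        return np.sqrt(np.einsum('ij,jk,ik->i', Xq, cov, Xq))

    return coef, cov, se_fit

x = np.linspace(0.0, 10.0, 25)
y = 2.0 * x + 1.0 + np.random.default_rng(1).normal(0.0, 0.5, x.size)
coef, cov, se_fit = fit_with_errors(x, y, degree=1)
print(coef)                                # roughly [2.0, 1.0] (slope, intercept)
print(se_fit(np.array([0.0, 5.0, 10.0])))  # standard error of the fit at x = 0, 5, 10
```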

  4. Grammatical Errors Produced by English Majors: The Translation Task

    ERIC Educational Resources Information Center

    Mohaghegh, Hamid; Zarandi, Fatemeh Mahmoudi; Shariati, Mohammad

    2011-01-01

    This study investigated the frequency of the grammatical errors related to the four categories of preposition, relative pronoun, article, and tense using the translation task. In addition, the frequencies of these grammatical errors in different categories and in each category were examined. The quantitative component of the study further looked…

  5. Errors in Viking Lander Atmospheric Profiles Discovered Using MOLA Topography

    NASA Technical Reports Server (NTRS)

    Withers, Paul; Lorenz, R. D.; Neumann, G. A.

    2002-01-01

    Each Viking lander measured a topographic profile during entry. Comparing to MOLA (Mars Orbiter Laser Altimeter), we find a vertical error of 1-2 km in the Viking trajectory. This introduces a systematic error of 10-20% in the Viking densities and pressures at a given altitude. Additional information is contained in the original extended abstract.

  6. Dopamine reward prediction error coding.

    PubMed

    Schultz, Wolfram

    2016-03-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards-an evolutionarily beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware.

  7. Dopamine reward prediction error coding

    PubMed Central

    Schultz, Wolfram

    2016-01-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards—an evolutionarily beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware. PMID:27069377
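
    The core quantity described above is easy to state in code. The sketch below is a minimal Rescorla-Wagner-style update in which the prediction error is the received reward minus the predicted reward and the prediction is nudged toward the reward; it is a textbook illustration, not a model fitted to the dopamine recordings discussed in these records.

```python
# Minimal Rescorla-Wagner-style illustration of reward prediction errors:
# delta = received reward - predicted reward; the prediction is nudged by a
# learning rate times delta. Textbook formulation, not fitted to any data.
def update(prediction, reward, learning_rate=0.1):
    delta = reward - prediction          # positive, zero, or negative prediction error
    return prediction + learning_rate * delta, delta

prediction = 0.0
for trial in range(20):
    prediction, delta = update(prediction, reward=1.0)
    print(f"trial {trial:2d}  prediction {prediction:.3f}  error {delta:+.3f}")
```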

  8. Error correction for IFSAR

    DOEpatents

    Doerry, Armin W.; Bickel, Douglas L.

    2002-01-01

    IFSAR images of a target scene are generated by compensating for variations in vertical separation between collection surfaces defined for each IFSAR antenna by adjusting the baseline projection during image generation. In addition, height information from all antennas is processed before processing range and azimuth information in a normal fashion to create the IFSAR image.

  9. Systematic errors for a Mueller matrix dual rotating compensator ellipsometer.

    PubMed

    Broch, Laurent; En Naciri, Aotmane; Johann, Luc

    2008-06-01

    The characterization of anisotropic materials and complex systems by ellipsometry has pushed instrument design toward measuring the full reflection Mueller matrix of the sample with great precision. Mueller matrix ellipsometers have therefore emerged over the past twenty years. The values of some coefficients of the matrix can be very small, and errors due to noise or systematic effects can distort the analysis. We present a detailed characterization of the systematic errors for a Mueller matrix ellipsometer in the dual-rotating-compensator configuration. Starting from a general formalism, we derive explicit first-order expressions for the errors on all the coefficients of the Mueller matrix of the sample. The errors caused by inaccuracy in the azimuthal arrangement of the optical components and by residual ellipticity introduced by imperfect optical elements are shown. A new method based on a four-zone averaging measurement is proposed to eliminate the systematic errors.

  10. Optimal joint power-rate adaptation for error resilient video coding

    NASA Astrophysics Data System (ADS)

    Lin, Yuan; Gürses, Eren; Kim, Anna N.; Perkis, Andrew

    2008-01-01

    In recent years digital imaging devices have become an integral part of our daily lives due to advances in imaging, storage and wireless communication technologies. Power-Rate-Distortion (P-R-D) efficiency is the key factor common to all resource-constrained portable devices. In addition, especially in real-time wireless multimedia applications, channel-adaptive and error-resilient source coding techniques should be considered in conjunction with P-R-D efficiency, since most of the time Automatic Repeat-reQuest (ARQ) and Forward Error Correction (FEC) are either not feasible or costly in terms of bandwidth efficiency and delay. In this work, we focus on scenarios of real-time video communication for resource-constrained devices over bandwidth-limited and lossy channels, and propose an analytic Power-channel Error-Rate-Distortion (P-E-R-D) model. In particular, the probabilities of macroblock coding modes are intelligently controlled through an optimization process according to their distinct rate-distortion-complexity performance for a given channel error rate. The framework provides theoretical guidelines for the joint analysis of error-resilient source coding and resource allocation. Experimental results show that our optimal framework provides consistent rate-distortion performance gain under different power constraints.

  11. Error Sensitivity to Environmental Noise in Quantum Circuits for Chemical State Preparation.

    PubMed

    Sawaya, Nicolas P D; Smelyanskiy, Mikhail; McClean, Jarrod R; Aspuru-Guzik, Alán

    2016-07-12

    Calculating molecular energies is likely to be one of the first useful applications to achieve quantum supremacy, performing faster on a quantum than a classical computer. However, if future quantum devices are to produce accurate calculations, errors due to environmental noise and algorithmic approximations need to be characterized and reduced. In this study, we use the high performance qHiPSTER software to investigate the effects of environmental noise on the preparation of quantum chemistry states. We simulated 18 16-qubit quantum circuits under environmental noise, each corresponding to a unitary coupled cluster state preparation of a different molecule or molecular configuration. Additionally, we analyze the nature of simple gate errors in noise-free circuits of up to 40 qubits. We find that, in most cases, the Jordan-Wigner (JW) encoding produces smaller errors under a noisy environment as compared to the Bravyi-Kitaev (BK) encoding. For the JW encoding, pure dephasing noise is shown to produce substantially smaller errors than pure relaxation noise of the same magnitude. We report error trends in both molecular energy and electron particle number within a unitary coupled cluster state preparation scheme, against changes in nuclear charge, bond length, number of electrons, noise types, and noise magnitude. These trends may prove to be useful in making algorithmic and hardware-related choices for quantum simulation of molecular energies. PMID:27254482

  12. Tomographic Errors From Wavefront Healing

    NASA Astrophysics Data System (ADS)

    Malcolm, A. E.; Trampert, J.

    2008-12-01

    Despite recent advances in full-waveform modeling, ray theory is still, for good reasons, the preferred method in global tomography. It is well known that ray theory is most accurate for anomalies that are large compared to the wavelength. Exactly what errors result from the failure of this assumption is less well understood, despite the fact that anomalies found in the Earth by ray-based tomography methods are often outside the regime in which ray theory is known to be valid. Using the spectral element method, we have computed exact delay times and compared them to ray-theoretical traveltimes for two classic anomalies: one large and disk-shaped near the core-mantle boundary, and the other a plume-like structure extending throughout the mantle. Wavefront healing is apparent in the traveltime anomalies generated by these structures; its effects are strongly asymmetric between P and S arrivals due to wavelength differences and source directionality. Simple computations in two dimensions allow us to develop the intuition necessary to understand how diffractions around the anomalies explain these results. When inverting the exact travel time anomalies with ray theory we expect wavefront healing to have a strong influence on the resulting structures. We anticipate that the asymmetry will be of particular importance in anomalies in the bulk velocity structure.

  13. Medication Errors in Outpatient Pediatrics.

    PubMed

    Berrier, Kyla

    2016-01-01

    Medication errors may occur during parental administration of prescription and over-the-counter medications in the outpatient pediatric setting. Misinterpretation of medication labels and dosing errors are two types of errors in medication administration. Health literacy may play an important role in parents' ability to safely manage their child's medication regimen. There are several proposed strategies for decreasing these medication administration errors, including using standardized dosing instruments, using strictly metric units for medication dosing, and providing parents and caregivers with picture-based dosing instructions. Pediatric healthcare providers should be aware of these strategies and seek to implement many of them into their practices. PMID:27537086

  14. A theory of human error

    NASA Technical Reports Server (NTRS)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1980-01-01

    Human error, a significant contributing factor in a very high proportion of civil transport, general aviation, and rotorcraft accidents is investigated. Correction of the sources of human error requires that one attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation operations is presented. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  15. Measurement Error and Equating Error in Power Analysis

    ERIC Educational Resources Information Center

    Phillips, Gary W.; Jiang, Tao

    2016-01-01

    Power analysis is a fundamental prerequisite for conducting scientific research. Without power analysis the researcher has no way of knowing whether the sample size is large enough to detect the effect he or she is looking for. This paper demonstrates how psychometric factors such as measurement error and equating error affect the power of…

  16. Anxiety and Error Monitoring: Increased Error Sensitivity or Altered Expectations?

    ERIC Educational Resources Information Center

    Compton, Rebecca J.; Carp, Joshua; Chaddock, Laura; Fineman, Stephanie L.; Quandt, Lorna C.; Ratliff, Jeffrey B.

    2007-01-01

    This study tested the prediction that the error-related negativity (ERN), a physiological measure of error monitoring, would be enhanced in anxious individuals, particularly in conditions with threatening cues. Participants made gender judgments about faces whose expressions were either happy, angry, or neutral. Replicating prior studies, midline…

  17. Approaches to relativistic positioning around Earth and error estimations

    NASA Astrophysics Data System (ADS)

    Puchades, Neus; Sáez, Diego

    2016-01-01

    In the context of relativistic positioning, the coordinates of a given user may be calculated by using suitable information broadcast by a 4-tuple of satellites. Our 4-tuples belong to the Galileo constellation. Recently, we estimated the positioning errors due to uncertainties in the satellite world lines (U-errors). A distribution of U-errors was obtained, at various times, in a set of points covering a large region surrounding Earth. Here, the positioning errors associated to the simplifying assumption that photons move in Minkowski space-time (S-errors) are estimated and compared with the U-errors. Both errors have been calculated for the same points and times to make comparisons possible. For a certain realistic modeling of the world line uncertainties, the estimated S-errors have proved to be smaller than the U-errors, which shows that the approach based on the assumption that the Earth's gravitational field produces negligible effects on photons may be used in a large region surrounding Earth. The applicability of this approach - which simplifies numerical calculations - to positioning problems, and the usefulness of our S-error maps, are pointed out. A better approach, based on the assumption that photons move in the Schwarzschild space-time governed by an idealized Earth, is also analyzed. More accurate descriptions of photon propagation involving non symmetric space-time structures are not necessary for ordinary positioning and spacecraft navigation around Earth.

  18. Quality assessment of speckle patterns for DIC by consideration of both systematic errors and random errors

    NASA Astrophysics Data System (ADS)

    Su, Yong; Zhang, Qingchuan; Xu, Xiaohai; Gao, Zeren

    2016-11-01

    The performance of digital image correlation (DIC) is significantly influenced by the quality of the speckle patterns. Thus, it is crucial to present a valid and practical method to assess the quality of speckle patterns. However, existing assessment methods either lack a solid theoretical foundation or fail to consider the errors due to interpolation. In this work, it is proposed to assess the quality of speckle patterns by estimating the root mean square error (RMSE) of DIC, which is the square root of the sum of the squares of the systematic error and the random error. Two performance evaluation parameters, the maximum and the quadratic mean of the RMSE, are proposed to characterize the total error. An efficient algorithm is developed to estimate these parameters, and the correctness of this algorithm is verified by numerical experiments on both one-dimensional signals and actual speckle images. The influences of the correlation criterion, shape function order, and sub-pixel registration algorithm are briefly discussed. Compared to existing methods, the method presented in this paper is more valid because it considers both measurement accuracy and precision.
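
    A minimal numerical illustration of the quality measure described above: the total RMSE is taken as the square root of the squared systematic error plus the squared random error, and the maximum and quadratic mean of the RMSE over a set of conditions serve as the two summary parameters. The per-condition error values below are invented for illustration and are not from the paper.

```python
import numpy as np

def rmse_total(systematic, random_std):
    """Combine a systematic error (bias) and a random error (standard
    deviation) into a total RMSE, as in RMSE^2 = bias^2 + sigma^2."""
    systematic = np.asarray(systematic, dtype=float)
    random_std = np.asarray(random_std, dtype=float)
    return np.sqrt(systematic ** 2 + random_std ** 2)

# Hypothetical per-condition errors estimated for a speckle pattern at a set
# of sub-pixel shifts (values are illustrative only).
bias = np.array([0.002, 0.010, 0.004, 0.008])    # pixels
sigma = np.array([0.015, 0.012, 0.020, 0.011])   # pixels
rmse = rmse_total(bias, sigma)

max_rmse = rmse.max()                              # worst-case quality parameter
quadratic_mean_rmse = np.sqrt(np.mean(rmse ** 2))  # average quality parameter
print(max_rmse, quadratic_mean_rmse)
```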

  19. Improving medication administration error reporting systems. Why do errors occur?

    PubMed

    Wakefield, B J; Wakefield, D S; Uden-Holman, T

    2000-01-01

    Monitoring medication administration errors (MAE) is often included as part of the hospital's risk management program. While observation of actual medication administration is the most accurate way to identify errors, hospitals typically rely on voluntary incident reporting processes. Although incident reporting systems are more economical than other methods of error detection, incident reporting can also be a time-consuming process depending on the complexity or "user-friendliness" of the reporting system. Accurate incident reporting systems are also dependent on the ability of the practitioner to: 1) recognize an error has actually occurred; 2) believe the error is significant enough to warrant reporting; and 3) overcome the embarrassment of having committed a MAE and the fear of punishment for reporting a mistake (either one's own or another's mistake).

  20. Error coding simulations in C

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1994-01-01

    When data is transmitted through a noisy channel, errors are produced within the data rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2(exp 8)) with interleave depth of 5 as the outermost code, (7, 1/2) convolutional code as an inner code and CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The received signal-to-noise for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability length is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.

  1. Error coding simulations in C

    NASA Astrophysics Data System (ADS)

    Noble, Viveca K.

    1994-10-01

    When data is transmitted through a noisy channel, errors are produced within the data rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2(exp 8)) with interleave depth of 5 as the outermost code, (7, 1/2) convolutional code as an inner code and CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The received signal-to-noise for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability length is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.
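
    The full concatenated CCSDS scheme (Reed-Solomon, convolutional, and CRC codes) is beyond a short sketch, but the basic effect the simulator quantifies, namely forward error correction trading redundancy for a lower bit error rate, can be illustrated with a much simpler code. The sketch below simulates a binary symmetric channel and a rate-1/3 repetition code chosen only for brevity; it is not part of the CCSDS recommendation.

```python
import numpy as np

rng = np.random.default_rng(42)

def bsc(bits, p):
    """Binary symmetric channel: flip each bit independently with probability p."""
    flips = (rng.random(bits.shape) < p).astype(np.uint8)
    return bits ^ flips

def repetition_encode(bits, n=3):
    return np.repeat(bits, n)

def repetition_decode(coded, n=3):
    # majority vote over each group of n repeated bits
    return (coded.reshape(-1, n).sum(axis=1) > n // 2).astype(np.uint8)

data = rng.integers(0, 2, size=100_000, dtype=np.uint8)
p = 0.05   # channel crossover probability

uncoded_ber = np.mean(bsc(data, p) != data)
decoded = repetition_decode(bsc(repetition_encode(data), p))
coded_ber = np.mean(decoded != data)
print(f"uncoded BER {uncoded_ber:.4f}, rate-1/3 repetition BER {coded_ber:.4f}")
```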

  2. A review article of the reduce errors in medical laboratories.

    PubMed

    Mohammedsaleh, Zuhair M; Mohammedsaleh, Fayez

    2014-07-29

    The current article examines modern practices for reducing errors in medical laboratories. The paper sought to examine the methods that different countries are applying to reduce errors in medical laboratories. In addition, the paper examines the relationship between inadequate training of laboratory personnel and error causation in medical laboratories. A total of 17 research articles were reviewed. The paper compares pathology laboratory practices in the US, Canada, the UK and Australia regarding laboratory staff skills and error reduction. The paper finds that, although some developed countries have employed advanced technology to reduce errors, there is still a great need for sophisticated medical equipment to reduce errors. In addition, the levels of training for medical technicians are still low; they are not equipped well enough to reduce errors to the required levels. The article recommends the application of advanced technology in the reduction of errors, and the training of technicians on best practices to reduce errors.

  3. Analyzing the errors of DFT approximations for compressed water systems

    NASA Astrophysics Data System (ADS)

    Alfè, D.; Bartók, A. P.; Csányi, G.; Gillan, M. J.

    2014-07-01

    We report an extensive study of the errors of density functional theory (DFT) approximations for compressed water systems. The approximations studied are based on the widely used PBE and BLYP exchange-correlation functionals, and we characterize their errors before and after correction for 1- and 2-body errors, the corrections being performed using the methods of Gaussian approximation potentials. The errors of the uncorrected and corrected approximations are investigated for two related types of water system: first, the compressed liquid at temperature 420 K and density 1.245 g/cm3 where the experimental pressure is 15 kilobars; second, thermal samples of compressed water clusters from the trimer to the 27-mer. For the liquid, we report four first-principles molecular dynamics simulations, two generated with the uncorrected PBE and BLYP approximations and a further two with their 1- and 2-body corrected counterparts. The errors of the simulations are characterized by comparing with experimental data for the pressure, with neutron-diffraction data for the three radial distribution functions, and with quantum Monte Carlo (QMC) benchmarks for the energies of sets of configurations of the liquid in periodic boundary conditions. The DFT errors of the configuration samples of compressed water clusters are computed using QMC benchmarks. We find that the 2-body and beyond-2-body errors in the liquid are closely related to similar errors exhibited by the clusters. For both the liquid and the clusters, beyond-2-body errors of DFT make a substantial contribution to the overall errors, so that correction for 1- and 2-body errors does not suffice to give a satisfactory description. For BLYP, a recent representation of 3-body energies due to Medders, Babin, and Paesani [J. Chem. Theory Comput. 9, 1103 (2013)] gives a reasonably good way of correcting for beyond-2-body errors, after which the remaining errors are typically 0.5 mEh ≃ 15 meV/monomer for the liquid and the

  4. Analyzing the errors of DFT approximations for compressed water systems.

    PubMed

    Alfè, D; Bartók, A P; Csányi, G; Gillan, M J

    2014-07-01

    We report an extensive study of the errors of density functional theory (DFT) approximations for compressed water systems. The approximations studied are based on the widely used PBE and BLYP exchange-correlation functionals, and we characterize their errors before and after correction for 1- and 2-body errors, the corrections being performed using the methods of Gaussian approximation potentials. The errors of the uncorrected and corrected approximations are investigated for two related types of water system: first, the compressed liquid at temperature 420 K and density 1.245 g/cm(3) where the experimental pressure is 15 kilobars; second, thermal samples of compressed water clusters from the trimer to the 27-mer. For the liquid, we report four first-principles molecular dynamics simulations, two generated with the uncorrected PBE and BLYP approximations and a further two with their 1- and 2-body corrected counterparts. The errors of the simulations are characterized by comparing with experimental data for the pressure, with neutron-diffraction data for the three radial distribution functions, and with quantum Monte Carlo (QMC) benchmarks for the energies of sets of configurations of the liquid in periodic boundary conditions. The DFT errors of the configuration samples of compressed water clusters are computed using QMC benchmarks. We find that the 2-body and beyond-2-body errors in the liquid are closely related to similar errors exhibited by the clusters. For both the liquid and the clusters, beyond-2-body errors of DFT make a substantial contribution to the overall errors, so that correction for 1- and 2-body errors does not suffice to give a satisfactory description. For BLYP, a recent representation of 3-body energies due to Medders, Babin, and Paesani [J. Chem. Theory Comput. 9, 1103 (2013)] gives a reasonably good way of correcting for beyond-2-body errors, after which the remaining errors are typically 0.5 mE(h) ≃ 15 meV/monomer for the liquid and the

  5. Error suppression and correction for quantum annealing

    NASA Astrophysics Data System (ADS)

    Lidar, Daniel

    While adiabatic quantum computing and quantum annealing enjoy a certain degree of inherent robustness against excitations and control errors, there is no escaping the need for error correction or suppression. In this talk I will give an overview of our work on the development of such error correction and suppression methods. We have experimentally tested one such method combining encoding, energy penalties and decoding, on a D-Wave Two processor, with encouraging results. Mean field theory shows that this can be explained in terms of a softening of the closing of the gap due to the energy penalty, resulting in protection against excitations that occur near the quantum critical point. Decoding recovers population from excited states and enhances the success probability of quantum annealing. Moreover, we have demonstrated that using repetition codes with increasing code distance can lower the effective temperature of the annealer. References: K.L. Pudenz, T. Albash, D.A. Lidar, ``Error corrected quantum annealing with hundreds of qubits'', Nature Commun. 5, 3243 (2014). K.L. Pudenz, T. Albash, D.A. Lidar, ``Quantum annealing correction for random Ising problems'', Phys. Rev. A. 91, 042302 (2015). S. Matsuura, H. Nishimori, T. Albash, D.A. Lidar, ``Mean Field Analysis of Quantum Annealing Correction''. arXiv:1510.07709. W. Vinci et al., in preparation.

  6. Operational Interventions to Maintenance Error

    NASA Technical Reports Server (NTRS)

    Kanki, Barbara G.; Walter, Diane; Dulchinos, VIcki

    1997-01-01

    A significant proportion of aviation accidents and incidents are known to be tied to human error. However, research of flight operational errors has shown that so-called pilot error often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the 'team' concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: 1) to develop human factors interventions which are directly supported by reliable human error data, and 2) to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.

  7. Human Error: A Concept Analysis

    NASA Technical Reports Server (NTRS)

    Hansen, Frederick D.

    2007-01-01

    Human error is the subject of research in almost every industry and profession of our times. This term is part of our daily language and intuitively understood by most people; however, it would be premature to assume that everyone's understanding of human error is the same. For example, human error is used to describe the outcome or consequence of human action, the causal factor of an accident, deliberate violations, and the actual action taken by a human being. As a result, researchers rarely agree on either a specific definition of human error or how to prevent it. The purpose of this article is to explore the specific concept of human error using Concept Analysis as described by Walker and Avant (1995). The concept of human error is examined as currently used in the literature of a variety of industries and professions. Defining attributes and examples of model, borderline, and contrary cases are described. The antecedents and consequences of human error are also discussed, and a definition of human error is offered.

  8. Explaining Errors in Children's Questions

    ERIC Educational Resources Information Center

    Rowland, Caroline F.

    2007-01-01

    The ability to explain the occurrence of errors in children's speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that,…

  9. Dual Processing and Diagnostic Errors

    ERIC Educational Resources Information Center

    Norman, Geoff

    2009-01-01

    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical,…

  10. Quantifying error distributions in crowding.

    PubMed

    Hanus, Deborah; Vul, Edward

    2013-03-22

    When multiple objects are in close proximity, observers have difficulty identifying them individually. Two classes of theories aim to account for this crowding phenomenon: spatial pooling and spatial substitution. Variations of these accounts predict different patterns of errors in crowded displays. Here we aim to characterize the kinds of errors that people make during crowding by comparing a number of error models across three experiments in which we manipulate flanker spacing, display eccentricity, and precueing duration. We find that both spatial intrusions and individual letter confusions play a considerable role in errors. Moreover, we find no evidence that a naïve pooling model that predicts errors based on a nonadditive combination of target and flankers explains errors better than an independent intrusion model (indeed, in our data, an independent intrusion model is slightly, but significantly, better). Finally, we find that manipulating trial difficulty in any way (spacing, eccentricity, or precueing) produces homogeneous changes in error distributions. Together, these results provide quantitative baselines for predictive models of crowding errors, suggest that pooling and spatial substitution models are difficult to tease apart, and imply that manipulations of crowding all influence a common mechanism that impacts subject performance.

  11. Children's Scale Errors with Tools

    ERIC Educational Resources Information Center

    Casler, Krista; Eshleman, Angelica; Greene, Kimberly; Terziyan, Treysi

    2011-01-01

    Children sometimes make "scale errors," attempting to interact with tiny object replicas as though they were full size. Here, we demonstrate that instrumental tools provide special insight into the origins of scale errors and, moreover, into the broader nature of children's purpose-guided reasoning and behavior with objects. In Study 1, 1.5- to…

  12. Enhanced notification of infusion pump programming errors.

    PubMed

    Evans, R Scott; Carlson, Rick; Johnson, Kyle V; Palmer, Brent K; Lloyd, James F

    2010-01-01

    Hospitalized patients receive countless doses of medications through manually programmed infusion pumps. Many medication errors are the result of programming incorrect pump settings. When used appropriately, smart pumps have the potential to detect some programming errors. However, based on the current use of smart pumps, there are conflicting reports on their ability to prevent patient harm without additional capabilities and interfaces to electronic medical records (EMR). We developed a smart system that is connected to the EMR including medication charting that can detect and alert on potential pump programming errors. Acceptable programming limits of dose rate increases in addition to initial drug doses for 23 high-risk medications are monitored. During 22.5 months in a 24 bed ICU, 970 alerts (4% of 25,040 doses, 1.4 alerts per day) were generated for pump settings programmed outside acceptable limits of which 137 (14%) were found to have prevented potential harm. Monitoring pump programming at the system level rather than the pump provides access to additional patient data in the EMR including previous dosage levels, other concurrent medications and caloric intake, age, gender, vitals and laboratory results.
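
    The kind of system-level check described above can be illustrated with a small sketch: compare the newly programmed rate against drug-specific limits and flag large step increases relative to the previously charted rate. The drug, limits, and thresholds below are hypothetical examples, not the values used by the cited system.

```python
# Minimal sketch of a pump-programming check of the kind described above.
# The drug limits and step-increase threshold are hypothetical examples,
# not the hospital's actual alerting rules.
LIMITS = {"heparin": (2.0, 40.0)}   # allowed dose rate range, illustrative units

def check_program(drug, new_rate, previous_rate, max_increase_fraction=0.5):
    """Return a list of alert messages for a programmed infusion rate."""
    alerts = []
    low, high = LIMITS[drug]
    if not (low <= new_rate <= high):
        alerts.append(f"{drug}: rate {new_rate} outside limits {low}-{high}")
    if previous_rate and (new_rate - previous_rate) / previous_rate > max_increase_fraction:
        alerts.append(f"{drug}: increase from {previous_rate} to {new_rate} exceeds "
                      f"{max_increase_fraction:.0%} step limit")
    return alerts

print(check_program("heparin", new_rate=55.0, previous_rate=18.0))
```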

  13. Atmospheric Pressure Error of GRACE in Antarctic Ice Mass Change

    NASA Astrophysics Data System (ADS)

    Kim, B.; Eom, J.; Seo, K. W.

    2014-12-01

    As GRACE has observed time-varying gravity for longer than a decade, long-term mass changes have emerged. In particular, linear trends and accelerated patterns in Antarctica have been reported and have drawn attention for projections of sea level rise. The cause of the accelerated ice mass loss in Antarctica is not known, since its amplitude is not significantly larger than the ice mass changes associated with natural climate variations. In this study, we consider another uncertainty in the Antarctic ice mass loss acceleration, due to unmodeled atmospheric pressure fields. We first compare the GRACE AOD product with in-situ atmospheric pressure data from the SCAR READER project. GRACE AOD (ECMWF) shows a spurious jump near the Transantarctic Mountains, which is due to a regular model update of ECMWF. In addition, GRACE AOD shows smaller variations than in-situ observations in coastal areas. This is possibly due to the lower resolution of GRACE AOD, such that the relatively stable ocean bottom pressure associated with the inverted barometer effect suppresses the variations of atmospheric pressure near the coast. On the other hand, GRACE AOD closely follows in-situ observations far from the oceans. This is probably because the GRACE AOD model (ECMWF) is assimilated with in-situ observations. However, the in-situ observational sites in the interior of Antarctica are sparse, and thus the reliability of GRACE AOD over most of Antarctica remains uncertain. To examine this, we cross-validate three different reanalyses: ERA Interim, NCEP DOE and MERRA. Residual atmospheric pressure fields, taken as a measure of atmospheric pressure errors (NCEP DOE minus ERA Interim, or MERRA minus ERA Interim), show long-term changes, and the estimated uncertainty in the acceleration of Antarctic ice mass change is about 9 Gton/yr^2 from 2003 to 2012. This result implies that atmospheric surface pressure error likely hinders accurate estimation of the ice mass loss acceleration in Antarctica.

  14. Challenge and error: critical events and attention-related errors.

    PubMed

    Cheyne, James Allan; Carriere, Jonathan S A; Solman, Grayden J F; Smilek, Daniel

    2011-12-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error↔attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention lapses; resource-depleting cognitions interfering with attention to subsequent task challenges. Attention lapses lead to errors, and errors themselves are a potent consequence often leading to further attention lapses potentially initiating a spiral into more serious errors. We investigated this challenge-induced error↔attention-lapse model using the Sustained Attention to Response Task (SART), a GO-NOGO task requiring continuous attention and response to a number series and withholding of responses to a rare NOGO digit. We found response speed and increased commission errors following task challenges to be a function of temporal distance from, and prior performance on, previous NOGO trials. We conclude by comparing and contrasting the present theory and findings to those based on choice paradigms and argue that the present findings have implications for the generality of conflict monitoring and control models.

  15. Human error in recreational boating.

    PubMed

    McKnight, A James; Becker, Wayne W; Pettit, Anthony J; McKnight, A Scott

    2007-03-01

    Each year over 600 people die and more than 4000 are reported injured in recreational boating accidents. As with most other accidents, human error is the major contributor. U.S. Coast Guard reports of 3358 accidents were analyzed to identify errors in each of the boat types by which statistics are compiled: auxiliary (motor) sailboats, cabin motorboats, canoes and kayaks, house boats, personal watercraft, open motorboats, pontoon boats, row boats, sail-only boats. The individual errors were grouped into categories on the basis of similarities in the behavior involved. Those presented here are the categories accounting for at least 5% of all errors when summed across boat types. The most revealing and significant finding is the extent to which the errors vary across types. Since boating is carried out with one or two types of boats for long periods of time, effective accident prevention measures, including safety instruction, need to be geared to individual boat types.

  16. Angle interferometer cross axis errors

    SciTech Connect

    Bryan, J.B.; Carter, D.L.; Thompson, S.L.

    1994-01-01

    Angle interferometers are commonly used to measure surface plate flatness. An error can exist when the centerline of the double corner cube mirror assembly is not square to the surface plate and the guide bar for the mirror sled is curved. Typical errors can be one to two microns per meter. A similar error can exist in the calibration of rotary tables when the centerline of the double corner cube mirror assembly is not square to the axes of rotation of the angle calibrator and the calibrator axis is not parallel to the rotary table axis. Commercial double corner cube assemblies typically have non-parallelism errors of ten milli-radians between their centerlines and their sides and similar values for non-squareness between their centerlines and end surfaces. The authors have developed a simple method for measuring these errors and correcting them by remachining the reference surfaces.

  17. Onorbit IMU alignment error budget

    NASA Technical Reports Server (NTRS)

    Corson, R. W.

    1980-01-01

    The Star Tracker, Crew Optical Alignment Sight (COAS), and Inertial Measurement Unit (IMU), which form a complex navigation system with a multitude of error sources, were combined. A complete list of the system errors is presented. The errors were combined in a rational way to yield an estimate of the IMU alignment accuracy for STS-1. The expected standard deviation in the IMU alignment error for STS-1 type alignments was determined to be 72 arc seconds per axis for star tracker alignments and 188 arc seconds per axis for COAS alignments. These estimates are based on current knowledge of the star tracker, COAS, IMU, and navigation base error specifications, and were partially verified by preliminary Monte Carlo analysis.

  18. A theory of human error

    NASA Technical Reports Server (NTRS)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1981-01-01

    Human errors tend to be treated in terms of clinical and anecdotal descriptions, from which remedial measures are difficult to derive. Correction of the sources of human error requires an attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A comprehensive analytical theory of the cause-effect relationships governing propagation of human error is indispensable to a reconstruction of the underlying and contributing causes. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation, maritime, automotive, and process control operations is highlighted. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  19. Error diffusion with a more symmetric error distribution

    NASA Astrophysics Data System (ADS)

    Fan, Zhigang

    1994-05-01

    In this paper a new error diffusion algorithm is presented that effectively eliminates the `worm' artifacts appearing in the standard methods. The new algorithm processes each scanline of the image in two passes, a forward pass followed by a backward one. This enables the error made at one pixel to be propagated to all the `future' pixels. A much more symmetric error distribution is achieved than that of the standard methods. The frequency response of the noise shaping filter associated with the new algorithm is mirror-symmetric in magnitude.
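
    The two-pass, more symmetric diffusion cannot be reconstructed from the abstract alone, but the baseline it modifies is standard. The sketch below is ordinary single-pass Floyd-Steinberg error diffusion, in which the quantization error at each pixel is distributed to the not-yet-processed neighbors; the new algorithm adds a backward pass per scanline so that error also propagates to "future" pixels in the reverse direction.

```python
import numpy as np

def floyd_steinberg(image):
    """Standard single-pass Floyd-Steinberg error diffusion.

    `image` is a 2-D float array with values in [0, 1]; the quantization
    error at each pixel is pushed onto the right and lower neighbors.
    """
    img = image.astype(float).copy()
    out = np.zeros_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out

gradient = np.tile(np.linspace(0.0, 1.0, 64), (16, 1))
halftone = floyd_steinberg(gradient)
print(gradient.mean(), halftone.mean())   # the mean gray level is approximately preserved
```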

  20. Errors as allies: error management training in health professions education.

    PubMed

    King, Aimee; Holder, Michael G; Ahmed, Rami A

    2013-06-01

    This paper adopts methods from the organisational team training literature to outline how health professions education can improve patient safety. We argue that health educators can improve training quality by intentionally encouraging errors during simulation-based team training. Preventable medical errors are inevitable, but encouraging errors in low-risk settings like simulation can allow teams to have better emotional control and foresight to manage the situation if it occurs again with live patients. Our paper outlines an innovative approach for delivering team training.

  1. Concurrent remote entanglement with quantum error correction against photon losses

    NASA Astrophysics Data System (ADS)

    Roy, Ananda; Stone, A. Douglas; Jiang, Liang

    2016-09-01

    Remote entanglement of distant, noninteracting quantum entities is a key primitive for quantum information processing. We present a protocol to remotely entangle two stationary qubits by first entangling them with propagating ancilla qubits and then performing a joint two-qubit measurement on the ancillas. Subsequently, single-qubit measurements are performed on each of the ancillas. We describe two continuous variable implementations of the protocol using propagating microwave modes. The first implementation uses propagating Schrödinger cat states as the flying ancilla qubits, a joint-photon-number-modulo-2 measurement of the propagating modes for the two-qubit measurement, and homodyne detections as the final single-qubit measurements. The presence of inefficiencies in realistic quantum systems limits the success rate of generating high fidelity Bell states. This motivates us to propose a second continuous variable implementation, where we use quantum error correction to suppress the decoherence due to photon loss to first order. To that end, we encode the ancilla qubits in superpositions of Schrödinger cat states of a given photon-number parity, use a joint-photon-number-modulo-4 measurement as the two-qubit measurement, and homodyne detections as the final single-qubit measurements. We demonstrate the resilience of our quantum-error-correcting remote entanglement scheme to imperfections. Further, we describe a modification of our error-correcting scheme by incorporating additional individual photon-number-modulo-2 measurements of the ancilla modes to improve the success rate of generating high-fidelity Bell states. Our protocols can be straightforwardly implemented in state-of-the-art superconducting circuit-QED systems.

  2. Accurate identification and compensation of geometric errors of 5-axis CNC machine tools using double ball bar

    NASA Astrophysics Data System (ADS)

    Lasemi, Ali; Xue, Deyi; Gu, Peihua

    2016-05-01

    Five-axis CNC machine tools are widely used in manufacturing of parts with free-form surfaces. Geometric errors of machine tools have significant effects on the quality of manufactured parts. This research focuses on development of a new method to accurately identify geometric errors of 5-axis CNC machines, especially the errors due to rotary axes, using the magnetic double ball bar. A theoretical model for identification of geometric errors is provided. In this model, both position-independent errors and position-dependent errors are considered as the error sources. This model is simplified by identification and removal of the correlated and insignificant error sources of the machine. Insignificant error sources are identified using the sensitivity analysis technique. Simulation results reveal that the simplified error identification model can result in more accurate estimations of the error parameters. Experiments on a 5-axis CNC machine tool also demonstrate significant reduction in the volumetric error after error compensation.

  3. Evaluating concentration estimation errors in ELISA microarray experiments

    SciTech Connect

    Daly, Don S.; White, Amanda M.; Varnum, Susan M.; Anderson, Kevin K.; Zangar, Richard C.

    2005-01-26

    Enzyme-linked immunosorbent assay (ELISA) is a standard immunoassay to predict a protein concentration in a sample. Deploying ELISA in a microarray format permits simultaneous prediction of the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Evaluating prediction error is critical to interpreting biological significance and improving the ELISA microarray process. Evaluating prediction error must be automated to realize a reliable high-throughput ELISA microarray system. Methods: In this paper, we present a statistical method based on propagation of error to evaluate prediction errors in the ELISA microarray process. Although propagation of error is central to this method, it is effective only when comparable data are available. Therefore, we briefly discuss the roles of experimental design, data screening, normalization and statistical diagnostics when evaluating ELISA microarray prediction errors. We use an ELISA microarray investigation of breast cancer biomarkers to illustrate the evaluation of prediction errors. The illustration begins with a description of the design and resulting data, followed by a brief discussion of data screening and normalization. In our illustration, we fit a standard curve to the screened and normalized data, review the modeling diagnostics, and apply propagation of error.
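
    As an illustration of propagation of error in this setting, the sketch below propagates the uncertainty of a fitted standard curve and of a measured response into the predicted concentration, using a first-order (delta-method) expansion. A straight-line calibration is used purely for brevity; ELISA standard curves are usually four- or five-parameter logistic fits, and all numbers are invented.

```python
import numpy as np

def predict_concentration(y0, a, b, cov_ab, var_y0):
    """Invert a linear standard curve y = a + b*c and propagate errors.

    First-order (delta-method) propagation through c0 = (y0 - a) / b, using
    the covariance of the fitted intercept/slope and the variance of the
    measured response. A simplified stand-in for the paper's standard curve.
    """
    c0 = (y0 - a) / b
    # gradient of c0 with respect to (a, b, y0)
    g = np.array([-1.0 / b, -(y0 - a) / b**2, 1.0 / b])
    cov = np.zeros((3, 3))
    cov[:2, :2] = cov_ab        # covariance of the fitted intercept and slope
    cov[2, 2] = var_y0          # variance of the measured response
    var_c0 = g @ cov @ g
    return c0, np.sqrt(var_c0)

cov_ab = np.array([[0.01, -0.002],
                   [-0.002, 0.0008]])
conc, conc_se = predict_concentration(y0=1.8, a=0.2, b=0.05, cov_ab=cov_ab, var_y0=0.0025)
print(conc, conc_se)
```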

  4. Error analysis of large aperture static interference imaging spectrometer

    NASA Astrophysics Data System (ADS)

    Li, Fan; Zhang, Guo

    2015-12-01

    The Large Aperture Static Interference Imaging Spectrometer (LASIS) is a new type of spectrometer with a light structure, high spectral linearity, high luminous flux and wide spectral range, which overcomes the contradiction between high flux and high stability and therefore has important value in scientific studies and applications. However, because its imaging style differs from that of traditional imaging spectrometers, LASIS exhibits different error behavior in the imaging process, and correspondingly its data processing is complicated. In order to improve the accuracy of spectrum detection and to serve quantitative analysis and monitoring of topographical surface features, the error behavior of LASIS imaging needs to be understood. In this paper, the LASIS errors are classified as interferogram error, radiometric correction error and spectral inversion error, and each type of error is analyzed and studied. Finally, a case study of Yaogan-14 is presented, in which the interferogram error of LASIS with combined time and space modulation is experimentally analyzed, along with the errors from the radiometric correction and spectral inversion processes.

  5. Advancing the research agenda for diagnostic error reduction.

    PubMed

    Zwaan, Laura; Schiff, Gordon D; Singh, Hardeep

    2013-10-01

    Diagnostic errors remain an underemphasised and understudied area of patient safety research. We briefly summarise the methods that have been used to conduct research on epidemiology, contributing factors and interventions related to diagnostic error and outline directions for future research. Research methods that have studied epidemiology of diagnostic error provide some estimate on diagnostic error rates. However, there appears to be a large variability in the reported rates due to the heterogeneity of definitions and study methods used. Thus, future methods should focus on obtaining more precise estimates in different settings of care. This would lay the foundation for measuring error rates over time to evaluate improvements. Research methods have studied contributing factors for diagnostic error in both naturalistic and experimental settings. Both approaches have revealed important and complementary information. Newer conceptual models from outside healthcare are needed to advance the depth and rigour of analysis of systems and cognitive insights of causes of error. While the literature has suggested many potentially fruitful interventions for reducing diagnostic errors, most have not been systematically evaluated and/or widely implemented in practice. Research is needed to study promising intervention areas such as enhanced patient involvement in diagnosis, improving diagnosis through the use of electronic tools and identification and reduction of specific diagnostic process 'pitfalls' (eg, failure to conduct appropriate diagnostic evaluation of a breast lump after a 'normal' mammogram). The last decade of research on diagnostic error has made promising steps and laid a foundation for more rigorous methods to advance the field.

  6. Influence of satellite geometry, range, clock, and altimeter errors on two-satellite GPS navigation

    NASA Astrophysics Data System (ADS)

    Bridges, Philip D.

    Flight tests were conducted at Yuma Proving Grounds, Yuma, AZ, to determine the performance of a navigation system capable of using only two GPS satellites. The effect of satellite geometry, range error, and altimeter error on the horizontal position solution was analyzed for time- and altitude-aided GPS navigation (two satellites + altimeter + clock). The east and north position errors were expressed as a function of satellite range error, altimeter error, and the east and north Dilution of Precision. The equations for the Dilution of Precision were derived as a function of satellite azimuth and elevation angles for the two-satellite case. The expressions for the position error were then used to analyze the flight test data. The results showed the correlation between satellite geometry and position error, the increase in range error due to clock drift, and the impact of range and altimeter error on the east and north position error.
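
    The dependence of the horizontal errors on geometry can be illustrated with a generic Dilution of Precision computation. The sketch below builds a textbook geometry matrix from satellite azimuth and elevation angles, models the altimeter and clock aiding as direct pseudo-measurements of the up and clock-bias states, and reads the east and north DOP off the covariance; this is an assumed formulation for illustration, not the exact derivation in the report.

```python
import numpy as np

def east_north_dop(azimuths_deg, elevations_deg):
    """East/north Dilution of Precision for a time- and altitude-aided fix.

    Each satellite contributes a unit line-of-sight row over the unknowns
    (east, north, up, clock bias); the altimeter and clock aiding are modeled
    as direct pseudo-measurements of the up and clock states. Generic
    textbook construction, not the report's exact formulation.
    """
    az = np.radians(azimuths_deg)
    el = np.radians(elevations_deg)
    sat_rows = np.column_stack([-np.cos(el) * np.sin(az),
                                -np.cos(el) * np.cos(az),
                                -np.sin(el),
                                np.ones_like(az)])
    altimeter_row = np.array([[0.0, 0.0, 1.0, 0.0]])
    clock_row = np.array([[0.0, 0.0, 0.0, 1.0]])
    G = np.vstack([sat_rows, altimeter_row, clock_row])
    Q = np.linalg.inv(G.T @ G)
    return np.sqrt(Q[0, 0]), np.sqrt(Q[1, 1])   # (EDOP, NDOP)

# Two satellites at well-separated azimuths and moderate elevations
print(east_north_dop([45.0, 160.0], [30.0, 55.0]))
```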

  7. Congenital errors of folate metabolism.

    PubMed

    Zittoun, J

    1995-09-01

    Congenital errors of folate metabolism can be related either to defective transport of folate through various cells or to defective intracellular utilization of folate due to some enzyme deficiencies. Defective transport of folate across the intestine and the blood-brain barrier was reported in the condition 'Congenital Malabsorption of Folate'. This disease is characterized by a severe megaloblastic anaemia of early appearance associated with mental retardation. Anaemia is folate-responsive, but neurological symptoms are only poorly improved because of the inability to maintain adequate levels of folate in the CSF. A familial defect of cellular uptake was described in a family with a high frequency of aplastic anaemia or leukaemia. An isolated defect in folate transport into CSF was identified in a patient suffering from a cerebellar syndrome and pyramidal tract dysfunction. Among enzyme deficiencies, some are well documented, others still putative. Methylenetetrahydrofolate reductase deficiency is the most common. The main clinical findings are neurological signs (mental retardation, seizures, rarely schizophrenic syndromes) or vascular disease, without any haematological abnormality. Low levels of folate in serum, red blood cells and CSF associated with homocystinuria are constant. Methionine synthase deficiency is characterized by a megaloblastic anaemia occurring early in life that is more or less folate-responsive and associated with mental retardation. Glutamate formiminotransferase-cyclodeaminase deficiency is responsible for massive excretion of formiminoglutamic acid but megaloblastic anaemia is not constant. The clinical findings are a more or less severe mental or physical retardation. Dihydrofolate reductase deficiency was reported in three children presenting with a megaloblastic anaemia a few days or weeks after birth, which responded to folinic acid. The possible relationship between congenital disorders such as neural tube defects or

  8. Error compensation for thermally induced errors on a machine tool

    SciTech Connect

    Krulewich, D.A.

    1996-11-08

    Heat flow from internal and external sources and the environment creates machine deformations, resulting in positioning errors between the tool and workpiece. There is no industrially accepted method for thermal error compensation. A simple model has been selected that linearly relates discrete temperature measurements to the deflection. The main difficulty is determining how many temperature sensors are required and where to place them. This research develops a method to determine the number and location of temperature measurements.
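
    A linear temperature-to-deflection model of the kind selected above can be fit by ordinary least squares once training data from a thermal cycle are available. The sketch below is a minimal illustration with made-up data; the sensor-selection question the report actually addresses (how many sensors and where to place them) is not solved here.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical training data: each row is one observation during a thermal cycle;
        # columns of T are discrete temperature readings, d is the measured tool-workpiece deflection.
        T = rng.normal(20.0, 5.0, size=(200, 6))                   # 6 candidate temperature sensors
        true_coeffs = np.array([2.0, -1.0, 0.5, 0.0, 0.0, 0.0])    # only some sensors matter
        d = T @ true_coeffs + 3.0 + rng.normal(0.0, 0.2, 200)      # deflection, arbitrary units

        # Fit deflection = c0 + sum_i c_i * T_i by least squares.
        A = np.column_stack([np.ones(len(T)), T])
        coeffs, *_ = np.linalg.lstsq(A, d, rcond=None)

        # Compensation: subtract the predicted thermal deflection from the commanded position.
        residual = d - A @ coeffs
        print("residual RMS after compensation:", np.sqrt(np.mean(residual**2)))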

  9. Human error mitigation initiative (HEMI) : summary report.

    SciTech Connect

    Stevens, Susan M.; Ramos, M. Victoria; Wenner, Caren A.; Brannon, Nathan Gregory

    2004-11-01

    Despite continuing efforts to apply existing hazard analysis methods and comply with requirements, human errors persist across the nuclear weapons complex. Due to a number of factors, current retroactive and proactive methods to understand and minimize human error are highly subjective, inconsistent across numerous dimensions, and difficult to characterize as thorough. The proposed alternative method begins by leveraging historical data to understand what the systemic issues are and where resources need to be brought to bear proactively to minimize the risk of future occurrences. An illustrative analysis was performed using existing incident databases specific to Pantex weapons operations, indicating systemic issues associated with operating procedures, which undergo notably less development rigor than other task elements such as tooling and process flow. Recommended future steps to improve the objectivity, consistency, and thoroughness of hazard analysis and mitigation were delineated.

  10. Color extended visual cryptography using error diffusion.

    PubMed

    Kang, InKoo; Arce, Gonzalo R; Lee, Heung-Kyu

    2011-01-01

    Color visual cryptography (VC) encrypts a color secret message into n color halftone image shares. Previous methods in the literature show good results for black-and-white or gray-scale VC schemes; however, they cannot be applied directly to color shares because of their different color structures. Some methods for color visual cryptography are not satisfactory in terms of producing either meaningless shares or meaningful shares with low visual quality, leading to suspicion of encryption. This paper introduces the concept of visual information pixel (VIP) synchronization and error diffusion to attain a color visual cryptography encryption method that produces meaningful color shares with high visual quality. VIP synchronization retains the positions of pixels carrying visual information of original images throughout the color channels, and error diffusion generates shares pleasant to human eyes. Comparisons with previous approaches show the superior performance of the new method.
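
    For background, the halftoning step mentioned above is commonly done with Floyd-Steinberg error diffusion, which pushes each pixel's quantization error onto its not-yet-processed neighbours. The sketch below halftones a single grayscale channel; it does not implement the paper's VIP synchronization or the share-encryption logic, and the kernel weights are the standard Floyd-Steinberg values rather than anything specified in the abstract.

        import numpy as np

        def floyd_steinberg(channel):
            """Halftone one channel (values in [0, 1]) by diffusing quantization error."""
            img = channel.astype(float).copy()
            h, w = img.shape
            out = np.zeros_like(img)
            for y in range(h):
                for x in range(w):
                    old = img[y, x]
                    new = 1.0 if old >= 0.5 else 0.0
                    out[y, x] = new
                    err = old - new
                    # Distribute the error to unvisited neighbours (Floyd-Steinberg weights).
                    if x + 1 < w:               img[y, x + 1]     += err * 7 / 16
                    if y + 1 < h and x > 0:     img[y + 1, x - 1] += err * 3 / 16
                    if y + 1 < h:               img[y + 1, x]     += err * 5 / 16
                    if y + 1 < h and x + 1 < w: img[y + 1, x + 1] += err * 1 / 16
            return out

        gradient = np.tile(np.linspace(0.0, 1.0, 64), (16, 1))
        halftone = floyd_steinberg(gradient)
        print("mean level preserved:", gradient.mean(), "->", halftone.mean())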

  11. Inborn errors of metabolism underlying primary immunodeficiencies.

    PubMed

    Parvaneh, Nima; Quartier, Pierre; Rostami, Parastoo; Casanova, Jean-Laurent; de Lonlay, Pascale

    2014-10-01

    A number of inborn errors of metabolism (IEM) have been shown to result in predominantly immunologic phenotypes, manifesting in part as inborn errors of immunity. These phenotypes are mostly caused by defects that affect the (i) quality or quantity of essential structural building blocks (e.g., nucleic acids, and amino acids), (ii) cellular energy economy (e.g., glucose metabolism), (iii) post-translational protein modification (e.g., glycosylation) or (iv) mitochondrial function. Presenting as multisystemic defects, they also affect innate or adaptive immunity, or both, and display various types of immune dysregulation. Specific and potentially curative therapies are available for some of these diseases, whereas targeted treatments capable of inducing clinical remission are available for others. We will herein review the pathogenesis, diagnosis, and treatment of primary immunodeficiencies (PIDs) due to underlying metabolic disorders.

  12. Errors, error detection, error correction and hippocampal-region damage: data and theories.

    PubMed

    MacKay, Donald G; Johnson, Laura W

    2013-11-01

    This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future tests. PMID:23999403

  14. BFC: correcting Illumina sequencing errors

    PubMed Central

    2015-01-01

    Summary: BFC is a free, fast and easy-to-use sequencing error corrector designed for Illumina short reads. It uses a non-greedy algorithm but still maintains a speed comparable to implementations based on greedy methods. In evaluations on real data, BFC appears to correct more errors with fewer overcorrections in comparison to existing tools. It particularly does well in suppressing systematic sequencing errors, which helps to improve the base accuracy of de novo assemblies. Availability and implementation: https://github.com/lh3/bfc Contact: hengli@broadinstitute.org Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25953801

  15. Addressing the use of phylogenetics for identification of sequences in error in the SWGDAM mitochondrial DNA database.

    PubMed

    Budowle, Bruce; Polanskey, Deborah; Allard, Marc W; Chakraborty, Ranajit

    2004-11-01

    The SWGDAM mtDNA database is a publicly available reference source that is used for estimating the rarity of an evidence mtDNA profile. Because of the current processes for generating population data, it is unlikely that population databases are error free. The majority of the errors are due to human error and are transcriptional in nature. Phylogenetic analysis of data sets can identify some potential errors, and coupled with a review of the sequence data or alignment sheets can be a very useful tool. Seven sequences with errors have been identified by phylogenetic analysis. In addition, two samples were inadvertently modified when placed in the SWGDAM database. The corrected sequences are provided so that users can modify appropriately the current iteration of the SWGDAM database. From a practical perspective, upper bound estimates of the percentage of matching profiles obtained from a database search containing an incorrect sequence and those of a database containing the corrected sequence are not substantially different. Community wide access and review has enabled identification of errors in the SWGDAM data set and will continue to do so. The result of public accessibility is that the quality of the SWGDAM forensic dataset is always improving. PMID:15568698

  16. Exploring Discretization Error in Simulation-Based Aerodynamic Databases

    NASA Technical Reports Server (NTRS)

    Aftosmis, Michael J.; Nemec, Marian

    2010-01-01

    This work examines the level of discretization error in simulation-based aerodynamic databases and introduces strategies for error control. Simulations are performed using a parallel, multi-level Euler solver on embedded-boundary Cartesian meshes. Discretization errors in user-selected outputs are estimated using the method of adjoint-weighted residuals, and adaptive mesh refinement is used to reduce these errors to specified tolerances. Using this framework, we examine the behavior of discretization error throughout a token database computed for a NACA 0012 airfoil consisting of 120 cases. We compare the cost and accuracy of two approaches for aerodynamic database generation. In the first approach, mesh adaptation is used to compute all cases in the database to a prescribed level of accuracy. The second approach conducts all simulations using the same computational mesh without adaptation. We quantitatively assess the error landscape and computational costs in both databases. This investigation highlights sensitivities of the database under a variety of conditions. The presence of transonic shocks or stiffness in the governing equations near the incompressible limit is shown to dramatically increase discretization error, requiring additional mesh resolution to control. Results show that such pathologies lead to error levels that vary by over a factor of 40 when using a fixed mesh throughout the database. Alternatively, controlling this sensitivity through mesh adaptation leads to mesh sizes which span two orders of magnitude. We propose strategies to minimize simulation cost in sensitive regions and discuss the role of error estimation in database quality.

  17. Reducing the risk of medication errors in women.

    PubMed

    Grissinger, Matthew C; Kelly, Kate

    2005-01-01

    We outline some of the causes of medication errors involving women and recommend ways that healthcare practitioners can prevent some of these errors. Patient safety has become a major concern since the November 1999 release of the Institute of Medicine (IOM) report, "To Err Is Human." Errors involving prescription medications are responsible for up to 7000 American deaths per year, and the financial costs of drug-related morbidity and mortality may be nearly $77 billion a year. The Institute for Safe Medication Practices (ISMP) collects and analyzes voluntary confidential medication error reports and makes recommendations on the prevention of such errors. This paper uses the expertise of ISMP in medication error prevention to make recommendations to prevent medication errors involving women. Healthcare practitioners should focus on areas of the medication use process that would have the greatest impact, including obtaining complete patient information, accurately communicating drug information, and properly educating patients. Although medication errors are not more common in women, there are some unique concerns with medications used for treating women. In addition, sharing of information about medication use and compliance with medication regimens have been identified as concerns. Through the sharing of information and improving the patient education process, healthcare practitioners should play a more active role in medication error reduction activities by working together toward the goal of improving medication safety and encouraging women to become active in their own care.

  18. Sources of error in picture naming under time pressure.

    PubMed

    Lloyd-Jones, Toby J; Nettlemill, Mandy

    2007-06-01

    We used a deadline procedure to investigate how time pressure may influence the processes involved in picture naming. The deadline exaggerated errors found under naming without deadline. There were also category differences in performance between living and nonliving things and, in particular, for animals versus fruit and vegetables. The majority of errors were visually and semantically related to the target (e.g., celery-asparagus), and there was a greater proportion of these errors made to living things. Importantly, there were also more visual-semantic errors to animals than to fruit and vegetables. In addition, there were a smaller number of pure semantic errors (e.g., nut-bolt), which were made predominantly to nonliving things. The different kinds of error were correlated with different variables. Overall, visual-semantic errors were associated with visual complexity and visual similarity, whereas pure semantic errors were associated with imageability and age of acquisition. However, for animals, visual-semantic errors were associated with visual complexity, whereas for fruit and vegetables they were associated with visual similarity. We discuss these findings in terms of theories of category-specific semantic impairment and models of picture naming. PMID:17848037

  19. FORCE: FORtran for Cosmic Errors

    NASA Astrophysics Data System (ADS)

    Colombi, Stéphane; Szapudi, István

    We review the theory of cosmic errors we have recently developed for count-in-cells statistics. The corresponding FORCE package provides a simple and useful way to compute cosmic covariance on factorial moments and cumulants measured in galaxy catalogs.
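
    The count-in-cells factorial moments that FORCE works with are straightforward to compute from a list of cell counts. The sketch below evaluates F_k = <N(N-1)...(N-k+1)> from synthetic Poisson counts; it illustrates the statistic itself, not the cosmic-covariance calculation that the package performs.

        import numpy as np

        def factorial_moments(counts, kmax=4):
            """Return [F_1 .. F_kmax] where F_k = <N (N-1) ... (N-k+1)> over the cells."""
            counts = np.asarray(counts, dtype=float)
            moments = []
            prod = np.ones_like(counts)
            for k in range(1, kmax + 1):
                prod = prod * (counts - (k - 1))   # running product N(N-1)...(N-k+1)
                moments.append(prod.mean())
            return moments

        rng = np.random.default_rng(1)
        cells = rng.poisson(lam=3.0, size=10_000)   # hypothetical counts in 10,000 cells
        print(factorial_moments(cells))             # for a Poisson field, F_k is close to lam**k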

  20. Human errors and measurement uncertainty

    NASA Astrophysics Data System (ADS)

    Kuselman, Ilya; Pennecchi, Francesca

    2015-04-01

    Evaluating the residual risk of human errors in a measurement and testing laboratory, remaining after the error reduction by the laboratory quality system, and quantifying the consequences of this risk for the quality of the measurement/test results are discussed based on expert judgments and Monte Carlo simulations. A procedure for evaluation of the contribution of the residual risk to the measurement uncertainty budget is proposed. Examples are provided using earlier published sets of expert judgments on human errors in pH measurement of groundwater, elemental analysis of geological samples by inductively coupled plasma mass spectrometry, and multi-residue analysis of pesticides in fruits and vegetables. The human error contribution to the measurement uncertainty budget in the examples was not negligible, yet also not dominant. This was assessed as a good risk management result.

  1. Quantile Regression With Measurement Error

    PubMed Central

    Wei, Ying; Carroll, Raymond J.

    2010-01-01

    Regression quantiles can be substantially biased when the covariates are measured with error. In this paper we propose a new method that produces consistent linear quantile estimation in the presence of covariate measurement error. The method corrects the measurement error induced bias by constructing joint estimating equations that simultaneously hold for all the quantile levels. An iterative EM-type estimation algorithm to obtain the solutions to such joint estimation equations is provided. The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a longitudinal study with an unusual measurement error structure. PMID:20305802

  2. Static Detection of Disassembly Errors

    SciTech Connect

    Krishnamoorthy, Nithya; Debray, Saumya; Fligg, Alan K

    2009-10-13

    Static disassembly is a crucial first step in reverse engineering executable files, and there is a considerable body of work in reverse-engineering of binaries, as well as areas such as semantics-based security analysis, that assumes that the input executable has been correctly disassembled. However, disassembly errors, e.g., arising from binary obfuscations, can render this assumption invalid. This work describes a machine-learning-based approach, using decision trees, for statically identifying possible errors in a static disassembly; such potential errors may then be examined more closely, e.g., using dynamic analyses. Experimental results using a variety of input executables indicate that our approach performs well, correctly identifying most disassembly errors with relatively few false positives.

  3. Dual processing and diagnostic errors.

    PubMed

    Norman, Geoff

    2009-09-01

    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical, conscious, and conceptual process called System 2. Exemplar theories of categorization propose that many category decisions in everyday life are made by unconscious matching to a particular example in memory, and these remain available and retrievable individually. I then review studies of clinical reasoning based on these theories, and show that the two processes are equally effective; System 1, despite its reliance on idiosyncratic, individual experience, is no more prone to cognitive bias or diagnostic error than System 2. Further, I review evidence that instructions directed at encouraging the clinician to explicitly use both strategies can lead to a consistent reduction in error rates.

  4. Prospective errors determine motor learning

    PubMed Central

    Takiyama, Ken; Hirashima, Masaya; Nozaki, Daichi

    2015-01-01

    Diverse features of motor learning have been reported by numerous studies, but no single theoretical framework concurrently accounts for these features. Here, we propose a model for motor learning to explain these features in a unified way by extending a motor primitive framework. The model assumes that the recruitment pattern of motor primitives is determined by the predicted movement error of an upcoming movement (prospective error). To validate this idea, we perform a behavioural experiment to examine the model’s novel prediction: after experiencing an environment in which the movement error is more easily predictable, subsequent motor learning should become faster. The experimental results support our prediction, suggesting that the prospective error might be encoded in the motor primitives. Furthermore, we demonstrate that this model has a strong explanatory power to reproduce a wide variety of motor-learning-related phenomena that have been separately explained by different computational models. PMID:25635628

  5. Orbital and Geodetic Error Analysis

    NASA Technical Reports Server (NTRS)

    Felsentreger, T.; Maresca, P.; Estes, R.

    1985-01-01

    Results that previously required several runs determined in more computer-efficient manner. Multiple runs performed only once with GEODYN and stored on tape. ERODYN then performs matrix partitioning and linear algebra required for each individual error-analysis run.

  6. Determining inertial errors from navigation-in-place data

    NASA Astrophysics Data System (ADS)

    Pittman, Don N.; Roberts, Chris E.

    To validate the self-calibration performance of a missile system's inertial navigation system (INS), an inertial error estimator (IEE) and a calibration procedure were developed and tested. The IEE was developed using a least squares fit of navigation-in-place (nav-in-place) data to calculate the accelerometer biases, the gyro biases, and the axis misalignments. It was found that the estimation error due to highly correlated regression functions decreases with navigation time. A tradeoff study was performed to select the optimum nav-in-place time allowed for determining the estimation of the inertial errors. Once the optimum navigation time was determined, an INS calibration procedure incorporating the IEE was developed. The calibration procedure was validated by applying corrections for the inertial errors to the INS, and then navigating in place to show a reduced position error profile. Test results showing the outcome of the INS calibration procedure are included.
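
    The least-squares step described above can be sketched generically: stack the regression functions that map each inertial error to the observed nav-in-place position error and solve for the error coefficients. The regressors used below (a t^2/2 ramp for an accelerometer bias and a gravity-coupled t^3/6 ramp for a gyro bias) are simplified short-duration approximations chosen for illustration, not the error model actually used in the IEE.

        import numpy as np

        rng = np.random.default_rng(2)
        t = np.linspace(0.0, 600.0, 601)                  # 10 minutes of nav-in-place data at 1 Hz

        # Hypothetical truth: accelerometer bias (m/s^2) and gyro bias (rad/s).
        accel_bias, gyro_bias, g = 3e-4, 2e-6, 9.81
        pos_err = (0.5 * accel_bias * t**2
                   + (g * gyro_bias / 6.0) * t**3
                   + rng.normal(0.0, 0.5, t.size))        # position error plus measurement noise

        # Regression functions for each error source; note they are highly correlated,
        # echoing the abstract's point that estimation improves with longer navigation time.
        A = np.column_stack([0.5 * t**2, (g / 6.0) * t**3])
        est, *_ = np.linalg.lstsq(A, pos_err, rcond=None)
        print("estimated accel bias:", est[0], "estimated gyro bias:", est[1])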

  7. Reflection error correction of gas turbine blade temperature

    NASA Astrophysics Data System (ADS)

    Kipngetich, Ketui Daniel; Feng, Chi; Gao, Shan

    2016-03-01

    Accurate measurement of gas turbine blades' temperature is one of the greatest challenges encountered in gas turbine temperature measurements. Within an enclosed gas turbine environment with surfaces of varying temperature and low emissivities, a new challenge is introduced into the use of radiation thermometers due to the problem of reflection error. A method for correcting this error has been proposed and demonstrated in this work through computer simulation and experiment. The method assumes that the emissivities of all surfaces exchanging thermal radiation are known. Simulations were carried out considering targets with low and high emissivities of 0.3 and 0.8, respectively, while experimental measurements were carried out on blades with an emissivity of 0.76. Simulated results showed the possibility of achieving an error of less than 1%, while the experimental result corrected the error to 1.1%. It was thus concluded that the method is appropriate for correcting the reflection error commonly encountered in temperature measurement of gas turbine blades.
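
    In radiance terms, the correction described above amounts to subtracting the reflected contribution before inverting Planck's law, given a known target emissivity and a known (or estimated) surrounding radiance. The single-wavelength sketch below uses invented temperatures; the full method in the paper accounts for the radiative exchange among all surfaces.

        import numpy as np

        C1, C2 = 1.191e-16, 1.4388e-2        # Planck radiation constants (W m^2 sr^-1, m K)

        def planck(T, lam=4.0e-6):
            """Spectral radiance of a blackbody at temperature T (K) and wavelength lam (m)."""
            return C1 / (lam**5 * (np.exp(C2 / (lam * T)) - 1.0))

        def invert_planck(L, lam=4.0e-6):
            """Temperature corresponding to blackbody spectral radiance L."""
            return C2 / (lam * np.log(C1 / (lam**5 * L) + 1.0))

        eps = 0.76                            # blade emissivity (value quoted in the abstract)
        T_blade, T_surround = 1100.0, 1300.0  # hypothetical temperatures, K

        # What the radiation thermometer sees: emitted plus reflected radiance.
        L_meas = eps * planck(T_blade) + (1.0 - eps) * planck(T_surround)
        print("uncorrected reading:", invert_planck(L_meas / eps))       # biased high

        # Reflection correction: remove the reflected term, then invert Planck's law.
        L_corr = (L_meas - (1.0 - eps) * planck(T_surround)) / eps
        print("corrected reading:  ", invert_planck(L_corr))             # recovers ~1100 K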

  8. Relative-Error-Covariance Algorithms

    NASA Technical Reports Server (NTRS)

    Bierman, Gerald J.; Wolff, Peter J.

    1991-01-01

    Two algorithms compute error covariance of difference between optimal estimates, based on data acquired during overlapping or disjoint intervals, of state of discrete linear system. Provides quantitative measure of mutual consistency or inconsistency of estimates of states. Relative-error-covariance concept applied, to determine degree of correlation between trajectories calculated from two overlapping sets of measurements and construct real-time test of consistency of state estimates based upon recently acquired data.

  9. Algorithmic Error Correction of Impedance Measuring Sensors

    PubMed Central

    Starostenko, Oleg; Alarcon-Aquino, Vicente; Hernandez, Wilmar; Sergiyenko, Oleg; Tyrsa, Vira

    2009-01-01

    This paper describes novel design concepts and some advanced techniques proposed for increasing the accuracy of low cost impedance measuring devices without reduction of operational speed. The proposed structural method for algorithmic error correction and iterating correction method provide linearization of transfer functions of the measuring sensor and signal conditioning converter, which contribute the principal additive and relative measurement errors. Some measuring systems have been implemented in order to estimate in practice the performance of the proposed methods. Particularly, a measuring system for analysis of C-V, G-V characteristics has been designed and constructed. It has been tested during technological process control of charge-coupled device CCD manufacturing. The obtained results are discussed in order to define a reasonable range of applied methods, their utility, and performance. PMID:22303177

  10. Detecting Soft Errors in Stencil based Computations

    SciTech Connect

    Sharma, V.; Gopalkrishnan, G.; Bronevetsky, G.

    2015-05-06

    Given the growing emphasis on system resilience, it is important to develop software-level error detectors that help trap hardware-level faults with reasonable accuracy while minimizing false alarms as well as the performance overhead introduced. We present a technique that approaches this idea by taking stencil computations as our target, and synthesizing detectors based on machine learning. In particular, we employ linear regression to generate computationally inexpensive models which form the basis for error detection. Our technique has been incorporated into a new open-source library called SORREL. In addition to reporting encouraging experimental results, we demonstrate techniques that help reduce the size of training data. We also discuss the efficacy of various detectors synthesized, as well as our future plans.
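
    The regression-based detection idea can be sketched as: fit an inexpensive linear model that predicts each grid point from its stencil neighbours on fault-free runs, then flag points whose observed value deviates from the prediction by more than a threshold. Everything below (a 1-D heat-equation stencil, the threshold, the injected value used as a bit-flip surrogate) is an illustrative assumption, not the SORREL implementation.

        import numpy as np

        rng = np.random.default_rng(3)

        def heat_step(u, alpha=0.4):
            """One explicit step of a 1-D heat-equation stencil."""
            new = u.copy()
            new[1:-1] = u[1:-1] + alpha * (u[:-2] - 2.0 * u[1:-1] + u[2:])
            return new

        # Training: collect (left, centre, right) -> next-centre samples from clean runs.
        u = np.sin(np.linspace(0, np.pi, 128))
        X, y = [], []
        for _ in range(50):
            nxt = heat_step(u)
            X.append(np.column_stack([u[:-2], u[1:-1], u[2:]]))
            y.append(nxt[1:-1])
            u = nxt
        X, y = np.vstack(X), np.concatenate(y)
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)      # learned stencil weights

        # Detection: predict the next step and flag large residuals as suspected soft errors.
        nxt = heat_step(u)
        nxt[60] += 0.5                                    # crude surrogate for a bit-flip
        pred = np.column_stack([u[:-2], u[1:-1], u[2:]]) @ coef
        flags = np.where(np.abs(nxt[1:-1] - pred) > 1e-3)[0] + 1
        print("suspected corrupted indices:", flags)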

  11. Continuous quantum error correction through local operations

    SciTech Connect

    Mascarenhas, Eduardo; Franca Santos, Marcelo; Marques, Breno; Terra Cunha, Marcelo

    2010-09-15

    We propose local strategies to protect global quantum information. The protocols, which are quantum error-correcting codes for dissipative systems, are based on environment measurements, direct feedback control, and simple encoding of the logical qubits into physical qutrits whose decaying transitions are indistinguishable and equally probable. The simple addition of one extra level in the description of the subsystems allows for local actions to fully and deterministically protect global resources such as entanglement. We present codes for both quantum jump and quantum state diffusion measurement strategies and test them against several sources of inefficiency. The use of qutrits in information protocols suggests further characterization of qutrit-qutrit disentanglement dynamics, which we also give together with simple local environment measurement schemes able to prevent distillability sudden death and even enhance entanglement in situations in which our feedback error correction is not possible.

  12. Observations of TOPEX/Poseidon Orbit Errors Due to Gravitational and Tidal Modeling Errors Using the Global Positioning System

    NASA Technical Reports Server (NTRS)

    Haines, B.; Christensen, E.; Guinn, J.; Norman, R.; Marshall, J.

    1995-01-01

    Satellite altimetry must measure variations in ocean topography with cm-level accuracy. The TOPEX/Poseidon mission is designed to do this by measuring the radial component of the orbit with an accuracy of 13 cm or better RMS. Recent advances, however, have improved this accuracy by about an order of magnitude.

  13. Quantifying errors without random sampling

    PubMed Central

    Phillips, Carl V; LaPole, Luwanna M

    2003-01-01

    Background All quantifications of mortality, morbidity, and other health measures involve numerous sources of error. The routine quantification of random sampling error makes it easy to forget that other sources of error can and should be quantified. When a quantification does not involve sampling, error is almost never quantified and results are often reported in ways that dramatically overstate their precision. Discussion We argue that the precision implicit in typical reporting is problematic and sketch methods for quantifying the various sources of error, building up from simple examples that can be solved analytically to more complex cases. There are straightforward ways to partially quantify the uncertainty surrounding a parameter that is not characterized by random sampling, such as limiting reported significant figures. We present simple methods for doing such quantifications, and for incorporating them into calculations. More complicated methods become necessary when multiple sources of uncertainty must be combined. We demonstrate that Monte Carlo simulation, using available software, can estimate the uncertainty resulting from complicated calculations with many sources of uncertainty. We apply the method to the current estimate of the annual incidence of foodborne illness in the United States. Summary Quantifying uncertainty from systematic errors is practical. Reporting this uncertainty would more honestly represent study results, help show the probability that estimated values fall within some critical range, and facilitate better targeting of further research. PMID:12892568
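
    The Monte Carlo approach sketched above amounts to drawing each uncertain input from a distribution that encodes its (possibly non-sampling) uncertainty, pushing the draws through the calculation, and reading an interval off the resulting distribution. The toy calculation and all distributions below are invented for illustration; they are not the foodborne-illness figures from the paper.

        import numpy as np

        rng = np.random.default_rng(4)
        n = 100_000

        # Hypothetical calculation: incidence = (reported cases / reporting fraction) * multiplier,
        # where each input carries systematic (non-sampling) uncertainty expressed as a distribution.
        reported_cases     = rng.normal(50_000, 2_000, n)        # counting/ascertainment uncertainty
        reporting_fraction = rng.uniform(0.05, 0.15, n)          # expert judgement on under-reporting
        multiplier         = rng.triangular(1.5, 2.0, 3.0, n)    # unconfirmed-case correction factor

        incidence = reported_cases / reporting_fraction * multiplier

        lo, med, hi = np.percentile(incidence, [2.5, 50, 97.5])
        print(f"median {med:,.0f}, 95% uncertainty interval ({lo:,.0f}, {hi:,.0f})")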

  14. Error image aware content restoration

    NASA Astrophysics Data System (ADS)

    Choi, Sungwoo; Lee, Moonsik; Jung, Byunghee

    2015-12-01

    As the resolution of TV has significantly increased, content consumers have become increasingly sensitive to the subtlest defects in TV content. This rising standard in quality demanded by consumers has posed a new challenge in today's context, where the tape-based process has transitioned to the file-based process: the transition necessitated digitalizing old archives, a process which inevitably produces errors such as disordered pixel blocks, scattered white noise, or totally missing pixels. Unsurprisingly, detecting and fixing such errors requires a substantial amount of time and human labor to meet the standard demanded by today's consumers. In this paper, we introduce a novel, automated error restoration algorithm which can be applied to different types of classic errors by utilizing adjacent images while preserving the undamaged parts of an error image as much as possible. We tested our method on error images detected by our quality check system in the KBS (Korean Broadcasting System) video archive. We are also implementing the algorithm as a plugin for a well-known NLE (non-linear editing system), a familiar tool for quality control agents.

  15. Error Detection, Factorization and Correction for Multi-View Scene Reconstruction from Aerial Imagery

    SciTech Connect

    Hess-Flores, Mauricio

    2011-11-10

    Scene reconstruction from video sequences has become a prominent computer vision research area in recent years, due to its large number of applications in fields such as security, robotics and virtual reality. Despite recent progress in this field, there are still a number of issues that manifest as incomplete, incorrect or computationally-expensive reconstructions. The engine behind achieving reconstruction is the matching of features between images, where common conditions such as occlusions, lighting changes and texture-less regions can all affect matching accuracy. Subsequent processes that rely on matching accuracy, such as camera parameter estimation, structure computation and non-linear parameter optimization, are also vulnerable to additional sources of error, such as degeneracies and mathematical instability. Detection and correction of errors, along with robustness in parameter solvers, are a must in order to achieve a very accurate final scene reconstruction. However, error detection is in general difficult due to the lack of ground-truth information about the given scene, such as the absolute position of scene points or GPS/IMU coordinates for the camera(s) viewing the scene. In this dissertation, methods are presented for the detection, factorization and correction of error sources present in all stages of a scene reconstruction pipeline from video, in the absence of ground-truth knowledge. Two main applications are discussed. The first set of algorithms derive total structural error measurements after an initial scene structure computation and factorize errors into those related to the underlying feature matching process and those related to camera parameter estimation. A brute-force local correction of inaccurate feature matches is presented, as well as an improved conditioning scheme for non-linear parameter optimization which applies weights on input parameters in proportion to estimated camera parameter errors. Another application is in

  16. ISA accelerometer onboard the Mercury Planetary Orbiter: error budget

    NASA Astrophysics Data System (ADS)

    Iafolla, Valerio; Lucchesi, David M.; Nozzoli, Sergio; Santoli, Francesco

    2007-03-01

    We have estimated a preliminary error budget for the Italian Spring Accelerometer (ISA) that will be allocated onboard the Mercury Planetary Orbiter (MPO) of the European Space Agency (ESA) space mission to Mercury named BepiColombo. The role of the accelerometer is to remove from the list of unknowns the non-gravitational accelerations that perturb the gravitational trajectory followed by the MPO in the strong radiation environment that characterises the orbit of Mercury around the Sun. Such a role is of fundamental importance in the context of the very ambitious goals of the Radio Science Experiments (RSE) of the BepiColombo mission. We have subdivided the errors on the accelerometer measurements into two main families: (i) the pseudo-sinusoidal errors and (ii) the random errors. The former are characterised by a periodic behaviour with the frequency of the satellite mean anomaly and its higher order harmonic components, i.e., they are deterministic errors. The latter are characterised by an unknown frequency distribution and we assumed for them a noise-like spectrum, i.e., they are stochastic errors. Among the pseudo-sinusoidal errors, the main contribution is due to the effects of the gravity gradients and the inertial forces, while among the random-like errors the main disturbing effect is due to the MPO centre-of-mass displacements produced by the onboard High Gain Antenna (HGA) movements and by the fuel consumption and sloshing. Also subtle, and important to consider, are the random errors produced by the MPO attitude corrections necessary to guarantee the nadir pointing of the spacecraft. We have therefore formulated the ISA error budget and the requirements for the satellite in order to guarantee an orbit reconstruction for the MPO spacecraft with an along-track accuracy of about 1 m over the orbital period of the satellite around Mercury, in such a way as to satisfy the RSE requirements.

  17. Correcting Errors in Catchment-Scale Satellite Rainfall Accumulation Using Microwave Satellite Soil Moisture Products

    NASA Astrophysics Data System (ADS)

    Ryu, D.; Crow, W. T.

    2011-12-01

    Streamflow forecasting in poorly gauged or ungauged catchments is very difficult, mainly due to the absence of the input forcing data required by forecasting models. This challenge poses a threat to human safety and industry in areas where a proper warning system is not available. Currently, a number of studies are in progress to calibrate streamflow models without relying on ground observations, as an effort to construct streamflow forecasting systems in ungauged catchments. Also, recent advances in satellite altimetry and innovative applications of optical remote sensing have enabled mapping of streamflow rate and flood extent in remote areas. In addition, remotely sensed hydrological variables such as real-time satellite precipitation data, microwave soil moisture retrievals, and surface thermal infrared observations have great potential to be used as direct inputs or signature information to run forecasting models. In this work, we evaluate a real-time satellite precipitation product, TRMM 3B42RT, and correct errors of the product using microwave satellite soil moisture products over 240 catchments in Australia. The error correction is made by analyzing the difference between the output soil moisture of a simple model forced by the TRMM product and the satellite retrievals of soil moisture. The real-time satellite precipitation products before and after the error correction are compared with the daily gauge-interpolated precipitation data produced by the Australian Bureau of Meteorology. The error correction improves the overall accuracy of the catchment-scale satellite precipitation, especially the root mean squared error (RMSE), correlation, and the false alarm ratio (FAR); however, only a marginal improvement is observed in the probability of detection (POD). It is shown that the efficiency of the error correction is affected by the surface vegetation density and the annual precipitation of the catchments.

  18. Model of glucose sensor error components: identification and assessment for new Dexcom G4 generation devices.

    PubMed

    Facchinetti, Andrea; Del Favero, Simone; Sparacino, Giovanni; Cobelli, Claudio

    2015-12-01

    It is clinically well established that minimally invasive subcutaneous continuous glucose monitoring (CGM) sensors can significantly improve diabetes treatment. However, CGM readings are still not as reliable as those provided by standard fingerprick blood glucose (BG) meters. In addition to unavoidable random measurement noise, other components of sensor error are distortions due to the blood-to-interstitial glucose kinetics and systematic under-/overestimations associated with the sensor calibration process. A quantitative assessment of these components, and the ability to simulate them with precision, is of paramount importance in the design of CGM-based applications, e.g., the artificial pancreas (AP), and in their in silico testing. In the present paper, we identify and assess a model of sensor error for two sensors, i.e., the G4 Platinum (G4P) and the advanced G4 for artificial pancreas studies (G4AP), both belonging to the recently presented "fourth" generation of Dexcom CGM sensors but differing in their data processing. Results are also compared with those obtained by a sensor belonging to the previous, "third," generation by the same manufacturer, the SEVEN Plus (7P). For each sensor, the error model is derived from 12-h CGM recordings of two sensors used simultaneously and BG samples collected in parallel every 15 ± 5 min. Thanks to technological innovations, G4P outperforms 7P, with an average mean absolute relative difference (MARD) of 11.1% versus 14.2%, respectively, and a lowering of the error of each component by about 30%. Thanks to more sophisticated data processing algorithms, G4AP proved more reliable than G4P, with a MARD of 10.0% and a further reduction of about 20% in the error due to blood-to-interstitial glucose kinetics.
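
    The headline accuracy metric quoted above, the mean absolute relative difference (MARD), is simple to reproduce once paired CGM and reference BG samples are available. The readings below are invented purely to show the calculation.

        import numpy as np

        def mard(cgm, reference_bg):
            """Mean absolute relative difference (%) between CGM readings and reference BG."""
            cgm, reference_bg = np.asarray(cgm, float), np.asarray(reference_bg, float)
            return 100.0 * np.mean(np.abs(cgm - reference_bg) / reference_bg)

        # Hypothetical paired samples (mg/dL), e.g. reference BG every 15 min with matched CGM values.
        reference = np.array([110, 145, 180, 95, 70, 130, 160])
        sensor    = np.array([118, 139, 195, 101, 78, 124, 150])
        print(f"MARD = {mard(sensor, reference):.1f}%")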

  19. Influence of litho patterning on DSA placement errors

    NASA Astrophysics Data System (ADS)

    Wuister, Sander; Druzhinina, Tamara; Ambesi, Davide; Laenens, Bart; Yi, Linda He; Finders, Jo

    2014-03-01

    Directed self-assembly (DSA) of block copolymers is currently being investigated as a shrinking technique complementary to lithography. One of the critical issues with this technique is that DSA induces placement errors. In this paper, we study the relation between confinement by lithography and the placement error induced by DSA. Here, both 193i and EUV pre-patterns are created using a simple algorithm to confine two contact holes formed by DSA on a pitch of 45 nm. Full physical numerical simulations were used to compare the impact of the confinement on DSA-related placement error, pitch variations due to pattern variations, and phase separation defects.

  20. Trans-dimensional inversion of microtremor array dispersion data with hierarchical autoregressive error models

    NASA Astrophysics Data System (ADS)

    Dettmer, Jan; Molnar, Sheri; Steininger, Gavin; Dosso, Stan E.; Cassidy, John F.

    2012-02-01

    This paper applies a general trans-dimensional Bayesian inference methodology and hierarchical autoregressive data-error models to the inversion of microtremor array dispersion data for shear wave velocity (vs) structure. This approach accounts for the limited knowledge of the optimal earth model parametrization (e.g. the number of layers in the vs profile) and of the data-error statistics in the resulting vs parameter uncertainty estimates. The assumed earth model parametrization influences estimates of parameter values and uncertainties due to different parametrizations leading to different ranges of data predictions. The support of the data for a particular model is often non-unique and several parametrizations may be supported. A trans-dimensional formulation accounts for this non-uniqueness by including a model-indexing parameter as an unknown so that groups of models (identified by the indexing parameter) are considered in the results. The earth model is parametrized in terms of a partition model with interfaces given over a depth-range of interest. In this work, the number of interfaces (layers) in the partition model represents the trans-dimensional model indexing. In addition, serial data-error correlations are addressed by augmenting the geophysical forward model with a hierarchical autoregressive error model that can account for a wide range of error processes with a small number of parameters. Hence, the limited knowledge about the true statistical distribution of data errors is also accounted for in the earth model parameter estimates, resulting in more realistic uncertainties and parameter values. Hierarchical autoregressive error models do not rely on point estimates of the model vector to estimate data-error statistics, and have no requirement for computing the inverse or determinant of a data-error covariance matrix. This approach is particularly useful for trans-dimensional inverse problems, as point estimates may not be representative of the
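
    A first-order autoregressive data-error model of the kind mentioned above can be evaluated by whitening the residuals recursively, so the likelihood never requires forming, inverting, or taking the determinant of a full data-error covariance matrix. The sketch below shows the plain AR(1) case with fixed parameters; the paper's hierarchical treatment additionally samples the autoregressive and noise parameters as unknowns.

        import numpy as np

        def ar1_log_likelihood(residuals, phi, sigma):
            """Gaussian log-likelihood of residuals under an AR(1) data-error model.

            phi: autoregressive coefficient (|phi| < 1); sigma: innovation standard deviation.
            The recursion whitens the residuals, so no covariance matrix is ever formed.
            """
            r = np.asarray(residuals, dtype=float)
            innovations = np.empty_like(r)
            innovations[0] = r[0] * np.sqrt(1.0 - phi**2)        # first sample, stationary variance
            innovations[1:] = r[1:] - phi * r[:-1]
            n = r.size
            return (-0.5 * n * np.log(2.0 * np.pi * sigma**2)
                    + 0.5 * np.log(1.0 - phi**2)                 # log-determinant contribution
                    - 0.5 * np.sum(innovations**2) / sigma**2)

        rng = np.random.default_rng(5)
        e = np.zeros(500)
        for t in range(1, e.size):                               # simulate correlated data errors
            e[t] = 0.6 * e[t - 1] + rng.normal(0.0, 0.1)
        print(ar1_log_likelihood(e, 0.6, 0.1), ">", ar1_log_likelihood(e, 0.0, 0.1))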

  1. Error localization in RHIC by fitting difference orbits

    SciTech Connect

    Liu C.; Minty, M.; Ptitsyn, V.

    2012-05-20

    The presence of realistic errors in an accelerator, or in the model used to describe the accelerator, is such that a measurement of the beam trajectory may deviate from prediction. Comparison of measurements to the model can be used to detect such errors. To do so, the initial conditions (phase space parameters at any point) must be determined, which can be achieved by fitting the difference orbit compared to the model prediction using only a few beam position measurements. Using these initial conditions, the fitted orbit can be propagated along the beam line based on the optics model. Measurement and model will agree up to the point of an error. The error source can be better localized by additionally fitting the difference orbit using downstream BPMs and back-propagating the solution. If one dominating error source exists in the machine, the fitted orbit will deviate from the difference orbit at the same point.
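
    The fitting step described above can be illustrated with a toy transport model: express the difference orbit at each BPM as a linear map of the unknown initial conditions, fit those conditions from a few upstream BPMs by least squares, propagate, and look for where measurement and prediction begin to diverge. The drift-space optics, BPM spacing, and injected kick below are hypothetical stand-ins for the RHIC optics model.

        import numpy as np

        rng = np.random.default_rng(6)
        s = np.arange(1, 13) * 10.0                   # 12 BPMs every 10 m (toy lattice: pure drift)

        def transfer_row(si):
            # In a drift, x(s) = x0 + s * x0', so the row mapping (x0, x0') -> x(s) is [1, s].
            return np.array([1.0, si])

        # Simulated "measured" difference orbit: an initial kick plus an unmodelled error downstream.
        x0, xp0 = 0.2e-3, 5e-6
        measured = np.array([transfer_row(si) @ [x0, xp0] for si in s])
        measured[7:] += 0.3e-3                        # error source after the first seven BPMs
        measured += rng.normal(0.0, 5e-6, s.size)     # BPM noise

        # Fit the initial conditions using only the first few (upstream) BPMs.
        A = np.array([transfer_row(si) for si in s[:5]])
        init, *_ = np.linalg.lstsq(A, measured[:5], rcond=None)

        # Propagate the fitted orbit and locate where it stops agreeing with the measurement.
        predicted = np.array([transfer_row(si) @ init for si in s])
        first_bad = int(np.argmax(np.abs(measured - predicted) > 1e-4))
        print("model and measurement diverge at BPM index", first_bad)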

  2. Error Reduction for Weigh-In-Motion

    SciTech Connect

    Hively, Lee M; Abercrombie, Robert K; Scudiere, Matthew B; Sheldon, Frederick T

    2009-01-01

    Federal and State agencies need certifiable vehicle weights for various applications, such as highway inspections, border security, check points, and port entries. ORNL weigh-in-motion (WIM) technology was previously unable to provide certifiable weights, due to natural oscillations, such as vehicle bouncing and rocking. Recent ORNL work demonstrated a novel filter to remove these oscillations. This work shows further filtering improvements to enable certifiable weight measurements (error < 0.1%) for a higher traffic volume with less effort (elimination of redundant weighing).

  3. Error Reduction in Weigh-In-Motion

    2007-09-21

    Federal and State agencies need certifiable vehicle weights for various applications, such as highway inspections, border security, check points, and port entries. ORNL weigh-in-motion (WIM) technology was previously unable to provide certifiable weights, due to natural oscillations, such as vehicle bouncing and rocking. Recent ORNL work demonstrated a novel filter to remove these oscillations. This work shows further filtering improvements to enable certifiable weight measurements (error < 0.1%) for a higher traffic volume with less effort (elimination of redundant weighing).

  4. Errors and Their Mitigation at the Kirchhoff-Law-Johnson-Noise Secure Key Exchange

    PubMed Central

    Saez, Yessica; Kish, Laszlo B.

    2013-01-01

    A method to quantify the error probability at the Kirchhoff-law-Johnson-noise (KLJN) secure key exchange is introduced. The types of errors due to statistical inaccuracies in noise voltage measurements are classified and the error probability is calculated. The most interesting finding is that the error probability decays exponentially with the duration of the time window of single bit exchange. The results indicate that it is feasible to achieve error probabilities of the exchanged bits so small that error correction algorithms are not required. The results are demonstrated with practical considerations. PMID:24303033

  5. A Tunable, Software-based DRAM Error Detection and Correction Library for HPC

    SciTech Connect

    Fiala, David J; Ferreira, Kurt Brian; Mueller, Frank; Engelmann, Christian

    2012-01-01

    Proposed exascale systems will present a number of considerable resiliency challenges. In particular, DRAM soft-errors, or bit-flips, are expected to greatly increase due to the increased memory density of these systems. Current hardware-based fault-tolerance methods will be unsuitable for addressing the expected soft error frequency rate. As a result, additional software will be needed to address this challenge. In this paper we introduce LIBSDC, a tunable, transparent silent data corruption detection and correction library for HPC applications. LIBSDC provides comprehensive SDC protection for program memory by implementing on-demand page integrity verification. Experimental benchmarks with Mantevo HPCCG show that once tuned, LIBSDC is able to achieve SDC protection with 50% overhead of resources, less than the 100% needed for double modular redundancy.

  6. Explaining errors in children's questions.

    PubMed

    Rowland, Caroline F

    2007-07-01

    The ability to explain the occurrence of errors in children's speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that, as predicted by some generativist theories [e.g. Santelmann, L., Berk, S., Austin, J., Somashekar, S. & Lust, B. (2002). Continuity and development in the acquisition of inversion in yes/no questions: dissociating movement and inflection, Journal of Child Language, 29, 813-842], questions with auxiliary DO attracted higher error rates than those with modal auxiliaries. However, in wh-questions, questions with modals and DO attracted equally high error rates, and these findings could not be explained in terms of problems forming questions with why or negated auxiliaries. It was concluded that the data might be better explained in terms of a constructivist account that suggests that entrenched item-based constructions may be protected from error in children's speech, and that errors occur when children resort to other operations to produce questions [e.g. Dabrowska, E. (2000). From formula to schema: the acquisition of English questions. Cognitive Linguistics, 11, 83-102; Rowland, C. F. & Pine, J. M. (2000). Subject-auxiliary inversion errors and wh-question acquisition: What children do know? Journal of Child Language, 27, 157-181; Tomasello, M. (2003). Constructing a language: A usage-based theory of language acquisition. Cambridge, MA: Harvard University Press]. However, further work on constructivist theory development is required to allow researchers to make predictions about the nature of these operations.

  7. Simulation of Radar Rainfall Fields: A Random Error Model

    NASA Astrophysics Data System (ADS)

    Aghakouchak, A.; Habib, E.; Bardossy, A.

    2008-12-01

    Precipitation is a major input to hydrological and meteorological models. It is believed that uncertainties due to input data will propagate in modeling hydrologic processes. Stochastically generated rainfall data are used as input to hydrological and meteorological models to assess model uncertainties and climate variability in water resources systems. The superposition of random errors of different sources is one of the main factors in the uncertainty of radar estimates. One way to express these uncertainties is to stochastically generate random error fields and impose them on radar measurements in order to obtain an ensemble of radar rainfall estimates. In the method introduced here, the random error consists of two components: a purely random error and an error dependent on the indicator variable. The parameters of the error model are estimated using a heteroscedastic maximum likelihood model in order to account for variance heterogeneity in radar rainfall error estimates. When reflectivity values are considered, the exponent and multiplicative factor of the Z-R relationship are estimated simultaneously with the model parameters. The presented model performs better than previous approaches, which generally result in unaccounted heteroscedasticity in the error fields and thus in the radar ensemble.
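
    Generating an ensemble of radar rainfall fields from such an error model can be sketched as follows: draw a purely random component, add a component that depends on an indicator (here simply whether the pixel is rainy), and impose the combined multiplicative error on the radar field. The field, variances, and indicator rule below are invented; the heteroscedastic maximum-likelihood parameter estimation described in the abstract is not shown.

        import numpy as np

        rng = np.random.default_rng(7)

        radar = rng.gamma(0.8, 4.0, size=(100, 100))            # hypothetical radar rainfall (mm/h)
        radar[radar < 0.5] = 0.0                                 # dry pixels

        def perturbed_field(radar, sigma_random=0.2, sigma_indicator=0.3):
            """One ensemble member: radar field with an imposed multiplicative random error."""
            indicator = (radar > 0.0).astype(float)              # rain / no-rain indicator
            purely_random = rng.normal(0.0, sigma_random, radar.shape)
            indicator_dependent = indicator * rng.normal(0.0, sigma_indicator, radar.shape)
            return radar * np.exp(purely_random + indicator_dependent)

        ensemble = np.stack([perturbed_field(radar) for _ in range(50)])
        wet_y, wet_x = np.argwhere(radar > 1.0)[0]
        print("ensemble spread at one wet pixel:", ensemble[:, wet_y, wet_x].std())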

  8. The Pupillary Orienting Response Predicts Adaptive Behavioral Adjustment after Errors

    PubMed Central

    Murphy, Peter R.; van Moort, Marianne L.; Nieuwenhuis, Sander

    2016-01-01

    Reaction time (RT) is commonly observed to slow down after an error. This post-error slowing (PES) has been thought to arise from the strategic adoption of a more cautious response mode following deployment of cognitive control. Recently, an alternative account has suggested that PES results from interference due to an error-evoked orienting response. We investigated whether error-related orienting may in fact be a pre-cursor to adaptive post-error behavioral adjustment when the orienting response resolves before subsequent trial onset. We measured pupil dilation, a prototypical measure of autonomic orienting, during performance of a choice RT task with long inter-stimulus intervals, and found that the trial-by-trial magnitude of the error-evoked pupil response positively predicted both PES magnitude and the likelihood that the following response would be correct. These combined findings suggest that the magnitude of the error-related orienting response predicts an adaptive change of response strategy following errors, and thereby promote a reconciliation of the orienting and adaptive control accounts of PES. PMID:27010472

  9. Impacts of double-ended beam-pointing error on system performance

    NASA Astrophysics Data System (ADS)

    Horkin, Phil R.

    2000-05-01

    Optical Intersatellite links have been investigated for many years, but to date have enjoyed few spaceborne applications. The literature is rich in articles describing system issues such as jitter and pointing effects, but this author believes that simplifications generally made lead to significant errors. Simplifications made, for example, due to the complexity of joint distribution functions are easily overcome with widely available computer tools. Satellite- based data transport systems must offer similar Quality of Service (QoS) parameters as fiber-based transport. The movement to packet-based protocols adds additional constraints not often considered in past papers. BER may no longer be the dominant concern; packet loss, misdelivery, or severely corrupted packets can easily dominate the error budgets. The aggregation of static and dynamic pointing errors on both ends of such a link dramatically reduces the QoS. The approach described in this paper provides the terminal designer the methodology to analytically balance the impacts of these error sources against implementation solutions.

  10. Statistical model and error analysis of a proposed audio fingerprinting algorithm

    NASA Astrophysics Data System (ADS)

    McCarthy, E. P.; Balado, F.; Silvestre, G. C. M.; Hurley, N. J.

    2006-01-01

    In this paper we present a statistical analysis of a particular audio fingerprinting method proposed by Haitsma et al. [1]. Due to the excellent robustness and synchronisation properties of this particular fingerprinting method, we would like to examine its performance for varying values of the parameters involved in the computation and ascertain its capabilities. For this reason, we pursue a statistical model of the fingerprint (also known as a hash, message digest or label). Initially we follow the work of a previous attempt made by Doets and Lagendijk [2-4] to obtain such a statistical model. By reformulating the representation of the fingerprint as a quadratic form, we present a model in which the parameters derived by Doets and Lagendijk may be obtained more easily. Furthermore, our model allows further insight into certain aspects of the behaviour of the fingerprinting algorithm not previously examined. Using our model, we then analyse the probability of error (Pe) of the hash. We identify two particular error scenarios and obtain an expression for the probability of error in each case. We present three methods of varying accuracy to approximate Pe following Gaussian noise addition to the signal of interest. We then analyse the probability of error following desynchronisation of the signal at the input of the hashing system and provide an approximation to Pe for different parameters of the algorithm under varying degrees of desynchronisation.

  11. Refractive Error, Axial Length, and Relative Peripheral Refractive Error before and after the Onset of Myopia

    PubMed Central

    Mutti, Donald O.; Hayes, John R.; Mitchell, G. Lynn; Jones, Lisa A.; Moeschberger, Melvin L.; Cotter, Susan A.; Kleinstein, Robert N.; Manny, Ruth E.; Twelker, J. Daniel; Zadnik, Karla

    2009-01-01

    year after onset, whereas axial length and myopic refractive error continued to elongate and to progress, respectively, although at slower rates compared with the rate at onset. Conclusions A more negative refractive error, longer axial length, and more hyperopic relative peripheral refractive error in addition to faster rates of change in these variables may be useful for predicting the onset of myopia, but only within a span of 2 to 4 years before onset. Becoming myopic does not appear to be characterized by a consistent rate of increase in refractive error and expansion of the globe. Acceleration in myopia progression, axial elongation, and peripheral hyperopia in the year prior to onset followed by relatively slower, more stable rates of change after onset suggests that more than one factor may influence ocular expansion during myopia onset and progression. PMID:17525178

  12. Theory and Simulation of Field Error Transport.

    NASA Astrophysics Data System (ADS)

    Dubin, D. H. E.

    2007-11-01

    The rate at which a plasma escapes across an applied magnetic field B due to symmetry-breaking electric or magnetic ``field errors'' is revisited. Such field errors cause plasma loss (or compression) in stellarators and tokamaks [H.E. Mynick, Phys. Plasmas 13, 058102 (2006)] and in nonneutral plasmas [Eggleston, Phys. Plasmas 14, 012302 (2007); Danielson et al., Phys. Plasmas 13, 055706]. We study this process using idealized simulations that follow guiding centers in given trap fields, neglecting their collective effect on the evolution, but including collisions. Also, the Fokker-Planck equation describing the particle distribution is solved, and the predicted transport agrees with simulations in every applicable regime. When a field error of the form δφ(r, θ, z) = ε(r) exp[i(mθ - kz)] is applied to an infinite plasma column, the transport rates fall into the usual banana, plateau and fluid regimes. When the particles are axially confined by applied trap fields, the same three regimes occur. When an added ``squeeze'' potential produces a separatrix in the axial motion, the transport is enhanced, scaling roughly as (ν/B)^(1/2) δ² when ν < φ. For φ < ν < φ_B (where φ, ν and φ_B are the rotation, collision and axial bounce frequencies) there is also a 1/ν regime similar to that predicted for ripple-enhanced transport [Mynick, 2006].

  13. Application of advanced shearing techniques to the calibration of autocollimators with small angle generators and investigation of error sources.

    PubMed

    Yandayan, T; Geckeler, R D; Aksulu, M; Akgoz, S A; Ozgur, B

    2016-05-01

    The application of advanced error-separating shearing techniques to the precise calibration of autocollimators with Small Angle Generators (SAGs) was carried out for the first time. The experimental realization was achieved using the High Precision Small Angle Generator (HPSAG) of TUBITAK UME under classical dimensional metrology laboratory environmental conditions. The standard uncertainty value of 5 mas (24.2 nrad) reached by classical calibration method was improved to the level of 1.38 mas (6.7 nrad). Shearing techniques, which offer a unique opportunity to separate the errors of devices without recourse to any external standard, were first adapted by Physikalisch-Technische Bundesanstalt (PTB) to the calibration of autocollimators with angle encoders. It has been demonstrated experimentally in a clean room environment using the primary angle standard of PTB (WMT 220). The application of the technique to a different type of angle measurement system extends the range of the shearing technique further and reveals other advantages. For example, the angular scales of the SAGs are based on linear measurement systems (e.g., capacitive nanosensors for the HPSAG). Therefore, SAGs show different systematic errors when compared to angle encoders. In addition to the error-separation of HPSAG and the autocollimator, detailed investigations on error sources were carried out. Apart from determination of the systematic errors of the capacitive sensor used in the HPSAG, it was also demonstrated that the shearing method enables the unique opportunity to characterize other error sources such as errors due to temperature drift in long term measurements. This proves that the shearing technique is a very powerful method for investigating angle measuring systems, for their improvement, and for specifying precautions to be taken during the measurements.

  14. Error-associated behaviors and error rates for robotic geology

    NASA Technical Reports Server (NTRS)

    Anderson, Robert C.; Thomas, Geb; Wagner, Jacob; Glasgow, Justin

    2004-01-01

    This study explores human error as a function of the decision-making process. One of many models for human decision-making is Rasmussen's decision ladder [9]. The decision ladder identifies the multiple tasks and states of knowledge involved in decision-making. The tasks and states of knowledge can be classified by the level of cognitive effort required to make the decision, leading to the skill, rule, and knowledge taxonomy (Rasmussen, 1987). Skill based decisions require the least cognitive effort and knowledge based decisions require the greatest cognitive effort. Errors can occur at any of the cognitive levels.

  15. Multicenter Assessment of Gram Stain Error Rates.

    PubMed

    Samuel, Linoj P; Balada-Llasat, Joan-Miquel; Harrington, Amanda; Cavagnolo, Robert

    2016-06-01

    Gram stains remain the cornerstone of diagnostic testing in the microbiology laboratory for the guidance of empirical treatment prior to availability of culture results. Incorrectly interpreted Gram stains may adversely impact patient care, and yet there are no comprehensive studies that have evaluated the reliability of the technique and there are no established standards for performance. In this study, clinical microbiology laboratories at four major tertiary medical care centers evaluated Gram stain error rates across all nonblood specimen types by using standardized criteria. The study focused on several factors that primarily contribute to errors in the process, including poor specimen quality, smear preparation, and interpretation of the smears. The number of specimens during the evaluation period ranged from 976 to 1,864 specimens per site, and there were a total of 6,115 specimens. Gram stain results were discrepant from culture for 5% of all specimens. Fifty-eight percent of discrepant results were specimens with no organisms reported on Gram stain but significant growth on culture, while 42% of discrepant results had reported organisms on Gram stain that were not recovered in culture. Upon review of available slides, 24% (63/263) of discrepant results were due to reader error, which varied significantly based on site (9% to 45%). The Gram stain error rate also varied between sites, ranging from 0.4% to 2.7%. The data demonstrate a significant variability between laboratories in Gram stain performance and affirm the need for ongoing quality assessment by laboratories. Standardized monitoring of Gram stains is an essential quality control tool for laboratories and is necessary for the establishment of a quality benchmark across laboratories. PMID:26888900

  17. Surface errors in the course of machining precision optics

    NASA Astrophysics Data System (ADS)

    Biskup, H.; Haberl, A.; Rascher, R.

    2015-08-01

    Precision optical components are usually machined by grinding and polishing in several steps of increasing accuracy. Spherical surfaces are finished in a last step with large tools to smooth the surface. The required surface accuracy of non-spherical surfaces can only be achieved with tools in point contact with the surface. So-called mid-spatial-frequency errors (MSFE) can accumulate with such zonal processes. This work examines the formation of surface errors from grinding to polishing by analyzing the surfaces after each machining step with non-contact interferometric methods. The errors on the surface can be classified as described in DIN 4760, whereby 2nd- to 3rd-order errors are the so-called MSFE. By appropriate filtering of the measured data, error frequencies can be suppressed so that only defined spatial frequencies are shown in the surface plot. It can be observed that some frequencies may already be formed in the early machining steps, such as grinding and main polishing. Additionally, it is known that MSFE can also be produced by the process itself and by other side effects. Besides a description of surface errors based on the limits of measurement technology, different formation mechanisms for selected spatial frequencies are presented. A correction may only be possible with tools whose lateral size is below the wavelength of the error structure. The presented considerations may be used to develop proposals for handling surface errors.

  18. Laser tracker error determination using a network measurement

    NASA Astrophysics Data System (ADS)

    Hughes, Ben; Forbes, Alistair; Lewis, Andrew; Sun, Wenjuan; Veal, Dan; Nasr, Karim

    2011-04-01

    We report on a fast, easily implemented method to determine all the geometrical alignment errors of a laser tracker, to high precision. The technique requires no specialist equipment and can be performed in less than an hour. The technique is based on the determination of parameters of a geometric model of the laser tracker, using measurements of a set of fixed target locations, from multiple locations of the tracker. After fitting of the model parameters to the observed data, the model can be used to perform error correction of the raw laser tracker data or to derive correction parameters in the format of the tracker manufacturer's internal error map. In addition to determination of the model parameters, the method also determines the uncertainties and correlations associated with the parameters. We have tested the technique on a commercial laser tracker in the following way. We disabled the tracker's internal error compensation, and used a five-position, fifteen-target network to estimate all the geometric errors of the instrument. Using the error map generated from this network test, the tracker was able to pass a full performance validation test, conducted according to a recognized specification standard (ASME B89.4.19-2006). We conclude that the error correction determined from the network test is as effective as the manufacturer's own error correction methodologies.

  19. [Errors in surgery. Strategies to improve surgical safety].

    PubMed

    Arenas-Márquez, Humberto; Anaya-Prado, Roberto

    2008-01-01

    Surgery is an extreme experience for both patient and surgeon. The patient has to be rescued from something so serious that it may justify the surgeon violating his/her integrity in order to resolve the problem. Nevertheless, both physician and patient recognize that the procedure has some risks. Medical errors are the 8th cause of death in the U.S., and malpractice can be documented in >50% of the legal prosecutions in Mexico. Of special interest is the specialty of general surgery, where legal responsibility can be confirmed in >80% of the cases. Interest in mortality attributed to medical errors has existed since the 19th century, clearly identifying the lack of knowledge, abilities, and poor surgical and diagnostic judgment as the cause of errors. Currently, poor organization, lack of teamwork, and physician/patient-related factors are recognized as the causes of medical errors. Human error is unavoidable, and health care systems and surgeons should adopt the culture of error analysis openly, inquisitively and permanently. Errors should be regarded as an opportunity to learn that health care should be patient centered and not surgeon centered. In this review, we analyze the causes of complications and errors that can develop during routine surgery. Additionally, we propose measures that will allow improvements in the safety of surgical patients. PMID:18778549

  20. Shuttle orbit IMU alignment. Single-precision computation error

    NASA Technical Reports Server (NTRS)

    Mcclain, C. R.

    1980-01-01

    The source of computational error in the inertial measurement unit (IMU) on-orbit alignment software was investigated. Simulation runs were made on the IBM 360/70 computer with the IMU orbit alignment software coded in HAL/S. The results indicate that for small IMU misalignment angles (less than 600 arc seconds), single-precision computations in combination with the arc cosine method of eigen rotation angle extraction introduce an additional misalignment error of up to 230 arc seconds per axis. Use of the arc sine method, however, produced negligible misalignment error. As a result of this study, the arc sine method was recommended for use in the IMU on-orbit alignment software.

  1. Underlying Cause(s) of Letter Perseveration Errors

    PubMed Central

    Fischer-Baum, Simon; Rapp, Brenda

    2011-01-01

    Perseverations, the inappropriate intrusion of elements from a previous response into a current response, are commonly observed in individuals with acquired deficits. This study specifically investigates the contribution of failure-to-activate and failure-to-inhibit deficit(s) in the generation of letter perseveration errors in acquired dysgraphia. We provide evidence from the performance of 12 dysgraphic individuals indicating that a failure to activate graphemes for a target word gives rise to letter perseveration errors. In addition, we provide evidence that, in some individuals, a failure-to-inhibit deficit may also contribute to the production of perseveration errors. PMID:22178232

  2. Spacecraft and propulsion technician error

    NASA Astrophysics Data System (ADS)

    Schultz, Daniel Clyde

    Commercial aviation and commercial space similarly launch, fly, and land passenger vehicles. Unlike aviation, the U.S. government has not established maintenance policies for commercial space. This study conducted a mixed methods review of 610 U.S. space launches from 1984 through 2011, which included 31 failures. An analysis of the failure causal factors showed that human error accounted for 76% of those failures, which included workmanship error accounting for 29% of the failures. With the imminent future of commercial space travel, the increased potential for the loss of human life demands that changes be made to the standardized procedures, training, and certification to reduce human error and failure rates. Several recommendations were made by this study to the FAA's Office of Commercial Space Transportation, space launch vehicle operators, and maintenance technician schools in an effort to increase the safety of the space transportation passengers.

  3. Synthetic aperture interferometry: error analysis

    SciTech Connect

    Biswas, Amiya; Coupland, Jeremy

    2010-07-10

    Synthetic aperture interferometry (SAI) is a novel way of testing aspherics and has a potential for in-process measurement of aspherics [Appl. Opt. 42, 701 (2003)]. A method to measure steep aspherics using the SAI technique has been previously reported [Appl. Opt. 47, 1705 (2008)]. Here we investigate the computation of surface form using the SAI technique in different configurations and discuss the computational errors. A two-pass measurement strategy is proposed to reduce the computational errors, and a detailed investigation is carried out to determine the effect of alignment errors on the measurement process.

  4. Orbit IMU alignment: Error analysis

    NASA Technical Reports Server (NTRS)

    Corson, R. W.

    1980-01-01

    A comprehensive accuracy analysis of orbit inertial measurement unit (IMU) alignments using the shuttle star trackers was completed and the results are presented. Monte Carlo techniques were used in a computer simulation of the IMU alignment hardware and software systems to: (1) determine the expected Space Transportation System 1 Flight (STS-1) manual mode IMU alignment accuracy; (2) investigate the accuracy of alignments in later shuttle flights when the automatic mode of star acquisition may be used; and (3) verify that an analytical model previously used for estimating the alignment error is a valid model. The analysis results do not differ significantly from expectations. The standard deviation in the IMU alignment error for STS-1 alignments was determined to be 68 arc seconds per axis. This corresponds to a 99.7% probability that the magnitude of the total alignment error is less than 258 arc seconds.

  5. Error Field Correction in ITER

    SciTech Connect

    Park, Jong-kyu; Boozer, Allen H.; Menard, Jonathan E.; Schaffer, Michael J.

    2008-05-22

    A new method for correcting magnetic field errors in the ITER tokamak is developed using the Ideal Perturbed Equilibrium Code (IPEC). The dominant external magnetic field for driving islands is shown to be localized to the outboard midplane for three ITER equilibria that represent the projected range of operational scenarios. The coupling matrices between the poloidal harmonics of the external magnetic perturbations and the resonant fields on the rational surfaces that drive islands are combined for different equilibria and used to determine an ordered list of the dominant errors in the external magnetic field. It is found that efficient and robust error field correction is possible with a fixed setting of the correction currents relative to the currents in the main coils across the range of ITER operating scenarios that was considered.

  6. Reward positivity: Reward prediction error or salience prediction error?

    PubMed

    Heydari, Sepideh; Holroyd, Clay B

    2016-08-01

    The reward positivity is a component of the human ERP elicited by feedback stimuli in trial-and-error learning and guessing tasks. A prominent theory holds that the reward positivity reflects a reward prediction error signal that is sensitive to outcome valence, being larger for unexpected positive events relative to unexpected negative events (Holroyd & Coles, 2002). Although the theory has found substantial empirical support, most of these studies have utilized either monetary or performance feedback to test the hypothesis. However, in apparent contradiction to the theory, a recent study found that unexpected physical punishments also elicit the reward positivity (Talmi, Atkinson, & El-Deredy, 2013). The authors of this report argued that the reward positivity reflects a salience prediction error rather than a reward prediction error. To investigate this finding further, in the present study participants navigated a virtual T maze and received feedback on each trial under two conditions. In a reward condition, the feedback indicated that they would either receive a monetary reward or not and in a punishment condition the feedback indicated that they would receive a small shock or not. We found that the feedback stimuli elicited a typical reward positivity in the reward condition and an apparently delayed reward positivity in the punishment condition. Importantly, this signal was more positive to the stimuli that predicted the omission of a possible punishment relative to stimuli that predicted a forthcoming punishment, which is inconsistent with the salience hypothesis. PMID:27184070

  8. 20 Tips to Help Prevent Medical Errors

    MedlinePlus

    ... Prevent Medical Errors 20 Tips to Help Prevent Medical Errors: Patient Fact Sheet This information is for ... current information. Medical errors can occur anywhere in the health care ...

  9. First- and second-order error estimates in Monte Carlo integration

    NASA Astrophysics Data System (ADS)

    Bakx, R.; Kleiss, R. H. P.; Versteegen, F.

    2016-11-01

    In Monte Carlo integration an accurate and reliable determination of the numerical integration error is essential. We point out the need for an independent estimate of the error on this error, for which we present an unbiased estimator. In contrast to the usual (first-order) error estimator, this second-order estimator is not necessarily positive in an actual Monte Carlo computation. We propose an alternative and indicate how it can be computed in linear time without risk of large rounding errors. In addition, we comment on the relatively slow convergence of the second-order error estimate.
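
    As a concrete illustration of the two error orders discussed above, the sketch below computes a Monte Carlo integral, its first-order error (the standard error of the mean), and a textbook second-order error (the uncertainty of that error, propagated from the sampling variance of the variance). These are generic moment-based estimators, not necessarily the unbiased or alternative estimators proposed in the paper; the max(..., 0) guard reflects the point that a naive second-order estimate need not come out positive.

```python
import numpy as np

def mc_integrate(f, n, rng):
    """Estimate the integral of f over [0, 1] with first- and second-order
    error estimates (generic sample-moment versions)."""
    y = f(rng.random(n))
    mean = y.mean()
    var = y.var(ddof=1)
    err1 = np.sqrt(var / n)                          # first-order error
    m4 = np.mean((y - mean) ** 4)                    # fourth central moment
    var_of_var = (m4 - (n - 3) / (n - 1) * var**2) / n
    err2 = np.sqrt(max(var_of_var, 0.0)) / (2 * np.sqrt(var * n))  # error on err1
    return mean, err1, err2

rng = np.random.default_rng(1)
est, e1, e2 = mc_integrate(np.exp, 100_000, rng)
print(f"integral ~ {est:.5f} +/- {e1:.5f} (error on the error ~ {e2:.5f})")
```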

  10. Medication errors: definitions and classification.

    PubMed

    Aronson, Jeffrey K

    2009-06-01

    1. To understand medication errors and to identify preventive strategies, we need to classify them and define the terms that describe them. 2. The four main approaches to defining technical terms consider etymology, usage, previous definitions, and the Ramsey-Lewis method (based on an understanding of theory and practice). 3. A medication error is 'a failure in the treatment process that leads to, or has the potential to lead to, harm to the patient'. 4. Prescribing faults, a subset of medication errors, should be distinguished from prescription errors. A prescribing fault is 'a failure in the prescribing [decision-making] process that leads to, or has the potential to lead to, harm to the patient'. The converse of this, 'balanced prescribing' is 'the use of a medicine that is appropriate to the patient's condition and, within the limits created by the uncertainty that attends therapeutic decisions, in a dosage regimen that optimizes the balance of benefit to harm'. This excludes all forms of prescribing faults, such as irrational, inappropriate, and ineffective prescribing, underprescribing and overprescribing. 5. A prescription error is 'a failure in the prescription writing process that results in a wrong instruction about one or more of the normal features of a prescription'. The 'normal features' include the identity of the recipient, the identity of the drug, the formulation, dose, route, timing, frequency, and duration of administration. 6. Medication errors can be classified, invoking psychological theory, as knowledge-based mistakes, rule-based mistakes, action-based slips, and memory-based lapses. This classification informs preventive strategies.

  11. Medication errors: definitions and classification

    PubMed Central

    Aronson, Jeffrey K

    2009-01-01

    To understand medication errors and to identify preventive strategies, we need to classify them and define the terms that describe them. The four main approaches to defining technical terms consider etymology, usage, previous definitions, and the Ramsey–Lewis method (based on an understanding of theory and practice). A medication error is ‘a failure in the treatment process that leads to, or has the potential to lead to, harm to the patient’. Prescribing faults, a subset of medication errors, should be distinguished from prescription errors. A prescribing fault is ‘a failure in the prescribing [decision-making] process that leads to, or has the potential to lead to, harm to the patient’. The converse of this, ‘balanced prescribing’ is ‘the use of a medicine that is appropriate to the patient's condition and, within the limits created by the uncertainty that attends therapeutic decisions, in a dosage regimen that optimizes the balance of benefit to harm’. This excludes all forms of prescribing faults, such as irrational, inappropriate, and ineffective prescribing, underprescribing and overprescribing. A prescription error is ‘a failure in the prescription writing process that results in a wrong instruction about one or more of the normal features of a prescription’. The ‘normal features’ include the identity of the recipient, the identity of the drug, the formulation, dose, route, timing, frequency, and duration of administration. Medication errors can be classified, invoking psychological theory, as knowledge-based mistakes, rule-based mistakes, action-based slips, and memory-based lapses. This classification informs preventive strategies. PMID:19594526

  12. Reduced error signalling in medication-naive children with ADHD: associations with behavioural variability and post-error adaptations

    PubMed Central

    Plessen, Kerstin J.; Allen, Elena A.; Eichele, Heike; van Wageningen, Heidi; Høvik, Marie Farstad; Sørensen, Lin; Worren, Marius Kalsås; Hugdahl, Kenneth; Eichele, Tom

    2016-01-01

    Background We examined the blood-oxygen level–dependent (BOLD) activation in brain regions that signal errors and their association with intraindividual behavioural variability and adaptation to errors in children with attention-deficit/hyperactivity disorder (ADHD). Methods We acquired functional MRI data during a Flanker task in medication-naive children with ADHD and healthy controls aged 8–12 years and analyzed the data using independent component analysis. For components corresponding to performance monitoring networks, we compared activations across groups and conditions and correlated them with reaction times (RT). Additionally, we analyzed post-error adaptations in behaviour and motor component activations. Results We included 25 children with ADHD and 29 controls in our analysis. Children with ADHD displayed reduced activation to errors in cingulo-opercular regions and higher RT variability, but no differences of interference control. Larger BOLD amplitude to error trials significantly predicted reduced RT variability across all participants. Neither group showed evidence of post-error response slowing; however, post-error adaptation in motor networks was significantly reduced in children with ADHD. This adaptation was inversely related to activation of the right-lateralized ventral attention network (VAN) on error trials and to task-driven connectivity between the cingulo-opercular system and the VAN. Limitations Our study was limited by the modest sample size and imperfect matching across groups. Conclusion Our findings show a deficit in cingulo-opercular activation in children with ADHD that could relate to reduced signalling for errors. Moreover, the reduced orienting of the VAN signal may mediate deficient post-error motor adaptions. Pinpointing general performance monitoring problems to specific brain regions and operations in error processing may help to guide the targets of future treatments for ADHD. PMID:26441332

  13. Automatic-repeat-request error control schemes

    NASA Technical Reports Server (NTRS)

    Lin, S.; Costello, D. J., Jr.; Miller, M. J.

    1983-01-01

    Error detection incorporated with automatic-repeat-request (ARQ) is widely used for error control in data communication systems. This method of error control is simple and provides high system reliability. If a properly chosen code is used for error detection, virtually error-free data transmission can be attained. Various types of ARQ and hybrid ARQ schemes, and error detection using linear block codes are surveyed.
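
    The survey itself covers many ARQ variants; as a minimal illustration of the basic stop-and-wait mechanism, the sketch below retransmits each frame until the error-detecting code accepts it. The bit error probability, frame length, retry limit, and the assumption of perfect (undetected-error-free) detection are illustrative choices, not parameters from the survey.

```python
import random

def send_with_arq(n_frames, p_bit_error, frame_bits=1000, max_retries=10):
    """Stop-and-wait ARQ sketch: retransmit a frame until it arrives clean.
    Channel errors are independent bit flips with probability p_bit_error,
    and error detection is assumed perfect."""
    transmissions = 0
    p_frame_error = 1 - (1 - p_bit_error) ** frame_bits
    for _frame in range(n_frames):
        for _ in range(max_retries):
            transmissions += 1
            if random.random() >= p_frame_error:  # frame received intact, ACK sent
                break
        else:
            raise RuntimeError("frame dropped after max retries")
    return transmissions

random.seed(0)
sent = send_with_arq(500, p_bit_error=1e-4)
print(f"500 frames delivered in {sent} transmissions (throughput {500 / sent:.2f})")
```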

  14. Evaluating the Effect of Global Positioning System (GPS) Satellite Clock Error via GPS Simulation

    NASA Astrophysics Data System (ADS)

    Sathyamoorthy, Dinesh; Shafii, Shalini; Amin, Zainal Fitry M.; Jusoh, Asmariah; Zainun Ali, Siti

    2016-06-01

    This study is aimed at evaluating the effect of Global Positioning System (GPS) satellite clock error using GPS simulation. Two test conditions are used. Case 1: all the GPS satellites have clock errors within the normal range of 0 to 7 ns, corresponding to a pseudorange error range of 0 to 2.1 m. Case 2: one GPS satellite suffers from a critical failure, resulting in a clock error corresponding to a pseudorange error of up to 1 km. It is found that increasing GPS satellite clock error increases the average positional error, because the growing pseudorange error in the GPS satellite signals increases the error in the coordinates computed by the GPS receiver. Varying average positional error patterns are observed for each of the readings. This is because the GPS satellite constellation is dynamic, causing varying GPS satellite geometry over location and time, so that GPS accuracy is location and time dependent. For Case 1, in general, the highest average positional error values are observed for readings with the highest PDOP values, while the lowest average positional error values are observed for readings with the lowest PDOP values. For Case 2, no correlation is observed between the average positional error values and PDOP, indicating that the error generated is random.
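
    The pseudorange figures quoted above follow directly from multiplying the satellite clock offset by the speed of light, as in this two-line check (the 3336 ns value for Case 2 is back-calculated here for illustration, not taken from the study):

```python
C = 299_792_458.0  # speed of light, m/s

def pseudorange_error_m(clock_error_ns: float) -> float:
    """Ranging error caused by a satellite clock offset of clock_error_ns."""
    return C * clock_error_ns * 1e-9

print(pseudorange_error_m(7.0))     # ~2.1 m, upper end of the normal range (Case 1)
print(pseudorange_error_m(3336.0))  # ~1 km, the critical-failure case (Case 2)
```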

  15. Applications of integrated human error identification techniques on the chemical cylinder change task.

    PubMed

    Cheng, Ching-Min; Hwang, Sheue-Ling

    2015-03-01

    This paper outlines the human error identification (HEI) techniques that currently exist to assess latent human errors. Many formal error identification techniques have existed for years, but few have been validated to cover latent human error analysis in different domains. This study considers many possible error modes and influential factors, including external error modes, internal error modes, psychological error mechanisms, and performance shaping factors, and integrates several execution procedures and frameworks of HEI techniques. The case study in this research was the operational process of changing chemical cylinders in a factory. In addition, the integrated HEI method was used to assess the operational processes and the system's reliability. It was concluded that the integrated method is a valuable aid to develop much safer operational processes and can be used to predict human error rates on critical tasks in the plant.

  16. Management of human error by design

    NASA Technical Reports Server (NTRS)

    Wiener, Earl

    1988-01-01

    Design-induced errors and error prevention as well as the concept of lines of defense against human error are discussed. The concept of human error prevention, whose main focus has been on hardware, is extended to other features of the human-machine interface vulnerable to design-induced errors. In particular, it is pointed out that human factors and human error prevention should be part of the process of transport certification. Also, the concept of error tolerant systems is considered as a last line of defense against error.

  17. Reducing medical errors and adverse events.

    PubMed

    Pham, Julius Cuong; Aswani, Monica S; Rosen, Michael; Lee, HeeWon; Huddle, Matthew; Weeks, Kristina; Pronovost, Peter J

    2012-01-01

    Medical errors account for ∼98,000 deaths per year in the United States. They increase disability and costs and decrease confidence in the health care system. We review several important types of medical errors and adverse events. We discuss medication errors, healthcare-acquired infections, falls, handoff errors, diagnostic errors, and surgical errors. We describe the impact of these errors, review causes and contributing factors, and provide an overview of strategies to reduce these events. We also discuss teamwork/safety culture, an important aspect in reducing medical errors.

  18. Does naming accuracy improve through self-monitoring of errors?

    PubMed

    Schwartz, Myrna F; Middleton, Erica L; Brecher, Adelyn; Gagliardi, Maureen; Garvey, Kelly

    2016-04-01

    This study examined spontaneous self-monitoring of picture naming in people with aphasia (PWA). Of primary interest was whether spontaneous detection or repair of an error constitutes an error signal or other feedback that tunes the production system to the desired outcome. In other words, do acts of monitoring cause adaptive change in the language system? A second possibility, not incompatible with the first, is that monitoring is indicative of an item's representational strength, and strength is a causal factor in language change. Twelve PWA performed a 615-item naming test twice, in separate sessions, without extrinsic feedback. At each timepoint, we scored the first complete response for accuracy and error type and the remainder of the trial for verbalizations consistent with detection (e.g., "no, not that") and successful repair (i.e., correction). Data analysis centered on: (a) how often an item that was misnamed at one timepoint changed to correct at the other timepoint, as a function of monitoring; and (b) how monitoring impacted change scores in the Forward (Time 1 to Time 2) compared to Backward (Time 2 to Time 1) direction. The Strength hypothesis predicts significant effects of monitoring in both directions. The Learning hypothesis predicts greater effects in the Forward direction. These predictions were evaluated for three types of errors--Semantic errors, Phonological errors, and Fragments--using mixed-effects regression modeling with crossed random effects. Support for the Strength hypothesis was found for all three error types. Support for the Learning hypothesis was found for Semantic errors. All effects were due to error repair, not error detection. We discuss the theoretical and clinical implications of these novel findings. PMID:26863091

  19. Which forcing data errors matter most when modeling seasonal snowpacks?

    NASA Astrophysics Data System (ADS)

    Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.

    2014-12-01

    High quality forcing data are critical when modeling seasonal snowpacks and snowmelt, but their quality is often compromised due to measurement errors or deficiencies in gridded data products (e.g., spatio-temporal interpolation, empirical parameterizations, or numerical weather model outputs). To assess the relative impact of errors in different meteorological forcings, many studies have conducted sensitivity analyses where errors (e.g., bias) are imposed on one forcing at a time and changes in model output are compared. Although straightforward, this approach only considers simplistic error structures and cannot quantify interactions in different meteorological forcing errors (i.e., it assumes a linear system). Here we employ the Sobol' method of global sensitivity analysis, which allows us to test how co-existing errors in six meteorological forcings (i.e., air temperature, precipitation, wind speed, humidity, incoming shortwave and longwave radiation) impact specific modeled snow variables (i.e., peak snow water equivalent, snowmelt rates, and snow disappearance timing). Using the Sobol' framework across a large number of realizations (>100000 simulations annually at each site), we test how (1) the type (e.g., bias vs. random errors), (2) distribution (e.g., uniform vs. normal), and (3) magnitude (e.g., instrument uncertainty vs. field uncertainty) of forcing errors impact key outputs from a physically based snow model (the Utah Energy Balance). We also assess the role of climate by conducting the analysis at sites in maritime, intermountain, continental, and tundra snow zones. For all outputs considered, results show that (1) biases in forcing data are more important than random errors, (2) the choice of error distribution can enhance the importance of specific forcings, and (3) the level of uncertainty considered dictates the relative importance of forcings. While the relative importance of forcings varied with snow variable and climate, the results broadly
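
    For readers unfamiliar with the Sobol' method, the sketch below shows a generic pick-freeze estimator of first-order sensitivity indices applied to a toy stand-in for a snow model output; the function, parameter names, bounds, and sample size are illustrative assumptions and have no connection to the Utah Energy Balance configuration used in the study.

```python
import numpy as np

def sobol_first_order(model, bounds, n=4096, seed=0):
    """First-order Sobol' indices via the Saltelli pick-freeze estimator."""
    rng = np.random.default_rng(seed)
    d = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    A = lo + (hi - lo) * rng.random((n, d))
    B = lo + (hi - lo) * rng.random((n, d))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        AB_i = A.copy()
        AB_i[:, i] = B[:, i]                 # 'freeze' all inputs except input i
        S[i] = np.mean(fB * (model(AB_i) - fA)) / var
    return S

def toy_peak_swe(X):
    """Made-up stand-in for peak SWE as a function of forcing biases."""
    t_bias, p_bias, lw_bias = X.T
    return -2.0 * t_bias + 15.0 * p_bias + 0.05 * lw_bias + 0.5 * t_bias * p_bias

print(sobol_first_order(toy_peak_swe, bounds=[(-2, 2), (-0.2, 0.2), (-20, 20)]))
```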

  20. Intensive Treatment with Ultrasound Visual Feedback for Speech Sound Errors in Childhood Apraxia

    PubMed Central

    Preston, Jonathan L.; Leece, Megan C.; Maas, Edwin

    2016-01-01

    Ultrasound imaging is an adjunct to traditional speech therapy that has shown to be beneficial in the remediation of speech sound errors. Ultrasound biofeedback can be utilized during therapy to provide clients with additional knowledge about their tongue shapes when attempting to produce sounds that are erroneous. The additional feedback may assist children with childhood apraxia of speech (CAS) in stabilizing motor patterns, thereby facilitating more consistent and accurate productions of sounds and syllables. However, due to its specialized nature, ultrasound visual feedback is a technology that is not widely available to clients. Short-term intensive treatment programs are one option that can be utilized to expand access to ultrasound biofeedback. Schema-based motor learning theory suggests that short-term intensive treatment programs (massed practice) may assist children in acquiring more accurate motor patterns. In this case series, three participants ages 10–14 years diagnosed with CAS attended 16 h of speech therapy over a 2-week period to address residual speech sound errors. Two participants had distortions on rhotic sounds, while the third participant demonstrated lateralization of sibilant sounds. During therapy, cues were provided to assist participants in obtaining a tongue shape that facilitated a correct production of the erred sound. Additional practice without ultrasound was also included. Results suggested that all participants showed signs of acquisition of sounds in error. Generalization and retention results were mixed. One participant showed generalization and retention of sounds that were treated; one showed generalization but limited retention; and the third showed no evidence of generalization or retention. Individual characteristics that may facilitate generalization are discussed. Short-term intensive treatment programs using ultrasound biofeedback may result in the acquisition of more accurate motor patterns and improved articulation

  1. Intensive Treatment with Ultrasound Visual Feedback for Speech Sound Errors in Childhood Apraxia.

    PubMed

    Preston, Jonathan L; Leece, Megan C; Maas, Edwin

    2016-01-01

    Ultrasound imaging is an adjunct to traditional speech therapy that has shown to be beneficial in the remediation of speech sound errors. Ultrasound biofeedback can be utilized during therapy to provide clients with additional knowledge about their tongue shapes when attempting to produce sounds that are erroneous. The additional feedback may assist children with childhood apraxia of speech (CAS) in stabilizing motor patterns, thereby facilitating more consistent and accurate productions of sounds and syllables. However, due to its specialized nature, ultrasound visual feedback is a technology that is not widely available to clients. Short-term intensive treatment programs are one option that can be utilized to expand access to ultrasound biofeedback. Schema-based motor learning theory suggests that short-term intensive treatment programs (massed practice) may assist children in acquiring more accurate motor patterns. In this case series, three participants ages 10-14 years diagnosed with CAS attended 16 h of speech therapy over a 2-week period to address residual speech sound errors. Two participants had distortions on rhotic sounds, while the third participant demonstrated lateralization of sibilant sounds. During therapy, cues were provided to assist participants in obtaining a tongue shape that facilitated a correct production of the erred sound. Additional practice without ultrasound was also included. Results suggested that all participants showed signs of acquisition of sounds in error. Generalization and retention results were mixed. One participant showed generalization and retention of sounds that were treated; one showed generalization but limited retention; and the third showed no evidence of generalization or retention. Individual characteristics that may facilitate generalization are discussed. Short-term intensive treatment programs using ultrasound biofeedback may result in the acquisition of more accurate motor patterns and improved articulation of

  4. Sensitivity Studies of the Radar-Rainfall Error Models

    NASA Astrophysics Data System (ADS)

    Villarini, G.; Krajewski, W. F.; Ciach, G. J.

    2007-12-01

    It is well acknowledged that there are large uncertainties associated with the operational quantitative precipitation estimates produced by the U.S. national network of WSR-88D radars. These errors are due to the measurement principles, parameter estimation, and not fully understood physical processes. Comprehensive quantitative evaluation of these uncertainties is still at an early stage. The authors proposed an empirically-based model in which the relation between true rainfall (RA) and radar-rainfall (RR) could be described as the product of a deterministic distortion function and a random component. However, how different values of the parameters in the radar-rainfall algorithms used to create these products impact the model results still remains an open question. In this study, the authors investigate the effects of different exponents in the Z-R relation (Marshall- Palmer, NEXRAD, and tropical) and of an anomalous propagation (AP) removal algorithm. Additionally, they generalize the model to describe the radar-rainfall uncertainties in the additive form. This approach is fully empirically based and rain gauge estimates are considered as an approximation of the true rainfall. The proposed results are based on a large sample (six years) of data from the Oklahoma City radar (KTLX) and processed through the Hydro-NEXRAD software system. The radar data are complemented with the corresponding rain gauge observations from the Oklahoma Mesonet, and the Agricultural Research Service Micronet.
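
    The product-form error model can be made concrete with a small sketch in which the deterministic distortion function is taken, purely for illustration, as a power law fitted in log space; this functional form and the synthetic radar/gauge pairs are assumptions, not the authors' actual estimation procedure.

```python
import numpy as np

def fit_multiplicative_error_model(rr, ra):
    """Fit RA = h(RR) * eps with h(RR) = a * RR**b estimated by
    log-log regression; eps is the residual random component."""
    mask = (rr > 0) & (ra > 0)
    b, log_a = np.polyfit(np.log(rr[mask]), np.log(ra[mask]), 1)
    a = np.exp(log_a)
    eps = ra[mask] / (a * rr[mask] ** b)
    return a, b, eps

# synthetic stand-in for radar-rainfall (RR) and gauge (RA) pairs
rng = np.random.default_rng(2)
rr = rng.gamma(2.0, 2.0, 5000)
ra = 1.2 * rr**0.9 * rng.lognormal(0.0, 0.3, rr.size)
a, b, eps = fit_multiplicative_error_model(rr, ra)
print(f"h(RR) = {a:.2f} * RR^{b:.2f}, std of the random factor = {eps.std():.2f}")
```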

  5. Verification of the Forecast Errors Based on Ensemble Spread

    NASA Astrophysics Data System (ADS)

    Vannitsem, S.; Van Schaeybroeck, B.

    2014-12-01

    The use of ensemble prediction systems allows for an uncertainty estimation of the forecast. Most end users do not require all the information contained in an ensemble and prefer the use of a single uncertainty measure. This measure is the ensemble spread, which serves to forecast the forecast error. It is, however, unclear how the quality of such error forecasts can best be assessed based on spread and forecast error only. The spread-error verification is intricate for two reasons: first, for each probabilistic forecast only one verifying observation is available, and second, the spread is not meant to provide an exact prediction of the error. Despite these facts, several advances were recently made, all based on traditional deterministic verification of the error forecast. In particular, Grimit and Mass (2007) and Hopson (2014) considered in detail the strengths and weaknesses of the spread-error correlation, while Christensen et al (2014) developed a proper-score extension of the mean squared error. However, due to the strong variance of the error given a certain spread, the error forecast should preferably be considered as probabilistic in nature. In the present work, different probabilistic error models are proposed depending on the spread-error metrics used. Most of these models allow for the discrimination of a perfect forecast from an imperfect one, independent of the underlying ensemble distribution. The new spread-error scores are tested on the ensemble prediction system of the European Centre for Medium-Range Weather Forecasts (ECMWF) over Europe and Africa. References: Christensen, H. M., Moroz, I. M. and Palmer, T. N., 2014, Evaluation of ensemble forecast uncertainty using a new proper score: application to medium-range and seasonal forecasts. In press, Quarterly Journal of the Royal Meteorological Society. Grimit, E. P., and C. F. Mass, 2007: Measuring the ensemble spread-error relationship with a probabilistic approach: Stochastic ensemble results. Mon. Wea. Rev., 135, 203
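
    As a minimal illustration of the simplest diagnostic mentioned above, the spread-error correlation, the synthetic example below builds a statistically perfect ensemble and still finds a correlation well below one; this is the "strong variance of the error given a certain spread" that motivates a probabilistic treatment. The ensemble size and the uncertainty distribution are arbitrary choices, not ECMWF settings.

```python
import numpy as np

rng = np.random.default_rng(3)
n_cases, n_members = 2000, 50
true_sigma = rng.uniform(0.5, 2.0, n_cases)      # flow-dependent forecast uncertainty
obs = rng.normal(0.0, true_sigma)                # verifying observations
ens = rng.normal(0.0, true_sigma[:, None], (n_cases, n_members))  # perfect ensemble

spread = ens.std(axis=1, ddof=1)                 # the "forecast of the forecast error"
error = np.abs(ens.mean(axis=1) - obs)           # realized ensemble-mean error
print("spread-error correlation:", np.corrcoef(spread, error)[0, 1])
```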

  6. Errors in airborne flux measurements

    NASA Astrophysics Data System (ADS)

    Mann, Jakob; Lenschow, Donald H.

    1994-07-01

    We present a general approach for estimating systematic and random errors in eddy correlation fluxes and flux gradients measured by aircraft in the convective boundary layer as a function of the length of the flight leg, or of the cutoff wavelength of a highpass filter. The estimates are obtained from empirical expressions for various length scales in the convective boundary layer and they are experimentally verified using data from the First ISLSCP (International Satellite Land Surface Climatology Experiment) Field Experiment (FIFE), the Air Mass Transformation Experiment (AMTEX), and the Electra Radome Experiment (ELDOME). We show that the systematic flux and flux gradient errors can be important if fluxes are calculated from a set of several short flight legs or if the vertical velocity and scalar time series are high-pass filtered. While the systematic error of the flux is usually negative, that of the flux gradient can change sign. For example, for temperature flux divergence the systematic error changes from negative to positive about a quarter of the way up in the convective boundary layer.
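
    The underlying flux computation can be sketched as follows: the eddy-correlation flux over a leg is the mean product of the perturbations of vertical velocity and the scalar about their segment means, and splitting a long record into shorter segments shows how the random error of a single-segment estimate grows as the averaging length shrinks. The synthetic uncorrelated series used here is an assumption for illustration only; real convective-boundary-layer series contain large eddies, which is what produces the systematic errors discussed in the abstract.

```python
import numpy as np

def segment_fluxes(w, c, segment_len):
    """Eddy-correlation flux per segment: mean of w'c' with perturbations
    taken about each segment's own mean. Returns the mean flux and the
    scatter of single-segment estimates."""
    n_seg = len(w) // segment_len
    fluxes = []
    for k in range(n_seg):
        sl = slice(k * segment_len, (k + 1) * segment_len)
        w_p = w[sl] - w[sl].mean()
        c_p = c[sl] - c[sl].mean()
        fluxes.append(np.mean(w_p * c_p))
    fluxes = np.asarray(fluxes)
    return fluxes.mean(), fluxes.std(ddof=1)

rng = np.random.default_rng(4)
n = 60_000
w = rng.standard_normal(n)                 # vertical velocity (synthetic)
c = 0.3 * w + rng.standard_normal(n)       # scalar correlated with w: true flux 0.3
for seg_len in (500, 3000, 15000):
    flux, scatter = segment_fluxes(w, c, seg_len)
    print(f"segment length {seg_len:>5}: flux ~ {flux:.3f}, single-segment error ~ {scatter:.3f}")
```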

  7. Sampling Errors of Variance Components.

    ERIC Educational Resources Information Center

    Sanders, Piet F.

    A study on sampling errors of variance components was conducted within the framework of generalizability theory by P. L. Smith (1978). The study used an intuitive approach for solving the problem of how to allocate the number of conditions to different facets in order to produce the most stable estimate of the universe score variance. Optimization…

  8. The Errors of Our Ways

    ERIC Educational Resources Information Center

    Kane, Michael

    2011-01-01

    Errors don't exist in our data, but they serve a vital function. Reality is complicated, but our models need to be simple in order to be manageable. We assume that attributes are invariant over some conditions of observation, and once we do that we need some way of accounting for the variability in observed scores over these conditions of…

  9. Typical errors of ESP users

    NASA Astrophysics Data System (ADS)

    Eremina, Svetlana V.; Korneva, Anna A.

    2004-07-01

    The paper presents an analysis of errors made by ESP (English for specific purposes) users that are considered typical. These errors result from misuse of the resources of English grammar and tend to persist. Their origin and places of occurrence are also discussed.

  10. Reduced discretization error in HZETRN

    NASA Astrophysics Data System (ADS)

    Slaba, Tony C.; Blattnig, Steve R.; Tweed, John

    2013-02-01

    The deterministic particle transport code HZETRN is an efficient analysis tool for studying the effects of space radiation on humans, electronics, and shielding materials. In a previous work, numerical methods in the code were reviewed, and new methods were developed that further improved efficiency and reduced overall discretization error. It was also shown that the remaining discretization error could be attributed to low energy light ions (A < 4) with residual ranges smaller than the physical step-size taken by the code. Accurately resolving the spectrum of low energy light particles is important in assessing risk associated with astronaut radiation exposure. In this work, modifications to the light particle transport formalism are presented that accurately resolve the spectrum of low energy light ion target fragments. The modified formalism is shown to significantly reduce overall discretization error and allows a physical approximation to be removed. For typical step-sizes and energy grids used in HZETRN, discretization errors for the revised light particle transport algorithms are shown to be less than 4% for aluminum and water shielding thicknesses as large as 100 g/cm2 exposed to both solar particle event and galactic cosmic ray environments.

  11. Amplify Errors to Minimize Them

    ERIC Educational Resources Information Center

    Stewart, Maria Shine

    2009-01-01

    In this article, the author offers her experience of modeling mistakes and writing spontaneously in the computer classroom to get students' attention and elicit their editorial response. She describes how she taught her class about major sentence errors--comma splices, run-ons, and fragments--through her Sentence Meditation exercise, a rendition…

  12. Theory of Test Translation Error

    ERIC Educational Resources Information Center

    Solano-Flores, Guillermo; Backhoff, Eduardo; Contreras-Nino, Luis Angel

    2009-01-01

    In this article, we present a theory of test translation whose intent is to provide the conceptual foundation for effective, systematic work in the process of test translation and test translation review. According to the theory, translation error is multidimensional; it is not simply the consequence of defective translation but an inevitable fact…

  13. Error Patterns of Bilingual Readers.

    ERIC Educational Resources Information Center

    Gonzalez, Phillip C.; Elijah, David V.

    1979-01-01

    In a study of developmental reading behaviors, errors of 75 Spanish-English bilingual students (grades 2-9) on the McLeod GAP Comprehension Test were categorized in an attempt to ascertain a pattern of language difficulties. Contrary to previous research, bilingual readers minimally used native language cues in reading second language materials.…

  14. What Is a Reading Error?

    ERIC Educational Resources Information Center

    Labov, William; Baker, Bettina

    2010-01-01

    Early efforts to apply knowledge of dialect differences to reading stressed the importance of the distinction between differences in pronunciation and mistakes in reading. This study develops a method of estimating the probability that a given oral reading that deviates from the text is a true reading error by observing the semantic impact of the…

  15. Having Fun with Error Analysis

    ERIC Educational Resources Information Center

    Siegel, Peter

    2007-01-01

    We present a fun activity that can be used to introduce students to error analysis: the M&M game. Students are told to estimate the number of individual candies plus uncertainty in a bag of M&M's. The winner is the group whose estimate brackets the actual number with the smallest uncertainty. The exercise produces enthusiastic discussions and…

  16. Input/output error analyzer

    NASA Technical Reports Server (NTRS)

    Vaughan, E. T.

    1977-01-01

    Program aids in equipment assessment. Independent assembly-language utility program is designed to operate under level 27 or 31 of EXEC 8 Operating System. It scans user-selected portions of system log file, whether located on tape or mass storage, and searches for and processes I/O error (type 6) entries.

  17. A brief history of error.

    PubMed

    Murray, Andrew W

    2011-10-01

    The spindle checkpoint monitors chromosome alignment on the mitotic and meiotic spindle. When the checkpoint detects errors, it arrests progress of the cell cycle while it attempts to correct the mistakes. This perspective will present a brief history summarizing what we know about the checkpoint, and a list of questions we must answer before we understand it. PMID:21968991

  18. Reduced discretization error in HZETRN

    SciTech Connect

    Slaba, Tony C.; Blattnig, Steve R.; Tweed, John

    2013-02-01

    The deterministic particle transport code HZETRN is an efficient analysis tool for studying the effects of space radiation on humans, electronics, and shielding materials. In a previous work, numerical methods in the code were reviewed, and new methods were developed that further improved efficiency and reduced overall discretization error. It was also shown that the remaining discretization error could be attributed to low energy light ions (A < 4) with residual ranges smaller than the physical step-size taken by the code. Accurately resolving the spectrum of low energy light particles is important in assessing risk associated with astronaut radiation exposure. In this work, modifications to the light particle transport formalism are presented that accurately resolve the spectrum of low energy light ion target fragments. The modified formalism is shown to significantly reduce overall discretization error and allows a physical approximation to be removed. For typical step-sizes and energy grids used in HZETRN, discretization errors for the revised light particle transport algorithms are shown to be less than 4% for aluminum and water shielding thicknesses as large as 100 g/cm2 exposed to both solar particle event and galactic cosmic ray environments.
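    The step-size dependence of discretization error can be illustrated with a toy marching calculation; the sketch below uses a simple attenuation equation with forward Euler stepping and is not HZETRN's actual transport algorithm (the coefficient and depth are arbitrary):

```python
# Toy illustration of marching-step discretization error, not HZETRN's scheme:
# d(phi)/dx = -sigma*phi is marched with forward Euler and compared to the
# exact solution exp(-sigma*x); the error shrinks as the step-size shrinks.
import math

def march(sigma, depth, step):
    phi, x = 1.0, 0.0
    while x < depth - 1e-12:
        h = min(step, depth - x)
        phi += -sigma * phi * h   # forward Euler step
        x += h
    return phi

sigma, depth = 0.05, 100.0        # arbitrary coefficient, 100 g/cm2-like depth
exact = math.exp(-sigma * depth)
for step in (10.0, 1.0, 0.1):
    approx = march(sigma, depth, step)
    print(f"step {step:>5}: relative error {abs(approx - exact) / exact:.3%}")
```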

  19. Factors that influence the generation of autobiographical memory conjunction errors.

    PubMed

    Devitt, Aleea L; Monk-Fromont, Edwin; Schacter, Daniel L; Addis, Donna Rose

    2016-01-01

    The constructive nature of memory is generally adaptive, allowing us to efficiently store, process and learn from life events, and simulate future scenarios to prepare ourselves for what may come. However, the cost of a flexibly constructive memory system is the occasional conjunction error, whereby the components of an event are authentic, but the combination of those components is false. Using a novel recombination paradigm, it was demonstrated that details from one autobiographical memory (AM) may be incorrectly incorporated into another, forming AM conjunction errors that elude typical reality monitoring checks. The factors that contribute to the creation of these conjunction errors were examined across two experiments. Conjunction errors were more likely to occur when the corresponding details were partially rather than fully recombined, likely due to increased plausibility and ease of simulation of partially recombined scenarios. Brief periods of imagination increased conjunction error rates, in line with the imagination inflation effect. Subjective ratings suggest that this inflation is due to similarity of phenomenological experience between conjunction and authentic memories, consistent with a source monitoring perspective. Moreover, objective scoring of memory content indicates that increased perceptual detail may be particularly important for the formation of AM conjunction errors.

  20. Error and its meaning in forensic science.

    PubMed

    Christensen, Angi M; Crowder, Christian M; Ousley, Stephen D; Houck, Max M

    2014-01-01

    The discussion of "error" has gained momentum in forensic science in the wake of the Daubert guidelines and has intensified with the National Academy of Sciences' Report. Error has many different meanings, and too often, forensic practitioners themselves as well as the courts misunderstand scientific error and statistical error rates, often confusing them with practitioner error (or mistakes). Here, we present an overview of these concepts as they pertain to forensic science applications, discussing the difference between practitioner error (including mistakes), instrument error, statistical error, and method error. We urge forensic practitioners to ensure that potential sources of error and method limitations are understood and clearly communicated and advocate that the legal community be informed regarding the differences between interobserver errors, uncertainty, variation, and mistakes.

  1. FRamework Assessing Notorious Contributing Influences for Error (FRANCIE): Perspective on Taxonomy Development to Support Error Reporting and Analysis

    SciTech Connect

    Lon N. Haney; David I. Gertman

    2003-04-01

    Beginning in the 1980s, a primary focus of human reliability analysis was estimation of human error probabilities. However, detailed qualitative modeling with comprehensive representation of contextual variables was often lacking. This was likely due to the lack of comprehensive error and performance shaping factor taxonomies, and the limited data available on observed error rates and their relationship to specific contextual variables. In the mid-1990s, Boeing, America West Airlines, NASA Ames Research Center, and INEEL partnered in a NASA-sponsored Advanced Concepts grant to: assess the state of the art in human error analysis, identify future needs for human error analysis, and develop an approach addressing these needs. Identified needs included a method to identify and prioritize task and contextual characteristics affecting human reliability. Other identified needs included developing comprehensive taxonomies to support detailed qualitative modeling and to structure meaningful data collection efforts across domains. A result was the development of the FRamework Assessing Notorious Contributing Influences for Error (FRANCIE), with a taxonomy for airline maintenance tasks. The assignment of performance shaping factors to generic errors by experts proved to be valuable for qualitative modeling. Performance shaping factors and error types from such detailed approaches can be used to structure error reporting schemes. In a recent NASA Advanced Human Support Technology grant, FRANCIE was refined, and two new taxonomies for use on space missions were developed. The development, sharing, and use of error taxonomies, and the refinement of approaches for increased fidelity of qualitative modeling, are offered as a means to help direct useful data collection strategies.

  2. Error resilient image transmission based on virtual SPIHT

    NASA Astrophysics Data System (ADS)

    Liu, Rongke; He, Jie; Zhang, Xiaolin

    2007-02-01

    SPIHT is one of the most efficient image compression algorithms. It has been successfully applied to a wide variety of images, such as medical and remote sensing images. However, it is highly susceptible to channel errors: a single bit error could potentially lead to decoder derailment. In this paper, we integrate new error-resilient tools into the wavelet coding algorithm and present an error-resilient image transmission scheme based on virtual set partitioning in hierarchical trees (SPIHT), EREC, and a self-truncation mechanism. After wavelet decomposition, the virtual spatial-orientation trees in the wavelet domain are individually encoded using virtual SPIHT. Since the self-similarity across subbands is preserved, a high source coding efficiency can be achieved. The scheme is essentially a tree-based coding, so error propagation is limited within each virtual tree. The number of virtual trees may be adjusted according to the channel conditions: when the channel is excellent, we may decrease the number of trees to further improve the compression efficiency; otherwise, we increase the number of trees to guarantee error resilience. EREC is also adopted to enhance the error resilience capability of the compressed bit streams. At the receiving side, a self-truncation mechanism based on the self-constraint of the set partitioning trees is introduced. The decoding of any sub-tree halts if a violation of the self-constraint relationship occurs in the tree, so the bits impacted by error propagation are limited and more likely located in the low bit-layers. In addition, an inter-tree interpolation method is applied, so some errors are compensated. Preliminary experimental results demonstrate that the proposed scheme achieves substantial gains in error resilience.
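    A conceptual sketch of why partitioning the bitstream into independently decoded trees limits error propagation (this is not the SPIHT algorithm itself; the stream length, tree count, and error position are arbitrary):

```python
# Conceptual sketch: when the bitstream is split into independently decoded
# trees, a single bit error only corrupts the remainder of its own tree
# instead of derailing the whole decode.

def corrupted_fraction(stream_len, n_trees, error_pos):
    """Fraction of the stream lost if decoding halts at error_pos within its tree."""
    tree_len = stream_len // n_trees
    tree_idx = error_pos // tree_len
    tree_end = (tree_idx + 1) * tree_len
    return (tree_end - error_pos) / stream_len

stream_len, error_pos = 10_000, 1_234
for n_trees in (1, 4, 16):
    lost = corrupted_fraction(stream_len, n_trees, error_pos)
    print(f"{n_trees:>2} trees -> {lost:.1%} of the stream lost")
```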

  3. Prospective, multidisciplinary recording of perioperative errors in cerebrovascular surgery: is error in the eye of the beholder?

    PubMed

    Michalak, Suzanne M; Rolston, John D; Lawton, Michael T

    2016-06-01

    OBJECT Surgery requires careful coordination of multiple team members, each playing a vital role in mitigating errors. Previous studies have focused on eliciting errors from only the attending surgeon, likely missing events observed by other team members. METHODS Surveys were administered to the attending surgeon, resident surgeon, anesthesiologist, and nursing staff immediately following each of 31 cerebrovascular surgeries; participants were instructed to record any deviation from optimal course (DOC). DOCs were categorized and sorted by reporter and perioperative timing, then correlated with delays and outcome measures. RESULTS Errors were recorded in 93.5% of the 31 cases surveyed. The number of errors recorded per case ranged from 0 to 8, with an average of 3.1 ± 2.1 errors (± SD). Overall, technical errors were most common (24.5%), followed by communication (22.4%), management/judgment (16.0%), and equipment (11.7%). The resident surgeon reported the most errors (52.1%), followed by the circulating nurse (31.9%), the attending surgeon (26.6%), and the anesthesiologist (14.9%). The attending and resident surgeons were most likely to report technical errors (52% and 30.6%, respectively), while anesthesiologists and circulating nurses mostly reported anesthesia errors (36%) and communication errors (50%), respectively. The overlap in reported errors was 20.3%. If this study had used only the surveys completed by the attending surgeon, as in prior studies, 72% of equipment errors, 90% of anesthesia and communication errors, and 100% of nursing errors would have been missed. In addition, it would have been concluded that errors occurred in only 45.2% of cases (rather than 93.5%) and that errors resulting in a delay occurred in 3.2% of cases instead of the 74.2% calculated using data from 4 team members. Compiled results from all team members yielded significant correlations between technical DOCs and prolonged hospital stays and reported and actual delays (p = 0

  4. An investigation of error characteristics and coding performance

    NASA Technical Reports Server (NTRS)

    Ebel, William J.; Ingels, Frank M.

    1992-01-01

    The performance of forward error correcting coding schemes on errors anticipated for the Earth Observation System (EOS) Ku-band downlink is studied. The EOS transmits picture frame data to the ground via the Tracking and Data Relay Satellite System (TDRSS) to a ground-based receiver at White Sands. Due to unintentional RF interference from other systems operating in the Ku band, the noise at the receiver is non-Gaussian, which may result in non-random errors output by the demodulator. That is, the downlink channel cannot be modeled by a simple memoryless Gaussian-noise channel. From previous experience, it is believed that these errors are bursty. The research proceeded by developing a computer-based simulation, called Communication Link Error ANalysis (CLEAN), to model the downlink errors, forward error correcting schemes, and interleavers used with TDRSS. To date, the bulk of CLEAN has been written, documented, debugged, and verified. The procedures for utilizing CLEAN to investigate code performance were established and are discussed.
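    Bursty (non-memoryless) channel errors of the kind described are often illustrated with a two-state Gilbert-Elliott model; the record does not say that CLEAN uses this model, so the sketch below, with arbitrary transition and error probabilities, is only an assumption for illustration:

```python
# Sketch of a two-state Gilbert-Elliott burst-error channel: a "good" state
# with a low bit-error probability and a "bad" state with a high one, so
# errors cluster into bursts rather than occurring independently.
import random

def bursty_errors(n_bits, p_good_to_bad=0.001, p_bad_to_good=0.1,
                  p_err_good=1e-5, p_err_bad=0.3, seed=0):
    rng = random.Random(seed)
    state_bad = False
    errors = []
    for _ in range(n_bits):
        # state transition
        if state_bad and rng.random() < p_bad_to_good:
            state_bad = False
        elif not state_bad and rng.random() < p_good_to_bad:
            state_bad = True
        # bit error draw for the current state
        p_err = p_err_bad if state_bad else p_err_good
        errors.append(rng.random() < p_err)
    return errors

errs = bursty_errors(100_000)
print("overall bit error rate:", sum(errs) / len(errs))
```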

  5. Incorporating measurement error in n = 1 psychological autoregressive modeling.

    PubMed

    Schuurman, Noémi K; Houtveen, Jan H; Hamaker, Ellen L

    2015-01-01

    Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive parameters. We discuss two models that take measurement error into account: An autoregressive model with a white noise term (AR+WN), and an autoregressive moving average (ARMA) model. In a simulation study we compare the parameter recovery performance of these models, and compare this performance for both a Bayesian and frequentist approach. We find that overall, the AR+WN model performs better. Furthermore, we find that for realistic (i.e., small) sample sizes, psychological research would benefit from a Bayesian approach in fitting these models. Finally, we illustrate the effect of disregarding measurement error in an AR(1) model by means of an empirical application on mood data in women. We find that, depending on the person, approximately 30-50% of the total variance was due to measurement error, and that disregarding this measurement error results in a substantial underestimation of the autoregressive parameters.
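    A minimal simulation, with assumed parameter values, of the bias the authors describe: fitting a plain AR(1) to observations contaminated by white measurement noise underestimates the autoregressive parameter:

```python
# Simulate a latent AR(1) process plus white measurement noise, then compare
# the AR(1) coefficient estimated from the latent series with the one
# estimated from the noisy observations (the latter is attenuated).
import random

def simulate(phi=0.6, n=5000, noise_sd=1.0, meas_sd=1.0, seed=1):
    rng = random.Random(seed)
    x, latent, observed = 0.0, [], []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, noise_sd)       # latent AR(1) step
        latent.append(x)
        observed.append(x + rng.gauss(0.0, meas_sd))  # add measurement error
    return latent, observed

def ar1_estimate(series):
    """Lag-1 autocorrelation, i.e. the least-squares AR(1) coefficient."""
    mean = sum(series) / len(series)
    num = sum((a - mean) * (b - mean) for a, b in zip(series[:-1], series[1:]))
    den = sum((a - mean) ** 2 for a in series)
    return num / den

latent, observed = simulate()
print("AR(1) fit on latent series:", round(ar1_estimate(latent), 3))
print("AR(1) fit on noisy series: ", round(ar1_estimate(observed), 3))
```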

  6. Slope Error Measurement Tool for Solar Parabolic Trough Collectors: Preprint

    SciTech Connect

    Stynes, J. K.; Ihas, B.

    2012-04-01

    The National Renewable Energy Laboratory (NREL) has developed an optical measurement tool for parabolic solar collectors that measures the combined errors due to absorber misalignment and reflector slope error. The combined absorber alignment and reflector slope errors are measured using a digital camera to photograph the reflected image of the absorber in the collector. Previous work using the image of the reflection of the absorber finds the reflector slope errors from the reflection of the absorber and an independent measurement of the absorber location. The accuracy of the reflector slope error measurement is thus dependent on the accuracy of the absorber location measurement. By measuring the combined reflector-absorber errors, the uncertainty in the absorber location measurement is eliminated. The related performance merit, the intercept factor, depends on the combined effects of the absorber alignment and reflector slope errors. Measuring the combined effect provides a simpler measurement and a more accurate input to the intercept factor estimate. The minimal equipment and setup required for this measurement technique make it ideal for field measurements.

  7. Error Rate Comparison during Polymerase Chain Reaction by DNA Polymerase.

    PubMed

    McInerney, Peter; Adams, Paul; Hadi, Masood Z

    2014-01-01

    As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for 6 DNA polymerases commonly used in PCR applications, including 3 polymerases typically used for cloning applications requiring high fidelity. Error rate measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. Among the six enzymes included in the study (Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase), we find the lowest error rates with Pfu, Phusion, and Pwo polymerases. Error rates are comparable for these 3 enzymes and are >10x lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the 3 high fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition. PMID:25197572
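    Polymerase error rates of this kind are commonly normalized per base per template doubling; the sketch below shows that arithmetic with hypothetical numbers, not values from the study:

```python
# Back-of-the-envelope sketch of expressing a PCR error rate as errors per
# base per template doubling. All input numbers are hypothetical.
import math

def error_rate(mutations_found, bases_sequenced, fold_amplification):
    doublings = math.log2(fold_amplification)   # number of template doublings
    return mutations_found / (bases_sequenced * doublings)

# e.g. 25 mutations found in 1.2 Mb of sequenced clones after ~1000-fold amplification
print(f"{error_rate(25, 1.2e6, 1000):.2e} errors per base per doubling")
```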

  8. Modelling non-Gaussianity of background and observational errors by the Maximum Entropy method

    NASA Astrophysics Data System (ADS)

    Pires, Carlos; Talagrand, Olivier; Bocquet, Marc

    2010-05-01

    The Best Linear Unbiased Estimator (BLUE) has been widely used in atmospheric-oceanic data assimilation. However, when data errors have non-Gaussian pdfs, the BLUE differs from the absolute Minimum Variance Unbiased Estimator (MVUE), which minimizes the mean square analysis error. The non-Gaussianity of errors can be due to the statistical skewness and positiveness of some physical observables (e.g. moisture, chemical species) or due to the nonlinearity of the data assimilation models and observation operators acting on Gaussian errors. Non-Gaussianity of assimilated data errors can be justified from a priori hypotheses or inferred from statistical diagnostics of innovations (observation minus background). Following this rationale, we compute measures of innovation non-Gaussianity, namely its skewness and kurtosis, relating them to: a) the non-Gaussianity of the individual errors themselves, b) the correlation between nonlinear functions of errors, and c) the heteroscedasticity of errors within diagnostic samples. Those relationships impose bounds on the skewness and kurtosis of errors which are critically dependent on the error variances, thus leading to a necessary tuning of error variances in order to accomplish consistency with innovations. We evaluate the sub-optimality of the BLUE as compared to the MVUE, in terms of excess error variance, under the presence of non-Gaussian errors. The error pdfs are obtained by the maximum entropy method constrained by error moments up to fourth order, from which the Bayesian probability density function and the MVUE are computed. The impact is higher for skewed extreme innovations and grows on average with the skewness of data errors, especially if those skewnesses have the same sign. Application has been performed to the quality-accepted ECMWF innovations of brightness temperatures of a set of High Resolution Infrared Sounder channels. In this context, the MVUE has led in some extreme cases to a potential reduction of 20-60% error
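    The innovation diagnostics mentioned above reduce to computing the sample skewness and excess kurtosis of an innovation series; a minimal sketch with synthetic, deliberately skewed innovations (not real ECMWF data):

```python
# Compute sample skewness and excess kurtosis of an (observation minus
# background) innovation series; the innovation values here are synthetic.
import random

def skewness_kurtosis(values):
    n = len(values)
    mean = sum(values) / n
    m2 = sum((v - mean) ** 2 for v in values) / n
    m3 = sum((v - mean) ** 3 for v in values) / n
    m4 = sum((v - mean) ** 4 for v in values) / n
    skew = m3 / m2 ** 1.5
    excess_kurt = m4 / m2 ** 2 - 3.0   # zero for a Gaussian sample (in the limit)
    return skew, excess_kurt

rng = random.Random(42)
innovations = [rng.gauss(0, 1) ** 2 - 1 for _ in range(10_000)]  # deliberately skewed
print("skewness, excess kurtosis:", skewness_kurtosis(innovations))
```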

  9. Sensitivity to Error Fields in NSTX High Beta Plasmas

    SciTech Connect

    Park, Jong-Kyu; Menard, Jonathan E.; Gerhardt, Stefan P.; Buttery, Richard J.; Sabbagh, Steve A.; Bell, Steve E.; LeBlanc, Benoit P.

    2011-11-07

    It was found that the error field threshold decreases at high β in NSTX, although the density correlation in conventional threshold scaling implies that the threshold should increase, since the higher-β plasmas in our study have higher plasma density. This greater sensitivity to error fields in higher-β plasmas is due to error field amplification by the plasma. When the effect of amplification is included with ideal plasma response calculations, the conventional density correlation can be restored and the threshold scaling becomes more consistent with low-β plasmas. However, it was also found that the threshold can change significantly depending on plasma rotation. When plasma rotation was reduced by non-resonant magnetic braking, a further increase in sensitivity to error fields was observed.

  10. Systematic Parameter Errors in Inspiraling Neutron Star Binaries

    NASA Astrophysics Data System (ADS)

    Favata, Marc

    2014-03-01

    The coalescence of two neutron stars is an important gravitational wave source for LIGO and other detectors. Numerous studies have considered the precision with which binary parameters (masses, spins, Love numbers) can be measured. Here I consider the accuracy with which these parameters can be determined in the presence of systematic errors due to waveform approximations. These approximations include truncation of the post-Newtonian (PN) series and neglect of neutron star (NS) spin, tidal deformation, or orbital eccentricity. All of these effects can yield systematic errors that exceed statistical errors for plausible parameter values. In particular, neglecting spin, eccentricity, or high-order PN terms causes a significant bias in the NS Love number. Tidal effects will not be measurable with PN inspiral waveforms if these systematic errors are not controlled.

  11. A study on refractive errors among school children in Kolkata.

    PubMed

    Das, Angshuman; Dutta, Himadri; Bhaduri, Gautam; De Sarkar, Ajay; Sarkar, Krishnendu; Bannerjee, Manas

    2007-04-01

    Childhood visual impairment due to refractive errors is a significant problem in school children and has a considerable impact on public health. To assess the magnitude of the problem, the present study was undertaken among school children aged 5 to 10 years in Kolkata. Detailed ophthalmological examination was carried out in the schools as well as in the Regional Institute of Ophthalmology, Kolkata. Among 2317 students examined, 582 (25.11%) were suffering from refractive errors, myopia being the commonest (n = 325; 14.02%). Astigmatism affected 91 children (3.93%). The prevalence of refractive errors increases with age, but the trend is not statistically significant (p > 0.05). There is also no significant difference in refractive errors between boys and girls. PMID:17822183

  12. Addressee Errors in ATC Communications: The Call Sign Problem

    NASA Technical Reports Server (NTRS)

    Monan, W. P.

    1983-01-01

    Communication errors involving aircraft call signs were portrayed in reports of 462 hazardous incidents voluntarily submitted to the ASRS during an approximately four-year period. These errors resulted in confusion, disorder, and uncoordinated traffic conditions and produced the following types of operational anomalies: altitude deviations, wrong-way headings, aborted takeoffs, go-arounds, runway incursions, missed crossing altitude restrictions, descents toward high terrain, and traffic conflicts in flight and on the ground. Analysis of the report set resulted in identification of five categories of errors involving call signs: (1) faulty radio usage techniques, (2) call sign loss or smearing due to frequency congestion, (3) confusion resulting from similar sounding call signs, (4) airmen misses of call signs leading to failures to acknowledge or read back, and (5) controller failures regarding confirmation of acknowledgements or readbacks. These error categories are described in detail, and several associated hazard-mitigating measures that might be taken are considered.

  13. The intrinsic error thresholds of the surface code with correlated errors

    NASA Astrophysics Data System (ADS)

    Jouzdani, Pejman; Mucciolo, Eduardo; Novais, Eduardo

    2014-03-01

    We study how the resilience of the surface code to decoherence is affected by the presence of a bosonic bath. The surface code experiences an effective dynamics due to the coupling to a bosonic bath that correlates the qubits of the code. The range of the effective induced qubit-qubit interaction depends on parameters related to the bath correlation functions. We show that different ranges set different intrinsic bounds on the fidelity of the code. These bounds appear to be independent of the stochastic error probabilities frequently studied in the literature and to be merely a consequence of the dynamics induced by the bath. We introduce a new definition of stabilizers based on logical operators that allows us to efficiently implement a Metropolis algorithm to determine the intrinsic upper bounds on the error threshold. Supported by the ONR and the NSF grant CCF 1117241.

  14. Dispersion, static correlation, and delocalisation errors in density functional theory: an electrostatic theorem perspective.

    PubMed

    Dwyer, Austin D; Tozer, David J

    2011-10-28

    Dispersion, static correlation, and delocalisation errors in density functional theory are considered from the unconventional perspective of the force on a nucleus in a stretched diatomic molecule. The electrostatic theorem of Feynman is used to relate errors in the forces to errors in the electron density distortions, which in turn are related to erroneous terms in the Kohn-Sham equations. For H(2), the exact dispersion force arises from a subtle density distortion; the static correlation error leads to an overestimated force due to an exaggerated distortion. For H(2)(+), the exact force arises from a delicate balance between attractive and repulsive components; the delocalisation error leads to an underestimated force due to an underestimated distortion. The net force in H(2)(+) can become repulsive, giving the characteristic barrier in the potential energy curve. Increasing the fraction of long-range exact orbital exchange increases the distortion, reducing delocalisation error but increasing static correlation error.

  15. Classifying and Predicting Errors of Inpatient Medication Reconciliation

    PubMed Central

    Pippins, Jennifer R.; Gandhi, Tejal K.; Hamann, Claus; Ndumele, Chima D.; Labonville, Stephanie A.; Diedrichsen, Ellen K.; Carty, Marcy G.; Karson, Andrew S.; Bhan, Ishir; Coley, Christopher M.; Liang, Catherine L.; Turchin, Alexander; McCarthy, Patricia C.

    2008-01-01

    Background Failure to reconcile medications across transitions in care is an important source of potential harm to patients. Little is known about the predictors of unintentional medication discrepancies and how, when, and where they occur. Objective To determine the reasons, timing, and predictors of potentially harmful medication discrepancies. Design Prospective observational study. Patients Admitted general medical patients. Measurements Study pharmacists took gold-standard medication histories and compared them with medical teams’ medication histories, admission and discharge orders. Blinded teams of physicians adjudicated all unexplained discrepancies using a modification of an existing typology. The main outcome was the number of potentially harmful unintentional medication discrepancies per patient (potential adverse drug events or PADEs). Results Among 180 patients, 2066 medication discrepancies were identified, and 257 (12%) were unintentional and had potential for harm (1.4 per patient). Of these, 186 (72%) were due to errors taking the preadmission medication history, while 68 (26%) were due to errors reconciling the medication history with discharge orders. Most PADEs occurred at discharge (75%). In multivariable analyses, low patient understanding of preadmission medications, number of medication changes from preadmission to discharge, and medication history taken by an intern were associated with PADEs. Conclusions Unintentional medication discrepancies are common and more often due to errors taking an accurate medication history than errors reconciling this history with patient orders. Focusing on accurate medication histories, on potential medication errors at discharge, and on identifying high-risk patients for more intensive interventions may improve medication safety during and after hospitalization. PMID:18563493

  16. Error analysis for retrieval of Venus' IR surface emissivity from VIRTIS/VEX measurements

    NASA Astrophysics Data System (ADS)

    Kappel, David; Haus, Rainer; Arnold, Gabriele

    2015-08-01

    Venus' surface emissivity data in the infrared can serve to explore the planet's geology. The only global data with high spectral, spatial, and temporal resolution and coverage at present is supplied by nightside emission measurements acquired by the Visible and InfraRed Thermal Imaging Spectrometer VIRTIS-M-IR (1.0 - 5.1 μm) aboard ESA's Venus Express. A radiative transfer simulation and a retrieval algorithm can be used to determine surface emissivity in the nightside spectral transparency windows located at 1.02, 1.10, and 1.18 μm. To obtain satisfactory fits to measured spectra, the retrieval pipeline also determines auxiliary parameters describing cloud properties from a certain spectral range. But spectral information content is limited, and emissivity is difficult to retrieve due to strong interferences from other parameters. Based on a selection of representative synthetic VIRTIS-M-IR spectra in the range 1.0 - 2.3 μm, this paper investigates emissivity retrieval errors that can be caused by interferences of atmospheric and surface parameters, by measurement noise, and by a priori data, and which retrieval pipeline leads to minimal errors. Retrieval of emissivity from a single spectrum is shown to fail due to extremely large errors, although the fits to the reference spectra are very good. Neglecting geologic activity, it is suggested to apply a multi-spectrum retrieval technique to retrieve emissivity relative to an initial value as a parameter that is common to several measured spectra that cover the same surface bin. Retrieved emissivity maps of targets with limited extension (a few thousand km) are then additively renormalized to remove spatially large scale deviations from the true emissivity map that are due to spatially slowly varying interfering parameters. Corresponding multi-spectrum retrieval errors are estimated by a statistical scaling of the single-spectrum retrieval errors and are listed for 25 measurement repetitions. For the best of the

  17. GP-B error modeling and analysis

    NASA Technical Reports Server (NTRS)

    Hung, J. C.

    1982-01-01

    Individual source errors and their effects on the accuracy of the Gravity Probe B (GP-B) experiment were investigated. Emphasis was placed on: (1) the refinement of source error identification and classifications of error according to their physical nature; (2) error analysis for the GP-B data processing; and (3) measurement geometry for the experiment.

  18. Error estimation for ORION baseline vector determination

    NASA Technical Reports Server (NTRS)

    Wu, S. C.

    1980-01-01

    Effects of error sources on Operational Radio Interferometry Observing Network (ORION) baseline vector determination are studied. Partial derivatives of delay observations with respect to each error source are formulated. Covariance analysis is performed to estimate the contribution of each error source to baseline vector error. System design parameters such as antenna sizes, system temperatures and provision for dual frequency operation are discussed.
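    A simplified sketch of the covariance-analysis idea: each error source's sigma is mapped through a sensitivity (a partial derivative of the estimated baseline with respect to that source) and the contributions are combined; the sources, sensitivities, and sigmas below are illustrative placeholders, not ORION values:

```python
# Map each error source's sigma through its sensitivity to the baseline
# estimate, then root-sum-square the contributions to get the total error.
# All numbers are illustrative placeholders.
import math

error_sources = {
    # source: (sensitivity [cm per unit of source], sigma [unit of source])
    "troposphere delay": (0.8, 3.0),
    "station clock":     (0.5, 2.0),
    "ionosphere delay":  (0.3, 4.0),
}

contributions = {name: abs(s) * sig for name, (s, sig) in error_sources.items()}
total = math.sqrt(sum(c ** 2 for c in contributions.values()))

for name, c in contributions.items():
    print(f"{name:20s} -> {c:.2f} cm")
print(f"{'root-sum-square':20s} -> {total:.2f} cm")
```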

  19. A simple double error correcting BCH codes

    NASA Astrophysics Data System (ADS)

    Sinha, V.

    1983-07-01

    With the availability of various cost-effective digital hardware components, error correcting codes can be realized in hardware in a simpler fashion than was hitherto possible. Instead of computing error locations in BCH decoding with the Berlekamp algorithm, a syndrome-to-error-location mapping using an EPROM for a double-error-correcting BCH code is described. The processing is parallel instead of serial. Possible applications are given.
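    The table-lookup idea can be made concrete with the classic double-error-correcting (15,7) binary BCH code: the syndrome of every correctable (weight ≤ 2) error pattern is precomputed into a table, playing the role the EPROM plays in hardware, so decoding is a lookup rather than a run of the Berlekamp algorithm. This is a sketch of the general technique, not the specific hardware design in the record:

```python
# Syndrome-lookup decoding of the double-error-correcting (15,7) BCH code.
from itertools import combinations

def gf16_mul(a, b):
    """Multiply in GF(2^4) with primitive polynomial x^4 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x10:
            a ^= 0x13
        b >>= 1
    return r

def gf16_pow(a, n):
    r = 1
    for _ in range(n):
        r = gf16_mul(r, a)
    return r

N, ALPHA = 15, 2  # code length and primitive element

def syndrome(error_positions):
    """(S1, S3) syndrome pair of an error pattern given by its bit positions."""
    s1 = s3 = 0
    for i in error_positions:
        s1 ^= gf16_pow(ALPHA, i)
        s3 ^= gf16_pow(ALPHA, 3 * i)
    return (s1, s3)

# Precompute the lookup table (the EPROM's job): syndrome -> error positions.
table = {syndrome(()): frozenset()}
for i in range(N):
    table[syndrome((i,))] = frozenset({i})
for i, j in combinations(range(N), 2):
    table[syndrome((i, j))] = frozenset({i, j})

def correct(received):
    """Correct up to two bit errors in a received 15-bit word by table lookup."""
    s = syndrome(tuple(i for i, bit in enumerate(received) if bit))
    flips = table.get(s)
    if flips is None:
        raise ValueError("more than two errors: not correctable")
    return [bit ^ (i in flips) for i, bit in enumerate(received)]

# Demo: the all-zero word is a valid codeword; flip two bits and recover it.
received = [0] * N
received[3] ^= 1
received[11] ^= 1
print(correct(received))  # -> fifteen zeros
```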

  20. Error Analysis in the Introductory Physics Laboratory.

    ERIC Educational Resources Information Center

    Deacon, Christopher G.

    1992-01-01

    Describes two simple methods of error analysis: (1) combining errors in the measured quantities; and (2) calculating the error or uncertainty in the slope of a straight-line graph. Discusses significance of the error in the comparison of experimental results with some known value. (MDH)
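    A worked sketch of the two methods mentioned, with made-up data: combining independent measurement errors in quadrature, and estimating the uncertainty in the slope of a least-squares straight-line fit:

```python
# (1) Combine independent uncertainties in quadrature, e.g. for z = x + y.
# (2) Fit a straight line and report the slope with its standard error.
# The data points and uncertainties are illustrative only.
import math

# (1) quadrature combination
dx, dy = 0.3, 0.4
dz = math.sqrt(dx ** 2 + dy ** 2)
print("combined uncertainty:", dz)          # -> 0.5

# (2) slope and its standard error for y vs x
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
sxx = sum((x - xbar) ** 2 for x in xs)
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
intercept = ybar - slope * xbar
residuals = [y - (slope * x + intercept) for x, y in zip(xs, ys)]
slope_err = math.sqrt(sum(r ** 2 for r in residuals) / (n - 2) / sxx)
print(f"slope = {slope:.3f} +/- {slope_err:.3f}")
```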