Science.gov

Sample records for acceptable error range

  1. Accepting error to make less error.

    PubMed

    Einhorn, H J

    1986-01-01

    In this article I argue that the clinical and statistical approaches rest on different assumptions about the nature of random error and the appropriate level of accuracy to be expected in prediction. To examine this, a case is made for each approach. The clinical approach is characterized as being deterministic, causal, and less concerned with prediction than with diagnosis and treatment. The statistical approach accepts error as inevitable and in so doing makes less error in prediction. This is illustrated using examples from probability learning and equal weighting in linear models. Thereafter, a decision analysis of the two approaches is proposed. Of particular importance are the errors that characterize each approach: myths, magic, and illusions of control in the clinical; lost opportunities and illusions of the lack of control in the statistical. Each approach represents a gamble with corresponding risks and benefits.

  2. Moment expansion for ionospheric range error

    NASA Technical Reports Server (NTRS)

    Mallinckrodt, A.; Reich, R.; Parker, H.; Berbert, J.

    1972-01-01

    On a plane earth, the ionospheric or tropospheric range error depends only on the total refractivity content or zeroth moment of the refracting layer and the elevation angle. On a spherical earth, however, the dependence is more complex, so for more accurate results it has been necessary to resort to complex ray-tracing calculations. A simple, high-accuracy alternative to the ray-tracing calculation is presented. By appropriate expansion of the angular dependence in the ray-tracing integral in a power series in height, an expression is obtained for the range error in terms of a simple function of elevation angle, E, at the expansion height and of the mth moment of the refractivity (N) distribution about the expansion height. The rapidity of convergence is heavily dependent on the choice of expansion height. For expansion heights in the neighborhood of the centroid of the layer (300-490 km), the expansion to m = 2 (three terms) gives results accurate to about 0.4% at E = 10 deg. As an analytic tool, the expansion affords some insight on the influence of layer shape on range errors in special problems.
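
    The plane-earth result quoted above (range error equal to the integrated refractivity divided by sin E) and the layer moments that drive the expansion are both one-line integrals, so the idea is easy to sketch numerically. A minimal illustration in Python, assuming a hypothetical Chapman-like layer; the angular weighting functions of the full spherical-earth expansion are not given in this abstract and are omitted here:

      import numpy as np

      def plane_earth_range_error(h_km, N, elev_deg):
          """Plane-earth range error (m): 1e-6 * integral(N dh) / sin(E)."""
          zeroth_moment = np.trapz(N, h_km * 1e3)   # N-units times meters
          return 1e-6 * zeroth_moment / np.sin(np.radians(elev_deg))

      def moment(h_km, N, m, h0_km):
          """m-th moment of the refractivity distribution about height h0 (km)."""
          return np.trapz(N * (h_km - h0_km) ** m, h_km)

      # Hypothetical Chapman-like ionospheric layer peaked near 300 km.
      h = np.linspace(100.0, 1000.0, 2000)              # height grid, km
      z = (h - 300.0) / 75.0
      N = 40.0 * np.exp(0.5 * (1.0 - z - np.exp(-z)))   # refractivity, N-units

      centroid_km = moment(h, N, 1, 0.0) / moment(h, N, 0, 0.0)
      print(f"layer centroid (a good expansion height): {centroid_km:.0f} km")
      print(f"range error at E = 10 deg: {plane_earth_range_error(h, N, 10.0):.2f} m")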

  3. Atmospheric refraction errors in laser ranging systems

    NASA Technical Reports Server (NTRS)

    Gardner, C. S.; Rowlett, J. R.

    1976-01-01

    The effects of horizontal refractivity gradients on the accuracy of laser ranging systems were investigated by ray tracing through three dimensional refractivity profiles. The profiles were generated by performing a multiple regression on measurements from seven or eight radiosondes, using a refractivity model which provided for both linear and quadratic variations in the horizontal direction. The range correction due to horizontal gradients was found to be an approximately sinusoidal function of azimuth having a minimum near 0 deg azimuth and a maximum near 180 deg azimuth. The peak to peak variation was approximately 5 centimeters at 10 deg elevation and decreased to less than 1 millimeter at 80 deg elevation.

  4. Statistics of the residual refraction errors in laser ranging data

    NASA Technical Reports Server (NTRS)

    Gardner, C. S.

    1977-01-01

    A theoretical model for the range error covariance was derived by assuming that the residual refraction errors are due entirely to errors in the meteorological data which are used to calculate the atmospheric correction. The properties of the covariance function are illustrated by evaluating the theoretical model for the special case of a dense network of weather stations uniformly distributed within a circle.

  5. Entropy-Based TOA Estimation and SVM-Based Ranging Error Mitigation in UWB Ranging Systems

    PubMed Central

    Yin, Zhendong; Cui, Kai; Wu, Zhilu; Yin, Liang

    2015-01-01

    The major challenges for Ultra-wide Band (UWB) indoor ranging systems are the dense multipath and non-line-of-sight (NLOS) problems of the indoor environment. To precisely estimate the time of arrival (TOA) of the first path (FP) in such a poor environment, a novel approach of entropy-based TOA estimation and support vector machine (SVM) regression-based ranging error mitigation is proposed in this paper. The proposed method can estimate the TOA precisely by measuring the randomness of the received signals and mitigate the ranging error without the recognition of the channel conditions. The entropy is used to measure the randomness of the received signals and the FP can be determined by the decision of the sample which is followed by a great entropy decrease. The SVM regression is employed to perform the ranging-error mitigation by the modeling of the regressor between the characteristics of received signals and the ranging error. The presented numerical simulation results show that the proposed approach achieves significant performance improvements in the CM1 to CM4 channels of the IEEE 802.15.4a standard, as compared to conventional approaches. PMID:26007726
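
    The first-path decision rule described here (flag the sample that is followed by a large entropy decrease) can be sketched with a sliding-window Shannon entropy of the received samples. This is only a schematic reading of the abstract: the window length, bin count, drop threshold, and toy signal below are assumptions, not the authors' tuned algorithm.

      import numpy as np

      def window_entropy(x, bins=16):
          """Shannon entropy (bits) of the amplitude histogram of one window."""
          counts, _ = np.histogram(x, bins=bins)
          p = counts[counts > 0] / counts.sum()
          return -np.sum(p * np.log2(p))

      def entropy_toa(samples, win=64, drop=0.8):
          """Approximate first-path onset: end of the first window whose
          entropy falls well below the noise-only baseline."""
          ent = np.array([window_entropy(samples[i:i + win])
                          for i in range(len(samples) - win)])
          baseline = np.median(ent[:win])   # assumes an early noise-only stretch
          hits = np.flatnonzero(ent < baseline - drop)
          return int(hits[0]) + win if hits.size else None

      rng = np.random.default_rng(0)
      noise = rng.normal(size=400)                  # clutter before the first path
      pulse = 50.0 * np.hanning(100)                # strong first-path arrival
      received = np.concatenate([noise, pulse + rng.normal(size=100),
                                 rng.normal(size=100)])
      print("estimated onset:", entropy_toa(received), "(true onset near 400)")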

  7. Close-range radar rainfall estimation and error analysis

    NASA Astrophysics Data System (ADS)

    van de Beek, C. Z.; Leijnse, H.; Hazenberg, P.; Uijlenhoet, R.

    2016-08-01

    measurements, with a difference of 5-8 %. This shows the potential of radar as a tool for rainfall estimation, especially at close ranges, but also underlines the importance of applying radar correction methods as individual errors can have a large detrimental impact on the QPE performance of the radar.

  8. Customization of user interfaces to reduce errors and enhance user acceptance.

    PubMed

    Burkolter, Dina; Weyers, Benjamin; Kluge, Annette; Luther, Wolfram

    2014-03-01

    Customization is assumed to reduce error and increase user acceptance in the human-machine relation. Reconfiguration gives the operator the option to customize a user interface according to his or her own preferences. An experimental study with 72 computer science students using a simulated process control task was conducted. The reconfiguration group (RG) interactively reconfigured their user interfaces and used the reconfigured user interface in the subsequent test whereas the control group (CG) used a default user interface. Results showed significantly lower error rates and higher acceptance of the RG compared to the CG while there were no significant differences between the groups regarding situation awareness and mental workload. Reconfiguration seems to be promising and therefore warrants further exploration.

  9. Influence of satellite geometry, range, clock, and altimeter errors on two-satellite GPS navigation

    NASA Astrophysics Data System (ADS)

    Bridges, Philip D.

    Flight tests were conducted at Yuma Proving Grounds, Yuma, AZ, to determine the performance of a navigation system capable of using only two GPS satellites. The effect of satellite geometry, range error, and altimeter error on the horizontal position solution were analyzed for time and altitude aided GPS navigation (two satellites + altimeter + clock). The east and north position errors were expressed as a function of satellite range error, altimeter error, and east and north Dilution of Precision. The equations for the Dilution of Precision were derived as a function of satellite azimuth and elevation angles for the two satellite case. The expressions for the position error were then used to analyze the flight test data. The results showed the correlation between satellite geometry and position error, the increase in range error due to clock drift, and the impact of range and altimeter error on the east and north position error.

  10. Atmospheric refraction effects on baseline error in satellite laser ranging systems

    NASA Technical Reports Server (NTRS)

    Im, K. E.; Gardner, C. S.

    1982-01-01

    Because of the mathematical complexities involved in exact analyses of baseline errors, it is not easy to isolate atmospheric refraction effects; however, by making certain simplifying assumptions about the ranging system geometry, relatively simple expressions can be derived which relate the baseline errors directly to the refraction errors. The results indicate that even in the absence of other errors, the baseline error for intercontinental baselines can be more than an order of magnitude larger than the refraction error.

  11. Decreasing range resolution of a SAR image to permit correction of motion measurement errors beyond the SAR range resolution

    DOEpatents

    Doerry, Armin W.; Heard, Freddie E.; Cordaro, J. Thomas

    2010-07-20

    Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.

  12. A correction method for range walk error in photon counting 3D imaging LIDAR

    NASA Astrophysics Data System (ADS)

    He, Weiji; Sima, Boyu; Chen, Yunfei; Dai, Huidong; Chen, Qian; Gu, Guohua

    2013-11-01

    A correction method for the range walk error is presented in this paper, based on a priori modeling and suitable for GmAPD photon counting three-dimensional (3D) imaging LIDAR. The range walk error is mainly introduced by the fluctuation in the number of photons in the laser echo pulse. In this paper, the a priori model of the range walk error was established, and the functional relationship between the range walk error and the laser pulse response rate was determined using numerical fitting. With this function, the range walk error of the original 3D range image was predicted, and a corresponding compensated image was obtained to correct the original 3D range image. The experimental results showed that the correction method could reduce the range walk error effectively, and it is particularly suitable for cases where there are significant differences in material properties or reflection characteristics within the scene.
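
    The correction step lends itself to a small numerical sketch: calibrate the walk error against the laser pulse response rate on a reference target, fit a smooth function, and subtract the predicted walk from later measurements. The cubic-polynomial form and the calibration numbers below are hypothetical stand-ins for the paper's fitted a priori model.

      import numpy as np

      # Calibration on a reference target at known range (hypothetical data):
      # response rate = fraction of laser shots that trigger the GmAPD.
      rate = np.array([0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9])
      walk_cm = np.array([9.5, 8.1, 6.2, 4.9, 3.8, 3.0, 2.3, 1.7, 1.2, 0.8])

      walk_model = np.poly1d(np.polyfit(rate, walk_cm, deg=3))

      def correct_range(raw_range_cm, response_rate):
          """Subtract the predicted range walk for the observed response rate."""
          return raw_range_cm - walk_model(response_rate)

      # A new measurement: raw 500 m return observed at a 25% response rate.
      print(f"corrected range: {correct_range(50_000.0, 0.25):.1f} cm")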

  13. Close-range radar rainfall estimation and error analysis

    NASA Astrophysics Data System (ADS)

    van de Beek, C. Z.; Leijnse, H.; Hazenberg, P.; Uijlenhoet, R.

    2012-04-01

    It is well-known that quantitative precipitation estimation (QPE) is affected by many sources of error. The most important of these are 1) radar calibration, 2) wet radome attenuation, 3) rain attenuation, 4) the vertical profile of reflectivity, 5) variations in drop size distribution, and 6) sampling effects. The study presented here is an attempt to separate and quantify these sources of error. For this purpose, QPE is performed very close to the radar (~1-2 km) so that 3), 4), and 6) play only a minor role. Error source 5) can be corrected for because of the availability of two disdrometers (instruments that measure the drop size distribution). A 3-day rainfall event (25-27 August 2010) that produced more than 50 mm in De Bilt, The Netherlands, is analyzed. Radar, rain gauge, and disdrometer data from De Bilt are used for this. It is clear from the analyses that without any corrections, the radar severely underestimates the total rain amount (only 25 mm). To investigate the effect of wet radome attenuation, stable returns from buildings close to the radar are analyzed. It is shown that this may have caused an underestimation of up to ~4 dB. The calibration of the radar is checked by looking at the received power from the sun. This turns out to cause another 1 dB of underestimation. The effect of variability in drop size distributions is shown to cause further underestimation. Correcting for all of these effects yields a good match between radar QPE and gauge measurements.

  14. Correction of motion measurement errors beyond the range resolution of a synthetic aperture radar

    DOEpatents

    Doerry, Armin W.; Heard, Freddie E.; Cordaro, J. Thomas

    2008-06-24

    Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.

  15. Error analysis for a spaceborne laser ranging system

    NASA Technical Reports Server (NTRS)

    Pavlis, E. C.

    1979-01-01

    The dependence (or independence) of baseline accuracies, obtained from a typical mission of a spaceborne ranging system, on several factors is investigated. The emphasis is placed on a priori station information, but factors such as the elevation cut-off angle, the geometry of the network, the mean orbital height, and to a limited extent geopotential modeling are also examined. The results are obtained through simulations, but some theoretical justification is also given. Guidelines for freeing the results from these dependencies are suggested for most of the factors.

  16. Example Procedures for Developing Acceptance-Range Criteria for BESTEST-EX

    SciTech Connect

    Judkoff, R.; Polly, B.; Bianchi, M.; Neymark, J.

    2010-08-01

    This document provides an example procedure for establishing acceptance-range criteria to assess results from software undergoing BESTEST-EX. This example method for BESTEST-EX is a modified version of the method described in HERS BESTEST.

  17. 76 FR 37793 - Viking Range Corporation, Provisional Acceptance of a Settlement Agreement and Order

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-28

    ... COMMISSION Viking Range Corporation, Provisional Acceptance of a Settlement Agreement and Order AGENCY... Agreement with Viking Range Corporation, containing a civil penalty of $450,000.00. DATES: Any interested... 1. In accordance with 16 CFR 1118.20, Viking Range Corporation (``Viking'') and the staff...

  18. Improved estimates of the range of errors on photomasks using measured values of skewness and kurtosis

    NASA Astrophysics Data System (ADS)

    Hamaker, Henry Chris

    1995-12-01

    Statistical process control (SPC) techniques often use six times the standard deviation sigma to estimate the range of errors within a process. Two assumptions are inherent in this choice of metric for the range: (1) the normal distribution adequately describes the errors, and (2) the fraction of errors falling within plus or minus 3 sigma, about 99.73%, is sufficiently large that we may consider the fraction occurring outside this range to be negligible. In state-of-the-art photomasks, however, the assumption of normality frequently breaks down, and consequently plus or minus 3 sigma is not a good estimate of the range of errors. In this study, we show that improved estimates for the effective maximum error Em, which is defined as the value for which 99.73% of all errors fall within plus or minus Em of the mean mu, may be obtained by quantifying the deviation from normality of the error distributions using the skewness and kurtosis of the error sampling. Data are presented indicating that in laser reticle-writing tools, Em less than or equal to 3 sigma. We also extend this technique for estimating the range of errors to specifications that are usually described by mu plus 3 sigma. The implications for SPC are examined.
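
    One concrete way to fold measured skewness and kurtosis into an estimate of the 99.73% range is a Cornish-Fisher quantile expansion, sketched below on a deliberately skewed toy distribution. This is an illustration of the general idea, not the paper's own estimator, and the expansion is only trustworthy for moderate departures from normality.

      import numpy as np
      from scipy import stats

      def cornish_fisher(z, skew, exkurt):
          """Adjust a standard-normal quantile z for skewness and excess kurtosis."""
          return (z + (z**2 - 1) * skew / 6
                    + (z**3 - 3*z) * exkurt / 24
                    - (2*z**3 - 5*z) * skew**2 / 36)

      def effective_max_error(errors, coverage=0.9973):
          """Em such that ~99.73% of errors fall within mu +/- Em."""
          mu, sigma = np.mean(errors), np.std(errors, ddof=1)
          s, k = stats.skew(errors), stats.kurtosis(errors)  # k = excess kurtosis
          z = stats.norm.ppf(0.5 + coverage / 2)             # ~3.0
          lo = mu + sigma * cornish_fisher(-z, s, k)
          hi = mu + sigma * cornish_fisher(+z, s, k)
          return max(mu - lo, hi - mu)

      rng = np.random.default_rng(1)
      errors = rng.gamma(shape=4.0, scale=1.0, size=5000)    # skewed toy errors
      print(f"3*sigma = {3 * np.std(errors, ddof=1):.2f}")
      print(f"Em      = {effective_max_error(errors):.2f}")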

  19. Range walk error correction using prior modeling in photon counting 3D imaging lidar

    NASA Astrophysics Data System (ADS)

    He, Weiji; Chen, Yunfei; Miao, Zhuang; Chen, Qian; Gu, Guohua; Dai, Huidong

    2013-09-01

    A real-time correction method for range walk error in photon counting 3D imaging lidar is proposed in this paper. We establish the photon detection model and pulse output delay model for the GmAPD, which indicate that range walk error in photon counting 3D imaging lidar is mainly affected by the number of photons in the laser echo pulse. A measurable variable, the laser pulse response rate, is defined as a substitute for the number of photons in the laser echo pulse, and the expression of the range walk error with respect to the laser pulse response rate is obtained using a priori calibration. By recording the photon arrival time distribution, the measurement error for unknown targets is predicted using the established range walk error function, and a range-walk-compensated image is obtained. Thus, real-time correction of the measurement error in photon counting 3D imaging lidar is implemented. The experimental results show that the range walk error caused by differences in the reflected energy of the target can be effectively avoided without increasing the complexity of the photon counting 3D imaging lidar system.

  20. Modeling methodology for MLS range navigation system errors using flight test data

    NASA Technical Reports Server (NTRS)

    Karmali, M. S.; Phatak, A. V.

    1982-01-01

    Flight test data was used to develop a methodology for modeling MLS range navigation system errors. The data used corresponded to the constant velocity and glideslope approach segment of a helicopter landing trajectory. The MLS range measurement was assumed to consist of low frequency and random high frequency components. The random high frequency component was extracted from the MLS range measurements. This was done by appropriate filtering of the range residual generated from a linearization of the range profile for the final approach segment. This range navigation system error was then modeled as an autoregressive moving average (ARMA) process. Maximum likelihood techniques were used to identify the parameters of the ARMA process.
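
    The pipeline described (linearize the range profile, keep the residual, fit an ARMA model by maximum likelihood) maps closely onto standard tooling. A sketch on synthetic data, with an assumed ARMA(2,1) order and statsmodels standing in for the maximum likelihood identification:

      import numpy as np
      from statsmodels.tsa.arima.model import ARIMA

      rng = np.random.default_rng(2)

      # Synthetic stand-in for MLS range data on a constant-velocity approach:
      # a linear trend (low-frequency part) plus correlated high-frequency noise.
      t = np.arange(500.0)
      trend = 2000.0 - 3.0 * t
      noise = np.zeros_like(t)
      for i in range(1, len(t)):                     # simple AR(1) noise
          noise[i] = 0.8 * noise[i - 1] + rng.normal(scale=0.5)
      measured_range = trend + noise

      # Step 1: extract the high-frequency residual by removing a linear fit.
      residual = measured_range - np.polyval(np.polyfit(t, measured_range, 1), t)

      # Step 2: maximum-likelihood fit of an ARMA(2,1) model to the residual.
      model = ARIMA(residual, order=(2, 0, 1)).fit()
      print(model.params)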

  1. Cramer-Rao lower bound on range error for LADARs with Geiger-mode avalanche photodiodes.

    PubMed

    Johnson, Steven E

    2010-08-20

    The Cramer-Rao lower bound (CRLB) on range error is calculated for laser detection and ranging (LADAR) systems using Geiger-mode avalanche photodiodes (GMAPDs) to detect reflected laser pulses. For the cases considered, the GMAPD range error CRLB is greater than the CRLB for a photon-counting device. It is also shown that the GMAPD range error CRLB is minimized when the mean energy in the received laser pulse is finite. Given typical LADAR system parameters, a Gaussian-envelope received pulse, and a noise detection rate of less than 4 MHz, the GMAPD range error CRLB is minimized when the quantum efficiency times the mean number of received laser pulse photons is between 2.2 and 2.3. PMID:20733630

  2. The effect of proficiency level on measurement error of range of motion

    PubMed Central

    Akizuki, Kazunori; Yamaguchi, Kazuto; Morita, Yoshiyuki; Ohashi, Yukari

    2016-01-01

    [Purpose] The aims of this study were to evaluate the type and extent of error in the measurement of range of motion and to evaluate the effect of evaluators’ proficiency level on measurement error. [Subjects and Methods] The participants were 45 university students, in different years of their physical therapy education, and 21 physical therapists, with up to three years of clinical experience in a general hospital. Range of motion of right knee flexion was measured using a universal goniometer. An electrogoniometer attached to the right knee and hidden from the view of the participants was used as the criterion to evaluate error in measurement using the universal goniometer. The type and magnitude of error were evaluated using the Bland-Altman method. [Results] Measurements with the universal goniometer were not influenced by systematic bias. The extent of random error in measurement decreased as the level of proficiency and clinical experience increased. [Conclusion] Measurements of range of motion obtained using a universal goniometer are influenced by random errors, with the extent of error being a factor of proficiency. Therefore, increasing the amount of practice would be an effective strategy for improving the accuracy of range of motion measurements. PMID:27799712
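
    The Bland-Altman quantities used in this study reduce to a few lines: the mean of the paired differences is the systematic bias, and bias +/- 1.96 standard deviations gives the 95% limits of agreement that capture random error. A minimal sketch with made-up paired angle measurements:

      import numpy as np

      def bland_altman(method_a, method_b):
          """Bias and 95% limits of agreement between paired measurements."""
          diff = np.asarray(method_a, float) - np.asarray(method_b, float)
          bias = diff.mean()                 # systematic error
          sd = diff.std(ddof=1)              # spread of random error
          return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

      # Hypothetical knee-flexion angles: universal goniometer vs electrogoniometer.
      ug = [128, 132, 125, 140, 135, 129, 138, 131]
      eg = [130, 131, 127, 137, 136, 127, 140, 133]
      bias, loa = bland_altman(ug, eg)
      print(f"bias = {bias:.1f} deg, 95% limits = ({loa[0]:.1f}, {loa[1]:.1f}) deg")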

  3. Deconstructing the "reign of error": interpersonal warmth explains the self-fulfilling prophecy of anticipated acceptance.

    PubMed

    Stinson, Danu Anthony; Cameron, Jessica J; Wood, Joanne V; Gaucher, Danielle; Holmes, John G

    2009-09-01

    People's expectations of acceptance often come to create the acceptance or rejection they anticipate. The authors tested the hypothesis that interpersonal warmth is the behavioral key to this acceptance prophecy: If people expect acceptance, they will behave warmly, which in turn will lead other people to accept them; if they expect rejection, they will behave coldly, which will lead to less acceptance. A correlational study and an experiment supported this model. Study 1 confirmed that participants' warm and friendly behavior was a robust mediator of the acceptance prophecy compared to four plausible alternative explanations. Study 2 demonstrated that situational cues that reduced the risk of rejection also increased socially pessimistic participants' warmth and thus improved their social outcomes. PMID:19571273

  5. Towards more complete specifications for acceptable analytical performance - a plea for error grid analysis.

    PubMed

    Krouwer, Jan S; Cembrowski, George S

    2011-07-01

    We examine limitations of common analytical performance specifications for quantitative assays. Specifications can be either clinical or regulatory. Problems with current specifications include specifying limits for only 95% of the results, having only one set of limits that demarcate no harm from minor harm, using incomplete models for total error, not accounting for the potential of user error, and not supplying sufficient protocol requirements. Error grids are recommended to address these problems, as error grids account for 100% of the data and stratify errors into different severity categories. Total error estimation from a method comparison can be used to estimate the inner region of an error grid, but the outer region needs to be addressed using risk management techniques. The risk management steps, foreign to many in laboratory medicine, are outlined.

  6. Assessment of an adjustment factor to model radar range dependent error

    NASA Astrophysics Data System (ADS)

    Sebastianelli, S.; Russo, F.; Napolitano, F.; Baldini, L.

    2012-09-01

    Quantitative radar precipitation estimates are affected by errors determined by many causes such as radar miscalibration, range degradation, attenuation, ground clutter, variability of the Z-R relation, variability of drop size distribution, vertical air motion, anomalous propagation and beam blocking. Range degradation (including beam broadening and sampling of precipitation at an increasing altitude) and signal attenuation determine a range-dependent behavior of the error. The aim of this work is to model the range-dependent error through an adjustment factor derived from the G/R ratio trend against range, where G and R are the corresponding rain gauge and radar rainfall amounts computed at each rain gauge location. Since range degradation and signal attenuation effects are negligible close to the radar, results show that within 40 km of the radar the overall range error is independent of the distance from Polar 55C and no range correction is needed. Nevertheless, up to this distance, the G/R ratio can show a concave trend with range, which is due to interception of the melting layer by the radar beam during stratiform events.
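
    The adjustment factor itself can be sketched in a few lines: compute the G/R ratio at each gauge, fit its trend against range, and apply the fitted factor to radar estimates at other ranges. Fitting ln(G/R) with a quadratic and exponentiating is a simplification assumed here, as are the numbers; the paper derives its own trend model.

      import numpy as np

      def range_adjustment(range_km, gauge_mm, radar_mm, deg=2):
          """Fit ln(G/R) against range; return a multiplicative adjustment."""
          log_ratio = np.log(np.asarray(gauge_mm) / np.asarray(radar_mm))
          coeff = np.polyfit(range_km, log_ratio, deg=deg)
          return lambda r_km: np.exp(np.polyval(coeff, r_km))

      # Hypothetical gauge/radar event totals at increasing range from the radar.
      r  = np.array([10, 25, 40, 60, 80, 100, 120])                # km
      g  = np.array([20.0, 19.5, 20.5, 22.0, 24.0, 27.0, 31.0])    # gauge, mm
      rr = np.array([20.1, 19.8, 20.2, 19.0, 17.5, 15.8, 13.9])    # radar, mm

      adjust = range_adjustment(r, g, rr)
      print(f"adjustment at 110 km: x{adjust(110.0):.2f}")
      print(f"corrected 14.0 mm estimate: {14.0 * adjust(110.0):.1f} mm")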

  7. Systematic errors analysis for a large dynamic range aberrometer based on aberration theory.

    PubMed

    Wu, Peng; Liu, Sheng; DeHoog, Edward; Schwiegerling, Jim

    2009-11-10

    In Ref. 1, it was demonstrated that the significant systematic errors of a type of large dynamic range aberrometer are strongly related to the power error (defocus) in the input wavefront. In this paper, a generalized theoretical analysis based on vector aberration theory is presented, and local shift errors of the SH spot pattern as a function of the lenslet position and the local wavefront tilt over the corresponding lenslet are derived. Three special cases, a spherical wavefront, a crossed cylindrical wavefront, and a cylindrical wavefront, are analyzed and the possibly affected Zernike terms in the wavefront reconstruction are investigated. The simulation and experimental results are illustrated to verify the theoretical predictions.

  8. STUDY TO ESTABLISH THE ACCEPTANCE RANGE FOR PEROXYL RADICALS SCAVENGER CAPACITY OF NATURAL SOD.

    PubMed

    Lupu, Andreea-Roxana; Cremer, Lidia

    2015-01-01

    In the context of an emerging market of food supplements, the proven quality of antioxidant products should be the main criterion for using them. The production process has to be carefully controlled, and complementary tests are needed to demonstrate the correspondence between the real and declared properties of the final product. Using well-characterized compounds with proven antioxidant activity in biological systems as a reference adds rigor to the testing protocol. The aim of this study was to determine the acceptance range for the antioxidant (peroxyl radical scavenger) capacity of "Natural SOD" by using ascorbic acid (vitamin C) for comparison. The established acceptance range completes our previous results concerning the antioxidant capacity of Natural SOD using the validated ORAC method, and creates a basis for supplementary checking of batches in current production and for improving product quality. PMID:27328523

  9. Bootstrap standard error and confidence intervals for the correlations corrected for indirect range restriction.

    PubMed

    Li, Johnson Ching-Hong; Chan, Wai; Cui, Ying

    2011-11-01

    The standard Pearson correlation coefficient, r, is a biased estimator of the population correlation coefficient, ρ(XY), when predictor X and criterion Y are indirectly range-restricted by a third variable Z (or S). Two correction algorithms, Thorndike's (1949) Case III and Schmidt, Oh, and Le's (2006) Case IV, have been proposed to correct for the bias. However, to our knowledge, neither algorithm provides a procedure to estimate the associated standard error and confidence intervals. This paper suggests using the bootstrap procedure as an alternative. Two Monte Carlo simulations were conducted to systematically evaluate the empirical performance of the proposed bootstrap procedure. The results indicated that the bootstrap standard error and confidence intervals were generally accurate across simulation conditions (e.g., selection ratio, sample size). The proposed bootstrap procedure can provide a useful alternative for the estimation of the standard error and confidence intervals for correlations corrected for indirect range restriction.

  10. Comparing range data across the slow-time dimension to correct motion measurement errors beyond the range resolution of a synthetic aperture radar

    DOEpatents

    Doerry, Armin W.; Heard, Freddie E.; Cordaro, J. Thomas

    2010-08-17

    Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.

  11. Potentiometric Measurement of Transition Ranges and Titration Errors for Acid/Base Indicators

    NASA Astrophysics Data System (ADS)

    Flowers, Paul A.

    1997-07-01

    Sophomore analytical chemistry courses typically devote a substantial amount of lecture time to acid/base equilibrium theory, and usually include at least one laboratory project employing potentiometric titrations. In an effort to provide students a laboratory experience that more directly supports their classroom discussions on this important topic, an experiment involving potentiometric measurement of transition ranges and titration errors for common acid/base indicators has been developed. The pH and visually-assessed color of a millimolar strong acid/base system are monitored as a function of added titrant volume, and the resultant data plotted to permit determination of the indicator's transition range and associated titration error. Student response is typically quite positive, and the measured quantities correlate reasonably well to literature values.

  12. Using kriging to bound satellite ranging errors due to the ionosphere

    NASA Astrophysics Data System (ADS)

    Blanch, Juan

    The Global Positioning System (GPS) has the potential to become the primary navigational aid for civilian aircraft, thanks to satellite-based augmentation systems (SBAS). SBAS systems, including the United States' Wide Area Augmentation System (WAAS), provide corrections and hard bounds on the user errors. The ionosphere is the largest and least predictable source of error. The only ionospheric information available to WAAS is a set of range delay measurements taken at reference stations. From this data, the master station must compute a real-time estimate of the ionospheric delay and a hard error bound valid for any user. The variability of ionospheric behavior has caused the confidence bounds corresponding to the ionosphere to be very large in WAAS. These ranging bounds translate into conservative bounds on user position error. These position error bounds (called protection levels) have values of 30 to 50 meters. Since these values fluctuate near the maximum tolerable limit, WAAS is not always available. In order to increase the availability of WAAS, we must decrease the confidence bounds corresponding to ionospheric uncertainty while maintaining integrity. In this work, I present an ionospheric estimation algorithm based on kriging. I first introduce a simple model of the vertical ionospheric delay that captures both the deterministic and the random behavior of the ionosphere. Under this model, the kriging method is optimal. More importantly, kriging provides an estimation variance that can be translated into an error bound. However, this method must be modified for three reasons: first, the state of the ionosphere is unknown and can only be estimated through real-time measurements; second, because of bandwidth constraints, the user cannot receive all the measurements; and third, there is noise in the measurements. I will show how these three obstacles can be overcome. The algorithm presented here provides a reduction in the error bound corresponding to ionospheric uncertainty.
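
    Kriging's appeal here, as the abstract notes, is that it returns an estimation variance alongside the estimate, and that variance can be translated into a hard confidence bound. Below is a compact ordinary-kriging sketch with an assumed exponential covariance model and synthetic station delays; the WAAS-specific modifications (estimated ionospheric state, bandwidth limits, measurement noise) are the subject of the work and are not reproduced.

      import numpy as np

      def cov(h, sill=1.0, scale=1500.0):
          """Assumed exponential covariance of vertical delay vs. lag (km)."""
          return sill * np.exp(-h / scale)

      def ordinary_kriging(xy, z, xy0):
          """Estimate the field and its variance at xy0 from samples (xy, z)."""
          n = len(z)
          d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
          d0 = np.linalg.norm(xy - xy0, axis=-1)
          A = np.ones((n + 1, n + 1))       # covariances + unbiasedness row
          A[:n, :n] = cov(d)
          A[n, n] = 0.0
          b = np.append(cov(d0), 1.0)
          sol = np.linalg.solve(A, b)
          lam, mu = sol[:n], sol[n]
          return lam @ z, cov(0.0) - lam @ cov(d0) - mu

      rng = np.random.default_rng(3)
      stations = rng.uniform(0, 3000, size=(25, 2))   # reference stations, km
      delay = 5.0 + 0.002 * stations[:, 0] + rng.normal(scale=0.3, size=25)
      est, var = ordinary_kriging(stations, delay, np.array([1500.0, 1500.0]))
      print(f"delay estimate = {est:.2f} m, kriging sigma = {np.sqrt(var):.2f} m")

    A broadcast error bound would then be the estimate plus a multiple of this sigma, inflated to protect against model misspecification.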

  13. Sensitivity analysis of short-arc station coordinate determinations from range data. [geocentric coordinate range errors in satellite tracking]

    NASA Technical Reports Server (NTRS)

    Schutz, B. E.; Tapley, B. D.

    1976-01-01

    The accurate determination of the geocentric coordinates of a tracking station is essential for most geodetic and geophysical satellite applications. Since most of these satellites are close to the earth, the geopotential model is a dominant source of error which significantly influences station coordinate determinations. Other sources, such as GM error and drag, also influence the accuracy of the station coordinate determination. One technique for reducing the effect of these errors is to use short-arcs consisting of a few passes of the satellite over the tracking station. This paper analyzes the sensitivity of short-arc station coordinate estimates to various errors in the physical model, to the number of observations, and to the station-satellite geometry using simulated as well as real data.

  14. Diagnosis of Short Range Forecast Errors Using Piecewise Inversion of Potential Vorticity

    NASA Astrophysics Data System (ADS)

    Klinker, E.

    Under the assumption of balanced flow dynamics, the evolution of atmospheric systems such as cyclones is investigated in the single-parameter environment of potential vorticity (PV). Based on the property of invertibility, it is then possible to calculate the distribution of the balanced flow from a knowledge of the 3-dimensional distribution of Ertel's PV. The diagnosis of atmospheric model errors has to take into account the effects of all diabatic and adiabatic processes. The difficulty of a comprehensive diagnostic approach arises from the fact that different processes produce tendencies for different model parameters. A diabatic process may produce tendencies for temperature alone (like radiation); other processes may produce tendencies for momentum, temperature and humidity (like vertical diffusion or cumulus convection). However, a one-parameter diagnosis has been achieved by combining temperature and momentum increments into appropriate increments of Ertel's PV. The advantage of using PV in the framework of quasi-balanced dynamics is that the flow associated with diabatic PV perturbations can be obtained from the piecewise inversion technique. The method provides a basis to identify atmospheric developments that are noticeably influenced by diabatic processes. For the diagnosis of ECMWF short range forecast tendencies, and ultimately for an estimate of model errors, a diagnostic system has been set up that calculates the flow perturbations associated with all diabatic and adiabatic processes.

  15. Dose Uncertainties in IMPT for Oropharyngeal Cancer in the Presence of Anatomical, Range, and Setup Errors

    SciTech Connect

    Kraan, Aafke C.; Water, Steven van de; Teguh, David N.; Al-Mamgani, Abrahim; Madden, Tom; Kooy, Hanne M.; Heijmen, Ben J.M.; Hoogeman, Mischa S.

    2013-12-01

    Purpose: Setup, range, and anatomical uncertainties influence the dose delivered with intensity modulated proton therapy (IMPT), but clinical quantification of these errors for oropharyngeal cancer is lacking. We quantified these factors and investigated treatment fidelity, that is, robustness, as influenced by adaptive planning and by applying more beam directions. Methods and Materials: We used an in-house treatment planning system with multicriteria optimization of pencil beam energies, directions, and weights to create treatment plans for 3-, 5-, and 7-beam directions for 10 oropharyngeal cancer patients. The dose prescription was a simultaneously integrated boost scheme, prescribing 66 Gy to primary tumor and positive neck levels (clinical target volume-66 Gy; CTV-66 Gy) and 54 Gy to elective neck levels (CTV-54 Gy). Doses were recalculated in 3700 simulations of setup, range, and anatomical uncertainties. Repeat computed tomography (CT) scans were used to evaluate an adaptive planning strategy using nonrigid registration for dose accumulation. Results: For the recalculated 3-beam plans including all treatment uncertainty sources, only 69% (CTV-66 Gy) and 88% (CTV-54 Gy) of the simulations had a dose received by 98% of the target volume (D98%) >95% of the prescription dose. Doses to organs at risk (OARs) showed considerable spread around planned values. Causes for major deviations were mixed. Adaptive planning based on repeat imaging positively affected dose delivery accuracy: in the presence of the other errors, percentages of treatments with D98% >95% increased to 96% (CTV-66 Gy) and 100% (CTV-54 Gy). Plans with more beam directions were not more robust. Conclusions: For oropharyngeal cancer patients, treatment uncertainties can result in significant differences between planned and delivered IMPT doses. Given the mixed causes for major deviations, we advise repeat diagnostic CT scans during treatment, recalculation of the dose, and, if required, adaptive replanning.

  16. Development of Algorithms and Error Analyses for the Short Baseline Lightning Detection and Ranging System

    NASA Technical Reports Server (NTRS)

    Starr, Stanley O.

    1998-01-01

    NASA, at the John F. Kennedy Space Center (KSC), developed and operates a unique high-precision lightning location system to provide lightning-related weather warnings. These warnings are used to stop lightning-sensitive operations such as space vehicle launches and ground operations where equipment and personnel are at risk. The data is provided to the Range Weather Operations (45th Weather Squadron, U.S. Air Force) where it is used with other meteorological data to issue weather advisories and warnings for Cape Canaveral Air Station and KSC operations. This system, called Lightning Detection and Ranging (LDAR), provides users with a graphical display in three dimensions of 66 megahertz radio frequency events generated by lightning processes. The locations of these events provide a sound basis for the prediction of lightning hazards. This document provides the basis for the design approach and data analysis for a system of radio frequency receivers to provide azimuth and elevation data for lightning pulses detected simultaneously by the LDAR system. The intent is for this direction-finding system to correct and augment the data provided by LDAR and, thereby, increase the rate of valid data and to correct or discard any invalid data. This document develops the necessary equations and algorithms, identifies sources of systematic errors and means to correct them, and analyzes the algorithms for random error. This data analysis approach is not found in the existing literature and was developed to facilitate the operation of this Short Baseline LDAR (SBLDAR). These algorithms may also be useful for other direction-finding systems using radio pulses or ultrasonic pulse data.
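
    For a short-baseline array, each measured time difference constrains the projection of the arrival direction onto one baseline, so azimuth and elevation follow from a small least-squares solve. A sketch under a plane-wave assumption, with a made-up four-sensor geometry; the actual SBLDAR geometry, systematic-error corrections, and random-error analysis are what the document itself develops:

      import numpy as np

      C = 299_792_458.0  # m/s

      def direction_from_tdoa(sensor_xyz, tdoa_s):
          """Least-squares arrival direction from TDOAs relative to sensor 0."""
          base = sensor_xyz[1:] - sensor_xyz[0]      # baselines, m
          rhs = -C * np.asarray(tdoa_s)              # c*tau_i = -(baseline_i . u)
          u, *_ = np.linalg.lstsq(base, rhs, rcond=None)
          u /= np.linalg.norm(u)                     # unit direction (E, N, U)
          az = np.degrees(np.arctan2(u[0], u[1]))    # degrees east of north
          el = np.degrees(np.arcsin(u[2]))
          return az, el

      # Hypothetical array (m) and a pulse arriving from az = 40, el = 25 deg.
      sensors = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 5]], float)
      az, el = np.radians(40.0), np.radians(25.0)
      u_true = np.array([np.cos(el) * np.sin(az),
                         np.cos(el) * np.cos(az),
                         np.sin(el)])
      tdoa = [-(b @ u_true) / C for b in sensors[1:] - sensors[0]]
      print(direction_from_tdoa(sensors, tdoa))      # ~ (40.0, 25.0)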

  17. Single-plane versus three-plane methods for relative range error evaluation of medium-range 3D imaging systems

    NASA Astrophysics Data System (ADS)

    MacKinnon, David K.; Cournoyer, Luc; Beraldin, J.-Angelo

    2015-05-01

    Within the context of the ASTM E57 working group WK12373, we compare the two methods that had been initially proposed for calculating the relative range error of medium-range (2 m to 150 m) optical non-contact 3D imaging systems: the first is based on a single plane (single-plane assembly) and the second on an assembly of three mutually non-orthogonal planes (three-plane assembly). Both methods are evaluated for their utility in generating a metric to quantify the relative range error of medium-range optical non-contact 3D imaging systems. We conclude that the three-plane assembly is comparable to the single-plane assembly with regard to quantification of relative range error while eliminating the requirement to isolate the edges of the target plate face.

  18. Target error for image-to-physical space registration: preliminary clinical results using laser range scanning

    NASA Astrophysics Data System (ADS)

    Cao, Aize; Miga, Michael I.; Dumpuri, P.; Ding, S.; Dawant, B. M.; Thompson, R. C.

    2007-03-01

    In this paper, preliminary results from an image-to-physical space registration platform are presented. The current platform employs traditional and novel methods of registration which use a variety of data sources, including: traditional synthetic skin-fiducial point-based registration, surface registration based on facial contours, brain feature point-based registration, brain vessel-to-vessel registration, and a more comprehensive cortical surface registration method that utilizes both geometric and intensity information from both the image volume and the physical patient. The intraoperative face and cortical surfaces were digitized using a laser range scanner (LRS) capable of producing highly resolved textured point clouds. In two in vivo cases, a series of registrations were performed using these techniques and compared within the context of a true target error. One of the advantages of using a textured point cloud data stream is that true targets on the physical cortical surface and in the preoperative image volume can be identified and used to assess image-to-physical registration methods. The results suggest that the iterative closest point (ICP) method for intraoperative face surface registration is equivalent to the point-based registration (PBR) method using skin fiducial markers. With regard to the initial image and physical space registration, for patient 1, mean target registration errors (TREs) were 3.1 +/- 0.4 mm and 3.6 +/- 0.9 mm for face ICP and skin fiducial PBR, respectively. For patient 2, the mean TREs were 5.7 +/- 1.3 mm and 6.6 +/- 0.9 mm for face ICP and skin fiducial PBR, respectively. With regard to intraoperative cortical surface registration, SurfaceMI outperformed feature-based PBR and vessel ICP with 1.7 +/- 1.8 mm for patient 1. For patient 2, the best result was achieved by using vessel ICP, with 1.9 +/- 0.5 mm.
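
    The point-based registration and true-target evaluation loop can be sketched with the classic SVD (Kabsch/Horn) solution: solve for the rigid transform on fiducials, then score it on held-out targets. The geometry and noise level below are invented for illustration; only the method is as described.

      import numpy as np

      def rigid_register(src, dst):
          """Least-squares rotation R and translation t with R @ src + t ~ dst."""
          cs, cd = src.mean(axis=0), dst.mean(axis=0)
          H = (src - cs).T @ (dst - cd)
          U, _, Vt = np.linalg.svd(H)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
          R = Vt.T @ D @ U.T                 # D guards against reflections
          return R, cd - R @ cs

      rng = np.random.default_rng(4)
      R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
      t_true = np.array([5.0, -2.0, 10.0])

      fiducials = rng.uniform(-50, 50, size=(6, 3))   # e.g. skin markers, mm
      targets = rng.uniform(-30, 30, size=(4, 3))     # held-out true targets, mm
      measured = fiducials @ R_true.T + t_true + rng.normal(scale=0.5, size=(6, 3))

      R, t = rigid_register(fiducials, measured)
      tre = np.linalg.norm((targets @ R.T + t) - (targets @ R_true.T + t_true),
                           axis=1)
      print(f"mean TRE = {tre.mean():.2f} mm")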

  19. SU-E-T-550: Range Effects in Proton Therapy Caused by Systematic Errors in the Stoichiometric Calibration

    SciTech Connect

    Doolan, P; Dias, M; Collins Fekete, C; Seco, J

    2014-06-01

    Purpose: The procedure for proton treatment planning involves the conversion of the patient's X-ray CT from Hounsfield units into relative stopping powers (RSP), using a stoichiometric calibration curve (Schneider 1996). In clinical practice a 3.5% margin is added to account for the range uncertainty introduced by this process and other errors. RSPs for real tissues are calculated using composition data and the Bethe-Bloch formula (ICRU 1993). The purpose of this work is to investigate the impact that systematic errors in the stoichiometric calibration have on the proton range. Methods: Seven tissue inserts of the Gammex 467 phantom were imaged using our CT scanner. Their known chemical compositions (Watanabe 1999) were then used to calculate the theoretical RSPs, using the same formula as would be used for human tissues in the stoichiometric procedure. The actual RSPs of these inserts were measured using a Bragg peak shift measurement in the proton beam at our institution. Results: The theoretical calculation of the RSP was lower than the measured RSP values, by a mean/max error of -1.5%/-3.6%. For all seven inserts the theoretical approach underestimated the RSP, with errors variable across the range of Hounsfield units. Systematic errors for lung (average of two inserts), adipose and cortical bone were -3.0%/-2.1%/-0.5%, respectively. Conclusion: There is a systematic underestimation caused by the theoretical calculation of RSP, a crucial step in the stoichiometric calibration procedure. As such, we propose that proton calibration curves should be based on measured RSPs. Investigations will be made to see if the same systematic errors exist for biological tissues. The impact of these differences on the range of proton beams, for phantoms and patient scenarios, will be investigated. This project was funded equally by the Engineering and Physical Sciences Research Council (UK) and Ion Beam Applications (Louvain-La-Neuve, Belgium).

  20. Bootstrap Standard Error and Confidence Intervals for the Correlation Corrected for Range Restriction: A Simulation Study

    ERIC Educational Resources Information Center

    Chan, Wai; Chan, Daniel W.-L.

    2004-01-01

    The standard Pearson correlation coefficient is a biased estimator of the true population correlation, ρ, when the predictor and the criterion are range restricted. To correct the bias, the correlation corrected for range restriction, r-sub(c), has been recommended, and a standard formula based on asymptotic results for estimating its standard…
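
    The resampling recipe the record describes is easy to sketch. Below, the widely used Thorndike Case II correction for direct range restriction stands in for r-sub(c), with u the ratio of unrestricted to restricted predictor standard deviations; the simulation setup is an assumption for illustration.

      import numpy as np

      def case2(r, u):
          """Thorndike Case II correction for direct range restriction."""
          return (u * r) / np.sqrt(1.0 + r**2 * (u**2 - 1.0))

      def bootstrap_rc(x, y, sd_unrestricted, n_boot=2000, seed=0):
          """Bootstrap SE and 95% percentile CI of the corrected correlation."""
          rng = np.random.default_rng(seed)
          n, rcs = len(x), np.empty(n_boot)
          for b in range(n_boot):
              idx = rng.integers(0, n, n)
              xb, yb = x[idx], y[idx]
              r = np.corrcoef(xb, yb)[0, 1]
              rcs[b] = case2(r, sd_unrestricted / xb.std(ddof=1))
          return rcs.std(ddof=1), np.percentile(rcs, [2.5, 97.5])

      rng = np.random.default_rng(5)
      x_full = rng.normal(size=2000)
      y_full = 0.6 * x_full + 0.8 * rng.normal(size=2000)   # population rho = 0.6
      keep = x_full > 0.5                    # direct selection on the predictor
      x, y = x_full[keep], y_full[keep]

      se, ci = bootstrap_rc(x, y, sd_unrestricted=x_full.std(ddof=1))
      print(f"restricted r = {np.corrcoef(x, y)[0, 1]:.3f}")
      print(f"bootstrap SE = {se:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")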

  1. Error Analysis of Combined Optical-Flow and Stereo Passive Ranging

    NASA Technical Reports Server (NTRS)

    Barniv, Yair

    1992-01-01

    The motion of an imaging sensor causes each imaged point of the scene to describe a time trajectory on the image plane. The trajectories of all imaged points are reminiscent of a flow (e.g., of liquid), which is the source of the term "optical flow". Optical-flow ranging is a method by which the stream of two-dimensional images obtained from a forward-looking, forward-moving passive sensor is used to compute range to points in the field of view. Another well-known ranging method consists of triangulation based on stereo images obtained from at least two stationary sensors. In this paper we analyze the potential accuracies of a combined optical-flow and stereo passive-ranging system in the context of helicopter nap-of-the-earth obstacle avoidance. The Cramer-Rao lower bound (CRLB) is developed for the combined system under the assumption of a random angular misalignment common to both cameras of a stereo pair. It is shown that the range accuracy degradation caused by misalignment is negligible for a combined optical-flow and stereo system as compared with a monocular optical-flow system.

  2. Extended range (10-30 days) heavy rain forecasting study based on a nonlinear cross-prediction error model

    NASA Astrophysics Data System (ADS)

    Xia, Zhiye; Chen, Hongbin; Xu, Lisheng; Wang, Yongqian

    2015-12-01

    Extended range (10-30 d) heavy rain forecasting is difficult but performs an important function in disaster prevention and mitigation. In this paper, a nonlinear cross prediction error (NCPE) algorithm that combines nonlinear dynamics and statistical methods is proposed. The method is based on phase space reconstruction of chaotic single-variable time series of precipitable water and is tested in 100 global cases of heavy rain. First, nonlinear relative dynamic error for local attractor pairs is calculated at different stages of the heavy rain process, after which the local change characteristics of the attractors are analyzed. Second, the eigenpeak is defined as a prediction indicator based on an error threshold of about 1.5, and is then used to analyze the forecasting validity period. The results reveal that the prediction indicator features regarded as eigenpeaks for heavy rain extreme weather are all reflected consistently, without failure, based on the NCPE model; the cases with prediction validity periods of 1-2 d, 3-9 d and 10-30 d number 4, 22 and 74, respectively, with no false alarms or omissions. The NCPE model developed allows accurate forecasting of heavy rain over an extended range of 10-30 d and has the potential to be used to explore the mechanisms involved in the development of heavy rain according to a segmentation scale. This novel method provides new insights into extended range forecasting and atmospheric predictability, and also allows the creation of multi-variable chaotic extreme weather prediction models based on high spatiotemporal resolution data.

  3. Building Energy Simulation Test for Existing Homes (BESTEST-EX): Instructions for Implementing the Test Procedure, Calibration Test Reference Results, and Example Acceptance-Range Criteria

    SciTech Connect

    Judkoff, R.; Polly, B.; Bianchi, M.; Neymark, J.; Kennedy, M.

    2011-08-01

    This publication summarizes building energy simulation test for existing homes (BESTEST-EX): instructions for implementing the test procedure, calibration tests reference results, and example acceptance-range criteria.

  4. Long range hybrid tube-wedge plasmonic waveguide with extreme light confinement and good fabrication error tolerance.

    PubMed

    Ding, Li; Qin, Jin; Xu, Kai; Wang, Liang

    2016-02-22

    We studied a novel long range hybrid tube-wedge plasmonic (LRHTWP) waveguide consisting of a high-index dielectric nanotube placed above a triangular metal wedge substrate. Using comprehensive numerical simulations of the guiding properties of the designed waveguide, it is found that extreme light confinement and low propagation loss are obtained due to strong coupling between the dielectric nanotube mode and the wedge plasmon polariton. Compared with previously studied hybrid plasmonic waveguides, the LRHTWP waveguide has longer propagation length and tighter mode confinement. In addition, the LRHTWP waveguide is quite tolerant to practical fabrication errors such as variation of the wedge tip angle and horizontal misalignment between the nanotube and the metal wedge. The proposed LRHTWP waveguide has many potential applications in high-performance nanophotonic components.

  5. A simple method for high-precision calibration of long-range errors in an angle encoder using an electronic nulling autocollimator

    NASA Astrophysics Data System (ADS)

    Kinnane, Mark N.; Hudson, Lawrence T.; Henins, Albert; Mendenhall, Marcus H.

    2015-04-01

    We describe a simple method for high-precision rotary angle encoder calibration for long-range angular errors. By using a redesigned electronic nulling autocollimator, an optical-polygon artifact is calibrated simultaneously with determining the encoder error function over a rotation of 2π rad. The technique is applied to the NIST vacuum double crystal spectrometer, which depends on precise measurement of diffraction angles to determine absolute x-ray wavelengths. By oversampling, the method returned the encoder error function with an expanded uncertainty (k = 2) of 0.004 s of plane angle. Knowledge of the error function permits the instrument to make individual encoder readings with an accuracy of 0.06 s (k = 2), which is limited primarily by the least count and noise of the encoder electronics. While the error function lay within the nominal specifications, it differed from the intrinsic factory curve, indicating the need for in situ calibration in high-precision applications.

  6. Restraint of range walk error in a Geiger-mode avalanche photodiode lidar to acquire high-precision depth and intensity information.

    PubMed

    Xu, Lu; Zhang, Yu; Zhang, Yong; Yang, Chenghua; Yang, Xu; Zhao, Yuan

    2016-03-01

    There exists a range walk error in a Geiger-mode avalanche photodiode (Gm-APD) lidar because of the fluctuation in the number of signal photoelectrons. To restrain this range walk error, we propose a new returning-wave signal processing technique based on the Poisson probability response model and the Gaussian functions fitting method. High-precision depth and intensity information of the target at the distance of 5 m is obtained by a Gm-APD lidar using a 6 ns wide pulsed laser. The experiment results show that the range and intensity precisions are 1.2 cm and 0.015 photoelectrons, respectively. PMID:26974630
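
    The Gaussian-fitting half of the technique is straightforward to sketch: histogram the photon arrival times over many gates, fit a Gaussian, and take its centre as the depth estimate, which is far less sensitive to the photon-number-dependent leading edge than a simple threshold. The Poisson response modeling is omitted, and the pulse width, rates, and target below are assumptions.

      import numpy as np
      from scipy.optimize import curve_fit

      C = 299_792_458.0  # m/s

      def gaussian(t, a, t0, sigma, b):
          return a * np.exp(-0.5 * ((t - t0) / sigma) ** 2) + b

      rng = np.random.default_rng(6)

      # Simulated Gm-APD timing histogram (times in ns): signal photons from a
      # ~6 ns pulse off a 5 m target, plus uniform dark/background counts.
      t_true = 2 * 5.0 / C * 1e9                    # two-way time, ns
      arrivals = np.concatenate([rng.normal(t_true, 2.5, size=3000),
                                 rng.uniform(0.0, 100.0, size=1500)])
      counts, edges = np.histogram(arrivals, bins=200, range=(0.0, 100.0))
      centers = 0.5 * (edges[:-1] + edges[1:])

      p0 = [counts.max(), centers[np.argmax(counts)], 2.5, np.median(counts)]
      popt, _ = curve_fit(gaussian, centers, counts, p0=p0)
      print(f"estimated range: {popt[1] * 1e-9 * C / 2:.3f} m")   # expect ~5 m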

  7. Assessment of the accuracy of global geodetic satellite laser ranging observations and estimated impact on ITRF scale: estimation of systematic errors in LAGEOS observations 1993-2014

    NASA Astrophysics Data System (ADS)

    Appleby, Graham; Rodríguez, José; Altamimi, Zuheir

    2016-06-01

    Satellite laser ranging (SLR) to the geodetic satellites LAGEOS and LAGEOS-2 uniquely determines the origin of the terrestrial reference frame and, jointly with very long baseline interferometry, its scale. Given such a fundamental role in satellite geodesy, it is crucial that any systematic errors in either technique are at an absolute minimum as efforts continue to realise the reference frame at millimetre levels of accuracy to meet the present and future science requirements. Here, we examine the intrinsic accuracy of SLR measurements made by tracking stations of the International Laser Ranging Service using normal point observations of the two LAGEOS satellites in the period 1993 to 2014. The approach we investigate in this paper is to compute weekly reference frame solutions solving for satellite initial state vectors, station coordinates and daily Earth orientation parameters, estimating, along with these, weekly average range errors for each of the observing stations. Potential issues in any of the large number of SLR stations assumed to have been free of error in previous realisations of the ITRF may have been absorbed in the reference frame, primarily in station height. Likewise, systematic range errors estimated against a fixed frame that may itself suffer from accuracy issues will absorb network-wide problems into station-specific results. Our results suggest that in the past two decades, the scale of the ITRF derived from the SLR technique has been close to 0.7 ppb too small, due to systematic errors in either or both of the range measurements and their treatment. We discuss these results in the context of preparations for ITRF2014 and additionally consider the impact of this work on the currently adopted value of the geocentric gravitational constant, GM.

  8. Passive ranging errors due to multipath distortion of deterministic transient signals with application to the localization of small arms fire

    NASA Astrophysics Data System (ADS)

    Ferguson, Brian G.; Lo, Kam W.

    2002-01-01

    A passive ranging technique based on wavefront curvature is used to estimate the ranges and bearings of impulsive sound sources represented by small arms fire. The discharge of a firearm results in the generation of a transient acoustic signal whose energy propagates radially outwards from the omnidirectional source. The radius of curvature of the spherical wavefront at any instant is equal to the instantaneous range from the source. The curvature of the acoustic wavefront is sensed with a three-microphone linear array by first estimating the differential time of arrival (or time delay) of the acoustic wavefront at each of the two adjacent sensor pairs and then processing the time-delay information to extract the range and bearing of the source. However, modeling the passive ranging performance of the wavefront curvature method for a deterministic transient signal source in a multipath environment shows that when the multipath and direct path arrivals are unresolvable, the time-delay estimates are biased which, in turn, biases the range estimates. The model explains the observed under-ranging of small arms firing positions during a field experiment.
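
    The wavefront-curvature geometry is compact enough to invert numerically: the two adjacent-pair delays fix two path-length differences, and a least-squares solve recovers source range and bearing. The sensor spacing, speed of sound, and noiseless delays below are assumptions for the sketch; with real data the multipath bias discussed above contaminates the delays themselves.

      import numpy as np
      from scipy.optimize import least_squares

      C_SOUND = 343.0                         # speed of sound, m/s
      d = 1.0                                 # sensor spacing, m
      mics = np.array([[-d, 0.0], [0.0, 0.0], [d, 0.0]])

      def path_diffs(src):
          """Adjacent-pair path-length differences (m) for a source at src."""
          r = np.linalg.norm(mics - src, axis=1)
          return np.array([r[0] - r[1], r[1] - r[2]])

      # Hypothetical firing point: 150 m range, 30 deg off broadside.
      bearing = np.radians(30.0)
      true_src = 150.0 * np.array([np.sin(bearing), np.cos(bearing)])
      measured = path_diffs(true_src)         # in practice: c times measured delays

      fit = least_squares(lambda s: path_diffs(s) - measured, x0=[50.0, 100.0])
      rng_est = np.linalg.norm(fit.x)
      brg_est = np.degrees(np.arctan2(fit.x[0], fit.x[1]))
      print(f"range = {rng_est:.1f} m, bearing = {brg_est:.1f} deg")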

  9. Passive ranging errors due to multipath distortion of deterministic transient signals with application to the localization of small arms fire.

    PubMed

    Ferguson, Brian G; Lo, Kam W

    2002-01-01

    A passive ranging technique based on wavefront curvature is used to estimate the ranges and bearings of impulsive sound sources represented by small arms fire. The discharge of a firearm results in the generation of a transient acoustic signal whose energy propagates radially outwards from the omnidirectional source. The radius of curvature of the spherical wavefront at any instant is equal to the instantaneous range from the source. The curvature of the acoustic wavefront is sensed with a three-microphone linear array by first estimating the differential time of arrival (or time delay) of the acoustic wavefront at each of the two adjacent sensor pairs and then processing the time-delay information to extract the range and bearing of the source. However, modeling the passive ranging performance of the wavefront curvature method for a deterministic transient signal source in a multipath environment shows that when the multipath and direct path arrivals are unresolvable, the time-delay estimates are biased which, in turn, biases the range estimates. The model explains the observed under-ranging of small arms firing positions during a field experiment. PMID:11831787

  10. Impact of a distance estimation error inducing a visualized zone gap on the target illuminance in range-gated active imaging.

    PubMed

    Matwyschuk, Alexis

    2014-01-01

    Some stand-alone airborne target-reconnaissance systems, such as a missile seeker head, use range-gated laser active imaging to visualize a target in the scene. To center the visualized zone on the target, it is important to know the distance between the active imaging system and the target. However, as this exact distance is not known before the target is detected, it can only be estimated. This estimated distance can be erroneous (max ≈ 500 m) owing to various technological drifts (gyrometric drift, accelerometric drift, missile position error, etc.). To evaluate the impact of a distance estimation error on target illuminance in active imaging, expressions for the illuminance attenuation ratio as a function of the decentered target position with respect to the visualized zone were derived. These equations will be used to determine, in future stand-alone reconnaissance systems, the target signal-to-noise ratio as a function of the localization error. Two modes of visualization were considered: first, a fixed width of the visualized zone, and second, a width of the visualized zone that increases with distance. The derived expressions allowed us to study the behavior of the target illuminance as a function of the gap (the difference between the estimated distance and the real distance) for each mode of visualization. The results showed that, beyond a target distance of about 1 km, the variable-width visualization mode loses less target illuminance when a gap is caused by an error in estimating the target distance.

  11. Error in radiology.

    PubMed

    Goddard, P; Leslie, A; Jones, A; Wakeley, C; Kabala, J

    2001-10-01

    The level of error in radiology has been tabulated from articles on error and on "double reporting" or "double reading". The level of error varies depending on the radiological investigation, but the range is 2-20% for clinically significant or major error. The greatest reduction in error rates will come from changes in systems.

  12. Action errors, error management, and learning in organizations.

    PubMed

    Frese, Michael; Keith, Nina

    2015-01-01

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  13. Acceptance procedures: Microfilm printer

    NASA Technical Reports Server (NTRS)

    Lockwood, H. E.

    1973-01-01

    Acceptance tests were made for a special order automatic additive color microfilm printer. Tests include film capacity, film transport, resolution, illumination uniformity, exposure range checks, and color cuing considerations.

  14. Acceptance threshold theory can explain occurrence of homosexual behaviour.

    PubMed

    Engel, Katharina C; Männer, Lisa; Ayasse, Manfred; Steiger, Sandra

    2015-01-01

    Same-sex sexual behaviour (SSB) has been documented in a wide range of animals, but its evolutionary causes are not well understood. Here, we investigated SSB in the light of Reeve's acceptance threshold theory. When recognition is not error-proof, the acceptance threshold used by males to recognize potential mating partners should be flexibly adjusted to maximize the fitness pay-off between the costs of erroneously accepting males and the benefits of accepting females. By manipulating male burying beetles' search time for females and their reproductive potential, we influenced their perceived costs of making an acceptance or rejection error. As predicted, when the costs of rejecting females increased, males exhibited more permissive discrimination decisions and showed high levels of SSB; when the costs of accepting males increased, males were more restrictive and showed low levels of SSB. Our results support the idea that in animal species, in which the recognition cues of females and males overlap to a certain degree, SSB is a consequence of an adaptive discrimination strategy to avoid the costs of making rejection errors.
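
    As a toy rendering of the acceptance-threshold idea (our construction, not the study's analysis), the sketch below places overlapping normal cue distributions for females and males and shows how the expected-cost-minimizing acceptance cutoff shifts as the two error costs change.

        # Toy acceptance-threshold model: accept a partner if its recognition
        # cue exceeds a cutoff t; choose t to minimize expected error cost.
        # Distributions and costs are illustrative placeholders.
        import numpy as np
        from scipy.stats import norm

        cue = np.linspace(-4, 6, 1001)
        female = norm(loc=2.0, scale=1.0)   # class to accept
        male = norm(loc=0.0, scale=1.0)     # class to reject (cues overlap)

        def best_threshold(c_reject_female, c_accept_male):
            # cost(t) = cost of rejecting females below t + cost of accepting males above t
            cost = c_reject_female * female.cdf(cue) + c_accept_male * male.sf(cue)
            return cue[np.argmin(cost)]

        print(best_threshold(1.0, 5.0))  # accepting males costly -> restrictive, ~1.8
        print(best_threshold(5.0, 1.0))  # rejecting females costly -> permissive, ~0.2

    A permissive (low) cutoff is the regime in which same-sex acceptance errors, and hence SSB, become frequent.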

  15. Refractive Errors

    MedlinePlus

    ... and lens of your eye helps you focus. Refractive errors are vision problems that happen when the shape ... cornea, or aging of the lens. Four common refractive errors are Myopia, or nearsightedness - clear vision close up ...

  16. Software error detection

    NASA Technical Reports Server (NTRS)

    Buechler, W.; Tucker, A. G.

    1981-01-01

    Several methods were employed to detect both the occurrence and the source of errors in the operational software of the AN/SLQ-32, a large embedded real-time electronic warfare command and control system for the ROLM 1606 computer. The ROLM computer provides information about invalid addressing, improper use of privileged instructions, stack overflows, and unimplemented instructions. Additionally, software techniques were developed to detect invalid jumps, indices out of range, infinite loops, stack underflows, and field size errors. Finally, data are saved to provide information about the status of the system when an error is detected. This information includes I/O buffers, interrupt counts, stack contents, and recently passed locations. The various errors detected, techniques to assist in debugging problems, and segment simulation on a non-target computer are discussed. These error detection techniques were a major factor in finding the primary cause of error in 98% of over 500 system dumps.

  17. TU-C-BRE-08: IMRT QA: Selecting Meaningful Gamma Criteria Based On Error Detection Sensitivity

    SciTech Connect

    Steers, J; Fraass, B

    2014-06-15

    Purpose: To develop a strategy for defining meaningful tolerance limits and studying the sensitivity of IMRT QA gamma criteria by inducing known errors in QA plans. Methods: IMRT QA measurements (ArcCHECK, Sun Nuclear) were compared to QA plan calculations with induced errors. Many (>24) gamma comparisons between data and calculations were performed for each of several kinds of cases and classes of induced error types with varying magnitudes (e.g. MU errors ranging from -10% to +10%), resulting in over 3,000 comparisons. Gamma passing rates for each error class and case were graphed against error magnitude to create error curves in order to represent the range of missed errors in routine IMRT QA using various gamma criteria. Results: This study demonstrates that random, case-specific, and systematic errors can be detected by the error curve analysis. Depending on location of the peak of the error curve (e.g., not centered about zero), 3%/3mm threshold=10% criteria may miss MU errors of up to 10% and random MLC errors of up to 5 mm. Additionally, using larger dose thresholds for specific devices may increase error sensitivity (for the same X%/Ymm criteria) by up to a factor of two. This analysis will allow clinics to select more meaningful gamma criteria based on QA device, treatment techniques, and acceptable error tolerances. Conclusion: We propose a strategy for selecting gamma parameters based on the sensitivity of gamma criteria and individual QA devices to induced calculation errors in QA plans. Our data suggest large errors may be missed using conventional gamma criteria and that using stricter criteria with an increased dose threshold may reduce the range of missed errors. This approach allows quantification of gamma criteria sensitivity and is straightforward to apply to other combinations of devices and treatment techniques.
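
    A minimal sketch of the error-curve bookkeeping described above, with synthetic passing rates standing in for the measured gamma comparisons:

        # Graph gamma passing rate against induced error magnitude and read
        # off which error sizes a given action level would fail to flag.
        # The quadratic "error curve" here is a synthetic placeholder; in
        # practice each point comes from a recomputed QA plan comparison.
        import numpy as np

        mu_error = np.arange(-10, 11, 2)              # induced MU error, %
        passing = 100 - 0.55 * (mu_error - 1.0) ** 2  # toy curve, peak off-centre
        tolerance = 90.0                              # action level on passing rate

        missed = mu_error[passing >= tolerance]
        print("missed MU errors (%):", missed)        # the criteria's blind interval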

  18. Error Analysis

    NASA Astrophysics Data System (ADS)

    Scherer, Philipp O. J.

    Input data as well as the results of elementary operations have to be represented by machine numbers, the subset of real numbers which is used by the arithmetic unit of today's computers. Generally this generates rounding errors. This kind of numerical error can be avoided in principle by using arbitrary-precision arithmetic or symbolic algebra programs. But this is impractical in many cases due to the increase in computing time and memory requirements. Results from more complex operations like square roots or trigonometric functions can have even larger errors, since series expansions have to be truncated and iterations accumulate the errors of the individual steps. In addition, the precision of input data from an experiment is limited. In this chapter we study the influence of numerical errors on the uncertainties of the calculated results and the stability of simple algorithms.
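
    A short demonstration of the rounding behaviour described in this chapter summary:

        # 0.1 has no exact binary representation, so naive repeated addition
        # accumulates per-addition rounding error; a compensated sum
        # (math.fsum) leaves only the representation error of 0.1 itself.
        import math

        naive = sum(0.1 for _ in range(10**6))
        exact = math.fsum(0.1 for _ in range(10**6))
        print(naive - 100000.0)   # accumulated rounding drift, ~1.3e-6
        print(exact - 100000.0)   # only 0.1's representation error, ~6e-9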

  19. The Error in Total Error Reduction

    PubMed Central

    Witnauer, James E.; Urcelay, Gonzalo P.; Miller, Ralph R.

    2013-01-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modelling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. PMID:23891930
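
    A minimal sketch of the contrast between the two update rules (our rendering of the standard equations, not the paper's code):

        # TER (Rescorla-Wagner style): one error term shared by all present
        # cues. LER: a separate error term per cue. Weights w, cues x,
        # learning rate alpha, outcome lam.
        import numpy as np

        def ter_update(w, x, lam, alpha=0.1):
            common_error = lam - np.dot(w, x)      # total error across compound
            return w + alpha * common_error * x

        def ler_update(w, x, lam, alpha=0.1):
            local_error = lam - w * x              # one error term per cue
            return w + alpha * local_error * x

        x = np.array([1.0, 1.0])                   # two cues trained in compound
        w = np.zeros(2)
        for _ in range(50):
            w = ter_update(w, x, lam=1.0)
        print("TER weights:", w)                   # sum to ~1: cues share the outcome
        w = np.zeros(2)
        for _ in range(50):
            w = ler_update(w, x, lam=1.0)
        print("LER weights:", w)                   # each cue approaches ~1 on its own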

  20. Empirical Tests of Acceptance Sampling Plans

    NASA Technical Reports Server (NTRS)

    White, K. Preston, Jr.; Johnson, Kenneth L.

    2012-01-01

    Acceptance sampling is a quality control procedure applied as an alternative to 100% inspection. A random sample of items is drawn from a lot to determine the fraction of items which have a required quality characteristic. Both the number of items to be inspected and the criterion for determining conformance of the lot to the requirement are given by an appropriate sampling plan with specified risks of Type I and Type II sampling errors. In this paper, we present the results of empirical tests of the accuracy of selected sampling plans reported in the literature. These plans are for measurable quality characteristics which are known to have either binomial, exponential, normal, gamma, Weibull, inverse Gaussian, or Poisson distributions. In the main, results support the accepted wisdom that variables acceptance plans are superior to attributes (binomial) acceptance plans, in the sense that they provide comparable protection against risks at reduced sampling cost. For the Gaussian and Weibull plans, however, there are ranges of the shape parameters for which the required sample sizes are in fact larger than for the corresponding attributes plans, dramatically so for instances of large skew. Tests further confirm that the published inverse-Gaussian (IG) plan is flawed, as reported by White and Johnson (2011).
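
    For the attributes (binomial) case mentioned above, a single-sampling plan and its operating-characteristic curve can be sketched as follows; the plan parameters n and c are illustrative, not taken from the paper:

        # Attributes single-sampling plan: inspect n items, accept the lot if
        # at most c are defective. The OC curve gives P(accept) as a function
        # of the true defect fraction p.
        from math import comb

        def accept_prob(p, n=125, c=3):
            return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

        for p in (0.01, 0.02, 0.05):
            print(f"p={p:.2f}: P(accept)={accept_prob(p):.3f}")

    The Type I risk is 1 - P(accept) at the acceptable quality level, and the Type II risk is P(accept) at the rejectable quality level.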

  1. Medication Errors

    MedlinePlus

    ... to reduce the risk of medication errors to industry and others at FDA. Additionally, DMEPA prospectively reviews ... List of Abbreviations Regulations and Guidances Guidance for Industry: Safety Considerations for Product Design to Minimize Medication ...

  2. Medication Errors

    MedlinePlus

    Medicines cure infectious diseases, prevent problems from chronic diseases, and ease pain. But medicines can also cause harmful reactions if not used ... You can help prevent errors by Knowing your medicines. Keep a list of the names of your ...

  3. Error compensation for thermally induced errors on a machine tool

    SciTech Connect

    Krulewich, D.A.

    1996-11-08

    Heat flow from internal and external sources and from the environment creates machine deformations, resulting in positioning errors between the tool and the workpiece. There is no industrially accepted method for thermal error compensation. A simple model has been selected that linearly relates discrete temperature measurements to the deflection. The main difficulty is deciding where to locate the temperature sensors and determining how many are required. This research develops a method to determine the number and location of temperature measurements.
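
    A minimal sketch of such a linear thermal-error model, with made-up sensor data; the report itself addresses the harder question of sensor number and placement:

        # Regress deflection on discrete temperature readings, then use the
        # fitted coefficients to predict (and subtract) the thermal error.
        import numpy as np

        rng = np.random.default_rng(1)
        temps = rng.normal(25.0, 3.0, size=(200, 4))     # 4 sensors, 200 samples
        true_coef = np.array([8.0, -3.0, 1.5, 0.5])      # um per deg C (made up)
        deflect = temps @ true_coef + rng.normal(0, 2.0, 200)

        X = np.column_stack([temps, np.ones(len(temps))])  # add an offset term
        coef, *_ = np.linalg.lstsq(X, deflect, rcond=None)
        predicted = X @ coef
        print("residual RMS (um):", np.sqrt(np.mean((deflect - predicted) ** 2)))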

  4. Estimating patient specific uncertainty parameters for adaptive treatment re-planning in proton therapy using in vivo range measurements and Bayesian inference: application to setup and stopping power errors.

    PubMed

    Labarbe, Rudi; Janssens, Guillaume; Sterpin, Edmond

    2016-09-01

    In proton therapy, quantification of the proton range uncertainty is important to achieve dose distribution compliance. The promising accuracy of prompt gamma imaging (PGI) suggests the development of a mathematical framework that uses the range measurements to convert population-based estimates of uncertainties into patient-specific estimates for the purpose of plan adaptation. We present here such a framework using Bayesian inference. The sources of uncertainty were modeled by three parameters: setup bias m, random setup precision r and water equivalent path length bias u. The evolution of the expectation values E(m), E(r) and E(u) during the treatment was simulated. The expectation values converged towards the true simulation parameters after 5 and 10 fractions for E(m) and E(u), respectively. E(r) settled on a constant value slightly lower than the true value after 10 fractions. In conclusion, the simulation showed that there is enough information in the frequency distribution of the range errors measured by PGI to estimate the expectation values and the confidence intervals of the model parameters by Bayesian inference. The updated model parameters were used to compute patient-specific lateral and local distal margins for adaptive re-planning. PMID:27494118
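
    For intuition, the sketch below performs a one-parameter version of the Bayesian update described above: a normal prior on a constant range bias is refined with one simulated PGI measurement per fraction. The paper's three-parameter model (m, r, u) is richer; this shows only the flavour of the per-fraction convergence.

        # Conjugate normal update of a single range-bias parameter with known
        # measurement noise. All magnitudes are illustrative placeholders.
        import numpy as np

        rng = np.random.default_rng(2)
        true_bias, noise_sd = 3.0, 2.0          # mm, illustrative
        mu, var = 0.0, 5.0**2                   # population-based prior

        for fraction in range(1, 11):
            obs = rng.normal(true_bias, noise_sd)            # one PGI range error
            var_post = 1.0 / (1.0 / var + 1.0 / noise_sd**2)
            mu = var_post * (mu / var + obs / noise_sd**2)   # posterior mean
            var = var_post                                   # posterior variance
            print(f"fraction {fraction:2d}: E(bias)={mu:5.2f} mm")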

  5. Estimating patient specific uncertainty parameters for adaptive treatment re-planning in proton therapy using in vivo range measurements and Bayesian inference: application to setup and stopping power errors

    NASA Astrophysics Data System (ADS)

    Labarbe, Rudi; Janssens, Guillaume; Sterpin, Edmond

    2016-09-01

    In proton therapy, quantification of the proton range uncertainty is important to achieve dose distribution compliance. The promising accuracy of prompt gamma imaging (PGI) suggests the development of a mathematical framework that uses the range measurements to convert population-based estimates of uncertainties into patient-specific estimates for the purpose of plan adaptation. We present here such a framework using Bayesian inference. The sources of uncertainty were modeled by three parameters: setup bias m, random setup precision r and water equivalent path length bias u. The evolution of the expectation values E(m), E(r) and E(u) during the treatment was simulated. The expectation values converged towards the true simulation parameters after 5 and 10 fractions for E(m) and E(u), respectively. E(r) settled on a constant value slightly lower than the true value after 10 fractions. In conclusion, the simulation showed that there is enough information in the frequency distribution of the range errors measured by PGI to estimate the expectation values and the confidence intervals of the model parameters by Bayesian inference. The updated model parameters were used to compute patient-specific lateral and local distal margins for adaptive re-planning.

  6. High acceptance recoil polarimeter

    SciTech Connect

    The HARP Collaboration

    1992-12-05

    In order to detect neutrons and protons in the 50 to 600 MeV energy range and measure their polarization, an efficient, low-noise, self-calibrating device is being designed. This detector, known as the High Acceptance Recoil Polarimeter (HARP), is based on the recoil principle of proton detection from np→n′p′ or pp→p′p′ scattering (detected particles are underlined), which intrinsically yields polarization information on the incoming particle. HARP will be commissioned to carry out experiments in 1994.

  7. Operational Interventions to Maintenance Error

    NASA Technical Reports Server (NTRS)

    Kanki, Barbara G.; Walter, Diane; Dulchinos, VIcki

    1997-01-01

    A significant proportion of aviation accidents and incidents are known to be tied to human error. However, research of flight operational errors has shown that so-called pilot error often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the "team" concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: 1) to develop human factors interventions which are directly supported by reliable human error data, and 2) to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.

  8. Rectifying calibration error of Goldmann applanation tonometer is easy!

    PubMed

    Choudhari, Nikhil S; Moorthy, Krishna P; Tungikar, Vinod B; Kumar, Mohan; George, Ronnie; Rao, Harsha L; Senthil, Sirisha; Vijaya, Lingam; Garudadri, Chandra Sekhar

    2014-11-01

    Purpose: The Goldmann applanation tonometer (GAT) is the current gold standard tonometer. However, calibration error is common and can go unnoticed in clinics, and repair by the manufacturer has limitations. The purpose of this report is to describe a self-taught technique for rectifying the calibration error of GAT. Materials and Methods: Twenty-nine slit-lamp-mounted Haag-Streit Goldmann tonometers (Model AT 900 C/M; Haag-Streit, Switzerland) were included in this cross-sectional interventional pilot study. The technique for rectifying the calibration error of the tonometer involved cleaning and lubrication of the instrument, followed by alignment of the weights when lubrication alone did not suffice. We followed the South East Asia Glaucoma Interest Group's definition of calibration error tolerance (acceptable GAT calibration error within ±2, ±3 and ±4 mm Hg at the 0, 20 and 60 mm Hg testing levels, respectively). Results: Twelve out of 29 (41.3%) GATs were out of calibration. The range of positive and negative calibration error at the clinically most important 20 mm Hg testing level was 0.5 to 20 mm Hg and -0.5 to -18 mm Hg, respectively. Cleaning and lubrication alone sufficed to rectify the calibration error of 11 (91.6%) faulty instruments. Only one (8.3%) faulty GAT required alignment of the counter-weight. Conclusions: Rectification of the calibration error of GAT is possible in-house. Cleaning and lubrication of GAT can be carried out even by eye care professionals and may suffice to rectify calibration error in the majority of faulty instruments. Such an exercise may drastically reduce the downtime of the gold standard tonometer.
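
    The tolerance definition quoted above is mechanical enough to encode directly; the helper below is our construction, not part of the study:

        # Check a tonometer's measured calibration errors against the quoted
        # tolerance: within +/-2, +/-3 and +/-4 mmHg at the 0, 20 and 60 mmHg
        # testing levels, respectively.
        TOLERANCE = {0: 2.0, 20: 3.0, 60: 4.0}  # testing level -> allowed error, mmHg

        def within_calibration(errors):
            """errors: dict mapping testing level (mmHg) to measured error (mmHg)."""
            return all(abs(errors[level]) <= tol for level, tol in TOLERANCE.items())

        print(within_calibration({0: 1.0, 20: 2.5, 60: -3.5}))  # True
        print(within_calibration({0: 1.0, 20: 4.0, 60: 0.0}))   # False: fails at 20 mmHg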

  9. [Diagnostic Errors in Medicine].

    PubMed

    Buser, Claudia; Bankova, Andriyana

    2015-12-01

    The recognition of diagnostic errors in everyday practice can help improve patient safety. The most common diagnostic errors are the cognitive errors, followed by system-related errors and no fault errors. The cognitive errors often result from mental shortcuts, known as heuristics. The rate of cognitive errors can be reduced by a better understanding of heuristics and the use of checklists. The autopsy as a retrospective quality assessment of clinical diagnosis has a crucial role in learning from diagnostic errors. Diagnostic errors occur more often in primary care in comparison to hospital settings. On the other hand, the inpatient errors are more severe than the outpatient errors.

  10. [Diagnostic Errors in Medicine].

    PubMed

    Buser, Claudia; Bankova, Andriyana

    2015-12-01

    The recognition of diagnostic errors in everyday practice can help improve patient safety. The most common diagnostic errors are the cognitive errors, followed by system-related errors and no fault errors. The cognitive errors often result from mental shortcuts, known as heuristics. The rate of cognitive errors can be reduced by a better understanding of heuristics and the use of checklists. The autopsy as a retrospective quality assessment of clinical diagnosis has a crucial role in learning from diagnostic errors. Diagnostic errors occur more often in primary care in comparison to hospital settings. On the other hand, the inpatient errors are more severe than the outpatient errors. PMID:26649954

  11. TU-C-BRE-07: Quantifying the Clinical Impact of VMAT Delivery Errors Relative to Prior Patients’ Plans and Adjusted for Anatomical Differences

    SciTech Connect

    Stanhope, C; Wu, Q; Yuan, L; Liu, J; Hood, R; Yin, F; Adamson, J

    2014-06-15

    Purpose: There is increased interest in the Radiation Oncology Physics community regarding the sensitivity of pre-treatment IMRT/VMAT QA to delivery errors. Consequently, tools mapping pre-treatment QA to the patient DVH have been developed. However, the quantity of plan degradation that is acceptable remains uncertain. Using DVHs adapted from prior patients' plans, we developed a technique to determine the magnitude of various delivery errors required to degrade a treatment plan to outside the clinically accepted range. Methods: DVHs for relevant organs at risk were adapted from a population of prior patients' plans using a machine learning algorithm to establish the clinically acceptable DVH range specific to the patient's anatomy. We applied this technique to six low-risk prostate cancer patients treated with single-arc VMAT and compared error-induced DVH changes to the adapted DVHs to determine the magnitude of error required to push the plan outside of the acceptable range. The procedure is as follows: (1) Errors (systematic and random shifts of MLCs, gantry-MLC desynchronization, dose rate fluctuations, etc.) were simulated and degraded DVHs calculated using the Varian Eclipse TPS. (2) Adapted DVHs and acceptable ranges for DVHs were established. (3) Relevant dosimetric indices and corresponding acceptable ranges were calculated from the DVHs. Key indices included NTCP (Lyman-Kutcher-Burman model) and QUANTEC's dose-volume objectives of V75Gy≤0.15 for the rectum and V75Gy≤0.25 for the bladder. Results: Degradations to the clinical plan became “unacceptable” for 19±29mm and 1.9±2.0mm systematic outward shifts of a single leaf and a leaf bank, respectively. All other simulated errors fell within the acceptable range. Conclusion: Utilizing machine learning and prior patients' plans one can predict a clinically acceptable range of DVH degradation for a specific patient. Comparing error-induced DVH degradations to this range, it is shown that single

  12. The Study of Prescribing Errors Among General Dentists

    PubMed Central

    Araghi, Solmaz; Sharifi, Rohollah; Ahmadi, Goran; Esfehani, Mahsa; Rezaei, Fatemeh

    2016-01-01

    Introduction: In dentistry, medicines are often prescribed to relieve pain and treat infections. A wrong prescription can therefore lead to a range of problems, including failure to relieve pain, antimicrobial treatment failure and the development of antibiotic resistance. Materials and Methods: The aim of this cross-sectional study was to evaluate common errors in prescriptions written by general dentists in Kermanshah in 2014. Dentists received a questionnaire describing five hypothetical patients and were asked to write the appropriate prescription for each patient. Information about age, gender, work experience and mode of university admission was collected. The frequency of errors in the prescriptions was determined. Data were analyzed with SPSS 20 statistical software using t-tests, chi-square tests and Pearson correlation (P < 0.05). Results: A total of 180 dentists (62.6% male and 37.4% female) with a mean age of 39.199 ± 8.23 years participated in this study. Prescription errors included the wrong pharmaceutical form (11%), failure to write the therapeutic dose (13%), wrong dose (14%), typos (15%), prescription errors (23%) and wrong number of drugs (24%). The most frequent errors were observed in prescriptions for antiviral drugs (31%), followed by antifungal drugs (30%), analgesics (23%) and antibiotics (16%). Male dentists made errors more frequently than female dentists (P=0.046). Error frequency was significantly related to a long work history (P<0.001) and to admission to university by routes other than the entrance examination (P=0.041). Conclusion: This study showed that prescriptions written by the general dentists examined contained significant errors, and improving prescribing through continuing education of dentists is essential. PMID:26573049

  13. North error estimation based on solar elevation errors in the third step of sky-polarimetric Viking navigation

    NASA Astrophysics Data System (ADS)

    Száz, Dénes; Farkas, Alexandra; Barta, András; Kretzer, Balázs; Egri, Ádám; Horváth, Gábor

    2016-07-01

    The theory of sky-polarimetric Viking navigation has been widely accepted for decades without any information about the accuracy of this method. Previously, we have measured the accuracy of the first and second steps of this navigation method in psychophysical laboratory and planetarium experiments. Now, we have tested the accuracy of the third step in a planetarium experiment, assuming that the first and second steps are errorless. Using the fists of their outstretched arms, 10 test persons had to estimate the elevation angles (measured in numbers of fists and fingers) of black dots (representing the position of the occluded Sun) projected onto the planetarium dome. The test persons performed 2400 elevation estimations, 48% of which were more accurate than ±1°. We selected three test persons with the (i) largest and (ii) smallest elevation errors and (iii) highest standard deviation of the elevation error. From the errors of these three persons, we calculated their error function, from which the North errors (the angles with which they deviated from the geographical North) were determined for summer solstice and spring equinox, two specific dates of the Viking sailing period. The range of possible North errors ΔωN was the lowest and highest at low and high solar elevations, respectively. At high elevations, the maximal ΔωN was 35.6° and 73.7° at summer solstice and 23.8° and 43.9° at spring equinox for the best and worst test person (navigator), respectively. Thus, the best navigator was twice as good as the worst one. At solstice and equinox, high elevations occur the most frequently during the day, thus high North errors could occur more frequently than expected before. According to our findings, the ideal periods for sky-polarimetric Viking navigation are immediately after sunrise and before sunset, because the North errors are the lowest at low solar elevations.

  14. Sun compass error model

    NASA Technical Reports Server (NTRS)

    Blucker, T. J.; Ferry, W. W.

    1971-01-01

    An error model is described for the Apollo 15 sun compass, a contingency navigational device. Field test data are presented along with significant results of the test. The errors reported include a random error resulting from tilt in leveling the sun compass, a random error because of observer sighting inaccuracies, a bias error because of mean tilt in compass leveling, a bias error in the sun compass itself, and a bias error because the device is leveled to the local terrain slope.
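
    Error models of this kind are typically combined into a budget by adding bias terms algebraically and random components in quadrature; a generic sketch with placeholder magnitudes (not the Apollo 15 values, which the report tabulates):

        # Combine the component errors listed above into a simple budget:
        # independent random errors add in quadrature, biases add directly.
        import math

        random_errors = [0.3, 0.2]          # deg: leveling tilt, sighting (made up)
        bias_errors = [0.1, -0.05, 0.08]    # deg: mean tilt, instrument, terrain (made up)

        random_total = math.sqrt(sum(e**2 for e in random_errors))
        bias_total = sum(bias_errors)
        print(f"random 1-sigma: {random_total:.2f} deg, net bias: {bias_total:.2f} deg")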

  15. Offer/Acceptance Ratio.

    ERIC Educational Resources Information Center

    Collins, Mimi

    1997-01-01

    Explores how human resource professionals, with above average offer/acceptance ratios, streamline their recruitment efforts. Profiles company strategies with internships, internal promotion, cooperative education programs, and how to get candidates to accept offers. Also discusses how to use the offer/acceptance ratio as a measure of program…

  16. Unforced errors and error reduction in tennis

    PubMed Central

    Brody, H

    2006-01-01

    Only at the highest level of tennis is the number of winners comparable to the number of unforced errors. As the average player loses many more points due to unforced errors than due to winners by an opponent, if the rate of unforced errors can be reduced, it should lead to an increase in points won. This article shows how players can improve their game by understanding and applying the laws of physics to reduce the number of unforced errors. PMID:16632568

  17. Computational Errors of Mentally Retarded Students.

    ERIC Educational Resources Information Center

    Janke, Robert W.

    1980-01-01

    Examined computational errors made by educable mentally retarded students on the arithmetic subtest of the Wide Range Achievement Test. Retarded students had a lower percent of grouping and inappropriate inversion errors and a higher percent of incorrect operation errors than regular students had in Engelhardt's study. (Author)

  18. Error growth in operational ECMWF forecasts

    NASA Technical Reports Server (NTRS)

    Kalnay, E.; Dalcher, A.

    1985-01-01

    A parameterization scheme used at the European Centre for Medium-Range Weather Forecasts to model the average growth of the difference between forecasts on consecutive days was extended by including the effect of forecast model deficiencies on error growth. Error was defined as the difference between the forecast and analysis fields at verification time. Systematic and random errors were considered separately in calculating the error variance for a 10 day operational forecast. A good fit was obtained with measured forecast errors, and a satisfactory trend was achieved in the difference between forecasts. Fitting six parameters to forecast errors and differences, performed separately for each wavenumber, revealed that the error growth rate grew with wavenumber. The saturation error decreased with the total wavenumber, and the limit of predictability, i.e., the time when error variance reaches 95 percent of saturation, decreased monotonically with the total wavenumber.
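
    A common form for this kind of parameterization is logistic error growth with a model-deficiency source term, dE/dt = (a*E + s)(1 - E/E_inf); the sketch below integrates it with illustrative coefficients (not the fitted ECMWF values) to locate the 95%-of-saturation predictability limit.

        # Lorenz-type error growth with a source term for model deficiencies,
        # integrated by forward Euler until error variance nears saturation.
        a, s, e_inf = 0.4, 0.05, 1.0        # growth rate (1/day), source, saturation
        dt, days = 0.05, 12
        e = 0.05                            # initial (analysis) error variance

        for step in range(int(days / dt)):
            e += dt * (a * e + s) * (1 - e / e_inf)
            if e >= 0.95 * e_inf:
                print(f"95% of saturation reached at ~{(step + 1) * dt:.1f} days")
                break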

  19. Errors, error detection, error correction and hippocampal-region damage: data and theories.

    PubMed

    MacKay, Donald G; Johnson, Laura W

    2013-11-01

    This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future tests.

  20. Errors, error detection, error correction and hippocampal-region damage: data and theories.

    PubMed

    MacKay, Donald G; Johnson, Laura W

    2013-11-01

    This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future tests. PMID:23999403

  1. Acceptability of BCG vaccination.

    PubMed

    Mande, R

    1977-01-01

    The acceptability of BCG vaccination varies a great deal according to the country and to the period when the vaccine is given. The incidence of complications does not always have a direct influence on this acceptability, which depends, to a very large extent, on the risk of tuberculosis in a given country at a given time.

  2. ATLAS ACCEPTANCE TEST

    SciTech Connect

    Cochrane, J. C. , Jr.; Parker, J. V.; Hinckley, W. B.; Hosack, K. W.; Mills, D.; Parsons, W. M.; Scudder, D. W.; Stokes, J. L.; Tabaka, L. J.; Thompson, M. C.; Wysocki, Frederick Joseph; Campbell, T. N.; Lancaster, D. L.; Tom, C. Y.

    2001-01-01

    The acceptance test program for Atlas, a 23 MJ pulsed power facility for use in the Los Alamos High Energy Density Hydrodynamics program, has been completed. Completion of this program officially releases Atlas from the construction phase and readies it for experiments. Details of the acceptance test program results and of machine capabilities for experiments will be presented.

  3. Some legal implications of pilot error.

    PubMed

    Hill, I R; Pile, R L

    1982-07-01

    Pilots are not expected to be superhuman beings, and it must therefore be accepted that they will make mistakes, some of which may have disastrous consequences. If it can be proven that the error equates with negligence in the pursuance of their duties, then they may be subjected to the full force of the Law. However, because pilot error is a multifactorial phenomenon, which is imperfectly understood, the initiation of legal proceedings may be difficult. If a penalty is to be imposed, the law demands a degree of proof which may be greater than that demanded by some investigating authorities, before implementing the appellation 'pilot error'.

  4. Error Field Correction in ITER

    SciTech Connect

    Park, Jong-kyu; Boozer, Allen H.; Menard, Jonathan E.; Schaffer, Michael J.

    2008-05-22

    A new method for correcting magnetic field errors in the ITER tokamak is developed using the Ideal Perturbed Equilibrium Code (IPEC). The dominant external magnetic field for driving islands is shown to be localized to the outboard midplane for three ITER equilibria that represent the projected range of operational scenarios. The coupling matrices between the poloidal harmonics of the external magnetic perturbations and the resonant fields on the rational surfaces that drive islands are combined for different equilibria and used to determine an ordered list of the dominant errors in the external magnetic field. It is found that efficient and robust error field correction is possible with a fixed setting of the correction currents relative to the currents in the main coils across the range of ITER operating scenarios that was considered.

  5. Structure and dating errors in the geologic time scale and periodicity in mass extinctions

    NASA Technical Reports Server (NTRS)

    Stothers, Richard B.

    1989-01-01

    Structure in the geologic time scale reflects a partly paleontological origin. As a result, ages of Cenozoic and Mesozoic stage boundaries exhibit a weak 28-Myr periodicity that is similar to the strong 26-Myr periodicity detected in mass extinctions of marine life by Raup and Sepkoski. Radiometric dating errors in the geologic time scale, to which the mass extinctions are stratigraphically tied, do not necessarily lessen the likelihood of a significant periodicity in mass extinctions, but do spread the acceptable values of the period over the range 25-27 Myr for the Harland et al. time scale or 25-30 Myr for the DNAG time scale. If the Odin time scale is adopted, acceptable periods fall between 24 and 33 Myr, but are not robust against dating errors. Some indirect evidence from independently-dated flood-basalt volcanic horizons tends to favor the Odin time scale.

  6. Advanced error-prediction LDPC with temperature compensation for highly reliable SSDs

    NASA Astrophysics Data System (ADS)

    Tokutomi, Tsukasa; Tanakamaru, Shuhei; Iwasaki, Tomoko Ogura; Takeuchi, Ken

    2015-09-01

    To improve the reliability of NAND Flash memory based solid-state drives (SSDs), error-prediction LDPC (EP-LDPC) has been proposed for multi-level-cell (MLC) NAND Flash memory (Tanakamaru et al., 2012, 2013), which is effective for long retention times. However, EP-LDPC is not as effective for triple-level cell (TLC) NAND Flash memory, because TLC NAND Flash has higher error rates and is more sensitive to program-disturb error. Therefore, advanced error-prediction LDPC (AEP-LDPC) has been proposed for TLC NAND Flash memory (Tokutomi et al., 2014). AEP-LDPC can correct errors more accurately by precisely describing the error phenomena. In this paper, the effects of AEP-LDPC are investigated in a 2×nm TLC NAND Flash memory with temperature characterization. Compared with LDPC-with-BER-only, the SSD's data-retention time is increased by 3.4× and 9.5× at room-temperature (RT) and 85 °C, respectively. Similarly, the acceptable BER is increased by 1.8× and 2.3×, respectively. Moreover, AEP-LDPC can correct errors with pre-determined tables made at higher temperatures to shorten the measurement time before shipping. Furthermore, it is found that one table can cover behavior over a range of temperatures in AEP-LDPC. As a result, the total table size can be reduced to 777 kBytes, which makes this approach more practical.

  7. Quantifying errors without random sampling

    PubMed Central

    Phillips, Carl V; LaPole, Luwanna M

    2003-01-01

    Background All quantifications of mortality, morbidity, and other health measures involve numerous sources of error. The routine quantification of random sampling error makes it easy to forget that other sources of error can and should be quantified. When a quantification does not involve sampling, error is almost never quantified and results are often reported in ways that dramatically overstate their precision. Discussion We argue that the precision implicit in typical reporting is problematic and sketch methods for quantifying the various sources of error, building up from simple examples that can be solved analytically to more complex cases. There are straightforward ways to partially quantify the uncertainty surrounding a parameter that is not characterized by random sampling, such as limiting reported significant figures. We present simple methods for doing such quantifications, and for incorporating them into calculations. More complicated methods become necessary when multiple sources of uncertainty must be combined. We demonstrate that Monte Carlo simulation, using available software, can estimate the uncertainty resulting from complicated calculations with many sources of uncertainty. We apply the method to the current estimate of the annual incidence of foodborne illness in the United States. Summary Quantifying uncertainty from systematic errors is practical. Reporting this uncertainty would more honestly represent study results, help show the probability that estimated values fall within some critical range, and facilitate better targeting of further research. PMID:12892568
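
    A minimal sketch of the Monte Carlo approach the authors describe, with invented stand-in distributions rather than their foodborne-illness inputs:

        # Propagate several non-sampling uncertainty sources through a
        # calculation by drawing each input from a distribution encoding its
        # plausible range; report an uncertainty interval on the output.
        import numpy as np

        rng = np.random.default_rng(3)
        n = 100_000
        cases_reported = rng.normal(50_000, 5_000, n)    # surveillance count
        underreport_factor = rng.uniform(10, 40, n)      # true cases per reported case
        attributable = rng.beta(8, 2, n)                 # fraction truly foodborne

        incidence = cases_reported * underreport_factor * attributable
        lo, mid, hi = np.percentile(incidence, [2.5, 50, 97.5])
        print(f"median {mid:,.0f}, 95% uncertainty interval {lo:,.0f} - {hi:,.0f}")

    Reporting the interval rather than a single figure is exactly the honesty about precision the abstract argues for.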

  8. Field error lottery

    NASA Astrophysics Data System (ADS)

    James Elliott, C.; McVey, Brian D.; Quimby, David C.

    1991-07-01

    The level of field errors in a free electron laser (FEL) is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is use of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond convenient mechanical tolerances of ± 25 μm, and amelioration of these may occur by a procedure using direct measurement of the magnetic fields at assembly time.
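
    The "lottery" aspect, i.e. performance depending on the random number seed at a fixed error level, can be illustrated with a toy model (a placeholder, not the FELEX code):

        # For each field-error level, the figure of merit varies with the
        # random seed, so performance is reported as a spread across seeds.
        import numpy as np

        for level in np.linspace(0.0, 0.3, 4):           # normalized rms field error
            gains = []
            for seed in range(8):                        # several random seeds
                rng = np.random.default_rng(seed)
                walk = rng.normal(0.0, level, 50).cumsum()  # error-driven wander
                gains.append(np.exp(-np.var(walk)))      # toy relative gain
            print(f"error {level:.1f}: gain {min(gains):.3f} - {max(gains):.3f}")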

  9. Field error lottery

    NASA Astrophysics Data System (ADS)

    Elliott, C. James; McVey, Brian D.; Quimby, David C.

    1990-11-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement, and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time.

  10. Field error lottery

    SciTech Connect

    Elliott, C.J.; McVey, B. ); Quimby, D.C. )

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  11. Inborn errors of metabolism

    MedlinePlus

    Metabolism - inborn errors of ... Bodamer OA. Approach to inborn errors of metabolism. In: Goldman L, Schafer AI, eds. Goldman's Cecil Medicine . 25th ed. Philadelphia, PA: Elsevier Saunders; 2015:chap 205. Rezvani I, Rezvani G. An ...

  12. Immediate error correction process following sleep deprivation.

    PubMed

    Hsieh, Shulan; Cheng, I-Chen; Tsai, Ling-Ling

    2007-06-01

    Previous studies have suggested that one night of sleep deprivation decreases frontal lobe metabolic activity, particularly in the anterior cingulate cortex (ACC), resulting in decreased performance in various executive function tasks. This study thus attempted to address whether sleep deprivation impaired the executive function of error detection and error correction. Sixteen young healthy college students (seven women, nine men, with ages ranging from 18 to 23 years) participated in this study. Participants performed a modified letter flanker task and were instructed to make immediate error corrections on detecting performance errors. Event-related potentials (ERPs) during the flanker task were obtained using a within-subject, repeated-measure design. The error negativity or error-related negativity (Ne/ERN) and the error positivity (Pe) seen immediately after errors were analyzed. The results show that the amplitude of the Ne/ERN was reduced significantly following sleep deprivation. Reduction also occurred for error trials with subsequent correction, indicating that sleep deprivation influenced error correction ability. This study further demonstrated that the impairment in immediate error correction following sleep deprivation was confined to specific stimulus types, with both Ne/ERN and behavioral correction rates being reduced only for trials in which flanker stimuli were incongruent with the target stimulus, while the response to the target was compatible with that of the flanker stimuli following sleep deprivation. The results thus warrant future systematic investigation of the interaction between stimulus type and error correction following sleep deprivation. PMID:17542943

  13. Detection and avoidance of errors in computer software

    NASA Technical Reports Server (NTRS)

    Kinsler, Les

    1989-01-01

    The acceptance test errors of a computer software project were analyzed to determine whether the errors could have been detected or avoided in earlier phases of development. GROAGSS (Gamma Ray Observatory Attitude Ground Support System) was selected as the software project to be examined. The development of the software followed the standard Flight Dynamics Software Development methods. GROAGSS was developed between August 1985 and April 1989. The project comprises approximately 250,000 lines of code, of which approximately 43,000 lines were reused from previous projects. GROAGSS had a total of 1715 Change Report Forms (CRFs) submitted during the entire development and testing. These changes contained 936 errors. Of these 936 errors, 374 were found during the acceptance testing. These acceptance test errors were first categorized into methods of avoidance, including: more clearly written requirements; detailed review; code reading; structural unit testing; and functional system integration testing. The errors were later broken down in terms of effort to detect and correct, class of error, and probability that the prescribed detection method would be successful. These determinations were based on Software Engineering Laboratory (SEL) documents and interviews with the project programmers. A summary of the results of the categorizations is presented. The number of programming errors at the beginning of acceptance testing can be significantly reduced. The results of the existing development methodology are examined for ways of improvement. A basis is provided for the definition of a new development/testing paradigm. Monitoring of the new scheme will objectively determine its effectiveness in avoiding and detecting errors.

  14. Drug Errors in Anaesthesiology

    PubMed Central

    Jain, Rajnish Kumar; Katiyar, Sarika

    2009-01-01

    Summary Medication errors are a leading cause of morbidity and mortality in hospitalized patients. The incidence of these drug errors during anaesthesia is not certain. They impose a considerable financial burden on health care systems, apart from the patient losses. Common causes of these errors and their prevention are discussed. PMID:20640103

  15. Disruption of N terminus long range non covalent interactions shifted temp.opt 25°C to cold: Evolution of point mutant Bacillus lipase by error prone PCR.

    PubMed

    Goomber, Shelly; Kumar, Arbind; Kaur, Jagdeep

    2016-01-15

    Cold-adapted enzymes have applications in detergent, textile, food, bioremediation and biotechnology processes. Bacillus lipases are 'generally recognized as safe' (GRAS) and hence are industrially attractive. Bacillus lipases of subfamily 1.4 have the lowest molecular weight and unfold reversibly owing to the absence of disulphide bonds; they are therefore widely used to study the energetics of protein stability, which represents the unfolding of the native protein to the fully unfolded state. In the present study, the metagenomically isolated Bacillus lipase LipJ was evolved in the laboratory for cold adaptation by error-prone PCR. A library of variants was screened for high relative activity at the low temperature of 10°C compared to the native protein LipJ. A point mutant, sequenced as Phe19→Leu, was found to be active in the cold and was selected for extensive biochemical and biophysical characterization. Variant F19L showed its maximum activity at 10°C, where the parent protein LipJ had 20% relative activity. The psychrophilic nature of F19L was established by its roughly 50% relative activity at 5°C, where the native protein showed no activity. Variant F19L showed no activity at temperatures of 40°C and above, establishing its thermolabile nature. Thermostability studies found the mutant to be unstable above 20°C, with a three-fold decrease in its half-life at 30°C compared to the native protein. Far-UV CD and intrinsic fluorescence studies demonstrated the unstable tertiary structure of point variant F19L, leading to its unfolding at the low temperature of 20°C. The cold adaptation of mutant F19L is accompanied by increased specific activity; the mutant was catalytically more efficient, with a 1.3-fold increase in kcat. Homologue structure modelling predicted disruption of the inter-secondary hydrophobic core formed by the aromatic ring of Phe19 with non-polar residues at β3, β4, β5, β6 and αF. The increased local flexibility of variant F19L explains the molecular basis of its psychrophilic nature.

  16. Reduction of Maintenance Error Through Focused Interventions

    NASA Technical Reports Server (NTRS)

    Kanki, Barbara G.; Walter, Diane; Rosekind, Mark R. (Technical Monitor)

    1997-01-01

    It is well known that a significant proportion of aviation accidents and incidents are tied to human error. In flight operations, research of operational errors has shown that so-called "pilot error" often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the "team" concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: to develop human factors interventions which are directly supported by reliable human error data, and to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.

  17. [Medical errors in obstetrics].

    PubMed

    Marek, Z

    1984-08-01

    Errors in medicine may fall into 3 main categories: 1) medical errors made only by physicians, 2) technical errors made by physicians and other health care specialists, and 3) organizational errors associated with mismanagement of medical facilities. This classification of medical errors, as well as their definition and handling, fully applies to obstetrics. However, obstetrics differs from other fields of medicine in that an obstetrician usually deals with healthy women. At the same time, professional risk in obstetrics is very high, as errors and malpractice can lead to very serious complications. Observations show that the most frequent obstetrical errors occur in induced abortions, diagnosis of pregnancy, selection of optimal delivery techniques, treatment of hemorrhages, and other complications. Therefore, the obstetrician should be prepared to use intensive care procedures similar to those used for resuscitation.

  18. [Errors in laboratory daily practice].

    PubMed

    Larrose, C; Le Carrer, D

    2007-01-01

    Legislation set by the GBEA (Guide de bonne exécution des analyses) requires that, before performing an analysis, laboratory directors check both the nature of the samples and the patient's identity. The data processing of requisition forms, which identifies key errors, was established in 2000 and in 2002 by the specialized biochemistry laboratory, with the contribution of the reception centre for biological samples. The laboratories follow strict acceptability criteria at reception when checking requisition forms and biological samples. All errors are logged into the laboratory database, and analysis reports are sent to the care unit specifying the problems and their consequences for the analysis. The data are then assessed by the laboratory directors to produce monthly or annual statistical reports. These indicate the number of errors, indexed to patient files to reveal the specific problem areas, thereby allowing the laboratory directors to instruct the nurses and enable corrective action.

  19. Diagnostic 'errors' in anatomical pathology: relevance to Australian laboratories.

    PubMed

    Leong, Anthony S Y; Braye, Stephen; Bhagwandeen, Brahm

    2006-12-01

    Failure to recognise that anatomical pathology diagnosis is a process of cognitive interpretation of the morphological features present in a small tissue sample has led to the public misperception that the process is infallible. The absence of a universally accepted definition of diagnostic error makes comparison of error rates impossible, and one large study of laboratories in the United States shows a significant error rate of about 5%, most of these errors having no major impact on patient management. A recent review of the work of one pathologist in New South Wales confirms a lack of appreciation in medical administration that variable diagnostic thresholds result in an inherent fallibility of anatomical pathology diagnoses. The outcome of the review emphasises the need to educate both the public and non-pathology colleagues about the nature of our work, and brings into consideration the requirement to establish baseline error rates for Australian laboratories and the role of the Royal College of Pathologists of Australasia (RCPA) in developing fair and unbiased protocols for review of diagnostic errors. The responsibility of ensuring that diagnostic error rates are kept to a minimum is a shared one. Area health services must play their part by seeking to ensure that pathologists in any laboratory are not overworked and have adequate support and back-up from pathologists with expertise in specialised areas. It has been clearly enunciated by the Royal College of Pathologists in the United Kingdom that it is not safe for any histopathology service to be operated single-handedly by one histopathologist. Service managers and clinicians have to understand that country pathologists cannot provide the full range and depth of pathology expertise in the many clinical subspecialty areas that are often practised in non-metropolitan areas. Attending clinicians share the responsibility by accepting proffered pathology diagnoses only if they conform to the clinical context.

  20. Aircraft system modeling error and control error

    NASA Technical Reports Server (NTRS)

    Kulkarni, Nilesh V. (Inventor); Kaneshige, John T. (Inventor); Krishnakumar, Kalmanje S. (Inventor); Burken, John J. (Inventor)

    2012-01-01

    A method for modeling error-driven adaptive control of an aircraft. Normal aircraft plant dynamics are modeled using an original plant description in which a controller responds to a tracking error e(k) to drive the component to a normal reference value according to an asymptote curve. Where the system senses that (1) at least one aircraft plant component is experiencing an excursion and (2) the return of this component value toward its reference value is not proceeding according to the expected controller characteristics, neural network (NN) modeling of aircraft plant operation may be changed. However, if (1) is satisfied but the error component is returning toward its reference value according to expected controller characteristics, the NN will continue to model operation of the aircraft plant according to the original description.

  1. A Fourier analysis on the maximum acceptable grid size for discrete proton beam dose calculation

    SciTech Connect

    Li, Haisen S.; Romeijn, H. Edwin; Dempsey, James F.

    2006-09-15

    The orientation of the beam with respect to the dose grid was also investigated. The maximum acceptable dose grid size depends on the gradient of the dose profile and, in turn, on the range of the proton beam. In the case where only phantom scattering was considered and the beam was aligned with the dose grid, grid sizes from 0.4 to 6.8 mm were required for proton beams with ranges from 2 to 30 cm to keep the error within a 2% limit at the Bragg peak. A near-linear relation between the maximum acceptable grid size and beam range was observed. For this analysis model, the resolution requirement was not significantly related to the orientation of the beam with respect to the grid.
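
    The dependence of acceptable grid size on the dose gradient can be illustrated with a toy calculation. The Python sketch below uses a Gaussian-shaped peak and illustrative grid sizes as stand-ins for a real proton depth-dose model: it samples a sharp peak at several grid spacings, interpolates back to a fine grid, and reports the resulting error near the peak.

        import numpy as np

        # Hypothetical 1-D depth-dose with a sharp Bragg-peak-like feature
        # (a Gaussian stand-in, not a physical proton beam model).
        def dose(z_cm):
            return 0.3 + np.exp(-0.5 * ((z_cm - 15.0) / 0.3) ** 2)

        z_fine = np.linspace(0.0, 20.0, 20001)
        d_fine = dose(z_fine)

        for grid_mm in (0.5, 1.0, 2.0, 4.0):
            z_grid = np.arange(0.0, 20.0 + 1e-9, grid_mm / 10.0)   # grid in cm
            d_interp = np.interp(z_fine, z_grid, dose(z_grid))     # linear rebuild
            err = np.max(np.abs(d_interp - d_fine)) / d_fine.max()
            print(f"{grid_mm:4.1f} mm grid -> {100 * err:5.2f}% peak-region error")

    Coarser grids miss the narrow peak, so the error grows with grid size, mirroring the trend reported above.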

  2. Comparing Absolute Error with Squared Error for Evaluating Empirical Models of Continuous Variables: Compositions, Implications, and Consequences

    NASA Astrophysics Data System (ADS)

    Gao, J.

    2014-12-01

    Reducing modeling error is often a major concern of empirical geophysical models. However, modeling errors can be defined in different ways: when the response variable is continuous, the most commonly used metrics are squared (SQ) and absolute (ABS) errors. For most applications, ABS error is the more natural, but SQ error is mathematically more tractable, so it is often used as a substitute with little scientific justification. The existing literature has not thoroughly investigated the implications of using SQ error in place of ABS error, especially not geospatially. This study compares the two metrics through the lens of bias-variance decomposition (BVD). BVD breaks down the expected modeling error at each model evaluation point into bias (systematic error), variance (model sensitivity), and noise (observation instability). It offers a way to probe the composition of various error metrics. I analytically derived the BVD of ABS error and compared it with the well-known SQ error BVD, and found that not only do the two metrics measure the characteristics of the probability distributions of modeling errors differently, but the effects of these characteristics on the overall expected error also differ. Most notably, under SQ error all of bias, variance, and noise increase expected error, while under ABS error certain parts of the error components reduce expected error. Since manipulating these subtractive terms is a legitimate way to reduce expected modeling error, SQ error can never capture the complete story embedded in ABS error. I then empirically compared the two metrics with a supervised remote sensing model for mapping surface imperviousness. Pair-wise spatially explicit comparison of each error component showed that SQ error overstates all error components in comparison to ABS error, especially the variance-related terms. Hence, substituting ABS error with SQ error makes model performance appear worse than it actually is, and the analyst would more likely accept a
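
    The asymmetry described above is easy to verify numerically. In the following Python sketch the bias, variance, and noise magnitudes are arbitrary illustrative choices: the squared error reproduces the additive bias² + variance + noise identity, while the absolute error admits no such additive split.

        import numpy as np

        rng = np.random.default_rng(0)

        # Repeated "model fits" at one evaluation point: predictions with
        # systematic bias and sensitivity (variance); observations with noise.
        y_true = 10.0
        preds = y_true + 1.5 + rng.normal(0.0, 2.0, 100_000)   # bias 1.5, var 4
        obs = y_true + rng.normal(0.0, 1.0, 100_000)           # noise var 1

        sq_err = np.mean((preds - obs) ** 2)
        abs_err = np.mean(np.abs(preds - obs))

        bias2 = (preds.mean() - y_true) ** 2
        variance = preds.var()
        noise = obs.var()
        print(sq_err, bias2 + variance + noise)   # the two agree (additive BVD)
        print(abs_err)                            # no comparable additive split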

  3. Smaller hospitals accept advertising.

    PubMed

    Mackesy, R

    1988-07-01

    Administrators at small- and medium-sized hospitals gradually have accepted the role of marketing in their organizations, albeit at a much slower rate than larger institutions. This update of a 1983 survey tracks the increasing competitiveness, complexity and specialization of providing health care and of advertising a small hospital's services. PMID:10288550

  4. Students Accepted on Probation.

    ERIC Educational Resources Information Center

    Lorberbaum, Caroline S.

    This report is a justification of the Dalton Junior College admissions policy designed to help students who had had academic and/or social difficulties at other schools. These students were accepted on probation, their problems carefully analyzed, and much effort devoted to those with low academic potential. They received extensive academic and…

  5. Approaches to acceptable risk

    SciTech Connect

    Whipple, C.

    1997-04-30

    Several alternative approaches to address the question "How safe is safe enough?" are reviewed and an attempt is made to apply the reasoning behind these approaches to the issue of acceptability of radiation exposures received in space. The approaches to the issue of the acceptability of technological risk described here are primarily analytical, and are drawn from examples in the management of environmental health risks. These include risk-based approaches, in which specific quantitative risk targets determine the acceptability of an activity, and cost-benefit and decision analysis, which generally focus on the estimation and evaluation of risks, benefits and costs, in a framework that balances these factors against each other. These analytical methods tend by their quantitative nature to emphasize the magnitude of risks, costs and alternatives, and to downplay other factors, especially those that are not easily expressed in quantitative terms, that affect acceptance or rejection of risk. Such other factors include the issues of risk perceptions and how and by whom risk decisions are made.

  6. Why was Relativity Accepted?

    NASA Astrophysics Data System (ADS)

    Brush, S. G.

    Historians of science have published many studies of the reception of Einstein's special and general theories of relativity. Based on a review of these studies, and my own research on the role of the light-bending prediction in the reception of general relativity, I discuss the role of three kinds of reasons for accepting relativity: (1) empirical predictions and explanations; (2) social-psychological factors; and (3) aesthetic-mathematical factors. According to the historical studies, acceptance was a three-stage process. First, a few leading scientists adopted the special theory for aesthetic-mathematical reasons. In the second stage, their enthusiastic advocacy persuaded other scientists to work on the theory and apply it to problems currently of interest in atomic physics. The special theory was accepted by many German physicists by 1910 and had begun to attract some interest in other countries. In the third stage, the confirmation of Einstein's light-bending prediction attracted much public attention and forced all physicists to take the general theory of relativity seriously. In addition to light-bending, the explanation of the advance of Mercury's perihelion was considered strong evidence by theoretical physicists. The American astronomers who conducted successful tests of general relativity became defenders of the theory. There is little evidence that relativity was 'socially constructed' but its initial acceptance was facilitated by the prestige and resources of its advocates.

  7. The problem of assessing landmark error in geometric morphometrics: theory, methods, and modifications.

    PubMed

    von Cramon-Taubadel, Noreen; Frazier, Brenda C; Lahr, Marta Mirazón

    2007-09-01

    Geometric morphometric methods rely on the accurate identification and quantification of landmarks on biological specimens. As in any empirical analysis, the assessment of inter- and intra-observer error is desirable. A review of methods currently being employed to assess measurement error in geometric morphometrics was conducted and three general approaches to the problem were identified. One such approach employs Generalized Procrustes Analysis to superimpose repeatedly digitized landmark configurations, thereby establishing whether repeat measures fall within an acceptable range of variation. The potential problem of this error assessment method (the "Pinocchio effect") is demonstrated and its effect on error studies discussed. An alternative approach involves employing Euclidean distances between the configuration centroid and repeat measures of a landmark to assess the relative repeatability of individual landmarks. This method is also potentially problematic as the inherent geometric properties of the specimen can result in misleading estimates of measurement error. A third approach involved the repeated digitization of landmarks with the specimen held in a constant orientation to assess individual landmark precision. This latter approach is an ideal method for assessing individual landmark precision, but is restrictive in that it does not allow for the incorporation of instrumentally defined or Type III landmarks. Hence, a revised method for assessing landmark error is proposed and described with the aid of worked empirical examples.
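
    For the third approach (repeat digitization with the specimen held in a constant orientation), per-landmark precision can be summarized directly from the repeat measures. A minimal Python sketch, with simulated digitizations standing in for real landmark data:

        import numpy as np

        # Hypothetical repeats: 10 digitization trials x 5 landmarks x 3 coords,
        # specimen held in a constant orientation throughout.
        rng = np.random.default_rng(1)
        true_lm = rng.uniform(0.0, 100.0, size=(5, 3))
        repeats = true_lm + rng.normal(0.0, 0.2, size=(10, 5, 3))

        mean_lm = repeats.mean(axis=0)                    # consensus per landmark
        dev = np.linalg.norm(repeats - mean_lm, axis=2)   # trial-to-consensus distances
        for i, (m, s) in enumerate(zip(dev.mean(axis=0), dev.std(axis=0))):
            print(f"landmark {i}: mean deviation {m:.3f} +/- {s:.3f}")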

  8. Error detection method

    DOEpatents

    Olson, Eric J.

    2013-06-11

    An apparatus, program product, and method that run an algorithm on a hardware based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to cause the hardware based processor to heat to a degree that increases the likelihood of hardware errors to manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run that may otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).
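
    The compare-two-runs idea can be sketched in a few lines. This is an illustrative Python analogue, not the patented apparatus: a real implementation would choose a kernel tuned to heat and stress the target hardware and would compare against a reference output from a known-good run.

        import hashlib

        def stress_kernel(n: int = 500_000) -> str:
            """Deterministic, compute-heavy kernel; integer math is exact,
            so two fault-free runs must produce identical digests."""
            acc = 0
            for i in range(1, n):
                acc = (acc * 6364136223846793005 + i) % (1 << 64)
            return hashlib.sha256(str(acc).encode()).hexdigest()

        if stress_kernel() != stress_kernel():
            print("hardware error detected: identical runs disagreed")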

  9. 14 CFR 415.35 - Acceptable flight risk.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    14 CFR, Aeronautics and Space, Vol. 4 (2013). Launch Range, § 415.35 Acceptable flight risk. (a) Flight risk through orbital insertion or impact. Acceptable flight risk through orbital insertion for an orbital launch vehicle, and through impact for...

  10. 14 CFR 415.35 - Acceptable flight risk.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    14 CFR, Aeronautics and Space, Vol. 4 (2014). Launch Range, § 415.35 Acceptable flight risk. (a) Flight risk through orbital insertion or impact. Acceptable flight risk through orbital insertion for an orbital launch vehicle, and through impact for...

  11. 14 CFR 415.35 - Acceptable flight risk.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    14 CFR, Aeronautics and Space, Vol. 4 (2012). Launch Range, § 415.35 Acceptable flight risk. (a) Flight risk through orbital insertion or impact. Acceptable flight risk through orbital insertion for an orbital launch vehicle, and through impact for...

  12. 14 CFR 415.35 - Acceptable flight risk.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    14 CFR, Aeronautics and Space, Vol. 4 (2011). Launch Range, § 415.35 Acceptable flight risk. (a) Flight risk through orbital insertion or impact. Acceptable flight risk through orbital insertion for an orbital launch vehicle, and through impact for...

  13. 14 CFR 415.35 - Acceptable flight risk.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    14 CFR, Aeronautics and Space, Vol. 4 (2010). Launch Range, § 415.35 Acceptable flight risk. (a) Flight risk through orbital insertion or impact. Acceptable flight risk through orbital insertion for an orbital launch vehicle, and through impact for...

  14. Model Error Budgets

    NASA Technical Reports Server (NTRS)

    Briggs, Hugh C.

    2008-01-01

    An error budget is a commonly used tool in the design of complex aerospace systems. It represents system performance requirements in terms of allowable errors and flows these down through a hierarchical structure to lower assemblies and components. The requirements may simply be 'allocated' based upon heuristics or experience, or they may be designed through the use of physics-based models. This paper presents a basis for developing an error budget for models of the system, as opposed to the system itself. The need for model error budgets arises when system models are a principal design agent, as is increasingly common for poorly testable high-performance space systems.

  15. Numerical modelling errors in electrical impedance tomography.

    PubMed

    Dehghani, Hamid; Soleimani, Manuchehr

    2007-07-01

    Electrical impedance tomography (EIT) is a non-invasive technique that aims to reconstruct images of internal impedance values of a volume of interest, based on measurements taken on the external boundary. Since most reconstruction algorithms rely on model-based approximations, it is important to ensure numerical accuracy for the model being used. This work demonstrates and highlights the importance of accurate modelling in terms of model discretization (meshing) and shows that although the predicted boundary data from a forward model may be within an accepted error, the calculated internal field, which is often used for image reconstruction, may contain mesh-quality-dependent errors that result in image artefacts.

  16. Acceptability of human risk.

    PubMed Central

    Kasperson, R E

    1983-01-01

    This paper has three objectives: to explore the nature of the problem implicit in the term "risk acceptability," to examine the possible contributions of scientific information to risk standard-setting, and to argue that societal response is best guided by considerations of process rather than formal methods of analysis. Most technological risks are not accepted but are imposed. There is also little reason to expect consensus among individuals on their tolerance of risk. Moreover, debates about risk levels are often at base debates over the adequacy of the institutions which manage the risks. Scientific information can contribute three broad types of analyses to risk-setting deliberations: contextual analysis, equity assessment, and public preference analysis. More effective risk-setting decisions will involve attention to the process used, particularly in regard to the requirements of procedural justice and democratic responsibility. PMID:6418541

  17. Error margin for antenna gain measurements

    NASA Technical Reports Server (NTRS)

    Cable, V.

    2002-01-01

    The specification of measured antenna gain is incomplete without knowing the error of the measurement. Also, unless gain is measured many times for a single antenna or over many identical antennas, the uncertainty or error in a single measurement is only an estimate. In this paper, we will examine in detail a typical error budget for common antenna gain measurements. We will also compute the gain uncertainty for a specific UHF horn test that was recently performed on the Jet Propulsion Laboratory (JPL) antenna range. The paper concludes with comments on these results and how they compare with the 'unofficial' JPL range standard of +/- ?.

  18. Preventing errors in laterality.

    PubMed

    Landau, Elliot; Hirschorn, David; Koutras, Iakovos; Malek, Alexander; Demissie, Seleshie

    2015-04-01

    An error in laterality is the reporting of a finding that is present on the right side as being on the left, or vice versa. While various medical and surgical specialties have implemented protocols to help prevent such errors, very few studies have been published that describe these errors in radiology reports and ways to prevent them. We devised a system that allows the radiologist to view reports in a separate window, displayed in a simple font and with all terms of laterality highlighted in distinct colors. This allows the radiologist to check every detected laterality term in the report against the images open in PACS and correct errors before the report is finalized. The system logs each detected error in laterality. The system detected 32 errors in laterality over a 7-month period (a rate of 0.0007%), with CT showing the highest error detection rate of all modalities. Significantly more errors were detected in male patients than in female patients. In conclusion, our study demonstrated that with our system, laterality errors can be detected and corrected prior to finalizing reports.
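
    The highlighting step amounts to scanning the report text for laterality terms so that each can be verified against the images before sign-off. A minimal Python sketch; the term list and sample report are illustrative, not the authors' system:

        import re

        LATERALITY = re.compile(r"\b(left|right|bilateral)\b", re.IGNORECASE)

        report = "Nondisplaced fracture of the left distal radius. Right wrist intact."
        for m in LATERALITY.finditer(report):
            print(f"offset {m.start():3d}: {m.group(0)}")   # each term to verify in PACS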

  19. Refractive error blindness.

    PubMed Central

    Dandona, R.; Dandona, L.

    2001-01-01

    Recent data suggest that a large number of people are blind in different parts of the world due to high refractive error because they are not using appropriate refractive correction. Refractive error as a cause of blindness has been recognized only recently with the increasing use of presenting visual acuity for defining blindness. In addition to blindness due to naturally occurring high refractive error, inadequate refractive correction of aphakia after cataract surgery is also a significant cause of blindness in developing countries. Blindness due to refractive error in any population suggests that eye care services in general in that population are inadequate since treatment of refractive error is perhaps the simplest and most effective form of eye care. Strategies such as vision screening programmes need to be implemented on a large scale to detect individuals suffering from refractive error blindness. Sufficient numbers of personnel to perform reasonable quality refraction need to be trained in developing countries. Also adequate infrastructure has to be developed in underserved areas of the world to facilitate the logistics of providing affordable reasonable-quality spectacles to individuals suffering from refractive error blindness. Long-term success in reducing refractive error blindness worldwide will require attention to these issues within the context of comprehensive approaches to reduce all causes of avoidable blindness. PMID:11285669

  20. Everyday Scale Errors

    ERIC Educational Resources Information Center

    Ware, Elizabeth A.; Uttal, David H.; DeLoache, Judy S.

    2010-01-01

    Young children occasionally make "scale errors"--they attempt to fit their bodies into extremely small objects or attempt to fit a larger object into another, tiny, object. For example, a child might try to sit in a dollhouse-sized chair or try to stuff a large doll into it. Scale error research was originally motivated by parents' and…

  1. Age and Acceptance of Euthanasia.

    ERIC Educational Resources Information Center

    Ward, Russell A.

    1980-01-01

    Study explores relationship between age (and sex and race) and acceptance of euthanasia. Women and non-Whites were less accepting because of religiosity. Among older people less acceptance was attributable to their lesser education and greater religiosity. Results suggest that quality of life in old age affects acceptability of euthanasia. (Author)

  2. Proofreading for word errors.

    PubMed

    Pilotti, Maura; Chodorow, Martin; Agpawa, Ian; Krajniak, Marta; Mahamane, Salif

    2012-04-01

    Proofreading (i.e., reading text for the purpose of detecting and correcting typographical errors) is viewed as a component of the activity of revising text and thus is a necessary (albeit not sufficient) procedural step for enhancing the quality of a written product. The purpose of the present research was to test competing accounts of word-error detection which predict factors that may influence reading and proofreading differently. Word errors, which change a word into another word (e.g., from --> form), were selected for examination because they are unlikely to be detected by automatic spell-checking functions. Consequently, their detection still rests mostly in the hands of the human proofreader. Findings highlighted the weaknesses of existing accounts of proofreading and identified factors, such as length and frequency of the error in the English language relative to frequency of the correct word, which might play a key role in detection of word errors.

  3. Reduced discretization error in HZETRN

    NASA Astrophysics Data System (ADS)

    Slaba, Tony C.; Blattnig, Steve R.; Tweed, John

    2013-02-01

    The deterministic particle transport code HZETRN is an efficient analysis tool for studying the effects of space radiation on humans, electronics, and shielding materials. In a previous work, numerical methods in the code were reviewed, and new methods were developed that further improved efficiency and reduced overall discretization error. It was also shown that the remaining discretization error could be attributed to low energy light ions (A < 4) with residual ranges smaller than the physical step-size taken by the code. Accurately resolving the spectrum of low energy light particles is important in assessing risk associated with astronaut radiation exposure. In this work, modifications to the light particle transport formalism are presented that accurately resolve the spectrum of low energy light ion target fragments. The modified formalism is shown to significantly reduce overall discretization error and allows a physical approximation to be removed. For typical step-sizes and energy grids used in HZETRN, discretization errors for the revised light particle transport algorithms are shown to be less than 4% for aluminum and water shielding thicknesses as large as 100 g/cm² exposed to both solar particle event and galactic cosmic ray environments.

  4. Reduced discretization error in HZETRN

    SciTech Connect

    Slaba, Tony C.; Blattnig, Steve R.; Tweed, John

    2013-02-01

    The deterministic particle transport code HZETRN is an efficient analysis tool for studying the effects of space radiation on humans, electronics, and shielding materials. In a previous work, numerical methods in the code were reviewed, and new methods were developed that further improved efficiency and reduced overall discretization error. It was also shown that the remaining discretization error could be attributed to low energy light ions (A < 4) with residual ranges smaller than the physical step-size taken by the code. Accurately resolving the spectrum of low energy light particles is important in assessing risk associated with astronaut radiation exposure. In this work, modifications to the light particle transport formalism are presented that accurately resolve the spectrum of low energy light ion target fragments. The modified formalism is shown to significantly reduce overall discretization error and allows a physical approximation to be removed. For typical step-sizes and energy grids used in HZETRN, discretization errors for the revised light particle transport algorithms are shown to be less than 4% for aluminum and water shielding thicknesses as large as 100 g/cm² exposed to both solar particle event and galactic cosmic ray environments.

  5. Errors in neuroradiology.

    PubMed

    Caranci, Ferdinando; Tedeschi, Enrico; Leone, Giuseppe; Reginelli, Alfonso; Gatta, Gianluca; Pinto, Antonio; Squillaci, Ettore; Briganti, Francesco; Brunese, Luca

    2015-09-01

    Approximately 4% of radiologic interpretations in daily practice contain errors, and discrepancies occur in 2-20% of reports. Fortunately, most are minor errors or, if serious, are found and corrected with sufficient promptness; diagnostic errors become critical when misinterpretation or misidentification significantly delays medical or surgical treatment. Errors can be summarized into four main categories: observer errors, errors in interpretation, failure to suggest the next appropriate procedure, and failure to communicate in a timely and clinically appropriate manner. Misdiagnosis/misinterpretation rates rise in the emergency setting and early in the learning curve, as in residency. Para-physiological and pathological pitfalls in neuroradiology include calcifications and brain stones, pseudofractures, enlargement of subarachnoid or epidural spaces, ventricular system abnormalities, vascular system abnormalities, intracranial lesions or pseudolesions, and neuroradiological emergencies. In order to minimize the possibility of error, it is important to be aware of the various presentations of pathology, obtain clinical information, know current practice guidelines, review each diagnostic study after interpreting it, suggest follow-up studies when appropriate, and communicate significant abnormal findings appropriately and in a timely fashion directly to the treatment team.

  6. Uncorrected refractive errors.

    PubMed

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of whom 670 million are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as the Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low- and middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting the educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship. PMID:22944755

  7. Error Prevention Aid

    NASA Technical Reports Server (NTRS)

    1987-01-01

    In a complex computer environment there is ample opportunity for error, a mistake by a programmer, or a software-induced undesirable side effect. In insurance, errors can cost a company heavily, so protection against inadvertent change is a must for the efficient firm. The data processing center at Transport Life Insurance Company has taken a step to guard against accidental changes by adopting a software package called EQNINT (Equations Interpreter Program). EQNINT cross checks the basic formulas in a program against the formulas that make up the major production system. EQNINT assures that formulas are coded correctly and helps catch errors before they affect the customer service or its profitability.

  8. Baby-Crying Acceptance

    NASA Astrophysics Data System (ADS)

    Martins, Tiago; de Magalhães, Sérgio Tenreiro

    A baby's crying is its most important means of communication. Crying monitoring performed by the devices developed so far does not ensure the complete safety of the child. It is necessary to complement these technological resources with means of communicating the results to the person responsible, which involves digital processing of the information available in the crying. The survey carried out made it possible to assess the level of adoption, in the continental territory of Portugal, of a technology able to perform such digital processing. The Technology Acceptance Model (TAM) was used as the theoretical framework. The statistical analysis showed a good probability of acceptance of such a system.

  9. On-Machine Acceptance

    SciTech Connect

    Arnold, K.F.

    2000-02-14

    Probing processes are used intermittently and not effectively as an on-line measurement device. This project was needed to evolve machine probing from merely a setup aid to an on-the-machine inspection system. Use of probing for on-machine inspection would significantly decrease cycle time by elimination of the need for first-piece inspection (at a remote location). Federal Manufacturing and Technologies (FM and T) had the manufacturing facility and the ability to integrate the system into production. The Contractor had a system that could optimize the machine tool to compensate for thermal growth and related error.

  10. Estimating Bias Error Distributions

    NASA Technical Reports Server (NTRS)

    Liu, Tian-Shu; Finley, Tom D.

    2001-01-01

    This paper formulates a general methodology for estimating the bias error distribution of a device in a measuring domain from less accurate measurements when a minimal number of standard values (typically two) are available. A new perspective is that the bias error distribution can be found as the solution of an intrinsic functional equation in a domain. Based on this theory, scaling- and translation-based methods for determining the bias error distribution are developed. These methods are applicable to virtually any device as long as its bias error distribution can be sufficiently described by a power series (a polynomial) or a Fourier series in a domain. These methods have been validated through computational simulations and laboratory calibration experiments for a number of different devices.

  11. The surveillance error grid.

    PubMed

    Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris

    2014-07-01

    Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to

  12. Alcohol and error processing.

    PubMed

    Holroyd, Clay B; Yeung, Nick

    2003-08-01

    A recent study indicates that alcohol consumption reduces the amplitude of the error-related negativity (ERN), a negative deflection in the electroencephalogram associated with error commission. Here, we explore possible mechanisms underlying this result in the context of two recent theories about the neural system that produces the ERN - one based on principles of reinforcement learning and the other based on response conflict monitoring.

  13. Quantum Error Correction

    NASA Astrophysics Data System (ADS)

    Lidar, Daniel A.; Brun, Todd A.

    2013-09-01

    Prologue; Preface; Part I. Background: 1. Introduction to decoherence and noise in open quantum systems Daniel Lidar and Todd Brun; 2. Introduction to quantum error correction Dave Bacon; 3. Introduction to decoherence-free subspaces and noiseless subsystems Daniel Lidar; 4. Introduction to quantum dynamical decoupling Lorenza Viola; 5. Introduction to quantum fault tolerance Panos Aliferis; Part II. Generalized Approaches to Quantum Error Correction: 6. Operator quantum error correction David Kribs and David Poulin; 7. Entanglement-assisted quantum error-correcting codes Todd Brun and Min-Hsiu Hsieh; 8. Continuous-time quantum error correction Ognyan Oreshkov; Part III. Advanced Quantum Codes: 9. Quantum convolutional codes Mark Wilde; 10. Non-additive quantum codes Markus Grassl and Martin Rötteler; 11. Iterative quantum coding systems David Poulin; 12. Algebraic quantum coding theory Andreas Klappenecker; 13. Optimization-based quantum error correction Andrew Fletcher; Part IV. Advanced Dynamical Decoupling: 14. High order dynamical decoupling Zhen-Yu Wang and Ren-Bao Liu; 15. Combinatorial approaches to dynamical decoupling Martin Rötteler and Pawel Wocjan; Part V. Alternative Quantum Computation Approaches: 16. Holonomic quantum computation Paolo Zanardi; 17. Fault tolerance for holonomic quantum computation Ognyan Oreshkov, Todd Brun and Daniel Lidar; 18. Fault tolerant measurement-based quantum computing Debbie Leung; Part VI. Topological Methods: 19. Topological codes Héctor Bombín; 20. Fault tolerant topological cluster state quantum computing Austin Fowler and Kovid Goyal; Part VII. Applications and Implementations: 21. Experimental quantum error correction Dave Bacon; 22. Experimental dynamical decoupling Lorenza Viola; 23. Architectures Jacob Taylor; 24. Error correction in quantum communication Mark Wilde; Part VIII. Critical Evaluation of Fault Tolerance: 25. Hamiltonian methods in QEC and fault tolerance Eduardo Novais, Eduardo Mucciolo and

  14. Computer acceptance of older adults.

    PubMed

    Nägle, Sibylle; Schmidt, Ludger

    2012-01-01

    Even though computers play a massive role in the everyday life of modern societies, older adults, and especially older women, are less likely to use a computer, and they perform fewer activities on it than younger adults. To get a better understanding of the factors affecting older adults' intention towards and usage of computers, the Unified Theory of Acceptance and Usage of Technology (UTAUT) was applied as part of a more extensive study with 52 users and non-users of computers, ranging in age from 50 to 90 years. The model covers various aspects of computer usage in old age via four key constructs, namely performance expectancy, effort expectancy, social influences, and facilitating conditions, as well as the variables gender, age, experience, and voluntariness of use. Interestingly, next to performance expectancy, facilitating conditions showed the strongest correlation with use as well as with intention. Effort expectancy showed no significant correlation with the intention of older adults to use a computer.

  15. Thermodynamics of Error Correction

    NASA Astrophysics Data System (ADS)

    Sartori, Pablo; Pigolotti, Simone

    2015-10-01

    Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  16. Telemetry location error in a forested habitat

    USGS Publications Warehouse

    Chu, D.S.; Hoover, B.A.; Fuller, M.R.; Geissler, P.H.; Amlaner, Charles J.

    1989-01-01

    The error associated with locations estimated by radio-telemetry triangulation can be large and variable in a hardwood forest. We assessed the magnitude and cause of telemetry location errors in a mature hardwood forest by using a 4-element Yagi antenna and compass bearings toward four transmitters, from 21 receiving sites. The distance error from the azimuth intersection to known transmitter locations ranged from 0 to 9251 meters. Ninety-five percent of the estimated locations were within 16 to 1963 meters, and 50% were within 99 to 416 meters of actual locations. Angles within 20° of parallel had larger distance errors than other angles. While angle appeared most important, greater distances and the amount of vegetation between receivers and transmitters also contributed to distance error.
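
    The geometry effect reported here, with near-parallel bearings inflating distance error, is easy to reproduce with a two-receiver triangulation sketch. In this Python example the receiver positions, transmitter locations, and the 3° bearing error are illustrative:

        import numpy as np

        def bearing_fix(p1, az1, p2, az2):
            """Intersect two compass bearings (degrees from north; x east, y north)."""
            d1 = np.array([np.sin(np.radians(az1)), np.cos(np.radians(az1))])
            d2 = np.array([np.sin(np.radians(az2)), np.cos(np.radians(az2))])
            t = np.linalg.solve(np.column_stack([d1, -d2]),
                                np.asarray(p2, float) - np.asarray(p1, float))
            return np.asarray(p1, float) + t[0] * d1

        def az(frm, to):
            return np.degrees(np.arctan2(to[0] - frm[0], to[1] - frm[1]))

        r1, r2 = (0.0, 0.0), (1000.0, 0.0)
        for tx in [(500.0, 800.0), (500.0, 15000.0)]:   # wide vs near-parallel bearings
            est = bearing_fix(r1, az(r1, tx) + 3.0, r2, az(r2, tx))   # 3 deg error
            print(f"transmitter {tx}: distance error {np.linalg.norm(est - tx):9.1f} m")

    The same 3° bearing error that displaces the near-transmitter fix by tens of meters displaces the near-parallel fix by kilometers.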

  17. Human error in aviation operations

    NASA Technical Reports Server (NTRS)

    Nagel, David C.

    1988-01-01

    The role of human error in commercial and general aviation accidents and the techniques used to evaluate it are reviewed from a human-factors perspective. Topics addressed include the general decline in accidents per million departures since the 1960s, the increase in the proportion of accidents due to human error, methods for studying error, theoretical error models, and the design of error-resistant systems. Consideration is given to information acquisition and processing errors, visually guided flight, disorientation, instrument-assisted guidance, communication errors, decision errors, debiasing, and action errors.

  18. Radar range measurements in the atmosphere.

    SciTech Connect

    Doerry, Armin Walter

    2013-02-01

    The earth's atmosphere affects the velocity of propagation of microwave signals. This imparts a range error to radar range measurements that assume the typical simplistic model for propagation velocity. This range error is a function of atmospheric constituents, such as water vapor, as well as the geometry of the radar data collection, notably altitude and range. Models are presented for calculating atmospheric effects on radar range measurements and compared against more elaborate atmospheric models.
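
    A first-order version of the range error can be computed by integrating refractivity along the nominal straight path. The Python sketch below assumes an exponential refractivity profile; the surface refractivity and scale height are illustrative values, not the report's calibrated atmospheric model:

        import numpy as np

        N0, H_KM = 320.0, 7.0        # assumed surface refractivity and scale height

        def range_error_m(alt_km, slant_km, n_steps=10_000):
            s = np.linspace(0.0, slant_km, n_steps)   # distance along straight path
            h = alt_km * s / slant_km                 # height along the path
            N = N0 * np.exp(-h / H_KM)                # refractivity, (n - 1) * 1e6
            return np.trapz(N, s) * 1e-6 * 1e3        # excess path length, meters

        # Airborne radar at 10 km altitude viewing a target 30 km away:
        print(f"apparent extra range: {range_error_m(10.0, 30.0):.1f} m")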

  19. Clinical errors in cognitive-behavior therapy.

    PubMed

    Kim, Eun Ha; Hollon, Steven D; Olatunji, Bunmi O

    2016-09-01

    Although cognitive-behavioral therapy (CBT) has been shown to be highly effective for a wide range of disorders, many patients do not benefit. The failure to fully benefit from CBT may be due to a wide range of factors, one of which includes "clinical errors" that often occur during the therapeutic process. We briefly note 4 such clinical errors including neglecting to conduct a detailed functional analysis of the presenting problem(s), not adequately engaging the patient in developing a case formulation for the purposes of treatment planning, getting wrapped up in simply examining beliefs without behavioral tests, and not holding patients accountable for fear of rupturing the therapeutic alliance. We then discuss the context in which these clinical errors may occur during CBT and highlight alternative approaches. Being mindful of these and other potential clinical errors during CBT may facilitate better treatment outcomes. (PsycINFO Database Record) PMID:27505455

  20. Clinical errors in cognitive-behavior therapy.

    PubMed

    Kim, Eun Ha; Hollon, Steven D; Olatunji, Bunmi O

    2016-09-01

    Although cognitive-behavioral therapy (CBT) has been shown to be highly effective for a wide range of disorders, many patients do not benefit. The failure to fully benefit from CBT may be due to a wide range of factors, one of which includes "clinical errors" that often occur during the therapeutic process. We briefly note 4 such clinical errors including neglecting to conduct a detailed functional analysis of the presenting problem(s), not adequately engaging the patient in developing a case formulation for the purposes of treatment planning, getting wrapped up in simply examining beliefs without behavioral tests, and not holding patients accountable for fear of rupturing the therapeutic alliance. We then discuss the context in which these clinical errors may occur during CBT and highlight alternative approaches. Being mindful of these and other potential clinical errors during CBT may facilitate better treatment outcomes. (PsycINFO Database Record)

  1. Error monitoring in musicians.

    PubMed

    Maidhof, Clemens

    2013-01-01

    To err is human, and hence even professional musicians make errors occasionally during their performances. This paper summarizes recent work investigating error monitoring in musicians, i.e., the processes and their neural correlates associated with the monitoring of ongoing actions and the detection of deviations from intended sounds. Electroencephalography (EEG) studies reported an early component of the event-related potential (ERP) occurring before the onsets of pitch errors. This component, which can be altered in musicians with focal dystonia, likely reflects processes of error detection and/or error compensation, i.e., attempts to cancel the undesired sensory consequence (a wrong tone) a musician is about to perceive. Thus, auditory feedback seems not to be a prerequisite for error detection, consistent with previous behavioral results. In contrast, when auditory feedback is externally manipulated and thus unexpected, motor performance can be severely distorted, although not all feedback alterations result in performance impairments. Recent studies investigating the neural correlates of feedback processing showed that unexpected feedback elicits an ERP component after note onsets, which shows larger amplitudes during music performance than during mere perception of the same musical sequences. Hence, these results stress the role of motor actions in the processing of auditory information. Furthermore, recent methodological advances such as the combination of 3D motion capture techniques with EEG will be discussed. Such combinations of different measures can potentially help to disentangle the roles of different feedback types, such as proprioceptive and auditory feedback, and in general to arrive at a better understanding of the complex interactions between the motor and auditory domains during error monitoring. Finally, outstanding questions and future directions in this context will be discussed. PMID:23898255

  2. Errata: Papers in Error Analysis.

    ERIC Educational Resources Information Center

    Svartvik, Jan, Ed.

    Papers presented at the symposium of error analysis in Lund, Sweden, in September 1972, approach error analysis specifically in its relation to foreign language teaching and second language learning. Error analysis is defined as having three major aspects: (1) the description of the errors, (2) the explanation of errors by means of contrastive…

  3. Performance Errors in Weight Training and Their Correction.

    ERIC Educational Resources Information Center

    Downing, John H.; Lander, Jeffrey E.

    2002-01-01

    Addresses general performance errors in weight training, also discussing each category of error separately. The paper focuses on frequency and intensity, incorrect training velocities, full range of motion, and symmetrical training. It also examines specific errors related to the bench press, squat, military press, and bent- over and seated row…

  4. Computation of Standard Errors

    PubMed Central

    Dowd, Bryan E; Greene, William H; Norton, Edward C

    2014-01-01

    Objectives: We discuss the problem of computing the standard errors of functions involving estimated parameters and provide the relevant computer code for three different computational approaches using two popular computer packages. Study Design: We show how to compute the standard errors of several functions of interest: the predicted value of the dependent variable for a particular subject, and the effect of a change in an explanatory variable on the predicted value of the dependent variable for an individual subject and the average effect for a sample of subjects. Empirical Application: Using a publicly available dataset, we explain three different methods of computing standard errors: the delta method, Krinsky-Robb, and bootstrapping. We provide computer code for Stata 12 and LIMDEP 10/NLOGIT 5. Conclusions: In most applications, the choice of computational method for standard errors of functions of estimated parameters is a matter of convenience. However, when computing standard errors of the sample average of functions that involve both estimated parameters and nonstochastic explanatory variables, it is important to consider the sources of variation in the function's values. PMID:24800304
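
    The paper's code targets Stata and LIMDEP; the same comparison is easy to sketch in Python. Here the function of the estimated parameter is g(θ̂) = exp(θ̂) for a sample mean, an arbitrary illustrative choice, with its standard error computed by the delta method and by the nonparametric bootstrap:

        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.normal(2.0, 1.0, 500)            # illustrative sample

        mean_hat = x.mean()
        se_mean = x.std(ddof=1) / np.sqrt(len(x))

        # Delta method: SE[g(theta_hat)] ~ |g'(theta_hat)| * SE[theta_hat].
        se_delta = np.exp(mean_hat) * se_mean

        # Nonparametric bootstrap of the same function.
        boot = np.array([np.exp(rng.choice(x, len(x), replace=True).mean())
                         for _ in range(2000)])
        se_boot = boot.std(ddof=1)

        print(f"delta method SE: {se_delta:.4f}   bootstrap SE: {se_boot:.4f}")

    For smooth functions and moderate samples the two estimates agree closely; the bootstrap trades computation for freedom from the linearization assumption.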

  5. Compact disk error measurements

    NASA Technical Reports Server (NTRS)

    Howe, D.; Harriman, K.; Tehranchi, B.

    1993-01-01

    The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard decision (i.e., 1-bit error flags) and soft decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross-Interleaved Reed-Solomon Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.

  6. Dialogues on prediction errors.

    PubMed

    Niv, Yael; Schoenbaum, Geoffrey

    2008-07-01

    The recognition that computational ideas from reinforcement learning are relevant to the study of neural circuits has taken the cognitive neuroscience community by storm. A central tenet of these models is that discrepancies between actual and expected outcomes can be used for learning. Neural correlates of such prediction-error signals have been observed now in midbrain dopaminergic neurons, striatum, amygdala and even prefrontal cortex, and models incorporating prediction errors have been invoked to explain complex phenomena such as the transition from goal-directed to habitual behavior. Yet, like any revolution, the fast-paced progress has left an uneven understanding in its wake. Here, we provide answers to ten simple questions about prediction errors, with the aim of exposing both the strengths and the limitations of this active area of neuroscience research.
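
    The quantity at the center of these models is the temporal-difference prediction error, δ = r + γV(s') − V(s), which drives value updates. A minimal tabular Python sketch; the two-state task, learning rate, and discount factor are illustrative textbook choices, not a model from the paper:

        values = {"cue": 0.0, "outcome": 0.0}   # value estimates V(s)
        gamma, alpha = 0.9, 0.1                 # discount factor, learning rate

        for _ in range(200):                    # repeated cue -> reward trials
            r = 1.0                             # reward delivered after the cue
            delta = r + gamma * values["outcome"] - values["cue"]   # prediction error
            values["cue"] += alpha * delta      # dopamine-like teaching signal

        print(values["cue"])                    # converges toward the expected reward

    As learning proceeds, δ shrinks toward zero, mirroring the shift of dopaminergic responses from reward to predictive cue.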

  7. Experimental Quantum Error Detection

    PubMed Central

    Jin, Xian-Min; Yi, Zhen-Huan; Yang, Bin; Zhou, Fei; Yang, Tao; Peng, Cheng-Zhi

    2012-01-01

    Faithful transmission of quantum information is a crucial ingredient in quantum communication networks. To overcome the unavoidable decoherence in a noisy channel, to date, many efforts have been made to transmit one state by consuming large numbers of time-synchronized ancilla states. However, such huge demands of quantum resources are hard to meet with current technology and this restricts practical applications. Here we experimentally demonstrate quantum error detection, an economical approach to reliably protecting a qubit against bit-flip errors. Arbitrary unknown polarization states of single photons and entangled photons are converted into time bins deterministically via a modified Franson interferometer. Noise arising in both 10 m and 0.8 km fiber, which induces associated errors on the reference frame of time bins, is filtered when photons are detected. The demonstrated resource efficiency and state independence make this protocol a promising candidate for implementing a real-world quantum communication network. PMID:22953047

  8. Tracking errors in 2D multiple particle tracking microrheology

    NASA Astrophysics Data System (ADS)

    Kowalczyk, Anne; Oelschlaeger, Claude; Willenbacher, Norbert

    2015-01-01

    Tracking errors due to particles moving in and out of the focal plane are a fundamental problem of multiple particle tracking microrheology. Here, we present a new approach to treat these errors so that a statistically significant number of particle trajectories of reasonable length is obtained, which is important for an unbiased analysis of multiple particle tracking data from inhomogeneous fluids. Starting from Crocker and Grier's tracking algorithm, we identify particle displacements between subsequent images as artificial jumps; if a displacement deviates by more than four standard deviations from the mean value, the trajectory is terminated at that position. In a further processing step, trajectories separated by a time gap Δτ_max are merged based on an adaptive search radius criterion accounting for individual particle mobility. For a series of Newtonian fluids covering the viscosity range 6-1300 mPa s, this approach yields the correct viscosity but also results in a viscosity-independent number of trajectories equal to the average number of particles in an image, with a minimum length covering at least two orders of magnitude in time. This allows for an unbiased characterization of heterogeneous fluids. For a Carbopol ETD 2050 solution we recover the expected broad variation of particle mobility. Consistent with the widely accepted structural model of highly swollen microgel particles suspended in a polymer solution, we find that about 2/3 of the tracers are elastically trapped.
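
    The two post-processing steps can be sketched as follows in Python. The 4σ jump cut matches the description above, while the mobility-scaled merge radius with diffusive √gap scaling is one illustrative reading of the adaptive criterion, not the authors' code:

        import numpy as np

        def split_at_jumps(traj):
            """Terminate a trajectory (an (n, 2) array of positions) wherever the
            frame-to-frame step exceeds the mean step by > 4 standard deviations."""
            steps = np.linalg.norm(np.diff(traj, axis=0), axis=1)
            cuts = np.where(steps > steps.mean() + 4.0 * steps.std())[0]
            return np.split(traj, cuts + 1)

        def can_merge(seg_a, seg_b, gap_frames, k=3.0):
            """Re-link two segments separated by gap_frames if the gap endpoint
            falls within a search radius adapted to the particle's own mobility."""
            mobility = np.linalg.norm(np.diff(seg_a, axis=0), axis=1).mean()
            radius = k * mobility * np.sqrt(gap_frames)   # diffusive scaling assumption
            return np.linalg.norm(seg_b[0] - seg_a[-1]) <= radius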

  9. Effect of Field Errors in Muon Collider IR Magnets on Beam Dynamics

    SciTech Connect

    Alexahin, Y.; Gianfelice-Wendt, E.; Kapin, V.V.; /Fermilab

    2012-05-01

    In order to achieve peak luminosity of a Muon Collider (MC) in the 10^35 cm^-2 s^-1 range, very small values of the beta-function at the interaction point (IP) are necessary (β* ≤ 1 cm), while the distance from the IP to the first quadrupole cannot be made shorter than ~6 m, as dictated by the necessity of detector protection from backgrounds. As a result, the beta-function at the final focus quadrupoles can reach 100 km, making beam dynamics very sensitive to all kinds of errors. In the present report we consider the effects on momentum acceptance and dynamic aperture of multipole field errors in the body of the IR dipoles, as well as of fringe fields in both dipoles and quadrupoles, in the case of a 1.5 TeV (c.o.m.) MC. Analysis shows these effects to be strong but correctable with dedicated multipole correctors.

  10. Error Free Software

    NASA Technical Reports Server (NTRS)

    1985-01-01

    A mathematical theory for development of "higher order" software to catch computer mistakes resulted from a Johnson Space Center contract for Apollo spacecraft navigation. Two women who were involved in the project formed Higher Order Software, Inc. to develop and market the system of error analysis and correction. They designed software which is logically error-free, which, in one instance, was found to increase productivity by 600%. USE.IT defines its objectives using AXES -- a user can write in English and the system converts to computer languages. It is employed by several large corporations.

  11. Sonic boom acceptability studies

    NASA Astrophysics Data System (ADS)

    Shepherd, Kevin P.; Sullivan, Brenda M.; Leatherwood, Jack D.; McCurdy, David A.

    1992-04-01

    The determination of the magnitude of sonic boom exposure which would be acceptable to the general population requires, as a starting point, a method to assess and compare individual sonic booms. There is no consensus within the scientific and regulatory communities regarding an appropriate sonic boom assessment metric. Loudness, being a fundamental and well-understood attribute of human hearing, was chosen as a means of comparing sonic booms of differing shapes and amplitudes. The figure illustrates the basic steps which yield a calculated value of loudness. Based upon the aircraft configuration and its operating conditions, the sonic boom pressure signature which reaches the ground is calculated. This pressure-time history is transformed to the frequency domain and converted into a one-third octave band spectrum. The essence of the loudness method is to account for the frequency response and integration characteristics of the auditory system. The result of the calculation procedure is a numerical description (perceived level, dB) which represents the loudness of the sonic boom waveform.

  12. Sonic boom acceptability studies

    NASA Technical Reports Server (NTRS)

    Shepherd, Kevin P.; Sullivan, Brenda M.; Leatherwood, Jack D.; McCurdy, David A.

    1992-01-01

    The determination of the magnitude of sonic boom exposure which would be acceptable to the general population requires, as a starting point, a method to assess and compare individual sonic booms. There is no consensus within the scientific and regulatory communities regarding an appropriate sonic boom assessment metric. Loudness, being a fundamental and well-understood attribute of human hearing, was chosen as a means of comparing sonic booms of differing shapes and amplitudes. The figure illustrates the basic steps which yield a calculated value of loudness. Based upon the aircraft configuration and its operating conditions, the sonic boom pressure signature which reaches the ground is calculated. This pressure-time history is transformed to the frequency domain and converted into a one-third octave band spectrum. The essence of the loudness method is to account for the frequency response and integration characteristics of the auditory system. The result of the calculation procedure is a numerical description (perceived level, dB) which represents the loudness of the sonic boom waveform.

  13. Automatically generated acceptance test: A software reliability experiment

    NASA Technical Reports Server (NTRS)

    Protzel, Peter W.

    1988-01-01

    This study presents results of a software reliability experiment investigating the feasibility of a new error detection method. The method can be used as an acceptance test and is solely based on empirical data about the behavior of internal states of a program. The experimental design uses the existing environment of a multi-version experiment previously conducted at the NASA Langley Research Center, in which the launch interceptor problem is used as a model. This allows the controlled experimental investigation of versions with well-known single and multiple faults, and the availability of an oracle permits the determination of the error detection performance of the test. Fault interaction phenomena are observed that have an amplifying effect on the number of error occurrences. Preliminary results indicate that all faults examined so far are detected by the acceptance test. This shows promise for further investigations, and for the employment of this test method on other applications.

  14. Human Factors Process Task Analysis: Liquid Oxygen Pump Acceptance Test Procedure at the Advanced Technology Development Center

    NASA Technical Reports Server (NTRS)

    Diorio, Kimberly A.; Voska, Ned (Technical Monitor)

    2002-01-01

    This viewgraph presentation provides information on Human Factors Process Failure Modes and Effects Analysis (HF PFMEA). HF PFMEA includes the following 10 steps: Describe mission; Define system; Identify human-machine interfaces; List human actions; Identify potential errors; Identify factors that affect error; Determine likelihood of error; Determine potential effects of errors; Evaluate risk; Generate solutions (manage error). The presentation also describes how this analysis was applied to a liquid oxygen pump acceptance test.

  15. [Medical device use errors].

    PubMed

    Friesdorf, Wolfgang; Marsolek, Ingo

    2008-01-01

    Medical devices define our everyday patient treatment processes. But despite their beneficial effect, every use can also lead to damage. Use errors are thus often explained by human failure. But human errors can never be completely eliminated, especially in such complex work processes as those in medicine, which often involve time pressure. Therefore we need error-tolerant work systems in which potential problems are identified and solved as early as possible. In this context human engineering uses the TOP principle: technological before organisational and then person-related solutions. But especially in everyday medical work we realise that error-prone usability concepts can often only be counterbalanced by organisational or person-related measures. Thus human failure is pre-programmed. In addition, many medical work places represent a somewhat chaotic accumulation of individual devices with totally different user interaction concepts. There is not only a lack of holistic work place concepts, but of holistic process and system concepts as well. However, this can only be achieved through the co-operation of producers, healthcare providers and clinical users, by systematically analyzing and iteratively optimizing the underlying treatment processes from both a technological and organizational perspective. What we need is a joint platform like medilab V of the TU Berlin, in which the entire medical treatment chain can be simulated in order to discuss, experiment and model--a key to a safe and efficient healthcare system of the future. PMID:19213452

  16. Orwell's Instructive Errors

    ERIC Educational Resources Information Center

    Julian, Liam

    2009-01-01

    In this article, the author talks about George Orwell, his instructive errors, and the manner in which Orwell pierced worthless theory, faced facts and defended decency (with fluctuating success), and largely ignored the tradition of accumulated wisdom that has rendered him a timeless teacher--one whose inadvertent lessons, while infrequently…

  17. Help prevent hospital errors

    MedlinePlus

  18. Challenge and Error: Critical Events and Attention-Related Errors

    ERIC Educational Resources Information Center

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error → attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  19. Enhanced notification of infusion pump programming errors.

    PubMed

    Evans, R Scott; Carlson, Rick; Johnson, Kyle V; Palmer, Brent K; Lloyd, James F

    2010-01-01

    Hospitalized patients receive countless doses of medications through manually programmed infusion pumps. Many medication errors are the result of programming incorrect pump settings. When used appropriately, smart pumps have the potential to detect some programming errors. However, based on the current use of smart pumps, there are conflicting reports on their ability to prevent patient harm without additional capabilities and interfaces to electronic medical records (EMR). We developed a smart system, connected to the EMR including medication charting, that can detect and alert on potential pump programming errors. Acceptable programming limits for dose-rate increases, as well as initial drug doses, for 23 high-risk medications are monitored. During 22.5 months in a 24-bed ICU, 970 alerts (4% of 25,040 doses, 1.4 alerts per day) were generated for pump settings programmed outside acceptable limits, of which 137 (14%) were found to have prevented potential harm. Monitoring pump programming at the system level rather than at the pump provides access to additional patient data in the EMR, including previous dosage levels, other concurrent medications and caloric intake, age, gender, vitals and laboratory results.
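    As a rough illustration of the system-level check described above, the sketch below compares a programmed setting against per-drug limits. The drug, the limit values and all field names are invented for the example; the actual system draws such data from the EMR.

        # Hypothetical limits table (units/kg/h); values are illustrative only.
        LIMITS = {"heparin": {"max_initial": 25.0, "max_increase": 5.0}}

        def check_program(drug, programmed_rate, previous_rate=None):
            # Return None for unmonitored drugs, an alert string for settings
            # outside acceptable limits, and "ok" otherwise.
            lim = LIMITS.get(drug)
            if lim is None:
                return None
            if previous_rate is None:
                if programmed_rate > lim["max_initial"]:
                    return "alert: initial dose above acceptable limit"
            elif programmed_rate - previous_rate > lim["max_increase"]:
                return "alert: dose-rate increase above acceptable limit"
            return "ok"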

  20. Inborn Errors of Metabolism.

    PubMed

    Ezgu, Fatih

    2016-01-01

    Inborn errors of metabolism are single gene disorders resulting from defects in the biochemical pathways of the body. Although these disorders are individually rare, collectively they account for a significant portion of childhood disability and deaths. Most of the disorders are inherited as autosomal recessive, whereas autosomal dominant and X-linked disorders are also present. The clinical signs and symptoms arise from the accumulation of the toxic substrate, deficiency of the product, or both. Depending on the residual activity of the deficient enzyme, the onset of the clinical picture may vary from the newborn period up until adulthood. Hundreds of disorders have been described to date, and there is considerable clinical overlap between certain inborn errors. As a result, the definitive diagnosis of inborn errors depends on enzyme assays or genetic tests. Especially in recent years, significant achievements have been made in the biochemical and genetic diagnosis of inborn errors. Techniques such as tandem mass spectrometry and gas chromatography for biochemical diagnosis, and microarrays and next-generation sequencing for genetic diagnosis, have enabled rapid and accurate diagnosis. These achievements have also enabled newborn screening and prenatal diagnosis. In parallel with the development of diagnostic methods, significant progress has also been made in treatment. Treatment approaches such as special diets, enzyme replacement therapy, substrate inhibition, and organ transplantation have been widely used. It is obvious that, with the help of the preclinical and clinical research carried out on inborn errors, better diagnostic methods and better treatment approaches will very likely become available.

  1. Functional Error Models to Accelerate Nested Sampling

    NASA Astrophysics Data System (ADS)

    Josset, L.; Elsheikh, A. H.; Demyanov, V.; Lunati, I.

    2014-12-01

    Within the Nested Sampling algorithm, the proposed geostatistical realization is first evaluated through the approximate model to decide whether or not it is useful to perform a full physics simulation. This improves the acceptance rate of full physics simulations and opens the door to iteratively testing the performance and improving the quality of the error model.

  2. Tropical errors and convection

    NASA Astrophysics Data System (ADS)

    Bechtold, P.; Bauer, P.; Engelen, R. J.

    2012-12-01

    Tropical convection is analysed in the ECMWF Integrated Forecast System (IFS) through tropical errors and their evolution during the last decade as a function of model resolution and model changes. As the characterization of these errors is particularly difficult over tropical oceans due to sparse in situ upper-air data, more weight compared to the middle latitudes is given in the analysis to the underlying forecast model. Therefore, special attention is paid to available near-surface observations and to comparison with analyses from other centres. There is a systematic lack of low-level wind convergence in the Intertropical Convergence Zone (ITCZ) in the IFS, leading to a spindown of the Hadley cell. Critical areas with strong cross-equatorial flow and large wind errors are the Indian Ocean, with large interannual variations in forecast errors, and the East Pacific, with persistent systematic errors that have evolved little during the last decade. The analysis quality in the East Pacific is affected by observation errors inherent to the atmospheric motion vector wind product. The model's tropical climate and its variability and teleconnections are also evaluated, with a particular focus on the Madden-Julian Oscillation (MJO) during the Year of Tropical Convection (YOTC). The model is shown to reproduce the observed tropical large-scale wave spectra and teleconnections, but overestimates the precipitation during the South-East Asian summer monsoon. The recent improvements in tropical precipitation, convectively coupled wave and MJO predictability are shown to be strongly related to improvements in the convection parameterization that realistically represents the convection sensitivity to environmental moisture, and the large-scale forcing due to the use of strong entrainment and a variable adjustment time-scale. There is however a remaining slight moistening tendency and low-level wind imbalance in the model that is responsible for the Asian Monsoon bias and for too

  3. Conditional Density Estimation in Measurement Error Problems.

    PubMed

    Wang, Xiao-Feng; Ye, Deping

    2015-01-01

    This paper is motivated by a wide range of background correction problems in gene array data analysis, where the raw gene expression intensities are measured with error. Estimating a conditional density function from the contaminated expression data is a key aspect of statistical inference and visualization in these studies. We propose re-weighted deconvolution kernel methods to estimate the conditional density function in an additive error model, when the error distribution is known as well as when it is unknown. Theoretical properties of the proposed estimators are investigated with respect to the mean absolute error from a "double asymptotic" view. Practical rules are developed for the selection of smoothing-parameters. Simulated examples and an application to an Illumina bead microarray study are presented to illustrate the viability of the methods. PMID:25284902
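    For orientation, the standard deconvoluting kernel estimator on which such re-weighted variants build can be written as follows (generic notation, not necessarily the paper's): with contaminated observations W_j = X_j + ε_j, kernel K, bandwidth h, and characteristic functions φ_K and φ_ε,

        \hat f_X(x) = \frac{1}{nh} \sum_{j=1}^{n} K^{*}\!\left(\frac{x - W_j}{h}\right),
        \qquad
        K^{*}(u) = \frac{1}{2\pi} \int e^{-itu} \, \frac{\phi_K(t)}{\phi_\varepsilon(t/h)} \, dt .

    When the error distribution is unknown, φ_ε is typically estimated first, e.g., from replicate measurements, before being plugged into the inversion.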

  4. Speech Errors across the Lifespan

    ERIC Educational Resources Information Center

    Vousden, Janet I.; Maylor, Elizabeth A.

    2006-01-01

    Dell, Burger, and Svec (1997) proposed that the proportion of speech errors classified as anticipations (e.g., "moot and mouth") can be predicted solely from the overall error rate, such that the greater the error rate, the lower the anticipatory proportion (AP) of errors. We report a study examining whether this effect applies to changes in error…

  5. Control by model error estimation

    NASA Technical Reports Server (NTRS)

    Likins, P. W.; Skelton, R. E.

    1976-01-01

    Modern control theory relies upon the fidelity of the mathematical model of the system. Truncated modes, external disturbances, and parameter errors in linear system models are corrected by augmenting the original system of equations with an 'error system' designed to approximate the effects of such model errors. A Chebyshev error system is developed for application to the Large Space Telescope (LST).

  6. Marking Errors: A Simple Strategy

    ERIC Educational Resources Information Center

    Timmons, Theresa Cullen

    1987-01-01

    Indicates that using highlighters to mark errors produced a 76% class improvement in removing comma errors and a 95.5% improvement in removing apostrophe errors. Outlines two teaching procedures, to be followed before introducing this tool to the class, that enable students to remove errors at this effective rate. (JD)

  7. Automatic Error Analysis Using Intervals

    ERIC Educational Resources Information Center

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
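    The flavor of the interval approach is easy to reproduce. Below is a minimal Python sketch (INTLAB itself is a MATLAB toolbox; this toy class also ignores the directed rounding that a real interval package performs):

        from dataclasses import dataclass

        @dataclass
        class Interval:
            lo: float
            hi: float
            def __add__(self, o): return Interval(self.lo + o.lo, self.hi + o.hi)
            def __sub__(self, o): return Interval(self.lo - o.hi, self.hi - o.lo)
            def __mul__(self, o):
                p = (self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi)
                return Interval(min(p), max(p))

        # Power P = I^2 * R with R = 100 +/- 1 ohm and I = 2.0 +/- 0.1 A:
        R = Interval(99.0, 101.0)
        I = Interval(1.9, 2.1)
        P = I * I * R
        print(P.lo, P.hi)   # about (357.39, 445.41): worst-case bounds on P

    Every arithmetic operation propagates worst-case bounds automatically, which is what makes the method attractive for complicated formulas.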

  8. Neural Correlates of Reach Errors

    PubMed Central

    Hashambhoy, Yasmin; Rane, Tushar; Shadmehr, Reza

    2005-01-01

    Reach errors may be broadly classified into errors arising from unpredictable changes in target location, called target errors, and errors arising from miscalibration of internal models, called execution errors. Execution errors may be caused by miscalibration of dynamics (e.g., when a force field alters limb dynamics) or by miscalibration of kinematics (e.g., when prisms alter visual feedback). While all types of errors lead to similar online corrections, we found that the motor system showed strong trial-by-trial adaptation in response to random execution errors but not in response to random target errors. We used fMRI and a compatible robot to study brain regions involved in processing each kind of error. Both kinematic and dynamic execution errors activated regions along the central and the post-central sulci and in lobules V, VI, and VIII of the cerebellum, making these areas possible sites of plastic changes in internal models for reaching. Only activity related to kinematic errors extended into parietal area 5. These results are inconsistent with the idea that kinematics and dynamics of reaching are computed in separate neural entities. In contrast, only target errors caused increased activity in the striatum and the posterior superior parietal lobule. The cerebellum and motor cortex were as strongly activated as with execution errors. These findings indicate a neural and behavioral dissociation between errors that lead to switching of behavioral goals, and errors that lead to adaptation of internal models of limb dynamics and kinematics. PMID:16251440

  9. The Insufficiency of Error Analysis

    ERIC Educational Resources Information Center

    Hammarberg, B.

    1974-01-01

    The position here is that error analysis is inadequate, particularly from the language-teaching point of view. Non-errors must be considered in specifying the learner's current command of the language, its limits, and his learning tasks. A cyclic procedure of elicitation and analysis, to secure evidence of errors and non-errors, is outlined.…

  10. A Simple Approach to Experimental Errors

    ERIC Educational Resources Information Center

    Phillips, M. D.

    1972-01-01

    Classifies experimental error into two main groups: systematic error (instrument, personal, inherent, and variational errors) and random errors (reading and setting errors) and presents mathematical treatments for the determination of random errors. (PR)
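    The mathematical treatment referred to is presumably the usual first-order propagation rule for independent random errors: for a result f(x_1, ..., x_n) with standard errors σ_{x_i},

        \sigma_f^2 = \sum_{i=1}^{n} \left( \frac{\partial f}{\partial x_i} \right)^{2} \sigma_{x_i}^{2} ,

    so that, for example, f = xy gives (σ_f/f)^2 = (σ_x/x)^2 + (σ_y/y)^2.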

  11. Manson's triple error.

    PubMed

    Delaporte, F.

    2008-09-01

    The author discusses the significance, implications and limitations of Manson's work. How did Patrick Manson resolve some of the major problems raised by the filarial worm life cycle? The Amoy physician showed that circulating embryos could only leave the blood via the percutaneous route, thereby requiring a bloodsucking insect. The discovery of a new autonomous, airborne, active host undoubtedly had a considerable impact on the history of parasitology, but the way in which Manson formulated and solved the problem of the transfer of filarial worms from the body of the mosquito to man resulted in failure. This article shows how the epistemological transformation operated by Manson was indissociably related to a series of errors and how a major breakthrough can be the result of a series of false proposals and, consequently, that the history of truth often involves a history of error. PMID:18814729

  12. Modular error embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Ettinger, J. Mark

    1999-01-01

    A method of embedding auxiliary information into the digital representation of host data containing noise in the low-order bits. The method applies to digital data representing analog signals, for example digital images. The method reduces the error introduced by other methods that replace the low-order bits with auxiliary information. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user through use of a digital key. The modular error embedding method includes a process to permute the order in which the host data values are processed. The method doubles the amount of auxiliary information that can be added to host data values, in comparison with bit-replacement methods for high bit-rate coding. The invention preserves human perception of the meaning and content of the host data, permitting the addition of auxiliary data in the amount of 50% or greater of the original host data.
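    The patent text above contains no code, but the modular idea can be sketched as follows: instead of overwriting the k low-order bits (which can shift a value by up to 2^k - 1), move the host value to the nearest level whose residue mod 2^k equals the payload, so the shift is at most 2^(k-1). The Python below is our illustrative reconstruction of that idea, not the patented method itself.

        def embed_modular(value, payload, k=2, vmax=255):
            # Move `value` to the nearest in-range level congruent to
            # `payload` modulo 2**k (error at most 2**(k-1), slightly
            # more at the range boundaries).
            m = 1 << k
            base = value - (value % m) + payload
            candidates = [c for c in (base - m, base, base + m) if 0 <= c <= vmax]
            return min(candidates, key=lambda c: abs(c - value))

        def extract_modular(value, k=2):
            return value % (1 << k)

        assert extract_modular(embed_modular(200, 3)) == 3   # embeds 200 as 199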

  13. Error-Free Software

    NASA Technical Reports Server (NTRS)

    1989-01-01

    001 is an integrated tool suited for automatically developing ultra reliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer aided software engineering product in the industry to concentrate on automatically supporting the development of an ultrareliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.

  14. [The notion and classification of expert errors].

    PubMed

    Klevno, V A

    2012-01-01

    The author presents the analysis of the legal and forensic medical literature concerning currently accepted concepts and classification of expert malpractice. He proposes a new easy-to-remember definition of the expert error and considers the classification of such mistakes. The analysis of the cases of erroneous application of the medical criteria for estimation of the harm to health made it possible to reveal and systematize the causes accounting for the cases of expert malpractice committed by forensic medical experts and health providers when determining the degree of harm to human health. PMID:22686055

  15. Type I error control for tree classification.

    PubMed

    Jung, Sin-Ho; Chen, Yong; Ahn, Hongshik

    2014-01-01

    Binary tree classification has been useful for classifying the whole population based on the levels of an outcome variable that is associated with chosen predictors. Often we start a classification with a large number of candidate predictors, and each predictor takes a number of different cutoff values. Because of these types of multiplicity, the binary tree classification method is subject to a severely inflated type I error probability. Nonetheless, there have not been many publications addressing this issue. In this paper, we propose a binary tree classification method that controls the probability of accepting a predictor at or below a certain level, say 5%.

  16. Cone penetrometer acceptance test report

    SciTech Connect

    Boechler, G.N.

    1996-09-19

    This Acceptance Test Report (ATR) documents the results of acceptance test procedure WHC-SD-WM-ATR-151. Included in this report are a summary of the tests, the results and issues, the signature and sign-off ATP pages, and a summary table of the specification sections versus the ATP sections that satisfied them.

  17. Design of a wavelength frame multiplication system using acceptance diagrams

    NASA Astrophysics Data System (ADS)

    Nekrassov, D.; Zendler, C.; Lieutenant, K.

    2013-07-01

    The concept of Wavelength Frame Multiplication (WFM) was developed to extend the usable wavelength range on long pulse neutron sources for instruments using pulse shaping choppers. For some instruments, it is combined with a pulse shaping double chopper, which defines a constant wavelength resolution, and a set of frame overlap choppers that prevent spurious neutrons from reaching the detector, thus avoiding systematic errors in the calculation of wavelength from time of flight. Due to its complexity, the design of such a system is challenging and there are several criteria that need to be accounted for. In this work, the design of the WFM chopper system for a potential future liquids reflectometer at the European Spallation Source (ESS) is presented, which makes use of acceptance diagrams. They prove to be a powerful tool for understanding the working principle of the system and recognizing potential problems. The authors believe that the presented study can be useful for the design or upgrade of further instruments, in particular the ones planned for the ESS.

  18. Dissolution test acceptance sampling plans.

    PubMed

    Tsong, Y; Hammerstrom, T; Lin, K; Ong, T E

    1995-07-01

    The U.S. Pharmacopeia (USP) general monograph provides a standard for dissolution compliance with the requirements as stated in the individual USP monograph for a tablet or capsule dosage form. The acceptance rules recommended by USP have important roles in the quality control process. The USP rules and their modifications are often used as an industrial lot release sampling plan, where a lot is accepted when the tablets or capsules sampled are accepted as proof of compliance with the requirement. In this paper, the operating characteristics of the USP acceptance rules are reviewed and compared to a selected modification. The operating characteristic curves show that the USP acceptance rules are sensitive to the true mean dissolution and do not reject a lot or batch that has a large percentage of tablets that dissolve to less than the dissolution specification.
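    For orientation, the staged USP-type dissolution rule is commonly summarized as below, with Q the monograph dissolution specification in percent of label claim. The sketch paraphrases the familiar three-stage criteria from memory and should be checked against the current USP <711> text before any real use.

        def usp_dissolution(units, Q):
            # units: cumulative list of per-unit results (% of label claim)
            # after stage 1 (6 units), stage 2 (12) or stage 3 (24).
            n = len(units)
            if n == 6:    # S1: every unit at least Q + 5
                return "accept" if min(units) >= Q + 5 else "continue"
            if n == 12:   # S2: mean >= Q and no unit below Q - 15
                ok = sum(units) / 12 >= Q and min(units) >= Q - 15
                return "accept" if ok else "continue"
            if n == 24:   # S3: mean >= Q, at most 2 units below Q - 15,
                          # and no unit below Q - 25
                ok = (sum(units) / 24 >= Q
                      and sum(1 for x in units if x < Q - 15) <= 2
                      and min(units) >= Q - 25)
                return "accept" if ok else "reject"
            raise ValueError("expected 6, 12 or 24 cumulative units")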

  19. Optical range and range rate estimation for teleoperator systems

    NASA Technical Reports Server (NTRS)

    Shields, N. L., Jr.; Kirkpatrick, M., III; Malone, T. B.; Huggins, C. T.

    1974-01-01

    Range and range rate are crucial parameters which must be available to the operator during remote controlled orbital docking operations. A method was developed for the estimation of both these parameters using an aided television system. An experiment was performed to determine the human operator's capability to measure displayed image size using a fixed reticle or movable cursor as the television aid. The movable cursor was found to yield mean image size estimation errors on the order of 2.3 per cent of the correct value. This error rate was significantly lower than that for the fixed reticle. Performance using the movable cursor was found to be less sensitive to signal-to-noise ratio variation than was that for the fixed reticle. The mean image size estimation errors for the movable cursor correspond to an error of approximately 2.25 per cent in range suggesting that the system has some merit. Determining the accuracy of range rate estimation using a rate controlled cursor will require further experimentation.
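    The near one-to-one mapping from image-size error to range error quoted above follows from simple similar-triangle optics (our reconstruction, not a formula from the report): for a target of known size D imaged at size d through an effective focal length f,

        R = \frac{fD}{d}, \qquad \frac{\delta R}{R} = -\frac{\delta d}{d} \;\Rightarrow\; \left|\frac{\Delta R}{R}\right| \approx \left|\frac{\Delta d}{d}\right| ,

    so a 2.3 percent error in estimated image size maps to a range error of essentially the same relative magnitude.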

  20. An experimental model for minimizing errors in laser speckle contrast imaging for microcirculation analysis

    NASA Astrophysics Data System (ADS)

    Sujatha, N.; Banerjee, Arnab

    2015-03-01

    Understanding the changes in microcirculatory flow and its measurements are very important for assessing the progress of various vascular malfunctions and their subsequent treatment effectiveness. Laser Speckle Contrast Imaging (LSCI) has evolved as a whole-field, non-invasive and non-contact technique which has inherent advantages for microcirculation assessment in an in vivo environment compared to its noninvasive counterparts such as the laser Doppler technique and video capillaroscopy. However, representation of flow velocity values in absolute units is still challenging and yet to be completely explored. In this paper, we propose an experimental model for estimating the flow velocity at the optimum camera exposure time. The LSCI experiments were conducted on a custom-made phantom flow channel with flow induced in the microcirculation range using a syringe pump. The speckle image contrast was estimated temporally and used to calculate velocity values. The relative error in the flow values is estimated as a function of the calculated contrast. The estimated error has been incorporated as a correction factor into the velocity obtained with LSCI, and the final velocity estimate was found to be within an acceptable error range, independent of the flow velocity and scatterer concentration of the sample, for the optimum camera exposure duration.
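    For context, the quantity extracted in LSCI is the speckle contrast K, and one commonly used single-scattering model (one of several in the literature; the correction factor proposed in the paper adjusts velocities obtained along these lines) relates it to the decorrelation time τ_c and camera exposure T:

        K = \frac{\sigma}{\langle I \rangle}, \qquad K^{2} = \frac{\tau_c}{2T}\left(1 - e^{-2T/\tau_c}\right), \qquad v \propto \frac{1}{\tau_c} .

    Inverting the middle relation for τ_c at a well-chosen exposure T is what makes the contrast-to-velocity mapping, and hence its error, exposure dependent.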

  1. GY SAMPLING THEORY IN ENVIRONMENTAL STUDIES 2: SUBSAMPLING ERROR MEASUREMENTS

    EPA Science Inventory

    Sampling can be a significant source of error in the measurement process. The characterization and cleanup of hazardous waste sites require data that meet site-specific levels of acceptable quality if scientifically supportable decisions are to be made. In support of this effort,...

  2. The Location of Error: Reflections on a Research Project

    ERIC Educational Resources Information Center

    Cook, Devan

    2010-01-01

    Andrea Lunsford and Karen Lunsford conclude "Mistakes Are a Fact of Life: A National Comparative Study," a discussion of their research project exploring patterns of formal grammar and usage error in first-year writing, with an invitation to "conduct a local version of this study." The author was eager to accept their invitation; learning and…

  3. Laser Ranging Simulation Program

    NASA Technical Reports Server (NTRS)

    Piazolla, Sabino; Hemmati, Hamid; Tratt, David

    2003-01-01

    Laser Ranging Simulation Program (LRSP) is a computer program that predicts selected aspects of the performances of a laser altimeter or other laser ranging or remote-sensing systems and is especially applicable to a laser-based system used to map terrain from a distance of several kilometers. Designed to run in a more recent version (5 or higher) of the MATLAB programming language, LRSP exploits the numerical and graphical capabilities of MATLAB. LRSP generates a graphical user interface that includes a pop-up menu that prompts the user for the input of data that determine the performance of a laser ranging system. Examples of input data include duration and energy of the laser pulse, the laser wavelength, the width of the laser beam, and several parameters that characterize the transmitting and receiving optics, the receiving electronic circuitry, and the optical properties of the atmosphere and the terrain. When the input data have been entered, LRSP computes the signal-to-noise ratio as a function of range, signal and noise currents, and ranging and pointing errors.

  4. Modeling error analysis of stationary linear discrete-time filters

    NASA Technical Reports Server (NTRS)

    Patel, R.; Toda, M.

    1977-01-01

    The performance of Kalman-type, linear, discrete-time filters in the presence of modeling errors is considered. The discussion is limited to stationary performance, and bounds are obtained for the performance index, the mean-squared error of estimates for suboptimal and optimal (Kalman) filters. The computation of these bounds requires information on only the model matrices and the range of errors for these matrices. Consequently, a designer can easily compare the performance of a suboptimal filter with that of the optimal filter, when only the range of errors in the elements of the model matrices is available.
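    A standard way to set up such a comparison (generic notation, not necessarily the paper's) is through the stationary error covariance of a filter with an arbitrary fixed gain K. For x_{k+1} = A x_k + w_k and y_k = C x_k + v_k, with noise covariances Q and R, the covariance of the estimation error satisfies

        P = (I - KC)\left(A P A^{\mathsf{T}} + Q\right)(I - KC)^{\mathsf{T}} + K R K^{\mathsf{T}} ,

    which reduces to the optimal value when K is the Kalman gain; evaluating the fixed point over the assumed range of errors in A, C, Q and R yields bounds of the kind described.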

  5. Design of Large Momentum Acceptance Transport Systems

    SciTech Connect

    D.R. Douglas

    2005-05-01

    The use of energy recovery to enable high power linac operation often gives rise to an attendant challenge--the transport of high power beams subtending large phase space volumes. In particular applications--such as FEL driver accelerators--this manifests itself as a requirement for beam transport systems with large momentum acceptance. We will discuss the design, implementation, and operation of such systems. Though at times counterintuitive in behavior (perturbative descriptions may, for example, be misleading), large acceptance systems have been successfully utilized for generations as spectrometers and accelerator recirculators [1]. Such systems are in fact often readily designed using appropriate geometric descriptions of beam behavior; insight provided using such a perspective may in addition reveal inherent symmetries that simplify construction and improve operability. Our discussion will focus on two examples: the Bates-clone recirculator used in the Jefferson Lab 10 kW IR Upgrade FEL (which has an observed acceptance of 10% or more) and a compaction-managed mirror-bend achromat concept with an acceptance ranging from 50 to 150 MeV.

  6. Errors and mistakes in breast ultrasound diagnostics.

    PubMed

    Jakubowski, Wiesław; Dobruch-Sobczak, Katarzyna; Migda, Bartosz

    2012-09-01

    Sonomammography is often the first additional examination performed in the diagnostics of breast diseases. The development of ultrasound imaging techniques, particularly the introduction of high frequency transducers, matrix transducers, harmonic imaging and, finally, elastography, has improved the diagnostics of breast disease. Nevertheless, as in any imaging method, there are errors and mistakes resulting from the technical limitations of the method, breast anatomy (fibrous remodeling), and insufficient sensitivity and, in particular, specificity. Errors in breast ultrasound diagnostics can be divided into those that are impossible to avoid and those that can potentially be reduced. In this article the most frequently made errors in ultrasound are presented, including those caused by the presence of artifacts resulting from volumetric averaging in the near and far field, artifacts in cysts or in dilated lactiferous ducts (reverberations, comet tail artifacts, lateral beam artifacts), and improper settings of the overall gain, the time gain curve, or the range. Errors dependent on the examiner, which result in a wrong BIRADS-usg classification, are divided into negative and positive errors. The sources of these errors are listed. Methods for minimizing the number of errors made are discussed, including those related to appropriate examination technique, taking into account data from the case history, and the use of the greatest possible number of additional options such as harmonic imaging, color and power Doppler, and elastography. Examples of errors resulting from the technical conditions of the method are presented, as well as those dependent on the examiner, which are related to the great diversity and variation of ultrasound images of pathological breast lesions.

  7. Extending the Technology Acceptance Model: Policy Acceptance Model (PAM)

    NASA Astrophysics Data System (ADS)

    Pierce, Tamra

    There has been extensive research on how new ideas and technologies are accepted in society. This has resulted in the creation of many models that are used to discover and assess the contributing factors. The Technology Acceptance Model (TAM) is one that is a widely accepted model. This model examines people's acceptance of new technologies based on variables that directly correlate to how the end user views the product. This paper introduces the Policy Acceptance Model (PAM), an expansion of TAM, which is designed for the analysis and evaluation of acceptance of new policy implementation. PAM includes the traditional constructs of TAM and adds the variables of age, ethnicity, and family. The model is demonstrated using a survey of people's attitude toward the upcoming healthcare reform in the United States (US) from 72 survey respondents. The aim is that the theory behind this model can be used as a framework that will be applicable to studies looking at the introduction of any new or modified policies.

  8. Skylab water balance error analysis

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1977-01-01

    Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and little can be gained from improvement in analytical accuracy. In addition, a propagation of error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.

  9. [Dealing with errors in medicine].

    PubMed

    Schoenenberger, R A; Perruchoud, A P

    1998-12-24

    Iatrogenic disease is probably more commonly than assumed the consequence of errors and mistakes committed by physicians and other medical personnel. Traditionally, strategies to prevent errors in medicine focus on inspection and rely on the professional ethos of health care personnel. The increasingly complex nature of medical practise and the multitude of interventions that each patient receives increases the likelihood of error. More efficient approaches to deal with errors have been developed. The methods include routine identification of errors (critical incidence report), systematic monitoring of multiple-step processes in medical practice, system analysis, and system redesign. A search for underlying causes of errors (rather than distal causes) will enable organizations to collectively learn without denying the inevitable occurrence of human error. Errors and mistakes may become precious chances to increase the quality of medical care.

  10. Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan

    2013-01-01

    The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as nonconstant variance resulting from systematic errors leaking into random errors, and the lack of prediction capability. Therefore, the multiplicative error model is a better choice.
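    The two candidate forms are commonly written as follows (a generic parameterization; the letter's exact formulation may differ in detail), with X the reference, Y the satellite measurement, systematic parameters (a, b) or (α, β), and a random part ε:

        \text{additive:}\quad Y = a + bX + \varepsilon, \qquad
        \text{multiplicative:}\quad Y = \alpha X^{\beta} e^{\varepsilon}
        \;\Longleftrightarrow\; \ln Y = \ln \alpha + \beta \ln X + \varepsilon .

    In the multiplicative form the random term scales with the measurement itself, which is why it copes better with the orders-of-magnitude variability of daily precipitation.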

  11. Preventing medication errors in cancer chemotherapy.

    PubMed

    Cohen, M R; Anderson, R W; Attilio, R M; Green, L; Muller, R J; Pruemer, J M

    1996-04-01

    Recommendations for preventing medication errors in cancer chemotherapy are made. Before a health care provider is granted privileges to prescribe, dispense, or administer antineoplastic agents, he or she should undergo a tailored educational program and possibly testing or certification. Appropriate reference materials should be developed. Each institution should develop a dose-verification process with as many independent checks as possible. A detailed checklist covering prescribing, transcribing, dispensing, and administration should be used. Oral orders are not acceptable. All doses should be calculated independently by the physician, the pharmacist, and the nurse. Dosage limits should be established and a review process set up for doses that exceed the limits. These limits should be entered into pharmacy computer systems, listed on preprinted order forms, stated on the product packaging, placed in strategic locations in the institution, and communicated to employees. The prescribing vocabulary must be standardized. Acronyms, abbreviations, and brand names must be avoided and steps taken to avoid other sources of confusion in the written orders, such as trailing zeros. Preprinted antineoplastic drug order forms containing checklists can help avoid errors. Manufacturers should be encouraged to avoid or eliminate ambiguities in drug names and dosing information. Patients must be educated about all aspects of their cancer chemotherapy, as patients represent a last line of defense against errors. An interdisciplinary team at each practice site should review every medication error reported. Pharmacists should be involved at all sites where antineoplastic agents are dispensed. Although it may not be possible to eliminate all medication errors in cancer chemotherapy, the risk can be minimized through specific steps. Because of their training and experience, pharmacists should take the lead in this effort. PMID:8697025

  12. Design Optimization for the Measurement Accuracy Improvement of a Large Range Nanopositioning Stage.

    PubMed

    Torralba, Marta; Yagüe-Fabra, José Antonio; Albajez, José Antonio; Aguilar, Juan José

    2016-01-01

    Both an accurate machine design and an adequate metrology loop definition are critical factors when precision positioning represents a key issue for the final system performance. This article discusses the error budget methodology as an advantageous technique to improve the measurement accuracy of a 2D long-range stage during its design phase. The nanopositioning platform NanoPla is presented here. Its specifications, e.g., an XY-travel range of 50 mm × 50 mm and sub-micrometric accuracy, and some novel design solutions, e.g., a three-layer and two-stage architecture, are described. Once the prototype is defined, an error analysis is performed to propose design improvements. Then, the metrology loop of the system is mathematically modelled to define the propagation of the different error sources. Several simplifications and design hypotheses are justified and validated, including the assumption of rigid-body behavior, which is verified by finite element analysis. The different error sources and their estimated contributions are enumerated in order to conclude with the final error values obtained from the error budget. The measurement deviations obtained demonstrate the important influence of the working environmental conditions, the flatness error of the plane mirror reflectors, and the accurate manufacture and assembly of the components forming the metrological loop. Thus, a temperature control of ±0.1 °C results in an acceptable maximum positioning error for the developed NanoPla stage, i.e., 41 nm, 36 nm and 48 nm in the X-, Y- and Z-axis, respectively. PMID:26761014

  13. Design Optimization for the Measurement Accuracy Improvement of a Large Range Nanopositioning Stage

    PubMed Central

    Torralba, Marta; Yagüe-Fabra, José Antonio; Albajez, José Antonio; Aguilar, Juan José

    2016-01-01

    Both an accurate machine design and an adequate metrology loop definition are critical factors when precision positioning represents a key issue for the final system performance. This article discusses the error budget methodology as an advantageous technique to improve the measurement accuracy of a 2D long-range stage during its design phase. The nanopositioning platform NanoPla is presented here. Its specifications, e.g., an XY-travel range of 50 mm × 50 mm and sub-micrometric accuracy, and some novel design solutions, e.g., a three-layer and two-stage architecture, are described. Once the prototype is defined, an error analysis is performed to propose design improvements. Then, the metrology loop of the system is mathematically modelled to define the propagation of the different error sources. Several simplifications and design hypotheses are justified and validated, including the assumption of rigid-body behavior, which is verified by finite element analysis. The different error sources and their estimated contributions are enumerated in order to conclude with the final error values obtained from the error budget. The measurement deviations obtained demonstrate the important influence of the working environmental conditions, the flatness error of the plane mirror reflectors, and the accurate manufacture and assembly of the components forming the metrological loop. Thus, a temperature control of ±0.1 °C results in an acceptable maximum positioning error for the developed NanoPla stage, i.e., 41 nm, 36 nm and 48 nm in the X-, Y- and Z-axis, respectively. PMID:26761014

  14. Error Sources in Asteroid Astrometry

    NASA Technical Reports Server (NTRS)

    Owen, William M., Jr.

    2000-01-01

    Asteroid astrometry, like any other scientific measurement process, is subject to both random and systematic errors, not all of which are under the observer's control. To design an astrometric observing program or to improve an existing one requires knowledge of the various sources of error, how different errors affect one's results, and how various errors may be minimized by careful observation or data reduction techniques.

  15. Reducing nurse medicine administration errors.

    PubMed

    Ofosu, Rose; Jarrett, Patricia

    Errors in administering medicines are common and can compromise the safety of patients. This review discusses the causes of drug administration error in hospitals by student and registered nurses, and the practical measures educators and hospitals can take to improve nurses' knowledge and skills in medicines management, and reduce drug errors.

  16. Error Bounds for Interpolative Approximations.

    ERIC Educational Resources Information Center

    Gal-Ezer, J.; Zwas, G.

    1990-01-01

    Elementary error estimation in the approximation of functions by polynomials as a computational assignment, error-bounding functions and error bounds, and the choice of interpolation points are discussed. Precalculus and computer instruction are used on some of the calculations. (KR)
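    The error bound at the heart of such an assignment is the classical interpolation remainder: if p_n interpolates f at nodes x_0, ..., x_n and f is (n+1)-times differentiable, then for some ξ_x in the interval,

        f(x) - p_n(x) = \frac{f^{(n+1)}(\xi_x)}{(n+1)!} \prod_{i=0}^{n} (x - x_i) ,

    so the choice of interpolation points enters only through the node polynomial, whose maximum over [-1, 1] is minimized (to 2^{-n}) by Chebyshev nodes.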

  17. Uncertainty quantification and error analysis

    SciTech Connect

    Higdon, Dave M; Anderson, Mark C; Habib, Salman; Klein, Richard; Berliner, Mark; Covey, Curt; Ghattas, Omar; Graziani, Carlo; Seager, Mark; Sefcik, Joseph; Stark, Philip

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  18. L-286 Acceptance Test Record

    SciTech Connect

    HARMON, B.C.

    2000-01-14

    This document provides a detailed account of how the acceptance testing was conducted for Project L-286, ''200E Area Sanitary Water Plant Effluent Stream Reduction''. The testing of the L-286 instrumentation system was conducted under the direct supervision

  19. Accepted scientific research works (abstracts).

    PubMed

    2014-01-01

    These are the 39 accepted abstracts for IAYT's Symposium on Yoga Research (SYR) September 24-24, 2014 at the Kripalu Center for Yoga & Health and published in the Final Program Guide and Abstracts. PMID:25645134

  20. Agriculture, forestry, range resources

    NASA Technical Reports Server (NTRS)

    Crea, W. J.

    1974-01-01

    In the area of crop species identification, it has been found that temporal data analysis, preliminary stratification, and unequal probability analysis were several of the factors that contributed to high identification accuracies. Single data set accuracies on fields of greater than 80,000 sq m (20 acres) are in the 70- to 90-percent range; however, with the use of temporal data, accuracies of 95 percent have been reported. Identification accuracy drops off significantly on areas of less than 80,000 sq m (20 acres), as does measurement accuracy. Forest stratification into coniferous and deciduous areas has been accomplished to a 90- to 95-percent accuracy level. Using multistage sampling techniques, the timber volume of a national forest district has been estimated to a confidence level and standard deviation acceptable to the Forest Service at a very favorable cost-benefit time ratio. Range species/plant community vegetation mapping has been accomplished at various levels of success (69- to 90-percent accuracy). However, several investigators have obtained encouraging initial results in range biomass (forage production) estimation and range readiness predictions. Soil association map correction and soil association mapping in new areas appear to have been proven feasible on large areas; however, testing in a complex soil area should be undertaken.

  1. Characterization of the error budget of Alba-NOM

    NASA Astrophysics Data System (ADS)

    Nicolas, Josep; Martínez, Juan Carlos

    2013-05-01

    The Alba-NOM instrument is a high accuracy scanning machine capable of measuring the slope profile of long mirrors with resolution below the nanometer scale and for a wide range of curvatures. We present the characterization of different sources of errors that limit the uncertainty of the instrument. We have investigated three main contributions to the uncertainty of the measurements: errors introduced by the scanning system and the pentaprism, errors due to environmental conditions, and optical errors of the autocollimator. These sources of error have been investigated by measuring the corresponding motion errors with a high accuracy differential interferometer and by simulating their impact on the measurements by means of ray-tracing. Optical error contributions have been extracted from the analysis of redundant measurements of test surfaces. The methods and results are presented, as well as an example of application that has benefited from the achieved accuracy.

  2. Beta systems error analysis

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The atmospheric backscatter coefficient, beta, measured with an airborne CO Laser Doppler Velocimeter (LDV) system operating in a continuous-wave, focused mode is discussed. The Single Particle Mode (SPM) algorithm was developed from concept through analysis of an extensive amount of data obtained with the system on board a NASA aircraft. The SPM algorithm is intended to be employed in situations where one particle at a time appears in the sensitive volume of the LDV. In addition to giving the backscatter coefficient, the SPM algorithm also produces as intermediate results the aerosol density and the aerosol backscatter cross section distribution. A second method, which measures only the atmospheric backscatter coefficient, is called the Volume Mode (VM) and was employed simultaneously. The results of these two methods differed by slightly less than an order of magnitude. The measurement uncertainties and other errors in the results of the two methods are examined.

  3. Errors inducing radiation overdoses.

    PubMed

    Grammaticos, Philip C

    2013-01-01

    There is no doubt that equipment emitting radiation for therapeutic purposes should be checked often for the possibility of administering radiation overdoses to patients. Technologists, radiation safety officers, radiologists, medical physicists, healthcare providers and administration should take proper care of this issue. "We must be beneficial and not harmful to the patients", according to the Hippocratic doctrine. Cases of radiation overdose are often reported, and a series of such cases has recently been described; the doctors who were responsible received heavy punishments. It is much better to prevent than to treat an error or a disease. A Personal Smart Card or Score Card has been suggested for every patient undergoing therapeutic and/or diagnostic procedures involving the use of radiation. Taxonomy may also help. PMID:24251304

  4. Validation and acceptance of synthetic infrared imagery

    NASA Astrophysics Data System (ADS)

    Smith, Moira I.; Bernhardt, Mark; Angell, Christopher R.; Hickman, Duncan; Whitehead, Philip; Patel, Dilip

    2004-08-01

    This paper describes the use of an image query database (IQ-DB) tool as a means of implementing a validation strategy for synthetic long-wave infrared images of sea clutter. Specifically it was required to determine the validity of the synthetic imagery for use in developing and testing automatic target detection algorithms. The strategy adopted for exploiting synthetic imagery is outlined and the key issues of validation and acceptance are discussed in detail. A wide range of image metrics has been developed to achieve pre-defined validation criteria. A number of these metrics, which include post processing algorithms, are presented. Furthermore, the IQ-DB provides a robust mechanism for configuration management and control of the large volume of data used. The implementation of the IQ-DB is reviewed in terms of its cardinal point specification and its central role in synthetic imagery validation and EOSS progressive acceptance.

  5. Register file soft error recovery

    SciTech Connect

    Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.

    2013-10-15

    Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.

  6. Rapid mapping of volumetric errors

    SciTech Connect

    Krulewich, D.; Hale, L.; Yordy, D.

    1995-09-13

    This paper describes a relatively inexpensive, fast, and easy-to-execute approach to mapping the volumetric errors of a machine tool, coordinate measuring machine, or robot. An error map is used to characterize a machine or to improve its accuracy by compensating for the systematic errors. The method consists of three steps: (1) modeling the relationship between the volumetric error and the current state of the machine; (2) acquiring error data based on length measurements throughout the work volume; and (3) optimizing the model to the particular machine.
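
    To make steps (2) and (3) concrete, the sketch below fits a simple six-parameter volumetric error model (three axis scale errors and three squareness errors) to fractional length errors measured along many directions in the work volume. The model form, noise level and parameter values are illustrative assumptions, not those of the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        # "True" machine errors (unknown in practice): 3 scale + 3 squareness terms, in ppm.
        p_true = np.array([12.0, -8.0, 5.0, 20.0, -15.0, 9.0]) * 1e-6

        def design_row(u):
            """Sensitivity of the fractional length error to the 6 parameters
            for a length measured along unit direction u."""
            ux, uy, uz = u
            return np.array([ux*ux, uy*uy, uz*uz, ux*uy, ux*uz, uy*uz])

        # Step 2: acquire fractional length errors along many directions.
        dirs = rng.normal(size=(60, 3))
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
        A = np.array([design_row(u) for u in dirs])
        dL_over_L = A @ p_true + rng.normal(scale=0.5e-6, size=len(A))

        # Step 3: optimize (least-squares fit) the model to this machine.
        p_fit, *_ = np.linalg.lstsq(A, dL_over_L, rcond=None)
        print("fitted parameters (ppm):", np.round(p_fit * 1e6, 1))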

  7. Preventing Communication Errors in Telephone Medicine

    PubMed Central

    Reisman, Anna B; Brown, Karen E

    2005-01-01

    Errors in telephone communication can result in outcomes ranging from inconvenience and anxiety to serious compromises in patient safety. Although 25% of interactions between physicians and patients take place on the telephone, little has been written about telephone communication and medical mishaps. Similarly, training in telephone medicine skills is limited; only 6% of residency programs teach any aspect of telephone medicine. Increasing familiarity with common telephone challenges with patients may help physicians decrease the likelihood of negative outcomes. We use case vignettes to highlight communication errors in common telephone scenarios. These scenarios include giving sensitive test results, requests for narcotics, managing ill patients who are not sick enough for the emergency room, dealing with late-night calls, communicating with unintelligible patients, and handling calls from family members. We provide management strategies to minimize the occurrence of these errors. PMID:16191150

  8. Long-range connectomics.

    PubMed

    Jbabdi, Saad; Behrens, Timothy E

    2013-12-01

    Decoding neural algorithms is one of the major goals of neuroscience. It is generally accepted that brain computations rely on the orchestration of neural activity at local scales, as well as across the brain through long-range connections. Understanding the relationship between brain activity and connectivity is therefore a prerequisite to cracking the neural code. In the past few decades, tremendous technological advances have been achieved in connectivity measurement techniques. We now possess a battery of tools to measure brain activity and connections at all available scales. A great source of excitement are the new in vivo tools that allow us to measure structural and functional connections noninvasively. Here, we discuss how these new technologies may contribute to deciphering the neural code.

  9. Identifying state-dependent model error in numerical weather prediction

    NASA Astrophysics Data System (ADS)

    Moskaitis, J.; Hansen, J.; Toth, Z.; Zhu, Y.

    2003-04-01

    Model forecasts of complex systems such as the atmosphere lose predictive skill because of two different sources of error: initial conditions error and model error. While much study has been done to determine the nature and consequences of initial conditions error in operational forecast models, relatively little has been done to identify the source of model error and to quantify the effects of model error on forecasts. Here, we attempt to "disentangle" model error from initial conditions error by applying a diagnostic tool in a simple model framework to identify poor forecasts for which model error is likely responsible. The diagnostic is based on the premise that for a perfect ensemble forecast, verification should fall outside the range of ensemble forecast states only a small percentage of the time, according to the size of the ensemble. Identifying these outlier verifications and comparing the statistics of their occurrence to those of a perfect ensemble can tell us about the role of model error in a quantitative, state-dependent manner. The same diagnostic is applied to operational NWP models to quantify the role of model error in poor forecasts (see companion paper by Toth et al.). From these results, we can infer the atmospheric processes the model cannot adequately simulate.
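
    The outlier diagnostic is easy to emulate. In the sketch below (our construction, not the authors' code), the verification of a perfect N-member ensemble falls outside the ensemble envelope with probability 2/(N+1); a rate well above that, as produced here by an artificial forecast bias, is the signature of model error.

        import numpy as np

        rng = np.random.default_rng(1)
        N, trials = 20, 5000

        def outlier_rate(bias=0.0):
            ens = rng.normal(size=(trials, N))           # ensemble forecasts
            verif = rng.normal(loc=bias, size=trials)    # verifying "truth"
            return np.mean((verif < ens.min(axis=1)) | (verif > ens.max(axis=1)))

        print("perfect-ensemble expectation:", 2 / (N + 1))   # ~0.095
        print("no model error:  ", outlier_rate())
        print("biased model:    ", outlier_rate(bias=1.0))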

  10. Improved Error Thresholds for Measurement-Free Error Correction

    NASA Astrophysics Data System (ADS)

    Crow, Daniel; Joynt, Robert; Saffman, M.

    2016-09-01

    Motivated by limitations and capabilities of neutral atom qubits, we examine whether measurement-free error correction can produce practical error thresholds. We show that this can be achieved by extracting redundant syndrome information, giving our procedure extra fault tolerance and eliminating the need for ancilla verification. The procedure is particularly favorable when multiqubit gates are available for the correction step. Simulations of the bit-flip, Bacon-Shor, and Steane codes indicate that coherent error correction can produce threshold error rates that are on the order of 10^-3 to 10^-4—comparable with or better than measurement-based values, and much better than previous results for other coherent error correction schemes. This indicates that coherent error correction is worthy of serious consideration for achieving protected logical qubits.
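
    For intuition about how error correction buys a threshold, the toy Monte Carlo below simulates the classical 3-bit repetition (bit-flip) code with majority-vote correction, whose logical error rate scales as roughly 3p^2. It is a classical caricature only, not the coherent measurement-free scheme studied in the paper.

        import numpy as np

        rng = np.random.default_rng(2)

        def logical_error_rate(p, shots=1_000_000):
            flips = rng.random((shots, 3)) < p        # independent bit flips
            return (flips.sum(axis=1) >= 2).mean()    # majority vote fails

        for p in (0.1, 0.05, 0.01):
            print(f"p={p:g}  logical={logical_error_rate(p):.2e}  ~3p^2={3*p*p:.2e}")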

  11. How perioperative nurses define, attribute causes of, and react to intraoperative nursing errors.

    PubMed

    Chard, Robin

    2010-01-01

    Errors in nursing practice pose a continuing threat to patient safety. A descriptive, correlational study was conducted to examine the definitions, circumstances, and perceived causes of intraoperative nursing errors; reactions of perioperative nurses to intraoperative nursing errors; and the relationships among coping with intraoperative nursing errors, emotional distress, and changes in practice made as a result of error. The results indicate that strategies of accepting responsibility and using self-control are significant predictors of emotional distress. Seeking social support and planful problem solving emerged as significant predictors of constructive changes in practice. Most predictive of defensive changes was the strategy of escape/avoidance.

  12. Parental Reports of Children's Scale Errors in Everyday Life

    ERIC Educational Resources Information Center

    Rosengren, Karl S.; Gutierrez, Isabel T.; Anderson, Kathy N.; Schein, Stevie S.

    2009-01-01

    Scale errors refer to behaviors where young children attempt to perform an action on an object that is too small to effectively accommodate the behavior. The goal of this study was to examine the frequency and characteristics of scale errors in everyday life. To do so, the researchers collected parental reports of children's (age range = 13-21…

  13. Errors in clinical laboratories or errors in laboratory medicine?

    PubMed

    Plebani, Mario

    2006-01-01

    Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes

  14. Contour Error Map Algorithm

    NASA Technical Reports Server (NTRS)

    Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John

    2005-01-01

    The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.
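
    As a toy illustration of comparing the binarized fields (a simplification; the actual CEM matches sea-breeze boundary contours and can apply image erosion), the sketch below scores a forecast grid D against an observed grid d at one time step with simple categorical statistics.

        import numpy as np

        rng = np.random.default_rng(3)
        D = rng.random((40, 40)) < 0.5          # forecast: 1 = onshore wind
        d = D ^ (rng.random((40, 40)) < 0.1)    # observation: forecast with 10% flips

        hits = np.sum(D & d)
        misses = np.sum(~D & d)
        false_alarms = np.sum(D & ~d)
        csi = hits / (hits + misses + false_alarms)   # critical success index
        print(f"agreement = {np.mean(D == d):.2f}, CSI = {csi:.2f}")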

  15. Optimal input design for aircraft instrumentation systematic error estimation

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1991-01-01

    A new technique for designing optimal flight test inputs for accurate estimation of instrumentation systematic errors was developed and demonstrated. A simulation model of the F-18 High Angle of Attack Research Vehicle (HARV) aircraft was used to evaluate the effectiveness of the optimal input compared to an input recorded during flight test. Instrumentation systematic error parameter estimates and their standard errors were compared. It was found that the optimal input design improved the error parameter estimates and their accuracies for a fixed input duration. Pilot acceptability of the optimal input design was demonstrated using a six degree-of-freedom fixed-base piloted simulation of the F-18 HARV. The technique described in this work provides a practical, optimal procedure for designing inputs for data compatibility experiments.

  16. Per-beam, planar IMRT QA passing rates do not predict clinically relevant patient dose errors

    SciTech Connect

    Nelms, Benjamin E.; Zhen Heming; Tome, Wolfgang A.

    2011-02-15

    Purpose: The purpose of this work is to determine the statistical correlation between per-beam, planar IMRT QA passing rates and several clinically relevant, anatomy-based dose errors for per-patient IMRT QA. The intent is to assess the predictive power of a common conventional IMRT QA performance metric, the Gamma passing rate per beam. Methods: Ninety-six unique data sets were created by inducing four types of dose errors in 24 clinical head and neck IMRT plans, each planned with 6 MV Varian 120-leaf MLC linear accelerators using a commercial treatment planning system and step-and-shoot delivery. The error-free beams/plans were used as "simulated measurements" (for generating the IMRT QA dose planes and the anatomy dose metrics) to compare to the corresponding data calculated by the error-induced plans. The degree of the induced errors was tuned to mimic IMRT QA passing rates that are commonly achieved using conventional methods. Results: Analysis of clinical metrics (parotid mean doses, spinal cord max and D1cc, CTV D95, and larynx mean) vs IMRT QA Gamma analysis (3%/3 mm, 2/2, 1/1) showed that in all cases, there were only weak to moderate correlations (range of Pearson's r-values: -0.295 to 0.653). Moreover, the moderate correlations actually had positive Pearson's r-values (i.e., clinically relevant metric differences increased with increasing IMRT QA passing rate), indicating that some of the largest anatomy-based dose differences occurred in the cases of high IMRT QA passing rates, which may be called "false negatives." The results also show numerous instances of false positives or cases where low IMRT QA passing rates do not imply large errors in anatomy dose metrics. In none of the cases was there correlation consistent with high predictive power of planar IMRT passing rates, i.e., in none of the cases did high IMRT QA Gamma passing rates predict low errors in anatomy dose metrics or vice versa. Conclusions: There is a lack of correlation between
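
    The underlying statistical test is an ordinary Pearson correlation between per-beam passing rates and anatomy dose-metric errors. The sketch below reproduces the form of that analysis on synthetic stand-in data; the numbers are invented, and independent draws naturally give the weak |r| the study reports for real plans.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)
        n = 96
        passing_rate = rng.uniform(88, 100, n)      # % of points passing 3%/3 mm
        dose_error = np.abs(rng.normal(0, 2, n))    # % error in, e.g., CTV D95
        r, pval = stats.pearsonr(passing_rate, dose_error)
        print(f"Pearson r = {r:+.3f} (p = {pval:.2f})")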

  17. Sepsis: Medical errors in Poland.

    PubMed

    Rorat, Marta; Jurek, Tomasz

    2016-01-01

    Health, safety and medical errors are currently the subject of worldwide discussion. The authors analysed medico-legal opinions to determine the types of medical errors and their impact on the course of sepsis. The authors carried out a retrospective analysis of 66 medico-legal opinions issued by the Wroclaw Department of Forensic Medicine between 2004 and 2013 (at the request of the prosecutor or court) in cases examined for medical errors. Medical errors were confirmed in 55 of the 66 medico-legal opinions. The age of the victims varied from 2 weeks to 68 years; 49 patients died. The analysis revealed medical errors committed by 113 health-care workers: 98 physicians, 8 nurses and 8 emergency medical dispatchers. In 33 cases, an error was made before hospitalisation. Hospital errors occurred in 35 victims. Diagnostic errors were discovered in 50 patients, including 46 cases in which sepsis was incorrectly recognised and 37 cases of insufficient diagnosis. Therapeutic errors occurred in 37 victims, organisational errors in 9 and technical errors in 2. In addition to sepsis, 8 patients also had a severe concomitant disease and 8 had a chronic disease. In 45 cases, the authors observed glaring errors, which could incur criminal liability. There is an urgent need to introduce a system for reporting and analysing medical errors in Poland. The development and popularisation of standards for identifying and treating sepsis across the basic medical professions is essential to improve patient safety and survival rates. Procedures should be introduced to prevent health-care workers from administering incorrect treatment in such cases.

  18. From requirements to acceptance tests

    NASA Technical Reports Server (NTRS)

    Baize, Lionel; Pasquier, Helene

    1993-01-01

    From user requirements definition to accepted software system, software project management wants to be sure that the system will meet the requirements. For the development of a telecommunications satellite Control Centre, C.N.E.S. has used new rules to make the use of the tracing matrix easier. From requirements to acceptance tests, each item of a document must have an identifier. A unique matrix traces the system and allows the consequences of a change in the requirements to be tracked. A tool has been developed to import documents into a relational database. Each record of the database corresponds to an item of a document; the access key is the item identifier. The tracing matrix is also processed, automatically providing links between the different documents. It enables traced items to be read on the same screen. For example, one can read simultaneously the User Requirements items, the corresponding Software Requirements items and the Acceptance Tests.

  19. Error correction for IFSAR

    DOEpatents

    Doerry, Armin W.; Bickel, Douglas L.

    2002-01-01

    IFSAR images of a target scene are generated by compensating for variations in vertical separation between collection surfaces defined for each IFSAR antenna by adjusting the baseline projection during image generation. In addition, height information from all antennas is processed before processing range and azimuth information in a normal fashion to create the IFSAR image.

  20. Development of quantitative risk acceptance criteria

    SciTech Connect

    Griesmeyer, J. M.; Okrent, D.

    1981-01-01

    Some of the major considerations for effective management of risk are discussed, with particular emphasis on risks due to nuclear power plant operations. Although there are impacts associated with the rest of the fuel cycle, they are not addressed here. Several previously published proposals for quantitative risk criteria are reviewed. They range from a simple acceptance criterion on individual risk of death to a quantitative risk management framework. The final section discusses some of the problems in establishing a framework for the quantitative management of risk.

  1. NOTE: The influence of CT image noise on proton range calculation in radiotherapy planning

    NASA Astrophysics Data System (ADS)

    Chvetsov, Alexei V.; Paige, Sandra L.

    2010-03-01

    The purpose of this note is to evaluate the relationship between the stochastic errors in CT numbers and the standard deviation of the computed proton beam range in radiotherapy planning. The stochastic voxel-to-voxel variation in CT numbers, called 'noise', may be due to signal registration, processing and the numerical image reconstruction technique. Noise in CT images may cause a deviation of the computed proton range from the physical proton range, even assuming that the error due to the CT number to stopping power calibration is removed. To obtain the probability density function (PDF) of the computed proton range, we have used the continuous slowing down approximation (CSDA) and uncorrelated white Gaussian noise along the proton path. The model of white noise was accepted because, for a slice-based fan-beam CT scanner, the power-spectrum properties apply only to the axial (x, y) domain and the noise is uncorrelated in the z domain. However, the possible influence of the noise power spectrum on the standard deviation of the range should be investigated in the future. A random number generator was utilized for noise simulation and this procedure was iteratively repeated to obtain convergence of the range PDF, which approached a Gaussian distribution. We showed that the standard deviation of the range, σ, increases linearly with the initial proton energy, computational grid size and standard deviation of the voxel values. The 95% confidence interval width of the range PDF, which is defined as 4σ, may reach 0.6 cm for an initial proton energy of 200 MeV, a computational grid of 0.25 cm and a 5% standard deviation of CT voxel values. Our results show that the range uncertainty due to random errors in CT numbers may be significant and comparable to the uncertainties due to calibration of CT numbers. Presented at the 51st Annual Meeting of the American Association of Physicists in Medicine, Anaheim, CA, July 26-30, 2009.
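
    A minimal Monte Carlo in the spirit of the note can be sketched as follows: white Gaussian noise is applied to the voxel relative stopping powers along the beam path, and the depth at which the proton's water-equivalent range is exhausted is recorded per trial. The range, grid size and noise level are illustrative stand-ins for the note's 200 MeV / 0.25 cm / 5% case.

        import numpy as np

        rng = np.random.default_rng(5)

        def simulated_ranges(R0_cm=26.0, dz_cm=0.25, sigma_vox=0.05, trials=10_000):
            n_vox = int(2 * R0_cm / dz_cm)
            rsp = 1.0 + rng.normal(0.0, sigma_vox, (trials, n_vox))  # noisy stopping power
            wepl = np.cumsum(rsp * dz_cm, axis=1)       # accumulated water-equivalent depth
            stop = np.argmax(wepl >= R0_cm, axis=1)     # first voxel where range is spent
            return (stop + 1) * dz_cm

        r = simulated_ranges()
        print(f"mean range = {r.mean():.2f} cm, sigma = {r.std():.3f} cm, "
              f"95% interval width ~ {4 * r.std():.2f} cm")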

  2. Acceptance Criteria Framework for Autonomous Biological Detectors

    SciTech Connect

    Dzenitis, J M

    2006-12-12

    The purpose of this study was to examine a set of user acceptance criteria for autonomous biological detection systems for application in high-traffic, public facilities. The test case for the acceptance criteria was the Autonomous Pathogen Detection System (APDS) operating in high-traffic facilities in New York City (NYC). However, the acceptance criteria were designed to be generally applicable to other biological detection systems in other locations. For such detection systems, "users" will include local authorities (e.g., facility operators, public health officials, and law enforcement personnel) and national authorities [including personnel from the Department of Homeland Security (DHS), the BioWatch Program, the Centers for Disease Control and Prevention (CDC), and the Federal Bureau of Investigation (FBI)]. The panel members brought expertise from a broad range of backgrounds to complete this picture. The goals of this document are: (1) to serve as informal guidance for users in considering the benefits and costs of these systems; and (2) to serve as informal guidance for developers in understanding the needs of users. In follow-up work, this framework will be used to systematically document the APDS for appropriateness and readiness for use in NYC.

  3. Simulation of large acceptance LINAC for muons

    SciTech Connect

    Miyadera, H; Kurennoy, S; Jason, A J

    2010-01-01

    There has been a recent need for muon accelerators not only for future Neutrino Factories and Muon Colliders but also for other applications in industry and medical use. We carried out simulations of a large-acceptance muon linac based on a new concept, 'mixed buncher/acceleration'. The linac can accept pions/muons from a production target with large acceptance and accelerate muons without any beam cooling, which makes the initial section of the muon-linac system very compact. The linac has a high impact on the Neutrino Factory and Muon Collider (NF/MC) scenario, since the 300-m injector section can be replaced by a muon linac of only 10-m length. The current design of the linac consists of the following components: an independent 805-MHz cavity structure with a 6- or 8-cm-radius aperture window; injection of a broad range of pion/muon energies, 10-100 MeV; and acceleration to 150-200 MeV. Further acceleration of the muon beam is relatively easy, since the beam is already bunched.

  4. Dopamine reward prediction error coding.

    PubMed

    Schultz, Wolfram

    2016-03-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards-an evolutionary beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware.

  5. Dopamine reward prediction error coding

    PubMed Central

    Schultz, Wolfram

    2016-01-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards—an evolutionary beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware. PMID:27069377

  6. Medication Errors in Outpatient Pediatrics.

    PubMed

    Berrier, Kyla

    2016-01-01

    Medication errors may occur during parental administration of prescription and over-the-counter medications in the outpatient pediatric setting. Misinterpretation of medication labels and dosing errors are two types of errors in medication administration. Health literacy may play an important role in parents' ability to safely manage their child's medication regimen. There are several proposed strategies for decreasing these medication administration errors, including using standardized dosing instruments, using strictly metric units for medication dosing, and providing parents and caregivers with picture-based dosing instructions. Pediatric healthcare providers should be aware of these strategies and seek to implement many of them into their practices. PMID:27537086

  7. A theory of human error

    NASA Technical Reports Server (NTRS)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1980-01-01

    Human error, a significant contributing factor in a very high proportion of civil transport, general aviation, and rotorcraft accidents is investigated. Correction of the sources of human error requires that one attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation operations is presented. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  8. Motion estimation performance models with application to hardware error tolerance

    NASA Astrophysics Data System (ADS)

    Cheong, Hye-Yeon; Ortega, Antonio

    2007-01-01

    The progress of VLSI technology towards deep sub-micron feature sizes, e.g., sub-100 nanometer technology, has created a growing impact of hardware defects and fabrication process variability, which lead to reductions in yield rate. To address these problems, a new approach, system-level error tolerance (ET), has recently been introduced. Considering that a significant percentage of the entire chip production is discarded due to minor imperfections, this approach is based on accepting imperfect chips that introduce imperceptible/acceptable system-level degradation; this leads to increases in overall effective yield. In this paper, we investigate the impact of hardware faults on video compression performance, with a focus on the motion estimation (ME) process. More specifically, we provide an analytical formulation of the impact of single and multiple stuck-at faults within the ME computation. We further present a model for estimating the system-level performance degradation due to such faults, which can be used for the error-tolerance-based decision strategy of accepting a given faulty chip. We also show how different faults and ME search algorithms compare in terms of error tolerance and define the characteristics of search algorithms that lead to increased error tolerance. Finally, we show that different hardware architectures performing the same metric computation have different error tolerance characteristics, and we present the optimal ME hardware architecture in terms of error tolerance. While we focus on ME hardware, our work could also be applied to systems (e.g., classifiers, matching pursuits, vector quantization) where a selection is made among several alternatives (e.g., class label, basis function, quantization codeword) based on which choice minimizes an additive metric of interest.
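
    To make the fault scenario concrete, the toy sketch below (our construction, not the paper's analytical model) injects a stuck-at-1 fault on one output bit of each sum-of-absolute-differences (SAD) value during a full-search motion estimation and reports whether the selected motion vector changes; frequently it does not, which is the essence of the error-tolerance argument.

        import numpy as np

        rng = np.random.default_rng(6)
        frame = rng.integers(0, 256, (64, 64))
        block = frame[20:28, 24:32]                  # 8x8 block to be matched

        def sad(candidate):
            return int(np.abs(candidate - block).sum())

        def full_search(fault_bit=None):
            best, best_mv = None, None
            for dy in range(57):
                for dx in range(57):
                    s = sad(frame[dy:dy+8, dx:dx+8])
                    if fault_bit is not None:
                        s |= 1 << fault_bit          # stuck-at-1 on one SAD output bit
                    if best is None or s < best:
                        best, best_mv = s, (dy, dx)
            return best_mv

        print("fault-free MV:", full_search(), " faulty MV:", full_search(fault_bit=3))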

  9. Measurement Error and Equating Error in Power Analysis

    ERIC Educational Resources Information Center

    Phillips, Gary W.; Jiang, Tao

    2016-01-01

    Power analysis is a fundamental prerequisite for conducting scientific research. Without power analysis the researcher has no way of knowing whether the sample size is large enough to detect the effect he or she is looking for. This paper demonstrates how psychometric factors such as measurement error and equating error affect the power of…

  10. Anxiety and Error Monitoring: Increased Error Sensitivity or Altered Expectations?

    ERIC Educational Resources Information Center

    Compton, Rebecca J.; Carp, Joshua; Chaddock, Laura; Fineman, Stephanie L.; Quandt, Lorna C.; Ratliff, Jeffrey B.

    2007-01-01

    This study tested the prediction that the error-related negativity (ERN), a physiological measure of error monitoring, would be enhanced in anxious individuals, particularly in conditions with threatening cues. Participants made gender judgments about faces whose expressions were either happy, angry, or neutral. Replicating prior studies, midline…

  11. Precise Orbit Determination for GEOSAT Follow-On Using Satellite Laser Ranging Data and Intermission Altimeter Crossovers

    NASA Technical Reports Server (NTRS)

    Lemoine, Frank G.; Rowlands, David D.; Luthcke, Scott B.; Zelensky, Nikita P.; Chinn, Douglas S.; Pavlis, Despina E.; Marr, Gregory

    2001-01-01

    The US Navy's GEOSAT Follow-On spacecraft was launched on February 10, 1998, with the primary mission objective of mapping the oceans using a radar altimeter. Following an extensive set of calibration campaigns in 1999 and 2000, the US Navy formally accepted delivery of the satellite on November 29, 2000. Satellite laser ranging (SLR) and Doppler (Tranet-style) beacons track the spacecraft. Although limited amounts of GPS data were obtained, the primary mode of tracking remains satellite laser ranging. The GFO altimeter measurements are highly precise, with orbit error the largest component in the error budget. We have tuned the non-conservative force model for GFO and the gravity model using SLR, Doppler and altimeter crossover data sampled over one year. Gravity covariance projections to 70x70 show that the radial orbit error on GEOSAT was reduced from 2.6 cm in EGM96 to 1.3 cm with the addition of SLR, GFO/GFO and TOPEX/GFO crossover data. Evaluation of the gravity fields using SLR and crossover data supports the covariance projections and also shows a dramatic reduction in geographically correlated error for the tuned fields. In this paper, we report on progress in orbit determination for GFO using GFO/GFO and TOPEX/GFO altimeter crossovers. We discuss improvements in satellite force modeling and orbit determination strategy which allow the GFO radial orbit error to be reduced from 10-15 cm to better than 5 cm.

  12. Critical evidence for the prediction error theory in associative learning.

    PubMed

    Terao, Kanta; Matsumoto, Yukihisa; Mizunami, Makoto

    2015-01-01

    In associative learning in mammals, it is widely accepted that the discrepancy, or error, between actual and predicted reward determines whether learning occurs. Complete evidence for the prediction error theory, however, has not been obtained in any learning systems: Prediction error theory stems from the finding of a blocking phenomenon, but blocking can also be accounted for by other theories, such as the attentional theory. We demonstrated blocking in classical conditioning in crickets and obtained evidence to reject the attentional theory. To obtain further evidence supporting the prediction error theory and rejecting alternative theories, we constructed a neural model to match the prediction error theory, by modifying our previous model of learning in crickets, and we tested a prediction from the model: the model predicts that pharmacological intervention of octopaminergic transmission during appetitive conditioning impairs learning but not formation of reward prediction itself, and it thus predicts no learning in subsequent training. We observed such an "auto-blocking", which could be accounted for by the prediction error theory but not by other competitive theories to account for blocking. This study unambiguously demonstrates validity of the prediction error theory in associative learning. PMID:25754125

  13. Critical evidence for the prediction error theory in associative learning

    PubMed Central

    Terao, Kanta; Matsumoto, Yukihisa; Mizunami, Makoto

    2015-01-01

    In associative learning in mammals, it is widely accepted that the discrepancy, or error, between actual and predicted reward determines whether learning occurs. Complete evidence for the prediction error theory, however, has not been obtained in any learning systems: Prediction error theory stems from the finding of a blocking phenomenon, but blocking can also be accounted for by other theories, such as the attentional theory. We demonstrated blocking in classical conditioning in crickets and obtained evidence to reject the attentional theory. To obtain further evidence supporting the prediction error theory and rejecting alternative theories, we constructed a neural model to match the prediction error theory, by modifying our previous model of learning in crickets, and we tested a prediction from the model: the model predicts that pharmacological intervention of octopaminergic transmission during appetitive conditioning impairs learning but not formation of reward prediction itself, and it thus predicts no learning in subsequent training. We observed such an “auto-blocking”, which could be accounted for by the prediction error theory but not by other competitive theories to account for blocking. This study unambiguously demonstrates validity of the prediction error theory in associative learning. PMID:25754125

  14. Critical evidence for the prediction error theory in associative learning.

    PubMed

    Terao, Kanta; Matsumoto, Yukihisa; Mizunami, Makoto

    2015-03-10

    In associative learning in mammals, it is widely accepted that the discrepancy, or error, between actual and predicted reward determines whether learning occurs. Complete evidence for the prediction error theory, however, has not been obtained in any learning systems: Prediction error theory stems from the finding of a blocking phenomenon, but blocking can also be accounted for by other theories, such as the attentional theory. We demonstrated blocking in classical conditioning in crickets and obtained evidence to reject the attentional theory. To obtain further evidence supporting the prediction error theory and rejecting alternative theories, we constructed a neural model to match the prediction error theory, by modifying our previous model of learning in crickets, and we tested a prediction from the model: the model predicts that pharmacological intervention of octopaminergic transmission during appetitive conditioning impairs learning but not formation of reward prediction itself, and it thus predicts no learning in subsequent training. We observed such an "auto-blocking", which could be accounted for by the prediction error theory but not by other competitive theories to account for blocking. This study unambiguously demonstrates validity of the prediction error theory in associative learning.

  15. Imaginary Companions and Peer Acceptance

    ERIC Educational Resources Information Center

    Gleason, Tracy R.

    2004-01-01

    Early research on imaginary companions suggests that children who create them do so to compensate for poor social relationships. Consequently, the peer acceptance of children with imaginary companions was compared to that of their peers. Sociometrics were conducted on 88 preschool-aged children; 11 had invisible companions, 16 had personified…

  16. Acceptance of Others (Number Form).

    ERIC Educational Resources Information Center

    Masters, James R.; Laverty, Grace E.

    As part of the instrumentation to assess the effectiveness of the Schools Without Failure (SWF) program in 10 elementary schools in the New Castle, Pa. School District, the Acceptance of Others (Number Form) was prepared to determine pupils' attitudes toward classmates. Given a list of all class members, pupils are asked to circle a number from 1…

  17. W-025, acceptance test report

    SciTech Connect

    Roscha, V.

    1994-10-04

    This acceptance test report (ATR) has been prepared to establish the results of the field testing conducted on W-025, part of the RMW Land Disposal Facility, to demonstrate that the electrical/instrumentation systems functioned as intended by design.

  18. Euthanasia Acceptance: An Attitudinal Inquiry.

    ERIC Educational Resources Information Center

    Klopfer, Fredrick J.; Price, William F.

    The study presented was conducted to examine potential relationships between attitudes regarding the dying process, including acceptance of euthanasia, and other attitudinal or demographic attributes. The data of the survey was comprised of responses given by 331 respondents to a door-to-door interview. Results are discussed in terms of preferred…

  19. Helping Our Children Accept Themselves.

    ERIC Educational Resources Information Center

    Gamble, Mae

    1984-01-01

    Parents of a child with muscular dystrophy recount their reactions to learning of the diagnosis, their gradual acceptance, and their son's resistance, which was gradually lessened when he was provided with more information and treated more normally as a member of the family. (CL)

  20. Acceptance and Commitment Therapy: Introduction

    ERIC Educational Resources Information Center

    Twohig, Michael P.

    2012-01-01

    This is the introductory article to a special series in Cognitive and Behavioral Practice on Acceptance and Commitment Therapy (ACT). Instead of each article herein reviewing the basics of ACT, this article contains that review. This article provides a description of where ACT fits within the larger category of cognitive behavior therapy (CBT):…

  1. Who accepts first aid training?

    PubMed

    Pearn, J; Dawson, B; Leditschke, F; Petrie, G; Nixon, J

    1980-09-01

    The percentage of individuals trained in first aid skills in the general community is inadequate. We report here a study of the factors which influence motivation to accept voluntary training in first aid. A group of 700 randomly selected owners of inground swimming pools (a parental high-risk group) was offered a course of formal first aid instruction. Nine per cent attended the offered training course. The time commitment involved in traditional courses (eight training nights spread over four weeks) is not a deterrent: the same percentage accepted such courses as accepted a course of one night's instruction. Cost is an important deterrent, with consumer resistance rising above 15 cost units (one cost unit = the price of a loaf of bread). The level of competent first aid training within the community can be raised by (a) keeping to traditional course content, and (b) ensuring a higher acceptance rate of first aid courses through a new approach to publicity campaigns that convinces prospective students of the real worth of first aid training. Questions concerning who should be taught first aid, and factors influencing motivation, are discussed.

  2. Improving medication administration error reporting systems. Why do errors occur?

    PubMed

    Wakefield, B J; Wakefield, D S; Uden-Holman, T

    2000-01-01

    Monitoring medication administration errors (MAE) is often included as part of the hospital's risk management program. While observation of actual medication administration is the most accurate way to identify errors, hospitals typically rely on voluntary incident reporting processes. Although incident reporting systems are more economical than other methods of error detection, incident reporting can also be a time-consuming process depending on the complexity or "user-friendliness" of the reporting system. Accurate incident reporting systems are also dependent on the ability of the practitioner to: 1) recognize an error has actually occurred; 2) believe the error is significant enough to warrant reporting; and 3) overcome the embarrassment of having committed a MAE and the fear of punishment for reporting a mistake (either one's own or another's mistake).

  3. Predictive error analysis for a water resource management model

    NASA Astrophysics Data System (ADS)

    Gallagher, Mark; Doherty, John

    2007-02-01

    In calibrating a model, a set of parameters is assigned to the model which will be employed for the making of all future predictions. If these parameters are estimated through solution of an inverse problem, formulated to be properly posed through either pre-calibration or mathematical regularisation, then solution of this inverse problem will, of necessity, lead to a simplified parameter set that omits the details of reality, while still fitting historical data acceptably well. Furthermore, estimates of parameters so obtained will be contaminated by measurement noise. Both of these phenomena will lead to errors in predictions made by the model, with the potential for error increasing with the hydraulic property detail on which the prediction depends. Integrity of model usage demands that model predictions be accompanied by some estimate of the possible errors associated with them. The present paper applies theory developed in a previous work to the analysis of predictive error associated with a real world, water resource management model. The analysis offers many challenges, including the fact that the model is a complex one that was partly calibrated by hand. Nevertheless, it is typical of models which are commonly employed as the basis for the making of important decisions, and for which such an analysis must be made. The potential errors associated with point-based and averaged water level and creek inflow predictions are examined, together with the dependence of these errors on the amount of averaging involved. Error variances associated with predictions made by the existing model are compared with "optimized error variances" that could have been obtained had calibration been undertaken in such a way as to minimize predictive error variance. The contributions by different parameter types to the overall error variance of selected predictions are also examined.

  4. Frequency analysis of nonlinear oscillations via the global error minimization

    NASA Astrophysics Data System (ADS)

    Kalami Yazdi, M.; Hosseini Tehrani, P.

    2016-06-01

    The capacity and effectiveness of a modified variational approach, namely global error minimization (GEM), is illustrated in this study. For this purpose, the free oscillations of a rod rocking on a cylindrical surface and the Duffing-harmonic oscillator are treated. In order to validate and exhibit the merit of the method, the obtained result is compared with both the exact frequency and the outcome of other well-known analytical methods. The comparison reveals that the first-order approximation leads to an acceptable relative error, especially for large initial conditions. The procedure can promisingly be applied to conservative nonlinear problems.
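
    The flavor of the method fits in a few lines. The sketch below applies a one-term GEM approximation x(t) = A cos(wt) to the Duffing-harmonic oscillator x'' + x^3/(1 + x^2) = 0 and minimizes the mean squared residual over one period; the ansatz and quadrature are our choices, not the paper's exact formulation.

        import numpy as np
        from scipy.optimize import minimize_scalar

        A = 10.0                                   # large initial amplitude, x(0) = A

        def global_error(w):
            t = np.linspace(0.0, 2.0 * np.pi / w, 2001)   # one period
            x = A * np.cos(w * t)
            xdd = -A * w * w * np.cos(w * t)
            resid = xdd + x**3 / (1.0 + x**2)      # residual of the equation of motion
            return float(np.mean(resid**2))

        res = minimize_scalar(global_error, bounds=(0.1, 2.0), method="bounded")
        print(f"GEM frequency for A={A}: {res.x:.4f} (tends to 1 as A grows)")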

  5. Reducing number entry errors: solving a widespread, serious problem.

    PubMed

    Thimbleby, Harold; Cairns, Paul

    2010-10-01

    Number entry is ubiquitous: it is required in many fields including science, healthcare, education, government, mathematics and finance. People entering numbers can be expected to make errors, but shockingly few systems make any effort to detect, block or otherwise manage errors. Worse, errors may be ignored but processed in arbitrary ways, with unintended results. A standard class of error (defined in the paper) is an 'out by 10 error', which is easily made by miskeying a decimal point or a zero. In safety-critical domains, such as drug delivery, out by 10 errors generally have adverse consequences. Here, we expose the extent of the problem of numeric errors in a very wide range of systems. An analysis of better error management is presented: under reasonable assumptions, we show that the probability of out by 10 errors can be halved by better user interface design. We provide a demonstration user interface to show that the approach is practical. "To kill an error is as good a service as, and sometimes even better than, the establishing of a new truth or fact" (Charles Darwin 1879 [2008], p. 229).
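
    One concrete defence is to reject, rather than silently reinterpret, syntactically ambiguous keying. The sketch below is our illustration, not the paper's interface: it blocks a few patterns commonly implicated in out-by-10 errors, such as doubled decimal points, trailing points and stray leading zeros.

        import re

        VALID = re.compile(r"^(0|[1-9]\d*)(\.\d+)?$")   # forbids "..", "05", "5."

        def parse_dose(keyed: str) -> float:
            if not VALID.match(keyed):
                raise ValueError(f"blocked ambiguous entry: {keyed!r}")
            return float(keyed)

        for entry in ["0.5", "5", "0..5", "05", "5."]:
            try:
                print(entry, "->", parse_dose(entry))
            except ValueError as err:
                print(err)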

  6. Inductively Coupled Plasma Mass Spectrometry Uranium Error Propagation

    SciTech Connect

    Hickman, D P; Maclean, S; Shepley, D; Shaw, R K

    2001-07-01

    The Hazards Control Department at Lawrence Livermore National Laboratory (LLNL) uses Inductively Coupled Plasma Mass Spectrometer (ICP/MS) technology to analyze uranium in urine. The ICP/MS used by the Hazards Control Department is a Perkin-Elmer Elan 6000 ICP/MS. The Department of Energy Laboratory Accreditation Program requires that the total error be assessed for bioassay measurements. A previous evaluation of the errors associated with the ICP/MS measurement of uranium demonstrated a ±9.6% error in the range of 0.01 to 0.02 µg/l. However, the propagation of total error for concentrations above and below this level has heretofore been undetermined. This document is an evaluation of the errors associated with the current LLNL ICP/MS method for a more expanded range of uranium concentrations.
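
    The usual machinery for such an assessment is propagation of independent relative errors in quadrature. The sketch below shows the form of the calculation only; the component values are placeholders, not the LLNL method's actual error budget.

        import math

        def total_relative_error(*components):
            """Combine independent relative (1-sigma) errors in quadrature."""
            return math.sqrt(sum(c * c for c in components))

        # e.g. counting statistics, calibration, dilution, instrument drift
        print(f"total relative error = {total_relative_error(0.06, 0.05, 0.03, 0.04):.1%}")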

  7. Error coding simulations in C

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1994-01-01

    When data is transmitted through a noisy channel, errors are produced within the data, rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes the CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2(exp 8)) with interleave depth of 5 as the outermost code, a (7, 1/2) convolutional code as an inner code and the CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The signal-to-noise ratio required to achieve a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst error correction capability is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.
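
    The role of the depth-5 interleaver is easy to demonstrate: consecutive channel symbols are spread across codewords, so a channel burst appears to each Reed-Solomon decoder as only a few scattered symbol errors. A minimal sketch, with integer symbols standing in for RS codeword symbols:

        import numpy as np

        DEPTH, N = 5, 255                      # interleave depth, RS codeword length

        def interleave(symbols):               # write by rows, read by columns
            return symbols.reshape(DEPTH, N).T.flatten()

        def deinterleave(symbols):
            return symbols.reshape(N, DEPTH).T.flatten()

        data = np.arange(DEPTH * N)
        channel = interleave(data)
        channel[100:120] = -1                  # a 20-symbol burst on the channel
        received = deinterleave(channel)

        for cw in range(DEPTH):                # the burst lands as ~4 errors per codeword
            errs = int(np.sum(received[cw*N:(cw+1)*N] == -1))
            print(f"codeword {cw}: {errs} symbol errors")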

  8. Error coding simulations in C

    NASA Astrophysics Data System (ADS)

    Noble, Viveca K.

    1994-10-01

    When data is transmitted through a noisy channel, errors are produced within the data, rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes the CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2(exp 8)) with interleave depth of 5 as the outermost code, a (7, 1/2) convolutional code as an inner code and the CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The signal-to-noise ratio required to achieve a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst error correction capability is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.

  9. Passport officers' errors in face matching.

    PubMed

    White, David; Kemp, Richard I; Jenkins, Rob; Matheson, Michael; Burton, A Mike

    2014-01-01

    Photo-ID is widely used in security settings, despite research showing that viewers find it very difficult to match unfamiliar faces. Here we test participants with specialist experience and training in the task: passport-issuing officers. First, we ask officers to compare photos to live ID-card bearers, and observe high error rates, including 14% false acceptance of 'fraudulent' photos. Second, we compare passport officers with a set of student participants, and find equally poor levels of accuracy in both groups. Finally, we observe that passport officers show no performance advantage over the general population on a standardised face-matching task. Across all tasks, we observe very large individual differences: while average performance of passport staff was poor, some officers performed very accurately--though this was not related to length of experience or training. We propose that improvements in security could be made by emphasising personnel selection.

  10. An adaptive error-resilient video encoder

    NASA Astrophysics Data System (ADS)

    Cheng, Liang; El Zarki, Magda

    2003-06-01

    When designing an encoder for a real-time video application over a wireless channel, we must take into consideration the unpredictable fluctuation of the quality of the channel and its impact on the transmitted video data. This uncertainty motivates the development of an adaptive video encoding mechanism that can compensate for the infidelity caused by data loss and/or by post-processing (error concealment) at the decoder. In this paper, we first explore the major factors that cause quality degradation. We then propose an adaptive progressive replenishment algorithm for a packet loss rate (PLR) feedback enabled system. Assuming the availability of a feedback channel, we discuss a video quality assessment method which allows the encoder to be aware of the decoder-side perceptual quality. Finally, we present a novel dual-feedback mechanism that guarantees an acceptable level of quality at the receiver side with a modest increase in the complexity of the encoder.

  11. Passport Officers’ Errors in Face Matching

    PubMed Central

    White, David; Kemp, Richard I.; Jenkins, Rob; Matheson, Michael; Burton, A. Mike

    2014-01-01

    Photo-ID is widely used in security settings, despite research showing that viewers find it very difficult to match unfamiliar faces. Here we test participants with specialist experience and training in the task: passport-issuing officers. First, we ask officers to compare photos to live ID-card bearers, and observe high error rates, including 14% false acceptance of ‘fraudulent’ photos. Second, we compare passport officers with a set of student participants, and find equally poor levels of accuracy in both groups. Finally, we observe that passport officers show no performance advantage over the general population on a standardised face-matching task. Across all tasks, we observe very large individual differences: while average performance of passport staff was poor, some officers performed very accurately – though this was not related to length of experience or training. We propose that improvements in security could be made by emphasising personnel selection. PMID:25133682

  12. Human Error: A Concept Analysis

    NASA Technical Reports Server (NTRS)

    Hansen, Frederick D.

    2007-01-01

    Human error is the subject of research in almost every industry and profession of our times. This term is part of our daily language and intuitively understood by most people; however, it would be premature to assume that everyone's understanding of human error is the same. For example, human error is used to describe the outcome or consequence of human action, the causal factor of an accident, deliberate violations, and the actual action taken by a human being. As a result, researchers rarely agree on either a specific definition of human error or how to prevent it. The purpose of this article is to explore the specific concept of human error using Concept Analysis as described by Walker and Avant (1995). The concept of human error is examined as currently used in the literature of a variety of industries and professions. Defining attributes and examples of model, borderline, and contrary cases are described. The antecedents and consequences of human error are also discussed and a definition of human error is offered.

  13. Explaining Errors in Children's Questions

    ERIC Educational Resources Information Center

    Rowland, Caroline F.

    2007-01-01

    The ability to explain the occurrence of errors in children's speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that,…

  14. Dual Processing and Diagnostic Errors

    ERIC Educational Resources Information Center

    Norman, Geoff

    2009-01-01

    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical,…

  15. Quantifying error distributions in crowding.

    PubMed

    Hanus, Deborah; Vul, Edward

    2013-03-22

    When multiple objects are in close proximity, observers have difficulty identifying them individually. Two classes of theories aim to account for this crowding phenomenon: spatial pooling and spatial substitution. Variations of these accounts predict different patterns of errors in crowded displays. Here we aim to characterize the kinds of errors that people make during crowding by comparing a number of error models across three experiments in which we manipulate flanker spacing, display eccentricity, and precueing duration. We find that both spatial intrusions and individual letter confusions play a considerable role in errors. Moreover, we find no evidence that a naïve pooling model that predicts errors based on a nonadditive combination of target and flankers explains errors better than an independent intrusion model (indeed, in our data, an independent intrusion model is slightly, but significantly, better). Finally, we find that manipulating trial difficulty in any way (spacing, eccentricity, or precueing) produces homogenous changes in error distributions. Together, these results provide quantitative baselines for predictive models of crowding errors, suggest that pooling and spatial substitution models are difficult to tease apart, and imply that manipulations of crowding all influence a common mechanism that impacts subject performance.

  16. Children's Scale Errors with Tools

    ERIC Educational Resources Information Center

    Casler, Krista; Eshleman, Angelica; Greene, Kimberly; Terziyan, Treysi

    2011-01-01

    Children sometimes make "scale errors," attempting to interact with tiny object replicas as though they were full size. Here, we demonstrate that instrumental tools provide special insight into the origins of scale errors and, moreover, into the broader nature of children's purpose-guided reasoning and behavior with objects. In Study 1, 1.5- to…

  17. Challenge and error: critical events and attention-related errors.

    PubMed

    Cheyne, James Allan; Carriere, Jonathan S A; Solman, Grayden J F; Smilek, Daniel

    2011-12-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error↔attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention lapses; resource-depleting cognitions interfering with attention to subsequent task challenges. Attention lapses lead to errors, and errors themselves are a potent consequence often leading to further attention lapses potentially initiating a spiral into more serious errors. We investigated this challenge-induced error↔attention-lapse model using the Sustained Attention to Response Task (SART), a GO-NOGO task requiring continuous attention and response to a number series and withholding of responses to a rare NOGO digit. We found response speed and increased commission errors following task challenges to be a function of temporal distance from, and prior performance on, previous NOGO trials. We conclude by comparing and contrasting the present theory and findings to those based on choice paradigms and argue that the present findings have implications for the generality of conflict monitoring and control models.

  18. Human error in recreational boating.

    PubMed

    McKnight, A James; Becker, Wayne W; Pettit, Anthony J; McKnight, A Scott

    2007-03-01

    Each year over 600 people die and more than 4000 are reported injured in recreational boating accidents. As with most other accidents, human error is the major contributor. U.S. Coast Guard reports of 3358 accidents were analyzed to identify errors in each of the boat types by which statistics are compiled: auxiliary (motor) sailboats, cabin motorboats, canoes and kayaks, house boats, personal watercraft, open motorboats, pontoon boats, row boats, sail-only boats. The individual errors were grouped into categories on the basis of similarities in the behavior involved. Those presented here are the categories accounting for at least 5% of all errors when summed across boat types. The most revealing and significant finding is the extent to which the errors vary across types. Since boating is carried out with one or two types of boats for long periods of time, effective accident prevention measures, including safety instruction, need to be geared to individual boat types.

  19. Angle interferometer cross axis errors

    SciTech Connect

    Bryan, J.B.; Carter, D.L.; Thompson, S.L.

    1994-01-01

    Angle interferometers are commonly used to measure surface plate flatness. An error can exist when the centerline of the double corner cube mirror assembly is not square to the surface plate and the guide bar for the mirror sled is curved. Typical errors can be one to two microns per meter. A similar error can exist in the calibration of rotary tables when the centerline of the double corner cube mirror assembly is not square to the axes of rotation of the angle calibrator and the calibrator axis is not parallel to the rotary table axis. Commercial double corner cube assemblies typically have non-parallelism errors of ten milli-radians between their centerlines and their sides and similar values for non-squareness between their centerlines and end surfaces. The authors have developed a simple method for measuring these errors and correcting them by remachining the reference surfaces.

  20. Onorbit IMU alignment error budget

    NASA Technical Reports Server (NTRS)

    Corson, R. W.

    1980-01-01

    The Star Tracker, Crew Optical Alignment Sight (COAS), and Inertial Measurement Unit (IMU) form a complex navigation system with a multitude of error sources. A complete list of the system errors is presented. The errors were combined in a rational way to yield an estimate of the IMU alignment accuracy for STS-1. The expected standard deviation in the IMU alignment error for STS-1 type alignments was determined to be 72 arc seconds per axis for star tracker alignments and 188 arc seconds per axis for COAS alignments. These estimates are based on current knowledge of the star tracker, COAS, IMU, and navigation base error specifications, and were partially verified by preliminary Monte Carlo analysis.

  1. A theory of human error

    NASA Technical Reports Server (NTRS)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1981-01-01

    Human errors tend to be treated in terms of clinical and anecdotal descriptions, from which remedial measures are difficult to derive. Correction of the sources of human error requires an attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A comprehensive analytical theory of the cause-effect relationships governing propagation of human error is indispensable to a reconstruction of the underlying and contributing causes. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation, maritime, automotive, and process control operations is highlighted. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  2. Error decomposition and estimation of inherent optical properties.

    PubMed

    Salama, Mhd Suhyb; Stein, Alfred

    2009-09-10

    We describe a methodology to quantify and separate the errors of inherent optical properties (IOPs) derived from ocean-color model inversion. Their total error is decomposed into three different sources, namely, model approximations and inversion, sensor noise, and atmospheric correction. Prior information on plausible ranges of observation, sensor noise, and inversion goodness-of-fit are employed to derive the posterior probability distribution of the IOPs. The relative contribution of each error component to the total error budget of the IOPs, all being of stochastic nature, is then quantified. The method is validated with the International Ocean Colour Coordinating Group (IOCCG) data set and the NASA bio-Optical Marine Algorithm Data set (NOMAD). The derived errors are close to the known values, with correlation coefficients of 60-90% and 67-90% for the IOCCG and NOMAD data sets, respectively. Model-induced errors inherent to the derived IOPs are between 10% and 57% of the total error, whereas atmospheric-induced errors are in general above 43% and up to 90% for both data sets. The proposed method is applied to synthesized and in situ measured populations of IOPs. The mean relative errors of the derived values are between 2% and 20%. An error table specific to the Medium Resolution Imaging Spectrometer (MERIS) sensor is constructed. It serves as a benchmark to evaluate the performance of the atmospheric correction method and to compute atmospheric-induced errors. Our method has a better performance and is more appropriate for estimating actual errors of ocean-color derived products than previously suggested methods. Moreover, it is generic and can be applied to quantify the error of any derived biogeophysical parameter regardless of the derivation approach used. PMID:19745859

  3. Estimating errors in least-squares fitting

    NASA Technical Reports Server (NTRS)

    Richter, P. H.

    1995-01-01

    While least-squares fitting procedures are commonly used in data analysis and are extensively discussed in the literature devoted to this subject, the proper assessment of errors resulting from such fits has received relatively little attention. The present work considers statistical errors in the fitted parameters, as well as in the values of the fitted function itself, resulting from random errors in the data. Expressions are derived for the standard error of the fit, as a function of the independent variable, for the general nonlinear and linear fitting problems. Additionally, closed-form expressions are derived for some examples commonly encountered in the scientific and engineering fields, namely ordinary polynomial and Gaussian fitting functions. These results have direct application to the assessment of the antenna gain and system temperature characteristics, in addition to a broad range of problems in data analysis. The effects of the nature of the data and the choice of fitting function on the ability to accurately model the system under study are discussed, and some general rules are deduced to assist workers intent on maximizing the amount of information obtained from a given set of measurements.

  4. Error diffusion with a more symmetric error distribution

    NASA Astrophysics Data System (ADS)

    Fan, Zhigang

    1994-05-01

    In this paper a new error diffusion algorithm is presented that effectively eliminates the 'worm' artifacts appearing in the standard methods. The new algorithm processes each scanline of the image in two passes, a forward pass followed by a backward one. This enables the error made at one pixel to be propagated to all the 'future' pixels. A much more symmetric error distribution is achieved than that of the standard methods. The frequency response of the noise shaping filter associated with the new algorithm is mirror-symmetric in magnitude.
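
    For reference, a minimal one-dimensional sketch of standard single-pass error diffusion, whose strictly forward (and hence asymmetric) error propagation is what the two-pass scheme above is designed to symmetrize; the threshold and weights here are illustrative, not the paper's.

    ```python
    import numpy as np

    def diffuse_row(row):
        """Quantize each pixel in one forward pass, pushing the residual right."""
        x = row.astype(float).copy()
        out = np.empty_like(x)
        for i in range(x.size):
            out[i] = 255.0 if x[i] >= 128.0 else 0.0
            err = x[i] - out[i]
            if i + 1 < x.size:
                x[i + 1] += err  # the error reaches only 'future' pixels
        return out

    print(diffuse_row(np.full(16, 100.0)))  # dither pattern averaging ~100
    ```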

  5. Errors as allies: error management training in health professions education.

    PubMed

    King, Aimee; Holder, Michael G; Ahmed, Rami A

    2013-06-01

    This paper adopts methods from the organisational team training literature to outline how health professions education can improve patient safety. We argue that health educators can improve training quality by intentionally encouraging errors during simulation-based team training. Preventable medical errors are inevitable, but encouraging errors in low-risk settings like simulation can allow teams to have better emotional control and foresight to manage the situation if it occurs again with live patients. Our paper outlines an innovative approach for delivering team training.

  6. Predicting the acceptance of advanced rider assistance systems.

    PubMed

    Huth, Véronique; Gelau, Christhard

    2013-01-01

    The strong prevalence of human error as a crash causation factor in motorcycle accidents calls for countermeasures that help tackle this issue. Advanced rider assistance systems pursue this goal, providing the riders with support and thus contributing to the prevention of crashes. However, the systems can only enhance riding safety if the riders use them. For this reason, acceptance is a decisive aspect to be considered in the development process of such systems. In order to be able to improve behavioural acceptance, the factors that influence the intention to use the system need to be identified. This paper examines the particularities of motorcycle riding and the characteristics of this user group that should be considered when predicting the acceptance of advanced rider assistance systems. Founded on theories predicting behavioural intention, the acceptance of technologies and the acceptance of driver support systems, a model of the acceptance of advanced rider assistance systems is proposed, including the perceived safety when riding without support, the interface design and the social norm as determinants of the usage intention. Since actual usage cannot be measured in the development stage of the systems, the willingness to have the system installed on one's own motorcycle and the willingness to pay for the system are analyzed, constituting relevant conditions that allow for actual usage at a later stage. Its validation with the results from user tests on four advanced rider assistance systems confirms the social norm and the interface design as powerful predictors of the acceptance of ARAS, while the extent of perceived safety when riding without support did not have any predictive value in the present study.

  7. Accepting the T3D

    SciTech Connect

    Rich, D.O.; Pope, S.C.; DeLapp, J.G.

    1994-10-01

    In April, a 128 PE Cray T3D was installed at Los Alamos National Laboratory's Advanced Computing Laboratory as part of the DOE's High-Performance Parallel Processor Program (H4P). In conjunction with CRI, the authors implemented a 30 day acceptance test. The test was constructed in part to help them understand the strengths and weaknesses of the T3D. In this paper, they briefly describe the H4P and its goals. They discuss the design and implementation of the T3D acceptance test and detail issues that arose during the test. They conclude with a set of system requirements that must be addressed as the T3D system evolves.

  8. Sweeteners: consumer acceptance in tea.

    PubMed

    Sprowl, D J; Ehrcke, L A

    1984-09-01

    Sucrose, fructose, aspartame, and saccharin were compared for consumer preference, aftertaste, and cost to determine acceptability of the sweeteners. A 23-member taste panel evaluated tea samples for preference and aftertaste. Mean retail costs of the sweeteners were calculated and adjusted to take sweetening power into consideration. Sucrose was the least expensive and most preferred sweetener. No significant difference in preference for fructose and aspartame was found, but both sweeteners were rated significantly lower than sucrose. Saccharin was the most disliked sweetener. Fructose was the most expensive sweetener and aspartame the next most expensive. Scores for aftertaste followed the same pattern as those for preference. Thus, a strong, unpleasant aftertaste seems to be associated with a dislike for a sweetener. From the results of this study, it seems that there is no completely acceptable low-calorie substitute for sucrose available to consumers.

  9. Acceptability of reactors in space

    SciTech Connect

    Buden, D.

    1981-01-01

    Reactors are the key to our future expansion into space. However, there has been some confusion in the public as to whether they are a safe and acceptable technology for use in space. The answer to these questions is explored. The US position is that when reactors are the preferred technical choice, that they can be used safely. In fact, it does not appear that reactors add measurably to the risk associated with the Space Transportation System.

  10. Acceptability of reactors in space

    SciTech Connect

    Buden, D.

    1981-04-01

    Reactors are the key to our future expansion into space. However, there has been some confusion in the public as to whether they are a safe and acceptable technology for use in space. The answer to these questions is explored. The US position is that when reactors are the preferred technical choice, that they can be used safely. In fact, it does not appear that reactors add measurably to the risk associated with the Space Transportation System.

  11. Designing to Control Flight Crew Errors

    NASA Technical Reports Server (NTRS)

    Schutte, Paul C.; Willshire, Kelli F.

    1997-01-01

    It is widely accepted that human error is a major contributing factor in aircraft accidents. There has been a significant amount of research into why these errors occur, and many reports state that the design of the flight deck can actually predispose humans to err. This research has led to calls for changes in design according to human factors and human-centered principles. The National Aeronautics and Space Administration's (NASA) Langley Research Center has initiated an effort to design a human-centered flight deck from a clean slate (i.e., without the constraints of existing designs). The effort will be based on recent research in human-centered design philosophy and mission management categories. This design will match the human's model of the mission and function of the aircraft to reduce unnatural or non-intuitive interfaces. The products of this effort will be a flight deck design description (including training and procedures), a cross-reference or paper trail back to design hypotheses, and an evaluation of the design. The present paper discusses the philosophy, process, and status of this design effort.

  12. 48 CFR 12.402 - Acceptance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Acceptance. 12.402 Section... Acceptance. (a) The acceptance paragraph in 52.212-4 is based upon the assumption that the Government will rely on the contractor's assurances that the commercial item tendered for acceptance conforms to...

  13. Laser Ranging Experiment on Lunar Reconnaissance Orbiter: Clocks and Ranges

    NASA Astrophysics Data System (ADS)

    Mao, D.; Rowlands, D. D.; McGarry, J.; Zuber, M. T.; Smith, D. E.; Torrence, M. H.; Neumann, G. A.; Mazarico, E.; Sun, X.; Zagwodzki, T. W.; Cavanaugh, J. F.; Ramos-Izquierdo, L.

    2010-12-01

    Accurate ranges from Earth to the Lunar Reconnaissance Orbiter (LRO) spacecraft Laser Ranging (LR) system supplement the precision orbit determination (POD) of LRO. LRO is tracked by ten LR stations from the International Laser Ranging Service (ILRS), using H-maser, GPS-steered Rb, and Cs standard oscillators as reference clocks. The LR system routinely makes one-way range measurements via laser time-of-flight from Earth to LRO. Uplink photons are received by a telescope mounted on the high-gain antenna on LRO, transferred through a fiber optic cable to the Lunar Orbiter Laser Altimeter (LOLA), and time-tagged by the spacecraft clock. The range from the LR Earth station to LRO is derived from paired outgoing and received times. Accurate ranges can only be obtained after solving for both the spacecraft and ground station clock errors. The drift rate and aging rate of the LRO clock are calculated from data provided by the primary LR station, NASA's Next Generation Satellite Laser Ranging System (NGSLR) in Greenbelt, Maryland. The results confirm the LRO clock oscillator mid- to long-term stability measured during ground testing. These rates also agree well with those determined through POD. Simultaneous and near-simultaneous ranging to LRO from multiple LR stations in America, Europe, and Australia has been successfully achieved within a 10 hour window. Data analysis of these ranging experiments allows for precision modeling of the clock behaviors of each LR ground station and characterization of the station ground fire times.
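
    As an illustration of the clock-solving step described above, a minimal sketch (not the LRO/LOLA production pipeline): a one-way range follows from a paired ground fire time and spacecraft receive time-tag once the spacecraft clock's offset, drift, and aging terms are known; the quadratic clock model and all numbers below are assumptions.

    ```python
    # A minimal sketch: recover a one-way range from a paired ground fire
    # time and spacecraft receive time-tag, correcting the spacecraft clock
    # with an assumed quadratic model (offset + drift + aging).
    C = 299_792_458.0  # speed of light, m/s

    def one_way_range(t_fire_utc, t_rx_sc, offset, drift, aging, t_ref):
        """All times in seconds; clock terms are referenced to epoch t_ref."""
        dt = t_rx_sc - t_ref
        # map the spacecraft clock reading back to UTC
        t_rx_utc = t_rx_sc - (offset + drift * dt + 0.5 * aging * dt ** 2)
        return C * (t_rx_utc - t_fire_utc)  # one-way light time -> meters

    # hypothetical numbers: ~1.28 s light time, 1 ms clock offset, small drift
    print(one_way_range(0.0, 1.282, offset=1e-3, drift=2e-9, aging=0.0,
                        t_ref=0.0) / 1e3, "km")
    ```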

  14. Computerized analysis of error patterns in digit span recall.

    PubMed

    Woods, David L; Herron, T J; Yund, E W; Hink, R F; Kishiyama, M M; Reed, Bruce

    2011-08-01

    We analyzed error patterns during digit span (DS) testing in four experiments. In Experiment 1, error patterns analyzed from a community sample of 427 subjects revealed strong primacy and recency effects. Subjects with shorter DSs showed an increased incidence of transposition errors in comparison with other error types and a greater incidence of multiple errors on incorrect trials. Experiment 2 investigated 46 young subjects in three test sessions. The results replicated those of Experiment 1 and demonstrated that error patterns of individual subjects were consistent across repeated test administrations. Experiment 3 investigated 40 subjects from Experiment 2 who feigned symptoms of traumatic brain injury (TBI) with 80% of malingering subjects producing digit spans in the abnormal range. A digit span malingering index (DSMI) was developed to detect atypical error patterns in malingering subjects. Overall, 59% of malingering subjects with abnormal digit spans showed DSMIs in the abnormal range and DSMI values correlated significantly with the magnitude of malingering. Experiment 4 compared 29 patients with TBI with a new group of 38 control subjects. The TBI group showed significant reductions in digit span. Overall, 32% of the TBI patients showed DS abnormalities and 11% showed abnormal DSMIs. Computerized error-pattern analysis improves the sensitivity of DS assessment and can assist in the detection of malingering.
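
    A minimal sketch of the kind of error-pattern classification such testing automates; the category rules below are illustrative assumptions, not the published scoring algorithm.

    ```python
    # Compare a recalled digit string against the target and label each
    # mismatch as a transposition, a displaced digit, or a substitution.
    def classify_errors(target: str, response: str) -> list:
        errors = []
        if len(response) < len(target):
            errors.append("omission")
        n = min(len(target), len(response))
        i = 0
        while i < n:
            if target[i] == response[i]:
                i += 1
            elif (i + 1 < n and target[i] == response[i + 1]
                    and target[i + 1] == response[i]):
                errors.append("transposition")
                i += 2  # both positions are explained by one adjacent swap
            elif response[i] in target:
                errors.append("displacement")  # right digit, wrong position
                i += 1
            else:
                errors.append("substitution")  # digit absent from the target
                i += 1
        return errors

    print(classify_errors("61794", "61974"))  # ['transposition']
    ```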

  15. Theoretical analysis of errors when estimating snow distribution through point measurements

    NASA Astrophysics Data System (ADS)

    Trujillo, E.; Lehning, M.

    2015-06-01

    In recent years, marked improvements in our knowledge of the statistical properties of the spatial distribution of snow properties have been achieved thanks to improvements in measuring technologies (e.g., LIDAR, terrestrial laser scanning (TLS), and ground-penetrating radar (GPR)). Despite this, objective and quantitative frameworks for the evaluation of errors in snow measurements have been lacking. Here, we present a theoretical framework for quantitative evaluations of the uncertainty in average snow depth derived from point measurements over a profile section or an area. The error is defined as the expected value of the squared difference between the real mean of the profile/field and the sample mean from a limited number of measurements. The model is tested for one- and two-dimensional survey designs that range from a single measurement to an increasing number of regularly spaced measurements. Using high-resolution (~ 1 m) LIDAR snow depths at two locations in Colorado, we show that the sample errors follow the theoretical behavior. Furthermore, we show how the determination of the spatial location of the measurements can be reduced to an optimization problem for the case of the predefined number of measurements, or to the designation of an acceptable uncertainty level to determine the total number of regularly spaced measurements required to achieve such an error. On this basis, a series of figures are presented as an aid for snow survey design under the conditions described, and under the assumption of prior knowledge of the spatial covariance/correlation properties. With this methodology, better objective survey designs can be accomplished that are tailored to the specific applications for which the measurements are going to be used. The theoretical framework can be extended to other spatially distributed snow variables (e.g., SWE - snow water equivalent) whose statistical properties are comparable to those of snow depth.
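
    A minimal sketch of the paper's error definition on a synthetic correlated profile (the smoothing kernel and depth statistics are made up, not the Colorado LIDAR data): the expected squared difference between the true profile mean and the mean of N regularly spaced point measurements shrinks as N grows.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def correlated_profile(n=2000, corr_len=50.0):
        """Gaussian profile with ~corr_len-point correlation via smoothing."""
        white = rng.standard_normal(n)
        lags = np.arange(-3 * int(corr_len), 3 * int(corr_len) + 1)
        kernel = np.exp(-lags ** 2 / (2 * corr_len ** 2))
        field = np.convolve(white, kernel / kernel.sum(), mode="same")
        return 1.0 + 0.3 * field / field.std()  # snow depth, ~1 m mean

    def expected_sq_error(n_samples, trials=300):
        """E[(profile mean - sample mean)^2] for regularly spaced samples."""
        errs = []
        for _ in range(trials):
            z = correlated_profile()
            idx = np.linspace(0, z.size - 1, n_samples).astype(int)
            errs.append((z.mean() - z[idx].mean()) ** 2)
        return float(np.mean(errs))

    for n in (1, 2, 5, 10, 20, 50):
        print(n, expected_sq_error(n))  # error shrinks as sampling densifies
    ```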

  16. Modeling Patients' Acceptance of Provider-delivered E-health

    PubMed Central

    Wilson, E. Vance; Lankton, Nancy K.

    2004-01-01

    Objective: Health care providers are beginning to deliver a range of Internet-based services to patients; however, it is not clear which of these e-health services patients need or desire. The authors propose that patients' acceptance of provider-delivered e-health can be modeled in advance of application development by measuring the effects of several key antecedents to e-health use and applying models of acceptance developed in the information technology (IT) field. Design: This study tested three theoretical models of IT acceptance among patients who had recently registered for access to provider-delivered e-health. Measurements: An online questionnaire administered items measuring perceptual constructs from the IT acceptance models (intrinsic motivation, perceived ease of use, perceived usefulness/extrinsic motivation, and behavioral intention to use e-health) and five hypothesized antecedents (satisfaction with medical care, health care knowledge, Internet dependence, information-seeking preference, and health care need). Responses were collected and stored in a central database. Results: All tested IT acceptance models performed well in predicting patients' behavioral intention to use e-health. Antecedent factors of satisfaction with provider, information-seeking preference, and Internet dependence uniquely predicted constructs in the models. Conclusion: Information technology acceptance models provide a means to understand which aspects of e-health are valued by patients and how this may affect future use. In addition, antecedents to the models can be used to predict e-health acceptance in advance of system development. PMID:15064290

  17. BFC: correcting Illumina sequencing errors

    PubMed Central

    2015-01-01

    Summary: BFC is a free, fast and easy-to-use sequencing error corrector designed for Illumina short reads. It uses a non-greedy algorithm but still maintains a speed comparable to implementations based on greedy methods. In evaluations on real data, BFC appears to correct more errors with fewer overcorrections in comparison to existing tools. It particularly does well in suppressing systematic sequencing errors, which helps to improve the base accuracy of de novo assemblies. Availability and implementation: https://github.com/lh3/bfc Contact: hengli@broadinstitute.org Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25953801

  18. Neural markers of errors as endophenotypes in neuropsychiatric disorders

    PubMed Central

    Manoach, Dara S.; Agam, Yigal

    2013-01-01

    Learning from errors is fundamental to adaptive human behavior. It requires detecting errors, evaluating what went wrong, and adjusting behavior accordingly. These dynamic adjustments are at the heart of behavioral flexibility and accumulating evidence suggests that deficient error processing contributes to maladaptively rigid and repetitive behavior in a range of neuropsychiatric disorders. Neuroimaging and electrophysiological studies reveal highly reliable neural markers of error processing. In this review, we evaluate the evidence that abnormalities in these neural markers can serve as sensitive endophenotypes of neuropsychiatric disorders. We describe the behavioral and neural hallmarks of error processing, their mediation by common genetic polymorphisms, and impairments in schizophrenia, obsessive-compulsive disorder, and autism spectrum disorders. We conclude that neural markers of errors meet several important criteria as endophenotypes including heritability, established neuroanatomical and neurochemical substrates, association with neuropsychiatric disorders, presence in syndromally-unaffected family members, and evidence of genetic mediation. Understanding the mechanisms of error processing deficits in neuropsychiatric disorders may provide novel neural and behavioral targets for treatment and sensitive surrogate markers of treatment response. Treating error processing deficits may improve functional outcome since error signals provide crucial information for flexible adaptation to changing environments. Given the dearth of effective interventions for cognitive deficits in neuropsychiatric disorders, this represents a potentially promising approach. PMID:23882201

  19. Reducing number entry errors: solving a widespread, serious problem

    PubMed Central

    Thimbleby, Harold; Cairns, Paul

    2010-01-01

    Number entry is ubiquitous: it is required in many fields including science, healthcare, education, government, mathematics and finance. People entering numbers can be expected to make errors, but shockingly few systems make any effort to detect, block or otherwise manage errors. Worse, errors may be ignored but processed in arbitrary ways, with unintended results. A standard class of error (defined in the paper) is an ‘out by 10 error’, which is easily made by miskeying a decimal point or a zero. In safety-critical domains, such as drug delivery, out by 10 errors generally have adverse consequences. Here, we expose the extent of the problem of numeric errors in a very wide range of systems. An analysis of better error management is presented: under reasonable assumptions, we show that the probability of out by 10 errors can be halved by better user interface design. We provide a demonstration user interface to show that the approach is practical. To kill an error is as good a service as, and sometimes even better than, the establishing of a new truth or fact. (Charles Darwin 1879 [2008], p. 229) PMID:20375037
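
    A minimal sketch of the design principle argued for here, namely that interfaces should detect and block malformed keying rather than parse it in arbitrary ways; the well-formedness rules below are illustrative assumptions, not the authors' published interface.

    ```python
    import re

    # accept only unambiguous decimals: no leading zeros, no bare/double dots
    WELL_FORMED = re.compile(r"^(0|[1-9]\d*)(\.\d+)?$")

    def accept_dose(keyed: str) -> float:
        """Return the value only if the keying is unambiguous; else refuse."""
        if not WELL_FORMED.fullmatch(keyed):
            # entries like "5..0", ".5", "5." or "05" trigger re-entry instead
            # of being silently coerced into a possibly out-by-10 value
            raise ValueError(f"re-enter value, ambiguous keying: {keyed!r}")
        return float(keyed)

    print(accept_dose("0.5"))   # 0.5
    print(accept_dose("5.0"))   # 5.0
    # accept_dose("5..0") and accept_dose(".5") raise ValueError
    ```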

  20. Error analysis of large aperture static interference imaging spectrometer

    NASA Astrophysics Data System (ADS)

    Li, Fan; Zhang, Guo

    2015-12-01

    Large Aperture Static Interference Imaging Spectrometer (LASIS) is a new type of spectrometer with a light structure, high spectral linearity, high luminous flux, and a wide spectral range; by overcoming the contradiction between high flux and high stability, it offers important value for scientific studies and applications. However, because its imaging style differs from that of traditional imaging spectrometers, the error laws in the imaging process of LASIS are different as well, and its data processing is correspondingly complicated. In order to improve the accuracy of spectrum detection and to serve quantitative analysis and monitoring of topographical surface features, the error laws of LASIS imaging need to be understood. In this paper, the LASIS errors are classified as interferogram error, radiometric correction error, and spectral inversion error, and each type of error is analyzed and studied. Finally, a case study of Yaogan-14 is presented, in which the interferogram error of LASIS under combined temporal and spatial modulation is analyzed experimentally, together with the errors from the processes of radiometric correction and spectral inversion.

  1. Disentangling timing and amplitude errors in streamflow simulations

    NASA Astrophysics Data System (ADS)

    Seibert, Simon Paul; Ehret, Uwe; Zehe, Erwin

    2016-09-01

    This article introduces an improvement in the Series Distance (SD) approach for the improved discrimination and visualization of timing and magnitude uncertainties in streamflow simulations. SD emulates visual hydrograph comparison by distinguishing periods of low flow and periods of rise and recession in hydrological events. Within these periods, it determines the distance of two hydrographs not between points of equal time but between points that are hydrologically similar. The improvement comprises an automated procedure to emulate visual pattern matching, i.e. the determination of an optimal level of generalization when comparing two hydrographs, a scaled error model which is better applicable across large discharge ranges than its non-scaled counterpart, and "error dressing", a concept to construct uncertainty ranges around deterministic simulations or forecasts. Error dressing includes an approach to sample empirical error distributions by increasing variance contribution, which can be extended from standard one-dimensional distributions to the two-dimensional distributions of combined time and magnitude errors provided by SD. In a case study we apply both the SD concept and a benchmark model (BM) based on standard magnitude errors to a 6-year time series of observations and simulations from a small alpine catchment. Time-magnitude error characteristics for low flow and rising and falling limbs of events were substantially different. Their separate treatment within SD therefore preserves useful information which can be used for differentiated model diagnostics, and which is not contained in standard criteria like the Nash-Sutcliffe efficiency. Construction of uncertainty ranges based on the magnitude of errors of the BM approach and the combined time and magnitude errors of the SD approach revealed that the BM-derived ranges were visually narrower and statistically superior to the SD ranges. This suggests that the combined use of time and magnitude errors to
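
    A minimal sketch of the "error dressing" idea in its simpler one-dimensional (magnitude-only) form, with hypothetical errors; SD's two-dimensional case extends this by drawing combined time-magnitude error pairs.

    ```python
    import numpy as np

    def dress(simulated, past_errors, lo=5, hi=95, n_draws=1000, rng=None):
        """Percentile band from adding resampled past errors to a simulation."""
        if rng is None:
            rng = np.random.default_rng(0)
        draws = rng.choice(past_errors, size=(n_draws, simulated.size))
        ensemble = simulated[None, :] + draws  # dressed realizations
        return (np.percentile(ensemble, lo, axis=0),
                np.percentile(ensemble, hi, axis=0))

    sim = np.linspace(1.0, 5.0, 10)               # deterministic hydrograph
    past = np.array([-0.4, -0.1, 0.0, 0.2, 0.5])  # hypothetical past errors
    lo_band, hi_band = dress(sim, past)
    print(lo_band[0], hi_band[0])  # uncertainty range around the first step
    ```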

  2. Altimeter error sources at the 10-cm performance level

    NASA Technical Reports Server (NTRS)

    Martin, C. F.

    1977-01-01

    Error sources affecting the calibration and operational use of a 10 cm altimeter are examined to determine the magnitudes of current errors and the investigations necessary to reduce them to acceptable bounds. Errors considered include those affecting operational data pre-processing, and those affecting altitude bias determination, with error budgets developed for both. The most significant error sources affecting pre-processing are bias calibration, propagation corrections for the ionosphere, and measurement noise. No ionospheric models are currently validated at the required 10-25% accuracy level. The optimum smoothing to reduce the effects of measurement noise is investigated and found to be on the order of one second, based on the TASC model of geoid undulations. The 10 cm calibrations are found to be feasible only through the use of altimeter passes at very high elevation for a tracking station that tracks very close to the time of the altimeter track, such as a high-elevation pass across the island of Bermuda. By far the largest error source, based on the current state of the art, is the location of the island tracking station relative to mean sea level in the surrounding ocean areas.

  3. Airborne 2 color ranging experiment

    NASA Technical Reports Server (NTRS)

    Millar, Pamela S.; Abshire, James B.; Mcgarry, Jan F.; Zagwodzki, Thomas W.; Pacini, Linda K.

    1993-01-01

    Horizontal variations in the atmospheric refractivity are a limiting error source for many precise laser and radio space geodetic techniques. This experiment was designed to directly measure horizontal variations in atmospheric refractivity, for the first time, by using 2 color laser ranging measurements to an aircraft. The 2 color laser system at the Goddard Optical Research Facility (GORF) ranged to a cooperative laser target package on a T-39 aircraft. Circular patterns which extended from the southern edge of the Washington D.C. Beltway to the southern edge of Baltimore, MD were flown counter clockwise around Greenbelt, MD. Successful acquisition, tracking, and ranging for 21 circular paths were achieved on three flights in August 1992, resulting in over 20,000 two color ranging measurements.

  4. Algorithmic Error Correction of Impedance Measuring Sensors

    PubMed Central

    Starostenko, Oleg; Alarcon-Aquino, Vicente; Hernandez, Wilmar; Sergiyenko, Oleg; Tyrsa, Vira

    2009-01-01

    This paper describes novel design concepts and some advanced techniques proposed for increasing the accuracy of low cost impedance measuring devices without reduction of operational speed. The proposed structural method for algorithmic error correction and iterating correction method provide linearization of transfer functions of the measuring sensor and signal conditioning converter, which contribute the principal additive and relative measurement errors. Some measuring systems have been implemented in order to estimate in practice the performance of the proposed methods. Particularly, a measuring system for analysis of C-V, G-V characteristics has been designed and constructed. It has been tested during technological process control of charge-coupled device CCD manufacturing. The obtained results are discussed in order to define a reasonable range of applied methods, their utility, and performance. PMID:22303177

  5. Cirrus cloud retrieval using infrared sounding data: Multilevel cloud errors

    NASA Technical Reports Server (NTRS)

    Baum, Bryan A.; Wielicki, Bruce A.

    1994-01-01

    In this study we perform an error analysis for cloud-top pressure retrieval using the High-Resolution Infrared Radiometric Sounder (HIRS/2) 15-micron CO2 channels for the two-layer case of transmissive cirrus overlying an overcast, opaque stratiform cloud. This analysis includes standard deviation and bias error due to instrument noise and the presence of two cloud layers, the lower of which is opaque. Instantaneous cloud pressure retrieval errors are determined for a range of cloud amounts (0.1-1.0) and cloud-top pressures (850-250 mb). Large cloud-top pressure retrieval errors are found to occur when a lower opaque layer is present underneath an upper transmissive cloud layer in the satellite field of view (FOV). Errors tend to increase with decreasing upper-cloud effective cloud amount and with decreasing cloud height (increasing pressure). Errors in retrieved upper-cloud pressure result in corresponding errors in derived effective cloud amount. For the case in which a HIRS FOV has two distinct cloud layers, the difference between the retrieved and actual cloud-top pressure is positive in all cases, meaning that the retrieved upper-cloud height is lower than the actual upper-cloud height. In addition, errors in retrieved cloud pressure are found to depend upon the lapse rate between the low-level cloud top and the surface. We examined which sounder channel combinations would minimize the total errors in derived cirrus cloud height caused by instrument noise and by the presence of a lower-level cloud. We find that while the sounding channels that peak between 700 and 1000 mb minimize random errors, the sounding channels that peak at 300-500 mb minimize bias errors. For a cloud climatology, the bias errors are most critical.

  6. Total error vs. measurement uncertainty: revolution or evolution?

    PubMed

    Oosterhuis, Wytze P; Theodorsson, Elvar

    2016-02-01

    The first strategic EFLM conference "Defining analytical performance goals, 15 years after the Stockholm Conference" was held in the autumn of 2014 in Milan. It maintained the Stockholm 1999 hierarchy of performance goals but rearranged them and established five task and finish groups to work on topics related to analytical performance goals, including one on the "total error" theory. Jim Westgard recently wrote a comprehensive overview of performance goals and of the total error theory that is critical of the results and intentions of the Milan 2014 conference. The "total error" theory originated by Jim Westgard and co-workers has a dominating influence on the theory and practice of clinical chemistry but is not accepted in other fields of metrology. The generally accepted uncertainty theory, however, suffers from complex mathematics and perceived impracticability in clinical chemistry. The pros and cons of the total error theory need to be debated, making way for methods that can incorporate all relevant causes of uncertainty when making medical diagnoses and monitoring treatment effects. This development should preferably proceed not as a revolution but as an evolution.

  7. Correcting for particle counting bias error in turbulent flow

    NASA Technical Reports Server (NTRS)

    Edwards, R. V.; Baratuci, W.

    1985-01-01

    Even an ideal seeding device generating particles that exactly follow the flow would still leave a major source of error: a particle counting bias, wherein the probability of measuring a velocity is a function of that velocity. The error in the measured mean can be as much as 25%. Many schemes have been put forward to correct for this error, but there is no universal agreement as to the acceptability of any one method. In particular it is sometimes difficult to know if the assumptions required in the analysis are fulfilled by any particular flow measurement system. To check various correction mechanisms in an ideal way and to gain some insight into how to correct with the fewest initial assumptions, a computer simulation is constructed to simulate laser anemometer measurements in a turbulent flow. That simulator and the results of its use are discussed.
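
    The abstract's simulator is not reproduced here; as a minimal sketch of the bias itself and of one widely used correction (McLaughlin-Tiederman-style inverse-velocity weighting), with made-up flow statistics:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    u_true = rng.normal(10.0, 3.0, 100_000)  # turbulent velocity samples
    u_true = u_true[u_true > 0]              # keep the through-flow side

    # counting bias: faster fluid carries more particles past the probe,
    # so the probability of a measurement is proportional to |u|
    measured = rng.choice(u_true, size=20_000, p=u_true / u_true.sum())

    naive = measured.mean()                       # biased high (~E[u^2]/E[u])
    w = 1.0 / measured                            # inverse-velocity weights
    corrected = np.sum(w * measured) / np.sum(w)  # harmonic-mean estimator
    print(u_true.mean(), naive, corrected)        # corrected ~ true mean
    ```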

  8. Controlling type-1 error rates in whole effluent toxicity testing

    SciTech Connect

    Smith, R.; Johnson, S.C.

    1995-12-31

    A form of variability, called the dose x test interaction, has been found to affect the variability of the mean differences from control in the statistical tests used to evaluate Whole Effluent Toxicity Tests for compliance purposes. Since the dose x test interaction is not included in these statistical tests, the assumed type-1 and type-2 error rates can be incorrect. The accepted type-1 error rate for these tests is 5%. Analysis of over 100 Ceriodaphnia, fathead minnow and sea urchin fertilization tests showed that when the test x dose interaction term was not included in the calculations the type-1 error rate was inflated to as high as 20%. In a compliance setting, this problem may lead to incorrect regulatory decisions. Statistical tests are proposed that properly incorporate the dose x test interaction variance.
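
    A minimal sketch of the inflation mechanism with hypothetical variances, not the paper's dataset: a random dose-by-test deviation shared by all replicates in a group breaks the independence the t-test assumes, so the realized false-positive rate exceeds the nominal 5% even with no true dose effect.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    n_tests, n_reps = 2000, 10
    sd_rep, sd_inter = 1.0, 0.5  # replicate vs dose-x-test standard deviation

    false_pos = 0
    for _ in range(n_tests):
        # no true dose effect; each group gets its own interaction offset
        control = rng.normal(0, sd_rep, n_reps) + rng.normal(0, sd_inter)
        dose = rng.normal(0, sd_rep, n_reps) + rng.normal(0, sd_inter)
        if stats.ttest_ind(control, dose).pvalue < 0.05:
            false_pos += 1
    print(false_pos / n_tests)  # well above the nominal 0.05
    ```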

  9. FORCE: FORtran for Cosmic Errors

    NASA Astrophysics Data System (ADS)

    Colombi, Stéphane; Szapudi, István

    We review the theory of cosmic errors we have recently developed for count-in-cells statistics. The corresponding FORCE package provides a simple and useful way to compute cosmic covariance on factorial moments and cumulants measured in galaxy catalogs.
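
    FORCE itself is FORTRAN; as an illustrative Python analogue of the statistic it propagates errors for, the k-th factorial moment of counts-in-cells:

    ```python
    import numpy as np

    def factorial_moment(counts, k):
        """F_k = <N(N-1)...(N-k+1)> averaged over cells."""
        n = counts.astype(float)
        prod = np.ones_like(n)
        for j in range(k):
            prod *= n - j  # N(N-1)...(N-k+1); vanishes once N < k
        return float(prod.mean())

    cells = np.array([0, 2, 3, 1, 5, 2, 0, 4])  # galaxy counts per cell
    print([factorial_moment(cells, k) for k in (1, 2, 3)])
    ```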

  10. Human errors and measurement uncertainty

    NASA Astrophysics Data System (ADS)

    Kuselman, Ilya; Pennecchi, Francesca

    2015-04-01

    Evaluating the residual risk of human errors in a measurement and testing laboratory, remaining after the error reduction by the laboratory quality system, and quantifying the consequences of this risk for the quality of the measurement/test results are discussed based on expert judgments and Monte Carlo simulations. A procedure for evaluation of the contribution of the residual risk to the measurement uncertainty budget is proposed. Examples are provided using earlier published sets of expert judgments on human errors in pH measurement of groundwater, elemental analysis of geological samples by inductively coupled plasma mass spectrometry, and multi-residue analysis of pesticides in fruits and vegetables. The human error contribution to the measurement uncertainty budget in the examples was not negligible, yet also not dominant. This was assessed as a good risk management result.

  11. Quantile Regression With Measurement Error

    PubMed Central

    Wei, Ying; Carroll, Raymond J.

    2010-01-01

    Regression quantiles can be substantially biased when the covariates are measured with error. In this paper we propose a new method that produces consistent linear quantile estimation in the presence of covariate measurement error. The method corrects the measurement error induced bias by constructing joint estimating equations that simultaneously hold for all the quantile levels. An iterative EM-type estimation algorithm to obtain the solutions to such joint estimation equations is provided. The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a longitudinal study with an unusual measurement error structure. PMID:20305802

  12. Robust characterization of leakage errors

    NASA Astrophysics Data System (ADS)

    Wallman, Joel J.; Barnhill, Marie; Emerson, Joseph

    2016-04-01

    Leakage errors arise when the quantum state leaks out of some subspace of interest, for example, the two-level subspace of a multi-level system defining a computational ‘qubit’, the logical code space of a quantum error-correcting code, or a decoherence-free subspace. Leakage errors pose a distinct challenge to quantum control relative to the more well-studied decoherence errors and can be a limiting factor to achieving fault-tolerant quantum computation. Here we present a scalable and robust randomized benchmarking protocol for quickly estimating the leakage rate due to an arbitrary Markovian noise process on a larger system. We illustrate the reliability of the protocol through numerical simulations.
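
    A minimal sketch of the fitting step a randomized benchmarking protocol of this kind typically ends with; the decay model, the data values, and the reading of the decay parameter are illustrative assumptions, not the paper's exact estimator.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def model(m, A, B, lam):
        """Assumed RB-style decay of subspace population with sequence length."""
        return A + B * lam ** m

    m = np.array([1.0, 5, 10, 20, 50, 100])
    p = np.array([0.995, 0.975, 0.951, 0.905, 0.779, 0.606])  # hypothetical
    (A, B, lam), _ = curve_fit(model, m, p, p0=(0.5, 0.5, 0.99))
    print(f"lam = {lam:.4f}; per-gate leakage ~ {1 - lam:.4f} (assumed model)")
    ```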

  13. Static Detection of Disassembly Errors

    SciTech Connect

    Krishnamoorthy, Nithya; Debray, Saumya; Fligg, Alan K

    2009-10-13

    Static disassembly is a crucial first step in reverse engineering executable files, and there is a considerable body of work in reverse-engineering of binaries, as well as areas such as semantics-based security analysis, that assumes that the input executable has been correctly disassembled. However, disassembly errors, e.g., arising from binary obfuscations, can render this assumption invalid. This work describes a machine-learning-based approach, using decision trees, for statically identifying possible errors in a static disassembly; such potential errors may then be examined more closely, e.g., using dynamic analyses. Experimental results using a variety of input executables indicate that our approach performs well, correctly identifying most disassembly errors with relatively few false positives.

  14. Dual processing and diagnostic errors.

    PubMed

    Norman, Geoff

    2009-09-01

    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical, conscious, and conceptual process, called System 2. Exemplar theories of categorization propose that many category decisions in everyday life are made by unconscious matching to a particular example in memory, and these remain available and retrievable individually. I then review studies of clinical reasoning based on these theories, and show that the two processes are equally effective; System 1, despite its reliance on idiosyncratic, individual experience, is no more prone to cognitive bias or diagnostic error than System 2. Further, I review evidence that instructions directed at encouraging the clinician to explicitly use both strategies can lead to consistent reduction in error rates.

  15. Prospective errors determine motor learning

    PubMed Central

    Takiyama, Ken; Hirashima, Masaya; Nozaki, Daichi

    2015-01-01

    Diverse features of motor learning have been reported by numerous studies, but no single theoretical framework concurrently accounts for these features. Here, we propose a model for motor learning to explain these features in a unified way by extending a motor primitive framework. The model assumes that the recruitment pattern of motor primitives is determined by the predicted movement error of an upcoming movement (prospective error). To validate this idea, we perform a behavioural experiment to examine the model’s novel prediction: after experiencing an environment in which the movement error is more easily predictable, subsequent motor learning should become faster. The experimental results support our prediction, suggesting that the prospective error might be encoded in the motor primitives. Furthermore, we demonstrate that this model has a strong explanatory power to reproduce a wide variety of motor-learning-related phenomena that have been separately explained by different computational models. PMID:25635628

  16. Orbital and Geodetic Error Analysis

    NASA Technical Reports Server (NTRS)

    Felsentreger, T.; Maresca, P.; Estes, R.

    1985-01-01

    Results that previously required several runs are determined in a more computer-efficient manner. Multiple runs are performed only once with GEODYN and stored on tape. ERODYN then performs the matrix partitioning and linear algebra required for each individual error-analysis run.

  17. Multicenter Assessment of Gram Stain Error Rates.

    PubMed

    Samuel, Linoj P; Balada-Llasat, Joan-Miquel; Harrington, Amanda; Cavagnolo, Robert

    2016-06-01

    Gram stains remain the cornerstone of diagnostic testing in the microbiology laboratory for the guidance of empirical treatment prior to availability of culture results. Incorrectly interpreted Gram stains may adversely impact patient care, and yet there are no comprehensive studies that have evaluated the reliability of the technique and there are no established standards for performance. In this study, clinical microbiology laboratories at four major tertiary medical care centers evaluated Gram stain error rates across all nonblood specimen types by using standardized criteria. The study focused on several factors that primarily contribute to errors in the process, including poor specimen quality, smear preparation, and interpretation of the smears. The number of specimens during the evaluation period ranged from 976 to 1,864 specimens per site, and there were a total of 6,115 specimens. Gram stain results were discrepant from culture for 5% of all specimens. Fifty-eight percent of discrepant results were specimens with no organisms reported on Gram stain but significant growth on culture, while 42% of discrepant results had reported organisms on Gram stain that were not recovered in culture. Upon review of available slides, 24% (63/263) of discrepant results were due to reader error, which varied significantly based on site (9% to 45%). The Gram stain error rate also varied between sites, ranging from 0.4% to 2.7%. The data demonstrate a significant variability between laboratories in Gram stain performance and affirm the need for ongoing quality assessment by laboratories. Standardized monitoring of Gram stains is an essential quality control tool for laboratories and is necessary for the establishment of a quality benchmark across laboratories. PMID:26888900

  18. An Empirical State Error Covariance Matrix Orbit Determination Example

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2015-01-01

    …is suspect. In its most straightforward form, the technique only requires supplemental calculations to be added to existing batch estimation algorithms. In the current problem being studied a truth model making use of gravity with spherical, J2 and J4 terms plus a standard exponential type atmosphere with simple diurnal and random walk components is used. The ability of the empirical state error covariance matrix to account for errors is investigated under four scenarios during orbit estimation. These scenarios are: exact modeling under known measurement errors, exact modeling under corrupted measurement errors, inexact modeling under known measurement errors, and inexact modeling under corrupted measurement errors. For this problem a simple analog of a distributed space surveillance network is used. The sensors in this network make only range measurements, with simple normally distributed measurement errors. The sensors are assumed to have full horizon to horizon viewing at any azimuth. For definiteness, an orbit at the approximate altitude and inclination of the International Space Station is used for the study. The comparison analyses of the data involve only total vectors. No investigation of specific orbital elements is undertaken. The total vector analyses look at the chi-square values of the error in the difference between the estimated state and the true modeled state using both the empirical and theoretical error covariance matrices for each scenario.

  19. Interpolation Errors in Spectrum Analyzers

    NASA Technical Reports Server (NTRS)

    Martin, J. L.

    1996-01-01

    To obtain the proper measurement amplitude with a spectrum analyzer, the correct frequency-dependent transducer factor must be added to the voltage measured by the transducer. This report examines how entering transducer factors into a spectrum analyzer can cause significant errors in field amplitude due to the misunderstanding of the analyzer's interpolation methods. It also discusses how to reduce these errors to obtain a more accurate field amplitude reading.
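
    A minimal numeric sketch of the effect described, with a hypothetical transducer-factor table: the same two table points interpolated on a linear versus a logarithmic frequency axis give corrections several dB apart, which goes directly into the reported field amplitude.

    ```python
    import math

    f1, f2 = 10e6, 100e6  # table frequencies, Hz
    k1, k2 = 20.0, 40.0   # transducer factors at f1, f2, in dB
    f = 30e6              # measurement frequency

    lin = k1 + (k2 - k1) * (f - f1) / (f2 - f1)
    log = k1 + (k2 - k1) * math.log10(f / f1) / math.log10(f2 / f1)
    print(f"linear axis: {lin:.2f} dB, log axis: {log:.2f} dB")  # ~24.4 vs ~29.5
    ```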

  1. Relative-Error-Covariance Algorithms

    NASA Technical Reports Server (NTRS)

    Bierman, Gerald J.; Wolff, Peter J.

    1991-01-01

    Two algorithms compute the error covariance of the difference between optimal estimates, based on data acquired during overlapping or disjoint intervals, of the state of a discrete linear system. This provides a quantitative measure of the mutual consistency or inconsistency of the estimates of the states. The relative-error-covariance concept is applied to determine the degree of correlation between trajectories calculated from two overlapping sets of measurements and to construct a real-time test of the consistency of state estimates based upon recently acquired data.

  2. Modelling non-Gaussianity of background and observational errors by the Maximum Entropy method

    NASA Astrophysics Data System (ADS)

    Pires, Carlos; Talagrand, Olivier; Bocquet, Marc

    2010-05-01

    The Best Linear Unbiased Estimator (BLUE) has widely been used in atmospheric-oceanic data assimilation. However, when data errors have non-Gaussian pdfs, the BLUE differs from the absolute Minimum Variance Unbiased Estimator (MVUE), minimizing the mean square analysis error. The non-Gaussianity of errors can be due to the statistical skewness and positiveness of some physical observables (e.g. moisture, chemical species) or due to the nonlinearity of the data assimilation models and observation operators acting on Gaussian errors. Non-Gaussianity of assimilated data errors can be justified from a priori hypotheses or inferred from statistical diagnostics of innovations (observation minus background). Following this rationale, we compute measures of innovation non-Gaussianity, namely its skewness and kurtosis, relating it to: a) the non-Gaussianity of the individual errors themselves, b) the correlation between nonlinear functions of errors, and c) the heteroscedasticity of errors within diagnostic samples. Those relationships impose bounds for skewness and kurtosis of errors which are critically dependent on the error variances, thus leading to a necessary tuning of error variances in order to accomplish consistency with innovations. We evaluate the sub-optimality of the BLUE as compared to the MVUE, in terms of excess of error variance, under the presence of non-Gaussian errors. The error pdfs are obtained by the maximum entropy method constrained by error moments up to fourth order, from which the Bayesian probability density function and the MVUE are computed. The impact is higher for skewed extreme innovations and grows on average with the skewness of data errors, especially if those skewnesses have the same sign. Application has been performed to the quality-accepted ECMWF innovations of brightness temperatures of a set of High Resolution Infrared Sounder channels. In this context, the MVUE has led in some extreme cases to a potential reduction of 20-60% error…
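
    A minimal sketch of the first diagnostic step mentioned above, sample skewness and excess kurtosis of innovations; synthetic skewed data stands in for the ECMWF brightness-temperature innovations.

    ```python
    import numpy as np

    def skew_kurt(innov):
        d = innov - innov.mean()
        s2 = np.mean(d ** 2)
        skew = np.mean(d ** 3) / s2 ** 1.5
        kurt = np.mean(d ** 4) / s2 ** 2 - 3.0  # excess kurtosis, 0 if Gaussian
        return skew, kurt

    innov = np.random.default_rng(3).gamma(2.0, 1.0, 10_000) - 2.0
    print(skew_kurt(innov))  # positive skew and heavy tails -> non-Gaussian
    ```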

  3. Medical Error and Moral Luck.

    PubMed

    Hubbeling, Dieneke

    2016-09-01

    This paper addresses the concept of moral luck. Moral luck is discussed in the context of medical error, especially an error of omission that occurs frequently, but only rarely has adverse consequences. As an example, a failure to compare the label on a syringe with the drug chart results in the wrong medication being administered and the patient dies. However, this error may have previously occurred many times with no tragic consequences. Discussions on moral luck can highlight conflicting intuitions. Should perpetrators receive a harsher punishment because of an adverse outcome, or should they be dealt with in the same way as colleagues who have acted similarly, but with no adverse effects? An additional element to the discussion, specifically with medical errors, is that according to the evidence currently available, punishing individual practitioners does not seem to be effective in preventing future errors. The following discussion, using relevant philosophical and empirical evidence, posits a possible solution for the moral luck conundrum in the context of medical error: namely, making a distinction between the duty to make amends and assigning blame. Blame should be assigned on the basis of actual behavior, while the duty to make amends is dependent on the outcome. PMID:26662613

  4. Error image aware content restoration

    NASA Astrophysics Data System (ADS)

    Choi, Sungwoo; Lee, Moonsik; Jung, Byunghee

    2015-12-01

    As the resolution of TV has significantly increased, content consumers have become increasingly sensitive to the subtlest defects in TV content. This rising standard in quality demanded by consumers has posed a new challenge in today's context where the tape-based process has transitioned to the file-based process: the transition necessitated digitalizing old archives, a process which inevitably produces errors such as disordered pixel blocks, scattered white noise, or totally missing pixels. Unsurprisingly, detecting and fixing such errors require a substantial amount of time and human labor to meet the standard demanded by today's consumers. In this paper, we introduce a novel, automated error restoration algorithm which can be applied to different types of classic errors by utilizing adjacent images while preserving the undamaged parts of an error image as much as possible. We tested our method on error images detected by our quality-check system in the KBS (Korean Broadcasting System) video archive. We are also implementing the algorithm as a plugin for a well-known NLE (non-linear editing system), a familiar tool for quality-control agents.

  5. Determining the optimal window length for pattern recognition-based myoelectric control: balancing the competing effects of classification error and controller delay

    PubMed Central

    Smith, Lauren H.; Hargrove, Levi J.; Lock, Blair A.; Kuiken, Todd A.

    2014-01-01

    Pattern recognition–based control of myoelectric prostheses has shown great promise in research environments, but has not been optimized for use in a clinical setting. To explore the relationship between classification error, controller delay, and real-time controllability, 13 able-bodied subjects were trained to operate a virtual upper-limb prosthesis using pattern recognition of electromyogram (EMG) signals. Classification error and controller delay were varied by training different classifiers with a variety of analysis window lengths ranging from 50 to 550 ms and either two or four EMG input channels. Offline analysis showed that classification error decreased with longer window lengths (p < 0.01). Real-time controllability was evaluated with the Target Achievement Control (TAC) Test, which prompted users to maneuver the virtual prosthesis into various target postures. The results indicated that user performance improved with lower classification error (p<0.01) and was reduced with longer controller delay (p<0.01), as determined by the window length. Therefore, both of these effects should be considered when choosing a window length; it may be beneficial to increase the window length if this results in a reduced classification error, despite the corresponding increase in controller delay. For the system employed in this study, the optimal window length was found to be between 150 and 250 ms, which is within acceptable controller delays for conventional multi-state amplitude controllers. PMID:21193383

  6. Error protection capability of space shuttle data bus designs

    NASA Technical Reports Server (NTRS)

    Proch, G. E.

    1974-01-01

    Error protection assurance in the reliability of digital data communications is discussed. The need for error protection on the space shuttle data bus system has been recognized and specified as a hardware requirement. The error protection techniques of particular concern are those designed into the Shuttle Main Engine Interface (MEI) and the Orbiter Multiplex Interface Adapter (MIA). The techniques and circuit design details proposed for this hardware are analyzed in this report to determine their error protection capability. The capability is calculated in terms of the probability of an undetected word error. Calculated results are reported for a noise environment that ranges from the nominal noise level stated in the hardware specifications to burst levels which may occur in extreme or anomalous conditions.

  7. Estimation of rod scale errors in geodetic leveling

    USGS Publications Warehouse

    Craymer, Michael R.; Vaníček, Petr; Castle, Robert O.

    1995-01-01

    Comparisons among repeated geodetic levelings have often been used for detecting and estimating residual rod scale errors in leveled heights. Individual rod-pair scale errors are estimated by a two-step procedure using a model based on either differences in heights, differences in section height differences, or differences in section tilts. It is shown that the estimated rod-pair scale errors derived from each model are identical only when the data are correctly weighted, and the mathematical correlations are accounted for in the model based on heights. Analyses based on simple regressions of changes in height versus height can easily lead to incorrect conclusions. We also show that the statistically estimated scale errors are not a simple function of height, height difference, or tilt. The models are valid only when terrain slope is constant over adjacent pairs of setups (i.e., smoothly varying terrain). In order to discriminate between rod scale errors and vertical displacements due to crustal motion, the individual rod-pairs should be used in more than one leveling, preferably in areas of contrasting tectonic activity. From an analysis of 37 separately calibrated rod-pairs used in 55 levelings in southern California, we found eight statistically significant coefficients that could be reasonably attributed to rod scale errors, only one of which was larger than the expected random error in the applied calibration-based scale correction. However, significant differences with other independent checks indicate that caution should be exercised before accepting these results as evidence of scale error. Further refinements of the technique are clearly needed if the results are to be routinely applied in practice.

  8. Explaining errors in children's questions.

    PubMed

    Rowland, Caroline F

    2007-07-01

    The ability to explain the occurrence of errors in children's speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that, as predicted by some generativist theories [e.g. Santelmann, L., Berk, S., Austin, J., Somashekar, S. & Lust, B. (2002). Continuity and development in the acquisition of inversion in yes/no questions: dissociating movement and inflection, Journal of Child Language, 29, 813-842], questions with auxiliary DO attracted higher error rates than those with modal auxiliaries. However, in wh-questions, questions with modals and DO attracted equally high error rates, and these findings could not be explained in terms of problems forming questions with why or negated auxiliaries. It was concluded that the data might be better explained in terms of a constructivist account that suggests that entrenched item-based constructions may be protected from error in children's speech, and that errors occur when children resort to other operations to produce questions [e.g. Dabrowska, E. (2000). From formula to schema: the acquisition of English questions. Cognitive Linguistics, 11, 83-102; Rowland, C. F. & Pine, J. M. (2000). Subject-auxiliary inversion errors and wh-question acquisition: What children do know? Journal of Child Language, 27, 157-181; Tomasello, M. (2003). Constructing a language: A usage-based theory of language acquisition. Cambridge, MA: Harvard University Press]. However, further work on constructivist theory development is required to allow researchers to make predictions about the nature of these operations.

  9. The intrinsic error thresholds of the surface code with correlated errors

    NASA Astrophysics Data System (ADS)

    Jouzdani, Pejman; Mucciolo, Eduardo; Novais, Eduardo

    2014-03-01

    We study how the resilience of the surface code to decoherence is affected by the presence of a bosonic bath. The surface code experiences an effective dynamics due to the coupling to a bosonic bath that correlates the qubits of the code. The range of the effective induced qubit-qubit interaction depends on parameters related to the bath correlation functions. We show that different ranges set different intrinsic bounds on the fidelity of the code. These bounds appear to be independent of the stochastic error probabilities frequently studied in the literature and to be merely a consequence of the dynamics induced by the bath. We introduce a new definition of stabilizers based on logical operators that allows us to efficiently implement a Metropolis algorithm to determine the intrinsic upper bounds to the error threshold. Supported by the ONR and the NSF grant CCF 1117241.

  10. Error-associated behaviors and error rates for robotic geology

    NASA Technical Reports Server (NTRS)

    Anderson, Robert C.; Thomas, Geb; Wagner, Jacob; Glasgow, Justin

    2004-01-01

    This study explores human error as a function of the decision-making process. One of many models for human decision-making is Rasmussen's decision ladder [9]. The decision ladder identifies the multiple tasks and states of knowledge involved in decision-making. The tasks and states of knowledge can be classified by the level of cognitive effort required to make the decision, leading to the skill, rule, and knowledge taxonomy (Rasmussen, 1987). Skill-based decisions require the least cognitive effort and knowledge-based decisions require the greatest cognitive effort. Errors can occur at any of the cognitive levels.

  11. Enhanced orbit determination filter sensitivity analysis: Error budget development

    NASA Technical Reports Server (NTRS)

    Estefan, J. A.; Burkhart, P. D.

    1994-01-01

    An error budget analysis is presented which quantifies the effects of different error sources in the orbit determination process when the enhanced orbit determination filter, recently developed, is used to reduce radio metric data. The enhanced filter strategy differs from more traditional filtering methods in that nearly all of the principal ground system calibration errors affecting the data are represented as filter parameters. Error budget computations were performed for a Mars Observer interplanetary cruise scenario for cases in which only X-band (8.4-GHz) Doppler data were used to determine the spacecraft's orbit, X-band ranging data were used exclusively, and a combined set in which the ranging data were used in addition to the Doppler data. In all three cases, the filter model was assumed to be a correct representation of the physical world. Random nongravitational accelerations were found to be the largest source of error contributing to the individual error budgets. Other significant contributors, depending on the data strategy used, were solar-radiation pressure coefficient uncertainty, random earth-orientation calibration errors, and Deep Space Network (DSN) station location uncertainty.

  12. Verification of the Forecast Errors Based on Ensemble Spread

    NASA Astrophysics Data System (ADS)

    Vannitsem, S.; Van Schaeybroeck, B.

    2014-12-01

    The use of ensemble prediction systems allows for an uncertainty estimation of the forecast. Most end users do not require all the information contained in an ensemble and prefer the use of a single uncertainty measure. This measure is the ensemble spread, which serves to forecast the forecast error. It is, however, unclear how the quality of these forecasts can best be assessed based on spread and forecast error only. The spread-error verification is intricate for two reasons: first, each probabilistic forecast is verified against only one observation, and second, the spread is not meant to provide an exact prediction for the error. Despite these facts several advances were recently made, all based on traditional deterministic verification of the error forecast. In particular, Grimit and Mass (2007) and Hopson (2014) considered in detail the strengths and weaknesses of the spread-error correlation, while Christensen et al (2014) developed a proper-score extension of the mean squared error. However, due to the strong variance of the error given a certain spread, the error forecast should preferably be considered as probabilistic in nature. In the present work, different probabilistic error models are proposed depending on the spread-error metrics used. Most of these models allow for the discrimination of a perfect forecast from an imperfect one, independent of the underlying ensemble distribution. The new spread-error scores are tested on the ensemble prediction system of the European Centre for Medium-Range Weather Forecasts (ECMWF) over Europe and Africa. References: Christensen, H. M., Moroz, I. M. and Palmer, T. N., 2014, Evaluation of ensemble forecast uncertainty using a new proper score: application to medium-range and seasonal forecasts. In press, Quarterly Journal of the Royal Meteorological Society. Grimit, E. P., and C. F. Mass, 2007: Measuring the ensemble spread-error relationship with a probabilistic approach: Stochastic ensemble results. Mon. Wea. Rev., 135, 203
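
    For intuition, the spread-error correlation of Grimit and Mass can be computed in a few lines. In this synthetic sketch (all data simulated, not from the ECMWF system) the rank correlation stays well below 1 even though the ensemble is statistically perfect, which is exactly the "strong variance of the error given a certain spread" noted above:

        import numpy as np
        from scipy.stats import spearmanr

        rng = np.random.default_rng(2)
        truth = rng.normal(size=500)
        spread_true = rng.uniform(0.5, 2.0, size=500)   # case-dependent uncertainty
        ens = truth[:, None] + rng.normal(size=(500, 20)) * spread_true[:, None]
        obs = truth + rng.normal(size=500) * spread_true

        spread = ens.std(axis=1, ddof=1)        # ensemble spread per case
        error = np.abs(ens.mean(axis=1) - obs)  # ensemble-mean absolute error
        rho, p = spearmanr(spread, error)
        print(f"spread-error rank correlation: {rho:.2f}")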

  13. An Empirical State Error Covariance Matrix for Batch State Estimation

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2011-01-01

    state estimate, regardless of the source of the uncertainty. Also, in its most straightforward form, the technique only requires supplemental calculations to be added to existing batch algorithms. The generation of this direct, empirical form of the state error covariance matrix is independent of the dimensionality of the observations. Mixed degrees of freedom for an observation set are allowed. As is the case with any simple, empirical sample variance problem, the presented approach offers an opportunity (at least in the case of weighted least squares) to investigate confidence interval estimates for the error covariance matrix elements. The diagonal or variance terms of the error covariance matrix have a particularly simple form to associate with either a multiple degree of freedom chi-square distribution (more approximate) or with a gamma distribution (less approximate). The off-diagonal or covariance terms of the matrix are less clear in their statistical behavior. However, the off-diagonal covariance matrix elements still lend themselves to standard confidence interval error analysis. The distributional forms associated with the off-diagonal terms are more varied and, perhaps, more approximate than those associated with the diagonal terms. Using a simple weighted least squares sample problem, results obtained through use of the proposed technique are presented. The example consists of a simple, two-observer triangulation problem with range-only measurements. Variations of this problem reflect an ideal case (perfect knowledge of the range errors) and a mismodeled case (incorrect knowledge of the range errors).
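
    A minimal sketch of the kind of confidence-interval statement available for the diagonal (variance) terms, using the chi-square form mentioned above; the sample and its size are illustrative assumptions, not the paper's triangulation example:

        import numpy as np
        from scipy.stats import chi2

        # Sample variance of n residual errors and its 95% confidence interval,
        # assuming normally distributed errors.
        rng = np.random.default_rng(3)
        n = 50
        errors = rng.normal(0.0, 2.0, size=n)   # stand-in range-error sample
        s2 = errors.var(ddof=1)

        alpha = 0.05
        lo = (n - 1) * s2 / chi2.ppf(1 - alpha / 2, df=n - 1)
        hi = (n - 1) * s2 / chi2.ppf(alpha / 2, df=n - 1)
        print(f"variance estimate {s2:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")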

  14. Quantum error correcting codes from the compression formalism

    NASA Astrophysics Data System (ADS)

    Choi, Man-Duen; Kribs, David W.; Życzkowski, Karol

    2006-08-01

    We solve the fundamental quantum error correction problem for bi-unitary channels on two-qubit Hilbert space. By solving an algebraic compression problem, we construct qubit codes for such channels on arbitrary dimension Hilbert space, and identify correctable codes for Pauli-error models not obtained by the stabilizer formalism. This is accomplished through an application of a new tool for error correction in quantum computing called the "higher-rank numerical range". We describe its basic properties and discuss possible further applications.

  15. Masking of errors in transmission of VAPC-coded speech

    NASA Technical Reports Server (NTRS)

    Cox, Neil B.; Froese, Edwin L.

    1990-01-01

    A subjective evaluation is provided of the bit error sensitivity of the message elements of a Vector Adaptive Predictive (VAPC) speech coder, along with an indication of the amenability of these elements to a popular error masking strategy (cross frame hold over). As expected, a wide range of bit error sensitivity was observed. The most sensitive message components were the short term spectral information and the most significant bits of the pitch and gain indices. The cross frame hold over strategy was found to be useful for pitch and gain information, but it was not beneficial for the spectral information unless severe corruption had occurred.

  16. Aerial measurement error with a dot planimeter: Some experimental estimates

    NASA Technical Reports Server (NTRS)

    Yuill, R. S.

    1971-01-01

    A shape analysis is presented which utilizes a computer to simulate a multiplicity of dot grids mathematically. Results indicate that the number of dots placed over an area to be measured accounts for essentially all of the correlation with measurement accuracy, the indices of shape being of little significance. Equations and graphs are provided from which the average expected error, and the maximum range of error, for various numbers of dot points can be read.
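
    The finding that dot count alone drives accuracy is easy to reproduce with a small Monte Carlo experiment. This sketch uses random dots over a unit circle, a simplification of the simulated regular grids in the report, and all values are illustrative:

        import numpy as np

        # Monte Carlo estimate of dot-count area-measurement error for a circle
        # of radius 0.5 (area pi/4) inside the unit square.
        rng = np.random.default_rng(4)
        true_area = np.pi / 4

        for n_dots in (25, 100, 400, 1600):
            errs = []
            for _ in range(1000):              # 1000 random dot placements
                pts = rng.random((n_dots, 2))  # dots over the unit square
                inside = (pts[:, 0] - 0.5)**2 + (pts[:, 1] - 0.5)**2 <= 0.25
                errs.append(abs(inside.mean() - true_area))
            print(n_dots, "dots: mean abs error", f"{np.mean(errs):.4f}")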

  17. Microdensitometer errors: Their effect on photometric data reduction

    NASA Technical Reports Server (NTRS)

    Bozyan, E. P.; Opal, C. B.

    1984-01-01

    The performance of densitometers used for photometric data reduction of high dynamic range electrographic plate material is analyzed. Densitometer repeatability is tested by comparing two scans of one plate. Internal densitometer errors are examined by constructing histograms of digitized densities and finding inoperative bits and differential nonlinearity in the analog to digital converter. Such problems appear common to the four densitometers used in this investigation and introduce systematic algorithm dependent errors in the results. Strategies to improve densitometer performance are suggested.

  18. Error Cost Escalation Through the Project Life Cycle

    NASA Technical Reports Server (NTRS)

    Stecklein, Jonette M.; Dabney, Jim; Dick, Brandon; Haskins, Bill; Lovell, Randy; Moroney, Gregory

    2004-01-01

    It is well known that the costs to fix errors increase as the project matures, but how fast do those costs build? A study was performed to determine the relative cost of fixing errors discovered during various phases of a project life cycle. This study used three approaches to determine the relative costs: the bottom-up cost method, the total cost breakdown method, and the top-down hypothetical project method. The approaches and results described in this paper presume development of a hardware/software system having project characteristics similar to those used in the development of a large, complex spacecraft, a military aircraft, or a small communications satellite. The results show the degree to which costs escalate as errors are discovered and fixed at later and later phases in the project life cycle. If the cost of fixing a requirements error discovered during the requirements phase is defined to be 1 unit, the cost to fix that error if found during the design phase increases to 3 - 8 units; at the manufacturing/build phase, the cost to fix the error is 7 - 16 units; at the integration and test phase, the cost to fix the error becomes 21 - 78 units; and at the operations phase, the cost to fix the requirements error ranged from 29 units to more than 1500 units.

  19. Acceptance and Commitment Therapy (ACT): An Overview for Practitioners

    ERIC Educational Resources Information Center

    Bowden, Tim; Bowden, Sandra

    2012-01-01

    Acceptance and Commitment Therapy (ACT) offers school counsellors a practical and meaningful approach to helping students deal with a range of issues. This is achieved by encouraging psychological flexibility through the application of six key principles. This article describes our introduction to ACT, ACT's application to children and…

  20. High Precision Ranging and Range-Rate Measurements over Free-Space-Laser Communication Link

    NASA Technical Reports Server (NTRS)

    Yang, Guangning; Lu, Wei; Krainak, Michael; Sun, Xiaoli

    2016-01-01

    We present a high-precision ranging and range-rate measurement system via an optical-ranging or combined ranging-communication link. A complete bench-top optical communication system was built. It included a ground terminal and a space terminal. Ranging and range-rate tests were conducted in two configurations. In the communication configuration with a 622 Mbps data rate, we achieved a two-way range-rate error of 2 microns/s, or a modified Allan deviation of 9 x 10 (exp -15) with 10 second averaging time. Ranging and range-rate accuracy as a function of the bit error rate of the communication link is reported; the measurements are not sensitive to the link error rate. In the single-frequency amplitude modulation mode, we report a two-way range-rate error of 0.8 microns/s, or a modified Allan deviation of 2.6 x 10 (exp -15) with 10 second averaging time. We identified the major noise sources in the current system as the transmitter modulation injected noise and receiver electronics generated noise. A new improved system will be constructed to further improve the system performance for both operating modes.
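
    The paper quotes modified Allan deviations; for intuition, the ordinary (non-overlapping) Allan deviation of a fractional-frequency series can be computed as below. This is a simplified stand-in, not the authors' processing chain, and the white-noise input is assumed purely for illustration:

        import numpy as np

        def allan_deviation(y, m):
            """Non-overlapping Allan deviation of fractional-frequency data y,
            averaged over blocks of m samples."""
            n = len(y) // m
            yb = y[:n * m].reshape(n, m).mean(axis=1)      # block averages
            return np.sqrt(0.5 * np.mean(np.diff(yb) ** 2))  # two-sample variance

        rng = np.random.default_rng(5)
        tau0 = 1.0                              # sample interval, s (assumed)
        y = rng.normal(0, 1e-13, size=100_000)  # white-FM stand-in noise
        for m in (1, 10, 100, 1000):
            print(f"tau = {m * tau0:6.0f} s: ADEV = {allan_deviation(y, m):.2e}")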

  1. Further characterization of the influence of crowding on medication errors

    PubMed Central

    Watts, Hannah; Nasim, Muhammad Umer; Sweis, Rolla; Sikka, Rishi; Kulstad, Erik

    2013-01-01

    Study Objectives: Our prior analysis suggested that error frequency increases disproportionately with Emergency department (ED) crowding. To further characterize, we measured this association while controlling for the number of charts reviewed and the presence of ambulance diversion status. We hypothesized that errors would occur significantly more frequently as crowding increased, even after controlling for higher patient volumes. Materials and Methods: We performed a prospective, observational study in a large, community hospital ED from May to October of 2009. Our ED has full-time pharmacists who review orders of patients to help identify errors prior to their causing harm. Research volunteers shadowed our ED pharmacists over discrete 4-hour time periods during their reviews of orders on patients in the ED. The total numbers of charts reviewed and errors identified were documented along with details for each error type, severity, and category. We then measured the correlation between error rate (number of errors divided by total number of charts reviewed) and ED occupancy rate while controlling for diversion status during the observational period. We estimated a sample size requirement of at least 45 errors identified to allow detection of an effect size of 0.6 based on our historical data. Results: During 324 hours of surveillance, 1171 charts were reviewed and 87 errors were identified. Median error rate per 4-hour block was 5.8% of charts reviewed (IQR 0-13). No significant change was seen with ED occupancy rate (Spearman's rho = –.08, P = .49). Median error rate during times on ambulance diversion was almost twice as large (11%, IQR 0-17), but this rate did not reach statistical significance in univariate or multivariate analysis. Conclusions: Error frequency appears to remain relatively constant across the range of crowding in our ED when controlling for patient volume via the quantity of orders reviewed. Error quantity therefore increases with crowding

  2. Spacecraft and propulsion technician error

    NASA Astrophysics Data System (ADS)

    Schultz, Daniel Clyde

    Commercial aviation and commercial space similarly launch, fly, and land passenger vehicles. Unlike aviation, the U.S. government has not established maintenance policies for commercial space. This study conducted a mixed methods review of 610 U.S. space launches from 1984 through 2011, which included 31 failures. An analysis of the failure causal factors showed that human error accounted for 76% of those failures, which included workmanship error accounting for 29% of the failures. With the imminent future of commercial space travel, the increased potential for the loss of human life demands that changes be made to the standardized procedures, training, and certification to reduce human error and failure rates. Several recommendations were made by this study to the FAA's Office of Commercial Space Transportation, space launch vehicle operators, and maintenance technician schools in an effort to increase the safety of the space transportation passengers.

  3. Synthetic aperture interferometry: error analysis

    SciTech Connect

    Biswas, Amiya; Coupland, Jeremy

    2010-07-10

    Synthetic aperture interferometry (SAI) is a novel way of testing aspherics and has a potential for in-process measurement of aspherics [Appl. Opt. 42, 701 (2003)]. A method to measure steep aspherics using the SAI technique has been previously reported [Appl. Opt. 47, 1705 (2008)]. Here we investigate the computation of surface form using the SAI technique in different configurations and discuss the computational errors. A two-pass measurement strategy is proposed to reduce the computational errors, and a detailed investigation is carried out to determine the effect of alignment errors on the measurement process.

  4. Orbit IMU alignment: Error analysis

    NASA Technical Reports Server (NTRS)

    Corson, R. W.

    1980-01-01

    A comprehensive accuracy analysis of orbit inertial measurement unit (IMU) alignments using the shuttle star trackers was completed and the results are presented. Monte Carlo techniques were used in a computer simulation of the IMU alignment hardware and software systems to: (1) determine the expected Space Transportation System 1 Flight (STS-1) manual mode IMU alignment accuracy; (2) investigate the accuracy of alignments in later shuttle flights when the automatic mode of star acquisition may be used; and (3) verify that an analytical model previously used for estimating the alignment error is a valid model. The analysis results do not differ significantly from expectations. The standard deviation in the IMU alignment error for STS-1 alignments was determined to be 68 arc seconds per axis. This corresponds to a 99.7% probability that the magnitude of the total alignment error is less than 258 arc seconds.

  5. Reward positivity: Reward prediction error or salience prediction error?

    PubMed

    Heydari, Sepideh; Holroyd, Clay B

    2016-08-01

    The reward positivity is a component of the human ERP elicited by feedback stimuli in trial-and-error learning and guessing tasks. A prominent theory holds that the reward positivity reflects a reward prediction error signal that is sensitive to outcome valence, being larger for unexpected positive events relative to unexpected negative events (Holroyd & Coles, 2002). Although the theory has found substantial empirical support, most of these studies have utilized either monetary or performance feedback to test the hypothesis. However, in apparent contradiction to the theory, a recent study found that unexpected physical punishments also elicit the reward positivity (Talmi, Atkinson, & El-Deredy, 2013). The authors of this report argued that the reward positivity reflects a salience prediction error rather than a reward prediction error. To investigate this finding further, in the present study participants navigated a virtual T maze and received feedback on each trial under two conditions. In a reward condition, the feedback indicated that they would either receive a monetary reward or not and in a punishment condition the feedback indicated that they would receive a small shock or not. We found that the feedback stimuli elicited a typical reward positivity in the reward condition and an apparently delayed reward positivity in the punishment condition. Importantly, this signal was more positive to the stimuli that predicted the omission of a possible punishment relative to stimuli that predicted a forthcoming punishment, which is inconsistent with the salience hypothesis. PMID:27184070

  7. 20 Tips to Help Prevent Medical Errors

    MedlinePlus

    20 Tips to Help Prevent Medical Errors: Patient Fact Sheet. Medical errors can occur anywhere in the health care ...

  8. Medication errors: definitions and classification.

    PubMed

    Aronson, Jeffrey K

    2009-06-01

    1. To understand medication errors and to identify preventive strategies, we need to classify them and define the terms that describe them. 2. The four main approaches to defining technical terms consider etymology, usage, previous definitions, and the Ramsey-Lewis method (based on an understanding of theory and practice). 3. A medication error is 'a failure in the treatment process that leads to, or has the potential to lead to, harm to the patient'. 4. Prescribing faults, a subset of medication errors, should be distinguished from prescription errors. A prescribing fault is 'a failure in the prescribing [decision-making] process that leads to, or has the potential to lead to, harm to the patient'. The converse of this, 'balanced prescribing' is 'the use of a medicine that is appropriate to the patient's condition and, within the limits created by the uncertainty that attends therapeutic decisions, in a dosage regimen that optimizes the balance of benefit to harm'. This excludes all forms of prescribing faults, such as irrational, inappropriate, and ineffective prescribing, underprescribing and overprescribing. 5. A prescription error is 'a failure in the prescription writing process that results in a wrong instruction about one or more of the normal features of a prescription'. The 'normal features' include the identity of the recipient, the identity of the drug, the formulation, dose, route, timing, frequency, and duration of administration. 6. Medication errors can be classified, invoking psychological theory, as knowledge-based mistakes, rule-based mistakes, action-based slips, and memory-based lapses. This classification informs preventive strategies.

  9. Medication errors: definitions and classification

    PubMed Central

    Aronson, Jeffrey K

    2009-01-01

    To understand medication errors and to identify preventive strategies, we need to classify them and define the terms that describe them. The four main approaches to defining technical terms consider etymology, usage, previous definitions, and the Ramsey–Lewis method (based on an understanding of theory and practice). A medication error is ‘a failure in the treatment process that leads to, or has the potential to lead to, harm to the patient’. Prescribing faults, a subset of medication errors, should be distinguished from prescription errors. A prescribing fault is ‘a failure in the prescribing [decision-making] process that leads to, or has the potential to lead to, harm to the patient’. The converse of this, ‘balanced prescribing’ is ‘the use of a medicine that is appropriate to the patient's condition and, within the limits created by the uncertainty that attends therapeutic decisions, in a dosage regimen that optimizes the balance of benefit to harm’. This excludes all forms of prescribing faults, such as irrational, inappropriate, and ineffective prescribing, underprescribing and overprescribing. A prescription error is ‘a failure in the prescription writing process that results in a wrong instruction about one or more of the normal features of a prescription’. The ‘normal features’ include the identity of the recipient, the identity of the drug, the formulation, dose, route, timing, frequency, and duration of administration. Medication errors can be classified, invoking psychological theory, as knowledge-based mistakes, rule-based mistakes, action-based slips, and memory-based lapses. This classification informs preventive strategies. PMID:19594526

  10. Analysis of Medication Error Reports

    SciTech Connect

    Whitney, Paul D.; Young, Jonathan; Santell, John; Hicks, Rodney; Posse, Christian; Fecht, Barbara A.

    2004-11-15

    In medicine, as in many areas of research, technological innovation and the shift from paper-based information to electronic records has created a climate of ever increasing availability of raw data. There has been, however, a corresponding lag in our abilities to analyze this overwhelming mass of data, and classic forms of statistical analysis may not allow researchers to interact with data in the most productive way. This is true in the emerging area of patient safety improvement. Traditionally, a majority of the analysis of error and incident reports has been carried out based on an approach of data comparison, and starts with a specific question which needs to be answered. Newer data analysis tools have been developed which allow the researcher to not only ask specific questions but also to “mine” data: approach an area of interest without preconceived questions, and explore the information dynamically, allowing questions to be formulated based on patterns brought up by the data itself. Since 1991, United States Pharmacopeia (USP) has been collecting data on medication errors through voluntary reporting programs. USP’s MEDMARX(sm) reporting program is the largest national medication error database and currently contains well over 600,000 records. Traditionally, USP has conducted an annual quantitative analysis of data derived from “pick-lists” (i.e., items selected from a list of items) without an in-depth analysis of free-text fields. In this paper, we describe the application of text analysis and data analysis tools used by Battelle to analyze the medication error reports already analyzed in the traditional way by USP. New insights and findings were revealed, including the value of language normalization and the distribution of error incidents by day of the week. The motivation for this effort is to gain additional insight into the nature of medication errors to support improvements in medication safety.

  11. How psychotherapists handle treatment errors – an ethical analysis

    PubMed Central

    2013-01-01

    Background Dealing with errors in psychotherapy is challenging, both ethically and practically. There is almost no empirical research on this topic. We aimed (1) to explore psychotherapists’ self-reported ways of dealing with an error made by themselves or by colleagues, and (2) to reconstruct their reasoning according to the two principle-based ethical approaches that are dominant in the ethics discourse of psychotherapy, Beauchamp & Childress (B&C) and Lindsay et al. (L). Methods We conducted 30 semi-structured interviews with 30 psychotherapists (physicians and non-physicians) and analysed the transcripts using qualitative content analysis. Answers were deductively categorized according to the two principle-based ethical approaches. Results Most psychotherapists reported that they preferred to disclose an error to the patient. They justified this by spontaneous intuitions and common values in psychotherapy, rarely using explicit ethical reasoning. The answers were attributed to the following categories with descending frequency: 1. Respect for patient autonomy (B&C; L), 2. Non-maleficence (B&C) and Responsibility (L), 3. Integrity (L), 4. Competence (L) and Beneficence (B&C). Conclusions Psychotherapists need specific ethical and communication training to complement and articulate their moral intuitions as a support when disclosing their errors to the patients. Principle-based ethical approaches seem to be useful for clarifying the reasons for disclosure. Further research should help to identify the most effective and acceptable ways of error disclosure in psychotherapy. PMID:24321503

  12. 48 CFR 2911.103 - Market acceptance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... offered have either achieved commercial market acceptance or been satisfactorily supplied to an agency... 48 Federal Acquisition Regulations System 7 2010-10-01 2010-10-01 false Market acceptance. 2911... DESCRIBING AGENCY NEEDS Selecting And Developing Requirements Documents 2911.103 Market acceptance....

  13. 48 CFR 11.103 - Market acceptance.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 1 2011-10-01 2011-10-01 false Market acceptance. 11.103... DESCRIBING AGENCY NEEDS Selecting and Developing Requirements Documents 11.103 Market acceptance. (a) Section... either— (i) Achieved commercial market acceptance; or (ii) Been satisfactorily supplied to an...

  14. 48 CFR 2911.103 - Market acceptance.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 7 2011-10-01 2011-10-01 false Market acceptance. 2911... DESCRIBING AGENCY NEEDS Selecting And Developing Requirements Documents 2911.103 Market acceptance. The... offered have either achieved commercial market acceptance or been satisfactorily supplied to an...

  15. Older Adults' Acceptance of Information Technology

    ERIC Educational Resources Information Center

    Wang, Lin; Rau, Pei-Luen Patrick; Salvendy, Gavriel

    2011-01-01

    This study investigated variables contributing to older adults' information technology acceptance through a survey, which was used to find factors explaining and predicting older adults' information technology acceptance behaviors. Four factors, including needs satisfaction, perceived usability, support availability, and public acceptance, were…

  16. Apollo experience report environmental acceptance testing

    NASA Technical Reports Server (NTRS)

    Laubach, C. H. M.

    1976-01-01

    Environmental acceptance testing was used extensively to screen selected spacecraft hardware for workmanship defects and manufacturing flaws. The minimum acceptance levels and durations and methods for their establishment are described. Component selection and test monitoring, as well as test implementation requirements, are included. Apollo spacecraft environmental acceptance test results are summarized, and recommendations for future programs are presented.

  17. 48 CFR 245.606-3 - Acceptance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Acceptance. 245.606-3..., DEPARTMENT OF DEFENSE CONTRACT MANAGEMENT GOVERNMENT PROPERTY Reporting, Redistribution, and Disposal of Contractor Inventory 245.606-3 Acceptance. (a) If the schedules are acceptable, the plant clearance...

  18. 46 CFR 28.73 - Accepted organizations.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 1 2012-10-01 2012-10-01 false Accepted organizations. 28.73 Section 28.73 Shipping... INDUSTRY VESSELS General Provisions § 28.73 Accepted organizations. An organization desiring to be designated by the Commandant as an accepted organization must request such designation in writing. As...

  19. 46 CFR 28.73 - Accepted organizations.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 1 2011-10-01 2011-10-01 false Accepted organizations. 28.73 Section 28.73 Shipping... INDUSTRY VESSELS General Provisions § 28.73 Accepted organizations. An organization desiring to be designated by the Commandant as an accepted organization must request such designation in writing. As...

  20. 46 CFR 28.73 - Accepted organizations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 1 2010-10-01 2010-10-01 false Accepted organizations. 28.73 Section 28.73 Shipping... INDUSTRY VESSELS General Provisions § 28.73 Accepted organizations. An organization desiring to be designated by the Commandant as an accepted organization must request such designation in writing. As...

  1. 46 CFR 28.73 - Accepted organizations.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 1 2013-10-01 2013-10-01 false Accepted organizations. 28.73 Section 28.73 Shipping... INDUSTRY VESSELS General Provisions § 28.73 Accepted organizations. An organization desiring to be designated by the Commandant as an accepted organization must request such designation in writing. As...

  2. 46 CFR 28.73 - Accepted organizations.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 1 2014-10-01 2014-10-01 false Accepted organizations. 28.73 Section 28.73 Shipping... INDUSTRY VESSELS General Provisions § 28.73 Accepted organizations. An organization desiring to be designated by the Commandant as an accepted organization must request such designation in writing. As...

  3. Toward a theoretical approach to medical error reporting system research and design.

    PubMed

    Karsh, Ben-Tzion; Escoto, Kamisha Hamilton; Beasley, John W; Holden, Richard J

    2006-05-01

    The release of the Institute of Medicine (Kohn et al., 2000) report "To Err is Human", brought attention to the problem of medical errors, which led to a concerted effort to study and design medical error reporting systems for the purpose of capturing and analyzing error data so that safety interventions could be designed. However, to make real gains in the efficacy of medical error or event reporting systems, it is necessary to begin developing a theory of reporting systems adoption and use and to understand how existing theories may play a role in explaining adoption and use. This paper presents the results of a 9-month study exploring the barriers and facilitators for the design of a statewide medical error reporting system and discusses how several existing theories of technology acceptance, adoption and implementation fit with many of the results. In addition we present an integrated theoretical model of medical error reporting system design and implementation. PMID:16182233

  4. Automatic-repeat-request error control schemes

    NASA Technical Reports Server (NTRS)

    Lin, S.; Costello, D. J., Jr.; Miller, M. J.

    1983-01-01

    Error detection incorporated with automatic-repeat-request (ARQ) is widely used for error control in data communication systems. This method of error control is simple and provides high system reliability. If a properly chosen code is used for error detection, virtually error-free data transmission can be attained. Various types of ARQ and hybrid ARQ schemes, and error detection using linear block codes are surveyed.
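
    A toy stop-and-wait ARQ loop illustrates the basic principle surveyed here: corrupted frames are detected (by an assumed perfect error-detecting code) and retransmitted until received cleanly. All names and parameters are illustrative:

        import random

        def stop_and_wait_arq(frames, p_error, max_retries=10):
            """Toy stop-and-wait ARQ: retransmit until the (assumed perfect)
            error detector reports a clean frame or retries are exhausted."""
            transmissions = 0
            for frame in frames:
                for _ in range(max_retries):
                    transmissions += 1
                    corrupted = random.random() < p_error  # channel corrupts frame
                    if not corrupted:                      # receiver ACKs clean frame
                        break
                else:
                    raise RuntimeError(f"frame {frame} failed after {max_retries} tries")
            return transmissions

        random.seed(0)
        n = stop_and_wait_arq(range(1000), p_error=0.1)
        print(f"1000 frames delivered in {n} transmissions")  # ~1111 expected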

  5. Management of human error by design

    NASA Technical Reports Server (NTRS)

    Wiener, Earl

    1988-01-01

    Design-induced errors and error prevention as well as the concept of lines of defense against human error are discussed. The concept of human error prevention, whose main focus has been on hardware, is extended to other features of the human-machine interface vulnerable to design-induced errors. In particular, it is pointed out that human factors and human error prevention should be part of the process of transport certification. Also, the concept of error tolerant systems is considered as a last line of defense against error.

  6. Reducing medical errors and adverse events.

    PubMed

    Pham, Julius Cuong; Aswani, Monica S; Rosen, Michael; Lee, HeeWon; Huddle, Matthew; Weeks, Kristina; Pronovost, Peter J

    2012-01-01

    Medical errors account for ∼98,000 deaths per year in the United States. They increase disability and costs and decrease confidence in the health care system. We review several important types of medical errors and adverse events. We discuss medication errors, healthcare-acquired infections, falls, handoff errors, diagnostic errors, and surgical errors. We describe the impact of these errors, review causes and contributing factors, and provide an overview of strategies to reduce these events. We also discuss teamwork/safety culture, an important aspect in reducing medical errors.

  7. Flight Test Results of an Angle of Attack and Angle of Sideslip Calibration Method Using Output-Error Optimization

    NASA Technical Reports Server (NTRS)

    Siu, Marie-Michele; Martos, Borja; Foster, John V.

    2013-01-01

    As part of a joint partnership between the NASA Aviation Safety Program (AvSP) and the University of Tennessee Space Institute (UTSI), research on advanced air data calibration methods has been in progress. This research was initiated to expand a novel pitot-static calibration method that was developed to allow rapid in-flight calibration for the NASA Airborne Subscale Transport Aircraft Research (AirSTAR) facility. This approach uses Global Positioning System (GPS) technology coupled with modern system identification methods that rapidly compute optimal pressure error models over a range of airspeeds with defined confidence bounds. Subscale flight tests demonstrated small 2-sigma error bounds with a significant reduction in test time compared to other methods. Recent UTSI full-scale flight tests have shown airspeed calibrations with the same or better accuracy than the Federal Aviation Administration (FAA) accepted GPS 'four-leg' method, in a smaller test area and in less time. The current research was motivated by the desire to extend this method for in-flight calibration of angle of attack (AOA) and angle of sideslip (AOS) flow vanes. An instrumented Piper Saratoga research aircraft from the UTSI was used to collect the flight test data and evaluate flight test maneuvers. Results showed that the output-error approach produces good results for flow vane calibration. In addition, maneuvers for pitot-static and flow vane calibration can be integrated to enable simultaneous and efficient testing of each system.

  8. Evaluating the Effect of Global Positioning System (GPS) Satellite Clock Error via GPS Simulation

    NASA Astrophysics Data System (ADS)

    Sathyamoorthy, Dinesh; Shafii, Shalini; Amin, Zainal Fitry M.; Jusoh, Asmariah; Zainun Ali, Siti

    2016-06-01

    This study is aimed at evaluating the effect of Global Positioning System (GPS) satellite clock error using GPS simulation. Two test conditions are used; Case 1: all the GPS satellites have clock errors within the normal range of 0 to 7 ns, corresponding to a pseudorange error range of 0 to 2.1 m; Case 2: one GPS satellite suffers from critical failure, resulting in a clock error corresponding to a pseudorange error of up to 1 km. It is found that increasing GPS satellite clock error causes increasing average positional error, due to the increasing pseudorange error in the GPS satellite signals, which results in increasing error in the coordinates computed by the GPS receiver. Varying average positional error patterns are observed for each of the readings. This is because the GPS satellite constellation is dynamic, causing varying GPS satellite geometry over location and time, with the result that GPS accuracy is location- and time-dependent. For Case 1, in general, the highest average positional error values are observed for readings with the highest PDOP values, while the lowest average positional error values are observed for readings with the lowest PDOP values. For Case 2, no correlation is observed between the average positional error values and PDOP, indicating that the error generated is random.
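
    The mapping between clock error and pseudorange error used above is simply delta_rho = c * delta_t, so 7 ns corresponds to about 2.1 m and roughly 3336 ns to 1 km; a quick check:

        # Pseudorange error induced by a satellite clock offset: delta_rho = c * delta_t
        C = 299_792_458.0  # speed of light, m/s

        for dt_ns in (1, 7, 3336):              # 3336 ns gives roughly 1 km
            dr = C * dt_ns * 1e-9
            print(f"clock error {dt_ns:5d} ns -> pseudorange error {dr:9.1f} m")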

  9. Hybrid Models for Trajectory Error Modelling in Urban Environments

    NASA Astrophysics Data System (ADS)

    Angelatsa, E.; Parés, M. E.; Colomina, I.

    2016-06-01

    This paper tackles the first step of any strategy aiming to improve the trajectory of terrestrial mobile mapping systems in urban environments. We present an approach to model the error of terrestrial mobile mapping trajectories, combining deterministic and stochastic models. Due to the specific urban environment, the deterministic component will be modelled with non-continuous functions composed of linear shifts, drifts or polynomial functions. In addition, we will introduce a stochastic error component for modelling the residual noise of the trajectory error function. The first step in error modelling requires knowing the actual trajectory error values for several representative environments. In order to determine the trajectory errors as accurately as possible, (almost) error-free reference trajectories should be estimated using non-semantic features extracted from a sequence of images collected with the terrestrial mobile mapping system and from a full set of ground control points. Once the references are estimated, they will be used to determine the actual errors in the terrestrial mobile mapping trajectory. The rigorous analysis of these data sets will allow us to characterize the errors of a terrestrial mobile mapping system for a wide range of environments. This information will be of great use in future campaigns to improve the results of 3D point cloud generation. The proposed approach has been evaluated using real data. The data originate from a mobile mapping campaign over an urban and controlled area of Dortmund (Germany), with harmful GNSS conditions. The mobile mapping system, which includes two laser scanners and two cameras, was mounted on a van and driven over a controlled area for around three hours. The results show the suitability of decomposing the trajectory error into non-continuous deterministic and stochastic components.
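
    A single-segment sketch of the deterministic-plus-stochastic decomposition described above: fit a polynomial to the trajectory error and treat the residual as stochastic noise. The paper's model is piecewise and non-continuous across segments; this sketch, with simulated values, shows one segment only:

        import numpy as np

        rng = np.random.default_rng(6)
        t = np.linspace(0, 100, 1001)
        err = 0.02 * t + 1e-4 * t**2 + rng.normal(0, 0.05, t.size)  # simulated error

        coeffs = np.polyfit(t, err, deg=3)        # deterministic component
        residual = err - np.polyval(coeffs, t)    # stochastic component
        print("polynomial coefficients:", np.round(coeffs, 6))
        print("residual noise std:", residual.std())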

  10. Errors in airborne flux measurements

    NASA Astrophysics Data System (ADS)

    Mann, Jakob; Lenschow, Donald H.

    1994-07-01

    We present a general approach for estimating systematic and random errors in eddy correlation fluxes and flux gradients measured by aircraft in the convective boundary layer as a function of the length of the flight leg, or of the cutoff wavelength of a highpass filter. The estimates are obtained from empirical expressions for various length scales in the convective boundary layer and they are experimentally verified using data from the First ISLSCP (International Satellite Land Surface Climatology Experiment) Field Experiment (FIFE), the Air Mass Transformation Experiment (AMTEX), and the Electra Radome Experiment (ELDOME). We show that the systematic flux and flux gradient errors can be important if fluxes are calculated from a set of several short flight legs or if the vertical velocity and scalar time series are high-pass filtered. While the systematic error of the flux is usually negative, that of the flux gradient can change sign. For example, for temperature flux divergence the systematic error changes from negative to positive about a quarter of the way up in the convective boundary layer.
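
    The fluxes in question are eddy-correlation covariances of the form <w'c'>; a minimal sketch of the estimator (mean removal over a single leg) using simulated series. Note that the systematic errors analyzed in the paper arise from finite leg length and high-pass filtering, which this toy version ignores:

        import numpy as np

        def eddy_flux(w, c):
            """Eddy-covariance flux <w'c'> from vertical wind w and scalar c,
            using simple mean removal along the flight leg."""
            wp = w - w.mean()   # vertical-velocity fluctuation
            cp = c - c.mean()   # scalar fluctuation
            return np.mean(wp * cp)

        rng = np.random.default_rng(7)
        n = 10000
        w = rng.normal(0, 1.0, n)
        c = 0.3 * w + rng.normal(0, 1.0, n)       # scalar correlated with w
        print("flux estimate:", eddy_flux(w, c))  # ~0.3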

  11. Sampling Errors of Variance Components.

    ERIC Educational Resources Information Center

    Sanders, Piet F.

    A study on sampling errors of variance components was conducted within the framework of generalizability theory by P. L. Smith (1978). The study used an intuitive approach for solving the problem of how to allocate the number of conditions to different facets in order to produce the most stable estimate of the universe score variance. Optimization…

  12. Measurement error in geometric morphometrics.

    PubMed

    Fruciano, Carmelo

    2016-06-01

    Geometric morphometrics, a set of methods for the statistical analysis of shape once saluted as a revolutionary advancement in the analysis of morphology, is now mature and routinely used in ecology and evolution. However, a factor often disregarded in empirical studies is the presence and the extent of measurement error. This is potentially a very serious issue because random measurement error can inflate the amount of variance and, since many statistical analyses are based on the amount of "explained" relative to "residual" variance, can result in loss of statistical power. On the other hand, systematic bias can affect statistical analyses by biasing the results (i.e. variation due to bias is incorporated in the analysis and treated as biologically meaningful variation). Here, I briefly review common sources of error in geometric morphometrics. I then review the most commonly used methods to measure and account for both random and non-random measurement error, providing a worked example using a real dataset.

  13. The Errors of Our Ways

    ERIC Educational Resources Information Center

    Kane, Michael

    2011-01-01

    Errors don't exist in our data, but they serve a vital function. Reality is complicated, but our models need to be simple in order to be manageable. We assume that attributes are invariant over some conditions of observation, and once we do that we need some way of accounting for the variability in observed scores over these conditions of…

  14. Typical errors of ESP users

    NASA Astrophysics Data System (ADS)

    Eremina, Svetlana V.; Korneva, Anna A.

    2004-07-01

    The paper presents an analysis of the errors made by ESP (English for specific purposes) users which have been considered typical. They occur as a result of misuse of the resources of English grammar and tend to resist correction. Their origin and places of occurrence are also discussed.

  15. Amplify Errors to Minimize Them

    ERIC Educational Resources Information Center

    Stewart, Maria Shine

    2009-01-01

    In this article, the author offers her experience of modeling mistakes and writing spontaneously in the computer classroom to get students' attention and elicit their editorial response. She describes how she taught her class about major sentence errors--comma splices, run-ons, and fragments--through her Sentence Meditation exercise, a rendition…

  16. Theory of Test Translation Error

    ERIC Educational Resources Information Center

    Solano-Flores, Guillermo; Backhoff, Eduardo; Contreras-Nino, Luis Angel

    2009-01-01

    In this article, we present a theory of test translation whose intent is to provide the conceptual foundation for effective, systematic work in the process of test translation and test translation review. According to the theory, translation error is multidimensional; it is not simply the consequence of defective translation but an inevitable fact…

  17. Error Patterns of Bilingual Readers.

    ERIC Educational Resources Information Center

    Gonzalez, Phillip C.; Elijah, David V.

    1979-01-01

    In a study of developmental reading behaviors, errors of 75 Spanish-English bilingual students (grades 2-9) on the McLeod GAP Comprehension Test were categorized in an attempt to ascertain a pattern of language difficulties. Contrary to previous research, bilingual readers minimally used native language cues in reading second language materials.…

  18. What Is a Reading Error?

    ERIC Educational Resources Information Center

    Labov, William; Baker, Bettina

    2010-01-01

    Early efforts to apply knowledge of dialect differences to reading stressed the importance of the distinction between differences in pronunciation and mistakes in reading. This study develops a method of estimating the probability that a given oral reading that deviates from the text is a true reading error by observing the semantic impact of the…

  19. Having Fun with Error Analysis

    ERIC Educational Resources Information Center

    Siegel, Peter

    2007-01-01

    We present a fun activity that can be used to introduce students to error analysis: the M&M game. Students are told to estimate the number of individual candies plus uncertainty in a bag of M&M's. The winner is the group whose estimate brackets the actual number with the smallest uncertainty. The exercise produces enthusiastic discussions and…

  20. Input/output error analyzer

    NASA Technical Reports Server (NTRS)

    Vaughan, E. T.

    1977-01-01

    Program aids in equipment assessment. Independent assembly-language utility program is designed to operate under level 27 or 31 of EXEC 8 Operating System. It scans user-selected portions of system log file, whether located on tape or mass storage, and searches for and processes I/O error (type 6) entries.

  1. A brief history of error.

    PubMed

    Murray, Andrew W

    2011-10-01

    The spindle checkpoint monitors chromosome alignment on the mitotic and meiotic spindle. When the checkpoint detects errors, it arrests progress of the cell cycle while it attempts to correct the mistakes. This perspective will present a brief history summarizing what we know about the checkpoint, and a list of questions we must answer before we understand it. PMID:21968991

  2. Prospective, multidisciplinary recording of perioperative errors in cerebrovascular surgery: is error in the eye of the beholder?

    PubMed

    Michalak, Suzanne M; Rolston, John D; Lawton, Michael T

    2016-06-01

    OBJECT Surgery requires careful coordination of multiple team members, each playing a vital role in mitigating errors. Previous studies have focused on eliciting errors from only the attending surgeon, likely missing events observed by other team members. METHODS Surveys were administered to the attending surgeon, resident surgeon, anesthesiologist, and nursing staff immediately following each of 31 cerebrovascular surgeries; participants were instructed to record any deviation from optimal course (DOC). DOCs were categorized and sorted by reporter and perioperative timing, then correlated with delays and outcome measures. RESULTS Errors were recorded in 93.5% of the 31 cases surveyed. The number of errors recorded per case ranged from 0 to 8, with an average of 3.1 ± 2.1 errors (± SD). Overall, technical errors were most common (24.5%), followed by communication (22.4%), management/judgment (16.0%), and equipment (11.7%). The resident surgeon reported the most errors (52.1%), followed by the circulating nurse (31.9%), the attending surgeon (26.6%), and the anesthesiologist (14.9%). The attending and resident surgeons were most likely to report technical errors (52% and 30.6%, respectively), while anesthesiologists and circulating nurses mostly reported anesthesia errors (36%) and communication errors (50%), respectively. The overlap in reported errors was 20.3%. If this study had used only the surveys completed by the attending surgeon, as in prior studies, 72% of equipment errors, 90% of anesthesia and communication errors, and 100% of nursing errors would have been missed. In addition, it would have been concluded that errors occurred in only 45.2% of cases (rather than 93.5%) and that errors resulting in a delay occurred in 3.2% of cases instead of the 74.2% calculated using data from 4 team members. Compiled results from all team members yielded significant correlations between technical DOCs and prolonged hospital stays and reported and actual delays (p = 0

  3. Error and its meaning in forensic science.

    PubMed

    Christensen, Angi M; Crowder, Christian M; Ousley, Stephen D; Houck, Max M

    2014-01-01

    The discussion of "error" has gained momentum in forensic science in the wake of the Daubert guidelines and has intensified with the National Academy of Sciences' Report. Error has many different meanings, and too often, forensic practitioners themselves as well as the courts misunderstand scientific error and statistical error rates, often confusing them with practitioner error (or mistakes). Here, we present an overview of these concepts as they pertain to forensic science applications, discussing the difference between practitioner error (including mistakes), instrument error, statistical error, and method error. We urge forensic practitioners to ensure that potential sources of error and method limitations are understood and clearly communicated and advocate that the legal community be informed regarding the differences between interobserver errors, uncertainty, variation, and mistakes.

  4. Acceptability of GM foods among Pakistani consumers.

    PubMed

    Ali, Akhter; Rahut, Dil Bahadur; Imtiaz, Muhammad

    2016-04-01

    In Pakistan, the majority of consumers do not have information about genetically modified (GM) foods. In developing countries, and particularly in Pakistan, few studies have focused on consumers' acceptability of GM foods. Using a comprehensive primary dataset collected from 320 consumers in 2013 in Pakistan, this study analyzes the determinants of consumers' acceptability of GM foods. The data were analyzed by employing the bivariate probit model and censored least absolute deviation (CLAD) models. The empirical results indicated that urban consumers are more aware of GM foods than rural consumers. Acceptance of GM foods was higher among female consumers than among male consumers. In addition, older consumers were more willing to accept GM foods than young consumers. The acceptability of GM foods was also higher among wealthier households. Low price is the key factor leading to the acceptability of GM foods. The acceptability of GM foods also reduces the risks among Pakistani consumers.

  6. Meteor radar signal processing and error analysis

    NASA Astrophysics Data System (ADS)

    Kang, Chunmei

    Meteor wind radar systems are a powerful tool for study of the horizontal wind field in the mesosphere and lower thermosphere (MLT). While such systems have been operated for many years, virtually no literature has focused on radar system error analysis. The instrumental error may prevent scientists from drawing correct conclusions on geophysical variability. The radar system instrumental error comes from different sources, including hardware, software, and algorithms. Radar signal processing plays an important role in the radar system, and advanced signal processing algorithms may dramatically reduce the radar system errors. In this dissertation, radar system error propagation is analyzed and several advanced signal processing algorithms are proposed to optimize the performance of the radar system without increasing the instrument costs. The first part of this dissertation is the development of a time-frequency waveform detector, which is invariant to noise level and stable over a wide range of decay rates. This detector is proposed to discriminate the underdense meteor echoes from the background white Gaussian noise. The performance of this detector is examined using Monte Carlo simulations. The resulting probability of detection is shown to outperform the often-used power and energy detectors for the same probability of false alarm. Secondly, estimators to determine the Doppler shift, the decay rate, and direction of arrival (DOA) of meteors are proposed and evaluated. The performance of these estimators is compared with the analytically derived Cramer-Rao bound (CRB). The results show that the fast maximum likelihood (FML) estimator for determination of the Doppler shift and decay rate and the spatial spectral method for determination of the DOAs perform best among the estimators commonly used on other radar systems. For most cases, the mean square error (MSE) of the estimator meets the CRB above a 10 dB SNR. Thus meteor echoes with an estimated SNR below 10 dB are
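
    The dissertation's time-frequency detector is not reproduced here, but the flavor of the Monte Carlo comparison can be sketched in a few lines: an assumed underdense echo model (decaying complex exponential) is buried in white Gaussian noise, and the detection probability of a plain energy detector is compared with a filter matched to the assumed shape at a fixed false-alarm rate; all signal parameters below are invented for the illustration.

        import numpy as np

        rng = np.random.default_rng(0)
        N, trials = 64, 20000
        t = np.arange(N)
        echo = 1.0 * np.exp(-0.05 * t) * np.exp(1j * 0.3 * t)   # toy underdense echo

        noise = (rng.standard_normal((trials, N)) +
                 1j * rng.standard_normal((trials, N))) / np.sqrt(2)

        def stats(x):
            energy = np.sum(np.abs(x) ** 2, axis=1)   # energy detector statistic
            matched = np.abs(x @ np.conj(echo))       # statistic matched to the echo shape
            return energy, matched

        e0, m0 = stats(noise)                         # noise-only runs set the thresholds
        thr_e, thr_m = np.quantile(e0, 0.999), np.quantile(m0, 0.999)   # Pfa ~ 1e-3

        e1, m1 = stats(noise + echo)
        print(f"Pd energy  = {np.mean(e1 > thr_e):.3f}")
        print(f"Pd matched = {np.mean(m1 > thr_m):.3f}")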

  7. [On the applied medicolegal significance of the notion of "medical error"].

    PubMed

    Iurasov, V V; Smakhtin, R E

    2014-01-01

    The current practice of expert evaluation of the adequacy of the organization of medical aid introduces a new aspect of the notion of "medical error", which is widely employed in the medical profession, among lawyers, patients, and their relatives, as well as in the mass media. A universally accepted meaning of this notion has thus far not been proposed. The authors consider the medico-legal concept of "medical error", reconciling the contradictory opinions.

  8. Error in the honeybee waggle dance improves foraging flexibility.

    PubMed

    Okada, Ryuichi; Ikeno, Hidetoshi; Kimura, Toshifumi; Ohashi, Mizue; Aonuma, Hitoshi; Ito, Etsuro

    2014-02-26

    The honeybee waggle dance communicates the location of profitable food sources, usually with a certain degree of error in the directional information ranging from 10-15° at the lower margin. We simulated one-day colonial foraging to address the biological significance of information error in the waggle dance. When the error was 30° or larger, the waggle dance was not beneficial. If the error was 15°, the waggle dance was beneficial when the food sources were scarce. When the error was 10° or smaller, the waggle dance was beneficial under all the conditions tested. Our simulation also showed that precise information (0-5° error) yielded great success in finding feeders, but also caused failures at finding new feeders, i.e., a high-risk high-return strategy. The observation that actual bees perform the waggle dance with an error of 10-15° might reflect, at least in part, the maintenance of a successful yet risky foraging trade-off.
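
    A toy Monte Carlo in the same spirit (a deliberately simplified stand-in for the authors' one-day foraging simulation) shows the trade-off: with small dance error nearly all recruits reach the advertised feeder, while larger error spreads recruits away from it, which is what allows occasional discovery of new feeders; the Gaussian error model and the 10 degree angular window are assumptions.

        import numpy as np

        rng = np.random.default_rng(1)

        def on_target_rate(error_deg, n_bees=100000, window_deg=10.0):
            """Fraction of recruits landing within an angular window around the
            advertised feeder, under a Gaussian model of dance error."""
            errors = rng.normal(0.0, error_deg, n_bees)
            return np.mean(np.abs(errors) <= window_deg / 2)

        for e in (0, 5, 10, 15, 30):
            print(f"dance error {e:>2} deg -> recruits on advertised feeder: "
                  f"{on_target_rate(e):.2f}")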

  9. Comparative analysis of planetary laser ranging concepts

    NASA Astrophysics Data System (ADS)

    Dirkx, D.; Bauer, S.; Noomen, R.; Vermeersen, B. L. A.; Visser, P. N.

    2014-12-01

    Laser ranging is an emerging technology for tracking interplanetary missions, offering improved range accuracy and precision (mm-cm) compared to existing DSN tracking. The ground segment uses existing Satellite Laser Ranging (SLR) technology, whereas the space segment is modified with an active system. In a one-way system, such as that currently being used on the LRO spacecraft (Zuber et al., 2010), only an active detector is required on the spacecraft. For a two-way system, such as that tested by using the laser altimeter system on the MESSENGER spacecraft en route to Mercury (Smith et al., 2006), a laser transmitter system is additionally placed on the space segment, which will asynchronously fire laser pulses towards the ground stations. Although the one-way system requires less hardware, clock errors on both the space and ground segments will accumulate over time, polluting the range measurements. For a two-way system, the range measurements are only sensitive to clock errors integrated over the two-way light time. We investigate the performance of both one- and two-way laser range systems by simulating their operation. We generate realizations of clock error time histories from Allan variance profiles, and use them to create range measurement error profiles. We subsequently perform the orbit determination process from these data to quantify the system's performance. For our simulations, we use two test cases: a lunar orbiter similar to LRO and a Phobos lander similar to the Phobos Laser Ranging concept (Turyshev et al., 2010). For the lunar orbiter, we include an empirical model for unmodelled non-gravitational accelerations in our truth model to include errors in the dynamics. We include the estimation of clock parameters over a number of arc lengths for our simulations of the one-way range system and use a variety of state arc durations for the lunar orbiter simulations. We perform Monte Carlo simulations and generate true error distributions for both
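
    A minimal sketch of the clock-error generation step, assuming the simplest case of a pure white-frequency-noise clock, for which the Allan variance is h0/(2*tau); real oscillator profiles combine several noise types, and the Allan deviation value below is illustrative, not taken from the paper.

        import numpy as np

        rng = np.random.default_rng(2)
        c = 299_792_458.0                           # speed of light [m/s]

        def one_way_range_error(adev_1s, duration, dt):
            """Clock-induced one-way range error for a white-frequency-noise clock
            whose Allan deviation at 1 s is adev_1s (other noise types omitted)."""
            n = int(duration / dt)
            h0 = 2.0 * adev_1s ** 2                 # white FM: AVAR(1 s) = h0 / 2
            sigma_y = np.sqrt(h0 / (2.0 * dt))      # per-sample fractional frequency
            y = rng.normal(0.0, sigma_y, n)         # fractional frequency history
            x = np.cumsum(y) * dt                   # accumulated time error [s]
            return c * x                            # apparent range error [m]

        err = one_way_range_error(adev_1s=1e-13, duration=86400.0, dt=10.0)
        print(f"range error after one day: {err[-1] * 100:.2f} cm")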

  10. Acceptance in Romantic Relationships: The Frequency and Acceptability of Partner Behavior Inventory

    ERIC Educational Resources Information Center

    Doss, Brian D.; Christensen, Andrew

    2006-01-01

    Despite the recent emphasis on acceptance in romantic relationships, no validated measure of relationship acceptance presently exists. To fill this gap, the 20-item Frequency and Acceptability of Partner Behavior Inventory (FAPBI; A. Christensen & N. S. Jacobson, 1997) was created to assess separately the acceptability and frequency of both…

  11. Acceptability of reductive interventions for the control of inappropriate child behavior.

    PubMed

    Witt, J C; Robbins, J R

    1985-03-01

    Teacher attitudes about the acceptability of classroom intervention strategies were evaluated in two experiments. In both, teachers read descriptions of an intervention that was applied to a child with a behavior problem. In Experiment 1, an evaluation of six interventions for reducing inappropriate behavior suggested that one was highly acceptable (DRO), one was highly unacceptable (corporal punishment), and four ranged from mildly acceptable to mildly unacceptable (DRL, reprimands, time-out, and staying after school). In Experiment 2, the acceptability of the same intervention (staying after school) was evaluated as a function of who implemented it (teacher vs. principal). Analyses suggested that the teacher-implemented intervention was perceived as more acceptable. In both experiments, interventions were rated as less acceptable by highly experienced teachers versus those newer to the teaching profession. In addition, there was a trend for the acceptability of an intervention to vary as a function of the severity of the behavior problem to which it was applied. PMID:3973252

  12. Radar ranging to Ganymede and Callisto

    NASA Astrophysics Data System (ADS)

    Harmon, J. K.; Ostro, S. J.; Chandler, J. F.; Hudson, R. S.

    1994-03-01

    Arecibo observations from 1992 February to March have yielded the first successful radar range measurements to the Galilean satellites. Round-trip time delays were measured for Ganymede and Callisto with accuracies of 20 to 50 microseconds (3 to 7 km) and 90 microseconds (14 km), respectively. Both satellites showed round-trip delay residuals (relative to the E-3 ephemeris) of about a millisecond, most of which can be attributed to errors in the predicted along-track positions (orbital phases). Using a simple model that assumed that all of the ephemeris error was due to constant orbital phase and Jupiter range errors, we estimate that Ganymede was leading its ephemeris by 122 +/- 4 km, Callisto was lagging its ephemeris by 307 +/- 14 km, and Jupiter was 11 +/- 4 km more distant than predicted by the PEP740 planetary ephemeris.

  14. Error-disturbance uncertainty relations studied in neutron optics

    NASA Astrophysics Data System (ADS)

    Sponar, Stephan; Sulyok, Georg; Demirel, Bulent; Hasegawa, Yuji

    2016-09-01

    Heisenberg's uncertainty principle is probably the most famous statement of quantum physics, and its essential aspects are well described by a formulation in terms of standard deviations. However, a naive Heisenberg-type error-disturbance relation is not valid. An alternative universally valid relation was derived by Ozawa in 2003. Though universally valid, Ozawa's relation is not optimal. Recently, Branciard has derived a tight error-disturbance uncertainty relation (EDUR), describing the optimal trade-off between error and disturbance. Here, we report a neutron-optical experiment that records the error of a spin-component measurement, as well as the disturbance caused on another spin-component, to test EDURs. We demonstrate that Heisenberg's original EDUR is violated, and that Ozawa's and Branciard's EDURs are valid, in a wide range of experimental parameters, applying a new measurement procedure referred to as the two-state method.

  15. Sensitivity of SLR baselines to errors in Earth orientation

    NASA Technical Reports Server (NTRS)

    Smith, D. E.; Christodoulidis, D. C.

    1984-01-01

    The sensitivity of interstation distances derived from Satellite Laser Ranging (SLR) to errors in Earth orientation is discussed. An analysis experiment is performed in which a known polar motion error is imposed on all of the arcs used over the analysis interval. The effect of the averaging of the errors over the tracking periods of individual sites is assessed. Baselines between stations that are supported by a global network of tracking stations are only marginally affected by errors in Earth orientation. The global network of stations retains its integrity even in the presence of systematic changes in the coordinate frame. The effect of these coordinate frame changes on the relative locations of the stations is minimal.

  16. GP-B error modeling and analysis

    NASA Technical Reports Server (NTRS)

    Hung, J. C.

    1982-01-01

    Individual source errors and their effects on the accuracy of the Gravity Probe B (GP-B) experiment were investigated. Emphasis was placed on: (1) the refinement of source error identification and classifications of error according to their physical nature; (2) error analysis for the GP-B data processing; and (3) measurement geometry for the experiment.

  17. Error estimation for ORION baseline vector determination

    NASA Technical Reports Server (NTRS)

    Wu, S. C.

    1980-01-01

    Effects of error sources on Operational Radio Interferometry Observing Network (ORION) baseline vector determination are studied. Partial derivatives of delay observations with respect to each error source are formulated. Covariance analysis is performed to estimate the contribution of each error source to baseline vector error. System design parameters such as antenna sizes, system temperatures and provision for dual frequency operation are discussed.
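
    A schematic of the covariance analysis step, with a made-up Jacobian and made-up error-source sigmas (the report's actual partials and system parameters are not reproduced here): the partial derivatives are collected into a matrix and the error-source covariance is propagated linearly into the baseline vector covariance.

        import numpy as np

        # Jacobian of the estimated baseline vector (3 components) with respect
        # to four error-source parameters; the numbers are purely illustrative.
        J = np.array([[0.8, 0.1, 0.0, 0.3],
                      [0.2, 0.9, 0.1, 0.0],
                      [0.0, 0.2, 1.1, 0.4]])      # cm per unit parameter error

        # A priori covariance of the error sources (assumed uncorrelated here).
        P = np.diag([1.0, 0.5, 0.3, 0.8]) ** 2

        cov_baseline = J @ P @ J.T                 # propagated covariance
        print("baseline sigmas [cm]:", np.sqrt(np.diag(cov_baseline)))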

  18. A simple double error correcting BCH codes

    NASA Astrophysics Data System (ADS)

    Sinha, V.

    1983-07-01

    With the availability of various cost-effective digital hardware components, error-correcting codes can be realized in hardware more simply than was hitherto possible. Instead of computing error locations in BCH decoding by the Berlekamp algorithm, syndrome-to-error-location mapping using an EPROM for a double-error-correcting BCH code is described. The processing is parallel instead of serial. Possible applications are given.
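
    The EPROM idea can be imitated in software: enumerate every correctable error pattern of a double-error-correcting BCH code and store the syndrome-to-error-location map in a table that decoding then consults, with no Berlekamp iteration. The sketch below does this for the binary (15,7) BCH code over GF(16) with primitive polynomial x^4 + x + 1; it illustrates the mapping, not the paper's hardware design.

        # GF(16) antilog table from the primitive polynomial x^4 + x + 1
        EXP = [0] * 15
        x = 1
        for i in range(15):
            EXP[i] = x
            x <<= 1
            if x & 0x10:
                x ^= 0x13                     # reduce modulo x^4 + x + 1

        def syndromes(err_positions):
            """(S1, S3) for a binary error pattern in a length-15 BCH codeword."""
            s1 = s3 = 0
            for p in err_positions:
                s1 ^= EXP[p % 15]
                s3 ^= EXP[(3 * p) % 15]
            return s1, s3

        # Build the "EPROM": every error pattern of weight <= 2 gets a table entry.
        table = {syndromes(()): ()}
        for i in range(15):
            table[syndromes((i,))] = (i,)
            for j in range(i + 1, 15):
                table[syndromes((i, j))] = (i, j)
        assert len(table) == 1 + 15 + 105     # all syndromes distinct (d_min >= 5)

        print(table[syndromes((2, 11))])      # -> (2, 11): locations found by lookup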

  19. Discretization vs. Rounding Error in Euler's Method

    ERIC Educational Resources Information Center

    Borges, Carlos F.

    2011-01-01

    Euler's method for solving initial value problems is an excellent vehicle for observing the relationship between discretization error and rounding error in numerical computation. Reductions in stepsize, in order to decrease discretization error, necessarily increase the number of steps and so introduce additional rounding error. The problem is…
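
    The trade-off is easy to reproduce; the sketch below (an illustration, not taken from the article) applies Euler's method to y' = y on [0, 1] in single and double precision. In single precision the total error eventually grows again as accumulated rounding overtakes the shrinking discretization error.

        import numpy as np

        def euler_error(n_steps, dtype):
            """|y_N - e| for Euler's method on y' = y, y(0) = 1, over [0, 1]."""
            h = dtype(1.0 / n_steps)
            y = dtype(1.0)
            for _ in range(n_steps):
                y = y + h * y
            return abs(float(y) - np.e)

        for n in (10, 100, 1000, 10_000, 100_000, 1_000_000):
            print(f"n = {n:>9}  float32 err = {euler_error(n, np.float32):.2e}"
                  f"   float64 err = {euler_error(n, np.float64):.2e}")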

  20. Error Analysis in the Introductory Physics Laboratory.

    ERIC Educational Resources Information Center

    Deacon, Christopher G.

    1992-01-01

    Describes two simple methods of error analysis: (1) combining errors in the measured quantities; and (2) calculating the error or uncertainty in the slope of a straight-line graph. Discusses significance of the error in the comparison of experimental results with some known value. (MDH)
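
    The second method mentioned, the uncertainty in the slope of a straight-line graph, follows from the textbook least-squares formulas; here is a small sketch (with invented measurements) that estimates the y-scatter from the fit residuals and propagates it into the slope.

        import numpy as np

        def slope_with_uncertainty(x, y):
            """Least-squares slope and its standard error for a straight-line fit,
            with the y-scatter estimated from the fit residuals."""
            n = len(x)
            delta = n * np.sum(x**2) - np.sum(x)**2
            m = (n * np.sum(x * y) - np.sum(x) * np.sum(y)) / delta
            b = (np.sum(x**2) * np.sum(y) - np.sum(x) * np.sum(x * y)) / delta
            resid = y - (m * x + b)
            sigma_y = np.sqrt(np.sum(resid**2) / (n - 2))
            return m, sigma_y * np.sqrt(n / delta)

        x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
        y = np.array([0.1, 2.1, 3.9, 6.2, 7.8])       # illustrative measurements
        m, sm = slope_with_uncertainty(x, y)
        print(f"slope = {m:.3f} +/- {sm:.3f}")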

  1. Medical Errors: Tips to Help Prevent Them

    MedlinePlus

    Medical errors are one of the nation's ... single most important way you can help to prevent errors is to be an active member of ...

  2. Error-Related Psychophysiology and Negative Affect

    ERIC Educational Resources Information Center

    Hajcak, G.; McDonald, N.; Simons, R.F.

    2004-01-01

    The error-related negativity (ERN/Ne) and error positivity (Pe) have been associated with error detection and response monitoring. More recently, heart rate (HR) and skin conductance (SC) have also been shown to be sensitive to the internal detection of errors. An enhanced ERN has consistently been observed in anxious subjects and there is some…

  3. Analysis of ionospheric refraction error corrections for GRARR systems

    NASA Technical Reports Server (NTRS)

    Mallinckrodt, A. J.; Parker, H. C.; Berbert, J. H.

    1971-01-01

    A determination is presented of the ionospheric refraction correction requirements for the Goddard range and range rate (GRARR) S-band, modified S-band, very high frequency (VHF), and modified VHF systems. The relationships within these four systems are analyzed to show that the refraction corrections are the same for all four systems and to clarify the group and phase nature of these corrections. The analysis is simplified by recognizing that the range rate is equivalent to a carrier phase range change measurement. The equations for the range errors are given.

  4. Study of an instrument for sensing errors in a telescope wavefront

    NASA Technical Reports Server (NTRS)

    Golden, L. J.; Shack, R. V.; Slater, D. N.

    1973-01-01

    Partial results are presented of theoretical and experimental investigations of different focal plane sensor configurations for determining the error in a telescope wavefront. Both the coarse-range and fine-range sensors are used in the experimentation. The design of a wavefront error simulator is presented, along with the Hartmann test, the shearing polarization interferometer, the Zernike test, and the Zernike polarization test.

  5. ERROR ANALYSIS OF COMPOSITE SHOCK INTERACTION PROBLEMS.

    SciTech Connect

    Lee, T.; Mu, Y.; Zhao, M.; Glimm, J.; Li, X.; Ye, K.

    2004-07-26

    We propose statistical models of uncertainty and error in numerical solutions. To represent errors efficiently in shock physics simulations we propose a composition law. The law allows us to estimate errors in the solutions of composite problems in terms of the errors from simpler ones as discussed in a previous paper. In this paper, we conduct a detailed analysis of the errors. One of our goals is to understand the relative magnitude of the input uncertainty vs. the errors created within the numerical solution. In more detail, we wish to understand the contribution of each wave interaction to the errors observed at the end of the simulation.

  6. Report on errors in pretransfusion testing from a tertiary care center: A step toward transfusion safety

    PubMed Central

    Sidhu, Meena; Meenia, Renu; Akhter, Naveen; Sawhney, Vijay; Irm, Yasmeen

    2016-01-01

    Introduction: Errors in the process of pretransfusion testing for blood transfusion can occur at any stage, from collection of the sample to administration of the blood component. The present study was conducted to analyze the errors that threaten patients’ transfusion safety and the actual harm/serious adverse events that occurred to patients due to these errors. Materials and Methods: The prospective study was conducted in the Department of Transfusion Medicine, Shri Maharaja Gulab Singh Hospital, Government Medical College, Jammu, India from January 2014 to December 2014, a period of 1 year. Errors were defined as any deviation from established policies and standard operating procedures. A near-miss event was defined as an error that did not reach the patient. Location and time of occurrence of the events/errors were also noted. Results: A total of 32,672 requisitions for the transfusion of blood and blood components were received for typing and cross-matching. Out of these, 26,683 products were issued to the various clinical departments. A total of 2,229 errors were detected over the period of 1 year. Near-miss events constituted 53% of the errors, and actual harmful events due to errors occurred in 0.26% of the patients. The most frequent errors in clinical services were sample labeling errors (2.4% of all requisitions received), inappropriate requests for blood components (2%), and information on requisition forms not matching that on the sample (1.5%). In transfusion services, the most common event was accepting a sample in error, with a frequency of 0.5% of all requisitions. ABO-incompatible hemolytic reactions were the most frequent harmful event, with a frequency of 2.2 per 10,000 transfusions. Conclusion: Sample labeling errors, inappropriate requests, and samples received in error were the most frequent high-risk errors. PMID:27011670

  7. Acoustic evidence for phonologically mismatched speech errors.

    PubMed

    Gormley, Andrea

    2015-04-01

    Speech errors are generally said to accommodate to their new phonological context. This accommodation has been validated by several transcription studies. The transcription methodology is not the best choice for detecting errors at this level, however, as this type of error can be difficult to perceive. This paper presents an acoustic analysis of speech errors that uncovers non-accommodated, or mismatch, errors. A mismatch error is a sub-phonemic error that results in an incorrect surface phonology. This type of error could arise during the processing of phonological rules, or it could be made at the motor level of implementation. The results of this work have important implications for both experimental and theoretical research. For experimentalists, it validates the tools used for error induction and the acoustic determination of errors free of perceptual bias. For theorists, this methodology can be used to test the nature of the processes proposed in language production.

  8. Robot learning and error correction

    NASA Technical Reports Server (NTRS)

    Friedman, L.

    1977-01-01

    A model of robot learning is described that associates previously unknown perceptions with the sensed known consequences of robot actions. For these actions, both the categories of outcomes and the corresponding sensory patterns are incorporated in a knowledge base by the system designer. Thus the robot is able to predict the outcome of an action and compare the expectation with the experience. New knowledge about what to expect in the world may then be incorporated by the robot in a pre-existing structure when it detects either accordance or discrepancy between a predicted consequence and experience. Errors committed during plan execution are detected by the same type of comparison process, and learning may be applied to avoiding the errors.

  9. Negligence, genuine error, and litigation.

    PubMed

    Sohn, David H

    2013-01-01

    Not all medical injuries are the result of negligence. In fact, most medical injuries are the result either of the inherent risk in the practice of medicine, or due to system errors, which cannot be prevented simply through fear of disciplinary action. This paper will discuss the differences between adverse events, negligence, and system errors; the current medical malpractice tort system in the United States; and review current and future solutions, including medical malpractice reform, alternative dispute resolution, health courts, and no-fault compensation systems. The current political environment favors investigation of non-cap tort reform remedies; investment into more rational oversight systems, such as health courts or no-fault systems may reap both quantitative and qualitative benefits for a less costly and safer health system. PMID:23426783

  10. Tilt error in cryospheric surface radiation measurements at high latitudes: a model study

    NASA Astrophysics Data System (ADS)

    Bogren, W. S.; Burkhart, J. F.; Kylling, A.

    2015-08-01

    We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in-situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response foreoptic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high-latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1, 3, and 5° can respectively introduce up to 2.6, 7.7, and 12.8% error into the measured irradiance, and similar errors in the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can also persist in integrated daily irradiance and albedo.
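
    The dominance of the direct component can be seen from geometry alone. For a sensor tilted toward the sun, the direct-beam error is cos(SZA - tilt)/cos(SZA) - 1; the sketch below evaluates this worst-case, direct-only expression at SZA = 60 degrees, giving values slightly larger than the paper's totals (which include a diluting diffuse component).

        import numpy as np

        def direct_tilt_error(sza_deg, tilt_deg):
            """Relative error in measured direct irradiance when the sensor is
            tilted toward the sun (worst case, direct beam only, no diffuse)."""
            sza, tilt = np.radians(sza_deg), np.radians(tilt_deg)
            return np.cos(sza - tilt) / np.cos(sza) - 1.0

        for tilt in (1, 3, 5):
            print(f"tilt {tilt} deg, SZA 60 deg: "
                  f"{100 * direct_tilt_error(60, tilt):.1f} % error")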

  11. Human error in aviation operations

    NASA Technical Reports Server (NTRS)

    Billings, C. E.; Lauber, J. K.; Cooper, G. E.

    1974-01-01

    This report is a brief description of research being undertaken by the National Aeronautics and Space Administration. The project is designed to seek out factors in the aviation system which contribute to human error, and to search for ways of minimizing the potential threat posed by these factors. The philosophy and assumptions underlying the study are discussed, together with an outline of the research plan.

  12. Heuristic edge detector for noisy range images

    NASA Astrophysics Data System (ADS)

    Wu, Kung C.

    1994-10-01

    This paper presents a heuristic edge detector for extracting wireframe representations of objects from noisy range data. Jump and roof edges were detected successfully from range images containing additive white Gaussian noise with a standard deviation equal to as high as 1.2% of the measured range values. This represents an appreciable amount of noise since approximately 5% of the errors are greater than 12 cm and 32% of errors are greater than 6 cm at a distance of 5 meters. The noise insensitive characteristic of the heuristic edge detector enables low cost range scanners to be used for practical industrial applications. The availability of low cost active vision systems greatly broadens the horizon of integrating robotics vision systems to manufacturing automation.

  13. Resources and Long-Range Forecasts

    ERIC Educational Resources Information Center

    Smith, Waldo E.

    1973-01-01

    The author argues that forecasts of quick depletion of resources in the environment as a result of overpopulation and increased usage may not be free from error. Ignorance still exists in understanding the recovery mechanisms of nature. Long-range forecasts are likely to be wrong in such situations. (PS)

  14. Clinical review: Medication errors in critical care

    PubMed Central

    Moyen, Eric; Camiré, Eric; Stelfox, Henry Thomas

    2008-01-01

    Medication errors in critical care are frequent, serious, and predictable. Critically ill patients are prescribed twice as many medications as patients outside of the intensive care unit (ICU) and nearly all will suffer a potentially life-threatening error at some point during their stay. The aim of this article is to provide a basic review of medication errors in the ICU, identify risk factors for medication errors, and suggest strategies to prevent errors and manage their consequences. PMID:18373883

  15. Treatment acceptability among Mexican American parents.

    PubMed

    Borrego, Joaquin; Ibanez, Elizabeth S; Spendlove, Stuart J; Pemberton, Joy R

    2007-09-01

    There is a void in the literature with regard to Hispanic parents' views about common interventions for children with behavior problems. The purpose of this study was to examine the treatment acceptability of child management techniques in a Mexican American sample. Parents' acculturation was also examined to determine if it would account for differences in treatment acceptability. Mexican American parents found response cost, a punishment-based technique, more acceptable than positive reinforcement-based techniques (e.g., differential attention). Results suggest that Mexican American parents' acculturation has little impact on acceptability of child management interventions. No association was found between mothers' acculturation and treatment acceptability. However, more acculturated Mexican American fathers viewed token economy as more acceptable than less acculturated fathers. Results are discussed in the context of clinical work and research with Mexican Americans.

  16. Error control in the GCF: An information-theoretic model for error analysis and coding

    NASA Technical Reports Server (NTRS)

    Adeyemi, O.

    1974-01-01

    The structure of data-transmission errors within the Ground Communications Facility is analyzed in order to provide error control (both forward error correction and feedback retransmission) for improved communication. Emphasis is placed on constructing a theoretical model of errors and obtaining from it all the relevant statistics for error control. No specific coding strategy is analyzed, but references to the significance of certain error pattern distributions, as predicted by the model, to error correction are made.

  17. An investigation of error correcting techniques for OMV data

    NASA Technical Reports Server (NTRS)

    Ingels, Frank; Fryer, John

    1992-01-01

    Papers on the following topics are presented: considerations of testing the Orbital Maneuvering Vehicle (OMV) system with CLASS; OMV CLASS test results (first go around); equivalent system gain available from R-S encoding versus a desire to lower the power amplifier from 25 watts to 20 watts for OMV; command word acceptance/rejection rates for OMV; a memo concerning energy-to-noise ratio for the Viterbi-BSC Channel and the impact of Manchester coding loss; and an investigation of error correcting techniques for OMV and Advanced X-ray Astrophysics Facility (AXAF).

  18. The nearest neighbor and the bayes error rates.

    PubMed

    Loizou, G; Maybank, S J

    1987-02-01

    The (k, l) nearest neighbor method of pattern classification is compared to the Bayes method. If the two acceptance rates are equal then the asymptotic error rates satisfy the inequalities E_{k,l+1} ≤ E*(λ) ≤ E_{k,l} ≤ d·E*(λ), where d is a function of k, l, and the number of pattern classes, and λ is the reject threshold for the Bayes method. An explicit expression for d is given which is optimal in the sense that for some probability distributions E_{k,l} and d·E*(λ) are equal. PMID:21869395
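
    The paper's (k, l) rule and its constant d are not reproduced here, but the flavor of such bounds is easy to check numerically for the plain 1-NN rule, whose asymptotic error is classically bounded by 2E*(1 - E*); the two-Gaussian problem below has a known Bayes error, and all settings are invented for the illustration.

        import math
        import numpy as np

        rng = np.random.default_rng(3)

        def sample(n):
            labels = rng.integers(0, 2, n)
            x = rng.normal(2.0 * labels - 1.0, 1.0)   # class means at -1 and +1
            return x, labels

        x_tr, y_tr = sample(20000)
        x_te, y_te = sample(5000)

        order = np.argsort(x_tr)                      # 1-NN search in one dimension
        xs, ys = x_tr[order], y_tr[order]
        idx = np.clip(np.searchsorted(xs, x_te), 1, len(xs) - 1)
        nearer = np.where(np.abs(x_te - xs[idx - 1]) < np.abs(x_te - xs[idx]),
                          idx - 1, idx)
        err_nn = np.mean(ys[nearer] != y_te)

        bayes = 0.5 * math.erfc(1 / math.sqrt(2))     # exact Bayes error here
        print(f"1-NN error {err_nn:.3f}  vs  Bayes {bayes:.3f}"
              f"  and bound {2 * bayes * (1 - bayes):.3f}")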

  19. Acceptability of blood and blood substitutes.

    PubMed

    Ferguson, E; Prowse, C; Townsend, E; Spence, A; Hilten, J A van; Lowe, K

    2008-03-01

    Alternatives to donor blood have been developed in part to meet increasing demand. However, new biotechnologies are often associated with increased perceptions of risk and low acceptance. This paper reviews developments of alternatives and presents data, from a field-based experiment in the UK and Holland, on the risks and acceptance of donor blood and alternatives (chemical, genetically modified and bovine). UK groups perceived all substitutes as riskier than the Dutch groups did. There is a negative association between perceived risk and acceptability. Solutions to increasing acceptance are discussed in terms of implicit attitudes, product naming and emotional responses.

  20. 7 CFR 932.32 - Acceptance.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE OLIVES GROWN IN CALIFORNIA Order Regulating Handling Olive Administrative Committee § 932.32 Acceptance. Any person selected by the...

  1. 7 CFR 932.32 - Acceptance.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE OLIVES GROWN IN CALIFORNIA Order Regulating Handling Olive Administrative Committee § 932.32 Acceptance. Any person selected by the...

  2. 7 CFR 932.32 - Acceptance.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE OLIVES GROWN IN CALIFORNIA Order Regulating Handling Olive Administrative Committee § 932.32 Acceptance. Any person selected by the...

  3. 7 CFR 932.32 - Acceptance.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE OLIVES GROWN IN CALIFORNIA Order Regulating Handling Olive Administrative Committee § 932.32 Acceptance. Any person selected by the...

  4. 7 CFR 932.32 - Acceptance.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE OLIVES GROWN IN CALIFORNIA Order Regulating Handling Olive Administrative Committee § 932.32 Acceptance. Any person selected by the...

  5. 7 CFR 1207.323 - Acceptance.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE POTATO RESEARCH AND PROMOTION PLAN Potato Research and Promotion Plan National Potato Promotion Board § 1207.323 Acceptance. Each...

  6. 7 CFR 1207.323 - Acceptance.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE POTATO RESEARCH AND PROMOTION PLAN Potato Research and Promotion Plan National Potato Promotion Board § 1207.323 Acceptance. Each...

  7. 7 CFR 1207.323 - Acceptance.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE POTATO RESEARCH AND PROMOTION PLAN Potato Research and Promotion Plan National Potato Promotion Board § 1207.323 Acceptance. Each...

  8. 7 CFR 1207.323 - Acceptance.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE POTATO RESEARCH AND PROMOTION PLAN Potato Research and Promotion Plan National Potato Promotion Board § 1207.323 Acceptance. Each...

  9. 7 CFR 1207.323 - Acceptance.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE POTATO RESEARCH AND PROMOTION PLAN Potato Research and Promotion Plan National Potato Promotion Board § 1207.323 Acceptance. Each...

  10. 2013 SYR Accepted Poster Abstracts.

    PubMed

    2013-01-01

    SYR 2013 Accepted Poster abstracts: 1. Benefits of Yoga as a Wellness Practice in a Veterans Affairs (VA) Health Care Setting: If You Build It, Will They Come? 2. Yoga-based Psychotherapy Group With Urban Youth Exposed to Trauma. 3. Embodied Health: The Effects of a Mind-Body Course for Medical Students. 4. Interoceptive Awareness and Vegetable Intake After a Yoga and Stress Management Intervention. 5. Yoga Reduces Performance Anxiety in Adolescent Musicians. 6. Designing and Implementing a Therapeutic Yoga Program for Older Women With Knee Osteoarthritis. 7. Yoga and Life Skills Eating Disorder Prevention Among 5th Grade Females: A Controlled Trial. 8. A Randomized, Controlled Trial Comparing the Impact of Yoga and Physical Education on the Emotional and Behavioral Functioning of Middle School Children. 9. Feasibility of a Multisite, Community based Randomized Study of Yoga and Wellness Education for Women With Breast Cancer Undergoing Chemotherapy. 10. A Delphi Study for the Development of Protocol Guidelines for Yoga Interventions in Mental Health. 11. Impact Investigation of Breathwalk Daily Practice: Canada-India Collaborative Study. 12. Yoga Improves Distress, Fatigue, and Insomnia in Older Veteran Cancer Survivors: Results of a Pilot Study. 13. Assessment of Kundalini Mantra and Meditation as an Adjunctive Treatment With Mental Health Consumers. 14. Kundalini Yoga Therapy Versus Cognitive Behavior Therapy for Generalized Anxiety Disorder and Co-Occurring Mood Disorder. 15. Baseline Differences in Women Versus Men Initiating Yoga Programs to Aid Smoking Cessation: Quitting in Balance Versus QuitStrong. 16. Pranayam Practice: Impact on Focus and Everyday Life of Work and Relationships. 17. Participation in a Tailored Yoga Program is Associated With Improved Physical Health in Persons With Arthritis. 18. Effects of Yoga on Blood Pressure: Systematic Review and Meta-analysis. 19. A Quasi-experimental Trial of a Yoga based Intervention to Reduce Stress and

  12. Theoretical versus operational state-dependent error growth

    NASA Astrophysics Data System (ADS)

    Khade, V.; Hansen, J.

    2003-04-01

    The state dependence of singular values is well known, and shows that global measures of uncertainty growth like Lyapunov exponents are irrelevant when it comes to forecasts over time scales of interest. But the studies to date do not tell the whole story. It is found that singular-vector error growth in the perfect model scenario, with perfect initial conditions and isotropic uncertainty, is markedly different from that for imperfect initial conditions and operationally obtainable uncertainty (for example, when data is assimilated). When model error is included, the picture changes yet again. State-dependent error growth in a range of scenarios has been compared and contrasted using a toy model, giving a picture of the large range of issues impacting the predictability problem.

  13. Phonologic error distributions in the Iowa-Nebraska Articulation Norms Project: consonant singletons.

    PubMed

    Smit, A B

    1993-06-01

    The errors on consonant singletons made by children in the Iowa-Nebraska Articulation Norms Project (Smit, Hand, Freilinger, Bernthal, & Bird, 1990) were tabulated by age range and frequency. The prominent error types can usually be described as phonological processes, but there are other common errors as well, especially distortions of liquids and fricatives. Moreover, some of the relevant phonological processes appear to be restricted in the range of consonants or word-positions to which they apply. A metric based on frequency of use is proposed for determining that an error type is or is not atypical. Changes in frequency of error types over the age range are examined to determine if certain atypical error types are likely to be developmental, that is, likely to self-correct as the child matures. Finally, the clinical applications of these data for evaluation and intervention are explored.

  14. Ranging/tracking system for proximity operations

    NASA Technical Reports Server (NTRS)

    Nilsen, P.; Udalov, S.

    1982-01-01

    The hardware development and testing phase of a hand-held radar for ranging and tracking during Shuttle proximity operations is considered. The radar is to measure range to a 3 sigma accuracy of 1 m (3.28 ft) to a maximum range of 1850 m (6000 ft), and velocity to a 3 sigma accuracy of 0.03 m/s (0.1 ft/s). Size and weight are similar to those of the hand-held radars frequently seen in use by motorcycle police officers. Meeting these goals for a target in free space proved very difficult in the testing program; however, at a range of approximately 700 m, the 3 sigma range error was found to be 0.96 m. It is felt that much of this error is due to clutter in the test environment. As an example of the velocity accuracy, at a range of 450 m, a 3 sigma velocity error of 0.02 m/s was measured. The principles of the radar and recommended changes to its design are given. Analyses performed in support of the design process, the actual circuit diagrams, and the software listing are included.

  15. In acceptance we trust? Conceptualising acceptance as a viable approach to NGO security management.

    PubMed

    Fast, Larissa A; Freeman, C Faith; O'Neill, Michael; Rowley, Elizabeth

    2013-04-01

    This paper documents current understanding of acceptance as a security management approach and explores issues and challenges non-governmental organisations (NGOs) confront when implementing an acceptance approach to security management. It argues that the failure of organisations to systematise and clearly articulate acceptance as a distinct security management approach and a lack of organisational policies and procedures concerning acceptance hinder its efficacy as a security management approach. The paper identifies key and cross-cutting components of acceptance that are critical to its effective implementation in order to advance a comprehensive and systematic concept of acceptance. The key components of acceptance illustrate how organisational and staff functions affect positively or negatively an organisation's acceptance, and include: an organisation's principles and mission, communications, negotiation, programming, relationships and networks, stakeholder and context analysis, staffing, and image. The paper contends that acceptance is linked not only to good programming, but also to overall organisational management and structures. PMID:23278470

  16. Righting errors in writing errors: the Wing and Baddeley (1980) spelling error corpus revisited.

    PubMed

    Wing, Alan M; Baddeley, Alan D

    2009-03-01

    We present a new analysis of our previously published corpus of handwriting errors (slips) using the proportional allocation algorithm of Machtynger and Shallice (2009). As previously, the proportion of slips is greater in the middle of the word than at the ends; however, in contrast to before, the proportion is greater at the end than at the beginning of the word. The findings are consistent with the hypothesis of memory effects in a graphemic output buffer.

  17. Neuromotor Noise, Error Tolerance and Velocity-Dependent Costs in Skilled Performance

    PubMed Central

    Sternad, Dagmar; Abe, Masaki O.; Hu, Xiaogang; Müller, Hermann

    2011-01-01

    In motor tasks with redundancy, neuromotor noise can lead to variations in execution while achieving relative invariance in the result. The present study examined whether humans find solutions that are tolerant to intrinsic noise. Using a throwing task in a virtual set-up where an infinite set of angle and velocity combinations at ball release yield throwing accuracy, our computational approach permitted quantitative predictions about solution strategies that are tolerant to noise. Based on a mathematical model of the task, expected results were computed and provided predictions about error-tolerant strategies (Hypothesis 1). As strategies can take on a large range of velocities, a second hypothesis was that subjects select strategies that minimize velocity at release to avoid costs associated with signal- or velocity-dependent noise or higher energy demands (Hypothesis 2). Two experiments with different target constellations tested these two hypotheses. Results of Experiment 1 showed that subjects chose solutions with high error-tolerance, although these solutions also had relatively low velocity. These two benefits seemed to outweigh that for many subjects these solutions were close to a high-penalty area, i.e. they were risky. Experiment 2 dissociated the two hypotheses. Results showed that individuals were consistent with Hypothesis 1 although their solutions were distributed over a range of velocities. Additional analyses revealed that a velocity-dependent increase in variability was absent, probably due to the presence of a solution manifold that channeled variability in a task-specific manner. Hence, the general acceptance of signal-dependent noise may need some qualification. These findings have significance for the fundamental understanding of how the central nervous system deals with its inherent neuromotor noise. PMID:21966262
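
    A stripped-down stand-in for the virtual throwing task (point-mass projectile, flat ground, invented noise levels) makes the notion of error tolerance concrete: each release angle has an exact-solution velocity, and tolerance is the fraction of noisy executions that still hit the target window. Near 45 degrees the range is least sensitive to angle noise and the required velocity is smallest, echoing the interplay of the two hypotheses.

        import numpy as np

        rng = np.random.default_rng(4)
        g, R_target, tol = 9.81, 5.0, 0.25        # toy task: hit 5 m +/- 25 cm

        def tolerance(angle_deg, sd_angle=1.0, sd_vel=0.1, n=100000):
            """Fraction of noisy throws that still hit, starting from the
            release angle's exact-solution velocity."""
            v0 = np.sqrt(R_target * g / np.sin(np.radians(2 * angle_deg)))
            a = np.radians(angle_deg + rng.normal(0, sd_angle, n))
            v = v0 + rng.normal(0, sd_vel, n)
            R = v**2 * np.sin(2 * a) / g          # projectile range
            return v0, np.mean(np.abs(R - R_target) <= tol)

        for angle in (15, 30, 45, 60):
            v0, p = tolerance(angle)
            print(f"angle {angle:>2} deg  v0 = {v0:4.1f} m/s  P(hit) = {p:.2f}")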

  18. Quantum error correction of photon-scattering errors

    NASA Astrophysics Data System (ADS)

    Akerman, Nitzan; Glickman, Yinnon; Kotler, Shlomi; Ozeri, Roee

    2011-05-01

    Photon scattering by an atomic ground-state superposition is often considered as a source of decoherence. The same process also results in atom-photon entanglement, which has been directly observed in various experiments using a single atom, ion, or diamond nitrogen-vacancy center. Here we combine these two aspects to implement a quantum error correction protocol. We encode a qubit in the two Zeeman-split ground states of a single trapped 88Sr+ ion. Photons are resonantly scattered on the S1/2 -> P1/2 transition. We study the process of single photon scattering, i.e. the excitation of the ion to the excited manifold followed by spontaneous emission and decay. In the absence of any knowledge on the emitted photon, the ion-qubit coherence is lost. However, the joint ion-photon system still maintains coherence. We show that while scattering events where spin population is preserved (Rayleigh scattering) do not affect coherence, spin-changing (Raman) scattering events result in coherent amplitude exchange between the two qubit states. By applying a unitary spin rotation that is dependent on the detected photon polarization, we retrieve the ion-qubit initial state. We characterize this quantum error correction protocol by process tomography and demonstrate an ability to preserve ion-qubit coherence with high fidelity.

  19. Towards a Bayesian total error analysis of conceptual rainfall-runoff models: Characterising model error using storm-dependent parameters

    NASA Astrophysics Data System (ADS)

    Kuczera, George; Kavetski, Dmitri; Franks, Stewart; Thyer, Mark

    2006-11-01

    Calibration and prediction in conceptual rainfall-runoff (CRR) modelling is affected by the uncertainty in the observed forcing/response data and the structural error in the model. This study works towards the goal of developing a robust framework for dealing with these sources of error and focuses on model error. The characterisation of model error in CRR modelling has been thwarted by the convenient but indefensible treatment of CRR models as deterministic descriptions of catchment dynamics. This paper argues that the fluxes in CRR models should be treated as stochastic quantities because their estimation involves spatial and temporal averaging. Acceptance that CRR models are intrinsically stochastic paves the way for a more rational characterisation of model error. The hypothesis advanced in this paper is that CRR model error can be characterised by storm-dependent random variation of one or more CRR model parameters. A simple sensitivity analysis is used to identify the parameters most likely to behave stochastically, with variation in these parameters yielding the largest changes in model predictions as measured by the Nash-Sutcliffe criterion. A Bayesian hierarchical model is then formulated to explicitly differentiate between forcing, response and model error. It provides a very general framework for calibration and prediction, as well as for testing hypotheses regarding model structure and data uncertainty. A case study calibrating a six-parameter CRR model to daily data from the Abercrombie catchment (Australia) demonstrates the considerable potential of this approach. Allowing storm-dependent variation in just two model parameters (with one of the parameters characterising model error and the other reflecting input uncertainty) yields a substantially improved model fit, raising the Nash-Sutcliffe statistic from 0.74 to 0.94. Of particular significance is the use of posterior diagnostics to test the key assumptions about the data and model errors.
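
    For reference, the Nash-Sutcliffe criterion used to report the fit improvement is one line of algebra; the sketch below (with invented flow values) computes it.

        import numpy as np

        def nash_sutcliffe(q_obs, q_sim):
            """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations.
            1.0 is a perfect fit; 0.0 means no better than the observed mean."""
            q_obs, q_sim = np.asarray(q_obs, float), np.asarray(q_sim, float)
            return 1.0 - np.sum((q_obs - q_sim) ** 2) / np.sum((q_obs - q_obs.mean()) ** 2)

        # illustrative daily flows [m^3/s]
        q_obs = [3.1, 2.8, 10.4, 7.2, 5.0, 4.1]
        q_sim = [2.9, 3.0, 8.9, 7.8, 5.3, 4.4]
        print(f"NS = {nash_sutcliffe(q_obs, q_sim):.2f}")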

  20. Processing In A GPS Receiver To Reduce Multipath Errors

    NASA Technical Reports Server (NTRS)

    Meehan, Thomas K.

    1994-01-01

    Four techniques of ancillary real-time digital processing of signals in a Global Positioning System (GPS) receiver are introduced to reduce the effects of multipath propagation of signals on the position estimates produced by the receiver. Multipath range errors are approximately halved. The techniques are applied in addition to other signal-processing techniques and to design measures such as making the receiving antenna insensitive to reflections of GPS signals from nearby objects.

  1. Modified McLeod pressure gage eliminates measurement errors

    NASA Technical Reports Server (NTRS)

    Kells, M. C.

    1966-01-01

    Modification of a McLeod gage eliminates errors in measuring absolute pressure of gases in the vacuum range. A valve which is internal to the gage and is magnetically actuated is positioned between the mercury reservoir and the sample gas chamber.

  2. Error Gravity: Perceptions of Native-Speaking and Non-Native Speaking Faculty in EFL.

    ERIC Educational Resources Information Center

    Kresovich, Brant M.

    1988-01-01

    A survey of teachers of composition in English as a Second Language in Japan addressed the perceptions of native-English-speaking and non-native-English-speaking teachers of the acceptability of specific error types within sentences. The native speakers of English were one British and 16 Americans. The non-native group was comprised of 26 Japanese…

  3. Meditation, mindfulness and executive control: the importance of emotional acceptance and brain-based performance monitoring.

    PubMed

    Teper, Rimma; Inzlicht, Michael

    2013-01-01

    Previous studies have documented the positive effects of mindfulness meditation on executive control. What has been lacking, however, is an understanding of the mechanism underlying this effect. Some theorists have described mindfulness as embodying two facets-present moment awareness and emotional acceptance. Here, we examine how the effect of meditation practice on executive control manifests in the brain, suggesting that emotional acceptance and performance monitoring play important roles. We investigated the effect of meditation practice on executive control and measured the neural correlates of performance monitoring, specifically, the error-related negativity (ERN), a neurophysiological response that occurs within 100 ms of error commission. Meditators and controls completed a Stroop task, during which we recorded ERN amplitudes with electroencephalography. Meditators showed greater executive control (i.e. fewer errors), a higher ERN and more emotional acceptance than controls. Finally, mediation pathway models further revealed that meditation practice relates to greater executive control and that this effect can be accounted for by heightened emotional acceptance, and to a lesser extent, increased brain-based performance monitoring.

  4. Why the distribution of medical errors matters.

    PubMed

    McLean, Thomas R

    2015-07-01

    During the last decade, interventions to reduce the number of medical errors have been largely ineffective. Although it is widely assumed that medical errors follow a Gaussian distribution, they may actually follow a Power Rule distribution. This article presents the evidence in favor of a Power Rule distribution for medical errors and then examines the consequences of such a distribution for medical errors. As the distribution of medical errors has real-world implications, further research is needed to determine whether medical errors follow a Gaussian or Power Rule distribution.

  5. Quantum error correction via robust probe modes

    SciTech Connect

    Yamaguchi, Fumiko; Nemoto, Kae; Munro, William J.

    2006-06-15

    We propose a scheme for quantum error correction using robust continuous variable probe modes, rather than fragile ancilla qubits, to detect errors without destroying data qubits. The use of such probe modes reduces the required number of expensive qubits in error correction and allows efficient encoding, error detection, and error correction. Moreover, the elimination of the need for direct qubit interactions significantly simplifies the construction of quantum circuits. We will illustrate how the approach implements three existing quantum error correcting codes: the three-qubit bit-flip (phase-flip) code, the Shor code, and an erasure code.
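
    As a point of reference, the three-qubit bit-flip code named above can be sketched classically in a few lines. Syndrome extraction is shown here as direct parity computation; in the probe-mode scheme of the paper those parities would instead be read out non-destructively by the continuous-variable probe.

```python
def encode(bit):
    """|0> -> |000>, |1> -> |111> (classical view of the bit-flip code)."""
    return [bit, bit, bit]

def flip(codeword, i):
    out = list(codeword)
    out[i] ^= 1
    return out

def correct(codeword):
    # Syndrome: parities of adjacent pairs -- the quantities a probe mode
    # would measure without destroying the data qubits.
    s1 = codeword[0] ^ codeword[1]
    s2 = codeword[1] ^ codeword[2]
    where = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[(s1, s2)]
    return codeword if where is None else flip(codeword, where)

for i in range(3):
    assert correct(flip(encode(1), i)) == [1, 1, 1]  # any single flip is fixed
```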

  6. Risk Factors for Increased Severity of Paediatric Medication Administration Errors

    PubMed Central

    Sears, Kim; Goodman, William M.

    2012-01-01

    Patients' risks from medication errors are widely acknowledged. Yet not all errors, if they occur, have the same risks for severe consequences. Facing resource constraints, policy makers could prioritize factors having the greatest severe–outcome risks. This study assists such prioritization by identifying work-related risk factors most clearly associated with more severe consequences. Data from three Canadian paediatric centres were collected, without identifiers, on actual or potential errors that occurred. Three hundred seventy-two errors were reported, with outcome severities ranging from time delays up to fatalities. Four factors correlated significantly with increased risk for more severe outcomes: insufficient training; overtime; precepting a student; and off-service patient. Factors' impacts on severity also vary with error class: for wrong-time errors, the factors precepting a student or working overtime significantly increase severe-outcomes risk. For other types, caring for an off-service patient has greatest severity risk. To expand such research, better standardization is needed for categorizing outcome severities. PMID:23968607

  7. 12 CFR 250.164 - Bankers' acceptances.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 12 Banks and Banking 3 2011-01-01 2011-01-01 false Bankers' acceptances. 250.164 Section 250.164 Banks and Banking FEDERAL RESERVE SYSTEM (CONTINUED) BOARD OF GOVERNORS OF THE FEDERAL RESERVE SYSTEM MISCELLANEOUS INTERPRETATIONS Interpretations § 250.164 Bankers' acceptances. (a) Section 207 of the Bank...

  8. 48 CFR 3011.103 - Market acceptance.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 7 2011-10-01 2011-10-01 false Market acceptance. 3011.103 Section 3011.103 Federal Acquisition Regulations System DEPARTMENT OF HOMELAND SECURITY, HOMELAND... Developing Requirements Documents 3011.103 Market acceptance. (a) Contracting officers may act on behalf...

  9. 48 CFR 411.103 - Market acceptance.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 4 2011-10-01 2011-10-01 false Market acceptance. 411.103... ACQUISITION PLANNING DESCRIBING AGENCY NEEDS Selecting and Developing Requirements Documents 411.103 Market... accordance with FAR 11.103(a), the market acceptability of their items to be offered. (b) The...

  10. 48 CFR 3011.103 - Market acceptance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 7 2010-10-01 2010-10-01 false Market acceptance. 3011.103 Section 3011.103 Federal Acquisition Regulations System DEPARTMENT OF HOMELAND SECURITY, HOMELAND... Developing Requirements Documents 3011.103 Market acceptance. (a) Contracting officers may act on behalf...

  11. 48 CFR 411.103 - Market acceptance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Market acceptance. 411.103... ACQUISITION PLANNING DESCRIBING AGENCY NEEDS Selecting and Developing Requirements Documents 411.103 Market... accordance with FAR 11.103(a), the market acceptability of their items to be offered. (b) The...

  12. 12 CFR 615.5550 - Bankers' acceptances.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Banks and Banking FARM CREDIT ADMINISTRATION FARM CREDIT SYSTEM FUNDING AND FISCAL AFFAIRS, LOAN POLICIES AND OPERATIONS, AND FUNDING OPERATIONS Bankers' Acceptances § 615.5550 Bankers' acceptances. Banks... cooperatives' board of directors, under established policies, may delegate this authority to management....

  13. Mindfulness, Acceptance and Catastrophizing in Chronic Pain

    PubMed Central

    de Boer, Maaike J.; Steinhagen, Hannemike E.; Versteegen, Gerbrig J.; Struys, Michel M. R. F.; Sanderman, Robbert

    2014-01-01

    Objectives Catastrophizing is often the primary target of the cognitive-behavioral treatment of chronic pain. Recent literature on acceptance and commitment therapy (ACT) suggests an important role in the pain experience for the concepts mindfulness and acceptance. The aim of this study is to examine the influence of mindfulness and general psychological acceptance on pain-related catastrophizing in patients with chronic pain. Methods A cross-sectional survey was conducted, including 87 chronic pain patients from an academic outpatient pain center. Results The results show that general psychological acceptance (measured with the AAQ-II) is a strong predictor of pain-related catastrophizing, independent of gender, age and pain intensity. Mindfulness (measured with the MAAS) did not predict levels of pain-related catastrophizing. Discussion Acceptance of psychological experiences outside of pain itself is related to catastrophizing. Thus, acceptance seems to play a role in the pain experience and should be part of the treatment of chronic pain. The focus of the ACT treatment of chronic pain does not necessarily have to be on acceptance of pain per se, but may be aimed at acceptance of unwanted experiences in general. Mindfulness in the sense of “acting with awareness” is however not related to catastrophizing. Based on our research findings in comparisons with those of other authors, we recommend a broader conceptualization of mindfulness and the use of a multifaceted questionnaire for mindfulness instead of the unidimensional MAAS. PMID:24489915

  14. Consumer acceptance of ginseng food products.

    PubMed

    Chung, Hee Sook; Lee, Young-Chul; Rhee, Young Kyung; Lee, Soo-Yeun

    2011-01-01

    Ginseng has been utilized less in food products than in dietary supplements in the United States. Sensory acceptance of ginseng food products by U.S. consumers has not been reported. The objectives of this study were to: (1) determine the sensory acceptance of commercial ginseng food products and (2) assess influence of the addition of sweeteners to ginseng tea and ginseng extract to chocolate on consumer acceptance. A total of 126 consumers participated in 3 sessions for (1) 7 commercial red ginseng food products, (2) 10 ginseng teas varying in levels of sugar or honey, and (3) 10 ginseng milk or dark chocolates varying in levels of ginseng extract. Ginseng candy with vitamin C and ginseng crunchy white chocolate were the most highly accepted, while sliced ginseng root product was the least accepted among the seven commercial products. Sensory acceptance increased in proportion to the content of sugar and honey in ginseng tea, whereas acceptance decreased with increasing content of ginseng extract in milk and dark chocolates. Findings demonstrate that ginseng food product types with which consumers are already familiar, such as candy and chocolate, will have potential for success in the U.S. market. Chocolate could be suggested as a food matrix into which ginseng can be incorporated, as containing more bioactive compounds than ginseng tea at a similar acceptance level. Future research may include a descriptive analysis with ginseng-based products to identify the key drivers of liking and disliking for successful new product development.

  15. 36 CFR 251.62 - Acceptance.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 251.62 Parks, Forests, and Public Property FOREST SERVICE, DEPARTMENT OF AGRICULTURE LAND USES Special Uses § 251.62 Acceptance. Except for an easement, a special use authorization shall become effective... extended by the authorized officer. Refusal of an applicant to sign and accept a special use...

  16. 36 CFR 251.62 - Acceptance.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 251.62 Parks, Forests, and Public Property FOREST SERVICE, DEPARTMENT OF AGRICULTURE LAND USES Special Uses § 251.62 Acceptance. Except for an easement, a special use authorization shall become effective... extended by the authorized officer. Refusal of an applicant to sign and accept a special use...

  17. 36 CFR 251.62 - Acceptance.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 251.62 Parks, Forests, and Public Property FOREST SERVICE, DEPARTMENT OF AGRICULTURE LAND USES Special Uses § 251.62 Acceptance. Except for an easement, a special use authorization shall become effective... extended by the authorized officer. Refusal of an applicant to sign and accept a special use...

  18. 36 CFR 251.62 - Acceptance.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 251.62 Parks, Forests, and Public Property FOREST SERVICE, DEPARTMENT OF AGRICULTURE LAND USES Special Uses § 251.62 Acceptance. Except for an easement, a special use authorization shall become effective... extended by the authorized officer. Refusal of an applicant to sign and accept a special use...

  19. 36 CFR 251.62 - Acceptance.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 251.62 Parks, Forests, and Public Property FOREST SERVICE, DEPARTMENT OF AGRICULTURE LAND USES Special Uses § 251.62 Acceptance. Except for an easement, a special use authorization shall become effective... extended by the authorized officer. Refusal of an applicant to sign and accept a special use...

  20. Improving Acceptance of Automated Counseling Procedures.

    ERIC Educational Resources Information Center

    Johnson, James H.; And Others

    This paper discusses factors that may influence the acceptance of automated counseling procedures by the military. A consensual model of the change process is presented which structures organizational readiness, the change strategy, and acceptance as integrated variables to be considered in a successful installation. A basic introduction to the…

  1. 48 CFR 11.103 - Market acceptance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... supported by market research; (4) Include consideration of items supplied satisfactorily under recent or... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Market acceptance. 11.103... DESCRIBING AGENCY NEEDS Selecting and Developing Requirements Documents 11.103 Market acceptance. (a)...

  2. 49 CFR 193.2303 - Construction acceptance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 3 2010-10-01 2010-10-01 false Construction acceptance. 193.2303 Section 193.2303 Transportation Other Regulations Relating to Transportation (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY...: FEDERAL SAFETY STANDARDS Construction § 193.2303 Construction acceptance. No person may place in...

  3. 7 CFR 1205.326 - Acceptance.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 10 2012-01-01 2012-01-01 false Acceptance. 1205.326 Section 1205.326 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (MARKETING AGREEMENTS... Research and Promotion Order Cotton Board § 1205.326 Acceptance. Any person selected by the Secretary as...

  4. 7 CFR 1205.326 - Acceptance.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 10 2010-01-01 2010-01-01 false Acceptance. 1205.326 Section 1205.326 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (MARKETING AGREEMENTS... Research and Promotion Order Cotton Board § 1205.326 Acceptance. Any person selected by the Secretary as...

  5. 12 CFR 250.164 - Bankers' acceptances.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 12 Banks and Banking 4 2012-01-01 2012-01-01 false Bankers' acceptances. 250.164 Section 250.164... reserve requirements under section 7 of the International Banking Act of 1978 (12 U.S.C. 3105). The Board..., Form FR Y-7, are also to be used in the calculation of the acceptance limits applicable to...

  6. 16 CFR 1110.5 - Acceptable certificates.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 16 Commercial Practices 2 2010-01-01 2010-01-01 false Acceptable certificates. 1110.5 Section 1110.5 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION CONSUMER PRODUCT SAFETY ACT REGULATIONS CERTIFICATES OF COMPLIANCE § 1110.5 Acceptable certificates. A certificate that is in hard copy or...

  7. Enzyme Reactions and Acceptability of Plant Foods.

    ERIC Educational Resources Information Center

    Palmer, James K.

    1984-01-01

    Provides an overview of enzyme reactions which contribute to the character and acceptability of plant foods. A detailed discussion of polyphenoloxidase is also provided as an example of an enzyme which can markedly affect the character and acceptability of such foods. (JN)

  9. Heavy Metal, Religiosity, and Suicide Acceptability.

    ERIC Educational Resources Information Center

    Stack, Steven

    1998-01-01

    Reports on data taken from the General Social Survey that found a link between "heavy metal" rock fanship and suicide acceptability. Finds that relationship becomes nonsignificant once level of religiosity is controlled. Heavy metal fans are low in religiosity, which contributes to greater suicide acceptability. (Author/JDM)

  10. Nevada Test Site Waste Acceptance Criteria (NTSWAC)

    SciTech Connect

    NNSA /NSO Waste Management Project

    2008-06-01

    This document establishes the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office, Nevada Test Site Waste Acceptance Criteria (NTSWAC). The NTSWAC provides the requirements, terms, and conditions under which the Nevada Test Site will accept low-level radioactive waste (LLW) and LLW mixed waste (MW) for disposal.

  11. Direct Geolocation of TerraSAR-X Spotlight Mode Image and Error Correction

    NASA Astrophysics Data System (ADS)

    Zhou, Xiao; Zeng, Qiming; Jiao, Jian; Zhang, Jingfa; Gong, Lixia

    2013-01-01

    This research addressed the direct geolocation of spaceborne high-resolution SAR images, with a TerraSAR-X spotlight mode image chosen as the study object. The mathematical model for SAR geolocation is the Range-Doppler (RD) model; its resolving algorithms were studied, and the ASF algorithm was chosen for its high accuracy. The focus of the research lay on the error sources affecting geolocation accuracy, such as orbit errors, azimuth timing errors and range timing errors, and on their correction methods. Finally, the accuracy of the method was verified by experimental results.

  12. Error Estimation for Reduced Order Models of Dynamical systems

    SciTech Connect

    Homescu, C; Petzold, L R; Serban, R

    2003-12-16

    The use of reduced order models to describe a dynamical system is pervasive in science and engineering. Often these models are used without an estimate of their error or range of validity. In this paper we consider dynamical systems and reduced models built using proper orthogonal decomposition. We show how to compute estimates and bounds for these errors, by a combination of the small sample statistical condition estimation method and of error estimation using the adjoint method. More importantly, the proposed approach allows the assessment of so-called regions of validity for reduced models, i.e., ranges of perturbations in the original system over which the reduced model is still appropriate. This question is particularly important for applications in which reduced models are used not just to approximate the solution to the system that provided the data used in constructing the reduced model, but rather to approximate the solution of systems perturbed from the original one. Numerical examples validate our approach: the error norm estimates approximate well the forward error while the derived bounds are within an order of magnitude.
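
    A minimal sketch of the setting may help: proper orthogonal decomposition (POD) builds the reduced basis from the SVD of a snapshot matrix, and the projection residual gives a crude error indicator. The paper's estimator itself combines adjoint-based error estimation with small-sample statistical condition estimation, which is not reproduced here; the data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
# Snapshots: a 100-dim state sampled at 50 times, intrinsically rank 3.
snapshots = rng.standard_normal((100, 3)) @ rng.standard_normal((3, 50))

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
k = 3
basis = U[:, :k]                        # reduced-order basis (POD modes)

# Residual of projecting the snapshots onto the reduced basis: a simple
# a-posteriori indication of how much the reduction loses.
projected = basis @ (basis.T @ snapshots)
error_norm = np.linalg.norm(snapshots - projected)
print(f"projection error with k={k}: {error_norm:.2e}")  # ~ machine precision
```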

  13. SAR interferometry for DEM generation: wide-area error assessment

    NASA Astrophysics Data System (ADS)

    Carrasco, Daniel; Broquetas, Antoni; Pena, Ramon; Arbiol, Roman; Castillo, Manuel; Pala, Vincenc

    1998-11-01

    The present work consists of the generation of a DEM using ERS satellite interferometric data over a wide area (50 × 50 km), with an error study using a high-accuracy reference DEM, focusing on atmosphere-induced errors. The area is heterogeneous, with flat and rough topography ranging from sea level up to 1200 m in the inland ranges. The ERS image covers a 100 × 100 km² area and was divided into four quarters to ease processing. The phase unwrapping algorithm, a combination of region-growing and least-squares techniques, successfully worked out the rough topography areas. One quarter of the full scene was geocoded over a local datum ellipsoid to a UTM grid. The resulting DEM was compared to a reference one provided by the Institut Cartografic de Catalunya. Two types of atmospheric error or artifacts were found: a set of very localized spots, up to one phase cycle, which generated ghost hills up to 100 m, and a slow trend effect which added up to 50 m to some areas in the image. Besides the atmospheric errors, the quality of the DEM was assessed. The quantitative error study was carried out locally at several areas with different topography.

  14. Improving visual estimates of cervical spine range of motion.

    PubMed

    Hirsch, Brandon P; Webb, Matthew L; Bohl, Daniel D; Fu, Michael; Buerba, Rafael A; Gruskay, Jordan A; Grauer, Jonathan N

    2014-11-01

    Cervical spine range of motion (ROM) is a common measure of cervical conditions, surgical outcomes, and functional impairment. Although ROM is routinely assessed by visual estimation in clinical practice, visual estimates have been shown to be unreliable and inaccurate. Reliable goniometers can be used for assessments, but the associated costs and logistics generally limit their clinical acceptance. To investigate whether training can improve visual estimates of cervical spine ROM, we asked attending surgeons, residents, and medical students at our institution to visually estimate the cervical spine ROM of healthy subjects before and after a training session. This training session included review of normal cervical spine ROM in 3 planes and demonstration of partial and full motion in 3 planes by multiple subjects. Estimates before, immediately after, and 1 month after this training session were compared to assess reliability and accuracy. Immediately after training, errors decreased by 11.9° (flexion-extension), 3.8° (lateral bending), and 2.9° (axial rotation). These improvements were statistically significant. One month after training, visual estimates remained improved, by 9.5°, 1.6°, and 3.1°, respectively, but were statistically significant only in flexion-extension. Although the accuracy of visual estimates can be improved, clinicians should be aware of the limitations of visual estimates of cervical spine ROM. Our study results support scrutiny of visual assessment of ROM as a criterion for diagnosing permanent impairment or disability. PMID:25379754

  15. Who accepts responsibility for their transgressions?

    PubMed

    Schumann, Karina; Dweck, Carol S

    2014-12-01

    After committing an offense, transgressors can optimize their chances of reconciling with the victim by accepting responsibility. However, transgressors may be motivated to avoid admitting fault because it can feel threatening to accept blame for harmful behavior. Who, then, is likely to accept responsibility for a transgression? We examined how implicit theories of personality--whether people see personality as malleable (incremental theory) or fixed (entity theory)--influence transgressors' likelihood of accepting responsibility. We argue that incremental theorists may feel less threatened by accepting responsibility because they are more likely to view the situation as an opportunity for them to grow as a person and develop their relationship with the victim. We found support for our predictions across four studies using a combination of real-world and hypothetical offenses, and correlational and experimental methods. These studies therefore identify an important individual difference factor that can lead to more effective responses from transgressors. PMID:25252938

  16. Understanding diversity: the importance of social acceptance.

    PubMed

    Chen, Jacqueline M; Hamilton, David L

    2015-04-01

    Two studies investigated how people define and perceive diversity in the historically majority-group dominated contexts of business and academia. We hypothesized that individuals construe diversity as both the numeric representation of racial minorities and the social acceptance of racial minorities within a group. In Study 1, undergraduates' (especially minorities') perceptions of campus diversity were predicted by perceived social acceptance on a college campus, above and beyond perceived minority representation. Study 2 showed that increases in a company's representation and social acceptance independently led to increases in perceived diversity of the company among Whites. Among non-Whites, representation and social acceptance only increased perceived diversity of the company when both qualities were high. Together these findings demonstrate the importance of both representation and social acceptance to the achievement of diversity in groups and that perceiver race influences the relative importance of these two components of diversity.

  17. Acceptability of prenatal testing and termination of pregnancy in Pakistan.

    PubMed

    Jafri, H; Hewison, J; Sheridan, E; Ahmed, S

    2015-01-01

    This study aimed to assess acceptability of prenatal testing (PNT) and termination of pregnancy (TOP) for a range of conditions in Pakistani parents with and without a child with a genetic condition. A structured questionnaire assessing acceptability of PNT and TOP for 30 conditions was completed by 400 Pakistani participants: 200 parents with a child with a genetic condition (100 fathers and 100 mothers) and 200 parents without an affected child (100 fathers and 100 mothers). There was a high level of interest in PNT, with over 80% of parents in all four study groups wanting PNT for the majority of the conditions. There was comparatively less interest in TOP for the same conditions (ranging from 5 to 70% of parents, with mothers of an affected child being most interested). Parents were most likely to be interested in TOP for conditions at the serious end of the continuum. More than half of the participants in each group would consider TOP for anencephaly and quadriplegia. The interest in PNT and TOP for a range of conditions suggests that rapidly developing PNT technologies are likely to be acceptable in Pakistan, a lower-middle-income, majority-Muslim country. The comparatively lower level of interest in TOP for the same conditions highlights ethical dilemmas that such technologies are likely to raise. PMID:25081227

  18. Flow measurement by cardiovascular magnetic resonance: a multi-centre multi-vendor study of background phase offset errors that can compromise the accuracy of derived regurgitant or shunt flow measurements

    PubMed Central

    2010-01-01

    Aims Cardiovascular magnetic resonance (CMR) allows non-invasive phase contrast measurements of flow through planes transecting large vessels. However, some clinically valuable applications are highly sensitive to errors caused by small offsets of measured velocities if these are not adequately corrected, for example by the use of static tissue or static phantom correction of the offset error. We studied the severity of uncorrected velocity offset errors across sites and CMR systems. Methods and Results In a multi-centre, multi-vendor study, breath-hold through-plane retrospectively ECG-gated phase contrast acquisitions, as are used clinically for aortic and pulmonary flow measurement, were applied to static gelatin phantoms in twelve 1.5 T CMR systems, using a velocity encoding range of 150 cm/s. No post-processing corrections of offsets were implemented. The greatest uncorrected velocity offset, taken as an average over a 'great vessel' region (30 mm diameter) located up to 70 mm in-plane distance from the magnet isocenter, ranged from 0.4 cm/s to 4.9 cm/s. It averaged 2.7 cm/s over all the planes and systems. By theoretical calculation, a velocity offset error of 0.6 cm/s (representing just 0.4% of a 150 cm/s velocity encoding range) is barely acceptable, potentially causing about 5% miscalculation of cardiac output and up to 10% error in shunt measurement. Conclusion In the absence of hardware or software upgrades able to reduce phase offset errors, all the systems tested appeared to require post-acquisition correction to achieve consistently reliable breath-hold measurements of flow. The effectiveness of offset correction software will still need testing with respect to clinical flow acquisitions. PMID:20074359
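
    The quoted sensitivity is easy to reproduce with back-of-envelope arithmetic: over the 30 mm diameter 'great vessel' region, a uniform velocity offset translates into a spurious flow of offset × area. The figures below assume a typical resting cardiac output of 5 L/min, which is our assumption rather than a number from the study.

```python
import numpy as np

offset = 0.6                               # velocity offset, cm/s (0.4% of 150 cm/s)
area = np.pi * (3.0 / 2) ** 2              # ROI area for 30 mm diameter, cm^2
flow_error = offset * area                 # spurious flow, cm^3/s
flow_error_l_min = flow_error * 60 / 1000  # convert to L/min
cardiac_output = 5.0                       # L/min, assumed typical resting value

print(f"{flow_error_l_min:.2f} L/min ~ "
      f"{100 * flow_error_l_min / cardiac_output:.0f}% of cardiac output")
# -> 0.25 L/min ~ 5% of cardiac output, matching the ~5% figure above
```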

  19. Topographic gravitational potential up to second-order derivatives: an examination of approximation errors caused by rock-equivalent topography (RET)

    NASA Astrophysics Data System (ADS)

    Kuhn, Michael; Hirt, Christian

    2016-09-01

    In gravity forward modelling, the concept of Rock-Equivalent Topography (RET) is often used to simplify the computation of gravity implied by rock, water, ice and other topographic masses. In the RET concept, topographic masses are compressed (approximated) into equivalent rock, allowing the use of a single constant mass-density value. Many studies acknowledge the approximate character of the RET, but few have attempted yet to quantify and analyse the approximation errors in detail for various gravity field functionals and heights of computation points. Here, we provide an in-depth examination of approximation errors associated with the RET compression for the topographic gravitational potential and its first- and second-order derivatives. Using the Earth2014 layered topography suite we apply Newtonian integration in the spatial domain in the variants (a) rigorous forward modelling of all mass bodies, (b) approximative modelling using RET. The differences among both variants, which reflect the RET approximation error, are formed and studied for an ensemble of 10 different gravity field functionals at three levels of altitude (on and 3 km above the Earth's surface and at 250 km satellite height). The approximation errors are found to be largest at the Earth's surface over RET compression areas (oceans, ice shields) and to increase for the first- and second-order derivatives. Relative errors, computed here as the ratio of the range of differences between both variants to the range of the signal, are at the level of 0.06-0.08 % for the potential, ~3-7 % for the first-order derivatives at the Earth's surface (~0.1 % at satellite altitude). For the second-order derivatives, relative errors are below 1 % at satellite altitude, at the 10-20 % level at 3 km and reach maximum values as large as ~20 to 110 % near the surface. As such, the RET approximation errors may be acceptable for functionals computed far away from the Earth's surface or studies focussing on

  20. Error detection and reduction in blood banking.

    PubMed

    Motschman, T L; Moore, S B

    1996-12-01

    Error management plays a major role in facility process improvement efforts. By detecting and reducing errors, quality and, therefore, patient care improve. It begins with a strong organizational foundation of management attitude with clear, consistent employee direction and appropriate physical facilities. Clearly defined critical processes, critical activities, and SOPs act as the framework for operations as well as for active quality monitoring. To assure that personnel can detect and report errors, they must be trained in both operational duties and error management practices. Use of simulated/intentional errors and incorporation of error detection into competency assessment keep employees practiced and confident and diminish fear of the unknown. Personnel can clearly see that errors are indeed used as opportunities for process improvement and not for punishment. The facility must have a clearly defined and consistently used definition for reportable errors. Reportable errors should include those errors with potentially harmful outcomes as well as those errors that are "upstream," and thus further away from the outcome. A well-written error report consists of who, what, when, where, why/how, and follow-up to the error. Before correction can occur, an investigation to determine the underlying cause of the error should be undertaken. Obviously, the best corrective action is prevention. Correction can occur at five different levels; however, only three of these levels are directed at prevention. Prevention requires a method to collect and analyze data concerning errors. In the authors' facility a functional error classification method and a quality system-based classification have been useful. An active method to search for problems uncovers them further upstream, before they can have disastrous outcomes. In the continual quest for improving processes, an error management program is itself a process that needs improvement, and we must strive to always close the circle.

  1. Errors associated with outpatient computerized prescribing systems

    PubMed Central

    Rothschild, Jeffrey M; Salzberg, Claudia; Keohane, Carol A; Zigmont, Katherine; Devita, Jim; Gandhi, Tejal K; Dalal, Anuj K; Bates, David W; Poon, Eric G

    2011-01-01

    Objective To report the frequency, types, and causes of errors associated with outpatient computer-generated prescriptions, and to develop a framework to classify these errors to determine which strategies have greatest potential for preventing them. Materials and methods This is a retrospective cohort study of 3850 computer-generated prescriptions received by a commercial outpatient pharmacy chain across three states over 4 weeks in 2008. A clinician panel reviewed the prescriptions using a previously described method to identify and classify medication errors. Primary outcomes were the incidence of medication errors; potential adverse drug events, defined as errors with potential for harm; and rate of prescribing errors by error type and by prescribing system. Results Of 3850 prescriptions, 452 (11.7%) contained 466 total errors, of which 163 (35.0%) were considered potential adverse drug events. Error rates varied by computerized prescribing system, from 5.1% to 37.5%. The most common error was omitted information (60.7% of all errors). Discussion About one in 10 computer-generated prescriptions included at least one error, of which a third had potential for harm. This is consistent with the literature on manual handwritten prescription error rates. The number, type, and severity of errors varied by computerized prescribing system, suggesting that some systems may be better at preventing errors than others. Conclusions Implementing a computerized prescribing system without comprehensive functionality and processes in place to ensure meaningful system use does not decrease medication errors. The authors offer targeted recommendations on improving computerized prescribing systems to prevent errors. PMID:21715428

  2. Evaluation of the Regional Atmospheric Modeling System in the Eastern Range Dispersion Assessment System

    NASA Technical Reports Server (NTRS)

    Case, Jonathan

    2000-01-01

    The Applied Meteorology Unit is conducting an evaluation of the Regional Atmospheric Modeling System (RAMS) contained within the Eastern Range Dispersion Assessment System (ERDAS). ERDAS provides emergency response guidance for operations at the Cape Canaveral Air Force Station and the Kennedy Space Center in the event of an accidental hazardous material release or aborted vehicle launch. The prognostic data from RAMS is available to ERDAS for display and is used to initialize the 45th Range Safety (45 SW/SE) dispersion model. Thus, the accuracy of the 45 SW/SE dispersion model is dependent upon the accuracy of RAMS forecasts. The RAMS evaluation task consists of an objective and subjective component for the Florida warm and cool seasons of 1999-2000. The objective evaluation includes gridded and point error statistics at surface and upper-level observational sites, a comparison of the model errors to a coarser grid configuration of RAMS, and a benchmark of RAMS against the widely accepted Eta model. The warm-season subjective evaluation involves a verification of the onset and movement of the Florida east coast sea breeze and RAMS forecast precipitation. This interim report provides a summary of the RAMS objective and subjective evaluation for the 1999 Florida warm season only.

  3. Toward a Taxonomy of Written Errors: Investigation into the Written Errors of Hong Kong Cantonese ESL Learners

    ERIC Educational Resources Information Center

    Chan, Alice Y. W.

    2010-01-01

    This article examines common lexicogrammatical problems found in Cantonese English as a second language (ESL) learners' written English output. A study was conducted with 387 student participants, who were asked to do two untutored and unaided free-writing tasks of about 200-300 words each. A range of lexicogrammatical error types commonly found…

  4. Human decision error (HUMDEE) trees

    SciTech Connect

    Ostrom, L.T.

    1993-08-01

    Graphical presentations of human actions in incident and accident sequences have been used for many years. However, for the most part, human decision making has been underrepresented in these trees. This paper presents a method of incorporating the human decision process into graphical presentations of incident/accident sequences. The presentation takes the form of logic trees, called Human Decision Error Trees, or HUMDEE for short. The primary benefit of HUMDEE trees is that they graphically illustrate what else the individuals involved in the event could have done to prevent either the initiation or continuation of the event. HUMDEE trees also present the alternate paths available at the operator decision points in the incident/accident sequence. This is different from the Technique for Human Error Rate Prediction (THERP) event trees. These trees have many uses: they can be used in incident/accident investigations to show what other courses of action were available, and for training operators. The trees also have a consequence component, so that not only the decision but also the consequences of that decision can be explored.

  5. Relationship between behavioural coping strategies and acceptance in patients with fibromyalgia syndrome: Elucidating targets of interventions

    PubMed Central

    2011-01-01

    Background Previous research has found that acceptance of pain is more successful than cognitive coping variables for predicting adjustment to pain. This research has a limitation because measures of cognitive coping rely on observations and reports of thoughts or attempts to change thoughts rather than on overt behaviours. The purpose of the present study, therefore, is to compare the influence of acceptance measures and the influence of different behavioural coping strategies on the adjustment to chronic pain. Methods A sample of 167 individuals diagnosed with fibromyalgia syndrome completed the Chronic Pain Coping Inventory (CPCI) and the Chronic Pain Acceptance Questionnaire (CPAQ). Results Correlational analyses indicated that the acceptance variables were more related to distress and functioning than were behavioural coping variables. The average magnitudes of the coefficients for activity engagement and pain willingness (both subscales of pain acceptance) across the measures of distress and functioning were r = 0.42 and 0.25, respectively, while the average magnitude of the correlation between coping and functioning was r = 0.17. Regression analyses examined the independent, relative contributions of coping and acceptance to adjustment indicators and demonstrated that acceptance accounted for more variance than did coping variables. The variance contributed by acceptance scores ranged from 4.0 to 40%. The variance contributed by the coping variables ranged from 0 to 9%. Conclusions This study extends the findings of previous work in enhancing the adoption of acceptance-based interventions for maintaining accurate functioning in fibromyalgia patients. PMID:21714918

  6. Study of geopotential error models used in orbit determination error analysis

    NASA Technical Reports Server (NTRS)

    Yee, C.; Kelbel, D.; Lee, T.; Samii, M. V.; Mistretta, G. D.; Hart, R. C.

    1991-01-01

    The uncertainty in the geopotential model is currently one of the major error sources in the orbit determination of low-altitude Earth-orbiting spacecraft. The results of an investigation of different geopotential error models and modeling approaches currently used for operational orbit error analysis support at the Goddard Space Flight Center (GSFC) are presented, with emphasis placed on sequential orbit error analysis using a Kalman filtering algorithm. Several geopotential models, known as the Goddard Earth Models (GEMs), were developed and used at GSFC for orbit determination. The errors in the geopotential models arise from the truncation errors that result from the omission of higher order terms (omission errors) and the errors in the spherical harmonic coefficients themselves (commission errors). At GSFC, two error modeling approaches were operationally used to analyze the effects of geopotential uncertainties on the accuracy of spacecraft orbit determination - the lumped error modeling and uncorrelated error modeling. The lumped error modeling approach computes the orbit determination errors on the basis of either the calibrated standard deviations of a geopotential model's coefficients or the weighted difference between two independently derived geopotential models. The uncorrelated error modeling approach treats the errors in the individual spherical harmonic components as uncorrelated error sources and computes the aggregate effect using a combination of individual coefficient effects. This study assesses the reasonableness of the two error modeling approaches in terms of global error distribution characteristics and orbit error analysis results. Specifically, this study presents the global distribution of geopotential acceleration errors for several gravity error models and assesses the orbit determination errors resulting from these error models for three types of spacecraft - the Gamma Ray Observatory, the Ocean Topography Experiment, and the Cosmic
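
    As a hedged illustration of the uncorrelated error modeling approach described above, independent error sources combine in a root-sum-square sense, whereas a fully correlated worst case would add linearly. The per-coefficient orbit-error contributions below are made-up numbers, not values from the study.

```python
import numpy as np

# Hypothetical orbit-error contributions (metres) from four individual
# spherical-harmonic coefficient uncertainties.
per_coeff_effect = np.array([0.12, 0.05, 0.30, 0.08])

rss = np.sqrt(np.sum(per_coeff_effect ** 2))   # uncorrelated combination
linear = np.sum(np.abs(per_coeff_effect))      # fully correlated worst case

print(f"uncorrelated (RSS): {rss:.3f} m, worst case (linear): {linear:.3f} m")
# -> uncorrelated (RSS): 0.337 m, worst case (linear): 0.550 m
```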

  7. Refractive Errors - Multiple Languages: MedlinePlus

    MedlinePlus

  8. Field errors in hybrid insertion devices

    SciTech Connect

    Schlueter, R.D.

    1995-02-01

    Hybrid magnet theory as applied to the error analyses used in the design of Advanced Light Source (ALS) insertion devices is reviewed. Sources of field errors in hybrid insertion devices are discussed.

  9. Medication Errors - Multiple Languages: MedlinePlus

    MedlinePlus

  10. Understanding human management of automation errors

    PubMed Central

    McBride, Sara E.; Rogers, Wendy A.; Fisk, Arthur D.

    2013-01-01

    Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance. PMID:25383042

  11. Systematic Errors in an Air Track Experiment.

    ERIC Educational Resources Information Center

    Ramirez, Santos A.; Ham, Joe S.

    1990-01-01

    Errors found in a common physics experiment to measure acceleration resulting from gravity using a linear air track are investigated. Glider position at release and initial velocity are shown to be sources of systematic error. (CW)

  12. Public Acceptance for Geological CO2-Storage

    NASA Astrophysics Data System (ADS)

    Schilling, F.; Ossing, F.; Würdemann, H.; Co2SINK Team

    2009-04-01

    Public acceptance is one of the fundamental prerequisites for geological CO2 storage. In highly populated areas like central Europe, and especially in the vicinity of metropolitan areas like Berlin, underground operations are in the focus of the people living next to the site, the media, and politics. To gain acceptance, all these groups - the people in the neighbourhood, journalists, and authorities - need to be confident of the security of the planned storage operation as well as of the long-term security of storage. A very important point is to show that the technical risks of CO2 storage can be managed with the help of a proper short- and long-term monitoring concept, as well as appropriate mitigation technologies, e.g. adequate abandonment procedures for leaking wells. To better explain the possible risks, example leakage scenarios help the public to assess and to accept the technical risks of CO2 storage. At Ketzin we tried an approach that can be summed up as: always tell the truth! This might be self-evident, but it has to be stressed that credibility is of vital importance. Suspiciousness and distrust are the best friends of fear, and undefined fear seems to be the major obstacle to public acceptance of geological CO2 storage. Misinformation and missing communication further strengthen the rejection of geological CO2 storage. When we started to plan and establish the Ketzin storage site, we ensured forward-directed communication: proactive information activities, an on-site information centre, active media relations and open information about the activities taking place. Some of the measures were: information of the competent authorities through meetings (mayor, governmental authorities); information of the local public, e.g. hearings, while also inviting local, regional and nationwide media - we always treated the local people and press first!; and the organization of bigger events to inform the public on site, e.g. the start of drilling activities (open

  13. Analysis and classification of human error

    NASA Technical Reports Server (NTRS)

    Rouse, W. B.; Rouse, S. H.

    1983-01-01

    The literature on human error is reviewed with emphasis on theories of error and classification schemes. A methodology for analysis and classification of human error is then proposed which includes a general approach to classification. Identification of possible causes and factors that contribute to the occurrence of errors is also considered. An application of the methodology to the use of checklists in the aviation domain is presented for illustrative purposes.

  14. Error Propagation in a System Model

    NASA Technical Reports Server (NTRS)

    Schloegel, Kirk (Inventor); Bhatt, Devesh (Inventor); Oglesby, David V. (Inventor); Madl, Gabor (Inventor)

    2015-01-01

    Embodiments of the present subject matter can enable the analysis of signal value errors for system models. In an example, signal value errors can be propagated through the functional blocks of a system model to analyze possible effects as the signal value errors impact incident functional blocks. This propagation of the errors can be applicable to many models of computation including avionics models, synchronous data flow, and Kahn process networks.
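
    A toy version of the propagation idea might look like the sketch below: signal value errors accumulate as they traverse incident functional blocks. The block names and the worst-case additive error model are illustrative assumptions; the patented analysis handles far richer models of computation (synchronous data flow, Kahn process networks).

```python
from functools import lru_cache

# Each block contributes its own error and receives errors on its inputs.
blocks = {
    "sensor": {"inputs": [],                   "own_error": 0.5},
    "filter": {"inputs": ["sensor"],           "own_error": 0.1},
    "fusion": {"inputs": ["filter", "sensor"], "own_error": 0.2},
}

@lru_cache(maxsize=None)
def propagated_error(name):
    # Worst case: incident signal errors add to the block's own error.
    blk = blocks[name]
    return blk["own_error"] + sum(propagated_error(i) for i in blk["inputs"])

for name in blocks:
    print(name, propagated_error(name))   # sensor 0.5, filter 0.6, fusion ~1.3
```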

  15. [Medical errors and conflicts in clinical practice].

    PubMed

    Doskin, V A; Dorinova, E A; Kartoeva, R A; Sokolova, M S

    2014-01-01

    The number of medical errors is increasing, and medical errors have a negative impact on the professional activities of physicians. An analysis of the causes and incidence of medical errors and conflicts in the clinical practice of foreign and domestic doctors is presented, based on the authors' observations, together with didactic materials recommended for training doctors to prevent conflict situations in their professional work and for developing a common strategy for the prevention of medical errors.

  16. Optimized entanglement-assisted quantum error correction

    SciTech Connect

    Taghavi, Soraya; Brun, Todd A.; Lidar, Daniel A.

    2010-10-15

    Using convex optimization, we propose entanglement-assisted quantum error-correction procedures that are optimized for given noise channels. We demonstrate through numerical examples that such an optimized error-correction method achieves higher channel fidelities than existing methods. This improved performance, which leads to perfect error correction for a larger class of error channels, can be interpreted in at least some cases in terms of quantum teleportation, although for general channels this interpretation does not hold.

  17. Compensation of optode sensitivity and position errors in diffuse optical tomography using the approximation error approach.

    PubMed

    Mozumder, Meghdoot; Tarvainen, Tanja; Arridge, Simon R; Kaipio, Jari; Kolehmainen, Ville

    2013-01-01

    Diffuse optical tomography is highly sensitive to measurement and modeling errors. Errors in the source and detector coupling and positions can cause significant artifacts in the reconstructed images. Recently the approximation error theory has been proposed to handle modeling errors. In this article, we investigate the feasibility of the approximation error approach to compensate for modeling errors due to inaccurately known optode locations and coupling coefficients. The approach is evaluated with simulations. The results show that the approximation error method can be used to recover from artifacts in reconstructed images due to optode coupling and position errors.
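
    The core of the approximation error approach is simple to sketch: sample the discrepancy between an accurate forward model and the approximate one over the prior, and fold its mean and covariance into the noise model of the inversion. The toy forward maps below merely stand in for a DOT solver with correct versus nominal optode positions; all names and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x):   # "accurate" forward model (e.g. true optode positions)
    return np.array([x[0] + 0.5 * x[1], 0.3 * x[0] * x[1]])

def g(x):   # approximate model (e.g. slightly wrong nominal positions)
    return np.array([x[0] + 0.45 * x[1], 0.28 * x[0] * x[1]])

samples = rng.standard_normal((5000, 2))        # draws from the prior
e = np.array([f(x) - g(x) for x in samples])    # approximation errors

e_mean = e.mean(axis=0)
e_cov = np.cov(e, rowvar=False)
# In the enhanced likelihood, measurement noise N(0, C) is replaced by
# N(e_mean, C + e_cov), compensating for the modelling error on average.
print(e_mean, e_cov)
```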

  18. ORAN- ORBITAL AND GEODETIC PARAMETER ESTIMATION ERROR ANALYSIS

    NASA Technical Reports Server (NTRS)

    Putney, B.

    1994-01-01

    The Orbital and Geodetic Parameter Estimation Error Analysis program, ORAN, was developed as a Bayesian least squares simulation program for orbital trajectories. ORAN does not process data, but is intended to compute the accuracy of the results of a data reduction, if measurements of a given accuracy are available and are processed by a minimum variance data reduction program. Actual data may be used to provide the time when a given measurement was available and the estimated noise on that measurement. ORAN is designed to consider a data reduction process in which a number of satellite data periods are reduced simultaneously. If there is more than one satellite in a data period, satellite-to-satellite tracking may be analyzed. The least squares estimator in most orbital determination programs assumes that measurements can be modeled by a nonlinear regression equation containing a function of parameters to be estimated and parameters which are assumed to be constant. The partitioning of parameters into those to be estimated (adjusted) and those assumed to be known (unadjusted) is somewhat arbitrary. For any particular problem, the data will be insufficient to adjust all parameters subject to uncertainty, and some reasonable subset of these parameters is selected for estimation. The final errors in the adjusted parameters may be decomposed into a component due to measurement noise and a component due to errors in the assumed values of the unadjusted parameters. Error statistics associated with the first component are generally evaluated in an orbital determination program. ORAN is used to simulate the orbital determination processing and to compute error statistics associated with the second component. Satellite observations may be simulated with desired noise levels given in many forms including range and range rate, altimeter height, right ascension and declination, direction cosines, X and Y angles, azimuth and elevation, and satellite-to-satellite range and

  19. Negotiating vaccine acceptance in an era of reluctance.

    PubMed

    Larson, Heidi J

    2013-08-01

    Studies to better understand the determinants of vaccine acceptance have expanded to include more investigation into dynamics of individual decision-making as well as the influences of peers and social networks. Vaccine acceptance is determined by a range of factors, from structural issues of supply, costs and access to services, as well as the more demand-side determinants. The term vaccine hesitancy is increasingly used in the investigation of demand-side determinants, moving away from the more polarized framing of pro- and anti-vaccine groups to recognizing the importance of understanding and engaging those who are delaying vaccination, accepting only some vaccines, or who are yet undecided, but reluctant. As hesitancy is a state of indecision, it is difficult to measure, but the stage of indecision is a critical time to engage and support the decision-making process. This article suggests modes of investigating the determinants of vaccine confidence and levers of vaccine acceptance toward better engagement and dialogue early in the process of decision-making. Pressure to vaccinate can be counter-productive. Listening and dialog can support individual decision-making and more effectively inform the public health community of the issues and concerns influencing vaccine hesitancy. PMID:23896582

  1. Consumer Acceptance of Dry Dog Food Variations

    PubMed Central

    Donfrancesco, Brizio Di; Koppel, Kadri; Swaney-Stueve, Marianne; Chambers, Edgar

    2014-01-01

    Simple Summary: The objectives of this study were to compare the acceptance of different dry dog food products by consumers, determine consumer clusters for acceptance, and identify the characteristics of dog food that drive consumer acceptance. Pet owners evaluated dry dog food samples available in the US market. The results indicated that the appearance of the sample, especially the color, influenced pet owners' overall liking more than the aroma of the product. Abstract: The objectives of this study were to compare the acceptance of different dry dog food products by consumers, determine consumer clusters for acceptance, and identify the characteristics of dog food that drive consumer acceptance. Eight dry dog food samples available in the US market were evaluated by pet owners. In this study, consumers evaluated overall liking, aroma, and appearance liking of the products. Consumers were also asked to predict their purchase intent, their dog's liking, and cost of the samples. The results indicated that the appearance of the sample, especially the color, influenced pet owners' overall liking more than the aroma of the product. Overall liking clusters were not related to income, age, gender, or education, indicating that general consumer demographics do not appear to play a main role in individual consumer acceptance of dog food products. PMID:26480043

  2. Procedural error monitoring and smart checklists

    NASA Technical Reports Server (NTRS)

    Palmer, Everett

    1990-01-01

    Human beings make and usually detect errors routinely. The same mental processes that allow humans to cope with novel problems can also lead to error. Bill Rouse has argued that errors are not inherently bad but their consequences may be. He proposes the development of error-tolerant systems that detect errors and take steps to prevent the consequences of the error from occurring. Research should be done on self-detection and automatic detection of random and unanticipated errors. For self-detection, displays should be developed that make the consequences of errors immediately apparent. For example, electronic map displays graphically show the consequences of horizontal flight plan entry errors. Vertical profile displays should be developed to make vertical flight planning errors apparent. Other concepts such as energy circles could also help the crew detect gross flight planning errors. For automatic detection, systems should be developed that can track pilot activity, infer pilot intent, and inform the crew of potential errors before their consequences are realized. Systems that perform a reasonableness check on flight plan modifications by checking route length and magnitude of course changes are simple examples; a sketch of this idea appears below. Another example would be a system that checked the aircraft's planned altitude against a database of world terrain elevations. Information is given in viewgraph form.
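
    The reasonableness check mentioned above can be made concrete with a toy sketch: flag a flight-plan edit whose added route length exceeds a threshold. The threshold value and the helper functions below are illustrative assumptions, not taken from the cited work.

    ```python
    import math

    def great_circle_nm(lat1, lon1, lat2, lon2):
        """Great-circle distance in nautical miles (haversine formula)."""
        lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
        a = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 2 * 3440.065 * math.asin(math.sqrt(a))  # mean Earth radius ~ 3440 nm

    def route_length_nm(waypoints):
        """Total length of a route given as a list of (lat, lon) pairs."""
        return sum(great_circle_nm(*a, *b) for a, b in zip(waypoints, waypoints[1:]))

    def check_modification(old_route, new_route, max_extra_nm=200.0):
        """Warn if an edited route is implausibly longer than the original."""
        extra = route_length_nm(new_route) - route_length_nm(old_route)
        if extra > max_extra_nm:
            return f"WARNING: route grew by {extra:.0f} nm - possible entry error"
        return "OK"
    ```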

  3. Error Analysis of Quadrature Rules. Classroom Notes

    ERIC Educational Resources Information Center

    Glaister, P.

    2004-01-01

    Approaches to the determination of the error in numerical quadrature rules are discussed and compared. This article considers the problem of the determination of errors in numerical quadrature rules, taking Simpson's rule as the principal example. It suggests an approach based on truncation error analysis of numerical schemes for differential…
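
    For reference, the classical truncation-error term for composite Simpson's rule with step h on [a, b] is E = -((b - a) h^4 / 180) f''''(xi), so halving h should reduce the error by roughly a factor of 16. The snippet below is a standard numerical check of that fourth-order behavior, not code from the cited note.

    ```python
    import math

    def simpson(f, a, b, n):
        """Composite Simpson's rule with n (even) subintervals."""
        h = (b - a) / n
        s = f(a) + f(b)
        s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
        s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
        return s * h / 3

    exact = 1 - math.cos(1.0)  # integral of sin(x) from 0 to 1
    for n in (4, 8, 16):
        err = abs(simpson(math.sin, 0.0, 1.0, n) - exact)
        print(n, err)  # each doubling of n divides the error by ~16
    ```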

  4. Misclassification Errors and Categorical Data Analysis.

    ERIC Educational Resources Information Center

    Katz, Barry M.; McSweeney, Maryellen

    1979-01-01

    Errors of misclassification and their effects on categorical data analysis are discussed. The chi-square test for equality of two proportions is examined in the context of errorful categorical data. The effects of such errors are illustrated. A correction procedure is developed and discussed. (Author/MH)
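
    One standard algebraic correction for misclassified binary data (textbook material, not necessarily the exact procedure the authors develop) inverts the relation p_obs = p(1 - fn) + (1 - p)fp, where fp and fn are the false-positive and false-negative rates:

    ```python
    def corrected_proportion(p_obs, fp, fn):
        """Invert p_obs = p*(1 - fn) + (1 - p)*fp to recover the true proportion p."""
        denom = 1.0 - fp - fn
        if denom <= 0:
            raise ValueError("misclassification rates too large to invert")
        p = (p_obs - fp) / denom
        return min(max(p, 0.0), 1.0)  # clamp to the valid probability range

    # Example: 30% observed positives with a 5% false-positive rate and a
    # 10% false-negative rate imply a corrected proportion of about 0.294.
    print(corrected_proportion(0.30, 0.05, 0.10))
    ```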

  5. Understanding EFL Students' Errors in Writing

    ERIC Educational Resources Information Center

    Phuket, Pimpisa Rattanadilok Na; Othman, Normah Binti

    2015-01-01

    Writing is the most difficult skill in English, so most EFL students tend to make errors in writing. In assisting learners to successfully acquire writing skills, the analysis of errors and the understanding of their sources are necessary. This study attempts to explore the major sources of the errors that occur in the writing of EFL students. It…

  6. Errors in Standardized Tests: A Systemic Problem.

    ERIC Educational Resources Information Center

    Rhoades, Kathleen; Madaus, George

    The nature and extent of human error in educational testing over the past 25 years were studied. In contrast to the random measurement error expected in all tests, the presence of human error is unexpected and brings unknown, often harmful, consequences for students and their schools. Using data from a variety of sources, researchers found 103…

  7. 40 CFR 96.156 - Account error.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    § 96.156 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR NOX Allowance Tracking System account. Within...

  8. 40 CFR 97.156 - Account error.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    § 97.156 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR NOX Allowance Tracking System account. Within 10 business days of making...

  9. 40 CFR 97.627 - Account error.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    § 97.627 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any Allowance Management System account. Within 10 business days of making...

  10. 40 CFR 96.156 - Account error.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    § 96.156 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR NOX Allowance Tracking System account. Within...

  11. 40 CFR 96.56 - Account error.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    § 96.56 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any NOX Allowance Tracking System account. Within 10...

  12. 40 CFR 97.427 - Account error.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    § 97.427 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any Allowance Management System account. Within 10 business days of making...

  13. 40 CFR 97.427 - Account error.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    § 97.427 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any Allowance Management System account. Within 10 business days of making...

  14. 40 CFR 97.727 - Account error.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    § 97.727 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any Allowance Management System account. Within 10 business days of making...

  15. 40 CFR 96.156 - Account error.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    § 96.156 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR NOX Allowance Tracking System account. Within...

  16. 40 CFR 97.427 - Account error.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    § 97.427 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any Allowance Management System account. Within 10 business days of making...

  17. 40 CFR 97.156 - Account error.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    § 97.156 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR NOX Allowance Tracking System account. Within 10 business days of making...

  18. 40 CFR 73.37 - Account error.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    § 73.37 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any Allowance Tracking System account....

  19. 40 CFR 96.56 - Account error.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    § 96.56 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any NOX Allowance Tracking System account. Within 10...

  20. 40 CFR 97.156 - Account error.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    § 97.156 Account error. The Administrator may, at his or her sole discretion and on his or her own motion, correct any error in any CAIR NOX Allowance Tracking System account. Within 10 business days of making...