Science.gov

Sample records for accuracy relative error

  1. Absolute vs. relative error characterization of electromagnetic tracking accuracy

    NASA Astrophysics Data System (ADS)

    Matinfar, Mohammad; Narayanasamy, Ganesh; Gutierrez, Luis; Chan, Raymond; Jain, Ameet

    2010-02-01

    Electromagnetic (EM) tracking systems are often used for real time navigation of medical tools in an Image Guided Therapy (IGT) system. They are specifically advantageous when the medical device requires tracking within the body of a patient where line of sight constraints prevent the use of conventional optical tracking. EM tracking systems are however very sensitive to electromagnetic field distortions. These distortions, arising from changes in the electromagnetic environment due to the presence of conductive ferromagnetic surgical tools or other medical equipment, limit the accuracy of EM tracking, in some cases potentially rendering tracking data unusable. We present a mapping method for the operating region over which EM tracking sensors are used, allowing for characterization of measurement errors, in turn providing physicians with visual feedback about measurement confidence or reliability of localization estimates. In this instance, we employ a calibration phantom to assess distortion within the operating field of the EM tracker and to display in real time the distribution of measurement errors, as well as the location and extent of the field associated with minimal spatial distortion. The accuracy is assessed relative to successive measurements. Error is computed for a reference point and consecutive measurement errors are displayed relative to the reference in order to characterize the accuracy in near-real-time. In an initial set-up phase, the phantom geometry is calibrated by registering the data from a multitude of EM sensors in a non-ferromagnetic ("clean") EM environment. The registration results in the locations of sensors with respect to each other and defines the geometry of the sensors in the phantom. In a measurement phase, the position and orientation data from all sensors are compared with the known geometry of the sensor spacing, and localization errors (displacement and orientation) are computed. Based on error thresholds provided by the

  2. Relative Accuracy Evaluation

    PubMed Central

    Zhang, Yan; Wang, Hongzhi; Yang, Zhongsheng; Li, Jianzhong

    2014-01-01

    The quality of data plays an important role in business analysis and decision making, and data accuracy is an important aspect of data quality. Thus one necessary task for data quality management is to evaluate the accuracy of the data. Moreover, because the accuracy of a whole data set may be low while a useful part of it is high, it is also necessary to evaluate the accuracy of query results, called relative accuracy. However, as far as we know, neither a metric nor effective methods for such accuracy evaluation have been proposed. Motivated by this, we propose a systematic method for relative accuracy evaluation. We design a relative accuracy evaluation framework for relational databases based on a new metric that measures accuracy using statistics. We apply the framework to evaluate the precision and recall of basic queries, which reflect the relative accuracy of the query results. We also propose methods to handle data updates and to improve accuracy evaluation using functional dependencies. Extensive experimental results show the effectiveness and efficiency of our proposed framework and algorithms. PMID:25133752
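
    As a toy illustration of the precision/recall view of relative accuracy described above, the sketch below checks a query's returned tuples against a verified reference answer. It is only a minimal sketch under assumed inputs, not the authors' statistical framework; the table contents and reference answer are invented.

    ```python
    # Minimal sketch (not the authors' framework): relative accuracy of a query
    # result expressed as precision/recall against a verified reference answer.
    # The tuples and the "reference" answer below are hypothetical.

    def precision_recall(returned, reference):
        """Precision and recall of a set of returned tuples vs. a verified set."""
        returned, reference = set(returned), set(reference)
        true_positives = len(returned & reference)
        precision = true_positives / len(returned) if returned else 1.0
        recall = true_positives / len(reference) if reference else 1.0
        return precision, recall

    # Hypothetical query result: one inaccurate tuple, one correct tuple missed
    returned = {("alice", 30), ("bob", 41), ("carol", 28)}    # ("bob", 41) is wrong
    reference = {("alice", 30), ("bob", 40), ("carol", 28), ("dave", 35)}

    p, r = precision_recall(returned, reference)
    print(f"precision={p:.2f}, recall={r:.2f}")    # precision=0.67, recall=0.50
    ```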

  3. Error awareness as evidence accumulation: effects of speed-accuracy trade-off on error signaling

    PubMed Central

    Steinhauser, Marco; Yeung, Nick

    2012-01-01

    Errors in choice tasks have been shown to elicit a cascade of characteristic components in the human event-related potential (ERP)—the error-related negativity (Ne/ERN) and the error positivity (Pe). Despite the large number of studies concerned with these components, it is still unclear how they relate to error awareness as measured by overt error signaling responses. In the present study, we considered error awareness as a decision process in which evidence for an error is accumulated until a decision criterion is reached, and hypothesized that the Pe is a correlate of the accumulated decision evidence. To test the prediction that the amplitude of the Pe varies as a function of the strength and latency of the accumulated evidence for an error, we manipulated the speed-accuracy trade-off (SAT) in a brightness discrimination task while participants signaled the occurrence of errors. Based on a previous modeling study, we predicted that lower speed pressure should be associated with weaker evidence for an error and, thus, with smaller Pe amplitudes. As predicted, average Pe amplitude was decreased and error signaling was impaired in a low speed pressure condition compared to a high speed pressure condition. In further analyses, we derived single-trial Pe amplitudes using a logistic regression approach. Single-trial amplitudes robustly predicted the occurrence of signaling responses on a trial-by-trial basis. These results confirm the predictions of the evidence accumulation account, supporting the notion that the Pe reflects accumulated evidence for an error and that this evidence drives the emergence of error awareness. PMID:22905027

  4. Improving Localization Accuracy: Successive Measurements Error Modeling

    PubMed Central

    Abu Ali, Najah; Abu-Elkheir, Mervat

    2015-01-01

    Vehicle self-localization is an essential requirement for many of the safety applications envisioned for vehicular networks. The mathematical models used in current vehicular localization schemes focus on modeling the localization error itself, and overlook the potential correlation between successive localization measurement errors. In this paper, we first investigate the existence of correlation between successive positioning measurements, and then incorporate this correlation into the modeling of the positioning error. We use the Yule-Walker equations to determine the degree of correlation between a vehicle’s future position and its past positions, and then propose a p-order Gauss–Markov model to predict the future position of a vehicle from its past p positions. We investigate the existence of correlation for two datasets representing the mobility traces of two vehicles over a period of time. We prove the existence of correlation between successive measurements in the two datasets, and show that the time correlation between measurements can have a value up to four minutes. Through simulations, we validate the robustness of our model and show that it is possible to use the first-order Gauss–Markov model, which has the least complexity, and still maintain an accurate estimation of a vehicle’s future location over time using only its current position. Our model can assist in providing better modeling of positioning errors and can be used as a prediction tool to improve the performance of classical localization algorithms such as the Kalman filter. PMID:26140345
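
    The Yule-Walker step described above can be sketched generically: estimate AR(p) coefficients from sample autocovariances of a position series and use them for a one-step-ahead prediction. The snippet below works on synthetic one-dimensional data and is only a rough sketch of the idea, not the authors' p-order Gauss-Markov model or their vehicle traces.

    ```python
    # Generic sketch: Yule-Walker estimation of AR(p) coefficients from a series
    # of positions, then a one-step-ahead prediction. Synthetic 1-D data; not the
    # paper's vehicle datasets or its full p-order Gauss-Markov formulation.
    import numpy as np

    def yule_walker(x, p):
        """Estimate AR(p) coefficients phi_1..phi_p from sample autocovariances."""
        x = np.asarray(x, dtype=float)
        x = x - x.mean()
        n = len(x)
        r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(p + 1)])
        R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])  # Toeplitz
        return np.linalg.solve(R, r[1:])

    rng = np.random.default_rng(0)
    true_phi = [0.8, 0.15]
    x = np.zeros(500)
    for t in range(2, 500):                 # simulate a 1-D AR(2) "position" series
        x[t] = true_phi[0] * x[t - 1] + true_phi[1] * x[t - 2] + rng.normal(scale=0.1)

    p = 2
    phi = yule_walker(x, p)
    mu = x.mean()
    x_next = mu + sum(phi[k] * (x[-1 - k] - mu) for k in range(p))  # one-step prediction
    print("estimated AR coefficients:", np.round(phi, 3))
    print("one-step-ahead position prediction:", round(float(x_next), 3))
    ```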

  6. Alterations in Error-Related Brain Activity and Post-Error Behavior over Time

    ERIC Educational Resources Information Center

    Themanson, Jason R.; Rosen, Peter J.; Pontifex, Matthew B.; Hillman, Charles H.; McAuley, Edward

    2012-01-01

    This study examines the relation between the error-related negativity (ERN) and post-error behavior over time in healthy young adults (N = 61). Event-related brain potentials were collected during two sessions of an identical flanker task. Results indicated changes in ERN and post-error accuracy were related across task sessions, with more…

  7. Does naming accuracy improve through self-monitoring of errors?

    PubMed

    Schwartz, Myrna F; Middleton, Erica L; Brecher, Adelyn; Gagliardi, Maureen; Garvey, Kelly

    2016-04-01

    This study examined spontaneous self-monitoring of picture naming in people with aphasia. Of primary interest was whether spontaneous detection or repair of an error constitutes an error signal or other feedback that tunes the production system to the desired outcome. In other words, do acts of monitoring cause adaptive change in the language system? A second possibility, not incompatible with the first, is that monitoring is indicative of an item's representational strength, and strength is a causal factor in language change. Twelve PWA performed a 615-item naming test twice, in separate sessions, without extrinsic feedback. At each timepoint, we scored the first complete response for accuracy and error type and the remainder of the trial for verbalizations consistent with detection (e.g., "no, not that") and successful repair (i.e., correction). Data analysis centered on: (a) how often an item that was misnamed at one timepoint changed to correct at the other timepoint, as a function of monitoring; and (b) how monitoring impacted change scores in the Forward (Time 1 to Time 2) compared to Backward (Time 2 to Time 1) direction. The Strength hypothesis predicts significant effects of monitoring in both directions. The Learning hypothesis predicts greater effects in the Forward direction. These predictions were evaluated for three types of errors--Semantic errors, Phonological errors, and Fragments--using mixed-effects regression modeling with crossed random effects. Support for the Strength hypothesis was found for all three error types. Support for the Learning hypothesis was found for Semantic errors. All effects were due to error repair, not error detection. We discuss the theoretical and clinical implications of these novel findings. PMID:26863091

  8. Relative-Error-Covariance Algorithms

    NASA Technical Reports Server (NTRS)

    Bierman, Gerald J.; Wolff, Peter J.

    1991-01-01

    Two algorithms compute error covariance of difference between optimal estimates, based on data acquired during overlapping or disjoint intervals, of state of discrete linear system. Provides quantitative measure of mutual consistency or inconsistency of estimates of states. Relative-error-covariance concept applied, to determine degree of correlation between trajectories calculated from two overlapping sets of measurements and construct real-time test of consistency of state estimates based upon recently acquired data.
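
    The abstract does not give the algorithms, but the underlying quantity can be illustrated: for two correlated estimates of the same state with error covariances P1 and P2 and cross-covariance C12, the covariance of their difference is P1 + P2 - C12 - C12^T. The sketch below checks this identity by Monte Carlo on synthetic jointly Gaussian errors; it is an illustrative aside, not the NASA algorithms described above.

    ```python
    # Hedged illustration (not the NASA algorithms): covariance of the difference
    # between two correlated estimates of the same state,
    #   Cov(x1_hat - x2_hat) = P1 + P2 - C12 - C12.T,
    # checked by Monte Carlo sampling of jointly Gaussian estimation errors.
    import numpy as np

    rng = np.random.default_rng(1)
    P1 = np.array([[4.0, 1.0], [1.0, 2.0]])      # error covariance of estimate 1
    P2 = np.array([[3.0, 0.5], [0.5, 1.5]])      # error covariance of estimate 2
    C12 = np.array([[1.0, 0.2], [0.3, 0.8]])     # cross-covariance of the two errors

    joint = np.block([[P1, C12], [C12.T, P2]])   # joint covariance of (e1, e2)
    samples = rng.multivariate_normal(np.zeros(4), joint, size=200_000)
    diff = samples[:, :2] - samples[:, 2:]       # e1 - e2 for each sample

    P_rel_analytic = P1 + P2 - C12 - C12.T
    P_rel_empirical = np.cov(diff.T)
    print(np.round(P_rel_analytic, 2))
    print(np.round(P_rel_empirical, 2))          # should agree to ~2 decimals
    ```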

  9. Morphological Awareness and Children's Writing: Accuracy, Error, and Invention

    PubMed Central

    McCutchen, Deborah; Stull, Sara

    2014-01-01

    This study examined the relationship between children's morphological awareness and their ability to produce accurate morphological derivations in writing. Fifth-grade U.S. students (n = 175) completed two writing tasks that invited or required morphological manipulation of words. We examined both accuracy and error, specifically errors in spelling and errors of the sort we termed morphological inventions, which entailed inappropriate, novel pairings of stems and suffixes. Regressions were used to determine the relationship between morphological awareness, morphological accuracy, and spelling accuracy, as well as between morphological awareness and morphological inventions. Linear regressions revealed that morphological awareness uniquely predicted children's generation of accurate morphological derivations, regardless of whether or not accurate spelling was required. A logistic regression indicated that morphological awareness was also uniquely predictive of morphological invention, with higher morphological awareness increasing the probability of morphological invention. These findings suggest that morphological knowledge may not only assist children with spelling during writing, but may also assist with word production via generative experimentation with morphological rules during sentence generation. Implications are discussed for the development of children's morphological knowledge and relationships with writing. PMID:25663748

  10. The effects of noise masking and required accuracy on speech errors, disfluencies, and self-repairs.

    PubMed

    Postma, A; Kolk, H

    1992-06-01

    The covert repair hypothesis views disfluencies as by-products of covert self-repairs applied to internal speech errors. To test this hypothesis we examined effects of noise masking and accuracy emphasis on speech error, disfluency, and self-repair rates. Noise reduced the numbers of disfluencies and self-repairs but did not affect speech error rates significantly. With accuracy emphasis, speech error rates decreased considerably, but disfluency and self-repair rates did not. With respect to these findings, it is argued that subjects monitor errors with less scrutiny under noise and when accuracy of speaking is unimportant. Consequently, covert and overt repair tendencies drop, a fact that is reflected by changes in disfluency and self-repair rates relative to speech error rates. Self-repair occurrence may be additionally reduced under noise because the information available for error detection--that is, the auditory signal--has also decreased. A qualitative analysis of self-repair patterns revealed that phonemic errors were usually repaired immediately after their intrusion. PMID:1608244

  11. Prediction Accuracy of Error Rates for MPTB Space Experiment

    NASA Technical Reports Server (NTRS)

    Buchner, S. P.; Campbell, A. B.; Davis, D.; McMorrow, D.; Petersen, E. L.; Stassinopoulos, E. G.; Ritter, J. C.

    1998-01-01

    This paper addresses the accuracy of radiation-induced upset-rate predictions in space using the results of ground-based measurements together with standard environmental and device models. The study is focused on two part types - 16 Mb NEC DRAMs (UPD4216) and 1 Kb SRAMs (AMD93L422) - both of which are currently in space on board the Microelectronics and Photonics Test Bed (MPTB). To date, ground-based measurements of proton-induced single event upset (SEU) cross sections as a function of energy have been obtained and combined with models of the proton environment to predict proton-induced error rates in space. The role played by uncertainties in the environmental models will be determined by comparing the modeled radiation environment with the actual environment measured aboard MPTB. Heavy-ion induced upsets have also been obtained from MPTB and will be compared with the "predicted" error rate following ground testing that will be done in the near future. These results should help identify sources of uncertainty in predictions of SEU rates in space.

  12. Error-Related Psychophysiology and Negative Affect

    ERIC Educational Resources Information Center

    Hajcak, G.; McDonald, N.; Simons, R.F.

    2004-01-01

    The error-related negativity (ERN/Ne) and error positivity (Pe) have been associated with error detection and response monitoring. More recently, heart rate (HR) and skin conductance (SC) have also been shown to be sensitive to the internal detection of errors. An enhanced ERN has consistently been observed in anxious subjects and there is some…

  13. Optical System Error Analysis and Calibration Method of High-Accuracy Star Trackers

    PubMed Central

    Sun, Ting; Xing, Fei; You, Zheng

    2013-01-01

    The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Up to now, however, research in this field has lacked a systematic and universal analysis. This paper proposes in detail an approach for the synthetic error analysis of the star tracker that avoids complicated theoretical derivation. This approach can determine the error propagation relationship of the star tracker and can be used to build an error model intuitively and systematically. The analysis results can be used as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted. Excellent calibration results are achieved based on the calibration model. To summarize, the error analysis approach and the calibration method are proved to be adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers. PMID:23567527

  14. Relative error covariance analysis techniques and application

    NASA Technical Reports Server (NTRS)

    Wolff, Peter J.; Williams, Bobby G.

    1988-01-01

    A technique for computing the error covariance of the difference between two estimators derived from different (possibly overlapping) data arcs is presented. The relative error covariance is useful for predicting the achievable consistency between Kalman-Bucy filtered estimates generated from two (not necessarily disjoint) data sets. The relative error covariance analysis technique is then applied to a Venus Orbiter simulation.

  15. The Dilemma of Error and Accuracy: An Exploration.

    ERIC Educational Resources Information Center

    Barksdale-Ladd, Mary Alice; King, James R.

    2000-01-01

    Shows teachers who self-reported they were developing constructivist approaches to classroom instruction had contradictory beliefs about dealing with reading and writing errors, and about addressing errors with more- and less-able readers and writers. Shows the teachers were convinced that students' inaccurate constructions of knowledge should be…

  16. Compensation of kinematic geometric parameters error and comparative study of accuracy testing for robot

    NASA Astrophysics Data System (ADS)

    Du, Liang; Shi, Guangming; Guan, Weibin; Zhong, Yuansheng; Li, Jin

    2014-12-01

    Geometric error is the main error source of an industrial robot and plays a more significant role than other error factors. A compensation model for the kinematic error is proposed in this article. Many methods can be used to test robot accuracy, which raises the question of how to determine which method is better. In this article, two methods for robot accuracy testing are compared: a Laser Tracker System (LTS) and a Three Coordinate Measuring instrument (TCM) are used to test the robot accuracy according to the relevant standard. Based on the compensation results, the better method, which improves the robot accuracy appreciably, is identified.

  17. Reducing Systematic Centroid Errors Induced by Fiber Optic Faceplates in Intensified High-Accuracy Star Trackers

    PubMed Central

    Xiong, Kun; Jiang, Jie

    2015-01-01

    Compared with traditional star trackers, intensified high-accuracy star trackers equipped with an image intensifier exhibit overwhelmingly superior dynamic performance. However, the multiple-fiber-optic faceplate structure in the image intensifier complicates the optoelectronic detecting system of star trackers and may cause considerable systematic centroid errors and poor attitude accuracy. All the sources of systematic centroid errors related to fiber optic faceplates (FOFPs) throughout the detection process of the optoelectronic system were analyzed. Based on the general expression of the systematic centroid error deduced in the frequency domain and the FOFP modulation transfer function, an accurate expression that described the systematic centroid error of FOFPs was obtained. Furthermore, reduction of the systematic error between the optical lens and the input FOFP of the intensifier, the one among multiple FOFPs and the one between the output FOFP of the intensifier and the imaging chip of the detecting system were discussed. Two important parametric constraints were acquired from the analysis. The correctness of the analysis on the optoelectronic detecting system was demonstrated through simulation and experiment. PMID:26016920

  18. Accuracy of devices for self-monitoring of blood glucose: A stochastic error model.

    PubMed

    Vettoretti, M; Facchinetti, A; Sparacino, G; Cobelli, C

    2015-01-01

    Self-monitoring of blood glucose (SMBG) devices are portable systems that allow measuring glucose concentration in a small drop of blood obtained via finger-prick. SMBG measurements are key in type 1 diabetes (T1D) management, e.g. for tuning insulin dosing. A reliable model of SMBG accuracy would be important in several applications, e.g. in in silico design and optimization of insulin therapy. In the literature, the most used model to describe SMBG error is the Gaussian distribution, which however is simplistic to properly account for the observed variability. Here, a methodology to derive a stochastic model of SMBG accuracy is presented. The method consists in dividing the glucose range into zones in which absolute/relative error presents constant standard deviation (SD) and, then, fitting by maximum-likelihood a skew-normal distribution model to absolute/relative error distribution in each zone. The method was tested on a database of SMBG measurements collected by the One Touch Ultra 2 (Lifescan Inc., Milpitas, CA). In particular, two zones were identified: zone 1 (BG≤75 mg/dl) with constant-SD absolute error and zone 2 (BG>75mg/dl) with constant-SD relative error. Mean and SD of the identified skew-normal distributions are, respectively, 2.03 and 6.51 in zone 1, 4.78% and 10.09% in zone 2. Visual predictive check validation showed that the derived two-zone model accurately reproduces SMBG measurement error distribution, performing significantly better than the single-zone Gaussian model used previously in the literature. This stochastic model allows a more realistic SMBG scenario for in silico design and optimization of T1D insulin therapy.
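
    A hedged sketch of the zoned fitting step: split synthetic SMBG data at 75 mg/dl, use absolute error below the threshold and relative error above it, and fit a skew-normal distribution to each zone by maximum likelihood with scipy. The simulated data and resulting parameters are illustrative only, not the One Touch Ultra 2 dataset or the published model values.

    ```python
    # Hedged sketch of the zoned skew-normal fit: simulated paired (reference, SMBG)
    # glucose values, absolute error in zone 1 (BG <= 75 mg/dl), relative error in
    # zone 2 (BG > 75 mg/dl). Data are synthetic, not the device dataset in the paper.
    import numpy as np
    from scipy.stats import skewnorm

    rng = np.random.default_rng(2)
    reference = rng.uniform(40, 400, size=5000)                 # reference BG (mg/dl)
    noise = skewnorm.rvs(a=3, loc=-2, scale=0.08 * reference, random_state=42)
    measured = reference + noise                                # simulated SMBG readings

    zone1 = reference <= 75                                     # zone 1: BG <= 75 mg/dl
    abs_err = measured[zone1] - reference[zone1]                # absolute error (mg/dl)
    rel_err = 100 * (measured[~zone1] - reference[~zone1]) / reference[~zone1]  # %

    for label, err in [("zone 1 (absolute error, mg/dl)", abs_err),
                       ("zone 2 (relative error, %)", rel_err)]:
        a, loc, scale = skewnorm.fit(err)                       # maximum-likelihood fit
        mean, var = [float(v) for v in
                     skewnorm.stats(a, loc=loc, scale=scale, moments="mv")]
        print(f"{label}: shape={a:.2f}, mean={mean:.2f}, SD={np.sqrt(var):.2f}")
    ```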

  19. On the Orientation Error of IMU: Investigating Static and Dynamic Accuracy Targeting Human Motion.

    PubMed

    Ricci, Luca; Taffoni, Fabrizio; Formica, Domenico

    2016-01-01

    The accuracy in orientation tracking attainable by using inertial measurement units (IMUs) when measuring human motion is still an open issue. This study presents a systematic quantification of the accuracy under static conditions and typical human dynamics, simulated by means of a robotic arm. Two sensor fusion algorithms, selected from the classes of the stochastic and complementary methods, are considered. The proposed protocol implements controlled and repeatable experimental conditions and validates accuracy for an extensive set of dynamic movements that differ in frequency and amplitude. We found that dynamic performance of the tracking is only slightly dependent on the sensor fusion algorithm. Instead, it is dependent on the amplitude and frequency of the movement, and a major contribution to the error derives from the orientation of the rotation axis w.r.t. the gravity vector. Upper bounds on the absolute and relative errors are found in the ranges [0.7°, 8.2°] and [1.0°, 10.3°], respectively. Alongside dynamic accuracy, static accuracy is thoroughly investigated, also with an emphasis on the convergence behavior of the different algorithms. Reported results emphasize critical issues associated with the use of this technology and provide a baseline level of performance for human motion-related applications. PMID:27612100

  1. The movement speed-accuracy relation in space-time.

    PubMed

    Hsieh, Tsung-Yu; Liu, Yeou-Teh; Mayer-Kress, Gottfried; Newell, Karl M

    2013-02-01

    Two experiments investigated a new approach to decomposing the contributions of spatial and temporal constraints to an integrated single space-time performance score in the movement speed-accuracy relation of a line drawing task. The mean and variability of the space-time performance error score were lowest when the task space and time constraint contributions to the performance score were comparable (i.e., middle range of velocities). As the contribution of either space or time to the performance score became increasingly asymmetrical at lower and higher average velocities, the mean performance error score and its variability increased with a greater trade-off between spatial and temporal movement properties. The findings revealed a new U-shaped space-time speed-accuracy function for performance outcome in tasks that have both spatial and temporal demands. The traditional speed-accuracy functions for spatial error and temporal error considered independently map to this integrated space-time movement speed-accuracy function.

  2. Relative errors can cue absolute visuomotor mappings.

    PubMed

    van Dam, Loes C J; Ernst, Marc O

    2015-12-01

    When repeatedly switching between two visuomotor mappings, e.g. in a reaching or pointing task, adaptation tends to speed up over time. That is, when the error in the feedback corresponds to a mapping switch, fast adaptation occurs. Yet, what is learned, the relative error or the absolute mappings? When switching between mappings, errors with a size corresponding to the relative difference between the mappings will occur more often than other large errors. Thus, we could learn to correct more for errors with this familiar size (Error Learning). On the other hand, it has been shown that the human visuomotor system can store several absolute visuomotor mappings (Mapping Learning) and can use associated contextual cues to retrieve them. Thus, when contextual information is present, no error feedback is needed to switch between mappings. Using a rapid pointing task, we investigated how these two types of learning may each contribute when repeatedly switching between mappings in the absence of task-irrelevant contextual cues. After training, we examined how participants changed their behaviour when a single error probe indicated either the often-experienced error (Error Learning) or one of the previously experienced absolute mappings (Mapping Learning). Results were consistent with Mapping Learning despite the relative nature of the error information in the feedback. This shows that errors in the feedback can have a double role in visuomotor behaviour: they drive the general adaptation process by making corrections possible on subsequent movements, as well as serve as contextual cues that can signal a learned absolute mapping. PMID:26280315

  3. Error-related electrocorticographic activity in humans during continuous movements.

    PubMed

    Milekovic, Tomislav; Ball, Tonio; Schulze-Bonhage, Andreas; Aertsen, Ad; Mehring, Carsten

    2012-04-01

    Brain-machine interface (BMI) devices make errors in decoding. Detecting these errors online from neuronal activity can improve BMI performance by modifying the decoding algorithm and by correcting the errors made. Here, we study the neuronal correlates of two different types of errors which can both be employed in BMI: (i) the execution error, due to inaccurate decoding of the subjects' movement intention; (ii) the outcome error, due to not achieving the goal of the movement. We demonstrate that, in electrocorticographic (ECoG) recordings from the surface of the human brain, strong error-related neural responses (ERNRs) for both types of errors can be observed. ERNRs were present in the low and high frequency components of the ECoG signals, with both signal components carrying partially independent information. Moreover, the observed ERNRs can be used to discriminate between error types, with high accuracy (≥83%) obtained already from single electrode signals. We found ERNRs in multiple cortical areas, including motor and somatosensory cortex. As the motor cortex is the primary target area for recording control signals for a BMI, an adaptive motor BMI utilizing these error signals may not require additional electrode implants in other brain areas.

  4. Challenge and Error: Critical Events and Attention-Related Errors

    ERIC Educational Resources Information Center

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error↔attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  5. Machining Error Compensation Based on 3D Surface Model Modified by Measured Accuracy

    NASA Astrophysics Data System (ADS)

    Abe, Go; Aritoshi, Masatoshi; Tomita, Tomoki; Shirase, Keiichi

    Recently, the demand for precision machining of dies and molds with complex shapes has been increasing. Although CNC machine tools are widely utilized for machining, machining error compensation is still required to meet the increasing demand for machining accuracy. However, machining error compensation is an operation that takes a huge amount of skill, time, and cost. This paper deals with a new method of machining error compensation. The 3D surface data of the machined part are modified according to the machining error measured by a CMM (Coordinate Measuring Machine), and a compensated NC program is generated from the modified 3D surface data for machining error compensation.

  6. Measurement accuracy of articulated arm CMMs with circular grating eccentricity errors

    NASA Astrophysics Data System (ADS)

    Zheng, Dateng; Yin, Sanfeng; Luo, Zhiyang; Zhang, Jing; Zhou, Taiping

    2016-11-01

    A model of the eccentricity errors of the 6 circular gratings attempts to improve the measurement accuracy of an articulated arm coordinate measuring machine (AACMM) without increasing the corresponding hardware cost. We analyzed the AACMM’s circular grating eccentricity and obtained the 6 joints’ circular grating eccentricity error model parameters by conducting circular grating eccentricity error experiments. We completed the calibration operations for the measurement models by using home-made standard bar components. Our results show that the measurement errors from the AACMM’s measurement model without and with circular grating eccentricity errors are 0.0834 mm and 0.0462 mm, respectively. Significantly, we determined that measurement accuracy increased by about 44.6% when the circular grating eccentricity errors were corrected. This study is significant because it promotes wider applications of AACMMs both in theory and in practice.

  7. The Accuracy of Webcams in 2D Motion Analysis: Sources of Error and Their Control

    ERIC Educational Resources Information Center

    Page, A.; Moreno, R.; Candelas, P.; Belmar, F.

    2008-01-01

    In this paper, we show the potential of webcams as precision measuring instruments in a physics laboratory. Various sources of error appearing in 2D coordinate measurements using low-cost commercial webcams are discussed, quantifying their impact on accuracy and precision, and simple procedures to control these sources of error are presented.…

  8. Morphological Awareness and Children's Writing: Accuracy, Error, and Invention

    ERIC Educational Resources Information Center

    McCutchen, Deborah; Stull, Sara

    2015-01-01

    This study examined the relationship between children's morphological awareness and their ability to produce accurate morphological derivations in writing. Fifth-grade US students (n = 175) completed two writing tasks that invited or required morphological manipulation of words. We examined both accuracy and error, specifically errors in…

  9. Challenge and error: critical events and attention-related errors.

    PubMed

    Cheyne, James Allan; Carriere, Jonathan S A; Solman, Grayden J F; Smilek, Daniel

    2011-12-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error↔attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention lapses; resource-depleting cognitions interfering with attention to subsequent task challenges. Attention lapses lead to errors, and errors themselves are a potent consequence often leading to further attention lapses potentially initiating a spiral into more serious errors. We investigated this challenge-induced error↔attention-lapse model using the Sustained Attention to Response Task (SART), a GO-NOGO task requiring continuous attention and response to a number series and withholding of responses to a rare NOGO digit. We found response speed and increased commission errors following task challenges to be a function of temporal distance from, and prior performance on, previous NOGO trials. We conclude by comparing and contrasting the present theory and findings to those based on choice paradigms and argue that the present findings have implications for the generality of conflict monitoring and control models.

  10. Effects of random member length errors on the accuracy and internal loads of truss antennas

    NASA Technical Reports Server (NTRS)

    Greene, W. H.

    1983-01-01

    The effects of random member length errors on the surface accuracy, the defocus, and the residual internal loads of tetrahedral truss antenna reflectors have been studied analytically. The analytical procedure involves performing multiple deterministic finite element structural analyses for a particular truss. For each analysis, the normally distributed random member length errors are selected by a random number generator. A best-fit paraboloid analysis is used to determine a root mean square error and defocus from each analysis. The statistical properties of these quantities as well as the internal loads are calculated from the results of many independent analyses. Results indicate that the number of members in a tetrahedral truss antenna of a given diameter has a significant effect on surface accuracy, defocus and internal loads. It was also found that the member axial stiffnesses and antenna focal length have a very small effect on reflector surface accuracy.
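
    The Monte Carlo / best-fit paraboloid post-processing can be sketched as follows. For simplicity the random errors are applied directly to node heights of an ideal paraboloid rather than to truss member lengths, so the snippet illustrates only the statistical procedure (least-squares paraboloid fit, RMS surface error, defocus), not the finite element analyses.

    ```python
    # Sketch of the Monte Carlo / best-fit paraboloid post-processing only: random
    # normal errors are applied directly to node heights of an ideal paraboloid
    # (no finite element truss model), then z = r^2/(4f) + z0 is fit by linear
    # least squares to obtain RMS surface error and defocus statistics.
    import numpy as np

    rng = np.random.default_rng(3)
    f_nominal, radius, sigma = 10.0, 7.5, 1e-3       # focal length, aperture, error SD (m)

    # nodes on a regular grid inside the aperture
    x, y = np.meshgrid(np.linspace(-radius, radius, 25), np.linspace(-radius, radius, 25))
    mask = x**2 + y**2 <= radius**2
    r2 = (x**2 + y**2)[mask]
    z_ideal = r2 / (4 * f_nominal)

    rms_list, defocus_list = [], []
    for _ in range(500):                             # independent Monte Carlo trials
        z = z_ideal + rng.normal(scale=sigma, size=z_ideal.size)
        A = np.column_stack([r2, np.ones_like(r2)])  # fit z ~ a*r^2 + z0, with a = 1/(4f)
        (a, z0), *_ = np.linalg.lstsq(A, z, rcond=None)
        residual = z - (a * r2 + z0)
        rms_list.append(np.sqrt(np.mean(residual**2)))
        defocus_list.append(1.0 / (4 * a) - f_nominal)

    print(f"mean RMS surface error: {np.mean(rms_list):.2e} m")
    print(f"defocus std dev:        {np.std(defocus_list):.2e} m")
    ```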

  11. Dynamic Modeling Accuracy Dependence on Errors in Sensor Measurements, Mass Properties, and Aircraft Geometry

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Morelli, Eugene A.

    2013-01-01

    A nonlinear simulation of the NASA Generic Transport Model was used to investigate the effects of errors in sensor measurements, mass properties, and aircraft geometry on the accuracy of dynamic models identified from flight data. Measurements from a typical system identification maneuver were systematically and progressively deteriorated and then used to estimate stability and control derivatives within a Monte Carlo analysis. Based on the results, recommendations were provided for maximum allowable errors in sensor measurements, mass properties, and aircraft geometry to achieve desired levels of dynamic modeling accuracy. Results using other flight conditions, parameter estimation methods, and a full-scale F-16 nonlinear aircraft simulation were compared with these recommendations.
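
    As a loose, generic sketch of this study design (not the Generic Transport Model simulation or its estimation software), the snippet below progressively corrupts the regressors of a simple linear model and tracks, via Monte Carlo, how the least-squares parameter estimates degrade.

    ```python
    # Generic sketch of the study design (not the GTM simulation or its estimator):
    # progressively corrupt the "measured" regressors of a linear model and track,
    # via Monte Carlo, how the least-squares parameter estimates degrade.
    import numpy as np

    rng = np.random.default_rng(5)
    theta_true = np.array([-0.8, 2.5, 0.3])          # hypothetical stability derivatives
    n_samples, n_trials = 400, 200

    X_clean = rng.normal(size=(n_samples, 3))        # idealized measured states/inputs
    y = X_clean @ theta_true                         # noise-free model output

    for noise_sd in [0.0, 0.02, 0.05, 0.10, 0.20]:   # progressive sensor deterioration
        errors = []
        for _ in range(n_trials):
            X_meas = X_clean + rng.normal(scale=noise_sd, size=X_clean.shape)
            theta_hat, *_ = np.linalg.lstsq(X_meas, y, rcond=None)
            errors.append(100 * np.abs(theta_hat - theta_true) / np.abs(theta_true))
        print(f"sensor noise SD {noise_sd:.2f}: mean parameter error "
              f"{np.mean(errors):.1f}%")
    ```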

  12. Capturing L2 Accuracy Developmental Patterns: Insights from an Error-Tagged EFL Learner Corpus

    ERIC Educational Resources Information Center

    Thewissen, Jennifer

    2013-01-01

    The present article addresses the issue of second language accuracy developmental trajectories and shows how they can be captured via an error-tagged version of an English as a Foreign Language (EFL) learner corpus. The data used in this study were extracted from the International Corpus of Learner English (Granger et al., 2009) and consist of a…

  13. Accuracy of image-plane holographic tomography with filtered backprojection: random and systematic errors.

    PubMed

    Belashov, A V; Petrov, N V; Semenova, I V

    2016-01-01

    This paper explores the concept of image-plane holographic tomography applied to the measurements of laser-induced thermal gradients in an aqueous solution of a photosensitizer with respect to the reconstruction accuracy of three-dimensional variations of the refractive index. It uses the least-squares estimation algorithm to reconstruct refractive index variations in each holographic projection. Along with the bitelecentric optical system, transferring focused projection to the sensor plane, it facilitates the elimination of diffraction artifacts and noise suppression. This work estimates the influence of typical random and systematic errors in experiments and concludes that random errors such as accidental measurement errors or noise presence can be significantly suppressed by increasing the number of recorded digital holograms. On the contrary, even comparatively small systematic errors such as a displacement of the rotation axis projection in the course of a reconstruction procedure can significantly distort the results. PMID:26835625

  14. Accuracy of travel time distribution (TTD) models as affected by TTD complexity, observation errors, and model and tracer selection

    USGS Publications Warehouse

    Green, Christopher T.; Zhang, Yong; Jurgens, Bryant C.; Starn, J. Jeffrey; Landon, Matthew K.

    2014-01-01

    Analytical models of the travel time distribution (TTD) from a source area to a sample location are often used to estimate groundwater ages and solute concentration trends. The accuracies of these models are not well known for geologically complex aquifers. In this study, synthetic datasets were used to quantify the accuracy of four analytical TTD models as affected by TTD complexity, observation errors, model selection, and tracer selection. Synthetic TTDs and tracer data were generated from existing numerical models with complex hydrofacies distributions for one public-supply well and 14 monitoring wells in the Central Valley, California. Analytical TTD models were calibrated to synthetic tracer data, and prediction errors were determined for estimates of TTDs and conservative tracer (NO3−) concentrations. Analytical models included a new, scale-dependent dispersivity model (SDM) for two-dimensional transport from the watertable to a well, and three other established analytical models. The relative influence of the error sources (TTD complexity, observation error, model selection, and tracer selection) depended on the type of prediction. Geological complexity gave rise to complex TTDs in monitoring wells that strongly affected errors of the estimated TTDs. However, prediction errors for NO3− and median age depended more on tracer concentration errors. The SDM tended to give the most accurate estimates of the vertical velocity and other predictions, although TTD model selection had minor effects overall. Adding tracers improved predictions if the new tracers had different input histories. Studies using TTD models should focus on the factors that most strongly affect the desired predictions.
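
    To make the role of an analytical TTD concrete, the sketch below convolves a hypothetical NO3− input history with an exponential (well-mixed) TTD, one of the classical analytical models; it is a generic illustration with invented numbers, not the scale-dependent dispersivity model (SDM) introduced in the study.

    ```python
    # Generic illustration of using an analytical TTD to predict a conservative
    # tracer (e.g., NO3-) concentration at a well: convolve the input history with
    # an exponential (well-mixed) TTD. Not the scale-dependent dispersivity model
    # (SDM) proposed in the paper; input history and mean age are hypothetical.
    import numpy as np

    years = np.arange(1940, 2015)                     # annual time steps
    c_in = np.clip(0.1 * (years - 1950), 0, None)     # hypothetical rising NO3- input (mg/L)

    mean_age = 25.0                                   # assumed mean travel time (years)
    tau = np.arange(0, 200)                           # travel times considered (years)
    g = np.exp(-tau / mean_age) / mean_age            # exponential TTD, sums to ~1

    def c_out(t_index):
        """Flux-averaged concentration at the well in year `years[t_index]`."""
        total = 0.0
        for k, weight in enumerate(g):
            if t_index - k < 0:
                break                                 # input history not defined earlier
            total += weight * c_in[t_index - k]
        return total

    print("predicted NO3- at the well in 2014: %.2f mg/L" % c_out(len(years) - 1))
    ```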

  15. Influence of both angle and position error of pentaprism on accuracy of pentaprism scanning system

    NASA Astrophysics Data System (ADS)

    Xu, Kun; Han, Sen; Zhang, Qiyuan; Wu, Quanying

    2014-11-01

    Pentaprism scanning system has been widely used in the measurement of large flat and wavefront, based on its property that the deviated beam will have no motion in the pitch direction. But the manufacturing and position errors of pentaprisms will bring error to the measurement and so a good error analysis method is indispensable. In this paper, we propose a new method of building mathematic models of pentaprism and through which the size and angle errors of a pentaprism can be put into the model as parameters. 4 size parameters are selected to determine the size and 11 angle parameters are selected to determine the angles of a pentaprism. Yaw, Roll and Pitch are used to describe the position error of a pentaprism and an autocollimator. A pentaprism scanning system of wavefront test is simulated by ray tracing using matlab. We design a method of separating the constant from the measurement results which will improve the measurement accuracy and analyze the system error by Monte Carlo method. This method is simple, rapid, accurate and convenient for computer programming.

  16. The effect of clock, media, and station location errors on Doppler measurement accuracy

    NASA Technical Reports Server (NTRS)

    Miller, J. K.

    1993-01-01

    Doppler tracking by the Deep Space Network (DSN) is the primary radio metric data type used by navigation to determine the orbit of a spacecraft. The accuracy normally attributed to orbits determined exclusively with Doppler data is about 0.5 microradians in geocentric angle. Recently, the Doppler measurement system has evolved to a high degree of precision primarily because of tracking at X-band frequencies (7.2 to 8.5 GHz). However, the orbit determination system has not been able to fully utilize this improved measurement accuracy because of calibration errors associated with transmission media, the location of tracking stations on the Earth's surface, the orientation of the Earth as an observing platform, and timekeeping. With the introduction of Global Positioning System (GPS) data, it may be possible to remove a significant error associated with the troposphere. In this article, the effect of various calibration errors associated with transmission media, Earth platform parameters, and clocks are examined. With the introduction of GPS calibrations, it is predicted that a Doppler tracking accuracy of 0.05 microradians is achievable.

  17. Factoring Algebraic Error for Relative Pose Estimation

    SciTech Connect

    Lindstrom, P; Duchaineau, M

    2009-03-09

    We address the problem of estimating the relative pose, i.e. translation and rotation, of two calibrated cameras from image point correspondences. Our approach is to factor the nonlinear algebraic pose error functional into translational and rotational components, and to optimize translation and rotation independently. This factorization admits subproblems that can be solved using direct methods with practical guarantees on global optimality. That is, for a given translation, the corresponding optimal rotation can be determined directly, and vice versa. We show that these subproblems are equivalent to computing the least eigenvector of second- and fourth-order symmetric tensors. When neither translation nor rotation is known, alternating translation and rotation optimization leads to a simple, efficient, and robust algorithm for pose estimation that improves on the well-known 5- and 8-point methods.
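
    For background on the baselines mentioned above, the sketch below shows the classical 8-point-style step: stack the epipolar constraints from calibrated correspondences and take the essential matrix as the least right-singular vector of the stacked matrix (equivalently, the least eigenvector of A^T A). This is a standard textbook construction for comparison, not the factored translation/rotation optimization described in the abstract.

    ```python
    # Background sketch of the classical 8-point-style estimate the paper compares
    # against; not the factored algebraic-error method described in the abstract.
    import numpy as np

    def essential_from_correspondences(x1, x2):
        """x1, x2: (N, 2) arrays of normalized (calibrated) image points, N >= 8."""
        x1h = np.column_stack([x1, np.ones(len(x1))])
        x2h = np.column_stack([x2, np.ones(len(x2))])
        # each row is kron(x2_i, x1_i), so that row @ vec(E) = x2_i^T E x1_i
        A = np.stack([np.kron(p2, p1) for p1, p2 in zip(x1h, x2h)])
        _, _, vt = np.linalg.svd(A)
        E = vt[-1].reshape(3, 3)                 # least right-singular vector of A
        u, s, v = np.linalg.svd(E)               # project onto the essential manifold
        return u @ np.diag([1.0, 1.0, 0.0]) @ v

    # quick check on noise-free synthetic correspondences from a known (hypothetical) pose
    rng = np.random.default_rng(4)
    R = np.eye(3)                                # rotation (identity, for brevity)
    t = np.array([1.0, 0.2, 0.0])                # translation
    X = rng.uniform(-1, 1, (20, 3)) + [0.0, 0.0, 5.0]   # 3-D points in front of camera 1
    x1 = X[:, :2] / X[:, 2:]                     # ideal projections in camera 1
    X2 = X @ R.T + t                             # same points in camera 2 coordinates
    x2 = X2[:, :2] / X2[:, 2:]

    E = essential_from_correspondences(x1, x2)
    residuals = [abs(np.append(p2, 1.0) @ E @ np.append(p1, 1.0)) for p1, p2 in zip(x1, x2)]
    print("max epipolar residual:", max(residuals))   # ~0 for noise-free data
    ```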

  18. Assessment of the sources of error affecting the quantitative accuracy of SPECT imaging in small animals

    SciTech Connect

    Hwang, Andrew B.; Franc, Benjamin L.; Gullberg, Grant T.; Hasegawa, Bruce H.

    2008-02-15

    Small animal SPECT imaging systems have multiple potential applications in biomedical research. Whereas SPECT data are commonly interpreted qualitatively in a clinical setting, the ability to accurately quantify measurements will increase the utility of the SPECT data for laboratory measurements involving small animals. In this work, we assess the effect of photon attenuation, scatter and partial volume errors on the quantitative accuracy of small animal SPECT measurements, first with Monte Carlo simulation and then confirmed with experimental measurements. The simulations modeled the imaging geometry of a commercially available small animal SPECT system. We simulated the imaging of a radioactive source within a cylinder of water, and reconstructed the projection data using iterative reconstruction algorithms. The size of the source and the size of the surrounding cylinder were varied to evaluate the effects of photon attenuation and scatter on quantitative accuracy. We found that photon attenuation can reduce the measured concentration of radioactivity in a volume of interest in the center of a rat-sized cylinder of water by up to 50% when imaging with iodine-125, and up to 25% when imaging with technetium-99m. When imaging with iodine-125, the scatter-to-primary ratio can reach up to approximately 30%, and can cause overestimation of the radioactivity concentration when reconstructing data with attenuation correction. We varied the size of the source to evaluate partial volume errors, which we found to be a strong function of the size of the volume of interest and the spatial resolution. These errors can result in large (>50%) changes in the measured amount of radioactivity. The simulation results were compared with and found to agree with experimental measurements. The inclusion of attenuation correction in the reconstruction algorithm improved quantitative accuracy. We also found that an improvement of the spatial resolution through the

  19. An assessment of accuracy, error, and conflict with support values from genome-scale phylogenetic data.

    PubMed

    Taylor, Derek J; Piel, William H

    2004-08-01

    Despite the importance of molecular phylogenetics, few of its assumptions have been tested with real data. It is commonly assumed that nonparametric bootstrap values are an underestimate of the actual support, Bayesian posterior probabilities are an overestimate of the actual support, and among-gene phylogenetic conflict is low. We directly tested these assumptions by using a well-supported yeast reference tree. We found that bootstrap values were not significantly different from accuracy. Bayesian support values were, however, significant overestimates of accuracy but still had low false-positive error rates (0% to 2.8%) at the highest values (>99%). Although we found evidence for a branch-length bias contributing to conflict, there was little evidence for widespread, strongly supported among-gene conflict from bootstraps. The results demonstrate that caution is warranted concerning conclusions of conflict based on the assumption of underestimation for support values in real data. PMID:15140947

  20. Improving the accuracy of computed 13C NMR shift predictions by specific environment error correction: fragment referencing.

    PubMed

    Andrews, Keith G; Spivey, Alan C

    2013-11-15

    The accuracy of both Gauge-including atomic orbital (GIAO) and continuous set of gauge transformations (CSGT) (13)C NMR spectra prediction by Density Functional Theory (DFT) at the B3LYP/6-31G** level is shown to be usefully enhanced by employing a 'fragment referencing' method for predicting chemical shifts without recourse to empirical scaling. Fragment referencing refers to a process of reducing the error in calculating a particular NMR shift by consulting a similar molecule for which the error in the calculation is easily deduced. The absolute accuracy of the chemical shifts predicted when employing fragment referencing relative to conventional techniques (e.g., using TMS or MeOH/benzene dual referencing) is demonstrated to be improved significantly for a range of substrates, which illustrates the superiority of the technique particularly for systems with similar chemical shifts arising from different chemical environments. The technique is particularly suited to molecules of relatively low molecular weight containing 'non-standard' magnetic environments, e.g., α to halogen atoms, which are poorly predicted by other methods. The simplicity and speed of the technique mean that it can be employed to resolve routine structural assignment problems that require a degree of accuracy not provided by standard incremental or hierarchically ordered spherical description of environment (HOSE) algorithms. The approach is also demonstrated to be applicable when employing the MP2 method at 6-31G**, cc-pVDZ, aug-cc-pVDZ, and cc-pVTZ levels, although none of these offer advantage in terms of accuracy of prediction over the B3LYP/6-31G** DFT method.
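
    The arithmetic behind fragment referencing, as described, is a simple error transfer: the raw computed shift of a target nucleus is corrected by the calculation error observed for a chemically similar nucleus in a reference fragment whose experimental shift is known. The numbers in the sketch below are invented for illustration; they are not taken from the paper.

    ```python
    # Minimal, hypothetical illustration of fragment referencing: correct a computed
    # 13C shift for a target nucleus using the calculation error observed on a
    # similar nucleus in a small reference fragment with a known experimental shift.
    # All numerical values below are invented for illustration only.

    def fragment_referenced_shift(delta_calc_target, delta_calc_fragment, delta_exp_fragment):
        """Shift prediction corrected by the error measured on the reference fragment."""
        systematic_error = delta_calc_fragment - delta_exp_fragment
        return delta_calc_target - systematic_error

    # hypothetical values (ppm): a carbon alpha to bromine in the target molecule,
    # referenced against the analogous carbon in a small bromoalkane fragment
    delta_calc_target = 41.7     # raw computed prediction for the target carbon
    delta_calc_fragment = 39.9   # raw computed prediction for the fragment carbon
    delta_exp_fragment = 33.5    # experimental shift of that fragment carbon

    print(fragment_referenced_shift(delta_calc_target, delta_calc_fragment, delta_exp_fragment))
    # -> 35.3 ppm predicted for the target carbon
    ```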

  1. Monitoring memory errors: the influence of the veracity of retrieved information on the accuracy of judgements of learning.

    PubMed

    Rhodes, Matthew G; Tauber, Sarah K

    2011-11-01

    The current study examined the degree to which predictions of memory performance made immediately or at a delay are sensitive to confidently held memory illusions. Participants studied unrelated pairs of words and made judgements of learning (JOLs) for each item, either immediately or after a delay. Half of the unrelated pairs (deceptive items; e.g., nurse-dollar) had a semantically related competitor (e.g., doctor) that was easily accessible when given a test cue (e.g., nurse-do_ _ _r) and half had no semantically related competitor (control items; e.g., subject-dollar). Following the study phase, participants were administered a cued recall test. Results from Experiment 1 showed that memory performance was less accurate for deceptive compared with control items. In addition, delaying judgement improved the relative accuracy of JOLs for control items but not for deceptive items. Subsequent experiments explored the degree to which the relative accuracy of delayed JOLs for deceptive items improved as a result of a warning to ensure that retrieved memories were accurate (Experiment 2) and corrective feedback regarding the veracity of information retrieved prior to making a JOL (Experiment 3). In all, these data suggest that delayed JOLs may be largely insensitive to memory errors unless participants are provided with feedback regarding memory accuracy.

  2. The Relative Frequency of Spanish Pronunciation Errors.

    ERIC Educational Resources Information Center

    Hammerly, Hector

    Types of hierarchies of pronunciation difficulty are discussed, and a hierarchy based on contrastive analysis plus informal observation is proposed. This hierarchy is less one of initial difficulty than of error persistence. One feature of this hierarchy is that, because of lesser learner awareness and very limited functional load, errors…

  3. Accuracy and sampling error of two age estimation techniques using rib histomorphometry on a modern sample.

    PubMed

    García-Donas, Julieta G; Dyke, Jeffrey; Paine, Robert R; Nathena, Despoina; Kranioti, Elena F

    2016-02-01

    Most age estimation methods have proven problematic when applied in highly fragmented skeletal remains. Rib histomorphometry is advantageous in such cases; yet it is vital to test and revise existing techniques particularly when used in legal settings (Crowder and Rosella, 2007). This study tested Stout & Paine (1992) and Stout et al. (1994) histological age estimation methods on a Modern Greek sample using different sampling sites. Six left 4th ribs of known age and sex were selected from a modern skeletal collection. Each rib was cut into three equal segments. Two thin sections were acquired from each segment. A total of 36 thin sections were prepared and analysed. Four variables (cortical area, intact and fragmented osteon density and osteon population density) were calculated for each section and age was estimated according to Stout & Paine (1992) and Stout et al. (1994). The results showed that both methods produced a systematic underestimation of the individuals' ages (by up to 43 years), although a general improvement in accuracy levels was observed when applying the Stout et al. (1994) formula. Error rates increase with increasing age, with the oldest individual showing extreme differences between real age and estimated age. Comparison of the different sampling sites showed small differences between the estimated ages, suggesting that any fragment of the rib could be used without introducing significant error. Yet, a larger sample should be used to confirm these results. PMID:26698389

  5. Accounting for systematic errors in bioluminescence imaging to improve quantitative accuracy

    NASA Astrophysics Data System (ADS)

    Taylor, Shelley L.; Perry, Tracey A.; Styles, Iain B.; Cobbold, Mark; Dehghani, Hamid

    2015-07-01

    Bioluminescence imaging (BLI) is a widely used pre-clinical imaging technique, but there are a number of limitations to its quantitative accuracy. This work uses an animal model to demonstrate some significant limitations of BLI and presents processing methods and algorithms which overcome these limitations, increasing the quantitative accuracy of the technique. The position of the imaging subject and source depth are both shown to affect the measured luminescence intensity. Free Space Modelling is used to eliminate the systematic error due to the camera/subject geometry, removing the dependence of luminescence intensity on animal position. Bioluminescence tomography (BLT) is then used to provide additional information about the depth and intensity of the source. A substantial limitation in the number of sources identified using BLI is also presented. It is shown that when a given source is at a significant depth, it can appear as multiple sources when imaged using BLI, while the use of BLT recovers the true number of sources present.

  6. Dissociable correlates of response conflict and error awareness in error-related brain activity

    PubMed Central

    Hughes, Gethin; Yeung, Nick

    2010-01-01

    Errors in speeded decision tasks are associated with characteristic patterns of brain activity. In the scalp-recorded EEG, error processing is reflected in two components, the error-related negativity (ERN) and the error positivity (Pe). These components have been widely studied, but debate remains regarding the precise aspects of error processing they reflect. The present study investigated the relation between the ERN and Pe using a novel version of the flanker task to allow a comparison between errors reflecting different causes—response conflict versus stimulus masking. The conflict and mask conditions were matched for overall behavioural performance but differed in underlying response dynamics, as indexed by response time distributions and measures of lateralised motor activity. ERN amplitude varied in relation to these differing response dynamics, being significantly larger in the conflict condition compared to the mask condition. Furthermore, differences in response dynamics between participants were predictive of modulations in ERN amplitude. In contrast, Pe activity varied little between conditions, but varied across trials in relation to participants' awareness of their errors. Taken together, these findings suggest a dissociation between the ERN and Pe, with the former reflecting the dynamics of response selection and conflict, and the latter reflecting conscious recognition of an error. PMID:21130788

  7. Scaling Relation for Occulter Manufacturing Errors

    NASA Technical Reports Server (NTRS)

    Sirbu, Dan; Shaklan, Stuart B.; Kasdin, N. Jeremy; Vanderbei, Robert J.

    2015-01-01

    An external occulter is a spacecraft flown along the line-of-sight of a space telescope to suppress starlight and enable high-contrast direct imaging of exoplanets. The shape of an external occulter must be specially designed to optimally suppress starlight, and deviations from the ideal shape due to manufacturing errors can result in loss of suppression in the shadow. Due to the long separation distances and large dimensions involved for a space occulter, laboratory testing is conducted with scaled versions of occulters etched on silicon wafers. Using numerical simulations for a flight Fresnel occulter design, we show how the suppression performance of an occulter mask scales with the available propagation distance for expected random manufacturing defects along the edge of the occulter petal. We derive an analytical model for predicting performance due to such manufacturing defects across the petal edges of an occulter mask and compare this with the numerical simulations. We discuss the scaling of an extended occulter test-bed.

  8. Abnormal error monitoring in math-anxious individuals: evidence from error-related brain potentials.

    PubMed

    Suárez-Pellicioni, Macarena; Núñez-Peña, María Isabel; Colomé, Angels

    2013-01-01

    This study used event-related brain potentials to investigate whether math anxiety is related to abnormal error monitoring processing. Seventeen high math-anxious (HMA) and seventeen low math-anxious (LMA) individuals were presented with a numerical and a classical Stroop task. Groups did not differ in terms of trait or state anxiety. We found enhanced error-related negativity (ERN) in the HMA group when subjects committed an error on the numerical Stroop task, but not on the classical Stroop task. Groups did not differ in terms of the correct-related negativity component (CRN), the error positivity component (Pe), classical behavioral measures or post-error measures. The amplitude of the ERN was negatively related to participants' math anxiety scores, showing a more negative amplitude as the score increased. Moreover, using standardized low resolution electromagnetic tomography (sLORETA) we found greater activation of the insula in errors on a numerical task as compared to errors in a non-numerical task only for the HMA group. The results were interpreted according to the motivational significance theory of the ERN.

  9. The feedback-related negativity signals salience prediction errors, not reward prediction errors.

    PubMed

    Talmi, Deborah; Atkinson, Ryan; El-Deredy, Wael

    2013-05-01

    Modulations of the feedback-related negativity (FRN) event-related potential (ERP) have been suggested as a potential biomarker in psychopathology. A dominant theory about this signal contends that it reflects the operation of the neural system underlying reinforcement learning in humans. The theory suggests that this frontocentral negative deflection in the ERP 230-270 ms after the delivery of a probabilistic reward expresses a prediction error signal derived from midbrain dopaminergic projections to the anterior cingulate cortex. We tested this theory by investigating whether FRN will also be observed for an inherently aversive outcome: physical pain. In another session, the outcome was monetary reward instead of pain. As predicted, unexpected reward omissions (a negative reward prediction error) yielded a more negative deflection relative to unexpected reward delivery. Surprisingly, unexpected pain omission (a positive reward prediction error) also yielded a negative deflection relative to unexpected pain delivery. Our data challenge the theory by showing that the FRN expresses aversive prediction errors with the same sign as reward prediction errors. Both FRNs were spatiotemporally and functionally equivalent. We suggest that FRN expresses salience prediction errors rather than reward prediction errors. PMID:23658166

  11. Comparative study of application accuracy of two frameless neuronavigation systems: experimental error assessment quantifying registration methods and clinically influencing factors.

    PubMed

    Paraskevopoulos, Dimitrios; Unterberg, Andreas; Metzner, Roland; Dreyhaupt, Jens; Eggers, Georg; Wirtz, Christian Rainer

    2010-04-01

    This study aimed at comparing the accuracy of two commercial neuronavigation systems. Error assessment and quantification of clinical factors and surface registration, often resulting in decreased accuracy, were intended. Active (Stryker Navigation) and passive (VectorVision Sky, BrainLAB) neuronavigation systems were tested with an anthropomorphic phantom with a deformable layer, simulating skin and soft tissue. True coordinates measured by computer numerical control were compared with coordinates on image data and during navigation, to calculate software and system accuracy respectively. Comparison of image and navigation coordinates was used to evaluate navigation accuracy. Both systems achieved an overall accuracy of <1.5 mm. Stryker achieved better software accuracy, whereas BrainLAB better system and navigation accuracy. Factors with conspicuous influence (P<0.01) were imaging, instrument replacement, sterile cover drape and geometry of instruments. Precision data indicated by the systems did not reflect measured accuracy in general. Surface matching resulted in no improvement of accuracy, confirming former studies. Laser registration showed no differences compared to conventional pointers. Differences between the two systems were limited. Surface registration may improve inaccurate point-based registrations but does not in general affect overall accuracy. Accuracy feedback by the systems does not always match with true target accuracy and requires critical evaluation from the surgeon.
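
    Each accuracy figure above reduces to a mean Euclidean distance between two sets of corresponding 3D points. A minimal sketch of that computation follows; the coordinate values and the three comparisons (truth vs. image for software accuracy, truth vs. navigation for system accuracy, image vs. navigation for navigation accuracy) are illustrative, following the wording of the abstract rather than the study's actual data.

```python
# Hedged sketch: accuracy as mean Euclidean distance between corresponding points.
# All coordinates below are illustrative placeholders.
import numpy as np

def mean_distance(a, b):
    return float(np.linalg.norm(a - b, axis=1).mean())

truth = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])        # CNC-measured positions
image = np.array([[0.3, -0.2, 0.1], [10.4, 0.1, -0.3]])       # positions on image data
navigated = np.array([[0.5, 0.4, -0.2], [10.9, -0.3, 0.2]])   # navigated pointer readings

print("software accuracy (mm):", mean_distance(truth, image))
print("system accuracy (mm):", mean_distance(truth, navigated))
print("navigation accuracy (mm):", mean_distance(image, navigated))
```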

  12. Impacts of motivational valence on the error-related negativity elicited by full and partial errors.

    PubMed

    Maruo, Yuya; Schacht, Annekathrin; Sommer, Werner; Masaki, Hiroaki

    2016-02-01

    Affect and motivation influence the error-related negativity (ERN) elicited by full errors; however, it is unknown whether they also influence ERNs to correct responses accompanied by covert incorrect response activation (partial errors). Here we compared a neutral condition with conditions where correct responses were rewarded or incorrect responses were punished with gains and losses of small amounts of money, respectively. Data analysis distinguished ERNs elicited by full and partial errors. In the reward and punishment conditions, ERN amplitudes to both full and partial errors were larger than in the neutral condition, confirming participants' sensitivity to the significance of errors. We also investigated the relationships between ERN amplitudes and the behavioral inhibition and activation systems (BIS/BAS). Regardless of reward/punishment condition, participants scoring higher on BAS showed smaller ERN amplitudes in full error trials. These findings provide further evidence that the ERN is related to motivational valence and that similar relationships hold for both full and partial errors. PMID:26747414

  13. Accuracy in Parameter Estimation for the Root Mean Square Error of Approximation: Sample Size Planning for Narrow Confidence Intervals.

    PubMed

    Kelley, Ken; Lai, Keke

    2011-02-01

    The root mean square error of approximation (RMSEA) is one of the most widely reported measures of misfit/fit in applications of structural equation modeling. When the RMSEA is of interest, so too should be the accompanying confidence interval. A narrow confidence interval reveals that the plausible parameter values are confined to a relatively small range at the specified level of confidence. The accuracy in parameter estimation approach to sample size planning is developed for the RMSEA so that the confidence interval for the population RMSEA will have a width whose expectation is sufficiently narrow. The analytic developments are shown, via a Monte Carlo simulation study, to work well. Freely available computer software is developed so that the methods discussed can be implemented. The methods are demonstrated for a repeated measures design in which the way that social relationships and initial depression influence coping strategies and later depression is examined.
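
    The confidence interval that drives the sample-size planning comes from inverting the noncentral chi-square distribution of the model fit statistic. The sketch below shows one common way to compute such an interval (a Steiger-style inversion); it is an illustration under that assumption, not the authors' software, and the input values are made up.

```python
# Hedged sketch: 90% CI for the population RMSEA by inverting the noncentral
# chi-square distribution of the fit statistic T with df degrees of freedom
# and sample size N. Illustrative only.
from scipy.stats import chi2, ncx2
from scipy.optimize import brentq

def rmsea_ci(T, df, N, level=0.90):
    alpha = (1.0 - level) / 2.0

    def ncp_for(prob):
        # Noncentrality lam with P(X <= T) = prob for X ~ noncentral chi2(df, lam).
        if chi2.cdf(T, df) < prob:       # even lam = 0 puts too little mass below T
            return 0.0
        f = lambda lam: ncx2.cdf(T, df, lam) - prob
        hi = max(T, df) + 1.0
        while f(hi) > 0:                 # expand bracket until the sign changes
            hi *= 2.0
        return brentq(f, 1e-9, hi)

    scale = df * (N - 1)
    lam_lower, lam_upper = ncp_for(1.0 - alpha), ncp_for(alpha)
    return (lam_lower / scale) ** 0.5, (lam_upper / scale) ** 0.5

print(rmsea_ci(T=85.3, df=40, N=200))    # example values; roughly (0.05, 0.10)
```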

  14. An assessment of template-guided implant surgery in terms of accuracy and related factors

    PubMed Central

    Lee, Jee-Ho; Park, Ji-Man; Kim, Soung-Min; Kim, Myung-Joo; Lee, Jong-Ho

    2013-01-01

    PURPOSE Template-guided implant therapy has developed hand-in-hand with computed tomography (CT) to improve the accuracy of implant surgery and future prosthodontic treatment. In our present study, the accuracy and causative factors for computer-assisted implant surgery were assessed to further validate the stable clinical application of this technique. MATERIALS AND METHODS A total of 102 implants in 48 patients were included in this study. Implant surgery was performed with a stereolithographic template. Pre- and post-operative CTs were used to compare the planned and placed implants. Accuracy and related factors were statistically analyzed with the Spearman correlation method and the linear mixed model. Differences were considered to be statistically significant at P≤.05. RESULTS The mean errors of computer-assisted implant surgery were 1.09 mm at the coronal center and 1.56 mm at the apical center, and the axis deviation was 3.80°. The coronal and apical errors of the implants were found to be strongly correlated. Errors at the coronal center were magnified at the apical center by the fixture length. Cases involving an anterior edentulous area and longer fixtures affected the accuracy of the implant template. CONCLUSION Control of errors at the coronal center and stabilization of the anterior part of the template are needed for safe implant surgery and future prosthodontic treatment. PMID:24353883
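
    The magnification of coronal error along the fixture can be pictured with a simple worst-case geometric model in which the lateral offset at the platform and the angular deviation of the axis act in the same direction. The sketch below uses that simplification (mine, not the study's statistical model), with the mean values reported above purely as example inputs.

```python
# Hedged worst-case geometry: apical error = coronal error + L * tan(axis deviation),
# assuming the lateral offset and the angular deviation act in the same direction.
import math

def apical_error_mm(coronal_error_mm, axis_deviation_deg, fixture_length_mm):
    return coronal_error_mm + fixture_length_mm * math.tan(math.radians(axis_deviation_deg))

for length_mm in (8, 10, 13):
    err = apical_error_mm(1.09, 3.80, length_mm)   # mean values from the abstract
    print(f"{length_mm} mm fixture -> ~{err:.2f} mm at the apex")
```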

  15. Reward value enhances post-decision error-related activity in the cingulate cortex.

    PubMed

    Taylor, Jessica E; Ogawa, Akitoshi; Sakagami, Masamichi

    2016-06-01

    By saying "Anyone who has never made a mistake has never tried anything new", Albert Einstein himself allegedly implied that the making and processing of errors are essential for behavioral adaption to a new or changing environment. These essential error-related cognitive and neural processes are likely influenced by reward value. However, previous studies have not dissociated accuracy and value and so the distinct effect of reward on error processing in the brain remained unknown. Therefore, we set out to investigate this at various points in decision-making. We used functional magnetic resonance imaging to scan participants while they completed a random dot motion discrimination task where reward and non-reward were associated with stimuli via classical conditioning. Pre-error activity was found in the medial frontal cortex prior to response but this was not related to reward value. At response time, error-related activity was found to be significantly greater in reward than non-reward trials in the midcingulate cortex. Finally at outcome time, error-related activity was found in the anterior cingulate cortex in non-reward trials. These results show that reward value enhances post-decision but not pre-decision error-related activities and these results therefore have implications for theories of error correction and confidence.

  17. Medical error and related factors during internship and residency.

    PubMed

    Ahmadipour, Habibeh; Nahid, Mortazavi

    2015-01-01

    It is difficult to determine the real incidence of medical errors due to the lack of a precise definition of errors, as well as the failure to report them under certain circumstances. We carried out a cross-sectional study in Kerman University of Medical Sciences, Iran in 2013. The participants were selected through the census method. The data were collected using a self-administered questionnaire, which consisted of questions on the participants' demographic data and questions on the medical errors committed. The data were analysed using SPSS 19. It was found that 270 participants had committed medical errors. There was no significant difference in the frequency of errors committed by interns and residents. Among residents, the most common error was misdiagnosis; among interns, it was errors related to history-taking and physical examination. Considering that medical errors are common in the clinical setting, the education system should train interns and residents to prevent the occurrence of errors. In addition, the system should foster a positive attitude among them so that they can deal better with medical errors.

  18. Evaluating clinical accuracy of continuous glucose monitoring systems: Continuous Glucose-Error Grid Analysis (CG-EGA).

    PubMed

    Clarke, William L; Anderson, Stacey; Kovatchev, Boris

    2008-08-01

    Continuous Glucose Sensors (CGS) generate rich and informative continuous data streams which have the potential to improve the glycemic condition of the patient with diabetes. Such data are critical to the development of closed-loop systems for automated glycemic control. Thus the numerical and clinical accuracy of such data must be assured. Although the numerical point accuracy of these systems has been described using traditional statistics, there are, as yet, no requirements for determining and reporting the rate (trend) accuracy of the data generated. In addition, little attention has been paid to the clinical accuracy of these systems. Continuous Glucose-Error Grid Analysis (CG-EGA) is the only method currently available for assessing the clinical accuracy of such data and reporting this accuracy for each of the relevant glycemic ranges: hypoglycemia, euglycemia, and hyperglycemia. This manuscript reviews the development of the original Error Grid Analysis (EGA) and describes its inadequacies when used to determine the point accuracy of CGS systems. The development of CG-EGA as a logical extension of EGA for use with CGS is described in detail, and examples of how it can be used to describe the clinical accuracy of several CGS are shown. Information is presented on how to obtain assistance with the use of CG-EGA.
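
    The distinguishing feature of CG-EGA is that it assesses rate (trend) accuracy alongside point accuracy before assigning clinical-risk zones. The sketch below computes only those two raw ingredients, paired point errors and paired rate-of-change errors, from an illustrative sensor/reference trace; the official zone boundaries of EGA and CG-EGA are not reproduced here.

```python
# Hedged sketch: the point-error and rate-error inputs that CG-EGA builds on.
# The traces below are made-up example data, not clinical measurements.
import numpy as np

t = np.arange(0, 60, 5)   # minutes
reference = np.array([180, 175, 168, 160, 150, 141, 133, 126, 120, 115, 111, 108.])
sensor    = np.array([190, 182, 172, 166, 158, 147, 136, 130, 122, 119, 113, 109.])

point_error = sensor - reference              # mg/dL
ref_rate = np.gradient(reference, t)          # mg/dL per minute
sens_rate = np.gradient(sensor, t)
rate_error = sens_rate - ref_rate

print("mean absolute point error (mg/dL):", np.abs(point_error).mean().round(2))
print("mean absolute rate error (mg/dL/min):", np.abs(rate_error).mean().round(3))
```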

  19. The uncertainty of errors: Intolerance of uncertainty is associated with error-related brain activity.

    PubMed

    Jackson, Felicia; Nelson, Brady D; Hajcak, Greg

    2016-01-01

    Errors are unpredictable events that have the potential to cause harm. The error-related negativity (ERN) is the electrophysiological index of errors and has been posited to reflect sensitivity to threat. Intolerance of uncertainty (IU) is the tendency to perceive uncertain events as threatening. In the present study, 61 participants completed a self-report measure of IU and a flanker task designed to elicit the ERN. Results indicated that IU subscales were associated with the ERN in opposite directions. Cognitive distress in the face of uncertainty (Prospective IU) was associated with a larger ERN and slower reaction time. Inhibition in response to uncertainty (Inhibitory IU) was associated with a smaller ERN and faster reaction time. This study suggests that sensitivity to the uncertainty of errors contributes to the magnitude of the ERN. Furthermore, these findings highlight the importance of considering the heterogeneity of anxiety phenotypes in relation to measures of threat sensitivity. PMID:26607441

  1. High Accuracy Acoustic Relative Humidity Measurement in Duct Flow with Air

    PubMed Central

    van Schaik, Wilhelm; Grooten, Mart; Wernaart, Twan; van der Geld, Cees

    2010-01-01

    An acoustic relative humidity sensor for air-steam mixtures in duct flow is designed and tested. Theory, construction, calibration, considerations on dynamic response and results are presented. The measurement device is capable of measuring line-averaged values of gas velocity, temperature and relative humidity (RH) instantaneously, by applying two ultrasonic transducers and an array of four temperature sensors. Measurement ranges are: gas velocity of 0–12 m/s with an error of ±0.13 m/s, temperature 0–100 °C with an error of ±0.07 °C and relative humidity 0–100% with accuracy better than 2% RH above 50 °C. The main advantage over conventional humidity sensors is the high sensitivity at high RH at temperatures exceeding 50 °C, with accuracy increasing with temperature. The sensors are non-intrusive and resist highly humid environments. PMID:22163610
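
    The line-averaged gas velocity and speed of sound follow from the two counter-propagating transit times in the usual ultrasonic time-of-flight way. The sketch below shows those standard relations with made-up numbers; the paper's own calibration and moist-air model are not reproduced, so the humidity step is only indicated.

```python
# Hedged sketch of standard time-of-flight relations for an acoustic path of
# length L: t_down = L/(c+v) and t_up = L/(c-v), so c and the line-averaged
# axial velocity v follow from the two transit times. Numbers are illustrative.
L = 0.30                        # path length in metres
t_down, t_up = 865e-6, 880e-6   # transit times in seconds

v = 0.5 * L * (1.0 / t_down - 1.0 / t_up)   # gas velocity along the path
c = 0.5 * L * (1.0 / t_down + 1.0 / t_up)   # speed of sound

print(f"velocity ~ {v:.2f} m/s, speed of sound ~ {c:.1f} m/s")
# c, together with the measured temperature, constrains the vapour fraction
# (i.e. relative humidity) through a moist-air sound-speed model (not shown).
```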

  3. 26 CFR 1.6662-2 - Accuracy-related penalty.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 26 Internal Revenue 13 2013-04-01 2013-04-01 false Accuracy-related penalty. 1.6662-2 Section 1... contained in 26 CFR part 1 revised April 1, 1995) apply to returns the due date of which (determined without....6662-3 (as contained in 26 CFR part 1 revised April 1, 1995) relating to those penalties will apply...

  4. Error-disturbance uncertainty relations studied in neutron optics

    NASA Astrophysics Data System (ADS)

    Sponar, Stephan; Sulyok, Georg; Demirel, Bulent; Hasegawa, Yuji

    2016-09-01

    Heisenberg's uncertainty principle is probably the most famous statement of quantum physics, and its essential aspects are well described by formulations in terms of standard deviations. However, a naive Heisenberg-type error-disturbance relation is not valid. An alternative universally valid relation was derived by Ozawa in 2003. Though universally valid, Ozawa's relation is not optimal. Recently, Branciard derived a tight error-disturbance uncertainty relation (EDUR), describing the optimal trade-off between error and disturbance. Here, we report a neutron-optical experiment that records the error of a spin-component measurement, as well as the disturbance caused on another spin component, to test EDURs. We demonstrate that Heisenberg's original EDUR is violated, and that Ozawa's and Branciard's EDURs are valid, in a wide range of experimental parameters, applying a new measurement procedure referred to as the two-state method.
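
    For reference, the relations at issue can be written as follows, with ε(A) the error of the approximate A measurement, η(B) the disturbance imparted to B, and σ the state's standard deviations (recalled here in the standard notation; the original papers should be consulted for the precise operational definitions):

```latex
\begin{align}
  \text{Heisenberg (naive form):}\quad
    & \epsilon(A)\,\eta(B) \;\ge\; \tfrac{1}{2}\,\bigl|\langle [A,B] \rangle\bigr| \\
  \text{Ozawa (2003):}\quad
    & \epsilon(A)\,\eta(B) + \epsilon(A)\,\sigma(B) + \sigma(A)\,\eta(B)
      \;\ge\; \tfrac{1}{2}\,\bigl|\langle [A,B] \rangle\bigr|
\end{align}
```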

  5. Lexical Errors and Accuracy in Foreign Language Writing. Second Language Acquisition

    ERIC Educational Resources Information Center

    del Pilar Agustin Llach, Maria

    2011-01-01

    Lexical errors are a determinant in gaining insight into vocabulary acquisition, vocabulary use and writing quality assessment. Lexical errors are very frequent in the written production of young EFL learners, but they decrease as learners gain proficiency. Misspellings are the most common category, but formal errors give way to semantic-based…

  6. System-related factors contributing to diagnostic errors.

    PubMed

    Thammasitboon, Satid; Thammasitboon, Supat; Singhal, Geeta

    2013-10-01

    Several studies in primary care, internal medicine, and emergency departments show that rates of errors in test requests and result interpretations are unacceptably high and translate into missed, delayed, or erroneous diagnoses. Ineffective follow-up of diagnostic test results could lead to patient harm if appropriate therapeutic interventions are not delivered in a timely manner. The frequency of system-related factors that contribute directly to diagnostic errors depends on the types and sources of errors involved. Recent studies reveal that the errors and patient harm in the diagnostic testing loop have occurred mainly at the pre- and post-analytic phases, which are directed primarily by clinicians who may have limited expertise in the rapidly expanding field of clinical pathology. These errors may include inappropriate test requests, failure/delay in receiving results, and erroneous interpretation and application of test results to patient care. Efforts to address system-related factors often focus on technical errors in laboratory testing or failures in delivery of intended treatment. System-improvement strategies related to diagnostic errors tend to focus on technical aspects of laboratory medicine or delivery of treatment after completion of the diagnostic process. System failures and cognitive errors, more often than not, coexist and together contribute to the incidents of errors in diagnostic process and in laboratory testing. The use of highly structured hand-off procedures and pre-planned follow-up for any diagnostic test could improve efficiency and reliability of the follow-up process. Many feedback pathways should be established so that providers can learn if or when a diagnosis is changed. Patients can participate in the effort to reduce diagnostic errors. Providers should educate their patients about diagnostic probabilities and uncertainties. The patient-safety strategies focusing on the interface between diagnostic system and therapeutic

  7. Refractive Error, Axial Length, and Relative Peripheral Refractive Error before and after the Onset of Myopia

    PubMed Central

    Mutti, Donald O.; Hayes, John R.; Mitchell, G. Lynn; Jones, Lisa A.; Moeschberger, Melvin L.; Cotter, Susan A.; Kleinstein, Robert N.; Manny, Ruth E.; Twelker, J. Daniel; Zadnik, Karla

    2009-01-01

    Purpose To evaluate refractive error, axial length, and relative peripheral refractive error before, during the year of, and after the onset of myopia in children who became myopic compared with emmetropes. Methods Subjects were 605 children 6 to 14 years of age who became myopic (at least −0.75 D in each meridian) and 374 emmetropic (between −0.25 D and + 1.00 D in each meridian at all visits) children participating between 1995 and 2003 in the Collaborative Longitudinal Evaluation of Ethnicity and Refractive Error (CLEERE) Study. Axial length was measured annually by A-scan ultrasonography. Relative peripheral refractive error (the difference between the spherical equivalent cycloplegic autorefraction 30° in the nasal visual field and in primary gaze) was measured using either of two autorefractors (R-1; Canon, Lake Success, NY [no longer manufactured] or WR 5100-K; Grand Seiko, Hiroshima, Japan). Refractive error was measured with the same autorefractor with the subjects under cycloplegia. Each variable in children who became myopic was compared to age-, gender-, and ethnicity-matched model estimates of emmetrope values for each annual visit from 5 years before through 5 years after the onset of myopia. Results In the sample as a whole, children who became myopic had less hyperopia and longer axial lengths than did emmetropes before and after the onset of myopia (4 years before through 5 years after for refractive error and 3 years before through 5 years after for axial length; P < 0.0001 for each year). Children who became myopic had more hyperopic relative peripheral refractive errors than did emmetropes from 2 years before onset through 5 years after onset of myopia (P < 0.002 for each year). The fastest rate of change in refractive error, axial length, and relative peripheral refractive error occurred during the year before onset rather than in any year after onset. Relative peripheral refractive error remained at a consistent level of hyperopia each

  8. Assessing the Accuracy and Feasibility of a Refractive Error Screening Program Conducted by School Teachers in Pre-Primary and Primary Schools in Thailand

    PubMed Central

    Teerawattananon, Kanlaya; Myint, Chaw-Yin; Wongkittirux, Kwanjai; Teerawattananon, Yot; Chinkulkitnivat, Bunyong; Orprayoon, Surapong; Kusakul, Suwat; Tengtrisorn, Supaporn; Jenchitr, Watanee

    2014-01-01

    Introduction As part of the development of a system for the screening of refractive error in Thai children, this study describes the accuracy and feasibility of establishing a program conducted by teachers. Objective To assess the accuracy and feasibility of screening by teachers. Methods A cross-sectional descriptive and analytical study was conducted in 17 schools in four provinces representing four geographic regions in Thailand. A two-staged cluster sampling was employed to compare the detection rate of refractive error among eligible students between trained teachers and health professionals. Serial focus group discussions were held for teachers and parents in order to understand their attitude towards refractive error screening at schools and the potential success factors and barriers. Results The detection rate of refractive error screening by teachers among pre-primary school children is relatively low (21%) for mild visual impairment but higher for moderate visual impairment (44%). The detection rate for primary school children is high for both levels of visual impairment (52% for mild and 74% for moderate). The focus group discussions reveal that both teachers and parents would benefit from further education regarding refractive errors and that the vast majority of teachers are willing to conduct a school-based screening program. Conclusion Refractive error screening by health professionals in pre-primary and primary school children is not currently implemented in Thailand due to resource limitations. However, evidence suggests that a refractive error screening program conducted in schools by teachers in the country is reasonable and feasible because the detection and treatment of refractive error in very young generations is important and the screening program can be implemented and conducted with relatively low costs. PMID:24926993

  9. SU-E-J-235: Varian Portal Dosimetry Accuracy at Detecting Simulated Delivery Errors

    SciTech Connect

    Gordon, J; Bellon, M; Barton, K; Gulam, M; Chetty, I

    2014-06-01

    Purpose: To use receiver operating characteristic (ROC) analysis to quantify the Varian Portal Dosimetry (VPD) application's ability to detect delivery errors in IMRT fields. Methods: EPID and VPD were calibrated/commissioned using vendor-recommended procedures. Five clinical plans comprising 56 modulated fields were analyzed using VPD. Treatment sites were: pelvis, prostate, brain, orbit, and base of tongue. Delivery was on a Varian Trilogy linear accelerator at 6MV using a Millenium120 multi-leaf collimator. Image pairs (VPD-predicted and measured) were exported in DICOM format. Each detection test imported an image pair into Matlab, optionally inserted a simulated error (a rectangular region with intensity raised or lowered) into the measured image, performed 3%/3mm gamma analysis, and saved the gamma distribution. For a given error, 56 negative tests (without error) were performed, one per image pair. In addition, 560 positive tests (with error) were performed, with randomly selected image pairs and randomly selected in-field error locations. Images were classified as errored (or error-free) if the percentage of pixels with γ<κ was < (or ≥) τ. (Conventionally, κ=1 and τ=90%.) A ROC curve was generated from the 616 tests by varying τ. For a range of κ and τ, true/false positive/negative rates were calculated. This procedure was repeated for inserted errors of different sizes. VPD was considered to reliably detect an error if images were correctly classified as errored or error-free at least 95% of the time, for some κ+τ combination. Results: 20 mm² errors with intensity altered by ≥20% could be reliably detected, as could 10 mm² errors with intensity altered by ≥50%. Errors with smaller size or intensity change could not be reliably detected. Conclusion: Varian Portal Dosimetry using 3%/3mm gamma analysis is capable of reliably detecting only those fluence errors that exceed the stated sizes. Images containing smaller errors can pass mathematical analysis, though
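
    The classification step described above reduces to thresholding a gamma pass rate and sweeping that threshold over the negative and positive test populations to trace a ROC curve. The sketch below illustrates only that step; the gamma pass rates are made-up placeholders, not measured data, and the full gamma analysis itself is not implemented.

```python
# Hedged sketch: threshold-based error detection on gamma pass rates and a
# ROC-style sweep over the threshold tau. Pass rates are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
pass_rate_clean = rng.normal(0.97, 0.02, 56).clip(0, 1)    # fraction of pixels with gamma < kappa
pass_rate_error = rng.normal(0.88, 0.05, 560).clip(0, 1)

for tau in (0.99, 0.95, 0.90, 0.85):
    tpr = float((pass_rate_error < tau).mean())   # errored images correctly flagged
    fpr = float((pass_rate_clean < tau).mean())   # clean images incorrectly flagged
    print(f"tau={tau:.2f}  TPR={tpr:.2f}  FPR={fpr:.2f}")
```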

  10. Individual Differences in Absolute and Relative Metacomprehension Accuracy

    ERIC Educational Resources Information Center

    Maki, Ruth H.; Shields, Micheal; Wheeler, Amanda Easton; Zacchilli, Tammy Lowery

    2005-01-01

    The authors investigated absolute and relative metacomprehension accuracy as a function of verbal ability in college students. Students read hard texts, revised texts, or a mixed set of texts. They then predicted their performance, took a multiple-choice test on the texts, and made posttest judgments about their performance. With hard texts,…

  11. The Influence of Tonal and Atonal Contexts on Error Detection Accuracy

    ERIC Educational Resources Information Center

    Groulx, Timothy J.

    2013-01-01

    Music education students ("N" = 21) at a university in the southeastern United States took an error detection test that had been designed for this study to determine the effects of tonal contexts versus atonal contexts on the ability to detect performance errors. The investigator composed 16 melodies, 8 of which were tonal and 8 of which…

  12. Radiographic and Anatomic Basis for Prostate Contouring Errors and Methods to Improve Prostate Contouring Accuracy

    SciTech Connect

    McLaughlin, Patrick W.; Evans, Cheryl M.S.; Feng, Mary; Narayana, Vrinda

    2010-02-01

    Purpose: Use of highly conformal radiation for prostate cancer can lead to both overtreatment of surrounding normal tissues and undertreatment of the prostate itself. In this retrospective study we analyzed the radiographic and anatomic basis of common errors in computed tomography (CT) contouring and suggest methods to correct them. Methods and Materials: Three hundred patients with prostate cancer underwent CT and magnetic resonance imaging (MRI). The prostate was delineated independently on the data sets. CT and MRI contours were compared by use of deformable registration. Errors in target delineation were analyzed and methods to avoid such errors detailed. Results: Contouring errors were identified at the prostatic apex, mid gland, and base on CT. At the apex, the genitourinary diaphragm, rectum, and anterior fascia contribute to overestimation. At the mid prostate, the anterior and lateral fasciae contribute to overestimation. At the base, the bladder and anterior fascia contribute to anterior overestimation. Transition zone hypertrophy and bladder neck variability contribute to errors of overestimation and underestimation at the superior base, whereas variable prostate-to-seminal vesicle relationships with prostate hypertrophy contribute to contouring errors at the posterior base. Conclusions: Most CT contouring errors can be detected by (1) inspection of a lateral view of prostate contours to detect projection from the expected globular form and (2) recognition of anatomic structures (genitourinary diaphragm) on the CT scans that are clearly visible on MRI. This study shows that many CT prostate contouring errors can be improved without direct incorporation of MRI data.

  13. Error-tradeoff and error-disturbance relations for incompatible quantum measurements.

    PubMed

    Branciard, Cyril

    2013-04-23

    Heisenberg's uncertainty principle is one of the main tenets of quantum theory. Nevertheless, and despite its fundamental importance for our understanding of quantum foundations, there has been some confusion in its interpretation: Although Heisenberg's first argument was that the measurement of one observable on a quantum state necessarily disturbs another incompatible observable, standard uncertainty relations typically bound the indeterminacy of the outcomes when either one or the other observable is measured. In this paper, we quantify precisely Heisenberg's intuition. Even if two incompatible observables cannot be measured together, one can still approximate their joint measurement, at the price of introducing some errors with respect to the ideal measurement of each of them. We present a tight relation characterizing the optimal tradeoff between the error on one observable vs. the error on the other. As a particular case, our approach allows us to characterize the disturbance of an observable induced by the approximate measurement of another one; we also derive a stronger error-disturbance relation for this scenario. PMID:23564344
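
    As best recalled, the tight tradeoff relation referred to above takes the following form for approximate joint measurements of A and B with errors ε_A and ε_B, where C_AB = ½|⟨[A,B]⟩| and σ denotes the state's standard deviations; the paper itself should be consulted for the exact statement and the conditions under which it is tight:

```latex
\begin{equation}
  \epsilon_A^{2}\,\sigma_B^{2} \;+\; \sigma_A^{2}\,\epsilon_B^{2}
  \;+\; 2\,\epsilon_A\,\epsilon_B\,\sqrt{\sigma_A^{2}\sigma_B^{2} - C_{AB}^{2}}
  \;\;\ge\;\; C_{AB}^{2}
\end{equation}
```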

  14. Effect of geocoding errors on traffic-related air pollutant exposure and concentration estimates.

    PubMed

    Ganguly, Rajiv; Batterman, Stuart; Isakov, Vlad; Snyder, Michelle; Breen, Michael; Brakefield-Caldwell, Wilma

    2015-01-01

    Exposure to traffic-related air pollutants is highest very near roads, and thus exposure estimates are sensitive to positional errors. This study evaluates positional and PM2.5 concentration errors that result from the use of automated geocoding methods and from linearized approximations of roads in link-based emission inventories. Two automated geocoders (Bing Map and ArcGIS) along with handheld GPS instruments were used to geocode 160 home locations of children enrolled in an air pollution study investigating effects of traffic-related pollutants in Detroit, Michigan. The average and maximum positional errors using the automated geocoders were 35 and 196 m, respectively. Comparing road edge and road centerline, differences in house-to-highway distances averaged 23 m and reached 82 m. These differences were attributable to road curvature, road width and the presence of ramps, factors that should be considered in proximity measures used either directly as an exposure metric or as inputs to dispersion or other models. Effects of positional errors for the 160 homes on PM2.5 concentrations resulting from traffic-related emissions were predicted using a detailed road network and the RLINE dispersion model. Concentration errors averaged only 9%, but maximum errors reached 54% for annual averages and 87% for maximum 24-h averages. Whereas most geocoding errors appear modest in magnitude, 5% to 20% of residences are expected to have positional errors exceeding 100 m. Such errors can substantially alter exposure estimates near roads because of the dramatic spatial gradients of traffic-related pollutant concentrations. To ensure the accuracy of exposure estimates for traffic-related air pollutants, especially near roads, confirmation of geocoordinates is recommended.
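
    Positional error between an automatically geocoded address and a GPS reference reduces to a great-circle distance between two latitude/longitude pairs; the haversine formula is adequate at the tens-of-metres scale discussed above. The coordinates in the sketch are illustrative, not taken from the study.

```python
# Hedged sketch: positional error (metres) between geocoded and GPS coordinates
# via the haversine great-circle formula. Coordinates are illustrative.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2, r_earth=6371000.0):
    p1, p2 = radians(lat1), radians(lat2)
    dp, dl = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dp / 2) ** 2 + cos(p1) * cos(p2) * sin(dl / 2) ** 2
    return 2 * r_earth * asin(sqrt(a))

gps = (42.3601, -83.0700)         # handheld GPS fix (illustrative)
geocoded = (42.3604, -83.0697)    # automated geocoder output (illustrative)
print(round(haversine_m(*gps, *geocoded), 1), "m positional error")
```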

  16. Sensitivity of Magnetospheric Multi-Scale (MMS) Mission Navigation Accuracy to Major Error Sources

    NASA Technical Reports Server (NTRS)

    Olson, Corwin; Long, Anne; Carpenter, Russell

    2011-01-01

    The Magnetospheric Multiscale (MMS) mission consists of four satellites flying in formation in highly elliptical orbits about the Earth, with a primary objective of studying magnetic reconnection. The baseline navigation concept is independent estimation of each spacecraft state using GPS pseudorange measurements referenced to an Ultra Stable Oscillator (USO) with accelerometer measurements included during maneuvers. MMS state estimation is performed onboard each spacecraft using the Goddard Enhanced Onboard Navigation System (GEONS), which is embedded in the Navigator GPS receiver. This paper describes the sensitivity of MMS navigation performance to two major error sources: USO clock errors and thrust acceleration knowledge errors.

  17. Evaluating Equating Results: Percent Relative Error for Chained Kernel Equating

    ERIC Educational Resources Information Center

    Jiang, Yanlin; von Davier, Alina A.; Chen, Haiwen

    2012-01-01

    This article presents a method for evaluating equating results. Within the kernel equating framework, the percent relative error (PRE) for chained equipercentile equating was computed under the nonequivalent groups with anchor test (NEAT) design. The method was applied to two data sets to obtain the PRE, which can be used to measure equating…
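
    In kernel equating, the PRE compares the first several moments of the equated scores with those of the reference-form scores, expressed as a percentage of the latter. The sketch below illustrates that moment comparison with a toy score distribution and a toy equating function; consult the kernel-equating literature for the exact definitions used in the article.

```python
# Hedged sketch: percent relative error (PRE) as a moment comparison between
# equated scores e_Y(X) and reference-form scores Y. Weights, scores and the
# equating function below are placeholders.
import numpy as np

def pre(equated_scores, w_x, y_scores, w_y, p):
    mu_eq = np.sum(w_x * equated_scores ** p)   # p-th moment of equated scores
    mu_y = np.sum(w_y * y_scores ** p)          # p-th moment of reference scores
    return 100.0 * (mu_eq - mu_y) / mu_y

y = np.arange(0, 21, dtype=float)               # reference-form score points
w_y = np.full_like(y, 1 / 21)                   # toy uniform score probabilities
e_y_of_x = y + 0.15                             # toy equating function values
w_x = w_y.copy()

for p in range(1, 6):
    print(f"PRE({p}) = {pre(e_y_of_x, w_x, y, w_y, p):.3f}%")
```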

  18. Implicationally Related Error Patterns and the Selection of Treatment Targets.

    ERIC Educational Resources Information Center

    Dinnsen, Daniel A.; O'Connor, Kathleen M.

    2001-01-01

    This article compares different claims that have been made concerning acquisition by transitional rule-based derivation theories and by optimality theory. Case studies of children with phonological delays are examined. Error patterns are argued to be implicationally related and optimality theory is shown to offer a principled explanation.…

  19. Accuracy of the Generalizability-Model Standard Errors for the Percents of Examinees Reaching Standards.

    ERIC Educational Resources Information Center

    Li, Yuan H.; Schafer, William D.

    An empirical study of the Yen (W. Yen, 1997) analytic formula for the standard error of a percent-above-cut [SE(PAC)] was conducted. This formula was derived from variance component information gathered in the context of generalizability theory. SE(PAC)s were estimated by different methods of estimating variance components (e.g., W. Yens…

  20. The Impact of Measurement Error on the Accuracy of Individual and Aggregate SGP

    ERIC Educational Resources Information Center

    McCaffrey, Daniel F.; Castellano, Katherine E.; Lockwood, J. R.

    2015-01-01

    Student growth percentiles (SGPs) express students' current observed scores as percentile ranks in the distribution of scores among students with the same prior-year scores. A common concern about SGPs at the student level, and mean or median SGPs (MGPs) at the aggregate level, is potential bias due to test measurement error (ME). Shang,…

  1. The effect of biomechanical variables on force sensitive resistor error: Implications for calibration and improved accuracy.

    PubMed

    Schofield, Jonathon S; Evans, Katherine R; Hebert, Jacqueline S; Marasco, Paul D; Carey, Jason P

    2016-03-21

    Force Sensitive Resistors (FSRs) are commercially available thin-film polymer sensors commonly employed in a multitude of biomechanical measurement environments. Reasons for such widespread usage lie in the versatility, small profile, and low cost of these sensors. Yet FSRs have limitations. It is commonly accepted that temperature, curvature and biological tissue compliance may impact sensor conductance and resulting force readings. The effects of these variables, and the degree to which they interact, have yet to be comprehensively investigated and quantified. This work systematically assesses varying levels of temperature, sensor curvature and surface compliance using a full factorial design-of-experiments approach. Three models of Interlink FSRs were evaluated. Calibration equations under 12 unique combinations of temperature, curvature and compliance were determined for each sensor. Root mean squared error, mean absolute error, and maximum error were quantified as measures of the impact these thermo-mechanical factors have on sensor performance. It was found that all three variables have the potential to affect FSR calibration curves. The FSR model and corresponding sensor geometry are sensitive to these three mechanical factors at varying levels. Experimental results suggest that reducing sensor error requires calibration of each sensor in an environment as close to its intended use as possible and that, if multiple FSRs are used in a system, they must be calibrated independently. PMID:26903413
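
    The three error measures named above are straightforward to compute once a calibration curve has been fitted for a given temperature/curvature/compliance condition. The sketch below fits an illustrative polynomial calibration and reports RMSE, mean absolute error and maximum error; the force and output values are placeholders, not Interlink data.

```python
# Hedged sketch: fit a toy calibration curve and report RMSE, MAE and max error.
import numpy as np

applied_force = np.array([0, 2, 4, 6, 8, 10, 12, 15.])                      # N (example)
sensor_output = np.array([0.02, 0.31, 0.55, 0.74, 0.90, 1.04, 1.16, 1.32])  # a.u. (example)

coeffs = np.polyfit(sensor_output, applied_force, 2)    # quadratic calibration
predicted = np.polyval(coeffs, sensor_output)
residuals = predicted - applied_force

print("RMSE (N):", np.sqrt(np.mean(residuals ** 2)).round(3))
print("mean absolute error (N):", np.abs(residuals).mean().round(3))
print("maximum error (N):", np.abs(residuals).max().round(3))
```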

  2. Parametric Modulation of Error-Related ERP Components by the Magnitude of Visuo-Motor Mismatch

    ERIC Educational Resources Information Center

    Vocat, Roland; Pourtois, Gilles; Vuilleumier, Patrik

    2011-01-01

    Errors generate typical brain responses, characterized by two successive event-related potentials (ERPs) following incorrect action: the error-related negativity (ERN) and the error positivity (Pe). However, it is unclear whether these error-related responses are sensitive to the magnitude of the error, or instead show all-or-none effects. We…

  3. Moving Away From Error-Related Potentials to Achieve Spelling Correction in P300 Spellers

    PubMed Central

    Mainsah, Boyla O.; Morton, Kenneth D.; Collins, Leslie M.; Sellers, Eric W.; Throckmorton, Chandra S.

    2016-01-01

    P300 spellers can provide a means of communication for individuals with severe neuromuscular limitations. However, their use as an effective communication tool is reliant on high P300 classification accuracies (>70%) to account for error revisions. Error-related potentials (ErrP), which are changes in EEG potentials when a person is aware of or perceives erroneous behavior or feedback, have been proposed as inputs to drive corrective mechanisms that veto erroneous actions by BCI systems. The goal of this study is to demonstrate that training an additional ErrP classifier for a P300 speller is not necessary, as we hypothesize that error information is encoded in the P300 classifier responses used for character selection. We perform offline simulations of P300 spelling to compare ErrP and non-ErrP based corrective algorithms. A simple dictionary correction based on string matching and word frequency significantly improved accuracy (35–185%), in contrast to an ErrP-based method that flagged, deleted and replaced erroneous characters (−47 to 0%). Providing additional information about the likelihood of characters to a dictionary-based correction further improves accuracy. Our Bayesian dictionary-based correction algorithm that utilizes P300 classifier confidences performed comparably (44–416%) to an oracle ErrP dictionary-based method that assumed perfect ErrP classification (43–433%). PMID:25438320
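
    The Bayesian dictionary-based idea can be pictured as scoring each candidate word by a word-frequency prior combined with the P300 classifier's per-character confidences, then selecting the highest-scoring word. The sketch below is a toy illustration of that scoring under assumed priors and confidences; it is not the authors' algorithm or data.

```python
# Hedged sketch: dictionary correction that combines a word-frequency prior
# with per-character classifier confidences. All values are illustrative.
import math

dictionary = {"HELLO": 0.6, "HELPS": 0.3, "JELLO": 0.1}   # word -> prior probability
# assumed classifier confidence P(character | EEG) at each selected position
confidences = [
    {"H": 0.7, "J": 0.3},
    {"E": 0.9},
    {"L": 0.8, "R": 0.2},
    {"L": 0.6, "P": 0.4},
    {"O": 0.7, "S": 0.3},
]

def score(word):
    s = math.log(dictionary[word])
    for pos, ch in enumerate(word):
        s += math.log(confidences[pos].get(ch, 1e-3))     # floor for unseen characters
    return s

print("corrected selection:", max(dictionary, key=score))
```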

  4. Circumventing rain-related errors in scatterometer wind observations

    NASA Astrophysics Data System (ADS)

    Kilpatrick, Thomas J.; Xie, Shang-Ping

    2016-08-01

    Satellite scatterometer observations of surface winds over the global oceans are critical for climate research and applications like weather forecasting. However, rain-related errors remain an important limitation, largely precluding satellite study of winds in rainy areas. Here we utilize a novel technique to compute divergence and curl from satellite observations of surface winds and surface wind stress in rainy areas. This technique circumvents rain-related errors by computing line integrals around rainy patches, using valid wind vector observations that border the rainy patches. The area-averaged divergence and wind stress curl inside each rainy patch are recovered via the divergence and curl theorems. We process the 10 year Quick Scatterometer (QuikSCAT) data set and show that the line-integral method brings the QuikSCAT winds into better agreement with an atmospheric reanalysis, largely removing both the "divergence bias" and "anticyclonic curl bias" in rainy areas noted in previous studies. The corrected QuikSCAT wind stress curl reduces the North Pacific midlatitude Sverdrup transport by 20-30%. We test several methods of computing divergence and curl on winds from an atmospheric model simulation and show that the line-integral method has the smallest errors. We anticipate that scatterometer winds processed with the line-integral method will improve ocean model simulations and help illuminate the coupling between atmospheric convection and circulation.
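
    The line-integral technique rests on the divergence and Stokes theorems: the area-averaged divergence inside a rainy patch equals the closed line integral of the outward-normal wind component divided by the patch area, and the area-averaged curl equals the corresponding integral of the tangential component. The sketch below verifies this on a synthetic linear wind field over a circular patch; it illustrates the principle only and is not the QuikSCAT processing code.

```python
# Hedged sketch: recover area-averaged divergence and curl of a patch from
# boundary wind samples via closed line integrals (divergence/Stokes theorems).
import numpy as np

R = 50e3                                        # patch radius (m), illustrative
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
x, y = R * np.cos(theta), R * np.sin(theta)     # boundary points

# synthetic wind: uniform flow + solid-body rotation (curl 2e-5) + convergence (div -4e-6)
u = 5.0 - 1e-5 * y - 2e-6 * x
v = 1e-5 * x - 2e-6 * y

nx, ny = np.cos(theta), np.sin(theta)           # outward normal on the circle
tx, ty = -np.sin(theta), np.cos(theta)          # counter-clockwise tangent
ds = 2 * np.pi * R / theta.size                 # arc-length element
area = np.pi * R ** 2

div_avg = np.sum((u * nx + v * ny) * ds) / area
curl_avg = np.sum((u * tx + v * ty) * ds) / area
print(f"area-averaged divergence: {div_avg:.2e} 1/s (true -4e-06)")
print(f"area-averaged curl:       {curl_avg:.2e} 1/s (true  2e-05)")
```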

  5. Assessing Accuracy of Waveform Models against Numerical Relativity Waveforms

    NASA Astrophysics Data System (ADS)

    Pürrer, Michael; LVC Collaboration

    2016-03-01

    We compare currently available phenomenological and effective-one-body inspiral-merger-ringdown models for gravitational waves (GW) emitted from coalescing black hole binaries against a set of numerical relativity waveforms from the SXS collaboration. Simplifications are used in the construction of some waveform models, such as restriction to spins aligned with the orbital angular momentum, no inclusion of higher harmonics in the GW radiation, no modeling of eccentricity and the use of effective parameters to describe spin precession. In contrast, NR waveforms provide us with a high fidelity representation of the ``true'' waveform modulo small numerical errors. To focus on systematics we inject NR waveforms into zero noise for early advanced LIGO detector sensitivity at a moderately optimistic signal-to-noise ratio. We discuss where in the parameter space the above modeling assumptions lead to noticeable biases in recovered parameters.

  6. An analysis of pilot error-related aircraft accidents

    NASA Technical Reports Server (NTRS)

    Kowalsky, N. B.; Masters, R. L.; Stone, R. B.; Babcock, G. L.; Rypka, E. W.

    1974-01-01

    A multidisciplinary team approach to pilot error-related U.S. air carrier jet aircraft accident investigation records successfully reclaimed hidden human error information not shown in statistical studies. New analytic techniques were developed and applied to the data to discover and identify multiple elements of commonality and shared characteristics within this group of accidents. Three techniques of analysis were used: Critical element analysis, which demonstrated the importance of a subjective qualitative approach to raw accident data and surfaced information heretofore unavailable. Cluster analysis, which was an exploratory research tool that will lead to increased understanding and improved organization of facts, the discovery of new meaning in large data sets, and the generation of explanatory hypotheses. Pattern recognition, by which accidents can be categorized by pattern conformity after critical element identification by cluster analysis.

  7. Computerised physician order entry-related medication errors: analysis of reported errors and vulnerability testing of current systems

    PubMed Central

    Schiff, G D; Amato, M G; Eguale, T; Boehne, J J; Wright, A; Koppel, R; Rashidee, A H; Elson, R B; Whitney, D L; Thach, T-T; Bates, D W; Seger, A C

    2015-01-01

    Importance Medication computerised provider order entry (CPOE) has been shown to decrease errors and is being widely adopted. However, CPOE also has potential for introducing or contributing to errors. Objectives The objectives of this study are to (a) analyse medication error reports where CPOE was reported as a ‘contributing cause’ and (b) develop ‘use cases’ based on these reports to test vulnerability of current CPOE systems to these errors. Methods A review of medication errors reported to United States Pharmacopeia MEDMARX reporting system was made, and a taxonomy was developed for CPOE-related errors. For each error we evaluated what went wrong and why and identified potential prevention strategies and recurring error scenarios. These scenarios were then used to test vulnerability of leading CPOE systems, asking typical users to enter these erroneous orders to assess the degree to which these problematic orders could be entered. Results Between 2003 and 2010, 1.04 million medication errors were reported to MEDMARX, of which 63 040 were reported as CPOE related. A review of 10 060 CPOE-related cases was used to derive 101 codes describing what went wrong, 67 codes describing reasons why errors occurred, 73 codes describing potential prevention strategies and 21 codes describing recurring error scenarios. Ability to enter these erroneous order scenarios was tested on 13 CPOE systems at 16 sites. Overall, 298 (79.5%) of the erroneous orders were able to be entered including 100 (28.0%) being ‘easily’ placed, another 101 (28.3%) with only minor workarounds and no warnings. Conclusions and relevance Medication error reports provide valuable information for understanding CPOE-related errors. Reports were useful for developing taxonomy and identifying recurring errors to which current CPOE systems are vulnerable. Enhanced monitoring, reporting and testing of CPOE systems are important to improve CPOE safety. PMID:25595599

  8. TRAINING ERRORS AND RUNNING RELATED INJURIES: A SYSTEMATIC REVIEW

    PubMed Central

    Buist, Ida; Sørensen, Henrik; Lind, Martin; Rasmussen, Sten

    2012-01-01

    Purpose: The purpose of this systematic review was to examine the link between training characteristics (volume, duration, frequency, and intensity) and running related injuries. Methods: A systematic search was performed in PubMed, Web of Science, Embase, and SportDiscus. Studies were included if they examined novice, recreational, or elite runners between the ages of 18 and 65. Exposure variables were training characteristics defined as volume, distance or mileage, time or duration, frequency, intensity, speed or pace, or similar terms. The outcome of interest was Running Related Injuries (RRI) in general or specific RRI in the lower extremity or lower back. Methodological quality was evaluated using quality assessment tools of 11 to 16 items. Results: After examining 4561 titles and abstracts, 63 articles were identified as potentially relevant. Finally, nine retrospective cohort studies, 13 prospective cohort studies, six case-control studies, and three randomized controlled trials were included. The mean quality score was 44.1%. Conflicting results were reported on the relationships between volume, duration, intensity, and frequency and RRI. Conclusion: It was not possible to identify which training errors were related to running related injuries. Still, well supported data on which training errors relate to or cause running related injuries is highly important for determining proper prevention strategies. If methodological limitations in measuring training variables can be resolved, more work can be conducted to define training and the interactions between different training variables, create several hypotheses, test the hypotheses in a large scale prospective study, and explore cause and effect relationships in randomized controlled trials. Level of evidence: 2a PMID:22389869

  9. Short-term solar pressure effect and GM uncertainty on TDRS orbital accuracy: A study of the interaction of modeling error with tracking and orbit determination

    NASA Technical Reports Server (NTRS)

    Fang, B. T.

    1979-01-01

    The TDRS was modeled as a combination of a sun-pointing solar panel and earth-pointing plate. Based on this model, explanations are given for the following orbit determination error characteristics: inherent limits in orbital accuracy, the variation of solar pressure induced orbital error with time of the day of epoch, the insensitivity of range-rate orbits to GM error, and optimum bilateration baseline.

  10. Error-Related Negativities During Spelling Judgments Expose Orthographic Knowledge

    PubMed Central

    Harris, Lindsay N.; Perfetti, Charles A.; Rickles, Benjamin

    2014-01-01

    In two experiments, we demonstrate that error-related negativities (ERNs) recorded during spelling decisions can expose individual differences in lexical knowledge. The first experiment found that the ERN was elicited during spelling decisions and that its magnitude was correlated with independent measures of subjects’ spelling knowledge. In the second experiment, we manipulated the phonology of misspelled stimuli and observed that ERN magnitudes were larger when misspelled words altered the phonology of their correctly spelled counterparts than when they preserved it. Thus, when an error is made in a decision about spelling, the brain processes indexed by the ERN reflect both phonological and orthographic input to the decision process. In both experiments, ERN effect sizes were correlated with assessments of lexical knowledge and reading, including offline spelling ability and spelling-mediated vocabulary knowledge. These results affirm the interdependent nature of orthographic, semantic, and phonological knowledge components while showing that spelling knowledge uniquely influences the ERN during spelling decisions. Finally, the study demonstrates the value of ERNs in exposing individual differences in lexical knowledge. PMID:24389506

  11. Error-related EEG potentials generated during simulated brain-computer interaction.

    PubMed

    Ferrez, Pierre W; del R Millan, José

    2008-03-01

    Brain-computer interfaces (BCIs) are prone to errors in the recognition of the subject's intent. An elegant approach to improve the accuracy of BCIs consists of a verification procedure directly based on the presence of error-related potentials (ErrP) in the electroencephalogram (EEG) recorded right after the occurrence of an error. Several studies show the presence of ErrP in typical choice reaction tasks. However, in the context of a BCI, the central question is: "Are ErrP also elicited when the error is made by the interface during the recognition of the subject's intent?" We have thus explored whether ErrP also follow a feedback indicating incorrect responses of the simulated BCI interface. Five healthy volunteer subjects participated in a new human-robot interaction experiment, which seems to confirm the previously reported presence of a new kind of ErrP. However, in order to exploit these ErrP, we need to detect them in each single trial using a short window following the feedback associated with the response of the BCI. We have achieved an average recognition rate of correct and erroneous single trials of 83.5% and 79.2%, respectively, using a classifier built with data recorded up to three months earlier.
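
    A hypothetical sketch of single-trial ErrP detection is given below: a linear classifier applied to short post-feedback EEG windows. The simulated data, the downsampled-amplitude features, and the LDA classifier are assumptions for illustration only and do not reproduce the study's actual pipeline or recognition rates.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Classify short post-feedback EEG windows as "correct" vs "error" feedback.
# Data are simulated: error trials get a toy negative deflection added to noise.
rng = np.random.default_rng(1)
n_trials, n_samples = 200, 64                  # e.g. ~0.5 s windows after feedback
labels = rng.integers(0, 2, n_trials)          # 1 = erroneous feedback
erp = np.where(labels[:, None] == 1, -2.0, 0.0) * np.hanning(n_samples)
eeg = erp + rng.normal(scale=3.0, size=(n_trials, n_samples))

clf = LinearDiscriminantAnalysis()
clf.fit(eeg[:150], labels[:150])               # "older" data used for calibration
print("held-out accuracy:", clf.score(eeg[150:], labels[150:]))
```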

  12. Statistical evaluation of design-error related accidents

    SciTech Connect

    Ott, K.O.; Marchaterre, J.F.

    1980-01-01

    In a recently published paper (Campbell and Ott, 1979), a general methodology was proposed for the statistical evaluation of design-error related accidents. The evaluation aims at an estimate of the combined residual frequency of yet unknown types of accidents lurking in a certain technological system. Here, the original methodology is extended so that it applies to a variety of systems that evolve during the development of large-scale technologies. A special categorization of incidents and accidents is introduced to define the events that should be jointly analyzed. The resulting formalism is applied to the development of nuclear power reactor technology, considering serious accidents that involve a particular design inadequacy in the accident progression.

  13. Accuracy of genomic prediction when combining two related crossbred populations.

    PubMed

    Vallée, A; van Arendonk, J A M; Bovenhuis, H

    2014-10-01

    Charolais bulls are selected for their crossbreed performance when mated to Montbéliard or Holstein dams. To implement genomic prediction, one could build a reference population for each crossbred population independently. An alternative could be to combine both crossbred populations into a single reference population to increase size and accuracy of prediction. The objective of this study was to investigate the accuracy of genomic prediction by combining different crossbred populations. Three scenarios were considered: 1) using 1 crossbred population as reference to predict phenotype of animals from the same crossbred population, 2) combining the 2 crossbred populations into 1 reference to predict phenotype of animals from 1 crossbred population, and 3) using 1 crossbred population as reference to predict phenotype of animals from the other crossbred population. Traits studied were bone thinness, height, and muscular development. Phenotypes and 45,117 SNP genotypes were available for 1,764 Montbéliard × Charolais calves and 447 Holstein × Charolais calves. The population was randomly split into 10 subgroups, which were assigned to the validation set one by one. To allow fair comparison between scenarios, size of the reference population was kept constant for all scenarios. Breeding values were estimated with BLUP and genomic BLUP. Accuracy of prediction was calculated as the correlation between the EBV and the phenotypic values of the calves in the validation set divided by the square root of the heritability. Genomic BLUP showed higher accuracies (between 0.281 and 0.473) than BLUP (between 0.197 and 0.452). Accuracies tended to be highest when prediction was within 1 crossbred population, intermediate when populations were combined into the reference population, and lowest when prediction was across populations. Decrease in accuracy from a prediction within 1 population to a prediction across populations was more pronounced for bone thinness (-27%) and height (-29
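
    The accuracy metric described above (correlation between EBV and validation phenotypes divided by the square root of the heritability) can be sketched as follows; the arrays and heritability value are illustrative placeholders, not data from the study.

```python
import numpy as np

# Prediction accuracy as defined above: cor(EBV, phenotype) / sqrt(h2),
# computed on a validation set. All inputs below are toy values.
def prediction_accuracy(ebv, phenotype, h2):
    r = np.corrcoef(ebv, phenotype)[0, 1]
    return r / np.sqrt(h2)

rng = np.random.default_rng(0)
true_bv = rng.normal(size=200)
phenotype = true_bv + rng.normal(scale=1.2, size=200)   # toy validation phenotypes
ebv = true_bv + rng.normal(scale=0.8, size=200)         # toy genomic EBV
print(round(prediction_accuracy(ebv, phenotype, h2=0.35), 3))
```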

  14. Punishing an error improves learning: the influence of punishment magnitude on error-related neural activity and subsequent learning.

    PubMed

    Hester, Robert; Murphy, Kevin; Brown, Felicity L; Skilleter, Ashley J

    2010-11-17

    Punishing an error to shape subsequent performance is a major tenet of individual and societal level behavioral interventions. Recent work examining error-related neural activity has identified that the magnitude of activity in the posterior medial frontal cortex (pMFC) is predictive of learning from an error, whereby greater activity in this region predicts adaptive changes in future cognitive performance. It remains unclear how punishment influences error-related neural mechanisms to effect behavior change, particularly in key regions such as pMFC, which previous work has demonstrated to be insensitive to punishment. Using an associative learning task that provided monetary reward and punishment for recall performance, we observed that when recall errors were categorized by subsequent performance--whether the failure to accurately recall a number-location association was corrected at the next presentation of the same trial--the magnitude of error-related pMFC activity predicted future correction. However, the pMFC region was insensitive to the magnitude of punishment an error received and it was the left insula cortex that predicted learning from the most aversive outcomes. These findings add further evidence to the hypothesis that error-related pMFC activity may reflect more than a prediction error in representing the value of an outcome. The novel role identified here for the insular cortex in learning from punishment appears particularly compelling for our understanding of psychiatric and neurologic conditions that feature both insular cortex dysfunction and a diminished capacity for learning from negative feedback or punishment.

  15. Influence of Head Motion on the Accuracy of 3D Reconstruction with Cone-Beam CT: Landmark Identification Errors in Maxillofacial Surface Model

    PubMed Central

    Song, Jin-Myoung; Cho, Jin-Hyoung

    2016-01-01

    Purpose The purpose of this study was to investigate the influence of head motion on the accuracy of three-dimensional (3D) reconstruction with cone-beam computed tomography (CBCT) scan. Materials and Methods Fifteen dry skulls were incorporated into a motion controller which simulated four types of head motion during CBCT scan: 2 horizontal rotations (to the right/to the left) and 2 vertical rotations (upward/downward). Each movement was triggered to occur at the start of the scan for 1 second by remote control. Four maxillofacial surface models with head motion and one control surface model without motion were obtained for each skull. Nine landmarks were identified on the five maxillofacial surface models for each skull, and landmark identification errors were compared between the control model and each of the models with head motion. Results Rendered surface models with head motion were similar to the control model in appearance; however, the landmark identification errors showed larger values in models with head motion than in the control. In particular, the Porion in the horizontal rotation models presented statistically significant differences (P < .05). Statistically significant difference in the errors between the right and left side landmark was present in the left side rotation which was opposite direction to the scanner rotation (P < .05). Conclusions Patient movement during CBCT scan might cause landmark identification errors on the 3D surface model in relation to the direction of the scanner rotation. Clinicians should take this into consideration to prevent patient movement during CBCT scan, particularly horizontal movement. PMID:27065238

  16. Native Speakers' Perceptions of Nonnative Speakers: Related to Phonetic Errors and Spoken Grammatical Errors.

    ERIC Educational Resources Information Center

    Johnson, Ruth; Jenks, Frederick L.

    A study investigated the perceptions of native English-speakers concerning the spoken grammatical and phonetic (accent) errors of non-native speakers. Speech samples were collected from three non-native speakers of English of varied linguistic backgrounds (German, Spanish, and Arabic) and one speaker of North American English. Each of the four…

  17. A new nano-accuracy AFM system for minimizing Abbe errors and the evaluation of its measuring uncertainty.

    PubMed

    Kim, Dongmin; Lee, Dong Yeon; Gweon, Dae Gab

    2007-01-01

    A new AFM system was designed for the establishment of a standard technique of nano-length measurement in a 2D plane. Over a long range (about several tens of micrometers), measurement uncertainty is dominated by the Abbe error of the XY scanning stage. No linear stage is perfectly straight; in other words, every scanning stage is subject to tilting, pitch and yaw motions. In this paper, an AFM system with minimum offsets of XY sensing is designed. Moreover, the XY scanning stage is designed to minimize the rotation angle, as Abbe errors arise from the combination of the offset and the rotation angle. To minimize the rotation angle, an optimal design is performed by maximizing the ratio of the stiffness of the parasitic direction to that of the motion direction of each stage. This paper describes the design scheme of the full AFM system, in particular the XY scanner. The full range of the fabricated XY scanner is 100 μm × 100 μm. The tilting, pitch and yaw motions were measured by an autocollimator to evaluate the performance of the XY stage. The results show that the XY scanner has a parasitic rotation of 0.75 arcsec over the maximum range, so the uncertainty contribution of the Abbe errors is very small relative to other standard equipment. Using this AFM system, a 3 μm pitch specimen was measured. The measurement uncertainty of the total system was evaluated, particularly with respect to pitch length. For the 1D evaluation, Abbe errors are the most dominant factor, and the expanded combined uncertainty (k = 2) of the system was √((4.13)² + (5.07 × 10⁻⁵ · p)²) nm. For the 2D evaluation, mirror non-orthogonality and Abbe errors are the dominant factors, and the expanded combined uncertainty (k = 2) of the system was √((4.13)² + (1.228 × 10⁻⁴ · p)²) in the X direction and √((6.28)² + (1.266 × 10⁻⁴ · p)²) in the Y direction (in nanometers), where p is the measured length in nm.
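
    The expanded combined uncertainty expressions reported above all have the form u(p) = √(a² + (b·p)²) in nanometers; the sketch below simply evaluates them, for instance for the 3 μm (3000 nm) pitch specimen. The coefficients come from the abstract; the chosen p and the rounding are illustrative.

```python
import math

# Expanded combined uncertainty (k = 2): u(p) = sqrt(a^2 + (b*p)^2) in nm,
# where p is the measured length in nm. Coefficients as reported above.
def expanded_uncertainty(p_nm, a_nm, b):
    return math.sqrt(a_nm**2 + (b * p_nm)**2)

p = 3000.0  # a 3 um pitch, i.e. 3000 nm, as in the measured specimen
print("1D   :", round(expanded_uncertainty(p, 4.13, 5.07e-5), 3), "nm")
print("2D, X:", round(expanded_uncertainty(p, 4.13, 1.228e-4), 3), "nm")
print("2D, Y:", round(expanded_uncertainty(p, 6.28, 1.266e-4), 3), "nm")
```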

  18. Involvement of human internal globus pallidus in the early modulation of cortical error-related activity.

    PubMed

    Herrojo Ruiz, María; Huebl, Julius; Schönecker, Thomas; Kupsch, Andreas; Yarrow, Kielan; Krauss, Joachim K; Schneider, Gerd-Helge; Kühn, Andrea A

    2014-06-01

    The detection and assessment of errors are a prerequisite to adapt behavior and improve future performance. Error monitoring is afforded by the interplay between cortical and subcortical neural systems. Ample evidence has pointed to a specific cortical error-related evoked potential, the error-related negativity (ERN), during the detection and evaluation of response errors. Recent models of reinforcement learning implicate the basal ganglia (BG) in early error detection following the learning of stimulus-response associations and in the modulation of the cortical ERN. To investigate the influence of the human BG motor output activity on the cortical ERN during response errors, we recorded local field potentials from the sensorimotor area of the internal globus pallidus and scalp electroencephalogram representing activity from the posterior medial frontal cortex in patients with idiopathic dystonia (hands not affected) during a flanker task. In error trials, a specific pallidal error-related potential arose 60 ms prior to the cortical ERN. The error-related changes in pallidal activity, characterized by theta oscillations, were predictive of the cortical error-related activity as assessed by Granger causality analysis. Our findings show an early modulation of error-related activity in the human pallidum, suggesting that pallidal output influences the cortex at an early stage of error detection.

  19. Gallbladder wall thickness: sonographic accuracy and relation to disease.

    PubMed

    Engel, J M; Deitch, E A; Sikkema, W

    1980-05-01

    A prospective study was performed in two parts after sonographic determination of gallbladder wall thickness in 110 consecutive patients. The first part was designed to evaluate accuracy of sonographic measurements in 40 patients on whom intraoperative measurements of wall thickness were obtained. Second, the significance of wall thickness as an indicator of disease was explored by comparing the 40 surgical patients and 44 controls. Sonography was found to be accurate in determining wall thickness to within 1 mm in 93% of patients and 1.5 mm in 100%. Wall thickness greater than 3.5 mm is highly accurate in predicting disease; however, a wall thickness 3 mm or less does not rule out cholecystitis. PMID:6768264

  20. Improving the accuracy of maternal mortality and pregnancy related death.

    PubMed

    Schaible, Burk

    2014-01-01

    Comparing abortion-related death and pregnancy-related death remains difficult due to the limitations within the Abortion Mortality Surveillance System and the International Statistical Classification of Diseases and Related Health Problems (ICD). These methods lack a systematic and comprehensive method of collecting complete records regarding abortion outcomes in each state and fail to properly identify longitudinal cause of death related to induced abortion. This article seeks to analyze the current method of comparing abortion-related death with pregnancy-related death and provide solutions to improve data collection regarding these subjects.

  1. Error-Related Functional Connectivity of the Habenula in Humans

    PubMed Central

    Ide, Jaime S.; Li, Chiang-Shan R.

    2011-01-01

    Error detection is critical to the shaping of goal-oriented behavior. Recent studies in non-human primates delineated a circuit involving the lateral habenula (LH) and ventral tegmental area (VTA) in error detection. Neurons in the LH increased activity, preceding decreased activity in the VTA, to a missing reward, indicating a feedforward signal from the LH to VTA. In the current study we used connectivity analyses to reveal this pathway in humans. In 59 adults performing a stop signal task during functional magnetic resonance imaging, we identified brain regions showing greater psychophysiological interaction with the habenula during stop error as compared to stop success trials. These regions included a cluster in the VTA/substantia nigra (SN), internal segment of globus pallidus, bilateral amygdala, and insula. Furthermore, using Granger causality and mediation analyses, we showed that the habenula Granger caused the VTA/SN, establishing the direction of this interaction, and that the habenula mediated the functional connectivity between the amygdala and VTA/SN during error processing. To our knowledge, these findings are the first to demonstrate a feedforward influence of the habenula on the VTA/SN during error detection in humans. PMID:21441989
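
    A minimal sketch of the kind of Granger-causality test mentioned above is shown below, using statsmodels on two synthetic time series in which one signal drives the other at a lag; it is not the study's fMRI data, nor its full mediation analysis.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# Two synthetic region time series: x drives y with a lag of 2 samples.
rng = np.random.default_rng(2)
n = 300
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.6 * x[t - 2] + 0.3 * y[t - 1] + rng.normal(scale=0.5)

# statsmodels tests whether the SECOND column Granger-causes the FIRST.
res = grangercausalitytests(np.column_stack([y, x]), maxlag=3, verbose=False)
print({lag: round(r[0]["ssr_ftest"][1], 4) for lag, r in res.items()})  # p-values per lag
```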

  2. Second Language Learning: Contrastive Analysis, Error Analysis, and Related Aspects.

    ERIC Educational Resources Information Center

    Robinett, Betty Wallace, Ed.; Schachter, Jacquelyn, Ed.

    This graduate level text on second language learning is divided into three sections. The first two sections provide a survey of the historical underpinnings of second language research in contrastive analysis and error analysis. The third section includes discussions of recent developments in the field. The first section contains articles on the…

  3. [Event-related potentials and performance errors during falling asleep].

    PubMed

    Dorokhov, V B; Verbitskaia, Iu S; Lavrova, T P

    2009-01-01

    Sound is the most adequate external stimulus for studying information processes in the brain during falling asleep and at different sleep stages. Common procedure of analysis of the event-related potentials (ERPs) averaged for a group of subjects has some drawbacks because of the ERP interindividual variability. Therefore in our work, we determined parameters of the auditory ERP components selectively summed up for individual subjects in different series of a psychomotor test with their subsequent group analysis. Search for the ERP parameters which would allow us to quantitatively estimate brain functional states during performance errors associated with a decrease in the level of wakefulness and falling asleep was the aim of our work. The ERPs were recorded in healthy volunteers (n = 41) in the evening from eight EEG derivations (F3, F4, C3, C4, P3, P4, O1, O2) in reference to a linked mastoid electrode. The analysis was performed in 14 subjects with a sufficient number of falling asleep episodes. A monotonous psychomotor test was performed in a supine position with the eyes closed. The test consisted of two alternating series: calculation of sound stimuli from 1 up to 10 with simultaneous pressing the button and calculation from 1 up to 5 without pressing the button and so on. Computer-generated sound stimuli (50-ms pulses with the frequency of 1000 Hz, 60 dB HL) were presented binaurally through earphones with interstimulus intervals in 2.4-2.7 s. Comparison of the ERP parameters (latency and amplitude of components N1, P2, N, and P3) during correct and erroneous performance of the psychomotor test showed that a decrease in the level of wakefulness caused a statistically significant increase in the amplitude of components of vertex complex N1-P2-N2 in series without pressing the button. The greatest changes in the ERPs in different series of the psychomotor test were observed for component N2 (latency 330-360 ms), which has the common origin with the EEG theta

  4. Activation of the human sensorimotor cortex during error-related processing: a magnetoencephalography study.

    PubMed

    Stemmer, Brigitte; Vihla, Minna; Salmelin, Riitta

    2004-05-13

    We studied error-related processing using magnetoencephalography (MEG). Previous event-related potential studies have documented error negativity or error-related negativity after incorrect responses, with a suggested source in the anterior cingulate cortex or supplementary motor area. We compared activation elicited by correct and incorrect trials using auditory and visual choice-reaction time tasks. Source areas showing different activation patterns in correct and error conditions were mainly located in sensorimotor areas, both ipsi- and contralateral to the response, suggesting that activation of sensorimotor circuits accompanies error processing. Additional activation at various other locations suggests a distributed network of brain regions active during error-related processing. Activation specific to incorrect trials tended to occur later in MEG than EEG data, possibly indicating that EEG and MEG detect different neural networks involved in error-related processes.

  5. 26 CFR 1.6662-2 - Accuracy-related penalty.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... in 26 CFR part 1 revised April 1, 1995) apply to returns the due date of which (determined without....6662-3 (as contained in 26 CFR part 1 revised April 1, 1995) relating to those penalties will apply to...)(1) and 1.6662-4(g)(4) (as contained in 26 CFR part 1 revised April 1, 1995) apply to returns the...

  6. 26 CFR 1.6662-2 - Accuracy-related penalty.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... contained in 26 CFR part 1 revised April 1, 1995) apply to returns the due date of which (determined without....6662-3 (as contained in 26 CFR part 1 revised April 1, 1995) relating to those penalties will apply to...)(1) and 1.6662-4(g)(4) (as contained in 26 CFR part 1 revised April 1, 1995) apply to returns the...

  7. Mean Expected Error in Prediction of Total Body Water: A True Accuracy Comparison between Bioimpedance Spectroscopy and Single Frequency Regression Equations

    PubMed Central

    Abtahi, Shirin; Abtahi, Farhad; Ellegård, Lars; Johannsson, Gudmundur; Bosaeus, Ingvar

    2015-01-01

    For several decades electrical bioimpedance (EBI) has been used to assess body fluid distribution and body composition. Despite the development of several different approaches for assessing total body water (TBW), it remains uncertain whether bioimpedance spectroscopic (BIS) approaches are more accurate than single-frequency regression equations. The main objective of this study was to answer this question by calculating the expected accuracy of a single measurement for different EBI methods. The results of this study showed that all methods produced similarly high correlation and concordance coefficients, indicating good accuracy as a method. Even the limits of agreement produced from the Bland-Altman analysis indicated that the performance of the single-frequency Sun prediction equations at the population level was close to the performance of both BIS methods; however, when comparing the Mean Absolute Percentage Error (MAPE) values between the single-frequency prediction equations and the BIS methods, a significant difference was obtained, indicating slightly better accuracy for the BIS methods. Despite the higher accuracy of the BIS methods over the 50 kHz prediction equations at both the population and individual level, the magnitude of the improvement was small. Such a slight improvement in the accuracy of BIS methods is suggested to be insufficient to warrant their clinical use where the most accurate predictions of TBW are required, for example, when assessing over-fluidic status on dialysis. To reach expected errors below 4-5%, novel and individualized approaches must be developed to improve the accuracy of bioimpedance-based methods for the advent of innovative personalized health monitoring applications. PMID:26137489
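
    A minimal sketch of the Mean Absolute Percentage Error comparison described above is given below; the TBW values are placeholders, not data from the study.

```python
import numpy as np

# MAPE of predicted total body water (TBW) against a reference method.
def mape(reference, predicted):
    reference = np.asarray(reference, float)
    predicted = np.asarray(predicted, float)
    return 100.0 * np.mean(np.abs(predicted - reference) / reference)

tbw_reference = np.array([35.2, 41.0, 38.7, 44.5, 39.9])   # e.g. dilution-based TBW (L)
tbw_bis       = np.array([36.0, 40.1, 39.5, 45.8, 38.8])   # e.g. BIS prediction (L)
tbw_single_f  = np.array([36.9, 42.6, 37.0, 46.7, 41.8])   # e.g. 50 kHz equation (L)
print("BIS MAPE   :", round(mape(tbw_reference, tbw_bis), 2), "%")
print("50 kHz MAPE:", round(mape(tbw_reference, tbw_single_f), 2), "%")
```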

  8. A high-accuracy roundness measurement for cylindrical components by a morphological filter considering eccentricity, probe offset, tip head radius and tilt error

    NASA Astrophysics Data System (ADS)

    Sun, Chuanzhi; Wang, Lei; Tan, Jiubin; Zhao, Bo; Zhou, Tong; Kuang, Ye

    2016-08-01

    A morphological filter is proposed to obtain a high-accuracy roundness measurement based on the four-parameter roundness measurement model, which takes into account eccentricity, probe offset, probe tip head radius and tilt error. This paper analyses the sample angle deviations caused by the four systematic errors to design a morphological filter based on the distribution of the sample angle. The effectiveness of the proposed method is verified through simulations and experiments performed with a roundness measuring machine. Compared to the morphological filter with the uniform sample angle, the accuracy of the roundness measurement can be increased by approximately 0.09 μm using the morphological filter with a non-uniform sample angle based on the four-parameter roundness measurement model, when eccentricity is above 16 μm, probe offset is approximately 1000 μm, tilt error is approximately 1″, the probe tip head radius is 1 mm and the cylindrical component radius is approximately 37 mm. The accuracy and reliability of roundness measurements are improved by using the proposed method for cylindrical components with a small radius, especially if the eccentricity and probe offset are large, and the tilt error and probe tip head radius are small. The proposed morphological filter method can be used for precision and ultra-precision roundness measurements, especially for functional assessments of roundness profiles.

  9. CREME96 and Related Error Rate Prediction Methods

    NASA Technical Reports Server (NTRS)

    Adams, James H., Jr.

    2012-01-01

    Predicting the rate of occurrence of single event effects (SEEs) in space requires knowledge of the radiation environment and the response of electronic devices to that environment. Several analytical models have been developed over the past 36 years to predict SEE rates. The first error rate calculations were performed by Binder, Smith and Holman. Bradford and Pickel and Blandford, in their CRIER (Cosmic-Ray-Induced-Error-Rate) analysis code, introduced the basic Rectangular Parallelepiped (RPP) method for error rate calculations. For the radiation environment at the part, both made use of the cosmic ray LET (Linear Energy Transfer) spectra calculated by Heinrich for various absorber depths. A more detailed model for the space radiation environment within spacecraft was developed by Adams and co-workers. This model, together with a reformulation of the RPP method published by Pickel and Blandford, was used to create the CREME (Cosmic Ray Effects on Micro-Electronics) code. About the same time Shapiro wrote the CRUP (Cosmic Ray Upset Program) based on the RPP method published by Bradford. It was the first code to specifically take into account charge collection from outside the depletion region due to deformation of the electric field caused by the incident cosmic ray. Other early rate prediction methods and codes include the Single Event Figure of Merit, NOVICE, the Space Radiation code and the effective flux method of Binder which is the basis of the SEFA (Scott Effective Flux Approximation) model. By the early 1990s it was becoming clear that CREME and the other early models needed revision. This revision, CREME96, was completed and released as a WWW-based tool, one of the first of its kind. The revisions in CREME96 included improved environmental models and improved models for calculating single event effects. The need for a revision of CREME also stimulated the development of the CHIME (CRRES/SPACERAD Heavy Ion Model of the Environment) and MACREE (Modeling and

  10. Relation between minimum-error discrimination and optimum unambiguous discrimination

    SciTech Connect

    Qiu Daowen; Li Lvjun

    2010-09-15

    In this paper, we investigate the relationship between the minimum-error probability Q_E of ambiguous discrimination and the optimal inconclusive probability Q_U of unambiguous discrimination. It is known that for discriminating two states, the inequality Q_U ≥ 2Q_E has been proved in the literature. The main technical results are as follows: (1) We show that, for discriminating more than two states, Q_U ≥ 2Q_E may not hold again, but the infimum of Q_U/Q_E is 1, and there is no supremum of Q_U/Q_E, which implies that the failure probabilities of the two schemes for discriminating some states may be narrowly or widely gapped. (2) We derive two concrete formulas of the minimum-error probability Q_E and the optimal inconclusive probability Q_U, respectively, for ambiguous discrimination and unambiguous discrimination among arbitrary m simultaneously diagonalizable mixed quantum states with given prior probabilities. In addition, we show that Q_E and Q_U satisfy the relationship Q_U ≥ (m/(m-1))Q_E.
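
    As a numerical illustration of the two-state inequality Q_U ≥ 2Q_E quoted above, the sketch below uses the standard closed-form results for two equally likely pure states (the Helstrom bound for Q_E and the state overlap for Q_U); these two-state formulas are assumed textbook background, not the paper's general m-state expressions.

```python
import numpy as np

# For two equally likely pure states with overlap s = |<psi1|psi2>|:
#   Q_E = (1 - sqrt(1 - s^2)) / 2   (Helstrom minimum-error probability)
#   Q_U = s                         (optimal inconclusive probability)
def min_error_prob(s):
    return 0.5 * (1.0 - np.sqrt(1.0 - s**2))

def unambiguous_failure_prob(s):
    return s

for s in np.linspace(0.0, 1.0, 11):
    q_e, q_u = min_error_prob(s), unambiguous_failure_prob(s)
    assert q_u >= 2.0 * q_e - 1e-12          # the two-state inequality Q_U >= 2 Q_E
    ratio = float("inf") if q_e == 0 else q_u / q_e
    print(f"s={s:.1f}  Q_E={q_e:.4f}  Q_U={q_u:.4f}  Q_U/Q_E={ratio:.3f}")
```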

  11. Decreasing Errors in Reading-Related Matching to Sample Using a Delayed-Sample Procedure

    ERIC Educational Resources Information Center

    Doughty, Adam H.; Saunders, Kathryn J.

    2009-01-01

    Two men with intellectual disabilities initially demonstrated intermediate accuracy in two-choice matching-to-sample (MTS) procedures. A printed-letter identity MTS procedure was used with 1 participant, and a spoken-to-printed-word MTS procedure was used with the other participant. Errors decreased substantially under a delayed-sample procedure,…

  12. Research Into the Collimation and Horizontal Axis Errors Influence on the Z+F Laser Scanner Accuracy of Verticality Measurement

    NASA Astrophysics Data System (ADS)

    Sawicki, J.; Kowalczyk, M.

    2016-06-01

    The aim of this study was to determine the values of the collimation and horizontal axis errors of the laser scanner ZF 5006h owned by the Department of Geodesy and Cartography, Warsaw University of Technology, and then to determine the effect of those errors on the measurement results. An experiment was performed, involving measurement of a test field established in the Main Hall of the Main Building of the Warsaw University of Technology, during which the values of the instrumental errors of interest were determined. A universal computer program was then developed that automates the proposed algorithm and is capable of applying corrections to measured target coordinates or even to entire point clouds from individual stations.

  13. Does ADHD in Adults Affect the Relative Accuracy of Metamemory Judgments?

    ERIC Educational Resources Information Center

    Knouse, Laura E.; Paradise, Matthew J.; Dunlosky, John

    2006-01-01

    Objective: Prior research suggests that individuals with ADHD overestimate their performance across domains despite performing more poorly in these domains. The authors introduce measures of accuracy from the larger realm of judgment and decision making--namely, relative accuracy and calibration--to the study of self-evaluative judgment accuracy…

  14. Assessment of the accuracy of global geodetic satellite laser ranging observations and estimated impact on ITRF scale: estimation of systematic errors in LAGEOS observations 1993-2014

    NASA Astrophysics Data System (ADS)

    Appleby, Graham; Rodríguez, José; Altamimi, Zuheir

    2016-06-01

    Satellite laser ranging (SLR) to the geodetic satellites LAGEOS and LAGEOS-2 uniquely determines the origin of the terrestrial reference frame and, jointly with very long baseline interferometry, its scale. Given such a fundamental role in satellite geodesy, it is crucial that any systematic errors in either technique are at an absolute minimum as efforts continue to realise the reference frame at millimetre levels of accuracy to meet the present and future science requirements. Here, we examine the intrinsic accuracy of SLR measurements made by tracking stations of the International Laser Ranging Service using normal point observations of the two LAGEOS satellites in the period 1993 to 2014. The approach we investigate in this paper is to compute weekly reference frame solutions solving for satellite initial state vectors, station coordinates and daily Earth orientation parameters, estimating along with these weekly average range errors for each and every one of the observing stations. Potential issues in any of the large number of SLR stations assumed to have been free of error in previous realisations of the ITRF may have been absorbed in the reference frame, primarily in station height. Likewise, systematic range errors estimated against a fixed frame that may itself suffer from accuracy issues will absorb network-wide problems into station-specific results. Our results suggest that in the past two decades, the scale of the ITRF derived from the SLR technique has been close to 0.7 ppb too small, due to systematic errors either or both in the range measurements and their treatment. We discuss these results in the context of preparations for ITRF2014 and additionally consider the impact of this work on the currently adopted value of the geocentric gravitational constant, GM.

  15. 47 CFR 1.1167 - Error claims related to regulatory fees.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 1 2011-10-01 2011-10-01 false Error claims related to regulatory fees. 1.1167... of Statutory Charges and Procedures for Payment § 1.1167 Error claims related to regulatory fees. (a.... (1) Failure to submit the fee by the date required will result in the assessment of a 25...

  16. 47 CFR 1.1167 - Error claims related to regulatory fees.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false Error claims related to regulatory fees. 1.1167... of Statutory Charges and Procedures for Payment § 1.1167 Error claims related to regulatory fees. (a.... (1) Failure to submit the fee by the date required will result in the assessment of a 25...

  17. The influence of orbit selection on the accuracy of the Stanford Relativity gyroscope experiment

    NASA Technical Reports Server (NTRS)

    Vassar, R.; Everitt, C. W. F.; Vanpatten, R. A.; Breakwell, J. V.

    1980-01-01

    This paper discusses an error analysis for the Stanford Relativity experiment, designed to measure the precession of a gyroscope's spin-axis predicted by general relativity. Measurements will be made of the spin-axis orientations of 4 superconducting spherical gyroscopes carried by an earth-satellite. Two relativistic precessions are predicted: a 'geodetic' precession associated with the satellite's orbital motion and a 'motional' precession due to the earth's rotation. Using a Kalman filter covariance analysis with a realistic error model we have computed the error in determining the relativistic precession rates. Studies show that a slightly off-polar orbit is better than a polar orbit for determining the 'motional' drift.

  18. Using brain potentials to understand prism adaptation: the error-related negativity and the P300.

    PubMed

    MacLean, Stephane J; Hassall, Cameron D; Ishigami, Yoko; Krigolson, Olav E; Eskes, Gail A

    2015-01-01

    Prism adaptation (PA) is both a perceptual-motor learning task as well as a promising rehabilitation tool for visuo-spatial neglect (VSN)-a spatial attention disorder often experienced after stroke resulting in slowed and/or inaccurate motor responses to contralesional targets. During PA, individuals are exposed to prism-induced shifts of the visual-field while performing a visuo-guided reaching task. After adaptation, with goggles removed, visuomotor responding is shifted to the opposite direction of that initially induced by the prisms. This visuomotor aftereffect has been used to study visuomotor learning and adaptation and has been applied clinically to reduce VSN severity by improving motor responding to stimuli in contralesional (usually left-sided) space. In order to optimize PA's use for VSN patients, it is important to elucidate the neural and cognitive processes that alter visuomotor function during PA. In the present study, healthy young adults underwent PA while event-related potentials (ERPs) were recorded at the termination of each reach (screen-touch), then binned according to accuracy (hit vs. miss) and phase of exposure block (early, middle, late). Results show that two ERP components were evoked by screen-touch: an error-related negativity (ERN), and a P300. The ERN was consistently evoked on miss trials during adaptation, while the P300 amplitude was largest during the early phase of adaptation for both hit and miss trials. This study provides evidence of two neural signals sensitive to visual feedback during PA that may sub-serve changes in visuomotor responding. Prior ERP research suggests that the ERN reflects an error processing system in medial-frontal cortex, while the P300 is suggested to reflect a system for context updating and learning. Future research is needed to elucidate the role of these ERP components in improving visuomotor responses among individuals with VSN. PMID:26124715

  19. Using brain potentials to understand prism adaptation: the error-related negativity and the P300

    PubMed Central

    MacLean, Stephane J.; Hassall, Cameron D.; Ishigami, Yoko; Krigolson, Olav E.; Eskes, Gail A.

    2015-01-01

    Prism adaptation (PA) is both a perceptual-motor learning task as well as a promising rehabilitation tool for visuo-spatial neglect (VSN)—a spatial attention disorder often experienced after stroke resulting in slowed and/or inaccurate motor responses to contralesional targets. During PA, individuals are exposed to prism-induced shifts of the visual-field while performing a visuo-guided reaching task. After adaptation, with goggles removed, visuomotor responding is shifted to the opposite direction of that initially induced by the prisms. This visuomotor aftereffect has been used to study visuomotor learning and adaptation and has been applied clinically to reduce VSN severity by improving motor responding to stimuli in contralesional (usually left-sided) space. In order to optimize PA's use for VSN patients, it is important to elucidate the neural and cognitive processes that alter visuomotor function during PA. In the present study, healthy young adults underwent PA while event-related potentials (ERPs) were recorded at the termination of each reach (screen-touch), then binned according to accuracy (hit vs. miss) and phase of exposure block (early, middle, late). Results show that two ERP components were evoked by screen-touch: an error-related negativity (ERN), and a P300. The ERN was consistently evoked on miss trials during adaptation, while the P300 amplitude was largest during the early phase of adaptation for both hit and miss trials. This study provides evidence of two neural signals sensitive to visual feedback during PA that may sub-serve changes in visuomotor responding. Prior ERP research suggests that the ERN reflects an error processing system in medial-frontal cortex, while the P300 is suggested to reflect a system for context updating and learning. Future research is needed to elucidate the role of these ERP components in improving visuomotor responses among individuals with VSN. PMID:26124715

  20. The Relation of Spelling Errors to Cognitive Variables and Word Type

    ERIC Educational Resources Information Center

    Goyen, J. D.; Martin, M.

    1977-01-01

    Attempts to relate the spelling errors of secondary school students to visual and auditory sequential memory, intelligence, reading, and writing speed. The relation of spelling ability to the frequency and regularity of words is also examined. (Author/RK)

  1. Is your error my concern? An event-related potential study on own and observed error detection in cooperation and competition.

    PubMed

    de Bruijn, Ellen R A; von Rhein, Daniel T

    2012-01-01

    Electroencephalogram studies have identified an error-related event-related potential (ERP) component known as the error-related negativity or ERN, thought to result from the detection of a loss of reward during performance monitoring. However, as own errors are always associated with a loss of reward, disentangling whether the ERN is error- or reward-dependent has proven to be a difficult endeavor. Recently, an ERN has also been demonstrated following the observation of other's errors. Importantly, other people's errors can be associated with loss or gain depending on the cooperative or competitive context in which they are made. The aim of the current ERP study was to disentangle the error- or reward-dependency of performance monitoring. Twelve pairs (N = 24) of participants performed and observed a speeded-choice-reaction task in two contexts. Own errors were always associated with a loss of reward. Observed errors in the cooperative context also yielded a loss of reward, but observed errors in the competitive context resulted in a gain. The results showed that the ERN was present following all types of errors independent of who made the error and the outcome of the action. Consequently, the current study demonstrates that performance monitoring as reflected by the ERN is error-specific and not directly dependent on reward.

  2. Accuracy in Parameter Estimation for the Root Mean Square Error of Approximation: Sample Size Planning for Narrow Confidence Intervals

    ERIC Educational Resources Information Center

    Kelley, Ken; Lai, Keke

    2011-01-01

    The root mean square error of approximation (RMSEA) is one of the most widely reported measures of misfit/fit in applications of structural equation modeling. When the RMSEA is of interest, so too should be the accompanying confidence interval. A narrow confidence interval reveals that the plausible parameter values are confined to a relatively…
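
    For reference, the standard RMSEA point estimate can be computed from a model chi-square statistic, its degrees of freedom, and the sample size, as sketched below; the numbers are made up, and this is not the sample-size-planning procedure the article develops.

```python
import math

# Standard RMSEA point estimate:
#   RMSEA = sqrt( max(chi2 - df, 0) / (df * (N - 1)) )
# Inputs below are illustrative only.
def rmsea(chi_sq, df, n):
    return math.sqrt(max(chi_sq - df, 0.0) / (df * (n - 1)))

print(round(rmsea(chi_sq=85.3, df=40, n=250), 4))   # ~0.0674
```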

  3. [Learning from errors after a care-related adverse event].

    PubMed

    Richard, Christian; Pibarot, Marie-Laure; Zantman, Françoise

    2016-04-01

    The mobilisation of all health professionals with regard to the detection and analysis of care-related adverse events is an essential element in the improvement of the safety of care. This approach is required by the authorities and justifiably expected by users. PMID:27085926

  4. Error-Related Activity and Correlates of Grammatical Plasticity

    PubMed Central

    Davidson, Doug J.; Indefrey, Peter

    2011-01-01

    Cognitive control involves not only the ability to manage competing task demands, but also the ability to adapt task performance during learning. This study investigated how violation-, response-, and feedback-related electrophysiological (EEG) activity changes over time during language learning. Twenty-two Dutch learners of German classified short prepositional phrases presented serially as text. The phrases were initially presented without feedback during a pre-test phase, and then with feedback in a training phase on two separate days spaced 1 week apart. The stimuli included grammatically correct phrases, as well as grammatical violations of gender and declension. Without feedback, participants’ classification was near chance and did not improve over trials. During training with feedback, behavioral classification improved and violation responses appeared to both types of violation in the form of a P600. Feedback-related negative and positive components were also present from the first day of training. The results show changes in the electrophysiological responses in concert with improving behavioral discrimination, suggesting that the activity is related to grammar learning. PMID:21960979

  5. Bivalent separation into univalents precedes age-related meiosis I errors in oocytes

    PubMed Central

    Sakakibara, Yogo; Hashimoto, Shu; Nakaoka, Yoshiharu; Kouznetsova, Anna; Höög, Christer; Kitajima, Tomoya S.

    2015-01-01

    The frequency of chromosome segregation errors during meiosis I (MI) in oocytes increases with age. The two-hit model suggests that errors are caused by the combination of a first hit that creates susceptible crossover configurations and a second hit comprising an age-related reduction in chromosome cohesion. This model predicts an age-related increase in univalents, but direct evidence of this phenomenon as a major cause of segregation errors has been lacking. Here, we provide the first live analysis of single chromosomes undergoing segregation errors during MI in the oocytes of naturally aged mice. Chromosome tracking reveals that 80% of the errors are preceded by bivalent separation into univalents. The set of the univalents is biased towards balanced and unbalanced predivision of sister chromatids during MI. Moreover, we find univalents predisposed to predivision in human oocytes. This study defines premature bivalent separation into univalents as the primary defect responsible for age-related aneuploidy. PMID:26130582

  6. Effect of geocoding errors on traffic-related air pollutant exposure and concentration estimates

    EPA Science Inventory

    Exposure to traffic-related air pollutants is highest very near roads, and thus exposure estimates are sensitive to positional errors. This study evaluates positional and PM2.5 concentration errors that result from the use of automated geocoding methods and from linearized approx...

  7. 47 CFR 1.1167 - Error claims related to regulatory fees.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 1 2014-10-01 2014-10-01 false Error claims related to regulatory fees. 1.1167 Section 1.1167 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Grants by Random Selection Schedule of Statutory Charges and Procedures for Payment § 1.1167 Error...

  8. 47 CFR 1.1167 - Error claims related to regulatory fees.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 1 2013-10-01 2013-10-01 false Error claims related to regulatory fees. 1.1167 Section 1.1167 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Grants by Random Selection Schedule of Statutory Charges and Procedures for Payment § 1.1167 Error...

  9. Developmental Changes in Error Monitoring: An Event-Related Potential Study

    ERIC Educational Resources Information Center

    Wiersema, Jan R.; van der Meere, Jacob J.; Roeyers, Herbert

    2007-01-01

    The aim of the study was to investigate the developmental trajectory of error monitoring. For this purpose, children (age 7-8), young adolescents (age 13-14) and adults (age 23-24) performed a Go/No-Go task and were compared on overt reaction time (RT) performance and on event-related potentials (ERPs), thought to reflect error detection…

  10. Achieving Accuracy Requirements for Forest Biomass Mapping: A Data Fusion Method for Estimating Forest Biomass and LiDAR Sampling Error with Spaceborne Data

    NASA Technical Reports Server (NTRS)

    Montesano, P. M.; Cook, B. D.; Sun, G.; Simard, M.; Zhang, Z.; Nelson, R. F.; Ranson, K. J.; Lutchke, S.; Blair, J. B.

    2012-01-01

    The synergistic use of active and passive remote sensing (i.e., data fusion) demonstrates the ability of spaceborne light detection and ranging (LiDAR), synthetic aperture radar (SAR) and multispectral imagery for achieving the accuracy requirements of a global forest biomass mapping mission. This data fusion approach also provides a means to extend 3D information from discrete spaceborne LiDAR measurements of forest structure across scales much larger than that of the LiDAR footprint. For estimating biomass, these measurements mix a number of errors including those associated with LiDAR footprint sampling over regional to global extents. A general framework for mapping above ground live forest biomass (AGB) with a data fusion approach is presented and verified using data from NASA field campaigns near Howland, ME, USA, to assess AGB and LiDAR sampling errors across a regionally representative landscape. We combined SAR and Landsat-derived optical (passive optical) image data to identify forest patches, and used image and simulated spaceborne LiDAR data to compute AGB and estimate LiDAR sampling error for forest patches and 100 m, 250 m, 500 m, and 1 km grid cells. Forest patches were delineated with Landsat-derived data and airborne SAR imagery, and simulated spaceborne LiDAR (SSL) data were derived from orbit and cloud cover simulations and airborne data from NASA's Laser Vegetation Imaging Sensor (LVIS). At both the patch and grid scales, we evaluated differences in AGB estimation and sampling error from the combined use of LiDAR with both SAR and passive optical and with either SAR or passive optical alone. This data fusion approach demonstrates that incorporating forest patches into the AGB mapping framework can provide sub-grid forest information for coarser grid-level AGB reporting, and that combining simulated spaceborne LiDAR with SAR and passive optical data is most useful for estimating AGB when measurements from LiDAR are limited because they minimized

  11. Adaptation of hybrid human-computer interaction systems using EEG error-related potentials.

    PubMed

    Chavarriaga, Ricardo; Biasiucci, Andrea; Forster, Killian; Roggen, Daniel; Troster, Gerhard; Millan, Jose Del R

    2010-01-01

    Performance improvement in both humans and artificial systems strongly relies on the ability to recognize erroneous behavior or decisions. This paper, which builds upon previous studies on EEG error-related signals, presents a hybrid approach for human-computer interaction that uses human gestures to send commands to a computer and exploits brain activity to provide implicit feedback about the recognition of such commands. Using a simple computer game as a case study, we show that EEG activity evoked by erroneous gesture recognition can be classified in single trials above random levels. Automatic artifact rejection techniques are used, taking into account that subjects are allowed to move during the experiment. Moreover, we present a simple adaptation mechanism that uses the EEG signal to label newly acquired samples and can be used to re-calibrate the gesture recognition system in a supervised manner. Offline analyses show that, although the achieved EEG decoding accuracy is far from perfect, these signals convey sufficient information to significantly improve the overall system performance.

  12. Shoulder proprioception is not related to throwing speed or accuracy in elite adolescent male baseball players.

    PubMed

    Freeston, Jonathan; Adams, Roger D; Rooney, Kieron

    2015-01-01

    Understanding factors that influence throwing speed and accuracy is critical to performance in baseball. Shoulder proprioception has been implicated in the injury risk of throwing athletes, but no such link has been established with performance outcomes. The purpose of this study was to describe any relationship between shoulder proprioception acuity and throwing speed or accuracy. Twenty healthy elite adolescent male baseball players (age, 19.6 ± 2.6 years), who had represented the state of New South Wales in the past 18 months, were assessed for bilateral active shoulder proprioception (shoulder rotation in 90° of arm abduction moving toward external rotation using the active movement extent discrimination apparatus), maximal throwing speed (MTS, meters per second measured via a radar gun), and accuracy (total error in centimeters determined by video analysis) at 80 and 100% of MTS. Although proprioception in the dominant and nondominant arms was significantly correlated with each other (r = 0.54, p < 0.01), no relationship was found between shoulder proprioception and performance. Shoulder proprioception was not a significant determinant of throwing performance such that high levels of speed and accuracy were achieved without a high degree of proprioception. There is no evidence to suggest therefore that this particular method of shoulder proprioception measurement should be implemented in clinical practice. Consequently, clinicians are encouraged to consider proprioception throughout the entire kinetic chain rather than the shoulder joint in isolation as a determining factor of performance in throwing athletes. PMID:24936898

  13. Shoulder proprioception is not related to throwing speed or accuracy in elite adolescent male baseball players.

    PubMed

    Freeston, Jonathan; Adams, Roger D; Rooney, Kieron

    2015-01-01

    Understanding factors that influence throwing speed and accuracy is critical to performance in baseball. Shoulder proprioception has been implicated in the injury risk of throwing athletes, but no such link has been established with performance outcomes. The purpose of this study was to describe any relationship between shoulder proprioception acuity and throwing speed or accuracy. Twenty healthy elite adolescent male baseball players (age, 19.6 ± 2.6 years), who had represented the state of New South Wales in the past 18 months, were assessed for bilateral active shoulder proprioception (shoulder rotation in 90° of arm abduction moving toward external rotation using the active movement extent discrimination apparatus), maximal throwing speed (MTS, meters per second measured via a radar gun), and accuracy (total error in centimeters determined by video analysis) at 80 and 100% of MTS. Although proprioception in the dominant and nondominant arms was significantly correlated with each other (r = 0.54, p < 0.01), no relationship was found between shoulder proprioception and performance. Shoulder proprioception was not a significant determinant of throwing performance such that high levels of speed and accuracy were achieved without a high degree of proprioception. There is no evidence to suggest therefore that this particular method of shoulder proprioception measurement should be implemented in clinical practice. Consequently, clinicians are encouraged to consider proprioception throughout the entire kinetic chain rather than the shoulder joint in isolation as a determining factor of performance in throwing athletes.

  14. The error-related negativity (ERN) and psychopathology: Toward an Endophenotype

    PubMed Central

    Olvet, Doreen M.; Hajcak, Greg

    2008-01-01

    The ERN is a negative deflection in the event-related potential that peaks approximately 50 ms after the commission of an error. The ERN is thought to reflect early error-processing activity of the anterior cingulate cortex (ACC). First, we review current functional, neurobiological, and developmental data on the ERN. Next, the ERN is discussed in terms of three psychiatric disorders characterized by abnormal response monitoring: anxiety disorders, depression, and substance abuse. These data indicate that increased and decreased error-related brain activity are associated with the internalizing and externalizing dimensions of psychopathology, respectively. Recent data further suggest that the abnormal error processing indexed by the ERN reflects trait- rather than state-related symptoms, especially those related to anxiety. Overall, these data point to the utility of the ERN in studying risk for psychiatric disorders and are discussed in terms of the endophenotype construct. PMID:18694617

  15. Evaluating IMRT and VMAT dose accuracy: Practical examples of failure to detect systematic errors when applying a commonly used metric and action levels

    SciTech Connect

    Nelms, Benjamin E.; Chan, Maria F.; Jarry, Geneviève; Lemire, Matthieu; Lowden, John; Hampton, Carnell

    2013-11-15

    Purpose: This study (1) examines a variety of real-world cases where systematic errors were not detected by widely accepted methods for IMRT/VMAT dosimetric accuracy evaluation, and (2) drills down to identify failure modes and their corresponding means for detection, diagnosis, and mitigation. The primary goal of detailing these case studies is to explore different, more sensitive methods and metrics that could be used more effectively for evaluating the accuracy of dose algorithms, delivery systems, and QA devices. Methods: The authors present seven real-world case studies representing a variety of combinations of the treatment planning system (TPS), linac, delivery modality, and systematic error type. These case studies are typical of what might be used as part of an IMRT or VMAT commissioning test suite, varying in complexity. Each case study is analyzed according to TG-119 instructions for gamma passing rates and action levels for per-beam and/or composite plan dosimetric QA. Then, each case study is analyzed in depth with advanced diagnostic methods (dose profile examination, EPID-based measurements, dose difference pattern analysis, 3D measurement-guided dose reconstruction, and dose grid inspection) and more sensitive metrics (2% local normalization/2 mm DTA and estimated DVH comparisons). Results: For these case studies, the conventional 3%/3 mm gamma passing rates exceeded 99% for IMRT per-beam analyses and ranged from 93.9% to 100% for composite plan dose analysis, well above the TG-119 action levels of 90% and 88%, respectively. However, all cases had systematic errors that were detected only by using advanced diagnostic techniques and more sensitive metrics. The systematic errors caused variable but noteworthy impact, including estimated target dose coverage loss of up to 5.5% and local dose deviations up to 31.5%. Types of errors included TPS model settings, algorithm limitations, and modeling and alignment of QA phantoms in the TPS. Most of the errors were
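
    The gamma passing rates cited above combine a percent dose-difference criterion with a distance-to-agreement (DTA) criterion. As a minimal illustration only, the sketch below computes a 1-D global-normalization gamma index for two synthetic dose profiles; the profiles, grid, and the gamma_index_1d helper are illustrative assumptions, not the study's data or software.

```python
import numpy as np

def gamma_index_1d(ref_pos, ref_dose, eval_pos, eval_dose,
                   dose_tol=0.03, dta_tol=3.0):
    """Per-point 1-D gamma index with global dose normalization.

    dose_tol: fractional dose-difference criterion (0.03 means 3%)
    dta_tol:  distance-to-agreement criterion in mm (3.0 means 3 mm)
    """
    norm = dose_tol * ref_dose.max()            # global normalization dose
    gammas = np.empty(ref_pos.size)
    for i, (x, d) in enumerate(zip(ref_pos, ref_dose)):
        dose_term = (eval_dose - d) / norm
        dist_term = (eval_pos - x) / dta_tol
        gammas[i] = np.sqrt(dose_term**2 + dist_term**2).min()
    return gammas

# Illustrative profiles: evaluated dose shifted by 1 mm and scaled by 1%.
x = np.linspace(-50, 50, 201)                   # mm
ref = np.exp(-x**2 / (2 * 15.0**2))             # arbitrary Gaussian "profile"
ev = 1.01 * np.exp(-(x - 1.0)**2 / (2 * 15.0**2))

g = gamma_index_1d(x, ref, x, ev)               # 3%/3 mm criteria
print("gamma passing rate: %.1f%%" % (100.0 * np.mean(g <= 1.0)))
```

    Tightening the criteria in such a sketch from 3%/3 mm to 2%/2 mm (with local normalization) makes small shifts and scaling errors far more visible, which parallels the abstract's point about more sensitive metrics.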

  16. Confidence-Accuracy Calibration in Absolute and Relative Face Recognition Judgments

    ERIC Educational Resources Information Center

    Weber, Nathan; Brewer, Neil

    2004-01-01

    Confidence-accuracy (CA) calibration was examined for absolute and relative face recognition judgments as well as for recognition judgments from groups of stimuli presented simultaneously or sequentially (i.e., simultaneous or sequential mini-lineups). When the effect of difficulty was controlled, absolute and relative judgments produced…

  17. Evaluation of Relative Geometric Accuracy of Terrasar-X by Pixel Matching Methodology

    NASA Astrophysics Data System (ADS)

    Nonaka, T.; Asaka, T.; Iwashita, K.

    2016-06-01

    Recently, high-resolution commercial SAR satellites with resolutions of a few meters have been widely utilized for various applications, and disaster monitoring is one of the most common application areas. Information about the flooding situation and ground displacement was rapidly announced to the public after the Great East Japan Earthquake of 2011. One study reported the displacement in the Tohoku region using a pixel matching methodology with both pre- and post-event TerraSAR-X data, and the validated accuracy was about 30 cm at the GEONET reference points. In order to discuss the spatial distribution of the displacement, we need to evaluate the relative accuracy of the displacement in addition to the absolute accuracy. In previous studies, our team evaluated the absolute 2D geo-location accuracy of the TerraSAR-X ortho-rectified EEC product for both flat and mountainous areas. Therefore, the purpose of the current study was to evaluate the spatial and temporal relative geo-location accuracies of the product by treating the apparent displacement of a fixed point as the relative geo-location accuracy. Firstly, using a TerraSAR-X StripMap dataset, a pixel matching method for estimating displacement at the sub-pixel level was developed. Secondly, the validity of the method was confirmed by comparison with GEONET data. We confirmed that the accuracy of the displacement in the X and Y directions was in agreement with previous studies. Subsequently, the methodology was applied to 20 pairs of data sets covering Tokyo Ota-ku and Kawasaki-shi, and the displacement of each pair was evaluated. It was revealed that the time-series displacement rate had a seasonal trend and appeared to be related to atmospheric delay.
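
    As a rough illustration of pixel-matching-based displacement estimation at the sub-pixel level, the sketch below correlates two 1-D signals and refines the correlation peak with a parabolic fit; the signals, the lag window, and the subpixel_shift_1d helper are hypothetical and stand in for the 2-D image matching actually applied to TerraSAR-X data.

```python
import numpy as np

def subpixel_shift_1d(ref, tgt, max_lag=5):
    """Estimate the shift of `tgt` relative to `ref` via normalized
    cross-correlation with parabolic sub-pixel peak refinement."""
    ref = (ref - ref.mean()) / ref.std()
    tgt = (tgt - tgt.mean()) / tgt.std()
    lags = np.arange(-max_lag, max_lag + 1)
    corr = np.array([np.mean(ref[max_lag:-max_lag] *
                             tgt[max_lag + k: len(tgt) - max_lag + k])
                     for k in lags])
    i = int(np.argmax(corr))
    if 0 < i < len(corr) - 1:                    # parabolic interpolation
        c0, c1, c2 = corr[i - 1], corr[i], corr[i + 1]
        i_sub = i + 0.5 * (c0 - c2) / (c0 - 2 * c1 + c2)
    else:
        i_sub = float(i)
    return lags[0] + i_sub                        # shift in pixels

# Synthetic example: the target is the reference shifted by 2 samples.
x = np.linspace(0, 20, 400)
ref = np.sin(x) + 0.3 * np.sin(3.1 * x)
tgt = np.roll(ref, 2)
print("estimated shift (pixels):", subpixel_shift_1d(ref, tgt))  # ~2.0
```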

  18. Punishment has a lasting impact on error-related brain activity.

    PubMed

    Riesel, Anja; Weinberg, Anna; Endrass, Tanja; Kathmann, Norbert; Hajcak, Greg

    2012-02-01

    The current study examined whether punishment has direct and lasting effects on error-related brain activity, and whether this effect is larger with increasing trait anxiety. Participants were told that errors on a flanker task would be punished in some blocks but not others. Punishment was applied following 50% of errors in punished blocks during the first half of the experiment (i.e., acquisition), but never in the second half (i.e., extinction). The ERN was enhanced in the punished blocks in both experimental phases; this enhancement remained stable throughout the extinction phase. More anxious individuals were characterized by larger punishment-related modulations in the ERN. The study reveals evidence for lasting, punishment-based modulations of the ERN that increase with anxiety. These data suggest avenues for research to examine more specific learning-related mechanisms that link anxiety to overactive error monitoring. PMID:22092041

  19. Tempest: Mesoscale test case suite results and the effect of order-of-accuracy on pressure gradient force errors

    NASA Astrophysics Data System (ADS)

    Guerra, J. E.; Ullrich, P. A.

    2014-12-01

    Tempest is a new non-hydrostatic atmospheric modeling framework that allows for investigation and intercomparison of high-order numerical methods. It is composed of a dynamical core based on a finite-element formulation of arbitrary order operating on cubed-sphere and Cartesian meshes with topography. The underlying technology is briefly discussed, including a novel Hybrid Finite Element Method (HFEM) vertical coordinate coupled with high-order Implicit/Explicit (IMEX) time integration to control vertically propagating sound waves. Here, we show results from a suite of mesoscale test cases from the literature that demonstrate the accuracy, performance, and properties of Tempest on regular Cartesian meshes. The test cases include wave propagation behavior, Kelvin-Helmholtz instabilities, and flow interaction with topography. Comparisons are made to existing results, highlighting improvements in resolving atmospheric dynamics in the vertical direction, where many existing methods are deficient.
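
    When such test suites quote an observed order of accuracy, it is commonly estimated from errors at two resolutions under the assumption that the error behaves like C·h^p. A minimal sketch, with placeholder error values rather than Tempest results:

```python
import math

def observed_order(err_coarse, err_fine, h_coarse, h_fine):
    """Observed order of accuracy p, assuming err ~ C * h**p."""
    return math.log(err_coarse / err_fine) / math.log(h_coarse / h_fine)

# Hypothetical L2 errors from a two-resolution refinement study.
print(observed_order(err_coarse=4.0e-3, err_fine=2.5e-4,
                     h_coarse=200.0, h_fine=100.0))   # ~4.0, i.e. 4th order
```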

  20. Error-Related Negativity and Tic History in Pediatric Obsessive-Compulsive Disorder

    ERIC Educational Resources Information Center

    Hanna, Gregory L.; Carrasco, Melisa; Harbin, Shannon M.; Nienhuis, Jenna K.; LaRosa, Christina E.; Chen, Poyu; Fitzgerald, Kate D.; Gehring, William J.

    2012-01-01

    Objective: The error-related negativity (ERN) is a negative deflection in the event-related potential after an incorrect response, which is often increased in patients with obsessive-compulsive disorder (OCD). However, the relation of the ERN to comorbid tic disorders has not been examined in patients with OCD. This study compared ERN amplitudes…

  1. SYSTEMATIC CONTINUUM ERRORS IN THE Lyα FOREST AND THE MEASURED TEMPERATURE-DENSITY RELATION

    SciTech Connect

    Lee, Khee-Gan

    2012-07-10

    Continuum fitting uncertainties are a major source of error in estimates of the temperature-density relation (usually parameterized as a power-law, T ∝ Δ^(γ-1)) of the intergalactic medium through the flux probability distribution function (PDF) of the Lyα forest. Using a simple order-of-magnitude calculation, we show that few percent-level systematic errors in the placement of the quasar continuum due to, e.g., a uniform low-absorption Gunn-Peterson component could lead to errors in γ of the order of unity. This is quantified further using a simple semi-analytic model of the Lyα forest flux PDF. We find that under(over)estimates in the continuum level can lead to a lower (higher) measured value of γ. By fitting models to mock data realizations generated with current observational errors, we find that continuum errors can cause a systematic bias in the estimated temperature-density relation of δ(γ) ≈ -0.1, while the error is increased to σ_γ ≈ 0.2 compared to σ_γ ≈ 0.1 in the absence of continuum errors.
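
    The quantities involved can be summarized as follows; the second relation is a generic first-order statement of how a fractional continuum misplacement rescales the normalized flux, included as an illustrative identity rather than a formula transcribed from the paper.

```latex
% Temperature-density relation of the intergalactic medium:
T(\Delta) = T_0\,\Delta^{\gamma - 1}, \qquad \Delta \equiv \rho/\bar{\rho}

% Effect of a fractional continuum error \delta_C on the normalized flux:
F_{\mathrm{norm}} = \frac{F_{\mathrm{obs}}}{C_{\mathrm{est}}}
                  = \frac{F_{\mathrm{obs}}}{C_{\mathrm{true}}\,(1 + \delta_C)}
                  \approx F_{\mathrm{true}}\,(1 - \delta_C)
```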

  2. Maximizing Accuracy in Half Life Measurements, by Minimizing Error, with Application to BISMUTH-212 and POLONIUM-218

    NASA Astrophysics Data System (ADS)

    Poupaki, Irene

    1990-01-01

    Radon and short-lived progeny existing in all three primordial series, namely uranium, thorium and actinium, are of most significance for human exposure, since their inhalation is implicated in bronchogenic carcinoma. Because the dosimetric calculations utilize half-life, it is important to know this parameter with the maximum possible accuracy. The half-lives of Po-218 and Bi-212, radon-222 and radon-220 progeny, were measured as 3.078 ± 0.01 min and 59.81 ± 0.23 min, respectively. Experimental data collected by alpha-counting included background from both the counter and the intrinsic radioactivity. A comparison of all mathematical methods presently employed in the analysis of experimental radioactivity decay is presented. The most commonly used methods are least squares, weighted least squares with different weighting factors, and the maximum likelihood (Peierls) method. Artificial data corresponding to three different nuclides, different total experimental duration, and different counting time intervals were generated. Testing these data showed that the WLSQ with the correct weighting factor gives the highest accuracy and precision. Without spectrometry, it is impossible to measure the quantity or half-life of Po-218 unless correction is made for the Po-214 daughter. For the measurement of Po-218, samples containing the short-lived radon-222 daughters were collected electrostatically. A method for estimating the initial radon daughter concentrations in air based on regression analysis is proposed. The comparison of this with the well-known Thomas method using artificial data showed that the regression analysis method leads to more accurate results. The Bi-212 samples were prepared from a Th-228 solution by spontaneous electrodeposition on platinum and nickel disks. Corrections for the curve stripping have been applied. The methods developed here have general application to the measurement of any radioactive nuclides. A review of the measured half
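
    As a sketch of the weighted-least-squares approach the abstract favors, the code below fits a single-component decay curve (with a known flat background) to synthetic Poisson counts, weighting the linearized data by the inverse of its estimated variance. The count rates, background level, and half-life are invented for illustration and are not the thesis data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic decay data: counts per interval from one nuclide plus background.
true_half_life = 60.0                        # minutes (illustrative)
lam = np.log(2.0) / true_half_life
t = np.arange(0.0, 300.0, 5.0)               # start of each counting interval
expected = 5000.0 * np.exp(-lam * t) + 50.0  # decaying source + flat background
counts = rng.poisson(expected)

# Weighted least squares on the linearized net counts, weights = 1/variance.
net = counts - 50.0                           # background assumed known here
mask = net > 0
y = np.log(net[mask])
w = net[mask]**2 / counts[mask]               # var(ln net) ~ counts / net**2
A = np.vstack([np.ones(mask.sum()), t[mask]]).T
W = np.diag(w)
beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)   # [ln N0, -lambda]
print("fitted half-life (min):", np.log(2.0) / -beta[1])
```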

  3. Crying tapir: the functionality of errors and accuracy in predator recognition in two neotropical high-canopy primates.

    PubMed

    Mourthé, Ítalo; Barnett, Adrian A

    2014-01-01

    Predation is often considered to be a prime driver in primate evolution, but, as predation is rarely observed in nature, little is known of primate antipredator responses. Time-limited primates should be highly discerning when responding to predators, since time spent in vigilance and avoidance behaviour may supplant other activities. We present data from two independent studies describing and quantifying the frequency, nature and duration of predator-linked behaviours in 2 high-canopy primates, Ateles belzebuth and Cacajao ouakary. We introduce the concept of 'pseudopredators' (harmless species whose appearance is sufficiently similar to that of predators to elicit antipredator responses) and predict that changes in behaviour should increase with risk posed by a perceived predator. We studied primate group encounters with non-primate vertebrates across 14 (Ateles) and 19 (Cacajao) months in 2 undisturbed Amazonian forests. Although preliminary, data on both primates revealed that they distinguished the potential predation capacities of other species, as predicted. They appeared to differentiate predators from non-predators and distinguished when potential predators were not an immediate threat, although they reacted erroneously to pseudopredators, on average in about 20% of the responses given toward other vertebrates. Erring on the side of reacting to pseudopredators is unsurprising since, in predation, a single recognition error can be fatal to the prey. PMID:25791040

  4. Crying tapir: the functionality of errors and accuracy in predator recognition in two neotropical high-canopy primates.

    PubMed

    Mourthé, Ítalo; Barnett, Adrian A

    2014-01-01

    Predation is often considered to be a prime driver in primate evolution, but, as predation is rarely observed in nature, little is known of primate antipredator responses. Time-limited primates should be highly discerning when responding to predators, since time spent in vigilance and avoidance behaviour may supplant other activities. We present data from two independent studies describing and quantifying the frequency, nature and duration of predator-linked behaviours in 2 high-canopy primates, Ateles belzebuth and Cacajao ouakary. We introduce the concept of 'pseudopredators' (harmless species whose appearance is sufficiently similar to that of predators to elicit antipredator responses) and predict that changes in behaviour should increase with risk posed by a perceived predator. We studied primate group encounters with non-primate vertebrates across 14 (Ateles) and 19 (Cacajao) months in 2 undisturbed Amazonian forests. Although preliminary, data on both primates revealed that they distinguished the potential predation capacities of other species, as predicted. They appeared to differentiate predators from non-predators and distinguished when potential predators were not an immediate threat, although they reacted erroneously to pseudopredators, on average in about 20% of the responses given toward other vertebrates. Erring on the side of reacting to pseudopredators is unsurprising since, in predation, a single recognition error can be fatal to the prey.

  5. Precision error in dual-photon absorptiometry related to source age

    SciTech Connect

    Ross, P.D.; Wasnich, R.D.; Vogel, J.M.

    1988-02-01

    An average, variable precision error of up to 6% related to source age was observed for dual-photon absorptiometry of the spine in a longitudinal study of bone mineral content involving 393 women. Application of a software correction for source decay compensated for only a portion of this error. The authors conclude that measurement of bone-loss rates using serial dual-photon bone mineral measurements must be interpreted with caution.
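
    A software correction for source decay of the kind mentioned above typically rescales measured counts back to a reference date using the source half-life. The sketch below assumes a Gd-153 source with a nominal half-life of about 240 days (an assumed value, not one taken from the paper); the paper's point is that such a correction compensated for only part of the observed error.

```python
import math

def decay_correct(measured_counts, days_since_calibration, half_life_days=240.0):
    """Rescale measured counts to the source's calibration-date activity.

    half_life_days is an assumed nominal Gd-153 half-life; in practice only
    part of the source-age-related precision error is removed this way."""
    lam = math.log(2.0) / half_life_days
    return measured_counts * math.exp(lam * days_since_calibration)

print(decay_correct(1.0e5, days_since_calibration=180.0))  # ~1.68e5
```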

  6. CORRECTED ERROR VIDEO VERSUS A PHYSICAL THERAPIST INSTRUCTED HOME EXERCISE PROGRAM: ACCURACY OF PERFORMING THERAPEUTIC SHOULDER EXERCISES

    PubMed Central

    Krishnamurthy, Kamesh; Hopp, Jennifer; Stanley, Laura; Spores, Ken; Braunreiter, David

    2016-01-01

    Background and Purpose The accurate performance of physical therapy exercises can be difficult. In this evolving healthcare climate it is important to continually look for better methods to educate patients. The use of handouts, in-person demonstration, and video instruction are all potential avenues used to teach proper exercise form. The purpose of this study was to examine if a corrected error video (CEV) would be as effective as a single visit with a physical therapist (PT) to teach healthy subjects how to properly perform four different shoulder rehabilitation exercises. Study Design This was a prospective, single-blinded interventional trial. Methods Fifty-eight subjects with no shoulder complaints were recruited from two institutions and randomized into one of two groups: the CEV group (30 subjects) was given a CEV comprised of four shoulder exercises, while the physical therapy group (28 subjects) had one session with a PT as well as a handout of how to complete the exercises. Each subject practiced the exercises for one week and was then videotaped performing them during a return visit. Videos were scored with the shoulder exam assessment tool (SEAT) created by the authors. Results There was no difference between the groups on total SEAT score (13.66 ± 0.29 vs 13.46 ± 0.30 for CEV vs PT, p = 0.64, 95% CI [−0.06, 0.037]). Average scores for individual exercises also showed no significant difference. Conclusion/Clinical Relevance These results demonstrate that the inexpensive and accessible CEV is as beneficial as direct instruction in teaching subjects to properly perform shoulder rehabilitation exercises. Level of Evidence 1b PMID:27757288

  7. SCIAMACHY WFM-DOAS XCO2: reduction of scattering related errors

    NASA Astrophysics Data System (ADS)

    Heymann, J.; Bovensmann, H.; Buchwitz, M.; Burrows, J. P.; Deutscher, N. M.; Notholt, J.; Rettinger, M.; Reuter, M.; Schneising, O.; Sussmann, R.; Warneke, T.

    2012-10-01

    Global observations of column-averaged dry air mole fractions of carbon dioxide (CO2), denoted by XCO2 , retrieved from SCIAMACHY on-board ENVISAT can provide important and missing global information on the distribution and magnitude of regional CO2 surface fluxes. This application has challenging precision and accuracy requirements. In a previous publication (Heymann et al., 2012), it has been shown by analysing seven years of SCIAMACHY WFM-DOAS XCO2 (WFMDv2.1) that unaccounted thin cirrus clouds can result in significant errors. In order to enhance the quality of the SCIAMACHY XCO2 data product, we have developed a new version of the retrieval algorithm (WFMDv2.2), which is described in this manuscript. It is based on an improved cloud filtering and correction method using the 1.4 μm strong water vapour absorption and 0.76 μm O2-A bands. The new algorithm has been used to generate a SCIAMACHY XCO2 data set covering the years 2003-2009. The new XCO2 data set has been validated using ground-based observations from the Total Carbon Column Observing Network (TCCON). The validation shows a significant improvement of the new product (v2.2) in comparison to the previous product (v2.1). For example, the standard deviation of the difference to TCCON at Darwin, Australia, has been reduced from 4 ppm to 2 ppm. The monthly regional-scale scatter of the data (defined as the mean intra-monthly standard deviation of all quality filtered XCO2 retrievals within a radius of 350 km around various locations) has also been reduced, typically by a factor of about 1.5. Overall, the validation of the new WFMDv2.2 XCO2 data product can be summarised by a single measurement precision of 3.8 ppm, an estimated regional-scale (radius of 500 km) precision of monthly averages of 1.6 ppm and an estimated regional-scale relative accuracy of 0.8 ppm. In addition to the comparison with the limited number of TCCON sites, we also present a comparison with NOAA's global CO2 modelling and
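
    The "monthly regional-scale scatter" defined in the abstract can be written down directly as code. The sketch below assumes 1-D arrays of retrieval latitude, longitude, month label, and XCO2 value; the function name and data layout are illustrative and are not the WFM-DOAS processing code.

```python
import numpy as np

def regional_monthly_scatter(lat, lon, month, xco2, site_lat, site_lon,
                             radius_km=350.0):
    """Mean intra-monthly standard deviation of XCO2 retrievals within
    `radius_km` of a site (the scatter metric defined in the abstract)."""
    # Approximate great-circle (haversine) distance in km.
    earth_radius = 6371.0
    p1, p2 = np.radians(lat), np.radians(site_lat)
    dphi = np.radians(lat - site_lat)
    dlmb = np.radians(lon - site_lon)
    a = np.sin(dphi / 2)**2 + np.cos(p1) * np.cos(p2) * np.sin(dlmb / 2)**2
    dist = 2 * earth_radius * np.arcsin(np.sqrt(a))

    near = dist <= radius_km
    stds = [xco2[near & (month == m)].std()
            for m in np.unique(month[near])
            if (near & (month == m)).sum() > 1]
    return float(np.mean(stds)) if stds else float("nan")
```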

  8. SCIAMACHY WFM-DOAS XCO2: reduction of scattering related errors

    NASA Astrophysics Data System (ADS)

    Heymann, J.; Bovensmann, H.; Buchwitz, M.; Burrows, J. P.; Deutscher, N. M.; Notholt, J.; Rettinger, M.; Reuter, M.; Schneising, O.; Sussmann, R.; Warneke, T.

    2012-06-01

    Global observations of column-averaged dry air mole fractions of carbon dioxide (CO2), denoted by XCO2, retrieved from passive remote sensing instruments on Earth orbiting satellites can provide important and missing global information on the distribution and magnitude of regional CO2 surface fluxes. This application has challenging precision and accuracy requirements. SCIAMACHY on-board ENVISAT is the first satellite instrument that measures the upwelling electromagnetic radiation in the near and short wave infrared at an adequate spectral and spatial resolution to yield near-surface sensitive XCO2. In a previous publication (Heymann et al., 2012), it has been shown by analysing seven years of SCIAMACHY WFM-DOAS XCO2 (WFMDv2.1) that unaccounted thin cirrus clouds can result in significant errors. In order to enhance the quality of the SCIAMACHY XCO2 data product, we have developed a new version of the retrieval algorithm (WFMDv2.2), which is described in this manuscript. It is based on an improved cloud filtering and correction method using the 1.4 μm strong water vapour absorption and 0.76 μm O2-A bands. The new algorithm has been used to generate a SCIAMACHY XCO2 data set covering the years 2003-2009. The new XCO2 data set has been validated using ground-based observations from the Total Carbon Column Observing Network (TCCON). The validation shows a significant improvement of the new product (v2.2) in comparison to the previous product (v2.1). For example, the standard deviation of the difference to TCCON at Darwin, Australia, has been reduced from 4 ppm to 2 ppm. The monthly regional-scale scatter of the data (defined as the mean intra-monthly standard deviation of all quality filtered XCO2 retrievals within a radius of 350 km around various locations) has also been reduced, typically by a factor of about 1.5. Overall, the validation of the new WFMDv2.2 XCO2 data product can be summarised by a single measurement precision of 3.8 ppm, an estimated regional

  9. Spatial and temporal characteristics of error-related activity in the human brain.

    PubMed

    Neta, Maital; Miezin, Francis M; Nelson, Steven M; Dubis, Joseph W; Dosenbach, Nico U F; Schlaggar, Bradley L; Petersen, Steven E

    2015-01-01

    A number of studies have focused on the role of specific brain regions, such as the dorsal anterior cingulate cortex during trials on which participants make errors, whereas others have implicated a host of more widely distributed regions in the human brain. Previous work has proposed that there are multiple cognitive control networks, raising the question of whether error-related activity can be found in each of these networks. Thus, to examine error-related activity broadly, we conducted a meta-analysis consisting of 12 tasks that included both error and correct trials. These tasks varied by stimulus input (visual, auditory), response output (button press, speech), stimulus category (words, pictures), and task type (e.g., recognition memory, mental rotation). We identified 41 brain regions that showed a differential fMRI BOLD response to error and correct trials across a majority of tasks. These regions displayed three unique response profiles: (1) fast, (2) prolonged, and (3) a delayed response to errors, as well as a more canonical response to correct trials. These regions were found mostly in several control networks, each network predominantly displaying one response profile. The one exception to this "one network, one response profile" observation is the frontoparietal network, which showed prolonged response profiles (all in the right hemisphere), and fast profiles (all but one in the left hemisphere). We suggest that, in the place of a single localized error mechanism, these findings point to a large-scale set of error-related regions across multiple systems that likely subserve different functions.

  10. Spatial and Temporal Characteristics of Error-Related Activity in the Human Brain

    PubMed Central

    Miezin, Francis M.; Nelson, Steven M.; Dubis, Joseph W.; Dosenbach, Nico U.F.; Schlaggar, Bradley L.; Petersen, Steven E.

    2015-01-01

    A number of studies have focused on the role of specific brain regions, such as the dorsal anterior cingulate cortex during trials on which participants make errors, whereas others have implicated a host of more widely distributed regions in the human brain. Previous work has proposed that there are multiple cognitive control networks, raising the question of whether error-related activity can be found in each of these networks. Thus, to examine error-related activity broadly, we conducted a meta-analysis consisting of 12 tasks that included both error and correct trials. These tasks varied by stimulus input (visual, auditory), response output (button press, speech), stimulus category (words, pictures), and task type (e.g., recognition memory, mental rotation). We identified 41 brain regions that showed a differential fMRI BOLD response to error and correct trials across a majority of tasks. These regions displayed three unique response profiles: (1) fast, (2) prolonged, and (3) a delayed response to errors, as well as a more canonical response to correct trials. These regions were found mostly in several control networks, each network predominantly displaying one response profile. The one exception to this “one network, one response profile” observation is the frontoparietal network, which showed prolonged response profiles (all in the right hemisphere), and fast profiles (all but one in the left hemisphere). We suggest that, in the place of a single localized error mechanism, these findings point to a large-scale set of error-related regions across multiple systems that likely subserve different functions. PMID:25568119

  11. Accuracy of Noncycloplegic Retinoscopy, Retinomax Autorefractor, and SureSight Vision Screener for Detecting Significant Refractive Errors

    PubMed Central

    Kulp, Marjean Taylor; Ying, Gui-shuang; Huang, Jiayan; Maguire, Maureen; Quinn, Graham; Ciner, Elise B.; Cyert, Lynn A.; Orel-Bixler, Deborah A.; Moore, Bruce D.

    2014-01-01

    Purpose. To evaluate, by receiver operating characteristic (ROC) analysis, the ability of noncycloplegic retinoscopy (NCR), Retinomax Autorefractor (Retinomax), and SureSight Vision Screener (SureSight) to detect significant refractive errors (RE) among preschoolers. Methods. Refraction results of eye care professionals using NCR, Retinomax, and SureSight (n = 2588) and of nurse and lay screeners using Retinomax and SureSight (n = 1452) were compared with masked cycloplegic retinoscopy results. Significant RE was defined as hyperopia greater than +3.25 diopters (D), myopia greater than 2.00 D, astigmatism greater than 1.50 D, and anisometropia greater than 1.00 D interocular difference in hyperopia, greater than 3.00 D interocular difference in myopia, or greater than 1.50 D interocular difference in astigmatism. The ability of each screening test to identify presence, type, and/or severity of significant RE was summarized by the area under the ROC curve (AUC) and calculated from weighted logistic regression models. Results. For detection of each type of significant RE, AUC of each test was high; AUC was better for detecting the most severe levels of RE than for all REs considered important to detect (AUC 0.97–1.00 vs. 0.92–0.93). The area under the curve of each screening test was high for myopia (AUC 0.97–0.99). Noncycloplegic retinoscopy and Retinomax performed better than SureSight for hyperopia (AUC 0.92–0.99 and 0.90–0.98 vs. 0.85–0.94, P ≤ 0.02), Retinomax performed better than NCR for astigmatism greater than 1.50 D (AUC 0.95 vs. 0.90, P = 0.01), and SureSight performed better than Retinomax for anisometropia (AUC 0.85–1.00 vs. 0.76–0.96, P ≤ 0.07). Performance was similar for nurse and lay screeners in detecting any significant RE (AUC 0.92–1.00 vs. 0.92–0.99). Conclusions. Each test had a very high discriminatory power for detecting children with any significant RE. PMID:24481262
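
    As a minimal illustration of the AUC summary used in the study, the sketch below computes a nonparametric (rank-based) AUC on synthetic screening scores; the study itself derived AUCs from weighted logistic regression models, so this is a simplified stand-in, and the labels and scores are invented.

```python
import numpy as np

def auc_rank(scores, labels):
    """Nonparametric AUC: probability that a randomly chosen positive case
    scores higher than a randomly chosen negative case (ties count 0.5)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (pos.size * neg.size)

# Synthetic screening scores: significant refractive error present vs absent.
rng = np.random.default_rng(1)
labels = rng.integers(0, 2, 500).astype(bool)
scores = rng.normal(loc=labels.astype(float), scale=1.0)  # informative test
print("AUC:", round(auc_rank(scores, labels), 3))
```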

  12. Error-Related Brain Activity in Young Children: Associations with Parental Anxiety and Child Temperamental Negative Emotionality

    ERIC Educational Resources Information Center

    Torpey, Dana C.; Hajcak, Greg; Kim, Jiyon; Kujawa, Autumn J.; Dyson, Margaret W.; Olino, Thomas M.; Klein, Daniel N.

    2013-01-01

    Background: There is increasing interest in error-related brain activity in anxiety disorders. The error-related negativity (ERN) is a negative deflection in the event-related potential approximately 50 [milliseconds] after errors compared to correct responses. Recent studies suggest that the ERN may be a biomarker for anxiety, as it is positively…

  13. Senior High School Students' Errors on the Use of Relative Words

    ERIC Educational Resources Information Center

    Bao, Xiaoli

    2015-01-01

    Relative clause is one of the most important language points in College English Examination. Teachers have been attaching great importance to the teaching of relative clause, but the outcomes are not satisfactory. Based on Error Analysis theory, this article aims to explore the reasons why senior high school students find it difficult to choose…

  14. Chemical modification of proteins to improve the accuracy of their relative molecular mass determination by electrophoresis.

    PubMed

    Dolnik, Vladislav; Gurske, William A

    2011-10-01

    We studied the electrophoretic behavior of basic proteins (cytochrome c and histone III) and developed a carbamylation method that normalizes their electrophoretic size separation and improves the accuracy of their relative molecular mass determined electrophoretically. In capillary zone electrophoresis with cationic hitchhiking, native cytochrome c does not sufficiently bind cationic surfactants due to electrostatic repulsion between the basic protein and cationic surfactant. Carbamylation suppresses the strong positive charge of the basic proteins and results in more accurate relative molecular masses.

  15. A cerebellar thalamic cortical circuit for error-related cognitive control

    PubMed Central

    Ide, Jaime S.; Li, Chiang-shan Ray

    2010-01-01

    Error detection and behavioral adjustment are core components of cognitive control. Numerous studies have focused on the anterior cingulate cortex (ACC) as a critical locus of this executive function. Our previous work showed greater activation in the dorsal ACC and subcortical structures during error detection, and activation in the ventrolateral prefrontal cortex (VLPFC) during post-error slowing (PES) in a stop signal task (SST). However, the extent of error-related cortical or subcortical activation across subjects was not correlated with VLPFC activity during PES. So then, what causes VLPFC activation during PES? To address this question, we employed Granger causality mapping (GCM) and identified regions that Granger caused VLPFC activation in 54 adults performing the SST during fMRI. These brain regions, including the supplementary motor area (SMA), cerebellum, a pontine region, and medial thalamus, represent potential targets responding to errors in a way that could influence VLPFC activation. In confirmation of this hypothesis, the error-related activity of these regions correlated with VLPFC activation during PES, with the cerebellum showing the strongest association. The finding that cerebellar activation Granger causes prefrontal activity during behavioral adjustment supports a cerebellar function in cognitive control. Furthermore, multivariate GCA described the “flow of information” across these brain regions. Through connectivity with the thalamus and SMA, the cerebellum mediates error and post-error processing in accord with known anatomical projections. Taken together, these new findings highlight the role of the cerebello-thalamo-cortical pathway in an executive function that has heretofore largely been ascribed to the anterior cingulate-prefrontal cortical circuit. PMID:20656038

  16. Accuracy of velocities from repeated GPS surveys: relative positioning is concerned

    NASA Astrophysics Data System (ADS)

    Duman, Huseyin; Ugur Sanli, D.

    2016-04-01

    Over more than a decade, researchers have been interested in studying the accuracy of GPS positioning solutions. Recently, reporting the accuracy of GPS velocities has been added to this. Researchers studying landslide motion, tectonic motion, uplift, sea level rise, and subsidence still report results from GPS experiments in which repeated GPS measurements from short sessions are used. This has motivated other researchers to study the accuracy of GPS deformation rates/velocities from various repeated GPS surveys. In one such effort, velocity accuracy was derived from repeated static GPS surveys using short observation sessions and the Precise Point Positioning (PPP) mode of GPS software. Velocities from short GPS sessions were compared with velocities from 24 h sessions. The accuracy of the velocities was assessed using statistical hypothesis testing and by quantifying the accuracy of the least squares estimation models. The results reveal that 45-60 % of the horizontal and none of the vertical solutions comply with the results from 24 h solutions. We argue that this finding, obtained with PPP, should also apply to the case in which data from long GPS baselines are processed using fundamental relative point positioning. To test this idea, we chose the two IGS stations ANKR and NICO and derived their velocities with respect to reference stations held fixed on the stable Eurasian plate. The University of Bern's GNSS software BERNESE was used to produce the relative positioning solutions, and the results are compared with those of GIPSY/OASIS II PPP. First impressions indicate that it is worth designing a global experiment to test these ideas in detail.
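
    A hedged sketch of the basic estimation step behind such comparisons: fit a linear trend to a coordinate time series, obtain the velocity and its formal standard error, and compare against a reference velocity. The synthetic series, noise level, and reference value below are assumptions, not ANKR/NICO data or BERNESE/GIPSY output.

```python
import numpy as np

def velocity_with_sigma(t_years, coord_mm):
    """Least-squares linear trend (mm/yr) and its formal standard error."""
    A = np.vstack([np.ones_like(t_years), t_years]).T
    coef, res, *_ = np.linalg.lstsq(A, coord_mm, rcond=None)
    dof = len(t_years) - 2
    sigma2 = res[0] / dof
    cov = sigma2 * np.linalg.inv(A.T @ A)
    return coef[1], np.sqrt(cov[1, 1])

# Synthetic daily north-component series: 12 mm/yr trend + 3 mm noise.
rng = np.random.default_rng(2)
t = np.arange(0, 3 * 365) / 365.25
north = 12.0 * t + rng.normal(0.0, 3.0, t.size)

v, sv = velocity_with_sigma(t, north)
z = (v - 12.0) / sv          # compare against a reference velocity (12 mm/yr)
print(f"velocity = {v:.2f} +/- {sv:.2f} mm/yr, z vs reference = {z:.2f}")
```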

  17. Detecting malingered pain-related disability: classification accuracy of the Portland Digit Recognition Test.

    PubMed

    Greve, Kevin W; Bianchini, Kevin J; Etherton, Joseph L; Ord, Jonathan S; Curtis, Kelly L

    2009-07-01

    This study used criterion groups validation to determine the classification accuracy of the Portland Digit Recognition Test (PDRT) at a range of cutting scores in chronic pain patients undergoing psychological evaluation (n = 318), college student simulators (n = 29), and patients with brain damage (n = 120). PDRT scores decreased and failure rates increased as a function of greater independent evidence of intentional underperformance. There were no differences between patients classified as malingering and college student simulators. The PDRT detected from 33% to nearly 60% of malingering chronic pain patients, depending on the cutoff used. False positive error rates ranged from 3% to 6%. Scores higher than the original cutoffs may be interpreted as indicating negative response bias in patients with pain, increasing the usefulness and facilitating the clinical application of the PDRT in the detection of malingering in pain.
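
    The cutoff-dependent sensitivity and false-positive rates reported above can be illustrated with a small sketch; the score distributions, group sizes, and cutting scores below are synthetic assumptions, not PDRT data.

```python
import numpy as np

def cutoff_table(scores, is_malingering, cutoffs):
    """Sensitivity and false-positive rate for 'score <= cutoff' decisions."""
    rows = []
    for c in cutoffs:
        flagged = scores <= c
        sens = flagged[is_malingering].mean()      # hit rate
        fpr = flagged[~is_malingering].mean()      # false-positive rate
        rows.append((c, sens, fpr))
    return rows

# Synthetic score distributions (lower scores suggest poorer effort).
rng = np.random.default_rng(3)
scores = np.concatenate([rng.normal(38, 6, 80),    # suspected malingering
                         rng.normal(60, 5, 220)])  # genuine effort
groups = np.array([True] * 80 + [False] * 220)

for c, sens, fpr in cutoff_table(scores, groups, cutoffs=[39, 45, 50]):
    print(f"cutoff {c}: sensitivity {sens:.2f}, false-positive rate {fpr:.2f}")
```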

  18. Age-Related Differences in the Accuracy of Web Query-Based Predictions of Influenza-Like Illness

    PubMed Central

    Domnich, Alexander; Panatto, Donatella; Signori, Alessio; Lai, Piero Luigi; Gasparini, Roberto; Amicizia, Daniela

    2015-01-01

    Background Web queries are now widely used for modeling, nowcasting and forecasting influenza-like illness (ILI). However, given that ILI attack rates vary significantly across ages, in terms of both magnitude and timing, little is known about whether the association between ILI morbidity and ILI-related queries is comparable across different age-groups. The present study aimed to investigate features of the association between ILI morbidity and ILI-related query volume from the perspective of age. Methods Since Google Flu Trends is unavailable in Italy, Google Trends was used to identify entry terms that correlated highly with official ILI surveillance data. All-age and age-class-specific modeling was performed by means of linear models with generalized least-square estimation. Hold-out validation was used to quantify prediction accuracy. For purposes of comparison, predictions generated by exponential smoothing were computed. Results Five search terms showed high correlation coefficients of > .6. In comparison with exponential smoothing, the all-age query-based model correctly predicted the peak time and yielded a higher correlation coefficient with observed ILI morbidity (.978 vs. .929). However, query-based prediction of ILI morbidity was associated with a greater error. Age-class-specific query-based models varied significantly in terms of prediction accuracy. In the 0–4 and 25–44-year age-groups, these did well and outperformed exponential smoothing predictions; in the 15–24 and ≥ 65-year age-classes, however, the query-based models were inaccurate and highly overestimated peak height. In all but one age-class, peak timing predicted by the query-based models coincided with observed timing. Conclusions The accuracy of web query-based models in predicting ILI morbidity rates could differ among ages. Greater age-specific detail may be useful in flu query-based studies in order to account for age-specific features of the epidemiology of ILI. PMID:26011418
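
    As a simplified sketch of the comparison described above, the code below fits a query-based linear model on a training window (plain OLS rather than the study's generalized least squares) and compares hold-out mean absolute error against one-step-ahead simple exponential smoothing; all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic weekly ILI incidence and a correlated search-query volume.
weeks = 104
ili = 5 + 4 * np.sin(2 * np.pi * np.arange(weeks) / 52) ** 4
query = 0.8 * ili + rng.normal(0, 0.4, weeks)
ili = ili + rng.normal(0, 0.3, weeks)

train, test = slice(0, 78), slice(78, weeks)

# Query-based model: ILI ~ a + b * query (OLS for simplicity).
A = np.vstack([np.ones(78), query[train]]).T
a, b = np.linalg.lstsq(A, ili[train], rcond=None)[0]
pred_query = a + b * query[test]

# Baseline: simple exponential smoothing, one-step-ahead forecasts.
alpha, level = 0.4, ili[0]
smooth = []
for y in ili[:-1]:
    level = alpha * y + (1 - alpha) * level
    smooth.append(level)
pred_smooth = np.array(smooth)[77:]           # forecasts for the test weeks

mae = lambda p: np.mean(np.abs(p - ili[test]))
print("MAE query model:", round(mae(pred_query), 3),
      "| exponential smoothing:", round(mae(pred_smooth), 3))
```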

  19. Dysfunctional error-related processing in incarcerated youth with elevated psychopathic traits

    PubMed Central

    Maurer, J. Michael; Steele, Vaughn R.; Cope, Lora M.; Vincent, Gina M.; Stephen, Julia M.; Calhoun, Vince D.; Kiehl, Kent A.

    2016-01-01

    Adult psychopathic offenders show an increased propensity towards violence, impulsivity, and recidivism. A subsample of youth with elevated psychopathic traits represent a particularly severe subgroup characterized by extreme behavioral problems and comparable neurocognitive deficits as their adult counterparts, including perseveration deficits. Here, we investigate response-locked event-related potential (ERP) components (the error-related negativity [ERN/Ne] related to early error-monitoring processing and the error-related positivity [Pe] involved in later error-related processing) in a sample of incarcerated juvenile male offenders (n = 100) who performed a response inhibition Go/NoGo task. Psychopathic traits were assessed using the Hare Psychopathy Checklist: Youth Version (PCL:YV). The ERN/Ne and Pe were analyzed with classic windowed ERP components and principal component analysis (PCA). Using linear regression analyses, PCL:YV scores were unrelated to the ERN/Ne, but were negatively related to Pe mean amplitude. Specifically, the PCL:YV Facet 4 subscale reflecting antisocial traits emerged as a significant predictor of reduced amplitude of a subcomponent underlying the Pe identified with PCA. This is the first evidence to suggest a negative relationship between adolescent psychopathy scores and Pe mean amplitude. PMID:26930170

  20. Dysfunctional error-related processing in incarcerated youth with elevated psychopathic traits.

    PubMed

    Maurer, J Michael; Steele, Vaughn R; Cope, Lora M; Vincent, Gina M; Stephen, Julia M; Calhoun, Vince D; Kiehl, Kent A

    2016-06-01

    Adult psychopathic offenders show an increased propensity towards violence, impulsivity, and recidivism. A subsample of youth with elevated psychopathic traits represent a particularly severe subgroup characterized by extreme behavioral problems and comparable neurocognitive deficits as their adult counterparts, including perseveration deficits. Here, we investigate response-locked event-related potential (ERP) components (the error-related negativity [ERN/Ne] related to early error-monitoring processing and the error-related positivity [Pe] involved in later error-related processing) in a sample of incarcerated juvenile male offenders (n=100) who performed a response inhibition Go/NoGo task. Psychopathic traits were assessed using the Hare Psychopathy Checklist: Youth Version (PCL:YV). The ERN/Ne and Pe were analyzed with classic windowed ERP components and principal component analysis (PCA). Using linear regression analyses, PCL:YV scores were unrelated to the ERN/Ne, but were negatively related to Pe mean amplitude. Specifically, the PCL:YV Facet 4 subscale reflecting antisocial traits emerged as a significant predictor of reduced amplitude of a subcomponent underlying the Pe identified with PCA. This is the first evidence to suggest a negative relationship between adolescent psychopathy scores and Pe mean amplitude. PMID:26930170

  1. Impact of Uncertainties and Errors in Converting NWS Radiosonde Hygristor Resistances to Relative Humidity

    NASA Technical Reports Server (NTRS)

    Westphal, Douglas L.; Russell, Philip B. (Technical Monitor)

    1994-01-01

    A set of 2,600 6-second, National Weather Service soundings from NASA's FIRE-II Cirrus field experiment is used to illustrate previously known errors and new potential errors in the VIZ and SDD brand relative humidity (RH) sensors and the MicroART processing software. The entire spectrum of RH is potentially affected by at least one of these errors. (These errors occur before the data are converted to dew point temperature.) Corrections to the errors are discussed. Examples are given of the effect that these errors and biases may have on numerical weather prediction and radiative transfer. The figure shows the OLR calculated for the corrected and uncorrected soundings using an 18-band radiative transfer code. The OLR differences are sufficiently large to warrant consideration when validating line-by-line radiation calculations that use radiosonde data to specify the atmospheric state, or when validating satellite retrievals. In addition, a comparison of observations of RH during FIRE-II derived from GOES satellite, Raman lidar, MAPS analyses, NCAR CLASS sondes, and the NWS sondes reveals disagreement in the RH distribution and underlines our lack of an understanding of the climatology of water vapor.

  2. Impact of Uncertainties and Errors in Converting NWS Radiosonde Hygristor Resistances to Relative Humidity

    NASA Technical Reports Server (NTRS)

    Westphal, Douglas L.; Russell, Philip (Technical Monitor)

    1994-01-01

    A set of 2,600 6-second, National Weather Service soundings from NASA's FIRE-II Cirrus field experiment is used to illustrate previously known errors and new potential errors in the VIZ and SDD brand relative humidity (RH) sensors and the MicroART processing software. The entire spectrum of RH is potentially affected by at least one of these errors. (These errors occur before the data are converted to dew point temperature.) Corrections to the errors are discussed. Examples are given of the effect that these errors and biases may have on numerical weather prediction and radiative transfer. The figure shows the OLR calculated for the corrected and uncorrected soundings using an 18-band radiative transfer code. The OLR differences are sufficiently large to warrant consideration when validating line-by-line radiation calculations that use radiosonde data to specify the atmospheric state, or when validating satellite retrievals. In addition, a comparison of observations of RH during FIRE-II derived from GOES satellite, Raman lidar, MAPS analyses, NCAR CLASS sondes, and the NWS sondes reveals disagreement in the RH distribution and underlines our lack of an understanding of the climatology of water vapor.

  3. Errors in reward prediction are reflected in the event-related brain potential.

    PubMed

    Holroyd, Clay B; Nieuwenhuis, Sander; Yeung, Nick; Cohen, Jonathan D

    2003-12-19

    The error-related negativity (ERN) is a negative deflection in the event-related brain potential associated with error processing. A recent theory holds that the ERN is elicited by the impact of a reward prediction error signal carried by the mesencephalic dopamine system on anterior cingulate cortex. The theory predicts that larger ERNs should be elicited by unexpected unfavorable outcomes than by expected unfavorable outcomes. We tested the theory in an experiment in which the frequency of occurrence of reward was varied by condition, reasoning that the system that produces the ERN would come to expect non-reward when rewards were infrequent. Consistent with the theory, we found that larger ERNs were elicited by unexpected absences of reward.
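
    The reward prediction error invoked by this theory is conventionally formalized as a temporal-difference-style error, delta = r - V. The sketch below uses an illustrative running-average value estimate to show why withholding reward produces a larger (more negative) prediction error when rewards have been frequent, which parallels the larger ERNs for unexpected non-reward; it is a toy model, not the authors' computational account.

```python
import numpy as np

def nonreward_prediction_error(p_reward, n_trials=2000, alpha=0.1, seed=0):
    """Prediction error elicited by withholding reward after learning,
    under a running-average value estimate V (delta = r - V)."""
    rng = np.random.default_rng(seed)
    v = 0.0
    for _ in range(n_trials):
        r = float(rng.random() < p_reward)
        v += alpha * (r - v)          # V converges toward P(reward)
    return 0.0 - v                    # error signal when reward is withheld

for p in (0.2, 0.5, 0.8):
    print(f"P(reward) = {p}: prediction error on non-reward = "
          f"{nonreward_prediction_error(p):+.2f}")
```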

  4. Investigation of technology needs for avoiding helicopter pilot error related accidents

    NASA Technical Reports Server (NTRS)

    Chais, R. I.; Simpson, W. E.

    1985-01-01

    Pilot error, which is cited as a cause or related factor in most rotorcraft accidents, was examined. Pilot-error-related helicopter accidents were investigated to identify areas in which new technology could reduce or eliminate the underlying causes of these human errors. The aircraft accident data base at the U.S. Army Safety Center served as the source of data on helicopter accidents. A randomly selected sample of 110 aircraft records was analyzed on a case-by-case basis to assess the nature of the problems that need to be resolved and the applicable technology implications. Six technology areas in which there appears to be a need for new or increased emphasis are identified.

  5. Experimental violation and reformulation of the Heisenberg's error-disturbance uncertainty relation

    PubMed Central

    Baek, So-Young; Kaneda, Fumihiro; Ozawa, Masanao; Edamatsu, Keiichi

    2013-01-01

    The uncertainty principle formulated by Heisenberg in 1927 describes a trade-off between the error of a measurement of one observable and the disturbance caused to another complementary observable such that their product should be no less than the limit set by Planck's constant. However, in 1988 Ozawa presented a model of position measurement that breaks Heisenberg's relation, and in 2003 he derived an alternative error-disturbance relation that was proven to be universally valid. Here, we report an experimental test of Ozawa's relation for a single-photon polarization qubit, exploiting a more general class of quantum measurements than the class of projective measurements. The test is carried out with linear optical devices and realizes an indirect measurement model that breaks Heisenberg's relation throughout the range of our experimental parameter and yet validates Ozawa's relation. PMID:23860715
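
    For reference, the two inequalities at issue can be written in their standard textbook forms (with ε the measurement error, η the disturbance, and σ the standard deviation); these are general statements, not expressions transcribed from the paper.

```latex
% Heisenberg-type error-disturbance relation (violated in the experiment):
\epsilon(A)\,\eta(B) \;\ge\; \tfrac{1}{2}\,\bigl|\langle[A,B]\rangle\bigr|

% Ozawa's universally valid relation (validated in the experiment):
\epsilon(A)\,\eta(B) + \epsilon(A)\,\sigma(B) + \sigma(A)\,\eta(B)
  \;\ge\; \tfrac{1}{2}\,\bigl|\langle[A,B]\rangle\bigr|
```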

  6. Relative and Absolute Error Control in a Finite-Difference Method Solution of Poisson's Equation

    ERIC Educational Resources Information Center

    Prentice, J. S. C.

    2012-01-01

    An algorithm for error control (absolute and relative) in the five-point finite-difference method applied to Poisson's equation is described. The algorithm is based on discretization of the domain of the problem by means of three rectilinear grids, each of different resolution. We discuss some hardware limitations associated with the algorithm,…
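
    A minimal sketch of the five-point finite-difference setting the abstract refers to: a Jacobi solve of Poisson's equation on the unit square, with a two-grid comparison standing in (loosely) for the algorithm's three-grid error control. The test problem, grid sizes, and iteration counts are illustrative assumptions.

```python
import numpy as np

def solve_poisson_dirichlet(f, n, iters=5000):
    """Five-point Jacobi solve of -(u_xx + u_yy) = f on the unit square
    with zero Dirichlet boundaries, on an (n+1) x (n+1) grid."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    X, Y = np.meshgrid(x, x, indexing="ij")
    rhs = f(X, Y)
    u = np.zeros((n + 1, n + 1))
    for _ in range(iters):
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                u[1:-1, :-2] + u[1:-1, 2:] +
                                h * h * rhs[1:-1, 1:-1])
    return u

f = lambda x, y: 2 * np.pi**2 * np.sin(np.pi * x) * np.sin(np.pi * y)
coarse = solve_poisson_dirichlet(f, 16)
fine = solve_poisson_dirichlet(f, 32, iters=20000)

# Compare the two resolutions at the shared (coarse) nodes as a crude
# error indicator, a stand-in for the algorithm's three-grid control.
diff = np.max(np.abs(coarse - fine[::2, ::2]))
print("max coarse-vs-fine difference:", diff)
```

    For this second-order discretization the coarse-vs-fine difference shrinks roughly with the square of the grid spacing, which is the kind of behavior a multi-grid error-control scheme can exploit to bound both absolute and relative error.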

  7. Tracing Error-Related Knowledge in Interview Data: Negative Knowledge in Elder Care Nursing

    ERIC Educational Resources Information Center

    Gartmeier, Martin; Gruber, Hans; Heid, Helmut

    2010-01-01

    This paper empirically investigates elder care nurses' negative knowledge. This form of experiential knowledge is defined as the outcome of error-related learning processes, focused on how something is not, on what not to do in certain situations or on deficits in one's knowledge or skills. Besides this definition, we presume the existence of…

  8. EEG-based decoding of error-related brain activity in a real-world driving task

    NASA Astrophysics Data System (ADS)

    Zhang, H.; Chavarriaga, R.; Khaliliardali, Z.; Gheorghe, L.; Iturrate, I.; Millán, J. d. R.

    2015-12-01

    Objectives. Recent studies have started to explore the implementation of brain-computer interfaces (BCI) as part of driving assistant systems. The current study presents an EEG-based BCI that decodes error-related brain activity. Such information can be used, e.g., to predict the driver's intended turning direction before reaching road intersections. Approach. We conducted experiments in a car simulator (N = 22) and a real car (N = 8). While the subject was driving, a directional cue was shown before reaching an intersection, and we classified the presence or absence of error-related potentials from EEG to infer whether the cued direction coincided with the subject's intention. In this protocol, the directional cue can correspond to an estimation of the driving direction provided by a driving assistance system. We analyzed ERPs elicited during normal driving and evaluated the classification performance in both offline and online tests. Results. An average classification accuracy of 0.698 ± 0.065 was obtained in offline experiments in the car simulator, while tests in the real car yielded a performance of 0.682 ± 0.059. The results were significantly higher than chance level for all cases. Online experiments led to equivalent performances in both simulated and real car driving experiments. These results support the feasibility of decoding these signals to help estimate whether the driver's intention coincides with the advice provided by the driving assistant in a real car. Significance. The study demonstrates a BCI system in real-world driving, extending the work from previous simulated studies. As far as we know, this is the first online study to decode a driver's error-related brain activity in a real car. Given the encouraging results, the paradigm could be further improved by using more sophisticated machine learning approaches and possibly be combined with applications in intelligent vehicles.

  9. Spatial reconstruction by patients with hippocampal damage is dominated by relational memory errors

    PubMed Central

    Watson, Patrick D.; Voss, Joel L.; Warren, David E.; Tranel, Daniel; Cohen, Neal J.

    2013-01-01

    Hippocampal damage causes profound yet circumscribed memory impairment across diverse stimulus types and testing formats. Here, within a single test format involving a single class of stimuli, we identified different performance errors to better characterize the specifics of the underlying deficit. The task involved study and reconstruction of object arrays across brief retention intervals. The most striking feature of the performance of patients with hippocampal damage was that they tended to reverse the relative positions of item pairs within arrays of any size, effectively "swapping" pairs of objects. These "swap errors" were the primary error type in amnesia, almost never occurred in healthy comparison participants, and actually contributed to poor performance on more traditional metrics (such as distance between studied and reconstructed location). Patients made swap errors even in trials involving only a single pair of objects. The selectivity and severity of this particular deficit create serious challenges for theories of memory and the hippocampus. PMID:23418096

  10. Error-related ERP components and individual differences in punishment and reward sensitivity.

    PubMed

    Boksem, Maarten A S; Tops, Mattie; Wester, Anne E; Meijman, Theo F; Lorist, Monicque M

    2006-07-26

    Although the focus of the discussion regarding the significance of the error-related negativity (ERN/Ne) has been on the cognitive factors reflected in this component, there is now a growing body of research that describes influences of motivation, affective style and other factors of personality on ERN/Ne amplitude. The present study was conducted to further evaluate the relationship between affective style, error-related ERP components and their neural basis. Therefore, we had our subjects fill out the Behavioral Activation System/Behavioral Inhibition System (BIS/BAS) scales, which are based on Gray's (1987, 1989) biopsychological theory of personality. We found that subjects scoring high on the BIS scale displayed larger ERN/Ne amplitudes, while subjects scoring high on the BAS scale displayed larger error positivity (Pe) amplitudes. No correlations were found between BIS and Pe amplitude or between BAS and ERN/Ne amplitude. Results are discussed in terms of individual differences in reward and punishment sensitivity that are reflected in error-related ERP components.

  11. Error-related ERP components and individual differences in punishment and reward sensitivity.

    PubMed

    Boksem, Maarten A S; Tops, Mattie; Wester, Anne E; Meijman, Theo F; Lorist, Monicque M

    2006-07-26

    Although the focus of the discussion regarding the significance of the error-related negativity (ERN/Ne) has been on the cognitive factors reflected in this component, there is now a growing body of research that describes influences of motivation, affective style and other factors of personality on ERN/Ne amplitude. The present study was conducted to further evaluate the relationship between affective style, error-related ERP components and their neural basis. Therefore, we had our subjects fill out the Behavioral Activation System/Behavioral Inhibition System (BIS/BAS) scales, which are based on Gray's (1987, 1989) biopsychological theory of personality. We found that subjects scoring high on the BIS scale displayed larger ERN/Ne amplitudes, while subjects scoring high on the BAS scale displayed larger error positivity (Pe) amplitudes. No correlations were found between BIS and Pe amplitude or between BAS and ERN/Ne amplitude. Results are discussed in terms of individual differences in reward and punishment sensitivity that are reflected in error-related ERP components. PMID:16784728

  12. Error-Related Brain Activity in Extraverts: Evidence for Altered Response Monitoring in Social Context

    PubMed Central

    Fishman, Inna; Ng, Rowena

    2013-01-01

    While the personality trait of extraversion has been linked to enhanced reward sensitivity and its putative neural correlates, little is known about whether extraverts’ neural circuits are particularly sensitive to social rewards, given their preference for social engagement and social interactions. Using event-related potentials (ERPs), this study examined the relationship between the variation on the extraversion spectrum and a feedback-related ERP component (the error-related negativity or ERN) known to be sensitive to the value placed on errors and reward. Participants completed a forced-choice task, in which either rewarding or punitive feedback regarding their performance was provided, through either social (facial expressions) or non-social (verbal written) mode. The ERNs elicited by error trials in the social – but not in non-social – blocks were found to be associated with the extent of one’s extraversion. However, the directionality of the effect was in contrast with the original prediction: namely, extraverts exhibited smaller ERNs than introverts during social blocks, whereas all participants produced similar ERNs in the non-social, verbal feedback condition. This finding suggests that extraverts exhibit diminished engagement in response monitoring – or find errors to be less salient – in the context of social feedback, perhaps because they find social contexts more predictable and thus more pleasant and less anxiety provoking. PMID:23454520

  13. The NIMH Research Domain Criteria initiative and error-related brain activity.

    PubMed

    Hanna, Gregory L; Gehring, William J

    2016-03-01

    Research on the neural response to errors has an important role in the Research Domain Criteria (RDoC) project, since it is likely to link psychopathology to the dysfunction of neural systems underlying basic behavioral functions, with the error-related negativity (ERN) appearing as a unit of measurement in three RDoC domains. A recent report builds on previous research by examining the ERN as a measure of the sustained threat construct and providing evidence that the ERN may reflect sensitivity more specifically to endogenous threat. Data from 515 adolescent females indicate that the ERN was enlarged primarily in older adolescents with self-reported checking behaviors, although it was blunted in adolescents with depressive symptoms regardless of age. Potential future studies for replicating and extending the research on the ERN and obsessive-compulsive (OC) behaviors are discussed, including studies that more fully characterize OC symptom dimensions, studies that integrate other measures of error-related brain activity and use computational modeling, studies that combine longitudinal, family, and molecular genetic measures, and interventional studies that specifically modulate error-related brain activity in individuals with OC behaviors. PMID:26877130

  14. Increasing the saliency of behavior-consequence relations for children with autism who exhibit persistent errors.

    PubMed

    Fisher, Wayne W; Pawich, Tamara L; Dickes, Nitasha; Paden, Amber R; Toussaint, Karen

    2014-01-01

    Some children with autism spectrum disorders (ASD) display persistent errors that are not responsive to commonly used prompting or error-correction strategies; one possible reason for this is that the behavior-consequence relations are not readily discriminable (Davison & Nevin, 1999). In this study, we increased the discriminability of the behavior-consequence relations in conditional-discrimination acquisition tasks for 3 children with ASD using schedule manipulations in concert with a unique visual display designed to increase the saliency of the differences between consequences in effect for correct responding and for errors. A multiple baseline design across participants was used to show that correct responding increased for all participants, and, after 1 or more exposures to the intervention, correct responding persisted to varying degrees across participants when the differential reinforcement baseline was reintroduced to assess maintenance. These findings suggest that increasing the saliency of behavior-consequence relations may help to increase correct responding in children with ASD who exhibit persistent errors.

  15. Laparoscopic cholecystectomy: device-related errors revealed through a national database.

    PubMed

    Panesar, Sukhmeet S; Salvilla, Sarah A; Patel, Bhavesh; Donaldson, Sir Liam

    2011-09-01

    Laparoscopic techniques represent a key milestone in the development of modern surgery, offering a step change in quality of care, patient satisfaction and efficiency in use of health service resources. Laparoscopy is most widely used for gall bladder surgery. As would be expected with the introduction of any new technology, the early phase of development was accompanied by complications in its use. Arguably some of these should have been anticipated, but nevertheless standards and training programs were subsequently put in place to secure a more consistent standard of care across the UK. Now that this early learning curve has largely been negotiated, we wanted to examine the nature of the errors associated with laparoscopic gall bladder surgery, particularly in relation to equipment. We used data from the largest error-reporting system in the world to examine the problem of equipment-related incidents amongst patients who had laparoscopic cholecystectomy. Over the 6-year period 2004-2010, the number of such reports increased 15-fold, whilst use of the procedure itself increased 1.3-fold. The majority of the increase was in device-related errors. User-related errors constituted a smaller proportion of errors. Whilst most surgeons appear to carry out laparoscopic surgery with a low level of harm to their patients, problems with their equipment remain a risk for many procedures. In some ways, this is an easier problem to address than one associated with competency. A risk associated with faulty, substandard or misused equipment is one that should be minimized in a 21st Century surgical service. PMID:22026620

  16. Rapid mapping of volumetric errors

    SciTech Connect

    Krulewich, D.; Hale, L.; Yordy, D.

    1995-09-13

    This paper describes a relatively inexpensive, fast, and easy to execute approach to mapping the volumetric errors of a machine tool, coordinate measuring machine, or robot. An error map is used to characterize a machine or to improve its accuracy by compensating for the systematic errors. The method consists of three steps: (1) modeling the relationship between the volumetric error and the current state of the machine; (2) acquiring error data based on length measurements throughout the work volume; and (3) optimizing the model to the particular machine.
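
    The three-step procedure maps naturally onto an ordinary least-squares fit. The Python sketch below is purely illustrative and not drawn from the report: the basis terms, coefficient values, work-volume size and noise level are all assumed, and it simply shows an error model being fitted to simulated length-measurement data and then used for compensation.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical data: commanded XYZ positions in a 500 mm work volume and the
        # measured length error (mm) of a calibrated artefact at each pose.
        positions = rng.uniform(0.0, 500.0, size=(200, 3))
        x, y, z = positions.T
        true_coeffs = np.array([2e-5, -1e-5, 3e-5, 1e-8])      # assumed scale/squareness terms
        length_error = (true_coeffs[0] * x + true_coeffs[1] * y + true_coeffs[2] * z
                        + true_coeffs[3] * x * y + rng.normal(0.0, 1e-3, size=len(x)))

        # Step 1: model the volumetric error as a linear combination of basis terms.
        basis = np.column_stack([x, y, z, x * y])

        # Step 3: optimize the model to this particular machine (least squares).
        coeffs, *_ = np.linalg.lstsq(basis, length_error, rcond=None)
        residual = length_error - basis @ coeffs
        print("fitted coefficients:", coeffs)
        print("RMS residual after compensation (mm): %.4f" % np.sqrt(np.mean(residual ** 2)))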

  17. Mediofrontal event-related potentials in response to positive, negative and unsigned prediction errors.

    PubMed

    Sambrook, Thomas D; Goslin, Jeremy

    2014-08-01

    Reinforcement learning models make use of reward prediction errors (RPEs), the difference between an expected and obtained reward. There is evidence that the brain computes RPEs, but an outstanding question is whether positive RPEs ("better than expected") and negative RPEs ("worse than expected") are represented in a single integrated system. An electrophysiological component, the feedback-related negativity, has been claimed to encode an RPE but its relative sensitivity to the utility of positive and negative RPEs remains unclear. This study explored the question by varying the utility of positive and negative RPEs in a design that controlled for other closely related properties of feedback and could distinguish utility from salience. It revealed a mediofrontal sensitivity to utility, for positive RPEs at 275-310 ms and for negative RPEs at 310-390 ms. These effects were preceded and succeeded by a response consistent with an unsigned prediction error, or "salience" coding.

  18. Distinguishing the influence of task difficulty on error-related ERPs using surface Laplacian transformation.

    PubMed

    Van der Borght, Liesbet; Houtman, Femke; Burle, Boris; Notebaert, Wim

    2016-03-01

    Electrophysiologically, errors are characterized by a negative deflection, the error-related negativity (ERN), which is followed by the error positivity (Pe). However, it has been suggested that this latter component consists of two subcomponents, with an early frontocentral Pe reflecting a continuation of the ERN, and a centro-parietal Pe reflecting error awareness. Using Laplacian transformed averages, a correct-related negativity (CRN; similar to the ERN) can be found on correct trials. As this technique allows for the decomposition of the recorded scalp potentials, resulting in a better dissociation of the underlying brain activities, Laplacian transformation was used in the present study to differentiate between the ERN/CRN and the two Pe components. Additionally, task difficulty was manipulated. Our results show a clearly distinguishable early and late Pe. Both the ERN/CRN and the early Pe varied with task difficulty, showing decreased ERN and early Pe amplitudes in the difficult condition. However, the late Pe was not influenced by our difficulty manipulation. This suggests that the early and the late Pe reflect qualitatively different processes.

  19. Error processing and response inhibition in excessive computer game players: an event-related potential study.

    PubMed

    Littel, Marianne; van den Berg, Ivo; Luijten, Maartje; van Rooij, Antonius J; Keemink, Lianne; Franken, Ingmar H A

    2012-09-01

    Excessive computer gaming has recently been proposed as a possible pathological illness. However, research on this topic is still in its infancy and underlying neurobiological mechanisms have not yet been identified. The determination of underlying mechanisms of excessive gaming might be useful for the identification of those at risk, a better understanding of the behavior and the development of interventions. Excessive gaming has often been compared with pathological gambling and substance use disorder. Both disorders are characterized by high levels of impulsivity, which incorporates deficits in error processing and response inhibition. The present study aimed to investigate error processing and response inhibition in excessive gamers and controls using a Go/NoGo paradigm combined with event-related potential recordings. Results indicated that excessive gamers show reduced error-related negativity amplitudes in response to incorrect trials relative to correct trials, implying poor error processing in this population. Furthermore, excessive gamers display higher levels of self-reported impulsivity as well as more impulsive responding as reflected by less behavioral inhibition on the Go/NoGo task. The present study indicates that excessive gaming partly parallels impulse control and substance use disorders regarding impulsivity measured on the self-reported, behavioral and electrophysiological level. Although the present study does not allow drawing firm conclusions on causality, it might be that trait impulsivity, poor error processing and diminished behavioral response inhibition underlie the excessive gaming patterns observed in certain individuals. They might be less sensitive to negative consequences of gaming and therefore continue their behavior despite adverse consequences.

  20. Automated measurement of centering errors and relative surface distances for the optimized assembly of micro-optics

    NASA Astrophysics Data System (ADS)

    Langehanenberg, Patrik; Dumitrescu, Eugen; Heinisch, Josef; Krey, Stefan; Ruprecht, Aiko K.

    2011-03-01

    For any kind of compound optical system, the precise geometric alignment of every single element according to the optical design is essential to obtain the desired imaging properties. In this contribution we present a measurement system for the determination of the complete set of geometric alignment parameters in assembled systems. The deviation of each center of curvature with respect to a reference axis is measured with an autocollimator system. These data are further processed in order to provide the shift and tilt of an individual lens or group of lenses with respect to a defined reference axis. Previously it was shown that such an instrument can measure the centering errors of up to 40 surfaces within a system under test with accuracies in the range of an arc second. In addition, the relative distances of the optical surfaces (center thicknesses of lens elements, air gaps in between) are optically determined in the same measurement system by means of low-coherence interferometry. Subsequently, the acquired results can be applied for the compensation of the detected geometric alignment errors before the assembly is finally bonded (e.g., glued). The presented applications mainly include measurements of miniaturized lens systems like mobile phone optics. However, any type of objective lens from endoscope imaging systems up to very complex objective lenses used in microlithography can be analyzed with the presented measurement system.

  1. Anatomy of an error: a bidirectional state model of task engagement/disengagement and attention-related errors.

    PubMed

    Allan Cheyne, J; Solman, Grayden J F; Carriere, Jonathan S A; Smilek, Daniel

    2009-04-01

    We present arguments and evidence for a three-state attentional model of task engagement/disengagement. The model postulates three states of mind-wandering: occurrent task inattention, generic task inattention, and response disengagement. We hypothesize that all three states are both causes and consequences of task performance outcomes and apply across a variety of experimental and real-world tasks. We apply this model to the analysis of a widely used GO/NOGO task, the Sustained Attention to Response Task (SART). We identify three performance characteristics of the SART that map onto the three states of the model: RT variability, anticipations, and omissions. Predictions based on the model are tested, and largely corroborated, via regression and lag-sequential analyses of both successful and unsuccessful withholding on NOGO trials as well as self-reported mind-wandering and everyday cognitive errors. The results revealed theoretically consistent temporal associations among the state indicators and between these and SART errors as well as with self-report measures. Lag analysis was consistent with the hypotheses that temporal transitions among states are often extremely abrupt and that the association between mind-wandering and performance is bidirectional. The bidirectional effects suggest that errors constitute important occasions for reactive mind-wandering. The model also enables concrete phenomenological, behavioral, and physiological predictions for future research.

  2. Neurophysiology of Reward-Guided Behavior: Correlates Related to Predictions, Value, Motivation, Errors, Attention, and Action.

    PubMed

    Bissonette, Gregory B; Roesch, Matthew R

    2016-01-01

    Many brain areas are activated by the possibility and receipt of reward. Are all of these brain areas reporting the same information about reward? Or are these signals related to other functions that accompany reward-guided learning and decision-making? Through carefully controlled behavioral studies, it has been shown that reward-related activity can represent reward expectations related to future outcomes, errors in those expectations, motivation, and signals related to goal- and habit-driven behaviors. These dissociations have been accomplished by manipulating the predictability of positively and negatively valued events. Here, we review single neuron recordings in behaving animals that have addressed this issue. We describe data showing that several brain areas, including orbitofrontal cortex, anterior cingulate, and basolateral amygdala signal reward prediction. In addition, anterior cingulate, basolateral amygdala, and dopamine neurons also signal errors in reward prediction, but in different ways. For these areas, we will describe how unexpected manipulations of positive and negative value can dissociate signed from unsigned reward prediction errors. All of these signals feed into striatum to modify signals that motivate behavior in ventral striatum and guide responding via associative encoding in dorsolateral striatum. PMID:26276036

  4. Field Independence/Dependence, Hemispheric Specialization, and Attitude in Relation to Pronunciation Accuracy in Spanish as a Foreign Language.

    ERIC Educational Resources Information Center

    Elliott, A. Raymond

    1995-01-01

    Sixty-six college students enrolled in an intermediate Spanish course were measured on 12 variables believed to be related to pronunciation accuracy. Variables that related most to pronunciation accuracy included individual concern for pronunciation, subject's degree of field independence, and subject's degree of right hemispheric specialization…

  5. Experimental Test of Residual Error-Disturbance Uncertainty Relations for Mixed Spin-1/2 States

    NASA Astrophysics Data System (ADS)

    Demirel, Bülent; Sponar, Stephan; Sulyok, Georg; Ozawa, Masanao; Hasegawa, Yuji

    2016-09-01

    The indeterminacy inherent in quantum measurements is an outstanding characteristic of quantum theory, which manifests itself typically in the uncertainty principle. In the last decade, several universally valid forms of error-disturbance uncertainty relations were derived for completely general quantum measurements for arbitrary states. Subsequently, Branciard established a form that is optimal for spin measurements for some pure states. However, the bound in his inequality is not stringent for mixed states. One of the present authors recently derived a new bound that is tight in the corresponding mixed-state case. Here, a neutron-optical experiment is carried out to investigate this new relation: it is tested whether the error and disturbance of quantum measurements disappear or persist when the measured ensemble is mixed. The attainability of the new bound is experimentally observed, falsifying the tightness of Branciard's bound for mixed spin states.

  6. The Relative Effectiveness of Signaling Systems: Relying on External Items Reduces Signaling Accuracy while Leks Increase Accuracy

    PubMed Central

    Leighton, Gavin M.

    2014-01-01

    Multiple evolutionary phenomena require individual animals to assess conspecifics based on behaviors, morphology, or both. Both behavior and morphology can provide information about individuals and are often used as signals to convey information about quality, motivation, or energetic output. In certain cases, conspecific receivers of this information must rank these signaling individuals based on specific traits. The efficacy of information transfer associated within a signal is likely related to the type of trait used to signal, though few studies have investigated the relative effectiveness of contrasting signaling systems. I present a set of models that represent a large portion of signaling systems and compare them in terms of the ability of receivers to rank signalers accurately. Receivers more accurately assess signalers if the signalers use traits that do not require non-food resources; similarly, receivers more accurately ranked signalers if all the signalers could be observed simultaneously, similar to leks. Surprisingly, I also found that receivers are only slightly better at ranking signaler effort if the effort results in a cumulative structure. This series of findings suggests that receivers may attend to specific traits because the traits provide more information relative to others; and similarly, these results may explain the preponderance of morphological and behavioral display signals. PMID:24626221

  7. Writing errors in ALS related to loss of neuronal integrity in the anterior cingulate gyrus.

    PubMed

    Yabe, Ichiro; Tsuji-Akimoto, Sachiko; Shiga, Tohru; Hamada, Shinsuke; Hirata, Kenji; Otsuki, Mika; Kuge, Yuji; Tamaki, Nagara; Sasaki, Hidenao

    2012-04-15

    Amyotrophic lateral sclerosis (ALS) is a neurodegenerative disorder characterized by loss of motor neuron and various cognitive deficits including writing errors. (11)C-flumazenil (FMZ), the positron emission tomography (PET) GABA(A) receptor ligand, is a marker of cortical dysfunction. The objective of this study was to investigate the relationship between cognitive deficits and loss of neuronal integrity in ALS patients using (11)C-FMZ PET. Ten patients with ALS underwent both neuropsychological tests and (11)C-FMZ-PET. The binding potential (BP) of FMZ was calculated from (11)C-FMZ PET images. There were no significant correlations between the BP and most test scores except for the writing error index (WEI), which was measured by the modified Western Aphasia Battery - VB (WAB-IVB) test. The severity of writing error was associated with loss of neuronal integrity in the bilateral anterior cingulate gyrus with mild right predominance (n=9; x=4 mm, y=36 mm, z=4 mm, Z=5.1). The results showed that writing errors in our patients with ALS were related to dysfunction in the anterior cingulate gyrus.

  8. Rectification of General Relativity, Experimental Verifications, and Errors of the Wheeler School

    NASA Astrophysics Data System (ADS)

    Lo, C. Y.

    2013-09-01

    General relativity is not yet consistent. Pauli has misinterpreted Einstein's 1916 equivalence principle that can derive a valid field equation. The Wheeler School has distorted Einstein's 1916 principle to be his 1911 assumption of equivalence, and created new errors. Moreover, errors on dynamic solutions have allowed the implicit assumption of a unique coupling sign that violates the principle of causality. This leads to the space-time singularity theorems of Hawking and Penrose who "refute" applications for microscopic phenomena, and obstruct efforts to obtain a valid equation for the dynamic case. These errors also explain the mistakes in the press release of the 1993 Nobel Committee, who was unaware of the non-existence of dynamic solutions. To illustrate the damages to education, the MIT Open Course Phys. 8.033 is chosen. Rectification of errors confirms that E = mc2 is only conditionally valid, and leads to the discovery of the charge-mass interaction that is experimentally confirmed and subsequently the unification of gravitation and electromagnetism. The charge-mass interaction together with the unification predicts the weight reduction (instead of increment) of charged capacitors and heated metals, and helps to explain NASA's Pioneer anomaly and potentially other anomalies as well.

  9. Errare machinale est: the use of error-related potentials in brain-machine interfaces

    PubMed Central

    Chavarriaga, Ricardo; Sobolewski, Aleksander; Millán, José del R.

    2014-01-01

    The ability to recognize errors is crucial for efficient behavior. Numerous studies have identified electrophysiological correlates of error recognition in the human brain (error-related potentials, ErrPs). Consequently, it has been proposed to use these signals to improve human-computer interaction (HCI) or brain-machine interfacing (BMI). Here, we present a review of over a decade of developments toward this goal. This body of work provides consistent evidence that ErrPs can be successfully detected on a single-trial basis, and that they can be effectively used in both HCI and BMI applications. We first describe the ErrP phenomenon and follow up with an analysis of different strategies to increase the robustness of a system by incorporating single-trial ErrP recognition, either by correcting the machine's actions or by providing means for its error-based adaptation. These approaches can be applied both when the user employs traditional HCI input devices or in combination with another BMI channel. Finally, we discuss the current challenges that have to be overcome in order to fully integrate ErrPs into practical applications. This includes, in particular, the characterization of such signals during real(istic) applications, as well as the possibility of extracting richer information from them, going beyond the time-locked decoding that dominates current approaches. PMID:25100937

  10. Software platform for managing the classification of error- related potentials of observers

    NASA Astrophysics Data System (ADS)

    Asvestas, P.; Ventouras, E.-C.; Kostopoulos, S.; Sidiropoulos, K.; Korfiatis, V.; Korda, A.; Uzunolglu, A.; Karanasiou, I.; Kalatzis, I.; Matsopoulos, G.

    2015-09-01

    Human learning is partly based on observation. Electroencephalographic recordings of subjects who perform acts (actors) or observe actors (observers) contain a negative waveform in the Evoked Potentials (EPs) of the actors that commit errors and of observers who observe the error-committing actors. This waveform is called the Error-Related Negativity (ERN). Its detection has applications in the context of Brain-Computer Interfaces. The present work describes a software system developed for managing EPs of observers, with the aim of classifying them into observations of either correct or incorrect actions. It consists of an integrated platform for the storage, management, processing and classification of EPs recorded during error-observation experiments. The system was developed using C# and the following development tools and frameworks: MySQL, .NET Framework, Entity Framework and Emgu CV, for interfacing with the machine learning library of OpenCV. Up to six features can be computed per EP recording per electrode. The user can select among various feature selection algorithms and then proceed to train one of three types of classifiers: Artificial Neural Networks, Support Vector Machines, k-nearest neighbour. The classifier can then be used to classify any EP curve that has been entered into the database.
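
    The workflow described (feature extraction per electrode, feature selection, then one of three classifier types) can be illustrated in a few lines of scikit-learn. The sketch below is not the platform itself, which is written in C# against OpenCV via Emgu CV; the data are synthetic and the feature counts and classifier settings are assumptions.

        import numpy as np
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        n_recordings, n_electrodes, n_features = 120, 8, 6       # assumed sizes
        X = rng.normal(size=(n_recordings, n_electrodes * n_features))
        y = rng.integers(0, 2, size=n_recordings)                 # 0 = correct action observed, 1 = error observed

        # Feature selection followed by one of the three classifier types named
        # in the abstract (k-nearest neighbour shown here).
        model = make_pipeline(SelectKBest(f_classif, k=10),
                              KNeighborsClassifier(n_neighbors=5))
        scores = cross_val_score(model, X, y, cv=5)
        print("cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))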

  11. A localized orbital analysis of the thermochemical errors in hybrid density functional theory: achieving chemical accuracy via a simple empirical correction scheme.

    PubMed

    Friesner, Richard A; Knoll, Eric H; Cao, Yixiang

    2006-09-28

    This paper describes an empirical localized orbital correction model which improves the accuracy of density functional theory (DFT) methods for the prediction of thermochemical properties for molecules of first and second row elements. The B3LYP localized orbital correction version of the model improves B3LYP DFT atomization energy calculations on the G3 data set of 222 molecules from a mean absolute deviation (MAD) from experiment of 4.8 to 0.8 kcal/mol. The almost complete elimination of large outliers and the substantial reduction in MAD yield overall results comparable to the G3 wave-function-based method; furthermore, the new model has zero additional computational cost beyond standard DFT calculations. The following four classes of correction parameters are applied to a molecule based on standard valence bond assignments: corrections to atoms, corrections to individual bonds, corrections for neighboring bonds of a given bond, and radical environmental corrections. Although the model is heuristic and is based on a 22 parameter multiple linear regression to experimental errors, each of the parameters is justified on physical grounds, and each provides insight into the fundamental limitations of DFT, most importantly the failure of current DFT methods to accurately account for nondynamical electron correlation.
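
    The correction scheme amounts to a multiple linear regression of DFT errors on counts of correction parameters per molecule. The sketch below is schematic only and not the authors' code: the counts, parameter values and noise are fabricated, with only the data-set size and parameter count taken from the abstract.

        import numpy as np

        rng = np.random.default_rng(2)
        n_molecules, n_params = 222, 22                     # sizes quoted in the abstract
        # Assumed counts of each correction type (atom, bond, bond-environment,
        # radical) occurring in each molecule, and assumed "true" corrections.
        counts = rng.integers(0, 4, size=(n_molecules, n_params)).astype(float)
        true_params = rng.normal(0.0, 1.0, size=n_params)   # kcal/mol
        dft_error = counts @ true_params + rng.normal(0.0, 0.5, size=n_molecules)

        # Multiple linear regression of atomization-energy errors on the counts.
        fitted, *_ = np.linalg.lstsq(counts, dft_error, rcond=None)
        corrected = dft_error - counts @ fitted
        print("MAD before correction (kcal/mol): %.2f" % np.mean(np.abs(dft_error)))
        print("MAD after correction  (kcal/mol): %.2f" % np.mean(np.abs(corrected)))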

  12. Using Simulation to Address Hierarchy-Related Errors in Medical Practice

    PubMed Central

    Calhoun, Aaron William; Boone, Megan C; Porter, Melissa B; Miller, Karen H

    2014-01-01

    Objective: Hierarchy, the unavoidable authority gradients that exist within and between clinical disciplines, can lead to significant patient harm in high-risk situations if not mitigated. High-fidelity simulation is a powerful means of addressing this issue in a reproducible manner, but participant psychological safety must be assured. Our institution experienced a hierarchy-related medication error that we subsequently addressed using simulation. The purpose of this article is to discuss the implementation and outcome of these simulations. Methods: Script and simulation flowcharts were developed to replicate the case. Each session included the use of faculty misdirection to precipitate the error. Care was taken to assure psychological safety via carefully conducted briefing and debriefing periods. Case outcomes were assessed using the validated Team Performance During Simulated Crises Instrument. Gap analysis was used to quantify team self-insight. Session content was analyzed via video review. Results: Five sessions were conducted (3 in the pediatric intensive care unit and 2 in the Pediatric Emergency Department). The team was unsuccessful at addressing the error in 4 (80%) of 5 cases. Trends toward lower communication scores (3.4/5 vs 2.3/5), as well as poor team self-assessment of communicative ability, were noted in unsuccessful sessions. Learners had a positive impression of the case. Conclusions: Simulation is a useful means to replicate hierarchy error in an educational environment. This methodology was viewed positively by learner teams, suggesting that psychological safety was maintained. Teams that did not address the error successfully may have impaired self-assessment ability in the communication skill domain. PMID:24867545

  13. Order of accuracy of QUICK and related convection-diffusion schemes

    NASA Technical Reports Server (NTRS)

    Leonard, B. P.

    1993-01-01

    This report attempts to correct some misunderstandings that have appeared in the literature concerning the order of accuracy of the QUICK scheme for steady-state convective modeling. Other related convection-diffusion schemes are also considered. The original one-dimensional QUICK scheme written in terms of nodal-point values of the convected variable (with a 1/8-factor multiplying the 'curvature' term) is indeed a third-order representation of the finite volume formulation of the convection operator average across the control volume, written naturally in flux-difference form. An alternative single-point upwind difference scheme (SPUDS) using node values (with a 1/6-factor) is a third-order representation of the finite difference single-point formulation; this can be written in a pseudo-flux difference form. These are both third-order convection schemes; however, the QUICK finite volume convection operator is 33 percent more accurate than the single-point implementation of SPUDS. Another finite volume scheme, writing convective fluxes in terms of cell-average values, requires a 1/6-factor for third-order accuracy. For completeness, one can also write a single-point formulation of the convective derivative in terms of cell averages, and then express this in pseudo-flux difference form; for third-order accuracy, this requires a curvature factor of 5/24. Diffusion operators are also considered in both single-point and finite volume formulations. Finite volume formulations are found to be significantly more accurate. For example, classical second-order central differencing for the second derivative is exactly twice as accurate in a finite volume formulation as it is in single-point.
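
    As a concrete illustration of the nodal-point QUICK formula discussed above, the following sketch evaluates the face value of the convected variable with the 1/8 curvature factor; the function name and the example profile are my own, not taken from the report.

        def quick_face_value(phi_uu, phi_u, phi_d, curvature_factor=1.0 / 8.0):
            """Convected-variable value at a control-volume face, for flow running
            from the upstream-upstream node (phi_uu) through the upstream node
            (phi_u) toward the downstream node (phi_d).  A 1/8 factor gives the
            third-order finite-volume (flux-difference) form; the related
            single-point scheme (SPUDS) uses 1/6 instead."""
            linear = 0.5 * (phi_u + phi_d)
            curvature = phi_uu - 2.0 * phi_u + phi_d
            return linear - curvature_factor * curvature

        # Example on phi(x) = x**2 sampled at unit-spaced nodes x = 1, 2, 3:
        print(quick_face_value(phi_uu=1.0, phi_u=4.0, phi_d=9.0))   # 6.25, exact for a quadratic profile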

  14. Micro movements of the upper limb in fibromyalgia: The relation to proprioceptive accuracy and visual feedback.

    PubMed

    Bardal, Ellen Marie; Roeleveld, Karin; Ihlen, Espen; Mork, Paul Jarle

    2016-02-01

    The purpose of this study was to explore the role of visual and proprioceptive feedback in upper limb posture control in fibromyalgia (FM) and to assess the coherence between acceleration measurements of upper limb micro movements and surface electromyography (sEMG) of shoulder muscle activity (upper trapezius and deltoid). Twenty-five female FM patients and 25 age- and sex-matched healthy controls (HCs) performed three precision motor tasks: (1) maintain a steady shoulder abduction angle of 45° while receiving visual feedback about upper arm position and supporting external loads (0.5, 1, or 2kg), (2) maintain the same shoulder abduction angle without visual feedback (eyes closed) and no external loading, and (3) a joint position sense test (i.e., assessment of proprioceptive accuracy). Patients had more extensive increase in movement variance than HCs when visual feedback was removed (P<0.03). Proprioceptive accuracy was related to movement variance in HCs (R⩾0.59, P⩽0.002), but not in patients (R⩽0.25, P⩾0.24). There was no difference between patients and HCs in coherence between sEMG and acceleration data. These results may indicate that FM patients are more dependent on visual feedback and less reliant on proprioceptive information for upper limb posture control compared to HCs.

  15. Errors can be related to pre-stimulus differences in ERP topography and their concomitant sources.

    PubMed

    Britz, Juliane; Michel, Christoph M

    2010-02-01

    Much of the variation in both neuronal and behavioral responses to stimuli can be explained by pre-stimulus fluctuations in brain activity. We hypothesized that errors, too, are the result of stochastic fluctuations in pre-stimulus activity and investigated the temporal dynamics of the scalp topography and their concomitant intracranial generators of stimulus- and response-locked high-density event-related potentials (ERPs) to errors and correct trials in a Stroop task. We found significant differences in ERP map topography and intracranial sources before the onset of the stimulus and after the initiation of the response but not as a function of stimulus-induced conflict. Before the stimulus, topographic differences were accompanied by differential activity in lateral frontal, parietal and temporal areas known to be involved in voluntary reorientation of attention and cognitive control. Differential post-response activity propagated both medially and laterally on a rostral-caudal axis of a network typically involved in performance monitoring. Analysis of the statistical properties of error occurrences revealed their stochasticity. PMID:19850140

  16. Errors Related to Medication Reconciliation: A Prospective Study in Patients Admitted to the Post CCU

    PubMed Central

    Haji Aghajani, Mohammad; Ghazaeian, Monireh; Mehrazin, Hamid Reza; Sistanizad, Mohammad; Miri, Mirmohammad

    2016-01-01

    Medication errors are one of the important factors that increase fatal injuries to patients and impose significant economic costs on health care. An appropriate medication history could reduce errors related to omission of previous drugs at the time of hospitalization. The aim of this study, the first of its kind in Iran, was to evaluate the discrepancies between medication histories obtained by pharmacists and by physicians/nurses, and the physician's first order. From September 2012 until March 2013, patients admitted to the post CCU of a 550-bed university hospital were recruited into the study. As part of medication reconciliation on admission, the physicians/nurses obtained a medication history from all admitted patients. For patients included in the study, the medication history was obtained by both the physician/nurse and a pharmacy student (after training by a faculty clinical pharmacist) during the first 24 h of admission. In total, 250 patients met the inclusion criteria. The mean age of patients was 61.19 ± 14.41 years. Comparing the pharmacy student's drug histories with the medication lists obtained by nurses/physicians revealed 3036 discrepancies. On average, 12.14 discrepancies (range 0 to 68) were identified per patient. Only in 20 patients (8%) was there 100% agreement between the medication lists obtained by the pharmacist and the physician/nurse. Comparing the medication histories with the list of drugs ordered by the physician at the first visit showed 12.1 discrepancies on average (range 0 to 72). According to these results, omission errors in our setting are more frequent than in other countries. Pharmacy-based medication reconciliation could be recommended to decrease this type of error. PMID:27642331

  17. Abnormal error-related antisaccade activation in premanifest and early manifest Huntington disease

    PubMed Central

    Rupp, J.; Dzemidzic, M.; Blekher, T.; Bragulat, V.; West, J.; Jackson, J.; Hui, S.; Wojcieszek, J.; Saykin, A.J.; Kareken, D.; Foroud, T.

    2010-01-01

    Objective Individuals with the trinucleotide CAG expansion (CAG+) that causes Huntington disease (HD) have impaired performance on antisaccade (AS) tasks that require directing gaze in the mirror opposite direction of visual targets. This study aimed to identify the neural substrates underlying altered antisaccadic performance. Method Three groups of participants were recruited: 1) Imminent and early manifest HD (early HD, n=8); 2) premanifest (presymptomatic) CAG+ (preHD, n=10); and 3) CAG unexpanded (CAG−) controls (n=12). All participants completed a uniform study visit that included a neurological evaluation, neuropsychological battery, molecular testing, and functional magnetic resonance imaging during an AS task. The blood oxygenation level dependent (BOLD) response was obtained during saccade preparation and saccade execution for both correct and incorrect responses using regression analysis. Results Significant group differences in BOLD response were observed when comparing incorrect AS to correct AS execution. Specifically, as the percentage of incorrect AS increased, BOLD responses in the CAG− group decreased progressively in a well-documented reward detection network that includes the pre-supplementary motor area and dorsal anterior cingulate cortex. In contrast, AS errors in the preHD and early HD groups lacked this relationship with BOLD signal in the error detection network, and BOLD responses to AS errors were smaller in the two CAG+ groups as compared with the CAG− group. Conclusions These results are the first to suggest that abnormalities in an error-related response network may underlie early changes in AS eye movements in premanifest and early manifest HD. PMID:21401260

  19. Accuracy of relative positioning by interferometry with GPS Double-blind test results

    NASA Technical Reports Server (NTRS)

    Counselman, C. C., III; Gourevitch, S. A.; Herring, T. A.; King, B. W.; Shapiro, I. I.; Cappallo, R. J.; Rogers, A. E. E.; Whitney, A. R.; Greenspan, R. L.; Snyder, R. E.

    1983-01-01

    MITES (Miniature Interferometer Terminals for Earth Surveying) observations conducted on December 17 and 29, 1980, are analyzed. It is noted that the time span of the observations used on each day was 78 minutes, during which five satellites were always above 20 deg elevation. The observations are analyzed to determine the intersite position vectors by means of the algorithm described by Couselman and Gourevitch (1981). The average of the MITES results from the two days is presented. The rms differences between the two determinations of the components of the three vectors, which were about 65, 92, and 124 m long, were 8 mm for the north, 3 mm for the east, and 6 mm for the vertical. It is concluded that, at least for short distances, relative positioning by interferometry with GPS can be done reliably with subcentimeter accuracy.

  20. Error-Related Negativity and the Misattribution of State-Anxiety Following Errors: On the Reproducibility of Inzlicht and Al-Khindi (2012)

    PubMed Central

    Cano Rodilla, Carmen; Beauducel, André; Leue, Anja

    2016-01-01

    In their innovative study, Inzlicht and Al-Khindi (2012) demonstrated that participants who were allowed to misattribute their arousal and negative affect induced by errors to a placebo beverage had a reduced error-related negativity (ERN/Ne) compared to controls who were not allowed to misattribute their arousal following errors. These results contribute to the ongoing debate about whether affect and motivation are interwoven with the cognitive processing of errors. Evidence that the misattribution of negative affect modulates the ERN/Ne is essential for understanding the mechanisms behind the ERN/Ne. Therefore, and because of the growing debate on reproducibility of empirical findings, we aimed at replicating the misattribution effects on the ERN/Ne in a go/nogo task. Students were randomly assigned to a misattribution group (n = 48) or a control group (n = 51). Participants of the misattribution group consumed a beverage said to have side effects that would increase their physiological arousal, so that they could misattribute the negative affect induced by errors to the beverage. Participants of the control group correctly believed that the beverage had no side effects. Like Inzlicht and Al-Khindi (2012), we did not observe performance differences between the groups. However, ERN/Ne differences between the misattribution and control groups could not be replicated, although the statistical power of the replication study was high. Evidence regarding the replication of performance and the non-replication of ERN/Ne findings was confirmed by Bayesian statistics. PMID:27708571

  1. Task-dependent signal variations in EEG error-related potentials for brain-computer interfaces

    NASA Astrophysics Data System (ADS)

    Iturrate, I.; Montesano, L.; Minguez, J.

    2013-04-01

    Objective. A major difficulty of brain-computer interface (BCI) technology is dealing with the noise of EEG and its signal variations. Previous works studied time-dependent non-stationarities for BCIs in which the user’s mental task was independent of the device operation (e.g., the mental task was motor imagery and the operational task was a speller). However, there are some BCIs, such as those based on error-related potentials, where the mental and operational tasks are dependent (e.g., the mental task is to assess the device action and the operational task is the device action itself). The dependence between the mental task and the device operation could introduce a new source of signal variations when the operational task changes, which has not been studied yet. The aim of this study is to analyse task-dependent signal variations and their effect on EEG error-related potentials. Approach. The work analyses the EEG variations on the three design steps of BCIs: an electrophysiology study to characterize the existence of these variations, a feature distribution analysis and a single-trial classification analysis to measure the impact on the final BCI performance. Results and significance. The results demonstrate that a change in the operational task produces variations in the potentials, even when EEG activity exclusively originated in brain areas related to error processing is considered. Consequently, the extracted features from the signals vary, and a classifier trained with one operational task presents a significant loss of performance for other tasks, requiring calibration or adaptation for each new task. In addition, a new calibration for each of the studied tasks rapidly outperforms adaptive techniques designed in the literature to mitigate the EEG time-dependent non-stationarities.

  2. Skeletal mechanism generation for surrogate fuels using directed relation graph with error propagation and sensitivity analysis

    SciTech Connect

    Niemeyer, Kyle E.; Sung, Chih-Jen; Raju, Mandhapati P.

    2010-09-15

    A novel implementation for the skeletal reduction of large detailed reaction mechanisms using the directed relation graph with error propagation and sensitivity analysis (DRGEPSA) is developed and presented with examples for three hydrocarbon components, n-heptane, iso-octane, and n-decane, relevant to surrogate fuel development. DRGEPSA integrates two previously developed methods, directed relation graph-aided sensitivity analysis (DRGASA) and directed relation graph with error propagation (DRGEP), by first applying DRGEP to efficiently remove many unimportant species prior to a sensitivity analysis that further removes unimportant species, producing an optimally small skeletal mechanism for a given error limit. It is illustrated that the combination of the DRGEP and DRGASA methods allows the DRGEPSA approach to overcome the weaknesses of each, specifically that DRGEP cannot identify all unimportant species and that DRGASA shields unimportant species from removal. Skeletal mechanisms for n-heptane and iso-octane generated using the DRGEP, DRGASA, and DRGEPSA methods are presented and compared to illustrate the improvement of DRGEPSA. From a detailed reaction mechanism for n-alkanes covering n-octane to n-hexadecane with 2115 species and 8157 reactions, two skeletal mechanisms for n-decane generated using DRGEPSA, one covering a comprehensive range of temperature, pressure, and equivalence ratio conditions for autoignition and the other limited to high temperatures, are presented and validated. The comprehensive skeletal mechanism consists of 202 species and 846 reactions and the high-temperature skeletal mechanism consists of 51 species and 256 reactions. Both mechanisms are further demonstrated to reproduce the results of the detailed mechanism well in perfectly stirred reactor and laminar flame simulations over a wide range of conditions. The comprehensive and high-temperature n-decane skeletal mechanisms are included as supplementary material with this article.
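
    The error-propagation step of DRGEP can be pictured as a max-product graph search: the overall interaction coefficient of a species with respect to a target is the largest product of direct interaction coefficients along any path, and species below a user-chosen threshold are removed before the sensitivity-analysis stage. The toy Python sketch below is not the authors' implementation; the species names, coefficients and threshold are invented for illustration.

        import heapq

        def drgep_overall_coefficients(direct, target):
            """direct[a][b] = direct interaction coefficient of species b on a, in [0, 1].
            Returns the path-product ('error propagation') coefficient of every species
            with respect to the target, via a max-product Dijkstra-style search."""
            overall = {target: 1.0}
            heap = [(-1.0, target)]
            while heap:
                neg_r, species = heapq.heappop(heap)
                r = -neg_r
                if r < overall.get(species, 0.0):
                    continue                      # stale heap entry
                for neighbour, weight in direct.get(species, {}).items():
                    candidate = r * weight
                    if candidate > overall.get(neighbour, 0.0):
                        overall[neighbour] = candidate
                        heapq.heappush(heap, (-candidate, neighbour))
            return overall

        # Hypothetical graph: fuel -> intermediates -> minor species.
        direct = {
            "fuel": {"R1": 0.9, "R2": 0.4},
            "R1":   {"minor_A": 0.05, "R2": 0.6},
            "R2":   {"minor_B": 0.02},
        }
        coeffs = drgep_overall_coefficients(direct, target="fuel")
        threshold = 0.1
        skeletal = {s for s, r in coeffs.items() if r >= threshold}
        print(coeffs)     # minor_A gets 0.9 * 0.05 = 0.045 and falls below the threshold
        print(skeletal)   # species retained before the subsequent sensitivity-analysis stage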

  3. Neuroimaging measures of error-processing: Extracting reliable signals from event-related potentials and functional magnetic resonance imaging.

    PubMed

    Steele, Vaughn R; Anderson, Nathaniel E; Claus, Eric D; Bernat, Edward M; Rao, Vikram; Assaf, Michal; Pearlson, Godfrey D; Calhoun, Vince D; Kiehl, Kent A

    2016-05-15

    Error-related brain activity has become an increasingly important focus of cognitive neuroscience research utilizing both event-related potentials (ERPs) and functional magnetic resonance imaging (fMRI). Given the significant time and resources required to collect these data, it is important for researchers to plan their experiments such that stable estimates of error-related processes can be achieved efficiently. Reliability of error-related brain measures will vary as a function of the number of error trials and the number of participants included in the averages. Unfortunately, systematic investigations of the number of events and participants required to achieve stability in error-related processing are sparse, and none have addressed variability in sample size. Our goal here is to provide data compiled from a large sample of healthy participants (n=180) performing a Go/NoGo task, resampled iteratively to demonstrate the relative stability of measures of error-related brain activity given a range of sample sizes and event numbers included in the averages. We examine ERP measures of error-related negativity (ERN/Ne) and error positivity (Pe), as well as event-related fMRI measures locked to False Alarms. We find that achieving stable estimates of ERP measures required four to six error trials and approximately 30 participants; fMRI measures required six to eight trials and approximately 40 participants. Fewer trials and participants were required for measures where additional data reduction techniques (i.e., principal component analysis and independent component analysis) were implemented. Ranges of reliability statistics for various sample sizes and numbers of trials are provided. We intend this to be a useful resource for those planning or evaluating ERP or fMRI investigations with tasks designed to measure error-processing.
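
    The resampling logic can be sketched in a few lines: repeatedly draw subsets of participants and error trials, recompute the group-average ERN, and track how much the estimate varies. The code below uses synthetic amplitudes and an arbitrary stability index (the SD of the estimate across draws); none of the numbers are from the study.

        import numpy as np

        rng = np.random.default_rng(3)
        n_participants, max_trials = 180, 20
        # Hypothetical single-trial ERN amplitudes (microvolts): participant mean + trial noise.
        subject_means = rng.normal(-5.0, 2.0, size=n_participants)
        trials = subject_means[:, None] + rng.normal(0.0, 6.0, size=(n_participants, max_trials))

        def estimate_spread(sample_size, n_trials, n_iterations=500):
            """SD of the group-average ERN across random draws of participants and trials."""
            estimates = []
            for _ in range(n_iterations):
                subj = rng.choice(n_participants, size=sample_size, replace=False)
                trl = rng.choice(max_trials, size=n_trials, replace=False)
                estimates.append(trials[np.ix_(subj, trl)].mean())
            return float(np.std(estimates))

        for n_subj in (10, 30, 60):
            for n_trl in (2, 6, 10):
                print(n_subj, "participants,", n_trl, "trials ->", round(estimate_spread(n_subj, n_trl), 2))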

  4. Investigating the epidemiology of medication errors and error-related adverse drug events (ADEs) in primary care, ambulatory care and home settings: a systematic review protocol

    PubMed Central

    Assiri, Ghadah Asaad; Grant, Liz; Aljadhey, Hisham; Sheikh, Aziz

    2016-01-01

    Introduction There is a need to better understand the epidemiology of medication errors and error-related adverse events in community care contexts. Methods and analysis We will systematically search the following databases: Cumulative Index to Nursing and Allied Health Literature (CINAHL), EMBASE, Eastern Mediterranean Regional Office of the WHO (EMRO), MEDLINE, PsycINFO and Web of Science. In addition, we will search Google Scholar and contact an international panel of experts to search for unpublished and in progress work. The searches will cover the time period January 1990–December 2015 and will yield data on the incidence or prevalence of and risk factors for medication errors and error-related adverse drug events in adults living in community settings (ie, primary care, ambulatory and home). Study quality will be assessed using the Critical Appraisal Skills Program quality assessment tool for cohort and case–control studies, and cross-sectional studies will be assessed using the Joanna Briggs Institute Critical Appraisal Checklist for Descriptive Studies. Meta-analyses will be undertaken using random-effects modelling using STATA (V.14) statistical software. Ethics and dissemination This protocol will be registered with PROSPERO, an international prospective register of systematic reviews, and the systematic review will be reported in the peer-reviewed literature using Preferred Reporting Items for Systematic Reviews and Meta-Analyses. PMID:27580826

  5. System performance and performance enhancement relative to element position location errors for distributed linear antenna arrays

    NASA Astrophysics Data System (ADS)

    Adrian, Andrew

    For the most part, antenna phased arrays have traditionally been composed of antenna elements that are very carefully and precisely placed in very periodic grid structures. Additionally, the relative positions of the elements to each other are typically mechanically fixed as best as possible. There is never an assumption that the relative positions of the elements are a function of time or of some random process. In fact, every array design is typically analyzed for necessary element position tolerances in order to meet necessary performance requirements such as directivity, beamwidth, sidelobe level, and beam scanning capability. Consider instead an antenna array composed of several radiating elements in which the position of each element is not rigidly and mechanically fixed as in a traditional array. This is not to say that the element placement structure is ignored or irrelevant, but each element is not always in its desired relative location. Relative element positioning would be analogous to a flock of birds in flight or a swarm of insects. They tend to maintain a near-fixed position within the group, but not always. In the antenna array analog, it would be desirable to maintain a fixed formation, but due to other random processes, it is not always possible to maintain perfect formation. This type of antenna array is referred to as a distributed antenna array. A distributed antenna array's inability to maintain perfect formation causes degradations in the antenna factor pattern of the array. Directivity, beamwidth, sidelobe level and beam pointing error are all adversely affected by element relative position error. This impact is studied as a function of element relative position error for linear antenna arrays. The study is performed over several nominal array element spacings, from lambda to lambda, several sidelobe levels (20 to 50 dB) and across multiple array illumination tapers. Knowing the variation in performance, work is also performed to utilize a minimum
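
    One way to picture the degradation described here is to perturb the element positions of a uniform linear array and watch the sidelobe floor rise. The sketch below is only a back-of-envelope illustration with assumed element count, spacing, and error levels, and a deliberately crude main-lobe mask; it is not the analysis performed in the study.

        import numpy as np

        def peak_sidelobe_db(n_elements=16, spacing_wl=0.5, sigma_wl=0.0, seed=0):
            """Peak sidelobe level (dB relative to the main beam) of a broadside uniform
            linear array whose element positions (in wavelengths) are perturbed by
            zero-mean Gaussian errors of standard deviation sigma_wl."""
            rng = np.random.default_rng(seed)
            positions = np.arange(n_elements) * spacing_wl
            positions = positions + rng.normal(0.0, sigma_wl, size=n_elements)
            u = np.linspace(-1.0, 1.0, 4001)                  # u = sin(theta)
            af = np.abs(np.exp(2j * np.pi * np.outer(u, positions)).sum(axis=1))
            af_db = 20.0 * np.log10(af / af.max() + 1e-12)
            main_lobe = np.abs(u) < 1.0 / (n_elements * spacing_wl)   # crude main-lobe mask
            return af_db[~main_lobe].max()

        for sigma in (0.0, 0.02, 0.05, 0.1):                  # position error in wavelengths
            print("sigma = %.2f wl -> peak sidelobe %.1f dB" % (sigma, peak_sidelobe_db(sigma_wl=sigma)))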

  7. Effects of simulated interpersonal touch and trait intrinsic motivation on the error-related negativity.

    PubMed

    Tjew-A-Sin, Mandy; Tops, Mattie; Heslenfeld, Dirk J; Koole, Sander L

    2016-03-23

    The error-related negativity (ERN or Ne) is a negative event-related brain potential that peaks about 20-100 ms after people perform an incorrect response in choice reaction time tasks. Prior research has shown that the ERN may be enhanced by situational and dispositional factors that promote intrinsic motivation. Building on and extending this work the authors hypothesized that simulated interpersonal touch may increase task engagement and thereby increase ERN amplitude. To test this notion, 20 participants performed a Go/No-Go task while holding a teddy bear or a same-sized cardboard box. As expected, the ERN was significantly larger when participants held a teddy bear rather than a cardboard box. This effect was most pronounced for people high (rather than low) in trait intrinsic motivation, who may depend more on intrinsically motivating task cues to maintain task engagement. These findings highlight the potential benefits of simulated interpersonal touch in stimulating attention to errors, especially among people who are intrinsically motivated. PMID:26876476

  8. Delay accuracy in bat sonar is related to the reciprocal of normalized echo bandwidth, or Q.

    PubMed

    Simmons, James A; Neretti, Nicola; Intrator, Nathan; Altes, Richard A; Ferragamo, Michael J; Sanderson, Mark I

    2004-03-01

    Big brown bats (Eptesicus fuscus) emit wideband, frequency-modulated biosonar sounds and perceive the distance to objects from the delay of echoes. Bats remember delays and patterns of delay from one broadcast to the next, and they may rely on delays to perceive target scenes. While emitting a series of broadcasts, they can detect very small changes in delay based on their estimates of delay for successive echoes, which are derived from an auditory time/frequency representation of frequency-modulated sounds. To understand how bats perceive objects, we need to know how information distributed across the time/frequency surface is brought together to estimate delay. To assess this transformation, we measured how alteration of the frequency content of echoes affects the sharpness of the bat's delay estimates from the distribution of errors in a psychophysical task for detecting changes in delay. For unrestricted echo frequency content and high echo signal-to-noise ratio, bats can detect extremely small changes in delay of about 10 ns. When echo bandwidth is restricted by filtering out low or high frequencies, the bat's delay acuity declines in relation to the reciprocal of relative echo bandwidth, expressed as Q, which also is the relative width of the target impulse response in cycles rather than time. This normalized-time dimension may be efficient for target classification if it leads to target shape being displayed independent of size. This relation may originate from cochlear transduction by parallel frequency channels with active amplification, which creates the auditory time/frequency representation itself.
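
    The reported scaling can be put in back-of-envelope form: Q is the centre frequency divided by the bandwidth (the reciprocal of relative bandwidth), and delay acuity is taken to worsen in proportion to Q. The numbers below (sweep limits, filter cut-off) are assumptions for illustration; only the roughly 10 ns full-bandwidth acuity comes from the abstract.

        def q_factor(low_hz, high_hz):
            """Q = centre frequency / bandwidth, i.e. the reciprocal of relative bandwidth."""
            return 0.5 * (low_hz + high_hz) / (high_hz - low_hz)

        q_full = q_factor(25e3, 100e3)          # assumed full FM sweep of the echo
        q_highpass = q_factor(55e3, 100e3)      # low frequencies filtered out of the echo
        baseline_acuity_ns = 10.0               # ~10 ns at full bandwidth (from the abstract)

        # If acuity scales with Q, restricting the band should degrade it by the Q ratio:
        print("predicted acuity with restricted band: %.1f ns"
              % (baseline_acuity_ns * q_highpass / q_full))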

  9. Accuracy of Pressure Sensitive Paint

    NASA Technical Reports Server (NTRS)

    Liu, Tianshu; Guille, M.; Sullivan, J. P.

    2001-01-01

    Uncertainty in pressure sensitive paint (PSP) measurement is investigated from the standpoint of system modeling. A functional relation between the imaging system output and the luminescent emission from PSP is obtained based on studies of radiative energy transport in PSP and photodetector response to luminescence. This relation provides insights into the physical origins of various elemental error sources and allows estimation of the total PSP measurement uncertainty contributed by the elemental errors. The elemental errors and their sensitivity coefficients in the error propagation equation are evaluated. Useful formulas are given for the minimum pressure uncertainty that PSP can possibly achieve and for the upper bounds of the elemental errors needed to meet a required pressure accuracy. An instructive example of a Joukowsky airfoil in subsonic flows is given to illustrate uncertainty estimates in PSP measurements.
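
    The error-propagation step described above can be sketched generically. The following snippet is an illustration, not the paper's model: it combines hypothetical elemental relative errors through the usual root-sum-square propagation formula, with placeholder sensitivity coefficients standing in for those derived from the PSP system model.

    ```python
    import math

    # A minimal sketch of root-sum-square error propagation, assuming the generic
    # form (dP/P)^2 = sum_i S_i^2 * (dx_i/x_i)^2. The element names, sensitivity
    # coefficients S_i, and relative errors below are placeholders, not the values
    # derived from the paper's PSP system model.

    elemental_errors = {
        # name: (sensitivity coefficient S_i, relative error dx_i / x_i)
        "camera shot noise":      (1.0, 0.002),
        "paint thickness change": (0.8, 0.005),
        "illumination drift":     (1.0, 0.003),
        "temperature effect":     (1.5, 0.004),
    }

    total_relative_uncertainty = math.sqrt(
        sum((s * e) ** 2 for s, e in elemental_errors.values())
    )
    print(f"estimated total relative pressure uncertainty: {total_relative_uncertainty:.4f}")
    ```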

  10. A 2 x 2 Taxonomy of Multilevel Latent Contextual Models: Accuracy-Bias Trade-Offs in Full and Partial Error Correction Models

    ERIC Educational Resources Information Center

    Ludtke, Oliver; Marsh, Herbert W.; Robitzsch, Alexander; Trautwein, Ulrich

    2011-01-01

    In multilevel modeling, group-level variables (L2) for assessing contextual effects are frequently generated by aggregating variables from a lower level (L1). A major problem of contextual analyses in the social sciences is that there is no error-free measurement of constructs. In the present article, 2 types of error occurring in multilevel data…

  11. Impacts of visuomotor sequence learning methods on speed and accuracy: Starting over from the beginning or from the point of error.

    PubMed

    Tanaka, Kanji; Watanabe, Katsumi

    2016-02-01

    The present study examined whether sequence learning led to more accurate performance and shorter performance times if people who are learning a sequence start over from the beginning when they make an error (i.e., practice the whole sequence) or only from the point of error (i.e., practice a part of the sequence). We used a visuomotor sequence learning paradigm with a trial-and-error procedure. In Experiment 1, we found fewer errors and shorter performance times for those who restarted their performance from the beginning of the sequence as compared to those who restarted from the point at which an error occurred, indicating better learning of spatial and motor representations of the sequence. This might be because the learned elements were repeated when the next performance started over from the beginning. In subsequent experiments, we increased the occasions for the repetition of learned elements by modulating the number of fresh start points in the sequence after errors. The results showed that fewer fresh start points were likely to lead to fewer errors and shorter performance times, indicating that the repetition of learned elements enabled participants to develop stronger spatial and motor representations of the sequence. Thus, having only one or two fresh start points in the sequence (i.e., starting over only from the beginning, or from the beginning or midpoint of the sequence, after errors) is likely to lead to more accurate and faster performance.

  12. Type I Error Inflation in the Traditional By-Participant Analysis to Metamemory Accuracy: A Generalized Mixed-Effects Model Perspective

    ERIC Educational Resources Information Center

    Murayama, Kou; Sakaki, Michiko; Yan, Veronica X.; Smith, Garry M.

    2014-01-01

    In order to examine metacognitive accuracy (i.e., the relationship between metacognitive judgment and memory performance), researchers often rely on by-participant analysis, where metacognitive accuracy (e.g., resolution, as measured by the gamma coefficient or signal detection measures) is computed for each participant and the computed values are…

  13. Interpolation in waveform space: Enhancing the accuracy of gravitational waveform families using numerical relativity

    NASA Astrophysics Data System (ADS)

    Cannon, Kipp; Emberson, J. D.; Hanna, Chad; Keppel, Drew; Pfeiffer, Harald P.

    2013-02-01

    Matched filtering for the identification of compact object mergers in gravitational wave antenna data involves the comparison of the data stream to a bank of template gravitational waveforms. Typically the template bank is constructed from phenomenological waveform models, since these can be evaluated for an arbitrary choice of physical parameters. Recently it has been proposed that singular value decomposition (SVD) can be used to reduce the number of templates required for detection. As we show here, another benefit of SVD is its removal of biases from the phenomenological templates along with a corresponding improvement in their ability to represent waveform signals obtained from numerical relativity (NR) simulations. Using these ideas, we present a method that calibrates a reduced SVD basis of phenomenological waveforms against NR waveforms in order to construct a new waveform approximant with improved accuracy and faithfulness compared to the original phenomenological model. The new waveform family is given numerically through the interpolation of the projection coefficients of NR waveforms expanded onto the reduced basis and provides a generalized scheme for enhancing phenomenological models.
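
    A minimal sketch of the reduced-basis idea follows, using synthetic arrays in place of real waveforms: build an orthonormal basis from an SVD of a template bank, then project a waveform onto the truncated basis to obtain the kind of projection coefficients such a method would interpolate across physical parameters. The array contents and the truncation threshold are assumptions for illustration, not the authors' pipeline.

    ```python
    import numpy as np

    # Sketch of the SVD reduced-basis idea with synthetic arrays standing in for
    # real waveforms (not the authors' pipeline). Rows of `templates` play the
    # role of phenomenological template waveforms on a common grid; the SVD gives
    # an orthonormal basis, and an "NR" waveform is expanded onto the truncated
    # basis to obtain projection coefficients of the kind that would be
    # interpolated across physical parameters. The 99.9% truncation threshold is
    # an arbitrary choice for illustration.

    rng = np.random.default_rng(0)
    n_templates, n_samples = 50, 1024
    templates = rng.standard_normal((n_templates, n_samples))   # placeholder waveforms

    u, s, vh = np.linalg.svd(templates, full_matrices=False)    # rows of vh: basis vectors

    cumulative = np.cumsum(s**2) / np.sum(s**2)
    n_basis = int(np.searchsorted(cumulative, 0.999)) + 1       # keep ~99.9% of power
    basis = vh[:n_basis]                                        # (n_basis, n_samples)

    nr_waveform = rng.standard_normal(n_samples)                # placeholder NR waveform
    coefficients = basis @ nr_waveform                          # projection coefficients
    reconstruction = coefficients @ basis                       # expansion on reduced basis

    overlap = (reconstruction @ nr_waveform) / (
        np.linalg.norm(reconstruction) * np.linalg.norm(nr_waveform)
    )
    print(f"kept {n_basis} of {n_templates} basis vectors, overlap with waveform = {overlap:.3f}")
    ```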

  14. Diagnostic Accuracy of Mucosal Biopsy versus Endoscopic Mucosal Resection in Barrett's Esophagus and Related Superficial Lesions.

    PubMed

    Elsadek, Hany M; Radwan, Mamdouh M

    2015-01-01

    Background. Endoscopic surveillance for early detection of dysplastic or neoplastic changes in patients with Barrett's esophagus (BE) usually depends on biopsy. The diagnostic and therapeutic role of endoscopic mucosal resection (EMR) in BE is rapidly growing. Objective. The aim of this study was to assess the accuracy of biopsy for precise histopathologic diagnosis of dysplasia and neoplasia, compared to EMR, in patients having BE and related superficial esophageal lesions. Methods. A total of 48 patients with previously diagnosed BE (36 men, 12 women, mean age 49.75 ± 13.3 years) underwent routine surveillance endoscopic examination. Biopsies were taken from superficial lesions, if present, and otherwise from BE segments. Then, EMR was performed within three weeks. Results. Biopsy-based histopathologic diagnoses were nondysplastic BE (NDBE), 22 cases; low-grade dysplasia (LGD), 14 cases; high-grade dysplasia (HGD), 8 cases; intramucosal carcinoma (IMC), two cases; and invasive adenocarcinoma (IAC), two cases. EMR-based diagnoses differed from biopsy-based diagnoses (either upgrading or downgrading) in 20 cases (41.67%) (Kappa = 0.43, 95% CI: 0.170-0.69). Conclusions. Biopsy is not a satisfactory method for accurate diagnosis of dysplastic or neoplastic changes in BE patients with or without suspicious superficial lesions. EMR should therefore be the preferred diagnostic method in such patients.
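
    The agreement statistic quoted above (Kappa = 0.43) can be reproduced in form, though not in value, from a confusion matrix of biopsy-based versus EMR-based diagnoses. The sketch below uses made-up cell counts whose row totals follow the biopsy category counts in the abstract; it is illustrative only, not the study's data.

    ```python
    import numpy as np

    # Cohen's kappa for agreement between biopsy-based (rows) and EMR-based
    # (columns) diagnoses, in the order NDBE, LGD, HGD, IMC, IAC. The individual
    # cell counts are made up for illustration; only the row totals (22, 14, 8, 2, 2)
    # follow the biopsy counts given in the abstract.

    confusion = np.array([
        [14, 6, 2, 0, 0],
        [ 3, 8, 3, 0, 0],
        [ 0, 3, 4, 1, 0],
        [ 0, 0, 1, 1, 0],
        [ 0, 0, 0, 1, 1],
    ], dtype=float)

    n = confusion.sum()
    observed = np.trace(confusion) / n                                   # raw agreement
    expected = (confusion.sum(axis=1) @ confusion.sum(axis=0)) / n**2    # chance agreement
    kappa = (observed - expected) / (1.0 - expected)
    print(f"observed = {observed:.2f}, expected = {expected:.2f}, kappa = {kappa:.2f}")
    ```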

  15. Children's school-breakfast reports and school-lunch reports (in 24-h dietary recalls): conventional and reporting-error-sensitive measures show inconsistent accuracy results for retention interval and breakfast location.

    PubMed

    Baxter, Suzanne D; Guinn, Caroline H; Smith, Albert F; Hitchcock, David B; Royer, Julie A; Puryear, Megan P; Collins, Kathleen L; Smith, Alyssa L

    2016-04-14

    Validation-study data were analysed to investigate retention interval (RI) and prompt effects on the accuracy of fourth-grade children's reports of school-breakfast and school-lunch (in 24-h recalls), and the accuracy of school-breakfast reports by breakfast location (classroom; cafeteria). Randomly selected fourth-grade children at ten schools in four districts were observed eating school-provided breakfast and lunch, and were interviewed under one of eight conditions created by crossing two RIs ('short'--prior-24-hour recall obtained in the afternoon and 'long'--previous-day recall obtained in the morning) with four prompts ('forward'--distant to recent, 'meal name'--breakfast, etc., 'open'--no instructions, and 'reverse'--recent to distant). Each condition had sixty children (half were girls). Of 480 children, 355 and 409 reported meals satisfying criteria for reports of school-breakfast and school-lunch, respectively. For breakfast and lunch separately, a conventional measure--report rate--and reporting-error-sensitive measures--correspondence rate and inflation ratio--were calculated for energy per meal-reporting child. Correspondence rate and inflation ratio--but not report rate--showed better accuracy for school-breakfast and school-lunch reports with the short RI than with the long RI; this pattern was not found for some prompts for each sex. Correspondence rate and inflation ratio showed better school-breakfast report accuracy for the classroom than for cafeteria location for each prompt, but report rate showed the opposite. For each RI, correspondence rate and inflation ratio showed better accuracy for lunch than for breakfast, but report rate showed the opposite. When choosing RI and prompts for recalls, researchers and practitioners should select a short RI to maximise accuracy. Recommendations for prompt selections are less clear. As report rates distort validation-study accuracy conclusions, reporting-error-sensitive measures are recommended. PMID

  16. Harsh Parenting and Fearfulness in Toddlerhood Interact to Predict Amplitudes of Preschool Error-Related Negativity

    PubMed Central

    Brooker, Rebecca J.; Buss, Kristin A.

    2014-01-01

    Temperamentally fearful children are at increased risk for the development of anxiety problems relative to less-fearful children. This risk is even greater when early environments include high levels of harsh parenting behaviors. However, the mechanisms by which harsh parenting may impact fearful children’s risk for anxiety problems are largely unknown. Recent neuroscience work has suggested that punishment is associated with exaggerated error-related negativity (ERN), an event-related potential linked to performance monitoring, even after the threat of punishment is removed. In the current study, we examined the possibility that harsh parenting interacts with fearfulness, impacting anxiety risk via neural processes of performance monitoring. We found that greater fearfulness and harsher parenting at 2 years of age predicted greater fearfulness and greater ERN amplitudes at age 4. Supporting the role of cognitive processes in this association, greater fearfulness and harsher parenting also predicted less efficient neural processing during preschool. This study provides initial evidence that performance monitoring may be a candidate process by which early parenting interacts with fearfulness to predict risk for anxiety problems. PMID:24721466

  17. Harsh parenting and fearfulness in toddlerhood interact to predict amplitudes of preschool error-related negativity.

    PubMed

    Brooker, Rebecca J; Buss, Kristin A

    2014-07-01

    Temperamentally fearful children are at increased risk for the development of anxiety problems relative to less-fearful children. This risk is even greater when early environments include high levels of harsh parenting behaviors. However, the mechanisms by which harsh parenting may impact fearful children's risk for anxiety problems are largely unknown. Recent neuroscience work has suggested that punishment is associated with exaggerated error-related negativity (ERN), an event-related potential linked to performance monitoring, even after the threat of punishment is removed. In the current study, we examined the possibility that harsh parenting interacts with fearfulness, impacting anxiety risk via neural processes of performance monitoring. We found that greater fearfulness and harsher parenting at 2 years of age predicted greater fearfulness and greater ERN amplitudes at age 4. Supporting the role of cognitive processes in this association, greater fearfulness and harsher parenting also predicted less efficient neural processing during preschool. This study provides initial evidence that performance monitoring may be a candidate process by which early parenting interacts with fearfulness to predict risk for anxiety problems.

  18. Motivation and semantic context affect brain error-monitoring activity: an event-related brain potentials study.

    PubMed

    Ganushchak, Lesya Y; Schiller, Niels O

    2008-01-01

    During speech production, we continuously monitor what we say. In situations in which speech errors potentially have more severe consequences, e.g., during a public presentation, our verbal self-monitoring system may pay closer attention to preventing errors than in situations in which speech errors are more acceptable, such as a casual conversation. In an event-related potential study, we investigated whether or not motivation affected participants' performance using a picture naming task in a semantic blocking paradigm. The semantic context of the to-be-named pictures was manipulated; blocks were semantically related (e.g., cat, dog, horse, etc.) or semantically unrelated (e.g., cat, table, flute, etc.). Motivation was manipulated independently by monetary reward. The motivation manipulation did not affect error rates during picture naming. However, the high-motivation condition yielded increased amplitude and latency values of the error-related negativity (ERN) compared to the low-motivation condition, presumably indicating higher monitoring activity. Furthermore, participants showed semantic interference effects in reaction times and error rates. The ERN amplitude was also larger during semantically related than unrelated blocks, presumably indicating that semantic relatedness induces more conflict between possible verbal responses. PMID:17920932

  19. Accuracy of MicroRNA Discovery Pipelines in Non-Model Organisms Using Closely Related Species Genomes

    PubMed Central

    Etebari, Kayvan; Asgari, Sassan

    2014-01-01

    Mapping small reads to a reference genome is an essential and widely used approach to identifying microRNAs (miRNAs) in an organism. Using closely related species' genomes as proxy references can facilitate miRNA expression studies in non-model species whose genomes are not available. However, the level of error this introduces is mostly unknown, as it depends on the evolutionary distance between the proxy reference and the species of interest. To evaluate the accuracy of miRNA discovery pipelines in non-model organisms, small RNA library data from a mosquito, Aedes aegypti, were mapped to three well-annotated insect genomes as proxy references using miRanalyzer with both strict and loose mapping criteria. In addition, another web-based miRNA discovery pipeline (DSAP) was used as a control for program performance. Using miRanalyzer, a reduction of more than 80% was observed in the number of mapped reads under the strict criterion when proxy genome references were used; however, only a 20% reduction was recorded for reads mapped to other species' known mature miRNA datasets. Except for a few changes in ranking, the mapping criteria did not make any significant difference in the profile of the most abundant miRNAs in A. aegypti when its original genome or a proxy genome was used as the reference. However, more variation was observed in the miRNA ranking profile when DSAP was used as the analysing tool. Overall, the results also suggested that using a proxy reference did not change the most abundant miRNAs' differential expression profiles when infected and non-infected libraries were compared. However, a proxy reference provided only about 67% of the original outcome for the most extremely up- or down-regulated miRNA profiles. Although using a closely related species' genome incurred some losses in the number of identified miRNAs, the most abundant miRNAs, along with their differential expression profiles, would be acceptable depending on the sensitivity level required by each project. PMID:24404190

  20. Theta and Alpha Band Modulations Reflect Error-Related Adjustments in the Auditory Condensation Task

    PubMed Central

    Novikov, Nikita A.; Bryzgalov, Dmitri V.; Chernyshev, Boris V.

    2015-01-01

    Error commission leads to adaptive adjustments in a number of brain networks that subserve goal-directed behavior, resulting in either enhanced stimulus processing or an increased motor threshold depending on the nature of the errors committed. Here, we studied these adjustments by analyzing post-error modulations of alpha and theta band activity in the auditory version of the two-choice condensation task, which is highly demanding of sustained attention while involving no inhibition of prepotent responses. Errors were followed by increased frontal midline theta (FMT) activity, as well as by enhanced alpha band suppression in the parietal and the left central regions; parietal alpha suppression correlated with task performance, left-central alpha suppression correlated with post-error slowing, and the FMT increase correlated with both behavioral measures. On post-error correct trials, left-central alpha band suppression started earlier before the response, and the response was followed by weaker FMT activity, as well as by enhanced alpha band suppression distributed over the entire scalp. These findings indicate that several separate neuronal networks are involved in post-error adjustments, including the midfrontal performance monitoring network, the parietal attentional network, and the sensorimotor network. Presumably, activity within these networks is rapidly modulated after errors, resulting in optimization of their functional state on the subsequent trials, with corresponding changes in behavioral measures. PMID:26733266

  1. Lexical and Child-Related Factors in Word Variability and Accuracy in Infants

    ERIC Educational Resources Information Center

    Macrae, Toby

    2013-01-01

    The present study investigated the effects of lexical age of acquisition (AoA), phonological complexity, age and expressive vocabulary on spoken word variability and accuracy in typically developing infants, aged 1;9-3;1. It was hypothesized that later-acquired words and those with more complex speech sounds would be produced more variably and…

  2. How Does Speed and Accuracy in Reading Relate to Reading Comprehension in Arabic?

    ERIC Educational Resources Information Center

    Abu-Leil, Aula Khateeb; Share, David L.; Ibrahim, Raphiq

    2014-01-01

    The purpose of this study was to investigate the potential contribution of decoding efficiency to the development of reading comprehension among skilled adult native Arabic speakers. In addition, we tried to investigate the influence of Arabic vowels on reading accuracy and reading speed, and thereby on reading comprehension. Seventy-five Arabic…

  3. Reduced Error-Related Activation in Two Anterior Cingulate Circuits Is Related to Impaired Performance in Schizophrenia

    ERIC Educational Resources Information Center

    Polli, Frida E.; Barton, Jason J. S.; Thakkar, Katharine N.; Greve, Douglas N.; Goff, Donald C.; Rauch, Scott L.; Manoach, Dara S.

    2008-01-01

    To perform well on any challenging task, it is necessary to evaluate your performance so that you can learn from errors. Recent theoretical and experimental work suggests that the neural sequelae of error commission in a dorsal anterior cingulate circuit index a type of contingency- or reinforcement-based learning, while activation in a rostral…

  4. Correcting a fundamental error in greenhouse gas accounting related to bioenergy

    PubMed Central

    Haberl, Helmut; Sprinz, Detlef; Bonazountas, Marc; Cocco, Pierluigi; Desaubies, Yves; Henze, Mogens; Hertel, Ole; Johnson, Richard K.; Kastrup, Ulrike; Laconte, Pierre; Lange, Eckart; Novak, Peter; Paavola, Jouni; Reenberg, Anette; van den Hove, Sybille; Vermeire, Theo; Wadhams, Peter; Searchinger, Timothy

    2012-01-01

    Many international policies encourage a switch from fossil fuels to bioenergy based on the premise that its use would not result in carbon accumulation in the atmosphere. Frequently cited bioenergy goals would at least double the present global human use of plant material, the production of which already requires the dedication of roughly 75% of vegetated lands and more than 70% of water withdrawals. However, burning biomass for energy provision increases the amount of carbon in the air just like burning coal, oil or gas if harvesting the biomass decreases the amount of carbon stored in plants and soils, or reduces carbon sequestration. Neglecting this fact results in an accounting error that could be corrected by considering that only the use of ‘additional biomass’ – biomass from additional plant growth or biomass that would decompose rapidly if not used for bioenergy – can reduce carbon emissions. Failure to correct this accounting flaw will likely have substantial adverse consequences. The article presents recommendations for correcting greenhouse gas accounts related to bioenergy. PMID:23576835

  5. Correcting a fundamental error in greenhouse gas accounting related to bioenergy.

    PubMed

    Haberl, Helmut; Sprinz, Detlef; Bonazountas, Marc; Cocco, Pierluigi; Desaubies, Yves; Henze, Mogens; Hertel, Ole; Johnson, Richard K; Kastrup, Ulrike; Laconte, Pierre; Lange, Eckart; Novak, Peter; Paavola, Jouni; Reenberg, Anette; van den Hove, Sybille; Vermeire, Theo; Wadhams, Peter; Searchinger, Timothy

    2012-06-01

    Many international policies encourage a switch from fossil fuels to bioenergy based on the premise that its use would not result in carbon accumulation in the atmosphere. Frequently cited bioenergy goals would at least double the present global human use of plant material, the production of which already requires the dedication of roughly 75% of vegetated lands and more than 70% of water withdrawals. However, burning biomass for energy provision increases the amount of carbon in the air just like burning coal, oil or gas if harvesting the biomass decreases the amount of carbon stored in plants and soils, or reduces carbon sequestration. Neglecting this fact results in an accounting error that could be corrected by considering that only the use of 'additional biomass' - biomass from additional plant growth or biomass that would decompose rapidly if not used for bioenergy - can reduce carbon emissions. Failure to correct this accounting flaw will likely have substantial adverse consequences. The article presents recommendations for correcting greenhouse gas accounts related to bioenergy. PMID:23576835

  6. Uncertainty forecasts improve weather-related decisions and attenuate the effects of forecast error.

    PubMed

    Joslyn, Susan L; LeClerc, Jared E

    2012-03-01

    Although uncertainty is inherent in weather forecasts, explicit numeric uncertainty estimates are rarely included in public forecasts for fear that they will be misunderstood. Of particular concern are situations in which precautionary action is required at low probabilities, often the case with severe events. At present, a categorical weather warning system is used. The work reported here tested the relative benefits of several forecast formats, comparing decisions made with and without uncertainty forecasts. In three experiments, participants assumed the role of a manager of a road maintenance company in charge of deciding whether to pay to salt the roads and avoid a potential penalty associated with icy conditions. Participants used overnight low temperature forecasts accompanied in some conditions by uncertainty estimates and in others by decision advice comparable to categorical warnings. Results suggested that uncertainty information improved decision quality overall and increased trust in the forecast. Participants with uncertainty forecasts took appropriate precautionary action and withheld unnecessary action more often than did participants using deterministic forecasts. When error in the forecast increased, participants with conventional forecasts were reluctant to act. However, this effect was attenuated by uncertainty forecasts. Providing categorical decision advice alone did not improve decisions. However, combining decision advice with uncertainty estimates resulted in the best performance overall. The results reported here have important implications for the development of forecast formats to increase compliance with severe weather warnings as well as other domains in which one must act in the face of uncertainty.
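
    The decision task described above lends itself to a simple expected-cost rule: treat the roads whenever the probability of freezing times the penalty for untreated ice exceeds the cost of salting. The numbers in the sketch below are hypothetical stand-ins, not the payoffs used in the experiments.

    ```python
    # Sketch of the expected-cost logic behind the road-salting task (hypothetical
    # numbers, not the payoffs used in the experiments): with an explicit freezing
    # probability, the rational rule is to salt whenever the expected penalty of
    # doing nothing exceeds the cost of salting.

    SALTING_COST = 1000.0   # hypothetical cost of treating the roads
    ICY_PENALTY = 6000.0    # hypothetical penalty if untreated roads ice over

    def should_salt(p_freeze: float) -> bool:
        """Salt if the expected penalty of inaction exceeds the cost of salting."""
        return p_freeze * ICY_PENALTY > SALTING_COST

    for p in (0.05, 0.10, 0.20, 0.40):
        action = "salt" if should_salt(p) else "do not salt"
        print(f"P(freeze) = {p:.2f}: expected penalty = {p * ICY_PENALTY:6.0f} -> {action}")
    ```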

  7. Uncertainty forecasts improve weather-related decisions and attenuate the effects of forecast error.

    PubMed

    Joslyn, Susan L; LeClerc, Jared E

    2012-03-01

    Although uncertainty is inherent in weather forecasts, explicit numeric uncertainty estimates are rarely included in public forecasts for fear that they will be misunderstood. Of particular concern are situations in which precautionary action is required at low probabilities, often the case with severe events. At present, a categorical weather warning system is used. The work reported here tested the relative benefits of several forecast formats, comparing decisions made with and without uncertainty forecasts. In three experiments, participants assumed the role of a manager of a road maintenance company in charge of deciding whether to pay to salt the roads and avoid a potential penalty associated with icy conditions. Participants used overnight low temperature forecasts accompanied in some conditions by uncertainty estimates and in others by decision advice comparable to categorical warnings. Results suggested that uncertainty information improved decision quality overall and increased trust in the forecast. Participants with uncertainty forecasts took appropriate precautionary action and withheld unnecessary action more often than did participants using deterministic forecasts. When error in the forecast increased, participants with conventional forecasts were reluctant to act. However, this effect was attenuated by uncertainty forecasts. Providing categorical decision advice alone did not improve decisions. However, combining decision advice with uncertainty estimates resulted in the best performance overall. The results reported here have important implications for the development of forecast formats to increase compliance with severe weather warnings as well as other domains in which one must act in the face of uncertainty. PMID:21875244

  8. Data on simulated interpersonal touch, individual differences and the error-related negativity

    PubMed Central

    Tjew-A-Sin, Mandy; Tops, Mattie; Heslenfeld, Dirk J.; Koole, Sander L.

    2016-01-01

    The dataset includes data from the electroencephalogram study reported in our paper: ‘Effects of simulated interpersonal touch and trait intrinsic motivation on the error-related negativity’ (doi:10.1016/j.neulet.2016.01.044) (Tjew-A-Sin et al., 2016) [1]. The data was collected at the psychology laboratories at the Vrije Universiteit Amsterdam in 2012 among a Dutch-speaking student sample. The dataset consists of the measures described in the paper, as well as additional (exploratory) measures including the Five-Factor Personality Inventory, the Connectedness to Nature Scale, the Rosenberg Self-esteem Scale and a scale measuring life stress. The data can be used for replication purposes, meta-analyses, and exploratory analyses, as well as cross-cultural comparisons of touch and/or ERN effects. The authors also welcome collaborative research based on re-analyses of the data. The data described is available at a data repository called the DANS archive: http://persistent-identifier.nl/?identifier=urn:nbn:nl:ui:13-tzbk-gg. PMID:27158644

  9. Hysteresis and Related Error Mechanisms in the NIST Watt Balance Experiment

    PubMed Central

    Schwarz, Joshua P.; Liu, Ruimin; Newell, David B.; Steiner, Richard L.; Williams, Edwin R.; Smith, Douglas; Erdemir, Ali; Woodford, John

    2001-01-01

    The NIST watt balance experiment is being completely rebuilt after its 1998 determination of the Planck constant. That measurement yielded a result with an approximately 1×10−7 relative standard uncertainty. Because the goal of the new incarnation of the experiment is a ten-fold decrease in uncertainty, it has been necessary to reexamine many sources of systematic error. Hysteresis effects account for a substantial portion of the projected uncertainty budget. They arise from mechanical, magnetic, and thermal sources. The new experiment incorporates several improvements in the apparatus to address these issues, including stiffer components for transferring the mass standard on and off the balance, better servo control of the balance, better pivot materials, and the incorporation of erasing techniques into the mass transfer servo system. We have carried out a series of tests of hysteresis sources on a separate system, and apply their results to the watt apparatus. The studies presented here suggest that our improvements can be expected to reduce hysteresis signals by at least a factor of 10—perhaps as much as a factor of 50—over the 1998 experiment. PMID:27500039

  10. Research concerning the geophysical causes and measurement accuracies related to the irregularities in the rotation of the earth

    NASA Technical Reports Server (NTRS)

    Currie, D. G.

    1978-01-01

    The primary objective of this effort consisted of a detailed study of the history of the motion of the moon. Several analyses were developed which are related to the determination of the effect of various refractive phenomena on the accuracy of measurements made through the earth's atmosphere.

  11. Children's Age-Related Speed--Accuracy Strategies in Intercepting Moving Targets in Two Dimensions

    ERIC Educational Resources Information Center

    Rothenberg-Cunningham, Alek; Newell, Karl M.

    2013-01-01

    Purpose: This study investigated the age-related speed--accuracy strategies of children, adolescents, and adults in performing a rapid striking task that allowed the self-selection of the interception position in a virtual, two-dimensional environment. Method: The moving target had curvilinear trajectories that were determined by combinations of…

  12. Maternal Accuracy and Behavior in Anticipating Children's Responses to Novelty: Relations to Fearful Temperament and Implications for Anxiety Development

    ERIC Educational Resources Information Center

    Kiel, Elizabeth J.; Buss, Kristin A.

    2010-01-01

    Previous research has suggested that mothers' behaviors may serve as a mechanism in the development from toddler fearful temperament to childhood anxiety. The current study examined the maternal characteristic of accuracy in predicting toddlers' distress reactions to novelty in relation to temperament, parenting, and anxiety development.…

  13. Soil maps as data input for soil erosion models: errors related to map scales

    NASA Astrophysics Data System (ADS)

    van Dijk, Paul; Sauter, Joëlle; Hofstetter, Elodie

    2010-05-01

    Soil erosion rates depend in many ways on soil and soil surface characteristics which vary in space and in time. To account for spatial variations of soil features, most distributed soil erosion models require data input derived from soil maps. Ideally, the level of spatial detail contained in the applied soil map should correspond to the objective of the modelling study. However, the model user often has only one soil map available, which is then applied without questioning its suitability. The present study seeks to determine to what extent soil map scale can be a source of error in erosion model output. The study was conducted on two different spatial scales, each with a suitable soil erosion model: a) the catchment scale using the physically-based Limbourg Soil Erosion Model (LISEM), and b) the regional scale using the decision-tree expert model MESALES. The suitability of the applied soil map was evaluated with respect to an imaginary though realistic study objective for both models: the definition of erosion control measures at strategic locations at the catchment scale; the identification of target areas for the definition of control-measure strategies at the regional scale. Two catchments were selected to test the sensitivity of LISEM to the spatial detail contained in soil maps: one catchment with relatively little contrast in soil texture, dominated by loess-derived soil (south of the Alsace), and one catchment with strongly contrasted soils at the limit between the Alsatian piedmont and the loess-covered hills of the Kochersberg. LISEM was run for both catchments using different soil maps ranging in scale from 1/25 000 to 1/100 000 to derive soil-related input parameters. The comparison of the output differences was used to quantify the impact of map scale on the quality of the model output. The sensitivity of MESALES was tested on the Haut-Rhin county, for which two soil maps are available for comparison: 1/50 000 and 1/100 000. The order of

  14. Error-related brain activity in the age of RDoC: A review of the literature.

    PubMed

    Weinberg, Anna; Dieterich, Raoul; Riesel, Anja

    2015-11-01

    The ability to detect and respond to errors is critical to successful adaptation to a changing environment. The error-related negativity (ERN), an event-related potential (ERP) component, is a well-validated neural response to errors and reflects the error monitoring activity of the anterior cingulate cortex (ACC). Additionally, the ERN is implicated in several processes key to adaptive functioning. Abnormalities in error-related brain activity have been linked to multiple forms of psychopathology and individual differences. As such, the component is likely to be useful in NIMH's Research Domain Criteria (RDoC) initiative to establish biologically-meaningful dimensions of psychological dysfunction, and currently appears as a unit of measurement in three RDoC domains: Positive Valence Systems, Negative Valence Systems, and Cognitive Systems. In this review paper, we introduce the ERN and discuss evidence related to its psychometric properties, as well as important task differences. Following this, we discuss evidence linking the ERN to clinically diverse forms of psychopathology, as well as the implications of one unit of measurement appearing in multiple RDoC dimensions. And finally, we discuss important future directions, as well as research pathways by which the ERN might be leveraged to track the ways in which dysfunctions in multiple neural systems interact to influence psychological well-being.

  15. Increased error-related thalamic activity during early compared to late cocaine abstinence

    PubMed Central

    Li, Chiang-shan R.; Luo, Xi; Sinha, Rajita; Rounsaville, Bruce J.; Carroll, Kathleen M.; Malison, Robert T.; Ding, Yu-Shin; Zhang, Sheng; Ide, Jaime S.

    2010-01-01

    Altered cognitive control is implicated in the shaping of cocaine dependence. One of the key component processes of cognitive control is error monitoring. Our previous imaging work highlighted greater activity in distinct cortical and subcortical regions including the dorsal anterior cingulate cortex (dACC), thalamus and insula when participants committed an error during the stop signal task (Li et al., 2008b). Importantly, dACC, thalamic and insular activity has been associated with drug craving. One hypothesis is that the intense interoceptive activity during craving prevents these cerebral structures from adequately registering error and/or monitoring performance. Alternatively, the dACC, thalamus and insula show abnormally heightened responses to performance errors, suggesting that excessive responses to salient stimuli such as drug cues could precipitate craving. The two hypotheses would each predict decreased and increased activity during stop error (SE) as compared to stop success (SS) trials in the SST. Here we showed that cocaine dependent patients (PCD) experienced greater subjective feeling of loss of control and cocaine craving during early (average of day 6) compared to late (average of day 18) abstinence. Furthermore, compared to PCD during late abstinence, PCD scanned during early abstinence showed increased thalamic as well as insular but not dACC responses to errors (SE>SS). These findings support the hypothesis that heightened thalamic reactivity to salient stimuli co-occur with cocaine craving and loss of self control. PMID:20163923

  16. Comparison of Diagnostic Accuracy of Clinical Measures of Breast Cancer–Related Lymphedema: Area Under the Curve

    PubMed Central

    Smoot, Betty J.; Wong, Josephine F.; Dodd, Marylin J.

    2013-01-01

    Objective: To compare the diagnostic accuracy of measures of breast cancer–related lymphedema (BCRL). Design: Cross-sectional design comparing clinical measures with the criterion standard of previous diagnosis of BCRL. Setting: University of California San Francisco Translational Science Clinical Research Center. Participants: Women older than 18 years and more than 6 months posttreatment for breast cancer (n=141; 70 with BCRL, 71 without BCRL). Interventions: Not applicable. Main Outcome Measures: Sensitivity, specificity, the receiver operating characteristic (ROC) curve, and the area under the curve (AUC) were used to evaluate accuracy. Results: A total of 141 women were categorized as having (n=70) or not having (n=71) BCRL based on past diagnosis by a health care provider, which was used as the reference standard. Analyses of ROC curves for the continuous outcomes yielded AUCs of .68 to .88 (P<.001); of the physical measures, bioimpedance spectroscopy yielded the highest accuracy with an AUC of .88 (95% confidence interval, .80–.96) for women whose dominant arm was the affected arm. The lowest accuracy was found using the 2-cm diagnostic cutoff score to identify previously diagnosed BCRL (AUC, .54–.65). Conclusions: Our findings support the use of bioimpedance spectroscopy in the assessment of existing BCRL. Refining diagnostic cutoff values may improve accuracy of diagnosis and warrants further investigation. PMID:21440706
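
    As a hedged illustration of the AUC comparison described above, the sketch below scores two synthetic continuous measures against a binary BCRL label using scikit-learn. The data are randomly generated stand-ins, not the study's measurements, and the "strong"/"weak" labels only mimic the reported contrast between bioimpedance spectroscopy and the 2-cm cutoff.

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    # Sketch of an AUC comparison with synthetic data in place of the study's
    # measurements. `y` marks previously diagnosed BCRL (1) versus no BCRL (0);
    # each continuous "measure" is scored by the area under its ROC curve. The
    # "strong"/"weak" measures only mimic the reported contrast between
    # bioimpedance spectroscopy and a circumference-based cutoff score.

    rng = np.random.default_rng(1)
    n_cases, n_controls = 70, 71
    y = np.concatenate([np.ones(n_cases), np.zeros(n_controls)])

    strong_measure = np.concatenate([rng.normal(1.5, 1.0, n_cases),
                                     rng.normal(0.0, 1.0, n_controls)])
    weak_measure = np.concatenate([rng.normal(0.3, 1.0, n_cases),
                                   rng.normal(0.0, 1.0, n_controls)])

    for name, score in [("strong (BIS-like)", strong_measure),
                        ("weak (2-cm-like)", weak_measure)]:
        print(f"{name:18s} AUC = {roc_auc_score(y, score):.2f}")
    ```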

  17. Motoneuron axon pathfinding errors in zebrafish: Differential effects related to concentration and timing of nicotine exposure

    SciTech Connect

    Menelaou, Evdokia; Paul, Latoya T.; Perera, Surangi N.; Svoboda, Kurt R.

    2015-04-01

    Nicotine exposure during embryonic stages of development can affect many neurodevelopmental processes. In the developing zebrafish, exposure to nicotine was reported to cause axonal pathfinding errors in the later-born secondary motoneurons (SMNs). These alterations in SMN axon morphology coincided with muscle degeneration at high nicotine concentrations (15–30 μM). Previous work showed that the paralytic mutant zebrafish known as sofa potato exhibited nicotine-induced effects on SMN axons at these high concentrations but in the absence of any muscle deficits, indicating that pathfinding errors could occur independently of muscle effects. In this study, we used varying concentrations of nicotine at different developmental windows of exposure to specifically isolate its effects on subpopulations of motoneuron axons. We found that nicotine exposure can affect SMN axon morphology in a dose-dependent manner. At low concentrations of nicotine, SMN axons exhibited pathfinding errors, in the absence of any nicotine-induced muscle abnormalities. Moreover, the nicotine exposure paradigms used affected the 3 subpopulations of SMN axons differently, but the dorsal-projecting SMN axons were primarily affected. We then identified morphologically distinct pathfinding errors that best described the nicotine-induced effects on dorsal-projecting SMN axons. To test whether SMN pathfinding was potentially influenced by alterations in the early-born primary motoneuron (PMN), we performed dual labeling studies, where both PMN and SMN axons were simultaneously labeled with antibodies. We show that only a subset of the SMN axon pathfinding errors coincided with abnormal PMN axonal targeting in nicotine-exposed zebrafish. We conclude that nicotine exposure can exert differential effects depending on the level of nicotine and the developmental exposure window. Highlights: Embryonic nicotine exposure can specifically affect secondary motoneuron axons in a dose-dependent manner.

  18. [Diagnostic Errors in Medicine].

    PubMed

    Buser, Claudia; Bankova, Andriyana

    2015-12-01

    The recognition of diagnostic errors in everyday practice can help improve patient safety. The most common diagnostic errors are the cognitive errors, followed by system-related errors and no fault errors. The cognitive errors often result from mental shortcuts, known as heuristics. The rate of cognitive errors can be reduced by a better understanding of heuristics and the use of checklists. The autopsy as a retrospective quality assessment of clinical diagnosis has a crucial role in learning from diagnostic errors. Diagnostic errors occur more often in primary care in comparison to hospital settings. On the other hand, the inpatient errors are more severe than the outpatient errors.

  19. [Diagnostic Errors in Medicine].

    PubMed

    Buser, Claudia; Bankova, Andriyana

    2015-12-01

    The recognition of diagnostic errors in everyday practice can help improve patient safety. The most common diagnostic errors are the cognitive errors, followed by system-related errors and no fault errors. The cognitive errors often result from mental shortcuts, known as heuristics. The rate of cognitive errors can be reduced by a better understanding of heuristics and the use of checklists. The autopsy as a retrospective quality assessment of clinical diagnosis has a crucial role in learning from diagnostic errors. Diagnostic errors occur more often in primary care in comparison to hospital settings. On the other hand, the inpatient errors are more severe than the outpatient errors. PMID:26649954

  20. Matching post-Newtonian and numerical relativity waveforms: Systematic errors and a new phenomenological model for nonprecessing black hole binaries

    SciTech Connect

    Santamaria, L.; Ohme, F.; Dorband, N.; Moesta, P.; Robinson, E. L.; Krishnan, B.; Ajith, P.; Bruegmann, B.; Hannam, M.; Husa, S.; Pollney, D.; Reisswig, C.; Seiler, J.

    2010-09-15

    We present a new phenomenological gravitational waveform model for the inspiral and coalescence of nonprecessing spinning black hole binaries. Our approach is based on a frequency-domain matching of post-Newtonian inspiral waveforms with numerical-relativity-based binary black hole coalescence waveforms. We quantify the various possible sources of systematic errors that arise in matching post-Newtonian and numerical relativity waveforms, and we use a matching criterion based on minimizing these errors; we find that the dominant errors are those in the post-Newtonian waveforms near the merger. An analytical formula for the dominant mode of the gravitational radiation of nonprecessing black hole binaries is presented that captures the phenomenology of the hybrid waveforms. Its implementation in the current searches for gravitational waves should allow cross-checks of other inspiral-merger-ringdown waveform families and improve the reach of gravitational-wave searches.

  1. On the validity of 3D polymer gel dosimetry: III. MRI-related error sources

    NASA Astrophysics Data System (ADS)

    Vandecasteele, Jan; De Deene, Yves

    2013-01-01

    In MRI (PAGAT) polymer gel dosimetry, there is some controversy about the validity of 3D dose verifications of clinical treatments. The relative contribution of important sources of uncertainty in MR scanning to the overall accuracy and precision of 3D MRI polymer gel dosimetry is quantified in this study. The performance in terms of signal-to-noise and imaging artefacts was evaluated on three different MR scanners (two 1.5 T and a 3 T scanner). These sources include: (1) B0-field inhomogeneity, (2) B1-field inhomogeneity, (3) dielectric effects (losses and standing waves) and (4) temperature inhomogeneity during scanning. B0-field inhomogeneities that amount to a maximum of 5 ppm result in dose deviations of up to 4.3% and deformations of up to 5 pixels. Compensation methods are proposed. B1-field inhomogeneities were found to induce R2 variations in large anthropomorphic phantoms both at 1.5 and 3 T. At 1.5 T these effects are mainly caused by the coil geometry, resulting in dose deviations of up to 25%. After the correction of the R2 maps using a heuristic flip angle-R2 relation, these dose deviations are reduced to 2.4%. At 3 T, the dielectric properties of the gel phantoms are shown to strongly influence B1-field homogeneity, hence R2 homogeneity, especially of large anthropomorphic phantoms. The low electrical conductivity of polymer gel dosimeters induces standing wave patterns resulting in dose deviations up to 50%. Increasing the conductivity of the gel by adding NaCl reduces the dose deviation to 25%, after which the post-processing is successful in reducing the remaining inhomogeneities caused by the coil geometry to within 2.4%. The measurements are supported by computational modelling of the B1-field. Finally, temperature fluctuations of 1 °C frequently encountered in clinical MRI scanners result in dose deviations up to 15%. It is illustrated that with adequate temperature stabilization, the dose uncertainty is reduced to within 2.58%.

  2. General Relativity Accuracy Test (TEPEE/GReAT): new configuration for the differential accelerometer

    NASA Astrophysics Data System (ADS)

    Iafolla, V.; Lucchesi, D.; Nozzoli, S.; Santoli, F.; Shapiro, I. I.; Lorenzioni, E. C.; Cosmo, M. L.; Ashenberg, J.; Cheimets, P. N.; Glashow, S.; GReAT

    A key component of experiments to test the Weak Equivalence Principle (WEP), or the universality of free fall, is the differential acceleration detector. The detector must have a very high accuracy in measuring the difference of acceleration, possibly caused by a violation of the Equivalence Principle, acting on a pair of proof masses of different materials. At the same time, the detector must be able to reject common-mode external accelerations and gravity gradients. In this paper, we report the progress in the development of a differential accelerometer that must be able to test the WEP with an accuracy of several parts in 10^15. The detector will be released to free fall inside an evacuated capsule (Einstein elevator) which has been previously dropped from a stratospheric balloon. In order to reach the accuracy goal of the experiment, the accelerometer must attain a sensitivity close to 10^-14 g/Hz^1/2 in a 25 s integration time. The free-fall time is determined by the time that the detector takes to span the co-falling capsule. The detector will be slowly rotated about a horizontal axis to modulate the gravity signal and then released inside the capsule, immediately after the capsule's release from the balloon. First, we describe briefly the overall experiment. Then, we present experimental results obtained with a differential accelerometer prototype, by stressing experimental tests of the sensitivity of the accelerometer read-out system with very-weak signals. Due to the fact that the accelerometer has a resonant frequency as low as 3 Hz and because of the difficulty to attenuate the external noise at such low frequencies, we have carried out acceleration measurements in the laboratory in a region of the seismic noise spectrum, i.e., at frequencies around 10^-1 Hz, where the noise is very low. In addition, we have exploited the ability of the sensor to reject common-mode noise components. Finally, we present a new configuration of the differential

  3. Plasma components affect accuracy of circulating cancer-related microRNA quantitation.

    PubMed

    Kim, Dong-Ja; Linnstaedt, Sarah; Palma, Jaime; Park, Joon Cheol; Ntrivalas, Evangelos; Kwak-Kim, Joanne Y H; Gilman-Sachs, Alice; Beaman, Kenneth; Hastings, Michelle L; Martin, Jeffrey N; Duelli, Dominik M

    2012-01-01

    Circulating microRNAs (miRNAs) have emerged as candidate biomarkers of various diseases and conditions including malignancy and pregnancy. This approach requires sensitive and accurate quantitation of miRNA concentrations in body fluids. Herein we report that enzyme-based miRNA quantitation, which is currently the mainstream approach for identifying differences in miRNA abundance among samples, is skewed by endogenous serum factors that co-purify with miRNAs and anticoagulant agents used during collection. Of importance, different miRNAs were affected to varying extent among patient samples. By developing measures to overcome these interfering activities, we increased the accuracy, and improved the sensitivity of miRNA detection up to 30-fold. Overall, the present study outlines key factors that prevent accurate miRNA quantitation in body fluids and provides approaches that enable faithful quantitation of miRNA abundance in body fluids. PMID:22154918

  4. ERN and the Placebo: A Misattribution Approach to Studying the Arousal Properties of the Error-Related Negativity

    ERIC Educational Resources Information Center

    Inzlicht, Michael; Al-Khindi, Timour

    2012-01-01

    Performance monitoring in the anterior cingulate cortex (ACC) has largely been viewed as a cognitive, computational process devoid of emotion. A growing body of research, however, suggests that performance is moderated by motivational engagement and that a signal generated by the ACC, the error-related negativity (ERN), may partially reflect a…

  5. The Relation between Content and Structure in Language Production: An Analysis of Speech Errors in Semantic Dementia

    ERIC Educational Resources Information Center

    Meteyard, Lotte; Patterson, Karalyn

    2009-01-01

    In order to explore the impact of a degraded semantic system on the structure of language production, we analysed transcripts from autobiographical memory interviews to identify naturally-occurring speech errors by eight patients with semantic dementia (SD) and eight age-matched normal speakers. Relative to controls, patients were significantly…

  6. A highly efficient error analysis program for the evaluation of spacecraft tests of general relativity with application to solar probes

    NASA Technical Reports Server (NTRS)

    Anderson, J. D.; Lau, E. K.; Georgevic, R. M.

    1973-01-01

    A computer program is described which can be used to study the feasibility of conducting relativity experiments on a wide range of hypothetical space missions, and a few applications are presented for solar probes which approach the Sun within 0.25 to 0.35 AU. It is assumed that radio ranging data are available from these spacecraft, and that accuracies on the order of 15 meters can be achieved. This is compatible with current accuracies of ranging to Mariner spacecraft. At this level of accuracy, the range data are sensitive to a number of effects, and for this reason it has been necessary to include a total of up to 23 parameters in the feasibility studies, even though there are only two parameters of real interest in the relativity experiments.

  7. Individual Differences in Working Memory Capacity Predict Action Monitoring and the Error-Related Negativity

    ERIC Educational Resources Information Center

    Miller, A. Eve; Watson, Jason M.; Strayer, David L.

    2012-01-01

    Neuroscience suggests that the anterior cingulate cortex (ACC) is responsible for conflict monitoring and the detection of errors in cognitive tasks, thereby contributing to the implementation of attentional control. Though individual differences in frontally mediated goal maintenance have clearly been shown to influence outward behavior in…

  8. GReAT (General Relativity Accuracy Test): a free fall test of Weak Equivalence Principle from stratospheric balloon altitude .

    NASA Astrophysics Data System (ADS)

    Iafolla, V.; Fiorenza, E.; Lefevre, C.; Nozzoli, S.; Peron, R.; Persichini, M.; Reale, A.; Santoli, F.; Lorenzini, E. C.; Shapiro, I. I.; Ashenberg, J.; Bombardelli, C.; Glashow, S.

    GReAT (General Relativity Accuracy Test) is a free fall experiment from stratospheric balloon altitude to test the Weak Equivalence Principle (WEP) with an accuracy of 5 × 10^-15. The key components of the experiment are a very high accuracy (sensitivity close to 10^-14 g/Hz^1/2 in a 25-s integration time) differential acceleration detector to detect a possible violation of the WEP and the facility necessary to perform the experiment. The detector will be released to free fall inside an evacuated capsule (Einstein elevator) which has been previously dropped from a stratospheric balloon, and will be slowly rotated about a horizontal axis to modulate the gravity signal and then released inside the capsule, immediately after the capsule's release from the balloon. In this paper, we report the progress in the development of the differential accelerometer that must be able to test the WEP with the declared accuracy. Following a brief description of the overall experiment, we present experimental results obtained with a differential accelerometer prototype, in particular the ability of the sensor to reject common-mode noise components. Finally, we present a new configuration of the differential accelerometer which is less sensitive to higher-order mass moments generated by nearby masses.

  9. Error-related brain activity in youth and young adults before and after treatment for generalized or social anxiety disorder.

    PubMed

    Kujawa, Autumn; Weinberg, Anna; Bunford, Nora; Fitzgerald, Kate D; Hanna, Gregory L; Monk, Christopher S; Kennedy, Amy E; Klumpp, Heide; Hajcak, Greg; Phan, K Luan

    2016-11-01

    Increased error monitoring, as measured by the error-related negativity (ERN), has been shown to persist after treatment for obsessive-compulsive disorder in youth and adults; however, no previous studies have examined the ERN following treatment for related anxiety disorders. We used a flanker task to elicit the ERN in 28 youth and young adults (8-26 years old) with primary diagnoses of generalized anxiety disorder (GAD) or social anxiety disorder (SAD) and 35 healthy controls. Patients were assessed before and after treatment with cognitive-behavioral therapy (CBT) or selective serotonin reuptake inhibitors (SSRI), and healthy controls were assessed at a comparable interval. The ERN increased across assessments in the combined sample. Patients with SAD exhibited an enhanced ERN relative to healthy controls prior to and following treatment, even when analyses were limited to SAD patients who responded to treatment. Patients with GAD did not significantly differ from healthy controls at either assessment. Results provide preliminary evidence that enhanced error monitoring persists following treatment for SAD in youth and young adults, and support conceptualizations of increased error monitoring as a trait-like vulnerability that may contribute to risk for recurrence and impaired functioning later in life. Future work is needed to further evaluate the ERN in GAD across development, including whether an enhanced ERN develops in adulthood or is most apparent when worries focus on internal sources of threat.

  10. Error-related brain activity in youth and young adults before and after treatment for generalized or social anxiety disorder.

    PubMed

    Kujawa, Autumn; Weinberg, Anna; Bunford, Nora; Fitzgerald, Kate D; Hanna, Gregory L; Monk, Christopher S; Kennedy, Amy E; Klumpp, Heide; Hajcak, Greg; Phan, K Luan

    2016-11-01

    Increased error monitoring, as measured by the error-related negativity (ERN), has been shown to persist after treatment for obsessive-compulsive disorder in youth and adults; however, no previous studies have examined the ERN following treatment for related anxiety disorders. We used a flanker task to elicit the ERN in 28 youth and young adults (8-26 years old) with primary diagnoses of generalized anxiety disorder (GAD) or social anxiety disorder (SAD) and 35 healthy controls. Patients were assessed before and after treatment with cognitive-behavioral therapy (CBT) or selective serotonin reuptake inhibitors (SSRI), and healthy controls were assessed at a comparable interval. The ERN increased across assessments in the combined sample. Patients with SAD exhibited an enhanced ERN relative to healthy controls prior to and following treatment, even when analyses were limited to SAD patients who responded to treatment. Patients with GAD did not significantly differ from healthy controls at either assessment. Results provide preliminary evidence that enhanced error monitoring persists following treatment for SAD in youth and young adults, and support conceptualizations of increased error monitoring as a trait-like vulnerability that may contribute to risk for recurrence and impaired functioning later in life. Future work is needed to further evaluate the ERN in GAD across development, including whether an enhanced ERN develops in adulthood or is most apparent when worries focus on internal sources of threat. PMID:27495356

  11. An Error-Related Negativity Potential Investigation of Response Monitoring Function in Individuals with Internet Addiction Disorder

    PubMed Central

    Zhou, Zhenhe; Li, Cui; Zhu, Hongmei

    2013-01-01

    Internet addiction disorder (IAD) is an impulse-control disorder, or at least related to one. Deficits in executive functioning, including response monitoring, have been proposed as a hallmark feature of impulse-control disorders. The error-related negativity (ERN) reflects an individual’s ability to monitor behavior. Because IAD belongs to the compulsive-impulsive spectrum, it should theoretically show the response-monitoring deficits characteristic of disorders such as substance dependence, ADHD, or alcohol abuse when tested with an Eriksen flanker task. To date, no studies of response-monitoring deficits in IAD have been reported. The purpose of the present study was to examine whether IAD displays response-monitoring deficits in a modified Eriksen flanker task. Twenty-three subjects were recruited as the IAD group, and twenty-three age-, gender-, and education-matched healthy persons were recruited as the control group. All participants completed the modified Eriksen flanker task while event-related potentials were recorded. The IAD group made more total errors than controls (p < 0.01), and reaction times for error responses were shorter in the IAD group than in controls (p < 0.01). The mean ERN amplitudes for error responses at frontal and central electrode sites were reduced in the IAD group compared with the control group (all p < 0.01). These results indicate that IAD displays response-monitoring deficits and shares ERN characteristics with compulsive-impulsive spectrum disorders. PMID:24093009

  12. Determination of parameters and research autoreflection scheme to measurement errors relative position of the optical elements of the Space Telescope

    NASA Astrophysics Data System (ADS)

    Molev, Fedor; Konyakhin, Igor; Ezhova, Kseniia

    2014-05-01

    This paper describes the main advantages and disadvantages of using autoreflection and autocollimation schemes to construct the measuring channel designed to monitor the relative position of the elements of a space telescope's optical system. Results of modeling in the Zemax software package are given. Methods of determining the coordinates of autocollimation images in order to calculate the error in the relative position of the optical elements are described.

  13. Low relative error in consumer-grade GPS units make them ideal for measuring small-scale animal movement patterns

    PubMed Central

    Severns, Paul M.

    2015-01-01

    Consumer-grade GPS units are a staple of modern field ecology, but the relatively large error radii reported by manufacturers (up to 10 m) ostensibly preclude their utility in measuring fine-scale movement of small animals such as insects. Here we demonstrate that for data collected at fine spatio-temporal scales, these devices can produce exceptionally accurate data on step-length and movement patterns of small animals. With an understanding of the properties of GPS error and how it arises, it is possible, using a simple field protocol, to use consumer-grade GPS units to collect step-length data for the movement of small animals that introduces a median error as small as 11 cm. These small error rates were measured in controlled observations of real butterfly movement. Similar conclusions were reached using a ground-truth test track prepared with a field tape and compass and subsequently measured 20 times using the same methodology as the butterfly tracking. Median error in the ground-truth track was slightly higher than in the field data, mostly between 20 and 30 cm, but even for the smallest ground-truth step (70 cm), this is still a signal-to-noise ratio of 3:1, and for steps of 3 m or more, the ratio is greater than 10:1. Such small errors relative to the movements being measured make these inexpensive units useful for measuring insect and other small animal movements on small to intermediate scales with budgets orders of magnitude lower than the survey-grade units used in past studies. As an additional advantage, these units are simpler to operate, and insect or other small animal trackways can be collected more quickly than with either survey-grade units or more traditional ruler/grid approaches. PMID:26312190
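
    As a rough check of the signal-to-noise argument above, the ratio is simply step length divided by median positional error; the values below are assumptions chosen to mirror the quoted numbers, not data from the study.

```python
# Signal-to-noise ratio of measured step length to median GPS error.
median_error_m = 0.25          # assumed ground-truth median error, ~20-30 cm

for step_m in (0.70, 3.0):     # smallest ground-truth step and a 3 m step
    snr = step_m / median_error_m
    print(f"step {step_m:.2f} m -> signal-to-noise ratio ~ {snr:.1f}:1")
```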

  14. Investigation of Reversal Errors in Reading in Normal and Poor Readers as Related to Critical Factors in Reading Materials. Final Report.

    ERIC Educational Resources Information Center

    Liberman, Isabelle Y.; Shankweiler, Donald

    Reversals in poor and normal second-grade readers were studied in relation to their whole phonological error pattern in reading real words and nonsense syllables. Error categories included sequence and orientation reversals, other consonants, vowels, and total error. Reversals occurred in quantity only in poor readers, with large individual…

  15. Prospective Relations among Fearful Temperament, Protective Parenting, and Social Withdrawal: The Role of Maternal Accuracy in a Moderated Mediation Framework

    PubMed Central

    Kiel, Elizabeth J.; Buss, Kristin A.

    2011-01-01

    Early social withdrawal and protective parenting predict a host of negative outcomes, warranting examination of their development. Mothers’ accurate anticipation of their toddlers’ fearfulness may facilitate transactional relations between toddler fearful temperament and protective parenting, leading to these outcomes. In the current study, we followed 93 toddlers (42 female; mean age 24.76 months) and their mothers (9% underrepresented racial/ethnic backgrounds) over 3 years. We gathered laboratory observations of fearful temperament, maternal protective behavior, and maternal accuracy during toddlerhood and a multi-method assessment of children’s social withdrawal and mothers’ self-reported protective behavior at kindergarten entry. When mothers displayed higher accuracy, toddler fearful temperament significantly related to concurrent maternal protective behavior and indirectly predicted kindergarten social withdrawal and maternal protective behavior. These results highlight the important role of maternal accuracy in linking fearful temperament and protective parenting, which predict further social withdrawal and protection, and point to toddlerhood as a key period for efforts to prevent anxiety-spectrum outcomes. PMID:21537895

  16. A Method of DTM Construction Based on Quadrangular Irregular Networks and Related Error Analysis

    PubMed Central

    Kang, Mengjun

    2015-01-01

    A new method of DTM construction based on quadrangular irregular networks (QINs) that considers all the original data points and has a topological matrix is presented. A numerical test and a real-world example are used to comparatively analyse the accuracy of QINs against classical interpolation methods and other DTM representation methods, including SPLINE, KRIGING and triangulated irregular networks (TINs). The numerical test finds that the QIN method is the second-most accurate of the four methods. In the real-world example, DTMs are constructed using QINs and the three classical interpolation methods. The results indicate that the QIN method is the most accurate method tested. The difference in accuracy rank seems to be caused by the locations of the data points sampled. Although the QIN method has drawbacks, it is an alternative method for DTM construction. PMID:25996691
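
    The accuracy comparison described above can be illustrated by scoring standard interpolators at withheld check points. The sketch below uses SciPy's built-in methods as stand-ins; the QIN method itself is not implemented here, and the synthetic surface and sample layout are assumptions.

```python
import numpy as np
from scipy.interpolate import griddata

# Compare interpolation methods by RMSE at independent check points.
def surface(x, y):
    # Assumed smooth terrain surface used only to generate test data.
    return 50 + 10 * np.sin(x / 40.0) + 8 * np.cos(y / 30.0)

rng = np.random.default_rng(1)
pts = rng.uniform(0, 200, size=(400, 2))          # sampled data points (x, y)
z = surface(pts[:, 0], pts[:, 1])
chk = rng.uniform(0, 200, size=(100, 2))          # withheld check points
z_true = surface(chk[:, 0], chk[:, 1])

for method in ("nearest", "linear", "cubic"):
    z_hat = griddata(pts, z, chk, method=method)
    rmse = np.sqrt(np.nanmean((z_hat - z_true) ** 2))
    print(f"{method:>7s} interpolation RMSE: {rmse:.3f} m")
```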

  17. Acute low back pain information online: an evaluation of quality, content accuracy and readability of related websites.

    PubMed

    Hendrick, Paul A; Ahmed, Osman H; Bankier, Shane S; Chan, Tze Jieh; Crawford, Sarah A; Ryder, Catherine R; Welsh, Lisa J; Schneiders, Anthony G

    2012-08-01

    The internet is increasingly being used as a source of health information by the general public. Numerous websites provide advice and information on the diagnosis and management of acute low back pain (ALBP); however, the accuracy and utility of this information have yet to be established. The aim of this study was to establish the quality, content and readability of online information relating to the treatment and management of ALBP. The internet was systematically searched using Google search engines from six major English-speaking countries. In addition, relevant national and international low back pain-related professional organisations were also searched. A total of 22 relevant websites were identified. The accuracy of the content of the ALBP information was established using a 13-point guide developed from international guidelines. Website quality was evaluated using the HONcode, and the Flesch-Kincaid Grade Level (FKGL) was used to establish readability. The majority of websites lacked accurate information, resulting in an overall mean content accuracy score of 6.3/17. Only 3 websites had a high content accuracy score (>14/17) along with an acceptable readability score (FKGL 6-8), while the majority of websites provided information that exceeded the recommended reading level for the average person to comprehend. The most accurately reported category was "Education and reassurance" (98%), while information regarding "manipulation" (50%), "massage" (9%) and "exercise" (0%) was amongst the lowest-scoring categories. These results demonstrate the need for more accurate and readable internet-based ALBP information specifically centred on evidence-based guidelines. PMID:22464886
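
    The FKGL used above is a simple formula over word, sentence, and syllable counts. A minimal sketch follows; the syllable counter is a crude heuristic (an assumption) adequate only for illustration.

```python
import re

# Flesch-Kincaid Grade Level:
#   FKGL = 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
def count_syllables(word: str) -> int:
    # Rough heuristic: count groups of consecutive vowels.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def fkgl(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

sample = ("Most acute low back pain settles within a few weeks. "
          "Stay active, avoid bed rest, and use simple pain relief if needed.")
print(f"FKGL: {fkgl(sample):.1f}")   # plainly worded advice scores roughly grade 6-8
```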

  18. Error-related processing following severe traumatic brain injury: An event-related functional magnetic resonance imaging (fMRI) study

    PubMed Central

    Sozda, Christopher N.; Larson, Michael J.; Kaufman, David A.S.; Schmalfuss, Ilona M.; Perlstein, William M.

    2011-01-01

    Continuous monitoring of one’s performance is invaluable for guiding behavior towards successful goal attainment by identifying deficits and strategically adjusting responses when performance is inadequate. In the present study, we exploited the advantages of event-related functional magnetic resonance imaging (fMRI) to examine brain activity associated with error-related processing after severe traumatic brain injury (sTBI). fMRI and behavioral data were acquired while 10 sTBI participants and 12 neurologically-healthy controls performed a task-switching cued-Stroop task. fMRI data were analyzed using a random-effects whole-brain voxel-wise general linear model and planned linear contrasts. Behaviorally, sTBI patients showed greater error-rate interference than neurologically-normal controls. fMRI data revealed that, compared to controls, sTBI patients showed greater magnitude error-related activation in the anterior cingulate cortex (ACC) and an increase in the overall spatial extent of error-related activation across cortical and subcortical regions. Implications for future research and potential limitations in conducting fMRI research in neurologically-impaired populations are discussed, as well as some potential benefits of employing multimodal imaging (e.g., fMRI and event-related potentials) of cognitive control processes in TBI. PMID:21756946

  19. Evaluation of measurement errors of temperature and relative humidity from HOBO data logger under different conditions of exposure to solar radiation.

    PubMed

    da Cunha, Antonio Ribeiro

    2015-05-01

    This study aimed to assess measurements of temperature and relative humidity obtained with a HOBO data logger, under various conditions of exposure to solar radiation, comparing them with those obtained through the use of a temperature/relative humidity probe and a copper-constantan thermocouple psychrometer, which are considered the standards for obtaining such measurements. Data were collected over a 6-day period (from 25 March to 1 April, 2010), during which the equipment was monitored continuously and simultaneously. We employed the following combinations of equipment and conditions: a HOBO data logger in full sunlight; a HOBO data logger shielded within a white plastic cup with windows for air circulation; a HOBO data logger shielded within a gill-type shelter (a prototype multi-plate plastic shelter); a copper-constantan thermocouple psychrometer exposed to natural ventilation and protected from sunlight; and a temperature/relative humidity probe under a commercial, multi-plate radiation shield. Comparisons between the measurements obtained with the various devices were made on the basis of statistical indicators: linear regression, with coefficient of determination; index of agreement; maximum absolute error; and mean absolute error. The prototype multi-plate (gill-type) shelter used to protect the HOBO data logger was found to provide the best protection against the effects of solar radiation on measurements of temperature and relative humidity. The precision and accuracy of a device that measures temperature and relative humidity depend on an efficient shelter that minimizes the interference caused by solar radiation, thereby avoiding erroneous analysis of the data obtained.
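
    The comparison statistics listed above are straightforward to compute for a logger series against a reference series; a minimal sketch follows with short synthetic (assumed) temperature arrays.

```python
import numpy as np

# Linear regression, R^2, Willmott's index of agreement, MAE, and max absolute
# error between a logger and a reference. The arrays are assumed example values.
ref = np.array([24.1, 25.3, 27.8, 30.2, 29.5, 26.4, 24.9])   # reference probe, deg C
hobo = np.array([24.4, 25.9, 28.6, 31.5, 30.3, 26.9, 25.1])  # HOBO logger, deg C

slope, intercept = np.polyfit(ref, hobo, 1)                  # linear regression
r2 = np.corrcoef(ref, hobo)[0, 1] ** 2                       # coefficient of determination
mae = np.mean(np.abs(hobo - ref))                            # mean absolute error
max_ae = np.max(np.abs(hobo - ref))                          # maximum absolute error
d = 1 - np.sum((hobo - ref) ** 2) / np.sum(                  # Willmott's index of agreement
    (np.abs(hobo - ref.mean()) + np.abs(ref - ref.mean())) ** 2)

print(f"slope={slope:.2f} intercept={intercept:.2f} R2={r2:.3f}")
print(f"MAE={mae:.2f} C  max error={max_ae:.2f} C  agreement d={d:.3f}")
```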

  20. NASA hydrogen maser accuracy and stability in relation to world standards

    NASA Technical Reports Server (NTRS)

    Peters, H. E.; Percival, D. B.

    1973-01-01

    Frequency comparisons were made among five NASA hydrogen masers in 1969 and again in 1972 to a precision of one part in 10^13. Frequency comparisons were also made between these masers and the cesium-beam ensembles of several international standards laboratories. The hydrogen maser frequency stabilities as related to IAT were comparable to the frequency stabilities of individual time scales with respect to IAT. The relative frequency variations among the NASA masers, measured after the three-year interval, were (2 ± 2) parts in 10^13. Thus time scales based on hydrogen masers would have excellent long-term stability and uniformity.
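
    For scale, a fractional frequency offset of 2 parts in 10^13 accumulates only a very small time error over the three-year comparison interval; this is a back-of-the-envelope calculation, not a figure from the report.

```python
# Time offset accumulated from a constant fractional frequency offset.
frac_offset = 2e-13
seconds = 3 * 365.25 * 24 * 3600          # three years, in seconds
print(f"accumulated time offset ~ {frac_offset * seconds * 1e6:.0f} microseconds")
```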

  1. Sources of resonance-related errors in capacitance versus voltage measurement systems

    NASA Astrophysics Data System (ADS)

    Polishchuk, Igor; Brown, George; Huff, Howard

    2000-10-01

    A frequency dependence of the capacitance of metal-oxide-semiconductor devices is often observed in wafer-level probe station measurements for frequencies exceeding 100 kHz. It is well established, however, that the true capacitance value in the SiO2 devices biased into accumulation should remain frequency-independent well into the gigahertz range. Consequently, the apparent frequency dependence of the capacitance versus voltage characteristic may be the result of a resonance present in the measurement setup. We present a quantitative analysis, which can be used to identify the sources of error, characterize a measurement system, and improve the precision of the collected data.
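
    One common way such a resonance arises is series lead inductance in the measurement loop. The sketch below illustrates that generic mechanism with assumed component values; it is an illustration of the effect, not the quantitative analysis presented in the paper.

```python
import numpy as np

# Series lead inductance L makes a true capacitance C appear frequency
# dependent: C_apparent = C / (1 - (2*pi*f)^2 * L * C), diverging near the
# series resonance f_res = 1 / (2*pi*sqrt(L*C)). Values are assumptions.
C = 200e-12          # device capacitance, 200 pF
L = 1.0e-6           # series cable/probe inductance, 1 uH

f_res = 1 / (2 * np.pi * np.sqrt(L * C))
for f in (100e3, 1e6, 5e6):
    c_app = C / (1 - (2 * np.pi * f) ** 2 * L * C)
    print(f"f = {f/1e6:5.2f} MHz -> apparent C = {c_app*1e12:7.2f} pF")
print(f"series resonance at {f_res/1e6:.1f} MHz")
```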

  2. Accuracy of radionuclide ventriculography for estimation of left ventricular volume changes and end-systolic pressure-volume relations

    SciTech Connect

    Kronenberg, M.W.; Parrish, M.D.; Jenkins, D.W. Jr.; Sandler, M.P.; Friesinger, G.C.

    1985-11-01

    Estimation of left ventricular end-systolic pressure-volume relations depends on the accurate measurement of small changes in ventricular volume. To study the accuracy of radionuclide ventriculography, paired radionuclide and contrast ventriculograms were obtained in seven dogs during a control period and when blood pressure was increased in increments of 30 mm Hg by phenylephrine infusion. The heart rate was held constant by atropine infusion. The correlation between radionuclide and contrast ventriculography was excellent. The systolic pressure-volume relations were linear for both radionuclide and contrast ventriculography. The mean slope for radionuclide ventriculography was lower than the mean slope for contrast ventriculography; however, the slopes correlated well. The radionuclide-contrast volume relation was compared using background subtraction, attenuation correction, neither of these or both. By each method, radionuclide ventriculography was valid for measuring small changes in left ventricular volume and for defining end-systolic pressure-volume relations.

  3. Accuracy of 1908-1912 high to medium scale cartography of Rome and its surroundings and related georeferencing problems

    NASA Astrophysics Data System (ADS)

    Baiocchi, V.; Lelo, K.

    2009-04-01

    represented details were copied from one map to the other without changing their shape. For the maps in our study, the larger scale requires re-projection. In this context, complex algorithms are needed to resample the raster images, which implies a careful selection of the software package used to perform the georeferencing. The residual errors have been studied in detail, with separate checks for buildings, contour lines and spot heights. This was necessary because preliminary studies on different test areas showed large incompatibilities, suggesting that features such as buildings and roads could have been copied from smaller-scale cartography through simple projection procedures, while the terrain could have been drawn from a dedicated relief survey with the precision and accuracy appropriate to the 1:5000 scale. If we can establish with confidence that the maps were drawn in this way, the height information they contain can be used with greater reliability. The maps georeferenced by cartographic reprojection can provide a valid tool for studying different phenomena, such as the geomorphological variations due to natural and human causes over one century.

  4. Localising the auditory N1m with event-related beamformers: localisation accuracy following bilateral and unilateral stimulation

    PubMed Central

    Gascoyne, Lauren; Furlong, Paul L.; Hillebrand, Arjan; Worthen, Siân F.; Witton, Caroline

    2016-01-01

    The auditory evoked N1m-P2m response complex presents a challenging case for MEG source-modelling, because symmetrical, phase-locked activity occurs in the hemispheres both contralateral and ipsilateral to stimulation. Beamformer methods, in particular, can be susceptible to localisation bias and spurious sources under these conditions. This study explored the accuracy and efficiency of event-related beamformer source models for auditory MEG data under typical experimental conditions: monaural and diotic stimulation; and whole-head beamformer analysis compared to a half-head analysis using only sensors from the hemisphere contralateral to stimulation. Event-related beamformer localisations were also compared with more traditional single-dipole models. At the group level, the event-related beamformer performed equally well as the single-dipole models in terms of accuracy for both the N1m and the P2m, and in terms of efficiency (number of successful source models) for the N1m. The results yielded by the half-head analysis did not differ significantly from those produced by the traditional whole-head analysis. Any localisation bias caused by the presence of correlated sources is minimal in the context of the inter-individual variability in source localisations. In conclusion, event-related beamformers provide a useful alternative to equivalent-current dipole models in localisation of auditory evoked responses. PMID:27545435

  5. Localising the auditory N1m with event-related beamformers: localisation accuracy following bilateral and unilateral stimulation.

    PubMed

    Gascoyne, Lauren; Furlong, Paul L; Hillebrand, Arjan; Worthen, Siân F; Witton, Caroline

    2016-01-01

    The auditory evoked N1m-P2m response complex presents a challenging case for MEG source-modelling, because symmetrical, phase-locked activity occurs in the hemispheres both contralateral and ipsilateral to stimulation. Beamformer methods, in particular, can be susceptible to localisation bias and spurious sources under these conditions. This study explored the accuracy and efficiency of event-related beamformer source models for auditory MEG data under typical experimental conditions: monaural and diotic stimulation; and whole-head beamformer analysis compared to a half-head analysis using only sensors from the hemisphere contralateral to stimulation. Event-related beamformer localisations were also compared with more traditional single-dipole models. At the group level, the event-related beamformer performed equally well as the single-dipole models in terms of accuracy for both the N1m and the P2m, and in terms of efficiency (number of successful source models) for the N1m. The results yielded by the half-head analysis did not differ significantly from those produced by the traditional whole-head analysis. Any localisation bias caused by the presence of correlated sources is minimal in the context of the inter-individual variability in source localisations. In conclusion, event-related beamformers provide a useful alternative to equivalent-current dipole models in localisation of auditory evoked responses. PMID:27545435

  6. Approximating relational observables by absolute quantities: a quantum accuracy-size trade-off

    NASA Astrophysics Data System (ADS)

    Miyadera, Takayuki; Loveridge, Leon; Busch, Paul

    2016-05-01

    The notion that any physical quantity is defined and measured relative to a reference frame is traditionally not explicitly reflected in the theoretical description of physical experiments where, instead, the relevant observables are typically represented as ‘absolute’ quantities. However, the emergence of the resource theory of quantum reference frames as a new branch of quantum information science in recent years has highlighted the need to identify the physical conditions under which a quantum system can serve as a good reference. Here we investigate the conditions under which, in quantum theory, an account in terms of absolute quantities can provide a good approximation of relative quantities. We find that this requires the reference system to be large in a suitable sense.

  7. Medicolegal errors in the ED related to the involuntary confinement of psychiatric patients.

    PubMed

    Reeves, R R; Pinkofsky, H B; Stevens, L

    1998-11-01

    To determine how properly and correctly emergency department (ED) physicians complete the documents required for emergency confinement of psychiatric patients, 1,000 Physician Emergency Certificates filed by ED physicians in the Shreveport, Louisiana, region were reviewed for appropriateness and for correctness of completion based on the applicable state law. Of the Physician Emergency Certificates reviewed, 4.2% were incomplete or inappropriate. The most significant sources of error involved incomplete documentation of the mental status examination and not documenting the specific reason (dangerous to self, dangerous to others, or gravely disabled) for the patient meeting requirements for involuntary confinement. Other errors included confinement for reasons not appropriate for a psychiatric unit. This study suggests that ED physicians should be more cautious and thorough in completing the documents required for emergency confinement of psychiatric patients, so that the physician is less likely to be sued for malpractice or charged with the false imprisonment of such patients, and the patient's civil liberties are protected.

  8. Alaska national hydrography dataset positional accuracy assessment study

    USGS Publications Warehouse

    Arundel, Samantha; Yamamoto, Kristina H.; Constance, Eric; Mantey, Kim; Vinyard-Houx, Jeremy

    2013-01-01

    Initial visual assessments show a wide range in the quality of fit between features in the NHD and these new image sources. No statistical analysis has been performed to actually quantify accuracy. Determining absolute accuracy is cost prohibitive (independent, well-defined test points must be collected), but quantitative analysis of relative positional error is feasible.

  9. Nonlinear Advection Algorithms Applied to Inter-related Tracers: Errors and Implications for Modeling Aerosol-Cloud Interactions

    SciTech Connect

    Ovtchinnikov, Mikhail; Easter, Richard C.

    2009-02-01

    Monotonicity constraints and gradient-preserving flux corrections employed by many advection algorithms used in atmospheric models make these algorithms non-linear. Consequently, any relations among model variables transported separately are not necessarily preserved in such models. These errors cannot be revealed by traditional algorithm testing based on advection of a single tracer. A new type of test is developed and conducted to evaluate the preservation of a sum of several number mixing ratios advected independently of each other, as is the case, for example, in models using bin or sectional representation of aerosol or cloud particle size distributions. The tests show that when three tracers are advected in a 1D uniform constant-velocity flow, local errors in the sum can be on the order of 10%. When cloud-like interactions are allowed among the tracers, errors in the total sum of the three mixing ratios can reach up to 30%. Several approaches to eliminate the error are suggested, all based on advecting the sum as a separate variable and then normalizing the mixing ratios for individual tracers to match the total sum. A simple scalar normalization preserves the total number mixing ratio and positive definiteness of the variables, but the monotonicity constraint for individual tracers is no longer maintained. More involved flux normalization procedures are developed for the flux-based advection algorithms to maintain the monotonicity for individual scalars and their sum.
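
    The simple scalar normalization described above can be sketched in a few lines: advect the sum as its own variable, then rescale the individually advected tracers so they reproduce that sum in every cell. The arrays below are assumed post-advection values, not output of a real flux-limited scheme.

```python
import numpy as np

# Three number mixing ratios after one (nonlinear) advection step, 5 grid cells.
tracers_after_advection = np.array([
    [1.00, 0.82, 0.40, 0.11, 0.02],
    [0.50, 0.44, 0.31, 0.08, 0.01],
    [0.25, 0.21, 0.18, 0.05, 0.01],
])
# The sum advected separately as its own variable (assumed values).
total_after_advection = np.array([1.80, 1.40, 0.95, 0.22, 0.05])

cell_sum = tracers_after_advection.sum(axis=0)
scale = np.where(cell_sum > 0, total_after_advection / cell_sum, 1.0)
tracers_normalized = tracers_after_advection * scale   # per-cell rescaling

print(tracers_normalized.sum(axis=0))   # now equals total_after_advection exactly
```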

  10. Motor imagery, P300 and error-related EEG-based robot arm movement control for rehabilitation purpose.

    PubMed

    Bhattacharyya, Saugat; Konar, Amit; Tibarewala, D N

    2014-12-01

    The paper proposes a novel approach toward EEG-driven position control of a robot arm, utilizing motor imagery, P300 and error-related potentials (ErRP) to align the robot arm with the desired target position. In the proposed scheme, users generate motor imagery signals to control the motion of the robot arm. P300 waveforms are detected when the user intends to stop the motion of the robot on reaching the goal position. The error potentials are employed as a feedback response by the user: on detection of an error, the control system performs the necessary corrections on the robot arm. Here, an AdaBoost-Support Vector Machine (SVM) classifier is used to decode the 4-class motor imagery, and an SVM is used to detect the presence of P300 and ErRP waveforms. The average steady-state error, peak overshoot and settling time obtained for the proposed approach are 0.045, 2.8% and 44 s, respectively, and the average rate of reaching the target is 95%. These results make the proposed control scheme suitable for the design of prosthetics in rehabilitative applications.

  11. Evaluation of the Quantitative Accuracy of 3D Reconstruction of Edentulous Jaw Models with Jaw Relation Based on Reference Point System Alignment

    PubMed Central

    Li, Weiwei; Yuan, Fusong; Lv, Peijun; Wang, Yong; Sun, Yuchun

    2015-01-01

    Objectives To apply contact measurement and reference point system (RPS) alignment techniques to establish a method for 3D reconstruction of edentulous jaw models in centric relation, and to quantitatively evaluate its accuracy. Methods Upper and lower edentulous jaw models were clinically prepared, and 10 pairs of resin cylinders of the same size were adhered to the axial surfaces of the upper and lower models. The occlusal bases and the upper and lower jaw models were installed in the centric relation position. A Faro Edge 1.8 m was used to directly obtain the center points of the base surfaces of the cylinders (contact method). An Activity 880 dental scanner was used to obtain 3D data of the cylinders, and the center points were fitted (fitting method). Three pairs of center points were used to align the virtual model to centric relation, and an observation coordinate system was interactively established. The straight-line distances in the X (horizontal left/right), Y (horizontal anterior/posterior), and Z (vertical) directions between the remaining 7 pairs of center points derived from the contact method and the fitting method were measured and analyzed using a paired t-test. Results The differences in the straight-line distances of the remaining 7 pairs of center points between the two methods were X: 0.074 ± 0.107 mm, Y: 0.168 ± 0.176 mm, and Z: −0.003 ± 0.155 mm. The paired t-test results were X and Z: p > 0.05; Y: p < 0.05. Conclusion By using contact measurement and the reference point system alignment technique, highly accurate reconstruction of the vertical distance and centric relation of a digital edentulous jaw model can be achieved, which meets the design and manufacturing requirements of complete dentures. The error in the horizontal anterior/posterior jaw relation was relatively large. PMID:25659133
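
    The statistical comparison above amounts to paired t-tests on the per-axis differences between the two methods. The sketch below uses assumed difference values, chosen only so their means roughly match the reported figures; they are not the study's measurements.

```python
import numpy as np
from scipy import stats

# Paired comparison per axis: a t-test on the paired differences is equivalent
# to a one-sample t-test of the differences against zero.
diff_x = np.array([0.05, 0.21, -0.04, 0.10, 0.02, 0.18, 0.00])   # mm (assumed)
diff_y = np.array([0.15, 0.30, 0.02, 0.25, 0.10, 0.28, 0.08])    # mm (assumed)
diff_z = np.array([0.10, -0.15, 0.05, -0.20, 0.18, 0.02, -0.02]) # mm (assumed)

for axis, d in (("X", diff_x), ("Y", diff_y), ("Z", diff_z)):
    t, p = stats.ttest_1samp(d, 0.0)
    print(f"{axis}: mean diff = {d.mean():+.3f} mm, t = {t:.2f}, p = {p:.3f}")
```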

  12. Relative efficiency and accuracy of two Navier-Stokes codes for simulating attached transonic flow over wings

    NASA Technical Reports Server (NTRS)

    Bonhaus, Daryl L.; Wornom, Stephen F.

    1991-01-01

    Two codes which solve the 3-D Thin Layer Navier-Stokes (TLNS) equations are used to compute the steady state flow for two test cases representing typical finite wings at transonic conditions. Several grids of C-O topology and varying point densities are used to determine the effects of grid refinement. After a description of each code and test case, standards for determining code efficiency and accuracy are defined and applied to determine the relative performance of the two codes in predicting turbulent transonic wing flows. Comparisons of computed surface pressure distributions with experimental data are made.

  13. Relative efficiency and accuracy of two Navier-Stokes codes for simulating attached transonic flow over wings

    NASA Technical Reports Server (NTRS)

    Bonhaus, Daryl L.; Wornom, Stephen F.

    1990-01-01

    In the present study, two codes which solve the three-dimensional Thin-Layer Navier-Stokes (TLNS) equations are used to compute the steady-state flow for two test cases representing typical finite wings at transonic conditions. Several grids of C-O topology and varying point densities are used. After a description of each code and test case, standards for determining code efficiency and accuracy are defined and applied to determine the relative performance of the two codes in predicting turbulent transonic wing flows. Comparisons of computed surface pressure distributions with experimental data are made.

  14. A high accuracy gyroscope readout test facility for the relativity gyroscope experiment

    NASA Technical Reports Server (NTRS)

    Cabrera, B.; Van Kann, F. J.

    1977-01-01

    An apparatus is under construction for ground-based testing of a gyroscope system to be used in a satellite test of general relativity. The immediate goal is a readout capable of measuring the direction of the gyroscope spin axis to an angular resolution of one arcsecond over a limited range. A combination of SQUID magnetometers and persistent current loops is used to measure the London moment of the electrostatically levitated, spinning superconducting rotor. To obtain a trapped-flux signal in the gyroscope sufficiently smaller than the London moment signal, the apparatus makes use of a new magnetic-field shielding technique for obtaining large superconductor-shielded regions below 10^-7 gauss.

  15. Heat production and error probability relation in Landauer reset at effective temperature

    PubMed Central

    Neri, Igor; López-Suárez, Miquel

    2016-01-01

    The erasure of a classical bit of information is a dissipative process. The minimum heat produced during this operation was theorized by Rolf Landauer in 1961 to be equal to k_B T ln 2 and takes the name of the Landauer limit, Landauer reset or Landauer principle. Despite its fundamental importance, the Landauer limit remained untested experimentally for more than fifty years, until it was recently tested using colloidal particles and magnetic dots. Experimental measurements on other devices, such as micro-mechanical systems or nano-electronic devices, are still missing. Here we show the results obtained by performing the Landauer reset operation in a micro-mechanical system operated at an effective temperature. The measured heat exchange is in accordance with theory, reaching values close to the expected limit. The heat-production data are then correlated with the probability of error in accomplishing the reset operation. PMID:27669898
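
    For reference, the limit quoted above is easy to evaluate numerically; room temperature (T = 300 K) is assumed here.

```python
import math

# Minimum heat to erase one bit: k_B * T * ln(2).
k_B = 1.380649e-23        # Boltzmann constant, J/K
T = 300.0                 # kelvin (assumed)
E = k_B * T * math.log(2)
print(f"Landauer limit at {T:.0f} K: {E:.3e} J  (~{E/1.602e-19:.4f} eV)")
```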

  16. Heat production and error probability relation in Landauer reset at effective temperature

    NASA Astrophysics Data System (ADS)

    Neri, Igor; López-Suárez, Miquel

    2016-09-01

    The erasure of a classical bit of information is a dissipative process. The minimum heat produced during this operation was theorized by Rolf Landauer in 1961 to be equal to k_B T ln 2 and takes the name of the Landauer limit, Landauer reset or Landauer principle. Despite its fundamental importance, the Landauer limit remained untested experimentally for more than fifty years, until it was recently tested using colloidal particles and magnetic dots. Experimental measurements on other devices, such as micro-mechanical systems or nano-electronic devices, are still missing. Here we show the results obtained by performing the Landauer reset operation in a micro-mechanical system operated at an effective temperature. The measured heat exchange is in accordance with theory, reaching values close to the expected limit. The heat-production data are then correlated with the probability of error in accomplishing the reset operation.

  17. Reducing Individual Variation for fMRI Studies in Children by Minimizing Template Related Errors.

    PubMed

    Weng, Jian; Dong, Shanshan; He, Hongjian; Chen, Feiyan; Peng, Xiaogang

    2015-01-01

    Spatial normalization is an essential process for group comparisons in functional MRI studies. In practice, there is a risk of normalization errors, particularly in studies involving children, seniors, or diseased populations and in regions with high individual variation. One way to minimize normalization errors is to create a study-specific template based on a large sample size. However, studies with a large sample size are not always feasible, particularly for studies in children. The performance of templates with a small sample size has not been evaluated in fMRI studies in children. In the current study, this issue was encountered in a working memory task with 29 children in two groups. We compared the performance of different templates: a study-specific template created from the experimental population, a Chinese children template and the widely used adult MNI template. We observed distinct differences in the right orbitofrontal region among the three templates in between-group comparisons. The study-specific template and the Chinese children template were more sensitive for the detection of between-group differences in the orbitofrontal cortex than the MNI template. Proper templates could effectively reduce individual variation. Further analysis revealed a correlation between the BOLD contrast size and the norm index of the affine transformation matrix, i.e., the SFN, which characterizes the difference between a template and a native image and differs significantly across subjects. We therefore proposed and tested another method to reduce individual variation that included the SFN as a covariate in group-wise statistics. This correction exhibits outstanding performance in enhancing detection power in group-level tests. A training effect of abacus-based mental calculation was also demonstrated, with significantly elevated activation in the right orbitofrontal region that correlated with behavioral response time across subjects in the trained group. PMID:26207985

  19. Modeling and calibration of pointing errors with alt-az telescope

    NASA Astrophysics Data System (ADS)

    Huang, Long; Ma, Wenli; Huang, Jinlong

    2016-08-01

    This paper presents a new model for improving the pointing accuracy of a telescope. The Denavit-Hartenberg (D-H) convention was used to perform an error analysis of the telescope's kinematics. A kinematic model was used to relate pointing errors to mechanical errors and the parameters of the kinematic model were estimated with a statistical model fit using data from two large astronomical telescopes. The model illustrates the geometric errors caused by imprecision in manufacturing and assembly processes and their effects on the pointing accuracy of the telescope. A kinematic model relates pointing error to axis position when certain geometric errors are assumed to be present in a telescope. In the parameter estimation portion, the semi-parametric regression model was introduced to compensate for remaining nonlinear errors. The experimental results indicate that the proposed semi-parametric regression model eliminates both geometric and nonlinear errors, and that the telescope's pointing accuracy significantly improves after this calibration.
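
    Fitting a geometric pointing model by least squares can be sketched as follows. The basis terms and coefficient values below are illustrative assumptions for demonstration only; they are not the kinematic (D-H) or semi-parametric model of the paper.

```python
import numpy as np

# Recover pointing-model coefficients from simulated azimuth pointing errors.
rng = np.random.default_rng(2)
az = rng.uniform(0, 2 * np.pi, 200)          # commanded azimuth, rad
el = rng.uniform(0.2, 1.4, 200)              # commanded elevation, rad

# Basis functions for the azimuth pointing error (coefficients in arcsec).
X = np.column_stack([
    np.ones_like(az),                        # azimuth index (zero-point) offset
    np.sin(az) * np.tan(el),                 # azimuth-axis tilt component
    np.cos(az) * np.tan(el),                 # azimuth-axis tilt component
    1.0 / np.cos(el),                        # collimation-like term
])
true_coeff = np.array([35.0, -12.0, 8.0, 5.0])               # assumed, arcsec
d_az = X @ true_coeff + rng.normal(0, 2.0, az.size)          # "measured" errors

fit, *_ = np.linalg.lstsq(X, d_az, rcond=None)
rms_before = np.sqrt(np.mean(d_az ** 2))
rms_after = np.sqrt(np.mean((d_az - X @ fit) ** 2))
print("recovered coefficients:", np.round(fit, 1))
print(f"pointing RMS: {rms_before:.1f} -> {rms_after:.1f} arcsec after calibration")
```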

  20. Cognitive control of conscious error awareness: error awareness and error positivity (Pe) amplitude in moderate-to-severe traumatic brain injury (TBI).

    PubMed

    Logan, Dustin M; Hill, Kyle R; Larson, Michael J

    2015-01-01

    Poor awareness has been linked to worse recovery and rehabilitation outcomes following moderate-to-severe traumatic brain injury (M/S TBI). The error positivity (Pe) component of the event-related potential (ERP) is linked to error awareness and cognitive control. Participants included 37 neurologically healthy controls and 24 individuals with M/S TBI who completed a brief neuropsychological battery and the error awareness task (EAT), a modified Stroop go/no-go task that elicits aware and unaware errors. Analyses compared between-group no-go accuracy (including accuracy between the first and second halves of the task to measure attention and fatigue), error awareness performance, and Pe amplitude by level of awareness. The M/S TBI group decreased in accuracy and maintained error awareness over time; control participants improved both accuracy and error awareness during the course of the task. Pe amplitude was larger for aware than unaware errors for both groups; however, consistent with previous research on the Pe and TBI, there were no significant between-group differences for Pe amplitudes. Findings suggest possible attention difficulties and low improvement of performance over time may influence specific aspects of error awareness in M/S TBI. PMID:26217212

  1. Cognitive control of conscious error awareness: error awareness and error positivity (Pe) amplitude in moderate-to-severe traumatic brain injury (TBI)

    PubMed Central

    Logan, Dustin M.; Hill, Kyle R.; Larson, Michael J.

    2015-01-01

    Poor awareness has been linked to worse recovery and rehabilitation outcomes following moderate-to-severe traumatic brain injury (M/S TBI). The error positivity (Pe) component of the event-related potential (ERP) is linked to error awareness and cognitive control. Participants included 37 neurologically healthy controls and 24 individuals with M/S TBI who completed a brief neuropsychological battery and the error awareness task (EAT), a modified Stroop go/no-go task that elicits aware and unaware errors. Analyses compared between-group no-go accuracy (including accuracy between the first and second halves of the task to measure attention and fatigue), error awareness performance, and Pe amplitude by level of awareness. The M/S TBI group decreased in accuracy and maintained error awareness over time; control participants improved both accuracy and error awareness during the course of the task. Pe amplitude was larger for aware than unaware errors for both groups; however, consistent with previous research on the Pe and TBI, there were no significant between-group differences for Pe amplitudes. Findings suggest possible attention difficulties and low improvement of performance over time may influence specific aspects of error awareness in M/S TBI. PMID:26217212

  2. The Influence of Relatives on the Efficiency and Error Rate of Familial Searching

    PubMed Central

    Rohlfs, Rori V.; Murphy, Erin; Song, Yun S.; Slatkin, Montgomery

    2013-01-01

    We investigate the consequences of adopting the criteria used by the state of California, as described by Myers et al. (2011), for conducting familial searches. We carried out a simulation study of randomly generated profiles of related and unrelated individuals with 13-locus CODIS genotypes and YFiler® Y-chromosome haplotypes, on which the Myers protocol for relative identification was carried out. For Y-chromosome sharing first degree relatives, the Myers protocol has a high probability () of identifying their relationship. For unrelated individuals, there is a low probability that an unrelated person in the database will be identified as a first-degree relative. For more distant Y-haplotype sharing relatives (half-siblings, first cousins, half-first cousins or second cousins) there is a substantial probability that the more distant relative will be incorrectly identified as a first-degree relative. For example, there is a probability that a first cousin will be identified as a full sibling, with the probability depending on the population background. Although the California familial search policy is likely to identify a first degree relative if his profile is in the database, and it poses little risk of falsely identifying an unrelated individual in a database as a first-degree relative, there is a substantial risk of falsely identifying a more distant Y-haplotype sharing relative in the database as a first-degree relative, with the consequence that their immediate family may become the target for further investigation. This risk falls disproportionately on those ethnic groups that are currently overrepresented in state and federal databases. PMID:23967076

  3. Characterization and mitigation of relative edge placement errors (rEPE) in full-chip computational lithography

    NASA Astrophysics Data System (ADS)

    Sturtevant, John; Gupta, Rachit; Shang, Shumay; Liubich, Vlad; Word, James

    2015-10-01

    Edge placement error (EPE) was a term initially introduced to describe the difference between the predicted pattern contour edge and the design target. Strictly speaking, this quantity is not directly measurable in the fab, and furthermore it is not ultimately the most important metric for chip yield. What is of vital importance is the relative EPE (rEPE) between different design layers and, in the era of multi-patterning, between the constituent mask sublayers of a single design layer. There has always been a strong emphasis on measurement and control of misalignment between design layers, and the progress in this realm has been remarkable, spurred in part at least by the proliferation of multi-patterning, which reduces the available overlay budget by introducing a coupling of alignment and CD errors for the target layer. In-line CD and overlay metrology specifications are typically established by starting with design rules and making certain assumptions about the error distributions that might be encountered in manufacturing. Lot disposition criteria in photo metrology (rework or pass to etch) are set using worst-case assumptions for CD and overlay, respectively. For example, poly-to-active overlay specs start with poly endcap design rules, make assumptions about active and poly lot-average and across-lot CDs, and incorporate general knowledge about poly line-end rounding to ensure that leakage current is maintained within specification. This worst-case guard banding does not consider specific chip designs, however, and as we have previously shown, full-chip simulation can elucidate the most critical "hot spots" for interlayer process variability, comprehending the two-layer CD and misalignment process window. It was shown that there can be differences in X versus Y misalignment process windows as well as positive versus negative directional misalignment process windows, and that such design-specific information might be leveraged for manufacturing disposition and
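
    As a rough illustration of how overlay and CD errors combine at a single edge, a common first-order rule of thumb (not the full-chip rEPE analysis described above) adds the overlay error to half the CD error; the numbers below are assumptions.

```python
# First-order single-edge placement budget: overlay plus half the CD deviation.
overlay_nm = 3.0          # layer-to-layer misalignment (assumed)
delta_cd_nm = 2.0         # CD deviation of the feature (assumed)

epe_nm = overlay_nm + delta_cd_nm / 2.0
print(f"worst-case single-edge placement error ~ {epe_nm:.1f} nm")
```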

  4. Fatigue-proofing: a new approach to reducing fatigue-related risk using the principles of error management.

    PubMed

    Dawson, Drew; Chapman, Janine; Thomas, Matthew J W

    2012-04-01

    In this review we introduce the idea of a novel group of strategies for further reducing fatigue-related risk in the workplace. In contrast to the risk-reduction achieved by reducing the likelihood an individual will be working while fatigued (e.g., by restricting hours of work), fatigue-proofing strategies are adaptive and protective risk-reduction behaviours that improve the resilience of a system of work. That is, they increase the likelihood that a fatigue-related error will be detected and not translate into accident or injury, thus reducing vulnerability to fatigue-related error. The first part of the review outlines the theoretical underpinnings of this approach and gives a series of ethnographically derived examples of informal fatigue-proofing strategies used in a variety of industries. A preliminary conceptual and methodological framework for the systematic identification, development and evaluation of fatigue-proofing strategies is then presented for integration into the wider organisational safety system. The review clearly identifies fatigue-proofing as a potentially valuable strategy to significantly lower fatigue-related risk independent of changes to working hours. This is of particular relevance to organisations where fatigue is difficult to manage using reductions in working hours due to operational circumstances, or the paradoxical consequences for overall safety associated with reduced working hours.

  5. Bottom-Up Mechanisms Are Involved in the Relation between Accuracy in Timing Tasks and Intelligence--Further Evidence Using Manipulations of State Motivation

    ERIC Educational Resources Information Center

    Ullen, Fredrik; Soderlund, Therese; Kaaria, Lenita; Madison, Guy

    2012-01-01

    Intelligence correlates with accuracy in various timing tasks. Such correlations could be due to both bottom-up mechanisms, e.g. neural properties that influence both temporal accuracy and cognitive processing, and differences in top-down control. We have investigated the timing-intelligence relation using a simple temporal motor task, isochronous…

  6. Most Frequent Errors in Judo Uki Goshi Technique and the Existing Relations among Them Analysed through T-Patterns

    PubMed Central

    Gutiérrez, Alfonso; Prieto, Iván; Cancela, José M.

    2009-01-01

    The purpose of this study is to provide a tool, based on the knowledge of technical errors, which helps to improve the teaching and learning process of the Uki Goshi technique. With this aim, we set out to determine the most frequent errors made by 44 students when performing this technique and how these mistakes relate. In order to do so, an observational analysis was carried out using the OSJUDO-UKG instrument and the data were registered using Match Vision Studio (Castellano, Perea, Alday and Hernández, 2008). The results, analyzed through descriptive statistics, show that the absence of a correct initial unbalancing movement (45.5%), the lack of proper right-arm pull (56.8%), not blocking the faller’s body (Uke) against the thrower’s (Tori’s) hip (54.5%) and throwing the Uke over the Tori’s side (72.7%) are the most usual mistakes. Through the sequential analysis of T-Patterns obtained with the THÈME program (Magnusson, 1996, 2000) we have concluded that not blocking the body with the Tori’s hip provokes the Uke’s throw over the Tori’s side during the final phase of the technique (95.8%), and positioning the right arm on the dorsal region of the Uke’s back during the Tsukuri entails the absence of a subsequent pull of the Uke’s body (73.3%). Key Points: In this study, the most frequent errors in the performance of the Uki Goshi technique have been determined and the existing relations among these mistakes have been shown through T-Patterns. The SOBJUDO-UKG is an observation instrument for detecting mistakes in the aforementioned technique. The results show that those mistakes related to the initial unbalancing movement and the main driving action of the technique are the most frequent. The use of T-Patterns turns out to be effective in order to obtain the most important relations among the observed errors. PMID:24474885

  7. A method for reducing the largest relative errors in Monte Carlo iterated-fission-source calculations

    SciTech Connect

    Hunter, J. L.; Sutton, T. M.

    2013-07-01

    In Monte Carlo iterated-fission-source calculations, relative uncertainties on local tallies tend to be larger in lower-power regions and smaller in higher-power regions. Reducing the largest uncertainties to an acceptable level simply by running a larger number of neutron histories is often prohibitively expensive. The uniform fission site method has been developed to yield a more spatially uniform distribution of relative uncertainties. This is accomplished by biasing the density of fission neutron source sites while not biasing the solution. The method is integrated into the source iteration process, and does not require any auxiliary forward or adjoint calculations. For a given amount of computational effort, the use of the method results in a reduction of the largest uncertainties relative to the standard algorithm. Two variants of the method have been implemented and tested. Both have been shown to be effective. (authors)
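
    The core idea of biasing the source-site density without biasing the solution can be illustrated with weighted sampling: sample more histories in low-power regions and compensate with statistical weights. The two-region toy problem below is an assumption for illustration and is not the uniform fission site algorithm itself.

```python
import numpy as np

# Sample source regions from a biased (more uniform) density and carry weights
# w = p_true / p_biased so the tally mean stays unbiased while the low-power
# region receives far more histories.
rng = np.random.default_rng(3)
p_true = np.array([0.9, 0.1])        # true fission source split: high/low power region
p_bias = np.array([0.5, 0.5])        # biased sampling: equal histories per region
response = np.array([1.0, 5.0])      # per-history tally response by region (assumed)

n = 100_000
region = rng.choice(2, size=n, p=p_bias)
weights = p_true[region] / p_bias[region]

est_weighted = np.mean(weights * response[region])    # unbiased weighted estimate
est_exact = np.sum(p_true * response)
n_low = np.count_nonzero(region == 1)
print(f"weighted estimate {est_weighted:.3f} vs exact {est_exact:.3f}")
print(f"histories in low-power region: {n_low} (vs ~{int(n * p_true[1])} unbiased)")
```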

  8. Accuracy in determining voice source parameters

    NASA Astrophysics Data System (ADS)

    Leonov, A. S.; Sorokin, V. N.

    2014-11-01

    The paper addresses the accuracy of an approximate solution to the inverse problem of retrieving the shape of a voice source from a speech signal for a known signal-to-noise ratio (SNR). It is shown that if the source is found as a function of time with the A.N. Tikhonov regularization method, the accuracy of the found approximation is worse than the accuracy of speech signal recording by an order of magnitude. In contrast, adequate parameterization of the source ensures approximate solution accuracy comparable with the accuracy of the problem data. A corresponding algorithm is considered. On the basis of linear (in terms of data errors) estimates of approximate parametric solution accuracy, parametric models with the best accuracy can be chosen. This comparison has been carried out for the known voice source models, i.e., model [17] and the LF model [18]. The advantages of the latter are shown. Thus, for SNR = 40 dB, the relative accuracy of an approximate solution found with this algorithm is about 1% for the LF model and about 2% for model [17] as compared to an accuracy of 7-8% in the regularization method. The role of accuracy estimates found in speaker identification problems is discussed.
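
    Tikhonov-regularized least squares, mentioned above, has a simple closed form: minimize ||Ax - b||^2 + lambda*||x||^2 and solve (A^T A + lambda*I) x = A^T b. The sketch below applies it to a generic smoothing (deconvolution-like) problem with an assumed kernel and pulse; it is not the voice-source model of the paper.

```python
import numpy as np

# Closed-form Tikhonov solution for a mildly ill-posed deconvolution problem.
rng = np.random.default_rng(4)
n = 200
t = np.linspace(0, 1, n)
x_true = np.exp(-((t - 0.4) / 0.05) ** 2)                 # assumed "source" pulse
A = np.array([[np.exp(-((ti - tj) / 0.03) ** 2) for tj in t] for ti in t]) / n
b = A @ x_true + rng.normal(0, 1e-3, n)                   # noisy "signal"

lam = 1e-6                                                # regularization parameter (assumed)
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {100 * rel_err:.1f}%")
```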

  9. An Event-Related Potential Study on Changes of Violation and Error Responses during Morphosyntactic Learning

    ERIC Educational Resources Information Center

    Davidson, Douglas J.; Indefrey, Peter

    2009-01-01

    Based on recent findings showing electrophysiological changes in adult language learners after relatively short periods of training, we hypothesized that adult Dutch learners of German would show responses to German gender and adjective declension violations after brief instruction. Adjective declension in German differs from previously studied…

  10. Developmental Differences in Error-Related ERPs in Middle- to Late-Adolescent Males

    ERIC Educational Resources Information Center

    Santesso, Diane L.; Segalowitz, Sidney J.

    2008-01-01

    Although there are some studies documenting structural brain changes during late adolescence, there are few showing functional brain changes over this period in humans. Of special interest would be functional changes in the medial frontal cortex that reflect response monitoring. In order to examine such age-related differences, the authors…

  11. Negative Cognitive Errors and Positive Illusions: Moderators of Relations between Divorce Events and Children's Psychological Adjustment.

    ERIC Educational Resources Information Center

    Mazur, Elizabeth; Wolchik, Sharlene

    Building on prior literature on adults' and children's appraisals of stressors, this study investigated relations among negative and positive appraisal biases, negative divorce events, and children's post-divorce adjustment. Subjects were 79 custodial nonremarried mothers and their children ages 9 to 13 who had experienced parental divorce within…

  12. A Neuroeconomics Analysis of Investment Process with Money Flow Information: The Error-Related Negativity.

    PubMed

    Wang, Cuicui; Vieito, João Paulo; Ma, Qingguo

    2015-01-01

    This investigation is among the first to analyze the neural basis of an investment process with money flow information from the financial market, using a simplified task in which volunteers had to choose to buy or not to buy stocks based on the display of positive or negative money flow information. After choosing "to buy" or "not to buy," participants were presented with feedback. At the same time, event-related potentials (ERPs) were used to record investors' brain activity and capture the error-related negativity (ERN) and feedback-related negativity (FRN) components. The ERN results suggested that there might be higher risk and more conflict when buying stocks with negative net money flow information than with positive net money flow information, and the inverse was also true for the "not to buy" option. The FRN component evoked by the bad outcome of a decision was more negative than that evoked by the good outcome, reflecting the difference between the values of the actual and expected outcomes. From this research, we can better understand how investors perceive money flow information from the financial market and the neural cognitive effects in the investment process. PMID:26557139

  13. A Neuroeconomics Analysis of Investment Process with Money Flow Information: The Error-Related Negativity

    PubMed Central

    Wang, Cuicui; Vieito, João Paulo; Ma, Qingguo

    2015-01-01

    This investigation is among the first ones to analyze the neural basis of an investment process with money flow information of financial market, using a simplified task where volunteers had to choose to buy or not to buy stocks based on the display of positive or negative money flow information. After choosing “to buy” or “not to buy,” participants were presented with feedback. At the same time, event-related potentials (ERPs) were used to record investor's brain activity and capture the event-related negativity (ERN) and feedback-related negativity (FRN) components. The results of ERN suggested that there might be a higher risk and more conflict when buying stocks with negative net money flow information than positive net money flow information, and the inverse was also true for the “not to buy” stocks option. The FRN component evoked by the bad outcome of a decision was more negative than that by the good outcome, which reflected the difference between the values of the actual and expected outcome. From the research, we could further understand how investors perceived money flow information of financial market and the neural cognitive effect in investment process. PMID:26557139

  15. Operator- and software-related post-experimental variability and source of error in 2-DE analysis.

    PubMed

    Millioni, Renato; Puricelli, Lucia; Sbrignadello, Stefano; Iori, Elisabetta; Murphy, Ellen; Tessari, Paolo

    2012-05-01

    In the field of proteomics, several approaches have been developed for separating proteins and analyzing their differential relative abundance. One of the oldest, yet still widely used, is 2-DE. Despite the continuous advance of new methods, which are less demanding from a technical standpoint, 2-DE is still compelling and has a lot of potential for improvement. The overall variability which affects 2-DE includes biological, experimental, and post-experimental (software-related) variance. It is important to highlight how much of the total variability of this technique is due to post-experimental variability, which, so far, has been largely neglected. In this short review, we focus on this topic and explain that post-experimental variability and sources of error can be further divided into those which are software-dependent and those which are operator-dependent. We discuss these issues in detail, offering suggestions for reducing errors that may affect the quality of results, and summarize the advantages and drawbacks of each approach. PMID:21394601

  16. Effects of Exposure Measurement Error in the Analysis of Health Effects from Traffic-Related Air Pollution

    PubMed Central

    Baxter, Lisa K.; Wright, Rosalind J.; Paciorek, Christopher J.; Laden, Francine; Suh, Helen H.; Levy, Jonathan I.

    2011-01-01

    In large epidemiological studies, many researchers use surrogates of air pollution exposure such as geographic information system (GIS)-based characterizations of traffic or simple housing characteristics. It is important to evaluate quantitatively these surrogates against measured pollutant concentrations to determine how their use affects the interpretation of epidemiological study results. In this study, we quantified the implications of using exposure models derived from validation studies, and other alternative surrogate models with varying amounts of measurement error, on epidemiological study findings. We compared previously developed multiple regression models characterizing residential indoor nitrogen dioxide (NO2), fine particulate matter (PM2.5), and elemental carbon (EC) concentrations to models with less explanatory power that may be applied in the absence of validation studies. We constructed a hypothetical epidemiological study, under a range of odds ratios, and determined the bias and uncertainty caused by the use of various exposure models predicting residential indoor exposure levels. Our simulations illustrated that exposure models with fairly modest R2 (0.3 to 0.4 for the previously developed multiple regression models for PM2.5 and NO2) yielded substantial improvements in epidemiological study performance, relative to the application of regression models created in the absence of validation studies or poorer-performing validation study models (e.g. EC). In many studies, models based on validation data may not be possible, so it may be necessary to use a surrogate model with more measurement error. This analysis provides a technique to quantify the implications of applying various exposure models with different degrees of measurement error in epidemiological research. PMID:19223939
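
    The attenuation effect described above can be illustrated with a small simulation; the sketch below is not the authors' model, and the true odds ratio, sample size, and surrogate R^2 values are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Sketch: attenuation of a logistic odds ratio when a noisy exposure surrogate
# replaces the true exposure.  True OR, n, and surrogate R^2 are illustrative.
rng = np.random.default_rng(1)
n = 20000
true_log_or = np.log(1.5)                 # assumed true OR per unit of exposure

x_true = rng.normal(0, 1, n)
p = 1 / (1 + np.exp(-(-2.0 + true_log_or * x_true)))
y = rng.binomial(1, p)

def fitted_or(x):
    # Effectively unpenalized logistic regression (very large C).
    m = LogisticRegression(C=1e6, max_iter=1000).fit(x.reshape(-1, 1), y)
    return float(np.exp(m.coef_[0, 0]))

for r2 in (1.0, 0.4, 0.3, 0.1):           # R^2 of surrogate vs. true exposure
    noise_var = (1 - r2) / r2             # gives Var(x_true)/Var(surrogate) = r2
    surrogate = x_true + rng.normal(0, np.sqrt(noise_var), n)
    print(f"surrogate R^2={r2:.1f}: estimated OR ~ {fitted_or(surrogate):.2f}")
```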

  17. The Argos-CLS Kalman Filter: Error Structures and State-Space Modelling Relative to Fastloc GPS Data.

    PubMed

    Lowther, Andrew D; Lydersen, Christian; Fedak, Mike A; Lovell, Phil; Kovacs, Kit M

    2015-01-01

    Understanding how an animal utilises its surroundings requires its movements through space to be described accurately. Satellite telemetry is the only means of acquiring movement data for many species; however, data are prone to varying amounts of spatial error, and the recent application of state-space models (SSMs) to the location estimation problem has provided a means to incorporate spatial errors when characterising animal movements. The predominant platform for collecting satellite telemetry data on free-ranging animals, Service Argos, recently provided an alternative Doppler location estimation algorithm that is purported to be more accurate and to generate a greater number of locations than its predecessor. We provide a comprehensive assessment of the performance of this new estimation process on data from free-ranging animals relative to concurrently collected Fastloc GPS data. Additionally, we test the efficacy of three readily available SSMs in predicting the movement of two focal animals. Raw Argos location estimates generated by the new algorithm were greatly improved compared to the old system. Approximately twice as many Argos locations were derived compared to GPS on the devices used. Root Mean Square Errors (RMSE) for each optimal SSM were less than 4.25 km, with some producing RMSE of less than 2.50 km. Differences in the biological plausibility of the tracks between the two focal animals used to investigate the utility of SSMs highlight the importance of considering animal behaviour in movement studies. The ability to reprocess Argos data collected since 2008 with the new algorithm should permit questions of animal movement to be revisited at a finer resolution.
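
    A minimal sketch of the kind of accuracy metric reported above: root-mean-square error of telemetry locations against matched GPS fixes, using great-circle (haversine) distances. The coordinates below are made up for illustration and are not the study's data.

```python
import numpy as np

# Sketch: RMSE (km) of satellite-telemetry locations against matched GPS fixes.
def haversine_km(lat1, lon1, lat2, lon2):
    r = 6371.0  # mean Earth radius in km
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * r * np.arcsin(np.sqrt(a))

# Matched pairs (telemetry estimate, GPS "truth"); values are hypothetical.
argos = np.array([[69.01, 18.95], [69.03, 19.02], [68.98, 18.90]])
gps   = np.array([[69.00, 19.00], [69.02, 19.00], [69.00, 18.93]])

d = haversine_km(argos[:, 0], argos[:, 1], gps[:, 0], gps[:, 1])
print(f"RMSE = {np.sqrt(np.mean(d ** 2)):.2f} km")
```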

  18. Analyzing thematic maps and mapping for accuracy

    USGS Publications Warehouse

    Rosenfield, G.H.

    1982-01-01

    Two problems which exist while attempting to test the accuracy of thematic maps and mapping are: (1) evaluating the accuracy of thematic content, and (2) evaluating the effects of the variables on thematic mapping. Statistical analysis techniques are applicable to both these problems and include techniques for sampling the data and determining their accuracy. In addition, techniques for hypothesis testing, or inferential statistics, are used when comparing the effects of variables. A comprehensive and valid accuracy test of a classification project, such as thematic mapping from remotely sensed data, includes the following components of statistical analysis: (1) sample design, including the sample distribution, sample size, size of the sample unit, and sampling procedure; and (2) accuracy estimation, including estimation of the variance and confidence limits. Careful consideration must be given to the minimum sample size necessary to validate the accuracy of a given classification category. The results of an accuracy test are presented in a contingency table, sometimes called a classification error matrix. Usually the rows represent the interpretation, and the columns represent the verification. The diagonal elements represent the correct classifications. The remaining elements of the rows represent errors of commission, and the remaining elements of the columns represent errors of omission. For tests of hypothesis that compare variables, the general practice has been to use only the diagonal elements from several related classification error matrices. These data are arranged in the form of another contingency table. The columns of the table represent the different variables being compared, such as different scales of mapping. The rows represent the blocking characteristics, such as the various categories of classification. The values in the cells of the tables might be the counts of correct classification or the binomial proportions of these counts divided by
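
    A minimal sketch of the classification error matrix described above, with rows as interpretation and columns as verification; the three land-cover categories and the counts are hypothetical.

```python
import numpy as np

# Sketch of a classification error matrix:
# rows = interpretation (map), columns = verification (reference).
classes = ["forest", "water", "urban"]
M = np.array([[50,  3,  2],
              [ 4, 40,  1],
              [ 6,  2, 30]])

print(f"overall accuracy: {np.trace(M) / M.sum():.2%}")

for i, name in enumerate(classes):
    commission = (M[i, :].sum() - M[i, i]) / M[i, :].sum()  # off-diagonal row elements
    omission   = (M[:, i].sum() - M[i, i]) / M[:, i].sum()  # off-diagonal column elements
    print(f"{name}: commission error {commission:.2%}, omission error {omission:.2%}")
```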

  19. Accuracy estimation for supervised learning algorithms

    SciTech Connect

    Glover, C.W.; Oblow, E.M.; Rao, N.S.V.

    1997-04-01

    This paper illustrates the relative merits of three methods - k-fold Cross Validation, Error Bounds, and Incremental Halting Test - for estimating the accuracy of a supervised learning algorithm. For each of the three methods we point out the problem they address, some of the important assumptions they are based on, and illustrate them through an example. Finally, we discuss the relative advantages and disadvantages of each method.
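
    A minimal sketch of the first of the three methods, k-fold cross-validation; the synthetic data set and the decision-tree learner are stand-ins, not the paper's setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Sketch: k-fold cross-validation estimate of a supervised learner's accuracy.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=10)
print(f"10-fold CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```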

  20. Refractive Error and Risk of Early or Late Age-Related Macular Degeneration: A Systematic Review and Meta-Analysis

    PubMed Central

    Li, Ying; Wang, JiWen; Zhong, XiaoJing; Tian, Zhen; Wu, Peipei; Zhao, Wenbo; Jin, Chenjin

    2014-01-01

    Objective To summarize relevant evidence investigating the associations between refractive error and age-related macular degeneration (AMD). Design Systematic review and meta-analysis. Methods We searched Medline, Web of Science, and Cochrane databases as well as the reference lists of retrieved articles to identify studies that met the inclusion criteria. Extracted data were combined using a random-effects meta-analysis. Studies that were pertinent to our topic but did not meet the criteria for quantitative analysis were reported in a systematic review instead. Main outcome measures Pooled odds ratios (ORs) and 95% confidence intervals (CIs) for the associations between refractive error (hyperopia, myopia, per-diopter increase in spherical equivalent [SE] toward hyperopia, per-millimeter increase in axial length [AL]) and AMD (early and late, prevalent and incident). Results Fourteen studies comprising over 5800 patients were eligible. Significant associations were found between hyperopia, myopia, per-diopter increase in SE, per-millimeter increase in AL, and prevalent early AMD. The pooled ORs and 95% CIs were 1.13 (1.06–1.20), 0.75 (0.56–0.94), 1.10 (1.07–1.14), and 0.79 (0.73–0.85), respectively. The per-diopter increase in SE was also significantly associated with early AMD incidence (OR, 1.06; 95% CI, 1.02–1.10). However, no significant association was found between hyperopia or myopia and early AMD incidence. Furthermore, neither prevalent nor incident late AMD was associated with refractive error. Considerable heterogeneity was found among studies investigating the association between myopia and prevalent early AMD (P = 0.001, I2 = 72.2%). Geographic location might play a role; the heterogeneity became non-significant after stratifying these studies into Asian and non-Asian subgroups. Conclusion Refractive error is associated with early AMD but not with late AMD. More large-scale longitudinal studies are needed to further investigate such
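
    For readers unfamiliar with how pooled odds ratios of this kind are produced, the following is a minimal sketch of DerSimonian-Laird random-effects pooling; the three study ORs and confidence intervals in the code are hypothetical and are not taken from the review.

```python
import numpy as np

# Sketch: DerSimonian-Laird random-effects pooling of study odds ratios.
# Each tuple is (OR, 95% CI lower, 95% CI upper); values are hypothetical.
or_ci = [(1.20, 1.05, 1.37), (1.08, 0.98, 1.19), (1.15, 1.01, 1.31)]
y = np.array([np.log(o) for o, _, _ in or_ci])                    # log ORs
se = np.array([(np.log(u) - np.log(l)) / (2 * 1.96) for _, l, u in or_ci])
v = se ** 2

w_fixed = 1 / v
q = np.sum(w_fixed * (y - np.sum(w_fixed * y) / np.sum(w_fixed)) ** 2)
k = len(y)
c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau2 = max(0.0, (q - (k - 1)) / c)                                # between-study variance
i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0          # heterogeneity I^2

w = 1 / (v + tau2)
mu = np.sum(w * y) / np.sum(w)
se_mu = np.sqrt(1 / np.sum(w))
print(f"pooled OR {np.exp(mu):.2f} "
      f"(95% CI {np.exp(mu - 1.96 * se_mu):.2f}-{np.exp(mu + 1.96 * se_mu):.2f}), "
      f"I^2 = {i2:.0f}%")
```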

  1. [Determination of relative error of pressure-broadening linewidth for the experimentally indistinguishable overlapped spectral lines with Voigt profile].

    PubMed

    Lin, Jie-Li; Huang, Yi-Qing; Lu, Hong

    2005-01-01

    The simulation and fitting of overlapped spectral lines with a Voigt profile are presented in this paper. The relative error ε of the fitted pressure-broadening linewidth obtained when the overlapped spectral lines are treated as a single line is discussed in detail. The relationship between this error and the separation Δν0 of the two line centers and the theoretical pressure-broadening linewidth ΔνL0 is analyzed. ε is found to be very large, and its dependence on Δν0 and ΔνL0 is very complicated, when the pressure-broadening linewidth is considerably smaller than the Doppler linewidth ΔνD. When ΔνL0 is comparable to ΔνD, the relationship between ε and Δν0 is close to a smooth second-order polynomial curve; the slope of this curve is negative when ΔνL0 is smaller than ΔνD and positive when it is larger. In general, ε decreases as the proportion of ΔνL0 in the whole spectral linewidth increases. These conclusions and the corresponding data are a significant reference for determining precise pressure-broadening coefficients from experimentally indistinguishable overlapped spectra, as well as for correcting the fitted pressure-broadening linewidth. PMID:15852837
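
    The setting analyzed above can be reproduced in a small numerical experiment: two overlapped Voigt lines are simulated and then fitted as if they were a single Voigt line, and the relative error of the recovered Lorentzian (pressure-broadening) width is reported. All line parameters below are illustrative, not the paper's values.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import wofz

# Voigt profile via the Faddeeva function.
def voigt(x, amp, x0, sigma, gamma):
    z = ((x - x0) + 1j * gamma) / (sigma * np.sqrt(2))
    return amp * np.real(wofz(z)) / (sigma * np.sqrt(2 * np.pi))

# Two overlapped lines with identical widths; parameters are illustrative.
x = np.linspace(-5, 5, 2000)
sigma_d, gamma_l, dx0 = 0.5, 0.3, 0.4      # Doppler width, Lorentz width, line separation
y = voigt(x, 1.0, -dx0 / 2, sigma_d, gamma_l) + voigt(x, 0.8, dx0 / 2, sigma_d, gamma_l)

# Fit the blend as if it were a single Voigt line.
popt, _ = curve_fit(voigt, x, y, p0=[1.5, 0.0, 0.5, 0.3])
gamma_fit = abs(popt[3])
print(f"relative error of fitted Lorentzian width: {(gamma_fit - gamma_l) / gamma_l:.1%}")
```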

  2. Extraversion/Introversion and Gender in Relation to the English Pronunciation Accuracy of Arabic Speaking College Students.

    ERIC Educational Resources Information Center

    Hassan, Badran A.

    The relationship between both extraversion/introversion and gender to the pronunciation accuracy of English as a foreign language was examined. Instruments for this study included a specifically developed introversion scale and an English language pronunciation accuracy test. Subjects were third-year English language specialists. It was found…

  3. Action errors, error management, and learning in organizations.

    PubMed

    Frese, Michael; Keith, Nina

    2015-01-01

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  4. Municipal water consumption forecast accuracy

    NASA Astrophysics Data System (ADS)

    Fullerton, Thomas M.; Molina, Angel L.

    2010-06-01

    Municipal water consumption planning is an active area of research because of infrastructure construction and maintenance costs, supply constraints, and water quality assurance. In spite of that, relatively few water forecast accuracy assessments have been completed to date, although some internal documentation may exist as part of the proprietary "grey literature." This study utilizes a data set of previously published municipal consumption forecasts to partially fill that gap in the empirical water economics literature. Previously published municipal water econometric forecasts for three public utilities are examined for predictive accuracy against two random walk benchmarks commonly used in regional analyses. Descriptive metrics used to quantify forecast accuracy include root-mean-square error and Theil inequality statistics. Formal statistical assessments are completed using four-pronged error differential regression F tests. Similar to studies for other metropolitan econometric forecasts in areas with similar demographic and labor market characteristics, model predictive performances for the municipal water aggregates in this effort are mixed for each of the municipalities included in the sample. Given the competitiveness of the benchmarks, analysts should employ care when utilizing econometric forecasts of municipal water consumption for planning purposes, comparing them to recent historical observations and trends to ensure reliability. Comparative results using data from other markets, including regions facing differing labor and demographic conditions, would also be helpful.
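
    A minimal sketch of the benchmark comparison described above, using RMSE and a Theil-type U ratio (model RMSE divided by random-walk RMSE, one common convention, which may differ from the exact statistic used in the study); the consumption series and the "model" forecast are synthetic.

```python
import numpy as np

# Sketch: compare a model forecast with a no-change random-walk benchmark.
rng = np.random.default_rng(2)
actual = 100 + np.cumsum(rng.normal(0.5, 2.0, 36))        # observed consumption (synthetic)
model_fc = actual + rng.normal(0, 1.5, 36)                # hypothetical model forecast
rw_fc = np.concatenate(([actual[0]], actual[:-1]))        # random walk: last observed value

def rmse(forecast, observed):
    return np.sqrt(np.mean((forecast - observed) ** 2))

rmse_model, rmse_rw = rmse(model_fc, actual), rmse(rw_fc, actual)
print(f"model RMSE {rmse_model:.2f}, random-walk RMSE {rmse_rw:.2f}, "
      f"U = {rmse_model / rmse_rw:.2f}  (U < 1 beats the benchmark)")
```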

  5. [Longer working hours of pharmacists in the ward resulted in lower medication-related errors--survey of national university hospitals in Japan].

    PubMed

    Matsubara, Kazuo; Toyama, Akira; Satoh, Hiroshi; Suzuki, Hiroshi; Awaya, Toshio; Tasaki, Yoshikazu; Yasuoka, Toshiaki; Horiuchi, Ryuya

    2011-04-01

    It is obvious that pharmacists play a critical role as risk managers in the healthcare system, especially in medication treatment. To date, however, there has been no multicenter survey describing the effectiveness of clinical pharmacists in preventing medical errors from occurring in hospital wards in Japan. We therefore conducted a 1-month survey (October 1-31, 2009) to elucidate the relationship between the number of errors and the working hours of pharmacists in the ward, and to verify whether the assignment of clinical pharmacists to the ward prevents medical errors. Questionnaire items for the pharmacists at 42 national university hospitals and a medical institute included the total and respective numbers of medication-related errors, beds, and working hours of pharmacists in 2 internal medicine and 2 surgical departments in each hospital. Regardless of severity, errors were consecutively reported to the Medical Security and Safety Management Section in each hospital. The analysis of errors revealed that longer working hours of pharmacists in the ward resulted in fewer medication-related errors; this was especially significant in the internal medicine wards (where a variety of drugs were used) compared with the surgical wards. However, the nurse assignment mode (nurse/inpatient ratio of 1:7-10) did not influence the error frequency. The results of this survey strongly indicate that assignment of clinical pharmacists to the ward is critically essential in promoting medication safety and efficacy. PMID:21467804

  6. Exploiting Task Constraints for Self-Calibrated Brain-Machine Interface Control Using Error-Related Potentials

    PubMed Central

    Iturrate, Iñaki; Grizou, Jonathan; Omedes, Jason; Oudeyer, Pierre-Yves; Lopes, Manuel; Montesano, Luis

    2015-01-01

    This paper presents a new approach for self-calibration BCI for reaching tasks using error-related potentials. The proposed method exploits task constraints to simultaneously calibrate the decoder and control the device, by using a robust likelihood function and an ad-hoc planner to cope with the large uncertainty resulting from the unknown task and decoder. The method has been evaluated in closed-loop online experiments with 8 users using a previously proposed BCI protocol for reaching tasks over a grid. The results show that it is possible to have a usable BCI control from the beginning of the experiment without any prior calibration. Furthermore, comparisons with simulations and previous results obtained using standard calibration hint that both the quality of recorded signals and the performance of the system were comparable to those obtained with a standard calibration approach. PMID:26131890

  7. Exploiting Task Constraints for Self-Calibrated Brain-Machine Interface Control Using Error-Related Potentials.

    PubMed

    Iturrate, Iñaki; Grizou, Jonathan; Omedes, Jason; Oudeyer, Pierre-Yves; Lopes, Manuel; Montesano, Luis

    2015-01-01

    This paper presents a new approach for self-calibration BCI for reaching tasks using error-related potentials. The proposed method exploits task constraints to simultaneously calibrate the decoder and control the device, by using a robust likelihood function and an ad-hoc planner to cope with the large uncertainty resulting from the unknown task and decoder. The method has been evaluated in closed-loop online experiments with 8 users using a previously proposed BCI protocol for reaching tasks over a grid. The results show that it is possible to have a usable BCI control from the beginning of the experiment without any prior calibration. Furthermore, comparisons with simulations and previous results obtained using standard calibration hint that both the quality of recorded signals and the performance of the system were comparable to those obtained with a standard calibration approach. PMID:26131890

  8. Task engagement and the relationships between the error-related negativity, agreeableness, behavioral shame proneness and cortisol.

    PubMed

    Tops, Mattie; Boksem, Maarten A S; Wester, Anne E; Lorist, Monicque M; Meijman, Theo F

    2006-08-01

    Previous results suggest that both cortisol mobilization and the error-related negativity (ERN/Ne) reflect goal engagement, i.e. the mobilization and allocation of attentional and physiological resources. Personality measures of negative affectivity have been associated both to high cortisol levels and large ERN/Ne amplitudes. However, measures of positive social adaptation and agreeableness have also been related to high cortisol levels and large ERN/Ne amplitudes. We hypothesized that, as long as they relate to concerns over social evaluation and mistakes, both personality measures reflecting positive affectivity (e.g. agreeableness) and those reflecting negative affectivity (e.g. behavioral shame proneness) would be associated with an increased likelihood of high task engagement, and hence to increased cortisol mobilization and ERN/Ne amplitudes. We had female subjects perform a flanker task while EEG was recorded. Additionally, the subjects filled out questionnaires measuring mood and personality, and salivary cortisol immediately before and after task performance was measured. The overall pattern of relationships between our measures supports the hypothesis that cortisol mobilization and ERN/Ne amplitude reflect task engagement, and both relate positively to each other and to the personality traits agreeableness and behavioral shame proneness. We discuss the potential importance of engagement-disengagement and of concerns over social evaluation for research on psychopathology, stress and the ERN/Ne.

  9. Uncertainty quantification and error analysis

    SciTech Connect

    Higdon, Dave M; Anderson, Mark C; Habib, Salman; Klein, Richard; Berliner, Mark; Covey, Curt; Ghattas, Omar; Graziani, Carlo; Seager, Mark; Sefcik, Joseph; Stark, Philip

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  10. Processing of action- but not stimulus-related prediction errors differs between active and observational feedback learning.

    PubMed

    Kobza, Stefan; Bellebaum, Christian

    2015-01-01

    Learning of stimulus-response-outcome associations is driven by outcome prediction errors (PEs). Previous studies have shown larger PE-dependent activity in the striatum for learning from own as compared to observed actions and the following outcomes despite comparable learning rates. We hypothesised that this finding relates primarily to a stronger integration of action and outcome information in active learners. Using functional magnetic resonance imaging, we investigated brain activations related to action-dependent PEs, reflecting the deviation between action values and obtained outcomes, and action-independent PEs, reflecting the deviation between subjective values of response-preceding cues and obtained outcomes. To this end, 16 active and 15 observational learners engaged in a probabilistic learning card-guessing paradigm. On each trial, active learners saw one out of five cues and pressed either a left or right response button to receive feedback (monetary win or loss). Each observational learner observed exactly those cues, responses and outcomes of one active learner. Learning performance was assessed in active test trials without feedback and did not differ between groups. For both types of PEs, activations were found in the globus pallidus, putamen, cerebellum, and insula in active learners. However, only for action-dependent PEs, activations in these structures and the anterior cingulate were increased in active relative to observational learners. Thus, PE-related activity in the reward system is not generally enhanced in active relative to observational learning but only for action-dependent PEs. For the cerebellum, additional activations were found across groups for cue-related uncertainty, thereby emphasising the cerebellum's role in stimulus-outcome learning.

  11. Accepting error to make less error.

    PubMed

    Einhorn, H J

    1986-01-01

    In this article I argue that the clinical and statistical approaches rest on different assumptions about the nature of random error and the appropriate level of accuracy to be expected in prediction. To examine this, a case is made for each approach. The clinical approach is characterized as being deterministic, causal, and less concerned with prediction than with diagnosis and treatment. The statistical approach accepts error as inevitable and in so doing makes less error in prediction. This is illustrated using examples from probability learning and equal weighting in linear models. Thereafter, a decision analysis of the two approaches is proposed. Of particular importance are the errors that characterize each approach: myths, magic, and illusions of control in the clinical; lost opportunities and illusions of the lack of control in the statistical. Each approach represents a gamble with corresponding risks and benefits.

  12. Teaching Picture-to-Object Relations in Picture-Based Requesting by Children with Autism: A Comparison between Error Prevention and Error Correction Teaching Procedures

    ERIC Educational Resources Information Center

    Carr, D.; Felce, J.

    2008-01-01

    Background: Children who have a combination of language and developmental disabilities with autism often experience major difficulties in learning relations between objects and their graphic representations. Therefore, they would benefit from teaching procedures that minimize their difficulties in acquiring these relations. This study compared two…

  13. Assessment of targeting accuracy of a low-energy stereotactic radiosurgery treatment for age-related macular degeneration

    NASA Astrophysics Data System (ADS)

    Taddei, Phillip J.; Chell, Erik; Hansen, Steven; Gertner, Michael; Newhauser, Wayne D.

    2010-12-01

    Age-related macular degeneration (AMD), a leading cause of blindness in the United States, is a neovascular disease that may be controlled with radiation therapy. Early patient outcomes of external beam radiotherapy, however, have been mixed. Recently, a novel multimodality treatment was developed, comprising external beam radiotherapy and concomitant treatment with a vascular endothelial growth factor inhibitor. The radiotherapy arm is performed by stereotactic radiosurgery, delivering a 16 Gy dose in the macula (clinical target volume, CTV) using three external low-energy x-ray fields while adequately sparing normal tissues. The purpose of our study was to test the sensitivity of the delivery of the prescribed dose in the CTV using this technique and of the adequate sparing of normal tissues to all plausible variations in the position and gaze angle of the eye. Using Monte Carlo simulations of a 16 Gy treatment, we varied the gaze angle by ±5° in the polar and azimuthal directions, the linear displacement of the eye ±1 mm in all orthogonal directions, and observed the union of the three fields on the posterior wall of spheres concentric with the eye that had diameters between 20 and 28 mm. In all cases, the dose in the CTV fluctuated <6%, the maximum dose in the sclera was <20 Gy, the dose in the optic disc, optic nerve, lens and cornea were <0.7 Gy and the three-field junction was adequately preserved. The results of this study provide strong evidence that for plausible variations in the position of the eye during treatment, either by the setup error or intrafraction motion, the prescribed dose will be delivered to the CTV and the dose in structures at risk will be kept far below tolerance doses.

  14. Content validation of the Medication Error Worksheet.

    PubMed

    Zuzelo, P R; Inverso, T; Linkewich, K M

    2001-11-01

    Clinical nurse specialists use a variety of preexisting instruments to measure and describe health-related concepts. It is important for clinical nurse specialists to know how to evaluate the content validity of potentially useful instruments. This study assessed the content validity of the Institute for Safe Medication Practice's Medication Error Worksheet. The worksheet is used as a questioning framework to guide data collection processes when beginning analysis of a medication error. Although the worksheet has been valuable to the Institute for Safe Medication Practice staff, its content validity has not been determined. Content validity methods included expert validation and a review of the related literature. Results support the validity of the Medication Error Worksheet and suggest that this worksheet is a comprehensive tool that may be helpful when exploring the circumstances of medication errors and when analyzing medication use systems. Results were shared with the Institute for Safe Medication Practice staff to improve the accuracy of the worksheet.

  15. Relating Indices of Knowledge Structure Coherence and Accuracy to Skill-Based Performance: Is There Utility in Using a Combination of Indices?

    ERIC Educational Resources Information Center

    Schuelke, Matthew J.; Day, Eric Anthony; McEntire, Lauren E.; Boatman, Paul R.; Boatman, Jazmine Espejo; Kowollik, Vanessa; Wang, Xiaoqian

    2009-01-01

    The authors examined the relative criterion-related validity of knowledge structure coherence and two accuracy-based indices (closeness and correlation) as well as the utility of using a combination of knowledge structure indices in the prediction of skill acquisition and transfer. Findings from an aggregation of 5 independent samples (N = 958)…

  16. Overlay accuracy fundamentals

    NASA Astrophysics Data System (ADS)

    Kandel, Daniel; Levinski, Vladimir; Sapiens, Noam; Cohen, Guy; Amit, Eran; Klein, Dana; Vakshtein, Irina

    2012-03-01

    Currently, the performance of overlay metrology is evaluated mainly based on random error contributions such as precision and TIS variability. With the expected shrinkage of the overlay metrology budget to < 0.5nm, it becomes crucial to include also systematic error contributions which affect the accuracy of the metrology. Here we discuss fundamental aspects of overlay accuracy and a methodology to improve accuracy significantly. We identify overlay mark imperfections and their interaction with the metrology technology, as the main source of overlay inaccuracy. The most important type of mark imperfection is mark asymmetry. Overlay mark asymmetry leads to a geometrical ambiguity in the definition of overlay, which can be ~1nm or less. It is shown theoretically and in simulations that the metrology may enhance the effect of overlay mark asymmetry significantly and lead to metrology inaccuracy ~10nm, much larger than the geometrical ambiguity. The analysis is carried out for two different overlay metrology technologies: Imaging overlay and DBO (1st order diffraction based overlay). It is demonstrated that the sensitivity of DBO to overlay mark asymmetry is larger than the sensitivity of imaging overlay. Finally, we show that a recently developed measurement quality metric serves as a valuable tool for improving overlay metrology accuracy. Simulation results demonstrate that the accuracy of imaging overlay can be improved significantly by recipe setup optimized using the quality metric. We conclude that imaging overlay metrology, complemented by appropriate use of measurement quality metric, results in optimal overlay accuracy.

  17. Compensation Low-Frequency Errors in TH-1 Satellite

    NASA Astrophysics Data System (ADS)

    Wang, Jianrong; Wang, Renxiang; Hu, Xin

    2016-06-01

    Topographic mapping products at 1:50,000 scale can be produced by satellite photogrammetry without ground control points (GCPs), which requires highly accurate exterior orientation elements. The attitude components of the exterior orientation elements are usually obtained from the attitude determination system on the satellite. Theoretical analysis and practice show that the attitude determination system exhibits not only high-frequency errors but also low-frequency errors that depend on the latitude of the satellite orbit and on time. These low-frequency errors degrade the location accuracy achievable without GCPs, especially the horizontal accuracy. For the SPOT5 satellite, a latitudinal model was proposed to correct attitudes using data from approximately 20 calibration sites, and the location accuracy was improved. Low-frequency errors are also found in the Tian Hui 1 (TH-1) satellite. A method for compensating low-frequency errors in the ground image processing of TH-1 is therefore proposed; it detects and compensates the low-frequency errors automatically without using GCPs. This paper deals with the low-frequency errors in TH-1 as follows. First, the low-frequency errors of the attitude determination system are analyzed. Second, compensation models are introduced into the bundle adjustment. Finally, the approach is verified using TH-1 data. The results show that the low-frequency errors of the attitude determination system can be compensated during bundle adjustment, which improves the location accuracy without GCPs and plays an important role in the consistency of global location accuracy.

  18. Cognitive control adjustments in healthy older and younger adults: Conflict adaptation, the error-related negativity (ERN), and evidence of generalized decline with age.

    PubMed

    Larson, Michael J; Clayson, Peter E; Keith, Cierra M; Hunt, Isaac J; Hedges, Dawson W; Nielsen, Brent L; Call, Vaughn R A

    2016-03-01

    Older adults display alterations in neural reflections of conflict-related processing. We examined response times (RTs), error rates, and event-related potential (ERP; N2 and P3 components) indices of conflict adaptation (i.e., congruency sequence effects), a cognitive control process wherein previous-trial congruency influences current-trial performance, along with post-error slowing, correct-related negativity (CRN), error-related negativity (ERN) and error positivity (Pe) amplitudes in 65 healthy older adults and 94 healthy younger adults. Older adults showed generalized slowing, had decreased post-error slowing, and committed more errors than younger adults. Both older and younger adults showed conflict adaptation effects; magnitude of conflict adaptation did not differ by age. N2 amplitudes were similar between groups; younger, but not older, adults showed conflict adaptation effects for P3 component amplitudes. CRN and Pe, but not ERN, amplitudes differed between groups. Data support generalized declines in cognitive control processes in older adults without specific deficits in conflict adaptation.

  19. Maternal Accuracy and Behavior in Anticipating Children’s Responses to Novelty: Relations to Fearful Temperament and Implications for Anxiety Development

    PubMed Central

    Kiel, Elizabeth J.; Buss, Kristin A.

    2009-01-01

    Previous research has suggested that mothers’ behaviors may serve as a mechanism in the development from toddler fearful temperament to childhood anxiety. The current study examined the maternal characteristic of accuracy in predicting toddlers’ distress reactions to novelty in relation to temperament, parenting, and anxiety development. Ninety-three two-year-old toddlers and their mothers participated in the study. Maternal accuracy moderated the relation between fearful temperament and protective behavior, suggesting this bidirectional link may be more likely to occur when mothers are particularly attuned to their children’s fear responses. An exploratory moderated mediation analysis supported the mechanistic role of protective parenting in the relation between early fearful temperament and later anxiety. Mediation only occurred, however, when mothers displayed high accuracy. Results are discussed within the broader literature of parental influence on fearful children’s development. PMID:20436795

  20. An Analysis of Factors Related to Choral Teachers' Ability to Detect Pitch Errors While Reading the Score.

    ERIC Educational Resources Information Center

    Gonzo, Carroll Lee

    In order to determine whether differences exist between undergraduate music majors preparing for teaching careers in music and experienced secondary-level choral teachers in regard to their ability to detect pitch errors, a Pitch Error Detection (PED) test was developed, and a questionnaire designed to retrieve information about the subjects'…

  1. Error detection and response adjustment in youth with mild spastic cerebral palsy: an event-related brain potential study.

    PubMed

    Hakkarainen, Elina; Pirilä, Silja; Kaartinen, Jukka; van der Meere, Jaap J

    2013-06-01

    This study evaluated the brain activation state during error making in youth with mild spastic cerebral palsy and a peer control group while carrying out a stimulus recognition task. The key question was whether patients were detecting their own errors and subsequently improving their performance in a future trial. Findings indicated that error responses of the group with cerebral palsy were associated with weak motor preparation, as indexed by the amplitude of the late contingent negative variation. However, patients were detecting their errors as indexed by the amplitude of the response-locked negativity and thus improved their performance in a future trial. Findings suggest that the consequence of error making on future performance is intact in a sample of youth with mild spastic cerebral palsy. Because the study group is small, the present findings need replication using a larger sample.

  2. Swing arm profilometer: analytical solutions of misalignment errors for testing axisymmetric optics

    NASA Astrophysics Data System (ADS)

    Xiong, Ling; Luo, Xiao; Liu, Zhenyu; Wang, Xiaokun; Hu, Haixiang; Zhang, Feng; Zheng, Ligong; Zhang, Xuejun

    2016-07-01

    The swing arm profilometer (SAP) plays a very important role in testing large aspheric optics. As one of the most significant error sources affecting test accuracy, misalignment error leads to low-order errors such as aspherical aberrations and coma, apart from power. In order to analyze the effect of misalignment errors, the relation between the alignment parameters and the test results for axisymmetric optics is presented. Analytical solutions of SAP system errors arising from tested-mirror misalignment, arm length L deviation, tilt-angle θ deviation, air-table spin error, and air-table misalignment are derived, and misalignment tolerances are given to guide surface measurement. In addition, experiments on a 2-m diameter parabolic mirror are presented to verify the model; according to the error budget, the SAP test achieves an accuracy of 0.1 μm root-mean-square for low-order errors other than power.

  3. Field error lottery

    NASA Astrophysics Data System (ADS)

    James Elliott, C.; McVey, Brian D.; Quimby, David C.

    1991-07-01

    The level of field errors in a free electron laser (FEL) is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is use of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond convenient mechanical tolerances of ± 25 μm, and amelioration of these may occur by a procedure using direct measurement of the magnetic fields at assembly time.

  4. Field error lottery

    NASA Astrophysics Data System (ADS)

    Elliott, C. James; McVey, Brian D.; Quimby, David C.

    1990-11-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement, and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time.

  5. Field error lottery

    SciTech Connect

    Elliott, C.J.; McVey, B.; Quimby, D.C.

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  6. Errata: Papers in Error Analysis.

    ERIC Educational Resources Information Center

    Svartvik, Jan, Ed.

    Papers presented at the symposium of error analysis in Lund, Sweden, in September 1972, approach error analysis specifically in its relation to foreign language teaching and second language learning. Error analysis is defined as having three major aspects: (1) the description of the errors, (2) the explanation of errors by means of contrastive…

  7. Self-Reported and Observed Punitive Parenting Prospectively Predicts Increased Error-Related Brain Activity in Six-Year-Old Children.

    PubMed

    Meyer, Alexandria; Proudfit, Greg Hajcak; Bufferd, Sara J; Kujawa, Autumn J; Laptook, Rebecca S; Torpey, Dana C; Klein, Daniel N

    2015-07-01

    The error-related negativity (ERN) is a negative deflection in the event-related potential (ERP) occurring approximately 50 ms after error commission at fronto-central electrode sites and is thought to reflect the activation of a generic error monitoring system. Several studies have reported an increased ERN in clinically anxious children, and suggest that anxious children are more sensitive to error commission--although the mechanisms underlying this association are not clear. We have previously found that punishing errors results in a larger ERN, an effect that persists after punishment ends. It is possible that learning-related experiences that impact sensitivity to errors may lead to an increased ERN. In particular, punitive parenting might sensitize children to errors and increase their ERN. We tested this possibility in the current study by prospectively examining the relationship between parenting style during early childhood and children's ERN approximately 3 years later. Initially, 295 parents and children (approximately 3 years old) participated in a structured observational measure of parenting behavior, and parents completed a self-report measure of parenting style. At a follow-up assessment approximately 3 years later, the ERN was elicited during a Go/No-Go task, and diagnostic interviews were completed with parents to assess child psychopathology. Results suggested that both observational measures of hostile parenting and self-report measures of authoritarian parenting style uniquely predicted a larger ERN in children 3 years later. We previously reported that children in this sample with anxiety disorders were characterized by an increased ERN. A mediation analysis indicated that ERN magnitude mediated the relationship between harsh parenting and child anxiety disorder. Results suggest that parenting may shape children's error processing through environmental conditioning and thereby risk for anxiety, although future work is needed to confirm this

  9. Data Accuracy in Citation Studies.

    ERIC Educational Resources Information Center

    Boyce, Bert R.; Banning, Carolyn Sue

    1979-01-01

    Four hundred eighty-seven citations of the 1976 issues of the Journal of the American Society for Information Science and the Personnel and Guidance Journal were checked for accuracy: total error was 13.6 percent and 10.7 percent, respectively. Error categories included incorrect author name, article/book title, journal title; wrong entry; and…

  10. Digital reader vs print media: the role of digital technology in reading accuracy in age-related macular degeneration

    PubMed Central

    Gill, K; Mao, A; Powell, A M; Sheidow, T

    2013-01-01

    Purpose To compare patient satisfaction, reading accuracy, and reading speed between digital e-readers (Sony eReader, Apple iPad) and standard paper/print media for patients with stable wet age-related macular degeneration (AMD). Methods Patients recruited for the study had stable wet AMD in one or both eyes and would benefit from a low-vision aid. The text sizes selected by patients reflected the spectrum of low vision associated with their macular disease. Stability of macular degeneration was assessed by clinical examination and stable visual acuity. Patients were assessed for reading speed on both digital readers and standard paper text. Standardized and validated texts for reading speeds were used. Font sizes in the study reflected a spectrum from newsprint to large-print books. Patients started with the smallest print size they could read on the standardized paper text. They then used the digital readers to read the same size standardized text. Reading speed was calculated as words per minute by the formula (correctly read words/reading time (s)·60). A visual analog scale was completed by patients after reading each passage; it covered their assessment of 'ease of use' and 'clarity of print' for each device and the print paper. Results A total of 27 patients were included in the study. Patients consistently read faster (P<0.0003) on the Apple iPad with larger text sizes (size 24 or greater) when compared with paper, and also on the paper compared with the Sony eReader (P<0.03) for all text group sizes. Patients rated the iPad as having the best clarity and the print paper as the easiest to use. Conclusions This study has demonstrated that digital devices may have a use in visual rehabilitation for low-vision patients. Devices that have larger display screens and offer high contrast ratios will benefit AMD patients who require larger text to read. PMID:23492860
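
    As a worked example of the reading-speed formula quoted above (the word count and reading time are hypothetical, not values from the study):

```python
# reading speed (wpm) = correctly read words / reading time (s) * 60
correct_words = 118
reading_time_s = 95
print(f"reading speed = {correct_words / reading_time_s * 60:.1f} words per minute")  # ~74.5 wpm
```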

  11. The Episodic Engram Transformed: Time Reduces Retrieval-Related Brain Activity but Correlates It with Memory Accuracy

    ERIC Educational Resources Information Center

    Furman, Orit; Mendelsohn, Avi; Dudai, Yadin

    2012-01-01

    We took snapshots of human brain activity with fMRI during retrieval of realistic episodic memory over several months. Three groups of participants were scanned during a memory test either hours, weeks, or months after viewing a documentary movie. High recognition accuracy after hours decreased after weeks and remained at similar levels after…

  12. Classification Accuracy of MMPI-2 Validity Scales in the Detection of Pain-Related Malingering: A Known-Groups Study

    ERIC Educational Resources Information Center

    Bianchini, Kevin J.; Etherton, Joseph L.; Greve, Kevin W.; Heinly, Matthew T.; Meyers, John E.

    2008-01-01

    The purpose of this study was to determine the accuracy of "Minnesota Multiphasic Personality Inventory" 2nd edition (MMPI-2; Butcher, Dahlstrom, Graham, Tellegen, & Kaemmer, 1989) validity indicators in the detection of malingering in clinical patients with chronic pain using a hybrid clinical-known groups/simulator design. The sample consisted…

  13. The Relative Importance of Random Error and Observation Frequency in Detecting Trends in Upper Tropospheric Water Vapor

    NASA Technical Reports Server (NTRS)

    Whiteman, David N.; Vermeesch, Kevin C.; Oman, Luke D.; Weatherhead, Elizabeth C.

    2011-01-01

    Recent published work assessed the amount of time to detect trends in atmospheric water vapor over the coming century. We address the same question and conclude that under the most optimistic scenarios and assuming perfect data (i.e., observations with no measurement uncertainty) the time to detect trends will be at least 12 years at approximately 200 hPa in the upper troposphere. Our times to detect trends are therefore shorter than those recently reported and this difference is affected by data sources used, method of processing the data, geographic location and pressure level in the atmosphere where the analyses were performed. We then consider the question of how instrumental uncertainty plays into the assessment of time to detect trends. We conclude that due to the high natural variability in atmospheric water vapor, the amount of time to detect trends in the upper troposphere is relatively insensitive to instrumental random uncertainty and that it is much more important to increase the frequency of measurement than to decrease the random error in the measurement. This is put in the context of international networks such as the Global Climate Observing System (GCOS) Reference Upper-Air Network (GRUAN) and the Network for the Detection of Atmospheric Composition Change (NDACC) that are tasked with developing time series of climate quality water vapor data.
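
    A widely used approximation for the time needed to detect a linear trend in an autocorrelated series is given by Weatherhead et al. (1998); the sketch below evaluates it for illustrative numbers only (the assumed trend, noise level, and autocorrelation values are not taken from the study).

```python
import numpy as np

# Approximate number of years to detect a trend with ~90% probability
# (Weatherhead et al., 1998):
#   n* = [ (3.3 * sigma_N / |omega|) * sqrt((1 + phi) / (1 - phi)) ] ** (2/3)
# sigma_N: standard deviation of the (e.g., monthly) noise, phi: its lag-1
# autocorrelation, omega: trend per year.  Numbers below are illustrative.
def years_to_detect(trend_per_year, sigma_n, phi):
    return ((3.3 * sigma_n / abs(trend_per_year))
            * np.sqrt((1 + phi) / (1 - phi))) ** (2.0 / 3.0)

omega = 0.01      # assumed trend: 1% per year (relative units)
sigma_n = 0.15    # assumed natural variability of monthly anomalies
for phi in (0.0, 0.3, 0.6):
    print(f"phi={phi:.1f}: ~{years_to_detect(omega, sigma_n, phi):.0f} years to detect")
```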

  14. The relative importance of random error and observation frequency in detecting trends in upper tropospheric water vapor

    NASA Astrophysics Data System (ADS)

    Whiteman, David N.; Vermeesch, Kevin C.; Oman, Luke D.; Weatherhead, Elizabeth C.

    2011-11-01

    Recent published work assessed the amount of time to detect trends in atmospheric water vapor over the coming century. We address the same question and conclude that under the most optimistic scenarios and assuming perfect data (i.e., observations with no measurement uncertainty) the time to detect trends will be at least 12 years at approximately 200 hPa in the upper troposphere. Our times to detect trends are therefore shorter than those recently reported and this difference is affected by data sources used, method of processing the data, geographic location and pressure level in the atmosphere where the analyses were performed. We then consider the question of how instrumental uncertainty plays into the assessment of time to detect trends. We conclude that due to the high natural variability in atmospheric water vapor, the amount of time to detect trends in the upper troposphere is relatively insensitive to instrumental random uncertainty and that it is much more important to increase the frequency of measurement than to decrease the random error in the measurement. This is put in the context of international networks such as the Global Climate Observing System (GCOS) Reference Upper-Air Network (GRUAN) and the Network for the Detection of Atmospheric Composition Change (NDACC) that are tasked with developing time series of climate quality water vapor data.

  15. Correcting for bias in relative risk estimates due to exposure measurement error: a case study of occupational exposure to antineoplastics in pharmacists.

    PubMed Central

    Spiegelman, D; Valanis, B

    1998-01-01

    OBJECTIVES: This paper describes 2 statistical methods designed to correct for bias from exposure measurement error in point and interval estimates of relative risk. METHODS: The first method takes the usual point and interval estimates of the log relative risk obtained from logistic regression and corrects them for nondifferential measurement error using an exposure measurement error model estimated from validation data. The second, likelihood-based method fits an arbitrary measurement error model suitable for the data at hand and then derives the model for the outcome of interest. RESULTS: Data from Valanis and colleagues' study of the health effects of antineoplastics exposure among hospital pharmacists were used to estimate the prevalence ratio of fever in the previous 3 months from this exposure. For an interdecile increase in weekly number of drugs mixed, the prevalence ratio, adjusted for confounding, changed from 1.06 to 1.17 (95% confidence interval [CI] = 1.04, 1.26) after correction for exposure measurement error. CONCLUSIONS: Exposure measurement error is often an important source of bias in public health research. Methods are available to correct such biases. PMID:9518972
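    The first method described (correcting a naive log relative risk for nondifferential measurement error using validation data) is often implemented as regression calibration. The sketch below shows the basic univariate idea, dividing the naive estimate by an attenuation factor estimated from validation data; variable names and the simulated numbers are our own, and this is a schematic of the general technique rather than the authors' code.

```python
import numpy as np

def regression_calibration_correct(beta_naive, se_naive, x_true_valid, w_surrogate_valid):
    """Correct a naive log relative risk for nondifferential exposure measurement
    error using a simple univariate regression-calibration factor estimated from
    validation data (schematic sketch, not the authors' likelihood-based method)."""
    w = np.asarray(w_surrogate_valid, dtype=float)
    x = np.asarray(x_true_valid, dtype=float)
    # Attenuation factor: slope of true exposure regressed on the surrogate.
    lam = np.cov(w, x)[0, 1] / np.var(w, ddof=1)
    beta_corrected = beta_naive / lam
    se_corrected = se_naive / abs(lam)   # ignores uncertainty in lam itself
    return beta_corrected, se_corrected

# Example with simulated validation data (illustrative numbers only).
rng = np.random.default_rng(0)
x = rng.normal(size=200)                 # "true" exposure
w = x + rng.normal(scale=0.5, size=200)  # error-prone surrogate
print(regression_calibration_correct(np.log(1.06), 0.03, x, w))
```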

  16. Individual differences in reward prediction error: contrasting relations between feedback-related negativity and trait measures of reward sensitivity, impulsivity and extraversion

    PubMed Central

    Cooper, Andrew J.; Duke, Éilish; Pickering, Alan D.; Smillie, Luke D.

    2014-01-01

    Medial-frontal negativity occurring ∼200–300 ms post-stimulus in response to motivationally salient stimuli, usually referred to as feedback-related negativity (FRN), appears to be at least partly modulated by dopaminergic-based reward prediction error (RPE) signaling. Previous research (e.g., Smillie et al., 2011) has shown that higher scores on a putatively dopaminergic-based personality trait, extraversion, were associated with a more pronounced difference wave contrasting unpredicted non-reward and unpredicted reward trials on an associative learning task. In the current study, we sought to extend this research by comparing how trait measures of reward sensitivity, impulsivity and extraversion related to the FRN using the same associative learning task. A sample of healthy adults (N = 38) completed a battery of personality questionnaires, before completing the associative learning task while EEG was recorded. As expected, FRN was most negative following unpredicted non-reward. A difference wave contrasting unpredicted non-reward and unpredicted reward trials was calculated. Extraversion, but not measures of impulsivity, had a significant association with this difference wave. Further, the difference wave was significantly related to a measure of anticipatory pleasure, but not consummatory pleasure. These findings provide support for the existing evidence suggesting that variation in dopaminergic functioning in brain “reward” pathways may partially underpin associations between the FRN and trait measures of extraversion and anticipatory pleasure. PMID:24808845

  17. Refractive Errors

    MedlinePlus

    ... and lens of your eye helps you focus. Refractive errors are vision problems that happen when the shape ... cornea, or aging of the lens. Four common refractive errors are Myopia, or nearsightedness - clear vision close up ...

  18. Effect of ephemeris errors on the accuracy of the computation of the tangent point altitude of a solar scanning ray as measured by the SAGE 1 and 2 instruments

    NASA Technical Reports Server (NTRS)

    Buglia, James J.

    1989-01-01

    An analysis was made of the error in the minimum altitude of a geometric ray from an orbiting spacecraft to the Sun. The sunrise and sunset errors are highly correlated and are opposite in sign. With the ephemeris generated for the SAGE 1 instrument data reduction, these errors can be as large as 200 to 350 meters (1 sigma) after 7 days of orbit propagation. The bulk of this error results from errors in the position of the orbiting spacecraft rather than errors in computing the position of the Sun. These errors, in turn, result from the discontinuities in the ephemeris tapes resulting from the orbital determination process. Data taken from the end of the definitive ephemeris tape are used to generate the predict data for the time interval covered by the next arc of the orbit determination process. The predicted data are then updated by using the tracking data. The growth of these errors is very nearly linear, with a slight nonlinearity caused by the beta angle. An approximate analytic method is given, which predicts the magnitude of the errors and their growth in time with reasonable fidelity.

  19. Single-plane versus three-plane methods for relative range error evaluation of medium-range 3D imaging systems

    NASA Astrophysics Data System (ADS)

    MacKinnon, David K.; Cournoyer, Luc; Beraldin, J.-Angelo

    2015-05-01

    Within the context of the ASTM E57 working group WK12373, we compare the two methods that had been initially proposed for calculating the relative range error of medium-range (2 m to 150 m) optical non-contact 3D imaging systems: the first is based on a single plane (single-plane assembly) and the second on an assembly of three mutually non-orthogonal planes (three-plane assembly). Both methods are evaluated for their utility in generating a metric to quantify the relative range error of medium-range optical non-contact 3D imaging systems. We conclude that the three-plane assembly is comparable to the single-plane assembly with regard to quantification of relative range error while eliminating the requirement to isolate the edges of the target plate face.
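    Both target designs ultimately reduce to fitting planes to the measured points and summarizing the residual range deviations. A minimal sketch of that step is shown below, under the assumption that relative range error is summarized as an RMS of point-to-plane residuals; the exact ASTM E57 WK12373 metric is not spelled out in the abstract, so the function is illustrative only.

```python
import numpy as np

def plane_fit_rms_residual(points):
    """Fit a least-squares plane to an (N, 3) array of measured points and return
    the RMS point-to-plane residual -- one plausible summary of relative range
    error for a planar target (illustrative, not the exact ASTM metric)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # Plane normal = right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    residuals = (pts - centroid) @ normal
    return np.sqrt(np.mean(residuals ** 2))

# Example: noisy scan of a slightly tilted flat plate.
rng = np.random.default_rng(1)
xy = rng.uniform(0, 1, size=(500, 2))
z = 0.02 * xy[:, 0] + rng.normal(scale=0.001, size=500)
print(plane_fit_rms_residual(np.column_stack([xy, z])))
```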

  20. Scaling prediction errors to reward variability benefits error-driven learning in humans

    PubMed Central

    Schultz, Wolfram

    2015-01-01

    Effective error-driven learning requires individuals to adapt learning to environmental reward variability. The adaptive mechanism may involve decays in learning rate across subsequent trials, as shown previously, and rescaling of reward prediction errors. The present study investigated the influence of prediction error scaling and, in particular, the consequences for learning performance. Participants explicitly predicted reward magnitudes that were drawn from different probability distributions with specific standard deviations. By fitting the data with reinforcement learning models, we found scaling of prediction errors, in addition to the learning rate decay shown previously. Importantly, the prediction error scaling was closely related to learning performance, defined as accuracy in predicting the mean of reward distributions, across individual participants. In addition, participants who scaled prediction errors relative to standard deviation also presented with more similar performance for different standard deviations, indicating that increases in standard deviation did not substantially decrease “adapters'” accuracy in predicting the means of reward distributions. However, exaggerated scaling beyond the standard deviation resulted in impaired performance. Thus efficient adaptation makes learning more robust to changing variability. PMID:26180123
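    The adaptive mechanism the authors describe (rescaling prediction errors by the reward distribution's standard deviation, alongside a decaying learning rate) can be written as a small delta-rule update. The sketch below is a generic illustration of that model class; the parameter names and values are our own, not the paper's fitted model.

```python
import numpy as np

def scaled_delta_rule(rewards, sigma, alpha0=0.5, decay=0.05, scale=1.0):
    """Predict the mean of a reward distribution with a delta rule whose prediction
    errors are divided by (scale * sigma) and whose learning rate decays across
    trials. Generic sketch of adaptive-scaling models, not the study's exact model.
    scale > 1 mimics exaggerated scaling beyond the standard deviation."""
    estimate = 0.0
    estimates = []
    for t, r in enumerate(rewards):
        alpha = alpha0 / (1.0 + decay * t)          # learning-rate decay
        delta = (r - estimate) / (scale * sigma)    # scaled prediction error
        estimate += alpha * delta * sigma           # update in reward units
        estimates.append(estimate)
    return np.array(estimates)

rng = np.random.default_rng(2)
rewards = rng.normal(loc=10.0, scale=3.0, size=60)
print(scaled_delta_rule(rewards, sigma=3.0)[-5:])   # estimates approach the true mean (10)
```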

  1. Characteristics of patients making serious inhaler errors with a dry powder inhaler and association with asthma-related events in a primary care setting

    PubMed Central

    Westerik, Janine A. M.; Carter, Victoria; Chrystyn, Henry; Burden, Anne; Thompson, Samantha L.; Ryan, Dermot; Gruffydd-Jones, Kevin; Haughney, John; Roche, Nicolas; Lavorini, Federico; Papi, Alberto; Infantino, Antonio; Roman-Rodriguez, Miguel; Bosnic-Anticevich, Sinthia; Lisspers, Karin; Ställberg, Björn; Henrichsen, Svein Høegh; van der Molen, Thys; Hutton, Catherine; Price, David B.

    2016-01-01

    Abstract Objective: Correct inhaler technique is central to effective delivery of asthma therapy. The study aim was to identify factors associated with serious inhaler technique errors and their prevalence among primary care patients with asthma using the Diskus dry powder inhaler (DPI). Methods: This was a historical, multinational, cross-sectional study (2011–2013) using the iHARP database, an international initiative that includes patient- and healthcare provider-reported questionnaires from eight countries. Patients with asthma were observed for serious inhaler errors by trained healthcare providers as predefined by the iHARP steering committee. Multivariable logistic regression, stepwise reduced, was used to identify clinical characteristics and asthma-related outcomes associated with ≥1 serious errors. Results: Of 3681 patients with asthma, 623 (17%) were using a Diskus (mean [SD] age, 51 [14]; 61% women). A total of 341 (55%) patients made ≥1 serious errors. The most common errors were the failure to exhale before inhalation, insufficient breath-hold at the end of inhalation, and inhalation that was not forceful from the start. Factors significantly associated with ≥1 serious errors included asthma-related hospitalization the previous year (odds ratio [OR] 2.07; 95% confidence interval [CI], 1.26–3.40); obesity (OR 1.75; 1.17–2.63); poor asthma control the previous 4 weeks (OR 1.57; 1.04–2.36); female sex (OR 1.51; 1.08–2.10); and no inhaler technique review during the previous year (OR 1.45; 1.04–2.02). Conclusions: Patients with evidence of poor asthma control should be targeted for a review of their inhaler technique even when using a device thought to have a low error rate. PMID:26810934

  2. GP-B error modeling and analysis

    NASA Technical Reports Server (NTRS)

    Hung, J. C.

    1982-01-01

    Individual source errors and their effects on the accuracy of the Gravity Probe B (GP-B) experiment were investigated. Emphasis was placed on: (1) the refinement of source error identification and classifications of error according to their physical nature; (2) error analysis for the GP-B data processing; and (3) measurement geometry for the experiment.

  3. The surveillance error grid.

    PubMed

    Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris

    2014-07-01

    Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to

  4. Accuracy and reliability of GPS devices for measurement of sports-specific movement patterns related to cricket, tennis, and field-based team sports.

    PubMed

    Vickery, William M; Dascombe, Ben J; Baker, John D; Higham, Dean G; Spratford, Wayne A; Duffield, Rob

    2014-06-01

    The aim of this study was to determine the accuracy and reliability of 5, 10, and 15 Hz global positioning system (GPS) devices. Two male subjects (mean ± SD; age, 25.5 ± 0.7 years; height, 1.75 ± 0.01 m; body mass, 74 ± 5.7 kg) completed 10 repetitions of drills replicating movements typical of tennis, cricket, and field-based (football) sports. All movements were completed wearing two 5 and 10 Hz MinimaxX and two GPS-Sports 15 Hz GPS devices in a specially designed harness. Criterion movement data for distance and speed were provided from a 22-camera VICON system sampling at 100 Hz. Accuracy was determined using 1-way analysis of variance with Tukey's post hoc tests. Interunit reliability was determined using intraclass correlation (ICC), and typical error was estimated as coefficient of variation (CV). Overall, the majority of distance and speed measures obtained from the 5, 10, and 15 Hz GPS devices were not significantly different (p > 0.05) from the VICON data. Additionally, no improvements in the accuracy or reliability of GPS devices were observed with an increase in the sampling rate. However, the CV for the 5 and 15 Hz devices for distance and speed measures ranged between 3 and 33%, with increasing variability evident in higher speed zones. The majority of ICC measures possessed a low level of interunit reliability (r = -0.35 to 0.39). Based on these results, practitioners using these devices should be aware that measurements of distance and speed may be consistently underestimated, regardless of the movements performed.
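    The reliability statistics reported here (interunit ICC and typical error expressed as a coefficient of variation) can be computed from paired unit measurements with standard formulas. The sketch below handles the two-unit case; the data are hypothetical and the functions are generic, not the authors' analysis scripts.

```python
import numpy as np

def typical_error_cv(unit_a, unit_b):
    """Typical error of paired measurements as a coefficient of variation (%),
    computed as SD(differences)/sqrt(2) divided by the grand mean."""
    a, b = np.asarray(unit_a, float), np.asarray(unit_b, float)
    te = np.std(a - b, ddof=1) / np.sqrt(2.0)
    return 100.0 * te / np.mean(np.concatenate([a, b]))

def icc_2_1(unit_a, unit_b):
    """Two-way random-effects, absolute-agreement ICC(2,1) for two units."""
    y = np.column_stack([unit_a, unit_b]).astype(float)
    n, k = y.shape
    grand = y.mean()
    msr = k * np.sum((y.mean(axis=1) - grand) ** 2) / (n - 1)   # between subjects
    msc = n * np.sum((y.mean(axis=0) - grand) ** 2) / (k - 1)   # between units
    sse = np.sum((y - grand) ** 2) - (n - 1) * msr - (k - 1) * msc
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Example with two hypothetical GPS units measuring the same 10 trials (metres).
a = np.array([105.2, 98.7, 110.1, 102.3, 99.9, 108.4, 101.0, 97.5, 104.8, 106.6])
b = a + np.random.default_rng(3).normal(0, 2.5, size=10)
print(typical_error_cv(a, b), icc_2_1(a, b))
```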

  5. Negotiation Moves and Recasts in Relation to Error Types and Learner Repair in the Foreign Language Classroom.

    ERIC Educational Resources Information Center

    Morris, Frank A.

    2002-01-01

    Assessed the provision and use of implicit negative feedback in the interactional context of adult beginning learners of Spanish working in dyads in the foreign language classroom. Relationships among error types, feedback types, and immediate learner repair were also examined. Findings indicate learners did not provide explicit negative feedback…

  6. The relative degree of difficulty of L2 Spanish /d, t/, trill, and tap by L1 English speakers: Auditory and acoustic methods of defining pronunciation accuracy

    NASA Astrophysics Data System (ADS)

    Waltmunson, Jeremy C.

    2005-07-01

    This study investigated the L2 acquisition of Spanish word-medial /d, t, r, ɾ/, word-initial /r/, and onset-cluster /ɾ/. Two similar experiments were designed to address the relative degree of difficulty of the word-medial contrasts, as well as the effect of word position on /r/ and /ɾ/ accuracy scores. In addition, the effect of vowel height on the production of [r] and the L2 emergence of the svarabhakti vowel in onset-cluster /ɾ/ were investigated. Participants included 34 L1 English speakers from a range of L2 Spanish levels who were recorded in multiple sessions across a 6-month or 2-month period. The criteria for assessing segment accuracy were based on auditory and acoustic features found in productions by native Spanish speakers. In order to be scored as accurate, the L2 productions had to evidence both the auditory and acoustic features found in native speaker productions. L2 participant scores for each target were normalized in order to account for the variation of features found across native speaker productions. The results showed that word-medial accuracy scores followed two significant rankings (from lowest to highest): /r ≤ d ≤ ɾ ≤ t/ and /r ≤ ɾ ≤ d ≤ t/; however, when scores for /t/ included a voice onset time criterion, only the ranking /r ≤ ɾ ≤ d ≤ t/ was significant. These results suggest that /r/ is most difficult for learners while /t/ is least difficult, although individual variation was found. Regarding /r/, there was a strong effect of word position and vowel height on accuracy scores. For productions of /ɾ/, there was a strong effect of syllable position on accuracy scores. Acoustic analyses of taps in onset clusters revealed that only the experienced L2 Spanish participants demonstrated svarabhakti vowel emergence with native-like performance, suggesting that its emergence occurs relatively late in L2 acquisition.

  7. Skylab water balance error analysis

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1977-01-01

    Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and little can be gained from improvement in analytical accuracy. In addition, a propagation-of-error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.

  8. Relating indices of knowledge structure coherence and accuracy to skill-based performance: Is there utility in using a combination of indices?

    PubMed

    Schuelke, Matthew J; Day, Eric Anthony; McEntire, Lauren E; Boatman, Jazmine Espejo; Wang, Xiaoqian; Kowollik, Vanessa; Boatman, Paul R

    2009-07-01

    The authors examined the relative criterion-related validity of knowledge structure coherence and two accuracy-based indices (closeness and correlation) as well as the utility of using a combination of knowledge structure indices in the prediction of skill acquisition and transfer. Findings from an aggregation of 5 independent samples (N = 958) whose participants underwent training on a complex computer simulation indicated that coherence and the accuracy-based indices yielded comparable zero-order predictive validities. Support for the incremental validity of using a combination of indices was mixed; the most, albeit small, gain came in pairing coherence and closeness when predicting transfer. After controlling for baseline skill, general mental ability, and declarative knowledge, only coherence explained a statistically significant amount of unique variance in transfer. Overall, the results suggested that the different indices largely overlap in their representation of knowledge organization, but that coherence better reflects adaptable aspects of knowledge organization important to skill transfer.

  9. GEOSPATIAL DATA ACCURACY ASSESSMENT

    EPA Science Inventory

    The development of robust accuracy assessment methods for the validation of spatial data represents a difficult scientific challenge for the geospatial science community. The importance and timeliness of this issue are related directly to the dramatic escalation in the developmen...

  10. Refractive errors induced by displacement of intraocular lenses within the pseudophakic eye.

    PubMed

    Atchison, D A

    1989-03-01

    Simple methods were developed to estimate refractive errors when intraocular lenses are not fitted optimally within pseudophakic eyes. The accuracy of these methods was determined by comparing results obtained with them to results obtained by raytracing through a model eye. Accuracy was good for longitudinal displacement and tilting, and reasonable for transverse displacement. Refractive errors are related linearly to the magnitude of the longitudinal displacement, and are related to the square of the magnitude of tilt or transverse displacement. The refractive error upon transverse displacement is quadratically dependent upon lens shape. PMID:2717142
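    The functional forms reported above (refractive error linear in longitudinal displacement, quadratic in tilt and transverse displacement) can be captured in a tiny model. The coefficients in the sketch below are placeholders chosen only to illustrate the shape of the dependence; the paper's fitted values are not reproduced in the abstract.

```python
def iol_refractive_error(dz_mm=0.0, tilt_deg=0.0, dx_mm=0.0,
                         k_axial=1.25, k_tilt=0.005, k_decenter=0.1):
    """Refractive error (dioptres) induced by intraocular-lens displacement, using
    the functional forms reported in the abstract: linear in longitudinal shift,
    quadratic in tilt and transverse decentration. The k_* coefficients are
    placeholders, NOT values from the paper."""
    return k_axial * dz_mm + k_tilt * tilt_deg ** 2 + k_decenter * dx_mm ** 2

# 0.5 mm axial shift, 5 degrees of tilt, 1 mm of decentration (hypothetical case).
print(iol_refractive_error(dz_mm=0.5, tilt_deg=5.0, dx_mm=1.0))
```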

  11. Error Analysis

    NASA Astrophysics Data System (ADS)

    Scherer, Philipp O. J.

    Input data as well as the results of elementary operations have to be represented by machine numbers, the subset of real numbers which is used by the arithmetic unit of today's computers. Generally this generates rounding errors. This kind of numerical error can be avoided in principle by using arbitrary-precision arithmetic or symbolic algebra programs, but this is impractical in many cases due to the increase in computing time and memory requirements. Results from more complex operations like square roots or trigonometric functions can have even larger errors, since series expansions have to be truncated and iterations accumulate the errors of the individual steps. In addition, the precision of input data from an experiment is limited. In this chapter we study the influence of numerical errors on the uncertainties of the calculated results and the stability of simple algorithms.
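    The rounding errors described here are easy to demonstrate: repeatedly adding a decimal fraction that has no exact binary representation accumulates error, which compensated (Kahan) summation largely removes. The example below is a generic illustration, not taken from the chapter.

```python
def naive_sum(values):
    """Plain left-to-right summation; rounding error accumulates."""
    total = 0.0
    for v in values:
        total += v
    return total

def kahan_sum(values):
    """Compensated summation: carry the low-order bits lost in each addition."""
    total, comp = 0.0, 0.0
    for v in values:
        y = v - comp
        t = total + y
        comp = (t - total) - y
        total = t
    return total

values = [0.1] * 1_000_000          # 0.1 is not exactly representable in binary
print(naive_sum(values) - 100_000)  # visible accumulated rounding error
print(kahan_sum(values) - 100_000)  # far smaller residual error
```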

  12. Psychological masquerade embedded in a cluster of related clinical errors: Real practice, real solutions, and their scientific underpinnings.

    PubMed

    Spengler, Paul M; Miller, Deborah J; Spengler, Elliot S

    2016-09-01

    In this paper, we discuss the need for medical rule outs in over 50% of diagnoses and the risk for mental health practitioners to engage in a clinical judgment error called psychological masquerade (Taylor, 2007). We use the specific example of thyroid dysfunction as a relevant rule out when a client presents with symptoms consistent with an affective disorder. A real clinical example is provided and discussed to illustrate how the first author invoked psychological masquerade resulting in clinical decision-making errors during the treatment of a mother participating in family therapy. Solutions for this specific case and more generally for psychological masquerade are provided and discussed in the context of theory and research on mental health clinical decision-making. (PsycINFO Database Record) PMID:27631863

  13. A SEASAT SASS simulation experiment to quantify the errors related to a + or - 3 hour intermittent assimilation technique

    NASA Technical Reports Server (NTRS)

    Sylvester, W. B.

    1984-01-01

    A series of SEASAT repeat orbits over a sequence of best low-center positions is simulated by using the Seatrak satellite calculator. These low centers are, upon appropriate interpolation to hourly positions, located at various times during the + or - 3 hour assimilation cycle. Error analysis for a sample of best cyclone center positions taken from the Atlantic and Pacific oceans reveals a minimum average error of 1.1 deg of longitude and a standard deviation of 0.9 deg of longitude. The magnitude of the average error seems to suggest that by utilizing the + or - 3 hour window in the assimilation cycle, the quality of the SASS data is degraded to the level of the background. A further consequence of this assimilation scheme is the effect which is manifested as a result of the blending of two or more juxtaposed vector winds, generally possessing different properties (vector quantity and time). The outcome of this is to reduce gradients in the wind field and to deform isobaric and frontal patterns of the initial field.

  14. Interindividual variation in fornix microstructure and macrostructure is related to visual discrimination accuracy for scenes but not faces.

    PubMed

    Postans, Mark; Hodgetts, Carl J; Mundy, Matthew E; Jones, Derek K; Lawrence, Andrew D; Graham, Kim S

    2014-09-01

    Transection of the nonhuman primate fornix has been shown to impair learning of configurations of spatial features and object-in-scene memory. Although damage to the human fornix also results in memory impairment, it is not known whether there is a preferential involvement of this white-matter tract in spatial learning, as implied by animal studies. Diffusion-weighted MR images were obtained from healthy participants who had completed versions of a task in which they made rapid same/different discriminations to two categories of highly visually similar stimuli: (1) virtual reality scene pairs; and (2) face pairs. Diffusion-MRI measures of white-matter microstructure [fractional anisotropy (FA) and mean diffusivity (MD)] and macrostructure (tissue volume fraction, f) were then extracted from the fornix of each participant, which had been reconstructed using a deterministic tractography protocol. Fornix MD and f measures correlated with scene, but not face, discrimination accuracy in both discrimination tasks. A complementary voxelwise analysis using tract-based spatial statistics suggested the crus of the fornix as a focus for this relationship. These findings extend previous reports of spatial learning impairments after fornix transection in nonhuman primates, critically highlighting the fornix as a source of interindividual variation in scene discrimination in humans.

  15. SU-E-J-19: Accuracy of Dual-Energy CT-Derived Relative Electron Density for Proton Therapy Dose Calculation

    SciTech Connect

    Mullins, J; Duan, X; Kruse, J; Herman, M; Bues, M

    2014-06-01

    Purpose: To determine the suitability of dual-energy CT (DECT) to calculate relative electron density (RED) of tissues for accurate proton therapy dose calculation. Methods: DECT images of RED tissue surrogates were acquired at 80 and 140 kVp. Samples (RED=0.19−2.41) were imaged in a water-equivalent phantom in a variety of configurations. REDs were calculated using the DECT numbers and inputs of the high and low energy spectral weightings. DECT-derived RED was compared between geometric configurations and for variations in the spectral inputs to assess the sensitivity of RED accuracy versus expected values. Results: RED accuracy was dependent on accurate spectral input influenced by phantom thickness and radius from the phantom center. Material samples located at the center of the phantom generally showed the best agreement to reference RED values, but only when attenuation of the surrounding phantom thickness was accounted for in the calculation spectra. Calculated RED changed by up to 10% for some materials when the sample was located at an 11 cm radius from the phantom center. Calculated REDs under the best conditions still differed from reference values by up to 5% in bone and 14% in lung. Conclusion: DECT has previously been used to differentiate tissue types based on RED and Z for binary tissue-type segmentation. To improve upon the current standard of empirical conversion of CT number to RED for treatment planning dose calculation, DECT methods must be able to calculate RED to better than 3% accuracy throughout the image. The DECT method is sensitive to the accuracy of spectral inputs used for calculation, as well as to spatial position in the anatomy. Effort to address adjustments to the spectral calculation inputs based on position and phantom attenuation will be required before DECT-determined RED can achieve a consistent level of accuracy for application in dose calculation.

  16. Flow measurement by cardiovascular magnetic resonance: a multi-centre multi-vendor study of background phase offset errors that can compromise the accuracy of derived regurgitant or shunt flow measurements

    PubMed Central

    2010-01-01

    Aims Cardiovascular magnetic resonance (CMR) allows non-invasive phase contrast measurements of flow through planes transecting large vessels. However, some clinically valuable applications are highly sensitive to errors caused by small offsets of measured velocities if these are not adequately corrected, for example by the use of static tissue or static phantom correction of the offset error. We studied the severity of uncorrected velocity offset errors across sites and CMR systems. Methods and Results In a multi-centre, multi-vendor study, breath-hold through-plane retrospectively ECG-gated phase contrast acquisitions, as are used clinically for aortic and pulmonary flow measurement, were applied to static gelatin phantoms in twelve 1.5 T CMR systems, using a velocity encoding range of 150 cm/s. No post-processing corrections of offsets were implemented. The greatest uncorrected velocity offset, taken as an average over a 'great vessel' region (30 mm diameter) located up to 70 mm in-plane distance from the magnet isocenter, ranged from 0.4 cm/s to 4.9 cm/s. It averaged 2.7 cm/s over all the planes and systems. By theoretical calculation, a velocity offset error of 0.6 cm/s (representing just 0.4% of a 150 cm/s velocity encoding range) is barely acceptable, potentially causing about 5% miscalculation of cardiac output and up to 10% error in shunt measurement. Conclusion In the absence of hardware or software upgrades able to reduce phase offset errors, all the systems tested appeared to require post-acquisition correction to achieve consistently reliable breath-hold measurements of flow. The effectiveness of offset correction software will still need testing with respect to clinical flow acquisitions. PMID:20074359
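    The clinical impact of a small velocity offset scales with the vessel cross-sectional area: a uniform offset adds offset × area of spurious flow on every frame. The sketch below reproduces that arithmetic for the example numbers in the abstract (0.6 cm/s offset over a 30 mm 'great vessel' region); the nominal 5 L/min cardiac output used for the percentage is our own assumption.

```python
import math

def flow_error_from_offset(offset_cm_s, vessel_diameter_mm, cardiac_output_l_min=5.0):
    """Spurious flow (L/min) added by a uniform velocity offset over a vessel
    cross-section, and its size relative to a nominal cardiac output.
    The 5 L/min nominal output is an assumption, not a value from the paper."""
    radius_cm = vessel_diameter_mm / 20.0
    area_cm2 = math.pi * radius_cm ** 2
    spurious_l_min = offset_cm_s * area_cm2 * 60.0 / 1000.0   # cm^3/s -> L/min
    return spurious_l_min, 100.0 * spurious_l_min / cardiac_output_l_min

# 0.6 cm/s offset over a 30 mm region -> roughly 0.25 L/min, i.e. about 5% of output.
print(flow_error_from_offset(0.6, 30.0))
```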

  17. The Attribute Accuracy Assessment of Land Cover Data in the National Geographic Conditions Survey

    NASA Astrophysics Data System (ADS)

    Ji, X.; Niu, X.

    2014-04-01

    With the widespread national survey of geographic conditions, object-based data have become the most common data organization pattern in land cover research. Assessing the accuracy of object-based land cover data is tied to many stages of data production, such as the efficiency of in-house production and the quality of the final land cover data. There is therefore considerable demand for accuracy assessment of object-based classification maps. Traditional approaches to accuracy assessment in surveying and mapping are not aimed at land cover data, so accuracy assessment methods from imagery classification must be employed. However, traditional pixel-based accuracy assessment methods are inadequate for these requirements. The measures we propose are based on the error matrix but use objects as sample units, because pixel sample units are not suitable for assessing the accuracy of an object-based classification result. Compared with pixel samples, the uniformity of object samples changes. To make the indexes generated from the error matrix reliable, we use the areas of the object samples as weights when establishing the error matrix of the object-based image classification map. We compare the results of two error matrices, one built from the number of object samples and one from the sum of object sample areas. The error matrix using the sum of object sample areas proves to be an intuitive, useful technique for reflecting the actual accuracy of an object-based imagery classification result.
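    A minimal sketch of the area-weighted error matrix described above, assuming each object sample carries a reference class, a mapped class, and an area; the function, class names, and example values are illustrative only.

```python
import numpy as np

def area_weighted_error_matrix(reference, mapped, areas, classes):
    """Build an error (confusion) matrix in which each object sample contributes
    its area rather than a count, then derive overall, producer's and user's
    accuracy from it (illustrative sketch of the approach described above)."""
    idx = {c: i for i, c in enumerate(classes)}
    m = np.zeros((len(classes), len(classes)))
    for ref, mp, a in zip(reference, mapped, areas):
        m[idx[ref], idx[mp]] += a           # rows: reference, columns: mapped
    overall = np.trace(m) / m.sum()
    producers = np.diag(m) / m.sum(axis=1)  # per reference class
    users = np.diag(m) / m.sum(axis=0)      # per mapped class
    return m, overall, producers, users

# Example with a handful of hypothetical object samples (areas e.g. in hectares).
ref = ["forest", "forest", "water", "urban", "water", "urban"]
mp  = ["forest", "urban",  "water", "urban", "forest", "urban"]
area = [12.5, 3.1, 8.0, 5.5, 1.2, 6.3]
print(area_weighted_error_matrix(ref, mp, area, ["forest", "water", "urban"]))
```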

  18. Evaluation of the Accuracy and Related Factors of the Mechanical Torque-Limiting Device for Dental Implants

    PubMed Central

    Kazemi, Mahmood; Rohanian, Ahmad; Monzavi, Abbas; Nazari, Mohammad Sadegh

    2013-01-01

    Objective: Accurate delivery of torque to implant screws is critical to generate ideal preload in the screw joint and to offer protection against screw loosening. Mechanical torque-limiting devices (MTLDs) are available for this reason. In this study, the accuracy of one type of friction-style and two types of spring-style MTLDs at baseline, following fatigue conditions and sterilization processes were determined. Materials and Methods: Five unused MTLDs were selected from each of Straumann (ITI), Astra TECH and CWM systems. To measure the output of each MTLD, a digital torque gauge with a 3-jaw chuck was used to hold the driver. Force was applied to the MTLDs until either the friction styles released at a pre-calibrated torque value or the spring styles flexed to a pre-calibrated limit (target torque value). The peak torque value was recorded and the procedure was repeated 5 times for each MTLD. Then MTLDs were subjected to fatigue conditions at 500 and 1000 times and steam sterilization processes at 50 and 100 times and the peak torque value was recorded again at each stage. Results: Adjusted difference between measured torque values and target torque values differed significantly between stages for all 3 systems. Adjusted difference did not differ significantly between systems at all stages, but differed significantly between two different styles at baseline and 500 times fatigue stages. Conclusion: Straumann (ITI) devices differed minimally from target torque values at all stages. MTLDs with Spring-style were significantly more accurate than Friction-style device in achieving their target torque values at baseline and 500 times fatigue. PMID:23724209

  19. Rapid mapping of volumetric machine errors using distance measurements

    SciTech Connect

    Krulewich, D.A.

    1998-04-01

    This paper describes a relatively inexpensive, fast, and easy-to-execute approach to mapping the volumetric errors of a machine tool, coordinate measuring machine, or robot. An error map is used to characterize a machine or to improve its accuracy by compensating for the systematic errors. The method consists of three steps: (1) modeling the relationship between the volumetric error and the current state of the machine; (2) acquiring error data based on distance measurements throughout the work volume; and (3) fitting the error model using the nonlinear equation for the distance. The error model is formulated from the kinematic relationship among the six degrees of freedom of error on each moving axis. Expressing each parametric error as a function of position, the errors are combined to predict the error between the functional point and the workpiece, also as a function of position. A series of distances between several fixed base locations and various functional points in the work volume is measured using a Laser Ball Bar (LBB). Each measured distance is a nonlinear function dependent on the commanded location of the machine, the machine error, and the locations of the base points. Using the error model, the nonlinear equation is solved, producing a fit for the error model. Also note that, given approximate distances between each pair of base locations, the exact base locations in the machine coordinate system are determined during the nonlinear fitting procedure. Furthermore, with the use of more than three base locations, bias error in the measuring instrument can be removed. The volumetric errors of a three-axis commercial machining center have been mapped using this procedure. In this study, only errors associated with the nominal position of the machine were considered. Other errors such as thermally induced and load-induced errors were not considered, although the mathematical model has the ability to account for these errors. Due to the proprietary nature of the projects we are
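    The fitting step can be sketched with a generic nonlinear least-squares solver: the residual for each observation is the difference between the measured distance and the distance predicted from the commanded position plus a parameterized error model. The toy error model below (per-axis scale errors only) and all names are illustrative assumptions, not the LBB method's actual model.

```python
import numpy as np
from scipy.optimize import least_squares

def predicted_error(params, xyz):
    """Toy volumetric error model: per-axis linear scale errors only.
    A real model would include straightness, angular and squareness terms."""
    scale = params.reshape(3)
    return xyz * scale                      # error vector at each commanded point

def residuals(params, commanded_xyz, base_xyz, measured_dist):
    """Residual = predicted distance from base to (commanded + error) - measured distance."""
    actual = commanded_xyz + predicted_error(params, commanded_xyz)
    pred = np.linalg.norm(actual - base_xyz, axis=1)
    return pred - measured_dist

# Simulated data: one base location, 200 commanded points, known scale errors (mm).
rng = np.random.default_rng(4)
commanded = rng.uniform(0, 500, size=(200, 3))
base = np.array([600.0, -100.0, 50.0])
true_scale = np.array([5e-5, -3e-5, 8e-5])
measured = np.linalg.norm(commanded * (1 + true_scale) - base, axis=1)
measured += rng.normal(scale=0.001, size=200)            # ~1 micron measurement noise

fit = least_squares(residuals, x0=np.zeros(3), args=(commanded, base, measured))
print(fit.x)   # recovered scale errors, close to true_scale
```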

  20. Determination of GPS orbits to submeter accuracy

    NASA Technical Reports Server (NTRS)

    Bertiger, W. I.; Lichten, S. M.; Katsigris, E. C.

    1988-01-01

    Orbits for satellites of the Global Positioning System (GPS) were determined with submeter accuracy. Tests used to assess orbital accuracy include orbit comparisons from independent data sets, orbit prediction, ground baseline determination, and formal errors. One satellite tracked 8 hours each day shows rms error below 1 m even when predicted more than 3 days outside of a 1-week data arc. Differential tracking of the GPS satellites in high Earth orbit provides a powerful relative positioning capability, even when a relatively small continental U.S. fiducial tracking network is used with less than one-third of the full GPS constellation. To demonstrate this capability, baselines of up to 2000 km in North America were also determined with the GPS orbits. The 2000 km baselines show rms daily repeatability of 0.3 to 2 parts in 10 to the 8th power and agree with very long baseline interferometry (VLBI) solutions at the level of 1.5 parts in 10 to the 8th power. This GPS demonstration provides an opportunity to test different techniques for high-accuracy orbit determination for high Earth orbiters. The best GPS orbit strategies included data arcs of at least 1 week, process noise models for tropospheric fluctuations, estimation of GPS solar pressure coefficients, and combined processing of GPS carrier phase and pseudorange data. For data arcs of 2 weeks, constrained process noise models for GPS dynamic parameters significantly improved the solutions.

  1. Medication Errors

    MedlinePlus

    ... to reduce the risk of medication errors to industry and others at FDA. Additionally, DMEPA prospectively reviews ... List of Abbreviations Regulations and Guidances Guidance for Industry: Safety Considerations for Product Design to Minimize Medication ...

  2. Medication Errors

    MedlinePlus

    Medicines cure infectious diseases, prevent problems from chronic diseases, and ease pain. But medicines can also cause harmful reactions if not used ... You can help prevent errors by Knowing your medicines. Keep a list of the names of your ...

  3. TU-C-BRE-07: Quantifying the Clinical Impact of VMAT Delivery Errors Relative to Prior Patients’ Plans and Adjusted for Anatomical Differences

    SciTech Connect

    Stanhope, C; Wu, Q; Yuan, L; Liu, J; Hood, R; Yin, F; Adamson, J

    2014-06-15

    -arc VMAT plans for low-risk prostate are relatively insensitive to many potential delivery errors.

  4. Inertial Measures of Motion for Clinical Biomechanics: Comparative Assessment of Accuracy under Controlled Conditions – Changes in Accuracy over Time

    PubMed Central

    Lebel, Karina; Boissy, Patrick; Hamel, Mathieu; Duval, Christian

    2015-01-01

    Background Interest in 3D inertial motion tracking devices (AHRS) has been growing rapidly among the biomechanical community. Although the convenience of such tracking devices seems to open a whole new world of possibilities for evaluation in clinical biomechanics, its limitations haven’t been extensively documented. The objectives of this study are: 1) to assess the change in absolute and relative accuracy of multiple units of 3 commercially available AHRS over time; and 2) to identify different sources of errors affecting AHRS accuracy and to document how they may affect the measurements over time. Methods This study used an instrumented Gimbal table on which AHRS modules were carefully attached and put through a series of velocity-controlled sustained motions including 2 minutes motion trials (2MT) and 12 minutes multiple dynamic phases motion trials (12MDP). Absolute accuracy was assessed by comparison of the AHRS orientation measurements to those of an optical gold standard. Relative accuracy was evaluated using the variation in relative orientation between modules during the trials. Findings Both absolute and relative accuracy decreased over time during 2MT. 12MDP trials showed a significant decrease in accuracy over multiple phases, but accuracy could be enhanced significantly by resetting the reference point and/or compensating for initial Inertial frame estimation reference for each phase. Interpretation The variation in AHRS accuracy observed between the different systems and with time can be attributed in part to the dynamic estimation error, but also and foremost, to the ability of AHRS units to locate the same Inertial frame. Conclusions Mean accuracies obtained under the Gimbal table sustained conditions of motion suggest that AHRS are promising tools for clinical mobility assessment under constrained conditions of use. However, improvement in magnetic compensation and alignment between AHRS modules are desirable in order for AHRS to reach their

  5. Dissociated roles of the anterior cingulate cortex in reward and conflict processing as revealed by the feedback error-related negativity and N200.

    PubMed

    Baker, Travis E; Holroyd, Clay B

    2011-04-01

    The reinforcement learning theory of the error-related negativity (ERN) holds that the impact of reward signals carried by the midbrain dopamine system modulates activity of the anterior cingulate cortex (ACC), alternatively disinhibiting and inhibiting the ACC following unpredicted error and reward events, respectively. According to a recent formulation of the theory, activity that is intrinsic to the ACC produces a component of the event-related brain potential (ERP) called the N200, and following unpredicted rewards, the N200 is suppressed by extrinsically applied positive dopamine reward signals, resulting in an ERP component called the feedback-ERN (fERN). Here we demonstrate that, despite extensive spatial and temporal overlap between the two ERP components, the functional processes indexed by the N200 (conflict) and the fERN (reward) are dissociable. These results point toward avenues for future investigation.

  6. Dissociated roles of the anterior cingulate cortex in reward and conflict processing as revealed by the feedback error-related negativity and N200.

    PubMed

    Baker, Travis E; Holroyd, Clay B

    2011-04-01

    The reinforcement learning theory of the error-related negativity (ERN) holds that the impact of reward signals carried by the midbrain dopamine system modulates activity of the anterior cingulate cortex (ACC), alternatively disinhibiting and inhibiting the ACC following unpredicted error and reward events, respectively. According to a recent formulation of the theory, activity that is intrinsic to the ACC produces a component of the event-related brain potential (ERP) called the N200, and following unpredicted rewards, the N200 is suppressed by extrinsically applied positive dopamine reward signals, resulting in an ERP component called the feedback-ERN (fERN). Here we demonstrate that, despite extensive spatial and temporal overlap between the two ERP components, the functional processes indexed by the N200 (conflict) and the fERN (reward) are dissociable. These results point toward avenues for future investigation. PMID:21295109

  7. Prospective Relations among Fearful Temperament, Protective Parenting, and Social Withdrawal: The Role of Maternal Accuracy in a Moderated Mediation Framework

    ERIC Educational Resources Information Center

    Kiel, Elizabeth J.; Buss, Kristin A.

    2011-01-01

    Early social withdrawal and protective parenting predict a host of negative outcomes, warranting examination of their development. Mothers' accurate anticipation of their toddlers' fearfulness may facilitate transactional relations between toddler fearful temperament and protective parenting, leading to these outcomes. Currently, we followed 93…

  8. Medial Prefrontal Functional Connectivity--Relation to Memory Self-Appraisal Accuracy in Older Adults with and without Memory Disorders

    ERIC Educational Resources Information Center

    Ries, Michele L.; McLaren, Donald G.; Bendlin, Barbara B.; Xu, Guofan; Rowley, Howard A.; Birn, Rasmus; Kastman, Erik K.; Sager, Mark A.; Asthana, Sanjay; Johnson, Sterling C.

    2012-01-01

    It is tentatively estimated that 25% of people with early Alzheimer's disease (AD) show impaired awareness of disease-related changes in their own cognition. Research examining both normative self-awareness and altered awareness resulting from brain disease or injury points to the central role of the medial prefrontal cortex (MPFC) in generating…

  9. Estimation of an unexpected-overlooking error by means of the single eye fixation related potential analysis with wavelet transform filter.

    PubMed

    Matsuo, N; Ohkita, Y; Tomita, Y; Honda, S; Matsunaga, K

    2001-04-01

    An unexpected-overlooking error, which causes a failure to notice events near the peripheral vision, is one of the accident factors in driving behavior. We estimated how the unexpected-overlooking error affected the amplitude of the lambda wave in the eye fixation related potential (EFRP). Four subjects participated in the experiment. Each subject was required to press the right or left switch according to the given task: the right switch when the blue dot appeared in the right detection area, or the left switch when the red dot appeared there. The single-trial data from Pz, referred to both earlobes, were analyzed by means of a wavelet transform (WT) filter. The difference in lambda amplitude between the corrected data was subjected to analysis of variance. Three subjects showed a significant effect (P<0.01 or P<0.05); the remaining subject did not show a significant result, having committed only two errors. The unexpected-overlooking errors showed a low amplitude compared with the mean amplitude throughout the task. It was concluded that the amplitude of the lambda wave might reflect the attention level of a subject.

  10. Neural response to errors in combat-exposed returning veterans with and without post-traumatic stress disorder: a preliminary event-related potential study.

    PubMed

    Rabinak, Christine A; Holman, Alexis; Angstadt, Mike; Kennedy, Amy E; Hajcak, Greg; Phan, Kinh Luan

    2013-07-30

    Post-traumatic stress disorder (PTSD) is characterized by sustained anxiety, hypervigilance for potential threat, and hyperarousal. These symptoms may enhance self-perception of one's actions, particularly the detection of errors, which may threaten safety. The error-related negativity (ERN) is an electrocortical response to the commission of errors, and previous studies have shown that other anxiety disorders associated with exaggerated anxiety and enhanced action monitoring exhibit an enhanced ERN. However, little is known about how traumatic experience and PTSD would affect the ERN. To address this gap, we measured the ERN in returning Operation Enduring Freedom/Operation Iraqi Freedom (OEF/OIF) veterans with combat-related PTSD (PTSD group), combat-exposed OEF/OIF veterans without PTSD [combat-exposed control (CEC) group], and non-traumatized healthy participants [healthy control (HC) group]. Event-related potential and behavioral measures were recorded while 16 PTSD patients, 18 CEC, and 16 HC participants completed an arrow version of the flanker task. No difference in the magnitude of the ERN was observed between the PTSD and HC groups; however, in comparison with the PTSD and HC groups, the CEC group displayed a blunted ERN response. These findings suggest that (1) combat trauma itself does not affect the ERN response; (2) PTSD is not associated with an abnormal ERN response; and (3) an attenuated ERN in those previously exposed to combat trauma but who have not developed PTSD may reflect resilience to the disorder, less motivation to do the task, or a decrease in the significance or meaningfulness of 'errors,' which could be related to combat experience.

  11. Moderation of the Relationship Between Reward Expectancy and Prediction Error-Related Ventral Striatal Reactivity by Anhedonia in Unmedicated Major Depressive Disorder: Findings From the EMBARC Study

    PubMed Central

    Greenberg, Tsafrir; Chase, Henry W.; Almeida, Jorge R.; Stiffler, Richelle; Zevallos, Carlos R.; Aslam, Haris A.; Deckersbach, Thilo; Weyandt, Sarah; Cooper, Crystal; Toups, Marisa; Carmody, Thomas; Kurian, Benji; Peltier, Scott; Adams, Phillip; McInnis, Melvin G.; Oquendo, Maria A.; McGrath, Patrick J.; Fava, Maurizio; Weissman, Myrna; Parsey, Ramin; Trivedi, Madhukar H.; Phillips, Mary L.

    2016-01-01

    Objective Anhedonia, disrupted reward processing, is a core symptom of major depressive disorder. Recent findings demonstrate altered reward-related ventral striatal reactivity in depressed individuals, but the extent to which this is specific to anhedonia remains poorly understood. The authors examined the effect of anhedonia on reward expectancy (expected outcome value) and prediction error-(discrepancy between expected and actual outcome) related ventral striatal reactivity, as well as the relationship between these measures. Method A total of 148 unmedicated individuals with major depressive disorder and 31 healthy comparison individuals recruited for the multisite EMBARC (Establishing Moderators and Biosignatures of Antidepressant Response in Clinical Care) study underwent functional MRI during a well-validated reward task. Region of interest and whole-brain data were examined in the first- (N=78) and second- (N=70) recruited cohorts, as well as the total sample, of depressed individuals, and in healthy individuals. Results Healthy, but not depressed, individuals showed a significant inverse relationship between reward expectancy and prediction error-related right ventral striatal reactivity. Across all participants, and in depressed individuals only, greater anhedonia severity was associated with a reduced reward expectancy-prediction error inverse relationship, even after controlling for other symptoms. Conclusions The normal reward expectancy and prediction error-related ventral striatal reactivity inverse relationship concords with conditioning models, predicting a shift in ventral striatal responding from reward outcomes to reward cues. This study shows, for the first time, an absence of this relationship in two cohorts of unmedicated depressed individuals and a moderation of this relationship by anhedonia, suggesting reduced reward-contingency learning with greater anhedonia. These findings help elucidate neural mechanisms of anhedonia, as a step toward

  12. Maternal Expectations for Toddlers’ Reactions to Novelty: Relations of Maternal Internalizing Symptoms and Parenting Dimensions to Expectations and Accuracy of Expectations

    PubMed Central

    Kiel, Elizabeth J.; Buss, Kristin A.

    2010-01-01

    SYNOPSIS Objective Although maternal internalizing symptoms and parenting dimensions have been linked to reports and perceptions of children’s behavior, it remains relatively unknown whether these characteristics relate to expectations or the accuracy of expectations for toddlers’ responses to novel situations. Design A community sample of 117 mother-toddler dyads participated in a laboratory visit and questionnaire completion. At the laboratory, mothers were interviewed about their expectations for their toddlers’ behaviors in a variety of novel tasks; toddlers then participated in these activities, and trained coders scored their behaviors. Mothers completed questionnaires assessing demographics, depressive and worry symptoms, and parenting dimensions. Results Mothers who reported more worry expected their toddlers to display more fearful behavior during the laboratory tasks, but worry did not moderate how accurately maternal expectations predicted toddlers’ observed behavior. When also reporting a low level of authoritative-responsive parenting, maternal depressive symptoms moderated the association between maternal expectations and observed toddler behavior, such that, as depressive symptoms increased, maternal expectations related less strongly to toddler behavior. Conclusions When mothers were asked about their expectations for their toddlers’ behavior in the same novel situations from which experimenters observe this behavior, symptoms and parenting had minimal effect on the accuracy of mothers’ expectations. When in the context of low authoritative-responsive parenting, however, depressive symptoms related to less accurate predictions of their toddlers’ fearful behavior. PMID:21037974

  13. The slider motion error analysis by positive solution method in parallel mechanism

    NASA Astrophysics Data System (ADS)

    Ma, Xiaoqing; Zhang, Lisong; Zhu, Liang; Yang, Wenguo; Hu, Penghao

    2016-01-01

    The motion error of the slider plays a key role in the performance of a 3-PUU parallel coordinate measuring machine (CMM) and influences the CMM accuracy, which has attracted wide attention. Generally, the analysis method is based on the view of spatial 6-DOF motion. Here, a new analysis method is provided. First, the structural relation between the slider and the guideway can be abstracted as a 4-bar parallel mechanism, so the slider can be considered as the moving platform in a parallel kinematic mechanism (PKM). Its motion error analysis is thereby transformed into a moving-platform position analysis in the PKM. Then, after establishing the positive and negative solutions, existing theory and techniques for PKMs can be applied to analyze the slider's straightness motion error and angular motion error simultaneously. Third, experiments with an autocollimator are carried out to capture the original error data of the guideway itself; these data can be described as a straightness error function by fitting a curvilinear equation. Finally, the straightness errors of the two guideways are treated as variations of rod length in the parallel mechanism, and the slider's straightness error and angular error are obtained by putting the data into the established model. The calculated result is generally consistent with the experimental result. This idea will be beneficial for accuracy calibration and error correction of the 3-PUU CMM and also provides a new way to analyze the kinematic error of guideways in precision machine tools and precision instruments.

  14. SU-E-J-147: Monte Carlo Study of the Precision and Accuracy of Proton CT Reconstructed Relative Stopping Power Maps

    SciTech Connect

    Dedes, G; Asano, Y; Parodi, K; Arbor, N; Dauvergne, D; Testa, E; Letang, J; Rit, S

    2015-06-15

    Purpose: The quantification of the intrinsic performances of proton computed tomography (pCT) as a modality for treatment planning in proton therapy. The performance of an ideal pCT scanner is studied as a function of various parameters. Methods: Using GATE/Geant4, we simulated an ideal pCT scanner and scans of several cylindrical phantoms with various tissue equivalent inserts of different sizes. Insert materials were selected in order to be of clinical relevance. Tomographic images were reconstructed using a filtered backprojection algorithm taking into account the scattering of protons into the phantom. To quantify the performance of the ideal pCT scanner, we study the precision and the accuracy with respect to the theoretical relative stopping power ratios (RSP) values for different beam energies, imaging doses, insert sizes and detector positions. The planning range uncertainty resulting from the reconstructed RSP is also assessed by comparison with the range of the protons in the analytically simulated phantoms. Results: The results indicate that pCT can intrinsically achieve RSP resolution below 1%, for most examined tissues at beam energies below 300 MeV and for imaging doses around 1 mGy. RSP maps accuracy of less than 0.5 % is observed for most tissue types within the studied dose range (0.2–1.5 mGy). Finally, the uncertainty in the proton range due to the accuracy of the reconstructed RSP map is well below 1%. Conclusion: This work explores the intrinsic performance of pCT as an imaging modality for proton treatment planning. The obtained results show that under ideal conditions, 3D RSP maps can be reconstructed with an accuracy better than 1%. Hence, pCT is a promising candidate for reducing the range uncertainties introduced by the use of X-ray CT alongside with a semiempirical calibration to RSP.Supported by the DFG Cluster of Excellence Munich-Centre for Advanced Photonics (MAP)

  15. An Implicit Measure of Associations with Mental Illness versus Physical Illness: Response Latency Decomposition and Stimuli Differential Functioning in Relation to IAT Order of Associative Conditions and Accuracy

    PubMed Central

    Mannarini, Stefania; Boffo, Marilisa

    2014-01-01

    The present study aimed at the definition of a latent measurement dimension underlying an implicit measure of automatic associations between the concept of mental illness and the psychosocial and biogenetic causal explanatory attributes. To this end, an Implicit Association Test (IAT) assessing the association between the Mental Illness and Physical Illness target categories to the Psychological and Biologic attribute categories, representative of the causal explanation domains, was developed. The IAT presented 22 stimuli (words and pictures) to be categorized into the four categories. After 360 university students completed the IAT, a Many-Facet Rasch Measurement (MFRM) modelling approach was applied. The model specified a person latency parameter and a stimulus latency parameter. Two additional parameters were introduced to denote the order of presentation of the task associative conditions and the general response accuracy. Beyond the overall definition of the latent measurement dimension, the MFRM was also applied to disentangle the effect of the task block order and the general response accuracy on the stimuli response latency. Further, the MFRM allowed detecting any differential functioning of each stimulus in relation to both block ordering and accuracy. The results evidenced: a) the existence of a latency measurement dimension underlying the Mental Illness versus Physical Illness - Implicit Association Test; b) significant effects of block order and accuracy on the overall latency; c) a differential functioning of specific stimuli. The results of the present study can contribute to a better understanding of the functioning of an implicit measure of semantic associations with mental illness and give a first blueprint for the examination of relevant issues in the development of an IAT. PMID:25000406

  16. Abnormal error processing in depressive states: a translational examination in humans and rats.

    PubMed

    Beard, C; Donahue, R J; Dillon, D G; Van't Veer, A; Webber, C; Lee, J; Barrick, E; Hsu, K J; Foti, D; Carroll, F I; Carlezon, W A; Björgvinsson, T; Pizzagalli, D A

    2015-05-12

    Depression has been associated with poor performance following errors, but the clinical implications, response to treatment and neurobiological mechanisms of this post-error behavioral adjustment abnormality remain unclear. To fill this gap in knowledge, we tested depressed patients in a partial hospital setting before and after treatment (cognitive behavior therapy combined with medication) using a flanker task. To evaluate the translational relevance of this metric in rodents, we performed a secondary analysis on existing data from rats tested in the 5-choice serial reaction time task after treatment with corticotropin-releasing factor (CRF), a stress peptide that produces depressive-like signs in rodent models relevant to depression. In addition, to examine the effect of treatment on post-error behavior in rodents, we examined a second cohort of rodents treated with JDTic, a kappa-opioid receptor antagonist that produces antidepressant-like effects in laboratory animals. In depressed patients, baseline post-error accuracy was lower than post-correct accuracy, and, as expected, post-error accuracy improved with treatment. Moreover, baseline post-error accuracy predicted attentional control and rumination (but not depressive symptoms) after treatment. In rats, CRF significantly degraded post-error accuracy, but not post-correct accuracy, and this effect was attenuated by JDTic. Our findings demonstrate deficits in post-error accuracy in depressed patients, as well as a rodent model relevant to depression. These deficits respond to intervention in both species. Although post-error behavior predicted treatment-related changes in attentional control and rumination, a relationship to depressive symptoms remains to be demonstrated.
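
    The key behavioral metric here, post-error versus post-correct accuracy, is straightforward to compute from a sequence of trial outcomes. A minimal sketch, assuming trials are coded 1 (correct) and 0 (error); the toy sequence is illustrative data, not the study's:

```python
import numpy as np

def post_trial_accuracy(correct):
    """Return (post-error accuracy, post-correct accuracy) for a sequence
    of trial outcomes coded 1 = correct, 0 = error."""
    correct = np.asarray(correct, dtype=int)
    prev, curr = correct[:-1], correct[1:]
    post_error = curr[prev == 0]
    post_correct = curr[prev == 1]
    pe = post_error.mean() if post_error.size else np.nan
    pc = post_correct.mean() if post_correct.size else np.nan
    return pe, pc

# Toy flanker-task outcome sequence (illustrative only).
rng = np.random.default_rng(0)
trials = rng.choice([0, 1], size=400, p=[0.15, 0.85])
pe, pc = post_trial_accuracy(trials)
print(f"post-error accuracy {pe:.2f} vs post-correct accuracy {pc:.2f}")
```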

  17. The Impact of Short-Term Science Teacher Professional Development on the Evaluation of Student Understanding and Errors Related to Natural Selection

    NASA Astrophysics Data System (ADS)

    Buschang, Rebecca Ellen

    This study evaluated the effects of a short-term professional development session. Forty volunteer high school biology teachers were randomly assigned to one of two professional development conditions: (a) developing deep content knowledge (i.e., control condition) or (b) evaluating student errors and understanding in writing samples (i.e., experimental condition). A pretest of content knowledge was administered, and then the participants in both conditions watched two hours of online videos about natural selection and attended different types of professional development sessions lasting four hours. The dependent variable measured teacher knowledge and skill related to evaluating student errors and understanding of natural selection. Significant differences between conditions in favor of the experimental condition were found on participant identification of critical elements of student understanding of natural selection and content knowledge related to natural selection. Results suggest that short-term professional development sessions focused on evaluating student errors and understanding can be effective at focusing a participant's evaluation of student work on particularly important elements of student understanding. Results have implications for understanding the types of knowledge necessary to effectively evaluate student work and for the design of professional development.

  18. Neural Correlates of Reach Errors

    PubMed Central

    Hashambhoy, Yasmin; Rane, Tushar; Shadmehr, Reza

    2005-01-01

    Reach errors may be broadly classified into errors arising from unpredictable changes in target location, called target errors, and errors arising from miscalibration of internal models, called execution errors. Execution errors may be caused by miscalibration of dynamics (e.g., when a force field alters limb dynamics) or by miscalibration of kinematics (e.g., when prisms alter visual feedback). While all types of errors lead to similar online corrections, we found that the motor system showed strong trial-by-trial adaptation in response to random execution errors but not in response to random target errors. We used fMRI and a compatible robot to study brain regions involved in processing each kind of error. Both kinematic and dynamic execution errors activated regions along the central and the post-central sulci and in lobules V, VI, and VIII of the cerebellum, making these areas possible sites of plastic changes in internal models for reaching. Only activity related to kinematic errors extended into parietal area 5. These results are inconsistent with the idea that kinematics and dynamics of reaching are computed in separate neural entities. In contrast, only target errors caused increased activity in the striatum and the posterior superior parietal lobule. The cerebellum and motor cortex were as strongly activated as with execution errors. These findings indicate a neural and behavioral dissociation between errors that lead to switching of behavioral goals, and errors that lead to adaptation of internal models of limb dynamics and kinematics. PMID:16251440

  19. Slope Error Measurement Tool for Solar Parabolic Trough Collectors: Preprint

    SciTech Connect

    Stynes, J. K.; Ihas, B.

    2012-04-01

    The National Renewable Energy Laboratory (NREL) has developed an optical measurement tool for parabolic solar collectors that measures the combined errors due to absorber misalignment and reflector slope error. The combined absorber alignment and reflector slope errors are measured using a digital camera to photograph the reflected image of the absorber in the collector. Previous work using the image of the reflection of the absorber finds the reflector slope errors from the reflection of the absorber and an independent measurement of the absorber location. The accuracy of the reflector slope error measurement is thus dependent on the accuracy of the absorber location measurement. By measuring the combined reflector-absorber errors, the uncertainty in the absorber location measurement is eliminated. The related performance merit, the intercept factor, depends on the combined effects of the absorber alignment and reflector slope errors. Measuring the combined effect provides a simpler measurement and a more accurate input to the intercept factor estimate. The minimal equipment and setup required for this measurement technique make it ideal for field measurements.

  20. Proofreading for word errors.

    PubMed

    Pilotti, Maura; Chodorow, Martin; Agpawa, Ian; Krajniak, Marta; Mahamane, Salif

    2012-04-01

    Proofreading (i.e., reading text for the purpose of detecting and correcting typographical errors) is viewed as a component of the activity of revising text and thus is a necessary (albeit not sufficient) procedural step for enhancing the quality of a written product. The purpose of the present research was to test competing accounts of word-error detection which predict factors that may influence reading and proofreading differently. Word errors, which change a word into another word (e.g., from --> form), were selected for examination because they are unlikely to be detected by automatic spell-checking functions. Consequently, their detection still rests mostly in the hands of the human proofreader. Findings highlighted the weaknesses of existing accounts of proofreading and identified factors, such as length and frequency of the error in the English language relative to frequency of the correct word, which might play a key role in detection of word errors.

  1. Virtual and Actual: Relative Accuracy of On-Site and Web-based Instruments in Auditing the Environment for Physical Activity

    PubMed Central

    Ben-Joseph, Eran; Lee, Jae Seung; Cromley, Ellen K.; Laden, Francine; Troped, Philip J.

    2015-01-01

    Objectives To assess the relative accuracy and usefulness of web tools in evaluating and measuring street-scale built environment characteristics. Methods A well-known audit tool was used to evaluate 84 street segments at the urban edge of metropolitan Boston, Massachusetts, using on-site visits and three web-based tools. The assessments were compared to evaluate their relative accuracy and usefulness. Results Web-based audits, based on Google Maps, Google Street View, and MS Visual Oblique, tend to strongly agree with on-site audits on land-use and transportation characteristics (e.g., types of buildings, commercial destinations, and streets). However, the two approaches to conducting audits (web versus on-site) tend to agree only weakly on fine-grain, temporal, and qualitative environmental elements. Among the web tools used, auditors rated MS Visual Oblique as the most valuable. Yet Street View tends to be rated as the most useful in measuring fine-grain features, such as levelness and condition of sidewalks. Conclusion While web-based tools do not offer a perfect substitute for on-site audits, they allow for preliminary audits to be performed accurately from remote locations, potentially saving time and cost and increasing the effectiveness of subsequent on-site visits. PMID:23247423
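
    Agreement between web-based and on-site audits of the same street segments is often summarized with a chance-corrected statistic such as Cohen's kappa; the abstract does not say which statistic was used, so the sketch below is only a generic illustration for one binary audit item with invented ratings.

```python
import numpy as np

def cohens_kappa(a, b):
    """Chance-corrected agreement between two binary raters."""
    a, b = np.asarray(a), np.asarray(b)
    po = np.mean(a == b)                      # observed agreement
    pa1, pb1 = a.mean(), b.mean()             # marginal "yes" rates
    pe = pa1 * pb1 + (1 - pa1) * (1 - pb1)    # agreement expected by chance
    return (po - pe) / (1 - pe)

# Illustrative ratings for 84 segments: 1 = feature present, 0 = absent.
rng = np.random.default_rng(1)
onsite = rng.choice([0, 1], size=84, p=[0.4, 0.6])
web = onsite.copy()
flip = rng.random(84) < 0.1                   # web auditor disagrees on ~10%
web[flip] = 1 - web[flip]

print(f"kappa = {cohens_kappa(onsite, web):.2f}")
```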

  2. Orbit IMU alignment: Error analysis

    NASA Technical Reports Server (NTRS)

    Corson, R. W.

    1980-01-01

    A comprehensive accuracy analysis of orbit inertial measurement unit (IMU) alignments using the shuttle star trackers was completed and the results are presented. Monte Carlo techniques were used in a computer simulation of the IMU alignment hardware and software systems to: (1) determine the expected Space Transportation System 1 Flight (STS-1) manual mode IMU alignment accuracy; (2) investigate the accuracy of alignments in later shuttle flights when the automatic mode of star acquisition may be used; and (3) verify that an analytical model previously used for estimating the alignment error is a valid model. The analysis results do not differ significantly from expectations. The standard deviation in the IMU alignment error for STS-1 alignments was determined to be 68 arc seconds per axis. This corresponds to a 99.7% probability that the magnitude of the total alignment error is less than 258 arc seconds.
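
    The relationship quoted above, a 68 arc-second per-axis standard deviation and a 99.7% bound of 258 arc seconds on the total alignment error magnitude, can be checked with a few lines of Monte Carlo. The sketch assumes independent zero-mean Gaussian errors on three axes, which is only an illustrative simplification of the actual hardware/software simulation.

```python
import numpy as np

rng = np.random.default_rng(42)
sigma_axis = 68.0          # arc seconds per axis (from the abstract)
n = 1_000_000

# Independent zero-mean Gaussian alignment errors on the three IMU axes
# (an assumption for illustration, not the full error model).
err = rng.normal(0.0, sigma_axis, size=(n, 3))
magnitude = np.linalg.norm(err, axis=1)

# With sigma = 68"/axis this lands near the 258" figure quoted above.
print(f"99.7th percentile of |total error|: "
      f"{np.percentile(magnitude, 99.7):.0f} arcsec")
```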

  3. Investigation of Error Patterns in Geographical Databases

    NASA Technical Reports Server (NTRS)

    Dryer, David; Jacobs, Derya A.; Karayaz, Gamze; Gronbech, Chris; Jones, Denise R. (Technical Monitor)

    2002-01-01

    The objective of the research conducted in this project is to develop a methodology to investigate the accuracy of Airport Safety Modeling Data (ASMD) using statistical, visualization, and Artificial Neural Network (ANN) techniques. Such a methodology can contribute to answering the following research questions: Over a representative sampling of ASMD databases, can statistical error analysis techniques be accurately learned and replicated by ANN modeling techniques? This representative ASMD sample should include numerous airports and a variety of terrain characterizations. Is it possible to identify and automate the recognition of patterns of error related to geographical features? Do such patterns of error relate to specific geographical features, such as elevation or terrain slope? Is it possible to combine the errors in small regions into an error prediction for a larger region? What are the data density reduction implications of this work? ASMD may be used as the source of terrain data for a synthetic visual system to be used in the cockpit of aircraft when visual reference to ground features is not possible during conditions of marginal weather or reduced visibility. In this research, United States Geologic Survey (USGS) digital elevation model (DEM) data has been selected as the benchmark. Artificial Neural Networks (ANNs) have been used and tested as alternate methods in place of the statistical methods in similar problems. They often perform better in pattern recognition, prediction, classification, and categorization problems. Many studies show that when the data is complex and noisy, the accuracy of ANN models is generally higher than those of comparable traditional methods.

  4. Combination of TOPEX/POSEIDON Data with a Hydrographic Inversion for Determination of the Oceanic General Circulation and its Relation to Geoid Accuracy

    NASA Technical Reports Server (NTRS)

    Ganachaud, Alexandre; Wunsch, Carl; Kim, Myung-Chan; Tapley, Byron

    1997-01-01

    A global estimate of the absolute oceanic general circulation from a geostrophic inversion of in situ hydrographic data is tested against and then combined with an estimate obtained from TOPEX/POSEIDON altimetric data and a geoid model computed using the JGM-3 gravity-field solution. Within the quantitative uncertainties of both the hydrographic inversion and the geoid estimate, the two estimates derived by very different methods are consistent. When the in situ inversion is combined with the altimetry/geoid scheme using a recursive inverse procedure, a new solution, fully consistent with both hydrography and altimetry, is found. There is, however, little reduction in the uncertainties of the calculated ocean circulation and its mass and heat fluxes because the best available geoid estimate remains noisy relative to the purely oceanographic inferences. The conclusion drawn from this is that the comparatively large errors present in the existing geoid models now limit the ability of satellite altimeter data to improve directly the general ocean circulation models derived from in situ measurements. Because improvements in the geoid could be realized through a dedicated spaceborne gravity recovery mission, the impact of hypothetical much better, future geoid estimates on the circulation uncertainty is also quantified, showing significant hypothetical reductions in the uncertainties of oceanic transport calculations. Full ocean general circulation models could better exploit both existing oceanographic data and future gravity-mission data, but their present use is severely limited by the inability to quantify their error budgets.

  5. Map accuracy

    USGS Publications Warehouse

    ,

    1981-01-01

    An inaccurate map is not a reliable map. "X" may mark the spot where the treasure is buried, but unless the seeker can locate "X" in relation to known landmarks or positions, the map is not very useful.

  6. On the Accuracy Evaluation of Ultrasonic Doppler Flowmeter

    SciTech Connect

    Tomoyuki Ohkubo; Yuji Tasaka; Yasushi Takeda; Michitsugu Mori

    2006-07-01

    The accuracy evaluation of a pipe flowmeter using an ultrasonic velocity profiler is investigated theoretically and experimentally. The error depends basically on the number of data points in the discretized velocity profile, but it decreases rapidly as the number of data points increases; relative to the theoretical velocity profile, it can be reduced below 1% with about 100 data points. The error arising from averaging the instantaneous flow rate increases with the fluctuation amplitude in the loop and decreases with its frequency and the total measurement time. A procedure to determine the total averaging time is proposed. (authors)
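
    The discretization error discussed above (the flow-rate error introduced by integrating a velocity profile sampled at a finite number of points across the pipe) can be reproduced for an idealized laminar, parabolic profile. The sketch below only illustrates that generic behaviour under that assumed profile; it is not the authors' evaluation procedure, and measurement noise would add further error.

```python
import numpy as np

R, v_avg = 0.05, 1.0                 # pipe radius [m], mean velocity [m/s]
Q_true = v_avg * np.pi * R**2        # exact flow rate

def q_discrete(n_points):
    """Flow rate from a laminar profile sampled at n_points along the radius,
    integrated with the midpoint rule over annular rings."""
    r = (np.arange(n_points) + 0.5) * R / n_points        # ring mid-radii
    v = 2.0 * v_avg * (1.0 - (r / R) ** 2)                # parabolic profile
    dA = 2.0 * np.pi * r * (R / n_points)                 # ring areas
    return np.sum(v * dA)

for n in (5, 10, 20, 50, 100):
    err = abs(q_discrete(n) - Q_true) / Q_true * 100.0
    print(f"{n:4d} points: flow-rate error {err:.3f} %")
```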

  7. Reading skill and neural processing accuracy improvement after a 3-hour intervention in preschoolers with difficulties in reading-related skills.

    PubMed

    Lovio, Riikka; Halttunen, Anu; Lyytinen, Heikki; Näätänen, Risto; Kujala, Teija

    2012-04-11

    This study aimed at determining whether an intervention game developed for strengthening phonological awareness has a remediating effect on reading skills and central auditory processing in 6-year-old preschool children with difficulties in reading-related skills. After a 3-hour training only, these children made a greater progress in reading-related skills than did their matched controls who did mathematical exercises following comparable training format. Furthermore, the results suggest that this brief intervention might be beneficial in modulating the neural basis of phonetic discrimination as an enhanced speech-elicited mismatch negativity (MMN) was seen in the intervention group, indicating improved cortical discrimination accuracy. Moreover, the amplitude increase of the vowel-elicited MMN significantly correlated with the improvement in some of the reading-skill related test scores. The results, albeit obtained with a relatively small sample, are encouraging, suggesting that reading-related skills can be improved even by a very short intervention and that the training effects are reflected in brain activity. However, studies with larger samples and different subgroups of children are needed to confirm the present results and to determine how children with different dyslexia subtypes benefit from the intervention.

  8. Age-related changes in speed and accuracy during rapid targeted center of pressure movements near the posterior limit of the base of support

    PubMed Central

    Hernandez, Manuel E.; Ashton-Miller, James A.; Alexander, Neil B.

    2012-01-01

    Background Backward falls are often associated with injury, particularly among older women. An age-related increase occurs in center of pressure variability when standing and leaning. So, we hypothesized that, in comparison to young women, older women would display a disproportionate decrease of speed and accuracy in the primary center of pressure submovements as movement amplitude increases. Methods Ground reaction forces were recorded from thirteen healthy young and twelve older women while performing rapid, targeted, center of pressure movements of small and large amplitude in upright stance. Measures included center of pressure speed, the number of center of pressure submovements, and the incidence rate of primary center of pressure submovements undershooting the target. Findings In comparison to young women, older women used slower primary submovements, particularly as movement amplitude increased (P < 0.01). Even though older women achieved similar endpoint accuracy, they demonstrated a 2 to 5-fold increase in the incidence of primary submovement undershooting for large-amplitude movements (P < 0.01). Overall, posterior center of pressure movements of older women were 41% slower and exhibited 43% more secondary submovements than in young women (P < 0.01). Interpretations We conclude that the increased primary submovement undershoots and secondary center of pressure submovements in the older women reflect the use of a conservative control strategy near the posterior limit of their base of support. PMID:22770467

  9. Sensitivity of LIDAR Canopy Height Estimate to Geolocation Error

    NASA Astrophysics Data System (ADS)

    Tang, H.; Dubayah, R.

    2010-12-01

    Many factors affect the quality of canopy height structure data derived from space-based lidar such as DESDynI. Among these is geolocation accuracy. Inadequate geolocation information hinders subsequent analyses because a different portion of the canopy is observed relative to what is assumed. This is especially true in mountainous terrain where the effects of slope magnify geolocation errors. Mission engineering design must trade the expense of providing more accurate geolocation with the potential improvement in measurement accuracy. The objective of our work is to assess the effects of small errors in geolocation on subsequent retrievals of maximum canopy height for a varying set of canopy structures and terrains. Dense discrete lidar data from different forest sites (from La Selva Biological Station, Costa Rica, Sierra National Forest, California, and Hubbard Brook and Bartlett Experimental Forests in New Hampshire) are used to simulate DESDynI height retrievals using various geolocation accuracies. Results show that canopy height measurement errors generally increase as the geolocation error increases. Interestingly, most of the height errors are caused by variation of canopy height rather than topography (slope and aspect).
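
    The sensitivity analysis described above amounts to comparing the canopy-top height retrieved at the true footprint location with the height retrieved after shifting the footprint by a small horizontal geolocation error. A toy version over a synthetic canopy height model; the raster, footprint radius and error magnitudes are all illustrative assumptions:

```python
import numpy as np

# Synthetic 1 m resolution canopy height model (metres), 200 x 200 m tile:
# a smooth stand-height trend plus small crown-scale variation.
rng = np.random.default_rng(7)
yy, xx = np.mgrid[0:200, 0:200]
chm = (20.0 + 8.0 * np.sin(xx / 17.0) * np.cos(yy / 23.0)
       + rng.normal(0, 1.0, (200, 200)))

def max_height(cx, cy, radius=12.5):
    """Maximum canopy height inside a circular lidar footprint (metres)."""
    mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2
    return chm[mask].max()

true_h = max_height(100, 100)
for geo_err in (2, 5, 10, 20):                 # horizontal geolocation error [m]
    shifted_h = max_height(100 + geo_err, 100)
    print(f"geolocation error {geo_err:3d} m -> canopy height error "
          f"{abs(shifted_h - true_h):.2f} m")
```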

  10. Measuring Diagnoses: ICD Code Accuracy

    PubMed Central

    O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M

    2005-01-01

    Objective To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. Data Sources/Study Setting The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. Study Design/Methods We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Principal Findings Main error sources along the “patient trajectory” include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the “paper trail” include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. Conclusions By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways. PMID:16178999

  11. Alcohol and error processing.

    PubMed

    Holroyd, Clay B; Yeung, Nick

    2003-08-01

    A recent study indicates that alcohol consumption reduces the amplitude of the error-related negativity (ERN), a negative deflection in the electroencephalogram associated with error commission. Here, we explore possible mechanisms underlying this result in the context of two recent theories about the neural system that produces the ERN - one based on principles of reinforcement learning and the other based on response conflict monitoring.

  12. Millisecond accuracy video display using OpenGL under Linux.

    PubMed

    Stewart, Neil

    2006-02-01

    To measure people's reaction times to the nearest millisecond, it is necessary to know exactly when a stimulus is displayed. This article describes how to display stimuli with millisecond accuracy on a normal CRT monitor, using a PC running Linux. A simple C program is presented to illustrate how this may be done within X Windows using the OpenGL rendering system. A test of this system is reported that demonstrates that stimuli may be consistently displayed with millisecond accuracy. An algorithm is presented that allows the exact time of stimulus presentation to be deduced, even if there are relatively large errors in measuring the display time.

  13. A Probabilistic Model for Students' Errors and Misconceptions on the Structure of Matter in Relation to Three Cognitive Variables

    ERIC Educational Resources Information Center

    Tsitsipis, Georgios; Stamovlasis, Dimitrios; Papageorgiou, George

    2012-01-01

    In this study, the effect of 3 cognitive variables such as logical thinking, field dependence/field independence, and convergent/divergent thinking on some specific students' answers related to the particulate nature of matter was investigated by means of probabilistic models. Besides recording and tabulating the students' responses, a combination…

  14. Thematic accuracy of the NLCD 2001 land cover for the conterminous United States

    USGS Publications Warehouse

    Wickham, J.D.; Stehman, S.V.; Fry, J.A.; Smith, J.H.; Homer, C.G.

    2010-01-01

    The land-cover thematic accuracy of NLCD 2001 was assessed from a probability-sample of 15,000 pixels. Nationwide, NLCD 2001 overall Anderson Level II and Level I accuracies were 78.7% and 85.3%, respectively. By comparison, overall accuracies at Level II and Level I for the NLCD 1992 were 58% and 80%. Forest and cropland were two classes showing substantial improvements in accuracy in NLCD 2001 relative to NLCD 1992. NLCD 2001 forest and cropland user's accuracies were 87% and 82%, respectively, compared to 80% and 43% for NLCD 1992. Accuracy results are reported for 10 geographic regions of the United States, with regional overall accuracies ranging from 68% to 86% for Level II and from 79% to 91% at Level I. Geographic variation in class-specific accuracy was strongly associated with the phenomenon that regionally more abundant land-cover classes had higher accuracy. Accuracy estimates based on several definitions of agreement are reported to provide an indication of the potential impact of reference data error on accuracy. Drawing on our experience from two NLCD national accuracy assessments, we discuss the use of designs incorporating auxiliary data to more seamlessly quantify reference data quality as a means to further advance thematic map accuracy assessment.
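
    Overall accuracy and class-specific user's accuracy are both simple functions of the error (confusion) matrix built from the reference sample: overall accuracy is the proportion of correctly labelled pixels, and user's accuracy for a class is the proportion of pixels mapped as that class that the reference data confirm. A small sketch with invented counts (not the NLCD data):

```python
import numpy as np

classes = ["forest", "cropland", "urban"]

# Rows = map label, columns = reference label (illustrative counts only).
confusion = np.array([
    [870,  60,  70],   # mapped forest
    [ 50, 820, 130],   # mapped cropland
    [ 40,  90, 870],   # mapped urban
])

overall = np.trace(confusion) / confusion.sum()
users = np.diag(confusion) / confusion.sum(axis=1)   # per mapped class

print(f"overall accuracy: {overall:.1%}")
for name, ua in zip(classes, users):
    print(f"user's accuracy ({name}): {ua:.1%}")
```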

  15. Bullet trajectory reconstruction - Methods, accuracy and precision.

    PubMed

    Mattijssen, Erwin J A T; Kerkhoff, Wim

    2016-05-01

    Based on the spatial relation between a primary and secondary bullet defect or on the shape and dimensions of the primary bullet defect, a bullet's trajectory prior to impact can be estimated for a shooting scene reconstruction. The accuracy and precision of the estimated trajectories will vary depending on variables such as, the applied method of reconstruction, the (true) angle of incidence, the properties of the target material and the properties of the bullet upon impact. This study focused on the accuracy and precision of estimated bullet trajectories when different variants of the probing method, ellipse method, and lead-in method are applied on bullet defects resulting from shots at various angles of incidence on drywall, MDF and sheet metal. The results show that in most situations the best performance (accuracy and precision) is seen when the probing method is applied. Only for the lowest angles of incidence the performance was better when either the ellipse or lead-in method was applied. The data provided in this paper can be used to select the appropriate method(s) for reconstruction and to correct for systematic errors (accuracy) and to provide a value of the precision, by means of a confidence interval of the specific measurement. PMID:27044032

  16. BFC: correcting Illumina sequencing errors

    PubMed Central

    2015-01-01

    Summary: BFC is a free, fast and easy-to-use sequencing error corrector designed for Illumina short reads. It uses a non-greedy algorithm but still maintains a speed comparable to implementations based on greedy methods. In evaluations on real data, BFC appears to correct more errors with fewer overcorrections in comparison to existing tools. It particularly does well in suppressing systematic sequencing errors, which helps to improve the base accuracy of de novo assemblies. Availability and implementation: https://github.com/lh3/bfc Contact: hengli@broadinstitute.org Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25953801

  17. Research of the use of autoreflection scheme to measure the error of the optical elements in space telescope's relative position

    NASA Astrophysics Data System (ADS)

    Ezhova, Kseniia; Molev, Fedor; Konyakhin, Igor

    2015-06-01

    The autoreflection scheme is based on the autoreflection method of angle measurement, in which the luminous mark registered in the plane of analysis is located at a finite distance from the front of the lens. This paper describes the main advantages and disadvantages of using autoreflection and autocollimation schemes for constructing the measuring channel designed to control the relative position of the elements of the optical system of a space telescope. Results of modeling in the Zemax software package are given.

  18. Flight Test Results: CTAS Cruise/Descent Trajectory Prediction Accuracy for En route ATC Advisories

    NASA Technical Reports Server (NTRS)

    Green, S.; Grace, M.; Williams, D.

    1999-01-01

    The Center/TRACON Automation System (CTAS), under development at NASA Ames Research Center, is designed to assist controllers with the management and control of air traffic transitioning to/from congested airspace. This paper focuses on the transition from the en route environment to high-density terminal airspace, under a time-based arrival-metering constraint. Two flight tests were conducted at the Denver Air Route Traffic Control Center (ARTCC) to study trajectory-prediction accuracy, the key to accurate Decision Support Tool advisories such as conflict detection/resolution and fuel-efficient metering conformance. In collaboration with NASA Langley Research Center, these tests were part of an overall effort to research systems and procedures for the integration of CTAS and flight management systems (FMS). The Langley Transport Systems Research Vehicle Boeing 737 airplane flew a combined total of 58 cruise-arrival trajectory runs while following CTAS clearance advisories. Actual trajectories of the airplane were compared to CTAS and FMS predictions to measure trajectory-prediction accuracy and identify the primary sources of error for both. The research airplane was used to evaluate several levels of cockpit automation ranging from conventional avionics to a performance-based vertical navigation (VNAV) FMS. Trajectory prediction accuracy was analyzed with respect to both ARTCC radar tracking and GPS-based aircraft measurements. This paper presents detailed results describing the trajectory accuracy and error sources. Although differences were found in both accuracy and error sources, CTAS accuracy was comparable to the FMS in terms of both meter-fix arrival-time performance (in support of metering) and 4D-trajectory prediction (key to conflict prediction). Overall arrival time errors (mean plus standard deviation) were measured to be approximately 24 seconds during the first flight test (23 runs) and 15 seconds during the second flight test (25 runs). The major

  19. Onorbit IMU alignment error budget

    NASA Technical Reports Server (NTRS)

    Corson, R. W.

    1980-01-01

    The Star Tracker, Crew Optical Alignment Sight (COAS), and Inertial Measurement Unit (IMU) from a complex navigation system with a multitude of error sources were combined. A complete list of the system errors is presented. The errors were combined in a rational way to yield an estimate of the IMU alignment accuracy for STS-1. The expected standard deviation in the IMU alignment error for STS-1 type alignments was determined to be 72 arc seconds per axis for star tracker alignments and 188 arc seconds per axis for COAS alignments. These estimates are based on current knowledge of the star tracker, COAS, IMU, and navigation base error specifications, and were partially verified by preliminary Monte Carlo analysis.

  20. Equivalence and Accuracy of MOSFET Channel Length Measurement Techniques

    NASA Astrophysics Data System (ADS)

    Jain, Sanjay

    1989-02-01

    It is shown that the MOSFET channel length measurement techniques of Terada and Muta, Peng et al., Whitfield, Suciu and Johnston, and De La Moneda et al. are actually equivalent, i.e. merely different expressions of the same formula for channel length in terms of measured resistance, and that some of the transresistance methods of Jain, although not equivalent, are also related to the same formula. The accuracy of this formula is evaluated for the general case and related to the error components due to source and drain resistance asymmetry, short channel geometry effect, and variation of series resistance with bias. No independent error component due to field-induced mobility degradation is found. Finally the errors in the methods of Terada and Muta, Chen et al., Sheu et al., Wordeman et al. and Jain, are determined and compared. The gate transresistance technique is found to be the most accurate method.
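
    The shared formula referred to above is, in essence, the resistance model R_m = R_ext + r_ch(V_g)(L_mask - dL): measured resistance is linear in mask length, and lines taken at different gate voltages intersect at (dL, R_ext). A hedged sketch of that style of extraction on synthetic data; it illustrates the idea, not any one of the cited methods verbatim:

```python
import numpy as np

# Synthetic "measurements": R = R_ext + r_ch(Vg) * (L_mask - dL), plus noise.
L_mask = np.array([1.0, 2.0, 5.0, 10.0, 20.0])      # drawn lengths [um]
true_dL, true_Rext = 0.35, 120.0                    # [um], [ohm]
r_ch = {1.5: 450.0, 3.0: 180.0}                     # ohm per um at two gate voltages

rng = np.random.default_rng(3)
fits = {}
for vg, r in r_ch.items():
    R = true_Rext + r * (L_mask - true_dL) + rng.normal(0, 2.0, L_mask.size)
    slope, intercept = np.polyfit(L_mask, R, 1)     # R = slope*L + intercept
    fits[vg] = (slope, intercept)

# Two lines R = s*L + b intersect at L = (b2 - b1) / (s1 - s2).
(s1, b1), (s2, b2) = fits.values()
dL = (b2 - b1) / (s1 - s2)
Rext = s1 * dL + b1
print(f"extracted dL = {dL:.3f} um (true {true_dL}), "
      f"R_ext = {Rext:.1f} ohm (true {true_Rext})")
```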

  1. Error prediction for probes guided by means of fixtures

    NASA Astrophysics Data System (ADS)

    Fitzpatrick, J. Michael

    2012-02-01

    Probe guides are surgical fixtures that are rigidly attached to bone anchors in order to place a probe at a target with high accuracy (RMS error < 1 mm). Applications include needle biopsy, the placement of electrodes for deep-brain stimulation (DBS), spine surgery, and cochlear implant surgery. Targeting is based on pre-operative images, but targeting errors can arise from three sources: (1) anchor localization error, (2) guide fabrication error, and (3) external forces and torques. A well-established theory exists for the statistical prediction of target registration error (TRE) when targeting is accomplished by means of tracked probes, but no such TRE theory is available for fixtured probe guides. This paper provides that theory and shows that all three error sources can be accommodated in a remarkably simple extension of existing theory. Both the guide and the bone with attached anchors are modeled as objects with rigid sections and elastic sections, the latter of which are described by stiffness matrices. By relating minimization of elastic energy for guide attachment to minimization of fiducial registration error for point registration, it is shown that the expression for targeting error for the guide is identical to that for weighted rigid point registration if the weighting matrices are properly derived from stiffness matrices and the covariance matrices for fiducial localization are augmented with offsets in the anchor positions. An example of the application of the theory is provided for ear surgery.

  2. Analysis of deformable image registration accuracy using computational modeling

    SciTech Connect

    Zhong Hualiang; Kim, Jinkoo; Chetty, Indrin J.

    2010-03-15

    selection for optimal accuracy is closely related to the intensity gradients of the underlying images. Also, the result that the DIR algorithms produce much lower errors in heterogeneous lung regions relative to homogeneous (low intensity gradient) regions, suggests that feature-based evaluation of deformable image registration accuracy must be viewed cautiously.

  3. On the relation between orbital-localization and self-interaction errors in the density functional theory treatment of organic semiconductors.

    PubMed

    Körzdörfer, T

    2011-03-01

    It is commonly argued that the self-interaction error (SIE) inherent in semilocal density functionals is related to the degree of the electronic localization. Yet at the same time there exists a latent ambiguity in the definitions of the terms "localization" and "self-interaction," which ultimately prevents a clear and readily accessible quantification of this relationship. This problem is particularly pressing for organic semiconductor molecules, in which delocalized molecular orbitals typically alternate with localized ones, thus leading to major distortions in the eigenvalue spectra. This paper discusses the relation between localization and SIEs in organic semiconductors in detail. Its findings provide further insights into the SIE in the orbital energies and yield a new perspective on the failure of self-interaction corrections that identify delocalized orbital densities with electrons.

  4. Algorithmic Error Correction of Impedance Measuring Sensors

    PubMed Central

    Starostenko, Oleg; Alarcon-Aquino, Vicente; Hernandez, Wilmar; Sergiyenko, Oleg; Tyrsa, Vira

    2009-01-01

    This paper describes novel design concepts and some advanced techniques proposed for increasing the accuracy of low cost impedance measuring devices without reduction of operational speed. The proposed structural method of algorithmic error correction and the iterating correction method provide linearization of the transfer functions of the measuring sensor and the signal conditioning converter, which are the principal sources of additive and relative measurement errors. Some measuring systems have been implemented in order to estimate in practice the performance of the proposed methods. In particular, a measuring system for analysis of C-V, G-V characteristics has been designed and constructed. It has been tested during technological process control of charge-coupled device (CCD) manufacturing. The obtained results are discussed in order to define a reasonable range of the applied methods, their utility, and performance. PMID:22303177
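
    The iterating correction idea (repeatedly feeding the residual between the measured output and the output predicted from the current estimate back through an approximate inverse of the transfer function) can be sketched generically. This is not the authors' specific converter model; the quadratic distortion term and the fixed-point scheme are illustrative assumptions only.

```python
import numpy as np

def transfer(x):
    """Nonlinear sensor transfer function (illustrative): unity gain
    plus a small quadratic distortion term."""
    return x + 0.05 * x**2

def iterative_correction(y_measured, n_iter=6):
    """Estimate the true input x from measured y by fixed-point iteration,
    using only the nominal (linear) gain as the approximate inverse."""
    x = y_measured                           # first guess: ignore the distortion
    for _ in range(n_iter):
        x = x + (y_measured - transfer(x))   # correct by the residual
    return x

x_true = 2.0
y = transfer(x_true)
for k in (1, 2, 4, 6):
    x_hat = iterative_correction(y, n_iter=k)
    print(f"{k} iterations: relative error {abs(x_hat - x_true) / x_true:.2e}")
```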

  5. Simple technique for the fabrication of a penta prism with high accuracy right angle deviation.

    PubMed

    Chatterjee, Sanjib; Pavan Kumar, Y

    2007-09-10

    What we believe to be a new technique for the fabrication of a penta prism (PP) with high accuracy right angle deviation of the incident beam is presented. We derive simple formulas relating the error in right angle deviation (here denoted ε) to the errors in the 45° (β) and 90° (δ) angles of a PP, and we determine ε from the angle (εr) between the plane wavefronts reflected from the right-angled surfaces (external Fresnel reflection on the entrance surface and internal Fresnel reflection on the exit surface) of a PP and the angular error (δ) between the same surfaces. The error ε is determined from the measurement of εr using an autocollimator and a Fizeau interferometer, and ε is corrected to a high order of accuracy during the final stage of polishing one of the slanted surfaces of the PP. A new technique to determine the magnitude and direction of the small values of εr is proposed and verified. The result for a PP is presented.

  6. Elimination of 'ghost'-effect-related systematic error in metrology of X-ray optics with a long trace profiler

    SciTech Connect

    Yashchuk, Valeriy V.; Irick, Steve C.; MacDowell, Alastair A.

    2005-04-28

    A data acquisition technique and relevant program for suppression of one of the systematic effects, namely the "ghost" effect, of a second generation long trace profiler (LTP) is described. The "ghost" effect arises when there is an unavoidable cross-contamination of the LTP sample and reference signals into one another, leading to a systematic perturbation in the recorded interference patterns and, therefore, a systematic variation of the measured slope trace. Perturbations of about 1-2 µrad have been observed with a cylindrically shaped X-ray mirror. Even stronger "ghost" effects show up in an LTP measurement with a mirror having a toroidal surface figure. The developed technique employs separate measurement of the "ghost"-effect-related interference patterns in the sample and the reference arms and then subtraction of the "ghost" patterns from the sample and the reference interference patterns. The procedure preserves the advantage of simultaneously measuring the sample and reference signals. The effectiveness of the technique is illustrated with LTP metrology of a variety of X-ray mirrors.

  7. Improved Accuracy and Precision in LA-ICP-MS U-Th/Pb Dating of Zircon through the Reduction of Crystallinity Related Bias

    NASA Astrophysics Data System (ADS)

    Matthews, W.; McDonald, A.; Hamilton, B.; Guest, B.

    2015-12-01

    The accuracy of zircon U-Th/Pb ages generated by LA-ICP-MS is limited by systematic bias resulting from differences in crystallinity of the primary reference and that of the unknowns being analyzed. In general, the use of a highly crystalline primary reference will tend to bias analyses of materials of lesser crystallinity toward older ages. When dating igneous rocks, bias can be minimized by matching the crystallinity of the primary reference to that of the unknowns. However, the crystallinity of the unknowns is often not well constrained prior to ablation, as it is a function of U and Th concentration, crystallization age, and thermal history. Likewise, selecting an appropriate primary reference is impossible when dating detrital rocks where zircons with differing ages, protoliths, and thermal histories are analyzed in the same session. We investigate the causes of systematic bias using Raman spectroscopy and measurements of the ablated pit geometry. The crystallinity of five zircon reference materials with ages between 28.2 Ma and 2674 Ma was estimated using Raman spectroscopy. Zircon references varied from being highly crystalline to highly metamict, with individual reference materials plotting as distinct clusters in peak wavelength versus Full-Width Half-Maximum (FWHM) space. A strong positive correlation (R2=0.69) was found between the FWHM for the band at ~1000 cm-1 in the Raman spectrum of the zircon and its ablation rate, suggesting the degree of crystallinity is a primary control on ablation rate in zircons. A moderate positive correlation (R2=0.37) was found between ablation rate and the difference between the age determined by LA-ICP-MS and the accepted ID-TIMS age (ΔAge). We use the measured, intra-sessional relationship between ablation rate and ΔAge of secondary references to reduce systematic bias. Rapid, high-precision measurement of ablated pit geometries using an optical profilometer and custom MatLab algorithm facilitates the implementation

  8. Measurement Accuracy Limitation Analysis on Synchrophasors

    SciTech Connect

    Zhao, Jiecheng; Zhan, Lingwei; Liu, Yilu; Qi, Hairong; Gracia, Jose R; Ewing, Paul D

    2015-01-01

    This paper analyzes the theoretical accuracy limitations of synchrophasor measurements of the phase angle and frequency of the power grid. Factors that cause measurement error are analyzed, including error sources in the instruments and in the power grid signal. Different scenarios for these factors are evaluated according to the normal operating conditions of power grid measurement. Based on the evaluation and simulation, the errors in phase angle and frequency caused by each factor are calculated and discussed.

  9. Variations on a theme: songbirds, variability, and sensorimotor error correction

    PubMed Central

    Kuebrich, Benjamin; Sober, Samuel

    2014-01-01

    Songbirds provide a powerful animal model for investigating how the brain uses sensory feedback to correct behavioral errors. Here, we review a recent study in which we used online manipulations of auditory feedback to quantify the relationship between sensory error size, motor variability, and vocal plasticity. We found that although inducing small auditory errors evoked relatively large compensatory changes in behavior, as error size increased the magnitude of error correction declined. Furthermore, when we induced large errors such that auditory signals no longer overlapped with the baseline distribution of feedback, the magnitude of error correction approached zero. This pattern suggests a simple and robust strategy for the brain to maintain the accuracy of learned behaviors by evaluating sensory signals relative to the previously experienced distribution of feedback. Drawing from recent studies of auditory neurophysiology and song discrimination, we then speculate as to the mechanistic underpinnings of the results obtained in our behavioral experiments. Finally, we review how our own and other studies exploit the strengths of the songbird system, both in the specific context of vocal systems and more generally as a model of the neural control of complex behavior. PMID:25305664

  10. Adjoint Error Estimation for Linear Advection

    SciTech Connect

    Connors, J M; Banks, J W; Hittinger, J A; Woodward, C S

    2011-03-30

    An a posteriori error formula is described when a statistical measurement of the solution to a hyperbolic conservation law in 1D is estimated by finite volume approximations. This is accomplished using adjoint error estimation. In contrast to previously studied methods, the adjoint problem is divorced from the finite volume method used to approximate the forward solution variables. An exact error formula and computable error estimate are derived based on an abstractly defined approximation of the adjoint solution. This framework allows the error to be computed to an arbitrary accuracy given a sufficiently well resolved approximation of the adjoint solution. The accuracy of the computable error estimate provably satisfies an a priori error bound for sufficiently smooth solutions of the forward and adjoint problems. The theory does not currently account for discontinuities. Computational examples are provided that show support of the theory for smooth solutions. The application to problems with discontinuities is also investigated computationally.
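
    In its simplest form, the dual-weighted-residual (adjoint) error representation used in this kind of analysis relates the error in a functional of interest to the residual of the computed solution weighted by the adjoint solution. Stated loosely below as a generic reminder only; the exact form, sign convention, and discrete treatment in the report differ in detail.

```latex
% Generic dual-weighted-residual error representation (schematic form).
% J(.)   : the functional (statistical measurement) of interest
% u      : exact solution of the forward (advection) problem
% u_h    : finite volume approximation
% R(u_h) : residual of u_h in the forward equation
% \phi   : solution of the adjoint problem associated with J
% (the sign depends on the residual convention used)
J(u) - J(u_h) \;\approx\; \big( R(u_h),\, \phi \big)
```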

  11. Accuracy evaluation of 3D lidar data from small UAV

    NASA Astrophysics Data System (ADS)

    Tulldahl, H. M.; Bissmarck, Fredrik; Larsson, Håkan; Grönwall, Christina; Tolt, Gustav

    2015-10-01

    A UAV (Unmanned Aerial Vehicle) with an integrated lidar can be an efficient system for collection of high-resolution and accurate three-dimensional (3D) data. In this paper we evaluate the accuracy of a system consisting of a lidar sensor on a small UAV. High geometric accuracy in the produced point cloud is a fundamental qualification for detection and recognition of objects in a single-flight dataset as well as for change detection using two or several data collections over the same scene. Our work presented here has two purposes: first to relate the point cloud accuracy to data processing parameters and second, to examine the influence on accuracy from the UAV platform parameters. In our work, the accuracy is numerically quantified as local surface smoothness on planar surfaces, and as distance and relative height accuracy using data from a terrestrial laser scanner as reference. The UAV lidar system used is the Velodyne HDL-32E lidar on a multirotor UAV with a total weight of 7 kg. For processing of data into a geographically referenced point cloud, positioning and orientation of the lidar sensor is based on inertial navigation system (INS) data combined with lidar data. The combination of INS and lidar data is achieved in a dynamic calibration process that minimizes the navigation errors in six degrees of freedom, namely the errors of the absolute position (x, y, z) and the orientation (pitch, roll, yaw) measured by GPS/INS. Our results show that low-cost and light-weight MEMS based (microelectromechanical systems) INS equipment with a dynamic calibration process can obtain significantly improved accuracy compared to processing based solely on INS data.
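
    The "local surface smoothness on planar surfaces" metric used above boils down to fitting a plane to the points falling on a nominally flat surface and reporting the RMS of the out-of-plane residuals. A minimal least-squares version on synthetic points; the z = f(x, y) parameterization assumes a near-horizontal surface and is an illustrative choice, not the authors' exact procedure.

```python
import numpy as np

def plane_fit_rms(points):
    """Fit z = a*x + b*y + c by least squares and return the RMS of the
    out-of-plane residuals (a simple surface-smoothness measure)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    residuals = z - A @ coeffs
    return np.sqrt(np.mean(residuals**2))

# Synthetic lidar returns on a gently tilted plane with 3 cm vertical noise.
rng = np.random.default_rng(11)
xy = rng.uniform(0, 5, size=(2000, 2))
z = 0.02 * xy[:, 0] - 0.01 * xy[:, 1] + 10.0 + rng.normal(0, 0.03, 2000)
cloud = np.column_stack([xy, z])

print(f"plane-fit RMS roughness: {plane_fit_rms(cloud) * 100:.1f} cm")
```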

  12. Survey methods for assessing land cover map accuracy

    USGS Publications Warehouse

    Nusser, S.M.; Klaas, E.E.

    2003-01-01

    The increasing availability of digital photographic materials has fueled efforts by agencies and organizations to generate land cover maps for states, regions, and the United States as a whole. Regardless of the information sources and classification methods used, land cover maps are subject to numerous sources of error. In order to understand the quality of the information contained in these maps, it is desirable to generate statistically valid estimates of accuracy rates describing misclassification errors. We explored a full sample survey framework for creating accuracy assessment study designs that balance statistical and operational considerations in relation to study objectives for a regional assessment of GAP land cover maps. We focused not only on appropriate sample designs and estimation approaches, but on aspects of the data collection process, such as gaining cooperation of land owners and using pixel clusters as an observation unit. The approach was tested in a pilot study to assess the accuracy of Iowa GAP land cover maps. A stratified two-stage cluster sampling design addressed sample size requirements for land covers and the need for geographic spread while minimizing operational effort. Recruitment methods used for private land owners yielded high response rates, minimizing a source of nonresponse error. Collecting data for a 9-pixel cluster centered on the sampled pixel was simple to implement, and provided better information on rarer vegetation classes as well as substantial gains in precision relative to observing data at a single pixel.

  13. High Accuracy of Common HIV-Related Oral Disease Diagnoses by Non-Oral Health Specialists in the AIDS Clinical Trial Group

    PubMed Central

    Shiboski, Caroline H.; Chen, Huichao; Secours, Rode; Lee, Anthony; Webster-Cyriaque, Jennifer; Ghannoum, Mahmoud; Evans, Scott; Bernard, Daphné; Reznik, David; Dittmer, Dirk P.; Hosey, Lara; Sévère, Patrice; Aberg, Judith A.

    2015-01-01

    Objective Many studies include oral HIV-related endpoints that may be diagnosed by non-oral-health specialists (non-OHS) like nurses or physicians. Our objective was to assess the accuracy of clinical diagnoses of HIV-related oral lesions made by non-OHS compared to diagnoses made by OHS. Methods A5254, a cross-sectional study conducted by the Oral HIV/AIDS Research Alliance within the AIDS Clinical Trial Group, enrolled HIV-1-infected adult participants from six clinical trial units (CTUs) in the US (San Francisco, New York, Chapel Hill, Cleveland, Atlanta) and Haiti. CTU examiners (non-OHS) received standardized training on how to perform an oral examination and make clinical diagnoses of specific oral disease endpoints. Diagnoses by calibrated non-OHS were compared to those made by calibrated OHS, and sensitivity and specificity computed. Results Among 324 participants, the majority were black (73%) and men (66%), and the median CD4+ cell count was 138 cells/mm3. The overall frequency of oral mucosal disease diagnosed by OHS was 43% in US sites, and 90% in Haiti. Oral candidiasis (OC) was detected in 153 (47%) by OHS, with erythematous candidiasis (EC) the most common type (39%) followed by pseudomembranous candidiasis (PC; 26%). The highest prevalence of OC (79%) was among participants in Haiti, and among those with CD4+ cell count ≤ 200 cells/mm3 and HIV-1 RNA > 1000 copies/mL (71%). The sensitivity and specificity of OC diagnoses by non-OHS were 90% and 92% (for EC: 81% and 94%; PC: 82% and 95%). Sensitivity and specificity were also high for KS (87% and 94%, respectively), but sensitivity was < 60% for HL and oral warts in all sites combined. The Candida culture confirmation of OC clinical diagnoses (as defined by ≥ 1 colony forming unit per mL of oral/throat rinse) was ≥ 93% for both PC and EC. Conclusion Trained non-OHS showed high accuracy of clinical diagnoses of OC in comparison with OHS, suggesting their usefulness in studies in resource-poor settings.

  14. SU-E-T-599: The Variation of Hounsfield Unit and Relative Electron Density Determination as a Function of KVp and Its Effect On Dose Calculation Accuracy

    SciTech Connect

    Ohl, A; Boer, S De

    2014-06-01

    Purpose: To investigate the differences in relative electron density for different energy (kVp) settings and the effect that these differences have on dose calculations. Methods: A Nuclear Associates 76-430 Mini CT QC Phantom with materials of known relative electron densities was imaged by one multi-slice (16) and one single-slice computed tomography (CT) scanner. The Hounsfield unit (HU) was recorded for each material with energies ranging from 80 to 140 kVp and a representative relative electron density (RED) curve was created. A 5 cm thick inhomogeneity was created in the treatment planning system (TPS) image at a depth of 5 cm. The inhomogeneity was assigned HU for various materials for each kVp calibration curve. The dose was then calculated with the analytical anisotropic algorithm (AAA) at points within and below the inhomogeneity and compared using the 80 kVp beam as a baseline. Results: The differences in RED values as a function of kVp showed the largest variations of 580 and 547 HU for the Aluminum and Bone materials; the smallest differences of 0.6 and 3.0 HU were observed for the air and lung inhomogeneities. The corresponding dose calculations for the different RED values assigned to the 5 cm thick slab revealed the largest differences inside the aluminum and bone inhomogeneities of 2.2 to 6.4% and 4.3 to 7.0% respectively. The dose differences beyond these two inhomogeneities were between 0.4 to 1.6% for aluminum and 1.9 to 2.2 % for bone. For materials with lower HU the calculated dose differences were less than 1.0%. Conclusion: For high CT number materials the dose differences in the phantom calculation as high as 7.0% are significant. This result may indicate that implementing energy specific RED curves can increase dose calculation accuracy.
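
    The kVp dependence described above enters the dose calculation through the HU-to-relative-electron-density (RED) lookup used by the TPS: a different calibration curve maps the same HU to a different RED. The sketch below interpolates two invented calibration tables to show the mechanism only; the numbers are illustrative assumptions, not the paper's measurements.

```python
import numpy as np

# Invented HU-to-RED calibration points for two tube voltages
# (illustrative only; real curves come from a phantom scan at each kVp).
hu_80   = np.array([-1000, -700,   0,  300, 1200, 2200])
hu_140  = np.array([-1000, -710,   0,  250,  900, 1650])
red_pts = np.array([ 0.00, 0.29, 1.00, 1.10, 1.70, 2.34])

def red_from_hu(hu, hu_table):
    """Piecewise-linear HU -> relative electron density lookup."""
    return np.interp(hu, hu_table, red_pts)

for hu in (-700, 300, 900, 1500):
    r80, r140 = red_from_hu(hu, hu_80), red_from_hu(hu, hu_140)
    print(f"HU {hu:5d}: RED {r80:.3f} (80 kVp curve) vs {r140:.3f} "
          f"(140 kVp curve), difference {abs(r80 - r140) / r80:.1%}")
```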

  15. Distinguishing Fast and Slow Processes in Accuracy - Response Time Data.

    PubMed

    Coomans, Frederik; Hofman, Abe; Brinkhuis, Matthieu; van der Maas, Han L J; Maris, Gunter

    2016-01-01

    We investigate the relation between speed and accuracy within problem solving in its simplest non-trivial form. We consider tests with only two items and code the item responses in two binary variables: one indicating the response accuracy, and one indicating the response speed. Despite being a very basic setup, it enables us to study item pairs stemming from a broad range of domains such as basic arithmetic, first language learning, intelligence-related problems, and chess, with large numbers of observations for every pair of problems under consideration. We carry out a survey over a large number of such item pairs and compare three types of psychometric accuracy-response time models present in the literature: two 'one-process' models, the first of which models accuracy and response time as conditionally independent and the second of which models accuracy and response time as conditionally dependent, and a 'two-process' model which models accuracy contingent on response time. We find that the data clearly violates the restrictions imposed by both one-process models and requires additional complexity which is parsimoniously provided by the two-process model. We supplement our survey with an analysis of the erroneous responses for an example item pair and demonstrate that there are very significant differences between the types of errors in fast and slow responses. PMID:27167518

  16. Drawing accuracy measured using polygons

    NASA Astrophysics Data System (ADS)

    Carson, Linda; Millard, Matthew; Quehl, Nadine; Danckert, James

    2013-03-01

    The study of drawing, for its own sake and as a probe into human visual perception, generally depends on ratings by human critics and self-reported expertise of the drawers. To complement those approaches, we have developed a geometric approach to analyzing drawing accuracy, one whose measures are objective, continuous and performance-based. Drawing geometry is represented by polygons formed by landmark points found in the drawing. Drawing accuracy is assessed by comparing the geometric properties of polygons in the drawn image to the equivalent polygon in a ground truth photo. There are four distinct properties of a polygon: its size, its position, its orientation and the proportionality of its shape. We can decompose error into four components and investigate how each contributes to drawing performance. We applied a polygon-based accuracy analysis to a pilot data set of representational drawings and found that an expert drawer outperformed a novice on every dimension of polygon error. The results of the pilot data analysis correspond well with the apparent quality of the drawings, suggesting that the landmark and polygon analysis is a method worthy of further study. Applying this geometric analysis to a within-subjects comparison of accuracy in the positive and negative space suggests there is a trade-off on dimensions of error. The performance-based analysis of geometric deformations will allow the study of drawing accuracy at different levels of organization, in a systematic and quantitative manner. We briefly describe the method and its potential applications to research in drawing education and visual perception.
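
    As a concrete illustration of the four-component decomposition described above, the sketch below compares matched landmark polygons and separates position (centroid offset), size (centroid-size ratio), orientation (2-D Procrustes angle), and residual shape misfit. The specific measures are plausible choices for such a decomposition, not necessarily the ones the authors used.

```python
import numpy as np

def polygon_error_components(drawn, truth):
    """Decompose the discrepancy between matched landmark polygons (N x 2 arrays)
    into position, size, orientation and residual shape components. These
    particular measures are illustrative choices, not necessarily the published ones."""
    drawn = np.asarray(drawn, dtype=float)
    truth = np.asarray(truth, dtype=float)

    # Position: offset between centroids.
    c_d, c_t = drawn.mean(axis=0), truth.mean(axis=0)
    position_error = np.linalg.norm(c_d - c_t)

    # Size: ratio of centroid sizes (root summed squared distances from the centroid).
    d0, t0 = drawn - c_d, truth - c_t
    s_d, s_t = np.sqrt((d0 ** 2).sum()), np.sqrt((t0 ** 2).sum())
    size_ratio = s_d / s_t

    # Orientation: closed-form 2-D Procrustes rotation aligning the drawn polygon to the truth.
    d1, t1 = d0 / s_d, t0 / s_t
    A = np.sum(d1 * t1)
    B = np.sum(d1[:, 0] * t1[:, 1] - d1[:, 1] * t1[:, 0])
    angle = np.arctan2(B, A)                 # radians; |angle| is the orientation error

    # Shape: residual misfit after removing translation, scale and rotation.
    rot = np.array([[np.cos(angle), -np.sin(angle)],
                    [np.sin(angle),  np.cos(angle)]])
    shape_error = np.linalg.norm(d1 @ rot.T - t1)

    return position_error, size_ratio, angle, shape_error

# Example: a drawn triangle that is shifted, enlarged and slightly rotated.
truth = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
c, s = np.cos(0.1), np.sin(0.1)
drawn = 1.2 * truth @ np.array([[c, s], [-s, c]]) + np.array([0.3, -0.2])
print(polygon_error_components(drawn, truth))
```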

  17. Help prevent hospital errors

    MedlinePlus


  18. Analysis of Solar Two Heliostat Tracking Error Sources

    SciTech Connect

    Jones, S.A.; Stone, K.W.

    1999-01-28

    This paper explores the geometrical errors that reduce heliostat tracking accuracy at Solar Two. The basic heliostat control architecture is described. Then, the three dominant error sources are described and their effect on heliostat tracking is visually illustrated. The strategy currently used to minimize, but not truly correct, these error sources is also shown. Finally, a novel approach to minimizing error is presented.

  19. The relative accuracy of standard estimators for macrofaunal abundance and species richness derived from selected intertidal transect designs used to sample exposed sandy beaches

    NASA Astrophysics Data System (ADS)

    Schoeman, D. S.; Wheeler, M.; Wait, M.

    2003-10-01

    In order to ensure that patterns detected in field samples reflect real ecological processes rather than methodological idiosyncrasies, it is important that researchers attempt to understand the consequences of the sampling and analytical designs that they select. This is especially true for sandy beach ecology, which has lagged somewhat behind ecological studies of other intertidal habitats. This paper investigates the performance of routine estimators of macrofaunal abundance and species richness, which are variables that have been widely used to infer predictable patterns of biodiversity across a gradient of beach types. To do this, a total of six shore-normal strip transects were sampled on three exposed, oceanic sandy beaches in the Eastern Cape, South Africa. These transects comprised contiguous quadrats arranged linearly between the spring high and low water marks. Using simple Monte Carlo simulation techniques, data collected from the strip transects were used to assess the accuracy of parameter estimates from different sampling strategies relative to their true values (macrofaunal abundance ranged 595-1369 individuals transect⁻¹; species richness ranged 12-21 species transect⁻¹). Results indicated that estimates from the various transect methods performed in a similar manner both within beaches and among beaches. Estimates for macrofaunal abundance tended to be negatively biased, especially at levels of sampling effort most commonly reported in the literature, and accuracy decreased with decreasing sampling effort. By the same token, estimates for species richness were always negatively biased and were also characterised by low precision. Furthermore, triplicate transects comprising a sampled area in the region of 4 m² (as has been previously recommended) are expected to miss more than 30% of the species that occur on the transect. Surprisingly, for both macrofaunal abundance and species richness, estimates based on data from transects sampling quadrats
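
    The Monte Carlo procedure described above amounts to repeatedly subsampling quadrats from a fully censused strip transect and comparing the resulting estimates with the known transect totals. The sketch below illustrates that idea on synthetic counts (not the Eastern Cape data); the estimators and effort levels are simple stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated strip transect: 100 contiguous quadrats x 25 species, with patchy,
# low-abundance counts. These data are synthetic placeholders, not the beach samples.
n_quadrats, n_species = 100, 25
counts = rng.poisson(lam=rng.gamma(0.3, 2.0, size=(1, n_species)),
                     size=(n_quadrats, n_species))

true_abundance = counts.sum()
true_richness = (counts.sum(axis=0) > 0).sum()

def estimate(sampled_rows):
    """Scale sampled abundance up to the whole transect; richness is the raw species count."""
    sub = counts[sampled_rows]
    abundance_hat = sub.sum() * n_quadrats / len(sampled_rows)
    richness_hat = (sub.sum(axis=0) > 0).sum()
    return abundance_hat, richness_hat

def monte_carlo(effort, n_sim=2000):
    """Relative bias of the abundance and richness estimators at a given sampling effort."""
    ab, ri = [], []
    for _ in range(n_sim):
        rows = rng.choice(n_quadrats, size=effort, replace=False)
        a, r = estimate(rows)
        ab.append(a)
        ri.append(r)
    return ((np.mean(ab) - true_abundance) / true_abundance,
            (np.mean(ri) - true_richness) / true_richness)

# Richness is systematically underestimated at low effort; abundance bias depends
# on the spatial structure of the counts.
for effort in (5, 10, 20, 40):
    print(effort, monte_carlo(effort))
```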

  20. Age-related changes in motor imagery from early childhood to adulthood: probing the internal representation of speed-accuracy trade-offs.

    PubMed

    Smits-Engelsman, Bouwien C M; Wilson, Peter H

    2013-10-01

    The purpose of this study was to chart the development of motor imagery ability between 5 and 29 years of age and its relationship to fine-motor skill. 237 participants performed a computerized Virtual Radial Fitts Task (VRFT) as a measure of Motor Imagery (MI) ability. Participants aimed at five targets, positioned along radial axes from a central target circle. The targets differed in width over trials (2.5, 5, 10, 20 or 40 mm). Performance was indexed by the relationship between the movement time (MT) in executed and imagined movements. A subset of participants (11-19 years old, n=22) also performed the task with their non-preferred hand. We also examined if manual skill (measured by peg board task and posting coins) was related to the executed and imagined MT on the VRFT. Our results showed that the accuracy of the imagined movement improved steadily over childhood, reaching an asymptote during adolescence and into early adulthood. The correlation between the real and virtual MT using the preferred hand did not differ appreciably from that using the non-preferred hand. If the children could perform the tasks with their non-preferred hand (11 years and older), they also scaled performance in relatively precise terms using the less dextrous non-preferred hand. The correlation between real MT on the VRFT and fine-motor performance ranged between .53 and .42, while that for virtual movement was between .37 and .34. MI ability predicts manual skill to a moderate degree. PMID:23164627

  1. Racial/Ethnic Difference in HIV-related Knowledge among Young Men who have Sex with Men and their Association with Condom Errors

    PubMed Central

    Garofalo, Robert; Gayles, Travis; Bottone, Paul Devine; Ryan, Dan; Kuhns, Lisa M; Mustanski, Brian

    2014-01-01

    Objective HIV disproportionately affects young men who have sex with men, and knowledge about HIV transmission is one factor that may play a role in the high rate of infections for this population. This study examined racial/ethnic differences in HIV knowledge among young men who have sex with men in the USA and its correlation to condom usage errors. Design Participants included an ethnically diverse sample of 344 young men who have sex with men screened from an ongoing longitudinal cohort study. Eligible participants were between the ages of 16 and 20 years, born male, and had previously had at least one sexual encounter with a man and/or identify as gay or bisexual. This analysis is based on cross-sectional data collected at the baseline interview using computer assisted self-interviewing (CASI) software. Setting Chicago, IL, USA Method We utilised descriptive and inferential statistics, including ANOVA and Tukey's post hoc analysis to assess differences in HIV knowledge by level of education and race/ethnicity, and negative binomial regression to determine if HIV knowledge was associated with condom errors while controlling for age, education and race/ethnicity. Results The study found that Black men who have sex with men scored significantly lower (average score=67%; p<.05) than their White counterparts (average score=83%) on a measure of HIV knowledge (mean difference=16.1%, p<.001). Participants with less than a high school diploma and those with a high school diploma/GED only had lower knowledge scores, on average (66.4%, 69.9%, respectively) than participants who had obtained post-high school education (78.1%; mean difference=11.7%, 8.2% respectively, ps<.05). In addition, controlling for age, race and level of education, higher HIV knowledge scores were associated with fewer condom errors (Exp B =.995, CI 0.992-0.999, p<0.05). Conclusion These findings stress the need for increased attention to HIV transmission-related educational activities targeting
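
    The key model in the abstract is a negative binomial regression of condom-error counts on HIV knowledge with demographic covariates. A minimal sketch of that kind of model using statsmodels is shown below; the data frame, variable names, and the fixed dispersion parameter are all hypothetical, not the study's data or specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical data; column names are illustrative, not the study's variables.
rng = np.random.default_rng(0)
n = 344
df = pd.DataFrame({
    "condom_errors": rng.poisson(2, n),
    "hiv_knowledge": rng.uniform(0.4, 1.0, n),   # proportion of items correct
    "age": rng.integers(16, 21, n),
    "educ_hs_or_less": rng.integers(0, 2, n),
    "race_black": rng.integers(0, 2, n),
    "race_latino": rng.integers(0, 2, n),
})

X = sm.add_constant(df[["hiv_knowledge", "age", "educ_hs_or_less",
                        "race_black", "race_latino"]])
y = df["condom_errors"]

# GLM with a negative binomial family (dispersion fixed here for simplicity);
# exponentiated coefficients are incidence rate ratios, i.e. the Exp(B) values
# reported in the abstract.
model = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=1.0)).fit()
print(np.exp(model.params))
```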

  2. Error monitoring in musicians.

    PubMed

    Maidhof, Clemens

    2013-01-01

    To err is human, and hence even professional musicians make errors occasionally during their performances. This paper summarizes recent work investigating error monitoring in musicians, i.e., the processes and their neural correlates associated with the monitoring of ongoing actions and the detection of deviations from intended sounds. Electroencephalography (EEG) studies reported an early component of the event-related potential (ERP) occurring before the onsets of pitch errors. This component, which can be altered in musicians with focal dystonia, likely reflects processes of error detection and/or error compensation, i.e., attempts to cancel the undesired sensory consequence (a wrong tone) a musician is about to perceive. Thus, auditory feedback seems not to be a prerequisite for error detection, consistent with previous behavioral results. In contrast, when auditory feedback is externally manipulated and thus unexpected, motor performance can be severely distorted, although not all feedback alterations result in performance impairments. Recent studies investigating the neural correlates of feedback processing showed that unexpected feedback elicits an ERP component after note onsets, which shows larger amplitudes during music performance than during mere perception of the same musical sequences. Hence, these results stress the role of motor actions for the processing of auditory information. Furthermore, recent methodological advances like the combination of 3D motion capture techniques with EEG will be discussed. Such combinations of different measures can potentially help to disentangle the roles of different feedback types such as proprioceptive and auditory feedback, and in general to arrive at a better understanding of the complex interactions between the motor and auditory domain during error monitoring. Finally, outstanding questions and future directions in this context will be discussed. PMID:23898255

  3. Correction method for the error of diamond tool's radius in ultra-precision cutting

    NASA Astrophysics Data System (ADS)

    Wang, Yi; Yu, Jing-chi

    2010-10-01

    The compensation of the error of the diamond tool's cutting edge is a bottleneck technology that hinders the direct formation of high-accuracy aspheric surfaces after single-point diamond turning. Traditional compensation was performed according to measurement results from a profile meter, which required long measurement times and resulted in low processing efficiency. A new compensation method is put forward in this article, in which the error of the diamond tool's cutting edge is corrected according to measurement results from a digital interferometer. First, the detailed theoretical calculation underlying the compensation method was derived. Then, the effect of the compensation was simulated by computer. Finally, a φ50 mm workpiece was diamond turned and then correction-turned on a Nanotech 250 machine. The tested surface achieved a high shape accuracy of PV 0.137λ and RMS 0.011λ, confirming that the new compensation method agrees with the predictive analysis and offers high accuracy and fast error convergence.

  4. Accuracy of acoustic velocity metering systems for measurement of low velocity in open channels

    USGS Publications Warehouse

    Laenen, Antonius; Curtis, R.E.

    1989-01-01

    Acoustic velocity meter (AVM) accuracy depends on equipment limitations, the accuracy of acoustic-path length and angle determination, and the stability of the relation between mean velocity and acoustic-path velocity. Equipment limitations depend on path length and angle, transducer frequency, timing oscillator frequency, and signal-detection scheme. Typically, the velocity error from this source is about ±1 to ±10 mm/s. Error in acoustic-path angle or length will result in a proportional measurement bias. Typically, an angle error of one degree will result in a velocity error of 2%, and a path-length error of one meter in 100 meters will result in an error of 1%. Ray bending (signal refraction) depends on path length and density gradients present in the stream. Any deviation from a straight acoustic path between transducers will change the unique relation between path velocity and mean velocity. These deviations will then introduce error in the mean velocity computation. Typically, for a 200-meter path length, the resultant error is less than 1%, but for a 1,000-meter path length, the error can be greater than 10%. Recent laboratory and field tests have substantiated assumptions of equipment limitations. Tow-tank tests of an AVM system with a 4.69-meter path length yielded an average standard deviation error of 9.3 mm/s, and field tests of an AVM system with a 20.5-meter path length yielded an average standard deviation error of 4 mm/s. (USGS)
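
    The proportionalities quoted above (roughly 2% per degree of path-angle error at a typical 45° crossing angle, and 1% per 1% of path-length error) follow from the standard reciprocal travel-time equation, since the computed velocity scales with L/cos(θ). The sketch below checks both sensitivities numerically; the geometry and travel times are illustrative values only.

```python
import numpy as np

def avm_velocity(L, theta, t_down, t_up):
    """Mean line velocity from an acoustic velocity meter's reciprocal travel times.
    L: acoustic path length (m); theta: angle between the path and the flow (rad)."""
    return L / (2.0 * np.cos(theta)) * (1.0 / t_down - 1.0 / t_up)

# Reference geometry and synthetic travel times (illustrative values only).
c = 1480.0                                   # speed of sound in water, m/s
L, theta, v_true = 100.0, np.radians(45.0), 0.5
t_down = L / (c + v_true * np.cos(theta))
t_up = L / (c - v_true * np.cos(theta))

v0 = avm_velocity(L, theta, t_down, t_up)

# A 1-degree error in the assumed path angle -> roughly tan(theta)*dtheta ~ 1.7-2 %.
v_angle = avm_velocity(L, theta + np.radians(1.0), t_down, t_up)
# A 1 m error in a 100 m path -> roughly 1 %.
v_length = avm_velocity(L + 1.0, theta, t_down, t_up)

print((v_angle - v0) / v0, (v_length - v0) / v0)
```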

  5. Study of geopotential error models used in orbit determination error analysis

    NASA Technical Reports Server (NTRS)

    Yee, C.; Kelbel, D.; Lee, T.; Samii, M. V.; Mistretta, G. D.; Hart, R. C.

    1991-01-01

    The uncertainty in the geopotential model is currently one of the major error sources in the orbit determination of low-altitude Earth-orbiting spacecraft. The results of an investigation of different geopotential error models and modeling approaches currently used for operational orbit error analysis support at the Goddard Space Flight Center (GSFC) are presented, with emphasis placed on sequential orbit error analysis using a Kalman filtering algorithm. Several geopotential models, known as the Goddard Earth Models (GEMs), were developed and used at GSFC for orbit determination. The errors in the geopotential models arise from the truncation errors that result from the omission of higher order terms (omission errors) and the errors in the spherical harmonic coefficients themselves (commission errors). At GSFC, two error modeling approaches were operationally used to analyze the effects of geopotential uncertainties on the accuracy of spacecraft orbit determination - the lumped error modeling and uncorrelated error modeling. The lumped error modeling approach computes the orbit determination errors on the basis of either the calibrated standard deviations of a geopotential model's coefficients or the weighted difference between two independently derived geopotential models. The uncorrelated error modeling approach treats the errors in the individual spherical harmonic components as uncorrelated error sources and computes the aggregate effect using a combination of individual coefficient effects. This study assesses the reasonableness of the two error modeling approaches in terms of global error distribution characteristics and orbit error analysis results. Specifically, this study presents the global distribution of geopotential acceleration errors for several gravity error models and assesses the orbit determination errors resulting from these error models for three types of spacecraft - the Gamma Ray Observatory, the Ocean Topography Experiment, and the Cosmic
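
    The distinction drawn above between lumped and uncorrelated error modeling is essentially a choice of how coefficient uncertainties are aggregated. The sketch below contrasts full linear covariance propagation with the uncorrelated (diagonal-only, root-sum-square) approximation for a generic sensitivity vector; the sensitivities and covariance are random placeholders, not a Goddard Earth Model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Sensitivities of some orbit-error metric (e.g. radial RMS in metres) to each of
# n geopotential coefficients, plus a coefficient error covariance matrix.
# Both are randomly generated placeholders for illustration.
n = 60
sens = rng.normal(0.0, 0.05, size=n)            # metres per unit coefficient error
A = rng.normal(0.0, 1.0, size=(n, n))
cov = 1e-4 * (A @ A.T) / n                      # a valid (positive semidefinite) covariance

# Full linear propagation retains correlations between coefficient errors.
var_full = sens @ cov @ sens

# "Uncorrelated" error modelling keeps only the diagonal of the covariance,
# i.e. it root-sum-squares the individual coefficient effects.
var_uncorrelated = np.sum(sens ** 2 * np.diag(cov))

print(np.sqrt(var_full), np.sqrt(var_uncorrelated))
```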

  6. Development and evaluation of a Kalman-filter algorithm for terminal area navigation using sensors of moderate accuracy

    NASA Technical Reports Server (NTRS)

    Kanning, G.; Cicolani, L. S.; Schmidt, S. F.

    1983-01-01

    Translational state estimation in terminal area operations, using a set of commonly available position, air data, and acceleration sensors, is described. Kalman filtering is applied to obtain maximum estimation accuracy from the sensors but feasibility in real-time computations requires a variety of approximations and devices aimed at minimizing the required computation time with only negligible loss of accuracy. Accuracy behavior throughout the terminal area, its relation to sensor accuracy, its effect on trajectory tracking errors and control activity in an automatic flight control system, and its adequacy in terms of existing criteria for various terminal area operations are examined. The principal investigative tool is a simulation of the system.
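
    For readers unfamiliar with the estimator family referred to above, the sketch below shows a minimal linear Kalman filter predict/update cycle on a toy position/velocity model with noisy position fixes. It is only a schematic stand-in for the multi-sensor terminal-area filter described in the abstract; the matrices and noise levels are invented.

```python
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    """One predict/update cycle of a linear Kalman filter."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with measurement z
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# 1-D position/velocity model with noisy position fixes, standing in for the
# multi-sensor terminal-area problem.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])
Q = np.diag([1e-4, 1e-3])
H = np.array([[1.0, 0.0]])
R = np.array([[4.0]])                    # 2 m (1-sigma) position noise

rng = np.random.default_rng(0)
x_true = np.array([0.0, 5.0])
x_est, P = np.zeros(2), np.eye(2) * 10.0
for _ in range(200):
    x_true = F @ x_true
    z = H @ x_true + rng.normal(0.0, 2.0, size=1)
    x_est, P = kalman_step(x_est, P, z, F, Q, H, R)

print(x_est, x_true)
```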

  7. How the brain prevents a second error in a perceptual decision-making task.

    PubMed

    Perri, Rinaldo Livio; Berchicci, Marika; Lucci, Giuliana; Spinelli, Donatella; Di Russo, Francesco

    2016-01-01

    In cognitive tasks, error commission is usually followed by a performance characterized by post-error slowing (PES) and post-error improvement of accuracy (PIA). Three theoretical accounts were hypothesized to support these post-error adjustments: the cognitive, the inhibitory, and the orienting account. The aim of the present ERP study was to investigate the neural processes associated with the second error prevention. To this aim, we focused on the preparatory brain activities in a large sample of subjects performing a Go/No-go task. The main results were the enhancement of the prefrontal negativity (pN) component -especially on the right hemisphere- and the reduction of the Bereitschaftspotential (BP) -especially on the left hemisphere- in the post-error trials. The ERP data suggested an increased top-down and inhibitory control, such as the reduced excitability of the premotor areas in the preparation of the trials following error commission. The results were discussed in light of the three theoretical accounts of the post-error adjustments. Additional control analyses supported the view that the adjustments-oriented components (the post-error pN and BP) are separated by the error-related potentials (Ne and Pe), even if all these activities represent a cascade of processes triggered by error-commission. PMID:27534593

  8. How the brain prevents a second error in a perceptual decision-making task

    PubMed Central

    Perri, Rinaldo Livio; Berchicci, Marika; Lucci, Giuliana; Spinelli, Donatella; Di Russo, Francesco

    2016-01-01

    In cognitive tasks, error commission is usually followed by a performance characterized by post-error slowing (PES) and post-error improvement of accuracy (PIA). Three theoretical accounts were hypothesized to support these post-error adjustments: the cognitive, the inhibitory, and the orienting account. The aim of the present ERP study was to investigate the neural processes associated with the second error prevention. To this aim, we focused on the preparatory brain activities in a large sample of subjects performing a Go/No-go task. The main results were the enhancement of the prefrontal negativity (pN) component -especially on the right hemisphere- and the reduction of the Bereitschaftspotential (BP) -especially on the left hemisphere- in the post-error trials. The ERP data suggested an increased top-down and inhibitory control, such as the reduced excitability of the premotor areas in the preparation of the trials following error commission. The results were discussed in light of the three theoretical accounts of the post-error adjustments. Additional control analyses supported the view that the adjustments-oriented components (the post-error pN and BP) are separated by the error-related potentials (Ne and Pe), even if all these activities represent a cascade of processes triggered by error-commission. PMID:27534593

  9. Relationships among balance, visual search, and lacrosse-shot accuracy.

    PubMed

    Marsh, Darrin W; Richard, Leon A; Verre, Arlene B; Myers, Jay

    2010-06-01

    The purpose of this study was to examine variables that may contribute to shot accuracy in women's college lacrosse. A convenience sample of 15 healthy women's National Collegiate Athletic Association Division III College lacrosse players aged 18-23 (mean+/-SD, 20.27+/-1.67) participated in the study. Four experimental variables were examined: balance, visual search, hand grip strength, and shoulder joint position sense. Balance was measured by the Biodex Stability System (BSS), and visual search was measured by the Trail-Making Test Part A (TMTA) and Trail-Making Test Part B (TMTB). Hand-grip strength was measured by a standard hand dynamometer, and shoulder joint position sense was measured using a modified inclinometer. All measures were taken in an indoor setting. These experimental variables were then compared with lacrosse-shot error that was measured indoors using a high-speed video camera recorder and a specialized L-shaped apparatus. A Stalker radar gun measured lacrosse-shot velocity. The mean lacrosse-shot error was 15.17 cm with a mean lacrosse-shot velocity of 17.14 m/s (38.35 mph). Lower scores on the BSS level 8 eyes open (BSS L8 E/O) test and TMTB were positively related to less lacrosse-shot error (r=0.760, p=0.011) and (r=0.519, p=0.048), respectively. Relations were not significant between lacrosse-shot error and grip strength (r=0.191, p = 0.496), lacrosse-shot error and BSS level 8 eyes closed (BSS L8 E/C) (r=0.501, p=0.102), lacrosse-shot error and BSS level 4 eyes open (BSS L4 E/O) (r=0.313, p=0.378), lacrosse-shot error and BSS level 4 eyes closed (BSS L4 E/C) (r=-0.029, p=0.936), lacrosse-shot error and shoulder joint position sense (r=-0.509, p=0.055), and between lacrosse-shot error and TMTA (r=0.375, p=0.168). The results reveal that greater levels of shot accuracy may be related to greater levels of visual search and balance ability in women's college lacrosse athletes.

  10. Hemispheric Asymmetries in Striatal Reward Responses Relate to Approach-Avoidance Learning and Encoding of Positive-Negative Prediction Errors in Dopaminergic Midbrain Regions.

    PubMed

    Aberg, Kristoffer Carl; Doell, Kimberly C; Schwartz, Sophie

    2015-10-28

    Some individuals are better at learning about rewarding situations, whereas others are inclined to avoid punishments (i.e., enhanced approach or avoidance learning, respectively). In reinforcement learning, action values are increased when outcomes are better than predicted (positive prediction errors [PEs]) and decreased for worse than predicted outcomes (negative PEs). Because actions with high and low values are approached and avoided, respectively, individual differences in the neural encoding of PEs may influence the balance between approach-avoidance learning. Recent correlational approaches also indicate that biases in approach-avoidance learning involve hemispheric asymmetries in dopamine function. However, the computational and neural mechanisms underpinning such learning biases remain unknown. Here we assessed hemispheric reward asymmetry in striatal activity in 34 human participants who performed a task involving rewards and punishments. We show that the relative difference in reward response between hemispheres relates to individual biases in approach-avoidance learning. Moreover, using a computational modeling approach, we demonstrate that better encoding of positive (vs negative) PEs in dopaminergic midbrain regions is associated with better approach (vs avoidance) learning, specifically in participants with larger reward responses in the left (vs right) ventral striatum. Thus, individual dispositions or traits may be determined by neural processes acting to constrain learning about specific aspects of the world.
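
    The prediction-error logic summarized above (values increased by positive PEs, decreased by negative PEs) can be made concrete with a small simulation in which the two PE signs are weighted by separate learning rates, so that an encoding asymmetry biases the agent toward approach or avoidance learning. The task structure, learning rule, and parameters below are illustrative, not the computational model fitted in the study.

```python
import numpy as np

def softmax_choice(q, beta, rng):
    p = np.exp(beta * (q - q.max()))
    p /= p.sum()
    return rng.choice(len(q), p=p)

def learn(p_outcome, outcome_value, alpha_pos, alpha_neg, n_trials=500, seed=0):
    """Two-armed task: each arm delivers `outcome_value` with its own probability.
    Returns how often the agent picks the better arm (more reward / less punishment)."""
    rng = np.random.default_rng(seed)
    q = np.zeros(2)
    better = int(np.argmax(np.array(p_outcome) * outcome_value))
    correct = 0
    for _ in range(n_trials):
        choice = softmax_choice(q, beta=3.0, rng=rng)
        outcome = outcome_value if rng.random() < p_outcome[choice] else 0.0
        pe = outcome - q[choice]                      # reward prediction error
        alpha = alpha_pos if pe > 0 else alpha_neg    # asymmetric encoding of +/- PEs
        q[choice] += alpha * pe
        correct += int(choice == better)
    return correct / n_trials

# Stronger weighting of positive PEs tends to favour approach learning (rewards),
# stronger weighting of negative PEs tends to favour avoidance learning (punishments).
for a_pos, a_neg in [(0.3, 0.05), (0.05, 0.3)]:
    approach = learn((0.7, 0.3), +1.0, a_pos, a_neg)
    avoidance = learn((0.7, 0.3), -1.0, a_pos, a_neg)
    print(f"alpha+={a_pos}, alpha-={a_neg}: approach={approach:.2f}, avoidance={avoidance:.2f}")
```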

  11. Hemispheric Asymmetries in Striatal Reward Responses Relate to Approach-Avoidance Learning and Encoding of Positive-Negative Prediction Errors in Dopaminergic Midbrain Regions.

    PubMed

    Aberg, Kristoffer Carl; Doell, Kimberly C; Schwartz, Sophie

    2015-10-28

    Some individuals are better at learning about rewarding situations, whereas others are inclined to avoid punishments (i.e., enhanced approach or avoidance learning, respectively). In reinforcement learning, action values are increased when outcomes are better than predicted (positive prediction errors [PEs]) and decreased for worse than predicted outcomes (negative PEs). Because actions with high and low values are approached and avoided, respectively, individual differences in the neural encoding of PEs may influence the balance between approach-avoidance learning. Recent correlational approaches also indicate that biases in approach-avoidance learning involve hemispheric asymmetries in dopamine function. However, the computational and neural mechanisms underpinning such learning biases remain unknown. Here we assessed hemispheric reward asymmetry in striatal activity in 34 human participants who performed a task involving rewards and punishments. We show that the relative difference in reward response between hemispheres relates to individual biases in approach-avoidance learning. Moreover, using a computational modeling approach, we demonstrate that better encoding of positive (vs negative) PEs in dopaminergic midbrain regions is associated with better approach (vs avoidance) learning, specifically in participants with larger reward responses in the left (vs right) ventral striatum. Thus, individual dispositions or traits may be determined by neural processes acting to constrain learning about specific aspects of the world. PMID:26511241

  12. Measures of spatio-temporal accuracy for time series land cover data

    NASA Astrophysics Data System (ADS)

    Tsutsumida, Narumasa; Comber, Alexis J.

    2015-09-01

    Remote sensing is a useful tool for monitoring changes in land cover over time. The accuracy of such time-series analyses has hitherto only been assessed using confusion matrices. The matrix allows global measures of user, producer and overall accuracies to be generated, but lacks consideration of any spatial aspects of accuracy. It is well known that land cover errors are typically spatially auto-correlated and can have a distinct spatial distribution. As yet little work has considered the temporal dimension and investigated the persistence of errors in both geographic and temporal dimensions. Spatio-temporal errors can have a profound impact on both change detection and on environmental monitoring and modelling activities using land cover data. This study investigated methods for describing the spatio-temporal characteristics of classification accuracy. Annual thematic maps were created using a random forest classification of MODIS data over the Jakarta metropolitan areas for the period of 2001-2013. A logistic geographically weighted model was used to estimate annual spatial measures of user, producer and overall accuracies. A principal component analysis was then used to extract summaries of the multi-temporal accuracy. The results showed how the spatial distribution of user and producer accuracy varied over space and time, and overall spatial variance was confirmed by the principal component analysis. The results indicated that areas of homogeneous land cover were mapped with relatively high accuracy and low variability, and areas of mixed land cover with the opposite characteristics. A multi-temporal spatial approach to accuracy is shown to provide more informative measures of accuracy, allowing map producers and users to evaluate time series thematic maps more comprehensively than a standard confusion matrix approach. The need to identify suitable properties for a temporal kernel is discussed.
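
    The global measures that the confusion-matrix approach provides, and that the geographically weighted model localizes, are the overall, user's, and producer's accuracies. The sketch below computes them from paired reference/predicted labels; the labels are synthetic placeholders rather than the MODIS classification.

```python
import numpy as np

def confusion_accuracies(reference, predicted, n_classes):
    """Global overall, user's (per mapped class) and producer's (per reference class)
    accuracies from paired class labels."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for r, p in zip(reference, predicted):
        cm[p, r] += 1                          # rows: mapped class, columns: reference class
    overall = np.trace(cm) / cm.sum()
    users = np.diag(cm) / cm.sum(axis=1)       # complement of commission error
    producers = np.diag(cm) / cm.sum(axis=0)   # complement of omission error
    return cm, overall, users, producers

# Toy labels for a 3-class map (placeholder data, not the Jakarta classification).
rng = np.random.default_rng(0)
ref = rng.integers(0, 3, 500)
pred = np.where(rng.random(500) < 0.8, ref, rng.integers(0, 3, 500))
cm, overall, users, producers = confusion_accuracies(ref, pred, 3)
print(cm, overall, users, producers, sep="\n")
```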

  13. Design and accuracy analysis of a metamorphic CNC flame cutting machine for ship manufacturing

    NASA Astrophysics Data System (ADS)

    Hu, Shenghai; Zhang, Manhui; Zhang, Baoping; Chen, Xi; Yu, Wei

    2016-05-01

    Current research on processing large fabrication holes on complex spatially curved surfaces mainly focuses on the design of CNC flame cutting machines for ship hulls in ship manufacturing. However, the existing machines cannot meet continuous cutting requirements under variable pass conditions because of their fixed configuration, and cannot achieve high-precision processing because the underlying accuracy theory has not been studied adequately. This paper deals with the structural design and accuracy prediction technology of novel machine tools for solving the problem of continuous and high-precision cutting. The required variable-trajectory and variable-pose kinematic characteristics of the non-contact cutting tool are worked out, and a metamorphic CNC flame cutting machine designed according to the metamorphic principle is presented. To analyze the kinematic accuracy of the machine, models of joint clearances, manufacturing tolerances and errors in the input variables, as well as error models considering their combined effects, are derived based on screw theory after establishing ideal kinematic models. Numerical simulations, a processing experiment and a trajectory tracking experiment are conducted on an eccentric hole with bevels on a cylindrical surface. The resulting cutting pass contour and kinematic error interval, in which the position error ranges from -0.975 mm to +0.628 mm and the orientation error from -0.01 rad to +0.01 rad, indicate that the developed machine can complete the cutting process continuously and effectively, and that the established kinematic error models are effective even though the interval is relatively large. The results also show the match between the metamorphic principle and variable working tasks, and the mapping between the original design parameters and the kinematic errors of the machine. This research develops a metamorphic CNC flame cutting machine and establishes kinematic error models for the accuracy analysis of machine tools.

  14. Effects of affective arousal on choice behavior, reward prediction errors, and feedback-related negativities in human reward-based decision making.

    PubMed

    Liu, Hong-Hsiang; Hsieh, Ming H; Hsu, Yung-Fong; Lai, Wen-Sung

    2015-01-01

    Emotional experience has a pervasive impact on choice behavior, yet the underlying mechanism remains unclear. Introducing facial-expression primes into a probabilistic learning task, we investigated how affective arousal regulates reward-related choice based on behavioral, model fitting, and feedback-related negativity (FRN) data. Sixty-six paid subjects were randomly assigned to the Neutral-Neutral (NN), Angry-Neutral (AN), and Happy-Neutral (HN) groups. A total of 960 trials were conducted. Subjects in each group were randomly exposed to half trials of the pre-determined emotional faces and another half of the neutral faces before choosing between two cards drawn from two decks with different assigned reward probabilities. Trial-by-trial data were fit with a standard reinforcement learning model using the Bayesian estimation approach. The temporal dynamics of brain activity were simultaneously recorded and analyzed using event-related potentials. Our analyses revealed that subjects in the NN group gained more reward values than those in the other two groups; they also exhibited comparatively differential estimated model-parameter values for reward prediction errors. Computing the difference wave of FRNs in reward vs. non-reward trials, we found that, compared to the NN group, subjects in the AN and HN groups had larger "General" FRNs (i.e., FRNs in no-reward trials minus FRNs in reward trials) and "Expected" FRNs (i.e., FRNs in expected reward-omission trials minus FRNs in expected reward-delivery trials), indicating an interruption in predicting reward. Further, both AN and HN groups appeared to be more sensitive to negative outcomes than the NN group. Collectively, our study suggests that affective arousal negatively regulates reward-related choice, probably through overweighting with negative feedback.

  15. Effects of affective arousal on choice behavior, reward prediction errors, and feedback-related negativities in human reward-based decision making

    PubMed Central

    Liu, Hong-Hsiang; Hsieh, Ming H.; Hsu, Yung-Fong; Lai, Wen-Sung

    2015-01-01

    Emotional experience has a pervasive impact on choice behavior, yet the underlying mechanism remains unclear. Introducing facial-expression primes into a probabilistic learning task, we investigated how affective arousal regulates reward-related choice based on behavioral, model fitting, and feedback-related negativity (FRN) data. Sixty-six paid subjects were randomly assigned to the Neutral-Neutral (NN), Angry-Neutral (AN), and Happy-Neutral (HN) groups. A total of 960 trials were conducted. Subjects in each group were randomly exposed to half trials of the pre-determined emotional faces and another half of the neutral faces before choosing between two cards drawn from two decks with different assigned reward probabilities. Trial-by-trial data were fit with a standard reinforcement learning model using the Bayesian estimation approach. The temporal dynamics of brain activity were simultaneously recorded and analyzed using event-related potentials. Our analyses revealed that subjects in the NN group gained more reward values than those in the other two groups; they also exhibited comparatively differential estimated model-parameter values for reward prediction errors. Computing the difference wave of FRNs in reward vs. non-reward trials, we found that, compared to the NN group, subjects in the AN and HN groups had larger “General” FRNs (i.e., FRNs in no-reward trials minus FRNs in reward trials) and “Expected” FRNs (i.e., FRNs in expected reward-omission trials minus FRNs in expected reward-delivery trials), indicating an interruption in predicting reward. Further, both AN and HN groups appeared to be more sensitive to negative outcomes than the NN group. Collectively, our study suggests that affective arousal negatively regulates reward-related choice, probably through overweighting with negative feedback. PMID:26042057

  16. Accuracy of prediction of infarct-related arrhythmic circuits from image-based models reconstructed from low and high resolution MRI.

    PubMed

    Deng, Dongdong; Arevalo, Hermenegild; Pashakhanloo, Farhad; Prakosa, Adityo; Ashikaga, Hiroshi; McVeigh, Elliot; Halperin, Henry; Trayanova, Natalia

    2015-01-01

    Identification of optimal ablation sites in hearts with infarct-related ventricular tachycardia (VT) remains difficult to achieve with the current catheter-based mapping techniques. Limitations arise from the ambiguities in determining the reentrant pathway location(s). The goal of this study was to develop experimentally validated, individualized computer models of infarcted swine hearts, reconstructed from high-resolution ex-vivo MRI and to examine the accuracy of the reentrant circuit location prediction when models of the same hearts are instead reconstructed from low clinical-resolution MRI scans. To achieve this goal, we utilized retrospective data obtained from four pigs ~10 weeks post infarction that underwent VT induction via programmed stimulation and epicardial activation mapping via a multielectrode epicardial sock. After the experiment, high-resolution ex-vivo MRI with late gadolinium enhancement was acquired. The Hi-res images were downsampled into two lower resolutions (Med-res and Low-res) in order to replicate image quality obtainable in the clinic. The images were segmented and models were reconstructed from the three image stacks for each pig heart. VT induction similar to what was performed in the experiment was simulated. Results of the reconstructions showed that the geometry of the ventricles including the infarct could be accurately obtained from Med-res and Low-res images. Simulation results demonstrated that induced VTs in the Med-res and Low-res models were located close to those in Hi-res models. Importantly, all models, regardless of image resolution, accurately predicted the VT morphology and circuit location induced in the experiment. These results demonstrate that MRI-based computer models of hearts with ischemic cardiomyopathy could provide a unique opportunity to predict and analyze VT resulting from specific infarct architecture, and thus may assist in clinical decisions to identify and ablate the reentrant circuit(s). PMID

  17. Collective animal decisions: preference conflict and decision accuracy

    PubMed Central

    Conradt, Larissa

    2013-01-01

    Social animals frequently share decisions that involve uncertainty and conflict. It has been suggested that conflict can enhance decision accuracy. In order to judge the practical relevance of such a suggestion, it is necessary to explore how general such findings are. Using a model, I examine whether conflicts between animals in a group with respect to preferences for avoiding false positives versus avoiding false negatives could, in principle, enhance the accuracy of collective decisions. I found that decision accuracy nearly always peaked when there was maximum conflict in groups in which individuals had different preferences. However, groups with no preferences were usually even more accurate. Furthermore, a relatively slight skew towards more animals with a preference for avoiding false negatives decreased the rate of expected false negatives versus false positives considerably (and vice versa), while resulting in only a small loss of decision accuracy. I conclude that in ecological situations in which decision accuracy is crucial for fitness and survival, animals cannot ‘afford’ preferences with respect to avoiding false positives versus false negatives. When decision accuracy is less crucial, animals might have such preferences. A slight skew in the number of animals with different preferences will result in the group more strongly avoiding the type of error that the majority of group members prefers to avoid. The model also indicated that knowing the average success rate (‘base rate’) of a decision option can be very misleading, and that animals should ignore such base rates unless further information is available. PMID:24516716
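
    The qualitative effect described above can be illustrated with a toy majority-vote simulation in which some group members use a low decision threshold (avoiding false negatives) and others a high one (avoiding false positives); skewing the composition shifts which error type the group tends to make. This is only a loose illustration under invented assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(42)

def group_error_rates(n_fn_avoiders, n_fp_avoiders, n_trials=20000, noise=1.0):
    """Majority vote on a binary state (signal present = 1) from noisy private cues.
    False-negative avoiders use a low decision threshold, false-positive avoiders a
    high one; returns approximate (false positive rate, false negative rate)."""
    thresholds = np.concatenate([np.full(n_fn_avoiders, -0.5),
                                 np.full(n_fp_avoiders, +0.5)])
    n = len(thresholds)
    fp = fn = 0
    for _ in range(n_trials):
        state = rng.integers(0, 2)                        # 0 = absent, 1 = present
        cues = state + rng.normal(0.0, noise, size=n)     # each member's private cue
        votes = cues > thresholds
        decision = int(votes.sum() > n / 2)               # simple majority
        fp += int(decision == 1 and state == 0)
        fn += int(decision == 0 and state == 1)
    # States are equiprobable, so conditional rates are roughly twice the raw proportions.
    return 2 * fp / n_trials, 2 * fn / n_trials

print(group_error_rates(5, 5))    # balanced conflict
print(group_error_rates(6, 4))    # slight skew towards avoiding false negatives
```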

  18. IMPROVEMENT OF SMVGEAR II ON VECTOR AND SCALAR MACHINES THROUGH ABSOLUTE ERROR TOLERANCE CONTROL (R823186)

    EPA Science Inventory

    The computer speed of SMVGEAR II was improved markedly on scalar and vector machines with relatively little loss in accuracy. The improvement was due to a method of frequently recalculating the absolute error tolerance instead of keeping it constant for a given set of chemistry. ...
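
    The idea credited with the speedup, recomputing the absolute error tolerance from the current solution instead of holding it fixed, can be sketched with any stiff ODE integrator. The example below does this with SciPy's solve_ivp on a toy two-species system; the mechanism, chunking interval, and tolerance floor are assumptions, and this is not SMVGEAR II itself.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    """Toy stiff two-species system standing in for a chemistry mechanism."""
    return np.array([-1000.0 * y[0] + 10.0 * y[1],
                     1000.0 * y[0] - 10.0 * y[1]])

y = np.array([1.0, 0.0])
t, t_end, chunk = 0.0, 1.0, 0.05
rtol = 1e-3

while t < t_end:
    # Recompute the absolute tolerance from the current concentrations instead of
    # keeping it constant for the whole run (the idea behind the reported speedup).
    atol = np.maximum(1e-6 * np.abs(y), 1e-12)
    sol = solve_ivp(rhs, (t, min(t + chunk, t_end)), y, method="BDF",
                    rtol=rtol, atol=atol)
    y, t = sol.y[:, -1], sol.t[-1]

print(t, y)
```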

  19. Self-identification and empathy modulate error-related brain activity during the observation of penalty shots between friend and foe

    PubMed Central

    Ganesh, Shanti; van Schie, Hein T.; De Bruijn, Ellen R. A.; Bekkering, Harold

    2009-01-01

    The ability to detect and process errors made by others plays an important role in many social contexts. The capacity to process errors is typically found to rely on sites in the medial frontal cortex. However, it remains to be determined whether responses at these sites are driven primarily by action errors themselves or by the affective consequences normally associated with their commission. Using an experimental paradigm that disentangles action errors and the valence of their affective consequences, we demonstrate that sites in the medial frontal cortex (MFC), including the ventral anterior cingulate cortex (vACC) and pre-supplementary motor area (pre-SMA), respond to action errors independent of the valence of their consequences. The strength of this response was negatively correlated with the empathic concern subscale of the Interpersonal Reactivity Index. We also demonstrate a main effect of self-identification by showing that errors committed by friends and foes elicited significantly different BOLD responses in a separate region of the middle anterior cingulate cortex (mACC). These results suggest that the way we look at others plays a critical role in determining patterns of brain activation during error observation. These findings may have important implications for general theories of error processing. PMID:19015079

  20. Navigation Accuracy Guidelines for Orbital Formation Flying Missions

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Alfriend, Kyle T.

    2003-01-01

    Some simple guidelines based on the accuracy in determining a satellite formation's semi-major axis differences are useful in making preliminary assessments of the navigation accuracy needed to support such missions. These guidelines are valid for any elliptical orbit, regardless of eccentricity. Although maneuvers required for formation establishment, reconfiguration, and station-keeping require accurate prediction of the state estimate to the maneuver time, and hence are directly affected by errors in all the orbital elements, experience has shown that determination of orbit plane orientation and orbit shape to acceptable levels is less challenging than the determination of orbital period or semi-major axis. Furthermore, any differences among the members' semi-major axes are undesirable for a satellite formation, since they will lead to differential along-track drift due to period differences. Since inevitable navigation errors prevent these differences from ever being zero, one may use the guidelines this paper presents to determine how much drift will result from a given relative navigation accuracy, or conversely what navigation accuracy is required to limit drift to a given rate. Since the guidelines do not account for non-two-body perturbations, they may be viewed as useful preliminary design tools, rather than as the basis for mission navigation requirements, which should be based on detailed analysis of the mission configuration, including all relevant sources of uncertainty.
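
    For the near-circular special case (the abstract's guidelines also cover eccentric orbits), the drift-versus-accuracy relation can be made explicit: a semi-major axis difference Δa produces a secular along-track drift of roughly -3πΔa per orbit, so an uncertainty in Δa maps directly to a drift-rate uncertainty. The sketch below is a worked example under that standard approximation; the altitude and error values are illustrative.

```python
import numpy as np

MU = 398600.4418e9          # Earth's gravitational parameter, m^3/s^2

def alongtrack_drift(delta_a, a, per="orbit"):
    """Secular along-track drift caused by a semi-major axis difference delta_a,
    using the near-circular (Hill/Clohessy-Wiltshire) approximation:
    drift rate = -1.5 * n * delta_a, i.e. about -3*pi*delta_a per orbit."""
    n = np.sqrt(MU / a**3)                 # mean motion, rad/s
    rate = -1.5 * n * delta_a              # m/s of along-track separation growth
    if per == "orbit":
        return rate * (2.0 * np.pi / n)    # metres per orbit (= -3*pi*delta_a)
    if per == "day":
        return rate * 86400.0              # metres per day
    return rate                            # metres per second

# Example: a 10 m semi-major-axis determination error at ~700 km altitude.
a = 6378.137e3 + 700e3
print(alongtrack_drift(10.0, a, per="orbit"))   # roughly -94 m per orbit
print(alongtrack_drift(10.0, a, per="day"))     # roughly -1.4 km per day
```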

  1. Anxiety and Error Monitoring: Increased Error Sensitivity or Altered Expectations?

    ERIC Educational Resources Information Center

    Compton, Rebecca J.; Carp, Joshua; Chaddock, Laura; Fineman, Stephanie L.; Quandt, Lorna C.; Ratliff, Jeffrey B.

    2007-01-01

    This study tested the prediction that the error-related negativity (ERN), a physiological measure of error monitoring, would be enhanced in anxious individuals, particularly in conditions with threatening cues. Participants made gender judgments about faces whose expressions were either happy, angry, or neutral. Replicating prior studies, midline…

  2. [Medical device use errors].

    PubMed

    Friesdorf, Wolfgang; Marsolek, Ingo

    2008-01-01

    Medical devices define our everyday patient treatment processes. But despite the beneficial effect, every use can also lead to harm. Use errors are thus often explained by human failure. But human errors can never be completely eliminated, especially in complex work processes like those in medicine that often involve time pressure. Therefore we need error-tolerant work systems in which potential problems are identified and solved as early as possible. In this context human engineering uses the TOP principle: technological before organisational and then person-related solutions. But especially in everyday medical work we realise that error-prone usability concepts can often only be counterbalanced by organisational or person-related measures. Thus human failure is pre-programmed. In addition, many medical work places represent a somewhat chaotic accumulation of individual devices with totally different user interaction concepts. There is not only a lack of holistic work place concepts, but of holistic process and system concepts as well. However, this can only be achieved through the co-operation of producers, healthcare providers and clinical users, by systematically analyzing and iteratively optimizing the underlying treatment processes from both a technological and organizational perspective. What we need is a joint platform like medilab V of the TU Berlin, in which the entire medical treatment chain can be simulated in order to discuss, experiment and model--a key to a safe and efficient healthcare system of the future. PMID:19213452

  3. Spacecraft attitude determination accuracy from mission experience

    NASA Technical Reports Server (NTRS)

    Brasoveanu, D.; Hashmall, J.

    1994-01-01

    This paper summarizes a compilation of attitude determination accuracies attained by a number of satellites supported by the Goddard Space Flight Center Flight Dynamics Facility. The compilation is designed to assist future mission planners in choosing and placing attitude hardware and selecting the attitude determination algorithms needed to achieve given accuracy requirements. The major goal of the compilation is to indicate realistic accuracies achievable using a given sensor complement based on mission experience. It is expected that the use of actual spacecraft experience will make the study especially useful for mission design. A general description of factors influencing spacecraft attitude accuracy is presented. These factors include determination algorithms, inertial reference unit characteristics, and error sources that can affect measurement accuracy. Possible techniques for mitigating errors are also included. Brief mission descriptions are presented with the attitude accuracies attained, grouped by the sensor pairs used in attitude determination. The accuracies for inactive missions represent a compendium of mission report results, and those for active missions represent measurements of attitude residuals. Both three-axis and spin stabilized missions are included. Special emphasis is given to high-accuracy sensor pairs, such as two fixed-head star trackers (FHST's) and fine Sun sensor plus FHST. Brief descriptions of sensor design and mode of operation are included. Also included are brief mission descriptions and plots summarizing the attitude accuracy attained using various sensor complements.

  4. Developments of the general relativity accuracy test (GReAT): a ground-based experiment to test the weak equivalence principle

    NASA Astrophysics Data System (ADS)

    Iafolla, V.; Nozzoli, S.; Lorenzini, E. C.; Shapiro, I. I.; Milyukov, V.

    2000-06-01

    Some future tests of the weak equivalence principle (WEP) with laboratory-size proof masses are likely to be conducted in freefall conditions in order to improve the test accuracy substantially. Some years ago the authors of this paper proposed to test the WEP in a vertical freefall inside a capsule released from a high-altitude balloon. The estimated accuracy in testing the WEP, with a 95% confidence level, is a few parts in 10^15 in a 30 s freefall. When compared with other proposed orbital freefall experiments and ground-based tests, the vertical freefall retains some key advantages of the former without some of the disadvantages of the latter. Moreover, a two orders of magnitude increase in the accuracy of testing the WEP could be achieved with an affordable experiment that allows us to recover the detector and repeat the launches at short time intervals.

  5. Accuracy and precision of gravitational-wave models of inspiraling neutron star-black hole binaries with spin: Comparison with matter-free numerical relativity in the low-frequency regime

    NASA Astrophysics Data System (ADS)

    Kumar, Prayush; Barkett, Kevin; Bhagwat, Swetha; Afshari, Nousha; Brown, Duncan A.; Lovelace, Geoffrey; Scheel, Mark A.; Szilágyi, Béla

    2015-11-01

    Coalescing binaries of neutron stars and black holes are one of the most important sources of gravitational waves for the upcoming network of ground-based detectors. Detection and extraction of astrophysical information from gravitational-wave signals requires accurate waveform models. The effective-one-body and other phenomenological models interpolate between analytic results and numerical relativity simulations, which typically span O(10) orbits before coalescence. In this paper we study the faithfulness of these models for neutron star-black hole binaries. We investigate their accuracy using new numerical relativity (NR) simulations that span 36-88 orbits, with mass ratios q and black hole spins χBH of (q, χBH) = (7, ±0.4), (7, ±0.6), and (5, -0.9). These simulations were performed treating the neutron star as a low-mass black hole, ignoring its matter effects. We find that (i) the recently published SEOBNRv1 and SEOBNRv2 models of the effective-one-body family disagree with each other (mismatches of a few percent) for black hole spins χBH ≥ 0.5 or χBH ≤ -0.3, with waveform mismatch accumulating during early inspiral; (ii) comparison with numerical waveforms indicates that this disagreement is due to phasing errors of SEOBNRv1, with SEOBNRv2 in good agreement with all of our simulations; (iii) phenomenological waveforms agree with SEOBNRv2 only for comparable-mass low-spin binaries, with overlaps below 0.7 elsewhere in the neutron star-black hole binary parameter space; (iv) comparison with numerical waveforms shows that most of this model's dephasing accumulates near the frequency interval where it switches to a phenomenological phasing prescription; and finally (v) both SEOBNR and post-Newtonian models are effectual for neutron star-black hole systems, but post-Newtonian waveforms will give a significant bias in parameter recovery. Our results suggest that future gravitational-wave detection searches and parameter estimation efforts would benefit

  6. Scout trajectory error propagation computer program

    NASA Technical Reports Server (NTRS)

    Myler, T. R.

    1982-01-01

    Since 1969, flight experience has been used as the basis for predicting Scout orbital accuracy. The data used for calculating the accuracy consists of errors in the trajectory parameters (altitude, velocity, etc.) at stage burnout as observed on Scout flights. Approximately 50 sets of errors are used in Monte Carlo analysis to generate error statistics in the trajectory parameters. A covariance matrix is formed which may be propagated in time. The mechanization of this process resulted in computer program Scout Trajectory Error Propagation (STEP) and is described herein. Computer program STEP may be used in conjunction with the Statistical Orbital Analysis Routine to generate accuracy in the orbit parameters (apogee, perigee, inclination, etc.) based upon flight experience.
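
    The two steps described above, forming a covariance matrix from the observed burnout-error sets and propagating it forward, can be sketched as follows. The error samples and the state transition matrix below are invented placeholders standing in for the flight-derived data and the STEP dynamics.

```python
import numpy as np

rng = np.random.default_rng(7)

# ~50 observed sets of burnout-state errors (here: altitude, velocity, flight-path
# angle), generated as placeholders for the flight-derived error sets.
errors = rng.multivariate_normal(
    mean=np.zeros(3),
    cov=np.diag([150.0**2, 3.0**2, np.radians(0.05)**2]),
    size=50)

# Sample covariance of the burnout errors (rowvar=False: rows are samples).
P0 = np.cov(errors, rowvar=False)

# Propagate the covariance in time with a state transition matrix Phi:
# P(t) = Phi P0 Phi^T. Phi here is an arbitrary illustrative linearisation.
Phi = np.array([[1.0, 60.0, 0.0],
                [0.0, 1.0,  0.0],
                [0.0, 0.005, 1.0]])
P_t = Phi @ P0 @ Phi.T

print(np.sqrt(np.diag(P0)))   # 1-sigma errors at burnout
print(np.sqrt(np.diag(P_t)))  # 1-sigma errors after propagation
```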

  7. Changes in limb striking pattern: effects of speed and accuracy.

    PubMed

    Southard, D

    1989-12-01

    This study investigated the changes in an arm striking pattern as a result of practice and the effects of speed and accuracy requirements on such changes. The task was to strike a baseball-size foam ball from a batting tee adjusted to the height of each subject's iliac crest. Ten righthanded subjects, initially displaying an inefficient striking pattern, volunteered for this study. All subjects performed the task according to the following conditions: (1) speed, (2) accuracy, and (3) speed and accuracy. Each subject completed 10 trials in each condition (randomly ordered) for five consecutive days. A high-speed camera (64 fps) was used to photograph subjects' striking patterns for each condition over the 5-day period. Analysis of variance of joint angles at arm reversal and contact and velocity of hand relative to the glenohumeral axis at contact revealed that subjects initially constrained limb segments to act in a unitary fashion; then, with practice, a more efficient pattern was developed. The requirement of speed was found to enhance a change in limb configuration, whereas the requirement of accuracy, and subsequent reduction in speed, impeded the development of a more efficient striking pattern. Analysis of radial error revealed no differences in accuracies to the target by either condition or day of practice. A graphic analysis of segmental angular momentum versus relative time showed that joint angle changes allowed subjects to transfer angular momentum and thereby increase the velocity of the hand at contact. PMID:2489862

  8. Changes in limb striking pattern: effects of speed and accuracy.

    PubMed

    Southard, D

    1989-12-01

    This study investigated the changes in an arm striking pattern as a result of practice and the effects of speed and accuracy requirements on such changes. The task was to strike a baseball-size foam ball from a batting tee adjusted to the height of each subject's iliac crest. Ten righthanded subjects, initially displaying an inefficient striking pattern, volunteered for this study. All subjects performed the task according to the following conditions: (1) speed, (2) accuracy, and (3) speed and accuracy. Each subject completed 10 trials in each condition (randomly ordered) for five consecutive days. A high-speed camera (64 fps) was used to photograph subjects' striking patterns for each condition over the 5-day period. Analysis of variance of joint angles at arm reversal and contact and velocity of hand relative to the glenohumeral axis at contact revealed that subjects initially constrained limb segments to act in a unitary fashion; then, with practice, a more efficient pattern was developed. The requirement of speed was found to enhance a change in limb configuration, whereas the requirement of accuracy, and subsequent reduction in speed, impeded the development of a more efficient striking pattern. Analysis of radial error revealed no differences in accuracies to the target by either condition or day of practice. A graphic analysis of segmental angular momentum versus relative time showed that joint angle changes allowed subjects to transfer angular momentum and thereby increase the velocity of the hand at contact.

  9. Counting OCR errors in typeset text

    NASA Astrophysics Data System (ADS)

    Sandberg, Jonathan S.

    1995-03-01

    Frequently, object recognition accuracy is a key component in the performance analysis of pattern matching systems. In the past three years, the results of numerous excellent and rigorous studies of OCR system typeset-character accuracy (henceforth OCR accuracy) have been published, encouraging performance comparisons between a variety of OCR products and technologies. These published figures are important; OCR vendor advertisements in the popular trade magazines lead readers to believe that published OCR accuracy figures affect market share in the lucrative OCR market. Curiously, a detailed review of many of these OCR error occurrence counting results reveals that they are not reproducible as published and they are not strictly comparable due to larger variances in the counts than would be expected from sampling variance alone. Naturally, since OCR accuracy is based on a ratio of the number of OCR errors over the size of the text searched for errors, imprecise OCR error accounting leads to similar imprecision in OCR accuracy. Some published papers use informal, non-automatic, or intuitively correct OCR error accounting. Still other published results present OCR error accounting methods based on string matching algorithms such as dynamic programming using Levenshtein (edit) distance but omit critical implementation details (such as the existence of suspect markers in the OCR generated output or the weights used in the dynamic programming minimization procedure). The problem with not specifically revealing the accounting method is that the numbers of errors found by different methods are significantly different. This paper identifies the basic accounting methods used to measure OCR errors in typeset text and offers an evaluation and comparison of the various accounting methods.
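
    The accounting methods surveyed above build on a weighted edit distance of the kind sketched below; the operation weights (and any special treatment of suspect markers) are exactly the unreported choices that can make published error counts diverge. The weights shown are arbitrary defaults, not those of any particular study.

```python
def edit_distance(ocr_text, ground_truth, w_sub=1, w_ins=1, w_del=1):
    """Weighted Levenshtein distance between OCR output and ground truth."""
    m, n = len(ocr_text), len(ground_truth)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i * w_del
    for j in range(1, n + 1):
        d[0][j] = j * w_ins
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ocr_text[i - 1] == ground_truth[j - 1] else w_sub
            d[i][j] = min(d[i - 1][j] + w_del,      # spurious OCR character
                          d[i][j - 1] + w_ins,      # missed ground-truth character
                          d[i - 1][j - 1] + cost)   # match or substitution
    return d[m][n]

print(edit_distance("0CR acouracy", "OCR accuracy"))   # 2 errors
```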

  10. Altimeter error sources at the 10-cm performance level

    NASA Technical Reports Server (NTRS)

    Martin, C. F.

    1977-01-01

    Error sources affecting the calibration and operational use of a 10 cm altimeter are examined to determine the magnitudes of current errors and the investigations necessary to reduce them to acceptable bounds. Errors considered include those affecting operational data pre-processing, and those affecting altitude bias determination, with error budgets developed for both. The most significant error sources affecting pre-processing are bias calibration, propagation corrections for the ionosphere, and measurement noise. No ionospheric models are currently validated at the required 10-25% accuracy level. The optimum smoothing to reduce the effects of measurement noise is investigated and found to be on the order of one second, based on the TASC model of geoid undulations. The 10 cm calibrations are found to be feasible only through the use of altimeter passes that are very high elevation for a tracking station which tracks very close to the time of altimeter track, such as a high elevation pass across the island of Bermuda. By far the largest error source, based on the current state-of-the-art, is the location of the island tracking station relative to mean sea level in the surrounding ocean areas.

  11. L2 Spelling Errors in Italian Children with Dyslexia.

    PubMed

    Palladino, Paola; Cismondo, Dhebora; Ferrari, Marcella; Ballagamba, Isabella; Cornoldi, Cesare

    2016-05-01

    The present study aimed to investigate L2 spelling skills in Italian children by administering an English word dictation task to 13 children with dyslexia (CD), 13 control children (comparable in age, gender, schooling and IQ) and a group of 10 children with an English learning difficulty, but no L1 learning disorder. Patterns of difficulties were examined for accuracy and type of errors in spelling dictated short and long words (i.e. disyllables and three syllables). Notably, CD were poor in spelling English words. Furthermore, their errors were mainly related to the phonological representation of words, as they made more 'phonologically' implausible errors than controls. In addition, CD errors were more frequent for short than long words. Conversely, the three groups did not differ in the number of plausible ('non-phonological') errors, that is, words that were incorrectly written, but whose reading could correspond to the dictated word via either Italian or English rules. Error analysis also showed syllable position differences in the spelling patterns of children with dyslexia, children with an English learning difficulty and control children. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26892314

  13. Linear error analysis of slope-area discharge determinations

    USGS Publications Warehouse

    Kirby, W.H.

    1987-01-01

    The slope-area method can be used to calculate peak flood discharges when current-meter measurements are not possible. This calculation depends on several quantities, such as water-surface fall, that are subject to large measurement errors. Other critical quantities, such as Manning's n, are not even amenable to direct measurement but can only be estimated. Finally, scour and fill may cause gross discrepancies between the observed condition of the channel and the hydraulic conditions during the flood peak. The effects of these potential errors on the accuracy of the computed discharge have been estimated by statistical error analysis using a Taylor-series approximation of the discharge formula and the well-known formula for the variance of a sum of correlated random variates. The resultant error variance of the computed discharge is a weighted sum of covariances of the various observational errors. The weights depend on the hydraulic and geometric configuration of the channel. The mathematical analysis confirms the rule of thumb that relative errors in computed discharge increase rapidly when velocity heads exceed the water-surface fall, when the flow field is expanding and when lateral velocity variation (alpha) is large. It also confirms the extreme importance of accurately assessing the presence of scour or fill. © 1987.
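
    The following sketch illustrates the kind of first-order (Taylor-series) error propagation the paper describes: the variance of the computed discharge is approximated by g' C g, where g is the gradient of the discharge formula with respect to the uncertain inputs and C is their error covariance matrix. The simplified Manning-type formula, the assumed independence of the errors and all numerical values are illustrative assumptions, not the full USGS slope-area computation (which also carries velocity-head and expansion terms).

      # First-order (delta-method) error propagation; formula and numbers are assumptions.
      import numpy as np

      def discharge(params):
          n, area, radius, fall, length = params     # Manning n, A, hydraulic radius R, fall, reach length
          slope = fall / length
          return (1.486 / n) * area * radius ** (2.0 / 3.0) * np.sqrt(slope)

      def variance_of_discharge(params, cov, h=1e-6):
          """Approximate Var(Q) as grad' * Cov * grad using central differences."""
          params = np.asarray(params, dtype=float)
          grad = np.empty_like(params)
          for k in range(params.size):
              step = np.zeros_like(params)
              step[k] = h * max(abs(params[k]), 1.0)
              grad[k] = (discharge(params + step) - discharge(params - step)) / (2 * step[k])
          return grad @ cov @ grad

      x = [0.035, 450.0, 2.1, 0.45, 300.0]           # hypothetical n, A (ft^2), R (ft), fall (ft), L (ft)
      cov = np.diag([0.005**2, 20.0**2, 0.1**2, 0.05**2, 1.0**2])  # assumed independent observational errors
      q = discharge(np.asarray(x))
      sigma_q = np.sqrt(variance_of_discharge(x, cov))
      print(f"Q = {q:.0f} cfs, relative error ~ {sigma_q / q:.1%}")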

  14. Low Frequency Error Analysis and Calibration for High-Resolution Optical Satellite's Uncontrolled Geometric Positioning

    NASA Astrophysics Data System (ADS)

    Wang, Mi; Fang, Chengcheng; Yang, Bo; Cheng, Yufeng

    2016-06-01

    The low frequency error is a key factor affecting the uncontrolled geometric processing accuracy of high-resolution optical imagery. To guarantee the geometric quality of the imagery, this paper presents an on-orbit calibration method for the low frequency error based on a geometric calibration field. Firstly, we introduce the overall flow of low frequency error on-orbit analysis and calibration, which includes optical axis angle variation detection of the star sensor, relative calibration among star sensors, multi-star sensor information fusion, and low frequency error model construction and verification. Secondly, we use the optical axis angle change detection method to analyze how the low frequency error varies. Thirdly, we use relative calibration and information fusion among star sensors to achieve datum unification and high-precision attitude output. Finally, we construct the low frequency error model and obtain optimal estimates of the model parameters based on the DEM/DOM of the geometric calibration field. To evaluate the performance of the proposed calibration method, real data from a certain type of satellite are used. Test results demonstrate that the calibration model describes the variation of the low frequency error well. The uncontrolled geometric positioning accuracy of the high-resolution optical image in the WGS-84 coordinate system is clearly improved after the step-wise calibration.

  15. Sun compass error model

    NASA Technical Reports Server (NTRS)

    Blucker, T. J.; Ferry, W. W.

    1971-01-01

    An error model is described for the Apollo 15 sun compass, a contingency navigational device. Field test data are presented along with significant results of the test. The errors reported include a random error resulting from tilt in leveling the sun compass, a random error because of observer sighting inaccuracies, a bias error because of mean tilt in compass leveling, a bias error in the sun compass itself, and a bias error because the device is leveled to the local terrain slope.

  16. Errors in clinical laboratories or errors in laboratory medicine?

    PubMed

    Plebani, Mario

    2006-01-01

    Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes

  17. Improving Automatic English Writing Assessment Using Regression Trees and Error-Weighting

    NASA Astrophysics Data System (ADS)

    Lee, Kong-Joo; Kim, Jee-Eun

    The proposed automated scoring system for English writing tests provides an assessment result, including a score and diagnostic feedback, to test-takers without human effort. The system analyzes an input sentence and detects errors related to spelling, syntax and content similarity. The scoring model adopts a statistical approach, a regression tree. A scoring model in general calculates a score based on the count and the types of automatically detected errors. Accordingly, a system with higher accuracy in detecting errors scores a test more accurately. The accuracy of the system, however, cannot be fully guaranteed for several reasons, such as parsing failure, incompleteness of knowledge bases, and the ambiguous nature of natural language. In this paper, we introduce an error-weighting technique, which is similar to the term-weighting widely used in information retrieval. The error-weighting technique is applied to judge the reliability of the errors detected by the system. The score calculated with the technique is shown to be more accurate than the score without it.
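
    By analogy with term-weighting in information retrieval, the sketch below down-weights error types that the detector flags in nearly every essay (and that are therefore less informative or less reliable) and up-weights rarer ones. The inverse-document-frequency style formula, the penalty scheme and the toy corpus are assumptions for illustration only; the authors' actual scores come from a regression tree.

      # Error-weighting by analogy with tf-idf; the formula and scoring step are assumptions.
      import math
      from collections import Counter

      def error_weights(detected_errors_per_essay):
          """detected_errors_per_essay: list of lists of error-type labels."""
          n_essays = len(detected_errors_per_essay)
          doc_freq = Counter()
          for errors in detected_errors_per_essay:
              doc_freq.update(set(errors))
          # Ubiquitous error types get weights near zero, rare ones get larger weights.
          return {etype: math.log((1 + n_essays) / (1 + df))
                  for etype, df in doc_freq.items()}

      def weighted_error_score(errors, weights, base_score=10.0, penalty=1.0):
          counts = Counter(errors)
          total = sum(penalty * weights.get(e, 0.0) * c for e, c in counts.items())
          return max(0.0, base_score - total)

      corpus = [["spelling", "agreement"], ["spelling"], ["spelling", "content"]]   # toy corpus
      w = error_weights(corpus)
      print(weighted_error_score(["spelling", "agreement", "agreement"], w))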

  18. Impact of Monetary Incentives on Cognitive Performance and Error Monitoring following Sleep Deprivation

    PubMed Central

    Hsieh, Shulan; Li, Tzu-Hsien; Tsai, Ling-Ling

    2010-01-01

    Study Objectives: To examine whether monetary incentives attenuate the negative effects of sleep deprivation on cognitive performance in a flanker task that requires higher-level cognitive-control processes, including error monitoring. Design: Twenty-four healthy adults aged 18 to 23 years were randomly divided into 2 subject groups: one received and the other did not receive monetary incentives for performance accuracy. Both subject groups performed a flanker task and underwent electroencephalographic recordings for event-related brain potentials after normal sleep and after 1 night of total sleep deprivation in a within-subject, counterbalanced, repeated-measures study design. Results: Monetary incentives significantly enhanced the response accuracy and reaction time variability under both normal sleep and sleep-deprived conditions, and they reduced the effects of sleep deprivation on the subjective effort level, the amplitude of the error-related negativity (an error-related event-related potential component), and the latency of the P300 (an event-related potential variable related to attention processes). However, monetary incentives could not attenuate the effects of sleep deprivation on any measures of behavior performance, such as the response accuracy, reaction time variability, or posterror accuracy adjustments; nor could they reduce the effects of sleep deprivation on the amplitude of the Pe, another error-related event-related potential component. Conclusions: This study shows that motivation incentives selectively reduce the effects of total sleep deprivation on some brain activities, but they cannot attenuate the effects of sleep deprivation on performance decrements in tasks that require high-level cognitive-control processes. Thus, monetary incentives and sleep deprivation may act through both common and different mechanisms to affect cognitive performance. Citation: Hsieh S; Li TH; Tsai LL. Impact of monetary incentives on cognitive performance and

  19. Unforced errors and error reduction in tennis

    PubMed Central

    Brody, H

    2006-01-01

    Only at the highest level of tennis is the number of winners comparable to the number of unforced errors. As the average player loses many more points due to unforced errors than due to winners by an opponent, if the rate of unforced errors can be reduced, it should lead to an increase in points won. This article shows how players can improve their game by understanding and applying the laws of physics to reduce the number of unforced errors. PMID:16632568

  20. ERROR ANALYSIS OF COMPOSITE SHOCK INTERACTION PROBLEMS.

    SciTech Connect

    Lee, T.; Mu, Y.; Zhao, M.; Glimm, J.; Li, X.; Ye, K.

    2004-07-26

    We propose statistical models of uncertainty and error in numerical solutions. To represent errors efficiently in shock physics simulations we propose a composition law. The law allows us to estimate errors in the solutions of composite problems in terms of the errors from simpler ones as discussed in a previous paper. In this paper, we conduct a detailed analysis of the errors. One of our goals is to understand the relative magnitude of the input uncertainty vs. the errors created within the numerical solution. In more detail, we wish to understand the contribution of each wave interaction to the errors observed at the end of the simulation.

  1. Error-compensation measurements on polarization qubits

    NASA Astrophysics Data System (ADS)

    Hou, Zhibo; Zhu, Huangjun; Xiang, Guo-Yong; Li, Chuan-Feng; Guo, Guang-Can

    2016-06-01

    Systematic errors are inevitable in most measurements performed in real life because of imperfect measurement devices. Reducing systematic errors is crucial to ensuring the accuracy and reliability of measurement results. To this end, delicate error-compensation design is often necessary in addition to device calibration to reduce the dependence of the systematic error on the imperfection of the devices. The art of error-compensation design is well appreciated in nuclear magnetic resonance systems through the use of composite pulses. In contrast, there are few works on reducing systematic errors in quantum optical systems. Here we propose an error-compensation design to reduce the systematic error in projective measurements on a polarization qubit. It can reduce the systematic error to the second order of the phase errors of both the half-wave plate (HWP) and the quarter-wave plate (QWP) as well as the angle error of the HWP. This technique is then applied to experiments on quantum state tomography on polarization qubits, leading to a 20-fold reduction in the systematic error. Our study may find applications in high-precision tasks in polarization optics and quantum optics.

  2. Error in radiology.

    PubMed

    Goddard, P; Leslie, A; Jones, A; Wakeley, C; Kabala, J

    2001-10-01

    The level of error in radiology has been tabulated from articles on error and on "double reporting" or "double reading". The level of error varies depending on the radiological investigation, but the range is 2-20% for clinically significant or major error. The greatest reduction in error rates will come from changes in systems.

  3. Statistics-based reconstruction method with high random-error tolerance for integral imaging.

    PubMed

    Zhang, Juan; Zhou, Liqiu; Jiao, Xiaoxue; Zhang, Lei; Song, Lipei; Zhang, Bo; Zheng, Yi; Zhang, Zan; Zhao, Xing

    2015-10-01

    A three-dimensional (3D) digital reconstruction method for integral imaging with high random-error tolerance based on statistics is proposed. By statistically analyzing the points reconstructed by triangulation from all corresponding image points in an elemental image array, 3D reconstruction with high random-error tolerance can be realized. To simulate the impacts of random errors, random offsets with different error levels are added to a different number of elemental images in simulation and optical experiments. The results of simulation and optical experiments showed that the proposed statistics-based reconstruction method has relatively stable and better reconstruction accuracy than the conventional reconstruction method. It can be verified that the proposed method can effectively reduce the impacts of random errors on 3D reconstruction in integral imaging. This method is simple and very helpful to the development of integral imaging technology.
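
    The sketch below illustrates the statistical idea behind such a method: a 3D point is triangulated from many corresponding elemental-image points, and a robust statistic over the resulting per-pair reconstructions tolerates a few corrupted correspondences. The use of the median, the noise levels and the synthetic data are illustrative assumptions, not the paper's exact estimator.

      # Robust aggregation of many per-pair point reconstructions; all data are synthetic assumptions.
      import numpy as np

      rng = np.random.default_rng(0)
      true_point = np.array([5.0, -2.0, 120.0])

      # Per-pair triangulated estimates of the same 3D point (one per elemental-image pair),
      # with small noise everywhere and large random offsets on a few corrupted pairs.
      estimates = true_point + rng.normal(0.0, 0.05, size=(50, 3))
      estimates[:8] += rng.normal(0.0, 5.0, size=(8, 3))   # corrupted elemental images

      mean_rec = estimates.mean(axis=0)
      median_rec = np.median(estimates, axis=0)
      print("mean   error:", np.linalg.norm(mean_rec - true_point))
      print("median error:", np.linalg.norm(median_rec - true_point))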

  4. Error analysis and compensation of binocular-stereo-vision measurement system

    NASA Astrophysics Data System (ADS)

    Zhang, Tao; Guo, Junjie

    2008-09-01

    Measurement errors in binocular stereo vision are analyzed. It is shown that multi-stage calibration can efficiently reduce systematic errors due to depth of field. However, because multi-stage calibration is difficult to carry out in practice, error compensation methods are presented in this paper. First, system calibration is completed using a standard plane template. Then, the cameras are moved to different depths, multiple views are taken, and the 3D coordinates of special points on the template are calculated. Finally, an error compensation model over depth is established by least-squares fitting. Experiments based on a CMM indicate that the relative measurement error is reduced by 5.1% with the proposed method, which is of practical value for expanding the measurement range in depth and improving measurement accuracy.
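
    A minimal sketch of the compensation step described above: the depth error observed at several known template positions is fitted by least squares, and the fitted model is then used to correct new measurements. The quadratic error model and the sample numbers are assumptions for illustration only.

      # Least-squares fit of an error-vs-depth compensation model; model form and data are assumptions.
      import numpy as np

      depths = np.array([300.0, 400.0, 500.0, 600.0, 700.0])      # mm, template positions
      measured_error = np.array([0.08, 0.15, 0.26, 0.41, 0.60])   # mm, observed 3D error vs depth

      # Fit error(z) = a*z^2 + b*z + c by least squares
      coeffs = np.polyfit(depths, measured_error, deg=2)
      compensate = np.poly1d(coeffs)

      z_measured = 550.0                      # raw depth from stereo triangulation (mm)
      z_corrected = z_measured - compensate(z_measured)
      print(f"predicted error at {z_measured} mm: {compensate(z_measured):.3f} mm")
      print(f"compensated depth: {z_corrected:.3f} mm")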

  5. Project of Neutron Beta-Decay A-Asymmetry Measurement With Relative Accuracy of (1-2)×10^(-3).

    PubMed

    Serebrov, A; Rudnev, Yu; Murashkin, A; Zherebtsov, O; Kharitonov, A; Korolev, V; Morozov, T; Fomin, A; Pusenkov, V; Schebetov, A; Varlamov, V

    2005-01-01

    We are going to use a polarized cold neutron beam and an axial magnetic field in the shape of a bottle formed by a superconducting magnetic system. Such a configuration of magnetic fields allows us to extract the decay electrons inside a well-defined solid angle with high accuracy. An electrostatic cylinder with a potential of 25 kV defines the detected region of neutron decays. The protons coming from this region will be accelerated and registered by a proton detector. The use of coincidences between electron and proton signals will allow us to considerably suppress the background. The final accuracy of the A-asymmetry will be determined by the uncertainty of the neutron beam polarization measurement, which is at the level of (1-2) × 10^(-3), as shown in previous studies. PMID:27308154

  6. Partially supervised P300 speller adaptation for eventual stimulus timing optimization: target confidence is superior to error-related potential score as an uncertain label

    NASA Astrophysics Data System (ADS)

    Zeyl, Timothy; Yin, Erwei; Keightley, Michelle; Chau, Tom

    2016-04-01

    Objective. Error-related potentials (ErrPs) have the potential to guide classifier adaptation in BCI spellers, for addressing non-stationary performance as well as for online optimization of system parameters, by providing imperfect or partial labels. However, the usefulness of ErrP-based labels for BCI adaptation has not been established in comparison to other partially supervised methods. Our objective is to make this comparison by retraining a two-step P300 speller on a subset of confident online trials using naïve labels taken from speller output, where confidence is determined either by (i) ErrP scores, (ii) posterior target scores derived from the P300 potential, or (iii) a hybrid of these scores. We further wish to evaluate the ability of partially supervised adaptation and retraining methods to adjust to a new stimulus-onset asynchrony (SOA), a necessary step towards online SOA optimization. Approach. Eleven consenting able-bodied adults attended three online spelling sessions on separate days with feedback in which SOAs were set at 160 ms (sessions 1 and 2) and 80 ms (session 3). A post hoc offline analysis and a simulated online analysis were performed on sessions two and three to compare multiple adaptation methods. Area under the curve (AUC) and symbols spelled per minute (SPM) were the primary outcome measures. Main results. Retraining using supervised labels confirmed improvements of 0.9 percentage points (session 2, p < 0.01) and 1.9 percentage points (session 3, p < 0.05) in AUC using same-day training data over using data from a previous day, which supports classifier adaptation in general. Significance. Using posterior target score alone as a confidence measure resulted in the highest SPM of the partially supervised methods, indicating that ErrPs are not necessary to boost the performance of partially supervised adaptive classification. Partial supervision significantly improved SPM at a novel SOA, showing promise for eventual online SOA

  7. Geolocation and Pointing Accuracy Analysis for the WindSat Sensor

    NASA Technical Reports Server (NTRS)

    Meissner, Thomas; Wentz, Frank J.; Purdy, William E.; Gaiser, Peter W.; Poe, Gene; Uliana, Enzo A.

    2006-01-01

    Geolocation and pointing accuracy analyses of the WindSat flight data are presented. The two topics were intertwined in the flight data analysis and will be addressed together. WindSat has no unusual geolocation requirements relative to other sensors, but its beam pointing knowledge accuracy is especially critical to support accurate polarimetric radiometry. Pointing accuracy was improved and verified using geolocation analysis in conjunction with scan bias analysis. Two methods were needed to properly identify and differentiate between data time tagging and pointing knowledge errors. Matchups comparing coastlines indicated in imagery data with their known geographic locations were used to identify geolocation errors. These coastline matchups showed possible pointing errors with ambiguities as to the true source of the errors. Scan bias analysis of U, the third Stokes parameter, and of vertical and horizontal polarizations provided measurement of pointing offsets resolving ambiguities in the coastline matchup analysis. Several geolocation and pointing bias sources were incrementally eliminated, resulting in pointing knowledge and geolocation accuracy that met all design requirements.

  8. Operational Interventions to Maintenance Error

    NASA Technical Reports Server (NTRS)

    Kanki, Barbara G.; Walter, Diane; Dulchinos, VIcki

    1997-01-01

    A significant proportion of aviation accidents and incidents are known to be tied to human error. However, research on flight operational errors has shown that so-called pilot error often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the 'team' concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: 1) to develop human factors interventions which are directly supported by reliable human error data, and 2) to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.

  9. New analytical algorithm for overlay accuracy

    NASA Astrophysics Data System (ADS)

    Ham, Boo-Hyun; Yun, Sangho; Kwak, Min-Cheol; Ha, Soon Mok; Kim, Cheol-Hong; Nam, Suk-Woo

    2012-03-01

    The extension of optical lithography to 2X nm and beyond is often challenged by overlay control. With the overlay measurement error budget reduced to the sub-nm range, conventional Total Measurement Uncertainty (TMU) data is no longer sufficient, and there is no adequate criterion for overlay accuracy. In recent years, numerous authors have reported new methods for assessing the accuracy of overlay metrology: through focus and through color. Quantifying uncertainty in overlay measurement nevertheless remains the most difficult task in overlay metrology. According to the ITRS roadmap, the total overlay budget becomes tighter with each device node as design rules shrink. Conventionally, the total overlay budget is defined as the square root of the sum of squares of the following contributions: scanner overlay performance, wafer process, metrology and mask registration. Each contributor has so far kept pace with successive device nodes through new scanners, new metrology tools and new mask e-beam writers. In particular, scanner overlay performance improved drastically from 9 nm at the 8x node to 2.5 nm at the 3x node, but appears to be reaching its limit beyond the 3x node. The wafer process overlay has therefore become a more important contribution to the total overlay; in fact, it decreased by 3 nm between the DRAM 8x node and the DRAM 3x node. In this paper, the authors propose an analytical algorithm for overlay accuracy together with a non-destructive method. For an on-product layer found to have overlay inaccuracy, the new technique is also used to identify the source of the overlay error. Furthermore
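
    As stated in the abstract, the total overlay budget is the root of the sum of squares of the individual contributions. A minimal sketch of that combination is shown below; apart from the 2.5 nm scanner figure quoted in the abstract, the numbers are purely illustrative assumptions.

      # Root-sum-square combination of overlay contributions; all values except the scanner
      # figure are assumptions for illustration.
      import math

      contributions_nm = {
          "scanner": 2.5,            # from the abstract (3x node)
          "wafer_process": 3.0,      # assumed
          "metrology": 0.5,          # assumed
          "mask_registration": 1.0,  # assumed
      }
      total_overlay = math.sqrt(sum(v ** 2 for v in contributions_nm.values()))
      print(f"total overlay budget ~ {total_overlay:.2f} nm")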

  10. Manson's triple error.

    PubMed

    Delaporte, F.

    2008-09-01

    The author discusses the significance, implications and limitations of Manson's work. How did Patrick Manson resolve some of the major problems raised by the filarial worm life cycle? The Amoy physician showed that circulating embryos could only leave the blood via the percutaneous route, thereby requiring a bloodsucking insect. The discovery of a new autonomous, airborne, active host undoubtedly had a considerable impact on the history of parasitology, but the way in which Manson formulated and solved the problem of the transfer of filarial worms from the body of the mosquito to man resulted in failure. This article shows how the epistemological transformation operated by Manson was indissociably related to a series of errors and how a major breakthrough can be the result of a series of false proposals and, consequently, that the history of truth often involves a history of error. PMID:18814729

  11. An acoustical assessment of pitch-matching accuracy in relation to speech frequency, speech frequency range, age and gender in preschool children

    NASA Astrophysics Data System (ADS)

    Trollinger, Valerie L.

    This study investigated acoustical measurements of singing accuracy in relation to speech fundamental frequency, speech fundamental frequency range, age and gender in preschool-aged children. Seventy subjects from Southeastern Pennsylvania; the San Francisco Bay Area, California; and Terre Haute, Indiana, participated in the study. Speech frequency was measured by having the subjects participate in spontaneous and guided speech activities with the researcher, with 18 diverse samples extracted from each subject's recording for acoustical analysis for fundamental frequency in Hz with the CSpeech computer program. The fundamental frequencies were averaged together to derive a mean speech frequency score for each subject. Speech range was calculated by subtracting the lowest fundamental frequency produced from the highest fundamental frequency produced, resulting in a speech range measured in increments of Hz. Singing accuracy was measured by having the subjects each echo-sing six randomized patterns using the pitches Middle C, D, E, F♯, G and A (440), using the solfege syllables of Do and Re, which were recorded by a 5-year-old female model. For each subject, 18 samples of singing were recorded. All samples were analyzed with CSpeech for fundamental frequency. For each subject, deviation scores in Hz were derived by calculating the difference between what the model sang in Hz and what the subject sang in response in Hz. Individual scores for each child consisted of an overall mean total deviation frequency, mean frequency deviations for each pattern, and mean frequency deviation for each pitch. Pearson correlations, MANOVA and ANOVA analyses, Multiple Regressions and Discriminant Analysis revealed the following findings: (1) moderate but significant (p < .001) relationships emerged between mean speech frequency and the ability to sing the pitches E, F♯, G and A in the study; (2) mean speech frequency also emerged as the strongest

  12. An error analysis of the recovery capability of the relative sea-surface profile over the Puerto Rican trench from multi-station and ship tracking of GEOS-2

    NASA Technical Reports Server (NTRS)

    Stanley, H. R.; Martin, C. F.; Roy, N. A.; Vetter, J. R.

    1971-01-01

    Error analyses were performed to examine the height error in a relative sea-surface profile as determined by a combination of land-based multistation C-band radars and optical lasers and one ship-based radar tracking the GEOS 2 satellite. It was shown that two relative profiles can be obtained: one using available south-to-north passes of the satellite and one using available north-to-south type passes. An analysis of multi-station tracking capability determined that only Antigua and Grand Turk radars are required to provide satisfactory orbits for south-to-north type satellite passes, while a combination of Merritt Island, Bermuda, and Wallops radars provide secondary orbits for north-to-south passes. Analysis of ship tracking capabilities shows that high elevation single pass range-only solutions are necessary to give only moderate sensitivity to systematic error effects.

  13. Slowing after Observed Error Transfers across Tasks

    PubMed Central

    Wang, Lijun; Pan, Weigang; Tan, Jinfeng; Liu, Congcong; Chen, Antao

    2016-01-01

    After committing an error, participants tend to perform more slowly. This phenomenon is called post-error slowing (PES). Although previous studies have explored the PES effect in the context of observed errors, the issue as to whether the slowing effect generalizes across tasksets remains unclear. Further, the generation mechanisms of PES following observed errors must be examined. To address the above issues, we employed an observation-execution task in three experiments. During each trial, participants were required to mentally observe the outcomes of their partners in the observation task and then to perform their own key-press according to the mapping rules in the execution task. In Experiment 1, the same tasksets were utilized in the observation task and the execution task, and three error rate conditions (20%, 50% and 80%) were established in the observation task. The results revealed that the PES effect after observed errors was obtained in all three error rate conditions, replicating and extending previous studies. In Experiment 2, distinct stimuli and response rules were utilized in the observation task and the execution task. The result pattern was the same as that in Experiment 1, suggesting that the PES effect after observed errors was a generic adjustment process. In Experiment 3, the response deadline was shortened in the execution task to rule out the ceiling effect, and two error rate conditions (50% and 80%) were established in the observation task. The PES effect after observed errors was still obtained in the 50% and 80% error rate conditions. However, the accuracy in the post-observed error trials was comparable to that in the post-observed correct trials, suggesting that the slowing effect and improved accuracy did not rely on the same underlying mechanism. Current findings indicate that the occurrence of PES after observed errors is not dependent on the probability of observed errors, consistent with the assumption of cognitive control account

  14. Applications and accuracy of the parallel diagonal dominant algorithm

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He

    1993-01-01

    The Parallel Diagonal Dominant (PDD) algorithm is a highly efficient, ideally scalable tridiagonal solver. In this paper, a detailed study of the PDD algorithm is given. First the PDD algorithm is introduced. Then the algorithm is extended to solve periodic tridiagonal systems. A variant, the reduced PDD algorithm, is also proposed. Accuracy analysis is provided for a class of tridiagonal systems, the symmetric, and anti-symmetric Toeplitz tridiagonal systems. Implementation results show that the analysis gives a good bound on the relative error, and the algorithm is a good candidate for the emerging massively parallel machines.
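
    The sketch below illustrates the relative-error check used in such an accuracy analysis: a diagonally dominant symmetric Toeplitz tridiagonal system is solved and the solution compared with a known reference. The serial Thomas algorithm is only a stand-in to demonstrate the error metric; it is not the parallel PDD algorithm itself, and the system size and entries are assumptions.

      # Relative-error check for a Toeplitz tridiagonal solve; the serial Thomas algorithm
      # is a stand-in for illustration, not the PDD algorithm.
      import numpy as np

      def thomas_solve(a, b, c, d):
          """Solve a tridiagonal system with sub-diagonal a, diagonal b, super-diagonal c."""
          n = len(b)
          cp, dp = np.empty(n), np.empty(n)
          cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
          for i in range(1, n):
              denom = b[i] - a[i] * cp[i - 1]
              cp[i] = c[i] / denom if i < n - 1 else 0.0
              dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
          x = np.empty(n)
          x[-1] = dp[-1]
          for i in range(n - 2, -1, -1):
              x[i] = dp[i] - cp[i] * x[i + 1]
          return x

      n = 1000
      a = np.full(n, -1.0)
      a[0] = 0.0                                 # sub-diagonal (unused first entry)
      c = np.full(n, -1.0)
      c[-1] = 0.0                                # super-diagonal (unused last entry)
      b = np.full(n, 4.0)                        # diagonally dominant Toeplitz diagonal
      x_true = np.random.default_rng(1).normal(size=n)
      d = (b * x_true
           + a * np.concatenate(([0.0], x_true[:-1]))
           + c * np.concatenate((x_true[1:], [0.0])))
      x = thomas_solve(a, b, c, d)
      print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))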

  15. Comparative evaluation of ultrasound scanner accuracy in distance measurement

    NASA Astrophysics Data System (ADS)

    Branca, F. P.; Sciuto, S. A.; Scorza, A.

    2012-10-01

    The aim of the present study is to develop and compare two different automatic methods for accuracy evaluation in ultrasound phantom measurements on B-mode images: both of them yield the relative error e between the distances measured by 14 brand-new ultrasound medical scanners and the nominal distances among nylon wires embedded in a reference test object. The first method is based on a least-squares estimation, while the second one uses the mean value of the same distance evaluated at different locations in the ultrasound image (same-distance method). Results for both methods are presented and discussed.
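
    A minimal sketch of the figure of merit described above: the relative error e between distances measured on the B-mode image and the nominal wire spacings of the test object, computed once through a least-squares scale fit and once as a mean over repeated distances. The nominal spacings, measured values and the single-scale model are illustrative assumptions.

      # Relative error between measured and nominal distances; all data and the scale model are assumptions.
      import numpy as np

      nominal_mm = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
      measured_mm = np.array([10.1, 20.3, 29.8, 40.6, 50.4])

      # Method 1 (least-squares flavour): fit a single scale factor, measured = s * nominal
      s = np.sum(nominal_mm * measured_mm) / np.sum(nominal_mm ** 2)
      relative_error_ls = s - 1.0

      # Method 2 (same-distance flavour): mean relative error over repeated distances
      relative_error_mean = np.mean((measured_mm - nominal_mm) / nominal_mm)

      print(f"scale-fit relative error: {relative_error_ls:+.2%}")
      print(f"mean relative error:      {relative_error_mean:+.2%}")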

  16. Uncertainty in 2D hydrodynamic models from errors in roughness parameterization based on aerial images

    NASA Astrophysics Data System (ADS)

    Straatsma, Menno; Huthoff, Fredrik

    2011-01-01

    In The Netherlands, 2D-hydrodynamic simulations are used to evaluate the effect of potential safety measures against river floods. In the investigated scenarios, the floodplains are completely inundated, thus requiring realistic representations of the hydraulic roughness of floodplain vegetation. The current study aims at providing better insight into the uncertainty of flood water levels due to uncertain floodplain roughness parameterization. The study focuses on three key elements in the uncertainty of floodplain roughness: (1) classification error of the landcover map, (2) within-class variation of vegetation structural characteristics, and (3) mapping scale. To assess the effect of the first error source, new realizations of ecotope maps were made based on the current floodplain ecotope map and an error matrix of the classification. For the second error source, field measurements of vegetation structure were used to obtain uncertainty ranges for each vegetation structural type. The scale error was investigated by reassigning roughness codes on a smaller spatial scale. It is shown that a classification accuracy of 69% leads to an uncertainty range of predicted water levels on the order of decimeters. The other error sources are less relevant. The quantification of the uncertainty in water levels can help to make better decisions on suitable flood protection measures. Moreover, the relation between uncertain floodplain roughness and the error bands in water levels may serve as a guideline for the desired accuracy of floodplain characteristics in hydrodynamic models.
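
    The sketch below illustrates the first error source treated above: new realizations of the landcover map are generated by re-sampling each cell's class from the corresponding row of the classification error (confusion) matrix. The three-class matrix, the tiny random map and the ensemble size are illustrative assumptions only.

      # Monte Carlo realizations of a landcover map from a classification error matrix; data are assumptions.
      import numpy as np

      rng = np.random.default_rng(42)
      classes = np.array([0, 1, 2])                 # e.g. grass, shrubs, forest
      # error_matrix[i, j] = probability that a cell mapped as class i is truly class j
      error_matrix = np.array([
          [0.80, 0.15, 0.05],
          [0.20, 0.69, 0.11],
          [0.05, 0.25, 0.70],
      ])

      mapped = rng.integers(0, 3, size=(50, 50))    # the "current" ecotope map (synthetic)

      def realization(mapped_map):
          """Draw one plausible true map given the mapped classes and the error matrix."""
          out = np.empty_like(mapped_map)
          for i in classes:
              mask = mapped_map == i
              out[mask] = rng.choice(classes, size=mask.sum(), p=error_matrix[i])
          return out

      maps = [realization(mapped) for _ in range(10)]   # ensemble for the roughness uncertainty runs
      print("fraction of cells changed in first realization:", np.mean(maps[0] != mapped))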

  17. Characterization of the error budget of Alba-NOM

    NASA Astrophysics Data System (ADS)

    Nicolas, Josep; Martínez, Juan Carlos

    2013-05-01

    The Alba-NOM instrument is a high accuracy scanning machine capable of measuring the slope profile of long mirrors with resolution below the nanometer scale and for a wide range of curvatures. We present the characterization of different sources of errors that limit the uncertainty of the instrument. We have investigated three main contributions to the uncertainty of the measurements: errors introduced by the scanning system and the pentaprism, errors due to environmental conditions, and optical errors of the autocollimator. These sources of error have been investigated by measuring the corresponding motion errors with a high accuracy differential interferometer and by simulating their impact on the measurements by means of ray-tracing. Optical error contributions have been extracted from the analysis of redundant measurements of test surfaces. The methods and results are presented, as well as an example of application that has benefited from the achieved accuracy.

  18. 45 CFR 60.6 - Reporting errors, omissions, and revisions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    Title 45 (Public Welfare), § 60.6 Reporting errors, omissions, and revisions. (a) Persons and entities are responsible for the accuracy of information which they report to the NPDB. If errors or omissions are found...

  19. Accuracy Assessment and Correction of Vaisala RS92 Radiosonde Water Vapor Measurements

    NASA Technical Reports Server (NTRS)

    Whiteman, David N.; Miloshevich, Larry M.; Vomel, Holger; Leblanc, Thierry

    2008-01-01

    Relative humidity (RH) measurements from Vaisala RS92 radiosondes are widely used in both research and operational applications, although the measurement accuracy is not well characterized as a function of its known dependences on height, RH, and time of day (or solar altitude angle). This study characterizes RS92 mean bias error as a function of its dependences by comparing simultaneous measurements from RS92 radiosondes and from three reference instruments of known accuracy. The cryogenic frostpoint hygrometer (CFH) gives the RS92 accuracy above the 700 mb level; the ARM microwave radiometer gives the RS92 accuracy in the lower troposphere; and the ARM SurTHref system gives the RS92 accuracy at the surface using 6 RH probes with NIST-traceable calibrations. These RS92 assessments are combined using the principle of Consensus Referencing to yield a detailed estimate of RS92 accuracy from the surface to the lowermost stratosphere. An empirical bias correction is derived to remove the mean bias error, yielding corrected RS92 measurements whose mean accuracy is estimated to be +/-3% of the measured RH value for nighttime soundings and +/-4% for daytime soundings, plus an RH offset uncertainty of +/-0.5%RH that is significant for dry conditions. The accuracy of individual RS92 soundings is further characterized by the 1-sigma "production variability," estimated to be +/-1.5% of the measured RH value. The daytime bias correction should not be applied to cloudy daytime soundings, because clouds affect the solar radiation error in a complicated and uncharacterized way.

  20. How personal standards perfectionism and evaluative concerns perfectionism affect the error positivity and post-error behavior with varying stimulus visibility.

    PubMed

    Drizinsky, Jessica; Zülch, Joachim; Gibbons, Henning; Stahl, Jutta

    2016-10-01

    Error detection is required in order to correct or avoid imperfect behavior. Although error detection is beneficial for some people, for others it might be disturbing. We investigated Gaudreau and Thompson's (Personality and Individual Differences, 48, 532-537, 2010) model, which combines personal standards perfectionism (PSP) and evaluative concerns perfectionism (ECP). In our electrophysiological study, 43 participants performed a combination of a modified Simon task, an error awareness paradigm, and a masking task with a variation of stimulus onset asynchrony (SOA; 33, 67, and 100 ms). Interestingly, relative to low-ECP participants, high-ECP participants showed a better post-error accuracy (despite a worse classification accuracy) in the high-visibility SOA 100 condition than in the two low-visibility conditions (SOA 33 and SOA 67). Regarding the electrophysiological results, first, we found a positive correlation between ECP and the amplitude of the error positivity (Pe) under conditions of low stimulus visibility. Second, under the condition of high stimulus visibility, we observed a higher Pe amplitude for high-ECP-low-PSP participants than for high-ECP-high-PSP participants. These findings are discussed within the framework of the error-processing avoidance hypothesis of perfectionism (Stahl, Acharki, Kresimon, Völler, & Gibbons, International Journal of Psychophysiology, 97, 153-162, 2015). PMID:27250616

  1. Passport officers' errors in face matching.

    PubMed

    White, David; Kemp, Richard I; Jenkins, Rob; Matheson, Michael; Burton, A Mike

    2014-01-01

    Photo-ID is widely used in security settings, despite research showing that viewers find it very difficult to match unfamiliar faces. Here we test participants with specialist experience and training in the task: passport-issuing officers. First, we ask officers to compare photos to live ID-card bearers, and observe high error rates, including 14% false acceptance of 'fraudulent' photos. Second, we compare passport officers with a set of student participants, and find equally poor levels of accuracy in both groups. Finally, we observe that passport officers show no performance advantage over the general population on a standardised face-matching task. Across all tasks, we observe very large individual differences: while average performance of passport staff was poor, some officers performed very accurately--though this was not related to length of experience or training. We propose that improvements in security could be made by emphasising personnel selection.

  3. Tropical errors and convection

    NASA Astrophysics Data System (ADS)

    Bechtold, P.; Bauer, P.; Engelen, R. J.

    2012-12-01

    Tropical convection is analysed in the ECMWF Integrated Forecast System (IFS) through tropical errors and their evolution during the last decade as a function of model resolution and model changes. As the characterization of these errors is particularly difficult over tropical oceans due to sparse in situ upper-air data, more weight compared to the middle latitudes is given in the analysis to the underlying forecast model. Therefore, special attention is paid to available near-surface observations and to comparison with analysis from other Centers. There is a systematic lack of low-level wind convergence in the Inner Tropical Convergence Zone (ITCZ) in the IFS, leading to a spindown of the Hadley cell. Critical areas with strong cross-equatorial flow and large wind errors are the Indian Ocean with large interannual variations in forecast errors, and the East Pacific with persistent systematic errors that have evolved little during the last decade. The analysis quality in the East Pacific is affected by observation errors inherent to the atmospheric motion vector wind product. The model's tropical climate and its variability and teleconnections are also evaluated, with a particular focus on the Madden-Julian Oscillation (MJO) during the Year of Tropical Convection (YOTC). The model is shown to reproduce the observed tropical large-scale wave spectra and teleconnections, but overestimates the precipitation during the South-East Asian summer monsoon. The recent improvements in tropical precipitation, convectively coupled wave and MJO predictability are shown to be strongly related to improvements in the convection parameterization that realistically represents the convection sensitivity to environmental moisture, and the large-scale forcing due to the use of strong entrainment and a variable adjustment time-scale. There is however a remaining slight moistening tendency and low-level wind imbalance in the model that is responsible for the Asian Monsoon bias and for too

  4. The suppression of phase error by applying window functions to digital holography

    NASA Astrophysics Data System (ADS)

    Yan, Facai; Yan, Hao; Yu, Yingjie; Zhou, Wenjing; Asundi, Anand

    2016-11-01

    Digital holography (DH) is a 3D imaging technique with a theoretical axial accuracy of around 1-2 nm. In practice, however, the axial error is generally quoted as tens of nanometers. Previous studies on sources of axial error mainly focused on the phase error introduced by the lens, but it was later shown that other factors, such as the limited CCD aperture size, also contribute to the axial error. Building on that work, approaches to suppress the axial error caused by the limited CCD aperture size are discussed in this paper. Use of a window function to modify the shape of the hologram aperture after the recording process is proposed to reduce the axial error. The mechanism by which this window function reduces the axial/phase error is analyzed. A specific feature of the window function related to the axial error, namely the side lobe energy to main lobe energy ratio (SMER), is postulated. Both simulation and experiment are performed to validate that the selection of an appropriate window function helps to reduce the axial error of digital holography, and that SMER is an effective indicator in selecting an appropriate window function.
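
    The sketch below estimates the SMER figure of merit named in the abstract for a candidate window: the ratio of side-lobe energy to main-lobe energy, taken from a zero-padded FFT of the window with the main lobe bounded by the first spectral nulls around the peak. This particular main-lobe definition and the rectangular/Hann comparison are assumptions for illustration.

      # Side-lobe-to-main-lobe energy ratio (SMER) of a window; main-lobe definition is an assumption.
      import numpy as np

      def smer(window, pad_factor=64):
          spectrum = np.abs(np.fft.fft(window, len(window) * pad_factor)) ** 2
          spectrum = np.fft.fftshift(spectrum)
          peak = int(np.argmax(spectrum))
          # walk outwards from the peak to the first spectral nulls (local minima)
          right = peak
          while right + 1 < len(spectrum) and spectrum[right + 1] < spectrum[right]:
              right += 1
          left = peak
          while left - 1 >= 0 and spectrum[left - 1] < spectrum[left]:
              left -= 1
          main_lobe = spectrum[left:right + 1].sum()
          side_lobes = spectrum.sum() - main_lobe
          return side_lobes / main_lobe

      n = 256
      print("rectangular SMER:", smer(np.ones(n)))
      print("Hann        SMER:", smer(np.hanning(n)))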

  5. Developmental Aspects of Error and High-Conflict-Related Brain Activity in Pediatric Obsessive-Compulsive Disorder: A FMRI Study with a Flanker Task before and after CBT

    ERIC Educational Resources Information Center

    Huyser, Chaim; Veltman, Dick J.; Wolters, Lidewij H.; de Haan, Else; Boer, Frits

    2011-01-01

    Background: Heightened error and conflict monitoring are considered central mechanisms in obsessive-compulsive disorder (OCD) and are associated with anterior cingulate cortex (ACC) function. Pediatric obsessive-compulsive patients provide an opportunity to investigate the development of this area and its associations with psychopathology.…

  6. The Impact of Short-Term Science Teacher Professional Development on the Evaluation of Student Understanding and Errors Related to Natural Selection. CRESST Report 822

    ERIC Educational Resources Information Center

    Buschang, Rebecca E.

    2012-01-01

    This study evaluated the effects of a short-term professional development session. Forty volunteer high school biology teachers were randomly assigned to one of two professional development conditions: (a) developing deep content knowledge (i.e., control condition) or (b) evaluating student errors and understanding in writing samples (i.e.,…

  7. The Impact of Short-Term Science Teacher Professional Development on the Evaluation of Student Understanding and Errors Related to Natural Selection

    ERIC Educational Resources Information Center

    Buschang, Rebecca Ellen

    2012-01-01

    This study evaluated the effects of a short-term professional development session. Forty volunteer high school biology teachers were randomly assigned to one of two professional development conditions: (a) developing deep content knowledge (i.e., control condition) or (b) evaluating student errors and understanding in writing samples (i.e.,…

  8. Global accuracy estimates of point and mean undulation differences obtained from gravity disturbances, gravity anomalies and potential coefficients

    NASA Technical Reports Server (NTRS)

    Jekeli, C.

    1979-01-01

    Through the method of truncation functions, the oceanic geoid undulation is divided into two constituents: an inner zone contribution expressed as an integral of surface gravity disturbances over a spherical cap; and an outer zone contribution derived from a finite set of potential harmonic coefficients. Global, average error estimates are formulated for undulation differences, thereby providing accuracies for a relative geoid. The error analysis focuses on the outer zone contribution for which the potential coefficient errors are modeled. The method of computing undulations based on gravity disturbance data for the inner zone is compared to the similar, conventional method which presupposes gravity anomaly data within this zone.

  9. Understanding error generation in fused deposition modeling

    NASA Astrophysics Data System (ADS)

    Bochmann, Lennart; Bayley, Cindy; Helu, Moneer; Transchel, Robert; Wegener, Konrad; Dornfeld, David

    2015-03-01

    Additive manufacturing offers completely new possibilities for the manufacturing of parts. The advantages of flexibility and convenience of additive manufacturing have had a significant impact on many industries, and optimizing part quality is crucial for expanding its utilization. This research aims to determine the sources of imprecision in fused deposition modeling (FDM). Process errors in terms of surface quality, accuracy and precision are identified and quantified, and an error-budget approach is used to characterize errors of the machine tool. It was determined that accuracy and precision in the y direction (0.08-0.30 mm) are generally greater than in the x direction (0.12-0.62 mm) and the z direction (0.21-0.57 mm). Furthermore, accuracy and precision tend to decrease at increasing axis positions. The results of this work can be used to identify possible process improvements in the design and control of FDM technology.

  10. Accuracy of References in Five Entomology Journals.

    ERIC Educational Resources Information Center

    Kristof, Cynthia

    In this paper, the bibliographical references in five core entomology journals are examined for citation accuracy in order to determine if the error rates are similar. Every reference printed in each journal's first issue of 1992 was examined, and these were compared to the original (cited) publications, if possible, in order to determine the…

  11. Spatio-temporal Dynamics of Error Processing Dysfunctions in Major Depressive Disorder

    PubMed Central

    Holmes, Avram J.; Pizzagalli, Diego A.

    2008-01-01

    Context Depression is characterized by executive dysfunctions and abnormal reactions to errors; however, little is known about the brain mechanisms that underlie these deficits. Objective To examine whether abnormal reactions to errors in patients with major depressive disorder (MDD) are associated with exaggerated paralimbic activation and/or a failure to recruit subsequent cognitive control to account for mistakes in performance. Design Between February 15, 2005, and January 19, 2006, we recorded 128-channel event-related potentials while study participants performed a Stroop task, modified to incorporate performance feedback. Setting Patients with MDD and healthy comparison subjects were recruited from the general community. Participants Study participants were 20 unmedicated patients with MDD and 20 demographically matched comparison subjects. Main Outcome Measures The error-related negativity and error positivity were analyzed through scalp and source localization analyses. Functional connectivity analyses were conducted to investigate group differences in the spatiotemporal dynamics of brain mechanisms that underlie error processing. Results Relative to comparison subjects, patients with MDD displayed significantly lower accuracy after incorrect responses, larger error-related negativity, and higher current density in the rostral anterior cingulate cortex (ACC) and medial prefrontal cortex (PFC) (Brodmann area 10/32) 80 milliseconds after committing an error. Functional connectivity analyses revealed that for the comparison subjects, but not the patients with MDD, rostral ACC and medial PFC activation 80 milliseconds after committing an error predicted left dorsolateral PFC (Brodmann area 8/9) activation 472 milliseconds after committing an error. Conclusions Unmedicated patients with MDD showed reduced accuracy and potentiated error-related negativity immediately after committing errors, highlighting dysfunctions in the automatic detection of unfavorable

  12. Error compensation for thermally induced errors on a machine tool

    SciTech Connect

    Krulewich, D.A.

    1996-11-08

    Heat flow from internal and external sources and the environment creates machine deformations, resulting in positioning errors between the tool and workpiece. There is no industrially accepted method for thermal error compensation. A simple model has been selected that linearly relates discrete temperature measurements to the deflection. The biggest problem is determining how many temperature sensors are required and where to locate them. This research develops a method to determine the number and location of temperature measurements.
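
    A minimal sketch of the simple model named above: the deflection is expressed as a linear combination of discrete temperature measurements, with coefficients fitted by ordinary least squares. The sensor count, the synthetic data and the plain lstsq fit are illustrative assumptions and do not address the report's sensor-placement question.

      # Linear thermal error model fitted by least squares; sensor count and data are assumptions.
      import numpy as np

      rng = np.random.default_rng(7)
      n_samples, n_sensors = 200, 6
      temps = 20.0 + rng.normal(0.0, 3.0, size=(n_samples, n_sensors))          # deg C
      true_coeffs = np.array([2.0, -1.2, 0.4, 0.0, 0.9, -0.3])                  # um per deg C (synthetic)
      deflection = temps @ true_coeffs + 5.0 + rng.normal(0.0, 0.5, n_samples)  # um

      # Fit deflection ~ offset + sum_k c_k * T_k
      X = np.column_stack([np.ones(n_samples), temps])
      coeffs, *_ = np.linalg.lstsq(X, deflection, rcond=None)

      predicted = X @ coeffs
      residual_rms = np.sqrt(np.mean((deflection - predicted) ** 2))
      print("fitted offset and coefficients:", np.round(coeffs, 2))
      print("residual RMS (um):", round(residual_rms, 3))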

  13. Error correction for rotationally asymmetric surface deviation testing based on rotational shears.

    PubMed

    Wang, Weibo; Liu, Pengfei; Xing, Yaolong; Tan, Jiubin; Liu, Jian

    2016-09-10

    We present a practical method for absolute testing of rotationally asymmetric surface deviation based on rotation averaging, additional compensation, and azimuthal errors correction. The errors of angular orders kNθ neglected in the traditional multiangle averaging method can be reconstructed and compensated with the help of least-squares fitting of Zernike polynomials by an additional rotation measurement with a suitable selection of rotation angles. The estimation algorithm adopts the least-squares technique to eliminate azimuthal errors caused by rotation inaccuracy. The unknown relative alignment of the measurements also can be estimated through the differences in measurement results at overlapping areas. The method proposed combines the advantages of the single-rotation and multiangle averaging methods and realizes a balance between the efficiency and accuracy of the measurements. Experimental results show that the method proposed can obtain high accuracy even with fewer rotation measurements. PMID:27661385

  14. Assessment of optical localizer accuracy for computer aided surgery systems.

    PubMed

    Elfring, Robert; de la Fuente, Matías; Radermacher, Klaus

    2010-01-01

    The technology for localization of surgical tools with respect to the patient's reference coordinate system in three to six degrees of freedom is one of the key components in computer aided surgery. Several tracking methods are available, of which optical tracking is the most widespread in clinical use. Optical tracking technology has proven to be a reliable method for intra-operative position and orientation acquisition in many clinical applications; however, the accuracy of such localizers is still a topic of discussion. In this paper, the accuracy of three optical localizer systems, the NDI Polaris P4, the NDI Polaris Spectra (in active and passive mode) and the Stryker Navigation System II camera, is assessed and compared critically. Static tests revealed that only the Polaris P4 shows significant warm-up behavior, with a significant shift of accuracy being observed within 42 minutes of being switched on. Furthermore, the intrinsic localizer accuracy was determined for single markers as well as for tools using a volumetric measurement protocol on a coordinate measurement machine. To determine the relative distance error within the measurement volume, the Length Measurement Error (LME) was determined at 35 test lengths. As accuracy depends strongly on the marker configuration employed, the error to be expected in typical clinical setups was estimated in a simulation for different tool configurations. The two active localizer systems, the Stryker Navigation System II camera and the Polaris Spectra (active mode), showed the best results, with trueness values (mean +/- standard deviation) of 0.058 +/- 0.033 mm and 0.089 +/- 0.061 mm, respectively. The Polaris Spectra (passive mode) showed a trueness of 0.170 +/- 0.090 mm, and the Polaris P4 showed the lowest trueness at 0.272 +/- 0.394 mm with a higher number of outliers than for the other cameras. The simulation of the different tool configurations in a typical clinical setup revealed that the tracking error can

  15. Measurement error analysis of taxi meter

    NASA Astrophysics Data System (ADS)

    He, Hong; Li, Dan; Li, Hang; Zhang, Da-Jian; Hou, Ming-Feng; Zhang, Shi-pu

    2011-12-01

    Verification of a taximeter covers two aspects: (1) testing the timing error of the taximeter and (2) testing the distance (usage) error of the device. The paper first describes the working principle of the meter and the principle of the error verification device. Based on JJG 517-2009, "Taximeter Verification Regulation", it analyzes the instrument error and the test error of the taxi meter and discusses the detection methods for both the time error and the distance error. Repeated measurements under identical conditions are used to evaluate the Type A standard uncertainty components, while measurements under differing conditions are used to evaluate the Type B components. Comparison and analysis of the results show that the meter conforms to JJG 517-2009, which improves the accuracy and efficiency of verification. In practice, this not only compensates for limited meter accuracy but also helps ensure fair transactions between drivers and passengers, supporting the taxi's value as a means of transportation.
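
    A minimal sketch of the uncertainty bookkeeping involved: a Type A standard uncertainty estimated from repeated measurements under identical conditions, combined in quadrature with an assumed Type B component derived from a device resolution. The numbers, the resolution, and the rectangular-distribution assumption are illustrative, not values from the paper.

      # Sketch: Type A uncertainty from repeated measurements plus an assumed Type B
      # component from display resolution, combined in quadrature (illustrative numbers).
      import math
      import statistics

      repeats = [0.52, 0.48, 0.55, 0.50, 0.47, 0.53, 0.51, 0.49]   # relative errors in %, identical conditions

      mean_error = statistics.mean(repeats)
      u_a = statistics.stdev(repeats) / math.sqrt(len(repeats))    # standard deviation of the mean

      resolution = 0.01                                            # assumed display resolution in %
      u_b = resolution / (2 * math.sqrt(3))                        # rectangular distribution: half-width / sqrt(3)

      u_combined = math.hypot(u_a, u_b)
      print(f"mean error {mean_error:.3f} %, combined standard uncertainty {u_combined:.4f} %")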

  16. Error correction in image registration using POCS

    NASA Astrophysics Data System (ADS)

    Duraisamy, Prakash; Alam, Mohammad S.; Jackson, Stephen C.

    2011-04-01

    Image registration plays a vital role in many real-time imaging applications, and registering images precisely is a challenging problem. In this paper we focus on improving the computation of image registration error using the projection onto convex sets (POCS) technique, which improves sub-pixel accuracy in the images and leads to better estimates of the registration error. This can in turn be used to improve the registration itself. The results obtained from the proposed technique match the ground truth well, which validates its accuracy. Furthermore, the proposed technique shows better performance than existing methods.
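
    The sketch below illustrates the POCS principle itself rather than the paper's registration pipeline: a band-limited signal is recovered from a subset of its samples by alternating projections onto two convex sets (agreement with the observed samples, and the band limit). The signal model, bandwidth, and sampling ratio are assumptions for illustration.

      # Sketch: projection onto convex sets (POCS). A band-limited signal is recovered
      # from a subset of its samples by alternating projections onto two convex sets.
      import numpy as np

      rng = np.random.default_rng(3)
      n, band = 256, 12
      idx = np.concatenate([np.arange(1, band + 1), np.arange(n - band, n)])  # conjugate-symmetric support
      spectrum = np.zeros(n, dtype=complex)
      spectrum[idx] = rng.normal(size=idx.size) + 1j * rng.normal(size=idx.size)
      truth = np.fft.ifft(spectrum).real               # band-limited ground-truth signal

      observed = rng.random(n) < 0.4                   # roughly 40% of samples are known
      keep = np.zeros(n, dtype=bool)
      keep[idx] = True
      keep[0] = True

      x = np.zeros(n)
      for _ in range(200):
          x[observed] = truth[observed]                # projection 1: agree with observed samples
          X = np.fft.fft(x)
          X[~keep] = 0.0                               # projection 2: enforce the band limit
          x = np.fft.ifft(X).real

      print("max reconstruction error:", np.max(np.abs(x - truth)))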

  17. Relative significance of heat transfer processes to quantify tradeoffs between complexity and accuracy of energy simulations with a building energy use patterns classification

    NASA Astrophysics Data System (ADS)

    Heidarinejad, Mohammad

    This dissertation develops rapid and accurate building energy simulations based on a building classification that identifies, and focuses modeling effort on, the most significant heat transfer processes. The building classification identifies energy use patterns and their contributing parameters for a portfolio of buildings. The dissertation hypothesis is: "Building classification can provide the minimal required inputs for rapid and accurate energy simulations for a large number of buildings." The critical literature review indicated a lack of studies that (1) take a synoptic point of view rather than a case-study approach, (2) analyze the influence of different granularities of energy use, (3) identify key variables based on the heat transfer processes, and (4) automate the procedure for quantifying the tradeoff between model complexity and accuracy. Therefore, three objectives are designed to test the dissertation hypothesis: (1) develop classes of buildings based on their energy use patterns, (2) develop building energy simulation approaches for the identified classes to quantify tradeoffs between model accuracy and complexity, and (3) demonstrate the simulation approaches on case studies. Penn State's and Harvard's campus buildings, as well as high-performance LEED NC office buildings, serve as test beds for developing the building classes. The campus buildings include detailed chilled water, electricity, and steam data, enabling classification of buildings as externally-load, internally-load, or mixed-load dominated. The energy use of internally-load-dominated buildings is primarily a function of the internal loads and their schedules. Externally-load-dominated buildings tend to have an energy use pattern that is a function of building construction materials and outdoor weather conditions. However, most commercial medium-sized office buildings have a mixed-load pattern, meaning the HVAC system and operation schedule dictate
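
    As a purely hypothetical illustration of how such a classification might be automated (this is not the dissertation's actual procedure), the sketch below labels a building as externally-, internally-, or mixed-load dominated according to how strongly its monthly energy use tracks outdoor temperature; the thresholds and the example profiles are assumptions.

      # Hypothetical sketch: label a building by how strongly its monthly energy use
      # tracks outdoor temperature (thresholds and profiles are assumed, not from the work).
      import statistics

      def classify(monthly_energy, monthly_outdoor_temp, hi=0.7, lo=0.3):
          r = statistics.correlation(monthly_energy, monthly_outdoor_temp)
          if abs(r) >= hi:
              return "externally-load dominated"
          if abs(r) <= lo:
              return "internally-load dominated"
          return "mixed-load dominated"

      temps = [0, 2, 7, 13, 19, 24, 27, 26, 21, 14, 7, 2]                   # monthly mean outdoor temp (C)
      lab = [310, 300, 305, 298, 302, 306, 309, 307, 301, 299, 303, 305]    # nearly flat load profile
      office = [220, 210, 180, 150, 160, 210, 260, 255, 190, 150, 170, 200] # heating- and cooling-driven

      print("lab:", classify(lab, temps))
      print("office:", classify(office, temps))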

  18. Error analysis for semi-analytic displacement derivatives with respect to shape and sizing variables

    NASA Technical Reports Server (NTRS)

    Fenyes, Peter A.; Lust, Robert V.

    1989-01-01

    Sensitivity analysis is fundamental to the solution of structural optimization problems. Consequently, much research has focused on the efficient computation of static displacement derivatives. As originally developed, these methods relied on analytical representations of the derivatives of the structural stiffness matrix K with respect to the design variables b_i. To extend these methods to complex finite element formulations and to facilitate their implementation in structural optimization programs built on general finite element analysis codes, the semi-analytic method was developed. In this method the matrix derivative ∂K/∂b_i is approximated by finite differences. Although it is well known that the accuracy of the semi-analytic method depends on the finite difference parameter, recent work has suggested that more fundamental inaccuracies exist in the method when it is used for shape optimization. Another study has argued qualitatively that these errors are related to nonuniform errors in the stiffness matrix derivatives. The accuracy of the semi-analytic method is investigated here. A general framework is developed for the error analysis, and it is shown analytically that the errors in the method are entirely accounted for by errors in ∂K/∂b_i. Furthermore, it is demonstrated that acceptable accuracy in the derivatives can be obtained through careful selection of the finite difference parameter.
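
    A minimal sketch of the semi-analytic method on a toy two-spring system, assuming a simple series-spring model: the stiffness derivative ∂K/∂b is approximated by a forward finite difference and inserted into the analytic sensitivity equation, and the result is checked against the exact derivative. The model, load, and step size are illustrative choices, not taken from the paper.

      # Sketch: semi-analytic displacement derivative for a toy two-spring system.
      # dK/db is approximated by a forward finite difference (model and step assumed).
      import numpy as np

      def stiffness(b):
          """Two springs in series (k1 = b, k2 = 5.0), one end fixed."""
          k1, k2 = b, 5.0
          return np.array([[k1 + k2, -k2],
                           [-k2,      k2]])

      f = np.array([0.0, 1.0])                  # unit load at the free end
      b, db = 2.0, 1e-6                         # design variable and finite-difference step

      K = stiffness(b)
      u = np.linalg.solve(K, f)                 # nominal displacements

      dK_db = (stiffness(b + db) - stiffness(b)) / db      # finite-difference dK/db
      du_db = np.linalg.solve(K, -dK_db @ u)               # semi-analytic derivative (f independent of b)

      du_db_exact = np.array([-1.0 / b**2, -1.0 / b**2])   # analytic check for this toy model
      print("semi-analytic:", du_db, "exact:", du_db_exact)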

  19. A Methodology for Validating Safety Heuristics Using Clinical Simulations: Identifying and Preventing Possible Technology-Induced Errors Related to Using Health Information Systems

    PubMed Central

    Borycki, Elizabeth; Kushniruk, Andre; Carvalho, Christopher

    2013-01-01

    Internationally, health information systems (HIS) safety has emerged as a significant concern for governments. Recent research has documented that HIS can be implicated in patient harm and death. Researchers have therefore attempted to develop methods to prevent or reduce technology-induced errors, including methods that can be employed prior to system release, such as safety heuristics and clinical simulations. In this paper, we outline our methodology for developing safety heuristics specific to identifying the features or functions of an HIS user interface design that may lead to technology-induced errors. We follow this with a description of a methodological approach to validating these heuristics using clinical simulations. PMID:23606902

  20. Error modeling for GPS geodetic applications

    NASA Technical Reports Server (NTRS)

    Rim, Hyung Jin; Schutz, Bob E.; Tapley, Byron D.

    1993-01-01

    An extensive investigation was conducted to provide realistic error models for Global Positioning System (GPS) related numerical simulations. This study considers most of the important error sources in the measurement and dynamic models currently used for GPS geodetic applications. The error models were evaluated by comparison with real GPS data.
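
    As a generic illustration only (not the specific models developed in this study), the sketch below composes a few common measurement error sources into a simulated GPS pseudorange observation; all magnitudes are rough, assumed values.

      # Generic sketch: composing a few assumed error sources into a simulated GPS
      # pseudorange observation (all magnitudes are rough, illustrative values in metres).
      import numpy as np

      rng = np.random.default_rng(4)
      C = 299_792_458.0                                     # speed of light, m/s

      def simulated_pseudorange(true_range_m, elevation_deg):
          clock_bias = 1e-7 * C                             # assumed receiver clock bias (~30 m)
          tropo = 2.4 / np.sin(np.radians(elevation_deg))   # assumed zenith delay with a simple mapping function
          noise = rng.normal(scale=0.5)                     # receiver thermal noise
          return true_range_m + clock_bias + tropo + noise

      rho = simulated_pseudorange(true_range_m=21_500_000.0, elevation_deg=35.0)
      print(f"simulated pseudorange error: {rho - 21_500_000.0:.2f} m")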