Science.gov

Sample records for accuracy relative error

  1. A new accuracy measure based on bounded relative error for time series forecasting

    PubMed Central

    Chen, Chao; Twycross, Jamie; Garibaldi, Jonathan M.

    2017-01-01

    Many accuracy measures have been proposed in the past for time series forecasting comparisons. However, many of these measures suffer from one or more issues such as poor resistance to outliers and scale dependence. In this paper, while summarising commonly used accuracy measures, a special review is made of the symmetric mean absolute percentage error. Moreover, a new accuracy measure called the Unscaled Mean Bounded Relative Absolute Error (UMBRAE), which combines the best features of various alternative measures, is proposed to address the common issues of existing measures. A comparative evaluation of the proposed and related measures has been made with both synthetic and real-world data. The results indicate that the proposed measure, with a user-selectable benchmark, performs as well as or better than other measures on the selected criteria. Though it has been commonly accepted that there is no single best accuracy measure, we suggest that UMBRAE could be a good choice for evaluating forecasting methods, especially for cases where measures based on the geometric mean of relative errors, such as the geometric mean relative absolute error, are preferred. PMID:28339480
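
The abstract names UMBRAE's ingredients without spelling out the formula. The sketch below assumes the definitions from Chen et al.'s paper: each bounded relative absolute error is |e_t| / (|e_t| + |e*_t|), where e* is the error of a user-selected benchmark forecast; MBRAE is their mean; and UMBRAE = MBRAE / (1 - MBRAE), so values below 1 indicate the forecast beats the benchmark. Function and variable names are illustrative, not from the paper.

```python
def umbrae(actual, forecast, benchmark):
    """Unscaled Mean Bounded Relative Absolute Error (sketch, assuming
    the definitions summarized in the lead-in, not code from the paper)."""
    braes = []
    for a, f, b in zip(actual, forecast, benchmark):
        e, e_star = abs(a - f), abs(a - b)
        # Bounded relative absolute error, confined to [0, 1]; a tie where
        # both forecasts are exact is scored here as 0.5 (no winner).
        braes.append(0.5 if e + e_star == 0 else e / (e + e_star))
    mbrae = sum(braes) / len(braes)
    return mbrae / (1.0 - mbrae)  # "unscale": < 1 means better than benchmark
```

Because each per-point term is bounded in [0, 1], a single wild outlier cannot dominate the average, which is the robustness property the abstract emphasizes.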

  2. A new accuracy measure based on bounded relative error for time series forecasting.

    PubMed

    Chen, Chao; Twycross, Jamie; Garibaldi, Jonathan M

    2017-01-01

    Many accuracy measures have been proposed in the past for time series forecasting comparisons. However, many of these measures suffer from one or more issues such as poor resistance to outliers and scale dependence. In this paper, while summarising commonly used accuracy measures, a special review is made of the symmetric mean absolute percentage error. Moreover, a new accuracy measure called the Unscaled Mean Bounded Relative Absolute Error (UMBRAE), which combines the best features of various alternative measures, is proposed to address the common issues of existing measures. A comparative evaluation of the proposed and related measures has been made with both synthetic and real-world data. The results indicate that the proposed measure, with a user-selectable benchmark, performs as well as or better than other measures on the selected criteria. Though it has been commonly accepted that there is no single best accuracy measure, we suggest that UMBRAE could be a good choice for evaluating forecasting methods, especially for cases where measures based on the geometric mean of relative errors, such as the geometric mean relative absolute error, are preferred.

  3. Increase in error threshold for quasispecies by heterogeneous replication accuracy

    NASA Astrophysics Data System (ADS)

    Aoki, Kazuhiro; Furusawa, Mitsuru

    2003-09-01

    In this paper we investigate the error threshold for quasispecies with heterogeneous replication accuracy. We show that the coexistence of error-free and error-prone polymerases can greatly increase the error threshold without a catastrophic loss of genetic information. We also show that the error threshold is influenced by the number of replicores. Our research suggests that quasispecies with heterogeneous replication accuracy can reduce the genetic cost of selective evolution while still producing a variety of mutants.

  4. Improving Localization Accuracy: Successive Measurements Error Modeling

    PubMed Central

    Abu Ali, Najah; Abu-Elkheir, Mervat

    2015-01-01

    Vehicle self-localization is an essential requirement for many of the safety applications envisioned for vehicular networks. The mathematical models used in current vehicular localization schemes focus on modeling the localization error itself, and overlook the potential correlation between successive localization measurement errors. In this paper, we first investigate the existence of correlation between successive positioning measurements, and then incorporate this correlation into the modeling of positioning error. We use the Yule Walker equations to determine the degree of correlation between a vehicle’s future position and its past positions, and then propose a p-order Gauss–Markov model to predict the future position of a vehicle from its past p positions. We investigate the existence of correlation for two datasets representing the mobility traces of two vehicles over a period of time. We prove the existence of correlation between successive measurements in the two datasets, and show that the time correlation between measurements can extend up to four minutes. Through simulations, we validate the robustness of our model and show that it is possible to use the first-order Gauss–Markov model, which has the least complexity, and still maintain an accurate estimation of a vehicle’s future location over time using only its current position. Our model can assist in providing better modeling of positioning errors and can be used as a prediction tool to improve the performance of classical localization algorithms such as the Kalman filter. PMID:26140345
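
A toy illustration of the p = 1 case the authors single out: for a first-order model, the Yule-Walker equations collapse to a single equation, so the AR coefficient is just the lag-1 autocorrelation, and the next (mean-removed) position is predicted from the current one alone. Function names are illustrative, not from the paper.

```python
def ar1_coefficient(series):
    """Yule-Walker estimate for an AR(1) model: phi = c1 / c0, the
    lag-1 autocovariance divided by the variance."""
    n = len(series)
    mean = sum(series) / n
    c0 = sum((v - mean) ** 2 for v in series) / n
    c1 = sum((series[t] - mean) * (series[t + 1] - mean)
             for t in range(n - 1)) / n
    return c1 / c0

def predict_next(series):
    """First-order Gauss-Markov prediction of the next value from the
    current one, relative to the series mean."""
    phi = ar1_coefficient(series)
    mean = sum(series) / len(series)
    return mean + phi * (series[-1] - mean)
```

In practice the coefficient would be estimated per coordinate of the mobility trace; higher-order (p > 1) models solve the full Yule-Walker linear system instead.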

  5. Alternatives to accuracy and bias metrics based on percentage errors for radiation belt modeling applications

    SciTech Connect

    Morley, Steven Karl

    2016-07-01

    This report reviews existing literature describing forecast accuracy metrics, concentrating on those based on relative errors and percentage errors. We then review how the most common of these metrics, the mean absolute percentage error (MAPE), has been applied in recent radiation belt modeling literature. Finally, we describe metrics based on the ratios of predicted to observed values (the accuracy ratio) that address the drawbacks inherent in using MAPE. Specifically, we define and recommend the median log accuracy ratio as a measure of bias and the median symmetric accuracy as a measure of accuracy.
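
Both recommended metrics have one-line definitions. A sketch assuming the standard forms from Morley's report: the log accuracy ratio is Q = ln(prediction/observation), its median measures bias, and the median symmetric accuracy is 100(exp(median|Q|) - 1), a percentage that penalizes over- and under-prediction symmetrically.

```python
import math

def _median(values):
    s = sorted(values)
    mid = len(s) // 2
    return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2.0

def median_log_accuracy_ratio(predicted, observed):
    """Bias measure: median of Q = ln(pred/obs); 0 means no bias."""
    return _median(math.log(p / o) for p, o in zip(predicted, observed))

def median_symmetric_accuracy(predicted, observed):
    """Accuracy measure in percent: 100 * (exp(median |Q|) - 1)."""
    m = _median(abs(math.log(p / o)) for p, o in zip(predicted, observed))
    return 100.0 * (math.exp(m) - 1.0)
```

Because the ratio is logged, a factor-of-two over-prediction and a factor-of-two under-prediction score identically (both 100%), which avoids the asymmetry that plagues MAPE.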

  6. Influence of Ephemeris Error on GPS Single Point Positioning Accuracy

    NASA Astrophysics Data System (ADS)

    Lihua, Ma; Wang, Meng

    2013-09-01

    The Global Positioning System (GPS) user makes use of the navigation message transmitted from GPS satellites to compute its own location. Because the receiver uses the satellite's location in position calculations, an ephemeris error, a difference between the expected and actual orbital position of a GPS satellite, reduces user accuracy. The extent of the influence is determined by the precision of the broadcast ephemeris uploaded from the control station. Simulation analysis with the Yuma almanac shows that the maximum positioning error occurs when the ephemeris error lies along the line-of-sight (LOS) direction. Meanwhile, the error depends on the geometric relationship between the observer and the spatial constellation during the observation period.
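
The LOS result quoted above follows from first-order geometry: the pseudorange error contributed by an ephemeris error vector is its projection onto the receiver-to-satellite line of sight, so it is largest when the error lies along the LOS and vanishes when perpendicular to it. A minimal sketch (Cartesian coordinates assumed; not code from the paper):

```python
import math

def los_range_error(ephemeris_error, sat_pos, user_pos):
    """First-order pseudorange error: the component of the satellite
    position error along the receiver-to-satellite unit vector."""
    los = [s - u for s, u in zip(sat_pos, user_pos)]
    norm = math.sqrt(sum(c * c for c in los))
    return sum(e * c / norm for e, c in zip(ephemeris_error, los))
```

For example, a 5 m error purely along the LOS contributes 5 m of range error, while the same error perpendicular to the LOS contributes essentially nothing.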

  7. Morphological Awareness and Children's Writing: Accuracy, Error, and Invention.

    PubMed

    McCutchen, Deborah; Stull, Sara

    2015-02-01

    This study examined the relationship between children's morphological awareness and their ability to produce accurate morphological derivations in writing. Fifth-grade U.S. students (n = 175) completed two writing tasks that invited or required morphological manipulation of words. We examined both accuracy and error, specifically errors in spelling and errors of the sort we termed morphological inventions, which entailed inappropriate, novel pairings of stems and suffixes. Regressions were used to determine the relationship between morphological awareness, morphological accuracy, and spelling accuracy, as well as between morphological awareness and morphological inventions. Linear regressions revealed that morphological awareness uniquely predicted children's generation of accurate morphological derivations, regardless of whether or not accurate spelling was required. A logistic regression indicated that morphological awareness was also uniquely predictive of morphological invention, with higher morphological awareness increasing the probability of morphological invention. These findings suggest that morphological knowledge may not only assist children with spelling during writing, but may also assist with word production via generative experimentation with morphological rules during sentence generation. Implications are discussed for the development of children's morphological knowledge and relationships with writing.

  8. Errors in spectral fingerprints and their effects on climate fingerprinting accuracy in the solar spectrum

    NASA Astrophysics Data System (ADS)

    Jin, Zhonghai; Sun, Moguo

    2017-02-01

    Using the Earth's reflected solar spectrum for climate change fingerprinting is an emerging research area. The spectral fingerprinting approach directly retrieves the changes in climate variables from the mean spectral data averaged across large space and time scales. To investigate this fingerprinting concept, we use ten years of satellite data to simulate the monthly and annual mean reflected solar spectra and the associated spectral fingerprints for different regions over the ocean. The interannual variations in the spectral data are derived and attributed to the interannual variations in the relevant climate variables. The fingerprinting-retrieved changes in climate variables are then compared with the actual underlying variable changes from the observational data to evaluate the fingerprinting retrieval accuracy. Two important errors related to the fingerprinting approach, the nonlinearity error and the averaging error in the mean fingerprints, and their impact on the retrieval accuracy, are investigated. It is found that as the region size increases, the averaging error increases while the nonlinearity error decreases. Accordingly, the averaging error has minimal effect on the fingerprinting retrieval accuracy in small regions but more of an impact in large regions, whereas the effect of the nonlinearity error on the retrieval accuracy decreases as the region size increases. The fingerprinting retrieval accuracy is also found to be more sensitive to the nonlinearity error than to the averaging error. In addition, we compare the fingerprinting accuracy between using the monthly mean data and the annual mean data. The results show that, on average, higher retrieval accuracy is achieved when the annual mean data are used for the fingerprinting retrieval.

  9. Theoretical Accuracy for ESTL Bit Error Rate Tests

    NASA Technical Reports Server (NTRS)

    Lansdowne, Chatwin

    1998-01-01

    "Bit error rate" [BER] for the purposes of this paper is the fraction of binary bits which are inverted by passage through a communication system. BER can be measured for a block of sample bits by comparing a received block with the transmitted block and counting the erroneous bits. Bit Error Rate [BER] tests are the most common type of test used by the ESTL for evaluating system-level performance. The resolution of the test is obvious: the measurement cannot be resolved more finely than 1/N, the number of bits tested. The tolerance is not. This paper examines the measurement accuracy of the bit error rate test. It is intended that this information will be useful in analyzing data taken in the ESTL. This paper is divided into four sections and follows a logically ordered presentation, with results developed before they are evaluated. However, first-time readers will derive the greatest benefit from this paper by skipping the lengthy section devoted to analysis, and treating it as reference material. The analysis performed in this paper is based on a Probability Density Function [PDF] which is developed with greater detail in a past paper, Theoretical Accuracy for ESTL Probability of Acquisition Tests, EV4-98-609.

  10. Prediction Accuracy of Error Rates for MPTB Space Experiment

    NASA Technical Reports Server (NTRS)

    Buchner, S. P.; Campbell, A. B.; Davis, D.; McMorrow, D.; Petersen, E. L.; Stassinopoulos, E. G.; Ritter, J. C.

    1998-01-01

    This paper addresses the accuracy of radiation-induced upset-rate predictions in space using the results of ground-based measurements together with standard environmental and device models. The study is focused on two part types - 16 Mb NEC DRAM's (UPD4216) and 1 Kb SRAM's (AMD93L422) - both of which are currently in space on board the Microelectronics and Photonics Test Bed (MPTB). To date, ground-based measurements of proton-induced single event upset (SEU) cross sections as a function of energy have been obtained and combined with models of the proton environment to predict proton-induced error rates in space. The role played by uncertainties in the environmental models will be determined by comparing the modeled radiation environment with the actual environment measured aboard MPTB. Heavy-ion induced upsets have also been obtained from MPTB and will be compared with the "predicted" error rate following ground testing that will be done in the near future. These results should help identify sources of uncertainty in predictions of SEU rates in space.

  11. Error-Related Psychophysiology and Negative Affect

    ERIC Educational Resources Information Center

    Hajcak, G.; McDonald, N.; Simons, R.F.

    2004-01-01

    The error-related negativity (ERN/Ne) and error positivity (Pe) have been associated with error detection and response monitoring. More recently, heart rate (HR) and skin conductance (SC) have also been shown to be sensitive to the internal detection of errors. An enhanced ERN has consistently been observed in anxious subjects and there is some…

  12. Accuracy of devices for self-monitoring of blood glucose: A stochastic error model.

    PubMed

    Vettoretti, M; Facchinetti, A; Sparacino, G; Cobelli, C

    2015-01-01

    Self-monitoring of blood glucose (SMBG) devices are portable systems that allow measuring glucose concentration in a small drop of blood obtained via finger-prick. SMBG measurements are key in type 1 diabetes (T1D) management, e.g. for tuning insulin dosing. A reliable model of SMBG accuracy would be important in several applications, e.g. in silico design and optimization of insulin therapy. In the literature, the most used model of SMBG error is the Gaussian distribution, which, however, is too simplistic to properly account for the observed variability. Here, a methodology to derive a stochastic model of SMBG accuracy is presented. The method consists of dividing the glucose range into zones in which the absolute/relative error presents constant standard deviation (SD) and then fitting, by maximum likelihood, a skew-normal distribution to the absolute/relative error distribution in each zone. The method was tested on a database of SMBG measurements collected by the One Touch Ultra 2 (Lifescan Inc., Milpitas, CA). In particular, two zones were identified: zone 1 (BG≤75 mg/dl) with constant-SD absolute error and zone 2 (BG>75 mg/dl) with constant-SD relative error. The mean and SD of the identified skew-normal distributions are, respectively, 2.03 mg/dl and 6.51 mg/dl in zone 1, and 4.78% and 10.09% in zone 2. Visual predictive check validation showed that the derived two-zone model accurately reproduces the SMBG measurement error distribution, performing significantly better than the single-zone Gaussian model used previously in the literature. This stochastic model allows a more realistic SMBG scenario for in silico design and optimization of T1D insulin therapy.
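
A hypothetical sketch of how such a two-zone model could generate simulated SMBG readings. The skew-normal draw uses Azzalini's standard representation; the 75 mg/dl split follows the abstract, but the location/scale/shape numbers below are illustrative placeholders, since the abstract reports only the mean and SD of the fitted distributions.

```python
import math
import random

def skew_normal(location, scale, shape, rng=random):
    """One skew-normal draw via Azzalini's representation: with U0, U1
    standard normal and delta = shape / sqrt(1 + shape^2), the variable
    delta*|U0| + sqrt(1 - delta^2)*U1 is skew-normal with that shape."""
    delta = shape / math.sqrt(1.0 + shape ** 2)
    u0, u1 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
    return location + scale * (delta * abs(u0) + math.sqrt(1.0 - delta ** 2) * u1)

def simulated_smbg(true_bg, rng=random):
    """Corrupt a true glucose value (mg/dl) with a two-zone error:
    additive below 75 mg/dl, multiplicative above. Parameter values
    here are placeholders, not the fitted ones from the paper."""
    if true_bg <= 75.0:
        return true_bg + skew_normal(0.0, 6.5, 2.0, rng)       # zone 1: absolute
    return true_bg * (1.0 + skew_normal(0.0, 0.10, 2.0, rng))  # zone 2: relative
```

Such a generator is the building block of the "in silico" trials the abstract mentions: feed simulated patients' true glucose through it to obtain realistic meter readings.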

  13. On the Orientation Error of IMU: Investigating Static and Dynamic Accuracy Targeting Human Motion

    PubMed Central

    Ricci, Luca; Taffoni, Fabrizio

    2016-01-01

    The accuracy in orientation tracking attainable by using inertial measurement units (IMU) when measuring human motion is still an open issue. This study presents a systematic quantification of the accuracy under static conditions and typical human dynamics, simulated by means of a robotic arm. Two sensor fusion algorithms, selected from the classes of the stochastic and complementary methods, are considered. The proposed protocol implements controlled and repeatable experimental conditions and validates accuracy for an extensive set of dynamic movements that differ in frequency and amplitude. We found that the dynamic performance of the tracking is only slightly dependent on the sensor fusion algorithm; instead, it depends on the amplitude and frequency of the movement, and a major contribution to the error derives from the orientation of the rotation axis w.r.t. the gravity vector. Upper bounds on the absolute and relative errors are found in the ranges [0.7°, 8.2°] and [1.0°, 10.3°], respectively. Alongside dynamic accuracy, static accuracy is thoroughly investigated, with an emphasis on the convergence behavior of the different algorithms. The reported results emphasize critical issues associated with the use of this technology and provide a baseline level of performance for human motion-related applications. PMID:27612100

  14. On the Orientation Error of IMU: Investigating Static and Dynamic Accuracy Targeting Human Motion.

    PubMed

    Ricci, Luca; Taffoni, Fabrizio; Formica, Domenico

    2016-01-01

    The accuracy in orientation tracking attainable by using inertial measurement units (IMU) when measuring human motion is still an open issue. This study presents a systematic quantification of the accuracy under static conditions and typical human dynamics, simulated by means of a robotic arm. Two sensor fusion algorithms, selected from the classes of the stochastic and complementary methods, are considered. The proposed protocol implements controlled and repeatable experimental conditions and validates accuracy for an extensive set of dynamic movements that differ in frequency and amplitude. We found that the dynamic performance of the tracking is only slightly dependent on the sensor fusion algorithm; instead, it depends on the amplitude and frequency of the movement, and a major contribution to the error derives from the orientation of the rotation axis w.r.t. the gravity vector. Upper bounds on the absolute and relative errors are found in the ranges [0.7°, 8.2°] and [1.0°, 10.3°], respectively. Alongside dynamic accuracy, static accuracy is thoroughly investigated, with an emphasis on the convergence behavior of the different algorithms. The reported results emphasize critical issues associated with the use of this technology and provide a baseline level of performance for human motion-related applications.

  15. Eliminating alignment error and analyzing Ritchey angle accuracy in Ritchey-Common test

    NASA Astrophysics Data System (ADS)

    Zhu, Shuo; Zhang, Xiaohui

    2013-01-01

    To improve the accuracy of the Ritchey-Common (R-C) test, this study proposes a method that uses the relation between the system pupil and test flat coordinates to recover the flat surface, combined with a least-squares fit to eliminate the effect of misalignment. A Ritchey angle between 20° and 50° is suitable for the simulation test. Testing accuracy is ensured when the error of the Ritchey angle is controlled within ±1°. To avoid direct measurement error of the Ritchey angle, its value is calculated from the ratio of the image size to the pupil plane; the resulting accuracy can reach 0.2°. The three Ritchey angles chosen for the experiment are separated into two groups. The residual error between the ZYGO result and the group of 24.8° and 40.3° is 0.0013 wavelength (λ=632.8 nm). The experimental results confirm that this R-C method is effective and accurate.

  16. Note: Periodic error measurement in heterodyne interferometers using a subpicometer accuracy Fabry-Perot interferometer.

    PubMed

    Zhu, Minhao; Wei, Haoyun; Wu, Xuejian; Li, Yan

    2014-08-01

    Periodic error is the major problem that limits the accuracy of heterodyne interferometry. A traceable system for periodic error measurement is developed based on a nonlinearity free Fabry-Perot (F-P) interferometer. The displacement accuracy of the F-P interferometer is 0.49 pm at 80 ms averaging time, with the measurement results referenced to an optical frequency comb. Experimental comparison between the F-P interferometer and a commercial heterodyne interferometer is carried out and it shows that the first harmonic periodic error dominates in the commercial heterodyne interferometer with an error amplitude of 4.64 nm.

  17. Error-related electrocorticographic activity in humans during continuous movements

    NASA Astrophysics Data System (ADS)

    Milekovic, Tomislav; Ball, Tonio; Schulze-Bonhage, Andreas; Aertsen, Ad; Mehring, Carsten

    2012-04-01

    Brain-machine interface (BMI) devices make errors in decoding. Detecting these errors online from neuronal activity can improve BMI performance by modifying the decoding algorithm and by correcting the errors made. Here, we study the neuronal correlates of two different types of errors which can both be employed in BMI: (i) the execution error, due to inaccurate decoding of the subjects’ movement intention; (ii) the outcome error, due to not achieving the goal of the movement. We demonstrate that, in electrocorticographic (ECoG) recordings from the surface of the human brain, strong error-related neural responses (ERNRs) for both types of errors can be observed. ERNRs were present in the low and high frequency components of the ECoG signals, with both signal components carrying partially independent information. Moreover, the observed ERNRs can be used to discriminate between error types, with high accuracy (≥83%) obtained already from single electrode signals. We found ERNRs in multiple cortical areas, including motor and somatosensory cortex. As the motor cortex is the primary target area for recording control signals for a BMI, an adaptive motor BMI utilizing these error signals may not require additional electrode implants in other brain areas.

  18. Spacecraft-spacecraft very long baseline interferometry. Part 1: Error modeling and observable accuracy

    NASA Technical Reports Server (NTRS)

    Edwards, C. D., Jr.; Border, J. S.

    1992-01-01

    In Part 1 of this two-part article, an error budget is presented for Earth-based delta differential one-way range (delta DOR) measurements between two spacecraft. Such observations, made between a planetary orbiter (or lander) and another spacecraft approaching that planet, would provide a powerful target-relative angular tracking data type for approach navigation. Accuracies of better than 5 nrad should be possible for a pair of spacecraft with 8.4-GHz downlinks, incorporating 40-MHz DOR tone spacings, while accuracies approaching 1 nrad will be possible if the spacecraft incorporate 32-GHz downlinks with DOR tone spacing on the order of 250 MHz; these accuracies will be available for the last few weeks or months of planetary approach for typical Earth-Mars trajectories. Operational advantages of this data type are discussed, and ground system requirements needed to enable spacecraft-spacecraft delta DOR observations are outlined. This tracking technique could be demonstrated during the final approach phase of the Mars '94 mission, using Mars Observer as the in-orbit reference spacecraft, if the Russian spacecraft includes an 8.4-GHz downlink incorporating DOR tones. Part 2 of this article will present an analysis of predicted targeting accuracy for this scenario.

  19. Measurement accuracy of articulated arm CMMs with circular grating eccentricity errors

    NASA Astrophysics Data System (ADS)

    Zheng, Dateng; Yin, Sanfeng; Luo, Zhiyang; Zhang, Jing; Zhou, Taiping

    2016-11-01

    A model of the six circular grating eccentricity errors is proposed to improve the measurement accuracy of an articulated arm coordinate measuring machine (AACMM) without increasing the corresponding hardware cost. We analyzed the AACMM’s circular grating eccentricity and obtained the six joints’ circular grating eccentricity error model parameters by conducting circular grating eccentricity error experiments. We completed the calibration operations for the measurement models by using homemade standard bar components. Our results show that the measurement errors from the AACMM’s measurement model without and with circular grating eccentricity errors are 0.0834 mm and 0.0462 mm, respectively. Significantly, we determined that measurement accuracy increased by about 44.6% when the circular grating eccentricity errors were corrected. This study is significant because it promotes wider applications of AACMMs both in theory and in practice.

  20. Morphological Awareness and Children's Writing: Accuracy, Error, and Invention

    ERIC Educational Resources Information Center

    McCutchen, Deborah; Stull, Sara

    2015-01-01

    This study examined the relationship between children's morphological awareness and their ability to produce accurate morphological derivations in writing. Fifth-grade US students (n = 175) completed two writing tasks that invited or required morphological manipulation of words. We examined both accuracy and error, specifically errors in…

  1. The Accuracy of Webcams in 2D Motion Analysis: Sources of Error and Their Control

    ERIC Educational Resources Information Center

    Page, A.; Moreno, R.; Candelas, P.; Belmar, F.

    2008-01-01

    In this paper, we show the potential of webcams as precision measuring instruments in a physics laboratory. Various sources of error appearing in 2D coordinate measurements using low-cost commercial webcams are discussed, quantifying their impact on accuracy and precision, and simple procedures to control these sources of error are presented.…

  2. Dynamic Modeling Accuracy Dependence on Errors in Sensor Measurements, Mass Properties, and Aircraft Geometry

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Morelli, Eugene A.

    2013-01-01

    A nonlinear simulation of the NASA Generic Transport Model was used to investigate the effects of errors in sensor measurements, mass properties, and aircraft geometry on the accuracy of dynamic models identified from flight data. Measurements from a typical system identification maneuver were systematically and progressively deteriorated and then used to estimate stability and control derivatives within a Monte Carlo analysis. Based on the results, recommendations were provided for maximum allowable errors in sensor measurements, mass properties, and aircraft geometry to achieve desired levels of dynamic modeling accuracy. Results using other flight conditions, parameter estimation methods, and a full-scale F-16 nonlinear aircraft simulation were compared with these recommendations.

  3. Anxiety and error-related brain activity.

    PubMed

    Hajcak, Greg; McDonald, Nicole; Simons, Robert F

    2003-10-01

    Error-related negativity (ERN/Ne) is a component of the event-related brain potential (ERP) associated with monitoring action and detecting errors. It is a sharp negative deflection that generally occurs from 50 to 150 ms following response execution and has been associated with anterior cingulate cortex (ACC) activity. An enhanced ERN has been observed in patients with obsessive-compulsive disorder (OCD)--reflecting abnormal ACC activity hypothesized as part of the pathophysiology of OCD. We recently reported that the ERN is also enhanced in a group of college students with OC characteristics. The present study extended these findings by measuring the ERN in college undergraduates who scored high on either the Penn State Worry Questionnaire (PSWQ) or a combined version of the Snake (SNAQ) and Spider (SPQ) Questionnaires. Results indicate that, like OC subjects, subjects who score high on a measure of general anxiety and worry have enhanced error-related brain activity relative to both phobic and non-anxious control subjects. The enhanced ERN was found to generalize beyond OCD within the anxiety spectrum disorders but also shows some specificity within these disorders.

  4. Capturing L2 Accuracy Developmental Patterns: Insights from an Error-Tagged EFL Learner Corpus

    ERIC Educational Resources Information Center

    Thewissen, Jennifer

    2013-01-01

    The present article addresses the issue of second language accuracy developmental trajectories and shows how they can be captured via an error-tagged version of an English as a Foreign Language (EFL) learner corpus. The data used in this study were extracted from the International Corpus of Learner English (Granger et al., 2009) and consist of a…

  5. Results of error correction techniques applied on two high accuracy coordinate measuring machines

    SciTech Connect

    Pace, C.; Doiron, T.; Stieren, D.; Borchardt, B.; Veale, R.; National Inst. of Standards and Technology, Gaithersburg, MD )

    1990-01-01

    The Primary Standards Laboratory at Sandia National Laboratories (SNL) and the Precision Engineering Division at the National Institute of Standards and Technology (NIST) are in the process of implementing software error correction on two nearly identical high-accuracy coordinate measuring machines (CMMs). Both machines are Moore Special Tool Company M-48 CMMs which are fitted with laser positioning transducers. Although both machines were manufactured to high tolerance levels, the overall volumetric accuracy was insufficient for calibrating standards to the levels both laboratories require. The error mapping procedure was developed at NIST in the mid-1970s on an earlier but similar model. The original procedure was very complicated and did not make any assumptions about the rigidity of the machine as it moved; each of the possible error motions was measured at each point of the error map independently. A simpler mapping procedure, developed during the early 1980s, assumed rigid-body motion of the machine. This method has been used to calibrate lower-accuracy machines with a high degree of success, and similar software correction schemes have been implemented by many CMM manufacturers. The rigid-body model has not yet been used on highly repeatable CMMs such as the M48. In this report we present early mapping data for the two M48 CMMs. The SNL CMM was manufactured in 1985 and has been in service for approximately four years, whereas the NIST CMM was delivered in early 1989. 4 refs., 5 figs.

  6. Iterative error correction of long sequencing reads maximizes accuracy and improves contig assembly.

    PubMed

    Sameith, Katrin; Roscito, Juliana G; Hiller, Michael

    2017-01-01

    Next-generation sequencers such as Illumina can now produce reads up to 300 bp with high throughput, which is attractive for genome assembly. A first step in genome assembly is to computationally correct sequencing errors. However, correcting all errors in these longer reads is challenging. Here, we show that reads with remaining errors after correction often overlap repeats, where short erroneous k-mers occur in other copies of the repeat. We developed an iterative error correction pipeline that runs the previously published String Graph Assembler (SGA) in multiple rounds of k-mer-based correction with an increasing k-mer size, followed by a final round of overlap-based correction. By combining the advantages of small and large k-mers, this approach corrects more errors in repeats and minimizes the total amount of erroneous reads. We show that higher read accuracy increases contig lengths two to three times. We provide SGA-Iteratively Correcting Errors (https://github.com/hillerlab/IterativeErrorCorrection/) that implements iterative error correction by using modules from SGA.

  7. Accuracy of image-plane holographic tomography with filtered backprojection: random and systematic errors.

    PubMed

    Belashov, A V; Petrov, N V; Semenova, I V

    2016-01-01

    This paper explores the concept of image-plane holographic tomography applied to the measurements of laser-induced thermal gradients in an aqueous solution of a photosensitizer with respect to the reconstruction accuracy of three-dimensional variations of the refractive index. It uses the least-squares estimation algorithm to reconstruct refractive index variations in each holographic projection. Along with the bitelecentric optical system, transferring focused projection to the sensor plane, it facilitates the elimination of diffraction artifacts and noise suppression. This work estimates the influence of typical random and systematic errors in experiments and concludes that random errors such as accidental measurement errors or noise presence can be significantly suppressed by increasing the number of recorded digital holograms. On the contrary, even comparatively small systematic errors such as a displacement of the rotation axis projection in the course of a reconstruction procedure can significantly distort the results.

  8. Uncertainty relations and approximate quantum error correction

    NASA Astrophysics Data System (ADS)

    Renes, Joseph M.

    2016-09-01

    The uncertainty principle can be understood as constraining the probability of winning a game in which Alice measures one of two conjugate observables, such as position or momentum, on a system provided by Bob, and he is to guess the outcome. Two variants are possible: either Alice tells Bob which observable she measured, or he has to furnish guesses for both cases. Here I derive uncertainty relations for both, formulated directly in terms of Bob's guessing probabilities. For the former these relate to the entanglement that can be recovered by action on Bob's system alone. This gives an explicit quantum circuit for approximate quantum error correction using the guessing measurements for "amplitude" and "phase" information, implicitly used in the recent construction of efficient quantum polar codes. I also find a relation on the guessing probabilities for the latter game, which has application to wave-particle duality relations.

  9. Error-Induced Blindness: Error Detection Leads to Impaired Sensory Processing and Lower Accuracy at Short Response-Stimulus Intervals.

    PubMed

    Buzzell, George A; Beatty, Paul J; Paquette, Natalie A; Roberts, Daniel M; McDonald, Craig G

    2017-03-15

    Empirical evidence indicates that detecting one's own mistakes can serve as a signal to improve task performance. However, little work has focused on how task constraints, such as the response-stimulus interval (RSI), influence post-error adjustments. In the present study, event-related potential (ERP) and behavioral measures were used to investigate the time course of error-related processing while humans performed a difficult visual discrimination task. We found that error commission resulted in a marked reduction in both task performance and sensory processing on the following trial when RSIs were short, but that such impairments were not detectable at longer RSIs. Critically, diminished sensory processing at short RSIs, indexed by the stimulus-evoked P1 component, was predicted by an ERP measure of error processing, the Pe component. A control analysis ruled out a general lapse in attention or mind wandering as being predictive of subsequent reductions in sensory processing; instead, the data suggest that error detection causes an attentional bottleneck, which can diminish sensory processing on subsequent trials that occur in quick succession. The findings demonstrate that the neural system dedicated to monitoring and improving behavior can, paradoxically, at times be the source of performance failures. SIGNIFICANCE STATEMENT The performance-monitoring system is a network of brain regions dedicated to monitoring behavior to adjust task performance when necessary. Previous research has demonstrated that activation of the performance-monitoring system following incorrect decisions serves to improve future task performance. However, the present study provides evidence that, when perceptual decisions must be made rapidly (within approximately half a second of each other), activation of the performance-monitoring system is predictive of impaired task-related attention on the subsequent trial. The data illustrate that the cognitive demands imposed by error processing…

  10. Accuracy of travel time distribution (TTD) models as affected by TTD complexity, observation errors, and model and tracer selection

    USGS Publications Warehouse

    Green, Christopher T.; Zhang, Yong; Jurgens, Bryant C.; Starn, J. Jeffrey; Landon, Matthew K.

    2014-01-01

    Analytical models of the travel time distribution (TTD) from a source area to a sample location are often used to estimate groundwater ages and solute concentration trends. The accuracies of these models are not well known for geologically complex aquifers. In this study, synthetic datasets were used to quantify the accuracy of four analytical TTD models as affected by TTD complexity, observation errors, model selection, and tracer selection. Synthetic TTDs and tracer data were generated from existing numerical models with complex hydrofacies distributions for one public-supply well and 14 monitoring wells in the Central Valley, California. Analytical TTD models were calibrated to synthetic tracer data, and prediction errors were determined for estimates of TTDs and conservative tracer (NO3−) concentrations. Analytical models included a new, scale-dependent dispersivity model (SDM) for two-dimensional transport from the water table to a well, and three other established analytical models. The relative influence of the error sources (TTD complexity, observation error, model selection, and tracer selection) depended on the type of prediction. Geological complexity gave rise to complex TTDs in monitoring wells that strongly affected errors of the estimated TTDs. However, prediction errors for NO3− and median age depended more on tracer concentration errors. The SDM tended to give the most accurate estimates of the vertical velocity and other predictions, although TTD model selection had minor effects overall. Adding tracers improved predictions if the new tracers had different input histories. Studies using TTD models should focus on the factors that most strongly affect the desired predictions.
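    The SDM itself is not given in the abstract, but the general use of an analytical TTD is easy to illustrate: pick a model TTD (here the classical exponential, well-mixed model, one of the simpler established forms) and convolve it with the tracer input history to predict the concentration at the well. All parameter values below are illustrative.

```python
import numpy as np

def exponential_ttd(ages, mean_age):
    """Exponential (well-mixed) travel time distribution g(a)."""
    return np.exp(-ages / mean_age) / mean_age

def predict_concentration(input_history, ttd, dt):
    # c_out(t) = integral of c_in(t - a) * g(a) da, discretised on a uniform
    # age grid; input_history[0] is the oldest input, [-1] the most recent
    return float(np.sum(input_history[::-1] * ttd * dt))
```

As a sanity check, a constant input concentration should pass through any properly normalised TTD unchanged.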

  11. Influence of both angle and position error of pentaprism on accuracy of pentaprism scanning system

    NASA Astrophysics Data System (ADS)

    Xu, Kun; Han, Sen; Zhang, Qiyuan; Wu, Quanying

    2014-11-01

    Pentaprism scanning systems have been widely used in the measurement of large flats and wavefronts, based on the property that the deviated beam does not move in the pitch direction. However, manufacturing and positioning errors of the pentaprism introduce measurement error, so a sound error analysis method is indispensable. In this paper, we propose a new method of building a mathematical model of a pentaprism in which its size and angle errors enter the model as parameters: four size parameters determine the size and 11 angle parameters determine the angles of the pentaprism. Yaw, roll, and pitch are used to describe the positioning errors of the pentaprism and an autocollimator. A pentaprism scanning system for wavefront testing is simulated by ray tracing in MATLAB. We design a method of separating the constant term from the measurement results, which improves the measurement accuracy, and analyze the system error by the Monte Carlo method. This method is simple, fast, accurate, and convenient for computer programming.
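    The 15-parameter model is not reproduced here, but the Monte Carlo approach it feeds can be sketched with the dominant term: for a pentaprism (two reflections), an error δ in the angle between the reflecting surfaces rotates the output beam by 2δ, so sampling δ from an assumed tolerance distribution yields the statistics of the beam pointing error. Purely illustrative, not the paper's full model.

```python
import random
import statistics

def beam_error(mirror_angle_error):
    # two reflections double an error in the inter-mirror angle
    return 2.0 * mirror_angle_error

def monte_carlo_beam_std(sigma, n=100_000, seed=1):
    """Std. dev. of the beam deviation for Gaussian angle errors (radians)."""
    rng = random.Random(seed)
    return statistics.pstdev(beam_error(rng.gauss(0.0, sigma)) for _ in range(n))
```

For a 1e-5 rad (about 2 arcsec) tolerance on the inter-mirror angle, the simulated beam error standard deviation converges to about 2e-5 rad, as expected analytically.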

  12. The effect of clock, media, and station location errors on Doppler measurement accuracy

    NASA Technical Reports Server (NTRS)

    Miller, J. K.

    1993-01-01

    Doppler tracking by the Deep Space Network (DSN) is the primary radio metric data type used by navigation to determine the orbit of a spacecraft. The accuracy normally attributed to orbits determined exclusively with Doppler data is about 0.5 microradians in geocentric angle. Recently, the Doppler measurement system has evolved to a high degree of precision primarily because of tracking at X-band frequencies (7.2 to 8.5 GHz). However, the orbit determination system has not been able to fully utilize this improved measurement accuracy because of calibration errors associated with transmission media, the location of tracking stations on the Earth's surface, the orientation of the Earth as an observing platform, and timekeeping. With the introduction of Global Positioning System (GPS) data, it may be possible to remove a significant error associated with the troposphere. In this article, the effect of various calibration errors associated with transmission media, Earth platform parameters, and clocks are examined. With the introduction of GPS calibrations, it is predicted that a Doppler tracking accuracy of 0.05 microradians is achievable.

  13. Objective Error Criterion for Evaluation of Mapping Accuracy Based on Sensor Time-of-Flight Measurements.

    PubMed

    Barshan, Billur

    2008-12-15

    An objective error criterion is proposed for evaluating the accuracy of maps of unknown environments acquired by making range measurements with different sensing modalities and processing them with different techniques. The criterion can also be used for the assessment of goodness of fit of curves or shapes fitted to map points. A demonstrative example from ultrasonic mapping is given based on experimentally acquired time-of-flight measurements and compared with a very accurate laser map, considered as absolute reference. The results of the proposed criterion are compared with the Hausdorff metric and the median error criterion results. The error criterion is sufficiently general and flexible that it can be applied to discrete point maps acquired with other mapping techniques and sensing modalities as well.
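    The proposed criterion itself is not specified in the abstract, but the two baselines it is compared against are standard and easy to state. A minimal sketch for 2-D point maps, using brute-force nearest-neighbour search:

```python
import math

def _d(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def directed_hausdorff(A, B):
    # largest distance from a point of A to its nearest neighbour in B
    return max(min(_d(a, b) for b in B) for a in A)

def hausdorff(A, B):
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

def median_error(A, B):
    # median over A of each point's distance to the nearest reference point in B
    errs = sorted(min(_d(a, b) for b in B) for a in A)
    n = len(errs)
    return errs[n // 2] if n % 2 else 0.5 * (errs[n // 2 - 1] + errs[n // 2])
```

The contrast between the two is visible on a map with a single outlier point: the Hausdorff metric is dominated by it, while the median error ignores it.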

  14. Examining rating quality in writing assessment: rater agreement, error, and accuracy.

    PubMed

    Wind, Stefanie A; Engelhard, George

    2012-01-01

    The use of performance assessments in which human raters evaluate student achievement has become increasingly prevalent in high-stakes assessment systems such as those associated with recent policy initiatives (e.g., Race to the Top). In this study, indices of rating quality are compared between two measurement perspectives. Within the context of a large-scale writing assessment, this study focuses on the alignment between indices of rater agreement, error, and accuracy based on traditional and Rasch measurement theory perspectives. Major empirical findings suggest that Rasch-based indices of model-data fit for ratings provide information about raters that is comparable to direct measures of accuracy. The use of easily obtained approximations of direct accuracy measures holds significant implications for monitoring rating quality in large-scale rater-mediated performance assessments.

  15. Factoring Algebraic Error for Relative Pose Estimation

    SciTech Connect

    Lindstrom, P; Duchaineau, M

    2009-03-09

    We address the problem of estimating the relative pose, i.e. translation and rotation, of two calibrated cameras from image point correspondences. Our approach is to factor the nonlinear algebraic pose error functional into translational and rotational components, and to optimize translation and rotation independently. This factorization admits subproblems that can be solved using direct methods with practical guarantees on global optimality. That is, for a given translation, the corresponding optimal rotation can be determined directly, and vice versa. We show that these subproblems are equivalent to computing the least eigenvector of second- and fourth-order symmetric tensors. When neither translation nor rotation is known, alternating translation and rotation optimization leads to a simple, efficient, and robust algorithm for pose estimation that improves on the well-known 5- and 8-point methods.
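    The paper's tensor factorization is not reproduced here, but the flavour of the "least eigenvector" subproblem can be shown for the simplest case: with known (identity) rotation, each ray correspondence (x1, x2) gives the epipolar constraint x2ᵀ[t]×x1 = 0, i.e. (x1 × x2)·t = 0, so the translation direction is the least eigenvector of a 3×3 symmetric matrix built from the constraint rows. A sketch, not the authors' algorithm:

```python
import numpy as np

def estimate_translation(x1, x2):
    """Translation direction (up to sign and scale) from ray correspondences
    under a pure-translation camera motion."""
    C = np.cross(x1, x2)                 # one constraint row per correspondence
    M = C.T @ C                          # second-order symmetric tensor
    eigvals, eigvecs = np.linalg.eigh(M) # eigenvalues in ascending order
    return eigvecs[:, 0]                 # least eigenvector
```

On noise-free synthetic data the recovered direction aligns with the true translation up to sign.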

  16. Assessment of the sources of error affecting the quantitative accuracy of SPECT imaging in small animals

    SciTech Connect

    Hwang, Andrew B.; Franc, Benjamin L.; Gullberg, Grant T.; Hasegawa, Bruce H.

    2008-02-15

    Small animal SPECT imaging systems have multiple potential applications in biomedical research. Whereas SPECT data are commonly interpreted qualitatively in a clinical setting, the ability to accurately quantify measurements will increase the utility of the SPECT data for laboratory measurements involving small animals. In this work, we assess the effect of photon attenuation, scatter and partial volume errors on the quantitative accuracy of small animal SPECT measurements, first with Monte Carlo simulation and then confirmed with experimental measurements. The simulations modeled the imaging geometry of a commercially available small animal SPECT system. We simulated the imaging of a radioactive source within a cylinder of water, and reconstructed the projection data using iterative reconstruction algorithms. The size of the source and the size of the surrounding cylinder were varied to evaluate the effects of photon attenuation and scatter on quantitative accuracy. We found that photon attenuation can reduce the measured concentration of radioactivity in a volume of interest in the center of a rat-sized cylinder of water by up to 50% when imaging with iodine-125, and up to 25% when imaging with technetium-99m. When imaging with iodine-125, the scatter-to-primary ratio can reach up to approximately 30%, and can cause overestimation of the radioactivity concentration when reconstructing data with attenuation correction. We varied the size of the source to evaluate partial volume errors, which we found to be a strong function of the size of the volume of interest and the spatial resolution. These errors can result in large (>50%) changes in the measured amount of radioactivity. The simulation results were compared with and found to agree with experimental measurements. The inclusion of attenuation correction in the reconstruction algorithm improved quantitative accuracy. We also found that an improvement of the spatial resolution through the…
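    The roughly 50% (iodine-125) versus 25% (technetium-99m) attenuation losses quoted above follow directly from narrow-beam exponential attenuation through a few centimetres of water. The coefficients below are illustrative textbook-order values, not taken from the paper:

```python
import math

# illustrative linear attenuation coefficients of water, cm^-1
# (~30 keV photons for I-125, ~140 keV for Tc-99m)
MU_WATER = {"I-125": 0.38, "Tc-99m": 0.15}

def surviving_fraction(isotope, depth_cm):
    """Narrow-beam fraction of photons surviving a water path of depth_cm."""
    return math.exp(-MU_WATER[isotope] * depth_cm)
```

At a rat-sized depth of about 2 cm, I-125 loses roughly half of its photons and Tc-99m roughly a quarter, consistent with the magnitudes reported above.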

  17. Error correction algorithm for high accuracy bio-impedance measurement in wearable healthcare applications.

    PubMed

    Kubendran, Rajkumar; Lee, Seulki; Mitra, Srinjoy; Yazicioglu, Refet Firat

    2014-04-01

    Implantable and ambulatory measurement of physiological signals such as bio-impedance using miniature biomedical devices requires a careful tradeoff between a limited power budget, measurement accuracy and implementation complexity. This paper addresses this tradeoff through an extensive analysis of different stimulation and demodulation techniques for accurate bio-impedance measurement. Three cases are considered for rigorous analysis of a generic impedance model, with multiple poles, which is stimulated using a square/sinusoidal current and demodulated using a square/sinusoidal clock. For each case, the error in determining the pole parameters (resistance and capacitance) is derived and compared. An error correction algorithm is proposed for square-wave demodulation which reduces the peak estimation error from 9.3% to 1.3% for a simple tissue model. Simulation results in MATLAB using ideal RC values show an average accuracy of … for single-pole and … for two-pole RC networks; measurements using ideal components for a single-pole model give an overall …, and readings from a saline phantom solution (primarily resistive) give …. A figure of merit is derived based on the ability to accurately resolve multiple poles in an unknown impedance with minimal measurement points per decade, for a given frequency range and supply-current budget. This analysis is used to arrive at an optimal tradeoff between accuracy and power. The results indicate that the algorithm is generic and can be used for any application that involves resolving the poles of an unknown impedance. It can be implemented as a post-processing technique for error correction or even incorporated into wearable signal-monitoring ICs.
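    The paper's correction algorithm is not reproduced here, but the bias it targets can be sketched analytically: with square-wave excitation and square-wave demodulation, every odd harmonic of the response folds into the DC output, so the reading deviates from the single-frequency (sine-demodulated) value whenever the impedance has a pole in band. Assumed single-pole parallel-RC model, normalised so that a pure resistor reads exactly R:

```python
import math

def re_z(f, R, C):
    """In-phase (real) part of a parallel R-C impedance at frequency f."""
    w = 2 * math.pi * f
    return R / (1 + (w * R * C) ** 2)

def square_square_reading(f0, R, C, n_harm=9999):
    # odd harmonics k of a square excitation, demodulated by a square
    # reference, each contribute Re Z(k*f0) weighted by 1/k^2
    s = sum(re_z(k * f0, R, C) / k ** 2 for k in range(1, n_harm + 1, 2))
    return (8 / math.pi ** 2) * s

def sine_sine_reading(f0, R, C):
    return re_z(f0, R, C)  # sine demodulation isolates the fundamental
```

Because the higher harmonics see a smaller in-phase impedance, the square/square reading underestimates the value at the fundamental near the pole, which is the systematic error a post-processing correction must remove.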

  18. Assessment of the sources of error affecting the quantitative accuracy of SPECT imaging in small animals

    NASA Astrophysics Data System (ADS)

    Hwang, Andrew B.; Franc, Benjamin L.; Gullberg, Grant T.; Hasegawa, Bruce H.

    2008-05-01

    Small animal SPECT imaging systems have multiple potential applications in biomedical research. Whereas SPECT data are commonly interpreted qualitatively in a clinical setting, the ability to accurately quantify measurements will increase the utility of the SPECT data for laboratory measurements involving small animals. In this work, we assess the effect of photon attenuation, scatter and partial volume errors on the quantitative accuracy of small animal SPECT measurements, first with Monte Carlo simulation and then confirmed with experimental measurements. The simulations modeled the imaging geometry of a commercially available small animal SPECT system. We simulated the imaging of a radioactive source within a cylinder of water, and reconstructed the projection data using iterative reconstruction algorithms. The size of the source and the size of the surrounding cylinder were varied to evaluate the effects of photon attenuation and scatter on quantitative accuracy. We found that photon attenuation can reduce the measured concentration of radioactivity in a volume of interest in the center of a rat-sized cylinder of water by up to 50% when imaging with iodine-125, and up to 25% when imaging with technetium-99m. When imaging with iodine-125, the scatter-to-primary ratio can reach up to approximately 30%, and can cause overestimation of the radioactivity concentration when reconstructing data with attenuation correction. We varied the size of the source to evaluate partial volume errors, which we found to be a strong function of the size of the volume of interest and the spatial resolution. These errors can result in large (>50%) changes in the measured amount of radioactivity. The simulation results were compared with and found to agree with experimental measurements. The inclusion of attenuation correction in the reconstruction algorithm improved quantitative accuracy. We also found that an improvement of the spatial resolution through the use of resolution

  19. UMI-tools: modeling sequencing errors in Unique Molecular Identifiers to improve quantification accuracy.

    PubMed

    Smith, Tom; Heger, Andreas; Sudbery, Ian

    2017-03-01

    Unique Molecular Identifiers (UMIs) are random oligonucleotide barcodes that are increasingly used in high-throughput sequencing experiments. Through a UMI, identical copies arising from distinct molecules can be distinguished from those arising through PCR amplification of the same molecule. However, bioinformatic methods to leverage the information from UMIs have yet to be formalized. In particular, sequencing errors in the UMI sequence are often ignored or else resolved in an ad hoc manner. We show that errors in the UMI sequence are common and introduce network-based methods to account for these errors when identifying PCR duplicates. Using these methods, we demonstrate improved quantification accuracy both in simulations and in real iCLIP and single-cell RNA-seq data sets. Reproducibility between iCLIP replicates and single-cell RNA-seq clustering are both improved using our proposed network-based method, demonstrating the value of properly accounting for errors in UMIs. These methods are implemented in the open source UMI-tools software package.
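    UMI-tools' network approach (its "directional" variant) links UMI B into UMI A when the two differ at a single position and count(A) ≥ 2·count(B) − 1, then counts connected groups as molecules. A simplified sketch with a quadratic neighbour search; the real implementation is more careful and far faster:

```python
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def count_molecules(umi_counts):
    """Collapse sequencing-error UMIs with the directional criterion."""
    umis = sorted(umi_counts, key=umi_counts.get, reverse=True)
    cluster = {}
    for u in umis:
        if u in cluster:
            continue
        cluster[u] = u
        stack = [u]
        while stack:                      # grow the network outward from u
            v = stack.pop()
            for w in umis:
                if (w not in cluster and hamming(v, w) == 1
                        and umi_counts[v] >= 2 * umi_counts[w] - 1):
                    cluster[w] = u
                    stack.append(w)
    return len(set(cluster.values()))
```

The count condition is what distinguishes a sequencing error (rare neighbour of an abundant UMI) from two genuinely distinct molecules with similar barcodes.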

  20. UMI-tools: modeling sequencing errors in Unique Molecular Identifiers to improve quantification accuracy

    PubMed Central

    2017-01-01

    Unique Molecular Identifiers (UMIs) are random oligonucleotide barcodes that are increasingly used in high-throughput sequencing experiments. Through a UMI, identical copies arising from distinct molecules can be distinguished from those arising through PCR amplification of the same molecule. However, bioinformatic methods to leverage the information from UMIs have yet to be formalized. In particular, sequencing errors in the UMI sequence are often ignored or else resolved in an ad hoc manner. We show that errors in the UMI sequence are common and introduce network-based methods to account for these errors when identifying PCR duplicates. Using these methods, we demonstrate improved quantification accuracy both in simulations and in real iCLIP and single-cell RNA-seq data sets. Reproducibility between iCLIP replicates and single-cell RNA-seq clustering are both improved using our proposed network-based method, demonstrating the value of properly accounting for errors in UMIs. These methods are implemented in the open source UMI-tools software package. PMID:28100584

  1. Accuracy of the European solar water heater test procedure. Part 1: Measurement errors and parameter estimates

    SciTech Connect

    Rabl, A.; Leide, B.; Carvalho, M.J.; Collares-Pereira, M.; Bourges, B.

    1991-01-01

    The Collector and System Testing Group (CSTG) of the European Community has developed a procedure for testing the performance of solar water heaters. This procedure treats a solar water heater as a black box with input-output parameters that are determined by all-day tests. In the present study the authors carry out a systematic analysis of the accuracy of this procedure, in order to answer the question: what tolerances should one impose for the measurements, and how many days of testing should one demand under what meteorological conditions, in order to guarantee a specified maximum error for the long-term performance? The methodology is applicable to other test procedures as well. The present paper (Part 1) examines the measurement tolerances of the current version of the procedure and derives a priori estimates of the errors of the parameters; these errors are then compared with the regression results of the Round Robin test series. The companion paper (Part 2) evaluates the consequences for the accuracy of the long-term performance prediction. The authors conclude that the CSTG test procedure makes it possible to predict the long-term performance with standard errors around 5% for sunny climates (10% for cloudy climates). The apparent precision of individual test sequences is deceptive because of large systematic discrepancies between different sequences. Better results could be obtained by imposing tighter control on the constancy of the cold water supply temperature and on the environment of the test, the latter by enforcing the recommendation for the ventilation of the collector.
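    The CSTG correlation has a specific form not given above, but the "a priori parameter error" idea can be sketched generically: fit a linear daily input-output model by least squares and read the parameter standard errors off the residual covariance. The model form and data below are illustrative stand-ins, not the CSTG equations.

```python
import numpy as np

def fit_daily_model(H, dT, Q):
    """Fit Q = a*H + b*dT + c and return coefficients with standard errors."""
    X = np.column_stack([H, dT, np.ones_like(H)])
    coef, *_ = np.linalg.lstsq(X, Q, rcond=None)
    n, p = X.shape
    resid = Q - X @ coef
    s2 = resid @ resid / (n - p)           # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)      # parameter covariance matrix
    return coef, np.sqrt(np.diag(cov))
```

On noise-free synthetic test days the fit recovers the underlying parameters exactly, and the standard errors collapse toward zero; with realistic measurement noise, the same standard errors become the a priori parameter error estimates.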

  2. Impacts of motivational valence on the error-related negativity elicited by full and partial errors.

    PubMed

    Maruo, Yuya; Schacht, Annekathrin; Sommer, Werner; Masaki, Hiroaki

    2016-02-01

    Affect and motivation influence the error-related negativity (ERN) elicited by full errors; however, it is unknown whether they also influence ERNs to correct responses accompanied by covert incorrect response activation (partial errors). Here we compared a neutral condition with conditions where correct responses were rewarded with gains, or incorrect responses punished with losses, of small amounts of money. Data analysis distinguished ERNs elicited by full and partial errors. In the reward and punishment conditions, ERN amplitudes to both full and partial errors were larger than in the neutral condition, confirming participants' sensitivity to the significance of errors. We also investigated the relationships between ERN amplitudes and the behavioral inhibition and activation systems (BIS/BAS). Regardless of reward/punishment condition, participants scoring higher on BAS showed smaller ERN amplitudes in full error trials. These findings provide further evidence that the ERN is related to motivational valence and that similar relationships hold for both full and partial errors.

  3. Monitoring memory errors: the influence of the veracity of retrieved information on the accuracy of judgements of learning.

    PubMed

    Rhodes, Matthew G; Tauber, Sarah K

    2011-11-01

    The current study examined the degree to which predictions of memory performance made immediately or at a delay are sensitive to confidently held memory illusions. Participants studied unrelated pairs of words and made judgements of learning (JOLs) for each item, either immediately or after a delay. Half of the unrelated pairs (deceptive items; e.g., nurse-dollar) had a semantically related competitor (e.g., doctor) that was easily accessible when given a test cue (e.g., nurse-do_ _ _r) and half had no semantically related competitor (control items; e.g., subject-dollar). Following the study phase, participants were administered a cued recall test. Results from Experiment 1 showed that memory performance was less accurate for deceptive compared with control items. In addition, delaying judgement improved the relative accuracy of JOLs for control items but not for deceptive items. Subsequent experiments explored the degree to which the relative accuracy of delayed JOLs for deceptive items improved as a result of a warning to ensure that retrieved memories were accurate (Experiment 2) and corrective feedback regarding the veracity of information retrieved prior to making a JOL (Experiment 3). In all, these data suggest that delayed JOLs may be largely insensitive to memory errors unless participants are provided with feedback regarding memory accuracy.

  4. The Relative Frequency of Spanish Pronunciation Errors.

    ERIC Educational Resources Information Center

    Hammerly, Hector

    Types of hierarchies of pronunciation difficulty are discussed, and a hierarchy based on contrastive analysis plus informal observation is proposed. This hierarchy is less one of initial difficulty than of error persistence. One feature of this hierarchy is that, because of lesser learner awareness and very limited functional load, errors…

  5. Accuracy and sampling error of two age estimation techniques using rib histomorphometry on a modern sample.

    PubMed

    García-Donas, Julieta G; Dyke, Jeffrey; Paine, Robert R; Nathena, Despoina; Kranioti, Elena F

    2016-02-01

    Most age estimation methods prove problematic when applied to highly fragmented skeletal remains. Rib histomorphometry is advantageous in such cases; yet it is vital to test and revise existing techniques, particularly when they are used in legal settings (Crowder and Rosella, 2007). This study tested the Stout and Paine (1992) and Stout et al. (1994) histological age estimation methods on a modern Greek sample using different sampling sites. Six left fourth ribs of known age and sex were selected from a modern skeletal collection. Each rib was cut into three equal segments, and two thin sections were acquired from each segment. A total of 36 thin sections were prepared and analysed. Four variables (cortical area, intact osteon density, fragmented osteon density and osteon population density) were calculated for each section, and age was estimated according to Stout and Paine (1992) and Stout et al. (1994). The results showed that both methods produced a systematic underestimation of age (by up to 43 years), although a general improvement in accuracy was observed when applying the Stout et al. (1994) formula. Error rates increase with age, with the oldest individual showing the most extreme difference between real and estimated age. Comparison of the different sampling sites showed small differences between the estimated ages, suggesting that any fragment of the rib could be used without introducing significant error. Yet a larger sample should be used to confirm these results.

  6. Accounting for systematic errors in bioluminescence imaging to improve quantitative accuracy

    NASA Astrophysics Data System (ADS)

    Taylor, Shelley L.; Perry, Tracey A.; Styles, Iain B.; Cobbold, Mark; Dehghani, Hamid

    2015-07-01

    Bioluminescence imaging (BLI) is a widely used pre-clinical imaging technique, but there are a number of limitations to its quantitative accuracy. This work uses an animal model to demonstrate some significant limitations of BLI and presents processing methods and algorithms which overcome these limitations, increasing the quantitative accuracy of the technique. The position of the imaging subject and source depth are both shown to affect the measured luminescence intensity. Free Space Modelling is used to eliminate the systematic error due to the camera/subject geometry, removing the dependence of luminescence intensity on animal position. Bioluminescence tomography (BLT) is then used to provide additional information about the depth and intensity of the source. A substantial limitation in the number of sources identified using BLI is also presented. It is shown that when a given source is at a significant depth, it can appear as multiple sources when imaged using BLI, while the use of BLT recovers the true number of sources present.

  7. Dissociable correlates of response conflict and error awareness in error-related brain activity

    PubMed Central

    Hughes, Gethin; Yeung, Nick

    2010-01-01

    Errors in speeded decision tasks are associated with characteristic patterns of brain activity. In the scalp-recorded EEG, error processing is reflected in two components, the error-related negativity (ERN) and the error positivity (Pe). These components have been widely studied, but debate remains regarding the precise aspects of error processing they reflect. The present study investigated the relation between the ERN and Pe using a novel version of the flanker task to allow a comparison between errors reflecting different causes: response conflict versus stimulus masking. The conflict and mask conditions were matched for overall behavioural performance but differed in underlying response dynamics, as indexed by response time distributions and measures of lateralised motor activity. ERN amplitude varied in relation to these differing response dynamics, being significantly larger in the conflict condition than in the mask condition. Furthermore, differences in response dynamics between participants were predictive of modulations in ERN amplitude. In contrast, Pe activity varied little between conditions, but varied across trials in relation to participants' awareness of their errors. Taken together, these findings suggest a dissociation between the ERN and Pe, with the former reflecting the dynamics of response selection and conflict, and the latter reflecting conscious recognition of an error. PMID:21130788

  8. Scaling Relation for Occulter Manufacturing Errors

    NASA Technical Reports Server (NTRS)

    Sirbu, Dan; Shaklan, Stuart B.; Kasdin, N. Jeremy; Vanderbei, Robert J.

    2015-01-01

    An external occulter is a spacecraft flown along the line-of-sight of a space telescope to suppress starlight and enable high-contrast direct imaging of exoplanets. The shape of an external occulter must be specially designed to optimally suppress starlight, and deviations from the ideal shape due to manufacturing errors can result in loss of suppression in the shadow. Due to the long separation distances and large dimensions involved for a space occulter, laboratory testing is conducted with scaled versions of occulters etched on silicon wafers. Using numerical simulations for a flight Fresnel occulter design, we show how the suppression performance of an occulter mask scales with the available propagation distance for expected random manufacturing defects along the edge of the occulter petal. We derive an analytical model for predicting performance due to such manufacturing defects across the petal edges of an occulter mask and compare it with the numerical simulations. We also discuss the scaling of an extended occulter test-bed.
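    Scaled occulter testing rests on preserving the Fresnel number F = r²/(λz): a wafer-scale mask at a metres-long propagation distance then sits in the same diffraction regime as the flight article. A sketch with illustrative numbers, not the paper's design values:

```python
def fresnel_number(radius_m, wavelength_m, distance_m):
    return radius_m ** 2 / (wavelength_m * distance_m)

def scaled_distance(r_flight, z_flight, r_lab):
    """Lab propagation distance that preserves the flight Fresnel number
    (wavelength kept the same, so it cancels out)."""
    return z_flight * (r_lab / r_flight) ** 2
```

For example, a 20 m occulter at 40,000 km separation maps to about 10 m of lab propagation for a 1 cm etched mask, which is why centimetre-scale silicon-wafer occulters can be tested at all.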

  9. Perfect error processing: Perfectionism-related variations in action monitoring and error processing mechanisms.

    PubMed

    Stahl, Jutta; Acharki, Manuela; Kresimon, Miriam; Völler, Frederike; Gibbons, Henning

    2015-08-01

    Showing excellent performance and avoiding poor performance are the main characteristics of perfectionists. Perfectionism-related variations (N=94) in neural correlates of performance monitoring were investigated in a flanker task by assessing two perfectionism-related trait dimensions: personal standard perfectionism (PSP), reflecting intrinsic motivation to show error-free performance, and evaluative concern perfectionism (ECP), representing the worry of being evaluated poorly on the basis of bad performance. The moderating effect of ECP and PSP on error processing - an important performance monitoring system - was investigated by examining the error(-related) negativity (Ne/ERN) and the error positivity (Pe). The smallest Ne/ERN difference (error minus correct) was obtained for pure-ECP participants (high ECP, low PSP), whereas the largest difference was shown by those with high ECP and high PSP (i.e., mixed perfectionists). Pe was positively correlated with PSP only. Our results support the cognitive-bias hypothesis, which suggests that pure-ECP participants reduce response-related attention to avoid intense error processing, thereby minimising the subjective threat of negative evaluations. The PSP-related variations in late error processing are consistent with the goal-oriented tendency of participants high in PSP to optimise their behaviour.

  10. Abnormal Error Monitoring in Math-Anxious Individuals: Evidence from Error-Related Brain Potentials

    PubMed Central

    Suárez-Pellicioni, Macarena; Núñez-Peña, María Isabel; Colomé, Àngels

    2013-01-01

    This study used event-related brain potentials to investigate whether math anxiety is related to abnormal error monitoring. Seventeen high math-anxious (HMA) and seventeen low math-anxious (LMA) individuals were presented with a numerical and a classical Stroop task. Groups did not differ in terms of trait or state anxiety. We found enhanced error-related negativity (ERN) in the HMA group when subjects committed an error on the numerical Stroop task, but not on the classical Stroop task. Groups did not differ in terms of the correct-related negativity component (CRN), the error positivity component (Pe), classical behavioral measures or post-error measures. The amplitude of the ERN was negatively related to participants’ math anxiety scores, showing a more negative amplitude as the score increased. Moreover, using standardized low resolution electromagnetic tomography (sLORETA) we found greater activation of the insula in errors on a numerical task as compared to errors in a non-numerical task only for the HMA group. The results were interpreted according to the motivational significance theory of the ERN. PMID:24236212

  11. Abnormal error monitoring in math-anxious individuals: evidence from error-related brain potentials.

    PubMed

    Suárez-Pellicioni, Macarena; Núñez-Peña, María Isabel; Colomé, Angels

    2013-01-01

    This study used event-related brain potentials to investigate whether math anxiety is related to abnormal error monitoring. Seventeen high math-anxious (HMA) and seventeen low math-anxious (LMA) individuals were presented with a numerical and a classical Stroop task. Groups did not differ in terms of trait or state anxiety. We found enhanced error-related negativity (ERN) in the HMA group when subjects committed an error on the numerical Stroop task, but not on the classical Stroop task. Groups did not differ in terms of the correct-related negativity component (CRN), the error positivity component (Pe), classical behavioral measures or post-error measures. The amplitude of the ERN was negatively related to participants' math anxiety scores, showing a more negative amplitude as the score increased. Moreover, using standardized low resolution electromagnetic tomography (sLORETA) we found greater activation of the insula in errors on a numerical task as compared to errors in a non-numerical task only for the HMA group. The results were interpreted according to the motivational significance theory of the ERN.

  12. Error-related negativity reflects detection of negative reward prediction error.

    PubMed

    Yasuda, Asako; Sato, Atsushi; Miyawaki, Kaori; Kumano, Hiroaki; Kuboki, Tomifusa

    2004-11-15

    Error-related negativity (ERN) is a negative deflection in the event-related potential elicited on error trials. To examine the function of the ERN, we performed an experiment in which two within-participants factors were manipulated: outcome uncertainty and content of feedback. The ERN was largest when participants expected correct feedback but received error feedback. There were significant positive correlations between ERN amplitude and the rate of response switching in the subsequent trial, and between ERN amplitude and the trait score on a negative affect scale. These results suggest that the ERN reflects detection of a negative reward prediction error and promotes subsequent response switching, and that individuals with high negative affect are hypersensitive to a negative reward prediction error.

  13. High Accuracy Acoustic Relative Humidity Measurement in Duct Flow with Air

    PubMed Central

    van Schaik, Wilhelm; Grooten, Mart; Wernaart, Twan; van der Geld, Cees

    2010-01-01

    An acoustic relative humidity sensor for air-steam mixtures in duct flow is designed and tested. Theory, construction, calibration, considerations on dynamic response and results are presented. The measurement device is capable of instantaneously measuring line-averaged values of gas velocity, temperature and relative humidity (RH) by applying two ultrasonic transducers and an array of four temperature sensors. Measurement ranges are: gas velocity of 0–12 m/s with an error of ±0.13 m/s, temperature 0–100 °C with an error of ±0.07 °C, and relative humidity 0–100% with accuracy better than 2% RH above 50 °C. The main advantage over conventional humidity sensors is the high sensitivity at high RH at temperatures exceeding 50 °C, with accuracy increasing with increasing temperature. The sensors are non-intrusive and resist highly humid environments. PMID:22163610

  14. High accuracy acoustic relative humidity measurement in duct flow with air.

    PubMed

    van Schaik, Wilhelm; Grooten, Mart; Wernaart, Twan; van der Geld, Cees

    2010-01-01

    An acoustic relative humidity sensor for air-steam mixtures in duct flow is designed and tested. Theory, construction, calibration, considerations on dynamic response and results are presented. The measurement device is capable of instantaneously measuring line-averaged values of gas velocity, temperature and relative humidity (RH) by applying two ultrasonic transducers and an array of four temperature sensors. Measurement ranges are: gas velocity of 0-12 m/s with an error of ±0.13 m/s, temperature 0-100 °C with an error of ±0.07 °C, and relative humidity 0-100% with accuracy better than 2% RH above 50 °C. The main advantage over conventional humidity sensors is the high sensitivity at high RH at temperatures exceeding 50 °C, with accuracy increasing with increasing temperature. The sensors are non-intrusive and resist highly humid environments.

  15. Stronger error disturbance relations for incompatible quantum measurements

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Chiranjib; Shukla, Namrata; Pati, Arun Kumar

    2016-03-01

    We formulate a new error-disturbance relation, which is free from explicit dependence upon variances in observables. This error-disturbance relation shows improvement over the one provided by the Branciard inequality and the Ozawa inequality for some initial states and for a particular class of joint measurements under consideration. We also prove a modified form of Ozawa's error-disturbance relation. The latter relation provides a tighter bound compared to the Ozawa and the Branciard inequalities for a small number of states.

  16. Dysfunctional error-related processing in female psychopathy.

    PubMed

    Maurer, J Michael; Steele, Vaughn R; Edwards, Bethany G; Bernat, Edward M; Calhoun, Vince D; Kiehl, Kent A

    2016-07-01

    Neurocognitive studies of psychopathy have predominantly focused on male samples. Studies have shown that female psychopaths exhibit similar affective deficits as their male counterparts, but results are less consistent across cognitive domains including response modulation. As such, there may be potential gender differences in error-related processing in psychopathic personality. Here we investigate response-locked event-related potential (ERP) components [the error-related negativity (ERN/Ne) related to early error-detection processes and the error-related positivity (Pe) involved in later post-error processing] in a sample of incarcerated adult female offenders (n = 121) who performed a response inhibition Go/NoGo task. Psychopathy was assessed using the Hare Psychopathy Checklist-Revised (PCL-R). The ERN/Ne and Pe were analyzed with classic windowed ERP components and principal component analysis (PCA). Consistent with previous research performed in psychopathic males, female psychopaths exhibited specific deficiencies in the neural correlates of post-error processing (as indexed by reduced Pe amplitude) but not in error monitoring (as indexed by intact ERN/Ne amplitude). Specifically, psychopathic traits reflecting interpersonal and affective dysfunction remained significant predictors of both time-domain and PCA measures reflecting reduced Pe mean amplitude. This is the first evidence to suggest that incarcerated female psychopaths exhibit similar dysfunctional post-error processing as male psychopaths.

  17. Medical error and related factors during internship and residency.

    PubMed

    Ahmadipour, Habibeh; Nahid, Mortazavi

    2015-01-01

    It is difficult to determine the real incidence of medical errors due to the lack of a precise definition of errors, as well as the failure to report them under certain circumstances. We carried out a cross-sectional study at Kerman University of Medical Sciences, Iran, in 2013. The participants were selected through the census method. The data were collected using a self-administered questionnaire, which consisted of questions on the participants' demographic data and on the medical errors committed. The data were analysed with SPSS 19. It was found that 270 participants had committed medical errors. There was no significant difference in the frequency of errors committed by interns and residents. Among residents, the most common error was misdiagnosis; among interns, it was errors related to history-taking and physical examination. Considering that medical errors are common in the clinical setting, the education system should train interns and residents to prevent the occurrence of errors. In addition, the system should develop a positive attitude among them so that they can deal better with medical errors.

  18. The uncertainty of errors: Intolerance of uncertainty is associated with error-related brain activity.

    PubMed

    Jackson, Felicia; Nelson, Brady D; Hajcak, Greg

    2016-01-01

    Errors are unpredictable events that have the potential to cause harm. The error-related negativity (ERN) is the electrophysiological index of errors and has been posited to reflect sensitivity to threat. Intolerance of uncertainty (IU) is the tendency to perceive uncertain events as threatening. In the present study, 61 participants completed a self-report measure of IU and a flanker task designed to elicit the ERN. Results indicated that IU subscales were associated with the ERN in opposite directions. Cognitive distress in the face of uncertainty (Prospective IU) was associated with a larger ERN and slower reaction time. Inhibition in response to uncertainty (Inhibitory IU) was associated with a smaller ERN and faster reaction time. This study suggests that sensitivity to the uncertainty of errors contributes to the magnitude of the ERN. Furthermore, these findings highlight the importance of considering the heterogeneity of anxiety phenotypes in relation to measures of threat sensitivity.

  19. SU-E-T-789: Validation of 3DVH Accuracy On Quantifying Delivery Errors Based On Clinical Relevant DVH Metrics

    SciTech Connect

    Ma, T; Kumaraswamy, L

    2015-06-15

    Purpose: Detection of treatment delivery errors is important in radiation therapy. However, accurate quantification of delivery errors is also of great importance. This study aims to evaluate the 3DVH software’s ability to accurately quantify delivery errors. Methods: Three VMAT plans (prostate, H&N and brain) were randomly chosen for this study. First, we evaluated whether delivery errors could be detected by gamma evaluation. Conventional per-beam IMRT QA was performed with the ArcCHECK diode detector for the original plans and for the following modified plans: (1) induced dose difference errors of up to ±4.0%, (2) control point (CP) deletion (3 to 10 CPs were deleted) and (3) gantry angle shift error (3-degree uniform shift). 2D and 3D gamma evaluation were performed for all plans through SNC Patient and 3DVH, respectively. Subsequently, we investigated the accuracy of 3DVH analysis for all cases. This part evaluated, using the Eclipse TPS plans as the standard, whether 3DVH can accurately model the changes in clinically relevant metrics caused by the delivery errors. Results: 2D evaluation seemed to be more sensitive to delivery errors. The average differences between Eclipse-predicted and 3DVH results for each pair of specific DVH constraints were within 2% for all three types of error-induced treatment plans, illustrating that 3DVH is fairly accurate in quantifying the delivery errors. Another interesting observation was that even though the gamma pass rates for the error plans were high, the DVHs showed significant differences between the original plan and the error-induced plans in both Eclipse and 3DVH analysis. Conclusion: The 3DVH software is shown to accurately quantify the error in delivered dose based on clinically relevant DVH metrics, which conventional gamma-based pre-treatment QA might not necessarily detect.
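
The 3%/3mm gamma criterion used in records like this one can be illustrated with a minimal one-dimensional sketch (clinical tools operate on 2D/3D dose grids and add search-radius optimizations; the function below is an illustrative assumption, not the vendors' implementation):

```python
import numpy as np

def gamma_pass_rate(ref, meas, positions, dose_tol=0.03, dist_tol=3.0):
    """Simplified 1D gamma analysis (3%/3mm by default).

    ref, meas : dose profiles sampled on the same 1D grid `positions` (mm).
    The dose difference is normalized to the global maximum of `ref`.
    Returns the fraction of points with gamma < 1 (the "pass rate").
    """
    ref_max = ref.max()
    gammas = []
    for x_m, d_m in zip(positions, meas):
        # Gamma at a measured point: minimum combined dose-difference /
        # distance-to-agreement metric over all reference points.
        dd = (ref - d_m) / (dose_tol * ref_max)   # normalized dose difference
        dx = (positions - x_m) / dist_tol         # normalized distance
        gammas.append(np.sqrt(dd ** 2 + dx ** 2).min())
    return (np.array(gammas) < 1.0).mean()
```

On a flat profile, a uniform 10% dose error fails every point (pass rate 0), even though a DVH-based comparison, as the record argues, may reveal errors that a high gamma pass rate hides in modulated plans.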

  20. Lexical Errors and Accuracy in Foreign Language Writing. Second Language Acquisition

    ERIC Educational Resources Information Center

    del Pilar Agustin Llach, Maria

    2011-01-01

    Lexical errors are a determinant in gaining insight into vocabulary acquisition, vocabulary use and writing quality assessment. Lexical errors are very frequent in the written production of young EFL learners, but they decrease as learners gain proficiency. Misspellings are the most common category, but formal errors give way to semantic-based…

  1. Individual Differences in Absolute and Relative Metacomprehension Accuracy

    ERIC Educational Resources Information Center

    Maki, Ruth H.; Shields, Micheal; Wheeler, Amanda Easton; Zacchilli, Tammy Lowery

    2005-01-01

    The authors investigated absolute and relative metacomprehension accuracy as a function of verbal ability in college students. Students read hard texts, revised texts, or a mixed set of texts. They then predicted their performance, took a multiple-choice test on the texts, and made posttest judgments about their performance. With hard texts,…

  2. Error-disturbance uncertainty relations studied in neutron optics

    NASA Astrophysics Data System (ADS)

    Sponar, Stephan; Sulyok, Georg; Demirel, Bulent; Hasegawa, Yuji

    2016-09-01

    Heisenberg's uncertainty principle is probably the most famous statement of quantum physics, and its essential aspects are well described by formulations in terms of standard deviations. However, a naive Heisenberg-type error-disturbance relation is not valid. An alternative universally valid relation was derived by Ozawa in 2003. Though universally valid, Ozawa's relation is not optimal. Recently, Branciard derived a tight error-disturbance uncertainty relation (EDUR), describing the optimal trade-off between error and disturbance. Here, we report a neutron-optical experiment that records the error of a spin-component measurement, as well as the disturbance caused on another spin-component, to test EDURs. We demonstrate that Heisenberg's original EDUR is violated and that Ozawa's and Branciard's EDURs are valid in a wide range of experimental parameters, applying a new measurement procedure referred to as the two-state method.

  3. Assessing the Accuracy and Feasibility of a Refractive Error Screening Program Conducted by School Teachers in Pre-Primary and Primary Schools in Thailand

    PubMed Central

    Teerawattananon, Kanlaya; Myint, Chaw-Yin; Wongkittirux, Kwanjai; Teerawattananon, Yot; Chinkulkitnivat, Bunyong; Orprayoon, Surapong; Kusakul, Suwat; Tengtrisorn, Supaporn; Jenchitr, Watanee

    2014-01-01

    Introduction As part of the development of a system for the screening of refractive error in Thai children, this study describes the accuracy and feasibility of establishing a program conducted by teachers. Objective To assess the accuracy and feasibility of screening by teachers. Methods A cross-sectional descriptive and analytical study was conducted in 17 schools in four provinces representing four geographic regions in Thailand. A two-staged cluster sampling was employed to compare the detection rate of refractive error among eligible students between trained teachers and health professionals. Serial focus group discussions were held for teachers and parents in order to understand their attitude towards refractive error screening at schools and the potential success factors and barriers. Results The detection rate of refractive error screening by teachers among pre-primary school children is relatively low (21%) for mild visual impairment but higher for moderate visual impairment (44%). The detection rate for primary school children is high for both levels of visual impairment (52% for mild and 74% for moderate). The focus group discussions reveal that both teachers and parents would benefit from further education regarding refractive errors and that the vast majority of teachers are willing to conduct a school-based screening program. Conclusion Refractive error screening by health professionals in pre-primary and primary school children is not currently implemented in Thailand due to resource limitations. However, evidence suggests that a refractive error screening program conducted in schools by teachers in the country is reasonable and feasible because the detection and treatment of refractive error in very young generations is important and the screening program can be implemented and conducted with relatively low costs. PMID:24926993

  4. SU-E-J-235: Varian Portal Dosimetry Accuracy at Detecting Simulated Delivery Errors

    SciTech Connect

    Gordon, J; Bellon, M; Barton, K; Gulam, M; Chetty, I

    2014-06-01

    Purpose: To use receiver operating characteristic (ROC) analysis to quantify the Varian Portal Dosimetry (VPD) application's ability to detect delivery errors in IMRT fields. Methods: EPID and VPD were calibrated/commissioned using vendor-recommended procedures. Five clinical plans comprising 56 modulated fields were analyzed using VPD. Treatment sites were: pelvis, prostate, brain, orbit, and base of tongue. Delivery was on a Varian Trilogy linear accelerator at 6 MV using a Millennium 120 multi-leaf collimator. Image pairs (VPD-predicted and measured) were exported in DICOM format. Each detection test imported an image pair into Matlab, optionally inserted a simulated error (a rectangular region with intensity raised or lowered) into the measured image, performed 3%/3mm gamma analysis, and saved the gamma distribution. For a given error, 56 negative tests (without error) were performed, one per image pair. Also, 560 positive tests (with error) were performed with randomly selected image pairs and randomly selected in-field error locations. Images were classified as errored (or error-free) if the percentage of pixels with γ<κ was < (or ≥) τ. (Conventionally, κ=1 and τ=90%.) A ROC curve was generated from the 616 tests by varying τ. For a range of κ and τ, true/false positive/negative rates were calculated. This procedure was repeated for inserted errors of different sizes. VPD was considered to reliably detect an error if images were correctly classified as errored or error-free at least 95% of the time, for some κ+τ combination. Results: 20 mm² errors with intensity altered by ≥20% could be reliably detected, as could 10 mm² errors with intensity altered by ≥50%. Errors of smaller size or intensity change could not be reliably detected. Conclusion: Varian Portal Dosimetry using 3%/3mm gamma analysis is capable of reliably detecting only those fluence errors that exceed the stated sizes. Images containing smaller errors can pass mathematical analysis, though
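
The threshold-sweeping procedure this record describes (classify an image as errored when its gamma pass rate falls below τ, then vary τ) can be sketched as follows; the pass-rate values here are hypothetical placeholders, not data from the study:

```python
import numpy as np

# Hypothetical gamma pass percentages (% pixels with gamma < 1) for
# error-free and errored test deliveries; the study derived such values
# from 56 negative and 560 positive gamma analyses.
error_free = np.array([99.2, 98.7, 99.5, 97.9, 98.8])   # negatives
errored    = np.array([93.1, 95.4, 90.2, 96.0, 94.8])   # positives

def roc_points(negatives, positives, thresholds):
    """Classify an image as 'errored' when its pass rate is below tau;
    sweeping tau traces out the ROC curve as (FPR, TPR) pairs."""
    points = []
    for tau in thresholds:
        tpr = float((positives < tau).mean())  # errored images flagged
        fpr = float((negatives < tau).mean())  # error-free images flagged
        points.append((fpr, tpr))
    return points
```

With these placeholder values, τ = 97% separates the two groups perfectly (FPR 0, TPR 1); the study's point is that for small simulated errors no such τ exists at the required 95% reliability.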

  5. Continuous theta burst stimulation over the left pre-motor cortex affects sensorimotor timing accuracy and supraliminal error correction.

    PubMed

    Bijsterbosch, Janine D; Lee, Kwang-Hyuk; Dyson-Sutton, William; Barker, Anthony T; Woodruff, Peter W R

    2011-09-02

    Adjustments to movement in response to changes in our surroundings are common in everyday behavior. Previous research has suggested that the left pre-motor cortex (PMC) is specialized for the temporal control of movement and may play a role in temporal error correction. The aim of this study was to determine the role of the left PMC in sensorimotor timing and error correction using theta burst transcranial magnetic stimulation (TBS). In Experiment 1, subjects performed a sensorimotor synchronization task (SMS) with the left and the right hand before and after either continuous or intermittent TBS (cTBS or iTBS). Timing accuracy was assessed during synchronized finger tapping with a regular auditory pacing stimulus. Responses following perceivable local timing shifts in the pacing stimulus (phase shifts) were used to measure error correction. Suppression of the left PMC using cTBS decreased timing accuracy because subjects tapped further away from the pacing tones and tapping variability increased. In addition, error correction responses returned to baseline tap-tone asynchrony levels faster following negative shifts and no overcorrection occurred following positive shifts after cTBS. However, facilitation of the left PMC using iTBS did not affect timing accuracy or error correction performance. Experiment 2 revealed that error correction performance may change with practice, independent of TBS. These findings provide evidence for a role of the left PMC in both sensorimotor timing and error correction in both hands. We propose that the left PMC may be involved in voluntarily controlled phase correction responses to perceivable timing shifts.

  6. Medication Errors in Cardiopulmonary Arrest and Code-Related Situations.

    PubMed

    Flannery, Alexander H; Parli, Sara E

    2016-01-01

    PubMed/MEDLINE (1966-November 2014) was searched to identify relevant published studies on the overall frequency, types, and examples of medication errors during medical emergencies involving cardiopulmonary resuscitation and related situations, and the breakdown by type of error. The overall frequency of medication errors during medical emergencies, specifically situations related to resuscitation, is highly variable. Medication errors during such emergencies, particularly cardiopulmonary resuscitation and surrounding events, are not well characterized in the literature but may be more frequent than previously thought. Depending on whether research methods included database mining, simulation, or prospective observation of clinical practice, reported occurrence of medication errors during cardiopulmonary resuscitation and surrounding events has ranged from less than 1% to 50%. Because of the chaos of the resuscitation environment, errors in prescribing, dosing, preparing, labeling, and administering drugs are prone to occur. System-based strategies, such as infusion pump policies and code cart management, as well as personal strategies exist to minimize medication errors during emergency situations.

  7. The Effect of Random Error on Diagnostic Accuracy Illustrated with the Anthropometric Diagnosis of Malnutrition

    PubMed Central

    2016-01-01

    Background: It is often thought that random measurement error has a minor effect upon the results of an epidemiological survey. Theoretically, errors of measurement should always increase the spread of a distribution. Defining an illness by having a measurement outside an established healthy range will lead to an inflated prevalence of that condition if there are measurement errors. Methods and results: A Monte Carlo simulation was conducted of anthropometric assessment of children with malnutrition. Random errors of increasing magnitude were imposed upon the populations; the standard deviation increased with each of the errors, becoming exponentially greater with the magnitude of the error. The potential magnitude of the resulting error in reported prevalence of malnutrition was compared with published international data and found to be of sufficient magnitude to make a number of surveys, and the numerous reports and analyses that used these data, unreliable. Conclusions: The effect of random error in public health surveys, and in the data upon which diagnostic cut-off points are derived to define “health”, has been underestimated. Even quite modest random errors can more than double the reported prevalence of conditions such as malnutrition. Increasing sample size does not address this problem, and may even result in less accurate estimates. More attention needs to be paid to the selection, calibration and maintenance of instruments; measurer selection, training and supervision; routine estimation of the likely magnitude of errors using standardization tests; use of statistical likelihood of error to exclude data from analysis; and full reporting of these procedures in order to judge the reliability of survey reports. PMID:28030627
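
The mechanism the record describes can be reproduced in a few lines. This sketch assumes standard-normal true z-scores and the conventional z < -2 cut-off (both assumptions for illustration, not the paper's exact simulation):

```python
import numpy as np

def observed_prevalence(sigma_error, n=500_000, seed=0):
    """Prevalence of 'z < -2' after adding zero-mean random measurement
    error with SD sigma_error to true standard-normal z-scores."""
    rng = np.random.default_rng(seed)
    true_z = rng.normal(0.0, 1.0, n)
    observed = true_z + rng.normal(0.0, sigma_error, n)
    # The observed SD is sqrt(1 + sigma_error**2) > 1, so the tail beyond
    # the cut-off, and hence the reported prevalence, is inflated.
    return (observed < -2.0).mean()
```

Under these assumptions the true prevalence is about 2.3%; adding measurement error with SD 0.6 roughly doubles it, even though no child actually became thinner, which echoes the paper's conclusion that modest random errors can more than double reported prevalence.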

  8. Error-disturbance uncertainty relations in neutron spin measurements

    NASA Astrophysics Data System (ADS)

    Sponar, Stephan

    2016-05-01

    Heisenberg’s uncertainty principle, in a formulation in terms of uncertainties intrinsic to any quantum system, has been rigorously proven and demonstrated in various quantum systems. Nevertheless, Heisenberg’s original formulation of the uncertainty principle was given in terms of a reciprocal relation between the error of a position measurement and the disturbance thereby induced on a subsequent momentum measurement. However, a naive generalization of a Heisenberg-type error-disturbance relation for arbitrary observables is not valid. An alternative universally valid relation was derived by Ozawa in 2003. Though universally valid, Ozawa’s relation is not optimal. Recently, Branciard derived a tight error-disturbance uncertainty relation (EDUR), describing the optimal trade-off between error and disturbance under certain conditions. Here, we report a neutron-optical experiment that records the error of a spin-component measurement, as well as the disturbance caused on another spin-component, to test EDURs. We demonstrate that Heisenberg’s original EDUR is violated and that Ozawa’s and Branciard’s EDURs are valid in a wide range of experimental parameters, and we also demonstrate the tightness of Branciard’s relation.

  9. Sensitivity of Magnetospheric Multi-Scale (MMS) Mission Navigation Accuracy to Major Error Sources

    NASA Technical Reports Server (NTRS)

    Olson, Corwin; Long, Anne; Carpenter, Russell

    2011-01-01

    The Magnetospheric Multiscale (MMS) mission consists of four satellites flying in formation in highly elliptical orbits about the Earth, with a primary objective of studying magnetic reconnection. The baseline navigation concept is independent estimation of each spacecraft state using GPS pseudorange measurements referenced to an Ultra Stable Oscillator (USO) with accelerometer measurements included during maneuvers. MMS state estimation is performed onboard each spacecraft using the Goddard Enhanced Onboard Navigation System (GEONS), which is embedded in the Navigator GPS receiver. This paper describes the sensitivity of MMS navigation performance to two major error sources: USO clock errors and thrust acceleration knowledge errors.

  10. Effect of geocoding errors on traffic-related air pollutant exposure and concentration estimates

    PubMed Central

    Ganguly, Rajiv; Batterman, Stuart; Isakov, Vlad; Snyder, Michelle; Breen, Michael; Brakefield-Caldwell, Wilma

    2015-01-01

    Exposure to traffic-related air pollutants is highest very near roads, and thus exposure estimates are sensitive to positional errors. This study evaluates positional and PM2.5 concentration errors that result from the use of automated geocoding methods and from linearized approximations of roads in link-based emission inventories. Two automated geocoders (Bing Map and ArcGIS) along with handheld GPS instruments were used to geocode 160 home locations of children enrolled in an air pollution study investigating effects of traffic-related pollutants in Detroit, Michigan. The average and maximum positional errors using the automated geocoders were 35 and 196 m, respectively. Comparing road edge and road centerline, differences in house-to-highway distances averaged 23 m and reached 82 m. These differences were attributable to road curvature, road width and the presence of ramps, factors that should be considered in proximity measures used either directly as an exposure metric or as inputs to dispersion or other models. Effects of positional errors for the 160 homes on PM2.5 concentrations resulting from traffic-related emissions were predicted using a detailed road network and the RLINE dispersion model. Concentration errors averaged only 9%, but maximum errors reached 54% for annual averages and 87% for maximum 24-h averages. Whereas most geocoding errors appear modest in magnitude, 5% to 20% of residences are expected to have positional errors exceeding 100 m. Such errors can substantially alter exposure estimates near roads because of the dramatic spatial gradients of traffic-related pollutant concentrations. To ensure the accuracy of exposure estimates for traffic-related air pollutants, especially near roads, confirmation of geocoordinates is recommended. PMID:25670023

  11. Effect of geocoding errors on traffic-related air pollutant exposure and concentration estimates.

    PubMed

    Ganguly, Rajiv; Batterman, Stuart; Isakov, Vlad; Snyder, Michelle; Breen, Michael; Brakefield-Caldwell, Wilma

    2015-01-01

    Exposure to traffic-related air pollutants is highest very near roads, and thus exposure estimates are sensitive to positional errors. This study evaluates positional and PM2.5 concentration errors that result from the use of automated geocoding methods and from linearized approximations of roads in link-based emission inventories. Two automated geocoders (Bing Map and ArcGIS) along with handheld GPS instruments were used to geocode 160 home locations of children enrolled in an air pollution study investigating effects of traffic-related pollutants in Detroit, Michigan. The average and maximum positional errors using the automated geocoders were 35 and 196 m, respectively. Comparing road edge and road centerline, differences in house-to-highway distances averaged 23 m and reached 82 m. These differences were attributable to road curvature, road width and the presence of ramps, factors that should be considered in proximity measures used either directly as an exposure metric or as inputs to dispersion or other models. Effects of positional errors for the 160 homes on PM2.5 concentrations resulting from traffic-related emissions were predicted using a detailed road network and the RLINE dispersion model. Concentration errors averaged only 9%, but maximum errors reached 54% for annual averages and 87% for maximum 24-h averages. Whereas most geocoding errors appear modest in magnitude, 5% to 20% of residences are expected to have positional errors exceeding 100 m. Such errors can substantially alter exposure estimates near roads because of the dramatic spatial gradients of traffic-related pollutant concentrations. To ensure the accuracy of exposure estimates for traffic-related air pollutants, especially near roads, confirmation of geocoordinates is recommended.
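
The positional errors reported in these records are distances between coordinate pairs (geocoded vs. GPS reference). At sub-kilometre scales a standard haversine great-circle computation suffices; this is a generic sketch, not the authors' code:

```python
import math

def positional_error_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between a geocoded point and a
    GPS reference point, via the haversine formula."""
    R = 6_371_000.0  # mean Earth radius, metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))
```

As a sanity check, a 0.001° latitude offset corresponds to roughly 111 m, which is the order of the 100 m positional errors the study flags as large enough to distort near-road exposure estimates.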

  12. Error-tradeoff and error-disturbance relations for incompatible quantum measurements.

    PubMed

    Branciard, Cyril

    2013-04-23

    Heisenberg's uncertainty principle is one of the main tenets of quantum theory. Nevertheless, and despite its fundamental importance for our understanding of quantum foundations, there has been some confusion in its interpretation: Although Heisenberg's first argument was that the measurement of one observable on a quantum state necessarily disturbs another incompatible observable, standard uncertainty relations typically bound the indeterminacy of the outcomes when either one or the other observable is measured. In this paper, we quantify precisely Heisenberg's intuition. Even if two incompatible observables cannot be measured together, one can still approximate their joint measurement, at the price of introducing some errors with respect to the ideal measurement of each of them. We present a tight relation characterizing the optimal tradeoff between the error on one observable vs. the error on the other. As a particular case, our approach allows us to characterize the disturbance of an observable induced by the approximate measurement of another one; we also derive a stronger error-disturbance relation for this scenario.
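
For reference, the two relations discussed across these records, as they appear in the literature (here ε_A is the error on observable A, η_B the disturbance on B, σ the standard deviation in the measured state, and C_AB the commutator bound; a summary, not a derivation from this abstract):

```latex
% Ozawa's universally valid error-disturbance relation:
\epsilon_A \eta_B + \epsilon_A \sigma_B + \sigma_A \eta_B \ge C_{AB},
\qquad C_{AB} = \tfrac{1}{2}\bigl|\langle [A,B] \rangle\bigr|

% Branciard's tight relation (optimal trade-off):
\epsilon_A^2 \sigma_B^2 + \sigma_A^2 \eta_B^2
  + 2\,\epsilon_A \eta_B \sqrt{\sigma_A^2 \sigma_B^2 - C_{AB}^2}
  \ge C_{AB}^2
```

Replacing the disturbance η_B with a second error ε_B gives the error-tradeoff form for approximate joint measurements that this record characterizes.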

  13. Small Inertial Measurement Units - Sources of Error and Limitations on Accuracy

    NASA Technical Reports Server (NTRS)

    Hoenk, M. E.

    1994-01-01

    Limits on the precision of small accelerometers for inertial measurement units are enumerated and discussed. Scaling laws and errors which affect the precision are discussed in terms of tradeoffs between size, sensitivity, and cost.

  14. Evaluating Equating Results: Percent Relative Error for Chained Kernel Equating

    ERIC Educational Resources Information Center

    Jiang, Yanlin; von Davier, Alina A.; Chen, Haiwen

    2012-01-01

    This article presents a method for evaluating equating results. Within the kernel equating framework, the percent relative error (PRE) for chained equipercentile equating was computed under the nonequivalent groups with anchor test (NEAT) design. The method was applied to two data sets to obtain the PRE, which can be used to measure equating…

  15. GPS (Global Positioning System) Error Budgets, Accuracy and Applications Considerations for Test and Training Ranges.

    DTIC Science & Technology

    1982-12-01

    alignment, accelerometer and gyro instrument error parameters for the IGS. Estimates of these parameters can be used to isolate hardware and software...the Heavenly Bodies Moving About the Sun in Conic Sections, New York, Dover Publications Inc., 1963 (Reprint). 66. Kalman, R. E., "A New Approach to

  16. The Impact of Measurement Error on the Accuracy of Individual and Aggregate SGP

    ERIC Educational Resources Information Center

    McCaffrey, Daniel F.; Castellano, Katherine E.; Lockwood, J. R.

    2015-01-01

    Student growth percentiles (SGPs) express students' current observed scores as percentile ranks in the distribution of scores among students with the same prior-year scores. A common concern about SGPs at the student level, and mean or median SGPs (MGPs) at the aggregate level, is potential bias due to test measurement error (ME). Shang,…

  17. The Effect of Teacher Error Feedback on the Accuracy of EFL Student Writing

    ERIC Educational Resources Information Center

    Pan, Yi-chun

    2010-01-01

    This study investigated the effect of teacher error feedback on students' ability to write accurately. Three male first-year Physics graduate students at a university in Taiwan participated in this study. They were asked to write a 100-word passage about the greatest invention in human history. Within days of the teacher's grammatical feedback,…

  18. Accuracy of the Generalizability-Model Standard Errors for the Percents of Examinees Reaching Standards.

    ERIC Educational Resources Information Center

    Li, Yuan H.; Schafer, William D.

    An empirical study of the Yen (W. Yen, 1997) analytic formula for the standard error of a percent-above-cut [SE(PAC)] was conducted. This formula was derived from variance component information gathered in the context of generalizability theory. SE(PAC)s were estimated by different methods of estimating variance components (e.g., W. Yens…

  19. System Related Interventions to Reduce Diagnostic Error: A Narrative Review

    PubMed Central

    Singh, Hardeep; Graber, Mark L.; Kissam, Stephanie M.; Sorensen, Asta V.; Lenfestey, Nancy F.; Tant, Elizabeth M.; Henriksen, Kerm; LaBresh, Kenneth A.

    2013-01-01

    Background: Diagnostic errors (missed, delayed, or wrong diagnoses) have gained recent attention and are associated with significant preventable morbidity and mortality. We reviewed the recent literature to identify interventions that have been, or could be, implemented to address systems-related factors that contribute directly to diagnostic error. Methods: We conducted a comprehensive search using multiple search strategies. We first identified candidate articles in English, published between 2000 and 2009, from a PubMed search restricted to diagnostic error or delay. We then sought additional papers from references in the initial dataset, searches of additional databases, and subject matter experts. Articles were included if they formally evaluated an intervention to prevent or reduce diagnostic error; we also included papers in which interventions were suggested but not tested, to inform the state of the science on the topic. We categorized interventions according to the step in the diagnostic process they targeted: patient-provider encounter; performance and interpretation of diagnostic tests; follow-up and tracking of diagnostic information; subspecialty and referral-related; and patient-specific. Results: We identified 43 articles for full review, of which 6 reported tested interventions and 37 contained suggestions for possible interventions. Empirical studies, though somewhat positive, were non-experimental or quasi-experimental and included small numbers of clinicians or health care sites. Outcome measures in general were underdeveloped and varied markedly between studies, depending on the setting or the step in the diagnostic process involved. Conclusions: Despite a number of suggested interventions in the literature, few empirical studies have tested interventions to reduce diagnostic error in the last decade. Advancing the science of diagnostic error prevention will require more robust study designs and rigorous definitions.

  20. On the Effects of Error Correction Strategies on the Grammatical Accuracy of the Iranian English Learners

    ERIC Educational Resources Information Center

    Aliakbari, Mohammad; Toni, Arman

    2009-01-01

    Writing, as a productive skill, requires an accurate in-depth knowledge of the grammar system, language form and sentence structure. The emphasis on accuracy is justified in the sense that it can lead to the production of structurally correct instances of second language, and to prevent inaccuracy that may result in the production of structurally…

  1. Compensation of Environment and Motion Error for Accuracy Improvement of Ultra-Precision Lathe

    NASA Astrophysics Data System (ADS)

    Kwac, Lee-Ku; Kim, Jae-Yeol; Kim, Hong-Gun

    Technological manipulation of a piezo-electric actuator can compensate for errors in machining precision during processing, leading to an overall enhancement in precision. This approach is a convenient way to advance precision for those without solid knowledge of ultra-precision machining technology. Two lines of research were conducted to develop the UPCU for precision enhancement of the current lathe and compensation for environmental errors, as follows. The first measured and corrected, in real time, deviations in a variety of areas, building a compensation system around an optical fiber laser encoder more effective than the encoder resolution currently used in the existing lathe. The deviations corrected in real time comprised the surrounding air temperature, the thermal deviations of the machining materials, the thermal deviations in the spindles, and the overall thermal deviation arising from the machine structure. The second developed the UPCU itself and improved machining precision through ultra-precision positioning and real-time operative error compensation. The ultimate goal was to improve the machining precision of the existing lathe by completing the two research tasks described above.

  2. Learning to optimize speed, accuracy, and energy expenditure: a framework for understanding speed-accuracy relations in goal-directed aiming.

    PubMed

    Elliott, Digby; Hansen, Steven; Mendoza, Jocelyn; Tremblay, Luc

    2004-09-01

    Over the last century, investigators have developed a number of models to explain the relation between speed and accuracy in target-directed manual aiming. The models vary in the extent to which they stress the importance of feedforward processes and the online use of sensory information (see D. Elliott, W. F. Helsen, & R. Chua, 2001, for a recent review). A common feature of those models is that the role of practice in optimizing speed, accuracy, and energy expenditure in goal-directed aiming is either ignored or minimized. The authors present a theoretical framework for understanding speed-accuracy tradeoffs that takes into account the strategic, trial-to-trial behavior of the performer. The strategic behavior enables individuals to maximize movement speed while minimizing error and energy expenditure.
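The speed-accuracy relation discussed above is classically quantified by Fitts' law (the models the paper reviews go beyond it). A minimal sketch with hypothetical regression constants a and b:

```python
import math

def fitts_movement_time(a_ms, b_ms_per_bit, distance, width):
    """Fitts' law, MT = a + b * log2(2D / W): movement time grows with the
    index of difficulty, trading speed against accuracy (target width W)."""
    index_of_difficulty = math.log2(2 * distance / width)
    return a_ms + b_ms_per_bit * index_of_difficulty

# Hypothetical regression constants (ms, ms/bit); distances and widths in mm.
mt_easy = fitts_movement_time(50, 150, distance=100, width=40)  # wide, near target
mt_hard = fitts_movement_time(50, 150, distance=400, width=10)  # narrow, far target
```

As expected, the far, narrow target yields a longer predicted movement time than the near, wide one.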

  3. Assessing Accuracy of Waveform Models against Numerical Relativity Waveforms

    NASA Astrophysics Data System (ADS)

    Pürrer, Michael; LVC Collaboration

    2016-03-01

    We compare currently available phenomenological and effective-one-body inspiral-merger-ringdown models for gravitational waves (GW) emitted from coalescing black hole binaries against a set of numerical relativity waveforms from the SXS collaboration. Simplifications are used in the construction of some waveform models, such as restriction to spins aligned with the orbital angular momentum, no inclusion of higher harmonics in the GW radiation, no modeling of eccentricity and the use of effective parameters to describe spin precession. In contrast, NR waveforms provide us with a high fidelity representation of the "true" waveform modulo small numerical errors. To focus on systematics we inject NR waveforms into zero noise for early advanced LIGO detector sensitivity at a moderately optimistic signal-to-noise ratio. We discuss where in the parameter space the above modeling assumptions lead to noticeable biases in recovered parameters.

  4. Parametric Modulation of Error-Related ERP Components by the Magnitude of Visuo-Motor Mismatch

    ERIC Educational Resources Information Center

    Vocat, Roland; Pourtois, Gilles; Vuilleumier, Patrik

    2011-01-01

    Errors generate typical brain responses, characterized by two successive event-related potentials (ERP) following incorrect action: the error-related negativity (ERN) and the positivity error (Pe). However, it is unclear whether these error-related responses are sensitive to the magnitude of the error, or instead show all-or-none effects. We…

  5. Moving Away From Error-Related Potentials to Achieve Spelling Correction in P300 Spellers

    PubMed Central

    Mainsah, Boyla O.; Morton, Kenneth D.; Collins, Leslie M.; Sellers, Eric W.; Throckmorton, Chandra S.

    2016-01-01

    P300 spellers can provide a means of communication for individuals with severe neuromuscular limitations. However, their use as an effective communication tool relies on high P300 classification accuracies (>70%) to account for error revisions. Error-related potentials (ErrP), which are changes in EEG potentials when a person is aware of or perceives erroneous behavior or feedback, have been proposed as inputs to drive corrective mechanisms that veto erroneous actions by brain-computer interface (BCI) systems. The goal of this study is to demonstrate that training an additional ErrP classifier for a P300 speller is not necessary, as we hypothesize that error information is encoded in the P300 classifier responses used for character selection. We perform offline simulations of P300 spelling to compare ErrP and non-ErrP based corrective algorithms. A simple dictionary correction based on string matching and word frequency significantly improved accuracy (35–185%), in contrast to an ErrP-based method that flagged, deleted and replaced erroneous characters (−47–0%). Providing additional information about the likelihood of characters to a dictionary-based correction further improves accuracy. Our Bayesian dictionary-based correction algorithm that utilizes P300 classifier confidences performed comparably (44–416%) to an oracle ErrP dictionary-based method that assumed perfect ErrP classification (43–433%). PMID:25438320
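The paper's Bayesian algorithm is more involved, but the simple dictionary correction based on string matching and word frequency can be sketched minimally. The dictionary, words, and frequencies below are invented for illustration:

```python
from difflib import SequenceMatcher

# Toy word-frequency dictionary; words and counts are invented.
DICTIONARY = {"hello": 120, "help": 80, "hero": 15, "jello": 5}

def correct_word(typed, dictionary):
    """Return the dictionary word most similar to the speller output,
    breaking similarity ties in favor of the more frequent word."""
    return max(
        dictionary,
        key=lambda w: (SequenceMatcher(None, typed, w).ratio(), dictionary[w]),
    )

corrected = correct_word("hwllo", DICTIONARY)  # one mis-selected character
```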

  6. Technical Errors May Affect Accuracy of Torque Limiter in Locking Plate Osteosynthesis.

    PubMed

    Savin, David D; Lee, Simon; Bohnenkamp, Frank C; Pastor, Andrew; Garapati, Rajeev; Goldberg, Benjamin A

    2016-01-01

    In locking plate osteosynthesis, proper surgical technique is crucial in reducing potential pitfalls, and use of a torque limiter makes it possible to control insertion torque. We conducted a study of the ways in which different techniques can alter the accuracy of torque limiters. We tested 22 torque limiters (1.5 Nm) for accuracy using hand and power tools under different rotational scenarios: hand power at low and high velocity and drill power at low and high velocity. We recorded the maximum torque reached after each torque-limiting event. Use of torque limiters under hand power at low velocity and high velocity resulted in significantly (P < .0001) different mean (SD) measurements: 1.49 (0.15) Nm and 3.73 (0.79) Nm. Use under drill power at controlled low velocity and at high velocity also resulted in significantly (P < .0001) different mean (SD) measurements: 1.47 (0.14) Nm and 5.37 (0.90) Nm. Maximum single measurement obtained was 9.0 Nm using drill power at high velocity. Locking screw insertion with improper technique may result in higher than expected torque and subsequent complications. For torque limiters, the most reliable technique involves hand power at slow velocity or drill power with careful control of insertion speed until 1 torque-limiting event occurs.

  7. An analysis of pilot error-related aircraft accidents

    NASA Technical Reports Server (NTRS)

    Kowalsky, N. B.; Masters, R. L.; Stone, R. B.; Babcock, G. L.; Rypka, E. W.

    1974-01-01

    A multidisciplinary team approach to pilot error-related U.S. air carrier jet aircraft accident investigation records successfully reclaimed hidden human error information not shown in statistical studies. New analytic techniques were developed and applied to the data to discover and identify multiple elements of commonality and shared characteristics within this group of accidents. Three techniques of analysis were used: Critical element analysis, which demonstrated the importance of a subjective qualitative approach to raw accident data and surfaced information heretofore unavailable. Cluster analysis, which was an exploratory research tool that will lead to increased understanding and improved organization of facts, the discovery of new meaning in large data sets, and the generation of explanatory hypotheses. Pattern recognition, by which accidents can be categorized by pattern conformity after critical element identification by cluster analysis.

  8. Capillary glucose meter accuracy and sources of error in the ambulatory setting.

    PubMed

    Lunt, Helen; Florkowski, Christopher; Bignall, Michael; Budgen, Christopher

    2010-03-05

    Hand-held glucose meters are used throughout the health system by both patients with diabetes and health care practitioners. Glucose meter technology is constantly evolving. The current generation of meters and strips are quick to use and require a very small volume of blood. This review aims to describe meters currently available in New Zealand for use in the ambulatory setting. It also aims to discuss the limits of meter performance and provide technical information that is relevant to the clinician, using locally available data. The more common causes and consequences of end-user (patient and health professional) error are illustrated using clinical case examples. No meter offers definite advantages over other meters in all clinical situations; rather, meters should be chosen because they fit the needs of individual patients and because the provider is able to offer appropriate educational and quality assurance backup to the meter user. A broad understanding of the advantages and disadvantages of the subsidised meter systems available in New Zealand will help the health practitioner decide when it is in the best interests of their patients to change or update meter technology.

  9. Are lies more wrong than errors? Accuracy judgments of inaccurate statements.

    PubMed

    Teigen, Karl Halvor; Filkuková, Petra

    2011-02-01

    People are often mistaken when estimating and predicting quantities, and sometimes they report values that they know are false: they lie. There exists, however, little research devoted to how such deviations are being perceived. In four vignette studies, participants were asked to rate the accuracy of inaccurate statements about quantities (prices, numbers and amounts). The results indicate that overstatements are generally judged to be more inaccurate than understatements of the same magnitude; self-favorable (optimistic) statements are considered more inaccurate than unfavorable (pessimistic) statements, and false reports (lies) are perceived to be more inaccurate than equally mistaken estimates. Lies about the future did not differ from lies about the past, but own lies were perceived as larger than the same lies attributed to another person. It is suggested that estimates are judged according to how close they come to the true values (close estimates are more correct than estimates that are less close), whereas lies are judged as deviant from truth, with less importance attached to the magnitude of the deviation.

  10. Computerised physician order entry-related medication errors: analysis of reported errors and vulnerability testing of current systems

    PubMed Central

    Schiff, G D; Amato, M G; Eguale, T; Boehne, J J; Wright, A; Koppel, R; Rashidee, A H; Elson, R B; Whitney, D L; Thach, T-T; Bates, D W; Seger, A C

    2015-01-01

    Importance Medication computerised provider order entry (CPOE) has been shown to decrease errors and is being widely adopted. However, CPOE also has potential for introducing or contributing to errors. Objectives The objectives of this study are to (a) analyse medication error reports where CPOE was reported as a ‘contributing cause’ and (b) develop ‘use cases’ based on these reports to test the vulnerability of current CPOE systems to these errors. Methods Medication errors reported to the United States Pharmacopeia MEDMARX reporting system were reviewed, and a taxonomy was developed for CPOE-related errors. For each error we evaluated what went wrong and why and identified potential prevention strategies and recurring error scenarios. These scenarios were then used to test the vulnerability of leading CPOE systems, asking typical users to enter these erroneous orders to assess the degree to which these problematic orders could be entered. Results Between 2003 and 2010, 1.04 million medication errors were reported to MEDMARX, of which 63 040 were reported as CPOE related. A review of 10 060 CPOE-related cases was used to derive 101 codes describing what went wrong, 67 codes describing reasons why errors occurred, 73 codes describing potential prevention strategies and 21 codes describing recurring error scenarios. The ability to enter these erroneous order scenarios was tested on 13 CPOE systems at 16 sites. Overall, 298 (79.5%) of the erroneous orders could be entered, including 100 (28.0%) placed ‘easily’ and another 101 (28.3%) with only minor workarounds and no warnings. Conclusions and relevance Medication error reports provide valuable information for understanding CPOE-related errors. Reports were useful for developing a taxonomy and identifying recurring errors to which current CPOE systems are vulnerable. Enhanced monitoring, reporting and testing of CPOE systems are important to improve CPOE safety. PMID:25595599

  11. TRAINING ERRORS AND RUNNING RELATED INJURIES: A SYSTEMATIC REVIEW

    PubMed Central

    Buist, Ida; Sørensen, Henrik; Lind, Martin; Rasmussen, Sten

    2012-01-01

    Purpose: The purpose of this systematic review was to examine the link between training characteristics (volume, duration, frequency, and intensity) and running related injuries. Methods: A systematic search was performed in PubMed, Web of Science, Embase, and SportDiscus. Studies were included if they examined novice, recreational, or elite runners between the ages of 18 and 65. Exposure variables were training characteristics defined as volume, distance or mileage, time or duration, frequency, intensity, speed or pace, or similar terms. The outcome of interest was Running Related Injuries (RRI) in general or specific RRI in the lower extremity or lower back. Methodological quality was evaluated using quality assessment tools of 11 to 16 items. Results: After examining 4561 titles and abstracts, 63 articles were identified as potentially relevant. Finally, nine retrospective cohort studies, 13 prospective cohort studies, six case-control studies, and three randomized controlled trials were included. The mean quality score was 44.1%. Conflicting results were reported on the relationships between volume, duration, intensity, and frequency and RRI. Conclusion: It was not possible to identify which training errors were related to running related injuries. Still, well supported data on which training errors relate to or cause running related injuries is highly important for determining proper prevention strategies. If methodological limitations in measuring training variables can be resolved, more work can be conducted to define training and the interactions between different training variables, create several hypotheses, test the hypotheses in a large scale prospective study, and explore cause and effect relationships in randomized controlled trials. Level of evidence: 2a PMID:22389869

  12. Deriving tight error-trade-off relations for approximate joint measurements of incompatible quantum observables

    NASA Astrophysics Data System (ADS)

    Branciard, Cyril

    2014-02-01

    The quantification of the "measurement uncertainty" aspect of Heisenberg's uncertainty principle—that is, the study of trade-offs between accuracy and disturbance, or between accuracies in an approximate joint measurement on two incompatible observables—has regained a lot of interest recently. Several approaches have been proposed and debated. In this paper we consider Ozawa's definitions for inaccuracies (as root-mean-square errors) in approximate joint measurements, and study how these are constrained in different cases, whether one specifies certain properties of the approximations—namely their standard deviations and/or their bias—or not. Extending our previous work [C. Branciard, Proc. Natl. Acad. Sci. USA 110, 6742 (2013), 10.1073/pnas.1219331110], we derive error-trade-off relations, which we prove to be tight for pure states. We show explicitly how all previously known relations for Ozawa's inaccuracies follow from ours. While our relations are in general not tight for mixed states, we show how these can be strengthened and how tight relations can still be obtained in that case.

  13. A virtual reference satellite differential method for relative correction of satellite ephemeris errors

    NASA Astrophysics Data System (ADS)

    Cai, Chenglin; Li, Xiaohui; Wu, Haitao

    2010-12-01

    To address the problems of the novel wide area differential method based on satellite clock and ephemeris relative correction (CERC) in non-geostationary-orbit satellite constellations, a virtual reference satellite (VRS) differential principle using relative correction of satellite ephemeris errors is proposed. The elaboration focuses on the construction of the pseudo-range errors of the VRS. Qualitative analysis shows that the impact of the satellite's clock and ephemeris errors on positioning can basically be removed, so that users' positioning errors approach zero. Simulation analysis of the differential performance verifies that the method is universal across satellite navigation systems with geostationary orbit (GEO), medium Earth orbit (MEO) or hybrid orbit constellations, and that it is insensitive to abnormalities in a satellite's ephemeris and clock. Moreover, the real-time positioning accuracy of differential users can be maintained within several decimeters once the pseudo-range measurement noise is effectively weakened or eliminated.
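The paper's VRS construction is specific to its constellation geometry, but the generic pseudo-range differential step it builds on can be sketched. Satellite IDs and all numbers below are invented; the 12.5 m offset stands in for a shared clock/ephemeris error:

```python
def apply_differential(user_measured, ref_measured, ref_true):
    """Generic pseudo-range differential step: a reference receiver at a
    known position yields a per-satellite range error (clock + ephemeris),
    which the user subtracts from its own measured pseudo-ranges."""
    corrections = {s: ref_measured[s] - ref_true[s] for s in ref_measured}
    return {s: user_measured[s] - corrections[s] for s in user_measured}

# Invented numbers (meters); both receivers see the same 12.5 m common error.
user = {"G01": 20_000_012.5, "G02": 21_500_012.5}
ref_meas = {"G01": 20_300_012.5, "G02": 21_100_012.5}
ref_true = {"G01": 20_300_000.0, "G02": 21_100_000.0}
corrected = apply_differential(user, ref_meas, ref_true)
```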

  14. Error-related negativities during spelling judgments expose orthographic knowledge.

    PubMed

    Harris, Lindsay N; Perfetti, Charles A; Rickles, Benjamin

    2014-02-01

    In two experiments, we demonstrate that error-related negativities (ERNs) recorded during spelling decisions can expose individual differences in lexical knowledge. The first experiment found that the ERN was elicited during spelling decisions and that its magnitude was correlated with independent measures of subjects' spelling knowledge. In the second experiment, we manipulated the phonology of misspelled stimuli and observed that ERN magnitudes were larger when misspelled words altered the phonology of their correctly spelled counterparts than when they preserved it. Thus, when an error is made in a decision about spelling, the brain processes indexed by the ERN reflect both phonological and orthographic input to the decision process. In both experiments, ERN effect sizes were correlated with assessments of lexical knowledge and reading, including offline spelling ability and spelling-mediated vocabulary knowledge. These results affirm the interdependent nature of orthographic, semantic, and phonological knowledge components while showing that spelling knowledge uniquely influences the ERN during spelling decisions. Finally, the study demonstrates the value of ERNs in exposing individual differences in lexical knowledge.

  15. Error-Related Negativities During Spelling Judgments Expose Orthographic Knowledge

    PubMed Central

    Harris, Lindsay N.; Perfetti, Charles A.; Rickles, Benjamin

    2014-01-01

    In two experiments, we demonstrate that error-related negativities (ERNs) recorded during spelling decisions can expose individual differences in lexical knowledge. The first experiment found that the ERN was elicited during spelling decisions and that its magnitude was correlated with independent measures of subjects’ spelling knowledge. In the second experiment, we manipulated the phonology of misspelled stimuli and observed that ERN magnitudes were larger when misspelled words altered the phonology of their correctly spelled counterparts than when they preserved it. Thus, when an error is made in a decision about spelling, the brain processes indexed by the ERN reflect both phonological and orthographic input to the decision process. In both experiments, ERN effect sizes were correlated with assessments of lexical knowledge and reading, including offline spelling ability and spelling-mediated vocabulary knowledge. These results affirm the interdependent nature of orthographic, semantic, and phonological knowledge components while showing that spelling knowledge uniquely influences the ERN during spelling decisions. Finally, the study demonstrates the value of ERNs in exposing individual differences in lexical knowledge. PMID:24389506

  16. Using GPS data to evaluate the accuracy of state-space methods for correction of Argos satellite telemetry error.

    PubMed

    Patterson, Toby A; McConnell, Bernie J; Fedak, Mike A; Bravington, Mark V; Hindell, Mark A

    2010-01-01

    Recent studies have applied state-space models to satellite telemetry data in order to remove noise from raw location estimates and infer the true tracks of animals. However, while the resulting tracks may appear plausible, it is difficult to determine the accuracy of the estimated positions, especially for position estimates interpolated to times between satellite locations. In this study, we use data from two gray seals (Halichoerus grypus) carrying tags that transmitted Fastloc GPS positions via Argos satellites. This combination of Service Argos data and highly accurate GPS data allowed examination of the accuracy of state-space position estimates and their uncertainty derived from satellite telemetry data. After applying a speed filter to remove aberrant satellite telemetry locations, we fit a continuous-time Kalman filter to estimate the parameters of a random walk, used Kalman smoothing to infer positions at the times of the GPS measurements, and then compared the filtered telemetry estimates with the actual GPS measurements. We investigated the effect of varying maximum speed thresholds in the speed-filtering algorithm on the root mean-square error (RMSE) estimates and used minimum RMSE as a criterion to guide the final choice of speed threshold. The optimal speed thresholds differed between the two animals (1.1 m/s and 2.5 m/s) and retained 50% and 65% of the data for each seal. However, using a speed filter of 1.1 m/s resulted in very similar RMSE for both animals. For the two seals, the RMSE of the Kalman-filtered estimates of location were 5.9 and 12.76 km, respectively, and 75% of the modeled positions had errors less than 6.25 km and 11.7 km for each seal. Confidence interval coverage was close to correct at typical levels (80-95%), although it tended to be overly generous at smaller sizes. The reliability of uncertainty estimates was also affected by the chosen speed threshold. The combination of speed and Kalman filtering allows for effective
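The speed-filtering pre-step described above can be sketched minimally. This sketch uses a planar track with invented coordinates; the study's filter operates on latitude/longitude positions instead:

```python
import math

def speed_filter(track, max_speed):
    """Drop fixes whose implied speed from the last retained fix exceeds
    max_speed (m/s). track: list of (time_s, x_m, y_m) in a local planar
    projection."""
    kept = [track[0]]
    for t, x, y in track[1:]:
        t0, x0, y0 = kept[-1]
        if math.hypot(x - x0, y - y0) / (t - t0) <= max_speed:
            kept.append((t, x, y))
    return kept

# The middle fix implies ~33 m/s and is removed at the 1.1 m/s threshold.
track = [(0, 0.0, 0.0), (60, 2000.0, 0.0), (120, 120.0, 0.0)]
filtered = speed_filter(track, max_speed=1.1)
```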

  17. Influence of Head Motion on the Accuracy of 3D Reconstruction with Cone-Beam CT: Landmark Identification Errors in Maxillofacial Surface Model

    PubMed Central

    Song, Jin-Myoung; Cho, Jin-Hyoung

    2016-01-01

    Purpose The purpose of this study was to investigate the influence of head motion on the accuracy of three-dimensional (3D) reconstruction with cone-beam computed tomography (CBCT) scan. Materials and Methods Fifteen dry skulls were incorporated into a motion controller which simulated four types of head motion during CBCT scan: 2 horizontal rotations (to the right/to the left) and 2 vertical rotations (upward/downward). Each movement was triggered to occur at the start of the scan for 1 second by remote control. Four maxillofacial surface models with head motion and one control surface model without motion were obtained for each skull. Nine landmarks were identified on the five maxillofacial surface models for each skull, and landmark identification errors were compared between the control model and each of the models with head motion. Results Rendered surface models with head motion were similar to the control model in appearance; however, the landmark identification errors were larger in models with head motion than in the control. In particular, the Porion in the horizontal rotation models presented statistically significant differences (P < .05). A statistically significant difference in the errors between the right- and left-side landmarks was present in the left-side rotation, which was opposite in direction to the scanner rotation (P < .05). Conclusions Patient movement during CBCT scan might cause landmark identification errors on the 3D surface model in relation to the direction of the scanner rotation. Clinicians should take this into consideration to prevent patient movement during CBCT scan, particularly horizontal movement. PMID:27065238
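The landmark identification error in a comparison like this is the 3D distance between corresponding landmarks on the control and with-motion models; a minimal sketch with invented coordinates:

```python
import math

def landmark_error(control_xyz, motion_xyz):
    """Landmark identification error: Euclidean distance (mm) between the
    same landmark on the control and with-motion surface models."""
    return math.dist(control_xyz, motion_xyz)

# Invented Porion coordinates (mm) on a control and a horizontal-rotation model.
control = (65.2, -10.4, 22.0)
with_motion = (66.0, -9.8, 22.5)
error_mm = landmark_error(control, with_motion)
```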

  18. Monitoring what is real: The effects of modality and action on accuracy and type of reality monitoring error.

    PubMed

    Garrison, Jane R; Bond, Rebecca; Gibbard, Emma; Johnson, Marcia K; Simons, Jon S

    2017-02-01

    Reality monitoring refers to processes involved in distinguishing internally generated information from information presented in the external world, an activity thought to be based, in part, on assessment of activated features such as the amount and type of cognitive operations and perceptual content. Impairment in reality monitoring has been implicated in symptoms of mental illness and associated more widely with the occurrence of anomalous perceptions as well as false memories and beliefs. In the present experiment, the cognitive mechanisms of reality monitoring were probed in healthy individuals using a task that investigated the effects of stimulus modality (auditory vs visual) and the type of action undertaken during encoding (thought vs speech) on subsequent source memory. There was reduced source accuracy for auditory stimuli compared with visual, and when encoding was accompanied by thought as opposed to speech, and a greater rate of externalization than internalization errors that was stable across factors. Interpreted within the source monitoring framework (Johnson, Hashtroudi, & Lindsay, 1993), the results are consistent with the greater prevalence of clinically observed auditory than visual reality discrimination failures. The significance of these findings is discussed in light of theories of hallucinations, delusions and confabulation.

  19. Absolute and relative height-pixel accuracy of SRTM-GL1 over the South American Andean Plateau

    NASA Astrophysics Data System (ADS)

    Satge, Frédéric; Denezine, Matheus; Pillco, Ramiro; Timouk, Franck; Pinel, Sébastien; Molina, Jorge; Garnier, Jérémie; Seyler, Frédérique; Bonnet, Marie-Paule

    2016-11-01

    Previously available only over the Continental United States (CONUS), the 1 arc-second mesh size (spatial resolution) SRTM-GL1 (Shuttle Radar Topographic Mission - Global 1) product has been freely available worldwide since November 2014. With a relatively small mesh size, this digital elevation model (DEM) provides valuable topographic information over remote regions. SRTM-GL1 is assessed for the first time over the South American Andean Plateau in terms of both the absolute and relative vertical point-to-point accuracies at the regional scale and for different slope classes. For comparison, SRTM-v4 and the Global DEM version 2 (GDEM-v2) generated by ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) are also considered. A total of approximately 160,000 ICESat/GLAS (Ice, Cloud and Land Elevation Satellite/Geoscience Laser Altimeter System) measurements are used as ground reference data. Relative error is often neglected in DEM assessments due to the lack of reference data. A new methodology is proposed to assess the relative accuracies of SRTM-GL1, SRTM-v4 and GDEM-v2 based on a comparison with ICESat/GLAS measurements. Slope values derived from the DEMs and from approximately 265,000 ICESat/GLAS point pairs are compared using quantitative and categorical statistical analysis, introducing a new index: the False Slope Ratio (FSR). Additionally, a reference hydrological network is derived from Google Earth and compared with river networks derived from the DEMs to assess each DEM's potential for hydrological applications over the region. In terms of the absolute vertical accuracy on a global scale, GDEM-v2 is the most accurate DEM, while SRTM-GL1 is more accurate than SRTM-v4. However, a simple bias correction makes SRTM-GL1 the most accurate DEM over the region in terms of vertical accuracy. The relative accuracy results generally did not corroborate the absolute vertical accuracy. 
GDEM-v2 presents the lowest statistical
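The "simple bias correction" referred to above amounts to removing the mean DEM-minus-reference offset. A minimal sketch with synthetic elevations (hypothetical values, not the study's data):

```python
import numpy as np

# Hypothetical DEM and reference (ICESat/GLAS-like) elevations, in metres.
dem = np.array([3812.0, 3805.5, 3820.2, 3798.7, 3811.3])
ref = np.array([3810.1, 3803.9, 3818.0, 3796.8, 3809.5])

bias = np.mean(dem - ref)           # systematic vertical offset of the DEM
dem_corrected = dem - bias          # mean-bias-corrected elevations

rmse_before = np.sqrt(np.mean((dem - ref) ** 2))
rmse_after = np.sqrt(np.mean((dem_corrected - ref) ** 2))
```

After the correction the residuals have zero mean, so the RMSE can only stay the same or shrink; only the dispersion (relative error) remains.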

  20. Error-Related Processing in Adult Males with Elevated Psychopathic Traits

    PubMed Central

    Steele, Vaughn R.; Maurer, J. Michael; Bernat, Edward M.; Calhoun, Vince D.; Kiehl, Kent A.

    2015-01-01

    Psychopathy is a serious personality disorder characterized by dysfunctional affective and behavioral symptoms. In incarcerated populations, elevated psychopathic traits have been linked to increased rates of violent recidivism. Cognitive processes related to error processing have been shown to differentiate individuals with high and low psychopathic traits and may contribute to poor decision making that increases the risk of recidivism. Error processing abnormalities related to psychopathy may be due to error-monitoring (error detection) or post-error processing (error evaluation). A recent ‘bottleneck’ theory predicts deficiencies in post-error processing in individuals with high psychopathic traits. In the current study, incarcerated males (n = 93) performed a Go/NoGo response inhibition task while event-related potentials (ERPs) were recorded. Classic time-domain windowed component and principal component analyses were used to measure error-monitoring (as measured with the error-related negativity [ERN/Ne]) and post-error processing (as measured with the error positivity [Pe]). Psychopathic traits were assessed using Hare’s Psychopathy Checklist-Revised (PCL-R). PCL-R Total score, Factor 1 (interpersonal-affective traits), and Facet 3 (lifestyle traits) scores were positively related to post-error processes (i.e., increased Pe amplitude) but unrelated to error-monitoring processes (i.e., ERN/Ne). These results support the attentional bottleneck theory and further describe deficiencies related to elevated psychopathic traits that could be beneficial for new treatment strategies for psychopathy. PMID:26479259

  1. Mean Expected Error in Prediction of Total Body Water: A True Accuracy Comparison between Bioimpedance Spectroscopy and Single Frequency Regression Equations

    PubMed Central

    Abtahi, Shirin; Abtahi, Farhad; Ellegård, Lars; Johannsson, Gudmundur; Bosaeus, Ingvar

    2015-01-01

    For several decades electrical bioimpedance (EBI) has been used to assess body fluid distribution and body composition. Despite the development of several different approaches for assessing total body water (TBW), it remains uncertain whether bioimpedance spectroscopy (BIS) approaches are more accurate than single-frequency regression equations. The main objective of this study was to answer this question by calculating the expected accuracy of a single measurement for different EBI methods. All methods produced similarly high correlation and concordance coefficients, indicating good accuracy at the method level. Even the limits of agreement produced from the Bland-Altman analysis indicated that the performance of the single-frequency Sun prediction equations at the population level was close to that of both BIS methods; however, comparing the Mean Absolute Percentage Error between the single-frequency prediction equations and the BIS methods yielded a significant difference, indicating slightly better accuracy for the BIS methods. Despite the higher accuracy of the BIS methods over 50 kHz prediction equations at both the population and individual level, the magnitude of the improvement was small. Such a slight improvement in accuracy is suggested to be insufficient to warrant clinical use of the BIS methods where the most accurate predictions of TBW are required, for example, when assessing over-fluidic status in dialysis. To reach expected errors below 4-5%, novel and individualized approaches must be developed to improve the accuracy of bioimpedance-based methods for the advent of innovative personalized health monitoring applications. PMID:26137489
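The Mean Absolute Percentage Error comparison used above is straightforward to reproduce; a sketch with hypothetical TBW values in litres (illustrative numbers, not the study's data):

```python
import numpy as np

def mape(predicted, reference):
    """Mean Absolute Percentage Error, in percent."""
    predicted = np.asarray(predicted, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return 100.0 * np.mean(np.abs((predicted - reference) / reference))

# Hypothetical TBW values: dilution reference vs. two EBI-style predictions.
tbw_ref = np.array([38.2, 41.5, 35.0, 44.8, 39.9])
tbw_bis = np.array([37.5, 42.3, 34.2, 45.6, 39.1])  # BIS-style prediction
tbw_sf = np.array([36.4, 43.6, 33.1, 46.9, 38.0])   # single-frequency equation

mape_bis = mape(tbw_bis, tbw_ref)
mape_sf = mape(tbw_sf, tbw_ref)
```

With these illustrative numbers the BIS-style prediction has the lower MAPE, mirroring the direction of the study's finding.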

  2. A high-accuracy roundness measurement for cylindrical components by a morphological filter considering eccentricity, probe offset, tip head radius and tilt error

    NASA Astrophysics Data System (ADS)

    Sun, Chuanzhi; Wang, Lei; Tan, Jiubin; Zhao, Bo; Zhou, Tong; Kuang, Ye

    2016-08-01

    A morphological filter is proposed to obtain a high-accuracy roundness measurement based on the four-parameter roundness measurement model, which takes into account eccentricity, probe offset, probe tip head radius and tilt error. This paper analyses the sample angle deviations caused by the four systematic errors to design a morphological filter based on the distribution of the sample angle. The effectiveness of the proposed method is verified through simulations and experiments performed with a roundness measuring machine. Compared to the morphological filter with the uniform sample angle, the accuracy of the roundness measurement can be increased by approximately 0.09 μm using the morphological filter with a non-uniform sample angle based on the four-parameter roundness measurement model, when eccentricity is above 16 μm, probe offset is approximately 1000 μm, tilt error is approximately 1″, the probe tip head radius is 1 mm and the cylindrical component radius is approximately 37 mm. The accuracy and reliability of roundness measurements are improved by using the proposed method for cylindrical components with a small radius, especially if the eccentricity and probe offset are large, and the tilt error and probe tip head radius are small. The proposed morphological filter method can be used for precision and ultra-precision roundness measurements, especially for functional assessments of roundness profiles.

  3. Specimen geometry effect on the accuracy of constitutive relations in a superplastic 5083 aluminum alloy

    SciTech Connect

    Khaleel, M.A.; Johnson, K.I.; Lavender, C.A.; Smith, M.T.; Hamilton, C.H.

    1996-05-01

    Current experimental methods are influenced by end effects that cause non-uniform strain rates in the gauge section and material flow within the grips. A series of tension tests and finite element models confirm this for an Al-5083 alloy. Both the tests and the finite element simulations predict that the actual strain rate begins at about 60 percent of the desired strain rate and increases gradually with strain. Material flow from the grips into the gauge effectively "slows" the strain rate at the initial stages of the test. As the test proceeds, thinning of the gauge section occurs and most of the strain occurs in the gauge section, due to the relative cross-sectional areas of the grip and gauge sections. Tests and models were also run comparing specimens with and without alignment holes in the grips. It was shown that alignment holes increase flow from the grips and thus introduce additional error into the tests. Further modeling was performed to evaluate the improved accuracy of specimens with increased length-to-width ratios. This work showed that a specimen with a 50% reduction in the standard gauge width and double the standard gauge length (a 4:1 increase in length-to-width ratio) gave strain rates within 10% of the desired value throughout the test.

  4. CREME96 and Related Error Rate Prediction Methods

    NASA Technical Reports Server (NTRS)

    Adams, James H., Jr.

    2012-01-01

    Predicting the rate of occurrence of single event effects (SEEs) in space requires knowledge of the radiation environment and the response of electronic devices to that environment. Several analytical models have been developed over the past 36 years to predict SEE rates. The first error rate calculations were performed by Binder, Smith and Holman. Bradford, and Pickel and Blandford in their CRIER (Cosmic-Ray-Induced-Error-Rate) analysis code, introduced the basic Rectangular ParallelePiped (RPP) method for error rate calculations. For the radiation environment at the part, both made use of the cosmic ray LET (Linear Energy Transfer) spectra calculated by Heinrich for various absorber depths. A more detailed model for the space radiation environment within spacecraft was developed by Adams and co-workers. This model, together with a reformulation of the RPP method published by Pickel and Blandford, was used to create the CREME (Cosmic Ray Effects on Micro-Electronics) code. About the same time, Shapiro wrote the CRUP (Cosmic Ray Upset Program) based on the RPP method published by Bradford. It was the first code to specifically take into account charge collection from outside the depletion region due to deformation of the electric field caused by the incident cosmic ray. Other early rate prediction methods and codes include the Single Event Figure of Merit, NOVICE, the Space Radiation code and the effective flux method of Binder, which is the basis of the SEFA (Scott Effective Flux Approximation) model. By the early 1990s it was becoming clear that CREME and the other early models needed revision. This revision, CREME96, was completed and released as a WWW-based tool, one of the first of its kind. The revisions in CREME96 included improved environmental models and improved models for calculating single event effects. 
The need for a revision of CREME also stimulated the development of the CHIME (CRRES/SPACERAD Heavy Ion Model of the Environment) and MACREE (Modeling and

  5. Effect of radiometric errors on accuracy of temperature-profile measurement by spectral scanning using absorption-emission pyrometry

    NASA Technical Reports Server (NTRS)

    Buchele, D. R.

    1972-01-01

    The spectral-scanning method may be used to determine the temperature profile of a jet- or rocket-engine exhaust stream from measurements of gas radiation and transmittance at two or more wavelengths. A single, fixed line of sight is used, with immobile radiators outside the gas stream, and there is no interference with the flow. At least two sets of measurements are made, each set consisting of the conventional three radiometric measurements of absorption-emission pyrometry, but each set is taken over a different spectral interval that gives different weight to the radiation from a different portion of the optical path. Thereby, discrimination is obtained with respect to location along the path. A given radiometric error causes an error in computed temperatures. The ratio between temperature error and radiometric error depends on the profile shape, path length, temperature level, strength of line absorption, and the absorption coefficient and its temperature dependency. These influence the choice of wavelengths for any given gas. Conditions for minimum temperature error are derived. Numerical results are presented for a two-wavelength measurement on a family of profiles that may be expected in a practical case of hydrogen-oxygen combustion. Under favorable conditions, the fractional error in temperature approximates the fractional error in the radiant-flux measurement.

  6. Research Into the Collimation and Horizontal Axis Errors Influence on the Z+F Laser Scanner Accuracy of Verticality Measurement

    NASA Astrophysics Data System (ADS)

    Sawicki, J.; Kowalczyk, M.

    2016-06-01

    The aim of this study was to determine the values of the collimation and horizontal axis errors of the Z+F 5006h laser scanner owned by the Department of Geodesy and Cartography, Warsaw University of Technology, and then to determine the effect of those errors on the results of measurements. An experiment was performed involving measurement of a test field established in the Main Hall of the Main Building of the Warsaw University of Technology, during which the values of the instrumental errors of interest were determined. A universal computer program was then developed that automates the proposed algorithm and is capable of applying corrections to measured target coordinates, or even to entire point clouds from individual stations.

  7. Relation between minimum-error discrimination and optimum unambiguous discrimination

    SciTech Connect

    Qiu Daowen; Li Lvjun

    2010-09-15

    In this paper, we investigate the relationship between the minimum-error probability Q_E of ambiguous discrimination and the optimal inconclusive probability Q_U of unambiguous discrimination. For discriminating two states, the inequality Q_U ≥ 2Q_E has been proved in the literature. The main technical results are as follows: (1) We show that, for discriminating more than two states, Q_U ≥ 2Q_E may not hold; however, the infimum of Q_U/Q_E is 1 and there is no supremum of Q_U/Q_E, which implies that the failure probabilities of the two schemes for discriminating some states may be narrowly or widely gapped. (2) We derive two concrete formulas for the minimum-error probability Q_E and the optimal inconclusive probability Q_U, respectively, for ambiguous and unambiguous discrimination among arbitrary m simultaneously diagonalizable mixed quantum states with given prior probabilities. In addition, we show that Q_E and Q_U satisfy the relationship Q_U ≥ (m/(m-1))Q_E.
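For the special case of two pure states with equal priors, closed forms for both quantities are standard: the Helstrom bound gives Q_E = (1 - sqrt(1 - s^2))/2 and the Ivanovic-Dieks-Peres limit gives Q_U = s, where s is the modulus of the overlap of the two states. A quick numerical check of the two-state inequality Q_U ≥ 2Q_E under those textbook formulas (not the paper's more general m-state result):

```python
import numpy as np

# s = |<psi1|psi2>| ranges over [0, 1] for two pure states with equal priors.
for s in np.linspace(0.0, 1.0, 101):
    q_e = (1.0 - np.sqrt(1.0 - s * s)) / 2.0   # Helstrom minimum-error probability
    q_u = s                                     # optimal inconclusive probability
    assert q_u >= 2.0 * q_e - 1e-12             # two-state inequality Q_U >= 2 Q_E
```

Algebraically the inequality reduces to sqrt(1 - s^2) ≥ 1 - s, which holds for all s in [0, 1]; the paper's point is that no analogous universal factor survives for more than two states.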

  8. Sudden Flow Changes Not Related to Field Errors

    NASA Astrophysics Data System (ADS)

    Hansen, A. K.; Chapman, J. T.; den Hartog, D. J.; Hegna, C. C.; Prager, S. C.; Sarff, J. S.

    1997-11-01

    It has heretofore been assumed that, in the Madison Symmetric Torus RFP, the slowing down of core-resonant tearing modes during a sawtooth crash is caused by external field errors (Den Hartog et al., Phys. Plasmas 2, 2281, June 1995). New evidence suggests other torques are responsible. In plasmas which have been electrostatically biased to produce reversed toroidal rotation, the rotation speed increases at a crash, i.e. the usual trend is preserved. This is contrary to a torque exerted by a field error, which should always decrease the mode velocities. Examples of torques possibly responsible for the flow changes during the crash are internal electromagnetic torques between the modes and a fluctuation-driven torque acting on the plasma flow. These torques may also provide an explanation for the observed bifurcation^2 between reacceleration and permanent locking of the modes at an individual crash. We have observed that the mode deceleration occurs earlier for sawteeth in which permanent locking occurs than for those where there is reacceleration; also, the core mode amplitudes increase earlier in the sawtooth cycle which immediately precedes locking.

  9. Children's use of decomposition strategies mediates the visuospatial memory and arithmetic accuracy relation.

    PubMed

    Foley, Alana E; Vasilyeva, Marina; Laski, Elida V

    2016-12-14

    This study examined the mediating role of children's use of decomposition strategies in the relation between visuospatial memory (VSM) and arithmetic accuracy. Children (N = 78; Age M = 9.36) completed assessments of VSM, arithmetic strategies, and arithmetic accuracy. Consistent with previous findings, VSM predicted arithmetic accuracy in children. Extending previous findings, the current study showed that the relation between VSM and arithmetic performance was mediated by the frequency of children's use of decomposition strategies. Identifying the role of arithmetic strategies in this relation has implications for increasing the math performance of children with lower VSM. Statement of contribution What is already known on this subject? The link between children's visuospatial working memory and arithmetic accuracy is well documented. Frequency of decomposition strategy use is positively related to children's arithmetic accuracy. Children's spatial skill positively predicts the frequency with which they use decomposition. What does this study add? Short-term visuospatial memory (VSM) positively relates to the frequency of children's decomposition use. Decomposition use mediates the relation between short-term VSM and arithmetic accuracy. Children with limited short-term VSM may struggle to use decomposition, decreasing accuracy.
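The mediation logic described above (an indirect effect computed as the product of the X→M and M→Y regression paths) can be sketched on synthetic data; the variable names, coefficients, and sample below are illustrative, not the study's:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
vsm = rng.normal(size=n)                                   # X: visuospatial memory
decomp = 0.6 * vsm + rng.normal(scale=0.5, size=n)         # M: decomposition use
accuracy = 0.7 * decomp + 0.1 * vsm + rng.normal(scale=0.5, size=n)  # Y: accuracy

def slopes(y, predictors):
    """OLS coefficients of y on the given predictors (intercept dropped)."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

a = slopes(decomp, [vsm])[0]                   # X -> M path
b = slopes(accuracy, [vsm, decomp])[1]         # M -> Y path, controlling for X
c = slopes(accuracy, [vsm])[0]                 # total effect of X on Y
c_prime = slopes(accuracy, [vsm, decomp])[0]   # direct effect of X on Y
indirect = a * b                               # mediated (indirect) effect
```

For OLS with a common sample, the total effect decomposes exactly as c = c' + a*b, so a nonzero indirect effect shows up as shrinkage of the direct path once the mediator is controlled.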

  10. The error-related negativity relates to sadness following mood induction among individuals with high neuroticism

    PubMed Central

    Hajcak, Greg

    2012-01-01

    The error-related negativity (ERN) is an event-related potential (ERP) that indexes error monitoring. Research suggests that the ERN is increased in internalizing disorders, such as depression and anxiety. Although studies indicate that the ERN is insensitive to state-related fluctuations in anxiety, few studies have carefully examined the effect of state-related changes in sadness on the ERN. In the current study, we sought to determine whether the ERN would be altered by a sad mood induction using a between-subjects design. Additionally, we explored if this relationship would be moderated by individual differences in neuroticism—a personality trait related to both anxiety and depression. Forty-seven undergraduate participants were randomly assigned to either a sad or neutral mood induction prior to performing an arrow version of the flanker task. Participants reported greater sadness following the sad than neutral mood induction; there were no significant group differences on behavioral or ERP measures. Across the entire sample, however, participants with a larger increase in sad mood from baseline to post-induction had a larger (i.e. more negative) ERN. Furthermore, this effect was larger among individuals reporting higher neuroticism. These data indicate that neuroticism moderates the relationship between the ERN and changes in sad mood. PMID:21382967

  11. Frequency, types, and direct related costs of medication errors in an academic nephrology ward in Iran.

    PubMed

    Gharekhani, Afshin; Kanani, Negin; Khalili, Hossein; Dashti-Khavidaki, Simin

    2014-09-01

    Medication errors are an ongoing problem among hospitalized patients, especially those with multiple co-morbidities and polypharmacy, such as patients with renal diseases. This study evaluated the frequency, types and directly related costs of medication errors in a nephrology ward and the role played by clinical pharmacists. During this study, clinical pharmacists detected, managed, and recorded the medication errors. Prescribing errors including inappropriate drug, dose, or treatment duration were gathered. To assess transcription errors, the equivalence of nursing charts and physicians' orders was evaluated. Administration errors were assessed by observing drug preparation, storage, and administration by nurses. The changes in medication costs after implementing clinical pharmacists' interventions were compared with the calculated medication costs had the medication errors continued up to patients' discharge. More than 85% of patients experienced a medication error. The rate of medication errors was 3.5 errors per patient and 0.18 errors per ordered medication. More than 95% of medication errors occurred at the prescription node. The most common prescribing errors were omission (26.9%), unauthorized drugs (18.3%), and low drug dosage or frequency (17.3%). Most of the medication errors involved cardiovascular drugs (24%), followed by vitamins and electrolytes (22.1%) and antimicrobials (18.5%). The number of medication errors was correlated with the number of ordered medications and the length of hospital stay. Clinical pharmacists' interventions decreased patients' direct medication costs by 4.3%. About 22% of medication errors led to patient harm. In conclusion, clinical pharmacists' contributions in nephrology wards were of value in preventing medication errors and reducing medication costs.

  12. Non-Destructive Assay (NDA) Uncertainties Impact on Physical Inventory Difference (ID) and Material Balance Determination: Sources of Error, Precision/Accuracy, and ID/Propagation of Error (POV)

    SciTech Connect

    Wendelberger, James G.

    2016-10-31

    These are slides from a presentation made by a researcher from Los Alamos National Laboratory. The following topics are covered: sources of error for NDA gamma measurements, precision and accuracy are two important characteristics of measurements, four items processed in a material balance area during the inventory time period, inventory difference and propagation of variance, sum in quadrature, and overview of the ID/POV process.

  13. Assessment of the accuracy of global geodetic satellite laser ranging observations and estimated impact on ITRF scale: estimation of systematic errors in LAGEOS observations 1993-2014

    NASA Astrophysics Data System (ADS)

    Appleby, Graham; Rodríguez, José; Altamimi, Zuheir

    2016-12-01

    Satellite laser ranging (SLR) to the geodetic satellites LAGEOS and LAGEOS-2 uniquely determines the origin of the terrestrial reference frame and, jointly with very long baseline interferometry, its scale. Given such a fundamental role in satellite geodesy, it is crucial that any systematic errors in either technique are at an absolute minimum as efforts continue to realise the reference frame at millimetre levels of accuracy to meet present and future science requirements. Here, we examine the intrinsic accuracy of SLR measurements made by tracking stations of the International Laser Ranging Service using normal point observations of the two LAGEOS satellites in the period 1993 to 2014. The approach we investigate in this paper is to compute weekly reference frame solutions solving for satellite initial state vectors, station coordinates and daily Earth orientation parameters, estimating along with these weekly average range errors for each and every one of the observing stations. Potential issues in any of the large number of SLR stations assumed to have been free of error in previous realisations of the ITRF may have been absorbed into the reference frame, primarily in station height. Likewise, systematic range errors estimated against a fixed frame that may itself suffer from accuracy issues will absorb network-wide problems into station-specific results. Our results suggest that in the past two decades, the scale of the ITRF derived from the SLR technique has been close to 0.7 ppb too small, due to systematic errors in the range measurements, their treatment, or both. We discuss these results in the context of preparations for ITRF2014 and additionally consider the impact of this work on the currently adopted value of the geocentric gravitational constant, GM.

  14. Using brain potentials to understand prism adaptation: the error-related negativity and the P300

    PubMed Central

    MacLean, Stephane J.; Hassall, Cameron D.; Ishigami, Yoko; Krigolson, Olav E.; Eskes, Gail A.

    2015-01-01

    Prism adaptation (PA) is both a perceptual-motor learning task as well as a promising rehabilitation tool for visuo-spatial neglect (VSN)—a spatial attention disorder often experienced after stroke resulting in slowed and/or inaccurate motor responses to contralesional targets. During PA, individuals are exposed to prism-induced shifts of the visual-field while performing a visuo-guided reaching task. After adaptation, with goggles removed, visuomotor responding is shifted to the opposite direction of that initially induced by the prisms. This visuomotor aftereffect has been used to study visuomotor learning and adaptation and has been applied clinically to reduce VSN severity by improving motor responding to stimuli in contralesional (usually left-sided) space. In order to optimize PA's use for VSN patients, it is important to elucidate the neural and cognitive processes that alter visuomotor function during PA. In the present study, healthy young adults underwent PA while event-related potentials (ERPs) were recorded at the termination of each reach (screen-touch), then binned according to accuracy (hit vs. miss) and phase of exposure block (early, middle, late). Results show that two ERP components were evoked by screen-touch: an error-related negativity (ERN), and a P300. The ERN was consistently evoked on miss trials during adaptation, while the P300 amplitude was largest during the early phase of adaptation for both hit and miss trials. This study provides evidence of two neural signals sensitive to visual feedback during PA that may sub-serve changes in visuomotor responding. Prior ERP research suggests that the ERN reflects an error processing system in medial-frontal cortex, while the P300 is suggested to reflect a system for context updating and learning. Future research is needed to elucidate the role of these ERP components in improving visuomotor responses among individuals with VSN. PMID:26124715

  15. Linear constraint relations in biochemical reaction systems: II. Diagnosis and estimation of gross errors.

    PubMed

    van der Heijden, R T; Romein, B; Heijnen, J J; Hellinga, C; Luyben, K C

    1994-01-05

    Conservation equations derived from elemental balances, heat balances, and metabolic stoichiometry can be used to constrain the values of conversion rates of relevant components. In the present work, their use is discussed for detection and localization of significant errors of the following types: (1) at least one of the primary measurements has a significant error (gross measurement error); (2) the system definition is incorrect: a component (a) is not included in the system description, or (b) has a composition different from that specified; (3) the specified variances are too small, resulting in a too-sensitive test. The error diagnosis technique presented here is based on the following: given the conservation equations, for each set of measured rates a vector of residuals of these equations can be constructed, whose direction is related to the error source while its length is a measure of the error size. The similarity of the directions of such a residual vector and certain compare vectors, each corresponding to a specific error source, is considered in a statistical test. If two compare vectors that result from different error sources have (almost) the same direction, errors of these types cannot be distinguished from each other. For each possible error in the primary measurements of flows and concentrations, the compare vector can be constructed a priori, thus allowing analysis beforehand of which errors can be observed. Therefore, the detectability of certain errors likely to occur can be ensured by selecting a proper measurement set. The possibility of performing this analysis before experiments are carried out is an important advantage, providing a profound understanding of the detectability of errors. The characteristics of the method with respect to diagnosis of simultaneous errors and error size estimation are discussed and compared to those of the serial elimination method and the serial compensation strategy, published elsewhere.
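The residual-direction idea can be sketched with a toy conservation matrix. The stoichiometry below is hypothetical, chosen only so that E·r = 0 holds for error-free rates; the compare vector for a unit error in measurement j is simply column j of E:

```python
import numpy as np

# Hypothetical conservation (balance) matrix E: each row is one balance,
# each column one measured conversion rate. E @ r = 0 should hold exactly.
E = np.array([[1.0, -1.0, 0.0, 2.0],
              [0.0, 1.0, -2.0, 1.0]])

r_true = np.array([2.0, 4.0, 2.5, 1.0])
assert np.allclose(E @ r_true, 0.0)           # balances close for true rates

# Introduce a gross error of +0.5 in measurement 0.
r_meas = r_true + np.array([0.5, 0.0, 0.0, 0.0])
residual = E @ r_meas                         # direction encodes the error source

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Compare the residual direction with each measurement's compare vector.
similarities = [abs(cosine(residual, E[:, j])) for j in range(E.shape[1])]
suspect = int(np.argmax(similarities))        # most similar direction
```

As the abstract notes, two measurements whose columns of E are (nearly) parallel yield indistinguishable residual directions, which is exactly what this similarity test exposes a priori.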

  16. Accuracy in Parameter Estimation for the Root Mean Square Error of Approximation: Sample Size Planning for Narrow Confidence Intervals

    ERIC Educational Resources Information Center

    Kelley, Ken; Lai, Keke

    2011-01-01

    The root mean square error of approximation (RMSEA) is one of the most widely reported measures of misfit/fit in applications of structural equation modeling. When the RMSEA is of interest, so too should be the accompanying confidence interval. A narrow confidence interval reveals that the plausible parameter values are confined to a relatively…

  17. Spouses' Effectiveness as End-of-Life Health Care Surrogates: Accuracy, Uncertainty, and Errors of Overtreatment or Undertreatment

    ERIC Educational Resources Information Center

    Moorman, Sara M.; Carr, Deborah

    2008-01-01

    Purpose: We document the extent to which older adults accurately report their spouses' end-of-life treatment preferences, in the hypothetical scenarios of terminal illness with severe physical pain and terminal illness with severe cognitive impairment. We investigate the extent to which accurate reports, inaccurate reports (i.e., errors of…

  18. Error Self-Correction and Spelling: Improving the Spelling Accuracy of Secondary Students with Disabilities in Written Expression

    ERIC Educational Resources Information Center

    Viel-Ruma, Kim; Houchins, David; Fredrick, Laura

    2007-01-01

    In order to improve the spelling performance of high school students with deficits in written expression, an error self-correction procedure was implemented. The participants were two tenth-grade students and one twelfth-grade student in a program for individuals with learning disabilities. Using an alternating treatments design, the effect of…

  19. A non-orthogonal SVD-based decomposition for phase invariant error-related potential estimation.

    PubMed

    Phlypo, Ronald; Jrad, Nisrine; Rousseau, Sandra; Congedo, Marco

    2011-01-01

    The estimation of the Error Related Potential from a set of trials is a challenging problem. Indeed, the Error Related Potential is of low amplitude compared to the ongoing electroencephalographic activity. In addition, simple summing over the different trials is prone to error, since the waveform does not appear at an exact latency with respect to the trigger. In this work, we propose a method to cope with the discrepancy in latencies of the Error Related Potential waveform and offer a framework in which the estimation of the Error Related Potential waveform reduces to a simple Singular Value Decomposition of an analytic waveform representation of the observed signal. The approach is promising, since we are able to explain a greater portion of the variance of the observed signal with fewer components in the expansion.
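A minimal sketch of the analytic-signal-plus-SVD idea on synthetic jittered trials (all signal parameters below are illustrative, not the paper's): in the analytic representation a small latency shift becomes approximately a phase factor, so the dominant singular vector of the complex trial matrix yields a phase-invariant waveform estimate.

```python
import numpy as np

def analytic(x):
    """Analytic signal along the last axis via the FFT (Hilbert transform)."""
    n = x.shape[-1]
    X = np.fft.fft(x, axis=-1)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h, axis=-1)

rng = np.random.default_rng(1)
fs, n_trials, n_samples = 250, 40, 200
t = np.arange(n_samples) / fs

# Synthetic "ERP": a 6 Hz wavelet whose latency jitters from trial to trial.
trials = np.empty((n_trials, n_samples))
for i in range(n_trials):
    jitter = rng.uniform(-0.02, 0.02)          # latency jitter in seconds
    trials[i] = (np.sin(2 * np.pi * 6 * (t - jitter))
                 * np.exp(-((t - 0.4 - jitter) ** 2) / 0.01)
                 + 0.5 * rng.normal(size=n_samples))

# Rank-1 SVD of the analytic trial matrix: the dominant right singular
# vector is the phase-invariant waveform estimate.
U, s, Vh = np.linalg.svd(analytic(trials), full_matrices=False)
erp_estimate = np.real(Vh[0])
explained = s[0] ** 2 / np.sum(s ** 2)         # variance captured by rank 1
```

Plain averaging of these trials would smear the jittered wavelet, whereas the leading singular component concentrates the trial-consistent waveform into a single term of the expansion.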

  20. Is Your Error My Concern? An Event-Related Potential Study on Own and Observed Error Detection in Cooperation and Competition

    PubMed Central

    de Bruijn, Ellen R. A.; von Rhein, Daniel T.

    2011-01-01

Electroencephalogram studies have identified an error-related event-related potential (ERP) component known as the error-related negativity or ERN, thought to result from the detection of a loss of reward during performance monitoring. However, as own errors are always associated with a loss of reward, disentangling whether the ERN is error- or reward-dependent has proven to be a difficult endeavor. Recently, an ERN has also been demonstrated following the observation of others' errors. Importantly, other people's errors can be associated with loss or gain depending on the cooperative or competitive context in which they are made. The aim of the current ERP study was to disentangle the error- or reward-dependency of performance monitoring. Twelve pairs (N = 24) of participants performed and observed a speeded choice reaction task in two contexts. Own errors were always associated with a loss of reward. Observed errors in the cooperative context also yielded a loss of reward, but observed errors in the competitive context resulted in a gain. The results showed that the ERN was present following all types of errors independent of who made the error and the outcome of the action. Consequently, the current study demonstrates that performance monitoring as reflected by the ERN is error-specific and not directly dependent on reward. PMID:22347154

  1. [Learning from errors after a care-related adverse event].

    PubMed

    Richard, Christian; Pibarot, Marie-Laure; Zantman, Françoise

    2016-04-01

    The mobilisation of all health professionals with regard to the detection and analysis of care-related adverse events is an essential element in the improvement of the safety of care. This approach is required by the authorities and justifiably expected by users.

  2. Achieving Accuracy Requirements for Forest Biomass Mapping: A Data Fusion Method for Estimating Forest Biomass and LiDAR Sampling Error with Spaceborne Data

    NASA Technical Reports Server (NTRS)

    Montesano, P. M.; Cook, B. D.; Sun, G.; Simard, M.; Zhang, Z.; Nelson, R. F.; Ranson, K. J.; Lutchke, S.; Blair, J. B.

    2012-01-01

The synergistic use of active and passive remote sensing (i.e., data fusion) demonstrates the ability of spaceborne light detection and ranging (LiDAR), synthetic aperture radar (SAR) and multispectral imagery to achieve the accuracy requirements of a global forest biomass mapping mission. This data fusion approach also provides a means to extend 3D information from discrete spaceborne LiDAR measurements of forest structure across scales much larger than that of the LiDAR footprint. For estimating biomass, these measurements mix a number of errors, including those associated with LiDAR footprint sampling over regional to global extents. A general framework for mapping above-ground live forest biomass (AGB) with a data fusion approach is presented and verified using data from NASA field campaigns near Howland, ME, USA, to assess AGB and LiDAR sampling errors across a regionally representative landscape. We combined SAR and Landsat-derived optical (passive optical) image data to identify forest patches, and used image and simulated spaceborne LiDAR data to compute AGB and estimate LiDAR sampling error for forest patches and 100 m, 250 m, 500 m, and 1 km grid cells. Forest patches were delineated with Landsat-derived data and airborne SAR imagery, and simulated spaceborne LiDAR (SSL) data were derived from orbit and cloud cover simulations and airborne data from NASA's Laser Vegetation Imaging Sensor (LVIS). At both the patch and grid scales, we evaluated differences in AGB estimation and sampling error from the combined use of LiDAR with both SAR and passive optical and with either SAR or passive optical alone. This data fusion approach demonstrates that incorporating forest patches into the AGB mapping framework can provide sub-grid forest information for coarser grid-level AGB reporting, and that combining simulated spaceborne LiDAR with SAR and passive optical data is most useful for estimating AGB when measurements from LiDAR are limited, because they minimized…

  3. Shoulder proprioception is not related to throwing speed or accuracy in elite adolescent male baseball players.

    PubMed

    Freeston, Jonathan; Adams, Roger D; Rooney, Kieron

    2015-01-01

    Understanding factors that influence throwing speed and accuracy is critical to performance in baseball. Shoulder proprioception has been implicated in the injury risk of throwing athletes, but no such link has been established with performance outcomes. The purpose of this study was to describe any relationship between shoulder proprioception acuity and throwing speed or accuracy. Twenty healthy elite adolescent male baseball players (age, 19.6 ± 2.6 years), who had represented the state of New South Wales in the past 18 months, were assessed for bilateral active shoulder proprioception (shoulder rotation in 90° of arm abduction moving toward external rotation using the active movement extent discrimination apparatus), maximal throwing speed (MTS, meters per second measured via a radar gun), and accuracy (total error in centimeters determined by video analysis) at 80 and 100% of MTS. Although proprioception in the dominant and nondominant arms was significantly correlated with each other (r = 0.54, p < 0.01), no relationship was found between shoulder proprioception and performance. Shoulder proprioception was not a significant determinant of throwing performance such that high levels of speed and accuracy were achieved without a high degree of proprioception. There is no evidence to suggest therefore that this particular method of shoulder proprioception measurement should be implemented in clinical practice. Consequently, clinicians are encouraged to consider proprioception throughout the entire kinetic chain rather than the shoulder joint in isolation as a determining factor of performance in throwing athletes.

  4. Quantification and correction of the error due to limited PIV resolution on the accuracy of non-intrusive spatial pressure measurement using a DNS channel flow database

    NASA Astrophysics Data System (ADS)

    Liu, Xiaofeng; Siddle-Mitchell, Seth

    2016-11-01

The effect of the subgrid-scale (SGS) stress due to limited PIV resolution on pressure measurement accuracy is quantified using data from a direct numerical simulation database of turbulent channel flow (JHTDB). A series of 2000 consecutive realizations of sample block data with 512x512x49 grid nodal points were selected and spatially filtered with a coarse 17x17x17 and a fine 5x5x5 box average, respectively, giving rise to corresponding PIV resolutions of roughly 62.6 and 18.4 times the viscous length scale. Comparison of the reconstructed pressure at different levels of pressure gradient approximation with the filtered pressure shows that neglecting the viscous term leads to a small but noticeable change in the reconstructed pressure, especially in regions near the channel walls. By contrast, neglecting the SGS stress results in a more significant increase in both the bias and the random errors, indicating that the SGS term must be accounted for in PIV pressure measurement. Correction using similarity SGS modeling reduces the random error due to the omission of the SGS stress from 114.5% of the filtered pressure r.m.s. fluctuation to 89.1% for the coarse PIV resolution, and from 66.5% to 35.9% for the fine PIV resolution, confirming the benefit of the error compensation method and the positive influence of increased PIV resolution on pressure measurement accuracy.
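The quantity at issue, the sub-grid-scale stress lost to a finite PIV interrogation window, can be illustrated with a box filter applied to a synthetic two-scale velocity field. This is a generic sketch (the field, filter sizes, and function names are our assumptions, with SciPy's `uniform_filter` standing in for PIV window averaging), not the JHTDB processing pipeline:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sgs_stress(u, v, size):
    """Box-filter SGS stress tau_ij = <u_i u_j> - <u_i><u_j> for a 2-D field,
    mimicking the sub-grid information lost at a given PIV window size."""
    f = lambda q: uniform_filter(q, size=size, mode='wrap')
    ub, vb = f(u), f(v)
    return f(u * u) - ub * ub, f(u * v) - ub * vb, f(v * v) - vb * vb

# synthetic two-scale field: a large-scale wave plus fine-scale motion
x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
X, Y = np.meshgrid(x, x)
u = np.sin(X) + 0.3 * np.sin(16 * X) * np.cos(16 * Y)
v = np.cos(Y) + 0.3 * np.cos(16 * X) * np.sin(16 * Y)
t11_coarse, _, _ = sgs_stress(u, v, size=17)   # coarse "PIV" resolution
t11_fine, _, _ = sgs_stress(u, v, size=5)      # fine "PIV" resolution
```

The coarse window removes essentially all of the fine-scale motion, so its SGS stress is larger in magnitude, mirroring the paper's finding that the SGS error grows as PIV resolution degrades.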

  5. Effect of geocoding errors on traffic-related air pollutant exposure and concentration estimates

    EPA Science Inventory

    Exposure to traffic-related air pollutants is highest very near roads, and thus exposure estimates are sensitive to positional errors. This study evaluates positional and PM2.5 concentration errors that result from the use of automated geocoding methods and from linearized approx...

  6. Mal-Adaptation of Event-Related EEG Responses Preceding Performance Errors

    PubMed Central

    Eichele, Heike; Juvodden, Hilde T.; Ullsperger, Markus; Eichele, Tom

    2010-01-01

Recent EEG and fMRI evidence suggests that behavioral errors are foreshadowed by systematic changes in brain activity preceding the outcome by seconds. In order to further characterize this type of error-precursor activity, we investigated single-trial event-related EEG activity from 70 participants performing a modified Eriksen flanker task, in particular focusing on the trial-by-trial dynamics of a fronto-central independent component that has previously been associated with error and feedback processing. The stimulus-locked peaks in the N2 and P3 latency range in the event-related averages showed the expected compatibility- and error-related modulations. In addition, a small pre-stimulus negative slow wave was present on erroneous trials. Significant error-preceding activity was found in local stimulus sequences with decreased conflict, in the form of less negativity at the N2 latency (310–350 ms) accumulating across the five trials before errors; concomitantly, response times sped up across trials. These results illustrate that error-preceding activity in event-related EEG is associated with the performance monitoring system, and we conclude that the dynamics of performance monitoring contribute to the generation of error-prone states in addition to the more remote and indirect effects in ongoing activity such as posterior alpha power in EEG and default mode drifts in fMRI. PMID:20740080

  7. Adaptation of hybrid human-computer interaction systems using EEG error-related potentials.

    PubMed

    Chavarriaga, Ricardo; Biasiucci, Andrea; Forster, Killian; Roggen, Daniel; Troster, Gerhard; Millan, Jose Del R

    2010-01-01

Performance improvement in both humans and artificial systems strongly relies on the ability to recognize erroneous behavior or decisions. This paper, which builds upon previous studies on EEG error-related signals, presents a hybrid approach for human-computer interaction that uses human gestures to send commands to a computer and exploits brain activity to provide implicit feedback about the recognition of such commands. Using a simple computer game as a case study, we show that EEG activity evoked by erroneous gesture recognition can be classified in single trials above random levels. Automatic artifact rejection techniques are used, taking into account that subjects are allowed to move during the experiment. Moreover, we present a simple adaptation mechanism that uses the EEG signal to label newly acquired samples and can be used to re-calibrate the gesture recognition system in a supervised manner. Offline analyses show that, although the achieved EEG decoding accuracy is far from perfect, these signals convey sufficient information to significantly improve the overall system performance.

  8. Evaluation of Relative Geometric Accuracy of Terrasar-X by Pixel Matching Methodology

    NASA Astrophysics Data System (ADS)

    Nonaka, T.; Asaka, T.; Iwashita, K.

    2016-06-01

Recently, high-resolution commercial SAR satellites with resolutions of several meters have been widely utilized for various applications, disaster monitoring being one of the most common. Information about the flooding situation and ground displacement was rapidly announced to the public after the Great East Japan Earthquake of 2011. One study reported the displacement in the Tohoku region obtained by pixel matching of pre- and post-event TerraSAR-X data; the validated accuracy was about 30 cm at the GEONET reference points. In order to discuss the spatial distribution of the displacement, we need to evaluate the relative accuracy of the displacement in addition to the absolute accuracy. In previous studies, our team evaluated the absolute 2D geo-location accuracy of the TerraSAR-X ortho-rectified EEC product for both flat and mountainous areas. The purpose of the current study was therefore to evaluate the spatial and temporal relative geo-location accuracies of the product, taking the apparent displacement of fixed points as the relative geo-location accuracy. Firstly, using a TerraSAR-X StripMap dataset, a pixel matching method for estimating displacement at the sub-pixel level was developed. Secondly, the validity of the method was confirmed by comparison with GEONET data: the accuracy of the displacement in the X and Y directions was in agreement with previous studies. Subsequently, the methodology was applied to 20 pairs of data for the Tokyo Ota-ku and Kawasaki-shi areas, and the displacement of each pair was evaluated. The time-series displacement rate showed a seasonal trend and appears to be related to atmospheric delay.
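The sub-pixel matching step can be sketched generically as FFT cross-correlation followed by a parabolic fit around the correlation peak. This is a minimal illustration on a synthetic feature; it is not the study's actual matching code, and real SAR amplitude patches would additionally need windowing and oversampling:

```python
import numpy as np

def subpixel_displacement(ref, mov):
    """Estimate d = (dy, dx) such that mov(x) ~ ref(x - d), via circular
    FFT cross-correlation plus a parabolic sub-pixel peak fit."""
    corr = np.fft.fftshift(
        np.real(np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(mov)))))
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape))
    d = []
    for ax in (0, 1):
        m, p = peak.copy(), peak.copy()
        m[ax] -= 1
        p[ax] += 1
        cm, c0, cp = corr[tuple(m)], corr[tuple(peak)], corr[tuple(p)]
        # vertex of the parabola through the peak and its two neighbours
        delta = 0.5 * (cm - cp) / (cm + cp - 2 * c0)
        d.append(corr.shape[ax] // 2 - (peak[ax] + delta))
    return tuple(d)

# synthetic demo: a Gaussian "feature" displaced by a known sub-pixel amount
y, x = np.mgrid[0:64, 0:64]
blob = lambda cy, cx: np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * 3.0 ** 2))
ref, mov = blob(20.0, 20.0), blob(22.5, 23.25)   # true d = (2.5, 3.25)
dy, dx = subpixel_displacement(ref, mov)
```

The parabolic refinement recovers the fractional part of the shift to a small fraction of a pixel, which is the property that makes displacements far below the nominal pixel size measurable.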

  9. 26 CFR 1.6664-1 - Accuracy-related and fraud penalties; definitions, effective date and special rules.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 26 Internal Revenue 13 2013-04-01 2013-04-01 false Accuracy-related and fraud penalties... to the Tax, Additional Amounts, and Assessable Penalties § 1.6664-1 Accuracy-related and fraud... “underpayment” for purposes of the accuracy-related penalty under section 6662 and the fraud penalty...

  10. 26 CFR 1.6664-1 - Accuracy-related and fraud penalties; definitions, effective date and special rules.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 26 Internal Revenue 13 2012-04-01 2012-04-01 false Accuracy-related and fraud penalties... to the Tax, Additional Amounts, and Assessable Penalties § 1.6664-1 Accuracy-related and fraud... “underpayment” for purposes of the accuracy-related penalty under section 6662 and the fraud penalty...

  11. 26 CFR 1.6664-1 - Accuracy-related and fraud penalties; definitions, effective date and special rules.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 26 Internal Revenue 13 2011-04-01 2011-04-01 false Accuracy-related and fraud penalties... to the Tax, Additional Amounts, and Assessable Penalties § 1.6664-1 Accuracy-related and fraud... “underpayment” for purposes of the accuracy-related penalty under section 6662 and the fraud penalty...

  12. Evaluating IMRT and VMAT dose accuracy: Practical examples of failure to detect systematic errors when applying a commonly used metric and action levels

    SciTech Connect

    Nelms, Benjamin E.; Chan, Maria F.; Jarry, Geneviève; Lemire, Matthieu; Lowden, John; Hampton, Carnell

    2013-11-15

Purpose: This study (1) examines a variety of real-world cases where systematic errors were not detected by widely accepted methods for IMRT/VMAT dosimetric accuracy evaluation, and (2) drills down to identify failure modes and their corresponding means of detection, diagnosis, and mitigation. The primary goal of detailing these case studies is to explore different, more sensitive methods and metrics that could be used more effectively for evaluating the accuracy of dose algorithms, delivery systems, and QA devices. Methods: The authors present seven real-world case studies representing a variety of combinations of treatment planning system (TPS), linac, delivery modality, and systematic error type. These case studies are typical of what might be used as part of an IMRT or VMAT commissioning test suite, varying in complexity. Each case study is analyzed according to TG-119 instructions for gamma passing rates and action levels for per-beam and/or composite plan dosimetric QA. Then, each case study is analyzed in depth with advanced diagnostic methods (dose profile examination, EPID-based measurements, dose difference pattern analysis, 3D measurement-guided dose reconstruction, and dose grid inspection) and more sensitive metrics (2% local normalization/2 mm DTA and estimated DVH comparisons). Results: For these case studies, the conventional 3%/3 mm gamma passing rates exceeded 99% for IMRT per-beam analyses and ranged from 93.9% to 100% for composite plan dose analysis, well above the TG-119 action levels of 90% and 88%, respectively. However, all cases had systematic errors that were detected only by using advanced diagnostic techniques and more sensitive metrics. The systematic errors caused variable but noteworthy impact, including estimated target dose coverage loss of up to 5.5% and local dose deviations up to 31.5%. Types of errors included TPS model settings, algorithm limitations, and modeling and alignment of QA phantoms in the TPS. Most of the errors were…
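The abstract's central point, that a loose gamma criterion can return a perfect pass rate while a systematic dose error goes undetected, is easy to reproduce with a toy 1-D gamma analysis. This simplified global-gamma sketch (discrete search, no interpolation, hypothetical step profiles) is ours, not the authors' analysis code:

```python
import numpy as np

def gamma_pass_rate(x, ref, meas, dose_tol, dist_tol, norm=None):
    """Simplified 1-D global gamma analysis.

    x        : positions in mm
    ref/meas : reference (planned) and measured dose profiles
    dose_tol : dose-difference criterion as a fraction of `norm`
    dist_tol : distance-to-agreement criterion in mm
    Returns the fraction of measured points with gamma <= 1.
    """
    norm = np.max(ref) if norm is None else norm
    gammas = []
    for xi, di in zip(x, meas):
        dd = (di - ref) / (dose_tol * norm)   # dose-difference term vs. all ref points
        dx = (xi - x) / dist_tol              # distance term vs. all ref points
        gammas.append(np.sqrt(dd ** 2 + dx ** 2).min())
    return np.mean(np.array(gammas) <= 1.0)

# demo: a measured profile with a 2.5% systematic overdose in the field
x = np.linspace(0, 100, 201)                       # 0.5 mm grid
ref = np.where((x > 30) & (x < 70), 1.0, 0.05)
meas = ref * 1.025                                 # 2.5% systematic error
loose = gamma_pass_rate(x, ref, meas, 0.03, 3.0)   # 3%/3 mm criterion
tight = gamma_pass_rate(x, ref, meas, 0.02, 2.0)   # 2%/2 mm criterion
```

With a 2.5% systematic overdose the 3%/3 mm analysis passes every point, while the tighter 2%/2 mm criterion flags the entire high-dose plateau, the same sensitivity gap the case studies document.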

  13. Violation of Heisenberg's error-disturbance uncertainty relation in neutron-spin measurements

    NASA Astrophysics Data System (ADS)

    Sulyok, Georg; Sponar, Stephan; Erhart, Jacqueline; Badurek, Gerald; Ozawa, Masanao; Hasegawa, Yuji

    2013-08-01

In its original formulation, Heisenberg's uncertainty principle dealt with the relationship between the error of a quantum measurement and the thereby induced disturbance on the measured object. Meanwhile, Heisenberg's heuristic arguments have turned out to be correct only for special cases. An alternative universally valid relation was derived by Ozawa in 2003. Here, we demonstrate that Ozawa's predictions hold for projective neutron-spin measurements. The experimental inaccessibility of error and disturbance claimed elsewhere has been overcome using a tomographic method. By a systematic variation of experimental parameters in the entire configuration space, the physical behavior of error and disturbance for projective spin-1/2 measurements is illustrated comprehensively. The violation of Heisenberg's original relation and the validity of Ozawa's relation both become manifest. In addition, our results demonstrate that the widespread assumption of a reciprocal relation between error and disturbance is not valid in general.
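For reference, the two relations under test can be stated side by side. With \(\varepsilon(A)\) the measurement error on observable \(A\), \(\eta(B)\) the disturbance induced on \(B\), and \(\sigma\) the standard deviation in the pre-measurement state, the Heisenberg-type error-disturbance relation tested (and violated) here, and Ozawa's universally valid 2003 relation, read:

```latex
% Heisenberg-type error-disturbance relation (violated):
\varepsilon(A)\,\eta(B) \;\ge\; \tfrac{1}{2}\bigl|\langle[A,B]\rangle\bigr|

% Ozawa's universally valid relation (2003):
\varepsilon(A)\,\eta(B) + \varepsilon(A)\,\sigma(B) + \sigma(A)\,\eta(B)
  \;\ge\; \tfrac{1}{2}\bigl|\langle[A,B]\rangle\bigr|
```

The two extra terms in Ozawa's inequality are what allow the product \(\varepsilon(A)\eta(B)\) alone to fall below the commutator bound, which is precisely what the neutron-spin experiment observes.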

  14. Event-related potentials elicited by errors during the stop-signal task. I: Macaque monkeys

    PubMed Central

    Godlove, David C.; Emeric, Erik E.; Segovis, Courtney M.; Young, Michelle S.; Schall, Jeffrey D.; Woodman, Geoffrey F.

    2011-01-01

    The error-related negativity (ERN) and positivity (Pe) are components of event-related potential (ERP) waveforms recorded from humans that are thought to reflect performance monitoring. Error-related signals have also been found in single-neuron responses and local-field potentials recorded in supplementary eye field and anterior cingulate cortex of macaque monkeys. However, the homology of these neural signals across species remains controversial. Here, we show that monkeys exhibit ERN and Pe components when they commit errors during a saccadic stop-signal task. The voltage distributions and current densities of these components were similar to those found in humans performing the same task. Subsequent analyses show that neither stimulus- nor response-related artifacts accounted for the error-ERPs. This demonstration of macaque homologues of the ERN and Pe forms a keystone in the bridge linking human and nonhuman primate studies on the neural basis of performance monitoring. PMID:22049407

  15. The error-related negativity (ERN) and psychopathology: Toward an Endophenotype

    PubMed Central

    Olvet, Doreen M.; Hajcak, Greg

    2008-01-01

The ERN is a negative deflection in the event-related potential that peaks approximately 50 ms after the commission of an error. The ERN is thought to reflect early error-processing activity of the anterior cingulate cortex (ACC). First, we review current functional, neurobiological, and developmental data on the ERN. Next, the ERN is discussed in terms of three psychiatric disorders characterized by abnormal response monitoring: anxiety disorders, depression, and substance abuse. These data indicate that increased and decreased error-related brain activity are associated with the internalizing and externalizing dimensions of psychopathology, respectively. Recent data further suggest that abnormal error processing indexed by the ERN reflects trait- rather than state-related symptoms, especially those related to anxiety. Overall, these data point to the utility of the ERN in studying risk for psychiatric disorders, and are discussed in terms of the endophenotype construct. PMID:18694617

  16. Medial frontal cortex activity and loss-related responses to errors.

    PubMed

    Taylor, Stephan F; Martis, Brian; Fitzgerald, Kate D; Welsh, Robert C; Abelson, James L; Liberzon, Israel; Himle, Joseph A; Gehring, William J

    2006-04-12

    Making an error elicits activity from brain regions that monitor performance, especially the medial frontal cortex (MFC). However, uncertainty exists about whether the posterior or anterior/rostral MFC processes errors and to what degree affective responses to errors are mediated in the MFC, specifically the rostral anterior cingulate cortex (rACC). To test the hypothesis that rACC mediates a type of affective response, we conceptualized affect in response to an error as a reaction to loss and amplified this response with a monetary penalty. While subjects performed a cognitive interference task during functional magnetic resonance imaging, hemodynamic activity in the rACC was significantly greater when subjects lost money as a result of an error compared with errors that did not lead to monetary loss. A significant interaction between the incentive conditions and error events demonstrated that the effect was not merely attributable to working harder to win (or not lose) money, although an effect of motivation was noted in the mid-MFC. Activation foci also occurred in similar regions of the posterior MFC for error and interference processing, which were not modulated by the incentive conditions. However, at the level of the individual subject, substantial functional variability occurred along the MFC during error processing, including foci in the rostral/anterior extent of the MFC not appearing in the group analysis. The findings support the hypothesis that the rostral extent of the MFC (rACC) processes loss-related responses to errors, and individual differences may account for some of the reported variation of error-related foci in the MFC.

  17. Factoring vs linear modeling in rate estimation: a simulation study of relative accuracy.

    PubMed

    Maldonado, G; Greenland, S

    1998-07-01

A common strategy for modeling dose-response in epidemiology is to transform ordered exposures and covariates into sets of dichotomous indicator variables (that is, to factor the variables). Factoring tends to increase estimation variance, but it also tends to decrease bias, and thus may increase or decrease total accuracy. We conducted a simulation study to examine the impact of factoring on the accuracy of rate estimation. Factored and unfactored Poisson regression models were fit to follow-up study datasets that were randomly generated from 37,500 population model forms ranging from subadditive to supramultiplicative. In the situations we examined, factoring sometimes substantially improved accuracy relative to fitting the corresponding unfactored model, sometimes substantially decreased accuracy, and sometimes made little difference. The difference in accuracy between factored and unfactored models depended in a complicated fashion on the difference between the true and fitted model forms, the strength of exposure and covariate effects in the population, and the study size. It may be difficult in practice to predict when factoring will increase or decrease accuracy. We recommend, therefore, that the strategy of factoring variables be supplemented with other strategies for modeling dose-response.
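The two strategies being compared can be sketched with a minimal Poisson IRLS fit: the "factored" model uses one indicator per exposure level, while the unfactored model forces a log-linear trend on the exposure score. The simulation setup, function names, and rates below are illustrative assumptions, not the authors' 37,500 population model forms:

```python
import numpy as np

def poisson_irls(X, y, offset, n_iter=50):
    """Minimal IRLS fit of a log-linear Poisson model:
    y ~ Poisson(exp(X @ beta + offset))."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta + offset)
        z = X @ beta + (y - mu) / mu          # working response (offset removed)
        W = mu                                # Poisson working weights
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

# simulated follow-up data: 4 ordered exposure levels, person-time PT,
# true rates deliberately NOT log-linear in the exposure score
rng = np.random.default_rng(1)
levels = np.repeat([0, 1, 2, 3], 250)
PT = rng.uniform(1.0, 5.0, levels.size)       # person-years at risk
true_rate = np.array([0.02, 0.03, 0.10, 0.12])[levels]
y = rng.poisson(true_rate * PT)

X_lin = np.column_stack([np.ones_like(levels, float), levels])   # unfactored trend
X_fac = (levels[:, None] == np.arange(4)).astype(float)          # factored (indicators)
b_lin = poisson_irls(X_lin, y, np.log(PT))
b_fac = poisson_irls(X_fac, y, np.log(PT))
rates_fac = np.exp(b_fac)                     # level-specific rate estimates
```

The factored fit reproduces the level-specific event/person-time rates exactly (its MLE property), at the cost of one parameter per level; the trend model spends fewer parameters but is biased whenever the true dose-response is not log-linear.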

  18. Tempest: Mesoscale test case suite results and the effect of order-of-accuracy on pressure gradient force errors

    NASA Astrophysics Data System (ADS)

    Guerra, J. E.; Ullrich, P. A.

    2014-12-01

Tempest is a new non-hydrostatic atmospheric modeling framework that allows for investigation and intercomparison of high-order numerical methods. It is composed of a dynamical core based on a finite-element formulation of arbitrary order operating on cubed-sphere and Cartesian meshes with topography. The underlying technology is briefly discussed, including a novel Hybrid Finite Element Method (HFEM) vertical coordinate coupled with high-order Implicit/Explicit (IMEX) time integration to control vertically propagating sound waves. Here, we show results from a suite of mesoscale test cases from the literature that demonstrate the accuracy, performance, and properties of Tempest on regular Cartesian meshes. The test cases include wave propagation behavior, Kelvin-Helmholtz instabilities, and flow interaction with topography. Comparisons are made to existing results, highlighting improvements in resolving atmospheric dynamics in the vertical direction, where many existing methods are deficient.

  19. Punishment has a lasting impact on error-related brain activity.

    PubMed

    Riesel, Anja; Weinberg, Anna; Endrass, Tanja; Kathmann, Norbert; Hajcak, Greg

    2012-02-01

The current study examined whether punishment has direct and lasting effects on error-related brain activity, and whether this effect is larger with increasing trait anxiety. Participants were told that errors on a flanker task would be punished in some blocks but not others. Punishment was applied following 50% of errors in punished blocks during the first half of the experiment (i.e., acquisition), but never in the second half (i.e., extinction). The ERN was enhanced in the punished blocks in both experimental phases; this enhancement remained stable throughout the extinction phase. More anxious individuals were characterized by larger punishment-related modulations of the ERN. The study reveals evidence for lasting, punishment-based modulations of the ERN that increase with anxiety. These data suggest avenues for research to examine more specific learning-related mechanisms that link anxiety to overactive error monitoring.

  20. Crying tapir: the functionality of errors and accuracy in predator recognition in two neotropical high-canopy primates.

    PubMed

    Mourthé, Ítalo; Barnett, Adrian A

    2014-01-01

Predation is often considered to be a prime driver in primate evolution, but, as predation is rarely observed in nature, little is known of primate antipredator responses. Time-limited primates should be highly discerning when responding to predators, since time spent in vigilance and avoidance behaviour may supplant other activities. We present data from two independent studies describing and quantifying the frequency, nature and duration of predator-linked behaviours in 2 high-canopy primates, Ateles belzebuth and Cacajao ouakary. We introduce the concept of 'pseudopredators' (harmless species whose appearance is sufficiently similar to that of predators to elicit antipredator responses) and predict that changes in behaviour should increase with the risk posed by a perceived predator. We studied primate group encounters with non-primate vertebrates across 14 (Ateles) and 19 (Cacajao) months in 2 undisturbed Amazonian forests. Although preliminary, data on both primates revealed that they distinguished the potential predation capacities of other species, as predicted. They appeared to differentiate predators from non-predators and distinguished when potential predators were not an immediate threat, although they reacted erroneously to pseudopredators in, on average, about 20% of the responses given toward other vertebrates. Overreacting to pseudopredators may nonetheless be adaptive since, in predation, a single error can be fatal to the prey.

  1. Error-Related Negativity and Tic History in Pediatric Obsessive-Compulsive Disorder

    ERIC Educational Resources Information Center

    Hanna, Gregory L.; Carrasco, Melisa; Harbin, Shannon M.; Nienhuis, Jenna K.; LaRosa, Christina E.; Chen, Poyu; Fitzgerald, Kate D.; Gehring, William J.

    2012-01-01

    Objective: The error-related negativity (ERN) is a negative deflection in the event-related potential after an incorrect response, which is often increased in patients with obsessive-compulsive disorder (OCD). However, the relation of the ERN to comorbid tic disorders has not been examined in patients with OCD. This study compared ERN amplitudes…

  2. Conscious perception of errors and its relation to the anterior insula

    PubMed Central

    Harsay, Helga A.; Wessel, Jan R.; Ridderinkhof, K. Richard

    2010-01-01

To detect erroneous action outcomes is necessary for flexible adjustments and therefore a prerequisite of adaptive, goal-directed behavior. While performance monitoring has been studied intensively for over two decades and a vast amount of knowledge on its functional neuroanatomy has been gathered, much less is known about conscious error perception, often referred to as error awareness. Here, we review and discuss the conditions under which error awareness occurs, its neural correlates and underlying functional neuroanatomy. We focus specifically on the anterior insula, which has been shown to be (a) reliably activated during performance monitoring and (b) modulated by error awareness. Anterior insular activity appears to be closely related to autonomic responses associated with consciously perceived errors, although the causality and directions of these relationships still need to be unraveled. We discuss the role of the anterior insula in generating versus perceiving autonomic responses and as a key player in balancing effortful task-related and resting-state activity. We suggest that errors elicit reactions highly reminiscent of an orienting response and may thus induce the autonomic arousal needed to recruit the required mental and physical resources. We discuss the role of norepinephrine activity in eliciting sufficiently strong central and autonomic nervous responses enabling the necessary adaptation as well as conscious error perception. PMID:20512371

  3. CORRECTED ERROR VIDEO VERSUS A PHYSICAL THERAPIST INSTRUCTED HOME EXERCISE PROGRAM: ACCURACY OF PERFORMING THERAPEUTIC SHOULDER EXERCISES

    PubMed Central

    Krishnamurthy, Kamesh; Hopp, Jennifer; Stanley, Laura; Spores, Ken; Braunreiter, David

    2016-01-01

    Background and Purpose The accurate performance of physical therapy exercises can be difficult. In this evolving healthcare climate it is important to continually look for better methods to educate patients. The use of handouts, in-person demonstration, and video instruction are all potential avenues used to teach proper exercise form. The purpose of this study was to examine if a corrected error video (CEV) would be as effective as a single visit with a physical therapist (PT) to teach healthy subjects how to properly perform four different shoulder rehabilitation exercises. Study Design This was a prospective, single-blinded interventional trial. Methods Fifty-eight subjects with no shoulder complaints were recruited from two institutions and randomized into one of two groups: the CEV group (30 subjects) was given a CEV comprised of four shoulder exercises, while the physical therapy group (28 subjects) had one session with a PT as well as a handout of how to complete the exercises. Each subject practiced the exercises for one week and was then videotaped performing them during a return visit. Videos were scored with the shoulder exam assessment tool (SEAT) created by the authors. Results There was no difference between the groups on total SEAT score (13.66 ± 0.29 vs 13.46 ± 0.30 for CEV vs PT, p = 0.64, 95% CI [−0.06, 0.037]). Average scores for individual exercises also showed no significant difference. Conclusion/Clinical Relevance These results demonstrate that the inexpensive and accessible CEV is as beneficial as direct instruction in teaching subjects to properly perform shoulder rehabilitation exercises. Level of Evidence 1b PMID:27757288

  4. Accuracy of velocities from repeated GPS surveys: relative positioning is concerned

    NASA Astrophysics Data System (ADS)

    Duman, Huseyin; Ugur Sanli, D.

    2016-04-01

    Over more than a decade, researchers have been interested in studying the accuracy of GPS positioning solutions. Recently, reporting the accuracy of GPS velocities has been added to this. Researchers studying landslide motion, tectonic motion, uplift, sea level rise, and subsidence still report results from GPS experiments in which repeated GPS measurements from short sessions are used. This motivated other researchers to study the accuracy of GPS deformation rates/velocities from various repeated GPS surveys. In one of these efforts, velocity accuracy was derived from repeated static GPS surveys using short observation sessions and the Precise Point Positioning (PPP) mode of GPS software. Velocities from short GPS sessions were compared with velocities from 24 h sessions. The accuracy of the velocities was assessed using statistical hypothesis testing and by quantifying the accuracy of the least squares estimation models. The results reveal that 45-60 % of the horizontal solutions and none of the vertical solutions comply with the results from 24 h solutions. We argue that this finding, obtained from data evaluated with PPP, should also apply when data from long GPS baselines are processed using fundamental relative point positioning. To test this idea, we chose the two IGS stations ANKR and NICO and derived their velocities relative to reference stations held fixed on the stable Eurasian plate. The University of Bern's GNSS software BERNESE was used to produce the relative positioning solutions, and the results were compared with GIPSY/OASIS II PPP results. First impressions indicate that it is worth designing a global experiment to test these ideas in detail.
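
    The velocities discussed above come from least-squares trend fitting of station position time series. As a minimal illustration (ordinary least squares on a synthetic daily coordinate series; this is a textbook sketch, not the BERNESE or GIPSY/OASIS processing):

```python
import math

def fit_velocity(t, y):
    """Fit y = a + v*t by ordinary least squares and return the
    velocity v together with its formal standard error sigma_v.
    t in years, y in mm; a textbook sketch, not a GNSS pipeline."""
    n = len(t)
    tm, ym = sum(t) / n, sum(y) / n
    sxx = sum((ti - tm) ** 2 for ti in t)
    sxy = sum((ti - tm) * (yi - ym) for ti, yi in zip(t, y))
    v = sxy / sxx
    a = ym - v * tm
    # residual variance propagated into the formal error of the slope
    rss = sum((yi - (a + v * ti)) ** 2 for ti, yi in zip(t, y))
    sigma_v = math.sqrt((rss / (n - 2)) / sxx)
    return v, sigma_v

# One year of daily positions moving at exactly 10 mm/yr
t = [i / 365.0 for i in range(365)]
y = [3.0 + 10.0 * ti for ti in t]
v, sv = fit_velocity(t, y)
```

    Shorter observation windows shrink sxx and inflate sigma_v, which is one way to see why velocities from short repeated sessions are less reliable than those from long continuous series.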

  5. Precision error in dual-photon absorptiometry related to source age

    SciTech Connect

    Ross, P.D.; Wasnich, R.D.; Vogel, J.M.

    1988-02-01

    An average, variable precision error of up to 6% related to source age was observed for dual-photon absorptiometry of the spine in a longitudinal study of bone mineral content involving 393 women. Application of a software correction for source decay compensated for only a portion of this error. The authors conclude that measurement of bone-loss rates using serial dual-photon bone mineral measurements must be interpreted with caution.

  6. Accuracy of Noncycloplegic Retinoscopy, Retinomax Autorefractor, and SureSight Vision Screener for Detecting Significant Refractive Errors

    PubMed Central

    Kulp, Marjean Taylor; Ying, Gui-shuang; Huang, Jiayan; Maguire, Maureen; Quinn, Graham; Ciner, Elise B.; Cyert, Lynn A.; Orel-Bixler, Deborah A.; Moore, Bruce D.

    2014-01-01

    Purpose. To evaluate, by receiver operating characteristic (ROC) analysis, the ability of noncycloplegic retinoscopy (NCR), Retinomax Autorefractor (Retinomax), and SureSight Vision Screener (SureSight) to detect significant refractive errors (RE) among preschoolers. Methods. Refraction results of eye care professionals using NCR, Retinomax, and SureSight (n = 2588) and of nurse and lay screeners using Retinomax and SureSight (n = 1452) were compared with masked cycloplegic retinoscopy results. Significant RE was defined as hyperopia greater than +3.25 diopters (D), myopia greater than 2.00 D, astigmatism greater than 1.50 D, and anisometropia greater than 1.00 D interocular difference in hyperopia, greater than 3.00 D interocular difference in myopia, or greater than 1.50 D interocular difference in astigmatism. The ability of each screening test to identify presence, type, and/or severity of significant RE was summarized by the area under the ROC curve (AUC) and calculated from weighted logistic regression models. Results. For detection of each type of significant RE, AUC of each test was high; AUC was better for detecting the most severe levels of RE than for all REs considered important to detect (AUC 0.97–1.00 vs. 0.92–0.93). The area under the curve of each screening test was high for myopia (AUC 0.97–0.99). Noncycloplegic retinoscopy and Retinomax performed better than SureSight for hyperopia (AUC 0.92–0.99 and 0.90–0.98 vs. 0.85–0.94, P ≤ 0.02), Retinomax performed better than NCR for astigmatism greater than 1.50 D (AUC 0.95 vs. 0.90, P = 0.01), and SureSight performed better than Retinomax for anisometropia (AUC 0.85–1.00 vs. 0.76–0.96, P ≤ 0.07). Performance was similar for nurse and lay screeners in detecting any significant RE (AUC 0.92–1.00 vs. 0.92–0.99). Conclusions. Each test had a very high discriminatory power for detecting children with any significant RE. PMID:24481262
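
    The AUC values above can be read as the probability that a screening test scores a child with significant RE higher than a child without. The sketch below is the generic rank-based (Mann-Whitney) AUC, not the weighted-logistic-regression AUC used in the study:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney statistic:
    the probability that a randomly chosen positive case scores
    higher than a randomly chosen negative one (ties count 0.5).
    A generic sketch, not the study's weighted-model AUC."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Perfect separation of hypothetical screening scores gives AUC = 1.0
perfect = auc([2, 3], [0, 1])
```

    An AUC of 0.5 corresponds to chance-level discrimination, which is why the reported values of 0.85-1.00 indicate high discriminatory power.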

  7. SCIAMACHY WFM-DOAS XCO2: reduction of scattering related errors

    NASA Astrophysics Data System (ADS)

    Heymann, J.; Bovensmann, H.; Buchwitz, M.; Burrows, J. P.; Deutscher, N. M.; Notholt, J.; Rettinger, M.; Reuter, M.; Schneising, O.; Sussmann, R.; Warneke, T.

    2012-06-01

    Global observations of column-averaged dry air mole fractions of carbon dioxide (CO2), denoted by XCO2, retrieved from passive remote sensing instruments on Earth orbiting satellites can provide important and missing global information on the distribution and magnitude of regional CO2 surface fluxes. This application has challenging precision and accuracy requirements. SCIAMACHY on-board ENVISAT is the first satellite instrument that measures the upwelling electromagnetic radiation in the near and short wave infrared at an adequate spectral and spatial resolution to yield near-surface sensitive XCO2. In a previous publication (Heymann et al., 2012), it has been shown by analysing seven years of SCIAMACHY WFM-DOAS XCO2 (WFMDv2.1) that unaccounted thin cirrus clouds can result in significant errors. In order to enhance the quality of the SCIAMACHY XCO2 data product, we have developed a new version of the retrieval algorithm (WFMDv2.2), which is described in this manuscript. It is based on an improved cloud filtering and correction method using the 1.4 μm strong water vapour absorption and 0.76 μm O2-A bands. The new algorithm has been used to generate a SCIAMACHY XCO2 data set covering the years 2003-2009. The new XCO2 data set has been validated using ground-based observations from the Total Carbon Column Observing Network (TCCON). The validation shows a significant improvement of the new product (v2.2) in comparison to the previous product (v2.1). For example, the standard deviation of the difference to TCCON at Darwin, Australia, has been reduced from 4 ppm to 2 ppm. The monthly regional-scale scatter of the data (defined as the mean intra-monthly standard deviation of all quality filtered XCO2 retrievals within a radius of 350 km around various locations) has also been reduced, typically by a factor of about 1.5. Overall, the validation of the new WFMDv2.2 XCO2 data product can be summarised by a single measurement precision of 3.8 ppm, an estimated regional

  8. SCIAMACHY WFM-DOAS XCO2: reduction of scattering related errors

    NASA Astrophysics Data System (ADS)

    Heymann, J.; Bovensmann, H.; Buchwitz, M.; Burrows, J. P.; Deutscher, N. M.; Notholt, J.; Rettinger, M.; Reuter, M.; Schneising, O.; Sussmann, R.; Warneke, T.

    2012-10-01

    Global observations of column-averaged dry air mole fractions of carbon dioxide (CO2), denoted by XCO2 , retrieved from SCIAMACHY on-board ENVISAT can provide important and missing global information on the distribution and magnitude of regional CO2 surface fluxes. This application has challenging precision and accuracy requirements. In a previous publication (Heymann et al., 2012), it has been shown by analysing seven years of SCIAMACHY WFM-DOAS XCO2 (WFMDv2.1) that unaccounted thin cirrus clouds can result in significant errors. In order to enhance the quality of the SCIAMACHY XCO2 data product, we have developed a new version of the retrieval algorithm (WFMDv2.2), which is described in this manuscript. It is based on an improved cloud filtering and correction method using the 1.4 μm strong water vapour absorption and 0.76 μm O2-A bands. The new algorithm has been used to generate a SCIAMACHY XCO2 data set covering the years 2003-2009. The new XCO2 data set has been validated using ground-based observations from the Total Carbon Column Observing Network (TCCON). The validation shows a significant improvement of the new product (v2.2) in comparison to the previous product (v2.1). For example, the standard deviation of the difference to TCCON at Darwin, Australia, has been reduced from 4 ppm to 2 ppm. The monthly regional-scale scatter of the data (defined as the mean intra-monthly standard deviation of all quality filtered XCO2 retrievals within a radius of 350 km around various locations) has also been reduced, typically by a factor of about 1.5. Overall, the validation of the new WFMDv2.2 XCO2 data product can be summarised by a single measurement precision of 3.8 ppm, an estimated regional-scale (radius of 500 km) precision of monthly averages of 1.6 ppm and an estimated regional-scale relative accuracy of 0.8 ppm. In addition to the comparison with the limited number of TCCON sites, we also present a comparison with NOAA's global CO2 modelling and
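
    The "monthly regional-scale scatter" defined above (mean intra-monthly standard deviation of quality-filtered retrievals) can be sketched as follows. This assumes the 350 km spatial selection has already been applied; the (month, XCO2) records are hypothetical:

```python
import math
from collections import defaultdict

def intra_monthly_scatter(records):
    """Mean of the per-month sample standard deviations of XCO2
    retrievals, where records is a list of (month, xco2) pairs.
    A simplified sketch of the scatter metric defined in the
    abstract; the 350 km radius filter is assumed applied upstream."""
    by_month = defaultdict(list)
    for month, x in records:
        by_month[month].append(x)
    stds = []
    for vals in by_month.values():
        if len(vals) < 2:
            continue  # a single retrieval gives no scatter estimate
        m = sum(vals) / len(vals)
        stds.append(math.sqrt(sum((v - m) ** 2 for v in vals) / (len(vals) - 1)))
    return sum(stds) / len(stds)

# Two months of hypothetical retrievals (ppm)
scatter = intra_monthly_scatter([(1, 1.0), (1, 3.0), (2, 2.0), (2, 4.0)])
```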

  9. Age-Related Differences in the Accuracy of Web Query-Based Predictions of Influenza-Like Illness

    PubMed Central

    Domnich, Alexander; Panatto, Donatella; Signori, Alessio; Lai, Piero Luigi; Gasparini, Roberto; Amicizia, Daniela

    2015-01-01

    Background Web queries are now widely used for modeling, nowcasting and forecasting influenza-like illness (ILI). However, given that ILI attack rates vary significantly across ages, in terms of both magnitude and timing, little is known about whether the association between ILI morbidity and ILI-related queries is comparable across different age-groups. The present study aimed to investigate features of the association between ILI morbidity and ILI-related query volume from the perspective of age. Methods Since Google Flu Trends is unavailable in Italy, Google Trends was used to identify entry terms that correlated highly with official ILI surveillance data. All-age and age-class-specific modeling was performed by means of linear models with generalized least-square estimation. Hold-out validation was used to quantify prediction accuracy. For purposes of comparison, predictions generated by exponential smoothing were computed. Results Five search terms showed high correlation coefficients of > .6. In comparison with exponential smoothing, the all-age query-based model correctly predicted the peak time and yielded a higher correlation coefficient with observed ILI morbidity (.978 vs. .929). However, query-based prediction of ILI morbidity was associated with a greater error. Age-class-specific query-based models varied significantly in terms of prediction accuracy. In the 0–4 and 25–44-year age-groups, these did well and outperformed exponential smoothing predictions; in the 15–24 and ≥ 65-year age-classes, however, the query-based models were inaccurate and highly overestimated peak height. In all but one age-class, peak timing predicted by the query-based models coincided with observed timing. Conclusions The accuracy of web query-based models in predicting ILI morbidity rates could differ among ages. Greater age-specific detail may be useful in flu query-based studies in order to account for age-specific features of the epidemiology of ILI. PMID:26011418
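
    Identifying entry terms that "correlated highly" with surveillance data reduces to computing a correlation coefficient between a weekly query-volume series and official ILI morbidity. A plain Pearson sketch (the study's generalized least-squares model fitting is not reproduced here):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length
    series, e.g. query volume vs observed ILI morbidity on a
    hold-out period. A generic sketch, not the study's GLS model."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)
```

    Terms would be retained when, for example, the coefficient exceeds the .6 threshold mentioned in the Results.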

  10. Error-Related Brain Activity in Young Children: Associations with Parental Anxiety and Child Temperamental Negative Emotionality

    ERIC Educational Resources Information Center

    Torpey, Dana C.; Hajcak, Greg; Kim, Jiyon; Kujawa, Autumn J.; Dyson, Margaret W.; Olino, Thomas M.; Klein, Daniel N.

    2013-01-01

    Background: There is increasing interest in error-related brain activity in anxiety disorders. The error-related negativity (ERN) is a negative deflection in the event-related potential approximately 50 [milliseconds] after errors compared to correct responses. Recent studies suggest that the ERN may be a biomarker for anxiety, as it is positively…

  11. Spatial and Temporal Characteristics of Error-Related Activity in the Human Brain

    PubMed Central

    Miezin, Francis M.; Nelson, Steven M.; Dubis, Joseph W.; Dosenbach, Nico U.F.; Schlaggar, Bradley L.; Petersen, Steven E.

    2015-01-01

    A number of studies have focused on the role of specific brain regions, such as the dorsal anterior cingulate cortex during trials on which participants make errors, whereas others have implicated a host of more widely distributed regions in the human brain. Previous work has proposed that there are multiple cognitive control networks, raising the question of whether error-related activity can be found in each of these networks. Thus, to examine error-related activity broadly, we conducted a meta-analysis consisting of 12 tasks that included both error and correct trials. These tasks varied by stimulus input (visual, auditory), response output (button press, speech), stimulus category (words, pictures), and task type (e.g., recognition memory, mental rotation). We identified 41 brain regions that showed a differential fMRI BOLD response to error and correct trials across a majority of tasks. These regions displayed three unique response profiles: (1) fast, (2) prolonged, and (3) a delayed response to errors, as well as a more canonical response to correct trials. These regions were found mostly in several control networks, each network predominantly displaying one response profile. The one exception to this “one network, one response profile” observation is the frontoparietal network, which showed prolonged response profiles (all in the right hemisphere), and fast profiles (all but one in the left hemisphere). We suggest that, in the place of a single localized error mechanism, these findings point to a large-scale set of error-related regions across multiple systems that likely subserve different functions. PMID:25568119

  12. Senior High School Students' Errors on the Use of Relative Words

    ERIC Educational Resources Information Center

    Bao, Xiaoli

    2015-01-01

    Relative clause is one of the most important language points in College English Examination. Teachers have been attaching great importance to the teaching of relative clause, but the outcomes are not satisfactory. Based on Error Analysis theory, this article aims to explore the reasons why senior high school students find it difficult to choose…

  13. Error-Related Electrocortical Responses in 10-Year-Old Children and Young Adults

    ERIC Educational Resources Information Center

    Santesso, Diane L.; Segalowitz, Sidney J.; Schmidt, Louis A.

    2006-01-01

    Recent anatomical and electrophysiological evidence suggests that the anterior cingulate cortex (ACC) is relatively late to mature. This brain region appears to be critical for monitoring, evaluating, and adjusting ongoing behaviors. This monitoring elicits characteristic ERP components including the error-related negativity (ERN), error…

  14. Spatial reconstruction by patients with hippocampal damage is dominated by relational memory errors.

    PubMed

    Watson, Patrick D; Voss, Joel L; Warren, David E; Tranel, Daniel; Cohen, Neal J

    2013-07-01

    Hippocampal damage causes profound yet circumscribed memory impairment across diverse stimulus types and testing formats. Here, within a single test format involving a single class of stimuli, we identified different performance errors to better characterize the specifics of the underlying deficit. The task involved study and reconstruction of object arrays across brief retention intervals. The most striking feature of the performance of patients with hippocampal damage was that they tended to reverse the relative positions of item pairs within arrays of any size, effectively "swapping" pairs of objects. These "swap errors" were the primary error type in amnesia, almost never occurred in healthy comparison participants, and actually contributed to poor performance on more traditional metrics (such as distance between studied and reconstructed location). Patients made swap errors even in trials involving only a single pair of objects. The selectivity and severity of this particular deficit create serious challenges for theories of memory and the hippocampus.

  15. Dysfunctional error-related processing in incarcerated youth with elevated psychopathic traits

    PubMed Central

    Maurer, J. Michael; Steele, Vaughn R.; Cope, Lora M.; Vincent, Gina M.; Stephen, Julia M.; Calhoun, Vince D.; Kiehl, Kent A.

    2016-01-01

    Adult psychopathic offenders show an increased propensity towards violence, impulsivity, and recidivism. A subsample of youth with elevated psychopathic traits represents a particularly severe subgroup characterized by extreme behavioral problems and neurocognitive deficits comparable to those of their adult counterparts, including perseveration deficits. Here, we investigate response-locked event-related potential (ERP) components (the error-related negativity [ERN/Ne] related to early error-monitoring processing and the error-related positivity [Pe] involved in later error-related processing) in a sample of incarcerated juvenile male offenders (n = 100) who performed a response inhibition Go/NoGo task. Psychopathic traits were assessed using the Hare Psychopathy Checklist: Youth Version (PCL:YV). The ERN/Ne and Pe were analyzed with classic windowed ERP components and principal component analysis (PCA). Using linear regression analyses, PCL:YV scores were unrelated to the ERN/Ne, but were negatively related to Pe mean amplitude. Specifically, the PCL:YV Facet 4 subscale reflecting antisocial traits emerged as a significant predictor of reduced amplitude of a subcomponent underlying the Pe identified with PCA. This is the first evidence to suggest a negative relationship between adolescent psychopathy scores and Pe mean amplitude. PMID:26930170

  16. Age-Related Reduction of the Confidence-Accuracy Relationship in Episodic Memory: Effects of Recollection Quality and Retrieval Monitoring

    PubMed Central

    Wong, Jessica T.; Cramer, Stefanie J.; Gallo, David A.

    2012-01-01

    We investigated age-related reductions in episodic metamemory accuracy. Participants studied pictures and words in different colors, and then took forced-choice recollection tests. These tests required recollection of the earlier presentation color, holding familiarity of the response options constant. Metamemory accuracy was assessed for each participant by comparing recollection test accuracy to corresponding confidence judgments. We found that recollection test accuracy was greater in younger than older adults, and also for pictures than font color. Metamemory accuracy tracked each of these recollection differences, as well as individual differences in recollection test accuracy within each age group, suggesting that recollection ability affects metamemory accuracy. Critically, the age-related impairment in metamemory accuracy persisted even when the groups were matched on recollection test accuracy, suggesting that metamemory declines were not entirely due to differences in recollection frequency or quantity, but that differences in recollection quality and/or monitoring also played a role. We also found that age-related impairments in recollection and metamemory accuracy were equivalent for pictures and font colors. This result contrasted with previous false recognition findings, which predicted that older adults would be differentially impaired when monitoring memory for less distinctive memories. These and other results suggest that age-related reductions in metamemory accuracy are not entirely attributable to false recognition effects, but also depend heavily on deficient recollection and/or monitoring of specific details associated with studied stimuli. PMID:22449027

  17. Investigation of technology needs for avoiding helicopter pilot error related accidents

    NASA Technical Reports Server (NTRS)

    Chais, R. I.; Simpson, W. E.

    1985-01-01

    Pilot error, which is cited as a cause or related factor in most rotorcraft accidents, was examined. Pilot-error-related helicopter accidents were investigated to identify areas in which new technology could reduce or eliminate the underlying causes of these human errors. The aircraft accident data base at the U.S. Army Safety Center served as the source of data on helicopter accidents. A randomly selected sample of 110 aircraft records was analyzed on a case-by-case basis to assess the nature of the problems which need to be resolved and the applicable technology implications. Six technology areas in which there appears to be a need for new or increased emphasis are identified.

  18. An analytic approach to the relation between GPS attitude determination accuracy and antenna configuration geometry

    NASA Astrophysics Data System (ADS)

    Kozlov, Alexander; Nikulin, Alexei

    2017-01-01

    The reliability and accuracy of GPS attitude determination are still the main relevant theoretical questions in this field of study. While the former derives from the probabilistic nature of phase ambiguity resolution algorithms, outlier measurement detection, and the effectiveness of multipath reduction, the latter is additionally affected by the geometric properties of the GNSS antenna configuration. Though trivial in a two-antenna system, the relation between GPS attitude determination accuracy and antenna spatial layout becomes much less intuitive for multi-antenna configurations, and seems to have been examined analytically in only some specific cases. For example, most research papers in the field use Euler angles as the attitude representation, which is singular in some cases, and consider no more than four antennas. We present some further investigation in this area.

  19. Experimental violation and reformulation of the Heisenberg's error-disturbance uncertainty relation

    NASA Astrophysics Data System (ADS)

    Baek, So-Young; Kaneda, Fumihiro; Ozawa, Masanao; Edamatsu, Keiichi

    2013-07-01

    The uncertainty principle formulated by Heisenberg in 1927 describes a trade-off between the error of a measurement of one observable and the disturbance caused on another complementary observable, such that their product should be no less than the limit set by Planck's constant. However, in 1988 Ozawa presented a model of position measurement that breaks Heisenberg's relation, and in 2003 he derived an alternative error-disturbance relation that was proven universally valid. Here, we report an experimental test of Ozawa's relation for a single-photon polarization qubit, exploiting a more general class of quantum measurements than the class of projective measurements. The test is carried out with linear optical devices and realizes an indirect measurement model that breaks Heisenberg's relation throughout the range of our experimental parameter and yet validates Ozawa's relation.

  20. EEG-based decoding of error-related brain activity in a real-world driving task

    NASA Astrophysics Data System (ADS)

    Zhang, H.; Chavarriaga, R.; Khaliliardali, Z.; Gheorghe, L.; Iturrate, I.; Millán, J. d. R.

    2015-12-01

    Objectives. Recent studies have started to explore the implementation of brain-computer interfaces (BCI) as part of driving assistant systems. The current study presents an EEG-based BCI that decodes error-related brain activity. Such information can be used, e.g., to predict the driver’s intended turning direction before reaching road intersections. Approach. We executed experiments in a car simulator (N = 22) and a real car (N = 8). While the subject was driving, a directional cue was shown before reaching an intersection, and we classified the presence or absence of error-related potentials in the EEG to infer whether the cued direction coincided with the subject’s intention. In this protocol, the directional cue can correspond to an estimation of the driving direction provided by a driving assistance system. We analyzed ERPs elicited during normal driving and evaluated the classification performance in both offline and online tests. Results. An average classification accuracy of 0.698 ± 0.065 was obtained in offline experiments in the car simulator, while tests in the real car yielded a performance of 0.682 ± 0.059. The results were significantly higher than chance level for all cases. Online experiments led to equivalent performances in both simulated and real car driving experiments. These results support the feasibility of decoding these signals to help estimate whether the driver’s intention coincides with the advice provided by the driving assistant in a real car. Significance. The study demonstrates a BCI system in real-world driving, extending the work from previous simulated studies. As far as we know, this is the first online study decoding the driver’s error-related brain activity in a real car. Given the encouraging results, the paradigm could be further improved by using more sophisticated machine learning approaches and possibly be combined with applications in intelligent vehicles.
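
    Whether a classification accuracy such as 0.698 is "significantly higher than chance level" can be checked with a one-sided binomial test against the chance accuracy. A generic significance sketch, not the statistics reported in the paper:

```python
import math

def binom_p_above_chance(correct, total, chance=0.5):
    """One-sided binomial test: probability of observing at least
    `correct` successes out of `total` trials if true accuracy were
    `chance`. A generic sketch, not the paper's statistical test."""
    p = 0.0
    for k in range(correct, total + 1):
        p += math.comb(total, k) * chance ** k * (1 - chance) ** (total - k)
    return p
```

    For example, 10 correct classifications out of 10 trials under a 0.5 chance level yields p = 0.5**10, i.e. well below 0.05.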

  1. Experimental Test of Error-Disturbance Uncertainty Relations by Weak Measurement

    NASA Astrophysics Data System (ADS)

    Kaneda, Fumihiro; Baek, So-Young; Ozawa, Masanao; Edamatsu, Keiichi

    2014-01-01

    We experimentally test the error-disturbance uncertainty relation (EDR) in generalized, strength-variable measurement of a single photon polarization qubit, making use of weak measurement that keeps the initial signal state practically unchanged. We demonstrate that the Heisenberg EDR is violated, yet the Ozawa and Branciard EDRs are valid throughout the range of our measurement strength.

  2. Relative and Absolute Error Control in a Finite-Difference Method Solution of Poisson's Equation

    ERIC Educational Resources Information Center

    Prentice, J. S. C.

    2012-01-01

    An algorithm for error control (absolute and relative) in the five-point finite-difference method applied to Poisson's equation is described. The algorithm is based on discretization of the domain of the problem by means of three rectilinear grids, each of different resolution. We discuss some hardware limitations associated with the algorithm,…
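
    A combined absolute/relative error criterion of the kind the abstract refers to can be sketched as a simple acceptance test used when deciding whether a finer grid is needed. The tolerance values below are hypothetical, not those of the cited algorithm:

```python
def within_tolerance(approx, exact, abs_tol=1e-8, rel_tol=1e-6):
    """Accept an approximation when its error is below an absolute
    tolerance OR below a relative tolerance scaled by the exact value.
    The absolute branch guards against division-by-zero when the exact
    value vanishes. Hypothetical tolerances, for illustration only."""
    err = abs(approx - exact)
    return err <= abs_tol or err <= rel_tol * abs(exact)
```

    Combining both criteria is the standard way to keep relative control where the solution is large while avoiding a blow-up of the relative test near zeros of the solution.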

  3. RapidEye constellation relative radiometric accuracy measurement using lunar images

    NASA Astrophysics Data System (ADS)

    Steyn, Joe; Tyc, George; Beckett, Keith; Hashida, Yoshi

    2009-09-01

    The RapidEye constellation includes five identical satellites in Low Earth Orbit (LEO). Each satellite has a 5-band (blue, green, red, red-edge and near infrared (NIR)) multispectral imager at 6.5m GSD. A three-axis attitude control system allows pointing the imager of each satellite at the Moon during lunations. It is therefore possible to image the Moon from near-identical viewing geometry within a span of 80 minutes with each one of the imagers. Comparing the radiometrically corrected images obtained from each band and each satellite allows a near-instantaneous relative radiometric accuracy measurement and determination of relative gain changes between the five imagers. A more traditional terrestrial vicarious radiometric calibration program has also been completed by MDA on RapidEye. The two components of this program provide for spatial radiometric calibration ensuring that detector-to-detector response remains flat, while a temporal radiometric calibration approach has accumulated images of specific dry desert calibration sites. These images are used to measure the constellation relative radiometric response and make on-ground gain and offset adjustments in order to maintain the relative accuracy of the constellation within +/-2.5%. A quantitative comparison between the gain changes measured by the lunar method and the terrestrial temporal radiometric calibration method has been performed and will be presented.
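
    The lunar cross-comparison boils down to estimating each imager's gain relative to a reference imager from near-simultaneous views of the same scene, then checking it against the +/-2.5% requirement. A deliberately simplified sketch (mean-signal ratio), not MDA's calibration pipeline:

```python
def relative_gain(dn_ref, dn_test):
    """Relative gain of a test imager with respect to a reference
    imager, estimated as the ratio of mean radiometrically corrected
    signals over a common (e.g. lunar) scene. A simplified sketch."""
    return (sum(dn_test) / len(dn_test)) / (sum(dn_ref) / len(dn_ref))

def within_spec(gain, spec=0.025):
    """Check a gain ratio against the +/-2.5% relative accuracy spec."""
    return abs(gain - 1.0) <= spec

# Hypothetical mean signals from two imagers viewing the same scene
g = relative_gain([100.0, 100.0], [102.0, 102.0])  # 2% brighter
```

    Gains drifting outside the band would then trigger the on-ground gain and offset adjustments described in the abstract.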

  4. Strategies for nurses to prevent sleep-related injuries and errors.

    PubMed

    Caruso, Claire C; Hitchcock, Edward M

    2010-01-01

    Rehabilitation nurses work shift schedules or long hours to provide essential patient services around the clock. These demanding hours can lead to sleep difficulties, declines in performance, and increased worker errors. This article gives an overview of selected declines in cognitive performance that are associated with inadequate sleep and several factors that increase risk for fatigue-related errors. Selected strategies for nurses and managers to reduce these risks are discussed, such as better sleep practices, improved work schedule design, naps, caffeine, exposure to light, and rest breaks. Both nurses and managers share responsibility for implementing strategies to reduce risks from inadequate sleep.

  5. Pitch discrimination accuracy in musicians vs nonmusicians: an event-related potential and behavioral study.

    PubMed

    Tervaniemi, Mari; Just, Viola; Koelsch, Stefan; Widmann, Andreas; Schröger, Erich

    2005-02-01

    Previously, professional violin players were found to automatically discriminate tiny pitch changes that were not discriminable by nonmusicians. The present study addressed pitch processing accuracy in musicians with expertise in playing a wide selection of instruments (e.g., piano; wind and string instruments). Of specific interest was whether musicians with such divergent backgrounds also show facilitated accuracy at automatic and/or attentive levels of auditory processing. Thirteen professional musicians and 13 nonmusicians were presented with frequent standard sounds and rare deviant sounds (0.8, 2, or 4% higher in frequency). Auditory event-related potentials evoked by these sounds were recorded first while the subjects read a self-chosen book and then while they behaviorally indicated the detection of sounds with deviant frequency. Musicians detected the pitch changes faster and more accurately than nonmusicians. The N2b and P3 responses recorded during attentive listening had larger amplitude in musicians than in nonmusicians. Interestingly, the superiority in pitch discrimination accuracy of musicians over nonmusicians was observed not only with the 0.8% but also with the 2% frequency changes. Moreover, even nonmusicians detected the smallest pitch changes of 0.8% quite reliably. However, the mismatch negativity (MMN) and P3a recorded during the reading condition did not differentiate musicians and nonmusicians. These results suggest that musical expertise may exert its effects mainly at attentive levels of processing and not necessarily already at preattentive levels.

  6. Error-related brain activity reveals self-centric motivation: culture matters.

    PubMed

    Kitayama, Shinobu; Park, Jiyoung

    2014-02-01

    To secure the interest of the personal self (vs. social others) is considered a fundamental human motive, but the nature of the motivation to secure the self-interest is not well understood. To address this issue, we assessed electrocortical responses of European Americans and Asians as they performed a flanker task while instructed to earn as many reward points as possible either for the self or for their same-sex friend. For European Americans, error-related negativity (ERN), an event-related-potential component contingent on error responses, was significantly greater in the self condition than in the friend condition. Moreover, post-error slowing, an index of cognitive control to reduce errors, was observed in the self condition but not in the friend condition. Neither of these self-centric effects was observed among Asians, consistent with prior cross-cultural behavioral evidence. Interdependent self-construal mediated the effect of culture on the ERN self-centric effect. Our findings provide the first evidence for a neural correlate of self-centric motivation, which becomes more salient outside of interdependent social relations.

  7. Error-related brain activity in extraverts: evidence for altered response monitoring in social context.

    PubMed

    Fishman, Inna; Ng, Rowena

    2013-04-01

    While the personality trait of extraversion has been linked to enhanced reward sensitivity and its putative neural correlates, little is known about whether extraverts' neural circuits are particularly sensitive to social rewards, given their preference for social engagement and social interactions. Using event-related potentials (ERPs), this study examined the relationship between the variation on the extraversion spectrum and a feedback-related ERP component (the error-related negativity or ERN) known to be sensitive to the value placed on errors and reward. Participants completed a forced-choice task in which either rewarding or punitive feedback regarding their performance was provided, through either a social (facial expressions) or non-social (verbal written) mode. The ERNs elicited by error trials in the social, but not the non-social, blocks were found to be associated with the extent of one's extraversion. However, the direction of the effect was opposite to the original prediction: extraverts exhibited smaller ERNs than introverts during social blocks, whereas all participants produced similar ERNs in the non-social, verbal feedback condition. This finding suggests that extraverts exhibit diminished engagement in response monitoring, or find errors to be less salient, in the context of social feedback, perhaps because they find social contexts more predictable and thus more pleasant and less anxiety provoking.

  8. Error-related ERP components and individual differences in punishment and reward sensitivity.

    PubMed

    Boksem, Maarten A S; Tops, Mattie; Wester, Anne E; Meijman, Theo F; Lorist, Monicque M

    2006-07-26

    Although the focus of the discussion regarding the significance of the error-related negativity (ERN/Ne) has been on the cognitive factors reflected in this component, there is now a growing body of research that describes influences of motivation, affective style and other factors of personality on ERN/Ne amplitude. The present study was conducted to further evaluate the relationship between affective style, error-related ERP components and their neural basis. Therefore, we had our subjects fill out the Behavioral Activation System/Behavioral Inhibition System (BIS/BAS) scales, which are based on Gray's (1987, 1989) biopsychological theory of personality. We found that subjects scoring high on the BIS scale displayed larger ERN/Ne amplitudes, while subjects scoring high on the BAS scale displayed larger error positivity (Pe) amplitudes. No correlations were found between BIS and Pe amplitude or between BAS and ERN/Ne amplitude. Results are discussed in terms of individual differences in reward and punishment sensitivity that are reflected in error-related ERP components.

  9. 26 CFR 1.6664-1 - Accuracy-related and fraud penalties; definitions, effective date and special rules.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 13 2010-04-01 2010-04-01 false Accuracy-related and fraud penalties... SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES Additions to the Tax, Additional Amounts, and Assessable Penalties § 1.6664-1 Accuracy-related and fraud penalties;...

  10. The modulating effect of personality traits on neural error monitoring: evidence from event-related FMRI.

    PubMed

    Sosic-Vasic, Zrinka; Ulrich, Martin; Ruchsow, Martin; Vasic, Nenad; Grön, Georg

    2012-01-01

    The present study investigated the association between traits of the Five Factor Model of Personality (Neuroticism, Extraversion, Openness to Experience, Agreeableness, and Conscientiousness) and neural correlates of error monitoring obtained from a combined Eriksen-Flanker-Go/NoGo task during event-related functional magnetic resonance imaging in 27 healthy subjects. Individual expressions of personality traits were measured using the NEO-PI-R questionnaire. Conscientiousness correlated positively with error signaling in the left inferior frontal gyrus and adjacent anterior insula (IFG/aI). A second strong positive correlation was observed in the anterior cingulate gyrus (ACC). Neuroticism was negatively correlated with error signaling in the inferior frontal cortex, possibly reflecting the negative inter-correlation between both scales observed on the behavioral level. Under the present statistical thresholds, no significant results were obtained for the remaining scales. Aligning the personality trait of Conscientiousness with striving for task accomplishment, the correlation in the left IFG/aI possibly reflects inter-individually different involvement whenever task-set-related memory representations are violated by the occurrence of errors. The strong correlations in the ACC may indicate that more conscientious subjects were more strongly affected by these violations of a given task-set, expressed by individually different, negatively valenced signals conveyed by the ACC upon occurrence of an error. The present results illustrate that underlying personality traits should be taken into account when predicting individual responses to errors, and also lend external validity to the personality trait approach, suggesting that personality constructs reflect more than mere descriptive taxonomies.

  11. The Relative Effectiveness of Signaling Systems: Relying on External Items Reduces Signaling Accuracy while Leks Increase Accuracy

    PubMed Central

    Leighton, Gavin M.

    2014-01-01

    Multiple evolutionary phenomena require individual animals to assess conspecifics based on behaviors, morphology, or both. Both behavior and morphology can provide information about individuals and are often used as signals to convey information about quality, motivation, or energetic output. In certain cases, conspecific receivers of this information must rank these signaling individuals based on specific traits. The efficacy of information transfer associated with a signal is likely related to the type of trait used to signal, though few studies have investigated the relative effectiveness of contrasting signaling systems. I present a set of models that represent a large portion of signaling systems and compare them in terms of the ability of receivers to rank signalers accurately. Receivers more accurately assess signalers if the signalers use traits that do not require non-food resources; similarly, receivers more accurately ranked signalers if all the signalers could be observed simultaneously, similar to leks. Surprisingly, I also found that receivers are only slightly better at ranking signaler effort if the effort results in a cumulative structure. This series of findings suggests that receivers may attend to specific traits because those traits provide more information relative to others; similarly, these results may explain the preponderance of morphological and behavioral display signals. PMID:24626221

  12. Rapid mapping of volumetric errors

    SciTech Connect

    Krulewich, D.; Hale, L.; Yordy, D.

    1995-09-13

    This paper describes a relatively inexpensive, fast, and easy to execute approach to mapping the volumetric errors of a machine tool, coordinate measuring machine, or robot. An error map is used to characterize a machine or to improve its accuracy by compensating for the systematic errors. The method consists of three steps: (1) modeling the relationship between the volumetric error and the current state of the machine; (2) acquiring error data based on length measurements throughout the work volume; and (3) optimizing the model to the particular machine.
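
    The three-step procedure described above can be illustrated with a toy version of step (3): fitting a purely hypothetical linear error model (one unknown scale error per axis) to length measurements by least squares. The model, the numbers, and the variable names are illustrative assumptions, not the authors' actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear error model: each axis has an unknown scale error s_i,
# so a commanded point p maps to the actual point p * (1 + s) component-wise.
true_scale = np.array([2e-4, -1e-4, 3e-4])   # "systematic errors" to recover

# Step (2): length measurements between commanded endpoint pairs (mm)
a = rng.uniform(0.0, 500.0, size=(200, 3))
b = rng.uniform(0.0, 500.0, size=(200, 3))
L_nom = np.linalg.norm(a - b, axis=1)                        # commanded lengths
L_meas = np.linalg.norm((a - b) * (1 + true_scale), axis=1)  # "measured" lengths

# Step (3): linearized model  L_meas - L_nom ~ sum_i s_i (a_i - b_i)^2 / L_nom,
# solved for the per-axis scale errors by least squares.
J = (a - b) ** 2 / L_nom[:, None]
s_hat, *_ = np.linalg.lstsq(J, L_meas - L_nom, rcond=None)
print(s_hat)   # recovers true_scale to within the linearization error
```

A real volumetric map would use a richer kinematic model (straightness, squareness, angular terms), but the fitting step has the same least-squares structure.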

  13. Mediofrontal event-related potentials in response to positive, negative and unsigned prediction errors.

    PubMed

    Sambrook, Thomas D; Goslin, Jeremy

    2014-08-01

    Reinforcement learning models make use of reward prediction errors (RPEs), the difference between an expected and obtained reward. There is evidence that the brain computes RPEs, but an outstanding question is whether positive RPEs ("better than expected") and negative RPEs ("worse than expected") are represented in a single integrated system. An electrophysiological component, the feedback-related negativity, has been claimed to encode an RPE, but its relative sensitivity to the utility of positive and negative RPEs remains unclear. This study explored the question by varying the utility of positive and negative RPEs in a design that controlled for other closely related properties of feedback and could distinguish utility from salience. It revealed a mediofrontal sensitivity to utility, for positive RPEs at 275-310 ms and for negative RPEs at 310-390 ms. These effects were preceded and succeeded by a response consistent with an unsigned prediction error, or "salience" coding.

  14. Retrieval of relative humidity profiles and its associated error from Megha-Tropiques measurements

    NASA Astrophysics Data System (ADS)

    Sivira, R.; Brogniez, H.; Mallet, C.; Oussar, Y.

    2013-05-01

    The combination of the two microwave radiometers, SAPHIR and MADRAS, on board the Megha-Tropiques platform is explored to define a retrieval method that estimates not only the relative humidity profile but also the associated confidence intervals. A comparison of three retrieval models was performed under equal conditions of input and output data sets, through their statistical values (error variance, correlation coefficient, and error mean), yielding a profile of relative humidity in seven layers. The three models show the same behavior across layers, with mid-tropospheric layers reaching the best statistical values, suggesting a model-independent problem. Finally, the study of the probability density function of the relative humidity at a given atmospheric pressure gives further insight into the confidence intervals.
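
    The per-layer statistics used in such a comparison (error mean, error variance, correlation coefficient) can be sketched on synthetic retrievals; the bias and noise levels below are illustrative assumptions, not Megha-Tropiques values.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical retrieved vs. reference relative humidity (%RH) for one layer:
# assume a +2 %RH bias and 8 %RH random noise on the retrieval.
truth = rng.uniform(10.0, 90.0, size=500)
retrieved = truth + rng.normal(2.0, 8.0, size=500)

err = retrieved - truth
stats = {
    "error mean": err.mean(),
    "error variance": err.var(ddof=1),
    "correlation": np.corrcoef(truth, retrieved)[0, 1],
}
for name, value in stats.items():
    print(f"{name}: {value:.2f}")
```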

  15. Cortical delta activity reflects reward prediction error and related behavioral adjustments, but at different times.

    PubMed

    Cavanagh, James F

    2015-04-15

    Recent work has suggested that reward prediction errors elicit a positive voltage deflection in the scalp-recorded electroencephalogram (EEG); an event sometimes termed a reward positivity. However, a strong test of this proposed relationship remains to be defined. Other important questions remain unaddressed: such as the role of the reward positivity in predicting future behavioral adjustments that maximize reward. To answer these questions, a three-armed bandit task was used to investigate the role of positive prediction errors during trial-by-trial exploration and task-set based exploitation. The feedback-locked reward positivity was characterized by delta band activities, and these related EEG features scaled with the degree of a computationally derived positive prediction error. However, these phenomena were also dissociated: the computational model predicted exploitative action selection and related response time speeding whereas the feedback-locked EEG features did not. Compellingly, delta band dynamics time-locked to the subsequent bandit (the P3) successfully predicted these behaviors. These bandit-locked findings included an enhanced parietal to motor cortex delta phase lag that correlated with the degree of response time speeding, suggesting a mechanistic role for delta band activities in motivating action selection. This dissociation in feedback vs. bandit locked EEG signals is interpreted as a differentiation in hierarchically distinct types of prediction error, yielding novel predictions about these dissociable delta band phenomena during reinforcement learning and decision making.

  16. Neurophysiology of Reward-Guided Behavior: Correlates Related to Predictions, Value, Motivation, Errors, Attention, and Action.

    PubMed

    Bissonette, Gregory B; Roesch, Matthew R

    2016-01-01

    Many brain areas are activated by the possibility and receipt of reward. Are all of these brain areas reporting the same information about reward? Or are these signals related to other functions that accompany reward-guided learning and decision-making? Through carefully controlled behavioral studies, it has been shown that reward-related activity can represent reward expectations related to future outcomes, errors in those expectations, motivation, and signals related to goal- and habit-driven behaviors. These dissociations have been accomplished by manipulating the predictability of positively and negatively valued events. Here, we review single neuron recordings in behaving animals that have addressed this issue. We describe data showing that several brain areas, including orbitofrontal cortex, anterior cingulate, and basolateral amygdala signal reward prediction. In addition, anterior cingulate, basolateral amygdala, and dopamine neurons also signal errors in reward prediction, but in different ways. For these areas, we will describe how unexpected manipulations of positive and negative value can dissociate signed from unsigned reward prediction errors. All of these signals feed into striatum to modify signals that motivate behavior in ventral striatum and guide responding via associative encoding in dorsolateral striatum.

  17. Neurophysiology of Reward-Guided Behavior: Correlates Related to Predictions, Value, Motivation, Errors, Attention, and Action

    PubMed Central

    Roesch, Matthew R.

    2017-01-01

    Many brain areas are activated by the possibility and receipt of reward. Are all of these brain areas reporting the same information about reward? Or are these signals related to other functions that accompany reward-guided learning and decision-making? Through carefully controlled behavioral studies, it has been shown that reward-related activity can represent reward expectations related to future outcomes, errors in those expectations, motivation, and signals related to goal- and habit-driven behaviors. These dissociations have been accomplished by manipulating the predictability of positively and negatively valued events. Here, we review single neuron recordings in behaving animals that have addressed this issue. We describe data showing that several brain areas, including orbitofrontal cortex, anterior cingulate, and basolateral amygdala signal reward prediction. In addition, anterior cingulate, basolateral amygdala, and dopamine neurons also signal errors in reward prediction, but in different ways. For these areas, we will describe how unexpected manipulations of positive and negative value can dissociate signed from unsigned reward prediction errors. All of these signals feed into striatum to modify signals that motivate behavior in ventral striatum and guide responding via associative encoding in dorsolateral striatum. PMID:26276036

  18. Order of accuracy of QUICK and related convection-diffusion schemes

    NASA Technical Reports Server (NTRS)

    Leonard, B. P.

    1993-01-01

    This report attempts to correct some misunderstandings that have appeared in the literature concerning the order of accuracy of the QUICK scheme for steady-state convective modeling. Other related convection-diffusion schemes are also considered. The original one-dimensional QUICK scheme written in terms of nodal-point values of the convected variable (with a 1/8-factor multiplying the 'curvature' term) is indeed a third-order representation of the finite volume formulation of the convection operator average across the control volume, written naturally in flux-difference form. An alternative single-point upwind difference scheme (SPUDS) using node values (with a 1/6-factor) is a third-order representation of the finite difference single-point formulation; this can be written in a pseudo-flux difference form. These are both third-order convection schemes; however, the QUICK finite volume convection operator is 33 percent more accurate than the single-point implementation of SPUDS. Another finite volume scheme, writing convective fluxes in terms of cell-average values, requires a 1/6-factor for third-order accuracy. For completeness, one can also write a single-point formulation of the convective derivative in terms of cell averages, and then express this in pseudo-flux difference form; for third-order accuracy, this requires a curvature factor of 5/24. Diffusion operators are also considered in both single-point and finite volume formulations. Finite volume formulations are found to be significantly more accurate. For example, classical second-order central differencing for the second derivative is exactly twice as accurate in a finite volume formulation as in a single-point formulation.
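
    The third-order behavior of the 1/8-factor QUICK face interpolation can be checked numerically. A minimal sketch, assuming a smooth test function sin(x), verifies that the face-value error falls by roughly a factor of 2^3 = 8 when the spacing is halved:

```python
import numpy as np

def quick_face(phi_u, phi_c, phi_d):
    # QUICK face value for flow from node U through C toward D:
    # linear average minus the 1/8-factor 'curvature' correction.
    return 0.5 * (phi_c + phi_d) - 0.125 * (phi_u - 2.0 * phi_c + phi_d)

def face_error(h, x=0.3):
    f = np.sin
    approx = quick_face(f(x - h), f(x), f(x + h))
    return abs(approx - f(x + h / 2.0))   # face lies midway between C and D

e1, e2 = face_error(0.1), face_error(0.05)
print(e1 / e2)   # ratio close to 8, i.e. third-order convergence
```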

  19. Errare machinale est: the use of error-related potentials in brain-machine interfaces

    PubMed Central

    Chavarriaga, Ricardo; Sobolewski, Aleksander; Millán, José del R.

    2014-01-01

    The ability to recognize errors is crucial for efficient behavior. Numerous studies have identified electrophysiological correlates of error recognition in the human brain (error-related potentials, ErrPs). Consequently, it has been proposed to use these signals to improve human-computer interaction (HCI) or brain-machine interfacing (BMI). Here, we present a review of over a decade of developments toward this goal. This body of work provides consistent evidence that ErrPs can be successfully detected on a single-trial basis, and that they can be effectively used in both HCI and BMI applications. We first describe the ErrP phenomenon and follow up with an analysis of different strategies to increase the robustness of a system by incorporating single-trial ErrP recognition, either by correcting the machine's actions or by providing means for its error-based adaptation. These approaches can be applied both when the user employs traditional HCI input devices or in combination with another BMI channel. Finally, we discuss the current challenges that have to be overcome in order to fully integrate ErrPs into practical applications. This includes, in particular, the characterization of such signals during real(istic) applications, as well as the possibility of extracting richer information from them, going beyond the time-locked decoding that dominates current approaches. PMID:25100937

  20. Software platform for managing the classification of error- related potentials of observers

    NASA Astrophysics Data System (ADS)

    Asvestas, P.; Ventouras, E.-C.; Kostopoulos, S.; Sidiropoulos, K.; Korfiatis, V.; Korda, A.; Uzunolglu, A.; Karanasiou, I.; Kalatzis, I.; Matsopoulos, G.

    2015-09-01

    Human learning is partly based on observation. Electroencephalographic recordings of subjects who perform acts (actors) or observe actors (observers) contain a negative waveform in the Evoked Potentials (EPs) of actors who commit errors and of observers who observe the error-committing actors. This waveform is called the Error-Related Negativity (ERN). Its detection has applications in the context of Brain-Computer Interfaces. The present work describes a software system developed for managing EPs of observers, with the aim of classifying them into observations of either correct or incorrect actions. It consists of an integrated platform for the storage, management, processing, and classification of EPs recorded during error-observation experiments. The system was developed using C# and the following development tools and frameworks: MySQL, .NET Framework, Entity Framework, and Emgu CV, for interfacing with the machine learning library of OpenCV. Up to six features can be computed per EP recording per electrode. The user can select among various feature selection algorithms and then proceed to train one of three types of classifiers: Artificial Neural Networks, Support Vector Machines, or k-nearest neighbour. The trained classifier can then be used to classify any EP curve that has been entered into the database.
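
    As an illustration of the classification step described above (not the platform's actual C# code), a minimal k-nearest-neighbour classifier over hypothetical two-dimensional EP features might look like:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical feature vectors (e.g. ERN amplitude and latency per recording).
# Class 0 = observation of a correct action, class 1 = observation of an error.
X0 = rng.normal([0.0, 0.0], 0.5, size=(40, 2))
X1 = rng.normal([2.0, 2.0], 0.5, size=(40, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 40 + [1] * 40)

def knn_predict(X_train, y_train, x, k=3):
    # Classify x by majority vote among its k nearest training samples.
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(d)[:k]]
    return np.bincount(nearest).argmax()

print(knn_predict(X, y, np.array([1.9, 2.1])))   # 1: an error observation
```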

  1. Operational constraints on state-dependent formulations of quantum error-disturbance trade-off relations

    NASA Astrophysics Data System (ADS)

    Korzekwa, Kamil; Jennings, David; Rudolph, Terry

    2014-05-01

    We argue for an operational requirement that all state-dependent measures of disturbance should satisfy. Motivated by this natural criterion, we prove that in any d-dimensional Hilbert space and for any pair of noncommuting operators, A and B, there exists a set of at least 2d - 1 zero-noise, zero-disturbance (ZNZD) states, for which the first observable can be measured without noise and the second will not be disturbed. Moreover, we show that it is possible to construct such ZNZD states for which the expectation value of the commutator [A,B] does not vanish. Therefore any state-dependent error-disturbance relation, based on the expectation value of the commutator as a lower bound, must violate the operational requirement. We also discuss Ozawa's state-dependent error-disturbance relation in light of our results and show that the disturbance measure used in this relation exhibits unphysical properties. We conclude that the trade-off is inevitable only between state-independent measures of error and disturbance.

  2. Quantifying the relative contributions of lexical and phonological factors to regular past tense accuracy

    PubMed Central

    Owen Van Horne, Amanda J.; Green Fager, Melanie

    2015-01-01

    Purpose Children with specific language impairment (SLI) frequently have difficulty producing the past tense. This study aimed to quantify the relative influence of telicity (i.e., the completedness of an event), verb frequency, and stem-final phonemes on the production of past tense by school-age children with SLI and their typically developing (TD) peers. Method Archival elicited-production data from children with SLI between the ages of 6 and 9 and TD peers ages 4 to 8 were reanalyzed. Past tense accuracy was predicted using measures of telicity, verb frequency, and properties of the final consonant of the verb stem. Results All children were highly accurate when verbs were telic, the inflected form was frequently heard in the past tense, and the word ended in a sonorant/non-alveolar consonant. All children were less accurate when verbs were atelic, rarely heard in the past tense, or ended in a word-final obstruent or alveolar consonant. SLI status depressed overall accuracy rates but did not influence how facilitative a given factor was. Conclusion Some factors that have been believed to be useful only when children are first discovering the past tense, such as telicity, appear to be influential in later years as well. PMID:25879455

  3. Accuracy of relative positioning by interferometry with GPS Double-blind test results

    NASA Technical Reports Server (NTRS)

    Counselman, C. C., III; Gourevitch, S. A.; Herring, T. A.; King, B. W.; Shapiro, I. I.; Cappallo, R. J.; Rogers, A. E. E.; Whitney, A. R.; Greenspan, R. L.; Snyder, R. E.

    1983-01-01

    MITES (Miniature Interferometer Terminals for Earth Surveying) observations conducted on December 17 and 29, 1980, are analyzed. It is noted that the time span of the observations used on each day was 78 minutes, during which five satellites were always above 20 deg elevation. The observations are analyzed to determine the intersite position vectors by means of the algorithm described by Counselman and Gourevitch (1981). The average of the MITES results from the two days is presented. The rms differences between the two determinations of the components of the three vectors, which were about 65, 92, and 124 m long, were 8 mm for the north, 3 mm for the east, and 6 mm for the vertical. It is concluded that, at least for short distances, relative positioning by interferometry with GPS can be done reliably with subcentimeter accuracy.
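
    A between-day RMS comparison of this kind can be sketched as follows, using entirely hypothetical millimetre-level offsets in place of the MITES data:

```python
import numpy as np

# Two hypothetical determinations of the same three baselines on different
# days, expressed as offsets from a reference solution (mm), one row per
# baseline, columns = north, east, vertical.
day1 = np.array([[ 4.0, -1.0,  3.0],
                 [-6.0,  2.0, -5.0],
                 [ 5.0, -2.0,  4.0]])
day2 = np.array([[-4.0,  2.0, -3.0],
                 [ 6.0, -1.0,  5.0],
                 [-5.0,  2.0, -4.0]])

# RMS of the between-day differences, per component (north, east, vertical)
rms = np.sqrt(np.mean((day1 - day2) ** 2, axis=0))
print(rms)
```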

  4. A field calibration method to eliminate the error caused by relative tilt on roll angle measurement

    NASA Astrophysics Data System (ADS)

    Qi, Jingya; Wang, Zhao; Huang, Junhui; Yu, Bao; Gao, Jianmin

    2016-11-01

    The roll angle measurement method based on a heterodyne interferometer is an efficient technique owing to its high precision and immunity to environmental noise. The optical layout is based on a polarization-assisted conversion of the roll angle into an optical phase shift, read by a beam passing through an objective plate actuated by the roll rotation. The measurement sensitivity, or gain coefficient G, is calibrated beforehand. However, a relative tilt between the laser and the objective plate always exists in long-rail field measurements, due to tilt of the laser and roll of the guide. This relative tilt affects the value of G and thus results in roll angle measurement error. In this paper, a field calibration method for G is presented to eliminate the above measurement error. The field calibration layout converts the roll angle into an optical path change (OPC) by means of a rotary table, so the roll angle can be obtained from the OPC read by a two-frequency interferometer. Together with the phase shift, an accurate G can be obtained in field measurement and the measurement error corrected. The optical system of the field calibration method was set up and experimental results are given. Compared against a Renishaw XL-80 used for calibration, the proposed field calibration method obtains an accurate G in field rail roll angle measurement.

  5. Mechanistically-informed damage detection using dynamic measurements: Extended constitutive relation error

    NASA Astrophysics Data System (ADS)

    Hu, X.; Prabhu, S.; Atamturktur, S.; Cogan, S.

    2017-02-01

    Model-based damage detection entails the calibration of damage-indicative parameters in a physics-based computer model of an undamaged structural system against measurements collected from its damaged counterpart. The approach relies on the premise that changes identified in the damage-indicative parameters during calibration reveal the structural damage in the system. In model-based damage detection, model calibration has traditionally been treated as a process operating solely on the model output, without incorporating available knowledge regarding the underlying mechanistic behavior of the structural system. In this paper, the authors propose a novel approach for model-based damage detection by implementing the Extended Constitutive Relation Error (ECRE), a method developed for error localization in finite element models. The ECRE method was originally conceived to identify discrepancies between experimental measurements and model predictions for a structure in a given healthy state. Implementing ECRE for damage detection leads to the evaluation of a structure in varying healthy states and determination of the discrepancy between model predictions and experiments due to damage. The authors developed an ECRE-based damage detection procedure in which the model error and structural damage are identified in two distinct steps, and demonstrate the feasibility of the procedure in identifying the presence, location, and relative severity of damage on a scaled two-story steel frame for damage scenarios of varying type and severity.

  6. Aberrant error processing in relation to symptom severity in obsessive–compulsive disorder: A multimodal neuroimaging study

    PubMed Central

    Agam, Yigal; Greenberg, Jennifer L.; Isom, Marlisa; Falkenstein, Martha J.; Jenike, Eric; Wilhelm, Sabine; Manoach, Dara S.

    2014-01-01

    Background Obsessive–compulsive disorder (OCD) is characterized by maladaptive repetitive behaviors that persist despite feedback. Using multimodal neuroimaging, we tested the hypothesis that this behavioral rigidity reflects impaired use of behavioral outcomes (here, errors) to adaptively adjust responses. We measured both neural responses to errors and adjustments in the subsequent trial to determine whether abnormalities correlate with symptom severity. Since error processing depends on communication between the anterior and the posterior cingulate cortex, we also examined the integrity of the cingulum bundle with diffusion tensor imaging. Methods Participants performed the same antisaccade task during functional MRI and electroencephalography sessions. We measured error-related activation of the anterior cingulate cortex (ACC) and the error-related negativity (ERN). We also examined post-error adjustments, indexed by changes in activation of the default network in trials surrounding errors. Results OCD patients showed intact error-related ACC activation and ERN, but abnormal adjustments in the post- vs. pre-error trial. Relative to controls, who responded to errors by deactivating the default network, OCD patients showed increased default network activation including in the rostral ACC (rACC). Greater rACC activation in the post-error trial correlated with more severe compulsions. Patients also showed increased fractional anisotropy (FA) in the white matter underlying rACC. Conclusions Impaired use of behavioral outcomes to adaptively adjust neural responses may contribute to symptoms in OCD. The rACC locus of abnormal adjustment and relations with symptoms suggests difficulty suppressing emotional responses to aversive, unexpected events (e.g., errors). Increased structural connectivity of this paralimbic default network region may contribute to this impairment. PMID:25057466

  7. Errors Related to Medication Reconciliation: A Prospective Study in Patients Admitted to the Post CCU.

    PubMed

    Haji Aghajani, Mohammad; Ghazaeian, Monireh; Mehrazin, Hamid Reza; Sistanizad, Mohammad; Miri, Mirmohammad

    2016-01-01

    Medication errors are an important factor that increases fatal injuries to patients and imposes significant economic costs on health care. An appropriate medication history can reduce errors related to omission of previous drugs at the time of hospitalization. The aim of this study, the first of its kind in Iran, was to evaluate the discrepancies between medication histories obtained by pharmacists and by physicians/nurses, and the physician's first order. From September 2012 until March 2013, patients admitted to the post CCU of a 550-bed university hospital were recruited into the study. As part of medication reconciliation on admission, the physicians/nurses obtained a medication history from all admitted patients. For patients included in the study, the medication history was obtained by both a physician/nurse and a pharmacy student (trained by a faculty clinical pharmacist) during the first 24 h of admission. 250 patients met the inclusion criteria. The mean age of patients was 61.19 ± 14.41 years. Comparing the pharmacy student's drug history with the medication lists obtained by nurses/physicians revealed 3036 discrepancies. On average, 12.14 discrepancies, ranging from 0 to 68, were identified per patient. Only in 20 patients (8%) was there 100% agreement between the medication lists obtained by the pharmacist and the physician/nurse. Comparing the medications with the list of drugs ordered by the physician at the first visit showed 12.1 discrepancies on average, ranging from 0 to 72. According to the results, omission errors in our setting are higher than in other countries. Pharmacy-based medication reconciliation could be recommended to decrease this type of error.

  8. Real time, high accuracy, relative state estimation for multiple vehicle systems

    NASA Astrophysics Data System (ADS)

    Williamson, Walton Ross

    2000-10-01

    This dissertation presents the development, implementation, and test results of a new instrumentation package for relative navigation between moving vehicles. The instrumentation package on each vehicle is composed of a GPS (Global Positioning System) receiver, an IMU (Inertial Measurement Unit), a wireless communication system, and a modular computer system. The GPS places all vehicles into the same inertial reference frame and provides a common clock allowing synchronization among all instrument packages. The IMU tracks the high-frequency motion of the vehicle, alleviating the need for a fixed base station. The wireless communication system communicates GPS code and carrier phase measurements and computed state estimates from each vehicle at a rate fast enough to capture the dynamic changes in the vehicles. These data, representing both GPS and IMU measurements from each vehicle, are fused together on each vehicle to produce position, velocity, and attitude estimates relative to the other vehicles. This capability to estimate relative motion without a base station appears unique, as does the application of fusion algorithms to this new estimation problem. The use of carrier phase provides very accurate relative measurements. In constructing the carrier phase measurement, the integer number of wavelengths between vehicles must be resolved. Although integer resolution schemes exist, these algorithms are ad hoc. The scheme presented here is based on generating the conditional probability of each integer hypothesis given the measurement sequence. This nonlinear filter is an elegant and novel contribution. The entire system was tested in real time in an experiment intended to validate the measurement accuracy. The system built using the algorithms designed in this dissertation is capable of estimating relative range to less than 5 cm RMS, relative roll and pitch to less than 0.2 degrees RMS, and relative yaw to less than 0.7 degrees RMS.
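
    The integer-resolution idea, maintaining the conditional probability of each integer hypothesis given the measurement sequence, can be sketched for a simplified scalar case. Everything here (a single ambiguity, Gaussian phase noise, the parameter values) is an illustrative assumption, not the dissertation's filter.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simplified model: after removing the predicted geometric range, each epoch's
# carrier-phase residual is z_k = N_true + v_k (cycles), v_k ~ N(0, sigma^2),
# where N_true is the unknown integer ambiguity to be resolved.
N_true, sigma = 7, 0.4
hyps = np.arange(0, 15)              # candidate integers
log_p = np.zeros(len(hyps))          # uniform prior, kept in the log domain

for _ in range(30):
    z = N_true + rng.normal(0.0, sigma)
    log_p += -0.5 * ((z - hyps) / sigma) ** 2   # Gaussian log-likelihood update
    log_p -= log_p.max()                        # rescale for numerical stability

p = np.exp(log_p)
p /= p.sum()
print(hyps[p.argmax()])   # 7: the resolved integer
```

A real implementation would carry a joint hypothesis per satellite pair and couple it to the vehicle state estimate; the scalar case only shows the Bayesian bookkeeping.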

  9. Improving accuracy for identifying related PubMed queries by an integrated approach.

    PubMed

    Lu, Zhiyong; Wilbur, W John

    2009-10-01

    PubMed is the most widely used tool for searching biomedical literature online. As with many other online search tools, a user often types a series of related queries before retrieving satisfactory results to fulfill a single information need. Meanwhile, it is also a common phenomenon to see a user type queries on unrelated topics in a single session. In order to study PubMed users' search strategies, it is necessary to be able to automatically separate unrelated queries and group together related queries. Here, we report a novel approach combining both lexical and contextual analyses for segmenting PubMed query sessions and identifying related queries and compare its performance with the previous approach based solely on concept mapping. We experimented with our integrated approach on sample data consisting of 1539 pairs of consecutive user queries in 351 user sessions. The prediction results of 1396 pairs agreed with the gold-standard annotations, achieving an overall accuracy of 90.7%. This demonstrates that our approach is significantly better than the previously published method. By applying this approach to a one-day query log of PubMed, we found that a significant proportion of information needs involved more than one PubMed query, and that most of the consecutive queries for the same information need are lexically related. Finally, the proposed PubMed distance is shown to be an accurate and meaningful measure for determining the contextual similarity between biological terms. The integrated approach can play a critical role in handling real-world PubMed query log data as is demonstrated in our experiments.
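    The lexical half of such an approach can be illustrated with a simple token-overlap test between consecutive queries. This is a sketch only; the paper's actual lexical analysis, tokenizer, and threshold are not specified here and all names are illustrative.

```python
def lexically_related(q1, q2, threshold=0.2):
    """Crude lexical test for whether two consecutive queries belong to
    the same information need: token-level Jaccard overlap against an
    illustrative threshold (not the paper's actual criterion)."""
    t1, t2 = set(q1.lower().split()), set(q2.lower().split())
    if not t1 or not t2:
        return False
    jaccard = len(t1 & t2) / len(t1 | t2)
    return jaccard >= threshold

# Consecutive queries sharing terms would be grouped into one segment.
a = lexically_related("breast cancer brca1", "brca1 mutation screening")  # True
b = lexically_related("breast cancer brca1", "influenza vaccine")         # False
```

    A contextual signal (such as the paper's PubMed distance between terms) would then handle related query pairs that share no tokens at all.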

  10. Task-dependent signal variations in EEG error-related potentials for brain-computer interfaces

    NASA Astrophysics Data System (ADS)

    Iturrate, I.; Montesano, L.; Minguez, J.

    2013-04-01

    Objective. A major difficulty of brain-computer interface (BCI) technology is dealing with the noise of EEG and its signal variations. Previous works studied time-dependent non-stationarities for BCIs in which the user's mental task was independent of the device operation (e.g., the mental task was motor imagery and the operational task was a speller). However, there are some BCIs, such as those based on error-related potentials, where the mental and operational tasks are dependent (e.g., the mental task is to assess the device action and the operational task is the device action itself). The dependence between the mental task and the device operation could introduce a new source of signal variations when the operational task changes, which has not been studied yet. The aim of this study is to analyse task-dependent signal variations and their effect on EEG error-related potentials. Approach. The work analyses the EEG variations on the three design steps of BCIs: an electrophysiology study to characterize the existence of these variations, a feature distribution analysis and a single-trial classification analysis to measure the impact on the final BCI performance. Results and significance. The results demonstrate that a change in the operational task produces variations in the potentials, even when EEG activity exclusively originated in brain areas related to error processing is considered. Consequently, the extracted features from the signals vary, and a classifier trained with one operational task presents a significant loss of performance for other tasks, requiring calibration or adaptation for each new task. In addition, a new calibration for each of the studied tasks rapidly outperforms adaptive techniques designed in the literature to mitigate the EEG time-dependent non-stationarities.

  11. Skeletal mechanism generation for surrogate fuels using directed relation graph with error propagation and sensitivity analysis

    SciTech Connect

    Niemeyer, Kyle E.; Sung, Chih-Jen; Raju, Mandhapati P.

    2010-09-15

    A novel implementation for the skeletal reduction of large detailed reaction mechanisms using the directed relation graph with error propagation and sensitivity analysis (DRGEPSA) is developed and presented with examples for three hydrocarbon components, n-heptane, iso-octane, and n-decane, relevant to surrogate fuel development. DRGEPSA integrates two previously developed methods, directed relation graph-aided sensitivity analysis (DRGASA) and directed relation graph with error propagation (DRGEP), by first applying DRGEP to efficiently remove many unimportant species prior to sensitivity analysis, which then removes further unimportant species, producing an optimally small skeletal mechanism for a given error limit. It is illustrated that the combination of the DRGEP and DRGASA methods allows the DRGEPSA approach to overcome the weaknesses of each, specifically that DRGEP cannot identify all unimportant species and that DRGASA shields unimportant species from removal. Skeletal mechanisms for n-heptane and iso-octane generated using the DRGEP, DRGASA, and DRGEPSA methods are presented and compared to illustrate the improvement of DRGEPSA. From a detailed reaction mechanism for n-alkanes covering n-octane to n-hexadecane with 2115 species and 8157 reactions, two skeletal mechanisms for n-decane generated using DRGEPSA, one covering a comprehensive range of temperature, pressure, and equivalence ratio conditions for autoignition and the other limited to high temperatures, are presented and validated. The comprehensive skeletal mechanism consists of 202 species and 846 reactions and the high-temperature skeletal mechanism consists of 51 species and 256 reactions. Both mechanisms are further demonstrated to reproduce the results of the detailed mechanism well in perfectly-stirred reactor and laminar flame simulations over a wide range of conditions. The comprehensive and high-temperature n-decane skeletal mechanisms are included as supplementary material with this article.
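    The graph-search core of DRGEP-style methods can be sketched briefly: each species receives a path-dependent importance coefficient, the maximum over graph paths of the product of direct interaction coefficients along the path, computed here with a max-product variant of Dijkstra's algorithm. The tiny graph and the removal threshold are illustrative assumptions, not the paper's data.

```python
import heapq

def drgep_importance(direct, target):
    """Path-dependent importance coefficients R in the DRGEP spirit:
    R[S] = max over paths target->S of the product of direct interaction
    coefficients r in (0, 1]. Implemented as max-product Dijkstra.
    `direct` maps species -> {neighbour: r}."""
    R = {target: 1.0}
    heap = [(-1.0, target)]
    while heap:
        neg_r, s = heapq.heappop(heap)
        r_here = -neg_r
        if r_here < R.get(s, 0.0):
            continue  # stale heap entry
        for nbr, r_edge in direct.get(s, {}).items():
            cand = r_here * r_edge
            if cand > R.get(nbr, 0.0):
                R[nbr] = cand
                heapq.heappush(heap, (-cand, nbr))
    return R

# Tiny mock graph: fuel -> A -> B, plus a weak direct fuel -> B edge.
graph = {"fuel": {"A": 0.9, "B": 0.05}, "A": {"B": 0.5}}
R = drgep_importance(graph, "fuel")
# Species whose R falls below a user-chosen error threshold would be
# candidates for removal (DRGEPSA then screens them with sensitivity
# analysis before actually deleting them).
```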

  12. Neuroimaging measures of error-processing: Extracting reliable signals from event-related potentials and functional magnetic resonance imaging.

    PubMed

    Steele, Vaughn R; Anderson, Nathaniel E; Claus, Eric D; Bernat, Edward M; Rao, Vikram; Assaf, Michal; Pearlson, Godfrey D; Calhoun, Vince D; Kiehl, Kent A

    2016-05-15

    Error-related brain activity has become an increasingly important focus of cognitive neuroscience research utilizing both event-related potentials (ERPs) and functional magnetic resonance imaging (fMRI). Given the significant time and resources required to collect these data, it is important for researchers to plan their experiments such that stable estimates of error-related processes can be achieved efficiently. Reliability of error-related brain measures will vary as a function of the number of error trials and the number of participants included in the averages. Unfortunately, systematic investigations of the number of events and participants required to achieve stability in error-related processing are sparse, and none have addressed variability in sample size. Our goal here is to provide data compiled from a large sample of healthy participants (n=180) performing a Go/NoGo task, resampled iteratively to demonstrate the relative stability of measures of error-related brain activity given a range of sample sizes and event numbers included in the averages. We examine ERP measures of error-related negativity (ERN/Ne) and error positivity (Pe), as well as event-related fMRI measures locked to False Alarms. We find that achieving stable estimates of ERP measures required four to six error trials and approximately 30 participants; fMRI measures required six to eight trials and approximately 40 participants. Fewer trials and participants were required for measures where additional data reduction techniques (i.e., principal component analysis and independent component analysis) were implemented. Ranges of reliability statistics for various sample sizes and numbers of trials are provided. We intend this to be a useful resource for those planning or evaluating ERP or fMRI investigations with tasks designed to measure error-processing.
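    The iterative-resampling logic described above can be sketched as follows. This simplification estimates, for each trial count k, how well participant means computed from only k error trials track the means from all available trials; the study's actual reliability statistics, data, and names are not reproduced here.

```python
import random
import statistics

def pearson(a, b):
    """Plain Pearson correlation (avoids version-specific stdlib helpers)."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    var = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return cov / var

def stability_curve(data, trial_counts, n_boot=200, seed=1):
    """For each k in trial_counts, repeatedly subsample k trials per
    participant and correlate the subsampled means with the full-data
    means across participants. `data` maps participant -> trial values."""
    rng = random.Random(seed)
    ids = sorted(data)
    full = [statistics.fmean(data[p]) for p in ids]
    return {
        k: statistics.fmean(
            pearson([statistics.fmean(rng.sample(data[p], k)) for p in ids], full)
            for _ in range(n_boot)
        )
        for k in trial_counts
    }
```

    On simulated data the curve rises toward 1 as k grows, which is the shape of result the study reports for ERN/Ne and fMRI measures.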

  13. Error-Related Negativity and the Misattribution of State-Anxiety Following Errors: On the Reproducibility of Inzlicht and Al-Khindi (2012)

    PubMed Central

    Cano Rodilla, Carmen; Beauducel, André; Leue, Anja

    2016-01-01

    In their innovative study, Inzlicht and Al-Khindi (2012) demonstrated that participants who were allowed to misattribute their arousal and negative affect induced by errors to a placebo beverage had a reduced error-related negativity (ERN/Ne) compared to controls who were not allowed to misattribute their arousal following errors. These results contribute to the ongoing debate that affect and motivation are interwoven with the cognitive processing of errors. Evidence that the misattribution of negative affect modulates the ERN/Ne is essential for understanding the mechanisms behind the ERN/Ne. Therefore, and because of the growing debate on reproducibility of empirical findings, we aimed at replicating the misattribution effects on the ERN/Ne in a go/nogo task. Students were randomly assigned to a misattribution group (n = 48) or a control group (n = 51). Participants of the misattribution group consumed a beverage said to have side effects that would increase their physiological arousal, so that they could misattribute the negative affect induced by errors to the beverage. Participants of the control group correctly believed that the beverage had no side effects. Like Inzlicht and Al-Khindi (2012), we did not observe performance differences between the groups. However, ERN/Ne differences between the misattribution and control groups could not be replicated, although the statistical power of the replication study was high. Evidence regarding the replication of performance and the non-replication of ERN/Ne findings was confirmed by Bayesian statistics. PMID:27708571

  14. Error-related negativity in the skilled brain of pianists reveals motor simulation.

    PubMed

    Proverbio, Alice Mado; Cozzi, Matteo; Orlandi, Andrea; Carminati, Manuel

    2017-03-27

    Evidence has been provided for a crucial role of multimodal audio-visuomotor processing in subserving musical ability. In this paper we investigated whether musical audiovisual stimulation might trigger the activation of motor information in the brain of professional pianists, due to the presence of permanent gesture/sound associations. To this aim, EEG was recorded in 24 pianists and naïve participants engaged in the detection of rare targets while watching hundreds of video clips showing a pair of hands in the act of playing, along with a compatible or incompatible piano soundtrack. Hand size and apparent distance allowed self-ownership and agency illusions, and therefore motor simulation. Event-related potentials (ERPs) and relative source reconstruction showed the presence of an error-related negativity (ERN) to incongruent trials at anterior frontal scalp sites, only in pianists, with no difference in naïve participants. The ERN was mostly explained by an anterior cingulate cortex (ACC) source. Other sources included "hands" IT regions, the superior temporal gyrus (STG) involved in conjoined auditory and visuomotor processing, SMA and cerebellum (representing and controlling motor subroutines), and regions involved in body part representation (somatosensory cortex, uncus, cuneus and precuneus). The findings demonstrate that instrument-specific audiovisual stimulation is able to trigger error-detection and correction neural responses via motor resonance and mirroring, being a possible aid in learning and rehabilitation.

  15. Accuracy of Pressure Sensitive Paint

    NASA Technical Reports Server (NTRS)

    Liu, Tianshu; Guille, M.; Sullivan, J. P.

    2001-01-01

    Uncertainty in pressure sensitive paint (PSP) measurement is investigated from the standpoint of system modeling. A functional relation between the imaging system output and luminescent emission from PSP is obtained based on studies of radiative energy transport in PSP and photodetector response to luminescence. This relation provides insights into the physical origins of various elemental error sources and allows estimation of the total PSP measurement uncertainty contributed by the elemental errors. The elemental errors and their sensitivity coefficients in the error propagation equation are evaluated. Useful formulas are given for the minimum pressure uncertainty that PSP can possibly achieve and the upper bounds of the elemental errors required to meet a given pressure accuracy. An instructive example of a Joukowsky airfoil in subsonic flows is given to illustrate uncertainty estimates in PSP measurements.
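    The error propagation equation mentioned above follows the standard first-order form: the total variance is the sum of (sensitivity coefficient × elemental uncertainty) squared, assuming independent elemental errors. A minimal sketch, with purely illustrative numbers:

```python
def propagate_uncertainty(sensitivities, sigmas):
    """First-order error propagation for independent elemental errors:
    sigma_total = sqrt( sum_i (S_i * sigma_i)^2 ), where S_i is the
    sensitivity coefficient of elemental source i. Values below are
    illustrative, not from the paper."""
    return sum((s * u) ** 2 for s, u in zip(sensitivities, sigmas)) ** 0.5

# e.g. two elemental sources with sensitivities 1.0 and 0.5 and
# relative uncertainties 0.02 and 0.04:
total = propagate_uncertainty([1.0, 0.5], [0.02, 0.04])
```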

  16. Relative Accuracy of Nucleic Acid Amplification Tests and Culture in Detecting Chlamydia in Asymptomatic Men

    PubMed Central

    Cheng, Hong; Macaluso, Maurizio; Vermund, Sten H.; Hook, Edward W.

    2001-01-01

    Published estimates of the sensitivity and specificity of PCR and ligase chain reaction (LCR) for detecting Chlamydia trachomatis are potentially biased because of study design limitations (confirmation of test results was limited to subjects who were PCR or LCR positive but culture negative). Relative measures of test accuracy are less prone to bias in incomplete study designs. We estimated the relative sensitivity (RSN) and relative false-positive rate (RFP) for PCR and LCR versus cell culture among 1,138 asymptomatic men and evaluated the potential bias of RSN and RFP estimates. PCR and LCR testing in urine were compared to culture of urethral specimens. Discordant results (PCR or LCR positive, but culture negative) were confirmed by using a sequence including the other DNA amplification test, direct fluorescent antibody testing, and a DNA amplification test to detect chlamydial major outer membrane protein. The RSN estimates for PCR and LCR were 1.45 (95% confidence interval [CI] = 1.3 to 1.7) and 1.49 (95% CI = 1.3 to 1.7), respectively, indicating that both methods are more sensitive than culture. Very few false-positive results were found, indicating that the specificity levels of PCR, LCR, and culture are high. The potential biases in the RSN and RFP estimates were <5 and <20%, respectively. The estimation of bias is based on the most likely and probably conservative parameter settings. If the sensitivity of culture is between 60 and 65%, then the true sensitivity of PCR and LCR is between 90 and 97%. Our findings indicate that PCR and LCR are significantly more sensitive than culture, while the three tests have similar specificities. PMID:11682509

  17. Assessing disease severity: accuracy and reliability of rater estimates in relation to number of diagrams in a standard area diagram set

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Errors in rater estimates of plant disease severity occur, and standard area diagrams (SADs) help improve accuracy and reliability. The effects of diagram number in a SAD set on accuracy and reliability are unknown. The objective of this study was to compare estimates of pecan scab severity made witho...

  18. Post-glacial landforms dating by lichenometry in Iceland - the accuracy of relative results and conversely

    NASA Astrophysics Data System (ADS)

    Decaulne, Armelle

    2014-05-01

    Lichenometry studies have been carried out in Iceland since 1970 all over the country, using various techniques to solve a range of geomorphologic issues, from moraine dating and glacial advances, outwash timing, proglacial river incision, soil erosion, rock-glacier development, and climate variations, to debris-flow occurrence and extreme snow-avalanche frequency. Most users have sought to date proglacial landforms in two main areas, around the southern ice-caps of Vatnajökull and Myrdalsjökull, and in Tröllaskagi in northern Iceland. Based on the results of over thirty-five published studies, lichenometry is deemed to be a successful dating tool in Iceland, and seems to approach an absolute dating technique, at least over the last hundred years, under well-constrained environmental conditions at local scale. With an increasing awareness of the methodological limitations of the technique, together with more sophisticated data treatments, predicted lichenometric 'ages' are supposedly gaining in robustness and in precision. However, comparisons between regions, and even between studies in the same area, are hindered by the use of different measurement techniques and data processing. These issues are exacerbated in Iceland by rapid environmental changes across short distances and, more generally, by the common problems surrounding lichen species mis-identification in the field, not to mention the age discrepancies offered by other dating tools, such as tephrochronology. Some authors claim lichenometry can support a precise reconstruction of landforms and geomorphic processes in Iceland, proposing yearly dating; others include error margins in their reconstructions; while some limit its use to identifying landform generations, declining to interpret beyond the nature of the gathered data. Finally, can lichenometry be a relatively accurate dating technique, or rather an accurate relative dating tool, in Iceland?

  19. Effects of simulated interpersonal touch and trait intrinsic motivation on the error-related negativity.

    PubMed

    Tjew-A-Sin, Mandy; Tops, Mattie; Heslenfeld, Dirk J; Koole, Sander L

    2016-03-23

    The error-related negativity (ERN or Ne) is a negative event-related brain potential that peaks about 20-100 ms after people perform an incorrect response in choice reaction time tasks. Prior research has shown that the ERN may be enhanced by situational and dispositional factors that promote intrinsic motivation. Building on and extending this work, the authors hypothesized that simulated interpersonal touch may increase task engagement and thereby increase ERN amplitude. To test this notion, 20 participants performed a Go/No-Go task while holding a teddy bear or a same-sized cardboard box. As expected, the ERN was significantly larger when participants held a teddy bear rather than a cardboard box. This effect was most pronounced for people high (rather than low) in trait intrinsic motivation, who may depend more on intrinsically motivating task cues to maintain task engagement. These findings highlight the potential benefits of simulated interpersonal touch in stimulating attention to errors, especially among people who are intrinsically motivated.

  20. Impacts of visuomotor sequence learning methods on speed and accuracy: Starting over from the beginning or from the point of error.

    PubMed

    Tanaka, Kanji; Watanabe, Katsumi

    2016-02-01

    The present study examined whether sequence learning leads to more accurate performance and shorter performance times if people learning a sequence start over from the beginning when they make an error (i.e., practice the whole sequence) or only from the point of error (i.e., practice a part of the sequence). We used a visuomotor sequence learning paradigm with a trial-and-error procedure. In Experiment 1, we found fewer errors and shorter performance times for those who restarted their performance from the beginning of the sequence as compared to those who restarted from the point at which an error occurred, indicating better learning of spatial and motor representations of the sequence. This might be because the learned elements were repeated when the next performance started over from the beginning. In subsequent experiments, we increased the occasions for the repetition of learned elements by modulating the number of fresh start points in the sequence after errors. The results showed that fewer fresh start points were likely to lead to fewer errors and shorter performance times, indicating that the repetition of learned elements enabled participants to develop stronger spatial and motor representations of the sequence. Thus, one or two fresh start points in the sequence (i.e., starting over only from the beginning, or from the beginning or midpoint of the sequence after errors) is likely to lead to more accurate and faster performance.

  1. A 2 x 2 Taxonomy of Multilevel Latent Contextual Models: Accuracy-Bias Trade-Offs in Full and Partial Error Correction Models

    ERIC Educational Resources Information Center

    Ludtke, Oliver; Marsh, Herbert W.; Robitzsch, Alexander; Trautwein, Ulrich

    2011-01-01

    In multilevel modeling, group-level variables (L2) for assessing contextual effects are frequently generated by aggregating variables from a lower level (L1). A major problem of contextual analyses in the social sciences is that there is no error-free measurement of constructs. In the present article, 2 types of error occurring in multilevel data…

  2. The effect of errors in the assignment of the transmission functions on the accuracy of the thermal sounding of the atmosphere

    NASA Technical Reports Server (NTRS)

    Timofeyev, Y. M.

    1979-01-01

    To assess the error introduced by assumed transmission-function values for Soviet and American radiometers that sound the atmosphere thermally from orbiting satellites, the assumptions of the transmission calculation are varied with respect to atmospheric CO2 content, transmission frequency, and atmospheric absorption. The error arising from deviations of these assumptions from the standard basic model is calculated.

  3. Type I Error Inflation in the Traditional By-Participant Analysis to Metamemory Accuracy: A Generalized Mixed-Effects Model Perspective

    ERIC Educational Resources Information Center

    Murayama, Kou; Sakaki, Michiko; Yan, Veronica X.; Smith, Garry M.

    2014-01-01

    In order to examine metacognitive accuracy (i.e., the relationship between metacognitive judgment and memory performance), researchers often rely on by-participant analysis, where metacognitive accuracy (e.g., resolution, as measured by the gamma coefficient or signal detection measures) is computed for each participant and the computed values are…

  4. Children's school-breakfast reports and school-lunch reports (in 24-h dietary recalls): conventional and reporting-error-sensitive measures show inconsistent accuracy results for retention interval and breakfast location.

    PubMed

    Baxter, Suzanne D; Guinn, Caroline H; Smith, Albert F; Hitchcock, David B; Royer, Julie A; Puryear, Megan P; Collins, Kathleen L; Smith, Alyssa L

    2016-04-14

    Validation-study data were analysed to investigate retention interval (RI) and prompt effects on the accuracy of fourth-grade children's reports of school-breakfast and school-lunch (in 24-h recalls), and the accuracy of school-breakfast reports by breakfast location (classroom; cafeteria). Randomly selected fourth-grade children at ten schools in four districts were observed eating school-provided breakfast and lunch, and were interviewed under one of eight conditions created by crossing two RIs ('short'--prior-24-hour recall obtained in the afternoon and 'long'--previous-day recall obtained in the morning) with four prompts ('forward'--distant to recent, 'meal name'--breakfast, etc., 'open'--no instructions, and 'reverse'--recent to distant). Each condition had sixty children (half were girls). Of 480 children, 355 and 409 reported meals satisfying criteria for reports of school-breakfast and school-lunch, respectively. For breakfast and lunch separately, a conventional measure--report rate--and reporting-error-sensitive measures--correspondence rate and inflation ratio--were calculated for energy per meal-reporting child. Correspondence rate and inflation ratio--but not report rate--showed better accuracy for school-breakfast and school-lunch reports with the short RI than with the long RI; this pattern was not found for some prompts for each sex. Correspondence rate and inflation ratio showed better school-breakfast report accuracy for the classroom than for cafeteria location for each prompt, but report rate showed the opposite. For each RI, correspondence rate and inflation ratio showed better accuracy for lunch than for breakfast, but report rate showed the opposite. When choosing RI and prompts for recalls, researchers and practitioners should select a short RI to maximise accuracy. Recommendations for prompt selections are less clear. As report rates distort validation-study accuracy conclusions, reporting-error-sensitive measures are recommended.

  5. Predicting sex offender recidivism. I. Correcting for item overselection and accuracy overestimation in scale development. II. Sampling error-induced attenuation of predictive validity over base rate information.

    PubMed

    Vrieze, Scott I; Grove, William M

    2008-06-01

    The authors demonstrate a statistical bootstrapping method for obtaining unbiased item selection and predictive validity estimates from a scale development sample, using data (N = 256) of Epperson et al. [2003 Minnesota Sex Offender Screening Tool-Revised (MnSOST-R) technical paper: Development, validation, and recommended risk level cut scores. Retrieved November 18, 2006 from Iowa State University Department of Psychology web site: http://www.psychology.iastate.edu/~dle/mnsost_download.htm] from which the Minnesota Sex Offender Screening Tool-Revised (MnSOST-R) was developed. Validity (area under receiver operating characteristic curve) reported by Epperson et al. was .77 with 16 items selected. The present analysis yielded an asymptotically unbiased estimator AUC = .58. The present article also focused on the degree to which sampling error renders estimated cutting scores (appropriate to local [varying] recidivism base rates) nonoptimal, so that the long-run performance (measured by correct fraction, the total proportion of correct classifications) of these estimated cutting scores is poor, when they are applied to their parent populations (having assumed values for AUC and recidivism rate). This was investigated by Monte Carlo simulation over a range of AUC and recidivism rate values. Results indicate that, except for AUC values higher than have ever been cross-validated, in combination with recidivism base rates severalfold higher than the literature average [Hanson and Morton-Bourgon, 2004, Predictors of sexual recidivism: An updated meta-analysis. (User report 2004-02.). Ottawa: Public Safety and Emergency Preparedness Canada], the user of an instrument similar in performance to the MnSOST-R cannot expect to achieve correct fraction performance notably in excess of what is achievable from knowing the population recidivism rate alone. 
The authors discuss the legal implications of their findings for procedural and substantive due process in
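    The bias the bootstrap corrects here arises because item selection and validity estimation use the same sample. A generic sketch of the optimism-bootstrap idea (repeat the whole fitting procedure inside each bootstrap sample, then subtract the average optimism from the apparent AUC) is below; it is an illustration of the technique, not the authors' code, and `fit` is a hypothetical stand-in for the full scale-development procedure.

```python
import random

def auc(scores, labels):
    """Rank-based AUC: probability a positive case outscores a negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def optimism_corrected_auc(x, y, fit, n_boot=100, seed=0):
    """Optimism bootstrap: fit(x, y) must return a scoring function and
    must include every data-driven step (e.g. item selection). The
    apparent AUC is debiased by the mean (bootstrap-apparent minus
    bootstrap-on-original) optimism."""
    score = fit(x, y)
    apparent = auc([score(r) for r in x], y)
    rng = random.Random(seed)
    optimism = 0.0
    for _ in range(n_boot):
        idx = [rng.randrange(len(x)) for _ in range(len(x))]
        bx, by = [x[i] for i in idx], [y[i] for i in idx]
        s = fit(bx, by)                       # refit on the bootstrap sample
        optimism += (auc([s(r) for r in bx], by)
                     - auc([s(r) for r in x], y)) / n_boot
    return apparent - optimism
```

    When `fit` ignores its training data, the correction is near zero; the more aggressively `fit` selects items, the larger the optimism it reveals.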

  6. Single-session attention bias modification and error-related brain activity.

    PubMed

    Nelson, Brady D; Jackson, Felicia; Amir, Nader; Hajcak, Greg

    2015-12-01

    An attentional bias to threat has been implicated in the etiology and maintenance of anxiety disorders. Recently, attention bias modification (ABM) has been shown to reduce threat biases and decrease anxiety. However, it is unclear whether ABM modifies neural activity linked to anxiety and risk. The current study examined the relationship between ABM and the error-related negativity (ERN), a putative biomarker of risk for anxiety disorders, and the relationship between the ERN and ABM-based changes in attention to threat. Fifty-nine participants completed a single-session of ABM and a flanker task to elicit the ERN--in counterbalanced order (i.e., ABM-before vs. ABM-after the ERN was measured). Results indicated that the ERN was smaller (i.e., less negative) among individuals who completed ABM-before relative to those who completed ABM-after. Furthermore, greater attentional disengagement from negative stimuli during ABM was associated with a smaller ERN among ABM-before and ABM-after participants. The present study suggests a direct relationship between the malleability of negative attention bias and the ERN. Explanations are provided for how ABM may contribute to reductions in the ERN. Overall, the present study indicates that a single-session of ABM may be related to a decrease in neural activity linked to anxiety and risk.

  7. Single-Session Attention Bias Modification and Error-Related Brain Activity

    PubMed Central

    Nelson, Brady D.; Jackson, Felicia; Amir, Nader; Hajcak, Greg

    2015-01-01

    An attentional bias to threat has been implicated in the etiology and maintenance of anxiety disorders. Recently, attention bias modification (ABM) has been shown to reduce threat biases and decrease anxiety. However, it is unclear whether ABM modifies neural activity linked to anxiety and risk. The current study examined the relationship between ABM and the error-related negativity (ERN), a putative biomarker of risk for anxiety disorders, and the relationship between the ERN and ABM-based changes in attention to threat. Fifty-nine participants completed a single-session of ABM and a flanker task to elicit the ERN—in counterbalanced order (i.e., ABM-before vs. ABM-after the ERN was measured). Results indicated that the ERN was smaller (i.e., less negative) among individuals who completed ABM-before relative to those who completed ABM-after. Furthermore, greater attentional disengagement from negative stimuli during ABM was associated with a smaller ERN among ABM-before and ABM-after participants. The present study suggests a direct relationship between the malleability of negative attention bias and the ERN. Explanations are provided for how ABM may contribute to reductions in the ERN. Overall, the present study indicates that a single-session of ABM may be related to a decrease in neural activity linked to anxiety and risk. PMID:26063611

  8. Individual Differences in Relative Metacomprehension Accuracy: Variation within and across Task Manipulations

    ERIC Educational Resources Information Center

    Chiang, Evelyn S.; Therriault, David J.; Franks, Bridget A.

    2010-01-01

    In recent decades, increasing numbers of studies have focused on metacomprehension accuracy, or readers' ability to distinguish between texts comprehended more vs. less well. Following early findings that suggested readers are fairly poor at doing so, a number of studies have identified specific tasks to supplement a single reading of text that…

  9. How Does Speed and Accuracy in Reading Relate to Reading Comprehension in Arabic?

    ERIC Educational Resources Information Center

    Abu-Leil, Aula Khateeb; Share, David L.; Ibrahim, Raphiq

    2014-01-01

    The purpose of this study was to investigate the potential contribution of decoding efficiency to the development of reading comprehension among skilled adult native Arabic speakers. In addition, we tried to investigate the influence of Arabic vowels on reading accuracy, reading speed, and therefore to reading comprehension. Seventy-five Arabic…

  10. Automatic detection of MLC relative position errors for VMAT using the EPID-based picket fence test

    NASA Astrophysics Data System (ADS)

    Christophides, Damianos; Davies, Alex; Fleckney, Mark

    2016-12-01

    Multi-leaf collimators (MLCs) ensure the accurate delivery of treatments requiring complex beam fluences like intensity modulated radiotherapy and volumetric modulated arc therapy. The purpose of this work is to automate the detection of MLC relative position errors ⩾0.5 mm using electronic portal imaging device-based picket fence tests and compare the results to the qualitative assessment currently in use. Picket fence tests with and without intentional MLC errors were measured weekly on three Varian linacs. The picket fence images analysed covered a time period ranging from 14 to 20 months depending on the linac. An algorithm was developed that calculated the MLC error for each leaf-pair present in the picket fence images. The baseline error distributions of each linac were characterised for an initial period of 6 months and compared with the intentional MLC errors using statistical metrics. The distributions of median and one-sample Kolmogorov-Smirnov test p-value exhibited no overlap between baseline and intentional errors and were used retrospectively to automatically detect MLC errors in routine clinical practice. Agreement was found between the MLC errors detected by the automatic method and the fault reports during clinical use, as well as interventions for MLC repair and calibration. In conclusion, the method presented provides for full automation of MLC quality assurance, based on individual linac performance characteristics. The use of the automatic method has been shown to provide early warning for MLC errors that resulted in clinical downtime.
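    The detection logic can be sketched as a comparison of each new picket fence measurement against the linac's baseline distribution, via the median shift and a Kolmogorov-Smirnov statistic. This sketch substitutes a two-sample KS test with a tabulated 5% critical value for the paper's one-sample KS p-value; the thresholds and all names are illustrative assumptions.

```python
import numpy as np

def flag_mlc_errors(leaf_errors, baseline, median_tol=0.25, ks_coeff=1.358):
    """Flag a picket fence measurement if its per-leaf-pair position
    errors (mm) deviate from the linac's baseline: either the median
    shifts by more than `median_tol`, or the two-sample KS statistic
    exceeds its ~5% critical value c(alpha)*sqrt((n+m)/(n*m))."""
    x, y = np.sort(np.asarray(leaf_errors, float)), np.sort(np.asarray(baseline, float))
    n, m = len(x), len(y)
    grid = np.concatenate([x, y])
    fx = np.searchsorted(x, grid, side="right") / n   # ECDF of new errors
    fy = np.searchsorted(y, grid, side="right") / m   # ECDF of baseline
    d = np.max(np.abs(fx - fy))                       # KS statistic
    d_crit = ks_coeff * np.sqrt((n + m) / (n * m))
    median_shift = abs(np.median(x) - np.median(y))
    return bool(median_shift > median_tol or d > d_crit)
```

    Per the abstract, the baseline would be characterised from roughly six months of routine picket fence tests on each linac before the flagging thresholds are applied.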

  11. Automatic detection of MLC relative position errors for VMAT using the EPID-based picket fence test.

    PubMed

    Christophides, Damianos; Davies, Alex; Fleckney, Mark

    2016-12-07

    Multi-leaf collimators (MLCs) ensure the accurate delivery of treatments requiring complex beam fluences, such as intensity modulated radiotherapy and volumetric modulated arc therapy. The purpose of this work is to automate the detection of MLC relative position errors ⩾0.5 mm using electronic portal imaging device-based picket fence tests and compare the results to the qualitative assessment currently in use. Picket fence tests with and without intentional MLC errors were measured weekly on three Varian linacs. The picket fence images analysed covered a time period ranging from 14 to 20 months depending on the linac. An algorithm was developed that calculated the MLC error for each leaf-pair present in the picket fence images. The baseline error distributions of each linac were characterised for an initial period of 6 months and compared with the intentional MLC errors using statistical metrics. The distributions of the median and the one-sample Kolmogorov-Smirnov test p-value exhibited no overlap between baseline and intentional errors and were used retrospectively to automatically detect MLC errors in routine clinical practice. Agreement was found between the MLC errors detected by the automatic method and the fault reports during clinical use, as well as interventions for MLC repair and calibration. In conclusion, the method presented provides for full automation of MLC quality assurance, based on individual linac performance characteristics. The use of the automatic method has been shown to provide early warning for MLC errors that resulted in clinical downtime.
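    The per-image statistics this abstract describes (the median leaf-pair error and a one-sample Kolmogorov-Smirnov p-value against the baseline characterisation) reduce to a few lines of code. The sketch below assumes a normal fit to the baseline period; the decision limits are illustrative, not the paper's calibrated values.

    ```python
    import numpy as np
    from scipy import stats

    def session_metrics(leaf_errors_mm, baseline_mean, baseline_sd):
        """Summarise one picket-fence image: median leaf-pair error (mm) and the
        one-sample KS p-value against a normal fit to the baseline period."""
        median = float(np.median(leaf_errors_mm))
        _, p_value = stats.kstest(leaf_errors_mm, 'norm',
                                  args=(baseline_mean, baseline_sd))
        return median, p_value

    def flag_mlc_error(leaf_errors_mm, baseline_mean, baseline_sd,
                       median_limit_mm=0.25, p_limit=1e-3):
        """Flag an image whose error distribution falls outside the baseline
        characterisation (the limits here are hypothetical placeholders)."""
        median, p_value = session_metrics(leaf_errors_mm, baseline_mean, baseline_sd)
        return abs(median) > median_limit_mm or p_value < p_limit
    ```

    A systematic 0.5 mm offset on a set of leaf pairs drives both metrics far outside a tight baseline distribution, which is the regime the automatic method targets.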

  12. Measurement errors related to contact angle analysis of hydrogel and silicone hydrogel contact lenses.

    PubMed

    Read, Michael L; Morgan, Philip B; Maldonado-Codina, Carole

    2009-11-01

    This work sought to undertake a comprehensive investigation of the measurement errors associated with contact angle assessment of curved hydrogel contact lens surfaces. The contact angle coefficient of repeatability (COR) associated with three measurement conditions (image analysis COR, intralens COR, and interlens COR) was determined by measuring the contact angles (using both sessile drop and captive bubble methods) for three silicone hydrogel lenses (senofilcon A, balafilcon A, lotrafilcon A) and one conventional hydrogel lens (etafilcon A). Image analysis COR values were about 2 degrees, whereas intralens COR values (95% confidence intervals) ranged from 4.0 degrees (3.3 degrees, 4.7 degrees) (lotrafilcon A, captive bubble) to 10.2 degrees (8.4 degrees, 12.1 degrees) (senofilcon A, sessile drop). Interlens COR values ranged from 4.5 degrees (3.7 degrees, 5.2 degrees) (lotrafilcon A, captive bubble) to 16.5 degrees (13.6 degrees, 19.4 degrees) (senofilcon A, sessile drop). Measurement error associated with image analysis was shown to be small as an absolute measure, although proportionally more significant for lenses with low contact angle. Sessile drop contact angles were typically less repeatable than captive bubble contact angles. For sessile drop measures, repeatability was poorer with the silicone hydrogel lenses when compared with the conventional hydrogel lens; this phenomenon was not observed for the captive bubble method, suggesting that methodological factors related to the sessile drop technique (such as surface dehydration and blotting) may play a role in the increased variability of contact angle measurements observed with silicone hydrogel contact lenses.

  13. Performance error-related activity in monkey striatum during social interactions

    PubMed Central

    Báez-Mendoza, Raymundo; Schultz, Wolfram

    2016-01-01

    Monitoring our performance is fundamental to motor control while monitoring other’s performance is fundamental to social coordination. The striatum is hypothesized to play a role in action selection, action initiation, and action parsing, but we know little of its role in performance monitoring. Furthermore, the striatum contains neurons that respond to own and other’s actions. Therefore, we asked if striatal neurons signal own and conspecific’s performance errors. Two macaque monkeys sitting across a touch-sensitive table in plain view of each other took turns performing a simple motor task to obtain juice rewards while we recorded single striatal neurons from one monkey at a time. Both monkeys made more errors after individually making an error but made fewer errors after a conspecific error. Thus, monkeys’ behavior was influenced by their own and their conspecific’s past behavior. A population of striatal neurons responded to own and conspecific’s performance errors independently of a negative reward prediction error signal. Overall, these data suggest that monkeys are influenced by social errors and that striatal neurons signal performance errors. These signals might be important for social coordination, observational learning and adjusting to an ever-changing social landscape. PMID:27849004

  14. Maternal Accuracy and Behavior in Anticipating Children's Responses to Novelty: Relations to Fearful Temperament and Implications for Anxiety Development

    ERIC Educational Resources Information Center

    Kiel, Elizabeth J.; Buss, Kristin A.

    2010-01-01

    Previous research has suggested that mothers' behaviors may serve as a mechanism in the development from toddler fearful temperament to childhood anxiety. The current study examined the maternal characteristic of accuracy in predicting toddlers' distress reactions to novelty in relation to temperament, parenting, and anxiety development.…

  15. Children's Age-Related Speed--Accuracy Strategies in Intercepting Moving Targets in Two Dimensions

    ERIC Educational Resources Information Center

    Rothenberg-Cunningham, Alek; Newell, Karl M.

    2013-01-01

    Purpose: This study investigated the age-related speed--accuracy strategies of children, adolescents, and adults in performing a rapid striking task that allowed the self-selection of the interception position in a virtual, two-dimensional environment. Method: The moving target had curvilinear trajectories that were determined by combinations of…

  16. Motivation and semantic context affect brain error-monitoring activity: an event-related brain potentials study.

    PubMed

    Ganushchak, Lesya Y; Schiller, Niels O

    2008-01-01

    During speech production, we continuously monitor what we say. In situations in which speech errors potentially have more severe consequences, e.g. during a public presentation, our verbal self-monitoring system may pay closer attention to preventing errors than in situations in which speech errors are more acceptable, such as a casual conversation. In an event-related potential study, we investigated whether or not motivation affected participants' performance using a picture naming task in a semantic blocking paradigm. The semantic context of the to-be-named pictures was manipulated; blocks were semantically related (e.g., cat, dog, horse, etc.) or semantically unrelated (e.g., cat, table, flute, etc.). Motivation was manipulated independently by monetary reward. The motivation manipulation did not affect error rate during picture naming. However, the high-motivation condition yielded increased amplitude and latency values of the error-related negativity (ERN) compared to the low-motivation condition, presumably indicating higher monitoring activity. Furthermore, participants showed semantic interference effects in reaction times and error rates. The ERN amplitude was also larger during semantically related than unrelated blocks, presumably indicating that semantic relatedness induces more conflict between possible verbal responses.

  17. Relation of probability of causation to relative risk and doubling dose: a methodologic error that has become a social problem.

    PubMed Central

    Greenland, S

    1999-01-01

    Epidemiologists, biostatisticians, and health physicists frequently serve as expert consultants to lawyers, courts, and administrators. One of the most common errors committed by experts is to equate, without qualification, the attributable fraction estimated from epidemiologic data to the probability of causation requested by courts and administrators. This error has become so pervasive that it has been incorporated into judicial precedents and legislation. This commentary provides a brief overview of the error and the context in which it arises. PMID:10432900
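    The quantity at issue is the excess (attributable) fraction, conventionally computed from a relative risk estimate. A minimal illustration of the arithmetic behind the "doubling dose" rule (the formula is standard epidemiology, not code from the commentary itself):

    ```python
    def excess_fraction(relative_risk):
        """Excess fraction among the exposed, (RR - 1) / RR.
        Equating this, without qualification, to the 'probability of causation'
        is the methodologic error described: the two quantities coincide only
        under additional biological assumptions (e.g. exposure never merely
        accelerates disease that would have occurred anyway)."""
        if relative_risk <= 1.0:
            return 0.0
        return (relative_risk - 1.0) / relative_risk
    ```

    At RR = 2 (a "doubling dose") the excess fraction is exactly 0.5, which is why a relative risk above 2 is often taken by courts as a more-probable-than-not criterion.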

  18. Individual differences in reward-prediction-error: extraversion and feedback-related negativity.

    PubMed

    Smillie, Luke D; Cooper, Andrew J; Pickering, Alan D

    2011-10-01

    Medial frontal scalp-recorded negativity occurring ∼200-300 ms post-stimulus [known as feedback-related negativity (FRN)] is attenuated following unpredicted reward and potentiated following unpredicted non-reward. This encourages the view that FRN may partly reflect dopaminergic 'reward-prediction-error' signalling. We examined the influence of a putatively dopamine-based personality trait, extraversion (N = 30), and a dopamine-related gene polymorphism, DRD2/ANKK1 (N = 24), on FRN during an associative reward-learning paradigm. FRN was most negative following unpredicted non-reward and least-negative following unpredicted reward. A difference wave contrasting these conditions was significantly more pronounced for extraverted participants than for introverts, with a similar but non-significant trend for participants carrying at least one copy of the A1 allele of the DRD2/ANKK1 gene compared with those without the allele. Extraversion was also significantly higher in A1 allele carriers. Results have broad relevance to neuroscience and personality research concerning reward processing and dopamine function.

  19. Driving error and anxiety related to iPod mp3 player use in a simulated driving experience.

    PubMed

    Harvey, Ashley R; Carden, Randy L

    2009-08-01

    Driver distraction due to cellular phone usage has repeatedly been shown to increase the risk of vehicular accidents; however, the literature regarding the use of other personal electronic devices while driving is relatively sparse. It was hypothesized that the usage of an mp3 player would result in an increase in not only driving error while operating a driving simulator, but driver anxiety scores as well. It was also hypothesized that anxiety scores would be positively related to driving errors when using an mp3 player. 32 participants drove through a set course in a driving simulator twice, once with and once without an iPod mp3 player, with the order counterbalanced. Number of driving errors per course, such as leaving the road, impacts with stationary objects, loss of vehicular control, etc., and anxiety were significantly higher when an iPod was in use. Anxiety scores were unrelated to number of driving errors.

  20. Assessment of relative accuracy of AHN-2 laser scanning data using planar features.

    PubMed

    van der Sande, Corné; Soudarissanane, Sylvie; Khoshelham, Kourosh

    2010-01-01

    AHN-2 is the second part of the Actueel Hoogtebestand Nederland project, which concerns the acquisition of high-resolution altimetry data over the entire Netherlands using airborne laser scanning. The accuracy assessment of laser altimetry data usually relies on comparing corresponding tie elements, often points or lines, in the overlapping strips. This paper proposes a new approach to strip adjustment and accuracy assessment of AHN-2 data by using planar features. In the proposed approach a transformation is estimated between two overlapping strips by minimizing the distances between points in one strip and their corresponding planes in the other. The planes and the corresponding points are extracted in an automated segmentation process. The point-to-plane distances are used as observables in an estimation model, whereby the parameters of a transformation between the two strips and their associated quality measures are estimated. We demonstrate the performance of the method for the accuracy assessment of the AHN-2 dataset over Zeeland province of The Netherlands. The results show vertical offsets of up to 4 cm between the overlapping strips, and horizontal offsets ranging from 2 cm to 34 cm.
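    The core observable here, a signed point-to-plane distance, is a one-line computation. The least-squares sketch below recovers only a vertical shift between overlapping strips, a deliberate simplification of the full transformation estimated in the paper; all input arrays are hypothetical.

    ```python
    import numpy as np

    def point_to_plane_distances(points, plane_normals, plane_points):
        """Signed distance from each point to its corresponding plane,
        given unit plane normals and one point on each plane."""
        return np.einsum('ij,ij->i', points - plane_points, plane_normals)

    def estimate_vertical_offset(points, plane_normals, plane_points):
        """Least-squares vertical offset dz between two strips. With the model
        d_i = n_z,i * dz, the normal equations give dz = (n_z . d)/(n_z . n_z)."""
        d = point_to_plane_distances(points, plane_normals, plane_points)
        nz = plane_normals[:, 2]
        return float(np.dot(nz, d) / np.dot(nz, nz))
    ```

    Using the distances themselves as observables, rather than matched tie points, is what lets the adjustment work on automatically segmented planar features.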

  1. Diagnostic accuracy of a bayesian latent group analysis for the detection of malingering-related poor effort.

    PubMed

    Ortega, Alonso; Labrenz, Stephan; Markowitsch, Hans J; Piefke, Martina

    2013-01-01

    In the last decade, different statistical techniques have been introduced to improve the assessment of malingering-related poor effort. In this context, we have recently shown preliminary evidence that a Bayesian latent group model may help to optimize classification accuracy using a simulation research design. In the present study, we conducted two analyses. Firstly, we evaluated how accurately this Bayesian approach can distinguish between participants answering in an honest way (honest response group) and participants feigning cognitive impairment (experimental malingering group). Secondly, we tested the accuracy of our model in the differentiation between patients who had real cognitive deficits (cognitively impaired group) and participants who belonged to the experimental malingering group. All Bayesian analyses were conducted using the raw scores of a visual recognition forced-choice task (2AFC), the Test of Memory Malingering (TOMM, Trial 2), and the Word Memory Test (WMT, primary effort subtests). The first analysis showed 100% accuracy for the Bayesian model in distinguishing participants of both groups with all effort measures. The second analysis showed outstanding overall accuracy of the Bayesian model when estimates were obtained from the 2AFC and the TOMM raw scores. Diagnostic accuracy of the Bayesian model diminished when using the WMT total raw scores; despite this, overall diagnostic accuracy can still be considered excellent. The most plausible explanation for this decrement is the low performance in verbal recognition and fluency tasks of some patients of the cognitively impaired group. Additionally, the Bayesian model provides individual estimates, p(z_i | D), of examinees' effort levels. In conclusion, both high classification accuracy levels and Bayesian individual estimates of effort may be very useful for clinicians when assessing effort in medico-legal settings.
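    For forced-choice effort measures such as the 2AFC task used above, the classical benchmark is a binomial test for below-chance responding. This is a simpler relative of the Bayesian latent group model in the study, not a reimplementation of it:

    ```python
    from scipy.stats import binom

    def below_chance_p(n_correct, n_trials, chance=0.5):
        """P(X <= n_correct) if every answer were a pure guess.
        Very small values on a two-alternative forced-choice (2AFC) task
        suggest deliberate wrong answering, since even severe impairment
        should leave performance near chance, not below it."""
        return float(binom.cdf(n_correct, n_trials, chance))
    ```

    For example, 15 correct out of 50 yields a tail probability well under 1%, whereas a score near chance (25/50) is entirely unremarkable.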

  2. Theta and Alpha Band Modulations Reflect Error-Related Adjustments in the Auditory Condensation Task

    PubMed Central

    Novikov, Nikita A.; Bryzgalov, Dmitri V.; Chernyshev, Boris V.

    2015-01-01

    Error commission leads to adaptive adjustments in a number of brain networks that subserve goal-directed behavior, resulting in either enhanced stimulus processing or increased motor threshold depending on the nature of errors committed. Here, we studied these adjustments by analyzing post-error modulations of alpha and theta band activity in the auditory version of the two-choice condensation task, which is highly demanding for sustained attention while involves no inhibition of prepotent responses. Errors were followed by increased frontal midline theta (FMT) activity, as well as by enhanced alpha band suppression in the parietal and the left central regions; parietal alpha suppression correlated with the task performance, left central alpha suppression correlated with the post-error slowing, and FMT increase correlated with both behavioral measures. On post-error correct trials, left-central alpha band suppression started earlier before the response, and the response was followed by weaker FMT activity, as well as by enhanced alpha band suppression distributed over the entire scalp. These findings indicate that several separate neuronal networks are involved in post-error adjustments, including the midfrontal performance monitoring network, the parietal attentional network, and the sensorimotor network. Supposedly, activity within these networks is rapidly modulated after errors, resulting in optimization of their functional state on the subsequent trials, with corresponding changes in behavioral measures. PMID:26733266

  3. Minimizing the Error Associated With Measurements of Migration-Related Sediment Exchange on Meandering Rivers

    NASA Astrophysics Data System (ADS)

    Lauer, J. W.; Parker, G.

    2005-05-01

    The floodplains of meandering rivers represent reservoirs that both store and release sediment. Bed material is generally released from cut banks and replaced in nearby point bars wherever migration occurs. Measuring the associated bed material flux is important for tracing the movement of contaminants that may be mixed with the bed material. Approximations of this flux can be made using a representative channel depth and sequences of aerial photography to estimate average absolute migration rates (or reworked areas) between photographs. Error in the aerial photographs leads to a positive bias in computed release rates. A method for removing this bias is introduced that uses the apparent offset of fixed linear features such as roads (along smaller rivers) or abandoned channel courses (along larger rivers). Measuring the rate of release of fine sediment is important both for predicting the long term morphodynamic evolution of the channel/floodplain system and for tracing the movement of contaminants that may be adsorbed to the fine sediment. While fine sediment can be mixed throughout the depth of the floodplain, it is most concentrated in the upper portion of older parts of the floodplain where it has had time to accumulate through overbank deposition. Its release rate can be estimated using migration rates computed from aerial photography in combination with local measurements of bank topography, both of which are highly variable even within a given reach. Where detailed bank topography is available for an entire reach, estimating the release of fine sediment is relatively straightforward. However, detailed topography is often unavailable along the banks of large lowland rivers, forcing estimates of the fine material flux to be made using a relatively small number of physically surveyed cross-sections. It is not immediately clear how many cross-sections are required for a good estimate. This study performs Monte Carlo simulations on a detailed topographic dataset

  4. Reduced Error-Related Activation in Two Anterior Cingulate Circuits Is Related to Impaired Performance in Schizophrenia

    ERIC Educational Resources Information Center

    Polli, Frida E.; Barton, Jason J. S.; Thakkar, Katharine N.; Greve, Douglas N.; Goff, Donald C.; Rauch, Scott L.; Manoach, Dara S.

    2008-01-01

    To perform well on any challenging task, it is necessary to evaluate your performance so that you can learn from errors. Recent theoretical and experimental work suggests that the neural sequelae of error commission in a dorsal anterior cingulate circuit index a type of contingency- or reinforcement-based learning, while activation in a rostral…

  5. Correcting a fundamental error in greenhouse gas accounting related to bioenergy

    PubMed Central

    Haberl, Helmut; Sprinz, Detlef; Bonazountas, Marc; Cocco, Pierluigi; Desaubies, Yves; Henze, Mogens; Hertel, Ole; Johnson, Richard K.; Kastrup, Ulrike; Laconte, Pierre; Lange, Eckart; Novak, Peter; Paavola, Jouni; Reenberg, Anette; van den Hove, Sybille; Vermeire, Theo; Wadhams, Peter; Searchinger, Timothy

    2012-01-01

    Many international policies encourage a switch from fossil fuels to bioenergy based on the premise that its use would not result in carbon accumulation in the atmosphere. Frequently cited bioenergy goals would at least double the present global human use of plant material, the production of which already requires the dedication of roughly 75% of vegetated lands and more than 70% of water withdrawals. However, burning biomass for energy provision increases the amount of carbon in the air just like burning coal, oil or gas if harvesting the biomass decreases the amount of carbon stored in plants and soils, or reduces carbon sequestration. Neglecting this fact results in an accounting error that could be corrected by considering that only the use of ‘additional biomass’ – biomass from additional plant growth or biomass that would decompose rapidly if not used for bioenergy – can reduce carbon emissions. Failure to correct this accounting flaw will likely have substantial adverse consequences. The article presents recommendations for correcting greenhouse gas accounts related to bioenergy. PMID:23576835

  6. Correcting a fundamental error in greenhouse gas accounting related to bioenergy.

    PubMed

    Haberl, Helmut; Sprinz, Detlef; Bonazountas, Marc; Cocco, Pierluigi; Desaubies, Yves; Henze, Mogens; Hertel, Ole; Johnson, Richard K; Kastrup, Ulrike; Laconte, Pierre; Lange, Eckart; Novak, Peter; Paavola, Jouni; Reenberg, Anette; van den Hove, Sybille; Vermeire, Theo; Wadhams, Peter; Searchinger, Timothy

    2012-06-01

    Many international policies encourage a switch from fossil fuels to bioenergy based on the premise that its use would not result in carbon accumulation in the atmosphere. Frequently cited bioenergy goals would at least double the present global human use of plant material, the production of which already requires the dedication of roughly 75% of vegetated lands and more than 70% of water withdrawals. However, burning biomass for energy provision increases the amount of carbon in the air just like burning coal, oil or gas if harvesting the biomass decreases the amount of carbon stored in plants and soils, or reduces carbon sequestration. Neglecting this fact results in an accounting error that could be corrected by considering that only the use of 'additional biomass' - biomass from additional plant growth or biomass that would decompose rapidly if not used for bioenergy - can reduce carbon emissions. Failure to correct this accounting flaw will likely have substantial adverse consequences. The article presents recommendations for correcting greenhouse gas accounts related to bioenergy.

  7. Methods for detecting and estimating population threshold concentrations for air pollution-related mortality with exposure measurement error

    SciTech Connect

    Cakmak, S.; Burnett, R.T.; Krewski, D.

    1999-06-01

    The association between daily fluctuations in ambient particulate matter and daily variations in nonaccidental mortality have been extensively investigated. Although it is now widely recognized that such an association exists, the form of the concentration-response model is still in question. Linear no-threshold and linear threshold models have been most commonly examined. In this paper the authors considered methods to detect and estimate threshold concentrations using time series data of daily mortality rates and air pollution concentrations. Because exposure is measured with error, they also considered the influence of measurement error in distinguishing between these two competing model specifications. The methods were illustrated on a 15-year daily time series of nonaccidental mortality and particulate air pollution data in Toronto, Canada. Nonparametric smoothed representations of the association between mortality and air pollution were adequate to graphically distinguish between these two forms. Weighted nonlinear regression methods for relative risk models were adequate to give nearly unbiased estimates of threshold concentrations even under conditions of extreme exposure measurement error. The uncertainty in the threshold estimates increased with the degree of exposure error. Regression models incorporating threshold concentrations could be clearly distinguished from linear relative risk models in the presence of exposure measurement error. The assumption of a linear model given that a threshold model was the correct form usually resulted in overestimates in the number of averted premature deaths, except for low threshold concentrations and large measurement error.
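    A linear-threshold ("hockey stick") concentration-response curve of the kind compared here can be written and estimated with ordinary nonlinear least squares; the concentrations, coefficients, and threshold below are synthetic, for illustration only.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def threshold_model(conc, intercept, slope, threshold):
        """Log relative risk: flat below the threshold, linear above it.
        Setting threshold = 0 recovers the linear no-threshold model."""
        return intercept + slope * np.maximum(conc - threshold, 0.0)

    # Synthetic daily concentrations with a true threshold at 30 ug/m^3.
    conc = np.linspace(0.0, 100.0, 201)
    log_rr = threshold_model(conc, 0.0, 0.02, 30.0)

    # Nonlinear least-squares fit, started from a deliberately wrong guess.
    params, _ = curve_fit(threshold_model, conc, log_rr, p0=[0.0, 0.01, 25.0])
    ```

    With clean data the fit recovers the threshold almost exactly; the paper's point is that adding exposure measurement error leaves the threshold estimate nearly unbiased but inflates its uncertainty.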

  8. Point Cloud Derived Fromvideo Frames: Accuracy Assessment in Relation to Terrestrial Laser Scanningand Digital Camera Data

    NASA Astrophysics Data System (ADS)

    Delis, P.; Zacharek, M.; Wierzbicki, D.; Grochala, A.

    2017-02-01

    The use of image sequences in the form of video frames recorded on data storage is very useful, especially when working with large and complex structures. Two cameras were used in this study: a Sony NEX-5N (for the test object) and a Sony NEX-VG10 E (for the historic building). In both cases, a Sony α f = 16 mm fixed-focus wide-angle lens was used. Single frames with sufficient overlap were selected from the video sequence using an equation for automatic frame selection. In order to improve the quality of the generated point clouds, each video frame underwent histogram equalization and image sharpening. Point clouds were generated from the video frames using the SGM-like image matching algorithm. The accuracy assessment was based on two reference point clouds: the first from terrestrial laser scanning and the second generated from images acquired using a high-resolution camera, the NIKON D800. The research performed has shown that the highest accuracies are obtained for point clouds generated from video frames for which high-pass filtration and histogram equalization had been performed. Studies have shown that, to obtain a point cloud density comparable to TLS, the overlap between subsequent video frames must be 85% or more. Based on the point cloud generated from video data, a parametric 3D model can be generated. This type of 3D model can be used in HBIM construction.
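    The 85% overlap requirement translates into a maximum camera baseline between selected frames. The selector below is a sketch under simplifying assumptions (constant camera speed, known ground footprint); the paper's actual frame-selection equation is not reproduced in the abstract.

    ```python
    def frame_step(speed_m_s, fps, footprint_m, min_overlap=0.85):
        """Largest frame step (take every k-th frame) that keeps the ground
        overlap between consecutive selected frames >= min_overlap, assuming
        the camera moves at constant speed past a scene of known footprint."""
        baseline_per_frame = speed_m_s / fps            # metres moved per frame
        max_baseline = (1.0 - min_overlap) * footprint_m
        return max(1, int(max_baseline / baseline_per_frame))
    ```

    For example, a camera moving at 1 m/s filming at 25 fps over a 10 m footprint can keep only every 37th frame and still preserve 85% overlap; decimating the sequence this way is what keeps dense matching tractable.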

  9. TEPEE/GReAT (General Relativity Accuracy Test in an Einstein Elevator): experiment development status

    NASA Astrophysics Data System (ADS)

    Peron, Roberto

    TEPEE/GReAT is an ongoing experiment aimed at testing the Principle of Equivalence (PE) at a level of accuracy of 5 parts in 10^15 by means of a differential acceleration detector free falling inside a co-moving, cryogenic, evacuated capsule released from a stratospheric balloon. The detector is spun about a horizontal axis during the fall to modulate the PE violation signal at the spin frequency. The high accuracy requires resolving a very small signal out of the instrument's intrinsic noise and those noise components associated with the detector's motion and gravity gradients. Imperfections in the construction of the detector produce the latter noise components which, however, can be separated in frequency from the PE violation signal with specific configurations of the detector and its sensing masses. The following points will be discussed in the paper: i) those configurations of the differential acceleration detector that are capable of providing a remarkable frequency separation between the noise components mentioned above and the PE violation signal; ii) the latest advances in the development of detector prototypes, and specifically the electronic set-up that provides a high common-mode rejection factor; iii) the experimental results obtained with instrument prototypes that show high sensitivity to differential accelerations and a strong common-mode rejection factor.

  10. Implications of Ongoing Neural Development for the Measurement of the Error-Related Negativity in Childhood

    PubMed Central

    DuPuis, David; Ram, Nilam; Willner, Cynthia J.; Karalunas, Sarah; Segalowitz, Sidney J.; Gatzke-Kopp, Lisa M.

    2014-01-01

    Event-related potentials (ERPs) have been proposed as biomarkers capable of reflecting individual differences in neural processing not necessarily detectable at the behavioral level. However, the role of ERPs in developmental research could be hampered by current methodological approaches to quantification. ERPs are extracted as an average waveform over many trials; however, actual amplitudes would be misrepresented by an average if there were high trial-to-trial variability in signal latency. Low signal temporal consistency is thought to be a characteristic of immature neural systems, although consistency is not routinely measured in ERP research. The present study examined the differential contributions of signal strength and temporal consistency across trials in the error-related negativity (ERN) in 6-year-old children, as well as the developmental changes that occur in these measures. The 234 children were assessed annually in kindergarten, 1st, and 2nd grade. At all assessments, signal strength and temporal consistency were highly correlated with the average ERN amplitude, and were not correlated with each other. Consistent with previous findings, ERN deflections in the averaged waveform increased with age. This was found to be a function of developmental increases in signal temporal consistency, whereas signal strength showed a significant decline across this time period. Additionally, average ERN amplitudes showed low-to-moderate stability across the three assessments whereas signal strength was highly stable. In contrast, signal temporal consistency did not evidence rank-order stability across these ages. Signal strength appears to reflect a stable individual trait whereas developmental changes in temporal consistency may be experientially influenced.
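    The averaging artifact the abstract describes is easy to demonstrate: averaging fixed-amplitude single trials whose peak latency jitters from trial to trial shrinks and broadens the averaged deflection, even though per-trial signal strength never changes. The Gaussian "component" and jitter values below are illustrative.

    ```python
    import numpy as np

    def averaged_peak(n_trials, jitter_sd_samples, rng, width=10.0):
        """Peak amplitude of the average of n_trials unit-amplitude Gaussian
        deflections whose latencies jitter with SD jitter_sd_samples.
        Lower temporal consistency -> smaller averaged peak, with no change
        in single-trial signal strength."""
        t = np.arange(-100, 101, dtype=float)
        shifts = rng.normal(0.0, jitter_sd_samples, n_trials)
        trials = np.exp(-0.5 * ((t[None, :] - shifts[:, None]) / width) ** 2)
        return float(trials.mean(axis=0).max())
    ```

    With zero jitter the averaged peak equals the single-trial amplitude; with latency jitter twice the component width it collapses to roughly half, which is why signal strength and temporal consistency must be measured separately.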

  11. Impaired rapid error monitoring but intact error signaling following rostral anterior cingulate cortex lesions in humans

    PubMed Central

    Maier, Martin E.; Di Gregorio, Francesco; Muricchio, Teresa; Di Pellegrino, Giuseppe

    2015-01-01

    Detecting one’s own errors and appropriately correcting behavior are crucial for efficient goal-directed performance. A correlate of rapid evaluation of behavioral outcomes is the error-related negativity (Ne/ERN), which emerges at the time of the erroneous response over frontal brain areas. However, whether the error monitoring system’s ability to distinguish between errors and correct responses at this early time point is a necessary precondition for the subsequent emergence of error awareness remains unclear. The present study investigated this question using error-related brain activity and vocal error signaling responses in seven human patients with lesions in the rostral anterior cingulate cortex (rACC) and adjoining ventromedial prefrontal cortex, while they performed a flanker task. The difference between errors and correct responses was severely attenuated in these patients, indicating impaired rapid error monitoring, but they showed no impairment in error signaling. However, impaired rapid error monitoring coincided with a failure to increase response accuracy on trials following errors. These results demonstrate that the error monitoring system’s ability to distinguish between errors and correct responses at the time of the response is crucial for adaptive post-error adjustments, but not a necessary precondition for error awareness.

  12. The theoretical accuracy of Runge-Kutta time discretizations for the initial boundary value problem: A careful study of the boundary error

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Gottlieb, David; Abarbanel, Saul; Don, Wai-Sun

    1993-01-01

    The conventional method of imposing time-dependent boundary conditions for Runge-Kutta (RK) time advancement reduces the formal accuracy of the space-time method to first order locally, and second order globally, independently of the spatial operator. This counterintuitive result is analyzed in this paper. Two methods of eliminating this problem are proposed for the linear constant-coefficient case: (1) impose the exact boundary condition only at the end of the complete RK cycle; (2) impose consistent intermediate boundary conditions derived from the physical boundary condition and its derivatives. The first method, while retaining the RK accuracy in all cases, results in a scheme with a much reduced CFL condition, rendering the RK scheme less attractive. The second method retains the same allowable time step as the periodic problem. However, it is a general remedy only for the linear case. For non-linear hyperbolic equations the second method is effective only for RK schemes of third-order accuracy or less. Numerical studies are presented to verify the efficacy of each approach.

  13. The impact of a brief mindfulness meditation intervention on cognitive control and error-related performance monitoring.

    PubMed

    Larson, Michael J; Steffen, Patrick R; Primosch, Mark

    2013-01-01

    Meditation is associated with positive health behaviors and improved cognitive control. One mechanism for the relationship between meditation and cognitive control is changes in activity of the anterior cingulate cortex-mediated neural pathways. The error-related negativity (ERN) and error positivity (Pe) components of the scalp-recorded event-related potential (ERP) represent cingulate-mediated functions of performance monitoring that may be modulated by mindfulness meditation. We utilized a flanker task, an experimental design, and a brief mindfulness intervention in a sample of 55 healthy non-meditators (n = 28 randomly assigned to the mindfulness group and n = 27 randomly assigned to the control group) to examine autonomic nervous system functions as measured by blood pressure and indices of cognitive control as measured by response times, error rates, post-error slowing, and the ERN and Pe components of the ERP. Systolic blood pressure significantly differentiated groups following the mindfulness intervention and following the flanker task. There were non-significant differences between the mindfulness and control groups for response times, post-error slowing, and error rates on the flanker task. Amplitude and latency of the ERN did not differ between groups; however, amplitude of the Pe was significantly smaller in individuals in the mindfulness group than in the control group. Findings suggest that a brief mindfulness intervention is associated with reduced autonomic arousal and decreased amplitude of the Pe, an ERP associated with error awareness, attention, and motivational salience, but does not alter amplitude of the ERN or behavioral performance. Implications for brief mindfulness interventions and state vs. trait affect theories of the ERN are discussed. Future research examining graded levels of mindfulness and tracking error awareness will clarify the relationship between mindfulness and performance monitoring.

  14. Can students evaluate their understanding of cause-and-effect relations? The effects of diagram completion on monitoring accuracy.

    PubMed

    van Loon, Mariëtte H; de Bruin, Anique B H; van Gog, Tamara; van Merriënboer, Jeroen J G; Dunlosky, John

    2014-09-01

    For effective self-regulated study of expository texts, it is crucial that learners can accurately monitor their understanding of cause-and-effect relations. This study aimed to improve adolescents' monitoring accuracy using a diagram completion task. Participants read six texts, predicted performance, selected texts for restudy, and were tested for comprehension. Three groups were compared, in which learners either completed causal diagrams immediately after reading, completed them after a delay, or received no-diagram control instructions. Accuracy of predictions of performance was highest for learning of causal relations following delayed diagram completion. Completing delayed diagrams focused learners specifically on their learning of causal relations, so this task did not improve monitoring of learning of factual information. When selecting texts for restudy, the participants followed their predictions of performance to the same degree, regardless of monitoring accuracy. Fine-grained analyses also showed that, when completing delayed diagrams, learners based judgments on diagnostic cues that indicated actual understanding of connections between events in the text. Most important, delayed diagram completion can improve adolescents' ability to monitor their learning of cause-and-effect relations.

  15. Global Medical Device Nomenclature: The Concept for Reducing Device-Related Medical Errors

    PubMed Central

    Anand, K; Saini, SK; Singh, BK; Veermaram, C

    2010-01-01

    In the medical device field there are many players, with quite different responsibilities and levels of understanding of the processes, but all with one common interest: ensuring the availability of sound medical devices to the general public. To assist in this very important process, there is a need for a common method for describing and identifying these medical devices in an unambiguous manner. The Global Medical Device Nomenclature (GMDN) now provides, for the first time, an international tool for identifying all medical devices, at the generic level, in a meaningful manner that can be understood by all users. Prior to the GMDN, many nomenclature systems existed, all built upon different structures and used locally or nationally for special purposes, with varying approaches. These diverse systems, although often workable in their own right, did nothing to provide a common platform whereby medical devices could be correctly identified and the related data safely exchanged between the involved parties. Work by standards organizations such as CEN (European Committee for Standardization) and ISO (International Organization for Standardization) from 1993 to 1996 resulted in a standard that specified a structure for a new nomenclature for medical devices. In this article, we present the GMDN as a primary means of reducing medical device errors and explain the concept of the GMDN as a tool for regulating medical devices throughout the globe. We also describe various aspects of the GMDN system, such as the development of the GMDN-CEN report, its purpose and benefits, and its structural considerations, along with the coding system, the role of the GMDN Agency, and the use of the GMDN in the unique device identification (UDI) system. Finally, the current areas of focus and the vision for the future are also mentioned. PMID:21264103

  16. Motoneuron axon pathfinding errors in zebrafish: Differential effects related to concentration and timing of nicotine exposure

    SciTech Connect

    Menelaou, Evdokia; Paul, Latoya T.; Perera, Surangi N.; Svoboda, Kurt R.

    2015-04-01

    Nicotine exposure during embryonic stages of development can affect many neurodevelopmental processes. In the developing zebrafish, exposure to nicotine was reported to cause axonal pathfinding errors in the later born secondary motoneurons (SMNs). These alterations in SMN axon morphology coincided with muscle degeneration at high nicotine concentrations (15–30 μM). Previous work showed that the paralytic mutant zebrafish known as sofa potato exhibited nicotine-induced effects on SMN axons at these high concentrations but in the absence of any muscle deficits, indicating that pathfinding errors could occur independent of muscle effects. In this study, we used varying concentrations of nicotine at different developmental windows of exposure to specifically isolate its effects on subpopulations of motoneuron axons. We found that nicotine exposure can affect SMN axon morphology in a dose-dependent manner. At low concentrations of nicotine, SMN axons exhibited pathfinding errors in the absence of any nicotine-induced muscle abnormalities. Moreover, the nicotine exposure paradigms used affected the 3 subpopulations of SMN axons differently, but the dorsal projecting SMN axons were primarily affected. We then identified morphologically distinct pathfinding errors that best described the nicotine-induced effects on dorsal projecting SMN axons. To test whether SMN pathfinding was potentially influenced by alterations in the early born primary motoneuron (PMN), we performed dual labeling studies, where both PMN and SMN axons were simultaneously labeled with antibodies. We show that only a subset of the SMN axon pathfinding errors coincided with abnormal PMN axonal targeting in nicotine-exposed zebrafish. We conclude that nicotine exposure can exert differential effects depending on the levels of nicotine and the developmental exposure window. Highlights: embryonic nicotine exposure can specifically affect secondary motoneuron axons in a dose-dependent manner.

  17. In Your Face: Risk of Punishment Enhances Cognitive Control and Error-Related Activity in the Corrugator Supercilii Muscle.

    PubMed

    Lindström, Björn R; Mattsson-Mårn, Isak Berglund; Golkar, Armita; Olsson, Andreas

    2013-01-01

    Cognitive control is needed when mistakes have consequences, especially when such consequences are potentially harmful. However, little is known about how the aversive consequences of deficient control affect behavior. To address this issue, participants performed a two-choice response time task where error commissions were expected to be punished by electric shocks during certain blocks. By manipulating (1) the perceived punishment risk (no, low, high) associated with error commissions, and (2) response conflict (low, high), we showed that motivation to avoid punishment enhanced performance during high response conflict. As a novel index of the processes enabling successful cognitive control under threat, we explored electromyographic activity in the corrugator supercilii (cEMG) muscle of the upper face. The corrugator supercilii is partially controlled by the anterior midcingulate cortex (aMCC) which is sensitive to negative affect, pain and cognitive control. As hypothesized, the cEMG exhibited several key similarities with the core temporal and functional characteristics of the Error-Related Negativity (ERN) ERP component, the hallmark index of cognitive control elicited by performance errors, and which has been linked to the aMCC. The cEMG was amplified within 100 ms of error commissions (the same time-window as the ERN), particularly during the high punishment risk condition where errors would be most aversive. Furthermore, similar to the ERN, the magnitude of error cEMG predicted post-error response time slowing. Our results suggest that cEMG activity can serve as an index of avoidance motivated control, which is instrumental to adaptive cognitive control when consequences are potentially harmful.

  18. Predictive accuracy of three field methods for estimating relative body fatness of nonobese and obese women.

    PubMed

    Heyward, V H; Cook, K L; Hicks, V L; Jenkins, K A; Quatrochi, J A; Wilson, W L

    1992-03-01

    Three methods of body composition assessment were used to estimate percent body fat (%BF) in nonobese (n = 77) and obese (n = 71) women, 20-72 yrs of age. Skinfolds (SKF), bioelectrical impedance (BIA), and near-infrared interactance (NIR) methods were compared to criterion-derived %BF from hydrostatic weighing (%BFHW). Nonobese subjects had <30% BFHW and obese subjects had ≥30% BFHW. The Jackson, Pollock, and Ward SKF equation and the manufacturer's equations for BIA (Valhalla) and NIR (Futrex-5000) were used. For nonobese women there were no significant differences between mean %BFHW and %BFSKF, %BFBIA, and %BFNIR. The rs and SEEs were 0.65 and 3.4% BF for SKF, 0.61 and 3.6% BF for BIA, and 0.58 and 3.7% BF for NIR for nonobese subjects. For obese women, mean %BFHW was significantly underestimated by the SKF, BIA, and NIR methods. The rs and SEEs for the obese group were 0.59 and 3.4% BF for SKF, 0.56 and 3.5% BF for BIA, and 0.36 and 3.9% BF for NIR. The total errors of the equations ranged from 5.6 to 8.0% BF in the obese group. It is concluded that all three field methods accurately estimate %BF for nonobese women; however, none of the methods is suitable for estimating %BF for obese women.
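
    Validation studies of this kind conventionally report a constant error (the mean difference between the field estimate and the criterion) and a total error (the root-mean-square difference). A minimal sketch of those two computations follows; the %BF values shown are hypothetical and are not the study's data.

```python
import math

def validation_stats(predicted, criterion):
    # Constant error = mean difference; total error = RMS difference,
    # as conventionally defined in body-composition validation studies
    # (assumption: these are the definitions the abstract refers to).
    n = len(predicted)
    diffs = [p - c for p, c in zip(predicted, criterion)]
    constant_error = sum(diffs) / n
    total_error = math.sqrt(sum(d * d for d in diffs) / n)
    return constant_error, total_error

# Hypothetical %BF values for illustration only.
pred = [22.0, 28.5, 31.0, 35.2]   # field-method estimates
crit = [24.0, 30.0, 34.5, 40.0]   # hydrostatic-weighing criterion
ce, te = validation_stats(pred, crit)
```

    A negative constant error, as in this toy example, corresponds to the systematic underestimation the abstract reports for obese women.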

  19. The relative accuracy of mercury, Tempa-DOT and FeverScan thermometers.

    PubMed

    Morley, C; Murray, M; Whybrew, K

    1998-12-01

    This project aimed to assess the accuracy of Tempa-DOT and FeverScan for measuring children's temperatures. Tempa-DOT is a small flat chemical thermometer with 50 dots that change colour at specific temperatures. FeverScan is a liquid crystal strip thermometer with temperature sensitive colour bars that change colour when held against the forehead. Two medical students undertook this study in a hospital in Zambia. Over a six-week period, they assessed most children presenting to the hospital and those on the children's ward. A mercury thermometer was placed in one axilla, a Tempa-DOT thermometer in the other, and the FeverScan was held on the child's forehead. Data were obtained from 1090 children with a median age of two years. The sensitivity of FeverScan to correctly identify febrile children was 89% and the positive predictive value to detect a fever was 57%. The sensitivity of Tempa-DOT to correctly identify febrile children was 92% and the positive predictive value for detecting febrile children was 86%. Tempa-DOT has a much better predictive value than FeverScan for detecting fever.
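
    The two figures of merit quoted above come from a standard 2×2 diagnostic table. A minimal sketch of the computation, using hypothetical cell counts since the abstract does not report the raw table:

```python
def sensitivity_ppv(tp, fn, fp):
    # Sensitivity = TP / (TP + FN): fraction of truly febrile children detected.
    # Positive predictive value = TP / (TP + FP): fraction of positive
    # readings that are true fevers.
    return tp / (tp + fn), tp / (tp + fp)

# Hypothetical counts for illustration only (not the study's data).
sens, ppv = sensitivity_ppv(tp=92, fn=8, fp=15)
```

    Note that PPV, unlike sensitivity, depends on the prevalence of fever in the sample, which is one reason the two thermometers can have similar sensitivities but very different predictive values.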

  20. Reduction of medication errors related to sliding scale insulin by the introduction of a standardized order sheet.

    PubMed

    Harada, Saki; Suzuki, Akio; Nishida, Shohei; Kobayashi, Ryo; Tamai, Sayuri; Kumada, Keisuke; Murakami, Nobuo; Itoh, Yoshinori

    2016-12-07

    Insulin is frequently used for glycemic control. Medication errors related to insulin are a common problem for medical institutions. Here, we prepared a standardized sliding scale insulin (SSI) order sheet and assessed the effect of its introduction. Observations before and after the introduction of the standardized SSI template were conducted at Gifu University Hospital. The incidence of medication errors, hyperglycemia, and hypoglycemia related to SSI were obtained from the electronic medical records. The introduction of the standardized SSI order sheet significantly reduced the incidence of medication errors related to SSI compared with that prior to its introduction (12/165 [7.3%] vs 4/159 [2.1%], P = .048). However, the incidence of hyperglycemia (≥250 mg/dL) and hypoglycemia (≤50 mg/dL) in patients who received SSI was not significantly different between the 2 groups. The introduction of the standardized SSI order sheet reduced the incidence of medication errors related to SSI.
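
    The before/after comparison above is a comparison of two incidence proportions. The following sketch applies a pooled two-proportion z-test to the reported counts; the study's own significance test may have been a different procedure (e.g. chi-square or Fisher's exact test), so this is illustrative only.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    # Two-sided two-proportion z-test using the pooled-variance estimate.
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail area
    return z, p_value

# Counts reported in the abstract: 12/165 errors before vs 4/159 after.
z, p = two_proportion_z(12, 165, 4, 159)
```

    On these counts the statistic falls just inside the 5% level, consistent with the abstract's reported significance.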

  1. Medial prefrontal functional connectivity--relation to memory self-appraisal accuracy in older adults with and without memory disorders.

    PubMed

    Ries, Michele L; McLaren, Donald G; Bendlin, Barbara B; Guofanxu; Rowley, Howard A; Birn, Rasmus; Kastman, Erik K; Sager, Mark A; Asthana, Sanjay; Johnson, Sterling C

    2012-04-01

    It is tentatively estimated that 25% of people with early Alzheimer's disease (AD) show impaired awareness of disease-related changes in their own cognition. Research examining both normative self-awareness and altered awareness resulting from brain disease or injury points to the central role of the medial prefrontal cortex (MPFC) in generating accurate self-appraisals. The current project builds on this work by examining changes in MPFC functional connectivity that correspond to impaired self-appraisal accuracy early in the AD time course. Our behavioral focus was self-appraisal accuracy for everyday memory function, and this was measured using the Memory Function Scale of the Memory Awareness Rating Scale, an instrument psychometrically validated for this purpose. Using regression analysis of data from people with healthy memory (n=12) and people with impaired memory due to amnestic mild cognitive impairment or early AD (n=12), we tested the hypothesis that altered MPFC functional connectivity, particularly with other cortical midline structures and dorsolateral prefrontal cortex, explains variation in memory self-appraisal accuracy. We spatially constrained (i.e., explicitly masked) our regression analyses to those regions that work in conjunction with the MPFC to evoke self-appraisals in a normative group. This empirically derived explicit mask was generated from the result of a psychophysiological interaction analysis of fMRI self-appraisal task data in a separate, large group of cognitively healthy individuals. Results of our primary analysis (i.e., the regression of memory self-appraisal accuracy on MPFC functional connectivity) were generally consistent with our hypothesis: people who were less accurate in making memory self-appraisals showed attenuated functional connectivity between the MPFC seed region and proximal areas within the MPFC (including subgenual anterior cingulate cortex), bilateral dorsolateral prefrontal cortex, bilateral caudate, and

  2. Individual Differences in Working Memory Capacity Predict Action Monitoring and the Error-Related Negativity

    ERIC Educational Resources Information Center

    Miller, A. Eve; Watson, Jason M.; Strayer, David L.

    2012-01-01

    Neuroscience suggests that the anterior cingulate cortex (ACC) is responsible for conflict monitoring and the detection of errors in cognitive tasks, thereby contributing to the implementation of attentional control. Though individual differences in frontally mediated goal maintenance have clearly been shown to influence outward behavior in…

  3. Acute low back pain information online: an evaluation of quality, content accuracy and readability of related websites.

    PubMed

    Hendrick, Paul A; Ahmed, Osman H; Bankier, Shane S; Chan, Tze Jieh; Crawford, Sarah A; Ryder, Catherine R; Welsh, Lisa J; Schneiders, Anthony G

    2012-08-01

    The internet is increasingly being used as a source of health information by the general public. Numerous websites exist that provide advice and information on the diagnosis and management of acute low back pain (ALBP); however, the accuracy and utility of this information has yet to be established. The aim of this study was to establish the quality, content and readability of online information relating to the treatment and management of ALBP. The internet was systematically searched using Google search engines from six major English-speaking countries. In addition, relevant national and international low back pain-related professional organisations were also searched. A total of 22 relevant websites were identified. The accuracy of the content of the ALBP information was established using a 13-point guide developed from international guidelines. Website quality was evaluated using the HONcode, and the Flesch-Kincaid Grade Level (FKGL) was used to establish readability. The majority of websites lacked accurate information, resulting in an overall mean content accuracy score of 6.3/17. Only 3 websites had a high content accuracy score (>14/17) along with an acceptable readability score (FKGL 6-8); the majority of websites provided information which exceeded the recommended reading level for the average person to comprehend. The most accurately reported category was "Education and reassurance" (98%), while information regarding "manipulation" (50%), "massage" (9%), and "exercise" (0%) was amongst the lowest scoring categories. These results demonstrate the need for more accurate and readable internet-based ALBP information specifically centred on evidence-based guidelines.

  4. The feedback-related negativity reflects “more or less” prediction error in appetitive and aversive conditions

    PubMed Central

    Huang, Yi; Yu, Rongjun

    2014-01-01

    Humans make predictions and use feedback to update their subsequent predictions. The feedback-related negativity (FRN) has been found to be sensitive to negative feedback as well as negative prediction error, such that the FRN is larger for outcomes that are worse than expected. The present study examined prediction errors in both appetitive and aversive conditions. We found that the FRN was more negative for reward omission vs. wins and for loss omission vs. losses, suggesting that the FRN might classify outcomes in a “more-or-less than expected” fashion rather than in the “better-or-worse than expected” dimension. Our findings challenge the previous notion that the FRN only encodes negative feedback and “worse than expected” negative prediction error. PMID:24904254

  5. NASA hydrogen maser accuracy and stability in relation to world standards

    NASA Technical Reports Server (NTRS)

    Peters, H. E.; Percival, D. B.

    1973-01-01

    Frequency comparisons were made among five NASA hydrogen masers in 1969 and again in 1972 to a precision of one part in 10 to the 13th power. Frequency comparisons were also made between these masers and the cesium-beam ensembles of several international standards laboratories. The hydrogen maser frequency stabilities as related to IAT were comparable to the frequency stabilities of individual time scales with respect to IAT. The relative frequency variations among the NASA masers, measured after the three-year interval, were 2 ± 2 parts in 10 to the 13th power. Thus time scales based on hydrogen masers would have excellent long-term stability and uniformity.

  6. Low relative error in consumer-grade GPS units make them ideal for measuring small-scale animal movement patterns

    PubMed Central

    Severns, Paul M.

    2015-01-01

    Consumer-grade GPS units are a staple of modern field ecology, but the relatively large error radii reported by manufacturers (up to 10 m) ostensibly precludes their utility in measuring fine-scale movement of small animals such as insects. Here we demonstrate that for data collected at fine spatio-temporal scales, these devices can produce exceptionally accurate data on step-length and movement patterns of small animals. With an understanding of the properties of GPS error and how it arises, it is possible, using a simple field protocol, to use consumer-grade GPS units to collect step-length data for the movement of small animals that introduces a median error as small as 11 cm. These small error rates were measured in controlled observations of real butterfly movement. Similar conclusions were reached using a ground-truth test track prepared with a field tape and compass and subsequently measured 20 times using the same methodology as the butterfly tracking. Median error in the ground-truth track was slightly higher than the field data, mostly between 20 and 30 cm, but even for the smallest ground-truth step (70 cm), this is still a signal-to-noise ratio of 3:1, and for steps of 3 m or more, the ratio is greater than 10:1. Such small errors relative to the movements being measured make these inexpensive units useful for measuring insect and other small animal movements on small to intermediate scales with budgets orders of magnitude lower than survey-grade units used in past studies. As an additional advantage, these units are simpler to operate, and insect or other small animal trackways can be collected more quickly than either survey-grade units or more traditional ruler/grid approaches. PMID:26312190
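
    Step lengths from consecutive GPS fixes are typically computed as a great-circle distance between latitude/longitude pairs, e.g. with the haversine formula. A minimal sketch using illustrative coordinates (not the study's data):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in metres between two GPS fixes (haversine formula).
    R = 6371000.0  # mean Earth radius in metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

# Two fixes differing by 1e-5 degrees of latitude, roughly 1.1 m apart
# (one degree of latitude is about 111 km).
step_m = haversine_m(45.000000, -122.000000, 45.000010, -122.000000)
```

    At step lengths on this scale, a 20-30 cm median GPS error still leaves the signal-to-noise ratios the abstract reports.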

  7. Localising the auditory N1m with event-related beamformers: localisation accuracy following bilateral and unilateral stimulation

    NASA Astrophysics Data System (ADS)

    Gascoyne, Lauren; Furlong, Paul L.; Hillebrand, Arjan; Worthen, Siân F.; Witton, Caroline

    2016-08-01

    The auditory evoked N1m-P2m response complex presents a challenging case for MEG source-modelling, because symmetrical, phase-locked activity occurs in the hemispheres both contralateral and ipsilateral to stimulation. Beamformer methods, in particular, can be susceptible to localisation bias and spurious sources under these conditions. This study explored the accuracy and efficiency of event-related beamformer source models for auditory MEG data under typical experimental conditions: monaural and diotic stimulation; and whole-head beamformer analysis compared to a half-head analysis using only sensors from the hemisphere contralateral to stimulation. Event-related beamformer localisations were also compared with more traditional single-dipole models. At the group level, the event-related beamformer performed equally well as the single-dipole models in terms of accuracy for both the N1m and the P2m, and in terms of efficiency (number of successful source models) for the N1m. The results yielded by the half-head analysis did not differ significantly from those produced by the traditional whole-head analysis. Any localisation bias caused by the presence of correlated sources is minimal in the context of the inter-individual variability in source localisations. In conclusion, event-related beamformers provide a useful alternative to equivalent-current dipole models in localisation of auditory evoked responses.

  8. Localising the auditory N1m with event-related beamformers: localisation accuracy following bilateral and unilateral stimulation

    PubMed Central

    Gascoyne, Lauren; Furlong, Paul L.; Hillebrand, Arjan; Worthen, Siân F.; Witton, Caroline

    2016-01-01

    The auditory evoked N1m-P2m response complex presents a challenging case for MEG source-modelling, because symmetrical, phase-locked activity occurs in the hemispheres both contralateral and ipsilateral to stimulation. Beamformer methods, in particular, can be susceptible to localisation bias and spurious sources under these conditions. This study explored the accuracy and efficiency of event-related beamformer source models for auditory MEG data under typical experimental conditions: monaural and diotic stimulation; and whole-head beamformer analysis compared to a half-head analysis using only sensors from the hemisphere contralateral to stimulation. Event-related beamformer localisations were also compared with more traditional single-dipole models. At the group level, the event-related beamformer performed equally well as the single-dipole models in terms of accuracy for both the N1m and the P2m, and in terms of efficiency (number of successful source models) for the N1m. The results yielded by the half-head analysis did not differ significantly from those produced by the traditional whole-head analysis. Any localisation bias caused by the presence of correlated sources is minimal in the context of the inter-individual variability in source localisations. In conclusion, event-related beamformers provide a useful alternative to equivalent-current dipole models in localisation of auditory evoked responses. PMID:27545435

  9. Error-related event-related potentials in children with Attention-Deficit Hyperactivity Disorder, Oppositional Defiant Disorder, Reading Disorder, and Math Disorder

    PubMed Central

    Burgio-Murphy, Andrea; Klorman, Rafael; Shaywitz, Sally E.; Fletcher, Jack M.; Marchione, Karen E.; Holahan, John; Stuebing, Karla K.; Thatcher, Joan E.; Shaywitz, Bennett A.

    2009-01-01

    We studied Error-Related Negativity (ERN) and Error Positivity (Pe) during a discrimination task in 319 unmedicated children divided into subtypes of ADHD (Not-ADHD/ Inattentive/ Combined), Learning Disorder (Not-LD/Reading/Math/Reading+Math), and Oppositional Defiant Disorder. Response-locked ERPs contained a frontocentral ERN and posterior Pe. Error-related Negativity and Positivity exhibited larger amplitude and later latency than corresponding waves for correct responses matched on reaction time. ADHD did not affect performance on the task. The ADHD/Combined sample exceeded controls in ERN amplitude, perhaps reflecting patients’ adaptive monitoring efforts. Compared with controls, subjects with Reading Disorder and Reading+Math Disorder performed worse on the task and had marginally more negative Correct-Related Negativities. In contrast, Pe/Pc was smaller in children with Reading+Math Disorder than among subjects with Reading Disorder and Not-LD participants; this nonspecific finding is not attributable to error processing. The results reflect anomalies in error processing in these disorders but further research is needed to address inconsistencies in the literature. PMID:17257731

  10. Error-related event-related potentials in children with attention-deficit hyperactivity disorder, oppositional defiant disorder, reading disorder, and math disorder.

    PubMed

    Burgio-Murphy, Andrea; Klorman, Rafael; Shaywitz, Sally E; Fletcher, Jack M; Marchione, Karen E; Holahan, John; Stuebing, Karla K; Thatcher, Joan E; Shaywitz, Bennett A

    2007-04-01

    We studied error-related negativity (ERN) and error positivity (Pe) during a discrimination task in 319 unmedicated children divided into subtypes of ADHD (Not-ADHD/inattentive/combined), learning disorder (Not-LD/reading/math/reading+math), and oppositional defiant disorder. Response-locked ERPs contained a frontocentral ERN and posterior Pe. Error-related negativity and positivity exhibited larger amplitude and later latency than corresponding waves for correct responses matched on reaction time. ADHD did not affect performance on the task. The ADHD/combined sample exceeded controls in ERN amplitude, perhaps reflecting patients' adaptive monitoring efforts. Compared with controls, subjects with reading disorder and reading+math disorder performed worse on the task and had marginally more negative correct-related negativities. In contrast, Pe/Pc was smaller in children with reading+math disorder than among subjects with reading disorder and Not-LD participants; this nonspecific finding is not attributable to error processing. The results reflect anomalies in error processing in these disorders but further research is needed to address inconsistencies in the literature.

  11. Challenging the 10-year rule: The accuracy of patient life expectancy predictions by physicians in relation to prostate cancer management

    PubMed Central

    Leung, Kevin M.Y.B.; Hopman, Wilma M; Kawakami, Jun

    2012-01-01

    Introduction: We assess physicians’ ability to accurately predict life expectancies. In prostate cancer this prediction is especially important as it affects screening decisions. No previous studies have examined accuracy in the context of real cases and concrete end points. Methods: Seven clinical scenarios were summarized from charts of deceased patients. We recruited 100 medical professionals to review these scenarios and estimate each patient’s life expectancy. Responses were analyzed with respect to the patients’ actual survival end points, then stratified based on the demographic information provided. Results: Respondent factors, such as sex, level of training, location of work or specialty, made no significant difference on prediction accuracy. Furthermore, respondents were typically pessimistic in their estimations with a negative linear trend between estimated life expectancy and actual survival. Overall, respondents were within 1 year of actual life expectancy only 15.9% of the time; on average, respondents were 67.4% inaccurate in relation to actual survival. If framed in terms of correctly identifying which patients would live more than or less than 10 years (dichotomous accuracy), physicians were correct 68.3% of the time. Conclusions: Physicians do poorly at predicting life expectancy and tend to underestimate how long patients have left to live. This overall inaccuracy raises the question of whether physicians should refine screening and treatment criteria, find a better proxy or dispose of the criteria altogether. PMID:23093629
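
    The two accuracy notions used in this study, being within one year of actual survival and falling on the correct side of the 10-year cut-off, can be expressed directly. A minimal sketch using hypothetical estimates (not the study's responses):

```python
def prediction_accuracy(predicted_years, actual_years, threshold=10.0):
    # Fraction of estimates within one year of actual survival, and fraction
    # correctly classified on the more/less-than-threshold (dichotomous) split.
    n = len(predicted_years)
    pairs = list(zip(predicted_years, actual_years))
    within_1y = sum(abs(p - a) <= 1.0 for p, a in pairs) / n
    dichotomous = sum((p >= threshold) == (a >= threshold) for p, a in pairs) / n
    return within_1y, dichotomous

# Hypothetical life-expectancy estimates vs actual survival, in years.
pred = [3.0, 12.0, 8.0, 15.0]
act = [6.5, 11.2, 4.0, 9.0]
w1, dich = prediction_accuracy(pred, act)
```

    As in the study, dichotomous accuracy can be far higher than within-one-year accuracy, since the coarse split forgives large absolute errors on the correct side of the cut-off.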

  12. Approximating relational observables by absolute quantities: a quantum accuracy-size trade-off

    NASA Astrophysics Data System (ADS)

    Miyadera, Takayuki; Loveridge, Leon; Busch, Paul

    2016-05-01

    The notion that any physical quantity is defined and measured relative to a reference frame is traditionally not explicitly reflected in the theoretical description of physical experiments where, instead, the relevant observables are typically represented as ‘absolute’ quantities. However, the emergence of the resource theory of quantum reference frames as a new branch of quantum information science in recent years has highlighted the need to identify the physical conditions under which a quantum system can serve as a good reference. Here we investigate the conditions under which, in quantum theory, an account in terms of absolute quantities can provide a good approximation of relative quantities. We find that this requires the reference system to be large in a suitable sense.

  13. Investigation of Reversal Errors in Reading in Normal and Poor Readers as Related to Critical Factors in Reading Materials. Final Report.

    ERIC Educational Resources Information Center

    Liberman, Isabelle Y.; Shankweiler, Donald

    Reversals in poor and normal second-grade readers were studied in relation to their whole phonological error pattern in reading real words and nonsense syllables. Error categories included sequence and orientation reversals, other consonants, vowels, and total error. Reversals occurred in quantity only in poor readers, with large individual…

  14. Diagnostic Accuracy Study of an Oscillometric Ankle-Brachial Index in Peripheral Arterial Disease: The Influence of Oscillometric Errors and Calcified Legs

    PubMed Central

    Martínez-Vizcaíno, Vicente; Cavero-Redondo, Iván; Álvarez-Bueno, Celia; Garrido-Miguel, Miriam; Notario-Pacheco, Blanca

    2016-01-01

    Background Peripheral arterial disease (PAD) is an indicator of widespread atherosclerosis. However, most individuals with PAD, despite being at high cardiovascular risk, are asymptomatic. This fact, together with the limitations of the Doppler ankle-brachial index (ABI), contributes to the underdiagnosis of PAD. The aim of this study was to compare the oscillometric ABI with the Doppler ABI for diagnosing peripheral arterial disease, and to examine the influence of oscillometric errors and calcified legs on PAD diagnoses. Methods and Findings We measured the ankle-brachial indexes of 90 volunteers (n = 180 legs, age 70 ± 14 years, 43% diabetics) using both the OMRON-M3 oscillometer and Doppler. For concordance analyses we used the Bland and Altman method, and also estimated the intraclass correlation coefficient. Receiver operating characteristic curves were used to examine the diagnostic performance of both methods. The mean ABIs were 1.06 ± 0.14 and 1.04 ± 0.16 (p = 0.034) measured by oscillometer and Doppler respectively, with limits of agreement of ± 0.20 and an intraclass correlation coefficient of 0.769. The oscillometer yielded 23 “error” measurements and overestimated measurements at low ankle pressures. Using Doppler as the gold standard, oscillometer performance for the diagnosis of PAD showed an area under the curve of 0.944 (sensitivity: 66.7%, specificity: 96.8%). Moreover, when calcified legs and oscillometric “error” readings were considered arteriopathy equivalents, sensitivity rose to 78.2% while specificity remained at 96%. The best oscillometer cut-off point was 0.96 (sensitivity: 87%, specificity: 91%, positive likelihood ratio: 9.66, negative likelihood ratio: 0.14). Conclusion Despite its limitations, the oscillometric ABI could be a useful tool for the diagnosis of PAD, particularly when calcified legs and oscillometric “error” readings are considered peripheral arterial disease equivalents. PMID:27898734
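
    A quick sketch of how the reported diagnostic-accuracy figures relate to a 2×2 confusion table; the counts below are hypothetical, chosen only for illustration, and are not taken from the study's data.

```python
# Sensitivity, specificity and likelihood ratios from a 2x2 confusion table.
# NOTE: the tp/fn/fp/tn counts used below are hypothetical, not study data.
def diagnostic_metrics(tp, fn, fp, tn):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    lr_pos = sensitivity / (1 - specificity)   # odds multiplier for a positive test
    lr_neg = (1 - sensitivity) / specificity   # odds multiplier for a negative test
    return sensitivity, specificity, lr_pos, lr_neg

sens, spec, lr_pos, lr_neg = diagnostic_metrics(tp=20, fn=3, fp=14, tn=143)
print(f"sens={sens:.1%} spec={spec:.1%} LR+={lr_pos:.2f} LR-={lr_neg:.2f}")
```

    Reclassifying "error" readings as disease-positive, as the study does, simply moves counts between the cells of this table, which is why sensitivity can rise while specificity barely changes.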

  15. Alaska national hydrography dataset positional accuracy assessment study

    USGS Publications Warehouse

    Arundel, Samantha; Yamamoto, Kristina H.; Constance, Eric; Mantey, Kim; Vinyard-Houx, Jeremy

    2013-01-01

    Initial visual assessments showed a wide range in the quality of fit between features in the NHD and these new image sources, but no statistical analysis has been performed to actually quantify accuracy. Determining absolute accuracy is cost prohibitive, since independent, well-defined test points must be collected; however, quantitative analysis of relative positional error is feasible.

  16. A Method of DTM Construction Based on Quadrangular Irregular Networks and Related Error Analysis.

    PubMed

    Kang, Mengjun; Wang, Mingjun; Du, Qingyun

    2015-01-01

    A new method of DTM construction based on quadrangular irregular networks (QINs) that considers all the original data points and has a topological matrix is presented. A numerical test and a real-world example are used to comparatively analyse the accuracy of QINs against classical interpolation methods and other DTM representation methods, including SPLINE, KRIGING and triangulated irregular networks (TINs). The numerical test finds that the QIN method is the second-most accurate of the four methods. In the real-world example, DTMs are constructed using QINs and the three classical interpolation methods. The results indicate that the QIN method is the most accurate method tested. The difference in accuracy rank seems to be caused by the locations of the data points sampled. Although the QIN method has drawbacks, it is an alternative method for DTM construction.
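
    As an illustration of this kind of accuracy comparison, the sketch below interpolates a known synthetic surface from scattered samples and scores the result by root-mean-square error. Inverse-distance weighting stands in for the interpolators; QIN, SPLINE, KRIGING, and TIN construction are not implemented here.

```python
import math

# Inverse-distance weighting as a stand-in interpolator for a DTM-style
# accuracy comparison against a known synthetic surface.
def idw(x, y, samples, power=2):
    """Inverse-distance-weighted estimate at (x, y) from (xs, ys, z) samples."""
    num = den = 0.0
    for xs, ys, z in samples:
        d2 = (x - xs) ** 2 + (y - ys) ** 2
        if d2 == 0.0:
            return z                      # exactly on a sample point
        w = 1.0 / d2 ** (power / 2)
        num += w * z
        den += w
    return num / den

surface = lambda x, y: math.sin(x) + math.cos(y)     # known synthetic "terrain"
samples = [(i * 0.5, j * 0.5, surface(i * 0.5, j * 0.5))
           for i in range(7) for j in range(7)]      # sampled data points
grid = [(0.3 + i * 0.4, 0.3 + j * 0.4) for i in range(6) for j in range(6)]
rmse = math.sqrt(sum((idw(x, y, samples) - surface(x, y)) ** 2
                     for x, y in grid) / len(grid))
print(f"IDW root-mean-square error on the test grid: {rmse:.4f}")
```

    Scoring each candidate method by the same RMSE on the same test grid is what allows the accuracy ranking described in the abstract.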

  17. Relative efficiency and accuracy of two Navier-Stokes codes for simulating attached transonic flow over wings

    NASA Technical Reports Server (NTRS)

    Bonhaus, Daryl L.; Wornom, Stephen F.

    1991-01-01

    Two codes which solve the 3-D Thin Layer Navier-Stokes (TLNS) equations are used to compute the steady state flow for two test cases representing typical finite wings at transonic conditions. Several grids of C-O topology and varying point densities are used to determine the effects of grid refinement. After a description of each code and test case, standards for determining code efficiency and accuracy are defined and applied to determine the relative performance of the two codes in predicting turbulent transonic wing flows. Comparisons of computed surface pressure distributions with experimental data are made.

  18. Error-related processing following severe traumatic brain injury: an event-related functional magnetic resonance imaging (fMRI) study.

    PubMed

    Sozda, Christopher N; Larson, Michael J; Kaufman, David A S; Schmalfuss, Ilona M; Perlstein, William M

    2011-10-01

    Continuous monitoring of one's performance is invaluable for guiding behavior towards successful goal attainment by identifying deficits and strategically adjusting responses when performance is inadequate. In the present study, we exploited the advantages of event-related functional magnetic resonance imaging (fMRI) to examine brain activity associated with error-related processing after severe traumatic brain injury (sTBI). fMRI and behavioral data were acquired while 10 sTBI participants and 12 neurologically-healthy controls performed a task-switching cued-Stroop task. fMRI data were analyzed using a random-effects whole-brain voxel-wise general linear model and planned linear contrasts. Behaviorally, sTBI patients showed greater error-rate interference than neurologically-normal controls. fMRI data revealed that, compared to controls, sTBI patients showed greater magnitude error-related activation in the anterior cingulate cortex (ACC) and an increase in the overall spatial extent of error-related activation across cortical and subcortical regions. Implications for future research and potential limitations in conducting fMRI research in neurologically-impaired populations are discussed, as well as some potential benefits of employing multimodal imaging (e.g., fMRI and event-related potentials) of cognitive control processes in TBI.

  19. Evaluation of the Quantitative Accuracy of 3D Reconstruction of Edentulous Jaw Models with Jaw Relation Based on Reference Point System Alignment

    PubMed Central

    Li, Weiwei; Yuan, Fusong; Lv, Peijun; Wang, Yong; Sun, Yuchun

    2015-01-01

    Objectives To apply contact measurement and reference point system (RPS) alignment techniques to establish a method for 3D reconstruction of edentulous jaw models in centric relation, and to quantitatively evaluate its accuracy. Methods Upper and lower edentulous jaw models were clinically prepared, and 10 pairs of resin cylinders of the same size were adhered to the axial surfaces of the upper and lower models. The occlusal bases and the upper and lower jaw models were installed in the centric relation position. A Faro Edge 1.8m was used to directly obtain the center points of the base surfaces of the cylinders (contact method). An Activity 880 dental scanner was used to obtain 3D data of the cylinders, from which the center points were fitted (fitting method). Three pairs of center points were used to align the virtual model to centric relation. An observation coordinate system was interactively established. The straight-line distances in X (horizontal left/right), Y (horizontal anterior/posterior), and Z (vertical) between the remaining 7 pairs of center points derived from the contact and fitting methods were measured and analyzed using a paired t-test. Results The differences in the straight-line distances of the remaining 7 pairs of center points between the two methods were X: 0.074 ± 0.107 mm, Y: 0.168 ± 0.176 mm, and Z: −0.003 ± 0.155 mm. The paired t-test results were p > 0.05 for X and Z, and p < 0.05 for Y. Conclusion By using contact measurement and the reference point system alignment technique, highly accurate reconstruction of the vertical distance and centric relation of a digital edentulous jaw model can be achieved, which meets the design and manufacturing requirements of complete dentures. The error in the horizontal anterior/posterior jaw relation was relatively large. PMID:25659133
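
    The per-axis analysis above boils down to a one-sample t-test on paired distance differences. A minimal standard-library sketch; the differences below are hypothetical, not the study's measurements.

```python
import math
from statistics import mean, stdev

def paired_t(differences):
    """t statistic and degrees of freedom for H0: mean difference = 0."""
    n = len(differences)
    s_d = stdev(differences)                      # sample standard deviation
    t = mean(differences) / (s_d / math.sqrt(n))
    return t, n - 1

# Hypothetical Y-axis differences (mm) for 7 center-point pairs
diffs_y = [0.15, 0.20, 0.31, 0.02, 0.18, 0.25, 0.07]
t, df = paired_t(diffs_y)
print(f"t = {t:.2f} on {df} degrees of freedom")
```

    A t statistic this far from zero on 6 degrees of freedom yields p < 0.05, the pattern the study reports for the Y axis.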

  20. Evaluation of measurement errors of temperature and relative humidity from HOBO data logger under different conditions of exposure to solar radiation.

    PubMed

    da Cunha, Antonio Ribeiro

    2015-05-01

    This study aimed to assess measurements of temperature and relative humidity obtained with a HOBO data logger under various conditions of exposure to solar radiation, comparing them with those obtained using a temperature/relative humidity probe and a copper-constantan thermocouple psychrometer, which are considered the standards for such measurements. Data were collected over a 6-day period (from 25 March to 1 April, 2010), during which the equipment was monitored continuously and simultaneously. We employed the following combinations of equipment and conditions: a HOBO data logger in full sunlight; a HOBO data logger shielded within a white plastic cup with windows for air circulation; a HOBO data logger shielded within a gill-type shelter (a multi-plate plastic prototype); a copper-constantan thermocouple psychrometer exposed to natural ventilation and protected from sunlight; and a temperature/relative humidity probe under a commercial multi-plate radiation shield. Comparisons between the measurements obtained with the various devices were made on the basis of statistical indicators: linear regression, with the coefficient of determination; index of agreement; maximum absolute error; and mean absolute error. The prototype multi-plate (gill-type) shelter used to protect the HOBO data logger was found to provide the best protection against the effects of solar radiation on measurements of temperature and relative humidity. The precision and accuracy of a device that measures temperature and relative humidity depend on an efficient shelter that minimizes the interference caused by solar radiation, thereby avoiding erroneous analysis of the data obtained.
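
    The comparison indicators named above are simple to compute. A sketch with made-up observation/prediction pairs, using Willmott's index of agreement as a common form of the "index of agreement" mentioned (the data values are hypothetical):

```python
from statistics import mean

def agreement_stats(obs, pred):
    """Mean absolute error, maximum absolute error, and Willmott's index
    of agreement (d) between observed and predicted series."""
    o_bar = mean(obs)
    mae = mean(abs(p - o) for p, o in zip(pred, obs))
    max_ae = max(abs(p - o) for p, o in zip(pred, obs))
    d = 1 - sum((p - o) ** 2 for p, o in zip(pred, obs)) / \
            sum((abs(p - o_bar) + abs(o - o_bar)) ** 2 for p, o in zip(pred, obs))
    return mae, max_ae, d

obs  = [25.0, 26.1, 27.3, 28.0, 26.5]   # e.g. reference probe (deg C)
pred = [25.4, 26.0, 27.9, 28.6, 26.2]   # e.g. shielded logger (deg C)
mae, max_ae, d = agreement_stats(obs, pred)
print(f"MAE={mae:.2f} max abs error={max_ae:.2f} d={d:.3f}")
```

    Willmott's d runs from 0 (no agreement) to 1 (perfect agreement), which makes it convenient for ranking the shelter configurations against the reference instruments.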

  1. Correction of confidence intervals in excess relative risk models using Monte Carlo dosimetry systems with shared errors

    PubMed Central

    Preston, Dale L.; Sokolnikov, Mikhail; Napier, Bruce A.; Degteva, Marina; Moroz, Brian; Vostrotin, Vadim; Shiskina, Elena; Birchall, Alan; Stram, Daniel O.

    2017-01-01

    In epidemiological studies, exposures of interest are often measured with uncertainties, which may be independent or correlated. Independent errors can often be characterized relatively easily, while correlated measurement errors have shared and hierarchical components that complicate the description of their structure. For some important studies, Monte Carlo dosimetry systems that provide multiple realizations of exposure estimates have been used to represent such complex error structures. While the effects of independent measurement errors on parameter estimation and methods to correct these effects have been studied comprehensively in the epidemiological literature, the literature on the effects of correlated errors and associated correction methods is much sparser. In this paper, we implement a novel method that calculates corrected confidence intervals based on the approximate asymptotic distribution of parameter estimates in linear excess relative risk (ERR) models. These models are widely used in survival analysis, particularly in radiation epidemiology. Specifically, for the dose effect estimate of interest (increase in relative risk per unit dose), a mixture distribution consisting of a normal and a lognormal component is applied. This choice of asymptotic approximation guarantees that corrected confidence intervals will always be bounded, a result which does not hold under a normal approximation. A simulation study was conducted to evaluate the proposed method in survival analysis using a realistic ERR model. We used both simulated Monte Carlo dosimetry systems (MCDS) and actual dose histories from the Mayak Worker Dosimetry System 2013, a MCDS for plutonium exposures in the Mayak Worker Cohort. Results show that the proposed method provides much improved coverage probabilities for the dose effect parameter, and noticeable improvements for other model parameters. PMID:28369141

  2. Unintentional Pharmaceutical-Related Medication Errors Caused by Laypersons Reported to the Toxicological Information Centre in the Czech Republic.

    PubMed

    Urban, Michal; Leššo, Roman; Pelclová, Daniela

    2016-07-01

    The purpose of the article was to study unintentional pharmaceutical-related poisonings committed by laypersons that were reported to the Toxicological Information Centre in the Czech Republic. Identifying frequency, sources, reasons and consequences of the medication errors in laypersons could help to reduce the overall rate of medication errors. Records of medication error enquiries from 2013 to 2014 were extracted from the electronic database, and the following variables were reviewed: drug class, dosage form, dose, age of the subject, cause of the error, time interval from ingestion to the call, symptoms, prognosis at the time of the call and first aid recommended. Of the calls, 1354 met the inclusion criteria. Among them, central nervous system-affecting drugs (23.6%), respiratory drugs (18.5%) and alimentary drugs (16.2%) were the most common drug classes involved in the medication errors. The highest proportion of the patients was in the youngest age subgroup 0-5 year-old (46%). The reasons for the medication errors involved the leaflet misinterpretation and mistaken dose (53.6%), mixing up medications (19.2%), attempting to reduce pain with repeated doses (6.4%), erroneous routes of administration (2.2%), psychiatric/elderly patients (2.7%), others (9.0%) or unknown (6.9%). A high proportion of children among the patients may be due to the fact that children's dosages for many drugs vary by their weight, and more medications come in a variety of concentrations. Most overdoses could be prevented by safer labelling, proper cap closure systems for liquid products and medication reconciliation by both physicians and pharmacists.

  3. The safety of electronic prescribing: manifestations, mechanisms, and rates of system-related errors associated with two commercial systems in hospitals

    PubMed Central

    Westbrook, Johanna I; Baysari, Melissa T; Li, Ling; Burke, Rosemary; Richardson, Katrina L; Day, Richard O

    2013-01-01

    Objectives To compare the manifestations, mechanisms, and rates of system-related errors associated with two electronic prescribing systems (e-PS). To determine if the rate of system-related prescribing errors is greater than the rate of errors prevented. Methods Audit of 629 inpatient admissions at two hospitals in Sydney, Australia using the CSC MedChart and Cerner Millennium e-PS. System related errors were classified by manifestation (eg, wrong dose), mechanism, and severity. A mechanism typology comprised errors made: selecting items from drop-down menus; constructing orders; editing orders; or failing to complete new e-PS tasks. Proportions and rates of errors by manifestation, mechanism, and e-PS were calculated. Results 42.4% (n=493) of 1164 prescribing errors were system-related (78/100 admissions). This result did not differ by e-PS (MedChart 42.6% (95% CI 39.1 to 46.1); Cerner 41.9% (37.1 to 46.8)). For 13.4% (n=66) of system-related errors there was evidence that the error was detected prior to study audit. 27.4% (n=135) of system-related errors manifested as timing errors and 22.5% (n=111) wrong drug strength errors. Selection errors accounted for 43.4% (34.2/100 admissions), editing errors 21.1% (16.5/100 admissions), and failure to complete new e-PS tasks 32.0% (32.0/100 admissions). MedChart generated more selection errors (OR=4.17; p=0.00002) but fewer new task failures (OR=0.37; p=0.003) relative to the Cerner e-PS. The two systems prevented significantly more errors than they generated (220/100 admissions (95% CI 180 to 261) vs 78 (95% CI 66 to 91)). Conclusions System-related errors are frequent, yet few are detected. e-PS require new tasks of prescribers, creating additional cognitive load and error opportunities. Dual classification, by manifestation and mechanism, allowed identification of design features which increase risk and potential solutions. e-PS designs with fewer drop-down menu selections may reduce error risk. PMID:23721982

  4. Sampling of soil moisture fields and related errors: implications to the optimal sampling design

    NASA Astrophysics Data System (ADS)

    Yoo, Chulsang

    Adequate knowledge of soil moisture storage as well as evaporation and transpiration at the land surface is essential to the understanding and prediction of the reciprocal influences between land surface processes and weather and climate. Traditional techniques for soil moisture measurements are ground-based, but space-based sampling is becoming available due to recent improvement of remote sensing techniques. A fundamental question regarding the soil moisture observation is to estimate the sampling error for a given sampling scheme [G.R. North, S. Nakamoto, J Atmos. Ocean Tech. 6 (1989) 985-992; G. Kim, J.B. Valdes, G.R. North, C. Yoo, J. Hydrol., submitted]. In this study we provide the formalism for estimating the sampling errors for the cases of ground-based sensors and space-based sensors used both separately and together. For the study a model for soil moisture dynamics by D. Entekhabi, I. Rodriguez-Iturbe [Adv. Water Res. 17 (1994) 35-45] is introduced and an example application is given to the Little Washita basin using the Washita '92 soil moisture data. As a result of the study we found that the ground-based sensor network is ineffective for large- or continental-scale observation and should instead be limited to small-scale intensive observations, such as preliminary studies.

  5. Motor imagery, P300 and error-related EEG-based robot arm movement control for rehabilitation purpose.

    PubMed

    Bhattacharyya, Saugat; Konar, Amit; Tibarewala, D N

    2014-12-01

    The paper proposes a novel approach toward EEG-driven position control of a robot arm by utilizing motor imagery, P300 and error-related potentials (ErRP) to align the robot arm with the desired target position. In the proposed scheme, the users generate motor imagery signals to control the motion of the robot arm. The P300 waveforms are detected when the user intends to stop the motion of the robot on reaching the goal position. The error potentials are employed as a feedback response by the user. On detection of an error, the control system performs the necessary corrections on the robot arm. Here, an AdaBoost-Support Vector Machine (SVM) classifier is used to decode the 4-class motor imagery and an SVM is used to detect the presence of P300 and ErRP waveforms. The average steady-state error, peak overshoot, and settling time obtained for our proposed approach are 0.045, 2.8%, and 44 s, respectively, and the average rate of reaching the target is 95%. The results obtained for the proposed control scheme make it suitable for the design of prosthetics in rehabilitative applications.
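
    The reported steady-state error, peak overshoot, and settling time can be computed from a sampled trajectory as in the sketch below; the trajectory and tolerance are hypothetical, and the paper's exact definitions may differ.

```python
def step_metrics(t, y, target, tol=0.02):
    """Steady-state error, percent overshoot and settling time from a
    sampled trajectory y(t) approaching `target` (assumes target != 0)."""
    steady_state_error = abs(target - y[-1])
    overshoot_pct = max(0.0, (max(y) - target) / target * 100)
    settling = t[-1]
    # settling time: first instant after which y stays within tol*target of target
    for i in range(len(y)):
        if all(abs(v - target) <= tol * target for v in y[i:]):
            settling = t[i]
            break
    return steady_state_error, overshoot_pct, settling

# Hypothetical sampled response toward a target of 1.0
t_s = [0, 1, 2, 3, 4, 5]                    # seconds
y_s = [0.0, 0.8, 1.1, 1.04, 1.0, 1.0]
sse, ov, st = step_metrics(t_s, y_s, target=1.0, tol=0.05)
print(f"steady-state error={sse}, overshoot={ov:.1f}%, settling time={st} s")
```
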

  6. Study on the error in the dynamic spectrum method relative with the pathlength factor as a function of wavelength.

    PubMed

    Wang, Y; Li, G; Lin, L; Liu, Y L; Li, X X

    2005-01-01

    Utilizing near-infrared spectroscopy for non-invasive sensing of blood component concentrations has been a topic of focus in biomedical optics. The ease of use, low cost, and portability of these methods are clear advantages over invasive blood component sensing, which remains the main method in clinical practice. However, no non-invasive technique for detecting blood component concentrations (other than arterial blood oxygen saturation) has been reported that meets the requirements of clinical application. One of the key difficulties is the influence of individual discrepancy. Dynamic spectrum (DS) is a recently presented measurement method for non-invasive sensing of blood component concentrations. In theory, it can eliminate the individual discrepancies of the tissues other than the pulsatile component of the arterial blood (PCAB). This indicates a brand new way to measure blood component concentrations and a potential to provide absolute quantitation of hemodynamic variables. One of the systematic errors in calculating component changes from NIRS data of the dynamic spectrum arises from the absolute magnitudes and relative differences of the pathlength factor as a function of wavelength. Monte Carlo simulations are used in this paper to examine the importance and mitigation of this error when the photoelectric pulse wave is detected at the fingertip. We found wavelength selection to be an important variable in minimizing such errors, and appropriately replacing the average pathlength factor with a subsection pathlength factor could reduce the error to a small fraction (10%).

  7. Modeling and calibration of pointing errors with alt-az telescope

    NASA Astrophysics Data System (ADS)

    Huang, Long; Ma, Wenli; Huang, Jinlong

    2016-08-01

    This paper presents a new model for improving the pointing accuracy of a telescope. The Denavit-Hartenberg (D-H) convention was used to perform an error analysis of the telescope's kinematics. A kinematic model was used to relate pointing errors to mechanical errors, and the parameters of the kinematic model were estimated with a statistical model fit using data from two large astronomical telescopes. The model illustrates the geometric errors caused by imprecision in the manufacturing and assembly processes and their effects on the pointing accuracy of the telescope. The kinematic model relates pointing error to axis position when certain geometric errors are assumed to be present in the telescope. For parameter estimation, a semi-parametric regression model was introduced to compensate for the remaining nonlinear errors. The experimental results indicate that the proposed semi-parametric regression model eliminates both geometric and nonlinear errors, and that the telescope's pointing accuracy improves significantly after this calibration.

  8. Heat production and error probability relation in Landauer reset at effective temperature

    PubMed Central

    Neri, Igor; López-Suárez, Miquel

    2016-01-01

    The erasure of a classical bit of information is a dissipative process. The minimum heat produced during this operation was theorized by Rolf Landauer in 1961 to be equal to kBT ln2 and takes the name of the Landauer limit, Landauer reset, or Landauer principle. Despite its fundamental importance, the Landauer limit remained experimentally untested for more than fifty years, until it was recently tested using colloidal particles and magnetic dots. Experimental measurements on other devices, such as micro-mechanical systems or nano-electronic devices, are still missing. Here we show the results obtained in performing the Landauer reset operation in a micro-mechanical system operated at an effective temperature. The measured heat exchange is in accordance with theory, reaching values close to the expected limit. The data obtained for the heat production are then correlated with the probability of error in accomplishing the reset operation. PMID:27669898
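
    The kBT ln2 limit quoted above is straightforward to evaluate numerically; a quick check at room temperature, using the exact SI value of the Boltzmann constant:

```python
import math

# Evaluating the Landauer limit k_B * T * ln 2 quoted above.
k_B = 1.380649e-23   # Boltzmann constant, J/K (exact in the 2019 SI)

def landauer_limit(T):
    """Minimum heat (joules) dissipated when erasing one bit at temperature T (K)."""
    return k_B * T * math.log(2)

print(f"Landauer limit at 300 K: {landauer_limit(300):.3e} J")  # about 2.87e-21 J
```

    At room temperature this is a few zeptojoules per bit, which is why experiments need exquisitely sensitive calorimetry (or, as here, an effective temperature) to resolve it.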

  9. Heat production and error probability relation in Landauer reset at effective temperature.

    PubMed

    Neri, Igor; López-Suárez, Miquel

    2016-09-27

    The erasure of a classical bit of information is a dissipative process. The minimum heat produced during this operation was theorized by Rolf Landauer in 1961 to be equal to kBT ln2 and takes the name of the Landauer limit, Landauer reset, or Landauer principle. Despite its fundamental importance, the Landauer limit remained experimentally untested for more than fifty years, until it was recently tested using colloidal particles and magnetic dots. Experimental measurements on other devices, such as micro-mechanical systems or nano-electronic devices, are still missing. Here we show the results obtained in performing the Landauer reset operation in a micro-mechanical system operated at an effective temperature. The measured heat exchange is in accordance with theory, reaching values close to the expected limit. The data obtained for the heat production are then correlated with the probability of error in accomplishing the reset operation.

  10. A hardware error estimate for floating-point computations

    NASA Astrophysics Data System (ADS)

    Lang, Tomás; Bruguera, Javier D.

    2008-08-01

    We propose a hardware-computed estimate of the roundoff error in floating-point computations. The estimate is computed concurrently with the execution of the program and gives an estimation of the accuracy of the result. The intention is to have a qualitative indication when the accuracy of the result is low. We aim for a simple implementation and a negligible effect on the execution of the program. Large errors due to roundoff occur in some computations, producing inaccurate results. However, usually these large errors occur only for some values of the data, so that the result is accurate in most executions. As a consequence, the computation of an estimate of the error during execution would allow the use of algorithms that produce accurate results most of the time. In contrast, if an error estimate is not available, the solution is to perform an error analysis. However, this analysis is complex or impossible in some cases, and it produces a worst-case error bound. The proposed approach is to keep with each value an estimate of its error, which is computed when the value is produced. This error is the sum of a propagated error, due to the errors of the operands, plus the generated error due to roundoff during the operation. Since roundoff errors are signed values (when rounding to nearest is used), the computation of the error allows for compensation when errors are of different sign. However, since the error estimate is of finite precision, it suffers from similar accuracy problems as any floating-point computation. Moreover, it is not an error bound. Ideally, the estimate should be large when the error is large and small when the error is small. Since this cannot be achieved always with an inexact estimate, we aim at assuring the first property always, and the second most of the time. As a minimum, we aim to produce a qualitative indication of the error. To indicate the accuracy of the value, the most appropriate type of error is the relative error. However
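
    The scheme described (a propagated error plus a generated roundoff error per operation) can be sketched in software. The class below is a simplified, magnitude-based illustration that charges half an ulp of the result as the generated error; it is not the paper's signed hardware design, and it does not track the representation error of literals.

```python
import math

class EstFloat:
    """A value paired with a running estimate of its accumulated roundoff
    error: each operation propagates the operands' errors and adds the
    roundoff generated by the operation (at most half an ulp of the result
    under round-to-nearest). Magnitude-based sketch for illustration only."""

    def __init__(self, value, err=0.0):
        self.value = value
        self.err = err      # estimate of accumulated absolute error

    @staticmethod
    def _gen(v):
        # generated roundoff: at most half an ulp when rounding to nearest
        return 0.5 * math.ulp(v) if v else 0.0

    def __add__(self, other):
        v = self.value + other.value
        return EstFloat(v, self.err + other.err + self._gen(v))

    def __mul__(self, other):
        v = self.value * other.value
        propagated = abs(self.value) * other.err + abs(other.value) * self.err
        return EstFloat(v, propagated + self._gen(v))

# Summing 0.1 ten times: the true sum is 1, the float result is not.
total = EstFloat(0.0)
for _ in range(10):
    total = total + EstFloat(0.1)   # note: 0.1's own representation error not tracked
print(total.value, "+/-", total.err)
```

    Because magnitudes are summed, this variant behaves like a cheap running error bound; the paper's signed scheme instead allows errors of opposite sign to cancel, giving a tighter (but still inexact) estimate.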

  11. Futsal Match-Related Fatigue Affects Running Performance and Neuromuscular Parameters but Not Finishing Kick Speed or Accuracy

    PubMed Central

    Milioni, Fabio; Vieira, Luiz H. P.; Barbieri, Ricardo A.; Zagatto, Alessandro M.; Nordsborg, Nikolai B.; Barbieri, Fabio A.; dos-Santos, Júlio W.; Santiago, Paulo R. P.; Papoti, Marcelo

    2016-01-01

    Purpose: The aim of the present study was to investigate the influence of futsal match-related fatigue on running performance, neuromuscular variables, and finishing kick speed and accuracy. Methods: Ten professional futsal players participated in the study (age: 22.2 ± 2.5 years) and initially performed an incremental protocol to determine maximum oxygen uptake (V˙O2max: 50.6 ± 4.9 mL.kg−1.min−1). Next, simulated games were performed, in four periods of 10 min during which heart rate and blood lactate concentration were monitored. The entire games were video recorded for subsequent automatic tracking. Before and immediately after the simulated game, neuromuscular function was measured by maximal isometric force of knee extension, voluntary activation using the twitch interpolation technique, and electromyographic activity. Before, at half time, and immediately after the simulated game, the athletes also performed a set of finishing kicks for ball speed and accuracy measurements. Results: Total distance covered (1st half: 1986.6 ± 74.4 m; 2nd half: 1856.0 ± 129.7 m, P = 0.00) and distance covered per minute (1st half: 103.2 ± 4.4 m.min−1; 2nd half: 96.4 ± 7.5 m.min−1, P = 0.00) demonstrated significant declines during the simulated game, as did maximal isometric force of knee extension (Before: 840.2 ± 66.2 N; After: 751.6 ± 114.3 N, P = 0.04) and voluntary activation (Before: 85.9 ± 7.5%; After: 74.1 ± 12.3%, P = 0.04); however, ball speed and accuracy during the finishing kicks were not significantly affected. Conclusion: Therefore, we conclude that despite the decline in running performance and neuromuscular variables presenting an important manifestation of central fatigue, this condition apparently does not affect the speed and accuracy of finishing kicks. PMID:27872598

  12. Bottom-Up Mechanisms Are Involved in the Relation between Accuracy in Timing Tasks and Intelligence--Further Evidence Using Manipulations of State Motivation

    ERIC Educational Resources Information Center

    Ullen, Fredrik; Soderlund, Therese; Kaaria, Lenita; Madison, Guy

    2012-01-01

    Intelligence correlates with accuracy in various timing tasks. Such correlations could be due to both bottom-up mechanisms, e.g. neural properties that influence both temporal accuracy and cognitive processing, and differences in top-down control. We have investigated the timing-intelligence relation using a simple temporal motor task, isochronous…

  13. Modulation of feedback-related negativity during trial-and-error exploration and encoding of behavioral shifts

    PubMed Central

    Sallet, Jérôme; Camille, Nathalie; Procyk, Emmanuel

    2013-01-01

    The feedback-related negativity (FRN) is a mid-frontal event-related potential (ERP) recorded in various cognitive tasks and associated with the onset of sensory feedback signaling decision outcome. Some properties of the FRN are still debated, notably its sensitivity to positive and negative reward prediction error (RPE)—i.e., the discrepancy between the expectation and the actual occurrence of a particular feedback,—and its role in triggering the post-feedback adjustment. In the present study we tested whether the FRN is modulated by both positive and negative RPE. We also tested whether an instruction cue indicating the need for behavioral adjustment elicited the FRN. We asked 12 human subjects to perform a problem-solving task where they had to search by trial and error which of five visual targets, presented on a screen, was associated with a correct feedback. After exploration and discovery of the correct target, subjects could repeat their correct choice until the onset of a visual signal to change (SC) indicative of a new search. Analyses showed that the FRN was modulated by both negative and positive prediction error (RPE). Finally, we found that the SC elicited an FRN-like potential on the frontal midline electrodes that was not modulated by the probability of that event. Collectively, these results suggest the FRN may reflect a mechanism that evaluates any event (outcome, instruction cue) signaling the need to engage adaptive actions. PMID:24294190

  14. Error-related brain activity is related to aversive potentiation of the startle response in children, but only the ERN is associated with anxiety disorders.

    PubMed

    Meyer, Alexandria; Hajcak, Greg; Glenn, Catherine R; Kujawa, Autumn J; Klein, Daniel N

    2017-04-01

    Identifying biomarkers that characterize developmental trajectories leading to anxiety disorders will likely improve early intervention strategies as well as increase our understanding of the etiopathogenesis of these disorders. The error-related negativity (ERN), an event-related potential that occurs during error commission, is increased in anxious adults and children, and has been shown to predict the onset of anxiety disorders across childhood. The ERN has therefore been suggested as a biomarker of anxiety. However, it remains unclear what specific processes a potentiated ERN may reflect. We have recently proposed that the ERN may reflect trait-like differences in threat sensitivity; however, very few studies have examined the ERN in relation to other indices of this construct. In the current study, the authors measured the ERN, as well as affective modulation of the startle reflex, in a large sample (N = 155) of children. Children characterized by a large ERN also exhibited greater potentiation of the startle response in the context of unpleasant images, but not in the context of neutral or pleasant images. In addition, the ERN, but not startle response, related to child anxiety disorder status. These results suggest a relationship between error-related brain activity and aversive potentiation of the startle reflex during picture viewing, consistent with the notion that both measures may reflect individual differences in threat sensitivity. However, results suggest the ERN may be a superior biomarker of anxiety in children.

  15. Cognitive control of conscious error awareness: error awareness and error positivity (Pe) amplitude in moderate-to-severe traumatic brain injury (TBI).

    PubMed

    Logan, Dustin M; Hill, Kyle R; Larson, Michael J

    2015-01-01

    Poor awareness has been linked to worse recovery and rehabilitation outcomes following moderate-to-severe traumatic brain injury (M/S TBI). The error positivity (Pe) component of the event-related potential (ERP) is linked to error awareness and cognitive control. Participants included 37 neurologically healthy controls and 24 individuals with M/S TBI who completed a brief neuropsychological battery and the error awareness task (EAT), a modified Stroop go/no-go task that elicits aware and unaware errors. Analyses compared between-group no-go accuracy (including accuracy between the first and second halves of the task to measure attention and fatigue), error awareness performance, and Pe amplitude by level of awareness. The M/S TBI group decreased in accuracy and maintained error awareness over time; control participants improved both accuracy and error awareness during the course of the task. Pe amplitude was larger for aware than unaware errors for both groups; however, consistent with previous research on the Pe and TBI, there were no significant between-group differences for Pe amplitudes. Findings suggest possible attention difficulties and low improvement of performance over time may influence specific aspects of error awareness in M/S TBI.

  16. Cognitive control of conscious error awareness: error awareness and error positivity (Pe) amplitude in moderate-to-severe traumatic brain injury (TBI)

    PubMed Central

    Logan, Dustin M.; Hill, Kyle R.; Larson, Michael J.

    2015-01-01

    Poor awareness has been linked to worse recovery and rehabilitation outcomes following moderate-to-severe traumatic brain injury (M/S TBI). The error positivity (Pe) component of the event-related potential (ERP) is linked to error awareness and cognitive control. Participants included 37 neurologically healthy controls and 24 individuals with M/S TBI who completed a brief neuropsychological battery and the error awareness task (EAT), a modified Stroop go/no-go task that elicits aware and unaware errors. Analyses compared between-group no-go accuracy (including accuracy between the first and second halves of the task to measure attention and fatigue), error awareness performance, and Pe amplitude by level of awareness. The M/S TBI group decreased in accuracy and maintained error awareness over time; control participants improved both accuracy and error awareness during the course of the task. Pe amplitude was larger for aware than unaware errors for both groups; however, consistent with previous research on the Pe and TBI, there were no significant between-group differences for Pe amplitudes. Findings suggest possible attention difficulties and low improvement of performance over time may influence specific aspects of error awareness in M/S TBI. PMID:26217212

  17. The Influence of Relatives on the Efficiency and Error Rate of Familial Searching

    PubMed Central

    Rohlfs, Rori V.; Murphy, Erin; Song, Yun S.; Slatkin, Montgomery

    2013-01-01

    We investigate the consequences of adopting the criteria used by the state of California, as described by Myers et al. (2011), for conducting familial searches. We carried out a simulation study of randomly generated profiles of related and unrelated individuals with 13-locus CODIS genotypes and YFiler® Y-chromosome haplotypes, on which the Myers protocol for relative identification was carried out. For Y-chromosome-sharing first-degree relatives, the Myers protocol has a high probability of identifying their relationship. For unrelated individuals, there is a low probability that an unrelated person in the database will be identified as a first-degree relative. For more distant Y-haplotype-sharing relatives (half-siblings, first cousins, half-first cousins or second cousins) there is a substantial probability that the more distant relative will be incorrectly identified as a first-degree relative. For example, there is a probability that a first cousin will be identified as a full sibling, with the probability depending on the population background. Although the California familial search policy is likely to identify a first-degree relative if his profile is in the database, and it poses little risk of falsely identifying an unrelated individual in a database as a first-degree relative, there is a substantial risk of falsely identifying a more distant Y-haplotype-sharing relative in the database as a first-degree relative, with the consequence that their immediate family may become the target for further investigation. This risk falls disproportionately on those ethnic groups that are currently overrepresented in state and federal databases. PMID:23967076

  18. Characterization and mitigation of relative edge placement errors (rEPE) in full-chip computational lithography

    NASA Astrophysics Data System (ADS)

    Sturtevant, John; Gupta, Rachit; Shang, Shumay; Liubich, Vlad; Word, James

    2015-10-01

    Edge placement error (EPE) was a term initially introduced to describe the difference between the predicted pattern contour edge and the design target. Strictly speaking, this quantity is not directly measurable in the fab, and furthermore it is not ultimately the most important metric for chip yield. What is of vital importance is the relative EPE (rEPE) between different design layers, and in the era of multi-patterning, between the different constituent mask sublayers for a single design layer. There has always been a strong emphasis on measurement and control of misalignment between design layers, and the progress in this realm has been remarkable, spurred at least in part by the proliferation of multi-patterning, which reduces the available overlay budget by introducing a coupling of alignment and CD errors for the target layer. In-line CD and overlay metrology specifications are typically established by starting with design rules and making certain assumptions about error distributions which might be encountered in manufacturing. Lot disposition criteria in photo metrology (rework or pass to etch) are set under worst-case assumptions for CD and overlay, respectively. For example, poly-to-active overlay specs start with poly endcap design rules, make assumptions about active and poly lot-average and across-lot CDs, and incorporate general knowledge about poly line-end rounding to ensure that leakage current is maintained within specification. This worst-case guard banding does not consider specific chip designs, however, and as we have previously shown, full-chip simulation can elucidate the most critical "hot spots" for interlayer process variability, comprehending the two-layer CD and misalignment process window. It was shown that there can be differences in X versus Y misalignment process windows, as well as positive versus negative directional misalignment process windows, and that such design-specific information might be leveraged for manufacturing disposition and…

  19. Error-related negativity (ERN) and sustained threat: Conceptual framework and empirical evaluation in an adolescent sample.

    PubMed

    Weinberg, Anna; Meyer, Alexandria; Hale-Rude, Emily; Perlman, Greg; Kotov, Roman; Klein, Daniel N; Hajcak, Greg

    2016-03-01

    The error-related negativity (ERN) currently appears as a physiological measure in relation to three Research Domain Criteria (RDoC) constructs: Cognitive Control, Sustained Threat, and Reward Learning. We propose a conceptual model in which variance in the ERN reflects individual differences in the degree to which errors are evaluated as threatening. We also discuss evidence for the placement of the ERN in the "Sustained Threat" construct, as well as evidence that the ERN may more specifically reflect sensitivity to endogenous threat. Following this, we present data from a sample of 515 adolescent females demonstrating a larger ERN in relation to self-reported checking behaviors, but only in older adolescents, suggesting that sensitivity to internal threat and the ERN-checking relationship may follow a developmental course as adolescents develop behavioral control. In contrast, depressive symptoms were linked to a smaller ERN, and this association was invariant with respect to age. Collectively, these data suggest that the magnitude of the ERN is sensitive both to specific anxiety-related processes and depression, in opposing directions that may reflect variation in internal threat sensitivity. We discuss directions for future research, as well as ways in which findings for the ERN complement and challenge aspects of the current RDoC matrix.

  20. Error-related negativity (ERN) and sustained threat: Conceptual framework and empirical evaluation in an adolescent sample

    PubMed Central

    Weinberg, Anna; Meyer, Alexandria; Hale-Rude, Emily; Perlman, Greg; Kotov, Roman; Klein, Daniel N.; Hajcak, Greg

    2015-01-01

    The Error-related Negativity (ERN) currently appears as a physiological measure in relation to three RDoC constructs: Cognitive Control, Sustained Threat, and Reward Learning. We propose a conceptual model in which variance in the ERN reflects individual differences in the degree to which errors are evaluated as threatening. We also discuss evidence for the placement of the ERN in the ‘Sustained Threat’ construct, as well as evidence that the ERN may more specifically reflect sensitivity to endogenous threat. Following this, we present data from a sample of 515 adolescent females demonstrating larger ERN in relation to self-reported checking behaviors, but only in older adolescents, suggesting that sensitivity to internal threat and the ERN-checking relationship may follow a developmental course as adolescents develop behavioral control. In contrast, depressive symptoms were linked to smaller ERN, and this association was invariant with respect to age. Collectively, these data suggest that the magnitude of the ERN is sensitive both to specific anxiety-related processes and depression, in opposing directions that may reflect variation in internal threat sensitivity. We discuss directions for future research, as well as ways in which findings for the ERN complement and challenge aspects of the current RDoC matrix. PMID:26877129

  1. Most Frequent Errors in Judo Uki Goshi Technique and the Existing Relations among Them Analysed through T-Patterns

    PubMed Central

    Gutiérrez, Alfonso; Prieto, Iván; Cancela, José M.

    2009-01-01

    The purpose of this study is to provide a tool, based on the knowledge of technical errors, which helps to improve the teaching and learning process of the Uki Goshi technique. With this aim, we set out to determine the most frequent errors made by 44 students when performing this technique and how these mistakes relate. In order to do so, an observational analysis was carried out using the OSJUDO-UKG instrument and the data were registered using Match Vision Studio (Castellano, Perea, Alday and Hernández, 2008). The results, analyzed through descriptive statistics, show that the absence of a correct initial unbalancing movement (45.5%), the lack of proper right-arm pull (56.8%), not blocking the faller’s body (Uke) against the thrower’s hip -Tori- (54.5%) and throwing the Uke through the Tori’s side (72.7%) are the most usual mistakes. Through the sequential analysis of T-Patterns obtained with the THÈME program (Magnusson, 1996, 2000) we have concluded that not blocking the body with the Tori’s hip provokes the Uke’s throw through the Tori’s side during the final phase of the technique (95.8%), and positioning the right arm on the dorsal region of the Uke’s back during the Tsukuri entails the absence of a subsequent pull of the Uke’s body (73.3%). Key Points In this study, the most frequent errors in the performance of the Uki Goshi technique have been determined and the existing relations among these mistakes have been shown through T-Patterns. The OSJUDO-UKG is an observation instrument for detecting mistakes in the aforementioned technique. The results show that those mistakes related to the initial unbalancing movement and the main driving action of the technique are the most frequent. The use of T-Patterns turns out to be effective in order to obtain the most important relations among the observed errors. PMID:24474885

  2. A method for reducing the largest relative errors in Monte Carlo iterated-fission-source calculations

    SciTech Connect

    Hunter, J. L.; Sutton, T. M.

    2013-07-01

    In Monte Carlo iterated-fission-source calculations, relative uncertainties on local tallies tend to be larger in lower-power regions and smaller in higher-power regions. Reducing the largest uncertainties to an acceptable level simply by running a larger number of neutron histories is often prohibitively expensive. The uniform fission site method has been developed to yield a more spatially-uniform distribution of relative uncertainties. This is accomplished by biasing the density of fission neutron source sites while not biasing the solution. The method is integrated into the source iteration process, and does not require any auxiliary forward or adjoint calculations. For a given amount of computational effort, the use of the method results in a reduction of the largest uncertainties relative to the standard algorithm. Two variants of the method have been implemented and tested. Both have been shown to be effective. (authors)
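
    The biasing idea can be illustrated with a toy sketch. Everything here is invented for illustration (a three-region "core" with equal-volume regions and made-up powers); it conveys the flavor of the method, not the authors' implementation:

```python
import random

# Toy core: three equal-volume regions with very different fission rates.
power = [100.0, 10.0, 1.0]
total = sum(power)
rng = random.Random(1)
n = 90000

# Analog sampling: fission sites land in a region in proportion to its power,
# so the low-power region collects very few sites (large relative error there).
analog = [rng.choices(range(3), weights=power)[0] for _ in range(n)]

# Uniform-site idea: place the same number of sites in every region, giving
# each site the statistical weight power[r] / (total / 3) so that weighted
# tallies remain unbiased.
uniform = [(r, power[r] / (total / 3)) for _ in range(n // 3) for r in range(3)]

# Weighted tallies still recover the region powers...
power_est = [sum(w for rr, w in uniform if rr == r) / n * total
             for r in range(3)]
# ...while the weakest region now holds n/3 sites instead of roughly n/111.
print(power_est, analog.count(2), n // 3)
```

    The weighting is what keeps the solution unbiased: each region's expected weighted tally equals its true power, while the per-region sample counts (and hence relative errors) are equalized.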

  3. Negative Cognitive Errors and Positive Illusions: Moderators of Relations between Divorce Events and Children's Psychological Adjustment.

    ERIC Educational Resources Information Center

    Mazur, Elizabeth; Wolchik, Sharlene

    Building on prior literature on adults' and children's appraisals of stressors, this study investigated relations among negative and positive appraisal biases, negative divorce events, and children's post-divorce adjustment. Subjects were 79 custodial nonremarried mothers and their children ages 9 to 13 who had experienced parental divorce within…

  4. Accuracy estimation for supervised learning algorithms

    SciTech Connect

    Glover, C.W.; Oblow, E.M.; Rao, N.S.V.

    1997-04-01

    This paper illustrates the relative merits of three methods - k-fold Cross Validation, Error Bounds, and Incremental Halting Test - to estimate the accuracy of a supervised learning algorithm. For each of the three methods we point out the problem it addresses, some of the important assumptions it is based on, and illustrate it through an example. Finally, we discuss the relative advantages and disadvantages of each method.
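
    Of the three methods, k-fold cross validation is the most widely used. A minimal sketch (the `train_fn` interface and the toy majority-class learner are hypothetical, chosen only to make the example self-contained):

```python
import random

def k_fold_accuracy(data, labels, train_fn, k=5, seed=0):
    """Estimate accuracy by k-fold cross validation: train on k-1 folds,
    score on the held-out fold, and average the k scores."""
    idx = list(range(len(data)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    scores = []
    for held_out in folds:
        train = [i for i in idx if i not in held_out]
        model = train_fn([data[i] for i in train], [labels[i] for i in train])
        correct = sum(model(data[i]) == labels[i] for i in held_out)
        scores.append(correct / len(held_out))
    return sum(scores) / len(scores)

# Toy learner: always predict the majority class of the training labels.
def train_majority(xs, ys):
    majority = max(set(ys), key=ys.count)
    return lambda x: majority

data = list(range(10))
labels = [0, 0, 0, 0, 0, 0, 0, 1, 1, 1]
print(k_fold_accuracy(data, labels, train_majority, k=5))  # 0.7
```

    Because the folds partition the data, every example is tested exactly once, which is what makes the averaged score a low-variance estimate compared with a single train/test split.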

  5. Analyzing thematic maps and mapping for accuracy

    USGS Publications Warehouse

    Rosenfield, G.H.

    1982-01-01

    Two problems which exist while attempting to test the accuracy of thematic maps and mapping are: (1) evaluating the accuracy of thematic content, and (2) evaluating the effects of the variables on thematic mapping. Statistical analysis techniques are applicable to both these problems and include techniques for sampling the data and determining their accuracy. In addition, techniques for hypothesis testing, or inferential statistics, are used when comparing the effects of variables. A comprehensive and valid accuracy test of a classification project, such as thematic mapping from remotely sensed data, includes the following components of statistical analysis: (1) sample design, including the sample distribution, sample size, size of the sample unit, and sampling procedure; and (2) accuracy estimation, including estimation of the variance and confidence limits. Careful consideration must be given to the minimum sample size necessary to validate the accuracy of a given classification category. The results of an accuracy test are presented in a contingency table sometimes called a classification error matrix. Usually the rows represent the interpretation, and the columns represent the verification. The diagonal elements represent the correct classifications. The remaining elements of the rows represent errors by commission, and the remaining elements of the columns represent the errors of omission. For tests of hypothesis that compare variables, the general practice has been to use only the diagonal elements from several related classification error matrices. These data are arranged in the form of another contingency table. The columns of the table represent the different variables being compared, such as different scales of mapping. The rows represent the blocking characteristics, such as the various categories of classification.
    The values in the cells of the tables might be the counts of correct classification or the binomial proportions of these counts divided by…
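
    The row/column convention described above maps directly onto per-class error rates. A sketch with a hypothetical three-class error matrix (the counts are made up for illustration):

```python
# Hypothetical 3-class classification error matrix:
# rows = interpretation (mapped class), columns = verification (reference).
matrix = [
    [45,  3,  2],   # mapped as class A
    [ 4, 50,  1],   # mapped as class B
    [ 1,  2, 42],   # mapped as class C
]
k = len(matrix)
n = sum(sum(row) for row in matrix)
overall_accuracy = sum(matrix[i][i] for i in range(k)) / n

# Commission error: off-diagonal share of a row (mapped as i, but was not i).
commission = [1 - matrix[i][i] / sum(matrix[i]) for i in range(k)]
# Omission error: off-diagonal share of a column (truly j, but not mapped as j).
omission = [1 - matrix[j][j] / sum(row[j] for row in matrix) for j in range(k)]

print(overall_accuracy, commission, omission)
```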

  6. A Neuroeconomics Analysis of Investment Process with Money Flow Information: The Error-Related Negativity.

    PubMed

    Wang, Cuicui; Vieito, João Paulo; Ma, Qingguo

    2015-01-01

    This investigation is among the first to analyze the neural basis of an investment process with money flow information from the financial market, using a simplified task where volunteers had to choose to buy or not to buy stocks based on the display of positive or negative money flow information. After choosing "to buy" or "not to buy," participants were presented with feedback. At the same time, event-related potentials (ERPs) were used to record investors' brain activity and capture the error-related negativity (ERN) and feedback-related negativity (FRN) components. The results for the ERN suggested that there might be higher risk and more conflict when buying stocks with negative net money flow information than with positive net money flow information, and the inverse was also true for the "not to buy" option. The FRN component evoked by the bad outcome of a decision was more negative than that evoked by the good outcome, reflecting the difference between the values of the actual and expected outcomes. From this research, we can better understand how investors perceive money flow information from the financial market and the neural cognitive effects at work in the investment process.

  7. A Neuroeconomics Analysis of Investment Process with Money Flow Information: The Error-Related Negativity

    PubMed Central

    Wang, Cuicui; Vieito, João Paulo; Ma, Qingguo

    2015-01-01

    This investigation is among the first to analyze the neural basis of an investment process with money flow information from the financial market, using a simplified task where volunteers had to choose to buy or not to buy stocks based on the display of positive or negative money flow information. After choosing “to buy” or “not to buy,” participants were presented with feedback. At the same time, event-related potentials (ERPs) were used to record investors' brain activity and capture the error-related negativity (ERN) and feedback-related negativity (FRN) components. The results for the ERN suggested that there might be higher risk and more conflict when buying stocks with negative net money flow information than with positive net money flow information, and the inverse was also true for the “not to buy” option. The FRN component evoked by the bad outcome of a decision was more negative than that evoked by the good outcome, reflecting the difference between the values of the actual and expected outcomes. From this research, we can better understand how investors perceive money flow information from the financial market and the neural cognitive effects at work in the investment process. PMID:26557139

  8. Performance monitoring in children and adolescents: a review of developmental changes in the error-related negativity and brain maturation.

    PubMed

    Tamnes, Christian K; Walhovd, Kristine B; Torstveit, Mari; Sells, Victoria T; Fjell, Anders M

    2013-10-01

    To realize our goals we continuously adapt our behavior according to internal or external feedback. Errors provide an important source for such feedback and elicit a scalp electrical potential referred to as the error-related negativity (ERN), which is a useful marker for studying typical and atypical development of the cognitive control mechanisms involved in performance monitoring. In this review, we survey the available studies on age-related differences in the ERN in children and adolescents. The majority of the studies show that the ERN increases in strength throughout childhood and adolescence, suggesting continued maturation of the neural systems for performance monitoring, but there are still many unresolved questions. We further review recent research in adults that has provided important insights into the neural underpinnings of the ERN and performance monitoring, implicating distributed neural systems that include the dorsal anterior and posterior cingulate cortex, the lateral prefrontal cortex, insula, basal ganglia, thalamus and white matter connections between these regions. Finally, we discuss the possible roles of structural and functional maturation of these brain regions in the development of the ERN. Overall, we argue that future work should use multimodal approaches to give a better understanding of the neurocognitive development of performance monitoring.

  9. Municipal water consumption forecast accuracy

    NASA Astrophysics Data System (ADS)

    Fullerton, Thomas M.; Molina, Angel L.

    2010-06-01

    Municipal water consumption planning is an active area of research because of infrastructure construction and maintenance costs, supply constraints, and water quality assurance. In spite of that, relatively few water forecast accuracy assessments have been completed to date, although some internal documentation may exist as part of the proprietary "grey literature." This study utilizes a data set of previously published municipal consumption forecasts to partially fill that gap in the empirical water economics literature. Previously published municipal water econometric forecasts for three public utilities are examined for predictive accuracy against two random walk benchmarks commonly used in regional analyses. Descriptive metrics used to quantify forecast accuracy include root-mean-square error and Theil inequality statistics. Formal statistical assessments are completed using four-pronged error differential regression F tests. Similar to studies for other metropolitan econometric forecasts in areas with similar demographic and labor market characteristics, model predictive performances for the municipal water aggregates in this effort are mixed for each of the municipalities included in the sample. Given the competitiveness of the benchmarks, analysts should employ care when utilizing econometric forecasts of municipal water consumption for planning purposes, comparing them to recent historical observations and trends to ensure reliability. Comparative results using data from other markets, including regions facing differing labor and demographic conditions, would also be helpful.
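
    The two descriptive metrics named above can be sketched as follows. Theil's inequality statistic is computed here in its U2 form, i.e., the ratio of the model's RMSE to the RMSE of a naive random-walk forecast, which matches the random walk benchmarking described in the abstract; the series values are invented:

```python
import math

def rmse(forecast, actual):
    """Root-mean-square error of a forecast against actuals."""
    return math.sqrt(sum((f - a) ** 2 for f, a in zip(forecast, actual))
                     / len(actual))

def theil_u2(forecast, actual):
    """Theil's U2: model RMSE divided by the RMSE of a no-change
    (random-walk) forecast. U2 < 1 means the model beats the benchmark."""
    model = rmse(forecast[1:], actual[1:])
    naive = rmse(actual[:-1], actual[1:])  # predict a[t] with a[t-1]
    return model / naive

actual   = [100, 104, 103, 108, 110, 109]
forecast = [100, 103, 105, 107, 111, 110]
print(rmse(forecast, actual), theil_u2(forecast, actual))
```

    U2 greater than one is exactly the "competitiveness of the benchmarks" problem the abstract warns about: the econometric model would then be doing worse than simply carrying the last observation forward.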

  10. Relative accuracy testing of an X-ray fluorescence-based mercury monitor at coal-fired boilers.

    PubMed

    Hay, K James; Johnsen, Bruce E; Ginochio, Paul R; Cooper, John A

    2006-05-01

    The relative accuracy (RA) of a newly developed mercury continuous emissions monitor, based on X-ray fluorescence, was determined by comparing analysis results at coal-fired plants with two certified reference methods (American Society for Testing and Materials [ASTM] Method D6784-02 and U.S. Environment Protection Agency [EPA] Method 29). During the first determination, the monitor had an RA of 25% compared with ASTM Method D6784-02 (Ontario Hydro Method). However, the Ontario Hydro Method performed poorly, because the mercury concentrations were near the detection limit of the reference method. The mercury in this exhaust stream was primarily elemental. The second test was performed at a U.S. Army boiler against EPA Reference Method 29. Mercury and arsenic were spiked because of expected low mercury concentrations. The monitor had an RA of 16% for arsenic and 17% for mercury, meeting RA requirements of EPA Performance Specification 12a. The results suggest that the sampling stream contained significant percentages of both elemental and oxidized mercury. The monitor was successful at measuring total mercury in particulate and vapor forms.

  11. High accuracy of Karplus equations for relating three-bond J couplings to protein backbone torsion angles.

    PubMed

    Li, Fang; Lee, Jung Ho; Grishaev, Alexander; Ying, Jinfa; Bax, Ad

    2015-02-23

    ³J(C′C′) and ³J(HNHα) couplings are related to the intervening backbone torsion angle φ by standard Karplus equations. Although these couplings are known to be affected by parameters other than φ, including H-bonding, valence angles and residue type, experimental results and quantum calculations indicate that the impact of these latter parameters is typically very small. The solution NMR structure of protein GB3, newly refined by using extensive sets of residual dipolar couplings, yields 50-60% better Karplus-equation agreement between φ angles and experimental ³J(C′C′) and ³J(HNHα) values than does the high-resolution X-ray structure. In intrinsically disordered proteins, ³J(C′C′) and ³J(HNHα) couplings can be measured at even higher accuracy, and the impact of factors other than the intervening torsion angle on ³J will be smaller than in folded proteins, making these couplings exceptionally valuable reporters on the ensemble of φ angles sampled by each residue.
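
    A standard Karplus equation has the form ³J(φ) = A cos²θ + B cos θ + C with θ = φ + offset. A sketch with illustrative coefficients for ³J(HNHα) (the A, B, C values below are of the commonly cited magnitude, not the parameterization fitted in this work):

```python
import math

def karplus(phi_deg, A, B, C, offset_deg):
    """3J(phi) = A*cos^2(theta) + B*cos(theta) + C, theta = phi + offset."""
    theta = math.radians(phi_deg + offset_deg)
    return A * math.cos(theta) ** 2 + B * math.cos(theta) + C

# Illustrative 3J(HN,Ha) coefficients in Hz, with theta = phi - 60 degrees.
for phi in (-120, -60, 60):
    print(phi, round(karplus(phi, A=7.97, B=-1.26, C=0.63, offset_deg=-60), 2))
```

    The cos² term makes the curve multi-valued: several φ angles can give the same ³J, which is why combining ³J(C′C′) and ³J(HNHα) (with different offsets) constrains φ much better than either coupling alone.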

  12. The Argos-CLS Kalman Filter: Error Structures and State-Space Modelling Relative to Fastloc GPS Data

    PubMed Central

    Lowther, Andrew D.; Lydersen, Christian; Fedak, Mike A.; Lovell, Phil; Kovacs, Kit M.

    2015-01-01

    Understanding how an animal utilises its surroundings requires its movements through space to be described accurately. Satellite telemetry is the only means of acquiring movement data for many species; however, data are prone to varying amounts of spatial error. The recent application of state-space models (SSMs) to the location estimation problem has provided a means to incorporate spatial errors when characterising animal movements. The predominant platform for collecting satellite telemetry data on free-ranging animals, Service Argos, recently provided an alternative Doppler location estimation algorithm that is purported to be more accurate and to generate a greater number of locations than its predecessor. We provide a comprehensive assessment of the performance of this new estimation process on data from free-ranging animals relative to concurrently collected Fastloc GPS data. Additionally, we test the efficacy of three readily available SSMs in predicting the movement of two focal animals. Raw Argos location estimates generated by the new algorithm were greatly improved compared to the old system. Approximately twice as many Argos locations were derived compared to GPS on the devices used. Root Mean Square Errors (RMSE) for each optimal SSM were less than 4.25 km, with some producing RMSE of less than 2.50 km. Differences in the biological plausibility of the tracks between the two focal animals used to investigate the utility of SSMs highlight the importance of considering animal behaviour in movement studies. The ability to reprocess Argos data collected since 2008 with the new algorithm should permit questions of animal movement to be revisited at a finer resolution. PMID:25905640

  13. The Argos-CLS Kalman Filter: Error Structures and State-Space Modelling Relative to Fastloc GPS Data.

    PubMed

    Lowther, Andrew D; Lydersen, Christian; Fedak, Mike A; Lovell, Phil; Kovacs, Kit M

    2015-01-01

    Understanding how an animal utilises its surroundings requires its movements through space to be described accurately. Satellite telemetry is the only means of acquiring movement data for many species; however, data are prone to varying amounts of spatial error. The recent application of state-space models (SSMs) to the location estimation problem has provided a means to incorporate spatial errors when characterising animal movements. The predominant platform for collecting satellite telemetry data on free-ranging animals, Service Argos, recently provided an alternative Doppler location estimation algorithm that is purported to be more accurate and to generate a greater number of locations than its predecessor. We provide a comprehensive assessment of the performance of this new estimation process on data from free-ranging animals relative to concurrently collected Fastloc GPS data. Additionally, we test the efficacy of three readily available SSMs in predicting the movement of two focal animals. Raw Argos location estimates generated by the new algorithm were greatly improved compared to the old system. Approximately twice as many Argos locations were derived compared to GPS on the devices used. Root Mean Square Errors (RMSE) for each optimal SSM were less than 4.25 km, with some producing RMSE of less than 2.50 km. Differences in the biological plausibility of the tracks between the two focal animals used to investigate the utility of SSMs highlight the importance of considering animal behaviour in movement studies. The ability to reprocess Argos data collected since 2008 with the new algorithm should permit questions of animal movement to be revisited at a finer resolution.

  14. Effects of exposure measurement error in the analysis of health effects from traffic-related air pollution.

    PubMed

    Baxter, Lisa K; Wright, Rosalind J; Paciorek, Christopher J; Laden, Francine; Suh, Helen H; Levy, Jonathan I

    2010-01-01

    In large epidemiological studies, many researchers use surrogates of air pollution exposure such as geographic information system (GIS)-based characterizations of traffic or simple housing characteristics. It is important to evaluate these surrogates quantitatively against measured pollutant concentrations to determine how their use affects the interpretation of epidemiological study results. In this study, we quantified the implications of using exposure models derived from validation studies, and other alternative surrogate models with varying amounts of measurement error, on epidemiological study findings. We compared previously developed multiple regression models characterizing residential indoor nitrogen dioxide (NO2), fine particulate matter (PM2.5), and elemental carbon (EC) concentrations to models with less explanatory power that may be applied in the absence of validation studies. We constructed a hypothetical epidemiological study, under a range of odds ratios, and determined the bias and uncertainty caused by the use of various exposure models predicting residential indoor exposure levels. Our simulations illustrated that exposure models with fairly modest R2 (0.3 to 0.4 for the previously developed multiple regression models for PM2.5 and NO2) yielded substantial improvements in epidemiological study performance, relative to the application of regression models created in the absence of validation studies or poorer-performing validation study models (e.g., EC). In many studies, models based on validation data may not be possible, so it may be necessary to use a surrogate model with more measurement error. This analysis provides a technique to quantify the implications of applying various exposure models with different degrees of measurement error in epidemiological research.
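
    The bias mechanism at work here, classical measurement error attenuating an exposure-response estimate toward the null, can be demonstrated in a toy linear simulation (all numbers are invented; the study itself used logistic models and odds ratios):

```python
import random
import statistics

def slope(xs, ys):
    """Ordinary least-squares slope of y regressed on x."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

rng = random.Random(42)
n = 20000
x = [rng.gauss(0, 1) for _ in range(n)]          # true exposure
y = [1.0 * xi + rng.gauss(0, 0.5) for xi in x]   # outcome, true slope 1.0
x_noisy = [xi + rng.gauss(0, 1) for xi in x]     # surrogate with added error

b_true = slope(x, y)        # close to 1.0
b_noisy = slope(x_noisy, y) # attenuated toward var(x)/(var(x)+var(err)) = 0.5
print(round(b_true, 2), round(b_noisy, 2))
```

    The attenuation factor var(x)/(var(x)+var(err)) is exactly why a surrogate model with modest but real explanatory power can outperform a cruder proxy: the lower the error variance, the less the effect estimate shrinks.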

  15. Exposure Error Masks The Relationship Between Traffic-Related Air Pollution and Heart Rate Variability (HRV)

    PubMed Central

    Suh, Helen H.; Zanobetti, Antonella

    2010-01-01

    Objective We examined whether more precise exposure measures would better detect associations between traffic-related pollution, elemental carbon (EC) and nitrogen dioxide (NO2), and HRV. Methods Repeated 24-h personal and ambient PM2.5, EC, and NO2 were measured for 30 people living in Atlanta, GA. The association between HRV and either ambient concentrations or personal exposures was examined using linear mixed effects models. Results Ambient PM2.5, EC, and NO2 and personal PM2.5 were not associated with HRV. Personal EC and NO2 measured 24 h prior to HRV were associated with decreased rMSSD, pNN50, and HF and with increased LF/HF. rMSSD decreased by 10.97% (95% CI: -18.00, -3.34) for an IQR change in personal EC (0.81 μg/m3). Conclusions Results indicate decreased vagal tone in response to traffic pollutants, which can best be detected with precise personal exposure measures. PMID:20595912
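
    The vagal-tone indices named in the abstract, rMSSD and pNN50, have standard definitions over successive RR-interval differences. A minimal sketch with an invented RR series (not study data):

```python
import numpy as np

def rmssd(rr_ms):
    """Root mean square of successive RR-interval differences, in ms."""
    d = np.diff(np.asarray(rr_ms, dtype=float))
    return float(np.sqrt(np.mean(d ** 2)))

def pnn50(rr_ms):
    """Percentage of successive RR-interval differences exceeding 50 ms."""
    d = np.abs(np.diff(np.asarray(rr_ms, dtype=float)))
    return float(100.0 * np.mean(d > 50.0))

rr = [800, 810, 790, 845, 820, 880, 830]   # toy RR intervals in ms
print(rmssd(rr), pnn50(rr))
```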

  16. Joint effects of sensory feedback and interoceptive awareness on conscious error detection: Evidence from event related brain potentials.

    PubMed

    Godefroid, Elke; Pourtois, Gilles; Wiersema, Jan R

    2016-02-01

    Error awareness has been argued to depend on sensory feedback and interoceptive awareness (IA) (Ullsperger, Harsay, Wessel, & Ridderinkhof, 2010). We recorded EEG while participants performed a speeded Go/No-Go task in which they signaled error commission. Visibility of the effector was manipulated, while IA was measured with a heartbeat perception task. The late Pe was larger for aware than unaware errors. The ERN was also found to be modulated by error awareness, but only when the hand was visible, suggesting that its sensitivity to error awareness depends on the availability of visual sensory feedback. Only when the response hand was visible, the late Pe amplitude to aware errors correlated with IA, suggesting that sensory feedback and IA synergistically contribute to the emergence of error awareness. These findings underscore the idea that several sources of information accumulate in time following action execution in order to enable errors to break through and reach awareness.

  17. Action errors, error management, and learning in organizations.

    PubMed

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  18. Assessment of targeting accuracy of a low-energy stereotactic radiosurgery treatment for age-related macular degeneration

    NASA Astrophysics Data System (ADS)

    Taddei, Phillip J.; Chell, Erik; Hansen, Steven; Gertner, Michael; Newhauser, Wayne D.

    2010-12-01

    Age-related macular degeneration (AMD), a leading cause of blindness in the United States, is a neovascular disease that may be controlled with radiation therapy. Early patient outcomes of external beam radiotherapy, however, have been mixed. Recently, a novel multimodality treatment was developed, comprising external beam radiotherapy and concomitant treatment with a vascular endothelial growth factor inhibitor. The radiotherapy arm is performed by stereotactic radiosurgery, delivering a 16 Gy dose in the macula (clinical target volume, CTV) using three external low-energy x-ray fields while adequately sparing normal tissues. The purpose of our study was to test the sensitivity of the delivery of the prescribed dose in the CTV, and of the adequate sparing of normal tissues, to all plausible variations in the position and gaze angle of the eye. Using Monte Carlo simulations of a 16 Gy treatment, we varied the gaze angle by ±5° in the polar and azimuthal directions, varied the linear displacement of the eye by ±1 mm in all orthogonal directions, and observed the union of the three fields on the posterior wall of spheres concentric with the eye that had diameters between 20 and 28 mm. In all cases, the dose in the CTV fluctuated by <6%, the maximum dose in the sclera was <20 Gy, the doses in the optic disc, optic nerve, lens and cornea were <0.7 Gy, and the three-field junction was adequately preserved. The results of this study provide strong evidence that for plausible variations in the position of the eye during treatment, whether from setup error or intrafraction motion, the prescribed dose will be delivered to the CTV and the dose in structures at risk will be kept far below tolerance doses.

  19. Relating Indices of Knowledge Structure Coherence and Accuracy to Skill-Based Performance: Is There Utility in Using a Combination of Indices?

    ERIC Educational Resources Information Center

    Schuelke, Matthew J.; Day, Eric Anthony; McEntire, Lauren E.; Boatman, Paul R.; Boatman, Jazmine Espejo; Kowollik, Vanessa; Wang, Xiaoqian

    2009-01-01

    The authors examined the relative criterion-related validity of knowledge structure coherence and two accuracy-based indices (closeness and correlation) as well as the utility of using a combination of knowledge structure indices in the prediction of skill acquisition and transfer. Findings from an aggregation of 5 independent samples (N = 958)…

  20. [Longer working hours of pharmacists in the ward resulted in lower medication-related errors--survey of national university hospitals in Japan].

    PubMed

    Matsubara, Kazuo; Toyama, Akira; Satoh, Hiroshi; Suzuki, Hiroshi; Awaya, Toshio; Tasaki, Yoshikazu; Yasuoka, Toshiaki; Horiuchi, Ryuya

    2011-04-01

    It is obvious that pharmacists play a critical role as risk managers in the healthcare system, especially in medication treatment. To date, however, no multicenter survey has described the effectiveness of clinical pharmacists in preventing medical errors in hospital wards in Japan. We therefore conducted a 1-month survey (October 1-31, 2009) to elucidate the relationship between the number of errors and the working hours of pharmacists in the ward, and to verify whether the assignment of clinical pharmacists to the ward prevents medical errors. Questionnaire items for the pharmacists at 42 national university hospitals and a medical institute included the total and respective numbers of medication-related errors, beds, and working hours of pharmacists in 2 internal medicine and 2 surgical departments in each hospital. Regardless of severity, errors were consecutively reported to the Medical Security and Safety Management Section in each hospital. The analysis revealed that longer working hours of pharmacists in the ward resulted in fewer medication-related errors; this was especially significant in the internal medicine wards (where a variety of drugs were used) compared with the surgical wards. However, the nurse assignment mode (nurse/inpatient ratio: 1 : 7-10) did not influence the error frequency. The results of this survey strongly indicate that the assignment of clinical pharmacists to the ward is essential in promoting medication safety and efficacy.
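
    The survey's central association, error counts versus pharmacist ward hours, can be probed with a simple correlation. The data below are entirely synthetic and the dose-response shape is assumed purely to illustrate the direction of the reported effect:

```python
import numpy as np

rng = np.random.default_rng(1)
n_wards = 40

# Hypothetical wards: weekly pharmacist hours and medication-error counts,
# generated so that more pharmacist hours mean fewer expected errors.
hours = rng.uniform(5, 40, n_wards)
expected_errors = 20.0 * np.exp(-0.05 * hours)   # assumed shape, not survey data
errors = rng.poisson(expected_errors)

r = np.corrcoef(hours, errors)[0, 1]
print(r)   # negative: longer ward hours go with fewer reported errors
```

    A real analysis of count data like this would more likely use Poisson regression with ward-level covariates; the correlation is only the first-look version of the question.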

  1. Exploiting Task Constraints for Self-Calibrated Brain-Machine Interface Control Using Error-Related Potentials

    PubMed Central

    Iturrate, Iñaki; Grizou, Jonathan; Omedes, Jason; Oudeyer, Pierre-Yves; Lopes, Manuel; Montesano, Luis

    2015-01-01

    This paper presents a new self-calibration approach for BCI control of reaching tasks using error-related potentials. The proposed method exploits task constraints to simultaneously calibrate the decoder and control the device, using a robust likelihood function and an ad hoc planner to cope with the large uncertainty resulting from the unknown task and decoder. The method has been evaluated in closed-loop online experiments with 8 users using a previously proposed BCI protocol for reaching tasks over a grid. The results show that it is possible to have usable BCI control from the beginning of the experiment without any prior calibration. Furthermore, comparisons with simulations and previous results obtained using standard calibration hint that both the quality of the recorded signals and the performance of the system were comparable to those obtained with a standard calibration approach. PMID:26131890

  2. Task engagement and the relationships between the error-related negativity, agreeableness, behavioral shame proneness and cortisol.

    PubMed

    Tops, Mattie; Boksem, Maarten A S; Wester, Anne E; Lorist, Monicque M; Meijman, Theo F

    2006-08-01

    Previous results suggest that both cortisol mobilization and the error-related negativity (ERN/Ne) reflect goal engagement, i.e. the mobilization and allocation of attentional and physiological resources. Personality measures of negative affectivity have been associated with both high cortisol levels and large ERN/Ne amplitudes. However, measures of positive social adaptation and agreeableness have also been related to high cortisol levels and large ERN/Ne amplitudes. We hypothesized that, as long as they relate to concerns over social evaluation and mistakes, both personality measures reflecting positive affectivity (e.g. agreeableness) and those reflecting negative affectivity (e.g. behavioral shame proneness) would be associated with an increased likelihood of high task engagement, and hence with increased cortisol mobilization and ERN/Ne amplitudes. We had female subjects perform a flanker task while EEG was recorded. Additionally, the subjects filled out questionnaires measuring mood and personality, and salivary cortisol was measured immediately before and after task performance. The overall pattern of relationships between our measures supports the hypothesis that cortisol mobilization and ERN/Ne amplitude reflect task engagement, and both relate positively to each other and to the personality traits agreeableness and behavioral shame proneness. We discuss the potential importance of engagement-disengagement and of concerns over social evaluation for research on psychopathology, stress and the ERN/Ne.

  3. Overlay accuracy fundamentals

    NASA Astrophysics Data System (ADS)

    Kandel, Daniel; Levinski, Vladimir; Sapiens, Noam; Cohen, Guy; Amit, Eran; Klein, Dana; Vakshtein, Irina

    2012-03-01

    Currently, the performance of overlay metrology is evaluated mainly based on random error contributions such as precision and TIS variability. With the expected shrinkage of the overlay metrology budget to <0.5 nm, it becomes crucial to also include systematic error contributions which affect the accuracy of the metrology. Here we discuss fundamental aspects of overlay accuracy and a methodology to improve accuracy significantly. We identify overlay mark imperfections, and their interaction with the metrology technology, as the main source of overlay inaccuracy. The most important type of mark imperfection is mark asymmetry. Overlay mark asymmetry leads to a geometrical ambiguity in the definition of overlay, which can be ~1 nm or less. It is shown theoretically and in simulations that the metrology may enhance the effect of overlay mark asymmetry significantly and lead to metrology inaccuracy of ~10 nm, much larger than the geometrical ambiguity. The analysis is carried out for two different overlay metrology technologies: imaging overlay and DBO (first-order diffraction-based overlay). It is demonstrated that the sensitivity of DBO to overlay mark asymmetry is larger than that of imaging overlay. Finally, we show that a recently developed measurement quality metric serves as a valuable tool for improving overlay metrology accuracy. Simulation results demonstrate that the accuracy of imaging overlay can be improved significantly by a recipe setup optimized using the quality metric. We conclude that imaging overlay metrology, complemented by appropriate use of the measurement quality metric, results in optimal overlay accuracy.

  4. Accuracy control in Monte Carlo radiative calculations

    NASA Technical Reports Server (NTRS)

    Almazan, P. Planas

    1993-01-01

    The general accuracy law that governs the Monte Carlo ray-tracing algorithms commonly used for the calculation of radiative entities in the thermal analysis of spacecraft is presented. These entities involve transfer of radiative energy either from a single source to a target (e.g., the configuration factors) or from several sources to a target (e.g., the absorbed heat fluxes); in fact, the former is just a particular case of the latter. The accuracy model is then applied to the calculation of some specific radiative entities. Furthermore, some issues related to the implementation of such a model in a software tool are discussed. Although only the relative error is considered throughout the discussion, similar results can be derived for the absolute error.
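
    The kind of accuracy law described here can be seen in miniature: a configuration factor estimated by Monte Carlo ray casting is a binomial hit fraction, so its relative standard error falls as 1/sqrt(N). The target probability and ray counts below are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
p_true = 0.2                    # assumed true configuration factor

for n_rays in (1_000, 100_000):
    hits = rng.random(n_rays) < p_true       # each ray hits the target w.p. p_true
    p_hat = hits.mean()
    # Estimated relative standard error of the binomial hit fraction.
    rel_err = np.sqrt((1.0 - p_hat) / (p_hat * n_rays))
    print(n_rays, p_hat, rel_err)
```

    A hundredfold increase in rays buys roughly a tenfold reduction in relative error, which is why accuracy control in such codes is usually phrased in terms of ray counts.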

  5. Uncertainty quantification and error analysis

    SciTech Connect

    Higdon, Dave M; Anderson, Mark C; Habib, Salman; Klein, Richard; Berliner, Mark; Covey, Curt; Ghattas, Omar; Graziani, Carlo; Seager, Mark; Sefcik, Joseph; Stark, Philip

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  6. Processing of action- but not stimulus-related prediction errors differs between active and observational feedback learning.

    PubMed

    Kobza, Stefan; Bellebaum, Christian

    2015-01-01

    Learning of stimulus-response-outcome associations is driven by outcome prediction errors (PEs). Previous studies have shown larger PE-dependent activity in the striatum for learning from own as compared to observed actions and the following outcomes despite comparable learning rates. We hypothesised that this finding relates primarily to a stronger integration of action and outcome information in active learners. Using functional magnetic resonance imaging, we investigated brain activations related to action-dependent PEs, reflecting the deviation between action values and obtained outcomes, and action-independent PEs, reflecting the deviation between subjective values of response-preceding cues and obtained outcomes. To this end, 16 active and 15 observational learners engaged in a probabilistic learning card-guessing paradigm. On each trial, active learners saw one out of five cues and pressed either a left or right response button to receive feedback (monetary win or loss). Each observational learner observed exactly those cues, responses and outcomes of one active learner. Learning performance was assessed in active test trials without feedback and did not differ between groups. For both types of PEs, activations were found in the globus pallidus, putamen, cerebellum, and insula in active learners. However, only for action-dependent PEs, activations in these structures and the anterior cingulate were increased in active relative to observational learners. Thus, PE-related activity in the reward system is not generally enhanced in active relative to observational learning but only for action-dependent PEs. For the cerebellum, additional activations were found across groups for cue-related uncertainty, thereby emphasising the cerebellum's role in stimulus-outcome learning.
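
    The outcome prediction error (PE) driving this kind of learning is usually formalised with a delta rule. A minimal sketch with an assumed learning rate and reward probability (this is not the study's actual computational model):

```python
import numpy as np

rng = np.random.default_rng(0)

alpha = 0.1          # learning rate (assumed)
p_reward = 0.8       # reward probability of the chosen action (assumed)

value = 0.0          # learned action value
for _ in range(2000):
    reward = float(rng.random() < p_reward)
    pe = reward - value        # prediction error: obtained minus expected outcome
    value += alpha * pe        # delta-rule update of the action value
print(value)                   # hovers near p_reward
```

    An "action-dependent" PE in the abstract's sense is computed against an action value like this one; the "action-independent" variant would instead compare the outcome to the value of the response-preceding cue.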

  7. General model for the pointing error analysis of Risley-prism system based on ray direction deviation in light refraction

    NASA Astrophysics Data System (ADS)

    Zhang, Hao; Yuan, Yan; Su, Lijuan; Huang, Fengzhen; Bai, Qing

    2016-09-01

    The Risley-prism-based light beam steering apparatus delivers superior pointing accuracy and is used in imaging LIDAR and imaging microscopes. A general model for pointing error analysis of Risley prisms, based on ray direction deviation in light refraction, is proposed in this paper. The model captures incident beam deviation, assembly deflections, and prism rotational error. We first derive the transmission matrices of the model. Then, the independent and cumulative effects of the different errors are analyzed through the model. An accuracy study of the model shows that the prediction deviation of the pointing error is less than 4.1×10⁻⁵° for each error source when the error amplitude is 0.1°. Detailed analyses indicate that the different error sources affect the pointing accuracy to varying degrees, and that the major error source is the incident beam deviation. Prism tilt has a relatively large effect on the pointing accuracy when the prism tilts in the principal section. The cumulative-effect analyses of multiple errors show that the pointing error can be reduced by tuning the bearing tilt in the same direction. The cumulative effect of rotational error is relatively large when the difference between the two prism rotational angles equals 0 or π, and relatively small when the difference equals π/2. These results can help to uncover the error distribution and aid in the measurement calibration of Risley-prism systems.
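
    At first order the pointing geometry itself is simple: in the thin-prism (paraxial) approximation each prism deviates the beam by (n - 1)·α toward its rotation angle, and the two deviations add vectorially. The refractive index and wedge angle below are assumed values, and this sketch is far cruder than the paper's refraction-based transmission matrices:

```python
import numpy as np

def risley_deviation(theta1, theta2, n=1.517, alpha=np.radians(2.0)):
    """Thin-prism model: each prism deviates the beam by (n - 1) * alpha
    in the direction of its rotation angle; the deviations add vectorially."""
    delta = (n - 1.0) * alpha
    dx = delta * (np.cos(theta1) + np.cos(theta2))
    dy = delta * (np.sin(theta1) + np.sin(theta2))
    return dx, dy

dx_max, _ = risley_deviation(0.0, 0.0)          # co-rotated: deviations double
dx_min, dy_min = risley_deviation(0.0, np.pi)   # opposed by pi: deviations cancel
print(np.degrees(dx_max), np.hypot(dx_min, dy_min))
```

    The difference of the two rotation angles thus sets how strongly the prism contributions combine, which is the geometric backdrop for the cumulative rotational-error behaviour the abstract reports.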

  8. Teaching Picture-to-Object Relations in Picture-Based Requesting by Children with Autism: A Comparison between Error Prevention and Error Correction Teaching Procedures

    ERIC Educational Resources Information Center

    Carr, D.; Felce, J.

    2008-01-01

    Background: Children who have a combination of language and developmental disabilities with autism often experience major difficulties in learning relations between objects and their graphic representations. Therefore, they would benefit from teaching procedures that minimize their difficulties in acquiring these relations. This study compared two…

  9. Digital reader vs print media: the role of digital technology in reading accuracy in age-related macular degeneration

    PubMed Central

    Gill, K; Mao, A; Powell, A M; Sheidow, T

    2013-01-01

    Purpose To compare patient satisfaction, reading accuracy, and reading speed between digital e-readers (Sony eReader, Apple iPad) and standard paper/print media for patients with stable wet age-related macular degeneration (AMD). Methods Patients recruited for the study had stable wet AMD, in one or both eyes, and would benefit from a low-vision aid. The text sizes selected by patients reflected the spectrum of low vision associated with their macular disease. Stability of the macular degeneration was assessed on clinical examination with stable visual acuity. Patients were assessed for reading speed on both digital readers and standard paper text, using standardized and validated texts for reading speed. Font sizes in the study spanned a spectrum from newsprint to large-print books. Patients started with the smallest print size they could read on the standardized paper text. They then used the digital readers to read the same size of standardized text. Reading speed was calculated in words per minute by the formula (correctly read words/reading time (s)·60). A visual analog scale was completed by patients after reading each passage, including their assessments of 'ease of use' and 'clarity of print' for each device and the print paper. Results A total of 27 patients participated in the study. Patients consistently read faster (P<0.0003) on the Apple iPad with larger text sizes (size 24 or greater) when compared with paper, and also on paper compared with the Sony eReader (P<0.03) in all text size groups. Patients rated the iPad as having the best clarity and the print paper as the easiest to use. Conclusions This study has demonstrated that digital devices may have a use in visual rehabilitation for low-vision patients. Devices that have larger display screens and offer high contrast ratios will benefit AMD patients who require larger text to read. PMID:23492860
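
    The reading-speed formula quoted in the abstract translates directly into code; the sample numbers below are invented:

```python
def reading_speed_wpm(correct_words, reading_time_s):
    """Words per minute = correctly read words / reading time (s) * 60,
    as defined in the study."""
    return correct_words / reading_time_s * 60.0

print(reading_speed_wpm(115, 60.0))   # 115 correct words in 60 s
print(reading_speed_wpm(90, 120.0))   # slower reader: 90 words in 2 min
```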

  10. Classification Accuracy of MMPI-2 Validity Scales in the Detection of Pain-Related Malingering: A Known-Groups Study

    ERIC Educational Resources Information Center

    Bianchini, Kevin J.; Etherton, Joseph L.; Greve, Kevin W.; Heinly, Matthew T.; Meyers, John E.

    2008-01-01

    The purpose of this study was to determine the accuracy of "Minnesota Multiphasic Personality Inventory" 2nd edition (MMPI-2; Butcher, Dahlstrom, Graham, Tellegen, & Kaemmer, 1989) validity indicators in the detection of malingering in clinical patients with chronic pain using a hybrid clinical-known groups/simulator design. The…

  11. The Episodic Engram Transformed: Time Reduces Retrieval-Related Brain Activity but Correlates It with Memory Accuracy

    ERIC Educational Resources Information Center

    Furman, Orit; Mendelsohn, Avi; Dudai, Yadin

    2012-01-01

    We took snapshots of human brain activity with fMRI during retrieval of realistic episodic memory over several months. Three groups of participants were scanned during a memory test either hours, weeks, or months after viewing a documentary movie. High recognition accuracy after hours decreased after weeks and remained at similar levels after…

  12. Accuracy and Fluency in List and Context Reading of Skilled and RD Groups: Absolute and Relative Performance Levels.

    ERIC Educational Resources Information Center

    Jenkins, Joseph R.; Fuchs, Lynn S.; van den Broek, Paul; Espin, Christine; Deno, Stanley L.

    2003-01-01

    Twenty-four students with reading difficulties (grade 4) and 85 skilled readers completed a reading comprehension test, read aloud a folktale, and read aloud a list of the folktale's words. Skilled readers read three times more correct words per minute in context and showed higher accuracy and rates on all measures. (Contains references.)…

  13. Cognitive control adjustments in healthy older and younger adults: Conflict adaptation, the error-related negativity (ERN), and evidence of generalized decline with age.

    PubMed

    Larson, Michael J; Clayson, Peter E; Keith, Cierra M; Hunt, Isaac J; Hedges, Dawson W; Nielsen, Brent L; Call, Vaughn R A

    2016-03-01

    Older adults display alterations in neural reflections of conflict-related processing. We examined response times (RTs), error rates, and event-related potential (ERP; N2 and P3 components) indices of conflict adaptation (i.e., congruency sequence effects), a cognitive control process wherein previous-trial congruency influences current-trial performance, along with post-error slowing, correct-related negativity (CRN), error-related negativity (ERN) and error positivity (Pe) amplitudes in 65 healthy older adults and 94 healthy younger adults. Older adults showed generalized slowing, had decreased post-error slowing, and committed more errors than younger adults. Both older and younger adults showed conflict adaptation effects; the magnitude of conflict adaptation did not differ by age. N2 amplitudes were similar between groups; younger, but not older, adults showed conflict adaptation effects for P3 component amplitudes. CRN and Pe, but not ERN, amplitudes differed between groups. Data support generalized declines in cognitive control processes in older adults without specific deficits in conflict adaptation.

  14. Impact of reward and punishment motivation on behavior monitoring as indexed by the error-related negativity.

    PubMed

    Potts, Geoffrey F

    2011-09-01

    The error-related negativity (ERN) is thought to index a neural behavior monitoring system with its source in anterior cingulate cortex (ACC). While ACC is involved in a wide variety of cognitive and emotional tasks, there is debate as to what aspects of ACC function are indexed by the ERN. In one model the ERN indexes purely cognitive function, responding to mismatch between intended and executed actions. Another model posits that the ERN is more emotionally driven, elicited when an action is inconsistent with motivational goals. If the ERN indexes mismatch between intended and executed actions, then it should be insensitive to motivational valence, e.g. reward or punishment; in contrast if the ERN indexes the evaluation of responses relative to goals, then it might respond differentially under differing motivational valence. This study used a flanker task motivated by potential reward and potential punishment on different trials and also examined the N2 and P3 to the imperative stimulus, the response Pe, and the FRN and P3 to the outcome feedback to assess the impact of motivation valence on other stages of information processing in this choice reaction time task. Participants were slower on punishment motivated trials and both the N2 and ERN were larger on punishment motivated trials, indicating that loss aversion has an impact on multiple stages of information processing including behavior monitoring.

  15. Swing arm profilometer: analytical solutions of misalignment errors for testing axisymmetric optics

    NASA Astrophysics Data System (ADS)

    Xiong, Ling; Luo, Xiao; Liu, Zhenyu; Wang, Xiaokun; Hu, Haixiang; Zhang, Feng; Zheng, Ligong; Zhang, Xuejun

    2016-07-01

    The swing arm profilometer (SAP) has played a very important role in testing large aspheric optics. Misalignment, one of the most significant error sources affecting test accuracy, leads to low-order errors such as aspherical aberrations and coma, apart from power. In order to analyze the effect of misalignment errors, the relation between alignment parameters and the test results for axisymmetric optics is presented. Analytical solutions of SAP system errors arising from tested-mirror misalignment, arm length L deviation, tilt-angle θ deviation, air-table spin error, and air-table misalignment are derived, respectively, and misalignment tolerances are given to guide surface measurement. In addition, experiments on a 2-m diameter parabolic mirror are demonstrated to verify the model; according to the error budget, we achieve an SAP test accuracy of 0.1 μm root mean square for low-order errors other than power.

  16. Empathy and error processing.

    PubMed

    Larson, Michael J; Fair, Joseph E; Good, Daniel A; Baldwin, Scott A

    2010-05-01

    Recent research suggests a relationship between empathy and error processing. Error processing is an evaluative control function that can be measured using post-error response time slowing and the error-related negativity (ERN) and post-error positivity (Pe) components of the event-related potential (ERP). Thirty healthy participants completed two measures of empathy, the Interpersonal Reactivity Index (IRI) and the Empathy Quotient (EQ), and a modified Stroop task. Post-error slowing was associated with increased empathic personal distress on the IRI. ERN amplitude was related to overall empathy score on the EQ and the fantasy subscale of the IRI. The Pe and measures of empathy were not related. Results remained consistent when negative affect was controlled via partial correlation, with an additional relationship between ERN amplitude and empathic concern on the IRI. Findings support a connection between empathy and error processing mechanisms.
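
    Post-error slowing, one of the error-processing measures used here, is commonly computed as the mean RT on trials following an error minus the mean RT on trials following a correct response. A sketch with toy data (operationalisations vary across studies; this simple contrast is an assumption, not necessarily this paper's exact method):

```python
import numpy as np

def post_error_slowing(rt_ms, correct):
    """Mean RT after errors minus mean RT after correct responses.
    A positive value indicates post-error slowing."""
    rt = np.asarray(rt_ms, dtype=float)
    ok = np.asarray(correct, dtype=bool)
    after_error = rt[1:][~ok[:-1]]    # trials preceded by an error
    after_correct = rt[1:][ok[:-1]]   # trials preceded by a correct response
    return float(after_error.mean() - after_correct.mean())

rt      = [420, 455, 610, 430, 440, 650, 445]           # toy RTs in ms
correct = [True, False, True, True, False, True, True]  # toy accuracy
print(post_error_slowing(rt, correct))
```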

  17. Error detection and response adjustment in youth with mild spastic cerebral palsy: an event-related brain potential study.

    PubMed

    Hakkarainen, Elina; Pirilä, Silja; Kaartinen, Jukka; van der Meere, Jaap J

    2013-06-01

    This study evaluated the brain activation state during error making in youth with mild spastic cerebral palsy and a peer control group while carrying out a stimulus recognition task. The key question was whether patients were detecting their own errors and subsequently improving their performance in a future trial. Findings indicated that error responses of the group with cerebral palsy were associated with weak motor preparation, as indexed by the amplitude of the late contingent negative variation. However, patients were detecting their errors as indexed by the amplitude of the response-locked negativity and thus improved their performance in a future trial. Findings suggest that the consequence of error making on future performance is intact in a sample of youth with mild spastic cerebral palsy. Because the study group is small, the present findings need replication using a larger sample.

  18. TEPEE/GReAT (General Relativity Accuracy Test in an Einstein Elevator): Advances in the detector development

    NASA Astrophysics Data System (ADS)

    Iafolla, V.; Fiorenza, E.; Lefevre, C.; Lucchesi, D. M.; Morbidini, A.; Nozzoli, S.; Peron, R.; Persichini, M.; Reale, A.; Santoli, F.; Lorenzini, E. C.; Shapiro, I. I.; Ashenberg, J.; Bombardelli, C.; Glashow, S.

    This paper reports the development of an experiment (TEPEE/GReAT) to test the Equivalence Principle (EP) at an accuracy of 5 × 10⁻¹⁵, by means of a differential accelerometer free falling in a cryogenic vacuum capsule released from a stratospheric balloon. Such an accuracy requires resolving a very small signal out of the instrument's intrinsic noise and the noise associated with the instrument's motion. Imperfections in the construction of the detector introduce gravity-gradient noise, which can be separated from the violation signal by spinning the detector around a horizontal axis so that the EP-violation signal and the gravity-gradient signal are modulated at two different frequencies. Experimental results on prototype instruments, showing high sensitivity and a high common-mode rejection factor, are presented.

  19. Self-Reported and Observed Punitive Parenting Prospectively Predicts Increased Error-Related Brain Activity in Six-Year-Old Children.

    PubMed

    Meyer, Alexandria; Proudfit, Greg Hajcak; Bufferd, Sara J; Kujawa, Autumn J; Laptook, Rebecca S; Torpey, Dana C; Klein, Daniel N

    2015-07-01

    The error-related negativity (ERN) is a negative deflection in the event-related potential (ERP) occurring approximately 50 ms after error commission at fronto-central electrode sites and is thought to reflect the activation of a generic error monitoring system. Several studies have reported an increased ERN in clinically anxious children, and suggest that anxious children are more sensitive to error commission--although the mechanisms underlying this association are not clear. We have previously found that punishing errors results in a larger ERN, an effect that persists after punishment ends. It is possible that learning-related experiences that impact sensitivity to errors may lead to an increased ERN. In particular, punitive parenting might sensitize children to errors and increase their ERN. We tested this possibility in the current study by prospectively examining the relationship between parenting style during early childhood and children's ERN approximately 3 years later. Initially, 295 parents and children (approximately 3 years old) participated in a structured observational measure of parenting behavior, and parents completed a self-report measure of parenting style. At a follow-up assessment approximately 3 years later, the ERN was elicited during a Go/No-Go task, and diagnostic interviews were completed with parents to assess child psychopathology. Results suggested that both observational measures of hostile parenting and self-report measures of authoritarian parenting style uniquely predicted a larger ERN in children 3 years later. We previously reported that children in this sample with anxiety disorders were characterized by an increased ERN. A mediation analysis indicated that ERN magnitude mediated the relationship between harsh parenting and child anxiety disorder. Results suggest that parenting may shape children's error processing through environmental conditioning and thereby risk for anxiety, although future work is needed to confirm this.

  20. Field error lottery

    SciTech Connect

    Elliott, C.J.; McVey, B.; Quimby, D.C.

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 µm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.
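The seed-to-seed spread described above can be illustrated with a toy Monte Carlo. This is not the FELEX physics model, just a hypothetical stand-in in which performance degrades with the accumulated random walk of field errors along the undulator:

```python
import random

def relative_gain(error_rms, seed, n_periods=100):
    """Toy stand-in for an FEL simulation: relative gain degrades with the
    worst excursion of a random walk of per-period field errors. Purely
    illustrative; not the FELEX physics model."""
    rng = random.Random(seed)
    walk, worst = 0.0, 0.0
    for _ in range(n_periods):
        walk += rng.gauss(0.0, error_rms)
        worst = max(worst, abs(walk))
    return 1.0 / (1.0 + worst ** 2)

# The "lottery": spread over random seeds at a single error level.
gains = [relative_gain(0.01, seed) for seed in range(20)]
spread = max(gains) - min(gains)
```

Plotting such per-seed curves against error level reproduces the kind of stochastic band the abstract describes.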

  1. Fatal error of the relativity and the research for dark matter, dark energy - the physical "dark world"

    NASA Astrophysics Data System (ADS)

    Tong, Zhengrong

    2014-07-01

    1. The hard facts given in the text prove that relativity theory is a fallacy arising from mathematical errors and experimental perjuries. 2. The conclusions of the study show that a so-called "fundamental gravitino" (theoretical mass-energy value given at mw = 3.636 x 10^-45 kg) is the material composition of dark matter in the universe, and also the material composition of all elementary particles. This is the root cause of the universality of gravitation. In-depth research shows that the "fundamental gravitino" in all space is the material foundation of the electromagnetic interaction, the propagation of light and other physical phenomena. Furthermore, it shows that stable elementary particles are "droplets" under strong gravitino pressure (the calculated strength is consistent with the strong interaction) in the entire universe, similar to droplets in a saturated gas. The mathematical models have steady-state solutions corresponding to the proton, the electron and the neutron. The theory gives a perfect explanation and reasonable conclusions for topics such as dark matter, dark energy and the Higgs particle... It seems that Chinese physicists have begun to keep up with the world's physical trend, starting a new physics era of the fundamental gravitino as the mass-energy source of the universe.

  2. The Relative Importance of Random Error and Observation Frequency in Detecting Trends in Upper Tropospheric Water Vapor

    NASA Technical Reports Server (NTRS)

    Whiteman, David N.; Vermeesch, Kevin C.; Oman, Luke D.; Weatherhead, Elizabeth C.

    2011-01-01

    Recent published work assessed the amount of time to detect trends in atmospheric water vapor over the coming century. We address the same question and conclude that under the most optimistic scenarios and assuming perfect data (i.e., observations with no measurement uncertainty) the time to detect trends will be at least 12 years at approximately 200 hPa in the upper troposphere. Our times to detect trends are therefore shorter than those recently reported and this difference is affected by data sources used, method of processing the data, geographic location and pressure level in the atmosphere where the analyses were performed. We then consider the question of how instrumental uncertainty plays into the assessment of time to detect trends. We conclude that due to the high natural variability in atmospheric water vapor, the amount of time to detect trends in the upper troposphere is relatively insensitive to instrumental random uncertainty and that it is much more important to increase the frequency of measurement than to decrease the random error in the measurement. This is put in the context of international networks such as the Global Climate Observing System (GCOS) Reference Upper-Air Network (GRUAN) and the Network for the Detection of Atmospheric Composition Change (NDACC) that are tasked with developing time series of climate quality water vapor data.
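Time-to-detect estimates of the kind discussed above are typically computed with the approximation of Weatherhead et al. (1998), in which detection time grows with the noise standard deviation and its lag-1 autocorrelation. The abstract does not give the paper's exact model, so the inputs below are illustrative:

```python
import math

def years_to_detect(trend, sigma_n, phi, prob_factor=3.3):
    """Approximate number of years to detect a linear trend
    (Weatherhead et al., 1998).

    trend       -- trend magnitude, in data units per year
    sigma_n     -- standard deviation of the natural variability (noise)
    phi         -- lag-1 autocorrelation of the noise
    prob_factor -- ~3.3 for 90% detection probability at 5% significance
    """
    return (prob_factor * (sigma_n / abs(trend))
            * math.sqrt((1 + phi) / (1 - phi))) ** (2.0 / 3.0)

# Hypothetical values: a trend of 0.1 units/year against noise with
# sigma_n = 5 units and modest autocorrelation.
t_detect = years_to_detect(trend=0.1, sigma_n=5.0, phi=0.2)
```

Note how the formula captures the abstract's point: detection time depends strongly on the natural variability (sigma_n, phi), which more frequent sampling reduces, and only weakly on added instrumental random error.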

  3. Errata: Papers in Error Analysis.

    ERIC Educational Resources Information Center

    Svartvik, Jan, Ed.

    Papers presented at the symposium of error analysis in Lund, Sweden, in September 1972, approach error analysis specifically in its relation to foreign language teaching and second language learning. Error analysis is defined as having three major aspects: (1) the description of the errors, (2) the explanation of errors by means of contrastive…

  4. Effect of ephemeris errors on the accuracy of the computation of the tangent point altitude of a solar scanning ray as measured by the SAGE 1 and 2 instruments

    NASA Technical Reports Server (NTRS)

    Buglia, James J.

    1989-01-01

    An analysis was made of the error in the minimum altitude of a geometric ray from an orbiting spacecraft to the Sun. The sunrise and sunset errors are highly correlated and are opposite in sign. With the ephemeris generated for the SAGE 1 instrument data reduction, these errors can be as large as 200 to 350 meters (1 sigma) after 7 days of orbit propagation. The bulk of this error results from errors in the position of the orbiting spacecraft rather than errors in computing the position of the Sun. These errors, in turn, result from the discontinuities in the ephemeris tapes resulting from the orbital determination process. Data taken from the end of the definitive ephemeris tape are used to generate the predict data for the time interval covered by the next arc of the orbit determination process. The predicted data are then updated by using the tracking data. The growth of these errors is very nearly linear, with a slight nonlinearity caused by the beta angle. An approximate analytic method is given, which predicts the magnitude of the errors and their growth in time with reasonable fidelity.

  5. Improved accuracies for satellite tracking

    NASA Technical Reports Server (NTRS)

    Kammeyer, P. C.; Fiala, A. D.; Seidelmann, P. K.

    1991-01-01

    A charge coupled device (CCD) camera on an optical telescope which follows the stars can be used to provide high accuracy comparisons between the line of sight to a satellite, over a large range of satellite altitudes, and lines of sight to nearby stars. The CCD camera can be rotated so the motion of the satellite is down columns of the CCD chip, and charge can be moved from row to row of the chip at a rate which matches the motion of the optical image of the satellite across the chip. Measurement of satellite and star images, together with accurate timing of charge motion, provides accurate comparisons of lines of sight. Given lines of sight to stars near the satellite, the satellite line of sight may be determined. Initial experiments with this technique, using an 18 cm telescope, have produced TDRS-4 observations which have an rms error of 0.5 arc second, 100 m at synchronous altitude. Use of a mosaic of CCD chips, each having its own rate of charge motion, in the focal plane of a telescope would allow point images of a geosynchronous satellite and of stars to be formed simultaneously in the same telescope. The line of sight of such a satellite could be measured relative to nearby star lines of sight with an accuracy of approximately 0.03 arc second. Development of a star catalog with 0.04 arc second rms accuracy and perhaps ten stars per square degree would allow determination of satellite lines of sight with 0.05 arc second rms absolute accuracy, corresponding to 10 m at synchronous altitude. Multiple station time transfers through a communications satellite can provide accurate distances from the satellite to the ground stations. Such observations can, if calibrated for delays, determine satellite orbits to an accuracy approaching 10 m rms.
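The quoted equivalence of 0.5 arc second to roughly 100 m can be checked by projecting the angular error over the range to a geosynchronous satellite; the 4 x 10^7 m range below is an assumed round figure, not a value from the record:

```python
import math

ARCSEC = math.radians(1.0 / 3600.0)  # one arc second in radians

def cross_range_error(angle_arcsec, range_m):
    """Cross-range position error produced by an angular error
    at a given range (small-angle geometry)."""
    return math.tan(angle_arcsec * ARCSEC) * range_m

# 0.5 arc second seen over ~4e7 m comes out near the quoted 100 m:
err_m = cross_range_error(0.5, 4.0e7)
```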

  6. Single-plane versus three-plane methods for relative range error evaluation of medium-range 3D imaging systems

    NASA Astrophysics Data System (ADS)

    MacKinnon, David K.; Cournoyer, Luc; Beraldin, J.-Angelo

    2015-05-01

    Within the context of the ASTM E57 working group WK12373, we compare the two methods that had been initially proposed for calculating the relative range error of medium-range (2 m to 150 m) optical non-contact 3D imaging systems: the first is based on a single plane (single-plane assembly) and the second on an assembly of three mutually non-orthogonal planes (three-plane assembly). Both methods are evaluated for their utility in generating a metric to quantify the relative range error of medium-range optical non-contact 3D imaging systems. We conclude that the three-plane assembly is comparable to the single-plane assembly with regard to quantification of relative range error while eliminating the requirement to isolate the edges of the target plate face.
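The abstract does not define the relative range error metric itself. A minimal sketch of the underlying idea, fitting a least-squares plane to points measured on a planar target and taking the RMS residual along range, might look like this (the data below are synthetic):

```python
import numpy as np

def plane_fit_rms(points):
    """Fit z = a*x + b*y + c to an (N, 3) point cloud by least squares
    and return the RMS of the residuals, a simple stand-in for a range
    error statistic computed from a planar target."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    residuals = z - A @ coeffs
    return float(np.sqrt(np.mean(residuals ** 2)))

# Synthetic planar target with 1 mm Gaussian range noise:
rng = np.random.default_rng(0)
xy = rng.uniform(-1.0, 1.0, size=(500, 2))
z = 0.1 * xy[:, 0] - 0.2 * xy[:, 1] + 5.0 + rng.normal(0.0, 0.001, 500)
rms = plane_fit_rms(np.column_stack([xy, z]))
```

With a three-plane assembly the same statistic is computed per plane, which is what removes the need to isolate the edges of a single target plate.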

  7. Spaceborne scanner imaging system errors

    NASA Technical Reports Server (NTRS)

    Prakash, A.

    1982-01-01

    The individual sensor system design elements which are the a priori components in the registration and rectification process, and the potential impact of error budgets on multitemporal registration and side-lap registration are analyzed. The properties of scanner, MLA, and SAR imaging systems are reviewed. Each sensor displays internal distortion properties which to varying degrees make it difficult to generate an orthophoto projection of the data acceptable for multiple-pass registration or meeting national map accuracy standards, and each is also affected to varying degrees by relief displacements in moderate to hilly terrain. Nonsensor-related distortions, associated with the accuracy of ephemeris determination and platform stability, have a major impact on local geometric distortions. Platform stability improvements expected from the new multimission spacecraft series and improved ephemeris and ground control point determination from the NAVSTAR/Global Positioning System satellites are reviewed.

  8. Some Unintended Consequences of Information Technology in Health Care: The Nature of Patient Care Information System-related Errors

    PubMed Central

    Ash, Joan S.; Berg, Marc; Coiera, Enrico

    2004-01-01

    Medical error reduction is an international issue, as is the implementation of patient care information systems (PCISs) as a potential means to achieving it. As researchers conducting separate studies in the United States, The Netherlands, and Australia, using similar qualitative methods to investigate implementing PCISs, the authors have encountered many instances in which PCIS applications seem to foster errors rather than reduce their likelihood. The authors describe the kinds of silent errors they have witnessed and, from their different social science perspectives (information science, sociology, and cognitive science), they interpret the nature of these errors. The errors fall into two main categories: those in the process of entering and retrieving information, and those in the communication and coordination process that the PCIS is supposed to support. The authors believe that with a heightened awareness of these issues, informaticians can educate, design systems, implement, and conduct research in such a way that they might be able to avoid the unintended consequences of these subtle silent errors. PMID:14633936

  9. Diminished error-related brain activity as a promising endophenotype for substance-use disorders: evidence from high-risk offspring.

    PubMed

    Euser, Anja S; Evans, Brittany E; Greaves-Lord, Kirstin; Huizink, Anja C; Franken, Ingmar H A

    2013-11-01

    One of the core features of individuals with a substance-use disorder (SUD) is the reduced ability to successfully process errors and monitor performance, as reflected by diminished error-related negativities (ERN). However, whether these error-related brain abnormalities are caused by chronic substance use or rather predate it remains unclear. The present study elucidated whether hypoactive performance monitoring represents an endophenotypic vulnerability marker for SUD by using a high-risk paradigm. We assessed the behavioral components of error-processing, as well as the amplitude of the ERN in the event-related brain potential (ERP) during performance of the Eriksen Flanker Task among high-risk adolescents of parents with a SUD (HR; n = 28) and normal-risk controls (NR; n = 40). Results revealed that HR offspring were characterized by a higher prevalence of internalizing symptoms and more frequent cannabis use, the latter having a significant influence on the ERN. Interestingly, risk group uniquely predicted the negativity amplitude in response to error trials above and beyond confounding variables. Moreover, we found evidence of smaller ERN amplitudes in (cannabis use-naïve) HR offspring, reflecting impaired early processing of error information and suboptimal performance monitoring, whereas no robust group differences were found for overall behavioral performance. This effect was independent of an overall reduction in brain activity. Taken together, although we cannot rule out alternative explanations, the results of our study may provide evidence for the idea that diminished error-processing represents a promising endophenotype for SUD that may indicate a vulnerability to the disorder.

  10. ALTIMETER ERRORS,

    DTIC Science & Technology

    CIVIL AVIATION, *ALTIMETERS, FLIGHT INSTRUMENTS, RELIABILITY, ERRORS, PERFORMANCE (ENGINEERING), BAROMETERS, BAROMETRIC PRESSURE, ATMOSPHERIC TEMPERATURE, ALTITUDE, CORRECTIONS, AVIATION SAFETY, USSR.

  11. Dopaminergic Medication Modulates Learning from Feedback and Error-Related Negativity in Parkinson’s Disease: A Pilot Study

    PubMed Central

    Volpato, Chiara; Schiff, Sami; Facchini, Silvia; Silvoni, Stefano; Cavinato, Marianna; Piccione, Francesco; Antonini, Angelo; Birbaumer, Niels

    2016-01-01

    Dopamine systems mediate key aspects of reward learning. Parkinson’s disease (PD) represents a valuable model to study reward mechanisms because both the disease process and the anti-Parkinson medications influence dopamine neurotransmission. The aim of this pilot study was to investigate whether the level of levodopa differently modulates learning from positive and negative feedback and its electrophysiological correlate, the error-related negativity (ERN), in PD. Ten PD patients and ten healthy participants performed a two-stage reinforcement learning task. In the Learning Phase, they had to learn the correct stimulus within a stimulus pair on the basis of a probabilistic positive or negative feedback. Three sets of stimulus pairs were used. In the Testing Phase, the participants were tested with novel combinations of the stimuli previously experienced to evaluate whether they learned more from positive or negative feedback. PD patients performed the task both ON- and OFF-levodopa in two separate sessions while they remained on stable therapy with dopamine agonists. The electroencephalogram (EEG) was recorded during the task. PD patients were less accurate in negative than positive learning both OFF- and ON-levodopa. In the OFF-levodopa state they were less accurate than controls in negative learning. PD patients had a smaller ERN amplitude OFF- than ON-levodopa only in negative learning. In the OFF-levodopa state they had a smaller ERN amplitude than controls in negative learning. We hypothesize that high tonic dopaminergic stimulation due to the dopamine agonist medication, combined with the low level of phasic dopamine due to the OFF-levodopa state, could prevent the phasic “dopamine dips” indicated by the ERN that are needed for learning from negative feedback.

  12. Accuracy and reliability of GPS devices for measurement of sports-specific movement patterns related to cricket, tennis, and field-based team sports.

    PubMed

    Vickery, William M; Dascombe, Ben J; Baker, John D; Higham, Dean G; Spratford, Wayne A; Duffield, Rob

    2014-06-01

    The aim of this study was to determine the accuracy and reliability of 5, 10, and 15 Hz global positioning system (GPS) devices. Two male subjects (mean ± SD; age, 25.5 ± 0.7 years; height, 1.75 ± 0.01 m; body mass, 74 ± 5.7 kg) completed 10 repetitions of drills replicating movements typical of tennis, cricket, and field-based (football) sports. All movements were completed wearing two 5 and 10 Hz MinimaxX and two GPS-Sports 15 Hz GPS devices in a specially designed harness. Criterion movement data for distance and speed were provided by a 22-camera VICON system sampling at 100 Hz. Accuracy was determined using 1-way analysis of variance with Tukey's post hoc tests. Interunit reliability was determined using intraclass correlation (ICC), and typical error was estimated as coefficient of variation (CV). Overall, the majority of distance and speed measures from the 5, 10, and 15 Hz GPS devices were not significantly different (p > 0.05) from the VICON data. Additionally, no improvements in the accuracy or reliability of GPS devices were observed with an increase in the sampling rate. However, the CV for the 5 and 15 Hz devices for distance and speed measures ranged between 3 and 33%, with increasing variability evident in higher speed zones. The majority of ICC measures possessed a low level of interunit reliability (r = -0.35 to 0.39). Based on these results, practitioners using these devices should be aware that measurements of distance and speed may be consistently underestimated, regardless of the movements performed.
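Typical error expressed as a CV is commonly computed in sports science from paired inter-unit differences (Hopkins' formulation). The abstract does not give the paper's exact computation, so this is an assumed sketch with hypothetical readings:

```python
import math
import statistics

def typical_error_cv(unit_a, unit_b):
    """Typical error between two paired GPS units, expressed as a
    coefficient of variation (%): SD of the paired differences divided
    by sqrt(2), relative to the grand mean. A common sports-science
    formulation; assumed here, not taken from the paper."""
    diffs = [a - b for a, b in zip(unit_a, unit_b)]
    te = statistics.stdev(diffs) / math.sqrt(2)
    grand_mean = statistics.mean(unit_a + unit_b)
    return 100.0 * te / grand_mean

# Hypothetical paired distance readings (m) from two units:
cv = typical_error_cv([100, 102, 98, 105], [97, 101, 95, 99])
```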

  13. Accuracy of Stokes integration for geoid computation

    NASA Astrophysics Data System (ADS)

    Ismail, Zahra; Jamet, Olivier; Altamimi, Zuheir

    2014-05-01

    Geoid determination by the remove-compute-restore (RCR) technique involves the application of Stokes's integral on reduced gravity anomalies. Reduced gravity anomalies are obtained through interpolation after removing the low-degree gravity signal from a space spherical harmonic model and the high frequencies from topographical effects, and cover a spectrum ranging from degree 150-200. Stokes's integral is truncated to a limited region around the computation point, producing an error that can be reduced by a modification of Stokes's kernel. We study Stokes integration accuracy on synthetic signals of various frequency ranges, produced with EGM2008 spherical harmonic coefficients up to degree 2000. We analyse the integration error according to the frequency range of the signal, the resolution of the gravity anomaly grid and the radius of Stokes integration. The study shows that the behaviour of the relative errors is frequency independent. The standard Stokes kernel is therefore insufficient to produce 1 cm geoid accuracy without a removal of the major part of the gravity signal up to degree 600. Integration over an area of radius greater than 3 degrees does not bring further accuracy improvement. The results are compared to a similar experiment using the modified Stokes kernel formula (Ellmann 2004, Sjöberg 2003). References: Ellmann, A. (2004). The geoid for the Baltic countries determined by least-squares modification of Stokes's formula. Sjöberg, L.E. (2003). A general model of modifying Stokes's formula and its least-squares solution. Journal of Geodesy, 77, 459-464.
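Stokes's integral weights the reduced anomalies with the Stokes function, whose standard closed form (e.g. Heiskanen and Moritz) can be evaluated directly; truncating the integration radius amounts to dropping the kernel beyond some spherical distance:

```python
import math

def stokes_kernel(psi):
    """Closed-form Stokes function S(psi), with psi the spherical
    distance in radians. Singular as psi -> 0, which is why the
    innermost zone is handled separately in practice."""
    s = math.sin(psi / 2.0)
    return (1.0 / s - 6.0 * s + 1.0 - 5.0 * math.cos(psi)
            - 3.0 * math.cos(psi) * math.log(s + s * s))
```

A modified kernel in the sense of Sjöberg subtracts low-degree Legendre terms from S(psi) so that the truncated integral is less sensitive to the omitted far zone.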

  14. Apoplastic water fraction and rehydration techniques introduce significant errors in measurements of relative water content and osmotic potential in plant leaves.

    PubMed

    Arndt, Stefan K; Irawan, Andi; Sanders, Gregor J

    2015-12-01

    Relative water content (RWC) and the osmotic potential (π) of plant leaves are important plant traits that can be used to assess drought tolerance or adaptation of plants. We estimated the magnitude of errors that are introduced by dilution of π from apoplastic water in osmometry methods and the errors that occur during rehydration of leaves for RWC and π in 14 different plant species from trees, grasses and herbs. Our data indicate that rehydration technique and length of rehydration can introduce significant errors in both RWC and π. Leaves from all species were fully turgid after 1-3 h of rehydration and increasing the rehydration time resulted in a significant underprediction of RWC. Standing rehydration via the petiole introduced the least errors while rehydration via floating disks and submerging leaves for rehydration led to a greater underprediction of RWC. The same effect was also observed for π. The π values following standing rehydration could be corrected by applying a dilution factor from apoplastic water dilution using an osmometric method but not by using apoplastic water fraction (AWF) from pressure volume (PV) curves. The apoplastic water dilution error was between 5 and 18%, while the two other rehydration methods introduced much greater errors. We recommend the use of the standing rehydration method because (1) the correct rehydration time can be evaluated by measuring water potential, (2) overhydration effects were smallest, and (3) π can be accurately corrected by using osmometric methods to estimate apoplastic water dilution.
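The RWC definition underlying these measurements is the standard mass-balance one. The sketch below shows how an inflated turgid mass from over-rehydration mechanically underpredicts RWC; the masses are hypothetical:

```python
def relative_water_content(fresh, dry, turgid):
    """RWC (%) = (FW - DW) / (TW - DW) * 100, with FW the sampled fresh
    mass, DW the oven-dry mass, and TW the fully turgid mass after
    rehydration (all in the same units, e.g. grams)."""
    return 100.0 * (fresh - dry) / (turgid - dry)

# Hypothetical leaf: correct rehydration vs. an over-rehydrated sample
# whose turgid mass is inflated by excess water uptake.
rwc_correct = relative_water_content(0.80, 0.20, 1.00)       # 75.0 %
rwc_overhydrated = relative_water_content(0.80, 0.20, 1.10)  # lower
```

This is the direction of bias the abstract reports: longer rehydration inflates TW and so underpredicts RWC.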

  15. Accuracy of References in Ten Library Science Journals.

    ERIC Educational Resources Information Center

    Pope, Nancy N.

    1992-01-01

    A study of 100 article citations from 11 library science journals showed only 45 article citations that were completely free of errors, while 11 had major errors--i.e., errors preventing or hindering location of the reference--and the remaining 44 had minor errors. Citation accuracy in library science journals appears similar to accuracy in other…

  16. Relative accuracy of grid references derived from postcode and address in UK epidemiological studies of overhead power lines.

    PubMed

    Swanson, J; Vincent, T J; Bunch, K J

    2014-12-01

    In the UK, the location of an address, necessary for calculating the distance to overhead power lines in epidemiological studies, is available from different sources. We assess the accuracy of each. The grid reference specific to each address, provided by the Ordnance Survey product Address-Point, is generally accurate to a few metres, which will usually be sufficient for calculating magnetic fields from the power lines. The grid reference derived from the postcode rather than the individual address is generally accurate to tens of metres, and may be acceptable for assessing effects that vary in the general proximity of the power line, but is probably not acceptable for assessing magnetic-field effects.

  17. The relative degree of difficulty of L2 Spanish /d, t/, trill, and tap by L1 English speakers: Auditory and acoustic methods of defining pronunciation accuracy

    NASA Astrophysics Data System (ADS)

    Waltmunson, Jeremy C.

    2005-07-01

    This study has investigated the L2 acquisition of Spanish word-medial /d, t, r, ɾ/, word-initial /r/, and onset cluster /ɾ/. Two similar experiments were designed to address the relative degree of difficulty of the word-medial contrasts, as well as the effect of word position on /r/ and /ɾ/ accuracy scores. In addition, the effect of vowel height on the production of [r] and the L2 emergence of the svarabhakti vowel in onset cluster /ɾ/ were investigated. Participants included 34 L1 English speakers from a range of L2 Spanish levels who were recorded in multiple sessions across a 6-month or 2-month period. The criteria for assessing segment accuracy were based on auditory and acoustic features found in productions by native Spanish speakers. In order to be scored as accurate, the L2 productions had to evidence both the auditory and acoustic features found in native speaker productions. L2 participant scores for each target were normalized in order to account for the variation of features found across native speaker productions. The results showed that word-medial accuracy scores followed two significant rankings (from lowest to highest): /r ≤ d ≤ ɾ ≤ t/ and /r ≤ ɾ ≤ d ≤ t/; however, when scores for /t/ included a voice onset time criterion, only the ranking /r ≤ ɾ ≤ d ≤ t/ was significant. These results suggest that /r/ is most difficult for learners while /t/ is least difficult, although individual variation was found. Regarding /r/, there was a strong effect of word position and vowel height on accuracy scores. For productions of /ɾ/, there was a strong effect of syllable position on accuracy scores. Acoustic analyses of taps in onset clusters revealed that only the experienced L2 Spanish participants demonstrated svarabhakti vowel emergence with native-like performance, suggesting that its emergence occurs relatively late in L2 acquisition.

  18. Relating indices of knowledge structure coherence and accuracy to skill-based performance: Is there utility in using a combination of indices?

    PubMed

    Schuelke, Matthew J; Day, Eric Anthony; McEntire, Lauren E; Boatman, Jazmine Espejo; Wang, Xiaoqian; Kowollik, Vanessa; Boatman, Paul R

    2009-07-01

    The authors examined the relative criterion-related validity of knowledge structure coherence and two accuracy-based indices (closeness and correlation) as well as the utility of using a combination of knowledge structure indices in the prediction of skill acquisition and transfer. Findings from an aggregation of 5 independent samples (N = 958) whose participants underwent training on a complex computer simulation indicated that coherence and the accuracy-based indices yielded comparable zero-order predictive validities. Support for the incremental validity of using a combination of indices was mixed; the most, albeit small, gain came in pairing coherence and closeness when predicting transfer. After controlling for baseline skill, general mental ability, and declarative knowledge, only coherence explained a statistically significant amount of unique variance in transfer. Overall, the results suggested that the different indices largely overlap in their representation of knowledge organization, but that coherence better reflects adaptable aspects of knowledge organization important to skill transfer.

  19. GEOSPATIAL DATA ACCURACY ASSESSMENT

    EPA Science Inventory

    The development of robust accuracy assessment methods for the validation of spatial data represents a difficult scientific challenge for the geospatial science community. The importance and timeliness of this issue is related directly to the dramatic escalation in the developmen...

  20. Characteristics of patients making serious inhaler errors with a dry powder inhaler and association with asthma-related events in a primary care setting

    PubMed Central

    Westerik, Janine A. M.; Carter, Victoria; Chrystyn, Henry; Burden, Anne; Thompson, Samantha L.; Ryan, Dermot; Gruffydd-Jones, Kevin; Haughney, John; Roche, Nicolas; Lavorini, Federico; Papi, Alberto; Infantino, Antonio; Roman-Rodriguez, Miguel; Bosnic-Anticevich, Sinthia; Lisspers, Karin; Ställberg, Björn; Henrichsen, Svein Høegh; van der Molen, Thys; Hutton, Catherine; Price, David B.

    2016-01-01

    Abstract Objective: Correct inhaler technique is central to effective delivery of asthma therapy. The study aim was to identify factors associated with serious inhaler technique errors and their prevalence among primary care patients with asthma using the Diskus dry powder inhaler (DPI). Methods: This was a historical, multinational, cross-sectional study (2011–2013) using the iHARP database, an international initiative that includes patient- and healthcare provider-reported questionnaires from eight countries. Patients with asthma were observed for serious inhaler errors by trained healthcare providers as predefined by the iHARP steering committee. Multivariable logistic regression, stepwise reduced, was used to identify clinical characteristics and asthma-related outcomes associated with ≥1 serious errors. Results: Of 3681 patients with asthma, 623 (17%) were using a Diskus (mean [SD] age, 51 [14]; 61% women). A total of 341 (55%) patients made ≥1 serious errors. The most common errors were the failure to exhale before inhalation, insufficient breath-hold at the end of inhalation, and inhalation that was not forceful from the start. Factors significantly associated with ≥1 serious errors included asthma-related hospitalization the previous year (odds ratio [OR] 2.07; 95% confidence interval [CI], 1.26–3.40); obesity (OR 1.75; 1.17–2.63); poor asthma control the previous 4 weeks (OR 1.57; 1.04–2.36); female sex (OR 1.51; 1.08–2.10); and no inhaler technique review during the previous year (OR 1.45; 1.04–2.02). Conclusions: Patients with evidence of poor asthma control should be targeted for a review of their inhaler technique even when using a device thought to have a low error rate. PMID:26810934
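The odds ratios above come from multivariable logistic regression. For intuition, an unadjusted OR with a Wald confidence interval can be computed from a 2x2 table; the counts below are hypothetical, not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with a Wald 95% CI from a 2x2 table:
    a = exposed with >=1 serious error,   b = exposed without,
    c = unexposed with >=1 serious error, d = unexposed without.
    The study reports adjusted ORs from multivariable logistic
    regression; this is only the univariate analogue."""
    odds_ratio = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(odds_ratio) - z * se_log_or)
    upper = math.exp(math.log(odds_ratio) + z * se_log_or)
    return odds_ratio, lower, upper

# Hypothetical counts for illustration only:
or_, lo, hi = odds_ratio_ci(40, 20, 30, 30)
```

A CI that excludes 1.0, as for the hospitalization OR of 2.07 (1.26-3.40) above, indicates a statistically significant association.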

  1. The Effectiveness of Noninvasive Biomarkers to Predict Hepatitis B-Related Significant Fibrosis and Cirrhosis: A Systematic Review and Meta-Analysis of Diagnostic Test Accuracy

    PubMed Central

    Xu, Xue-Ying; Kong, Hong; Song, Rui-Xiang; Zhai, Yu-Han; Wu, Xiao-Fei; Ai, Wen-Si; Liu, Hong-Bo

    2014-01-01

    Noninvasive biomarkers have been developed to predict hepatitis B virus (HBV)-related fibrosis owing to the significant limitations of liver biopsy. Those biomarkers were initially derived from evaluation of hepatitis C virus (HCV)-related fibrosis, and their accuracy among HBV-infected patients was under constant debate. A systematic review was conducted on records in PubMed, EMBASE and the Cochrane Library electronic databases, up until April 1st, 2013, in order to systematically assess the effectiveness and accuracy of these biomarkers for predicting HBV-related fibrosis. The questionnaire for quality assessment of diagnostic accuracy studies (QUADAS) was used. Out of 115 articles evaluated for eligibility, 79 studies satisfied the pre-determined inclusion criteria for meta-analysis. Eventually, our final data set for the meta-analysis contained 30 studies. The areas under the SROC curve for APRI, FIB-4, and FibroTest of significant fibrosis were 0.77, 0.75, and 0.84, respectively. For cirrhosis, the areas under the SROC curve for APRI, FIB-4 and FibroTest were 0.75, 0.87, and 0.90, respectively. The heterogeneity of FIB-4 and FibroTest were not statistically significant. The heterogeneity of APRI for detecting significant fibrosis was affected by median age (P = 0.0211), and for cirrhosis was affected by etiology (P = 0.0159). Based on the analysis we claim that FibroTest has excellent diagnostic accuracy for identification of HBV-related significant fibrosis and cirrhosis. FIB-4 has modest benefits and may be suitable for wider scope implementation. PMID:24964038

  2. Navigation Accuracy Guidelines for Orbital Formation Flying

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Alfriend, Kyle T.

    2004-01-01

    Some simple guidelines based on the accuracy in determining a satellite formation's semi-major axis differences are useful in making preliminary assessments of the navigation accuracy needed to support such missions. These guidelines are valid for any elliptical orbit, regardless of eccentricity. Although maneuvers required for formation establishment, reconfiguration, and station-keeping require accurate prediction of the state estimate to the maneuver time, and hence are directly affected by errors in all the orbital elements, experience has shown that determination of orbit plane orientation and orbit shape to acceptable levels is less challenging than the determination of orbital period or semi-major axis. Furthermore, any differences among the members' semi-major axes are undesirable for a satellite formation, since they will lead to differential along-track drift due to period differences. Since inevitable navigation errors prevent these differences from ever being zero, one may use the guidelines this paper presents to determine how much drift will result from a given relative navigation accuracy, or conversely what navigation accuracy is required to limit drift to a given rate. Since the guidelines do not account for non-two-body perturbations, they may be viewed as useful preliminary design tools, rather than as the basis for mission navigation requirements, which should be based on detailed analysis of the mission configuration, including all relevant sources of uncertainty.
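The link between a semi-major axis difference and along-track drift invoked above follows from two-body dynamics: a member offset by Δa from its neighbor drifts along-track by roughly 3πΔa per orbit. A sketch (the LEO example values are illustrative, not from the paper):

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2

def along_track_drift_per_orbit(delta_a):
    """Along-track separation growth per orbit caused by a semi-major
    axis difference delta_a (m), two-body approximation: ~3*pi*delta_a."""
    return 3.0 * math.pi * delta_a

def drift_rate(a, delta_a):
    """Mean along-track drift rate (m/s) for mean semi-major axis a (m)."""
    period = 2.0 * math.pi * math.sqrt(a ** 3 / MU_EARTH)
    return along_track_drift_per_orbit(delta_a) / period

# Illustrative LEO case: a 10 m semi-major axis error at a = 6778 km
# drifts ~94 m per orbit, i.e. on the order of 1.5 km per day.
rate = drift_rate(6778e3, 10.0)
```

Inverting the relation gives the guideline form: to cap drift at a given rate, the navigation system must determine Δa to the corresponding accuracy.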

  3. GP-B error modeling and analysis

    NASA Technical Reports Server (NTRS)

    Hung, J. C.

    1982-01-01

    Individual source errors and their effects on the accuracy of the Gravity Probe B (GP-B) experiment were investigated. Emphasis was placed on: (1) the refinement of source error identification and classifications of error according to their physical nature; (2) error analysis for the GP-B data processing; and (3) measurement geometry for the experiment.

  4. High accuracy radiation efficiency measurement techniques

    NASA Technical Reports Server (NTRS)

    Kozakoff, D. J.; Schuchardt, J. M.

    1981-01-01

    The relatively large antenna subarrays (tens of meters) to be used in the Solar Power Satellite, and the desire to accurately quantify antenna performance, dictate the requirement for specialized measurement techniques. The error contributors associated with both far-field and near-field antenna measurement concepts were quantified. As a result, instrumentation configurations with measurement accuracy potential were identified. In every case, advances in the state of the art of associated electronics were found to be required. Relative cost trade-offs between a candidate far-field elevated antenna range and near-field facility were also performed.

  5. The surveillance error grid.

    PubMed

    Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris

    2014-07-01

    Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. 
Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to

  6. Female Genital Mutilation in Sierra Leone: Forms, Reliability of Reported Status, and Accuracy of Related Demographic and Health Survey Questions

    PubMed Central

    Grant, Donald S.; Berggren, Vanja

    2013-01-01

    Objective. To determine forms of female genital mutilation (FGM), assess consistency between self-reported and observed FGM status, and assess the accuracy of Demographic and Health Surveys (DHS) FGM questions in Sierra Leone. Methods. This cross-sectional study, conducted between October 2010 and April 2012, enrolled 558 females aged 12–47 from eleven antenatal clinics in northeast Sierra Leone. Data on demography, FGM status, and self-reported anatomical descriptions were collected. Genital inspection confirmed the occurrence and extent of cutting. Results. All participants reported FGM status; 4 refused genital inspection. Using the WHO classification of FGM, 31.7% had type Ib; 64.1% type IIb; and 4.2% type IIc. There was a high level of agreement between reported and observed FGM prevalence (81.2% and 81.4%, resp.). There was no correlation between DHS FGM responses and anatomic extent of cutting, as 2.7% reported pricking; 87.1% flesh removal; and 1.1% that genitalia was sewn closed. Conclusion. Types I and II are the main forms of FGM, with labia majora alterations in almost 5% of cases. Self-reports on FGM status could serve as a proxy measurement for FGM prevalence but not for FGM type. The DHS FGM questions are inaccurate for determining cutting extent. PMID:24204384

  7. Interindividual variation in fornix microstructure and macrostructure is related to visual discrimination accuracy for scenes but not faces.

    PubMed

    Postans, Mark; Hodgetts, Carl J; Mundy, Matthew E; Jones, Derek K; Lawrence, Andrew D; Graham, Kim S

    2014-09-03

    Transection of the nonhuman primate fornix has been shown to impair learning of configurations of spatial features and object-in-scene memory. Although damage to the human fornix also results in memory impairment, it is not known whether there is a preferential involvement of this white-matter tract in spatial learning, as implied by animal studies. Diffusion-weighted MR images were obtained from healthy participants who had completed versions of a task in which they made rapid same/different discriminations to two categories of highly visually similar stimuli: (1) virtual reality scene pairs; and (2) face pairs. Diffusion-MRI measures of white-matter microstructure [fractional anisotropy (FA) and mean diffusivity (MD)] and macrostructure (tissue volume fraction, f) were then extracted from the fornix of each participant, which had been reconstructed using a deterministic tractography protocol. Fornix MD and f measures correlated with scene, but not face, discrimination accuracy in both discrimination tasks. A complementary voxelwise analysis using tract-based spatial statistics suggested the crus of the fornix as a focus for this relationship. These findings extend previous reports of spatial learning impairments after fornix transection in nonhuman primates, critically highlighting the fornix as a source of interindividual variation in scene discrimination in humans.

  8. Interindividual Variation in Fornix Microstructure and Macrostructure Is Related to Visual Discrimination Accuracy for Scenes But Not Faces

    PubMed Central

    Hodgetts, Carl J.; Mundy, Matthew E.; Jones, Derek K.; Lawrence, Andrew D.; Graham, Kim S.

    2014-01-01

    Transection of the nonhuman primate fornix has been shown to impair learning of configurations of spatial features and object-in-scene memory. Although damage to the human fornix also results in memory impairment, it is not known whether there is a preferential involvement of this white-matter tract in spatial learning, as implied by animal studies. Diffusion-weighted MR images were obtained from healthy participants who had completed versions of a task in which they made rapid same/different discriminations to two categories of highly visually similar stimuli: (1) virtual reality scene pairs; and (2) face pairs. Diffusion-MRI measures of white-matter microstructure [fractional anisotropy (FA) and mean diffusivity (MD)] and macrostructure (tissue volume fraction, f) were then extracted from the fornix of each participant, which had been reconstructed using a deterministic tractography protocol. Fornix MD and f measures correlated with scene, but not face, discrimination accuracy in both discrimination tasks. A complementary voxelwise analysis using tract-based spatial statistics suggested the crus of the fornix as a focus for this relationship. These findings extend previous reports of spatial learning impairments after fornix transection in nonhuman primates, critically highlighting the fornix as a source of interindividual variation in scene discrimination in humans. PMID:25186756

  9. Protein NMR structures refined with Rosetta have higher accuracy relative to corresponding X-ray crystal structures.

    PubMed

    Mao, Binchen; Tejero, Roberto; Baker, David; Montelione, Gaetano T

    2014-02-05

    We have found that refinement of protein NMR structures using Rosetta with experimental NMR restraints yields more accurate protein NMR structures than those that have been deposited in the PDB using standard refinement protocols. Using 40 pairs of NMR and X-ray crystal structures determined by the Northeast Structural Genomics Consortium, for proteins ranging in size from 5-22 kDa, restrained Rosetta refined structures fit better to the raw experimental data, are in better agreement with their X-ray counterparts, and have better phasing power compared to conventionally determined NMR structures. For 37 proteins for which NMR ensembles were available and which had similar structures in solution and in the crystal, all of the restrained Rosetta refined NMR structures were sufficiently accurate to be used for solving the corresponding X-ray crystal structures by molecular replacement. The protocol for restrained refinement of protein NMR structures was also compared with restrained CS-Rosetta calculations. For proteins smaller than 10 kDa, restrained CS-Rosetta, starting from extended conformations, provides slightly more accurate structures, while for proteins in the size range of 10-25 kDa the less CPU intensive restrained Rosetta refinement protocols provided equally or more accurate structures. The restrained Rosetta protocols described here can improve the accuracy of protein NMR structures and should find broad and general use in studies of protein structure and function.

  10. SU-E-J-19: Accuracy of Dual-Energy CT-Derived Relative Electron Density for Proton Therapy Dose Calculation

    SciTech Connect

    Mullins, J; Duan, X; Kruse, J; Herman, M; Bues, M

    2014-06-01

    Purpose: To determine the suitability of dual-energy CT (DECT) to calculate relative electron density (RED) of tissues for accurate proton therapy dose calculation. Methods: DECT images of RED tissue surrogates were acquired at 80 and 140 kVp. Samples (RED=0.19−2.41) were imaged in a water-equivalent phantom in a variety of configurations. REDs were calculated using the DECT numbers and inputs of the high and low energy spectral weightings. DECT-derived RED was compared between geometric configurations and for variations in the spectral inputs to assess the sensitivity of RED accuracy versus expected values. Results: RED accuracy was dependent on accurate spectral input influenced by phantom thickness and radius from the phantom center. Material samples located at the center of the phantom generally showed the best agreement to reference RED values, but only when attenuation of the surrounding phantom thickness was accounted for in the calculation spectra. Calculated RED changed by up to 10% for some materials when the sample was located at an 11 cm radius from the phantom center. Calculated REDs under the best conditions still differed from reference values by up to 5% in bone and 14% in lung. Conclusion: DECT has previously been used to differentiate tissue types based on RED and Z for binary tissue-type segmentation. To improve upon the current standard of empirical conversion of CT number to RED for treatment planning dose calculation, DECT methods must be able to calculate RED to better than 3% accuracy throughout the image. The DECT method is sensitive to the accuracy of spectral inputs used for calculation, as well as to spatial position in the anatomy. Effort to address adjustments to the spectral calculation inputs based on position and phantom attenuation will be required before DECT-determined RED can achieve a consistent level of accuracy for application in dose calculation.
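    One way DECT numbers are commonly mapped to relative electron density in the literature is a single weighted subtraction of the low- and high-energy CT numbers (in the style of Saito's ΔHU method); the sketch below uses placeholder calibration constants, not values derived from this abstract:

```python
def red_from_dect(hu_low, hu_high, alpha=0.6, a=1.0, b=1.0):
    """Relative electron density (RED) from a dual-energy weighted subtraction.

    delta_hu = (1 + alpha)*hu_high - alpha*hu_low collapses the two
    CT numbers into one quantity that maps ~linearly to RED.
    alpha, a, b are calibration constants fitted to phantom scans;
    the defaults here are hypothetical placeholders, not validated values.
    """
    delta_hu = (1.0 + alpha) * hu_high - alpha * hu_low
    return a * delta_hu / 1000.0 + b

# Water (HU = 0 at both energies) maps to RED = 1 for any calibration
print(red_from_dect(0.0, 0.0))  # 1.0
```

    The abstract's point is that the spectral inputs (and hence the effective calibration) vary with phantom thickness and radial position, which this fixed-coefficient form cannot capture by itself.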

  11. Evaluation of the contribution of LiDAR data and postclassification procedures to object-based classification accuracy

    NASA Astrophysics Data System (ADS)

    Styers, Diane M.; Moskal, L. Monika; Richardson, Jeffrey J.; Halabisky, Meghan A.

    2014-01-01

    Object-based image analysis (OBIA) is becoming an increasingly common method for producing land use/land cover (LULC) classifications in urban areas. In order to produce the most accurate LULC map, LiDAR data and postclassification procedures are often employed, but their relative contributions to accuracy are unclear. We examined the contribution of LiDAR data and postclassification procedures to increase classification accuracies over using imagery alone and assessed sources of error along an ecologically complex urban-to-rural gradient in Olympia, Washington. Overall classification accuracy and user's and producer's accuracies for individual classes were evaluated. The addition of LiDAR data to the OBIA classification resulted in an 8.34% increase in overall accuracy, while manual postclassification to the imagery+LiDAR classification improved accuracy only an additional 1%. Sources of error in this classification were largely due to edge effects, from which multiple different types of errors result.
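    The overall, user's, and producer's accuracies reported for such classifications all derive from a single confusion (error) matrix; a minimal sketch with a hypothetical 3-class matrix:

```python
def classification_accuracies(cm):
    """Overall, user's (rows = classified), and producer's (cols = reference) accuracy."""
    n = sum(sum(row) for row in cm)
    k = len(cm)
    overall = sum(cm[i][i] for i in range(k)) / n
    users = [cm[i][i] / sum(cm[i]) for i in range(k)]             # commission errors
    producers = [cm[i][i] / sum(r[i] for r in cm) for i in range(k)]  # omission errors
    return overall, users, producers

# Hypothetical 3-class error matrix (rows = classified, cols = reference)
cm = [[45, 3, 2],
      [4, 40, 6],
      [1, 7, 42]]
overall, users, producers = classification_accuracies(cm)
print(round(overall, 3))  # 0.847
```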

  12. Affective and cognitive correlates of PTSD: Electrocortical processing of threat and perseverative errors on the WCST in combat-related PTSD.

    PubMed

    DiGangi, Julia A; Kujawa, Autumn; Aase, Darrin M; Babione, Joseph M; Schroth, Christopher; Levy, David M; Kennedy, Amy E; Greenstein, Justin E; Proescher, Eric; Walters, Robert; Passi, Holly; Langenecker, Scott A; Phan, K Luan

    2017-04-03

    PTSD is characterized by both affective and cognitive dysfunction. Affectively, PTSD is associated with both heightened emotional reactivity and disengagement. Cognitively, perseverative thinking is a core feature of the disorder. In order to assess the interactive effects of affective and cognitive correlates of PTSD symptoms, 47 OEF/OIF/OND veterans completed an emotional faces matching task while EEG (i.e., late positive potential; LPP) was recorded, and separately completed the Wisconsin Card Sorting Test (WCST) to assess perseverative errors. There was no relationship between PTSD symptoms and either perseverative errors or EEG reactivity to faces. However, an interaction was found such that high perseverative errors on the WCST and a relatively enhanced LPP to angry faces was associated with greater PTSD symptoms, while low errors on the WCST and a relatively blunted LPP to angry faces also related to greater PTSD symptoms. These findings suggest that emotion-cognition interactions are important for understanding PTSD, and that distinct emotion-cognition constellations interact with symptoms.

  13. Skylab water balance error analysis

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1977-01-01

    Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and little can be gained from improvement in analytical accuracy. In addition, a propagation of error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.
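    The propagation-of-error argument above, where component variances combine with covariance (interaction) terms, can be sketched with hypothetical numbers in which the body-mass term dominates and covariances contribute well under 10%:

```python
import math

def balance_variance(variances, covariances=()):
    """Variance of a sum/difference balance: sum of term variances
    plus twice each covariance term (signs supplied by the caller)."""
    return sum(variances) + 2.0 * sum(covariances)

# Hypothetical daily water-balance term variances (g^2): intake, urine,
# fecal, evaporative, body-mass change. Values are illustrative only.
var_terms = [25.0, 16.0, 4.0, 9.0, 400.0]  # body-mass term dominates
cov_terms = [3.0, -1.0]                    # small interaction (covariance) terms

total_var = balance_variance(var_terms, cov_terms)
print(round(math.sqrt(total_var), 2))  # 21.4  (combined 1-sigma error, g)
```

    With these numbers the body-mass variance is ~87% of the total and the covariance contribution under 1%, mirroring the abstract's finding that interaction terms are minor.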

  14. Geolocation error tracking of ZY-3 three line cameras

    NASA Astrophysics Data System (ADS)

    Pan, Hongbo

    2017-01-01

    The high-accuracy geolocation of high-resolution satellite images (HRSIs) is a key issue for mapping and integrating multi-temporal, multi-sensor images. In this manuscript, we propose a new geometric frame for analysing the geometric error of a stereo HRSI, in which the geolocation error can be divided into three parts: the epipolar direction, cross base direction, and height direction. With this frame, we proved that the height error of three line cameras (TLCs) is independent of nadir images, and that the terrain effect has a limited impact on the geolocation errors. For ZY-3 error sources, the drift error in both the pitch and roll angle and its influence on the geolocation accuracy are analysed. Epipolar and common tie-point constraints are proposed to study the bundle adjustment of HRSIs. Epipolar constraints explain that the relative orientation can reduce the number of compensation parameters in the cross base direction and have a limited impact on the height accuracy. The common tie points adjust the pitch-angle errors to be consistent with each other for TLCs. Therefore, free-net bundle adjustment of a single strip cannot significantly improve the geolocation accuracy. Furthermore, the epipolar and common tie-point constraints cause the error to propagate into the adjacent strip when multiple strips are involved in the bundle adjustment, which results in the same attitude uncertainty throughout the whole block. Two adjacent strips (Orbit 305 and Orbit 381, covering 7 and 12 standard scenes, respectively) and 308 ground control points (GCPs) were used for the experiments. The experiments validate the aforementioned theory. The planimetric and height root mean square errors were 2.09 and 1.28 m, respectively, when two GCPs were placed at the beginning and end of the block.

  15. Determination of GPS orbits to submeter accuracy

    NASA Technical Reports Server (NTRS)

    Bertiger, W. I.; Lichten, S. M.; Katsigris, E. C.

    1988-01-01

    Orbits for satellites of the Global Positioning System (GPS) were determined with submeter accuracy. Tests used to assess orbital accuracy include orbit comparisons from independent data sets, orbit prediction, ground baseline determination, and formal errors. One satellite tracked 8 hours each day shows rms error below 1 m even when predicted more than 3 days outside of a 1-week data arc. Differential tracking of the GPS satellites in high Earth orbit provides a powerful relative positioning capability, even when a relatively small continental U.S. fiducial tracking network is used with less than one-third of the full GPS constellation. To demonstrate this capability, baselines of up to 2000 km in North America were also determined with the GPS orbits. The 2000 km baselines show rms daily repeatability of 0.3 to 2 parts in 10 to the 8th power and agree with very long baseline interferometry (VLBI) solutions at the level of 1.5 parts in 10 to the 8th power. This GPS demonstration provides an opportunity to test different techniques for high-accuracy orbit determination for high Earth orbiters. The best GPS orbit strategies included data arcs of at least 1 week, process noise models for tropospheric fluctuations, estimation of GPS solar pressure coefficients, and combined processing of GPS carrier phase and pseudorange data. For data arcs of 2 weeks, constrained process noise models for GPS dynamic parameters significantly improved the solutions.

  16. A SEASAT SASS simulation experiment to quantify the errors related to a + or - 3 hour intermittent assimilation technique

    NASA Technical Reports Server (NTRS)

    Sylvester, W. B.

    1984-01-01

    A series of SEASAT repeat orbits over a sequence of best low center positions is simulated by using the Seatrak satellite calculator. These low centers are, upon appropriate interpolation to hourly positions, located at various times during the + or - 3 hour assimilation cycle. Error analysis for a sample of best cyclone center positions taken from the Atlantic and Pacific oceans reveals a minimum average error of 1.1 deg of longitude and a standard deviation of 0.9 deg of longitude. The magnitude of the average error seems to suggest that by utilizing the + or - 3 hour window in the assimilation cycle, the quality of the SASS data is degraded to the level of the background. A further consequence of this assimilation scheme is the effect which is manifested as a result of the blending of two or more juxtaposed vector winds, generally possessing different properties (vector quantity and time). The outcome of this is to reduce gradients in the wind field and to deform isobaric and frontal patterns of the initial field.

  17. Inertial Measures of Motion for Clinical Biomechanics: Comparative Assessment of Accuracy under Controlled Conditions – Changes in Accuracy over Time

    PubMed Central

    Lebel, Karina; Boissy, Patrick; Hamel, Mathieu; Duval, Christian

    2015-01-01

    Background Interest in 3D inertial motion tracking devices (AHRS) has been growing rapidly among the biomechanical community. Although the convenience of such tracking devices seems to open a whole new world of possibilities for evaluation in clinical biomechanics, their limitations have not been extensively documented. The objectives of this study are: 1) to assess the change in absolute and relative accuracy of multiple units of 3 commercially available AHRS over time; and 2) to identify different sources of errors affecting AHRS accuracy and to document how they may affect the measurements over time. Methods This study used an instrumented Gimbal table on which AHRS modules were carefully attached and put through a series of velocity-controlled sustained motions including 2 minutes motion trials (2MT) and 12 minutes multiple dynamic phases motion trials (12MDP). Absolute accuracy was assessed by comparison of the AHRS orientation measurements to those of an optical gold standard. Relative accuracy was evaluated using the variation in relative orientation between modules during the trials. Findings Both absolute and relative accuracy decreased over time during 2MT. 12MDP trials showed a significant decrease in accuracy over multiple phases, but accuracy could be enhanced significantly by resetting the reference point and/or compensating for initial Inertial frame estimation reference for each phase. Interpretation The variation in AHRS accuracy observed between the different systems and with time can be attributed in part to the dynamic estimation error, but also and foremost, to the ability of AHRS units to locate the same Inertial frame. Conclusions Mean accuracies obtained under the Gimbal table sustained conditions of motion suggest that AHRS are promising tools for clinical mobility assessment under constrained conditions of use. However, improvement in magnetic compensation and alignment between AHRS modules are desirable in order for AHRS to reach their
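    The relative-accuracy metric described above, the variation in relative orientation between modules, can be computed from the two unit quaternions the modules report; a minimal sketch, assuming the Hamilton quaternion convention:

```python
import math

def quat_conj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def quat_mul(p, q):
    # Hamilton product of two quaternions (w, x, y, z)
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def relative_angle_deg(q1, q2):
    """Angle of the rotation taking orientation q1 to q2 (unit quaternions)."""
    w = abs(quat_mul(quat_conj(q1), q2)[0])
    return math.degrees(2.0 * math.acos(min(1.0, w)))

# Two modules reporting orientations 5 degrees apart about the z axis:
half = math.radians(2.5)
q_a = (1.0, 0.0, 0.0, 0.0)
q_b = (math.cos(half), 0.0, 0.0, math.sin(half))
print(round(relative_angle_deg(q_a, q_b), 1))  # 5.0
```

    Tracking this angle over a trial gives the drift in relative orientation between modules, independent of any external reference.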

  18. Prospective Relations among Fearful Temperament, Protective Parenting, and Social Withdrawal: The Role of Maternal Accuracy in a Moderated Mediation Framework

    ERIC Educational Resources Information Center

    Kiel, Elizabeth J.; Buss, Kristin A.

    2011-01-01

    Early social withdrawal and protective parenting predict a host of negative outcomes, warranting examination of their development. Mothers' accurate anticipation of their toddlers' fearfulness may facilitate transactional relations between toddler fearful temperament and protective parenting, leading to these outcomes. Currently, we followed 93…

  19. Medial Prefrontal Functional Connectivity--Relation to Memory Self-Appraisal Accuracy in Older Adults with and without Memory Disorders

    ERIC Educational Resources Information Center

    Ries, Michele L.; McLaren, Donald G.; Bendlin, Barbara B.; Xu, Guofan; Rowley, Howard A.; Birn, Rasmus; Kastman, Erik K.; Sager, Mark A.; Asthana, Sanjay; Johnson, Sterling C.

    2012-01-01

    It is tentatively estimated that 25% of people with early Alzheimer's disease (AD) show impaired awareness of disease-related changes in their own cognition. Research examining both normative self-awareness and altered awareness resulting from brain disease or injury points to the central role of the medial prefrontal cortex (MPFC) in generating…

  20. Imagery of Errors in Typing

    ERIC Educational Resources Information Center

    Rieger, Martina; Martinez, Fanny; Wenke, Dorit

    2011-01-01

    Using a typing task we investigated whether insufficient imagination of errors and error corrections is related to duration differences between execution and imagination. In Experiment 1 spontaneous error imagination was investigated, whereas in Experiment 2 participants were specifically instructed to imagine errors. Further, in Experiment 2 we…

  1. Medication Errors

    MedlinePlus


  2. Error Analysis

    NASA Astrophysics Data System (ADS)

    Scherer, Philipp O. J.

    Input data as well as the results of elementary operations have to be represented by machine numbers, the subset of real numbers which is used by the arithmetic unit of today's computers. Generally this generates rounding errors. This kind of numerical error can be avoided in principle by using arbitrary precision arithmetic or symbolic algebra programs. But this is impractical in many cases due to the increase in computing time and memory requirements. Results from more complex operations like square roots or trigonometric functions can have even larger errors since series expansions have to be truncated and iterations accumulate the errors of the individual steps. In addition, the precision of input data from an experiment is limited. In this chapter we study the influence of numerical errors on the uncertainties of the calculated results and the stability of simple algorithms.
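    Two of the effects described, rounding of machine numbers and error accumulation when subtracting nearly equal values, can be demonstrated directly:

```python
import math

# Rounding: 0.1 and 0.2 have no exact binary (machine-number) representation
print(0.1 + 0.2 == 0.3)                # False
print(abs((0.1 + 0.2) - 0.3) < 1e-15)  # True: the error is ~5.6e-17

# Catastrophic cancellation: evaluating (1 - cos x)/x^2 for tiny x.
# cos(1e-8) rounds to exactly 1.0, so the naive form loses all information,
# while an algebraically identical rearrangement stays near the true value 0.5.
x = 1e-8
naive = (1.0 - math.cos(x)) / x**2
stable = 0.5 * (math.sin(x / 2) / (x / 2))**2
print(naive, stable)  # 0.0 vs ~0.5
```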

  3. An Implicit Measure of Associations with Mental Illness versus Physical Illness: Response Latency Decomposition and Stimuli Differential Functioning in Relation to IAT Order of Associative Conditions and Accuracy

    PubMed Central

    Mannarini, Stefania; Boffo, Marilisa

    2014-01-01

    The present study aimed at the definition of a latent measurement dimension underlying an implicit measure of automatic associations between the concept of mental illness and the psychosocial and biogenetic causal explanatory attributes. To this end, an Implicit Association Test (IAT) assessing the association between the Mental Illness and Physical Illness target categories to the Psychological and Biologic attribute categories, representative of the causal explanation domains, was developed. The IAT presented 22 stimuli (words and pictures) to be categorized into the four categories. After 360 university students completed the IAT, a Many-Facet Rasch Measurement (MFRM) modelling approach was applied. The model specified a person latency parameter and a stimulus latency parameter. Two additional parameters were introduced to denote the order of presentation of the task associative conditions and the general response accuracy. Beyond the overall definition of the latent measurement dimension, the MFRM was also applied to disentangle the effect of the task block order and the general response accuracy on the stimuli response latency. Further, the MFRM allowed detecting any differential functioning of each stimulus in relation to both block ordering and accuracy. The results evidenced: a) the existence of a latency measurement dimension underlying the Mental Illness versus Physical Illness - Implicit Association Test; b) significant effects of block order and accuracy on the overall latency; c) a differential functioning of specific stimuli. The results of the present study can contribute to a better understanding of the functioning of an implicit measure of semantic associations with mental illness and give a first blueprint for the examination of relevant issues in the development of an IAT. PMID:25000406

  4. SU-E-J-147: Monte Carlo Study of the Precision and Accuracy of Proton CT Reconstructed Relative Stopping Power Maps

    SciTech Connect

    Dedes, G; Asano, Y; Parodi, K; Arbor, N; Dauvergne, D; Testa, E; Letang, J; Rit, S

    2015-06-15

    Purpose: The quantification of the intrinsic performances of proton computed tomography (pCT) as a modality for treatment planning in proton therapy. The performance of an ideal pCT scanner is studied as a function of various parameters. Methods: Using GATE/Geant4, we simulated an ideal pCT scanner and scans of several cylindrical phantoms with various tissue equivalent inserts of different sizes. Insert materials were selected in order to be of clinical relevance. Tomographic images were reconstructed using a filtered backprojection algorithm taking into account the scattering of protons into the phantom. To quantify the performance of the ideal pCT scanner, we study the precision and the accuracy with respect to the theoretical relative stopping power ratios (RSP) values for different beam energies, imaging doses, insert sizes and detector positions. The planning range uncertainty resulting from the reconstructed RSP is also assessed by comparison with the range of the protons in the analytically simulated phantoms. Results: The results indicate that pCT can intrinsically achieve RSP resolution below 1%, for most examined tissues at beam energies below 300 MeV and for imaging doses around 1 mGy. RSP map accuracy of better than 0.5% is observed for most tissue types within the studied dose range (0.2–1.5 mGy). Finally, the uncertainty in the proton range due to the accuracy of the reconstructed RSP map is well below 1%. Conclusion: This work explores the intrinsic performance of pCT as an imaging modality for proton treatment planning. The obtained results show that under ideal conditions, 3D RSP maps can be reconstructed with an accuracy better than 1%. Hence, pCT is a promising candidate for reducing the range uncertainties introduced by the use of X-ray CT alongside with a semiempirical calibration to RSP. Supported by the DFG Cluster of Excellence Munich-Centre for Advanced Photonics (MAP)

  5. Rapid mapping of volumetric machine errors using distance measurements

    SciTech Connect

    Krulewich, D.A.

    1998-04-01

    This paper describes a relatively inexpensive, fast, and easy-to-execute approach to mapping the volumetric errors of a machine tool, coordinate measuring machine, or robot. An error map is used to characterize a machine or to improve its accuracy by compensating for the systematic errors. The method consists of three steps: (1) modeling the relationship between volumetric error and the current state of the machine; (2) acquiring error data based on distance measurements throughout the work volume; and (3) fitting the error model using the nonlinear equation for the distance. The error model is formulated from the kinematic relationship among the six degrees of freedom of error on each moving axis. Expressing each parametric error as a function of position, the errors are combined to predict the error between the functional point and workpiece, also as a function of position. A series of distances between several fixed base locations and various functional points in the work volume is measured using a Laser Ball Bar (LBB). Each measured distance is a non-linear function dependent on the commanded location of the machine, the machine error, and the location of the base locations. Using the error model, the non-linear equation is solved, producing a fit for the error model. Also note that, given approximate distances between each pair of base locations, the exact base locations in the machine coordinate system are determined during the non-linear fitting procedure. Furthermore, with the use of more than three base locations, bias error in the measuring instrument can be removed. The volumetric errors of a three-axis commercial machining center have been mapped using this procedure. In this study, only errors associated with the nominal position of the machine were considered. Other errors such as thermally induced and load-induced errors were not considered, although the mathematical model has the ability to account for these errors.

  6. TU-C-BRE-07: Quantifying the Clinical Impact of VMAT Delivery Errors Relative to Prior Patients’ Plans and Adjusted for Anatomical Differences

    SciTech Connect

    Stanhope, C; Wu, Q; Yuan, L; Liu, J; Hood, R; Yin, F; Adamson, J

    2014-06-15

    -arc VMAT plans for low-risk prostate are relatively insensitive to many potential delivery errors.

  7. Estimation of an unexpected-overlooking error by means of the single eye fixation related potential analysis with wavelet transform filter.

    PubMed

    Matsuo, N; Ohkita, Y; Tomita, Y; Honda, S; Matsunaga, K

    2001-04-01

    An unexpected-overlooking error, which causes a failure to notice objects near the peripheral vision, is one of the accident factors in driving behavior. We estimated how the unexpected-overlooking error affected the amplitude of the lambda wave in the eye fixation related potential (EFRP). Four subjects participated in the experiment. Each subject was required to press the right or left switch according to the given task: he/she pressed the right switch when the blue dot appeared in the right detection area, or the left switch when the red dot appeared in the right. The single-trial data from Pz, referenced to both earlobes, were analyzed by means of a wavelet transform (WT) filter. The difference in lambda amplitude between the corrected data was subjected to analysis of variance. Three subjects showed a significant effect (P<0.01 or P<0.05); the remaining subject did not reach significance, having committed only two errors. The unexpected-overlooking errors had a low amplitude compared with the mean amplitude throughout the task. It was concluded that the amplitude of the lambda wave might reflect the attention level of a subject.

  8. Neural response to errors in combat-exposed returning veterans with and without post-traumatic stress disorder: a preliminary event-related potential study.

    PubMed

    Rabinak, Christine A; Holman, Alexis; Angstadt, Mike; Kennedy, Amy E; Hajcak, Greg; Phan, Kinh Luan

    2013-07-30

    Post-traumatic stress disorder (PTSD) is characterized by sustained anxiety, hypervigilance for potential threat, and hyperarousal. These symptoms may enhance self-perception of one's actions, particularly the detection of errors, which may threaten safety. The error-related negativity (ERN) is an electrocortical response to the commission of errors, and previous studies have shown that other anxiety disorders associated with exaggerated anxiety and enhanced action monitoring exhibit an enhanced ERN. However, little is known about how traumatic experience and PTSD would affect the ERN. To address this gap, we measured the ERN in returning Operation Enduring Freedom/Operation Iraqi Freedom (OEF/OIF) veterans with combat-related PTSD (PTSD group), combat-exposed OEF/OIF veterans without PTSD [combat-exposed control (CEC) group], and non-traumatized healthy participants [healthy control (HC) group]. Event-related potential and behavioral measures were recorded while 16 PTSD patients, 18 CEC, and 16 HC participants completed an arrow version of the flanker task. No difference in the magnitude of the ERN was observed between the PTSD and HC groups; however, in comparison with the PTSD and HC groups, the CEC group displayed a blunted ERN response. These findings suggest that (1) combat trauma itself does not affect the ERN response; (2) PTSD is not associated with an abnormal ERN response; and (3) an attenuated ERN in those previously exposed to combat trauma but who have not developed PTSD may reflect resilience to the disorder, less motivation to do the task, or a decrease in the significance or meaningfulness of 'errors,' which could be related to combat experience.

  9. Wind measurement accuracy for the NASA scatterometer

    NASA Astrophysics Data System (ADS)

    Long, David G.; Oliphant, Travis

    1997-09-01

    The NASA Scatterometer (NSCAT) is designed to make measurements of the normalized radar backscatter coefficient ((sigma)o) of the ocean's surface. The measured (sigma)o is a function of the viewing geometry and the surface roughness due to wind-generated waves. By making multiple measurements of the same location from different azimuth angles it is possible to retrieve the near-surface wind speed and direction with the aid of a Geophysical Model Function (GMF) which relates wind and (sigma)o. The wind is estimated from the noisy (sigma)o measurements using maximum likelihood techniques. The probability density of the measured (sigma)o is assumed to be Gaussian, with a variance that depends on the true (sigma)o, and therefore on the wind through the GMF; the measurements from different azimuth angles are assumed independent in estimating the wind. In order to estimate the accuracy of the retrieved wind, we derive the Cramer-Rao (CR) bound for wind estimation from scatterometer measurements. We show that the CR bound can be used as an error bar on the estimated wind. The role of geophysical modeling error in the GMF is considered and shown to play a significant role in the wind accuracy. Estimates of the accuracy of NSCAT measurements are given, along with those for other scatterometer geometries and types.
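The idea of using the Cramer-Rao bound as an error bar can be sketched for a simplified scalar case: K independent Gaussian measurements whose means depend on one parameter. The sensitivities and noise variance below are made-up numbers, not NSCAT's GMF.

```python
import numpy as np

# Simplified scalar sketch (not NSCAT's GMF): the Cramer-Rao bound for a
# parameter theta observed through K independent Gaussian measurements
# y_k ~ N(m_k(theta), var) is 1 / I, with Fisher information
# I = sum_k (dm_k/dtheta)^2 / var.

def crb(sensitivities, var):
    fisher = np.sum(np.asarray(sensitivities) ** 2) / var
    return 1.0 / fisher

# Assumed sensitivities of sigma0 to wind speed for four azimuth looks
sens = [0.04, 0.03, 0.05, 0.02]
bound = crb(sens, var=1e-4)                 # variance bound on the estimate
print(round(float(np.sqrt(bound)), 3))      # std-dev "error bar" on the wind
```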

  10. The hidden KPI: registration accuracy.

    PubMed

    Shorrosh, Paul

    2011-09-01

    Determining the registration accuracy rate is fundamental to improving revenue cycle key performance indicators. A registration quality assurance (QA) process allows errors to be corrected before bills are sent and helps registrars learn from their mistakes. Tools are available to help patient access staff who perform registration QA manually.

  11. ANTERIOR CHAMBER DEPTH, LENS THICKNESS, AND RELATED MEASURES IN AFRICAN-AMERICAN FEMALES WITH LONG ANTERIOR ZONULES: A MATCHED STUDY WITH CONTROL FOR REFRACTIVE ERROR

    PubMed Central

    Roberts, Daniel K.; Teitelbaum, Bruce A.; Castells, David D.; Winters, Janis E.; Wilensky, Jacob T.

    2014-01-01

    Purpose To investigate anterior chamber depth (ACD), lens thickness (LT), vitreous body length (VBL), and axial length (AL) in African-American females with long anterior zonules (LAZ) while controlling for refractive error. Methods The eyes of 50 African-American females with LAZ were compared to 50 controls matched on age, race, sex, and refractive error. Central ACD, LT, VBL, and AL measurements were obtained in a masked fashion using a-scan ultrasonography. Results LAZ cases had a mean age ± SD (range) = 67.1 ± 7.6 years (52–85 years) and a mean refractive error = +1.85 ± 1.41D (−1.75 to +4.75D). Parameters were similar for controls. Mean ACD for cases was 2.45 ± 0.34 mm and 2.57 ± 0.38 mm for controls. Mean LT for cases was 4.94 ± 0.43 mm and 4.83 ± 0.45 mm for controls. Mean VBL for cases was 15.00 ± 0.72 mm and 15.17 ± 0.76 mm for controls. Mean AL for cases was 22.39 ± 0.82 mm and 22.57 ± 0.76 mm for controls. Using multiple logistic regression to control for any residual differences in age and refractive error, no significant differences were present between LAZ eyes and control eyes relative to the a-scan variables (P>0.1). Conclusions When refractive error was controlled for, this group of African-American females with LAZ did not exhibit clinically significant differences in ACD, LT, VBL, and AL as compared to controls. PMID:25093521

  12. Pre-Departure Clearance (PDC): An Analysis of Aviation Safety Reporting System Reports Concerning PDC Related Errors

    NASA Technical Reports Server (NTRS)

    Montalyo, Michael L.; Lebacqz, J. Victor (Technical Monitor)

    1994-01-01

    Airlines operating in the United States are required to operate under instrument flight rules (IFR). Typically, a clearance is issued via voice transmission from clearance delivery at the departing airport. In 1990, the Federal Aviation Administration (FAA) began deployment of the Pre-Departure Clearance (PDC) system at 30 U.S. airports. The PDC system utilizes aeronautical datalink and Aircraft Communication and Reporting System (ACARS) to transmit departure clearances directly to the pilot. An objective of the PDC system is to provide an immediate reduction in voice congestion over the clearance delivery frequency. Participating airports report that this objective has been met. However, preliminary analysis of 42 Aviation Safety Reporting System (ASRS) reports has revealed problems in PDC procedures and formatting which have caused errors in the proper execution of the clearance. It must be acknowledged that this technology, along with other advancements on the flightdeck, is adding more responsibility to the crew and increasing the opportunity for error. The present study uses these findings as a basis for further coding and analysis of an additional 82 reports obtained from an ASRS database search. These reports indicate that clearances are often amended or exceptions are added in order to accommodate local ATC facilities. However, the onboard ACARS is limited in its ability to emphasize or highlight these changes, which has resulted in altitude and heading deviations along with increases in ATC workload. Furthermore, few participating airports require any type of PDC receipt confirmation. In fact, 35% of all ASRS reports dealing with PDCs include failure to acquire the PDC at all. Consequently, this study examines pilots' suggestions contained in ASRS reports in order to develop recommendations to airlines and ATC facilities to help reduce the number of incidents that occur.

  13. Combination of TOPEX/POSEIDON Data with a Hydrographic Inversion for Determination of the Oceanic General Circulation and its Relation to Geoid Accuracy

    NASA Technical Reports Server (NTRS)

    Ganachaud, Alexandre; Wunsch, Carl; Kim, Myung-Chan; Tapley, Byron

    1997-01-01

    A global estimate of the absolute oceanic general circulation from a geostrophic inversion of in situ hydrographic data is tested against and then combined with an estimate obtained from TOPEX/POSEIDON altimetric data and a geoid model computed using the JGM-3 gravity-field solution. Within the quantitative uncertainties of both the hydrographic inversion and the geoid estimate, the two estimates derived by very different methods are consistent. When the in situ inversion is combined with the altimetry/geoid scheme using a recursive inverse procedure, a new solution, fully consistent with both hydrography and altimetry, is found. There is, however, little reduction in the uncertainties of the calculated ocean circulation and its mass and heat fluxes because the best available geoid estimate remains noisy relative to the purely oceanographic inferences. The conclusion drawn from this is that the comparatively large errors present in the existing geoid models now limit the ability of satellite altimeter data to improve directly the general ocean circulation models derived from in situ measurements. Because improvements in the geoid could be realized through a dedicated spaceborne gravity recovery mission, the impact of hypothetical much better, future geoid estimates on the circulation uncertainty is also quantified, showing significant hypothetical reductions in the uncertainties of oceanic transport calculations. Full ocean general circulation models could better exploit both existing oceanographic data and future gravity-mission data, but their present use is severely limited by the inability to quantify their error budgets.

  14. Radiative flux and forcing parameterization error in aerosol-free clear skies

    SciTech Connect

    Pincus, Robert; Oreopoulos, Lazaros; Ackerman, Andrew S.; Baek, Sunghye; Brath, Manfred; Buehler, Stefan A.; Cady-Pereira, Karen E.; Cole, Jason N. S.; Dufresne, Jean -Louis; Kelley, Maxwell; Li, Jiangnan; Manners, James; Paynter, David J.; Roehrig, Romain; Sekiguchi, Miho; Schwarzkopf, Daniel M.

    2015-07-03

    This article reports on the accuracy in aerosol- and cloud-free conditions of the radiation parameterizations used in climate models. Accuracy is assessed relative to observationally validated reference models for fluxes under present-day conditions and forcing (flux changes) from quadrupled concentrations of carbon dioxide. Agreement among reference models is typically within 1 W/m2, while parameterized calculations are roughly half as accurate in the longwave and even less accurate, and more variable, in the shortwave. Absorption of shortwave radiation is underestimated by most parameterizations in the present day and has relatively large errors in forcing. Error in present-day conditions is essentially unrelated to error in forcing calculations. Recent revisions to parameterizations have reduced error in most cases. As a result, a dependence on atmospheric conditions, including integrated water vapor, means that global estimates of parameterization error relevant for the radiative forcing of climate change will require much more ambitious calculations.

  15. Reverse-polynomial dilution calibration methodology extends lower limit of quantification and reduces relative residual error in targeted peptide measurements in blood plasma.

    PubMed

    Yau, Yunki Y; Duo, Xizi; Leong, Rupert W L; Wasinger, Valerie C

    2015-02-01

    Matrix effect is the alteration of an analyte's concentration-signal response caused by co-existing ion components. With electrospray ionization (ESI), matrix effects are believed to be a function of the relative concentrations, ionization efficiency, and solvation energies of the analytes within the electrospray ionization droplet. For biological matrices such as plasma, the interactions between droplet components are immensely complex and the effect on analyte signal response not well elucidated. This study comprised three sequential quantitative analyses: we investigated whether there is a generalizable correlation between the range of unique ions in a sample matrix (complexity); the amount of matrix components (concentration); and matrix effect, by comparing an E. coli digest matrix (∼2600 protein proteome) with phospholipid depleted human blood plasma, and unfractionated, nondepleted human plasma matrices (∼10(7) proteome) for six human plasma peptide multiple reaction monitoring assays. Our data set demonstrated analyte-specific interactions with matrix complexity and concentration properties resulting in significant ion suppression for all peptides (p < 0.01), with nonuniform effects on the ion signals of the analytes and their stable-isotope analogs. These matrix effects were then assessed for translation into relative residual error and precision effects in a low concentration (∼0-250 ng/ml) range across no-matrix, complex matrix, and highly complex matrix, when a standard addition stable isotope dilution calibration method was used. Relative residual error (%) and precision (CV%) by stable isotope dilution were within <20%; however, error in phospholipid-depleted and nondepleted plasma matrices was significantly higher compared with no-matrix (p = 0.006). Finally a novel reverse-polynomial dilution calibration method with and without phospholipid-depletion was compared with stable isotope dilution for relative residual error and precision
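The two figures of merit compared in the study, relative residual error (%) and precision (CV%), can be computed from replicate measurements as in this minimal sketch (the replicate values are hypothetical):

```python
import numpy as np

# Minimal sketch of the two figures of merit (replicate values hypothetical).
def relative_residual_error(measured, nominal):
    """Relative residual error (%) of the mean versus the nominal value."""
    return 100.0 * abs(np.mean(measured) - nominal) / nominal

def cv_percent(measured):
    """Precision as coefficient of variation (%)."""
    return 100.0 * np.std(measured, ddof=1) / np.mean(measured)

reps = [98.0, 102.0, 101.0, 99.0]   # replicate readings at 100 ng/ml nominal
print(relative_residual_error(reps, 100.0), round(cv_percent(reps), 2))
```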

  16. Slope Error Measurement Tool for Solar Parabolic Trough Collectors: Preprint

    SciTech Connect

    Stynes, J. K.; Ihas, B.

    2012-04-01

    The National Renewable Energy Laboratory (NREL) has developed an optical measurement tool for parabolic solar collectors that measures the combined errors due to absorber misalignment and reflector slope error. The combined absorber alignment and reflector slope errors are measured using a digital camera to photograph the reflected image of the absorber in the collector. Previous work using the image of the reflection of the absorber finds the reflector slope errors from the reflection of the absorber and an independent measurement of the absorber location. The accuracy of the reflector slope error measurement is thus dependent on the accuracy of the absorber location measurement. By measuring the combined reflector-absorber errors, the uncertainty in the absorber location measurement is eliminated. The related performance merit, the intercept factor, depends on the combined effects of the absorber alignment and reflector slope errors. Measuring the combined effect provides a simpler measurement and a more accurate input to the intercept factor estimate. The minimal equipment and setup required for this measurement technique make it ideal for field measurements.

  17. Accommodation: The role of the external muscles of the eye: A consideration of refractive errors in relation to extraocular malfunction.

    PubMed

    Hargrave, B K

    2014-11-01

    Speculation as to optical malfunction has led to dissatisfaction with the theory that the lens is the sole agent in accommodation and to the suggestion that other parts of the eye are also conjointly involved. Around half a century ago, Robert Brooks Simpkins suggested that the mechanical features of the human eye were precisely such as to allow for a lengthening of the globe when the eye accommodated. Simpkins was not an optical man but his theory is both imaginative and comprehensive and deserves consideration. It is submitted here that accommodation is in fact a twofold process which, although involving the lens, is achieved primarily by means of a give-and-take interplay between adducting and abducting external muscles, whereby an elongation of the eyeball is brought about by a stretching of the delicate elastic fibres immediately behind the cornea. The three muscles responsible for convergence (superior, internal and inferior recti) all pull from in front backwards, while of the three abductors (external rectus and the two obliques) the obliques pull from behind forwards, allowing for an easy elongation as the eye turns inwards and a return to its original length as the abducting muscles regain their former tension, returning the eye to distance vision. In refractive errors, the altered length of the eyeball disturbs the harmonious give-and-take relationship between adductors and abductors. Such stresses are likely to be perpetuated and the error exacerbated. Speculation is not directed towards a search for a possible cause of the muscular imbalance, since none is suspected. Muscles not used rapidly lose tone, as evidenced after removal of a limb from plaster. Early attention to the need for restorative exercise is essential and the results are usually impressive. If flexibility of the external muscles of the eyes is essential for continuing good sight, presbyopia can be avoided and with it the supposed necessity of glasses in middle life.

  18. Reading skill and neural processing accuracy improvement after a 3-hour intervention in preschoolers with difficulties in reading-related skills.

    PubMed

    Lovio, Riikka; Halttunen, Anu; Lyytinen, Heikki; Näätänen, Risto; Kujala, Teija

    2012-04-11

    This study aimed at determining whether an intervention game developed for strengthening phonological awareness has a remediating effect on reading skills and central auditory processing in 6-year-old preschool children with difficulties in reading-related skills. After only 3 hours of training, these children made greater progress in reading-related skills than did their matched controls, who did mathematical exercises in a comparable training format. Furthermore, the results suggest that this brief intervention might be beneficial in modulating the neural basis of phonetic discrimination, as an enhanced speech-elicited mismatch negativity (MMN) was seen in the intervention group, indicating improved cortical discrimination accuracy. Moreover, the amplitude increase of the vowel-elicited MMN significantly correlated with the improvement in some of the reading-skill related test scores. The results, albeit obtained with a relatively small sample, are encouraging, suggesting that reading-related skills can be improved even by a very short intervention and that the training effects are reflected in brain activity. However, studies with larger samples and different subgroups of children are needed to confirm the present results and to determine how children with different dyslexia subtypes benefit from the intervention.

  19. Accuracy evaluation of a new three-dimensional reproduction method of edentulous dental casts, and wax occlusion rims with jaw relation.

    PubMed

    Yuan, Fu-Song; Sun, Yu-Chun; Wang, Yong; Lü, Pei-Jun

    2013-09-01

    The article introduces a new method for three-dimensional reproduction of edentulous dental casts and wax occlusion rims with jaw relation, using a commercial high-speed line laser scanner and reverse engineering software, and evaluates the method's accuracy in vitro. The method comprises three main steps: (i) acquisition of the three-dimensional stereolithography data of the maxillary and mandibular edentulous dental casts and wax occlusion rims; (ii) acquisition of the three-dimensional stereolithography data of the jaw relations; and (iii) registration of these data with the reverse engineering software to complete the reconstruction. To evaluate the accuracy of this method, dental casts and wax occlusion rims of 10 edentulous patients were used. The lengths of eight lines between common anatomic landmarks were measured directly on the casts and occlusion rims with a vernier caliper, and on the three-dimensional computerized images with the software measurement tool. The direct data were considered the true values. The paired-samples t-test was used for statistical analysis. The mean differences between the direct and the computerized measurements were mostly less than 0.04 mm and were not significant (P>0.05). Variation among the 10 patients was assessed using one-way analysis of variance (significance level P<0.05); the result showed no statistically significant differences among the 10 patients. Therefore, accurate three-dimensional reproduction of the edentulous dental casts, wax occlusion rims, and jaw relations was achieved. The proposed method enables the visualization of occlusion from different views and would help to meet the demand for the computer-aided design of removable complete dentures.

  20. Accuracy evaluation of a new three-dimensional reproduction method of edentulous dental casts, and wax occlusion rims with jaw relation

    PubMed Central

    Yuan, Fu-Song; Sun, Yu-Chun; Wang, Yong; Lü, Pei-Jun

    2013-01-01

    The article introduces a new method for three-dimensional reproduction of edentulous dental casts and wax occlusion rims with jaw relation, using a commercial high-speed line laser scanner and reverse engineering software, and evaluates the method's accuracy in vitro. The method comprises three main steps: (i) acquisition of the three-dimensional stereolithography data of the maxillary and mandibular edentulous dental casts and wax occlusion rims; (ii) acquisition of the three-dimensional stereolithography data of the jaw relations; and (iii) registration of these data with the reverse engineering software to complete the reconstruction. To evaluate the accuracy of this method, dental casts and wax occlusion rims of 10 edentulous patients were used. The lengths of eight lines between common anatomic landmarks were measured directly on the casts and occlusion rims with a vernier caliper, and on the three-dimensional computerized images with the software measurement tool. The direct data were considered the true values. The paired-samples t-test was used for statistical analysis. The mean differences between the direct and the computerized measurements were mostly less than 0.04 mm and were not significant (P>0.05). Variation among the 10 patients was assessed using one-way analysis of variance (significance level P<0.05); the result showed no statistically significant differences among the 10 patients. Therefore, accurate three-dimensional reproduction of the edentulous dental casts, wax occlusion rims, and jaw relations was achieved. The proposed method enables the visualization of occlusion from different views and would help to meet the demand for the computer-aided design of removable complete dentures. PMID:23907676

  1. Measuring Diagnoses: ICD Code Accuracy

    PubMed Central

    O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M

    2005-01-01

    Objective To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. Data Sources/Study Setting The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. Study Design/Methods We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Principal Findings Main error sources along the “patient trajectory” include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the “paper trail” include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. Conclusions By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways. PMID:16178999

  2. The Quantitative Relationship Between ISO 15197 Accuracy Criteria and Mean Absolute Relative Difference (MARD) in the Evaluation of Analytical Performance of Self-Monitoring of Blood Glucose (SMBG) Systems.

    PubMed

    Pardo, Scott; Simmons, David A

    2016-09-01

    The relationship between International Organization for Standardization (ISO) accuracy criteria and mean absolute relative difference (MARD), 2 methods for assessing the accuracy of blood glucose meters, is complex. While lower MARD values are generally better than higher MARD values, it is not possible to define a particular MARD value that ensures a blood glucose meter will satisfy the ISO accuracy criteria. The MARD value that ensures passing the ISO accuracy test can be described only as a probabilistic range. In this work, a Bayesian model is presented to represent the relationship between ISO accuracy criteria and MARD. Under the assumptions made in this work, there is nearly a 100% chance of satisfying ISO 15197:2013 accuracy requirements if the MARD value is between 3.25% and 5.25%.
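The probabilistic link between MARD and the ISO criteria can be illustrated with a simple Monte Carlo simulation (not the paper's Bayesian model; the glucose range and error distribution are assumptions): meters with about a 5% relative error spread yield a MARD near 4% and nearly all readings within the ISO 15197:2013 limits of ±15 mg/dL below 100 mg/dL and ±15% at or above, consistent with the near-100% pass probability stated above.

```python
import numpy as np

# Illustrative Monte Carlo (not the paper's Bayesian model); the glucose range
# and 5% relative error spread are assumptions.
rng = np.random.default_rng(1)
ref = rng.uniform(50.0, 350.0, 10000)                  # reference glucose, mg/dL
meter = ref * (1.0 + rng.normal(0.0, 0.05, ref.size))  # simulated meter readings

mard = np.mean(np.abs(meter - ref) / ref) * 100.0
# ISO 15197:2013 limits: +/-15 mg/dL below 100 mg/dL, +/-15% at or above
limit = np.where(ref < 100.0, 15.0, 0.15 * ref)
within = np.mean(np.abs(meter - ref) <= limit) * 100.0
print(round(mard, 1), round(within, 1))   # MARD ~4%, pass rate well above 95%
```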

  3. Millisecond accuracy video display using OpenGL under Linux.

    PubMed

    Stewart, Neil

    2006-02-01

    To measure people's reaction times to the nearest millisecond, it is necessary to know exactly when a stimulus is displayed. This article describes how to display stimuli with millisecond accuracy on a normal CRT monitor, using a PC running Linux. A simple C program is presented to illustrate how this may be done within X Windows using the OpenGL rendering system. A test of this system is reported that demonstrates that stimuli may be consistently displayed with millisecond accuracy. An algorithm is presented that allows the exact time of stimulus presentation to be deduced, even if there are relatively large errors in measuring the display time.
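One plausible form of such a deduction, assuming (this is not necessarily the article's algorithm) that a CRT frame can begin only on a vertical-retrace boundary, is to snap a noisily measured buffer-swap timestamp to the vsync grid:

```python
# Hedged sketch: if frames can start only on a vertical-retrace boundary
# (an assumption here, not necessarily the article's algorithm), a noisily
# measured swap time can be snapped to the vsync grid.
def snap_to_vsync(measured_ms, period_ms, origin_ms=0.0):
    """Round a measured timestamp to the nearest retrace boundary."""
    n = round((measured_ms - origin_ms) / period_ms)
    return origin_ms + n * period_ms

period = 1000.0 / 85.0                         # 85 Hz refresh -> ~11.76 ms/frame
print(round(snap_to_vsync(36.1, period), 2))   # -> 35.29 (third retrace)
```

This works as long as the measurement error is smaller than half a refresh period, which is why "relatively large" timing errors can still be tolerated.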

  4. Thematic accuracy of the NLCD 2001 land cover for the conterminous United States

    USGS Publications Warehouse

    Wickham, J.D.; Stehman, S.V.; Fry, J.A.; Smith, J.H.; Homer, C.G.

    2010-01-01

    The land-cover thematic accuracy of NLCD 2001 was assessed from a probability-sample of 15,000 pixels. Nationwide, NLCD 2001 overall Anderson Level II and Level I accuracies were 78.7% and 85.3%, respectively. By comparison, overall accuracies at Level II and Level I for the NLCD 1992 were 58% and 80%. Forest and cropland were two classes showing substantial improvements in accuracy in NLCD 2001 relative to NLCD 1992. NLCD 2001 forest and cropland user's accuracies were 87% and 82%, respectively, compared to 80% and 43% for NLCD 1992. Accuracy results are reported for 10 geographic regions of the United States, with regional overall accuracies ranging from 68% to 86% for Level II and from 79% to 91% at Level I. Geographic variation in class-specific accuracy was strongly associated with the phenomenon that regionally more abundant land-cover classes had higher accuracy. Accuracy estimates based on several definitions of agreement are reported to provide an indication of the potential impact of reference data error on accuracy. Drawing on our experience from two NLCD national accuracy assessments, we discuss the use of designs incorporating auxiliary data to more seamlessly quantify reference data quality as a means to further advance thematic map accuracy assessment.
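The overall and user's accuracies reported above are standard confusion-matrix quantities; a minimal sketch with a made-up 3-class matrix (the counts are hypothetical, not NLCD data):

```python
import numpy as np

# Made-up 3-class confusion matrix: rows = mapped class, columns = reference.
cm = np.array([[87, 8, 5],
               [6, 82, 12],
               [4, 9, 77]])

overall = np.trace(cm) / cm.sum() * 100.0        # overall accuracy (%)
users = np.diag(cm) / cm.sum(axis=1) * 100.0     # user's accuracy per class (%)
print(round(overall, 1), np.round(users, 1))
```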

  5. Bullet trajectory reconstruction - Methods, accuracy and precision.

    PubMed

    Mattijssen, Erwin J A T; Kerkhoff, Wim

    2016-05-01

    Based on the spatial relation between a primary and secondary bullet defect or on the shape and dimensions of the primary bullet defect, a bullet's trajectory prior to impact can be estimated for a shooting scene reconstruction. The accuracy and precision of the estimated trajectories will vary depending on variables such as the applied method of reconstruction, the (true) angle of incidence, the properties of the target material, and the properties of the bullet upon impact. This study focused on the accuracy and precision of estimated bullet trajectories when different variants of the probing method, ellipse method, and lead-in method are applied on bullet defects resulting from shots at various angles of incidence on drywall, MDF and sheet metal. The results show that in most situations the best performance (accuracy and precision) is seen when the probing method is applied. Only for the lowest angles of incidence the performance was better when either the ellipse or lead-in method was applied. The data provided in this paper can be used to select the appropriate method(s) for reconstruction and to correct for systematic errors (accuracy) and to provide a value of the precision, by means of a confidence interval of the specific measurement.
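The geometry commonly exploited by the ellipse method can be sketched briefly: a circular bullet cross section impacting at angle alpha leaves an elliptical defect, with sin(alpha) approximately equal to the defect's width-to-length ratio (the dimensions below are hypothetical):

```python
import math

# Sketch of the ellipse method's geometry: a circular bullet cross section
# impacting at angle alpha leaves an elliptical defect, with
# sin(alpha) ~= width / length (the defect dimensions here are hypothetical).
def angle_of_incidence(width_mm, length_mm):
    return math.degrees(math.asin(width_mm / length_mm))

print(round(angle_of_incidence(9.0, 18.0), 1))   # -> 30.0 degrees
```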

  6. Spindle Thermal Error Optimization Modeling of a Five-axis Machine Tool

    NASA Astrophysics Data System (ADS)

    Guo, Qianjian; Fan, Shuo; Xu, Rufeng; Cheng, Xiang; Zhao, Guoyong; Yang, Jianguo

    2017-03-01

    Aiming at the problem of low machining accuracy and uncontrollable thermal errors of NC machine tools, spindle thermal error measurement, modeling, and compensation are investigated for a two-turntable five-axis machine tool. Measurement experiments on heat sources and thermal errors are carried out, and the GRA (grey relational analysis) method is introduced for the selection of the temperature variables used for thermal error modeling. In order to analyze the influence of different heat sources on spindle thermal errors, an ANN (artificial neural network) model is presented, and the ABC (artificial bee colony) algorithm is introduced to train the link weights of the ANN; a new ABC-NN (artificial bee colony-based neural network) modeling method is thus proposed and used in the prediction of spindle thermal errors. In order to test the prediction performance of the ABC-NN model, an experimental system is developed, and the prediction results of LSR (least squares regression), ANN, and ABC-NN are compared with the measured spindle thermal errors. Experimental results show that the prediction accuracy of the ABC-NN model is higher than that of LSR and ANN, with a residual error smaller than 3 μm, so the new modeling method is feasible. The proposed research provides guidance for compensating thermal errors and improving the machining accuracy of NC machine tools.
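The LSR baseline used in the comparison can be sketched with synthetic data (the temperatures, coefficients, and sensor count are assumptions, not the paper's measurements): predict spindle thermal error from selected temperature variables by ordinary least squares.

```python
import numpy as np

# Synthetic sketch of the LSR baseline: predict spindle thermal error (um)
# from selected temperature variables by least squares. All numbers assumed.
rng = np.random.default_rng(2)
T = rng.uniform(20.0, 45.0, size=(50, 3))     # three temperature sensors (deg C)
coef_true = np.array([0.8, 0.3, 0.5])
err = T @ coef_true + 1.5                     # "measured" thermal error, um

X = np.column_stack([T, np.ones(len(T))])     # design matrix with intercept
coef, *_ = np.linalg.lstsq(X, err, rcond=None)
print(np.round(coef, 3))   # recovers [0.8, 0.3, 0.5, 1.5]
```

The ANN and ABC-NN models in the paper replace this linear map with a trained network, but the inputs (selected temperature variables) and target (measured thermal error) are the same.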

  7. Investigation of Error Patterns in Geographical Databases

    NASA Technical Reports Server (NTRS)

    Dryer, David; Jacobs, Derya A.; Karayaz, Gamze; Gronbech, Chris; Jones, Denise R. (Technical Monitor)

    2002-01-01

    The objective of the research conducted in this project is to develop a methodology to investigate the accuracy of Airport Safety Modeling Data (ASMD) using statistical, visualization, and Artificial Neural Network (ANN) techniques. Such a methodology can contribute to answering the following research questions: Over a representative sampling of ASMD databases, can statistical error analysis techniques be accurately learned and replicated by ANN modeling techniques? This representative ASMD sample should include numerous airports and a variety of terrain characterizations. Is it possible to identify and automate the recognition of patterns of error related to geographical features? Do such patterns of error relate to specific geographical features, such as elevation or terrain slope? Is it possible to combine the errors in small regions into an error prediction for a larger region? What are the data density reduction implications of this work? ASMD may be used as the source of terrain data for a synthetic visual system to be used in the cockpit of aircraft when visual reference to ground features is not possible during conditions of marginal weather or reduced visibility. In this research, United States Geological Survey (USGS) digital elevation model (DEM) data has been selected as the benchmark. Artificial Neural Networks (ANNs) have been used and tested as alternate methods in place of statistical methods in similar problems. They often perform better in pattern recognition, prediction, and classification and categorization problems. Many studies show that when the data is complex and noisy, the accuracy of ANN models is generally higher than that of comparable traditional methods.

  8. Orbit IMU alignment: Error analysis

    NASA Technical Reports Server (NTRS)

    Corson, R. W.

    1980-01-01

    A comprehensive accuracy analysis of orbit inertial measurement unit (IMU) alignments using the shuttle star trackers was completed and the results are presented. Monte Carlo techniques were used in a computer simulation of the IMU alignment hardware and software systems to: (1) determine the expected Space Transportation System 1 Flight (STS-1) manual mode IMU alignment accuracy; (2) investigate the accuracy of alignments in later shuttle flights when the automatic mode of star acquisition may be used; and (3) verify that an analytical model previously used for estimating the alignment error is a valid model. The analysis results do not differ significantly from expectations. The standard deviation in the IMU alignment error for STS-1 alignments was determined to be 68 arc seconds per axis. This corresponds to a 99.7% probability that the magnitude of the total alignment error is less than 258 arc seconds.
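    The quoted relation between the per-axis standard deviation (68 arc seconds) and the 99.7% bound on the total alignment error magnitude (258 arc seconds) can be checked with a simple Monte Carlo draw; this is a sketch assuming independent zero-mean Gaussian per-axis errors, not the flight simulation used in the study:

```python
import numpy as np

# Draw per-axis alignment errors as independent N(0, 68 arcsec) samples
# and look at the 99.7th percentile of the total error magnitude.
rng = np.random.default_rng(0)
sigma = 68.0                                   # arcsec per axis (from the abstract)
errors = rng.normal(0.0, sigma, size=(200_000, 3))
magnitude = np.linalg.norm(errors, axis=1)     # total alignment error per trial
p997 = np.percentile(magnitude, 99.7)
# p997 comes out near the 258 arcsec figure quoted above.
```

    The 3-axis magnitude follows a chi distribution with 3 degrees of freedom, so the 99.7th percentile sits near 3.7 sigma rather than 3 sigma per axis.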

  9. Age-related differences in strategy knowledge updating: blocked testing produces greater improvements in metacognitive accuracy for younger than older adults.

    PubMed

    Price, Jodi; Hertzog, Christopher; Dunlosky, John

    2008-09-01

    Age-related differences in updating knowledge about strategy effectiveness after task experience have not been consistently found, perhaps because the magnitude of observed knowledge updating has been rather meager for both age groups. We examined whether creating homogeneous blocks of recall tests based on two strategies used at encoding (imagery and repetition) would enhance people's learning about strategy effects on recall. Younger and older adults demonstrated greater knowledge updating (as measured by questionnaire ratings of strategy effectiveness and by global judgments of performance) with blocked (versus random) testing. The benefit of blocked testing for absolute accuracy of global predictions was smaller for older than younger adults. However, individual differences in correlations of strategy effectiveness ratings and postdictions showed similar upgrades for both age groups. Older adults learn about imagery's superior effectiveness but do not accurately estimate the magnitude of its benefit, even after blocked testing.

  10. Age-related Differences in Strategy Knowledge Updating: Blocked Testing Produces Greater Improvements in Metacognitive Accuracy for Younger than Older Adults

    PubMed Central

    Price, Jodi; Hertzog, Christopher; Dunlosky, John

    2008-01-01

    Age-related differences in updating knowledge about strategy effectiveness after task experience have not been consistently found, perhaps because the magnitude of observed knowledge updating has been rather meager for both age groups. We examined whether creating homogeneous blocks of recall tests based on two strategies used at encoding (imagery and repetition) would enhance people’s learning about strategy effects on recall. Younger and older adults demonstrated greater knowledge updating (as measured by questionnaire ratings of strategy effectiveness and by global judgments of performance) with blocked (vs. random) testing. The benefit of blocked testing for absolute accuracy of global predictions was smaller for older than younger adults. However, individual differences in correlations of strategy effectiveness ratings and postdictions showed similar upgrades for both age groups. Older adults learn about imagery’s superior effectiveness but do not accurately estimate the magnitude of its benefit, even after blocked testing. PMID:18608048

  11. Words from spontaneous conversational speech can be recognized with human-like accuracy by an error-driven learning algorithm that discriminates between meanings straight from smart acoustic features, bypassing the phoneme as recognition unit.

    PubMed

    Arnold, Denis; Tomaschek, Fabian; Sering, Konstantin; Lopez, Florence; Baayen, R Harald

    2017-01-01

    Sound units play a pivotal role in cognitive models of auditory comprehension. The general consensus is that during perception listeners break down speech into auditory words and subsequently phones. Indeed, cognitive speech recognition is typically taken to be computationally intractable without phones. Here we present a computational model trained on 20 hours of conversational speech that recognizes word meanings within the range of human performance (model 25%, native speakers 20-44%), without making use of phone or word form representations. Our model also successfully generates predictions about the speed and accuracy of human auditory comprehension. At the heart of the model is a 'wide' yet sparse two-layer artificial neural network with some hundred thousand input units representing summaries of changes in acoustic frequency bands, and proxies for lexical meanings as output units. We believe that our model holds promise for resolving longstanding theoretical problems surrounding the notion of the phone in linguistic theory.

  12. Lumbar repositioning error in sitting: healthy controls versus people with sitting-related non-specific chronic low back pain (flexion pattern).

    PubMed

    O'Sullivan, Kieran; Verschueren, Sabine; Van Hoof, Wannes; Ertanir, Faik; Martens, Lien; Dankaerts, Wim

    2013-12-01

    Studies examining repositioning error (RE) in non-specific chronic low back pain (NSCLBP) demonstrate contradictory results, with most studies not correlating RE deficits with measures of pain, disability or fear. This study examined if RE deficits exist among a subgroup of patients with NSCLBP whose symptoms are provoked by flexion, and how such deficits relate to measures of pain, disability, fear-avoidance and kinesiophobia. 15 patients with NSCLBP were matched (age, gender, and body mass index) with 15 painfree participants. Lumbo-pelvic RE, pain, functional disability, fear-avoidance and kinesiophobia were evaluated. Participants were asked to reproduce a target position (neutral lumbo-pelvic posture) after 5 s of slump sitting. RE in each group was compared by evaluating constant error (CE), absolute error (AE) and variable error (VE). Both AE (p = 0.002) and CE (p = 0.006) were significantly larger in the NSCLBP group, unlike VE (p = 0.165) which did not differ between the groups. There were significant, moderate correlations in the NSCLBP group between AE and functional disability (r = 0.601, p = 0.018), and between CE and fear-avoidance (r = -0.577, p = 0.0024), but all other correlations were weak (r < 0.337, rs < 0.377) or non-significant (p > 0.05). The results demonstrate increased lumbo-pelvic RE in a subgroup of NSCLBP patients, with the selected subgroup undershooting the target position. Overall, RE was only weakly to moderately correlated with measures of pain, disability or fear. The deficits observed are consistent with findings of altered motor control in patients with NSCLBP. The mechanisms underlying these RE deficits, and the most effective method of addressing these deficits, require further study.
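    The three error summaries compared in the study (constant, absolute and variable error) can be sketched for one participant's trials; the trial values and the 10-degree target below are hypothetical:

```python
import numpy as np

def repositioning_errors(reproduced, target):
    """Summaries of repositioning error over repeated trials.
    Signed error = reproduced - target; negative values undershoot."""
    e = np.asarray(reproduced, dtype=float) - target
    ce = e.mean()             # constant error (CE): signed bias
    ae = np.abs(e).mean()     # absolute error (AE): magnitude regardless of sign
    ve = e.std(ddof=0)        # variable error (VE): trial-to-trial consistency
    return ce, ae, ve

# Hypothetical trials that consistently undershoot a 10-degree target.
ce, ae, ve = repositioning_errors([8.0, 7.5, 9.0, 8.5, 8.0], target=10.0)
```

    A negative CE with a small VE is the pattern reported above: the subgroup undershot the target systematically rather than merely being inconsistent.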

  13. Sampling Errors in Satellite-derived Infrared Sea Surface Temperatures

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Minnett, P. J.

    2014-12-01

    Sea Surface Temperature (SST) measured from satellites plays a crucial role in understanding geophysical phenomena. Generating SST Climate Data Records (CDRs) imposes some of the most stringent requirements on data accuracy. For infrared SSTs, sampling uncertainties caused by cloud presence and persistence generate errors. In addition, for sensors with narrow swaths, the swath gap acts as another source of sampling error. This study is concerned with quantifying and understanding such sampling errors, which are important for SST CDR generation and for a wide range of satellite SST users. In order to quantify these errors, a reference Level 4 SST field (Multi-scale Ultra-high Resolution SST) is sampled using realistic swath and cloud masks of the Moderate Resolution Imaging Spectroradiometer (MODIS) and the Advanced Along Track Scanning Radiometer (AATSR). Global and regional SST uncertainties are studied by assessing the sampling error at different temporal and spatial resolutions (7 spatial resolutions from 4 kilometers to 5.0° at the equator, and 5 temporal resolutions from daily to monthly). Global annual and seasonal mean sampling errors are large in the high latitude regions, especially the Arctic, and have geographical distributions that are most likely related to stratus cloud occurrence and persistence. The region between 30°N and 30°S has smaller errors compared to higher latitudes, except for the Tropical Instability Wave area, where persistent negative errors are found. Important differences in sampling errors are also found between the broad and narrow swath scan patterns and between day and night fields. This is the first time that realistic magnitudes of the sampling errors have been quantified. Future improvement in the accuracy of SST products will benefit from this quantification.
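    The core of the sampling-error quantification (sampling a complete reference field through a cloud mask and differencing against the truth) can be sketched as follows; the zonal SST field and the latitude-dependent cloud mask are synthetic stand-ins, not the MUR/MODIS data used in the study:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "Level 4" reference field: SST falls off with latitude (deg C).
lat = np.linspace(-80.0, 80.0, 160)
sst_true = 28.0 - 0.006 * lat[:, None] ** 2 + np.zeros((160, 360))

# Synthetic cloud mask: cloudier at high latitudes, as with stratus decks.
p_cloud = 0.2 + 0.6 * np.abs(lat)[:, None] / 80.0
cloud = rng.random((160, 360)) < p_cloud

# Cloudy pixels are unobserved by an infrared sensor.
sampled = np.where(cloud, np.nan, sst_true)

true_mean = sst_true.mean()
sampled_mean = np.nanmean(sampled)
sampling_error = sampled_mean - true_mean   # warm bias: cold pixels under-sampled
```

    Because clouds preferentially hide the cold high-latitude pixels, the mean of the observed pixels is biased warm; spatially uniform cloud would leave the mean nearly unchanged, which is why cloud persistence and geography, not cloud amount alone, drive the errors described above.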

  14. Accuracy of automatic syndromic classification of coded emergency department diagnoses in identifying mental health-related presentations for public health surveillance

    PubMed Central

    2014-01-01

    Background Syndromic surveillance in emergency departments (EDs) may be used to deliver early warnings of increases in disease activity, to provide situational awareness during events of public health significance, to supplement other information on trends in acute disease and injury, and to support the development and monitoring of prevention or response strategies. Changes in mental health-related ED presentations may be relevant to these goals, provided they can be identified accurately and efficiently. This study aimed to measure the accuracy of using diagnostic codes in electronic ED presentation records to identify mental health-related visits. Methods We selected a random sample of 500 records from a total of 1,815,588 electronic ED presentation records from 59 NSW public hospitals during 2010. ED diagnoses were recorded using any of the ICD-9, ICD-10 or SNOMED CT classifications. Three clinicians, blinded to the automatically generated syndromic grouping and to each other’s classification, reviewed the triage notes and classified each of the 500 visits as mental health-related or not. A “mental health problem presentation” for the purposes of this study was defined as any ED presentation where either a mental disorder or a mental health problem was the reason for the ED visit. The combined clinicians’ assessment of the records was used as the reference standard to measure the sensitivity, specificity, and positive and negative predictive values of the automatic classification of coded emergency department diagnoses. Agreement between the reference standard and the automated coded classification was estimated using the Kappa statistic. Results Agreement between the clinicians’ classification and the automated coded classification was substantial (Kappa = 0.73, 95% CI: 0.58-0.87). The automatic syndromic grouping of coded ED diagnoses for mental health-related visits was found to be moderately sensitive (68%, 95% CI: 46%-84%) and highly specific at 99% (95% CI: 98
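    The accuracy measures reported above (sensitivity, specificity, predictive values and Cohen's kappa) all follow from a single 2x2 table of the automated classification versus the clinicians' reference standard; a minimal sketch with illustrative counts (not the study's data):

```python
def binary_accuracy(tp, fp, fn, tn):
    """Accuracy measures for a binary classifier against a reference
    standard, from the four cells of the 2x2 confusion table."""
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)                       # sensitivity
    spec = tn / (tn + fp)                       # specificity
    ppv = tp / (tp + fp)                        # positive predictive value
    npv = tn / (tn + fn)                        # negative predictive value
    po = (tp + tn) / n                          # observed agreement
    # Chance agreement from the marginal totals (Cohen's kappa definition).
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (po - pe) / (1 - pe)
    return sens, spec, ppv, npv, kappa

# Illustrative counts only: 15 true mental-health visits flagged, 7 missed,
# 5 false alarms, 473 true negatives in a sample of 500.
sens, spec, ppv, npv, kappa = binary_accuracy(tp=15, fp=5, fn=7, tn=473)
```

    Note how a rare outcome yields high specificity and substantial kappa even when sensitivity is only moderate, the same pattern as in the results above.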

  15. Sensitivity of LIDAR Canopy Height Estimate to Geolocation Error

    NASA Astrophysics Data System (ADS)

    Tang, H.; Dubayah, R.

    2010-12-01

    Many factors affect the quality of canopy height structure data derived from space-based lidar such as DESDynI. Among these is geolocation accuracy. Inadequate geolocation information hinders subsequent analyses because a different portion of the canopy is observed relative to what is assumed. This is especially true in mountainous terrain where the effects of slope magnify geolocation errors. Mission engineering design must trade the expense of providing more accurate geolocation with the potential improvement in measurement accuracy. The objective of our work is to assess the effects of small errors in geolocation on subsequent retrievals of maximum canopy height for a varying set of canopy structures and terrains. Dense discrete lidar data from different forest sites (from La Selva Biological Station, Costa Rica, Sierra National Forest, California, and Hubbard Brook and Bartlett Experimental Forests in New Hampshire) are used to simulate DESDynI height retrievals using various geolocation accuracies. Results show that canopy height measurement errors generally increase as the geolocation error increases. Interestingly, most of the height errors are caused by variation of canopy height rather than topography (slope and aspect).

  16. Analysis of deformable image registration accuracy using computational modeling

    SciTech Connect

    Zhong Hualiang; Kim, Jinkoo; Chetty, Indrin J.

    2010-03-15

    selection for optimal accuracy is closely related to the intensity gradients of the underlying images. Also, the result that the DIR algorithms produce much lower errors in heterogeneous lung regions relative to homogeneous (low intensity gradient) regions, suggests that feature-based evaluation of deformable image registration accuracy must be viewed cautiously.

  17. Measurement Error. For Good Measure....

    ERIC Educational Resources Information Center

    Johnson, Stephen; Dulaney, Chuck; Banks, Karen

    No test, however well designed, can measure a student's true achievement because numerous factors interfere with the ability to measure achievement. These factors are sources of measurement error, and the goal in creating tests is to have as little measurement error as possible. Error can result from the test design, factors related to individual…

  18. Prospective issues for error detection.

    PubMed

    Blavier, Adélaïde; Rouy, Emmanuelle; Nyssen, Anne-Sophie; de Keyser, Véronique

    2005-06-10

    From the literature on error detection, the authors select several concepts relating error detection mechanisms to prospective memory features. They emphasize the central role of intention in the classification of errors into slips/lapses/mistakes, in the error handling process and in the usual distinction between action-based and outcome-based detection. Intention is again a core concept in their investigation of prospective memory theory, where they point out the contribution of intention retrieval, intention persistence and output monitoring to an individual's possibilities for detecting their errors. The involvement of the frontal lobes in prospective memory and in error detection is also analysed. From the chronology of a prospective memory task, the authors finally suggest a model for error detection that also accounts for neural mechanisms highlighted by studies on error-related brain activity.

  19. Evaluating the accuracy of selenodesic reference grids

    NASA Technical Reports Server (NTRS)

    Koptev, A. A.

    1974-01-01

    Estimates were made of the accuracy of reference point grids using the technique of calculating the errors from theoretical analysis. Factors taken into consideration were: telescope accuracy, number of photographs, and libration amplitude. To solve the problem, formulas were used for the relationship between the coordinates of lunar surface points and their images on the photograph.

  20. Measurement Accuracy Limitation Analysis on Synchrophasors

    SciTech Connect

    Zhao, Jiecheng; Zhan, Lingwei; Liu, Yilu; Qi, Hairong; Gracia, Jose R; Ewing, Paul D

    2015-01-01

    This paper analyzes the theoretical accuracy limitation of synchrophasor measurements of the phase angle and frequency of the power grid. Factors that cause measurement error are analyzed, including error sources in the instruments and in the power grid signal. Different scenarios of these factors are evaluated according to the normal operating status of power grid measurement. Based on the evaluation and simulation, the errors in phase angle and frequency caused by each factor are calculated and discussed.

  1. Accuracy of overlay measurements: tool and mark asymmetry effects

    NASA Astrophysics Data System (ADS)

    Coleman, Daniel J.; Larson, Patricia J.; Lopata, Alexander D.; Muth, William A.; Starikov, Alexander

    1990-06-01

    Results of recent investigations uncovering significant errors in overlay (O/L) measurements are reported. The two major contributors are related to failures of symmetry of the overlay measurement tool and of the mark. These may result in measurement errors on the order of 100 nm. A methodology based on conscientious verification of the assumptions of symmetry is shown to be effective in identifying the extent and sources of such errors. This methodology can be used to arrive at an estimate of the relative accuracy of O/L measurements, even in the absence of certified O/L reference materials. Routes to improve the accuracy of O/L measurements are outlined and some examples of improvements are given. Errors in O/L measurements associated with the asymmetry of the metrology tool can be observed by comparing O/L measurements taken at 0 and 180 degree orientations of the sample with reference to the tool. Half the difference of these measurements serves as an estimate of such tool-related bias in estimating O/L. This is called tool induced shift (TIS). Errors of this kind can be traced to asymmetries of tool components, e.g., camera, illumination misalignment, residual asymmetric aberrations, etc. Tool asymmetry leads to biased O/L estimates even on symmetric O/L measurement marks. Its impact on TIS depends on the optical properties of the structure being measured, the measurement procedure, and the combination of tool and sample asymmetries. It is also a function of the design and manufacture of the O/L metrology tool. In the absence of certified O/L samples, measurement accuracy and repeatability may be improved by demanding that TIS be small for all tools on all structures.
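    The TIS estimate described above can be sketched under the usual sign convention in which the 180-degree reading is first brought back into the wafer frame, so the true overlay keeps its sign while the tool bias flips; the measurement values are hypothetical:

```python
def tis_decompose(m0, m180):
    """m0, m180: overlay readings at 0 and 180 degree sample orientations,
    both expressed in the wafer frame (the 180-degree reading already
    sign-corrected). In this convention m0 = overlay + bias and
    m180 = overlay - bias, so the tool bias is half the difference."""
    tis = (m0 - m180) / 2.0       # tool induced shift (tool asymmetry bias)
    overlay = (m0 + m180) / 2.0   # bias-corrected overlay estimate
    return tis, overlay

# Hypothetical readings in nanometres: true overlay 25 nm, tool bias 10 nm.
tis, overlay = tis_decompose(m0=35.0, m180=15.0)
```

    Driving `tis` toward zero across tools and structures is the acceptance criterion the abstract proposes in place of certified reference samples.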

  2. Mars gravitational field estimation error

    NASA Technical Reports Server (NTRS)

    Compton, H. R.; Daniels, E. F.

    1972-01-01

    The error covariance matrices associated with a weighted least-squares differential correction process have been analyzed for accuracy in determining the gravitational coefficients through degree and order five in the Mars gravitational potential function. The results are presented in terms of standard deviations for the assumed estimated parameters. The covariance matrices were calculated by assuming Doppler tracking data from a Mars orbiter, a priori statistics for the estimated parameters, and model error uncertainties for tracking-station locations, the Mars ephemeris, the astronomical unit, the Mars gravitational constant (G sub M), and the gravitational coefficients of degrees six and seven. Model errors were treated by using the concept of consider parameters.

  3. Accuracy evaluation of 3D lidar data from small UAV

    NASA Astrophysics Data System (ADS)

    Tulldahl, H. M.; Bissmarck, Fredrik; Larsson, Håkan; Grönwall, Christina; Tolt, Gustav

    2015-10-01

    A UAV (Unmanned Aerial Vehicle) with an integrated lidar can be an efficient system for collection of high-resolution and accurate three-dimensional (3D) data. In this paper we evaluate the accuracy of a system consisting of a lidar sensor on a small UAV. High geometric accuracy in the produced point cloud is a fundamental qualification for detection and recognition of objects in a single-flight dataset as well as for change detection using two or several data collections over the same scene. Our work presented here has two purposes: first, to relate the point cloud accuracy to data processing parameters and second, to examine the influence of the UAV platform parameters on accuracy. In our work, the accuracy is numerically quantified as local surface smoothness on planar surfaces, and as distance and relative height accuracy using data from a terrestrial laser scanner as reference. The UAV lidar system used is the Velodyne HDL-32E lidar on a multirotor UAV with a total weight of 7 kg. For processing of data into a geographically referenced point cloud, positioning and orientation of the lidar sensor is based on inertial navigation system (INS) data combined with lidar data. The combination of INS and lidar data is achieved in a dynamic calibration process that minimizes the navigation errors in six degrees of freedom, namely the errors of the absolute position (x, y, z) and the orientation (pitch, roll, yaw) measured by GPS/INS. Our results show that low-cost, light-weight MEMS-based (microelectromechanical systems) INS equipment with a dynamic calibration process can obtain significantly improved accuracy compared to processing based solely on INS data.

  4. Error prediction for probes guided by means of fixtures

    NASA Astrophysics Data System (ADS)

    Fitzpatrick, J. Michael

    2012-02-01

    Probe guides are surgical fixtures that are rigidly attached to bone anchors in order to place a probe at a target with high accuracy (RMS error < 1 mm). Applications include needle biopsy, the placement of electrodes for deep-brain stimulation (DBS), spine surgery, and cochlear implant surgery. Targeting is based on pre-operative images, but targeting errors can arise from three sources: (1) anchor localization error, (2) guide fabrication error, and (3) external forces and torques. A well-established theory exists for the statistical prediction of target registration error (TRE) when targeting is accomplished by means of tracked probes, but no such TRE theory is available for fixtured probe guides. This paper provides that theory and shows that all three error sources can be accommodated in a remarkably simple extension of existing theory. Both the guide and the bone with attached anchors are modeled as objects with rigid sections and elastic sections, the latter of which are described by stiffness matrices. By relating minimization of elastic energy for guide attachment to minimization of fiducial registration error for point registration, it is shown that the expression for targeting error for the guide is identical to that for weighted rigid point registration if the weighting matrices are properly derived from stiffness matrices and the covariance matrices for fiducial localization are augmented with offsets in the anchor positions. An example of the application of the theory is provided for ear surgery.

  5. On the relation between orbital-localization and self-interaction errors in the density functional theory treatment of organic semiconductors.

    PubMed

    Körzdörfer, T

    2011-03-07

    It is commonly argued that the self-interaction error (SIE) inherent in semilocal density functionals is related to the degree of the electronic localization. Yet at the same time there exists a latent ambiguity in the definitions of the terms "localization" and "self-interaction," which ultimately prevents a clear and readily accessible quantification of this relationship. This problem is particularly pressing for organic semiconductor molecules, in which delocalized molecular orbitals typically alternate with localized ones, thus leading to major distortions in the eigenvalue spectra. This paper discusses the relation between localization and SIEs in organic semiconductors in detail. Its findings provide further insights into the SIE in the orbital energies and yield a new perspective on the failure of self-interaction corrections that identify delocalized orbital densities with electrons.

  6. Survey methods for assessing land cover map accuracy

    USGS Publications Warehouse

    Nusser, S.M.; Klaas, E.E.

    2003-01-01

    The increasing availability of digital photographic materials has fueled efforts by agencies and organizations to generate land cover maps for states, regions, and the United States as a whole. Regardless of the information sources and classification methods used, land cover maps are subject to numerous sources of error. In order to understand the quality of the information contained in these maps, it is desirable to generate statistically valid estimates of accuracy rates describing misclassification errors. We explored a full sample survey framework for creating accuracy assessment study designs that balance statistical and operational considerations in relation to study objectives for a regional assessment of GAP land cover maps. We focused not only on appropriate sample designs and estimation approaches, but on aspects of the data collection process, such as gaining cooperation of land owners and using pixel clusters as an observation unit. The approach was tested in a pilot study to assess the accuracy of Iowa GAP land cover maps. A stratified two-stage cluster sampling design addressed sample size requirements for land covers and the need for geographic spread while minimizing operational effort. Recruitment methods used for private land owners yielded high response rates, minimizing a source of nonresponse error. Collecting data for a 9-pixel cluster centered on the sampled pixel was simple to implement, and provided better information on rarer vegetation classes as well as substantial gains in precision relative to observing data at a single pixel.

  7. High Accuracy of Common HIV-Related Oral Disease Diagnoses by Non-Oral Health Specialists in the AIDS Clinical Trial Group

    PubMed Central

    Shiboski, Caroline H.; Chen, Huichao; Secours, Rode; Lee, Anthony; Webster-Cyriaque, Jennifer; Ghannoum, Mahmoud; Evans, Scott; Bernard, Daphné; Reznik, David; Dittmer, Dirk P.; Hosey, Lara; Sévère, Patrice; Aberg, Judith A.

    2015-01-01

    Objective Many studies include oral HIV-related endpoints that may be diagnosed by non-oral-health specialists (non-OHS) like nurses or physicians. Our objective was to assess the accuracy of clinical diagnoses of HIV-related oral lesions made by non-OHS compared to diagnoses made by OHS. Methods A5254, a cross-sectional study conducted by the Oral HIV/AIDS Research Alliance within the AIDS Clinical Trial Group, enrolled HIV-1-infected adult participants from six clinical trial units (CTUs) in the US (San Francisco, New York, Chapel Hill, Cleveland, Atlanta) and Haiti. CTU examiners (non-OHS) received standardized training on how to perform an oral examination and make clinical diagnoses of specific oral disease endpoints. Diagnoses by calibrated non-OHS were compared to those made by calibrated OHS, and sensitivity and specificity computed. Results Among 324 participants, the majority were black (73%) and men (66%), and the median CD4+ cell count was 138 cells/mm3. The overall frequency of oral mucosal disease diagnosed by OHS was 43% in US sites, and 90% in Haiti. Oral candidiasis (OC) was detected in 153 (47%) by OHS, with erythematous candidiasis (EC) the most common type (39%), followed by pseudomembranous candidiasis (PC; 26%). The highest prevalence of OC (79%) was among participants in Haiti, and among those with CD4+ cell count ≤ 200 cells/mm3 and HIV-1 RNA > 1000 copies/mL (71%). The sensitivity and specificity of OC diagnoses by non-OHS were 90% and 92% (for EC: 81% and 94%; PC: 82% and 95%). Sensitivity and specificity were also high for KS (87% and 94%, respectively), but sensitivity was < 60% for HL and oral warts in all sites combined. The Candida culture confirmation of OC clinical diagnoses (as defined by ≥ 1 colony forming unit per mL of oral/throat rinse) was ≥ 93% for both PC and EC. Conclusion Trained non-OHS showed high accuracy of clinical diagnoses of OC in comparison with OHS, suggesting their usefulness in studies in resource-poor settings.

  8. Onorbit IMU alignment error budget

    NASA Technical Reports Server (NTRS)

    Corson, R. W.

    1980-01-01

    The Star Tracker, Crew Optical Alignment Sight (COAS), and Inertial Measurement Unit (IMU) form a complex navigation system with a multitude of error sources. A complete list of the system errors is presented. The errors were combined in a rational way to yield an estimate of the IMU alignment accuracy for STS-1. The expected standard deviation in the IMU alignment error for STS-1 type alignments was determined to be 72 arc seconds per axis for star tracker alignments and 188 arc seconds per axis for COAS alignments. These estimates are based on current knowledge of the star tracker, COAS, IMU, and navigation base error specifications, and were partially verified by preliminary Monte Carlo analysis.

  9. Algorithmic Error Correction of Impedance Measuring Sensors

    PubMed Central

    Starostenko, Oleg; Alarcon-Aquino, Vicente; Hernandez, Wilmar; Sergiyenko, Oleg; Tyrsa, Vira

    2009-01-01

    This paper describes novel design concepts and some advanced techniques proposed for increasing the accuracy of low-cost impedance measuring devices without reduction of operational speed. The proposed structural method for algorithmic error correction and the iterating correction method provide linearization of the transfer functions of the measuring sensor and signal conditioning converter, which contribute the principal additive and relative measurement errors. Some measuring systems have been implemented in order to estimate in practice the performance of the proposed methods. In particular, a measuring system for analysis of C-V, G-V characteristics has been designed and constructed. It has been tested during technological process control of charge-coupled device (CCD) manufacturing. The obtained results are discussed in order to define a reasonable range of the applied methods, their utility, and performance. PMID:22303177

  10. SU-E-T-599: The Variation of Hounsfield Unit and Relative Electron Density Determination as a Function of KVp and Its Effect On Dose Calculation Accuracy

    SciTech Connect

    Ohl, A; Boer, S De

    2014-06-01

    Purpose: To investigate the differences in relative electron density for different energy (kVp) settings and the effect that these differences have on dose calculations. Methods: A Nuclear Associates 76-430 Mini CT QC Phantom with materials of known relative electron densities was imaged by one multi-slice (16-slice) and one single-slice computed tomography (CT) scanner. The Hounsfield unit (HU) was recorded for each material at energies ranging from 80 to 140 kVp, and a representative relative electron density (RED) curve was created. A 5 cm thick inhomogeneity was created in the treatment planning system (TPS) image at a depth of 5 cm. The inhomogeneity was assigned the HU for various materials from each kVp calibration curve. The dose was then calculated with the analytical anisotropic algorithm (AAA) at points within and below the inhomogeneity and compared using the 80 kVp beam as a baseline. Results: The differences in RED values as a function of kVp showed the largest variations, of 580 and 547 HU, for the aluminum and bone materials; the smallest differences, of 0.6 and 3.0 HU, were observed for the air and lung inhomogeneities. The corresponding dose calculations for the different RED values assigned to the 5 cm thick slab revealed the largest differences inside the aluminum and bone inhomogeneities, of 2.2 to 6.4% and 4.3 to 7.0% respectively. The dose differences beyond these two inhomogeneities were between 0.4 and 1.6% for aluminum and 1.9 and 2.2% for bone. For materials with lower HU, the calculated dose differences were less than 1.0%. Conclusion: For high-CT-number materials, dose differences in the phantom calculation as high as 7.0% are significant. This result may indicate that implementing energy-specific RED curves can increase dose calculation accuracy.
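    The HU-to-RED conversion at the center of this comparison is typically a piecewise-linear calibration curve; a minimal sketch with illustrative calibration points (not the phantom's measured values, which would shift with kVp):

```python
import numpy as np

# Illustrative calibration points from air through bone-like material.
# A different kVp would shift the HU measured for the same material,
# which is why energy-specific curves matter for high-HU materials.
hu_points  = np.array([-1000.0, -700.0, 0.0, 700.0, 1200.0])
red_points = np.array([0.001, 0.29, 1.0, 1.43, 1.70])

def hu_to_red(hu):
    """Piecewise-linear HU -> relative electron density lookup.
    np.interp clamps values outside the calibration range."""
    return np.interp(hu, hu_points, red_points)

red_water = hu_to_red(0.0)      # water: RED = 1.0 by construction
red_lung = hu_to_red(-700.0)    # lung-like calibration point
```

    For a bone-like insert, a few-hundred-HU shift between kVp curves maps through the steep upper segment of the curve into a RED change of several percent, which is the mechanism behind the dose differences reported above.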

  11. Elimination of 'ghost'-effect-related systematic error in metrology of X-ray optics with a long trace profiler

    SciTech Connect

    Yashchuk, Valeriy V.; Irick, Steve C.; MacDowell, Alastair A.

    2005-04-28

    A data acquisition technique and relevant program for suppression of one of the systematic effects, namely the ''ghost'' effect, of a second generation long trace profiler (LTP) is described. The ''ghost'' effect arises when there is an unavoidable cross-contamination of the LTP sample and reference signals into one another, leading to a systematic perturbation in the recorded interference patterns and, therefore, a systematic variation of the measured slope trace. Perturbations of about 1-2 {micro}rad have been observed with a cylindrically shaped X-ray mirror. Even stronger ''ghost'' effects show up in an LTP measurement with a mirror having a toroidal surface figure. The developed technique employs separate measurement of the ''ghost''-effect-related interference patterns in the sample and the reference arms and then subtraction of the ''ghost'' patterns from the sample and the reference interference patterns. The procedure preserves the advantage of simultaneously measuring the sample and reference signals. The effectiveness of the technique is illustrated with LTP metrology of a variety of X-ray mirrors.

  12. The measurement accuracy of passive radon instruments.

    PubMed

    Beck, T R; Foerster, E; Buchröder, H; Schmidt, V; Döring, J

    2014-01-01

This paper analyses data gathered from interlaboratory comparisons of passive radon instruments over 10 y with respect to measurement accuracy. The measurement accuracy is discussed in terms of the systematic and the random measurement error. The analysis shows that the systematic measurement error of most instruments issued by professional laboratory services can be within a range of ±10 % of the true value. A single radon measurement has an additional random measurement error, which is in the range of up to ±15 % for high exposures to radon (>2000 kBq h m⁻³). The random measurement error increases for lower exposures. The analysis applies especially to instruments with solid-state nuclear track detectors and results in proposed criteria for testing the measurement accuracy. Instruments with electrets and charcoal have also been considered, but the limited data permit only a qualitative discussion.
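If the systematic and random components quoted above are treated as independent, a combined relative error bound follows by quadrature. This is a common simplifying assumption for illustration, not the paper's analysis:

```python
import math

def combined_relative_error(systematic_pct, random_pct):
    """Quadrature combination of relative systematic and random errors (%),
    assuming the two components are independent."""
    return math.sqrt(systematic_pct ** 2 + random_pct ** 2)

# Bounds quoted above: up to +/-10 % systematic and +/-15 % random at
# high exposures; the quadrature sum bounds the total relative error.
print(round(combined_relative_error(10.0, 15.0), 1))
```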

  13. Potential errors in relative dose measurements in kilovoltage photon beams due to polarity effects in plane-parallel ionisation chambers.

    PubMed

    Dowdell, S; Tyler, M; McNamara, J; Sloan, K; Ceylan, A; Rinks, A

    2016-11-15

Plane-parallel ionisation chambers are regularly used to conduct relative dosimetry measurements for therapeutic kilovoltage beams during commissioning and routine quality assurance. This paper presents the first quantification of the polarity effect in kilovoltage photon beams for two types of commercially available plane-parallel ionisation chambers used for such measurements. Measurements were performed at various depths along the central axis in a solid water phantom and for different field sizes at 2 cm depth to determine the polarity effect for PTW Advanced Markus and Roos ionisation chambers (PTW-Freiburg, Germany). Data were acquired for kilovoltage beams between 100 kVp (half-value layer (HVL) = 2.88 mm Al) and 250 kVp (HVL = 2.12 mm Cu) and field sizes of 3-15 cm diameter for 30 cm focus-source distance (FSD) and 4 × 4 cm²-20 × 20 cm² for 50 cm FSD. Substantial polarity effects, up to 9.6%, were observed for the Advanced Markus chamber compared to a maximum 0.5% for the Roos chamber. The magnitude of the polarity effect was observed to increase with field size and beam energy but was consistent with depth. The polarity effect is directly influenced by chamber design, with potentially large polarity effects for some plane-parallel ionisation chambers. Depending on the specific chamber used, polarity corrections may be required for output factor measurements of kilovoltage photon beams. Failure to account for polarity effects could lead to an incorrect dose being delivered to the patient.
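Polarity effects of this kind are conventionally handled with a correction factor. A minimal sketch, assuming the standard dosimetry-protocol form k_pol = (|M+| + |M−|)/(2|M|) (e.g. IAEA TRS-398), with the routinely used polarity taken to be M+ here as an illustrative assumption:

```python
def polarity_correction(m_plus, m_minus):
    """Polarity correction factor k_pol = (|M+| + |M-|) / (2|M|), in the form
    used by standard dosimetry protocols (e.g. IAEA TRS-398). m_plus/m_minus
    are chamber readings at positive/negative polarizing voltage; the
    routinely used polarity is assumed to be m_plus for this sketch."""
    return (abs(m_plus) + abs(m_minus)) / (2.0 * abs(m_plus))

# A 9.6% polarity effect (as reported for the Advanced Markus chamber)
# corresponds to readings differing by ~9.6% between the two polarities.
print(round(polarity_correction(1.000, 1.096), 3))
```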

  14. Potential errors in relative dose measurements in kilovoltage photon beams due to polarity effects in plane-parallel ionisation chambers

    NASA Astrophysics Data System (ADS)

    Dowdell, S.; Tyler, M.; McNamara, J.; Sloan, K.; Ceylan, A.; Rinks, A.

    2016-12-01

    Plane-parallel ionisation chambers are regularly used to conduct relative dosimetry measurements for therapeutic kilovoltage beams during commissioning and routine quality assurance. This paper presents the first quantification of the polarity effect in kilovoltage photon beams for two types of commercially available plane-parallel ionisation chambers used for such measurements. Measurements were performed at various depths along the central axis in a solid water phantom and for different field sizes at 2 cm depth to determine the polarity effect for PTW Advanced Markus and Roos ionisation chambers (PTW-Freiburg, Germany). Data was acquired for kilovoltage beams between 100 kVp (half-value layer (HVL)  =  2.88 mm Al) and 250 kVp (HVL  =  2.12 mm Cu) and field sizes of 3-15 cm diameter for 30 cm focus-source distance (FSD) and 4  ×  4 cm2-20  ×  20 cm2 for 50 cm FSD. Substantial polarity effects, up to 9.6%, were observed for the Advanced Markus chamber compared to a maximum 0.5% for the Roos chamber. The magnitude of the polarity effect was observed to increase with field size and beam energy but was consistent with depth. The polarity effect is directly influenced by chamber design, with potentially large polarity effects for some plane-parallel ionisation chambers. Depending on the specific chamber used, polarity corrections may be required for output factor measurements of kilovoltage photon beams. Failure to account for polarity effects could lead to an incorrect dose being delivered to the patient.

  15. Variations on a theme: Songbirds, variability, and sensorimotor error correction.

    PubMed

    Kuebrich, B D; Sober, S J

    2015-06-18

    Songbirds provide a powerful animal model for investigating how the brain uses sensory feedback to correct behavioral errors. Here, we review a recent study in which we used online manipulations of auditory feedback to quantify the relationship between sensory error size, motor variability, and vocal plasticity. We found that although inducing small auditory errors evoked relatively large compensatory changes in behavior, as error size increased the magnitude of error correction declined. Furthermore, when we induced large errors such that auditory signals no longer overlapped with the baseline distribution of feedback, the magnitude of error correction approached zero. This pattern suggests a simple and robust strategy for the brain to maintain the accuracy of learned behaviors by evaluating sensory signals relative to the previously experienced distribution of feedback. Drawing from recent studies of auditory neurophysiology and song discrimination, we then speculate as to the mechanistic underpinnings of the results obtained in our behavioral experiments. Finally, we review how our own and other studies exploit the strengths of the songbird system, both in the specific context of vocal systems and more generally as a model of the neural control of complex behavior.

  16. Sensitivity of grass and alfalfa reference evapotranspiration to weather station sensor accuracy

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A sensitivity analysis was conducted to determine the relative effects of measurement errors in climate data input parameters on the accuracy of calculated reference crop evapotranspiration (ET) using the ASCE-EWRI Standardized Reference ET Equation. Data for the period of 1991 to 2008 from an autom...

  17. Adjoint Error Estimation for Linear Advection

    SciTech Connect

    Connors, J M; Banks, J W; Hittinger, J A; Woodward, C S

    2011-03-30

    An a posteriori error formula is described when a statistical measurement of the solution to a hyperbolic conservation law in 1D is estimated by finite volume approximations. This is accomplished using adjoint error estimation. In contrast to previously studied methods, the adjoint problem is divorced from the finite volume method used to approximate the forward solution variables. An exact error formula and computable error estimate are derived based on an abstractly defined approximation of the adjoint solution. This framework allows the error to be computed to an arbitrary accuracy given a sufficiently well resolved approximation of the adjoint solution. The accuracy of the computable error estimate provably satisfies an a priori error bound for sufficiently smooth solutions of the forward and adjoint problems. The theory does not currently account for discontinuities. Computational examples are provided that show support of the theory for smooth solutions. The application to problems with discontinuities is also investigated computationally.
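For linear advection, the adjoint error representation described above can be written schematically as follows (a sketch under standard assumptions on boundary and initial data, not the paper's exact formulation):

```latex
% Forward problem and functional of interest
u_t + a\,u_x = 0, \qquad J(u) = \int_0^T\!\!\int_\Omega g\,u \,dx\,dt .
% Adjoint problem, solved backward in time
-\phi_t - a\,\phi_x = g, \qquad \phi(\cdot,T) = 0 .
% Error representation: functional error = adjoint-weighted residual
J(u) - J(u_h) = \int_0^T\!\!\int_\Omega \phi\, R(u_h)\,dx\,dt ,
\qquad R(u_h) = -\bigl(\partial_t u_h + a\,\partial_x u_h\bigr) .
% Replacing \phi by a sufficiently well resolved approximation \phi_h
% yields the computable estimate; the estimate can be made arbitrarily
% accurate by refining \phi_h, as stated in the abstract.
```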

  18. Accuracy of the Modified Somatic Perception Questionnaire and Pain Disability Index in the detection of malingered pain-related disability in chronic pain.

    PubMed

    Bianchini, Kevin J; Aguerrevere, Luis E; Guise, Brian J; Ord, Jonathan S; Etherton, Joseph L; Meyers, John E; Soignier, R Denis; Greve, Kevin W; Curtis, Kelly L; Bui, Joy

    2014-01-01

The Modified Somatic Perception Questionnaire (MSPQ) and the Pain Disability Index (PDI) are both popular clinical screening instruments in general orthopedic, rheumatologic, and neurosurgical clinics and are useful for identifying pain patients whose physical symptom presentations and disability may be non-organic. Previous studies found both to accurately detect malingered pain presentations; however, the generalizability of these results is not clear. This study used a criterion groups validation design (retrospective cohort of patients with chronic pain, n = 328) with a simulator group (college students, n = 98) to determine the accuracy of the MSPQ and PDI in detecting Malingered Pain Related Disability (MPRD). Patients were grouped based on independent psychometric evidence of MPRD. Results showed that MSPQ and PDI scores were not associated with objective medical pathology. However, they accurately differentiated Not-MPRD from MPRD cases. Diagnostic statistics associated with a range of scores are presented for application to individual cases. Data from this study can inform the clinical management of chronic pain patients by screening for psychological overlay and malingering, thus alerting clinicians to the possible presence of psychosocial obstacles to effective treatment and triggering further psychological assessment and/or treatment.

  19. ERP evidence of adaptive changes in error processing and attentional control during rhythm synchronization learning.

    PubMed

    Padrão, Gonçalo; Penhune, Virginia; de Diego-Balaguer, Ruth; Marco-Pallares, Josep; Rodriguez-Fornells, Antoni

    2014-10-15

The ability to detect and use information from errors is essential during the acquisition of new skills. There is now a wealth of evidence about the brain mechanisms involved in error processing. However, the extent to which those mechanisms are engaged during the acquisition of new motor skills remains elusive. Here we examined rhythm synchronization learning across 12 blocks of practice in musically naïve individuals and tracked changes in ERP signals associated with error-monitoring and error-awareness across distinct learning stages. Synchronization performance improved with practice, and performance improvements were accompanied by dynamic changes in ERP components related to error-monitoring and error-awareness. Early in learning, when performance was poor and the internal representations of the rhythms were weaker, we observed a larger error-related negativity (ERN) following errors compared to later learning. The larger ERN during early learning likely results from greater conflict between competing motor responses, leading to greater engagement of medial-frontal conflict monitoring processes and attentional control. Later in learning, when performance had improved, we observed a smaller ERN accompanied by an enhancement of a centroparietal positive component resembling the P3. This centroparietal positive component was predictive of participants' performance accuracy, suggesting a relation between error saliency, error awareness and the consolidation of internal templates of the practiced rhythms. Moreover, we showed that during rhythm learning errors led to larger auditory evoked responses related to attention orientation which were triggered automatically and which were independent of the learning stage. The present study provides crucial new information about how the electrophysiological signatures related to error-monitoring and error-awareness change during the acquisition of new skills, extending previous work on error processing and cognitive

  20. Sound source localization identification accuracy: bandwidth dependencies.

    PubMed

    Yost, William A; Zhong, Xuan

    2014-11-01

    Sound source localization accuracy using a sound source identification task was measured in the front, right quarter of the azimuth plane as rms (root-mean-square) error (degrees) for stimulus conditions in which the bandwidth (1/20 to 2 octaves wide) and center frequency (250, 2000, 4000 Hz) of 200-ms noise bursts were varied. Tones of different frequencies (250, 2000, 4000 Hz) were also used. As stimulus bandwidth increases, there is an increase in sound source localization identification accuracy (i.e., rms error decreases). Wideband stimuli (>1 octave wide) produce best sound source localization accuracy (~6°-7° rms error), and localization accuracy for these wideband noise stimuli does not depend on center frequency. For narrow bandwidths (<1 octave) and tonal stimuli, accuracy does depend on center frequency such that highest accuracy is obtained for low-frequency stimuli (centered on 250 Hz), worse accuracy for mid-frequency stimuli (centered on 2000 Hz), and intermediate accuracy for high-frequency stimuli (centered on 4000 Hz).
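The rms error metric used above is simply the root-mean-square of the angular differences between reported and actual azimuths. A minimal sketch with hypothetical trial data:

```python
import math

def rms_error_deg(responses, targets):
    """Root-mean-square localization error (degrees) between reported and
    actual source azimuths, as used in source-identification tasks."""
    assert len(responses) == len(targets)
    n = len(responses)
    return math.sqrt(sum((r - t) ** 2 for r, t in zip(responses, targets)) / n)

# Illustrative trial data (hypothetical azimuths in degrees):
reported = [12.0, 33.0, 58.0, 84.0]
actual   = [15.0, 30.0, 65.0, 80.0]
print(round(rms_error_deg(reported, actual), 1))
```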

  1. Inertial Measures of Motion for Clinical Biomechanics: Comparative Assessment of Accuracy under Controlled Conditions - Effect of Velocity

    PubMed Central

    Lebel, Karina; Boissy, Patrick; Hamel, Mathieu; Duval, Christian

    2013-01-01

Background Inertial measurement of motion with Attitude and Heading Reference Systems (AHRS) is emerging as an alternative to 3D motion capture systems in biomechanics. The objectives of this study are: 1) to describe the absolute and relative accuracy of multiple units of commercially available AHRS under various types of motion; and 2) to evaluate the effect of motion velocity on the accuracy of these measurements. Methods The criterion validity of accuracy was established under controlled conditions using an instrumented Gimbal table. AHRS modules were carefully attached to the center plate of the Gimbal table and put through experimental static and dynamic conditions. Static and absolute accuracy was assessed by comparing the AHRS orientation measurement to those obtained using an optical gold standard. Relative accuracy was assessed by measuring the variation in relative orientation between modules during trials. Findings Evaluated AHRS systems demonstrated good absolute static accuracy (mean error < 0.5°) and clinically acceptable absolute accuracy under conditions of slow motion (mean error between 0.5° and 3.1°). In slow motions, relative accuracy varied from 2° to 7° depending on the type of AHRS and the type of rotation. Absolute and relative accuracy were significantly affected (p<0.05) by velocity during sustained motions. The extent of that effect varied across AHRS. Interpretation Absolute and relative accuracy of AHRS are affected by environmental magnetic perturbations and conditions of motions. Relative accuracy of AHRS is mostly affected by the ability of all modules to locate the same global reference coordinate system at all times. Conclusions Existing AHRS systems can be considered for use in clinical biomechanics under constrained conditions of use.
While their individual capacity to track absolute motion is relatively consistent, the use of multiple AHRS modules to compute relative motion between rigid bodies needs to be optimized according to
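The "relative orientation between modules" quantity assessed above can be computed from the two modules' orientation quaternions; its variation over a trial is one way to express relative accuracy. A self-contained sketch with hypothetical readings:

```python
import math

def quat_conj(q):
    """Conjugate (inverse for unit quaternions), q = (w, x, y, z)."""
    w, x, y, z = q
    return (w, -x, -y, -z)

def quat_mul(a, b):
    """Hamilton product of two quaternions."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def relative_angle_deg(q1, q2):
    """Angle (degrees) of the relative rotation q1^-1 * q2 between two unit
    quaternions. For rigidly co-mounted AHRS modules this should stay
    constant; its spread over a trial reflects relative accuracy."""
    w = quat_mul(quat_conj(q1), q2)[0]
    w = max(-1.0, min(1.0, w))
    return math.degrees(2.0 * math.acos(abs(w)))

# Hypothetical readings: module B reports a 2-degree rotation about x
# relative to module A (half-angle 1 degree in the quaternion).
qa = (1.0, 0.0, 0.0, 0.0)
qb = (math.cos(math.radians(1.0)), math.sin(math.radians(1.0)), 0.0, 0.0)
print(round(relative_angle_deg(qa, qb), 2))
```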

  2. Impact of the glucocorticoid receptor BclI polymorphism on reward expectancy and prediction error related ventral striatal reactivity in depressed and healthy individuals.

    PubMed

    Ham, Byung-Joo; Greenberg, Tsafrir; Chase, Henry W; Phillips, Mary L

    2016-01-01

    There is evidence that reward-related neural reactivity is altered in depressive disorders. Glucocorticoids influence dopaminergic transmission, which is widely implicated in reward processing. However, no studies have examined the effect of glucocorticoid receptor gene polymorphisms on reward-related neural reactivity in depressed or healthy individuals. Fifty-nine depressed individuals with major depressive disorder (n=33) or bipolar disorder (n=26), and 32 healthy individuals were genotyped for the glucocorticoid receptor BclI G/C polymorphism, and underwent functional magnetic resonance imaging during a monetary reward task. We examined the effect of the glucocorticoid receptor BclI G/C polymorphism on reward expectancy (RE; expected outcome value) and prediction error (PE; discrepancy between expected and actual outcome) related ventral striatal reactivity. There was a significant interaction between reward condition and BclI genotype (p=0.007). C-allele carriers showed higher PE than RE-related right ventral striatal reactivity (p<0.001), whereas no such difference was observed in G/G homozygotes. Accordingly, C-allele carriers showed a greater difference between PE and RE-related right ventral striatal reactivity than G/G homozygotes (p<0.005), and also showed lower RE-related right ventral striatal reactivity than G/G homozygotes (p=0.011). These findings suggest a slowed transfer from PE to RE-related ventral striatal responses during reinforcement learning in C-allele carriers, regardless of diagnosis, possibly due to altered dopamine release associated with increased sensitivity to glucocorticoids.

  3. Development and evaluation of a Kalman-filter algorithm for terminal area navigation using sensors of moderate accuracy

    NASA Technical Reports Server (NTRS)

    Kanning, G.; Cicolani, L. S.; Schmidt, S. F.

    1983-01-01

    Translational state estimation in terminal area operations, using a set of commonly available position, air data, and acceleration sensors, is described. Kalman filtering is applied to obtain maximum estimation accuracy from the sensors but feasibility in real-time computations requires a variety of approximations and devices aimed at minimizing the required computation time with only negligible loss of accuracy. Accuracy behavior throughout the terminal area, its relation to sensor accuracy, its effect on trajectory tracking errors and control activity in an automatic flight control system, and its adequacy in terms of existing criteria for various terminal area operations are examined. The principal investigative tool is a simulation of the system.
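The filtering idea at the core of the paper can be illustrated with a minimal scalar Kalman filter. This is a toy random-walk example, not the terminal-area navigation filter itself, which fuses position, air data and acceleration sensors:

```python
# Minimal scalar Kalman filter sketch: one predict/update cycle for a
# random-walk state observed through a noisy measurement.
def kalman_step(x, p, z, q, r):
    """x, p: prior state estimate and variance; z: measurement;
    q, r: process and measurement noise variances."""
    p = p + q                 # predict: variance grows by process noise
    k = p / (p + r)           # Kalman gain balances prior vs measurement
    x = x + k * (z - x)       # update with the measurement innovation
    p = (1.0 - k) * p         # posterior variance shrinks after update
    return x, p

# Hypothetical noisy position measurements around a true value of 1.0:
x, p = 0.0, 1.0
for z in [1.1, 0.9, 1.05, 0.98]:
    x, p = kalman_step(x, p, z, q=0.01, r=0.1)
print(round(x, 2), round(p, 3))
```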

  4. A Dual Frequency Carrier Phase Error Difference Checking Algorithm for the GNSS Compass

    PubMed Central

    Liu, Shuo; Zhang, Lei; Li, Jian

    2016-01-01

    The performance of the Global Navigation Satellite System (GNSS) compass is related to the quality of carrier phase measurement. How to process the carrier phase error properly is important to improve the GNSS compass accuracy. In this work, we propose a dual frequency carrier phase error difference checking algorithm for the GNSS compass. The algorithm aims at eliminating large carrier phase error in dual frequency double differenced carrier phase measurement according to the error difference between two frequencies. The advantage of the proposed algorithm is that it does not need additional environment information and has a good performance on multiple large errors compared with previous research. The core of the proposed algorithm is removing the geographical distance from the dual frequency carrier phase measurement, then the carrier phase error is separated and detectable. We generate the Double Differenced Geometry-Free (DDGF) measurement according to the characteristic that the different frequency carrier phase measurements contain the same geometrical distance. Then, we propose the DDGF detection to detect the large carrier phase error difference between two frequencies. The theoretical performance of the proposed DDGF detection is analyzed. An open sky test, a manmade multipath test and an urban vehicle test were carried out to evaluate the performance of the proposed algorithm. The result shows that the proposed DDGF detection is able to detect large error in dual frequency carrier phase measurement by checking the error difference between two frequencies. After the DDGF detection, the accuracy of the baseline vector is improved in the GNSS compass. PMID:27886153

  5. A Dual Frequency Carrier Phase Error Difference Checking Algorithm for the GNSS Compass.

    PubMed

    Liu, Shuo; Zhang, Lei; Li, Jian

    2016-11-24

    The performance of the Global Navigation Satellite System (GNSS) compass is related to the quality of carrier phase measurement. How to process the carrier phase error properly is important to improve the GNSS compass accuracy. In this work, we propose a dual frequency carrier phase error difference checking algorithm for the GNSS compass. The algorithm aims at eliminating large carrier phase error in dual frequency double differenced carrier phase measurement according to the error difference between two frequencies. The advantage of the proposed algorithm is that it does not need additional environment information and has a good performance on multiple large errors compared with previous research. The core of the proposed algorithm is removing the geographical distance from the dual frequency carrier phase measurement, then the carrier phase error is separated and detectable. We generate the Double Differenced Geometry-Free (DDGF) measurement according to the characteristic that the different frequency carrier phase measurements contain the same geometrical distance. Then, we propose the DDGF detection to detect the large carrier phase error difference between two frequencies. The theoretical performance of the proposed DDGF detection is analyzed. An open sky test, a manmade multipath test and an urban vehicle test were carried out to evaluate the performance of the proposed algorithm. The result shows that the proposed DDGF detection is able to detect large error in dual frequency carrier phase measurement by checking the error difference between two frequencies. After the DDGF detection, the accuracy of the baseline vector is improved in the GNSS compass.
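The core step described above, removing the common geometrical distance so the carrier phase error becomes detectable, can be sketched as follows. The detection threshold and all numeric values are illustrative assumptions, not the paper's parameters:

```python
# Sketch of the Double Differenced Geometry-Free (DDGF) idea: subtracting
# the dual-frequency double-differenced carrier phases (both in meters)
# cancels the common geometric distance, leaving ambiguities, a small
# ionospheric residual, and the carrier phase errors.
def ddgf(dd_phase_l1_m, dd_phase_l2_m):
    """Geometry-free combination of double-differenced phases (meters)."""
    return dd_phase_l1_m - dd_phase_l2_m

def detect_large_error(dd_l1, dd_l2, threshold_m=0.05):
    """Flag an epoch whose L1/L2 error difference exceeds a threshold.
    The 5 cm threshold is an illustrative choice, not the paper's value."""
    return abs(ddgf(dd_l1, dd_l2)) > threshold_m

geom = 12.345  # common geometric term (meters); cancels in the DDGF
print(detect_large_error(geom + 0.002, geom - 0.001))  # small error
print(detect_large_error(geom + 0.080, geom - 0.001))  # large error
```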

  6. Characterizing geometric accuracy and precision in image guided gated radiotherapy

    NASA Astrophysics Data System (ADS)

    Tenn, Stephen Edward

    gating level. Finally, the entire IGGRT process is able, on average, to place fields within 1.1 mm of the target. Anomalously large offsets (up to 9 mm) were observed and attributed to reconstruction errors in 4DCT. The magnitude of error in IGGRT delivery accuracy was closely related to the amount of geometric distortion in the CT images used for image guidance.

  7. Bayesian Error Estimation Functionals

    NASA Astrophysics Data System (ADS)

    Jacobsen, Karsten W.

The challenge of approximating the exchange-correlation functional in Density Functional Theory (DFT) has led to the development of numerous different approximations of varying accuracy on different calculated properties. There is therefore a need for reliable estimation of prediction errors within the different approximation schemes to DFT. The Bayesian Error Estimation Functionals (BEEF) have been developed with this in mind. The functionals are constructed by fitting to experimental and high-quality computational databases for molecules and solids including chemisorption and van der Waals systems. This leads to reasonably accurate general-purpose functionals with particular focus on surface science. The fitting procedure involves considerations on how to combine different types of data, and applies Tikhonov regularization and bootstrap cross validation. The methodology has been applied to construct GGA and metaGGA functionals with and without inclusion of long-ranged van der Waals contributions. The error estimation is made possible by the generation of not only a single functional but through the construction of a probability distribution of functionals represented by a functional ensemble. The use of the functional ensemble is illustrated on compound heat of formation and by investigations of the reliability of calculated catalytic ammonia synthesis rates.
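The ensemble idea can be sketched in a few lines: the spread of predictions over the functional ensemble serves as the error bar on the best-fit prediction. The numbers below are hypothetical, purely to show the mechanics:

```python
import statistics

def ensemble_estimate(predictions):
    """Return (mean, standard deviation) over an ensemble of predictions;
    the spread is the BEEF-style error estimate on the mean value."""
    return statistics.mean(predictions), statistics.pstdev(predictions)

# Hypothetical heat-of-formation predictions (eV) from five ensemble
# functionals for one compound:
heats_of_formation = [-0.42, -0.38, -0.45, -0.40, -0.41]
mu, sigma = ensemble_estimate(heats_of_formation)
print(f"{mu:.3f} +/- {sigma:.3f} eV")
```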

  8. Screening for osteoporosis using easily obtainable biometrical data: diagnostic accuracy of measured, self-reported and recalled BMI, and related costs of bone mineral density measurements.

    PubMed

    van der Voort, D J; Brandon, S; Dinant, G J; van Wersch, J W

    2000-01-01

    The aims of the present study were: to determine the diagnostic accuracy of objectively measured, self-reported and recalled body mass index (BMI) for osteoporosis and osteopenia; to determine the diagnostic costs, in terms of bone mineral density (BMD) measurements, per osteoporotic or osteopenic patient detected, using different BMI tests; and to determine the extent to which the results can be used within the framework of the current screening program for breast cancer in The Netherlands. Within the framework of a cross-sectional study on the prevalence of osteoporosis in the south of The Netherlands, 1155 postmenopausal women aged 50-80 years were asked for their present height and their weight at age 20-30 years. Subsequently their actual weight, height and BMD of the lumbar spine (DXA) were measured. The BMD cutoff was 0.800 g/cm2 for osteoporosis and 0.970 g/cm2 for low BMD (osteoporosis + osteopenia). After receiver operating characteristic analysis, age was cut off at 60 years and BMI at 27 kg/m2. Diagnostic accuracies of objectively measured, self-reported and recalled BMI were evaluated using predictive values (PV) and odds ratios. The resulting 'true positive' and 'false positive' rates were used to calculate diagnostic costs (i.e., DXA) for each osteoporotic patient or low-BMD patient detected. The prevalence of osteoporosis in the study population was 25%, that of low BMD 65%. Only the age-BMI tests 'age > or = 60, BMI < or = 27' showed PVs for osteoporosis (31-41%) and for low BMD (71-81%) that were higher than the prior probabilities for these conditions. Related odds ratios were 2.14-3.18 (osteoporosis) and 1.87-3.04 (low BMD). The objective BMI test detected 50% of the osteoporotic patients. Using the self-reported BMI test and the recalled BMI test, detection rates increased to 55% and 69%, respectively. Concomitant costs per osteoporotic patient detected rose by 24%. 
Detection of patients with a low BMD increased from 38% for objective BMI and
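The predictive-value arithmetic underlying the figures above follows Bayes' rule. A sketch in which the 25% osteoporosis prevalence is from the study but the sensitivity and specificity values are hypothetical:

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """PPV = P(disease | positive test) by Bayes' rule."""
    tp = sensitivity * prevalence                  # true-positive mass
    fp = (1.0 - specificity) * (1.0 - prevalence)  # false-positive mass
    return tp / (tp + fp)

# With 25% prevalence, a test with hypothetical sensitivity 0.55 and
# specificity 0.70 lands in the ~31-41% PPV band reported for the
# age-BMI tests above.
print(round(positive_predictive_value(0.55, 0.70, 0.25), 2))
```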

  9. Analysis of Solar Two Heliostat Tracking Error Sources

    SciTech Connect

    Jones, S.A.; Stone, K.W.

    1999-01-28

    This paper explores the geometrical errors that reduce heliostat tracking accuracy at Solar Two. The basic heliostat control architecture is described. Then, the three dominant error sources are described and their effect on heliostat tracking is visually illustrated. The strategy currently used to minimize, but not truly correct, these error sources is also shown. Finally, a novel approach to minimizing error is presented.

  10. Test Execution Variation in Peritoneal Lavage Cytology Could Be Related to Poor Diagnostic Accuracy and Stage Migration in Patients with Gastric Cancer

    PubMed Central

    Ki, Young-Jun; Ji, Sun-Hee; Min, Jae Seok; Park, Sunhoo; Yu, Hang-Jong; Bang, Ho-Yoon; Lee, Jong-Inn

    2013-01-01

    Purpose Peritoneal lavage cytology is part of the routine staging workup for patients with advanced gastric cancer. However, no quality assurance study has been conducted to show variations or biases in peritoneal lavage cytology results. The aim of this study was to demonstrate a test execution variation in peritoneal lavage cytology between investigating surgeons. Materials and Methods A prospective cohort study was designed for determination of the positive rate of peritoneal lavage cytology using a liquid-based preparation method in patients with potentially curable advanced gastric cancer (cT2~4/N0~2/M0). One hundred thirty patients were enrolled and underwent laparotomy, peritoneal lavage cytology, and standard gastrectomy, which were performed by 3 investigating surgeons. Data were analyzed using the chi-square test and a logistic regression model. Results The overall positive peritoneal cytology rate was 10.0%. Subgroup positive rates were 5.3% in pT1 cancer, 2.0% in pT2/3 cancer, 11.1% in pT4a cancer, and 71.4% in pT4b cancer. In univariate analysis, positive peritoneal cytology showed significant correlation with pT stage, lymphatic invasion, vascular invasion, ascites, and the investigating surgeon. We found the positive rate to be 2.1% for surgeon A, 10.2% for surgeon B, and 20.6% for surgeon C (P=0.024). Multivariate analysis identified pT stage, ascites, and the investigating surgeon to be significant risk factors for positive peritoneal cytology. Conclusions The peritoneal lavage cytology results were significantly affected by the investigating surgeon, providing strong evidence of test execution variation that could be related to poor diagnostic accuracy and stage migration in patients with advanced gastric cancer. PMID:24511417
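The chi-square comparison of the three surgeons' positive rates can be reconstructed as follows. The per-surgeon case counts are an assumption chosen only to reproduce the reported rates (1/47 = 2.1%, 5/49 = 10.2%, 7/34 = 20.6% over the 130 patients); the study does not state them:

```python
import math

def chi_square_2xk(pos, totals):
    """Pearson chi-square statistic for a 2 x k table of positive vs
    negative cytology counts across k surgeons."""
    grand_pos, grand_n = sum(pos), sum(totals)
    stat = 0.0
    for p, n in zip(pos, totals):
        e_pos = grand_pos * n / grand_n          # expected positives
        e_neg = (grand_n - grand_pos) * n / grand_n  # expected negatives
        stat += (p - e_pos) ** 2 / e_pos + ((n - p) - e_neg) ** 2 / e_neg
    return stat

# Hypothetical counts matching the reported per-surgeon rates:
stat = chi_square_2xk([1, 5, 7], [47, 49, 34])
p_value = math.exp(-stat / 2)  # exact survival function for df = 2
print(round(stat, 2), round(p_value, 3))
```

With these reconstructed counts the p-value comes out near the reported P=0.024, but that agreement depends on the assumed totals.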

  11. Correction method for the error of diamond tool's radius in ultra-precision cutting

    NASA Astrophysics Data System (ADS)

    Wang, Yi; Yu, Jing-chi

    2010-10-01

Compensation for the error of the diamond tool's cutting edge is a bottleneck technology hindering the direct formation of high-accuracy aspheric surfaces by single-point diamond turning. Traditionally, compensation was performed according to measurement results from a profilometer, which required long measurement times and resulted in low processing efficiency. A new compensation method is put forward in this article, in which the error of the diamond tool's cutting edge is corrected according to measurement results from a digital interferometer. First, the detailed theoretical calculation underlying the compensation method is deduced. Then, the effect of compensation is simulated by computer. Finally, a φ50 mm workpiece was diamond turned and then correction-turned on a Nanotech 250. The tested surface achieved high shape accuracy (PV 0.137λ, rms 0.011λ), confirming that the new compensation method agrees with the predictive analysis and offers high accuracy and fast error convergence.

  12. Collective animal decisions: preference conflict and decision accuracy.

    PubMed

    Conradt, Larissa

    2013-12-06

Social animals frequently share decisions that involve uncertainty and conflict. It has been suggested that conflict can enhance decision accuracy. In order to judge the practical relevance of such a suggestion, it is necessary to explore how general such findings are. Using a model, I examine whether conflicts between animals in a group with respect to preferences for avoiding false positives versus avoiding false negatives could, in principle, enhance the accuracy of collective decisions. I found that decision accuracy nearly always peaked when there was maximum conflict in groups in which individuals had different preferences. However, groups with no preferences were usually even more accurate. Furthermore, a relatively slight skew towards more animals with a preference for avoiding false negatives decreased the rate of expected false negatives versus false positives considerably (and vice versa), while resulting in only a small loss of decision accuracy. I conclude that in ecological situations in which decision accuracy is crucial for fitness and survival, animals cannot 'afford' preferences with respect to avoiding false positives versus false negatives. When decision accuracy is less crucial, animals might have such preferences. A slight skew in the number of animals with different preferences will result in the group avoiding that type of error more than the majority of group members prefers to avoid. The model also indicated that knowing the average success rate ('base rate') of a decision option can be very misleading, and that animals should ignore such base rates unless further information is available.

  13. Design and accuracy analysis of a metamorphic CNC flame cutting machine for ship manufacturing

    NASA Astrophysics Data System (ADS)

    Hu, Shenghai; Zhang, Manhui; Zhang, Baoping; Chen, Xi; Yu, Wei

    2016-09-01

Current research on processing large fabrication holes in complex spatial curved surfaces mainly focuses on the design of CNC flame cutting machines for ship hulls in ship manufacturing. However, the existing machines cannot meet the continuous cutting requirements under variable pass conditions because of their fixed configuration, and cannot realize high-precision processing because the underlying accuracy theory has not been studied adequately. This paper deals with the structure design and accuracy prediction technology of novel machine tools for solving the problem of continuous and high-precision cutting. The required variable-trajectory and variable-pose kinematic characteristics of the non-contact cutting tool are determined, and a metamorphic CNC flame cutting machine designed through the metamorphic principle is presented. To analyze the kinematic accuracy of the machine, models of joint clearances, manufacturing tolerances and errors in the input variables, together with error models considering their combined effects, are derived based on screw theory after establishing ideal kinematic models. Numerical simulations, a processing experiment and a trajectory tracking experiment are conducted on an eccentric hole with bevels on a cylindrical surface. The results of the cutting pass contour and kinematic error intervals, in which the position error ranges from -0.975 mm to +0.628 mm and the orientation error from -0.01 rad to +0.01 rad, indicate that the developed machine can complete the cutting process continuously and effectively, and that the established kinematic error models are effective even though the intervals are relatively large. The results also show the match between the metamorphic principle and variable working tasks, and the mapping between the original design parameters and the kinematic errors of the machine. This research develops a metamorphic CNC flame cutting machine and establishes kinematic error models for accuracy analysis of machine tools.

  14. Racial/Ethnic Difference in HIV-related Knowledge among Young Men who have Sex with Men and their Association with Condom Errors

    PubMed Central

    Garofalo, Robert; Gayles, Travis; Bottone, Paul Devine; Ryan, Dan; Kuhns, Lisa M; Mustanski, Brian

    2014-01-01

Objective HIV disproportionately affects young men who have sex with men, and knowledge about HIV transmission is one factor that may play a role in the high rate of infections in this population. This study examined racial/ethnic differences in HIV knowledge among young men who have sex with men in the USA and its correlation with condom usage errors. Design Participants included an ethnically diverse sample of 344 young men who have sex with men screened from an ongoing longitudinal cohort study. Eligible participants were between the ages of 16 and 20 years, born male, and had previously had at least one sexual encounter with a man and/or identified as gay or bisexual. This analysis is based on cross-sectional data collected at the baseline interview using computer-assisted self-interviewing (CASI) software. Setting Chicago, IL, USA Method We utilised descriptive and inferential statistics, including ANOVA and Tukey's post hoc analysis, to assess differences in HIV knowledge by level of education and race/ethnicity, and negative binomial regression to determine whether HIV knowledge was associated with condom errors while controlling for age, education and race/ethnicity. Results The study found that Black men who have sex with men scored significantly lower (average score=67%; p<.05) than their White counterparts (average score=83%) on a measure of HIV knowledge (mean difference=16.1%, p<.001). Participants with less than a high school diploma and those with a high school diploma/GED only had lower knowledge scores, on average (66.4% and 69.9%, respectively), than participants who had obtained post-high school education (78.1%; mean difference=11.7% and 8.2%, respectively, ps<.05). In addition, controlling for age, race and level of education, higher HIV knowledge scores were associated with fewer condom errors (Exp B=.995, CI 0.992-0.999, p<0.05). Conclusion These findings stress the need for increased attention to HIV transmission-related educational activities targeting

  15. Towards error-free interaction.

    PubMed

    Tsoneva, Tsvetomira; Bieger, Jordi; Garcia-Molina, Gary

    2010-01-01

Human-machine interaction (HMI) relies on pattern recognition algorithms that are not perfect. To improve the performance and usability of these systems, we can utilize the neural mechanisms in the human brain dealing with error awareness. This study aims at designing a practical error detection algorithm using electroencephalogram signals that can be integrated into an HMI system. Thus, real-time operation, customization, and operation convenience are important. We address these requirements in an experimental framework simulating machine errors. Our results confirm the presence of brain potentials related to the processing of machine errors. These are used to implement an error detection algorithm emphasizing the differences in error processing on a per-subject basis. The proposed algorithm uses the individual best bipolar combination of electrode sites and requires short calibration. The single-trial error detection performance on six subjects, characterized by the area under the ROC curve, ranges from 0.75 to 0.98.
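The reported figure of merit, the area under the ROC curve, can be computed from single-trial detector scores via the Mann-Whitney formulation; the sketch below is generic, not the authors' implementation:

```python
def auc(error_scores, correct_scores):
    """Area under the ROC curve as the Mann-Whitney probability that a
    randomly chosen error trial gets a higher detector score than a
    randomly chosen correct trial (ties count half)."""
    wins = ties = 0
    for e in error_scores:
        for c in correct_scores:
            if e > c:
                wins += 1
            elif e == c:
                ties += 1
    return (wins + 0.5 * ties) / (len(error_scores) * len(correct_scores))
```

An AUC of 0.5 corresponds to chance-level detection, 1.0 to perfect separation of error and correct trials.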

  16. Navigation Accuracy Guidelines for Orbital Formation Flying Missions

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Alfriend, Kyle T.

    2003-01-01

Some simple guidelines based on the accuracy in determining a satellite formation's semi-major axis differences are useful in making preliminary assessments of the navigation accuracy needed to support such missions. These guidelines are valid for any elliptical orbit, regardless of eccentricity. Although maneuvers required for formation establishment, reconfiguration, and station-keeping require accurate prediction of the state estimate to the maneuver time, and hence are directly affected by errors in all the orbital elements, experience has shown that determination of orbit plane orientation and orbit shape to acceptable levels is less challenging than the determination of orbital period or semi-major axis. Furthermore, any differences among the members' semi-major axes are undesirable for a satellite formation, since they will lead to differential along-track drift due to period differences. Since inevitable navigation errors prevent these differences from ever being zero, one may use the guidelines this paper presents to determine how much drift will result from a given relative navigation accuracy, or conversely what navigation accuracy is required to limit drift to a given rate. Since the guidelines do not account for non-two-body perturbations, they may be viewed as useful preliminary design tools, rather than as the basis for mission navigation requirements, which should be based on detailed analysis of the mission configuration, including all relevant sources of uncertainty.
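The core of such a guideline is a standard two-body result: a semi-major axis difference Δa gives a mean-motion difference Δn = -(3/2)(n/a)Δa, so the along-track separation grows by about 3πΔa per orbit. A sketch for an illustrative low Earth orbit (the specific altitude is an assumption, not taken from the paper):

```python
import math

MU = 398600.4418  # Earth's gravitational parameter, km^3/s^2

def along_track_drift(a_km, delta_a_km):
    """Secular along-track drift caused by a semi-major axis difference.
    Differentiating Kepler's third law gives dn/da = -3n/(2a), so the
    along-track separation grows by -3*pi*delta_a per orbit."""
    n = math.sqrt(MU / a_km**3)              # mean motion, rad/s
    period = 2.0 * math.pi / n               # orbital period, s
    per_orbit = -3.0 * math.pi * delta_a_km  # km of drift per orbit
    per_day = per_orbit * 86400.0 / period   # km of drift per day
    return per_orbit, per_day
```

For a 7000 km semi-major axis and a 10 m error in Δa, the drift is roughly 94 m per orbit, or about 1.4 km per day, which shows how tight the semi-major axis knowledge must be to bound drift.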

  17. Gender Influences on Brain Responses to Errors and Post-Error Adjustments

    PubMed Central

    Fischer, Adrian G.; Danielmeier, Claudia; Villringer, Arno; Klein, Tilmann A.; Ullsperger, Markus

    2016-01-01

Sexual dimorphisms have been observed in many species, including humans, and extend to the prevalence and presentation of important mental disorders associated with performance monitoring malfunctions. However, precisely which underlying differences between genders contribute to the alterations observed in psychiatric diseases is unknown. Here, we compare behavioural and neural correlates of cognitive control functions in 438 female and 436 male participants performing a flanker task while EEG was recorded. We found that males showed stronger performance-monitoring-related EEG amplitude modulations, which were employed to predict subjects' genders with ~72% accuracy. Females showed more post-error slowing, but the two samples did not differ with regard to response-conflict processing and coupling between the error-related negativity (ERN) and consecutive behavioural slowing. Furthermore, we found that the ERN predicted consecutive behavioural slowing within subjects, whereas its overall amplitude did not correlate with post-error slowing across participants. These findings elucidate specific gender differences in essential neurocognitive functions with implications for clinical studies. They highlight that within- and between-subject associations for brain potentials cannot be interpreted in the same way. Specifically, despite higher general amplitudes in males, it appears that the dynamics of the coupling between the ERN and post-error slowing are comparable between men and women. PMID:27075509

  18. Error monitoring in musicians

    PubMed Central

    Maidhof, Clemens

    2013-01-01

To err is human, and hence even professional musicians make errors occasionally during their performances. This paper summarizes recent work investigating error monitoring in musicians, i.e., the processes and their neural correlates associated with the monitoring of ongoing actions and the detection of deviations from intended sounds. Electroencephalography (EEG) studies reported an early component of the event-related potential (ERP) occurring before the onsets of pitch errors. This component, which can be altered in musicians with focal dystonia, likely reflects processes of error detection and/or error compensation, i.e., attempts to cancel the undesired sensory consequence (a wrong tone) a musician is about to perceive. Thus, auditory feedback seems not to be a prerequisite for error detection, consistent with previous behavioral results. In contrast, when auditory feedback is externally manipulated and thus unexpected, motor performance can be severely distorted, although not all feedback alterations result in performance impairments. Recent studies investigating the neural correlates of feedback processing showed that unexpected feedback elicits an ERP component after note onsets, which shows larger amplitudes during music performance than during mere perception of the same musical sequences. Hence, these results stress the role of motor actions for the processing of auditory information. Furthermore, recent methodological advances like the combination of 3D motion capture techniques with EEG will be discussed. Such combinations of different measures can potentially help to disentangle the roles of different feedback types, such as proprioceptive and auditory feedback, and in general to arrive at a better understanding of the complex interactions between the motor and auditory domains during error monitoring. Finally, outstanding questions and future directions in this context will be discussed. PMID:23898255

  19. Study of geopotential error models used in orbit determination error analysis

    NASA Technical Reports Server (NTRS)

    Yee, C.; Kelbel, D.; Lee, T.; Samii, M. V.; Mistretta, G. D.; Hart, R. C.

    1991-01-01

    The uncertainty in the geopotential model is currently one of the major error sources in the orbit determination of low-altitude Earth-orbiting spacecraft. The results of an investigation of different geopotential error models and modeling approaches currently used for operational orbit error analysis support at the Goddard Space Flight Center (GSFC) are presented, with emphasis placed on sequential orbit error analysis using a Kalman filtering algorithm. Several geopotential models, known as the Goddard Earth Models (GEMs), were developed and used at GSFC for orbit determination. The errors in the geopotential models arise from the truncation errors that result from the omission of higher order terms (omission errors) and the errors in the spherical harmonic coefficients themselves (commission errors). At GSFC, two error modeling approaches were operationally used to analyze the effects of geopotential uncertainties on the accuracy of spacecraft orbit determination - the lumped error modeling and uncorrelated error modeling. The lumped error modeling approach computes the orbit determination errors on the basis of either the calibrated standard deviations of a geopotential model's coefficients or the weighted difference between two independently derived geopotential models. The uncorrelated error modeling approach treats the errors in the individual spherical harmonic components as uncorrelated error sources and computes the aggregate effect using a combination of individual coefficient effects. This study assesses the reasonableness of the two error modeling approaches in terms of global error distribution characteristics and orbit error analysis results. 
Specifically, this study presents the global distribution of geopotential acceleration errors for several gravity error models and assesses the orbit determination errors resulting from these error models for three types of spacecraft - the Gamma Ray Observatory, the Ocean Topography Experiment, and the Cosmic
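In miniature, the two approaches reduce to two aggregation rules; the helper names below are hypothetical and the real GEM analyses use full covariance information rather than scalar contributions:

```python
import math

def uncorrelated_aggregate(coefficient_effects):
    """Uncorrelated error modeling: treat each spherical-harmonic
    coefficient's orbit-error contribution as an independent source
    and combine the individual effects by root-sum-square."""
    return math.sqrt(sum(e * e for e in coefficient_effects))

def lumped_sigma(coeffs_a, coeffs_b, weight=1.0):
    """Lumped error modeling (one variant): use the weighted difference
    between two independently derived coefficient sets as a proxy for
    the 1-sigma geopotential error."""
    return weight * math.sqrt(sum((a - b) ** 2 for a, b in zip(coeffs_a, coeffs_b)))
```

The root-sum-square rule is exact only when the coefficient errors really are uncorrelated, which is precisely the assumption the study set out to assess.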

  20. How the brain prevents a second error in a perceptual decision-making task

    PubMed Central

    Perri, Rinaldo Livio; Berchicci, Marika; Lucci, Giuliana; Spinelli, Donatella; Di Russo, Francesco

    2016-01-01

In cognitive tasks, error commission is usually followed by performance characterized by post-error slowing (PES) and post-error improvement of accuracy (PIA). Three theoretical accounts have been proposed to explain these post-error adjustments: the cognitive, the inhibitory, and the orienting account. The aim of the present ERP study was to investigate the neural processes associated with preventing a second error. To this aim, we focused on the preparatory brain activities in a large sample of subjects performing a Go/No-go task. The main results were the enhancement of the prefrontal negativity (pN) component (especially over the right hemisphere) and the reduction of the Bereitschaftspotential (BP) (especially over the left hemisphere) in the post-error trials. The ERP data suggested increased top-down and inhibitory control, reflected in the reduced excitability of the premotor areas during preparation of the trials following error commission. The results are discussed in light of the three theoretical accounts of post-error adjustments. Additional control analyses supported the view that the adjustment-oriented components (the post-error pN and BP) are separate from the error-related potentials (Ne and Pe), even if all these activities represent a cascade of processes triggered by error commission. PMID:27534593
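PES itself is straightforward to compute from a trial sequence: the mean reaction time on trials following an error minus the mean on trials following a correct response. A minimal sketch with made-up data:

```python
def post_error_slowing(rts, correct):
    """Post-error slowing: mean RT after error trials minus mean RT after
    correct trials. `rts` holds per-trial reaction times; `correct` flags
    each trial's accuracy. Assumes the sequence contains at least one
    post-error and one post-correct trial."""
    post_err, post_corr = [], []
    for prev in range(len(rts) - 1):
        (post_corr if correct[prev] else post_err).append(rts[prev + 1])
    return sum(post_err) / len(post_err) - sum(post_corr) / len(post_corr)
```

A positive value indicates slowing after errors, the typical empirical finding.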

  1. Hemispheric Asymmetries in Striatal Reward Responses Relate to Approach-Avoidance Learning and Encoding of Positive-Negative Prediction Errors in Dopaminergic Midbrain Regions.

    PubMed

    Aberg, Kristoffer Carl; Doell, Kimberly C; Schwartz, Sophie

    2015-10-28

    Some individuals are better at learning about rewarding situations, whereas others are inclined to avoid punishments (i.e., enhanced approach or avoidance learning, respectively). In reinforcement learning, action values are increased when outcomes are better than predicted (positive prediction errors [PEs]) and decreased for worse than predicted outcomes (negative PEs). Because actions with high and low values are approached and avoided, respectively, individual differences in the neural encoding of PEs may influence the balance between approach-avoidance learning. Recent correlational approaches also indicate that biases in approach-avoidance learning involve hemispheric asymmetries in dopamine function. However, the computational and neural mechanisms underpinning such learning biases remain unknown. Here we assessed hemispheric reward asymmetry in striatal activity in 34 human participants who performed a task involving rewards and punishments. We show that the relative difference in reward response between hemispheres relates to individual biases in approach-avoidance learning. Moreover, using a computational modeling approach, we demonstrate that better encoding of positive (vs negative) PEs in dopaminergic midbrain regions is associated with better approach (vs avoidance) learning, specifically in participants with larger reward responses in the left (vs right) ventral striatum. Thus, individual dispositions or traits may be determined by neural processes acting to constrain learning about specific aspects of the world.
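The link between PE encoding and approach-avoidance bias is naturally expressed as a delta-rule learner with separate learning rates for positive and negative PEs; this is a generic reinforcement-learning sketch, not the authors' fitted model:

```python
def update(q, reward, alpha_pos, alpha_neg):
    """One delta-rule update. Positive prediction errors are scaled by
    alpha_pos, negative ones by alpha_neg, so alpha_pos > alpha_neg
    biases learned values upward (approach) and vice versa (avoidance)."""
    pe = reward - q
    return q + (alpha_pos if pe > 0 else alpha_neg) * pe

def learn(rewards, alpha_pos, alpha_neg, q0=0.0):
    """Run the update rule over a reward sequence; return the final value."""
    q = q0
    for r in rewards:
        q = update(q, r, alpha_pos, alpha_neg)
    return q
```

On an alternating 50% reward schedule, the approach-biased learner settles above the true mean of 0.5 and the avoidance-biased learner below it, illustrating how asymmetric PE encoding can tilt action values and hence choice.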

  2. IMPROVEMENT OF SMVGEAR II ON VECTOR AND SCALAR MACHINES THROUGH ABSOLUTE ERROR TOLERANCE CONTROL (R823186)

    EPA Science Inventory

    The computer speed of SMVGEAR II was improved markedly on scalar and vector machines with relatively little loss in accuracy. The improvement was due to a method of frequently recalculating the absolute error tolerance instead of keeping it constant for a given set of chemistry. ...

  3. The requirements for the future e-beam mask writer: statistical analysis of pattern accuracy

    NASA Astrophysics Data System (ADS)

    Lee, Sang Hee; Choi, Jin; Kim, Hee Bom; Kim, Byung Gook; Cho, Han-Ku

    2011-11-01

As semiconductor features shrink in size and pitch, extreme control of CD uniformity, MTT and image placement is needed for mask fabrication with e-beam lithography. Among the many sources of CD and image placement error, the error contributed by the e-beam mask writer has become more important than before. CD and positioning errors from the e-beam mask writer are mainly related to imperfect e-beam deflection accuracy in the optical system and to charging and contamination of the column. To avoid these errors, the e-beam mask writer should be designed with these effects taken into account. However, writing speed is given the highest priority in machine design, because the e-beam shot count is increasing rapidly due to design shrink and aggressive OPC. The increased shot count can cause a pattern shift problem through a statistical effect arising from the e-beam deflection error and the total shot count in the layout, and it degrades CD and image placement quality as well. In this report, a statistical approach to the CD and image placement error caused by e-beam shot position error is presented. It is estimated for various writing conditions, including the intrinsic e-beam positioning error of a VSB writer. From the simulation study, the e-beam shot position accuracy required to avoid the pattern shift problem at the 22nm node and beyond is estimated, taking the total shot count into account. The required local CD uniformity is calculated for various e-beam writing conditions, and the image placement error is also simulated for various conditions, including e-beam writing field position error. Consequently, the requirements for the future e-beam mask writer and the writing conditions are discussed. In terms of e-beam shot noise, the LER caused by exposure dose and shot position error is also studied for future e-beam mask writing at the 22nm node and beyond.
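The statistical mechanism behind pattern shift can be sketched simply: if a feature is written with n shots whose placements carry independent Gaussian errors of standard deviation sigma, the feature centroid shifts with standard deviation sigma/sqrt(n). The numbers below are illustrative, not 22nm-node requirements:

```python
import math
import random

def pattern_shift_sigma(sigma_nm, n_shots):
    """Analytic standard deviation of the centroid of a feature written
    with n_shots shots, each with independent Gaussian position error
    sigma_nm: it scales as sigma / sqrt(n)."""
    return sigma_nm / math.sqrt(n_shots)

def simulate_shift(sigma_nm, n_shots, n_patterns=4000, seed=0):
    """Monte-Carlo check: draw many patterns, average the per-shot errors,
    and return the empirical standard deviation of the centroid shift."""
    rng = random.Random(seed)
    shifts = [sum(rng.gauss(0.0, sigma_nm) for _ in range(n_shots)) / n_shots
              for _ in range(n_patterns)]
    mean = sum(shifts) / n_patterns
    return (sum((s - mean) ** 2 for s in shifts) / n_patterns) ** 0.5
```

The 1/sqrt(n) averaging explains why the total shot count enters the pattern-shift analysis: features built from few shots inherit more of the raw deflection error than features built from many.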

  4. TU-F-17A-08: The Relative Accuracy of 4D Dose Accumulation for Lung Radiotherapy Using Rigid Dose Projection Versus Dose Recalculation On Every Breathing Phase

    SciTech Connect

    Lamb, J; Lee, C; Tee, S; Lee, P; Iwamoto, K; Low, D; Valdes, G; Robinson, C

    2014-06-15

Purpose: To investigate the accuracy of 4D dose accumulation using projection of dose calculated on the end-exhalation, mid-ventilation, or average intensity breathing phase CT scan, versus dose accumulation performed using full Monte Carlo dose recalculation on every breathing phase. Methods: Radiotherapy plans were analyzed for 10 patients with stage I-II lung cancer planned using 4D-CT. SBRT plans were optimized using the dose calculated by a commercially-available Monte Carlo algorithm on the end-exhalation 4D-CT phase. 4D dose accumulations using deformable registration were performed with a commercially available tool that projected the planned dose onto every breathing phase without recalculation, as well as with a Monte Carlo recalculation of the dose on all breathing phases. The 3D planned dose (3D-EX), the 3D dose calculated on the average intensity image (3D-AVE), and the 4D accumulations of the dose calculated on the end-exhalation phase CT (4D-PR-EX), the mid-ventilation phase CT (4D-PR-MID), and the average intensity image (4D-PR-AVE), respectively, were compared against the accumulation of the Monte Carlo dose recalculated on every phase. Plan evaluation metrics relating to target volumes and critical structures relevant for lung SBRT were analyzed. Results: Plan evaluation metrics tabulated using 4D-PR-EX, 4D-PR-MID, and 4D-PR-AVE differed from those tabulated using Monte Carlo recalculation on every phase by an average of 0.14±0.70 Gy, -0.11±0.51 Gy, and 0.00±0.62 Gy, respectively. Deviations of between 8 and 13 Gy were observed between the 4D-MC calculations and both 3D methods for the proximal bronchial trees of 3 patients. Conclusions: 4D dose accumulation using projection without re-calculation may be sufficiently accurate compared to 4D dose accumulated from Monte Carlo recalculation on every phase, depending on institutional protocols. Use of 4D dose accumulation should be considered when evaluating normal tissue complication

  5. Complexity, Accuracy, and Fluency as Properties of Language Performance: The Development of the Multiple Subsystems over Time and in Relation to Each Other

    ERIC Educational Resources Information Center

    Vercellotti, Mary Lou

    2012-01-01

    Applied linguists have identified three components of second language (L2) performance: complexity, accuracy, and fluency (CAF) to measure L2 development. Many studies researching CAF found trade-off effects (in which a higher performance in one component corresponds to lower performance in another) during tasks, often in online oral language…

  6. Self-identification and empathy modulate error-related brain activity during the observation of penalty shots between friend and foe

    PubMed Central

    Ganesh, Shanti; van Schie, Hein T.; De Bruijn, Ellen R. A.; Bekkering, Harold

    2009-01-01

The ability to detect and process errors made by others plays an important role in many social contexts. The capacity to process errors is typically found to rely on sites in the medial frontal cortex. However, it remains to be determined whether responses at these sites are driven primarily by action errors themselves or by the affective consequences normally associated with their commission. Using an experimental paradigm that disentangles action errors and the valence of their affective consequences, we demonstrate that sites in the medial frontal cortex (MFC), including the ventral anterior cingulate cortex (vACC) and pre-supplementary motor area (pre-SMA), respond to action errors independent of the valence of their consequences. The strength of this response was negatively correlated with the empathic concern subscale of the Interpersonal Reactivity Index. We also demonstrate a main effect of self-identification by showing that errors committed by friends and foes elicited significantly different BOLD responses in a separate region of the middle anterior cingulate cortex (mACC). These results suggest that the way we look at others plays a critical role in determining patterns of brain activation during error observation. These findings may have important implications for general theories of error processing. PMID:19015079

  7. Anxiety and Error Monitoring: Increased Error Sensitivity or Altered Expectations?

    ERIC Educational Resources Information Center

    Compton, Rebecca J.; Carp, Joshua; Chaddock, Laura; Fineman, Stephanie L.; Quandt, Lorna C.; Ratliff, Jeffrey B.

    2007-01-01

    This study tested the prediction that the error-related negativity (ERN), a physiological measure of error monitoring, would be enhanced in anxious individuals, particularly in conditions with threatening cues. Participants made gender judgments about faces whose expressions were either happy, angry, or neutral. Replicating prior studies, midline…

  8. Digital phase-locked-loop speed sensor for accuracy improvement in analog speed controls. [feedback control and integrated circuits

    NASA Technical Reports Server (NTRS)

    Birchenough, A. G.

    1975-01-01

    A digital speed control that can be combined with a proportional analog controller is described. The stability and transient response of the analog controller were retained and combined with the long-term accuracy of a crystal-controlled integral controller. A relatively simple circuit was developed by using phase-locked-loop techniques and total error storage. The integral digital controller will maintain speed control accuracy equal to that of the crystal reference oscillator.

  9. The optimization of accuracy ratio of the two-group diffusion constants in simulation model of RBMK-1000 core

    NASA Astrophysics Data System (ADS)

    Bolsunov, A. A.; Karpov, S. A.

    2013-12-01

    The relative ratio of individual accuracies of the two-group diffusion constants in a dynamic simulation model of a reactor core is optimized. This is done to minimize calculation errors of neutron flux, power, or reactivity distributions in the model. The problem is solved under the assumption that the overall accuracy of the representation of constants is limited by the resources allocated for the approximation of the constants.

  10. Estimation and Accuracy after Model Selection

    PubMed Central

    Efron, Bradley

    2013-01-01

    Classical statistical theory ignores model selection in assessing estimation accuracy. Here we consider bootstrap methods for computing standard errors and confidence intervals that take model selection into account. The methodology involves bagging, also known as bootstrap smoothing, to tame the erratic discontinuities of selection-based estimators. A useful new formula for the accuracy of bagging then provides standard errors for the smoothed estimators. Two examples, nonparametric and parametric, are carried through in detail: a regression model where the choice of degree (linear, quadratic, cubic, …) is determined by the Cp criterion, and a Lasso-based estimation problem. PMID:25346558
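The smoothed (bagged) estimator and Efron's standard-error formula need only the bootstrap resampling counts N_ij and the replicated estimates t_i, so the accuracy estimate comes essentially free with the bagging. A minimal sketch of the formula (not the paper's implementation; `estimator` may be any, possibly selection-based, function of a sample):

```python
import random

def bagged_estimate_and_se(data, estimator, n_boot=2000, seed=0):
    """Bootstrap smoothing of `estimator` plus Efron's standard error:
    sd = sqrt(sum_j cov_j^2), where cov_j is the covariance, across
    bootstrap samples i, between the count N_ij of data point j in
    sample i and that sample's estimate t_i."""
    rng = random.Random(seed)
    n = len(data)
    counts, stats = [], []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        c = [0] * n
        for j in idx:
            c[j] += 1
        counts.append(c)
        stats.append(estimator([data[j] for j in idx]))
    t_bar = sum(stats) / n_boot          # the smoothed (bagged) estimate
    se2 = 0.0
    for j in range(n):
        nj_bar = sum(c[j] for c in counts) / n_boot
        cov_j = sum((counts[i][j] - nj_bar) * (stats[i] - t_bar)
                    for i in range(n_boot)) / n_boot
        se2 += cov_j ** 2
    return t_bar, se2 ** 0.5
```

As a sanity check, for the sample mean the formula approximately recovers the usual plug-in standard error; its value lies in remaining valid for erratic, selection-based estimators where the naive bootstrap standard deviation overstates the smoothed estimator's variability.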

  11. Subjective and model-estimated reward prediction: association with the feedback-related negativity (FRN) and reward prediction error in a reinforcement learning task.

    PubMed

    Ichikawa, Naho; Siegle, Greg J; Dombrovski, Alexandre; Ohira, Hideki

    2010-12-01

In this study, we examined whether the feedback-related negativity (FRN) is associated with both subjective and objective (model-estimated) reward prediction errors (RPE) per trial in a reinforcement learning task in healthy adults (n=25). The level of RPE was assessed by 1) subjective ratings per trial and 2) a computational model of reinforcement learning. The results showed that model-estimated RPE was highly correlated with subjective RPE (r=.82), and that the grand-averaged ERP waveforms for trials with high versus low model-estimated RPE differed significantly only in the time window of the FRN component (p<.05). Regardless of the time course of learning, the FRN was associated with both subjective and model-estimated RPEs within subjects (r=.47, p<.001; r=.40, p<.05) and between subjects (r=.33, p<.05; r=.41, p<.005), but only in the Learnable condition, where the internal reward prediction varied enough with the behavior-reward contingency.

  12. Error sensitivity to refinement: a criterion for optimal grid adaptation

    NASA Astrophysics Data System (ADS)

Luchini, Paolo; Giannetti, Flavio; Citro, Vincenzo

    2016-11-01

Most indicators used for automatic grid refinement are suboptimal, in the sense that they do not really minimize the global solution error. This paper concerns a new indicator, related to the sensitivity map of global stability problems, suitable for an optimal grid refinement that minimizes the global solution error. The new criterion is derived from the properties of the adjoint operator and provides a map of the sensitivity of the global error (or its estimate) to a local mesh refinement. Examples are presented for both a scalar partial differential equation and for the system of Navier-Stokes equations. In the latter case, we also present a grid-adaptation algorithm, based on the new estimator and on the FreeFem++ software, that improves the accuracy of the solution by almost two orders of magnitude by redistributing the nodes of the initial computational mesh.

  13. [The approaches to factors which cause medication error--from the analyses of many near-miss cases related to intravenous medication which nurses experienced].

    PubMed

    Kawamura, H

    2001-03-01

Given the complexity of the intravenous medication process, systematic thinking is essential to reduce medication errors. Two thousand eight hundred 'Hiyari-Hatto' (near-miss) cases were analyzed, and eight important factors that cause intravenous medication errors were identified. The systematic approach to each factor is summarized as follows. 1. Failed communication of information: illegible handwritten orders and inaccurate verbal orders and copying cause medication errors. Rules must be established to prevent miscommunication. 2. Error-prone design of hardware: look-alike packaging and labeling of drugs and the poor design of infusion pumps cause errors. The human-hardware interface should be improved through error-resistant design by manufacturers. 3. Similar patient names and simultaneously operating surgical procedures and interventions: this factor causes patient misidentification. Automated identification devices should be introduced into health care settings. 4. Interruption in the middle of tasks: medical and clerical work should be assigned efficiently. 5. Inaccurate mixing procedures and insufficient mixing space: mixing procedures must be standardized and the layout of the working space must be examined. 6. Time pressure: the mismatch between workload and staffing should be addressed by reconsidering the work to be done. 7. Lack of information about high-alert medications: the pharmacist should play a greater role in the medication process overall. 8. Poor knowledge and skill of recent graduates: training methods and tools to prevent medication errors must be developed.

  14. Scout trajectory error propagation computer program

    NASA Technical Reports Server (NTRS)

    Myler, T. R.

    1982-01-01

    Since 1969, flight experience has been used as the basis for predicting Scout orbital accuracy. The data used for calculating the accuracy consists of errors in the trajectory parameters (altitude, velocity, etc.) at stage burnout as observed on Scout flights. Approximately 50 sets of errors are used in Monte Carlo analysis to generate error statistics in the trajectory parameters. A covariance matrix is formed which may be propagated in time. The mechanization of this process resulted in computer program Scout Trajectory Error Propagation (STEP) and is described herein. Computer program STEP may be used in conjunction with the Statistical Orbital Analysis Routine to generate accuracy in the orbit parameters (apogee, perigee, inclination, etc.) based upon flight experience.
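The heart of such a program is forming a sample covariance from the flight-observed error sets and propagating it with a state transition matrix, P' = Φ P Φᵀ. A pure-Python sketch with a toy two-state example; the actual STEP matrices are mission-specific:

```python
def covariance(samples):
    """Sample covariance matrix from a list of error vectors."""
    n, d = len(samples), len(samples[0])
    means = [sum(s[k] for s in samples) / n for k in range(d)]
    return [[sum((s[i] - means[i]) * (s[j] - means[j]) for s in samples) / (n - 1)
             for j in range(d)] for i in range(d)]

def propagate(P, Phi):
    """Propagate a covariance matrix in time: P' = Phi * P * Phi^T."""
    d = len(P)
    PhiP = [[sum(Phi[i][k] * P[k][j] for k in range(d)) for j in range(d)]
            for i in range(d)]
    return [[sum(PhiP[i][k] * Phi[j][k] for k in range(d)) for j in range(d)]
            for i in range(d)]
```

With roughly 50 flight-derived error sets, the covariance captures the observed burnout dispersion, and propagation carries that dispersion forward to orbit-insertion conditions.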

  15. Characteristics and error estimation of stratospheric ozone and ozone-related species over Poker Flat (65° N, 147° W), Alaska observed by a ground-based FTIR spectrometer from 2001 to 2003

    NASA Astrophysics Data System (ADS)

    Kagawa, A.; Kasai, Y.; Jones, N. B.; Yamamori, M.; Seki, K.; Murcray, F.; Murayama, Y.; Mizutani, K.; Itabe, T.

    2007-07-01

    It is important to obtain the year-to-year trend of stratospheric minor species in the context of global changes. An important example is the trend in global ozone depletion. The purpose of this paper is to report the accuracy and precision of measurements of stratospheric chemical species that are made at our Poker Flat site in Alaska (65° N, 147° W). Since 1999, minor atmospheric molecules have been observed using a Fourier-Transform solar-absorption infrared Spectrometer (FTS) at Poker Flat. Vertical profiles of the abundances of ozone, HNO3, HCl, and HF for the period from 2001 to 2003 were retrieved from FTS spectra using Rodgers' formulation of the Optimal Estimation Method (OEM). The accuracy and precision of the retrievals were estimated by formal error analysis. Errors for the total column were estimated to be 5.3%, 3.4%, 5.9%, and 5.3% for ozone, HNO3, HCl, and HF, respectively. The ozone vertical profiles were in good agreement with profiles derived from collocated ozonesonde measurements that were smoothed with averaging kernel functions that had been obtained with the retrieval procedure used in the analysis of spectra from the ground-based FTS (gb-FTS). The O3, HCl, and HF columns that were retrieved from the FTS measurements were consistent with Earth Probe/Total Ozone Mapping Spectrometer (TOMS) and HALogen Occultation Experiment (HALOE) data over Alaska within the error limits of all the respective datasets. This is the first report from the Poker Flat FTS observation site on a number of stratospheric gas profiles including a comprehensive error analysis.

  16. Diagnostic errors in interactive telepathology.

    PubMed

    Stauch, G; Schweppe, K W; Kayser, K

    2000-01-01

    Telepathology (TP) as a service in pathology at a distance is now widely used. It is integrated in the daily workflow of numerous pathologists. Meanwhile, in Germany 15 departments of pathology are using the telepathology technique for frozen section service; however, a commonly recognised quality standard in diagnostic accuracy is still missing. In a first step, the working group Aurich uses a TP system for frozen section service in order to analyse the frequency and sources of errors in TP frozen section diagnoses, evaluating the quality of frozen section slides, the important components of image quality and their influence on diagnostic accuracy. The authors point to the necessity of an optimal training program for all participants in this service in order to reduce the risk of diagnostic errors. In addition, there is a need for optimal cooperation of all partners involved in the TP service.

  17. Low Frequency Error Analysis and Calibration for High-Resolution Optical Satellite's Uncontrolled Geometric Positioning

    NASA Astrophysics Data System (ADS)

    Wang, Mi; Fang, Chengcheng; Yang, Bo; Cheng, Yufeng

    2016-06-01

    The low frequency error is a key factor that affects the uncontrolled geometry processing accuracy of high-resolution optical images. To guarantee the geometric quality of the imagery, this paper presents an on-orbit calibration method for the low frequency error based on a geometric calibration field. Firstly, we introduce the overall flow of low frequency error on-orbit analysis and calibration, which includes optical axis angle variation detection for the star sensor, relative calibration among star sensors, multi-star sensor information fusion, and low frequency error model construction and verification. Secondly, we use the optical axis angle change detection method to analyze the law of low frequency error variation. Thirdly, we use relative calibration and information fusion among star sensors to unify the datum and output high-precision attitude. Finally, we construct the low frequency error model and optimally estimate its parameters based on the DEM/DOM of the geometric calibration field. To evaluate the performance of the proposed calibration method, real data from a satellite of this type is used. Test results demonstrate that the calibration model in this paper can well describe the law of the low frequency error variation. The uncontrolled geometric positioning accuracy of the high-resolution optical image in the WGS-84 Coordinate System is obviously improved after the step-wise calibration.

  18. Counting OCR errors in typeset text

    NASA Astrophysics Data System (ADS)

    Sandberg, Jonathan S.

    1995-03-01

    Frequently object recognition accuracy is a key component in the performance analysis of pattern matching systems. In the past three years, the results of numerous excellent and rigorous studies of OCR system typeset-character accuracy (henceforth OCR accuracy) have been published, encouraging performance comparisons between a variety of OCR products and technologies. These published figures are important; OCR vendor advertisements in the popular trade magazines lead readers to believe that published OCR accuracy figures affect market share in the lucrative OCR market. Curiously, a detailed review of many of these OCR error occurrence counting results reveals that they are not reproducible as published and they are not strictly comparable, due to larger variances in the counts than would be expected from sampling variance alone. Naturally, since OCR accuracy is based on the ratio of the number of OCR errors to the size of the text searched for errors, imprecise OCR error accounting leads to similar imprecision in OCR accuracy. Some published papers use informal, non-automatic, or intuitively correct OCR error accounting. Still other published results present OCR error accounting methods based on string matching algorithms such as dynamic programming using Levenshtein (edit) distance, but omit critical implementation details (such as the existence of suspect markers in the OCR generated output or the weights used in the dynamic programming minimization procedure). The problem with not specifically revealing the accounting method is that the numbers of errors found by different methods differ significantly. This paper identifies the basic accounting methods used to measure OCR errors in typeset text and offers an evaluation and comparison of the various accounting methods.
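    The dynamic-programming edit-distance accounting the abstract refers to can be sketched minimally; the sample strings are hypothetical, and a real OCR accounting would also need the suspect markers and substitution weights the paper discusses:

    ```python
    def levenshtein(ref, ocr):
        """Minimum number of insertions, deletions and substitutions
        turning the reference text into the OCR output (unit weights)."""
        m, n = len(ref), len(ocr)
        d = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(m + 1):
            d[i][0] = i
        for j in range(n + 1):
            d[0][j] = j
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                cost = 0 if ref[i - 1] == ocr[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,          # deletion
                              d[i][j - 1] + 1,          # insertion
                              d[i - 1][j - 1] + cost)   # substitution
        return d[m][n]

    errors = levenshtein("recognition", "recognltion")  # one substituted char
    accuracy = 1 - errors / len("recognition")
    ```

    Changing the three weights in the `min(...)` changes the error count, which is exactly the comparability problem the paper raises.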

  19. L2 Spelling Errors in Italian Children with Dyslexia.

    PubMed

    Palladino, Paola; Cismondo, Dhebora; Ferrari, Marcella; Ballagamba, Isabella; Cornoldi, Cesare

    2016-05-01

    The present study aimed to investigate L2 spelling skills in Italian children by administering an English word dictation task to 13 children with dyslexia (CD), 13 control children (comparable in age, gender, schooling and IQ) and a group of 10 children with an English learning difficulty, but no L1 learning disorder. Patterns of difficulties were examined for accuracy and type of errors in spelling dictated short and long words (i.e. disyllables and three syllables). Notably, CD were poor in spelling English words. Furthermore, their errors were mainly related to the phonological representation of words, as they made more 'phonologically' implausible errors than controls. In addition, CD errors were more frequent for short than long words. Conversely, the three groups did not differ in the number of plausible ('non-phonological') errors, that is, words that were incorrectly written, but whose reading could correspond to the dictated word via either Italian or English rules. Error analysis also showed syllable position differences in the spelling patterns of CD, children with an English learning difficulty and control children. Copyright © 2016 John Wiley & Sons, Ltd.

  20. Altimeter error sources at the 10-cm performance level

    NASA Technical Reports Server (NTRS)

    Martin, C. F.

    1977-01-01

    Error sources affecting the calibration and operational use of a 10 cm altimeter are examined to determine the magnitudes of current errors and the investigations necessary to reduce them to acceptable bounds. Errors considered include those affecting operational data pre-processing and those affecting altitude bias determination, with error budgets developed for both. The most significant error sources affecting pre-processing are bias calibration, propagation corrections for the ionosphere, and measurement noise. No ionospheric models are currently validated at the required 10-25% accuracy level. The optimum smoothing to reduce the effects of measurement noise is investigated and found to be on the order of one second, based on the TASC model of geoid undulations. The 10 cm calibrations are found to be feasible only through the use of altimeter passes at very high elevation relative to a tracking station that tracks very close to the time of the altimeter pass, such as a high-elevation pass across the island of Bermuda. By far the largest error source, based on the current state-of-the-art, is the location of the island tracking station relative to mean sea level in the surrounding ocean areas.

  1. Linear error analysis of slope-area discharge determinations

    USGS Publications Warehouse

    Kirby, W.H.

    1987-01-01

    The slope-area method can be used to calculate peak flood discharges when current-meter measurements are not possible. This calculation depends on several quantities, such as water-surface fall, that are subject to large measurement errors. Other critical quantities, such as Manning's n, are not even amenable to direct measurement but can only be estimated. Finally, scour and fill may cause gross discrepancies between the observed condition of the channel and the hydraulic conditions during the flood peak. The effects of these potential errors on the accuracy of the computed discharge have been estimated by statistical error analysis using a Taylor-series approximation of the discharge formula and the well-known formula for the variance of a sum of correlated random variates. The resultant error variance of the computed discharge is a weighted sum of covariances of the various observational errors. The weights depend on the hydraulic and geometric configuration of the channel. The mathematical analysis confirms the rule of thumb that relative errors in computed discharge increase rapidly when velocity heads exceed the water-surface fall, when the flow field is expanding and when lateral velocity variation (alpha) is large. It also confirms the extreme importance of accurately assessing the presence of scour or fill. © 1987.
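    The first-order (Taylor-series) variance propagation described here, a weighted sum of covariances of the observational errors, can be sketched as follows; the sensitivity weights and covariance values are hypothetical, not taken from the paper:

    ```python
    import numpy as np

    # For a computed discharge Q = f(x1, ..., xn), first-order error
    # propagation gives Var(Q) ≈ g^T C g, where g holds the partial
    # derivatives of f (the "weights") and C is the covariance matrix
    # of the observational errors.
    g = np.array([1.5, -0.8, 2.0])           # hypothetical sensitivity weights
    C = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])        # hypothetical error covariances

    var_Q = g @ C @ g                         # error variance of the discharge
    ```

    The off-diagonal entries of `C` are what distinguish this from the naive uncorrelated formula; correlated fall and roughness errors can either inflate or cancel in `var_Q` depending on their signs.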

  2. Optimal error intervals for properties of the quantum state

    NASA Astrophysics Data System (ADS)

    Li, Xikun; Shang, Jiangwei; Ng, Hui Khoon; Englert, Berthold-Georg

    2016-12-01

    Quantum state estimation aims at determining the quantum state from observed data. Estimating the full state can require considerable efforts, but one is often only interested in a few properties of the state, such as the fidelity with a target state, or the degree of correlation for a specified bipartite structure. Rather than first estimating the state, one can, and should, estimate those quantities of interest directly from the data. We propose the use of optimal error intervals as a meaningful way of stating the accuracy of the estimated property values. Optimal error intervals are analogs of the optimal error regions for state estimation [New J. Phys. 15, 123026 (2013), 10.1088/1367-2630/15/12/123026]. They are optimal in two ways: They have the largest likelihood for the observed data and the prechosen size, and they are the smallest for the prechosen probability of containing the true value. As in the state situation, such optimal error intervals admit a simple description in terms of the marginal likelihood for the data for the properties of interest. Here, we present the concept and construction of optimal error intervals, report on an iterative algorithm for reliable computation of the marginal likelihood (a quantity difficult to calculate reliably), explain how plausible intervals—a notion of evidence provided by the data—are related to our optimal error intervals, and illustrate our methods with single-qubit and two-qubit examples.

  3. Geolocation and Pointing Accuracy Analysis for the WindSat Sensor

    NASA Technical Reports Server (NTRS)

    Meissner, Thomas; Wentz, Frank J.; Purdy, William E.; Gaiser, Peter W.; Poe, Gene; Uliana, Enzo A.

    2006-01-01

    Geolocation and pointing accuracy analyses of the WindSat flight data are presented. The two topics were intertwined in the flight data analysis and will be addressed together. WindSat has no unusual geolocation requirements relative to other sensors, but its beam pointing knowledge accuracy is especially critical to support accurate polarimetric radiometry. Pointing accuracy was improved and verified using geolocation analysis in conjunction with scan bias analysis. Two methods were needed to properly identify and differentiate between data time tagging and pointing knowledge errors. Matchups comparing coastlines indicated in imagery data with their known geographic locations were used to identify geolocation errors. These coastline matchups showed possible pointing errors with ambiguities as to the true source of the errors. Scan bias analysis of U, the third Stokes parameter, and of vertical and horizontal polarizations provided measurement of pointing offsets, resolving ambiguities in the coastline matchup analysis. Several geolocation and pointing bias sources were incrementally eliminated, resulting in pointing knowledge and geolocation accuracy that met all design requirements.

  4. Research on controlling middle spatial frequency error of high gradient precise aspheric by pitch tool

    NASA Astrophysics Data System (ADS)

    Wang, Jia; Hou, Xi; Wan, Yongjian; Shi, Chunyan; Zhong, Xianyun

    2016-09-01

    Extreme optical fabrication projects such as EUV and X-ray optic systems, which are representative of today's advanced optical manufacturing technology level, have special requirements for the optical surface quality. In synchrotron radiation (SR) beamlines, mirrors of high shape accuracy are always used in grazing incidence. In nanolithography systems, middle spatial frequency errors always lead to small-angle scattering or flare that reduces the contrast of the image. The slope error is defined over a given horizontal length as the increase or decrease in form error at the end point relative to the starting point. The quality of reflective optical elements can be described by their deviation from ideal shape at different spatial frequencies. Usually one distinguishes between the figure error, the low spatial error part ranging from the aperture length to 1 mm frequencies, and the mid-high spatial error part from 1 mm to 1 μm and from 1 μm to some 10 nm spatial frequencies, respectively. Firstly, this paper discusses the relationship between the slope error and the middle spatial frequency error, both of which describe the optical surface error along the form profile. Then, experimental research is conducted on a high gradient precise aspheric surface with a pitch tool, with the aim of restraining the middle spatial frequency error.

  5. Improving Automatic English Writing Assessment Using Regression Trees and Error-Weighting

    NASA Astrophysics Data System (ADS)

    Lee, Kong-Joo; Kim, Jee-Eun

    The proposed automated scoring system for English writing tests provides an assessment result, including a score and diagnostic feedback, to test-takers without human effort. The system analyzes an input sentence and detects errors related to spelling, syntax and content similarity. The scoring model adopts a statistical approach, a regression tree. A scoring model in general calculates a score based on the count and the types of automatically detected errors. Accordingly, a system with higher accuracy in detecting errors raises the accuracy in scoring a test. The accuracy of the system, however, cannot be fully guaranteed for several reasons, such as parsing failure, incompleteness of knowledge bases, and the ambiguous nature of natural language. In this paper, we introduce an error-weighting technique, which is similar to the term-weighting widely used in information retrieval. The error-weighting technique is applied to judge the reliability of the errors detected by the system. The score calculated with the technique is proven to be more accurate than the score without it.
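    The error-weighting idea can be illustrated with a minimal sketch; the weight values, error types, and scoring scale below are hypothetical illustrations, not the system's actual parameters:

    ```python
    # Hypothetical reliability weights per error type, analogous to term
    # weights in information retrieval: error types the detector flags
    # reliably count more toward the penalty than noisy, unreliable ones.
    weights = {"spelling": 0.9, "syntax": 0.6, "content": 0.4}

    def weighted_penalty(error_counts):
        """Sum of detected-error counts, each scaled by its reliability weight."""
        return sum(weights[t] * n for t, n in error_counts.items())

    # Score on a hypothetical 10-point scale: reliable (spelling) errors
    # reduce the score more per occurrence than unreliable (content) ones.
    score = 10.0 - weighted_penalty({"spelling": 2, "syntax": 1, "content": 0})
    ```

    Down-weighting error types with a high false-positive rate is what lets the score degrade gracefully when, say, the parser fails.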

  6. Influence of the process-induced asymmetry on the accuracy of overlay measurements

    NASA Astrophysics Data System (ADS)

    Shapoval, Tetyana; Schulz, Bernd; Itzkovich, Tal; Durran, Sean; Haupt, Ronny; Cangiano, Agostino; Bringoltz, Barak; Ruhm, Matthias; Cotte, Eric; Seltmann, Rolf; Hertzsch, Tino; Hajaj, Eitan; Hartig, Carsten; Efraty, Boris; Fischer, Daniel

    2015-03-01

    In this paper we address three questions relevant to accuracy: 1. Which target design has the best performance and depicts the behavior of the actual device? 2. Which metrology signal characteristics could help to distinguish between the target asymmetry related overlay shift and the real process related shift? 3. How does uncompensated asymmetry of the reference layer target, generated during after-litho processes, affect the propagation of overlay error through different layers? We present the correlation between simulation data based on the optical properties of the measured stack and KLA-Tencor's Archer overlay measurements on a 28nm product through several critical layers for these accuracy aspects.

  7. Language comprehension errors: A further investigation

    NASA Astrophysics Data System (ADS)

    Clarkson, Philip C.

    1991-06-01

    Comprehension errors made when attempting mathematical word problems have been noted as one of the high frequency categories in error analysis. This error category has been assumed to be language based. The study reported here provides some support for the linkage of comprehension errors to measures of language competency. Further, there is evidence that the frequency of such errors is related to competency in both the mother tongue and the language of instruction for bilingual students.

  8. Context specificity of post-error and post-conflict cognitive control adjustments.

    PubMed

    Forster, Sarah E; Cho, Raymond Y

    2014-01-01

    There has been accumulating evidence that cognitive control can be adaptively regulated by monitoring for processing conflict as an index of online control demands. However, it is not yet known whether top-down control mechanisms respond to processing conflict in a manner specific to the operative task context or confer a more generalized benefit. While previous studies have examined the taskset-specificity of conflict adaptation effects, yielding inconsistent results, control-related performance adjustments following errors have been largely overlooked. This gap in the literature underscores recent debate as to whether post-error performance represents a strategic, control-mediated mechanism or a nonstrategic consequence of attentional orienting. In the present study, evidence of generalized control following both high conflict correct trials and errors was explored in a task-switching paradigm. Conflict adaptation effects were not found to generalize across tasksets, despite a shared response set. In contrast, post-error slowing effects were found to extend to the inactive taskset and were predictive of enhanced post-error accuracy. In addition, post-error performance adjustments were found to persist for several trials and across multiple task switches, a finding inconsistent with attentional orienting accounts of post-error slowing. These findings indicate that error-related control adjustments confer a generalized performance benefit and suggest dissociable mechanisms of post-conflict and post-error control.

  9. Twenty questions about student errors

    NASA Astrophysics Data System (ADS)

    Fisher, Kathleen M.; Lipson, Joseph Isaac

    Errors in science learning (errors in expression of organized, purposeful thought within the domain of science) provide a window through which glimpses of mental functioning can be obtained. Errors are valuable and normal occurrences in the process of learning science. A student can use his/her errors to develop a deeper understanding of a concept as long as the error can be recognized and appropriate, informative feedback can be obtained. A safe, non-threatening, and nonpunitive environment which encourages dialogue helps students to express their conceptions and to risk making errors. Pedagogical methods that systematically address common student errors produce significant gains in student learning. Just as the nature-nurture interaction is integral to the development of living things, so the individual-environment interaction is basic to thought processes. At a minimum, four systems interact: (1) the individual problem solver (who has a worldview, relatively stable cognitive characteristics, relatively malleable mental states and conditions, and aims or intentions), (2) the task to be performed (including its relative importance and nature), (3) the knowledge domain in which the task is contained, and (4) the environment (including orienting conditions and the social and physical context). Several basic assumptions underlie research on errors and alternative conceptions. Among these are: Knowledge and thought involve active, constructive processes; there are many ways to acquire, organize, store, retrieve, and think about a given concept or event; and understanding is achieved by successive approximations. Application of these ideas will require a fundamental change in how science is taught.

  10. ERROR ANALYSIS OF COMPOSITE SHOCK INTERACTION PROBLEMS.

    SciTech Connect

    Lee, T.; Mu, Y.; Zhao, M.; Glimm, J.; Li, X.; Ye, K.

    2004-07-26

    We propose statistical models of uncertainty and error in numerical solutions. To represent errors efficiently in shock physics simulations we propose a composition law. The law allows us to estimate errors in the solutions of composite problems in terms of the errors from simpler ones as discussed in a previous paper. In this paper, we conduct a detailed analysis of the errors. One of our goals is to understand the relative magnitude of the input uncertainty vs. the errors created within the numerical solution. In more detail, we wish to understand the contribution of each wave interaction to the errors observed at the end of the simulation.

  11. Applications and accuracy of the parallel diagonal dominant algorithm

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He

    1993-01-01

    The Parallel Diagonal Dominant (PDD) algorithm is a highly efficient, ideally scalable tridiagonal solver. In this paper, a detailed study of the PDD algorithm is given. First the PDD algorithm is introduced. Then the algorithm is extended to solve periodic tridiagonal systems. A variant, the reduced PDD algorithm, is also proposed. Accuracy analysis is provided for a class of tridiagonal systems, the symmetric and anti-symmetric Toeplitz tridiagonal systems. Implementation results show that the analysis gives a good bound on the relative error, and the algorithm is a good candidate for the emerging massively parallel machines.
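    For context, the serial baseline that PDD parallelizes is the classic tridiagonal (Thomas) solve, sketched below; the example system is a hypothetical diagonally dominant one, which is the regime in which PDD's truncation remains accurate. This is not the PDD algorithm itself, only the per-block elimination it builds on:

    ```python
    import numpy as np

    def thomas(a, b, c, d):
        """Serial tridiagonal solve: a sub-, b main-, c super-diagonal,
        d right-hand side (a[0] and c[-1] unused).  PDD instead solves
        independent sub-blocks in parallel and patches the interfaces."""
        n = len(b)
        cp, dp = np.empty(n), np.empty(n)
        cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
        for i in range(1, n):                      # forward elimination
            denom = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / denom if i < n - 1 else 0.0
            dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
        x = np.empty(n)
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):             # back substitution
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x

    # Hypothetical diagonally dominant system (|b| > |a| + |c| on each row).
    a = np.array([0.0, -1.0, -1.0, -1.0])
    b = np.array([4.0, 4.0, 4.0, 4.0])
    c = np.array([-1.0, -1.0, -1.0, 0.0])
    d = np.array([1.0, 1.0, 1.0, 1.0])
    x = thomas(a, b, c, d)
    ```

    The stronger the diagonal dominance, the faster off-diagonal influence decays across block boundaries, which is why the reduced PDD variant can drop interface terms with a bounded relative error.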

  12. Comparative evaluation of ultrasound scanner accuracy in distance measurement

    NASA Astrophysics Data System (ADS)

    Branca, F. P.; Sciuto, S. A.; Scorza, A.

    2012-10-01

    The aim of the present study is to develop and compare two different automatic methods for accuracy evaluation in ultrasound phantom measurements on B-mode images: both of them give as a result the relative error e between measured distances, performed by 14 brand new ultrasound medical scanners, and nominal distances, among nylon wires embedded in a reference test object. The first method is based on a least squares estimation, while the second one applies the mean value of the same distance evaluated at different locations in ultrasound image (same distance method). Results for both of them are proposed and explained.
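    The relative-error measure underlying both methods can be sketched minimally; the nominal wire spacings and measured values below are hypothetical, not the test object's actual geometry:

    ```python
    import numpy as np

    # Hypothetical nominal wire spacings (mm) and scanner-measured values.
    nominal  = np.array([10.0, 20.0, 40.0, 80.0])
    measured = np.array([10.2, 19.8, 40.5, 79.2])

    # Relative error per distance, as in the phantom comparison.
    e = (measured - nominal) / nominal

    # "Same distance" idea: average repeated measurements of one spacing
    # taken at different locations in the image before computing its error.
    repeats = np.array([10.2, 10.1, 10.3])
    e_mean = (repeats.mean() - 10.0) / 10.0
    ```

    The least-squares variant would instead fit measured against nominal distances and report the deviation of the fitted slope from unity.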

  13. Spatial variability in sensitivity of reference crop ET to accuracy of climate data in the Texas High Plains

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A detailed sensitivity analysis was conducted to determine the relative effects of measurement errors in climate data input parameters on the accuracy of calculated reference crop evapotranspiration (ET) using the ASCE-EWRI Standardized Reference ET Equation. Data for the period of 1995 to 2008, fro...

  14. Computational fluid dynamics analysis and experimental study of a low measurement error temperature sensor used in climate observation.

    PubMed

    Yang, Jie; Liu, Qingquan; Dai, Wei

    2017-02-01

    To improve the air temperature observation accuracy, a low measurement error temperature sensor is proposed. A computational fluid dynamics (CFD) method is implemented to obtain temperature errors under various environmental conditions. Then, a temperature error correction equation is obtained by fitting the CFD results using a genetic algorithm method. The low measurement error temperature sensor, a naturally ventilated radiation shield, a thermometer screen, and an aspirated temperature measurement platform are characterized in the same environment to conduct the intercomparison. The aspirated platform served as an air temperature reference. The mean temperature errors of the naturally ventilated radiation shield and the thermometer screen are 0.74 °C and 0.37 °C, respectively. In contrast, the mean temperature error of the low measurement error temperature sensor is 0.11 °C. The mean absolute error and the root mean square error between the corrected results and the measured results are 0.008 °C and 0.01 °C, respectively. The correction equation allows the temperature error of the low measurement error temperature sensor to be reduced by approximately 93.8%. The low measurement error temperature sensor proposed in this research may be helpful to provide a relatively accurate air temperature result.
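    The MAE and RMSE figures used to verify the correction equation can be computed as follows; the corrected and reference temperatures below are hypothetical stand-ins, not the aspirated-platform data:

    ```python
    import numpy as np

    # Hypothetical corrected sensor readings vs. aspirated reference (°C).
    corrected = np.array([20.01, 19.99, 20.02, 19.98])
    reference = np.array([20.00, 20.00, 20.01, 19.99])

    resid = corrected - reference
    mae  = np.abs(resid).mean()            # mean absolute error
    rmse = np.sqrt((resid ** 2).mean())    # root mean square error
    ```

    RMSE penalizes occasional large residuals more than MAE does, so reporting both (as the paper does) indicates whether the correction leaves outliers.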

  15. Computational fluid dynamics analysis and experimental study of a low measurement error temperature sensor used in climate observation

    NASA Astrophysics Data System (ADS)

    Yang, Jie; Liu, Qingquan; Dai, Wei

    2017-02-01

    To improve the air temperature observation accuracy, a low measurement error temperature sensor is proposed. A computational fluid dynamics (CFD) method is implemented to obtain temperature errors under various environmental conditions. Then, a temperature error correction equation is obtained by fitting the CFD results using a genetic algorithm method. The low measurement error temperature sensor, a naturally ventilated radiation shield, a thermometer screen, and an aspirated temperature measurement platform are characterized in the same environment to conduct the intercomparison. The aspirated platform served as an air temperature reference. The mean temperature errors of the naturally ventilated radiation shield and the thermometer screen are 0.74 °C and 0.37 °C, respectively. In contrast, the mean temperature error of the low measurement error temperature sensor is 0.11 °C. The mean absolute error and the root mean square error between the corrected results and the measured results are 0.008 °C and 0.01 °C, respectively. The correction equation allows the temperature error of the low measurement error temperature sensor to be reduced by approximately 93.8%. The low measurement error temperature sensor proposed in this research may be helpful to provide a relatively accurate air temperature result.

  16. Partially supervised P300 speller adaptation for eventual stimulus timing optimization: target confidence is superior to error-related potential score as an uncertain label

    NASA Astrophysics Data System (ADS)

    Zeyl, Timothy; Yin, Erwei; Keightley, Michelle; Chau, Tom

    2016-04-01

    Objective. Error-related potentials (ErrPs) have the potential to guide classifier adaptation in BCI spellers, for addressing non-stationary performance as well as for online optimization of system parameters, by providing imperfect or partial labels. However, the usefulness of ErrP-based labels for BCI adaptation has not been established in comparison to other partially supervised methods. Our objective is to make this comparison by retraining a two-step P300 speller on a subset of confident online trials using naïve labels taken from speller output, where confidence is determined either by (i) ErrP scores, (ii) posterior target scores derived from the P300 potential, or (iii) a hybrid of these scores. We further wish to evaluate the ability of partially supervised adaptation and retraining methods to adjust to a new stimulus-onset asynchrony (SOA), a necessary step towards online SOA optimization. Approach. Eleven consenting able-bodied adults attended three online spelling sessions on separate days with feedback in which SOAs were set at 160 ms (sessions 1 and 2) and 80 ms (session 3). A post hoc offline analysis and a simulated online analysis were performed on sessions two and three to compare multiple adaptation methods. Area under the curve (AUC) and symbols spelled per minute (SPM) were the primary outcome measures. Main results. Retraining using supervised labels confirmed improvements of 0.9 percentage points (session 2, p < 0.01) and 1.9 percentage points (session 3, p < 0.05) in AUC using same-day training data over using data from a previous day, which supports classifier adaptation in general. Significance. Using posterior target score alone as a confidence measure resulted in the highest SPM of the partially supervised methods, indicating that ErrPs are not necessary to boost the performance of partially supervised adaptive classification. Partial supervision significantly improved SPM at a novel SOA, showing promise for eventual online SOA

  17. Accuracy Assessment and Correction of Vaisala RS92 Radiosonde Water Vapor Measurements

    NASA Technical Reports Server (NTRS)

    Whiteman, David N.; Miloshevich, Larry M.; Vomel, Holger; Leblanc, Thierry

    2008-01-01

    Relative humidity (RH) measurements from Vaisala RS92 radiosondes are widely used in both research and operational applications, although the measurement accuracy is not well characterized as a function of its known dependences on height, RH, and time of day (or solar altitude angle). This study characterizes RS92 mean bias error as a function of its dependences by comparing simultaneous measurements from RS92 radiosondes and from three reference instruments of known accuracy. The cryogenic frostpoint hygrometer (CFH) gives the RS92 accuracy above the 700 mb level; the ARM microwave radiometer gives the RS92 accuracy in the lower troposphere; and the ARM SurTHref system gives the RS92 accuracy at the surface using 6 RH probes with NIST-traceable calibrations. These RS92 assessments are combined using the principle of Consensus Referencing to yield a detailed estimate of RS92 accuracy from t